% Source: https://arxiv.org/abs/1710.01845
\title{Exponential convergence rate of ruin probabilities for level-dependent L\'evy-driven risk processes}
\begin{abstract}
We explicitly find the rate of exponential long-term convergence for the ruin probability in a level-dependent L\'evy-driven risk model, as time goes to infinity. Siegmund duality allows us to reduce the problem to the long-term convergence of a reflected jump-diffusion to its stationary distribution, which is handled via Lyapunov functions.
\end{abstract}
\section{Introduction}
A non-life insurance company holds at time $t=0$ an initial capital $u = X(0)\geq0$, collects premiums at a rate $p(x)>0$ depending on the current level $X(t)=x$ of the capital, and pays compensations from time to time (when claims are filed). The aggregate size of claims up to time $t>0$ is modeled by a compound Poisson process $(L(t)\text{ , }t\geq0)$. That is, the number of claims is governed by a homogeneous Poisson process of intensity $\beta$, independent of the claim sizes. The claim sizes, in turn, form a sequence $U_1,U_2,\ldots$ of i.i.d. nonnegative random variables with cumulative distribution function $B(\cdot)$. The net worth of the insurance company is then given by a continuous-time stochastic process $X = (X(t),\, t \ge 0)$, with
\begin{equation}\label{eq:CP-classic}
X(t) = u+\int_0^{t}p(X(s))\text{d}s-\sum_{k=1}^{N(t)}U_k=u+\int_0^{t}p(X(s))\,\text{d}s-L(t),\ t \ge 0.
\end{equation}
Examples of such a level-dependent premium rate include an insurance company lowering the premium rate from $p_1$ to $p_2$ when the reserves reach a certain threshold, or incorporating a constant interest force: $p(x)=p+ix$. In this work, a more general risk model is considered. The surplus \eqref{eq:CP-classic} is perturbed by a Brownian motion $\{W(t)\text{ , }t\geq0\}$, multiplied by a diffusion parameter $\sigma$, to account for the fluctuations around the premium rate. This diffusion parameter may also depend on $X(t)$. We further let the accumulated liability $L(t)$ be governed by a pure jump nondecreasing L\'evy process, starting from $L(0) = 0$. The financial reserves of the insurance company evolve according to the following dynamics:
\begin{equation}
\label{eq:LevelDependentLevyDrivenRiskProcess}
\text{d}X(t)= p(X(t))\,\text{d}t+\sigma(X(t))\,\text{d}W(t)-\text{d}L(t),\quad X(0) = u.
\end{equation}
In risk theory, one of the main challenges is the evaluation of ruin probabilities. The probability of ultimate ruin is the probability that the reserves ever drop below zero:
\begin{equation}
\label{eq:ruin-infinite-horizon}
\psi(u) = \mathbb P\bigl(\inf\limits_{t \ge 0}X(t) \le 0\bigr).
\end{equation}
We stress dependence of $\psi$ on the initial capital $u$. The probability of ruin by time $T$ is defined as
\begin{equation}
\label{eq:ruin-finite-horizon}
\psi(u, T) := \mathbb P\bigl(\inf\limits_{0 \le t \le T}X(t) \le 0\bigr).
\end{equation}
We often refer to $\psi(u)$ and $\psi(u,T)$ as ruin probabilities for infinite and finite time horizon, respectively. For a comprehensive overview on risk theory and ruin probabilities, see the book~\cite{AsAl10}.
We study the rate of exponential convergence of the finite-time horizon ruin probability toward its infinite-time counterpart. The goal of this article is to provide an explicit estimate for this rate: to find constants $C, k > 0$ such that
\begin{equation}
\label{eq:exp-convergence}
0 \le \psi(u) - \psi(u, T) \le Ce^{-kT},\ \ \mbox{for all}\ \ T, u \ge 0.
\end{equation}
This is achieved via a duality argument. For the original model~\eqref{eq:CP-classic}, define the {\it storage process} $Y=\{Y(t)\text{ , }t\geq0\}$ as follows:
\begin{equation}\label{eq:StorageProcess}
Y(t)=L(t)-\int_{0}^{t}p(Y(s))\,\mathrm{d} s.
\end{equation}
We assume that $p(y) = 0$ for $y < 0$. This makes zero a reflecting barrier. This is essentially a time-reversed version of the risk model \eqref{eq:CP-classic}, reflected at $0$. For the general model~\eqref{eq:LevelDependentLevyDrivenRiskProcess} perturbed by Brownian motion, the dual process is a reflected jump-diffusion on the positive half-line. As $t \to \infty$, $Y(t)$ weakly converges to some distribution $Y(\infty)$. The crucial observation is: For $T > 0$ and $u \ge 0$,
$$
\mathbb P(Y(T) \ge u) = \psi(u, T),\quad \mathbb P(Y(\infty) \ge u) = \psi(u).
$$
This is a particular case of Siegmund duality, see Siegmund \cite{Siegmund}. This method was first employed in \cite{Levy}, for a similar duality between absorbed and reflected Brownian motion. It has become a standard tool in risk theory since the seminal paper of Prabhu \cite{Pa61}, see also \cite[Chapter III, Section 2]{AsAl10}. The problem~\eqref{eq:exp-convergence} therefore reduces to the study of the convergence of $Y(t)$ toward $Y(\infty)$ as $t \to \infty$:
$$
0 \le \mathbb{P}(Y(\infty)\ge u) - \mathbb P(Y(T) \ge u) \le Ce^{-kT}.
$$
A \textit{stochastically ordered} real-valued Markov process $Y = \{Y(t)\text{ , }t\geq0\}$ is such that, for all $y_1 \ge y_2$, we can couple two copies $Y_1(t)$ and $Y_2(t)$ of $Y(t)$ starting from $Y_1(0) = y_1$ and $Y_2(0) = y_2$, in such a way that $Y_1(t) \ge Y_2(t)$ a.s. for all $t \ge 0$. A {\it Lyapunov function} for a Markov process with generator $\mathcal L$ is, roughly speaking, a function $V \ge 1$ such that $\mathcal L V(x) \le -cV(x)$ for some constant $c > 0$, for all $x$ outside of a compact set. Then we can combine this coupling method with a Lyapunov function to get a simple, explicit, and in some cases, sharp estimate for the rate $k$. This method was first applied in Lund and Tweedie \cite{LT1996} for discrete-time Markov chains, and in Lund et al. \cite{LMT1996} for continuous-time Markov processes. A direct application of their results yields the rate of convergence for the storage process defined in \eqref{eq:StorageProcess} and the level-dependent compound Poisson risk model \eqref{eq:CP-classic}. However, the dual model associated to the risk process~\eqref{eq:LevelDependentLevyDrivenRiskProcess} is a more general process: This is a reflected jump-diffusion on the positive half-line.
\smallskip
The same method as in Lund et al. \cite{LMT1996} has been refined in a recent paper by Sarantsev \cite{MyOwn12} and applied to reflected jump-diffusions on the half line. The jump part is not a general L\'evy process, but rather a state-dependent compound Poisson process, which makes a.s. finitely many jumps in finite time. In a recent paper \cite{MyOwn16}, it was applied to {\it Walsh diffusions} (processes which move along the rays emanating from the origin in $\mathbb{R}^d$ as one-dimensional diffusions; as they hit the origin, they choose a new ray randomly). Without attempting to give an exhaustive survey, let us mention classic papers \cite{DMT1995, MT1993a, MT1993b} which use Lyapunov functions (without stochastic ordering) to prove the very fact of exponential long-term convergence, and a related paper of Sarantsev \cite{MyOwn10}. However, to estimate the rate $k$ explicitly is a harder problem. Some partial results in this direction are provided in the papers~\cite{BCG2008, Davies, Explicit, RR1996, RT1999, RT2000}.
\smallskip
In this paper, we combine these two methods, Lyapunov functions and stochastic ordering, to find the rate of convergence of the process $Y$, which is dual to the original process $X$ from~\eqref{eq:LevelDependentLevyDrivenRiskProcess}. This process $Y$, as noted above, is a reflected jump-diffusion on the half-line. We apply the same method developed in \cite{LMT1996, MyOwn12}. In the general case, $Y$ can have infinitely many jumps in finite time, or no diffusion component at all, as in the level-dependent compound Poisson risk model from~\eqref{eq:CP-classic}. Therefore, we need to adjust the argument from \cite{MyOwn12}. Our method only applies in the case of light-tailed claim sizes. Asmussen and Teugels \cite{AsTe96} studied the convergence of ruin probabilities in the compound Poisson risk model with subexponentially distributed claim sizes, and showed that the convergence takes place at a subexponential rate.
\smallskip
The paper is organized as follows. In Section 2, we define assumptions on $p$, $\sigma$, and the L\'evy process $L$. We also introduce the concept of Siegmund duality to reduce the problem to the convergence rate of a reflected jump-diffusion to its stationary distribution. Our main results are stated in Section 3: Theorem~\ref{thm:main-1} and Corollary \ref{cor:main-1} provide an estimate for the exponential rate of convergence. Section \ref{sec:ComputationConvergenceRate} gives example computations of the rate $k$. The proof of Theorem~\ref{thm:main-1} is carried out in Section 5. Proofs of some technical lemmata are postponed until the Appendix.
\section{Definitions and Siegmund duality}
First, let us impose assumptions on our model~\eqref{eq:LevelDependentLevyDrivenRiskProcess}. Recall that the wealth of the insurance company is modeled by the right-continuous process with left limits $X = (X(t),\, t \ge 0)$, governed by the following integral equation:
$$
X(t) = u + \int_0^tp(X(s))\,\mathrm{d} s + \int_0^t\sigma(X(s))\,\mathrm{d} W(s) - L(t),
$$
or, equivalently, by the stochastic differential equation (SDE) with initial condition $X(0) = u$, given by \eqref{eq:LevelDependentLevyDrivenRiskProcess}. We say that $X$ is {\it driven} by the Brownian motion $W$ and L\'evy process $L$.
A function $f : \mathbb{R} \to \mathbb{R}$, or $f : \mathbb{R}_+ \to \mathbb{R}$, is {\it Lipschitz continuous} if there exists a constant $K$ such that $|f(x) - f(y)| \le K|x-y|$ for all $x$ and $y$.
\begin{assumption}\label{as:1}
The function $p : \mathbb{R}_+ \to \mathbb{R}$ is Lipschitz. The function $\sigma : \mathbb{R}_+ \to \mathbb{R}_{+}$ is bounded, and continuously differentiable with Lipschitz continuous derivative $\sigma'$.
\label{asmp:Lipschitz}
\end{assumption}
\begin{assumption}
The process $L$ is a pure jump subordinator, that is, a L\'evy process (stationary independent increments) with $L(0) = 0$, and with a.s. nondecreasing trajectories, which are right continuous with left limits. The process $W$ is a standard Brownian motion, independent of $L$.
\label{asmp:Levy}
\end{assumption}
Assumption \ref{asmp:Lipschitz} is not too restrictive, as it allows us to consider classical risk processes such as: (a) the compound Poisson risk process, when $p(x)=p$ and $\sigma(x)=0$; (b) the compound Poisson risk process under constant interest force, when $p(x)=p+ix$ and $\sigma(x)=0$. However, a premium rate that switches regimes when the surplus hits some target is not covered.
\smallskip
Assumption \ref{asmp:Levy} allows the study of the compound Poisson risk process perturbed by a diffusion when $p(x)=p$, and $\sigma(x)=\sigma$, extensively discussed in the paper by Dufresne and Gerber \cite{DuGe91}, as well as the L\'evy-driven risk process defined for example in Morales and Schoutens \cite{MoSc03}. It is known from the standard theory, see for example \cite[Section 6.2]{KS1998}, that the {\it L\'evy measure} of this process is a measure $\mu$ on $\mathbb{R}_+$ which satisfies
\begin{equation}
\label{eq:classic-mean}
\int_0^{\infty}(1\wedge x)\,\mu(\mathrm{d} x) < \infty.
\end{equation}
From Assumption~\ref{asmp:Levy}, we have:
$$
\mathbb E e^{-\lambda L(t)} = \exp\left(t\kappa(-\lambda)\right),\ \ \mbox{for every}\ \ t, \lambda \ge 0,
$$
where $\kappa(\lambda)$ is the {\it L\'evy exponent:}
\begin{equation}
\label{eq:levy-exp}
\kappa(\lambda) := \int_0^{\infty}\left[e^{\lambda x} - 1\right]\mu(\mathrm{d} x),\ \lambda \in \mathbb{R}.
\end{equation}
Under Assumptions~\ref{asmp:Lipschitz} and~\ref{asmp:Levy}, $L$ is a Feller continuous strong Markov process, with generator
\begin{equation}
\label{eq:explicit-generator}
\mathcal N f(x) = \int_0^{\infty}\left[f(x+y) - f(x)\right]\,\mu(\mathrm{d} y),
\end{equation}
for $f \in C^2(\mathbb{R})$ with compact support. For our purposes, we impose an additional assumption.
\begin{assumption} The measure $\mu$ has finite exponential moment: for some $\lambda_0 > 0$, we have
\begin{equation}
\label{eq:finite-exp}
\int_1^{\infty}e^{\lambda_0 x}\,\mu(\mathrm{d} x) < \infty.
\end{equation}
\label{asmp:finite-exp}
\end{assumption}
\begin{remark}
The existence of exponential moments for the jump size distribution prevents us from considering heavy-tailed claim size distributions as in Asmussen and Teugels \cite{AsTe96}.
\end{remark}
Under Assumption~\ref{asmp:finite-exp}, we can combine~\eqref{eq:classic-mean} and~\eqref{eq:finite-exp} to get:
$$
\kappa(\lambda) < \infty \ \ \mbox{for}\ \ \lambda \in [0, \lambda_0).
$$
Then we can extend the formula~\eqref{eq:explicit-generator} for functions $f \in C^2(\mathbb{R})$ which satisfy
\begin{equation}
\label{eq:exp-bdd}
\sup\limits_{x \ge 0}e^{-\lambda x}|f(x)| < \infty\ \mbox{for some}\ \lambda \in (0, \lambda_0).
\end{equation}
The proof of the following technical lemma is postponed to the Appendix \ref{appendix:ProofLemma1}.
\begin{lemma} Under Assumptions~\ref{asmp:Levy} and~\ref{asmp:finite-exp}, the following quantity is finite:
\label{lemma:tech}
\begin{equation}
\label{eq:Mean}
m(\mu) := \int_0^{\infty}x\,\mu(\mathrm{d} x) < \infty.
\end{equation}
\end{lemma}
\begin{ex}
If $\{L(t),\, t \ge 0\}$ is a compound Poisson process with jump intensity $\beta$ and distribution $B$ for each jump, then the L\'evy measure is given by $\mu(\cdot) = \beta B(\cdot)$.
\end{ex}
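In the setting of this example, \eqref{eq:levy-exp} and \eqref{eq:Mean} take the familiar form
$$
\kappa(\lambda) = \beta\int_0^{\infty}\left(e^{\lambda x} - 1\right)B(\mathrm{d} x) = \beta\left[\widehat{B}(\lambda) - 1\right],
\qquad
m(\mu) = \beta\int_0^{\infty}x\,B(\mathrm{d} x) = \beta\,\mathbb E(U_1),
$$
where $\widehat{B}(\lambda) := \mathbb E\bigl(e^{\lambda U_1}\bigr)$ is the moment generating function of the claim size distribution; these expressions are used in Section~\ref{sec:ComputationConvergenceRate}.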
The following lemma can be proved by a classic argument, a version of which can be found in any textbook on stochastic analysis, see for example \cite[Section 5.2]{KS1998}. For the sake of completeness, we give the proof in the Appendix \ref{appendix:ProofLemma2}.
\begin{lemma}\label{lem:ExistenceXProcess} Under Assumptions~\ref{asmp:Lipschitz} and~\ref{asmp:Levy}, for every initial condition $X(0) = u$ there exists (in the strong sense, that is, on a given probability space) a pathwise unique version of~\eqref{eq:LevelDependentLevyDrivenRiskProcess}, driven by the given Brownian motion $W$ and L\'evy process $L$. This is a Markov process, with generator
\begin{equation}
\label{eq:generator-absorbed}
\mathcal L f(x) := p(x)f'(x) + \frac12\sigma^2(x)f''(x) + \int_0^{\infty}[f(x-y) - f(x)]\,\mu(\mathrm{d} y)
\end{equation}
for $f \in C^2(\mathbb{R})$ with compact support. Under Assumption~\ref{asmp:finite-exp}, this expression~\eqref{eq:generator-absorbed} is also valid for functions $f \in C^2(\mathbb{R})$ satisfying~\eqref{eq:exp-bdd} with $f(-x)$ instead of $f(x)$.
\end{lemma}
Define the {\it ruin probability} in finite and infinite time horizons as in~\eqref{eq:ruin-finite-horizon} and~\eqref{eq:ruin-infinite-horizon}. We are interested in finding an estimate of the form
$$
0 \le \psi(u) - \psi(u, T) \le Ce^{-kT},\ u, T \ge 0,
$$
for some constants $C,\,k > 0$. Recall the concept of {\it Siegmund duality}.
\begin{definition} Two Markov processes $X = (X(t),\, t \ge 0)$ and $Y = (Y(t),\, t \ge 0)$ on $\mathbb{R}_+$ are called {\it Siegmund dual} if for all $t, x, y \ge 0$,
$$
\mathbb P_x(X(t) \ge y) = \mathbb P_y(Y(t) \le x).
$$
Here, the indices $x$ and $y$ refer to initial conditions $X(0) = x$ and $Y(0) = y$.
\end{definition}
Siegmund duality allows us to reduce our problem about ruin probabilities to another one: the long-term convergence to the stationary distribution of a reflected jump-diffusion $Y=\{Y(t)\text{ , }t\geq0\}$. Take some functions $p_{\ast},\, \sigma_{\ast} :\mathbb{R}_+\to\mathbb{R}$.
\begin{definition} Consider an $\mathbb{R}_+$-valued process $Y = (Y(t),\, t \ge 0)$ with right-continuous trajectories with left limits, which satisfies the following SDE:
\begin{equation}
\label{eq:RSDE}
Y(t) = Y(0) + \int_0^tp_{\ast}(Y(s))\,\mathrm{d} s + \int_0^t\sigma_{\ast}(Y(s))\,\mathrm{d} W(s) + L(t) + R(t),
\end{equation}
where $R = (R(t),\, t \ge 0)$ is a nondecreasing right-continuous process with left limits, which starts from $R(0) = 0$ and can increase only when $Y(t) = 0$. Then the process $Y$ is called a {\it reflected jump-diffusion on the half-line}, with {\it drift coefficient} $p_{\ast}$, {\it diffusion coefficient} $\sigma_{\ast}$, and {\it driving jump process} $L$ with {\it L\'evy measure} $\mu$.
\end{definition}
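For concreteness, the following minimal simulation sketch (not part of the analysis in this paper) illustrates the dynamics~\eqref{eq:RSDE} under the simplifying assumption that $L$ is a compound Poisson process, i.e.\ $\mu(\mathbb{R}_+) < \infty$; the reflection at $0$ is approximated crudely by truncation within an Euler scheme, and all parameter values are purely illustrative.
\begin{verbatim}
# Hypothetical sketch: Euler scheme for a reflected jump-diffusion on [0, infinity)
# with compound Poisson jumps; the reflection at 0 is approximated by truncation.
import numpy as np

def simulate_reflected_jump_diffusion(p_star, sigma_star, jump_rate, jump_sampler,
                                      y0=0.0, T=10.0, n_steps=10_000, seed=None):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    y = np.empty(n_steps + 1)
    y[0] = y0
    for i in range(n_steps):
        drift = p_star(y[i]) * dt
        diffusion = sigma_star(y[i]) * np.sqrt(dt) * rng.standard_normal()
        n_jumps = rng.poisson(jump_rate * dt)            # jumps of L on (t, t+dt]
        jumps = jump_sampler(rng, n_jumps).sum() if n_jumps else 0.0
        y[i + 1] = max(y[i] + drift + diffusion + jumps, 0.0)  # approximate reflection
    return y

# Illustrative parameters: dual of the compound Poisson model perturbed by a
# diffusion with p(x) = p, sigma(x) = sigma, so p_*(x) = -p and sigma_*(x) = sigma.
path = simulate_reflected_jump_diffusion(
    p_star=lambda x: -2.2, sigma_star=lambda x: 1.0,
    jump_rate=1.0, jump_sampler=lambda rng, n: rng.gamma(2.0, 1.0, size=n), seed=42)
\end{verbatim}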
The following result is the counterpart of Lemma \ref{lem:ExistenceXProcess} for the process $Y=\{Y(t)\text{ , }t\geq0\}$.
\begin{lemma}\label{lemma:ExistenceRProcess}
If $p_{\ast}$ and $\sigma_{\ast}$ are Lipschitz, then for every initial condition $Y(0) = y$, there exists in the strong sense a pathwise unique version of~\eqref{eq:RSDE}. This is a Markov process with generator $\mathcal A$, given by the formula
\begin{equation}
\label{eq:A}
\mathcal A f(x) = p_{\ast}(x)f'(x) + \frac12\sigma_{\ast}^{2}(x)f''(x) + \int_0^{\infty}\left[f(x + y) - f(x)\right]\,\mu(\mathrm{d} y),
\end{equation}
for $f \in C^2(\mathbb{R}_+)$ with compact support and with $f'(0) = 0$.
\label{lem:ExistenceRProcess}
\end{lemma}
The proof, which is similar to that of Lemma~\ref{lem:ExistenceXProcess}, is provided in the Appendix \ref{appendix:ProofLemma3}.
\smallskip
It was shown in \cite{Siegmund} that a Markov process on $\mathbb{R}_+$ has a (Siegmund) dual process if and only if it is stochastically ordered.
\begin{thm} A Markov process $X$, corresponding to a transition semigroup $(P^t)_{t\geq0}$, is {\it stochastically ordered}, if and only if one of the following two conditions holds:
\smallskip
(a) the semigroup $(P^t)_{t \ge 0}$ maps bounded nondecreasing functions into bounded nondecreasing functions; that is, for every bounded nondecreasing $f : \mathbb{R}_+ \to \mathbb{R}$ and every $t \ge 0$, the function $P^tf$ is also bounded and nondecreasing;
\smallskip
(b) for every $t \ge 0$ and $c \ge 0$, the function $x \mapsto \mathbb P_x(X(t) \ge c)$ is nondecreasing in $x$;
\smallskip
\label{thm:stoch-order}
\end{thm}
\begin{proof}
This equivalence follows from \cite{Kamae}.
\end{proof}
Now, consider the process~\eqref{eq:LevelDependentLevyDrivenRiskProcess}, stopped at hitting $0$. The following result is well known in the literature; however, in the Appendix~\ref{appendix:ProofStochOrder} we provide a simple proof for the sake of completeness.
\begin{lemma} The process~\eqref{eq:LevelDependentLevyDrivenRiskProcess} is stochastically ordered.
\label{lemma:stoch-ordered-original}
\end{lemma}
It was first shown in \cite[p.210]{Levy} that absorbed and reflected Brownian motions on $\mathbb{R}_+$ are Siegmund dual. Since then, several more papers dealt with duality for more general processes, including jump-diffusions in \cite{Kol}. In particular, we have the following result.
\begin{lemma} Under Assumptions~\ref{asmp:Lipschitz} and~\ref{asmp:Levy}, the Siegmund dual process for the jump-diffusion~\eqref{eq:LevelDependentLevyDrivenRiskProcess}, absorbed at zero, is the reflected jump-diffusion on $\mathbb{R}_+$ from~\eqref{eq:RSDE}, starting at $Y(0)=0$, with drift and diffusion coefficients
\begin{equation}
\label{eq:new-drift}
p_{\ast}(x) = -p(x) - \sigma(x)\sigma'(x),
\end{equation}
\begin{equation}
\label{eq:new-difusion}
\sigma_{\ast}(x) = \sigma(x).
\end{equation}
\end{lemma}
\begin{proof}
The result is a direct application of \cite[Proposition 3.1]{Kol}
\end{proof}
We have shown that, under Assumptions \ref{asmp:Lipschitz}, \ref{asmp:Levy} and \ref{asmp:finite-exp}, the wealth process is a stochastically ordered Markov process whose Siegmund dual is a reflected jump-diffusion. Therefore, the rate of convergence for the ruin probabilities is determined by studying that of the associated dual process $Y=\{Y(t)\text{ , }t\geq0\}$.
\section{Main results}
A common method to prove an exponential rate of convergence toward the stationary distribution is to construct a Lyapunov function.
\begin{definition}
Let $V:\mathbb{R}_+\to[1,\infty)$ be a continuous function and assume there exist $b,k,z>0$ such that
\begin{equation}\label{eq:LyapunovFunction}
\mathcal A V(x)\leq-kV(x)+b{1}_{[0,z]}(x),\text{ }x\in\mathbb{R}_+.
\end{equation}
Then $V$ is called a \textit{Lyapunov function}.
\end{definition}
We shall build a Lyapunov function for the Markov process $Y$ in the form $V_{\lambda}(x)=e^{\lambda x}$, for $\lambda>0$. This choice is well suited to the rate-of-convergence problem for reflected jump-diffusion processes, since the generator acts on such functions in a simple way. Under Assumption~\ref{asmp:finite-exp}, consider the function
$$
\phi(\lambda, x) := p_*(x)\lambda + \frac12\sigma^2(x)\lambda^2 + \kappa(\lambda),\ \lambda \in[0,\lambda_0),\ x \in \mathbb{R}.
$$
For a signed measure $\nu$ on $\mathbb{R}_+$ and a function $f:\mathbb{R}_+\to\mathbb{R}$, we write $(\nu, f):=\int f\,\mathrm{d}\nu$. Additionally, for a function $f:\mathbb{R}_+\to\left[1,+\infty\right)$, define the following norm: $\norm{\nu}_f:=\sup_{|g| \le f}|(\nu,g)|$. If $f\equiv1$, then $\norm{\cdot}_f$ is the total variation norm. Define
\begin{equation}
\label{eq:Phi-definition}
\Phi(\lambda)=\inf\limits_{x \ge 0}(-\phi(\lambda, x)) = -\sup\limits_{x \ge 0}\phi(\lambda, x).
\end{equation}
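Indeed, disregarding the reflection at $0$ (the boundary condition $f'(0) = 0$ in~\eqref{eq:A} is handled in Section~\ref{sec:ProofMainThm} via a truncation of $V_{\lambda}$), a direct computation with the generator~\eqref{eq:A} gives
$$
\mathcal A V_{\lambda}(x) = \Bigl[p_{\ast}(x)\lambda + \frac12\sigma_{\ast}^2(x)\lambda^2 + \int_0^{\infty}\bigl(e^{\lambda y} - 1\bigr)\,\mu(\mathrm{d} y)\Bigr]e^{\lambda x} = \phi(\lambda, x)V_{\lambda}(x) \le -\Phi(\lambda)V_{\lambda}(x),
$$
where we recall that $\sigma_{\ast} = \sigma$ from~\eqref{eq:new-difusion}; thus, when $\Phi(\lambda) > 0$, the function $V_{\lambda}$ satisfies a drift condition of the form~\eqref{eq:LyapunovFunction}.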
\begin{thm}
\label{thm:main-1}
Under Assumptions~\ref{asmp:Lipschitz},~\ref{asmp:Levy},~\ref{asmp:finite-exp}, suppose
\begin{equation}
\label{eq:ergodic}
\Phi(\lambda) > 0\ \mbox{for some}\ \lambda \in (0, \lambda_0).
\end{equation}
Then there exists a unique stationary distribution $\pi$ for the reflected jump-diffusion $Y$. Take a $\lambda \in (0, \lambda_0)$ such that $k = \Phi(\lambda) > 0$. This stationary distribution satisfies $(\pi, V_{\lambda})< \infty$. The transition function $Q^t(x, \cdot)$ of the process $Y$ satisfies
\begin{equation}
\label{eq:uniform-ergodicity}
\norm{Q^t(x, \cdot) - \pi(\cdot)}_{V_\lambda} \le \left[V_{\lambda}(x) + (\pi,V_{\lambda})\right]e^{-kt}.
\end{equation}
\end{thm}
The proof of Theorem~\ref{thm:main-1} is postponed until Section~\ref{sec:ProofMainThm}. The central result of this paper is a corollary of Theorem \ref{thm:main-1}, which is a direct consequence of the duality link established between the processes $X$ and $Y$.
\begin{corollary} Under Assumptions~\ref{asmp:Lipschitz},~\ref{asmp:Levy}, ~\ref{asmp:finite-exp}, and the condition~\eqref{eq:ergodic},
\begin{equation}\label{eq:CorConvergenceRateRuinProbabilities}
0 \le \psi(u) - \psi(u, T) \le \left[1 + (\pi, V_{\lambda})\right]e^{-kT},\quad u, T \ge 0.
\end{equation}
\label{cor:main-1}
\end{corollary}
\begin{proof}
By virtue of Siegmund duality, we have
\begin{equation}\label{eq:DualityInCor}
\psi(u) - \psi(u, T)=\mathbb P(Y(\infty) \ge u) - \mathbb P(Y(T) \ge u),
\end{equation}
where $Y=\left(Y(t)\text{ , }t\geq0\right)$ is a reflected jump-diffusion on $\mathbb{R}_{+}$, starting at $Y(0)=0$, and $Y(\infty)$ is a random variable distributed as $\pi$. We may rewrite \eqref{eq:DualityInCor} as
\begin{equation*}
\psi(u) - \psi(u, T)=\pi\left(\left[u,\infty\right)\right) - Q^{T}(0,\left[u,\infty\right)).
\end{equation*}
Then the inequality \eqref{eq:CorConvergenceRateRuinProbabilities} follows immediately from the application of Theorem \ref{thm:main-1}.
\end{proof}
In the space-homogeneous case: $p(x) \equiv p$ and $\sigma(x) \equiv \sigma$, the quantity
$\phi(\lambda, x)$ is independent of $x$, and we write $\phi(\lambda)$ for it; condition~\eqref{eq:ergodic} means that there exists a $\lambda > 0$ such that $\phi(\lambda) < 0$. Then $p_{\ast} = -p$, and
$$
\phi'(0) = -p + \kappa'(0) = -p + m(\mu).
$$
It is easy to show that $\phi(\cdot)$ is a convex function with $\phi(0) = 0$. Therefore, condition~\eqref{eq:ergodic} holds if and only if $\phi'(0) < 0$, or, equivalently,
\begin{equation}
\label{eq:ergodic-hom}
p > m(\mu).
\end{equation}
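For instance, in the compound Poisson model~\eqref{eq:CP-classic} with $p(x)\equiv p$ and $\sigma \equiv 0$, we have $\mu(\cdot) = \beta B(\cdot)$, hence $m(\mu) = \beta\,\mathbb E(U_1)$, and~\eqref{eq:ergodic-hom} is the classical net profit condition $p > \beta\,\mathbb E(U_1)$.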
\section{Explicit rate of exponential convergence calculation}\label{sec:ComputationConvergenceRate}
In this section, we study how the rate $k$ of exponential convergence depends on the parameters of the risk model.
\subsection{Compound Poisson risk model perturbed by a diffusion}
In this subsection, the risk process $X=(X(t)\text{ , }t\geq0)$ is defined as
\begin{equation}
X(t)=u+pt+\sigma W(t)-\sum_{k=1}^{N(t)}U_k,
\end{equation}
where $u\geq0$ denotes the initial capital and $p$ corresponds to the premium rate. The process $W=(W(t)\text{ , }t\geq0)$ is a standard Brownian motion that captures the volatility around the premium rate, encapsulated in the parameter $\sigma>0$. The process $N=(N(t)\text{ , }t\geq0)$ is a homogeneous Poisson process with intensity $\beta>0$, independent of the claim sizes $U_1,U_2,\ldots$, which are i.i.d. with distribution function $B$.
The premium rate satisfies the {\it net benefit condition:} $p=(1+\eta)\beta\mathbb{E}(U)$, where $\eta>0$ is the {\it safety loading}.
\smallskip
We can study the rate of exponential convergence of ruin probabilities; specifically, how it depends on the parameters of the model: (a) the diffusion coefficient $\sigma$ in front of the perturbation term; (b) the safety loading $\eta$; (c) the shape of the claim size distribution. The function $\phi(\lambda, x)$ for this risk process is given by
$$
\phi(\lambda, x) = -p\lambda + \frac12\sigma^2\lambda^2 + \beta\left[\widehat{B}(\lambda)-1\right],\ \lambda \ge 0,\ x \in \mathbb{R},
$$
where $\widehat{B}(\lambda)=\mathbb{E}(e^{\lambda U})$ denotes the moment generating function (MGF) of the claim size distribution. Since $\phi(\lambda, x)$ does not depend on $x$, we have
$$
\inf\limits_{x \ge 0}(-\phi(\lambda, x))=\Phi(\lambda)= p\lambda - \frac12\sigma^2\lambda^2 - \beta\left[\widehat{B}(\lambda)-1\right],\ \lambda \ge 0.
$$
The rate of exponential convergence follows from
$$
k=\underset{\{\lambda\geq0\text{ ; }\widehat{B}(\lambda)<\infty\}}{\text{max}}\,\Phi(\lambda).
$$
The function $\Phi(.)$ is strictly concave as
\begin{equation*}
\Phi''(\lambda)=-\sigma^{2}-\beta\widehat{B}''(\lambda)<0\text{ for all }\lambda\geq0.
\end{equation*}
It follows that
\begin{equation}\label{eq:LambdaStar}
\lambda_{\ast}:=\underset{\{\lambda\geq0\text{ ; }\widehat{B}(\lambda)<\infty\}}{\text{argmax}}\,\Phi(\lambda)
\end{equation}
is the solution of the equation
\begin{equation*}
p-\sigma^{2}\lambda-\beta\widehat{B}'(\lambda)=0,
\end{equation*}
under the constraint $\lambda_{\ast}\in\{\lambda\geq0\text{ ; }\widehat{B}(\lambda)<\infty\}$. The rate of exponential convergence is then given by
$$
k=\Phi(\lambda_{\ast})=p\lambda_{\ast} - \frac12\sigma^2\lambda_{\ast}^2 - \beta\left[\widehat{B}(\lambda_{\ast})-1\right].
$$
In this example, we compare the rate of convergence $k$ for three claim size distributions: the {\it gamma distribution} $\text{Gamma}(\alpha, \delta)$ with probability density function
$$
p(x; \alpha, \delta) =\begin{cases} \frac{\delta^{\alpha}}{\Gamma(\alpha)}x^{\alpha-1}e^{-\delta x},&\text{ for }x>0,\\
0,&\text{ otherwise},
\end{cases}
$$
the {\it exponential distribution} $\text{Exp}(\delta) = \text{Gamma}(1, \delta)$, and the {\it mixture of exponential distributions} $\text{MExp}(p,\delta_1,\delta_2)$ with probability density function
$$
p(x; p,\delta_1,\delta_2)=\begin{cases}
p\delta_1e^{-\delta_1 x}+(1-p)\delta_2e^{-\delta_2 x},&\text{ if }x>0,\\
0,&\text{ otherwise}.
\end{cases}
$$
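For the computations below, the corresponding moment generating functions are
$$
\widehat{B}_{\text{Gamma}(\alpha,\delta)}(\lambda)=\left(\frac{\delta}{\delta-\lambda}\right)^{\alpha},\ \lambda<\delta,
\qquad
\widehat{B}_{\text{MExp}(p,\delta_1,\delta_2)}(\lambda)=\frac{p\delta_1}{\delta_1-\lambda}+\frac{(1-p)\delta_2}{\delta_2-\lambda},\ \lambda<\delta_1\wedge\delta_2,
$$
and the exponential case is recovered from the gamma case with $\alpha = 1$.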
Let the claim size be distributed as $\text{Gamma}(2,1)$. Table \ref{Tab:ConvergenceRateSafetyLoadingVolatility} gives the rate of exponential convergence for various combinations of values for the safety loading and the volatility.
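The following numerical sketch is purely illustrative and not part of the original computations: it maximizes $\Phi$ for $\text{Gamma}(2,1)$ claim sizes, assuming a claim intensity $\beta = 1$ (not specified above, but consistent with the values reported in Table~\ref{Tab:ConvergenceRateSafetyLoadingVolatility}).
\begin{verbatim}
# Hypothetical numerical sketch: maximize Phi(lambda) for Gamma(2,1) claims,
# assuming claim intensity beta = 1 (an assumption, not stated in the text).
import numpy as np
from scipy.optimize import minimize_scalar

alpha, delta = 2.0, 1.0        # Gamma(alpha, delta) claims, mean alpha/delta = 2
beta = 1.0                     # Poisson claim intensity (assumed)
eta, sigma = 0.1, 0.0          # safety loading and diffusion parameter
p = (1 + eta) * beta * alpha / delta                     # net benefit condition

B_hat = lambda lam: (delta / (delta - lam)) ** alpha     # MGF, finite for lam < delta
Phi = lambda lam: p * lam - 0.5 * sigma**2 * lam**2 - beta * (B_hat(lam) - 1)

res = minimize_scalar(lambda lam: -Phi(lam), bounds=(0.0, delta - 1e-9),
                      method="bounded")
lam_star, k = res.x, -res.fun
print(lam_star, k)   # k is approximately 0.0032 for eta = 0.1, sigma = 0
\end{verbatim}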
\begin{table}[h!]\centering
\ra{1.3}
\scriptsize
\begin{tabular}{@{}ll|rrrrrr@{}}\toprule
&&\multicolumn{6}{c}{Safety loading}\\
\cmidrule{3-8}
\multicolumn{2}{c|}{Volatility}&$\eta=0.05$&$\eta=0.1$&$\eta=0.15$&$\eta=0.2$&$\eta=0.25$&$\eta=0.3$\\
\midrule
$\sigma=$& 0&0.00082 & 0.00319 & 0.00704 & 0.01227 & 0.01881 & 0.02658 \\
& 1&0.0007 & 0.00277 & 0.00613 & 0.01073 & 0.01653 & 0.02345 \\
&2&0.0005 & 0.00197 & 0.00439 & 0.00775 & 0.01201 & 0.01716 \\
& 3&0.00033 & 0.00132 & 0.00297 & 0.00526 & 0.00819 & 0.01174 \\
&4&0.00023 & 0.00091 & 0.00204 & 0.00361 & 0.00563 & 0.0081 \\
&5&0.00016 & 0.00064 & 0.00145 & 0.00257 & 0.00402 & 0.00578 \\
& 6&0.00012 & 0.00048 & 0.00107 & 0.0019 & 0.00297 & 0.00427 \\
& 7&0.00009 & 0.00036 & 0.00082 & 0.00145 & 0.00227 & 0.00327 \\
& 8&0.00007 & 0.00029 & 0.00064 & 0.00114 & 0.00178 & 0.00257 \\
& 9&0.00006 & 0.00023 & 0.00052 & 0.00092 & 0.00144 & 0.00207 \\
& 10&0.00005 & 0.00019 & 0.00042 & 0.00075 & 0.00118 & 0.0017 \\
\bottomrule
\end{tabular}\caption{Rate of exponential convergence in the compound Poisson risk model perturbed by a diffusion, with $\text{Gamma}(2,1)$ distributed claim sizes, and different values for $\sigma$ and $\eta$.}
\label{Tab:ConvergenceRateSafetyLoadingVolatility}
\end{table}
For a given value of the safety loading, the rate of convergence decreases when the volatility increases. Conversely, for a given volatility level, the rate of convergence increases with the safety loading. The first row of Table~\ref{Tab:ConvergenceRateSafetyLoadingVolatility} contains the rates of convergence when $\sigma=0$, corresponding to the (unperturbed) compound Poisson risk model. Figure~\ref{fig:ConvergenceRateSafetyLoadingVolatility} displays the rates of exponential convergence as functions of the volatility level for different values of the safety loading: $\eta=0.1,0.2,0.3$.
\begin{figure}[ht]
\centering
\includegraphics[width=7cm]{Fig1.pdf}
\caption{The rate of exponential convergence in the compound Poisson risk model perturbed by a diffusion depending on the volatility, for $\eta=0.1,0.2,0.3$.}
\label{fig:ConvergenceRateSafetyLoadingVolatility}
\end{figure}
\begin{remark}
Consider the compound Poisson risk model perturbed by a diffusion under a constant interest force $i>0$, obtained by setting $p(x)=p+ix$. The function $\phi(\lambda,x)$ then becomes
$$
\phi(\lambda, x) = -(p+ix)\lambda + \frac12\sigma^2\lambda^2 + \beta\left[\widehat{B}(\lambda)-1\right],\ \lambda \ge 0,\ x \in \mathbb{R}.
$$
Although the function $\phi(\lambda,x)$ now depends on $x$, the quantity $-\phi(\lambda, x)$ is nondecreasing in $x \ge 0$ (since $i\lambda x \ge 0$), so the infimum in~\eqref{eq:Phi-definition} is attained at $x=0$:
$$
\inf\limits_{x \ge 0}(-\phi(\lambda, x))=\Phi(\lambda)= p\lambda - \frac12\sigma^2\lambda^2 - \beta\left[\widehat{B}(\lambda)-1\right],\ \lambda \ge 0.
$$
The maximization problem is the same as for the compound Poisson risk model perturbed by a diffusion and will lead to the same rate of convergence.
\end{remark}
Let us turn to the study of the rate of convergence for different claim size distributions. We assume that the claim sizes are either exponentially distributed, $\text{Exp}(1/2)$; gamma distributed, $\text{Gamma}(2,1)$; or distributed as a mixture of exponentials, $\text{MExp}(3/4,3/4,1/4)$. These three distributions have the same mean, but their variances differ:
$$
\Var\left[\text{Gamma}(2,1)\right]<\Var\left[ \text{Exp}(1/2)\right]<\Var\left[ \text{MExp}(3/4,3/4,1/4)\right].
$$
Table \ref{Tab:ConvergenceRateClaimDistribution} contains the values of the rate of exponential convergence over the three claim size distributions.
\begin{table}[h!]\centering
\ra{1.3}
\scriptsize
\begin{tabular}{@{}lll|rcrcr@{}}\toprule
&&&\multicolumn{5}{|c}{Claim Sizes Distributions}\\
\cmidrule{4-8}
Volatility&\multicolumn{2}{c|}{Safety Loadings}&$\text{Exp}(1/2)$&\phantom{abc}&$\text{Gamma}(2,1)$&\phantom{abc}&$\text{MExp}(3/4,3/4,1/4)$\\
\midrule
$\sigma = 0 $&$\eta =$& 0.1 & 0.00238 && 0.00319 && 0.00177 \\
&& 0.2 & 0.00911 && 0.01227 && 0.00668 \\
&& 0.3 & 0.01965 && 0.02658 && 0.01426 \\
\cmidrule{1-3}
$\sigma = 1$ &$\eta =$& 0.1 & 0.00214 && 0.00277 && 0.00163 \\
&& 0.2 & 0.00824 && 0.01073 && 0.00621 \\
&& 0.3 & 0.01791 && 0.02345 && 0.01335 \\
\cmidrule{1-3}
$\sigma = 2$ &$\eta =$& 0.1 & 0.00163 && 0.00197 && 0.00132 \\
&& 0.2 & 0.00638 && 0.00775 && 0.00511 \\
&& 0.3 & 0.01405 && 0.01716 && 0.01114 \\
\cmidrule{1-3}
$\sigma = 3$ &$\eta =$& 0.1 & 0.00116 && 0.00132 && 0.001 \\
&& 0.2 & 0.0046 && 0.00526 && 0.00392 \\
&& 0.3 & 0.01024 && 0.01174 && 0.00865 \\
\cmidrule{1-3}
$\sigma = 4$ &$\eta =$& 0.1 & 0.00083 && 0.00091 && 0.00074 \\
&& 0.2 & 0.0033 && 0.00361 && 0.00294 \\
&& 0.3 &0.00737 && 0.0081 && 0.00654 \\
\cmidrule{1-3}
$\sigma = 5$ &$\eta =$& 0.1 & 0.0006 && 0.00064 && 0.00056 \\
&& 0.2 & 0.00241 && 0.00257 && 0.00222 \\
&& 0.3 & 0.00541 && 0.00578 && 0.00496 \\
\cmidrule{1-3}
$\sigma = 6$ &$\eta =$& 0.1 & 0.00045 && 0.00048 && 0.00043 \\
&& 0.2 & 0.00181 && 0.0019 && 0.0017 \\
&& 0.3 & 0.00407 && 0.00427 && 0.00382 \\
\cmidrule{1-3}
$\sigma = 7$ &$\eta =$& 0.1 & 0.00035 && 0.00036 && 0.00033 \\
&& 0.2 & 0.0014 && 0.00145 && 0.00134 \\
&& 0.3 & 0.00315 && 0.00327 && 0.003 \\
\cmidrule{1-3}
$\sigma = 8$ &$\eta =$& 0.1 & 0.00028&& 0.00029 && 0.00027 \\
&& 0.2 & 0.00111 && 0.00114 && 0.00107 \\
&& 0.3 & 0.0025 && 0.00257 && 0.0024 \\
\cmidrule{1-3}
$\sigma = 9$ &$\eta =$& 0.1 & 0.00022 && 0.00023 && 0.00022 \\
&& 0.2 & 0.0009 && 0.00092 && 0.00087 \\
&& 0.3 & 0.00202 && 0.00207 && 0.00196 \\
\cmidrule{1-3}
$\sigma =10$ &$\eta =$& 0.1 & 0.00019 && 0.00019 && 0.00018 \\
&& 0.2 & 0.00074 && 0.00075 && 0.00072 \\
&& 0.3 & 0.00167 && 0.0017 && 0.00162 \\
\bottomrule
\end{tabular}\caption{Rate of exponential convergence in the compound Poisson risk model perturbed by a diffusion for different claim size distributions.}
\label{Tab:ConvergenceRateClaimDistribution}
\end{table}
The fastest convergence occurs in the gamma case and the slowest in the mixture-of-exponentials case.
Figure \ref{fig:ConvergenceRateVolatilitySafetyLoadingClaims} displays the evolution of the rate of exponential convergence as a function of the safety loading and the diffusion parameter, for the different claim size distributions.
\begin{figure}[h!]
\centering
\subfigure[The rate of exponential convergence depending on the safety loading and diffusion $\sigma=2$.]
{
\includegraphics[width=6cm]{Fig2.pdf}
\label{fig:ConvergenceRateSafetyLoadingClaims}
}
\subfigure[The rate of exponential convergence depending on the volatility and safety loading $\eta=0.1$.]
{
\includegraphics[width=6cm]{Fig3.pdf}
\label{fig:ConvergenceRateVolatilityClaims}
}
\caption{The rate of exponential convergence in the compound Poisson risk model perturbed by a diffusion for different claim sizes distributions}
\label{fig:ConvergenceRateVolatilitySafetyLoadingClaims}
\end{figure}
This numerical study suggests that the speed of convergence depends on the variance of the process: increasing the variance, either through the claim size distribution or via the diffusion component, slows down the convergence toward the stationary distribution.
\subsection{L\'evy-driven risk processes}
In this subsection, we compare the rate of exponential convergence of the ruin probabilities when the liability of the insurance company is modeled by a \textit{gamma process} and an \textit{inverse Gaussian L\'evy process}. The L\'evy measure of a \textit{gamma process}, $\text{GammaP}(\alpha,\beta)$, is given by
\begin{equation}\label{eq:LevyMeasureGammaProcess}
\mu(\text{d}x)=\frac{\alpha e^{-\beta x}}{x}\,\text{d}x,\text{ for }x>0,
\end{equation}
where $\alpha,\,\beta>0$. Its L\'evy exponent is
\begin{equation}\label{eq:LevyExpGammaProcess}
\kappa(\lambda)=\alpha\ln\left(\frac{\beta}{\beta-\lambda}\right),\text{ for }\lambda\in[0,\beta).
\end{equation}
The function $\Phi(\cdot)$ is strictly concave as
\begin{equation*}
\Phi''(\lambda)=-\sigma^{2}-\frac{\alpha}{(\beta-\lambda)^2}<0.
\end{equation*}
It follows that $\lambda_{\ast}$ is the solution of the equation
\begin{equation*}
p-\sigma^{2}\lambda-\frac{\alpha}{\beta-\lambda}=0.
\end{equation*}
The rate of exponential convergence is then given by
$$
k=\Phi(\lambda_{\ast})=p\lambda_{\ast} - \frac12\sigma^2\lambda_{\ast}^2 - \alpha\ln\left(\frac{\beta}{\beta-\lambda_\ast}\right).
$$
The L\'evy measure associated to the \textit{inverse Gaussian L\'evy process}, $\text{IGP}(\gamma)$, is defined as
\begin{equation}\label{eq:LevyMeasureInverseGaussianProcess}
\mu(\text{d}x)=\frac{1}{\sqrt{2\pi}x^{3/2}}e^{-x\gamma^{2}/2}\,\text{d}x,\text{ for }x>0,
\end{equation}
where $\gamma>0$. Its L\'evy exponent is
\begin{equation}\label{eq:LevyExpIGProcess}
\kappa(\lambda)=\gamma-\sqrt{\gamma^{2}-2\lambda},\text{ for }\lambda\in[0,\gamma^{2}/2).
\end{equation}
The function $\Phi$ is strictly concave as
\begin{equation*}
\Phi''(\lambda)=-\sigma^{2}-(\gamma^{2}-2\lambda)^{-3/2}<0.
\end{equation*}
It follows that $\lambda_{\ast}$ is the solution of the equation
\begin{equation*}
p-\sigma^{2}\lambda-\frac{1}{\sqrt{\gamma^{2}-2\lambda}}=0.
\end{equation*}
The rate of exponential convergence is then given by
$$
k=\Phi(\lambda_{\ast})=p\lambda_{\ast} - \frac12\sigma^2\lambda_{\ast}^2 - \gamma+\sqrt{\gamma^2-2\lambda_\ast}.
$$
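When $\sigma = 0$, both first-order conditions can be solved explicitly. For the gamma process, $\lambda_{\ast} = \beta - \alpha/p$ and
$$
k = p\beta - \alpha - \alpha\ln\frac{p\beta}{\alpha},
$$
while for the inverse Gaussian L\'evy process, $\lambda_{\ast} = \frac12\bigl(\gamma^{2} - p^{-2}\bigr)$ and $k = (p\gamma - 1)^2/(2p)$; in both cases $\lambda_{\ast}$ and $k$ are positive precisely when $p > m(\mu)$.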
We set $\gamma=1$, $\alpha=1/2$, $\beta=1/2$, so that the first moments of the liabilities at time $t=1$ coincide in both risk models. Table \ref{Tab:ConvergenceRateLevyDriven} contains the values of the exponential rate of convergence when the liability of the insurance company is governed by a \textit{gamma process} or an \textit{inverse Gaussian L\'evy process}, depending on the safety loading and the volatility of the diffusion.
\begin{table}[h!]\centering
\ra{1.3}
\scriptsize
\begin{tabular}{@{}lll|rcr@{}}\toprule
&&&\multicolumn{3}{|c}{L\'evy processes}\\
\cmidrule{4-6}
Volatility&\multicolumn{2}{c|}{Safety Loadings}&\text{GammaP}(1/2,1/2)&\phantom{abc}&\text{IGP}(1)\\
\midrule
$\sigma=0$ &$\eta =$& 0.1 & 0.02617 && 0.05 \\
&& 0.2 & 0.05442 && 0.1 \\
&& 0.3 & 0.08441 && 0.15 \\
\cmidrule{1-3}
$\sigma= 1$ &$\eta =$& 0.1 & 0.01809 & &0.0271 \\
&& 0.2 & 0.03882 && 0.05806 \\
&& 0.3 & 0.06189 && 0.09238 \\
\cmidrule{1-3}
$\sigma= 2$&$\eta =$ & 0.1 & 0.00921 && 0.01104 \\
&& 0.2 & 0.02013 && 0.02412 \\
&& 0.3 & 0.03272 & &0.03923 \\
\cmidrule{1-3}
$\sigma= 3$&$\eta =$ & 0.1 & 0.00503 && 0.00552 \\
&& 0.2 & 0.01101 && 0.01207 \\
&& 0.3 & 0.01794 && 0.01965 \\
\cmidrule{1-3}
$\sigma= 4$&$\eta =$ & 0.1 & 0.00307 && 0.00324 \\
&& 0.2 & 0.00671 && 0.00709 \\
&& 0.3 & 0.01094 && 0.01153 \\
\cmidrule{1-3}
$\sigma= 5$&$\eta =$ & 0.1 & 0.00204 && 0.00212 \\
&& 0.2 & 0.00447 && 0.00463 \\
&& 0.3 & 0.00727 && 0.00753 \\
\cmidrule{1-3}
$\sigma= 6$&$\eta =$ & 0.1 & 0.00145 && 0.00149 \\
&& 0.2 & 0.00317 & &0.00325 \\
&& 0.3 & 0.00516 & &0.00529 \\
\cmidrule{1-3}
$\sigma= 7$&$\eta =$ & 0.1 & 0.00108 && 0.0011 \\
&& 0.2 & 0.00236 && 0.0024 \\
&& 0.3 & 0.00384 && 0.00391 \\
\cmidrule{1-3}
$\sigma= 8$&$\eta =$ & 0.1 & 0.00083 && 0.00085 \\
&& 0.2 & 0.00182 & &0.00185 \\
&& 0.3 & 0.00296 && 0.00301 \\
\cmidrule{1-3}
$\sigma= 9 $&$\eta =$& 0.1 & 0.00066 && 0.00067 \\
&& 0.2 & 0.00145 && 0.00146 \\
&& 0.3 & 0.00236 && 0.00238 \\
\cmidrule{1-3}
$\sigma= 10$&$\eta =$ & 0.1 & 0.00054 && 0.00054 \\
&& 0.2 & 0.00118 && 0.00119 \\
&& 0.3 & 0.00192 & &0.00193 \\
\bottomrule
\end{tabular}\caption{Rate of exponential convergence in L\'evy driven risk models.}
\label{Tab:ConvergenceRateLevyDriven}
\end{table}
Figure \ref{fig:ConvergenceRateSafetyLoadingLevy} displays the rates of exponential convergence for the considered L\'evy driven risk models.
\begin{figure}[h!]
\centering
\subfigure[The rate of exponential convergence depending on the safety loading, and volatility $\sigma=1$.]
{
\includegraphics[width=6cm]{Fig4.pdf}
\label{fig:ConvergenceRateSafetyLoadingLevy}
}
\subfigure[The rate of exponential convergence depending on the volatility and safety loading $\eta=0.2$.]
{
\includegraphics[width=6cm]{Fig5.pdf}
\label{fig:ConvergenceRateVolatilityLevy}
}
\caption{The rate of exponential convergence for L\'evy driven risk processes.}
\label{fig:ConvergenceRateLevy}
\end{figure}
We observe that the impact of the volatility and the safety loading on the convergence rate remains the same as in the compound Poisson case. The rate of exponential convergence is noticeably greater when the liability of the insurance company follows an \textit{inverse Gaussian L\'evy process}.
\section{Proof of Theorem~\ref{thm:main-1}}\label{sec:ProofMainThm}
If $Y$ were a reflected jump-diffusion with a.s. finitely many jumps in finite time, and with positive diffusion coefficient, then we could directly apply \cite[Theorem 4.1, Theorem 4.3]{MyOwn12}, and complete the proof of Theorem~\ref{thm:main-1}. However, we might have: (a) zero diffusion coefficient $\sigma(x) = 0$ for some $x$; (b) infinite L\'evy measure $\mu$, that is, infinitely many jumps in finite time horizon.
\smallskip
In the proof of \cite[Theorem 3.2]{MyOwn12}, we used the following property: for all $t > 0$, $x \in \mathbb{R}_+$, and $A \subseteq \mathbb{R}_+$ of positive Lebesgue measure, we have $Q^t(x, A) > 0$. This property might not hold for the case $\sigma(x) = 0$ for some $x \in \mathbb{R}_+$. We bypass this difficulty via the following method: approximating the reflected jump-diffusion $Y$ by a ``regular'' reflected jump-diffusion, where $\sigma(x) > 0$ for $x \in \mathbb{R}_+$, and the L\'evy measure is finite.
\smallskip
For an $\varepsilon > 0$, let $Y_{\varepsilon} = (Y_{\varepsilon}(t),\, t \ge 0)$ be the reflected jump-diffusion on $\mathbb{R}_+$, with drift coefficient $p_*$, diffusion coefficient $\sigma_{\varepsilon}(\cdot) = \sigma(\cdot) + \varepsilon$, and jump measure $\mu_{\varepsilon}(\cdot) = \mu(\cdot\cap[\varepsilon, \varepsilon^{-1}])$. Note that this is a reflected jump-diffusion with positive diffusion coefficient, and with finite L\'evy measure: $\sigma_{\varepsilon}(y) > 0$ for all $y \in \mathbb{R}_+$, and $\mu_{\varepsilon}(\mathbb{R}_+) < \infty$. Therefore, we can apply results of \cite{MyOwn12} to this process. For $x \in \mathbb{R}_+$, let
$$
\phi_{\varepsilon}(x, \lambda) := p_{\ast}(x)\lambda + \frac12\sigma^2_{\varepsilon}(x)\lambda^2 + \int_{\varepsilon}^{\varepsilon^{-1}}\left(e^{\lambda y} - 1\right)\,\mu_{\varepsilon}(\mathrm{d} y).
$$
For every $x \ge 0$, we have:
\begin{equation}
\label{eq:difference-k}
\phi(x, \lambda) -\phi_{\varepsilon}(x, \lambda) = -\left[\varepsilon\sigma(x) + \frac12\varepsilon^2\right]\lambda^2 + \left(\int_0^{\varepsilon} + \int_{\varepsilon^{-1}}^{\infty}\right)\left(e^{\lambda y} - 1\right)\,\mu(\mathrm{d} y).
\end{equation}
Recall also that
\begin{equation}
\label{eq:finite-exp-moment}
\int_0^{\infty}\left(e^{\lambda y} - 1\right)\,\mu(\mathrm{d} y) < \infty.
\end{equation}
Combining~\eqref{eq:difference-k} with~\eqref{eq:finite-exp-moment} and the boundedness of $\sigma$ from Assumption \ref{as:1}, we have:
\begin{equation}
\label{eq:uniform-convergence-k}
\sup\limits_{x \ge 0}\left|\phi_{\varepsilon}(x, \lambda) - \phi(x, \lambda)\right| \to 0,\ \varepsilon \downarrow 0.
\end{equation}
By our assumptions,
\begin{equation}
\label{eq:k-sup-negative}
\sup\limits_{x \ge 0}\phi(x, \lambda) = -\Phi(\lambda) < 0.
\end{equation}
From~\eqref{eq:uniform-convergence-k}, we have:
\begin{equation}
\label{eq:k-sup-convergence}
-\sup\limits_{x \ge 0}\phi_{\varepsilon}(x, \lambda) =: \Phi_{\varepsilon}(\lambda) \to \Phi(\lambda).
\end{equation}
From~\eqref{eq:k-sup-convergence} and~\eqref{eq:k-sup-negative}, we conclude that there exists an $\varepsilon_0 > 0$ such that for $\varepsilon \in [0,\varepsilon_0]$, $\Phi_{\varepsilon}(\lambda) > 0$. Apply \cite[Theorem 4.3]{MyOwn12} to prove the statement of Theorem~\ref{thm:main-1} for the process $Y_{\varepsilon}$. For consistency of notation, denote $Y_0 := Y$. There exists a unique stationary distribution $\pi_{\varepsilon}$ for $Y_{\varepsilon}$, which satisfies $(\pi_{\varepsilon}, V_{\lambda}) < \infty$; and the transition kernel $Q_{\varepsilon}^t(x, \cdot)$ of this process $Y_{\varepsilon}$ satisfies
\begin{equation}
\label{eq:exp-ergodicity-eps}
\norm{Q_{\varepsilon}^t(x, \cdot) - \pi_{\varepsilon}(\cdot)}_{V_{\lambda}} \le \left[V_{\lambda}(x) + (\pi_{\varepsilon}, V_{\lambda})\right]e^{-\Phi_{\varepsilon}(\lambda)t}.
\end{equation}
We would like to take the limit $\varepsilon \downarrow 0$ in~\eqref{eq:exp-ergodicity-eps}. To this end, let us introduce some new notation. Take a smooth $C^{\infty}$ function $\theta : \mathbb{R}_+ \to \mathbb{R}_+$ which is nondecreasing, and satisfies
$$
\theta(x) =
\begin{cases}
0,\ x \le s_-;\\
x,\ x \ge s_+;
\end{cases}
\ \ \theta(x) \le x,
$$
for some fixed $s_+ > s_- > 0$. The function $\theta$ is Lipschitz on $\mathbb{R}_+$: there exists a constant $C(\theta) > 0$ such that
\begin{equation}
\label{eq:theta-Lipschitz}
|\theta(s_1) - \theta(s_2)| \le C(\theta)|s_1 - s_2|\ \mbox{for all}\ s_1, s_2 \in \mathbb{R}_+.
\end{equation}
Next, define
$$
\tilde{V}_{\lambda}(x) = V_{\lambda}(\theta(x)) = e^{\lambda\theta(x)}.
$$
The process $Y_{\varepsilon}$ has the generator $\mathcal L_{\varepsilon}$, given by the formula
$$
\mathcal L_{\varepsilon}f(x) = p_{\ast}(x)f'(x) + \frac12\sigma_{\varepsilon}^2(x)f''(x) + \int_{\varepsilon}^{\varepsilon^{-1}}\left[f(x+y) - f(x)\right]\,\mu(\mathrm{d} y)
$$
for $f \in C^2(\mathbb{R}_+)$ with $f'(0) = 0$. Repeating calculations from \cite[Theorem 3.2]{MyOwn12} with minor changes, we get:
\begin{equation}
\label{eq:Lyapunov-example}
\mathcal L_{\varepsilon}\tilde{V}_{\lambda}(x) \le -\Phi_{\varepsilon}(\lambda)\tilde{V}_{\lambda}(x) + c_{\varepsilon}1_{[0, s_+]}(x),\ x \in \mathbb{R}_+,
\end{equation}
with the constant
\begin{equation}
\label{eq:c-constant}
c_{\varepsilon} := \max\limits_{x\in[0, s_+]}\left[\mathcal L_{\varepsilon}\tilde{V}_{\lambda}(x) + \Phi_{\varepsilon}(\lambda)\tilde{V}_{\lambda}(x)\right].
\end{equation}
\begin{lemma}
$\varlimsup_{\varepsilon \downarrow 0}(\pi_{\varepsilon}, V_{\lambda}) < \infty$.
\label{lemma:finite-limsup}
\end{lemma}
\begin{proof}
The functions $V_{\lambda}$ and $\tilde{V}_{\lambda}$ are of the same order, in the sense that
\begin{equation}
\label{eq:equiv}
0 < \inf\limits_{x \ge 0}\frac{\tilde{V}_{\lambda}(x)}{V_{\lambda}(x)} \le \sup\limits_{x \ge 0}\frac{\tilde{V}_{\lambda}(x)}{V_{\lambda}(x)} < \infty.
\end{equation}
Therefore, it suffices to show that
\begin{equation}
\label{eq:tilde-finite-limsup}
\varlimsup\limits_{\varepsilon \downarrow 0}(\pi_{\varepsilon}, \tilde{V}_{\lambda}) < \infty.
\end{equation}
Apply the probability measure $\pi_{\varepsilon}$ to both sides of the inequality~\eqref{eq:Lyapunov-example}. This probability measure is stationary; therefore, the left-hand side of~\eqref{eq:Lyapunov-example} becomes $(\pi_{\varepsilon}, \mathcal L_{\varepsilon}\tilde{V}_{\lambda}) = 0$. Therefore,
$$
-\Phi_{\varepsilon}(\lambda)\bigl(\pi_{\varepsilon}, \tilde{V}_{\lambda}\bigr) + c_{\varepsilon}\bigl(\pi_{\varepsilon}, 1_{[0, s_+]}\bigr) \ge 0.
$$
Since $(\pi_{\varepsilon}, 1_{[0, s_+]}) = \pi_{\varepsilon}([0, s_+]) \le 1$, we get:
\begin{equation}
\label{eq:upper-bound}
\bigl(\pi_{\varepsilon}, \tilde{V}_{\lambda}\bigr) \le \frac{c_{\varepsilon}}{\Phi_{\varepsilon}(\lambda)}.
\end{equation}
From~\eqref{eq:k-sup-convergence} and~\eqref{eq:upper-bound}, to show~\eqref{eq:tilde-finite-limsup}, it suffices to show that
\begin{equation}
\label{eq:c-eps}
\varlimsup\limits_{\varepsilon\downarrow 0}c_{\varepsilon} < \infty.
\end{equation}
This, in turn, would follow from~\eqref{eq:c-constant},~\eqref{eq:k-sup-convergence}, and the following relation:
\begin{equation}
\label{eq:uniform-conv-of-generators}
\mathcal L_{\varepsilon}\tilde{V}_{\lambda}(x) \to \mathcal L\tilde{V}_{\lambda}(x),\ \ \mbox{uniformly on}\ \ [0, s_+].
\end{equation}
We can express the difference of generators as
\begin{align}
\label{eq:difference-generators}
\begin{split}
\mathcal L_{\varepsilon}&\tilde{V}_{\lambda}(x) - \mathcal L\tilde{V}_{\lambda}(x) \\ & = \frac12\left(\sigma_{\varepsilon}^2(x) - \sigma^2(x)\right)\tilde{V}_{\lambda}''(x) - \left(\int_0^{\varepsilon}+\int_{\varepsilon^{-1}}^{\infty}\right)\left[\tilde{V}_{\lambda}(x+y) - \tilde{V}_{\lambda}(x)\right]\,\mu(\mathrm{d} y).
\end{split}
\end{align}
The first term in the right-hand side of~\eqref{eq:difference-generators} is equal to
$\frac12(2\varepsilon\sigma(x) + \varepsilon^2)\tilde{V}_{\lambda}''(x)$. Since $\sigma$ is bounded and $\tilde{V}_{\lambda}''$ is bounded on $[0, s_+]$, this term converges to $0$ as $\varepsilon \downarrow 0$ uniformly on $[0, s_+]$. It suffices to prove that the second term converges to zero as well. For all $x, y \ge 0$, using~\eqref{eq:theta-Lipschitz}, we have:
\begin{align}
\label{eq:elementary-est}
\begin{split}
& 0 \le \tilde{V}_{\lambda}(x + y) - \tilde{V}_{\lambda}(x) = e^{\lambda\theta(x+y)} - e^{\lambda\theta(x)} \\ & = e^{\lambda\theta(x)}\left[e^{\lambda(\theta(x+y) - \theta(x))} - 1\right] \le \tilde{V}_{\lambda}(x)\left[e^{\lambda C(\theta)y} - 1\right].
\end{split}
\end{align}
Changing the parameter $s_-$ and letting $s_- \downarrow 0$, we have: $\theta(x) \to x$ uniformly on $\mathbb{R}_+$. Therefore, we can make the Lipschitz constant $C(\theta)$ as close to $1$ as necessary. Also, note that for $\lambda'$ in some neighborhood of $\lambda$, we have:
\begin{equation}
\label{eq:finite-exp-moment-nbhd}
\int_0^{\infty}\left(e^{\lambda'x} - 1\right)\,\mu(\mathrm{d} x) < \infty.
\end{equation}
Combining~\eqref{eq:finite-exp-moment-nbhd} with~\eqref{eq:elementary-est}, using that $\sup_{x \in [0, s_+]}\tilde{V}_{\lambda}(x) < \infty$, and making $C(\theta)$ close enough to $1$, we complete the proof that the second term in the right-hand side of~\eqref{eq:difference-generators} tends to $0$ as $\varepsilon \downarrow 0$. This completes the proof of~\eqref{eq:uniform-conv-of-generators}, and with it that of~\eqref{eq:c-eps} and Lemma~\ref{lemma:finite-limsup}.
\end{proof}
Now, we state a fundamental lemma, and complete the proof of Theorem~\ref{thm:main-1} assuming that this lemma is proved. The proof is postponed until the end of this section.
\begin{lemma}
Take a version $\tilde{Y}_{\varepsilon}$ of the reflected jump-diffusion $Y_{\varepsilon}$, starting from $y_{\varepsilon} \ge 0$, for $\varepsilon \ge 0$. If $y_{\varepsilon} \to y_0$, then we can couple $\tilde{Y}_{\varepsilon}$ and $\tilde{Y}_0$ so that for every $T \ge 0$,
$$
\lim\limits_{\varepsilon \downarrow 0}\mathbb E\sup\limits_{0 \le t \le T}\bigl|\tilde{Y}_{\varepsilon}(t) - \tilde{Y}_{0}(t)\bigr|^2 = 0.
$$
\label{lemma:fundamental}
\end{lemma}
Since $V_{\lambda}(\infty) = \infty$, Lemma~\ref{lemma:finite-limsup} implies tightness of the family $(\pi_{\varepsilon})_{\varepsilon \in (0, \varepsilon_0]}$ of probability measures. Now take a stationary version $\overline{Y}_{\varepsilon}$ of the reflected jump-diffusion $Y_{\varepsilon}$: for every $t \ge 0$, let $\overline{Y}_{\varepsilon}(t) \sim \pi_{\varepsilon}$. Take a sequence $(\varepsilon_n)_{n \ge 1}$ such that $\varepsilon_n \downarrow 0$ as $n \to \infty$, and $\pi_{\varepsilon_n} \Rightarrow \pi_0$ (where $\Rightarrow$ stands for weak convergence) for some probability measure $\pi_0$ on $\mathbb{R}_+$. It follows from Lemma~\ref{lemma:fundamental} that for every $t \ge 0$, we have: $\overline{Y}_{\varepsilon_n}(t) \Rightarrow \overline{Y}_0(t)$ as $n \to \infty$, where $\overline{Y}_0$ is a stationary version of the reflected jump-diffusion $Y_0$: that is, $\overline{Y}_0(t) \sim \pi_0$ for every $t \ge 0$. In other words, we proved that the reflected jump-diffusion $Y_0$ has a stationary distribution $\pi_0$.
\smallskip
Next, take a measurable function $g : \mathbb{R}_+ \to \mathbb{R}$ such that $|g(x)| \le V_{\lambda}(x)$ for all $x \in \mathbb{R}_+$.
\begin{lemma} $\left(\pi_{\varepsilon_n}, g\right) \to (\pi_0, g)$ as $n \to \infty$.
\label{lemma:DCT}
\end{lemma}
\begin{proof} The function $\Phi$ is concave on $[0, \lambda_0)$, being an infimum over $x \ge 0$ of the concave functions $-\phi(\cdot, x)$. Moreover, $\Phi(\lambda) > 0$ implies $\sup_{x \ge 0}p_{\ast}(x)\lambda \le \sup_{x \ge 0}\phi(\lambda, x) = -\Phi(\lambda) < 0$; together with the boundedness of $\sigma$ and the finiteness of $\kappa$ on $[0, \lambda_0)$, this shows that $\Phi$ is finite on $[0, \lambda_0)$, hence continuous on $(0, \lambda_0)$. In particular, there exists a $\lambda' \in (\lambda, \lambda_0)$ with $\Phi(\lambda') > 0$. Apply Lemma~\ref{lemma:finite-limsup} to this $\lambda'$. Then we get:
$$
\varlimsup\limits_{n \to \infty}\left(\pi_{\varepsilon_n}, V_{\lambda'}\right) < \infty.
$$
Note also that $|g(x)|^{\lambda'/\lambda} \le \left[V_{\lambda}(x)\right]^{\lambda'/\lambda} = V_{\lambda'}(x)$ for all $x \ge 0$. Therefore, the family $(\pi_{\varepsilon}g^{-1})_{\varepsilon \in (0, \varepsilon_0]}$ of probability distributions is uniformly integrable. Uniform integrability plus a.s. convergence imply convergence of expected values. Thus we complete the proof of Lemma~\ref{lemma:DCT}.
\end{proof}
For all $\varepsilon \ge 0$, take a copy $Y^{\varepsilon}$ of $Y_{\varepsilon}$ starting from the same initial point $x \in \mathbb{R}_+$.
\begin{lemma} For every $t \ge 0$, we have: $\mathbb E g(Y^{\varepsilon}(t)) \to \mathbb E g(Y^0(t))$ as $\varepsilon \downarrow 0$.
\label{lemma:DCT-time}
\end{lemma}
\begin{proof} Following calculations in the proof of \cite[Theorem 3.2]{MyOwn12}, we get:
\begin{equation}
\label{eq:Lyapunov-integral}
\mathbb E \tilde{V}_{\lambda}(Y^{\varepsilon}(t)) - \tilde{V}_{\lambda}(x) \le \mathbb E\int_0^t\left[-\Phi_{\varepsilon}(\lambda)\tilde{V}_{\lambda}(Y^{\varepsilon}(s)) + c_{\varepsilon}1_{[0, s_+]}(Y^{\varepsilon}(s))\right]\,\mathrm{d} s \le c_{\varepsilon}t.
\end{equation}
Therefore, from~\eqref{eq:Lyapunov-integral} we have:
\begin{equation}
\label{eq:bdd-new}
\varlimsup\limits_{\varepsilon\downarrow 0}\mathbb E \tilde{V}_{\lambda}(Y^{\varepsilon}(t)) < \infty.
\end{equation}
From~\eqref{eq:equiv}, \eqref{eq:bdd-new} also holds with $V_{\lambda}$ in place of $\tilde{V}_{\lambda}$, and this remains true for a $\lambda'$ slightly larger than $\lambda$.
Applying the same uniform integrability argument as in the proof of Lemma~\ref{lemma:DCT}, we complete the proof of Lemma~\ref{lemma:DCT-time}.
\end{proof}
Finally, let us complete the proof of Theorem~\ref{thm:main-1}. From~\eqref{eq:exp-ergodicity-eps}, we have:
\begin{equation}
\label{eq:final-step}
\left|\mathbb E g(Y^{\varepsilon}(t)) - (\pi_{\varepsilon}, g)\right| \le \left[V_{\lambda}(x) + \left(\pi_{\varepsilon}, V_{\lambda}\right)\right]e^{-\Phi_{\varepsilon}(\lambda)t}.
\end{equation}
Taking $\varepsilon = \varepsilon_n$ and letting $n \to \infty$ in~\eqref{eq:final-step}, we use Lemma~\ref{lemma:DCT} and~\ref{lemma:DCT-time} to conclude that
\begin{equation}
\label{eq:final-formula}
\left|\mathbb E g(Y^{0}(t)) - (\pi_{0}, g)\right| \le \left[V_{\lambda}(x) + \left(\pi_{0}, V_{\lambda}\right)\right]e^{-\Phi(\lambda)t}.
\end{equation}
Take the supremum over all functions $g : \mathbb{R}_+ \to \mathbb{R}$ which satisfy $|g(x)| \le V_{\lambda}(x)$ for all $x \in \mathbb{R}_+$, and complete the proof of Theorem~\ref{thm:main-1} for Lipschitz $p_*$.
\subsection{Proof of Lemma~\ref{lemma:fundamental}} Let us take a probability space with independent Brownian motion $W$ and L\'evy process $L$, and let $L_{\varepsilon}$ be a subordinator process with L\'evy measure $\mu_{\varepsilon}$, obtained from $L$ by eliminating all jumps of size less than $\varepsilon$ and greater than $\varepsilon^{-1}$. For consistency of notation, let $L_0 := L$. For every $\varepsilon \ge 0$, we can represent
\begin{equation}
\label{eq:reflected-formula}
\tilde{Y}_{\varepsilon}(t) = y_{\varepsilon} + \int_0^tp_{\ast}(\tilde{Y}_{\varepsilon}(s))\,\mathrm{d} s + \int_0^t\sigma_{\varepsilon}(\tilde{Y}_{\varepsilon}(s))\,\mathrm{d} W(s) + L_{\varepsilon}(t) + N_{\varepsilon}(t),\ t \ge 0.
\end{equation}
Here, $N_{\varepsilon}$ is a nondecreasing right-continuous process with left limits, with $N_{\varepsilon}(0) = 0$, which can increase only when $\tilde{Y}_{\varepsilon} = 0$. We can rewrite~\eqref{eq:reflected-formula} as
\begin{equation}
\label{eq:thru-X}
\tilde{Y}_{\varepsilon}(t) = \mathcal X_{\varepsilon}(t) + \int_0^tp_{\ast}(\tilde{Y}_{\varepsilon}(s))\,\mathrm{d} s + \int_0^t\sigma(\tilde{Y}_{\varepsilon}(s))\,\mathrm{d} W(s) + N_{\varepsilon}(t),\ t \ge 0.
\end{equation}
Here, we introduce a new piece of notation:
\begin{equation}
\label{eq:cal-X}
\mathcal X_{\varepsilon}(t) = y_{\varepsilon} + L_{\varepsilon}(t) + \varepsilon W(t),\ t \ge 0.
\end{equation}
The process $L(\cdot) - L_{\varepsilon}(\cdot)$ is nondecreasing. By Assumption~\ref{asmp:finite-exp}, as $\varepsilon \downarrow 0$, for every $T > 0$,
\begin{equation}
\label{eq:conv-of-L}
\mathbb E\sup\limits_{0 \le t \le T}\left|L(t) - L_{\varepsilon}(t)\right|^2 = \mathbb E\left(L(T) - L_{\varepsilon}(T)\right)^2 = T\left(\int_0^{\varepsilon} + \int_{\varepsilon^{-1}}^{\infty}\right)x^2\,\mu(\mathrm{d} x) + T^2\left[\left(\int_0^{\varepsilon} + \int_{\varepsilon^{-1}}^{\infty}\right)x\,\mu(\mathrm{d} x)\right]^2 \to 0.
\end{equation}
From~\eqref{eq:cal-X} and~\eqref{eq:conv-of-L}, we have:
\begin{equation}
\label{eq:eps-0}
\mathbb E\sup\limits_{0 \le t \le T}\left|\mathcal X_0(t) - \mathcal X_{\varepsilon}(t)\right|^2 \to 0,\ \varepsilon \downarrow 0.
\end{equation}
Fix time horizon $T > 0$, and consider the space $\mathcal E_T$ of all right-continuous adapted processes $Z = (Z(t),\, 0 \le t \le T)$ with left limits such that
$$
\norm{Z}_{2, T}^2 := \mathbb E\sup\limits_{0 \le t \le T}Z^2(t) < \infty.
$$
This is a Banach space with norm $\norm{\cdot}_{2, T}$. Fix an $\mathcal X \in \mathcal E_T$. Let us introduce two mappings
$\mathcal P_{\mathcal X},\, \mathcal S : \mathcal E_T \to \mathcal E_T$: The mapping $\mathcal P_{\mathcal X}$ is given by
$$
\mathcal P_{\mathcal X}(Z)(t) = \mathcal X(t) + \int_0^tp_{\ast}(Z(s))\,\mathrm{d} s + \int_0^t\sigma(Z(s))\,\mathrm{d} W(s),\ 0 \le t \le T.
$$
Whereas $\mathcal S$ is the classic Skorohod mapping:
$$
\mathcal S(Z)(t) = Z(t) + \sup\limits_{0 \le s \le t}(Z(s))_-,\ 0 \le t \le T,
$$
where $(a)_{-}:=\max(-a, 0)$ for any $a\in\mathbb{R}$. For any $\mathcal X \in \mathcal E_T$, let $\mathcal R_{\mathcal X} := \mathcal S\circ\mathcal P_{\mathcal X}$. Then we can represent~\eqref{eq:thru-X} as
\begin{equation}
\label{eq:fixed-point-equation}
\tilde{Y}_{\varepsilon} = (\mathcal S \circ \mathcal P_{\mathcal X_{\varepsilon}})(\tilde{Y}_{\varepsilon}) = \mathcal R_{\mathcal X_{\varepsilon}}(\tilde{Y}_{\varepsilon}).
\end{equation}
It is straightforward to show, using the Lipschitz properties of $p_{\ast}$ and $\sigma$, that these mappings indeed map $\mathcal E_T$ into $\mathcal E_T$. Moreover, a classic result is that $\mathcal S$ is $1$-Lipschitz; see, for example, \cite{Whitt}. Let $C(p_{\ast})$ and $C(\sigma)$ denote Lipschitz constants for the functions $p_{\ast}$ and $\sigma$, respectively.
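To illustrate the Skorohod mapping, take the deterministic path $Z(t) = z_0 - ct$ with $z_0 \ge 0$ and $c > 0$, and write $(a)_+ := \max(a,0)$. Then $(Z(s))_- = (cs - z_0)_+$, so $\sup_{0 \le s \le t}(Z(s))_- = (ct - z_0)_+$ and
$$
\mathcal S(Z)(t) = z_0 - ct + (ct - z_0)_+ = \max(z_0 - ct,\, 0):
$$
the path is kept at zero once it would have crossed below, which is exactly the reflection at the origin encoded in~\eqref{eq:thru-X}.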
\begin{lemma} For $\mathcal X, \mathcal X', \mathcal Z, \mathcal Z' \in \mathcal E_T$, the following Lipschitz property holds with constant
\begin{equation}
\label{eq:C-T}
C_T := C(p_\ast)T + 2C(\sigma)T^{1/2}.
\end{equation}
\begin{equation}
\label{eq:Lipschitz-general}
\norm{\mathcal R_{\mathcal X}(\mathcal Z) - \mathcal R_{\mathcal X'}(\mathcal Z')}_{2, T} \le C_T\norm{\mathcal Z - \mathcal Z'}_{2, T} + \norm{\mathcal X - \mathcal X'}_{2, T}.
\end{equation}
\label{lemma:Lipschitz-general}
\end{lemma}
\begin{proof} Since $\mathcal S$ is $1$-Lipschitz, it suffices to show~\eqref{eq:Lipschitz-general} for $\mathcal P_{\mathcal X}$ instead of $\mathcal R_{\mathcal X}$. We can express the difference between $\mathcal P_{\mathcal X}(\mathcal Z)$ and $\mathcal P_{\mathcal X'}(\mathcal Z')$ as follows: for $t \in [0, T]$,
\begin{align}
\label{eq:big-difference}
\begin{split}
\mathcal P_{\mathcal X}&(\mathcal Z)(t) - \mathcal P_{\mathcal X'}(\mathcal Z')(t) = \mathcal X(t) - \mathcal X'(t) \\ & + \int_0^t\left[p_\ast(\mathcal Z(s)) - p_\ast(\mathcal Z'(s))\right]\,\mathrm{d} s + \int_0^t\left[\sigma(\mathcal Z(s)) - \sigma(\mathcal Z'(s))\right]\,\mathrm{d} W(s).
\end{split}
\end{align}
Denoting by $I$ and $M$ the second and third terms in the right-hand side of~\eqref{eq:big-difference}, we have:
\begin{align}
\label{eq:norm-diff}
\norm{\mathcal P_{\mathcal X}(\mathcal Z)(t) - \mathcal P_{\mathcal X'}(\mathcal Z')(t)}_{2, T} \le \norm{\mathcal X - \mathcal X'}_{2, T} + \norm{I}_{2, T} + \norm{M}_{2, T}.
\end{align}
The norm $\norm{I}_{2, T}$ is estimated in a straightforward way using the Lipschitz property of $p_\ast$:
\begin{align}
\label{eq:term-I}
\begin{split}
\norm{I}_{2, T}^2 &= \mathbb E\sup\limits_{0 \le t \le T}I^2(t)\le \mathbb E\sup\limits_{0 \le t \le T}\left(\int_0^tC(p_\ast)\left|\mathcal Z(s) - \mathcal Z'(s)\right|\,\mathrm{d} s\right)^2\\& \le T^2C^2(p_\ast)\cdot \mathbb E\sup\limits_{0 \le s \le T}\left[\mathcal Z(s) - \mathcal Z'(s)\right]^2 = T^2C^2(p_\ast)\norm{\mathcal Z - \mathcal Z'}^2_{2, T}.
\end{split}
\end{align}
Finally, the norm $\norm{M}_{2, T}$ can be estimated using Doob's $L^2$ maximal inequality and the It\^o isometry:
\begin{align}
\label{eq:term-M}
\begin{split}
\norm{M}_{2, T}^2 & = \mathbb E\sup\limits_{0 \le t \le T}M^2(t)\\
& \le 4\mathbb E M^2(T)\\
& = 4\,\mathbb E\int_0^T\left[\sigma(\mathcal Z(s)) - \sigma(\mathcal Z'(s))\right]^2\,\mathrm{d} s \\
& \le
4C^2(\sigma)T\cdot\mathbb E\sup\limits_{0 \le t \le T}{(\mathcal Z(t) - \mathcal Z'(t))}^2\\
& = 4C^2(\sigma)T\norm{\mathcal Z - \mathcal Z'}^2_{2, T}.
\end{split}
\end{align}
Combining~\eqref{eq:norm-diff},~\eqref{eq:term-I},~\eqref{eq:term-M}, we complete the proof of~\eqref{eq:Lipschitz-general}.
\end{proof}
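To get a sense of the constant in~\eqref{eq:C-T}: with the purely illustrative choice $C(p_\ast) = C(\sigma) = 1$, we have $C_T = T + 2T^{1/2} < 1$ exactly when $T^{1/2} < \sqrt{2} - 1$, that is, $T < 3 - 2\sqrt{2} \approx 0.17$.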
For small enough $T$, the constant $C_T$ from~\eqref{eq:C-T} is strictly less than $1$. Assume this is the case until the end of the proof. Then for every $\mathcal X \in \mathcal E_T$, the mapping $\mathcal R_{\mathcal X}$ is contractive. Therefore, it has a unique fixed point, which can be obtained by successive approximations:
$$
\mathcal Y(\mathcal X) = \lim\limits_{n \to \infty}\mathcal R_{\mathcal X}^n(\mathcal Z).
$$
In particular, the equation~\eqref{eq:fixed-point-equation} has a unique solution, which is obtained by successive approximations:
$$
\tilde{Y}_{\varepsilon} = \lim\limits_{n \to \infty}\mathcal R^n_{\mathcal X_{\varepsilon}}(\mathcal Z).
$$
We can take $\mathcal Z = 0$ as the initial point of the iteration, or any other element of $\mathcal E_T$. Applying Lemma~\ref{lemma:Lipschitz-general} twice, we get:
$$
\norm{\mathcal R^2_{\mathcal X}(\mathcal Z) - \mathcal R^2_{\mathcal X'}(\mathcal Z')}_{2, T} \le C_T^2\norm{\mathcal Z - \mathcal Z'}_{2, T} + (1+ C_T)\norm{\mathcal X - \mathcal X'}_{2, T}.
$$
By induction over $n = 1, 2, \ldots$ we get:
\begin{align}
\label{eq:iteration-Lipschitz}
\begin{split}
\norm{\mathcal R^n_{\mathcal X}(\mathcal Z) - \mathcal R^n_{\mathcal X'}(\mathcal Z')}_{2, T} \le C_T^n\norm{\mathcal Z - \mathcal Z'}_{2, T} + \left(1 + C_T + \ldots + C_T^{n-1}\right)\norm{\mathcal X - \mathcal X'}_{2, T}.
\end{split}
\end{align}
Let $n \to \infty$ in~\eqref{eq:iteration-Lipschitz}. If $C_T < 1$, then
\begin{equation}
\label{eq:Lipschitz-solution}
\norm{\mathcal Y(\mathcal X) - \mathcal Y(\mathcal X')}_{2, T} \le \frac1{1 - C_T}\norm{\mathcal X - \mathcal X'}_{2, T}.
\end{equation}
Letting $\mathcal X = \mathcal X_0$ and $\mathcal X' = \mathcal X_{\varepsilon}$ in~\eqref{eq:Lipschitz-solution}, and using~\eqref{eq:eps-0}, we complete the proof of Lemma~\ref{lemma:fundamental}.
\section{Concluding remarks}
We showed that the convergence of ruin probabilities in a rather broad class of risk processes is exponentially fast. The rate is easy to compute (at least in the examples considered in Section \ref{sec:ComputationConvergenceRate}), and it turns out to be sharp when the premium rate and its variability do not depend on the current wealth of the insurance company. A natural question concerns the practical implications of having access to the value of the rate of exponential convergence; in particular, whether this leads to a numerical approximation of the finite-time ruin probability. This issue was discussed by Asmussen \cite{As84}, and the answer was negative. Another direction is to relax the condition on the tail of the claim size distribution. It is of practical interest to let the claim size distribution be heavy-tailed. An extension of the early work of Asmussen and Teugels~\cite{AsTe96} could be envisaged. For example, in the work of Tang \cite{Ta05}, a compound Poisson risk model under constant interest force with subexponentially distributed claim sizes is considered. Comparing the asymptotics provided by Tang \cite[(2.5), (3.2)]{Ta05} suggests that exponential convergence holds for large initial reserves. Yet another direction for future research might be to relax the Lipschitz property of the drift.
\section*{Acknowledgements} Pierre-Olivier Goffard was partially funded by a Center of Actuarial Excellence Education Grant given to the University of California, Santa Barbara, from the Society of Actuaries. Andrey Sarantsev was supported in part by the NSF grant DMS 1409434 (with Jean-Pierre Fouque as a Principal Investigator) during this work.
| {
"timestamp": "2018-07-02T02:04:59",
"yymm": "1710",
"arxiv_id": "1710.01845",
"language": "en",
"url": "https://arxiv.org/abs/1710.01845",
"abstract": "We explicitly find the rate of exponential long-term convergence for the ruin probability in a level-dependent Lévy-driven risk model, as time goes to infinity. Siegmund duality allows to reduce the pro blem to long-term convergence of a reflected jump-diffusion to its stationary distribution, which is handled via Lyapunov functions.",
"subjects": "Probability (math.PR)",
"title": "Exponential convergence rate of ruin probabilities for level-dependent Lévy-driven risk processes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9873750488609222,
"lm_q2_score": 0.7185944046238981,
"lm_q1q2_score": 0.7095221853767067
} |
https://arxiv.org/abs/0802.2329 | Hilbert functions of multigraded algebras, mixed multiplicities of ideals and their applications | This paper is a survey on major results on Hilbert functions of multigraded algebras and mixed multiplicities of ideals, including their applications to the computation of Milnor numbers of complex analytic hypersurfaces with isolated singularity, multiplicities of blowup algebras and mixed volumes of polytopes. | \section{Introduction}
Let $R=\bigoplus_{t=0}^\infty R_t$ be a
Noetherian graded algebra over a field $k$. Then $R_t$ is a finite dimensional
$k$-vector space. In his historic paper \cite{hi1}, Hilbert considered
the generating function
$$H(R,z):=\sum_{t=0}^\infty H_R(t)z^t$$
of the sequence $H_R(t):=\dim_kR_t.$ By using his {\em Syzygy Theorem}, he proved that
if $R=k[f_1,f_2,\ldots,f_s]$ where $f_i \in R_{d_i}$ for $i=1,2,\ldots,s,$
then there exists a polynomial $h(z) \in \ZZ[z]$ such that
$$ H(R,z)=\frac{h(z)}{(1-z^{d_1})(1-z^{d_2})\cdots(1-z^{d_s})}.$$
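For instance, if $f_1,\ldots,f_s$ are algebraically independent, so that $R$ is a polynomial ring, then $H(R,z)=\prod_{i=1}^s(1-z^{d_i})^{-1}$ and $h(z)=1$.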
We say that $R$ is standard if $R$ is generated over $R_0$ by elements of degree 1.
In this case
the Hilbert function $H_R(t)$ is given by a polynomial $P_R(t)$ for all $t$
large enough. Lasker \cite{la} showed that the Krull dimension of
$R,$ denoted by $\dim R,$ is $\deg P_R(t) +1.$ In the same paper, Lasker indicated that these results
could be generalized to Hilbert functions of $\NN^r$-graded algebras.
The ideas of Lasker and Noether were presented by Van der Waerden in
a detailed exposition \cite{w}.
Let $X$ and $Y$ be two sets of $m+1$ and $n+1$ indeterminates, respectively.
A polynomial in $k[X,Y]$ is bihomogeneous if it is homogeneous in $X$ and $Y$ separately.
An ideal $I$ is called bihomogeneous if it is generated by bihomogeneous polynomials.
Let $V \subset \PP^m \times \PP^n$ be the zero set of a collection of bihomogeneous polynomials.
Then the ideal $I(V)$ of polynomials in $k[X,Y]$ which vanish on $V$ is
bihomogeneous. Therefore, the coordinate ring $k[X,Y]/I(V)$ is a bigraded algebra of the form
$$R = \bigoplus_{(u,v)\in \NN^2}R_{(u,v)},$$
where $R_{(u,v)}$ is a finite dimensional $k$-vector space.
Van der Waerden showed that $H_R(u,v) := \dim_k R_{(u,v)}$ is given by a polynomial $P_R(u,v)$
with rational coefficients for all large values of $u,v.$ The degree of $P_R(u,v)$
is at most $\dim R-2.$ Let $r=\deg P_R(u,v).$ Write $P_R(u,v)$ in the form
$$
P_R(u,v)=\sum_{i+j \leq r} e_{ij}(R) \binom{u}{i}\binom{v}{j}.
$$
We call $P_R(u,v)$ the {\it Hilbert polynomial} and the numbers $e_{ij}(R)$ with $i+j = r$ the {\it mixed multiplicities} of $R$. The mixed multiplicities have geometrical significance.\medskip
\noindent {\bf Theorem} (Van der Waerden, 1928) {\em Let $P$ be a bihomogeneous prime ideal
of $k[X,Y]$ and $R = k[X,Y]/P$. Then $e_{ij}(R)$ is the number of points of intersection of the variety
$$V(P) = \{{\alpha} \in \PP^m\times \PP^n|\ f({\alpha}) = 0\ \forall\ f \in P\}$$
with a linear space defined by $i$ general linear equations in $X$ and $j$ general linear equations
in $Y.$ }
\medskip
The Hilbert polynomial $P_R(u,v)$ and the mixed multiplicities $e_{ij}(R)$ can be defined for any Noetherian bigraded algebra $R$ over an Artinian local ring which is standard in the sense that it is generated by elements of degree $(1,0)$ and $(0,1)$. These objects were not so well studied until recently. The total degree of $P_R(u,v)$ was characterized independently by Schiffel \cite{sc} and Katz-Mandal-Verma \cite{vkm}. They showed that $\deg P_R(u,v)+2$ is the maximal dimension of the relevant prime ideals of $R.$
Katz-Mandal-Verma also showed that the mixed multiplicities $e_{ij}(R)$ can be any sequence of non-negative integers with at least a positive entry. Recently, Trung was able to characterize the degrees in $u$ and $v$ of $P_R(u,v)$ and the positive mixed multiplicities in \cite{Tr1}. In particular, he showed that the range of the positive mixed multiplicities is rigid if $R$ is a domain or a Cohen-Macaulay ring, thereby solving an open question of Katz-Mandal-Verma.
An important case of mixed multiplicities of a bigraded algebra is the mixed multiplicities of two ideals.
Let $(A,{\mathfrak m})$ be a local ring. For any pair of ${\mathfrak m}$-primary ideals $I$ and $J,$ one can consider the length function $\ell(A/I^uJ^v)$ which is the sum transform of the Hilbert function of the standard bigraded algebra
$$R(I|J) := \bigoplus_{u,v \ge 0} I^uJ^v/I^{u+1}J^v$$
over the quotient ring $A/I$. Bhattacharya \cite{bh} showed
that this function is given by a polynomial $P(u,v)$ of degree $d =\dim A$ and that it can be written as
$$P(u,v)=\sum_{i+j\le d}a_{ij}(I|J)\binom{u+i}{i}\binom{v+j}{j}$$
for certain integers $a_{ij}(I|J).$
We set $e_j(I|J):= a_{ij}(I|J)$ for $i+j= d$.
These integers were named later as mixed multiplicities by Teissier in \cite{t1} where
he found significant applications of $e_j(I|J)$ in the study of singularities of complex analytic hypersurfaces.
In particular, the Milnor numbers
of linear sections of a complex analytic hypersurface at an isolated singularity are exactly the mixed multiplicities of the maximal ideal and the Jacobian ideal of the hypersurface.
Teissier found several interesting properties of mixed multiplicities of ideals
which have inspired subsequent works substantially.
His characterization of mixed multiplicities as Samuel's multiplicities of general elements led Rees \cite{r4} to the introduction of joint reductions of ideals which generalize the important concept of reduction of an ideal in multiplicity theory.
Another instance is the inequalities
$$e_j(I|J)^d \le e(I)^{d-j}e(J)^j$$
for $j = 0,\ldots,d$, which imply the Minkowski inequality
$$e(IJ)^{1/d} \le e(I)^{1/d}+e(J)^{1/d}.$$
Teissier raised it as a conjecture and showed it for reduced Cohen-Macaulay complex analytic algebras \cite{t3}.
This conjecture was settled in the affirmative by Rees and Sharp in \cite{resh}.
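To see how the Minkowski inequality follows from these bounds, recall (see the expansion of $e(I^uJ^v)$ in terms of mixed multiplicities below) that $e(IJ)=\sum_{j=0}^d\binom{d}{j}e_j(I|J)$; hence
$$
e(IJ) \le \sum_{j=0}^d\binom{d}{j}\,e(I)^{\frac{d-j}{d}}e(J)^{\frac{j}{d}} = \bigl(e(I)^{1/d}+e(J)^{1/d}\bigr)^d,
$$
and taking $d$-th roots gives the inequality.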
Mixed multiplicities are also defined if $I$ is an ${\mathfrak m}$-primary ideal and $J$ an arbitrary ideal, by using the standard bigraded algebra $R(I|J)$. Katz-Verma \cite{kv} and Verma \cite{v2} \cite{v7} first studied the mixed multiplicities in these cases. They showed that these mixed multiplicities can be used to compute the multiplicity of the Rees algebra and the extended Rees algebra. D'Cruz \cite{d3} obtained a multiplicity formula for multigraded extended Rees algebras. Herzog-Trung-Ulrich \cite{htu} devised an effective method to compute the multiplicity of Rees algebras which is similar to that of Gr\"obner bases. This method has been exploited by
Hoang \cite{Ho} \cite{Ho2} and Raghavan-Verma \cite{rv} to compute mixed multiplicities of ideals generated by $d$-sequences, quadratic sequences and filter-regular sequences of homogeneous elements of non-decreasing degrees.
A systematic study of mixed multiplicities of two ideals in the general case was carried out by Trung in \cite{Tr1}. He characterized the positive mixed multiplicities and showed how to compute them by means of general elements. As a consequence, the range of the positive mixed multiplicities is rigid and depends only on the ideal $J$.
Mixed multiplicities are also defined for an ${\mathfrak m}$-primary ideal and a sequence of ideals of $A$.
To handle the complexity of this case Trung and Verma \cite{TV} used a multigraded version of the associated graded ring in order to introduce
superficial sequence for a collection of ideals. Using this notion they obtained similar results as in the case of two ideals.
These results can be applied to describe mixed volumes of lattice polytopes as mixed multiplicities, thereby giving a purely algebraic proof of Bernstein's theorem which asserts that the number of common zeros of a system of $n$ Laurent polynomials in $n$ indeterminates with
finitely many zeroes in the torus $(\CC^*)^n$ is bounded above by the mixed volume of their Newton polytopes.
Another interesting instance of mixed multiplicities is the multiplicity sequence of an ideal introduced by Achilles and Manaresi in \cite{AM2}.
Let $I$ be an arbitrary ideal in a local ring $(A,{\mathfrak m})$. The associated graded ring
$$R = \bigoplus_{(u,v) \in \NN^2}\big({\mathfrak m}^uI^v + I^{v+1}\big)/\big({\mathfrak m}^{u+1}I^v +I^{v+1}\big)$$
is a standard bigraded algebra over the residue field $A/{\mathfrak m}$.
The sum transform
$$H_R^{(1,1)}(u,v) := \sum_{i = 0}^u\sum_{j = 0}^vH_R(i,j)$$
of the Hilbert function of $R$ is given by a polynomial $P_R^{(1,1)}(u,v)$ of degree $d$ for $u,v$ large enough.
If we write this polynomial in the form
$$P_R^{(1,1)}(u,v) =\sum_{i= 0}^d \frac{c_{i\,d-i}(R)}{i!(d-i)!}u^iv^{d-i} +
\text{\rm lower-degree terms},$$
then $c_i(I) := c_{i\; d-i}(R) $ are non-negative integers for $i = 0,...,d$.
Achilles and Manaresi call $c_0(I),...,c_d(I)$ the {\it multiplicity sequence} of $I$.
The multiplicity sequence can be considered as a generalization of the multiplicity of an ${\mathfrak m}$-primary ideal.
In fact, if $I$ is an ${\mathfrak m}$-primary ideal, then $c_0(I) = e(I)$ and $c_i(I) = 0$ for $i > 0$.
In particular, $c_0(I) > 0$ if and only if the analytic spread of $I$, $s(I),$ equals $d.$ In this case, $ c_0(I)$ is called the $j$-{\it multiplicity} of $I$ \cite{AM1}. Flenner-Manaresi \cite{FM} used $j$-multiplicity to give a numerical criterion for reduction of ideals.
The multiplicity sequence can be computed by means of the intersection algorithm which was introduced by St\"uckrad-Vogel \cite{SV} in order to prove a refined version of Bezout's theorem.
In general, the Hilbert function $H_R(u,v)$ of a finitely generated bigraded algebra $R$ over a field $k$ is not a polynomial for large $u,v$. However, if $R$ is generated by elements of
bidegrees $(1,0),(d_1,1),\ldots,(d_r,1)$, where $d_1,\ldots,d_r$ are non-negative integers, then
there exist integers $c$ and $v_0$ such that $H_R(u,v)$
is equal to a polynomial $P_R(u,v)$ for $u \ge cv$ and $v \ge v_0$.
This case was considered first by P.~Roberts in \cite{ro} and then by Hoang-Trung in \cite{HT}.
Hilbert polynomials of bigraded algebras of the above type appear in Gabber's proof of Serre's non-negativity conjecture for intersection multiplicities \cite{Ro2}, and the positivity of a certain coefficient of such a Hilbert polynomial is strongly related to Serre's positivity conjecture on intersection multiplicities \cite{Ro3}.
An instance of such non-standard bigraded algebra is the Rees algebra of a homogeneous ideal $I$ in a standard graded algebra $A$. The existence of the Hilbert polynomial in this case allows us to study the behavior of the Hilbert polynomials of the quotient rings $A/I^v$ for $v$ large enough \cite{HPV}. We can also use the mixed multiplicities of the Rees algebra of $I$ to compute the degree of the embedded varieties of the blow-ups of $\Proj A$ along $I$.
The above development of the theory of Hilbert functions of multigraded algebras and of mixed multiplicities of ideals will be discussed in more detail in subsequent sections.
Illustrating examples and open problems for further study will be given throughout the paper.
The results discussed in this paper merely reflect the authors' interests and do not cover all the developments, due to lack of space and time and also due to their ignorance.
\noindent
{\bf Acknowledgements:} The authors thank Bernard Teissier
for several useful comments.
\section{Hilbert functions of multigraded algebras}
Let $R = \bigoplus_{(u,v) \in \NN^2} R_{(u,v)}$ be a Noetherian bigraded algebra over an Artinian local ring $R_{(0,0)}=k.$
We define the Hilbert function of $R$ by $H_R(u,v) := \ell(R_{(u,v)})$,
where $\ell$ denotes the length.
If $R$ is standard graded, $H_R(u,v)$ is given by a polynomial $P_R(u,v)$ for all $u, v$ large enough.
In order to determine the total degree of $P_R(u,v)$ we need the following notions.
A bihomogeneous ideal $I$ of $R$
is called {\em irrelevant} if $I_{(u,v)}=R_{(u,v)}$ for all $u,v$ large enough. We say that $I$ is
{\em relevant} if it is not irrelevant. Let $\Proj R$ denote the set of all bihomogeneous relevant prime ideals of $R$.
The {\it relevant dimension} $\rdim R$ of
$R$ is defined by
$$ \rdim R=\max \{\dim R/P|\ P \in \Proj R \}.$$
The total degree of $P_R(u,v)$ was found independently by Schiffel \cite{sc} and Katz-Mandal-Verma \cite{vkm}.
\begin{Theorem}
$\deg P_R(u,v) =\rdim R-2.$
\end{Theorem}
Let $r = \rdim R-2$. As in the case where $k$ is a field, if we write $P_R(u,v)$ in the form
$$
P_R(u,v)=\sum_{i+j \leq r} e_{ij}(R) \binom{u+i}{i}\binom{v+j}{j},
$$
then the numbers $e_{ij}(R)$ are non-negative integers for all $i,j$ with $i+j = r$.
These numbers are called the {\it mixed multiplicities} of $R$.
\begin{Example}
{\rm Let $R = k[x_1,...,x_m,y_1,...,y_n]$ with $\deg x_i = (1,0)$ and $\deg y_j = (0,1)$.
Then
$$H_R(u,v)= P_R(u,v) = \binom{u+m-1}{m-1}\binom{v+n-1}{n-1}$$
for all $(u,v) \in \NN^2$. Therefore, $\deg P_R(u,v) = m+n-2$ and}
$$e_{ij}(R) = \left\{\begin{array}{ll} 1 &\ \text{ if }\ i = m-1, j = n-1,\\
0 & \text{\;\;\;otherwise}.\end{array} \right.$$
\end{Example}
The computation of mixed multiplicities can be reduced to the case of a bigraded domain by using the following associativity formula
(see e.g. \cite{HT}).
\begin{Proposition} \label{associative}
Let ${\mathcal A}(R)$ be the set of the prime ideals $P \in
\Proj R$ with $\dim R/P = \rdim R$. Then
$$e_{ij}(R)= \sum_{P \in {\mathcal A}(R)}\ell(R_P)e_{ij}(R/P).$$
\end{Proposition}
Katz, Mandal and Verma \cite{vkm} showed that the mixed multiplicities can be any sequence of non-negative integers with at least a positive entry.
\begin{Example}
{\rm Let $a_0,...,a_n$ be an arbitrary sequence of non-negative integers with at least a positive entry.
Let $S = k[x_0,...,x_n,y_0,y_1,...,y_n]$ be a bigraded polynomial ring with $\deg x_i = (1,0)$ and $\deg y_j = (0,1)$. Let $Q_t = (x_0^{a_t},x_1,...,x_{n-t-1},y_0,y_1,...,y_{t-1})$, $t = 0,...,n$.
Set $R = S/(Q_0 \cap Q_1 \cap \cdots \cap Q_n)$. Then $\rdim R = n+2$ and
${\mathcal A}(R) = \{P_0,...,P_n\}$, where
$P_t = (x_0,x_1,\ldots,x_{n-t-1},y_0,y_1,\ldots,y_{t-1})$. We have $\ell(R_{P_t}) = a_t$ and
$$e_{in-i}(R/P_t) = \left\{\begin{array}{ll} 1 &\ \text{ if }\ i = t,\\
0 & \text{otherwise}.\end{array} \right.$$
By the associativity formula we get
$e_{i\,n-i}(R) = a_i$ for $i = 0,\ldots,n$.}
\end{Example}
A standard way to make a standard bigraded algebra $R$ into an
$\NN$-graded algebra is by defining $R_t =\bigoplus_{u+v=t} R_{(u,v)}.$
This algebra is obviously standard graded.
For any Noetherian standard $\NN$-graded algebra $R$ over an Artinian local ring, we have
$\rdim R = \dim R$. Let $d = \dim R$. If we write
$P_R(t)$ in the form
$$P_R(t)=\sum_{i=0}^{d-1} a_i(R) \binom{t+d-1-i}{d-1-i},$$
then $e(R) := a_0(R)$ is called the {\it multiplicity} of $R$.
The relationship between multiplicity and mixed multiplicities was found independently in the unpublished
thesis of Dade \cite{da} and in \cite{vkm}.
\begin{Theorem}[Dade, 1960; Katz-Mandal-Verma, 1994] \label{total}
Let $R$ be a Noetherian bigraded algebra over an Artinian local ring.
Assume that the ideals $(R_{(1,0)})$ and $(R_{(0,1)})$ have positive height. Then
$$e(R)=\sum_{i+j=\dim R-2} e_{ij}(R).$$
\end{Theorem}
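For instance, for the bigraded polynomial ring $R = k[x_1,\ldots,x_m,y_1,\ldots,y_n]$ of the example above, the diagonal grading turns $R$ into a standard graded polynomial ring in $m+n$ variables, so $e(R) = 1$, in agreement with $\sum_{i+j=m+n-2}e_{ij}(R) = e_{m-1\,n-1}(R) = 1$.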
The above results for bigraded algebras have been extended to multigraded modules by Herrmann-Hyry-Ribbe-Tang in the paper \cite{hhrt}. To summarize their results we fix some notations.
Let $s$ be any non-negative integer. Let $R = \oplus_{u \in \NN^s}R_u$
be a Noetherian standard $\NN^s$-graded algebra
over an Artinian local ring $k$, where \lq standard\rq\ means
$R$ is generated by homogeneous elements of degrees $(0,..,1,..,0)$, where $1$
occurs only as the $i$th component,
$i = 1,...,s.$
For ${\alpha} = ({\alpha}_1,...,{\alpha}_s)$ and ${\beta} = ({\beta}_1,...,{\beta}_s)$ we write ${\alpha} > {\beta}$
if ${\alpha}_i > {\beta}_i$ for all $i=1,\ldots, s.$
Let
$$R_+ = \bigoplus_{ {\alpha} > {\bf 0}} R_{\alpha}.$$
We define $\Proj R$ to be the set of all $\NN^s$-graded prime ideals which do not contain
$R_+$. It is easy to see that $P \in \Proj R$ if and only if $P_u \neq R_u$ for all $u \in \NN^s$.
Let $M = \bigoplus_{u \in \ZZ^s}M_u$ be a finitely generated $\ZZ^s$-graded module over $R$.
Then $M_u$ is a $k$-module of finite length. We call $H_M(u) := \ell(M_u)$ the
{\it Hilbert function} of $M$. Moreover, we define the {\it relevant dimension} of $M$ to be the number
$$\rdim M := \max\{\dim R/P|\ P \in \Proj R\ \text{and}\ M_P \neq 0\}.$$
\begin{Theorem}[Herrmann-Hyry-Ribbe-Tang, 1997]
For $u \gg 0$, $H_M(u)$ is given by a polynomial $P_M(u)$ with rational coefficients
having total degree $\rdim M-s$.
\end{Theorem}
Let $r = \rdim M-s$. If we write $P_M(u)$ in the form
$$P_M(u) = \sum_{{\alpha} \in \NN^s, |{\alpha}| = r} \frac{1}{{\alpha}!}e_{\alpha}(M)u^{\alpha} +
\text{terms of degree $< r$},$$
where ${\alpha} = ({\alpha}_1,...,{\alpha}_s)$ with
$$
|{\alpha}| := {\alpha}_1 + \cdots + {\alpha}_s, \ \
{\alpha}! := {\alpha}_1!\cdots{\alpha}_s!,\ \ \mbox{and} \ \
u^{\alpha} := u_1^{{\alpha}_1}\cdots u_s^{{\alpha}_s},
$$
then $e_{\alpha}(M)$ are non-negative integers if $|{\alpha}| = r$. We call these coefficients the {\it mixed multiplicities} of
$M$. \par
Now we consider the difference of the Hilbert function and the Hilbert polynomial.
Let $R$ be a Noetherian standard $\NN$-graded algebra over an Artinian local ring.
Let $M=\bigoplus_{t\in \ZZ} M_t$ be a finite $\ZZ$-graded module over $R$.
Let $H^i_{R_+}(M)$ denote the $i$th local cohomology module of $M$ with respect to $R_+$.
Then for all $t \in \ZZ$,
$$H_M(t)-P_M(t)=\sum_{i\ge 0}(-1)^i\ell(H^i_{R_+}(M)_t),$$
which is known as the Grothendieck-Serre formula.
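For instance, let $R = M = k[x]$ with $\deg x = 1$. Then $H_M(t) = 1$ for $t \ge 0$ and $H_M(t) = 0$ for $t < 0$, while $P_M(t) = 1$ for all $t$. On the other hand, $H^0_{R_+}(M) = 0$ and $H^1_{R_+}(M) \cong k[x,x^{-1}]/k[x]$, which is one-dimensional in each negative degree and zero in non-negative degrees, so both sides of the formula equal $0$ for $t \ge 0$ and $-1$ for $t < 0$.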
A similar formula for the bigraded case was proved by Jayanthan and Verma \cite{jv}.
\begin{Theorem}[Jayanthan-Verma, 2002]
Let $R$ be a Noetherian standard bigraded algebra over an Artinian local ring.
Let $M$ be a finite bigraded module over $R$. Then for all $u,v \in \ZZ$,
$$
H_M(u,v) - P_M(u,v) = \sum_{i\geq 0}(-1)^i\ell(H^i_{R_+}(M)_{(u,v)}).
$$
\end{Theorem}
\section{Positivity of mixed multiplicities}
Let $R$ be a Noetherian standard bigraded algebra over an Artinian local ring.
Can we say which mixed multiplicities of $R$
are positive?
To give an answer to this question let us first express the total degree of the Hilbert polynomial $P_R(u,v)$ in another way. For any pair of ideals $\mathfrak a, \mathfrak b$ of a commutative ring $S$ let
$${\mathfrak a}:{\mathfrak b}^\infty := \{x \in S|\ \text{there is a positive integer $n$ such that $x{\mathfrak b}^n \subseteq {\mathfrak a}$}\}.$$
It is easy to see that $\rdim R = \dim R/0:R_+^\infty$ and therefore
$$\deg P_R(u,v) = \dim R/0:R_+^\infty-2.$$
Similarly, one can also compute the partial degrees of the Hilbert polynomial $P_R(u,v)$ \cite{Tr2}.
\begin{Theorem}[Trung, 2001] \label{degree}
\begin{eqnarray*} \deg_uP_R(u,v) & = & \dim R/(0:R_+^\infty+(R_{(0,1)}))-1,\\
\deg_vP_R(u,v) & = & \dim R/(0:R_+^\infty+(R_{(1,0)}))-1. \end{eqnarray*}
\end{Theorem}
For simplicity we set
\begin{eqnarray*}
r & := & \dim R/0:R_+^\infty-2,\\
r_1 & := & \dim R/(0:R_+^\infty+(R_{(0,1)}))-1,\\
r_2 & := & \dim R/(0:R_+^\infty + (R_{(1,0)}))-1.
\end{eqnarray*}
\begin{Corollary}\label{zero}
$e_{ij}(R) = 0$ for $i > r_1$ or $j > r_2$.
\end{Corollary}
To characterize the positive mixed multiplicities we shall need the concept of a filter regular sequence which originates from the theory of generalized Cohen-Macaulay rings \cite{CST}. A sequence $z_1,\ldots,z_s$ of homogeneous elements in $R$ is called {\it filter-regular} if for $i = 1,\ldots,s$, we have
$$[(z_1,\ldots,z_{i-1}):z_i]_{(u,v)} = (z_1,\ldots,z_{i-1})_{(u,v)}$$
for $u$ and $v$ large enough. It is easy to see that $z_1,\ldots,z_s$ is filter-regular if and only if $z_i \not\in P$ for all associated prime ideals $P \not\supseteq R_+$ of $(z_1,\ldots,z_{i-1})$, $i = 1,\ldots,s$ (see e.g. \cite{Tr} for basic properties).
The following result gives an effective criterion for the positivity of a mixed multiplicity $e_{ij}(R)$ and shows how to compute $e_{ij}(R)$ as the multiplicity of an $\NN$-graded algebra \cite{Tr2}.
\begin{Theorem}[Trung, 2001] \label{nonzero}
Let $i, j$ be non-negative integers with $i+j = r$.
Let $x_1,\ldots,x_i$ be a filter-regular sequence of homogeneous elements of degree $(1,0)$. Then $e_{ij}(R) > 0$ if and only if
$$\dim R/((x_1,\ldots,x_i):R_+^\infty+(R_{(1,0)})) = j+1.$$
In this case, if we choose homogeneous elements $y_1,\ldots,y_j$ of degree $(0,1)$ such that $x_1,\ldots,x_i,y_1,\ldots,y_j$ is a filter-regular sequence, then
$$e_{ij}(R) = e(R/(x_1,\ldots,x_i,y_1,\ldots,y_j):R_+^\infty).$$
\end{Theorem}
If the residue field of $R_0$ is infinite, one can always find homogeneous elements $x_1,\ldots,x_i$ of degrees $(1,0)$ and $y_1,\ldots,y_j$ of degree $(0,1)$ such that $x_1,\ldots,x_i,y_1,\ldots,y_j$ is a filter-regular sequence.
For $i = 0$ the condition of Theorem \ref{nonzero} reads $r_2 + 1 = r + 1$, that is, $r_2 = r$, which yields the following criterion for the positivity of $e_{0r}$.
\begin{Corollary} \label{first}
$e_{0r}(R) > 0$ $(e_{r0}(R) > 0)$ if and only if $r_2 = r$ $(r_1 = r)$.
\end{Corollary}
In view of Corollary \ref{zero} one might ask whether $e_{i\;r-i}(R) > 0$ for $i = r_1$ and $i = r-r_2$.
Using Theorem \ref{nonzero} one can easily construct examples with $e_{i\;r-i}(R) = 0$ for $i = r_1,r-r_2$.
\begin{Example}
{\rm Let $R = k[X,Y]/(x_1,y_1) \cap (x_1,x_2,x_3) \cap (y_1,y_2,y_3)$ with $X = \{x_1,x_2,x_3,x_4\}$, $Y = \{y_1,y_2,y_3,y_4\}$ and $\deg x_i = (1,0),\ \deg y_i = (0,1),\ i = 1,2,3,4$. Then $R/(R_{(1,0)}) \cong k[Y]$ and $R/(R_{(0,1)}) \cong k[X]$. Since $0:R_+^\infty = 0$, we get
\begin{align*}
r & = \dim R - 2 = 4,\\
r_1 &= \dim R/(R_{(1,0)}) - 1= 3,\\
r_2 & = \dim R/(R_{(0,1)}) - 1 = 3.
\end{align*}
It is clear that $x_4$ is a non-zerodivisor in $R$. Since
$x_4R:R_+^\infty + (R_{(1,0)}) = (x_1,x_2,x_3,x_4,y_1)R,$
we have
$$\dim R/(x_4R:R_+^\infty + (R_{(1,0)})) = \dim k[y_2,y_3,y_4] = 3 < 3+1.$$
Hence $e_{13}(R) = 0$. By symmetry we also have $e_{31}(R) = 0$. Now we want to compute the only non-vanishing mixed multiplicity $e_{22}(R)$ of $R$. It is easy to check that $x_4,x_2,y_4,y_2$ is a filter-regular sequence in $R$.
Put $Q = (x_4,x_2,y_4,y_2)$. Then
$$R/Q:R_+^\infty = k[X,Y]/(x_1,x_2,x_4,y_1,y_2,y_4) \cong k[x_3,y_3].$$
Hence $e_{22}(R) = e(R/Q:R_+^\infty) = \ell(k).$}
\end{Example}
We say that the sequence of positive mixed multiplicities is {\it rigid} if there are integers $a, b$ such that $e_{i\;r-i}(R) > 0$ for $a \le i \le b$ and $e_{i\;r-i}(R) = 0$ otherwise. Obviously, that is the case if $e_{i\;r-i}(R) > 0$ for $r-r_2 \le i \le r_1$.
Katz, Mandal and Verma \cite{vkm} raised the question whether the sequence of positive mixed multiplicities is rigid if $R$ is a domain or Cohen-Macaulay. We shall see that this question has a positive answer by showing that $e_{i\;r-i}(R) > 0$ for $r-r_2 \le i \le r_1$ in these cases.
Recall that a commutative noetherian ring $S$ is said to be {\it connected in codimension 1} if the minimal dimension of closed subsets
$Z \subseteq \operatorname{Spec}(S)$ for which $\operatorname{Spec}(S)\setminus Z$ is disconnected is equal to $\dim S-1$.
Using a version of Grothendieck's Connectedness Theorem due to Brodmann and Rung \cite{br} one can prove the following sufficient condition for the rigidity of mixed multiplicities.
\begin{Theorem}[Trung, 2001] \label{rigid1}
Assume that all maximal chains of prime ideals in $R/0:R_+^\infty$ have the same length.
Then $e_{i\;r-i}(R)> 0$ for $i = r-r_2, r_1$. If $R/0:R_+^\infty$ is moreover connected in codimension 1, then $e_{i\;r-i}(R)> 0$ for $r-r_2 \le i \le r_1$.
\end{Theorem}
If $R$ is a domain or a Cohen-Macaulay ring with $\height R_+ \ge 1$, then $R$ is connected in codimension 1 by
Hartshorne's Connectedness Theorem. Hence the sequence of positive mixed multiplicities is rigid in these cases.
\begin{Corollary} \label{rigid2}
Let $R$ be a domain or a Cohen-Macaulay ring with $\height R_+ \ge 1$. Then $e_{i\;r-i}(R)> 0$ for $r-r_2 \le i \le r_1$.
\end{Corollary}
\section{Mixed multiplicities of ideals: the ${\mathfrak m}$-primary case}
Throughout this section $(A, {\mathfrak m})$ will denote a Noetherian local ring of positive dimension $d$ with infinite residue field.
Let $I$ be an ${\mathfrak m}$-primary ideal. Then $A/I^t$ is of finite length for all $t \ge 0$.
It is well known that the function $\ell (A/I^t)$ is given by a polynomial $P_I(t)$ for large $t,$ called the {\it Hilbert-Samuel polynomial} of $I.$ The degree of $P_I(t)$ is $d$, and $P_I(t)$ can be written in terms of binomial coefficients as
$$P_I(t)=e_0(I)\binom{t+d-1}{d}-e_1(I)\binom{t+d-2}{d-1}+\cdots+(-1)^de_d(I),$$
where the coefficients $e_0(I), e_1(I),\ldots, e_d(I)$ are integers.
The coefficient $e_0(I)$ is a positive integer called the ({\it Samuel's}) {\it multiplicity} of $I$ and it will be denoted by
$e(I).$
Let $J$ be an ${\mathfrak m}$-primary ideal (not necessarily different from $I$). Bhattacharya \cite{bh} showed a similar property for the bivariate function
$$B(u, v) := \ell(A/I^uJ^v).$$
\begin{Theorem}[Bhattacharya, 1955]
There exists a polynomial $P(u,v)$ of total degree $d$ in $u$ and $v$ with rational coefficients so that $B(u,v) = P (u,v)$ for all large $u,v.$
The terms of total degree $d$ in $P(u,v)$ have the form
$$\frac{1} {d!}\left\{e_0(I|J)u^d + \cdots +\binom{d}{i}e_i(I|J)u^{d-i}v^i + \cdots + e_d(I|J)v^d\right\}$$
where $e_0(I|J),...,e_d(I|J)$ are certain positive
integers.
\end{Theorem}
The numbers $e_0(I|J ), \ldots , e_i(I|J), \ldots, e_d(I|J)$
were termed as the {\it mixed multiplicities} of $I$ and $J$ by Teissier \cite{t1}.
We have the following relationship between mixed multiplicities and multiplicity.
\begin{Proposition} [Rees, 1961]
$e_0(I|J) = e(I)$ and $e_d(I|J) = e(J).$
\end{Proposition}
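To illustrate these notions, take $A = k[[x,y]]$, $I = (x,y)$ and $J = (x^a,y^b)$ with $a,b \ge 1$. Then $e_0(I|J) = e(I) = 1$ and $e_2(I|J) = e(J) = ab$, while a computation with a general element $\alpha x^a+\beta y^b$ of $J$ and a general linear form (using the description of mixed multiplicities via general elements recalled below) gives $e_1(I|J) = \min(a,b)$; in particular $e_1(I|J)^2 \le e_0(I|J)\,e_2(I|J)$, in line with the inequalities of Teissier mentioned in the introduction.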
For all positive integers $u,v$ we have $P_{I^uJ^v}(t) = P(ut,vt)$. Therefore,
$$
e(I^uJ^v) = e_0(I|J)u^d +\cdots +\binom{d}{i}e_i(I|J)u^{d-i}v^i +\cdots + e_d(I|J)v^d.
$$
This is perhaps the reason why Teissier defined
$e_0(I|J), \ldots, e_d(I|J)$ to be the mixed multiplicities of the ideals $I$ and $J.$
We shall see later that mixed multiplicities can always be expressed as Samuel's multiplicities.
There are numerous ways of computing the multiplicity of an ${\mathfrak m}$-primary ideal.
We refer the reader to \cite[Chapter 11]{sh}. In particular, if $A$ is a Cohen-Macaulay ring and $I$ is a parameter ideal, then $e(I) = \ell(A/I)$.
An effective way for the computation of multiplicity was discovered by Northcott and Rees \cite{nr} by using the following notion: An ideal $J \subseteq I$ is called a {\it reduction} of $I$ if there exists an integer $n$ such that $JI^n = I^{n+1}.$ They showed that any minimal reduction of an ${\mathfrak m}$-primary ideal $I$ is a parameter ideal if $A/{\mathfrak m}$ is infinite. It is easy to check that if $J$ is a reduction of $I$ then $e(I)=e(J).$
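For instance, in $A = k[[x,y]]$ the ideal $J = (x^2,y^2)$ is a reduction of $I = (x^2,xy,y^2)$, since $JI = I^2$; accordingly $e(I) = e(J) = \ell(A/(x^2,y^2)) = 4$.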
There is a deep connection between reductions and multiplicity \cite{r1}.
\begin{Theorem} [Rees Multiplicity Theorem, 1961]
Let $A$ be a quasi-unmixed local ring. Let $J \subseteq I$ be ${\mathfrak m}$-primary ideals. Then $J$ is a reduction of $I$ if and only if $e(I)=e(J).$
\end{Theorem}
Recall that $A$ is called a {\em quasi-unmixed local ring} if
$\dim \hat{A}/p=\dim A$ for each minimal prime $p$ of $\hat{A}$, where
$\hat{A}$ denotes the ${\mathfrak m}$-adic completion of $A.$
To generalize Rees Multiplicity Theorem for mixed multiplicities we need to consider a sequence of ideals.
\begin{Theorem} [Teissier, 1973]
Let ${\bf I} = I_1, \ldots, I_s$ be a sequence of ${\mathfrak m}$-primary ideals.
For any $u = (u_1,...,u_s)\in \NN^s$ let ${{\bf I}}^u = I_1^{u_1}\cdots I_s^{u_s}$.
Then the function $\ell(A/{{\bf I}}^u)$
is given by a polynomial $P(u)$ of total degree $d$
for all $u \gg 0.$
The polynomial $P(u)$ can be written in terms of binomial coefficients as
$$P(u)=\sum_{{\alpha} \in \NN^s,|{\alpha}| \leq d}e_{\alpha}({{\bf I}})
\binom{u_1+{\alpha}_1}{{\alpha}_1} \binom{u_2+{\alpha}_2}{{\alpha}_2}\cdots \binom{u_s+{\alpha}_s}{{\alpha}_s}
$$
where $e_{\alpha}({{\bf I}})$ are integers which are positive if $|{\alpha}|=d.$
\end{Theorem}
The integers $e_{\alpha}({\bf I})$ with $|{\alpha}|=d$ are called the {\it mixed multiplicities} of the sequence ${\bf I}$ \cite{t1}.
Teissier showed that each mixed multiplicity of ${\bf I}$ is the multiplicity of certain ideals generated by systems of parameters.
Let $I = (c_1,...,c_r)$ be an arbitrary ideal in $A$. We say that a given property holds for a {\it sufficiently general element} $a \in I$ if there exists a non-empty Zariski-open subset $U \subseteq k^r$ such that whenever $a = \sum_{j=1}^r \alpha_jc_j$ and the image of $(\alpha_1,\ldots,\alpha_r)$ in $k^r$ belongs to $U$, then the given property holds for $a$.
\begin{Theorem}[Teissier, 1973]
Let ${\alpha} = ({\alpha}_1,...,{\alpha}_s)$ with $|{\alpha}| = d$. Let $J$ be a parameter ideal generated by ${\alpha}_1$ general elements in $I_1$, ..., ${\alpha}_s$ general elements in $I_s$. Then
$$e_{\alpha}({\bf I})=e(J).$$
\end{Theorem}
This result was then generalized by Rees for joint reductions \cite{r4}. A sequence of elements $a_1, \ldots, a_s$ is called a {\it joint reduction} of the ideals $I_1,...,I_s$ in $A$
if $a_i \in I_i$ for $i=1,2,\ldots, s$ and the ideal $$\sum_{i=1}^{s}a_iI_1 \cdots I_{i-1}I_{i+1} \cdots I_s$$
is a reduction of $I_1 \cdots I_s.$
The existence of joint reductions was established first for a set of $d$ ${\mathfrak m}$-primary ideals by Rees and for
any number of arbitrary ideals by O'Carroll \cite{oc}.
The next theorem is one of the fundamental results relating joint reductions with mixed multiplicities.
\begin{Theorem}[Rees Mixed Multiplicity Theorem, 1982]
Let ${\bf I}$ be a sequence of $d$ ${\mathfrak m}$-primary ideals. Let $J$ be an ideal generated by a joint reduction of ${\bf I}$.
Put ${\bf 1}=(1,1,\ldots,1).$ Then
$$ e_{\bf 1}({\bf I})= e(J). $$
\end{Theorem}
It is now natural to ask for the converse of the Rees Mixed Multiplicity Theorem. This was done for two-dimensional quasi-unmixed local rings by Verma \cite{v1} and in any dimension by Swanson \cite{sw2}.
\begin{Theorem}[Swanson's Mixed Multiplicity Theorem, 1992]
Let $A$ be a quasi-unmixed local ring.
Let ${\bf I} = I_1, \ldots, I_s$ be a sequence of ideals of $A.$ Let $a_j \in I_j$ for
$j = 1, \ldots, s.$ Suppose the ideals $(a_1, a_2, \ldots, a_s)$
and $I_1, \ldots, I_s$ have equal radicals and their common height is $s.$ Suppose that
$$ e((a_1, \ldots, a_s)A_\wp) = e_{\bf 1} (I_1A_\wp,...,I_sA_\wp)
$$
for each minimal prime $\wp$ of $(a_1, \ldots, a_s).$ Then
$a_1, \ldots, a_s$ is a joint reduction of $I_1, \ldots, I_s.$
\end{Theorem}
\section{Mixed multiplicities of two ideals: the general case}
Now we are going to extend the notion of mixed multiplicities of two ${\mathfrak m}$-primary ideals $I,J$ to the case when
$I$ is an ${\mathfrak m}$-primary ideal and $J$ is an arbitrary ideal of a local ring $(A,\mathfrak m)$.
To this end we associate with $I, J$ the standard bigraded algebra
$$R(I|J) := \oplus_{u,v \ge 0} I^uJ^v/I^{u+1}J^v$$
over the quotient ring $A/I$.
Let $R = R(I|J)$. Since $A/I$ is an artinian ring, $R$ has a Hilbert polynomial $P_R(u,v)$.
We call the mixed multiplicities $e_{ij}(R)$ the {\it mixed multiplicities} of the ideals $I$ and $J$.
This notion coincides with the mixed multiplicities of the last section if $J$ is an ${\mathfrak m}$-primary ideal.
In fact, we have
$$\ell(A/I^uJ^v) = \sum_{t=0}^{u-1}\ell(I^tJ^v/I^{t+1}J^v)$$
for all $u, v \ge 0$. From this it follows that
$e_j(I|J) = e_{d-1-j\;j}(R)$ for $j < d$.
For this reason we will set
$$e_j(I|J) := e_{ij}(R), \qquad i = \deg P_R(u,v) - j,$$
for any ${\mathfrak m}$-primary ideal $I$ and any ideal $J$ in $A$.
Katz and Verma \cite{kv} showed that if $\height J \ge 1$, then $\deg P_R(u,v) = \dim A -1$ and $e_0(I|J) = e(I)$.
This result can be generalized as follows \cite{Tr2}.
\begin{Lemma}\label{e0}
Let $J$ be an ideal. Then $\deg P_R(u,v) = \dim A/0:J^\infty-1$ and $e_0(I|J) = e(I,A/0:J^\infty)$.
\end{Lemma}
We shall denote by $e(I,A/Q)$ the multiplicity of the ideal $(I+Q)/Q$ in the quotient ring $A/Q$ for any ideal $Q$ of $A$.
The positivity of the mixed multiplicities $e_j(I|J)$ is closely related to the dimension of the {\it fiber ring} of $I$, which is defined as the graded algebra
$$F(I) := \bigoplus_{n\ge 0}I^n/{\mathfrak m}I^n.$$
It can be shown that $J$ is a reduction of $I$ if and only if the ideal of $F(I)$ generated by the initial forms of generators of $J$ in $F(I)_1 = I/{\mathfrak m} I$ is primary to the maximal graded ideal of $F(I)$. From this it follows that if the residue field of $A$ is infinite, the minimal number of generators of any minimal reduction $J$ of $I$ is equal to $\dim F(I)$. For this reason, $\dim F(I)$ is termed the {\it analytic spread} of $I$ and denoted by $s(I)$. If $I$ is an ${\mathfrak m}$-primary ideal, we have $s(I) = \dim A$. We refer the reader to \cite{nr} for more details.
Katz and Verma \cite{kv} proved that $e_i(I|J) = 0$ for $i \ge s(J)$. This is a consequence of
the following bound for the partial degree $\deg_vP_R(u,v)$ of $P_R(u,v)$ \cite{Tr2}.
\begin{Proposition} \label{deg u}
$\deg_v P_R(u,v) < s(J)$.
\end{Proposition}
\begin{Question}
Can one express $\deg_u P_R(u,v)$ in terms of $I$ and $J$?
\end{Question}
To test the positivity of a mixed multiplicity $e_i(I|J) $ we have the following criterion, which also shows how to compute $e_i(I|J) $ as a Samuel multiplicity \cite{Tr2}.
\begin{Theorem}[Trung, 2001] \label{main}
Let $J$ be an arbitrary ideal of $A$ and $0 \le i < s(J)$. Let $a_1,\ldots,a_i$ be elements in $J$ such that their images in $J/IJ$ and $J/J^2$ form filter-regular sequences in $R(I|J)$ and $R(J|I)$, respectively. Then $e_i(I|J) > 0$ if and only if
$$\dim A/(a_1,\ldots,a_i):J^\infty = \dim A/0:J^\infty-i.$$
In this case, we have
$$e_i(I|J) = e(I,A/(a_1,\ldots,a_i):J^\infty).$$
\end{Theorem}
Theorem \ref{main} requires the existence of elements in $J$ with special properties. However, if the residue field of $A$ is infinite,
such elements always exist. In fact, any sequence of general elements $a_1,...,a_i$ in $J$ satisfies the assumption.
As a consequence of Theorem \ref{main} we obtain the rigidity of mixed multiplicities and the independence of their positivity from the ideal $I$.
\begin{Corollary} \label{rigid3}
Let $\rho = \max \{i|\ e_i(I|J) > 0\}.$ Then\par
{\rm (i) } $\height J -1 \le \rho \le s(J)-1$,\par
{\rm (ii) } $e_i(I|J) > 0$ for $0 \le i \le \rho$,\par
{\rm (iii)} $\max \{i|\ e_i(I'|J) > 0\} = \rho$ for any $\mathfrak m$-primary ideal $I'$ of $A$.
\end{Corollary}
Since $\rho$ doesn't depend on $I$, we set $\rho(J) := \rho$.
In general we don't have the equality $\rho(J) = s(J)-1$.
\begin{Example}
{\rm Let $A = k[[x_1,x_2,x_3,x_4]]/(x_1) \cap (x_2,x_3)$. Let $I$ be the maximal ideal of $A$ and $J = (x_1,x_4)A$. Then $F(J) \cong k[x_1,x_4]$. Hence $s(J) = \dim F(J) = 2$. One can verify that the images of $x_4$ in $J/IJ$ and $J/J^2$ are filter-regular elements in $R(I|J)$ and $R(J|I)$. We have $0:J^\infty = 0$ and $x_4A:J^\infty = (x_2,x_3,x_4)A$. Hence $\dim A/x_4A:J^\infty = 1 < \dim A/0:J^\infty - 1 = 2$. By Theorem \ref{main} this implies $e_1(I|J) = 0$.}
\end{Example}
We have the following sufficient condition for $\rho(J) = s(J)-1$.
\begin{Corollary} \label{rigid4}
Suppose all maximal chains of prime ideals in $A/0:J^\infty$ have the same length. Then $e_i(I|J)> 0$ for $0 \le i \le s(J)-1$.
\end{Corollary}
\begin{Question}
Can one express $\rho(J)$ in terms of $J$?
\end{Question}
For the computation of $e_i(I|J)$ we may replace $I$ and $J$ by their reductions. That means $e_i(I|J) = e_i(I'|J')$
for arbitrary reductions $I'$ and $J'$ of $I$ and $J$, respectively.
Using reductions one obtains the following simple formula for the mixed multiplicities $e_i(I|J)$, $i \le \height J-1$ \cite{Tr2}.
\begin{Proposition} \label{parameter}
Let $0 \le i \le \height J-1$. Let $a_1,\ldots,a_i$ and $b_1,\ldots,b_{d-i}$ be sufficiently general elements in $J$ and $I$, respectively. Then
$$e_i(I|J) = e(I,A/(a_1,\ldots,a_i)) = e((a_1,\ldots,a_i,b_1,\ldots,b_{d-i})).$$
\end{Proposition}
As a consequence we obtain the following interpretation of $e_1({\frk m}} \def\Mm{{\frk M}} \def\nn{{\frk n}|J)$ which was proved by Katz and Verma \cite{kv1}.
For any ideal $J$ of $A$ we denote by $o(J)$ the ${\mathfrak m}$-{\it adic order} of $J$, that is, the largest integer $n$ such that $J \subseteq {\mathfrak m}^n$.
\begin{Corollary} \label{e1}
Let $(A,\mathfrak m)$ be a regular local ring and $J$ an ideal with $\height J \ge 2$.
Then $$e_1({\mathfrak m}|J) = o(J).$$
\end{Corollary}
For $i \ge \height J$ we couldn't find a simple formula for $e_i(I|J)$ except in the following case \cite{Tr2}.
Recall that $J$ is called {\it generically a complete intersection} if $\height {\mathfrak p} = d-\dim A/J$ and $J_{\mathfrak p}$ is generated by $\height {\mathfrak p}$ elements for every associated prime ideal $\mathfrak p$ of $J$ with $\dim A/{\mathfrak p} = \dim A/J$.
\begin{Proposition} \label{deviation}
Let $J$ be an ideal of $A$ with $0 < s = \height J< s(J)$. Assume that $J$ is generically a complete intersection. Let $a_1,\ldots,a_s$ and $b_1,\ldots,b_{d-s}$ be sufficiently general elements in $J$ and $I$, respectively. Then
$$e_s(I|J) = e(I,A/(a_1,\ldots,a_s)) - e(I,A/J).$$
\end{Proposition}
\section{Milnor numbers and mixed multiplicities}
A geometric interpretation of the mixed multiplicities was found by Teissier in the
Carg\`{e}se paper \cite{t1} in 1973. Teissier was interested in Milnor numbers of isolated singularities of complex analytic hypersurfaces. We will now recall the concept of Milnor number and point out some of its basic properties found in \cite{mi}.
Let $f : U \subset \CC^{n+1} \rightarrow \CC$ be an analytic function in an open neighborhood $U$ of the origin in $\CC^{n+1}.$ Put
$ S_\epsilon = \{z \in \CC^{n+1} : ||z|| = \epsilon\}.$ Define the map
$\phi_\epsilon:S_\epsilon \setminus \{f=0\} \longrightarrow S^1$ by
$\phi_\epsilon(z)=f(z)/||f(z)||.$ Let $f_{z_i}:=\partial f/\partial z_i$ denote
the partial derivative of $f$ with respect to $z_i.$
Milnor and Palamodov proved the following
\begin{Theorem} [Milnor, Palamodov]
If the origin is an isolated singularity of $f(z)$ then the fibers of
$\phi_\epsilon$ for small $\epsilon$ have the homotopy type of a
bouquet of $\mu$ spheres of dimension $n$ having a single common point
where
$$\mu = \dim_{\CC} \frac{\CC\{z_0, z_1, \ldots , z_n\}}
{(f_{z_0}, \ldots, f_{z_n})}.$$
\end{Theorem}
The number $\mu$ is called the {\it Milnor number} of the isolated singularity. Therefore the Milnor number is nothing but the multiplicity of the Jacobian ideal
$$J(f) := (f_{z_0}, \ldots, f_{z_n})$$
of $f.$ The Milnor number is a very useful invariant to detect the topology of a singularity.
Let $(X, x)$ and $(Y, y)$ be two germs of reduced complex analytic hypersurfaces of same dimension $n.$
Then they are called {\it topologically equivalent} if there exist representatives
$(X, x) \subset (U, x)$ and $(Y, y)\subset (V, y)$ where $U$ and $V$
are open in $\CC^{n+1}$ and a homeomorphism of pairs between $(U, x)$ and $(V, y)$
which carries $X$ to $Y.$
The basic result relating the Milnor number with the topology of the singularity is the following:
\begin{Theorem}[Milnor, 1968]
Let $(X, x)$ and $(Y, y)$ be two germs of hypersurfaces with isolated singularity having the same topological type. Then $\mu_x(X)=\mu_y(Y).$
\end{Theorem}
Teissier \cite{t1} refined the notion of Milnor number.
\begin{Theorem}[Teissier, 1973]
Let $(X, x)$ be a germ of a hypersurface in $\CC^{n+1}$ with an isolated singularity. Let $E$ be an $i$-dimensional affine subspace of $\CC^{n+1}$
passing through $x.$ If $E$ is chosen sufficiently general then the Milnor number of $X \cap E$ at $x$ is independent of $E.$
\end{Theorem}
The Milnor number of $X \cap E,$ as in the above theorem,
is denoted by $\mu^{(i)}(X, x).$
Note that $\mu^{(n+1)}(X,x)$ is the Milnor number of the isolated singularity. Moreover $\mu^{(1)}(X,x)=m_x(X)-1$, where $m_x(X)$ denotes the
multiplicity of the hypersurface $X$ at $x.$
Put
$$\mu^*(X, x) = (\mu^{(n+1)}(X, x), \mu^{(n)} (X, x), \ldots , \mu^{(0)}(X, x))$$
\medskip
\noindent
{\bf Teissier's Conjecture (cf. \cite{t1})}
If $(X,x)$ and $(Y,y)$ have the same topological type, then
$$\mu^*(X,x)=\mu^*(Y,y).$$
The above conjecture contains Zariski's conjecture \cite{z} to the effect that $m_x(X)=m_y(Y)$ for topologically equivalent isolated singularities of hypersurfaces, as a special case. Zariski's conjecture is still open.
Suppose $f_t(z_0,z_1,\ldots, z_n)$ is an analytic
family of $n$-dimensional hypersurfaces with isolated singularity at the origin. Suppose
all these singularities have the same Milnor number. Hironaka conjectured that under these
hypotheses the singularities have the same topological type when $n=1.$ This conjecture, and more generally the case $n\not=2$, was settled
in the affirmative by Trang and Ramanujam \cite{tr}.
A counterexample to Teissier's conjecture was given
in 1975 by J. Brian\c{c}on and J.-P. Speder, who constructed a topologically trivial family of quasi-homogeneous
surface singularities with constant Milnor number in which the sequence $\mu^*$ is not constant.
It is now known that the constancy of the sequence $\mu^*$ in a
family $F(t; x_1,\ldots ,x_n)=0$ of hypersurfaces with isolated
singularities at the origin is equivalent to the topological
triviality of general nonsingular sections of all dimensions through
the $t$-axis; this follows from \cite{t1} and \cite{bs2}.
Teissier devised a way to calculate the sequence $\mu^*(X, x).$
It turns out that the sequence $\mu^*(X, x)$ is identical to the sequence of mixed multiplicities of the Jacobian ideal $J(f)$ and ${\mathfrak m}$.
More precisely the following result was proved by Teissier \cite{t1}.
Let $A=\CC\{z_0,z_1,\ldots,z_n\}$ denote the ring of convergent power series.
Let $f \in A$ be the equation of a hypersurface singularity $(X, 0).$
If $(X,0)$ is an isolated singularity, $J(f)$ is an ${\mathfrak m}$-primary ideal. Hence
the function $\ell(A/J(f)^u{\mathfrak m}^v)$ is given by a polynomial
$P (u,v)$ of total degree $n + 1$ for large $u$ and $v$.
\begin{Theorem}[Teissier, 1973]
Let $(X, 0)$ be a germ of a hypersurface in $\CC^{n+1}$ with an isolated singularity.
Put $\mu^{(i)}= \mu^{(i)}(X, 0).$
Then the terms of total degree $n +1$ in $P(u,v)$ have the form
$$\frac{1} {(n+1)!} \sum_{i=0}^{n+1}
\binom{n+1}{i}\mu^{(n+1-i)}u^{n+1-i}v^i.
$$
\end{Theorem}
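For instance, if $f = z_0^2+\cdots+z_n^2$ defines an ordinary double point, then $J(f) = {\mathfrak m}$ and $\ell(A/J(f)^u{\mathfrak m}^v) = \ell(A/{\mathfrak m}^{u+v})$, whose part of total degree $n+1$ is
$$\frac{1}{(n+1)!}(u+v)^{n+1} = \frac{1}{(n+1)!}\sum_{i=0}^{n+1}\binom{n+1}{i}u^{n+1-i}v^i;$$
thus $\mu^{(i)}(X,0) = 1$ for every $i$, as expected since a general linear section of an ordinary double point is again an ordinary double point.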
If $(X,0)$ is not an isolated singularity, using mixed multiplicities in the general case we can also compute
the Milnor number of general hyperplane sections of $(X,0)$.
Let $s$ be the codimension of the singular locus of $X$. Let $E$ be a general $i$-plane in $\CC^{n+1}$ passing through the origin, $i \le s$.
Then $(X \cap E,0)$ is an isolated singularity. Let $\mu(X \cap E,0)$ denote its Milnor number.
Let $a_1,\ldots,a_i$ be general elements in $J(f)$. Let $b_1,\ldots,b_{n+1-i}$ be the defining equations of $E$.
It is easily seen that
\begin{eqnarray*}
\mu(X \cap E,0) & = &\ell(A/(a_1,\ldots,a_i,b_1,\ldots,b_{n+1-i}))\\
& = & e((a_1,\ldots,a_i,b_1,\ldots,b_{n+1-i})).
\end{eqnarray*}
On the other hand, by Proposition \ref{parameter} we have
$$e_i({\mathfrak m}|J(f)) = e((a_1,\ldots,a_i,b_1,\ldots,b_{n+1-i}))$$
for $i = 0,\ldots,s-1$. Hence we obtain the following formula for $\mu(X \cap E,0)$ \cite{Tr2}.
\begin{Theorem}
Let $(X,0)$ be a germ of a complex analytic hypersurface. With the above notations we have
$$\mu(X\cap E,0) = e_i({\mathfrak m}|J(f)).$$
\end{Theorem}
\section{Multiplicities of blow-up algebras}
In this section we will discuss how mixed multiplicities arise naturally in the calculation of multiplicities of various blowup algebras. Let $(A,{\mathfrak m})$ be a Noetherian local ring and $d = \dim A$. Let
$I$ be an ideal of $A.$ The {\it Rees algebra} of $I$ is the graded $A$-algebra
$$A[It] :=\bigoplus_{n=0}^{\infty} I^nt^n$$
where $t$ is an indeterminate. This graded algebra has a unique maximal homogeneous ideal
$M = ({\mathfrak m}, It).$ To compute the multiplicity of the local ring $A[It]_M$ we recall the following fact.
For any ideal $Q$ of a commutative ring $S$ we define the {\it associated graded ring} of $Q$ as the standard graded algebra
$$G_Q(S) = \bigoplus_{n\ge0}Q^n/Q^{n+1}$$
over the quotient ring $S/Q$.
If $Q$ is a maximal ideal of $S$, then $G_Q(S)$ has the Hilbert function $\ell(Q^n/Q^{n+1})$.
It is easy to check that
$$e(S_Q) = e(G_Q(S)).$$
Huneke and Sally \cite{hs} observed that
$$\ell(M^n/M^{n+1}) = \sum_{i=0}^{n} \ell ({\mathfrak m}^{n-i}I^i/{\mathfrak m}^{n-i+1}I^i).$$
They calculated this function for integrally closed ${\mathfrak m}$-primary ideals in a two dimensional regular local ring and obtained the formula
$$
e(A[It]_M ) = 1 + o(I),$$
where $o(I)$ denotes the ${\mathfrak m}$-adic order of $I$.
On the other hand, we know by Corollary \ref{e1} that $e_1({\mathfrak m}|I) = o(I)$.
Therefore, the above formula can be rewritten as
$$e(A[It]_M ) = e_0({\mathfrak m}|I) + e_1({\mathfrak m}|I).$$
That raises the natural question whether such a formula holds for an arbitrary ideal $I$.
This question was answered by Verma in 1988 for ${\mathfrak m}$-primary ideals \cite{v2} and in 1992 for the general case \cite{v7}.
\begin{Theorem}[Verma, 1992] \label{Rees-Verma}
Let $(A, {\mathfrak m})$ be a local ring of dimension $d.$ Let $I$ be an arbitrary ideal of positive height. Then
$$
e(A[It]_M ) = e_0({\mathfrak m}|I) + e_1({\mathfrak m}|I) + e_2({\mathfrak m}|I) + \cdots + e_{d-1}({\mathfrak m}|I).$$
\end{Theorem}
As a consequence, the above result of Huneke and Sally also holds for an arbitrary ${\mathfrak m}$-primary ideal in a two dimensional regular local ring.
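For instance, take $I = {\mathfrak m}^r$ in a two-dimensional regular local ring. Then $e_j({\mathfrak m}|{\mathfrak m}^r) = r^j$, so Theorem \ref{Rees-Verma} gives $e(A[It]_M) = e_0({\mathfrak m}|I) + e_1({\mathfrak m}|I) = 1 + r = 1 + o(I)$, recovering the formula of Huneke and Sally in this case.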
A similar formula for the multiplicity of the {\it extended Rees algebra}
$$A[It,t^{-1}] :=\bigoplus_{n \in \ZZ} I^nt^n\; (I^n = A\ \text{ for } n < 0)$$
was found by Katz and Verma \cite{kv}.
\begin{Theorem}[Katz-Verma, 1992] \label{Katz-Verma}
Let $(A, {\mathfrak m})$ be a local ring of dimension $d.$ Let $I$ be an ideal of positive height.
Let $N = ({\mathfrak m},It,t^{-1}) \subset A[It,t^{-1}]$.
Then
$$
e(A[It,t^{-1}]_N) = \frac{1}{2^d} \left[e({\mathfrak m}^2 + I) + \sum^{d-1}_{j=0} 2^j e_j({\mathfrak m}^2 + I|I)\right].
$$
\end{Theorem}
Similarly the multiplicity of the fiber ring $F(I)$ can be expressed in terms of a mixed multiplicity of ${\mathfrak m}$ and $I$ \cite{jpv}. Furthermore, this knowledge also helps in detecting the Cohen-Macaulay property of $F(I)$ for certain classes of ideals.
Though we have a well developed theory of mixed multiplicities when $I$ is an $\mathfrak m$-primary ideal \cite{t1}, \cite{r1}, there have been few cases where the mixed multiplicities can be computed in terms of well-known invariants of $\mathfrak m$ and $I$ when $I$ is not an $\mathfrak m$-primary ideal.
The first explicit computation of mixed multiplicities for a non-trivial case was done by Katz and Verma in \cite{kv1} for height 2 almost complete intersection prime ideals in a polynomial ring in three variables.
In 1992, Herzog, Trung and Ulrich \cite{htu} computed the multiplicity of Rees algebras of homogeneous ideals generated by a $d$-sequence.
Recall that a sequence of elements $x_1,\ldots,x_n$ is said to be a $d$-sequence if
(1) $x_i\notin (x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n),$
(2) $(x_1,\ldots,x_i):x_{i+1}x_k=(x_1,\ldots,x_i):x_k$ for all $k\ge i+1$ and all $i\ge 0$.
Examples of $d$-sequences are regular sequences, the maximal minors of an $n \times (n+1)$ matrix of indeterminates or the generators of an almost complete intersection \cite{Hu}.
Let $A$ be a standard graded ring over a field $k$ and ${\frk m}$ the maximal graded ideal of $A$.
Let $I = (x_1, \ldots , x_n)$ be an ideal generated by a $d$-sequence of homogeneous elements in $A$
with $\deg (x_1)\leq \cdots \leq \deg(x_n).$ Herzog, Trung and Ulrich calculated the multiplicity of the Rees algebra $A[It]$ in terms of the multiplicities of $A/I_j$, where
$$I_j = (x_1,\cdots, x_{j-1}) : x_j\;\; (j = 1, \ldots, n).$$
They used a technique which is similar to that of Gr\"obner bases and which does not involve mixed multiplicities.
\begin{Theorem}[Herzog-Trung-Ulrich, 1992] \label{HTR}
Let $I$ be an ideal generated by a homogeneous $d$-sequence $x_1,\ldots,x_n$ of $A$ with $\deg x_1\le \ldots\le \deg x_n$. Then
$$e(A[It]_M) = \left\{ \begin{array}{lll}
\sum_{j=1}^se(A/I_j) & \text{if} & \dim A/I_1 = \dim A,\\
\sum_{j=1}^se(A/I_j) + e(A) & \text{if} & \dim A/I_1 = \dim A-1,\\
e(A) & \text{if} & \dim A/I_1 \le \dim A-2,
\end{array} \right.$$
where $s = \max\{j\mid \dim A/I_j = \dim A/I_1 - j + 1\}$ (the same $s$ as in the formula of Hoang below).
\end{Theorem}
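As a quick illustration (a direct check, not drawn from \cite{htu}): let $A = k[x,y,z]$ and $I = (f_1,f_2)$ with $f_1,f_2$ a homogeneous regular sequence and $\deg f_1 = d_1 \le \deg f_2 = d_2$. Then $I_1 = 0$ and $I_2 = (f_1)$, the first case applies with $s = 2$, and the theorem gives $e(A[It]_M) = e(A) + e(A/(f_1)) = 1 + d_1$; for $I = (x,y)$ this recovers the multiplicity $2$ of the hypersurface $A[It] \cong k[x,y,z,u,v]/(xv-yu)$.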
On the other hand, using essentially the same technique, N.D. Hoang \cite{Ho} was able to compute the mixed multiplicities
$e_i({\frk m}|I)$ in this case, namely,
$$e_i(\mathfrak m|I)= \left\{ \begin{array}{ll} e(A/{I_{i+1}}) & \text{if $0 \le i\le s-1,$}\\
0 & \text{if $i \ge s,$}
\end{array}\right. $$
where $s = \max \{i|\dim A/{I_i}=\dim A/{I_1}-i+1\}$.
Combining this with Theorem \ref{Rees-Verma} he could recover the result of Herzog, Trung and Ulrich.
Using the same technique Raghavan and Verma \cite{rv} computed the Hilbert series of the associated graded ring
$G_M(A[It]).$
\begin{Theorem}[Raghavan-Verma, 1997]
Let $I$ be a graded ideal generated by a $d$-sequence as above.
Let $R =G_M(A[It])$. Then
$$
H(R; \lm_1, \lm_2) = H(A, \lm_1) + \lm_2
\sum_{s=1}^n \frac{H(A/I_s, \lm_1)} {(1 - \lm_2)^s}.
$$
\end{Theorem}
Inspired by the work of Raghavan and Simis \cite{rasi}, Raghavan and Verma also computed the Hilbert series of the associated graded ring of homogeneous ideals generated by quadratic sequences, a generalization of $d$-sequences. As a consequence they obtained the following concrete formula for determinantal ideals.
\begin{Theorem}[Raghavan-Verma, 1997]
Let $A$ be the polynomial ring in $mn$ indeterminates over a field where $m \leq n .$ Let $X$ be an $m \times n$
matrix of these indeterminates. Let $I$ denote the ideal of
$A$ generated by the maximal minors of $X.$
Then the Hilbert series of the bigraded algebra $R =G_M(A[It])$
is given by the formula:
$$
H(R ; \lm_1, \lm_2) = H(A, \lm_1) + \lm_2\sum_{\omega \in \Omega}
H (A/(\Pi^{\omega}), \lm_1) H(F_{\omega}, \lm_2)
$$
where
\begin{eqnarray*}
\Pi &=& \mbox{poset of all minors of all sizes of the matrix} \; X\\
\Omega &=& \mbox{ideal of}\; \; \Pi \;\mbox{consisting of maximal minors of}\;\; X \\ \Pi^{\omega} &=& \{\pi \in \Pi : \pi \ngeq \omega\} \\
(\Pi^{\omega}) &=& \mbox{ideal of}\; A \;\mbox{generated by} \;\;
\; \Pi^{\omega}. \\
F_{\omega} &=& \mbox{the face ring over}\; k\;
\mbox{of} \;\;
\Pi_{\omega} := \{\pi \in \Pi : \pi \leq \omega\}.
\end{eqnarray*}
\end{Theorem}
The techniques of Herzog, Trung and Ulrich were also used to compute the multiplicity of Rees algebras of ideals generated by filter-regular sequences of homogeneous elements \cite{Tr}.
Recall that a sequence $x_1,\ldots,x_n$ of elements of $A$ is called {\em filter-regular} with respect to $I$ if $x_i\notin \wp$ for all associated prime ideals $\wp \not\supseteq I$ of $(x_1,\ldots,x_{i-1})$, $i=1,\ldots,n$.
If $A$ is a {\it generalized Cohen-Macaulay ring}, that is, $A_\pp$ is Cohen-Macaulay and $\dim A/\pp + \height \pp = \dim A$ for all prime ideals $\pp \neq {\frk m}$, then every ideal generated by a subsystem of parameters $x_1,\ldots,x_n$ is filter-regular with respect to $I = (x_1,...,x_n).$
\begin{Theorem}[Trung, 1993] \label{filter-regular}
Let $I$ be a homogeneous ideal of $A$ generated by a subsystem of parameters $x_1,\ldots,x_n$ which is a filter-regular sequence with respect to $I$ with $\deg x_1=a_1\le \ldots \le \deg x_n=a_n$. Then
$$e\left(A[It]_M\right)=\left(1+\sum_{i=1}^{n-1}a_1\ldots a_i\right)e(A),$$
$$e\left(A[It,t^{-1}]_N\right)=\left(1+\sum_{i=l}^{n-1} a_1\ldots a_i\right)e(A),$$
where $l$ is the largest integer for which $a_l=1$ (with the convention that $l=0$ and $a_1\ldots a_l=1$ if $a_i>1$ for all $i=1,\ldots,n$).
\end{Theorem}
Comparing the first formula with Theorem \ref{Rees-Verma} one might ask whether in the above case,
$$e_i(\mathfrak m|I)= \left\{ \begin{array}{ll} a_1\cdots a_ie(A) & \text{if $0 \le i\le n-1,$}\\
0 & \text{if $i \ge n.$}
\end{array}\right. $$
This has been proved by N. D. Hoang in \cite{Ho}.
\begin{Question}
Can one drop the condition $a_1 \le \ldots \le a_n$ in Theorem \ref{filter-regular}?
\end{Question}
Note that the proof for a positive answer to this question in \cite{Vi2} is not correct.
Finally we report a formula for the multiplicity of the local ring $A[It]_{{\frk m} A[It]}$. This is useful in finding when certain symmetric algebras are Cohen-Macaulay with minimal multiplicity.
\begin{Proposition}[Yoshida, 1995]
Let $(A, {\frk m})$ be a quasi-unmixed local ring. Let $I$ be an ideal of
positive height in $A$ with $\mu(I) = s(I) = n.$ Then
$$ e\left(A[It]_{{\frk m} A[It]}\right)
= e_{n-1}({\frk m} \mid I).$$
\end{Proposition}
\section{Mixed multiplicities of a sequence of ideals: the general case}
For some applications we need to extend the notion of mixed multiplicities to a sequence of ideals which are not necessarily ${\frk m}$-primary.
This can be done in the same manner as for mixed multiplicities of two ideals.
Let $(A,{\frk m})$ be a local ring (or a standard graded algebra over a
field with maximal graded ideal ${\frk m}$). Let $I$ be an ${\frk m}$-primary
ideal and ${\bf J} = J_1,\ldots,J_s$ a sequence of ideals of $A$.
One can define the $\NN^{s+1}$-graded algebra
$$R(I|{\bf J}) := \bigoplus_{(u_0,u_1,...,u_s) \in
\NN^{s+1}}I^{u_0}J_1^{u_1}...J_s^{u_s}/I^{u_0+1}J_1^{u_1}...J_s^{u_s}.$$
This algebra can be viewed as the associated graded ring of the ideal $(I)$ of
the {\it multi-Rees algebra}
$$A[It_0,J_1t_1,...,J_st_s] := \bigoplus_{(u_0,u_1,...,u_s) \in
\NN^{s+1}} I^{u_0}J_1^{u_1}...J_s^{u_s}t_0^{u_0}t_1^{u_1}...t_s^{u_s}.$$
For short, set $R = R(I|{\bf J})$. Then $R$ is a standard
$\NN^{s+1}$-graded algebra. Hence
it has a Hilbert polynomial $P_R(u)$. For any ${\alpha} \in \NN^{s+1}$ with
$|{\alpha}| = \deg P_R(u)$ we will set
$$e_{\alpha}(I|{\bf J}) := e_{\alpha}(R).$$
If $J_1,\ldots,J_s$ are ${\frk m}$-primary ideals, $e_{\alpha}(I|{\bf J})$
coincides with the mixed multiplicities defined by using the function $\ell(A/I^{u_0}J_1^{u_1}...J_s^{u_s})$.
However, the techniques used in the ${\frk m}$-primary
case are not applicable for non-${\frk m}$-primary ideals. For instance, mixed multiplicities of ${\frk m}$-primary ideals are always positive, whereas they may be zero in the general case.
We have to develop new techniques to prove the following general result which allows us to test the positivity of mixed multiplicities and to compute them by means of Samuel's multiplicity \cite{TV}.
Throughout this section let $J := J_1...J_s$ and $d := \dim A/0:J^\infty.$
\begin{Theorem}[Trung-Verma, 2007] \label{dimension}
Let $d \ge 1$. Then $\deg P_R(u) = d - 1$ and
$$e_{(d-1,0,...,0)}(I|{\bf J} ) = e(I,A/0:J^\infty).$$
\end{Theorem}
We shall need the following notation for the computation of mixed multiplicities.
A sequence of homogeneous elements $z_1,...,z_m$ in a multigraded algebra $S$ is
called {\it filter-regular} if
$$[(z_1,...,z_{i-1}):z_i]_u = (z_1,...,z_{i-1})_u$$
for $u \gg 0$, $i = 1,...,m$. It is easy to see that this is equivalent to
the condition
$z_i \not\in P$ for any associated prime
$P \not\supseteq S_+$ of $S/(z_1,...,z_{i-1})$.
We will work now in the $\ZZ^{s+1}$-graded algebra
$$S := \bigoplus_{u\in\ZZ^{s+1}}I^{u_0}J_1^{u_1}...J_s^{u_s}
/I^{u_0+1}J_1^{u_1+1}...J_s^{u_s+1},$$
which is the associated graded ring of
the algebra $A[It_0,J_1t_1,...,J_st_s]$ with respect to the ideal $(IJ)$.
Let ${\varepsilon}_1,...,{\varepsilon}_m$ be any non-decreasing sequence of indices with
$1 \le {\varepsilon}_i \le s$. Let $x_1,...,x_m$ be a sequence of elements of $A$
with $x_i \in J_{{\varepsilon}_i}$, $i = 1,...,m$. We denote by $x_i^*$
the residue class of $x_i$
in $J_{{\varepsilon}_i}/IJJ_{{\varepsilon}_i}$.
We call $x_1,...,x_m$ an $({\varepsilon}_1,...,{\varepsilon}_m)$-{\it superficial sequence} for the
ideals $J_1,...,J_s$ (with respect to $I$) if $x_1^*,...,x_m^*$ is a
filter-regular sequence in $S$. \par
The above notion can be considered as a generalization of the classical notion of a
superficial element of an ideal, which plays an important role in the theory
of multiplicity. Recall that an element $x \in \aa $ is called superficial with respect to an
ideal $\aa$ if there is an integer $c$ such that
$$(\aa^n:x) \cap \aa^c = \aa^{n-1}$$ for
$n \gg 0$. A sequence of elements $x_1,...,x_m \in \aa$ is called a
superficial sequence of $\aa$ if the residue class of $x_i$ in
$A/(x_1,...,x_{i-1})$ is a superficial element of the ideal
$\aa/(x_1,...,x_{i-1})$, $i = 1,...,m$. It is known that this is equivalent to
the condition that the initial forms of $x_1,...,x_m$ in $\aa/\aa^2$
form a filter-regular sequence in the associated graded ring
$\oplus_{n\ge 0}\aa^n/\aa^{n+1}$ (see e.g. \cite{Tr}).
\par
We have the following criterion for the positivity of mixed multiplicities
(a somewhat weaker result was obtained by Viet in \cite{Vi}).
\begin{Theorem}[Trung-Verma, 2007] \label{positivity}
Let ${\alpha} = ({\alpha}_0,{\alpha}_1,...,{\alpha}_s)$ be any sequence of non-negative integers
with $|{\alpha}| = d-1$. Let $Q$ be any ideal generated by an
$({\alpha}_1,...,{\alpha}_s)$-superficial sequence for the ideals
$J_1,...,J_s$ with respect to $I$. Then $e_{\alpha}(I|{\bf J}) > 0$
if and only if $\dim A/Q:J^\infty = {\alpha}_0+1.$ In this case,
$$e_{\alpha}(I|{\bf J}) = e(I,A/Q:J^\infty).$$
\end{Theorem}
Let $k$ be the residue field of $A$. Using the prime avoidance
characterization of a superficial element we can easily see that
superficial sequences exist if $k$ is infinite. In fact, any sequence which consists
of ${\alpha}_1$ general elements in $J_1$, \ldots, ${\alpha}_s$ general elements in $J_s$
forms an $({\alpha}_1,...,{\alpha}_s)$-superficial sequence for the ideals $J_1,...,J_s$.
The following result shows that the positivity of mixed multiplicities does not depend on the ideal $I$ and that
the sequence of positive mixed multiplicities is rigid.
\begin{Corollary} \label{rigid}
Let ${\alpha} = ({\alpha}_0,{\alpha}_1,...,{\alpha}_s)$ be any sequence of non-negative integers
with $|{\alpha}| = d -1$. Assume that $e_{\alpha}(I|{\bf J}) > 0$.
Then\par
{\rm (a) } $e_{\alpha}(I'|{\bf J}) > 0$ for any ${\frk m}} \def\Mm{{\frk M}} \def\nn{{\frk n}$-primary ideal
$I'$,\par
{\rm (b) } $e_{\beta}(I|{\bf J}) > 0$ for all
${\beta} = ({\beta}_0,\ldots,{\beta}_s)$ with $|{\beta}| = d-1$
and ${\beta}_i \le {\alpha}_i$, $i = 1,\ldots,s$.
\end{Corollary}
Mixed multiplicities of a sequence of ideals can be used to compute the multiplicity of multi-Rees algebras.
\begin{Theorem}[Verma, 1992]
Let ${\bf J} = J_1, \ldots, J_s$ be a sequence of ideals of positive height.
Let $M = ({\frk m}, J_1t_1, \ldots, J_st_s) \subset A[J_1t_1, \ldots, J_st_s]$ and $e_1 = (1,0,...,0) \in \NN^{s+1}$.
Then
$$
e(A[J_1t_1, \ldots, J_st_s]_M) =
\sum_{{{\alpha}}\in \NN^{s+1},\;\; |{\alpha}|=d-1}
e_{{\alpha} + e_1}({\frk m}| {\bf J}).
$$
\end{Theorem}
We shall see in the next section that mixed volumes of lattice polytopes are special cases of mixed multiplicities of ideals.
\section{Mixed volume of lattice polytopes}
Let us first recall the definition of mixed volumes.
Given two polytopes $P, Q$ in ${\mathbb R}^n$ (which need not be
different), their Minkowski sum is defined as the polytope
$$P + Q :=\{a + b \mid \ a \in P,\ b \in Q\}.$$
The $n$-dimensional {\it mixed volume} of a collection of $n$
polytopes $Q_1,...,Q_n$ in
${\mathbb R}^n$ is the value
$$MV_n(Q_1, \ldots,Q_n) :=
\sum_{h=1}^n \sum_{1\le i_1 <...< i_h\le n} (-1)^{n-h}
V_n(Q_{i_1}+\cdots + Q_{i_h}),$$
where $V_n$ denotes the $n$-dimensional Euclidean volume.
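For example (an elementary check of the definition in dimension $2$): if $Q_1 = [0,a]\times\{0\}$ and $Q_2 = \{0\}\times[0,b]$ are two segments, then $Q_1+Q_2$ is the rectangle $[0,a]\times[0,b]$, and since segments have zero area, $MV_2(Q_1,Q_2) = V_2(Q_1+Q_2) - V_2(Q_1) - V_2(Q_2) = ab$.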
Mixed volumes play an important role in convex geometry \cite{BF} and elimination theory \cite{GKZ}.
Our interest in mixed volumes arises from the following result of Bernstein
\cite{Be}, \cite{Kh} which relates the number of solutions of a system of polynomial
equations to the mixed volume of their Newton polytopes.
For a Laurent polynomial $f \in \CC[x_1^{\pm 1},...,x_n^{\pm 1}]$ we denote by $Q_f$ the convex polytope spanned by the lattice points ${\alpha} = ({\alpha}_1,...,{\alpha}_n)$
such that the monomial $x_1^{{\alpha}_1}\cdots x_n^{{\alpha}_n}$ appears in $f$. This polytope is called the {\it Newton polytope} of $f$.
\begin{Theorem}[Bernstein, 1975]
Let $f_1,...,f_n$ be Laurent polynomials in
$\CC[x_1^{\pm 1},...,x_n^{\pm 1}]$ with
finitely many common zeros in the torus $(\CC^*)^n,$ where $\CC^* = \CC \setminus \{0\}$.
Then the number of common
zeros of $f_1,...,f_n$ in $(\CC^*)^n$ is bounded above by the mixed volume
$MV_n(Q_{f_1},...,Q_{f_n})$.
Moreover, this bound is attained for a generic choice of coefficients in
$f_1,..., f_n$.
\end{Theorem}
Here, a generic choice of coefficients in $f_1,..., f_n$ means that the
supporting monomials of $f_1,..., f_n$ remain the same while their
coefficients vary in a non-empty open parameter space. \par
Bernstein's theorem is a generalization of Bezout's theorem which says that if $f_1,...,f_n$ are polynomials in $n$ variables having finitely many common zeros, then the number of common zeros of $f_1,...,f_n$ is bounded by $\deg f_1 \cdots\deg f_n$. In fact, by translation we may assume that the common zeros of $f_1,...,f_n$ lie in $(\CC^*)^n$. Let $P_i$ denote the $n$-simplex spanned by the origin and all points of the form $(0,...,\deg f_i,...,0)$. Then $Q_{f_i} \subseteq P_i$.
This implies $MV_n(Q_{f_1},...,Q_{f_n}) \le MV_n(P_1,...,P_n)$. It is easy to check that $MV_n(P_1,...,P_n) = \deg f_1 \cdots\deg f_n$.
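In the plane, for instance, $P_i$ is the triangle with vertices $(0,0)$, $(d_i,0)$, $(0,d_i)$, where $d_i = \deg f_i$; then $P_1+P_2$ is the triangle with legs of length $d_1+d_2$, and a direct computation (a verification of the claim just made) gives $MV_2(P_1,P_2) = \tfrac{(d_1+d_2)^2}{2} - \tfrac{d_1^2}{2} - \tfrac{d_2^2}{2} = d_1d_2$, which is exactly the Bezout bound.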
Bernstein's theorem is a beautiful example of the interaction between
algebra and combinatorics. The original proof in \cite{Be} has more or
less a combinatorial flavor. A geometric proof using intersection theory was
given by Teissier \cite{t5} (see also the expositions \cite{Fu}). Here we sketch
an algebraic proof of Bernstein's theorem by means of mixed multiplicities.
First of all, using homogenization we can reformulate Bernstein's theorem as follows.
\begin{Theorem} \label{homogen}
Let $g_1,..., g_n$ be homogeneous
Laurent polynomials in $\CC[x_0^{\pm 1},x_1^{\pm 1},...,x_n^{\pm 1}]$ with
finitely many common zeros in $\PP_{\CC^*}^n$. Then
$$ | \{ {\alpha} \in \PP_{\CC^*}^n \mid g_i({\alpha})=0,\; i=1, 2, \ldots,n \}|
\leq \frac{MV_n(Q_{g_1},...,Q_{g_n})}{\sqrt{n+1}}.$$
Moreover, this bound is attained for a generic choice of coefficients in
$g_1,..., g_n$.
\end{Theorem}
We may reduce the above theorem to the case of polynomials. In fact,
if we multiply the given Laurent polynomials with an appropriate
monomial, then we will obtain a new system of polynomials. Obviously, the new
polynomials have the same common
zeros in $\PP_{\CC^*}^n$. Since their Newton polytopes are translations of the old ones, their
mixed volumes do not change either.
Now assume that $g_1,...,g_n$ are homogeneous polynomials in the polynomial ring $A =
k[x_0,x_1,...,x_n]$, where $k$ is a field.
Let $M_i$ be the set of monomials occurring in $g_i$.
Let ${\frk m}$ be the maximal graded ideal of $A$ and $J_i$ the
ideal of $A$ generated by $M_i$.
Put
$$R = R({\frk m}|J_1,...,J_n).$$
We know by Theorem \ref{dimension} that $\deg P_R(u) = n$.
First, using the interpretation of mixed multiplicities as Samuel's multiplicity we
can prove the following bound for the number of common zeros of $g_1,...,g_n$
in $\PP_{k^*}^n$, where $k^* = k \setminus \{0\}$.
\begin{Theorem}[Trung-Verma, 2007] \label{multiplicity}
Let $k$ be an algebraically closed field. Let $g_1,..., g_n$ be homogeneous
polynomials in $k[x_0,x_1,...,x_n]$ with
finitely many common zeros in $\PP_{k^*}^n$. Then
$$
| \{ {\alpha} \in \PP_{k^*}^n \mid g_i({\alpha})=0,\; i=1, 2, \ldots,n \}|
\leq e_{(0,1,...,1)}(R).
$$
Moreover, this bound is attained for a
generic choice of coefficients in $g_1,..., g_n$ if $k$ has
characteristic zero.
\end{Theorem}
It remains to show that
$$e_{(0,1,...,1)}(R) = \frac{MV_n(Q_{g_1},...,Q_{g_n})}{\sqrt{n+1}}.$$
To prove that we shall need the following basic property of mixed volumes.
Let $g_0 = x_0 \cdots x_n$ and ${\bf Q} = (Q_{g_0},Q_{g_1},...,Q_{g_n})$.
Let ${\lambda} = ({\lambda}_0,...,{\lambda}_n)$ be any sequence of positive integers.
We denote by ${\lambda}{\bf Q}$ the Minkowski sum ${\lambda}_0Q_0+ \cdots + {\lambda}_nQ_n$
and by
${\bf Q}_{\lambda}$ the multiset of ${\lambda}_0$ copies of polytopes $Q_0$,...,${\lambda}_n$ copies of polytopes
$Q_n$.
Minkowski showed that the volume of the polytope ${\lambda}{\bf Q}$ is a
homogeneous polynomial in ${\lambda}$ whose coefficients are mixed volumes up to
constants:
$$V_n({\lambda}{\bf Q}) = \sum_{{\alpha} \in \NN^{n+1}, |{\alpha}| =
n}\frac{1}{{\alpha}!}MV_n({\bf Q}_{\alpha}){\lambda}^{\alpha}.$$
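In the classical case of two polygons $P,Q\subset{\mathbb R}^2$, for example, Minkowski's formula specializes to $V_2({\lambda}_1P+{\lambda}_2Q)=V_2(P){\lambda}_1^2+MV_2(P,Q){\lambda}_1{\lambda}_2+V_2(Q){\lambda}_2^2$, using $MV_2(P,P)=2V_2(P)$, which follows at once from the inclusion-exclusion definition above.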
On the other hand, there is also a Minkowski formula for mixed multiplicities,
which arises in the computation of the multiplicity of the $\NN$-graded algebra
$$R^{\lambda} := \bigoplus_{t \ge 0}R_{t{\lambda}}.$$
One calls $R^{\lambda}$ the ${\lambda}$-{\it diagonal subalgebra} of $R$. This notion plays
an important role in the study of embeddings of blowups of projective
schemes \cite{CHTV}. \par
It is easy to check that for all positive integers ${\lambda},$
$$e(R^{\lambda}) = n!\displaystyle \sum_{{\alpha} \in \NN^{n+1},\;\; |{\alpha}| = n}
\frac{1}{{\alpha}!}e_{\alpha}(R){\lambda}^{\alpha}.$$
Since $R^{\lambda}$ is a ring generated by monomials of the same degree, using Ehrhart's theory for the
number of lattice points in lattice polytopes
(see e.g. \cite{St}) we have
$$e(R^{\lambda}) = \frac{n!V_n({\lambda}{\bf Q})}{\sqrt{n+1}}.$$
Now we can compare the two Minkowski formulas and obtain the following relationship between mixed multiplicities and mixed volumes.
\begin{Theorem}[Trung-Verma, 2007]
With the above notation we have
$$e_{\alpha}(R) = \frac{MV_n({\bf Q}_{\alpha})}{\sqrt{n+1}}$$
for any ${\alpha} \in \NN^{n+1}$ with $|{\alpha}| = n$.
\end{Theorem}
As a consequence, for ${\alpha} = (0,1,...,1)$ we get
$$e_{(0,1,...,1)}(R) = \frac{MV_n(Q_{g_1},...,Q_{g_n})}{\sqrt{n+1}}= MV_n(Q_{f_1},...,Q_{f_n}),$$
which completes the proof of Bernstein's theorem. Similarly, we can show that Bernstein's theorem holds for polynomials over any algebraically closed field of characteristic zero.
Since any collection of $n$ lattice polytopes in ${\mathbb R}^n$ can be realized as the Newton polytopes of $n$ polynomials,
we can always express mixed volumes of lattice polytopes as mixed multiplicities of ideals.
It is known that computing mixed volumes is a hard enumerative problem.
Instead, we can now compute mixed multiplicities of
the associated graded ring of the multigraded Rees algebra
$A[J_1t_1,...,J_nt_n]$ with respect to the ideal ${\frk m}$. By
Theorem \ref{positivity}, these mixed multiplicities can be interpreted as
Samuel multiplicities. The computation of these multiplicities can be
carried out by computer algebra systems such as {\it Cocoa, Macaulay 2} and
{\it Singular.}
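For small examples the mixed volume itself can also be evaluated directly from the inclusion-exclusion definition above. The following minimal Python sketch (our own illustration, independent of the computer algebra computations just mentioned; all function names are ours) computes $MV_2$ of two lattice polygons and verifies the Bezout bound $d_1d_2$ for two plane curves of degrees $2$ and $3$.
\begin{verbatim}
# Mixed volume of two lattice polygons straight from the definition:
# MV_2(P, Q) = V_2(P + Q) - V_2(P) - V_2(Q).

def hull(points):                      # Andrew's monotone chain convex hull
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def area(poly):                        # shoelace formula on the convex hull
    h = hull(poly)
    if len(h) < 3:
        return 0.0
    s = sum(h[i][0]*h[(i+1) % len(h)][1] - h[(i+1) % len(h)][0]*h[i][1]
            for i in range(len(h)))
    return abs(s) / 2.0

def minkowski_sum(P, Q):               # vertex-wise sums span P + Q
    return [(p[0]+q[0], p[1]+q[1]) for p in P for q in Q]

def mixed_volume_2(P, Q):
    return area(minkowski_sum(P, Q)) - area(P) - area(Q)

# Newton polytopes of plane curves of degrees 2 and 3: Bezout bound 2*3 = 6
print(mixed_volume_2([(0,0),(2,0),(0,2)], [(0,0),(3,0),(0,3)]))   # 6.0
\end{verbatim}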
\section{Minkowski inequalities and equalities}
Let $(A,{\frk m})$ be a local ring of dimension $d$. Suppose $I$ and $J$ are ${\frk m}$-primary ideals. Then $e(IJ)$ can be computed
if the mixed multiplicities of $I$ and $J$ are known:
$$
e(IJ) = e(I) +\binom{d}{1} e_1(I|J) +\cdots + \binom{d}{i}e_i(I|J) + \cdots + e(J).$$
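For a quick numerical check (a standard example, not taken from \cite{t1}): let $A = k[[x,y]]$, $I = {\frk m}$ and $J = {\frk m}^2$. From $\ell(A/I^uJ^v) = \ell(A/{\frk m}^{u+2v})$ one gets $e_0(I|J) = 1$, $e_1(I|J) = 2$ and $e_2(I|J) = 4$, and indeed $e(IJ) = e({\frk m}^3) = 9 = 1 + 2\cdot 2 + 4$.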
Teissier \cite{t1} made the following conjectures based on the comparison of this formula with the binomial expansion
$$(e(I)^{\frac{1}{d}} + e(J)^{\frac{1}{d}})^d=
e(I) +\cdots + \binom{d}{i}e(I)^{\frac{d-i}{d}}e(J)^{\frac{i}{d}}+\cdots +e(J).$$
\sk
\noindent {\bf Teissier's First Conjecture}:
{\em For all $i = 0, 1, ..., d,$}
$$e_i(I|J)^d \leq e(I)^{d-i}e(J)^i.$$
The validity of Teissier's First Conjecture implies the
{\em Minkowski's inequality} for multiplicities:
$$e(IJ)^{1/d} \leq e(I)^{1/d} + e(J)^{1/d}.$$
\sk
\noindent {\bf Teissier's Second Conjecture}:
{\em Let $d \geq 2.$ Put $e_i(I|J)=e_i$ for $i=0, 1, \ldots, d.$ Then}
$$\frac{e_1}{e_0} \leq \frac{e_2}{e_1} \leq \cdots \leq \frac{e_d}{e_{d-1}}.$$
Teissier proved that the second conjecture implies the first. He also showed the validity of the second conjecture for reduced Cohen-Macaulay complex analytic algebras \cite{t3}. Rees and Sharp \cite{resh} investigated these conjectures for all local rings
and proved:
\begin{Theorem}[Rees and Sharp, 1978] \label{Rees-Sharp}
Teissier's conjectures, and hence the Minkowski inequality for multiplicities, hold for all local rings.
\end{Theorem}
It is natural to ask when equalities hold in Minkowski inequalities. It is easy to see that if $I$ and $J$ are {\it projectively equivalent}, that is, there exist positive integers $r$ and $s$ so that $I^r$ and $J^s$ have the same integral closures, then
$$e(IJ)^{1/d} = e(I)^{1/d}+ e(J)^{1/d}.$$
The converse was proved by Teissier \cite{t4} for Cohen-Macaulay normal complex analytic algebras by using mixed multiplicities. Then Katz \cite{ka} showed that in quasi-unmixed local rings,
Minkowski equalities hold for ${\frk m}$-primary ideals $I$ and $J$ if and only if
they are projectively equivalent.
It is interesting to note that Rees Multiplicity Theorem is a consequence of Minkowski equalities.
We reproduce Teissier's lightning proof of the converse of Rees Multiplicity Theorem found in \cite{t4}.
Let $A$ be a quasi-unmixed local ring. Let $J \subseteq I$ be ${\frk m}$-primary ideals with $e(I)=e(J)=e.$ Since $JI \subseteq I^2,$ $e(IJ) \geq e(I^2)=2^d e(I).$ Hence
$$2 e^{1/d} \leq e(IJ)^{1/d} \leq e^{1/d}+ e^{1/d} =2 e^{1/d},$$
which implies $e(IJ)^{1/d} = e^{1/d}+ e^{1/d}$. Therefore, $I$ and $J$ are projectively equivalent
so that there exist positive integers $r$ and $s$ such that $\overline{I^r}=\overline{J^s}.$
It follows that $e(I^r) = e(J^s)$. Since $e(I^r)=r^de$ and $e(J^s)=s^de,$ we get $r = s$; hence $\overline{I^r}=\overline{J^r}$, so $\overline{I}=\overline{J}$, which means that $J$ is a reduction of $I.$
We have seen in the preceding section that mixed volume is only a special case of mixed multiplicities.
Therefore, properties of mixed volumes may predict unknown properties of mixed multiplicities. For instance,
consider the famous Alexandroff-Fenchel inequality among mixed
volumes:
$$MV_n(Q_1,...,Q_n)^2 \ge MV_n(Q_1,Q_1,Q_3,...,Q_n)MV_n(Q_2,Q_2,Q_3,...,Q_n).$$
Khovanski \cite{Kh} and Teissier \cite{t5} used the Hodge index theorem in
intersection theory to prove this inequality.
This leads us to believe that a similar inequality should hold for
mixed multiplicities \cite{TV}.
\begin{Question}
{\rm Let $(A,{\frk m})$ be a local (or standard graded) ring
with $\dim A = n+1 \ge 3$. Let $I$ be an ${\frk m}$-primary ideal and
$J_1,...,J_n$ ideals of height $n$. Put $\alpha=(0,1, \ldots, 1).$
Is it true that
$$
e_{\alpha}(I|J_1,...,J_n)^2 \ge
e_{\alpha}(I|J_1,J_1,J_3,...,J_n)e_{\alpha}(I|J_2,J_2,J_3,...,J_n)~?
$$}
\end{Question}
Using Theorem \ref{positivity} we can reduce this question to the case
$\dim A = 3$.
In this case, we have to prove the simpler inequality:
$$e_{(0,1,1)}(I|J_1,J_2)^2 \ge e_{(0,1,1)}(I|J_1,J_1)e_{(0,1,1)}(I|J_2,J_2).$$
Unfortunately, we were unable to give an answer to the above question.
The difficulty can be seen from the fact that
the above inequality does not hold if $J_1,...,J_n$ are
${\frk m}$-primary ideals. In fact, we can even show that the
reverse inequality holds, namely,
$$e_{\alpha}(I|J_1,...,J_n)^2 \le
e_{\alpha}(I|J_1,J_1,J_3,...,J_n)e_{\alpha}(I|J_2,J_2,J_3,...,J_n)$$
where $\alpha=(0,1,1, \ldots, 1).$
For that we only need to show the inequality
$$e_{(1,1)}(J_1|J_2)^2 \le e(J_1,A)e(J_2,A),$$
for a two-dimensional ring $A$. But this is a special case of Theorem \ref{Rees-Sharp}.
\section{The multiplicity sequence}
The main aim of this section is to present another generalization of the multiplicity of an ideal by mixed multiplicities.
Let $(A,{\frk m})$ be a local ring of dimension $d$ and $I$ an ideal of $A$.
If $I$ is ${\frk m}$-primary, we can consider the Hilbert-Samuel function $\ell(A/I^t)$ and define the multiplicity $e(I)$. Actually, $e(I)$ is the multiplicity of the associated graded ring $G := G_I(A)$.
If $I$ is not ${\frk m}$-primary, we can replace $G$ by the associated graded ring of the ideal ${\frk m} G$ of $G$:
$$R := G_{{\frk m} G}(G) = \bigoplus_{(u,v) \in \NN^2}\big({\frk m}^uI^v + I^{v+1}\big)/\big({\frk m}^{u+1}I^v +I^{v+1}\big).$$
This is a standard bigraded algebra over the residue field $A/{\frk m}$.
Hence we can consider the Hilbert function $H_R(u,v)$.
Since $H_R(u,v)$ is a polynomial for $u,v$ large enough, the sum transform
$$H_R^{(1,1)}(u,v) := \sum_{i = 0}^u\sum_{j = 0}^vH_R(i,j)$$
is given by a polynomial $P_R^{(1,1)}(u,v)$ for $u,v$ large enough. It is easy to check that $\deg P_R^{(1,1)}(u,v) = d$.
If we write this polynomial in the form
$$P_R^{(1,1)}(u,v) =\sum_{i= 0}^d
\frac{c_{i \; d-i }(R)}{i!(d-i)!}u^iv^{d-i} +
\text{\rm lower-degree terms},$$
then $c_{i\;d-i}(R) $ are non-negative integers for $i = 0,...,d$. We set
$$c_i(I) := c_{i\;d-i}(R).$$
It is easily seen that the mixed multiplicities of $R$ belong to the multiplicity sequence: $e_{ij}(R) = c_{i+1}(I)$ for $i+j = d-2$.
The multiplicity of the associated graded ring $G$ with respect to the maximal graded ideal can be expressed as the sum of the multiplicity sequence.
\begin{Theorem}[Dade, 1960]
Let $M$ denote the maximal graded ideal of the associated graded ring $G$ of $I$. Then
$$e(G_M) = \sum_{j=0}^dc_j(I).$$
\end{Theorem}
Achilles and Manaresi \cite{AM2} call $c_0(I),...,c_d(I)$ the {\it multiplicity sequence} of $I$.
The multiplicity sequence can be considered as a generalization of the multiplicity of an ${\frk m}$-primary ideal.
\begin{Theorem}[Achilles-Manaresi, 1997]
Let $s = s(I)$ and $r = \dim A/I$. Then\par
{\rm (a) } $c_j(I) = 0$ for $j <d-s$ and $j > r$,\par
{\rm (b) } $c_{d-s}(I) = \sum e({\frk m} G_P)e(G/P),$ where $P$ runs through all highest associated prime ideals of ${\frk m} G$ such that $\dim G/P + \height P = d$, \par
{\rm (c) } $c_r(I) = \sum e(I_\wp)e(A/\wp),$ where $\wp$ runs through all highest associated prime ideals of $I$ such that $\dim A/\wp + \height \wp = d$.
\end{Theorem}
As a consequence, if $I$ is an ${\frk m}$-primary ideal, then $c_0(I) = e(I)$ and $c_i(I) = 0$ for $i > 0$.
In particular, $c_0(I) > 0$ if and only if $s(I) = d$. In this case, we set $j(I) := c_0(I)$ and call it the $j$-{\it multiplicity} of $I$ \cite{AM1}.
Using $j$-multiplicity one can extend Rees Multiplicity Theorem for arbitrary ideals as follows \cite{FM}.
\begin{Theorem}[Flenner-Manaresi-Ulrich]
Let $J \subseteq I$ be ideals of $A$.
If $J$ is a reduction of $I$, then $j(J_\wp) = j(I_\wp)$ for all prime ideals $\wp \supseteq I$ with $s(I_\wp) = \dim A_\wp$. The converse holds if $A$ is a quasi-unmixed local ring.
\end{Theorem}
The multiplicity sequence can be computed by the following formula.
\begin{Theorem}[Achilles-Manaresi, 1997] \label{AM}
Let $Q = (x_1,...,x_s)$ be a minimal reduction of $I$ such that the images of $x_1,...,x_s$ in $R_{(0,1)} = I/{\frk m} I$ form a filter-regular sequence of $R$ with respect to the ideal $(R_{(0,1)})$.
Set $Q_0 := 0$ and $Q_i = (x_1,...,x_i)$ for $i = 1,...,s$. Then
$$c_{d-i}(I) = \sum \ell(A_\wp/(Q_{i-1}:I^\infty,x_i)_\wp)e(A/\wp),$$
where $\wp$ runs through all associated prime ideals of the ideal $(Q_{i-1}:I^\infty,x_i)$ that contain $I$.
\end{Theorem}
For the computation of the multiplicity sequence we may assume that the residue field of $A$ is infinite.
In this case we can always find a minimal reduction $Q$ of $I$ which satisfies the assumption of the above theorem.
Moreover, if $J$ is a reduction of $I$, then the ideal $J^*$ of $R$ generated by the images of the elements of $J$ in $R_{(0,1)}= I/{\frk m} I$ has the property $(J^*)_{(u,v)} = (R_{(0,1)})_{(u,v)}$ for all $u,v$ large enough.
Therefore, we can find a minimal reduction $Q$ of $J$ (and hence of $I$) which satisfies the assumption of the above theorem for both ideals $J$ and $I$. As an immediate consequence we obtain the following fact
(a complicated proof was given by Ciuperca in \cite{C}):
\begin{Corollary}
If $J$ is a reduction of $I$, then $c_i(J) = c_i(I)$ for all $i = 0,...,d$.
\end{Corollary}
Inspired by the Rees Multiplicity Theorem, we raise the following question.
\begin{Question}
{\rm Let $A$ be a quasi-unmixed local ring and $J$ an ideal in $I$ with $\sqrt{J} = \sqrt{I}$.
Is $J$ a reduction of $I$ if $c_i(J) = c_i(I)$ for $i = 0,...,d$?}
\end{Question}
The multiplicity sequence can be used to compute the degree of the St\"uckrad-Vogel cycles in the intersection algorithm.
Let $X, Y \subset {\PP}_k^n$ be two equidimensional subschemes. One can associate with $X \cap Y$ certain cycles $v_1,...,v_n$ as follows \cite{SV}, \cite{Vo}.
Let $V$ be the ruled join variety of $X$ and $Y$ in
$${\PP}_{k(t)}^{2n+1} = \Proj\ k(t)[x_0,\ldots,x_n,y_0,\ldots,y_n],$$
where $k(t) = k(t_{ij}|\ 1 \le i \le n+1, 0 \le j \le n)$ is a pure transcendental extension of $k$.
Put $w_0 = [V]$. Let $E$ be the linear subspace of
${\PP}_{k(t)}^{2n+1}$ given by the equations $x_0-y_0 = \cdots = x_n-y_n = 0$. For $i = 0,\ldots,n-1$ let $h_i$ denote the divisor of $V$ given by the equation $\sum_{j=0}^nt_{ij}(x_j-y_j) = 0$. If $w_{i-1}$ is defined for some $i \ge 1$, we decompose $w_{i-1} \cap h_{i-1} = v_i+w_i,$ where the support of $v_i$ lies in $E$ and $w_i$ has no components contained in $E$.
Using the cycles $v_1,...,v_n$ St\"uckrad and Vogel proved that there exist a set ${\Lambda (X,Y)}$ of irreducible subschemes $C$ of $(X \cap Y) \times_k k(t)$ and intersection numbers $j(X,Y;C)$ such that
$$\deg X\deg Y = \sum_{C \in {\Lambda(X,Y)}}j(X,Y;C)\deg C.$$
Algebraically, if we set $A = k(t)[x_0,\ldots,x_n,y_0,\ldots,y_n]/(I_X,I_Y)$, where $I_X$ and $I_Y$ denote the defining ideals of $X$ and $Y$ in $k[x_0,\ldots,x_n]$ and $k[y_0,\ldots,y_n]$, and $I = (x_0-y_0,\ldots,x_n-y_n)A$,
then $\deg v_i = c_{d-i}(I)$ by Theorem \ref{AM}. \par
Using Theorem \ref{main} we can also describe $\deg v_i$ in terms of the mixed multiplicities $e_i({\mathfrak m}|I)$
\cite{Tr2}.
\begin{Theorem}[Trung, 2003]
With the above notations we have
$$\deg v_i = e_{i-1}({\mathfrak m}|I) - e_i({\mathfrak m}|I).$$
\end{Theorem}
Achilles and Rams \cite{AR} showed that the Segre numbers introduced by Gaffney and Gassler \cite{GG} in singularity theory and the extended index of intersection introduced by Tworzewski \cite{Tw} in analytic intersection theory are special cases of the multiplicity sequence. We refer the readers to the report \cite{AM3} for further applications of the multiplicity sequence.
\section{Hilbert function of non-standard bigraded algebras}
In general, the Hilbert function $H_R(u,v)$ of a finitely generated bigraded algebra $R$ over a field $k$ is not a polynomial for large $u,v$.
In this section we will study the case when $R$ is generated by elements of
bidegrees $(1,0),(d_1,1),\ldots,(d_r,1)$,
where $d_1,\ldots,d_r$ are non-negative integers.
This case was considered first by P.~Roberts in \cite{ro} where it is
shown that there exist integers $c$ and $v_0$ such that $H_R(u,v)$
is equal to a polynomial $P_R(u,v)$ for $u \ge cv$ and $v \ge v_0$.
He calls $P_R(u,v)$ the {\it Hilbert polynomial} of the bigraded
algebra $R$.
It is worth remarking that Hilbert polynomials of bigraded algebras of
the above type appear in Gabber's proof of Serre's non-negativity
conjecture (see e.g. \cite{Ro2}) and that the positivity of a certain coefficient
of such a Hilbert polynomial is strongly related to Serre's positivity
conjecture on intersection multiplicities \cite{Ro3}. \par
Roberts' result can be made more precise as follows \cite{HT}.
\begin{Theorem} \label{exist}
Let $d = \max\{d_1,\ldots,d_r\}.$
There exist integers $u_0,v_0$ such that for $u\ge dv+ u_0$ and $v \ge
v_0$, $H_R(u,v)=P_R(u,v)$.
\end{Theorem}
If $R$ is a standard bigraded algebra, then $d = 0$. Hence the Hilbert function $H_R(u,v)$ is given by a polynomial for $u,v$ large enough. As in the standard bigraded case, the total degree $\deg P_R(u,v)$ can also be expressed in terms of the relevant dimension of $R$ \cite{HT}.
\begin{Theorem} \label{total 1}
$\deg P_R(u,v) = \rdim R-2.$
\end{Theorem}
The partial degree $\deg_u P_R(u,v)$ can be expressed in terms of the graded modules
$$R_v := \bigoplus_{u \ge 0}R_{(u,v)}.$$
Note that $R_0$ is a finitely generated standard $\NN$-graded algebra
and $R_v$ is a finitely generated graded $R_0$-module. Define
$$\operatorname{sdim} R := \dim (R/0:R_+^\infty)_0.$$
If $R$ is a standard bigraded algebra, then $(R/0:R_+^\infty)_0 = R/(0:R_+^\infty + (R_{(0,1)}))$.
\begin{Theorem} \label{partial}
For $t$ large enough,
$$\deg_u P_R(u,v) = \dim R_t-1 = \operatorname{sdim} R.$$
\end{Theorem}
This result was already proved implicitly by P. Roberts for bigraded algebras generated by
elements of bidegree $(1,0),(0,1),(1,1)$ \cite{Ro3}.
By Theorem \ref{total 1} and Theorem \ref{partial} we always have
$$\rdim R = \deg P_R(u,v) \ge \deg_u P_R(u,v) = \operatorname{sdim} R.$$
Note that the inequality may be strict.
\begin{Question}
Do there exist similar formulas for $\deg_v P_R(u,v)$?
\end{Question}
Now we write the Hilbert polynomial $P_R(u,v)$ in the form
$$P_R(u,v) = \sum_{i= 0}^s \frac{e_i(R)}{i!(s-i)!}u^iv^{s-i} +
\text{\rm lower-degree terms},$$
where $s = \deg P_R(u,v)$. Following Teissier we call the numbers $e_i(R)$ the {\it mixed multiplicities} of
$R$. One can show that the mixed multiplicities $e_i(R)$ satisfy the associativity formula of Proposition \ref{associative}.
Unlike the case of standard bigraded algebras, a mixed multiplicity $e_i(R)$ may be negative.
\begin{Example} \label{polynomial}
{\rm Let $S =
k[X_1,\ldots,X_m,Y_1,\ldots,Y_n]$ $(m \ge 1, n \ge 1)$ be a bigraded polynomial ring with
$\deg X_i = (1,0)$ and $\deg Y_j = (d_j,1).$
We have $H_S(u,v) = P_S(u,v)$ for $u \ge dv$, where $d = \max\{d_1,\ldots,d_n\}$, with $\deg P_S(u,v) = m+n-2$ and
$$e_{i,m+n-2-i} = \left\{\begin{array}{ll} (-1)^{m-i-1}\displaystyle
\sum_{j_1+ \ldots + j_n = m-1-i}d_1^{j_1}\cdots d_n^{j_n} & \text{if }\ i
< m,\\
0 & \text{if }\ i \ge m.
\end{array}\right.$$}
\end{Example}
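For instance (an easy direct check of the example): for $m=2$, $n=1$ and $\deg Y_1 = (d,1)$ one counts $H_S(u,v) = u - dv + 1$ for $u \ge dv$, so $P_S(u,v) = u - dv + 1$, and the formula gives $e_{1,0} = 1$ and $e_{0,1} = -d$, a negative mixed multiplicity as soon as $d > 0$.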
We set $\rho_R := \max\{i|\ e_i(R) \neq 0\}.$
\begin{Theorem}[Hoang-Trung, 2003] \label{coefficient}
The mixed multiplicities $e_i(R)$ are integers with $e_{\rho_R}(R) >
0$.
\end{Theorem}
We always have $\rho_R \le \deg_uP_R(u,v)$.
The following result gives a sufficient condition for $\rho_R = \deg_uP_R(u,v)$.
Note that this condition is satisfied if
$R$ is a domain or a Cohen-Macaulay ring.
\begin{Proposition}[Hoang-Trung, 2003] \label{equi}
Suppose $\dim R/P = \rdim R$ for all minimal prime ideals $P$ of $\Proj
R$. Let $d = \deg_u P_R(u,v)$. Then $e_d(R) > 0$.
\end{Proposition}
As a consequence we obtain the following generalization of a result by P. Roberts \cite{Ro3} for bigraded algebras
generated by elements of bidegree $(1,0),(0,1),(1,1)$.
That result was used to give a criterion for the positivity of Serre's
intersection multiplicity.
\begin{Corollary}
Suppose there exists an associated prime ideal $P$ with $\dim R/P = \rdim R$
and $\operatorname{sdim} R/P = \operatorname{sdim} R$. Let $s = \operatorname{sdim} R$. Then $e_s(R) > 0$.
\end{Corollary}
\begin{Question}
Can one describe $\rho_R$ in terms of well-understood invariants of $R$?
\end{Question}
\section{Hilbert function of bigraded Rees algebras}
The inspiration for our study of Hilbert functions of non-standard bigraded algebras
comes mainly from the fact that these algebras include Rees algebras of homogeneous ideals.
Let $A$ be a standard graded algebra over a field $k$. Let $I$ be a
homogeneous ideal of $A$. The Rees algebra $A[It]$ is naturally bigraded:
$$A[It]_{(u,v)} := (I^v)_ut^v$$
for all $(u,v) \in \NN^2$.
Let $A = k[x_1,\ldots,x_n]$, where
$x_1,\ldots,x_n$ are homogeneous elements with $\deg x_i = 1$. Let $I =
(f_1,\ldots,f_r)$, where $f_1,\ldots,f_r$ are homogeneous elements with
$\deg f_j = d_j$. Put $y_j = f_jt$. Then $A[It]$ is generated by the
elements $x_1,\ldots,x_n$ and $y_1,\ldots,y_r$ with $\deg x_i = (1,0)$
and $\deg y_j = (d_j,1)$. Hence $A[It]$ belongs to the class of
bigraded algebras considered in the preceding section.
\begin{Theorem}[Hoang-Trung, 2003] \label{Rees}
Set $d = \max\{d_1,\ldots,d_r\}$
and $s = \dim A/0:I^\infty-1$. There exist integers $u_0,v_0$ such
that for
$u \ge dv+u_0$ and $v \ge v_0$, the Hilbert function $H_{A[It]}(u,v)$
is equal to a polynomial $P_{A[It]}(u,v)$ with
$$\deg P_{A[It]}(u,v) = \deg_u P_{A[It]}(u,v) = s.$$
Moreover, if $s \ge 0$ and $P_{A[It]}(u,v)$ is written in the form
$$P_{A[It]}(u,v) = \sum_{i=0}^s\frac{e_i(A[It])}{i!(s-i)!}u^iv^{s-i} +
\text{\rm lower-degree terms},$$
then the coefficients $e_i(A[It])$ are integers for all $i$ with
$e_s(A[It]) = e(A/0:I^\infty)$.
\end{Theorem}
This result has some interesting applications.
First of all, the Hilbert polynomial $P_{A[It]}(u,v)$ can be used to compute
the Hilbert polynomial of the quotient ring $A/I^v$. In fact, since $H_{A/I^v}(u) = H_A(u) - \dim_k (I^v)_u = H_A(u) - H_{A[It]}(u,v)$, we have
$$P_{A/I^v}(u) = P_A(u) - P_{A[It]}(u,v)$$
for $v$ large enough.
In particular, we can prove the following property of the function $e_{i}(M/I^kM)$ for any finitely generated graded $A$-module $M$ \cite{HPV}.
\begin{Theorem}[Herzog-Puthenpurakal-Verma, 2007]
The Hilbert coefficient $e_{i}(M/I^kM)$ as a function of $k$ is of
polynomial type of degree $\leq n-d+i$, where $d = \dim M/IM$.
\end{Theorem}
Let $V$ denote the blow-up of the subscheme of $\Proj A$ defined by
$I$. It is known that $V$ can be embedded into a projective space by the
linear system $(I^e)_c$ for any pair of positive integers $e,c$ with $c >
de$ \cite{CH}. Such embeddings often yield interesting rational
varieties such as the Bordiga-White surfaces,
the Room surfaces and the Buchsbaum-Eisenbud varieties. Let $V_{c,e}$ denote the embedded variety.
The homogeneous coordinate ring of $V_{c,e}$ is the subalgebra $k[(I^e)_c]$ of $A$.
It has been observed in \cite{STV} and \cite{CHTV} that $k[(I^e)_c]$ can be identified as the subalgebra of $A[It]$ along
the diagonal $\{(cv,ev)|\ v \in \NN\}$ of $\NN^2$.
Since $P_{A[It]}(cv,ev)$ is the Hilbert polynomial of $k[(I^e)_c]$, we
may get uniform information
on all such embeddings from $P_{A[It]}(u,v)$.
\begin{Proposition} \label{embedded} Let $s = \dim A/0:I^\infty-1$.
Assume that $c > de$. Then
$$\deg V_{c,e} = \displaystyle \sum_{i=0}^s \binom{s}{i}e_i(A[It])c^ie^{s-i}.$$
\end{Proposition}
If $I$ is generated by a $d$-sequence we have the following formula for the mixed multiplicities
$e_i(A[It])$, which displays a completely different behavior of
$e_i(A[It])$ than that of $e_i({\frk m}|I)$ (see Theorem \ref{HTR}).
\begin{Theorem}[Hoang-Trung, 2003] \label{d-sequence}
Let $I$ be an ideal generated by a homogeneous $d$-sequence
$f_1,\ldots,f_r$ with $\deg f_j= d_j$ and $d_1\le\ldots\le d_r$. Let
$I_q=(f_1,\ldots,f_{q-1}):f_q$ for $q=1,\ldots,r$. Set
\begin{align*}
s & := \dim A/I_1-1,\\
m& := \max\{q|\ \dim A/I_q + q-2 = s\}.
\end{align*}
Then $\deg P_{A[It]}(u,v) = s$ and
$$e_i(A[It]) =
\sum_{q=1}^{\min\{m,s-i+1\}}(-1)^{s-q-i+1}e(A/I_q)\sum_{j_1+\ldots +j_q=s-q-i+1}d_1^{j_1}\ldots d_q^{j_q}$$
for $i = 0,\ldots,s$.
\end{Theorem}
Now we will apply Theorem \ref{d-sequence} to compute the mixed multiplicities of Rees algebras of complete intersections and of determinantal ideals.
\begin{Corollary} \label{regular}
Let $f_1,\ldots,f_r$ be a homogeneous
regular sequence with $\deg f_1=d_1 \le\ldots\le \deg f_r = d_r$ and $I
= (f_1,\ldots,f_r)$. Set $s = \dim A-1$. Then $\deg P_{A[It]}(u,v) =
s$ and
\begin{align*}
& e_i(A[It]) =\\
& \sum_{q=1}^{\min\{r,s-i+1\}} (-1)^{s-q-i+1}
e(A)\sum_{j_1+\ldots +j_q=s-q-i+1}d_1^{j_1+1}\ldots d_{q-1}^{j_{q-1}+1}d_q^{j_q}
\end{align*}
for $i = 0,\ldots,s$.
\end{Corollary}
\begin{Corollary} \label{minor}
Let $A = k[X]$, where $X$ is a
$(r-1)\times r$ matrix of indeterminates. Let $I$ be the ideal of the maximal
minors of $X$ in $A$. Set $s = (r-1)r-1$. Then $\deg
P_{A[It]}(u,v) = s$ and
$$e_i(A[It]) = \sum_{q=1}^{\min\{r,s-i+1\}} (-1)^{s-q-i+1}\binom{r -1}{q-1} \binom{s-i}{q-1} r^{s-q-i+1}$$
for $i = 0,\ldots,s$.
\end{Corollary}
Hoang \cite{Ho2} also computed $e_i(A[It])$ in the case where $I$ is the defining ideal of a rational normal curve.
| {
"timestamp": "2008-02-17T04:07:34",
"yymm": "0802",
"arxiv_id": "0802.2329",
"language": "en",
"url": "https://arxiv.org/abs/0802.2329",
"abstract": "This paper is a survey on major results on Hilbert functions of multigraded algebras and mixed multiplicities of ideals, including their applications to the computation of Milnor numbers of complex analytic hypersurfaces with isolated singularity, multiplicities of blowup algebras and mixed volumes of polytopes.",
"subjects": "Commutative Algebra (math.AC); Algebraic Geometry (math.AG)",
"title": "Hilbert functions of multigraded algebras, mixed multiplicities of ideals and their applications",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9873750536904563,
"lm_q2_score": 0.7185943985973773,
"lm_q1q2_score": 0.7095221828967465
} |
https://arxiv.org/abs/1508.03699 | On the structure of braid groups on complexes | We consider the braid groups $\mathbf{B}_n(X)$ on finite simplicial complexes $X$, which are generalizations of those on both manifolds and graphs that have been studied already by many authors. We figure out the relationships between geometric decompositions for $X$ and their effects on braid groups, and provide an algorithmic way to compute the group presentations for $\mathbf{B}_n(X)$ with the aid of them. As applications, we give complete criteria for both the surface embeddability and planarity for $X$, which are the torsion-freeness of the braid group $\mathbf{B}_n(X)$ and its abelianization $H_1(\mathbf{B}_n(X))$, respectively. | \section{Introduction}
The braid group $\mathbf{B}_n(D^2)$ on a 2-disk $D^2$ was first introduced by E.~Artin in the 1920's, and Fox and Neuwirth generalized it to braid groups $\mathbf{B}_n(X)$ on arbitrary topological spaces $X$ via {\em configuration spaces}, which are defined as follows. For a compact, connected topological space $X$, the {\em ordered configuration space $F_n(X)$} is the set of $n$-tuples of distinct points in $X$, and the orbit space $B_n(X)$ under the action of the symmetric group $\mathbf{S}_n$ on $F_n(X)$ permuting coordinates is called the {\em unordered configuration space} on $X$.
\[F_n(X)=X^n\setminus \Delta,\quad B_n(X)=F_n(X)/\mathbf{S}_n,\]
where
\[\Delta=\{(x_1,\dots,x_n)|x_i=x_j \text{ for some }i\neq j\}\subset X^n.\]
Let $\bar*_n$ and $*_n$ be basepoints for $F_n(X)$ and $B_n(X)$, respectively.
Then the {\em pure $n$-braid group} $\mathbf{P}_n(X,\bar*_n)$ and {\em (full) $n$-braid group $\mathbf{B}_n(X,*_n)$} are defined to be the fundamental groups of the configuration spaces $F_n(X)$ and $B_n(X)$, respectively. We will suppress basepoints and denote these groups by $\mathbf{P}_n(X)$ and $\mathbf{B}_n(X)$ unless any ambiguity occurs.
However, most of the research on braid groups has been focused on braid groups on manifolds, more specifically, on surfaces, until the end of the 20th century when Ghrist presented a pioneering paper \cite{Gh} about braid groups on {\em graphs $\Gamma$}, which are finite, 1-dimensional simplicial complexes.
In 2000, Abrams defined in his Ph.~D. thesis \cite{Ab} a combinatorial version of configuration space, called a {\em discrete configuration space}, consisting of $n$ open cells in $\Gamma$ having pairwise no common boundaries.
A discrete configuration space has the benefit that it admits a cubical complex structure making the description of paths of points easier. However, it depends not only on the homeomorphic type but also on the cell structure of the underlying graph $\Gamma$.
Abrams overcame this problem by proving that the homotopy type is stable under subdivision of edges once $\Gamma$ is sufficiently subdivided.
Crisp and Wiest showed that braid groups on graphs and surface groups embed into right-angled Artin groups, which are among the most important objects in geometric group theory.
Farley and Sabalka in \cite{FS} used Forman's {\em Discrete Morse theory} \cite{For} on discrete configuration spaces to provide an algorithmic way to compute a presentation of $\mathbf{B}_n(\Gamma)$, and furthermore they figured out the relation between braid groups on trees and right-angled Artin groups.
On the extension of these works, Kim-Ko-Park in \cite{KKP} and Ko-Park in \cite{KP} provided geometric criteria for the braid group on a given graph to be right-angled Artin, and moreover a new algebraic criterion for the planarity of a graph, and answered some open questions as well.
On the contrary, for a simplicial complex of dimension 2 or higher that is not a manifold, braid theory is still unexplored. We will focus on the braid groups on finite, connected simplicial complexes $X$ of arbitrary dimension, which are generalizations of both graphs and surfaces.
We consider modifications---attaching or removing higher cells, edge contractions or their inverses, and so on---and how these modifications change the braid groups.
Indeed, via suitable modifications we may obtain a {\em simple} complex $X'$ of dimension 2 whose vertices have very obvious links. Furthermore, this can be done without changing the braid group.
\begin{thm}\label{thm:simple}
Let $X$ be a complex. Then there is a simple complex $X'$ of dimension $2$ such that
$\mathbf{B}_n(X)\simeq\mathbf{B}_n(X')$ for all $n\ge 1$.
\end{thm}
Once we have a simple complex $X$, it can be decomposed by {\em cuts} into much simpler pieces, and eventually into {\em elementary} complexes, where an elementary complex plays the role of a building block and can be thought of as either a {\em star} graph or a manifold of dimension at least 2.
For the build-up process, we provide two types of {\em combination theorem} which are generalizations of capping-off and connected sum.
Furthermore, the combination theorems ensure that the build-up process preserves some geometry of the given pieces. In other words, the braid group $\mathbf{B}_n(X)$ captures some geometric properties of $X$ as observed before.
More precisely, we start with the obvious observations about the various embeddabilities of $X$ into manifolds as follows.
For two complexes $X$ and $Y$, we write $Y\subset X$ and say that $X$ {\em contains} $Y$ if there is a simplicial embedding of $Y$ into $X$ after sufficient subdivisions.
Then a complex $X$ embeds into
(i) a circle iff $T_3\not\subset X$;
(ii) a surface iff $S_0\not\subset X$; and
(iii) a plane iff $K_5, K_{3,3}, S_0\not\subset X$.
The complexes $T_3$ and $S_0$ are the {\em tripod} and the cone $C(S^1\sqcup\{*\})$ of the union of a circle and a point, respectively. See Figure~\ref{fig:S_0}. The graphs $K_n$ and $K_{m,n}$ are complete and complete bipartite graphs, respectively.
\begin{figure}[ht]
\[
T_3=\vcenter{\hbox{\input{T_3.pdf_tex}}}\qquad\qquad
S_0=\vcenter{\hbox{\input{S_0.pdf_tex}}}
\]
\caption{A tripod $T_3$ and a complex $S_0$}
\label{fig:S_0}
\label{fig:tripod}
\end{figure}
Then it can be formulated as follows.
\begin{thm}\label{thm:emb}
Let $X$ be a finite, connected simplicial complex different from $S^2$ and $\mathbb{R}P^2$. Then $X$ embeds into
\begin{enumerate}
\item a circle if and only if $\mathbf{B}_n(X)$ is abelian for any $n\ge 1$;
\item a surface if and only if $\mathbf{B}_n(X)$ is torsion-free for any $n\ge 1$;
\item a plane if and only if $H_1(\mathbf{B}_n(X))$ is torsion-free for any $n\ge 1$.
\end{enumerate}
Moreover, if $X$ does not embed into any surface, then $\mathbf{B}_n(X)$ contains $\mathbf{S}_n$ for any $n\ge 1$.
\end{thm}
Remark that we exclude the cases for $S^2$ and $\mathbb{R}P^2$ since their braid groups have torsion even though they are braid groups on {\em surfaces}, 2-dimensional manifolds.
However, by the complementary statement, they are characterized by the property that $\mathbf{B}_n(X)$ contains torsion but not the whole of $\mathbf{S}_n$ for any $n\ge 3$.
The rest of this paper is organized as follows. In Section~2, we define braid groups on complexes and basic notions. In Section~3, we define the modifications and prove Theorem~\ref{thm:simple}, and in Section~4 we define two operations, called {\em unwrapping} and {\em connected-sum decomposition}, and look at the shapes of the elementary complexes.
The effects on braid groups of the inverses of these two operations, called {\em closure} and {\em connected sum}, will be discussed separately in Section~5 and Section~6, which shows how the build-up process works.
Finally, in Section 7, as applications we prove the criteria, Theorem~\ref{thm:emb}, for the embeddability of given complex $X$ into a surface and a plane.
\section{Braid groups on complexes}
Throughout this paper, a {\em complex} denoted by $X$ means a finite, connected, simplicial complex of dimension at least 1.
Especially, a complex of dimension 1 is usually denoted by $\Gamma$ and called a {\em graph}.
Since the braid group on $X$ depends only on the homeomorphism type of $X$, we sometimes assume that $X$ is {\em sufficiently subdivided}, which can be achieved via the barycentric subdivision twice.
The {\em star} $\st(K)$ is the union of all open simplices whose closure intersects $K$, and the {\em link} $\lk(K)$ of $K$ is the complement of the star $\st(\overline K)$ of the closure $\overline K$ in its closure $\overline{\st(K)}$. That is, $\lk(K)=\overline{\st(K)}\setminus \st(\overline{K})$, as usual.
Note that both $F_n(X)$ and $B_n(X)$ can be regarded as finite simplicial complexes up to homotopy as follows.
Since $X$ is a finite simplicial complex, so is the $n$-fold product $X^n$, and after barycentric subdivisions if necessary, the diagonal $\Delta$ becomes a simplicial subcomplex of $X^n$.
Hence the further subdivision makes $X^n\setminus \st(\Delta)$ a strong deformation retract of $X^n\setminus \Delta=F_n(X)$. Therefore if we endow a metric $d$ on $X$, we may assume that there exists a constant $\epsilon=\epsilon(X)>0$ such that any two points of $\mathbf{x}\in F_n(X)$ never approach within $\epsilon$ of each other with respect to the metric $d$, and the same holds for $B_n(X)$.
From the definitions of configuration spaces, we have the following exact sequence.
\begin{equation}\label{eq:bs}
1\longrightarrow \mathbf{P}_n(X,\bar*_n)\longrightarrow \mathbf{B}_n(X,*_n)\stackrel{\rho}{\longrightarrow}\mathbf{S}(*_n)
\end{equation}
Here the group $\mathbf{S}(*_n)$ is the symmetric group on the set $*_n$, usually denoted by $\mathbf{S}_n$, and the map $\rho$ is called the {\em induced permutation}.
It is easy to see that $\mathbf{P}_n(I)=\mathbf{B}_n(I)=\{e\}$, and $\mathbf{P}_n(S^1)=n\mathbb{Z}\subset\mathbb{Z}=\mathbf{B}_n(S^1)$.
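One way to see the latter (a standard sketch, not a verbatim argument from the references): $B_n(S^1)$ deformation retracts onto the circle of configurations of $n$ equally spaced points by spreading the points out evenly; the generator of $\mathbf{B}_n(S^1)\cong\mathbb{Z}$ is the loop rotating such a configuration by $2\pi/n$, and a power of this loop is a pure braid exactly when each point returns to its original position, that is, when the exponent is a multiple of $n$, which gives $\mathbf{P}_n(S^1)=n\mathbb{Z}$.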
On the other hand, it is known that $\rho$ for $\mathbf{B}_n(T_3)$ on a tripod $T_3$ is surjective for each $n\ge 2$. Hence whenever $X$ contains $T_3$, then $\rho$ is surjective as well.
\begin{prop}
Let $X$ be a complex. Then $X$ embeds into a circle if and only if $\mathbf{B}_n(X)$ is abelian for any $n\ge 1$.
\end{prop}
\begin{proof}
The only if part is obvious.
Suppose $\mathbf{B}_n(X)$ is abelian. Then since $\mathbf{S}_n$ is non-abelian for $n\ge3$, $\rho$ can never be surjective, so $T_3\not\subset X$. Hence $X$ is either $I$ or $S^1$, and therefore it embeds into a circle.
\end{proof}
We call $X$ {\em trivial} if $X$ is either $I$ or $S^1$. Then $T_3$ can be thought of as the obstruction complex for a given complex to be trivial.
From now on we assume that $X$ is non-trivial.
\begin{defn}
For any $x\in X$, there is a trichotomy as follows.
\begin{enumerate}
\item $x$ is in the {\em interior $\mathring{X}$} if $\lk(x)\simeq S^k$ for some $k\ge 0$;
\item $x$ is in the {\em boundary $\partial X$} if $\lk(x)\simeq D^k$ for some $k\ge 0$;
\item $x$ is in the {\em branch set $\br(X)$} of $X$ otherwise.
\end{enumerate}
\end{defn}
\begin{defn}
Let $X$ be a complex.
\begin{enumerate}
\item A 0-cell $v$ is called a {\em vertex}, whose {\em valency $\val(v)$} is defined by the number of connected components of $\lk(v)$.
\item
A 1-cell $e=(v,w)$ is called an {\em edge} if there is no 2-cell containing $e$ in its boundary.
\item
For a subset $K$ of $X$, a {\em deletion $X_K$ of $K$} in $X$ is defined by the complement $X\setminus \st(K)$ of $\st(K)$.
\end{enumerate}
\end{defn}
\begin{figure}[ht]
\def.95 \textwidth{.95 \textwidth}
\input{X_Decomposition.pdf_tex}
\caption{A decomposition of $X$ into sets of interior, boundary, and branch points}
\label{fig:IntBoundaryBranch}
\end{figure}
Figure~\ref{fig:IntBoundaryBranch} shows an example.
The thin lines and dots are in $\br(X)$, and the thick lines are in $\partial X$.
Note that $\br(X)$ is a closed subcomplex of $X$, and $X$ is a manifold if and only if $\br(X)=\emptyset$.
\begin{thm}\cite{Bir}
Let $M$ be a manifold of dimension at least $3$, not necessarily compact and possibly with boundary. Then the pure and full braid groups are as follows.
\[\mathbf{P}_n(M)=\prod^n \pi_1(M),\qquad
\mathbf{B}_n(M)=\prod^n\pi_1(M)\rtimes\mathbf{S}_n,\]
where the symmetric group $\mathbf{S}_n$ acts on the product $\prod^n\pi_1(M)$ by permuting factors.
\end{thm}
Hence there is no braid theory for manifolds of dimension 3 or higher.
On the other hand, for a surface $\Sigma$, there are fiber bundle structures relating the ordered configuration spaces $F_n(\Sigma)$, which can be used to compute and analyze braid groups on $\Sigma$.
Note that since compact surfaces are completely characterized by a few parameters, so are their braid groups.
Indeed, for a given surface $\Sigma$, one can extract geometric information from its braid group as follows.
The proof follows directly from the group presentation for $\mathbf{B}_n(\Sigma)$, see \cite{Bel}, and we omit it.
\begin{thm}\cite{Bel, Bir, GG}\label{thm:surface}
Let $\Sigma$ be a surface. Then the following holds.
\begin{enumerate}
\item $\mathbf{B}_n(\Sigma)$ has torsion if and only if $\Sigma$ is either $S^2$ or $\mathbb{R}P^2$.
\item The abelianization $H_1(\mathbf{B}_n(\Sigma))$ has torsion if and only if $\Sigma$ is nonplanar.
\end{enumerate}
\end{thm}
On the contrary, if $X$ is not a manifold, the global topology for a complex $X$ can hardly be determined by a few parameters in general, even if $X$ is 1-dimensional.
Therefore one might not expect that similar results hold for $X$, but surprisingly, the braid group still detects some of the global geometry of the complex $X$ when $X$ is a graph $\Gamma$ as follows.
\begin{thm}\cite{KKP, Gh}
Let $\Gamma$ be a graph. Then $\mathbf{B}_n(\Gamma)$ is {\em always} torsion free, and moreover $H_1(\mathbf{B}_n(\Gamma))$ has torsion if and only if $\Gamma$ is nonplanar.
\end{thm}
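For instance, with the same implicit range of $n$ as in the theorem, the complete bipartite graph $K_{3,3}$ is nonplanar, so the theorem asserts that $H_1(\mathbf{B}_n(K_{3,3}))$ has torsion while $\mathbf{B}_n(K_{3,3})$ itself is torsion free; for a planar graph, such as a tree, even $H_1(\mathbf{B}_n(\Gamma))$ is torsion free.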
Hence Theorem~\ref{thm:emb} generalizes the two theorems above. To prove it, we adopt notions from graph theory that mimic the {\em minor} relation, namely edge contraction and deletion. Note that taking minors reduces the number of edges, so the result is usually considered simpler than the original graph. However, these operations may increase valencies, and higher valency tends to make the computation of the braid group more complicated. Therefore we regard a complex with lower valencies as simpler.
Rigorously speaking, we define a simple complex as follows.
\begin{defn}
A vertex $v$ is said to be {\em simple} if $\lk(v)$ is either connected, a disjoint union of a connected complex and a point, or 0-dimensional.
A complex $X$ is said to be {\em simple} if all vertices in $X$ are simple.
\end{defn}
Hence a simple complex is easy to handle, but it is rather special and far from generic.
However, we claim that any complex can be transformed into a simple one by a sequence of modifications, such as attaching and removing (higher-dimensional) cells, each of which induces an isomorphism between braid groups.
For convenience's sake, we say that an embedding $f:X\to Y$ is a {\em braid equivalence} if it induces an isomorphism $f_*:\mathbf{B}_n(X)\to\mathbf{B}_n(Y)$ for each $n\ge 1$.
Moreover, we simply say that $X$ and $Y$ are {\em braid equivalent}, denoted by $X\equiv_B Y$, if they can be joined by a sequence of braid equivalences or their inverses.
Then the above claim can be reformulated as follows: for any $X$, there is a simple representative in the braid equivalence class of $X$, as stated in Theorem~\ref{thm:simple}. We will prove this later.
\subsection{Appending a point}
Let $v\in\partial X$, and $i_v:B_{n-1}(X\setminus\{v\})\to B_n(X)$ for $n\ge 1$
be an embedding defined as
\[i_v(\mathbf{x})=\{v\}\cup \mathbf{x}\]
for $\mathbf{x}\in B_{n-1}(X\setminus\{v\})$.
Note that $i_v(\emptyset) = \{v\}$ if $n=1$.
Then it induces a homomorphism
\[(i_v)_*:\mathbf{B}_{n-1}(X\setminus \{v\},*_{n-1})\to\mathbf{B}_n(X,i_v(*_{n-1})),\]
where $*_{n-1}$ is a basepoint for $B_{n-1}(X\setminus\{v\})$.
Note that $B_n(X\setminus \{v\})$ is homotopy equivalent to $B_n(X)$ via the inclusion, whose homotopy inverse is a map $h$ resizing cells incident to $v$.
Hence we can consider a composition
\[\bar i_v:B_{n-1}(X)\stackrel{h}\longrightarrow B_{n-1}(X\setminus\{v\})\stackrel{i_v}\longrightarrow B_n(X),\]
which induces
\[(\bar i_v)_*:\mathbf{B}_{n-1}(X,*_{n-1})\to\mathbf{B}_n(X,\bar i_v(*_{n-1})).\]
We may safely use $i_v$ and $(i_v)_*$ instead of $\bar i_v$ and $(\bar i_v)_*$, since there is no ambiguity up to homotopy.
\begin{prop}\label{prop:injection}
Let $X$ be a complex and $v\in\partial X$.
Then the homomorphism $(i_v)_*:\mathbf{B}_{n-1}(X)\to\mathbf{B}_n(X)$ is injective for all $n\ge1$.
\end{prop}
\begin{proof}
Let $B_{1,n-1}(X)=F_n(X)/\mathbf{S}_{n-1}$ by considering $\mathbf{S}_{n-1}$ as a subgroup of $\mathbf{S}_n$ which consists of permutations on $\{1,\dots,n\}$ fixing 1. Then
\[B_{1,n-1}(X)=\{(x_1,\{x_2,\dots,x_n\})| x_i\neq x_j\text{ if }i\neq j\},\]
and the quotient map $p:B_{1,n-1}(X)\to B_n(X)$ forgetting the order is a covering map which is non-regular in general.
Moreover, $i_v$ lifts to $\tilde i_v:B_{n-1}(X\setminus \{v\})\to B_{1,n-1}(X)$ defined by $\tilde i_v(\mathbf{x}) = (v,\mathbf{x})$, and there is a map $\pi:B_{1,n-1}(X)\to B_{n-1}(X)$ forgetting the first coordinate, namely,
\[\pi( x_1, \{x_2,\dots,x_n\}) = \{x_2,\dots,x_n\},\]
such that $\pi\circ\tilde i_v$ is homotopic to the identity, so that $\pi_*\circ (\tilde i_v)_*$ is an isomorphism.
Hence $(\tilde i_v)_*$ is injective and therefore so is $(i_v)_* = p_*\circ(\tilde i_v)_*$.
\end{proof}
\section{Modifications and simple complexes}
\subsection{Edge contraction}
The first nontrivial observation is about edge contraction as follows.
\begin{prop}\label{prop:generaledgecontraction}
Let $X$ be a complex and $e$ be an edge. Then the quotient map $q:X\to X/\bar e$ induces a map
\[q^*:\mathbf{B}_n(X/\bar e)\to\mathbf{B}_n(X),\]
which is surjective if neither vertex of $\partial e$ has valency 1.
\end{prop}
\begin{proof}
We first consider the subspace $B_{n;\le 1}(X;\bar e)$ of $B_n(X)$ consisting of configurations $\mathbf{x}=\{x_1,\dots,x_n\}$ such that at most one of the $x_i$'s lies on $\bar e$, or equivalently, for $\mathbf{x}\in B_n(X)$,
\[\mathbf{x}\in B_{n;\le 1}(X;\bar e)\Longleftrightarrow \#(\mathbf{x}\cap \bar e)\le 1.\]
Then the map $q$ induces $q|:B_{n;\le1}(X;\bar e)\to B_n(X/\bar e)$.
Let $*_n\subset X\setminus \bar e$ be a basepoint for both $B_{n;\le 1}(X;\bar e)$ and $B_n(X/\bar e)$, and
let a path $\gamma:(I,\partial I)\to (B_n(X/\bar e), *_n)$ be given.
Then it is not hard to prove that there exists a lift $\tilde\gamma:(I,\partial I)\to B_{n;\le 1}(X;\bar e)$ so that $q\circ\tilde\gamma = \gamma$ by regarding $\bar e$ as a path. Moreover, the lift is unique up to homotopy since $\bar e$ is contractible.
Therefore, $q|$ induces an isomorphism $(q|)_*$ between fundamental groups. Then the map $q^*$ is defined by a composition $\iota_*\circ (q|)_*^{-1}$, where $\iota_*$ is the map induced from the obvious inclusion $\iota:B_{n;\le 1}(X;\bar e)\to B_n(X)$, and is well-defined as desired.
\begin{figure}[ht]
\[
\xymatrix{
X=\vcenter{\hbox{\def0.7}\input{gamma_i_1.pdf_tex}}}\quad{1.2}\input{X_e1.pdf_tex}}}\ar[r]^-q&
\vcenter{\hbox{\def0.7}\input{gamma_i_1.pdf_tex}}}\quad{1.2}\input{X_e2.pdf_tex}}}=X/\bar e\\
\tilde\gamma=\vcenter{\hbox{\def0.7}\input{gamma_i_1.pdf_tex}}}\quad{1.2}\input{X_e1_gamma.pdf_tex}}}&
\vcenter{\hbox{\def0.7}\input{gamma_i_1.pdf_tex}}}\quad{1.2}\input{X_e2_gamma.pdf_tex}}}=\gamma\ar@{|->}[l]
}
\]
\caption{Local pictures of $X/\bar e$ and $X$, and the lift $\tilde \gamma$ of a path $\gamma$}
\label{fig:Xe}
\end{figure}
Suppose that neither vertex of $\partial e$ has valency 1.
Then for the surjectivity, it is enough to show that $\iota_*$ is surjective. In other words, any $\delta:(I,\partial I)\to(B_n(X),*_n)$ can be homotoped relative to the boundary to a path $\delta'$ such that $\#(\delta'(t)\cap \bar e)\le 1$ for all $t\in[0,1]$.
First, break $\delta$ into several pieces according to the changes of $m(t)=\#(\delta(t)\cap \bar e)$, and use induction on $m$.
Then since both $\val(v)\ge 2$ and $\val(w)\ge 2$, there is enough room for a given configuration to be evacuated from $e$.
This can be done easily and we omit the details.
\end{proof}
If neither vertex of $\partial e$ has valency 1, then we may say that $X$ is simpler than $X/\bar e$ according to the definition of simplicity.
Note that $q$ does not directly induce the map between braid groups since it is not an embedding.
Under some conditions, one can find an embedding which plays a similar role to $q$ so that it induces precisely the inverse of $q^*$, and therefore an isomorphism. We will see this later.
On the other hand, if one of the vertices of $\partial e$ has valency 1, then $q$ can be regarded as a strong deformation retraction, and therefore $q^*$ is actually induced from the obvious embedding $X/\bar e\to X$, which is a homotopy inverse of $q$.
However, $q^*$ is neither injective nor surjective in general. It depends on the structure of $\st(e)$.
\begin{ex}[2-braid group on a tree]\label{ex:tree}
We denote by $T_k$ a labelled tree homeomorphic to the cone over $k$ points, as depicted in Figure~\ref{fig:corolla}.
\begin{figure}[ht]
\input{Tk.pdf_tex}
\caption{A labelled tree $T_k$ with only one vertex of valency $k\ge 3$}
\label{fig:corolla}
\end{figure}
Then it is known that $\mathbf{B}_n(T_k)$ is always a free group as follows.
\begin{lem}\label{lem:corolla}\cite{KKP}
The braid group $\mathbf{B}_n(T_k)$ is a free group of rank $r=r(n,k,k)$, where
\begin{equation}
\label{eq:rank}r(n,\nu,\mu)=(\nu-2)\binom{n+\mu-2}{n-1} - \binom{n+\mu-2}{n} -(\nu-\mu-1).
\end{equation}
\end{lem}
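As a routine check, included here for convenience, one can evaluate the formula (\ref{eq:rank}) at $n=2$ and $\nu=\mu=k$:
\begin{align*}
r(2,k,k)&=(k-2)\binom{k}{1}-\binom{k}{2}-(k-k-1)\\
&=k(k-2)-\frac{k(k-1)}{2}+1=\frac{(k-1)(k-2)}{2}=\binom{k-1}{2}.
\end{align*}
In particular, $r(2,3,3)=1$, consistent with the fact that $\mathbf{B}_2(T_3)$ is infinite cyclic.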
Hence the 2-braid group $\mathbf{B}_2(T_k)$ is free of rank $\binom{k-1}2$, with generators indexed by $\{(i,j)|2\le i<j\le k\}$. Indeed, each pair $(i,j)$ corresponds to a loop $s_{i,j}$ in $B_2(T_k)$ as follows.
We first consider the tripod $T_3$ with cone point $0$. Assume that two points $a$ and $b$ initially lie on the edge $(1,0)$, with $b$ closer to $0$ than $a$.
Then we move $b$ to the second leaf and $a$ to the third leaf, and then move $b$ back to the initial position of $a$ and $a$ back to that of $b$.
See Figure~\ref{fig:generator}; we will define this loop rigorously later.
The loop $s$ defined in this way generates the infinite cyclic group $\mathbf{B}_2(T_3)$.
\begin{figure}[ht]
\[
s=\left(\vcenter{\hbox{\input{T_3_generator1.pdf_tex}}}\right)\cdot
\left(\vcenter{\hbox{\input{T_3_generator2.pdf_tex}}}\right)\cdot
\left(\vcenter{\hbox{\input{T_3_generator3.pdf_tex}}}\right)\cdot
\left(\vcenter{\hbox{\input{T_3_generator4.pdf_tex}}}\right)
\]
\caption{A loop $s$ in $B_2(T_3)$}
\label{fig:generator}
\end{figure}
For each pair $(i,j)$ with $i\neq j$, there exists a unique embedding $T_3\to T_k$ such that it maps $\partial T_3=\{1,2,3\}$ to $\{1,i,j\}\subset\partial T_k$ in order.
Then $s_{i,j}$ is nothing but the image of $s$ under the induced homomorphism $\mathbf{B}_2(T_3)\to\mathbf{B}_2(T_k)$. Note that $s_{j,i}$ is the inverse of $s_{i,j}$.
Now let $T$ be a tree with $k=\#(\partial T)$. We first label $\partial T$ arbitrarily. Then there exists a unique label-preserving map $q:T\to T_k$ which contracts all internal edges, and it induces a surjective homomorphism $q^*:\mathbf{B}_2(T_k)\to\mathbf{B}_2(T)$ by Proposition~\ref{prop:generaledgecontraction}.
By the definition of $s_{i,j}$ above, the image $q^*(s_{i,j})$ coincides with the image of $s$ under the unique embedding $T_3\to T$ sending $\{1,2,3\}$ to $\{1,i,j\}$ as before.
By the {\em center} $c(i,j)$ of $i$ and $j$ in $T$, we mean the image of $0\in T_3$ under this embedding.
\begin{figure}[ht]
\[
T=\vcenter{\hbox{\input{ordered_tree.pdf_tex}}}\qquad\qquad
\vcenter{\hbox{\input{ordered_tree_s24.pdf_tex}}}
\]
\caption{A tree with labelled leaves and an embedding of $T_3$ corresponding to $s_{2,4}$}
\label{fig:orderedtree}
\end{figure}
Suppose that there is an isotopy $H:T_3\times I\to T$
such that $H_t(1)=1$ for all $t$ and $H_0(\{2,3\})=\{i,j\},H_1(\{2,3\})=\{i',j'\}$.
Then it defines a homotopy between the images of $s_{i,j}$ and $s_{i',j'}$ in $B_2(T)$, and hence they represent the same element of $\mathbf{B}_2(T)$.
More precisely, the given tree $T$ defines an equivalence relation on $\{(i,j)|2\le i<j\le k\}$ as follows.
\begin{defn}[Equivalence relation coming from a tree $T$ with ordered leaves]
Suppose the set $\partial T$ of leaves is indexed by $\{1,\dots,k\}$ and let $\binom{\partial T-1}2$ denote the set $\{(i,j)|2\le i<j\le k\}$.
Then we define an equivalence relation $\sim_T$ on $\binom{\partial T-1}2$ as $(i,j)\sim_{T}(i',j')$ if and only if
\begin{enumerate}
\item $c(i,j)=c(i',j')\in T$;
\item $[i]=[i']$ and $[j]=[j']$ in $\pi_0(T\setminus\{c(i,j)\})$.
\end{enumerate}
\end{defn}
Therefore, the equivalence classes depend not on the whole tree $T$ but only on the local shape, namely the {\em tangent space}, of each vertex of valency $\ge 3$. Hence each generator $s_{i,j}$ corresponds to a triple $(v, e_1, e_2)$ consisting of a vertex $v$ with $\val(v)\ge 3$ and two half-edges $e_1$ and $e_2$ emanating from $v$ which are not heading toward the chosen boundary point, $1\in\partial T$ in our example.
One can prove that this equivalence gives the complete set of defining relators for $\mathbf{B}_2(T)$, and therefore $\mathbf{B}_2(T)$ is free as well.
\[\mathbf{B}_2(T)=\langle s_{2,3},\dots,s_{k-1,k}| s_{i,j}=s_{i',j'}\text{ if }(i,j)\sim_T(i',j')\rangle.\]
The rank is given by
\begin{equation}\label{eq:treerank}
r_2(T)=\sum_{v\in V(T)}\binom{\val(v)-1}2,
\end{equation}
where $V(T)$ is the set of vertices of $T$.
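For instance, for a tree $T$ with exactly two vertices of valency $3$ and all other vertices of valency at most $2$, only the two trivalent vertices contribute to (\ref{eq:treerank}), so
\[
r_2(T)=\binom{2}{2}+\binom{2}{2}=2,
\]
since a vertex of valency at most $2$ contributes $\binom{\val(v)-1}{2}=0$.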
\end{ex}
\begin{rmk}
Recall the point appending map $\mathbf{B}_{n-1}(T)\to\mathbf{B}_n(T)$ defined in Proposition~\ref{prop:injection}, and consider all possible compositions which yield $\mathbf{B}_2(T)\to\mathbf{B}_n(T)$.
Then the images of $s_{i,j}$'s under these compositions generate $\mathbf{B}_n(T)$.
More precisely, each generator is characterized by a vertex of valency $\ge 3$, two edges as before, and in addition the number of points in each component of the complement of that vertex in $T$. See \cite{KKP} for details.
\end{rmk}
\subsection{Attaching higher cells}
We first consider the {\em generalized capping-off} $X$ of a given complex $Y$, which is obtained by attaching a $k$-cell $D^k$ along an embedded $(k-1)$-sphere in $Y$.
We exclude the cases $(X,Y)=(D^2,S^1)$ and $(D^3,S^2)$ because
their braid groups are already known, and moreover they are extremal in the sense that the braid groups change dramatically before and after attaching the cell.
\begin{prop}\label{prop:highercell}
Let $X=Y\sqcup_\phi D^k$ via the embedding $\phi:\partial D^k=S^{k-1}\to Y$ for some $k\ge 3$ and a complex $Y$ different from $S^2$.
Then the embedding $Y\to X$ is a braid equivalence.
\end{prop}
\begin{proof}
We identify $\partial D^k$ with the subspace of $Y$ via $\phi$ from now on.
Let $*\in\mathring{D}^k$ be a point, and consider the subspace $B_{n-1;1}(X;*)$ of $B_n(X)$ consisting of configurations containing $*$.
Then this is of codimension $k\ge3$, in the sense that $B_{n-1;1}(X;*)\times\mathbb{R}^3$ can be embedded into $B_n(X)$.
Hence we may assume that all paths and homotopies in $B_n(X)$ are in general position with respect to $B_{n-1;1}(X;*)$ and therefore avoid configurations containing $*$. In other words, the inclusion $X\setminus\{*\}\to X$ is a braid equivalence.
Let $D^k\setminus\{*\}\to\partial D^k$ be the radial projection, or the strong deformation retract, which naturally extends to $r:X\setminus\{*\}\to Y$, the homotopy inverse of the inclusion $Y\to X\setminus\{*\}$.
\begin{figure}[ht]
\[
\input{radialprojection.pdf_tex}\qquad\qquad
\input{radialprojection2.pdf_tex}
\]
\caption{A radial projection on $X\setminus\{*\}$ and an extended ray $[p,p+\epsilon]$}
\label{fig:radialprojection}
\end{figure}
Consider $B_n^{r\text{-fail}}=\{\mathbf{x}\in B_n(X\setminus\{*\})| \#(r(\mathbf{x}))<n\}$, consisting of configurations $\mathbf{x}$ such that at least two points of $\mathbf{x}$ lie on a common ray emanating from $*$ in $D^k$. Roughly speaking, it is the locus on which $r$ fails to extend to a map $\bar r:B_n(X\setminus\{*\})\to B_n(Y)$.
Then $B_n^{r\text{-fail}}$ is of codimension at least 2 in $B_n(X\setminus\{*\})$ as follows.
\begin{align*}
\codim\left(B_n^{r\text{-fail}}\subset B_n(X\setminus\{*\})\right) \ge \codim(\text{ray}\subset D^k\setminus\{*\})=(k-1)\ge 2.
\end{align*}
Hence, by assuming general position with respect to $B_n^{r\text{-fail}}$, we may assume that any loop misses $B_n^{r\text{-fail}}$ for all $k\ge 3$, and so does any disk for $k\ge 4$. Note that when $k=3$, a disk in $B_n(X\setminus\{*\})$ may intersect $B_n^{r\text{-fail}}$ finitely many times. Therefore, the map $r$ induces the surjective homomorphism
\[
r^*:\mathbf{B}_n(Y)\to\mathbf{B}_n(X\setminus\{*\})\simeq\mathbf{B}_n(X),
\]
which is also injective if $k\ge 4$.
We claim that $r^*$ is an isomorphism for $k=3$ as well.
Suppose that $\partial D^3\subset\mathring{X}$, or equivalently, $\partial D^3$ is a component $\partial_0 Y$ of $\partial Y$. Then $X$ is an ordinary {\em capping-off} of the $2$-sphere $\partial_0 Y$ in $Y$, and so $Y\setminus \partial_0 Y\simeq X\setminus\{*\}$.
Since the homotopy equivalence between $Y$ and $Y\setminus\partial_0 Y$ induces the braid equivalence, the inclusions $Y\to X\setminus\{*\}\to X$ induce
\[
\mathbf{B}_n(Y)\simeq\mathbf{B}_n(X\setminus\{*\})\simeq\mathbf{B}_n(X).
\]
Indeed, the strong deformation retract pushing $X\setminus\{*\}$ into $Y\setminus\partial_0 Y$, slightly smaller than $r$, induces the isomorphism $r^*$.
Suppose that $\partial D^3\not\subset\mathring{X}$. Then since $Y\neq S^2$ by the hypothesis, $\partial D^3\not\subset\partial X$, and therefore $\partial D^3$ must intersect $\br(X)$.
The existence of a branch point $p\in\partial D^3\cap\br(X)$ implies that we can extend the ray $[*,p]$ emanating from $*$ through $p$ a little further. We denote the extended ray by $[p,p+\epsilon]$, where $p+\epsilon$ is a {\em point} lying in $\st(p)\setminus D^3$.
\begin{figure}[ht]
\[
\xymatrix@C=4pc{
\vcenter{\hbox{\def0.7}\input{gamma_i_1.pdf_tex}}}\quad{0.8}\input{disk_U.pdf_tex}}}\ar[r]^-{f=\{f_i\}}&
\vcenter{\hbox{\input{radialprojection3.pdf_tex}}}\subset B_n(X\setminus\{*\})
}
\]
\caption{A homotopy disk and a small neighborhood $U$}
\label{fig:extendedray}
\end{figure}
Let $f=\{f_1(z),\dots,f_n(z)\}:(D^2,\partial D^2)\to(B_n(X\setminus\{*\}),B_n(Y))$ be given. To prove the claim, it suffices to show that $f$ can be homotoped into $B_n(Y)$.
Since $f$ is in general position with respect to $B_n^{r\text{-fail}}$, we may assume without loss of generality that $f(D^2)$ intersects $B_n^{r\text{-fail}}$ exactly once, at $0\in D^2$, and furthermore that there exists only one ray $[*,p']$ emanating from $*$ which contains exactly two points of $f(0)$, say $f_1(0)$ and $f_2(0)$. Here $p'=r(f_1(0))=r(f_2(0))$.
Then we can further homotope $f$, keeping it in general position, so that $p'$ becomes $p$ and one of $f_1$ and $f_2$, say $f_1$, is constantly $p$ on a neighborhood $U\subset D^2$ of $0$.
The last assertion follows because, in a small enough neighborhood, each $f_i$ can be homotoped separately.
Finally, we pull down $f_1$ on $U$ along $[p,p+\epsilon]$ so that $f_1(0)\in (p,p+\epsilon)$ as depicted in Figure~\ref{fig:sink}, and then $r\circ f:D^2\to B_n(Y)$ is well-defined and homotopic to $f$ relative to $\partial D^2$, as desired.
\end{proof}
We call the subcomplex $f^{-1}(B_n^{r\text{-fail}})$ of $D^2$ a {\em failure locus}.
\begin{figure}[ht]
\[
\xymatrix{
U=\vcenter{\hbox{\input{homotopy0.pdf_tex}}}\simeq
\vcenter{\hbox{\input{homotopy1.pdf_tex}}}\quad\ar[r]^-{f_1}&\quad
\vcenter{\hbox{\input{homotopy2.pdf_tex}}}\subset Y
}
\]
\caption{Pulling down a homotopy disk along the extended ray}
\label{fig:sink}
\end{figure}
\begin{rmk}\label{rmk:link1}
The effect of the capping-off as above on a link $\lk(v)$, for $v\in S^{k-1}\subset Y$, is again a capping-off of a $(k-2)$-sphere in $\lk(v)$, since $\lk(v)\cap S^{k-1}=S^{k-2}$ and $\lk(v)\cap D^k=D^{k-1}$ in $X$.
Conversely, for any $v\in X$ and embedded sphere $S$ in $\lk(v)$, there exists a capping-off on $X$ which caps $\lk(v)$ off along $S$.
\end{rmk}
The direct consequence of the above proposition is as follows.
\begin{cor}\label{cor:2skeleton}
Let $X$ be a complex.
Then the embedding $X^{(2)}\to X$ of $2$-skeleton $X^{(2)}$ is a braid equivalence unless $X=D^3$ and $X^{(2)}=S^2$.
\end{cor}
Moreover, we can obtain the same result for attaching a 2-cell under a certain condition.
\begin{cor}\label{cor:2cell}
Let $X=Y\sqcup_\phi D^2$ via the embedding $\phi:\partial D^2\to Y$.
Suppose $\phi(\partial D^2)$ bounds a disk $D'$ in $Y$, and $\mathring{D}'\cap \br(Y)\neq\emptyset$.
Then the embedding $Y\to X$ is a braid equivalence.
\end{cor}
\begin{proof}
Note that there exists an embedded sphere $S=D^2\cup D'$ in $X$ such that
\[
\mathring{D}'\cap \br(Y)\subset S\cap \br(X)\neq \emptyset.
\]
Hence $X$ satisfies the assumption of Proposition~\ref{prop:highercell}, and so $X\to X\sqcup_{S}D^3$ is a braid equivalence.
Then the embedding $Y\to X$ induces a surjection $\mathbf{B}_n(Y)\to\mathbf{B}_n(X)\simeq\mathbf{B}_n(X\sqcup_S D^3)$ as before, and moreover, in this case, we can take a strong deformation retraction $r'$ from $X\sqcup_S D^3$ to $Y$, which is nothing but an {\em elementary collapsing}.
Then essentially the same argument as before, applied to this elementary collapsing, implies that $Y\to X$ is a braid equivalence. We omit the details.
\end{proof}
In the last part of the proof, we are using the existence of a branch point in $\mathring{D}'$ again.
\begin{rmk}\label{rmk:link2}
Similarly to the previous remark, the capping-off along $S^1\subset Y$ acts on $\lk(v)$, for any $v\in S^1$, as the capping-off along the $0$-sphere $S^0=\lk(v)\cap S^1$, and {\em vice versa}.
\end{rmk}
On the other hand, we can consider another type of embedding as follows.
Let $e=(v,w)$ be an edge of $X$ such that the closure $\overline{\st(v)}$ is homeomorphic to the boundary wedge sum $D^k\vee_\partial \bar e$,
or equivalently, $\lk(v)=D^{k-1}\sqcup \{w\}$.
Let $Y$ be the space obtained from $X$ by replacing $\overline{\st(v)}$ with $C_w(D^{k-1})$, where $C_w(D^{k-1})$ is the cone on $D^{k-1}\subset \lk(v)$ with cone point corresponding to $w$. See Figure~\ref{fig:edgecontraction}. Then there is an obvious embedding $f:X\to Y$.
Note that $Y$ is homeomorphic to the quotient $X/\bar e$ but $f$ is different from the quotient map.
\begin{figure}[ht]
\[
\xymatrix{
\qquad\qquad X\ar[r]^f&X/\bar e\qquad\qquad \\
\overline{\st(v)}=\vcenter{\hbox{\input{diskwithedge1.pdf_tex}}}\ar[r]\ar@<-1.7pc>[u]&
\vcenter{\hbox{\input{diskwithedge2.pdf_tex}}}=C_w(D^{k-1})\ar@<1.7pc>[u]
}
\]
\caption{An embedding having the same effect as the edge contraction}
\label{fig:edgecontraction}
\end{figure}
\begin{prop}\label{prop:edgecontraction}
Let $X$ be given and $e=(v,w)$ be an edge in $X$ with $\lk(v)=D^{k-1}\sqcup \{w\}$ for some $k\ge 2$.
Then the embedding $f:X\to X/\bar e$ defined as above is a braid equivalence.
\end{prop}
\begin{proof}
We first endow $X$ with a metric $d$, and take the induced metric $d'$ on $X/\bar e$. Assume that $d(v,w)=\operatorname{diam}(\bar e)=1$.
Let $f_\epsilon:X\to X/\bar e$ for $0\le\epsilon<1$ be an embedding such that the $d'$-diameter of $f_\epsilon(\bar e)$ is $1-\epsilon$ and
for any $0<\epsilon<\epsilon'<1$,
\[
f(X)=f_0(X)\subsetneq f_\epsilon(X)\subsetneq f_{\epsilon'}(X)\subsetneq X/\bar e
\]
as depicted in Figure~\ref{fig:deformation}.
For convenience's sake, we define $f_1$ to be the quotient map $X\to X/\bar e$,
and so $X/\bar e=\varinjlim_{\epsilon\in [0,1)} f_\epsilon(X)$. This also implies that
\begin{equation}
\label{eqn:limit}
B_n(X/\bar e) = \varinjlim_{\epsilon\in [0,1)} f_\epsilon(B_n(X)).
\end{equation}
Moreover, since all $f_\epsilon(X)$'s are ambient isotopic in $X/\bar e$, so are $f_\epsilon(B_n(X))$'s in $B_n(X/\bar e)$. Especially, the inclusion $f_\epsilon(B_n(X))\subset f_{\epsilon'}(B_n(X))$ is a homotopy equivalence for any $\epsilon<\epsilon'<1$.
Hence by this fact and (\ref{eqn:limit}), it suffices to show that any given $c:(D^m,\partial D^m)\to(B_n(X/\bar e), f_0(B_n(X)))$ factors through the inclusion
$f_\epsilon(B_n(X))\to B_n(X/\bar e)$ for some $\epsilon<1$ up to homotopy.
Since the image of $c$ is compact, we can choose a constant $\epsilon$ such that
\[0<1-\epsilon<\frac 13 \min_{x\in D^m} \min \{ d'(x_i,x_j) | x_i\neq x_j\in c(x)\}.\]
Now let $r_\epsilon:X/\bar e\times[0,1]\to X/\bar e$ be a strong deformation retraction of $X/\bar e$ onto $f_\epsilon(X)$. Then the composition $r_\epsilon\circ c$ is a well-defined homotopy between $c$ and a map into $f_\epsilon(B_n(X))$. This completes the proof.
\end{proof}
\begin{figure}[ht]
\[
f_0(X)=\vcenter{\hbox{\includegraphics{diskwithedge1.pdf}}}\subsetneq
\vcenter{\hbox{\includegraphics{diskwithedge3.pdf}}}\subsetneq
\vcenter{\hbox{\includegraphics{diskwithedge4.pdf}}}\subsetneq
\vcenter{\hbox{\includegraphics{diskwithedge2.pdf}}}=f_1(X)
\]
\caption{Local pictures of $f_0(X), f_{\epsilon}(X),f_{\epsilon'}(X)$ and $f_1(X)$ for $0<\epsilon<\epsilon'<1$}
\label{fig:deformation}
\end{figure}
\subsection{Simple complex}
For given $X$, we want to find a {\em simple} complex $X'$ whose braid groups are isomorphic to those on $X$. To do this, we need the following proposition which is the 2-dimensional analogue of Proposition~\ref{prop:edgecontraction}.
\begin{prop}\label{prop:graphedgecontraction}
Let $X$ be a complex of dimension $2$ and $e=(v,w)$ be an edge in $X$ with $\lk(v)=\Gamma\sqcup\{w\}$ for some connected graph $\Gamma$.
If $\Gamma=S^1$, we assume furthermore that $w\not\in\partial X$.
Then there exist braid equivalences
\[X\to \overline X\to\overline{X}/\bar{e}\leftarrow X/\bar e\]
for some complex $\overline X$, and therefore $X\equiv_B X/\bar e$.
\end{prop}
\begin{proof}
Suppose $\Gamma$ is trivial. Then we set $\overline{X}$ to be $X$ itself.
If $\Gamma=I$, then this is a special case of Proposition~\ref{prop:edgecontraction}.
We assume that $\Gamma\neq I$.
Then the strategy is as follows.
We first attach cells to $X$ near $v$ without touching $e$ to obtain $\overline{X}$ so that $X\to \overline{X}$ is a braid equivalence and $\lk(v)=D^k\sqcup\{w\}$ in $\overline{X}$ for some $k$. This can be done by Proposition~\ref{prop:highercell} and Corollary~\ref{cor:2cell} since $v$ is a branch point and it always satisfies the assumption of Corollary~\ref{cor:2cell}.
Then there exists a braid equivalence $\overline{X}\to\overline{X}/\bar{e}$ by Proposition~\ref{prop:edgecontraction}.
Notice that $\overline{X}/\bar{e}$ can be obtained from $X/\bar{e}$ by attaching cells in exactly the same way as we did for $X$ to obtain $\overline{X}$.
Hence we have a braid equivalence $X/\bar{e}\to \overline{X}/\bar{e}$, and therefore there exist braid equivalences $X\to \overline{X}\to \overline{X}/\bar{e}\leftarrow X/\bar{e}$ as claimed.
Recall the effects of attaching cells on $\lk(v)$ as mentioned in Remark~\ref{rmk:link1} and \ref{rmk:link2}, which are capping-off along embedded spheres in $\lk(v)$.
Hence, it suffices to show that $\lk(v)$ can be transformed to $D^k$ by the iterated capping-off process, and this is actually equivalent to showing that $\lk(v)$ is a subset of the 1-skeleton $K^{(1)}$ for some simplicial complex $K$ homeomorphic to $D^k$.
Since any graph embeds into $\mathbb{R}^3$, this is always possible for $k=3$, and it is also possible for $k=2$ when $\Gamma$ is planar. This completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:simple}]
Let $X_1$ be the 2-skeleton of a sufficiently subdivided complex $X$. Then by Corollary~\ref{cor:2skeleton}, $X\equiv_B X_1$.
Let $w\in \br(X_1)$ be a non-simple vertex.
That is, $\lk(w)$ has at least 2 graph components or only one graph component with $\val(w)\ge 3$.
Let $\Gamma$ be a graph component of $\lk(w)$ and $X_2$ be a complex having an edge $e=(v,w)$ such that $X_1=X_2/\bar e$ and $\lk(v)=\Gamma\sqcup\{w\}$.
Then $X_1\equiv_B X_2$ by Proposition~\ref{prop:graphedgecontraction}.
Note that $v$ is simple, $\val(w)$ in $X_2$ is equal to $\val(w)$ in $X_1$, and the number of graph components of $\lk(w)$ in $X_2$ is strictly less than that in $X_1$.
Therefore, by induction on the number of graph components at non-simple vertices, we eventually obtain a simple complex $X'$ which is braid equivalent to $X$.
\end{proof}
\section{Decompositions and Elementary complexes}\label{sec:elementary}
Let $X$ and $Y$ be simple complexes. We define two operations on $X$ and a pair $X, Y$ as follows.
\begin{defn}[$k$-closure]
Let $X$ be a {\em connected} complex and $\mathbf{v}=\{v_1,\dots,v_k\}\subset\partial X$. A {\em $k$-closure $\widehat{(X,\mathbf{v})}$ of $X$ along $\mathbf{v}$} is the complex obtained as the mapping cone of the embedding $\mathbf{v}\to X$; it is called {\em trivial} if $\overline{\st(v)}$ is trivial, where $v$ denotes the cone point, or equivalently, if $k=1$ and $\overline{\st(v)}=I$.
\end{defn}
Notice that $\widehat{(X,\mathbf{v})}$ can also be obtained by gluing $T_k$ to $X$ along $\mathbf{v}$, and $\widehat{(X,\mathbf{v})}=X$ if and only if it is a trivial closure. Moreover, if $\overline{\st(v)}=T_k$ for some $k\ge 1$ and $X_v$ is connected, then the closure of $X_v$ along $\lk(v)$ is $X$ itself by the definition of $X_v$.
Hence $X_v$ is a kind of inverse of the closure operation, usually called {\em unwrapping} in graph theory.
In particular, we denote by $\Theta_k$ {\em the closure $\widehat{(T_k,\partial T_k)}$ of $T_k$ along $\partial T_k$}, which is the union of two copies of $T_k$ and has two distinguished vertices $0$ and $0'$ of valency $k$.
We will suppress $\mathbf{v}$ unless any ambiguity occurs.
Let $v\in X$ be a vertex with $\overline{\st(v)}=T_k$ for some $k\ge 1$. Then we denote by $\vec v$ the vertex $v$ together with an ordering on $\lk(v)$. In other words, we may identify $\st(v)$ with the labelled $T_k$, and we regard $\lk(v)=\{v_1,\dots,v_k\}$ as an ordered $k$-tuple $(v_1,\dots,v_k)$ if $\vec v$ is given.
We call $\vec v$ {\em a vertex with ordering}.
\begin{defn}[$k$-connected sum]
Let $X$ and $Y$ be complexes, and let $\vec v$ and $\vec w$ be vertices with orderings, of valency $k\ge 1$, in $X$ and $Y$, respectively.
We further assume that both $X_v$ and $Y_w$ are connected.
A {\em $k$-connected sum $X\# Y$ of $(X,\vec v)$ and $(Y,\vec w)$} is the complex obtained from the disjoint union $X_v\sqcup Y_w$ by connecting each $v_i$ and $w_i$ via an interval $e_i$,
and is called {\em trivial} if one of $X$ and $Y$ is $\Theta_k$.
\end{defn}
See Figure~\ref{fig:connectedsum} for a pictorial definition of the connected sum.
Note that $(X,\vec v)\# (Y,\vec w)$ is a boundary wedge sum $X\vee_\partial Y$ if $k=1$, and is an ordinary connected sum if $k=2$.
\begin{figure}[ht]
\input{connectedsum.pdf_tex}
\caption{A $k$-connected sum}
\label{fig:connectedsum}
\end{figure}
Since $\Theta_k$ is a $k$-closure of $T_k$, for any vertex $v\in X$ with $\overline{\st(v)}=T_k$ for some $k\ge 1$, a $k$-connected sum $(X,\vec v)\#(\Theta_k,\vec 0)$ is nothing but a $k$-closure of $X_v$ along $\lk(v)$ and so it is $X$ itself whatever the orderings on $v$ and $0$ are. Therefore $\Theta_k$ plays the role of the identity under the $k$-connected sum.
We sometimes suppress $\vec v$ and $\vec w$ unless any ambiguity occurs, and also say that $X$ is {\em decomposed into $Y$ and $Z$ via $k$-connected sum} if $X=Y\# Z$.
Furthermore, it is easy to see that both unwrapping and connected sum decomposition reduce either the first Betti number or the number of vertices of the connected components.
Therefore, by continuing these operations, we eventually obtain components which are elementary in the sense of the following definition.
\begin{defn}[Elementary complex]
Let $X$ be a simple complex of dimension 2. We say that $X$ is {\em elementary} if $X$ can be expressed as neither a nontrivial $k$-closure nor a nontrivial $k$-connected sum.
\end{defn}
Let $X$ be an elementary complex.
Suppose $\dim X=1$. Then elementariness forces $X$ to be a tree having at most one vertex of valency $k\ge 3$, and therefore $X$ is homeomorphic to $T_k$; conversely, $T_k$ admits only the trivial closure structure and so is indeed elementary.
Suppose $\dim X=2$ and $f:X\to M$ is a simplicial embedding into a piecewise linear manifold $M$. Assume that $\dim M$ is minimal among all possible such embeddings.
Then $\dim M\le 4$ since $\dim X=2$.
Let $N(X)$ be the closed regular neighborhood of $X$ in $M$, or equivalently, $\overline{\st(X)}$ in $M$ after sufficient barycentric subdivisions.
Hence $N(X)$ can be obtained by attaching 2 or higher dimensional cells to $X$. Roughly speaking, $N(X)$ is a {\em thickening} of $X$.
Note that $N(X)$ depends on $f$ but $\dim N(X)$ does not.
If $\dim N(X)=2$, or equivalently, $X$ can be embedded into a surface, then we claim that there is no branch point in $X$ and so $X$ itself is a surface $\Sigma=N(X)$.
Suppose $v\in \br(X)$. If $\lk(v)$ is 0-dimensional, then $\overline{\st(v)}\simeq T_k$ for $k=\val(v)\ge 3$ and therefore $X$ can be decomposed further via a nontrivial $k$-closure or a nontrivial $\ell$-connected sum for some $\ell< k$.
This contradicts the elementariness of $X$, and so, since $X$ is simple and embeds into a surface, $\overline{\st(v)}$ is homeomorphic to a boundary wedge sum $D^2\vee_\partial [v,w)$ of a disk and a half-open edge $[v,w)$. Hence either $w$ is a point of a 2-closure of $X_w$ when $X_w$ is connected, or $v$ is a point of a 1-closure or a 1-connected sum of $X$ otherwise. This does not happen by the elementariness of $X$, and so $\overline{\st(v)}=D^2$. Therefore $v\not\in \br(X)$, and this contradiction implies that $\br(X)=\emptyset$.
For a surface $\Sigma$, we omit the well-known group presentation of $\mathbf{B}_n(\Sigma)$; the result we need has already been stated in Theorem~\ref{thm:surface}.
On the other hand, if $\dim N(X)\ge3$, then by the same argument as above, all vertices of $\br(X)$ are of valency 1. In this case, we call $X$ a {\em branched surface}.
Moreover, we can attach cells of dimension at least 2 to $X$ to obtain $N(X)$ so that the inclusion $X\to N(X)$ becomes a braid equivalence by Proposition~\ref{prop:highercell} and Corollary~\ref{cor:2cell}.
This is essentially the same process as described in the proof of Theorem~\ref{thm:simple}.
Therefore
\[\mathbf{B}_n(X)\simeq\mathbf{B}_n(N(X))\simeq\prod^n\pi_1(N(X))\rtimes\mathbf{S}_n\simeq\prod^n\pi_1(X)\rtimes\mathbf{S}_n.\]
\begin{lem}\label{lem:elementary}
Let $X$ be an elementary complex. Then $X$ is either $T_k$, a surface $\Sigma$, or a branched surface.
Moreover, if $X$ is a branched surface, then
\[\mathbf{B}_n(X)\simeq\prod^n\pi_1(X)\rtimes\mathbf{S}_n.\]
\end{lem}
\begin{ex}[An elementary, non-manifold complex $S_0$]\label{ex:S_0}
Let $S_0$ be the complex obtained by gluing a disk and an interval, depicted in Figure~\ref{fig:S_0}.
\[S_0=\{(x,y,0)|-1\le x, y\le 1\}\cup \{(0,0,z)|0\le z\le 1\}\subset \mathbb{R}^3.\]
Then it is obvious that $S_0$ is elementary and $\dim N(S_0)=3$, since $S_0$ cannot be embedded into any surface.
Hence $N(S_0)\simeq D^3$ and therefore $\mathbf{B}_n(S_0)$ is isomorphic to $\mathbf{S}_n$ via the induced permutation $\rho$.
It is not hard to see that an elementary complex $X$ embeds into a surface if and only if it does not contain $S_0$.
Furthermore, the same holds for non-elementary complexes; this is easy and we will see it later.
In this sense, $S_0$ is the obstruction complex for a given complex to embed into a surface.
\end{ex}
In the following two sections, we will present the braid groups on $\widehat X$ and $X\#Y$ in terms of the braid groups on $X$ and both $X$ and $Y$, respectively.
Let $X$ and $Y$ be connected, disjoint subspaces of $Z$. Then for convenience's sake, we denote by $B_{r;s}(X;Y)$ the subspace of $B_{r+s}(Z)$ defined as
\[B_{r;s}(X;Y)=\left\{\left.\mathbf{x}\in B_{r+s}(Z)\right| \#(\mathbf{x}\cap X)=r, \#(\mathbf{x}\cap Y)=s\right\}\simeq B_r(X)\times B_s(Y).\]
Hence $\pi_1(B_{r;s}(X;Y))=\mathbf{B}_r(X)\times\mathbf{B}_s(Y)$ and is denoted by $\mathbf{B}_{r;s}(X;Y)$.
\section{\texorpdfstring{$k$}{k}-Closure of a complex}
Let $X$ be a complex and $\mathbf{v}=\{v_1,\dots,v_k\}\subset \partial X$. Let $v$ be the cone point of the $k$-closure $\widehat X$ of $X$ along $\mathbf{v}$, which is the mapping cone of $\mathbf{v}\to X$.
We denote by $e_i$ the {\em oriented} edge $(v_i,v)$ from $v_i$ to $v$ in $\widehat X$. Then for each $1\le i\le k$, the concatenation $e_1e_i^{-1}$ defines, for any $\mathbf{x}\in B_{n-1}(X)$, a path $\delta_i$ in $B_n(\widehat X)$ from $i_{v_1}(\mathbf{x})$ to $i_{v_i}(\mathbf{x})$ in the obvious way, and we denote this path again by $\delta_i$ unless any ambiguity occurs. Obviously, $\delta_1$ is homotopic to a constant path.
Now we endow a metric $d$ on $\widehat X$. Then there is a constant $\epsilon=\epsilon(\widehat X)$ as discussed earlier so that $d(x_i,x_j)\ge \epsilon$ for any $x_i, x_j\in\mathbf{x}\in B_n(\widehat X)$.
Then by subdividing all edges adjacent to $v$, we may assume that the diameter of $\overline{\st(v)}$ is less than $\epsilon$.
In other words, we may assume that any configuration $\mathbf{x}\in B_n(\widehat X)$ intersects $\overline{\st(v)}$ in at most one point, that is, $\#\left(\mathbf{x}\cap\overline{\st(v)}\right)\le 1$.
Therefore, $B_n(\widehat X)$ is separated into two subsets according to the presence of a point in $\overline{\st(v)}$, and is the union of two subspaces $B_{n-1;1}\left(X\setminus\mathbf{v};\overline{\st(v)}\right)$ and $B_n(X)$ whose intersection is
\begin{equation}\label{eq:subspace}
B_{n-1;1}(X\setminus\mathbf{v};\mathbf{v})=\bigsqcup_{i=1}^k \left(B_{n-1}(X\setminus\mathbf{v})\times\{v_i\}\right).
\end{equation}
Notice that the intersection is not connected by the assumption that there is at most 1 point in $\overline{\st(v)}$.
Hence we need to choose paths joining components to make it connected, and make the Seifert-van Kampen theorem applicable.
To this end, we fix a basepoint $*_\ell$ of $B_\ell(X)$ for each $\ell\le n$ such that $i_{v_1}(*_{n-1})=*_n$.
Then we choose a path $\gamma_i$ in $B_n(X)$ for each $1\le i\le k$ between $*_n$ and $i_{v_i}(*_{n-1})$ such that $\gamma_i(t)$ avoids $\mathbf{v}$ for $0<t<1$ and $\bigcup_{i=1}^k \gamma_i\subset B_n(X)$ is homotopy equivalent to $T_k$.
In other words, if $\gamma_i$ and $\gamma_j$ intersect at some point, then their images must coincide from the beginning up to that point. Since $i_{v_1}(*_{n-1})=*_n$, we may assume for convenience's sake that $\gamma_1$ is the constant path at $*_n$.
In practice, the most convenient way to choose $\gamma_i$'s is as follows.
At first, we fix a set $\{\gamma^0_2,\dots,\gamma^0_k\}$ of paths in $B_n(T_k)$ as depicted in Figure~\ref{fig:gamma}.
\begin{figure}[ht]
\[
*_n=\vcenter{\hbox{\def0.7}\input{gamma_i_1.pdf_tex}}}\quad{0.7}\input{gamma_i_0.pdf_tex}}}
\xrightarrow{\gamma_i^0=\vcenter{\hbox{\def0.7}\input{gamma_i_1.pdf_tex}}}\quad{0.7}\input{gamma_i_1.pdf_tex}}}\quad}
\vcenter{\hbox{\def0.7}\input{gamma_i_1.pdf_tex}}}\quad{0.7}\input{gamma_i_2.pdf_tex}}}=i_{v_i}(*_{n-1}),
\]
\caption{A path $\gamma_i^0$ for $T_k$ joining $*_n$ and $i_{v_i}(*_{n-1})$}
\label{fig:gamma}
\end{figure}
Since $X$ is connected, there exists a tree $T\subset X$ with $\partial T = \mathbf{v}$.
Then there is a map $q:T\to T_k$ which contracts all internal edges of $T$ and induces a homotopy equivalence.
Hence, similarly to Proposition~\ref{prop:generaledgecontraction}, we can find a lift $\gamma_i$ of each $\gamma_i^0$. In this case, the points of $*_n$ lie near $v_1$.
\begin{lem}\label{lem:thetasubgroup}
Let $X$ and $\mathbf{v}$ be as above. Then there exists a homomorphism
\[\Psi_{\widehat X}:\mathbf{B}_n(\Theta_k)\to\mathbf{B}_n(\widehat X).\]
\end{lem}
\begin{proof}
Let $T$ be as above. Then by gluing a copy of $T_k$ to $T$ along $\partial T$ and to $T_k$ along $\partial T_k$, the map $q:T\to T_k$ extends to a map
\[\hat q: \widehat{T}\to \Theta_k,\]
where $\widehat{T}$ is the resulting graph, which is homotopy equivalent to $\Theta_k$.
By Proposition~\ref{prop:generaledgecontraction}, it induces a surjective map $\hat q^*:\mathbf{B}_n(\Theta_k)\to\mathbf{B}_n(\widehat{T})$.
Then the desired map $\Psi_{\widehat X}$ is just a composition of $\hat q^*$ and the map induced by the inclusion $\widehat{T}\to \widehat X$.
\end{proof}
It is important to remark that $\Psi_{\widehat X}$ is neither injective nor surjective in general by the same reason as stated in the discussion after Proposition~\ref{prop:generaledgecontraction}.
Let $\widehat{B}_{n-1;1}\left(X\setminus\mathbf{v};\overline{\st(v)}\right)=\left(\bigcup_i\gamma_i\right)\cup B_{n-1;1}\left(X\setminus\mathbf{v}; \overline{\st(v)}\right)$.
Then for each $i$, since
\[
\gamma_i\cap B_{n-1;1}\left(X\setminus\mathbf{v};\overline{\st(v)}\right)=\{*_n, i_{v_i}(*_{n-1})\}
\]
and $*_n$ and $i_{v_i}(*_{n-1})$ are connected via the path $\delta_i^{-1}\delta_1$ in $\overline{\st(v)}$, the path $\gamma_i$ together with $\gamma_1$ defines a loop $\gamma_i\delta_i^{-1}\delta_1\gamma_1^{-1}$.
Hence $\bigcup_i\gamma_i$ contributes $(k-1)$ loops and so $(k-1)$ free letters in the fundamental group.
More precisely, let $t_i=[\gamma_i\delta_i^{-1}\delta_1\gamma_1^{-1}]$ denote the homotopy class, and we set $t_1=e$.
Then
\begin{align*}
\pi_1\left(\widehat{B}_{n-1;1}\left(X\setminus\mathbf{v};\overline{\st(v)}\right),*_n\right)&\simeq
\mathbf{B}_{n-1;1}\left(X\setminus\mathbf{v};\overline{\st(v)}\right)\ast\langle t_2,\dots,t_k\rangle\\
&\simeq\left(\mathbf{B}_{n-1}(X\setminus\mathbf{v})\times \pi_1\left(\overline{\st(v)}\right)\right)\ast\langle t_2,\dots,t_k\rangle\\
&\simeq\mathbf{B}_{n-1}(X)\ast\langle t_2,\dots,t_k\rangle.
\end{align*}
On the other hand, the intersection between $\widehat{B}_{n-1;1}\left(X\setminus\mathbf{v};\overline{\st(v)}\right)$ and $B_n(X)$ is precisely $\left(\bigcup_i\gamma_i\right)\cup B_{n-1;1}(X\setminus\mathbf{v};\mathbf{v})$, which is homotopy equivalent to a wedge sum of $k$ copies of $B_{n-1}(X)$ indexed by the $v_i$'s, as shown in (\ref{eq:subspace}).
Hence its fundamental group is isomorphic to $\ast^k_{i=1}\mathbf{B}_{n-1}(X)$.
Moreover, for each $i$, there are two inclusions $\widehat{\phi}_i$ and $\psi_i$ from $\mathbf{B}_{n-1}(X)$ to
$\mathbf{B}_{n-1}(X)\ast\langle t_2,\dots,t_k\rangle$ and to $\mathbf{B}_n(X)$, respectively, defined on the level of paths by
\[\widehat{\phi}_i(\beta)=\psi_i(\beta)=\gamma_i\cdot i_{v_i}(\beta)\cdot\gamma_i^{-1}\]
for all $\beta\in\mathbf{B}_{n-1}(X)$.
However, as group elements,
\[\widehat{\phi}_i(\beta)=t_i\beta t_i^{-1},\quad\psi_i(\beta)=v_{i*}(\beta),\]
where $v_{i*}=(\gamma_i)_*^{-1}(i_{v_i})_*$ and $(\gamma_i)_*$ is the isomorphism induced by changing the basepoint from $*_n$ to $i_{v_i}(*_{n-1})$.
Therefore we have a diagram below whose push-out defines $\mathbf{B}_n(\widehat X)$ by the Seifert-van Kampen theorem.
\begin{equation}\label{eq:pushout}
\xymatrix{
\mathbf{B}_{n-1}(X)\ast\langle t_2,\dots,t_k\rangle & &\mathbf{B}_n(X)\\
&\mathop{\ast}_{i=1}^k\mathbf{B}_{n-1}(X)\ar[lu]^{\ast_{i=1}^k\widehat{\phi}_i}\ar[ru]_{\ast_{i=1}^k\psi_i}
}
\end{equation}
Hence,
\begin{align*}
\mathbf{B}_n(\widehat X)&=\frac{\left(\mathbf{B}_{n-1}(X)\ast\langle t_2,\dots,t_k\rangle\right)\ast\mathbf{B}_n(X)}
{\left\langle\!\!\left\langle
\widehat{\phi}_i(\beta)=\psi_i(\beta),\forall \beta\in\mathbf{B}_{n-1}(X),1\le i\le k
\right\rangle\!\!\right\rangle}\\
&=\frac{\mathbf{B}_n(X)\ast\langle t_2,\dots,t_k\rangle}
{\langle\!\langle
t_i\beta t_i^{-1}=v_{i*}(\beta),\forall \beta\in\mathbf{B}_{n-1}(X),2\le i\le k
\rangle\!\rangle}.
\end{align*}
The last equality follows by identifying $\mathbf{B}_{n-1}(X)$ with a subgroup of $\mathbf{B}_n(X)$ via $\widehat{\phi}_1$ and $v_{1*}$.
Note that when $k=2$, $\mathbf{B}_n(\widehat X)$ is an ordinary {\em HNN extension} of $\mathbf{B}_n(X)$ with associated subgroup $\mathbf{B}_{n-1}(X)$.
\begin{thm}\label{thm:closure}
Let $X$ be a complex and $\mathbf{v}=\{v_1,\dots,v_k\}\subset \partial X$.
Then the braid group $\mathbf{B}_n(\widehat X)$ on the $k$-closure $\widehat X$ of $X$ along $\mathbf{v}$ is as follows. For $n\ge 1$,
\[\mathbf{B}_n(\widehat X)=\frac{\mathbf{B}_n(X)\ast\langle t_2,\dots,t_k\rangle}
{\langle\!\langle
t_i\beta t_i^{-1}=v_{i*}(\beta),\forall\beta\in\mathbf{B}_{n-1}(X),2\le i\le k
\rangle\!\rangle},\]
where
\[v_{i*}=(\gamma_i)_*^{-1}(i_{v_i})_*:\mathbf{B}_{n-1}(X)\to\mathbf{B}_n(X),\]
and $\gamma_i$ is a chosen path joining basepoints of $B_n(X)$ and $i_{v_i}(B_{n-1}(X))$.
Moreover, $\mathbf{B}_{n-1}(X)$ is identified with a subgroup of $\mathbf{B}_n(X)$ via $v_{1*}$.
\end{thm}
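For instance, in the case $k=2$ the presentation of Theorem~\ref{thm:closure} reads
\[
\mathbf{B}_n(\widehat X)=\frac{\mathbf{B}_n(X)\ast\langle t_2\rangle}
{\langle\!\langle
t_2\beta t_2^{-1}=v_{2*}(\beta),\forall\beta\in\mathbf{B}_{n-1}(X)
\rangle\!\rangle},
\]
that is, the HNN extension of $\mathbf{B}_n(X)$ with stable letter $t_2$ whose associated subgroups are the two copies $v_{1*}(\mathbf{B}_{n-1}(X))$ and $v_{2*}(\mathbf{B}_{n-1}(X))$ of $\mathbf{B}_{n-1}(X)$ inside $\mathbf{B}_n(X)$.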
Let $\partial_0 X$ be a connected component of $\partial X$ of dimension at least 1, and suppose $\{v_1,\dots, v_k\}\subset \partial_0 X$.
Then since $\st(\partial_0 X)$ is of dimension at least 2, we may choose $\gamma_i$'s in a small enough collar neighborhood of $\partial_0 X$ as depicted in Figure~\ref{fig:paths} so that they intersect pairwise only at $v_1$.
Then
each path $\gamma_i$ can be regarded as disjoint from $B_{n-1}(X)$.
This implies the triviality of the action $(\gamma_i)_*$ on $(i_{v_i})_*(\mathbf{B}_{n-1}(X))$. Hence $v_{i*}(\beta)=\beta$ for all $\beta\in\mathbf{B}_{n-1}(X)$, and the defining relator is nothing but the commutativity between $t_i$ and any $\beta\in\mathbf{B}_{n-1}(X)$.
\begin{cor}\label{cor:boundary}
Let $X, \mathbf{v}$ be as above. Suppose $\dim X\ge 2$ and all $v_i$'s are lying in the same component of $\partial X$. Then
\[\mathbf{B}_n(\widehat X)\simeq \mathbf{B}_n(X)\ast\langle t_2,\dots, t_k\rangle/
\langle\!\langle[\mathbf{B}_{n-1}(X), t_i],2\le i\le k\rangle\!\rangle.\]
\end{cor}
\begin{figure}[ht]
\[
\widehat X=\vcenter{\hbox{\input{paths.pdf_tex}}}
\]
\caption{A choice of paths $\{\gamma_i\}$ when $\{v_1,\dots,v_k\}\subset\partial_0 X$}
\label{fig:paths}
\end{figure}
\begin{rmk}\label{rmk:nonplanar}
On the other hand, for a surface $\Sigma$, if we take a closure $\widehat \Sigma$ along points not contained in a single boundary component of $\Sigma$, then $\widehat \Sigma$ is always nonplanar.
Moreover it contains a nonplanar graph.
\end{rmk}
\begin{ex}[2 braid group on the closure of a tree]\label{ex:treeclosure}
Let $T$ be a tree with $k=\#(\partial T)$ and $\widehat T$ be the $k$-closure of $T$ along $\partial T$. Since $\mathbf{B}_1(T)$ is trivial, $\mathbf{B}_2(\widehat T)$ is a free group and admits the following presentation.
\begin{align*}
\mathbf{B}_2(\widehat T)&=\mathbf{B}_2(T)\ast \pi_1(\Theta_k)\\
&=\langle
s_{2,3},\dots,s_{k-1,k},t_2,\dots,t_k|s_{i,j} = s_{i',j'}\text{ if }(i,j)\sim_T(i',j')
\rangle.
\end{align*}
Hence the rank is $r_2(T)+(k-1)$, where $r_2(T)$ is the rank of $\mathbf{B}_2(T)$ given by the formula (\ref{eq:treerank}) in Example~\ref{ex:tree}.
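For instance, if $T$ has exactly two trivalent vertices and all other vertices of valency at most $2$, then $r_2(T)=2$ by (\ref{eq:treerank}) and $k=\#(\partial T)=4$, so $\mathbf{B}_2(\widehat T)$ is free of rank $2+(4-1)=5$.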
\end{ex}
\begin{ex}[The braid group on $\Theta_k$]\label{ex:theta}
Since $\Theta_k=\widehat T_k$, this is a special case of the previous example.
Recall that $T_k$ produces only the trivial equivalence relation on $\binom{\partial T_k-1}2$. Therefore
$\mathbf{B}_2(\Theta_k)$ is a free group of rank $\binom{k-1}2+(k-1)=\binom{k}2$.
Now we consider $\mathbf{B}_n(\Theta_k)$ which is generated by $t_i$'s and $\mathbf{B}_n(T_k)$. More specifically, we have a diagram
\[
\xymatrix@C=3pc{
\mathbf{B}_2(T_k)\ar[r]^{(i_{v_1})_*^{n-2}} \ar[d]_{\widehat{(\cdot)}_*} & \mathbf{B}_n(T_k) \ar[d]^{\widehat{(\cdot)}_*}\\
\mathbf{B}_2(\Theta_k)\ar[r]^{\xi}&\mathbf{B}_n(\Theta_k),
}
\]
where the vertical arrows are induced by the inclusion $\widehat{(\cdot)}:T_k\to\Theta_k$, and $\xi$ is defined by
\[
\xi(\sigma_{i,j}) = \left(\widehat{(\cdot)}_*\circ (i_{v_1})_*^{n-2} \right) (\sigma_{i,j}),\quad
\xi(t_i) = t_i\in\mathbf{B}_n(\Theta_k).
\]
Note that $\xi$ is well-defined since $\mathbf{B}_2(\Theta_k)$ is free, and the commutativity of the diagram is obvious from the definition of $\xi$.
\begin{lem}\cite{KP}
The map $\xi$ is surjective.
\end{lem}
It is not hard to prove this lemma. Indeed, one can prove it by carefully drawing the paths representing the generators of $\mathbf{B}_n(T_k)$ and the $t_i$'s.
In general, for any tree $T$ with $k=\#(\partial T)$, there is a surjective homomorphism $\mathbf{B}_2(\widehat T)\to\mathbf{B}_n(\widehat T)$.
\end{ex}
\begin{rmk}
The decomposition defined above is nothing but a {\em graph-of-groups} structure for $\mathbf{B}_n(\widehat X)$ over the graph $\Theta_k$, as follows. Note that this is essentially the same as the push-out diagram in (\ref{eq:pushout}).
\[
\xymatrix@C=7pc @R=0.4pc{
&\mathbf{B}_{n-1}(X)\ar@(l,u)[lddd]_{Id}\ar@(r,u)[rddd]^{v_{1*}}\\
&\mbox{}&\\
&\mathbf{B}_{n-1}(X)\ar@/_/[ld]_-{Id}\ar@/^/[rd]^{v_{2*}}&\\
\mathbf{B}_{n-1}(X)& \mbox{} &\mathbf{B}_n(X)
\\
&\mathbf{B}_{n-1}(X)\ar@/^/[lu]^-{Id}\ar@/_/[ru]_{v_{3*}}\ar@{}[dd]|-{\vdots}&\\
&\mbox{}&\\
&\mathbf{B}_{n-1}(X)\ar@(l,d)[luuu]^{Id}\ar@(r,d)[ruuu]_{v_{k*}}
}
\]
Here each cycle involving $v_{1*}$ and $v_{i*}$ corresponds to the generator $t_i$, and we call $t_i$'s {\em stable letters} for $\mathbf{B}_n(X)$ as the ordinary HNN extension.
\end{rmk}
\section{\texorpdfstring{$k$}{k}-Connected sum of a pair of complexes}
We will use a generalized notion, called a {\em complex-of-groups}, to describe the braid group on $X\# Y$ for two given complexes $X$ and $Y$; we briefly review complexes-of-groups here. See \cite{Cor} for details.
\subsection{Complex-of-groups}
For two cells $\sigma$ and $\tau$ of a regular CW-complex, we write $\sigma\succ\tau$ if $\tau$ is a face of $\sigma$.
A face $\tau$ of $\sigma$ is {\em principal} if it is of codimension 1. By a {\em directed corner $\alpha$} of $\sigma$ we mean a triple $(\tau_1,\sigma,\tau_2)$ where $\tau_i$ are two different principal faces of $\sigma$ having a unique principal face $\tau_1\cap\tau_2$ in common. For a directed corner $\alpha=(\tau_1,\sigma,\tau_2)$, we denote by $\bar\alpha$ the inverse $(\tau_2,\sigma,\tau_1)$ of $\alpha$.
\begin{defn}[Complex-of-spaces]\label{def:complexofspaces}\cite{Cor}
A {\em (good) complex-of-spaces} $\mathcal{K}$ over $K$ is a CW-complex with a cellular map $p:\mathcal{K}\to K$ satisfying the conditions as follows:
\begin{enumerate}
\item for each cell $\sigma$ of $K$, there is a connected CW-complex $\mathcal{K}_\sigma$ with $p^{-1}(\sigma)\simeq \mathcal{K}_\sigma\times \sigma$;
\item for each cell $\sigma$ of $K$, the inclusion-induced map $\pi_1(\mathcal{K}_\sigma)\to\pi_1(\mathcal{K})$ is injective.
\end{enumerate}
\end{defn}
A $k$-skeleton $\mathcal{K}^{(k)}$ is defined as $p^{-1}(K^{(k)})$. Then the fundamental group $\pi_1(\mathcal{K})$ is isomorphic to $\pi_1(\mathcal{K}^{(2)})$. Moreover, there is a surjection $\pi_1(\mathcal{K}^{(1)})\to\pi_1(\mathcal{K})$ whose kernel is generated by elements corresponding to $\partial \tilde\sigma$ where $\tilde\sigma$ is a lift of 2-cell $\sigma\in K$ \cite[\S4]{Cor}.
\begin{defn}[Complex-of-groups]\label{def:complexofgroups}\cite{Cor}
A {\em complex-of-groups} $\mathcal{G}$ is a triple $(K, G, \phi)$ where
\begin{enumerate}
\item $K$ is a regular CW-complex;
\item $G$ assigns to each cell $\sigma$ of $K$ a group $G_\sigma$ and each pair $(\sigma,\tau)$ with $\sigma\succ\tau$ an injective homomorphism $i_{\sigma,\tau}:G_\sigma\to G_\tau$;
\item $\phi$ is a {\em corner labeling function} that assigns to each directed corner $\alpha=(\tau_1,\sigma,\tau_2)$ an element $\phi(\alpha)\in G_{\tau_1\cap\tau_2}$ satisfying the conditions as follows:
\begin{enumerate}
\item $\phi(\bar\alpha)=\phi(\alpha)^{-1}$ for each directed corner $\alpha$;
\item If $\alpha=(\tau_1,\sigma,\tau_2)$, then the two compositions $G_\sigma\to G_{\tau_1}\to G_{\tau_1\cap\tau_2}$ and $G_\sigma\to G_{\tau_2}\to G_{\tau_1\cap\tau_2}$ differ by conjugation by $\phi(\alpha)$.
\end{enumerate}
\end{enumerate}
\end{defn}
For a complex-of-spaces $\mathcal{K}$ over $K$, an {\em associated} complex-of-groups $\mathcal{G}$ over $K$ can be defined by taking $G_\sigma=\pi_1(\mathcal{K}_\sigma)$. Note that for each $\sigma\succ\tau$, the inclusion $G_\sigma\to G_\tau$ depends on the choice of basepoints of $\mathcal{K}_\sigma$ and $\mathcal{K}_\tau$ and the choice of a path joining them. Hence, it is uniquely determined only up to inner automorphism on $G_\tau$.
A $k$-skeleton $\mathcal{G}^{(k)}$ of a complex-of-groups is nothing but a restriction on $K^{(k)}$. We say that $\mathcal{G}$ is a graph-of-groups when $K$ is 1-dimensional.
Let $\mathcal{G}$ be a graph-of-groups over a graph $K$ and $T$ be a maximal tree of $K$. We identify the generators for $\pi_1(K)$ with the set of {\em oriented} edges in $K\setminus T$.
Let $\tilde\pi_1(\mathcal{G})$ be the free product of $\pi_1(K)$ and the colimit $\colim_T \mathcal{G}$ of $\mathcal{G}$ over $T$.
Indeed, $\colim_T \mathcal{G}$ is obtained as the {\em free product with amalgamation} of the vertex groups along all edge groups.
Then we define a group $\pi_1(\mathcal{G})$ by {\em HNN extensions} along all edge groups corresponding to the generators of $\pi_1(K)$.
More precisely, $\pi_1(\mathcal{G})$ is obtained by declaring $i_{e,v}(g)=e^{-1} i_{e,w}(g) e$ for all $g\in G_e$ in $\tilde\pi_1(\mathcal{G})$ for each edge $e=(v,w)\in K\setminus T$.
Similar to before, the fundamental group $\pi_1(\mathcal{G})$ is the same as $\pi_1(\mathcal{G}^{(2)})$ which is a quotient of $\pi_1(\mathcal{G}^{(1)})$ by elements coming from {\em corner and edge reading} for each 2-cell $\sigma$ of $K$.
For a 2-cell $\sigma$, let $\partial\sigma=e_1e_2\dots e_m$. Then the label $\phi(\sigma)$ on $\sigma$ is defined up to cyclic permutation as
\[
\phi(\sigma)=e_1\phi(\alpha_1)e_2\phi(\alpha_2)\dots e_m\phi(\alpha_m),
\]
where $\phi(\alpha_i)$ is a corner label for $\alpha_i=(e_i,\sigma,e_{i+1})$, and $e_i\in\pi_1(K^{(1)})$ is either trivial when it belongs to the maximal tree $T$ or a corresponding generator otherwise.
Remark that if a complex-of-groups $\mathcal{G}$ is associated with a complex-of-spaces $\mathcal{K}$, then $\pi_1(\mathcal{K})\simeq\pi_1(\mathcal{G})$, and so one may identify these two notions as far as the fundamental group is concerned.
\subsection{\texorpdfstring{$k$}{k}-connected sum}
Let $X$ and $Y$ be complexes, and $\vec v\in X$ and $\vec w\in Y$ be vertices of valency $k\ge 1$ with orderings $\lk(\vec v)=(v_1,\cdots,v_k)$, and $\lk(\vec w)=(w_1,\cdots, w_k)$, respectively.
We denote $(X,\vec v)\#(Y,\vec w)$ by $X\#Y$ and the edges $(v_i, w_i)$ by $e_i$, and set $E=\cup_i e_i$ for simplicity.
For a given metric $d$ on $X\#Y$, by rescaling $X\#Y$ near $E$ after sufficient subdivisions, we may assume that for any configuration $\mathbf{x}\in B_n(X\#Y)$, the closure $\bar e_i$ of each $e_i$ contains at most one point of $\mathbf{x}$.
Then, according to whether each $\bar e_i$ contains a point, $B_n(X\#Y)$ splits as a disjoint union of subspaces indexed by (a subset of) the power set of $E$, as follows.
Let $F\subset E$ be a subset of edges with $\#(F)=a\le n$, and let $\prod F$ denote the product of closures of edges contained in $F$, so that it is homeomorphic to a closed $a$-cube $D^a$. Then the spaces
\[B_{r;s}(X_v;Y_w)\times \prod F\]
indexed by $F$ and $r,s$ with $r+s=n-a$ decompose $B_n(X\#Y)$ as desired.
We simply denote these pieces by $B_{r;s}(F)$ where $r+s=n-\#(F)$, and regard $B_{r;s}(F)$ as $\#(F)$-dimensional cube.
Then for each $e_i=(v_i,w_i)\in F$, the two maps defined by
\[i_{v_i}:B_{r;s}(F)\to B_{r+1;s}(F\setminus \{e_i\}),\quad
i_{w_i}:B_{r;s}(F)\to B_{r;s+1}(F\setminus \{e_i\})\]
correspond to two face maps of the $a$-dimensional cube $\prod F$.
Moreover, by Proposition~\ref{prop:injection}, these induce injective homomorphisms on fundamental groups. We denote these maps by $v_{i*}$ and $w_{i*}$ for simplicity.
Hence all this information defines a complex-of-spaces $\mathcal{K}$ for $B_n(X\#Y)$ over the cube complex $K(n,k)$ depending on $n$ and $k$. Let $\mathcal{G}$ be an associated complex-of-groups with $\mathcal{K}$.
Then $\pi_1(\mathcal{K})=\mathbf{B}_n(X\#Y)=\pi_1(\mathcal{G})$.
\begin{rmk}
The dimension $\dim K$ of the cube complex $K$ is the minimum between $n$ and $k$, and moreover $K$ can be defined inductively as
\[K(n,k)=K(n,k-1)\sqcup K(n-1,k-1)\times I,\]
but this is not necessary in this paper and we omit the detail.
\end{rmk}
Since $\pi_1(\mathcal{G})$ depends only on the 2-skeleton of $K$ as mentioned before, we consider a 2-complex-of-groups $\mathcal{G}^{(2)}$ over the 2-skeleton $K^{(2)}$.
\begin{figure}[ht]
{\scriptsize\[
\xymatrix@C=0pc@R=-0.2pc{
\text{(0-cells)}& &\mathbf{B}_{r+1;s-1}(\emptyset)&& \mathbf{B}_{r;s}(\emptyset) & & \mathbf{B}_{r-1,s+1}(\emptyset)\\
& & &\vdots & & \vdots &\\
& & &\mathbf{B}_{r;s-1}(\{e_i\})\ar[ruu]^{w_{i*}}\ar[luu]_{v_{i*}} & & \mathbf{B}_{r-1;s}(\{e_i\})\ar[luu]_{v_{i*}} \ar[ruu]^{w_{i*}}\\
\text{(1-cells)}& \dots& &\vdots & & \vdots & &\dots\\
& & &\mathbf{B}_{r;s-1}(\{e_j\})\ar[ruuuu]_{w_{j*}}\ar[luuuu]^{v_{j*}} & & \mathbf{B}_{r-1;s}(\{e_j\})\ar[ruuuu]_{v_{j*}}\ar[luuuu]^{v_{j*}}\\
& & & \vdots & & \vdots &\\
& & \vdots & & \vdots & & \vdots\\
\text{(2-cells)}& &\mathbf{B}_{r;s-2}(\{e_i, e_j\})\ar[ruuuuu]^{w_{j*}}\ar[ruuu]_{w_{i*}}& &\mathbf{B}_{r-1;s-1}(\{e_i, e_j\})\ar[luuuuu]_{v_{j*}}\ar[luuu]^{v_{i*}}\ar[ruuuuu]^{w_{j*}}\ar[ruuu]_{w_{i*}} & & \mathbf{B}_{r-2;s}(\{e_i,e_j\})\ar[luuuuu]_{v_{j*}}\ar[luuu]^{v_{i*}}\\
& &\vdots & & \vdots & & \vdots
}
\]}
\caption{A complex-of-groups $\mathcal{G}$}
\label{fig:complexofgroups}
\end{figure}
\begin{figure}[ht]
{\scriptsize\[
\xymatrix@C=1pc{
\{v_i, v_j\}& e_i\cup \{v_j\}\ar[l]_{i_{v_i}}\ar[r]^{i_{w_i}} & \{w_i, v_j\}\\
\{v_i\}\cup e_j\ar[u]^{i_{v_j}}\ar[d]_{i_{w_j}}& e_i\times e_j\ar[l]_{i_{v_i}}\ar[r]^{i_{w_i}}\ar[u]^{i_{v_j}}\ar[d]_{i_{w_j}} & \{w_i\} \cup e_j\ar[u]^{i_{v_j}}\ar[d]_{i_{w_j}}\\
\{v_i, w_j\}& e_i\cup \{w_j\}\ar[l]_{i_{v_i}}\ar[r]^{i_{w_i}} & \{w_i, w_j\}\\
}\qquad
\xymatrix@C=1pc{
\mathbf{B}_{r+1;s-1}(\emptyset)& \mathbf{B}_{r;s-1}(\{e_i\})\ar[l]_{v_{i*}}\ar[r]^{w_{i*}}&\mathbf{B}_{r;s}(\emptyset)\\
\mathbf{B}_{r;s-1}(\{e_j\})\ar[u]^{v_{j*}}\ar[d]_{w_{j*}}&
\mathbf{B}_{r-1;s-1}(\{e_i,e_j\})\ar[u]^{v_{j*}}\ar[d]_{w_{j*}}\ar[l]_{v_{i*}}\ar[r]^{w_{i*}}
&\mathbf{B}_{r-1;s}(\{e_j\})\ar[u]^{v_{j*}}\ar[d]_{w_{j*}}\\
\mathbf{B}_{r;s}(\emptyset)& \mathbf{B}_{r-1;s}(\{e_i\})\ar[l]_{v_{i*}}\ar[r]^{w_{i*}}&\mathbf{B}_{r-1;s+1}(\emptyset)
}
\]
}
\caption{A 2-cell $e_i\times e_j$ in $K$ and corresponding complex-of-groups $\mathbf{B}_{r-1;s-1}(\{e_i,e_j\})$ in $\mathcal{K}$}
\label{fig:2cell}
\end{figure}
The way to compute $\pi_1(\mathcal{G}^{(2)})$ was described earlier. Before doing the computation, we fix basepoints $*_r$ and $*_s$ for $B_r(X_v)$ and $B_s(Y_w)$ for each $r$ and $s$, respectively, and denote $*_r\sqcup *_s$ by $*_{r;s}$.
Let $F\subset E$ with $\#(F)\le n$ and $e_1\not\in F$.
For each $r$, we glue $B_{r;s}(F)$ and $B_{r-1;s+1}(F)$ via $B_{r-1;s}(F\cup\{e_1\})$ by using paths $e_1^{r-1;s}$ from $*_{r;s}$ to $*_{r-1;s+1}$.
Then the distinction between the basepoints $*_{r;s}$ for the $B_{r;s}(F)$ disappears, and we denote them by $*_m$ if $m=r+s$.
Moreover, we may assume that all points of each $*_m$ lie on $e_1$, and we label them by $\{1,\dots,m\}$ with respect to the order coming from the orientation $v_1\to w_1$.
For $i\ge 2$, we choose paths $\gamma_i^m$ and $\delta_i^m$ from $*_m$ to $i_{v_i}(*_{m-1})$ and $i_{w_i}(*_{m-1})$ such that $\gamma_i^m$ moves only the first point in $X_v$, and $\delta_i^m$ moves only the last point in $Y_w$.
For convenience, we set $\gamma_1^m$ and $\delta_1^m$ to be constant paths. Notice that $e_1$ also defines a constant path, but it has the effect of changing the domain from $B_{r;s}(F)$ to $B_{r-1;s+1}(F)$. We will suppress the decorations for $\gamma_i$ and $\delta_i$ unless ambiguity arises.
Note that the two maps $v_{i*}$ and $w_{i*}$ are precisely $(\gamma_i)_*^{-1} (i_{v_i})_*$ and $(\delta_i)_*^{-1}(i_{w_i})_*$, respectively.
\begin{lem}
Let $(X, v)$ and $(Y,w)$ be as above. Then there exist homomorphisms
\[\Phi_X:\mathbf{B}_n(X)\to\mathbf{B}_n(X\# Y),\quad\Phi_Y:\mathbf{B}_n(Y)\to\mathbf{B}_n(X\# Y).\]
\end{lem}
\begin{proof}
Since $X_v$ and $Y_w$ are connected, there are embedded trees $T_X$ and $T_Y$ in $X_v$ and $Y_w$, respectively, such that $\partial T_X=\lk(v)$ and $\partial T_Y=\lk(w)$.
We denote by $q_X:T_Y\to T_k$ and $q_Y:T_X\to T_k$ the label-preserving maps defined in Example~\ref{ex:tree}.
Hence as described in Lemma~\ref{lem:thetasubgroup}, we have
\[\hat q_X: X_v\cup T_Y \to X_v\cup T_k = X \text{ and }\hat q_Y: T_X\cup Y_w \to T_k\cup Y_w = Y,\]
which induce $\hat q_X^*$ and $\hat q_Y^*$ by Proposition~\ref{prop:generaledgecontraction}.
Hence we have the desired homomorphisms
\[
\Phi_X:\mathbf{B}_n(X)\stackrel{\hat q_X^*}{\longrightarrow}\mathbf{B}_n(X_v\cup T_Y)\to\mathbf{B}_n(X\# Y)
\]
and
\[
\Phi_Y:\mathbf{B}_n(Y)\stackrel{\hat q_Y^*}{\longrightarrow}\mathbf{B}_n(T_X\cup Y_w)\to\mathbf{B}_n(X\# Y)
\]
by composing the maps induced by the inclusions $X_v\cup T_Y\to X\# Y$ and $T_X\cup Y_w\to X\#Y$.
\end{proof}
Now we can proceed exactly as before. Let $t_{i,r}=[\gamma_i e_i^{r-1;n-r}\delta_i^{-1}]$ for $1\le r\le n$. More precisely, the action of $t_{i,r}$ on $\mathbf{B}_{r-1;s}(X_v;Y_w)$ is as follows.
\begin{equation}\label{eq:edge}
t_{i,r}^{-1} v_{i*}(\beta) t_{i,r} = w_{i*}(\beta),
\end{equation}
for all $\beta\in\pi_1({B}_{r-1;s}(\{e_i\}))\simeq\mathbf{B}_{r-1;s}(X_v;Y_w)$ with $r+s=n$.
Therefore,
\begin{align*}
\pi_1(\mathcal{G}^{(1)})&=\frac{\left(\mathop{\ast}_{r+s=n}\mathbf{B}_{r;s}(X_v;Y_w)\right)\ast\langle t_{i,r}|2\le i\le k, 1\le r\le n\rangle}
{\langle\!\langle
t_{i,r}^{-1} v_{i*}(\beta) t_{i,r} = w_{i*}(\beta)\quad \beta\in\mathbf{B}_{r-1;s}(X_v;Y_w)
\rangle\!\rangle}
\end{align*}
As before, we may identify $\mathbf{B}_r(X_v)$ and $\mathbf{B}_s(Y_w)$ with subgroups of $\mathbf{B}_n(X_v)$ and $\mathbf{B}_n(Y_w)$ via $v_{1*}$ and $w_{1*}$, respectively.
Then for $\alpha\in\mathbf{B}_{r-1}(X_v)$, the supports of $\alpha$ and $\delta_i$ are disjoint. Hence $w_{i*}(\alpha)=\alpha$, and similarly, $v_{i*}(\beta)=\beta$ for any $\beta\in\mathbf{B}_s(Y_w)$. That is,
\begin{align*}
\pi_1(\mathcal{G}^{(1)})&=\frac{\left(\mathop{\ast}_{r+s=n}\mathbf{B}_{r;s}(X_v;Y_w)\right)\ast\langle t_{i,r}|2\le i\le k, 1\le r\le n\rangle}
{\left\langle\!\!\!\left\langle
\begin{cases}
t_{i,r}^{-1} v_{i*}(\alpha) t_{i,r} = \alpha\quad \alpha\in\mathbf{B}_{r-1}(X_v)\\
t_{i,r}^{-1} \beta t_{i,r} = w_{i*}(\beta)\quad \beta\in\mathbf{B}_s(Y_w)
\end{cases}
\right\rangle\!\!\!\right\rangle}
\end{align*}
Moreover, this is a quotient of
\[
\mathbf{B}_n(X_v)\ast\mathbf{B}_n(Y_w)\ast\left\langle t_{i,r}, u_{i,r}\left| u_{i,r}=t_{i,n-r}^{-1},2\le i\le k, 1\le r\le n\right.\right\rangle
\]
since $\mathbf{B}_{r;s}(X_v;Y_w)=\mathbf{B}_r(X_v)\times\mathbf{B}_s(Y_w)$ and both $\mathbf{B}_r(X_v)$ and $\mathbf{B}_s(Y_w)$ are subgroups of $\mathbf{B}_n(X_v)$ and $\mathbf{B}_n(Y_w)$, respectively.
Here, we furthermore add a set $\{u_{i,r}\}$ of dummy generators by declaring $u_{i,r}=t_{i,n-r}^{-1}$.
Then it becomes
\begin{align*}
\pi_1(\mathcal{G}^{(1)})&=\frac{\mathbf{B}_n(X_v)\ast\mathbf{B}_n(Y_w)\ast\left\langle t_{i,r}, u_{i,r}\left| u_{i,r}=t_{i,n-r}^{-1},2\le i\le k, 1\le r\le n\right.\right\rangle}
{\left\langle\!\!\!\left\langle
\begin{cases}
[\mathbf{B}_r(X_v),\mathbf{B}_s(Y_w)]& r+s=n\\
t_{i,r}^{-1} v_{i*}(\alpha) t_{i,r} = \alpha & \alpha\in\mathbf{B}_{r-1}(X_v)\\
u_{i,s}^{-1} w_{i*}(\beta) u_{i,s} = \beta & \beta\in\mathbf{B}_{s}(Y_w)
\end{cases}
\right\rangle\!\!\!\right\rangle}.
\end{align*}
Notice that the second and third types of defining relators are those appearing in $\mathbf{B}_n(X)$ and $\mathbf{B}_n(Y)$.
Suppose $k=1$. Then $X_v\equiv_B X$, $Y_w\equiv_B Y$ and $X\# Y$ is just a boundary wedge sum $X\vee_{\partial} Y$ of $X$ and $Y$.
Since there is no $t_{i,r}$, we have
\[
\mathbf{B}_n(X\vee_{\partial} Y)\simeq
\frac{\mathbf{B}_n(X)\ast\mathbf{B}_n(Y)}
{\left\langle\!\left\langle
[\mathbf{B}_r(X),\mathbf{B}_s(Y)], r+s=n
\right\rangle\!\right\rangle}.
\]
In this case, we also have a graph-of-groups structure over the linear graph of length $n$. Hence $\mathbf{B}_n(X\vee_\partial Y)$ is obtained by an iterated amalgamated free product.
On the other hand, if $n=1$, then the decorations on the $t_i$ are unnecessary. Therefore
\[
\pi_1(X\#Y)\simeq
\pi_1(X_v)\ast\pi_1(Y_w)\ast\langle t_2,\dots, t_k\rangle\simeq
\frac{\pi_1(X)\ast\pi_1(Y)}
{\langle\!\langle t_i u_i | 2\le i\le k
\rangle\!\rangle},
\]
where the $t_i$'s and $u_i$'s on the right correspond to the generators defined as before. This is nothing but the usual Seifert--van Kampen theorem.
Suppose $k, n\ge 2$ and let $F_{i,j}=\{e_i,e_j\}$ with $i<j$.
Then the boundary $\partial\left(\prod F\right)$ has 4 corners, and so we have to consider the 8 maps $\{LU, UL, UR, RU, RD, DR, DL, LD\}$ corresponding to the possible orders of composition, as shown in Figure~\ref{fig:2cell}.
In the northwest corner, we need to consider the two maps left-and-up $LU$ and up-and-left $UL$.
Then for $r+s=n-2$, the maps $LU$ and $UL$ are the compositions
\[
LU:\mathbf{B}_{r;s}(X_v;Y_w)\xrightarrow{v_{i*}}\mathbf{B}_{r+1;s}(X_v;Y_w)\xrightarrow{v_{j*}}\mathbf{B}_{r+2;s}(X_v;Y_w)
\]
\[
UL:\mathbf{B}_{r;s}(X_v;Y_w)\xrightarrow{v_{j*}}\mathbf{B}_{r+1;s}(X_v;Y_w)\xrightarrow{v_{i*}}\mathbf{B}_{r+2;s}(X_v;Y_w),
\]
which are nothing but conjugates by two paths $\eta_1$ and $\eta_2$ from $*_n$ to $*_{n-2}\sqcup\{v_i,v_j\}$,
where $\eta_1$ moves the first point to $v_j$ and the second point to $v_i$ but $\eta_2$ moves the first to $v_i$ and the second to $v_j$.
More precisely, $\eta_1=\gamma_j\cdot i_{v_j}(\gamma_i)$ and $\eta_2=\gamma_i\cdot i_{v_i}(\gamma_j)$, so they differ by the loop $\eta_2\cdot\eta_1^{-1}$, which represents exactly the same element as $s_{i,j}$ defined in Example~\ref{ex:tree}, namely, the image of the generator of $\mathbf{B}_2(T_3)$ under the map induced from the embedding $(T_3,(1,2,3))\to (T_X,(1,i,j))$.
Note that when $i=1$, we set $s_{1,j}$ to be trivial for all $j>1$.
In summary,
\[
UL(\cdot) = s_{i,j} LU(\cdot) s_{i,j}^{-1}.
\]
In the southeast corner, we have a similar result as above. That is, the two maps $RD$ for right-and-down and $DR$ for down-and-right are related as
\[
DR(\cdot)=s_{i,j}' RD(\cdot) s_{i,j}'^{-1},
\]
where $s_{i,j}'$ is the image of the generator of $\mathbf{B}_2(T_3)$ via the map induced from the embedding $(T_3,(1,2,3))\to(T_Y,(1,i,j))$, and is set to be trivial if $i=1$.
In the northeast corner, the two maps $UR$ for up-and-right and $RU$ for right-and-up coincide since the supports of $\gamma_j$ and $\delta_i$ are disjoint, and so no conjugation is needed for the transport.
In the southwest corner, the situation is exactly the same as above, and so the two maps $LD$ and $DL$ for left-and-down and down-and-left coincide.
On the other hand, the four edges of $\partial(\prod F)$ are as follows.
\begin{align*}
U: v_{i*}(\beta) \mapsto w_{i*}(\beta),\qquad &L: w_{j*}(\beta) \mapsto v_{j*}(\beta)\qquad\forall\beta\in\mathbf{B}_{r+1;s}(X_v;Y_w),\\
R: v_{j*}(\beta) \mapsto w_{j*}(\beta),\qquad &D: w_{i*}(\beta) \mapsto v_{i*}(\beta)\qquad\forall\beta\in\mathbf{B}_{r;s+1}(X_v;Y_w).
\end{align*}
Then as seen in the computation of $\pi_1(\mathcal{G}^{(1)})$, the maps $U,R,D$ and $L$ satisfy relations similar to (\ref{eq:edge}), which are given by
\begin{align*}
U(\cdot)=t_{i,r+2}^{-1} (\cdot) t_{i,r+2}, &\quad
L(\cdot)=u_{j,s}^{-1}(\cdot) u_{j,s}, \\
R(\cdot)=t_{j,r+1}^{-1} (\cdot) t_{j,r+1}, &\quad
D(\cdot)=u_{i,s+1}^{-1}(\cdot) u_{i,s+1}.
\end{align*}
Recall that $t_{1,r}$ and $u_{1,s}$ are set to be trivial.
Finally, the reading of the edges and corners of $\partial[e_i,e_j]$ gives us a word
\[
H_{i,j} = s_{i,j}^{-1} t_{i,r+2} t_{j,r+1} s_{i,j}'^{-1} u_{i,s+1} u_{j,s},
\]
and $\pi_1(\mathcal{G}^{(2)})$ is obtained by declaring $H_{i,j}=e$ for all $1\le i<j\le k$.
Suppose $i=1$. Then since $r+s=n-2$ with $0\le r\le n-2$,
\[
H_{1,j}=t_{j,r+1}u_{j,s}= t_{j,r+1} t_{j,r+2}^{-1}.
\]
Therefore the defining relation $H_{1,j}=e$ removes all decorations on the $t_j$'s, and on the $u_j$'s as well.
If $i>1$, then $H_{i,j}=e$ gives
$s_{i,j}^{-1} t_i t_j s_{i,j}'^{-1} u_i u_j=e$, or equivalently,
\[
s_{i,j}t_j t_i s_{i,j}' u_j u_i=e.
\]
In summary,
\begin{equation}\label{eq:connectedsum}
\mathbf{B}_n(X\# Y)
=\frac{\mathbf{B}_n(X)\ast\mathbf{B}_n(Y)}
{\left\langle\!\!\!\left\langle
\begin{cases}
[\mathbf{B}_r(X_v),\mathbf{B}_s(Y_w)] & r+s=n\\
t_i u_i = e& 2\le i\le k\\
s_{i,j} t_j t_i s_{i,j}' u_j u_i = e& 2\le i<j\le k
\end{cases}
\right\rangle\!\!\!\right\rangle}.
\end{equation}
Before we state the theorem, we will discuss the $s_{i,j}$'s further.
As mentioned above, both $s_{i,j}$ and $s_{i,j}'$ naturally come from $\mathbf{B}_2(T_k)$ as follows.
Let us break $\Theta_k$ into $T_k^L$ and $T_k^R$, its left and right halves. That is, writing $\br(\Theta_k)=\{0,0'\}$, we may assume that
\[T_k^L=\Theta_k\setminus \st(0),\quad T_k^R=\Theta_k\setminus \st(0').\]
We also denote the closures of these halves by $\Theta_k^L$ and $\Theta_k^R$, and denote the corresponding maps by $\widehat{(\cdot)}_{L}$ and $\widehat{(\cdot)}_{R}$:
\[\widehat{(\cdot)}_L:T_k^L\to\Theta_k^L,\quad\widehat{(\cdot)}_R:T_k^R\to\Theta_k^R.\]
Then as seen in Example~\ref{ex:theta},
\[\mathbf{B}_2(\Theta_k^L)=\langle \sigma_{i,j}, t_\ell\rangle,\quad
\mathbf{B}_2(\Theta_k^R)=\langle \sigma'_{i,j}, u_\ell\rangle.\]
Since $\Theta_k^L\#\Theta_k^R = \Theta_k$, we can use (\ref{eq:connectedsum}) to compute $\mathbf{B}_n(\Theta_k)$.
Notice that $s_{i,j}$ and $s_{i,j}'$ correspond to $\sigma_{i,j}$ and $\sigma_{i,j}'$, respectively since they essentially come from $\mathbf{B}_2(T_k^L)$ and $\mathbf{B}_2(T_k^R)$ by definition.
Therefore,
\[\mathbf{B}_2(\Theta_k)=\mathbf{B}_2(\Theta_k^L\# \Theta_k^R)=\langle \sigma_{i,j}, t_\ell, \sigma'_{i,j}, u_\ell |
t_\ell u_\ell =e, \sigma_{i,j}t_j t_i \sigma'_{i,j} u_j u_i=e\rangle,\]
which is obviously isomorphic to $\mathbf{B}_2(\Theta_k^L)$ and $\mathbf{B}_2(\Theta_k^R)$.
Now we turn back to $\mathbf{B}_n(X\# Y)$.
Recall the surjective map $\xi$ described in Example~\ref{ex:theta}. In this situation, we have two surjections
\[\xi_L:\mathbf{B}_2(\Theta_k)\to\mathbf{B}_n(\Theta_k^L),\quad
\xi_R:\mathbf{B}_2(\Theta_k)\to\mathbf{B}_n(\Theta_k^R)\]
satisfying that
\[\widehat{(\cdot)}_{L,*}\circ(i_{v_1})_*^{n-2}=\xi_L\circ \widehat{(\cdot)}_{L,*},\quad
\widehat{(\cdot)}_{R,*}\circ(i_{w_1})_*^{n-2}=\xi_R\circ \widehat{(\cdot)}_{R,*}.\]
Then $s_{i,j}$ and $s_{i,j}'$ are nothing but the images of $\sigma_{i,j}$ and $\sigma_{i,j}'$ in $\mathbf{B}_2(\Theta_k)$ under the compositions $\Psi_X\circ\xi_L$ and $\Psi_Y\circ\xi_R$.
\[(\Psi_X\circ\xi_L)(\sigma_{i,j})=s_{i,j}\in\mathbf{B}_n(X),\quad
(\Psi_Y\circ\xi_R)(\sigma_{i,j}')=s_{i,j}'\in\mathbf{B}_n(Y).\]
Moreover, we have
\[(\Psi_X\circ\xi_L)(t_\ell)=t_\ell\in\mathbf{B}_n(X),\quad
(\Psi_Y\circ\xi_R)(u_\ell)=u_\ell\in\mathbf{B}_n(Y),\]
which correspond to the stable letters for $\mathbf{B}_n(X)$ and $\mathbf{B}_n(Y)$.
Therefore, the second and third types of defining relations in (\ref{eq:connectedsum}) precisely declare that the two images of $\mathbf{B}_2(\Theta_k)$ under $\Psi_X\circ\xi_L$ and $\Psi_Y\circ\xi_R$ are the same.
Moreover, both $\Psi_X\circ\xi_L$ and $\Psi_Y\circ\xi_R$ factor through $\xi:\mathbf{B}_2(\Theta_k)\to\mathbf{B}_n(\Theta_k)$,
so there exist $\tilde\Psi_X$ and $\tilde\Psi_Y$ satisfying the following commutative diagram, where the innermost square involving $\tilde\Psi_X$ and $\tilde\Psi_Y$ is the push-out diagram.
\[
\xymatrix{
&\mathbf{B}_n(\Theta_k)\ar[r]^{\Psi_X} & \mathbf{B}_n(X)\ar[rd]\ar[rrd]^{\Phi_X}\\
\mathbf{B}_2(\Theta_k)\ar@{->>}[ru]^{\xi_L}\ar@{->>}[rd]_{\xi_R}\ar@{->>}[r]^{\xi} & \mathbf{B}_n(\Theta_k)\ar[u]_{\simeq}\ar[d]^{\simeq} \ar[ru]_{\tilde\Psi_X}\ar[rd]^{\tilde\Psi_Y} & & \widetilde{\mathbf{B}}_n(X;Y)\ar@{-->}[r]^-{\exists Q}&\mathbf{B}_n(X\#Y)\\
&\mathbf{B}_n(\Theta_k)\ar[r]_{\Psi_Y} & \mathbf{B}_n(Y)\ar[ru]\ar[rru]_{\Phi_Y}
}
\]
Hence the group $\widetilde{\mathbf{B}}_n(X;Y)$ is defined as the free product with amalgamation as follows.
\begin{align*}
\widetilde{\mathbf{B}}_n(X;Y) &= \frac{
\mathbf{B}_n(X)\ast\mathbf{B}_n(Y)}
{\langle\!\langle
\tilde\Psi_X(\beta)=\tilde\Psi_Y(\beta)|\beta\in\mathbf{B}_n(\Theta_k)
\rangle\!\rangle}\\
&=\frac{
\mathbf{B}_n(X)\ast\mathbf{B}_n(Y)}
{\left\langle\!\!\!\left\langle
\begin{cases}
t_i u_i = e& 2\le i\le k\\
s_{i,j} t_j t_i s_{i,j}' u_j u_i = e& 2\le i< j\le k
\end{cases}
\right\rangle\!\!\!\right\rangle}.
\end{align*}
Finally, the map $Q$ is obviously the quotient map of $\widetilde{\mathbf{B}}_n(X;Y)$ by
\[\langle\!\langle[\mathbf{B}_r(X_v),\mathbf{B}_s(Y_w)], r+s=n\rangle\!\rangle.\]
In summary, we have the following theorem.
\begin{thm}\label{thm:connectedsum}
Let $X, Y$ be complexes, $\vec v\in X$ and $\vec w\in Y$ be vertices of valency $k\ge 1$ with orderings $\lk(\vec v)=(v_1,\dots,v_k)$ and $\lk(\vec w)=(w_1,\dots,w_k)$, respectively. Then the braid group $\mathbf{B}_n((X,\vec v)\# (Y,\vec w))$ for $n\ge 1$ is as follows.
\begin{align*}
\mathbf{B}_n((X,\vec v)\# (Y,\vec w))
=\mathbf{B}_n(X)\ast_{\mathbf{B}_n(\Theta_k)}\mathbf{B}_n(Y)
\big/\langle\!\langle[\mathbf{B}_r(X_v),\mathbf{B}_s(Y_w)], r+s=n\rangle\!\rangle
\end{align*}
for
\[
\mathbf{B}_n(X)\xleftarrow{\tilde\Psi_X}\mathbf{B}_n(\Theta_k)\xrightarrow{\tilde\Psi_Y}\mathbf{B}_n(Y),
\]
where $\tilde\Psi_X$ and $\tilde\Psi_Y$ are defined as
\[\tilde\Psi_X\circ\xi=\Psi_X\circ\xi_L,\quad\tilde\Psi_Y\circ\xi=\Psi_Y\circ\xi_R.\]
\end{thm}
\begin{ex}[2-braid group on the union of two trees]\label{ex:twotrees}
Let $T$ and $T'$ be trees with $k=\#(\partial T)=\#(\partial T')$, and $\widehat T$ and $\widehat T'$ be $k$-closures whose closing vertices are denoted by $v$ and $w$, respectively.
We fix orderings on $\lk(v)$ and $\lk(w)$, and consider the 2-braid group on $(\widehat T,\vec v)\#(\widehat T',\vec w)$.
Then by Example~\ref{ex:treeclosure} and Theorem~\ref{thm:connectedsum},
\[
\mathbf{B}_2(\widehat T\#\widehat T')=\left\langle
s_{i,j},s'_{i,j},t_r\left|
\begin{matrix}
s_{i,j}' =t_i^{-1}t_j^{-1}s_{i,j}^{-1}t_i t_j& 2\le i<j\le k\\
s_{i,j}=s_{i',j'}&(i,j)\sim_T(i',j')\\
s_{i,j}'=
s_{i',j'}' & (i,j)\sim_{T'}(i',j')
\end{matrix}
\right.
\right\rangle.
\]
Note that the generators $s_{i,j}'$ are unnecessary by the first type of defining relation; moreover, the third type reduces the number of $s_{i,j}$'s as well. Indeed, under a certain condition the group has a generating set consisting of the $t_r$'s and only one $s_{i,j}$.
We will see this later.
\end{ex}
\begin{rmk}
Both $k$-closures and $k$-connected sums for any $k\ge 1$ can be obtained by using iterated $2$-closures and $1$-connected sums, which always give graph-of-groups structures.
\end{rmk}
\section{Applications -- Embeddabilities and connectivities}
In this section, we will prove Theorem~\ref{thm:emb}~(2) and (3), namely, how the braid group $\mathbf{B}_n(X)$ or its abelianization $H_1(\mathbf{B}_n(X))$ is related to the geometry of $X$.
Unless mentioned otherwise, we assume that $X$ is sufficiently subdivided and simple.
\subsection{Surface embeddability}
We start with the following easy observation.
\begin{lem}\label{lem:embed}
Let $X$ and $Y$ be simple complexes. Then $X$ and $Y$ embed into surfaces if and only if $\widehat X$ and $\widehat Y$ do, if and only if $X\# Y$ does,
for arbitrary closures and connected sums.
In particular, $X$ embeds into a surface if and only if every elementary subcomplex of $X$ does.
\end{lem}
\begin{lem}\label{lem:torsioncombination}
Let $X$ and $Y$ be complexes.
Then the following are equivalent.
\begin{enumerate}
\item $\mathbf{B}_n(X)$ and $\mathbf{B}_n(Y)$ are torsion-free for all $n\ge 1$.
\item $\mathbf{B}_n(\widehat X)$ and $\mathbf{B}_n(\widehat Y)$ are torsion-free for all $n\ge 1$.
\item $\mathbf{B}_n(X\# Y)$ is torsion-free for any $n\ge 1$.
\end{enumerate}
\end{lem}
\begin{proof}
As remarked at the end of the previous section, both $k$-closures and $k$-connected sums yield graph-of-groups structures, which correspond to HNN extensions and free products with amalgamation.
The proof follows since these group operations preserve both torsion and torsion-freeness.
\end{proof}
It is worth remarking that the configuration space $B_n(X)$ or $B_n(X\# Y)$ is indeed {\em aspherical} if and only if $B_n(X_v)$ is, or both $B_n(X)$ and $B_n(Y)$ are, respectively. This follows easily by considering graph-of-spaces structures on configuration spaces.
\begin{cor}\label{cor:torsion}
Let $X$ be a complex, not necessarily elementary. Suppose $X$ cannot be embedded into any surface. Then $\mathbf{B}_n(X)$ contains $\mathbf{S}_n$ as a subgroup.
\end{cor}
\begin{proof}
By Lemma~\ref{lem:embed}, there exists an elementary subcomplex $Y$ in $X$, which does not embed into any surface.
Then, as mentioned in Example~\ref{ex:S_0}, we may assume that there exists an embedding $i:S_0\to Y\subset X$.
Let $\rho$ and $\rho'$ be the induced permutations from $\mathbf{B}_n(S_0)$ and $\mathbf{B}_n(X)$, respectively.
Then it is obvious that $\rho=\rho'\circ i_*$. However, $\rho$ is an isomorphism and therefore $\rho'\circ i_*\circ\rho^{-1}$ is the identity on $\mathbf{S}_n$. In other words,
\[i_*\circ\rho^{-1}:\mathbf{S}_n\to\mathbf{B}_n(X)\]
is injective.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:emb}~(2)]
Suppose $X$ can be embedded into a surface $\Sigma$. Then it can be modified into a simple complex $X'$ by using only the reverse process of edge contractions, as in Proposition~\ref{prop:graphedgecontraction}.
Hence $X'\equiv_B X$ and $X'$ embeds into $\Sigma$; moreover, all the elementary complexes contained in $X'$ embed into surfaces as well, by Lemma~\ref{lem:embed}.
Therefore by Lemma~\ref{lem:elementary}, all their braid groups are torsion-free.
As seen in Lemma~\ref{lem:torsioncombination}, since the build-up processes preserve torsion-freeness, $\mathbf{B}_n(X)$ is torsion-free, as desired.
The converse follows from Corollary~\ref{cor:torsion}.
\end{proof}
\subsection{The first homology groups}
Suppose $n\ge 2$. Then the induced permutation $\rho$ induces a map
\[\bar\rho:H_1(\mathbf{B}_n(X))\to H_1(\mathbf{S}_n)\simeq\mathbb{Z}_2.\]
\begin{cor}\label{cor:elementary}
Let $X$ be an elementary complex.
Then for $n\ge 2$,
\[
H_1(\mathbf{B}_n(X))\simeq\begin{cases}
\mathbb{Z}^r& X=T_k;\\
H_1(X)\oplus\langle[\sigma]\rangle & \dim X=2, X\text{ is planar};\\
H_1(X)\oplus\langle[\sigma]\rangle/\langle 2[\sigma]\rangle & otherwise,
\end{cases}
\]
where $\bar\rho([\sigma])$ is nontrivial in $H_1(\mathbf{S}_n)\simeq \mathbb{Z}_2$, and $r=r(n,k,k)$ is given by the equation $(\ref{eq:rank})$.
In particular, $H_1(\mathbf{B}_n(X))$ is torsion-free if and only if $X$ is planar.
\end{cor}
\begin{proof}
This is a direct consequence of the discussion in Section~\ref{sec:elementary}.
\end{proof}
Hence Theorem~\ref{thm:emb}~(3) holds for elementary complexes. From now on
we assume that $X$ is nonelementary and, furthermore, that $\partial X\neq\emptyset$. Indeed, if $\partial X=\emptyset$, then there exists $w\in \br(X)$ such that $S_0\subset\st(w)$, and we may obtain a nonempty boundary by attaching a disk in $\st(w)$ as depicted in Figure~\ref{fig:S_0'}.
Therefore we can consider a map $\mathbf{B}_1(X)\to\mathbf{B}_n(X)$ which is a composition of $(n-1)$ maps described in Proposition~\ref{prop:injection}.
This induces the map $i_{X,n}:H_1(X)\to H_1(\mathbf{B}_n(X))$, whose cokernel plays an important role in proving our theorem.
For example, since $\bar\rho$ is a surjection and the image of $i_{X,n}$ is contained in $\ker(\bar\rho)$, we have a surjection $\coker(i_{X,n})\to H_1(\mathbf{S}_n)\simeq\mathbb{Z}_2$. Therefore $\coker(i_{X,n})$ can never be trivial for any $n\ge 2$.
On the other hand, if we have an embedding $\iota:X\to Y$, then it induces a map $\iota_*:\coker(i_{X,n})\to\coker(i_{Y,n})$. Since $\bar\rho$ is equivariant, $\iota_*$ is nontrivial too.
\begin{cor}\cite{KKP}\label{cor:nonplanargraph}
Suppose $X$ contains a nonplanar graph. Then $\coker(i_{X,n})$ has $2$-torsion.
\end{cor}
\begin{proof}
For a nonplanar graph $\Gamma$, it is known that $H_1(\mathbf{B}_n(\Gamma))$ has 2-torsion corresponding to the generator for $H_1(\mathbf{S}_n)$. See \cite{KKP} or Proposition~\ref{prop:connectedsum} below. Hence for any complex $X$ containing a nonplanar graph, $\coker(i_{X,n})$ has 2-torsion as discussed above.
\end{proof}
We observe how $k$-closure and $k$-connected sum affect the first homology of braid groups as follows. These two lemmas are direct consequences of Theorem~\ref{thm:closure} and Theorem~\ref{thm:connectedsum}.
\begin{lem}\label{lem:exactseq1}
Let $X$ be a complex and $\widehat X$ be a $k$-closure of $X$. Then there exists a commutative diagram with exact rows as follows.
\[
\xymatrix{
1\ar[r]&H_1(X)\ar[r]\ar[d]^{i_{X,n}}&
H_1(\widehat X)\ar[r]\ar[d]^{i_{\widehat X,n}}&
H_1(\Theta_k)\ar[r]\ar[d]^{\simeq}&1\\
&H_1(\mathbf{B}_n(X))\ar[r]&
H_1(\mathbf{B}_n(\widehat X))\ar[r]&
H_1(\Theta_k)\ar[r]&1.
}
\]
\end{lem}
Note that since $H_1(\Theta_k)\simeq\mathbb{Z}^{k-1}$ is free abelian, the surjective map in each row splits.
\begin{cor}\label{cor:closures}
Let $Y$ be an elementary complex of dimension 2. Suppose $X$ is obtained by taking closures several times of $Y$.
Then
\[
H_1(\mathbf{B}_n(X))=\begin{cases}
H_1(X)\oplus\mathbb{Z}& X\text{ is planar};\\
H_1(X)\oplus\mathbb{Z}_2& otherwise.
\end{cases}
\]
\end{cor}
\begin{proof}
By Lemma~\ref{lem:exactseq1}, the map $\coker(i_{Y,n})\to\coker(i_{X,n})$ is surjective, and Corollary~\ref{cor:elementary} implies that $\coker(i_{Y,n})$ is either $\mathbb{Z}$ or $\mathbb{Z}_2$.
If $Y$ is nonplanar, then $X$ is nonplanar and $\coker(i_{Y,n})\simeq\mathbb{Z}_2$. Hence so is $\coker(i_{X,n})$ by above.
Suppose $Y$ is planar but $X$ is nonplanar. Then since $Y$ is a surface, $X$ contains a nonplanar graph as mentioned in Remark~\ref{rmk:nonplanar}. Hence $\coker(i_{X,n})\simeq\mathbb{Z}_2$ by Corollary~\ref{cor:nonplanargraph}.
Suppose $X$ is planar, and consider the map $\iota_*:\coker(i_{X,n})\to \coker(i_{\mathbb{R}^2,n})\simeq\mathbb{Z}$ induced by the embedding $\iota:X\to\mathbb{R}^2$. Since $\iota_*$ is nontrivial and $\coker(i_{X,n})$ is generated by a single element, $\iota_*$ must be injective, and hence $\coker(i_{X,n})\simeq\mathbb{Z}$.
\end{proof}
\begin{lem}\label{lem:exactseq2}
Let $X, Y$ and $v,w$ be as given in Theorem~\ref{thm:connectedsum}. Then there exists an exact sequence as follows.
\[
\xymatrix@C=1pc{
1\ar[r]&H_1(\Theta_k)\ar[r]\ar[d]^{i_{\Theta_k,n}}&
H_1(X)\oplus H_1(Y)\ar[r]\ar[d]^{i_{X,n}\oplus i_{Y,n}}&
H_1(X\# Y)\ar[r]\ar[d]^{i_{X\# Y,n}}&1\\
&H_1(\mathbf{B}_n(\Theta_k))\ar[r]&
H_1(\mathbf{B}_n(X))\oplus H_1(\mathbf{B}_n(Y))\ar[r]&
H_1(\mathbf{B}_n(X\# Y))\ar[r]&1.
}
\]
\end{lem}
\subsection{Connectivities}
We adopt another notion of decomposition, called a {\em cut}. This notion originated in graph theory and is extended to complexes in a slightly different way as follows.
\begin{defn}
Let $X$ be a sufficiently subdivided simple complex. A set $\mathbf{v}=\{v_1,\dots,v_k\}$ of vertices in $\br(X)$ is a {\em $k$-cut} if $X_\mathbf{v}=X\setminus \st(\mathbf{v})$ is disconnected.
We say that a $k$-cut $\mathbf{v}$ is {\em trivial} if there exists a subcomplex $Y\subset X$ with $\mathbf{v}\subset\partial Y$ and $X$ is homeomorphic to $\widehat{(Y,\mathbf{v})}$,
and that $X$ is {\em vertex-$k$-connected} if there is no nontrivial $(k-1)$-cut.
\end{defn}
\begin{rmk}
If $\mathbf{v}=\{v_1,\dots,v_k\}$ is a trivial $k$-cut and $\widehat Y=X$, then $\val(v_i)=1$ in $Y$ since $\mathbf{v}\subset\partial Y$.
Therefore $\val(v_i)=2$ in $X$ for all $i$.
However, since all $v_i$ are in $\br(X)$ and $X$ is simple, $\lk(v_i)=D^m\sqcup \{*\}$ for some $m\ge 1$.
Consequently, a trivial cut may exist only when $X$ is of dimension at least 2.
In particular, if there is a trivial 1-cut $v$, then it satisfies the assumption of Proposition~\ref{prop:edgecontraction}, and therefore we may assume without loss of generality that there is no trivial 1-cut.
\end{rmk}
For example, any nontrivial tree has a nontrivial 1-cut and so it is not vertex-2-connected, and $\Theta_k$ for $k\ge 3$ has a unique nontrivial 2-cut but no 1-cut, hence it is vertex-2-connected.
\begin{figure}[ht]
\[
S_0'=\vcenter{\hbox{\includegraphics{S_0_2.pdf}}}\quad\equiv_B\quad
\vcenter{\hbox{\includegraphics{S_0.pdf}}}=S_0
\]
\caption{A complex $S_0'$ which is braid equivalent to $S_0$}
\label{fig:S_0'}
\end{figure}
We recall the famous result of Menger on the relationship between vertex-$k$-connectivity and the existence of an embedded $\Theta_k$.
\begin{lem}\cite{M}
Let $\Gamma$ be a graph without a vertex of valency 1.
Then $\Gamma$ is vertex-$k$-connected if and only if for any $v,w\in\br(\Gamma)$, there is an embedding $(\Theta_k, \{0,0'\})\to(\Gamma,\{v,w\})$ of pairs.
\end{lem}
\begin{ex}[Vertex-3-connectivity for the union of two trees]\label{ex:twotrees2}
Recall the graph $\widehat T\#\widehat T'$ defined in Example~\ref{ex:twotrees}.
Then it is vertex-3-connected only if there is an embedding $(\Theta_3,\{0,0'\})\to(\widehat T\#\widehat T', \{v,w\})$ for any $v\in \br(T)$ and $w\in\br(T')$. Indeed, we may assume that such a $\Theta_3$ always passes through the point $1\in\partial T$.
Let $T_3^L$ and $T_3^R$ be the two halves of $\Theta_3$ as before.
Then the restrictions of an embedding $\Theta_3\to\widehat T\#\widehat T'$ to $T_3^L$ and $T_3^R$ give us a pair of equivalence classes $[s_{i,j}]$ and $[s_{i,j}']$ with respect to $\sim_T$ and $\sim_{T'}$,
which are related as
$s_{i,j}' =t_i^{-1}t_j^{-1}s_{i,j}^{-1}t_i t_j$ as described in Example~\ref{ex:tree} and Example~\ref{ex:twotrees}.
Therefore the vertex-3-connectivity of $\widehat T\#\widehat T'$ implies that any pair of generators for $\mathbf{B}_2(T)$ and $\mathbf{B}_2(T')$ are related, and so $\mathbf{B}_2(\widehat T\#\widehat T')$ is generated by $t_i$'s and only one $s_{i,j}$.
\end{ex}
\subsubsection{1-cuts}
Assume that $X$ has a 1-cut $v$ of valency $k\ge 2$, and let $X_1,\dots, X_m$ be the connected components of $X_v$. Note that if $k=2$, then $m$ must be 2 and this is a 1-connected sum decomposition of $X$. Therefore we assume that $k\ge 3$.
For $1\le i\le m$, let $k_i$ be the number of vertices in $X_i$ which are adjacent to $v$.
Let $\mathbf{k}=(k_1,\dots,k_m)$, so that $\sum_{i=1}^m k_i = k$.
Now we decompose $X$ into $(m+1)$ pieces, namely, $\widehat X_1,\dots, \widehat X_m$ and $\widehat T_{\mathbf{k}}$, via $k_i$-connected sums for all $1\le i\le m$, where $\widehat T_{\mathbf{k}}$ looks like a graph depicted in Figure~\ref{fig:1cut}.
We apply Lemma~\ref{lem:exactseq2} for $X= \widehat T_{\mathbf{k}} \# \widehat X_1\#\dots\# \widehat X_m$ as follows.
\begin{align*}
\bigoplus_{i=1}^m H_1(\mathbf{B}_n(\Theta_{k_i}))
\xrightarrow{\oplus\tilde\Psi_i}
H_1(\mathbf{B}_n(\widehat T_{\mathbf{k}}))\oplus\bigoplus_{i=1}^m H_1(\mathbf{B}_n(\widehat X_i))
\longrightarrow
H_1(\mathbf{B}_n(X))
\longrightarrow 1.
\end{align*}
\begin{figure}[ht]
\[\vcenter{\hbox{\scriptsize\input{1cut.pdf_tex}}}\quad=\quad
\vcenter{\hbox{\scriptsize\input{1cut_decomposition.pdf_tex}}}\]
\caption{A decomposition of $X$ near 1-cut $v$ and a graph $\widehat T_{\mathbf{k}}$ with $\mathbf{k}=(3,2,1,2,4,2)$}
\label{fig:1cut}
\end{figure}
\begin{lem}\cite[Lemma~3.11]{KP}\label{lem:1cut}
The first homology group of $\mathbf{B}_n(\widehat T_{\mathbf{k}})$ is given by
\[H_1(\mathbf{B}_n(\widehat T_{\mathbf{k}})) \simeq \mathbb{Z}^{r(n, k,m)} \oplus \bigoplus_{i=1}^m H_1(\mathbf{B}_n(\Theta_{k_i})).\]
\end{lem}
Hence the obvious embedding $\Theta_{k_i}\to \widehat{T}_{\mathbf{k}}$ yields an injection
\[\tilde\Psi_i:H_1(\mathbf{B}_n(\Theta_{k_i}))\to H_1(\mathbf{B}_n(\widehat T_{\mathbf{k}})),\]
and therefore the sequence above becomes a short exact sequence. Moreover, we have the following lemma which is obvious by the decomposition in Lemma~\ref{lem:1cut}.
\begin{lem}
Let $X, v$ and $X_i$'s be as before. Then
\[H_1(\mathbf{B}_n(X))= \mathbb{Z}^{r(n,k,m)}\oplus \bigoplus_{i=1}^m H_1(\mathbf{B}_n(\widehat X_i)).\]
\end{lem}
In summary, each nontrivial 1-cut $v$ contributes $r(n,k,m)$ to the first Betti number, where $k=\val(v)$ and $m=\#(\pi_0(X_v))$.
From now on, we assume that $X$ has no 1-cut.
\begin{lem}\cite[Lemma~3.12]{KP}\label{lem:biconn}
Let $\Gamma$ be a graph without a 1-cut. Then for all $n\ge 2$, $H_1(\mathbf{B}_n(\Gamma))\simeq H_1(\mathbf{B}_2(\Gamma))$.
Therefore, $H_1(\mathbf{B}_n(\Theta_k))\simeq H_1(\Theta_k)\oplus\mathbb{Z}^{\binom{k-1}2}$.
\end{lem}
\subsubsection{2-cuts}
Assume that $X$ has no 1-cut but a nontrivial 2-cut $\mathbf{v}=\{v_1,v_2\}$, and $X_1,\dots, X_m$ are the connected components of $X_\mathbf{v}$ as before.
Let $k_{i,j}$ be the number of components of $\lk(v_j)$ in $X_i$ and $\mathbf{k}_j=(k_{1,j},\dots,k_{m,j})$ for $1\le i\le m, j=1,2$.
Similarly to the above, we decompose $X$ into $(m+1)$ pieces via $(k_{i,1}+k_{i,2})$-connected sums as depicted in Figure~\ref{fig:2cut}. The connected summands will be denoted by $\widehat X_1,\dots, \widehat X_m$ and $\Theta_{\mathbf{k}_1, \mathbf{k}_2}$.
Then by Lemma~\ref{lem:exactseq2}, $H_1(\mathbf{B}_n(X))$ is isomorphic to the cokernel of
\[
\xymatrix{
\displaystyle{\bigoplus_{i=1}^m H_1(\mathbf{B}_n(\Theta_{k_{i,1}+k_{i,2}}))}\ar[r]^-{\oplus \tilde\Psi_i}&
H_1(\mathbf{B}_n(\Theta_{\mathbf{k}_1,\mathbf{k}_2}))\oplus\displaystyle{\bigoplus_{i=1}^m H_1(\mathbf{B}_n(\widehat X_i)).}
}
\]
\begin{figure}[ht]
\[X\quad=\quad\vcenter{\hbox{\scriptsize\input{2cut.pdf_tex}}}\quad=\quad
\vcenter{\hbox{\scriptsize\input{2cut_decomposition.pdf_tex}}}\]
\caption{A decomposition of $X$ near a 2-cut $\mathbf{v}$ and a graph $\Theta_{\mathbf{k}_1,\mathbf{k}_2}$ with $\mathbf{k}_1=(3,2,3)$ and $\mathbf{k}_2=(2,1,3)$}
\label{fig:2cut}
\end{figure}
\begin{figure}[ht]
\[
\vcenter{\hbox{\input{2cut_theta.pdf_tex}}}\quad\stackrel{q}{\longleftarrow}\quad
\vcenter{\hbox{\input{2cut_thetatilde.pdf_tex}}}\quad=\quad
\vcenter{\hbox{\scriptsize\input{2cut_thetatilde_decomposition.pdf_tex}}}
\]
\caption{A decomposition of $\widetilde\Theta_{\mathbf{k}_1,\mathbf{k}_2}$ via 2-connected-sums}
\label{fig:tildethetadecomposition}
\end{figure}
Let $\Theta_{a,b,c}$ denote a (possibly subdivided) graph obtained by replacing respective edges of the triangle with $a$, $b$ and $c$ multiple edges.
We take $2$-connected sums between $\Theta_{k_{i,1},k_{i,2},1}$ and $\Theta_m$ to obtain $\widetilde\Theta_{\mathbf{k}_1,\mathbf{k}_2}$. Then $\Theta_{\mathbf{k}_1,\mathbf{k}_2}$ comes from $\widetilde\Theta_{\mathbf{k}_1,\mathbf{k}_2}$ by contracting all edges adjacent to vertices of $\Theta_m$. See Figure~\ref{fig:tildethetadecomposition}.
We want to use $\widetilde\Theta_{\mathbf{k}_1,\mathbf{k}_2}$ instead of $\Theta_{\mathbf{k}_1,\mathbf{k}_2}$.
That is, we define $\widetilde X$ by taking $(k_{i,1}+k_{i,2})$-connected-sums between $\widehat X_i$ and $\widetilde\Theta_{\mathbf{k}_1,\mathbf{k}_2}$.
The lemma below ensures that we can safely do this.
\begin{lem}\cite[Lemma~3.14]{KP}\label{lem:2cut}
Let $q:\widetilde\Theta_{\mathbf{k}_1,\mathbf{k}_2}\to\Theta_{\mathbf{k}_1,\mathbf{k}_2}$ be the quotient map and $q^*:\mathbf{B}_n(\Theta_{\mathbf{k}_1,\mathbf{k}_2})\to\mathbf{B}_n(\widetilde\Theta_{\mathbf{k}_1,\mathbf{k}_2})$ be the map defined in Proposition~\ref{prop:generaledgecontraction}.
Then $q^*$ induces an isomorphism between abelianizations; that is, the first homology group $H_1(\mathbf{B}_n(\Theta_{\mathbf{k}_1,\mathbf{k}_2}))$ is isomorphic to
$H_1(\mathbf{B}_n(\widetilde \Theta_{\mathbf{k}_1,\mathbf{k}_2}))$.
\end{lem}
Therefore $H_1(\mathbf{B}_n(X))=H_1(\mathbf{B}_n(\widetilde X))$ and now we can decompose $\widetilde X$ by using $2$-connected-sums into $\widetilde X_i$'s and $\Theta_m$, where $\widetilde X_i=\widehat X_i \# \Theta_{k_{i,1},k_{i,2},1}$.
\begin{figure}[ht]
\begin{align*}
\widetilde X\quad=\quad&\vcenter{\hbox{\scriptsize\input{tildeX_decomposition1.pdf_tex}}}\\
\quad=\quad&\vcenter{\hbox{\scriptsize\input{tildeX_decomposition2.pdf_tex}}}\\
\quad=\quad&\Theta_3\#\widetilde X_1\#\widetilde X_2\#\widetilde X_3.
\end{align*}
\caption{A 2-connected-sum decomposition of $\widetilde X$ near a 2-cut}
\label{fig:tildeX}
\end{figure}
By Lemma~\ref{lem:exactseq2} again, we have a short exact sequence
\[1\to\bigoplus_{i=1}^m H_1(\mathbf{B}_n(\Theta_2))\to
H_1(\mathbf{B}_n(\Theta_m))\oplus\bigoplus_{i=1}^m H_1(\mathbf{B}_n(\widetilde X_i))\to
H_1(\mathbf{B}_n(\widetilde X))\to 1.\]
\begin{lem}
Let $X, \mathbf{v}$ and $X_i$'s be as above. Then
\begin{align*}
H_1(\mathbf{B}_n(X))\oplus\mathbb{Z}^m&=H_1(\mathbf{B}_n(\Theta_m))\oplus\bigoplus_{i=1}^m H_1(\mathbf{B}_n(\widetilde X_i))\\
&=\mathbb{Z}^{\binom{m}2}\oplus\bigoplus_{i=1}^m H_1(\mathbf{B}_n(\widetilde X_i)).
\end{align*}
\end{lem}
\begin{proof}
This follows easily from the above exact sequence and $H_1(\mathbf{B}_n(\Theta_2))=\mathbb{Z}$.
\end{proof}
\subsubsection{Vertex-3-connected complexes}
We claim the following.
\begin{prop}\label{prop:3conn}
Let $X$ be a simple and vertex-3-connected complex. Then for $n\ge 2$,
\[
H_1(\mathbf{B}_n(X))=\begin{cases}
H_1(X)\oplus\langle[\sigma]\rangle&X\text{ is planar};\\
H_1(X)\oplus\langle[\sigma]\rangle/\langle 2[\sigma]\rangle&X\text{ is nonplanar}.
\end{cases}
\]
\end{prop}
This is a generalization of the result for vertex-3-connected graphs stated in \cite[Lemma~3.15]{KP}, and we can prove Theorem~\ref{thm:emb}~(3) with the aid of this proposition as follows.
\begin{proof}[Proof of Theorem~\ref{thm:emb}~(3)]
Since 1- and 2-connected sums and 1-cut and 2-cut decompositions preserve planarity, $X$ is nonplanar if and only if $X$ has a nonplanar vertex-3-connected component with respect to 1-cut and 2-cut decompositions.
On the other hand, Lemma~\ref{lem:1cut} implies that $H_1(\mathbf{B}_n(X))$ has torsion if and only if one of its vertex-2-connected components does. However, Lemma~\ref{lem:2cut} does not directly imply the corresponding result because of the $\mathbb{Z}$ summands.
Each $\mathbb{Z}$ summand is mapped to a summand of the homology group $H_1(\widetilde X_i)$ of a vertex-3-connected component $\widetilde X_i$ via the map induced from the embedding $\Theta_2=S^1\to \widetilde X_i$.
Hence $H_1(\mathbf{B}_n(X))$ has torsion for a vertex-2-connected complex $X$ if and only if one of its vertex-3-connected components does.
Finally, Proposition~\ref{prop:3conn} completes the proof.
\end{proof}
For the rest of the paper, we will prove Proposition~\ref{prop:3conn}. Hence from now on, we suppose that $X$ is a simple and vertex-3-connected complex.
\begin{lem}\label{lem:3conntrees}
Let $T$ and $T'$ be trees with $k=\#(\partial T)=\#(\partial T')$. Suppose $\widehat T\#\widehat T'$ is vertex-3-connected.
Then Proposition~\ref{prop:3conn} holds for $\widehat T\#\widehat T'$.
\end{lem}
\begin{proof}
This is a special case of Lemma~3.15 in \cite{KP}, and we give a new proof here.
By Lemma~\ref{lem:biconn}, it suffices to consider $H_1(\mathbf{B}_2(\widehat T\#\widehat T'))$.
Recall the group presentation for $\mathbf{B}_2(\widehat T\#\widehat T')$ from Example~\ref{ex:twotrees}. Then its abelianization has a presentation as follows.
\[
H_1(\mathbf{B}_2(\widehat T\#\widehat T'))=\mathbb{Z}^{k-1}\oplus\bigoplus_{2\le i<j\le k}\langle [s_{i,j}]\rangle\bigg/
\left\langle [s_{i,j}]-[s_{i',j'}] \left| \begin{matrix}(i,j)\sim_T(i',j')\text{ or}\\(i,j)\sim_{T'}(i',j')\end{matrix}\right.\right\rangle.
\]
Then, as shown in Example~\ref{ex:twotrees2}, vertex-3-connectedness implies that the two equivalence relations $\sim_T$ and $\sim_{T'}$ interact with each other in such a way that all the $[s_{i,j}]$'s become equivalent.
Hence the cokernel of $i_{\widehat T\#\widehat T',2}$ is generated by a single element $[s]$.
If $\widehat T\#\widehat T'$ is nonplanar, then the two trees must be glued in a twisted way. That is, one of the $[s_{i,j}]$'s must be identified with its inverse $-[s_{i,j}]$, and therefore $[s]$ is 2-torsion since
\[2[s]=[s_{i,j}]-(-[s_{i,j}])=0.\]
\end{proof}
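The dichotomy in this proof can be illustrated by a toy computation, which is our own addition and not part of the original argument. Abelianized identifications among the classes $[s_{i,j}]$ form an integer relation matrix, and torsion in the cokernel can be read off from its Smith normal form: a planar-type identification only equates two classes, whereas a twisted identification equates a class with its inverse and produces the relation $2[s]=0$. A minimal SymPy sketch with two hypothetical generators:
\begin{verbatim}
# Toy illustration (not from the paper): generators [s1], [s2];
# rows of the matrix are abelianized relations.
#   planar-type identification:  [s1] - [s2] = 0
#   twisted identification:      [s1] + [s2] = 0  (a class equals an inverse)
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

planar  = Matrix([[1, -1]])             # Z^2 / (row space) = Z, torsion-free
twisted = Matrix([[1, -1], [1, 1]])     # Z^2 / (row space) = Z_2, has 2-torsion

print(smith_normal_form(planar, domain=ZZ))    # Matrix([[1, 0]])
print(smith_normal_form(twisted, domain=ZZ))   # Matrix([[1, 0], [0, 2]])
\end{verbatim}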
\begin{prop}\label{prop:connectedsum}
Let $X=Y\# Z$. Suppose that Proposition~\ref{prop:3conn} holds for all vertex-3-connected subcomplexes of $Y$ and $Z$. Then it does for $X$ as well.
\end{prop}
\begin{proof}
Let $X=(Y,\vec v)\# (Z,\vec w)$ for $\lk(\vec v) =(v_1,\dots,v_k)$ and $\lk(\vec w)=(w_1,\dots,w_k)$. Then both $Y_v$ and $Z_w$ are connected by definition, and therefore both $Y$ and $Z$ are vertex-2-connected.
However, in general, $Y$ and $Z$ are not necessarily vertex-3-connected, and if a 2-cut exists in $Y$ or $Z$, then one of the two vertices is precisely $v$ or $w$, respectively.
Then from the 2-cut decompositions for $Y$ and $Z$, we can obtain two trees $T_Y\subset Y_v$ and $T_Z\subset Z_w$ such that $\partial T_Y=\lk(v)$ and $\partial T_Z=\lk(w)$, where vertices in $T_Y$ and $T_Z$ correspond to vertex-3-connected components in $Y$ and $Z$, respectively.
Since $X$ is vertex-3-connected, so is $\widehat T_Y\#\widehat T_Z$ by construction.
Moreover, by the assumption about $Y$ and $Z$, $\coker(i_{Y,n})$ and $\coker(i_{Z,n})$ are generated by $r_2(T_Y)$ and $r_2(T_Z)$ elements, respectively.
The commutative diagram in Lemma~\ref{lem:exactseq2} produces an exact sequence
\[\coker(i_{\Theta_k,n})\to\coker(i_{Y,n})\oplus\coker(i_{Z,n})\to\coker(i_{X,n})\to 1.\]
However, the quotient of $\coker(i_{Y,n})\oplus\coker(i_{Z,n})$ by the image of $\coker(i_{\Theta_k,n})$ is nothing but $\coker(i_{\widehat T_Y\#\widehat T_Z,n})$ and by Lemma~\ref{lem:3conntrees}, it is either $\mathbb{Z}$ or $\mathbb{Z}_2$.
Finally, it is $\mathbb{Z}_2$ if and only if either one of vertex-3-connected components of $Y$ and $Z$ is nonplanar, or both $Y$ and $Z$ are planar but $\widehat T_Y\#\widehat T_Z$ is nonplanar. Since these conditions are equivalent to the nonplanarity of $X$, we are done.
\end{proof}
Therefore we may assume furthermore that $X$ is not decomposable in a nontrivial way via $k$-connected sum for all $k\ge 1$. However, note that $X$ might be expressible as a closure.
Indeed, there is no such complex of dimension 1, that is, no such graph, for the following reason. If such a graph $\Gamma$ existed, it would have at least 3 vertices of valency $\ge 3$ by vertex-3-connectedness, but also at most 3, since a graph with at least 4 vertices of valency $\ge 3$ is always decomposable, as follows. First we divide the set $V(\Gamma)$ of vertices of valency $\ge3$ into two parts $V_1$ and $V_2$ such that both the full subgraphs $\Gamma_1$ and $\Gamma_2$ containing $V_1$ and $V_2$ are connected.
Then for each $i$, we consider the closure of the complement of $\st(\Gamma_i)$ along its boundary. Then $\Gamma$ is nothing but the connected sum of these two complexes, hence decomposable. Therefore the only possibilities are the graphs $\Theta_{a,b,c}$ defined above. Since vertex-3-connectedness implies the absence of multiple edges, two of $a$, $b$ and $c$ must be 1. This is a contradiction.
Therefore $X$ must be obtained by taking closures of an elementary complex of dimension 2 several times.
However, this case has already been treated in Corollary~\ref{cor:closures}. This completes the proof of Proposition~\ref{prop:3conn}.
| {
"timestamp": "2015-08-18T02:02:07",
"yymm": "1508",
"arxiv_id": "1508.03699",
"language": "en",
"url": "https://arxiv.org/abs/1508.03699",
"abstract": "We consider the braid groups $\\mathbf{B}_n(X)$ on finite simplicial complexes $X$, which are generalizations of those on both manifolds and graphs that have been studied already by many authors. We figure out the relationships between geometric decompositions for $X$ and their effects on braid groups, and provide an algorithmic way to compute the group presentations for $\\mathbf{B}_n(X)$ with the aid of them.As applications, we give complete criteria for both the surface embeddability and planarity for $X$, which are the torsion-freeness of the braid group $\\mathbf{B}_n(X)$ and its abelianization $H_1(\\mathbf{B}_n(X))$, respectively.",
"subjects": "Geometric Topology (math.GT)",
"title": "On the structure of braid groups on complexes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9873750533189538,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7095221826297868
} |
https://arxiv.org/abs/1503.06789 | The Liouville theorem as a problem of common eigenfunctions | It is shown that, by appropriately defining the eigenfunctions of a function defined on the extended phase space, the Liouville theorem on solutions of the Hamilton--Jacobi equation can be formulated as the problem of finding common eigenfunctions of $n$ constants of motion in involution, where $n$ is the number of degrees of freedom of the system. | \section{Introduction}
In the framework of the Hamiltonian formulation of classical mechanics, the Liouville theorem asserts that, for a mechanical system with $n$ degrees of freedom, if we have $n$ constants of motion in involution, $F_{1}, F_{2}, \ldots, F_{n}$ (that is, $\{ F_{i}, F_{j} \} = 0$ for $i, j = 1, 2, \ldots, n$, where $\{ \; , \; \}$ is the Poisson bracket), then a complete solution of the Hamilton--Jacobi (HJ) equation can be found by quadrature \cite{Wi,GV,BB,FM,DB,Li}.
More precisely, if $F_{1}(q_{i}, p_{i}, t), \ldots, F_{n}(q_{i}, p_{i}, t)$ are $n$ constants of motion in involution, that is,
\begin{equation}
\frac{\partial F_{i}}{\partial t} + \{ F_{i}, H \} = 0, \qquad i = 1, 2, \dots, n \label{cm}
\end{equation}
and
\begin{equation}
\{ F_{i}, F_{j} \} = 0, \qquad i, j = 1, 2, \dots, n, \label{inv}
\end{equation}
where $\{ \;, \; \}$ denotes the Poisson bracket (with the convention $\{ q_{i}, p_{j} \} = \delta_{ij}$), then, assuming that
\begin{equation}
\det \left( \frac{\partial F_{i}}{\partial p_{j}} \right) \not= 0, \label{reg}
\end{equation}
so that, locally at least, we can express the $p_{i}$ as functions of $q_{j}, F_{j}$, and $t$, the differential form
\begin{equation}
p_{i}(q_{j}, F_{j}, t) \, {\rm d} q_{i} - H \big( q_{i}, p_{i}(q_{j}, F_{j}, t), t \big) \, {\rm d} t \label{df}
\end{equation}
is the differential of some function, $S(q_{i}, t)$, which is a complete solution of the HJ equation (here and in what follows, there is summation over repeated indices).
The aim of this paper is to show that the Liouville theorem can be formulated in another form, closer to the standard formalism of quantum mechanics. Specifically, we shall show that if $S(q_{i}, t)$ is a common eigenfunction of the functions $F_{1}, F_{2}, \ldots, F_{n}$ (a concept to be defined below) then, by adding to $S$ an appropriate function of $t$ only, one obtains a complete solution of the HJ equation.
In Section 2 we present the definition of the eigenfunctions of a function $f(q_{i}, p_{i}, t)$ and we show that two functions, $f(q_{i}, p_{i}, t)$ and $g(q_{i}, p_{i}, t)$, have common eigenfunctions if and only if their Poisson bracket vanishes. Then, we prove that, if conditions (\ref{cm})--(\ref{reg}) hold, a common eigenfunction of $F_{1}, F_{2}, \ldots, F_{n}$ is, up to an additive function of $t$ only, a complete solution of the HJ equation. In Section 3 we give some illustrative examples, emphasizing the fact that we can make use of constants of motion that depend explicitly on the time.
The statement of the Liouville theorem presented here allows us to see that the Liouville theorem is analogous to one of the methods employed to solve the Schr\"odinger equation, where we look for the common eigenfunctions of a complete set of mutually commuting operators that also commute with the Hamiltonian (e.g., for a spherically symmetric Hamiltonian we consider the common eigenfunctions of $L^{2}$, $L_{z}$ and $H$).
\section{Eigenfunctions of a function and complete solutions of the Hamilton--Jacobi equation}
We start by giving the definition of the eigenfunctions of a real-valued function $f(q_{i}, p_{i}, t)$: We shall say that $S(q_{i}, t)$ is an eigenfunction of $f(q_{i}, p_{i}, t)$, with eigenvalue $\lambda$, if $S$ is a solution of the first-order partial differential equation
\begin{equation}
f(q_{i}, \frac{\partial S}{\partial q_{i}}, t) = \lambda. \label{eig}
\end{equation}
(It may be noticed that if $f$ is a time-independent Hamiltonian, then (\ref{eig}) is the corresponding time-independent HJ equation.) We note that if $S(q_{i}, t)$ is an eigenfunction of $f(q_{i}, p_{i}, t)$ with eigenvalue $\lambda$, then so is $S(q_{i}, t) + \phi(t)$, for any function $\phi(t)$ of $t$ only, and that the solutions of (\ref{eig}) will depend parametrically on $\lambda$. Of course, in order for (\ref{eig}) to be a differential equation, $f$ must depend on at least one of the $p_{i}$.
For instance, according to this definition, the eigenfunctions of the function
\begin{equation}
F(q, p, t) = m \omega q \sin \omega t + p \cos \omega t, \label{cons}
\end{equation}
where $m$ and $\omega$ are constants, are the solutions of the differential equation
\[
m \omega q \sin \omega t + \frac{\partial S}{\partial q} \cos \omega t = \lambda,
\]
which can be readily integrated giving
\begin{equation}
S = \lambda q \sec \omega t - \frac{m \omega}{2} q^{2} \tan \omega t + \phi(t),
\end{equation}
where $\phi(t)$ is an arbitrary function of $t$ only. Note that $\lambda$ may be a function of $t$.
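As a quick sanity check, not part of the original argument, the eigenfunction found above can be verified symbolically. The following SymPy sketch is a minimal verification with variable names of our own choosing.
\begin{verbatim}
# Check that S = lam*q*sec(w*t) - (m*w/2)*q**2*tan(w*t) satisfies
# the eigenvalue equation  m*w*q*sin(w*t) + (dS/dq)*cos(w*t) = lam.
import sympy as sp

q, t = sp.symbols('q t')
m, w, lam = sp.symbols('m omega lambda', positive=True)
S = lam*q*sp.sec(w*t) - sp.Rational(1, 2)*m*w*q**2*sp.tan(w*t)
lhs = m*w*q*sp.sin(w*t) + sp.diff(S, q)*sp.cos(w*t)
print(sp.simplify(lhs - lam))   # expected output: 0
\end{verbatim}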
If $S(q_{i}, t)$ is a common eigenfunction of $f(q_{i}, p_{i}, t)$ and $g(q_{i}, p_{i}, t)$, with eigenvalues $\lambda$ and $\mu$, respectively, that is, $S$ satisfies (\ref{eig}) and
\[
g(q_{i}, \frac{\partial S}{\partial q_{i}}, t) = \mu,
\]
then, differentiating with respect to $q_{i}$, making use of the chain rule, we obtain
\[
\frac{\partial f}{\partial q_{i}} + \frac{\partial f}{\partial p_{j}} \frac{\partial^{2} S}{\partial q_{j} \partial q_{i}} = 0 \quad {\rm and} \quad \frac{\partial g}{\partial q_{i}} + \frac{\partial g}{\partial p_{j}} \frac{\partial^{2} S}{\partial q_{j} \partial q_{i}} = 0,
\]
hence,
\begin{eqnarray*}
\{ f, g \} & = & \frac{\partial f}{\partial q_{i}} \frac{\partial g}{\partial p_{i}} - \frac{\partial g}{\partial q_{i}} \frac{\partial f}{\partial p_{i}} = - \frac{\partial f}{\partial p_{j}} \frac{\partial^{2} S}{\partial q_{j} \partial q_{i}} \frac{\partial g}{\partial p_{i}} + \frac{\partial g}{\partial p_{j}} \frac{\partial^{2} S}{\partial q_{j} \partial q_{i}} \frac{\partial f}{\partial p_{i}} \\
& = & \left( \frac{\partial^{2} S}{\partial q_{j} \partial q_{i}} - \frac{\partial^{2} S}{\partial q_{i} \partial q_{j}} \right) \frac{\partial f}{\partial p_{i}} \frac{\partial g}{\partial p_{j}} = 0.
\end{eqnarray*}
Thus, if $f(q_{i}, p_{i}, t)$ and $g(q_{i}, p_{i}, t)$ possess common eigenfunctions, then $\{ f, g \} = 0$.
In order to see that the converse is also true, we now assume that $\{ f, g \} = 0$. If $f$ and $g$ are functionally independent, then there exists, locally at least, a set of canonical coordinates, $Q_{i}, P_{i}$, such that $P_{1} = f$ and $P_{2} = g$. Then the eigenvalue equations for $f$ and $g$ are $\partial S/\partial Q_{1} = \lambda$ and $\partial S/\partial Q_{2} = \mu$, which have the simultaneous solutions $S = \lambda Q_{1} + \mu Q_{2} + \phi(Q_{3}, \ldots, Q_{n}, t)$, where $\phi$ is an arbitrary function of $n - 1$ variables, thus showing that $f$ and $g$ have common eigenfunctions. (Note that this does not mean that every eigenfunction of $f$ is an eigenfunction of $g$ (cf.\ \cite{Sn}, Sec.\ 2.9).) The expression for $S$ in terms of the original coordinates, $(q_{i}, t)$, is not given by the simple substitution of the $Q_{i}$ as functions of $(q_{i}, p_{i}, t)$ \cite{CT}; what is relevant here is the existence of common eigenfunctions for $f$ and $g$.
In the case where $f$ and $g$ are functionally dependent, the eigenvalue equations for $f$ and $g$ are equivalent to each other and, trivially, possess common solutions.
\subsection{Alternative formulation of the Liouville theorem}
We now assume that $F_{1}, \ldots, F_{n}$ are $n$ functions satisfying (\ref{cm})--(\ref{reg}), and we consider a common eigenfunction $S(q_{i}, t)$ of $F_{1}, \ldots, F_{n}$, with eigenvalues $\lambda_{1}, \ldots, \lambda_{n}$, respectively. Then, assuming that the eigenvalues are constant, differentiating with respect to $t$ both sides of the equation
\begin{equation}
F_{i}(q_{j}, \frac{\partial S}{\partial q_{j}}, t) = \lambda_{i}, \label{eigf}
\end{equation}
making use of the chain rule, the Hamilton equations and (\ref{cm}), we have
\begin{eqnarray*}
0 & = & \frac{\partial F_{i}}{\partial q_{j}} \dot{q_{j}} + \frac{\partial F_{i}}{\partial p_{j}} \frac{{\rm d}}{{\rm d} t} \frac{\partial S}{\partial q_{j}} + \frac{\partial F_{i}}{\partial t} \\
& = & \frac{\partial F_{i}}{\partial q_{j}} \frac{\partial H}{\partial p_{j}} + \frac{\partial F_{i}}{\partial p_{j}} \left( \frac{\partial^{2} S}{\partial t \partial q_{j}} + \frac{\partial^{2} S}{\partial q_{k} \partial q_{j}} \dot{q}_{k} \right) - \frac{\partial F_{i}}{\partial q_{j}} \frac{\partial H}{\partial p_{j}} + \frac{\partial F_{i}}{\partial p_{j}} \frac{\partial H}{\partial q_{j}} \\
& = & \frac{\partial F_{i}}{\partial p_{j}} \left( \frac{\partial^{2} S}{\partial t \partial q_{j}} + \frac{\partial^{2} S}{\partial q_{k} \partial q_{j}} \frac{\partial H}{\partial p_{k}} + \frac{\partial H}{\partial q_{j}} \right), \qquad i = 1, \ldots, n.
\end{eqnarray*}
By virtue of (\ref{reg}), the last equations are equivalent to
\begin{eqnarray*}
0 & = & \frac{\partial^{2} S}{\partial t \partial q_{j}} + \frac{\partial^{2} S}{\partial q_{k} \partial q_{j}} \frac{\partial H}{\partial p_{k}} + \frac{\partial H}{\partial q_{j}} \\
& = & \frac{\partial}{\partial q_{j}} \left[ \frac{\partial S}{\partial t} + H(q_{k}, \frac{\partial S}{\partial q_{k}}, t) \right], \qquad j = 1, \ldots, n,
\end{eqnarray*}
which implies that the expression inside the brackets is a function of $t$ only,
\[
\frac{\partial S}{\partial t} + H(q_{k}, \frac{\partial S}{\partial q_{k}}, t) = \chi(t).
\]
Thus,
\[
\tilde{S} = S - \int^{t} \chi(u) \, {\rm d} u
\]
is a solution of the HJ equation. We can verify that this solution is complete by differentiating (\ref{eigf}) with respect to $\lambda_{j}$, which gives
\[
\frac{\partial F_{i}}{\partial p_{k}} \frac{\partial^{2} S}{\partial \lambda_{j} \partial q_{k}} = \delta_{ij}.
\]
Taking into account (\ref{reg}), this last equation shows that $\det (\partial^{2} S/\partial \lambda_{j} \partial q_{k}) \not= 0$.
\section{Examples}
In this section we give some examples of the method presented above.
\subsection{One-dimensional harmonic oscillator}
The function
\[
F(q, p, t) = m \omega q \sin \omega t + p \cos \omega t
\]
already considered above [see (\ref{cons})], is a constant of motion if the Hamiltonian is given by
\begin{equation}
H = \frac{p^{2}}{2m} + \frac{m \omega^{2}}{2} q^{2}, \label{hoh}
\end{equation}
where $\omega$ is a constant. According to the results of the preceding section, if $\lambda$ is a constant,
\begin{equation}
S = \lambda q \sec \omega t - \frac{m \omega}{2} q^{2} \tan \omega t + \phi(t)
\end{equation}
must be a solution of the HJ equation for the Hamiltonian (\ref{hoh}), if the function $\phi$ is appropriately chosen. A direct computation yields
\[
\frac{1}{2m} \left( \frac{\partial S}{\partial q} \right)^{2} + \frac{m \omega^{2}}{2} q^{2} + \frac{\partial S}{\partial t} = \frac{\lambda^{2}}{2m} \sec^{2} \omega t + \phi'(t),
\]
and, therefore, choosing $\phi(t) = - \lambda^{2} \tan \omega t/(2m \omega)$, we obtain the complete solution of the HJ equation
\[
S = \lambda q \sec \omega t - \left( \frac{\lambda^{2}}{2m} + \frac{m \omega^{2}}{2} q^{2} \right) \frac{\tan \omega t}{\omega}.
\]
Note that, in this case, $H$ is also a constant of motion and, as pointed out above, the equation that determines the eigenfunctions of $H$ is just the time-independent HJ equation. However, the constant of motion (\ref{cons}) leads to simpler expressions.
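The choice of $\phi$ above can be double-checked symbolically; the following sketch is a verification of our own, not part of the paper, confirming that the resulting $S$ satisfies the HJ equation for the Hamiltonian (\ref{hoh}).
\begin{verbatim}
# Verify that S = lam*q*sec(w*t) - (lam**2/(2m) + m*w**2*q**2/2)*tan(w*t)/w
# satisfies  (1/2m)*(dS/dq)**2 + (m*w**2/2)*q**2 + dS/dt = 0.
import sympy as sp

q, t = sp.symbols('q t')
m, w, lam = sp.symbols('m omega lambda', positive=True)
S = lam*q*sp.sec(w*t) - (lam**2/(2*m) + m*w**2*q**2/2)*sp.tan(w*t)/w
hj = sp.diff(S, q)**2/(2*m) + m*w**2*q**2/2 + sp.diff(S, t)
print(sp.simplify(hj))   # expected output: 0
\end{verbatim}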
\subsection{Particle in a time-dependent force field}
As a second example we consider the time-dependent Hamiltonian
\[
H = \frac{p^{2}}{2m} - ktq,
\]
where $k$ is a constant. One can readily verify that
\[
F = p - \frac{kt^{2}}{2}
\]
is a constant of motion and that the eigenfunctions of $F$, i.e., the solutions of
\[
\frac{\partial S}{\partial q} - \frac{kt^{2}}{2} = \lambda,
\]
are given by
\begin{equation}
S = \lambda q + \frac{kt^{2}}{2} q + \phi(t), \label{pf2}
\end{equation}
where $\phi(t)$ is an arbitrary function of $t$ only. Then
\[
\frac{1}{2m} \left( \frac{\partial S}{\partial q} \right)^{2} - ktq + \frac{\partial S}{\partial t} = \frac{\lambda^{2}}{2m} + \frac{\lambda k t^{2}}{2m} + \frac{k^{2} t^{4}}{8m} + \phi'(t),
\]
hence, choosing
\[
\phi(t) = - \frac{\lambda^{2} t}{2m} - \frac{\lambda kt^{3}}{6m} - \frac{k^{2} t^{5}}{40m},
\]
(\ref{pf2}) is a complete solution of the HJ equation (which is not separable).
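Again, this can be verified symbolically; here is a minimal SymPy check of our own (not part of the paper), using the $\phi$ chosen above.
\begin{verbatim}
# Verify that S = lam*q + (k*t**2/2)*q + phi(t) satisfies
# the HJ equation  (1/2m)*(dS/dq)**2 - k*t*q + dS/dt = 0.
import sympy as sp

q, t = sp.symbols('q t')
m, k, lam = sp.symbols('m k lambda', positive=True)
phi = -lam**2*t/(2*m) - lam*k*t**3/(6*m) - k**2*t**5/(40*m)
S = lam*q + k*t**2*q/2 + phi
hj = sp.diff(S, q)**2/(2*m) - k*t*q + sp.diff(S, t)
print(sp.simplify(hj))   # expected output: 0
\end{verbatim}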
\subsection{Particle in two dimensions}
As a final example we consider the Hamiltonian
\begin{equation}
H = \frac{1}{2m} \left[ \left( p_{x} + \frac{eB}{2c} y \right)^{2} + \left( p_{y} - \frac{eB}{2c} x \right)^{2} \right], \label{magn}
\end{equation}
which corresponds to a charged particle of mass $m$ and electric charge $e$ in a uniform magnetic field $B$. The functions
\begin{eqnarray}
F_{1} & = & \frac{1}{2} (1 + \cos \omega t) p_{x} - \frac{1}{2} p_{y} \sin \omega t + \frac{m \omega}{4} x \sin \omega t - \frac{m \omega}{4} (1 - \cos \omega t) y, \label{f1} \\
F_{2} & = & \frac{1}{2} (1 + \cos \omega t) p_{y} + \frac{1}{2} p_{x} \sin \omega t + \frac{m \omega}{4} y \sin \omega t + \frac{m \omega}{4} (1 - \cos \omega t) x, \label{f2}
\end{eqnarray}
where $\omega \equiv eB/mc$, are constants of motion in involution, which correspond to the values of the canonical momenta $p_{x}$ and $p_{y}$, respectively, at $t = 0$.
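Before computing the common eigenfunctions, it may be reassuring to verify these claims symbolically. The following SymPy sketch is our own addition (the symbol names are ours); it writes $eB/2c = m\omega/2$ using $\omega = eB/mc$ and checks that $F_1$ and $F_2$ satisfy (\ref{cm}) and (\ref{inv}).
\begin{verbatim}
# Check that F1 and F2 are constants of motion in involution:
#   dF_i/dt + {F_i, H} = 0   and   {F1, F2} = 0.
import sympy as sp

x, y, px, py, t = sp.symbols('x y p_x p_y t')
m, w = sp.symbols('m omega', positive=True)

def pb(f, g):
    # Poisson bracket with the convention {q_i, p_j} = delta_ij
    return (sp.diff(f, x)*sp.diff(g, px) - sp.diff(f, px)*sp.diff(g, x)
            + sp.diff(f, y)*sp.diff(g, py) - sp.diff(f, py)*sp.diff(g, y))

H = ((px + m*w*y/2)**2 + (py - m*w*x/2)**2)/(2*m)    # eB/(2c) = m*omega/2
F1 = ((1 + sp.cos(w*t))*px/2 - py*sp.sin(w*t)/2
      + m*w*x*sp.sin(w*t)/4 - m*w*(1 - sp.cos(w*t))*y/4)
F2 = ((1 + sp.cos(w*t))*py/2 + px*sp.sin(w*t)/2
      + m*w*y*sp.sin(w*t)/4 + m*w*(1 - sp.cos(w*t))*x/4)

print(sp.simplify(sp.diff(F1, t) + pb(F1, H)))   # expected: 0
print(sp.simplify(sp.diff(F2, t) + pb(F2, H)))   # expected: 0
print(sp.simplify(pb(F1, F2)))                   # expected: 0
\end{verbatim}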
From (\ref{f1}) and (\ref{f2}) one finds that the common eigenfunctions of $F_{1}$ and $F_{2}$, with eigenvalues $\lambda_{1}$ and $\lambda_{2}$, respectively, are
\[
S = \lambda_{1} x + \lambda_{2} y + \tan {\textstyle \frac{1}{2}} \omega t \left[ \lambda_{2} x - \lambda_{1} y - \frac{m \omega}{4} (x^{2} + y^{2}) \right] + \phi(t),
\]
where $\phi(t)$ is an arbitrary function of $t$ only. Substituting this expression into the HJ equation one finds that $S$ is a solution of this equation if and only if
\[
\frac{\lambda_{1}{}^{2} + \lambda_{2}{}^{2}}{2m} \sec^{2} {\textstyle \frac{1}{2}} \omega t + \phi'(t) = 0,
\]
hence,
\[
S = \lambda_{1} x + \lambda_{2} y - \left[ \left( \lambda_{1} + \frac{m \omega}{2} y \right)^{2} + \left( \lambda_{2} - \frac{m \omega}{2} x \right)^{2} \right] \frac{\tan {\textstyle \frac{1}{2}} \omega t}{m \omega}
\]
is a complete solution of the HJ equation.
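As with the previous examples, this final expression can be checked symbolically; the short SymPy sketch below is our own verification and not part of the paper.
\begin{verbatim}
# Verify that S solves the HJ equation for the Hamiltonian (magn), i.e.
#   (1/2m)*[(S_x + m*w*y/2)**2 + (S_y - m*w*x/2)**2] + S_t = 0.
import sympy as sp

x, y, t = sp.symbols('x y t')
m, w, l1, l2 = sp.symbols('m omega lambda_1 lambda_2', positive=True)
S = (l1*x + l2*y
     - ((l1 + m*w*y/2)**2 + (l2 - m*w*x/2)**2)*sp.tan(w*t/2)/(m*w))
hj = (((sp.diff(S, x) + m*w*y/2)**2 + (sp.diff(S, y) - m*w*x/2)**2)/(2*m)
      + sp.diff(S, t))
print(sp.simplify(hj))   # expected output: 0
\end{verbatim}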
\section{Concluding remarks}
As pointed out above, the formulation of the Liouville theorem given here makes use of terms analogous to those employed in the standard formalism of quantum mechanics, thus providing another example of the parallelism between both theories. Another advantage of the version of the Liouville theorem given above is that its proof is shorter than those usually presented in the textbooks.
| {
"timestamp": "2015-03-25T01:00:21",
"yymm": "1503",
"arxiv_id": "1503.06789",
"language": "en",
"url": "https://arxiv.org/abs/1503.06789",
"abstract": "It is shown that, by appropriately defining the eigenfunctions of a function defined on the extended phase space, the Liouville theorem on solutions of the Hamilton--Jacobi equation can be formulated as the problem of finding common eigenfunctions of $n$ constants of motion in involution, where $n$ is the number of degrees of freedom of the system.",
"subjects": "Classical Physics (physics.class-ph)",
"title": "The Liouville theorem as a problem of common eigenfunctions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9873750533189538,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7095221826297868
} |
https://arxiv.org/abs/2007.02524 | The moduli space of stable rank 2 parabolic bundles over an elliptic curve with 3 marked points | We explicitly describe the moduli space $M^s(X,3)$ of stable rank 2 parabolic bundles over an elliptic curve $X$ with trivial determinant bundle and 3 marked points. Specifically, we exhibit $M^s(X,3)$ as a blow-up of an embedded elliptic curve in $(\mathbb{CP}^1)^3$. The moduli space $M^s(X,3)$ can also be interpreted as the $SU(2)$ character variety of the 3-punctured torus. Our description of $M^s(X,3)$ reproduces the known Poincaré polynomial for this space. | \section{Introduction}
Given a curve $C$, one can define a moduli space $M^s(C,n)$ of stable
rank 2 parabolic bundles over $C$ with trivial determinant bundle and
$n$ marked points.
The space $M^s(C,n)$ has the structure of a smooth complex
manifold of dimension $3(g-1) + n$, where $g$ is the genus of the curve $C$.
In general, the space $M^s(C,n)$ depends on a positive real parameter
$\mu$ known as the \emph{weight}.
For $\mu$ sufficiently small ($\mu < 1/n$ will suffice), the space
$M^s(C,n)$ is independent of $\mu$, but as $\mu$ increases it may
cross critical values at which $M^s(C,n)$ undergoes certain birational
transformations \cite{Boden,Thaddeus}.
The moduli space $M^s(C,n)$ can also be interpreted as an
$SU(2)$-character variety, which is defined as the space of conjugacy
classes of $SU(2)$-representations of the fundamental group of
the $n$-punctured curve $C$, where loops around the punctures are required
to correspond to $SU(2)$-matrices conjugate to
$\diag(e^{2\pi i \mu}, e^{-2\pi i\mu})$.
Moduli spaces of parabolic bundles on curves are natural objects of
study in algebraic geometry, and also play an important role in
low-dimensional topology.
In particular, these spaces have a canonical symplectic structure and
can be used to define Floer homology theories of links
\cite{Boozer-2,Hedden-1,Hedden-2,Horton}.
Explicit descriptions of $M^s(C,n)$ are known for small values of $n$
and $C$ a rational or elliptic curve.
For rational curves, it is well-known that for small weight we have
\begin{align*}
M^s(\mathbbm{CP}^1,0) &= M^s(\mathbbm{CP}^1,1) = M^s(\mathbbm{CP}^1,2) = \varnothing, \\
M^s(\mathbbm{CP}^1,3) &= \{pt\}, \\
M^s(\mathbbm{CP}^1,4) &= \mathbbm{CP}^1 - \{\textup{3 points}\}, \\
M^s(\mathbbm{CP}^1,5) &= \mathbbm{CP}^2 \# 4 \overline{\mathbbm{CP}}^2.
\end{align*}
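As a quick consistency check (this remark is ours and not part of the sources cited here), the complex dimensions of the non-empty spaces listed above agree with the dimension formula $3(g-1)+n$ with $g=0$:
\begin{align*}
\dim M^s(\mathbbm{CP}^1,3) &= 3(0-1)+3 = 0, &
\dim M^s(\mathbbm{CP}^1,4) &= 3(0-1)+4 = 1, &
\dim M^s(\mathbbm{CP}^1,5) &= 3(0-1)+5 = 2,
\end{align*}
matching a point, $\mathbbm{CP}^1$ minus three points, and a complex surface, respectively.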
The structure of $M^s(\mathbbm{CP}^1,6)$ for $\mu = 1/4$, corresponding to the
traceless character variety, was recently described by Kirk
\cite{Kirk}.
For an elliptic curve $X$, it is straightforward to show that for
small weight we have
\begin{align*}
M^s(X,0) &= \varnothing, &
M^s(X,1) &= \mathbbm{CP}^1.
\end{align*}
It was recently shown by Vargas \cite{Vargas} that $M^s(X,2)$ is the
complement of an embedded elliptic curve in $(\mathbbm{CP}^1)^2$.
Our goal in this paper is to explicitly describe the structure of
$M^s(X,3)$.
We prove the following result:
\begin{theorem}
\label{theorem:intro-main}
For small weight, the moduli space $M^s(X,3)$ for an elliptic curve
$X$ is a blow-up of an embedded elliptic curve in $(\mathbbm{CP}^1)^3$.
\end{theorem}
To prove Theorem \ref{theorem:intro-main}, we make use of an explicit
description of Hecke modifications of rank 2 holomorphic vector
bundles on elliptic curves that is derived in \cite{Boozer}.
Roughly speaking, a Hecke modification is a way of locally
modifying a vector bundle near a point to obtain a new vector bundle.
The moduli space $M^s(X,3)$ plays an important role in a conjectural
Floer homology theory for links in lens spaces discussed in
\cite{Boozer}, and Theorem \ref{theorem:intro-main} was motivated by
this application.
The paper is organized as follows.
In Sections
\ref{sec:parabolic} and \ref{sec:vector},
we review the background material we will need on parabolic bundles
and vector bundles on elliptic curves.
In Section
\ref{sec:description},
we use Hecke-modification methods to explicitly
describe $M^s(X,3)$.
In Section
\ref{sec:relationship},
we relate $M^s(X,3)$ to $M^{ss}(X,2)$, the moduli space
of $S$-equivalence classes of rank 2 semistable parabolic bundles with
trivial determinant bundle and 2 marked points.
In Section
\ref{sec:poincare},
we use our description of $M^s(X,3)$ to reproduce the known
Poincar\'{e} polynomial for this space.
\section{Parabolic bundles}
\label{sec:parabolic}
The concept of a parabolic bundle was introduced in
\cite{Mehta-Seshadri}.
We will not need this concept in its full generality; rather, we will
consider only parabolic bundles of a certain restricted form, which is
discussed at greater length in \cite[Appendix B]{Boozer}.
For our purposes here, a rank $2$ parabolic bundle over a curve $C$
consists of a rank 2 holomorphic vector bundle $E$ over $C$ with
trivial determinant bundle, distinct marked points
$p_1, \cdots, p_n \in C$, a
line $\ell_{p_i} \in \mathbbm{P}(E_{p_i})$ in the fiber $E_{p_i}$ over
each marked point $p_i$, and a positive real parameter $\mu$ known as
the \emph{weight}.
For simplicity, we will suppress the curve $C$ and weight $\mu$ in the
notation and denote a parabolic bundle as
$(E,\ell_{p_1},\cdots,\ell_{p_n})$.
In order to describe the stability properties of parabolic bundles, it
is helpful to introduce some additional terminology.
Recall that the degrees of the proper subbundles of a vector
bundle $E$ on a curve $C$ are bounded above.
Given a rank 2 holomorphic vector bundle $E$, we say that a line
$\ell_p \in \mathbbm{P}(E_p)$ is \emph{bad} if there is a line
subbundle $L$ of $E$ of maximal degree such that $\ell_p = L_p$, and
\emph{good} otherwise.
We say that lines
$\ell_{p_1} \in \mathbbm{P}(E_{p_1}), \cdots,
\ell_{p_n} \in \mathbbm{P}(E_{p_n})$ are
\emph{bad in the same direction} if there is a
line subbundle $L$ of $E$ of maximal degree such that
$\ell_{p_i} = L_{p_i}$ for $i=1, \cdots, n$.
Consider a parabolic bundle
${\mathcal E} = (E,\ell_{p_1},\cdots,\ell_{p_n})$.
Let $m$ denote the maximum number of lines of ${\mathcal E}$ that are bad in
the same direction.
For sufficiently small weight ($\mu < 1/n$ will suffice), we
can characterize the stability and semistability of ${\mathcal E}$ as
follows.
If $E$ is unstable, then ${\mathcal E}$ is unstable.
If $E$ is semistable, then ${\mathcal E}$ is stable if $m < n/2$, semistable
if $m \leq n/2$, and unstable if $m > n/2$.
Note that if $n$ is odd then stability and semistability are
equivalent.
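For instance, in the case $n=3$ that is the focus of this paper, the criterion specializes as follows (our restatement):
\begin{align*}
{\mathcal E}=(E,\ell_{p_1},\ell_{p_2},\ell_{p_3}) \ \textup{is stable}
\quad\Longleftrightarrow\quad
E \ \textup{is semistable and}\ m \leq 1,
\end{align*}
that is, if and only if $E$ is semistable and no two of the three lines are bad in the same direction. This is the form of the criterion used in Section \ref{sec:description}.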
We define a moduli space $M^s(C,n)$ of isomorphism classes of stable
parabolic bundles with $n$ marked points.
As with vector bundles, one can define a notion of $S$-equivalent
semistable parabolic bundles, and we define a moduli space
$M^{ss}(C,n)$ of $S$-equivalence classes of semistable parabolic
bundles.
For odd $n$ we have that $M^s(C,n) = M^{ss}(C,n)$.
For even $n$ we have that $M^s(C,n)$ is an open subset of
$M^{ss}(C,n)$.
\section{Vector bundles on elliptic curves}
\label{sec:vector}
Vector bundles on elliptic curves were classified by Atiyah
\cite{Atiyah}, and are well understood.
Here we briefly summarize the results regarding vector bundles on
elliptic curves that we will need; these results are either well-known
(see for example \cite{Atiyah,Iena,Teixidor,Tu}) or derived in
\cite[Section 5]{Boozer}.
\subsection{Line bundles}
Isomorphism classes of degree 0 line bundles on an elliptic curve $X$
are parameterized by the Jacobian $\Jac(X)$.
Given a basepoint $e \in X$, we define the \emph{Abel-Jacobi}
isomorphism $X \rightarrow \Jac(X)$, $p \mapsto [{\mathcal O}(p-e)]$.
Given points $p, e \in X$, we define a \emph{translation map}
$\tau_{p-e}:\Jac(X) \rightarrow \Jac(X)$,
$[L] \mapsto [L \otimes {\mathcal O}(p-e)]$.
We say that a line bundle $L$ is \emph{2-torsion} if $L^2 = {\mathcal O}$.
There are four 2-torsion line bundles, which we
denote $L_i$ for $i=1,2,3,4$.
\subsection{Semistable rank 2 vector bundles}
\label{sec:vector-bundles}
For our purposes here, we need only consider semistable rank 2 vector
bundles on an elliptic curve $X$ with trivial determinant bundle.
There are three classes of such bundles.
First, we have vector bundles of the form $E = L \oplus L^{-1}$, where
$L$ is a degree 0 line bundle such that $L^2 \neq {\mathcal O}$.
There are two bad lines $L_p, (L^{-1})_p \in \mathbbm{P}(E_p)$ in the
fiber $E_p$ over a point $p \in X$, and all other lines in
$\mathbbm{P}(E_p)$ are good.
The automorphism group $\Aut(E)$ of $E$ consists of $GL(2,\mathbbm{C})$
matrices of the form
\begin{align*}
\left(\begin{array}{cc}
A & 0 \\
0 & D
\end{array}\right).
\end{align*}
Each bad line $L_p, (L^{-1})_p \in \mathbbm{P}(E_p)$ is fixed by the
automorphisms of $E$,
and there is a unique (up to rescaling by a constant) automorphism
carrying any good line $\ell_p \in \mathbbm{P}(E_p)$ to any other good
line $\ell_p' \in \mathbbm{P}(E_p)$.
Second, we have four vector bundles of the form
$E = L_i \oplus L_i$, where $L_i$ is a 2-torsion line bundle.
All lines $\ell_p \in \mathbbm{P}(E_p)$ in the fiber $E_p$ over a
point $p \in X$ are bad.
The automorphism group $\Aut(E)$ of $E$ is $GL(2,\mathbbm{C})$, and
there is a unique (up to rescaling by a constant) automorphism
carrying any triple of lines
$(\ell_{p_1}, \ell_{p_2}, \ell_{p_3}) \in
\mathbbm{P}(E_{p_1}) \times \mathbbm{P}(E_{p_2}) \times
\mathbbm{P}(E_{p_3})$
such that no two lines are bad in the same direction to any other
triple of lines
$(\ell_{p_1}', \ell_{p_2}', \ell_{p_3}') \in
\mathbbm{P}(E_{p_1}) \times \mathbbm{P}(E_{p_2}) \times
\mathbbm{P}(E_{p_3})$
such that no two lines are bad in the same direction.
Third, we have four vector bundles of the form
$E = F_2 \otimes L_i$,
where $L_i$ is a 2-torsion line bundle and $F_2$ is the unique
non-split extension of ${\mathcal O}$ by ${\mathcal O}$:
\begin{eqnarray*}
\begin{tikzcd}
0 \arrow{r} &
{\mathcal O} \arrow{r} &
F_2 \arrow{r} &
{\mathcal O} \arrow{r} &
0.
\end{tikzcd}
\end{eqnarray*}
There is a unique bad line $(L_i)_p \in \mathbbm{P}(E_p)$ in the fiber
$E_p$ over a point $p \in X$, and all other lines in
$\mathbbm{P}(E_p)$ are good.
The automorphism group $\Aut(E)$ of $E$ consists of $GL(2,\mathbbm{C})$
matrices of the form
\begin{align*}
\left(\begin{array}{cc}
A & B \\
0 & A
\end{array}\right).
\end{align*}
The bad line $(L_i)_p \in \mathbbm{P}(E_p)$ is fixed by the
automorphisms of $E$,
and there is a unique (up to rescaling by a constant) automorphism
carrying any good line $\ell_p \in \mathbbm{P}(E_p)$ to any other good
line $\ell_p' \in \mathbbm{P}(E_p)$.
We define $M^{ss}(X)$ to be the moduli space of $S$-equivalence
classes of semistable rank 2 vector bundles on $X$ with trivial
determinant bundle.
In \cite{Tu} it is shown that $M^{ss}(X)$ is isomorphic to $\mathbbm{CP}^1$, which
can be understood as follows.
The above classification shows that we can parameterize semistable
rank 2 vector bundles with trivial determinant bundle as
$L \oplus L^{-1}$ for $[L] \in \Jac(X)$, together with the four
bundles $F_2 \otimes L_i$.
The bundles $L \oplus L^{-1}$ and $L^{-1} \oplus L$ are isomorphic,
hence $S$-equivalent, and are thus identified in $M^{ss}(X)$.
One can show that the bundles $F_2 \otimes L_i$ and $L_i \oplus L_i$
are $S$-equivalent, and are thus identified in $M^{ss}(X)$.
It follows that $M^{ss}(X)$ is the quotient of $\Jac(X)$
by the involution $[L] \mapsto [L^{-1}]$, which yields a space known
as the \emph{pillowcase} that is isomorphic to $\mathbbm{CP}^1$.
We define a map $p:\Jac(X) \rightarrow M^{ss}(X)$,
$[L] \mapsto [L \oplus L^{-1}]$, which is a branched double-cover with
four branch points $p([L_i]) \in M^{ss}(X)$ corresponding to four
ramification points $[L_i] \in \Jac(X)$ that are fixed by the
involution $[L] \mapsto [L^{-1}]$.
\subsection{Hecke modifications of rank 2 vector bundles}
Given a rank 2 vector bundle $E$ over a curve $C$, distinct points
$p_1, \cdots, p_n \in C$, and lines
$\ell_{p_i} \in \mathbbm{P}(E_{p_i})$ for each point $p_i$, one can
perform a \emph{Hecke modification} of $E$ at each point $p_i$ using
data provided by the line $\ell_{p_i}$ so as to obtain a new vector
bundle that we will denote $H(E,\ell_{p_1},\cdots,\ell_{p_n})$.
One way to describe $H(E,\ell_{p_1},\cdots,\ell_{p_n})$ is as follows.
Let ${\mathcal E}$ denote the sheaf of sections of $E$, and define a subsheaf
${\mathcal F}$ of ${\mathcal E}$ whose set of sections over an open subset $U$ of
$X$ is given by
\begin{align*}
{\mathcal F}(U) = \{s \in {\mathcal E}(U) \mid
\textup{$p_i \in U \implies s(p_i) \in \ell_{p_i}$ for
$i=1,\cdots,n$}\}.
\end{align*}
We define $H(E,\ell_{p_1},\cdots,\ell_{p_n})$ to be the vector bundle
whose sheaf of sections is ${\mathcal F}$.
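To make this definition concrete, here is the standard local picture (a sketch in our notation, not a statement taken verbatim from \cite{Boozer}). For a single marked point $p_i$, choose a trivialization ${\mathcal E}|_U \cong {\mathcal O}_U v_1 \oplus {\mathcal O}_U v_2$ over a small open set $U \ni p_i$ with $\ell_{p_i}$ spanned by $v_1(p_i)$, and let $t$ be a local coordinate vanishing at $p_i$. Then
\begin{align*}
{\mathcal F}(U) = \{f_1 v_1 + f_2 v_2 \mid f_2(p_i) = 0\} = {\mathcal O}_U\, v_1 \oplus {\mathcal O}_U\, (t v_2),
\end{align*}
so ${\mathcal F}$ is again locally free of rank 2, and each modification twists the determinant bundle by ${\mathcal O}(-p_i)$; in particular $\det H(E,\ell_{p_1},\cdots,\ell_{p_n}) \cong \det E \otimes {\mathcal O}(-p_1-\cdots-p_n)$.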
Hecke modifications of rank 2 vector bundles on elliptic curves are
described explicitly in \cite{Boozer}.
For our purposes here, all the results we will need can be
described in terms of a few properties of a certain map $h_e$, which
we define as follows.
Given a semistable rank 2 vector bundle $E$ with trivial determinant
bundle over an elliptic curve $X$, distinct points $p, q, e \in X$
such that $p + q = 2e$, and lines
$\ell_p \in \mathbbm{P}(E_p)$ and
$\ell_q \in \mathbbm{P}(E_q)$ that are
not bad in the same direction, we define
$h_e(E,\ell_p, \ell_q) \in M^{ss}(X)$ as
\begin{align*}
h_e(E,\ell_p,\ell_q) = [H(E,\ell_p,\ell_q) \otimes {\mathcal O}(e)].
\end{align*}
One can show that if the lines $\ell_p$ and $\ell_q$ are not bad in
the same direction, then $H(E,\ell_p,\ell_q) \otimes {\mathcal O}(e)$ is in
fact a semistable vector bundle with trivial determinant bundle and
thus represents a point in $M^{ss}(X)$.
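The role of the hypothesis $p+q=2e$, read as ${\mathcal O}(p+q)\cong{\mathcal O}(2e)$, can be seen at the level of determinant bundles (a short check in our notation, using that a Hecke modification at a point twists the determinant by ${\mathcal O}(-\,\textup{point})$, as in the local description above):
\begin{align*}
\det\big(H(E,\ell_p,\ell_q)\otimes{\mathcal O}(e)\big)
\cong \det E \otimes {\mathcal O}(-p-q) \otimes {\mathcal O}(2e)
\cong {\mathcal O}(2e-p-q)
\cong {\mathcal O},
\end{align*}
so $H(E,\ell_p,\ell_q)\otimes{\mathcal O}(e)$ has trivial determinant bundle; the semistability assertion is the non-trivial part established in \cite{Boozer}.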
It is clear that the order of the lines doesn't matter in the
definition of $H(E,\ell_p,\ell_q)$, so
\begin{align*}
h_e(E,\ell_p,\ell_q) = h_e(E,\ell_q,\ell_p).
\end{align*}
If $(E,\ell_p,\ell_q)$ and $(E',\ell_p',\ell_q')$ are isomorphic
parabolic bundles, one can show that
\begin{align*}
h_e(E,\ell_p,\ell_q) =
h_e(E',\ell_p',\ell_q').
\end{align*}
In particular, for $\phi \in \Aut(E)$ we have
\begin{align*}
h_e(E,\ell_p,\ell_q) =
h_e(E,\phi(\ell_p),\phi(\ell_q)).
\end{align*}
If the line $\ell_p$ is good and $E \neq L_i \oplus L_i$, then
we have the following result:
\begin{theorem}[{\cite[Lemma 5.26]{Boozer}}]
\label{theorem:he-iso}
If $E = L \oplus L^{-1}$ for $L^2 \neq {\mathcal O}$ or
$E = F_2 \otimes L_i$, and
$\ell_p \in \mathbbm{P}(E_p)$ is a good line, then the map
$\mathbbm{P}(E_q) \rightarrow M^{ss}(X)$,
$\ell_q \mapsto h_e(E,\ell_p,\ell_q)$ is an isomorphism.
\end{theorem}
If the line $\ell_p$ is bad, then $h_e(E,\ell_p,\ell_q)$ is
uniquely determined by $E$ and the points $p$ and $e$:
\begin{theorem}[{\cite[Lemma 5.27]{Boozer}}]
\label{theorem:he-eval}
If $\ell_p \in \mathbbm{P}(E_p)$ is a bad line and
$\ell_q \in \mathbbm{P}(E_q)$ is not bad in the same direction as
$\ell_p$, then $h_e(E,\ell_p,\ell_q)$ is given by
\begin{align*}
&h_e(L \oplus L^{-1},L_p,\ell_q) =
(p \circ \tau_{p-e})([L]), &
&h_e(L \oplus L^{-1},(L^{-1})_p,\ell_q) =
(p \circ \tau_{e-p})([L]), \\
&h_e(F_2 \otimes L_i,(L_i)_p,\ell_q) =
(p \circ \tau_{p-e})([L_i]), &
&h_e(L_i \oplus L_i,\ell_p,\ell_q) =
(p \circ \tau_{p-e})([L_i]).
\end{align*}
\end{theorem}
Note that
$(p \circ \tau_{p-e})([L_i]) = (p \circ \tau_{e-p})([L_i])$.
Note also that in Theorem \ref{theorem:he-eval} the line
$\ell_q \in \mathbbm{P}(E_q)$ is allowed to be bad, just not bad in
the same direction as $\ell_p \in \mathbbm{P}(E_p)$.
For example, we have
\begin{align*}
h_e(L \oplus L^{-1}, L_p, (L^{-1})_q) =
(p \circ \tau_{p-e})([L]) =
(p \circ \tau_{e-q})([L]).
\end{align*}
\section{Description of $M^s(X,3)$}
\label{sec:description}
We consider here the moduli space $M^s(X,3)$ of stable rank 2
parabolic bundles over an elliptic curve $X$ with trivial determinant
bundle and 3 marked points $p_1, p_2, p_3 \in X$.
If $[E,\ell_{p_1},\ell_{p_2},\ell_{p_3}] \in M^s(X,3)$,
then $E$ is semistable and no two of the lines
$\ell_{p_1},\ell_{p_2},\ell_{p_3}$ are bad in the same direction.
It follows that $E$ has one of the three forms described in
Section \ref{sec:vector-bundles};
that is, $E = L \oplus L^{-1}$ for $L^2 \neq {\mathcal O}$,
$E = L_i \oplus L_i$, or $E = F_2 \otimes L_i$.
Choose points $e_1, e_2, e_3 \in X$ such that
\begin{align*}
p_1 + p_2 &= 2e_3, &
p_3 + p_1 &= 2e_2, &
p_2 + p_3 &= 2e_1.
\end{align*}
Define a map
$\pi = (\pi_1,\pi_2,\pi_3):M^s(X,3) \rightarrow (M^{ss}(X))^3$ by
\begin{align*}
\pi([E,\ell_{p_1},\ell_{p_2},\ell_{p_3}]) =
([E],\,h_{e_2}(E,\ell_{p_1},\ell_{p_3}),\,h_{e_1}(E,\ell_{p_2},\ell_{p_3})).
\end{align*}
Note that since $E$ is semistable and no two of the lines $\ell_{p_1}$,
$\ell_{p_2}$, $\ell_{p_3}$ are bad in the same direction, we can in
fact define $h_{e_2}(E,\ell_{p_1},\ell_{p_3})$ and
$h_{e_1}(E,\ell_{p_2},\ell_{p_3})$.
It is useful to decompose $M^s(X,3)$ as
\begin{align*}
M^s(X,3) =
\{\textup{$\ell_{p_3}$ good}\} \cup
\{\textup{$\ell_{p_3}$ bad}\},
\end{align*}
where the open submanifold
$\{\textup{$\ell_{p_3}$ good}\}$ and the closed submanifold
$\{\textup{$\ell_{p_3}$ bad}\}$ consist of points
$[E,\ell_{p_1},\ell_{p_2},\ell_{p_3}] \in M^s(X,3)$ for which
$\ell_{p_3}$ is a good and bad line, respectively.
The open submanifold $\{\textup{$\ell_{p_3}$ good}\}$ is described by
the following result:
\begin{theorem}
\label{theorem:subset-good}
The restriction of $\pi:M^s(X,3) \rightarrow (M^{ss}(X))^3$ to
$\{\textup{$\ell_{p_3}$ good}\} \rightarrow
\pi(\{\textup{$\ell_{p_3}$ good}\})$ is an
isomorphism.
\end{theorem}
\begin{proof}
If
$[E,\ell_{p_1},\ell_{p_2},\ell_{p_3}] \in
\{\textup{$\ell_{p_3}$ good}\}$,
then $E$ must have good lines, hence
$E = L \oplus L^{-1}$ for $L^2 \neq {\mathcal O}$ or $E = F_2 \otimes L_i$.
For each point $[E] \in M^{ss}(X)$, choose a representative $E$ of
$[E]$ and a good line $\ell_{p_3}' \in \mathbbm{P}(E_{p_3})$.
We can define a map
$\pi_1^{-1}([E]) \cap \{\textup{$\ell_{p_3}$ good}\} \rightarrow
\mathbbm{P}(E_{p_1}) \times \mathbbm{P}(E_{p_2})$,
\begin{align*}
[E,\ell_{p_1},\ell_{p_2},\ell_{p_3}] \mapsto
(\phi(\ell_{p_1}), \phi(\ell_{p_2})),
\end{align*}
where $\phi$ is the unique (up to rescaling by a constant)
automorphism of $E$ such that $\phi(\ell_{p_3}) = \ell_{p_3}'$.
This map is an isomorphism onto its image, hence by Theorem
\ref{theorem:he-iso} the map
$(\pi_2,\pi_3):
\pi_1^{-1}([E]) \cap \{\textup{$\ell_{p_3}$ good}\} \rightarrow
(M^{ss}(X))^2$ is an
isomorphism onto its image.
\end{proof}
Next we consider the closed submanifold
$\{\textup{$\ell_{p_3}$ bad}\}$.
Elements of $\{\textup{$\ell_{p_3}$ bad}\}$ have one of three forms:
\begin{align*}
[L \oplus L^{-1}, \ell_{p_1}, \ell_{p_2}, L_{p_3}], &&
[F_2 \otimes L_i, \ell_{p_1}, \ell_{p_2}, (L_i)_{p_3}], &&
[L_i \oplus L_i, \ell_{p_1}, \ell_{p_2}, \ell_{p_3}],
\end{align*}
where $L^2 \neq {\mathcal O}$ and $L_i$ is a 2-torsion line bundle.
Note that elements of the form
$[L \oplus L^{-1}, \ell_{p_1}, \ell_{p_2}, (L^{-1})_{p_3}]$ can
be converted into the first of the three listed forms by applying the
isomorphism $\phi:L \oplus L^{-1} \rightarrow L^{-1} \oplus L$:
\begin{align*}
[L \oplus L^{-1}, \ell_{p_1}, \ell_{p_2}, (L^{-1})_{p_3}] =
[L^{-1} \oplus L, \phi(\ell_{p_1}), \phi(\ell_{p_2}),
(L^{-1})_{p_3}] =
[M \oplus M^{-1}, \phi(\ell_{p_1}), \phi(\ell_{p_2}), M_{p_3}],
\end{align*}
where we have defined $M = L^{-1}$ and used the fact that
$\phi((L^{-1})_{p_3}) = (L^{-1})_{p_3}$.
Recall that we defined a map
$\pi_1:M^s(X,3) \rightarrow M^{ss}(X)$,
$[E, \ell_{p_1}, \ell_{p_2}, \ell_{p_3}] \mapsto [E]$.
We can lift
$\pi_1:\{\textup{$\ell_{p_3}$ bad}\} \rightarrow M^{ss}(X)$ to the
branched double-cover $p:\Jac(X) \rightarrow M^{ss}(X)$ by using the
bad line $\ell_{p_3}$ to distinguish between distinct vector bundles
$L \oplus L^{-1}$ and $L^{-1} \oplus L$ that are identified in
$M^{ss}(X)$:
\begin{eqnarray*}
\begin{tikzcd}
{} &
\Jac(X) \arrow{d}{p} \\
\{\textup{$\ell_{p_3}$ bad}\} \arrow{ru}{{\tilde{\pi}}_1} \arrow{r}{\pi_1} &
M^{ss}(X),
\end{tikzcd}
\end{eqnarray*}
where
${\tilde{\pi}}_1:\{\textup{$\ell_{p_3}$ bad}\} \rightarrow \Jac(X)$ is
defined such that
\begin{align*}
{\tilde{\pi}}_1([L \oplus L^{-1}, \ell_{p_1}, \ell_{p_2}, L_{p_3}])
&= [L], &
{\tilde{\pi}}_1([F_2 \otimes L_i, \ell_{p_1}, \ell_{p_2}, (L_i)_{p_3}])
&=
{\tilde{\pi}}_1([L_i \oplus L_i, \ell_{p_1}, \ell_{p_2}, \ell_{p_3}])
= [L_i].
\end{align*}
Define a map $f:\Jac(X) \rightarrow (M^{ss}(X))^3$,
$f = (p,\,p \circ \tau_{p_3 - e_2},\,p \circ \tau_{p_3 - e_1})$.
\begin{theorem}
We have a commutative diagram
\begin{eqnarray*}
\begin{tikzcd}
\{\textup{$\ell_{p_3}$ bad}\} \arrow{dr}{\pi}
\arrow{d}[swap]{{\tilde{\pi}}_1} &
{} \\
\Jac(X) \arrow{r}{f} &
(M^{ss}(X))^3.
\end{tikzcd}
\end{eqnarray*}
\end{theorem}
\begin{proof}
The fiber of ${\tilde{\pi}}_1$ over a point
$[L] \in \Jac(X)$ such that $L^2 \neq {\mathcal O}$ is
\begin{align*}
{\tilde{\pi}}_1^{-1}([L]) =
\{[L \oplus L^{-1}, \ell_{p_1}, \ell_{p_2}, L_{p_3}] \}.
\end{align*}
From Theorem \ref{theorem:he-eval} it follows that
$\pi({\tilde{\pi}}_1^{-1}([L])) = f([L])$.
The fiber of ${\tilde{\pi}}_1$ over the point $[L_i] \in \Jac(X)$ is
\begin{align*}
{\tilde{\pi}}_1^{-1}([L_i]) =
\{[F_2 \otimes L_i, \ell_{p_1}, \ell_{p_2}, (L_i)_{p_3}] \} \cup
\{[L_i \oplus L_i, \ell_{p_1}, \ell_{p_2}, \ell_{p_3}] \}.
\end{align*}
From Theorem \ref{theorem:he-eval} it follows that
\begin{align*}
\pi([F_2 \otimes L_i, \ell_{p_1}, \ell_{p_2}, (L_i)_{p_3}]) =
\pi([L_i \oplus L_i, \ell_{p_1}, \ell_{p_2}, \ell_{p_3}]) =
f([L_i]).
\end{align*}
Thus $\pi({\tilde{\pi}}_1^{-1}([L_i])) = f([L_i])$.
\end{proof}
\begin{theorem}
The map $f:\Jac(X) \rightarrow (M^{ss}(X))^3$ is injective.
\end{theorem}
\begin{proof}
Take $[L], [M] \in \Jac(X)$ such that $f([L]) = f([M])$.
Projecting onto the first factor of $(M^{ss}(X))^3$, we find that
either $[M] = [L]$ or $[M] = [L^{-1}]$.
If $[M] = [L^{-1}]$, then projecting onto the second factor of
$(M^{ss}(X))^3$ gives either
$[L \otimes {\mathcal O}(p_3-e_2)] = [L^{-1} \otimes {\mathcal O}(p_3-e_2)]$ or
$[L \otimes {\mathcal O}(p_3-e_2)] = [L \otimes {\mathcal O}(e_2-p_3)]$.
In the first case $[L] = [L^{-1}]$, and hence $[M] = [L]$.
The second case cannot actually occur, since otherwise
$2p_3 = 2e_2 = p_3 + p_1$ as divisor classes, and thus $p_1 = p_3$, a contradiction.
\end{proof}
Given a point
$[E,\ell_{p_1},\ell_{p_2},\ell_{p_3}] \in
\{\textup{$\ell_{p_3}$ bad}\}$,
we have that $E$ is semistable and $\ell_{p_1}$ and $\ell_{p_2}$
cannot be bad in the same direction, hence we can define a map
$h:\{\textup{$\ell_{p_3}$ bad}\} \rightarrow M^{ss}(X)$,
\begin{align*}
h([E, \ell_{p_1}, \ell_{p_2}, \ell_{p_3}]) =
h_{e_3}(E, \ell_{p_1}, \ell_{p_2}).
\end{align*}
\begin{theorem}
\label{theorem:subset-bad}
The map
$({\tilde{\pi}}_1,h):\{\textup{$\ell_{p_3}$ bad}\} \rightarrow
\Jac(X) \times M^{ss}(X)$ is an isomorphism.
\end{theorem}
\begin{proof}
The fiber of ${\tilde{\pi}}_1$ over a point $[L] \in \Jac(X)$ such that
$L^2 \neq {\mathcal O}$ is
\begin{align*}
{\tilde{\pi}}_1^{-1}([L]) =
\{[L \oplus L^{-1}, \ell_{p_1}, \ell_{p_2}, L_{p_3}] \}.
\end{align*}
We will argue that ${\tilde{\pi}}_1^{-1}([L])$ is isomorphic to $\mathbbm{CP}^1$.
Choose a local trivialization of $E := L \oplus L^{-1}$ over an open
set containing $p_1$ and $p_2$ so as to obtain identifications
$\psi_i:\mathbbm{P}(E_{p_i}) \rightarrow \mathbbm{CP}^1$ for $i=1,2$.
We can choose the local trivialization such that
$\psi_i(L_{p_i}) = \infty$ and $\psi_i((L^{-1})_{p_i}) = 0$.
Define $z_i = \psi_i(\ell_{p_i})$ and note that
$(z_1, z_2) \in \mathbbm{C}^2 - \{(0,0)\}$.
An automorphism of $E$ induces the transformation
$(z_1, z_2) \mapsto a(z_1,z_2)$ for $a \in \mathbbm{C}^\times$, hence
we have an isomorphism
\begin{align*}
{\tilde{\pi}}_1^{-1}([L]) =
\{[L \oplus L^{-1}, \ell_{p_1}, \ell_{p_2}, L_{p_3}] \} \rightarrow
\mathbbm{CP}^1, &&
[L \oplus L^{-1}, \ell_{p_1}, \ell_{p_2}, L_{p_3}] \mapsto
[z_1 : z_2].
\end{align*}
A canonical version of this statement is that the restriction of
$h:\{\textup{$\ell_{p_3}$ bad}\} \rightarrow M^{ss}(X)$
to ${\tilde{\pi}}_1^{-1}([L])$ gives an isomorphism
${\tilde{\pi}}_1^{-1}([L]) \rightarrow M^{ss}(X)$.
In particular, from Theorems \ref{theorem:he-iso} and
\ref{theorem:he-eval} it follows that
\begin{align*}
&h(\{[L \oplus L^{-1}, \ell_{p_1}, \ell_{p_2}, L_{p_3}] \mid
\ell_{p_1} \neq (L^{-1})_{p_1},\,
\ell_{p_2} \neq (L^{-1})_{p_2}\}) =
M^{ss}(X) -
\{
(p \circ \tau_{e_3 - p_1})([L]),\,
(p \circ \tau_{e_3 - p_2})([L])\}, \\
&h(\{[L \oplus L^{-1}, \ell_{p_1}, \ell_{p_2}, L_{p_3}] \mid
\ell_{p_1} = (L^{-1})_{p_1},\,
\ell_{p_2} \neq (L^{-1})_{p_2}\}) =
\{
(p \circ \tau_{e_3 - p_1})([L])\}, \\
&h(\{[L \oplus L^{-1}, \ell_{p_1}, \ell_{p_2}, L_{p_3}] \mid
\ell_{p_1} \neq (L^{-1})_{p_1},\,
\ell_{p_2} = (L^{-1})_{p_2}\}) =
\{
(p \circ \tau_{e_3 - p_2})([L])\}.
\end{align*}
The fiber of ${\tilde{\pi}}_1$ over the point $[L_i] \in \Jac(X)$ is
\begin{align*}
{\tilde{\pi}}_1^{-1}([L_i]) =
\{[F_2 \otimes L_i, \ell_{p_1}, \ell_{p_2}, (L_i)_{p_3}] \} \cup
\{[L_i \oplus L_i, \ell_{p_1}, \ell_{p_2}, \ell_{p_3}] \}.
\end{align*}
Note that
$\{[L_i \oplus L_i, \ell_{p_1}, \ell_{p_2}, \ell_{p_3}] \}$
consists of a single point, since there is a unique (up to rescaling
by a constant) automorphism of
$L_i \oplus L_i$ that induces an isomorphism of any pair of stable
parabolic bundles
$(L_i \oplus L_i, \ell_{p_1}, \ell_{p_2}, \ell_{p_3})$ and
$(L_i \oplus L_i, \ell_{p_1}', \ell_{p_2}', \ell_{p_3}')$.
We will argue that
$\{[F_2 \otimes L_i, \ell_{p_1}, \ell_{p_2}, (L_i)_{p_3}] \}$ is
isomorphic to $\mathbbm{C}$.
Choose a local trivialization of $E := F_2 \otimes L_i$ over an open
set containing $p_1$ and $p_2$ so as to obtain identifications
$\psi_i:\mathbbm{P}(E_{p_i}) \rightarrow \mathbbm{CP}^1$ for $i=1,2$.
We can choose the local trivialization such that
$\psi_i((L_i)_{p_i}) = \infty$.
Define $z_i = \psi_i(\ell_{p_i})$ and note that
$(z_1, z_2) \in \mathbbm{C}^2$.
An automorphism of $E$ induces the transformation
$(z_1, z_2) \mapsto (z_1 + b, z_2 + b)$ for $b \in \mathbbm{C}$, hence
we have an isomorphism
\begin{align*}
\{[F_2 \otimes L_i, \ell_{p_1}, \ell_{p_2}, (L_i)_{p_3}] \} \rightarrow
\mathbbm{C}, &&
[F_2 \otimes L_i, \ell_{p_1}, \ell_{p_2}, (L_i)_{p_3}] \mapsto
z_2 - z_1.
\end{align*}
A canonical version of these results is that the restriction of
$h:\{\textup{$\ell_{p_3}$ bad}\} \rightarrow M^{ss}(X)$
to ${\tilde{\pi}}_1^{-1}([L_i])$ gives an isomorphism
${\tilde{\pi}}_1^{-1}([L_i]) \rightarrow M^{ss}(X)$.
In particular, from Theorems \ref{theorem:he-iso} and
\ref{theorem:he-eval} it follows that
\begin{align*}
&h(\{[F_2 \otimes L_i, \ell_{p_1}, \ell_{p_2}, (L_i)_{p_3}]\}) =
M^{ss}(X) -
\{(p \circ \tau_{p_1 - e_3})([L_i]) \}, \\
&h(\{[L_i \oplus L_i, \ell_{p_1}, \ell_{p_2}, \ell_{p_3}] \}) =
\{ (p \circ \tau_{p_1 - e_3})([L_i]) \}.
\end{align*}
Note that
$(p \circ \tau_{p_1 - e_3})([L_i]) =
(p \circ \tau_{e_3 - p_1})([L_i]) =
(p \circ \tau_{p_2 - e_3})([L_i])$.
\end{proof}
Theorems \ref{theorem:subset-good}--\ref{theorem:subset-bad} prove
Theorem \ref{theorem:intro-main} from the Introduction.
\section{Relationship between $M^s(X,3)$ and $M^{ss}(X,2)$}
\label{sec:relationship}
In \cite{Vargas} it is shown that $M^{ss}(X,2)$ is isomorphic to
$(\mathbbm{CP}^1)^2$.
From our perspective, we can describe this result by defining a map
$M^{ss}(X,2) \rightarrow (M^{ss}(X))^2$,
\begin{align*}
[E,\ell_{p_1}, \ell_{p_2}] \mapsto
([E],\,h_{e_3}(E,\ell_{p_1},\ell_{p_2})).
\end{align*}
One can show that this map is an isomorphism.
We can relate the closed subset $\{\textup{$\ell_{p_3}$ bad}\}$ of
$M^s(X,3)$ to the moduli space $M^{ss}(X,2)$ as follows.
Define a map $\{\textup{$\ell_{p_3}$ bad}\} \rightarrow M^{ss}(X,2)$,
\begin{align*}
[E, \ell_{p_1}, \ell_{p_2}, \ell_{p_3}] \mapsto
[E, \ell_{p_1}, \ell_{p_2}].
\end{align*}
We have a commutative diagram
\begin{eqnarray*}
\begin{tikzcd}
\{\textup{$\ell_{p_3}$ bad}\} \arrow{r}
\arrow{d}[swap]{{\tilde{\pi}}_1} &
M^{ss}(X,2) \arrow{d} \\
\Jac(X) \arrow{r}{p} &
M^{ss}(X),
\end{tikzcd}
\end{eqnarray*}
where we have defined a map
$M^{ss}(X,2) \rightarrow M^{ss}(X)$,
$[E,\ell_{p_1},\ell_{p_2}] \mapsto [E]$.
\section{Poincar\'{e} polynomial of $M^s(X,3)$}
\label{sec:poincare}
The Poincar\'{e} polynomial of $M^s(C,n)$ is given in
\cite[Theorem 3.8]{Street} for the case $\mu=1/4$, corresponding to
the traceless character variety, and $n$ odd:
\begin{align}
\label{eqn:pt-c-n}
P_t(M^s(C,n)) =
\frac{(1 + t^2)^n (1 + t^3)^{2g} -
2^{n - 1} t^{2g + n - 1} (1 + t)^{2g} (1 + t^2)}
{(1 - t^2)(1 - t^4)},
\end{align}
where $g$ is the genus of $C$.
In fact, the results of \cite{Street} are stated for parabolic bundles
with fixed determinant bundle of odd degree, but since
$\mu=1/4$, corresponding to a traceless character variety, the results
also hold for the moduli space $M^s(C,n)$ for
which the determinant bundle of the parabolic bundles is trivial.
For an elliptic curve $X$ with $3$ marked points, equation
(\ref{eqn:pt-c-n}) gives
\begin{align}
\label{eqn:pt-x-3}
P_t(M^s(X,3)) &= 1 + 4t^2 + 2t^3 + 4t^4 + t^6.
\end{align}
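Explicitly (a routine polynomial expansion, included here only for convenience), setting $g=1$ and $n=3$ in (\ref{eqn:pt-c-n}) one checks that
\begin{align*}
(1+t^2)^3(1+t^3)^2 - 4t^4(1+t)^2(1+t^2)
= (1-t^2)(1-t^4)\big(1 + 4t^2 + 2t^3 + 4t^4 + t^6\big),
\end{align*}
so the quotient is indeed the polynomial in (\ref{eqn:pt-x-3}).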
We can reproduce equation (\ref{eqn:pt-x-3}) using our explicit
description of $M^s(X,3)$.
Since $\pi:M^s(X,3) \rightarrow (M^{ss}(X))^3$ restricts to an
isomorphism
$M^s(X,3) - \{\textup{$\ell_{p_3}$ bad}\} \rightarrow
(M^{ss}(X))^3 - \pi(\{\textup{$\ell_{p_3}$ bad}\})$,
we obtain the following equation between the Poincar\'{e} polynomials of
cohomology with compact supports:
\begin{align}
\label{eqn:pt-1}
P_t(M^s(X,3) - \{\textup{$\ell_{p_3}$ bad}\}) =
P_t((M^{ss}(X))^3 - \pi(\{\textup{$\ell_{p_3}$ bad}\})).
\end{align}
From the long exact sequence for cohomology with compact supports, we
have
\begin{align}
\label{eqn:pt-2}
P_t(M^s(X,3) - \{\textup{$\ell_{p_3}$ bad}\}) &=
P_t(M^s(X,3)) - P_t(\{\textup{$\ell_{p_3}$ bad}\}), \\
\label{eqn:pt-3}
P_t((M^{ss}(X))^3 - \pi(\{\textup{$\ell_{p_3}$ bad}\})) &=
P_t((M^{ss}(X))^3) - P_t(\pi(\{\textup{$\ell_{p_3}$ bad}\})).
\end{align}
We have that $M^{ss}(X)$ is isomorphic to $\mathbbm{CP}^1$,
$\pi(\{\textup{$\ell_{p_3}$ bad}\})$ is isomorphic to $\Jac(X)$, and
$\{\textup{$\ell_{p_3}$ bad}\}$ is isomorphic to
$\Jac(X) \times M^{ss}(X)$, so
\begin{align}
\label{eqn:pt-4}
P_t((M^{ss}(X))^3) &= (1 + t^2)^3, &
P_t(\pi(\{\textup{$\ell_{p_3}$ bad}\})) &= 1 + 2t + t^2, &
P_t(\{\textup{$\ell_{p_3}$ bad}\}) &= (1 + 2t + t^2)(1 + t^2).
\end{align}
Combining equations (\ref{eqn:pt-1})--(\ref{eqn:pt-4}), we reproduce
equation (\ref{eqn:pt-x-3}) for $P_t(M^s(X,3))$.
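For the record, the combination of equations (\ref{eqn:pt-1})--(\ref{eqn:pt-4}) amounts to the following elementary computation:
\begin{align*}
P_t(M^s(X,3))
&= P_t((M^{ss}(X))^3) - P_t(\pi(\{\textup{$\ell_{p_3}$ bad}\})) + P_t(\{\textup{$\ell_{p_3}$ bad}\}) \\
&= (1+t^2)^3 - (1+2t+t^2) + (1+2t+t^2)(1+t^2) \\
&= 1 + 4t^2 + 2t^3 + 4t^4 + t^6.
\end{align*}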
\bibliographystyle{abbrv}
| {
"timestamp": "2020-07-07T02:22:48",
"yymm": "2007",
"arxiv_id": "2007.02524",
"language": "en",
"url": "https://arxiv.org/abs/2007.02524",
"abstract": "We explicitly describe the moduli space $M^s(X,3)$ of stable rank 2 parabolic bundles over an elliptic curve $X$ with trivial determinant bundle and 3 marked points. Specifically, we exhibit $M^s(X,3)$ as a blow-up of an embedded elliptic curve in $(\\mathbb{CP}^1)^3$. The moduli space $M^s(X,3)$ can also be interpreted as the $SU(2)$ character variety of the 3-punctured torus. Our description of $M^s(X,3)$ reproduces the known Poincaré polynomial for this space.",
"subjects": "Algebraic Geometry (math.AG); Geometric Topology (math.GT); Symplectic Geometry (math.SG)",
"title": "The moduli space of stable rank 2 parabolic bundles over an elliptic curve with 3 marked points",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9873750529474512,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7095221823628272
} |
https://arxiv.org/abs/1711.11285 | A characterization of Zoll Riemannian metrics on the 2-sphere | The simple length spectrum of a Riemannian manifold is the set of lengths of its simple closed geodesics. We prove a theorem claimed by Lusternik: in any Riemannian 2-sphere whose simple length spectrum consists of only one element L, any geodesic is simple closed with length L. | \section{Introduction}
A remarkable class of closed Riemannian manifolds is given by those all of whose geodesics are closed. A detailed account of the state of the art of the research on this subject up to the late 1970s is contained in the celebrated monograph of Besse \cite{Besse:1978pr}, while for more recent results we refer the reader to, e.g., \cite{Olsen:2010ne, Radeschi:2017dz, Abbondandolo:2017xz} and references therein. The round $n$-spheres are the simplest examples of manifolds in this class. The first non-trivial example of a 2-sphere of revolution all of whose geodesics are closed was given by Zoll \cite{Zoll:1903by}. The closed geodesics in this example are without self-intersections and have the same length. This is not accidental: a theorem of Gromoll and Grove \cite{Gromoll:1981kl} implies that every 2-sphere all of whose geodesics are closed is Zoll, meaning that all the geodesics are simple closed and have the same length. Our main result, which was claimed by Lusternik in \cite[page~82]{Ljusternik:1966tk}, shows that the property of being Zoll for a Riemannian 2-sphere $(S^2,g)$ can be read off from its simple length spectrum $\sigma_{\mathrm{s}}(S^2,g)$, that is, the set of lengths of its simple closed geodesics.
\begin{thm}\label{t:main}
In a Riemannian 2-sphere $(S^2,g)$ such that $\sigma_{\mathrm{s}}(S^2,g)=\{\ell\}$, every geodesic is simple closed and has length $\ell$.
\end{thm}
Under a weaker assumption on the simple length spectrum, Lusternik also established the following easier statement. We will provide its precise proof in Section~\ref{s:proofs} for the reader's convenience.
\begin{thm}[\cite{Ljusternik:1966tk}, page 81]\label{t:ellipsoid}
Let $(S^2,g)$ be a Riemannian 2-sphere such that $\sigma_{\mathrm{s}}(S^2,g)$ has at most two elements. Then, for some $\ell\in\sigma_{\mathrm{s}}(S^2,g)$, every $x\in S^2$ lies on a simple closed geodesic of $(S^2,g)$ of length $\ell$.
\end{thm}
If one further assumes that the sectional curvature of the Riemannian metric takes values inside $[1/4,1]$, Theorems~\ref{t:main} and~\ref{t:ellipsoid} are a consequence of the following result of Ballmann, Thorbergsson and Ziller \cite[Theorem~A]{Ballmann:1983fv}. Consider a Riemannian $n$-sphere, with $n\geq 2$, whose sectional curvature $\kappa$ satisfies $1/4\leq\delta\leq \kappa \leq 1$. If all the (not necessarily simple) closed geodesics with length in $[2\pi,2\pi\delta^{-1/2}]$ have at most two different length values, for one such length $\ell$ every point of the $n$-sphere lies on a closed geodesic of length $\ell$. If all the closed geodesics with length in $[2\pi,2\pi\delta^{-1/2}]$ have the same length, then all the geodesics are closed with the same length.
For any $r\in(0,1)$ sufficiently close to $1$, the ellipsoid of revolution
\[E(r):=\big\{(x,y,z)\in\mathds{R}^3\ \big|\ x^2+y^2+(z/r)^{2}=1 \big\}\]
equipped with the Riemannian metric induced by the ambient Euclidean metric of $\mathds{R}^3$ satisfies the assumptions of Theorem~\ref{t:ellipsoid}, but is not a Zoll 2-sphere. Indeed, the meridians of $E(r)$ are simple closed geodesics of the same length, and the only other simple closed geodesic is the equator, whose length is $2\pi$. Nevertheless, we do not know whether Theorem~\ref{t:ellipsoid} is optimal.
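Returning to this example for concreteness (the following elementary computation is ours): the meridians of $E(r)$ are ellipses with semi-axes $1$ and $r$, so their common length is
\[
4\int_0^{\pi/2}\sqrt{1-(1-r^2)\sin^2\theta}\ \mathrm{d}\theta,
\]
which is strictly smaller than $2\pi$ for every $r\in(0,1)$; together with the equator of length $2\pi$, the simple length spectrum of $E(r)$ therefore consists of exactly two distinct values for $r$ close to $1$.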
\begin{quest}
On a Riemannian 2-sphere $(S^2,g)$ such that $\sigma_{\mathrm{s}}(S^2,g)$ has at most two elements, is there a length $\ell\in\sigma_{\mathrm{s}}(S^2,g)$ and a point $x\in S^2$ such that every geodesic going through $x$ is simple closed with length $\ell$?
\end{quest}
The proofs of Theorems~\ref{t:main} and~\ref{t:ellipsoid} build on the classical minmax recipe due to Lusternik and Schnirel\-mann \cite{Lusternik:1934km, Ballmann:1978rw} for detecting three simple closed geodesics on every Riemannian 2-sphere. A crucial ingredient for this recipe is a deformation that shrinks embedded loops without creating self-intersections, which can be obtained by applying Grayson's curve shortening flow \cite{Grayson:1989ec}. We expect such a flow to be available also in the setting of reversible Finsler metrics. If this were the case, Theorems~\ref{t:main} and~\ref{t:ellipsoid} would extend to reversible Finsler metrics on the 2-sphere.
We close the introduction by raising one more question related to Theorem~\ref{t:main}. Consider the unit tangent bundle $SS^2$ equipped with a contact form $\alpha$. The associated Reeb vector field $R$ on $SS^2$ is defined by $\alpha(R)\equiv1$ and $\mathrm{d}\alpha(R,\cdot)\equiv0$. We say that $(SS^2,\alpha)$ is reversible when $\phi^* R=-R$, where $\phi:SS^2\to SS^2$ is the involution $\phi(x,v)=(x,-v)$.
\begin{quest}
Assume that all the periodic orbits of the Reeb vector field of a reversible $(SS^2,\alpha)$ have the same period. Is $(SS^2,\alpha)$ a Zoll contact manifold, namely such that all its Reeb orbits are periodic?
\end{quest}
\footnotesize
\subsection*{Acknowledgments.} We are grateful to Alberto Abbondandolo, who asked us the original question leading to Theorem~\ref{t:main}, and suggested to convert our homological proof in a cohomological one, which resulted in a dramatic simplification of the exposition. We also thank Wolfgang Ziller for pointing out to us the above mentioned result in \cite{Ballmann:1983fv}. Marco Mazzucchelli is partially supported by the ANR-13-JS01-0008-01 ``Contact spectral invariants''. Stefan Suhr is supported by the SFB/TRR 191 ``Symplectic Structures in Geometry, Algebra and Dynamics'', funded by the Deutsche Forschungsgemeinschaft. Part of this work was carried out during a visit of Marco Mazzucchelli at the Ruhr-Universit\"at Bochum in November 2017, funded by the SFB/TRR 191; both authors wish to thank the university for providing a wonderful working environment.
\normalsize
\section{Lusternik-Schnirelmann theory}
In their celebrated work \cite{Lusternik:1934km}, Lusternik and Schnirelmann showed how to detect three simple closed geodesics on every Riemannian 2-sphere by applying variational methods. The original proof of this fact in~\cite{Lusternik:1934km} is known to have a gap. More specifically, the argument requires a deformation of the space of unparametrized embedded circles in the 2-sphere that shrinks all those circles that are not closed geodesics. The deformation provided by Lusternik and Schnirelmann is incomplete. Actually, constructing such a deformation by hand turned out to be highly non-trivial, and several authors proposed solutions in the second half of the 20th century. A particularly elegant one was provided by Grayson \cite{Grayson:1989ec} with his curve shortening flow. In this section, we are going to review the arguments leading to Lusternik-Schnirelmann's theorem in combination with Grayson's work. For the topological arguments, we will mainly follow \cite{Ballmann:1978rw}.
\subsection{Grayson's curve shortening flow}
Let $(S^2,g)$ be an oriented Riemannian 2-sphere. We denote by $\Omega\subset C^\infty(S^1,S^2)$ the space of embedded circles $\gamma:S^1\hookrightarrow S^2$, and by $\Omega_0\subset C^\infty(S^1,S^2)$ the space of constant maps. The group $\mathrm{Diff}(S^1)$ acts on $\Omega_*:=\Omega\cup\Omega_0$ by reparametrizations, i.e.
\begin{align*}
(f\cdot\gamma)(t)=\gamma(f(t)),\qquad
\forall f\in\mathrm{Diff}(S^1),\ \gamma\in\Omega,\ t\in S^1.
\end{align*}
We consider the space of unparametrized loops
$\Lambda_*:=\Omega_*/\mathrm{Diff}(S^1)$ endowed with the quotient Whitney $C^\infty$ topology, and its subsets $\Lambda:=\Omega/\mathrm{Diff}(S^1)$ and $\Lambda_0:=\Omega_0/\mathrm{Diff}(S^1)\equiv\Omega_0\cong S^2$. We also consider the length function
\begin{align*}
L:\Lambda_* \to [0,\infty),
\qquad
L(\gamma)=\int_{S^1} g(\dot\gamma(t),\dot\gamma(t))^{1/2} \mathrm{d} t,
\end{align*}
which is continuous.
For each parametrized embedded circle $\gamma\in\Omega$, we denote by $\nu_\gamma$ its positive unit normal vector field, so that the ordered pair $\{\dot\gamma(s),\nu_{\gamma}(s)\}$ is a positively oriented basis of the tangent space $\mathrm{T}_{\gamma(s)}S^2$ for each $s\in S^1$. We also denote by $\kappa_{\gamma}:S^1\to\mathds{R}$ the signed curvature of $\gamma$, which is defined by
\[\kappa_\gamma(s)=\frac{g(\nabla_s\dot\gamma,\nu_{\gamma}(s))}{\|\dot\gamma(s)\|_g^{2}}.\]
Up to a sign, both $\nu_\gamma$ and $\kappa_\gamma$ are independent of the parametrization of $\gamma$; more precisely, for all $f\in\mathrm{Diff}(S^1)$, we have
\begin{align*}
\nu_{f\cdot\gamma}=\mathrm{sign}(\dot f)\,\nu_{\gamma}\circ f,
\qquad
\kappa_{f\cdot\gamma}=\mathrm{sign}(\dot f)\, \kappa_{\gamma}\circ f.
\end{align*}
In particular, the product $\kappa_{\gamma}\nu_{\gamma}$ is completely independent of the parametrization of $\gamma$, that is,
\begin{align}\label{e:invariance}
\kappa_{f\cdot\gamma}(s) \nu_{f\cdot\gamma}(s)= \kappa_{\gamma}(f(s))\,\nu_{\gamma}(f(s)),
\qquad\forall s\in S^1.
\end{align}
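As a quick sanity check on these conventions (our remark; the overall sign of $\kappa_\gamma$ depends on the orientation and on the choice of $\nu_\gamma$): on the round unit 2-sphere, the circle of points at colatitude $\theta\in(0,\pi)$ has length $2\pi\sin\theta$ and constant signed curvature, and the Gauss-Bonnet theorem applied to the spherical cap that it bounds gives
\[
\kappa_\gamma\, 2\pi\sin\theta + 2\pi(1-\cos\theta) = 2\pi,
\qquad\textup{that is}\qquad
|\kappa_\gamma| = |\cot\theta|,
\]
which vanishes precisely for the equator $\theta=\pi/2$, the only circle of latitude that is a geodesic.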
Now, let us consider the parabolic partial differential equation
\begin{equation}
\label{e:curve_shortening}
\partial_t \gamma_t(s) = \kappa_{\gamma_t}(s)\,\nu_{\gamma_t}(s)
\end{equation}
with initial condition $\gamma_0=\gamma$. By~\eqref{e:invariance}, this equation is parametrization invariant. Namely, if $\gamma_t\in\Omega$ is a solution of
\eqref{e:curve_shortening}, for each $f\in\mathrm{Diff}(S^1)$ the family of curves $\zeta_t:=f\cdot\gamma_t$ is a solution of the same equation with initial condition $\zeta_0=f\cdot\gamma$. Therefore, we can view~\eqref{e:curve_shortening} as a recipe that prescribes the evolution of unparametrized embedded circles $\gamma\in\Lambda$.
The local existence, uniqueness, and continuous dependence on the initial condition in the $C^\infty$ topology of the solutions of~\eqref{e:curve_shortening} are well known by the standard theory of parabolic partial differential equations (see, e.g., \cite[Theorem 1.1]{ManteMarti} for a modern account). In his fundamental paper \cite{Grayson:1989ec}, Grayson studied the long-term existence and several properties of the solutions of \eqref{e:curve_shortening}. Summing up, there is an open neighborhood $J\subset[0,\infty)\times \Lambda$ of $\{0\}\times\Lambda$ and a continuous map $\phi:J\to\Lambda$ encoding the solutions of \eqref{e:curve_shortening}, in the sense that $\phi(t,\gamma)=\phi_t(\gamma):=\gamma_t$. Such a map $\phi$ is referred to in the literature as the curve shortening flow, and satisfies the following properties. For each $\gamma\in\Lambda$, we denote by $\tau_\gamma\in(0,\infty]$ the largest extended real number such that $[0,\tau_\gamma)\times \{\gamma\} \subset J$.
\begin{itemize}
\item[(i)] For all $(t,\gamma)\in J$ we have
\[\frac{\mathrm{d}}{\mathrm{d} t} L(\phi_t(\gamma)) = -\int_{S^1} \kappa_{\gamma_t}(s)^2\|\dot\gamma_t(s)\|_g \mathrm{d} s \leq0, \]
with equality if and only if $\gamma$ is a closed geodesic (notice that in the integrand above we have introduced a parametrization of $\gamma_t\in\Lambda$, but the value of the integral is independent of this choice); see \cite[page 75]{Grayson:1989ec}.
\item[(ii)] For each $\gamma\in \Lambda$, the limit
\begin{align*}
\ell_{\gamma}:=\lim_{t\to\tau_\gamma} L(\phi_t(\gamma))
\end{align*}
exists; if $\tau_\gamma<\infty$ then $\ell_\gamma=0$ and $\phi_t(\gamma)$ converges to some constant curve in $\Lambda_0$ as $t\to\tau_\gamma$; otherwise, $\ell_\gamma>0$ and, for each open neighborhood $U\subset\Lambda$ of the set of simple closed geodesics of length $\ell_\gamma$ and for all $t>0$ large enough, $\phi_t(\gamma)$ belongs to $U$; see \cite[Theorem 0.1]{Grayson:1989ec}.
\item[(iii)] Let $U\subset\Lambda$ be an open neighborhood of the subspace of simple closed geodesics of length $\ell$; there exists $\epsilon=\epsilon(U)>0$ such that, for every compact subset $K\subset\{L\leq \ell+\epsilon\}$, there exists a continuous function $\tau:K\to[0,\infty)$ such that $\phi_{\tau(\gamma)}(\gamma)\in\{L<\ell\}\cup U$ for all $\gamma\in K$; see \cite[Lemma 8.1]{Grayson:1989ec}.
\end{itemize}
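For completeness, here is a sketch of the standard computation behind property (i) (our presentation), writing $T:=\dot\gamma_t/\|\dot\gamma_t\|_g$ and using the symmetry $\nabla_t\partial_s\gamma_t=\nabla_s\partial_t\gamma_t$ together with the evolution equation $\partial_t\gamma_t=\kappa_{\gamma_t}\nu_{\gamma_t}$:
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d} t} L(\gamma_t)
&= \int_{S^1} \frac{g(\nabla_t\dot\gamma_t,\dot\gamma_t)}{\|\dot\gamma_t\|_g}\,\mathrm{d} s
= \int_{S^1} g\big(\nabla_s(\kappa_{\gamma_t}\nu_{\gamma_t}),T\big)\,\mathrm{d} s \\
&= -\int_{S^1} \kappa_{\gamma_t}\, g(\nu_{\gamma_t},\nabla_s T)\,\mathrm{d} s
= -\int_{S^1} \kappa_{\gamma_t}(s)^2\,\|\dot\gamma_t(s)\|_g\,\mathrm{d} s,
\end{align*}
where the third equality uses that $g(\nu_{\gamma_t},T)\equiv0$, and the last one uses that the $\nu_{\gamma_t}$-component of $\nabla_s T$ equals $\kappa_{\gamma_t}\|\dot\gamma_t\|_g$ by the definition of the signed curvature.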
\subsection{The fundamental group of $\Lambda_*$}
Let us construct a 2-fold covering map \[\pi:C\to \Lambda_*.\] The idea of this construction goes as follows. Above the subspace of embedded circles $\Lambda\subset\Lambda_*$, the total space $\pi^{-1}(\Lambda)$ is precisely the space of embedded compact disks in $S^2$, and the projection $\pi$ sends a compact disk to its boundary curve; above any constant $\gamma\in\Lambda_0$, one element of $\pi^{-1}(\gamma)$ must be thought of as the collapsed disk at the point $\gamma$, whereas the other element must be thought of as the compact disk that fills $S^2$ and whose boundary has been collapsed to $\gamma$. Let us now provide the formal construction of this covering space.
For each $\gamma\in\Lambda_*$ we define a set of two elements $C_\gamma$ as follows: if $\gamma\in\Lambda$, we define $C_\gamma:=\pi_0(S^2\setminus\gamma)$ to be the set of path-connected components of its complement; otherwise, if $\gamma\in\Lambda_0$, we simply set $C_\gamma:=\mathds{Z}_2=\mathds{Z}/2\mathds{Z}$. We set
\[C:=\big\{(\gamma,Q)\ \big|\ \gamma\in\Lambda_*,\ Q\in C_\gamma\big\}.\]
We endow $C$ with a topology, by defining a fundamental system of open neighborhoods of any point $(\gamma,Q)\in C$ as follows.
\begin{itemize}
\item Assume first that $\gamma\in\Lambda$, so that $Q$ is a connected component of $S^2\setminus\gamma$, and choose an arbitrary point $x\in Q$. Let $U\subset\Lambda$ be a sufficiently small open neighborhood of $\gamma$ so that, for each $\zeta\in U$, $x$ does not lie on the curve $\zeta$. For every such $\zeta$, we set $Q_\zeta\in\pi_0(S^2\setminus\zeta)$ to be the connected component containing $x$.
\item Assume now that $\gamma\in\Lambda_0$, so that $Q\in\mathds{Z}_2$. We choose an arbitrary point $x\in S^2\setminus\gamma$, and as before a sufficiently small open neighborhood $U\subset\Lambda_*$ of $\gamma$ such that, for every $\zeta\in U$, $x$ does not lie on $\zeta$. For each $\zeta\in U\cap\Lambda_0$, we set $Q_\zeta:=Q$. For each $\zeta\in U\cap\Lambda$, if $Q=1$ we set $Q_\zeta\in\pi_0(S^2\setminus\zeta)$ to be the connected component containing $x$, whereas if $Q=0$ we set $Q_\zeta$ to be the connected component not containing $x$.
\end{itemize}
In both cases, we declare $U':=\big\{ (\zeta,Q_\zeta)\ |\ \zeta\in U \}\subset C$ to be an open neighborhood of $(\gamma,Q)$. With this topology, $C\to \Lambda_*$ is a 2-fold covering map with projection $(\gamma,Q)\mapsto\gamma$.
Let $\gamma_0\in\Lambda_0$ be a constant curve. We employ the covering $C$ to define a group homomorphism
\begin{align}\label{e:A}
A : \pi_1(\Lambda_*,\gamma_0) \to \mathds{Z}_2
\end{align}
as follows. Consider a continuous loop $\Gamma:[0,1]\to\Lambda_*$ with $\Gamma(0)=\Gamma(1)=\gamma_0$. We lift $\Gamma$ to a continuous path $\widetilde\Gamma:[0,1]\to C$ such that $\widetilde\Gamma(0)=(\Gamma(0),0)$ and the following diagram commutes
\begin{align*}
\xymatrix{
& C \ar[d] \\
[0,1] \ar[ur]^{\widetilde\Gamma}\ar[r]^{\Gamma} & \Lambda_*
}
\end{align*}
We write $\widetilde\Gamma(t)=:(\Gamma(t),Q(t))$, and set $A([\Gamma]):=Q(1)$. The fact that $A([\Gamma])$ only depends on the homotopy class $[\Gamma]\in\pi_1(\Lambda_*,\gamma_0)$ readily follows from the homotopy lifting property of the covering $C\to\Lambda_*$. The homomorphism property \[A([\Gamma']*[\Gamma''])=A([\Gamma'])+A([\Gamma''])\] is a consequence of the following observation: if $\widetilde\Gamma_0:[0,1]\to C$ and $\widetilde\Gamma_1:[0,1]\to C$ are the two lifts of a continuous loop $\Gamma:S^1\to\Lambda_*$ as above with $\widetilde\Gamma_0(0)=(\Gamma(0),0)$ and $\widetilde\Gamma_1(0)=(\Gamma(0),1)$, then if we write $\widetilde\Gamma_i(t)=:(\Gamma(t),Q_i(t))$ we have $Q_0(1)=1-Q_1(1)$. It is not hard to see that the homomorphism $A$ is surjective (see the proof of Lemma~\ref{l:pi_1_injective}).
\subsection{Three subordinate homology classes}
As usual, we denote by $B^{n+1}$ the closed unit ball in $\mathds{R}^{n+1}$, by $S^{n}=\partial B^{n+1}$ the unit $n$-sphere, and by $\mathds{R}\mathrm{P}^n=S^n/\sim$ the $n$-dimensional real projective space, where $x\sim-x$ for each $x\in S^n$. We will always identify $\mathds{R}^{n-1}$ with the hyperplane $\mathds{R}^{n-1}\times\{0\}\subset\mathds{R}^n$, so that in the sequence $S^0\subset S^1\subset S^2$ each sphere is the equator of the next one, and analogously we have the sequence of inclusions $\mathds{R}\mathrm{P}^0\subset\mathds{R}\mathrm{P}^1\subset \mathds{R}\mathrm{P}^2$. We denote by $E$ the space
\begin{align*}
E := \big\{ ([x],\lambda x)\in\mathds{R}\mathrm{P}^2\times B^{3}\ \big|\ x\in S^2,\ \lambda\in[-1,1]\big\},
\end{align*}
which is the total space of the tautological unit-ball bundle over $\mathds{R}\mathrm{P}^2$ with projection
$p:E\to\mathds{R}\mathrm{P}^2$, $p([x],\lambda x)=[x]$.
We recall that the cohomology ring of $E$ with coefficients in $\mathds{Z}_2$ is given by $H^*(E;\mathds{Z}_2)=\mathds{Z}_2[\omega]/(\omega^{3})$, where $\omega$ is the generator of $H^1(E;\mathds{Z}_2)\cong\mathds{Z}_2$. Let $\tau\in H^1(E,\partial E;\mathds{Z}_2)$ be the Thom class of the tautological bundle, which gives the Thom isomorphism
\begin{align*}
H^j(E;\mathds{Z}_2) \toup^{\cong} \,& H^{j+1}(E,\partial E;\mathds{Z}_2),\\
\alpha \longmapsto \,& \tau\smallsmile\alpha.
\end{align*}
In particular, $H^{*}(E,\partial E;\mathds{Z}_2)$ is generated as a group by the cohomology classes $\tau\smallsmile\omega^j$, for $j=0,1,2$. Let $h_3$ be the (non-zero) homology class in $H_3(E,\partial E;\mathds{Z}_2)$ such that $(\tau\smallsmile\omega^2)(h_3)=1$. We define the non-zero homology classes
\begin{align*}
h_{2}:=\omega\smallfrown h_{3}\in H_2(E,\partial E;\mathds{Z}_2),\\
h_{1}:=\omega\smallfrown h_{2}\in H_1(E,\partial E;\mathds{Z}_2).
\end{align*}
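That the classes $h_2$ and $h_1$ are indeed non-zero can be checked directly (a short verification included here for convenience), using the standard compatibility $\alpha(\beta\smallfrown h)=(\beta\smallsmile\alpha)(h)$ of cup and cap products, which with $\mathds{Z}_2$ coefficients involves no signs:
\begin{align*}
(\tau\smallsmile\omega)(h_2) &= (\tau\smallsmile\omega)(\omega\smallfrown h_3) = (\tau\smallsmile\omega^2)(h_3) = 1,\\
\tau(h_1) &= \tau(\omega\smallfrown h_2) = (\tau\smallsmile\omega)(h_2) = 1.
\end{align*}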
We now define a map
\begin{align}\label{e:iota}
\iota: (E,\partial E)\to (\Lambda_*,\Lambda_0)
\end{align}
as follows: for each $([x],v)\in E$, the (possibly constant) loop $\iota([x],v)$ is given by the intersection of $S^2$ with the affine plane
$P([x],v):=\mathrm{span}\{x\}^\bot + v \subset \mathds{R}^{3}$,
see Figure~\ref{f:iota}.
\begin{figure}
\begin{center}
\begin{small}
\input{cycle.pdf_tex}
\caption{The map $\iota:(E,\partial E)\to(\Lambda_*,\Lambda_0)$.}
\label{f:iota}
\end{small}
\end{center}
\end{figure}
\begin{lem}\label{l:pi_1_injective}
For any $e_0\in\partial E$ with image $\gamma_0:=\iota(e_0)$, the map $\iota$ induces injective homomorphisms
$\iota_* :\pi_1(E,e_0)\hookrightarrow\pi_1(\Lambda_*,\gamma_0)$ and $\iota_* :H_1(E;\mathds{Z}_2)\hookrightarrow H_1(\Lambda_*;\mathds{Z}_2)$.
\end{lem}
\begin{proof}
Let $\Gamma:S^1\to E$ be a continuous loop such that $\Gamma(0)=e_0$ and the homotopy class $[\Gamma]$ generates the fundamental group $\pi_1(E,e_0)$. For instance we can define $\Gamma$ as follows. Let $\Psi:S^1\to\mathds{R}\mathrm{P}^1\subset \mathds{R}\mathrm{P}^2$ be a continuous map of degree 1. We can see $\Psi$ as a loop in the 0-section of $E$, so that it represents a generator of the fundamental group $\pi_1(E,\Psi(0))$. Let $\Theta:[0,1]\to E$ be a continuous path that joins $e_0$ with $\Psi(0)$. We can set $\Gamma$ to be the loop obtained by the concatenation $\Theta * \Psi * \overline{\Theta}$. Notice that $\iota\circ\Psi$ is the loop of meridians of the sphere $S^2$ that starts at a meridian and applies to it a rotation of angle $\pi$ around the axis passing through the north and south poles. We readily see that $A\circ\iota_*([\Gamma])=1$, where $A$ is the homomorphism in \eqref{e:A}. In particular, $\iota_*:\pi_1(E,e_0)\to\pi_1(\Lambda_*,\gamma_0)$ is non-trivial. Since $\pi_1(E,e_0)\cong\mathds{Z}_2$, $\iota_*$ is injective and the composition $A\circ\iota_*$ is an isomorphism. Moreover, $\iota_*([\Gamma])$ does not belong to the commutator subgroup $[\pi_1(\Lambda_*,\gamma_0),\pi_1(\Lambda_*,\gamma_0)]$, for otherwise it would belong to the kernel of $A$. Therefore, by the Hurewicz theorem, $\iota\circ\Gamma$ represents a non-zero element of the homology group $H_1(\Lambda_*;\mathds{Z}_2)$, and hence the homology homomorphism $\iota_*:H_1(E;\mathds{Z}_2)\hookrightarrow H_1(\Lambda_*;\mathds{Z}_2)$ is injective.
\end{proof}
\begin{lem}\label{l:non_triviality}
The map $\iota$ induces an injective homomorphism
\begin{align*}
\iota_* & : H_1(E,\partial E;\mathds{Z}_2)\hookrightarrow H_1(\Lambda_*,\Lambda_0;\mathds{Z}_2).
\end{align*}
\end{lem}
\begin{proof}
Since $\partial E$ and $\Lambda_0$ are homeomorphic to $S^2$, the long exact sequences of the pairs $(E,\partial E)$ and $(\Lambda_*,\Lambda_0)$ imply that the inclusions induce isomorphisms
\begin{align*}
H_1(E;\mathds{Z}_2)\toup^{\cong} H_1(E,\partial E;\mathds{Z}_2),
\qquad
H_1(\Lambda_*;\mathds{Z}_2)\toup^{\cong} H_1(\Lambda_*,\Lambda_0;\mathds{Z}_2).
\end{align*}
Lemma~\ref{l:pi_1_injective}, together with the commutative diagram
\begin{align*}
\xymatrix{
H_1(E;\mathds{Z}_2)\Big. \ar[r]^{\cong\ \ \ } \ar@{^{(}->}[d]^{\iota_*}& H_1(E,\partial E;\mathds{Z}_2) \ar[d]^{\iota_*} \\
H_1(\Lambda_*;\mathds{Z}_2) \ar[r]^{\cong\ \ \ } & H_1(\Lambda_*,\Lambda_0;\mathds{Z}_2)
}
\end{align*}
implies the injectivity of $\iota_* : H_1(E,\partial E;\mathds{Z}_2)\hookrightarrow H_1(\Lambda_*,\Lambda_0;\mathds{Z}_2)$.
\end{proof}
Consider again the generator $\omega$ of $H^1(E;\mathds{Z}_2)\cong\mathds{Z}_2$. Since the homomorphism $\iota_*:H_1(E;\mathds{Z}_2)\hookrightarrow H_1(\Lambda_*;\mathds{Z}_2)$ is injective, there exists a cohomology class
\begin{align}\label{e:kappa}
\kappa\in H^1(\Lambda_*;\mathds{Z}_2)
\end{align}
such that $\iota^*\kappa=\omega$. This implies that
\begin{equation}\label{e:cap_product_relations}
\begin{split}
\iota_*h_1&=\iota_*( (\iota^*\kappa)\smallfrown h_2) = \kappa\smallfrown\iota_*h_2,\\
\iota_*h_2&=\iota_*( (\iota^*\kappa)\smallfrown h_3) = \kappa\smallfrown\iota_*h_3,
\end{split}
\end{equation}
and since $\iota_*h_1$ is nonzero, the homology classes $\iota_*h_2$ and $\iota_*h_3$ are non-trivial in $H_*(\Lambda_*,\Lambda_0;\mathds{Z}_2)$ as well.
\subsection{Lusternik-Schnirelmann minmax values}
For each value $\ell\geq0$, we denote by $\{L\leq\ell\}\subset\Lambda_*$ the corresponding sublevel set of the length functional, and by $j^\ell:(\{L\leq\ell\},\Lambda_0)\hookrightarrow(\Lambda_*,\Lambda_0)$ the inclusion map.
The so-called Lusternik-Schnirelmann minmax values are given by
\begin{align*}
\ell_i:= \inf \big\{\ell>0\ \big|\ \iota_*h_i\in j^\ell_* \big(H_i(\{L\leq\ell\},\Lambda_0)\big) \big\},\qquad i=1,2,3.
\end{align*}
\begin{lem}
$\ell_1>0$.
\end{lem}
\begin{proof}
Let us assume by contradiction that $\ell_1=0$.
We denote by $\rho$ the injectivity radius of $(S^2,g)$, and we fix a constant $\epsilon\in(0,\rho/3)$.
Since $\ell_1=0$, we can find a relative 1-cycle $\sigma$ representing $\iota_*h_1$ whose support is contained in the sublevel set $\{L<\epsilon\}$. In other words, $\sigma$ is the formal sum $\sigma_1+...+\sigma_n$ of singular 1-simplexes of the form $\sigma_i:[0,1]\to\{L<\epsilon\}$. Let $k\in\mathds{N}$ be large enough so that, for all $i=1,...,n$ and $t_1,t_2\in[0,1]$ with $|t_1-t_2|\leq1/k$, we have
\begin{align*}
\min\big\{d(x_1,x_2)\ |\ x_1\in\sigma_i(t_1),\ x_2\in\sigma_i(t_2) \big\}\leq\epsilon.
\end{align*}
Here, $d:S^2\times S^2\to[0,\infty)$ denotes the Riemannian distance function induced by $g$. For each $i=1,...,n$ and $j=0,...,k$, we choose a point $x_{i,j}\in \sigma_i(j/k)$. We should make these choices coherently, in the sense that $x_{i_1,j_1}=x_{i_1,j_2}$ whenever $\sigma_{i_1}(j_1/k)=\sigma_{i_2}(j_2/k)$. Since $L(\sigma_i(j/k))<\epsilon$, we have $d(x_{i,j},x)\leq\epsilon/2$ for all $x\in\sigma_i(j/k)$. Notice that $d(x_{i,j},x_{i,j+1})\leq2\epsilon$. For $i=1,...,n$, we define a continuous map $\gamma_i:[0,1]\to S^2$ such that each restriction $\gamma_i|_{[j/k,(j+1)/k]}$ is a geodesic joining $\gamma_i(j/k)=x_{i,j}$ and $\gamma_i((j+1)/k)=x_{i,j+1}$. Notice that
\begin{align*}
\max \big\{ d(\gamma_i(t),x)\ \big|\ i=1,...,n,\ t\in[0,1],\ x\in\sigma_i(t) \big\}\leq 3\epsilon < \rho.
\end{align*}
For each $i=1,...,n$, we define the continuous homotopy $\sigma_{i,s}:[0,1]\to\Lambda_*$, $s\in[0,1]$, by
$\sigma_{i,s}(t):=\exp_{\gamma_i(t)}\big( (1-s) \exp_{\gamma_i(t)}^{-1}(\sigma_i(t)) \big)$.
The relative $1$-cycle $\sigma':=\sigma_{1,1}+...+\sigma_{n,1}$ still represents $\iota_*h_1\in H_1(\Lambda_*,\Lambda_0)$, and its support is contained in $\Lambda_0$. Therefore $\iota_*h_1=0$, which contradicts Lemma~\ref{l:non_triviality}.
\end{proof}
\begin{lem}
$\ell_1\leq \ell_2\leq \ell_3$.
\end{lem}
\begin{proof}
This is a direct consequence of the cap product relations~\eqref{e:cap_product_relations}. Indeed, such relations imply that, for any relative cycle $\sigma$ representing $\iota_*h_3$ and for any cocycle $\lambda$ representing $\kappa$, the relative cycles $\lambda\smallfrown \sigma$ and $\lambda^2\smallfrown \sigma$ represent $\iota_*h_2$ and $\iota_*h_1$ respectively, and the supports of these two relative cycles are contained in the support of $\sigma$.
\end{proof}
\begin{thm}[Lusternik-Schnirelmann \cite{Lusternik:1934km}]\label{t:LS}
If $\ell_1=\ell_2$ or $\ell_2=\ell_3$, then for every open neighborhood $U\subset\Lambda_*$ of the set of simple closed geodesics with length $\ell_2$, the cohomology class $\kappa$ restricts to a non-zero cohomology class in $H^1(U;\mathds{Z}_2)$. If furthermore $\ell_1=\ell_2=\ell_3$, then also the cohomology class $\kappa^2$ restricts to a non-zero cohomology class in $H^2(U;\mathds{Z}_2)$.
\end{thm}
\begin{proof}
Assume that $\ell:=\ell_i=\ell_{i+1}$ for some $i\in\{1,2\}$, and let $U\subset\Lambda_*$ be an open neighborhood of the set of simple closed geodesics with length $\ell$. Let $\epsilon=\epsilon(U)>0$ be the constant given by property~(iii) of the curve shortening flow. By the definition of the minmax value $\ell_{i+1}$ there exists a relative cycle $\sigma$ that represents $\iota_*h_{i+1}$ and has support $K$ contained inside the sublevel set $\{L\leq \ell+\epsilon\}$. Since $K$ is compact, by property~(iii) of the curve shortening flow $\phi_t$ there exists a continuous function $\tau:K\to[0,\infty)$ such that the image of the map $\Phi:K\to\Lambda_*$, $\Phi(\gamma)=\phi_{\tau(\gamma)}(\gamma)$ is contained in $\{L<\ell\}\cup U$. Clearly, $[\Phi_*\sigma]=[\sigma]$ in $H_{i+1}(\Lambda_*,\Lambda_0;\mathds{Z}_2)$. After applying a barycentric subdivision sufficiently many times to all the singular simplexes in $\Phi_*\sigma$, we obtain that each singular simplex in $\Phi_*\sigma$ is contained in $\{L<\ell\}$ or in $U$. In particular, we have $\Phi_*\sigma=\sigma'+\sigma''$, where $\sigma'$ and $\sigma''$ are singular chains (but not necessarily cycles) whose supports are contained in $\{L<\ell\}$ and $U$, respectively.
Now, assume by contradiction that the restriction of the cohomology class $\kappa$ to $U$ is trivial. The cohomology long exact sequence of the pair $(\Lambda_*,U)$ implies that $\kappa$ belongs to the image of the homomorphism $H^1(\Lambda_*,U;\mathds{Z}_2)\to H^1(\Lambda_*;\mathds{Z}_2)$ induced by the inclusion. Namely, $\kappa$ can be represented by a cocycle $\lambda$ whose kernel contains all the singular chains with support in $U$. In particular
\begin{align*}
\iota_*h_i
=\kappa\smallfrown \iota_*h_{i+1}
=[\lambda\smallfrown\sigma' + \lambda\smallfrown\sigma'']
=[\lambda\smallfrown\sigma'].
\end{align*}
But this would imply that $\iota_*h_i$ is represented by the relative cycle $\lambda\smallfrown\sigma'$ whose support is contained in the sublevel set $\{L<\ell\}$. This contradicts the fact that $\ell=\ell_i$ is the minmax value associated to $\iota_*h_i$.
The assertion concerning the case where $\ell_1=\ell_2=\ell_3$ follows by an analogous argument.
\end{proof}
\begin{rem}
As we mentioned in the introduction, we expect a curve shortening flow to be available also for reversible Finsler metrics on $S^2$, and thus Theorem~\ref{t:LS}, as well as Theorems~\ref{t:main} and~\ref{t:ellipsoid}, should extend to the reversible Finsler setting.
\end{rem}
\section{Proofs of the theorems}
\label{s:proofs}
Theorem~\ref{t:main} is a consequence of the following statement.
\begin{thm}
Let $(S^2,g)$ be a Riemannian 2-sphere whose Lusternik-Schnirelmann minmax values satisfy $\ell_1=\ell_2=\ell_3=:\ell$. Then, every geodesic of $(S^2,g)$ is a simple closed geodesic of length $\ell$.
\end{thm}
\begin{proof}
We define
$\mathcal{E}
:=
\big\{
(\gamma,x)\in\Lambda\times S^2\ \big|\ x\in\gamma
\big\}$, which is the total space of the circle bundle $\pi:\mathcal{E}\to\Lambda$, $\pi(\gamma,x)=\gamma$. We denote by $\mathrm{P}\mathrm{T} S^2$ the projectivization of the tangent bundle of $S^2$, that is, $\mathrm{P}\mathrm{T} S^2:= (\mathrm{T} S^2\setminus\{\mbox{0-section}\})/\sim$,
where $\sim$ is the equivalence relation $(x,v)\sim(x,\lambda v)$ for each real number $\lambda\neq0$. We have a continuous evaluation map
\begin{align*}
\mathrm{ev}:\mathcal{E}\to\mathrm{P}\mathrm{T} S^2,
\qquad
\mathrm{ev}(\gamma,x)=\mathrm{T}_x\gamma.
\end{align*}
Let $G\subset\Lambda$ be the set of great circles in $S^2$. Notice that, if $\iota:E\to\Lambda$ is the map introduced in~\eqref{e:iota}, we have $G=\iota(\{\mbox{0-section}\})\cong\mathds{R}\mathrm{P}^2$. In particular, if $\kappa\in H^1(\Lambda_*;\mathds{Z}_2)$ is the cohomology class of~\eqref{e:kappa}, the restriction $\kappa^2|_{G}$ is a generator of $H^2(G;\mathds{Z}_2)\cong\mathds{Z}_2$. The Gysin sequence of the restricted circle bundle $\pi:\mathcal{E}|_G\to G$ reads
\begin{align*}
\xymatrix@R=8pt@C-6pt{
H^3(G;\mathds{Z}_2) \ar[r]^{\pi^*} \ar@{=}[d] &
H^3(\mathcal{E}|_G;\mathds{Z}_2) \ar[r]^{\pi_*} &
H^2(G;\mathds{Z}_2) \ar[r] \ar@{=}[d] &
H^4(G;\mathds{Z}_2) \ar@{=}[d]\\
0 & & \mathds{Z}_2 & 0
}
\end{align*}
and therefore we have an isomorphism $\displaystyle\pi_*:H^3(\mathcal{E}|_G;\mathds{Z}_2)\toup^{\cong} H^2(G;\mathds{Z}_2)$. The evaluation map restricts to a homeomorphism $\mathrm{ev}|_{\mathcal{E}|_{G}}:\mathcal{E}|_{G}\to\mathrm{P}\mathrm{T} S^2$. Therefore $\kappa^2|_{G}=\pi_*\mathrm{ev}|_{\mathcal{E}|_{G}}^*\nu$, where $\nu$ is the generator of $H^3(\mathrm{P}\mathrm{T} S^2;\mathds{Z}_2)$. This, together with the commutative diagram
\begin{align*}
\xymatrix{
\mathcal{E}|_G\, \ar@{^{(}->}[r] \ar[d]^{\pi} &
\mathcal{E} \ar[r]^{\mathrm{ev}\ \ \ \ } \ar[d]^{\pi} &
\mathrm{P}\mathrm{T} S^2\\
G\, \ar@{^{(}->}[r] & \Lambda &
}
\end{align*}
readily implies that $\kappa^2=\pi_*\mathrm{ev}^*\nu\neq 0$ in $H^2(\Lambda;\mathds{Z}_2)$.
Now, assume by contradiction that there exists $y=(x,[v])\in\mathrm{P}\mathrm{T} S^2$ such that no simple closed geodesic of $(S^2,g)$ is tangent to $v$. The subset
\begin{align*}
U:=\pi(\mathrm{ev}^{-1}(\mathrm{P}\mathrm{T} S^2\setminus\{y\}))
\end{align*}
is therefore an open neighborhood of the set of simple closed geodesics with length $\ell$. We denote by $j:U\hookrightarrow \Lambda$ and $\widetilde j:\mathcal{E}|_U\hookrightarrow \mathcal{E}$ the inclusions, so that we have the commutative diagram
\begin{equation}\label{e:final_diagram}
\begin{split}
\xymatrix{
\mathcal{E}|_U\, \ar@{^{(}->}[r]^{\widetilde j} \ar[d]^{\pi} &
\mathcal{E} \ar[r]^{\mathrm{ev}\ \ \ \ } \ar[d]^{\pi} &
\mathrm{P}\mathrm{T} S^2\\
U\, \ar@{^{(}->}[r]^j & \Lambda &
}
\end{split}
\end{equation}
Notice that $\mathrm{ev}\circ\widetilde j(\mathcal{E}|_U)=\mathrm{P}\mathrm{T} S^2\setminus\{y\}$. Since $\ell=\ell_1=\ell_2=\ell_3$, Theorem~\ref{t:LS} implies that $j^*\kappa^2\neq 0$ in $H^2(U;\mathds{Z}_2)$. This, together with~\eqref{e:final_diagram}, implies that
\begin{align*}
0\neq j^*\kappa^2 = j^*\pi_*\mathrm{ev}^*\nu = \pi_*\widetilde{j}^*\mathrm{ev}^*\nu = \pi_*(\mathrm{ev}\circ\widetilde{j})^*\nu,
\end{align*}
and therefore $(\mathrm{ev}\circ\widetilde{j})^*\nu\neq0$ in $H^3(\mathcal{E}|_{U};\mathds{Z}_2)$. In particular, the homology homomorphism $(\mathrm{ev}\circ\widetilde{j})_*:H_3(\mathcal{E}|_U;\mathds{Z}_2)\to H_3(\mathrm{P}\mathrm{T} S^2;\mathds{Z}_2)$ is non-zero, hence surjective, and we conclude that the map $\mathrm{ev}\circ\widetilde{j}$ must be surjective as well: a map whose image misses the point $y$ factors through $\mathrm{P}\mathrm{T} S^2\setminus\{y\}$, whose $H_3(\,\cdot\,;\mathds{Z}_2)$ vanishes, and would therefore induce the zero map on $H_3$. This contradicts $\mathrm{ev}\circ\widetilde j(\mathcal{E}|_U)=\mathrm{P}\mathrm{T} S^2\setminus\{y\}$.
\end{proof}
Theorem~\ref{t:ellipsoid} is a consequence of the following statement.
\begin{thm}
Let $(S^2,g)$ be a Riemannian 2-sphere whose Lusternik-Schnirelmann minmax values satisfy $\ell_1=\ell_2$ or $\ell_2=\ell_3$. Then every $x\in S^2$ lies on a simple closed geodesic of $(S^2,g)$ of length $\ell_2$.
\end{thm}
\begin{proof}
Assume by contradiction that $\ell_1=\ell_2$ or $\ell_2=\ell_3$, but there exists $x\in S^2$ such that no simple closed geodesic of length $\ell_2$ passes through $x$. The subset
$U:=\big\{\gamma\in\Lambda_*\ \big|\ x\not\in\gamma\big\}$
is therefore an open neighborhood of the set of simple closed geodesics of length $\ell_2$. We claim that $U$ is contractible. Indeed, consider a smooth deformation retraction $r_t:S^2\setminus\{x\}\to S^2\setminus\{x\}$ such that $r_0=\mathrm{id}$, $r_t$ is an embedding for each $t\in[0,1)$, and $r_1(y)=-x$ for all $y\in S^2\setminus\{x\}$, where $-x$ denotes the point antipodal to $x$. This induces a deformation retraction $R_t:U\to U$, $R_t(\gamma)=r_t(\gamma)$, whose time-1 map $R_1$ contracts $U$ to the constant loop at $-x$. In particular, $H^1(U;\mathds{Z}_2)=0$. However, if $\kappa\in H^1(\Lambda_*;\mathds{Z}_2)$ is the cohomology class of~\eqref{e:kappa} and $j:U\hookrightarrow\Lambda_*$ denotes the inclusion, Theorem~\ref{t:LS} implies that $j^*\kappa\neq 0$ in $H^1(U;\mathds{Z}_2)$, which gives a contradiction.
\end{proof}
https://arxiv.org/abs/1610.03523 | Noncommutative potential theory | We propose to view hermitian metrics on trivial holomorphic vector bundles $E\to\Omega$ as noncommutative analogs of functions defined on the base $\Omega$, and curvature as the notion corresponding to the Laplace operator or $\partial\overline\partial$. We discuss noncommutative generalizations of basic results of ordinary potential theory, mean value properties, maximum principle, Harnack inequality, and the solvability of Dirichlet problems. | \section{Introduction}
Traditional potential theory is the study of the Laplace operator, harmonic and subharmonic functions, and related notions.
The Laplacian, while an analytic object, has geometric content as well:\ the curvature of a holomorphic line bundle over a Riemann surface is expressed through it.
By noncommutative potential theory we mean the study of hermitian metrics on holomorphic vector bundles of higher rank, in the spirit of traditional potential theory:\ through maximum principles, averaging properties, the Dirichlet problem, regularization, and more.
Although in complex geometry chiefly vector bundles of finite rank occur, one still encounters there and elsewhere---e.g., in harmonic analysis or mathematical physics---bundles with Hilbert or Banach space fibers, or even more general bundle--like objects, see e.g.~[ADW,B,L3,LSz,Rc].
Accordingly, in this paper we will discuss vector bundles with Hilbert space fibers and hermitian metrics on them.
At the same time, we will focus on trivial Hilbert bundles, typically over open subsets $\Omega$ of $\mathbb C$.
Some of our results clearly have implications for general vector bundles (and higher dimensional bases), but the analogy between the Laplacian or $\partial\overline\partial$ and curvature in a general vector bundle is clearest if the bundle is locally trivialized first.
We shall write $\text{Hom}(V,W)$ for the space of continuous linear maps between Banach spaces $V$ and $W$, End$V$ for Hom$(V,V)$, and $\text{GL}(V)\subset$ End$V$ for the group of invertible elements.
Let $(V,\langle ,\rangle)$ be a complex Hilbert space and $\Omega\subset\mathbb C^n$ open.
A hermitian metric on the trivial bundle $\Omega\times V\to\Omega$ is a function $h\colon\Omega\times V\times V\to\mathbb C$ given by an operator valued function $P$ on $\Omega$
\begin{equation}
h(z,u,v)=\langle P(z) u,v\rangle,\qquad z\in\Omega,\ u,v\in V.
\end{equation}
For the sake of simplicity in this introduction we restrict ourselves to smooth metrics, meaning that $P=P^*\colon\Omega\to\text{End}\, V$ is a $C^\infty$ map taking values in positive invertible operators; but in the main body of the paper we will deal with rougher metrics as well.
We write $\text{End}^+ V\subset\text{End}\, V$ for the cone of positive invertible operators.
The curvature of $h$ or of $P$ is the $\text{End} \,V$ valued $(1,1)$ form on $\Omega$
\begin{eqnarray}
R=R^h=R^P&=&\overline\partial (P^{-1}\partial P)=P^{-1}\overline\partial\partial P-P^{-1}\overline\partial P\wedge P^{-1}\partial P\\
&=&P^{-1}\sum^n_{\mu,\nu=1}\Bigl( P_{\overline z_\mu} P^{-1} P_{z_\nu}-P_{\overline z_\mu z_\nu} \Bigr) dz_\nu\wedge d\overline z_\mu.\nonumber
\end{eqnarray}
When $\dim V=1$ and $P$ is multiplication by a positive $p\in C^\infty(\Omega)$, $R=\overline\partial\partial\log p$, and zero curvature corresponds to (pluri)harmonicity.
Our first result is about solving a noncommutative Dirichlet problem on the disc in $\mathbb C$ for general metrics.
\begin{thm}Suppose $\Omega=\{z\in\mathbb C\colon |z| < 1\}$ and $F\colon\partial\Omega\to\text{End}^+ V$ is continuous for the norm topology on $\text{End}^+V\subset\text{End}\, V$.
Then there is a unique continuous $P\colon\overline\Omega\to\text{End}^+V$ that is smooth over $\Omega$,
\begin{eqnarray*}
P&=&F\text{ on }\ \partial\Omega,\\
R^P&=&0\ \text{ on }\ \Omega.
\end{eqnarray*}
Over $\Omega$ one can write $P=H^*H$ with a holomorphic $H\colon\Omega\to \text{GL}(V)$.
If $F$ takes values in a unital $C^*$--subalgebra $\mathcal A$ of $\text{End}\, V$, then so will $P$, and $H$ can be taken with values in $\mathcal A$.
\end{thm}
Related results, when $\mathcal A=\text{End} V$, have been known for quite a while.
When $\dim V<\infty$, Coifman and Semmes solved Dirichlet problems not only for hermitian but for Finsler metrics as well.
Even earlier, Masani and Wiener solved a Dirichlet problem when $V$ is a finite dimensional Hilbert space, and $F,\log \|F\|$ are only integrable.
Then the conclusion is necessarily weaker than in our theorem.
Devinatz, Douglas, Helson, Foia\c s and Sz.--Nagy subsequently extended this latter result to separable $V$.
This author considered Dirichlet problems with boundary values more regular than continuous, again when $\dim V<\infty$.
See [CS,De,Do,H,L1,SzF,WM].
When the base $\Omega$ is one dimensional, semipositivity/negativity of the curvature simply means
\[
P_{\overline z} P^{-1} P_z-P_{ z\overline z}\geq 0,\text{ respectively }\leq 0,
\]
and we next turn to mean value properties of such metrics.
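For orientation, here is the rank one case spelled out (a routine check, not needed in what follows): if $\dim V=1$ and $P$ is multiplication by a positive $p\in C^2(\Omega)$, then
\[
P_{\overline z} P^{-1} P_z-P_{z\overline z}={p_z p_{\overline z}\over p}-p_{z\overline z}=-p\,(\log p)_{z\overline z},
\]
so seminegative curvature amounts to subharmonicity of $\log p$, and semipositive curvature to superharmonicity of $\log p$.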
\begin{thm}A smooth hermitian metric as in (1.1) has seminegative curvature if and only if for every disc $\{z\in\mathbb C\colon |z-a|\leq r\}$ contained in $\Omega$
\begin{equation}
\int_{|z-a|=r} P|dz|-\int_{|z-a|=r} Pdz \left(\int_{|z-a|=r} P|dz|\right)^{-1}\int_{|z-a|=r} Pd\overline z\geq 2\pi r P(a).
\end{equation}
(Here $|dz|$ refers to integration with respect to arc length.) If the curvature is seminegative, the left hand side of (1.4), divided by $r$, is an increasing function of $r$.
\end{thm}
This is clearly analogous to the mean value property of subharmonic functions.
But even when $\dim V=1$ the two are different, for then (1.4) boils down to characterizing subharmonicity of $u=\log p$ in terms of integrals of $p$, rather than of $\log p$.
In Section 4 we will also prove a related characterization of semipositive curvature (but it is {\sl not} (1.3) with the inequality sign reversed).
Behind these results is a maximum principle.
Consider an open $\Omega\subset\mathbb C^n$ and hermitian metrics $h,k$ on trivial holomorphic vector bundles $E=\Omega\times V\to\Omega$, $F=\Omega\times W\to\Omega$.
Write $h_z(u,v)$ for the inner product $h(z,u,v)$ on $V$, for $z\in\Omega$, and similarly $k_z$.
A holomorphic homomorphism $A\colon E\to F$ can be thought of as a holomorphic map $\Omega\to\text{Hom} (V,W)$.
Its norm $\|A\|\colon \Omega\to [0,\infty)$ is obtained by taking for each $z\in\Omega$ the norm of the operator $A(z)\colon (V,h_z)\to (W,k_z)$.
\begin{thm}If $A$ decreases curvature in the sense that for $z\in\Omega,\xi\in T^{1,0}_z\Omega$
\[
{k_z (R^k (\xi,\overline\xi)Av,Av)\over k_z(Av,Av)} \leq {h_z(R^h(\xi,\overline\xi)v,v)\over h_z(v,v)} ,\quad v\in V\text{ such that }A(z)v\neq 0,
\]
then $\log \|A\|$ is plurisubharmonic.
In particular, $\log \|A\|$ satisfies the maximum principle.
\end{thm}
This result is not new, it is a special case of what we proved in [L2].
Related results had been known earlier:\ Coifman and Semmes proved an analogous result for Finsler metrics when $\dim V<\infty$, and Berndtsson proved the infinite rank case when $R^h$ or $R^k=0$, see [BK,CS].
As said, various results in the paper, even if formulated only for bundles over one dimensional bases, have obvious generalizations to higher dimensional bases. But not all these generalizations are satisfactory. For example, an integral characterization of Nakano semipositivity/negativity in the spirit of Theorems 1.2 (or 4.2) and 4.7 is lacking. Yet such a characterization would be useful to study Nakano curvature of uniform limits of, say, Nakano semipositively curved hermitian metrics.
I am grateful to Kuang--Ru Wu for his questions and critical remarks concerning the first version of this paper.
\section{Smoothness classes of hermitian metrics}
What one means by a hermitian metric of class $C^k$ on a vector bundle of finite rank is unambiguous, but in bundles of infinite rank several definitions are possible depending on the topology one uses on spaces of operators.
Since in matters of smoothness there is no difference between hermitian metrics and general sesquilinear forms, in discussing smoothness classes we will deal with the latter.
We will also allow the base to be a subset of real Euclidean space, and later a smooth manifold.
So, let $\Omega\subset\mathbb R^m$ be open and $(V,\langle \ , \ \rangle)$ a complex Hilbert space.
A sesquilinear form on the bundle $E=\Omega\times V\to\Omega$ is a function $h\colon\Omega\times V\times V\to\mathbb C$ such that
\[
h_x= h(x,\cdot,\cdot)\colon V\times V\to\mathbb C
\]
is a continuous sesquilinear form for each $x\in\Omega$.
Such an $h$ can be represented as
\begin{equation}
h_x(v,w)=\langle P(x)v,w\rangle,\qquad\text{with }P\colon\Omega\to\text{End}\, V.
\end{equation}
The weakest notion of $C^k$ smoothness, $k=0,1,\ldots,\infty$, is obtained by requiring that $h_x(v,w)=\langle P(x)v,w\rangle$ should be a $C^k$ function of $x\in\Omega$ when $v,w\in V$ are fixed.
If this is so, we say $h$ or $P$ are $C^k_{\text{weak}}$, or that $P\in C^k_{\text{weak}}(\Omega,\text{End} \,V)$.
The strongest requirement is that the map $P\colon\Omega\to\text{End}\, V$ should be $C^k$, when $\text{End}\, V$ is endowed with the operator norm; we then say $h$ or $P$ are $C^k_{\text{op}}$, or that $P\in C_{\text{op}}^k(\Omega,\text{End}\, V)$.
When $k=0$, we just write $C_{\text{weak}}$, $C_{\text{op}}$.
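To see that already for $k=0$ the two notions differ, here is a simple example, included only for illustration. Let $\Omega=\mathbb R$, let $V=\ell^2$ with standard orthonormal basis $(e_n)_{n\ge1}$, fix $\chi\in C^\infty_c(\mathbb R)$ with $0\le\chi\le1$ and $\chi(0)=1$, and let $P(x)$ be the diagonal operator $P(x)e_n=\chi(nx)e_n$. For fixed $v,w\in V$, dominated convergence shows that $\langle P(x)v,w\rangle=\sum_n\chi(nx)v_n\overline w_n$ depends continuously on $x$, so $P\in C_{\text{weak}}(\mathbb R,\text{End}\, V)$; on the other hand $\|P(x)-P(0)\|=\sup_n|\chi(nx)-1|=1$ for every $x\neq 0$, so $P$ is not $C_{\text{op}}$.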
In our context, as elsewhere, H\"older classes $C^k$ with nonintegral $k$ behave better than those with integral $k$.
For one thing, their weak and operator norm versions turn out to coincide.
Because of this the notation will not indicate which topology on $\text{End}\, V$ is used.
Their definition is as follows.
Write $\lfloor k\rfloor$ for the integer part of $k\in\mathbb R$.
If $k\in (0,\infty)$ is nonintegral, we say that $h$ or $P$ is $C^k$, or $P\in C^k(\Omega,\text{End}\, V)$, if for any $v,w\in V$ the function $x\mapsto h_x(v,w)$ has partials of order $\lfloor k\rfloor$ on $\Omega$, and these partials are locally H\"older continuous with exponent $k-\lfloor k\rfloor$.
If $k=0,1,\ldots$ and the partials of order $k$ are locally Lipschitz continuous, i.e., H\"older continuous with exponent 1, we say $h$ and $P$ are $C^{k,1}$, or that $P\in C^{k,1}(\Omega,\text{End}\, V)$.
\begin{prop}Let $k\in [0,\infty)$.
If $P\in C^k_{\text{weak}} (\Omega,\text{End}\, V)$ (when $k$ is integral) or $P\in C^k(\Omega,\text{End}\, V)$ (otherwise), and $\varphi,\psi\colon \Omega\to V$ are $C^k$ functions in the norm topology of $V$, then $\langle P\varphi,\psi\rangle\colon\Omega\to\mathbb C$ is also $C^k$.
Furthermore, if $k\in (0,\infty)$ is nonintegral and $P\in C^k (\Omega,\text{End}\, V)$, then $P\in C_{\text{op}}^{\lfloor k\rfloor} (\Omega,\text{End}\, V)$ and its partials of order $\lfloor k\rfloor$ are locally H\"older continuous with exponent $k-\lfloor k\rfloor$.
\end{prop}
Thus, if $Q$ is a partial of $P$ of order $\lfloor k\rfloor$, and $K\subset\Omega$ is compact, there is a constant $C$ such that
\[
\|Q(x)-Q(y)\|\leq C|x-y|^{k-\lfloor k\rfloor},\quad x,y\in K.
\]
Here $||\,||$ denotes the operator norm on $\text{End} V$; but below we also use it to denote the norm on $V$ induced by the inner product.
\begin{prop}Let $k=0,1,\ldots$.
If $P\in C^{k,1}(\Omega,\text{End}\, V)$ then $P$ is $C^k_{\text{op}}$, and its partials of order $k$ are locally Lipschitz continuous.
\end{prop}
Since $C^k_{\text{weak}}\subset C^{k-1,1}$ for $k\in\mathbb N$ (for fixed $v,w$ the partials of order $k-1$ of $\langle Pv,w\rangle$ are $C^1$, hence locally Lipschitz), Proposition 2.2 implies
\begin{prop}$C^k_{\text{weak}}(\Omega,\text{End}\, V)\subset C_{\text{op}}^{k-1}(\Omega,\text{End}\, V)$ when $k\in\mathbb N$, and so $C^\infty_{\text{weak}}=C_{\text{op}}^\infty$.
\end{prop}
\begin{proof}We will only prove Proposition 2.1, the proof of Proposition 2.2 is analogous.
(a)\ Suppose $k$ is nonintegral.
It suffices to prove the second statement in Proposition 2.1, that we do by induction on $\lfloor k\rfloor$.
Assume first $0<k<1$.
Given sequences $x_j \neq y_j\in\Omega$ with limits $x,y\in\Omega$, for fixed $v,w\in V$ the sequence
\[
\left\langle {P(x_j)-P(y_j)\over |x_j-y_j|^k}\ v,w\right\rangle,\quad j=1,2,\ldots
\]
is bounded.
Two applications of the principle of uniform boundedness then give that $\|P(x_j)-P(y_j)\|/|x_j-y_j|^k$ is bounded, whence the claim follows.
Next suppose that $l\in\mathbb N$, the proposition has been proved for exponents $<l$, and $P$ is $C^k$ with $k\in (l,l+1)$.
Write $x_\mu$ for the coordinates on $\Omega \ (\mu=1,2,\ldots,m)$, and $\partial_\mu$ for $\partial/\partial x_\mu$.
For fixed $v,w\in V$ representing $\partial_\mu \langle Pv,w\rangle$ as the limit of difference quotients, the principle of uniform boundedness again applies and gives a $Q_\mu\colon\Omega\to\text{End}\, V$ such that $\partial_\mu \langle Pv,w\rangle=\langle Q_\mu v,w\rangle$.
Thus $\langle Q_\mu v,w\rangle$ is $C^{k-1}$, and by the inductive hypothesis $Q_\mu$ has partials of order $l-1$, locally H\"older continuous with exponent $k-1-(l-1)=k-\lfloor k\rfloor$.
In particular, $Q_\mu\in C_{\text{op}}(\Omega,\text{End} V)$.
Hence
\begin{multline*}
\int_a^b Q_\mu (x_1,\ldots,x_{\mu-1},t,x_{\mu+1},\ldots)dt=\\
P(x_1,\ldots,x_{\mu-1},b,x_{\mu+1},\ldots)-P(x_1,\ldots,x_{\mu-1},a,x_{\mu+1},\ldots),
\end{multline*}
whenever the path over which we integrate is in $\Omega$.
Thus $Q_\mu =\partial_\mu P$, partial understood in the norm topology.
But then the partials of $P$ of order $l=\lfloor k\rfloor$ are locally H\"older continuous with exponent $k-\lfloor k\rfloor$, as claimed.
(b)\ Next suppose $P\in C_{\text{weak}}(\Omega,\text{End}\, V)$.
Given a sequence $x_j\in\Omega$ with limit $x\in\Omega$, again by the principle of uniform boundedness $\sup_j\|P(x_j)\|<\infty$.
Hence
\begin{multline*}
\langle P(x_j)\varphi(x_j),\psi(x_j)\rangle-\langle P(x)\varphi(x),\psi(x)\rangle=\langle (P(x_j)-P(x))\varphi(x),\psi(x)\rangle+\\
\langle P(x_j)(\varphi(x_j)-\varphi(x)),\psi(x)\rangle+\langle P(x_j)\varphi(x_j), \psi(x_j)-\psi(x)\rangle\to 0
\end{multline*}
as $j\to\infty$, and $\langle P\varphi,\psi\rangle$ is indeed continuous.
(c)\ Finally, suppose $k\in\mathbb N$ and $P$ is $C^k_{\text{weak}}$.
We assume, inductively, that the claim in Proposition 2.1 is true when $k$ is replaced by $k-1$.
As in part (a) of the proof, there are weak partials $Q_\mu\colon\Omega\to\text{End}\, V$ such that $\partial_\mu\langle Pv,w\rangle=\langle Q_\mu v,w\rangle$ for all $v,w\in V$.
If $\varphi,\psi\colon\Omega\to V$ are $C^k$, or even just $C^1$, we claim that $\langle P\varphi,\psi\rangle$ has partial derivatives
\begin{equation}
\partial_\mu\langle P\varphi,\psi\rangle=\langle Q_\mu \varphi,\psi\rangle+\langle P\partial_\mu\varphi,\psi\rangle + \langle P\varphi,\partial_\mu\psi\rangle.
\end{equation}
With $x\in\Omega$ and $\xi=(\xi_1,\ldots,\xi_m)\in\mathbb R^m$ small we can write
\[
\varphi(x+\xi)=\varphi(x)+\sum^m_{\nu=1} a_\nu (x,\xi)\xi_\nu,\qquad \psi(x+\xi)=\psi(x)+\sum^m_{\nu=1} b_\nu (x,\xi)\xi_\nu,
\]
where $a_\nu$, $b_\nu$ are continuous functions of $x,\xi$.
Then
\begin{multline*}
\langle P(x+\xi)\varphi(x+\xi),\psi(x+\xi)\rangle=\langle P(x+\xi)\varphi(x),\psi(x)\rangle\\
+\sum_\nu\xi_\nu \langle P(x+\xi)\varphi(x),b_\nu (x,\xi)\rangle+\sum_\nu\xi_\nu
\langle P(x+\xi) a_\nu (x,\xi),\psi(x+\xi)\rangle.
\end{multline*}
The first term on the right has $\partial/\partial\xi_\mu$ partials by assumption, equal to $\langle Q_\mu (x+\xi)\varphi(x),\psi(x)\rangle$.
As to the other inner products on the right, by part (b) of this proof they are continuous functions of $x$ and $\xi$.
We conclude that all terms on the right have $\partial/\partial\xi_\mu$ partials at $\xi=0$; these partials add up to
\[
\langle Q_\mu (x)\varphi(x),\psi (x)\rangle+\langle P(x)\varphi(x),\partial_\mu \psi(x)\rangle + \langle P(x)\partial_\mu\varphi (x),\psi(x)\rangle,
\]
which proves (2.2).
Now $Q_\mu$ is $C^{k-1}_{\text{weak}}$.
If $\varphi,\psi$ are $C^k$, by the inductive assumption the right hand side of (2.2) is $C^{k-1}$, and $\langle P\varphi,\psi\rangle$ is indeed $C^k$.
\end{proof}
A variant of Proposition 2.1 concerning upper semicontinuity (u.s.c.) also holds:
\begin{prop}Suppose $P=P^*\colon\Omega\to\text{End}\, V$ is bounded below and is weakly u.s.c~in the sense that $\langle Pv,v\rangle$ is u.s.c.~for $v\in V$.
With any $\varphi\in C(\Omega,V)$ then $\langle P\varphi,\varphi\rangle$ is also u.s.c.
\end{prop}
\begin{proof}Upon adding a constant to $P$, we can arrange that $P\geq 0$.
Let $Q=P^{1/2}$.
Given a sequence $x_j\in\Omega$ with limit $x\in\Omega$, for any $v\in V$ the sequence $\|Q(x_j)v\|^2=\langle P(x_j)v,v\rangle$ is bounded.
Hence $\|Q(x_j)\|$ is bounded, and so is $\|P(x_j)\|$.
As before,
\begin{multline*}
\langle P(x_j)\varphi(x_j),\varphi(x_j)\rangle-\langle P(x)\varphi(x),\varphi(x)\rangle=\langle (P(x_j)-P(x))\varphi(x),\varphi(x)\rangle\\
+\langle P(x_j)(\varphi(x_j)-\varphi(x)),\varphi(x)\rangle+\langle P(x_j)\varphi(x_j),\varphi(x_j)-\varphi(x)\rangle,
\end{multline*}
whence
\[
\limsup_{j\to\infty} \langle P(x_j)\varphi(x_j),\varphi(x_j)\rangle\leq \langle P(x)\varphi(x),\varphi(x)\rangle,
\]
as claimed.
\end{proof}
We will also need the following result.
Let $(W,\ \|\ \|)$ be a Banach space.
\begin{prop}Fix $k\in\mathbb N$ and consider a sequence $P_j\colon\Omega\to W$ of functions, $C^k$ in the norm topology.
Assume that $P_j(x)$ converges in norm, uniformly for $x\in\Omega$, and that the partials of $P_j$, of order $\leq k$, are uniformly equicontinuous on $\Omega$.
Then these partials converge in norm, locally uniformly on $\Omega$.
\end{prop}
\begin{proof}First let $k=1$; say, we want to prove that $\partial_1 P_j$ converges locally uniformly.
Given a compact $K\subset\Omega$ and $\varepsilon>0$, choose $0<\delta<\text{dist} (K,\partial\Omega)$ so that $\|\partial_1 P_j(x)-\partial_1 P_j(y)\|<\varepsilon/4$ when $j\in\mathbb N$, $x\in K$, and $|x-y|\leq\delta$.
When $x=(x_1,\ldots,x_m)\in K$
\begin{multline}
\Big\|{P_j(x_1+\delta,x_2,\ldots,x_m)-P_j(x)\over \delta}-\partial_1 P_j(x)\Big\|=\\
{1\over\delta}\Big\| \int_0^\delta \big(\partial_1 P_j (x_1+t,x_2,\ldots,x_m)-\partial_1 P_j(x)\big)dt\Big\| < {\varepsilon\over 4}.
\end{multline}
Next choose $j_0$ so that for $i,j>j_0$ and $y\in\Omega$
\[
\|P_i(y)-P_j(y)\|<\delta\varepsilon/4.
\]
By (2.3), if $x\in K$, $\xi=(x_1+\delta,x_2,\ldots,x_m)$, and $i,j>j_0$
\[
\|\partial_1 P_i(x)-\partial_1 P_j(x)\|<\delta^{-1} (\|P_i(\xi)-P_j(\xi)\|+\|P_i(x)-P_j(x)\|)+\varepsilon/2<\varepsilon.
\]
Therefore $\partial_1 P_j$ indeed converges locally uniformly, as do all other first partials.
The case $k>1$ follows by induction.
\end{proof}
All the smoothness classes that we have introduced are invariant under $C^\infty$ diffeomorphisms $\Omega\to\Omega'$.
For this reason they make sense over differential manifolds $M$ as well.
The corresponding spaces of functions will be denoted $C^k_{\text{weak}} (M,\text{End}\, V)$, etc.
They all have variants $C^k(\overline M,\text{End}\, V)$, etc. with $\overline M$ a compact manifold with boundary.
For example, when $k\in (0,\infty)$ is nonintegral, we fix a finite open cover $\overline M=U_1\cup\ldots\cup U_s$ such that each $\overline U_j$ is contained in a coordinate chart, and let $C^k(\overline M,\text{End}\, V)$ consist of continuous $P\colon\overline M\to\text{End}\, V$ whose restrictions $P|U_j\cap\text{ int }M$ have partials of order $\lfloor k\rfloor$, and these partials $Q$ satisfy
\begin{equation}
\sup \left\{ {\|Q(x)-Q(y)\|\over |x-y|^{k-\lfloor k\rfloor}}\quad\colon\quad x,y\in U_j\cap\text{ int }M\right\}<\infty
\end{equation}
for all $j$. The partials, $x-y$, and its length $|x-y|$ are computed using the coordinates on $\overline U_j$.
This space turns out to be a Banach space if the norm of $P$ is defined as the maximum of $\sup_{x\in\overline M} \|P(x)\|$ and the quantities appearing in (2.4), for all choices $j=1,\ldots,s$ and partial derivatives $Q$. Multiplication in $C^k(\overline M,\text{End}\, V)$ is then continuous, and by rescaling the norm we can turn $C^k(\overline M,\text{End}\, V)$ into a Banach algebra.
When $S\subset\text{End}\, V$, we write $C^k(M,S)$ etc.~for the space of $P\in C^k(M,\text{End}\, V)$ etc.~that take values in $S$.
\section{A Dirichlet problem}
Given a complex Hilbert space $V$, consider a unital $C^*$ subalgebra $\mathcal A\subset\text{End}\, V$.
The most important case is $\mathcal A=\text{End}\, V$.
Denote the unit in $\mathcal A$ by $\mathbf 1$.
We write $\mathcal A^\times \subset\mathcal A$ for the group of invertible elements and $\mathcal A^+\subset\mathcal A^\times$ for the cone of (self adjoint) positive elements.
This latter is not completely consistent with usage in $C^*$ algebra theory, where $\mathcal A^+$ would contain not necessarily invertible elements as well. An equivalent definition of our $\mathcal A^+$ would be the set of self adjoint $S\in\mathcal A$ that satisfy $S\ge\varepsilon\text{Id}$ with some $\varepsilon>0$ (cf. [C, p. 243, Exercise 8]).
In this section $\Omega=\{z\in\mathbb C\colon |z| < 1\}$.
\begin{thm}Given $F\in C_{\text{op}}(\partial\Omega,\mathcal A^+)$, there is a holomorphic $H\colon\Omega\to\mathcal A^\times$ such that the function
\[
P=\begin{cases}
H^*H&\text{ on $\Omega$}\\
F&\text{on $\partial\Omega$}\end{cases}
\]
is in $C_{\text{op}} (\overline\Omega,\mathcal A^+)$.
If a holomorphic $K\colon\Omega\to\mathcal A^\times$ also solves the same problem, then $K=UH$ with a unitary $U\in\mathcal A$.
\end{thm}
From this the existence part of Theorem 1.1 follows because on $\Omega$
\begin{equation}
R^P=\overline\partial (P^{-1} \partial P)=\overline\partial (H^{-1} \partial H)=0.
\end{equation}
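In more detail (a one line computation): since $H$ is holomorphic, $\partial H^*=(\overline\partial H)^*=0$, so
\[
P^{-1}\partial P=(H^*H)^{-1}\bigl(\partial H^*\, H+H^*\partial H\bigr)=H^{-1}(H^*)^{-1}H^*\partial H=H^{-1}\partial H,
\]
and $\overline\partial(H^{-1}\partial H)=0$ because $H^{-1}\partial H$ is holomorphic.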
In the theorem $H$ itself need not extend continuously to $\overline\Omega$, not even when $\mathcal A=\mathbb C$.
A continuous $f\colon\partial\Omega\to\mathbb R$ whose conjugate function is discontinuous provides a counterexample $F=e^f$.
To prepare the proof we start with the following simple consequence of the maximum principle, our Theorem 1.3 (cf.~[L2, Theorem 1.1 or Theorem 2.4]).
\begin{lem}(a)\ Suppose $P, Q\in C_{\text{op}} (\overline\Omega,\mathcal A^+)$ are in $C_{\text{op}}^2$ over $\Omega$, and their curvature there is 0.
If $P\geq Q$ on $\partial\Omega$, then the same holds on $\Omega$.
(b)\ Suppose $H,K\colon\Omega\to\mathcal A^\times$ are holomorphic, and $H^*H$, $K^*K$ extend to mappings in $C_{\text{op}}(\overline\Omega,\mathcal A^+)$.
If $H^*H\geq K^*K$ on $\partial\Omega$, then the same holds on $\Omega$.
\end{lem}
\begin{proof}(a)\ Let $h,k$ be the hermitian metrics on $\Omega\times V\to\Omega$ determined by $P,Q$.
By Theorem 1.3 the norm of the identity map $A(z)=\text{Id}\colon (V,h_z)\to (V,k_z)$ is logarithmically subharmonic.
Since $\|A(z)\|\leq 1$ when $z\in\partial\Omega$, the maximum principle for subharmonic functions implies $\|A(z)\|\leq 1$ for $z\in\Omega$ as well.
(b)\ This follows by setting $P=H^*H$, $Q=K^*K$, both of which have zero curvature, see (3.1).
\end{proof}
\begin{cor}For $j\in\mathbb N$ let $H_j\colon\Omega\to\mathcal A^\times$ be holomorphic.
Suppose that $H_j^*H_j$ extend to mappings in $C_{\text{op}}(\overline\Omega,\mathcal A^+)$.
If $H_j^* H_j|\partial\Omega$ converge in $C_{\text{op}}(\partial\Omega,\mathcal A^+)$, then $H_j^*H_j$ converge in $C_{\text{op}}(\overline\Omega,\mathcal A^+)$.
\end{cor}
\begin{proof}There are positive numbers $a,b$ such that $a\,\text{Id}\le H_j^*H_j\le b\,\text{Id}$ on $\partial\Omega$ for all $j$, whence by Lemma 3.2b also on $\Omega$. In particular, this implies that for any $\varepsilon>0$ there is a $j_0$ such that for $i,j>j_0$
\[
(1-\varepsilon) H_i^* H_i\leq H_j^* H_j\leq (1+\varepsilon) H_i^* H_i\qquad\text{ on }\partial\Omega.
\]
By Lemma 3.2b again, the same holds on $\overline\Omega$, whence the claim.
\end{proof}
Next we prove a perturbative result in the spirit of Theorem 3.1.
\begin{lem}Suppose $k\in (0,\infty)$ is not an integer.
Any constant map $F_0\colon\partial\Omega\to\mathcal A^+$ has a neighborhood ${\mathcal U}\subset C^k (\partial\Omega,\mathcal A^+)$ such that whenever $F\in{\mathcal U}$, there is an $H\in C^k (\overline\Omega,\mathcal A^\times)$ that is holomorphic on $\Omega$ and satisfies $H^*H=F$ on $\partial\Omega$.
\end{lem}
\begin{proof}It suffices to prove when $F_0\equiv\mathbf 1$.
Consider
\begin{eqnarray*}
A&=&\{h\in C^k (\overline\Omega,\mathcal A)\colon h|\Omega\text{ is holomorphic}\},\\
B&=&\{f\in C^k(\partial\Omega,\mathcal A)\colon f=f^*\},
\end{eqnarray*}
and let $A_0\subset A$, $B_0\subset B$ consist of maps that vanish at $1\in\partial\Omega$.
With their $C^k$ norms these are Banach spaces. As indicated in Section 2, the $C^k$ norms on $A$, suitably normalized, turn $A$ into a Banach algebra; $A_0$ is a subalgebra.
We claim that the smooth map
\begin{equation}
A_0\ni h\mapsto (\mathbf 1+h^*) (\mathbf 1+h)|\partial\Omega-\mathbf 1=(h^*+h+h^*h)|\partial\Omega\in B_0
\end{equation}
restricts to a diffeomorphism between certain neighborhoods of $0\in A_0$ and $0\in B_0$.
To justify the claim note that the linearization of (3.2) at $h=0$ is the map
\begin{equation}
A_0\ni h\mapsto (h^*+h)|\partial\Omega\in B_0.
\end{equation}
This is clearly injective:\ \ if the harmonic function $h^*+h$ vanishes on $\partial\Omega$, then it vanishes on all of $\overline\Omega$, whence $h=-h^*$ is both holomorphic and antiholomorphic on $\Omega$.
Therefore $h$ is constant, $h\equiv h(1)=0$.
(3.3) is also surjective for the following reason.
Let $f\in B_0$.
Schwarz's formula
\[
s(z)={1\over 4\pi}\int_0^{2\pi}\ {e^{it}+z\over e^{it}-z}\ f(e^{it})\, dt,\quad z\in\Omega,
\]
defines a holomorphic function $s\colon\Omega\to \mathcal A$ which, by Privalov's theorem, see [P], extends to a function in $C^k(\overline\Omega,\mathcal A)$.
Thus
\[
s(z)^*+s(z)={1\over 2\pi}\ \int_0^{2\pi}\text{ Re }{e^{it}+z\over e^{it}-z}\ f(e^{it}) dt.
\]
The kernel in the integral is Poisson's, $\text{Re}\,\bigl((e^{it}+z)/(e^{it}-z)\bigr)=(1-|z|^2)/|e^{it}-z|^2$, and we see that the extension of $s$ to $\partial\Omega$ satisfies $s^*+s=f$.
It follows that $h=s-s(1)\in A_0$ also satisfies $(h^*+h)|\partial\Omega=f$.
So (3.3) is an isomorphism, and by the implicit function theorem (3.2) is indeed a diffeomorphism between neighborhoods of $0\in A_0$ and $0\in B_0$.
Hence, if $F\in B$ is sufficiently close to $F_0\equiv\mathbf 1$, i.e.~if $f=F(1)^{-1/2}FF(1)^{-1/2}-\mathbf 1\in B_0$ is sufficiently small, there is an $h\in A_0$ such that $H=(\mathbf 1+h)F(1)^{1/2}\in A$ satisfies $H^*H|\partial\Omega=F$.
Since $H$ maps into $\mathcal A^\times$ if $h$ is small, the lemma is proved.
\end{proof}
Based on this lemma we will prove Theorem 3.1 by the continuity method.
We can avoid tricky a priori estimates by first proving a generalization of a theorem of Fej\'er and Riesz.
\begin{lem}If $F\in C_{\text{op}}(\partial\Omega,\mathcal A^+)$ is a Laurent polynomial
\begin{equation}
F(z)=\sum^N_{n=-N}\ F_n z^n,\qquad F_n\in\mathcal A,
\end{equation}
then there are $H_n\in\mathcal A$, $0\leq n\leq N$ such that
\[
H(z)=\sum^N_{n=0} H_n z^n
\]
takes invertible values for $z\in\overline\Omega$ and satisfies $H^*H=F$ on $\partial\Omega$.
It can be arranged that $H_0\in\mathcal A^+$.
\end{lem}
Helson [He, p.~127] proposes a theorem that, when $V$ is separable and $\mathcal A=\text{End}\, V$, would be the same as our lemma, if one could show that Helson's ``outer factor'' is a function valued in $\text{End}\, V$, not in some other space $\text{Hom}(V,W)$.
Subsequently Rosenblum in [Rs] proved
Lemma 3.5, again when $V$ is separable and $\mathcal A=\text{End}\, V$; in his version $F$ need not even take invertible values.
Both Helson and Rosenblum first prove a general factorization theorem from which they derive the polynomial case.
By contrast, in our approach the polynomial factorization comes first.
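A scalar illustration of Lemma 3.5 (our own toy example, with $\mathcal A=\mathbb C$): the Laurent polynomial $F(z)=3+z+z^{-1}$ satisfies $F(e^{it})=3+2\cos t\ge 1$, so $F\in C_{\text{op}}(\partial\Omega,\mathcal A^+)$, and it factors as $F=H^*H$ on $\partial\Omega$ with
\[
H(z)=\varphi+\varphi^{-1}z,\qquad \varphi={1+\sqrt5\over2},
\]
since $\varphi^2+\varphi^{-2}=3$ and $\varphi\cdot\varphi^{-1}=1$. The only zero of $H$ is $z=-\varphi^2$, which lies outside $\overline\Omega$, so $H$ takes invertible values on $\overline\Omega$, and $H_0=\varphi\in\mathcal A^+$.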
We will use the following simple fact.
\begin{lem}Suppose that a sequence of invertible $C_k\in\text{End}\, V$ converges in norm to $C$. Then the following are equivalent:
(a) $C$ is invertible;
(b) $\sup_k ||C_k^{-1}||=s<\infty$;
(c) $C^*C$ is invertible.
\end{lem}
\begin{proof} (a) $\Rightarrow$ (b): There is a positive number $c$ such that $||Cv||\ge c||v||$ for $v\in V$. Hence there is a $k_0$ such that $||C_kv||\ge c||v||/2$ when $k>k_0$, i.e., $||C_k^{-1}||\le 2/c$, and indeed $s<\infty$.
(b) $\Rightarrow$ (a): Since
\[
||C_k^{-1}-C_l^{-1}||=||C_k^{-1}(C_l-C_k)C_l^{-1}||\le s^2||C_l-C_k||\to 0 \qquad\text{as }k,l\to\infty,
\]
$D=\lim_k C_k^{-1}$ exists and satisfies $CD=DC=\text{Id}$.
The equivalence (c) $\Leftrightarrow$ (b) is just (a) $\Leftrightarrow$ (b) applied with $C_k'=C_k^*C_k$.
\end{proof}
\begin{proof}[Proof of Lemma 3.5]Fix a nonintegral $k\in (0,\infty)$.
We will use the spaces $A_0\subset A$, $B_0\subset B$ introduced in the proof of Lemma 3.4.
It suffices to prove when $F(1)=\mathbf 1$, for a general $F$ can be normalized to $F(1)^{-1/2} FF(1)^{-1/2}$.
With such $F$ let $\Phi_t=(1-t)\mathbf 1+tF$, with $t\in [0,1]$, and consider the set $T\subset [0,1]$ of those $t$ for which $\Phi_t$ can be written $H^*H|\partial\Omega$, where $H\in C^k(\overline\Omega,\mathcal A^\times)$ is holomorphic on $\Omega$.
One can modify the requirement on $H$ without affecting $T$.
For example one can require that $H(z)=\sum_0^N H_n z^n$ be a polynomial.
This in fact is automatic for the following reason.
Let $H(z)=\sum_0^\infty H_n z^n$ and $H(z)^{-1}=\sum_0^\infty K_n z^n$ for $z\in\Omega$, and
\begin{equation}
\Phi_t (z)=\sum^N_{-N} G_n z^n\qquad\text{ for }z\in\partial\Omega.
\end{equation}
If we use (3.5) to extend $\Phi_t(z)$ to all $z\in\mathbb C\backslash \{0\}$, as $r\nearrow 1$ we have
\[
H^*(rz) H(rz)-\Phi_t (rz)\to 0,\qquad\text{ or }\qquad H^*(rz)-\Phi_t (rz) H(rz)^{-1}\to 0
\]
in $\mathcal A$, uniformly for $z\in\partial\Omega$.
The second limit translates to
\[
\sum^\infty_0 H_n^* r^n z^{-n} -\sum^N_{-N} G_n r^n z^n\sum_0^\infty K_n r^n z^n\to 0 ,\quad r\nearrow 1.
\]
The second term here contains no power $z^m$ with $m<-N$, hence the same holds for the first sum, i.e., $H_n=0$ when $n>N$, and indeed $H(z)=\sum_0^N H_n z^n$.
One can also add the requirement that $H(0)\in\mathcal A^+$, because from a general $H$ one can pass to $UH$, where $U=(H(0)^* H(0))^{1/2} H(0)^{-1}\in\mathcal A^\times$.
Now, $T\subset [0,1]$ is open.
Indeed, if $t\in T$ and $H$ is as in the definition of $T$,
\[
B\ni H^{*-1} \Phi_s H^{-1}|\partial\Omega-\mathbf 1\to 0\quad\text{ as }s\to t.
\]
Hence for $s$ close to $t$, by Lemma 3.4 we can write
\[
H^{*-1}\Phi_s H^{-1}=K^* K\quad\text{ on }\partial\Omega,
\]
with $K\in C^k(\overline\Omega,\mathcal A^\times)$, holomorphic on $\Omega$.
Since $\Phi_s=(KH)^*(KH)$, such $s$ is indeed in $T$.
But $T$ is also closed.
For suppose $t_\nu\in T$ and $\lim t_\nu=t$.
As seen above, the corresponding factorization of $\Phi_{t_\nu}$ will be
\[
\Phi_{t_\nu}=H^{(\nu)*} H^{(\nu)}|\partial\Omega,\qquad H^{(\nu)}(z)=\sum_0^N H_{\nu n} z^n,
\]
and we can assume $H^{(\nu)}(0)=H_{\nu 0}\in\mathcal A^+$.
By Corollary 3.3,
\[
H^{(\nu)}(z)^* H^{(\nu)}(z)=\sum^N_{n,m=0} H_{\nu n}^* H_{\nu m} \overline z^n z^m
\]
converge for every $z\in\overline\Omega$ as $\nu\to\infty$.
The limits are in $\mathcal A^+$.
Since the functions $\overline z^n z^m$ are independent over $\mathbb C$, this implies $H_{\nu n}^* H_{\nu m}\in\mathcal A$ converge for every $n,m$.
In particular, $\lim_\nu H_{\nu 0}=\lim_\nu (H_{\nu 0}^* H_{\nu 0})^{1/2}=H_0\in\mathcal A^+$ exists.
Therefore
\[
\lim_\nu H_{\nu n}=\lim_\nu H_{\nu 0}^{*-1} (H_{\nu 0}^* H_{\nu n})=H_n\in\mathcal A
\]
also exists for $n=0,\ldots,N$; convergence is in operator norm.
Thus $H(z)=\sum_0^N H_n z^n$ satisfies
\[
H(z)^*H(z)\begin{cases}=\Phi_t(z),&\text{$z\in\partial\Omega$}\\
\in\mathcal A^+,&\text{$z\in\Omega$}.\end{cases}
\]
But $H(z)^*H(z)$ is invertible for $z\in\overline\Omega$, hence by Lemma 3.6 so is $H(z)$. This shows $t\in T$, and $T$ is closed.
What we have proved about $T$ implies $T=[0,1]$, in particular $F=\Phi_1$ has the required factorization.
\end{proof}
\begin{proof}[ Proof of Theorem 3.1] To construct $H$ choose a sequence $F_\nu\colon\partial\Omega\to \mathcal A^+$ of Laurent polynomials converging uniformly to $F$, and construct factorizations $H_\nu^* H_\nu|\partial\Omega=F_\nu$ as in Lemma 3.5, making sure that $H_\nu(0)\in\mathcal A^+$.
By Corollary 3.3 $H_\nu^* H_\nu$ converge uniformly on $\overline\Omega$ to some $P\in C_{\text{op}}(\overline\Omega,\mathcal A^+)$, and $P|\partial\Omega=F$.
In particular the norms $\|H_\nu(z)\|$ are uniformly bounded, say $\|H_\nu(z)\|\leq C$ for $z\in\overline\Omega$ and $\nu\in\mathbb N$.
By Cauchy's estimate
\begin{equation}
\big\| {\partial^k H_\nu(z)\over\partial z^k}\big\| \leq {Ck\text{!}\over (1-|z|)^k},\quad k=0,1,\ldots,\quad z\in\Omega.
\end{equation}
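For completeness, (3.6) is the usual Cauchy estimate: for $z\in\Omega$ and $0<\rho<1-|z|$,
\[
{\partial^k H_\nu\over\partial z^k}(z)={k!\over 2\pi i}\int_{|w-z|=\rho}{H_\nu(w)\over (w-z)^{k+1}}\, dw,
\]
whence $\|\partial^k H_\nu(z)/\partial z^k\|\le Ck!/\rho^k$; letting $\rho\nearrow 1-|z|$ gives (3.6).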
This in turn implies that the partials of $P_\nu=H_\nu^*H_\nu$, of any fixed order, are locally uniformly bounded on $\Omega$.
By Lemma 2.5 these partials converge locally uniformly as $\nu\to\infty$.
In particular
\begin{equation}
\lim_\nu P_\nu(0)^{-1/2} {\partial^k P_\nu\over\partial z^k}(0)=\lim_\nu \ {\partial^k H_\nu\over\partial z^k} (0)=A_k\in\mathcal A
\end{equation}
exists.
In view of (3.6), $\|A_k\|\leq Ck$!, so that
\[
H(z)=\sum^\infty_{k=0} A_k z^k/k\text{!}
\]
defines a holomorphic $H\colon\Omega\to\mathcal A$.
Now (3.6) shows that for $|z|\leq r<1$ the series
\[
\sum^\infty_{k=0}\ {\partial^k H_\nu\over\partial z^k} (0) {z^k\over k!}\quad (=H_\nu (z))
\]
is termwise dominated in norm by $\sum Cr^k$.
By virtue of (3.7) this implies $H_\nu\to H$ locally uniformly on $\Omega$, whence $H^*H=\lim_\nu H_\nu^* H_\nu=\lim_\nu P_\nu=P$ on $\Omega$; and by Lemma 3.6 $H$ takes invertible values, as needed.
To show $H$ is unique up to a unitary factor, consider another solution $K$ of the same problem, and
\[
Q=\begin{cases}K^*K&\text{ on }\Omega\\ F&\text{ on }\partial\Omega\end{cases}\in C_{\text{op}}(\overline\Omega,\mathcal A^+).
\]
Since $R^P=R^Q=0$, Lemma 3.2 implies $P=Q$, and so
\[
K^*\ {\partial^k K\over\partial z^k}={\partial^k Q\over \partial z^k}={\partial^k P\over\partial z^k}=H^*\ {\partial^k H\over\partial z^k}.
\]
Substituting $z=0$ here, Taylor's formula gives that $K(z)=UH(z)$, where $U=K^*(0)^{-1} H^*(0)$ is unitary because
\[
K^*(0)^{-1} H^*(0) H(0) K(0)^{-1}=K^*(0)^{-1} P(0) K(0)^{-1}=K^* (0)^{-1} Q(0) K(0)^{-1}=\mathbf 1.
\]
This completes the proof.
\end{proof}
\begin{proof}[ Proof of Theorem 1.1]As we already noted, $P$ constructed in Theorem 3.1 solves the Dirichlet problem by (3.1).
Uniqueness of $P$ follows from Lemma 3.2.
\end{proof}
We finish the subject of Dirichlet problems by proving a regularity result:
\begin{thm}If $k\in (0,\infty)$ is nonintegral and $F\in C^k(\partial\Omega,\mathcal A^+)$, then $H\colon\Omega\to \mathcal A^\times$ in Theorem 3.1 extends to a function in $C^k(\overline\Omega,\mathcal A^\times)$.
\end{thm}
\begin{proof}Given $\zeta\in\partial\Omega$, we will show that $H$ extends to a $C^k$ function in a neighborhood of $\zeta$.
The automorphisms of $\overline\Omega$
\[
\varphi_t(z)={z+t\zeta\over 1+t\overline\zeta z}\ ,\quad t\in [0,1),
\]
as $t\to 1$ converge in the $C^\infty$ topology to the constant map $\equiv\zeta$ on any closed arc $I\subset\partial\Omega\backslash \{-\zeta\}$.
Hence $F\circ\varphi_t\to F_0\equiv F(\zeta)$ in $C^k(I,\mathcal A)$, and so by Lemma 3.4 for
some $t$ there is a $K\in C^k(\overline\Omega,\mathcal A^\times)$, holomorphic on $\Omega$, such that $K^*K=F\circ\varphi_t$ in a neighborhood of $\zeta\in\partial\Omega$.
Therefore on some closed arc $J\subset\partial\Omega$ containing $\zeta$
\[
L^*L=F,\qquad\text{where }L=K\circ\varphi_t^{-1}\in C^k(\overline\Omega,\mathcal A^\times).
\]
In particular, $(L^*L)(rz)-(H^*H)(rz)\to 0$ as $r\nearrow 1$, uniformly for $z\in J$, or
\[
(H^{*-1} L^* LH^{-1})(rz)\to\mathbf 1\qquad\text{ as }r\nearrow 1,
\]
uniformly for $z\in J$ (since $H,H^{-1}$ are bounded).
Fix $v,w\in V$.
The function $\langle HL^{-1}v,w\rangle$ is bounded and holomorphic on $\Omega$; its almost everywhere existing boundary values on $J$ satisfy
\begin{multline*}
\lim_{r\nearrow 1} \langle (HL^{-1})(rz)v,w\rangle=\\
\lim_{r\nearrow 1} \ \langle(\mathbf 1 -H^{*-1} L^* LH^{-1})(HL^{-1})(rz)v,w\rangle+
\langle (H^{*-1} L^*)(rz)v,w\rangle=\\
\lim_{r\nearrow 1}\langle (H^{*-1} L^*)(rz)v,w\rangle.
\end{multline*}
Thus the bounded holomorphic and antiholomorphic functions $\langle HL^{-1}v,w\rangle$ and\newline
$\langle H^{*-1} L^*v,w\rangle$ share the same boundary values on $J$.
This implies that the former continues analytically to $\mathbb C\backslash\overline\Omega$ across int $J$. But this, in turn, implies that $HL^{-1}$ continues analytically across $\zeta$, as follows e.g. from the more general [L4, Lemma 6.1].
Hence $H$ itself extends $C^k$ near $\zeta$.
\end{proof}
\section{Mean values}
In this section we will characterize semipositivity/negativity of curvature through mean value properties.
This can be done in generality greater than smooth or even continuous metrics that we have worked with so far.
What the appropriate generality should be is suggested by the case of line bundles.
On a trivial line bundle hermitian metrics of, say, seminegative curvature are determined by plurisubharmonic functions $u$, and it is well understood that it is profitable to allow $u$ to be upper semicontinuous and take $-\infty$ as value.
This latter translates to allowing the metric to assign zero length to nonzero vectors in the fibers.
Accordingly, for a Hilbert space $(V,\langle ,\rangle)$ let
\[
\text{End}^{\geq 0} V=\{A\in\text{End}\, V\colon A=A^*\geq 0\}.
\]
Given an open $\Omega\subset\mathbb C$ and a trivial Hilbert bundle $E=\Omega\times V\to\Omega$, by a finite hermitian metric on $E$ we mean a function $h\colon\Omega\times V\times V\to\mathbb C$ that can be written
\begin{equation}
h(z,v,w)=\langle P(z)v,w\rangle,\text{ with }P\colon\Omega\to\text{End}^{\geq 0} V.
\end{equation}
Thus here we will deal with one dimensional bases only.
Mean value properties when $\dim\Omega>1$ follow upon restricting to one dimensional slices.
\begin{defn}We say that a finite hermitian metric $h$, or the corresponding $P$ in (4.1), has seminegative curvature if $\langle P\varphi,\varphi\rangle$ is subharmonic for any holomorphic $\varphi\colon\Omega\to V$.
\end{defn}
This definition has been in use for a while now in various degrees of generality.
It implies, in particular, that $P$ is weakly u.s.c., i.e.~$\langle Pv,v\rangle$ is u.s.c.~for $v\in V$.
It also implies the seemingly stronger condition that $\log\langle P\varphi,\varphi\rangle\colon\Omega\to [-\infty,\infty)$ is subharmonic (because with any holomorphic $f\colon\Omega\to \mathbb C$ the function $\langle Pe^f\varphi,e^f\varphi\rangle$ satisfies the maximum principle, whence $2\text{ Re }f+\log\langle P\varphi,\varphi\rangle$ also satisfies it).
At this point we are allowing subharmonic functions to be $\equiv -\infty$.---When $P\in C^2_{\text{weak}}(\Omega,\text{End}^+ V)$, and its curvature $R^P$ can be computed by (1.2), our current notion of seminegative curvature is equivalent to $P_{z\overline z}\geq P_{\overline z} P^{-1} P_z$.
In the following, integrals of $P$ will be understood in the weak sense:\ \ $\int P$ (over whatever set) is the operator $Q$ that satisfies $\int\langle Pv,w\rangle=\langle Qv,w\rangle$ for all $v,w\in V$.
It suffices to require the latter equality for $v=w$ only, from which the general case follows by polarization.
We write $D_r(a)$ for the disc $\{z\in\mathbb C\colon |z-a| < r\}$.
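The polarization identity invoked above reads, for any $Q\in\text{End}\, V$,
\[
\langle Qv,w\rangle={1\over4}\sum_{k=0}^{3} i^k\,\bigl\langle Q(v+i^kw),\,v+i^kw\bigr\rangle,\qquad v,w\in V,
\]
so two weak integrals that agree for $v=w$ agree for all $v,w\in V$.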
\begin{thm}For a weakly u.s.c.~$P\colon\Omega\to\text{End}^{\geq 0} V$ the following are equivalent:
(i)\ $P$ has seminegative curvature.
(ii)\ For any disc $\overline{D_r(z_0)}\subset\Omega$, writing $\oint$ for the average $(1/2\pi r)\int_{\partial D_r (z_0)}$,
\begin{equation}
\begin{pmatrix}
\oint P(z) |dz| & \oint P(z) dz\\
\oint P(z) d\overline z & \oint P(z) |dz|
\end{pmatrix} \geq \begin{pmatrix} P(z_0)&0\\ 0&0\end{pmatrix}
\end{equation}
in $\text{End}\,(V\oplus V)$.
(iii)\ For any disc $\overline{D_r(z_0)}\subset\Omega$ and $t\in (0,\infty)$
\begin{equation}
\oint P(z) |dz|-\oint P(z) dz \left(t\text{Id}_V +\oint P(z) |dz|\right)^{-1}\oint P(z) d\overline z\geq P(z_0).
\end{equation}
\end{thm}
If $\oint P|dz|$ is invertible, then (4.3) is equivalent to the simpler estimate
\begin{equation}
\oint P(z) |dz|-\oint P(z) dz \left(\oint P(z)|dz|\right)^{-1}\oint P(z) d\overline z\geq P(z_0).
\end{equation}
Even for noninvertible $\oint P|dz|$ (4.3) and (4.4) will be equivalent if we define the product in (4.4) as the monotone limit, in the weak or strong operator topology,
\[
\lim_{t\searrow 0} \oint P dz \left(t\text{Id}_V+\oint P |dz|\right)^{-1} \oint P d\overline z.
\]
The equivalence of (4.2), (4.3) is an instance of the following.
\begin{lem}Let $V,W$ be Hilbert spaces, $A=A^*\in\text{End} \,V$, $B\in\text{Hom} (W,V)$, $C=C^*\in\text{End}^{\geq 0} W$.
Then
\begin{equation}
\begin{pmatrix}A&B\\ B^*&C\end{pmatrix}\geq 0
\end{equation}
in $\text{End}\,(V\oplus W)$ if and only if $A\geq B(t\text{Id}_W+C)^{-1} B^*$ for all $t\in (0,\infty)$, which in turn is equivalent to
\[
D=\lim_{t\searrow 0} B(t\text{Id}_W+C)^{-1} B^*
\]
existing in the strong operator topology and satisfying $A\geq D$.
\end{lem}
The result is well known and much used in matrix theory, where $A-D$ is called the Schur complement of $C$, see [HJ, Chapter 7].
The proof for operators is the same as for matrices:\ \ If in (4.5) we replace $C$ by $C_t=C+t\text{Id}_W$, the resulting inequality for all $t>0$ will be equivalent to the original (4.5).
But
\begin{equation}
\begin{pmatrix}\text{Id}_V&-BC_t^{-1}\\ 0&\text{Id}_W\end{pmatrix}
\begin{pmatrix}A&B\\ B^*&C_t\end{pmatrix}
\begin{pmatrix}\text{Id}_V&0\\ -C_t^{-1} B^*&\text{Id}_W\end{pmatrix}=
\begin{pmatrix}A-B C_t^{-1} B^*&0\\ 0&C_t\end{pmatrix},
\end{equation}
so that (4.5) is equivalent to
\begin{equation}
A-B C_t^{-1} B^*\geq 0\quad\text{ for all }t>0.
\end{equation}
Next suppose that (4.7) holds.
Then $BC_t^{-1}B^*$ is a decreasing function of $t>0$, bounded above by $A$, hence by Vigier's theorem [RSz, p.~261] it converges in the strong operator topology to a $D\leq A$, as $t\searrow 0$.
Conversely, suppose $D=\lim_{t\searrow 0} BC_t^{-1} B^*$ exists and satisfies $D\leq A$.
As the limit is monotone, $BC_t^{-1} B^*\leq A$ for $t>0$.
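In the simplest case $V=W=\mathbb C$ (a sanity check, not used below) the lemma reduces to the familiar fact that, for $a\in\mathbb R$, $b\in\mathbb C$ and $c>0$,
\[
\begin{pmatrix}a&b\\ \overline b&c\end{pmatrix}\geq 0
\qquad\Longleftrightarrow\qquad
a\geq{|b|^2\over c},
\]
and $|b|^2/c$ is precisely the limit $D$ of the lemma.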
\begin{proof}[Proof of Theorem 4.2]By Lemma 4.3 (ii) and (iii) are equivalent.
To show (i) $\Rightarrow$ (ii), take $z_0=0$ for simplicity.
Given $u,v\in V$, the function $\varphi(z)=u+iv z/r$ is holomorphic, so that $\langle P\varphi,\varphi\rangle$ is subharmonic.
Noting that on the circle $|z|=r$ we have $dz=iz |dz|/r$, $d\overline z=-i\overline z|dz|/r$, the mean value theorem gives
\[
\langle P(0)u,u\rangle\leq \oint\langle P\varphi,\varphi\rangle|dz|=\oint \langle Pu,u\rangle |dz|+\oint\langle Pu,v\rangle d\overline z+\oint \langle Pv,u\rangle dz+\oint\langle Pv,v\rangle |dz|.
\]
But this is equivalent to (4.2).
To prove that (ii) or (iii) imply (i), we start by assuming $P\in C^2_{\text{op}} (\Omega,\text{End}^+ V)$.
Then we need to show $P_{z\overline z}\geq P_{\overline z} P^{-1} P_z$.
Let $z_0\in\Omega$ and $z=z_0+\zeta$.
As $\zeta\to 0$
\[
P(z)=P(z_0)+P_z(z_0)\zeta+P_{\overline z}(z_0)\overline\zeta+P_{z\overline z}(z_0) |\zeta|^2+\text{ Re }P_{zz}(z_0)\zeta^2+o |\zeta|^2,
\]
whence, as $r\to 0$
\begin{multline*}
\oint P(z) |dz|-\oint P(z) dz \left(\oint P(z) |dz|\right)^{-1} \oint P(z) d\overline z-P(z_0)=\\
r^2\big(P_{z\overline z} (z_0)-P_{\overline z} (z_0) P(z_0)^{-1} P_z (z_0) + o(1)\big).
\end{multline*}
Therefore (4.4) implies $P_{z\overline z}\geq P_{\overline z} P^{-1} P_z$.
Now take a general $P$. (4.2) implies
\begin{equation}
\oint P(z)|dz|\geq P(z_0).
\end{equation}
With a smooth $\chi\colon\mathbb C\to [0,\infty)$ supported in a disc $D_\rho(0)$, the convolution $Q=\chi*P$ is defined and $C^\infty_{\text{weak}}=C^\infty_{\text{op}}$ in $\Omega_\rho=\{z\in\Omega\colon\text{dist} (z,\partial\Omega)>\rho\}$, cf.~Proposition 2.3.
If $\chi$ is rotationally symmetric and $\int_{\mathbb C}\chi=1$, then $Q\geq P$ on $\Omega_\rho$ by (4.8).
Clearly, $Q$ inherits the mean value property (4.2) from $P$, as does $t\text{Id}_V+Q$ for any $t>0$.
By what we have just proved, $t\text{Id}_V+Q$ is seminegatively curved.
Further, given a compact $K\subset\Omega_\rho$, $\sup_K \|Q\|$ is dominated by $\sup\| P\|<\infty$, the latter $\sup$ taken over the $\rho$--neighborhood of $K$.
Choose a sequence of $\chi=\chi_n$ as above, whose supports shrink to $0$, and also $t_n\searrow 0$.
Then $Q_n=t_n\text{Id}_V+\chi_n*P\to P$ pointwise in the weak operator topology.
Therefore, with any holomorphic $\varphi\colon\Omega\to V$
\[
\langle P\varphi,\varphi\rangle=\lim \langle Q_n\varphi,\varphi\rangle=\inf \langle Q_n\varphi,\varphi\rangle
\]
is subharmonic.
\end{proof}
For a subharmonic function $u$ the integrals $\int_0^{2\pi} u(z_0+r e^{it})dt$ increase with $r$.
This property also generalizes to seminegatively curved metrics and the modified averages in (4.3), (4.4), that we will denote
\begin{equation}
\frak S (P,z_0,r)=\oint P|dz|-\oint Pdz \left(\oint P|dz|\right)^{-1} \oint P d\overline z.
\end{equation}
The expression makes sense for a general weakly measurable, bounded $P\colon\partial D_r (z_0)\to\text{End}^{\geq 0} V$, assuming $\oint P |dz|$ is invertible.
Failing that, we define
\[
\frak S (P,z_0,r)=\lim_{t\searrow 0}\frak S (P+t\text{Id}_V, z_0,r)
\]
as long as the limit exists.
As we have seen, the limit will exist when $P$ has seminegative curvature.
Thus $\frak S(P,z_0,r)$ is the Schur complement in
\begin{equation}
\frak M (P,z_0,r)=\begin{pmatrix} \oint P|dz|& r\oint Pdz\\ r\oint Pd\overline z& r^2\oint P|dz|\end{pmatrix}.
\end{equation}
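Spelling this out, the Schur complement of the lower right block of (4.10) is indeed (4.9), the factors of $r$ cancelling:
\[
\oint P|dz|-r\oint Pdz\,\Bigl(r^2\oint P|dz|\Bigr)^{-1} r\oint Pd\overline z
=\oint P|dz|-\oint Pdz\left(\oint P|dz|\right)^{-1}\oint Pd\overline z=\frak S(P,z_0,r).
\]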
\begin{thm}If $P\colon\Omega\to\text{End}^{\geq 0} V$ is seminegatively curved, and $\overline{D_r(z_0)}\subset \Omega$, then
\begin{equation}
\frak S(P,z_0,\rho)\leq\frak S (P,z_0,r)\qquad\text{for }0<\rho\leq r.
\end{equation}
\end{thm}
This generalizes (4.3), which at least for continuous $P$ is obtained in the limit $\rho\to 0$.
The proof requires some preparation.
\begin{lem}Let $V,W$ be Hilbert spaces, $A=A^*$, $A'={A'}^*\in\text{End}\, V$, $B,B'\in\text{Hom} (W,V)$, and $C,C'\in\text{End}^+ W$.
\item{(i)} $(B+B')(C+C')^{-1}(B+B')^*\leq BC^{-1} B^* +B' C'^{-1} {B'}^*$;
\item{(ii)} if
\[
\begin{pmatrix} A&B\\ B^*&C\end{pmatrix}\leq \begin{pmatrix} A'&B'\\ {B'}^*&C'\end{pmatrix}
\]
then $A-BC^{-1} B^*\leq A'-B' C'^{-1}{B'}^*$.
\end{lem}
The result extends to noninvertible $C,C'\ge 0$ if e.g.~$BC^{-1}B^*$ is defined, as above, by $\displaystyle\lim_{t\searrow 0} B(C+t\text{Id}_W)^{-1}B^*$, whenever this limit exists in the strong operator topology.---The first inequality says that the function $(B,C)\mapsto BC^{-1}B^*$ is subadditive (equivalently, convex).
\begin{proof}For finite dimensional $V,W$ statements equivalent to (i) and (ii) are formulated in [HJ, 7.7.P41]. In our generality,
(i)\ was proved in [LR]. It is also straightforward from Lemma 4.3, for
\[
\begin{pmatrix} BC^{-1}B^*+B'{C'}^{-1}{B'}^*&B+B'\\(B+B')^*&C+C'\end{pmatrix}=
\begin{pmatrix}BC^{-1}B^*&B\\B^*&C\end{pmatrix}+\begin{pmatrix}B'{C'}^{-1}{B'}^*&B'\\{B'}^*&C'\end{pmatrix}\ge 0
\]
by Lemma 4.3; and another application of Lemma 4.3 then gives (i). In turn (ii) follows if we introduce
\[
\begin{pmatrix}\alpha&\beta\\ \beta^*&\gamma\end{pmatrix}=\begin{pmatrix} A'&B'\\ {B'}^*&C'\end{pmatrix}-\begin{pmatrix}A&B\\ B^*&C\end{pmatrix}\geq 0.
\]
We can assume that $\gamma\in\text{End}^+ W$, for the general case will then follow by replacing $C'$ by $C'+t\text{Id}$ and letting $t\searrow 0$. By (i)
\[
A'-B' C'^{-1} {B'}^*\geq A+\alpha-BC^{-1} B^*-\beta\gamma^{-1}\beta^*\geq A-BC^{-1} B^*.
\]
\end{proof}
\begin{proof}[Proof of Theorem 4.4]Given $u,v\in V$, consider the holomorphic function $\varphi(z)=u+iv(z-z_0)$, $z\in\Omega$.
Then
\[
\frac1\rho\int_{\partial D(z_0,\rho)}\langle P\varphi(z),\varphi(z)\rangle |dz|\leq\frac1r\int_{\partial D(z_0,r)}\langle P\varphi(z),\varphi(z)\rangle |dz|
\]
by subharmonicity.
Writing this out in terms of $u,v$ as in the proof of Theorem 4.2, we find that
\[
\frak M (P,z_0,\rho)\leq\frak M(P,z_0,r),
\]
notation as in (4.10).
Hence Lemma 4.5(ii) and the remark following it imply (4.11).
\end{proof}
An analog of Hadamard's Three Circles Theorem, namely that $\frak S(P,z_0,e^t)$ is convex in $t$, can be proved along the same lines.
Next we turn to metrics $h$ and corresponding $P\colon\Omega\to\text{End}\, V$ with semipositive curvature.
The proper generality for semipositive curvature is not even finite hermitian metrics, but duals of such.
However, not to overburden the discussion, we will only deal with $C^2_{\text{op}}$ operators $P$ that take invertible values.
Then semipositive curvature means
\begin{equation}
P_{\overline z} P^{-1} P_z-P_{z\overline z}\geq 0.
\end{equation}
Our integral characterization of (4.12) will be in terms of
\begin{equation}
\frak T (P,z_0,r)=\min_H H^* (z_0)^{-1}\frak S (H^* PH, z_0,r) H(z_0)^{-1}
\end{equation}
(when $\overline{D_r(z_0)}\subset\Omega)$, where the minimum is taken over $H\in C_{\text{op}} (\overline{D_r(z_0)}, \text{GL}(V))$ that are holomorphic on $D_r(z_0)$. That there is a minimum is the content of Lemma 4.6 below.
The quantity $\frak T (P,z_0,r)$ is a gauge covariant version of $\frak S(P,z_0,r)$, in the sense that if we transform the gauge---that is, we change the trivialization of $\Omega\times V\to \Omega$---and replace $P$ by $Q=K^*PK$, where $K\colon\Omega\to \text{GL}(V)$ is holomorphic, then
\begin{equation}
\frak T(Q,z_0,r)=K (z_0)^*\frak T (P,z_0,r) K(z_0).
\end{equation}
Curvature (4.12) is also gauge covariant,
\begin{equation}
Q_{\overline z}Q^{-1} Q_z-Q_{z\overline z}=K^* (P_{\overline z} P^{-1} P_z-P_{z\overline z}) K.
\end{equation}
Like $\frak S$, $\frak T(P,z_0,r)$ makes sense when $P$ is defined only on $\partial D_r(z_0)$ and has some mild regularity properties.
\begin{lem}Suppose $0<k <1$ and $P\in C^k (\partial D_r (z_0), \text{End}^+ V)$.
Then the minimum in (4.13) is achieved by an $H$ for which $H^*PH=\text{Id}_V$ on $\partial D_r(z_0)$.
Thus $\frak T (P,z_0,r)=H^*(z_0)^{-1} H(z_0)^{-1}$.
\end{lem}
\begin{proof}Assume first that $P\equiv\text{Id}_V$.
For any competing $H$ the curvature of $H^*H$ is $0$ on $D_r(z_0)$.
By Theorem 4.2
\[
H^* (z_0)^{-1}\frak S (H^*H,z_0,r) H(z_0)^{-1}\geq\text{Id}_V,
\]
and equality holds if $H=\text{Id}_V$.
Second, for a general $P$ by Theorems 3.1, 3.7 we can solve the Dirichlet problem
\[
(K^{-1})^* K^{-1}=P\quad\text{ on }\quad\partial D_r (z_0)
\]
for $K\in C^k (\overline{D_r(z_0)}, \text{GL}(V))$ holomorphic on $D_r(z_0)$.
We then pass from $P$ to $K^*PK$ and apply covariance (4.14).
\end{proof}
\begin{thm}A function $P\in C_{\text{op}}^2 (\Omega,\text{End}^+ V)$ has semipositive curvature if and only if for every disc $\overline{D_r(z_0)}\subset\Omega$
\begin{equation}
\frak T(P,z_0,r)\leq P(z_0).
\end{equation}
\end{thm}
\begin{proof}Suppose $P$ is semipositively curved, and choose $H\colon\overline{D_r(z_0)}\to \text{GL}(V)$ as in Lemma 4.6.
Since $Q=H^{*-1} H^{-1}$ has zero curvature on $D_r(z_0)$ and $Q=P$ on $\partial D_r(z_0)$, the maximum principle (Theorem 1.3) implies
\[
P(z_0)\geq H^* (z_0)^{-1} H(z_0)^{-1}=H^* (z_0)^{-1}\frak S (H^* PH,z_0,r) H(z_0)^{-1}\geq\frak T(P,z_0,r).
\]
Conversely, suppose (4.16) holds.
We need to prove $P_{\overline z}P^{-1} P_z\geq P_{z\overline z}$, which we will do at $z_0=0$.
First assume that with some $A=A^*\in\text{End}\, V$
\begin{equation}
P(z)=\text{Id}_V +A|z|^2 +o(|z|^2)\qquad\text{ as }z\to 0,
\end{equation}
and choose $H=H_r$ again as in Lemma 4.6.
We claim that
\[
\|\text{Id}_V +Ar^2-H_r^* (z)^{-1} H_r (z)^{-1}\|=o(r^2)\quad\text{ as } |z|\leq r\to 0.
\]
Indeed, given $\varepsilon>0$, for sufficiently small $r$ and $z\in\partial D_r(0)$
\[
(1-\varepsilon r^2)\text{Id}_V +Ar^2\leq H_r^* (z)^{-1} H_r(z)^{-1}\leq (1+\varepsilon r^2)\text{Id}_V+Ar^2.
\]
By the maximum principle the same must then hold for $z\in\overline{D_r(0)}$.
But then
\[
P(0)=\text{Id}_V\geq\frak T (P,z_0,r)=H_r^* (0)^{-1} H_r(0)^{-1}=\text{Id}_V+Ar^2+o(r^2),
\]
and $P_{\overline z}(0)P^{-1}(0) P_z(0)-P_{z\overline z}(0)=-A\geq 0$ follows by letting $r\to 0$.
Now for a general $P$ we can choose a holomorphic $K\colon\Omega\to \text{GL}(V)$ so that $P_1=K^*PK$ has Taylor series as in (4.17).
That $P$ has semipositive curvature then follows from the gauge covariance properties (4.14), (4.15) and from the first part of the proof.
\end{proof}
\section{Limits}
Nevertheless, there are limits to how far one can go in generalizing results from traditional to noncommutative potential theory. Consider the case of Harnack's inequality: If $\Omega=\{z\in\mathbb C\,:\,|z|<1\}$ and $u:\Omega\to[0,\infty)$ is harmonic, then
\begin{equation}
u(z)\le\frac{1+|z|}{1-|z|} u(0).
\end{equation}
When $u(0)=0$, we recover the maximum---or rather, minimum---principle, $u\equiv 0$. But (5.1) also implies that the maximum principle is {\sl stable}: Knowing how far $u(0)$ is from $\inf u$, we can estimate how far $u$ is from a constant function. This raises the following question of noncommutative potential theory.
Let $\Omega=\{z\in\mathbb C\,:\,|z|<1\}$, and let $P\in C^2_{\text{op}}(\Omega, \text{End}^+V)$ satisfy $R^P=0$. Given that $P(z)\ge\text{Id}$ for all $z\in\Omega$, is it possible to estimate $P(z)$ in terms of $P(0)$, in the form
\[
||P(z)||\le C\big(z,P(0)\big)?
\]
Such an estimate holds and follows from Harnack's inequality (5.1) if $\dim V<\infty$, but not otherwise:
\begin{thm} Suppose $\dim V=\infty$. With $\Omega=\{z\in\mathbb C\,:\,|z|<1\}$ and $z_0\in\Omega\setminus\{0\}$, there is a $T\in\text{End}^+V$ such that
\begin{equation}
\sup\{||P(z_0)||\,:\, P\in C^2_{\text{op}}(\Omega, \text{End}^+ V), \,P\ge\text{Id},\, R^P=0,\, P(0)=T\}=\infty.
\end{equation}
$T$ can be chosen a multiple of $\text{Id}$.
\end{thm}
This we will derive from a lemma that disproves the noncommutative generalization of Hurwitz's theorem on zeros of holomorphic functions.
\begin{lem} If $\dim V=\infty$, there is a sequence of holomorphic $H_k:\mathbb C\to\text{GL}(V)\subset\text{End} \,V$ with $H_k(0)=\text{Id}$ and $\lim_{k\to\infty}H_k=H:\mathbb C\to\text{End}\, V$ locally uniformly, such that
\[
\emptyset\neq\{\zeta\in\mathbb C\,:\, H(\zeta)\in\text{GL}(V)\}\neq\mathbb C .
\]
Locally uniform convergence is understood with respect to the norm topology on $\text{End}\, V$.
\end{lem}
\begin{proof} At the bottom of this phenomenon is the fact that the set valued function that associates with an operator $A\in\text{End}\, V$ its spectrum spec$\,A\subset\mathbb C$ is discontinuous, when the space of compact subsets of $\mathbb C$ is endowed with the Hausdorff metric (although the function is upper semicontinuous). A construction showing this is in [Ri, p. 282], attributed to Kakutani. We reproduce this construction, with a minimal change in notation. Consider on $V=l^2$ the weighted shift operator $A$ that maps $x=(x_n)_{n\ge1}\in l^2$ to
\[
\big(0,x_1,\frac{x_2}2,x_3,\frac{x_4}4,x_5,\frac{x_6}2, x_7,\frac{x_8}8,\ldots\big) .
\]
If $\beta(n)$ denotes the highest power of $2$ that divides $n\ge 1$, then
\[
Ax=y,\qquad \text{where }\quad y_n=\begin{cases}0 &\text{if } n=1\\
x_{n-1}/\beta(n-1) & \text{otherwise.}
\end{cases}
\]
Further, for $k=1,2,\ldots$ let $A_k\in\text{End}\, V$ be given by
\[
A_kx=y,\qquad \text{where }\quad y_n=\begin{cases}0 &\text{if } 2^k \text{ divides } n-1\\
x_{n-1}/\beta(n-1) & \text{otherwise.}
\end{cases}
\]
Then $||A-A_k||=2^{-k}$, and $A_k\to A$. Further, $A_k$ is nilpotent, so for all $\zeta\in\mathbb C$
\[
H_k(\zeta)=\text{Id}-\zeta A_k
\]
has an inverse, namely $\sum_{j\le2^k}(\zeta A_k)^j$. However, $H(\zeta)=\lim_k H_k(\zeta)$ is not invertible if $|\zeta|\ge 2$; it is not even onto (while $H(0)=\text{Id}$).
Indeed, suppose $H(\zeta)x=(1,0,0,\ldots)$. This means $x_1=1$ and $x_n=\zeta x_{n-1}/\beta(n-1)$ for $n\ge 2$, i.e.,
\[
x_n=\zeta^{n-1}/\prod_{m<n}\beta(m).
\]
When $n=2^k$, the product is easy to compute. Given $j=0,1,\ldots,k-1$, the equation $\beta(m)=2^j$ has $2^{k-j-1}$ many solutions $1\le m<2^k$. Hence
\[
\prod_{m<n}\beta(m)=\prod_{j=0}^{k-1}2^{j2^{k-j-1}}=2^{2^k\sum_0^{k-1}j2^{-j-1}}<2^n.
\]
Therefore $|x_n|>1/2$ if $|\zeta|\ge2$ and $n$ is a power of $2$, whence $(x_n)_{n\ge 1}\notin l^2$.
This construction for $V=l^2$ has an obvious extension to any infinite dimensional $V$ via a splitting $V=l^2\oplus W$.
\end{proof}
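The arithmetic behind the final estimate can be checked directly. The sketch below (plain Python, exact integer arithmetic; an illustration we add here, not part of the original argument) verifies that $\prod_{m<n}\beta(m)<2^n$ for $n=2^k$, which is exactly what makes $|x_n|>1/2$ when $|\zeta|\ge 2$.

```python
def beta(m):
    # the highest power of 2 dividing m >= 1
    p = 1
    while m % 2 == 0:
        m //= 2
        p *= 2
    return p

# check that prod_{m<n} beta(m) < 2^n for n = 2^k, k = 1, ..., 12
for k in range(1, 13):
    n = 2 ** k
    prod = 1
    for m in range(1, n):
        prod *= beta(m)
    print(k, prod < 2 ** n)   # prints True for every k
```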
\begin{proof}[Proof of Theorem 5.1]
By scaling the dependent and independent variables of $H_k, H$ of Lemma 5.2 we can construct a sequence $L_k:\Omega\to\text{GL}(V)$ of holomorphic maps, $||L_k(z)||\le 1$ for $z\in\Omega$, such that $L_k(0)=\varepsilon\text{Id}$ with $\varepsilon>0$ independent of $k$, and $L_k\to L$ uniformly, but $L(z_0)$ is not invertible. Then $P_k=L_k^{*-1}L_k^{-1}$ are competitors in (5.2) if $T=\varepsilon^{-2}\text{Id}$, and $\sup_k||P_k(z_0)||=\sup_k||L_k(z_0)^{-1}||^2=\infty$, for otherwise $L(z_0)$ would be invertible by Lemma 3.6.
\end{proof}
| {
"timestamp": "2016-10-13T02:00:45",
"yymm": "1610",
"arxiv_id": "1610.03523",
"language": "en",
"url": "https://arxiv.org/abs/1610.03523",
"abstract": "We propose to view hermitian metrics on trivial holomorphic vector bundles $E\\to\\Omega$ as noncommutative analogs of functions defined on the base $\\Omega$, and curvature as the notion corresponding to the Laplace operator or $\\partial\\overline\\partial$. We discuss noncommutative generalizations of basic results of ordinary potential theory, mean value properties, maximum principle, Harnack inequality, and the solvability of Dirichlet problems.",
"subjects": "Complex Variables (math.CV); Functional Analysis (math.FA)",
"title": "Noncommutative potential theory",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9873750496039276,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7095221799601898
} |
https://arxiv.org/abs/1507.02173 | On Integer Additive Set-Filtered Graphs | Let $\mathbb{N}_0$ denote the set of all non-negative integers and $\mathcal{P}(\mathbb{N}_0)$ be its power set. An integer additive set-labeling (IASL) of a graph $G$ is an injective function $f:V(G)\to \mathcal{P}(\mathbb{N}_0)$ such that the induced function $f^+:E(G) \to \mathcal{P}(\mathbb{N}_0)$ is defined by $f^+ (uv) = f(u)+ f(v)$, where $f(u)+f(v)$ is the sumset of $f(u)$ and $f(v)$. In this paper, we introduce the notion of a particular type of integer additive set-indexers called integer additive set-filtered labeling of given graphs and study their characteristics. | \section{Introduction}
For all terms and definitions, not defined specifically in this paper, we refer to \cite{BM}, \cite{FH} and \cite{DBW}. Unless mentioned otherwise, all graphs considered here are simple, finite and have no isolated vertices.
The {\em sumset} of two non-empty sets $A$ and $B$, denoted by $A+B$, is defined as $A + B = \{a+b: a \in A, b \in B\}$. If either $A$ or $B$ is countably infinite, then their sumset $A+B$ is also countably infinite. Hence, all sets we mention here are finite. We denote the cardinality of a set $A$ by $|A|$.
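Since every construction below rests on sumsets, a minimal computational illustration may help; the helper name is ours and does not come from the cited references.

```python
def sumset(A, B):
    # A + B = {a + b : a in A, b in B}
    return {a + b for a in A for b in B}

print(sumset({0, 1}, {0, 2, 3}))   # {0, 1, 2, 3, 4}
```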
Using the concepts of sumsets, an integer additive set-labeling of a given graph $G$ is defined as follows.
\begin{defn}\label{D-IASL}{\rm
\cite{GA} Let $\mathbb{N}_0$ denote the set of all non-negative integers and $\mathcal{P}(\mathbb{N}_0)$ be its power set. An {\em integer additive set-labeling} (IASL, in short) of a graph $G$ is defined as an injective function $f:V(G)\to \mathcal{P}(\mathbb{N}_0)$ which induces a function $f^+:E(G) \to \mathcal{P}(\mathbb{N}_0)$ such that $f^+ (uv) = f(u)+ f(v),~ uv\in E(G)$. A graph which admits an IASL is called an {\em integer additive set-labeled graph} (IASL-graph).}
\end{defn}
\noindent The notion of an integer additive set-indexers of graphs was introduced in \cite{GA}.
\begin{defn}\label{D-IASI}{\rm
\cite{GA,GSO} An integer additive set-labeling $f:V(G)\to \mathcal{P}(\mathbb{N}_0)$ of a graph $G$ is said to be an {\em integer additive set-indexer} (IASI) if the induced function $f^+:E(G) \to \mathcal{P}(\mathbb{N}_0)$ defined by $f^+ (uv) = f(u)+ f(v)$ is also injective. A graph which admits an IASI is called an {\em integer additive set-indexed graph} (IASI-graph).}
\end{defn}
The existence of an integer additive set-labeling (or integer additive set-indexers) by a given graph was established in \cite{GS0} and the admissibility of integer additive set-labeling (or integer additive set-indexers) by given graph operations and graph products was established in \cite{CGS1}.
\begin{thm}
{\rm \cite{GS0}} Every graph $G$ admits an integer additive set-labeling (or an integer additive set-indexer).
\end{thm}
The cardinality of the set-label of an element (a vertex or an edge) of a graph $G$ is called the {\em set-indexing number} of that element. An element of $G$ having a singleton set-label is called a {\em mono-indexed element} of $G$.
In this paper, we study the characteristics of graphs which admit a certain type of integer additive set-labeling, called an integer additive set-filtered labeling.
\section{Integer Additive Set-Filtered Graphs}
Note that all sets we consider in this paper are non-empty finite sets of non-negative integers. By the term a {\em ground set}, we mean a non-empty finite set of non-negative integers whose subsets are the set-labels of the elements of the given graph $G$. We denote the ground set used for labeling the elements of a graph $G$ by $X$.
Motivated from the studies about topological IASL-graphs, made in \cite{GS12}, we study a set-labeling of a given graph, in which the collection of all set-labels of the vertices of a given graph forms a filter of the ground set used for the labeling. Let us first recall the definition of the filter of a set.
\begin{defn}\label{D-SF1}{\rm
\cite{KDJ1,JRM} Given a set $X$, a partial ordering $\subseteq$ can be defined on the power set $\mathcal{P}(X)$ by subset inclusion, turning $(\mathcal{P}(X),\subseteq)$ into a lattice. A {\em filter} on $X$, denoted by $\mathcal{F}$, is a non-empty subset of the power set $\mathcal{P}(X)$ of $X$ which has the following properties:
\begin{enumerate}\itemsep0mm
\item[(i)] $X \in \mathcal{F}$.
\item[(ii)] $A,B \in \mathcal{F} \implies A\cap B \in \mathcal{F}$. ($\mathcal{F}$ is closed under finite intersection).
\item[(iii)] $\emptyset \not \in \mathcal{F}$. ($\mathcal{F}$ is a proper filter).
\item[(iv)] $A \in \mathcal{F},\ A \subset B \implies B \in \mathcal{F}$, where $B$ is a non-empty subset of $X$.
\end{enumerate} }
\end{defn}
In view of Definition \ref{D-SF1}, we define the notion of an integer additive set-filtered labeling of a given graph as follows.
\begin{defn}\label{D-IASFG}{\rm
Let $X$ be a finite set of non-negative integers. Then, an integer additive set-labeling $f:V(G)\to \mathcal{P}(X)$ is said to be an {\em integer additive set-filtered labeling} (IASFL, in short) of $G$ if $\mathcal{F}=f(V)$ is a proper filter on $X$. A graph $G$ which admits an IASFL is called an {\em integer additive set-filtered graph} (IASF-graph). }
\end{defn}
Note that the null set can not be the set-label of any element of the graph $G$, with respect to an IASL defined on it.
Does every given graph admit an integer additive set-filtered labeling? If not, what conditions are required for a graph to admit an IASFL? As answers to both questions, we establish a necessary and sufficient condition for an IASL $f$ of a given graph $G$ to be an IASFL of $G$ as follows.
\begin{thm}\label{T-IASFL1}
An IASL $f$ defined on a given graph $G$ with respect to a non-empty ground set $X$ is an integer additive set-filtered labeling of $G$ if and only if the following conditions hold.
\begin{enumerate}\itemsep0mm
\item[(i)] $0 \in X$.
\item[(ii)] every subset of $X$ containing $0$ is the set-label of some vertex in $G$.
\item[(iii)] $0$ is an element of the set-label of every vertex in $G$.
\end{enumerate}
\end{thm}
\begin{proof}
Let $f$ be an IASFL defined on a given graph $G$, with respect to a non-empty ground set $X$. Then, $\mathcal{F}=f(V)$ is a filter on $X$ and hence $X\in \mathcal{F}$; say $X=f(v)$ for some vertex $v$ of $G$. Since $G$ has no isolated vertices, $v$ is adjacent to some vertex $u$, and the set-label of the edge $uv$ is the sumset $X+f(u)$, which must again be a subset of the ground set $X$. If $f(u)$ contained a non-zero element $a$, then $X+\{a\}\subseteq X+f(u)\subseteq X$; but the sets $X$ and $X+\{a\}$ have the same cardinality while $X+\{a\}\neq X$ (its maximum element exceeds that of $X$), which is impossible. Hence $f(u)=\{0\}$, so $\{0\}\in\mathcal{F}$ and, in particular, $0$ is an element of $X$. Then, by condition (iv) of Definition \ref{D-SF1}, every subset of $X$ containing $0$ must belong to $\mathcal{F}$; since $\mathcal{F}=f(V)$, every such subset is the set-label of some vertex of $G$. For any two subsets $X_i$ and $X_j$ of $X$, $0\in X_i,\; 0\in X_j\implies 0\in X_i\cap X_j$, and hence $X_i\cap X_j$ also belongs to $\mathcal{F}$. If possible, let the set-label $X_i$ of a vertex $v_i$ of $G$ not contain $0$. Then $\{0\} \cap X_i=\emptyset$, which cannot belong to $\mathcal{F}$, contradicting the fact that $\mathcal{F}$ is a filter on $X$. Hence, no subset of $X$ which does not contain $0$ belongs to $\mathcal{F}$; that is, $0$ is an element of the set-label of every vertex of $G$.
Conversely, assume that the set-label of every vertex of $G$ contains $0$ and that every subset of $X$ containing $0$ is the set-label of some vertex of $G$. Since $0\in X$, the set $X$ itself is the set-label of some vertex, so $X\in \mathcal{F}$; moreover, $\emptyset\notin\mathcal{F}$, since every set-label contains $0$. If $X_i$ and $X_j$ are the set-labels of two vertices in $G$, then both $X_i$ and $X_j$ contain the element $0$ and hence $X_i\cap X_j$ also contains $0$; therefore, by the assumption, $X_i\cap X_j$ is also the set-label of some vertex in $G$. That is, $X_i, X_j \in \mathcal{F} \implies X_i\cap X_j \in \mathcal{F}$. Since the set-label $X_i$ of any vertex $v_i$ of $G$ contains $0$, every superset $X_j\subseteq X$ of $X_i$ also contains the element $0$ and is therefore, by the hypothesis, the set-label of some vertex of $G$. That is, $X_i\in \mathcal{F}, ~ X_i \subset X_j \implies X_j\in \mathcal{F}$. Therefore, $\mathcal{F}$ is a filter on $X$. Hence, $f$ is an IASFL on $G$.
\end{proof}
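For a small ground set, the filter conditions of Definition \ref{D-SF1} and the characterization just proved can be verified by brute force. The following Python sketch (helper functions of our own, added purely for illustration) checks that the family of all subsets of $X$ containing $0$ is a proper filter on $X$.

```python
from itertools import combinations

def subsets(X):
    X = sorted(X)
    for r in range(len(X) + 1):
        for S in combinations(X, r):
            yield frozenset(S)

def is_filter(F, X):
    # the four filter conditions: X in F, closure under intersection,
    # the empty set not in F, and upward closure within P(X)
    F = {frozenset(A) for A in F}
    return (frozenset(X) in F
            and frozenset() not in F
            and all(A & B in F for A in F for B in F)
            and all(B in F for A in F for B in subsets(X) if A <= B))

X = {0, 1, 2, 3}
F = [S | {0} for S in subsets(set(X) - {0})]   # all subsets of X containing 0
print(len(F), is_filter(F, X))                 # 8 True
```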
\noindent From the above theorem we notice that not all graphs possess IASFLs. Hence, a characterisation of the graphs that admit IASFLs is of much interest. In view of Theorem \ref{T-IASFL1}, we now proceed to find the characteristics and properties of the graphs which admit IASFLs.
\noindent The following results are immediate consequences of Theorem \ref{T-IASFL1}.
\begin{cor}
If a graph $G$ admits an IASFL, then $G$ has $2^{|X|-1}$ vertices.
\end{cor}
\begin{proof}
Note that $0\in X$ and let $|X|=n$. The number of $r$-element subsets of $X$ with a common element $0$ is $\binom{|X|-1}{r-1}$. Therefore, the number of subsets of $X$ containing the element $0$ is $\sum\limits_{i=0}^{n-1}\binom{n-1}{i}=2^{n-1}$. This completes the proof.
\end{proof}
\begin{cor}\label{C-IASFL2}
If a given graph $G$ admits an IASFL $f$, then only one vertex of $G$ can have a singleton set-label.
\end{cor}
\begin{proof}
Let $G$ be an IASF-graph. Then, by Theorem \ref{T-IASFL1}, $\{0\}$ is a set-label of some vertex in $G$. Let $a$ be a non-zero element in $X$. If $\{a\}$ is the set-label of some vertex of $G$, then the set $\{0\}\cap \{a\}=\emptyset$ must belong to $\mathcal{F}=f(V)$, which is a contradiction to Condition (iii) of Definition \ref{D-SF1}. Therefore, only one vertex of $G$ can have a singleton set-label. (That is, the only possible singleton set-label in $\mathcal{F}$ is $\{0\}$).
\end{proof}
Next, we establish the relation between the collection of the set-labels of vertices and the collection of the set-labels of the edges of an IASF-graph $G$ in the following result.
\begin{prop}\label{P-IASFL-ev}
If $f$ is an IASFL of a graph $G$, then $f^+(E(G))\subseteq f(V(G))$.
\end{prop}
\begin{proof}
If $u$ and $v$ are any two adjacent vertices of the IASF-graph $G$, then $f(u)$ and $f(v)$ contain $0$ and hence, being the sumset of $f(u)$ and $f(v)$, the set-label $f^+(uv)$ also contains the element $0$. Since every subset of $X$ containing $0$ is the set-label of some vertex in $G$, the set-label of the edge $uv$ will also be the set-label of some vertex in $G$. Therefore, $f^+(E)\subseteq f(V)$.
\end{proof}
\noindent The following theorem is a consequence of Theorem \ref{T-IASFL1}.
\begin{thm}\label{T-IASFL2}
If a given graph $G$ admits an integer additive set-filtered labeling $f$, then every element of the collection $\mathcal{F}=f(V(G))$ belongs to some finite chain of sets in $\mathcal{F}$ of the form $\{0\} =f(v_1)\subset f(v_2)\subset f(v_3) \subset \ldots\ldots \subset f(v_r)=X$.
\end{thm}
\begin{proof}
Let $f$ be an IASFL defined on a graph $G$ and $\mathcal{F}$ be the collection of all set-labels of the vertices in $G$. Then, by Theorem \ref{T-IASFL1}, both $\{0\}$ and $X$ are in $\mathcal{F}$. Since every set-label in $\mathcal{F}$ contains $0$, $\{0\}$ is a subset of all set-labels in $\mathcal{F}$. Since $\mathcal{F} \subseteq \mathcal{P}(X)$, $X$ is the maximal set in $\mathcal{F}$, containing all sets in $\mathcal{F}$. Since $\mathcal{F}$ is a filter on $X$, if a subset $X_i$ of $X$ belongs to $\mathcal{F}$, then every subset of $X$ containing $X_i$ is also in $\mathcal{F}$. Therefore, every set-label $X_i\in\mathcal{F}$ lies in some finite chain $\{0\}\subset \cdots \subset X_i\subset X_j \subset \cdots \subset X$ of subsets of $X$ belonging to $\mathcal{F}$. That is, every set-label in $\mathcal{F}$ is contained in some finite chain of subsets of $X$ whose least element is $\{0\}$ and whose maximal element is $X$.
\end{proof}
We have already identified the number of vertices required for a graph to admit an IASFL with respect to a given ground set $X$. In this context, it is interesting to examine certain structural properties of a graph that admits an IASFL. Hence, we have
\begin{thm}\label{C-IASFL1}
If a graph $G$ admits an integer additive set-filtered labeling, with respect to a non-empty ground set $X$, then $G$ must have at least $2^{|X|-2}$ pendant vertices that are incident on a single vertex of $G$.
\end{thm}
\begin{proof}
Let $f$ be an IASFL defined on a given graph $G$. Then by Theorem \ref{T-IASFL1}, every subset of $X$ containing the element $0$ must belong to $\mathcal{F}$. Let $x_l$ be the maximal element of $X$. Then, for any non-zero element $x$ in $X$, $x+x_l\not\in X$.
Therefore, if $X_l$ is a subset of $X$ containing $x_l$, then the vertex having $X_l$ as its set-label cannot be adjacent to any vertex of $G$ other than the one that has the set-label $\{0\}$. Hence, the vertices whose set-labels are subsets of $X$ containing $x_l$, including $X$ itself, can be adjacent only to the vertex having the set-label $\{0\}$. Note that the number of subsets of $X$ containing $0$ and $x_l$ is $2^{|X|-2}$. Therefore, the minimum number of pendant vertices in $G$ is $2^{|X|-2}$.
\end{proof}
Figure \ref{G-IASFG1} elucidates an IASF-graph with $2^{|X|-2}$ pendant vertices incident on a single vertex, where $X=\{0,1,2,3,4\}$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.75\linewidth]{G-IASFG1}
\caption{An example of an IASF-graph}
\label{G-IASFG1}
\end{figure}
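For the ground set $X=\{0,1,2,3,4\}$ used in the figure above, the two counts obtained so far, $2^{|X|-1}$ vertices and at least $2^{|X|-2}$ forced pendant vertices, can be confirmed by direct enumeration; the short Python sketch below is only an illustration of the counting argument.

```python
from itertools import combinations

X = [0, 1, 2, 3, 4]

# vertex set-labels of an IASF-graph: all subsets of X containing 0
labels = [set(S) | {0} for r in range(len(X)) for S in combinations(X[1:], r)]
print(len(labels))          # 2^(|X|-1) = 16 vertices

# a label containing the maximal element of X can only be joined to the vertex labelled {0}
forced_pendant = [L for L in labels if max(X) in L]
print(len(forced_pendant))  # 2^(|X|-2) = 8 pendant vertices
```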
In view of the discussions we have made so far, we notice the following.
\begin{enumerate}\itemsep0mm
\item The existence of an IASFL is not a hereditary property. That is, an IASFL of a graph need not induce an IASFL for all of its subgraphs.
\item For $n\ge 3$, no path $P_n$ admits an IASFL.
\item No cycles admit IASFLs and as a result neither Eulerian graphs nor Hamiltonian graphs admit IASFLs.
\item Neither complete graphs nor complete bipartite graphs admit IASFLs. For $r>2$, complete $r$-partite graphs also do not admit IASFLs.
\item Graphs having an odd number of vertices never admit an IASFL.
\end{enumerate}
Another important property of IASFLs is that the existence of an IASFL is a {\em monotone} property. That is, removing any non-leaf edge of an IASF-graph preserves the IASFL of that graph.
\section{Relation Between Different IASLs}
In this section, let us examine the relation between an IASFL of a graph $G$ and certain other types of IASLs of $G$. First, recall the definition of an exquisite IASL of a given graph $G$.
\begin{defn}{\rm
\cite{GS12} An {\em exquisite integer additive set-labeling} (EIASL, in short) is defined as an integer additive set-labeling $f:V(G)\to \mathcal{P}(\mathbb{N}_0)$, with the induced function $f^+:E(G) \to \mathcal{P}(\mathbb{N}_0)$ defined by $f^+ (uv) = f(u)+ f(v),~ uv\in E(G)$, which satisfies the condition $f(u),f(v)\subseteq f^+(uv)$ for all adjacent vertices $u, v\in V(G)$. }
\end{defn}
The following theorem is a necessary and sufficient condition for an IASL of a graph $G$ to be an EIASL of $G$.
\begin{thm}\label{T-EIASL}
{\rm \cite{GS12}} Let $f$ be an IASL of a given graph $G$. Then, $f$ is an EIASL of $G$ if and only if $0$ is an element in the set-label of every vertex in $G$.
\end{thm}
Invoking Theorem \ref{T-EIASL}, we establish the following relation between an IASFL and an exquisite IASL of a given graph $G$.
\begin{prop}
Every IASFL of a graph $G$ is also an exquisite IASL of $G$.
\end{prop}
\begin{proof}
Let $f$ be an IASFL of a given graph $G$. Then, by Theorem \ref{T-IASFL1}, the set-label of every vertex of $G$ contains $0$. Then by theorem \ref{T-EIASL}, $f$ is also an exquisite IASL of $G$.
\end{proof}
It is to be noted that, for an exquisite IASL $f$ of a graph $G$, $f(V)$ need not contain all the subsets of the ground set $X$ containing $0$. Therefore, every exquisite IASL of a graph $G$ need not be an IASFL of $G$.
Figure \ref{G-EIASL1} depicts an exquisite IASL of a graph $G$ with respect to the ground set $X=\{0,1,2,3,4\}$, which is not an IASFL of $G$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.65\linewidth]{G-EIASL1}
\caption{An example of an exquisite IASL of $G$ which is not an IASFL of $G$.}
\label{G-EIASL1}
\end{figure}
Let us now consider the notions of integer additive set-graceful graphs and integer additive set-sequential graphs, which are defined as follows.
\begin{defn}{\rm
\cite{GS14,GS15} Let $f:V(G)\to \mathcal{P}(X)-\{\emptyset\}$ be an IASL defined on a graph $G$. Then, $f$ is called an {\em integer additive set-graceful labeling} (IASGL) of $G$ if $f^{+}(E(G))=\mathcal{P}(X)-\{\emptyset,\{0\}\}$ and $f$ is called an {\em integer additive set-sequential labeling} (IASSL) of $G$ if $f(V(G))\cup f^{+}(E(G))=\mathcal{P}(X)-\{\emptyset,\{0\}\}$. }
\end{defn}
The following result checks whether an IASFL of a given graph $G$ can be an IASGL of the graph $G$.
\begin{prop}
No IASFL defined on a given graph $G$ is an IASGL of $G$.
\end{prop}
\begin{proof}
Let $f$ be an IASFL defined on $G$. By Proposition \ref{P-IASFL-ev}, $f^+(E(G))\subseteq f(V(G))$. Hence, the set-labels of all edges of $G$ also contain the element $0$. That is, any subset $X_r$ of $X$ that does not contain $0$ will not be in $f^+(E(G))$. Therefore, $f^+(E(G))\neq \mathcal{P}(X)-\{\{0\},\emptyset\}$. Hence, $f$ is not an IASGL of $G$.
\end{proof}
\noindent The following results can also be proved in a similar manner.
\begin{prop}
No IASFL defined on a given graph $G$ is an IASSL of $G$.
\end{prop}
\begin{proof}
We have already proved that the set-labels of all elements of an IASF-graph $G$ contain the element $0$. Therefore, the set $f(V)\cup f^+(E)$ contains only those subsets of $X$ which contain $0$. That is, $f(V(G))\cup f^{+}(E(G))\ne \mathcal{P}(X)-\{\emptyset,\{0\}\}$. Hence, $f$ is not an IASSL of $G$.
\end{proof}
Another important type of IASL known to us is the topological IASL, which is defined in \cite{GS13} as follows.
\begin{defn}{\rm
\cite{GS13} An integer additive set-labeling $f:V(G)\to \mathcal{P}(X)-\{\emptyset\}$ is called a {\em topological integer additive set-labeling} (TIASL) of $G$ if $f(V(G))\cup \{\emptyset\}$ is a topology on $X$.}
\end{defn}
Can an IASFL of a given graph $G$ be a topological IASL of $G$? A relation between an IASFL and an TIASL of a graph $G$ is established in the following result.
\begin{prop}
Every IASFL of a graph $G$ is also a topological IASL of $G$.
\end{prop}
\begin{proof}
Let $f$ be an IASFL of a given graph $G$, with respect to a non-empty set $X$. Then $\mathcal{F}=f(V(G))$ is a filter on $X$. Let $\mathscr{T}=\mathcal{F}\cup \{\emptyset\}$. To show that $f$ is a TIASL of $G$, we need to show that $\mathscr{T}$ is a topology on $X$. Since $X\in \mathcal{F}$, we have $\emptyset, X \in \mathscr{T}$. Since $X$ is a finite set and $\mathcal{F}$ contains all subsets of $X$ containing $0$, the union of any number of elements of $\mathscr{T}$ is either $\emptyset$ or a subset of $X$ containing the element $0$, and hence belongs to $\mathscr{T}$. Similarly, the intersection of any two non-empty sets in $\mathscr{T}$ contains the element $0$ and hence belongs to $\mathscr{T}$, while any intersection involving $\emptyset$ is $\emptyset\in\mathscr{T}$. Therefore, $\mathscr{T}$ is a topology on $X$. This completes the proof.
\end{proof}
If an IASL $f$ of a graph $G$ is an IASFL of $G$, then $f(V(G))$ contains only those subsets of $X$ which contain the element $0$, and hence not every topological IASL of $G$, with respect to $X$, is an IASFL of $G$.
Figure \ref{G-TIASL1} depicts a topological IASI of a graph $G$ which is not an IASFL of $G$ with respect to the ground set $X=\{0,1,2,3\}$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\linewidth]{G-TIASL1}
\caption{An example of a TIASL of $G$ which is not an IASFL of $G$.}
\label{G-TIASL1}
\end{figure}
Another important type of IASL of a given graph $G$ is a weak IASL of $G$, which is defined as follows.
\begin{defn}{\rm
\cite{GS1} A {\em weak integer additive set-labeling} (WIASL) of a graph $G$ is an IASL $f$ such that $|f^+(uv)| = \max(|f(u)|,|f(v)|)$ for all $u, v \in V(G)$.}
\end{defn}
The following is a necessary and sufficient condition for an IASL to be a weak IASL of a given graph $G$.
\begin{lem}\label{L-WIASL}
{\rm \cite{GS1}} Let $f$ be an IASL defined on a given graph $G$. Then, $f$ is a WIASL of $G$ if and only if at least one end vertex of every edge of $G$ is mono-indexed.
\end{lem}
An interesting question in this context is whether an IASFL of a given graph $G$ can be a weak IASL. The following result provides an answer to this question.
\begin{prop}
An IASFL of a graph $G$ is a weak IASL of $G$ if and only if $G$ is a star.
\end{prop}
\begin{proof}
Let $G=K_{1,n}$, where $n=2^{|X|-1}-1$, $X$ being the ground set used for the set-labeling, and let $f$ be an IASFL defined on $G$. Then, label the vertex $v$ at the centre of $G$ by the set $\{0\}$ and the other vertices by the other subsets of $X$ containing $0$. Therefore, every edge of $G$ has one mono-indexed end vertex. Hence, by Lemma \ref{L-WIASL}, $f$ is a weak IASL of $G$.
Conversely, assume that an IASFL $f$ of $G$ is a weak IASL of $G$. Then, by Lemma \ref{L-WIASL}, every edge of $G$ must have at least one mono-indexed end vertex. But, by Corollary \ref{C-IASFL2}, the only singleton set-label in $\mathcal{F}$ is $\{0\}$. Therefore, the vertex, say $v$, having the set-label $\{0\}$ must be adjacent to all other vertices of $G$, and the graph $G-v$ has no edges. Therefore, $G$ is a star.
\end{proof}
\noindent Next, recall the definition of a strong IASL of a given graph $G$.
\begin{defn}{\rm
\cite{GS2} A {\em strong integer additive set-labeling} (SIASL) of $G$ is an IASL $f$ such that $|f^+(uv)| = |f(u)|\,|f(v)|$ for all adjacent vertices $u, v \in V(G)$.}
\end{defn}
The {\em difference set} of a set $A$ is the set of all positive differences between the elements of $A$. The difference set of a set $A$ is denoted by $D_A$.
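As a small illustration of this notion (the helper name is ours), the sketch below computes two difference sets and checks that they are disjoint, which is the condition appearing in the lemma that follows.

```python
def difference_set(A):
    # all positive differences between elements of A
    return {abs(a - b) for a in A for b in A if a != b}

print(difference_set({0, 1, 4}))                           # {1, 3, 4}
print(difference_set({0, 2}) & difference_set({0, 3}))     # set(): disjoint difference sets
```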
Then, the following result is a necessary and sufficient condition for an IASL (or IASI) to be a SIASL (or SIASI) of a given graph $G$.
\begin{lem}\label{L-SIASL}
{\rm \cite{GS2}} Let $f$ be an IASL defined on a given graph $G$. Then, $f$ is a SIASL of $G$ if and only if the difference sets of any two adjacent vertices of $G$ are disjoint.
\end{lem}
\noindent Can a given IASFL $f$ of a given graph $G$ be a strong IASL of $G$? We know that $f$ is a strong IASL of $G$ if the difference sets of the set-labels of any two adjacent vertices of $G$ are disjoint. Using this result, we wish to verify whether there is any relation between an IASFL and a strong IASL of $G$.
\noindent Invoking Lemma \ref{L-SIASL} and Theorem \ref{T-IASFL1}, we propose the following result.
\begin{prop}
If an IASFL $f$ of a graph $G$ is a strong IASL of $G$, then $f(u)\cap f(v) = \{0\}$ for any two adjacent vertices $u$ and $v$ of $G$.
\end{prop}
\begin{proof}
Assume that $f$ is an IASFL defined on a graph $G$, and let $u$ and $v$ be two adjacent vertices of $G$. Now, assume that $f$ is a strong IASL. Then, by Lemma \ref{L-SIASL}, $D_{f(u)}\cap D_{f(v)}=\emptyset$. If $f(u)$ and $f(v)$ had a common non-zero element, say $a$, then, since $0$ belongs to both set-labels, the positive difference $a-0=a$ would belong to both $D_{f(u)}$ and $D_{f(v)}$, contradicting the fact that $f$ is a strong IASL. Therefore, the set-labels of any two adjacent vertices have $0$ as their only common element.
\end{proof}
It can be noted that the conditions $f(u)\cap f(v)=\{0\}$ and $D_{f(u)}\cap D_{f(v)}=\emptyset$, even together, do not imply that every subset of $X$ containing $0$ is the set-label of some vertex of $G$. Therefore, a strong IASL of $G$ need not be an IASFL of $G$.
Figure \ref{G-SIASL1} depicts a strong IASL of a graph $G$, with respect to the ground set $\{0,1,2,3\}$, which is not an IASFL of $G$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.55\linewidth]{G-SIASL1}
\caption{An example of a strong IASL of $G$ which is not an IASFL of $G$.}
\label{G-SIASL1}
\end{figure}
Another type of IASL which remains to be considered here is an arithmetic IASL of a graph $G$. An arithmetic IASL of a graph $G$ is an IASL $f$ with respect to which the set-labels of all elements of $G$ are AP-sets (an AP-set is a set whose elements are in arithmetic progression). Since an AP-set must have at least three elements, an IASFL of a graph $G$ cannot be an arithmetic IASL.
\section{Conclusion}
In this paper, we have introduced a new type of integer additive set-labeling, called an integer additive set-filtered labeling, and have discussed certain characteristics and structural properties of graphs which admit this type of IASL. We have also discussed the relations, if any, with the other known types of IASLs. Several other problems in this area are still open. The following are some of the problems we have identified in this area which need further investigation.
\begin{prob}
Determine a necessary and sufficient condition for an integer additive set-filtered labeling of a given graph $G$ to be an integer additive set-filtered indexer of $G$.
\end{prob}
\begin{prob}
Characterise the graphs which admit integer additive set-filtered indexers.
\end{prob}
\begin{prob}
Check the admissibility of IASFL by different operations and products of IASF-graphs.
\end{prob}
\begin{prob}
Check the admissibility of IASFL by the complement of IASF-graphs.
\end{prob}
\begin{prob}
Check the admissibility of IASFL by certain graph classes.
\end{prob}
\begin{prob}
Check the admissibility of an induced IASFL by certain associated graphs such as line graphs, total graphs, subdivisions, homeomorphic graphs etc. of given IASF-graphs.
\end{prob}
An IASL (or IASI) is said to be {\em $k$-uniform} if $|f^+(e)| = k$ for all $e\in E(G)$. That is, a connected graph $G$ is said to have a $k$-uniform IASL (or IASI) if all of its edges have the same set-indexing number $k$.
\begin{prob}
Determine the conditions required for an IASFL of a given graph to be a uniform IASFL.
\end{prob}
Studies on certain other types of integer additive set-labelings of graphs, both uniform and non-uniform, seem to be very promising. The integer additive set-labelings under which the vertices of a given graph are labeled by different standard sequences of non-negative integers are also worth studying. All these facts highlight a wide scope for further studies in this area.
| {
"timestamp": "2015-07-09T02:10:17",
"yymm": "1507",
"arxiv_id": "1507.02173",
"language": "en",
"url": "https://arxiv.org/abs/1507.02173",
"abstract": "Let $\\mathbb{N}_0$ denote the set of all non-negative integers and $\\mathcal{P}(\\mathbb{N}_0)$ be its power set. An integer additive set-labeling (IASL) of a graph $G$ is an injective function $f:V(G)\\to \\mathcal{P}(\\mathbb{N}_0)$ such that the induced function $f^+:E(G) \\to \\mathcal{P}(\\mathbb{N}_0)$ is defined by $f^+ (uv) = f(u)+ f(v)$, where $f(u)+f(v)$ is the sumset of $f(u)$ and $f(v)$. In this paper, we introduce the notion of a particular type of integer additive set-indexers called integer additive set-filtered labeling of given graphs and study their characteristics.",
"subjects": "General Mathematics (math.GM)",
"title": "On Integer Additive Set-Filtered Graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.987375049232425,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7095221796932302
} |
https://arxiv.org/abs/1710.07634 | Fractional Newton-Raphson Method | The Newton-Raphson (N-R) method is useful to find the roots of a polynomial of degree n. However, this method is limited since it diverges for the case in which polynomials only have complex roots if a real initial condition is taken. In the present work, we explain an iterative method that is created using the fractional calculus, which we will call the Fractional Newton-Raphson (F N-R) Method, which has the ability to enter the space of complex numbers given a real initial condition, which allows us to find both the real and complex roots of a polynomial unlike the classical Newton-Raphson method. | \section{Newton-Raphson Method}
For the one-dimensional case, the N-R method is one of the most widely used methods to find the roots $x_*$ of a function $f:\nset{R} \to \nset{R}$, with $f\in C^2(\nset{R})$, $i.e.$, $\set{x_*\in \nset{R} \ : \ f(x_*)=0}$, thanks to its easy implementation and its rapid convergence. The N-R method is expressed in terms of an iteration function $\Phi: \nset{R}\to \nset{R}$ as follows \cite{plato2003concise}
\begin{eqnarray}\label{eq:1}
\small
\begin{array}{cc}
x_{n+1}:= \Phi(x_n)=x_n - \dfrac{f(x_n)}{ Df(x_n)},& n=0,1,\cdots
\end{array}
\end{eqnarray}
where $D= d/dx$.
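The following minimal Python sketch of the classical iteration above is added here only for illustration: it converges rapidly for $f(x)=x^2-2$, while for $f(x)=x^2+1$, whose roots are purely complex, a real initial condition keeps every iterate real, so no root can ever be reached.

```python
def newton(f, df, x0, steps=50):
    x = x0
    for _ in range(steps):
        d = df(x)
        if d == 0:          # guard against a vanishing derivative
            break
        x = x - f(x) / d
    return x

# a real root is found quickly
print(newton(lambda x: x ** 2 - 2, lambda x: 2 * x, 1.0))      # ~1.41421356

# f(x) = x^2 + 1 has only complex roots: the real iterates wander and never converge
x = newton(lambda x: x ** 2 + 1, lambda x: 2 * x, 0.7)
print(x, x ** 2 + 1)
```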
\begin{figure}[!ht]
\includegraphics[width=0.5\textwidth, height=0.2\textheight]{Im_N-R_1}
\caption{ \small{Illustration of N-R method \cite{plato2003concise} } }
\end{figure}
The N-R method is based on creating a sequence $\set{x_n}$ of points by means of the intersection of the tangent line of the function $f(x)$ at the point $x_n$ with the $x$ axis. If the initial condition $x_0$ is close enough to the root $x_*$, the sequence $\set{x_n}$ converges to that root \cite{stoer2013}.
This method is characterized by having a convergence order at least quadratic when $Df(x_*)\neq 0$. On the other hand, if $f(x)$ is a polynomial that presents a root $x_*$ of a certain multiplicity $m$, with $m\in \nset{N}$, $i.e.$,
\begin{eqnarray*}
\small
\begin{array}{cc}
f(x)=(x-x_*)^mg(x), & g(x_*)\neq 0,
\end{array}
\end{eqnarray*}
the N-R method converges at least linearly \cite{plato2003concise}.
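The loss of quadratic convergence at a multiple root is easy to observe numerically. In the sketch below (an illustration we add, not taken from \cite{plato2003concise}) the error for the double root of $f(x)=(x-1)^2$ is merely halved at each step.

```python
def newton_step(f, df, x):
    return x - f(x) / df(x)

# double root at x = 1: the error e_k = x_k - 1 satisfies e_{k+1} = e_k / 2
x = 2.0
for _ in range(6):
    x = newton_step(lambda t: (t - 1.0) ** 2, lambda t: 2.0 * (t - 1.0), x)
    print(x - 1.0)   # 0.5, 0.25, 0.125, ...
```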
\section{Fractional Calculus}
Fractional calculus is a branch of mathematical analysis whose applications have been increasing since the late XX century and early XXI century \cite{bramb17}. It arose around 1695 from the notation of Leibniz for derivatives of integer order
\begin{eqnarray*}
\small
\begin{array}{cc}
\dfrac{d^n}{dx^n}f(x), & n\in \nset{N},
\end{array}
\end{eqnarray*}
Thanks to this notation, L'Hopital was able to ask Leibniz in a letter about the interpretation of taking $n = 1/2$ in a derivative. Since at that time Leibniz could not give a physical or geometric interpretation to this question, he limited himself to answering L'Hopital in a letter: \com{$\dots$ this is an apparent paradox from which, one day, useful consequences will be extracted} \cite{miller93}. The name of fractional calculus comes from this historical question, since in this branch of mathematical analysis we study derivatives and integrals of arbitrary order $\alpha$, with $\alpha \in \nset{R}$ or $\nset{C}$.
Currently, fractional calculus does not have a unified definition of what is considered a fractional derivative; this is because one of the conditions required to consider an expression a fractional derivative is that the results of the conventional calculus are recovered when the order $\alpha \to n$, with $n \in \nset{N}$ \cite{oldham74}. Among the most common definitions of fractional derivatives are the fractional derivative of Riemann-Liouville (R-L) and the fractional derivative of Caputo \cite{hilfer00}. The latter is usually the most studied, since the fractional derivative of Caputo allows one to give a physical interpretation to problems with initial conditions; this derivative complies with the property of the classical calculus that the derivative of a constant is null regardless of the order $\alpha$ of the derivative, which does not occur with the fractional derivative of R-L.
Unlike the fractional derivative of Caputo, the fractional derivative of R-L does not allow one to give a physical interpretation to problems with initial conditions, because its use induces fractional initial conditions. However, the fact that this derivative does not annihilate constants for orders $\alpha$ with $\alpha\notin \nset{N}$ allows one to obtain a \com{spectrum} of the behavior of the constants for different orders of the derivative, which is not possible with the conventional calculus.
It must be taken into account that functions which have neither a classical derivative nor a weak derivative may still possess a fractional derivative \cite{umarov}; however, that is a subject that exceeds the purpose of this document.
\subsection{Fractional Derivative of Riemann-Liouville}
The R-L fractional derivative operator is constructed in a simplified way, taking into account that the integral operator of a locally integrable function is defined as
\begin{eqnarray*}
\small
\begin{array}{c}
\displaystyle{ \ifr{a}{}{I}{x}{}f(x)}:=\int_a^x f(t)dt,
\end{array}
\end{eqnarray*}
applying twice the integral operator we obtain
\begin{eqnarray*}
\small
\begin{array}{c}
\ds{
\ifr{a}{}{I}{x}{2}f(x)= \int_a^x \left( \int_a^{x_1} f(t)dt \right)dx_1 =\int_a^x \ifr{a}{}{I}{x_1}{}f(x_1)dx_1 ,}
\end{array}
\end{eqnarray*}
making integration by parts taking $u=\ifr{a}{}{I}{x_1}{}f(x_1)$ and $dv=dx_1$
\begin{eqnarray*}
\small
\begin{array}{ccc}
\ifr{a}{}{I}{x}{2}f(x)&=& \ds{x_1 \ifr{a}{}{I}{x_1}{}f(x_1)\Big{|}_a^x- \int_a^x x_1f(x_1)dx_1 }\\
&=&\ds{ x \ifr{a}{}{I}{x}{}f(x)- \ifr{a}{}{I}{x}{}xf(x) } \\
&=&\ds{ \int_a^x(x-t)f(t)dt,}
\end{array}
\end{eqnarray*}
repeating the previous process applying three times the integral operator we obtain
\begin{eqnarray*}
\small
\begin{array}{c}
\ds{
\ifr{a}{}{I}{x}{3}f(x) =\int_a^x \ifr{a}{}{I}{x_1}{2}f(x_1)dx_1 ,}
\end{array}
\end{eqnarray*}
making integration by parts taking $u=\ifr{a}{}{I}{x_1}{2}f(x_1)$ and $dv=dx_1$
\begin{eqnarray*}
\small
\begin{array}{ccc}
\ifr{a}{}{I}{x}{3}f(x)&=& \ds{x_1 \ifr{a}{}{I}{x_1}{2}f(x_1)\Big{|}_a^x- \int_a^x x_1 \ifr{a}{}{I}{x_1}{} f(x_1)dx_1 }\\
&=&\ds{ x \ifr{a}{}{I}{x}{2}f(x)- \ifr{a}{}{I}{x}{} \left( x\ifr{a}{}{I}{x}{} f(x) \right) } \\
&=&\ds{ \int_a^x(x-x_1)\ifr{a}{}{I}{x_1}{} f(x_1)dx_1,}
\end{array}
\end{eqnarray*}
realizing again integration by taking parts $u=\ifr{a}{}{I}{x_1}{}f(x_1)$ and $dv=(x-x_1)dx_1$
\begin{eqnarray*}
\small
\begin{array}{ccc}
\ifr{a}{}{I}{x}{3}f(x)&=&\ds{ -\dfrac{(x-x_1)^2}{2} \ifr{a}{}{I}{x_1}{}f(x_1) \Big{|}_a^x+ \int_a^x \frac{ (x-x_1)^2}{2} f(x_1)dx_1 } \\
&=& \ds{ \dfrac{1}{2}\int_a^x(x-t)^2 f(t)dt},
\end{array}
\end{eqnarray*}
repeating the above process $n$ times it is possible to obtain the expression known as the $n$-fold iterated integral \cite{hilfer00}
\begin{eqnarray*}
\small
\begin{array}{c}
\ds{
\ifr{a}{}{I}{x}{n}f(x)= \dfrac{1}{(n-1)!}\int_a^x(x-t)^{n-1}f(t)dt,}
\end{array}
\end{eqnarray*}
to generalize the previous expression, it is enough to take into account the relationship between the gamma function and the factorial function $(\gam{n}=(n-1)!)$ and to let $n\to \alpha$, with $\alpha \in \nset{C}( \re{\alpha}>0)$; the expression for the fractional integral operator of R-L is obtained \cite{hilfer00}
\begin{eqnarray}\label{eq:14}
\small
\begin{array}{c}
\ds{\ifr{a}{}{I}{x}{\alpha}f(x)= \dfrac{1}{\gam{\alpha}}\int_a^x(x-t)^{\alpha-1}f(t)dt,}
\end{array}
\end{eqnarray}
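As a quick numerical sanity check of this operator (an illustration we add, assuming SciPy is available), the quadrature below evaluates the fractional integral of $t^m$ at a point and compares it with the standard closed form $\frac{\Gamma(m+1)}{\Gamma(m+\alpha+1)}x^{m+\alpha}$, which is the integral counterpart of the derivative formula obtained later for monomials with $a=0$.

```python
from math import gamma
from scipy.integrate import quad

a, m, alpha, x = 0.0, 2, 1.5, 1.25

# Riemann-Liouville fractional integral of t^m at x, computed by direct quadrature
numeric, _ = quad(lambda t: (x - t) ** (alpha - 1) * t ** m, a, x)
numeric /= gamma(alpha)

# standard closed form for the fractional integral of x^m (with a = 0)
closed = gamma(m + 1) / gamma(m + alpha + 1) * x ** (m + alpha)
print(numeric, closed)   # the two values agree to quadrature accuracy
```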
taking into account that the differential operator is the inverse operator on the left of the integral operator, $i.e.$,
\begin{eqnarray*}
\small
\begin{array}{c}
D^n\ifr{a}{}{I}{x}{n}f(x)=f(x),
\end{array}
\end{eqnarray*}
we can consider extending this analogy to fractional calculus using the expression
\begin{eqnarray*}
\small
\begin{array}{c}
\ifr{a}{}{D}{x}{\alpha}f(x)=\ifr{a}{}{I}{x}{-\alpha}f(x),
\end{array}
\end{eqnarray*}
however this would cause convergence problems, because the gamma function is not defined for $\alpha \in \nset{Z}^{-}$; to solve this, one defines
\begin{eqnarray}\label{eq:15}
\small
\begin{array}{c}
\ifr{a}{}{D}{x}{\alpha}f(x):=D^n \ifr{a}{}{I}{x}{n} \ifr{a}{}{I}{x}{-\alpha}f(x)=D^n \ifr{a}{}{I}{x}{n-\alpha}f(x),
\end{array}
\end{eqnarray}
taking $\alpha \in \nset{C}(\re{\alpha}>0)$ with $n=\lfloor \re{\alpha}\rfloor +1$, we get the expression for the fractional derivative operator of R-L \cite{hilfer00}
\begin{eqnarray*}
\small
\begin{array}{c}
\ds{
\ifr{a}{}{D}{x}{\alpha}f(x)=\dfrac{1}{\gam{n-\alpha}} \dfrac{d^n}{dx^n} \int_a^x (x-t)^{n-\alpha-1} f(t)dt},
\end{array}
\end{eqnarray*}
the fractional derivative of R-L in its unified version with the fractional integral of R-L is given by the following expression
\begin{eqnarray*}\label{eq:2}
\small
\begin{array}{c}
\ifr{a}{}{D}{x}{\alpha}f(x)=
\left\{
\begin{array}{c}
\ds{
\dfrac{1}{\gam{-\alpha}} \int_a^x(x-t)^{-\alpha-1}f(t)dt},\ \ \re{\alpha}<0 \\
\\
\ds{
\dfrac{1}{\gam{n-\alpha}}\dfrac{d^n}{dx^n} \int_a^x(x-t)^{n-\alpha-1}f(t)dt}, \ \ \re{\alpha} \geq 0
\end{array}
\right.
\end{array}
\end{eqnarray*}
\begin{eqnarray}
{}
\end{eqnarray}
where $n=\lfloor \re{\alpha}\rfloor +1$.
The fractional derivative of R-L of a monomial of the form $f(x)=(x-c)^m$, with $m\in \nset{N}$ and $c\in \nset{R}$, through the equation \eqref{eq:2} is expressed as
\begin{eqnarray}\label{eq:13}
\small
\begin{array}{c}
\ds{\ifr{a}{}{D}{x}{\alpha}f(x)= \dfrac{1}{\gam{n-\alpha}} \dfrac{d^n}{dx^n}
\int_a^x(x-t)^{n-\alpha-1}(t-c)^mdt},
\end{array}
\end{eqnarray}
taking the variable change $ t = c + (x-c) u $ in the integral
\begin{eqnarray*}
\small
\begin{array}{c}
\ds{\int_a^x(x-t)^{n-\alpha-1}(t-c)^mdt}
= \\
\ds{(x-c)^{m+n-\alpha}\int_\frac{a-c}{x-c}^1(1-u )^{n-\alpha-1}u^m du} ,
\end{array}
\end{eqnarray*}
the previous result can be expressed in terms of the Beta function and the incomplete Beta function \cite{arfken85}
\begin{eqnarray*}
\small
\begin{array}{c}
\ds{(x-c)^{m+n-\alpha}\int_\frac{a-c}{x-c}^1(1-u )^{n-\alpha-1}u^m du
} = \\
\ds{(x-c)^{m+n-\alpha}\left[\int_0^1(1-u )^{n-\alpha-1}u^m du -\int_0^\frac{a-c}{x-c}(1-u )^{n-\alpha-1}u^m du \right]} \\
=\ds{(x-c)^{m+n-\alpha}\left[B(n-\alpha, m+1) - B_{\frac{a-c}{x-c}}(n-\alpha,m+1)\right]},
\end{array}
\end{eqnarray*}
so the equation \eqref{eq:13} can be rewritten as
\begin{eqnarray*}
\small
\begin{array}{c}
\ifr{a}{}{D}{x}{\alpha}f(x) =
\dfrac{\gam{m+1}}{\gam{m+n-\alpha+1}} \dfrac{d^n}{dx^n} \left\{ (x-c)^{m + n-\alpha}G \left( \dfrac{a-c}{x-c}\right) \right\} ,
\end{array}
\end{eqnarray*}
where
\begin{eqnarray*}
\small
\begin{array}{c}
G \left( \dfrac{a-c}{x-c}\right):= 1 -\dfrac{B_{\frac{a-c}{x-c}}(n-\alpha,m+1)}{B(n-\alpha,m+1)},
\end{array}
\end{eqnarray*}
when $c=a$ we have $G(0)=1$, and then
\begin{eqnarray*}
\small
\begin{array}{c}
\ifr{a}{}{D}{x}{\alpha}(x-a)^m =
\dfrac{\gam{m+1}}{\gam{m+n-\alpha+1}} \dfrac{d^n}{dx^n} (x-a)^{m + n-\alpha} ,
\end{array}
\end{eqnarray*}
taking into account that in the conventional calculus
\begin{eqnarray*}
\small
\begin{array}{c}
\dfrac{d^n}{dx^n}x^k= \dfrac{k!}{(k-n)!}x^{k-n} = \dfrac{\gam{k+1}}{\gam{k-n+1}}x^{k-n},
\end{array}
\end{eqnarray*}
we get the fractional derivative of R-L for a monomial of the form $f(x)=(x-a)^m$
\begin{eqnarray}\label{eq:3}
\small
\begin{array}{c}
\ifr{a}{}{D}{x}{\alpha}(x-a)^m =
\dfrac{\gam{m+1}}{\gam{m-\alpha+1}} (x-a)^{m -\alpha} .
\end{array}
\end{eqnarray}
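The closed formula just obtained is straightforward to evaluate; the following Python sketch (an illustration we add here) implements it and checks that the classical derivative is recovered when $\alpha=1$.

```python
from math import gamma

def rl_deriv_monomial(m, alpha, x, a=0.0):
    # Riemann-Liouville derivative of (x - a)^m of order alpha, per the formula above
    return gamma(m + 1) / gamma(m - alpha + 1) * (x - a) ** (m - alpha)

print(rl_deriv_monomial(3, 1.0, 2.0), 3 * 2.0 ** 2)   # both equal 12.0
print(rl_deriv_monomial(3, 0.5, 2.0))                  # a half-order derivative of x^3 at x = 2
```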
\section{Fractional N-R Method}
The N-R method is useful to find the roots of a polynomial of degree $n$, with $n\in \nset{N}$; however, it is limited in that it cannot find the complex roots of the polynomial if a real initial condition is taken. To solve this problem, and to develop a method that is able to find both the real and the complex roots of a polynomial regardless of whether the initial condition is real, we make use of the N-R method with the implementation of the fractional derivative of R-L.
Taking into account that a polynomial of degree $n$ is composed of $n + 1$ monomials of the form $x^m$, with $m\in \nset{N}$, we can take the equation \eqref{eq:3} with $a=0$, getting
\begin{eqnarray}\label{eq:4}
\small
\begin{array}{c}
\ifr{0}{}{D}{x}{\alpha}x^m =
\dfrac{\gam{m+1}}{\gam{m-\alpha+1}} x^{m -\alpha} ,
\end{array}
\end{eqnarray}
and using the equation \eqref{eq:1} we can define the F N-R method for $f(x)\in \nset{P}_n[x]$, as follows
\begin{eqnarray}\label{eq:5}
\small
\begin{array}{cc}
x_{n+1}:= \Phi(\alpha, x_n)=x_n - \dfrac{f(x_n)}{ \ifr{0}{}{D}{x}{\alpha} f(x)\Big{|}_{x=x_n} },& n=0,1,\cdots
\end{array}
\end{eqnarray}
where $-2<\alpha<2$.
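A direct implementation of this iteration only needs the term-by-term fractional derivative of the previous section. The Python sketch below is our own minimal version (it is not the authors' Julia code): starting from the real point $x_0=0.5$, it applies the F N-R iteration to $f(x)=x^2+1$ for a few orders $\alpha$ and reports, through the residual $|f(x_n)|$, whether a (complex) root was reached within the iteration budget.

```python
from math import gamma

def frac_deriv_poly(coeffs, alpha, x):
    # term-by-term R-L derivative (a = 0) of sum_m coeffs[m] x^m, principal branch for complex x
    return sum(c * gamma(m + 1) / gamma(m - alpha + 1) * complex(x) ** (m - alpha)
               for m, c in enumerate(coeffs) if c != 0)

def poly(coeffs, x):
    return sum(c * complex(x) ** m for m, c in enumerate(coeffs))

def frac_newton(coeffs, alpha, x0, steps=300, tol=1e-9):
    x = complex(x0)
    for k in range(steps):
        fx = poly(coeffs, x)
        if abs(fx) < tol:
            return x, k
        x = x - fx / frac_deriv_poly(coeffs, alpha, x)
    return x, steps

# f(x) = x^2 + 1, whose roots are +-i, from the real initial condition x_0 = 0.5
coeffs = [1.0, 0.0, 1.0]
for alpha in (0.7, 0.8, 0.9, 1.1):
    root, iters = frac_newton(coeffs, alpha, 0.5)
    print(alpha, root, abs(poly(coeffs, root)), iters)
```

Collecting, over a grid of orders $\alpha$, the runs with the smallest residual or the fewest iterations reproduces the filtering strategy described below.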
\begin{figure}[!ht]
\includegraphics[width=0.5\textwidth, height=0.2\textheight]{Im_N-R_2}
\caption{ \small{Illustration of some lines generated by the F N-R method} }
\end{figure}
To understand why the F N-R method has the ability to enter the complex space, unlike the classical N-R method, it is enough to observe the fractional derivative of R-L with $\alpha = 1/2$ of the constant function $f_0 (x) = 1 = x^0$ and the identity function $f_1 (x) = x = x^1$
\begin{eqnarray*}
\small
\begin{array}{ccc}
\ifr{0}{}{D}{x}{1/2}f_1(x) &=& \ds{
\dfrac{\gam{2}}{\gam{3/2}} x^{1/2} },\\
\ifr{0}{}{D}{x}{1/2}f_0(x) &=& \ds{
\dfrac{\gam{1}}{\gam{1/2}} x^{-1/2}} ,
\end{array}
\end{eqnarray*}
\begin{figure}[!ht]
\includegraphics[width=0.5\textwidth, height=0.2\textheight]{Frac_Id}
\caption{\small{Fractional Derivative of R-L with $ \alpha \in [0,1] $ of the function $ f_1(x) $} }
\end{figure}
\begin{figure}[!ht]
\includegraphics[width=0.5\textwidth, height=0.2\textheight]{Frac_Cte}
\caption{\small{Fractional Derivative of R-L with $\alpha \in [0,1]$ of the function $ f_0(x) $} }
\end{figure}
For polynomials of degree $n \geq 1$, an initial condition $x_0\neq 0$ must be taken; this is a consequence of the fact that the fractional derivatives of R-L of order $\alpha\notin \nset{N}$ of the constants are of the form $x^{-\alpha}$. When we take $\alpha \neq 1$ there are two cases:
\begin{itemize}
\item[i)] When we take an initial condition $x_0> 0$, the sequence $\set{x_n}$ generated by the equation \eqref{eq:5} can be divided into three parts. This happens because there may be an $N \in \nset{N}$ for which $\set{x_n}_{n=0}^{N-1}\in \nset{R}^+$ and $x_N\in \nset{R}^{-}$; consequently, the sequence $\set{x_n}_{n\geq N+1}\in \nset{C}$.
\item[ii)] When we take an initial condition $x_0 <0$, the sequence $\set{x_n}_{n\geq 1}\in \nset{C}$.
\end{itemize}
\newpage
\subsection{Advantages of the F N-R Method}
One of the main advantages of the F N-R method is that the initial condition $x_0$ can be left fixed while the order $\alpha$ of the derivative is varied to obtain real and complex roots of a function. As $\alpha$ is varied, different values of $\alpha$ can generate the same root but with a different number of iterations or a different error; to optimize the method, one can implement a filter so that, once the roots have been obtained, only those whose orders of the derivative generated the lowest number of iterations or the minimum error are kept.
Another advantage is that the F N-R method provides complex roots: for a polynomial with real coefficients, once a complex root is obtained, it is sufficient to take its complex conjugate to obtain another root. So, in essence, it could be considered that two roots are extracted for the same order of the derivative and the same iteration number.
The F N-R method does not guarantee that all the roots of the function will be found by leaving a fixed initial condition and varying the order $\alpha$ of the derivative; as in the classical N-R method, finding the roots will depend on giving an appropriate initial condition. Since the F N-R method works in a complex space, the initial condition has the freedom to be real or complex. Regarding the convergence of the method, it is suggested to consult \cite{pfractional}.
\section{Results of the F N-R Method}
The following results of the F N-R method were obtained with Julia 1.1.0 using the Anaconda Navigator 1.9.6 software, with a maximum of 300 iterations for each value of $\alpha$.
\begin{itemize}
\item[1)] Selected polynomial:
\begin{eqnarray*}
\footnotesize{
\begin{array}{cc}
f(x)=&- 70.98x^{13}- 35.88x^{12} - 61.16x^{11} + 81.44x^{10} \\
& - 63.88x^{9}+ 80.34x^{8} + 17.48x^{7} - 55.87x^{6}\\
& + 35.85x^{5} - 41.2x^{4}- 0.29x^{3}\\
&- 46.44x^{2}+13.37x - 34.49,
\end{array}}
\end{eqnarray*}
\newpage
roots $x_*$ obtained with the F N-R method for a fixed initial condition $x_0 = 0.5$ and different orders $\alpha$ of the derivative
\begin{eqnarray*}
\scriptsize{
\begin{array}{ccccc}
\alpha & Re(x_*) & Im(x_*) & \norm{f(x_*)}_2 & Iter \\ \hline
0.718 & 0.19151807732 & -0.83357382411 & 2.06468e-10 & 295 \\
0.810 & 0.53669830311 & -0.62248830172 & 9.97056e-10 & 126 \\
0.819 & 0.53669830311 & 0.62248830172 & 9.97056e-10 & 78 \\
0.861 & 0.00875091845 & -0.90699223528 & 1.006794e-9 & 83 \\
0.864 & -0.88279627032 & -1.1177511854 & 3.3844239e-8 & 87 \\
0.865 & 0.19151807732 & 0.83357382411 & 2.06468e-10 & 42 \\
0.879 & -0.57614988323 & -0.55484703326 & 1.607627e-9 & 56 \\
0.887 & 0.94774341679 & -0.25285849811 & 1.216203e-9 & 41 \\
0.893 & -0.95702362973 & -0.0 & 1.34979e-9 & 62 \\
0.992 & 0.94774341679 & 0.25285849811 & 1.216203e-9 & 18 \\
1.003 & 0.00875091845 & 0.90699223528 & 1.006794e-9 & 45 \\
1.004 & -0.88279627032 & 1.1177511854 & 3.3844239e-8 & 33 \\
1.005 & -0.57614988323 & 0.55484703326 & 1.607627e-9 & 31
\end{array}
}
\end{eqnarray*}
\item[2)] Selected polynomial:
\begin{eqnarray*}
\footnotesize{
\begin{array}{cc}
f(x)=& - 88.39x^{14} + 48.02x^{13}+ 32.56x^{12} - 15.54x^{11}\\
& - 77.39x^{10} - 41.09x^{9}- 6.54x^{8} - 39.53x^{7} \\
&+ 44.72x^{6} + 16.5x^{5}+ 6.86x^{4} + 59.91x^{3} \\
&+ 35.73x^{2}-77.66x - 63.27,
\end{array}}
\end{eqnarray*}
roots $x_*$ obtained with the F N-R method for a fixed initial condition $x_0 = 1$ and different orders $\alpha$ of the derivative
\begin{eqnarray*}
\scriptsize{
\begin{array}{ccccc}
\alpha & Re(x_*) & Im(x_*) & \norm{f(x_*)}_2 & Iter \\ \hline
0.735 & -0.77273375854 & -0.0 & 4.8262e-11 & 296 \\
0.894 & -0.88240039555 & -0.0 & 3.88628e-10 & 151 \\
0.905 & 1.05303011385 & 0.69083468949 & 2.182362e-8 & 73 \\
0.906 & 0.92059810432 & -0.21643115394 & 4.399572e-9 & 40 \\
0.909 & -0.04034472627 & 0.99000258591 & 8.226489e-9 & 39 \\
1.039 & -0.68751207818 & 0.49662443778 & 1.812979e-9 & 103 \\
1.042 & -0.61553864903 & 0.8313583513 & 8.540856e-9 & 110 \\
1.048 & 0.92059810432 & 0.21643115394 & 4.399572e-9 & 64 \\
1.069 & 0.46897137536 & -0.87145640197 & 3.853197e-9 & 99 \\
1.071 & -0.68751207818 & -0.49662443778 & 1.812979e-9 & 102 \\
1.092 & -0.61553864903 & -0.8313583513 & 8.540856e-9 & 205 \\
1.097 & -0.04034472627 & -0.99000258591 & 8.226489e-9 & 65 \\
1.203 & 0.46897137536 & 0.87145640197 & 3.853197e-9 & 180 \\
1.294 & 1.05303011385 & -0.69083468949 & 2.182362e-8 & 151
\end{array}
}
\end{eqnarray*}
\item[3)] Selected function:
\begin{eqnarray*}
\footnotesize{
\begin{array}{cc}
f(x)=&-39.4x^{-8.5} - 93.24x^{-7.5} + 24.82x^{-6.5} + 87.47x^{-5.5}\\
& - 53.29x^{-4.5} - 50.25x^{-3.5} + 8.68x^{-2.5} + 21.83x^{-1.5}\\
&+ 77.42x^{-0.5} - 75.11x^{0.5} + 1.8x^{1.5} - 23.8x^{2.5}\\
&+ 94.45x^{3.5} - 60.09x^{4.5} - 63.06x^{5.5} + 7.53x^{6.5},
\end{array}}
\end{eqnarray*}
roots $x_*$ obtained with the F N-R method for a fixed initial condition $x_0 = 0.5$ and different orders $\alpha$ of the derivative
\begin{eqnarray*}
\scriptsize{
\begin{array}{ccccc}
\alpha & Re(x_*) & Im(x_*) & \norm{f(x_*)}_2 & Iter \\ \hline
0.501 & 9.10400683521 & 0.0 & 5.496665835e-6 & 111 \\
0.722 & 0.52571944068 & -0.89875372854 & 7.871003e-9 & 236 \\
0.732 & 0.52571944068 & 0.89875372854 & 7.871003e-9 & 207 \\
0.761 & 1.00699819744 & 0.21940297376 & 3.646223e-9 & 223 \\
0.771 & -0.67346362057 & 0.79075507678 & 3.75495e-9 & 290 \\
0.785 & -0.67346362057 & -0.79075507678 & 3.75495e-9 & 176 \\
0.786 & -0.190772567 & -0.99381042641 & 7.661492e-9 & 194 \\
0.793 & -0.190772567 & 0.99381042641 & 7.661492e-9 & 201 \\
0.803 & -1.75584484473 & 0.0 & 6.504081e-9 & 224 \\
0.825 & 0.77885919581 & -0.63357785842 & 5.340029e-9 & 106 \\
0.844 & -0.69583931589 & 0.22642013542 & 1.078318e-8 & 211 \\
0.845 & 0.77885919581 & 0.63357785842 & 5.340029e-9 & 80 \\
0.848 & -0.69583931589 & -0.22642013542 & 1.078318e-8 & 214 \\
0.918 & 1.00699819744 & -0.21940297376 & 3.646223e-9 & 33 \\
1.000 & -0.47666265938 & 0.0 & 4.6161398e-8 & 32
\end{array}
}
\end{eqnarray*}
\item[4)] Selected function:
\begin{eqnarray*}
\footnotesize{
\begin{array}{cc}
f(x)=&\sin(x)-\dfrac{x}{40},
\end{array}}
\end{eqnarray*}
roots $x_*$ obtained with the F N-R method for a fixed initial condition $x_0 = 5$ and different orders $\alpha$ of the derivative
\begin{eqnarray*}
\scriptsize{
\begin{array}{ccccc}
\alpha & Re(x_*) & Im(x_*) & \norm{f(x_*)}_2 & Iter \\ \hline
-1.672 & 6.44501615746 & 0.0 & 1.684e-12 & 53 \\
-1.205 & -15.31505532872 & -0.0 & 3.518e-12 & 58 \\
-1.201 & 15.31505532872 & 0.0 & 3.518e-12 & 34 \\
-1.043 & -27.51575073465 & -0.0 & 6.017e-12 & 280 \\
-0.947 & -21.42587497631 & -0.0 & 4.164e-12 & 100 \\
-0.900 & 21.42587497631 & 0.0 & 4.164e-12 & 83 \\
0.000 & -0.0 & 0.0 & 0.0 & 6 \\
0.385 & 3.0648951024 & 0.0 & 2.674e-12 & 63 \\
0.601 & 12.89459734687 & 0.0 & 1.739e-12 & 13 \\
0.611 & 9.19288309791 & 0.0 & 4.425e-12 & 27 \\
0.618 & 19.35462233 & 0.0 & 2.733e-12 & 12 \\
0.619 & 25.83490703614 & 0.0 & 2.394e-12 & 18 \\
0.787 & -6.44501615747 & 0.0 & 7.935e-12 & 56 \\
0.808 & -12.89459734688 & -0.0 & 7.476e-12 & 48 \\
0.822 & -25.83490703615 & 0.0 & 9.78e-12 & 39 \\
0.825 & -32.35830325979 & -0.0 & 8.322e-12 & 39 \\
0.861 & 39.05172918055 & 0.0 & 5.489e-12 & 106 \\
0.870 & 27.51575073464 & 0.0 & 1.489e-12 & 18 \\
0.877 & 32.35830325978 & 0.0 & 2.695e-12 & 17 \\
1.156 & -3.0648951024 & -0.0 & 2.674e-12 & 47 \\
1.187 & -9.1928830979 & -0.0 & 5.559e-12 & 52 \\
1.228 & 39.43777002241 & 0.0 & 1.52e-12 & 196 \\
1.233 & 33.56198515675 & 0.0 & 2.414e-12 & 25
\end{array}
}
\end{eqnarray*}
\end{itemize}
\section{Conclusions}
The F N-R method is very effective at finding roots of polynomials, since it does not suffer from the divergence problems that the classical N-R method exhibits for a polynomial with only complex roots when a real initial condition is taken. What is really interesting, however, is that this method opens up the possibility of creating new fractional iterative methods by combining the F N-R method with existing iterative methods \cite{stoer2013}.
Thus, this work provides one more application of fractional calculus and opens the possibility of extending the capability of iterative methods so that they can find the roots of more general functions.
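For illustration only, the following minimal Python sketch (not the authors' code) implements the iteration $x_{n+1} = x_n - f(x_n)/f^{(\alpha)}(x_n)$ for a polynomial, assuming the Riemann--Liouville rule $\frac{d^{\alpha}}{dx^{\alpha}}x^m = \frac{\Gamma(m+1)}{\Gamma(m+1-\alpha)}x^{m-\alpha}$ for the fractional derivative of power functions; the tolerance, iteration cap and example polynomial are arbitrary choices, and the exact conventions used to produce the tables above may differ.
\begin{verbatim}
from math import gamma

def fractional_newton(coeffs, x0, alpha, tol=1e-8, max_iter=300):
    # coeffs[m] is the coefficient of x^m in f(x)
    x = complex(x0)
    for it in range(1, max_iter + 1):
        f = sum(c * x**m for m, c in enumerate(coeffs))
        if abs(f) < tol:
            return x, it
        # Riemann-Liouville fractional derivative of each monomial
        df = sum(c * gamma(m + 1) / gamma(m + 1 - alpha) * x**(m - alpha)
                 for m, c in enumerate(coeffs))
        x = x - f / df
    return x, max_iter

# f(x) = x^2 + 1 has only complex roots; a real initial condition can still
# reach them, because x**(m - alpha) is complex off the positive real axis.
print(fractional_newton([1.0, 0.0, 1.0], x0=-0.5, alpha=0.9))
\end{verbatim}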
\bibliographystyle{unsrt}
\nocite{plato2003concise}
\nocite{stoer2013}
\nocite{bramb17}
\nocite{oldham74}
\nocite{miller93}
\nocite{arfken85}
\nocite{umarov}
\nocite{hilfer00}
\nocite{pfractional}
| {
"timestamp": "2019-05-07T02:14:51",
"yymm": "1710",
"arxiv_id": "1710.07634",
"language": "en",
"url": "https://arxiv.org/abs/1710.07634",
"abstract": "The Newton-Raphson (N-R) method is useful to find the roots of a polynomial of degree n. However, this method is limited since it diverges for the case in which polynomials only have complex roots if a real initial condition is taken. In the present work, we explain an iterative method that is created using the fractional calculus, which we will call the Fractional Newton-Raphson (F N-R) Method, which has the ability to enter the space of complex numbers given a real initial condition, which allows us to find both the real and complex roots of a polynomial unlike the classical Newton-Raphson method.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Fractional Newton-Raphson Method",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9873750525759487,
"lm_q2_score": 0.7185943925708561,
"lm_q1q2_score": 0.709522176145431
} |
https://arxiv.org/abs/1408.2277 | Some properties of a Rudin-Shapiro-like sequence | We introduce the sequence $(i_n)_{n \geq 0}$ defined by $i_n = (-1)^{inv_2(n)}$, where $inv_2(n)$ denotes the number of inversions (i.e., occurrences of 10 as a scattered subsequence) in the binary representation of n. We show that this sequence has many similarities to the classical Rudin-Shapiro sequence. In particular, if S(N) denotes the N-th partial sum of the sequence $(i_n)_{n \geq 0}$, we show that $S(N) = G(\log_4 N)\sqrt{N}$, where G is a certain function that oscillates periodically between $\sqrt{3}/3$ and $\sqrt{2}$. | \section{Introduction}
Loosely speaking, a \emph{digital sequence} is a sequence whose $n$-th
term is defined based on some property of the digits of $n$ when
written in some chosen base. The prototypical digital sequence is the
\emph{sum-of-digits function} $s_k(n)$, which is equal to the sum of
the digits of the base-$k$ representation of $n$. Of course, when
$k=2$, the sequence $s_2(n)$ counts the number of $1$'s in the binary
representation of $n$. By considering only the parity of $s_2(n)$,
one obtains the classical \emph{Thue--Morse sequence} $(t_n)_{n \geq
0}$, defined by $t_n = (-1)^{s_2(n)}$. That is,
\[
\begin{array}{cccccccccc}
(t_n)_{n \geq 0} = & +1 & -1 & -1 & +1 & -1 & +1 & +1 & -1 & \cdots
\end{array}
\]
Similarly, if one denotes by $e_{2;11}(n)$ the number of occurrences
of $11$ in the binary representation of $n$, one obtains the
\emph{Rudin--Shapiro sequence} $(r_n)_{n \geq 0}$ by defining $r_n =
(-1)^{e_{2;11}(n)}$. That is,
\[
\begin{array}{cccccccccc}
(r_n)_{n \geq 0} = & +1 & +1 & +1 & -1 & +1 & +1 & -1 & +1 & \cdots
\end{array}
\]
Traditionally, digital sequences have been defined in terms of the
number of occurrences of a given block in the digital
representation of $n$. Here we define a sequence based on the number
of occurrences of certain patterns as \emph{scattered subsequences} in
the digital representation of $n$.
Let $a_0 a_1 \cdots a_\ell$ be the base-$k$ representation of an
integer $n$; that is
\[
n = \sum_{j=0}^\ell a_j k^{\ell-j}, \quad\quad a_j \in \{0,1,\cdots, k-1\}.
\]
A \emph{scattered subsequence} of $a_0 a_1 \cdots a_\ell$ is a word
$a_{j_1} a_{j_2} \cdots a_{j_t}$ for some collection of indices $0
\leq j_1 < j_2 < \cdots < j_t \leq \ell$. Let $p$ be any word over
$\{0,1,\ldots,k-1\}$. We denote the number of occurrences of $p$ as a
scattered subsequence of the base-$k$ representation of $n$ by
$\sub_{k;p}(n)$. In particular, $\sub_{2;10}(n)$ denotes the number
of occurrences of $10$ as a scattered subsequence of the binary
representation of $n$. For example, since the binary representation
of the integer $12$ is $1100_2$ and the word $1100$ has four
occurrences of $10$ as a subsequence, we have $\sub_{2;10}(12) = 4$.
The quantity $\sub_{2;10}(n)$ can be viewed alternatively as the
number of \emph{inversions} in the binary representation of $n$. In
general, over an alphabet $\{0,1,\ldots,k-1\}$, an \emph{inversion} in
a word $w$ is an occurrence of $ba$ as a scattered subsequence of $w$,
where $a,b \in \{0,1,\ldots,k-1\}$ and $b>a$. For this reason, in the
remainder of this paper we will write $\inv_2(n)$ to denote
$\sub_{2;10}(n)$.
We now define the sequence $(i_n)_{n \geq 0}$ by $i_n =
(-1)^{\inv_2(n)}$. That is,
\[
\begin{array}{cccccccccc}
(i_n)_{n \geq 0} = & +1 & +1 & -1 & +1 & +1 & -1 & +1 & +1 & \cdots
\end{array}
\]
We will show that this sequence has many similarities with the
Rudin--Shapiro sequence.
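For concreteness (this snippet is illustrative and not part of the original paper), $\inv_2(n)$ and $i_n$ can be computed directly from the binary expansion:
\begin{verbatim}
def inv2(n):
    # count occurrences of 10 as a scattered subsequence: every 0 contributes
    # the number of 1's standing to its left
    ones, count = 0, 0
    for b in bin(n)[2:]:
        if b == '1':
            ones += 1
        else:
            count += ones
    return count

def i(n):
    return (-1) ** inv2(n)

print(inv2(12))                   # 4, since 12 = 1100 in base 2
print([i(n) for n in range(8)])   # [1, 1, -1, 1, 1, -1, 1, 1]
\end{verbatim}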
When studying digital sequences, one often looks at the
\emph{summatory function} of the sequence to get a better idea of the
long-term behaviour of the sequence. For instance, Newman
\cite{New69} and Coquet \cite{Coq83} studied the summatory function of
the Thue--Morse sequence taken at multiples of $3$. In particular,
\[
\sum_{0 \leq n < N} t_{3n} = N^{\log_4 3} G_0(\log_4 N) +
\frac13\eta(N),
\]
where $G_0$ is a bounded, continuous, nowhere
differentiable, periodic function with period $1$, and
\[
\eta(N)=
\begin{cases}
0 & \text{if $N$ is even,}\\
(-1)^{3N-3} & \text{if $N$ is odd.}\\
\end{cases}
\]
Similarly, Brillhart, Erd\H{o}s, and Morton \cite{BEM83}, and
subsequently, Dumont and Thomas \cite{DT89} studied the summatory
function of the Rudin--Shapiro sequence. In this case,
\[
\sum_{0 \leq n < N} r_n =\sqrt{N} G_1(\log_4 N)
\]
where again $G_1$ is a bounded, continuous, nowhere
differentiable, periodic function with period $1$. We will show that
the summatory function of the sequence $(i_n)_{n \geq 0}$ has the same
form as that of the Rudin--Shapiro sequence.
For more on digital sequences, the reader may consult
\cite[Chapter~3]{AS03}, as well as \cite{DG10}. Brillhart and Morton
\cite{BM96} have also given a nice expository account of their work on
the Rudin--Shapiro sequence.
\section{Alternative definitions of the sequence $(i_n)_{n \geq
0}$}\label{definitions}
Let us begin by recalling the definition of $(i_n)_{n \geq 0}$: we
have $i_n = (-1)^{\inv_2(n)}$, where $\inv_2(n)$ denotes the number of
occurrences of $10$ as a scattered subsequence of the binary
representation of $n$.
Our first observation is that $(i_n)_{n \geq 0}$ is a
\emph{$2$-automatic sequence} (in the sense of Allouche and Shallit
\cite{AS03}). It is generated by the automaton pictured in
Figure~\ref{DFAO}. (We do not recapitulate the definitions of
\emph{automatic sequence} or \emph{automaton} here: the reader is
referred to \cite{AS03}.)
\begin{figure}[h!]
\centering
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2.8cm,
semithick]
\node[initial,state] (A) {${+1 \choose +1}$};
\node[state] (B) [right of=A] {${-1 \choose +1}$};
\node[state] (C) [right of=B] {${-1 \choose -1}$};
\node[state] (D) [right of=C] {${+1 \choose -1}$};
\path (A) edge [loop above] node {0} (A)
edge [bend left] node {1} (B)
(B) edge [bend left] node {0} (C)
edge [bend left] node {1} (A)
(C) edge [bend left] node {0} (B)
edge [bend left] node {1} (D)
(D) edge [loop above] node {0} (D)
edge [bend left] node {1} (C);
\end{tikzpicture}
\caption{Automaton generating the sequence $(i_n)_{n \geq 0}$}\label{DFAO}
\end{figure}
The automaton calculates $i_n$ as follows: the binary digits of $n$
are processed from most significant to least significant, and when the
last digit is read, the automaton halts in the state
\[
{ (-1)^{s_2(n)} \choose (-1)^{\inv_2(n)}}.
\]
In particular, $i_n$ is given by the lower component of the label of
the state reached after reading the binary representation of $n$ (the
first component has the value $t_n$).
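As an illustration (not from the paper), the automaton can be simulated directly: reading a $1$ flips the first component of the state, while reading a $0$ multiplies the second component by the first, which reproduces the transitions of Figure~\ref{DFAO}.
\begin{verbatim}
def automaton_pair(n):
    # state (t, i) = ((-1)^{s_2}, (-1)^{inv_2}); the start state is (+1, +1)
    t, i = 1, 1
    for b in bin(n)[2:]:
        if b == '1':
            t, i = -t, i       # a new 1 flips the parity of s_2(n)
        else:
            t, i = t, i * t    # a new 0 creates s_2(n) new inversions
    return t, i

print([automaton_pair(n)[1] for n in range(8)])   # [1, 1, -1, 1, 1, -1, 1, 1]
\end{verbatim}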
Consequently, $(i_n)_{n \geq 0}$ can be generated by iterating the
morphism $g : \{A,B,C,D\}^* \to \{A,B,C,D\}^*$ defined by
\[
A \to AB, \quad B \to CA, \quad C \to BD, \quad D \to DC,
\]
to obtain the infinite sequence
\[
ABCABDABCADCABCA \cdots
\]
and then applying the recoding
\[
A, B \to {+}1, \quad C, D \to {-}1.
\]
(The reader may again consult \cite[Chapter~6]{AS03} for the standard
conversion between automata and morphisms.) Compare this to the
Rudin--Shapiro sequence, which is obtained by iterating
\[
A \to AB, \quad B \to AC, \quad C \to DB, \quad D \to DC,
\]
and then applying the same recoding as above.
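The following short sketch (illustrative only) iterates the morphism and applies the recoding; it regenerates the prefix displayed above.
\begin{verbatim}
g = {'A': 'AB', 'B': 'CA', 'C': 'BD', 'D': 'DC'}

w = 'A'
for _ in range(4):                  # four iterations already give 16 letters
    w = ''.join(g[c] for c in w)

print(w)                                      # ABCABDABCADCABCA
print([1 if c in 'AB' else -1 for c in w])    # the first 16 values of (i_n)
\end{verbatim}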
The sequence $(i_n)_{n \geq 0}$ also satisfies certain recurrence
relations. To begin with, we have
\begin{eqnarray}
i_{2n} & = & i_n t_n \label{i_even} \\
i_{2n+1} & = & i_n, \label{i_odd}
\end{eqnarray}
where $t_n$ is the $n$-th term of the Thue--Morse sequence, as defined in
the introduction. To see this, note that if $w$ is the binary
representation of $n$, then $w0$ is the binary representation of $2n$.
The number of occurrences of $10$ as a subsequence of $w0$ equals the
number of occurrences of $10$ as a subsequence of $w$ plus the number
of $1$'s in $w$. Thus
\[
i_{2n} = (-1)^{\inv_2(2n)} = (-1)^{\inv_2(n)+s_2(n)} =
(-1)^{\inv_2(n)}(-1)^{s_2(n)} = i_n t_n.
\]
Now the binary representation of $2n+1$ is $w1$, and appending the $1$
to $w$ creates no new occurrences of $10$, so $i_{2n+1} = i_n$.
\begin{proposition}\label{relations}
The sequence $(i_n)_{n \geq 0}$ satisfies the following recurrence
relations:
\begin{eqnarray*}
i_{4n} & = & i_n \\
i_{4n+1} & = & i_{2n} \\
i_{4n+2} & = & -i_{2n} \\
i_{4n+3} & = & i_n.
\end{eqnarray*}
\end{proposition}
\begin{proof}
First, recall that the Thue--Morse sequence satisfies the relations
\[
t_{2n} = t_n \quad\text{and}\quad t_{2n+1} = -t_n.
\]
Now we have
\[
i_{4n} = i_{2n} t_{2n} = i_{2n} t_n = i_n t_n t_n = i_n,
\]
where we have applied \eqref{i_even} twice. Similarly, we get
\[
i_{4n+1} = i_{2(2n)+1} = i_{2n+1} = i_n
\]
by applying \eqref{i_odd} twice. Next, we calculate
\[
i_{4n+2} = i_{2(2n+1)} = i_{2n+1} t_{2n+1} = i_n (-t_n) = -i_{2n},
\]
and finally,
\[
i_{4n+3} = i_{2(2n+1)+1} = i_{2n+1} = i_n.
\]
\end{proof}
The relations of Proposition~\ref{relations} can be represented in
matrix form as follows. Define the matrices
\[
\Gamma_0 = \left( \begin{array}{cc}1&0\\0&1\end{array} \right) =
\Gamma_3, \quad
\Gamma_1 = \left( \begin{array}{cc}0&1\\-1&0\end{array} \right), \quad
\Gamma_2 = \left( \begin{array}{cc}0&-1\\1&0\end{array} \right).
\]
For $n=0,1,2,\ldots$ define
\[
V_n = {i_n \choose i_{2n}}.
\]
Then for $n=0,1,2,\ldots$ and $r = 0,1,2,3$, we have
\begin{equation}\label{gamma_rec}
V_{4n+r} = \Gamma_r V_n.
\end{equation}
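Relation \eqref{gamma_rec} is easy to verify numerically; the following sketch (not part of the paper) checks it for small $n$.
\begin{verbatim}
import numpy as np

def i(n):
    b = bin(n)[2:]
    return (-1) ** sum(b[:k].count('1') for k, c in enumerate(b) if c == '0')

V = lambda n: np.array([i(n), i(2 * n)])
Gamma = [np.array([[1, 0], [0, 1]]),     # Gamma_0 = Gamma_3
         np.array([[0, 1], [-1, 0]]),    # Gamma_1
         np.array([[0, -1], [1, 0]]),    # Gamma_2
         np.array([[1, 0], [0, 1]])]

assert all(np.array_equal(V(4 * n + r), Gamma[r] @ V(n))
           for n in range(256) for r in range(4))
print("V_{4n+r} = Gamma_r V_n holds for all n < 256")
\end{verbatim}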
\section{The summatory function}
Define the \emph{summatory function} $S(N)$ of $(i_n)_{n \geq 0}$ as
\[
S(N) = \sum_{0 \leq n \leq N} i_n.
\]
The first few values of $S(N)$ are:
\begin{center}
\begin{tabular}{|c|cccccccc|}
\hline
$N$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline
$S(N)$ & 1 & 2 & 1 & 2 & 3 & 2 & 3 & 4 \\
\hline
\end{tabular}
\end{center}
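These values are easily reproduced by direct summation (illustrative sketch, not from the paper):
\begin{verbatim}
from itertools import accumulate

def i(n):
    b = bin(n)[2:]
    return (-1) ** sum(b[:k].count('1') for k, c in enumerate(b) if c == '0')

print(list(accumulate(i(n) for n in range(8))))   # [1, 2, 1, 2, 3, 2, 3, 4]
\end{verbatim}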
The graph given in Figure~\ref{summatory} is a plot of the function
$S(N)$. The upper and lower smooth curves are plots of the functions
$\sqrt{2}\sqrt{N}$ and $(\sqrt{3}/3)\sqrt{N}$.
\begin{figure}[h]
\centering
\includegraphics[scale=0.8]{summatory}
\caption{A plot of the function $S(N)$}\label{summatory}
\end{figure}
\begin{theorem}\label{sum_growth}
There exists a bounded, continuous, nowhere differentiable,
periodic function $G$ with period $1$ such that
\[
S(N) = \sqrt{N}G(\log_4 N).
\]
\end{theorem}
A plot of the function $G$ is given in Figure~\ref{periodicG}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.8]{periodicG}
\caption{A plot of the periodic function $G$}\label{periodicG}
\end{figure}
The proof of Theorem~\ref{sum_growth} is a straightforward application
of the following result \cite[Theorem~3.5.1]{AS03} (stated here in
slightly less generality):
\begin{theorem}\label{thm3_5_1}
Let $k \geq 2$ be an integer. Suppose there exist an integer $d \geq
1$, a sequence of vectors $(V_n)_{n \geq 0}$, $V_n \in \mathbb{C}^d$,
and $k$ $d \times d$ matrices $\Gamma_0, \ldots, \Gamma_{k-1}$ such that
\begin{enumerate}
\item $V_{kn+r} = \Gamma_rV_n$ for $n=0,1,2,\ldots$ and $r =
0,1,\ldots,k-1$;
\item $\|V_n\| = O(\log n)$;
\item $\Gamma := \Gamma_0 + \cdots + \Gamma_{k-1} = cI$, where $I$ is the
$d \times d$ identity matrix and $c>0$ is some constant.
\end{enumerate}
Then there exists a continuous function $F : \mathbb{R} \to
\mathbb{C}^d$ of period $1$ such that if $A(N) = \sum_{0 \leq n \leq N}
V_n$, then
\[
A(N) = N^{\log_k c} F(\log_k N).
\]
\end{theorem}
Theorem~\ref{sum_growth} (except for the non-differentiability of $G$)
now follows from Theorem~\ref{thm3_5_1} by taking $k=4$, $d=2$, and
letting the $\Gamma_r$ and $V_n$ be as defined in Section~\ref{definitions}.
Condition~(1) is Eq.~\eqref{gamma_rec}; Condition~(2) is clear, since
$i_n \in \{-1,+1\}$; Condition~(3) holds with $c=2$. Now $S(N)$ is
the first component of the vector $A(N)$; if we take $G$ to be the
function obtained by projecting $F$ onto its first component,
Theorem~\ref{thm3_5_1} gives
\[
S(N) = N^{\log_4 2} G(\log_4 N) = N^{1/2} G(\log_4 N),
\]
as required. All the assertions of Theorem~\ref{sum_growth} have now
been established, except for the nowhere differentiability of $G$. To
obtain this, we note that the proof of \cite{Ten97} for the
summatory function of the Rudin--Shapiro sequence goes through here
for $S(N)$ without modification.
\begin{proposition}\label{sumIdentities}
The function $S(n)$ satisfies the following recurrence relations:
\begin{eqnarray}
S(4n) &=& 2S(n) - i_n\\
S(4n+1) &=& 2S(n) - i_n + i_{2n}\\
S(4n+2) &=& 2S(n) - i_n\\
S(4n+3) &=& 2S(n).
\end{eqnarray}
\end{proposition}
\begin{proof}
Let $A(n) = \sum_{0 \leq j \leq n} V_j$ (as in Theorem~\ref{thm3_5_1}).
Then
\begin{align*}
A(4n+3) &= \sum_{0 \leq j \leq 4n+3} V_j \\
&= \sum_{0 \leq r < 4} \sum_{0 \leq j \leq n} V_{4j+r} \\
&= \sum_{0 \leq r < 4} \sum_{0 \leq j \leq n} \Gamma_r V_j
\quad\quad\text{(by \eqref{gamma_rec})}\\
&= \sum_{0 \leq r < 4} \Gamma_r \sum_{0 \leq j \leq n} V_j \\
&= \left( \sum_{0 \leq r < 4} \Gamma_r \right) \left( \sum_{0 \leq j \leq
n} V_j \right) \\
&= 2I \sum_{0 \leq j \leq n} V_j \\
&= 2A(n).
\end{align*}
Now $S(n)$ is the first component of $A(n)$, so we have $S(4n+3) =
2S(n)$. We thus have
\[
S(4n+r) = S(4n+3) - \sum_{r < \ell \leq 3} i_{4n+\ell} = 2S(n) -
\sum_{r < \ell \leq 3} i_{4n+\ell}.
\]
Applying the relations of Proposition~\ref{relations} now gives the
claimed relations for $S(n)$.
\end{proof}
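The relations of Proposition~\ref{sumIdentities} can also be confirmed numerically (illustrative sketch):
\begin{verbatim}
def i(n):
    b = bin(n)[2:]
    return (-1) ** sum(b[:k].count('1') for k, c in enumerate(b) if c == '0')

def S(n):
    return sum(i(j) for j in range(n + 1))

for n in range(200):
    assert S(4 * n)     == 2 * S(n) - i(n)
    assert S(4 * n + 1) == 2 * S(n) - i(n) + i(2 * n)
    assert S(4 * n + 2) == 2 * S(n) - i(n)
    assert S(4 * n + 3) == 2 * S(n)
print("recurrences for S verified for n < 200")
\end{verbatim}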
\begin{corollary}\label{parity}
Let $n$ be a positive integer. Then $S(n)$ and $n$ have opposite parity.
\end{corollary}
\begin{corollary}
Let $n$ be a positive integer. Then $$\frac{S(n) - 2}{2} \leq S\Bigg(\Big\lfloor \frac{n}{4} \Big\rfloor\Bigg) \leq \frac{S(n) + 2}{2}$$
\end{corollary}
Next we identify the positions of certain local maxima and minima
of $S(n)$. For a positive integer $k$ define the interval: $I_k =
[2^{2k -1}, 2^{2k+1} - 1]$.
\begin{theorem}\label{I_k_max}
For all $k\geq1$, if $n\in I_k$, then $S(n) \leq 2^{k+1}$. Moreover, $S(n) = 2^{k+1}$ only when $n = 2^{2k+1}-1$.
\end{theorem}
\begin{proof}
We proceed by induction on $k$.
The result clearly holds for $k=1$, so suppose the result holds for some $k\geq 1$ and consider $n\in I_{k+1} = [2^{2(k+1)-1},2^{2(k+1) +1}-1] = [2^{2k+1}, 2^{2k+3} - 1]$. It will be useful for us to write $n = 4m+d$ for some positive integer $m$ and $d\in \{0,1,2,3\}$. Further, we make the observation that $m\in I_k$ for any $n$ in $I_{k+1}$.
\medskip
\noindent\underline{Case 1}: $m \neq 2^{2k+1}-1$. \\
By the induction hypothesis, $S(m) \leq 2^{k+1}-1$. Thus
\begin{eqnarray*}
S(n) = S(4m+d) &\leq& 2S(m)+2 \\
&\leq& 2(2^{k+1}-1) +2 \\
&=& 2^{k+2}.
\end{eqnarray*}
\noindent\underline{Case 2}: $m = 2^{2k+1}-1$. \\
Again by the induction hypothesis, $S(m) = 2^{k+1}$. We have 4 subcases: \\
\underline{$n = 4m+3$}: By Proposition \ref{sumIdentities}, $S(n) = 2S(m) = 2^{k+2}$.
\\
\underline{$n = 4m+2$}:
Then $n = 2^{2(k+1)+1} - 2$. We make the observation that in base 2, $n+1$ consists only of $2(k+1)+1$ ones. Hence, $\inv_2(n+1) = 0$ and so $i_{n+1} = 1$. This yields: $$S(n) = S(n+1) - i_{n+1} = 2^{k+2} - 1 \leq 2^{k+2}$$ by the above subcase. \\
\underline{$n = 4m + 0$}: Here $n = 2^{2(k+1)+1} - 4$. Observe that the base 2 representation of $m$ consists of exactly $2k+1$ ones, and hence $i_{m} = 1$ (since $m$ will have no inversions). Thus $$S(4m) = 2S(m) - i_m = 2^{k+2} -1 \leq 2^{k+2}. $$
\underline{$n = 4m+1$}: Here, $n$ may be expressed as $n = 2^{2(k+1)+1} - 3$. We claim $n$ has an odd number of inversions since its binary representation consists of $2(k+1)-1$ ones followed by `01'. It follows that $i_{4m+1} = -1$, giving $$S(4m+1) = S(4m) +i_{4m+1} = S(4m) - 1 \leq 2^{k+2} - 2.$$
It should also be noted that using induction and the above identities, $$S(2^{2k+3}-1) = S(4(2^{2k+1}-1) +3) = 2S(2^{2k+1}-1) = 2^{k+2}$$ for all $k \geq 1$.
It remains to show that the only position at which $S(n) = 2^{k+2}$ is $n= 2^{2k+3}-1$. Let $n\in I_{k+1} = [2^{2k+1}, 2^{2k+3} - 1]$ and suppose that $S(n) = 2^{k+2}$. Since $S(n)$ is even, $n$ must be odd. Then either $n = 4m+1$ or $n = 4m+3$ for some integer $m$. Suppose the former. Then,
$$S(n) = S(4m+1) = 2S(m) - i_{m} + i_{2m} = 2S(m) - i_{m} + i_mt_m = 2S(m) +i_m(t_m - 1).$$
Obviously $t_m = \pm 1$. Suppose that $t_m = +1$. Then we get that
$S(n) = 2S(m) = 2^{k+2}$ and thus, $S(m) = 2^{k+1}$. Now by the
induction hypothesis, $m = 2^{2k+1} - 1$. Then in base $2$, $m =
111\dotsm1$ ($2k+1$ ones), contradicting the fact that $t_m = +1$. So then it must be that indeed $t_m = -1$. Moreover, if $i_{m}(t_m-1) = -2$, then $S(m) = 2^{k+1}+1$, a contradiction, since $m \in I_k$. Hence $i_{m} = -1$ and it follows that $$S(n) = 2S(m) +2 = 2^{k+2},$$ which implies that $S(m) = 2^{k+1} - 1$.
Observe that $$S(m-1) = S(m) - i_{m} = (2^{k+1} - 1) +1 = 2^{k+1}.$$ Consequently, $S(m-1)$ achieves the maximum for $I_k$ and so $m-1$ is the endpoint for the interval $I_k$. This yields that $m$ is in fact the first element in $I_{k+1}$. In other words, $m = 2^{2k+1}$, contradicting the fact that $m \in I_k$. Thus we finally conclude that $n \neq 4m+1$.
We now claim that $m = 2^{2k+1}-1$. Suppose that it isn't. Then by the induction hypothesis, $S(m) \leq 2^{k+1}-1$. By the above argument, $n = 4m+3$, so we have
\begin{equation*}
2^{k+2} = S(n) = 2S(m) \leq 2(2^{k+1}-1) = 2^{k+2}-2 < 2^{k+2}
\end{equation*}
which is a contradiction. Hence the only possible choice of $n$ is $n = 4(2^{2k+1}-1)+3 = 2^{2k+3}-1 =2^{2(k+1)+1}-1.$ We have already seen that $S(n)$ is indeed $2^{k+2}$, so this completes the proof.
\end{proof}
\begin{corollary}\label{maxLimit} $\lim\limits_{k \rightarrow \infty} \frac{S(2^{2k+1}-1)}{\sqrt{2^{2k+1}-1}} = \sqrt{2}$.
\end{corollary}
\begin{theorem}\label{I_k_min}
For $k \geq 1$ and $n \in I_k$, $S(n) \geq 2^{k-1}$. Moreover, $S(n) = 2^{k-1}$ if and only if $n = 3 \cdot 4^{k-1} -1.$
\end{theorem}
\begin{proof} This theorem is true for $k=1$, so assume the result for
an arbitrary $k \geq 1$ and consider $I_{k+1}=
[2^{2(k+1)-1},2^{2(k+1)+1}-1]$. Let $n$ be in $I_{k+1}$. As before,
we will let $n=4m+d$, where $d \in \{0,1,2,3\}$. Note that $m \in I_k$.
We consider 2 cases.
\medskip
\noindent \underline{Case 1}: $m = 3 \cdot 4^{k-1} -1$. \\
\noindent Then $S(m) = 2^{k-1}$. The possibilities for $n$ are $n_d =
4(3\cdot4^{k-1} -1)+d = 3 \cdot 4^{k} -(4-d)$, for $d \in
\{0,1,2,3\}$. Now observe that $n_3 = 3 \cdot 4^k -1$ expressed in
binary has the form $$10 \underbrace{11\cdots 1}_{\text{$2k$ `1's}}.$$
It follows that $i_{n_3} = -1$. By observing the binary expansions of
$n_2, n_1, n_0$, we can determine that $i_{n_2} = -1, i_{n_1} = +1$ and
$i_{n_0} = -1$. By Proposition \ref{sumIdentities}, $S(n_3) =
2^k$. Working backwards from $n_3$, it can be seen that $S(n_d) > 2^k$
for $d = 0,1,2$. Hence in this case, the only position in which $S(n)
= 2^{k}$ is $n = 3 \cdot 4^{k} -1$.
\medskip
\noindent \underline{Case 2}: $m \neq 3 \cdot 4^{k-1} -1$. \\
In this case, $S(n) \geq 2S(m) - 2 \geq 2(2^{k-1}+1) -2 = 2^{k}$, which is all we need.
Having now established the lower bound, we now only need to show that it is unique. Assume $S(n) = 2^k$. This implies that $n$ is odd, so we begin by supposing $n = 4m +1$. Then
$$S(n) = S(4m+1) = 2S(m) - i_{m} + i_{2m} = 2S(m) - i_{m} + i_mt_m = 2S(m) +i_m(t_m - 1).$$
In a fashion similar to that seen in the upper bound, we find that
$i_m = +1$ and $m \neq 3 \cdot 4^{k-1}-1$. Hence $S(m-1) = S(m) -
i_{m} = 2^{k-1}+1-1 = 2^{k-1}$. By the induction hypothesis, there is
only one value in $I_k$ such that $S(m_0)=2^{k-1}$. Namely $m_0 = 3
\cdot 4^{k-1}-1$. Hence $m = m_0 +1 = 3 \cdot 4^{k-1}$. We may thus
conclude that the only possibility for $n$ in this case is $n = 4(3
\cdot 4^{k-1}) + 1$. Under examination of the binary representation of
$n$ and $n-1$ as well as the fact that $n-2 = 4m_0+3$ and consequently
$S(n-2) = 2S(m_0)=2^{k}$, we find that this is not the case. Hence $n$ has the form $4m+3$.
If $m \neq 3 \cdot 4^{k-1} -1$, then $S(m) \geq 2^{k-1} +1$. By Proposition \ref{sumIdentities}, $S(n) \geq 2(2^{k-1}+1) > 2^{k}$, contradicting the assumption that $S(n) = 2^k$. It follows that $n = 4m+3 = 4(3 \cdot 4^{k-1} -1) +3 = 3 \cdot 4^{k} -1$ is the only possibility. As we have already verified that $S(n)$ does indeed equal $2^{k}$ for this value of $n$, we have a unique minimum for $S(n)$ on $I_{k+1}$. The result now follows.
\end{proof}
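Both statements are easy to confirm for small $k$ by brute force; the sketch below (illustrative only) locates the extrema of $S$ on $I_k$.
\begin{verbatim}
def i(n):
    b = bin(n)[2:]
    return (-1) ** sum(b[:k].count('1') for k, c in enumerate(b) if c == '0')

S, total = [], 0
for n in range(2 ** 13):
    total += i(n)
    S.append(total)

for k in range(1, 6):
    I_k = range(2 ** (2 * k - 1), 2 ** (2 * k + 1))   # [2^{2k-1}, 2^{2k+1}-1]
    assert max(S[n] for n in I_k) == 2 ** (k + 1)
    assert S[2 ** (2 * k + 1) - 1] == 2 ** (k + 1)    # maximum at 2^{2k+1}-1
    assert min(S[n] for n in I_k) == 2 ** (k - 1)
    assert S[3 * 4 ** (k - 1) - 1] == 2 ** (k - 1)    # minimum at 3*4^{k-1}-1
print("extrema on I_k confirmed for k = 1, ..., 5")
\end{verbatim}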
Theorems~\ref{I_k_min} and \ref{I_k_max} show that
\[
\liminf_{n \to \infty} \frac{S(n)}{\sqrt{n}} \leq \frac{\sqrt{3}}{3}
\quad\text{and}\quad
\limsup_{n \to \infty} \frac{S(n)}{\sqrt{n}} \geq \sqrt{2},
\]
respectively. In the next section, we will show that the lower and
upper limits are in fact equal to $\sqrt{3}/3$ and $\sqrt{2}$. That
is, we will prove
\begin{theorem}\label{lim_inf_sup}
We have
\[
\liminf_{n \to \infty} \frac{S(n)}{\sqrt{n}} = \frac{\sqrt{3}}{3}
\quad\text{and}\quad
\limsup_{n \to \infty} \frac{S(n)}{\sqrt{n}} = \sqrt{2}.
\]
\end{theorem}
\section{Establishing the upper and lower limits of $S(n)/\sqrt{n}$}
The following lemma provides us with some tools to work with for the proof of the upper limit of $S(n)/\sqrt{n}$.
\begin{lemma}\label{dividingI}
\begin{align} & S(n + 2^{2k}) = -S(n) + 3(2^{k}), & 2^{2k} \leq n \leq 2^{2k+1} -1, k \geq 1; \label{eqq3}\\
& S(n + 3 \cdot 2^{2k}) = S(n) + 2^{k}, & 0 \leq n \leq 2^{2k} -1, k \geq 1; \label{eqq4}\\
& S(n + 2^{2k+1}) = -S(n) + 2^{k+2}\label{eqq2}, & 2^{2k+1} \leq n \leq 2^{2k+2} -1, k \geq 1; \\
& S(n + 3 \cdot 2^{2k+1}) = S(n) + 2^{k+1}, & 0 \leq n \leq 2^{2k+1} -1, k \geq 1. \label{eqq1}
\end{align}
\end{lemma}
\begin{proof} Consider equation (\ref{eqq2}). We will show
that for an arbitrary $k \geq 1$, $S(2^{2k+2}) + S(2^{2k+1})=
2^{k+2}$, so that by rearranging we obtain equation (\ref{eqq2}) with $n = 2^{2k+1}$. Observing the binary representations and using Theorem~\ref{I_k_max}, we find that $S(2^{2k+1}) = 2^{k+1}-1$. So we must show that $S(2^{2k+2}) = 2^{k+1}+1$, or equivalently (again using the binary representation), that $S(2^{2k+2}-1) = 2^{k+1}$. It may be verified that this is true for $k=1$, so we proceed via induction on $k$. Assuming the result holds for $k$, we consider the $k+1$ case. $$S(2^{2(k+1)+2}-1) = S(4(2^{2k+2}-1)+3) = 2S(2^{2k+2}-1) = 2(2^{k+1}) = 2^{k+2},$$ hence the result is true for all $k \geq 1$.
Now, for $2^{2k+1} \leq j \leq 2^{2k+2} -1$, we claim that it must be the case that $i_j = -i_{j + 2^{2k+1}}$. This is because the difference in the inversion counts of $j$ and $j+2^{2k+1}$ can be attributed solely to those obtained from their respective leading `1's. In fact, the leading `1' of the latter term will give exactly one more inversion than the former. Hence the parity of the inversion counts will be different. It now follows that starting with $n = 2^{2k+1}$ and increasing $n$ successively by one, that $S(n+ 2^{2k+1}) + S(n)= 2^{k+2}$ for each $n$ in the interval $[2^{2k+1},2^{2k+2}-1]$.
We will now prove (\ref{eqq1}). Our first order of business will be to show that for any $k \geq 1$, $S(3 \cdot 2^{2k+1}) = 2^{k+1} + 1$. Considering the binary representation of $3 \cdot 2^{2k+1}$, we find that $S(3 \cdot 2^{2k+1}) = S(3 \cdot 2^{2k+1}-1)+1$. Therefore it will be sufficient to show that $S(3 \cdot 2^{2k+1}-1) = 2^{k+1}$. For $k=1$ we have $S(23) = 4$, so suppose the result holds for some $k \geq 1$ and consider $k+1$. Since $3 \cdot 2^{2(k+1)+1}-1 = 4(3\cdot 2^{2k+1}-1) +3$, Proposition (\ref{sumIdentities}) gives us that $S(3 \cdot 2^{2(k+1)+1}-1) = 2S(3\cdot 2^{2k+1}-1) = 2^{k+2}$ as desired.
It may be observed from the binary representations that $i_{j + 3 \cdot 2^{2k+1}} = i_j$ for $0 \leq j \leq 2^{2k+1} -1$. This stems from the fact that for each $j$ in this interval, $j + 3 \cdot 2^{2k+1}$ has a different inversion count from $j$ only due to the 2 leading `1's, which can be disregarded when considering the parity of the number of inversions. It now follows that starting with $n = 0$ and increasing $n$ successively by one, that $S(n + 3 \cdot 2^{2k+1}) = S(n) + 2^{k+1}$ for each $n$ in the domain of equation (\ref{eqq1}).
Equations~\eqref{eqq3} and \eqref{eqq4} may be proved in a similar fashion.
\end{proof}
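A direct numerical check of the four identities (illustrative sketch):
\begin{verbatim}
def i(n):
    b = bin(n)[2:]
    return (-1) ** sum(b[:k].count('1') for k, c in enumerate(b) if c == '0')

S, total = [], 0
for n in range(2 ** 13):
    total += i(n)
    S.append(total)

for k in range(1, 6):
    a, b = 2 ** (2 * k), 2 ** (2 * k + 1)
    assert all(S[n + a]     == -S[n] + 3 * 2 ** k   for n in range(a, b))
    assert all(S[n + 3 * a] ==  S[n] + 2 ** k       for n in range(a))
    assert all(S[n + b]     == -S[n] + 2 ** (k + 2) for n in range(b, 2 * b))
    assert all(S[n + 3 * b] ==  S[n] + 2 ** (k + 1) for n in range(b))
print("identities of the lemma confirmed for k = 1, ..., 5")
\end{verbatim}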
\subsection{Outline of the proof of the upper limit}
In order to prove the upper limit of $\frac{S(n)}{\sqrt{n}}$, our
argument becomes a little bit messy, so we give a brief outline of our
approach: Recall that $I_k = [2^{2k -1}, 2^{2k+1} - 1]$. Lemma~\ref{dividingI} leads naturally to the following division of $I_{k}\setminus \{2^{2k+1}-1\}$:
\begin{align*} &I_{k,1} = [2^{2k-1}, 3\cdot 2^{2k-2}-1] &&\qquad I_{k,2}=[3\cdot 2^{2k-2}, 2^{2k}-1]\\
&I_{k,3}= [2^{2k}, 3\cdot 2^{2k-1}-1] &&\qquad I_{k,4} = [3\cdot 2^{2k-1}, 2^{2k+1} -2]. \end{align*}
We attempt to prove that for $n \geq 8$, if $n \neq 2^{2k+1}-1$ for $k \geq 1$, then $\frac{S(n)}{\sqrt{n}} < \sqrt{2}$.
$I_{k,1}$ and $I_{k,2}$ are taken care of by first establishing that the maximum $S(n)$ value on these two intervals is $2^k$, after which the result falls out quite nicely.
The proof for the interval $I_{k,3}$ demands that we split it up into
several sub-intervals based on the equations of Lemma~\ref{dividingI}. We show that a local max on $[2^{2k},3\cdot 2^{2k-1}-1]$ occurs at $n_0 = 5\cdot 2^{2(k-1)}-1$, with $S(n_0) = 3\cdot 2^{k-1}$, which effectively cuts $I_{k,3}$ in half. The algebra comes together for the second half of this division, but the first still requires some work.
Using the formulae once more, we cut this new subinterval into two pieces, $[2^{2k}, 3^2 \cdot 2^{2(k-1) -1}-1]$ and $[3^2 \cdot 2^{2(k-1)-1}, 5\cdot 2^{2(k-1)}-1]$. Again the algebra follows for the latter half, but not the former. We then determine that a max on the former interval occurs at $n = 17 \cdot 2^{2(k-2)}-1$, giving $S(n) \leq 5\cdot 2^{k-2}$, which is strong enough to finally allow us to obtain the desired inequality.
The last interval $I_{k,4}$, with the exception of $n = 2^{2k+1}-1$, ends up being dispatched with relative ease using some simple algebra.
This result along with Theorem \ref{I_k_max} and Corollary \ref{maxLimit} is enough to give the desired result.
\subsection{Establishing the upper limit}
\begin{theorem}\label{sqrt2} Let $n \geq 8$. If $n \neq 2^{2k+1}-1$ for $k \geq 1$, then $\frac{S(n)}{\sqrt{n}} < \sqrt{2} $.
\end{theorem}
In order to prove the above, we must first develop some useful tools. The following few results show that if $n \in [2^{2k-1}, 2^{2k}-1]$, then $S(n) \leq 2^k$ for $k \geq 1$.
\begin{proposition}\label{formProp} Suppose $k \geq 1$. If $m_0$ is of the form \begin{equation}\label{theform} 2^{2k} -1 - \sum\limits_{r=0}^{k-2}\varepsilon_r(3\cdot 2^{2r+1}) - \beta \end{equation} for some combination of $\varepsilon_r\text{{\rm{'s}}} \in\{0,1\}$ and $\beta \in \{0,2\}$, then $4 m_0 + 3$ is also of the above form.
\end{proposition}
\begin{proof} First let $m_0 = 2^{2k} -1 - \sum\limits_{r=0}^{k-2}\varepsilon_r(3\cdot 2^{2r+1})$. Then
\begin{align*} 4 m_0 +3 &= 4\left( 2^{2k} -1 - \sum\limits_{r=0}^{k-2}\varepsilon_r(3\cdot 2^{2r+1})\right) +3 \\
&= 2^{2(k+1)} -4 - \sum\limits_{r=0}^{k-2}\varepsilon_r(3\cdot 2^{2(r+1)+1}) +3\\
&= 2^{2(k+1)} -1 - \sum\limits_{s=1}^{k-1}\varepsilon_{s-1}(3\cdot 2^{2{s}+1}) \qquad (\text{letting } s = r+1).\\
\end{align*}
By letting $\varepsilon_0 = 0$ and re-indexing the $\varepsilon_s$ so that the summands have the form $\varepsilon_{s}(3\cdot 2^{2{s}+1})$, we see that the above is indeed of the desired form. The case where $$m_0 = 2^{2k} -1 - \sum\limits_{r=0}^{k-2}\varepsilon_r(3\cdot 2^{2r+1}) -2$$ is similar, although in this case we will have $\varepsilon_0 = 1$.
\end{proof}
\begin{lemma}
If $n$ may be written in the form seen in equation {\rm{(\ref{theform})}} for some combination of $\varepsilon_r\text{'s} \in\{0,1\}$ and $\beta \in \{0,2\}$, then $i_n = +1$.
\end{lemma}
\begin{proof}
Suppose that $$n = 2^{2k} -1 -
\sum\limits_{r=0}^{k-2}\varepsilon_r(3\cdot 2^{2r+1}) - \beta$$ for
some combination of $\varepsilon_r\text{'s} \in\{0,1\}$ and $\beta \in
\{0,2\}$. We note that the binary form of $2^{2k}-1$ consists of $2k$
1's. Consider the case when $\beta = 0$. Observe that if we label the
digit positions of the binary representation starting from the right and beginning with 0, subtracting $3 \cdot 2^{2r+1}$ from $2^{2k}-1$, for $0 \leq r \leq k-2$, changes the digits in positions $2r+1$ and $2r+2$ from `1's to `0's. It follows that any $n$ of the above form will have `0's only occurring in blocks of even length. This ensures an even number of inversions, which means $i_n = +1$.
Now let $\beta = 2$. Every $n$ of this form may be obtained subtracting 2 from an $n$ of the form in the above case. Since `0's occur in even blocks, this subtraction will turn the block of zeroes adjacent to the `1' in the 0th position (which could possibly be empty) into `1's and the `1' to the left of the block into a `0'. The changing of the even block of zeroes into `1's will change the inversion number by an even amount, so we only need to check that the new `0' does not create an odd number of inversions. However we also know that excluding the left and rightmost `1's, `1's must come in even blocks as well. The new `0' will thus have an even number of `1's to the left of it (since it turns the right digit in a pair of `1's into a `0'). Hence we still have an even number of inversions, so $i_n = +1$.\end{proof}
\begin{lemma}Given an $n$ of the form in equation (\ref{theform}), $S(n) = 2^k$.
\end{lemma}
\begin{proof}
It may be observed that the result is certainly true for $k=1$, so assume that it is true for an arbitrary $k$ and consider $k+1$. Our approach uses Proposition~\ref{sumIdentities} extensively, so it will be useful to note that the only $\varepsilon_r$ that affects the value of $n$ modulo 4 will be $\varepsilon_0$. Since $\beta$ will also affect this value, it is natural to have 4 cases.
\noindent \underline{Case 1}: $\varepsilon_0 = 1, \beta = 2.$\\
We have:
\begin{align*} n &= 2^{2(k+1)} -1 - \sum\limits_{r=1}^{k-1}\varepsilon_r(3\cdot 2^{2r+1}) - 3 \cdot 2 - 2 \\
&= 4\left( 2^{2k} - \sum\limits_{r=1}^{k-1}\varepsilon_r(3\cdot 2^{2(r-1)+1}) \right) - 9\\
&= 4\left( 2^{2k} - \sum\limits_{r=0}^{k-2}\varepsilon_{r+1}(3\cdot 2^{2r+1}) \right) - 12 + 3\\
&= 4 \left( 2^{2k} - 1 -\sum\limits_{r=0}^{k-2}\varepsilon_{r+1}(3\cdot 2^{2r+1}) -2\right) +3.
\end{align*}
From the induction hypothesis, $$S\left(2^{2k} - 1 - \sum\limits_{r=0}^{k-2}\varepsilon_{r+1}(3\cdot 2^{2r+1}) -2\right) = 2^{k},$$ and so by Proposition~\ref{sumIdentities} $S(n) = 2\cdot 2^{k} = 2^{k+1}$.\\
\underline{Case 2}: $\varepsilon_0 = 0, \beta = 2.$\\
With a bit of algebra, we find that
\begin{align*} n &= 2^{2(k+1)} -1 - \sum\limits_{r=1}^{k-1}\varepsilon_r(3\cdot 2^{2r+1}) - 2 \\
&= 4 \left( 2^{2k} -1 - \sum\limits_{r=0}^{k-2}\varepsilon_{r+1}(3\cdot 2^{2r+1}) \right) +1.
\end{align*}
We thus obtain the following equation for $S(n)$:
\begin{align*}S(n) &= S\left( 4 \left( 2^{2k} -1 -\sum\limits_{r=0}^{k-2}\varepsilon_{r+1}(3\cdot 2^{2r+1}) \right) +1 \right)\\
&= 2 S(m) -i_m + i_{2m},
\end{align*}
where $m =2^{2k} - 1 -\sum\limits_{r=0}^{k-2}\varepsilon_{r+1}(3\cdot 2^{2r+1})$.
By observing the binary representation of $m$ and $2m$, we find that $-i_m +i_{2m} = 0$. It follows from the induction hypothesis that $S(n) = 2S(m) = 2^{k+1}$. \\
\underline{Case 3}: $\varepsilon_0 = 1, \beta = 0.$\\
It is not too hard to show that
\begin{align*}
n &= 4\left[\left(2^{2k} -1 - \sum\limits_{r=0}^{k-2}\varepsilon_{r+1}(3\cdot 2^{2r+1})\right) -1\right]+1\\
&= 4(m-1) +1,
\end{align*}
where $$m = 2^{2k} -1 - \sum\limits_{r=0}^{k-2}\varepsilon_{r+1}(3\cdot 2^{2r+1}) -1.$$ From the induction hypothesis and the fact that $i_m = +1$, we obtain that $S(m-1) = 2^{k}-1$. Hence $S(n) =S(4(m-1)+1) = 2S(m-1) - i_{m-1} +i_{2(m-1)}$.
Now $m$ has an even number of `1's and `0's in its binary representation and ends in `01', so $m-1$ will have an odd number of `1's and `0's and end in `00'. The binary representation of $2(m-1)$ will then have an extra `0' at the end, and since there are an odd number of preceding `1's, the parity of its inversion count will be the opposite of $m-1$, ie. $i_{m-1} = -i_{2(m-1)}$.
From the above lemma, we know that $i_m = +1$. Writing $m$ in the form of (\ref{theform}), we find that its $\beta$ value is $0$. Thus $m-2$ may also be written in the same form, which implies $i_{m-2}=+1$. It follows that $i_{m-1} = -1$, and so $S(n)= 2S(m-1) -(-1) +1 = 2^{k+1} - 2 + 2 = 2^{k+1}$ as needed.
The remaining case is similar to Case 1.
\end{proof}
\begin{theorem}\label{thm_J_k}
Let $J_k = [2^{2k-1}-1, 2^{2k}-1]$. Then for $n\in J_k$, $S(n) = 2^k$ if and only if $n$ is of the form in {\rm{(\ref{theform})}} for some combination of $\varepsilon_r\text{\rm{'s}} \in\{0,1\}$ and $\beta \in \{0,2\}$. Moreover, $2^k$ is the maximum value for the partial sum function over $J_k$.
\end{theorem}
\begin{proof}
We proceed by induction.
Observe that this result holds for $J_1$, so suppose it holds true for some $k \geq 1$ and consider $n \in J_{k+1}$.
Suppose $S(n) = 2^{k+1}$, but $n$ is not of the form in
(\ref{theform}) for any combination of $\varepsilon_r$'s and $\beta$'s. First of all, if $n =4m_0 +3$, then $m_0$ is in $J_{k}$, and $S(n) = 2S(m_0)$, giving $S(m_0) = 2^{k}$. By the induction hypothesis, $m_0$ may be written in the form of equation (\ref{theform}). However it follows from Proposition~\ref{formProp} that $4m_0 +3$ may also be written in same form, contradicting the hypothesis. Thus we may assume $n = 4m_0 +1$. \\
\noindent We note that $S(4m_0 +1) = 2S(m_0) -i_{m_0} - i_{2m_0} = 2^{k+1}.$ Some rearranging gives: $$S(m_0) = \frac{(2^{k+1} + i_{m_0} + i_{2m_0})}{2} = 2^{k} + \frac{( i_{m_0} + i_{2m_0})}{2} \in \{2^{k}-1, 2^{k}, 2^{k} +1\}.$$ It follows from the induction hypothesis that $S(m_0) \neq 2^{k}+1$, so we need only consider the other two cases. Suppose $S(m_0)= 2^{k}$. By the induction hypothesis, $i_{m_0}= +1 $ (else $S(m_0 -1) = 2^{k}+1$), and it is easy to see that $i_{2m_0}= -1$. Furthermore, the induction hypothesis gives that $m_0$ may be written in the form of equation (\ref{theform}), with $\beta = 2$ (since $\beta = 0$ gives that $4m_0 +1$ is also of the form in (\ref{theform}) with $\beta = 2$, which is a contradiction). From this, we can say that $$4m_0 +1 = 2^{2(k+1)}-1 - \sum\limits_{r=0}^{k-1}\varepsilon_r(3\cdot 2^{2r+1}) - 4,$$ and so $4m_0+3$ can be expressed by an equation of the form (\ref{theform}), implying $i_{4m+3} = +1$. Moreover, $i_{4m_0 +2} = -i_{2m_0} = +1$. This gives that
$$2^{k+1} =2S(m_0) = S(4m_0 + 3) = S(4m_0 +1) + i_{4m_0 +2} + i_{4m_0 +3} = 2^{k+1} +2,$$
which is clearly a contradiction. Thus we must have $S(m_0) = 2^{k} -1$. It now follows that $\frac{( i_{m_0} + i_{2m_0})}{2} = -1$, so $i_{m_0} = i_{2m_0} = -1$. Since $i_{4m_0 +1} = i_{2m_0}$, we have
\begin{eqnarray*}
S(4m_0) &=& S(4m_0 +1)-i_{4m_0+1} \\
&=& 2S(m_0) - i_{m_0}\\
&=& 2^{k+1} + 1.
\end{eqnarray*}
Therefore $S(4m_0) -1 = 2^{k+1} = 2S(m_0).$ However $2S(m_0) = 2^{k+1}$, implying that $S(m_0) = 2^{k}$, a contradiction.
Now we must show that $S(n)\leq 2^k$ for $n\in J_k$. Writing $n = 4m+
d$, where $d \in \{ 0,1,2,3\}$, Proposition~\ref{sumIdentities} tells
us that $S(n) > 2^k$ is only possible if $S(m) = 2^{k-1}$. Hence $m$
can be written in the form of equation (\ref{theform}). If $d = 0$,
then Proposition~\ref{sumIdentities} gives us $S(n) \leq 2^{k}+1$. If
we have equality here, this implies that $S(4(m-1)+3) = 2^k$, and
consequently that $S(m)= S(m-1)$, which is clearly impossible. By
Corollary \ref{parity}, $S(4m) \leq 2^k -1$, which gives us that
$S(4m+1) \leq 2^k$. Finally, since we have that $S(4m+3) = 2^k$ and
$i_{4m+3} = +1$, it follows that $S(4m+2) \leq 2^k -1$. Therefore no value of
$d$ allows for $S(n)$ to exceed $2^k$, giving the result.
\end{proof}
\noindent We now finally have the necessary tools to prove Theorem \ref{sqrt2}.
\begin{proof}
We will begin by observing that for $n$ in $I_2 \setminus \{31\} = [8,30]$, $\frac{S(n)}{\sqrt{n}} < \sqrt{2}$. So assume that the statement is true for an arbitrary $k \geq 2$ and consider $I_{k+1}\setminus \{2^{2k+1}-1\}$. We will proceed by breaking up this interval into the following 4 pieces:
\begin{align*} &I_{k+1,1} = [2^{2k+1}, 3\cdot 2^{2k}-1] &&\qquad I_{k+1,2}=[3\cdot 2^{2k}, 2^{2k+2}-1]\\
&I_{k+1,3}= [2^{2k+2}, 3\cdot 2^{2k+1}-1] &&\qquad I_{k+1,4} = [3\cdot 2^{2k+1}, 2^{2k+3} -2]. \end{align*}
\noindent\underline{Case 1}: $n \in I_{k+1,1} \cup I_{k+1,2} = [2^{2k+1}, 2^{2k+2}-1]$.\\
By Theorem~\ref{thm_J_k}, all $S(n)$ values in this range are bounded
above by $2^{k+1}$, so we have
$\frac{S(n)}{\sqrt{n}} \leq \frac{2^{k+1}}{\sqrt{2^{2k+1}}} = \sqrt{2}$. We observe that equality is possible only when $n = 2^{2k+1}$; but since $S(2^{2k+1}-1) = 2^{k+1}$ and $i_{2^{2k+1}} = -1$, we get that $S(2^{2k+1}) = 2^{k+1}-1 < 2^{k+1}$, hence the result holds on these two intervals.
\noindent \underline{Case 2:} $n \in I_{k+1,3} = [2^{2k+2}, 3 \cdot 2^{2k+1}-1]$. \\
Observe that by \eqref{eqq2} $I_{k+1,3}$ is determined entirely by the interval $[2^{2k+1}, 2^{2k+2}-1]$. Since we have that the minimum $S(n)$ value on $I_{k+1}$ occurs at $n = 3 \cdot 2^{2k} -1$, it follows that the maximum on $I_{k+1,3}$ occurs precisely at $n= (3 \cdot 2^{2k} -1) + 2^{2k+1} = 5 \cdot 2^{2k}-1$, with $S(n) = 3 \cdot 2^{k}$. If we consider the interval $[5 \cdot 2^{2k}, 3 \cdot 2^{2k+1}-1]$, we find that
\begin{align*} \frac{S(n)}{\sqrt{n}} \leq \frac{3 \cdot 2^{k}}{\sqrt{5 \cdot 2^{2k}}} = \frac{3}{\sqrt{5}}< \sqrt{2}.
\end{align*}
It remains to show that the bound holds on $[2^{2k+2},5 \cdot 2^{2k}-1]$.
For reasons that will become apparent shortly, it will be convenient
to split this remaining interval into two disjoint pieces,
$[2^{2k+2},9 \cdot 2^{2k-1}-1]$ and $[9 \cdot 2^{2k-1},5\cdot
2^{2k}-1]$. Consider the interval $I_{k,3} = [2^{2k}, 3 \cdot
2^{2k-1}-1]$. We have already established that the unique maximum
value of $S(n)$ on this interval is $3 \cdot 2^{k-1}$, and occurs
exactly at $n_1 = 5 \cdot 2^{2(k-1)} -1$. Using \eqref{eqq3}, we find
that the minimum value on the interval $[2^{2k+1},5 \cdot2^{2k-1}-1]$
occurs at $n_2 = 9 \cdot 2^{2k-2}-1$, with $S(n_2) = 3 \cdot
2^{k-1}$. Finally, we can apply \eqref{eqq2} to obtain that the unique maximum on the interval $[2^{2k+2},9 \cdot 2^{2k-1}-1]$ occurs at $n = 17 \cdot 2^{2(k-1)}-1$, and that $S(n) = 5 \cdot 2^{k-1}$. By some quick algebra, we find that for $n$ in this interval,
$$ \frac{S(n)}{\sqrt{n}} \leq \frac{5 \cdot 2^{k-1}}{\sqrt{2^{2k+2}}} < \sqrt{2}.$$
Lastly we tackle the final piece, $[9 \cdot 2^{2k-1}, 5 \cdot 2^{2k}-1]$. We have that for any $n \in I_{k+1,3}$, $S(n) \leq 3 \cdot 2^{k}$. Thus for any $n$ in this interval
$$ \frac{S(n)}{\sqrt{n}} \leq \frac{3 \cdot 2^{k}}{\sqrt{9 \cdot 2^{2k-1}}} \leq \sqrt{2}.$$
As equality can only hold when $n = 9 \cdot 2^{2k-1}$, we just need to check this value. However, $S(9 \cdot 2^{2k-1}-1) = 3 \cdot 2^{k}$, implying that $S(9 \cdot 2^{2k-1}) \leq 3 \cdot 2^{k}-1$. This gives us a strict inequality and completes the proof for this interval.\\
\underline{Case 3}: $n \in I_{k+1,4} = [3 \cdot 2^{2k+1}, 2^{2k+3}-2]$. \\
Write $n = n'+3 \cdot 2^{2k+1}$.
We observe that \eqref{eqq1} pertains to this interval completely, giving us
\begin{align*}
\frac{S(n)}{\sqrt{n}} &= \frac{S(n' + 3 \cdot 2^{2k+1})}{\sqrt{n'+3
\cdot 2^{2k+1}}} \\
&= \frac{S(n')}{\sqrt{n'}} \cdot \frac{\sqrt{n'}}{\sqrt{n'+3 \cdot 2^{2k+1}}} +\frac{2^{k+1}}{\sqrt{n'+3 \cdot 2^{2k+1}}}\\
& < \frac{\sqrt{2n'} +2^{k+1}}{\sqrt{n'+3\cdot 2^{2k+1}}}.
\end{align*}
Using a little bit of algebra, we find that this is less than $\sqrt{2}$ whenever $0 \leq n' \leq 2^{2k+1}-2$. Since this is within the domain of \eqref{eqq1}, we have the result.
\end{proof}
\subsection{Establishing the lower limit}
\begin{theorem}\label{lower_bound}
$\frac{S(n)}{\sqrt{n}} > \frac{1}{\sqrt{3}}$ for all $n \geq 1$.
\end{theorem}
\begin{proof}
We can certainly observe this for values of $n$ up to 8, so we can assume that it holds true for all $n$ up to and including $I_j$ where $1 \leq j \leq k$, and consider $I_{k+1}$ for $k \geq 1$.
By Theorem~\ref{I_k_min}, the minimum value of $S$ occurs at $n_0 = 3
\cdot 2^{2k}-1$ with $S(n_0)= 2^k$. It follows quite easily that for
any $n \in I_{k+1,1}=[2^{2k+1},3 \cdot 2^{2k}-1]$, the
inequality $$\frac{S(n)}{\sqrt{n}} \geq \frac{2^k}{\sqrt{3 \cdot
2^{2k}-1}} > \frac{2^k}{\sqrt{3 \cdot 2^{2k}}} =
\frac{1}{\sqrt{3}}$$ is satisfied.
For $I_{k+1,2}$, observe that \eqref{eqq4} applies exactly to this interval. Hence if $n = n' + 3 \cdot 2^{2k}$, the induction hypothesis gives
\begin{align*} \frac{S(n'+3 \cdot 2^{2k})}{\sqrt{n'+3 \cdot 2^{2k}}} &= \frac{S(n')}{\sqrt{n'}} \cdot \frac{\sqrt{n'}}{\sqrt{n'+3 \cdot 2^{2k}}} + \frac{2^k}{\sqrt{n'+3 \cdot 2^{2k}}}\\
& > \frac{1}{\sqrt{3}} \cdot \frac{\sqrt{n'}}{\sqrt{n'+3 \cdot 2^{2k}}}+ \frac{2^k}{\sqrt{n'+3 \cdot 2^{2k}}}
\end{align*}
for $n' \geq 1$.
We would like $$\frac{1}{\sqrt{3}} \cdot \frac{\sqrt{n'}}{\sqrt{n'+3 \cdot 2^{2k}}}+ \frac{2^k}{\sqrt{n'+3 \cdot 2^{2k}}} \geq \frac{1}{\sqrt{3}},$$ which with a little bit of work, can be shown to be equivalent to
$$\frac{(\sqrt{n'}+ \sqrt{3}\cdot 2^k)^2}{{n'+3 \cdot 2^{2k}}} \geq 1.$$ As this is true when $n' \geq 0$ and hence on the domain of the equation, we have the result for $I_{k+1,2} \setminus \{3\cdot 2^{2k}\}$. It is easily verified that $S(3\cdot 2^{2k}) = 2^{k}+1$ and that the bound holds for this value as well, giving us the result for $I_{k+1,2}$.
Now consider $n \in I_{k+1,3}= [2^{2k+2}, 3 \cdot 2^{2k+1}-1]$. Recall
that by Lemma \ref{dividingI}, the values of $S$ on $I_{k+1,3}$ are
completely determined by the values of $S$ on $I_{k+1,1} \cup I_{k+1,2}$. We also know that the
maximum value of $S$ on $I_{k+1,1} \cup I_{k+1,2}$ is $2^{k+1}$. In particular,
$S(n) \geq - 2^{k+1} + 2^{k+2} = 2^{k+1}$.
Finally, let $n \in I_{k+1,4} \cup \{2^{2k+3}-1\}$. Note that the value of $S$ on
$I_{k+1,4} \cup \{2^{2k+3}-1\}$ is completely determined by the value of $S$ on $[0, 2^{2k+1}-1]$. Moreover, the
minimum value of $S$ on $[0, 2^{2k+1}-1]$ is $1$. By equation
\eqref{eqq1} we obtain that $S(n) \geq 1 + 2^{k+1} > 2^{k+1}$.
Hence for $n \in I_{k+1,3} \cup I_{k+1,4} \cup \{2^{2k+3}-1\}$, the following inequality holds:
$$\frac{S(n)}{\sqrt{n}} \geq \frac{2^{k+1}}{\sqrt{2^{2k+3}-1}} > \frac{2^{k+1}}{\sqrt{2^{2k+3}}} = \frac{1}{\sqrt{2}} > \frac{1}{\sqrt{3}},$$
thus establishing the result for the remaining piece of $I_{k+1}$ and completing the proof.
\end{proof}
Theorem~\ref{lim_inf_sup} now follows from Corollary \ref{maxLimit} and Theorems~\ref{I_k_max},
\ref{I_k_min}, \ref{sqrt2}, \ref{lower_bound}.
\section{Combinatorial properties}\label{combinat_props}
Both the Thue--Morse sequence and the Rudin--Shapiro sequence have
been extensively studied from the point of view of combinatorics on
words. Indeed, both of these sequences have many interesting
combinatorial properties. Before collecting some of the combinatorial
properties of the sequence $(i_n)_{n \geq 0}$, we first recall some
basic definitions.
A word of the form $xx$, where $x$ is non-empty, is called a
\emph{square}. A \emph{cube} has the form $xxx$, and in general, a
\emph{$k$-power} has the form $xx\cdots x$ ($x$ repeated $k$ times)
and is denoted by $x^k$. A \emph{palindrome} is a word that is equal to
its reversal. We denote the length of a word $x$ by $|x|$.
\begin{theorem}
The sequence $(i_n)_{n \geq 0}$ contains
\begin{enumerate}
\item no $5$-th powers,
\item cubes $x^3$ exactly when $|x| = 3$,
\item squares $xx$ exactly when $|x| \in \{1,2\} \cup \{3\cdot2^k : k \geq 0\}$.
\item arbitrarily long palindromes.
\end{enumerate}
\end{theorem}
\begin{proof}
First, note that 1) can be deduced from 2) along with a computer
calculation to verify that there are no $5$-th powers of period $3$.
The proofs of 2)--4) are ``computer proofs''. The survey \cite{Sha13}
gives an overview of a general method for proving combinatorial
properties of automatic sequences. We will not explain the method in
any great detail here. The output of the computer prover is a finite
automaton accepting the binary representation of the lengths of the
squares, cubes, palindromes, etc.\ contained in the sequence of
interest.
Figure~\ref{cube_lengths} shows the automaton accepting the binary
representations of the lengths of the periods of the cubes present in
the sequence. It is easy to see that the only numbers accepted by the
automaton are $0$ and $3$. Of course $0$ is not a valid length for
the period of a repetition, but it makes things a little easier
to allow the automaton to accept $0$.
Figure~\ref{square_lengths} shows the automaton accepting the lengths
of the periods of the squares. Again, it is easy to see from the
structure of the automaton that the non-zero lengths accepted are the
elements of the set $\{1,2\} \cup \{3\cdot2^k : k \geq 0\}$.
Finally, Figure~\ref{pal_lengths} shows the automaton accepting the
lengths of the palindromes. It is easy to see that this automaton
accepts the binary representations of infinitely many numbers.
\end{proof}
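As an independent sanity check of 2) and 3) (the proofs themselves rely on the automated prover), one can search a prefix of the sequence by brute force; any period length found must lie in the sets described above, although of course only finitely many lengths can show up in a finite prefix. The following sketch is illustrative only.
\begin{verbatim}
def i(n):
    b = bin(n)[2:]
    return (-1) ** sum(b[:k].count('1') for k, c in enumerate(b) if c == '0')

N = 2 ** 11
w = [i(n) for n in range(N)]

def period_lengths(power):
    found = set()
    for L in range(1, N // power + 1):
        for p in range(N - power * L + 1):
            if all(w[p + j] == w[p + j + L] for j in range(L * (power - 1))):
                found.add(L)
                break
    return sorted(found)

print(period_lengths(3))   # cube periods in the prefix: a subset of {3}
print(period_lengths(2))   # square periods: a subset of {1,2} U {3*2^k}
\end{verbatim}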
\begin{figure}[p]
\centering
\includegraphics[scale=0.7]{output_narad-cubes}
\caption{Automaton accepting period lengths of cubes in $(i_n)_{n \geq
0}$}\label{cube_lengths}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[scale=0.7]{output_narad-squares}
\caption{Automaton accepting period lengths of squares in $(i_n)_{n \geq
0}$}\label{square_lengths}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[scale=0.7]{output_narad-pal}
\caption{Automaton accepting lengths of palindromes in $(i_n)_{n \geq
0}$}\label{pal_lengths}
\end{figure}
\section{Conclusion}
It would be interesting to study the properties of other sequences of
the form $(\,(-1)^{\sub_{2;w}(n)}\,)_{n \geq 0}$ for different choices
of subsequence $w$.
\section*{Acknowledgments}
The computations needed to prove the results in
Section~\ref{combinat_props} were performed by Jeffrey Shallit and
Hamoon Mousavi. We would like to sincerely thank them for their
assistance.
\newpage
| {
"timestamp": "2014-08-12T02:10:08",
"yymm": "1408",
"arxiv_id": "1408.2277",
"language": "en",
"url": "https://arxiv.org/abs/1408.2277",
"abstract": "We introduce the sequence $(i_n)_{n \\geq 0}$ defined by $i_n = (-1)^{inv_2(n)}$, where $inv_2(n)$ denotes the number of inversions (i.e., occurrences of 10 as a scattered subsequence) in the binary representation of n. We show that this sequence has many similarities to the classical Rudin-Shapiro sequence. In particular, if S(N) denotes the N-th partial sum of the sequence $(i_n)_{n \\geq 0}$, we show that $S(N) = G(\\log_4 N)\\sqrt{N}$, where G is a certain function that oscillates periodically between $\\sqrt{3}/3$ and $\\sqrt{2}$.",
"subjects": "Combinatorics (math.CO); Formal Languages and Automata Theory (cs.FL); Number Theory (math.NT)",
"title": "Some properties of a Rudin-Shapiro-like sequence",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9873750514614408,
"lm_q2_score": 0.7185943925708561,
"lm_q1q2_score": 0.7095221753445519
} |
https://arxiv.org/abs/1409.3475 | Piecewise straightening and Lipschitz simplicial volume | We study the Lipschitz simplicial volume, which is a metric version of the simplicial volume. We introduce the piecewise straightening procedure for singular chains, which allows us to generalize the proportionality principle and the product inequality to the case of complete Riemannian manifolds of finite volume with sectional curvature bounded from above. We obtain also yet another proof of the proportionality principle in the compact case by a direct approximation of the smearing map. | \section{Introduction}
The simplicial volume is a homotopy invariant of manifolds defined for a closed manifold $M$ as
$$
\|M\| := \inf\{|c|_1\::\: \text{$c$ is a fundamental cycle with $\mathbb{R}$ coefficients}\},
$$
where $|\cdot|_1$ is the $\ell^1$-norm on $C_*(M,\mathbb{R})$ (which we will denote for simplicity as $C_*(M)$) with respect to the basis consisting of singular simplices. Although the definition is relatively straightforward, it has many applications. Most of them are mentioned in the work of Gromov \cite{G}; one of the most important is their use in degree theorems. In general, by a degree theorem we understand a bound on the degree of a continuous map $f:M\rightarrow N$ between two $n$-dimensional Riemannian manifolds
$$
\deg(f)\leq \const_n\frac{\vol(M)}{\vol(N)}.
$$
Such a theorem may obviously require additional assumptions. The reason why the simplicial volume is suitable for establishing such theorems is its functoriality, i.e. if $f:M\rightarrow N$ is a map between two closed manifolds then
$$
\|N\|\leq \deg(f)\cdot \|M\|.
$$
One obtains easily that if $\|N\|\neq0$, then
$$
\deg(f)\leq \frac{\|M\|}{\|N\|}.
$$
Under suitable curvature assumptions, Gromov proved in \cite{G} that a given Riemannian manifold $M$ satisfies $\|M\|\leq \const_n\cdot \vol(M)$ and $\|M\|\geq \const_n'\cdot \vol(M)$ (with constants depending only on the dimension $n$), which together imply the degree theorem whenever the curvature assumptions are satisfied.
In most cases simplicial volume is very difficult to compute exactly. However, it has a few properties which can be used to approximate it or at least decide if it is zero or not. Two of them which we are interested in are the product inequality and the proportionality principle.
\begin{thm}[\cite{G}]\label{PI0}
Let $M$ and $N$ be two compact manifolds. Then the following inequality holds
$$
\|M\|\cdot\|N\| \leq \|M\times N\|.
$$
\end{thm}
\begin{thm}[\cite{G}, \cite{Str}]\label{PP0}
Let $M$ and $N$ be two compact Riemannian manifolds. Assume also that their universal covers are isometric. Then
$$
\frac{\|M\|}{\vol(M)} = \frac{\|N\|}{\vol(N)}.
$$
\end{thm}
A natural question to ask is whether these properties generalise somehow to the non-compact case. In order to have a fundamental class, one needs to consider the $\ell^1$-norm on locally finite singular chains instead of just (finite) singular chains. In this case the simplicial volume obviously does not have to be finite. Unfortunately, neither of the above properties holds in such generality. The product inequality does not hold because of another result of Gromov from \cite{G}, namely that the simplicial volume of a product of at least 3 open manifolds is $0$, while there are examples of products of two such manifolds with nonzero simplicial volume \cite{LS}. The proportionality principle fails for a similar reason. Take a product of three non-compact locally symmetric spaces of finite volume. Its simplicial volume vanishes, but on the other hand there always exists a compact locally symmetric space with isometric universal cover \cite{B}, and the simplicial volume of closed locally symmetric spaces of non-compact type is known to be nonzero \cite{LafS}.
The solution to these problems, also proposed by Gromov in \cite{G}, is to consider a geometric variant of simplicial volume by taking only Lipschitz chains. This way one obtains the Lipschitz simplicial volume
$$
\|M\|_{\Lip} := \inf\{|c|_1\::\: \text{$c\in C^{lf}_*(M)$ is a fundamental cycle with $\mathbb{R}$ coefficients, $\Lip(c)<\infty$}\}.
$$
In the case of closed manifolds the classical and the Lipschitz simplicial volumes coincide. L\"oh and Sauer studied the above invariant in \cite{LS} and proved that it may be a proper generalisation of the simplicial volume to the case of complete Riemannian manifolds of finite volume, not necessarily compact. In particular, in the presence of non-positive curvature they proved the proportionality principle and the product inequality. The main result of this article is a generalisation of their proofs to the case of manifolds with curvature bounded from above.
\begin{thm}[Product inequality]\label{PI1}
Let $M$ and $N$ be two complete, Riemannian manifolds with sectional curvatures bounded from above. Then the following inequality holds
$$
\|M\|_{\Lip}\cdot\|N\|_{\Lip} \leq \|M\times N\|_{\Lip}.
$$
\end{thm}
\begin{thm}[Proportionality principle]\label{PP1}
Let $M$ and $N$ be two complete Riemannian manifolds of finite volume with sectional curvatures bounded from above. Assume also that their universal covers are isometric. Then
$$
\frac{\|M\|_{\Lip}}{\vol(M)} = \frac{\|N\|_{\Lip}}{\vol(N)}.
$$
\end{thm}
In the work of L\"oh and Sauer, non-positive curvature assumption is needed to introduce the procedure of straightening the simplices. Namely, given a singular chain, one can homotope it to the chain consisting of straight simplices by using the fact that in simply connected, non-positively curved Riemannian manifolds geodesics are unique. To generalize this straightening technique to the case of manifolds with curvature bounded from above we construct 'exponential neighbourhoods' of points of a given manifold, where the straightening can be applied after some iterated barycentric subdivision of simplices. These 'neighbourhoods' were introduced by Gromov in \cite[4.3(B)]{G}, however, we give much more details.
Since for closed manifolds the sectional curvature is always bounded and the Lipschitz simplicial volume equals the classical one, we obtain yet another proof of Theorem \ref{PP0}. We follow Thurston's approach from \cite{T} (used in \cite{Str}); however, we obtain the proof without any use of bounded cohomology, by directly approximating the smearing map.
The proportionality principle provides direct connection between Lipschitz simplicial volume and volume, therefore one obtains immediately
\begin{cor}
Let $f:M\rightarrow N$ be a proper Lipschitz map between two complete Riemannian manifolds of finite volume with sectional curvatures bounded from above, which in addition have isometric universal covers. Assume moreover that $\|N\|_{\Lip}\neq 0$. Then
$$
\deg(f)\leq \frac{\vol(M)}{\vol(N)}.
$$
\end{cor}
L\"oh and Sauer combined this fact for non-positively curved manifolds with the facts that Lipschitz simplicial volume is strictly positive for locally symmetric spaces of non-compact type of finite volume \cite{B,LafS,LS}, that there are only finitely many symmetric spaces (with the standard metric) in each dimension and that $\|N\|\leq C_n \vol(N)$ if $\Ricci(N)\geq -(n-1)$ and $\sec(N)\leq 1$ \cite{G,LS} to prove the following theorem.
\begin{thm}[Degree theorem, \cite{CF,LafS,LS}]
For every $n\in\mathbb{N}$ there is a constant $C_n>0$ with the following property: Let $M$ be an $n$-dimensional locally symmetric space of non-compact type with finite volume. Let $N$ be an $n$-dimensional complete Riemannian manifold of finite volume with $\Ricci(N)\geq -(n-1)$ and $\sec(N)\leq 1$, and let $f:N\rightarrow M$ be a proper Lipschitz map. Then
$$
\deg(f)\leq C_n\cdot\frac{\vol(N)}{\vol(M)}.
$$
\end{thm}
Possible generalisations of the above theorem depend on further results on the non-vanishing of the Lipschitz simplicial volume. At the moment, most results in this direction are based on the proportionality principle indicated above and on the non-vanishing of the simplicial volume for negatively curved spaces \cite{T}, locally symmetric spaces of non-compact type \cite{LafS}, and their products and connected sums \cite{G}.
\subsection*{Notation}
To clarify the notation, we will denote by $B_M(x,r)$ an open ball in a space $M$ centred at $x$ with radius $r$, and more generally by $B_M(X,r)$ an open $r$-neighbourhood of a set $X\subset M$. We consider all Riemannian manifolds as metric spaces with the metric induced by the Riemannian structure. In particular, if they are complete they are geodesic as metric spaces, i.e. the distance between two points equals the length of a shortest path joining them. We will also identify a $k$-dimensional simplex $\Delta^k$ with the set $\{(x_0,...,x_k)\in\mathbb{R}_{\geq 0}^{k+1}\::\:\sum_{i=0}^kx_i=1\}\subset\mathbb{R}^{k+1}$ with the induced Riemannian structure.
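Since $\Delta^k$ is a convex subset of the hyperplane $\{\sum_{i=0}^kx_i=1\}$, its intrinsic metric coincides with the Euclidean one and its diameter is realised by a pair of vertices:
$$
\diam(\Delta^k) = \max_{i\neq j}|e_i-e_j| = \sqrt{2},
$$
a value used repeatedly when estimating Lipschitz constants in Section~\ref{SecPiece}.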
\subsection*{Organization of this work}
In Section~\ref{SecPiece} we define the 'exponential neighbourhoods', recall the basic facts about straight simplices and develop the piecewise straightening procedure for singular chains. In Section~\ref{SecHom} we define piecewise $C^1$ chains and introduce the corresponding singular and Milnor-Thurston-type homology theories. Section~\ref{SecApp} is devoted to the proofs of Theorems \ref{PP0}, \ref{PI1} and \ref{PP1}.
\subsection*{Acknowledgements}
I am grateful to Piotr Nowak for bringing the simplicial volume and \cite{LS} to my attention. I would also like to thank Clara L\"{o}h for discussions about \cite{LS}, Jean-Fran\c{c}ois Lafont for an inspiring conversation and Federico Franceschini for very useful comments on the previous versions of this paper.
\section{Piecewise straightening procedure}\label{SecPiece}
The straightening procedure on non-positively curved manifolds is well known and has been applied successfully to many problems. Roughly speaking, given a complete, simply connected Riemannian manifold $M$ with non-positive curvature and a singular simplex $\sigma:\Delta^k\rightarrow M$ with vertices $x_0,...,x_k$, the straightening of this simplex is the geodesic simplex $[x_0,...,x_k]$, defined inductively as the geodesic join of $x_k$ with the geodesic simplex $[x_0,...,x_{k-1}]$. Because geodesics on $M$ joining two given points are unique, there exists a (unique) geodesic homotopy between $\sigma$ and $[x_0,...,x_k]$, defined as the geodesic join $[\sigma,[x_0,...,x_k]]$. We can apply the same procedure to a singular simplex $\sigma$ on a not necessarily simply connected Riemannian manifold $M$ with non-positive curvature by taking its lift to the universal cover $\wt{\sigma}:\Delta^k\rightarrow\wt{M}$, applying the straightening there and pushing the homotopy down. It can be shown that the result does not depend on the choice of the lift, therefore the construction extends to a straightening of singular chains and induces an isomorphism on homology. The same applies to locally finite Lipschitz chains and homology. The straightening procedure also has the advantage that it does not increase the $l_1$-norm on chains, therefore the above isomorphism turns out to be isometric and the simplicial volume can be computed by considering only straight chains. This fact, together with a careful control of the set of vertices, is the key to proving, e.g., the proportionality principle for the Lipschitz simplicial volume and inequalities for products of manifolds, assuming all the manifolds have non-positive curvature.
What obviously fails if we consider a Riemannian manifold with $\sec(M)<K<\infty$ is the existence of unique geodesics on simply connected manifolds. Unique geodesics do exist locally, but unfortunately not uniformly, even if we pass to the universal cover. Therefore the crucial problem in defining a piecewise straightening procedure on $M$ is the choice of a suitable space in which we have such uniform local uniqueness of geodesics. If such a space is provided, one can define the piecewise straightening by barycentrically subdividing a given singular chain, straightening every small simplex and gluing the straightened simplices back together.
In Section~\ref{EN}, for every point of $M$ we construct an 'exponential neighbourhood': a space admitting a local isometry to $M$, in which the injectivity radius of all points in some (uniform) neighbourhood of the origin is bounded from below by a uniform constant depending only on $K$. This system of spaces and local isometries to $M$ also admits transition maps (at least locally), which allow one to apply local constructions independently of the choice of the point whose exponential neighbourhood we consider. The construction is sketched in \cite[4.3(B)]{G}; however, we provide a more detailed treatment. In Section~\ref{SSAH} we recall basic notions concerning geodesic simplices and joins and prove that, under suitable curvature and diameter conditions, a geodesic join of Lipschitz maps is again Lipschitz. Finally, in Section~\ref{PSP} we define the piecewise straightening procedure for locally finite Lipschitz chains.
\subsection{Exponential neighbourhoods}\label{EN}
Let $M$ be a connected, complete $n$-dimensional Riemannian manifold with $\sec(M)<K$, $K>0$.
\begin{defi}
Let $x\in M$ and let $r\leq\injr{}$. Consider an open ball $B_{T_xM}(0,r)$ in the tangent space $T_xM$. The exponential map $\exp_x:B_{T_xM}(0,r)\rightarrow M$ is an immersion by the Rauch-Berger comparison theorem \cite[Theorems 1.28, 1.29]{CE}. We endow $B_{T_xM}(0,r)$ with the Riemannian metric induced from $M$ by $\exp_x$ and obtain a space $V_x(r)$, which we call \emph{an $r$-exponential neighbourhood} of $x$, with the distinguished point $\bar{x}\in V_x(r)$ corresponding to $0\in B_{T_xM}(0,r)$ and the canonical local isometry $p_x:V_x(r)\rightarrow M$ such that $p_x(\bar{x})=x$.
If $r=\injr{}$, we will denote this space simply by $V_x$.
\end{defi}
Spaces $V_x$ are not complete; however, the closures of the open balls $B_{V_x}(\bar{x},r)$ for any $r<\injr{}$ are complete as metric spaces, and for $y\in V_x(r)$ the map $\exp_y:T_yV_x\rightarrow V_x$ is defined for vectors of length less than $\injr{}-r$. As we will see, these spaces have all the desired properties described in the introduction of this section. First of all we check that there exists a uniform lower bound on the injectivity radii of points around the (distinguished) origins of the spaces $V_x$.
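For instance, for $r=\injr{4}$ and $y\in V_x(\injr{4})$ the map $\exp_y$ is defined for vectors of length less than
$$
\injr{}-\injr{4}=\frac{3\pi}{4\sqrt{K}},
$$
which is precisely the bound used in the proof of Proposition~\ref{inj_rad_prop} below.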
\begin{prop}\label{inj_rad_prop}
Let $x\in M$ and let $y\in V_x(\injr{4})$. Then the injectivity radius of $y$ in $V_x$ is at least $\injr{4}$.
\end{prop}
\begin{proof}
If $y\in B_{V_x}(\bar{x},\injr{4})$ then the exponential map $\exp_y:T_yV_x\rightarrow V_x$ is defined for vectors of length less than $\frac{3\pi}{4\sqrt{K}}$. Because of the curvature bound, it is an immersion by the Rauch-Berger comparison theorem, so we only need to prove that it is injective on $B_{T_yV_x}(0,\injr{4})$.
Denote by $V'_y$ the space $B_{T_yV_x}(0,\injr{2})$ endowed with the Riemannian metric induced from $V_x$ (in particular the exponential map $\exp_y:V'_y\rightarrow V_x$ becomes a local isometry), with distinguished point $\bar{y}$ corresponding to $0$ in $T_yV_x$. Let $z_1, z_2\in B_{V'_y}(\bar{y},\injr{4})$ be such that $\exp_y(z_1)=\exp_y(z_2)=z$ and let $\wt{x}\in B_{V'_y}(\bar{y},\injr{4})$ be some lift of $\bar{x}$, i.e. any point satisfying $\exp_y(\wt{x})=\bar{x}$. Such a point exists in $B_{V'_y}(\bar{y},\injr{4})$ because $d_{V_x}(\bar{x},y)<\injr{4}$, but it need not be unique. Since $\wt{x},z_1,z_2\in B_{V'_y}(\bar{y},\injr{4})$, there exists a geodesic $\gamma_1$ in $V'_y$ joining $z_1$ and $\wt{x}$. Similarly, there is a geodesic $\gamma_2$ joining $z_2$ and $\wt{x}$. Because $\exp_y$ is a local isometry on $V'_y$, both $\exp_y(\gamma_1)$ and $\exp_y(\gamma_2)$ are geodesics joining $\bar{x}$ and $z$ inside $V_x$. But by the construction of the exponential map and the space $V_x$, all geodesics joining $\bar{x}$ to any other point inside $V_x$ are unique. In particular $\exp_y(\gamma_1)=\exp_y(\gamma_2)$. We again use the fact that $\exp_y$ is a local isometry around $\wt{x}$ to see that both geodesics $\gamma_1$ and $\gamma_2$ have the same tangent line and the same direction at $\wt{x}$, hence (without loss of generality) $\gamma_1$ is a subgeodesic of $\gamma_2$. Moreover, because $\exp_y$ does not change the length of geodesics, we have in fact $\gamma_1=\gamma_2$, hence $z_1=z_2$.
\end{proof}
Secondly, we check the existence of 'transition maps', which will allow us to perform local constructions on the spaces $V_x$ independently of $x\in M$.
\begin{prop}\label{transition_prop}
Let $x,y\in M$ be such that $d_M(x,y)<\injr{4}$. Let also $y'$ be any lift of $y$ to $V_x(\injr{4})$. Then there exists a locally isometric diffeomorphism $I_{y',x}:V_y(\injr{4})\rightarrow B_{V_x}(y',\injr{4})$ such that we have a commutative diagram
$$
\xymatrix{
V_y(\injr{4}) \ar@{>}[rr]^{I_{y',x}} \ar@{>}[rd]_{p_{y}} && B_{V_x}(y',\injr{4})\ar@{>}[ld]^{p_x} \\
& M &
}
$$
\end{prop}
\begin{proof}
By Proposition \ref{inj_rad_prop} we know that $\exp_{y'}$ provides a diffeomorphism
$$
\exp_{y'}:B_{T_{y'}V_x}(0,\injr{4})\rightarrow B_{V_x}(y',\injr{4})\subset V_x
$$
which can be corrected to a local isometry by changing the Riemannian metric on $B_{T_{y'}V_x}(0,\injr{4})$. Hence it suffices to show that $V_y(\injr{4})$ is isometric to $B_{T_{y'}V_x}(0,\injr{4})$ (the latter with the Riemannian metric induced by $\exp_{y'}$). However, both spaces can be identified with the space of geodesics of length less than $\injr{4}$ starting from $y$, with the Riemannian metric induced from $M$ by the map sending a geodesic to its endpoint. Checking the commutativity of the diagram is straightforward.
\end{proof}
Finally, we establish the lifting property of the spaces $V_x$ with respect to singular simplices with sufficiently small Lipschitz constants. Recall that if $X$ is a metric space and $\gamma:[0,1]\rightarrow X$ is a path, then we define the \emph{length of $\gamma$} to be
$$
L(\gamma) := \sup\{\sum_{i=1}^n d_X(\gamma(t_{i-1}),\gamma(t_i))\::\: 0=t_0<t_1<...<t_n=1,\,n\in\mathbb{N}\}
$$
and we say that $X$ is \emph{geodesic} if it is path-connected and for any two points their distance equals the length of a shortest path between them, called a \emph{geodesic}. We will use the following simple fact.
\begin{lemma}\label{lemma_geodesic_lip}
Let $X$ be a geodesic metric space and let $f:X\rightarrow Y$ be a Lipschitz map. Then for every $\varepsilon>0$
$$
\Lip(f) = \sup\{\frac{d_Y(f(x),f(x'))}{d_X(x,x')}\::\: x,x'\in X \,,\, 0<d_X(x,x')<\varepsilon\}
$$
\end{lemma}
\begin{proof}
The '$\geq$' inequality is obvious; we prove the opposite one. Let $\delta>0$ and let $x,x'\in X$ be two points such that
$$
d_Y(f(x),f(x'))>(\Lip(f)-\delta)d_X(x,x')
$$
Let also $\gamma:[0,d_X(x,x')]\rightarrow X$ be a shortest geodesic joining $x$ and $x'$. Subdivide $\gamma$ into nontrivial subgeodesics $\gamma_1,...,\gamma_n$ of length less than $\varepsilon$ and let $x=x_0,x_1,...,x_n=x'$ be their consecutive endpoints. Then we have
$$
\sum_{i=1}^n d_Y(f(x_{i-1}),f(x_i)) \geq d_Y(f(x),f(x')) > (\Lip(f)-\delta)d_X(x,x') = (\Lip(f)-\delta)\sum_{i=1}^n d_X(x_{i-1},x_i)
$$
hence for some $i\in\{1,...,n\}$ we have the inequality
$$
d_Y(f(x_{i-1}),f(x_i))>(\Lip(f)-\delta)d_X(x_{i-1},x_i)
$$
and $0<d_X(x_{i-1},x_i)<\varepsilon$. Because $\delta$ was arbitrary, the inequality holds.
\end{proof}
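Let us note that the geodesicity assumption in Lemma \ref{lemma_geodesic_lip} cannot be dropped: for the two-point subspace $X=\{0,1\}\subset\mathbb{R}$ and any $0<\varepsilon<1$ there are no pairs of points at distance strictly between $0$ and $\varepsilon$, so the supremum on the right-hand side runs over the empty set, while $\Lip(f)$ may be an arbitrary nonnegative number.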
Note that every complete Riemannian manifold is geodesic as a metric space. Note also that the $k$-dimensional simplex $\Delta^k$ (with the metric fixed in the Notation) is a geodesic space with diameter $\sqrt{2}$.
\begin{prop}
Let $\sigma:\Delta^k\rightarrow M$ be a Lipschitz singular simplex, let $y\in\Delta^k$ and let $\sigma(y)=x\in M$. If $\Lip(\sigma)<\frac{C}{\sqrt{2}} <\frac{\pi}{\sqrt{2K}}$, then there exists a unique Lipschitz lift $\wt{\sigma}:\Delta^k\rightarrow V_x(C)$ of $\sigma$ (i.e. $\sigma = p_x\circ\wt{\sigma}$) such that $\wt{\sigma}(y)=\bar{x}$. This lift also satisfies $\Lip(\wt{\sigma})=\Lip(\sigma)$.
\end{prop}
\begin{proof}
Let $z\in \Delta^k$ and let $I_z:[0,1]\rightarrow \Delta^k$ be a (rescaled) interval connecting $y$ and $z$, that is $I_z(t)=(1-t)y+tz$. Let also $\gamma_z=\sigma\circ I_z$. We claim that we can construct a unique path $\wt{\gamma}_z:[0,1]\rightarrow V_x(C)$ such that $p_x\circ\wt{\gamma}_z= \gamma_z$ and $\wt{\gamma}_z(0)=\bar{x}$. Let
$$
R = \sup\,\{r\in [0,1] \: :\: \text{ there exists a lift $\wt{\gamma}^r_z:[0,r]\rightarrow V_x(C)$ of $\gamma_z|_{[0,r]}$ such that $\wt{\gamma}^r_z(0)=\bar{x}$} \}.
$$
We claim that $R=1$. Note that if we have two lifts $\wt{\gamma}^s_z:[0,s]\rightarrow V_x$ and $\wt{\gamma}^t_z:[0,t]\rightarrow V_x$ for $0\leq s\leq t\leq1$ satisfying the above conditions then they need to agree on $[0,s]$ because the subset of $[0,s]$ where these two lifts agree is nonempty (because $\wt{\gamma}^s_z(0)=\wt{\gamma}^t_z(0)=\bar{x}$), open (because $p_x$ is a local diffeomorphism) and closed (because of the continuity of both lifts). Hence we can consider a (unique) union of such lifts $\wt{\gamma}^s_z$ for $s<R$ to obtain a lift $\wt{\gamma}'^R_z:[0,R)\rightarrow V_x$ of $\gamma_z|_{[0,R)}$ such that $\wt{\gamma}'^R_z(0)=\bar{x}$. To extend it continuously to a lift $\wt{\gamma}^R_z:[0,R]\rightarrow V_x(C)$ we need to check that
$$
\sup_{t\in[0,R)}d_{V_x}(\bar{x},\wt{\gamma}'^{R}_z(t)) < C
$$
because then the limit $\lim_{t\rightarrow R}\wt{\gamma}'^R_z(t)$ exists in $V_x(C)$. Fix $0<t<R$ and consider a path $\wt{\gamma}_z^t=\wt{\gamma}'^R_z|[0,t]$. Note that because $p_x$ is a local isometry this path has the same length as $\gamma_z|[0,t]$. Using the fact that $\gamma_z=\sigma\circ I_z$ and that $\sigma$ is Lipschitz we have
$$
d_{V_x}(\bar{x},\wt{\gamma}^t_z(t)) = d_{V_x}(\wt{\gamma}^t_z(0),\wt{\gamma}^t_z(t)) \leq L(\wt{\gamma}^t_z) = L(\gamma_z|[0,t]) < (\frac{C}{\sqrt{2}}-\varepsilon) L(I_z) \leq C - \varepsilon
$$
for some sufficiently small $\varepsilon$ depending on $\sigma$, but neither on $z$ nor on $t$. Since $\wt{\gamma}^t_z(t) = \wt{\gamma}'^R_z(t)$ we have $\sup_{t\in[0,R)}d_{V_x}(\bar{x},\wt{\gamma}'^{R}_z(t))\leq C-\varepsilon < C$ so we can extend our lift to $\wt{\gamma}^R_z:[0,R]\rightarrow V_x(C)$. Finally, if $R<1$ we can use again the fact that $p_x$ is a local diffeomorphism (this time in the neighbourhood of $\wt{\gamma}^{R}_z(R)$) and extend $\wt{\gamma}^R_z$ to $\wt{\gamma}^{R'}_z$ for some $R'>R$, which contradicts the definition of $R$.
Because the choice of $\wt{\gamma}_z$ is unique, we can define $\wt{\sigma}(z) = \wt{\gamma}_z(1)$. Moreover, we can once again use the fact that $p_x$ is a local diffeomorphism and that $[0,1]$ is compact to conclude that $\wt{\gamma}_z$ depends continuously on $z$ in the compact-open topology, hence $\wt{\sigma}$ as a map $\Delta^k\rightarrow V_x(C)$ is continuous.
The last claim to verify is the equality $\Lip(\wt{\sigma})=\Lip(\sigma)$. Note that $\Delta^k$ is a geodesic metric space, hence the Lipschitz constants of $\sigma$ and $\wt{\sigma}$ can be computed locally as in Lemma \ref{lemma_geodesic_lip}. But $p_x\circ\wt{\sigma}=\sigma$ and $p_x$ is a local isometry, hence these 'local' Lipschitz constants are the same.
\end{proof}
By combining the above proposition with Proposition \ref{transition_prop} we obtain the following useful corollary.
\begin{cor}\label{simplex_lift_cor}
Let $\sigma:\Delta^k\rightarrow M$ be a Lipschitz singular simplex such that $\sigma(\Delta^k)\subset B_M(x,\injr{4})$. If $\Lip(\sigma)<\frac{C}{\sqrt{2}} <\frac{\pi}{4\sqrt{2K}}$, then there exists a Lipschitz lift $\wt{\sigma}:\Delta^k\rightarrow V_x$ of $\sigma$ (i.e. $p_x\circ\wt{\sigma} = \sigma$) with $\Lip(\wt{\sigma})=\Lip(\sigma)$.
Moreover, if $y\in\Delta^k$ then the lift is unique up to the choice of $\wt{\sigma}(y)$, which can be chosen to be any point $\wt{y}\in V_x(\injr{4})$ such that $p_x(\wt{y})=\sigma(y)$. We then have $\wt{\sigma}(\Delta^k)\subset B_{V_x}(\wt{y},C)$.
\end{cor}
\subsection{Straight simplices and homotopies}\label{SSAH}
As before, we assume that $M$ is a connected, complete $n$-dimensional Riemannian manifold with $\sec(M)<K$, $K>0$, and $x\in M$. Let $y,z\in V_x(\injr{8})$. By Proposition \ref{inj_rad_prop} there exists a unique shortest geodesic joining them (depending continuously on both endpoints), which we denote by $[y,z]$. Following \cite{LS}, we can define the \emph{geodesic join} of two maps $f,g:Y\rightarrow V_x$.
\begin{defi}
Let $f,g:Y\rightarrow V_x$ be two maps such that $(\im(f)\cup \im(g))\subset V_x(\injr{8})$. Then there exists a unique homotopy $[f,g]:Y\times [0,1]\rightarrow V_x$ defined by $(y,t)\mapsto [f(y),g(y)](t)$ called a \emph{geodesic join} of $f$ and $g$.
\end{defi}
We will often use the following lemma.
\begin{lemma}\label{joinlemma}
Let $f,g:Y\rightarrow V_x$ be two maps such that $\im(f)\subset V_x(R_1)$ and $\im(g)\subset V_x(R_2)$ for $R_1,R_2<\injr{8}$. Then $\im([f,g])\subset V_x(R_1+R_2)$.
\end{lemma}
\begin{proof}
Suppose there is a point $z = [f,g](y,t)$ such that $d_{V_x}(\bar{x},z)\geq R_1+R_2$. Then
$$
d_{V_x}(z,f(y))\geq d_{V_x}(\bar{x},z)-d_{V_x}(\bar{x},f(y))\geq R_2
$$ and similarly $d_{V_x}(z, g(y))\geq R_1$. Because $z$ is on the unique minimizing geodesic between $f(y)$ and $g(y)$, we have
$$
d_{V_x}(f(y),g(y)) = d_{V_x}(f(y),z)+d_{V_x}(z,g(y)) \geq R_1+R_2.
$$
On the other hand
$$
d_{V_x}(f(y),g(y)) \leq d_{V_x}(f(y),\bar{x})+d_{V_x}(\bar{x},g(y)) < R_1+R_2.
$$
The above contradiction shows that $z\in V_x(R_1+R_2)$.
\end{proof}
We can consequently define geodesic simplices. Recall that as we identified the standard simplex $\Delta^k$ with the subset $\{(z_0,...,z_k)\in\mathbb{R}^{k+1}_{\geq 0}\;:\;\sum_{i=0}^kz_i=1\}$, we can identify $\Delta^{k-1}$ with the subset $\{(z_0,...,z_k)\in\Delta^k\;:\; z_k=0\}$.
\begin{defi}
The \emph{geodesic simplex} $[x_0,...,x_k]:\Delta^k\rightarrow V_x$ with vertices $x_0,...,x_k\in V_x(\injr{8k})$ is defined inductively by the formulas
\begin{itemize}
\item $[x_0](\Delta^0) = \{x_0\}\subset V_x$;
\item $[x_0,...,x_k]((1-t)s+t(0,...,0,1))=[[x_0,...,x_{k-1}](s),x_k](t)$ for $s\in\Delta^{k-1}$ and $t\in[0,1]$.
\end{itemize}
\end{defi}
To prove that the definition is correct it is enough to prove the following lemma.
\begin{lemma}\label{diam}
Let $k\in\mathbb{N}$ and $R<\injr{8k}$. If $x_0,...,x_k\in V_x(R)$ then $[x_0,...,x_k]$ exists and
$$
[x_0,...,x_k](\Delta^k) \subset V_x((k+1)R).
$$
\end{lemma}
\begin{proof}
We prove the statement by induction. For $k=0$ the existence of a geodesic simplex is obvious and does not require any metric assumptions. For $k>0$, $[x_0,...,x_{k-1}]$ exists by the induction hypothesis and $[x_0,...,x_{k-1}](\Delta^{k-1})\subset V_x(kR)\subset V_x(\injr{8})$. Consider the geodesic join of the map $[x_0,...,x_{k-1}]:\Delta^{k-1}\rightarrow V_x$ and the constant map sending $\Delta^{k-1}$ to the point $x_k$. Obviously this join has the same image in $V_x$ as $[x_0,...,x_k]$. By Lemma \ref{joinlemma} we get
\begin{eqnarray*}
[x_0,...,x_k](\Delta^k) = [[x_0,...,x_{k-1}],\{x_k\}](\Delta^k) \subset V_x(kR+ R) = V_x((k+1)R)
\end{eqnarray*}
\end{proof}
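For example, for $k=2$ the lemma says that if the vertices $x_0,x_1,x_2$ lie in $V_x(R)$ for some $R<\injr{16}$, then
$$
[x_0,x_1,x_2](\Delta^2)\subset V_x(3R)\subset V_x\Bigl(\frac{3\pi}{16\sqrt{K}}\Bigr);
$$
this is the instance used in the proof of Lemma~\ref{curvature_inequality} below.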
The most important fact in this section is an analogue, for curvature bounded from above, of Proposition 2.1 in \cite{LS}.
\begin{prop}\label{Lipschitz_prop}
Let $Y$ be a compact, smooth manifold (possibly with boundary) and let $f,g:Y\rightarrow V_x$ be two Lipschitz maps such that $(\im(f)\cup \im(g))\subset V_x(C_K)$, where $C_K<\injr{8}$ is a constant depending only on $K$. Then $[f,g]$ has Lipschitz constant depending only on $K$ and the Lipschitz constants for $f$ and $g$. Moreover, $[f,g]$ is smooth ($C^1$) if $f$ and $g$ are smooth ($C^1$).
\end{prop}
To proceed, we need two technical lemmas concerning Riemannian geometry. The first is a technical result proved in \cite{LS}, which applies readily in our situation.
\begin{lemma}[{\cite[Proposition 2.6]{LS}}]\label{Loeh_positive}
Let $V$ be a complete simply connected Riemannian manifold with $\sec(V)<K$, $K>0$. Then every geodesic simplex $\sigma$ in $V$ such that $\diam(\sigma)<\injr{2}$ is smooth. Further, there is a constant $L>0$ such that every geodesic $k$-simplex $\sigma$ of diameter less than $\injr{4}$ satisfies $\|T_x\sigma\|<L$ for every $x\in\Delta^k$.
\end{lemma}
\begin{lemma}\label{curvature_inequality}
Consider the geodesic triangle $[x_0,x_1,x_2]$ in $V_x$ such that $x_0,x_1,x_2\in V_x(\injr{48})$. Then there exists a constant $D_K$, depending only on the curvature bound $K$, such that for any $t\in[0,1]$
$$
d_{V_x}([x_0,x_2](t),[x_1,x_2](t))\leq D_K d_{V_x}(x_0,x_1)
$$
\end{lemma}
\begin{proof}
If $x_0=x_1$ there is nothing to prove. If not, consider the extension (in any direction) of $[x_0,x_1]$ to a geodesic of length $\injr{24}$ and denote the endpoints of this geodesic by $x'_0, x'_1$. Such a geodesic exists because $B_{V_x}(x_0,\injr{24})\subset V_x(\injr{8})$. Now consider the geodesic triangle $[x'_0,x'_1,x_2]$. Note that
$$
d_{V_x}(x'_0,\bar{x})\leq d_{V_x}(x'_0,x_0)+d_{V_x}(x_0,\bar{x}) < \injr{24} + \injr{48} = \injr{16}.
$$
Similarly, $d_{V_x}(x'_1,\bar{x})< \injr{16}$, hence by Lemma \ref{diam} we have $[x'_0,x'_1,x_2](\Delta^2)\subset V_x(\frac{3\pi}{16\sqrt{K}})$. We can therefore use Lemma \ref{Loeh_positive} to conclude that the diffeomorphic simplex map $\sigma:\Delta^2\rightarrow V_x$ from the standard $2$-simplex onto $[x'_0,x'_1,x_2]$ is Lipschitz with a constant $L$ independent of $\sigma$. Hence
\begin{eqnarray*}
d_{V_x}([x_0,x_2](t),[x_1,x_2](t)) & \leq & L\cdot d_{\Delta^2}\bigl(\sigma^{-1}([x_0,x_2](t)), \sigma^{-1}([x_1,x_2](t))\bigr) \\
&\leq & L\cdot d_{\Delta^2}(\sigma^{-1}(x_0), \sigma^{-1}(x_1)) \\
& = & L\sqrt{2}\,\frac{d_{V_x}(x_0,x_1)}{\pi/(24\sqrt{K})}
\end{eqnarray*}
so one can take $D_K = \frac{24L\sqrt{2K}}{\pi}$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{Lipschitz_prop}]
Put $C_K=\injr{48}$. To prove smoothness in the case where $f$ and $g$ are smooth, one can rewrite $[f,g]$ as
$$
[f,g](y,t) = \exp_{f(y)}(t\cdot \exp^{-1}_{f(y)}(g(y))),
$$
where we use the fact that, by Proposition~\ref{inj_rad_prop}, if $TV_\rho=\{(y,t)\in TV\;:\; y\in V_x(\rho),\, \|t\|<\rho\}$, where $TV$ denotes the tangent bundle of $V_x$, then the map
\begin{eqnarray*}
\exp:TV_{\injr{4}} &\rightarrow& V_x\times V_x \\
(y,t) &\mapsto& (y, \exp_y(t))
\end{eqnarray*}
is a diffeomorphism onto its image.
Now, let $(y,t),(y',t')\in Y\times [0,1]$. We have
\begin{eqnarray*}
d_{V_x}([f,g](y,t),[f,g](y',t'))&\leq& d_{V_x}([f(y),g(y)](t),[f(y),g(y)](t')) \\ &+& d_{V_x}([f(y),g(y)](t'),[f(y'),g(y')](t'))
\end{eqnarray*}
The first term can be easily estimated as follows
$$
d_{V_x}([f(y),g(y)](t),[f(y),g(y)](t'))\leq |t-t'|\cdot d_{V_x}(f(y),g(y))\leq |t-t'|\cdot \diam(\im(f)\cup \im(g)).
$$
Recall that by assumption $(\im(f)\cup \im(g))\subset V_x(\injr{48})$. Therefore the second term can be estimated using Lemma \ref{curvature_inequality}
\begin{eqnarray*}
d_{V_x}([f(y),g(y)](t'),[f(y'),g(y')](t')) &\leq& d_{V_x}([f(y),g(y)](t'),[f(y),g(y')](t')) \\
&+& d_{V_x}([f(y),g(y')](t'),[f(y'),g(y')](t')) \\
&\leq& D_K(d_{V_x}(g(y),g(y'))+d_{V_x}(f(y),f(y')))\\
&\leq& D_K(\Lip(f)+\Lip(g))d_Y(y,y').
\end{eqnarray*}
Finally, we obtain
\begin{eqnarray*}
d_{V_x}([f,g](y,t),[f,g](y',t'))&\leq& 2|t-t'|C_K + D_K(\Lip(f)+\Lip(g))d_Y(y,y')\\
&\leq& (2C_K + D_K(\Lip(f)+\Lip(g)))d_{Y\times[0,1]}((y,t),(y',t'))
\end{eqnarray*}
\end{proof}
\begin{remark}
All the facts above could be stated (possibly with some minor changes in constants used) for any Riemannian manifold $V$ with $\sec(V)<K$ with a distinguished point $\bar{x}\in V$ such that the closure of an open ball $B_V(\bar{x},R)$ is complete for some $R$ and there exists $r<R$ such that every point in $B_V(\bar{x},r)$ has injectivity radius at least $\rho>0$. However, the only examples which are important to us at the moment are spaces $V_x$ for $x\in M$.
\end{remark}
\subsection{The piecewise straightening itself}\label{PSP}
Let $M$ be a complete, $n$-dimensional Riemannian manifold with $\sec(M)<K$, $0<K<\infty$ and let $E_{n,K}=\frac{C_K}{2(n+1)}$, where $C_K$ is a constant from Proposition~\ref{Lipschitz_prop}. Choose a locally finite family $(F_j)_{j\in J}$ of pairwise disjoint Borel subsets of $M$ together with points $z_j\in F_j$ and Borel maps $s_j:F_j\rightarrow V_{z_j}(E_{n,K})$ for $j\in J$, such that
\begin{itemize}
\item $\bigcup_{j\in J}F_j=M$;
\item for every $j\in J$ $\diam(F_j) < E_{n,K}$;
\item for every $j\in J$ $s_j$ is a section of $p_{z_j}$ (i.e. $p_{z_j}\circ s_j = id:F_j\rightarrow F_j$) such that $s_j(z_j)=\bar{z_j}$.
\end{itemize}
A family with the properties described above always exists. To see this, choose a triangulation of $M$ (which exists because $M$ is a smooth manifold) and divide every triangle into a locally finite family of disjoint Borel sets with sufficiently small diameters. The sections $s_j$, $j\in J$, exist because for $x\in F_j$ a lift of a (not necessarily unique) shortest geodesic joining $z_j$ and $x$ has length less than $E_{n,K}$, and one can choose $s_j(x)$ to be the endpoint of one such lift in a Borel way.
\begin{defi}
Let $F_j$, $z_j$, $s_j$ for $j\in J$ be as above and let $\pi_U:U\rightarrow M$ be a continuous map such that $B_M(z_j,E_{n,K})\subset \im(\pi_U)$. We call a Borel section $s'_j:F_j\rightarrow U$ of $\pi_U$ \emph{admissible} if there exists a continuous map $v_U:V_{z_j}(E_{n,K})\rightarrow U$ such that $s'_j = v_U\circ s_j$ and $\pi_U\circ v_U = p_{z_j}$, i.e. it fits into the commutative diagram
$$
\xymatrix{
V_{z_j}(E_{n,K})\ar@{.>}[rr]^{v_U} \ar@{>}[rrd]^(0.4){p_{z_j}} && U \ar@{>}[d]^{\pi_U} \\
F_j \ar@{>}[u]^{s_j} \ar@{.>}[urr]^(0.2){s'_j} \ar@{^{(}->}[rr] && M
}
$$
\end{defi}
A motivating example is given by the following lemma.
\begin{lemma}\label{admissible_lift_lemma}
Let $x\in M$ and $x'\in V_x(\injr{4})$. Then there exists a unique $j\in J$ and a unique admissible section
$$
s^{x'}_j:F_j\rightarrow B_{V_x}(x',2E_{n,K})
$$
with respect to the map $p_x:V_x\rightarrow M$ such that $x'\in s^{x'}_j(F_j)$.
\end{lemma}
\begin{proof}
Let $y=p_x(x')$; then $y$ is contained in the set $F_j$ for some $j\in J$. By Proposition~\ref{transition_prop} we can compose the canonical section $s_j$ with $I^{-1}_{s_j(y),z_j}:B_{V_{z_j}}(s_j(y),\injr{4})\rightarrow V_y(\injr{4})$ and obtain an admissible section $s'_j:F_j\rightarrow V_y(2E_{n,K})$ with respect to $p_y$ such that $s'_j(y)=\bar{y}$. After composing this section with $I_{x',x}:V_y(\injr{4})\rightarrow B_{V_x}(x',\injr{4})$ we obtain an admissible section $s^{x'}_j:F_j\rightarrow B_{V_x}(x',2E_{n,K})$ which satisfies the required conditions.
To see the uniqueness of $s^{x'}_j$, let $s'^{x'}_{j'}:F_{j'}\rightarrow B_{V_x}(x',2E_{n,K})$ be another admissible section satisfying the above conditions. Since $x'$ lies in the image of both sections and both are sections of $p_x$, we have $p_x(x')\in F_j$ and $p_x(x')\in F_{j'}$, hence $j=j'$ because the sets $F_j$ are pairwise disjoint. After composing $s^{x'}_j$ and $s'^{x'}_{j'}$ with $I_{s_j(y),z_j}\circ I^{-1}_{x',x}:B_{V_x}(x',\injr{4})\rightarrow B_{V_{z_j}}(s_j(y), \injr{4})$ and using the admissibility of $s'^{x'}_{j'}$ we obtain sections $s_j,s'_j:F_j\rightarrow V_{z_j}(\injr{4})$ and a map $v:V_{z_j}(E_{n,K})\rightarrow V_{z_j}$ such that $v\circ s_j(y)=s'_j(y)$ and the following diagram commutes
$$
\xymatrix{
V_{z_j}(E_{n,K})\ar@{>}[rr]^{v} \ar@{>}[rrd]^(0.4){p_{z_j}} && V_{z_j} \ar@{>}[d]^{p_{z_j}} \\
F_j \ar@{>}[u]^{s_j} \ar@{>}[urr]^(0.2){s'_j} \ar@{^{(}->}[rr] && M
}
$$
It suffices to show that $v=Id|_{V_{z_j}(E_{n,K})}$. Since $p_{z_j}$ is a local isometry, so is $v$, hence $v$ is the identity in some neighbourhood of $s_j(y)$. It follows that it must be the identity on a neighbourhood of the geodesic path joining $s_j(y)$ and $\bar{z}_j$, and consequently on every geodesic joining $\bar{z}_j$ with any other point of $V_{z_j}(E_{n,K})$. In consequence, $v=Id|_{V_{z_j}(E_{n,K})}$.
\end{proof}
Now we turn back to the definition of piecewise straightening.
\begin{defi}
Let $\sigma:\Delta^k\rightarrow M$ be a Lipschitz singular simplex. We say that $\sigma$ is \emph{$\varepsilon$-geodesic} (with respect to $(F_j)_{j\in J}$) if $\Lip(\sigma) \leq \frac{\varepsilon}{\sqrt{2}}$ and there exist $x\in M$ and a lift $\wt{\sigma}:\Delta^k\rightarrow V_x$ of $\sigma$ such that $\wt{\sigma}$ is a geodesic simplex with vertices at some lifts of the points $z_j$, $j\in J$.
\end{defi}
Note that by Proposition~\ref{transition_prop}, if $\varepsilon < \injr{4}$ then the above definition does not depend on the choice of $x\in M$, provided that $\sigma(\Delta^k)\subset B_M(x,\injr{4})$.
\begin{defi}
Let $\sigma:\Delta^k\rightarrow M$ be a singular simplex and let $S^{(m)}(\sigma)=\sum_i \sigma_i$ be its $m$-times iterated barycentric subdivision, where $m\in\mathbb{N}$. We say that $\sigma$ is \emph{($m$-)piecewise straight} if every $\sigma_i$ in $S^{(m)}(\sigma)$ is $E_{n,K}$-geodesic (with respect to $(F_j)_{j\in J}$).
We say that a (locally finite) chain $c=\sum_{i\in\mathcal{I}}a_i\sigma_i\in C_*(M)$ is \emph{piecewise straight} if there exists $m\in\mathbb{N}$ such that every $\sigma_i$, $i\in\mathcal{I}$, is $m$-piecewise straight.
\end{defi}
Let $\sigma:\Delta^k\rightarrow M$ for $k\leq n$ be a Lipschitz singular simplex. We define the \emph{straightening} of $\sigma$ (with respect to $(F_j)_{j\in J}$) as follows. Choose $m\in\mathbb{N}$ such that each simplex $\sigma_i$ in $S^{(m)}(\sigma) = \sum_i\sigma_i$ has Lipschitz constant less than $\frac{E_{n,K}}{\sqrt{2}}$. Such an $m$ exists because the diameters of the simplices in iterated barycentric subdivisions of $\Delta^k$ tend to $0$ \cite[Corollary 9.4.9]{ES}, hence so do the Lipschitz constants of the subdivided simplices of $\sigma$. Moreover, we can choose $m$ depending only on $n$, $K$ and $\Lip(\sigma)$. For every simplex $\sigma_i$ choose a point $y_i\in\Delta^k$ and let $y'_i=\sigma_i(y_i)$; then by Corollary~\ref{simplex_lift_cor} there is a unique lift $\wt{\sigma_i}:\Delta^k\rightarrow V_{y'_i}(E_{n,K})$ of $\sigma_i$ such that $\wt{\sigma_i}(y_i)=\bar{y}'_i$. Denote by $x_{i,0},...,x_{i,k}$ its vertices, let $s'_{i,l}:F_{i,l}\rightarrow V_{y'_i}$ be admissible sections containing $x_{i,l}$ in their images for $l=0,...,k$, constructed by Lemma~\ref{admissible_lift_lemma}, and let $z'_{i,l} = s'_{i,l}(z_{i,l})$ for $l=0,...,k$. In particular $z'_{i,0},...,z'_{i,k}\in V_{y'_i}(2E_{n,K})$, hence the geodesic simplex $[z'_{i,0},...,z'_{i,k}]$ exists by Lemma~\ref{diam} because
$$
2E_{n,K} =\frac{2C_K}{2(n+1)} < \injr{8(n+1)} < \injr{8k}.
$$
Let $\str_{y_i}(\sigma_i) = [z'_{i,0},...,z'_{i,k}]$ and define
$$
\str_m(\sigma) = (S^{(m)})^{-1}(\sum_i p_{y'_i}\circ\str_{y_i}(\sigma_i)).
$$
Moreover, by Lemma~\ref{diam} we have
$$
[z'_{i,0},...,z'_{i,k}] \subset V_{y'_i}(2(k+1)E_{n,K})\subset V_{y'_i}(C_K)
$$
(here $2(k+1)E_{n,K} = \frac{(k+1)C_K}{n+1}\leq C_K$ since $k\leq n$), so it follows from Proposition~\ref{Lipschitz_prop} that $[\wt{\sigma_i},[z'_{i,0},...,z'_{i,k}]]$ exists and defines a Lipschitz homotopy $H_{y_i}:\Delta^k\times I\rightarrow V_{y'_i}$ between these simplices, with Lipschitz constant depending only on $m$, $K$ and $\Lip(\sigma)$. Define
$$
H_m (\sigma) = (S^{(m)}\times Id_I)^{-1}(\sum_i p_{y'_i}\circ H_{y_i}(\sigma_i)).
$$
To show that $\str_m$ and $H_m$ are well defined it suffices to verify that the construction is independent of the choice of $y_i\in\Delta^k$. Indeed, assuming this fact, if $\partial^q:C_k(M)\rightarrow C_{k-1}(M)$ for $q=0,...,k$ denotes the operator assigning to a singular simplex its $q$-th face, we see that for any $\dot{y}_i$ in the $q$-th face of $\Delta^k$ and $\dot{y}'_i=\sigma_i(\dot{y}_i)$ we have
$$
\partial^q(p_{y'_i}\circ\str_{y_i}(\sigma_i)) = \partial^q(p_{\dot{y}'_i}\circ\str_{\dot{y}_i}(\sigma_i)) = p_{\dot{y}'_i}\circ\partial^q\str_{\dot{y}_i}(\sigma_i) = p_{\dot{y}'_i}\circ\str_{\dot{y}_i}(\partial^q\sigma_i).
$$
where the last equality is a consequence of the fact that the straightening of a face of any singular simplex depends only on this particular face, not on the whole simplex. In particular, if two simplices $\sigma_i$ and $\sigma_{i'}$ have a face in common, their straightenings share the corresponding face. This shows that $\sum_i p_{y'_i}\circ\str_{y_i}(\sigma_i)$ lies in the image of $S^{(m)}$, hence (after fixing an ordering of the vertices of $\sigma_i$) we can choose a preimage in a canonical way. The same argument applies to $H_m$.
Now we verify our claim. Let $\dot{y}_i\in\Delta^k$, $\dot{y}'_i=\sigma_i(\dot{y}_i)\in M$ and $\wt{y}'_i=\wt{\sigma}_i(\dot{y}_i)\in V_{y'_i}(E_{n,K})$. Then Proposition~\ref{transition_prop} gives an isometry $I_{\wt{y}'_i,y'_i}$ between $V_{\dot{y}'_i}(\injr{4})$ and $B_{V_{y'_i}}(\wt{y}'_i,\injr{4})$.
By Lemma~\ref{joinlemma} we have
$$
H_{y_i}(\Delta^k\times I)\subset V_{y'_i}(C_K+E_{n,K}) \subset V_{y'_i}(\frac{3\pi}{16\sqrt{K}})
$$
and because $d_{V_{y'_i}}(\bar{y}'_i,\wt{y}'_i)<E_{n,K}<\injr{16}$, the images of $H_{y_i}$ and $H_{\dot{y}_i}$ stay in $B_{V_{y'_i}}(\wt{y}'_i,\injr{4})$ and $V_{\dot{y}'_i}(\injr{4})$ respectively. Moreover, $I_{\wt{y}'_i,y'_i}$ maps the respective admissible sections $\dot{s}'_{i,l}:F_{i,l}\rightarrow V_{\dot{y}'_i}$ to the admissible sections $s'_{i,l}:F_{i,l}\rightarrow V_{y'_i}$, hence $H_{y_i}=I_{\wt{y}'_i,y'_i}\circ H_{\dot{y}_i}$. As a result they coincide after being pushed down to $M$. This argument also applies to $\str_{y_i}$ since $\str_{y_i} = H_{y_i}(-,1)$.
Let $c=\sum_ia_i\sigma_i$ be a locally finite Lipschitz chain with Lipschitz constant $L$. We can choose $m\in\mathbb{N}$, depending only on $n$, $L$ and $K$, such that $\str_m(\sigma_i)$ is defined for every $i$, so we can define $\str_m(c)$ simply as $\sum_i a_i \str_m(\sigma_i)$. The chain $\str_m(c)$ is Lipschitz because of Proposition~\ref{Lipschitz_prop} and Lemma~\ref{lemma_geodesic_lip}, and locally finite since by construction, for any singular simplex $\sigma:\Delta^k\rightarrow M$, every simplex of $\str_m(\sigma)$ has image contained in $B_M(\sigma(\Delta^k),C_K)$. Note that the straightening defined as above does not give a chain operator $C^{lf,\Lip}_{*\leq n}(M)\rightarrow C^{lf,\Lip}_{*\leq n}(M)$, where $C^{lf,\Lip}_*(M)$ denotes the complex of locally finite Lipschitz chains, because we cannot choose $m$ uniformly. However, it allows us to prove a slightly weaker statement. Recall that $C^{lf,<L}_*(M)$ is the chain complex of locally finite singular chains on $M$ consisting of simplices with Lipschitz constant less than $L$.
\begin{lemma}\label{chain_homotopy_lemma}
For every $L<\infty$ there exists $m\in\mathbb{N}$ such that the operator
$$
\str_m:C^{lf,<L}_{*\leq n}(M)\rightarrow C^{lf,\Lip}_*(M)
$$
is a well defined chain map, chain homotopic to the inclusion $\iota:C^{lf,<L}_{*\leq n}(M)\rightarrow C^{lf,\Lip}_*(M)$. Moreover, $|\str_m|_1\leq 1$.
\end{lemma}
\begin{proof}
Choose $m$ such that $\str_m$ is well defined for any singular simplex $\sigma:\Delta^k\rightarrow M$ with $k\leq n$ and $\Lip(\sigma)<L$. Then for any such singular simplex $\sigma$ let $S^{(m)}(\sigma)=\sum_i\sigma_i$. For arbitrary $y\in\Delta^k$ and $y^q\in\partial^q\Delta^k$, $q=0,...,k$, we have
\begin{eqnarray*}
\partial\str_m(\sigma) & = & \sum_{q=0}^k(-1)^q\partial^q (S^{(m)})^{-1}(\sum_i p_{\sigma_i(y)}\circ\str_y(\sigma_i)) \\
& = & \sum_{q=0}^k(-1)^q (S^{(m)})^{-1}(\sum_i \partial^q (p_{\sigma_i(y)}\circ\str_y(\sigma_i)))\\
& = & \sum_{q=0}^k(-1)^q (S^{(m)})^{-1}(\sum_i p_{\partial^q\sigma_i(y^q)}\circ\str_{y^q}(\partial^q\sigma_i))\\
& = & \str_m(\sum_{q=0}^k(-1)^q \partial^q\sigma) = \str_m(\partial\sigma)
\end{eqnarray*}
where we use the fact that $S^{(m)}$ is a chain operator and the construction of $\str_m$. This shows that $\str_m$ is a chain map. To obtain a chain homotopy joining $\str_m$ and $\iota$, let $P_k\in C_{k+1}(\Delta^k\times I)$ be the canonical division of $\Delta^k\times I$ into singular simplices described e.g. in \cite[Proof of 2.10]{HAT} and let $h:C_k^{lf,<L}(M)\rightarrow C_{k+1}^{lf,\Lip}(M)$ for $k\leq n$ be defined as $h(\sigma) = H_m(\sigma)_*(P_k)$. Note that $h(c)$ is Lipschitz, because $H_m$ is by Proposition~\ref{Lipschitz_prop} and Lemma~\ref{lemma_geodesic_lip}, and locally finite since by construction $H_m(\sigma)(\Delta^k\times I)\subset B_M(\sigma(\Delta^k),\injr{4})$ for any $\sigma:\Delta^k\rightarrow M$. The proof that it provides the desired chain homotopy is standard and described e.g. in \cite[Proof of Theorem 2.10]{HAT} or \cite[Lemma 2.13]{LS}. The proof that $|\str_m|_1\leq 1$ is straightforward.
\end{proof}
\begin{cor}\label{homology_representation}
Every homology class $\xi$ in $H_{*\leq n}^{lf,\Lip}(M)$ can be represented by a piecewise straight chain with vertices in $(z_j)_{j\in J}$. Moreover, $l^1$ semi-norm on $H_{*\leq n}^{lf,\Lip}(M)$ can be computed on piecewise straight chains.
\end{cor}
\begin{proof}
Let $c=\sum_i a_i\sigma_i\in C_k^{lf,\Lip}(M)$, $k\leq n$, be any cycle such that $[c]=\xi$. Then $c\in C_k^{lf,<L}(M)$ for some $L<\infty$. Hence by Lemma~\ref{chain_homotopy_lemma} there exists $m$ such that the chain $\str_m(c)$ is homologous to $c$ and $|\str_m(c)|_1\leq |c|_1$. It is also obviously piecewise straight and has its vertices in $(z_j)_{j\in J}$.
\end{proof}
\begin{remark}
The results above are stated only for $*\leq n$. However, for $*>n$ the groups $H_*^{lf,\Lip}(M)$ vanish \cite[Theorem 3.3]{LS}. Moreover, we could simply modify the constants used in the straightening to make it work for $*\leq N$ with $N$ arbitrarily large. In what follows we will, without loss of generality, assume that all chains and homology classes are of dimension $*\leq n$.
\end{remark}
\begin{remark}
It is obvious that the straightening procedure depends on the choice of the sets $(F_j)_{j\in J}$, the sections $s_j$ for $j\in J$, and $m\in\mathbb{N}$, which in turn depends on the particular chain we would like to straighten. However, in most cases these details are of secondary interest, therefore we will simply speak of \emph{applying the (piecewise) straightening procedure}, meaning applying it with respect to any suitable family $(F_j)_{j\in J}$ and any $m\in\mathbb{N}$ for which the procedure is defined.
\end{remark}
\section{Piecewise $C^1$ homology theories}\label{SecHom}
The straightening procedure described in the previous section is sufficient for some applications, but we need some more extensive machinery. One of the key properties of the classical straightening procedure for non-positively curved manifolds is that the straightened chains are smooth, because they consist of geodesic simplices. This is important, e.g., in the proof of the proportionality principle in the non-positively curved case, which relies on measure homology with $C^1$ Lipschitz support, i.e. where 'chains' are Borel measures of finite variation on $C^1$ singular simplices with the $C^1$-topology, with the additional assumption that the support of each 'chain' consists of $L$-Lipschitz simplices for some $L<\infty$. Differentiability here is purely technical, but necessary, because it allows one to recognise the fundamental cycle by integrating the volume form. However, the piecewise straight simplices which we use are only piecewise $C^1$.
In Section~\ref{PSH} we define piecewise $C^1$ simplices and chains and introduce piecewise $C^1$ homology. In Section~\ref{PSMH} we provide a reasonable topology on these simplices in order to define the corresponding measure homology theory.
\subsection{Piecewise $C^1$ homology}\label{PSH}
Let $M$ be a connected, complete $n$-dimensional Riemannian manifold with $\sec(M)<K$. Before we continue, let us fix some notation concerning convex polyhedra. Let $V\subset\mathbb{R}^n$ be an affine subspace and let $\langle\cdot,\cdot\rangle$ be the restriction of the standard scalar product on $\mathbb{R}^n$ to $V$.
\begin{itemize}
\item for $v\in V$ and $b\in\mathbb{R}$ a \emph{half-space} $H_{v,b}\subset V$ is
$$
H_{v,b}=\{x\in V\::\:\langle x,v\rangle\leq b\};
$$
\item a convex polyhedron $P\subset V$ is an intersection of a finite number of half-spaces;
\item $\dim{P} = \min\{\dim{W}\::\: P\subset W\,,\,W\subset V \text{ is an affine subspace}\}$;
\item $P$ is \emph{nondegenerate} if $\dim{P} = \dim{V}$;
\item for a convex polyhedron $P$ a map $f:P\rightarrow M$ is $C^1$ if it can be extended to a $C^1$ map $f':U\rightarrow M$, where $U\subset V$ is some open neighbourhood of $P\subset V$.
\end{itemize}
\begin{defi}
Let $V =\{(x_0,...,x_k)\in \mathbb{R}^{k+1}\::\: \sum_{i=0}^k x_i=1\}\supset \Delta^k$. We say that a family $\mathcal{P}$ of nondegenerate convex polyhedra $P\subset\Delta^k$ is \emph{$\Delta^k$-admissible} if it satisfies
\begin{itemize}
\item $\bigcup_{P\in\mathcal{P}} P = \Delta^k$;
\item $\forall_{P_1,P_2\in\mathcal{P}}\: P_1\neq P_2\Rightarrow \dim{P_1\cap P_2}<k$.
\end{itemize}
We will denote the family of all $\Delta^k$-admissible families by $\mathscr{P}_k$.
\end{defi}
A good example of a $\Delta^k$-admissible family of convex polyhedra is the barycentric subdivision $S\Delta^k$ and, more generally, the $m$-times iterated barycentric subdivision $S^{(m)}\Delta^k$.
For two families $\mathcal{P}_1, \mathcal{P}_2\in \mathscr{P}_k$ we can define their product
$$
\mathcal{P}_1\cdot\mathcal{P}_2 = \{P_1\cap P_2\::\:P_1\in\mathcal{P}_1,\,\:P_2\in\mathcal{P}_2\,,\,\dim{P_1\cap P_2}=k\}
$$
which is also $\Delta^k$-admissible. This product is obviously commutative and associative. Moreover, we can put a partial order on $\mathscr{P}_k$ by
$$
\mathcal{P}_1\leq \mathcal{P}_2\Leftrightarrow \forall_{P_2\in\mathcal{P}_2}\exists_{P_1\in\mathcal{P}_1}\:P_2\subset P_1.
$$
Note that with this order every finite set $\{\mathcal{P}_1,...,\mathcal{P}_m\}\subset \mathscr{P}_k$ has supremum $\mathcal{P}_1\cdot...\cdot\mathcal{P}_m$.
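For instance, the iterated barycentric subdivisions satisfy $S^{(m)}\Delta^k\leq S^{(m+1)}\Delta^k$, since every simplex of $S^{(m+1)}\Delta^k$ is contained in a simplex of $S^{(m)}\Delta^k$; consequently $S^{(m)}\Delta^k\cdot S^{(m+1)}\Delta^k = S^{(m+1)}\Delta^k$, in agreement with the supremum property above.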
\begin{defi}
Let $\mathcal{P}$ be a $\Delta^k$-admissible family of convex polyhedra and let $\sigma:\Delta^k\rightarrow M$ be a singular simplex. We say that it is \emph{$\mathcal{P}$-$C^1$} if for every $P\in\mathcal{P}$ the restriction $\sigma|_P:P\rightarrow M$ is of class $C^1$.
A chain $c\in C^{lf}_k(M)$ is called \emph{$\mathcal{P}$-$C^1$} if it consists of $\mathcal{P}$-$C^1$ simplices and is \emph{piecewise $C^1$} if it is $\mathcal{P}$-$C^1$ for some $\mathcal{P}\in\mathscr{P}_k$.
\end{defi}
Note that if $c_1, c_2\in C^{lf}_k(M)$ are singular chains such that $c_1$ is $\mathcal{P}_1$-$C^1$ and $c_2$ is $\mathcal{P}_2$-$C^1$ then $c_1+c_2$ is $\mathcal{P}_1\cdot\mathcal{P}_2$-$C^1$, hence finite sums of piecewise $C^1$ chains are piecewise $C^1$. Moreover, if $c\in C_k^{lf}(M)$ is $\mathcal{P}$-$C^1$ then $\partial c$ is $\prod_{q=0}^k\partial^q\mathcal{P}$-$C^1$, where
$$
\partial^q\mathcal{P} = \{P\cap \partial^q\Delta^k\::\:P\in\mathcal{P}\,,\,\dim{P\cap \partial^q\Delta^k}=k-1\}
$$
for $q=0,...,k$. In particular, piecewise $C^1$ chains form a subcomplex of $C^{lf}_*(M)$. The same is true for $C^{lf,\Lip}_*(M)$ if we consider piecewise $C^1$ Lipschitz chains. Therefore we can define \emph{piecewise $C^1$ locally finite homology} $H^{PC^1,lf}_*(M)$ and \emph{piecewise $C^1$ locally finite Lipschitz homology} $H_*^{PC^1,lf,\Lip}(M)$.
Obviously every piecewise straight chain is piecewise smooth (in particular piecewise $C^1$, with respect to some iterated barycentric subdivision) by Lemma \ref{Loeh_positive}. To show that the homology theories defined above are isometrically isomorphic to the corresponding non-$C^1$ ones, we need the following lemma.
\begin{lemma}\label{piecewise_homotopy_lemma}
Let $c\in C_k^{PC^1,lf,\Lip}(M)$ be a piecewise $C^1$ locally finite Lipschitz cycle and let $m\in\mathbb{N}$ be such that $\str_m(c)$ is defined. Then $c$ and $\str_m(c)$ are homologous in $C_{k}^{PC^1,lf,\Lip}(M)$.
\end{lemma}
\begin{proof}
In the proof of Lemma \ref{chain_homotopy_lemma} we constructed a chain homotopy $h:C_k^{lf,<L}(M)\rightarrow C_{k+1}^{lf,\Lip}(M)$ between the identity and $\str_m$ for some $L<\infty$. Therefore it suffices to show that if $c$ is piecewise $C^1$, then so is $h(c)$.
Assume that a singular simplex $\sigma$ is $\mathcal{P}$-$C^1$. Then the map $H_m(\sigma)|_{P\times I}:P\times I\rightarrow M$ is $C^1$ for every $P\in \mathcal{P}\cdot(S^{(m)}\Delta^k)$. Moreover, if $P_k$ is the canonical subdivision of the prism as in \cite[Proof of 2.10]{HAT} (this time considered as a set of simplices), then for any simplex $\Delta'\in P_k$ the family
$$
\mathcal{P}_{m,\Delta'}=\{\Delta'\cap (P\times I)\::\: P\in\mathcal{P}\cdot(S^{(m)}\Delta^k)\,,\,\dim{(\Delta'\cap (P\times I))} = k+1 \}
$$
is (up to some affine isomorphism) $\Delta^{k+1}$-admissible and $H_m(\sigma)$ is $C^1$ on every $P\in\mathcal{P}_{m,\Delta'}$, hence $h(\sigma)$ is $\prod_{\Delta'\in P_k}\mathcal{P}_{m,\Delta'}$-$C^1$. In particular $h(c)$ is piecewise $C^1$ if $c$ is.
\end{proof}
\begin{prop}\label{homology_theories}
Let $M$ be a complete Riemannian manifold with $\sec(M)<K<\infty$. Then the map $I_*:H^{PC^1,lf,\Lip}_*(M)\rightarrow H^{lf,\Lip}_*(M)$ induced by the inclusion of chains is an isometric isomorphism.
\end{prop}
\begin{proof}
The map $I_*$ is onto by Corollary~\ref{homology_representation}. To see that it is injective, consider $c_1,c_2\in C_*^{PC^1,lf,\Lip}(M)$ which represent the same class in $H^{lf,\Lip}_*(M)$. Then there exists a chain $D\in C_{*+1}^{lf,\Lip}(M)$ such that $\partial D=c_2-c_1$. We can now apply the straightening procedure to $D$ to obtain a chain $\str_m(D)\in C_{*+1}^{PC^1,lf,\Lip}(M)$ for some $m\in\mathbb{N}$ such that $\partial \str_m(D) = \str_m(c_2)-\str_m(c_1)$. Now apply Lemma~\ref{piecewise_homotopy_lemma} to see that $c_1$ and $c_2$ are homologous (in $C_*^{PC^1,lf,\Lip}(M)$) to $\str_m(c_1)$ and $\str_m(c_2)$ respectively. Finally, $I_*$ is an isometry by Corollary~\ref{homology_representation}.
\end{proof}
\subsection{Piecewise $C^1$ Measure Homology}\label{PSMH}
Now we turn our attention to chains with finite $\ell^1$-norm and corresponding measure homology theory.
\begin{defi}
Let $C^{PC^1, \ell^1, \Lip}_*(M)$ be a chain subcomplex of $C^{PC^1,lf,\Lip}_*(M)$ consisting of chains which have finite $\ell^1$ norm. We call the homology of this complex \emph{piecewise $C^1$-$\ell^1$ Lipschitz homology} and denote it by $H_*^{PC^1,\ell^1,\Lip}(M)$.
\end{defi}
\begin{remark}
Note that Lemma~\ref{piecewise_homotopy_lemma} also applies to $C^{PC^1, \ell^1, \Lip}_*(M)$, so an analogue of Proposition~\ref{homology_theories} for $H_*^{PC^1,\ell^1,\Lip}(M)$ is true.
\end{remark}
\begin{defi}
Let $\mathcal{P}\in\mathscr{P}_k$ be a $\Delta^k$-admissible family and let $\mathcal{P}C^1(\Delta^k, M)$ be a set of singular simplices $\sigma:\Delta^k\rightarrow M$ such that $\sigma|_P$ is $C^1$ for every $P\in\mathcal{P}$. We call it the set of \emph{$\mathcal{P}$-$C^1$ singular simplices}. We equip it with the topology induced from the embedding onto a closed subspace
$$
\mathcal{P}C^1(\Delta^k, M)\rightarrow \prod_{P\in\mathcal{P}}C^1(P,M).
$$
where $C^1(P,M)$ is the set of $C^1$ maps $P\rightarrow M$ with the topology induced from $Map(TP,TM)$ with the compact-open topology. For every $\mathcal{P}_1,\mathcal{P}_2\in\mathscr{P}_k$ such that $\mathcal{P}_1\leq \mathcal{P}_2$ we have an embedding $\mathcal{P}_1C^1(\Delta^k, M)\rightarrow \mathcal{P}_2C^1(\Delta^k, M)$ onto a closed subset. We denote the direct limit of these spaces, with the weak topology, by $\mathscr{P}C^1(\Delta^k,M)$.
\end{defi}
The properties of the above topology on $\mathcal{P}C^1(\Delta^k, M)$ for $\mathcal{P}\in\mathscr{P}_k$ which are crucial for us are the following:
\begin{itemize}
\item $\mathcal{P}C^1(\Delta^k, M)$ is a locally compact Hausdorff space;
\item If $k=n=\dim M$ then for every differential form $\omega\in\Omega^n(M)$ the map
$$
I_\omega:\mathcal{P}C^1(\Delta^n,M)\rightarrow \mathbb{R}\;,\; f \mapsto \int_{\Delta^n} f^*\omega
$$
is continuous;
\item for every $\sigma\in \mathcal{P}C^1(\Delta^k, M)$ the map
$$
Isom^+(M)\rightarrow \mathcal{P}C^1(\Delta^k, M)\;,\; g \mapsto g\sigma
$$
is continuous.
\end{itemize}
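In the second item, for a $\mathcal{P}$-$C^1$ simplex $f$ we read the integral of the pullback piecewise, i.e.
$$
\int_{\Delta^n}f^*\omega = \sum_{P\in\mathcal{P}}\int_P (f|_P)^*\omega,
$$
which is well defined because distinct polyhedra of $\mathcal{P}$ intersect in dimension less than $n$.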
\begin{defi}
Let $\mathcal{C}_*^{PC^1,\Lip}(M)$ be a chain complex of measures on $\mathscr{P}C^1(\Delta^*,M)$ such that
\begin{enumerate}
\item for every measure $\mu\in\mathcal{C}_*^{PC^1,\Lip}(M)$ there exists $\mathcal{P}\in\mathscr{P}_k$ such that it is a push-forward of a Borel measure on $\mathcal{P}C^1(\Delta^*,M)$ with finite variation;
\item every measure has \emph{Lipschitz determination}, i.e. there exists $L<\infty$ such that it is supported on simplices with Lipschitz constant at most $L$.
\end{enumerate}
The boundary operator is defined, as for singular chains, as the alternating sum of the push-forwards of measures under the face maps $\mathscr{P}C^1(\Delta^*,M)\rightarrow\mathscr{P}C^1(\Delta^{*-1},M)$. The obtained homology theory is called \emph{piecewise $C^1$ measure homology with Lipschitz determination} $\mathcal{H}_*^{PC^1,\Lip}(M)$.
\end{defi}
\begin{remark}
The space $\mathscr{P}C^1(\Delta^*,M)$ is in general not locally compact, therefore there is a problem with the definition of Borel measures. However, we will say for simplicity that measures in $\mathcal{C}_*^{PC^1,\Lip}(M)$ are Borel, meaning that every such measure is a push-forward of a Borel measure on $\mathcal{P}C^1(\Delta^*,M)$ for some $\mathcal{P}\in\mathscr{P}_k$. Similarly, when integrating over $\mathscr{P}C^1(\Delta^*,M)$, we will understand it as an integral over $\mathcal{P}C^1(\Delta^*,M)$ for some 'sufficiently large' $\mathcal{P}\in\mathscr{P}_k$.
\end{remark}
The above homology theory is a variant of Milnor-Thurston homology. We can introduce a semi-norm $\|\cdot\|_1$ on it by taking the infimum of the total variations over all measures representing a given homology class. An important consequence of the above construction is the following.
\begin{prop}\label{measure_hom_theories}
Let $M$ be a complete Riemannian manifold with $\sec(M)<K<\infty$. Then the homology groups $H_*^{PC^1, \ell^1,\Lip}(M)$ and $\mathcal{H}_*^{PC^1,\Lip}(M)$ are isometrically isomorphic.
\end{prop}
\begin{proof}
By interpreting singular chains with finite $\ell^1$-norm as discrete measures with finite variation we have an obvious inclusion of chains $I:C_*^{PC^1,\ell^1,\Lip}(M)\rightarrow \mathcal{C}_*^{PC^1,\Lip}(M)$ which commutes with differentials, hence it is a morphism of chain complexes and induces a homomorphism $I_*$ on homology.
To show that $I_*$ is surjective, let $\mu\in\mathcal{C}_*^{PC^1,\Lip}(M)$ be a measure cycle with Lipschitz determination $L$. Choose any family $(F_j)_{j\in J}$ of Borel subsets of $M$ with the properties indicated in the description of the piecewise straightening procedure and $m\in\mathbb{N}$ such that $\str_m$ is defined for any simplex with Lipschitz constant $L$. Then after applying $\str_m$ to the measure $\mu$ we obtain a cycle $\sum_{i\in I}a_i\sigma_i$, where each $\sigma_i$, $i\in I$, is an $m$-piecewise straight simplex and
$$
a_i=\mu(\{\sigma\in \mathscr{P}C^1(\Delta^*,M);\; \Lip(\sigma)\leq L,\; \str_m(\sigma)=\sigma_i\}).
$$
The subset of $\mathscr{P}C^1(\Delta^*,M)$ described above is Borel by the construction of the sets $(F_j)_{j\in J}$, so the cycle is well defined. It is also piecewise smooth by Proposition~\ref{Lipschitz_prop}, locally finite by the local finiteness of $(F_j)_{j\in J}$, and Lipschitz by Proposition~\ref{Lipschitz_prop} and the Lipschitz determination of $\mu$. It is also easy to see that $\mu$ and $\str_m(\mu)$ are homologous in $\mathcal{C}_*^{PC^1,\Lip}(M)$ by the same argument as in Lemma~\ref{piecewise_homotopy_lemma}.
The injectivity of $I_*$ can be shown using a similar argument applied to a homotopy in $\mathcal{C}^{PC^1,\Lip}_*(M)$ between two cycles in $C_*^{PC^1,\ell^1,\Lip}(M)$, together with Lemma~\ref{piecewise_homotopy_lemma}. The fact that $I_*$ is an isometry is a consequence of the facts that $I$ is an isometric inclusion and that the straightening procedure does not increase the norm.
\end{proof}
\begin{remark}
The existence of an isometric isomorphism as above for 'finite' piecewise $C^1$ theory $H^{PC^1}_*(M)$ and piecewise $C^1$ measure homology with compact supports $\mathcal{H}^{PC^1}_*(M)$ can be proved without any curvature assumptions as in \cite{LoM}. However, the proof given in \cite{LoM} depends heavily on bounded cohomology and cannot be easily generalised to the locally finite Lipschitz case.
\end{remark}
\section{Applications}\label{SecApp}
\subsection{Product inequality}
There is a classical result concerning the behaviour of the simplicial volume under taking products. Namely, if $M$ and $N$ are compact manifolds of dimensions $m$ and $n$ respectively, then there are inequalities (see \cite{G} for more details)
$$
\|M\|\cdot\|N\| \leq \|M\times N\| \leq {m+n \choose m}\|M\|\cdot\|N\|.
$$
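For instance, for closed oriented surfaces $\Sigma_g$, $\Sigma_h$ of genera $g,h\geq 2$ one has $\|\Sigma_g\|=4g-4$, so the above inequalities read
$$
(4g-4)(4h-4)\;\leq\;\|\Sigma_g\times\Sigma_h\|\;\leq\;{4\choose 2}(4g-4)(4h-4).
$$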
The second inequality is obtained by simply taking a simplicial approximation of the cross product and can be easily generalised to the noncompact case. On the other hand, the first inequality can be established by passing to bounded cohomology and using the duality between the $\ell^1$ semi-norm on homology and the $\ell^\infty$ semi-norm on cohomology. However, this approach does not generalise directly to the case of noncompact manifolds and the Lipschitz simplicial volume (and is in general false in the noncompact, non-Lipschitz case). The two main problems which arise are the more subtle relation between the $\ell^1$ semi-norm on locally finite homology and the $\ell^\infty$ semi-norm on cohomology with compact supports, and the existence of a good product in cohomology with compact supports. However, for the Lipschitz simplicial volume the inequality was proved in the case of complete, non-positively curved Riemannian manifolds \cite[Theorem 1.7]{LS}. Using the piecewise straightening procedure, we are able to generalise it slightly and obtain Theorem~\ref{PI1}.
The proof is a modification of the proof from \cite{LS}, with one proposition generalised to the case of curvature bounded from above. We introduce the necessary notions and facts. By $S_k^{lf,\Lip}(M)$ we denote the family of subsets of $Map(\Delta^k,M)$ such that each element $A\in S_k^{lf,\Lip}(M)$ is locally finite (in the sense that for any given compact subset $K\subset M$ we have $\#\{\sigma\in A\;:\; \sigma(\Delta^k)\cap K\neq \emptyset \}<\infty$) and consists of $L$-Lipschitz simplices for some $L$ depending on $A$. We recall the most important definitions and results from \cite{LS}.
\begin{defi}[{\cite[Definition 3.11]{LS}}]
Let $M$ be a topological space, $k\in\mathbb{N}$, and let $A\subset Map(\Delta^k,M)$.
\begin{enumerate}
\item For a locally finite chain $c=\sum_{i\in I}a_i\sigma_i\in C^{lf}_k(M)$, let
$$
|c|^A_1 = \begin{cases}|c|_1 & \text{if $\supp(c)\subset A$,} \\ \infty & \text{otherwise.}\end{cases}
$$
Here, $\supp(c):=\{\sigma_i;\; i\in I,a_i\neq 0\}$.
\item The semi-norms on (Lipschitz) locally finite homology induced by $|\cdot|^A_1$ are denoted by $\|\cdot\|^A_1$.
\item If $M$ is an oriented, connected $n$-manifold, then
$$
\|M\|^A:=\|[M]\|^A_1.
$$
\end{enumerate}
\end{defi}
\begin{defi}[{\cite[Definition 3.19]{LS}}]
Let $M$ and $N$ be two topological spaces, and let $k,l\in\mathbb{N}$. A~locally finite set $A\in S^{lf}_{k+l}(M\times N)$ is called \emph{$(k,l)$-sparse} if
$$
\begin{matrix}
A_M: = \{\pi_M\circ\sigma\rfloor_k;\; \sigma\in A\}\in S^{lf}_k(M) & \text{ and } & A_N:=\{\pi_N\circ{}_l\lfloor\sigma;\; \sigma\in A\}\in S^{lf}_l(N)
\end{matrix}
$$
where $\sigma\rfloor_k$ is the $k$-dimensional face of $\sigma$ spanned by the last $k+1$ vertices, ${}_l\lfloor\sigma$ is the $l$-dimensional face of $\sigma$ spanned by the first $l+1$ vertices, and $\pi_M:M\times N\rightarrow M$ and $\pi_N:M\times N\rightarrow N$ are the canonical projections.
A locally finite chain $c\in C^{lf}_{k+l}(M\times N)$ is called \emph{$(k,l)$-sparse} if its support is $(k,l)$-sparse.
\end{defi}
The proof of the product inequality given in \cite{LS} uses non-positive curvature only to prove that for two non-positively curved manifolds the simplicial volume can be computed on sparse cycles. The following proposition is not stated as such in \cite{LS}; it is, however, a meta-theorem actually proved there.
\begin{prop}
Let $M$ and $N$ be two complete, oriented manifolds of dimensions $m$ and $n$ respectively such that the Lipschitz simplicial volume of $M\times N$ can be computed via $(m,n)$-sparse fundamental cycles, i.e.
$$
\|M\times N\|_{\Lip} = \inf\{\|M\times N\|^A;\; A\in S^{lf,\Lip}_{m+n}(M\times N),\; \text{$A$ is $(m,n)$-sparse}\}.
$$
Then
$$
\|M\|_{\Lip}\cdot\|N\|_{\Lip}\leq \|M\times N\|_{\Lip}.
$$
\end{prop}
The outline of the proof is as follows. Consider cohomology with Lipschitz compact supports $H^*_{cs,\Lip}$, i.e. the cohomology of the cochain complex consisting of those singular cochains for which there exist a compact set $K$ and a constant $L$ such that the evaluation on a simplex $\sigma$ is $0$ whenever $\sigma$ has image disjoint from $K$ and Lipschitz constant less than $L$. For a given space $X$ and a family $A\in S^{lf,\Lip}_k(X)$, the $\ell^\infty$ semi-norm $\|\cdot\|^A_\infty$ on cohomology with Lipschitz compact supports computed on $A$ is dual to the $\ell^1$ semi-norm $\|\cdot\|_1^A$ on Lipschitz locally finite homology \cite[Proposition 3.12]{LS}. Therefore one can compute the Lipschitz simplicial volume from a dual point of view. Moreover, for two cochains with Lipschitz compact supports on $M$ and $N$ we can define their product on $M\times N$, which also has Lipschitz compact support \cite[Lemma 3.15]{LS}. Finally, if $A\in S^{lf,\Lip}_{m+n}(M\times N)$ is $(m,n)$-sparse and $A_M$, $A_N$ are the corresponding projections of $A$ to $S^{lf,\Lip}_m(M)$ and $S^{lf,\Lip}_n(N)$, then we have a product inequality \cite[Remark 3.17]{LS}:
$$
\|\phi\times\psi\|^A_\infty \leq \|\phi\|^{A_M}_\infty \cdot \|\psi\|^{A_N}_\infty
$$
for $\phi\in H^m_{cs,\Lip}(M)$ and $\psi\in H^n_{cs,\Lip}(N)$. In particular if $A$ is $(m,n)$-sparse we obtain
$$
\|M\times N\|^A=\frac{1}{\|[M\times N]^*\|^A_\infty}\geq \frac{1}{\|[M]^*\|^{A_M}_\infty\cdot\|[N]^*\|^{A_N}_\infty}=\|M\|^{A_M}\cdot\|N\|^{A_N}\geq\|M\|_{\Lip}\cdot\|N\|_{\Lip},
$$
hence if the simplicial volume of $M\times N$ can be computed on sparse cycles we get the desired inequality.
To finish the proof of Theorem \ref{PI1} we will prove the following proposition, which is a generalization of Proposition 3.20 in \cite{LS}, where it was proved assuming non-positive curvature.
\begin{prop}
Let $M$ and $N$ be two oriented, connected, complete Riemannian manifolds (without boundary) of dimensions $m$ and $n$ respectively, with sectional curvature bounded from above by a constant $K$ with $0<K<\infty$.
\begin{enumerate}
\item Let $k,l\in\mathbb{N}$. For any cycle $c\in C_{k+l}^{lf,\Lip}(M\times N)$ there is a $(k,l)$-sparse cycle $c'\in C_{k+l}^{lf,\Lip}(M\times N)$ satisfying
$$
\begin{matrix}
|c'|_1\leq |c|_1 & \text{ and } & c\sim c' \text{ in } C_{k+l}^{lf,\Lip}(M\times N).
\end{matrix}
$$
\item In particular, the Lipschitz simplicial volume can be computed via sparse fundamental cycles, i.e.
$$
\|M\times N\|_{\Lip} = \inf\{\|M\times N\|^A;\; A\in S^{lf,\Lip}_{m+n}(M\times N),\; \text{$A$ is $(m,n)$-sparse}\}.
$$
\end{enumerate}
\end{prop}
\begin{proof}
The second statement is a direct consequence of the first one. To prove the first one it is enough to apply the straightening procedure, but with more carefully chosen sets $(F_j)_{j\in J}$. Choose a family of Borel subsets $(F^M_j)_{j\in J^M}$ of $M$ together with points $(z^M_j)_{j\in J^M}$ and sections $(s_j^M)_{j\in J^M}$ with all the properties indicated in the description of the straightening procedure, but with the additional assumption that $\diam(F^M_j)<\frac{E_{m+n,K}}{2}$ and $s^M_j:F^M_j\rightarrow B_{V_{z^M_j}}(\wt{z_j}^M,\frac{E_{m+n,K}}{2})$ for every $j\in J^M$. Similarly choose a family $(F^N_j)_{j\in J^N}$ of Borel subsets of $N$ together with points $(z_j^{N})_{j\in J^N}$ and sections $(s_j^N)_{j\in J^N}$, and as the base of the straightening procedure for $M\times N$ take the family $(F^M_{j_1}\times F^N_{j_2})_{(j_1,j_2)\in J^M\times J^N}$ together with the points $(z^M_{j_1},z^N_{j_2})_{(j_1,j_2)\in J^M\times J^N}$ and the sections $(s_{j_1}\times s_{j_2})_{(j_1,j_2)\in J^M\times J^N}$. This family is locally finite, satisfies $\diam(F^M_{j_1}\times F^N_{j_2})<E_{m+n,K}$, and $s_{j_1}\times s_{j_2}:F^M_{j_1}\times F^N_{j_2}\rightarrow V_{(z^M_{j_1},z^N_{j_2})}(E_{m+n,K})$ for every $(j_1,j_2)\in J^M\times J^N$. Hence any locally finite Lipschitz chain $c\in C^{lf,\Lip}_{k+l}(M\times N)$ can be straightened with respect to this family. Note also that for any $L<\infty$ and $p\in\mathbb{N}$ the family
$$
A_{L,p}:=\{\sigma\in Map(\Delta^{k+l},M\times N);\;\Lip(\sigma)\leq L;\; \sigma \text{ is a $p$-piecewise straight simplex}\}
$$
belongs to $S^{lf,\Lip}_{k+l}(M\times N)$ and is $(k,l)$-sparse by the construction of $(F^M_{j_1}\times F^N_{j_2})_{(j_1,j_2)\in J^M\times J^N}$ and the Lipschitz condition. To finish the proof note that $c\sim \str_p(c)$ for some $p\in\mathbb{N}$, $|c|_1\geq |\str_p(c)|_1$, and $\str_p(c)$ has support in $A_{L,p}$ for some $L$, so it is $(k,l)$-sparse.
\end{proof}
\subsection{Proportionality Principle}
Another result obtained in \cite{LS} for non-positively curved manifolds is the proportionality principle for the Lipschitz simplicial volume. We generalize it here and prove Theorem \ref{PP1}. As a corollary we obtain a proof of Theorem \ref{PP0}, based on Thurston's approach \cite{T}, but slightly different from the proof given in \cite{Str}.
The idea of the original proof is as follows. Using the common universal cover one can construct a 'smearing map' from the $C^1$-$\ell^1$ locally finite Lipschitz chain complex on $M$ into the chain complex of Borel measures on $C^1(\Delta^*,N)$ with finite variation and Lipschitz determination. This map does not increase the norm and has the property that the locally finite real fundamental class of $M$ is mapped to a (measure) fundamental class of $N$ multiplied by $\frac{\vol(M)}{\vol(N)}$ (or, more precisely, to a measure cycle such that every singular cycle homologous to it, if one exists, is the corresponding multiple of a fundamental cycle). Moreover, the image of this map can be approximated 'isometrically' by a singular locally finite Lipschitz cycle, which finishes the proof.
The use of $C^1$ chains and measures is purely technical: it serves to recognise the image of the smearing map. In our approach we cannot use $C^1$ chains and measures; however, piecewise $C^1$ chains have all the required properties. The following propositions are either taken from~\cite{LS} or are slight modifications of those, with the same proofs. For a Riemannian $n$-manifold $M$ we denote by $\dvol_M\in \Omega^n(M)$ the volume form on $M$. Recall that we can evaluate $\dvol_M$ not only on Lipschitz chains (via Rademacher's theorem), but also on Borel measures on $C^1(\Delta^n,M)$ and piecewise $C^1$ measures on $\mathscr{P}C^1(\Delta^n,M)$, via the formula
$$
\int_M \mu\, \dvol_M := \int_{\mathscr{P}C^1(\Delta^n,M)}\int_{\Delta^n} \sigma^*\,\dvol_M\, d\mu(\sigma)
$$
for $\mu\in\mathcal{C}^{PC^1,\Lip}_*(M)$. We will denote the evaluation of $\dvol_M$ on a simplex, a chain, or a measure by $\langle \dvol_M,\cdot\rangle$.
\begin{prop}[{\cite[Proposition 4.4]{LS}}]\label{recognise_class}
Let $M$ be a Riemannian $n$-manifold, and let $c=\sum_{k\in\mathbb{N}}a_k\sigma_k\in C^{lf}_n(M)$ be a cycle with $|c|_1<\infty$ and $\Lip(c)<\infty$.
\begin{enumerate}
\item Then $\langle \dvol_M,\sigma_k\rangle\leq \Lip(c)^n\vol(\Delta^n)$ for every $k\in\mathbb{N}$
\item Furthermore, we have the following equivalence:
$$
\sum_{k\in\mathbb{N}}a_k\cdot\langle \dvol_M,\sigma_k\rangle = \vol(M) \;\Leftrightarrow\;\text{$c$ is a fundamental cycle}.
$$
\end{enumerate}
\end{prop}
\begin{remark}\label{fundamental_class}
The second statement of the above proposition gives an easy criterion for recognising a fundamental class. It can also be applied to a given measure cycle, but only if it is homologous to some singular cycle. The reason is that there is no obvious map $\mathcal{C}_*^{PC^1,\Lip}(M)\rightarrow C^{lf}_*(M)$. However, by Proposition~\ref{measure_hom_theories} we obtain a map $\mathcal{H}^{PC^1,\Lip}_*(M)\rightarrow H^{lf}_*(M)$ by composing the inverse of the isometric isomorphism $H^{PC^1,\ell^1,\Lip}_*(M)\rightarrow\mathcal{H}^{PC^1,\Lip}_*(M)$ with the map $H^{PC^1,\ell^1,\Lip}_*(M)\rightarrow H^{lf}_*(M)$ induced by the inclusion of chains. We can therefore define fundamental cycles in $\mathcal{C}_n^{PC^1,\Lip}(M)$ as the cycles representing any class in the preimage of the fundamental class in $H_n^{lf}(M)$.
\end{remark}
The following proposition is a variation of the results from \cite{LS} and the proof is completely analogous. Let $U$ be a common universal cover of $M$ and $N$ with covering maps $p_M$ and $p_N$ respectively, let $G=Isom^+(U)$ and let $\Lambda=\pi_1(N)$. Then $\Lambda$ is a lattice in $G$ \cite[Lemma 4.2]{LS}. Denote by $\mu_{\Lambda\backslash G}$ the normalized Haar measure on $\Lambda\backslash G$.
\begin{prop}[{\cite[Proposition 4.9, Lemma 4.10]{LS}}]\label{smear}
Let $\sigma:\Delta^*\rightarrow M$ be a piecewise $C^1$ simplex, and let $\wt{\sigma}:\Delta^*\rightarrow U$ be a lift of $\sigma$ to $U$. Then the push-forward of $\mu_{\Lambda\backslash G}$ under the map
$$
\smear_{\wt{\sigma}}:\Lambda\backslash G \rightarrow \mathscr{P}C^1(\Delta^*,N),\;\Lambda g\mapsto p_N\circ g\wt{\sigma}
$$
does not depend on the choice of the lift of $\sigma$ and is denoted by $\mu_\sigma$. Furthermore, there is a well-defined chain map
$$
\smear_*:C_*^{PC^1,\ell^1,\Lip}(M) \rightarrow \mathcal{C}_*^{PC^1,\Lip}(N),\; \sum_{\sigma}a_\sigma \sigma\mapsto \sum_{\sigma}a_\sigma \mu_\sigma.
$$
Moreover, for every fundamental cycle $c\in C_n^{PC^1,\ell^1,\Lip}(M)$ we have
$$
\langle \dvol_N,\smear_n(c) \rangle = \int_{\mathscr{P}C^1(\Delta^n,N)}\int_{\Delta^n} \sigma^*\,\dvol_N\,d\smear_n(c)(\sigma)=\vol(M).
$$
\end{prop}
\begin{proof}[Proof of Theorems \ref{PP1} and \ref{PP0}]
We will show that
$$
\frac{\|N\|_{\Lip}}{\vol(N)}\leq \frac{\|M\|_{\Lip}}{\vol(M)}
$$
and the opposite inequality will follow by symmetry. For $\|M\|_{\Lip}=\infty$ the inequality is obvious, so we can assume $\|M\|_{\Lip}<\infty$. In this case, by Proposition~\ref{homology_theories}, there exists a fundamental cycle in $C^{PC^1,\ell^1,\Lip}_n(M)$. Let $c = \sum_{\sigma}a_\sigma\sigma\in C^{PC^1,\ell^1,\Lip}_n(M)$ be a fundamental cycle and consider its image under the smearing map. It follows from Propositions~\ref{smear} and \ref{recognise_class} and Remark~\ref{fundamental_class} that the image $\smear_n(c)$ represents the fundamental class in $\mathcal{C}_*^{PC^1,\Lip}(N)$ multiplied by $\frac{\vol(M)}{\vol(N)}$. Moreover, by the construction of the smearing map
$$
|\smear_n(c)| = |\sum_{\sigma}a_\sigma\mu_\sigma| \leq \sum_\sigma |a_\sigma||\mu_\sigma| = \sum_\sigma |a_\sigma| = |c|_1.
$$
By Proposition \ref{measure_hom_theories} there exists a cycle in $C^{PC^1,\ell^1,\Lip}_n(N)$ which represents the same homology class as $\smear_n(c)$ and has $\ell^1$-norm not greater than that of $\smear_n(c)$. Since Proposition \ref{homology_theories} implies that the Lipschitz simplicial volume of $M$ can be computed on piecewise $C^1$ cycles, we obtain
$$
\|N\|_{\Lip}\leq \frac{\vol(N)}{\vol(M)}\|M\|_{\Lip}\;\Rightarrow\; \frac{\|N\|_{\Lip}}{\vol(N)}\leq \frac{\|M\|_{\Lip}}{\vol(M)}.
$$
To prove Theorem \ref{PP0} we need only to use the fact that for closed manifolds the classical and Lipschitz simplicial volumes coincide \cite[Remark 1.4]{LS}.
\end{proof}
| {
"timestamp": "2015-02-17T02:18:16",
"yymm": "1409",
"arxiv_id": "1409.3475",
"language": "en",
"url": "https://arxiv.org/abs/1409.3475",
"abstract": "We study the Lipschitz simplicial volume, which is a metric version of the simplicial volume. We introduce the piecewise straightening procedure for singular chains, which allows us to generalize the proportionality principle and the product inequality to the case of complete Riemannian manifolds of finite volume with sectional curvature bounded from above. We obtain also yet another proof of the proportionality principle in the compact case by a direct approximation of the smearing map.",
"subjects": "Geometric Topology (math.GT); Differential Geometry (math.DG)",
"title": "Piecewise straightening and Lipschitz simplicial volume",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9873750507184356,
"lm_q2_score": 0.7185943925708562,
"lm_q1q2_score": 0.7095221748106325
} |
https://arxiv.org/abs/1301.4630 | The vanishing ideal of a finite set of points with multiplicity structures | Given a finite set of arbitrarily distributed points in affine space with arbitrary multiplicity structures, we present an algorithm to compute the reduced Groebner basis of the vanishing ideal under the lexicographic ordering. Our method discloses the essential geometric connection between the relative position of the points with multiplicity structures and the quotient basis of the vanishing ideal, so we will explicitly know the set of leading terms of elements of I. We split the problem into several smaller ones which can be solved by induction over variables and then use our new algorithm for intersection of ideals to compute the result of the original problem. The new algorithm for intersection of ideals is mainly based on the Extended Euclidean Algorithm. | \section{Introduction}
To describe the problem, first we give the definitions below.
{\bfseries Definition 1:} $D\subseteq \mathbb{N}_{0}^{n}$ is called
a lower set if for every $d\in D$ and every $i$ with $d_{i}\neq 0$, the element $d-e_{i}$ lies in $D$, where $e_{i}=(0, \ldots, 0, 1, 0,
\ldots, 0)$ with the 1 situated at the $i$-th position ($1\leq i\leq
n$). For a lower set $D$, we define its limiting set $E(D)$ to be
the set of all $\beta\in \mathbb{N}_{0}^{n}\setminus D$ such that whenever
$\beta_{i}\neq 0$, we have $\beta -e_{i}\in D$.
As shown in Fig. 1 below, there are three lower sets and their limiting sets.
The elements of the lower sets are marked by solid circles and the elements of the
limiting sets are marked by blank circles.
\begin{center}
\includegraphics[height=3.4cm]{14.JPG}\\Fig.1: Illustration of three lower sets and their limiting sets.
\end{center}
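Lower sets and limiting sets are easy to manipulate on a computer. The following Python sketch (an illustration only, not part of the algorithms of this paper; the sample set $D$ is a hypothetical staircase, not necessarily one of those in Fig. 1) checks the defining property of a lower set and computes $E(D)$ directly from Definition 1.
\begin{verbatim}
def is_lower_set(D):
    # Definition 1: d in D and d_i > 0 imply d - e_i in D
    return all(d[:i] + (d[i] - 1,) + d[i + 1:] in D
               for d in D for i in range(len(d)) if d[i] > 0)

def limiting_set(D, n):
    # E(D): beta not in D with beta - e_i in D whenever beta_i > 0
    if not D:
        return {(0,) * n}
    candidates = {d[:i] + (d[i] + 1,) + d[i + 1:] for d in D for i in range(n)}
    return {b for b in candidates - set(D)
            if all(b[:i] + (b[i] - 1,) + b[i + 1:] in D
                   for i in range(n) if b[i] > 0)}

D = {(0, 0), (1, 0), (2, 0), (0, 1)}   # a small lower set in N_0^2
print(is_lower_set(D))                 # True
print(sorted(limiting_set(D, 2)))      # [(0, 2), (1, 1), (3, 0)]
\end{verbatim}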
Let $k$ be a field and $p$ be a point in the affine space
$k^{n}$, i.e. $p=(p_{1}, \ldots, p_{n})\in
k^{n}$. Let $k[X]$ be the polynomial ring over $k$, where we write
$X=(X_{1},X_{2},\ldots,X_{n})$ for brevity's sake.
{\bfseries Definition 2:} $\langle p, D\rangle$ represents a point
$p$ with multiplicity structure $D$, where $p$ is a point in affine
space $k^{n}$ and $D$ is a lower set. $\sharp D$ is called the
multiplicity of point $p$ (here we use the definition in [3]).
For each $d=(d_{1},\ldots,d_{n})\in D$, we define a
corresponding functional
$$L(f)=\frac{\partial^{d_{1}+\ldots+d_{n}}}{\partial x_{1}^{d_{1}}\ldots \partial x_{n}^{d_{n}}}f(p).$$
Hence for any given finite set of points with multiplicity structures $H=\{\langle p_{1},
D_{1}\rangle, \ldots, \langle p_{t}, D_{t}\rangle\}$, we can define $m$ functionals where $m\triangleq\sharp D_{1} +\ldots +\sharp
D_{t}$. Our aim is to
find the reduced Gr$\ddot{\rm{o}}$bner basis of the vanishing ideal $I(H)=\{f\in
k[X];L_{i}(f)=0, i=1,\ldots,m\}$ under the lexicographic ordering with $X_{1}\succ X_{2}
\succ \ldots\succ X_{n}$.
An algorithm that provides a complete solution to this
problem already exists in [4].
However, our answer for the special case of the lexicographic ordering
is in a way more transparent than the one above.
The ideas are summed up as follows:
$\bullet$ Construct the reduced Gr$\ddot{\rm{o}}$bner basis of $I(H)$ and get the quotient basis $D(H)$ by induction over variables.
$\bullet$ Get the quotient basis $D(H)$ purely according to the geometric distribution of the points with multiplicity structures.
$\bullet$ Split the original problem into smaller ones which can be converted into problems of one dimension lower and hence can be solved by induction over variables.
$\bullet$ Compute the intersection of the ideals of the smaller problems by using the Extended Euclidean Algorithm.
There are several publications which have a strong connection to the work presented here. Paper [5] gives a computationally efficient algorithm to get the quotient basis of the vanishing ideal over a set of points with no multiplicity structures, and the authors introduce the lex game to describe the problem. Paper [6] offers a purely combinatorial algorithm to obtain the linear basis of the quotient algebra which can handle sets of points with multiplicity structures, but it does not give the Gr$\ddot{\rm{o}}$bner basis. For a finite set of points with multiplicity structures,
our algorithm obtains a lower set by induction over variables and constructs the reduced Gr$\ddot{\rm{o}}$bner basis at the same time. It is only by constructing the Gr$\ddot{\rm{o}}$bner basis that we can prove that the lower set is the quotient basis.
One important feature of our method is its clear geometric interpretation, so in Section 2 an example together with some auxiliary pictures is given first to demonstrate this feature, which makes the algorithms and conclusions of this paper easier to understand. In Sections 3 and 4, some definitions and notions are given.
Sections 5 and 6 are devoted to our main algorithms for computing the reduced Gr$\ddot{\rm{o}}$bner basis and the
quotient basis, together with the proofs. In Section 7 we present the algorithm for computing the intersection of two ideals and some applications.
\section{Example}
First we give two different forms to represent the set of points $H$ with multiplicity structures.
For easier description, we introduce the matrix form which consists of two matrices $\langle \mathcal{P}=(p_{i,
j})_{m\times n}, \mathcal{D}=(d_{i, j})_{m\times n}\rangle$ with $\mathcal{P}_{i},
\mathcal{D}_{i}$ denoting the $i$-th row vectors of $\mathcal{P}$ and
$\mathcal{D}$ respectively. Each pair $\{\mathcal{P}_{i},
\mathcal{D}_{i}\}$ $(1\leq i\leq m)$ defines a functional in the following way.
$$L_{i}(f)=\frac{\partial^{d_{i, 1}+\ldots+d_{i, n}}}{\partial x_{1}^{d_{i, 1}}\ldots \partial x_{n}^{d_{i, n}}}f|_{x_{1}=p_{i, 1}, \ldots, x_{n}=p_{i, n}}.$$
The functional set defined above is the same as the one defined by $H$ in Section 1.
For example, given a set of three points with their
multiplicity structures $\{\langle p_{1}, D_{1}\rangle, \langle
p_{2}, D_{2}\rangle, \langle p_{3}, D_{3}\rangle\}$, where
$p_{1}=(1, 1), p_{2}=(2, 1), p_{3}=(0, 2), D_{1}=\{(0, 0), (0,
1), (1, 0)\},D_{2}=\{(0,0),(0,1),(1,0),(1,1)\}, D_{3}=\{(0, 0), (1, 0)\}$, the matrix form is like the follows.
$$
\mathcal{P}=\left(\begin{array}{cc}1&1\\1&1\\1&1\\2&1\\2&1\\2&1\\2&1\\0&2\\0&2
\end{array}\right)
,
\mathcal{D}=\left(\begin{array}{cc}0&0\\1&0\\0&1\\0&0\\1&0\\0&1\\1&1\\0&0\\1&0
\end{array}\right).
$$
For intuition's sake, we also represent the points with multiplicity structures in a more intuitive way, as shown in the left picture of Fig. 2, where each lower set representing the multiplicity structure of the corresponding point $p$ is placed in the affine space with its zero element (0,0) situated at $p$. This intuitive representation is the basis of the geometric interpretation of our algorithm.
We take the example above to show how our method works and what the geometric interpretation of our algorithm is like:
\textbf{Step 1:}
Define the mapping $\pi:H\rightarrow k$ such that $\langle p=(p_1,\ldots,p_n),D\rangle\in H$ is mapped to $p_n\in k$. So $H=\{\langle p_{1},D_{1}\rangle, \langle p_{2}, D_{2}\rangle, \langle p_{3},D_{3}\rangle\}$ consists of two $\pi$-fibres: $H_{1}=\{\langle p_{1}, D_{1}\rangle, \langle p_{2}, D_{2}\rangle\}$ and $H_{2}=\{\langle p_{3}, D_{3}\rangle\}$, as shown in the middle and the right pictures in Fig. 2. Each fibre defines a new problem, so we split the original problem defined by $H$ into two smaller ones defined by $H_1$ and $H_2$ respectively.
\begin{center}
\includegraphics[height=4.7cm]{1.jpg}\hskip 0.1cm
\includegraphics[height=4.7cm]{2.jpg}
\includegraphics[height=4.7cm]{3.jpg}\\Fig. 2: The left picture represents $H$. The middle one is for $H_{1}$ and the right one for $H_{2}$.
\end{center}
\textbf{Step 2:} Solve the small problems. Take the problem defined
by $H_{1}$ for example.
First, it's easy to write down one element of $I(H_{1})$:
$$f_{1}=(X_{2}-1)(X_{2}-1)=(X_{2}-1)^{2}\in I(H_{1}).$$
The geometric interpretation is: we draw two lines, both with the equation $X_{2}-1=0$,
to cover all the points, as illustrated in the left picture in Fig. 3, and the corresponding polynomial is $f_1$.
\begin{center}
\includegraphics[height=4.5cm]{00.jpg}
\includegraphics[height=4.5cm]{02.jpg}
\includegraphics[height=4.5cm]{01.jpg}\\Fig. 3: Three ways to draw lines to cover the points.
\end{center}
According to the middle and the right pictures in Fig. 3, we can
write down another two polynomials in $I(H_{1})$:
$$f_{2}=(X_{2}-1)(X_{1}-1)(X_{1}-2)^{2}~~{\rm and}~~f_{3}=(X_{1}-1)^{2}(X_{1}-2)^{2}.$$
It can be checked that $G_{1}=\{ f_{1},f_{2},f_{3}\}$ is the
reduced Gr$\ddot{\rm{o}}$bner basis of $I(H_{1})$, and the quotient basis is $\{
1,X_{1},X_{2},X_{1}X_{2},X_{1}^{2},X_{2}X_{1}^{2},X_{1}^{3}\}$. In the following,
we do not explicitly distinguish an $n$-variable monomial $X_1^{d_1}X_2^{d_2}\ldots X_n^{d_n}$ from the element $(d_1,d_2,\ldots,d_n)$ in $\mathbb{N}_{0}^{n}$, and we denote
the quotient basis of $I(H)$ by $D(H)$. Hence $D(H_{1})$ can be written
as a subset of $\mathbb{N}_{0}^{n}$:
$\{(0,0),(1,0),(0,1),(1,1),(2,0),(2,1),(3,0)\}$, i.e. a lower set, denoted
by $D^{'}$.
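That $f_{1},f_{2},f_{3}$ indeed vanish on all the functionals defined by $H_{1}$ can be checked symbolically. The following sketch (an illustrative check using the sympy library; the helper name \texttt{apply\_functional} is ours and not part of the algorithm) applies each derivative functional and evaluates it at the corresponding point.
\begin{verbatim}
import sympy as sp

X1, X2 = sp.symbols('X1 X2')

def apply_functional(f, point, d):
    # L(f) = partial^{d1+d2} f / dX1^{d1} dX2^{d2}, evaluated at `point`
    orders = [(v, k) for v, k in zip((X1, X2), d) if k > 0]
    g = sp.diff(f, *orders) if orders else f
    return g.subs({X1: point[0], X2: point[1]})

H1 = [((1, 1), [(0, 0), (0, 1), (1, 0)]),
      ((2, 1), [(0, 0), (0, 1), (1, 0), (1, 1)])]

f1 = (X2 - 1)**2
f2 = (X2 - 1)*(X1 - 1)*(X1 - 2)**2
f3 = (X1 - 1)**2*(X1 - 2)**2

for f in (f1, f2, f3):
    assert all(apply_functional(f, p, d) == 0 for p, D in H1 for d in D)
\end{verbatim}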
In fact we can get the lower set in a more direct way by pushing
the points with multiplicity structures leftward, as illustrated in the picture
below (the lower set $D^{'}$ is positioned in the right part of the picture, with its (0,0) element situated at the point (0,1)). The elements of the lower set $D^{'}$ (in the right picture in Fig. 4) are marked by solid
circles. The blank circles constitute the limiting set $E(D^{'})$ and they are the leading terms of the reduced Gr$\ddot{\rm{o}}$bner basis $\{f_{1},f_{2},f_{3}\}$.
\begin{center}
\includegraphics[height=4.7cm]{001.jpg}\\Fig.4: Push the points leftward to get a lower set.
\end{center}
In the same way, we can get the Gr$\ddot{\rm{o}}$bner basis $G_{2}=\{h_{1},h_{2}\}$ and a lower set $D^{''}$ for the problem defined by $H_{2}$, where $h_{1}=(X_{2}-2),h_{2}=X_{1}^{2},D^{''}=\{(0,0),(1,0)\}$.
\textbf{Step 3:} Compute the intersection of the ideals $I(H_{1})$ and $I(H_{2})$ to get the
result for the problem defined by $H$.
First, we construct a new lower set $D$ based on $D^{'},D^{''}$ in an intuitive
way: let the solid circles fall down so that the elements of $D^{''}$ rest on top of the elements of $D^{'}$, forming a new lower set $D$, which is shown in the right part of Fig. 5; the blank circles represent the elements of the limiting set $E(D)$.
\begin{center}
\includegraphics[height=5cm]{11.jpg}\\Fig. 5: Get the lower set $D$ based on $D^{'}$ and $D^{''}$.
\end{center}
Then we need to find $\sharp E(D)$ polynomials vanishing on $H$
with leading terms being the elements of $E(D)$. Take
$X_{1}^{3}X_{2}\in E(D)$ for example to show the general way we do it.
We need two polynomials which vanish on $H_{1}$ and
$H_{2}$ respectively, whose leading terms both have the same degree
in $X_{1}$ as the desired monomial $X_{1}^{3}X_{2}$ and both have
the minimal degree in $X_{2}$. It is easy to see that $f_{2}$ and $X_{1}\cdot h_{2}$ satisfy this requirement; we then multiply $f_{2}$ and $X_{1}\cdot h_{2}$ by $h_{1}$ and $f_{1}$ respectively, which are both univariate
polynomials in $X_{2}$, to get two polynomials $q_{1},q_{2}$
which both vanish on
$H$.
$$q_{1}=f_{2}\cdot h_{1}=(X_{2}-1)(X_{1}-1)(X_{1}-2)^{2}(X_{2}-2),$$
$$q_{2}=X_{1}\cdot h_{2}\cdot f_{1}=X_{1}^{3}(X_{2}-1)^{2}.$$
Next we try to find two univariate polynomials $r_{1},r_{2}$ in $X_{2}$ such that $q_{1}\cdot
r_{1}+q_{2}\cdot r_{2}$ vanishes on $H$ (which automatically holds) and has the desired leading
term $X_{1}^{3}X_{2}$.
To settle the leading term issue, write $q_{1},q_{2}$ as univariate polynomials of $X_{1}$.
$q_{1}=(X_{2}-2)(X_{2}-1)X_{1}^{3}-(5X_{2}^{2}-15X_{2}+10)X_{1}^{2}+(8X_{2}^{2}-24X_{2}+16)X_{1}-4X_{2}^{2}+12X_{2}-8,
q_{2}=(X_{2}-1)^{2}X_{1}^{3}$. Because $X_{2}\prec X_{1}$ and the
highest degrees of $X_{1}$ of the leading terms of $q_{1},q_{2}$ are
both $3$, we know that as long as the leading term of
$(X_{2}-2)(X_{2}-1)X_{1}^{3}\cdot r_{1}+(X_{2}-1)^{2}X_{1}^{3}\cdot
r_{2}$ is $X_{1}^{3}X_{2}$, the leading term of $q_{1}\cdot
r_{1}+q_{2}\cdot r_{2}$ is also $X_{1}^{3}X_{2}$.
$$(X_{2}-2)(X_{2}-1)X_{1}^{3}\cdot r_{1}+(X_{2}-1)^{2}X_{1}^{3}\cdot r_{2}$$
$$=X_{1}^{3}(X_{2}-1)\left((X_{2}-2)\cdot r_{1}+(X_{2}-1)\cdot r_{2}\right)$$
Clearly, the leading term of $q_{1}\cdot r_{1}+q_{2}\cdot r_{2}$ is
$X_{1}^{3}X_{2}$ if and only if $(X_{2}-2)\cdot r_{1}+(X_{2}-1)\cdot r_{2}=1$. In this case $r_{1}=-1$ and $r_{2}=1$ work
perfectly. In our algorithm we use the Extended Euclidean Algorithm to compute
$r_{1},r_{2}$.
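The extended Euclidean step and the resulting combination can be reproduced in a few lines of sympy; the sketch below (an illustration using sympy's \texttt{gcdex}, not the implementation used in the paper) recovers $r_1,r_2$ and expands $r_{1}q_{1}+r_{2}q_{2}$, which reproduces the polynomial $g_3$ displayed next.
\begin{verbatim}
import sympy as sp

X1, X2 = sp.symbols('X1 X2')

q1 = (X2 - 1)*(X1 - 1)*(X1 - 2)**2*(X2 - 2)
q2 = X1**3*(X2 - 1)**2

# extended Euclidean algorithm in k[X2]: r1*(X2-2) + r2*(X2-1) = gcd = 1
r1, r2, g = sp.gcdex(X2 - 2, X2 - 1, X2)
assert g == 1 and sp.expand(r1*(X2 - 2) + r2*(X2 - 1)) == 1
print(r1, r2)              # expected: -1 and 1

g3 = sp.expand(r1*q1 + r2*q2)
print(g3)                  # leading term (lex, X1 > X2) is X1^3*X2
\end{verbatim}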
Finally we obtain
$$g_{3}=q_{1}\cdot r_{1}+q_{2}\cdot r_{2}=(X_{2}-1)X_{1}^{3}+(5X_{2}^{2}-15X_{2}+10)X_{1}^{2}-(8X_{2}^{2}-24X_{2}+16)X_{1}+4X_{2}^{2}-12X_{2}+8$$
which vanishes on $H$ and has $X_{1}^{3}X_{2}$ as its leading term.
In the same way, we can get $g_{1}=(X_{2}-1)^{2}(X_{2}-2)$ for $X_{2}^{3}$, $g_{2}=(X_{2}-1)^{2}X_{1}^{2}$ for $X_{1}^{2}X_{2}^{2}$ and
$g_{4}=X_{1}^{4}+6(X_{2}^{2}-2X_{2})X_{1}^{3}-13(X_{2}^{2}-2X_{2})X_{1}^{2}+12(X_{2}^{2}-2X_{2})X_{1}-4(X_{2}^{2}-2X_{2})$ for $X_{1}^{4}$. In fact we need to compute $g_{1},~g_{2},~g_{3}$ and $g_{4}$ in
turn according to the lexicographic order because we need to reduce
$g_{2}$ by $g_{1}$, reduce $g_{3}$ by $g_{2}$ and $g_{1}$, and reduce $g_4$ by $g_1$, $g_2$ and $g_3$.
The reduced polynomial set will be proved in Section 6 to be
the reduced Gr$\ddot{\rm{o}}$bner basis of the intersection of the two ideals, which is exactly the vanishing ideal of $H$, and $D$ is the
quotient basis.
\section{Notions}
First, we define the following mappings.
$~~~~proj:\mathbb{N}_{0}^{n} \longrightarrow \mathbb{N}_{0}$
$~~~~~~~~~~~(d_{1},\ldots,d_{n})\longmapsto d_{n}$.
$~~~~\widehat{proj}:\mathbb{N}_{0}^{n}\longrightarrow \mathbb{N}_{0}^{n-1}$
$~~~~~~~~~~~(d_{1},\ldots,d_{n})\longmapsto (d_{1},\ldots,d_{n-1})$.
$~~~~embed_{c}:\mathbb{N}_{0}^{n-1}\longrightarrow \mathbb{N}_{0}^{n}$
$~~~~~~~~~~~(d_{1},\ldots,d_{n-1})\longmapsto (d_{1},\ldots,d_{n-1},c)$.
Let $D\subset \mathbb{N}_0^{n}$; naturally we define $\widehat{proj}(D)=\{\widehat{proj}(d)|d\in D\}$ and $embed_{c}(D^{'})=\{embed_{c}(d)|d\in D^{'}\}$, where $D^{'}\subset \mathbb{N}_0^{n-1}$. In fact we can apply these mappings to any set $O\subset k^{n}$ or any matrix with $n$ columns, because there is no danger of confusion. For example, if $M$ is a matrix with $n$ columns, then $\widehat{proj}(M)$ is the matrix with $n-1$ columns obtained by keeping the first $n-1$ columns of $M$ and deleting the last one.
The $embed_c$ mapping embeds an $(n-1)$-dimensional lower set into the $n$-dimensional space.
When the parameter $c$ of the $embed_c$ operation is zero, we get an $n$-dimensional lower set by
mapping each element $d=(d_{1},\ldots,d_{n-1})$ to $(d_{1},\ldots,d_{n-1},0)$, as shown below.
\begin{center}
\includegraphics[height=4.3cm]{embed.JPG}\\Fig. 6: Embed the lower set in 2-D space into 3-D space with parameter $c=0$.
\end{center}
Blank circles represent the elements of the limiting sets. Note that after the $embed_c$ mapping,
there is one more blank circle. In this case, the limiting set is always increased by one element $(0,\ldots,0,1)$.
In the case where the parameter $c$ of the $embed_c$ operation is not zero, what we get is obviously no longer a lower set.
But there is another intuitive fact we should note.
\textbf{Theorem 1:} $D_{0},D_{1},\ldots,D_{k}$ are $n-1$ dimensional lower sets, and
$D_{0}\supseteq D_{1}\supseteq \ldots \supseteq D_{k}$. Let $\hat{D}_{i}=embed_{i}(D_{i}),i=0,\ldots,k.$
Then $D=\bigcup_{i=0}^{k}\hat{D}_{i}$ is an $n$ dimensional lower set, and $E(D)\subseteq C$ where
$C=\bigcup_{i=0}^{k}embed_{i}(E(D_{i}))\bigcup \{(0,\ldots,0,k+1)\}$.
\textbf{Proof:}
First we prove that $D$ is a lower set. For any $d\in D$, let $i=proj(d)$; then $d\in \hat{D}_{i}$, i.e. $\widehat{proj}(d)\in\widehat{proj}(\hat{D}_i)=D_i$. Because $D_{i}$ is a lower set, for $j=1,\ldots,n-1$, if $d_{j}\neq 0$, then $\widehat{proj}(d)-\widehat{proj}(e_{j})\in D_{i}$, where $e_{j}=(0, \ldots, 0, 1, 0,
\ldots, 0)$ with the 1 situated at the $j$-th position, and hence $d-e_{j}\in \hat{D}_{i}\subseteq D$. For $j=n$, if $i=0$ there is nothing to prove. Otherwise $d-e_{n}\in\hat{D}_{i-1}\subseteq D$: indeed, if $d-e_{n}\notin\hat{D}_{i-1}$, then $\widehat{proj}(d)\notin D_{i-1}$, which contradicts $\widehat{proj}(d)\in D_{i}\subseteq D_{i-1}$.
Second, let $d\in E(D)$; then $d\notin D$. If $\widehat{proj}(d)$ is the zero tuple, then $d_{n}$ must be $k+1$, that is, $d\in C.$ Otherwise $d_{n}<k+1$, and for every $j=1,\ldots,n-1$ with $d_{j}\neq 0$
we have $d-e_{j}\in embed_{d_{n}}(D_{d_{n}})$, hence
$\widehat{proj}(d)-\widehat{proj}(e_{j})\in D_{d_{n}}$; since moreover $\widehat{proj}(d)\notin D_{d_{n}}$ (because $d\notin D$ and $d_{n}\leq k$), this means $\widehat{proj}(d)\in E(D_{d_{n}})$. Finally, applying $embed_{d_{n}}$ we obtain $d\in embed_{d_{n}}(E(D_{d_{n}}))$ with $d_{n}<k+1$, so $d\in C$.
\section{Addition of lower sets}
In this section, we define the addition of lower sets, which is the same as that in [2]; the following paragraph and Fig. 7 are basically excerpted from that paper with slight modifications of wording.
To get a visual impression of what the addition of lower sets does, look at the example in Fig. 7. What is depicted there generalizes to arbitrary lower sets $D_{1}$ and $D_{2}$ in arbitrary dimension $n$ and can be described as follows. Draw a coordinate system of $\mathbb{N}_{0}^{n}$ and insert $D_{1}$. Place a translate of $D_{2}$ somewhere on the $X_n$-axis. The translate has to be sufficiently far out, so that $D_{1}$ and the translate of $D_{2}$ do not intersect. Then take the elements of the translate of $D_{2}$ and drop them down along the $X_n$-axis until they lie on top of the elements of $D_{1}$. The resulting lower set is denoted by $D_{1}+D_{2}$.
\begin{center}
\includegraphics[height=4.7cm]{addition.JPG}\\Fig. 7: Addition of $D_{1}$ and $D_{2}$.
\end{center}
Intuitively, we define algorithm \textbf{AOL} to realize the addition of lower sets.
Algorithm \textbf{AOL:} Given two $n$ dimensional lower sets
$D_{1},D_{2}$, determine another lower set as the addition of
$D_{1},D_{2}$, denoted by \textbf{$D:=D_{1}+D_{2}$}.
[step 1]: $D:=D_{1}$;
[step 2]: If $\sharp D_{2}=0$ return $D$. Else pick $a\in
D_{2},D_{2}:=D_{2}\setminus\{a\}.$
[step 2.1]: If $\sharp (D\bigcup \{a\})$=$\sharp D$, increase the last
coordinate of $a$ by $1$ and go to [step 2.1]. Else
$D:=D\bigcup\{a\}$ and go to [step 2].
Given $n$ dimensional lower sets $D_{1},D_{2},D_{3}$, the addition we
defined satisfies:
$(a)~D_{1}+D_{2}=D_{2}+D_{1},$
$(b)~(D_{1}+D_{2})+D_{3}=D_{1}+(D_{2}+D_{3}),$
$(c)~D_{1}+D_{2}$ is a lower set,
$(d)~\sharp(D_{1}+D_{2})=\sharp D_{1}+\sharp D_{2}.$
These are all the same as in [2], and we refer to that paper for the proofs.
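The algorithm \textbf{AOL} is straightforward to implement once lower sets are stored as sets of exponent tuples. The sketch below (an illustration; the function name \texttt{add\_lower\_sets} is ours) realizes the 'drop down' description and reproduces the addition of $D^{'}$ and $D^{''}$ from the example in Section 2.
\begin{verbatim}
def add_lower_sets(D1, D2):
    # AOL: drop the elements of D2 onto D1 along the last coordinate axis
    D = set(D1)
    for a in sorted(D2):
        a = list(a)
        while tuple(a) in D:
            a[-1] += 1      # raise the last coordinate until a free slot is found
        D.add(tuple(a))
    return D

# D' and D'' from the example in Section 2
D1 = {(0, 0), (1, 0), (0, 1), (1, 1), (2, 0), (2, 1), (3, 0)}
D2 = {(0, 0), (1, 0)}
D = add_lower_sets(D1, D2)
print(sorted(D))                       # D'' lands at (0,2) and (1,2), as in Fig. 5
print(len(D) == len(D1) + len(D2))     # property (d): True
\end{verbatim}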
As implied in the example of Section 2, when we want to get a polynomial with leading term $d_{3}$, shown in the right part of Fig. 8, we need two polynomials with leading terms $d_{1},d_{2}$ which are not elements of the respective lower sets, have the same degree in $X_{1}$ as $d_3$, and have the minimal degree in $X_{2}$, as shown in the left part of Fig. 8. In other words, $d_{1}\notin D_{1},~d_{2}\notin D_{2},~\widehat{proj}(d_{1})=\widehat{proj}(d_{2})=\widehat{proj}(d_{3})$, $proj(d_{1})+proj(d_{2})=proj(d_{3})$. It is easy to see that these relations also hold for the addition of three or even more lower sets.
\begin{center}
\includegraphics[height=5cm]{addition_s.jpg}\\Fig.8: $\widehat{proj}(d_{1})=\widehat{proj}(d_{2})=\widehat{proj}(d_{3}),~proj(d_{1})+proj(d_{2})=proj(d_{3})$.
\end{center}
We use algorithm \textbf{GLT} to get the leading terms $d_{1}$ and $d_{2}$ from $d_3$ respectively.
Algorithm \textbf{GLT:} Given $a\in\mathbb{N}_{0}^{n}$ and an $n$ dimensional lower set $D$ satisfying $a\notin D$, determine
$r=(r_1,\ldots,r_n)\in \mathbb{N}_{0}^{n}$ which satisfies $r\notin D$, $\widehat{proj}(r)=\widehat{proj}(a)$, and either $r_n=0$ or $(r_1,\ldots,r_{n-1},r_n-1)\in D$ (i.e. $r_n$ is minimal); we write \textbf{$r:=GLT(a,D)$}.
[step 1]: Initialize $r$ such that $\widehat{proj}(r)=\widehat{proj}(a)$ and $proj(r)=0$.
[step 2]: If $r\notin D$, return $r$; else $r_n:=r_n+1$ and go to [step 2].
Then $d_{1}=GLT(d_{3},D_{1}),~d_{2}=GLT(d_{3},D_{2}).$
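In code, \textbf{GLT} is a two-line search. The sketch below (again an illustration with our own naming) computes $d_1$ and $d_2$ for the monomial $X_1^3X_2$ and the lower sets $D^{'}$, $D^{''}$ of Section 2, recovering the leading terms $X_1^3X_2$ and $X_1^3$ used there.
\begin{verbatim}
def glt(a, D):
    # GLT: keep the first n-1 coordinates of a and take the smallest last
    # coordinate (starting from 0) such that the result is not in D
    r = list(a[:-1]) + [0]
    while tuple(r) in D:
        r[-1] += 1
    return tuple(r)

Dp  = {(0, 0), (1, 0), (0, 1), (1, 1), (2, 0), (2, 1), (3, 0)}   # D'
Dpp = {(0, 0), (1, 0)}                                           # D''
print(glt((3, 1), Dp))    # (3, 1): the leading term of f2, i.e. X1^3*X2
print(glt((3, 1), Dpp))   # (3, 0): the leading term of X1*h2, i.e. X1^3
\end{verbatim}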
\textbf{Definition 3:} For any $f\in k[X]$, view it as an element of $k(X_{n})[X_{1},\ldots,X_{n-1}]$ and define $LC_n(f)$ to be the leading coefficient of $f$, which is a univariate polynomial in $X_{n}$.
Algorithm \textbf{GLP:} Let $D$ be an $n$ dimensional lower set, $a\in \mathbb{N}_{0}^{n}$ with $a\notin D$, and let $G$ be a set of polynomials containing, for each $ed\in E(D)$, a polynomial $f_{ed}\in k[X]$ whose leading term is $ed$. Algorithm \textbf{GLP} returns a polynomial $p$ in the ideal $\langle G\rangle$ whose leading term is $GLT(a,D)$, denoted by $p:=GLP(a,D,G).$
[step 1:] $c:=GLT(a,D)$.
[step 2:] Select $c^{'}\in E(D),s.t.~c^{'}$ is a factor of $c.~~d:=\frac{c}{c^{'}}$.
[step 3:] $p:=f_{c^{'}}\cdot d$ where $f_{c^{'}}$ is an element of $G$ whose leading term is $c^{'}$.
\textbf{Remark 1:} $LC_n(f_{c^{'}})=LC_n(p)$ in [step 3]. Since $c$ has the minimal degree in $X_n$ according to algorithm \textbf{GLT}, there exists no element $c^{''}\in E(D)$ which is a factor of $c$ and satisfies $proj(c^{''})<proj(c)$. Hence the monomial $d$ in the algorithm does not involve the variable $X_n$.
\section{Associate a lower set $D(H)$ to a set of points $H$ with multiplicity structures}
For any given set of $n$ dimensional points $H$ with multiplicity structures, we can construct an $n$ dimensional lower set $D(H)$ by induction.
\textbf{Univariate case:}
$H=\{\langle p_{1},D_{1}\rangle,\ldots,\langle p_{t},D_{t}\rangle\}$, then the lower set is $D(H)=\{0,1,\ldots,\sum_{i=1}^{t}\sharp D_{i}-1\}$.
To pass from $n-1$ to $n$ ($n\geq 2$), we first solve a \textbf{Special case}.
\textbf{Special case:}
$H=\{\langle p_{1},D_{1}\rangle,\ldots,\langle p_{t},D_{t}\rangle\}$ is a set of $n$ dimensional points with multiplicity structures where all the points share the same $X_{n}$ coordinate. Write $H$ in matrix form as $\langle \mathcal{P},\mathcal{D}\rangle$; then all the entries in the last column of the matrix $\mathcal{P}$ have the same value. Classify the row vectors of $\langle\mathcal{P},\mathcal{D}\rangle$ into $\{\langle \mathcal{P}_{0},\mathcal{D}_{0}\rangle,\ldots,\langle \mathcal{P}_{w},\mathcal{D}_{w}\rangle\}$ according to the values of the entries in the last column of the matrix $\mathcal{D}$, in such a way that the correspondence between the row vectors of $\mathcal{P}$ and $\mathcal{D}$ is preserved in each $\langle \mathcal{P}_{i},\mathcal{D}_{i}\rangle$ ($0\leq i\leq w$). All the entries in the last column of $\mathcal{D}_{i}$ equal $i$, and the entries of the last column of $\mathcal{P}_{i}$ are still all equal. Then eliminate the last columns of $\mathcal{P}_{i}$ and $\mathcal{D}_{i}$ to get $\langle\widehat{proj}(\mathcal{P}_{i}),\widehat{proj}(\mathcal{D}_{i})\rangle$, which represents a set of $n-1$ dimensional points with multiplicity structures; by induction we get a lower set $\hat{D}_{i}$ in the $n-1$ dimensional space. Then we set
$$D(H)=\bigcup_{i=0}^{w}embed_{i}(\hat{D}_{i}).$$
Next we deal with the \textbf{General case}.
\textbf{General case:}
$H=\{\langle p_{1},D_{1}\rangle,\ldots,\langle p_{t},D_{t}\rangle\}$ is a set of $n$ dimensional points with multiplicity structures. Split the set of points: $H=H_{1}\bigcup H_{2}\bigcup\ldots\bigcup H_{s}$. The points of $H_{i}$ are in the same $\pi$-fibre, i.e. they have the same $X_{n}$ coordinate $c_{i}$, $i=1,\ldots,s$, and $c_{i}\neq c_{j}$ for all $i,j=1,\ldots,s$, $i\neq j$. According to the \textbf{Special case}, for each $i=1,\ldots,s$ we can get a lower set $D(H_{i})$, and then we set
$$D(H)=\sum_{i=1}^{s}D(H_{i}).$$
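The whole construction of $D(H)$ is easy to prototype. The sketch below (an illustration under the representation of $H$ as a list of pairs (point tuple, set of multi-index tuples); it repeats the \texttt{add\_lower\_sets} helper from the sketch in Section 4 so the block stays self-contained) follows the \textbf{Univariate}, \textbf{Special} and \textbf{General} cases; on the example of Section 2 it returns the nine-element lower set of Fig. 5.
\begin{verbatim}
def add_lower_sets(D1, D2):              # AOL, as in the Section 4 sketch
    D = set(D1)
    for a in sorted(D2):
        a = list(a)
        while tuple(a) in D:
            a[-1] += 1
        D.add(tuple(a))
    return D

def D_of_H(H):
    # H: list of (point, Dset); returns the lower set D(H) as a set of tuples
    n = len(H[0][0])
    if n == 1:                           # univariate case: {0, ..., m-1}
        m = sum(len(Dset) for _, Dset in H)
        return {(i,) for i in range(m)}
    fibres = {}                          # general case: split into pi-fibres
    for p, Dset in H:
        fibres.setdefault(p[-1], []).append((p, Dset))
    total = None
    for fibre in fibres.values():        # special case for each fibre
        layers = {}                      # group multi-indices by last coordinate i
        for p, Dset in fibre:
            for d in Dset:
                layers.setdefault(d[-1], {}).setdefault(p[:-1], set()).add(d[:-1])
        DH = set()
        for i, pts in layers.items():
            sub = list(pts.items())      # an (n-1)-dimensional subproblem
            DH |= {d + (i,) for d in D_of_H(sub)}      # embed_i of the result
        total = DH if total is None else add_lower_sets(total, DH)
    return total

H = [((1, 1), {(0, 0), (0, 1), (1, 0)}),
     ((2, 1), {(0, 0), (0, 1), (1, 0), (1, 1)}),
     ((0, 2), {(0, 0), (1, 0)})]
print(sorted(D_of_H(H)))   # 9 elements: the lower set D of Fig. 5
\end{verbatim}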
We now prove that $D(H)$ is a lower set, although this is easy to see once the geometric interpretation is kept in mind. Since the claim is obviously true in the \textbf{Univariate case}, we argue by induction over the dimension.
\textbf{Proof:} Assume that $D(H)$ is a lower set in the $n-1$ dimensional situation; we now prove the conclusion in the $n$ dimensional situation ($n\geq 2$).
First we prove that $D(H)$ of the \textbf{Special case} is a lower set.
We claim that $\langle\widehat{proj}(\mathcal{P}_{i}),\widehat{proj}(\mathcal{D}_{i})\rangle$
represents an $n-1$ dimensional set of points with multiplicity structures ($i=0,\ldots,w$). For any $D\subset \mathbb{N}_{0}^{n}$, define $F_{a}(D)=\{d\in D|~proj(d)=a\}.$ Let $U=\{u|u\in\{1,\ldots,t\},F_i(D_u)\neq \varnothing\}.$
So $\langle\widehat{proj}(\mathcal{P}_{i}),\widehat{proj}(\mathcal{D}_{i})\rangle$ can be written in the form of $\{\langle \widehat{proj}(p_{u}), \widehat{proj}(F_{i}(D_{u})) \rangle | u\in U\}$. Apparently $\widehat{proj}(F_{i}(D_{u}))$ is an $n-1$ dimensional lower set and can be viewed as the multiplicity structure of the point $\widehat{proj}(p_{u})$. Hence $\langle\widehat{proj}(\mathcal{P}_{i}),\widehat{proj}(\mathcal{D}_{i})\rangle$
is an $n-1$ dimensional set of points with multiplicity structures.
Moreover, we assert that $\widehat{proj}(\mathcal{P}_{j})$ is a sub-matrix of $\widehat{proj}(\mathcal{P}_{i})$ and $\widehat{proj}(\mathcal{D}_{j})$ is a sub-matrix of $\widehat{proj}(\mathcal{D}_{i})$, $0\leq i<j\leq w.$ Because of the correspondence between the row vectors in $\mathcal{P}$ and $\mathcal{D}$, we only need to prove that $\widehat{proj}(\mathcal{D}_{j})$ is a sub-matrix of $\widehat{proj}(\mathcal{D}_{i})$. If this were not true, there would exist a row vector $g$ of $\widehat{proj}(\mathcal{D}_{j})$ which is not a row vector of $\widehat{proj}(\mathcal{D}_{i})$. That is, there exists $b$ ($1\leq b\leq t$) such that $embed_j(g)$ is an element of the lower set $D_{b}$, and $embed_i(g)$ is not included in any lower set $D_{a}$ ($1\leq a\leq t$). However, since $i<j$ and $embed_j(g)\in D_b$, $embed_i(g)$ must be included in $D_b$.
Hence our assertion is true.
Since $\widehat{proj}(\mathcal{P}_{j})$ is a sub-matrix of $\widehat{proj}(\mathcal{P}_{i})$ and $\widehat{proj}(\mathcal{D}_{j})$ is a sub-matrix of $\widehat{proj}(\mathcal{D}_{i})$ for $0\leq i<j\leq w$, the induction hypothesis and the way we construct $D(H)$ give $\hat{D}_{i}\supseteq \hat{D}_{j}$, $0\leq i<j\leq w,$ where $\hat{D}_{i},\hat{D}_{j}$ are both lower sets. Based on \textbf{Theorem 1} in Section 3, $D(H)=\bigcup_{i=0}^{w}embed_{i}(\hat{D}_{i})$ is a lower set, and $E(D(H))\subseteq \bigcup_{i=0}^{w} embed_{i}(E(\hat{D}_{i}))\bigcup \{(0,\ldots,0,w+1)\}$.
Next we prove that $D(H)$ of the \textbf{General case} is a lower set. Since $D(H_{i})$, $i=1,\ldots,s$, are lower sets, and the addition of lower sets is again a lower set according to Section 4, $D(H)$ is obviously a lower set. This finishes the proof.
\section{Associate a set of polynomials $poly(H)$ to $D(H)$}
For every lower set constructed during the induction procedure described in the last section, we associate a set of polynomials to it.
We begin with the univariate case as we did in the last section.
\textbf{P-univariate case:}
$H=\{\langle p_{1},D_{1}\rangle,\ldots,\langle p_{t},D_{t}\rangle\}$, and $D(H)=\{0,1,\ldots,\sum_{i=1}^{t}\sharp D_{i}-1\}$. The set of polynomials associated to $D(H)$ is $poly(H)=\{\prod_{i=1}^{t}(X_{1}-p_{i})^{\sharp D_{i}}\}$.
Apparently, $poly(H)$ of \textbf{P-univariate case} satisfies the following \textbf{Assumption}.
\textbf{Assumption:} For any given $n-1~(n>1)$ dimensional set of points $H$ with multiplicity structures, there are the following conclusions. For any $\lambda\in E(D(H))$, there exists a polynomial $f_{\lambda}\in k[X]$ where $X=(X_{1},\ldots,X_{n-1})$ such that
$\bullet$ The leading term of $f_{\lambda}$ under lexicographic ordering is $X^{\lambda}$.
$\bullet$ The exponents of all lower terms of $f_{\lambda}$ lie in $D(H)$.
$\bullet$ $f_{\lambda}$ vanishes on $H$.
$\bullet$ $poly(H)=\{f_{\lambda}|\lambda\in E(D(H))\}$.
When we construct the set of polynomials $poly(H)$, we should make sure the assumption always holds. Now let us consider the $n~(n>1)$ dimensional situation and still begin with the special case.
\textbf{P-Special case:} Given a set of points with multiplicity structures
$H=\{\langle p_{1},D_{1}\rangle,\ldots,\langle p_{t},D_{t}\rangle\}$
or in matrix form $\langle\mathcal{P}=(p_{ij})_{m\times
n},\mathcal{D}=(d_{ij})_{m\times n}\rangle$. All the given points have the same $X_{n}$ coordinates, i.e.
the entries in the last column of $\mathcal{P}$ are the same. We compute $poly(H)$ following the steps below.
[step 1]: $c:=p_{1n};$ $w=max\{d_{in};i=1,\ldots,m\}.$
[step 2]: $\forall i=0,\ldots,w$, define $\mathcal{SD}_{i}$ as a
sub-matrix of $\mathcal{D}$ containing all the row vectors whose last
coordinates equal to $i$. Extract the corresponding row vectors of $\mathcal{P}$
to form matrix $\mathcal{SP}_{i}$, and the corresponding relationship between the
row vectors in $\mathcal{P}$ and $\mathcal{D}$ holds for $\mathcal{SP}_{i}$ and $\mathcal{SD}_{i}$.
[step 3]: $\forall i=0,\ldots,w$, eliminate the last columns of $\mathcal{SP}_{i}$ and $\mathcal{SD}_{i}$ to get $\langle\tilde{\mathcal{SP}_{i}},\tilde{\mathcal{SD}_{i}}\rangle$ which represents a set of points in $n-1$ dimensional space with multiplicity structures.
According to the induction assumption, we have the polynomial set $\tilde{G}_{i}=poly(\langle \tilde{\mathcal{SP}_{i}}$,$\tilde{\mathcal{SD}_{i}} \rangle)$ associated to the lower set $\tilde{D}_{i}=D(\langle \tilde{\mathcal{SP}_{i}}$,$\tilde{\mathcal{SD}_{i}} \rangle)$.
[step 4]: $ D:=\bigcup_{i=0}^{w}embed_{i}(\tilde{D}_{i}).$ Multiply every element of
$\tilde{G}_{i}$ with $(X_{n}-c)^{i}$ to get $G_{i}.$
$\tilde{G}:=\bigcup_{i=0}^{w}G_{i}\bigcup \{(X_{n}-c)^{w+1}\}$.
[step 5]: Eliminate the polynomials in $\tilde{G}$ whose leading terms are not
included in $E(D)$ to get $poly(H)$.
\textbf{Theorem 2:} The $poly(H)$ got in \textbf{P-Special case} satisfies the \textbf{Assumption}.
\textbf{Proof:} According to Section 5, $\langle \tilde{\mathcal{SP}_{i}}$,$\tilde{\mathcal{SD}_{i}} \rangle$ represents an $n-1$ dimensional set of points with multiplicity structures for $i=0,\ldots,w$, and $\tilde{D}_{j}\supseteq \tilde{D}_{i}$ for $0\leq j\leq i\leq w$. Moreover, $D$ is a lower set and $E(D)\subseteq \bigcup_{i=0}^{w}embed_{i}(E(\tilde{D}_{i}))\bigcup \{(0,\ldots,0,w+1)\}$.
For $\lambda=(0,\ldots,0,w+1)\in E(D)$, we have $f_\lambda=(X_{n}-c)^{w+1}$. It is easy to check that it satisfies the first three terms of the \textbf{Assumption}.
For any other element $ed$ of $E(D)$, $\exists k~s.t.~ed\in embed_{k}(E(\tilde{D}_{k}))$. Let $\tilde{ed}$ be the element in $E(\tilde{D}_{k})$ such that $ed=embed_{k}(\tilde{ed})$. Then $f_{\tilde{ed}}$ vanishes on $\langle\tilde{\mathcal{SP}_{k}},\tilde{\mathcal{SD}_{k}}\rangle$, its leading term is $\tilde{ed}\in E(\tilde{D}_{k})$, and its lower terms belong to $\tilde{D}_{k}$.
According to the algorithm $f_{ed}=(X_{n}-c)^{k}\cdot f_{\tilde{ed}}\in poly(H)$ .
First it is easy to check that the leading term of $f_{ed}$ is $ed$ since $ed=embed_{k}(\tilde{ed})$.
Second, the lower terms of $f_{ed}$ are all in the set $S=\bigcup_{j=0}^{k}embed_{j}(\tilde{D}_{k})$ because all the lower terms of $f_{\tilde{ed}}$ are in the set $\tilde{D}_{k}$. Since $\tilde{D}_{0}\supseteq \tilde{D}_{1}\supseteq \ldots \supseteq \tilde{D}_{k}$, we have $embed_j(\tilde{D}_{k})\subseteq embed_j(\tilde{D}_{j})~(0\leq j\leq k)$, hence $S\subseteq D=\bigcup_{j=0}^{w}embed_{j}(\tilde{D}_{j})$ and the second condition of the \textbf{Assumption} is satisfied.
Third, we are going to prove that $f_{ed}$ vanishes on all the functionals defined by $\langle \mathcal{P},\mathcal{D}\rangle$, i.e. all the functionals defined by $\langle \mathcal{SP}_i,\mathcal{SD}_i\rangle~(i=0,\ldots,w).$
When $i\neq k$, we write all the functionals defined by $\langle \mathcal{SP}_i,\mathcal{SD}_i\rangle$ in this form: $L^{'}\cdot \frac{\partial^{i}}{\partial X_n^{i}}|_{X_n=c}$ where $L^{'}$ is an $n-1$ variable functional. Since $f_{ed}=(X_{n}-c)^{k}\cdot f_{\tilde{ed}}$, apparently $f_{ed}$ vanishes on these functionals.
For $i=k$, denote by $L$ the functionals defined by $\langle\tilde{\mathcal{SP}_{i}},~\tilde{\mathcal{SD}_{i}}\rangle$, and $f_{\tilde{ed}}$ vanishes on $L$. All the functionals defined by $\langle \mathcal{SP}_k,\mathcal{SD}_k\rangle$ can be written in this form: $L^{''}\cdot \frac{\partial^{k}}{\partial X_n^{k}}|_{X_n=c}$ where $L^{''}\in L$. Since $f_{ed}=(X_{n}-c)^{k}\cdot f_{\tilde{ed}}$, apparently $f_{ed}$ vanishes on these functionals.
So $f_{ed}$ vanishes on $H$, and $f_{ed}$ satisfies the first three conditions of the \textbf{Assumption}.
In summary $poly(H)$ satisfies the \textbf{Assumption}, and we finish the proof.
\textbf{Remark 2:} For $f_{\lambda}\in poly(H),\lambda\in E(D)$ where $poly(H)$ is the result got in the algorithm above, we have the conclusion that $LC_n(f_{\lambda})=(X_{n}-c)^{proj(\lambda)}$.
\textbf{P-General case:}
Given a set of points with multiplicity structures
$H$ or in matrix form $\langle\mathcal{P}=(p_{ij})_{m\times
n},\mathcal{D}=(d_{ij})_{m\times n}\rangle$, we are going to get $poly(H)$.
[step 1]: Write $H$ as $H=H_{1}\bigcup H_{2}\bigcup\ldots\bigcup H_{s}$ where $H_{i}~(1\leq i\leq s)$ is a $\pi$-fibre ($\pi:H\mapsto k$ such that $\langle p=(p_1,\ldots,p_n),D\rangle\in H$ is mapped to $p_n\in k$) i.e. the points of $H_{i}$ have the same $X_{n}$ coordinates $c_{i}$, $i=1,\ldots,s$,and $c_{i}\neq c_{j},\forall i,j=1,\ldots,s,i\neq j.$
[step 2]: According to the \textbf{P-Special case}, we have $D^{'}_{i}=D(H_i), G_{i}=poly(H_{i})$. Write $H_{i}$ as $\langle \mathcal{P}_{i},\mathcal{D}_{i}\rangle$,
and define $w_{i}$ as the maximum value of the elements in the last column of $\mathcal{D}_{i}$.
[step 3]: $D:=D^{'}_{1}, G:=G_{1},i:=2$.
[step 4]: If $i>s$, go to [step 5]. Else
[step 4.1]: $D:=D+D^{'}_{i};$ $\hat{G}:=\varnothing$. View $E(D)$ as a monomial set $MS:=E(D)$.
[step 4.2]: If $\sharp MS= 0$, go to [step 4.7], else select the minimal element of $MS$ under lexicographic ordering, denoted by $LT$. $
MS:=MS\setminus\{LT\}$.
[step 4.3]:
$$f_{1}:=GLP(LT,D,G),f_{2}:=GLP(LT,D_{i}^{'},G_{i}).$$
~~~~~~~~~~~~~~~~~~~~~~~~~$v_{k}:=proj(g_{k})$, where $g_{k}:=GLT(LT,D_{k}^{'})$, $k=1,\ldots,i$.
[step 4.4]:
$$q_{1}:=f_{1}\cdot (X_{n}-c_{i})^{w_{i}+1};~~q_{2}:=f_{2}\cdot \prod_{k=1}^{i-1}(X_{n}-c_{k})^{w_{k}+1}.$$
$$pp1:=(X_{n}-c_{i})^{w_{i}+1-v_{i}};~~pp2:=\prod_{k=1}^{i-1}(X_{n}-c_{k})^{w_{k}+1-v_{k}}.$$
[step 4.5]: Use Extended Euclidean Algorithm to compute $r_{1}$ and
$r_{2}$ s.t. $r_{1}\cdot pp_{1}+r_{2}\cdot pp_{2}=1$.
[step 4.6]: $f:=r_{1}\cdot q_{1}+r_{2}\cdot q_{2}$. Reduce $f$ with the
elements in $\hat{G}$ to get $f^{'}$; $\hat{G}:=\hat{G}\bigcup\{f^{'}\}.$ Go to [step
4.2].
[step 4.7]: $G:=\hat{G}.$ $i:=i+1.$ Go to [step 4].
[step 5]: $poly(H):=G$.
\textbf{Theorem 3:} The $poly(H)$ got in \textbf{p-General case} satisfies the \textbf{Assumption}.
\textbf{Proof:} We only need to consider the case $s\geq 2$ in [step 1].
For $i=2$, $D=D_{1}^{'}+D_{2}^{'}. ~~\forall ed\in E(D)$, $v:=proj(ed)$ and $X_0:=\frac{X^{ed}}{X_n^{v}}$. According to Section 4, we have $v=v_{1}+v_{2}$. Based on the \textbf{Remark 1} and \textbf{Remark 2}, $f_{1}$ and $f_{2}$ can be written as polynomials of $k(X_{n})[X_{1},\ldots,X_{n-1}]:$ $f_{1}=X_{0}\cdot (X_{n}-c_{1})^{v_{1}}+the~rest$ and $f_{2}=X_{0}\cdot (X_{n}-c_{2})^{v_{2}}+the~rest$ and none of the monomials in $the~rest$ is greater than or equal to $X_{0}.$
Because $f_{1}$ and $(X_{n}-c_{1})^{w_{1}+1}$ vanish on $H_{1}$, $f_{2}$ and $(X_{n}-c_{2})^{w_{2}+1}$ vanish on $H_{2}$, we know that $q_{1}=f_{1}\cdot (X_{n}-c_{2})^{w_{2}+1}$ and $q_{2}=f_{2}\cdot (X_{n}-c_{1})^{w_{1}+1}$ both vanish on $H_{1}\bigcup H_{2}$. Then $f$ vanishes on $H_{1}\bigcup H_{2}$ where $f=r_{1}\cdot q_{1}+r_{2}\cdot q_{2}$.
$\qquad~~f=X_{0}\cdot (X_{n}-c_{1})^{v_{1}}\cdot (X_{n}-c_{2})^{v_{2}}(r_{1}\cdot (X_{n}-c_{2})^{w_{2}+1-v_{2}}+r_{2}\cdot (X_{n}-c_{1})^{w_{1}+1-v_{1}})+the~ rest$
$\qquad~~~~=X_{0}\cdot (X_{n}-c_{1})^{v_{1}}\cdot (X_{n}-c_{2})^{v_{2}}(r_{1}\cdot pp1+r_{2}\cdot pp2)+the~ rest$
$\qquad~~~~=X_{0}\cdot (X_{n}-c_{1})^{v_{1}}\cdot (X_{n}-c_{2})^{v_{2}}+the~ rest$
No monomial in $the~rest$ is greater than or equal to $X_{0}$, so the leading term of $f$ is $X_{0}\cdot X_{n}^{v}$, which is equal to $ed$. Moreover, we naturally have the following \textbf{Proposition 1} for $i=2$.
\textbf{Proposition 1:}
For every polynomial $f$ we get in the algorithm, $LC_n(f)=\prod_{j=1}^{i}(X_{n}-c_{j})^{v_{j}}$.
When $i>2$, assume the \textbf{Proposition 1} holds for $i-1$. $\forall~ ed\in E(D)$, $v:=proj(ed)$ and $X_0:=\frac{X^{ed}}{X_n^{v}}$. According to Section 4, we have $v=v_{1}+\ldots+v_{i}$. Based on the \textbf{Proposition 1}, \textbf{Remark 1} and \textbf{Remark 2}, $f_{1}$ and $f_{2}$ can be written as polynomials of $k(X_{n})[X_{1},\ldots,X_{n-1}]:$ $f_{1}=X_{0}\cdot \prod_{j=1}^{i-1}(X_{n}-c_{j})^{v_{j}}+the~rest$ and $f_{2}=X_{0}\cdot (X_{n}-c_{i})^{v_{i}}+the~rest$ and none of the monomials in $the~rest$ is greater than or equal to $X_{0}$.
Because $f_{1}$ and $\prod_{j=1}^{i-1}(X_{n}-c_{j})^{w_{j}+1}$ vanish on $\bigcup_{j=1}^{i-1}H_{j}$, $f_{2}$ and $(X_{n}-c_{i})^{w_{i}+1}$ vanish on $H_{i}$, we know that $q_{1}=f_{1}\cdot (X_{n}-c_{i})^{w_{i}+1}$ and $q_{2}=f_{2}\cdot \prod_{j=1}^{i-1}(X_{n}-c_{j})^{w_{j}+1}$ both vanish on $\bigcup_{j=1}^{i} H_{j}$. Then $f$ vanishes on $\bigcup_{j=1}^{i} H_{j}$ where $f=r_{1}\cdot q_{1}+r_{2}\cdot q_{2}$.
$$f=X_{0}\cdot\prod_{j=1}^{i}(X_{n}-c_{j})^{v_{j}}(r_{1}\cdot (X_{n}-c_{i})^{w_{i}+1-v_{i}}+r_{2}\cdot \prod_{j=1}^{i-1}(X_{n}-c_{j})^{w_{j}+1-v_{j}})+the~ rest$$
$\qquad\qquad=X_{0}\cdot\prod_{j=1}^{i}(X_{n}-c_{j})^{v_{j}}(r_{1}\cdot pp1+r_{2}\cdot pp2)+the~ rest$
$\qquad\qquad=X_{0}\cdot\prod_{j=1}^{i}(X_{n}-c_{j})^{v_{j}}+the~ rest$
No monomial in $the~rest$ is greater than or equal to $X_{0}$, and the leading term of $f$ is $X_{0}\cdot X_{n}^{v}$, which is equal to $ed$. Hence \textbf{Proposition 1} holds for arbitrary $i$.
Therefore we have proved that for any element $ed\in E(D)$, $f_{ed}:=f$ vanishes on $H$ and its leading term is $ed$. In the algorithm, we compute the $f_{ed}$ in turn according to the lexicographic ordering of the elements of $E(D)$. Once we get a polynomial, we use the polynomials obtained previously to reduce it ([step 4.6]). We now prove that the lower terms of $f^{'}$ are all in $D$ after such a reduction operation.
Let $D$ be a lower set, $a$ be a monomial, define $L(a,D)=\{b\in\mathbb{N}_{0}^{n};b\prec a,b\in D\}$. Given any $d\notin D$, there exist only two situations:
$d\in E(D)$ or $d\notin E(D)$ but $\exists d^{'}\in E(D),~ s.t.~d^{'}$ is a factor of $d$. Of course $d^{'}\prec d$.
The very first vanishing polynomial we get in the algorithm is a univariate polynomial in $X_{n}$ with leading term $T$, the minimal element of $E(D)$. It is easy to check that its lower terms are in $D$. Since the polynomial is a vanishing polynomial, we can say that $T$ can be represented as a linear combination of the elements of $L(T,D)$.
Note that $T$ is the first element which is not in $D$ under the lexicographic ordering. We \textbf{assume} that for some monomial $M\notin D$ ($M\succ T$), every $m\prec M$ with $m\notin D$ can be represented as a linear combination of the elements of $L(m,D)$. We now prove that $M$ can be represented as a linear combination of the elements of $L(M,D).$
If $M\in E(D)$, then the algorithm provides us a vanishing polynomial whose leading term is $M$ i.e. that $M$ can be represented as the combination of the terms which are all smaller than $M$. According to the assumption, for any lower term $m~(m\notin D)$ of the polynomial, $m$ can be represented as the linear combination of the elements of $L(m,D)$, then $M$ could be represented as the linear combination of the elements of $L(M,D).$
If $M\notin E(D)$, there exists $d^{'}\in E(D)$ s.t. $M=M^{'}\cdot d^{'}$. Since $d^{'}\prec M$, according to the assumption, we can substitute $d^{'}$ with the linear combination of the elements of $L(d^{'},D)$. Since all the elements in $L(d^{'},D)$ are smaller than $d^{'}$, then $M$ could be represented as the combination of elements which are all smaller than $M$. Then for the same reason described in the last paragraph, $M$ could be represented as the linear combination of the elements of $L(M,D).$
Therefore, in particular, for any $ed\in E(D)$, all the lower terms of the polynomial $f_{ed}$ obtained in the algorithm after the reduction operation are in $D$, and the proof is done.
\textbf{Theorem 4:} Given a set of points $H$ with multiplicity structures, $poly(H)$ is the reduced Gr$\ddot{\rm{o}}$bner basis of the vanishing ideal $I(H)$ and $D(H)$ is the quotient basis under lexicographic ordering.
\textbf{Proof:}
Let $m$ be the number of functionals defined by $H$; then $m=\dim(k[X]/I(H))$. Denote by $J$ the ideal generated by $poly(H)$. According to the \textbf{Assumption}, $poly(H)\subseteq I(H)$, so $\dim(k[X]/I(H))\leq \dim(k[X]/J)$. Let $C$ be the set of leading terms of polynomials in $J$ under the lexicographic ordering; then $C\supseteq \bigcup_{\beta\in E(D(H))}(\beta+\mathbb{N}_{0}^{n})$, where the latter union is equal to $\mathbb{N}_{0}^{n}-D(H)$. Hence $C^{'}=\mathbb{N}_{0}^{n}-C\subseteq D(H)$. Because $k[X]/J$ is isomorphic as a $k$-vector space to the $k$-span of $C^{'}$ (here $C^{'}$ is viewed as a monomial set), we get $\dim(k[X]/J)\leq \sharp D(H)=m$. Hence we have $$m=\dim(k[X]/I(H))\leq \dim(k[X]/J)\leq m.$$
Therefore $J=I(H)$, where $J=\langle poly(H)\rangle$. Hence apparently $poly(H)$ is exactly the reduced Gr$\ddot{\rm{o}}$bner basis of the vanishing ideal under lexicographic ordering, and $D(H)$ is the quotient basis.
\section{Intersection of ideals and some applications}
Some steps of our algorithm actually do the work of computing the intersection of two ideals, but we note that the information about the zeros of the ideals is necessary there (see [step 4.1] - [step 4.7] of the \textbf{p-General case} in Section 6). We now present a new algorithm to compute the intersection of two ideals which does not require information about the zeros of the ideals.
\textbf{Lemma 1:} Let $G$ be the reduced Gr$\ddot{\rm{o}}$bner basis of some $n$-variable polynomial ideal under the lexicographic ordering with $X_{1}\succ X_{2}\succ\ldots\succ X_{n}$. Define $p_{0}(G)$ as the univariate polynomial in $X_{n}$ in $G$. View $g\in G$ as a polynomial in $k(X_{n})[X_{1},\ldots,X_{n-1}]$ and define $LC_n(g)$ to be the leading coefficient of $g$, which is a univariate polynomial in $X_{n}$. Then $LC_n(g)$ is always a factor of $p_{0}(G)$.
\textbf{Proof:} In fact \textbf{Proposition 1} in Section 6 holds for any given reduced Gr$\ddot{\rm{o}}$bner basis under the lexicographic ordering, since such a basis is unique and can be constructed in the way our algorithm provides. According to the proposition, for all $g\in G$, $LC_n(g)=\prod_{j=1}^{s}(X_{n}-c_{j})^{v_{j}}$ with $v_{j}\leq w_{j}+1$, while $p_{0}(G)=\prod_{j=1}^{s}(X_{n}-c_{j})^{w_{j}+1}$. Hence the proof is done.
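Lemma 1 is easy to test on the basis $\{g_1,g_2,g_3,g_4\}$ obtained in Section 2. The following sketch (an illustrative sympy check, not part of the algorithm) computes $LC_n$ of each basis element as the leading coefficient with respect to $X_1$ and verifies that it divides $p_0(G)=g_1$.
\begin{verbatim}
import sympy as sp

X1, X2 = sp.symbols('X1 X2')

g1 = (X2 - 1)**2*(X2 - 2)
g2 = (X2 - 1)**2*X1**2
g3 = ((X2 - 1)*X1**3 + (5*X2**2 - 15*X2 + 10)*X1**2
      - (8*X2**2 - 24*X2 + 16)*X1 + 4*X2**2 - 12*X2 + 8)
g4 = (X1**4 + 6*(X2**2 - 2*X2)*X1**3 - 13*(X2**2 - 2*X2)*X1**2
      + 12*(X2**2 - 2*X2)*X1 - 4*(X2**2 - 2*X2))

p0 = g1                                  # the univariate polynomial in X2
for g in (g1, g2, g3, g4):
    lc = sp.Poly(g, X1).LC()             # LC_n(g): leading coefficient w.r.t. X1
    assert sp.rem(sp.expand(p0), sp.expand(lc), X2) == 0   # lc divides p0
\end{verbatim}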
Based on \textbf{Proposition 1} and \textbf{Lemma 1}, we give the algorithm \textbf{Intersection} to compute the intersection of two ideals $I_1$ and $I_2$ which are given by their reduced Gr$\rm{\ddot{o}}$bner bases $G_{1}$ and $G_{2}$ under the lexicographic ordering and satisfy $GCD(p_{0}(G_{1}),p_{0}(G_{2}))=1$. Denote by $Q(G)$ the quotient basis, where $G$ is the reduced Gr$\rm{\ddot{o}}$bner basis. Algorithm \textbf{GP} is a sub-algorithm called in algorithm \textbf{Intersection}.
Algorithm \textbf{GP:} $G$ is a reduced Gr$\ddot{\rm{o}}$bner basis. For any given monomial $LT$ which is not in $Q(G)$, we get a polynomial $p$ in $\langle G\rangle$ whose leading term is a factor of $LT$: the $X_{1},\ldots,X_{n-1}$ components of the leading term are the same as those of $LT$, and the $X_{n}$ component has the lowest possible degree. We write $p:=GP(LT,G).$
[step 1:] $G^{'}:=\{g\in G|$ the leading monomial of $g$ is a factor of $LT$ $\}$.
[step 2:] $G^{''}:=\{g\in G^{'}|\nexists g^{'}\in G^{'},~s.t.~$ the degree of $X_{n}$ of the leading monomial of $g^{'}$ is lower than that of $g$ $\}$.
[step 3:] Select one element of $G^{''}$ and multiply it by a monomial in $X_{1},\ldots,X_{n-1}$ to get $p$, whose leading monomial agrees with $LT$ in the $X_{1},\ldots,X_{n-1}$ components.
Algorithm \textbf{Intersection:} $G_{1}$ and $G_{2}$ are the reduced Gr$\ddot{\rm{o}}$bner bases of two different ideals satisfying that $GCD(p_{0}(G_{1}),p_{0}(G_{2}))=1$. Return the reduced Gr$\ddot{\rm{o}}$bner basis of the intersection of these two ideals, denoted by $G:=Intersection(G_{1},G_{2})$.
[step 1:] $D:=Q(G_{1})+Q(G_{2})$. View $E(D)$ as a monomial set. $G:=\varnothing$.
[step 2:] If $E(D)=\varnothing$, the algorithm is done. Else select the minimal element of $E(D)$, denoted by $T$. $E(D):=E(D)/\{T\}$.
[step 3:] $$f_{1}:=GP(T,G_{1}),~f_{2}:=GP(T,G_{2}).$$$$q_{1}:=f_{1}\cdot p_{0}(G_{2}),~q_{2}:=f_{2}\cdot p_{0}(G_{1}).$$
[step 4:] $$t_{1}:=\frac{p_{0}(G_2)}{LC_n(f_{2})},~t_{2}:=\frac{p_{0}(G_1)}{LC_n(f_{1})}.$$
[step 5:] Use Extended Euclidean Algorithm to find $r_{1},r_{2}$ s.t.
$$r_{1}\cdot t_{1}+r_{2}\cdot t_{2}=1.$$
[step 6:] $f:=q_{1}\cdot r_{1}+q_{2}\cdot r_{2}$. Reduce $f$ with $G$ to get $f^{'}$, and $G:=G\bigcup\{f^{'}\}$. Go to [Step 2].
Because the algorithm is essentially the same as [step 4.1] - [step 4.7] of the \textbf{p-General case} in Section 6, we do not give the proof here.
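As an independent cross-check of algorithm \textbf{Intersection}, one can compute $I(H_1)\cap I(H_2)$ for the example of Section 2 with the classical elimination trick $I_1\cap I_2=\bigl(t\cdot I_1+(1-t)\cdot I_2\bigr)\cap k[X_1,X_2]$. The sketch below (a sympy illustration of that standard trick, not an implementation of algorithm \textbf{Intersection}) should reproduce the basis $\{g_1,g_2,g_3,g_4\}$ computed in Section 2.
\begin{verbatim}
import sympy as sp

X1, X2, t = sp.symbols('X1 X2 t')

# reduced lex Groebner bases of I(H1) and I(H2) from Section 2
G1 = [(X2 - 1)**2, (X2 - 1)*(X1 - 1)*(X1 - 2)**2, (X1 - 1)**2*(X1 - 2)**2]
G2 = [X2 - 2, X1**2]

# elimination trick: generators of t*I1 + (1-t)*I2, then eliminate t
J = [t*f for f in G1] + [(1 - t)*g for g in G2]
GB = sp.groebner(J, t, X1, X2, order='lex')
intersection = [g for g in GB.exprs if t not in g.free_symbols]
for g in intersection:
    print(sp.expand(g))
\end{verbatim}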
\textbf{Proposition 1} and \textbf{Lemma 1} reveal an important property of the reduced Gr$\ddot{\rm{o}}$bner basis under the lexicographic ordering. If a set of polynomials does not have this property, it is certainly not a reduced Gr$\ddot{\rm{o}}$bner basis.
It is well known that the Gr$\ddot{\rm{o}}$bner basis of an ideal under the lexicographic ordering has a good algebraic structure and hence is convenient for polynomial system solving. To compute the zeros of a zero-dimensional ideal from the reduced Gr$\ddot{\rm{o}}$bner basis $G$, we first need to compute the roots of $p_0(G)$. Since $LC_n(g)$ ($g\neq p_0(G),~g\in G$) is a factor of $p_0(G)$, computing the roots of $LC_n(g)$, which has smaller degree, helps to save computation cost.
\section{Conclusion}
Based on the algorithm \textbf{Intersection} in Section 7, the algorithm for the \textbf{p-General case} in Section 6 can be simplified: the last sentence of [step 2] can be deleted, and [step 4.3] and [step 4.4] can be replaced by:
[step 4.3]:
$$f_{1}:=GLP(LT,D,G),f_{2}:=GLP(LT,D_{i}^{'},G_{i}).$$
[step 4.4]:
$$q_{1}:=f_{1}\cdot p_0(G_i);~~q_{2}:=f_{2}\cdot p_0(G).$$
$$pp1:=\frac{p_0(G_i)}{LC_n(f_{2})};~~pp2:=\frac{p_0(G)}{LC_n(f_{1})}.$$
During the induction in the algorithm of Section 6, we can record the leading coefficients for later use in order to save computation; the computational cost is then mainly that of the Extended Euclidean Algorithm. However, the advantage of our algorithm is not speed, since the cost ultimately depends on how many times the Extended Euclidean Algorithm has to be used.
Our algorithm has an explicit geometric interpretation which reveals the essential connection between the relative position of the points with multiplicity structures and the quotient basis of the vanishing ideal. It offers a new perspective on the reduced Gr$\ddot{\rm o}$bner basis which can help us understand the problem better. \textbf{Lemma 1} and the algorithm for computing the intersection of two ideals are direct byproducts of our algorithm.
Since finishing our earlier paper [1], which gives an algorithm for obtaining the minimal monomial basis of the Birkhoff interpolation problem with little computation cost, we have believed that the algorithm could be interpreted in a more geometric way and that the proof should be more elegant and much easier to understand. The proof in [1] is so complicated that we ourselves do not like it. It would also be desirable to obtain the interpolation polynomial with little computation cost instead of solving linear equations, since the minimal monomial basis can already be obtained in a simple way. That is why we began to study the vanishing ideal of a set of points with multiplicity structures, which is essentially a special case of the Birkhoff interpolation problem.
I still remember the moment when I first read the paper [2] by Mathias Lederer, in which the quotient basis and the Gr$\ddot{\rm{o}}$bner basis are obtained in a geometric way. I told myself that this was just what we wanted. Paper [2] concentrates on the vanishing ideal of a set of points without multiplicity structures in affine space. Although the presence of multiplicity structures matters a great deal, the paper inspired us considerably. Our algorithm also uses induction over the variables, and the definition of the addition of lower sets is essentially the same as in paper [2]. However, during the induction procedure we have to consider the \textbf{p-Special case} and the \textbf{p-General case}. This consideration, on the one hand, clearly indicates the geometric meaning of the multiplicity structures of the points and, on the other hand, is essential for our algorithm for the intersection of two ideals. In paper [2], the author uses the Lagrange interpolation method to obtain the polynomial vanishing on all points from the polynomials vanishing on subsets of the points. However, Lagrange interpolation simply does not work for our problem, because the points carry multiplicity structures. In this paper, we instead use the Extended Euclidean Algorithm. We thank paper [2] and its author; having solved the problem of the vanishing ideal of a set of points with multiplicity structures, we will move on to the Birkhoff problem.
\section{References}
[1] Na Lei, Junjie Chai, Peng Xia, Ying Li, A fast algorithm for multivariate Birkhoff interpolation problem, Journal of Computational and Applied Mathematics 236 (2011) 1656-1666.
[2] Mathias Lederer, The vanishing ideal of a finite set of closed points in affine space, Journal of Pure and Applied Algebra 212 (2008) 1116-1133.
[3] Hans J. Stetter, Numerical Polynomial Algebra, Chapter 2, SIAM, Philadelphia, PA, USA, 2004.
[4] M.G. Marinari, H.M. M$\rm{\ddot{o}}$ller, T. Mora, Gr$\rm{\ddot{o}}$bner bases of ideals defined by functionals with an application to ideals of projective points, J. AAECC 4 (2) (1993) 103-145.
[5] B$\acute{\rm{a}}$lint Felszeghy, Bal$\acute{\rm{a}}$zs R$\acute{\rm{a}}$th, Lajos R$\acute{\rm{o}}$nyai, The lex game and some applications, Journal of Symbolic Computation 41 (2006) 663-681.
[6] L. Cerlienco, M. Mureddu, From algebraic sets to monomial linear bases by means of combinatorial algorithms, Discrete Math. 139 (1995) 73-87.
\end{document}
| {
"timestamp": "2013-01-22T02:01:30",
"yymm": "1301",
"arxiv_id": "1301.4630",
"language": "en",
"url": "https://arxiv.org/abs/1301.4630",
"abstract": "Given a finite set of arbitrarily distributed points in affine space with arbitrary multiplicity structures, we present an algorithm to compute the reduced Groebner basis of the vanishing ideal under the lexicographic ordering. Our method discloses the essential geometric connection between the relative position of the points with multiplicity structures and the quotient basis of the vanishing ideal, so we will explicitly know the set of leading terms of elements of I. We split the problem into several smaller ones which can be solved by induction over variables and then use our new algorithm for intersection of ideals to compute the result of the original problem. The new algorithm for intersection of ideals is mainly based on the Extended Euclidean Algorithm.",
"subjects": "Algebraic Geometry (math.AG)",
"title": "The vanishing ideal of a finite set of points with multiplicity structures",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9873750488609223,
"lm_q2_score": 0.7185943925708562,
"lm_q1q2_score": 0.7095221734758339
} |
https://arxiv.org/abs/2110.09190 | Secure domination number of $k$-subdivision of graphs | Let $G=(V,E)$ be a simple graph. A dominating set of $G$ is a subset $D\subseteq V$ such that every vertex not in $D$ is adjacent to at least one vertex in $D$. The cardinality of a smallest dominating set of $G$, denoted by $\gamma(G)$, is the domination number of $G$. A dominating set $D$ is called a secure dominating set of $G$, if for every $u\in V-D$, there exists a vertex $v\in D$ such that $uv \in E$ and $D-\{v\}\cup\{u\}$ is a dominating set of $G$. The cardinality of a smallest secure dominating set of $G$, denoted by $\gamma_s(G)$, is the secure domination number of $G$. For any $k \in \mathbb{N}$, the $k$-subdivision of $G$ is a simple graph $G^{\frac{1}{k}}$ which is constructed by replacing each edge of $G$ with a path of length $k$. In this paper, we study the secure domination number of $k$-subdivision of $G$. | \section{Introduction}
Let $G = (V,E)$ be a simple graph with $n$ vertices. Throughout this paper we consider only simple graphs. A set $D\subseteq V(G)$ is a dominating set if every vertex in $V(G)- D$ is adjacent to at least one vertex in $D$.
The domination number $\gamma(G)$ is the minimum cardinality of a dominating set in $G$. There are various domination numbers in the literature.
For a detailed treatment of domination theory, the reader is referred to \cite{domination}.
\medskip
Cockayne et al. introduced the concept of secure domination number \cite{Coc} in 2004. By their definition,
a dominating set $D$ is called a secure dominating set of $G$, if for every $u\in V-D$, there exists a vertex $v\in D$ such that $uv \in E$ and $D-\{v\}\cup\{u\}$ is a dominating set of $G$. The cardinality of a smallest secure dominating set of $G$, denoted by $\gamma_s(G)$, is the secure domination number of $G$.
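For small graphs the definition can be checked directly by exhaustive search. The following sketch (our own code, for illustration only; all function names are ours) computes $\gamma_s$ by brute force and, as a sanity check, recovers the value $\gamma_s(P_7)=3$ predicted by the formula $\lceil \frac{3n}{7}\rceil$ of Cockayne et al.\ recalled in Theorem \ref{COC} below.
\begin{verbatim}
# Brute-force computation of the secure domination number of a small graph,
# given as an adjacency dictionary.  Illustrative sketch only.
from itertools import combinations

def closed_nbhd(G, v):
    return set(G[v]) | {v}

def is_dominating(G, D):
    return all(closed_nbhd(G, v) & D for v in G)

def is_secure_dominating(G, D):
    if not is_dominating(G, D):
        return False
    for u in set(G) - D:
        # u needs a defender v in D with uv an edge and (D - {v}) | {u} dominating
        if not any(v in D and is_dominating(G, (D - {v}) | {u}) for v in G[u]):
            return False
    return True

def secure_domination_number(G):
    V = list(G)
    for k in range(1, len(V) + 1):
        if any(is_secure_dominating(G, set(D)) for D in combinations(V, k)):
            return k

# Sanity check: the path P_7 (vertices 1..7) has gamma_s = ceil(21/7) = 3.
P7 = {i: [j for j in (i - 1, i + 1) if 1 <= j <= 7] for i in range(1, 8)}
print(secure_domination_number(P7))  # 3
\end{verbatim}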
\medskip
The secure domination number has been widely studied in the literature. In 2005, Mynhardt et al. used a simple constructive characterisation of $\gamma$-excellent trees to obtain a constructive characterisation of trees with equal domination and secure domination numbers, where a graph $G$ is said to be $\gamma$-excellent if each vertex of $G$ is contained in some minimum dominating set of $G$ \cite{Myn}. Later, Cockayne found the sharp upper bound $\frac{\Delta n +\Delta -1}{3\Delta -1}$ for trees with $n$ vertices and maximum degree $\Delta \geq 3$ \cite{Coc1}. In 2008, Burger et al., using vertex covers, showed that if $G$ is a connected graph of order $n$ with minimum degree at least two
that is not a 5-cycle, then $\gamma_s(G)\leq \frac{n}{2}$ \cite{Bur}. Merouane et al. \cite{Mer} showed that the problem of computing the secure domination number is NP-complete, even when restricted to bipartite graphs and split graphs. In 2018, Araki et al. proposed a linear-time algorithm for finding the secure domination number of proper interval graphs \cite{Ara}. Recently, Mohamed Ali et al. obtained the secure domination number of zero-divisor graphs \cite{Moh}. More results on the secure domination number can be found in \cite{Ara1,Cas,Gro,Kis,Klo,LiXu}.
\medskip
The \textit{$k$-subdivision} of $G$, denoted by $G^{\frac{1}{k}}$, is constructed by replacing each edge $v_iv_j$ of $G$ with a path of length $k$, say $P^{\{v_i,v_j\}}$. These $k$-paths are called \textit{superedges}; every new vertex is called an internal vertex and is denoted by $x^{\{v_i,v_j\}}_l$ if it belongs to the superedge $P^{\{v_i,v_j\}}$, $i<j$, and has distance $l$ from the vertex $v_i$, where $l \in \{1, 2, \ldots , k-1\}$. Note that for $k = 1$ we have $G^{1/1}= G^1 = G$, and if the graph $G$ has $v$ vertices and $e$ edges, then the graph $G^{\frac{1}{k}}$ has $v+(k-1)e$ vertices and $ke$ edges. Some results about subdivisions of a graph can be found in \cite{ALikhani,ALikhani1,Babu}.
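For concreteness, the following short sketch (our own code; the tuple labels for internal vertices are merely one choice of encoding) builds the adjacency structure of $G^{\frac{1}{k}}$ from an edge list and checks the vertex and edge counts stated above on a small example.
\begin{verbatim}
# Building the k-subdivision of a graph given by an edge list.  Each edge
# {u, v} is replaced by the path u, x_1, ..., x_{k-1}, v; internal vertices
# are encoded as tuples ('x', u, v, l).  Illustrative sketch only.
def k_subdivision(vertices, edges, k):
    adj = {v: set() for v in vertices}
    def add_edge(a, b):
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    for (u, v) in edges:
        path = [u] + [('x', u, v, l) for l in range(1, k)] + [v]
        for a, b in zip(path, path[1:]):
            add_edge(a, b)
    return adj

# Example: the triangle C_3 with k = 2 has 3 + (2-1)*3 = 6 vertices, 2*3 = 6 edges.
G = k_subdivision([1, 2, 3], [(1, 2), (2, 3), (1, 3)], 2)
print(len(G), sum(len(N) for N in G.values()) // 2)  # 6 6
\end{verbatim}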
\medskip
In this paper, we study the secure domination number of the $k$-subdivision of a graph.
\section{Main Results}
In this section, we study the secure domination number of $k$-subdivision of a graph. First we state some known results.
\begin{proposition}\cite{Coc}\label{COC-pro}
For any graph $G$, $\gamma(G)\leq\gamma_s(G)$.
\end{proposition}
\begin{theorem}\cite{Coc}\label{COC}
Let $P_n$ be the path graph with $n$ vertices. Then $\gamma_s(P_n)=\lceil \frac{3n}{7} \rceil$.
\end{theorem}
Now we consider the $2$-subdivision of a graph and present an upper bound for its secure domination number.
\begin{theorem}\label{G12}
Let $G$ be a graph which is not a star. Then,
$$\gamma_s(G^{\frac{1}{2}})\leq \min \{ |E(G)|, |V(G)|\}.$$
\end{theorem}
\begin{proof}
Suppose that $G$ is a graph which is not a star. Then every part of $G$ lies in a subgraph of the form $H_1$ or $H_2$, as shown in Figure \ref{P4C3}. Now consider $G^{\frac{1}{2}}$: every part of $G^{\frac{1}{2}}$ lies in a subgraph of the form $H_1^{\frac{1}{2}}$ or $H_2^{\frac{1}{2}}$ (Figure \ref{P4C3}). In both cases one can easily check that $\{1,2,3\}$ is a secure dominating set of the subgraph. Applying this choice to all parts of the graph, we obtain a set that contains no vertex of $G$ and has size $|E(G)|$, and by the above argument it is a secure dominating set of $G^{\frac{1}{2}}$. On the other hand, $V(G)$ is also a secure dominating set of $G^{\frac{1}{2}}$. Therefore $\gamma_s(G^{\frac{1}{2}})\leq \min \{ |E(G)|, |V(G)|\}$.
\hfill $\square$\medskip
\end{proof}
\begin{figure}
\begin{center}
\psscalebox{0.5 0.5}
{
\begin{pspicture}(0,-7.265)(16.54139,2.365)
\psdots[linecolor=black, dotsize=0.4](0.20138885,0.015)
\psdots[linecolor=black, dotsize=0.4](2.601389,0.015)
\psdots[linecolor=black, dotsize=0.4](5.001389,0.015)
\psdots[linecolor=black, dotsize=0.4](7.4013886,0.015)
\psdots[linecolor=black, dotsize=0.4](13.401389,1.615)
\psdots[linecolor=black, dotsize=0.4](11.001389,-0.785)
\psdots[linecolor=black, dotsize=0.4](15.801389,-0.785)
\psdots[linecolor=black, dotsize=0.4](0.20138885,-5.185)
\psdots[linecolor=black, dotsize=0.4](2.601389,-5.185)
\psdots[linecolor=black, dotsize=0.4](5.001389,-5.185)
\psdots[linecolor=black, dotsize=0.4](7.4013886,-5.185)
\psdots[linecolor=black, dotsize=0.4](13.401389,-3.585)
\psdots[linecolor=black, dotsize=0.4](11.001389,-5.985)
\psdots[linecolor=black, dotsize=0.4](15.801389,-5.985)
\psline[linecolor=black, linewidth=0.08](0.20138885,0.015)(7.4013886,0.015)(7.4013886,0.015)
\psline[linecolor=black, linewidth=0.08](0.20138885,-5.185)(7.4013886,-5.185)(7.4013886,-5.185)
\psline[linecolor=black, linewidth=0.08](13.401389,1.615)(11.001389,-0.785)(15.801389,-0.785)(13.401389,1.615)(13.401389,1.615)
\psline[linecolor=black, linewidth=0.08](13.401389,-3.585)(11.001389,-5.985)(15.801389,-5.985)(13.401389,-3.585)(13.401389,-3.585)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](1.4013889,-5.185)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](3.8013887,-5.185)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](6.201389,-5.185)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](12.201389,-4.785)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](14.601389,-4.785)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.401389,-5.985)
\rput[bl](3.4213889,-1.665){$H_1$}
\rput[bl](13.161388,-1.645){$H_2$}
\rput[bl](3.561389,-7.185){$H_1^{\frac{1}{2}}$}
\rput[bl](13.281389,-7.265){$H_2^{\frac{1}{2}}$}
\rput[bl](0.021388855,0.535){u}
\rput[bl](2.4613888,0.515){v}
\rput[bl](4.8213887,0.515){w}
\rput[bl](7.3213887,-4.765){x}
\rput[bl](7.3213887,0.555){x}
\rput[bl](0.041388854,-4.845){u}
\rput[bl](2.4613888,-4.785){v}
\rput[bl](4.8413887,-4.745){w}
\rput[bl](1.3013889,-4.725){1}
\rput[bl](3.6613889,-4.685){2}
\rput[bl](6.041389,-4.725){3}
\rput[bl](11.601389,-4.685){1}
\rput[bl](14.921389,-4.625){2}
\rput[bl](13.261389,-5.585){3}
\rput[bl](13.301389,2.175){u}
\rput[bl](10.221389,-0.845){v}
\rput[bl](16.22139,-0.785){w}
\rput[bl](13.241389,-3.005){u}
\rput[bl](10.381389,-6.105){v}
\rput[bl](16.241388,-6.045){w}
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](0.20138885,0.015)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](2.601389,0.015)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](5.001389,0.015)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](7.4013886,0.015)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.401389,1.615)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](15.801389,-0.785)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.001389,-0.785)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](0.20138885,-5.185)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](2.601389,-5.185)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](5.001389,-5.185)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](7.4013886,-5.185)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.401389,-3.585)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](15.801389,-5.985)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.001389,-5.985)
\psdots[linecolor=black, dotsize=0.4](12.201389,-4.785)
\psdots[linecolor=black, dotsize=0.4](14.601389,-4.785)
\psdots[linecolor=black, dotsize=0.4](13.401389,-5.985)
\psdots[linecolor=black, dotsize=0.4](1.4013889,-5.185)
\psdots[linecolor=black, dotsize=0.4](3.8013887,-5.185)
\psdots[linecolor=black, dotsize=0.4](6.201389,-5.185)
\end{pspicture}
}
\end{center}
\caption{Subgraphs $H_1$ and $H_2$ in the proof of Theorem \ref{G12}} \label{P4C3}
\end{figure}
\medskip
The assumption in Theorem \ref{G12} that $G$ is not a star is necessary. By an easy argument, we have the following result for star graphs:
\begin{proposition}
For star graphs $S_n=K_{1,n-1}$, $\gamma_s(S_n^{\frac{1}{2}})=n$.
\end{proposition}
In the following, we show that both $|V(G)|$ and $|E(G)|$ can be attained as sharp upper bounds in Theorem \ref{G12}:
\begin{remark}
The upper bound in Theorem \ref{G12} is sharp. First consider the path graph $P_4$. Then $\gamma_s(P_4^{\frac{1}{2}})\leq 3$, which is the number of edges of $P_4$. Also $P_4^{\frac{1}{2}}=P_7$ and, by Theorem \ref{COC}, $\gamma_s(P_7)=3=|E(P_4)|$, so the bound $|E(G)|$ is attained. Now consider the graph $G$ shown in Figure \ref{graphG}. One can easily check that $\{u_1,u_2,\ldots,u_7\}$ is a secure dominating set for $G^{\frac{1}{2}}$ and that there is no smaller such set. Therefore $\gamma_s(G^{\frac{1}{2}})=7=|V(G)|$.
\end{remark}
\begin{figure}
\begin{center}
\psscalebox{0.5 0.5}
{
\begin{pspicture}(0,-8.225)(17.48,-0.075)
\psline[linecolor=black, linewidth=0.08](2.16,-0.725)(6.16,-0.725)(7.36,-3.525)(6.16,-6.325)(2.16,-6.325)(0.96,-3.525)(2.16,-0.725)(2.16,-0.725)
\psline[linecolor=black, linewidth=0.08](2.16,-0.725)(6.16,-6.325)(6.16,-6.325)
\psline[linecolor=black, linewidth=0.08](7.36,-3.525)(0.96,-3.525)(0.96,-3.525)
\psline[linecolor=black, linewidth=0.08](6.16,-0.725)(2.16,-6.325)(2.56,-6.325)
\psdots[linecolor=black, dotsize=0.4](2.16,-0.725)
\psdots[linecolor=black, dotsize=0.4](6.16,-0.725)
\psdots[linecolor=black, dotsize=0.4](7.36,-3.525)
\psdots[linecolor=black, dotsize=0.4](6.16,-6.325)
\psdots[linecolor=black, dotsize=0.4](4.16,-3.525)
\psdots[linecolor=black, dotsize=0.4](0.96,-3.525)
\psdots[linecolor=black, dotsize=0.4](2.16,-6.325)
\rput[bl](1.58,-0.325){$u_1$}
\rput[bl](6.28,-0.345){$u_2$}
\rput[bl](7.88,-3.665){$u_3$}
\rput[bl](6.5,-6.945){$u_4$}
\rput[bl](1.72,-6.925){$u_5$}
\rput[bl](0.0,-3.665){$u_6$}
\rput[bl](4.0,-3.005){$u_7$}
\psline[linecolor=black, linewidth=0.08](11.36,-0.725)(15.36,-0.725)(16.56,-3.525)(15.36,-6.325)(11.36,-6.325)(10.16,-3.525)(11.36,-0.725)(11.36,-0.725)
\psline[linecolor=black, linewidth=0.08](11.36,-0.725)(15.36,-6.325)(15.36,-6.325)
\psline[linecolor=black, linewidth=0.08](16.56,-3.525)(10.16,-3.525)(10.16,-3.525)
\psline[linecolor=black, linewidth=0.08](15.36,-0.725)(11.36,-6.325)(11.76,-6.325)
\psdots[linecolor=black, dotsize=0.4](11.36,-0.725)
\psdots[linecolor=black, dotsize=0.4](15.36,-0.725)
\psdots[linecolor=black, dotsize=0.4](16.56,-3.525)
\psdots[linecolor=black, dotsize=0.4](15.36,-6.325)
\psdots[linecolor=black, dotsize=0.4](13.36,-3.525)
\psdots[linecolor=black, dotsize=0.4](10.16,-3.525)
\psdots[linecolor=black, dotsize=0.4](11.36,-6.325)
\rput[bl](10.78,-0.325){$u_1$}
\rput[bl](15.48,-0.345){$u_2$}
\rput[bl](17.08,-3.665){$u_3$}
\rput[bl](15.7,-6.945){$u_4$}
\rput[bl](10.92,-6.925){$u_5$}
\rput[bl](9.2,-3.665){$u_6$}
\rput[bl](13.2,-3.005){$u_7$}
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.36,-0.725)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.36,-6.325)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.76,-3.525)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](14.96,-3.525)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](14.38,-2.085)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](12.34,-2.085)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](15.94,-2.145)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](15.98,-4.945)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](14.32,-4.925)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](12.34,-4.945)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](10.76,-4.905)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](10.78,-2.105)
\rput[bl](13.22,-0.345){1}
\rput[bl](16.26,-2.025){2}
\rput[bl](16.26,-5.245){3}
\rput[bl](13.26,-6.925){4}
\rput[bl](10.06,-5.145){5}
\rput[bl](10.12,-2.125){6}
\rput[bl](12.6,-2.065){7}
\rput[bl](14.74,-2.165){8}
\rput[bl](15.16,-3.925){9}
\rput[bl](13.62,-5.125){10}
\rput[bl](12.54,-5.085){11}
\rput[bl](11.56,-3.205){12}
\rput[bl](13.32,-8.225){$G^{\frac{1}{2}}$}
\rput[bl](3.98,-8.165){$G$}
\end{pspicture}
}
\end{center}
\caption{Graph $G$ and $G^{\frac{1}{2}}$} \label{graphG}
\end{figure}
Now we consider $G^{\frac{1}{3}}$ and present upper and lower bounds for its secure domination number.
\begin{theorem}\label{G13}
Let $G$ be a graph. Then,
$$|V(G)| \leq \gamma_s(G^{\frac{1}{3}})\leq 2|E(G)|.$$
\end{theorem}
\begin{proof}
For every edge $uv\in E(G)$, we have a superedge $P^{\{u,v\}}$ with vertices $u$, $x_1^{\{u,v\}}$, $x_2^{\{u,v\}}$ and $v$ in $G^{\frac{1}{3}}$. Any four consecutive vertices require at least two vertices of a dominating set. By choosing $u$ and $v$ for every edge, we obtain a dominating set for $G^{\frac{1}{3}}$ of size $|V(G)|$, and it is easy to see that there is no dominating set of smaller size for this graph. Now, by Proposition \ref{COC-pro}, we have $\gamma_s(G^{\frac{1}{3}})\geq |V(G)|$. By choosing $x_1^{\{u,v\}}$ and $x_2^{\{u,v\}}$ in every superedge $P^{\{u,v\}}$, we obtain a secure dominating set of size $2|E(G)|$. Therefore $\gamma_s(G^{\frac{1}{3}})\leq 2|E(G)|$ and we have the result.
\hfill $\square$\medskip
\end{proof}
\begin{remark}
The bounds in Theorem \ref{G13} are sharp.
For the lower bound, it suffices to consider the star graph $S_n$. As shown in Figure \ref{graphSn3}, the set of white vertices in $S_n^{\frac{1}{3}}$ is a secure dominating set. Hence $\gamma_s(S_n^{\frac{1}{3}})=n=|V(S_n)|$.
For the upper bound, it suffices to consider the path graph $P_2$. Then $P_2^{\frac{1}{3}}=P_4$ and $\gamma_s(P_4)=2=2|E(P_2)|$.
\end{remark}
\begin{figure}
\begin{center}
\psscalebox{0.5 0.5}
{
\begin{pspicture}(0,-5.3014426)(15.594231,0.89567304)
\psline[linecolor=black, linewidth=0.08](2.5971153,-1.7014422)(0.19711533,0.69855773)(0.19711533,0.69855773)
\psline[linecolor=black, linewidth=0.08](2.5971153,-1.7014422)(2.5971153,0.69855773)(2.5971153,0.69855773)
\psline[linecolor=black, linewidth=0.08](2.5971153,-1.7014422)(0.19711533,-1.7014422)(0.19711533,-1.7014422)
\psline[linecolor=black, linewidth=0.08](2.5971153,-1.7014422)(4.997115,0.69855773)(4.997115,0.69855773)
\psline[linecolor=black, linewidth=0.08](2.5971153,-1.7014422)(4.997115,-1.7014422)(4.997115,-1.7014422)
\psline[linecolor=black, linewidth=0.08](2.5971153,-1.7014422)(4.997115,-4.1014423)(4.997115,-4.1014423)
\psline[linecolor=black, linewidth=0.08](2.5971153,-1.7014422)(2.5971153,-4.1014423)(2.5971153,-4.1014423)
\psdots[linecolor=black, dotsize=0.1](1.7971153,-3.3014421)
\psdots[linecolor=black, dotsize=0.1](1.3971153,-2.9014423)
\psdots[linecolor=black, dotsize=0.1](0.9971153,-2.5014422)
\psdots[linecolor=black, dotsize=0.4](0.19711533,-1.7014422)
\psdots[linecolor=black, dotsize=0.4](0.19711533,0.69855773)
\psdots[linecolor=black, dotsize=0.4](2.5971153,0.69855773)
\psdots[linecolor=black, dotsize=0.4](4.997115,0.69855773)
\psdots[linecolor=black, dotsize=0.4](4.997115,-1.7014422)
\psdots[linecolor=black, dotsize=0.4](4.997115,-4.1014423)
\psdots[linecolor=black, dotsize=0.4](2.5971153,-4.1014423)
\psline[linecolor=black, linewidth=0.08](12.997115,-1.7014422)(10.5971155,0.69855773)(10.5971155,0.69855773)
\psline[linecolor=black, linewidth=0.08](12.997115,-1.7014422)(12.997115,0.69855773)(12.997115,0.69855773)
\psline[linecolor=black, linewidth=0.08](12.997115,-1.7014422)(10.5971155,-1.7014422)(10.5971155,-1.7014422)
\psline[linecolor=black, linewidth=0.08](12.997115,-1.7014422)(15.397116,0.69855773)(15.397116,0.69855773)
\psline[linecolor=black, linewidth=0.08](12.997115,-1.7014422)(15.397116,-1.7014422)(15.397116,-1.7014422)
\psline[linecolor=black, linewidth=0.08](12.997115,-1.7014422)(15.397116,-4.1014423)(15.397116,-4.1014423)
\psline[linecolor=black, linewidth=0.08](12.997115,-1.7014422)(12.997115,-4.1014423)(12.997115,-4.1014423)
\psdots[linecolor=black, dotsize=0.1](12.197115,-3.3014421)
\psdots[linecolor=black, dotsize=0.1](11.797115,-2.9014423)
\psdots[linecolor=black, dotsize=0.1](11.397116,-2.5014422)
\psdots[linecolor=black, dotsize=0.4](10.5971155,-1.7014422)
\psdots[linecolor=black, dotsize=0.4](10.5971155,0.69855773)
\psdots[linecolor=black, dotsize=0.4](12.997115,0.69855773)
\psdots[linecolor=black, dotsize=0.4](15.397116,0.69855773)
\psdots[linecolor=black, dotsize=0.4](15.397116,-1.7014422)
\psdots[linecolor=black, dotsize=0.4](15.397116,-4.1014423)
\psdots[linecolor=black, dotsize=0.4](12.997115,-4.1014423)
\psdots[linecolor=black, dotsize=0.4](12.197115,-0.9014423)
\psdots[linecolor=black, dotsize=0.4](2.5971153,-1.7014422)
\psdots[linecolor=black, dotsize=0.4](12.997115,-0.9014423)
\psdots[linecolor=black, dotsize=0.4](13.797115,-0.9014423)
\psdots[linecolor=black, dotsize=0.4](13.797115,-1.7014422)
\psdots[linecolor=black, dotsize=0.4](13.797115,-2.5014422)
\psdots[linecolor=black, dotsize=0.4](12.997115,-2.5014422)
\psdots[linecolor=black, dotsize=0.4](12.197115,-1.7014422)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.397116,-0.10144226)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](12.997115,-0.10144226)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](14.5971155,-0.10144226)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](14.5971155,-1.7014422)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](14.5971155,-3.3014421)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](12.997115,-3.3014421)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.397116,-1.7014422)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](12.997115,-1.7014422)
\rput[bl](12.5971155,-5.301442){$S_n^{\frac{1}{3}}$}
\rput[bl](2.1971154,-5.301442){$S_n$}
\end{pspicture}
}
\end{center}
\caption{Graph $S_n$ and $S_n^{\frac{1}{3}}$} \label{graphSn3}
\end{figure}
The following theorem gives the exact value of the secure domination number of the $4$-subdivision of a graph.
\begin{theorem}\label{G14}
Let $G$ be a graph. Then,
$ \gamma_s(G^{\frac{1}{4}})= 2|E(G)|$.
\end{theorem}
\begin{proof}
First, it is easy to see that any five consecutive vertices require at least two vertices of a dominating set. Now consider two adjacent edges $uv$ and $vw$ in $E(G)$ (see Figure \ref{edge14}). In each superedge $P^{\{u,v\}}$ of $G^{\frac{1}{4}}$, we put $x_1^{\{u,v\}}$ and $x_3^{\{u,v\}}$ into our set $D$. We claim that $D$ is a secure dominating set and that no secure dominating set of smaller size exists. One can easily check that $D$ is a secure dominating set, so it remains to rule out smaller sets. As mentioned, any five consecutive vertices require at least two vertices of a dominating set.
If we want to reduce the size of our set, we have to remove $x_3^{\{u,v\}}$ and $x_1^{\{v,w\}}$ and replace them with the vertex $v$, which is the only option.
Now let $D'=D-\{x_3^{\{u,v\}},x_1^{\{v,w\}}\}\cup\{v\}$. Obviously $D'$ is a dominating set of $G^{\frac{1}{4}}$. But for the vertex $x_3^{\{u,v\}}\in V-D'$, the only candidate defender is $v$, and $D'-\{v\}\cup\{x_3^{\{u,v\}}\}$ is not a dominating set. Therefore we have the result.
\hfill $\square$\medskip
\end{proof}
\begin{figure}
\begin{center}
\psscalebox{0.7 0.7}
{
\begin{pspicture}(0,-4.77)(17.59423,-3.03)
\psline[linecolor=black, linewidth=0.08](0.19711533,-3.93)(6.5971155,-3.93)(6.5971155,-3.93)
\psdots[linecolor=black, dotsize=0.4](0.19711533,-3.93)
\psdots[linecolor=black, dotsize=0.4](3.3971152,-3.93)
\psdots[linecolor=black, dotsize=0.4](6.5971155,-3.93)
\psline[linecolor=black, linewidth=0.08](10.997115,-3.93)(17.397116,-3.93)(17.397116,-3.93)
\psdots[linecolor=black, dotsize=0.4](10.997115,-3.93)
\psdots[linecolor=black, dotsize=0.4](14.197115,-3.93)
\psdots[linecolor=black, dotsize=0.4](17.397116,-3.93)
\psdots[linecolor=black, dotsize=0.4](12.5971155,-3.93)
\psdots[linecolor=black, dotsize=0.4](15.797115,-3.93)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](16.597115,-3.93)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](14.997115,-3.93)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.397116,-3.93)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.797115,-3.93)
\rput[bl](0.057115328,-4.73){u}
\rput[bl](3.2371154,-4.73){v}
\rput[bl](17.217115,-4.71){w}
\rput[bl](6.477115,-4.71){w}
\rput[bl](14.117115,-4.73){v}
\rput[bl](10.877115,-4.73){u}
\rput[bl](11.497115,-3.59){$x_1^{\{u,v\}}$}
\rput[bl](12.317116,-4.77){$x_2^{\{u,v\}}$}
\rput[bl](13.1371155,-3.55){$x_3^{\{u,v\}}$}
\rput[bl](14.677115,-3.53){$x_1^{\{v,w\}}$}
\rput[bl](15.537115,-4.77){$x_2^{\{v,w\}}$}
\rput[bl](16.237116,-3.61){$x_3^{\{v,w\}}$}
\end{pspicture}
}
\end{center}
\caption{Two connected edges $uv$ and $vw$ and their superedges in $G^{\frac{1}{4}}$} \label{edge14}
\end{figure}
\medskip
Now we consider $G^{\frac{1}{5}}$ and present upper and lower bounds for its secure domination number in terms of the maximum degree and the number of edges of $G$.
\begin{figure}
\begin{center}
\psscalebox{0.7 0.7}
{
\begin{pspicture}(0,-6.475)(17.6,2.975)
\rput[bl](5.6,2.725){$u_1$}
\rput[bl](7.2,0.325){$u_2$}
\rput[bl](0.0,-1.675){$w$}
\rput[bl](5.6,-6.475){$u_{\Delta}$}
\rput[bl](7.2,-3.675){$u_3$}
\psline[linecolor=black, linewidth=0.08](0.8,-1.675)(2.0,-1.275)(3.2,-0.875)(4.4,-0.475)(5.6,-0.075)(6.8,0.325)(6.8,0.325)
\psline[linecolor=black, linewidth=0.08](0.8,-1.675)(5.2,2.725)(5.2,2.725)
\psline[linecolor=black, linewidth=0.08](0.8,-1.675)(5.2,-6.075)(5.2,-6.075)
\psdots[linecolor=black, dotsize=0.1](6.58,-4.635)
\psdots[linecolor=black, dotsize=0.1](6.44,-4.895)
\psdots[linecolor=black, dotsize=0.1](6.22,-5.135)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](5.2,2.725)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](6.8,0.325)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](5.2,-6.075)
\rput[bl](15.6,2.725){$u_1$}
\rput[bl](17.2,0.325){$u_2$}
\rput[bl](10.0,-1.675){$w$}
\rput[bl](15.6,-6.475){$u_{\Delta}$}
\rput[bl](17.2,-3.675){$u_3$}
\psline[linecolor=black, linewidth=0.08](10.8,-1.675)(12.0,-1.275)(13.2,-0.875)(14.4,-0.475)(15.6,-0.075)(16.8,0.325)(16.8,0.325)
\psline[linecolor=black, linewidth=0.08](10.8,-1.675)(15.2,2.725)(15.2,2.725)
\psline[linecolor=black, linewidth=0.08](10.8,-1.675)(15.2,-6.075)(15.2,-6.075)
\psdots[linecolor=black, dotsize=0.1](16.58,-4.635)
\psdots[linecolor=black, dotsize=0.1](16.44,-4.895)
\psdots[linecolor=black, dotsize=0.1](16.22,-5.135)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](15.2,2.725)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](16.8,0.325)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](15.2,-6.075)
\psdots[linecolor=black, dotsize=0.4](11.6,-0.875)
\psdots[linecolor=black, dotsize=0.4](12.4,-0.075)
\psdots[linecolor=black, dotsize=0.4](14.4,1.925)
\psdots[linecolor=black, dotsize=0.4](12.0,-1.275)
\psdots[linecolor=black, dotsize=0.4](13.2,-0.875)
\psdots[linecolor=black, dotsize=0.4](15.6,-0.075)
\psline[linecolor=black, linewidth=0.08](10.8,-1.675)(16.8,-3.675)(16.8,-3.675)
\psline[linecolor=black, linewidth=0.08](0.8,-1.675)(6.8,-3.675)(6.8,-3.675)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](16.8,-3.675)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](10.8,-1.675)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](6.8,-3.675)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](0.8,-1.675)
\psdots[linecolor=black, dotsize=0.4](12.0,-2.075)
\psdots[linecolor=black, dotsize=0.4](13.2,-2.475)
\psdots[linecolor=black, dotsize=0.4](15.6,-3.275)
\psdots[linecolor=black, dotsize=0.4](11.6,-2.475)
\psdots[linecolor=black, dotsize=0.4](12.4,-3.275)
\psdots[linecolor=black, dotsize=0.4](14.4,-5.275)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.2,0.725)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](14.4,-0.475)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](14.4,-2.875)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.2,-4.075)
\rput[bl](10.4,-0.875){$x_1^{\{w,u_1\}}$}
\rput[bl](11.92,-1.095){$x_1^{\{w,u_2\}}$}
\rput[bl](12.14,-1.995){$x_1^{\{w,u_3\}}$}
\rput[bl](10.4,-3.275){$x_1^{\{w,u_{\Delta}\}}$}
\rput[bl](11.3,0.025){$x_2^{\{w,u_1\}}$}
\rput[bl](13.28,-1.435){$x_2^{\{w,u_2\}}$}
\rput[bl](13.2,-2.295){$x_2^{\{w,u_3\}}$}
\rput[bl](11.32,-4.075){$x_2^{\{w,u_{\Delta}\}}$}
\rput[bl](12.22,-4.815){$x_3^{\{w,u_{\Delta}\}}$}
\rput[bl](13.32,-5.935){$x_4^{\{w,u_{\Delta}\}}$}
\rput[bl](12.26,0.945){$x_3^{\{w,u_1\}}$}
\rput[bl](13.9,-0.275){$x_3^{\{w,u_2\}}$}
\rput[bl](14.28,-2.635){$x_3^{\{w,u_3\}}$}
\rput[bl](13.32,2.005){$x_4^{\{w,u_1\}}$}
\rput[bl](15.56,-0.715){$x_4^{\{w,u_2\}}$}
\rput[bl](15.52,-3.075){$x_4^{\{w,u_3\}}$}
\end{pspicture}
}
\end{center}
\caption{Vertex $w$ with maximum degree $\Delta$ and corresponding superedges in $G^{\frac{1}{5}}$ } \label{Delta15}
\end{figure}
\begin{theorem}\label{G15}
Let $G$ be a graph and $\Delta$ be the maximum degree of its vertices. Then,
$$2|E(G)|+1 \leq \gamma_s(G^{\frac{1}{5}})\leq 3|E(G)| - \Delta +1.$$
\end{theorem}
\begin{proof}
For every edge $uv\in E(G)$, we consider the superedge $P^{\{u,v\}}$ in $G^{\frac{1}{5}}$. From its vertex set $\{u,x_1^{\{u,v\}},x_2^{\{u,v\}},x_3^{\{u,v\}},x_4^{\{u,v\}},v\}$ we choose $x_1^{\{u,v\}}$, $x_2^{\{u,v\}}$ and $x_4^{\{u,v\}}$ and put them in a new set $D$. One can easily check that $D$ is a secure dominating set for $G^{\frac{1}{5}}$. Now consider a vertex $w$ of degree $\Delta$ (see Figure \ref{Delta15}). We define a new set $D'$ as
$$D'=D-\{x_1^{\{w,u_1\}},x_1^{\{w,u_2\}},x_1^{\{w,u_3\}},\ldots,x_1^{\{w,u_{\Delta}\}}\} \cup \{w\}.$$
It is easy to see that $D'$ is a secure dominating set too. Therefore by our argument we have,
$$\gamma_s(G^{\frac{1}{5}})\leq 3|E(G)| - \Delta +1.$$
Now, we consider the superedges $P^{\{u,v\}}$ and $P^{\{u,t\}}$ in $G^{\frac{1}{5}}$ as shown in Figure \ref{edge15}. In every twelve consecutive vertices, we need at least four vertices to have a dominating set. We have the following cases:
\begin{itemize}
\item[(i)] From these superedges we put $x_1^{\{u,v\}}$, $x_4^{\{u,v\}}$, $x_1^{\{u,t\}}$ and $x_4^{\{u,t\}}$ into the dominating set. Clearly, this process yields a dominating set for $G^{\frac{1}{5}}$ of size at most $2|E(G)|$. On the other hand, by Proposition \ref{COC-pro}, $\gamma(G^{\frac{1}{5}})\leq\gamma_s(G^{\frac{1}{5}})$, but this set is not a secure dominating set because of the vertex $u$.
\item[(ii)] From these superedges we put $x_2^{\{u,v\}}$, $x_4^{\{u,v\}}$, $x_1^{\{u,t\}}$ and $x_4^{\{u,t\}}$ into the dominating set. By the same argument as in the previous case, this process does not give a secure dominating set, because of the vertex $x_2^{\{u,t\}}$.
\item[(iii)] Put $u$ in our set. Then, to dominate the vertices of these two superedges, we need at least four other vertices, which makes our set larger than $2|E(G)|$.
\end{itemize}
So $\gamma_s(G^{\frac{1}{5}}) > 2|E(G)|$ and hence, since the secure domination number is an integer, $\gamma_s(G^{\frac{1}{5}}) \geq 2|E(G)|+1$ for every graph $G$. This lower bound can be attained: for the star graph $S_4$ shown in Figure \ref{S415}, one can easily check that the set of white vertices is a secure dominating set of size $2|E(S_4)|+1$.
Therefore we have the result.
\hfill $\square$\medskip
\end{proof}
\bigskip
\begin{figure}
\begin{center}
\psscalebox{0.7 0.7}
{
\begin{pspicture}(0,-5.9693055)(12.42139,-4.827916)
\psline[linecolor=black, linewidth=0.08](0.20138885,-5.0293055)(7.4013886,-5.0293055)(7.4013886,-5.0293055)
\psline[linecolor=black, linewidth=0.08](7.4013886,-5.0293055)(11.801389,-5.0293055)(11.801389,-5.0293055)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](0.20138885,-5.0293055)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](1.4013889,-5.0293055)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](2.601389,-5.0293055)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](3.8013887,-5.0293055)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](5.001389,-5.0293055)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](6.201389,-5.0293055)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](7.4013886,-5.0293055)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](8.601389,-5.0293055)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](9.801389,-5.0293055)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.001389,-5.0293055)
\psline[linecolor=black, linewidth=0.08](11.801389,-5.0293055)(12.201389,-5.0293055)(12.201389,-5.0293055)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](12.201389,-5.0293055)
\rput[bl](6.201389,-5.8293056){$u$}
\rput[bl](12.201389,-5.8293056){$v$}
\rput[bl](0.20138885,-5.8293056){$t$}
\rput[bl](4.601389,-5.9693055){$x_1^{\{u,t\}}$}
\rput[bl](3.5413888,-5.9293056){$x_2^{\{u,t\}}$}
\rput[bl](2.321389,-5.9293056){$x_3^{\{u,t\}}$}
\rput[bl](1.0013889,-5.9093056){$x_4^{\{u,t\}}$}
\rput[bl](6.981389,-5.8893056){$x_1^{\{u,v\}}$}
\rput[bl](8.101389,-5.9693055){$x_2^{\{u,v\}}$}
\rput[bl](9.421389,-5.9493055){$x_3^{\{u,v\}}$}
\rput[bl](10.641389,-5.9293056){$x_4^{\{u,v\}}$}
\end{pspicture}
}
\end{center}
\caption{Superedges $P^{\{u,v\}}$ and $P^{\{u,t\}}$ in $G^{\frac{1}{5}}$} \label{edge15}
\end{figure}
\begin{figure}
\begin{center}
\psscalebox{0.7 0.7}
{
\begin{pspicture}(0,-6.8)(8.394231,1.594231)
\psdots[linecolor=black, dotsize=0.4](4.1971154,-2.6028845)
\psdots[linecolor=black, dotsize=0.4](4.1971154,-3.4028845)
\psdots[linecolor=black, dotsize=0.4](4.1971154,-4.2028847)
\psdots[linecolor=black, dotsize=0.4](4.1971154,-5.0028844)
\psdots[linecolor=black, dotsize=0.4](4.1971154,-5.8028846)
\psdots[linecolor=black, dotsize=0.4](4.1971154,-6.6028843)
\psdots[linecolor=black, dotsize=0.4](4.9971156,-1.8028846)
\psdots[linecolor=black, dotsize=0.4](5.7971153,-1.0028845)
\psdots[linecolor=black, dotsize=0.4](6.5971155,-0.20288453)
\psdots[linecolor=black, dotsize=0.4](7.3971157,0.59711546)
\psdots[linecolor=black, dotsize=0.4](8.197116,1.3971155)
\psdots[linecolor=black, dotsize=0.4](3.3971155,-1.8028846)
\psdots[linecolor=black, dotsize=0.4](2.5971155,-1.0028845)
\psdots[linecolor=black, dotsize=0.4](1.7971154,-0.20288453)
\psdots[linecolor=black, dotsize=0.4](0.9971155,0.59711546)
\psdots[linecolor=black, dotsize=0.4](0.19711548,1.3971155)
\psline[linecolor=black, linewidth=0.08](0.19711548,1.3971155)(4.1971154,-2.6028845)(4.1971154,-6.6028843)(4.1971154,-6.6028843)
\psline[linecolor=black, linewidth=0.08](4.1971154,-2.6028845)(8.197116,1.3971155)(8.197116,1.3971155)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](4.1971154,-2.6028845)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](7.3971157,0.59711546)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](0.9971155,0.59711546)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](4.1971154,-5.8028846)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](5.7971153,-1.0028845)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](2.5971155,-1.0028845)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](4.1971154,-4.2028847)
\end{pspicture}
}
\end{center}
\caption{Graph $S_4^{\frac{1}{5}}$ } \label{S415}
\end{figure}
\begin{remark}
The upper bound in Theorem \ref{G15} is sharp.
It suffices to consider the path graph $P_3$. Then $P_3^{\frac{1}{5}}=P_{11}$ and, by Theorem \ref{COC}, $\gamma_s(P_{11})=5=3|E(P_3)|-2+1$. It is also easy to see that this upper bound is sharp for all star graphs $S_n$.
\end{remark}
Now we consider the general cases $G^{\frac{1}{n}}$ for $n\geq 6$.
\begin{theorem}\label{G16}
Let $G$ be a graph and $n=7k+r$, where $k$ is a positive odd number and $r \in\{-1,1,3,5\}$. Then,
$$\gamma_s(G^{\frac{1}{n}})=\gamma_s(P_{n+1})|E(G)| .$$
\end{theorem}
\begin{proof}
Let $n=7k+r$, where $k$ is a positive odd number and $r \in\{-1,1,3,5\}$. Consider an edge $uv\in E(G)$. As shown in Figure \ref{edge1nodd}, we consider $u$, $x_1^{\{u,v\}}$, $x_2^{\{u,v\}}$, $\ldots$, $x_{n-2}^{\{u,v\}}$, $x_{n-1}^{\{u,v\}}$, $v$ as the vertex sequence of $P^{\{u,v\}}$. Now we define
\[
E_{uv}=\bigcup_{i=0}^{k-1} \Big\{ x_{7i+1}^{\{u,v\}},x_{7i+3}^{\{u,v\}},x_{7i+5}^{\{u,v\}}\Big\},
\]
and
\[
F_{uv}=\left\{
\begin{array}{ll}
{\displaystyle
\Big\{\Big\}}&
\quad\mbox{if $r=-1$, }\\[15pt]
{\displaystyle
\Big\{ x_{n-1}^{\{u,v\}} \Big\}}&
\quad\mbox{if $r=1$,}\\[15pt]
{\displaystyle
\Big\{ x_{n-3}^{\{u,v\}}, x_{n-1}^{\{u,v\}} \Big\}}&
\quad\mbox{if $r=3$,}\\[15pt]
{\displaystyle
\Big\{ x_{n-5}^{\{u,v\}}, x_{n-3}^{\{u,v\}}, x_{n-1}^{\{u,v\}} \Big\}}&
\quad\mbox{if $r=5$.}
\end{array}
\right.
\]
Let $D_{uv}=E_{uv}\cup F_{uv}$ and
\[
D=\bigcup_{uv\in E(G)} D_{uv}.
\]
It is easy to see that $D$ is a secure dominating set for $G^{\frac{1}{n}}$. So $\gamma_s(G^{\frac{1}{n}})\leq |E(G)|\lceil \frac{3n+3}{7} \rceil$, and hence $\gamma_s(G^{\frac{1}{n}})\leq \gamma_s(P_{n+1})|E(G)|$. Now we show that no secure dominating set can use fewer vertices. With $D$ chosen as above, we cannot choose fewer vertices among $x_2^{\{u,v\}},\ldots, x_{n-2}^{\{u,v\}}$ for each $P^{\{u,v\}}$. The only possible saving is to remove $x_1^{\{v,w\}}$ and $x_{n-1}^{\{u,v\}}$ from $D$ and put $v$ in instead, which still gives a dominating set. But $D'=D-\{x_1^{\{v,w\}},x_{n-1}^{\{u,v\}}\}\cup\{v\}$, although a dominating set, is clearly not a secure dominating set. Therefore we have the result.
\hfill $\square$\medskip
\end{proof}
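The set $D$ used in the proof is easy to generate explicitly. The following sketch (our own code; positions are counted along a single superedge, $0$ for $u$ and $n$ for $v$) lists the positions of $D_{uv}=E_{uv}\cup F_{uv}$ for a given admissible $n$.
\begin{verbatim}
# Positions (along one superedge) of the set D_uv = E_uv union F_uv from the
# proof above; n is assumed to be of the form 7k + r with r in {-1, 1, 3, 5}.
def secure_set_positions(n):
    k, s = divmod(n + 1, 7)       # n + 1 = 7k + s with s = r + 1 in {0, 2, 4, 6}
    r = s - 1
    E = {7 * i + j for i in range(k) for j in (1, 3, 5)}
    F = {n - 1 - 2 * t for t in range((r + 1) // 2)}
    return sorted(E | F)

# n = 8 = 7*1 + 1: positions {1, 3, 5} from E_uv and {7} from F_uv,
# so |D_uv| = 4 = ceil(3*(8+1)/7) per superedge.
print(secure_set_positions(8))    # [1, 3, 5, 7]
\end{verbatim}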
\begin{figure}
\begin{center}
\psscalebox{0.7 0.7}
{
\begin{pspicture}(0,-5.1593056)(17.997116,-4.0379167)
\psdots[linecolor=black, dotsize=0.4](0.19711533,-4.2393055)
\psdots[linecolor=black, dotsize=0.4](12.197115,-4.2393055)
\psline[linecolor=black, linewidth=0.08](0.19711533,-4.2393055)(6.997115,-4.2393055)(6.997115,-4.2393055)
\psline[linecolor=black, linewidth=0.08](8.5971155,-4.2393055)(12.197115,-4.2393055)(12.197115,-4.2393055)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](10.5971155,-4.2393055)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](8.997115,-4.2393055)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](6.5971155,-4.2393055)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](4.997115,-4.2393055)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](3.3971152,-4.2393055)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](1.7971153,-4.2393055)
\psdots[linecolor=black, dotsize=0.1](7.397115,-4.2393055)
\psdots[linecolor=black, dotsize=0.1](7.7971153,-4.2393055)
\psdots[linecolor=black, dotsize=0.1](8.197115,-4.2393055)
\rput[bl](0.037115324,-5.0193057){$u$}
\rput[bl](12.057116,-5.0793056){$v$}
\rput[bl](1.2571154,-5.0793056){$x_1^{\{u,v\}}$}
\rput[bl](2.9971154,-5.0793056){$x_2^{\{u,v\}}$}
\rput[bl](4.437115,-5.1593056){$x_3^{\{u,v\}}$}
\rput[bl](6.0971155,-5.1193056){$x_4^{\{u,v\}}$}
\rput[bl](8.497115,-5.1193056){$x_{n-2}^{\{u,v\}}$}
\rput[bl](10.1371155,-5.1593056){$x_{n-1}^{\{u,v\}}$}
\psline[linecolor=black, linewidth=0.08](12.197115,-4.2393055)(14.197115,-4.2393055)(13.797115,-4.2393055)
\psline[linecolor=black, linewidth=0.08](15.797115,-4.2393055)(17.397116,-4.2393055)(17.397116,-4.2393055)
\psdots[linecolor=black, dotsize=0.1](14.5971155,-4.2393055)
\psdots[linecolor=black, dotsize=0.1](14.997115,-4.2393055)
\psdots[linecolor=black, dotsize=0.1](15.397116,-4.2393055)
\psdots[linecolor=black, dotsize=0.4](17.797115,-4.2393055)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.797115,-4.2393055)
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](16.197115,-4.2393055)
\psline[linecolor=black, linewidth=0.08](17.397116,-4.2393055)(17.797115,-4.2393055)(17.797115,-4.2393055)
\rput[bl](17.697115,-4.9993057){$w$}
\rput[bl](13.357116,-5.0993056){$x_1^{\{v,w\}}$}
\rput[bl](15.777115,-5.1193056){$x_{n-1}^{\{v,w\}}$}
\end{pspicture}
}
\end{center}
\caption{Superedge $P^{\{u,v\}}$ and $P^{\{v,w\}}$ in $G^{\frac{1}{n}}$ related to the proof of Theorem \ref{G16}} \label{edge1nodd}
\end{figure}
By the same argument as in the proof of Theorem \ref{G16}, we have the following result:
\begin{theorem}
Let $G$ be a graph and $n=7k+r$, where $k$ is a positive even number and $r \in\{-1,1,3,5\}$. Then,
$$\gamma_s(G^{\frac{1}{n}})=\gamma_s(P_{n+1})|E(G)| .$$
\end{theorem}
Now we consider the other cases and present some bounds for them:
\begin{theorem}
Let $G$ be a graph and $n=7k+r$, where $k$ is a positive number and $r \in\{0,2,4\}$. Then,
$$|V(G)|+ \gamma_s(P_{n-3})|E(G)| \leq\gamma_s(G^{\frac{1}{n}})\leq\gamma_s(P_{n+1})|E(G)| .$$
\end{theorem}
\begin{proof}
First consider each superedge $P^{\{u,v\}}$, choose a secure dominating set for it, and put all these vertices in one set. This gives a secure dominating set for $G^{\frac{1}{n}}$ of size at most $\gamma_s(P_{n+1})|E(G)|$. Now, to find a lower bound, first put every vertex of $G$ in our set $D$, disregarding their neighbours. Then every superedge leaves a path $P_{n-3}$ among whose vertices we still have to choose. Obviously we can choose $\gamma_s(P_{n-3})$ of these vertices and add them to $D$. Clearly, $D$ is a dominating set, which in some cases is not a secure dominating set. Now, by Proposition \ref{COC-pro}, we conclude that $\gamma_s(G^{\frac{1}{n}})\geq |V(G)|+ \gamma_s(P_{n-3})|E(G)|$, and therefore we have the result.
\hfill $\square$\medskip
\end{proof}
\medskip
At the beginning of this section, in Theorem \ref{G12}, we presented a sharp upper bound for the secure domination number of $G^{\frac{1}{2}}$. There are graphs $G$ for which $\gamma_s(G^{\frac{1}{2}})< \min \{ |E(G)|, |V(G)|\}$. For example, consider the path graph $P_{11}$. Then $P_{11}^{\frac{1}{2}}=P_{21}$, and by Theorem \ref{COC}, $\gamma_s(P_{11}^{\frac{1}{2}})=9$. This motivates the search for a lower bound for $\gamma_s(G^{\frac{1}{2}})$. We end this section with the following conjecture:
\begin{conjecture}\label{Conj}
For every graph $G$, $\gamma_s(G^{\frac{1}{2}})>\frac{4}{5}|V(G)|$.
\end{conjecture}
\section{Conclusions}
In this paper, we obtained the secure domination number of the $k$-subdivision of graphs in some cases and presented lower and upper bounds in the remaining ones.
Topics of interest for future research include the following:
\begin{itemize}
\item[•]
Proving Conjecture \ref{Conj} or finding a better lower bound for $\gamma_s(G^{\frac{1}{2}})$.
\item[•]
What is the exact value of $\gamma_s(G^{\frac{1}{n}})$ for $n=7k+r$, where $k$ is a positive integer value and $r \in\{0,2,4\}$?
\end{itemize}
\section{Acknowledgements}
The author would like to thank the Research Council of Norway and Department of Informatics, University of
Bergen for their support.
| {
"timestamp": "2021-10-19T02:34:49",
"yymm": "2110",
"arxiv_id": "2110.09190",
"language": "en",
"url": "https://arxiv.org/abs/2110.09190",
"abstract": "Let $G=(V,E)$ be a simple graph. A dominating set of $G$ is a subset $D\\subseteq V$ such that every vertex not in $D$ is adjacent to at least one vertex in $D$. The cardinality of a smallest dominating set of $G$, denoted by $\\gamma(G)$, is the domination number of $G$. A dominating set $D$ is called a secure dominating set of $G$, if for every $u\\in V-D$, there exists a vertex $v\\in D$ such that $uv \\in E$ and $D-\\{v\\}\\cup\\{u\\}$ is a dominating set of $G$. The cardinality of a smallest secure dominating set of $G$, denoted by $\\gamma_s(G)$, is the secure domination number of $G$. For any $k \\in \\mathbb{N}$, the $k$-subdivision of $G$ is a simple graph $G^{\\frac{1}{k}}$ which is constructed by replacing each edge of $G$ with a path of length $k$. In this paper, we study the secure domination number of $k$-subdivision of $G$.",
"subjects": "Combinatorics (math.CO)",
"title": "Secure domination number of $k$-subdivision of graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9873750484894195,
"lm_q2_score": 0.7185943925708561,
"lm_q1q2_score": 0.709522173208874
} |
https://arxiv.org/abs/1706.01367 | Symmetric cohomology of groups | We investigate the relationship between the symmetric, exterior and classical cohomologies of groups. The first two theories were introduced respectively by Staic and Zarelua. We show in particular, that there is a map from exterior cohomology to symmetric cohomology which is a split monomorphism in general and an isomorphism in many cases, but not always. We introduce two spectral sequences which help to explain the realtionship between these cohomology groups. As a sample application we obtain that symmetric and classical cohomologies are isomorphic for torsion free groups. | \section{Introduction}
Let $G$ be a group and $M$ be a $G$-module. In order to better understand 3-algebras arising in lattice field theory \cite{triangle}, Staic defined a variant of group cohomology, which he denoted by $HS^*(G,M)$ and called \emph{symmetric cohomology of groups} \cite{staic}. Some aspects of this theory were later extended by Singh \cite{singh} and Todea \cite{todea}. There is an obvious natural transformation from the symmetric cohomology to the classical Eilenberg-MacLane cohomology
$$\alpha^n:HS^n(G,M)\to H^n(G,M), \ \ n\geq 0.$$
According to \cite{staic},\cite{staic_h2}, $\alpha^n$ is an isomorphism if $n=0,1$ and is a monomorphism for $n=2$. By Corollary 2.3 in \cite{staic_h2} we know that $\alpha^2$ is an isomorphism if $G$ has no elements of order two.
Ten years prior to this, Zarelua had also defined a version of group cohomology, denoted by $H_\la^*(G,M)$ and called \emph{exterior cohomology of groups} \cite{zarelua}. It also comes together with a natural transformation
$$\beta^n:H_\la^n(G,M)\to H^n(G,M),$$
with similar properties. The exterior cohomology has the following striking property: If $G$ is a finite group of order $d$, then $H^i_\lambda(G,M)=0$ for all $i\geq d.$
The aim of this work is to obtain more information about homomorphisms $\alpha^* $ and $\beta^*$. We construct a natural transformation $\gamma^n:H_\la^n(G,M)\to HS^n(G,M)$ such that the following diagram commutes:
$$\xymatrix{H_\la^n(G,M)\ar[rr]^{\gamma^n}\ar[dr]_{\beta^n}&& HS^n(G,M)\ar[dl]^{\alpha^n}\\
&H^n(G,M).}$$
Our results in Section \ref{comp} show that the homomorphism $\gamma^n: H_\la^n(G,M)\to HS^n(G,M)$ is a split monomorphism in general, and an isomorphism in certain cases, namely if $0\leq n\leq 4,$ or $M$ has no elements of order two.
In general, $\gamma^5$ is not an isomorphism.
Our next results are related to the homomorphism $\beta^n:H_\la^n(G,M)\to H^n(G,M)$. We construct a spectral sequence for which the $\beta^n$, $n \geq 0$, are edge homomorphisms. As any first-quadrant spectral sequence, it gives a 5-term exact sequence (see for example \cite[Exercise 5.1.3]{weibel}), which has the following form:
$$0\to H_{\lambda}^2(G,M)\xto{\beta^2} H^2(G,M)\to \prod_{C_2\subset G}H^2(C_2,M)\to H^3_{\lambda}(G,M)\xto{\beta^3} H^3(G,M).$$
Here the product is taken over all subgroups of order two. The exactness at $H^2(G,M)$ is an answer to Problem 25 by Singh in \cite{problems}. At the very end of Section $4$ in \cite{staic}, Staic wondered about the injectivity of the map $\alpha^3$ under the assumption that $G$ has no elements of order $2$. An immediate consequence of our spectral sequence is that, if $G$ has no elements of order two, then one has an exact sequence:
$$0\to H_{\lambda}^3(G,M)\xto{\beta^3} H^3(G,M)\to \prod_{C_3\subset G}H^3(C_3,M)\to H^4_{\lambda}(G,M)\xto{\beta^4} $$
$$\xto{\beta^4} H^4(G,M)\to \prod_{C_3\subset G}H^4(C_3,M)\to H^5_{\lambda}(G,M)\xto{\beta^5} H^5(G,M).$$
In particular, if $G$ has no elements of order two and three, then $H^i_\lambda(G,M)=H^i(G,M)$, for $i=0,1,2,3,4$.
Among other consequences of our spectral sequence, we mention the following: if $G$ is a torsion free group, then $\beta^n:H^n_\lambda(G,M)\to H^n(G,M)$ is an isomorphism for all $n\geq 0$.
The paper is organised as follows: in Section 2 we recall the definitions of the symmetric and exterior cohomologies. In Section 3 we construct the transformation $\gamma^*$ and prove our first result, which shows that $\gamma^n$ is quite often an isomorphism, but not always. In the final section we construct the spectral sequence and prove our main result, Theorem \ref{e2}.
\section{Preliminaries}
\subsection{Classical cohomology} Let $G$ be a group and $M$ be a $G$-module. One way to define the cohomology $H^*(G,M)$ is via cochains, as $H^*(C^*(G,M))$. The group of $i$-cochains of $G$ with coefficients in $M$ is the set of functions from $G^i$ to $M$:
$$C^i(G,M)=\left\{\phi:G^i\to M\right\}.$$
The $i^{th}$ differential $\partial^i:C^i(G,M)\to C^{i+1}(G,M)$ is the map
\begin{align*}
\partial^i(\phi)(g_0,g_1,\cdots ,g_i)&=g_0\cdot \phi(g_1,\cdots ,g_i)\\
&+\sum_{j=1}^i(-1)^j\phi(g_0,\cdots ,g_{j-2},g_{j-1}g_j,g_{j+1},\cdots , g_i)\\
&+(-1)^{i+1}\phi(g_0,\cdots ,g_{i-1}).
\end{align*}
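For instance, in the lowest degrees these formulas read
$$\partial^0(\phi)(g_0)=g_0\cdot \phi-\phi,\qquad
\partial^1(\phi)(g_0,g_1)=g_0\cdot\phi(g_1)-\phi(g_0g_1)+\phi(g_0),$$
so that $0$-cocycles are the invariants $M^G$ and $1$-cocycles are the crossed homomorphisms $\phi(g_0g_1)=g_0\cdot\phi(g_1)+\phi(g_0)$; we record this only as a reminder of the classical picture.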
Given a chain complex such as this one, one can define its normalised subcomplex. In each dimension $n$, define $NC^n(G,M)$ to be the group of $n$-cochains which satisfy the normalisation condition
$$\phi(g_0,\cdots,g_{i-1},1,g_{i+1},\cdots,g_n)=0, \quad i=0,\cdots,n.$$
The canonical inclusion $\iota:NC^*(G,M)\to C^*(G,M)$ is a chain equivalence \cite{homology}.
Another way to define $H^*(G,M)$ is via projective resolutions, as $H^*(K^*(G,M))$. The standard projective resolution of $\Z$ by $G$-modules is the sequence of $G$-module homomorphisms \cite{aw}
$$\cdots \to \Z[G^{i+1}] \xrightarrow{\partial_{i-1}}\Z[G^i] \to\cdots \to\Z[G]\xrightarrow{\epsilon}\Z,$$
where
$$\partial_{i-1}(g_0,\cdots ,g_i)=\sum_{j=0}^i(-1)^j(g_0,\cdots ,g_{j-1},g_{j+1},\cdots , g_i),$$
and the mapping $\epsilon$ sends each generator $(g)$ to $1\in\Z$.
An element of $$K^i(G,M)=\hom_G(\Z[G^{i+1}],M)$$ is then a function $f:G^{i+1}\to M$ such that
$$f(sg_0,sg_1,\cdots ,sg_i)=s\cdot f(g_0,g_1,\cdots,g_i).$$
The maps
$$K^i(G,M)\xrightarrow{\psi^i} C^i(G,M)$$
defined by
$$\psi^i(f)(g_1,\cdots , g_i)=f(1,g_1,g_1g_2, \cdots ,g_1g_2\cdots g_i)$$
induce an isomorphism of cochain complexes $K^*(G,M)\to C^*(G,M)$ \cite{aw}. Moreover, one has a commutative diagram
$$\xymatrix{K^*(G,M)\ar[r]^\psi& C^*(G,M)\\ NK^*(G,M)\ar[r]_\psi\ar[u]& NC^*(G,M)\ar[u]}$$
where the horizontal maps are isomorphisms and the vertical maps are inclusions and homotopy equivalences. Here $NK^i(G,M)$ consists of such maps $f\in K^i(G,M)$ that
$$f(x_0,\cdots, x_i)=0, \ \ {\rm if} \ x_{j}=x_{j+1}, \ {\rm for} \ 0\leq j<i.$$
Thus
$$H^*(G,M)=H^*(NC^*(G,M))=H^*(C^*(G,M))=H^*(K^*(G,M))=H^*(NK^*(G,M)).$$
\subsection{Symmetric cohomology}
We now discuss a subcomplex of $C^*(G,M)$ introduced by Staic in \cite{staic} and \cite{staic_h2}. It is based on an action of $\Sigma_{n+1}$ on $C^n(G,M)$ (for every $n$) compatible with the differential. In order to define this action, it is enough to define how the transpositions $\tau_i = (i,i + 1)$,
$1 \leq i \leq n$ act. For $\phi \in C^n(G,M)$ one defines:
$$(\tau_i \phi)(g_1, g_2, g_3, \cdots , g_n) = \begin{cases} -g_1\phi(g_1^{-1}, g_1g_2, g_3,\cdots , g_n), & {\rm if} \ i=1,\\
-\phi(g_1, \cdots , g_{i-2}, g_{i-1}g_i, g_i^{-1}, g_ig_{i+1},\cdots , g_n), & {\rm if} \ 1 < i < n,\\
-\phi(g_1, g_2, g_3,\cdots , g_{n-1}g_n, g_n^{-1}), & {\rm if} \ i=n.
\end{cases}$$
Denote by $CS^n(G,M)$ the subgroup of the invariants of this action. That is, $CS^n(G,M)= C^n(G,M)^{\Sigma_{n+1}}$.
Staic proved that $CS^*(G,M)$ is a subcomplex of $C^*(G,M)$ \cite{staic}, \cite{staic_h2}.
\begin{De}The homology of this subcomplex is called the symmetric cohomology of $G$ with coefficients in $M$ and is denoted by $HS^n(G,M)$.
\end{De}
\begin{Rem} There is a natural map $\alpha^n:HS^n(G,M)\to H^n(G,M)$ induced by the inclusion $CS^*(G,M)\hookrightarrow C^*(G,M)$.
\end{Rem}
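To illustrate the invariance condition in the lowest degree: for $n=1$ it reads $\phi(g)=-g\cdot\phi(g^{-1})$ for all $g\in G$. Every $1$-cocycle satisfies this automatically, since putting $g_0=g$, $g_1=g^{-1}$ in the cocycle identity gives $g\cdot\phi(g^{-1})-\phi(1)+\phi(g)=0$, and $\phi(1)=0$. This is consistent with the fact, recalled in the introduction, that $\alpha^1$ is an isomorphism.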
\subsection{Exterior powers}
In order to define the chain complex introduced by Zarelua \cite{zarelua} we need to recall some facts about exterior powers.
\begin{De}
The exterior algebra $\Lambda^*(A)$ of an abelian group $A$ is a quotient algebra of the tensor algebra $T^*(A)$ with respect to the two-sided ideal generated by the elements of the form $a\otimes a \in T^2(A)=A\otimes A$.
\end{De}
A weaker version of this, denoted by $\tilde \Lambda^*(A)$, can be defined as the quotient algebra of the tensor algebra $T^*(A)$ with respect to the two-sided ideal generated by the elements of the form $a\otimes b +b\otimes a \in T^2(A)$.
Since $$a\t b +b\t a=(a+b)\t (a+b)-a\t a-b\t b,$$
it is clear that one has the canonical quotient maps
$$\t^{n}(A)\twoheadrightarrow \tilde \Lambda^n(A) \twoheadrightarrow \Lambda^n(A).$$
Denote by $\Delta^n(A)$ the kernel of the projection $\tilde \Lambda^n(A) \twoheadrightarrow \Lambda^n(A)$. Thus we have a short exact sequence
$$0\to \Delta^n(A)\to \tilde \Lambda^n(A) \twoheadrightarrow \Lambda^n(A)\to 0.$$
Clearly $\Lambda^1(A)=A=\tilde\Lambda ^1(A)$. Hence
\begin{equation}\label{del1}
\Delta^1(A)=0.
\end{equation}
The images of $a_1\t\cdots \t a_n\in\t^{n}A$ in $\tilde \Lambda^n(A) $ and $\Lambda^n(A)$ are denoted by
$a_1 \tilde \wedge \cdots \tilde \wedge a_n$ and $a_1 \wedge \cdots \wedge a_n$ respectively.
Recall that if $A=\Z[S]$ is a free abelian group with a set $S$ as basis, then $\t^{n}A$ is a free abelian group with basis elements $s_1\t\cdots\t s_n$, where $s_i\in S$. It is also well-known that $\Lambda^n(A)$ is a free abelian group with basis elements $s_1\wedge \cdots \wedge s_n$, where $s_1<\cdots <s_n$. Here $<$ is a total order on $S$.
In $\tilde \Lambda^n(A) $, $A=\Z[S]$, things are a bit more complicated because of the relation $2a\tilde \wedge a=0$, which is a consequence of the relation $a\tilde \wedge b+b\tilde \wedge a=0$. It implies that $\Delta^n(A)$ is an $\mathbb{F}_2$-vector space.
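The smallest example already exhibits this phenomenon (we record it here for illustration): if $S=\{s\}$, so that $\Z[S]\cong\Z$, then $\Lambda^2(\Z[S])=0$, while the degree two part of the ideal defining $\tilde\Lambda^*(\Z[S])$ is $2\Z\,(s\otimes s)$, so that
$$\tilde\Lambda^2(\Z[S])\cong\mathbb{F}_2\cong\Delta^2(\Z[S]),$$
generated by $s\tilde\wedge s$.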
The epimorphism $ \tilde \Lambda^n (\Z[S]) \to \Lambda^n(\Z[S])$ has a splitting given by $s_1\wedge \cdots \wedge s_n\mapsto s_1\tilde \wedge \cdots \tilde \wedge s_n$. Here $s_1,\cdots,s_n$ are distinct elements in $S$. Thus
\begin{equation}\label{tildasdas}
\tilde{\Lambda}^n(\Z[S])\cong \Lambda ^n(\Z[S])\oplus \Delta^n(\Z[S]).
\end{equation}
Thus the expressions $s_1\tilde \wedge \cdots \tilde \wedge s_n$, where $s_1\leq \cdots \leq s_n$, are canonical generators of $\tilde\Lambda^n(\Z[S])$. Among these elements, those with strict inequalities $s_1< \cdots < s_n$ form a basis of the free abelian summand, while the rest form a basis of the $\mathbb{F}_2$-vector space $\Delta^n(\Z[S])$.
\subsection{Exterior cohomology of groups} We now discuss a subcomplex of $K^*(G,M)$, denoted by $K^*_{\lambda}(G,M)$, introduced by Zarelua in \cite{zarelua}.
According to Lemma 3.1 in \cite{zarelua}, there is a differential $$\partial:\Lambda^{n+1}(\Z[G])\to\Lambda^n(\Z[G])$$ in the exterior algebra generated by $\Z[G]$ given by
$$\partial(g_0\wedge \cdots\wedge g_n)=\sum_{i=0}^n(-1)^{i+1}g_0\wedge \cdots\wedge \hat{g_i}\wedge\cdots\wedge g_n,$$
where, as usual, the hat \ $\hat{}$ \ denotes a missing value. The group $G$ acts on this chain complex by:$$g(g_1\wedge g_2\wedge\cdots\wedge g_n)=gg_1\wedge gg_2\wedge\cdots\wedge gg_n.$$
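In the lowest degree the differential reads (an immediate check, included for illustration)
$$\partial(g_0\wedge g_1)=g_0-g_1,$$
and the formula also makes the $G$-equivariance of $\partial$ evident: $\partial(gg_0\wedge gg_1)=gg_0-gg_1=g\cdot\partial(g_0\wedge g_1)$.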
\begin{De}
The homology groups of the cochain complex (denoted by $K^*_\la(G,M)$)
$$\hom_G(\Lambda^1\Z[G],M)\xrightarrow{\partial}\hom_G(\Lambda^2\Z[G],M)\xrightarrow{\partial}\cdots\xrightarrow{\partial} \hom_G(\Lambda^n\Z[G],M)\xrightarrow{\partial}\cdots$$
are called the exterior cohomology groups of the group $G$ with coefficients in $M$ and are denoted by $H^n_{\lambda}(G,M)$.
\end{De}
Therefore, $K^*_\la(G,M)$ is the subcomplex of $K^n (G,M)$ of all $G$-maps $f\in K^n(G,M)$ such that
$$f(g_0,\cdots,g_i,g_i,\cdots,g_n)=0, $$
and
$$f(g_0,\cdots,g_i,g_{i+1},\cdots,g_n)=- f(g_0,\cdots,g_{i+1},g_i,\cdots,g_n),$$
for all $0\leq i< n$.
\begin{Rem} There is a natural transformation $\beta^n:H_\la^n(G,M)\to H^n(G,M)$ induced by the inclusion $K_\la^*(G,M)\hookrightarrow K^*(G,M)$.
\end{Rem}
\begin{Rem}\label{26} Let $G$ be a finite group of order $d$. Since $\Z[G]$ is a free abelian group of rank $d$, we have $\Lambda^i\Z[G]=0$, for $i>d$ and $H^n_\lambda(G,M)=0$ for $n\geq d$. On the other hand, as we will see later, the groups $HS^n(C_2,M)$ are nontrivial for infinitely many $n$.
\end{Rem}
\section{Comparison of symmetric and exterior cohomologies}\label{comp}
\subsection{Construction of the map $\gamma$}
We need two more complexes: $C_\la^*(G,M)$ and $KS^*(G,M)$. They are defined as follows.
\begin{De}
Let $KS^n (G,M)$ denote the subcomplex of $K^n (G,M)$ of all $G$-maps $f\in K^n(G,M)$ such that
\begin{equation}\label{KS}f(g_0,\cdots,g_i,g_{i+1},\cdots,g_n)=-f(g_0,\cdots,g_{i+1},g_i,\cdots,g_n)\end{equation}
for all $0\leq i< n$.
\end{De}
So we have the following subcomplexes:
$$K_\la^*(G,M) \hookrightarrow KS^*(G,M) \hookrightarrow K^*(G,M).$$
\begin{De}
Let $C^n_{\lambda}(G,M)$ be the complex defined by $$C^n_\la(G,M)= CS^n(G,M) \cap NC^n(G,M).$$
Thus $\phi\in CS^n(G,M)$ belongs to $C^n_{\lambda}(G,M)$ if it vanishes whenever one of its arguments equals $1$, that is,
$$\phi(x_1,\cdots, 1,\cdots,x_n)=0.$$
\end{De}
This subcomplex has already been considered in \cite{todea}, where it was shown that if $M$ has no elements of order $2$, then $C^n_{\lambda}(G,M) = CS^n(G,M)$ for all $n$. We will later prove the same fact in a different way.
We have the following subcomplexes:
$$C_\la^*(G,M) \hookrightarrow CS^*(G,M) \hookrightarrow C^*(G,M).$$
In order to understand the relationship between all these complexes it is useful to rewrite them in terms of resolutions, which we construct in Lemma \ref{3res} below.
Since $\Z[G^i]=\Z[G]^{\t i}$, the standard projective resolution can be rewritten as
$$\cdots \to \Z[G]^{\t i}\to \Z[G]^{\t i-1}\to\cdots \to \Z[G]^{\t 2} \to \Z[G].$$
If one replaces the tensor algebra by either version of the exterior algebra, one still obtains a resolution, though in general no longer a projective one. This is the subject of the following lemma.
\begin{Le}\label{3res} One has a commutative diagram of resolutions of $\Z$:
$$\xymatrix{\cdots \ar[r]& \Z[G]^{\t i}\ar[r]\ar[d]& \Z[G]^{\t i-1}\ar[r]\ar[d]&\cdots\ar[r] & \Z[G]^{\t 2} \ar[r]\ar[d]& \Z[G]\ar[d]^{Id}\\
\cdots \ar[r]& \tilde\Lambda^i\Z[G] \ar[r]\ar[d] & \tilde\Lambda^{i-1}\Z[G]\ar[r]\ar[d]&\cdots\ar[r] & \tilde\Lambda^2 \Z[G] \ar[r]\ar[d] & \Z[G]\ar[d]^{Id}\\
\cdots \ar[r]& \Lambda^i\Z[G] \ar[r]& \Lambda^{i-1}\Z[G]\ar[r]&\cdots\ar[r] & \Lambda^2 \Z[G] \ar[r]& \Z[G]}$$
\end{Le}
One denotes these resolutions by $(T^{*+1}(\Z[G]),\partial)$, $(\tilde \Lambda^{*+1}(\Z[G]), \partial)$ and $(\Lambda^{*+1}(\Z[G]),\partial)$ respectively.
\begin{proof}
In this proof, take $\partial_{-1} = \epsilon:\Z[G]\to \Z$. We only present the proof for $\Lambda^*$, as the proof for $\tilde \Lambda^*$ is similar.
We construct a homomorphism $h:\Lambda^{i+1}\Z[G]\to\Lambda^{i+2}\Z[G]$ by the formula
$$h(g_0\wedge\cdots\wedge g_i)=1\wedge g_0\wedge\cdots\wedge g_i.$$
To show that this is a contracting homotopy, we need to check that $h\circ \partial +\partial\circ h = Id_{\Lambda^{i+1}\Z[G]}$.
Indeed, we have
\begin{align*}
\partial_i\circ h_i(g_0\wedge\cdots\wedge g_i)&=g_0\wedge\cdots\wedge g_i -\sum^i_{j=0}(-1)^j1\wedge g_0\wedge\cdots\wedge \hat{g_j}\wedge\cdots\wedge g_i\\
&=g_0\wedge\cdots\wedge g_i - h_{i-1}\circ\partial_{i-1}(g_0\wedge\cdots\wedge g_i).
\end{align*}
Hence $\partial\circ h+h\circ\partial=Id$, so $h$ is a contracting homotopy and the bottom row is a resolution of $\Z$.
\end{proof}
\begin{Le}\label{34mai} The differential $\partial: \tilde\Lambda^{n+1}(\Z[G])\to \tilde\Lambda^n(\Z[G])$ sends $\Delta^{n+1}(\Z[G])$ to $\Delta^n(\Z[G])$. Moreover, it is compatible with the decomposition (\ref{tildasdas}). Hence
$$(\tilde{\Lambda}^{*+1}(\Z[G]),\partial)\cong (\Delta^{*+1}(\Z[G]),\partial)\oplus (\Lambda^{*+1}(\Z[G]),\partial)$$
and $H_*(\Delta^{*+1}(\Z[G]),\partial)=0$.
\end{Le}
\begin{proof} By Lemma \ref{3res} the canonical projection $\tilde \Lambda^{*+1}(\Z[G]) \to \Lambda^{*+1}(\Z[G])$ is a chain map inducing an isomorphism in homology, hence $\Delta^{*+1}(\Z[G])$ is a chain subcomplex with trivial homology. To finish the proof, it suffices to note that the map $g_1\wedge \cdots \wedge g_n\mapsto g_1\tilde \wedge \cdots \tilde \wedge g_n$, $g_1,\cdots,g_n\in G$, commutes with the differentials and hence defines a splitting of chain complexes.
\end{proof}
\begin{Le}\label{315} After applying the functor $\hom_{\Z[G]}(-,M)$ to the resolutions in Lemma \ref{3res} one obtains the following diagram
$$\xymatrix{\hom_{\Z[G]}(\Lambda^*(\Z[G]),M) \ar[r] \ar[d]^{=} & \hom_{\Z[G]}(\tilde\Lambda^*(\Z[G]),M) \ar[r]\ar[d]^=& \hom_{\Z[G]}(T^*(\Z[G]),M) \ar[d]^{=}\\
K_\la^*(G,M)\ar[d]^\psi\ar[r]& KS^*(G,M)\ar[r]\ar[d]^\psi & K^*(G,M)\ar[d]^\psi\\
C_\la^*(G,M)\ar[r]& CS^*(G,M) \ar[r] & C^*(G,M)}$$
where all horizontal arrows are inclusions and vertical arrows are isomorphisms.
\end{Le}
\begin{proof} A key point is to show that restricting $\psi$ to $KS^*(G,M)$ yields an isomorphism between $KS^*(G,M)$ and $CS^*(G,M)$.
To this end, take $\phi\in CS^n(G,M)$. Then
$$f(g_0,\cdots,g_n)=(\psi^n)^{-1}(\phi)(g_0,g_1,\cdots,g_n)=g_0\cdot\phi(g_0^{-1}g_1,g_1^{-1}g_2,\cdots,g_{n-1}^{-1}g_n).$$
The equation $\phi(g_1, g_2,g_3,\cdots , g_n) = -g_1\phi(g_1^{-1}, g_1g_2, g_3,\cdots , g_n)$ translates to
\begin{align*}
f(g_0,g_1,g_2,\cdots,g_n)&= g_0\cdot\phi(g_0^{-1}g_1,g_1^{-1}g_2,\cdots,g_{n-1}^{-1}g_n)\\
&=-g_0\cdot g_0^{-1}g_1\cdot \phi((g_0^{-1}g_1)^{-1},g_0^{-1}g_1\cdot g_1^{-1}g_2,g_2^{-1}g_3,\cdots,g_{n-1}^{-1}g_n)\\
&=-g_1\cdot \phi(g_1^{-1}g_0,g_0^{-1}g_2,g_2^{-1}g_3,\cdots,g_{n-1}^{-1}g_n)\\
&= -f(g_1,g_0,g_2,\cdots,g_n).
\end{align*}
In a similar way, the relations corresponding to the other transpositions $\tau_i$ give condition (\ref{KS}) for the remaining indices $0\leq i<n$.
\end{proof}
If one passes to cohomology, one obtains the homomorphisms
$$\xymatrix{H^*_\la(G,M)\ar[r]\ar[dr]_\gamma & H^*( KS^*(G,M))\ar[d]^{\cong}\ar[r] &H^*(G,M)\\ & HS^*(G,M)\ar[ru]_\alpha &}$$
and the commutativity of the diagram in Lemma \ref{315} shows that $\beta=\alpha\gamma$.
\begin{Pro}\label{zdast} The homomorphism $\gamma^n: H_\la^n(G,M)\to HS^n(G,M)$ is a split monomorphism. Moreover, it is an isomorphism provided $M$ has no elements of order two.
\end{Pro}
\begin{proof}The first part follows from Lemma \ref{34mai}. Assume $M$ has no elements of order two. It suffices to show that $K_\la^*(G,M)=KS^*(G,M)$. Take an element $f\in KS^n(G,M)$. Then we have
$$f(x_0,\cdots,x_i,x_{i+1},\cdots, x_n)=- f(x_0,\cdots,x_{i+1},x_{i},\cdots, x_n),$$
for all $0\leq i<n$. If $x_i=x_{i+1}$, one obtains $2f(x_0,\cdots,x_i,x_{i},\cdots, x_n)=0$ and hence $f(x_0,\cdots,x_i,x_{i},\cdots, x_n)=0$. This implies that $f\in K^n_\la(G,M)$ and the proof is finished.
\end{proof}
\subsection{$\delta$-cohomology} In order to state the relationship between the exterior and symmetric cohomology we need to introduce new groups.
\begin{De} For a group $G$ and a $G$-module $M$ one defines the $\delta$-cohomology $H^*_\delta(G,M)$ by
$$H^*_\delta(G,M)=H^*(\hom_{\Z[G]}(\Delta^{*+1}(\Z[G]),M)).$$
\end{De}
Since $\Delta^n(\Z[G])$ is an $\mathbb{F}_2$-vector space, it follows that the groups $H^n_\delta(G,M)$ are also $\mathbb{F}_2$-vector spaces, $n\geq 0$. The importance of these groups comes from the fact that
\begin{equation}\label{hs=hl+hd}
HS^n(G,M)\cong H_\lambda^n(G,M)\oplus H^n_\delta(G,M)
\end{equation}
which is a trivial consequence of Lemma \ref{34mai}. It follows from Proposition \ref{zdast} that if $M$ has no elements of order two, then $H_\delta^*(G,M)=0$.
\subsection{Preliminaries on spectral sequences}\label{hhs}
To state the main result of this section, let us recall the construction of the hypercohomology spectral sequences. These spectral sequences will also play a prominent role in the next section.
Let $G$ be a group and $M$ be a left $G$-module. For any chain complex of left $G$-modules $C_*=(C_0\leftarrow C_1\leftarrow\cdots)$ one defines $\ext_{\Z[G]}^*(C_*,M)$ to be the homology of the total complex of the bicomplex $\hom_{\Z[G]}(C_*,I^*)$, where $I^*$ is an injective resolution of $M$.
There exist two spectral sequences.
Both of them abut to the group $\ext_{\Z[G]}^*(C_*,M)$. They are:
$$\mathbf{I}^{pq}_1 = \ext_{\Z[G]}^q(C_p,M)\quad \Longrightarrow \quad \ext_{\Z[G]}^{p+q}(C_*,M),$$
$$\mathbf{II}^{pq}_2 = \ext_{\Z[G]}^p(H_q(C_*),M)\quad \Longrightarrow \quad \ext_{\Z[G]}^{p+q}(C_*,M).$$
We also need the following easy lemma on spectral sequences.
\begin{Le}\label{abut0} Assume a spectral sequence
abuts to zero and $E_2^{pq}=0$ if $q<0$ or $p<k$, where $k$ is a fixed integer. Then $$E_2^{k\,0}=0=E^{k+1\,0}_2.$$
\end{Le}
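Although Lemma \ref{abut0} is stated without proof, the following minimal sketch (added here) justifies it: every differential leaving $E_r^{k\,0}$ or $E_r^{k+1\,0}$ lands in a group with negative second index and hence vanishes, while every differential arriving there starts from a group whose first index is at most $k-1<k$ and hence also vanishes. Therefore $E_2^{k\,0}=E_\infty^{k\,0}$ and $E_2^{k+1\,0}=E_\infty^{k+1\,0}$, and both are zero because the spectral sequence abuts to zero.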
\subsection{Vanishing of $\delta$-cohomology in low dimensions}
Now we can state the main result of this section:
\begin{Th} Let $G$ be a group and $M$ be a $G$-module. Then
$$H^i_\delta(G,M)=0, \quad {\rm for} \quad 0\leq i\leq 4.$$
Hence $\gamma^i:H^i_\lambda(G,M)\to HS^i(G,M)$ is an isomorphism for $i=0,1,2,3,4$.
\end{Th}
\begin{proof} In the hypercohomology spectral sequence we take
$C_*=(\Delta^{*+1}(\Z[G]),\partial).$ Since $H_*(C_*)=0$, the spectral sequence $\mathbf{II}$ gives $\ext_{\Z[G]}^*(C_*,M)=0$. Thus, the spectral sequence $\mathbf{I}$ has the form
$$E^{pq}_1=\ext_{\mathbb{Z}[G]}^q(\Delta^{p+1}(\Z[G]),M)\quad\Longrightarrow 0.$$
Since $E^{p0}_1=\hom_{\Z[G]}(\Delta^{p+1}(\Z[G]),M),$
we see that $$E_2^{*0}=H^*_\delta(G,M).$$ According to (\ref{del1}) we have $E^{pq}_1=0$ for $p< 1$. It follows from Lemma \ref{abut0} that $E^{i\,0}_2=H^i_\delta(G,M)=0$ for $i\leq 2$. Thus by the same Lemma it suffices to show that
$E^{1,q}_2=0=E^{2,q}_2$ if $q>0$.
One checks that the following diagram of $G$-modules commutes:
$$\xymatrix{\Delta^2(\Z[G])&\Delta^3(\Z[G])\ar[l]_{\partial^1}& \Delta^4(\Z[G])\ar[l]_{\partial^2}\\
\mathbb{F}_2[G] \ar[u]_{\psi_1}& \mathbb{F}_2[G\times G] \ar[l]_{\delta_1}\ar[u]_{\psi_2}& \mathbb{F}_2[G\times G] \ar[l]_{\delta_2}\ar[u]_{\psi_3}
}$$
where
$$\psi_1(g)=g\tilde\wedge g, \quad \psi_2(g,h)=g\tilde\wedge g\tilde\wedge h, \quad \psi_3(g,h)=g\tilde\wedge g\tilde\wedge g\tilde\wedge h,$$
$$\delta_1(g,h)=g, \quad \delta_2(g,h)=(g,h)-(g,g).$$
Since the set of elements $\{s\tilde \wedge s\mid s\in G\}$ (resp. $\{s\tilde\wedge s\tilde \wedge t\mid s,t\in G\}$) forms an $\mathbb{F}_2$-basis of $\Delta^2(\Z[G])$ (resp. $\Delta^3(\Z[G])$), the $G$-homomorphism $\psi_1$ (resp. $\psi_2$) is an isomorphism. In general, the $G$-homomorphism $\psi_3$ is not an isomorphism, but only a split monomorphism.
Hence the projective resolutions
$$0\to \Z[G]\xto{2}\Z[G]\to \mathbb{F}_2[G]\to 0 \quad {\rm and} \quad 0\to \Z[G\times G]\xto{2}\Z[G\times G]\to \mathbb{F}_2[G\times G]\to 0$$
can be used to compute $\ext_{\mathbb{Z}[G]}^q(\Delta^{2}(\Z[G]),M)$ and $\ext_{\mathbb{Z}[G]}^q(\Delta^{3}(\Z[G]),M)$. In both cases one obtains $$Ext_{\Z[G]}^i(\Delta^2(\Z[G]),M)=0=Ext_{\Z[G]}^i(\Delta^3(\Z[G]),M) \quad {\rm if} \quad i>1.$$
Hence $E^{1,q}_1=0=E^{2,q}_1$ if $q>1$. The first projective resolution gives
$$E^{11}_1=Ext_{\Z[G]}^1(\Delta^2(\Z[G]),M)=N,$$
where $N= M/2M.$ Since $\Z[G\times G]=\oplus _{g\in G}\Z[G]$ as a $G$-module, the second projective resolution gives
$$E^{21}_1=Ext_{\Z[G]}^1(\Delta^3(\Z[G]),M)=Maps(G,N).$$
Moreover, it also shows that the group $ Maps(G,N)$ is a direct summand of $Ext_{\Z[G]}^1(\Delta^4(\Z[G]),M)$. It follows that there is an isomorphism of chain complexes
$$\xymatrix{E^{01}_1\ar[r]^{\partial^0}&E^{11}_1\ar[r] ^{\partial^1}&E^{21}_1\ar[r]^{\partial^2}& E^{31}_1\\
0\ar[r]^{\delta^0}\ar[u]&N \ar[r] ^{\delta^1} \ar[u]_{\psi^*_1}& Maps(G,N)\ar[r]^{\delta^2}\ar[u]_{\psi^*_2}& X\oplus Maps(G,N)\ar[u]_{\psi^*_3}
}$$
for some $X$, where $(\delta^1(n))(g)=n$, $\delta^2=\begin{pmatrix}x\\ \delta'\end{pmatrix}$ for some $x$ and
$(\delta'(\tau))(g)=\tau(g)-\tau(1)$. Since $\delta^1$ is a monomorphism and $E^{01}_1=0$, it follows that $E^{1,1}_2=0$. Moreover, since $Ker(\delta^2)\subseteq Ker(\delta')=Im(\delta^1)$, we obtain that $E^{2,1}_2=0$ and the proof is finished.
\end{proof}
Now we give an example which shows that $\gamma^n$, $n\geq 5$, is not an isomorphism in general.
\subsection{The symmetric and exterior cohomologies of $C_2$} Let $G=C_2=\{1,t\}$, $t^2=1$, be the cyclic group of order two. In this subsection we compute both the symmetric and the exterior cohomology of $C_2$. The computation of the exterior cohomology is extremely easy. In fact, for $G=C_2$, the resolution $(\Lambda ^*(\Z[G]),\partial)$ has the following form:
$$\xymatrix{ \cdots \ar[r]&0\ar[r] &\Lambda^3(\Z[C_2])\ar[r]^\partial \ar[d]^\cong&\Lambda^2(\Z[C_2])\ar[r]^\partial \ar[d]^\cong & \Z[C_2]\\
&&0& \Z[C_2]/(1+t)&}$$
where $\partial =(1-t)$.
So,
$$H_\la^n(C_2,M)=\begin{cases} H^n(C_2,M), \quad {\rm if} \ n=0,1, \\ 0, \quad {\rm else.}\end{cases}$$
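Spelling this out (a routine verification, added here): applying $\hom_{\Z[C_2]}(-,M)$ to the resolution above gives the complex
$$M\xrightarrow{\ 1-t\ }\{m\in M \mid (1+t)m=0\}\to 0\to\cdots,$$
whose cohomology is $M^{C_2}=H^0(C_2,M)$ in degree $0$ and $\{m\in M \mid (1+t)m=0\}/(1-t)M=H^1(C_2,M)$ in degree $1$, in agreement with the formula above.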
For the symmetric cohomology one has the following result:
\begin{Le} For $G=C_2$ and $M=\mathbb{F}_2$ with trivial action of $G$ on $M$, one has
\begin{equation*}
HS^i(C_2,\mathbb{F}_2) = \begin{cases}
\mathbb{F}_2, & \quad {\rm if} \quad i=0 \quad {\rm or}\quad i\equiv 1 \pmod 4,\\
0, & \quad {\rm else}.
\end{cases}
\end{equation*}
\end{Le}
Thus, in general, $H_{\lambda}^*(G,M)\neq HS^*(G,M).$
\begin{proof} Consider the resolution
$$\cdots \to\tilde\Lambda^3 \Z[C_2] \xrightarrow{\partial_1}\tilde\Lambda^2\Z[C_2] \xto{\partial_0}\Z[C_2].$$
Fix $n> 0$ and in $\tilde\Lambda^n\Z[C_2]$ consider the elements
$$\alpha^n_i=\underbrace{1\tilde\w\cdots\tilde\w 1}_{n-i}\tilde\w\underbrace{t\tilde\w\cdots\tilde\w t}_{i}, \quad 0\leq i< \frac{n}{2},$$
$$\beta^n=\underbrace{1\tilde\w\cdots\tilde\w 1}_{m}\tilde\w\underbrace{t\tilde\w\cdots\tilde\w t}_{m}, \quad n=2m.$$
Then $\alpha^n_i$ and $\beta^n$ generate $\tilde\Lambda^n \Z[C_2]$ as a $C_2$-module. More accurately, $\Z[C_2]$ is a free $\Z[C_2]$-module with the generator
$\alpha^1_0$. As a $\Z[C_2]$-module,
$$\tilde\Lambda^2\Z[C_2]=\mathbb{F}_2[C_2]\bigoplus \Z[C_2]/(t+1),$$
with $\alpha^2_0$ generating $\mathbb{F}_2[C_2]$ and $\beta^2$ generating $\Z[C_2]/(t+1)$. For odd $n$, $n=2m+1\geq 3$,
$$\tilde\Lambda^n\Z[C_2]=\mathbb{F}_2[C_2]\bigoplus\cdots\bigoplus \mathbb{F}_2[C_2],$$
with $\alpha^n_0,\cdots,\alpha^n_m$ generating each of the summands. Similarly to $n=2,$ for larger $n=2m\geq 4$ we have
$$\tilde\Lambda^n\Z[C_2]=\mathbb{F}_2[C_2]\bigoplus\cdots\bigoplus \mathbb{F}_2[C_2]\bigoplus \mathbb{F}_2[C_2]/(t-1),$$
where the $\alpha^n_0,\cdots,\alpha^n_{m-1}$ generate the $\mathbb{F}_2[C_2]$ summands and $\beta^n$ generates $\mathbb{F}_2[C_2]/(t-1)$.
Beginning with $\partial_1$, the boundary maps of this resolution are given, with respect to the generators above, by the matrices
\begin{equation*}
(\partial_{4k+1})_{ij} = \begin{cases}
1, & {\rm if} \ i \ {\rm is\ odd\ and} \ j=i \ {\rm or} \ j=i+1,\\
0, & {\rm else},
\end{cases}
\end{equation*}
where $1\leq i\leq 2k+2$, $1\leq j\leq 2k+2$,
\begin{equation*}
(\partial_{4k+2})_{ij} = \begin{cases}
1, & {\rm if} \ j \ {\rm is\ even\ and} \ i=j \ {\rm or} \ i=j-1,\\
0, & {\rm else},
\end{cases}
\end{equation*}
where $1\leq i\leq 2k+2$, $1\leq j\leq 2k+3$,
\begin{equation*}
(\partial_{4k+3})_{ij} = \begin{cases}
1, & {\rm if} \ i \ {\rm is\ odd\ and} \ j=i,\\
1, & {\rm if} \ i \ {\rm is\ odd\ and} \ j=i+1 \ {\rm and} \ i<2k+2,\\
0, & {\rm else},
\end{cases}
\end{equation*}
where $1\leq i\leq 2k+3$, $1\leq j\leq 2k+3$,
\begin{equation*}
(\partial_{4k})_{ij} = \begin{cases}
1, & {\rm if} \ j \ {\rm is\ even\ and} \ i=j \ {\rm or} \ i=j-1 \ {\rm and} \ j<2k+2,\\
t-1, & {\rm if} \ j=2k+2 \ {\rm and} \ i=2k+1,\\
0, & {\rm else},
\end{cases}
\end{equation*}
where $1\leq i\leq 2k+1$, $1\leq j\leq 2k+2$.
Based on this the result easily follows.
\end{proof}
\section{Relationship between exterior and classical cohomology}
We start this section with the following easy and probably well-known fact. It will be used in the proof of Theorem \ref{e2} below.
\begin{Le}\label{divides}
Let $g\in G$ and $\omega=x_1\wedge \cdots\wedge x_n\in \Lambda^n\Z[G]$, where $x_1,\cdots,x_n$ are distinct elements in $G$. If $g\omega=\pm \omega$, then the order of $g$ divides $n$.
\end{Le}
\begin{proof} If one forgets the sign, it follows from the assumption that multiplication by $g$ permutes the $n$ elements $x_1,\cdots , x_n$, meaning that the cyclic group generated by $g$ acts on the set $\{x_1,\cdots,x_n\}$. The action is free, because it is given by multiplication in $G$. Hence all orbits have the same length, equal to the order of $g$, which therefore divides $n$.
\end{proof}
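For later use in the proof of Theorem \ref{e2} we record the following elementary sign computation (an illustration added here): if $g\in G$ has order $\ell$ and $\omega=1\wedge g\wedge\cdots\wedge g^{\ell-1}$, then
$$g\omega=g\wedge g^2\wedge\cdots\wedge g^{\ell-1}\wedge 1=(-1)^{\ell-1}\,\omega,$$
since moving the last factor to the front requires $\ell-1$ transpositions; thus $g\omega=\omega$ for odd $\ell$ and $g\omega=-\omega$ for $\ell=2$.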
Now we can state our main result.
\begin{Th}\label{e2} For any group $G$ and any $G$-module $M$, there is a first quadrant spectral sequence
$$E^{pq}_1\Longrightarrow H^{p+q}(G,M)$$
with properties
\begin{itemize}
\item[(i)] $E^{p,0}_2=H^p_\lambda(G,M)$ and the edge homomorphism $E^{p0}_2\to H^p(G,M)$ is precisely $\beta^p$, $p\geq 0$.
\item[(ii)] If $q>0$, then $E^{0q}_1=0$.
\item[(iii)] If $q>0$, $p>0$ and the equation $x^{p+1}=1$ has only trivial solution in $G$, then $E^{pq}_1=0$.
\item[(iv)] If $\ell$ is a prime number and $q>0$, then
$$E^{\ell-1 \, q}_1=\begin{cases} \ \prod_{C_\ell\subset G}H^{q+1}(C_\ell,M), \ {\rm if} \ \ell=2,\\
\ \prod_{C_\ell\subset G}H^{q}(C_\ell,M), \ {\rm if} \ \ell>2.
\end{cases}$$
Here the product is taken over all subgroups of order $\ell$ and for each such subgroup, the corresponding action of $C_\ell$ on $M$ is induced by the inclusion.
\end{itemize}
\end{Th}
\begin{Rem} If $p+1$ is not prime, then $E^{pq}_1$, $q>0$, $p>0$, can be described as a product (usually of several copies) of the group cohomology of subgroups of order $k$, where $k|p+1$, but the exact formula is too complex to state here. From this it is easy to deduce that $E^{pq}_1=E^{pq}_2$ for all $q>0$ (compare with the proof of part (i) of Corollary \ref{orisami}).
\end{Rem}
\begin{proof} In the hypercohomology spectral sequences discussed in Section \ref{hhs}, we take $C_*=(\Lambda^{*+1}(\Z[G]),\partial)$, which we denote simply by $\Lambda^{*+1}$. This gives the spectral sequences
$$\mathbf{I}^{pq}_1 = \ext_G^q(\Lambda^{p+1},M)\quad \Longrightarrow \quad \ext_G^{p+q}(\Lambda^{*+1},M)$$
$$\mathbf{II}^{pq}_2 = \ext_G^p(H_q(\Lambda^{*+1}),M)\quad \Longrightarrow \quad \ext_G^{p+q}(\Lambda^{*+1},M).$$
Let us first consider the second spectral sequence. As $\Lambda^{*+1}$ is a resolution of $\Z$, we have
\begin{equation*}
H_q(\Lambda^{*+1})= \begin{cases}
\Z, & \quad {\rm for} \quad q=0,\\
0, & \quad {\rm else}.
\end{cases}
\end{equation*}
Therefore, the second spectral sequence degenerates to the isomorphism
$$\ext_G^p(\Lambda^{*+1},M)=\ext_G^p(H_0(\Lambda^{*+1}),M)=\ext_G^p(\Z,M)=H^p(G,M).$$
Substituting this value into the first spectral sequence, we obtain the spectral sequence
$$E^{pq}_1=\ext^{q}_{\Z[G]}(\Lambda ^{p+1}(\Z[G]), M)\Longrightarrow H^{p+q}(G,M).$$
Since the differential $d_1$ on the first page of the spectral sequence is induced by the boundary map of the resolution $\Lambda^{*+1}(\Z[G])\to \Z$, it follows that for $q=0$ the cochain complex $(E^{p0}_1,d^1)$ coincides with the Zarelua complex $K^*_\la(G,M)$, and statement (i) follows.
If $p=0$, then $E^{pq}_1=Ext^q_{\Z[G]}(\Z[G],M)$ vanishes for $q>0$. Hence $E^{0q}_1=0$ for $q>0$, and the property (ii) holds.
Next, the $G$-module $\Lambda ^{p+1}(\Z[G])$ is free as an abelian group with a basis consisting of the elements $x_1\wedge \cdots\wedge x_{p+1}$, where $x_1<\cdots<x_{p+1}$. Here $\leq$ is any total order on $G$. If one ignores the sign, we see that $G$ acts on this basis. Thus $\Lambda ^{p+1}(\Z[G])$ decomposes as a direct sum of $G$-submodules corresponding to the orbits of this action. In particular, the summands corresponding to free orbits are free $G$-modules. Now, if the hypothesis of (iii) holds, all orbits are free thanks to Lemma \ref{divides}, hence $\Lambda^{p+1}(\Z[G])$ is a free $G$-module, the $Ext$-groups vanish and $E^{pq}_1=0$ for $q>0$. Thus the property (iii) is proved.
If $\ell$ is prime and $C_\ell=\{1,g,\cdots, g^{\ell-1}\}$ is a cyclic subgroup of $G$, then for the basis element $\omega=1\wedge g\wedge \cdots \wedge g^{\ell-1}$ one has $g\omega =\omega$ for odd $\ell$, and $g\omega =-\omega$ for $\ell=2$. Thus $\omega$ determines a non-free summand of $\Lambda^{\ell}(\Z[G])$. This summand is isomorphic to $\Z[G]/(g-1)$ for odd $\ell$ and to $\Z[G]/(g+1)$ for $\ell=2$. This summand has an obvious projective resolution
$$0\leftarrow \Z[G]/_{(g-1)}\leftarrow \Z[G]\xleftarrow{g-1}\Z[G]\xleftarrow{1+g+\cdots +g^{\ell-1}}\cdots $$
if $\ell$ is odd and
$$0\leftarrow \Z[G]/_{(g+1)}\leftarrow \Z[G]\xleftarrow{g+1}\Z[G]\xleftarrow{g-1}\cdots $$
if $\ell=2$. From this it follows that this summand of $\Lambda^{\ell}(\Z[G])$ contributes the factor $H^{q}(C_\ell,M)$ (resp. $H^{q+1}(C_\ell,M)$) to $Ext^{q}_{\Z[G]}(\Lambda ^{\ell}(\Z[G]), M)$ for odd $\ell$ (resp. $\ell=2$). By Lemma \ref{divides} all non-free summands of $\Lambda^{\ell}(\Z[G])$ arise in this way and hence $E^{\ell-1\,q}_1$ has the form described in
(iv).
\end{proof}
Thus the $E_1$-page of the spectral sequence is:
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,
nodes in empty cells,
nodes={minimum width=10ex,
minimum height=10ex,
outer sep=-5pt},
column sep=1ex, row sep=1ex,
text centered,anchor=center]{
q\strut & 0 \strut & \athir{q+1}{2} & \athir{q}{3} & \cdots & Ext^q(\Lambda^{p+1}\Z [G],M)& \cdots \\
\vdots& \vdots & \vdots & \vdots & \ddots & \vdots & \cdots \\
2 & 0 & \athir{3}{2} & \athir{2}{3} & \cdots & Ext^2(\Lambda^{p+1}\Z [G],M) & \cdots \\
1 & 0 & \athir{2}{2} & \athir{1}{3} & \cdots & Ext^1(\Lambda^{p+1}\Z [G],M) & \cdots \\
0 & H_\la^0(G,M) & H_\la^1(G,M) & H_\la^2(G,M) & \cdots & H_\la^p(G,M) & \cdots \\
\quad\strut & 0 & 1 & 2 & \cdots & p & \strut \\};
\draw[thick] (m-1-1.north east) -- (m-6-1.east) ;
\draw[thick] (m-6-1.north) -- (m-6-7.north east) ;
\end{tikzpicture}
As an immediate consequence of Theorem \ref{e2} one obtains the following corollary.
\begin{Co}\label{orisami}
\begin{itemize}
\item[(i)] For any group $G$ and any $G$-module $M$, the homomorphism $\beta^i:H^i_\lambda(G,M)\to H^i(G,M)$ is an isomorphism for $i=0$ and $i=1$, while $\beta^2$ and $\beta^3$ fit into an exact sequence:
$$0\to H_{\lambda}^2(G,M)\xto{\beta^2} H^2(G,M)\to \prod_{C_2\subset G}H^2(C_2,M)\to H^3_{\lambda}(G,M)\xto{\beta^3} H^3(G,M).$$
\item[(ii)] If $G$ has no elements of order two, then for any $G$-module $M$, the homomorphism $\beta^2$ is an isomorphism, while $\beta^3$, $\beta^4$ and $\beta^5$ fit into an exact sequence:
$$0\to H_{\lambda}^3(G,M)\xto{\beta^3} H^3(G,M)\to \prod_{C_3\subset G}H^3(C_3,M)\to H^4_{\lambda}(G,M)\xto{\beta^4} $$
$$\xto{\beta^4} H^4(G,M)\to \prod_{C_3\subset G}H^4(C_3,M)\to H^5_{\lambda}(G,M)\xto{\beta^5} H^5(G,M).$$
\item[(iii)] If all nontrivial elements of $G$ are of infinite order, then $\beta^i:H^i_\lambda(G,M)\to H^i(G,M)$ is an isomorphism for all $i\geq 0$.
\end{itemize}
\end{Co}
\begin{proof}
\begin{itemize}
\item[(i)] We first show that for $q>0$ the differential $E^{1q}_1\to E^{2q}_1$ vanishes. In fact, by part (iv) of Theorem \ref{e2} the group $E^{1q}_1$ is annihilated by multiplication by 2, while the group $E^{2q}_1$ is annihilated by multiplication by 3, and hence the corresponding map is zero. This fact implies that
$E^{1q}_2=E^{1q}_1$ for all $q>0$. The rest is a consequence of the five-term exact sequence of low-degree terms, which we have in any first quadrant spectral sequence.
\item[(ii)] Assume $q>0$. By part (iii) of Theorem \ref{e2} and the fact that $G$ does not contain an element of order two, we have $E^{pq}_1=0$ if $q>0$ and $p+1$ is a power of two. It follows that $E^{pq}_2=E^{pq}_1$ for $p=2$, and hence the result.
\item[(iii)] By part (iii) of Theorem \ref{e2} we have $E^{pq}_1=0$ for all $q>0$. Hence the spectral sequence degenerates and in particular, the edge homomorphism is an isomorphism.
\end{itemize}
\end{proof}
{\bf Example}. Let $\ell$ be a prime number and $G=C_\ell$ be a cyclic group of order $\ell$. Then
$$H^i_{\lambda}(C_\ell,M)=\begin{cases} H^i(C_\ell,M), \ {\rm if} \ i\leq \ell-1, \\ 0, \ {\rm if } \ {i\geq \ell}.\end{cases}$$
In fact, the case $i\geq \ell$ follows from Remark \ref{26}, while the case $i\leq \ell-1$ follows from parts (i) and (iii) of Theorem \ref{e2}.
\section*{Acknowledgements}
The paper was written during the author's postdoctoral fellowship at the University of Southampton. The author would like to thank the staff of the School of Mathematics, especially Prof. J. Brodzki, for providing excellent working conditions.
| {
"timestamp": "2017-06-15T02:07:21",
"yymm": "1706",
"arxiv_id": "1706.01367",
"language": "en",
"url": "https://arxiv.org/abs/1706.01367",
"abstract": "We investigate the relationship between the symmetric, exterior and classical cohomologies of groups. The first two theories were introduced respectively by Staic and Zarelua. We show in particular, that there is a map from exterior cohomology to symmetric cohomology which is a split monomorphism in general and an isomorphism in many cases, but not always. We introduce two spectral sequences which help to explain the realtionship between these cohomology groups. As a sample application we obtain that symmetric and classical cohomologies are isomorphic for torsion free groups.",
"subjects": "Group Theory (math.GR)",
"title": "Symmetric cohomology of groups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9873750477464142,
"lm_q2_score": 0.7185943925708562,
"lm_q1q2_score": 0.7095221726749547
} |
https://arxiv.org/abs/1710.10065 | Further results on the $(b, c)$-inverse, the outer inverse $A^{(2)}_{T, S}$ and the Moore-Penrose inverse in the Banach context | In this article properties of the $(b, c)$-inverse, the inverse along an element, the outer inverse with prescribed range and null space $A^{(2)}_{T, S}$ and the Moore-Penrose inverse will be studied in the contexts of Banach space operators, Banach algebras and $C^*$-algebras. The main properties to be considered are the continuity, the differentiability and the openness of the sets of all invertible elements defined by all the aforementioned outer inverses but the Moore-Penrose inverse. The relationship between the $(b, c)$-inverse and the outer inverse $A^{(2)}_{T, S}$ will also be characterized. | \section{Introduction}
Recently two outer inverses have been introduced: the $(b, c)$-inverse and the inverse along an element, see \cite{D} and \cite{mary}, respectively.
These two inverses are related; in fact, the latter is a particular case of the former. It is worth noticing one of the main properties of these inverses, namely, they encompass
several well known outer inverses such as the Drazin inverse, the group inverse and the Moore-Penrose inverse.
Furthermore, several authors have studied these notions, see for the $(b, c)$-inverse
\cite{D,D2,CCW,KC,WCGC,KWC,b2,r} and for the inverse along an element \cite{mary, M2, marybis, mary2, bb1, bb2, ZCLG, ZPCZ}.
In particular, in \cite{b2} and \cite{bb2} several properties of the $(b, c)$-inverse and the inverse along an element were studied in the Banach context, respectively.
On the other hand, one of the most well known outer inverses is the outer inverse with prescribed range and null space, i.e., the outer inverse \OIP.
This inverse has been studied in the frames of matrices, Hilbert space operators and Banach space operators. To learn about the \OIP outer inverse in Banach spaces, see for example \cite{YW, DX, LYZW}.
The main objective of this article is to deepen the knowledge of the three aforementioned outer inverses in the contexts of Banach algebras, $C^*$-algebras, Banach space operators
and Hilbert space operators. However, as an application of the main results, properties of the Moore-Penrose inverse will be also presented.
The article is organized as follows. In section 3, after having recalled some preliminary definitions and facts in section 2, the relationship between the $(b, c)$-inverse (in particular the inverse along an element)
and the outer inverse \OIP will be studied. In section 4 both the set of all $(b, c)$-invertible elements (in particular the set of all invertible elements along a fixed element)
and the set of all operators for which the outer inverse \OIP exists will be proved to be open. The continuity of the $(b, c)$-inverse of Banach space operators and of Banach algebra and $C^*$-algebra elements will be characterized in
section 5; two main notions will be used to accomplish this aim: the gap between two subspaces and the Moore-Penrose inverse. The differentiability of the $(b, c)$-inverse in Banach algebras and $C^*$-algebras will be studied in section 6; the Moore-Penrose inverse will also be applied in this section. Finally, the continuity and differentiability of the outer inverse \OIP will be characterized in section 7 using again the gap between subspaces and the Moore-Penrose inverse. In addition, in sections 5 and 6, as an application of the main results of these sections,
the continuity and the differentiability of the Moore-Penrose inverse for Banach algebra elements and Banach space operators will be studied, respectively.
\section{Preliminary definitions}
\noindent From now on, $\A$ will denote a unitary Banach algebra with unit $\uno$ while $\A^{-1}$ and $\A^\bullet$ will stand for the set
of invertible elements and the set of idempotents of $\A$, respectively. A particular case is $\L (\X)$, the Banach algebra of all
linear and bounded maps defined on and with values in the Banach space $\X$. However, in the present work it will be necessary to consider the Banach space
of all operators defined on the Banach space $\X$ with values in the Banach space $\Y$, which will be denoted by $\L(\X, \Y)$. Note that if $T\in\L (\X, \Y)$, then $\N(T)\subseteq\X$ and $\R(T)\subseteq \Y$ will stand for
the null space and the range of the operator $T$, respectively. For example, when $\A$ is a unitary Banach algebra and $x\in \A$, the operators $L_x\colon \A\to \A$ and $R_x\colon \A\to \A$
are the maps defined as follows: given $z\in \A$, $L_x(z)= xz$ and $R_x (z)= zx$. Observe that, since $\A$ is unitary, $\parallel L_x\parallel=\parallel x\parallel=\parallel R_x\parallel$. Moreover, the following notation will be used:
\begin{align*}
&x^{-1}(0)=\N(L_x),& &x\A=\R(L_x),& &x_{-1}(0)=\N(R_x),& &\A x=\R(R_x).&
\end{align*}
Note that when no confusion is possible, the identity operator defined on the
Banach space $\X$ will be denoted by $I\in\L(\X)$; otherwise it will be denoted by $I_{\X}$. In addition, given a Hilbert space $\H$ and a closed subspace $\M\subseteq \H$, $P_{\M}^\perp\in\L(\H)^\bullet$ will stand for the orthogonal projector with range $\M$. \par
An element $a \in \A$ will be said to be {\it regular}, if there exists $x \in \A$ such that $a=axa$.
The element $x$, which is not uniquely determined by $a$, will be said to be a {\it generalized inverse}
or an {\it inner inverse} of $a$. In addition, $\hat{\A}$ will stand for the set of all regular elements of $\A$ and given $a\in\hat{\A}$, $a\{1\}$ will denote the set of all
generalized inverses of $a$. On the other hand,
if $y \in \A$ satisfies $yay=y$, then $y$ will be said to be an {\it outer inverse} of $a$. Moreover, an element $z$ will be said to be a \it normalized
generalized inverse of $a$, \rm if $z$ is both an inner and an outer inverse of $a$. Recall that if $b$ is an inner inverse of $a$, then
$bab$ is a normalized generalized inverse of $a$.
Now the definition of one of the key notions of this article will be recalled. Note that this notion was originally introduced in the context of semigroups; however,
since the framework of this article consists of Banach algebras and Banach space operators, the notion under consideration, as well as all the objects considered in this work, will be introduced and studied
in the Banach context.\par
\begin{df}[{\hspace{-1pt}\cite[Definition 1.3]{D}}]\label{def1}
Let $\A$ be a unitary Banach algebra and consider $b, c \in \A$. The element $a\in\A$
will be said to be $(b,c)$-invertible, if there exists $y \in \A$ such that the following
equations hold:
\begin{enumerate}[{\rm (i)}]
\item$y\in(b\A y)\cap (y\A c)$,
\item $b=yab$, $c=cay$.
\end{enumerate}
\end{df}
\indent In the same conditions of Definition \ref{def1}, if such an inverse exists, then it is unique (\hspace{-1pt}\cite[Theorem 2.1 (i)]{D}). Thus in what follows,
if the element $y$ in Definition \ref{def1} exists, then it will be denoted by $a^{-(b,\hbox{ }c)}$. In addition,
$a^{-(b,\hbox{ }c)}$ is an outer inverse of $a$ (\hspace{-1pt}\cite[Theorem 2.1 (ii)]{D}) and $b$ and $c$ are regular (\hspace{-1pt}\cite[Remark 2.2 (iii)]{b2} or \cite[Proposition 3.3]{WCGC}).
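\indent As a simple illustration (a remark added here): taking $b=c=\uno$ in Definition \ref{def1}, condition (i) is automatically satisfied, while condition (ii) reads $\uno=ya$ and $\uno=ay$. Hence $a$ is $(\uno, \uno)$-invertible if and only if $a\in\A^{-1}$, and in this case $a^{-(\uno,\hbox{ }\uno)}=a^{-1}$.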
\indent A particular case of the $(b,c )$-inverse is the Bott-Duffin $(p, q)$-inverse.
\begin{df}[{\hspace{-1pt}\cite[Definition 3.2]{D}}]\label{def15}Let $\A$ be a unitary Banach algebra and consider $p$, $q\in\A^\bullet$. The element
$a\in\A$ will be said to be Bott-Duffin $(p, q)$-invertible, if there exists $y\in \A$ such that
\begin{enumerate}[{\rm (i)}]
\item $y=py=yq$,
\item $yap=p$ and $qay=q$.
\end{enumerate}
\end{df}
\indent Clearly, given $p$, $q\in\A^\bullet$, the Bott-Duffin $(p, q)$-inverse is nothing but the $(b, c)$-inverse
when $b$ and $c$ are idempotents. In addition, since there exists at most one $(b, c)$-inverse,
the Bott-Duffin $(p, q)$-inverse is unique, if it exists. According to what has been said, if $a\in\A$ is
Bott-Duffin $(p,q)$-invertible, then the element $y$ in Definition \ref{def15} will be denoted by
$a^{-(p, \hbox{ }q)}$. To learn more on the outer inverses recalled in Definition \ref{def1} and Definition \ref{def15}, see \cite{D, D2, CCW, KC, WCGC, KWC, b2, r}.
Next the definition of the inverse along an element will be recalled. This is another particular case of the $(b, c)$-inverse.
\begin{df}[{\hspace{-1pt}\cite[Definition 4]{mary}}] \label{def2}Consider $a$, $d\in\A$. The element $a$
is invertible along $d$, if there exists $b\in\A$ such that
$b$ is an outer inverse of $a$, $b\A=d\A$ and $\A b=\A d$.\end{df}
\indent Recall that, in the same conditions of Definition \ref{def2}, according to \cite[Theorem 6]{mary}, if such $b\in\A$ exists, then it is unique.
Therefore, the element $b$ satisfying Definition \ref{def2} will be said to be the \it inverse of $a$ along $d$. \rm
In this case, the inverse under consideration will be denoted by $a^{-d}$. Moreover, according to \cite[Proposition 6.1]{D}, the inverse along an element
is a particular case of the $(b, c)$-inverse. In fact, the element $a$ is invertible along $d$ if and only if it is $(d, d)$-invertible.
Furthermore, in this case, $a^{-d}=a^{-(d, d)}$. To learn more on this outer inverse, see \cite{mary, M2, marybis, mary2, bb1, bb2, ZCLG, ZPCZ}.
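\indent For instance (an illustration added here), suppose that $a$ has a group inverse $a^\#$, that is, $aa^\#a=a$, $a^\#aa^\#=a^\#$ and $aa^\#=a^\#a$. Then $a^\#=a(a^\#)^2\in a\A$, $a=a^\#a^2\in a^\#\A$, $a^\#=(a^\#)^2a\in \A a$ and $a=a^2a^\#\in \A a^\#$, so that $a^\#\A=a\A$ and $\A a^\#=\A a$. Since $a^\#$ is an outer inverse of $a$, Definition \ref{def2} shows that $a$ is invertible along $a$ and $a^{-a}=a^\#$.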
\indent One of the most studied generalized inverses is the outer inverse \OIP, i.e., the outer inverse of the operator $A\in \L(\X, \Y)$ with
prescribed range $\T\subseteq \X$ and null space $\S\subseteq \Y$, where $\X$ and $\Y$ are Banach spaces.
This generalized inverse was studied for matrices and for operators defined on Hilbert and on Banach spaces. Recall that this inverse
is unique, when it exists (\hspace{-1pt}\cite[Lemma 1]{LYZW}).\par
\begin{df}\label{def30}Let $\X$ and $\Y$ be Banach spaces, $A\in\L (\X, \Y)$ and $\T$ and $\S$ two closed subspaces of $\X$ and $\Y$, respectively.
If there exists a {\rm(}necessarily unique{\rm)} operator $B\in\L (\Y, \X)$ such that
$B$ is an outer inverse of $A$, $\R(B)=\T$ and $\N(B)=\S$, then $B$ will be said to be the \OIP outer inverse of $A$.
\end{df}
\indent According to \cite[Lemma 1]{LYZW}, a necessary and sufficient condition for the existence of \OIP is that
$\T$ and $\S$ are complemented subspaces of $\X$ and $\Y$, respectively, $A\mid_\T^{A(\T )}\colon \T\to A(\T)$ is invertible and $A(\T)\oplus \S=\Y$. In particular, using this latter decomposition,
\OIP is the following operator:
\begin{align*}
&A^{(2)}_{\T,\S}\mid_\S=0,& &(A\mid_\T^{A(\T)})^{-1}=A^{(2)}_{\T,\hbox{ }\S}\mid^\T_{A(\T)}\colon A(\T)\to \T.&\\
\end{align*}
\noindent To learn more properties of the \OIP outer inverse in Banach spaces, see \cite{YW, DX, LYZW}.
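\indent A minimal finite dimensional illustration (added here): take $\X=\Y=\mathbb{C}^2$, $A=I$, $\T={\rm span}\{e_1\}$ and $\S={\rm span}\{e_2\}$. Then $A(\T)=\T$, $A\mid_\T^{A(\T)}$ is the identity of $\T$ and $A(\T)\oplus\S=\mathbb{C}^2$, so that $A^{(2)}_{\T, \S}$ exists and equals the projection onto $\T$ along $\S$, that is, the matrix
$$A^{(2)}_{\T, \S}=\begin{pmatrix}1&0\\0&0\end{pmatrix},$$
which is indeed an outer inverse of $A$ with range $\T$ and null space $\S$.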
\indent To characterize the continuity of the outer inverses considered in this article, it is necessary to recall the definition of the Moore-Penrose inverse.
Let $\A$ be a $C^*$-algebra. An element $a\in\A$ will be said to be \it Moore-Penrose invertible, \rm if there exists $b\in \A$ such that
the following equations hold:
\begin{align*}
&a=aba,& &b=bab,& &(ab)^*=ab,& &(ba)^*=ba.&\\
\end{align*}
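For instance (an example added for illustration), in the $C^*$-algebra of $2\times 2$ complex matrices the elements
$$a=\begin{pmatrix}1&1\\0&0\end{pmatrix}, \qquad b=\frac{1}{2}\begin{pmatrix}1&0\\1&0\end{pmatrix}$$
satisfy the four equations above: $aba=a$, $bab=b$, and both $ab=\begin{pmatrix}1&0\\0&0\end{pmatrix}$ and $ba=\frac{1}{2}\begin{pmatrix}1&1\\1&1\end{pmatrix}$ are self-adjoint.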
To give the notion of a Moore-Penrose invertible Banach algebra element, the definition of a hermitian element needs to be recalled first.
Given a Banach algebra $\A$, an element $z\in\A$ is said to be \it hermitian, \rm if $\parallel \exp(itz)\parallel =1$, for all $ t\in\mathbb{R}$
(\hspace{-1pt}see \cite{Vv}, \cite[Chapter 4]{Dw} and \cite[Chapter I, Section 10]{BD}). When $\A$ is a $C^*$-algebra, $a\in\A$ is hermitian
if and only if it is self-adjoint (\hspace{-1pt}\cite[Proposition 20, Chapter I, Section 12]{BD}). Now Moore-Penrose Banach algebra elements can be defined.
Given a Banach algebra $\A$, an element $a\in\A$ will be said to be \it Moore-Penrose invertible, \rm if
there exists $b\in\A$ such that $b$ is a normalized generalized inverse of $a$ and the elements $ab$ and $ba$ are hermitian.
Since the Moore-Penrose inverse is unique, when it exists (see \cite[Lemma 2.1]{V3}), the element $b$ will be denoted by $a^\dag$. Moreover, $\A^\dag$ will stand for the set of all
Moore-Penrose invertible elements of $\A$. Note that $\A^\dag\subseteq \hat{\A}$. Furthermore, when $\A$ is a $C^*$-algebra, $\A^\dag=\hat{\A}$
(\hspace{-1pt}\cite[Theorem 6]{HM1}); however, when the Banach algebra $\A$ is not a $C^*$-algebra, in general, this result does not hold (\hspace{-1pt}\cite[Remark 4]{B}).
To learn the definition of the Moore-Penrose inverse for matrices, see \cite{Pe}; to learn more
properties of this generalized inverse see, for $C^*$ algebras, \cite{HM1, HM2, kol}, and for Banach algebras, \cite{V2, V3, B}.
\indent To prove some of the main results of this article, the definition of the gap between two subspaces need to be recalled.
Let $\X$ be a Banach space and consider $\M$ and $\N$ two closed subspaces in $\X$. If $\M=0$, then set $\delta (\M, \N)=0$,
otherwise set
$$
\delta (\M, \N)=\hbox{\rm sup}\{\hbox{\rm dist}(x,\N)\colon x\in\M, \, \parallel x\parallel =1\},
$$
where $\hbox{\rm dist}(x,\N)=\hbox{\rm inf}\{\parallel x-y\parallel\colon y\in\N\}$. The \it gap between the subspaces $\M$ and $\N$ \rm is
$$
\gap{\M}{\N}= \hbox{\rm max}\{ \delta (\M, \N), \delta (\N, \M)\}.
$$
\noindent To learn more on this notion, see \cite{DX, K}.
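A simple illustration (added here): in the Hilbert space $\mathbb{C}^2$, if $\M$ and $\N$ are the lines spanned by the unit vectors $(1,0)$ and $(\cos\theta,\sin\theta)$ with $0\leq\theta\leq\pi/2$, then every unit vector of $\M$ lies at distance $\sin\theta$ from $\N$ and conversely, so that
$$\delta(\M,\N)=\delta(\N,\M)=\gap{\M}{\N}=\sin\theta;$$
in particular the gap vanishes if and only if the two lines coincide.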
\section{The relationship between the $(b, c)$-inverse and \OIP}
\noindent The main objective of this section is to prove that, given a Banach space $\X$, the $(B, C)$-inverse in $\L(\X)$ is a reformulation of
the outer inverse \OIP, in other words, the outer inverse introduced in \cite[Definition 1.3]{D} is an extension to semigroups of the outer inverse
with prescribed range and null space.
First an equivalent formulation of Definition \ref{def1} will be given in the context of Banach space operators. To this end, however, some
preparation is needed. In the following remark a particular operator will be introduced.
\begin{rema}\label{rem3.4}\rm
Recall that given a Banach space $\X$, a necessary and sufficient condition for $F\in \L(\X)$ to be a regular operator is that $\N(F)$ and $\R(F)$ are
closed and complemented subspaces of $\X$. Suppose that $F\in \L(\X)$ satisfies this condition and let $\N$ and $\M$ be two subspaces of $\X$ such that
$$
\N(F)\oplus \N=\X=\R(F)\oplus \M.
$$
Consider the isomorphism $F_1=F\mid_{\N}^{\R(F)}\colon \N\to \R(F)$ and define the map $S_F\in\L(\X)$ as follows:
\begin{align*}
& &&S_F\mid_{\M}=0,& &S_F\mid_{\R(F)}=\iota_{\N, \X}F_1^{-1},&
\end{align*}
where $\iota_{\N, \X}\colon \N\to\X$ is the inclusion map.
The operator $S_F\in\L(\X)$ will be used in the proofs of the next results.
\end{rema}
\begin{lem}\label{lem3.3} Let $\X$ be a Banach space and consider $F\in\L(\X)$ a regular operator and $S_F\in\L(\X)$ the map defined in
Remark \ref{rem3.4}. The following statements hold.\par
\begin{enumerate}[{\rm (i)}]
\item If $X\in\L(\X)$ is such that $\R(X)=\R(F)$, then $X=FS_FX$.
\item If $X\in\L(\X)$ is such that $\N(X)=\N(F)$, then $X=XS_FF$.
\end{enumerate}
\end{lem}
\begin{proof} Note that $FS_F\mid_{\R(F)}=\iota_{\R(F), \X}$, where $\iota_{\R(F), \X}\colon \R(F)\to \X$ is the inclusion map.
If $\R(X)=\R(F)$, then $X=FS_FX$.\par
Similarly, $S_FF\mid_{\N}=\iota_{\N, \X}$, where $\N$ is the subspace of $\X$ considered in Remark \ref{rem3.4}.
If $\N(X)=\N(F)$, then since $\N(F)\oplus \N=\X$, $X=XS_FF$.
\end{proof}
The following proposition is a key step in the reformulation of Definition \ref{def1} for Banach space operators.
\begin{pro}\label{pro3.5} Let $\X$ be a Banach space and consider two regular operators $F$, $G\in\L(\X)$.
\begin{enumerate}[{\rm (i)}]
\item A necessary and sufficient condition for $F\L(\X)=G\L(\X)$ is that $\R(F)=\R(G)$.
\item $\L(\X)F=\L(\X)G$ if and only if $\N(F)=\N(G)$.
\end{enumerate}
\end{pro}
\begin{proof}If there exist $U$, $V\in\L(\X)$ such that $F=GU$ and $G=FV$, then $\R(F)=\R(G)$.
On the other hand, if $\R(F)=\R(G)$, then according to Lemma \ref{lem3.3} (i), $G=FS_FG$ and $F=GS_GF$,
which implies that $F\L(\X)=G\L(\X)$.
A similar argument, using in particular Lemma \ref{lem3.3} (ii), proves the second statement.
\end{proof}
\begin{thm}\label{thm3.1}Let $\X$ be a Banach space and consider two regular operators $B$, $C\in\L(\X)$.
The following statements are equivalent.
\begin{enumerate}[{\rm (i)}]
\item The operator $A\in\L(\X)$ is $(B, C)$-invertible.
\item There exists a bounded and linear map $X\in\L(\X)$ such that
$$
B=XAB, \hskip.3truecm C=CAX, \hskip.3truecm \R(X)=\R(B),\hskip.3truecm \N(X)=\N(C).
$$
\end{enumerate}
Moreover, in this case $X=A^{-(B, C)}$.
\end{thm}
\begin{proof}
According to Definition \ref{def1}, if the $(B, C)$-inverse of $A$ exists, then there are $X$, $T_1$ and $T_2\in\L(\X)$ such that
$$
B=XAB, \hskip.3truecm C=CAX, \hskip.3truecm X= BT_1, \hskip.3truecm X=T_2C.
$$
In particular, $\R(B)=\R(X)$ and $\N(X)=\N(C)$.\par
\indent On the other hand, suppose that statement (ii) holds. Since $B$ (respectively $C$) is regular and $\R(X)=\R(B)$ (respectively $\N(X)=\N(C)$), according to Lemma \ref{lem3.3},
$X=BS_BX$ (respectively $X=XS_CC$), where $S_B$ (respectively $S_C$) is the operator considered in Remark \ref{rem3.4}. Therefore, $X\in B\L(\X)X\cap X\L(\X)C$. Since the $(B, C)$-inverse is
unique, when it exists, $X=A^{-(B, C)}$.
\end{proof}
Next the main result of this section will be proved.
\begin{thm}\label{thm3.2} Let $\X$ be a Banach space and consider two regular operators $B$, $C\in\L(\X)$.
The following statements are equivalent.
\begin{enumerate}[{\rm (i)}]
\item The operator $A\in\L(\X)$ is $(B, C)$-invertible.
\item There exists $X\in \L(\X)$ such that $X=XAX$, $\R(X)=\R(B)$ and $\N(X)=\N(C)$.
\item The outer inverse $A^{(2)}_{\R(B), \N(C)}$ exists.
\item $A\mid_{\R(B)}^{\R(AB)}\colon \R(B)\to\R(AB)$ is invertible and $\R(AB)\oplus \N(C)=\X$.
\end{enumerate}
\noindent Furthermore, in this case $A^{-(B, C)}=X=A^{(2)}_{\R(B),\, \N(C)}$.
\end{thm}
\begin{proof}According to \cite[Proposition 6.1]{D}, statement (i) is equivalent to the fact that there exists an operator $X\in\L(\X)$
such that $X$ is an outer inverse of $A$, $X\L(\X)=B\L(\X)$ and $\L(\X)X=\L(\X)C$. However, according to the proof of Theorem \ref{thm3.1},
these conditions are equivalent to statement (ii). In addition, since the $(B,C)$-inverse is unique, when it exists, $X=A^{-(B, C)}$.
Statements (ii) and (iii) are equivalent; moreover $X=A^{(2)}_{\R(B),\, \N(C)}$ (\hspace{-1pt}\cite[Lemma 1]{LYZW}).
To prove the equivalence between statements (iii) and (iv), apply \cite[Lemma 1]{LYZW} and recall that, since $B$ and $C$ are regular, $\R(B)$ and $\N(C)$ are
closed and complemented subspaces of $\X$.
\end{proof}
Note that Theorem \ref{thm3.2} was proved for square matrices in \cite[Theorem 1.5]{r}. The following results will be derived from what has been proved.
\begin{cor}\label{cor3.6} Let $\X$ be a Banach space and consider $A$, $B$, $C\in\L(\X)$ such that $B$ and $C$ are regular. The following statements are equivalent.
\begin{enumerate}[{\rm (i)}]
\item $A^{-(B, C)}$ exists.
\item $A^{-(F, G)}$ exists for any $F$, $G\in\L(\X)$, $F$,$G$ regular, such that $\R(F)=\R(B)$ and $\N(G)=\N(C)$.
\item The Bott-Duffin inverse $A^{-(P, Q)}$ exists for any $P$, $Q\in \L(\X)^\bullet$ such that $\R(P)=\R(B)$ and $\N(Q)=\N(C)$.
\end{enumerate}
\noindent Furthermore, in this case, $A^{-(B, C)}=A^{-(F, G)}=A^{-(P, Q)}$.
\end{cor}
\begin{proof} Apply Theorem \ref{thm3.2}.
\end{proof}
Note that Corollary \ref{cor3.6} can be rephrased using the outer inverse $A^{(2)}_{\T, \S}$.
\begin{rema}\label{rema3.8}\rm Let $\X$ be a Banach space and consider $\S$ and $\T$ two closed and complemented subspaces of $\X$. Let $A\in \L(\X)$.
The following statements are equivalent.\par
\begin{enumerate}[{\rm (i)}]
\item The outer inverse $A^{(2)}_{\T, \S}$ exists.
\item $A^{-(B, C)}$ exists for any $B$, $C\in\L(\X)$, $B$, $C$ regular, such that $\R(B)=\T$ and $\N(C)=\S$.
\item The Bott-Duffin inverse $A^{-(P, Q)}$ exists for any $P$, $Q\in \L(\X)^\bullet$ such that $\R(P)=\T$ and $\N(Q)=\S$.
\end{enumerate}
\noindent Moreover, in this case $A^{(2)}_{\T, \S}=A^{-(B, C)}=A^{-(P, Q)}$.\par
\noindent The proof of the equivalence among statements (i)-(iii) and the last identity can be derived from Theorem \ref{thm3.2} and Corollary \ref{cor3.6}.
\end{rema}
It is worth noticing, as it has been mentioned in the first paragraph of this section, that Theorem \ref{thm3.2} and Remark \ref{rema3.8} show that in $\L(\X)$, $\X$ a Banach space, the $(B, C)$-inverse is a reformulation of \OIP,
so that \cite[Definition 1.3]{D} is an extension of the latter outer inverse to semigroups.
In fact, given $A\in\L(\X)$, \OIP and the $(B, C)$-inverse of $A$ refer to the same object (under the conditions of Theorem \ref{thm3.2} and Remark \ref{rema3.8}), but it could be said that the former outer inverse is defined
from a spatial point of view while the latter is defined from an algebraic point of view. Actually, in the first case subspaces of an underlying space (a Banach space $\X$ or $\mathbb{C}^n$,
$n\in\mathbb{N}$) are used in the definition, while in the second only elements of a semigroup can be considered; precisely, in $\L(\X)$ it is possible to associate to any element of this algebra two specific subspaces, namely the range and the null space,
which leads to the aforementioned results.
Next the inverse along an operator will be considered.
\begin{cor}\label{cor3.9} Let $\X$ be a Banach space and consider $A$, $D\in\L(\X)$ such that $D$ is regular. The following statements are equivalent.
\begin{enumerate}[{\rm (i)}]
\item $A^{-D}$ exists.
\item There exists $X\in \L(\X)$ such that $XAD=D=DAX$, $\R(X)=\R(D)$ and $\N(X)=\N(D)$.
\item There exists $Y\in\L(\X)$ such that $Y$ is an outer inverse of $A$, $\R(Y)=\R(D)$ and $\N(Y)=\N(D)$.
\item The outer inverse $A^{(2)}_{\R(D), \N(D)}$ exists.
\item $A\mid_{\R(D)}^{\R(AD)}\colon \R(D)\to\R(AD)$ is invertible and $\R(AD)\oplus \N(D)=\X$.
\item $A^{-F}$ exists for all $F\in\L(\X)$ such that $F$ is regular, $\R(F)=\R(D)$ and $\N(F)=\N(D)$.
\end{enumerate}
Moreover, in this case, $A^{-D}=X=Y=A^{(2)}_{\R(D), \N(D)}=A^{-F}$.
\end{cor}
\begin{proof} Recall that according to \cite[Proposition 6.1]{D}, $A^{-D}=A^{-(D, D)}$, when one of these outer inverses exists.
Apply now Theorem \ref{thm3.1}, Theorem \ref{thm3.2} and Corollary \ref{cor3.6}.
\end{proof}
In the following proposition the Banach algebra case will be considered.
\begin{pro}\label{pro3.10} Let $\A$ be a unitary Banach algebra and consider $a$, $b$, $c\in\A$ such that $b$ and $c$ are regular.
The following statements hold.
\begin{enumerate}[{\rm (i)}]
\item If $a^{-(b, c)}$ exists, then $L_{a^{-(b, c)}}=L_a^{-(L_b, L_c)}=(L_a)^{(2)}_{b\A, c^{-1}(0)}\in\L(\A)$.
\item Suppose that $L_a^{-(L_b, L_c)}=(L_a)^{(2)}_{b\A, c^{-1}(0)}\in \L(\A)$ exists and there is $z\in\A$ such that $(L_a)^{(2)}_{b\A, c^{-1}(0)}=L_z$.
Then, $a^{-(b, c)}$ exists and $a^{-(b, c)}=z$.
\item If $a^{-(b, c)}$ exists, then $R_{a^{-(b, c)}}=R_a^{-(R_c, R_b)}=(R_a)^{(2)}_{\A c, b_{-1}(0)}\in\L(\A)$.
\item Suppose that $R_a^{-(R_c, R_b)}=(R_a)^{(2)}_{\A c, b_{-1}(0)}\in \L(\A)$ exists and there is $w\in\A$ such that $(R_a)^{(2)}_{\A c, b_{-1}(0)}=R_w$.
Then, $a^{-(b, c)}$ exists and $a^{-(b, c)}=w$.
\end{enumerate}
\end{pro}
\begin{proof} Recall that according to \cite[Proposition 6.1]{D}, necessary and sufficient for $a^{-(b, c)}$ to exist is that $a$ has an outer inverse, say $y$
($y=a^{-(b, c)}$), such that $y\A=b\A$ and $\A y=\A c$. Thus, if $a^{-(b, c)}$ exists, then $L_{a^{-(b, c)}}\in \L(\A)$ is an outer inverse of $L_a\in \L(\A)$ such
that
\begin{align*}
\R(L_{a^{-(b, c)}})&=a^{-(b, c)}\A=b\A=\R(L_b),\\
\N(L_{a^{-(b, c)}})&=(a^{-(b, c)})^{-1}(0).
\end{align*}
\noindent However, note that $(a^{-(b, c)})^{-1}(0)=c^{-1}(0)=\N(L_c)$. In fact, since $\A a^{-(b, c)}=\A c$, there exist $u_1$, $v_1\in\A$ such that $a^{-(b, c)}=u_1c$ and $c=v_1a^{-(b, c)}$,
which implies that $(a^{-(b, c)})^{-1}(0)=c^{-1}(0)$. Therefore, according to \cite[Lemma 1]{LYZW} and Theorem \ref{thm3.2},
$$
L_{a^{-(b, c)}}=(L_a)^{(2)}_{b\A, c^{-1}(0)}=L_a^{-(L_b, L_c)}.
$$
Now suppose that statement (ii) holds. Note that according to Theorem \ref{thm3.2}, $L_a^{-(L_b, L_c)}\in \L(\A)$ exists if and only if
$(L_a)^{(2)}_{b\A, c^{-1}(0)}\in \L(\A)$ exists; moreover, in this case, both maps coincide.
Since $(L_a)^{(2)}_{b\A, c^{-1}(0)}\in \L(\A)$ is an outer inverse of $L_a\in\L(\A)$, $z$ is an outer inverse of $a$. In addition,
\begin{align*}
z\A &=\R(L_z)=\R((L_a)^{(2)}_{b\A, c^{-1}(0)})=b\A, \\
z^{-1}(0)&=\N(L_z)=\N((L_a)^{(2)}_{b\A, c^{-1}(0)})=c^{-1}(0).\\
\end{align*}
However, since $c$ and $z$ are regular, according to \cite[Proposition 3.1 (b) (iii)]{bb1}, $\A z=\A c$.
Consequently, according to \cite[Proposition 6.1]{D}, $a^{-(b, c)}$ exists and $a^{-(b, c)}=z$.
As in the proof of statement (i), if $a^{-(b, c)}$ exists, then $R_{a^{-(b, c)}}$ is an outer inverse of $R_a\in \L(\A)$ such that
\begin{align*}
\R(R_{a^{-(b, c)}})&=\A a^{-(b, c)}=\A c=\R(R_c),\\
\N(R_{a^{-(b, c)}})&=(a^{-(b, c)})_{-1}(0).\\
\end{align*}
\noindent However, since $b\A=a^{-(b, c)}\A$, there exist $u_2$, $v_2\in\A$ such that $b=a^{-(b, c)}u_2$ and $a^{-(b, c)}=bv_2$, which implies that
$(a^{-(b, c)})_{-1}(0)=b_{-1}(0)=\N (R_b)$. Therefore, according to \cite[Lemma 1]{LYZW} and Theorem \ref{thm3.2}, statement (iii) holds.
If statement (iv) holds, then as in the proof of statement (ii), observe that according to Theorem \ref{thm3.2}, $R_a^{-(R_c, R_b)}\in \L(\A)$ exists if and only if
$(R_a)^{(2)}_{\A c, b_{-1}(0)}$ exists; moreover, in this case, both maps coincide. Furthermore, since $(R_a)^{(2)}_{\A c, b_{-1}(0)}\in \L(\A)$ is an outer inverse of $R_a\in\L(\A)$,
$w$ is an outer inverse of $a$. In addition,
\begin{align*}
\A w &=\R(R_w)=\R((R_a)^{(2)}_{\A c, b_{-1}(0)})=\A c, \\
w_{-1}(0)&=\N(R_w)=\N((R_a)^{(2)}_{\A c, b_{-1}(0)})=b_{-1}(0).\\
\end{align*}
\noindent However, since $b$ and $w$ are regular, according to \cite[Proposition 3.1 (b) (iv)]{bb1}, $w\A=b\A$.
Hence, according to \cite[Proposition 6.1]{D}, $a^{-(b, c)}$ exists and $a^{-(b, c)}=w$.
\end{proof}
\section{Openness}
\noindent Given a unitary Banach algebra $\A$, $\A^{-1}\subset\A$ is an open set. In this section the set of all $(b, c)$-invertible elements ($b$, $c\in\A$) will be proved to be open.
Similar results will be presented for the inverse along an element and the outer inverse \OIP.
Let $\A$ be a unitary Banach algebra and consider $b$, $c\in \A$. Let $\A^{-(b, c)}$ be the set of all $(b, c)$-invertible elements of $\A$,
i.e.,
$$
\A^{-(b, c)}=\{a\in\A\colon a^{-(b, c)}\hbox{ exists}\}.
$$
Recall that if $b$ or $c$ is not regular, then $\A^{-(b, c)}=\emptyset$ (\hspace{-1pt}\cite[Remark 2.2 (iii)]{b2}). In addition, note that if $b=0=c$,
then $\A^{-(b, c)}=\A$. In fact, in this case, given $a\in\A$, $a^{-(b, c)}=0$ satisfies Definition \ref{def1}. Moreover, if $b=0$ and $c\in\hat{\A}$ is such that $c\neq 0$ (respectively if $b\in\hat{\A}$ is such that
$b\neq 0$ and $c=0$),
then, according to Definition \ref{def1}, $\A^{-(b, c)}=\emptyset$. Actually, if $b=0$ (respectively $c=0$) and $a^{-(b, c)}$ exists, then $a^{-(b, c)}=0$, which is impossible, for $c\neq 0$
(respectively $b\neq 0$). In the following theorem the set $\A^{-(b, c)}$ will be proved to be open.
\begin{thm}\label{thm4.1}Let $\A$ be a unitary Banach algebra and consider $b$, $c\in\A$. Then $\A^{-(b, c)}$ is an open set.
Furthermore, if $b$, $c\in\hat{\A}\setminus\{0\}$, $a\in\A^{-(b, c)}$ and $e\in\A$ is such that $\parallel e\parallel< \frac{1}{\parallel a^{-(b, c)}\parallel}$,
then $a+e\in \A^{-(b, c)}$ and
$$
(a+e)^{-(b, c)}= (\uno+ a^{-(b, c)}e)^{-1}a^{-(b,c)}=a^{-(b, c)}(\uno+ea^{-(b, c)})^{-1}.
$$
\end{thm}
\begin{proof}According to what has been said in the first paragraph of this section, it is enough to consider the case $b$, $c\in\hat{\A}\setminus\{0\}$. Recall, in addition, that under this hypothesis,
$a^{-(b, c)}\neq0$. In fact, according to Definition \ref{def1}, $a^{-(b, c)}=0$ implies that $b=0=c$. Moreover, note also that since $\parallel a^{-(b,c)}e\parallel <1$ (respectively $\parallel e a^{-(b,c)}\parallel <1$),
$\uno+a^{-(b,c)}e\in \A^{-1}$ (respectively $\uno+ea^{-(b,c)}\in \A^{-1}$).
\indent Consider the operators $L_a$, $L_{a^{-(b,c)}}$, $L_e \in\L(\A)$. According to Proposition \ref{pro3.10} (i),
$L_{a^{-(b,c)}}=(L_a)^{(2)}_{b\A, c^{-1}(0)}$. Now, since $\A$ is a unitary algebra,
$$
\parallel (L_a)^{(2)}_{b\A, c^{-1}(0)}\parallel \parallel L_e\parallel= \parallel a^{-(b,c)}\parallel \parallel e\parallel <1.
$$
Thus, according to \cite[Lemma 3.4]{DX}, $(L_{a+e})^{(2)}_{b\A, c^{-1}(0)}$ exists and
$$
(L_{a+e})^{(2)}_{b\A, c^{-1}(0)}=(I+ L_{a^{-(b,c)}}L_e)^{-1}L_{a^{-(b,c)}}=L_{a^{-(b,c)}}(I+L_eL_{a^{-(b,c)}})^{-1}.
$$
However,
\begin{align*}
&(I+ L_{a^{-(b,c)}}L_e)^{-1}=L_{(\uno+a^{-(b,c)}e)^{-1}},& &(I+ L_eL_{a^{-(b,c)}})^{-1}=L_{(\uno+ea^{-(b,c)})^{-1}}.&
\end{align*}
Consequently,
$$
(\uno+a^{-(b,c)}e)^{-1}a^{-(b,c)}=a^{-(b,c)}(\uno+ea^{-(b,c)})^{-1}.
$$
\indent Set $f=(\uno+a^{-(b,c)}e)^{-1}a^{-(b,c)}=a^{-(b,c)}(\uno+ea^{-(b,c)})^{-1}$. Since $(L_{a+e})^{(2)}_{b\A, c^{-1}(0)}=L_f$,
according to Proposition \ref{pro3.10} (ii), $(a+e)^{-(b, c)}$ exists and $(a+e)^{-(b, c)}=f$.
\end{proof}
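For a concrete illustration of the perturbation formula, let $\A=\mathbb{C}^{2\times 2}$ with the operator norm, $a=\uno$ and $b=c=\left(\begin{smallmatrix}1&0\\0&0\end{smallmatrix}\right)$; a direct computation with Definition \ref{def1} shows that $a^{-(b, c)}=b$. For $e=\left(\begin{smallmatrix}t&0\\0&0\end{smallmatrix}\right)$ with $|t|<1=\frac{1}{\parallel a^{-(b, c)}\parallel}$, Theorem \ref{thm4.1} gives
$$
(a+e)^{-(b, c)}=(\uno+be)^{-1}b=\begin{pmatrix}\frac{1}{1+t}&0\\0&0\end{pmatrix}=:w,
$$
and one checks directly that $w(a+e)w=w$, $w\A=b\A$ and $\A w=\A c$.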
In the following theorem the case of the inverse along an element will be presented. To this end, given a unitary Banach algebra $\A$, let $\A^{-d}$ be the set of all elements of
$\A$ invertible along $d\in\A$, i.e.,
$$
\A^{-d}=\{a\in\A\colon a^{-d}\hbox{ exists}\}.
$$
\begin{thm}\label{thm4.2}Let $\A$ be a unitary Banach algebra and consider $d\in\A$. Then $\A^{-d}$ is an open set.
Furthermore, if $d\in\hat{\A}\setminus\{0\}$, $a\in\A^{-d}$ and $e\in\A$ is such that $\parallel e\parallel< \frac{1}{\parallel a^{-d}\parallel}$,
then $a+e\in \A^{-d}$ and
$$
(a+e)^{-d}= (\uno+ a^{-d}e)^{-1}a^{-d}=a^{-d}(\uno+ea^{-d})^{-1}.
$$
\end{thm}
\begin{proof} Note that according to \cite[Proposition 6.1]{D}, $\A^{-d}=\A^{-(d, d)}$. Now apply Theorem \ref{thm4.1}.
\end{proof}
\indent For the sake of completeness, the case of the outer inverse with prescribed range and null space will be considered. Let $\T$ and $\S$ be two closed
subspaces of the Banach spaces $\X$ and $\Y$, respectively. $\L(\X, \Y)^{(2)}_{\T, \S}$ will stand for the set of all operators defined on $\X$ with values in $\Y$ whose outer inverse with range $\T$ and null space $\S$ exists, i.e.,
$$
\L(\X, \Y)^{(2)}_{\T, \S}=\{A\in\L(\X, \Y)\colon A^{(2)}_{\T, \S} \hbox{ exists}\}.
$$
\begin{thm}\label{thm4.3} Let $\X$ and $\Y$ be two Banach spaces and consider $\T$ and $\S$ two closed subspaces of $\X$ and $\Y$, respectively. Then,
$\L(\X, \Y)^{(2)}_{\T, \S}$ is an open set.
\end{thm}
\begin{proof} According to \cite[Lemma 1]{LYZW}, if $\T$ or $\S$ is not a complemented subspace of $\X$ or $\Y$, respectively, then $\L(\X, \Y)^{(2)}_{\T, \S}=\emptyset$. On the other hand, note that
if $A\in\L(\X, \Y)$ is such that $A^{(2)}_{\T, \S}$ exists and $A^{(2)}_{\T, \S}=0$, then $\T=0$ and $\S=\Y$. In addition, in this case, $\L(\X, \Y)^{(2)}_{0, \Y}=\L(\X, \Y)$. In fact, given $A\in\L(\X, \Y)$,
$A^{(2)}_{0, \Y}=0$. To end the proof, apply \cite[Lemma 3.4]{DX}.
\end{proof}
\section{Continuity of the $(b, c)$-inverse}
Recall that the notion of the gap between subspaces was used to study the continuity of the Moore-Penrose inverse (\hspace{-1pt}\cite{V2}) and the Drazin inverse (\hspace{-1pt}\cite{kr1, V}) in the Banach context.
In this section the aforementioned notion will be used to characterize the continuity of the $(b, c)$-inverse for Banach space operators and Banach algebra elements.
Another notion that will be central to presenting further characterizations is the Moore-Penrose inverse.
First a particular case will be presented.
\begin{rema}\label{rema500}\rm \noindent (i) Let $\X$ be a Banach space and consider $A$, $B$, $C\in \L (\X)$ such that
$B$ and $C$ are regular, $A$ is $(B, C)$-invertible and $A^{-(B,C)}=0$. Let $(A_n)_{n\in\mathbb{N}}\subset \L(\X)$ be such that
$(A_n)_{n\in\mathbb{N}}$ converges to $A\in\L(\X)$, and suppose that there exist two sequences of operators
$(B_n)_{n\in\mathbb{N}}$, $(C_n)_{n\in\mathbb{N}}\subset \L(\X)$ such that
for each $n\in\mathbb{N}$, $B_n$ and $C_n$ are regular and $A_n$ is $(B_n, C_n)$-invertible.
Then, $(A_n^{-(B_n, C_n)})_{n\in\mathbb{N}}$ converges to $A^{-(B,C)} (=0)$ if and only if there exists $n_0\in\mathbb{N}$
such that for each $n\ge n_0$, $A_n^{-(B_n, C_n)}=0$.
In fact, if $(A_n^{-(B_n, C_n)})_{n\in\mathbb{N}}$ converges to 0, then $(A_n^{-(B_n, C_n)}A_n)_{n\in\mathbb{N}}$ converges to 0,
and according to \cite[Lemma 3.3]{kr1}, $(\hat{\delta}(\R(A_n^{-(B_n, C_n)}A_n), 0))_{n\in\mathbb{N}}$ converges to 0.
However, according to the definition of the gap between two subspaces (\hspace{-1pt}see \cite[Chapter 2, Section 2, Subsection 1]{K}),
if $\R(A_n^{-(B_n, C_n)}A_n)\neq 0$, then $\hat{\delta}(\R(A_n^{-(B_n, C_n)}A_n), 0)=1$. Therefore, there exists $n_0 \in\mathbb{N}$
such that for each $n\ge n_0$, $\R(A_n^{-(B_n, C_n)})=\R(A_n^{-(B_n, C_n)}A_n)=0$. As a result, $A_n^{-(B_n, C_n)}=0$ for each $n\ge n_0$. The converse implication is evident.\par
\noindent (ii) Let $\A$ be a Banach algebra and consider $a\in\A$ and $b$, $c\in\hat{\A}$ such that $a$ is $(b, c)$-invertible and $a^{-(b, c)}=0$.
Suppose that there exist three sequences $(a_n)_{n\in\mathbb{N}}\subset \A$ and $(b_n)_{n\in\mathbb{N}}$, $(c_n)_{n\in\mathbb{N}}\subset \hat{\A}$ such that
for each $n\in\mathbb{N}$, $a_n$ is $(b_n, c_n)$-invertible and $(a_n)_{n\in\mathbb{N}}$ converges to $a$. Then, statement (i) can be extended to this case, i.e., necessary and sufficient for
$(a_n^{-(b_n, c_n)})_{n\in\mathbb{N}}$ to converge to $a^{-(b, c)} (=0)$ is that there exists
$n_0\in\mathbb{N}$ such that for all $n\ge n_0$, $a_n^{-(b_n, c_n)}=0$. Actually, to prove this equivalent condition, it is enough to apply statement (i) to $L_a$, $L_a^{-(L_b, L_c)}\in\L(\A)$ and
$(L_{a_n})_{n\in\mathbb{N}}$, $(L_{a_n}^{-(L_{b_n}, L_{c_n})})_{n\in\mathbb{N}}\subset\L(\A)$, ($L_a^{-(L_b, L_c)}=L_{a^{-(b, c)}}$,
$L_{a_n}^{-(L_{b_n}, L_{c_n})}=L_{a_n^{-(b_n, c_n)}}$, see Proposition \ref{pro3.10} (i)). Note that since $\A$ is a unitary Banach algebra,
$(a_n)_{n\in\mathbb{N}}$ (respectively $(a_n^{-(b_n, c_n)})_{n\in\mathbb{N}}$) converges to $a$ (respectively to $a^{-(b, c)}$)
if and only if $(L_{a_n})_{n\in\mathbb{N}}$ (respectively $(L_{a_n^{-(b_n, c_n)}})_{n\in\mathbb{N}}$) converges to $L_a$ (respectively to $L_{a^{-(b, c)}}$).\par
\noindent (iii) Under the same conditions as in statement (ii), note that according to Definition \ref{def1}, $a^{-(b,c)}=0$ implies that $b=0=c$. Similarly, for each $n\ge n_0$, $b_n=0=c_n$, i.e.,
the fact that $(a_n^{-(b_n, c_n)})_{n\in\mathbb{N}}$ converges to 0 determines the elements $b_n$ and $c_n$ for which $a_n$ is $(b_n, c_n)$-invertible ($n\ge n_0$).
Naturally, given $a\in\A^{-(0, 0)}$, $a^{-(0, 0)}=0$. \par
\noindent (iv) It is worth noticing that if $\A$ is a unitary Banach algebra, $a\in\A$ and $b$, $c\in\hat{\A}$ are such that $a$ is $(b, c)$-invertible with $a^{-(b, c)}\neq 0$, then $b\neq 0$ and $c\neq 0$
(see Definition \ref{def1}).
\end{rema}
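Recall (\hspace{-1pt}see \cite[Chapter 2, Section 2, Subsection 1]{K}) that the gap between two closed subspaces $\M$ and $\T$ of a Banach space is $\hat{\delta}(\M, \T)=\hbox{\rm max}\{\delta(\M, \T), \delta(\T, \M)\}$, where $\delta(\M, \T)=\sup\{\hbox{\rm dist}(x, \T)\colon x\in\M,\ \parallel x\parallel=1\}$ and $\delta(0, \T)=0$. As a simple illustration, in the Hilbert space $\mathbb{C}^2$, if $\M_\theta$ denotes the subspace spanned by $(\cos\theta, \sin\theta)$, then
$$
\hat{\delta}(\M_\theta, \M_0)=|\sin\theta|, \qquad |\theta|\le\tfrac{\pi}{2},
$$
so convergence in gap of one-dimensional subspaces amounts to convergence of the corresponding angles.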
Now the continuity of the $(b, c)$-inverse will be studied using the gap between subspaces. Next follows the characterization of the continuity of $(B, C)$-invertible Banach space operators.
\begin{thm}\label{thm5.2}Let $\X$ be a Banach space and consider $A$, $B$, $C\in \L (\X)$ such that
$B$ and $C$ are regular, $A$ is $(B, C)$-invertible and $A^{-(B,C)}\neq 0$. Suppose that there exist three sequences of
operators $(A_n)_{n\in\mathbb{N}}$, $(B_n)_{n\in\mathbb{N}}$, $(C_n)_{n\in\mathbb{N}}\subset \L(\X)$ such that
for each $n\in\mathbb{N}$, $B_n$ and $C_n$ are regular and $A_n$ is $(B_n, C_n)$-invertible.
If $(A_n)_{n\in\mathbb{N}}$ converges to $A$, then the following statements are equivalent.
\begin{enumerate}[{\rm (i)}]
\item The sequence $(A_n^{-(B_n, C_n)})_{n\in\mathbb{N}}$ converges to $A^{-(B,C)}$.
\item The sequence $(A_n^{-(B_n, C_n)}A_n)_{n\in\mathbb{N}}$ \rm(\it respectively $(A_nA_n^{-(B_n, C_n)})_{n\in\mathbb{N}}$\rm) \it
converges to $A^{-(B, C)}A$ \rm(\it respectively to $AA^{-(B, C)}$\rm).\it
\item The sequence $(A_n^{-(B_n, C_n)}A_n)_{n\in\mathbb{N}}$ \rm(\it respectively $(\hat{\delta}(\N(C_n), \N(C)))_{n\in\mathbb{N}}$\rm) \it converges to $A^{-(B, C)}A$
\rm(\it respectively to 0\rm).\it
\item The sequence $(A_nA_n^{-(B_n, C_n)})_{n\in\mathbb{N}}$ \rm(\it respectively $(\hat{\delta}(\R(B_n), \R(B)))_{n\in\mathbb{N}}$\rm) \it
converges to $AA^{-(B, C)}$ \rm(\it respectively to 0\rm).\it
\item The sequences $(\hat{\delta}(\R(B_n), \R(B)))_{n\in\mathbb{N}}$ and $(\hat{\delta}(\N(C_n), \N(C)))_{n\in\mathbb{N}}$
converge to $0$.
\item The sequences $(\hat{\delta}(\R(A_n^{-(B_n, C_n)}), \R(A^{-(B, C)})))_{n\in\mathbb{N}}$ and $(\hat{\delta}(\N(A_n^{-(B_n, C_n)}), \N(A^{-(B, C)})))_{n\in\mathbb{N}}$
converge to $0$.
\end{enumerate}
\end{thm}
\begin{proof}It is evident that statement (i) implies statement (ii).\par
\indent Observe that, since $A^{-(B, C)}$ is an outer inverse of $A$ and $A_n^{-(B_n, C_n)}$ is an outer inverse of $A_n$ ($n\in\mathbb{N}$),
according to Theorem \ref{thm3.1},
\begin{align*}
&\R(A^{-(B, C)}A)=\R(A^{-(B, C)})=\R(B),& &\N(AA^{-(B, C)})=\N(A^{-(B, C)})=\N(C),&\\
&\R(A_n^{-(B_n, C_n)}A_n)=\R(A_n^{-(B_n, C_n)})=\R(B_n),& &\N(A_nA_n^{-(B_n, C_n)})=\N(A_n^{-(B_n, C_n)})=\N(C_n).&
\end{align*}
Consequently, according to \cite[Lemma 3.3]{kr1}, statement (ii) implies statement (iii), which in turn implies statement (v).
In addition, applying again \cite[Lemma 3.3]{kr1}, statement (ii) also implies statement (iv), which in turn implies statement (v). Note also that statement (vi)
is an equivalent formulation of statement (v) (Theorem \ref{thm3.2}).
\indent Suppose that statement (vi) holds. According to Theorem \ref{thm3.2},
\begin{align*}
&A_n^{-(B_n, C_n)}=(A_n)^{(2)}_{\R(B_n), \N(C_n)},& &A^{-(B, C)}=A^{(2)}_{\R(B), \N(C)}.&
\end{align*}
\noindent Let $\kappa=\parallel A\parallel\parallel A^{-(B, C)}\parallel$ and consider $n_0\in\mathbb{N}$ such that
for all $n\ge n_0$,
\begin{align*}
&u_n=\hat{\delta}(\N(C_n), \N(C))<\frac{1}{3+\kappa},&
&v_n=\hat{\delta}(\R(B_n), \R(B))<\frac{1}{(1+\kappa)^2},&\\
&z_n=\parallel A^{-(B, C)}\parallel\parallel A-A_n\parallel<\frac{2\kappa}{(1+\kappa)(4+\kappa)}.& & &\\
\end{align*}
\noindent Thus, according to \cite[Theorem 3.5]{DX},
$$
\parallel A_n^{-(B_n, C_n)}-A^{-(B, C)}\parallel \le\frac{(1+\kappa)(v_n+u_n)+(1+u_n)z_n}{1-(1+\kappa) v_n-\kappa u_n-(1+u_n)z_n}\parallel A^{-(B, C)}\parallel,
$$
which implies statement (i).
\end{proof}
Next the Banach algebra case will be considered.
\begin{thm}\label{thm5.3}Let $\A$ be a unitary Banach algebra and consider $a\in\A$ and $b$, $c\in\hat{\A}$ such that $a$ is $(b, c)$-invertible and $a^{-(b, c)}\neq 0$.
Suppose that there exist three sequences $(a_n)_{n\in\mathbb{N}}\subset\A$ and $(b_n)_{n\in\mathbb{N}}$, $(c_n)_{n\in\mathbb{N}}\subset \hat{\A}$ such that
$a_n$ is $(b_n, c_n)$-invertible, for each $n\in\mathbb{N}$.
If $(a_n)_{n\in\mathbb{N}}$ converges to $a$, then the following statements are equivalent.
\begin{enumerate}[{\rm (i)}]
\item The sequence $(a_n^{-(b_n, c_n)})_{n\in\mathbb{N}}$ converges to $a^{-(b, c)}$.
\item The sequences $(a_n^{-(b_n, c_n)}a_n)_{n\in\mathbb{N}}$ and $(a_na_n^{-(b_n, c_n)})_{n\in\mathbb{N}}$
converge to $a^{-(b, c)}a$ and $aa^{-(b, c)}$, respectively.
\item The sequence $(a_n^{-(b_n, c_n)}a_n)_{n\in\mathbb{N}}$ \rm(\it respectively $(\hat{\delta}((c_n)^{-1}(0), c^{-1}(0)))_{n\in\mathbb{N}}$\rm) \it
converges to $a^{-(b, c)}a$ \rm(\it respectively to 0\rm)\it.
\item The sequence $(a_na_n^{-(b_n, c_n)})_{n\in\mathbb{N}}$ \rm(\it respectively $(\hat{\delta}(b_n\A, b\A))_{n\in\mathbb{N}}$\rm) \it
converges to $aa^{-(b, c)}$ \rm(\it respectively to 0\rm)\it.
\item The sequences $(\hat{\delta}(b_n\A, b\A))_{n\in\mathbb{N}}$ and $(\hat{\delta}((c_n)^{-1}(0), c^{-1}(0)))_{n\in\mathbb{N}}$
converge to $0$.
\item The sequences $(\hat{\delta}(a_n^{-(b_n, c_n)}\A, a^{-(b, c)}\A))_{n\in\mathbb{N}}$ and $(\hat{\delta}((a_n^{-(b_n, c_n)})^{-1}(0), (a^{-(b, c)})^{-1}(0)))_{n\in\mathbb{N}}$
converge to $0$.\par
\item The sequence $(a_na_n^{-(b_n, c_n)})_{n\in\mathbb{N}}$ \rm(\it respectively $(\hat{\delta}((b_n)_{-1}(0), b_{-1}(0)))_{n\in\mathbb{N}}$\rm) \it
converges to $aa^{-(b, c)}$ \rm(\it respectively to 0\rm)\it.
\item The sequence $(a_n^{-(b_n, c_n)}a_n)_{n\in\mathbb{N}}$ \rm(\it respectively $(\hat{\delta}(\A c_n, \A c))_{n\in\mathbb{N}}$\rm) \it
converges to $a^{-(b, c)}a$ \rm(\it respectively to 0\rm)\it.
\item The sequences $(\hat{\delta}(\A c_n, \A c))_{n\in\mathbb{N}}$ and $(\hat{\delta}((b_n)_{-1}(0), b_{-1}(0)))_{n\in\mathbb{N}}$
converge to $0$.
\item The sequences $(\hat{\delta}(\A a_n^{-(b_n, c_n)}, \A a^{-(b, c)}))_{n\in\mathbb{N}}$ and $(\hat{\delta}((a_n^{-(b_n, c_n)})_{-1}(0), (a^{-(b, c)})_{-1}(0)))_{n\in\mathbb{N}}$
converge to $0$.\par
\end{enumerate}
\end{thm}
\begin{proof} Recall that, according to Proposition \ref{pro3.10} (i),
\begin{align*}
&L_{a^{-(b, c)}}=L_a^{-(L_b, L_c)}=(L_a)^{(2)}_{b\A, c^{-1}(0)},& &L_{a_n^{-(b_n, c_n)}}=L_{a_n}^{-(L_{b_n}, L_{c_n})}=(L_{a_n})^{(2)}_{b_n\A, {c_n}^{-1}(0)},&\\
\end{align*}
for each $n\in\mathbb{N}$. In addition, note that since $\A$ is a unitary Banach algebra, $(a_n)_{n\in\mathbb{N}}$ converges to $a$
if and only if $(L_{a_n})_{n\in\mathbb{N}}$ converges to $L_a$. Similarly, a necessary and sufficient condition for $(a_n^{-(b_n, c_n)})_{n\in\mathbb{N}}$ (respectively for
$(a_n^{-(b_n, c_n)}a_n)_{n\in\mathbb{N}}$ and $(a_na_n^{-(b_n, c_n)})_{n\in\mathbb{N}}$)
to converge to $a^{-(b, c)}$ (respectively to $a^{-(b, c)}a$ and $aa^{-(b, c)}$) is that $(L_{a_n^{-(b_n, c_n)}})_{n\in\mathbb{N}}$ (respectively $(L_{a_n^{-(b_n, c_n)}}L_{a_n})_{n\in\mathbb{N}}$ and
$(L_{a_n}L_{a_n^{-(b_n, c_n)}})_{n\in\mathbb{N}}$)
converges to $L_{a^{-(b, c)}}$ (respectively to $L_{a^{-(b, c)}}L_a$ and $L_aL_{a^{-(b, c)}}$).
Now, to prove the equivalence between statements (i)-(vi), apply Theorem \ref{thm5.2} to $L_a$ and $(L_{a_n})_{n\in\mathbb{N}}$.
Recall that according to the proof of Proposition \ref{pro3.10}, $c^{-1}(0)=(a^{-(b, c)})^{-1}(0)$ and $c_n^{-1}(0)=(a_n^{-(b_n, c_n)})^{-1}(0)$.
A similar argument, using in particular Proposition \ref{pro3.10} (iii) and Theorem \ref{thm5.2}, proves the equivalence between statements (i), (ii) and (vii)-(x).
\end{proof}
Next the continuity of the $(b, c)$-inverse will be characterized using the Moore-Penrose inverse.
\begin{thm}\label{thm5.4}Let $\A$ be a unitary Banach algebra and consider $a\in\A$ and $b$, $c\in\A^\dag$ such that $a$ is $(b, c)$-invertible and $a^{-(b, c)}\neq 0$.
Suppose that there exist three sequences $(a_n)_{n\in\mathbb{N}}\subset\A$ and $(b_n)_{n\in\mathbb{N}}$, $(c_n)_{n\in\mathbb{N}}\subset \A^\dag$ such that
$a_n$ is $(b_n, c_n)$-invertible, for each $n\in\mathbb{N}$.
If $(a_n)_{n\in\mathbb{N}}$ converges to $a$, then the following statements are equivalent.
\begin{enumerate}[{\rm (i)}]
\item The sequence $(a_n^{-(b_n, c_n)})_{n\in\mathbb{N}}$ converges to $a^{-(b, c)}$.
\item The sequence $(a_n^{-(b_n, c_n)}a_n)_{n\in\mathbb{N}}$ converges to $a^{-(b, c)}a$ and the sequences
\begin{align*}
&(c^\dag c(\uno-c_n^\dag c_n))_{n\in\mathbb{N}},&
&(c_n^\dag c_n (\uno -c^\dag c))_{n\in\mathbb{N}}\\
\end{align*}
converge to 0.
\item The sequence $(a_na_n^{-(b_n, c_n)})_{n\in\mathbb{N}}$ converges to $aa^{-(b, c)}$ and the sequences
\begin{align*}
&((\uno-bb^\dag)b_nb_n^\dag)_{n\in\mathbb{N}},& &( (\uno-b_nb_n^\dag)bb^\dag)_{n\in\mathbb{N}}&\\
\end{align*}
converge to 0.
\item The sequences
\begin{align*}
&( (\uno-bb^\dag)b_nb_n^\dag)_{n\in\mathbb{N}},& &( (\uno-b_nb_n^\dag)bb^\dag)_{n\in\mathbb{N}},&\\
&( c^\dag c(\uno-c_n^\dag c_n))_{n\in\mathbb{N}},&
&( c_n^\dag c_n (\uno -c^\dag c))_{n\in\mathbb{N}}&\\
\end{align*}
converge to 0.
\item The sequence $(a_na_n^{-(b_n, c_n)})_{n\in\mathbb{N}}$ converges to $aa^{-(b, c)}$ and the sequences
\begin{align*}
&( bb^\dag (\uno-b_nb_n^\dag))_{n\in\mathbb{N}},& &( b_nb_n^\dag (\uno-bb^\dag))_{n\in\mathbb{N}}&\\
\end{align*}
converge to 0.
\item The sequence $(a_n^{-(b_n, c_n)}a_n)_{n\in\mathbb{N}}$ converges to $a^{-(b, c)}a$ and the sequences
\begin{align*}
&( (\uno -c^\dag c)c_n^\dag c_n)_{n\in\mathbb{N}},& &( (\uno -c_n^\dag c_n)c^\dag c)_{n\in\mathbb{N}}&\\
\end{align*}
converge to 0.
\item The sequences
\begin{align*}
&( bb^\dag (\uno-b_nb_n^\dag))_{n\in\mathbb{N}},& &( b_nb_n^\dag (\uno-bb^\dag))_{n\in\mathbb{N}},&\\
&( (\uno -c^\dag c)c_n^\dag c_n)_{n\in\mathbb{N}},& &( (\uno -c_n^\dag c_n)c^\dag c)_{n\in\mathbb{N}}&\\
\end{align*}
converge to $0$.
\item The sequences $(b_nb_n^\dag)_{n\in\mathbb{N}}$ and $(c_n^\dag c_n)_{n\in\mathbb{N}}$ converge to $bb^\dag$ and $c^\dag c$, respectively.
\end{enumerate}
\end{thm}
\begin{proof}Note that
\begin{align*}
&b\A= bb^\dag \A,& &b_n\A=b_nb_n^\dag\A,& &c^{-1}(0)=(\uno-c^\dag c)\A,& &c_n^{-1}(0)=(\uno-c_n^\dag c_n)\A,&\\
&\A c=\A c^\dag c,& &\A c_n=\A c_n^\dag c_n,& &b_{-1}(0)=\A (\uno-bb^\dag),& &(b_n)_{-1}(0)=\A (\uno-b_nb_n^\dag).& \\
\end{align*}
Since $\A$ is a unitary Banach algebra, according to \cite[Lemma 2.2]{V2},
\begin{align*}
&\hat{\delta}(b_n\A, b\A)=\hbox{\rm max}\{ \parallel (\uno-bb^\dag)b_nb_n^\dag\parallel, \parallel (\uno-b_nb_n^\dag)bb^\dag\parallel\},&\\
&\hat{\delta}((c_n)^{-1}(0), c^{-1}(0))=\hbox{\rm max}\{ \parallel c^\dag c(\uno-c_n^\dag c_n)\parallel, \parallel c_n^\dag c_n (\uno -c^\dag c)\parallel\},&\\
&\hat{\delta}((b_n)_{-1}(0), b_{-1}(0))=\hbox{\rm max}\{ \parallel bb^\dag (\uno-b_nb_n^\dag)\parallel, \parallel b_nb_n^\dag (\uno-bb^\dag)\parallel\},&\\
&\hat{\delta}(\A c_n, \A c)=\hbox{\rm max}\{\parallel (\uno -c^\dag c)c_n^\dag c_n\parallel, \parallel (\uno -c_n^\dag c_n)c^\dag c\parallel\}.&\\
\end{align*}
\indent Now, to prove the equivalence among statements (i)-(vii), apply Theorem \ref{thm5.3} and use the identities that have been proved.
Suppose that statement (i) holds. Observe that
\begin{align*}
b_nb_n^\dag&=bb^\dag b_nb_n^\dag bb^\dag + bb^\dag b_nb_n^\dag (\uno-bb^\dag) + (\uno-bb^\dag) b_nb_n^\dag bb^\dag+ (\uno-bb^\dag)b_nb_n^\dag(\uno-bb^\dag).\\
\end{align*}
Thus, according to statements (iv) and (vii), if
$$
f_n=bb^\dag b_nb_n^\dag (\uno-bb^\dag) + (\uno-bb^\dag) b_nb_n^\dag bb^\dag+ (\uno-bb^\dag)b_nb_n^\dag(\uno-bb^\dag),
$$
then the sequence $(f_n)_{n\in\mathbb{N}}$ converges to 0. In addition, according to statement (vii), the sequence $(bb^\dag (\uno-b_nb_n^\dag)bb^\dag)_{n\in\mathbb{N}}$ converges to 0, which implies that $(bb^\dag b_nb_n^\dag bb^\dag)_{n\in\mathbb{N}}$
converges to $bb^\dag$. Therefore, $(b_nb_n^\dag)_{n\in\mathbb{N}}$ converges to $bb^\dag$. A similar argument proves that $(c_n^\dag c_n)_{n\in\mathbb{N}}$ converges to $c^\dag c$.
\indent It is evident that statement (viii) implies statement (vii).
\end{proof}
In the following corollary, a particular case will be presented.
\begin{cor}\label{cor5.5}
Let $\A$ be a unitary Banach algebra and consider $a\in\A$ and $b$, $c\in\A^\dag$ such that $a$ is $(b, c)$-invertible and $a^{-(b, c)}\neq 0$.
Suppose that there exist three sequences $(a_n)_{n\in\mathbb{N}}\subset\A$ and $(b_n)_{n\in\mathbb{N}}$, $(c_n)_{n\in\mathbb{N}}\subset \A^\dag$ such that
$a_n$ is $(b_n, c_n)$-invertible, for each $n\in\mathbb{N}$. Suppose, in addition, that the sequences $(b_n)_{n\in\mathbb{N}}$, $(c_n)_{n\in\mathbb{N}}$,
$(b_n^\dag)_{n\in\mathbb{N}}$ and $(c_n^\dag)_{n\in\mathbb{N}}$ converge to $b$, $c$, $b^\dag$ and $c^\dag$, respectively.
Then, if $(a_n)_{n\in\mathbb{N}}$ converges to $a$, the sequence $(a_n^{-(b_n, c_n)})_{n\in\mathbb{N}}$ converges to $a^{-(b, c)}$.
\end{cor}
\begin{proof} Apply Theorem \ref{thm5.4} (viii).
\end{proof}
In the context of $C^*$-algebras, Theorem \ref{thm5.4} and Corollary \ref{cor5.5} can be reformulated as follows.
\begin{thm}\label{thm5.6} Let $\A$ be a $C^*$-algebra and consider $a\in\A$ and $b$, $c\in\hat{\A}$ such that $a$ is $(b, c)$-invertible and $a^{-(b, c)}\neq 0$.
Suppose that there exist three sequences $(a_n)_{n\in\mathbb{N}}\subset\A$ and $(b_n)_{n\in\mathbb{N}}$, $(c_n)_{n\in\mathbb{N}}\subset\hat{\A}$ such that
$a_n$ is $(b_n, c_n)$-invertible, for each $n\in\mathbb{N}$.
If $(a_n)_{n\in\mathbb{N}}$ converges to $a$, then the following statements are equivalent.
\begin{enumerate}[{\rm (i)}]
\item The sequence $(a_n^{-(b_n, c_n)})_{n\in\mathbb{N}}$ converges to $a^{-(b, c)}$.
\item The sequences $(a_n^{-(b_n, c_n)}a_n)_{n\in\mathbb{N}}$ and $(c_n^\dag c_n)_{n\in\mathbb{N}}$ converge to $a^{-(b, c)}a$ and $c^\dag c$, respectively.
\item The sequences $(a_na_n^{-(b_n, c_n)})_{n\in\mathbb{N}}$ and $(b_nb_n^\dag)_{n\in\mathbb{N}}$ converge to $aa^{-(b, c)}$ and
$bb^\dag$, respectively.
\item The sequences $(b_nb_n^\dag)_{n\in\mathbb{N}}$ and $(c_n^\dag c_n)_{n\in\mathbb{N}}$ converge to $bb^\dag$ and $c^\dag c$, respectively.
\hspace*{\dimexpr\linewidth-\textwidth\relax}
\begin{minipage}[t]{\textwidth}
\noindent In addition, if the sequence $(b_n)_{n\in\mathbb{N}}$ converges to $b$, then statement {\rm (iii)} is equivalent to:
\end{minipage}
\item the sequences $(a_na_n^{-(b_n, c_n)})_{n\in\mathbb{N}}$ and $(b_n^\dag)_{n\in\mathbb{N}}$ converge to $aa^{-(b, c)}$ and
$b^\dag$, respectively.
\hspace*{\dimexpr\linewidth-\textwidth\relax}
\begin{minipage}[t]{\textwidth}
\noindent Moreover, if the sequence $(c_n)_{n\in\mathbb{N}}$ converges to $c$, then statement {\rm (ii)} is equivalent to:
\end{minipage}
\item the sequences $(a_n^{-(b_n, c_n)}a_n)_{n\in\mathbb{N}}$ and $(c_n^\dag )_{n\in\mathbb{N}}$ converge to $a^{-(b, c)}a$ and $c^\dag$, respectively.
\hspace*{\dimexpr\linewidth-\textwidth\relax}
\begin{minipage}[t]{\textwidth}
\noindent Furthermore, if the sequences $(b_n)_{n\in\mathbb{N}}$ and $(c_n)_{n\in\mathbb{N}}$ converge to $b$ and $c$, respectively, then statement {\rm (iv)} is equivalent to:
\end{minipage}
\item the sequences $(b_n^\dag)_{n\in\mathbb{N}}$ and $(c_n^\dag )_{n\in\mathbb{N}}$ converge to $b^\dag$ and $c^\dag$, respectively.
\end{enumerate}
\end{thm}
\begin{proof}Note that since
\begin{align*}
&((\uno-bb^\dag)b_nb_n^\dag)^*=b_nb_n^\dag(\uno-bb^\dag),& &( (\uno-b_nb_n^\dag)bb^\dag)^*=bb^\dag(\uno-b_nb_n^\dag),&\\
\end{align*}
according to the proof that statement (vii) implies statement (viii) in Theorem \ref{thm5.4}, statement (iii) is equivalent to Theorem \ref{thm5.4} (iii).
Now observe that,
\begin{align*}
&(c^\dag c(\uno-c_n^\dag c_n))^*=(\uno-c_n^\dag c_n)c^\dag c,&
&(c_n^\dag c_n (\uno -c^\dag c))^*=(\uno -c^\dag c)c_n^\dag c_n.\\
\end{align*}
Then, a similar argument proves that statement (ii) is equivalent to Theorem \ref{thm5.4} (ii).
To prove statements (v)-(vii), apply \cite[Theorem 1.6]{kol}.
\end{proof}
Since the inverse along an element is a particular case of the $(b, c)$-inverse, it is possible to apply the results of this section
to obtain characterizations of the continuity of the inverse along an element for Banach space operators and for Banach algebra and $C^*$-algebra elements.
In fact, given a Banach algebra (or a $C^*$-algebra) $\A$, $a\in\A$ and $d\in\hat{\A}$,
to state the aforementioned characterizations, it is enough to apply the corresponding result to the $(d, d)$-inverse (see section 2);
in the Banach operator case it is possible to proceed in a similar way.
In order not to extend this work unnecessarily, these results will not be stated; the details are left to the interested reader.
To end this section, as an application, characterizations of the continuity of the Moore-Penrose inverse
in the contexts of Banach algebras and Banach space operators will be given; in the former frame, compare with \cite[Theorem 2.5]{V2}.
\begin{cor}\label{cor5.7}Let $\A$ be a unitary Banach algebra and consider $a\in\A^\dag\setminus\{0\}$.
Suppose that there exists a sequence $(a_n)_{n\in\mathbb{N}}\subset\A^\dag$ such that
$(a_n)_{n\in\mathbb{N}}$ converges to $a$. Then the following statements are equivalent.
\begin{enumerate}[{\rm (i)}]
\item The sequence $(a_n^\dag)_{n\in\mathbb{N}}$ converges to $a^{\dag}$.
\item The sequence $(a_n^\dag a_n)_{n\in\mathbb{N}}$ converges to $a^\dag a$ and the sequences
\begin{align*}
&(a a^\dag(\uno-a_n a_n^\dag))_{n\in\mathbb{N}},&
&(a_n a_n^\dag (\uno -a a^\dag))_{n\in\mathbb{N}}\\
\end{align*}
converge to 0.
\item The sequence $(a_na_n^\dag)_{n\in\mathbb{N}}$ converges to $aa^\dag$ and the sequences
\begin{align*}
&((\uno-a^\dag a)a^\dag_na_n)_{n\in\mathbb{N}},& &( (\uno-a^\dag_na_n)a^\dag a)_{n\in\mathbb{N}}&\\
\end{align*}
converge to 0.
\item The sequences
\begin{align*}
&( (\uno-a^\dag a)a^\dag_na_n)_{n\in\mathbb{N}},& &( (\uno-a_n^\dag a_n)a^\dag a)_{n\in\mathbb{N}},&\\
&( a a^\dag(\uno-a_n a_n^\dag))_{n\in\mathbb{N}},&
&( a_n a_n^\dag (\uno -a a^\dag))_{n\in\mathbb{N}}&\\
\end{align*}
converge to 0.
\item The sequence $(a_na_n^\dag)_{n\in\mathbb{N}}$ converges to $aa^\dag$ and the sequences
\begin{align*}
&( a^\dag a (\uno-a^\dag_na_n))_{n\in\mathbb{N}},& &( a^\dag_n a_n (\uno-a^\dag a))_{n\in\mathbb{N}}&\\
\end{align*}
converge to 0.
\item The sequence $(a_n^\dag a_n)_{n\in\mathbb{N}}$ converges to $a^\dag a$ and the sequences
\begin{align*}
&( (\uno -a a^\dag)a_n a_n^\dag)_{n\in\mathbb{N}},& &( (\uno -a_n a_n^\dag)a a^\dag)_{n\in\mathbb{N}}&\\
\end{align*}
converge to 0.
\item The sequences
\begin{align*}
&( a^\dag a (\uno-a^\dag_n a_n))_{n\in\mathbb{N}},& &( a_n^\dag a_n (\uno-a^\dag a))_{n\in\mathbb{N}},&\\
&( (\uno -a a^\dag)a_n a^\dag_n)_{n\in\mathbb{N}},& &( (\uno -a_n a_n^\dag)a a^\dag)_{n\in\mathbb{N}}&\\
\end{align*}
converge to $0$.
\item The sequences $(a_n^\dag a_n)_{n\in\mathbb{N}}$ and $(a_n a^\dag_n)_{n\in\mathbb{N}}$ converge to $a^\dag a$ and $a a^\dag$, respectively.
\end{enumerate}
\end{cor}
\begin{proof} According to \cite[Proposition 6.1]{D}, given $x\in\A^\dag$, $x^\dag=x^{-(x^\dag, x^\dag)}$. To conclude the proof, apply
Theorem \ref{thm5.4} to $a$, $b=c=a^\dag$, $a_n$ and $b_n=c_n=a_n^\dag$ ($n\in\mathbb{N}$). Note that if $x\in\A^\dag$, then $x^\dag\in \A^\dag$ and
$(x^\dag)^\dag=x$.
\end{proof}
\begin{rema}\label{rema5.8}\rm Under the same conditions as in Corollary \ref{cor5.7}, note that according to \cite[Lemma 2.2]{V2},
\begin{align*}
&\hat{\delta}(\R(L_{a_n^\dag}), \R(L_{a^\dag}))=\hat{\delta}(a_n^\dag\A, a^\dag\A)=\hbox{\rm max}\{ \parallel (\uno-a^\dag a)a_n^\dag a_n\parallel, \parallel (\uno-a_n^\dag a_n)a^\dag a\parallel\},&\\
&\hat{\delta}(\N(L_{a_n^\dag}), \N(L_{a^\dag}))=\hat{\delta}((a_n^\dag)^{-1}(0), (a^\dag)^{-1}(0))=\hbox{\rm max}\{ \parallel aa^\dag (\uno-a_na_n^\dag)\parallel, \parallel a_na_n^\dag (\uno -a a^\dag) \parallel\},&\\
&\hat{\delta}(\N(R_{a_n^\dag}), \N(R_{a^\dag}))=\hat{\delta}((a_n^\dag)_{-1}(0), (a^\dag)_{-1}(0))=\hbox{\rm max}\{ \parallel a^\dag a(\uno-a_n^\dag a_n)\parallel, \parallel a_n^\dag a_n (\uno-a^\dag a) \parallel\},&\\
&\hat{\delta}(\R(R_{a_n^\dag}), \R(R_{a^\dag}))=\hat{\delta}(\A a_n^\dag, \A a^\dag)=\hbox{\rm max}\{\parallel (\uno -a a^\dag )a_n a_n^\dag \parallel, \parallel (\uno -a_na_n^\dag)a a^\dag \parallel\}.&\\
\end{align*}
Compare Corollary \ref{cor5.7} with \cite[Theorem 2.5]{V2}.
\end{rema}
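The relevance of statement (viii) of Corollary \ref{cor5.7} can already be observed in the $C^*$-algebra $\A=\mathbb{C}^{2\times 2}$: if
$$
a_n=\begin{pmatrix}1&0\\0&\frac{1}{n}\end{pmatrix}, \qquad a=a^\dag=\begin{pmatrix}1&0\\0&0\end{pmatrix}, \qquad a_n^\dag=\begin{pmatrix}1&0\\0&n\end{pmatrix},
$$
then $(a_n)_{n\in\mathbb{N}}$ converges to $a$, but $a_n^\dag a_n=a_na_n^\dag=\uno$ does not converge to $a^\dag a=aa^\dag=a$, and accordingly $(a_n^\dag)_{n\in\mathbb{N}}$ does not converge to $a^\dag$.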
\begin{cor}\label{cor5.9}Let $\X$ be a Banach space and consider a Moore-Penrose invertible operator $A\in \L (\X)$, $A\neq 0$. Suppose that there exists a sequence
of operators $(A_n)_{n\in\mathbb{N}}$ such that
for each $n\in\mathbb{N}$, $A_n$ is Moore-Penrose invertible.
If $(A_n)_{n\in\mathbb{N}}$ converges to $A$, then the following statements are equivalent.
\begin{enumerate}[{\rm (i)}]
\item The sequence $(A_n^\dag)_{n\in\mathbb{N}}$ converges to $A^\dag$.
\item The sequence $(A_n^\dag A_n)_{n\in\mathbb{N}}$ \rm(\it respectively $(A_nA_n^\dag)_{n\in\mathbb{N}}$\rm) \it
converges to $A^\dag A$ \rm(\it respectively to $AA^\dag $\rm).\it
\item The sequence $(A_n^\dag A_n)_{n\in\mathbb{N}}$ \rm(\it respectively $(\hat{\delta}(\N(A_n^\dag), \N(A^\dag)))_{n\in\mathbb{N}}$\rm) \it converges to $A^\dag A$
\rm(\it respectively to 0\rm).\it
\item The sequence $(A_nA_n^\dag)_{n\in\mathbb{N}}$ \rm(\it respectively $(\hat{\delta}(\R(A^\dag_n), \R(A^\dag)))_{n\in\mathbb{N}}$\rm) \it
converges to $AA^\dag$ \rm(\it respectively to 0\rm).\it
\item The sequences $(\hat{\delta}(\R(A_n^\dag), \R(A^\dag)))_{n\in\mathbb{N}}$ and $(\hat{\delta}(\N(A^\dag_n), \N(A^\dag)))_{n\in\mathbb{N}}$
converge to $0$.
\end{enumerate}
\end{cor}
\begin{proof} Proceed as in the proof of Corollary \ref{cor5.7} and apply Theorem \ref{thm5.2}.
\end{proof}
\section{Differentiation of the $(b, c)$-inverse}
Next follows the first characterization of this section. Observe that if $\A$ is a Banach algebra, $J\subseteq \mathbb{R}$ and there exist functions ${\mathbf a}\colon J\to \A$ and ${\mathbf b}$,
${\mathbf c}\colon J\to \hat{\A}$ such that for each $t\in J$, ${\mathbf a}(t)^{-({\mathbf b(t)}, {\mathbf c(t)})}$ exists, then ${\mathbf a}^{-({\mathbf b }, {\mathbf c})}\colon J\to \A$
is the following function: ${\mathbf a}^{-({\mathbf b }, {\mathbf c})}(t)= {\mathbf a(t)}^{-({\mathbf b(t) }, {\mathbf c(t)})}$.
\begin{thm}\label{thm6.1} Let $\A$ be a Banach algebra and consider a function ${\mathbf a}\colon J\to \A$, where $J\subseteq\mathbb{R}$ is an open set.
Let ${\mathbf b}$, ${\mathbf c}\colon J\to \hat{\A}$ be two functions such that for each $t\in J$, ${\mathbf a}(t)^{-({\mathbf b(t)}, {\mathbf c(t)})}$ exists. Suppose that
there exist functions ${\mathbf g}$, ${\mathbf h}\colon J\to \A$ such that for each $t\in J$, ${\mathbf g}(t)\in {\mathbf b}(t)\{1\}$ and ${\mathbf h}(t)\in {\mathbf c}(t)\{1\}$.
Then, given $t_0\in J$, if the functions ${\mathbf a}$, ${\mathbf b}{\mathbf g}$, ${\mathbf h}{\mathbf c}\colon J\to \A$ are differentiable at $t_0$, the following statements are equivalent.
\begin{enumerate}[{\rm (i)}]
\item The function ${\mathbf a}^{-({\mathbf b }, {\mathbf c})}\colon J\to \A$ is continuous at $t_0$.
\item The function ${\mathbf a}^{-({\mathbf b}, {\mathbf c})}\colon J\to \A$ is differentiable at $t_0$.
\end{enumerate}
\noindent Moreover, in this case,
\begin{align*}
({\mathbf a}^{-({\mathbf b }, {\mathbf c})})'(t_0)=&{\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}({\mathbf h}{\mathbf c})'(t_0){\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}\\
&-(\uno-{\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}{\mathbf a}(t_0))({\mathbf b}{\mathbf g})'(t_0){\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}\\
& -{\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}{\mathbf a}'(t_0){\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}.\\
\end{align*}
\end{thm}
\begin{proof} It is enough to prove that statement (i) implies statement (ii). This proof can be deduced from \cite[Corollary 7.12]{b2}. In fact, according to this result,
\begin{align*}
{\mathbf a}(t)^{-({\mathbf b(t)}, {\mathbf c(t)})}-{\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}&={\mathbf a}(t)^{-({\mathbf b(t)}, {\mathbf c(t)})}({\mathbf h}{\mathbf c}(t)-{\mathbf h}{\mathbf c}(t_0))
{\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}\\
&-(\uno-{\mathbf a}(t)^{-({\mathbf b(t)}, {\mathbf c(t)})}{\mathbf a}(t))({\mathbf b}{\mathbf g}(t)-{\mathbf b}{\mathbf g}(t_0)){\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}\\
&+ {\mathbf a}(t)^{-({\mathbf b(t)}, {\mathbf c(t)})}({\mathbf a}(t_0)-{\mathbf a(t)}){\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}.\\
\end{align*}
\noindent Now divide each term by $t-t_0$ and note that the limit leads to the formula for $({\mathbf a}^{-({\mathbf b }, {\mathbf c})})'(t_0)$.
\end{proof}
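Note that Theorem \ref{thm6.1} recovers the classical formula for the derivative of the inverse. Indeed, if ${\mathbf b}={\mathbf c}=\uno$ (the constant function), then each ${\mathbf a}(t)$ is invertible, ${\mathbf a}(t)^{-(\uno, \uno)}={\mathbf a}(t)^{-1}$, and one may take ${\mathbf g}={\mathbf h}=\uno$, so that ${\mathbf b}{\mathbf g}$ and ${\mathbf h}{\mathbf c}$ are constant and the formula above reduces to
$$
({\mathbf a}^{-1})'(t_0)=-{\mathbf a}(t_0)^{-1}{\mathbf a}'(t_0){\mathbf a}(t_0)^{-1}.
$$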
When the functions ${\mathbf b}$ and ${\mathbf c}$ in Theorem \ref{thm6.1} are such that ${\mathbf b}(J)\subseteq \A^\dag$ and ${\mathbf c}(J)\subseteq \A^\dag$,
the aforementioned result can be reformulated as follows. Note first that if $\A$ is a Banach algebra and ${\mathbf x}\colon J\to \A^\dag$ is a function ($J\subseteq\mathbb{R}$), then ${\mathbf x}^\dag\colon J\to \A$
denotes the following function: ${\mathbf x}^\dag(t)= ({\mathbf x}(t))^\dag$.
\begin{cor}\label{cor6.3} Let $\A$ be a Banach algebra and consider a function ${\mathbf a}\colon J\to \A$, where $J\subseteq\mathbb{R}$ is an open set.
Let ${\mathbf b}$, ${\mathbf c}\colon J\to \A^\dag$ be two functions such that for each $t\in J$, ${\mathbf a}(t)^{-({\mathbf b(t)}, {\mathbf c(t)})}$ exists.
Then, given $t_0\in J$, if the functions ${\mathbf a}$, ${\mathbf b}{\mathbf b}^\dag$, ${\mathbf c}^\dag{\mathbf c}\colon J\to \A$ are differentiable at $t_0$, the following statements are equivalent.
\begin{enumerate}[{\rm (i)}]
\item The function ${\mathbf a}^{-({\mathbf b }, {\mathbf c})}\colon J\to \A$ is continuous at $t_0$.
\item The function ${\mathbf a}^{-({\mathbf b}, {\mathbf c})}\colon J\to \A$ is differentiable at $t_0$.
\end{enumerate}
\noindent Moreover, in this case,
\begin{align*}
({\mathbf a}^{-({\mathbf b }, {\mathbf c})})'(t_0)=&{\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}({\mathbf c}^\dag{\mathbf c})'(t_0){\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}\\
&-(\uno-{\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}{\mathbf a}(t_0))({\mathbf b}{\mathbf b}^\dag)'(t_0){\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}\\
& -{\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}{\mathbf a}'(t_0){\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}.\\
\end{align*}
\end{cor}
\begin{proof} Apply Theorem \ref{thm6.1} to the case under consideration.
\end{proof}
As an application of Theorem \ref{thm6.1}, a characterization of the differentiability of the Moore-Penrose inverse in the Banach context will be given.
\begin{cor}\label{cor6.2} Let $\A$ be a Banach algebra and consider a function ${\mathbf a}\colon J\to \A^\dag$, where $J\subseteq\mathbb{R}$ is an open set.
Suppose that there exists $t_0\in J$ such that the function ${\mathbf a}\colon J\to \A$ is differentiable at $t_0$. Then, the following statements are equivalent.
\begin{enumerate}[{\rm (i)}]
\item The function ${\mathbf a}^\dag\colon J\to \A$ is differentiable at $t_0$.
\item The function ${\mathbf a}^\dag\colon J\to \A$ is continuous at $t_0$ and the functions ${\mathbf a}{\mathbf a}^\dag$, ${\mathbf a}^\dag {\mathbf a}\colon J\to \A$
are differentiable at $t_0$.
\end{enumerate}
Furthermore, in this case,
\begin{align*}
({\mathbf a}^\dag)'(t_0)=&{\mathbf a}^\dag(t_0)({\mathbf a}{\mathbf a}^\dag)'(t_0){\mathbf a}^\dag(t_0) -(\uno-{\mathbf a}^\dag {\mathbf a}(t_0))({\mathbf a}^\dag {\mathbf a})'(t_0){\mathbf a}^\dag(t_0)
-{\mathbf a}^\dag(t_0){\mathbf a}'(t_0){\mathbf a}^\dag(t_0).\\
\end{align*}
\end{cor}
\begin{proof} Recall that given $x\in\A^\dag$, $x^\dag= x^{-(x^\dag, x^\dag)}$ ({\hspace{-1pt}}\cite[Proposition 6.1]{D}). To conclude the proof apply Theorem \ref{thm6.1} to the functions
${\mathbf a}$, ${\mathbf b}$, ${\mathbf c}\colon J\to\A$, where ${\mathbf b}={\mathbf c}={\mathbf a}^\dag$.
\end{proof}
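As a simple example, let $\A=\mathbb{C}^{2\times 2}$, $J=(-1, 1)$ and ${\mathbf a}(t)=\left(\begin{smallmatrix}1+t&0\\0&0\end{smallmatrix}\right)$, so that ${\mathbf a}^\dag(t)=\left(\begin{smallmatrix}\frac{1}{1+t}&0\\0&0\end{smallmatrix}\right)$ and ${\mathbf a}{\mathbf a}^\dag={\mathbf a}^\dag{\mathbf a}\equiv\left(\begin{smallmatrix}1&0\\0&0\end{smallmatrix}\right)$ are constant. The formula of Corollary \ref{cor6.2} then reduces to
$$
({\mathbf a}^\dag)'(t_0)=-{\mathbf a}^\dag(t_0){\mathbf a}'(t_0){\mathbf a}^\dag(t_0)=\begin{pmatrix}-\frac{1}{(1+t_0)^2}&0\\0&0\end{pmatrix},
$$
in accordance with the direct differentiation of $t\mapsto\frac{1}{1+t}$.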
In the frame of $C^*$-algebras, the hypotheses of Corollary \ref{cor6.3} can be lightened.
\begin{cor}\label{cor6.4} Let $\A$ be a $C^*$-algebra and consider a function ${\mathbf a}\colon J\to \A$, where $J\subseteq\mathbb{R}$ is an open set.
Let ${\mathbf b}$, ${\mathbf c}\colon J\to \hat{\A}$ be two functions such that for each $t\in J$, ${\mathbf a}(t)^{-({\mathbf b(t)}, {\mathbf c(t)})}$ exists.
Then, given $t_0\in J$, if the functions ${\mathbf a}$, ${\mathbf b}$, ${\mathbf c}\colon J\to \A$ are differentiable at $t_0$, the functions
${\mathbf b}^\dag$, ${\mathbf c}^\dag\colon J\to \A$ are continuous at $t_0$, and ${\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}\neq 0$, the following statements are equivalent.
\begin{enumerate}[{\rm (i)}]
\item The function ${\mathbf a}^{-({\mathbf b }, {\mathbf c})}\colon J\to \A$ is continuous at $t_0$.
\item The function ${\mathbf a}^{-({\mathbf b}, {\mathbf c})}\colon J\to \A$ is differentiable at $t_0$.
\end{enumerate}
\noindent Moreover, in this case,
\begin{align*}
({\mathbf a}^{-({\mathbf b }, {\mathbf c})})'(t_0)=&{\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}(({\mathbf c}^\dag)'(t_0){\mathbf c}(t_0)+{\mathbf c}^\dag(t_0){\mathbf c}'(t_0)){\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}\\
&-(\uno-{\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}{\mathbf a}(t_0))({\mathbf b}'(t_0){\mathbf b}^\dag(t_0)+{\mathbf b}(t_0)({\mathbf b}^\dag)'(t_0)){\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}\\
& -{\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}{\mathbf a}'(t_0){\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}.\\
\end{align*}
\end{cor}
\begin{proof} Note that the condition ${\mathbf a}(t_0)^{-({\mathbf b(t_0)}, {\mathbf c(t_0)})}\neq 0$ implies that ${\mathbf b}(t_0)\neq 0$ and ${\mathbf c}(t_0)\neq 0$ (Definition \ref{def1}).
Thus, according to \cite[Theorem 2.1]{kol}, the functions ${\mathbf b}^\dag$, ${\mathbf c}^\dag\colon J\to \A$ are differentiable at $t_0$. To conclude the proof, apply Corollary \ref{cor6.3}.
\end{proof}
As in section 5, the results concerning the differentiability of the inverse along an element can be deduced from Theorem \ref{thm6.1}, Corollary \ref{cor6.2} and Corollary \ref{cor6.4}.
The details are left to the interested reader.
\section{The outer inverse \OIP}
Although similar arguments to the ones used in sections 5 and 6 will be applied to study the continuity and the differentiability of the outer inverse \OIP, the results of this section cannot be derived from the corresponding ones concerning the $(b, c)$-inverse. In fact, in what follows operators between two different Banach spaces will be considered. First the gap between subspaces
will be used to characterize the continuity of the \OIP.
\begin{thm}\label{thm7.1} Let $\X$ and $\Y$ be Banach spaces and consider $A\in\L(\X, \Y)$ and two subspaces $\T\subseteq \X$
and $\S\subseteq \Y$ such that $A^{(2)}_{\T, \S}$ exists. Let $(A_n)_{n\in\mathbb{N}}\subset\L(\X, \Y)$ and consider $(\T_n)_{n\in\mathbb{N}}$ and $(\S_n)_{n\in\mathbb{N}}$
two sequences of subspaces of $\X$ and $\Y$, respectively, such that $(A_n)^{(2)}_{\T_n, \S_n}$ exists, for each $n\in\mathbb{N}$. Suppose that $(A_n)_{n\in\mathbb{N}}$ converges to $A$.
The following statements are equivalent.
\begin{enumerate}[{\rm (i)}]
\item The sequence $((A_n)^{(2)}_{\T_n, \S_n})_{n\in\mathbb{N}}$ converges to $A^{(2)}_{\T, \S}$.
\item The sequences $((A_n)^{(2)}_{\T_n, \S_n}A_n)_{n\in\mathbb{N}}$ and $(A_n(A_n)^{(2)}_{\T_n, \S_n})_{n\in\mathbb{N}}$ converge to $A^{(2)}_{\T, \S}A$ and $AA^{(2)}_{\T, \S}$, respectively.
\item The sequence $((A_n)^{(2)}_{\T_n, \S_n}A_n)_{n\in\mathbb{N}}$ \rm(\it respectively $(\hat{\delta}(\S_n, \S))_{n\in\mathbb{N}}$\rm) \it converges to $A^{(2)}_{\T, \S}A$ \rm(\it respectively to 0\rm).\it
\item The sequence $(A_n(A_n)^{(2)}_{\T_n, \S_n})_{n\in\mathbb{N}}$ \rm (\it respectively $(\hat{\delta}(\T_n, \T))_{n\in\mathbb{N}}$\rm) \it converges to $AA^{(2)}_{\T, \S}$ \rm(\it respectively to 0\rm).\it
\item The sequences $(\hat{\delta}(\T_n, \T))_{n\in\mathbb{N}}$ and $(\hat{\delta}(\S_n, \S))_{n\in\mathbb{N}}$
converge to $0$.\par
\end{enumerate}
\end{thm}
\begin{proof} First define
\begin{align*}
&P=AA^{(2)}_{\T, \S},& &P_n=A_n(A_n)^{(2)}_{\T_n, \S_n},& &Q=A^{(2)}_{\T, \S}A,&
&Q_n=(A_n)^{(2)}_{\T_n, \S_n}A_n&\\
\end{align*}
($P$, $P_n\in\L(\Y)^\bullet$ and $Q$, $Q_n\in \L(\X)^\bullet$, $n\in\mathbb{N}$). Note that
\begin{align*}
&\R(Q)=\T,& &\R(Q_n)=\T_n,& &\N(P)=\S,& &\N(P_n)=\S_n.&\\
\end{align*}
It is evident that statement (i) implies statement (ii).
Now, according to \cite[Lemma 3.3]{kr1}, if $(P_n)_{n\in\mathbb{N}}$ converges to $P$
(respectively $(Q_n)_{n\in\mathbb{N}}$ converges to $Q$), then $(\hat{\delta}(\S_n, \S))_{n\in\mathbb{N}}$ (respectively $(\hat{\delta}(\T_n, \T))_{n\in\mathbb{N}}$)
converges to $0$. Thus, statement (ii) implies statement (iii) (respectively statement (iv)), which in turn implies statement (v).
Suppose that statement (v) holds. If $A^{(2)}_{\T, \S}=0$, then $\T=0$ and $\S=\Y$. In particular, $(\hat{\delta}(\T_n, 0))_{n\in\mathbb{N}}$ converges to $0$.
However, according to \cite[Chapter 2, Section 2, Subsection 1]{K}, if $\T_n\neq 0$, $\hat{\delta}(\T_n, 0)=1$. Thus, there exists $n_0\in\mathbb{N}$
such that for each $n\ge n_0$, $\T_n=0$, which implies that $(A_n)^{(2)}_{\T_n, \S_n}=0$ ($n\ge n_0$). To conclude the proof, assume that $A^{(2)}_{\T, \S}\neq 0$ and
apply \cite[Theorem 3.5]{DX}.
\end{proof}
In the following result the Moore-Penrose inverse will be used to characterize the continuity of the outer inverse \OIP. Although it will not be used in this article, recall
that given a Banach space $\X$ and $P$ and $Q\in\L(\X)^\bullet$ such that $P$ and $Q$ are hermitian, if $\R (P)=\R(Q)$, then $P=Q$ (\hspace{-1pt}\cite[Theorem 2.2]{P}).
In particular, given a subspace $\M\subseteq \X$, there exists at most one hermitian idempotent $R$ such that $\R(R)=\M$.
\begin{thm}\label{thm7.2} Let $\X$ and $\Y$ be Banach spaces and consider $A\in\L(\X, \Y)$ and two subspaces $\T\subseteq \X$
and $\S\subseteq \Y$ such that $A^{(2)}_{\T, \S}$ exists. Let $(A_n)_{n\in\mathbb{N}}\subset\L(\X, \Y)$ and consider $(\T_n)_{n\in\mathbb{N}}$ and $(\S_n)_{n\in\mathbb{N}}$
two sequences of subspaces of $\X$ and $\Y$, respectively, such that $(A_n)^{(2)}_{\T_n, \S_n}$ exists, for each $n\in\mathbb{N}$.
Suppose, in addition, that there exist hermitian idempotents $U\in\L(\X)$ \rm (\it respectively $V\in\L(\Y)$\rm ) \it and $U_n\in\L(\X)$ \rm(\it respectively $V_n\in\L(\Y)$\rm ) \it
such that $\R(U)=\T$ and $\R(U_n)=\T_n$ \rm (\it respectively $\N(V)=\S$ and $\N(V_n)=\S_n$\rm ), \it $n\in\mathbb{N}$. If $(A_n)_{n\in\mathbb{N}}$ converges to $A$,
then the following statements are equivalent.
\begin{enumerate}[{\rm (i)}]
\item The sequence $((A_n)^{(2)}_{\T_n, \S_n})_{n\in\mathbb{N}}$ converges to $A^{(2)}_{\T, \S}$.
\item The sequence $((A_n)^{(2)}_{\T_n, \S_n}A_n)_{n\in\mathbb{N}}$ converges to $A^{(2)}_{\T, \S}A$ and the sequences $(V(I-V_n))_{n\in\mathbb{N}}$ and $(V_n(I-V))_{n\in\mathbb{N}}$ converge to 0.
\item The sequence $(A_n(A_n)^{(2)}_{\T_n, \S_n})_{n\in\mathbb{N}}$ converges to $AA^{(2)}_{\T, \S}$ and the sequences $((I-U)U_n)_{n\in\mathbb{N}}$ and $((I-U_n)U)_{n\in\mathbb{N}}$
converge to 0.
\item The sequences $(V(I-V_n))_{n\in\mathbb{N}}$, $(V_n(I-V))_{n\in\mathbb{N}}$, $((I-U)U_n)_{n\in\mathbb{N}}$ and $((I-U_n)U)_{n\in\mathbb{N}}$ converge to 0.
\end{enumerate}
\end{thm}
\begin{proof} Note that $\hat{\delta}(\T_n, \T)=\hat{\delta}(\R (U_n), \R(U))$. Thus, according to \cite[Lemma 2.2]{V2}, the sequence $(\hat{\delta}(\T_n, \T))_{n\in\mathbb{N}}$
converges to 0 if and only if the sequences $((I-U)U_n)_{n\in\mathbb{N}}$ and $((I-U_n)U)_{n\in\mathbb{N}}$ converge to 0. Similarly, since $\hat{\delta}(\S_n, \S)=\hat{\delta}(\R (I-V_n), \R(I-V))$,
according to \cite[Lemma 2.2]{V2}, the sequence $(\hat{\delta}(\S_n, \S))_{n\in\mathbb{N}}$
converges to 0 if and only if the sequences $(V(I-V_n))_{n\in\mathbb{N}}$ and $(V_n(I-V))_{n\in\mathbb{N}}$ converge to 0. To conclude the proof, apply Theorem \ref{thm7.1}.
\end{proof}
Next the continuity of the outer inverse \OIP will be studied in the context of Hilbert spaces.
\begin{thm}\label{thm7.3} Let $\H$ and $\K$ be Hilbert spaces and consider $A\in\L(\H, \K)$ and two subspaces $\T\subseteq \H$
and $\S\subseteq \K$ such that $A^{(2)}_{\T, \S}$ exists. Let $(A_n)_{n\in\mathbb{N}}\subset\L(\H, \K)$ and consider $(\T_n)_{n\in\mathbb{N}}$ and $(\S_n)_{n\in\mathbb{N}}$
two sequences of subspaces of $\H$ and $\K$, respectively, such that $(A_n)^{(2)}_{\T_n, \S_n}$ exists, for each $n\in\mathbb{N}$.
If $(A_n)_{n\in\mathbb{N}}$ converges to $A$,
then the following statements are equivalent.
\begin{enumerate}[{\rm (i)}]
\item The sequence $((A_n)^{(2)}_{\T_n, \S_n})_{n\in\mathbb{N}}$ converges to $A^{(2)}_{\T, \S}$.
\item The sequences $((A_n)^{(2)}_{\T_n, \S_n}A_n)_{n\in\mathbb{N}}$ and $(A_n(A_n)^{(2)}_{\T_n, \S_n})_{n\in\mathbb{N}}$ converge to $A^{(2)}_{\T, \S}A$ and $AA^{(2)}_{\T, \S}$, respectively.
\item The sequence $((A_n)^{(2)}_{\T_n, \S_n}A_n)_{n\in\mathbb{N}}$ \rm (\it respectively $(P_{\S_n^\perp}^\perp)_{n\in\mathbb{N}}$\rm ) \it converges to $A^{(2)}_{\T, \S}A$ \rm (\it respectively to
$P_{\S^\perp}^\perp$\rm).\it
\item The sequence $(A_n(A_n)^{(2)}_{\T_n, \S_n})_{n\in\mathbb{N}}$ \rm(\it respectively $(P_{\T_n}^\perp)_{n\in\mathbb{N}}$\rm) \it converges to $AA^{(2)}_{\T, \S}$ \rm (\it respectively to $P_{\T}^\perp$\rm).\it
\item The sequences $(P_{\S_n^\perp}^\perp)_{n\in\mathbb{N}}$ and $(P_{\T_n}^\perp)_{n\in\mathbb{N}}$ converge to $P_{\S^\perp}^\perp$ and $P_{\T}^\perp$, respectively.
\end{enumerate}
\end{thm}
\begin{proof} Note that
\begin{align*}
&U=P_{\T}^\perp,& &U_n=P_{\T_n}^\perp,& &V=P_{\S^\perp}^\perp,& &V_n=P_{\S_n^\perp}^\perp,&\\
\end{align*}
satisfy the hypotheses of Theorem \ref{thm7.2}. First it will be proved that the sequences $((I-U)U_n)_{n\in\mathbb{N}}$ and $((I-U_n)U)_{n\in\mathbb{N}}$
converge to 0 if and only if the sequence $(P_{\T_n}^\perp)_{n\in\mathbb{N}}$ converges to $P_{\T}^\perp$.
Since $U$, $U_n\in\L(\H)$ are self-adjoint, the sequences $((I-U)U_n)_{n\in\mathbb{N}}$ and $((I-U_n)U)_{n\in\mathbb{N}}$
converge to 0 if and only if the sequences $(U_n(I-U))_{n\in\mathbb{N}}$ and $(U(I-U_n))_{n\in\mathbb{N}}$
converge to 0. In particular, the sequences $((I-U)U_nU)_{n\in\mathbb{N}}$, $((I-U)U_n(I-U))_{n\in\mathbb{N}}$ and $(UU_n(I-U))_{n\in\mathbb{N}}$
converge to 0. In addition, since $(U(I-U_n))_{n\in\mathbb{N}}$ converges to 0, the sequence $(UU_nU)_{n\in\mathbb{N}}$ converges to $U$.
However, since
$$
U_n=UU_nU +UU_n(I-U)+(I-U)U_nU+(I-U)U_n(I-U),
$$
the sequence $(P_{\T_n}^\perp)_{n\in\mathbb{N}}$ converges to $P_{\T}^\perp$.
A similar argument proves that the sequences $(V(I-V_n))_{n\in\mathbb{N}}$ and $(V_n(I-V))_{n\in\mathbb{N}}$ converge to 0
if and only if the sequence $(P_{\S_n^\perp}^\perp)_{n\in\mathbb{N}}$ converges to $P_{\S^\perp}^\perp$.
To conclude the proof, apply Theorem \ref{thm7.1} and Theorem \ref{thm7.2}.
\end{proof}
To study the differentiation of the outer inverse \OIP, it is necessary to present a preliminary result first.
\begin{lem}\label{lem7.4} Let $\X$ and $\Y$ be two Banach spaces and consider $A$, $B\in\L(\X, \Y)$. Let $\T$, $\V\subseteq \X$ and $\S$, $\U\subseteq \Y$
be two pairs of subspaces such that $A^{(2)}_{\T, \S}$ and $B^{(2)}_{\V, \U}$ exist and consider idempotents $P_{\T}$, $P_{\V}\in\L (\X)$ and
$P_{\S}$, $P_{\U}\in\L(\Y)$ such that $\R(P_{\T})=\T$, $\R(P_{\V})=\V$, $\R(P_{\S})=\S$ and $\R(P_{\U})=\U$. Then
\begin{align*}
B^{(2)}_{\V, \U}-A^{(2)}_{\T, \S}=&B^{(2)}_{\V, \U}(P_{\S}-P_{\U})(I_{\Y}-AA^{(2)}_{\T, \S})+(I_{\X}-B^{(2)}_{\V, \U}B)(P_{\V}-P_{\T})A^{(2)}_{\T, \S}\\
& -B^{(2)}_{\V, \U}(B-A)A^{(2)}_{\T, \S}.\\
\end{align*}
\end{lem}
\begin{proof} Let $P_{\T}$, $P_{\V}\in \L(\X)^\bullet$ be such that $\R(P_{\T})=\T$ and $\R(P_{\V})=\V$. According to \cite[Lemma 1]{LYZW},
\begin{align*}
&(I_{\X}-B^{(2)}_{\V, \U}B)P_{\V}=0,& &P_{\T}A^{(2)}_{\T, \S}=A^{(2)}_{\T, \S}.&\\
\end{align*}
Then,
\begin{align*}
&B^{(2)}_{\V, \U}BA^{(2)}_{\T, \S}-A^{(2)}_{\T, \S}= -(I_{\X}-B^{(2)}_{\V, \U}B)P_{\T}A^{(2)}_{\T, \S}= (I_{\X}-B^{(2)}_{\V, \U}B)(P_{\V}-P_{\T})A^{(2)}_{\T, \S}.& \\
\end{align*}
Now consider $P_{\S}$, $P_{\U}\in\L({\Y})^\bullet$ such that $\R(P_{\S})=\S$ and $\R(P_{\U})=\U$. According to \cite[Lemma 1]{LYZW},
\begin{align*}
&(I_{\Y}-P_{\S})(I_{\Y}-AA^{(2)}_{\T, \S})=0,& &B^{(2)}_{\V, \U}=B^{(2)}_{\V, \U}(I_{\Y}-P_{\U}).&
\end{align*}
Consequently,
\begin{align*}
B^{(2)}_{\V, \U}-B^{(2)}_{\V, \U}AA^{(2)}_{\T, \S}&=B^{(2)}_{\V, \U}(I_{\Y}-P_{\U})(I_{\Y}-AA^{(2)}_{\T, \S})=B^{(2)}_{\V, \U}((I_{\Y}-P_{\U})-(I_{\Y}-P_{\S}) )(I_{\Y}-AA^{(2)}_{\T, \S})\\
&=B^{(2)}_{\V, \U}(P_{\S}-P_{\U})(I_{\Y}-AA^{(2)}_{\T, \S}).\\
\end{align*}
Therefore,
\begin{align*}
B^{(2)}_{\V, \U}-A^{(2)}_{\T, \S}&= B^{(2)}_{\V, \U}(P_{\S}-P_{\U})(I_{\Y}-AA^{(2)}_{\T, \S}) +B^{(2)}_{\V, \U}AA^{(2)}_{\T, \S}\\
&\hskip.4truecm +(I_{\X}-B^{(2)}_{\V, \U}B)(P_{\V}-P_{\T})A^{(2)}_{\T, \S}-B^{(2)}_{\V, \U}BA^{(2)}_{\T, \S}\\
&= B^{(2)}_{\V, \U}(P_{\S}-P_{\U})(I_{\Y}-AA^{(2)}_{\T, \S})+(I_{\X}-B^{(2)}_{\V, \U}B)(P_{\V}-P_{\T})A^{(2)}_{\T, \S}\\
&\hskip.4truecm -B^{(2)}_{\V, \U}(B-A)A^{(2)}_{\T, \S}.\\
\end{align*}
\end{proof}
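Note that when $\V=\T$, $\U=\S$ and the same idempotents $P_{\T}$ and $P_{\S}$ are chosen for both operators, the first two summands in Lemma \ref{lem7.4} vanish and the identity reduces to the resolvent-type formula
$$
B^{(2)}_{\T, \S}-A^{(2)}_{\T, \S}=-B^{(2)}_{\T, \S}(B-A)A^{(2)}_{\T, \S}
$$
for outer inverses with the same prescribed range and null space.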
In what follows, the differentiation of the outer inverse \OIP will be studied.
\begin{thm}\label{thm7.5} Let $\X$ and $\Y$ be Banach spaces and consider $J\subseteq \mathbb{R}$ an open set.
Suppose that there exist functions ${\mathbf A}\colon J\to \L(\X, \Y)$, ${\mathbf P}\colon J\to \L(\X)^\bullet$ and ${\mathbf Q}\colon J\to\L(\Y)^\bullet$
such that for each $t\in J$, $({\mathbf A}(t))^{(2)}_{\R({\mathbf P}(t)),\R({\mathbf Q}(t))}$ exists. If the functions ${\mathbf A}$, ${\mathbf P}$
and ${\mathbf Q}$ are differentiable at $t_0$, then the following statements are equivalent.
\begin{enumerate}[\rm (i)]
\item The function ${\mathbf A}^{(2)}_{{\mathbf P},{\mathbf Q}}\colon J\to \L(\Y, \X)$ is continuous at $t_0$,
\item the function ${\mathbf A}^{(2)}_{{\mathbf P},{\mathbf Q}}\colon J\to \L(\Y, \X)$ is differentiable at $t_0$,
\end{enumerate}
where ${\mathbf A}^{(2)}_{{\mathbf P},{\mathbf Q}}(t)=({\mathbf A}(t))^{(2)}_{\R({\mathbf P}(t)),\R({\mathbf Q}(t))}$.\par
\noindent Furthermore,
\begin{align*}
({\mathbf A}^{(2)}_{{\mathbf P},{\mathbf Q}})'(t_0)=&-{\mathbf A}^{(2)}_{{\mathbf P},{\mathbf Q}}(t_0){\mathbf Q}'(t_0)(I_{\Y}-{\mathbf A}(t_0){\mathbf A}^{(2)}_{{\mathbf P},{\mathbf Q}}(t_0))
+(I_{\X}-{\mathbf A}^{(2)}_{{\mathbf P},{\mathbf Q}}(t_0){\mathbf A}(t_0)){\mathbf P}'(t_0){\mathbf A}^{(2)}_{{\mathbf P},{\mathbf Q}}(t_0)\\
&-{\mathbf A}^{(2)}_{{\mathbf P},{\mathbf Q}}(t_0){\mathbf A}'(t_0){\mathbf A}^{(2)}_{{\mathbf P},{\mathbf Q}}(t_0).\\
\end{align*}
\end{thm}
\begin{proof} It is enough to prove that statement (i) implies statement (ii). This proof can be derived from Lemma \ref{lem7.4}. In fact, according to this result,
\begin{align*}
{\mathbf A}^{(2)}_{{\mathbf P},{\mathbf Q}}(t)-{\mathbf A}^{(2)}_{{\mathbf P},{\mathbf Q}}(t_0)=&-{\mathbf A}^{(2)}_{{\mathbf P},{\mathbf Q}}(t)({\mathbf Q}(t)-
{\mathbf Q}(t_0))(I_{\Y}-{\mathbf A}(t_0){\mathbf A}^{(2)}_{{\mathbf P},{\mathbf Q}}(t_0))\\
&+(I_{\X}-{\mathbf A}^{(2)}_{{\mathbf P},{\mathbf Q}}(t){\mathbf A}(t))({\mathbf P}(t)-{\mathbf P}(t_0)){\mathbf A}^{(2)}_{{\mathbf P},{\mathbf Q}}(t_0)\\
&-{\mathbf A}^{(2)}_{{\mathbf P},{\mathbf Q}}(t)({\mathbf A}(t)-{\mathbf A}(t_0)){\mathbf A}^{(2)}_{{\mathbf P},{\mathbf Q}}(t_0).\\
\end{align*}
\noindent Now divide each term by $t-t_0$ and note that the limit leads to the formula for $({\mathbf A}^{(2)}_{{\mathbf P},{\mathbf Q}})'(t_0)$.
\end{proof}
In the Hilbert space operators context, Theorem \ref{thm7.5} can be reformulated as follows.
\begin{cor}\label{cor7.6} Let $\H$ and $\K$ be Hilbert spaces and consider $J\subseteq \mathbb{R}$ an open set.
Suppose that there exist functions ${\mathbf A}\colon J\to \L(\H, \K)$, ${\mathbf P}^\perp\colon J\to \L(\H)^\bullet$ and ${\mathbf Q}^\perp\colon J\to\L(\K)^\bullet$
such that for each $t\in J$, ${\mathbf P}^\perp(t)\in\L(\H)$
and ${\mathbf Q}^\perp(t)\in\L(\K)$ are orthogonal idempotents and $({\mathbf A}(t))^{(2)}_{\R({\mathbf P}^\perp(t)),\R({\mathbf Q}^\perp(t))}$ exists. If the functions ${\mathbf A}$, ${\mathbf P}^\perp$
and ${\mathbf Q}^\perp$ are differentiable at $t_0$, then the following statements are equivalent.
\begin{enumerate}[\rm (i)]
\item The function ${\mathbf A}^{(2)}_{{\mathbf P}^\perp,{\mathbf Q}^\perp}\colon J\to \L(\K, \H)$ is continuous at $t_0$,
\item the function ${\mathbf A}^{(2)}_{{\mathbf P}^\perp,{\mathbf Q}^\perp}\colon J\to \L(\K, \H)$ is differentiable at $t_0$,
\end{enumerate}
where ${\mathbf A}^{(2)}_{{\mathbf P}^\perp,{\mathbf Q}^\perp}(t)=({\mathbf A}(t))^{(2)}_{\R({\mathbf P}^\perp(t)),\R({\mathbf Q}^\perp(t))}$.\par
\noindent Furthermore,
\begin{align*}
({\mathbf A}^{(2)}_{{\mathbf P}^\perp,{\mathbf Q}^\perp})'(t_0)=&-{\mathbf A}^{(2)}_{{\mathbf P}^\perp,{\mathbf Q}^\perp}(t_0)({\mathbf Q}^\perp)'(t_0)(I_{\K}-{\mathbf A}(t_0)
{\mathbf A}^{(2)}_{{\mathbf P}^\perp,{\mathbf Q}^\perp}(t_0))\\
&+(I_{\H}-{\mathbf A}^{(2)}_{{\mathbf P}^\perp,{\mathbf Q}^\perp}(t_0){\mathbf A}(t_0))({\mathbf P}^\perp)'(t_0){\mathbf A}^{(2)}_{{\mathbf P}^\perp,{\mathbf Q}^\perp}(t_0)\\
&-{\mathbf A}^{(2)}_{{\mathbf P}^\perp,{\mathbf Q}^\perp}(t_0){\mathbf A}'(t_0){\mathbf A}^{(2)}_{{\mathbf P}^\perp,{\mathbf Q}^\perp}(t_0).\\
\end{align*}
\end{cor}
\begin{proof} Apply Theorem \ref{thm7.5} to the case under consideration.
\end{proof}
| {
"timestamp": "2017-10-30T01:06:33",
"yymm": "1710",
"arxiv_id": "1710.10065",
"language": "en",
"url": "https://arxiv.org/abs/1710.10065",
"abstract": "In this article properties of the $(b, c)$-inverse, the inverse along an element, the outer inverse with prescribed range and null space $A^{(2)}_{T, S}$ and the Moore-Penrose inverse will be studied in the contexts of Banach spaces operators, Banach algebras and $C^*$-algebras. The main properties to be considered are the continuity, the differentiability and the openness of the sets of all invertible elements defined by all the aforementioned outer inverses but the Moore-Penrose inverse. The relationship between the $(b, c)$-inverse and the outer inverse $A^{(2)}_{T, S}$ will be also characterized.",
"subjects": "Functional Analysis (math.FA)",
"title": "Further results on the $(b, c)$-inverse, the outer inverse $A^{(2)}_{T, S}$ and the Moore-Penrose inverse in the Banach context",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.987375047746414,
"lm_q2_score": 0.7185943925708562,
"lm_q1q2_score": 0.7095221726749545
} |
https://arxiv.org/abs/2208.10223 | Boundary representations of mapping class groups | Let $S = S_g$ be a closed orientable surface of genus $g \geq 2$ and $Mod(S)$ be the mapping class group of $S$. In this paper, we show that the boundary representation of $Mod(S)$ is ergodic using statistical hyperbolicity, which generalizes the classical result of Masur on ergodicity of the action of $Mod(S)$ on the projective measured foliation space $\mathcal{PMF}(S).$ As a corollary, we show that the boundary representation of $Mod(S)$ is irreducible. | \section{Introduction}
\noindent Let $S = S_{g}$ be a closed, connected, orientable surface of genus $g$. Recall that the mapping class group $\M(S)$ of $S$ is defined to be the group of isotopy classes of orientation-preserving homeomorphisms of $S$. Throughout this paper, the genus $g$ is assumed to be at least $2$. {\it The space of measured foliations} $\MF(S)$ is the set of equivalence classes of non-zero measured foliations on $S$. The mapping class group $\M(S)$ acts on $\MF(S)$ and preserves a Radon measure $\nu$, called the {\it Thurston measure} on $\MF(S)$. Moreover, the space $\MF(S)$ is equipped with an $\mathbb{R}_{+}-$action that commutes with the $\M(S)-$action. Therefore, $\M(S)$ acts on the quotient $\PMF(S)$, called {\it the projective measured foliation space}, of $\MF(S)$ by $\mathbb{R}_{+}$ preserving the measure class $[\nu]$, called the Thurston measure (class) on $\PMF(S)$, defined by the Thurston measure on $\MF(S)$.\\
\noindent One motivation of this paper is to use geometric objects, such as $\MF(S)$ and $\PMF(S)$, to understand unitary representations of $\M(S)$ (see, for instance, \cite{ma2021family},\cite{Paris2002},\cite{Costantino-Martelli14} for related topics).\\
\noindent \textbf{Main results.}~~~Recall that, for a probability measure class preserving action of $G$ on $(X,[\nu])$, one defines a unitary representation of $G$ on $L^2(X,\nu)$, called a {\it quasi-regular representation} (see Section \ref{subsection:quasi-regularrepresentation} for definitions). Hence, for a probability measure class preserving ergodic action, it is natural to ask whether the quasi-regular representation is irreducible. Recall that a unitary representation is called \textit{irreducible} if it has no nontrivial closed invariant subspaces. Notice that this is not true for a measure preserving ergodic action as it always has $\mathbb{C}\mathds{1}_{X}$ as a nontrivial closed invariant subspace. For the ergodic action of $\M(S)$ on $\PMF(S)$ with respect to $[\nu]$, we prove:
\begin{theorem}[See Corollary \ref{corollary:mappingclassgroupirreducibility}]
Let $S = S_g$ be a closed surface of genus $g \geq 2$. The quasi-regular unitary representation of the mapping class group $\M(S)$ on $L^2(\PMF(S),\nu)$, the space of square integrable functions on $\PMF(S)$ with respect to the Thurston measure $\nu$, is irreducible.
\end{theorem}
\noindent This theorem is a corollary of an ergodic-type theorem for quasi-regular representations, and it is this ergodic-type theorem that is of greater interest and has many corollaries.\\
\noindent Given an action of a discrete group $G$ on a Borel probability space $(X,\mu)$ preserving the measure class $[\mu]$, denote the associated quasi-regular unitary representation $G \curvearrowright L^2(X,\mu)$ by $\pi_{\mu}$ and the projection to the subspace of constant functions by $P_{\mathds{1}_X}$. The representation $\pi_{\mu}$ is said to be \textit{ergodic with respect to $(K_n,e_n,f)$}, where $K_n \subset G$ is a finite subset with $|K_n| \to \infty$, $e_n: K_n \to X$ is a map and $f$ is a bounded Borel function on $X$, if we have the following convergence in the weak operator topology:
$$\frac{1}{|K_n|} \sum_{g \in K_n} f(e_n(g))\frac{\pi_{\mu}(g)}{\langle \pi_{\mu}(g)\mathds{1}_X,\mathds{1}_X\rangle} \to fP_{\mathds{1}_X}.$$
\noindent The ergodicity of representations generalizes the ergodicity of group actions and it in fact implies irreducibility \cite[Proposition 2.5]{BLP}. Our main result is then:
\begin{theorem}[See Theorem \ref{theoreom:ergodicitymcg}]\label{intro:maintheorem}
There exist a sequence of finite subsets $\{E_n\}$ of $\M(S)$ and maps $e_n: E_n \to \PMF(S)$ such that, for every bounded Borel function $f$, the quasi-regular representation $\pi_{\nu}$, with respect to the Thurston measure $\nu$, is ergodic with respect to $(E_n,e_n,f)$.
\end{theorem}
\noindent Thus we obtain a generalization of Masur's classical result on ergodicity of the action $\M(S) \curvearrowright \PMF(S)$ \cite{Masur_ergodic}. One of the key steps in our proof is the following result concerning matrix coefficients of representations, which might be of independent interest.
\begin{theorem}[See Theorem \ref{Harish-Chandra} for the precise statement]
There exist a sequence of finite subsets $\{ E_n\}$ of $\M(S)$ for $n \gg 0$ with exponential growth and constants $a_1 > 0, a_2 > 0,b_1, b_2,c_1 >0$ such that for every $g \in E_n,$ $$
(a_1n-c_1\ln \ln n +b_1)e^{-\frac{h}{2}n} \leq \langle \pi_{\nu}(g)\mathds{1}_{\PMF(S)},\mathds{1}_{\PMF(S)}\rangle \leq (a_2n+b_2)e^{-\frac{h}{2}n}. $$
\end{theorem}
\noindent The above theorem should be compared with \cite[Proposition 3.2]{Boyer17}. Finally, we remark that, for this quasi-regular representation of $\M(S)$, we can also show that it is \textit{tempered}, meaning that it is weakly contained in the regular representation of $\M(S)$ on $L^2(\M(S))$ (see Proposition \ref{proposition:tempered}).\\
\noindent \textbf{Historical remarks.}~~~The main theorem is related to a question of Bader-Muchnik in the context of random walks on groups. Namely, let $G$ be a discrete group and $\mu$ be a probability measure on $G$. Let $(\partial G,\nu)$ be the Poisson boundary of $G$ associated to the $\mu-$random walk on $G$. Then the measure class $[\nu]$ is $G-$invariant, hence defines a quasi-regular representation of $G$ on $L^2(\partial G, \nu)$. In \cite{BaderMuchnik}, inspired by the cases of free groups and lattices in Lie groups, Bader-Muchnik proposed the following conjecture:
\begin{conjecture}[\cite{BaderMuchnik}]
For a locally compact group $G$ and a spread-out probability measure $\mu$ on $G$, the quasi-regular representation associated to the $\mu-$Poisson boundary of $G$ is irreducible.
\end{conjecture}
\noindent We now mention briefly some progress toward this conjecture. As mentioned above, this conjecture is true for certain random walks on free groups and lattices in Lie groups (see \cite{BaderMuchnik} and references therein). Hence it is true for the mapping class group $\M(S) = SL(2,\mathbb{Z})$ of the closed surface of genus one acting on $\PMF(S) = S^1$ with respect to the Lebesgue measure, which is identified with the Thurston measure on $\PMF(S)$. Notice that all identifications are $\M(S)-$equivariant. For lattices in Lie groups, one can also deduce the irreducibility from the ergodicity of the associated quasi-regular representation (see \cite{BLP}). The conjecture was verified in \cite{BaderMuchnik} by Bader-Muchnik for fundamental groups of compact negatively curved manifolds with respect to the Patterson-Sullivan measure. Their result has been further generalized by Garncarek to hyperbolic groups with respect to the Patterson-Sullivan measure \cite{garncarek2016boundary}, and by Boyer to certain discrete subgroups of the group of isometries of a $CAT(-1)$ space with non-arithmetic spectrum \cite{Boyer17}. Note that in all cases above, the Patterson-Sullivan measure on the Gromov boundary coincides with the Poisson boundary of $(G,\mu)$ for some probability measure $\mu$ on $G$. However, Bj\"{o}rklund-Hartman-Oppelmayer \cite{bjorklund2020random} recently showed that certain random walks on lamplighter groups and solvable Baumslag-Solitar groups provide counterexamples to this conjecture.\\
\noindent The relationship between the main theorem and the above progress is the following. On the one hand, there is a long and fruitful history of exploiting similarities between mapping class groups and hyperbolic groups. To name only a few among a massive literature, we mention \cite{MasurWolf}, \cite{MasurMinsky99} and \cite{Hamenstaedt2009a}. On the other hand, by \cite{ABEM}, the Thurston measure on $\PMF(S)$ is the Patterson-Sullivan measure on the Teichm\"{u}ller boundary (more precisely, the Gardiner-Masur boundary, but this is irrelevant for our purpose) of the Teichm\"{u}ller space of $S$, which puts us in a situation similar to the previously known cases. \\
\noindent {\bf Strategy of the proof.} The proof of Theorem \ref{intro:maintheorem} exhibits both homogeneous and hyperbolic features of Teichm\"{u}ller spaces. On the one hand, by regarding the Teichm\"{u}ller space $\T(S)$ of $S$ as a homogeneous space for $\M(S)$ and $\PMF(S)$ as its boundary, we follow the approach of Boyer-Pittet-Link \cite{BLP}, namely Theorem \ref{criterion2}, which works quite well for lattices in Lie groups. However, Teichm\"{u}ller spaces are in general not homogeneous spaces, so on the other hand we make heavy use of hyperbolic features of Teichm\"{u}ller spaces, namely the statistical hyperbolicity defined in Dowdall-Duchin-Masur \cite{DDM}. There are three main steps in the proof. The first step is to construct the desired finite subsets of $\M(S)$. This is achieved by carefully choosing elements of $\M(S)$ with enough hyperbolicity so that the cardinality of these subsets goes to infinity (in fact, we need the growth to be exponential). The subsets are described before Lemma \ref{lemma:longsegment}, relying on \cite{DDM}. The second step is the main part of this paper, namely the Harish-Chandra estimates (Theorem \ref{Harish-Chandra}). Unlike what has been done in \cite{BaderMuchnik} and \cite{Boyer17}, we deduce the Harish-Chandra estimates using Teichm\"{u}ller theory, especially extremal lengths and intersection number functions. The idea is that, instead of estimating directly, we first relate the Harish-Chandra function to integrals of intersection numbers and then use the map considered in Masur-Minsky \cite{MasurMinsky99}, which relates $\T(S)$ to the pants curves of $S$, to simplify the integrals. This is one of the novelties of this paper and yields condition (3) in Theorem \ref{criterion2} for free. The last step is to obtain uniform boundedness of the operators. We use statistical hyperbolicity here as well. As we do not have a metric structure on $\PMF(S)$ with nice regularity, we again make use of intersection numbers and, unlike \cite{Boyer17}, we complete the proof by counting lattice points in balls.
\subsection*{Acknowledgments.}
This paper is part of the author's thesis. The author would like to thank his advisor Professor Indira Chatterji for many discussions and comments, and the LJAD at Nice for its hospitality. He is also grateful to Professors Adrien Boyer, Ilya Gekhtman, Steven Kerckhoff, and Wenyuan Yang for helpful discussions. Part of the work was done while the author participated in the program \textit{Random and Arithmetic Structures in Topology} hosted by the Mathematical Sciences Research Institute in Berkeley, California, during the Fall 2020 semester. This work was supported by the China Scholarship Council (No. 201706140166) and Grant GIF I-1485-304.6/2019.
\section{Quasi-regular unitary representations}
\subsection{Quasi-regular representations of discrete groups.}\label{subsection:quasi-regularrepresentation}
In this section, we will recall ergodic quasi-regular representations and a criterion for showing ergodicity of representations. The reader is referred to \cite{BaderMuchnik},\cite{Boyer17} and \cite{BLP} for more details. \\
\noindent{\bf Quasi-regular unitary representations.}~~~ Let $G$ be a locally compact second-countable group and $X$ be a second-countable Hausdorff topological space. Let $\nu$ be a Borel probability measure on $X$. Assume that $G$ acts on $X$ by homeomorphisms and that $G$ preserves the measure class $[\nu]$ of $\nu$, namely, $G$ preserves $\nu-$null sets. Fix a representative $\nu \in [\nu]$; then for every $\gamma \in G$, the measures $\gamma_{*}\nu$ and $\nu$ are mutually absolutely continuous. Denote the corresponding Radon-Nikodym derivative by $c(\gamma,\nu) = \frac{d\gamma_{*}\nu}{d\nu}$. One can construct a unitary representation $\pi_{\nu}$ of $G$ on $L^2(X,\nu)$ as follows. For every $f \in L^2(X,\nu)$, every $x \in X$ and every $\gamma \in G $, one sets $\pi_{\nu}(\gamma)f(x) = f(\gamma^{-1}x)c(\gamma, \nu)^{\frac{1}{2}}(x).$ The representation $\pi_{\nu}$ will be called a {\it quasi-regular (unitary) representation} of $G$. We remark that if $\nu$ and $\mu$ are in the same measure class, then $\pi_{\nu}$ and $\pi_{\mu}$ are unitarily equivalent. Assume that $c(\gamma,\nu)^{\frac{1}{2}}$ is integrable with respect to $\nu$ for each $\gamma \in G$. The {\it Harish-Chandra function} $\Phi$ associated to $\pi_{\nu}$ is then defined to be the integral $$\Phi(\gamma) = \langle \pi_{\nu}(\gamma)\mathds{1}_{X}, \mathds{1}_{X}\rangle_{L^2(X,\nu)} = \int_{X}c(\gamma,\nu)^{\frac{1}{2}}(x)d\nu(x).$$
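\noindent For the reader's convenience, we record the standard change-of-variables computation showing that each operator $\pi_{\nu}(\gamma)$ is indeed unitary: for every $f \in L^2(X,\nu)$, $$\|\pi_{\nu}(\gamma)f\|^2_{L^2(X,\nu)} = \int_{X}|f(\gamma^{-1}x)|^2c(\gamma,\nu)(x)\,d\nu(x) = \int_{X}|f(\gamma^{-1}x)|^2\,d(\gamma_{*}\nu)(x) = \int_{X}|f(y)|^2\,d\nu(y) = \|f\|^2_{L^2(X,\nu)},$$ using the definition of the Radon-Nikodym derivative and the change-of-variables formula for the push-forward measure $\gamma_{*}\nu$.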
\noindent{\bf Ergodic quasi-regular representations.}~~~ From now on, we always assume that $G$ is a discrete group. Let $(X,\nu),\pi_{\nu}$ as above and $\mathcal{B}(L^2(X,\nu))$ be the Banach space of bounded operators on $L^2(X,\nu)$. Let $e_{K}: K \longrightarrow X$ be a map from a finite subset $K$ of $G$ to $X$ and $f:X \longrightarrow \mathbb{C}$ be a bounded Borel function. Consider the following elements in $\mathcal{B}(L^2(X,\nu))$:
\begin{displaymath}
\begin{aligned}
& M^{f}_{(K,e_{K})}: L^2(X,\nu) \longrightarrow L^2(X,\nu), \phi \mapsto \frac{1}{|K|}\sum_{\gamma \in K}f(e_{K}(\gamma))\frac{\pi_{\nu}(\gamma)\phi}{\Phi(\gamma)},\\
&\phantom{=\;} P_{\mathds{1}_{X}}: L^2(X,\nu) \longrightarrow L^2(X,\nu), \phi \mapsto \int_{X}\phi d\nu\mathds{1}_{X},\\
&\phantom{=\;} m(f): L^2(X,\nu) \longrightarrow L^2(X,\nu), \phi \mapsto f\phi.
\end{aligned}
\end{displaymath}
We now introduce a notion of ergodicity for quasi-regular representations which generalizes the usual ergodicity of measure class-preserving group actions. Recall that a sequence $F_n \in \mathcal{B}(L^2(X,\nu))$ {\it converges} to $F \in \mathcal{B}(L^2(X,\nu))$, written as $F_n \rightarrow F$, in the weak operator topology if, for every $\phi, \psi \in L^2(X,\nu)$, $\lim_{n \rightarrow \infty}\langle F_n(\phi),\psi \rangle_{L^2} = \langle F(\phi),\psi \rangle_{L^2}$.
\begin{definition}[\cite{BLP}]\label{definition:ergodicityofrepresentation}
Let $G, (X,\nu),\pi_{\nu}$ and $f$ be as above. Suppose that for every $n \in \mathbb{N}$, there is a pair $(K_n, e_n: K_n \longrightarrow X)$ such that $K_n$ is a finite subset of $G$ and such that $|K_n| \rightarrow \infty$ as $n \rightarrow \infty$. The representation $\pi_{\nu}$ is called ergodic with respect to $(K_n, e_n)$ and $f$, if we have the following convergence in the weak operator topology: $$M^{f}_{(K_n,e_{n})} \rightarrow m(f)P_{\mathds{1}_{X}}.$$
\end{definition}
\begin{remark}\label{remark:quasiregularirreducible}
It is easy to see that the ergodicity of a measure class-preserving group action is weaker than the ergodicity of the associated quasi-regular representation. One may refer to \cite[Proposition 2.5]{BLP} for a proof.
\end{remark}
\noindent The following criterion for the ergodicity of a quasi-regular representation is essentially contained in \cite{BaderMuchnik} and summarized in \cite{BLP}.
\begin{theorem}[\cite{BLP} Theorem 2.2]\label{criterion}
Let $G$ and $(X, \nu)$ be as above and $\pi_{\nu}$ be the associated quasi-regular representation of $G$ on $L^2(X,\nu)$. Let $L$ be a length function on $G$ and let $(X,d)$ be a metric space inducing the topology of $X$. For every $n \in \mathbb{N}$, let $E_n$ be a symmetric finite subset of $G$, that is $E_n = E_n^{-1}$, and $e_n : E_n \longrightarrow X$ be a map. Assume that the following conditions hold:
\begin{itemize}
\item[(1)] for every $g \in G$, $\|\pi_{\nu}(g)\mathds{1}_X\|_{L^{\infty}(X,\nu)} < \infty$,
\item[(2)] $\lim_{n \rightarrow \infty}|E_n| = \infty$,
\item[(3)] for all Borel subsets $W,V \subset X$ such that $\nu(\partial W) = \nu(\partial V) = 0$, $$\limsup_{n \rightarrow \infty}\frac{1}{|E_n|}|\left\{\gamma \in E_n: e_n(\gamma^{-1})\in W ~\mbox{and}~e_n(\gamma)\in V \right\}| \leq \nu(W)\nu(V),$$
\item[(4)] for every $r \ge 0$, there is a non-increasing function $h_{r}: [0,\infty) \longrightarrow [0,\infty) $ such that $\lim_{s \rightarrow \infty}h_{r}(s) = 0$ and such that $$\forall n \in \mathbb{N}, \forall \gamma \in E_n, \frac{\langle \pi_{\nu}(\gamma)\mathds{1}_{X}, \mathds{1}_{\{x\in X: d(x,e_n(\gamma))\geq r\}}\rangle_{L^2}}{\Phi(\gamma)} \leq h_{r}(L(\gamma)),$$
\item[(5)] $$\sup_{n}\left\|M^{\mathds{1}_{X}}_{E_n}\mathds{1}_{X}\right\|_{L^{\infty}(X,\nu)} < \infty.$$
\end{itemize}
Then the quasi-regular representation $\pi_{\nu}$ is ergodic with respect to $(E_n, e_n)$ and any $f \in \overline{H}^{L^{\infty}(X,\nu)}$, where $H$ is the vector space generated by $$ \{\mathds{1}_{U}:\nu(\partial U) = 0 ~\mbox{and}~ U ~\mbox{is a Borel subset of}~ X\}.$$
\end{theorem}
\begin{remark}
Thanks to the condition (1), the Harish-Chandra function is defined for each $\gamma$ in $G$.
\end{remark}
\begin{proposition}[\cite{BaderMuchnik}]\label{Irre}
Under the assumptions in the above theorem, if moreover $\nu$ is a Radon measure, then $\pi_{\nu}$ is irreducible.
\end{proposition}
\subsection{Boundary representations of mapping class groups.}
We now consider quasi-regular representations of mapping class groups and state our main theorem. For more on mapping class groups and Teichm\"{u}ller theory, we refer to \cite{Farb2012}, \cite{ABEM} and \cite{Kerckhoff80}. \\
\noindent{\bf Mapping class groups and Teichm\"{u}ller spaces.}~~~ Let $S =S_{g}$ be a genus $g$, closed, connected, orientable surface. We always assume that $g \geq 2$. All arguments here work for hyperbolic surfaces with punctures as well. The {\it mapping class group} $\M(S)$ of $S$ is the group of isotopy classes of orientation-preserving homeomorphisms of $S$. Namely, if the group of orientation-preserving homeomorphisms of $S$ is denoted by ${\rm Homeo}^{+}(S)$ and the group of homeomorphisms of $S$ that are isotopic to the identity is denoted by ${\rm Homeo}_{0}(S)$, then $$ \M(S) = {\rm Homeo}^{+}(S)/ {\rm Homeo}_{0}(S).$$
\noindent We remark here that mapping class groups of surfaces are finitely presented and will be regarded as discrete groups. The {\it Teichm\"{u}ller space} $\T(S)$ of $S$ is the space of isotopy classes of marked hyperbolic structures on $S$. The Teichm\"{u}ller space $\T(S)$ is homeomorphic to $\mathbb{R}^{6g-6}$ and the mapping class group $\M(S)$ acts on $\T(S)$ by changing markings. The quotient $\mathcal{M}(S) = \T(S)/\M(S) $ is the {\it moduli space} of $S$. There are several distances on $\T(S)$ for which $\M(S)$ acts by isometries, and the one that we will use is the {\it Teichm\"{u}ller distance} $d = d_{T}$. It is defined as follows: for $\mathcal{X}=[(X,\phi)],\mathcal{Y}=[(Y,\psi)] \in \T(S)$, $d(\mathcal{X},\mathcal{Y}) = \frac{1}{2}\log K_f$, where $f: X \longrightarrow Y$ is the Teichm\"{u}ller mapping in the isotopy class of $\psi\circ\phi^{-1}$, namely the quasi-conformal homeomorphism with minimal dilatation in that isotopy class (locally of the form $x+iy \mapsto e^{t}x + ie^{-t}y$), and $K_f$ is the dilatation of $f$. It is obvious that $\M(S) \subset Isom(\T(S),d)$. Neither the Teichm\"{u}ller space $\T(S)$ nor $\mathcal{M}(S)$ is compact.\\
\noindent{\bf Measured foliations.}~~~ The Teichm\"{u}ller space can be compactified in several ways. The compactification we will use in this paper is the Teichm\"{u}ller compactification. Fix a point $o \in \T(S)$ that is considered to be a Riemann surface $X$ via uniformization. A {\it holomorphic quadratic differential} $q \in {\rm H^{0}(X, \Omega_{X}^{\otimes 2})}$ on $X$ is locally of the form $q(z)dz^2$, where $q(z)$ is a holomorphic function. Define the {\it norm} of $q$ by $$\|q\| = \int_{X}|q(z)|dxdy$$ and consider the open unit ball $B^{1}(X)$ with respect to $\| \cdot \|$. The set $QD(X)$ of holomorphic quadratic differentials is a vector space and can be identified with the cotangent space of $\T(S)$ at $o$. There is a homeomorphism $\pi : B^{1}(X) \longrightarrow \T(S)$ sending each open unit ray in $QD(X)$ starting at the origin to a Teichm\"{u}ller geodesic starting at $o$. The {\it Teichm\"{u}ller compactification} is then the visual compactification obtained by adding to each ray its endpoint in the unit sphere of $QD(X)$. The Teichm\"{u}ller compactification will be denoted by $\overline{\T(S)}$. Thus, the boundary $\partial \overline{\T(S)}$ of $\overline{\T(S)}$ is the unit sphere $QD^{1}(X)$.\\
\noindent One could give a geometric description of $\partial \overline{\T(S)}$ via projective measured foliations. A {\it measured foliation} on $S$ is a singular foliation of $S$ endowed with a transverse measure. The space $\MF(S)$ of measured foliations is then the set of equivalence classes of measured foliations, where the equivalence is given by Whitehead moves and isotopy. The space $\MF(S)$, endowed with the weak topology on measures, is homeomorphic to $\mathbb{R}^{6g-6}$. The quotient of $\MF(S)$ by the natural action of $\mathbb{R}_{+}$, called {\it the projective measured foliation space} $\PMF(S)$ of $S$, is homeomorphic to the $(6g-7)-$sphere $S^{6g-7}$. Both $\MF(S)$ and $\PMF(S)$ are equipped with an $\M(S)-$action. There is a deep relation between $\MF(S)$ and $QD(X)$. Namely, for each holomorphic quadratic differential $q = q(z)dz^2$, the {\it vertical measured foliation} $\mathcal{V}(q)$ of $q$ is the foliation whose leaves are the integral curves of the tangent directions on which $q$ takes negative real values, with transverse measure given by integration of $|{\rm Re}\sqrt{q}|$. By a theorem of Hubbard-Masur, the map $\mathcal{V}$ that assigns to each holomorphic quadratic differential $q$ on $X$ the foliation $\mathcal{V}(q)$ is a homeomorphism from $QD(X)-\{0\}$ onto $\MF(S)$. The composition $\pi \circ \mathcal{V}$ of the map $\mathcal{V}: QD(X)-\{0\} \longrightarrow \MF(S)$ and the quotient map $\pi: \MF(S) \longrightarrow \PMF(S)$ gives the identification of $QD^{1}(X)$ with $\PMF(S)$. Thus, we will regard $\PMF(S)$ as the boundary of the Teichm\"{u}ller compactification of $\T(S)$. The equivalence class of $\xi \in \MF(S)$ in $\PMF(S)$ will be denoted by $[\xi]$. Any $q \in QD^1(X)$ (hence $[\mathcal{V}(q)] \in \PMF(S)$) determines a Teichm\"{u}ller geodesic ray $g_t$ starting from $o$; hence, by abuse of terminology, we will call $q$ and $[\mathcal{V}(q)]$ the {\it direction} of $g_t$ and sometimes write $g_t$ as $g^q_t$ or $\mathcal{V}(q)(t)$. \\
\noindent Any isotopy class $\gamma$ of essential simple closed curves on $S$ defines a (topological) foliation $\lambda(\gamma)$. Hence, any weighted isotopy class $c\gamma$ of essential simple closed curves defines a foliation $\lambda \in \MF(S)$. The measured foliation $\lambda$, as a topological foliation, is the same as $\lambda(\gamma)$, but the transverse measure is given by $c$. Therefore, letting $\mathcal{C}(S)$ denote the set of isotopy classes of essential simple closed curves, there is an embedding of $\mathcal{C}(S) \times \mathbb{R}_{+}$ into $\MF(S)$. The image is dense (see \cite{Thurston1988}). This embedding enables us to define three functions that we will use. The first one is the intersection number on $\MF(S)$. The intersection number $i : \MF(S) \times \MF(S)\longrightarrow \mathbb{R}_{+}$ is the unique continuous function on $\MF(S) \times \MF(S)$ that extends the geometric intersection number of two essential simple closed curves and satisfies $i(c\lambda,\xi)=ci(\lambda,\xi)$ for every $c>0$ (see \cite[Corollary 1.11]{Rees}). The second one is the extremal length. Let $o = [(X,\phi)] \in \T(S)$, where $X$ is a Riemann surface. Let $\gamma$ be the isotopy class of an essential simple closed curve. The extremal length $\E_{X}(\gamma)$ of $\gamma$ in $X$ is defined to be $$\E_{X}(\gamma) = \sup_{\rho}\ell_{\rho}(\gamma)^2,$$ where $\rho$ runs over all metrics with unit area in the conformal class of $X$ and $\ell_{\rho}(\gamma)$ is the infimum of the $\rho-$lengths of simple closed curves in the class $\gamma$. Then the extremal length $\E_{X}: \MF(S) \longrightarrow \mathbb{R}_{+}$ is the unique continuous function on $\MF(S)$ that extends the extremal length on $\mathcal{C}(S)$ and satisfies $\E_{X}(c\lambda) =c^2\E_{X}(\lambda)$ for $c \in \mathbb{R}_{+}$ (see \cite[Proposition 3]{Kerckhoff80}). Note that the extremal length is in fact defined on $\T(S) \times \MF(S)$; namely, if $[(X,\phi)] = [(Y,\psi)] \in \T(S)$, then $\E_{X}(\cdot) = \E_{Y}(\cdot)$. So we will write $\E_{o}(\cdot)$ rather than $\E_{X}(\cdot)$ for $o = [(X,\phi)]$. The third one is the hyperbolic length $\ell_{o}(\gamma)$, which is defined to be the $X-$length of the unique $X-$hyperbolic geodesic $\tilde{\gamma}$ in the isotopy class $\gamma$. The function $\ell_{o}(\cdot)$ also extends uniquely to a continuous function $\ell_{o}$ on $\MF(S)$ \cite{Kerckhoff85}. We will use the following relation (see \cite{ABEM}): given a point $o$ in $\T(S)$, there exists a constant $C = C(o)$, depending on $o$, such that $$ \forall \xi \in \MF(S), \frac{1}{C} \ell_o(\xi) \leq \sqrt{\E_{o}(\xi)} \leq C\ell_o(\xi).$$ \\
\noindent Recall that a measured foliation $\lambda$ is called {\it minimal} if it has no simple closed leaves. Two measured foliations are said to be {\it topologically equivalent} if, as topological foliations, they differ by isotopies and Whitehead moves. A measured foliation $\xi$ is called {\it uniquely ergodic} if it is minimal and any measured foliation $\zeta$ that is topologically equivalent to $\xi$ is measure equivalent to $\xi$, that is, $[\xi]=[\zeta]$. When $\xi$ is uniquely ergodic, we will call $[\xi]$ uniquely ergodic. It is well-known that the set of uniquely ergodic measured foliations has full measure with respect to the Thurston measure defined later. The following two lemmas are essential to our approach using intersection numbers.
\begin{lemma}[\cite{Rees}, Theorem 1.12 or \cite{Masur82}]\label{lemma:uniquelyergodic}
Let $\lambda$ be a uniquely ergodic measured foliation and $\eta$ be any measured foliation. Then $i(\lambda,\eta) = 0$ if and only if $[\lambda] = [\eta]$.
\end{lemma}
\begin{lemma}[Masur's criterion \cite{Masur92}]\label{lemma:masurcriterion}
Let $\epsilon > 0$. If a Teichm\"{u}ller geodesic ray $g_t$ starting from $o$ does not eventually leave $\T_{\epsilon}(S)$, then the direction of $g_t$ is uniquely ergodic.
\end{lemma}
\noindent One feature of the Teichm\"{u}ller compactification is that the action of $\M(S)$ cannot be extended continuously to $\overline{\T(S)}$ \cite{Kerckhoff80}. However, uniquely ergodic measured foliations are nice points in terms of the $\M(S)-$action in the following sense.
\begin{lemma}[\cite{Masur82}]\label{smoothness}
The mapping class group acts continuously on $\overline{\T(S)}$ at uniquely ergodic points on the boundary.
\end{lemma}
\noindent The following formula of Kerckhoff for computing Teichm\"{u}ller distances will be used frequently.
\begin{lemma}[\cite{Kerckhoff80}, Kerckhoff's formula]\label{lemma:Kerckhoffformula}
$$\forall x,y \in \T(S), d_T(x,y) = \frac{1}{2}\sup_{[\xi] \in \PMF(S)} \ln \left(\frac{\E_{x}(\xi)}{\E_{y}(\xi)}\right).$$
\end{lemma}
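\noindent In particular, Kerckhoff's formula yields the following two-sided comparison, which will be used repeatedly: $$\forall x,y \in \T(S),\ \forall \xi \in \MF(S),\quad e^{-2d_T(x,y)}\E_{y}(\xi) \leq \E_{x}(\xi) \leq e^{2d_T(x,y)}\E_{y}(\xi).$$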
\noindent{\bf Hyperbolicity.}~~~ It was first proved in \cite{MasurWolf} that the Teichm\"{u}ller space $(\T(S),d_T)$ is not hyperbolic in the sense of Gromov. However, some triangles in $(\T(S),d_T)$ are indeed thin. We now collect several related results in order to compare neighborhoods in $\PMF(S)$ defined by projections of balls in $\T(S)$ with the ones defined by intersection numbers.\\
\noindent Recall that, for $\epsilon > 0$, the {\it $\epsilon-$thick part} $\T_{\epsilon}(S)$ of the Teichm\"{u}ller space $\T(S)$ is defined to be $$\T_{\epsilon}(S) = \{y\in \T(S): \forall c \in \mathcal{C}(S),\E_{y}(c) \geq \epsilon\}.$$
\noindent The first result, generalizing a theorem of Rafi \cite{Rafi14}, describes when triangles are thin. For a subset $A$ of $\T(S)$, we denote by $\mathcal{N}_D(A)$ the $D-$neighborhood of $A$. Recall that {\it a geodesic segment $I:[a,b] \rightarrow \T(S)$ has at least proportion $\theta$ in $\T_{\epsilon}(S)$} if $$Thk^{\%}_{\epsilon}[I]\doteq \frac{|\{a \leq s \leq b: I(s) \in \T_{\epsilon}(S)\}|}{b-a} \geq \theta.$$
\begin{theorem}[\cite{DDM}]\label{theorem:DDM}
Given $\epsilon > 0$ and $0 < \theta \leq 1 $, there exist constants $D = D(\epsilon,\theta), L_0=L_0(\epsilon,\theta)$ such that if $I \subset [x,y]$ is a geodesic subinterval in $\T(S)$ of length at least $L_0$ and at least proportion $\theta$ of $I$ is in $\T_{\epsilon}(S)$, then for every $z \in \T(S)$, we have $$ I \cap \mathcal{N}_{D}([x,z] \cup [y,z]) \ne \emptyset.$$
\end{theorem}
\noindent The following result will also be used later. Recall that two parametrized geodesic segments $\delta(t)$ and $\delta'(t)$ defined on $[a,b]$ are said to {\it $P-$fellow travel} in a parametrized fashion if, for every $t\in [a,b]$, $d_T(\delta(t),\delta'(t)) \leq P$.
\begin{theorem}[\cite{Rafi14}]\label{theorem:fellowtravelling}
Let $\epsilon > 0$. Then there exists $P =P(\epsilon) > 0$ such that whenever $x_1,x_2,y_1,y_2$ are in $\T_{\epsilon}(S)$ with $$d_T(x_1,x_2) \leq 1,d_T(y_1,y_2) \leq 1,$$ the geodesic segments $[x_1,y_1]$ and $[x_2,y_2]$ are $P-$fellow travelling.
\end{theorem}
\noindent{\bf Boundary representations of mapping class groups.}~~~ We are in a position to discuss a special class of quasi-regular unitary representations of mapping class groups. Fix $o \in \T(S)$; we first define a Radon measure $\nu_{o}$ on $\PMF(S)$. Let $\nu_{Th}$ be the Thurston measure on $\MF(S)$. For any open subset $U \subset \PMF(S),$ one defines $\nu_{o}(U)$ to be $$\nu_{o}(U) = \nu_{Th}\left(\{\xi: [\xi] \in U, \E_{o}(\xi) \leq 1\}\right).$$ One could verify that $\forall \gamma \in \M(S), \gamma_{*}\nu_{o}=\nu_{\gamma.o}$ and $[\nu_{x}] = [\nu_{y}],\forall x,y \in \T(S)$. Therefore, one has $$\forall x,y \in \T(S), [\xi]\in \PMF(S), \frac{d\nu_{x}}{d\nu_{y}}([\xi])= \left(\frac{\E_{y}(\xi)}{\E_{x}(\xi)}\right)^{\frac{6g-6}{2}}.$$ By the definition of extremal length, the function $[\xi] \mapsto \left(\frac{\E_{y}(\xi)}{\E_{x}(\xi)}\right)^{\frac{6g-6}{2}}$ is well-defined on $\PMF(S)$. We have, in particular, $$\forall \gamma \in \M(S), [\xi]\in \PMF(S), \frac{d\gamma_{*}\nu_{o}}{d\nu_{o}}([\xi])= \left(\frac{\E_{o}(\xi)}{\E_{\gamma.o}(\xi)}\right)^{\frac{6g-6}{2}}.$$ Hence one has a quasi-regular unitary representation $\pi_{\nu_{o}}$ of $\M(S)$ on the Hilbert space $L^{2}(\PMF(S),\nu_{o})$. The quasi-regular representation $\pi_{\nu_{o}}$ of $\M(S)$ is called the {\it boundary representation} of $\M(S)$ (with respect to $o$).\\
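\noindent Explicitly, unwinding the definition of a quasi-regular representation, for every $\gamma \in \M(S)$, every $f \in L^{2}(\PMF(S),\nu_{o})$ and every $[\xi] \in \PMF(S)$ one has $$\pi_{\nu_{o}}(\gamma)f([\xi]) = f(\gamma^{-1}[\xi])\left(\frac{\E_{o}(\xi)}{\E_{\gamma.o}(\xi)}\right)^{\frac{6g-6}{4}},$$ a formula that will be used repeatedly in the sequel.\\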
\noindent As intersection numbers will be the main tool, we embed $\PMF(S)$ into $\MF(S)$. For each $[\xi]$, define $\tau(\xi) \in \MF(S)$ to be the unique element in $[\xi]$ such that $\E_{o}\left(\tau(\xi)\right) = 1.$ Hence, the map $\tau : \PMF(S) \longrightarrow \MF(S)$ is a section of the projection $\pi : \MF(S) \longrightarrow \PMF(S).$ When talking about intersection numbers for two points in $\PMF(S)$, we will always use the image of $\tau$.\\
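\noindent Concretely, by the scaling property $\E_{o}(c\xi) = c^{2}\E_{o}(\xi)$, the section is given by $\tau([\xi]) = \xi/\sqrt{\E_{o}(\xi)}$ for any representative $\xi$ of the class $[\xi]$.\\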
\noindent {\bf Ergodic boundary representation.}~~~ From now on, let $S =S_{g} (g \geq 2)$ be a genus $g$ closed, orientable surface and fix a point $o=[(X,\phi)] \in \T(S)$. Normalize $\nu_o$ to be a probability measure. Denote $h = 6g - 6$ and let $\epsilon > 0$ and $\theta > 0$. Let also $L$ be the length function on $\M(S)$ induced by the Teichm\"{u}ller distance $d_T$, namely $L(g) = d_T(o, g\cdot o)$.
\noindent Inspired by \cite{gekhtman2013stable} and \cite{DDM}, we first describe our choice of $E_n$ that fits in Theorem \ref{criterion2}. Let $g^q_t$ be a Teichm\"{u}ller geodesic ray starting from $o$ in the direction of $q \in QD^1(X)$. For every $m > 0$, recall that $$Thk^{\%}_{\epsilon}[o,g_m]\doteq \frac{|\{0 \leq s \leq m: g_s \in \T_{\epsilon}(S)\}|}{d_T(o,g_m)}.$$
\begin{theorem}[\cite{DDM} Proposition 5.5] \label{theorem:DowDucMas}
For all $0 < \theta < 1$, there exists $\epsilon > 0$ such that for all $o = (X,\phi) \in \T(S)$ $$\lim_{R_0 \rightarrow \infty}\nu_o\left( \{q \in QD^1(X): Thk^{\%}_{\epsilon}[o,g^q_m] \geq \theta , \forall m > R_0 \}\right) = 1.$$
\end{theorem}
\noindent We then fix any $\theta \in (0,1)$ and take $\epsilon > 0$ as given by the above theorem. We identify $QD^1(X)$ with $\PMF(S)$ and $g^q_t$ with $\mathcal{V}(q)(t)$. For each $R > 0$, we define $$ U(R,\theta,\epsilon) = \{\xi \in \PMF(S): Thk^{\%}_{\epsilon}[o,\xi_m] \geq \theta, \forall m >R\}.$$ Then if $R_2 \geq R_1 > 0$, we have $$ U(R_1,\theta,\epsilon) \subset U(R_2,\theta,\epsilon).$$ Define $$U(\theta, \epsilon) = \cup_{R > 0} U(R,\theta,\epsilon),$$ then, by Theorem \ref{theorem:DowDucMas}, one has $$\nu_o(U(\theta,\epsilon)) = 1.$$ Furthermore, after a suitable choice of $\theta$, one has $\nu_o(\partial (U(\theta,\epsilon))) = 0$ and, by Masur's criterion (Lemma \ref{lemma:masurcriterion}), the set $U(\theta,\epsilon)$ consists of uniquely ergodic directions. We now fix this choice of $\epsilon$ and $\theta$ and, for $\gamma \in \M(S)$, denote the direction determined by the oriented geodesic $[o,\gamma\cdot o]$ by $\xi_{\gamma}$. Now we are in a position to describe $E_n$. Fix $\rho > 0$ and let $L_0 = L_0(\theta,\epsilon)$ be the constant from Theorem \ref{theorem:DDM}. For $\frac{1}{3h}\ln \ln n > \max{\{L_0,\rho\}}$, define the set $\mathcal{E}(\theta,\epsilon, n, o, \rho)$ to be the set of all elements $\gamma$ in $\M(S)$ satisfying:
\begin{itemize}
\item[(a)] $d(\gamma \cdot o,o) \in (n- \rho, n+\rho)$;
\item[(b)] Both $\xi_{\gamma}$ and $\xi_{\gamma^{-1}}$ are in $U(\theta, \epsilon)$;
\item[(c)] If $g(t)$ is either the geodesic ray $\xi_{\gamma}(t)$ or $\xi_{\gamma^{-1}}(t)$, then the segment $[o, g(\frac{1}{3h}\ln \ln n)]$ has at least proportion $\theta$ in $\T_{\epsilon}(S)$.
\end{itemize}
\begin{lemma}\label{lemma:longsegment}
Let $n$ be large enough as before. Then for $\gamma \in \mathcal{E}(\theta,\epsilon, n, o, \rho)$, there exists a geodesic segment $I_{\gamma}$ of length $\frac{1}{3h}\ln \ln n$ in the geodesic $[o,\gamma \cdot o]$ that has at least proportion $\theta$ in $\T_{\epsilon}(S)$ and contains $\gamma \cdot o$.
\end{lemma}
\begin{proof}
Let $\gamma \in \mathcal{E}(\theta,\epsilon, n, o, \rho)$. Since the geodesic ray $\xi_{\gamma^{-1}}(t)$ satisfies (c) in the definition of $\mathcal{E}(\theta,\epsilon, n, o, \rho)$, the initial segment $I$ of $[o,\gamma^{-1} \cdot o]$ of length $\frac{1}{3h}\ln \ln n$ has at least proportion $\theta$ in $\T_{\epsilon}(S)$. As $\gamma \cdot [o, \gamma^{-1} \cdot o] = [\gamma \cdot o,o]$ and the $\M(S)-$action preserves $\T_{\epsilon}(S)$, the geodesic $[o,\gamma \cdot o]$ contains the subinterval $I_{\gamma} := \gamma \cdot I$ of length $\frac{1}{3h}\ln \ln n$, which has at least proportion $\theta$ in $\T_{\epsilon}(S)$ and has $\gamma \cdot o$ as an endpoint.
\end{proof}
\noindent In the next section, we will prove that $\mathcal{E}(\theta,\epsilon, n, o, \rho)$ has exponential growth. We first state one obvious property of the boundary representation.
\begin{lemma}\label{essentialbound}
Let $\pi_{\nu_o}$ be the boundary representation of $\M(S)$. For every $g \in \M(S)$, $\|\pi_{\nu_o}(g)\mathds{1}_{\PMF(S)}\|_{L^{\infty}(\PMF(S),\nu_o)} < \infty$.
\end{lemma}
\begin{proof}
The lemma is an easy consequence of Kerckhoff's formula, namely Lemma \ref{lemma:Kerckhoffformula}, on Teichm\"{u}ller distances. By Lemma \ref{lemma:Kerckhoffformula}, $$\forall x,y \in \T(S), \forall [\xi] \in \PMF(S), \left(\frac{\E_{x}(\xi)}{\E_{y}(\xi)}\right)^{\frac{1}{2}} \leq e^{d_{T}(x,y)}.$$ As $\pi_{\nu_o}(g)\mathds{1}_{\PMF(S)}([\xi]) = \left(\frac{\E_{o}(\xi)}{\E_{g\cdot o}(\xi)}\right)^{\frac{6g-6}{4}},$ one has $$\|\pi_{\nu_{o}}(g)\mathds{1}_{\PMF(S)}\|_{L^{\infty}(\PMF(S),\nu_{o})} \leq e^{\frac{6g-6}{2}d_T(o,g \cdot o)} < \infty.$$
\end{proof}
\noindent The following theorem is a slight variant of Theorem \ref{criterion}, whose proof is close to the original one.
\begin{theorem}\label{criterion2}
Let $\pi_{\nu_o}$ be the associated quasi-regular representation of $\M(S)$ on $L^2(\PMF(S),\nu_o)$. Let $i(\cdot,\cdot)$ be the intersection number function on $\PMF(S)$. Let $n \gg \rho$ and let $E_n = E_n(\rho) \subset \{g \in \M(S): d_T(o,g\cdot o) \in [n-\rho,n+\rho]\}$ be symmetric. Let $e_n = Pr: E_n \longrightarrow \PMF(S)$ be the radial projection from $o$. Assume that the following conditions hold:
\begin{itemize}
\item[(1)] $\lim_{n \rightarrow \infty}|E_n| = \infty$,
\item[(2)] for all Borel subsets $W,V \subset \PMF(S)$ such that $\nu_o(\partial W) = \nu_o(\partial V) = 0$, $$\limsup_{n \rightarrow \infty}\frac{1}{|E_n|}|\left\{\gamma \in E_n: e_n(\gamma^{-1})\in W ~\mbox{and}~e_n(\gamma)\in V \right\}| \leq \nu_o(W)\nu_o(V),$$
\item[(3)] for every $n \gg \rho$, there are two sequences of reals $\{h_{r_n}(n, \rho)\}$ and $\{r_n\}$ such that $\lim_{n \rightarrow \infty}h_{r_n}(n,\rho) = \lim_{n\rightarrow \infty}r_n = 0$ and such that $$\forall n \in \mathbb{N}, \forall \gamma \in E_n, \frac{\langle \pi_{\nu_o}(\gamma)\mathds{1}_{\PMF(S)}, \mathds{1}_{\{x\in \PMF(S):~i(x,e_n(\gamma))\geq r_n\}}\rangle}{\Phi(\gamma)} \leq h_{r_n}(n,\rho),$$
\item[(4)] $$\sup_{n}\left\|M^{\mathds{1}_{\PMF(S)}}_{E_n}\mathds{1}_{\PMF(S)}\right\|_{L^{\infty}(\PMF(S),\nu_o)} < \infty.$$
\end{itemize}
Then the quasi-regular representation $\pi_{\nu_o}$ is ergodic with respect to $(E_n, e_n)$ and any $f \in \overline{H}^{L^{\infty}(\PMF(S),\nu_o)}$, where $H$ is the vector space generated by $$\{\mathds{1}_{U}:\nu_o(\partial U) = 0 ~\mbox{and}~ U ~\mbox{is a Borel subset of}~ \PMF(S)\}.$$
\end{theorem}
\begin{remark}
As Theorem \ref{criterion2} is slightly different from Theorem \ref{criterion}, a few comments are in order. One easily sees that the only essential difference is condition (3) here, since the original condition (1) holds automatically by Lemma \ref{essentialbound}. The assumption (3) in Theorem \ref{criterion2} is slightly weaker than the assumption (4) in Theorem \ref{criterion}, as we do not require $h_{r}$ to be independent of $L(g)$. The cost of this weaker assumption is that, in order to keep the statement simple, we assume that all elements of $E_n$ have almost the same induced distance $n$ from $id \in \M(S)$.
\end{remark}
\begin{proof}
As the proof of \cite[Theorem 2.2]{BLP} works in our situation up to small modifications, we only point out the necessary changes here. The main issue is that the intersection number function $i(\cdot,\cdot)$ does not define a metric, nor even the usual topology, on $\PMF(S)$, since there are points $\xi \neq \eta$ with $i(\eta,\xi) = 0$. However, there is a full $\nu_o-$measure subset $\mathcal{UF} \subset \PMF(S)$ consisting of uniquely ergodic measured foliations such that, for every metric on $\PMF(S)$ inducing its topology, the topology it induces on $\mathcal{UF}$ coincides with the topology induced by the intersection number functions, by Lemma \ref{lemma:uniquelyergodic}.
The proof of \cite[Page 2037, Theorem 2.2]{BLP} uses \cite[Lemma 2.20]{BLP} and \cite[Proposition 2.21]{BLP}; we point out the modifications in these two places, and the rest is the same as in the proof of \cite[Theorem 2.2]{BLP}.
As the proof is ultimately based on \cite[Lemma 2.19]{BLP}, our argument also works in this situation, and here are our assumptions: the set $B = \PMF(S)$ is equipped with any metric $d$ compatible with the quotient topology, and whenever we speak of Borel subsets $U,W$, we mean compact subsets with $\nu_o-$null boundaries. These assumptions are reasonable since all statements in \cite[Lemma 2.19]{BLP} are stable under taking closures. Note that, although we work with metric measure spaces, the metric is only used in the following way: for subsets $U, W$, $d(U,W) > 0$ if and only if $\overline{U} \cap \overline{W} = \emptyset$.
Define open sets $W(r)$ for $r > 0$,
\begin{equation*}
\begin{aligned}
&W(r) = \{\eta \in \PMF(S): \exists ~\xi \in \mathcal{UF} \cap W,~i(\eta, \xi) < r\} \\
&=\cup_{\xi \in \mathcal{UF} \cap W}\{\eta: i(\eta,\xi) < r\}.
\end{aligned}
\end{equation*}
For \cite[Lemma 2.20]{BLP}, consider $W(r_n)$ rather than $W(r)$, where $r_n$ is the sequence in (3), and take the limit $\lim_{n \to \infty}h_{r_n}(n,\rho)$ rather than $\lim_{s \to \infty}h_r(s)$.
For \cite[Proposition 2.21]{BLP}, consider $W(r_n)$ and $V(r_n)$ rather than $W(r)$ and $V(r)$ in the proof in \cite[Page 2037, Proposition 2.21]{BLP}. First one has, for $r \geq 0$,
$$\overline{W(r)} = \cup_{\xi \in W}\{\eta: i(\eta,\xi) \leq r\}.$$
Then using the same notations,
\begin{equation*}
\begin{aligned}
& \limsup_{n\to \infty}\langle M^{\mathds{1}_U}_{(E_n,e_n)}\mathds{1}_{B}, \mathds{1}_{W}\rangle =\limsup_{n\to \infty}\frac{1}{|E_n|}\int_{E_n}\mathds{1}_{U}(e_n(g))\mathds{1}_{W(r_n)}(e_n(g))dg \\
&\leq\limsup_{n\to \infty}\frac{1}{|E_n|}\int_{E_n}\mathds{1}_{U}(e_n(g))\mathds{1}_{\overline{W(r_n)}}(e_n(g))dg
\end{aligned}
\end{equation*}
Since $$\forall r \leq r',~~~\overline{W(r)} \subset \overline{W(r')},$$
if $\nu_o(\partial \overline{W(r_k)}) = 0$ for all $k$, then by Theorem \ref{ABEM2} there is a constant $K$ such that
\begin{equation*}
\begin{aligned}
\limsup_{n\to \infty}\langle M^{\mathds{1}_U}_{(E_n,e_n)}\mathds{1}_{B}, \mathds{1}_{W}\rangle \leq K\nu_o(U \cap \overline{W(r_k)}), \forall k
\end{aligned}
\end{equation*}
Hence if $$\lim_{k \to \infty}\nu_o(\overline{W(r_k)} \cap U) \leq \nu_o(W(0) \cap U),$$ then one can continue the same argument as in \cite[Proposition 2.21]{BLP}, since $U \cap W(0)$ contains only non-uniquely ergodic points (by the assumption that $U \cap W = \emptyset$) and $\nu_o$ is supported on uniquely ergodic points. So we are left to prove the following lemma, which is an analogue of \cite[Proposition 2.8]{BLP}, and then to adjust $r_k$ in the assumption slightly.
\end{proof}
\begin{lemma}
We use the notation of the proof above. We have
\begin{itemize}
\item[(a)] The set $\{r > 0, \nu_o(\partial \overline{W(r)}) = 0\}$ is dense.
\item[(b)] $$\lim_{k \to \infty}\nu_o(\overline{W(r_k)} \cap U) \leq \nu_o(W(0) \cap U).$$
\end{itemize}
\end{lemma}
\begin{proof}
Assume $W$ is compact; otherwise replace it by its closure. Part (b) is simply an application of the monotone convergence theorem, since $\overline{W(r_{k+1})} \cap U \subset \overline{W(r_{k})} \cap U$ for a decreasing sequence $\{r_k\}$. For part (a), we argue in the same way as \cite[Proposition 2.18]{BLP}. For $\xi \in \PMF(S)$, define $s(\xi,W) = \min{\{i(\xi,w): w \in W \}}$. Then $$\forall r > 0, ~\partial \overline{W(r)} \subset \{\eta: s(\eta, W) = r\}.$$ Hence, for $r \neq r'$, $\partial \overline{W(r)} \cap \partial \overline{W(r')} = \emptyset$. If part (a) were false, there would be an interval of values of $r$ for which the pairwise disjoint sets $\partial \overline{W(r)}$ all have positive $\nu_o-$measure; since only countably many pairwise disjoint sets can have positive measure, this contradicts the fact that $\nu_o$ is a finite measure.
\end{proof}
\noindent Our main result is the following theorem.
\begin{theorem}\label{theoreom:ergodicitymcg}
There exist $\theta$ and $\epsilon$ such that, if $E_n = \mathcal{E}(\theta,\epsilon, n, o, \rho)$, which is described before Lemma \ref{lemma:longsegment} (and up to passing to a subsequence), then the boundary representation $\pi_{\nu_o}$ is ergodic with respect to $(E_n, Pr)$ and any $f \in \overline{H}^{L^{\infty}}$ as above. In other words, the pair $(E_n = \mathcal{E}(\theta,\epsilon, n, o, \rho), Pr)$ satisfies all conditions listed in Theorem \ref{criterion2}.
\end{theorem}
\noindent As $\nu_o$ is a Radon measure, one has immediately the following two corollaries by Proposition \ref{Irre} and Remark \ref{remark:quasiregularirreducible}.
\begin{corollary}\label{corollary:mappingclassgroupirreducibility}
The boundary representation $\pi_{\nu_o}$ of $\M(S)$ is irreducible.
\end{corollary}
\begin{corollary}
The mapping class group $\M(S)$ acts ergodically on $\PMF(S)$ with respect to the measure class $[\nu_o]$.
\end{corollary}
\noindent We then mention a property of the boundary representation $\pi_{\nu_o}$. Recall that a unitary representation of a group $G$ is called {\it tempered} if it is weakly contained in the regular representation of $G$ on $L^{2}(G)$.
\begin{proposition}\label{proposition:tempered}
The boundary representation $\pi_{\nu_o}$ of $\M(S)$ is tempered.
\end{proposition}
\begin{proof}
We argue as in \cite[Proposition 6.3]{garncarek2016boundary}. By the main theorem in \cite{Gabriella}, we need to verify that the action of $\M(S)$ on $\PMF(S)$ is amenable. This follows from \cite[Proposition 8.1]{Hamenstaedt2009a}, as a corollary of the topological amenability of the action of $\M(S)$ on $\PMF(S)$.
\end{proof}
\noindent {\bf Notations.}~~~ We make some conventions that we will use in the sequel.
\begin{itemize}
\item $S = S_g$: a genus $g \geq 2$, closed, oriented, connected surface.
\item $h = 6g-6$.
\item $o$: the base point in $\T(S)$, which is chosen to be generic in the sense that $Stab_{o}(\M(S)) = \{id\}$. Denote $\nu =\nu_o$ and normalize the measure so that $\nu (\PMF(S)) = 1$.
\item The projective measured foliation space $\PMF(S)$ is regarded as a subset of $\MF(S)$ via $\tau$, and an element $[\xi]$ in $\PMF(S)$ is then written as $\xi$, so both $[\xi]$ and $\xi$ will be called directions when there is no risk of confusion.
\item Fix arbitrary $\rho > 0$ and assume $n \gg \rho$.
\item $Pr_{y} : \T(S)-\{y\} \longrightarrow \PMF(S)$: the radial projection from $\T(S)$ to $\PMF(S)$ that assigns every point $z \in \T(S) -\{y\}$ to the vertical measured foliation of the unit quadratic differential defined by the oriented geodesic $[y,z]$. For $y = o$, we simply denote $Pr_{o}$ to be $Pr$.
\item $B(y,R)$: the closed ball in $\T(S)$ of radius $R$ at $y$ with respect to the Teichm\"{u}ller distance $d = d_T$.
\item $\asymp$: if $A(t),B(t)$ are two functions, we use the notation $A \asymp B$ to mean $\frac{A(t)}{B(t)}\rightarrow 1$ as $t \rightarrow \infty$ and $A \widetilde{<} B$ to mean $\lim_{t \rightarrow \infty}\frac{A(t)}{B(t)} \leq 1$. The notation $A \widetilde{>} B$ is defined similarly.
\item $A \sim_{\theta} B$: there are multiplicative constants $C_1 > 0,C_2 > 0$ depending on $\theta$ so that $$C_1A \leq B \leq C_2A.$$ $A \prec_{\theta} B$: there is a multiplicative constant $D=D(\theta) > 0$ so that $$A \leq DB.$$ And $A \succ_{\theta} B$ is defined similarly.
\item In the sequel, denote $U = U(\theta,\epsilon)$ and $E_n = \mathcal{E}(\theta,\epsilon, n, o, \rho)$, as described before Lemma \ref{lemma:longsegment}, with $\theta > 0.999$.
\item $\xi_{\gamma} \in \PMF(S)$ (for $\gamma \in \M(S)-\{id\}$): the direction of the oriented geodesic segment $[o,\gamma \cdot o]$.
\end{itemize}
\section{Exponential growth and Shadow lemma}
\subsection{Exponential growth.}
\noindent In this subsection, we will show that $|E_n|$ goes to infinity. In fact, we will show that $|E_n|$ grows exponentially. For any Borel subset $W$ of $\PMF(S)$, denote by $Sect_{W}$ the union of geodesic rays starting from $o$ and ending at points of $W$. We first recall the following theorem of \cite{ABEM} in our setting. Let $$C(n,\rho) = \left\{\gamma \in \M(S): d_T(\gamma \cdot o,o) \in (n-\rho,n+\rho)\right\}.$$
\begin{theorem}[\cite{ABEM},Theorem 2.10]\label{ABEM2}
Let $W$ and $V$ be two Borel subsets of $\PMF(S)$ with measure zero boundaries. Then as $n$ tends to $\infty$,
\begin{displaymath}
\begin{aligned}
& \left|\{\gamma \in C(n,\rho): \gamma \cdot o \in Sect_{W} ~ \mbox{and} ~\gamma^{-1} \cdot o \in Sect_{V} \}\right|\\
&\phantom{=\;} \asymp K e^{h n} \nu(W)\nu(V).
\end{aligned}
\end{displaymath}
where $K$ is a constant depending on $g$, $\rho$ and $o$. In fact, using the notation of \cite{ABEM}, one has $K = \frac{2\sinh(h\rho)\|\nu(\PMF(S))\|^2}{hm(\mathcal{M}_g)},$ where $m(\mathcal{M}_g)$ is the push-forward of the Masur-Veech volume.
\end{theorem}
\begin{corollary}\label{corollary:exponentialgrowth}
Let $n \gg 0 $ and $K$ be the constant in Theorem \ref{ABEM2}. Then $|E_n| \asymp Ke^{hn}$ (up to a subsequence). In particular, $\lim_{n \rightarrow \infty}|E_n| = \infty$.
\end{corollary}
\begin{proof}
As $E_n \subset C(n,\rho)$ and, by Theorem \ref{ABEM2}, $|C(n,\rho)| \asymp Ke^{hn}$, it is obvious that $|E_n| ~\widetilde{<}~ Ke^{hn}$. We now show that $ |E_n| ~\widetilde{>}~ Ke^{hn}$. Recall that $U(\theta,\epsilon) = \cup_{R> 0}U(R,\theta,\epsilon)$ with $\nu(U(\theta,\epsilon)) = 1$ and $U(R_1,\theta,\epsilon) \subset U(R_2,\theta,\epsilon)$ for $R_2 > R_1$. Let $\delta_1 > 0$ be small enough and choose $R \gg 0$ such that $$1-\delta_1 \leq \nu(U(R, \theta,\epsilon)) \leq 1, ~\nu(\partial U(R,\theta,\epsilon)) = 0.$$ By Theorem \ref{ABEM2} again, for any $\delta_2 > 0$ small enough, there exists $N(\delta_2)$ so that, whenever $n \geq N(\delta_2)$ and $\frac{1}{3h}\ln \ln n > R$,
\begin{displaymath}
\begin{aligned}
&\left|\{\gamma \in C(n,\rho): \gamma \cdot o \in Sect_{U(R,\theta,\epsilon)} ~ \mbox{and} ~\gamma^{-1} \cdot o \in Sect_{U(R,\theta,\epsilon)} \}\right|\\
& \geq Ke^{-\delta_2} e^{hn} \left(\nu(U(R,\theta,\epsilon))\right)^2\\
& \geq Ke^{-\delta_2}(1-\delta_1)^2e^{hn}.
\end{aligned}
\end{displaymath}
On the other hand, by the choice of $n$ and the definition of $U(R, \theta,\epsilon)$,
\begin{displaymath}
\begin{aligned}
&\{\gamma \in C(n,\rho): \gamma \cdot o \in Sect_{U(R,\theta,\epsilon)} ~ \mbox{and} ~\gamma^{-1} \cdot o \in Sect_{U(R,\theta,\epsilon)} \} \\
& \subset E_n.
\end{aligned}
\end{displaymath}
Therefore, we have $|E_n| \geq Ke^{-\delta_2}(1-\delta_1)^2e^{hn}$. As $\delta_1$ and $\delta_2$ can be arbitrarily small, one has $|E_n| ~\widetilde{>}~ Ke^{hn}$.
\end{proof}
\begin{corollary}\label{weakconvergence}
For all Borel subsets $W,V \subset \PMF(S)$ such that $\nu(\partial W) = \nu(\partial V) = 0$, $$\limsup_{n \rightarrow \infty}\frac{1}{|E_n|}|\left\{\gamma \in E_n: Pr(\gamma^{-1})\in W ~\mbox{and}~Pr(\gamma)\in V \right\}| \leq \nu(W)\nu(V).$$
\end{corollary}
\begin{proof}
By Corollary \ref{corollary:exponentialgrowth} and Theorem \ref{ABEM2}, $|E_n| \asymp |C(n,\rho)|$. Notice that $Pr(\gamma) \in V$ if and only if $\gamma \cdot o \in Sect_V$. Hence,
\begin{displaymath}
\begin{aligned}
&\limsup_{n \rightarrow \infty}\frac{1}{|E_n|}\left|\left\{\gamma \in E_n: Pr(\gamma^{-1})\in W ~\mbox{and}~Pr(\gamma)\in V \right\}\right|\\
&\leq \limsup_{n \rightarrow \infty}\frac{1}{|E_n|}\left|\left\{\gamma \in C(n,\rho): Pr(\gamma^{-1})\in W ~\mbox{and}~Pr(\gamma)\in V \right\}\right|\\
&= \limsup_{n \rightarrow \infty}\frac{|C(n,\rho)|}{|E_n|}\frac{1}{|C(n,\rho)|}\left|\left\{\gamma \in C(n,\rho): Pr(\gamma^{-1})\in W ~\mbox{and}~Pr(\gamma)\in V \right\}\right|\\
&\leq \nu(W)\nu(V).
\end{aligned}
\end{displaymath}
\end{proof}
\subsection{Shadow lemma.}
\begin{definition}($\mathcal{O}$-points)
A point $y$ in $\T(S)$ is called an $\mathcal{O}-$point (with respect to $o$) if for every $R > 0$, there is a real number $C \geq 1$, depending on $R$, such that $$\frac{1}{C} \exp{(-h d(o,y))} \leq \nu(Pr(B(y,R))) \leq C\exp{(-h d(o,y))}. $$
\end{definition}
\begin{remark}
We call such points $\mathcal{O}$-points because they are the points that satisfy the classic shadow lemma \cite{Sullivan}.
\end{remark}
\begin{definition}
An element $g \in \M(S)$ is called an $\mathcal{O}-$mapping class (with respect to $o \in \T(S)$) if $g \cdot o$ is an $\mathcal{O}$-point.
\end{definition}
\noindent Recall that $U$ has full measure. We need a lemma that relates Busemann functions to extremal lengths. Recall that if $(X,d_{X})$ is a metric space and $\xi$ is a geodesic ray starting from a point $x_0 \in X$, then {\it the Busemann function} associated to the geodesic ray $\xi$ is the function $b_{\xi}$ on $X$ defined by $$b_{\xi}: x \mapsto \lim_{t \rightarrow \infty}\left(d_{X}(x, \xi(t))-t\right).$$ For $(X,d_{X}) = (\T(S),d)$ and $\xi$ a geodesic ray starting from $o$, one has:
\begin{lemma}[\cite{Walsh09} or \cite{su2018horospheres}]\label{busemann}
If $[\xi]$ is uniquely ergodic, then the Busemann function associated to the geodesic ray in the direction $[\xi]$ is $$\forall x \in \T(S), b_{[\xi]}(x) = \frac{1}{2}\ln \left(\frac{\E_x(\xi)}{\E_{o}(\xi)}\right).$$
\end{lemma}
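\noindent In particular, if $[\xi]$ is uniquely ergodic and $\xi(t)$ denotes the unit-speed Teichm\"{u}ller geodesic ray from $o$ in the direction $[\xi]$, then $b_{[\xi]}(\xi(t)) = -t$, so Lemma \ref{busemann} gives $$\E_{\xi(t)}(\xi) = e^{-2t}\E_{o}(\xi).$$ This is the form in which the lemma will be used in Section \ref{Section:Harish-Chandraestimations}.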
\noindent The following Shadow lemma will be used in the proof of uniform boundedness (Section \ref{subsection:uniformboundedness}).
\begin{lemma}[\cite{Yang}, Lemma 6.3]\label{lemma:Shodowlemma}
Let $n \gg \rho$. Then all elements of $\bigcup_{n}E_n$ are $\mathcal{O}-$mapping classes, where the constant $C$ in the definition depends on $R, \theta, \epsilon$.
\end{lemma}
\section{Harish-Chandra estimates}\label{Section:Harish-Chandraestimations}
\noindent This section is devoted to the proof of the following Harish-Chandra estimates.
\begin{theorem}\label{Harish-Chandra}
Given $n \gg \rho$, there exist $a_1 > 0, a_2 > 0,b_1, b_2,c_1 >0$ depending on $\epsilon,o, g,\theta,\rho$ such that $$ \forall \gamma \in E_n,~
(a_1n-c_1\ln \ln n +b_1)e^{-\frac{h}{2}n} \leq \Phi(\gamma) \leq (a_2n+b_2)e^{-\frac{h}{2}n}. $$
\end{theorem}
\noindent Recall that
\begin{displaymath}
\begin{aligned}
& \Phi(\gamma) = \langle\pi_{\nu}(\gamma)\mathds{1}_{\PMF(S)}, \mathds{1}_{\PMF(S)}\rangle_{L^2(\PMF(S),\nu)}\\
&\phantom{=\;\;} = \int_{\PMF(S)}\left(\frac{\E_{o}(\xi)}{\E_{\gamma.o}(\xi)}\right)^{\frac{h}{4}}d\nu([\xi]).
\end{aligned}
\end{displaymath}
\begin{remark}
\begin{itemize}
\item[1.] In \cite{Boyer17}, the left side is of the form $(an+b)e^{-\frac{\alpha}{2}n}$. However, it seems that additional terms like $\ln \ln n$ should be added for mapping class groups if we require $\lim \frac{|E_n|}{|C(n,\rho)|} = 1$.
\item[2.] The following observation will be useful, namely $\Phi(\gamma) \asymp ne^{-\frac{hn}{2}}$.
\end{itemize}
\end{remark}
\noindent The proof is divided into several steps and will be given at the end of this section.
\subsection{Reduction to intersection numbers.}
\noindent By our convention, one has $\E_{o}(\xi) = 1$ for every $\xi \in \PMF(S)$; we then have $$\Phi(\gamma) = \int_{\PMF(S)}\left(\frac{1}{\E_{\gamma.o}(\zeta)}\right)^{\frac{h}{4}}d\nu(\zeta).$$
\noindent Let $\xi_{\gamma}$ be the direction of $[o,\gamma \cdot o]$. In order to estimate $\Phi(\gamma)$, we will relate it to the following integral involving intersection numbers:
$$\Psi(\gamma) = \int_{\PMF(S)}\left(\frac{1}{i(\xi_{\gamma},\eta)}\right)^{\frac{h}{2}}d\nu(\eta).$$
\noindent Denote $$\Psi(\gamma)_{\geq A} = \int_{\{\eta \in \PMF(S): i(\xi_{\gamma}, \eta) \geq A\}}\left(\frac{1}{i(\xi_{\gamma},\eta)}\right)^{\frac{h}{2}}d\nu(\eta).$$
\noindent The first step is to bound $\Phi(\gamma)$ from above. This can be easily done by Minsky's inequality. Namely,
\begin{lemma}[Minsky's inequality \cite{Minsky93}]\label{Minskyinequailty}
Let $\xi$ and $\eta$ be two measured foliations on $S$ and $x \in \T(S)$, then $$i^2(\xi, \eta) \leq \E_{x}(\xi) \E_{x}(\eta),$$ where equality holds if and only if there is a quadratic differential $q$ on $x$ such that the vertical measured foliation of $q$ is $\xi$ and the horizontal measured foliation is $\eta$.
\end{lemma}
\begin{corollary}\label{corollary:upperbound}
There exist constants $C_3 = C_3(g,\rho) > 0$ and $C_4 = C_4(g,\rho) > 0$ such that, for every $M \in (0,1)$ and every $\gamma \in E_n$, $$\Phi(\gamma) \leq C_3e^{-\frac{h}{2}n}\Psi(\gamma)_{\geq M} + C_4e^{\frac{h}{2}n} \nu(\{\eta \in \PMF(S): i(\eta, \xi_{\gamma}) \leq M\} ).$$
\end{corollary}
\begin{proof}
Decompose $\PMF(S)$ into two subsets $A = \{ \eta \in \PMF(S): i(\eta, \xi_{\gamma}) \leq M \}$ and $B = \{ \eta \in \PMF(S): i(\eta, \xi_{\gamma}) \geq M \}$. Then we have
\begin{equation}
\begin{aligned}
&\Phi(\gamma) = \int_{A}\left(\frac{1}{\E_{\gamma \cdot o}(\eta)}\right)^{\frac{h}{4}}d\nu(\eta) + \int_{B}\left(\frac{1}{\E_{\gamma \cdot o}(\eta)}\right)^{\frac{h}{4}}d\nu(\eta)\\
&= {\rm I + II}.
\end{aligned}
\end{equation}
By Kerckhoff's formula, ${\rm I} \prec_{g,\rho} e^{\frac{h}{2}n} \nu(\{\eta \in \PMF(S): i(\eta, \xi_{\gamma}) \leq M\} )$. Thanks to Lemma \ref{busemann}, we can replace $\E_{\gamma \cdot o}(\xi_{\gamma})$ in Lemma \ref{Minskyinequailty} by $e^{-2n}$, so one has $$\frac{1}{i^2(\xi_{\gamma}, \eta) e^{2n}} \succ_{g,\rho} \frac{1}{\E_{\gamma \cdot o}(\eta)},$$ which gives the bound for the term ${\rm II}$.
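Spelled out, the last inequality shows that on $B$ one has $$\left(\frac{1}{\E_{\gamma \cdot o}(\eta)}\right)^{\frac{h}{4}} \prec_{g,\rho} \left(\frac{1}{i^2(\xi_{\gamma},\eta)e^{2n}}\right)^{\frac{h}{4}} = e^{-\frac{h}{2}n}\left(\frac{1}{i(\xi_{\gamma},\eta)}\right)^{\frac{h}{2}},$$ and integrating over $B$ yields ${\rm II} \prec_{g,\rho} e^{-\frac{h}{2}n}\Psi(\gamma)_{\geq M}$, as claimed.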
\end{proof}
\noindent In order to bound $\Phi(\gamma)$ from below, we will use the fact that $\gamma \in E_n$.
\begin{lemma}\label{lemma:inverseminsky}
There exists a constant $F$ depending on $g,o,\epsilon,\theta,\rho$ such that if $i(\xi_{\gamma}, \eta) \geq F \ln ne^{-2n}$, where $\eta \in U(\epsilon,\theta)$ and $\gamma \in E_n$, then $i^2(\xi_{\gamma},\eta) \succ_{g,o,\epsilon,\theta,\rho} \E_{\gamma \cdot o}(\eta)e^{-2n}$.
\end{lemma}
\begin{proof}
First we remark that, since both $\eta$ and $\xi_{\gamma}$ are uniquely ergodic, by \cite[Proposition 5.1]{klarreich2018boundary}, there is a geodesic whose horizontal and vertical measured foliations are in the projective classes $\xi_{\gamma}$ and $\eta$ respectively. Hence we have a geodesic triangle $\triangle(o,\xi_{\gamma}, \eta)$. As $\gamma \in E_n$, Lemma \ref{lemma:longsegment} implies that there is a geodesic segment $I$ of length $\ell = \frac{1}{3h}\ln \ln n$ in $[o,\gamma \cdot o]$ ending at $\gamma \cdot o$ that has at least proportion $\theta$ in $\T_{\epsilon}(S)$. By Theorem \ref{theorem:DDM}, $$I \cap \mathcal{N}_{D}([o,\eta]\cup[\xi_{\gamma},\eta]) \ne \emptyset,$$ where $D$ comes from Theorem \ref{theorem:DDM}. Choose $q \in I \cap \mathcal{N}_{D}([o,\eta]\cup[\xi_{\gamma},\eta])$. Then there are two possibilities:\\
\noindent Case 1: $d(q, y) \leq D$ with $y\in [\xi_{\gamma},\eta]$.\\
\noindent Then we have, by Kerckhoff's formula and Lemma \ref{Minskyinequailty},
\begin{displaymath}
\begin{aligned}
& i^2(\xi_{\gamma},\eta) = \E_{y}(\eta)\E_{y}(\xi_{\gamma})\\
&\succ_{g,o,\theta,\epsilon} \E_{q}(\eta)\E_{q}(\xi_{\gamma})\\
&=\E_{q}(\eta)e^{-2d(o,q)}\\
&\geq \E_{\gamma \cdot o}(\eta)e^{-2n},
\end{aligned}
\end{displaymath}
which means that, in this case, we always have $i^2(\xi_{\gamma},\eta) \succ_{g,o,\epsilon,\theta} \E_{\gamma \cdot o}(\eta)e^{-2n}$.\\
\noindent Case 2: $d(q, y) \leq D$ with $y\in [o,\eta]$.\\
\noindent Then we have
\begin{displaymath}
\begin{aligned}
&i^2(\xi_{\gamma},\eta) \leq \E_{y}(\eta)\E_{y}(\xi_{\gamma})\\
&\sim_{g,o,\theta,\epsilon} \E_{y}(\eta)\E_{q}(\xi_{\gamma})\\
&\sim_{g,o,\theta,\epsilon,\rho} e^{-4d(o,q)} = e^{-4(d(o,\gamma \cdot o)-d(q,\gamma \cdot o))}\\
&\leq e^{-4n}e^{4\ell}.
\end{aligned}
\end{displaymath}
\noindent Therefore, in this case we have a constant $F_1$ depending on $g,o,\epsilon,\theta,\rho$ such that $$i(\xi_{\gamma},\eta) \leq F_1e^{-2n}e^{2\ell} \leq F_1e^{-2n}e^{\ln \ln n} = F_1\ln n e^{-2n}.$$ Thus if we take $F \gg F_1$ and require $i(\xi_{\gamma},\eta) \geq F\ln ne^{-2n}$, we are forced into Case 1, which implies the conclusion that $$i^2(\xi_{\gamma},\eta) \succ_{g,o,\epsilon,\theta} \E_{\gamma \cdot o}(\eta)e^{-2n}.$$
\end{proof}
\begin{corollary}\label{corollary:lowerbound}
For every $\gamma \in E_n$, take $\overline{M} = F\ln ne^{-2n}$ where $F$ is the constant in Lemma \ref{lemma:inverseminsky}. Then $\Phi(\gamma) \succ_{g,o,\epsilon,\theta,\rho} e^{-\frac{h}{2}n}\Psi(\gamma)_{\geq \overline{M}}.$
\end{corollary}
\begin{proof}
Note that $U(\epsilon,\theta)$ has full measure. Hence, by Lemma \ref{lemma:inverseminsky},
\begin{displaymath}
\begin{aligned}
&\Phi(\gamma) = \int_{U(\epsilon,\theta)}\left(\frac{1}{\E_{\gamma.o}(\eta)}\right)^{\frac{h}{4}}d\nu(\eta)\\
&\geq \int_{\{\eta \in U(\epsilon,\theta): i(\eta,\xi_{\gamma}) \geq \overline{M}\}}\left(\frac{1}{\E_{\gamma.o}(\eta)}\right)^{\frac{h}{4}}d\nu(\eta)\\
&\succ_{g,o,\epsilon,\theta} e^{-\frac{hn}{2}}\Psi(\gamma)_{\geq \overline{M}}.
\end{aligned}
\end{displaymath}
\end{proof}
\begin{lemma}\label{lemma:log}
Assume that there exist a sequence $\{\xi_k\} \subset \PMF(S)$ converging to $\xi_{\gamma}$ and constants $N_0 > 0, a > 0, b > 0$ such that $$\forall N \leq N_0, ~~~a N^{\frac{h}{2}} \leq \nu(\{\eta \in \PMF(S): i(\eta, \xi_k) \leq N\}) \leq b N^{\frac{h}{2}} ,$$ then there exist $A,B,D_1,D_2$ such that $$-A\ln N + D_1 \leq \Psi(\gamma)_{\geq N }\leq -B\ln N +D_2. $$
\end{lemma}
\begin{proof}
Fix $N \leq N_0$; then the set $\{\eta \in \PMF(S): i(\eta,\xi_{\gamma}) \geq N\}$ is compact. Since $i(\xi_k,\cdot)$ converges to $i(\xi_{\gamma},\cdot)$ uniformly on compact sets outside $\{\xi_{\gamma}\}$, there exists $K_1$ so that, when $k \geq K_1$, $$\{\eta: i(\xi_k, \eta) \geq 2N\} \subset \{\eta: i(\xi_{\gamma},\eta) \geq N\}.$$ Hence $$\forall k \geq K_1, \int_{\{\eta \in \PMF(S): i(\xi_{k},\eta) \geq 2N\}}\left(\frac{1}{i(\xi_{\gamma},\eta)}\right)^{\frac{h}{2}}d\nu \leq \int_{\{\eta \in \PMF(S): i(\xi_{\gamma},\eta) \geq N\}}\left(\frac{1}{i(\xi_{\gamma},\eta)}\right)^{\frac{h}{2}}d\nu.$$ By uniform convergence on compact sets again, there is $K_2$ such that, when $k \geq K_2$, on $\{\eta: i(\eta,\xi_{\gamma}) \geq N\}$ one has $$\frac{1}{2} \leq \left(\frac{i(\xi_k,\eta)}{i(\xi_{\gamma},\eta)}\right)^{\frac{h}{2}} \leq 2.$$ Take $k > \max\{K_1,K_2\}$, then
\begin{equation*}
\begin{aligned}
&\frac{1}{2}\int_{\{\eta \in \PMF(S): i(\xi_{k},\eta) \geq 2N\}}\left(\frac{1}{i(\xi_{k},\eta)}\right)^{\frac{h}{2}}d\nu\\
&\leq \int_{\{\eta \in \PMF(S): i(\xi_{k},\eta) \geq 2N\}}\left(\frac{1}{i(\xi_{\gamma},\eta)}\right)^{\frac{h}{2}}d\nu \\
&\leq \Psi(\gamma)_{\geq N}
\end{aligned}
\end{equation*}
Since for $\eta \in \PMF(S)$, $$\lim_{k \to \infty} \mathds{1}_{\{\eta \in \PMF(S): i(\xi_{k},\eta) \geq N\}}(\eta)\left(\frac{1}{i(\xi_{k},\eta)}\right)^{\frac{h}{2}} = \mathds{1}_{\{\eta \in \PMF(S): i(\xi_{\gamma},\eta) \geq N\}}(\eta)\left(\frac{1}{i(\xi_{\gamma},\eta)}\right)^{\frac{h}{2}},$$
by Fatou's lemma, there is $K_3$ such that when $k \geq K_3$, $$\Psi(\gamma)_{\geq N} \leq \int_{\{\eta \in \PMF(S): i(\xi_{k},\eta) \geq N\}}\left(\frac{1}{i(\xi_{k},\eta)}\right)^{\frac{h}{2}}d\nu + 1.$$
Define $$\Psi(k)_{\geq N} = \int_{\{\eta \in \PMF(S): i(\xi_{k}, \eta) \geq N\}}\left( \frac{1}{i(\xi_{k},\eta)}\right)^{\frac{h}{2}}d\nu(\eta).$$ Then, for $k \gg 0$, $$\frac{1}{2}\Psi(k)_{\geq 2N} \leq \Psi(\gamma)_{\geq N} \leq \Psi(k)_{\geq N} + 1.$$ Hence it is sufficient to show that
there exist $A,B,D_1,D_2$ such that $$-A\ln N + D_1 \leq \Psi(k)_{\geq N }\leq -B\ln N +D_2.$$
Now the proof is close to part of \cite[Proposition 3.2]{Boyer17}. We repeat it here for completeness. Namely,
\begin{equation}
\begin{aligned}
& \Psi(k)_{\geq N}= \int_{\{\eta \in \PMF(S): i(\xi_{k}, \eta) \geq N\}}\left( \frac{1}{i(\xi_{k},\eta)}\right)^{\frac{h}{2}}d\nu(\eta)\\
&=\int_{0}^{\infty}\nu\left(\left\{\eta \in \PMF(S): i(\xi_{k},\eta) \geq N,~ \left(\frac{1}{i(\xi_{k},\eta)}\right)^{\frac{h}{2}} \geq t\right\}\right)dt\\
&=\int_{0}^{\frac{1}{N^{\frac{h}{2}}}}\nu\left(\left\{\eta \in \PMF(S): N \leq i(\xi_{k},\eta)\leq \frac{1}{t^{\frac{2}{h}}}\right\}\right)dt\\
&=\int_{0}^{\frac{1}{N_0^{\frac{h}{2}}}}\nu\left(\left\{\eta \in \PMF(S): N \leq i(\xi_{k},\eta)\leq \frac{1}{t^{\frac{2}{h}}}\right\}\right)dt \\
&+\int_{\frac{1}{N_0^{\frac{h}{2}}}}^{\frac{1}{N^{\frac{h}{2}}}}\nu\left(\left\{\eta \in \PMF(S): N \leq i(\xi_{k},\eta)\leq \frac{1}{t^{\frac{2}{h}}}\right\}\right)dt.
\end{aligned}
\end{equation}
\noindent Since $\nu$ is a probability measure, the first integral is at most $N_0^{-\frac{h}{2}}$. In the second integral we have $\frac{1}{t^{2/h}} \leq N_0$, so by the assumption the integrand is bounded above by $\frac{b}{t}$ and below by $\frac{a}{t}-bN^{\frac{h}{2}}$; integrating over $[N_0^{-\frac{h}{2}}, N^{-\frac{h}{2}}]$ yields the conclusion.
\end{proof}
\subsection{A basic example.}
\noindent Before continuing our discussion, we digress to the case of the once-punctured torus $S_{1,1}$. Some standard facts are taken from \cite[7.2 Examples]{Miyachi17}.
\noindent Let $S = S_{1,1}$. Then $\M(S) = SL(2,\mathbb{Z})$ and $\T(S) = \mathbb{H}^{2}$, the upper half plane. Take $o$ to be $i \in \mathbb{H}^{2}$. The space $\MF(S)$ of measured foliations can be identified with the punctured plane modulo the involution, namely $\{\mathbb{R}^2-(0,0)\}/\{I,-I\}$. By the ergodicity of the Thurston measure $\nu_{Th}$, which is defined to be the weak limit of counting measures on $\MF(S)$, the measure $\nu_{Th}$ can be identified, up to a constant multiple, with the Lebesgue measure on $\mathbb{R}^2$. Rays in $\{\mathbb{R}^2-(0,0)\}/\{I,-I\}$ are then identified with points in $\PMF(S)$, so $\PMF(S)$ can be identified with $\mathbb{R}P^1$. Notice that all identifications here are $\M(S)$-equivariant. Hence $\PMF(S)$ can be represented as $\{[x:y]: x^2+y^2 \neq 0, x,y \in \mathbb{R}\}$, or $\mathbb{R}\cup \{\infty\}$, and $\overline{\T(S)}$ is then the usual compactification of $\mathbb{H}^{2}$. In this case, $\M(S)$ acts on $\overline{\T(S)}$ via linear fractional transformations. For $(x,y) \in \mathbb{R}^2$, the extremal length at $o$ is $$ \E_{o}((x,y)) = x^2+y^2,$$ hence the image of $\PMF(S)$ under $\tau$ is the unit circle. We will ignore the difference between $\mathbb{R}^2$ and $\mathbb{R}^2/\{I,-I\}$. For two points $(x,y),(p,q) \in \MF(S)$, the intersection number is $|qx-py|$. Write the image of $\PMF(S)$ in the form $(\sin(\theta), \cos(\theta))$, and fix any $\xi = (\sin(\theta_0),\cos(\theta_0)) \in \PMF(S).$ Let $M$ be small enough; then
\begin{equation}
\begin{aligned}
&\{\eta \in \PMF(S): i(\xi, \eta) \leq M\} \\
&=\{\theta \in [0,2\pi]: |\sin(\theta)\cos(\theta_0)- \cos(\theta)\sin(\theta_0)| \leq M\}\\
&=\{\theta \in [0,2\pi]: |\sin(\theta-\theta_0)| \leq M \}\\
&=\{\theta \in [0,2\pi]: -M \leq \sin(\theta-\theta_0) \leq M \}.
\end{aligned}
\end{equation}
As $M$ is small enough, $|\sin(\theta-\theta_0)|$ is comparable to $|\theta-\theta_0|$ on the set in question, so there exist constants $A$ and $B$ such that $$AM \leq \nu(\{\eta \in \PMF(S): i(\xi, \eta) \leq M\}) \leq BM.$$
Notice that, when $S$ is $S_{1,1}$, we have $h =6g-6+2n= 6\times 1- 6+ 2 \times 1 = 2$, hence $\frac{h}{2} = 1$.
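\noindent For the reader's convenience we record the elementary computation behind this estimate; it is not needed later, and we ignore, as above, the normalization constant coming from the identification $\mathbb{R}^2/\{I,-I\}$, under which $\nu$ is a constant multiple of the arc-length measure $d\theta$. The set $\{\theta \in [0,2\pi]: |\sin(\theta-\theta_0)| \leq M\}$ consists of two arcs, centered at $\theta_0$ and $\theta_0+\pi$, each of length $2\arcsin(M)$, so that, up to this fixed multiplicative constant,
\begin{displaymath}
\nu\left(\{\eta \in \PMF(S): i(\xi,\eta) \leq M\}\right) = \frac{4\arcsin(M)}{2\pi},
\qquad M \leq \arcsin(M) \leq \frac{\pi}{2}M \quad (0 \leq M \leq 1),
\end{displaymath}
which gives the estimate with, e.g., $A = \frac{2}{\pi}$ and $B = 1$ (up to that fixed multiple).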
\subsection{Approximation by pant curves.}
\noindent Now we want to prove that the assumption in Lemma \ref{lemma:log} holds. We do this by approximating with pants curves, using the map considered in \cite{MasurMinsky99}. \\
\noindent Recall that, thanks to Bers' theorem, there is a constant $C_1=C_1(g)$, depending only on the genus $g$, such that for every point $x \in \T(S)$, there exists a pants decomposition, namely a collection of $3g-3$ essential simple closed curves $\mathcal{P} = \{\alpha_1,\cdots,\alpha_{3g-3}\}$, such that $$\forall 1\leq i \leq 3g-3,~ \E_{x}(\alpha_i) \leq C^{2}_{1}.$$ If $x$ is further assumed to be in $\T_{\epsilon}(S)$, one can choose a collection of $3g-3$ essential simple closed curves $ \{\alpha_1(x),\cdots,\alpha_{3g-3}(x)\} $ on $S$ such that $$\forall 1\leq i \leq 3g-3,~ \epsilon \leq \E_{x}(\alpha_i(x)) \leq C^{2}_{1}.$$ Denote by $\alpha(x) \in \MF(S)$ the measured foliation $\alpha(x) = \sum_{i=1}^{3g-3}\alpha_i(x)$ and by $[\alpha(x)]$ its projective class in $\PMF(S)$. By the Jenkins-Strebel theorem (see also \cite[Theorem 2.1]{Kerckhoff80}), there is a unit holomorphic quadratic differential $q = q(x, \alpha(x))$ on $o \in \T(S)$ whose vertical measured foliation has projective class $[\alpha(x)]$. \\
\noindent Given $\gamma \in E_n$, by the construction of $E_n$, the ending point $\xi_{\gamma}$ of the geodesic ray $g^{\gamma}_t$ determined by $[o,\gamma \cdot o]$ is in $U(\epsilon,\theta)$. Hence, there is a sequence of points $y(k,\gamma) \in \T_{\epsilon}(S)$ on $g^{\gamma}_t$ that tends to $\xi_{\gamma}$ in $\overline{\T(S)}$. By the above discussion, there is a sequence of pants curves $\alpha(y(k,\gamma))$ and a sequence of points $[\alpha(y(k,\gamma))]$ in $\PMF(S)$. By Minsky's inequality, $[\alpha(y(k,\gamma))]$ converges to $[\xi_{\gamma}]$ in $\PMF(S)$, which means that, under the map $\tau$, $\frac{\alpha(y(k,\gamma))}{\sqrt{\E_{o}(\alpha(y(k,\gamma)))}}$ converges to $\xi_{\gamma}$. We first estimate $ \E_{o}(\alpha(y(k,\gamma)))$. \\
\noindent Now let $y \in \T_{\epsilon}(S)$ and denote $\alpha(y)$ and $q(y) = q(y, \alpha(y))$ as above. Let $g_t$ be the Teichm\"{u}ller geodesic ray starting from $o$ in the direction of $q(y)$. Let $t_y$ be the unique point on $g_t$ of maximal distance from $o$ such that $$\forall 1\leq i \leq 3g-3,~ \epsilon \leq \E_{t_y}(\alpha_{i}(y)) \leq C^{2}_{1} .$$
\begin{lemma}\label{lemma:boundeddistance}
There is a constant $C_2$ depending on $g$ and $\epsilon$ such that $$\forall y \in \T_{\epsilon}(S), d(y, t_y) \leq C_2.$$
\end{lemma}
\noindent The proof is based on the following theorem. The theorem is used in the form of \cite[Theorem 5.3]{ABEM}. For the definition of twist numbers $tw(\alpha,\beta)$, the reader is referred to \cite{PennerHarer}.
\begin{theorem}[\cite{Minsky96}]\label{Minskyextremal}
Let $x \in \T(S)$ and $\mathcal{P} = \{\alpha_1, \cdots, \alpha_{3g-3}\}$ be a pants decomposition produced by Bers' theorem mentioned above. Then for any simple closed curve $\beta$,
\begin{equation}\label{equation:extremal}
\E_{x}(\beta) \sim_g \max_{1 \leq i \leq 3g-3}\left( \frac{i^2(\beta, \alpha_i)}{\E_{x}(\alpha_i)} + tw^2(\beta,\alpha_i)\E_{x}(\alpha_i) \right).
\end{equation}
\end{theorem}
\begin{proof}[Proof of Lemma \ref{lemma:boundeddistance}]
By Kerckhoff's formula, we only need to bound the ratio $\frac{\E_{y}(\beta)}{\E_{ t_y}(\beta)}$ for any essential simple closed curve $\beta$ on $S$. However, by the construction of $t_y$, the two hyperbolic surfaces $t_y$ and $y$ share the pants decomposition $\alpha(y) = \{ \alpha_1(y), \cdots, \alpha_{3g-3}(y)\}$, which satisfies the condition in Theorem \ref{Minskyextremal}. As, for $1 \leq i \leq 3g-3$, both extremal lengths $\E_{y}(\alpha_i)$ and $\E_{t_y}(\alpha_i)$ are bounded below by the constant $\epsilon$ and above by a constant $C^{2}_{1}$ depending only on $g$, one can conclude the proof of the lemma by using Equation (\ref{equation:extremal}).
\end{proof}
\begin{corollary}\label{corollary:friendpoint}
We have $ |d_T(o, y) - d_T(o, t_y)| \leq C_2 $, where $C_2$ is the constant in Lemma \ref{lemma:boundeddistance}. Hence, $\E_{o}(\alpha_i(y(k,\gamma))) \sim_{g,\epsilon,o} e^{2d(y(k,\gamma),o)}$.
\end{corollary}
\begin{proof}
One only needs to show the second statement. Let $y = y(k,\gamma)$ and $t_y = t_{y(k,\gamma)}$. Let $T = d(o,t_y)$ and $f: o \rightarrow t_y$ be the Teichm\"{u}ller mapping with dilatation $e^{2T}$ between $o$ and $t_y$. One knows that $T \sim_{g,\epsilon} d(o,y(k,\gamma))$. We want to show that $\E_{o}(\alpha_i(y)) \sim_{g,\epsilon,o} e^{2T}$, for each $1 \leq i \leq 3g-3$. Notice that $\E_{t_y}(\alpha_i(y)) \sim_{g,\epsilon} 1$. On the one hand, by Kerckhoff's formula, one has $$\E_{o}(\alpha_i(y)) \prec_{g,\epsilon} e^{2T}.$$ In order to bound $\E_o(\alpha_i(y))$ from below, we construct a metric and use the analytic definition of extremal length in \cite{Kerckhoff80}. Fix $1 \leq i \leq 3g-3$. Let $q$ be the unit quadratic differential on $o$ in the direction of $[o, t_y]$, that is, under the above notation, $q = q(y, \alpha(y))$. Let $m_i$ be the modulus of the cylinder $C_i$ determined by $q$, where $C_i$ has the same core curve as $\alpha_i(y)$. According to the proof of \cite[Proposition 2]{Kerckhoff80}, there is a metric $\sigma$ in the conformal class of $t_y$ such that the core curve of $\alpha_i(y)$ has $\sigma$-length $1$ and the total area is $e^{T}m_i + A$, where $A$ is a constant depending on $o$. One also has $$\frac{1}{e^{T}m_i} \geq \E_{t_y}(\alpha_i(y)) \geq \frac{1}{e^{T}m_i + A}.$$ Now consider the metric $\Sigma = f^{*}\sigma$ on $o$ defined by the pullback of $\sigma$ via the Teichm\"{u}ller mapping $f$. As $f$ preserves the area coarsely (the metric $\sigma$ is supported on a neighborhood of $C_i$) but shrinks the vertical length, the core curve of $\alpha_i(y)$ has $\Sigma$-length $e^T$. Thus $$\E_{o}(\alpha_i(y)) \geq \frac{e^{2T}}{e^{T}m_i + A}.$$ As $$e^{T}m_i \prec_{\epsilon,g} 1, $$ one then further has $$\E_{o}(\alpha_i(y)) \succ_{g,\epsilon,o} e^{2T}.$$
\end{proof}
\noindent Summarizing the discussion above: there are $A > 0,B >0$, depending on $\epsilon,g,o$, such that, for every $\gamma \in E_n$, there is a sequence $\{\xi_k(\gamma) \in \PMF(S)\}$ satisfying
\begin{itemize}
\item[(a)] $\forall k, \xi_k(\gamma) = [x_k(\gamma) = \sum_{i=1}^{3g-3}\alpha_i(\gamma)]$ where $\{\alpha_i(\gamma) \}^{3g-3}_{i=1}$ is a pants decomposition of $S$;
\item[(b)] For each $k$, there is $t_k$ such that, for every $i$, $Ae^{2t_k} \leq \E_{o}(\alpha_i(\gamma)) \leq Be^{2t_k}$, and $\lim_k t_k = \infty$;
\item[(c)] The limit of $\{\xi_k(\gamma)=\frac{x_k(\gamma)}{\sqrt{\E_{o}(x_k(\gamma))}}\}$ in $\PMF(S)$ is $\xi_{\gamma}$.
\end{itemize}
\begin{lemma}\label{lemma:approximation}
If there exists $N_0 > 0$ small enough such that, for every $\gamma \in E_n$ and $\xi_k(\gamma)$, one has $$\forall N \leq N_0,~ aN^{\frac{h}{2}} \leq \nu(\{\eta \in \PMF(S):i(\eta,\xi_k(\gamma)) \leq N\}) \leq bN^{\frac{h}{2}},$$ then there exists $a'>0$ such that $$\forall N \leq N_0,~ \nu(\{\eta \in \PMF(S):i(\eta,\xi_{\gamma})\leq N\}) \leq a'N^{\frac{h}{2}}.$$
\end{lemma}
\begin{proof}
Denote $A_k(N) = \{\eta: i(\eta, \xi_k(\gamma)) \leq N\}$ and $B(N)=\{\eta: i(\eta, \xi_{\gamma}) \leq N\}$. Since $i(\cdot,\cdot)$ is continuous, we have $$\lim_{k \to \infty}\mathds{1}_{A_k(N)}(x)= \mathds{1}_{B(N)}(x).$$ By Fatou's lemma, $$\nu(B(N)) \leq \liminf_{k} \nu(A_k(N)) \leq bN^{\frac{h}{2}}. $$
\end{proof}
\subsection{Regularity at pants curves.}
\noindent We are now in a position to prove that the assumption in Lemma \ref{lemma:approximation} holds. We first summarize all properties of $\xi_k(\gamma)$ that we really need. \\
\noindent {\bf More conventions:}~~~ From now on, we will use the hyperbolic length function $\ell_o(\cdot)$. Since $\ell_o^2(\cdot) \sim_{o} \E_{o}(\cdot)$, we can use $\ell_o(\cdot)$ in place of $\E_o(\cdot)$, without affecting the results, when defining the measure $\nu_o$, the embedding $\tau : \PMF(S) \longrightarrow \MF(S)$ and $\xi_{k}(\gamma)$. For instance, for a measurable subset $U \subset \PMF(S)$, we have $$\nu_o(U) = \mu(\{\eta: [\eta] \in U, \ell_{o}(\eta) \leq 1 \}).$$ \\
\noindent {\bf Set-up 0}: Let $\alpha = \{\alpha_1, \cdots, \alpha_{3g-3}\}$ be a pants decomposition of $S$ and consider it to be a measured foliation still denoted by $\alpha$. Then $[\alpha]$ defines a unit holomorphic quadratic differential $q$ on $o$, namely the unique $q$ such that $[\mathcal{V}(q)] = [\alpha]$. Let $\xi = \frac{\alpha}{\ell_{o}(\alpha)}$; then $\xi$ is the image of $[\alpha]$ under $\tau$. We denote by $g_t$ the Teichm\"{u}ller geodesic defined by $q$. We assume that for all $i \in \{1,\cdots,3g-3\}$, $\ell_{o}(\alpha_i)$ is bounded below and above, up to multiplicative constants depending only on $g,o,\epsilon$, by $e^{T}$ for some $T$.\\
\begin{theorem}\label{theorem:regularity}
Under the above {\bf Set-up 0}, there exist $M_0 > 0, C > 0$ and $D > 0$, depending on $g,o,\epsilon$ such that when $M < M_0$, we have $$C M^{\frac{h}{2}} \leq \nu(\{\eta \in \PMF(S): i(\eta, \xi) \leq M\}) \leq D M^{\frac{h}{2}}.$$
\end{theorem}
\noindent The main tool to prove the above theorem is the following Dehn-Thurston theorem. Let $P = \{\alpha_k\}$ be a pants decomposition. For each $\alpha_k$, let $m_k: \MF(S) \longrightarrow \mathbb{R}_{\geq 0}, \xi \mapsto i(\alpha_k,\xi)$ be the intersection function defined by $\alpha_k$ and $t_k =tw_{k}$ be the twist function associated to $\alpha_k$.
\begin{theorem}[The Dehn-Thurston theorem, {\cite[Theorem 3.1.1]{PennerHarer}}]\label{theorem:Dehn-Thurston}
Let $S = S_g$ and $\alpha =\{\alpha_1, \cdots, \alpha_{3g-3}\}$ be a pants decomposition of $S$. Then the map
\begin{equation}
\begin{aligned}
& \varpi: \MF(S) \longrightarrow \mathbb{R}^{6g-6}\\
&\mathcal{F} \mapsto (m_1(\mathcal{F}),\cdots, m_{3g-3}(\mathcal{F}), t_1(\mathcal{F}),\cdots, t_{3g-3}(\mathcal{F}))
\end{aligned}
\end{equation}
gives a global coordinate for $\MF(S)$.
\end{theorem}
\noindent There are various (equivalent) definitions of the Thurston measure $\nu_{Th}$ on $\MF(S)$. For the computations below, the Dehn-Thurston theorem enables us to define it as the measure $\frac{1}{(3g-3)!}\, \omega^{3g-3}$ induced by the symplectic form $\omega = dm_1 \wedge dt_1 + \cdots + dm_{3g-3} \wedge dt_{3g-3}$. Notice that, although the symplectic form $\omega$ depends on $\alpha$, the measure does not, since $\MF(S)$ has a piecewise integral linear structure \cite[Section 3.1]{PennerHarer}, which means that different pants decompositions give the same measure. Hence the constants in Theorem \ref{theorem:regularity} do not depend on $\xi$. We now write $\mu = \nu_{Th}$, with $\alpha$ fixed to be the one given in {\bf Set-up 0}. Note that $\frac{h}{2} = 3g-3$. We now prove the theorem.
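\noindent Before doing so, we record, purely for the reader's orientation, an immediate consequence of this definition: in the Dehn-Thurston coordinates $\varpi$ of Theorem \ref{theorem:Dehn-Thurston} the measure $\mu$ is simply Lebesgue measure, so the $\mu$-measure of a coordinate box is the product of its side lengths,
\begin{displaymath}
\mu\left(\left\{(m_1,\cdots,m_{3g-3},t_1,\cdots,t_{3g-3}): 0 \leq m_k \leq a_k,~ 0 \leq t_k \leq b_k~ \forall k\right\}\right) = \prod_{k=1}^{3g-3} a_k b_k.
\end{displaymath}
This is the form in which $\mu$ enters the estimates below.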
\begin{proof}[Proof of Theorem \ref{theorem:regularity}]
By Lemma \ref{Minskyinequailty}, for every two elements $\xi$ and $\eta$ in $\PMF(S)$, the intersection number satisfies $i(\eta,\xi) \leq 1$, and $1$ is achievable. So take $M_0 = \frac{1}{4}$ and let $M \leq M_0$. The proof is then divided into two parts. In the sequel, denote $a = \frac{1}{\sum_i\ell_{o}(\alpha_i)} $ and define $\ell$ by $$ \ell = \frac{1}{a\prod\left(\ell_{o}(\alpha_i)\right)^{\frac{2}{h}}} = \frac{\sum_i\ell_{o}(\alpha_i)}{\prod\left(\ell_{o}(\alpha_i)\right)^{\frac{2}{h}}}.$$ Then by our assumption, there exist $A_1 > 0$ and $B_1 > 0$, depending on $g, o, \epsilon$, such that $B_1 \leq \ell \leq A_1$.\\
\noindent {\bf Upper bound:} $\nu(\{\eta \in \PMF(S): i(\eta, \xi) \leq M\}) \leq D M^{\frac{h}{2}}$:
\noindent By the definition of $\nu$ and $i(\eta,\xi) = a\sum^{3g-3}_{k=1}i(\eta,\alpha_k)$, we have,
\begin{equation}\label{equation:regularityup1}
\begin{aligned}
&\nu\left(\left\{\eta \in \PMF(S): i(\eta, \xi) \leq M\right\}\right) \\
&= \mu\left(\left\{t\eta \in \MF(S): i(\eta, \xi) \leq M, \ell_{o}(\eta) = 1, 0 \le t \le 1\right\}\right) (\mbox{by definition})\\
&=\mu\left(\left\{t\eta \in \MF(S): a\sum_k m_k(\eta) \leq M, \ell_{o}(\eta) = 1, 0 \le t \le 1\right\}\right)\\
&\leq \mu\left(\left\{ \eta: \forall k, m_k(\eta) \leq \frac{M}{a},~ t_k(\eta)\ell_o(\alpha_k) \leq A_2 \right\}\right),
\end{aligned}
\end{equation}
\noindent where $A_2$ is a constant depending only on $o$; in fact, $A_2$ depends on the diameter of the hyperbolic surface corresponding to $o$. The last step comes from the fact that large twists would make the length large. Thus we further have
\begin{equation}
\begin{aligned}
&\nu\left(\left\{\eta \in \PMF(S): i(\eta, \xi) \leq M\right\}\right)\\
&\leq \mu\left( \left\{(m_1,\cdots, m_{3g-3},t_1,\cdots,t_{3g-3}): \forall k, m_k \leq \frac{M}{a}, t_k \leq \frac{A_2}{\ell_{o}(\alpha_k)}\right\}\right) (\mbox{by (\ref{equation:regularityup1})})\\
&\leq A_3 M^{3g-3}\frac{1}{a^{3g-3}\prod^{3g-3}_{k=1}\ell_{o}(\alpha_k)}~(\mbox{by the definition of $\mu$})\\
&= A_3 M^{3g-3}\ell^{3g-3}\\
&\leq A_3M^{3g-3}A_1^{3g-3} (\mbox{since $B_1 \leq \ell \leq A_1$})\\
&\leq DM^{\frac{h}{2}}.
\end{aligned}
\end{equation}
\noindent {\bf Lower bound:} $C M^{\frac{h}{2}} \leq \nu(\{\eta \in \PMF(S): i(\eta, \xi) \leq M\})$:\\
\noindent In order to bound the measure from below, we will construct a subset contained in the set. First fix, for each $i$, a positive orientation for $\alpha_i$. Let
\begin{equation}
\begin{aligned}
&V =\left\{t\eta \in \MF(S): i(\eta, \xi) \leq M, \ell_{o}(\eta) = 1, 0 \le t \le 1\right\}.
\end{aligned}
\end{equation}
Then $\eta^0 = \frac{1}{3g-3}\xi$ is in $V$. Let $a$ and $M$ be as above, let $\delta > 0$ be a positive number, and let $\kappa(g)$ be a sufficiently small positive number depending only on $g$ and $o$. Define a set of $(6g-6)$-tuples by
\begin{displaymath}
\begin{aligned}
&W_0(a, M,\delta) = \\
&\{ (x_1,\cdots,x_{3g-3},y_1,\cdots,y_{3g-3}): \forall i, 0 \leq ax_i \leq \kappa(g)M, 0 \leq y_i\ell_o(\alpha_i) \leq \delta\}.
\end{aligned}
\end{displaymath}
Let also $\varpi$ be the coordinate map in Theorem \ref{theorem:Dehn-Thurston}. Then, on the one hand, by hyperbolic geometry (or by Theorem \ref{Minskyextremal}), there are $M_0 > 0$ and $\delta_0 = \delta_0(M_0)$, depending on $o$, such that for all $M \leq M_0$, one has
\begin{displaymath}
\begin{aligned}
&\varpi^{-1}(\varpi(\eta^0) + W_0(a,M,\delta_0)) \subset V.
\end{aligned}
\end{displaymath}
\noindent Notice that the box $\varpi(\eta^0) + W_0(a,M,\delta_0)$ can be chosen small enough that $\varpi^{-1}$ is a homeomorphism on it. Therefore $\mu(V) \geq \mu(\varpi^{-1}(\varpi(\eta^0) + W_0(a,M,\delta_0)))$. On the other hand,
\begin{displaymath}
\begin{aligned}
& \mu(\varpi^{-1}(\varpi(\eta^0) + W_0(a,M,\delta_0))) \\
&= \mu( \varpi^{-1}(W_0(a,M,\delta_0))).
\end{aligned}
\end{displaymath}
The last equality uses the translation invariance of $\mu$ in the coordinates $\varpi$. Since $\mu$ is Lebesgue measure in these coordinates, the last measure equals $\left(\kappa(g)\delta_0\right)^{3g-3}M^{3g-3}\ell^{3g-3} \geq \left(\kappa(g)\delta_0 B_1\right)^{3g-3}M^{\frac{h}{2}} = C M^{\frac{h}{2}}$, where $C$ depends on $g$, $\epsilon$, $M_0$ and $o$. Hence the proof is complete.
\end{proof}
\begin{remark}
Although we use hyperbolic length functions in the proof of the above theorem to simplify notation, the proof becomes even more transparent, and the constants involved become explicit, if one uses extremal length functions combined with Theorem \ref{Minskyextremal}.
\end{remark}
\begin{proof}[Proof of Theorem \ref{Harish-Chandra}]
Let $\gamma \in E_n$ and take $M = e^{-2n}$ in Corollary \ref{corollary:upperbound} and $\overline{M} = F\ln n e^{-2n}$ in Corollary \ref{corollary:lowerbound}, where $F$ is the constant in Lemma \ref{lemma:inverseminsky}. Then Theorem \ref{theorem:regularity} implies that the assumption in Lemma \ref{lemma:approximation}, and hence that in Lemma \ref{lemma:log}, holds, so that $\Psi(\gamma)_{\geq M } \sim_{g,o,\epsilon} n$ and $\Psi(\gamma)_{\geq \overline{M} } \sim_{g,o,\epsilon} a_1n-c_1\ln\ln n$. Then by Corollary \ref{corollary:upperbound}, Corollary \ref{corollary:lowerbound} and Lemma \ref{lemma:log}, the proof is complete.
\end{proof}
\section{Ergodicity of boundary representation}
\subsection{Main theorem.}
\noindent In this section, we will prove the main theorem: Theorem \ref{theoreom:ergodicitymcg}, namely,
\begin{theorem}\label{theorem:Ergdicitymainproof}
Let $S = S_g (g \geq 2)$ and $\pi_{\nu}$ be the associated quasi-regular representation of the mapping class group $\M(S)$ on $L^2(\PMF(S),\nu)$. Let $n \gg \rho$ and $E_n = \mathcal{E}(\theta,\epsilon, n, o, \rho)$ (up to a subsequence). Let $e_n = Pr: E_n \longrightarrow \PMF(S)$ be the radial projection which assigns to $g \in E_n$ the direction $\xi_g$ of the oriented geodesic $[o, g \cdot o]$. Then the quasi-regular representation $\pi_{\nu}$ is ergodic with respect to $(E_n, e_n)$ and any $f \in \overline{H}^{L^{\infty}(\PMF(S),\nu)}$, where $$H = \langle\mathds{1}_{U}:\nu(\partial U) = 0 ~\mbox{and}~ U ~\mbox{is a Borel subset of}~ \PMF(S)\rangle.$$
\end{theorem}
\begin{proof}
The proof consists of verifying all assumptions in Theorem \ref{criterion2} for $E_n$. The first two will be verified by showing $E_n$ is of exponential growth (namely, Corollary \ref{corollary:exponentialgrowth} and Corollary \ref{weakconvergence}). The third one is by Proposition \ref{nontangentconvergence}. The last one is Theorem \ref{uniformbounded}.
\end{proof}
\begin{proposition}\label{nontangentconvergence}
For every $n \gg \rho$, there are two sequences of real numbers $\{h_{r_n}(n, \rho)\}$ and $\{r_n\}$ such that $\lim_{n \rightarrow \infty}h_{r_n}(n,\rho) = \lim_{n\rightarrow \infty}r_n = 0$ and such that $$\forall n \in \mathbb{N}, \forall \gamma \in E_n, \frac{\langle \pi_{\nu}(\gamma)\mathds{1}_{\PMF(S)}, \mathds{1}_{\{x\in \PMF(S): i(x,e_n(\gamma))\geq r_n\}}\rangle}{\Phi(\gamma)} \leq h_{r_n}(n,\rho).$$
\end{proposition}
\begin{proof}
Let $n \gg \rho$ and $\gamma \in E_n$, and let $\xi_{\gamma}$ be as before. Take $r_n = \frac{1}{n}$. By the Harish-Chandra estimates (Theorem \ref{Harish-Chandra}), Corollary \ref{corollary:upperbound}, Lemma \ref{lemma:log} and the proof of its assumption (Theorem \ref{theorem:regularity}), $$\frac{\langle \pi_{\nu}(\gamma)\mathds{1}_{\PMF(S)}, \mathds{1}_{\{x\in \PMF(S): i(x,\xi_{\gamma})\geq \frac{1}{n}\}}\rangle}{\Phi(\gamma)} \leq c(g,o,\rho)\frac{\ln n-D}{a_1n-c_1\ln\ln n +b_1}.$$ Taking $h_{r_n}(n,\rho) = c(g,o,\rho)\frac{\ln n - D}{a_1n-c_1\ln \ln n +b_1}$ completes the proof.
\end{proof}
\subsection{Uniform boundedness.}\label{subsection:uniformboundedness}
In this section, we complete our proof of the main theorem by proving the uniform boundedness. We start with some lemmas comparing two types of neighborhoods.
\begin{lemma}\label{lemma:exponentialapproximate}
We use the notation of Corollary \ref{corollary:friendpoint}. Let $\xi_{\gamma} \in \PMF(S)$ be the direction of $[o,y=\gamma \cdot o]$ and $\xi^{\gamma} \in \PMF(S)$ be the direction of $[o,x_{\gamma}=t_y]$. Then there exists a constant $C$ such that $i(\xi_{\gamma}, \xi^{\gamma}) \leq C e^{-2d(\gamma \cdot o,o)}$.
\end{lemma}
\begin{proof}
By Lemma \ref{Minskyinequailty}, $$ i^2(\xi_{\gamma}, \xi^{\gamma}) \leq \E_{x_{\gamma}}(\xi_{\gamma})\E_{x_{\gamma}}(\xi^{\gamma}).$$
Let $\alpha = \alpha(\gamma)$ be as before. Then $\xi^{\gamma} = \frac{\alpha}{\sqrt{\E_{o}(\alpha)}} \sim_{g,o} e^{-d(\gamma \cdot o,o)}\alpha$. As $\E_{x_{\gamma}}(\alpha) \sim_g 1$, we have $\E_{x_{\gamma}}(\xi^{\gamma}) = \frac{1}{\E_{o}(\alpha)}\E_{x_{\gamma}}(\alpha) \prec_{g,o} e^{-2d(\gamma \cdot o,o)}$. On the other hand, by Lemma \ref{lemma:boundeddistance}, up to a multiplicative constant one can replace $\E_{x_{\gamma}}(\xi_{\gamma})$ by $\E_{\gamma \cdot o}(\xi_{\gamma}) = e^{-2d(\gamma \cdot o,o)}$. Collecting these estimates completes the proof.
\end{proof}
\noindent For $\xi \in \PMF(S)$ and $x \in [o,\xi]$, denote $\mathcal{I}_C(\xi,x) = \{\eta \in \PMF(S): i(\eta,\xi) \leq C e^{-2d(x,o)} \}$. Let $L, \theta, \epsilon$ be as in Theorem \ref{theorem:DDM}.
\begin{lemma}\label{lemma:twoneigborhood}
Let $\eta \in \PMF(S)$ and $\gamma \in E_n$. Suppose that $\eta$ does not leave $\T_{\epsilon}(S)$ eventually. Let $x \in [o,\eta]$ be such that $d(x,o) = n$. Let $C \geq 1$ and let $n$ be large enough. If $\gamma \in E_n$ is such that $i(\xi_{\gamma}, \eta) \leq Ce^{-2n}$, then $d(x, \gamma \cdot o) \leq \frac{1}{h}\ln\ln n$.
\end{lemma}
\begin{proof}
We argue as in Lemma \ref{lemma:inverseminsky}. Denote $\xi = \xi_{\gamma}$; hence by assumption $i(\xi,\eta) \leq Ce^{-2n}$. First we remark that, since both $\eta$ and $\xi$ are uniquely ergodic, we have a geodesic triangle $\triangle(o,\xi, \eta)$. As $\gamma \in E_n$, there is also a geodesic segment $I$ of length $\ell = \frac{1}{3h}\ln \ln n$ in $[o,\gamma \cdot o]$, ending at $p = \gamma \cdot o$, that spends at least proportion $\theta$ of its length in $\T_{\epsilon}(S)$. By Theorem \ref{theorem:DDM}, $$I \cap \mathcal{N}_{D}([o,\eta]\cup[\xi,\eta]) \ne \emptyset,$$ where $D$ is the constant given in Theorem \ref{theorem:DDM}. Choose $q \in I \cap \mathcal{N}_{D}([o,\eta]\cup[\xi,\eta])$. Then there are two possibilities:\\
\noindent Case 1: $d(q, y) \leq D$ with $y\in [o,\eta]$.\\
\noindent Then $$ d(q,o) - D \leq d(o,y) \leq d(q,o) +D.$$ Since $$n-\ell-\rho \leq d(q,o) \leq n+\rho,$$ we have $$0 \leq d(x,y) \leq \ell +D+\rho.$$ Hence,
\begin{displaymath}
\begin{aligned}
&d(x,\gamma \cdot o) \leq d(x,y) + d(y,q) + d(q,p)\\
&\leq \ell + D + D + \ell+\rho\\
&\leq 2(\ell +D+\rho)\\
&\leq 3 \ell.
\end{aligned}
\end{displaymath}
\noindent Case 2: $d(q,y) \leq D$ with $y \in [\xi,\eta]$.
\noindent Then by Lemma \ref{Minskyinequailty}, one has $$i^2(\eta,\xi) = \E_y(\xi)\E_{y}(\eta).$$ Now, since $d(q,y) \leq D$, by Kerckhoff's formula, we have $$e^{-2D}\E_{q}(\xi) \leq \E_{y}(\xi), ~~~e^{-2D}\E_{q}(\eta) \leq \E_{y}(\eta).$$ Therefore, $$ e^{-4D }\E_q(\xi)\E_{q}(\eta) \leq i^2(\xi,\eta).$$ On the other hand, we have $$\E_q(\xi) = e^{-2d(o,q)},~~~ i(\xi,\eta) \leq Ce^{-2n},$$ which implies that $$e^{-4D} \E_{q}(\eta)e^{-2d(o,q)} \leq C^2e^{-4n}.$$
That is, $$e^{-4D} \E_{q}(\eta)e^{2(n-d(o,q))} \leq C^2e^{-2n}.$$ By Kerckhoff's formula again, $$ \E_p(\eta) \leq C^2e^{2\rho+4D}e^{-2n},$$ or $$ \frac{1}{2}\ln\E_p(\eta) \leq \ln (Ce^{\rho+2D})-n.$$ Applying Lemma \ref{busemann}, one can choose $z \in [o,\eta] \cap \T_{\epsilon}(S)$ so that, if we denote $d(o,p) = t$, $d(p,z)=a$ and $d(z,o) = b$, then $a-b \leq -n + \ln (Ce^{2\rho+4D}) +1$. Therefore, we have $$ 0 \leq t + a-b \leq \ln (Ce^{2\rho+4D}) +1+ \rho =c_1.$$ Note that $c_1$ is a constant. Now consider the geodesic triangle $\Delta(o,p,z)$. The side $[o,p]$ contains a segment $I =[s,p]$, ending at $p$, which spends at least proportion $\theta$ of its length in the thick part; take the midpoint $m$ of $I$. Since $\theta = 0.999$, the subsegment $[s,m]$ spends at least proportion $0.1$ of its length in the thick part. By Theorem \ref{theorem:DDM} again, for $n \gg 0$ there exist a point $g \in [s,m]$ and a constant $D'$ such that $$d(g, [o,z] \cup [p,z]) \leq D'.$$ If there is $h_1 \in [o,z]$ such that $d(h_1,g) \leq D'$, one can complete the proof as in the first case. Otherwise, there is $h_2 \in [p,z]$ such that $d(h_2,g) \leq D'$. Then
\begin{equation*}
\begin{aligned}
t+a-b &= d(o,g) + d(g,p) + d(p,h_2) +d(h_2,z) - d(o,z)\\
&\geq d(g,p) + d(p,h_2) - d(g,h_2)\\
&\geq d(g,p) - D' \geq \frac{\ell}{2} - D' \to \infty,
\end{aligned}
\end{equation*}
which is impossible since $t+a-b$ is bounded by $c_1$.
\end{proof}
\begin{corollary}\label{corollary:boundednumberpoints}
Let $\eta \in \PMF(S)$ and suppose that $\eta$ does not leave $\T_{\epsilon}$ eventually. Let $x \in [o,\eta]$ such that $d(x,o) = n$. Let further $C > 0$ and $n$ large enough. Then
\begin{displaymath}
\begin{aligned}
&\left|\{ \gamma \in E_n: \gamma \cdot o \in Sec_{\mathcal{I}_C(\eta, x)}\}\right|\\
& \prec_{g,o,\rho} \ln n.
\end{aligned}
\end{displaymath}
\end{corollary}
\begin{proof}
By Theorem 1.2 in \cite{ABEM} (note that $\Lambda$ in the theorem is a constant function), when $n$ is large enough, there exists a constant $N_0 > 0$ such that $|B(x, R) \cap \M(S)\cdot o| \leq N_0 e^{hR}$ for every $R > 0$. By Lemma \ref{lemma:twoneigborhood}, each such point $\gamma \cdot o$ lies in the ball $B(x, \frac{1}{h}\ln\ln n)$, so their number is at most $N_0 e^{h \cdot \frac{1}{h}\ln\ln n} = N_0 \ln n$, which gives the conclusion.
\end{proof}
\begin{theorem}\label{uniformbounded}
Under the notations used in Theorem \ref{theorem:Ergdicitymainproof}, we have
$$\sup_{n}\left\|M^{\mathds{1}_{\PMF(S)}}_{E_n}\mathds{1}_{\PMF(S)}\right\|_{L^{\infty}(\PMF(S),\nu)} < \infty.$$
\end{theorem}
\noindent Recall that
\begin{displaymath}
\begin{aligned}
& M^{\mathds{1}_{\PMF(S)}}_{E_n}\mathds{1}_{\PMF(S)}([\xi])\\
&\phantom{=\;\;} = \frac{1}{|E_n|}\sum_{\gamma \in E_n}\frac{\pi_{\nu}(\gamma)\mathds{1}_{\PMF(S)}([\xi])}{\Phi(\gamma)}\\
&\phantom{=\;\;} = \frac{1}{|E_n|}\sum_{\gamma \in E_n}\left(\frac{\E_{o}(\xi)}{\E_{\gamma.o}(\xi)}\right)^{\frac{h}{4}}\frac{1}{\Phi(\gamma)}.
\end{aligned}
\end{displaymath}
By using the embedding map $\tau$ of $\PMF(S)$ into $\MF(S)$ (so that $\E_{o}(\xi) = 1$), one can rewrite the above formula as
\begin{displaymath}
\begin{aligned}
& M^{\mathds{1}_{\PMF(S)}}_{E_n}\mathds{1}_{\PMF(S)}([\xi])\\
&\phantom{=\;\;} = \frac{1}{|E_n|}\sum_{\gamma \in E_n}\left(\frac{1}{\E_{\gamma.o}(\xi)}\right)^{\frac{h}{4}}\frac{1}{\Phi(\gamma)}.
\end{aligned}
\end{displaymath}
\noindent We first introduce a type of open sets $\mathcal{IN}$ in $\PMF(S)$ defined by intersection numbers. For every $\eta \in \PMF(S), C > 0, t > 0$, define
\begin{displaymath}
\begin{aligned}
& \mathcal{IN}(\eta,t,C) = \{\xi \in \PMF(S): i(\xi,\eta) \leq Ce^{-2t} \}.
\end{aligned}
\end{displaymath}
\begin{proof}[Proof of Theorem \ref{uniformbounded}]
Let $U(\epsilon, \theta)$ be the subset of $\PMF(S)$ of full measure. We shall give a bound, independent of $n \gg \rho$, for $ M^{\mathds{1}_{\PMF(S)}}_{E_n}\mathds{1}_{\PMF(S)}(\zeta)$ for every point $\zeta \in U(\epsilon, \theta)$. Fix $R > 0$. At this stage, $R$ is arbitrary, but it will be carefully chosen at the end of the proof. As usual, for $\gamma \in E_n$, denote by $\xi_{\gamma}$ the direction corresponding to $[o,\gamma \cdot o]$, hence a point in $\PMF(S)$. For each point $\gamma \cdot o$, consider the open ball $B(\gamma, R)$ of radius $R$ centered at $\gamma \cdot o$. Denote the projection of $B(\gamma, R)$ to $\PMF(S)$ by $\mathcal{O}(\gamma \cdot o,R)$. Then, by Lemma \ref{lemma:Shodowlemma}, the measure satisfies $\nu(\mathcal{O}(\gamma \cdot o,R)) \sim_{g,R,\rho} e^{-hn}$.
Fix any $C \geq 1$, for instance $C = 1$. Divide $E_n$ into two sets $E^1_n$ and $E^2_n = E_n-E^1_n$, where $E^1_n$ consists of those $\gamma \in E_n$ with $\xi_{\gamma} \notin \mathcal{IN}(\zeta, n, C) $. We then have, for each $\zeta \in U(\epsilon,\theta)$,
\begin{equation} \label{equation:uniformboundedness}
\begin{aligned}
& M^{\mathds{1}_{\PMF(S)}}_{E_n}\mathds{1}_{\PMF(S)}(\zeta)\\
&\phantom{=\;\;} = \frac{1}{|E_n|}\sum_{\gamma \in E_n}\left(\frac{1}{\E_{\gamma.o}(\zeta)}\right)^{\frac{h}{4}}\frac{1}{\Phi(\gamma)}\\
&\phantom{=\;\;} = \frac{1}{|E_n|}\sum_{\gamma \in E^1_n}\left(\frac{1}{\E_{\gamma.o}(\zeta)}\right)^{\frac{h}{4}}\frac{1}{\Phi(\gamma)} + \frac{1}{|E_n|}\sum_{\gamma \in E^2_n}\left(\frac{1}{\E_{\gamma.o}(\zeta)}\right)^{\frac{h}{4}}\frac{1}{\Phi(\gamma)}\\
&= {\rm I + II}.
\end{aligned}
\end{equation}
First we want to bound term ${\rm I}$ in Equation (\ref{equation:uniformboundedness}). The set $E^1_n$ can be further decomposed into two sets: $F^1_n$ and $F^2_n = E^1_n - F^1_n$, where $F^1_n = \{ \gamma \in E^1_n: Pr(B(\gamma, R)) \cap \mathcal{IN}(\zeta,n,C) = \emptyset\}.$ One then has,
\begin{equation}
\begin{aligned}
&{\rm I} = \frac{1}{|E_n|}\sum_{\gamma \in F^1_n}\left(\frac{1}{\E_{\gamma.o}(\zeta)}\right)^{\frac{h}{4}}\frac{1}{\Phi(\gamma)} + \frac{1}{|E_n|}\sum_{\gamma \in F^2_n}\left(\frac{1}{\E_{\gamma.o}(\zeta)}\right)^{\frac{h}{4}}\frac{1}{\Phi(\gamma)}\\
&\phantom{=\;\;} = {\rm III + IV}.
\end{aligned}
\end{equation}
\noindent For the term ${\rm III}$, first notice that $$\forall y \in B(\gamma,R), \frac{1}{\E_{\gamma \cdot o}(\zeta)} \sim_{R} \frac{1}{\E_{y}(\zeta)}.$$ Now, by Lemma \ref{Minskyinequailty}, for $\nu$-almost every $\xi_y \in \mathcal{O}(\gamma \cdot o, R)$, $$ \left(\frac{1}{\E_y(\zeta)}\right)^{\frac{h}{4}} \prec_{\rho,R} \frac{1}{e^{\frac{hn}{2}} (i(\xi_y, \zeta))^{\frac{h}{2}}}.$$ Hence, for $\nu$-almost every $ \xi_{y} \in \mathcal{O}(\gamma \cdot o,R),$
\begin{displaymath}
\begin{aligned}
&\left(\frac{1}{\E_{\gamma \cdot o}(\zeta)}\right)^{\frac{h}{4}}\\
&\prec_{R,\rho} e^{-\frac{hn}{2}} \frac{1}{(i(\xi_y, \zeta))^{\frac{h}{2}}}.
\end{aligned}
\end{displaymath}
Therefore,
\begin{displaymath}
\begin{aligned}
& {\rm III} = \frac{1}{|E_n|}\sum_{\gamma \in F^1_n}\left(\frac{1}{\E_{\gamma.o}(\zeta)}\right)^{\frac{h}{4}}\frac{1}{\Phi(\gamma)}\\
& \prec_{R} \frac{1}{|E_n|}\sum_{\gamma \in F^1_n} \frac{e^{-\frac{hn}{2}}}{\nu(\mathcal{O}(\gamma \cdot o, R))} \int_{\mathcal{O}(\gamma \cdot o, R)}\frac{1}{(i(\eta, \zeta))^{\frac{h}{2}}}d\nu(\eta) \frac{1}{\Phi(\gamma)}.
\end{aligned}
\end{displaymath}
Note that the open sets of the form $\mathcal{O}(\gamma \cdot o,R)$ have a bounded number of mutual intersections, with the bound depending on $R$ and $\rho$. Thus, since $|E_n| \asymp e^{hn}$ (Corollary \ref{corollary:exponentialgrowth}) and $\Phi(\gamma) \succ_{g,o,\rho} (a_1n-c_1\ln \ln n +b_1)e^{-\frac{hn}{2}}$ (Harish-Chandra estimates), substituting all of this, one has
\begin{equation}
\begin{aligned}
&{\rm III} \prec_{g,o,\rho,R} \frac{1}{a_1n-c_1\ln \ln n +b_1}\int_{\{\eta \in \PMF(S): i(\eta,\zeta) > Ce^{-2n}\}}\left(\frac{1}{i(\eta, \zeta)}\right)^{\frac{h}{2}}d\nu(\eta)\\
& \prec_{g,o,\rho,R} 1.
\end{aligned}
\end{equation}
The last inequality follows from the fact that $\zeta \in U(\epsilon,\theta)$ and the proof of Harish-Chandra estimates.\\
\noindent For terms ${\rm IV}$ and ${\rm II} $. Take $H_n = E^2_n \cup F^2_n$. These two terms can be put together to obtain:
\begin{equation}
\begin{aligned}
& {\rm IV + II} \\
& = \frac{1}{|E_n|}\sum_{\gamma \in H_n}\left(\frac{1}{\E_{\gamma.o}(\zeta)}\right)^{\frac{h}{4}}\frac{1}{\Phi(\gamma)}\\
&\leq \frac{1}{|E_n|}\sum_{\gamma \in H_n} \frac{e^{\frac{hL(\gamma)}{2}}}{\Phi(\gamma)}\\
& \prec_{g,\rho,o} e^{-hn} \sum_{\gamma \in H_n}\frac{e^{\frac{hn}{2}}}{(a_1n-c_1\ln \ln n+b_1)e^{\frac{-hn}{2}}}\\
& = \frac{1}{a_1n-c_1\ln\ln n+b_1}|H_n|.
\end{aligned}
\end{equation}
We now \textbf{claim:} $|H_n| \prec_{g,o,\rho,\epsilon,\theta} \ln n$. Granting the claim, the sum ${\rm IV + II}$ tends to $0$ as $n \rightarrow \infty$, which completes the proof of the theorem.\\
\noindent It remains to prove the above \textbf{claim}.
\begin{proof}[Proof of the \textbf{claim}]
By Corollary \ref{corollary:boundednumberpoints}, the number $|E^2_n| \prec \ln n$. We now show that the same holds for $|F^2_n|$. Choose $R \leq \min{\{1, S_0\}}$, where $S_0$ is the injectivity radius of $o$ in the $\epsilon$-thick part of the moduli space $\mathcal{M}(S) = \T(S)/\M(S)$. Hence for every $\gamma \in \M(S)$ and every point $q \in B(\gamma \cdot o, R)$, we have $q \in \T_{\epsilon}(S) $ and $d_T(\gamma \cdot o, q) \leq 1$. Fix such an $R$ a priori. Assume now that $\gamma \in F^2_n$, namely $Pr(B(\gamma \cdot o, R)) \cap \mathcal{IN}(\zeta, n, C) \ne \emptyset$. As $U(\epsilon, \theta)$ has full measure, in particular it is dense in $\PMF(S)$, one can choose $q \in B(\gamma \cdot o, R)$ so that the direction $\xi_q$ of $[o,q]$ is in $U(\epsilon, \theta) \cap \mathcal{IN}(\zeta, n, C)$. By Theorem \ref{theorem:fellowtravelling}, there is a $P = P(\epsilon)$ so that the two geodesics $[o,\gamma \cdot o]$ and $[o,q]$ are $P$-fellow travelling in a parametrized fashion. Now consider the $P$-neighborhood $\mathcal{N}_P$ of $\T_{\epsilon}(S)$, namely the union of points in $\T(S)$ that have distance at most $P$ from a point in $\T_{\epsilon}(S)$. As $\M(S)$ acts by isometries on $\T(S)$ and $\T_{\epsilon}(S)$ is $\M(S)$-invariant and cocompact, the neighborhood $\mathcal{N}_P$ is $\M(S)$-invariant and cocompact. By Mumford's compactness, there is a small $\epsilon'$ so that $\mathcal{N}_P \subset \T_{\epsilon'}(S)$. Then, as $\gamma \in E_n$, the geodesic segment $[o,q]$ contains a segment $I= [a,q]$ of length $\frac{1}{3h}\ln \ln n$ that spends at least proportion $\theta$ of its length in $\T_{\epsilon'}(S)$. Note that $\epsilon$ is fixed, hence $C$ depends on $g$ and $o$. Hence, by Theorem \ref{theorem:DDM}, there are two constants $D'= D'(\epsilon',\theta)$ and $L_0'= L_0'(\epsilon',\theta)$ satisfying the conclusion of Theorem \ref{theorem:DDM}. Taking $n$ large enough and following the proofs of Lemma \ref{lemma:twoneigborhood} and Corollary \ref{corollary:boundednumberpoints}, one obtains $|F^2_n| \prec \ln n$.
\end{proof}
\end{proof}
| {
"timestamp": "2022-08-23T02:26:01",
"yymm": "2208",
"arxiv_id": "2208.10223",
"language": "en",
"url": "https://arxiv.org/abs/2208.10223",
"abstract": "Let $S = S_g$ be a closed orientable surface of genus $g \\geq 2$ and $Mod(S)$ be the mapping class group of $S$. In this paper, we show that the boundary representation of $Mod(S)$ is ergodic using statistical hyperbolicity, which generalizes the classical result of Masur on ergodicity of the action of $Mod(S)$ on the projective measured foliation space $\\mathcal{PMF}(S).$ As a corollary, we show that the boundary representation of $Mod(S)$ is irreducible.",
"subjects": "Geometric Topology (math.GT); Dynamical Systems (math.DS); Representation Theory (math.RT)",
"title": "Boundary representations of mapping class groups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9873750477464142,
"lm_q2_score": 0.7185943925708561,
"lm_q1q2_score": 0.7095221726749545
} |
https://arxiv.org/abs/1207.4556 | Refined Quicksort asymptotics | The complexity of the Quicksort algorithm is usually measured by the number of key comparisons used during its execution. When operating on a list of $n$ data, permuted uniformly at random, the appropriately normalized complexity $Y_n$ is known to converge almost surely to a non-degenerate random limit $Y$. This assumes a natural embedding of all $Y_n$ on one probability space, e.g., via random binary search trees. In this note a central limit theorem for the error term in the latter almost sure convergence is shown: $$\sqrt{\frac{n}{2\log n}}(Y_n-Y) \stackrel{d}{\longrightarrow} {\cal N} \qquad (n\to\infty),$$ where ${\cal N}$ denotes a standard normal random variable. | \section{Introduction and result}
Quicksort, invented by Hoare \cite{Ho62}, is one of the most
widely used algorithms for sorting. Given a list $\Gamma=(u_1,\ldots,u_n)\in\Rset^n$, Quicksort starts by picking a key (i.e., an element), say the first one $u_1$, as ``pivot'' element. The other keys in $\Gamma$ are then partitioned into lists $\Gamma_\le$ and $\Gamma_>$. Key $u_j$ is contained in list $\Gamma_\le$ if the ``key comparison'' between the pivot element $u_1$ and $u_j$ yields
$u_j\le u_1$, otherwise $u_j$ is contained in list $\Gamma_>$, $2\le j\le n$. Finally, the lists $\Gamma_\le$ and $\Gamma_>$ are each sorted recursively unless their size is $0$ or $1$.
The complexity of Quicksort is most commonly measured by the total number of key comparisons used, although other cost measures have been studied as well. To capture the typical complexity of the algorithm it is usually assumed that the ranks of the elements in $\Gamma$ form a random, uniformly distributed permutation of $\{1,\ldots,n\}$. Subsequently this model assumption is met by starting with the list $\Gamma=(U_1,\ldots,U_n)$, where $(U_j)_{j\ge 1}$ is a sequence of independent random variables, identically distributed with the uniform distribution on $[0,1]$. To be definite about the partitioning phase of the algorithm we assume that the order of elements in $\Gamma$ is preserved within the lists $\Gamma_\le$ and $\Gamma_>$, e.g., list $\Gamma=(4,2,5,6,1,8,3,7)$ is partitioned into the lists $\Gamma_\le=(2,1,3)$ and $\Gamma_>=(5,6,8,7)$. This property is shared by standard implementations when always using the first element as pivot element, see as general reference Mahmoud \cite{Ma00}.
We denote by $K_n$ the number of key comparisons used by Quicksort to sort the list
$\Gamma=(U_1,\ldots,U_n)$, $n\ge 1$, and set $K_0:=0$. In the probabilistic analysis of the complexity of Quicksort often characteristics of $K_n$ are studied that only depend on the distribution ${\cal L}(K_n)$ of $K_n$. With respect to weak convergence such results are reviewed below.
However, in the present setting the $K_n$ are constructed on a joint probability space which in fact is a formulation via random binary search trees discussed in Section \ref{asc} and used in the subsequent analysis. Hence, for $(K_n)_{n\ge 0}$ also path properties (in particular strong limit theorems)
can be studied. R\'{e}gnier \cite{re89} showed that
\begin{align}\label{def_yn}
Y_n:= \frac{K_n -\E[K_n]}{n+1}, \quad n\ge 0,
\end{align}
is a martingale, which converges towards a random, non-degenerate limit $Y$ almost surely and in $L_p$:
\begin{align*}
\|Y_n-Y\|_p \to 0 \quad (n\to \infty),
\end{align*}
for any $1\le p<\infty$, where we denote $\|X\|_p:=\E[|X|^p]^{1/p}$ for a random variable $X$.
(The case $p=2$ is explicitly discussed in \cite{re89}.) The mean of $K_n$ is $\E[K_n]=2(n+1)H_n -4n$, where $H_n:=\sum_{k=1}^n 1/k$ denotes the $n$th harmonic number. Another proof for the almost sure convergence of $(Y_n)_{n\ge 0}$ via a Doob-Martin compactification is given in Gr\"ubel \cite{Gr12}, see also Evans, Gr\"ubel and Wakolbinger \cite{EvGrWa12}.
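As a quick sanity check of the formula $\E[K_n]=2(n+1)H_n -4n$ (included here merely for illustration): for $n=2$ it gives $\E[K_2]=2\cdot 3\cdot\tfrac{3}{2}-8=1$, matching the single comparison always needed to sort two keys, and for $n=3$ it gives $\E[K_3]=2\cdot 4\cdot\tfrac{11}{6}-12=\tfrac{8}{3}$, in accordance with the elementary computation $2+\tfrac{2}{3}\cdot 1=\tfrac{8}{3}$ (the first partitioning costs $2$ comparisons, and with probability $\tfrac{2}{3}$ a sublist of size $2$ remains).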
R\"osler \cite{ro91} gave a proof based on a contraction argument for the convergence in distribution of $Y_n$ towards $Y$ and found that the limit distribution ${\cal L}(Y)$ satisfies
\begin{align} \label{def_c_func_a}
{\cal L}(Y) = {\cal L}(UY'+(1-U)Y''+C(U)),
\end{align}
with, for $x\in[0,1]$,
\begin{align} \label{def_c_func_b}
C(x):=1+2x\log(x)+2(1-x)\log(1-x),
\end{align}
where $U,Y'$ and $Y''$ are independent, $Y'$ and $Y''$ are distributed as $Y$ and $U$ is uniformly distributed on $[0,1]$.
The rate of the convergence $Y_n\to Y$ has been bounded, regarding the distributions ${\cal L}(Y_n)$ and ${\cal L}(Y)$, by various distance measures.
The minimal $L_p$-metric $\ell_p$ is given by
\begin{align}\label{ell_p}
\ell_p(V,W):= \ell_p({\cal L}(V),{\cal L}(W)):= \inf\{\|V'-W'\|_p: {\cal L}(V)={\cal L}(V'), {\cal L}(W)={\cal L}(W')\},
\end{align}
for all $1\le p<\infty$ and random variables $V$, $W$ with $\|V\|_p, \|W\|_p<\infty$. Note that the infimum in (\ref{ell_p})
is over all joint distributions ${\cal L}(V',W')$ with the given marginals ${\cal L}(V)$ and ${\cal L}(W)$.
Fill and Janson \cite{FJ02} obtained for all $2\le p<\infty$ the bounds
\begin{align*}
\ell_p(Y_n,Y)=\bo\left(\frac{1}{\sqrt{n}}\right),\quad \ell_p(Y_n,Y) = \Omega\left(\frac{\log n}{n}\right),
\end{align*}
as well as the explicit bound $\ell_2(Y_n,Y)< 2/\sqrt{n}$ for all $n\ge 1$.
We denote by $F_V$ the distribution function of a random variable $V$. Then, for the Kolmogorov--Smirnov distance
(uniform distance)
\begin{align*}
\varrho(V,W):=\varrho({\cal L}(V),{\cal L}(W)):= \sup_{x\in\Rset} |F_V(x)-F_W(x)|
\end{align*}
Fill and Janson \cite{FJ02} obtained for all $\varepsilon>0$ that
\begin{align*}
\varrho(Y_n,Y)=\bo\left(n^{\varepsilon-(1/2)}\right),\quad \varrho(Y_n,Y)=\Omega\left(\frac{1}{n}\right)
\end{align*}
together with the explicit lower bound $\varrho(Y_n,Y)\ge 1/(8(n+1))$ for all $n\ge 1$.
For the Zolotarev metric $\zeta_3$ defined in Section \ref{seczo} Neininger and R\"uschendorf \cite{NR02} obtained the order
\begin{align}\label{rate_zolo}
\zeta_3(Y_n,Y)=\Theta\left(\frac{\log n}{n}\right).
\end{align}
The techniques of \cite{NR02} are sufficiently sharp to obtain $\zeta_{2+\alpha}(Y_n,Y)=\Theta((\log n)/n)$ for all $\alpha \in (0,1]$ as well. Using inequalities between probability metrics, based upon (\ref{rate_zolo}), a couple of upper bounds for related distance measures between ${\cal L}(Y_n)$ and ${\cal L}(Y)$ were obtained in Section 3 of \cite{NR02}.
The results mentioned above bound distances between the distributions of the $Y_n$
and $Y$. However, the embedding of the $K_n$ on one probability space allows to measure the
approximation given by the martingale convergence $Y_n \to Y$ as well. Very recently, Bindjeme and Fill \cite{BiFi12} started quantifying the almost sure convergence $Y_n \to Y$ by identifying the $L_2$-distance between $Y_n$ and $Y$ exactly and asymptotically:
\begin{align}\label{var_asy_exp}
\|Y_n-Y\|_2=\left(\frac{1}{n+1}\left( 2H_n + 1 + \frac{6}{n+1} \right) -4\sum_{k=n+1}^\infty \frac{1}{k^2}\right)^\frac{1}{2} \sim \sqrt{\frac{2\log n}{n}}.
\end{align}
In the present note the error term $Y_n-Y$ is further studied with respect to its asymptotic distribution:
\begin{thm} \label{main_thm}
Let the data $(U_i)_{i\ge 1}$ be a sequence of independent and identically distributed random variables each with the uniform distribution on $[0,1]$.
For the number $K_n$ of key comparisons needed by the Quicksort algorithm to sort the list
$(U_1,\ldots,U_n)$ and the almost sure limit $Y$ of $Y_n$ defined in {\rm (\ref{def_yn})} we have, as $n\to\infty$, that
\begin{align*}
\sqrt{\frac{n}{2\log n}}\left(Y_n - Y\right) \stackrel{d}{\longrightarrow} {\cal N}.
\end{align*}
\end{thm}
The methods used for the proof of Theorem \ref{main_thm} also imply convergence of the third absolute moments, which yields an asymptotic expression for the $L_3$-distance between $Y_n$ and $Y$:
\begin{cor}\label{coro}
For the normalized number $Y_n$ of key comparisons needed by the Quicksort algorithm and its almost sure limit $Y$ as in Theorem \ref{main_thm} we have, as $n\to\infty$, that
\begin{align*}
\left\|Y_n - Y\right\|_3 \sim \frac{2}{\pi^{1/6}} \sqrt{\frac{\log n}{ n}}.
\end{align*}
\end{cor}
\noindent
{\bf Notation.} Throughout, by $\stackrel{d}{\longrightarrow}$ convergence in distribution is denoted, by ${\cal N}$ a random variable with the standard normal distribution. The Bachmann--Landau symbols are used in asymptotic statements. We denote by $\Nset$ the positive integers and let $\Nset_0:=\Nset \cup \{0\}$. By $\mathrm{B}(n,u)$ the binomial distribution with $n\in\Nset$ trials and success probability $u\in[0,1]$ is denoted, and by $\log x$ the natural logarithm of $x$ for $x>0$. Further $x\log x:=0$ for $x=0$ is used.
\subsubsection*{Acknowledgment} I thank Henning Sulzbach and Kevin Leckey for comments on a draft of this note and three anonymous referees for their careful reading and constructive remarks.
\section{Proof} \label{sec2}
The outline of the proof is as follows:
First, in Section \ref{asc} an explicit construction of $Y_n$ and $Y$ is recalled, which leads to a sample-pointwise recurrence relation for $Y_n-Y$. Then ideas from the contraction method are
used for this recurrence. Compared to standard applications of the contraction method, mainly additional dependencies between the arising random variables need to be controlled, see the discussion at the end of Section \ref{asc}. This is done by use of inequalities for the Zolotarev metric, provided in Section \ref{seczo}. After technical preparations in Section \ref{seclem} the convergence in Theorem \ref{main_thm} then is shown in Section \ref{pfpf} within the Zolotarev metric, which implies the stated convergence in distribution.
\subsection{Almost sure construction} \label{asc}
An explicit construction for the limit distribution ${\cal L}(Y)$ was given by R\"osler \cite{ro91} and recently linked to the martingale limit $Y$ by Bindjeme and Fill \cite{BiFi12}. Since below, as a starting point, the same recursive equation (\ref{rec_bifi}) for $Y_n-Y$ is used as in \cite{BiFi12}, for the reader's convenience, some notation is adopted from there.
Consider the rooted complete infinite binary tree, where the root is labeled by the empty word $\epsilon$ and left and right children of a node labeled $\vartheta$ are labeled by the extended words $\vartheta 0$ and $\vartheta 1$ respectively. The set of labels is denoted by $\Theta:=\cup_{k=0}^\infty \{0,1\}^k$. The length $|\vartheta|$ of a label of a node is identical to the depth of the node in the rooted complete infinite binary tree. Now the sequence of keys $(U_j)_{j\ge 1}$ is inserted into the rooted infinite binary tree according to the binary search tree algorithm: The first key $U_1$ is inserted in the root and occupies the root. Then we successively insert the following keys, where each key traverses the occupied nodes starting at the root. Whenever the key traversing is less than the occupying key at a node it moves on to the left child of that node, otherwise to its right child. The first empty node visited is occupied by the key. General references for search tree algorithms are Knuth \cite{Kn98} or Mahmoud \cite{Ma92}.
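As an illustration of this insertion procedure (a minimal Python sketch, not part of the original construction; names are ours), the following function records which key occupies which node label, with labels written as binary strings and the root as the empty word.
\begin{verbatim}
import random

def insert_keys(keys):
    # Maps node labels (strings over {'0','1'}) to their occupying keys.
    occupied = {}
    for u in keys:
        label = ''                     # start at the root (empty word)
        while label in occupied:
            # smaller keys move to the left child, others to the right
            label += '0' if u < occupied[label] else '1'
        occupied[label] = u            # first empty node visited is occupied
    return occupied

# Example: insert five independent uniform keys.
tree = insert_keys([random.random() for _ in range(5)])
\end{verbatim}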
We denote by $V_\vartheta$ the key which occupies the node labeled $\vartheta$. In particular we have $V_\epsilon=U_1$.
Further, we associate with each node labeled $\vartheta$ an interval $I_\vartheta$ defined recursively: We set $I_\epsilon:=[0,1]$. Assume that $I_\vartheta=[L_\vartheta,R_\vartheta]$ is already defined for some $\vartheta\in \Theta$. Then we set $I_{\vartheta 0}:=[L_\vartheta,V_\vartheta]$ and $I_{\vartheta 1}:=[V_\vartheta,R_\vartheta]$. Note that by construction we always have $V_\vartheta \in I_\vartheta$. The relative positions of $V_\vartheta$ within $I_\vartheta$ are crucial: We denote the interval lengths by $\varphi_\vartheta:= R_\vartheta-L_\vartheta$ and
\begin{align*}
\Upsilon_\vartheta:= \frac{V_\vartheta-L_\vartheta}{R_\vartheta-L_\vartheta}=\frac{\varphi_{\vartheta 0}}{\varphi_\vartheta}, \quad \vartheta\in\Theta.
\end{align*}
By construction, $(\Upsilon_\vartheta)_{\vartheta\in\Theta}$ is a family of independent random variables, identically distributed with the uniform distribution on $[0,1]$.
Furthermore, with the function $C$ defined in (\ref{def_c_func_b}) we set
\begin{align*}
G_\vartheta:= \varphi_\vartheta C(\Upsilon_\vartheta).
\end{align*}
In a related setting R\"osler \cite{ro91} showed that the series
\begin{align}\label{limit_series}
\sum_{j=0}^\infty \sum_{\vartheta\in \Theta: |\vartheta|=j} G_\vartheta
\end{align}
is convergent in any $L_p$ with $p<\infty$ and that it
has the same distribution as the martingale limit $Y$.
Bindjeme and Fill \cite{BiFi12} showed that the random variable in (\ref{limit_series}) is even almost surely identical to $Y$.
Moreover, they use the latter construction to give the following sample-pointwise extension of the distributional identity (\ref{def_c_func_a}) for $Y$: Roughly, the left and right subtrees of the root, i.e., the complete infinite binary trees rooted at the nodes labeled $0$ and $1$, get all their nodes' interval lengths renormalized by $1/U_1$ and $1/(1-U_1)$ respectively. This unwinds the original dependence, induced by $U_1$, between the interval lengths of nodes in the left and right subtree, and allows an almost sure construction of the distributional identity (\ref{def_c_func_a}). Formally,
with the root key $V_\epsilon=U_1$, we define for all $\vartheta\in\Theta$ random variables
\begin{align*}
\varphi^{(0)}_\vartheta := \frac{1}{U_1}\varphi_{0\vartheta}=\frac{\varphi_{0\vartheta}}{\varphi_{0}} ,\quad
\varphi^{(1)}_\vartheta := \frac{1}{1-U_1}\varphi_{1\vartheta}=\frac{\varphi_{1\vartheta}}{\varphi_{1}},
\end{align*}
and
\begin{align*}
G^{(i)}_\vartheta :=\varphi^{(i)}_\vartheta C(\Upsilon_{i\vartheta}), \quad
Y^{(i)}:= \sum_{j=0}^\infty \sum_{\vartheta\in \Theta: |\vartheta|=j} G^{(i)}_\vartheta,\quad i\in\{0,1\}.
\end{align*}
Then, cf.~Proposition 2.1 in \cite{BiFi12}, we have
\begin{align}\label{rec_y}
Y= U_1 Y^{(0)} + (1-U_1)Y^{(1)} + C(U_1),
\end{align}
and $U_1$, $Y^{(0)}$, $Y^{(1)}$ are independent and $Y^{(0)}$ and $Y^{(1)}$ have the same distribution as $Y$.
Now, we denote by $I_n$ the number of keys among $U_1,\ldots,U_n$ that are inserted in the left subtree of the root, i.e., the subtree rooted at the node labeled $0$. Note that $I_n$ takes values in $\{0,\ldots,n-1\}$ and, conditional on $U_1=u$, we have for $I_n$ the binomial $\mathrm{B}(n-1,u)$ distribution. Furthermore, denote by $K_{n,0}$ and $K_{n,1}$ the number of key comparisons used to sort the left and right sublists $\Gamma_\le$ and $\Gamma_>$ generated when first partitioning $(U_1,\ldots,U_n)$.
Note that the sizes of $\Gamma_\le$ and $\Gamma_>$ are $I_n$ and $n-1-I_n$, respectively. Since the first partitioning phase of Quicksort requires $n-1$ key comparisons we have for all $n\ge 1$ that
\begin{align}\label{def_xn}
K_n = K_{n,0} + K_{n,1} + n-1.
\end{align}
Recall the normalization (\ref{def_yn}) for $K_n$. Hence, with $\mu(n):=\E[K_n]$ we define normalizations of $K_{n,0}$ and $K_{n,1}$ by
\begin{align}\label{def_scal}
Y_{n,0} := \frac{K_{n,0} - \mu(I_n)}{I_n+1}, \quad Y_{n,1} := \frac{K_{n,1} - \mu(n-1-I_n)}{n-I_n}.
\end{align}
(To be clear about the notation, we have $\mu(I_n)=\E[K_{I_n}\,|\, I_n]$ and in general $\mu(I_n)\neq \E[K_{I_n}]$.)
Note that conditional on $I_n=j$ we have that $Y_{n,0}$ and $Y_{n,1}$ are independent and have the same distributions as $Y_j$ and $Y_{n-1-j}$, respectively.
From (\ref{def_yn}), (\ref{def_xn}) and (\ref{def_scal})
we obtain the (sample-pointwise) recurrence, cf.~equation (2.4) in \cite{BiFi12},
\begin{align}\label{rec_yn}
Y_n= \frac{I_n+1}{n+1}Y_{n,0}+ \frac{n-I_n}{n+1} Y_{n,1} + \frac{n}{n+1}C_n(I_n+1), \quad n\ge 1,
\end{align}
where, for $1\le i\le n$ we define
\begin{align*}
C_n(i):=\frac{1}{n}\left(\mu(i-1)+\mu(n-i)-\mu(n)+n-1\right).
\end{align*}
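Although not needed in this explicit form, the following heuristic computation (added for the reader's convenience) shows how $C_n$ and the function $C$ from (\ref{def_c_func_b}) match asymptotically. Using $\mu(m)=\E[K_m]=2(m+1)H_m-4m$ and $H_m=\log m+\gamma+O(1/m)$, where $\gamma$ denotes the Euler--Mascheroni constant, one obtains $\mu(m)=2m\log m+(2\gamma-4)m+O(\log m)$, and hence, for $i-1\approx un$ with fixed $u\in(0,1)$,
\begin{align*}
nC_n(i)&=\mu(i-1)+\mu(n-i)-\mu(n)+n-1\\
&= 2un\log(un)+2(1-u)n\log((1-u)n)-2n\log n + n + O(\log n)\\
&= n\bigl(1+2u\log u+2(1-u)\log(1-u)\bigr)+O(\log n) = nC(u)+O(\log n),
\end{align*}
so that $C_n(i)=C(u)+O((\log n)/n)$.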
Altogether, (\ref{rec_y}) and (\ref{rec_yn}) yield a recurrence for the error term under consideration in Theorem \ref{main_thm}, for all $n\ge 1$, cf.~equation (2.6) in \cite{BiFi12},
\begin{align}
Y_n -Y &= \frac{I_n+1}{n+1}\left(Y_{n,0}-Y^{(0)}\right) +
\frac{n-I_n}{n+1}\left(Y_{n,1}-Y^{(1)}\right) + \left(\frac{I_n+1}{n+1} - U_1\right) Y^{(0)} \nonumber \\
&\;\;\;~
+ \left(\frac{n-I_n}{n+1} -(1-U_1)\right) Y^{(1)}+\frac{n}{n+1}C_n(I_n+1)-C(U_1). \label{rec_bifi}
\end{align}
Note that $Y_n -Y$ is already centered and, by (\ref{var_asy_exp}), has variance
\begin{align} \label{var_asy}
\sigma^2(n):=\V(Y_n -Y) = \|Y_n-Y\|_2^2 \sim \frac{2\log n}{n} \quad (n\to\infty),
\end{align}
and $\sigma(n)>0$ for all $n\ge 0$.
Hence, with the scaling
\begin{align} \label{scaling}
X_n:=\frac{Y_n -Y}{\sigma(n)}, \quad n\ge 0,
\end{align}
we obtain for all $n\ge 1$ that
\begin{align} \label{basic_eqn}
X_n= A_0^{(n)} \frac{1}{\sigma(I_n)}\left(Y_{n,0}-Y^{(0)}\right)
+ A_1^{(n)}\frac{1}{\sigma(n-1-I_n)}\left(Y_{n,1}-Y^{(1)}\right) + b^{(n)},
\end{align}
where
\begin{align}
A_0^{(n)}&:= \frac{(I_n+1)\sigma(I_n)}{(n+1)\sigma(n)}, \qquad
A_1^{(n)}:= \frac{(n-I_n)\sigma(n-1-I_n)}{(n+1)\sigma(n)}, \nonumber\\
b^{(n)}&:= \frac{1}{\sigma(n)}\left[\left(\frac{I_n+1}{n+1} - U_1\right) Y^{(0)}
+ \left(\frac{n-I_n}{n+1} -(1-U_1)\right) Y^{(1)} \nonumber \right.\\
&\left. \phantom{\frac{1}{\sigma(n)}}\quad\quad ~ +\frac{n}{n+1}C_n(I_n+1) -C(U_1) \right].
\label{def_bn}
\end{align}
The asymptotics of $\sigma(n)$ in (\ref{var_asy}) and the fact that $I_n$, conditionally on $U_1=u$, has the binomial $\mathrm{B}(n-1,u)$ distribution imply together with the strong law of large numbers and dominated convergence that,
as $n\to\infty$,
\begin{align} \label{asy_norm}
\left\|A_0^{(n)}-\sqrt{U_1}\right\|_p \to 0, \quad \left\|A_1^{(n)}-\sqrt{1-U_1}\right\|_p \to 0
\end{align}
for all $1\le p<\infty$.\\
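For instance, for the factor $\frac{I_n+1}{n+1}$ alone, the following elementary bound (recorded only as an illustration; it is not used in this form) quantifies the binomial concentration behind (\ref{asy_norm}): conditionally on $U_1=u$ one has $\E[I_n\,|\,U_1=u]=(n-1)u$ and $\V(I_n\,|\,U_1=u)=(n-1)u(1-u)$, whence
\begin{align*}
\E\left[\left(\frac{I_n+1}{n+1}-U_1\right)^{2}\,\Big|\, U_1=u\right]
= \frac{(n-1)u(1-u)+(1-2u)^2}{(n+1)^2} = O\left(\frac{1}{n}\right),
\end{align*}
so that $\frac{I_n+1}{n+1}\to U_1$ in $L_2$.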
\noindent
{\bf Remark.} Note that convergence theorems from the contraction method, e.g., Corollary 5.2 in \cite{NeRu04}, do not in general apply to recurrence (\ref{basic_eqn}). The reason is that each of the random variables
\begin{align*}
\frac{1}{\sigma(I_n)}\left(Y_{n,0}-Y^{(0)}\right), \quad
\frac{1}{\sigma(n-1-I_n)}\left(Y_{n,1}-Y^{(1)}\right)
\end{align*}
is conditionally on $I_n$ still (stochastically) dependent on $b^{(n)}$, via the joint occurrence
of $Y^{(0)}$ and $Y^{(1)}$ respectively, while in typical theorems from the contraction method conditional independence is assumed.
\subsection{The Zolotarev metric}\label{seczo}
The proof of Theorem \ref{main_thm} in Section \ref{pfpf} is based on showing appropriate convergence within the Zolotarev metric and using that convergence in the Zolotarev metric implies weak convergence. The Zolotarev metric has been
studied in the context of distributional recurrences systematically in \cite{NeRu04}. We collect the properties that are used subsequently, which can be found in Zolotarev \cite{zo76,zo77} if not stated otherwise.
For distributions
${\cal L}(V)$, ${\cal L}(W)$ on $\Rset$ the Zolotarev distance $\zeta_s$, $s>0$, is defined by
\begin{equation}
\label{eq:3.6_app} \zeta_s(V,W) := \zeta_s({\cal L}(V),{\cal L}(W)):=\sup_{f\in {\cal F}_s}|\E[f(V) -
f(W)]|
\end{equation}
where $s=m+\alpha$ with $0<\alpha\le 1$ and
$m\in\Nset_0$. Here
\begin{equation}
{\cal F}_s:=\{f\in
C^m(\Rset,\Rset):|f^{(m)}(x)-f^{(m)}(y)|\le
|x-y|^\alpha\}
\end{equation}
denotes the space of $m$-times
continuously differentiable functions from
$\Rset$ to $\Rset$ such that the $m$-th
derivative is H\"older continuous of order
$\alpha$ with H\"older-constant $1$.
We have that $\zeta_s(V,W)<\infty$ if (i) all
moments of orders $1,\ldots,m$ of $V$ and $W$ are
equal and (ii) the $s$-th absolute moments of $V$ and
$W$ are finite. Since only the case $s=3$ is used later on, for finiteness of $\zeta_3(V,W)$ it is sufficient
that the means and variances of
$V$ and $W$ coincide and that both have a finite absolute moment of
order $3$. A pair $(V,W)$ satisfying these moment assumptions is subsequently called
{\em $\zeta_3$-compatible}, a term not in use elsewhere. In particular, for fixed $\mu \in \Rset$ and $\sigma>0$, within the space of distributions
\begin{align*}
{\cal M}_3(\mu,\sigma^2):=\{ {\cal L}(V)\,:\, \E[V]=\mu, \V(V)=\sigma^2, \E[|V|^3]<\infty\}
\end{align*}
all pairs are $\zeta_3$-compatible and $({\cal M}_3(\mu,\sigma^2), \zeta_3)$ is a complete metric space. For the completeness (not used subsequently) see \cite[Theorem 5.1]{DrJaNe08}.
Convergence
in $\zeta_3$ implies weak convergence on $\Rset$.
Furthermore, $\zeta_3$ is $(3,+)$ ideal, i.e.,
\begin{eqnarray*}
\zeta_3(V+Z,W+Z)\le\zeta_3(V,W), \quad \zeta_3(cV,cW) = c^3 \zeta_3(V,W)
\end{eqnarray*}
for all $Z$ independent of $(V,W)$ and all $c>0$.
This in particular implies that for independent pairs $(V_1,V_2)$, $(W_1,W_2)$ such that both pairs are $\zeta_3$-compatible we have
\begin{align}\label{sum_bound}
\zeta_3(V_1+W_1,V_2+W_2)&\le \zeta_3(V_1+W_1,V_2+W_1) + \zeta_3(V_2+W_1,V_2+W_2) \nonumber \\
&\le \zeta_3(V_1,V_2) + \zeta_3(W_1,W_2).
\end{align}
The metric $\zeta_3$ can be upper-bounded in terms of the minimal $L_3$-metric $\ell_3$ defined in (\ref{ell_p}):
For $\zeta_3$-compatible $(V,W)$ we have, see \cite[Lemma 2.1]{NR02},
\begin{align}\label{est_zeta_ell}
\zeta_3(V,W)\le \frac{1}{2}\left(\|V\|_3^2+ \|V\|_3\|W\|_3 + \|W\|_3^2\right)\ell_3(V,W).
\end{align}
Finally, we will later use the following substitute for (\ref{sum_bound}) in situations where the independence assumption there is violated:
\begin{lem} \label{zolo_est}
Let $V_1,V_2,W_1,W_2$ be random variables such that $(V_1,V_2)$ is $\zeta_3$-compatible and $(V_1+W_1,V_2+W_2)$ is $\zeta_3$-compatible. Then we have
\begin{align}
\zeta_3(V_1+W_1, V_2+W_2) \le \zeta_3(V_1, V_2)+ \sum_{i=1}^2 \left\{ \frac{\|V_i\|_3^2\|W_i\|_3}{2}+ \frac{\|V_i\|_3\|W_i\|_3^2}{2} + \frac{\|W_i\|_3^3}{6} \right\}.
\end{align}
\end{lem}
\begin{proof}
By the assumptions on $\zeta_3$-compatibility we have that the $\zeta_3$-distances appearing in the formulation of the Lemma are finite.
First note that for all $f \in {\cal F}_3$ and $g$ defined by $g(x):=f(x)-f'(0)x-f''(0)x^2/2$ for $x\in \Rset$ we have for all $\zeta_3$-compatible pairs $(V,W)$ that
$\E[f(V)-f(W)]=\E[g(V)-g(W)]$. Since $g'(0)=g''(0)=0$ we hence have
\begin{align*}
\zeta_3(V,W)= \sup_{f\in{\cal F}_3}| \E[f(V)-f(W)]| = \sup_{g\in{\cal F}^*_3}| \E[g(V)-g(W)]|,
\end{align*}
with ${\cal F}^*_3:=\{g\in {\cal F}_3 : g'(0)=g''(0)=0\}$.
For $g\in{\cal F}^*_3$ we have the Taylor expansion
$g(x+h)=g(x)+g'(x)h+g''(x)h^2/2+R(x,h)$ for all $x,h\in\Rset$ with, using the remainder in integral form, $|R(x,h)|\le |h|^3/6$. Hence, with $V_1,W_1, V_2,W_2$ as in the statement of the Lemma we obtain
\begin{align*}
\zeta_3(V_1+W_1, V_2+W_2)&= \sup_{g\in{\cal F}^*_3}| \E[g(V_1+W_1)-g(V_2+W_2)]|\\
&=\sup_{g\in{\cal F}^*_3}\left|\E\left[g(V_1)+g'(V_1)W_1+ \frac{g''(V_1)W_1^2}{2}+R(V_1,W_1) \right.\right.\\
&\left.\left. \phantom{\sup_{g\in{\cal F}^*_3}|\E[}\quad
- \left(g(V_2)+g'(V_2)W_2+ \frac{g''(V_2)W_2^2}{2}+R(V_2,W_2)\right)\right]\right|\\
&\le \zeta_3(V_1, V_2) + B,
\end{align*}
with
\begin{align*}
B&:= \sup_{g\in{\cal F}^*_3}\left|\E\left[g'(V_1)W_1
+ \frac{g''(V_1)W_1^2}{2}+R(V_1,W_1) \right.\right.\\
&\left.\left. \phantom{\sup_{g\in{\cal F}^*_3}|\E[}\quad - \left(g'(V_2)W_2+ \frac{g''(V_2)W_2^2}{2}+R(V_2,W_2)\right)\right]\right|\\
&\le \sup_{g\in{\cal F}^*_3} \sum_{i=1}^2 \left\{ |\E[g'(V_i)W_i]| + \frac{|\E[g''(V_i)W_i^2]|}{2} +\frac{\E[|W_i|^3]}{6} \right\}.
\end{align*}
Since $g'(0)=g''(0)=0$ and $g''$ is Lipschitz-continuous with Lipschitz-constant $1$ we obtain for all $x\in\Rset$ that
$|g''(x)|=|g''(x)-g''(0)| \le |x|$ and, integrating this inequality, that $|g'(x)|\le x^2/2$. Hence we obtain
\begin{align*}
B&\le \sum_{i=1}^2 \E\left[\frac{V_i^2|W_i|}{2}+\frac{|V_i|W_i^2}{2} + \frac{|W_i|^3}{6}\right].
\end{align*}
H\"older's inequality implies the assertion.
\end{proof}
\subsection{Two more technical Lemmata}\label{seclem}
The proof of Theorem \ref{main_thm} in Section \ref{pfpf} requires that $b^{(n)}$ defined in (\ref{def_bn}) tends to $0$ in the $L_3$-norm. The following Lemma provides a quantitative estimate.
\begin{lem}\label{bn_con}
For $b^{(n)}$ defined in {\rm (\ref{def_bn})} we have, as $n\to\infty$,
\begin{align}
\| b^{(n)}\|_3 =\bo\left(\frac{1}{\sqrt{\log n}}\right).
\end{align}
\end{lem}
\begin{proof}
We have
\begin{align*}
\|b^{(n)}\|_3&\le \frac{1}{\sigma(n)}\left(\left\|\left(\frac{I_n+1}{n+1} - U_1\right) Y^{(0)}\right\|_3+
\left\| \left(\frac{n-I_n}{n+1} -(1-U_1)\right) Y^{(1)}\right\|_3 \right.\\
&~\left. \quad\quad \quad\quad +\left\|\frac{n}{n+1}C_n(I_n+1) -C(U_1) \right\|_3 \right)\\
&=: \frac{1}{\sigma(n)}(S_1+S_2+S_3).
\end{align*}
Note that the summands $S_1$ and $S_2$ are equal. Moreover, we have that $(I_n,U_1)$ is independent of $Y^{(0)}$ and $Y^{(1)}$. Hence, we have
\begin{align*}
S_1+S_2= 2 \left\|\frac{I_n+1}{n+1} - U_1 \right\|_3 \|Y\|_3.
\end{align*}
By the Marcinkiewicz--Zygmund inequality, see, e.g.~Chow and Teicher \cite[p.~386]{chte97}, there exists a finite constant $M_3>0$ such that for all $u\in[0,1]$ we have
\begin{align}\label{mazy}
\E\left[|B_{n-1,u}-(n-1)u|^3\right]\le M_3 (n-1)^{3/2}.
\end{align}
Recall that conditionally on $U_1=u$ we have that $I_n$ is binomial $\mathrm{B}(n-1,u)$ distributed. The bound (\ref{mazy}) and integration hence imply $\|(I_n+1)/(n+1) - U_1 \|_3 = \bo(1/\sqrt{n})$ and $S_1+S_2=\bo(1/\sqrt{n})$.
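In more detail, conditionally on $U_1=u$ one may decompose
\begin{align*}
\frac{I_n+1}{n+1}-u = \frac{I_n-(n-1)u}{n+1}+\frac{1-2u}{n+1},
\end{align*}
so that Minkowski's inequality and (\ref{mazy}) give
$\left(\E\left[\left|\tfrac{I_n+1}{n+1}-u\right|^3\,\Big|\,U_1=u\right]\right)^{1/3}\le \big(M_3^{1/3}\sqrt{n-1}+1\big)/(n+1)$,
uniformly in $u\in[0,1]$; integrating over $u$ yields the stated $\bo(1/\sqrt{n})$ bound.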
To bound the summand $S_3$ note that for $S_3= \bo(1/\sqrt{n})$ it is sufficient to show
$\|C_n(I_n+1) -C(U_1) \|_3= \bo(1/\sqrt{n})$. We have
\begin{align*}
\|C_n(I_n+1) -C(U_1) \|_3 \le \left\|C_n(I_n+1) -C\left(\frac{I_n}{n-1}\right) \right\|_3 + \left\|C\left(\frac{I_n}{n-1}\right) -C(U_1) \right\|_3.
\end{align*}
Note that we have $\|C_n(I_n+1) -C(I_n/(n-1)) \|_3=\bo((\log n)/n)$ using Proposition 3.2 in R\"osler \cite{ro91}. Hence, it remains to bound $\|C(I_n/(n-1)) -C(U_1) \|_3.$ Using symmetry in the terms $x\log x$ and $(1-x)\log(1-x)$ appearing in $C(x)$ and the triangle inequality, we have
\begin{align}\label{est_t1}
\left\|C\left(\frac{I_n}{n-1}\right) -C(U_1) \right\|_3 \le 4\left\|\frac{I_n}{n-1} \log\left(\frac{I_n}{(n-1)U_1} \right)\right\|_3 + 4\left\|\left(\frac{I_n}{n-1}-U_1\right) \log U_1\right\|_3.
\end{align}
To bound the first summand in the latter display we again use that conditional on $U_1=u$ the random variable $I_n$ has the Binomial B$(n-1,u)$ distribution. Hence
\begin{align}\label{re1}
\left\|\frac{I_n}{n-1} \log\left(\frac{I_n}{(n-1)U_1}\right) \right\|_3^3 = \int_0^1\E\left[ \left|\frac{B_{n-1,u}}{n-1} \log\left(\frac{B_{n-1,u}}{(n-1)u}\right)\right|^3\right]\,du.
\end{align}
To bound the expectation appearing as integrand in the latter display we consider for $u\in(0,1)$ the event
\begin{align*}
E_u:=\left\{ B_{n-1,u}\ge \frac{u}{2}(n-1)\right\}.
\end{align*}
Note that for the complement $E_u^c$ of $E_u$, Chernoff's bound, see \cite{ch52} or \cite[Theorem 1.1]{mcdi98}, yields $\Prob(E_u^c)\le \exp(-(n-1)u^2/2)$. We denote $h(x):=x\log x$ for $x\in[0,\infty)$. With $\sup_{x\in [0,1/2]}|h(x)| = 1/e\le 1 $ we bound the contribution on $E_u^c$ by
\begin{align}\label{re2}
\int_{E_u^c} \left|\frac{B_{n-1,u}}{n-1} \log\left(\frac{B_{n-1,u}}{(n-1)u}\right)\right|^3\, d\Prob &=
\int_{E_u^c} u^3\left|h\left(\frac{B_{n-1,u}}{(n-1)u}\right)\right|^3\, d\Prob \nonumber\\
&\le u^3 \exp\left(-\frac{(n-1)u^2}{2}\right).
\end{align}
On $E_u$ we apply the mean value theorem to $h(1+y)=h(1+y)-h(1)$ and obtain
\begin{align}\label{re3}
\lefteqn{\int_{E_u} \left|\frac{B_{n-1,u}}{n-1} \log\left(\frac{B_{n-1,u}}{(n-1)u}\right)\right|^3\, d\Prob } \nonumber\\
&= \int_{E_u} u^3 \left|h\left(1+\frac{B_{n-1,u}-(n-1)u}{(n-1)u}\right)\right|^3\, d\Prob \nonumber \\
&\le \int_{E_u} u^3(1-\log u)^3\left|\frac{B_{n-1,u}-(n-1)u}{(n-1)u}\right|^3\, d\Prob.
\end{align}
With the Marcinkiewicz--Zygmund inequality (\ref{mazy}) we can further estimate the integral in (\ref{re3}) and obtain
\begin{align}\label{re3b}
\int_{E_u} \left|\frac{B_{n-1,u}}{n-1} \log\left(\frac{B_{n-1,u}}{(n-1)u}\right)\right|^3\, d\Prob
\le M_3\frac{(1-\log u)^3}{(n-1)^{3/2}}.
\end{align}
Hence, plugging (\ref{re2}) and (\ref{re3b}) into (\ref{re1}) we have
\begin{align}
\left\|\frac{I_n}{n-1} \log\left(\frac{I_n}{(n-1)U_1}\right) \right\|_3^3
&\le \int_0^1 \left\{ u^3 \exp\left(-\frac{(n-1)u^2}{2}\right) + M_3 \frac{(1-\log u)^3}{(n-1)^{3/2}} \right\} \,du \nonumber\\
&= \bo\left(\frac{1}{n^2}\right) + \bo\left(\frac{1}{n^{3/2}}\right) \label{okamoto}\\
&= \bo\left(\frac{1}{n^{3/2}}\right). \nonumber
\end{align}
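For instance, the two integrals appearing in the bound above can be handled as follows:
\begin{align*}
\int_0^1 u^3 \exp\left(-\frac{(n-1)u^2}{2}\right) du &\le \int_0^\infty u^3 e^{-(n-1)u^2/2}\, du = \frac{2}{(n-1)^2},\\
\int_0^1 (1-\log u)^3\, du &= \int_0^\infty (1+v)^3 e^{-v}\, dv = 16,
\end{align*}
where the second computation uses the substitution $u=e^{-v}$; hence the second summand equals $16 M_3 (n-1)^{-3/2}$.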
The second summand in (\ref{est_t1}) is also estimated by use of the bound (\ref{mazy}):
\begin{align*}
\left\|\left(\frac{I_n}{n-1}-U_1\right) \log U_1\right\|_3^3
&= \int_0^1 \E\left[\left| \frac{B_{n-1,u}}{n-1}-u\right|^3\right] |\log u|^3 \,du\\
&\le \int_0^1 M_3 \frac{|\log u|^3}{(n-1)^{3/2}} \,du\\
&=\bo \left(\frac{1}{n^{3/2}}\right).
\end{align*}
Altogether, we have $S_3=\bo(1/\sqrt{n})$, hence
$S_1+ S_2+S_3=\bo(1/\sqrt{n})$. Since $\sigma(n)=\Omega(\sqrt{\log n}/\sqrt{n})$ the assertion follows.
\end{proof}
Moreover the proof of Theorem \ref{main_thm} in Section \ref{pfpf} requires an initial estimate for the $L_3$-norm $\|Y_n-Y\|_3$. Note that the following Lemma \ref{lem_l3} is improved later by proving Corollary \ref{coro}.
\begin{lem} \label{lem_l3}
For the error term $Y_n-Y$ in Theorem \ref{main_thm} we have, as $n\to\infty$,
\begin{align} \label{err_l3}
\|Y_n-Y\|_3 = \bo\left(\sqrt{\frac{\log n}{n}}\right).
\end{align}
\end{lem}
\begin{proof}
Since $Y_n$ is a bounded random variable and $Y$ has finite absolute moments of arbitrary order we have $\|Y_n-Y\|_3<\infty$ for all $n\ge 0$.
Note that with $X_n$ defined in (\ref{scaling}) the assertion (\ref{err_l3}) is equivalent to $\E[|X_n|^3] =\bo(1)$. From (\ref{basic_eqn}) we obtain
\begin{align*}
|X_n|\le \Lambda_0 + \Lambda_1 + |b^{(n)}|
\end{align*}
with
\begin{align*}
\Lambda_0:=A_0^{(n)} \frac{1}{\sigma(I_n)}\left|Y_{n,0}-Y^{(0)}\right|, \quad
\Lambda_1:=A_1^{(n)}\frac{1}{\sigma(n-1-I_n)}\left|Y_{n,1}-Y^{(1)}\right|.
\end{align*}
Hence, we have for all $n\ge 1$ that
\begin{align}
\E\left[|X_n|^3\right]&\le \E\left[\Lambda_0^3\right]+ \E\left[\Lambda_1^3\right] + \E\left[|b^{(n)}|^3\right] + 3\E\left[\Lambda_0^2\Lambda_1\right]+
3\E\left[\Lambda_0\Lambda_1^2\right] \nonumber \\
&\quad ~ + 3\E\left[\Lambda_0^2|b^{(n)}|\right] + 3\E\left[\Lambda_0|b^{(n)}|^2\right]+
3\E\left[\Lambda_1^2|b^{(n)}|\right] + 3\E\left[\Lambda_1|b^{(n)}|^2\right] \label{mittel}\\
&\quad ~ + 6\E\left[\Lambda_0\Lambda_1|b^{(n)}|\right]. \nonumber
\end{align}
We use the notation
\begin{align*}
\beta_n:= 1\vee \max_{0\le j\le n} \E\left[|X_j|^3\right].
\end{align*}
We start bounding the previous sum with the summand $\E[\Lambda_0^3]$. For all $0\le j\le n-1$, conditionally given $I_n=j$ we have that
$A_0^{(n)}$ is deterministic and $|Y_{n,0}-Y^{(0)}|/\sigma(I_n)$ is distributed as $|X_j|$. Hence we obtain
\begin{align} \label{aaaa}
\E\left[\Lambda_0^3\right] \le \E\left[\left(A_0^{(n)}\right)^3\right] \beta_{n-1}
\end{align}
and an analogous bound for $\E[\Lambda_1^3]$. The summand $\E[|b^{(n)}|^3]$ tends to zero by Lemma \ref{bn_con}. For the summand $\E[\Lambda_0^2\Lambda_1]$ first note that again by conditioning on $I_n=j$ we have independence of $|Y_{n,0}-Y^{(0)}|/\sigma(I_n)$ and $|Y_{n,1}-Y^{(1)}|/\sigma(n-1-I_n)$ with distributions of $|X_j|$ and $|X_{n-1-j}|$, respectively. Since moreover $A_0^{(n)}$ and $A_1^{(n)}$ are uniformly bounded we obtain for an appropriate constant $0<D<\infty$ that
\begin{align*}
\E[\Lambda_0^2\Lambda_1] \le D \left(\max_{0\le j\le n-1} \|X_j\|_2^2 \right) \left(\max_{0\le j\le n-1} \|X_j\|_1 \right).
\end{align*}
Note that (\ref{var_asy_exp}) implies $\sup_{n\ge 0} \|X_n\|_2<\infty$, hence we have $\E[\Lambda_0^2\Lambda_1]=\bo(1)$. Analogously, $\E[\Lambda_0\Lambda_1^2]=\bo(1)$.
The summands in line (\ref{mittel}) are all bounded by H\"older's inequality, e.g., for the first of these summands we have, also using (\ref{aaaa}), Lemma \ref{bn_con} and (\ref{asy_norm}), that for all $n$ sufficiently large
\begin{align*}
\E\left[\Lambda_0^2|b^{(n)}|\right]\le \|\Lambda_0\|_3^2 \|b^{(n)}\|_3 \le \beta_{n-1}^{2/3} \|b^{(n)}\|_3 \le \beta_{n-1} \|b^{(n)}\|_3 = o(1)\beta_{n-1}.
\end{align*}
The other summands in line (\ref{mittel}) admit the same bound. Finally, we similarly have
\begin{align*}
\E[\Lambda_0\Lambda_1|b^{(n)}|] \le \|\Lambda_0\|_3 \|\Lambda_1\|_3\|b^{(n)}\|_3= o(1)\beta_{n-1}.
\end{align*}
Collecting all terms we obtain
\begin{align} \label{basic_ee}
\E\left[|X_n|^3\right]&\le \left(\E\left[\left(A_0^{(n)}\right)^3 + \left(A_1^{(n)}\right)^3\right] +o(1)\right) \beta_{n-1} + \bo(1).
\end{align}
With the asymptotic result (\ref{asy_norm}) this implies
\begin{align*}
\E\left[|X_n|^3\right]&\le \left(\E\left[U_1^{3/2} + (1-U_1)^{3/2}\right] +o(1)\right) \beta_{n-1} + \bo(1) = \left(\frac{4}{5} +o(1)\right) \beta_{n-1} + \bo(1).
\end{align*}
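Here the constant $4/5$ follows from the elementary computation
\begin{align*}
\E\left[U_1^{3/2}+(1-U_1)^{3/2}\right]=2\int_0^1 u^{3/2}\,du=\frac{4}{5}.
\end{align*}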
Hence, there exist an $n_0\in \Nset$ and a constant $0<D'<\infty$ such that for all $n\ge n_0$ we have
\begin{align*}
\E\left[|X_n|^3\right]\le \frac{9}{10}\beta_{n-1} + D'.
\end{align*}
It is easy to check by induction that $\E\left[|X_n|^3\right]\le \beta_{n_0}\vee (10D')$ for all $n\ge 0$, hence
$\E[|X_n|^3]=\bo(1)$, as $n\to\infty$.
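Indeed, set $B:=\beta_{n_0}\vee(10D')\ge 1$; if $\E[|X_j|^3]\le B$ for all $j<n$, where $n>n_0$, then $\beta_{n-1}\le B$, and hence $\E[|X_n|^3]\le \tfrac{9}{10}B+D'\le \tfrac{9}{10}B+\tfrac{1}{10}B=B$.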
\end{proof}
\noindent
{\bf Remark.} The argument of the proof of Lemma \ref{lem_l3} can be extended by induction on $p$ to show, as $n\to\infty$,
\begin{align*}
\|Y_n-Y\|_p = \bo\left(\sqrt{\frac{\log n}{n}}\right)
\end{align*}
for any $1\le p <\infty$. A related induction argument for a bound of the minimal $L_p$-metric $\ell_p(Y_n,Y)$ is given in Fill and Janson \cite[Section 3]{FJ02}.
\subsection{The proof of Theorem \ref{main_thm}}\label{pfpf}
We now prove Theorem \ref{main_thm} and Corollary \ref{coro}.
\begin{proof}[Proof of Theorem \ref{main_thm}]
We first define a ``hybrid'' random variable that interpolates between $X_n$ and a standard normal random variable as follows: for ${\cal N}^{(0)}$ and ${\cal N}^{(1)}$ independent standard normal random variables that are also independent of all other random variables, i.e., independent of $(U_i)_{i\ge 1}$,
we set
\begin{align*}
Q_n:= A_0^{(n)} {\cal N}^{(0)}
+ A_1^{(n)} {\cal N}^{(1)}, \quad n\ge 1.
\end{align*}
Note that (\ref{asy_norm}) with $p=2$ implies that $\V(Q_n)\to 1$ as $n\to \infty$. Further, we have $\V(Q_n)>0$ for all $n\ge 1$.
Hence, there exists a (deterministic) sequence $(\kappa_n)_{n\ge 1}$ with $\kappa_n\to 0$
as $n\to \infty$ such that $\V((1+\kappa_n)Q_n)=1$ for all $n\ge 1$. Denoting
by ${\cal N}$ another standard normal random variable we have that each
pair from the three random variables $X_n$, $(1+\kappa_n)Q_n$ and ${\cal N}$ is $\zeta_3$-compatible. Thus, we can use the triangle inequality to obtain
\begin{align}\label{tria}
\zeta_3(X_n,{\cal N}) \le \zeta_3(X_n,(1+\kappa_n)Q_n) + \zeta_3((1+\kappa_n)Q_n, {\cal N}).
\end{align}
For $n\ge 1$ we now introduce the abbreviations
\begin{align*}
Z_n^{(0)}:=\frac{1}{\sigma(I_n)}(Y_{n,0}-Y^{(0)}), \quad Z_n^{(1)}:=\frac{1}{\sigma(n-1-I_n)}(Y_{n,1}-Y^{(1)})
\end{align*}
and
\begin{align*}
\Phi_n:=A^{(n)}_0 Z^{(0)}_n+A^{(n)}_1 Z^{(1)}_n.
\end{align*}
Then, Lemma \ref{zolo_est} can be applied to the sums
\begin{align*}
X_n=\Phi_n+ b^{(n)},\quad (1+\kappa_n)Q_n= Q_n+\kappa_n Q_n
\end{align*}
and yields
\begin{align*}
\zeta_3(X_n,(1+\kappa_n)Q_n) \le \zeta_3(\Phi_n, Q_n)&+\frac{1}{2}\|\Phi_n\|_3^2 \|b^{(n)}\|_3+\frac{1}{2} \|\Phi_n\|_3 \|b^{(n)}\|_3^2 + \frac{1}{6}\|b^{(n)}\|_3^3\\
&~+ \left(\frac{1}{2}|\kappa_n| + \frac{1}{2}\kappa_n^2 + \frac{1}{6} |\kappa_n|^3\right)\|Q_n\|_3^3.
\end{align*}
Note that by definition of $Q_n$ we have $\sup_{n\ge 1} \|Q_n\|_3 <\infty$. Moreover, Lemma \ref{lem_l3} implies that $\sup_{n\ge 1} \|\Phi_n\|_3 <\infty$.
Hence, with $\kappa_n \to 0$ and, by Lemma \ref{bn_con}, $\|b^{(n)}\|_3 \to 0$
we obtain, as $n\to\infty$,
\begin{align}\label{tria2}
\zeta_3(X_n,(1+\kappa_n)Q_n) \le \zeta_3(\Phi_n, Q_n)+ o(1).
\end{align}
Next we show that for the second summand in (\ref{tria}) we have $\zeta_3((1+\kappa_n)Q_n, {\cal N})=o(1)$: First note that $\sup_{n\ge 1} \|Q_n\|_3 <\infty$ implies that the $L_3$-norm of $(1+\kappa_n)Q_n$ is uniformly bounded in $n$. Hence, the bound (\ref{est_zeta_ell}) implies $\zeta_3((1+\kappa_n)Q_n, {\cal N})\le M \ell_3((1+\kappa_n)Q_n, {\cal N})$ for all $n\ge 1$ and a finite constant $M>0$. Using the uniformly distributed $U_1$ from (\ref{asy_norm}) (which is also independent of ${\cal N}^{(0)}$ and ${\cal N}^{(1)}$) we have that $\sqrt{U_1} {\cal N}^{(0)} + \sqrt{1-U_1} {\cal N}^{(1)}$ also has the standard normal distribution. Hence we obtain
\begin{align}
\zeta_3((1+\kappa_n)Q_n, {\cal N})&\le M \ell_3((1+\kappa_n)Q_n, {\cal N})\nonumber\\
&\le M\left\| \left((1+\kappa_n) A_0^{(n)} -\sqrt{U_1}\right){\cal N}^{(0)} + \left((1+\kappa_n) A_1^{(n)} -\sqrt{1-U_1}\right){\cal N}^{(1)} \right\|_3 \nonumber\\
&\to 0, \label{rrr}
\end{align}
by independence and (\ref{asy_norm}).
Hence, we obtain
from (\ref{tria}), (\ref{tria2}) and (\ref{rrr}) that
\begin{align}\label{cone0}
\zeta_3(X_n,{\cal N}) \le \zeta_3(A^{(n)}_0 Z^{(0)}_n+A^{(n)}_1 Z^{(1)}_n, A^{(n)}_0 {\cal N}^{(0)}+A^{(n)}_1{\cal N}^{(1)}) +o(1).
\end{align}
Now, note that for all $0\le k\le n-1$, conditionally given $I_n=k$ we have that
$Z_n^{(0)}$ and $Z_n^{(1)}$ are independent with the distributions of $X_k$ and $X_{n-1-k}$, respectively. We denote by $(X^{(0)}_0,\ldots,X^{(0)}_{n-1})$ and $(X^{(1)}_0,\ldots,X^{(1)}_{n-1})$ independent vectors, each with the same distribution as $(X_0,\ldots,X_{n-1})$. Thus, conditioning on $I_n$ and using that $\zeta_3$ is $(3,+)$-ideal and (\ref{sum_bound}), we obtain
\begin{align} \label{cone1}
\lefteqn{\zeta_3(A^{(n)}_0 Z^{(0)}_n+A^{(n)}_1 Z^{(1)}_n, A^{(n)}_0 {\cal N}^{(0)}+A^{(n)}_1{\cal N}^{(1)}) } \nonumber \\
&\le \frac{1}{n}\sum_{k=0}^{n-1} \zeta_3\left(\frac{(k+1)\sigma(k)}{(n+1)\sigma(n)}X_k^{(0)} +\frac{(n-k)\sigma(n-1-k)}{(n+1)\sigma(n)} X_{n-1-k}^{(1)}, \right. \nonumber\\
&\left. \phantom{\frac{1}{n}\sum_{k=0}^{n-1} \zeta_3} \quad\quad
\frac{(k+1)\sigma(k)}{(n+1)\sigma(n)} {\cal N}^{(0)}+\frac{(n-k)\sigma(n-1-k)}{(n+1)\sigma(n)}{\cal N}^{(1)}\right) \nonumber \\
&\le \frac{1}{n}\sum_{k=0}^{n-1}\left\{ \left(\frac{(k+1)\sigma(k)}{(n+1)\sigma(n)}\right)^3 \zeta_3(X_k,{\cal N}) + \left(\frac{(n-k)\sigma(n-1-k)}{(n+1)\sigma(n)}\right)^3\zeta_3(X_{n-1-k},{\cal N})\right\} \nonumber \\
&= \frac{1}{n}\sum_{k=0}^{n-1} 2 \left(\frac{(k+1)\sigma(k)}{(n+1)\sigma(n)}\right)^3 \zeta_3(X_k,{\cal N}).
\end{align}
With $\Delta(n):= \zeta_3(X_n,{\cal N})$ we obtain from (\ref{cone0}) and (\ref{cone1}) that
\begin{align} \label{dist_e}
\Delta(n)\le \E\left[ 2\left(\frac{(I_n+1)\sigma(I_n)}{(n+1)\sigma(n)}\right)^3\Delta(I_n)\right]+o(1).
\end{align}
Now, a standard argument implies $\Delta(n)\to 0$, as follows: the asymptotics $\sigma(n)\sim \sqrt{2\log(n)/n}$ and the fact that $I_n$ is distributed uniformly on $\{0,\ldots,n-1\}$ imply, for $U$ uniformly distributed on $[0,1]$, that
\begin{align} \label{cont_fac}
\E\left[ 2\left(\frac{(I_n+1)\sigma(I_n)}{(n+1)\sigma(n)}\right)^3\right] \to \E\left[2 U^{3/2}\right]=\frac{4}{5}<1.
\end{align}
First we use (\ref{dist_e}) for a rough bound:
\begin{align*}
\Delta(n)\le \E\left[ 2\left(\frac{(I_n+1)\sigma(I_n)}{(n+1)\sigma(n)}\right)^3\right]\sup_{0\le k\le n-1} \Delta(k)+o(1).
\end{align*}
In view of (\ref{cont_fac}) this implies, similarly to the last four lines of the proof of Lemma \ref{lem_l3}, that $(\Delta(n))_{n\ge 0}$ is bounded. We denote $\eta:=\sup_{n\ge 0} \Delta(n)<\infty$ and $\lambda :=\limsup_{n\to\infty} \Delta(n)\ge 0$. For any $\varepsilon>0$ there exists an $n_0 \ge 0$ such that $\Delta(n)\le \lambda + \varepsilon$ for all $n\ge n_0$. Hence, from (\ref{dist_e}) we obtain
\begin{align*}
\Delta(n)&\le \E\left[ {\bf 1}_{\{I_n<n_0\}}2\left(\frac{(I_n+1)\sigma(I_n)}{(n+1)\sigma(n)}\right)^3\right]\eta \\
&\quad ~+ \E\left[ {\bf 1}_{\{I_n\ge n_0\}}2\left(\frac{(I_n+1)\sigma(I_n)}{(n+1)\sigma(n)}\right)^3\right] (\lambda+\varepsilon) + o(1).
\end{align*}
With $n\to \infty$ this implies
\begin{align}
\lambda = \limsup_{n\to\infty}\Delta(n)\le \frac{4}{5} (\lambda+\varepsilon).
\end{align}
Since $\varepsilon>0$ is arbitrary this implies $\lambda=0$. Hence, we have $\zeta_3(X_n,{\cal N})\to 0$ as $n\to \infty$. Since convergence in $\zeta_3$ implies weak convergence, the assertion follows.
\end{proof}
\begin{proof}[Proof of Corollary \ref{coro}]
Note that in the proof of Theorem \ref{main_thm} with $X_n=(Y_n-Y)/\sigma(n)$ the
convergence $\zeta_3(X_n,{\cal N})\to 0$ is shown. This implies $\E[|X_n|^3] \to \E[|{\cal N}|^3]$ as $n\to \infty$, since the function $x\mapsto |x|^3/6$ is an element of ${\cal F}_3$.
Hence we obtain
\begin{align*}
\left\|Y_n - Y\right\|_3 = \sigma(n)\|X_n\|_3 \sim \sqrt{\frac{2\log n}{ n}}\|{\cal N}\|_3=
\frac{2}{\pi^{1/6}} \sqrt{\frac{\log n}{ n}},
\end{align*}
the assertion.
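Here we used $\E[|{\cal N}|^3]=2\sqrt{2/\pi}$, so that $\|{\cal N}\|_3=\sqrt{2}\,\pi^{-1/6}$ and hence $\sqrt{2}\,\|{\cal N}\|_3=2\pi^{-1/6}$.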
\end{proof}
| {
"timestamp": "2013-01-25T02:01:38",
"yymm": "1207",
"arxiv_id": "1207.4556",
"language": "en",
"url": "https://arxiv.org/abs/1207.4556",
"abstract": "The complexity of the Quicksort algorithm is usually measured by the number of key comparisons used during its execution. When operating on a list of $n$ data, permuted uniformly at random, the appropriately normalized complexity $Y_n$ is known to converge almost surely to a non-degenerate random limit $Y$. This assumes a natural embedding of all $Y_n$ on one probability space, e.g., via random binary search trees. In this note a central limit theorem for the error term in the latter almost sure convergence is shown: $$\\sqrt{\\frac{n}{2\\log n}}(Y_n-Y) \\stackrel{d}{\\longrightarrow} {\\cal N} \\qquad (n\\to\\infty),$$ where ${\\cal N}$ denotes a standard normal random variable.",
"subjects": "Probability (math.PR); Data Structures and Algorithms (cs.DS)",
"title": "Refined Quicksort asymptotics",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9873750533189538,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7095221707289138
} |
https://arxiv.org/abs/1411.0537 | Toric graph associahedra and compactifications of $M_{0,n}$ | To any graph $G$ one can associate a toric variety $X(\mathcal{P}G)$, obtained as a blowup of projective space along coordinate subspaces corresponding to connected subgraphs of $G$. The polytope of this toric variety is the graph associahedron of $G$, a class of polytopes that includes the permutohedron, associahedron, and stellahedron. We show that the space $X(\mathcal{P}{G})$ is isomorphic to a Hassett compactification of $M_{0,n}$ precisely when $G$ is an iterated cone over a discrete set. This may be viewed as a generalization of the well-known fact that the Losev--Manin moduli space is isomorphic to the toric variety associated to the permutohedron. | \section{Introduction}
In this note, we study the relationship between two families of blowups of projective space. The first is given by Hassett's spaces of weighted pointed stable rational curves~\cite{Has03}. The second is a family of toric varieties built as blowups of projective space, from polytopes known as \textit{graph associahedra}.
Given a simple graph $G$ on $d+1$ vertices, the graph associahedron ${\mathcal P}G$ is a $d$-dimensional convex polytope, constructed by truncating the $d$-simplex based on the connected subgraphs of $G$. We review this construction in Section~\ref{sec: background}. If $G$ is the complete graph on $d+1$ vertices, then ${\mathcal P}G$ is the permutohedron. Our motivation for the present work goes back to the following beautiful result of Losev and Manin~\cite{LM}.
\begin{theorem*}
Let $X(\mathcal P K_{n-2})$ be the toric variety associated to the $(n-3)$ dimensional permutohedron. There is an isomorphism
\[
X(\mathcal P K_{n-2}) \cong \overline M_{0,n}^{LM},
\]
where $\overline M_{0,n}^{LM}$ is the Losev--Manin space of chains of rational curves with $n$ marked points.
\end{theorem*}
\subsection{Main results} In addition to the permutohedron, graph associahedra contain several important families of polytopes, including the associahedron (or \textit{Stasheff} polytope), the cyclohedron (or \textit{Bott--Taubes} polytope), and the stellahedron. The main result of this paper is a complete classification of graphs $G$ such that $X({\mathcal P}G)$ is isomorphic to a Hassett compactification of $M_{0,n}$. Recall that given a graph $G$, the cone, denoted $Cone(G)$, is the graph obtained by introducing one new vertex $v_0$ to $G$, and connecting each of the vertices in $G$ to $v_0$. We denote the $\ell$-times iterated cone by $Cone^{\ell}(G)$.
\begin{theorem}
\label{thm: mainthm}
Let $G$ be a graph on $(n-2)$ vertices. Then there exists a weight vector $\omega\in (0,1]^n$ such that
\[
X({\mathcal P}G)\cong \overline M_{0,\omega}
\]
if and only if $G$ is an iterated cone over a discrete set. That is,
\[
G\cong Cone^{n-k-2}(\sqcup_{i=1}^k v_i).
\]
\end{theorem}
We give a precise description of the weights for the associated moduli space in Remark~\ref{rem: specific-weights}.
As an example, we have the following result for the stellahedron.
\begin{corollary}
If $G$ is a star graph on $n-2$ vertices, then
\[
X({\mathcal P}G) \cong \overline M_{0,\omega},
\]
where $\omega = (1,\frac{1}{2},\frac{1}{2}+\epsilon,\epsilon,\ldots, \epsilon)$, with $\epsilon>0$ sufficiently small.
\end{corollary}
Several well studied polytopes do not give rise to Hassett spaces.
\begin{corollary}
Let $|V(G)|\geq 4$. If $G$ is a path graph, a cycle, or a complete bipartite or multipartite graph, then $X({\mathcal P}G)$ is not isomorphic to $\overline M_{0,\omega}$ for any choice of weight vector $\omega$.
\end{corollary}
\begin{figure}
\includegraphics[scale=0.3]{permutohedron.pdf}
\caption{The $3$ dimensional permutohedron. This is the graph associahedron for the complete graph on $4$ vertices.}
\end{figure}
\subsection{Context and motivation} In~\cite{Kap93}, Kapranov gives the following beautiful description of the moduli space $\overline M_{0,n}$ as a blowup of $\mathbb{P}^{n-3}$.
\begin{theorem}[Kapranov]
The moduli space $\overline M_{0,n}$ is isomorphic to the iterated blowup of $\mathbb{P}^{n-3}$ at $(n-1)$ points $p_1,\ldots, p_{n-1}$ in linear general position, followed by the blowups of the strict transforms of the linear subspaces through these points, in increasing order of dimension.
\end{theorem}
This blowup is manifestly not toric. The maximal \textit{toric} blowup of $\mathbb{P}^{n-3}$, at the coordinate subspaces in order of increasing dimension, is isomorphic to the toric variety associated to the permutohedron. This gives rise to the alternative modular compactification of $M_{0,n}$ studied by Losev and Manin in~\cite{LM}. Another point of view, taken by Hassett, is to give the marked points ``weights'', allowing markings to collide if their total weight is sufficiently small. The Losev--Manin space is obtained by giving the first two marked points weight $1$, and the remaining points weight $\epsilon$ for sufficiently small $\epsilon$.
Hassett also points out that $\mathbb{P}^{n-3}$ is itself a modular compactification of $M_{0,n}$, given by the weight vector that assigns weight $1$ to the first mark and weight $\epsilon$ to the remaining marks, for $\epsilon$ sufficiently small. Hassett's perspective yields a large class of interesting compactifications of $M_{0,n}$ lying in between $\mathbb{P}^{n-3}$ and $\overline M_{0,n}$.
On the other hand, given any finite graph $G$ on $n-2$ vertices, Carr and Devadoss~\cite{CD06} exhibit the graph associahedron ${\mathcal P}G$ as a truncation of the simplex on $n-2$ vertices. This naturally gives rise to a toric blowup of $\mathbb{P}^{n-3}$. The complete graph $K_{n-2}$ produces the permutohedral variety.
Theorem \ref{thm: mainthm} tells us that there is remarkably little overlap between these two constructions. Much of the work on the birational geometry of $M_{0,n}$ has focused on modular compactifications such as Hassett's \cite{GJM, Has03, Smyth}. In a strict sense, the maps between such compactifications are well behaved (see \cite[Theorem 1.15]{Smyth}), leading some to believe that $\overline{M}_{0,n}$ has good Mori-theoretic properties. The recent proof that $\overline{M}_{0,n}$ is not a Mori Dream Space for $n$ sufficiently large~\cite{CT13, GK14} relies instead on the combinatorics of toric compactifications, suggesting a significant difference between these two points of view.
\[
\begin{tikzcd}
\phantom{1} & {\color{gray}\overline{M}_{0,n}} \arrow[color=gray]{dr} \arrow[color=gray]{dl} & \phantom{1}\\
\overline{M}_{0,n}^{LM}\arrow{d} \arrow{rr}{\cong} & & X(\mathcal{P}K_{n-2})\arrow{d}\\
\vdots \arrow{d} & & \vdots\arrow{d} \\
\overline{M}_{0,\omega} \arrow[dashed]{rr}{?} \arrow[color=gray]{dr} & & X({\mathcal P}G) \arrow[color=gray]{dl} \\
\phantom{1} & {\color{gray}\mathbb{P}^{n-3}} & \phantom{1}\\
\end{tikzcd}
\]
It is worth noting that, although we are primarily concerned with graph associahedra of connected graphs, the disconnected case has been considered by Carr, Devadoss, and Forcey in~\cite{CDF}. In this formulation, the graph associahedron of the discrete set on $n-2$ vertices is the $(n-3)$-simplex. The associated toric variety is simply $\mathbb{P}^{n-3}$. In this sense, $\mathbb{P}^{n-3}$ is a second example, after Losev--Manin, of a toric graph associahedron that is a Hassett space.
Graph associahedra encompass a large number of important polytopes that arise in a multitude of situations in geometry and topology. For instance, the real locus $\overline M_{0,n}(\RR)$ of the Grothendieck--Knudson space of $n$-pointed rational curves has an intrinsic tiling by associahedra~\cite{Sta63} (i.e. the graph associahedron of the path graph). More generally, the wonderful compactification of a hyperplane arrangement of a Coxeter system has a tiling by graph associahedra~\cite{CD06}. The cyclohedra first appeared in work of Bott and Taubes on knot invariants~\cite{BT94}. Recently, Bloom has exhibited beautiful connections between the combinatorics of graph associahedra and Floer homology theories~\cite{Bloom11}.
In algebraic geometry, the permutohedron is closely related to the geometry of the Cremona transformation on projective space, and in turn to the closed topological vertex in Gromov--Witten theory~\cite{BK}. The Gromov--Witten theory of toric graph associahedra is considered in~\cite{KRRW}. In the present work, we further explore the connection first noticed by Kapranov, and Losev and Manin, between compactifications of $M_{0,n}$ and the permutohedron.
\subsection*{Acknowledgements} This work was completed as part of the 2014 Summer Undergraduate Mathematics Research at Yale (SUMRY) program, where the first author was a participant and the second and third authors were mentors. We are grateful to all involved in the SUMRY program for the vibrant research community that they helped create. It is a pleasure to thank Dagan Karp, who actively collaborated with the third author when the ideas in the present text were in their early stages. We thank Satyan Devadoss for his encouragement, as well as permission to include Figure~\ref{fig: star-4} from~\cite{CDF}. Finally, we thank the referee for their careful reading and comments. The authors were supported by NSF grant CAREER DMS-1149054 (PI: Sam Payne).
\section{Background}\label{sec: background}
\subsection{Toric graph associahedra} Let $G$ be a connected finite simple graph with vertex set $V(G)=\{0,{\ldots},d \}$. The graph associahedron ${\mathcal P}G$ of $G$ is defined by iterated truncations of the $d$-simplex. Let $\{H_i\}$ denote the set of facets of the $d$-simplex $\Delta_d$, and fix a bijection $V(G)\leftrightarrow \{H_i\}$. Notice that to every subset $S\subset V(G)$ there corresponds a unique face of $\Delta_d$, namely the intersection of the facets $H_i$ for $i\in S$.
\begin{definition}
A \textit{tube} in $G$ is a subset $T\subset V(G)$, such that the induced subgraph on the vertices $T$ is connected. We say that a tube is \textit{trivial} if $\vert T \vert = 1$. We call a subset of vertices $D$ a \textit{non-tube} if the induced subgraph is not connected.
\end{definition}
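For small graphs, tubes can be enumerated mechanically. The following Python sketch does this for a graph given as an adjacency dictionary; the function names and the toy graph \texttt{P3} (the path on three vertices) are introduced here only for illustration.
\begin{verbatim}
from itertools import combinations

def is_tube(adj, subset):
    # A tube is a subset of vertices inducing a connected subgraph.
    subset = set(subset)
    start = next(iter(subset))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w in subset and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == subset

def tubes(adj):
    # Enumerate all tubes (including the trivial, one-vertex ones).
    vertices = list(adj)
    return [set(s) for r in range(1, len(vertices) + 1)
                   for s in combinations(vertices, r)
                   if is_tube(adj, s)]

# Toy example: the path graph on vertices 0 -- 1 -- 2.
P3 = {0: [1], 1: [0, 2], 2: [1]}
print(tubes(P3))
# [{0}, {1}, {2}, {0, 1}, {1, 2}, {0, 1, 2}]
\end{verbatim}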
The polytope ${\mathcal P}G$ is constructed by the following procedure.
\begin{construction}
\textnormal{Fix a connected graph $G$ on $d+1$ vertices and a bijection between the vertices of $G$ and the facets of $\Delta_d$. Let $T_1,\ldots, T_\ell$ denote the tubes in $G$ having cardinality $d$. Let $f_1,\ldots, f_\ell$ denote the corresponding faces (vertices) of $\Delta_d$. Truncate the faces $f_i$ to produce a polytope $\mathcal P G^{(1)}$. Now, let $T'_1,\ldots, T'_r$ be the tubes in $G$ having cardinality $d-1$. These correspond to faces of dimension $1$ (edges) $e_1,\ldots, e_r$ in $\Delta_d$. Each such edge $e_k$ corresponds to a unique edge in $\mathcal P G^{(1)}$, which we continue to denote $e_k$. Truncate each such edge to obtain a polytope $\mathcal P G^{(2)}$. Proceed inductively, until the tubes under consideration have cardinality $2$. The resulting iterated truncation is the graph associahedron, denoted $\mathcal PG$. }
\end{construction}
\begin{figure}[h!]
\includegraphics{stellahedron.pdf}
\caption{The construction of the stellahedron as a truncation of the $3$-simplex. This is the graph associahedron of the $4$-star graph.}
\label{fig: star-4}
\end{figure}
We refer to~\cite{Dev09} for a more thorough description of these truncations, and other aspects of graph associahedra.
A toric variety is obtained from a fan, or a \textit{lattice} polytope, and the above construction does not give $\mathcal PG$ a canonical integer realization. Moreover, there can be distinct integer polytopes, producing different toric varieties, having identical posets of faces. Our point of view is to take the polytope of the toric variety $\mathbb{P}^d$ (which is combinatorially a simplex) as a starting point, interpreting the construction of Devadoss and Carr as prescribing an iterated blowup of $\mathbb{P}^d$. We will refer to these varieties as \textit{toric graph associahedra}, and will denote them $X({\mathcal P}G)$.
Fix a finite simple graph $G$ on $d+1$ vertices, and let $\Sigma$ denote the fan of the toric variety $\mathbb{P}^d$. The cones of $\Sigma$ are in natural bijection with the faces of $\Delta_d$. As above, a tube $T$ corresponds to a unique cone $\sigma_T$ of $\Sigma$, and hence a unique coordinate subspace of codimension $|T|$.
\begin{definition}
The graph associahedral fan $\Sigma_G$ is obtained from $\Sigma$ by iterated stellar subdivision along the cones $\sigma_T$ for tubes $T$ of $G$, in increasing order of codimension.
\end{definition}
This fan is independent of the chosen order of subdivision among cones of a given dimension. This follows immediately from~\cite[Theorem 2.6]{CD06}.
\begin{definition}
The toric graph associahedron $X({\mathcal P}G) := X(\Sigma_G)$ is defined to be the toric variety associated to the fan $\Sigma_G$. The variety $X({\mathcal P}G)$ is the iterated blowup of $\mathbb{P}^d$ along the coordinate subspaces corresponding to tubes $T$ in order of increasing dimension.
\end{definition}
\begin{remark}
We can easily recover a polytope ${\mathcal P}G$ from the toric variety $X({\mathcal P}G)$. Choose an equivariant projective embedding of $X({\mathcal P}G)$. Then the poset of faces of the associated lattice polytope can be identified with that of the graph associahedron ${\mathcal P}G$. Alternatively, the canonically associated compactified fan of $\Sigma_G$, as described in~\cite[Section 2]{ACP}, is a polytope whose face poset is identified with that of ${\mathcal P}G$. This also coincides with the Kajiwara--Payne extended tropicalization of the toric variety $X({\mathcal P}G)$~\cite[Remark 3.3]{Pay09}.
\end{remark}
\subsection{Kapranov's model and Hassett spaces}~\label{sec: Hassett}
The connection to moduli spaces comes from Kapranov's blowup model of $\overline{M}_{0,n}$. Given a general point $p \in \mathbb{P}^{n-3}$, there exists a unique rational normal curve $C$ through $p$, the point $p_0 = (1, \ldots , 1)$, and the $n-2$ coordinate points $p_1 , \ldots , p_{n-2}$. The curve $C$, together with the $n$ points $p, p_0 , \ldots , p_{n-2}$, determines a point in $M_{0,n}$, and in this way we obtain a birational map $\mathbb{P}^{n-3} \dashrightarrow \overline{M}_{0,n}$. The indeterminacy loci of this map are the linear spans of subsets of the points $p_i$, and Kapranov~\cite{Kap93} shows that $\overline{M}_{0,n}$ is isomorphic to the blowup of $\mathbb{P}^{n-3}$ along these linear spans, in increasing order of dimension.
By blowing up the projective space $\mathbb{P}^{n-3}$ along some subset of these linear spans, one obtains an alternate compactification of $M_{0,n}$. For example, blowing up $\mathbb{P}^{n-3}$ along the linear spans of the coordinate points produces the well-known Losev-Manin space $\overline{M}_{0,n}^{LM}$.
Both the Grothendieck--Knudson compactification $\overline{M}_{0,n}$ and the Losev-Manin space $\overline{M}_{0,n}^{LM}$ are examples of a more general construction due to Hassett. For each weight vector $\omega = (c_M , c_0 , \ldots , c_{n-2} )$ such that $0 < c_i \leq 1$ and $\sum c_i > 2$, Hassett constructs a smooth moduli space of $\omega$-stable curves $\overline{M}_{0,\omega}$ \cite{Has03}.
\begin{definition}
A genus 0 marked curve $(C, p_M , p_0 , \ldots , p_{n-2})$ is $\omega$-stable if
\begin{enumerate}[(S1)]
\item the only singularities of $C$ are nodes,
\item the marked points are smooth points of $C$,
\item the total weight of coincident points is at most 1, and
\item the line bundle $\omega_C (\sum c_i p_i )$ is ample.
\end{enumerate}
\end{definition}
The last condition can also be rephrased as saying that the total weight of marked points on any component of $C$, plus the number of nodes, must be strictly greater than 2.
We note the following property of Hassett's weighted spaces, which will be useful in the next section of the paper.
\begin{proposition} \cite[Theorem 4.1]{Has03}
Let $\omega = (c_M , c_0 , \ldots , c_{n-2})$ and $\omega' = (c_M' , c_0' , \ldots , c_{n-2}')$ be collections of weight data such that $c_i \geq c_i'$ for all $i\in \{0,\ldots,n-2\}$. Then there exists a natural birational reduction morphism
$$\rho: \overline{M}_{0,\omega} \to \overline{M}_{0,\omega'} . $$
\end{proposition}
\section{Main results}
\label{sec: main-results}
Throughout this section, $G$ will be a graph on $n-2$ vertices, labeled $v_1,\ldots, v_{n-2}$. Furthermore, we fix a bijection between the set of vertices $\{v_i\}$ and the facets of the $(n-3)$-simplex $\Delta_{n-3}$. We label the markings on an $n$-pointed rational curve by $p_M, p_0, p_1,\ldots, p_{n-2}$. We think of $p_M$ as being the moving point in Kapranov's construction, and $p_0$ as being the identity of the torus in $\mathbb{P}^{n-3}$.
We begin with the following proposition.
\begin{proposition}
\label{prop:weightrelations}
Let $\omega = (c_M, c_0,\ldots, c_{n-2})$ be a weight vector such that
\[
\overline M_{0,\omega} \cong X(\mathcal P G).
\]
Then we have the following relationships among the entries of $\omega$.
\begin{enumerate}[(W1)]
\item For every nontrivial tube $T\subset V(G)$,
\[
c_0+\sum_{i\in T} c_i>1.
\]
\item For every non-tube $D\subset V(G)$,
\[
c_0+\sum_{j\in D} c_j \leq 1.
\]
\end{enumerate}
\end{proposition}
\begin{proof}
Let $G$ be a graph on $n-2$ vertices, and fix a bijection of the vertices $\{v_i\}$ with the coordinate hyperplanes $\{H_i\}$ of $\mathbb{P}^{n-3}$. Moreover, we fix an identification of $\overline M_{0,n}$ with the iterated blowup of $\mathbb{P}^{n-3}$. We consider the reduction map
\[
\rho: \overline M_{0,n}\to \overline M_{0,\omega} = X({\mathcal P}G).
\]
Let $T$ be a nontrivial tube consisting of vertices $v_{i_1},\ldots, v_{i_e}$, and let $E_T$ be the exceptional divisor in $\overline M_{0,n}$ above the linear subspace $H_{i_1}\cap\cdots \cap H_{i_e}$ in $\mathbb{P}^{n-3}$. Observe that $\rho$ restricts to an isomorphism on this locus, and thus, the universal curve ${\cal C}_{0,n}$ is $\omega$-stable on $E_T$. Over the generic point of $E_T$, ${\cal C}_{0,n}$ is an $\omega$-stable curve with two components. The components are marked by the sets $I$ and $I^C$, where $I = \{0,i\}_{i\in T}$, whence (W1) follows. Let $D$ be a non-tube and $E_D$ be the exceptional divisor in $\overline M_{0,n}$ above the linear subspace of $\mathbb{P}^{n-3}$ given by $\bigcap_{i\in D} H_i$. The reduction morphism $\rho$ contracts $E_D$, specifically by forgetting the moduli of the component marked by the set $I=\{0,j\}_{j\in D}$, and similarly to the tube case, we conclude (W2).
\end{proof}
The above proposition yields two natural obstructions for a toric graph associahedron to be a Hassett space.
\noindent \textbf{Obstruction A.} Let $D$ be a non-tube. If there exists a nontrivial tube $T_D\subset D$, then $X(\mathcal P G)$ cannot be isomorphic to $\overline M_{0,\omega}$ for any weight vector $\omega$.
\begin{proof}
Observe that since $T_D$ is a nontrivial tube, we may apply the inequality (W1) above to obtain
\[
c_0+\sum_{i\in T_D} c_i>1.
\]
On the other hand, $D$ is not a tube, so
\[
c_0+\sum_{j\in D} c_j\leq 1.
\]
Subtracting these inequalities, we obtain
\[
\sum_{i\in D\setminus T_D} c_i <0,
\]
which is impossible.
\end{proof}
\noindent \textbf{Obstruction B.} Suppose there exists a set of vertices $S\subset V(G)$ such that $S$ can be partitioned into $k$ nontrivial tubes and can also be partitioned into $k'$ non-tubes, with $k'\leq k$. Then $X(\mathcal P G)$ cannot be isomorphic to $\overline M_{0,\omega}$ for any weight vector $\omega$.
\begin{proof}
Let
\[
S = \coprod_{i=1}^k T_i = \coprod_{i=1}^{k'} D_i
\]
where the $T_i$'s are nontrivial tubes and the $D_i$'s are non-tubes. We then have
\begin{eqnarray*}
\sum_{i=1}^k (c_0 + \sum_{j \in T_i} c_j ) = k c_0 +\sum_{i\in S} c_i &>&k\\
\sum_{i=1}^{k'} (c_0 + \sum_{j \in D_i} c_j ) = k' c_0+\sum_{i\in S} c_i&\leq& k'.
\end{eqnarray*}
Subtracting the inequalities, we see that $(k-k')c_0 > k-k'$. If $k'<k$, this gives $c_0>1$, which is impossible, and if $k'=k$, the two displayed inequalities already contradict each other. This proves Obstruction B.
\end{proof}
\begin{remark}
Obstruction A is more generally an obstruction to $X(\mathcal P G)$ being modular in the sense of Smyth~\cite{Smyth}. However, Obstruction B is not (see, for example, the discussion in~\cite[Section 7.5]{GJM}). We note that the class of graphs that are unobstructed by A is strictly larger than that of graphs that are unobstructed by both A and B. For example, complete bipartite graphs are obstructed by B but not A. It would be interesting to explore which graph associahedra are isomorphic to modular compactifications of $M_{0,n}$.
\end{remark}
\subsection{Proof of main theorem} Assume that $X(\mathcal P G)$ is isomorphic to a Hassett space, and let $I\subset V(G)$ be a maximal independent set. Suppose there exists a vertex $v$ with $v\notin I$. By definition, there exists a vertex $w \in I$ such that there is an edge between $v$ and $w$. If there is another vertex $u \in I$ with no edge connecting it to $v$, then $\{ v,w \}$ is a tube and $\{ u,v,w \}$ is a non-tube, contradicting Obstruction A. It follows that there must exist edges between $v$ and $u$ for every vertex $u\in I$.
Now, consider another vertex $v' \neq v$ not lying in the independent set $I$. If $\vert I \vert = 1$, then there is an edge between $v$ and $v'$ by the assumption that $I$ is maximal. Otherwise, if there is no edge between $v$ and $v'$, then for some $u,u' \in I$, both $\{ u,v \}$ and $\{ u',v' \}$ are tubes, but $\{ u,u' \}$ and $\{ v,v' \}$ are non-tubes, contradicting Obstruction B. It follows that there is an edge between any two vertices not contained in the independent set $I$. Inductively, we may reconstruct $G$ from $I$ as an iterated cone.
It remains to show that for such $G$, $X(\mathcal P G)$ is indeed a Hassett space. We write $G$ as an iterated cone
\[
Cone^{n-2-k}\left(\sqcup_{i=1}^k v_i \right).
\]
Recall that we have fixed a bijection between vertices $\{v_i\}$ of $G$, and facets $\{F_i\}$ of $\Delta_{n-3}$. Accordingly, there is a bijection between $\{v_i\}$ and torus invariant points $\{p_1,\ldots, p_{n-2}\}$ of $\mathbb{P}^{n-3}$, via the induced bijection between $\{v_i\}$ and vertices opposite to the facets $\{F_i\}$.
We call the torus fixed points $\{p_1,\ldots, p_k\}$ corresponding to the discrete base set of $G$ \textit{independent points}, and the remaining points $\{p_{k+1},\ldots, p_{n-2}\}$ \textit{cone points}. Now, let $\omega = (c_M,c_0,c_1,\ldots, c_k,c_{k+1},\ldots, c_{n-2})$. By symmetry, we may assume that the weight vector is unchanged by permuting the cone points and independent points amongst themselves. We let $c_c$ denote the weight of the cone points and $c_i$ denote the weight of the independent points. The point $p_M$ in Kapranov's construction is not allowed to coincide with any of the other marked points, so we are forced to set $c_M = 1$.
Recall that points are allowed to collide precisely when the sum of their weights is at most $1$. Every cone vertex is connected to every other vertex in the graph, so the non-tubes of $G$ are precisely subsets of independent vertices of size at least two. Thus we may conclude that any collection of independent points may coincide with $p_0$. Conversely, any subset of vertices that contains a cone vertex is a tube, so none of the cone points may collide with $p_0$. Set $c_i$ equal to $\epsilon$ for $\epsilon$ sufficiently small, as this allows the independent points to collide arbitrarily. We set $c_c = (k+2)\epsilon$ where $k$ is the number of independent points. Finally, we set $c_0 = 1-(k+1)\epsilon$. With these weights $p_0$ cannot collide with the cone points, as the sum of weights exceeds $1$.
\[
\overline M_{0,n}\to \overline M_{0,\omega},
\]
by contracting unstable loci in $\overline M_{0,n}$. Blowing down from Kapranov's realization of $\overline M_{0,n}$, the same arguments as above, applied to the universal curve $\mathcal C_{0,n}$, show that this is the same variety $X(\mathcal{P}G)$ obtained by blowing up $\mathbb{P}^{n-3}$. \qed
\begin{remark}\label{rem: specific-weights}{\bf (Specific weights for the moduli spaces)} The parameter space for weights $(0,1]^n$ has a natural wall and chamber structure, so for $\omega_1,\omega_2$ in the same chamber, the moduli functors for $\omega_1$- and $\omega_2$-stable curves coincide. We record here the weights obtained in the proof above so that the reader may access them easily.
Let $G$ be a graph on $n-2$ vertices, obtained as an iterated cone over the discrete set on $k$ vertices $v_1,\ldots, v_k$. Then $X(\mathcal PG)\cong \overline M_{0,\omega}$ for the weight vector $(c_M,c_0,c_1,\ldots, c_{n-2})$ given below, with $\epsilon$ sufficiently small:
\begin{eqnarray*}
c_M &=& 1\\
c_0 &=& 1-(k+1)\epsilon\\
c_c &=& (k+2)\epsilon \ \ \textnormal{when $p_c$ is a cone point} \\
c_i &=& \epsilon \ \ \textnormal{when $p_i$ is an independent point.} \\
\end{eqnarray*}
\end{remark}
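As a consistency check, these weights satisfy the relations of Proposition \ref{prop:weightrelations}: every nontrivial tube $T$ contains a cone vertex (the cone vertices are adjacent to all other vertices of $G$), so that $c_0+\sum_{i\in T}c_i\ge c_0+c_c=1+\epsilon>1$, while every non-tube $D$ consists of at least two independent vertices, so that $c_0+\sum_{j\in D}c_j\le 1-(k+1)\epsilon+k\epsilon=1-\epsilon\le 1$.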
\section{Examples}
\subsection{Graphs on $3$ vertices}
There are two connected graphs on $3$ vertices, the cycle $K_3$, and the path graph $P_3$. The toric graph associahedron $X(\mathcal P K_3)$ is the blowup of $\mathbb{P}^2$ at its $3$ torus fixed points. The toric graph associahedron $X(\mathcal P P_3)$ is the blowup of $\mathbb{P}^2$ at $2$ torus fixed points. The Grothendieck--Knudson compactification of the moduli space of $5$-pointed curves is isomorphic to the blowup of $\mathbb{P}^2$ at its $3$ torus fixed points, and the identity of the torus.
The cycle on $3$ vertices coincides with the complete graph, and thus the toric graph associahedron $X(\mathcal P K_3)$ is isomorphic to the Losev--Manin compactification $\overline M_{0,\omega}$ for $\omega = (1,1,\epsilon,\epsilon,\epsilon)$. On the other hand, the path graph $P_3$ can be viewed as the cone over the discrete set on $2$ vertices. The toric graph associahedron $X(\mathcal P P_3)$ is isomorphic to $\overline M_{0,\omega'}$ for $\omega' = (1,\frac{1}{2},\frac{1+\epsilon}{2},\epsilon,\epsilon)$.
The Losev--Manin stable $5$-pointed rational curves are those chains of $\mathbb{P}^1$'s, where $p_1$ and $p_2$ lie on either end of the chain, and each component has at least one of the light points $p_3,p_4$, or $p_5$. Thus, the chain can have at most $3$ components.
If $C$ is an $\omega'$-stable $5$-pointed rational curve with two components, then $p_2$ and $p_3$ must lie on the same component as each other, but on a different component from $p_1$. Moreover, the light points $p_4$ and $p_5$ must lie on distinct components of $C$. In fact, $C$ cannot have $3$ components. To see this, assume otherwise. Then $C$ is a chain of $\mathbb{P}^1$'s
\[
C = C_1\cup C_2\cup C_3.
\]
We may assume without loss of generality that $p_1$ lies on $C_1$. Then $p_2$ and $p_3$ must both lie on $C_3$. To stabilize the components $C_1$ and $C_3$, they must both be additionally marked by one of the light points. However, now the central component $C_2$ has $2$ nodes and no marks, and is unstable.
\[
\begin{tikzcd}
\phantom{1} & {\color{gray}\overline{M}_{0,5} \cong Bl_{4}(\mathbb{P}^2)} \arrow[color=gray]{dr} \arrow[color=gray]{dl} & \phantom{1}\\
\overline{M}_{0,n}^{LM}\arrow{d} \arrow{rr}{\cong} & & X(\mathcal{P}{K_3})\arrow{d}\\
\overline{M}_{0,\omega'} \arrow[swap]{rr}{\cong} \arrow[color=gray]{dr} & & X(\mathcal P P_3) \arrow[color=gray]{dl} \\
\phantom{1} & {\color{gray}\mathbb{P}^{2}} & \phantom{1}\\
\end{tikzcd}
\]
Note that, although $X (\mathcal P P_3)$ is isomorphic to $\overline{M}_{0,\omega'}$, the universal families over the two spaces are different. For example, in the toric model, if the moving point lies on the line between one of the points of weight $\frac{1}{2}$ and one of the points of weight $\epsilon$, then the unique conic through these 5 points is a union of two lines, and the resulting pointed curve is not $\omega'$-stable.
\subsection{Graphs on $4$ vertices} There are, up to symmetry, $6$ different connected graphs on $4$ vertices, but only $3$ of them can be obtained as iterated cones over discrete sets. These are the complete graph $K_4$, the complete graph minus a single edge $V_4$, and the star graph on $4$ vertices $S_4$.
\begin{figure}[h!]
\begin{tikzpicture}[scale=1.25]
\draw [ball color=black] (0,0) circle (0.5mm);
\draw [ball color=black] (1,0) circle (0.5mm);
\draw [ball color=black] (1,1) circle (0.5mm);
\draw [ball color=black] (0,1) circle (0.5mm);
\draw (0,1)--(1,1)--(1,0); \draw (1,1)--(0,0);
\draw [ball color=black] (3,0) circle (0.5mm);
\draw [ball color=black] (4,0) circle (0.5mm);
\draw [ball color=black] (4,1) circle (0.5mm);
\draw [ball color=black] (3,1) circle (0.5mm);
\draw (3,0)--(3,1)--(4,1)--(4,0)--(3,0)--(4,1);
\draw [ball color=black] (6,0) circle (0.5mm);
\draw [ball color=black] (7,0) circle (0.5mm);
\draw [ball color=black] (7,1) circle (0.5mm);
\draw [ball color=black] (6,1) circle (0.5mm);
\draw (6,0)--(6,1)--(7,1)--(7,0)--(6,0)--(7,1); \draw (7,0)--(6,1);
\end{tikzpicture}
\caption{Graphs on $4$ vertices giving rise to Hassett spaces.}
\end{figure}
We discuss the star graphs and complete graphs in greater generality in the forthcoming subsection.
The graph $V_4$ can be obtained as the twice iterated cone over the discrete set on $2$ vertices, $Cone^2(v_1\sqcup v_2)$. The toric variety $X(\mathcal P V_4)$ is isomorphic to $\overline M_{0,\omega}$ for
\[
\omega = (1,1-3\epsilon, 4\epsilon,4\epsilon,\epsilon, \epsilon).
\]
This space can be obtained from the Losev-Manin space $\overline{M}_{0,6}^{LM}$ by blowing down the divisor $\Delta_{134}$. To put this another way, suppose that $C = C_1 \cup C_2$ is a curve with two components, with three marked points on $C_1$ and three marked points on $C_2$. Then $C$ is $\omega$-stable if and only if at least one of the ``light'' points with weight $\epsilon$ lies on the same component as the ``heavy'' point with weight 1.
\subsection{Complete graphs and star graphs} As we have discussed above, if $G = K_{n-2}$, then $X(\mathcal P K_{n-2})$ is the permutohedral variety, isomorphic to the Losev--Manin compactification of $M_{0,n}$. The graph $K_{n-2}$ is obtained as the $(n-3)$-times iterated cone over a single vertex. At the other extreme, we may consider the cone over the discrete set on $n-3$ vertices. The resulting graph is the star graph on $n-2$ vertices, denoted $S_{n-2}$.
\begin{figure}[h!]
\begin{tikzpicture}
\draw [ball color = black] (-1.5,0) circle (0.5mm);
\draw [ball color = black] (-1,0) circle (0.5mm);
\draw [ball color = black] (-0.5,0) circle (0.5mm);
\draw [ball color = black] (1.5,0) circle (0.5mm);
\draw [ball color = black] (1,0) circle (0.5mm);
\draw [ball color = black] (0.5,0) circle (0.5mm);
\draw node at (0,0) {\tiny $\cdots$};
\draw [ball color=black] (0,1) circle (0.5mm);
\foreach \a in {-1.5,-1,-0.5,1.5,1,0.5}
\draw (0,1)--(\a,0);
\end{tikzpicture}
\caption{The star graph is a cone over a discrete set.}
\end{figure}
Observe that the tubes of $S_d$ are single vertices, or any subset of the vertices containing the cone point. Using this fact, it is straightforward to check that the graph associahedron $\mathcal P S_d$ can be described as follows. Choose a distinguished facet $F_0$ of the simplex $\Delta_d$, corresponding to the unique high-valence vertex of $S_d$. Then, $\mathcal P S_d$ may be obtained from $\Delta_d$ by truncating all faces lying in $F_0$. Correspondingly, the toric variety $X(\mathcal P S_d)$ is obtained by choosing a distinguished coordinate hyperplane $H_0$, and blowing up all coordinate planes contained in $H_0$. Observe that the proper transform $E$ of $H_0$ in $X(\mathcal P S_d)$ is isomorphic to the $(d-1)$ dimensional permutohedral variety.
It follows from the main result that
\[
X(\mathcal P S_{n-2}) \cong \overline M_{0,\omega},
\]
where $\omega = (1,\frac{1}{2},\frac{1}{2}+\epsilon,\epsilon,\epsilon,\ldots, \epsilon)$.
The locus of curves consisting of two components, where one component is marked with $p_2$ and $p_3$, is isomorphic to the Losev--Manin compactification of $M_{0,n-1}$. It is straightforward to check that this stratum coincides with the torus-invariant exceptional divisor $E$ of $X(\mathcal P S_{n-2})$ described above.
\begin{remark}
For different choices of weights $\omega_1$ and $\omega_2$, it may be possible for $\overline M_{0,\omega_1}$ to be isomorphic to $\overline M_{0,\omega_2}$ as varieties but \textit{not} as moduli spaces, i.e. the universal families may not coincide. For instance, choosing $\omega = (1,\frac{1}{2},\frac{1}{2},\epsilon,\epsilon,\ldots, \epsilon)$, the space $\overline M_{0,\omega}$ is isomorphic to $X(\mathcal P S_{n-2})$ as above. Here, the Losev--Manin compactification of $M_{0,n-1}$ appears as the locus where $p_2$ and $p_3$ coincide.
\end{remark}
\bibliographystyle{siam}
| {
"timestamp": "2015-08-14T02:10:53",
"yymm": "1411",
"arxiv_id": "1411.0537",
"language": "en",
"url": "https://arxiv.org/abs/1411.0537",
"abstract": "To any graph $G$ one can associate a toric variety $X(\\mathcal{P}G)$, obtained as a blowup of projective space along coordinate subspaces corresponding to connected subgraphs of $G$. The polytope of this toric variety is the graph associahedron of $G$, a class of polytopes that includes the permutohedron, associahedron, and stellahedron. We show that the space $X(\\mathcal{P}{G})$ is isomorphic to a Hassett compactification of $M_{0,n}$ precisely when $G$ is an iterated cone over a discrete set. This may be viewed as a generalization of the well-known fact that the Losev--Manin moduli space is isomorphic to the toric variety associated to the permutohedron.",
"subjects": "Algebraic Geometry (math.AG); Combinatorics (math.CO)",
"title": "Toric graph associahedra and compactifications of $M_{0,n}$",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9873750514614409,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7095221693941154
} |
https://arxiv.org/abs/math/0603106 | Lattice Grids and Prisms are Antimagic | An \emph{antimagic labeling} of a finite undirected simple graph with $m$ edges and $n$ vertices is a bijection from the set of edges to the integers $1,...,m$ such that all $n$ vertex sums are pairwise distinct, where a vertex sum is the sum of labels of all edges incident with the same vertex. A graph is called \emph{antimagic} if it has an antimagic labeling. In 1990, Hartsfield and Ringel conjectured that every connected graph, but $K_2$, is antimagic. In 2004, N. Alon et al showed that this conjecture is true for $n$-vertex graphs with minimum degree $\Omega(\log n)$. They also proved that complete partite graphs (other than $K_2$) and $n$-vertex graphs with maximum degree at least $n-2$ are antimagic. Recently, Wang showed that the toroidal grids (the Cartesian products of two or more cycles) are antimagic. Two open problems left in Wang's paper are about the antimagicness of lattice grid graphs and prism graphs, which are the Cartesian products of two paths, and of a cycle and a path, respectively. In this article, we prove that these two classes of graphs are antimagic, by constructing such antimagic labelings. | \section{Introduction}
All graphs in this paper are finite, undirected and simple. In 1990,
Hartsfield and Ringel \cite{HaRi} introduced the concept of
\emph{antimagic} graph. An \emph{antimagic labeling} of a graph with $m$ edges and
$n$ vertices is a bijection from the set of edges to the integers
$1,\ldots,m$ such that all $n$ vertex sums are pairwise distinct,
where a vertex sum is the sum of labels of all edges incident with
that vertex. A graph is called antimagic if it has an antimagic
labeling. Hartsfield and Ringel showed that paths $P_n (n\geq 3)$,
cycles, wheels, and complete graphs $K_n (n\geq 3)$ are antimagic.
They conjectured that all trees except $K_2$ are antimagic and,
moreover, that all connected graphs except $K_2$ are antimagic. Both
conjectures remain open. In 2004, Alon et al.\ \cite{AKLRY} showed
that the latter conjecture is true for all graphs with $n$ vertices
and minimum degree $\Omega (\log n)$. They also proved that a graph
$G$ with $n\ (\geq 4)$ vertices and maximum degree $\Delta (G)\geq
n-2$ is antimagic, and all complete partite graphs except $K_2$ are
antimagic. In \cite{Wa}, Wang showed that the toroidal
grids (the Cartesian products of two cycles) are antimagic; the author
also proved that all Cartesian products of an antimagic $k$-regular graph ($k>1$)
and a cycle (and consequently Cartesian products of more than two cycles) are antimagic.
Two open problems left in \cite{Wa} are
about the antimagicness of lattice grid graphs and prism graphs, which are the Cartesian
products of two paths, and of a cycle and a path, respectively.
In this paper, we prove that these two classes of graphs are
antimagic, by constructing such antimagic labelings.
In contrast to toroidal grids, lattices and prisms have less symmetry (more local
structure), so we incorporate new strategies into our labelings.
Our main results are the following two theorems, which are proved in
Section 3 and Section 4, respectively.
\begin{theorem}\label{lattice}
\textit{All lattice grid graphs $P_1[m+1]\times P_2[n+1]$ are
antimagic, for integers $m,n\geq 1$.}
\end{theorem}
\begin{theorem}\label{prism}
\textit{All prism graphs $C[m]\times P[n+1]$ are antimagic, for
integers $m\geq 3, n\geq 1$.}
\end{theorem}
For more results, open problems and conjectures on antimagic graphs
and various graph labeling problems, please see \cite{Ga, He}.
\section{Preliminaries}\label{prelim}
The \emph{Cartesian product} $G_1\times G_2$ of two graphs $G_1=(V_1, E_1)$
and $G_2=(V_2, E_2)$ is a graph with vertex set $V_1\times V_2$, and
$(u_1,u_2)$ is adjacent to $(v_1,v_2)$ in $G_1\times G_2$ if and
only if $u_1=v_1$ and $u_2v_2\in E_2$, or, $u_2=v_2$ and $u_1v_1\in
E_1$. The Cartesian product of two paths is a lattice grid graph,
and the Cartesian product of a path and a cycle is a prism grid
graph.
Before proving our main results, we first describe antimagic
labelings of paths and cycles, respectively (see Figure \ref{fig:pathcycle}).
The labeling methods are the same as in \cite{Wa};
we rephrase them here for the sake of completeness.
\begin{lemma}\label{path}
\textit{All paths $P[m+1]$ are antimagic for integers $m\geq 2$.}
\end{lemma}
\noindent {\bf Proof:}\, Suppose the vertex set is $\{v_1,\ldots,
v_{m+1}\}$ and the edge set is arranged to be
$\{v_iv_{i+2}|i=1,\ldots,m-1\}\cup\{v_mv_{m+1}\}$. The following
labeling $f(v_iv_{i+2})=i$, for $1\leq i\leq m-1$, and
$f(v_mv_{m+1})=m$ is antimagic, since we have
$$
f^{+}(v_i) = \left\{
\begin{array}{ll}
i & i=1,2;
\\
2i-2 & i=3,\ldots,m;
\\
2m-1 & i=m+1.
\end{array}
\right.
$$
Therefore,
$$
f^{+}(v_1)<f^{+}(v_2)< \ldots\ldots <f^{+}(v_{m+1})
$$\hfill\square\bigskip
\begin{lemma}\label{cycle}
\textit{All cycles $C[m]$ are antimagic for integers $m\geq 3$.}
\end{lemma}
\noindent {\bf Proof:}\, Suppose the vertex set is $\{v_1,\ldots,
v_m\}$ and the edge set is arranged to be $\{v_1v_2\}\cup
\{v_iv_{i+2}|i=1,\ldots,m-2\}\cup\{v_{m-1}v_m\}$. The following
labeling $f(v_1v_2)=1$, $f(v_iv_{i+2})=i+1$, for $1\leq i\leq m-2$,
and $f(v_{m-1}v_m)=m$ is antimagic, since we have
$$
f^{+}(v_i) = \left\{
\begin{array}{ll}
3 & i=1;
\\
2i & i=2,\ldots,m-1;
\\
2m-1 & i=m.
\end{array}
\right.
$$
Therefore,
$$
f^{+}(v_1)<f^{+}(v_2)< \ldots\ldots <f^{+}(v_m)
$$\hfill\square\bigskip
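Both constructions are easy to generate and test by computer. The following Python sketch (ours, not part of the original argument; the function names are chosen for illustration only) builds the two labelings exactly as in the proofs of Lemmas~\ref{path} and~\ref{cycle} and verifies that the induced vertex sums are pairwise distinct. The check is redundant given the explicit formulas for $f^{+}$ above and is included only as a sanity test.
\begin{verbatim}
# Antimagic labelings of P[m+1] and C[m] as constructed in the lemmas above.
# Vertices are 1..m+1 (path) or 1..m (cycle); edges are arranged as above.

def path_labeling(m):
    # f(v_i v_{i+2}) = i for i = 1..m-1, and f(v_m v_{m+1}) = m
    edges = {(i, i + 2): i for i in range(1, m)}
    edges[(m, m + 1)] = m
    return edges

def cycle_labeling(m):
    # f(v_1 v_2) = 1, f(v_i v_{i+2}) = i+1 for i = 1..m-2, f(v_{m-1} v_m) = m
    edges = {(1, 2): 1}
    edges.update({(i, i + 2): i + 1 for i in range(1, m - 1)})
    edges[(m - 1, m)] = m
    return edges

def vertex_sums(labeling):
    sums = {}
    for (u, v), lab in labeling.items():
        sums[u] = sums.get(u, 0) + lab
        sums[v] = sums.get(v, 0) + lab
    return sums

def is_antimagic(labeling):
    vals = list(vertex_sums(labeling).values())
    return len(vals) == len(set(vals))

for m in range(2, 100):
    assert is_antimagic(path_labeling(m))
for m in range(3, 100):
    assert is_antimagic(cycle_labeling(m))
\end{verbatim}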
\begin{figure}[t]
\renewcommand{\captionlabelfont}{\bf}
\renewcommand{\captionlabeldelim}{.~}
\centering
\includegraphics[width=120mm]{pathcycle.eps}
\renewcommand{\figurename}{Fig.}
\caption{Antimagic labeling of $P[n+1]$ and $C[m]$, for $n=5$,
$m=5$} \label{fig:pathcycle}
\end{figure}
\section{Proof of Theorem \ref{lattice}}
Let $f: E(P_1[m+1]\times P_2[n+1])\rightarrow
\{1,2,\ldots,2mn+m+n\}$ be an edge labeling of $P_1[m+1]\times
P_2[n+1]$, and denote the induced sum at vertex $(u,v)$ by
$f^{+}(u,v)=\sum f((u,v),(y,z))$, where the sum runs over all
vertices $(y,z)$ adjacent to $(u,v)$ in $P_1[m+1]\times
P_2[n+1]$. To prove Theorem \ref{lattice}, first, we construct a
labeling that is antimagic on product graphs of two paths $P_1[m+1]$
and $P_2[n+1]$, for $n\geq m\geq 2$. Then, we give an antimagic
labeling of graphs $P_1[2]\times P_2[n+1]$, for $n\geq 1$.
\subsection{$P_1[m+1]\times P_2[n+1]$ is Antimagic, for $n\geq m\geq 2$}
Assume that $P_1[m+1]$ has edge set
$\{u_iu_{i+2}|i=1,\ldots,m-1\}\cup\{u_mu_{m+1}\}$, and $P_2[n+1]$
has edge set $\{v_iv_{i+1}|i=1,\ldots,n\}$. We will construct an antimagic labeling of
$P_1[m+1]\times P_2[n+1]$ for $n\geq m\geq 2$, which contains
two phases.
\\[2mm]
\noindent \emph{Phase 1:} For the $mn+m$ edges
contained in copies of $P_1[m+1]$ component (i.e., the edges
$((u_i,v_j), (u_{i+2},v_j))$ and $((u_m,v_j), (u_{m+1},v_j))$, for
$1\leq i\leq m-1, 1\leq j\leq n+1$), label them with even
numbers $2,4,\ldots,2mn+2m$ (notice $n\geq m$).
Specifically, first label the edges of $P_1[m+1]$ with
$U$ and $R$ such that $u_1u_3$ is labeled with $U$, and two edges
are labeled with different letters if they are incident to the same vertex. Clearly, there is a unique such
labeling. For each edge $u_iu_j\in E(P_1[m+1])$ labeled with $U$,
label the edges $((u_i,v_1),(u_j,v_1)), ((u_i,v_2),(u_j,v_2)),\ldots\ldots,((u_i,v_{n+1}),(u_j,v_{n+1}))$
in usual order; for each edge $u_iu_j\in E(P_1[m+1])$
labeled with $R$, label the edges $((u_i,v_1),(u_j,v_1)),
((u_i,v_2),(u_j,v_2)),\ldots\ldots$, \\$((u_i,v_{n+1}),(u_j,v_{n+1}))$
in reversed order, and
\begin{center}
\noindent $2,\ 4,
\ldots\ldots\ldots\ldots\ldots\ldots\ldots,
2n+2$, \ \
(labels for $((u_1,v_i),(u_3,v_i)), i=1,2,\ldots,n+1$)
\\[2mm]
$2n+4,\ 2n+6,
\ldots\ldots\ldots\ldots,
4n+4$,\ \
(labels for $((u_2,v_i),(u_4,v_i)), i=1,2,\ldots,n+1$)
\\[2mm]
\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots
\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots
\\[2mm]
$2mn+2m-2n,\ldots, 2mn+2m$, \ \
(labels for $((u_m,v_i),(u_{m+1},v_i)), i=1,2,\ldots,n+1$)
\end{center}
\noindent \emph{Phase 2:} Denote by $A: a_1<a_2<\ldots<a_s$
the sequence of all odd numbers in $\{1,2,\ldots,2mn+m+n\}$, and
denote by $B: b_1<\ldots<b_t$ the sequence of all even numbers
in $\{2mn+2m+1,\ldots,2mn+m+n\}$, i.e., the even numbers that are
not used in Phase 1. Notice that $t\leq \frac
{1}{2}(2mn+m+n)-(mn+m)=\frac {1}{2}(n-m)$. We merge $A$ and $B$ into
a sequence $C:$ $a_1,a_2,\ldots,a_{s-t},b_1,a_{s-t+1},b_2,\ldots,b_t,a_s$ of $s+t$
terms ($s+t=mn+n$), and denote the sequence $C$ by $c_1,c_2,\ldots,c_{mn+n}$, which are
the labels for the other $mn+n$ edges contained in copies of $P_2[n+1]$ component.
For the $i$-th $P_2[n+1]$ component (with vertices $(u_i,v_1)$,
$(u_i,v_2)$,\ldots, $(u_i,v_{n+1})$), label its edges
in usual order according to the indices in the sequence $C$, $i=1,2,\ldots, m+1$, and
\begin{center}
\noindent $c_1,\ c_2,
\ldots\ldots\ldots\ldots\ldots\ldots\ldots,
c_n$, \ \ (labels for the 1st $P_2[n+1]$ component)
\\[2mm]
$c_{n+1},\ c_{n+2},
\ldots\ldots\ldots\ldots\ldots,
c_{2n}$, \ \ (labels for the 2nd $P_2[n+1]$ component)
\\[2mm]
\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots
\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots
\\[2mm]
$c_{mn+1},c_{mn+2},
\ldots,c_{mn+n}$, \ \ (labels for the $(m+1)$-th $P_2[n+1]$ component)
\end{center}
Notice that $2t\leq n-m$, hence only the edges in the $(m+1)$-th $P_2[n+1]$ component
may be labeled with even numbers (see Figure~\ref{fig:l48}).
\begin{figure}[t]
\renewcommand{\captionlabelfont}{\bf}
\renewcommand{\captionlabeldelim}{.~}
\centering
\includegraphics[width=128mm]{l48.eps}
\renewcommand{\figurename}{Fig.}
\caption{Antimagic labeling of $P_1[m+1]\times P_2[n+1]$, for $m=3,
n=7$} \label{fig:l48}
\end{figure}
\
In what follows, we will show that the above labeling is antimagic.
In the product graph $P_1[m+1]\times P_2[n+1]$, at each vertex
$(u,v)$, the edges incident to this vertex can be partitioned into
two parts, one part is contained in a copy of $P_1[m+1]$ component,
and the other part is contained in a copy of $P_2[n+1]$ component.
Let $f^{+}_1(u,v)$ and $f^{+}_2(u,v)$ denote the sum at vertex
$(u,v)$ restricted to $P_1[m+1]$ component and $P_2[n+1]$ component
respectively, i.e., $f^{+}_1(u,v)=\sum f((u,v),(y,v))$, where the
sum runs over all vertices $y$ adjacent to $u$ in $P_1[m+1]$, and
$f^{+}_2(u,v)=\sum f((u,v),(u,z))$, where the sum runs over all
vertices $z$ adjacent to $v$ in $P_2[n+1]$. Therefore,
$f^{+}(u,v)=f^{+}_1(u,v)+f^{+}_2(u,v)$. The following two claims imply the antimagicness of the above labeling.
\begin{claim} \label{clai:lattice1}
\noindent For the above labeling of $P_1[m+1]\times P_2[n+1]$,
$n\geq m\geq 2$, we have
\begin{eqnarray*}
&&f^{+}(u_1,v_2)<f^{+}(u_1,v_3)<\ldots\ldots\ldots\ldots<f^{+}(u_1,v_n)<
\\
&&f^{+}(u_2,v_2)<f^{+}(u_2,v_3)<\ldots\ldots\ldots\ldots<f^{+}(u_2,v_n)<
\\
&&\;\;\;\;\;\;
\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots
\\
&&f^{+}(u_m,v_2)<\,f^{+}(u_m,v_3)<\,\ldots\ldots\ldots<\,f^{+}(u_m,v_n)<
\\
&&f^{+}(u_{m+1},v_2)<\ldots<f^{+}(u_{m+1},v_{n-2t}),
\end{eqnarray*}
where $t$ $(\leq \frac {1}{2}(n-m))$ is the number of even numbers
in $\{2mn+2m+1,\ldots,2mn+m+n\}$. In addition, all the above sums
are even numbers.
\end{claim}
\noindent {\bf Proof:}\, Since $f^{+}_1(u_1,v_2)<f^{+}_1(u_1,v_3)<\ldots<f^{+}_1(u_1,v_n)$ and
$f^{+}_2(u_1,v_2)<f^{+}_2(u_1,v_3)<\ldots<f^{+}_2(u_1,v_n)$, we have $f^{+}(u_1,v_2)<f^{+}(u_1,v_3)<\ldots<f^{+}(u_1,v_n)$.
Moreover, $f^{+}(u_1,v_n)<f^{+}(u_2,v_2)$ since
$f^{+}_1(u_1,v_n)<f^{+}_1(u_2,v_2)$ and
$f^{+}_2(u_1,v_n)<f^{+}_2(u_2,v_2)$. Also, $f^{+}(u_2,v_2)<f^{+}(u_2,v_3)<
\ldots\ldots<f^{+}(u_2,v_n)$: since
$f^{+}_2(u_2,v_{i+1})-f^{+}_2(u_2,v_i)\geq 4$ and
$f^{+}_1(u_2,v_{i+1})-f^{+}_1(u_2,v_i)\geq -2$, it follows that $f^{+}(u_2,v_{i+1})-f^{+}(u_2,v_i)\geq 2$ for
$i=2,\ldots,n-1$. If $m=2$, then $f^{+}_1(u_3,v_2)=f^{+}_1(u_3,v_n)>f^{+}_1(u_2,v_n)$;
if $m>2$, $f^{+}_1(u_3,v_2)>f((u_3,v_2),(u_j,v_2))>f((u_2,v_n),(u_4,v_n))=f^{+}_1(u_2,v_n)$, where $j=4$ or $5$.
Thus, in either case we have $f^{+}_1(u_2,v_n)<f^{+}_1(u_3,v_2)$. Clearly,
$f^{+}_2(u_2,v_n)<f^{+}_2(u_3,v_2)$. It follows that $f^{+}(u_2,v_n)<f^{+}(u_3,v_2)$.
For the vertices of degree $4$, clearly, $f^{+}_
1(u_i,v_2)=f^{+}_1(u_i,v_3)=\ldots\ldots\ldots\ldots=f^{+}_1(u_i,v_n)$ for $i=3,\ldots,m+1.$ Moreover, $f^{+}_
1(u_3,v_2)<f^{+}_1(u_4,v_2)<\ldots<f^{+}_1(u_{m+1},v_2)$ since $f((u_1,v_2),(u_3,v_2))<f((u_2,v_2),(u_4,v_2))<\ldots<
f((u_{m-1},v_2),(u_{m+1},v_2))<f((u_m,v_2),(u_{m+1},v_2))$. It follows that
\begin{eqnarray*}
&&f^{+}_
1(u_3,v_2)=f^{+}_1(u_3,v_3)=\ldots\ldots\ldots\ldots=f^{+}_1(u_3,v_n)<
\\
&&f^{+}_1(u_4,v_2)=f^{+}_1(u_4,v_3)=
\ldots\ldots\ldots\ldots=f^{+}_1(u_4,v_n)<
\\
&&\;\;\;\;\;\;\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots
\\
&&f^{+}_1(u_m,v_2)=\,f^{+}_1(u_m,v_3)=\,
\ldots\ldots\ldots=\,f^{+}_1(u_m,v_n)<
\\
&&f^{+}_1(u_{m+1},v_2)=\ldots=f^{+}_1(u_{m+1},v_{n-2t}).
\end{eqnarray*}
On the other hand, since $c_1<c_2<\ldots<c_{mn+n-2t}$, we have that
\begin{eqnarray*}
&&f^{+}_2(u_3,v_2)<f^{+}_2(u_3,v_3)<\ldots\ldots\ldots\ldots<f^{+}_2(u_3,v_n)<
\\
&&f^{+}_2(u_4,v_2)<f^{+}_2(u_4,v_3)<
\ldots\ldots\ldots\ldots<f^{+}_2(u_4,v_n)<
\\
&&\;\;\;\;\;\;\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots
\\
&&f^{+}_2(u_m,v_2)<\,f^{+}_2(u_m,v_3)<\,
\ldots\ldots\ldots<\,f^{+}_2(u_m,v_n)<
\\
&&f^{+}_2(u_{m+1},v_2)<\ldots<f^{+}_2(u_{m+1},v_{n-2t}).
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
&&f^{+}(u_3,v_2)<f^{+}(u_3,v_3)<\ldots\ldots\ldots\ldots<f^{+}(u_3,v_n)<
\\
&&f^{+}(u_4,v_2)<f^{+}(u_4,v_3)<
\ldots\ldots\ldots\ldots<f^{+}(u_4,v_n)<
\\
&&\;\;\;\;\;\;\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots
\\
&&f^{+}(u_m,v_2)<\,f^{+}(u_m,v_3)<\,
\ldots\ldots\ldots<\,f^{+}(u_m,v_n)<
\\
&&f^{+}(u_{m+1},v_2)<\ldots<f^{+}(u_{m+1},v_{n-2t}).
\end{eqnarray*}
All the above sums are even because each of them contains exactly two
odd labels. \hfill\square\bigskip
\begin{claim}\label{clai:lattice2}
The remaining $2m+2+2t$ sums $f^{+}(u_1,v_1)$,
$f^{+}(u_1,v_{n+1})$, $f^{+}(u_2,v_1)$, $f^{+}(u_2,v_{n+1})$,\ldots,\\
$f^{+}(u_{m+1},v_1)$, $f^{+}(u_{m+1},v_{n+1})$, and
$f^{+}(u_{m+1},v_{n+1-2t})$, $f^{+}(u_{m+1},v_{n+2-2t})$,\ldots,
$f^{+}(u_{m+1},v_n)$ are pairwise distinct. In addition, they are
all odd numbers.
\end{claim}
\noindent {\bf Proof:}\, Let us first consider the $2m+2$ sums
$f^{+}(u_1,v_1)$, $f^{+}(u_1,v_{n+1})$, $f^{+}(u_2,v_1)$,
$f^{+}(u_2,v_{n+1})$,\ldots, $f^{+}(u_{m+1},v_1)$,
$f^{+}(u_{m+1},v_{n+1})$. There are two natural cases:
\\[2mm]
\emph{Case 1.} $m$ is odd. In this case $u_2u_4\in
E(P_1[m+1])$ is labeled with $U$; from the way we do the labeling, we
have $f^{+}_1(u_1,v_1)\leq f^{+}_1(u_1,v_{n+1})\leq
f^{+}_1(u_2,v_1)\leq f^{+}_1(u_2,v_{n+1})\leq\ldots\leq
f^{+}_1(u_{m+1},v_1)\leq f^{+}_1(u_{m+1},v_{n+1})$ and
$f^{+}_2(u_1,v_1)<f^{+}_2(u_1,v_{n+1})<f^{+}_2(u_2,v_1)<
f^{+}_2(u_2,v_{n+1})<\ldots<f^{+}_2(u_{m+1},v_1)<f^{+}_2(u_{m+1},v_{n+1}).$
Therefore, $
f^{+}(u_1,v_1)<f^{+}(u_1,v_{n+1})<f^{+}(u_2,v_1)<f^{+}(u_2,v_{n+1})<\ldots<f^{+}(u_{m+1},v_1)<f^{+}(u_{m+1},v_{n+1}).
$
\\[2mm]
\emph{Case 2.} $m$ is even. In this case $u_2u_j\in
E(P_1[m+1])$ is labeled with $R$ (where $j=3$ if $m=2$, $j=4$ if
$m>2$). The ordering of the $2m+2$ sums $f^{+}(u_1,v_1)$,
$f^{+}(u_1,v_{n+1})$, $f^{+}(u_2,v_1)$, $f^{+}(u_2,v_{n+1})$,\ldots,
$f^{+}(u_{m+1},v_1)$, $f^{+}(u_{m+1},v_{n+1})$ is the same as in
Case 1, except possibly for the relative order of the sums at the two vertices $(u_2,v_1)$ and $(u_2,v_{n+1})$.
Specifically, we have $f^{+}_1(u_1,v_1)\leq f^{+}_1(u_1,v_{n+1})\leq
f^{+}_1(u_2,v_1), f^{+}_1(u_2,v_{n+1})\leq f^{+}_1(u_3,v_1)\leq
\ldots\leq f^{+}_1(u_{m+1},v_{n+1})$ and
$f^{+}_2(u_1,v_1)<f^{+}_2(u_1,v_{n+1})<f^{+}_2(u_2,v_1)<f^{+}_2(u_2,v_{n+1})
<\ldots<f^{+}_2(u_{m+1},v_1)<f^{+}_2(u_{m+1},v_{n+1}).$ Therefore,
$$
f^{+}(u_1,v_1)<f^{+}(u_1,v_{n+1})<f^{+}(u_2,v_1),f^{+}(u_2,v_{n+1})<\ldots<f^{+}(u_{m+1},v_1)<f^{+}(u_{m+1},v_{n+1}).
$$
Since
$f^{+}(u_2,v_1)=f^{+}_1(u_2,v_1)+f^{+}_2(u_2,v_1)=(4n+4)+(2n+1)=6n+5$,
and
$f^{+}(u_2,v_{n+1})=f^{+}_1(u_2,v_{n+1})+f^{+}_2(u_2,v_{n+1})=(2n+4)+(4n-1)=6n+3$,
it follows that
$f^{+}(u_1,v_1)<f^{+}(u_1,v_{n+1})<f^{+}(u_2,v_{n+1})<f^{+}(u_2,v_1)<\ldots<f^{+}(u_{m+1},v_1)<f^{+}(u_{m+1},v_{n+1}).
$\\
Thus, in any of the above two cases, the $2m+2$ sums
$f^{+}(u_1,v_1)$, $f^{+}(u_1,v_{n+1})$, $f^{+}(u_2,v_1)$,
$f^{+}(u_2,v_{n+1})$,\ldots, $f^{+}(u_{m+1},v_1)$,
$f^{+}(u_{m+1},v_{n+1})$ are pairwise distinct, and
$f^{+}(u_{m+1},v_{n+1})$ is the largest among them. For the other
$2t$ sums $f^{+}(u_{m+1},v_{n+1-2t})$,
$f^{+}(u_{m+1},v_{n+2-2t})$,\ldots, $f^{+}(u_{m+1},v_n)$, they are in strict increasing order
$f^{+}(u_{m+1},v_{n+1-2t})<f^{+}(u_{m+1},v_{n+2-2t})<\ldots<
f^{+}(u_{m+1},v_n)$, since
$f^{+}_1(u_{m+1},v_{n+1-2t})=f^{+}_1(u_{m+1},v_{n+2-2t})=\ldots=
f^{+}_1(u_{m+1},v_n)$ and
$f^{+}_2(u_{m+1},v_{n+1-2t})<f^{+}_2(u_{m+1},v_{n+2-2t})<\ldots<
f^{+}_2(u_{m+1},v_n)$.
At this point, the only remaining issue is to
notice that $f^{+}(u_{m+1},v_{n+1-2t})>f^{+}(u_{m+1},v_{n+1})$,
since $f^{+}_1(u_{m+1},v_{n+1-2t})=f^{+}_1(u_{m+1},v_{n+1})$ and
$f^{+}_2(u_{m+1},v_{n+1-2t})=a_{s-t}+b_1\geq
(2mn+m+n-1-2t)+(2mn+2m+2)\geq
2mn+m+n-1-(n-m)+2mn+2m+2=4mn+4m+1>2mn+m+n\geq
a_s=f^{+}_2(u_{m+1},v_{n+1})$. Hence, the $2m+2t+2$ sums are
pairwise distinct. They are all odd numbers since each of them contains
exactly one odd label. \hfill\square\bigskip
Combining Claim \ref{clai:lattice1} and Claim \ref{clai:lattice2},
we have proved that the above labeling of $P_1[m+1]\times P_2[n+1]$
is antimagic, for $n\geq m\geq 2$. Please see Figure \ref{fig:l48}
as an example of antimagic labeling of $P_1[m+1]\times P_2[n+1]$,
for $m=3, n=7$.
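For readers who wish to experiment with the construction, the following Python sketch (ours; it encodes our reading of Phases 1 and 2 above and is not part of the proof) generates the labeling of $P_1[m+1]\times P_2[n+1]$ for $n\geq m\geq 2$ and checks by brute force, on small cases, that the labels form a bijection onto $\{1,2,\ldots,2mn+m+n\}$ and that all vertex sums are pairwise distinct.
\begin{verbatim}
# Two-phase labeling of P_1[m+1] x P_2[n+1] (n >= m >= 2), following
# Phase 1 and Phase 2 above.  Vertices are pairs (i, j) with
# 1 <= i <= m+1 (the P_1 coordinate) and 1 <= j <= n+1 (the P_2 coordinate).

def p1_edges(m):
    # edge set of P_1[m+1]: u_i u_{i+2} (i = 1..m-1) and u_m u_{m+1}
    return [(i, i + 2) for i in range(1, m)] + [(m, m + 1)]

def two_coloring(edges):
    # unique proper edge 2-coloring: the first listed edge gets 'U',
    # and edges sharing a vertex get different letters
    color = {edges[0]: 'U'}
    while len(color) < len(edges):
        for e in edges:
            if e in color:
                continue
            for f in list(color):
                if set(e) & set(f):
                    color[e] = 'R' if color[f] == 'U' else 'U'
                    break
    return color

def lattice_labeling(m, n):
    assert n >= m >= 2
    label, color = {}, two_coloring(p1_edges(m))
    # Phase 1: copies of P_1[m+1] get the even labels 2, 4, ..., 2mn+2m;
    # the k-th P_1-edge gets a block of n+1 consecutive even numbers,
    # in usual order if it is 'U' and in reversed order if it is 'R'
    for k, (a, b) in enumerate(p1_edges(m)):
        block = [2 * (k * (n + 1) + r) for r in range(1, n + 2)]
        if color[(a, b)] == 'R':
            block.reverse()
        for j in range(1, n + 2):
            label[((a, j), (b, j))] = block[j - 1]
    # Phase 2: merge the odd numbers A with the unused large even numbers B
    N = 2 * m * n + m + n
    A = [x for x in range(1, N + 1) if x % 2 == 1]
    B = [x for x in range(2 * m * n + 2 * m + 1, N + 1) if x % 2 == 0]
    s, t = len(A), len(B)
    C = A[:s - t]
    for b_even, a_odd in zip(B, A[s - t:]):
        C += [b_even, a_odd]
    # the i-th P_2[n+1] component receives c_{(i-1)n+1}, ..., c_{in} in order
    for i in range(1, m + 2):
        for j in range(1, n + 1):
            label[((i, j), (i, j + 1))] = C[(i - 1) * n + (j - 1)]
    return label

def is_antimagic(label):
    sums = {}
    for (u, v), l in label.items():
        sums[u] = sums.get(u, 0) + l
        sums[v] = sums.get(v, 0) + l
    return len(set(sums.values())) == len(sums)

for m in range(2, 7):
    for n in range(m, 11):
        lab = lattice_labeling(m, n)
        assert sorted(lab.values()) == list(range(1, 2 * m * n + m + n + 1))
        assert is_antimagic(lab)
\end{verbatim}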
\subsection{$P_1[2]\times P_2[n+1]$ is Antimagic, for $n\geq 1$}
Assume that $P_2[n+1]$ has edge set
$\{v_iv_{i+2}|i=1,\ldots,n-1\}\cup\{v_nv_{n+1}\}$. For $n=1$,
$P_1[2]\times P_2[2]$ is isomorphic to $C[4]$, hence it is antimagic by Lemma \ref{cycle}. For $n>1$, assign the labels
$1,3,\ldots,2n-1$ to the edges $((u_1,v_1),(u_1,v_3))$,
$((u_1,v_2),(u_1,v_4))$, \ldots\ldots,
$((u_1,v_{n-1}),(u_1,v_{n+1}))$, $((u_1,v_n),(u_1,v_{n+1}))$; assign the labels
$2,4,\ldots,2n$ to the edges $((u_2,v_1),(u_2,v_3))$,
$((u_2,v_2),(u_2,v_4))$, \ldots\ldots,
$((u_2,v_{n-1}),(u_2,v_{n+1}))$, $((u_2,v_n),(u_2,v_{n+1}))$; and
assign the labels $2n+1,2n+2,\ldots,3n+1$ to the edges $((u_1,v_1),(u_2,v_1))$,
$((u_1,v_2),(u_2,v_2))$, \ldots\ldots,
$((u_1,v_{n+1}),(u_2,v_{n+1}))$ (see Figure \ref{fig:l2n}).\\
We will show that the above labeling (for $n>1$) is antimagic.
The vertex sums restricted to the $P_1[2]$ component satisfy
$
f^{+}_1(u_1,v_1)=f^{+}_1(u_2,v_1)<f^{+}_1(u_1,v_2)=f^{+}_1(u_2,v_2)<\ldots
<f^{+}_1(u_1,v_{n+1})=f^{+}_1(u_2,v_{n+1})$ (`$=$' and `$<$'
alternate), while the vertex sums restricted to the $P_2[n+1]$ component
are
$$
f^{+}_2(u_1, v_i) = \left\{
\begin{array}{ll}
1 & i=1;
\\
3 & i=2;
\\
4i-6 & i=3,\ldots,n;
\\
4n-4 & i=n+1;
\end{array}
\right. ~~~~~~f^{+}_2(u_2, v_i) = \left\{
\begin{array}{ll}
2 & i=1;
\\
4 & i=2;
\\
4i-4 & i=3,\ldots,n;
\\
4n-2 & i=n+1.
\end{array}
\right.
$$
It follows that $
f^{+}_2(u_1,v_1)<f^{+}_2(u_2,v_1)<f^{+}_2(u_1,v_2)<$\ldots$
<f^{+}_2(u_2,v_n)=f^{+}_2(u_1,v_{n+1})<f^{+}_2(u_2,v_{n+1})$ (there
is one equality). Therefore,
$f^{+}(u_1,v_1)<f^{+}(u_2,v_1)<f^{+}(u_1,v_2)<f^{+}(u_2,v_2)<\ldots
<f^{+}(u_1,v_{n+1})<f^{+}(u_2,v_{n+1})$, implying the antimagicness of the above labeling.\\
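This labeling is also straightforward to generate mechanically. The following short Python sketch (ours, for illustration only) produces it for $n>1$ and re-checks the distinctness of the vertex sums on small cases.
\begin{verbatim}
# Labeling of P_1[2] x P_2[n+1], n > 1, as described above.
# Vertices are (u, j) with u in {1, 2} and 1 <= j <= n+1.

def p2_edges(n):
    # edge set of P_2[n+1]: v_i v_{i+2} (i = 1..n-1) and v_n v_{n+1}
    return [(i, i + 2) for i in range(1, n)] + [(n, n + 1)]

def ladder_labeling(n):
    label = {}
    for k, (a, b) in enumerate(p2_edges(n)):
        label[((1, a), (1, b))] = 2 * k + 1   # odd labels 1, 3, ..., 2n-1
        label[((2, a), (2, b))] = 2 * k + 2   # even labels 2, 4, ..., 2n
    for j in range(1, n + 2):
        label[((1, j), (2, j))] = 2 * n + j   # rungs get 2n+1, ..., 3n+1
    return label

def distinct_vertex_sums(label):
    sums = {}
    for (u, v), l in label.items():
        sums[u] = sums.get(u, 0) + l
        sums[v] = sums.get(v, 0) + l
    return len(set(sums.values())) == len(sums)

for n in range(2, 50):
    assert distinct_vertex_sums(ladder_labeling(n))
\end{verbatim}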
\begin{figure}[t]
\renewcommand{\captionlabelfont}{\bf}
\renewcommand{\captionlabeldelim}{.~}
\centering
\includegraphics[width=120mm]{l2n.eps}
\renewcommand{\figurename}{Fig.}
\caption{Antimagic labelings of $P_1[2]\times P_2[2]$ and $P_1[2]\times P_2[n+1]$, for $n=5$} \label{fig:l2n}
\end{figure}
Combining the above two cases, we have proved Theorem \ref{lattice}.
\section{Proof of Theorem \ref{prism}}
Assume that in the product graph $C[m]\times P[n+1]$, $C[m]$ has
edge set $\{u_1u_2\}\cup
\{u_iu_{i+2}|i=1,\ldots,m-2\}\cup\{u_{m-1}u_m\}$, and $P[n+1]$ has
edge set $\{v_iv_{i+2}|i=1,\ldots,n-1\}\cup\{v_nv_{n+1}\}$. To prove Theorem \ref{prism}, first, we
construct a labeling that is antimagic on product graphs $C[m]\times
P[n+1]$ for $m\geq 3, n\geq 2$. Then, we give an antimagic labeling of
graphs $C[m]\times P[2]$ for $m\geq 3$.
\begin{lemma} \label{lemm:prism1}
\noindent $C[m]\times P[n+1]$ is antimagic for $m\geq 3, n\geq 2$.
\end{lemma}
\noindent {\bf Proof:}\, The antimagic labeling we will construct in
this case ($m\geq 3, n\geq 2$) is similar to the
labeling constructed in \cite{Wa} for toroidal grids; the
difference is that we adapt it to the structure of prisms. The labeling consists of two phases.
\\[2mm]
\noindent \emph{Phase 1:} In the same way as in the antimagic labeling
of cycles in Lemma \ref{cycle}, label the edges of the $i$-th
$C[m]$ component (with vertices $(u_1,v_i)$,
$(u_2,v_i)$,\ldots, $(u_m,v_i)$), for $i=1,2,\ldots, n+1$, and
\begin{center}
\noindent $1,\ 2,
\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots, m$, \
\ (labels for the 1st $C[m]$ component)
\\[2mm]
$m+1,\ m+2, \ldots\ldots\ldots\ldots\ldots\ldots\ldots, 2m$, \ \
(labels for the 2nd $C[m]$ component)
\\[2mm]
\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots
\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots
\\[2mm]
$mn+1,\ mn+2, \ldots\ldots\ldots, mn+m$, \ \ (labels for the
$(n+1)$-th $C[m]$ component)
\end{center}
\begin{figure}[t]
\renewcommand{\captionlabelfont}{\bf}
\renewcommand{\captionlabeldelim}{.~}
\centering
\includegraphics[width=100mm]{modi.eps}
\renewcommand{\figurename}{Fig.}
\caption{Modification on the 2nd $C[m]$ component in case $n$ is
even, for $m=5$} \label{fig:modi}
\end{figure}
\noindent \emph{Phase 2:} Similarly, label the edges of $P[n+1]$ with $U$ and $R$ such that
$v_1v_3$ is labeled with $U$, and two edges are labeled with
different letters if they are incident to the same vertex. For each edge
$v_iv_j\in E(P[n+1])$ labeled with $U$, the edges $((u_1,v_i),(u_1,v_j))$,
$((u_2,v_i),(u_2,v_j))$,\ldots\ldots,$((u_m,v_i),(u_m,v_j))$ will be
labeled in usual order; for each edge $v_iv_j\in E(P[n+1])$ labeled with $R$,
the edges $((u_1,v_i),(u_1,v_j))$, $((u_2,v_i),(u_2,v_j))$,\ldots\ldots,$((u_m,v_i),(u_m,v_j))$ will be
labeled in reversed order, and
\begin{center}
$mn+m+1,\ mn+m+2, \ldots,
mn+2m$, \ \ (labels for $((u_i,v_1),(u_i,v_3)), i=1,2,\ldots,m$)
\\[2mm]
$mn+2m+1,mn+2m+2, \ldots,
mn+3m$, \ (labels for $((u_i,v_2),(u_i,v_4)), i=1,2,\ldots,m$)
\\[2mm]
\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots
\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots
\\[2mm]
$2mn+1,2mn+2,
\ldots\ldots\ldots,
2mn+m$, \ \ (labels for $((u_i,v_n),(u_i,v_{n+1})), i=1,2,\ldots,m$)
\end{center}
If $v_2v_j\in E(P[n+1])$ ($j=3$ if $n=2$, $j=4$ if $n>2$) is
labeled with $R$ (i.e., when $n$ is even), we perform a
\emph{modification} process on the 2nd $C[m]$ component (with vertices $(u_1,v_2)$,
$(u_2,v_2)$,\ldots, $(u_m,v_2)$), which goes as follows. For each $u_iu_j\in E(C[m])$, the edge
$((u_i,v_2),(u_j,v_2))$ will be
relabeled with $(3m+1)-l_0(i,j)$, where $l_0(i,j)$ is the original
label assigned to $((u_i,v_2),(u_j,v_2))$ in Phase 1 (i.e., we `reverse' the labeling on the 2nd $C[m]$ component,
whose edges will still be labeled with the
same set of numbers $\{m+1,m+2,\ldots, 2m\}$).
Then, we rename each vertex $(u_i,v_2)$ as $(u_{m+1-i},v_2)$, for
$i=1,2,\ldots,m$ (see Figure \ref{fig:modi}).\\
Let $f^{+}_1(u,v)$ and $f^{+}_2(u,v)$ be the
vertex sum at $(u,v)\in V(C[m]\times P[n+1])$
restricted to $C[m]$ component and $P[n+1]$
component, respectively. Then,
$f^{+}(u,v)=f^{+}_1(u,v)+f^{+}_2(u,v)$ is the vertex sum at $(u,v)$.
It is easy to see that, for the above labeling, independently of the parity of $n$ (i.e.,
no matter whether the modification
process is applied or not), the orderings
$f^{+}_1(u_1,v_2)<f^{+}_1(u_2,v_2)<\ldots\ldots<f^{+}_1(u_m,v_2)$
and
$f^{+}_2(u_1,v_2)<f^{+}_2(u_2,v_2)<\ldots\ldots<f^{+}_2(u_m,v_2)$
will hold.\\
Using similar arguments, it is straightforward to prove that for the above labeling we have
\begin{center}
$f^{+}_1(u_1,v_1)<f^{+}_1(u_2,v_1)<\ldots\ldots\ldots\ldots<f^{+}_1(u_m,v_1)<$
\\[2mm]
$f^{+}_1(u_1,v_2)<f^{+}_1(u_2,v_2)<\ldots\ldots\ldots\ldots<f^{+}_1(u_m,v_2)<$
\\
\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots
\\
$f^{+}_1(u_1,v_{n+1})<f^{+}_1(u_2,v_{n+1})<\ldots\ldots<f^{+}_1(u_m,v_{n+1}),$
\end{center}
and
\begin{center}
$f^{+}_2(u_1,v_1)\leq f^{+}_2(u_2,v_1)\leq
\ldots\ldots\ldots\ldots\leq f^{+}_2(u_m,v_1)\leq $
\\[2mm]
$f^{+}_2(u_1,v_2)\leq f^{+}_2(u_2,v_2)\leq
\ldots\ldots\ldots\ldots\leq f^{+}_2(u_m,v_2)\leq $
\\
\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots
\\
$f^{+}_2(u_1,v_{n+1})\leq f^{+}_2(u_2,v_{n+1})\leq \ldots\ldots\leq
f^{+}_2(u_m,v_{n+1}).$
\end{center}
Therefore,
\begin{center}
$f^{+}(u_1,v_1)<f^{+}(u_2,v_1)<\ldots\ldots\ldots\ldots<f^{+}(u_m,v_1)<$
\\[2mm]
$f^{+}(u_1,v_2)<f^{+}(u_2,v_2)<\ldots\ldots\ldots\ldots<f^{+}(u_m,v_2)<$
\\
\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots
\\
$f^{+}(u_1,v_{n+1})<f^{+}(u_2,v_{n+1})<\ldots\ldots<f^{+}(u_m,v_{n+1}),$
\end{center}
\begin{figure}[t]
\renewcommand{\captionlabelfont}{\bf}
\renewcommand{\captionlabeldelim}{.~}
\centering
\includegraphics[width=100mm]{c5p4.eps}
\renewcommand{\figurename}{Fig.}
\caption{Antimagic labeling of $C[m]\times P[n+1]$, for $m=5$,
$n=3$} \label{fig:c5p4}
\end{figure}
which implies that the above labeling is antimagic. Please see
Figure \ref{fig:c5p4} as an example of antimagic labeling of
$C[m]\times P[n+1]$, for $m=5$, $n=3$.
\hfill\square\bigskip
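The two phases and the modification process can be implemented directly. The following Python sketch (ours; it encodes our reading of the construction in the proof of Lemma~\ref{lemm:prism1} and is not part of the proof) builds the labeling of $C[m]\times P[n+1]$ for $m\geq 3$, $n\geq 2$, applies the modification exactly when the $P[n+1]$-edge leaving $v_2$ is labeled with $R$, and checks on small cases that the labels form a bijection onto $\{1,2,\ldots,2mn+m\}$ and that all vertex sums are pairwise distinct. The renaming of the vertices of the 2nd $C[m]$ component is omitted, since it only re-indexes vertices and does not affect whether the vertex sums are distinct.
\begin{verbatim}
# Labeling of C[m] x P[n+1] (m >= 3, n >= 2) as in the proof above.
# Vertices are pairs (i, j), 1 <= i <= m (cycle) and 1 <= j <= n+1 (path).

def cycle_edges(m):
    # edge ordering of C[m]: u_1 u_2, u_i u_{i+2} (i = 1..m-2), u_{m-1} u_m
    return [(1, 2)] + [(i, i + 2) for i in range(1, m - 1)] + [(m - 1, m)]

def path_edges(n):
    # edge ordering of P[n+1]: v_i v_{i+2} (i = 1..n-1), v_n v_{n+1}
    return [(i, i + 2) for i in range(1, n)] + [(n, n + 1)]

def two_coloring(edges):
    # unique proper edge 2-coloring: first edge gets 'U', adjacent edges differ
    color = {edges[0]: 'U'}
    while len(color) < len(edges):
        for e in edges:
            if e in color:
                continue
            for f in list(color):
                if set(e) & set(f):
                    color[e] = 'R' if color[f] == 'U' else 'U'
                    break
    return color

def prism_labeling(m, n):
    assert m >= 3 and n >= 2
    label = {}
    # Phase 1: the j-th C[m] copy gets labels (j-1)m+1, ..., jm in cycle order
    for j in range(1, n + 2):
        for k, (a, b) in enumerate(cycle_edges(m)):
            label[((a, j), (b, j))] = (j - 1) * m + k + 1
    # Phase 2: the k-th P[n+1] edge gets the block mn+m+(k-1)m+1, ..., mn+m+km,
    # distributed over u_1, ..., u_m in usual ('U') or reversed ('R') order
    color = two_coloring(path_edges(n))
    for k, (a, b) in enumerate(path_edges(n)):
        block = [m * n + m + k * m + r for r in range(1, m + 1)]
        if color[(a, b)] == 'R':
            block.reverse()
        for i in range(1, m + 1):
            label[((i, a), (i, b))] = block[i - 1]
    # Modification: if the P[n+1]-edge leaving v_2 is 'R' (i.e. n is even),
    # replace each label l_0 on the 2nd C[m] copy by (3m+1) - l_0
    up_from_v2 = (2, 3) if n == 2 else (2, 4)
    if color[up_from_v2] == 'R':
        for (a, b) in cycle_edges(m):
            e = ((a, 2), (b, 2))
            label[e] = 3 * m + 1 - label[e]
    return label

def is_antimagic(label):
    sums = {}
    for (u, v), l in label.items():
        sums[u] = sums.get(u, 0) + l
        sums[v] = sums.get(v, 0) + l
    return len(set(sums.values())) == len(sums)

for m in range(3, 9):
    for n in range(2, 9):
        lab = prism_labeling(m, n)
        assert sorted(lab.values()) == list(range(1, 2 * m * n + m + 1))
        assert is_antimagic(lab)
\end{verbatim}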
\begin{lemma} \label{lemm:prism2}
\noindent $C[m]\times P[2]$ is antimagic for $m\geq 3$.
\end{lemma}
\noindent {\bf Proof:}\, Assume that $C[m]$ has edge set
$\{u_1u_2\}\cup \{u_iu_{i+2}|i=1,\ldots,m-2\}\cup\{u_{m-1}u_m\}$.
Assign the labels $1,3,\ldots,2m-1$ to the edges $((u_1,v_1),(u_2,v_1))$,
$((u_1,v_1),(u_3,v_1))$,\ldots\ldots, $((u_{m-2},v_1),(u_m,v_1))$,
$((u_{m-1},v_1),(u_m,v_1))$; assign the labels $2,4,\ldots,2m$ to the edges
$((u_1,v_2),(u_2,v_2))$, $((u_1,v_2),(u_3,v_2))$,\ldots\ldots,
$((u_{m-2},v_2),(u_m,v_2))$, $((u_{m-1},v_2),(u_m,v_2))$; and assign the labels
$2m+1,2m+2,\ldots,3m$ to the edges \\
$((u_1,v_1),(u_1,v_2))$, $((u_2,v_1),(u_2,v_2))$, \ldots\ldots,
$((u_m,v_1),(u_m,v_2))$ (see Figure \ref{fig:c5p2}).\\
We will show that the above labeling ($m\geq 3$) is
antimagic. The vertex sums restricted to the $C[m]$ component are
$$
f^{+}_1(u_i, v_1) = \left\{
\begin{array}{ll}
4 & i=1;
\\
4i-2 & i=2,\ldots,m-1;
\\
4m-4 & i=m;
\end{array}
\right. ~~~~~f^{+}_1(u_i, v_2) = \left\{
\begin{array}{ll}
6 & i=1;
\\
4i & i=2,\ldots,m-1;
\\
4m-2 & i=m.
\end{array}
\right.
$$
It follows that $
f^{+}_1(u_1,v_1)<f^{+}_1(u_1,v_2)=f^{+}_1(u_2,v_1)<\ldots
<f^{+}_1(u_{m-1},v_2)=f^{+}_1(u_m,v_1)<f^{+}_1(u_m,v_2)$ (there are
two equalities). In addition, $f^{+}_2(u_1,v_1)=f^{+}_2(u_1,v_2)<f^{+}_2(u_2,v_1)=f^{+}_2(u_2,v_2)<\ldots
<f^{+}_2(u_m,v_1)=f^{+}_2(u_m,v_2)$ (`$=$' and `$<$' alternate).
Therefore,
$f^{+}(u_1,v_1)<f^{+}(u_1,v_2)<f^{+}(u_2,v_1)<f^{+}(u_2,v_2)<\ldots
<f^{+}(u_m,v_1)<f^{+}(u_m,v_2)$, implying the antimagicness of the above labeling.
\hfill\square\bigskip
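As with the previous constructions, this labeling is easy to generate; the Python sketch below (ours, for illustration only) builds it and re-checks the distinctness of the vertex sums for small $m$.
\begin{verbatim}
# Labeling of C[m] x P[2], m >= 3, as described in the proof above.
# Vertices are (i, j) with 1 <= i <= m and j in {1, 2}.

def cycle_edges(m):
    # edge ordering of C[m]: u_1 u_2, u_i u_{i+2} (i = 1..m-2), u_{m-1} u_m
    return [(1, 2)] + [(i, i + 2) for i in range(1, m - 1)] + [(m - 1, m)]

def prism_p2_labeling(m):
    label = {}
    for k, (a, b) in enumerate(cycle_edges(m)):
        label[((a, 1), (b, 1))] = 2 * k + 1   # odd labels on the v_1 cycle
        label[((a, 2), (b, 2))] = 2 * k + 2   # even labels on the v_2 cycle
    for i in range(1, m + 1):
        label[((i, 1), (i, 2))] = 2 * m + i   # rungs get 2m+1, ..., 3m
    return label

def distinct_vertex_sums(label):
    sums = {}
    for (u, v), l in label.items():
        sums[u] = sums.get(u, 0) + l
        sums[v] = sums.get(v, 0) + l
    return len(set(sums.values())) == len(sums)

for m in range(3, 60):
    assert distinct_vertex_sums(prism_p2_labeling(m))
\end{verbatim}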
Combining Lemma \ref{lemm:prism1} and Lemma \ref{lemm:prism2}, we
have proved Theorem \ref{prism}.
\begin{figure}[t]
\renewcommand{\captionlabelfont}{\bf}
\renewcommand{\captionlabeldelim}{.~}
\centering
\includegraphics[width=60mm]{c5p2.eps}
\renewcommand{\figurename}{Fig.}
\caption{Antimagic labeling of $C[m]\times P[2]$, for $m=5$}
\label{fig:c5p2}
\end{figure}
| {
"timestamp": "2006-03-04T02:26:26",
"yymm": "0603",
"arxiv_id": "math/0603106",
"language": "en",
"url": "https://arxiv.org/abs/math/0603106",
"abstract": "An \\emph{antimagic labeling} of a finite undirected simple graph with $m$ edges and $n$ vertices is a bijection from the set of edges to the integers $1,...,m$ such that all $n$ vertex sums are pairwise distinct, where a vertex sum is the sum of labels of all edges incident with the same vertex. A graph is called \\emph{antimagic} if it has an antimagic labeling. In 1990, Hartsfield and Ringel conjectured that every connected graph, but $K_2$, is antimagic. In 2004, N. Alon et al showed that this conjecture is true for $n$-vertex graphs with minimum degree $\\Omega(\\log n)$. They also proved that complete partite graphs (other than $K_2$) and $n$-vertex graphs with maximum degree at least $n-2$ are antimagic. Recently, Wang showed that the toroidal grids (the Cartesian products of two or more cycles) are antimagic. Two open problems left in Wang's paper are about the antimagicness of lattice grid graphs and prism graphs, which are the Cartesian products of two paths, and of a cycle and a path, respectively. In this article, we prove that these two classes of graphs are antimagic, by constructing such antimagic labelings.",
"subjects": "Combinatorics (math.CO)",
"title": "Lattice Grids and Prisms are Antimagic",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9873750503469331,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7095221685932364
} |
https://arxiv.org/abs/2202.09983 | Chaos for foliated spaces and pseudogroups | We generalize "sensitivity to initial conditions" to foliated spaces and pseudogroups, offering a definition of Devaney chaos in this setting. In contrast to the case of group actions, where sensitivity follows from the other two conditions of Devaney chaos, we show that this is true only for compact foliated spaces, exhibiting a counterexample in the non-compact case. Finally, we obtain an analogue of the Auslander-Yorke dichotomy for compact foliated spaces and compactly generated pseudogroups. | \section{Introduction}
There are several definitions of chaos for dynamical systems (Li-Yorke chaos, positive entropy, ...), but in this article we will consider only Devaney's, first introduced in~\cite{Devaney}.
\begin{definition}[Devaney chaos] \label{d:dev}
A continuous map $f\colon X\to X$ on a metric space $(X,d)$ is \emph{chaotic} if,
\begin{enumerate}[(i)]
\item \label{i:devtt}
for all non-empty open $U,V\subset X$, there is $n\geq 0$ such that
\[
f^n(U)\cap V\neq \emptyset
\]
($f$ is \emph{topologically transitive}),
\item \label{i:devper}the set of periodic points is dense in $X$ ($f$ has \emph{density of periodic points}), and
\item \label{i:devsens} there is $c>0$ such that, for every $x\in X$ and $r>0$, there are $y\in B(x,r)$ and $n\geq 0$ satisfying
\[
d(f^n(x),f^n(y))\geq c
\] ($f$ is \emph{sensitive to initial conditions}).
\end{enumerate}
\end{definition}
This definition can be readily adapted for group actions $G\curvearrowright X$ by substituting $g\in G$ in place of $f^n$ ($n\in \mathbb{N}$) above.
Topological transitivity conveys the indecomposability of the dynamical system, whereas~(\ref{i:devper}), according to Devaney himself, provides ``an element of regularity''~\cite[p.~50]{Devaney}. Sensitivity to initial conditions, for its part, expresses what is commonly known as the ``butterfly effect''. This rough sketch may lead to the impression that~(\ref{i:devsens}) alone imbues this definition with its chaotic nature; surprisingly, it was proved later that this condition is, in fact, redundant.
\begin{theorem}[{\cite{BBCDS}}]\label{t:bbcds}
If a continuous map $f\colon X\to X$ on a metric space $(X,d)$ satisfies~(\ref{i:devtt}) and~(\ref{i:devper}), then it also satisfies~(\ref{i:devsens}).
\end{theorem}
This result was later generalized to topological group and semigroup actions~\cite{Kontorovich,SKBS}. The reader should bear in mind that these results hold even when the phase space is not compact.
If the local behaviour of $f$ around a point $x$ is not sensitive to initial conditions, then there is an assignment $\epsilon\mapsto\delta(\epsilon)$ such that
\[
d(x,y)<\delta(\epsilon)\quad\Longrightarrow\quad d(f^nx,f^ny)<\epsilon \qquad\text{for every}\quad y\in X,\ n\in\mathbb{N},
\]
and we say that $x$ is a \emph{point of equicontinuity}. If the set of points of equicontinuity is dense in $X$, we call the dynamical system \emph{almost equicontinuous}; if every point is of equicontinuity with the same modulus $\epsilon\mapsto \delta(\epsilon)$, then the system is \emph{equicontinuous}. This rough opposition between chaos and equicontinuity is rigorously formulated
by the Auslander-Yorke dichotomy.
\begin{theorem}[{Auslander-Yorke dichotomy~\cite[Cor.~2]{AuslanderYorke}}]\label{t:auslanderyorkemin}
Let $X$ be a compact space and let $f\colon X\to X$ be a continuous map such that $(X,f)$ is minimal. Then $f$ is either equicontinuous or sensitive to initial conditions.
\end{theorem}
After this brief review, we can state the aim of the present paper: to study topological chaos for foliations and their generalization, foliated spaces; as we will see, this requires considering pseudogroups too. Recall that a foliated space is a topological generalization of a foliation where the choice of local transversal models is not restricted to manifolds: they are only required to be Polish spaces (see Section~\ref{ss:foliated}). The Smale-Williams attractor provides an example of a foliated space that is not a foliation---it is locally homeomorphic to the product of the real line and the Cantor set.
Thus, our work fits into the broader field of topological dynamics for foliated spaces, which has received much attention of late. The most studied foliation dynamics are the equicontinuous, featuring the celebrated tools of Molino theory for \emph{Riemannian foliations}~\cite{Molino}. Equicontinuity was generalized to foliated spaces and pseudogroups in~\cite{Sacksteder, Molino, Kellum, Tarquini, AlvarezCandel} with varying degrees of generality, the methods of~\cite{AlvarezCandel} in particular being a main source of inspiration for this paper. Molino theory itself has also been generalized in~\cite{AlvarezMoreira, DyerHurderLukina2016,DyerHurderLukina}, giving rise to the study of \emph{wild} solenoids and Cantor actions, which have a complicated interplay between local and global behavior~\cite{HurderLukina,HurderLukina2021,Lukina, AlvarezBarralLukinaNozawa}. There has been some recent work on complex dynamics, concerning Fatou-Julia decompositions for holomorphic foliations~\cite{GhysGomezSaludes,
Asuke2013}.
In order to analyze foliated spaces from a dynamical point of view, we regard them as generalized dynamical systems where the leaves play the role of the orbits; as the title of~\cite{Churchill} reads, they are dynamical systems ``in the absence of time.'' We may identify the paper on topological entropy for foliations by Ghys, Langevin, and Walczak~\cite{GLW} as the first study on chaotic foliations, even though the word ``chaos'' is never mentioned. In fact, it looks like the term ``chaotic foliation'' has only appeared twice in the literature; its debut was in the context of general relativity, where Churchill was trying to provide a definition of chaos invariant by relativistic reparametrizations of time:
\begin{definition}[{Churchill chaos~\cite{Churchill}}]
A foliation is chaotic if
\begin{enumerate}[(i)]
\item \label{i:churchilltt}there is a dense leaf,
\item \label{i:churchilldpp}the set of compact leaves is dense, and
\item \label{i:churchillnontrivial}there are at least two different leaves.
\end{enumerate}
\end{definition}
Items~(\ref{i:churchilltt}) and~(\ref{i:churchilldpp}) correspond to~\ref{d:dev}(\ref{i:devtt})--(\ref{i:devper}), whereas~(\ref{i:churchillnontrivial}) avoids the trivial scenario where the foliation consists of a single leaf.
Churchill did not include a foliation-theoretic definition of sensitivity: only foliations by curves arising from a flow were considered.
More recently, Bazaikin, Galaev, and Zhukova have provided the following definition of chaos for foliations:
\begin{definition}[Bazaikin-Galaev-Zhukova chaos~\cite{BGZ}]
A foliation is chaotic if
\begin{enumerate}[(i)]
\item there is a dense leaf and
\item the set of closed leaves is dense.
\end{enumerate}
\end{definition}
They have used this definition to study chaos for Cartan foliations, relating it to conditions in their holonomy pseudogroups and global holonomy groups. Of course, their definition coincides with Churchill's when the ambient manifold is compact. Again, it does not take into account sensitivity to initial conditions.
Regarding examples of chaotic foliated spaces (according to the definitions above), besides those appearing in~\cite{Churchill,BGZ}, the author was involved in the recent study of a hyperbolic version of the cut-and-project method of tiling theory~\cite{ABHNP}. This yields Delone subsets of $\mathbb{R}$ whose continuous hulls, which are naturally foliated spaces, are chaotic with respect to the natural action by translations.
This discussion motivates the first contribution of the present paper: we introduce a suitable definition of sensitivity to initial conditions for foliated spaces. We phrase this definition in terms of \emph{holonomy pseudogroups}, which have long been used as dynamical models for foliations. A pseudogroup in a topological space $X$ is a collection of homeomorphisms between open subsets of $X$ containing the identity and closed under composition, inversion, restriction, and combination of partial maps (see Section~\ref{s:pseudogroup}). H.~Nozawa and the author have developed a slightly different dynamical model in order to define sensitivity and Devaney chaos for closed saturated subsets of the Gromov space of pointed colored graphs~\cite{BarralNozawa}. These subsets resemble singular foliations by graphs and do not admit a holonomy pseudogroup in the usual sense.
After some preliminary results in Section~\ref{s:preliminaries}, we discuss our definition of sensitivity for pseudogroups in Section~\ref{ss:sensitivityandchaos}: showing first why a naive approach fails, we follow the ideas present in~\cite{AlvarezCandel} in order to arrive at Definition~\ref{d:sensitivity}.
We also provide definitions for almost equicontinuity and density of periodic orbits, making use of the latter to define Devaney chaos as follows:
\begin{definition}[Devaney chaos for pseudogroups]
A pseudogroup $\mathcal G\curvearrowright X$ is \emph{chaotic} if it is topologically transitive, has density of periodic orbits, and is sensitive to initial conditions.
\end{definition}
The next results test whether this new definition of sensitivity constitutes a satisfactory generalization of the original one. We start by examining pseudogroups generated by group actions:
\begin{theorem}\label{t:sicactionpseudogroup}
If $G$ is a finitely generated group acting on a compact space $X$, then the action is sensitive to initial conditions if and only if the pseudogroup generated by the action is.
\end{theorem}
We also show in Section~\ref{s:nonsensitiveaction} that the conditions on $G$ and $X$ are necessary for the result to hold. So, in general, sensitivity of the pseudogroup induced by a group action is strictly stronger than sensitivity of the group action itself.
\begin{theorem}\label{t:linked}
There are group actions $G\curvearrowright X$ that are sensitive to initial conditions but such that the pseudogroup generated by the action is not, where either
\begin{itemize}
\item $G$ is the free group on two generators and $X=\mathbb{T}^2\times \mathbb{Z}$, where $\mathbb{T}^2$ is the $2$-torus, or
\item $G$ is the free group on countably many generators and $X=\mathbb{T}^2$.
\end{itemize}
\end{theorem}
These actions are constructed using \emph{linked twists}, a family of classical examples of chaotic dynamical systems (see Section~\ref{s:linked} and the references therein). We will later recycle these counterexamples in the proof of Theorem~\ref{t:counterfol}.
We continue in Section~\ref{s:main} with our three main contributions regarding pseudogroup dynamics. Our first result addresses the following issue: if we are to use pseudogroups as dynamical models for foliated spaces, all our new definitions must be invariant by (Haefliger) equivalences. This is because the holonomy pseudogroup of a foliated space is only well-defined up to equivalence (see Sections~\ref{ss:equivalences} and~\ref{ss:foliated}).
\begin{theorem}\label{t:invariant}
Sensitivity to initial conditions, density of periodic orbits, Devaney chaos, and almost equicontinuity are invariant by equivalences of pseudogroups acting on locally compact Polish spaces.
\end{theorem}
The corresponding result for equicontinuity was proved in~\cite{AlvarezCandel}. The main difficulty in Theorem~\ref{t:invariant} is proving the invariance of sensitivity. The reason why this is not trivial is that sensitivity and almost equicontinuity involve a metric, which is a global object; equivalences, however, are made up of local homeomorphisms, so we have to put in some work to construct a global metric using the information carried over by the local maps.
Our next objective will be to study whether Theorems~\ref{t:bbcds} and~\ref{t:auslanderyorkemin} extend to the pseudogroup setting.
We manage to do so for compactly generated pseudogroups (see Section~\ref{ss:compactlygenerated} for the definition of compact generation).
\begin{theorem}[Auslander-Yorke dichotomy for pseudogroups]\label{t:auslanderyorkepseudo}
Let $\mathcal{G}$ be a compactly generated and topologically transitive pseudogroup. Then $\mathcal{G}$ is either sensitive to initial conditions or almost equicontinuous. Moreover, if $\mathcal{G}$ is minimal, then it is either sensitive to initial conditions or equicontinuous.
\end{theorem}
\begin{theorem}\label{t:ttdposicpseudo}
If $\mathcal{G}$ is a compactly generated and topologically transitive pseudogroup which has density of periodic orbits, then it is sensitive to initial conditions.
\end{theorem}
Even though Theorem~\ref{t:bbcds} holds for actions on non-compact spaces, we exhibit in Section~\ref{s:cantor} a non-compactly generated, countably generated pseudogroup that is topologically transitive and has density of periodic orbits, but it is not sensitive to initial conditions. This shows that compact generation is a necessary condition in Theorem~\ref{t:ttdposicpseudo}.
At this point, we turn our attention to studying chaos for foliated spaces. By virtue of Theorem~\ref{t:invariant}, we can define almost equicontinuity and sensitivity using the holonomy pseudogroup.
\begin{definition}
A foliated space is sensitive to initial conditions or almost equicontinuous if its holonomy pseudogroup is.
\end{definition}
Regarding density of periodic orbits and Devaney chaos, we encounter an additional subtlety: It is easy to check that density of periodic orbits for the holonomy pseudogroup implies density of closed leaves, but one might also consider the stronger condition of density of compact leaves. We choose the latter option because the counterexample we exhibit in Theorem~\ref{t:counterfol} satisfies this stronger condition. On the other hand, we run into the problem that density of compact leaves cannot be formulated as an equivalence-invariant property of the holonomy pseudogroup (see Example~\ref{e:rz}).
\begin{definition}\label{d:chaosfs}
A foliated space is \emph{chaotic} if it is topologically transitive, it has a dense set of compact leaves, and it is sensitive to initial conditions.
\end{definition}
Note that, by the previous discussion, chaoticity of the foliated space is strictly stronger than chaoticity of the holonomy pseudogroup. We show in Section~\ref{s:chaoschaos} an explicit example of this behavior.
Our next step is to extend Theorems~\ref{t:bbcds} and~\ref{t:auslanderyorkemin} to compact foliated spaces, where density of compact leaves and of closed leaves coincide. By the previous discussion, these results follow immediately from Theorems~\ref{t:ttdposicpseudo} and~\ref{t:auslanderyorkepseudo}.
\begin{theorem}\label{t:ttdposicfol}
Let $X$ be a compact foliated space. If $X$ is topologically transitive and has density of compact leaves, then it is sensitive to initial conditions.
\end{theorem}
\begin{theorem}
Let $X$ be a compact and topologically transitive foliated space. Then $X$ is either sensitive to initial conditions or almost equicontinuous. Moreover, if $X$ is minimal, then it is either sensitive to initial conditions or equicontinuous.
\end{theorem}
In analogy to the case of pseudogroups, where we need compact generation in Theorem~\ref{t:ttdposicpseudo}, compactness is a necessary condition for Theorem~\ref{t:ttdposicfol}; a simple counterexample with totally disconnected transversals is constructed in Section~\ref{s:ttdposicfol}.
One might wonder whether it is possible to find similar counterexamples among non-compact foliations, perhaps with smooth transversal dynamics.
We conclude the paper in Sections~\ref{s:affinepseudogroup} and~\ref{s:ttdclnotsicaffine} with the following counterexample:
\begin{theorem}\label{t:counterfol}
There is a foliation by surfaces on a smooth $4$-manifold that is topologically transitive and has a dense set of compact leaves, but is not sensitive to initial conditions. This foliation is $C^\infty$ and transversally affine.
\end{theorem}
As a result, we observe that chaos for foliated spaces seems to be quite a strong condition, at least when compared to the case of group actions.
We can offer the following geometrical interpretation of the lack of sensitivity in Theorem~\ref{t:counterfol}. For this non-compact, smooth $4$-manifold $M$, there is a locally finite foliated atlas $(U_i,\phi_i)$---where $\phi_i\colon U_i\to \mathbb{R}^2\times T_i$ and $T_i\subset \mathbb{R}^2$ are the local transversals---satisfying the following condition: every holonomy transformation is an affine map, and there is a leaf $L$ such that every holonomy transformation $h$ between transversals $T_i\to T_j$ induced by a path in $L$ is an isometry with respect to the Euclidean metric on $T_i, T_j\subset \mathbb{R}^2$.
\section{Preliminaries}\label{s:preliminaries}
\subsection{Metric spaces}
In this paper, we consider metric functions $d\colon X\times X\to [0,\infty]$ that may attain an infinite value. A metric on a topological space is said to be \emph{compatible} if the underlying topology agrees with that generated by the open balls. A topological space is Polish if it is separable and it admits a compatible complete metric; that is, one where every Cauchy sequence converges. All topological spaces will be implicitly assumed to be Polish.
\subsection{Partial maps and pseudogroups}\label{s:pseudogroup}
Let $X$ and $Y$ be topological spaces. A \emph{partial map} from $X$ to $Y$ is a map $f\colon A\to Y$ with domain a subset $A\subset X$. Given a partial map $f$, let $\dom f$ and $\im f$ denote the domain and image of $f$, respectively. We say that a partial map $f$ from $X$ to $Y$ is a \emph{partial homeomorphism} if $\dom f\subset X$ and $\im f\subset Y$ are open and $f\colon\dom f\to \im f$ is a homeomorphism; we denote by $\Ph(X,Y)$ the set of partial homeomorphisms from $X$ to $Y$. From now on, we use $f(A)$ as shorthand for $f(A\cap\dom f)$, where $f\in\Ph(X,Y)$ and $A\subset X$.
Given $f\in \Ph(X,Y)$ and $g\in\Ph(Y,Z)$, the \emph{composition} $gf\in \Ph(X,Z)$ is defined by
\[
\dom gf= f^{-1}(\dom g),\qquad (gf)(x)=g(f(x)).
\]
Given $f\in \Ph(X,Y)$ and an open set $U\subset \dom f$, the \emph{restriction} $f_{|U}$ has domain $U$ and image $f(U)$.
Let $\{f_i\mid i\in I\}$ be a family of maps in $\Ph(X,Y)$ and suppose that
\begin{equation}\label{fifj}
(f_{i})|_{\dom f_i\cap \dom f_j}=(f_{j})|_{\dom f_i\cap \dom f_j}\qquad \text{for every}\ i,j\in I,
\end{equation}
then the \emph{combination} $\bigcup_{i\in I}f_i$ is defined by
\[
\dom (\bigcup_{i\in I}f_i) =\bigcup_{i\in I}(\dom f_i),\qquad (\bigcup_{i\in I}f_i )(x)= f_i(x) \quad \text{for} \ x\in \dom f_i.
\]
For $f,g\in\Ph(X,Y)$, we say that $f$ \emph{extends} $g$, or $f$ is an \emph{extension} of $g$, if
\[
\dom g\subset \dom f\qquad \text{and} \qquad f|_{\dom g}=g.\]
For brevity, we use $\Ph(X)$ to denote the set $\Ph(X,X)$.
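As a concrete illustration of these operations, consider the following finite toy model (ours, not used elsewhere in the paper): on a finite discrete space every subset is open, so partial homeomorphisms reduce to partial bijections, which we encode as injective Python dictionaries; composition, restriction, and combination are then literal transcriptions of the formulas above.
\begin{verbatim}
# Partial bijections of a finite (discrete) set, encoded as injective dicts.
# In a discrete space every subset is open, so these model Ph(X).

def compose(g, f):
    # dom(gf) = f^{-1}(dom g) and (gf)(x) = g(f(x))
    return {x: g[f[x]] for x in f if f[x] in g}

def restrict(f, U):
    # f|_U has domain U intersected with dom f
    return {x: f[x] for x in f if x in U}

def combine(maps):
    # defined only when the maps agree on the overlaps of their domains
    out = {}
    for f in maps:
        for x, y in f.items():
            assert out.get(x, y) == y, "maps disagree on an overlap"
            out[x] = y
    return out

# two restrictions of the shift x -> x+1 on {0,...,5}, glued along {2}
f = {0: 1, 1: 2, 2: 3}
g = {2: 3, 3: 4, 4: 5}
assert compose(g, f) == {1: 3, 2: 4}
assert restrict(f, {0, 2}) == {0: 1, 2: 3}
assert combine([f, g]) == {0: 1, 1: 2, 2: 3, 3: 4, 4: 5}
\end{verbatim}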
\begin{definition}[\cite{CY}]\label{d:pseudogroup}
A subset $\mathcal{G}\subset \Ph(X)$ is a \emph{pseudogroup} if the following conditions are satisfied:
\begin{itemize}[--]
\item Group-like axioms:
\begin{enumerate}[(i)]
\item \label{i:identity} $\id_X\in\mathcal{G}$,
\item if $f\in\mathcal{G}$, then $f^{-1}\in \mathcal{G}$ (\emph{closure under inversion}), and
\item if $f,g\in \mathcal{G}$, then $fg\in\mathcal{G}$ (\emph{closure under composition}).
\end{enumerate}
\item Sheaf-like axioms:
\begin{enumerate}[(i)]
\setcounter{enumi}{3}
\item \label{i:restrictions} if $f\in \mathcal{G}$ and $U\subset \dom f$ is open, then $f|_{U}\in \mathcal{G}$ (\emph{closure under restrictions}), and,
\item \label{i:combination} if $\{f_i,\ i\in I\}$ is a family of maps in $\mathcal{G}$ satisfying~\eqref{fifj}, then $\bigcup_{i\in I} f_i\in \mathcal{G}$. (\emph{closure under combinations}).
\end{enumerate}
\end{itemize}
The last axiom can be reformulated as follows:
\begin{enumerate}
\item[(v)$'$] if $f\in \Ph(X)$ is such that every $x\in \dom f$ has some open neighborhood $U_x$ with $f|_{U_x}\in \mathcal{G}$, then $f\in \mathcal{G}$.
\end{enumerate}
\end{definition}
If $\mathcal{G}\subset\Ph(X)$ is a pseudogroup, we say that $\mathcal{G}$ \emph{acts} on $X$ and we denote it by $\mathcal{G}\curvearrowright X$.
If $\{\mathcal{G}_i\}_{i\in I}$ is a collection of pseudogroups acting on $X$, then $\bigcap_{i\in I}\mathcal{G}_i\subset \Ph(X)$ is also a pseudogroup. A subset $S\subset\mathcal{G}$ \emph{generates} $\mathcal{G}$ if $\mathcal{G}$ is the smallest pseudogroup containing $S$; equivalently, $\mathcal{G}$ is the intersection of all the pseudogroups in $\Ph(X)$ that contain $S$.
Let $\mathcal{G}$ be a pseudogroup acting on $X$, and let $U$ be an open subset of $X$, then
the \emph{restriction}
\[
\mathcal{G|}_{U}=\{\,f\in\mathcal{G} \mid \dom f\subset U,\ \im f\subset U \,\}
\]
is a pseudogroup acting on $U$.
One can find in the literature definitions of pseudogroup that omit Axiom~(\ref{i:combination}) (e.g., \cite{HectorHirsch}). The reason is that, by allowing combinations, a pseudogroup might have ``too many maps'' to satisfy reasonable dynamical properties
(see Definition~\ref{d:naive} and Lemma~\ref{l:naivefail}); this motivates the following definition.
\begin{definition}
A \emph{pseudo{\textasteriskcentered}group} is a subset $\mathcal{S}\subset \Ph(X)$ satisfying Axioms~(\ref{i:identity})--(\ref{i:restrictions}) in Definition~\ref{d:pseudogroup}.
\end{definition}
This terminology was introduced by S.~Matsumoto in~\cite{Matsumoto}. For $S\subset\Ph(X)$, let $\langle S\rangle\subset\Ph(X)
$ denote the set consisting of finite compositions and inversions of elements in $S$, and let $S^*\subset\Ph(X)$ denote the set of partial homeomorphisms obtained from $S$ by composition, inversion, and restriction to open subsets; equivalently,
\[
S^*=\{\,s|_U\mid s\in\langle S\rangle,\ U\subset \dom s\ \text{open}\,\}
\]
and $S^*$ is the smallest pseudo{\textasteriskcentered}group containing $S$.
\begin{lemma}\label{l:generate}
Let $\mathcal{G}\curvearrowright X$ be a pseudogroup and let $S\subset \mathcal{G}$. Then $S$ generates $\mathcal{G}$ if and only if, for every $g\in \mathcal{G}$ and $x\in\dom g$, there is an open neighbourhood $U$ of $x$ such that $g|_{U}\in S^*$.
\end{lemma}
\begin{proof}
Let $\mathcal{H}\subset \Ph(X)$ denote the set of maps that result from combining families of maps in $S^*$ using Axiom~\ref{d:pseudogroup}(\ref{i:combination}). In other words, $\mathcal{H}$ is the pseudogroup generated by $S$. Then $S$ generates $\mathcal{G}$ if and only if $\mathcal{H}=\mathcal{G}$; that is, every map in $\mathcal{G}$ can be obtained as the combination of a family of maps in $S^*$, but this follows from the hypothesis and Axiom~\ref{d:pseudogroup}(\ref{i:combination})$'$.
\end{proof}
\begin{corollary}\label{c:restrictionsstar}
Let $S$ be a generating set for $\mathcal{G}\curvearrowright X$, let $d$ be a compatible metric on $X$, let $g\in \mathcal{G}$, and let $K\subset \dom g$ be compact. Then there is $\epsilon>0$ such that, for every $x\in K$, the restriction of $g$ to $B_d(x,\epsilon)$ belongs to $S^*$.
\end{corollary}
\subsection{Equivalences}\label{ss:equivalences}
If we are to use pseudogroups to study foliated spaces, the right notion of isomorphism is that of \emph{equivalence}, sometimes also referred to as \emph{Haefliger} or \emph{\'etal\'e equivalence}.
\begin{definition}\label{d:equiv}
Let $\mathcal{G}\curvearrowright X$ and $\mathcal{H}\curvearrowright Y$ be pseudogroups. An \emph{equivalence} $\Phi\colon (X,\mathcal{G})\to (Y,\mathcal{H})$ is a collection of partial homeomorphisms $\Phi\subset\Ph(X,Y)$ satisfying the following conditions.
\begin{enumerate}[(i)]
\item \label{i:equivcover}$\{\,\dom \phi\mid\phi\in\Phi\,\}$ and $\{\,\im \phi\mid\phi\in\Phi\,\}$ are open coverings of $X$ and $Y$, respectively.
\item \label{i:equivrest}If $\phi\in\Phi$ and $U$ is an open subset of $\dom \phi$, then $\phi|_{U}\in \Phi$.
\item \label{i:equivcomb}Let $\phi\in\Ph(X,Y)$. If there is an open covering $\{U_i\}_{i\in I}$ of $\dom \phi$ such that $\phi|_{U_i}\in\Phi$ for every $i\in I$, then $\phi\in\Phi$.
\item \label{i:equivcomp}If $f\in\mathcal{G}$, $g\in \mathcal{H}$, and $\phi\in\Phi$, then $g\phi f\in \Phi$.
\item \label{i:equivgamma}If $\phi,\psi\in\Phi$, then $\psi^{-1}\phi\in\mathcal{G}$ and $\psi \phi^{-1}\in \mathcal{H}$.
\end{enumerate}
\end{definition}
The following properties follow immediately from the definition.
\begin{lemma}\label{l:equivproperties}
Let $\Phi\colon (X,\mathcal{G})\to (Y,\mathcal{H})$ and $\Psi\colon (Y,\mathcal{H})\to (Z,\mathcal{I})$ be equivalences. Then the \emph{inverse}
\[
\Phi^{-1}:=\{\, \phi^{-1}\mid\phi\in \Phi \,\}\subset \Ph(Y,X)
\]
and the \emph{composition}
\[
\Psi\circ\Phi:=\{\,\psi\circ\phi\mid\phi\in\Phi,\ \psi\in\Psi\,\}\subset\Ph(X,Z)
\]
are equivalences $(Y,\mathcal{H})\to (X,\mathcal{G})$ and $ (X,\mathcal{G})\to(Z,\mathcal{I})$, respectively.
\end{lemma}
\begin{lemma}\label{l:equivid}
Let $\mathcal{G}\curvearrowright X$ be a pseudogroup and let $U\subset X$ be an open set that meets every $\mathcal{G}$-orbit. Then
\[
\Phi:=\{\,g\in\mathcal{G}\mid \dom g\subset U\,\}
\]
is an equivalence $\Phi\colon (U,\mathcal{G|}_U)\to (X,\mathcal{G})$; in particular, $\mathcal{G}$ is an equivalence $(X,\mathcal{G})\to (X,\mathcal{G})$.
\end{lemma}
\begin{proof}
Item~(\ref{i:equivcover}) in Definition~\ref{d:equiv} follows from the assumption that $U$ meets every $\mathcal{G}$-orbit, whereas~(\ref{i:equivrest})--(\ref{i:equivgamma}) hold because $\mathcal{G}$ is a pseudogroup.
\end{proof}
Pseudogroup equivalences are maximal families in the following sense.
\begin{lemma}\label{l:equivmaximal}
Let $\Phi, \Psi$ be equivalences $(X,\mathcal{G})\to (Y,\mathcal{H})$. If $\Phi \subset \Psi$, then $\Phi=\Psi$.
\end{lemma}
\begin{proof}
Let $\psi\in \Psi$. Since $\{\,\im \phi\mid\phi\in\Phi\,\}$ covers $Y$, there is a family $\{\phi_i\}_{i\in I} \subset \Phi$ such that $\{\im \phi_i\}_{i\in I}$ covers $\im \psi$. Moreover, $\phi_i\in \Psi$ by hypothesis, so $\phi_i^{-1}\psi\in \mathcal{G}$ by~\ref{d:equiv}(\ref{i:equivgamma}), and then $\psi|_{\im \phi_i}= \phi_i(\phi_i^{-1}\psi)$ belongs to $\Phi$ by~\ref{d:equiv}(\ref{i:equivcomp}). Finally, $\psi$ is the combination of the family $\{\psi|_{\im \phi_i}\}_{i\in I}$, so $\psi\in \Phi$ by~\ref{d:equiv}(\ref{i:equivcomb}). This shows $\Psi\subset \Phi$ because $\psi$ was chosen arbitrarily.
\end{proof}
\begin{comment}
The following result shows that ``being equivalent'' is an equivalence relation for pseudogroups. The proof follows easily from Definition~\ref{d:equiv}.
\begin{lemma}
Let $\mathcal{F}\curvearrowright X$, $\mathcal{G}\curvearrowright Y$, and $\mathcal{H}\curvearrowright Z$ be pseudo(semi)groups, and let $\Phi\colon (X,\mathcal{F})\to (Y,\mathcal{G})$ and $\Psi\colon (Y,\mathcal{G})\to (Z,\mathcal{H})$ be equivalences. Then $\Phi^{-1}\colon (Y,\mathcal{G})\to (X,\mathcal{F})$ and $\Psi\Phi\colon(X,\mathcal{F})\to(Z,\mathcal{H})$ are also equivalences, where \[\Phi^{-1}:=\{\phi^{-1}\mid \phi\in\Phi\},\qquad\Psi\circ\Phi:=\{\psi\phi\mid \psi\in\Psi,\ \phi\in\Phi\}.\]
\end{lemma}
\end{comment}
\begin{lemma}\label{l:morphismgenerate}
Let $\mathcal{G}\curvearrowright X$ and $\mathcal{H}\curvearrowright Y$ be pseudogroups, and let $\Sigma\subset\Ph(X,Y)$ be a family of maps such that
\begin{enumerate}[(i)]
\item \label{i:equivgenmeetg}$\bigcup_{\phi\in\Sigma} \dom \phi\subset X$ meets every $\mathcal{G}$-orbit;
\item \label{i:equivgenmeeth}$\bigcup_{\phi\in\Sigma} \im \phi\subset Y$ meets every $\mathcal{H}$-orbit; and,
\item \label{i:equivgengamma}if $\phi$, $\psi\in\Sigma$, $g\in \mathcal{G}$, and $h\in\mathcal{H}$, then $\psi^{-1}h\phi\in\mathcal{G}$ and $\psi g\phi^{-1}\in \mathcal{H}$.
\end{enumerate}
Then there is a unique equivalence $\Phi\colon (X,\mathcal{G})\to (Y,\mathcal{H})$ containing $\Sigma$.
\end{lemma}
\begin{proof}
Let $\Phi\subset \Ph(X,Y)$ consist of the combinations of maps of the form $h\sigma g$, where $g\in \mathcal{G}$, $h\in \mathcal{H}$, and $\sigma \in \Sigma$; then $\Phi$ is an equivalence: Axiom~\ref{d:equiv}(\ref{i:equivcover}) follows from~(\ref{i:equivgenmeetg}) and~(\ref{i:equivgenmeeth}), \ref{d:equiv}(\ref{i:equivrest})--(\ref{i:equivcomp}) follow from the definition of $\Phi$, and ~\ref{d:equiv}(\ref{i:equivgamma}) follows from~(\ref{i:equivgengamma}). Finally, Lemma~\ref{l:equivmaximal} yields uniqueness.
\end{proof}
We will refer to the equivalence given by Lemma~\ref{l:morphismgenerate} as the equivalence \emph{generated} by $\Sigma$. We say that two pseudogroups are \emph{equivalent} if there is an equivalence from one to the other; this is a reflexive, symmetric, and transitive relation by Lemmas~\ref{l:equivproperties} and~\ref{l:equivid}. The reader should be mindful that equivalence of pseudogroups is a very lax condition, as the next example shows.
\begin{example}[{\cite[{p.\ 277}]{Haefliger}}]\label{e:rz}
Let $\mathcal{G}$ be the pseudogroup on $\mathbb{R}$ generated by the translation $t\mapsto t+1$, and let $\mathcal{H}$ be the pseudogroup on $\mathbb{S}^1$ generated by the identity map. Consider the natural projection map $\pi\colon\mathbb{R}\to \mathbb{R}/\mathbb{Z}\cong\mathbb{S}^1$. Then
\[
\Phi:=\{\,\pi|_{U}\mid U\subset \mathbb{R}\ \text{open},\ \pi|_{U}\colon U\to \pi(U)\ \text{is a homeomorphism}\,\}
\]
is an equivalence from $(\mathbb R,\mathcal{G})$ to $(\mathbb{S}^1,\mathcal{H})$.
\end{example}
Finally, if $X$ and $Y$ are $C^i$-manifolds for some $i\in \mathbb{N}\cup\{\infty,\omega\}$\footnote{The notation $C^\omega$ means that the manifold or map is analytic.}, we say that a family $A\subset \Ph(X,Y)$ is $C^i$ if all the maps in $A$ are $C^i$ in the usual sense. In this way we obtain a definition of $C^i$-pseudogroups and equivalences, and the notion of being a $C^i$-pseudogroup is then invariant by $C^i$-equivalences. Similarly, if $X$ and $Y$ are affine manifolds, we can define affine pseudogroups and equivalences, and being an affine pseudogroup is invariant by affine equivalences.
\subsection{Compact generation}\label{ss:compactlygenerated}
\begin{definition}
Let $\mathcal{G}\curvearrowright X$ be a pseudogroup.
A \emph{system of compact generation} is a triple $(U,F,\widetilde F)$, where
\begin{enumerate}[(i)]
\item $U$ is a relatively compact open set of $X$ meeting every $\mathcal{G}$-orbit,
\item both $F\subset\mathcal{G}|_U$ and $\widetilde F\subset\mathcal{G}$ are finite and symmetric,
\item $F$ generates $\mathcal{G}|_U$, and
\item there is a bijection $f\mapsto \tilde f$ ($f\in F$, $\tilde f\in\widetilde F$) where $\tilde f$ is an extension of $f$ and $\overline{\dom f}\subset \dom(\tilde f)$ for every $f\in F$.
\end{enumerate}
\end{definition}
We say that $\mathcal{G}$ is \emph{compactly generated} if it admits a system of compact generation; note that compact generation implies that $X$ is locally compact.
We consider the set of extensions $\widetilde{F}$ as part of the generating set.
Note that the symmetry condition of both $F$ and $\widetilde{F}$ is included for simplicity.
From now on, for every map $f\in \langle F\rangle$, $f=f_{n} \cdots f_{1}$ with $f_{i}\in F$, we denote by $\tilde{f}\in\langle\widetilde{F}\rangle$ the composition $\tilde f_{n}\cdots \tilde f_{1}$. For notational convenience, we will assume that $\tilde{f}^{-1}=\widetilde{f^{-1}}$. Properly speaking, the map $\tilde f$ depends not only on $f$, but also on the representation $f=f_{n} \cdots f_{1}$; we will incur this slight abuse of notation anyway because this subtlety will be of no relevance to our proofs.
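For a concrete illustration, consider the pseudogroup $\mathcal{G}$ on $\mathbb{R}$ generated by the translation $f\colon t\mapsto t+1$, as in Example~\ref{e:rz}. One possible system of compact generation is
\[
U=(0,2),\qquad F=\bigl\{\id_U,\ f|_{(0,1)},\ (f|_{(0,1)})^{-1}\bigr\},\qquad \widetilde F=\bigl\{\id_{(-1,3)},\ f|_{(-1/2,3/2)},\ (f|_{(-1/2,3/2)})^{-1}\bigr\}:
\]
the set $U$ is relatively compact and meets every orbit, every element of $\mathcal{G}|_U$ is locally a translation by $-1$, $0$, or $1$, so $F$ generates $\mathcal{G}|_U$, and each map in $F$ admits the required extension in $\widetilde F$.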
\begin{lemma}\label{l:systemcg}
Let $\mathcal{G}$ be a compactly generated pseudogroup, let $x\in X$, and let $S$ be a generating set. Then there is a system of compact generation $(U,F,\widetilde F)$ with $x\in U$ and $\widetilde F^*\subset S^*$.
\end{lemma}
\begin{proof}
We begin by showing that we can choose $(U,F,\widetilde F)$ with $x\in U$. Indeed, let $(V,H,\widetilde H)$ be any system of compact generation and let $g\in\mathcal{G}$ be any map with $x\in\dom g$ and $g(x)\in V$. Let $W,W'$ be relatively compact neighborhoods of $x$ with $\overline{W}\subset W'\subset\overline{W'}\subset \dom g$ and $g(\overline{W'})\subset V$; such neighborhoods exist because $V$ is open and $g$ is continuous. Then $(V\cup W,\ H\cup \{g|_{W},(g|_{W})^{-1}\},\ \widetilde{H}\cup\{g|_{W'},(g|_{W'})^{-1}\})$ is a system of compact generation.
Let us show now that we may take $(U,F,\widetilde F)$ with $\widetilde{F}^*\subset S^*$. Let $(U,H,\widetilde H)$ be a system of compact generation with $x\in U$. Write $H=\{f_i\}_{i\in I}$; then, for every $i\in I$, there is a finite open cover $\{V_{i,j}\}_{j\in J_i}$ of $\overline{\dom f_i}$ and a shrinking $\{W_{i,j}\}_{j\in J_i}$ such that $\tilde f_i|_{V_{i,j}}\in S^*$ for every $j\in J_i$ by Corollary~\ref{c:restrictionsstar}. Then
\[
(U,F,\widetilde F):=(U,\{\tilde f_i|_{W_{i,j}\cap U}\},\{\tilde f_i|_{V_{i,j}}\})
\]
is a system of compact generation satisfying the desired conditions.
\end{proof}
\subsection{Foliated spaces}\label{ss:foliated}
Let $X$ be a Polish space and let $\mathcal{F}$ be a partition of $X$. Then $(X,\mathcal{F})$ is a \emph{foliated space} of \emph{leafwise class} $C^k$ ($k\in\mathbb{N}\cup\{\infty\}$) and dimension $n\in \mathbb{N}$ if $X$ admits an atlas of charts $(U_i,\phi_i)$, where $\{U_i\}$ is an open covering of $X$ and the maps $\phi_i$ are homeomorphisms $\phi_i\colon U_i\to \mathbb{R}^n\times Z_i$ (for $Z_i$ Polish), and with coordinate changes of the form
\[
\phi_i\phi_j^{-1}(x_j,z_j)=(x_i(x_j,z_j),z_i(z_j)),
\]
where $z_i$ is continuous and $x_i\colon \phi_j(U_i\cap U_j)\to\mathbb{R}^n$ is of class $C^k$ on every plaque. Remember that the \emph{plaques} of the chart $(U_i,\phi_i)$ are the sets $\phi_i^{-1}(\mathbb{R}^n\times\{z_i\})$. Moreover, we require that the equivalence relation induced by $\mathcal{F}$ coincides with the transitive closure of the relation ``being in the same plaque''; it follows that $\mathcal{F}$ partitions $X$ into connected $C^k$-manifolds of dimension $n$: the \emph{leaves} of the foliated space. If $(X,\mathcal{F})$ is a foliation of dimension $n$, codimension $m$, and class $C^{k,l}$ (see~\cite[p.~32]{CandelConlon2000-I}), then it is an $n$-dimensional foliated space of leafwise class $C^k$, the transversal models $Z_i$ are $C^l$-manifolds of dimension $m$, and the maps $z_i$ are of class $C^l$.
The \emph{holonomy pseudogroup} serves as a dynamical model for the foliated space $X$: let $\{(U_i,\phi_i)\mid i\in I\}$ be a locally finite atlas, let $p_i$ denote the composition of $\phi_i$ with the projection $\mathbb{R}^n\times Z_i\to Z_i$, and let $Z=\coprod_{i\in I} Z_i$. The transversal components of the change of coordinate maps
\[
h_{i,j}\colon p_j(U_i\cap U_j)\to p_i(U_i\cap U_j) ,\qquad h_{i,j}(z_j)=z_i(z_j)
\]
generate a pseudogroup in $Z$, called the \emph{holonomy pseudogroup} of $X$. Note that we are only considering holonomy pseudogroups induced by locally finite atlases so that the transversal space $Z$ is Polish and the pseudogroup is countably generated.
The holonomy pseudogroup depends on the choice of atlas $\{(U_i,\phi_i)\mid i\in I\}$, but different choices give rise to equivalent pseudogroups (in the case of foliations of class $C^{k,l}$, $C^l$-equivalent pseudogroups). Thus, from now on, we restrict ourselves to considering properties of pseudogroups that are invariant by equivalences; this justifies our abuse of language when we talk about ``the'' holonomy pseudogroup of $X$.
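For instance, for the foliation of $\mathbb{T}^2$ by lines of a fixed irrational slope $\alpha$, a standard choice of atlas realizes the holonomy pseudogroup, up to equivalence, as the pseudogroup on a transverse circle generated by the rotation by $\alpha$.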
\begin{lemma}[{\cite{Haefliger}}]
If $X$ is a compact foliated space, then its holonomy pseudogroup is compactly generated.
\end{lemma}
If $X$ is a foliation of codimension $m$ and it admits an atlas such that the transversals have an affine structure and the maps $h_{i,j}$ are all affine, then it is a \emph{transversally affine foliation} and its holonomy pseudogroup is also affine.
A \emph{matchbox manifold} is a compact foliated space admitting an atlas with totally disconnected transversals.
\subsection{Toral linked twist maps}\label{s:linked}
Let $\mathbb{T}^2:=\mathbb{R}^2/\mathbb{Z}^2$ be the $2$-torus, whose points we will simply denote as pairs $(x,y)$, $x, y\in[0,1)$. For an interval $A=[a,b]$, $0\leq a< b\leq 1$, let $H_A$ be the horizontal closed annulus defined by
\[
H_A=\{(x,y)\in \mathbb{T}^2\mid a\leq y\leq b\},
\]
and let $V_A$ be the corresponding vertical closed annulus
\[
V_A=\{(x,y)\in \mathbb{T}^2\mid a\leq x\leq b\}.
\]
For any integer $m>1$, we have the horizontal and vertical twist maps, defined on $H_A$ and $V_A$, respectively, by
\[
(x,y)\mapsto (x+\phi_m(y),y),\qquad (x,y)\mapsto (x,y+\phi_m(x)),
\]
where
\[
\phi_m(t)=\frac{m(t-a)}{(b-a)}
\]
is the only linear map satisfying
\[
\phi_m(a)=0,\qquad \phi_m(b)=m.
\]
Note that $\phi_m$ depends of course on the choice of interval $A$, but we leave it implicit to avoid cumbersome notation. Toral linked twists can be constructed with more general maps $\phi_m$ (see e.g.~\cite{Devaney}), but in this paper we restrict our attention to the linear case for the sake of simplicity.
A \emph{toral linked twist} is the composition of horizontal twist maps on a finite number of horizontal annuli with vertical twists on a finite number of vertical annuli. Let $\widehat H_1,\ldots,\widehat H_k$ be a collection of closed intervals in $[0,1]$ such that every intersection $\widehat H_i\cap \widehat H_j$ ($i\neq j$) consists of at most one common endpoint, and let $\widehat V_1,\ldots,\widehat V_l$ be another such collection. Let $H_1,\ldots, H_k$ be the horizontal closed annuli induced by the intervals $\widehat H_1,\ldots,\widehat H_k$; similarly, let $V_1,\ldots,V_l$ be the vertical closed annuli induced by $\widehat V_1,\ldots,\widehat V_l$. Choose two sequences of integers
\[m_1,\ldots, m_k,\qquad n_1,\ldots,n_l,\]
and, for $i=1,\ldots,k$, let $h_i$ denote the horizontal $m_i$-twist map on $H_i$,
\[
h_i(x,y)=(x+\phi_{m_i}(y),y).
\]
Define the vertical $n_j$-twist maps $v_j$, $1\leq j\leq l$, similarly.
We combine the horizontal twists $h_i$ into one map $T_h$ and the vertical twists into another map $T_v$ as follows:
\begin{align*}
T_h(x,y)&=\begin{cases}
h_i(x,y) &\qquad\text{if}\ (x,y)\in H_i\ \text{for some}\ 1\leq i\leq k,\\
(x,y) &\qquad\text{else}.
\end{cases}\\
T_v(x,y)&=\begin{cases}
v_j(x,y) &\qquad\text{if}\ (x,y)\in V_j\ \text{for some}\ 1\leq j\leq l,\\
(x,y) &\qquad\text{else}.
\end{cases}
\end{align*}
The linked twist map corresponding to our choice of intervals and integers is then $T=T_v\circ T_h$.
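To make the construction concrete, the following Python sketch evaluates a linked twist $T=T_v\circ T_h$; the single horizontal and vertical annulus and the twist integers used in it are arbitrary sample choices, not tied to any example appearing later.
\begin{verbatim}
# Sketch of a toral linked twist T = T_v o T_h with linear twists phi_m.
# The annuli and twist integers below are arbitrary illustrative choices.
H_INTERVALS = [((0.25, 0.75), 2)]   # horizontal annuli [a,b] with twist integer m
V_INTERVALS = [((0.25, 0.75), 3)]   # vertical annuli [a,b] with twist integer n

def phi(t, a, b, m):
    # Linear map with phi(a) = 0 and phi(b) = m.
    return m * (t - a) / (b - a)

def T_h(x, y):
    # Horizontal twist: nontrivial only if y lies in some horizontal annulus.
    for (a, b), m in H_INTERVALS:
        if a <= y <= b:
            return (x + phi(y, a, b, m)) % 1.0, y
    return x, y

def T_v(x, y):
    # Vertical twist: nontrivial only if x lies in some vertical annulus.
    for (a, b), n in V_INTERVALS:
        if a <= x <= b:
            return x, (y + phi(x, a, b, n)) % 1.0
    return x, y

def T(x, y):
    # The linked twist map.
    return T_v(*T_h(x, y))

print(T(0.3, 0.5))   # one iterate of a sample point of the torus
\end{verbatim}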
We review some of the basic properties that will be of use later. First, note that $T$ is the identity on $\mathbb{T}^2\setminus M$, where
\[
M=H_1\cup\cdots\cup H_k \cup V_1\cup \cdots\cup V_l.
\]
Also, $T$ is affine (hence smooth) on $\mathbb{T}^2\setminus\Delta$, where
\[
\Delta = \partial H_1\cup\cdots\cup\partial H_k \cup T_h^{-1}(\partial V_1)\cup \cdots\cup T_h^{-1}(\partial V_l)
\]
and $\partial$ denotes the topological boundary.
Finally, we will also employ the following result.
\begin{theorem}[{\cite[Thm.\ A]{Devaney80}}]\label{t:twisttt}
The restriction of the toral linked twist map $T$ to $M$ is topologically transitive and sensitive to initial conditions.
\end{theorem}
\begin{figure}[tbh]
\includegraphics[width=\textwidth]{twist.pdf}
\caption{The horizontal (left, $T_h$)\ and vertical (middle, $T_v$) components of a linked twist; they involve two and one intervals, respectively. The shaded area on the right represents $M$.}
\end{figure}
\subsection{Equicontinuous pseudogroups}
\begin{definition}[{\cite[{Def.~7.1}]{AlvarezCandel}}]\label{d:qlm}
Let $X$ be a topological space, and let $\{(U_i,d_i)\mid i\in I\}$ be a family of metric spaces such that $\{U_i\}$ is an open covering of $X$ and every $d_i$ is a compatible metric on $U_i$.
We say that $\{(U_i,d_i)\}$ is a \emph{cover of $X$ by quasi-locally equal metric spaces} if there is an assignment $\epsilon\mapsto \delta(\epsilon)$ such that, for every $i,j\in I$, every point $z\in U_i\cap U_j$ has an open neighborhood $U_{i,j,z}\subset U_i\cap U_j$ satisfying
\[
d_i(x,y)<\delta(\epsilon)\quad\Longrightarrow\quad d_j(x,y)<\epsilon
\]
for every $\epsilon>0$ and $x,y\in U_{i,j,z}$.
Two such covers $\{(U_i,d_i)\}$ and $\{(V_j,d'_j)\}$ are \emph{equivalent} if their union is again a cover by quasi-locally equal metric spaces; an equivalence class of covers is called a \emph{quasi-local metric space}.
\end{definition}
\begin{proposition}[{\cite[{Thm.~15.1}]{AlvarezCandel}}]
If $X$ is Hausdorff and paracompact, then every cover by quasi-locally equal metric spaces is equivalent to a metric; that is, equivalent to a cover of the form $\{(X,d)\}$.
\end{proposition}
\begin{definition}[{\cite[{Def.~8.4}]{AlvarezCandel}}]\label{d:eqpsg}
Let $\mathcal{G}$ be a pseudogroup acting on a Polish space $X$. We say that $\mathcal{G}$ is \emph{equicontinuous} if there is a cover by quasi-locally equal metric spaces $\{(U_i,d_i)\mid i\in I\}$, a generating pseudo{\textasteriskcentered}group $\mathcal{S}$, and an assignment $\epsilon\mapsto \delta(\epsilon)$ such that
\[
d_i(x,y)<\delta(\epsilon)\quad\Longrightarrow\quad d_j(sx,sy)<\epsilon
\]
for every $i,j\in I$, $s\in \mathcal{S}$, $x,y\in\dom s\cap U_i$ and $sx,sy\in U_j$.
\end{definition}
Note that if the above condition is fulfilled with a cover by quasi-locally equal metric spaces, then it is also fulfilled with any other equivalent cover, so we can regard equicontinuity as a property of the quasi-local metric.
By the previous results, Definition~\ref{d:eqpsg} is equivalent to the following:
\begin{definition}
A pseudogroup $\mathcal{G}\curvearrowright X$ is \emph{equicontinuous} if there is a generating pseudo{\textasteriskcentered}group $S$, a compatible metric $d$, and an assignment $\epsilon\mapsto\delta(\epsilon)$ such that
\[
d(x,y)<\delta(\epsilon)\qquad \Longrightarrow \qquad d(sx,sy)<\epsilon
\]
for every $s\in S$ and $x,y\in \dom s$.
\end{definition}
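For instance, if some generating pseudo{\textasteriskcentered}group $S$ consists of isometries of a compatible metric $d$, then $\delta(\epsilon)=\epsilon$ works and $\mathcal{G}$ is equicontinuous; this is the case for the pseudogroup on $\mathbb{R}$ generated by the translation $t\mapsto t+1$, taking for $S$ all restrictions of integer translations and for $d$ the Euclidean metric.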
\begin{proposition}[{\cite[{Lem.~8.8}]{AlvarezCandel}}] \label{p:equicontinuityinvariant}
Being equicontinuous is invariant by equivalences of pseudogroups.
\end{proposition}
\section{Pseudogroup dynamics}
\subsection{Sensitivity and chaos}\label{ss:sensitivityandchaos}
The aim of this section is to introduce our definition of Devaney chaos for pseudogroups; first, we need to obtain suitable analogues of conditions~(\ref{i:devtt})--(\ref{i:devsens}) in Definition~\ref{d:dev}.
\begin{definition}
A pseudogroup $\mathcal{G}\curvearrowright X$ is \emph{topologically transitive} if, for all non-empty open subsets $U,V\subset X$, there is some $g\in \mathcal{G}$ with
\[
g(U)\cap V\neq \emptyset.
\]
\end{definition}
\begin{definition}
A pseudogroup $\mathcal{G}\curvearrowright X$ is \emph{point transitive} if there is some $x\in X$ such that $\mathcal{G}x$ is dense in $X$.
\end{definition}
\begin{lemma}[{cf.~\cite[Prop.~1.1]{Silverman}}]\label{l:pt}
Point transitivity implies topological transitivity for every pseudogroup $\mathcal{G}\curvearrowright X$; if $X$ is separable and Baire, then the converse also holds.
\end{lemma}
\begin{proof}
To show that point transitivity implies topological transitivity, let $\mathcal{G}x$ be a dense orbit and let $U,V$ be non-empty open sets. Then there are $g,h\in\mathcal{G}$ with $g(x)\in U$, $h(x)\in V$, and therefore $hg^{-1}(U)\cap V\neq\emptyset$.
Suppose now that $\mathcal{G}$ is topologically transitive but there is no dense orbit, and let $\{U_n\}_{n\in\mathbb{N}}$ be a countable base for $X$. For every $x\in X$, there is some $U_{n(x)}$ such that $\mathcal{G}x\cap U_{n(x)}=\emptyset$. For each $n$, $\mathcal{V}_{n}=\bigcup_{g\in\mathcal{G}} g(U_{n})$ is a dense open set because $\mathcal{G}$ is topologically transitive, so $X\setminus \mathcal{V}_{n(x)}$ is a closed and nowhere dense set containing $x$. Thus, $X=\bigcup_{n\in\mathbb{N}}X\setminus \mathcal{V}_{n}$ is a countable union of closed and nowhere dense sets, contradicting the assumption that $X$ was Baire.
\end{proof}
\begin{comment}
Another common definition of transitivity for dynamical systems is \emph{point transitivity}; i.e., the existence of a dense orbit. We can phrase this condition is the language of pseudogroups quite easily.
\begin{definition}
A pseudogroup $\mathcal{G}\curvearrowright X$ is \emph{point transitive} (PT) if there is $x\in X$ such that $\mathcal{G}x$ is dense in $X$.
\end{definition}
It is easy to check that both properties are invariant by equivalences. If $X$ is not Baire or not separable, then $(TT)\nrightarrow(PT)$ and, if $X$ is not perfect, then $(PT)\nrightarrow(TT)$~\cite[p.~6]{KolyadaSnoha}. Note that we are always considering $X$ a locally compact separable metric space, hence Baire.
\begin{lemma}[{\cite[Prop.~1.1]{Silverman}}]
$(PT)$ implies $(TT)$. If $X$ is a perfect topological spaces, then $(TT)$ implies $(PT)$.
\end{lemma}
\begin{definition}
Let $\mathcal{G}\curvearrowright X$, and let $x\in X$. $x$ is a \emph{periodic point} if there is a open set $U\subset X$ meeting every $\mathcal{G}$-orbit and such that $U\cap \mathcal{G}x$ is finite. We say that $\mathcal{G}$ has \emph{density of periodic points} if the periodic points are dense in $X$.
\end{definition}
\end{comment}
Recall that we are working with Polish (hence, Baire) spaces, so topological transitivity and point transitivity coincide.
Regarding density of periodic points, one may be tempted to use the following naive definition: $\mathcal{G}\curvearrowright X$ has density of periodic points if the union of finite $\mathcal{G}$-orbits is dense in $X$. Unfortunately, Example~\ref{e:rz} shows that this condition is not invariant by equivalences (there, every $\mathcal{H}$-orbit is a single point of $\mathbb{S}^1$, whereas no $\mathcal{G}$-orbit in $\mathbb{R}$ is finite), so we need to reformulate it as follows:
\begin{definition}\label{d:dpo}
A pseudogroup $\mathcal{G}\curvearrowright X$ has \emph{density of periodic orbits} if there is an open set $U\subset X$ meeting every $\mathcal{G}$-orbit and such that the set of finite $\mathcal{G}|_U$-orbits is dense in $U$.
\end{definition}
\begin{comment}
\begin{lemma}
Let $\mathcal{G}\curvearrowright X$ satisfy (DPP). Then there is an open set $U\subset X$ meeting every $\mathcal{G}$-orbit and such that
\[
\{x\in U\mid \mathcal{G}\cap U\ \text{is finite} \}
\]
is dense in $U$. Moreover, any relatively compact open set meeting every orbit satisfies this condition.
\end{lemma}
\begin{proof}
Let $\{x_1, x_2, \ldots\}$ be a dense set of periodic points. First, note that being periodic implies that the orbits $\mathcal{G}x_n$ on $X$ are discrete. Since $X$ is $\sigma$-compact, we can find an exhausting sequence of compact sets $K_n$ such that $K_n$ is a neighbourhood of $x_n$ and $x_n\notin K_{n-1}$. For every $y\in \mathcal{G}x_1\cap (X\setminus K_1)$, choose a closed neighborhood $V_y$ of $y$ such that
\begin{enumerate}
\item \label{i:vyvyprime} $d(V_y,V_{y'})>d(y,y')/2$, for $y,y'\in \mathcal{G}x_1\cap (X\setminus K_1)$, $y\neq y'$,
\item $V_y\cap K_1=\emptyset$, and
\item $V_y\subset \im g$ for some $g\in \mathcal{G}$ with $\dom g\subset K_n$.
\end{enumerate}
Let
\[
W_1=\bigcup_{y\in \mathcal{G}x_1\cap (X\setminus U_1)} V_y;
\]
which is a closed set: indeed, let $z_i$ be a Cauchy sequence in $W_1$, then~(\ref{i:vyvyprime}) yields that all but finitely many $z_i$ are contained in the same set $V_y$, so the sequence converges to a point in $W$ because $V_y$ is closed.
Let us now define $W_n$ by induction on $n$. For every $y\in \mathcal{G}x_n\cap (X\setminus K_n)$, choose a closed neighborhood $V_y$ of $y$ such that
\begin{enumerate}
\item \label{i:vyvyprimen} $d(V_y,V_{y'})>d(y,y')/2$, for $y,y'\in \mathcal{G}x_n\cap (X\setminus K_n)$, $y\neq y'$,
\item $V_y\cap K_n=\emptyset$, and
\item \label{i:vykn} $V_y\subset \im g$ for some $g\in \mathcal{G}$ with $\dom g\subset K_n\setminus \bigcup_{m=1}^{n-1}W_m$.
\end{enumerate}
Let us show that, for $n\geq 2$, every point $y\in \mathcal{G}x_n\cap (X\setminus K_n)$ has a neighborhood satisfying~(\ref{i:vykn}).
\end{proof}
\end{comment}
\begin{lemma}\label{l:dppinvariant}
Having density of periodic orbits is invariant by equivalences.
\end{lemma}
\begin{proof}
Let $\mathcal{G}\curvearrowright X$ be a pseudogroup with density of periodic orbits, let $U\subset X$ be an open set meeting every $\mathcal{G}$-orbit and such that the finite $\mathcal{G}|_U$-orbits are dense, and let $\Phi\colon (X,\mathcal{G}) \to (Y,\mathcal{H})$ be an equivalence of pseudogroups.
Since $U$ is a paracompact space, we can find a subset $\Phi_0\subset \Phi$ such that $\{\dom \phi\mid \phi\in\Phi_0\}$ is a locally finite family and $\bigcup_{\phi\in\Phi_0} \dom \phi=U$. We claim that
\[
V=\bigcup_{\phi\in\Phi_0} \im \phi
\]
satisfies the statement in Definition~\ref{d:dpo}. Clearly, $V$ meets every $\mathcal{H}$-orbit, so let us prove that finite $\mathcal{H}|_V$-orbits are dense in $V$.
Let $W\subset V$ be any open set and choose $\phi_0\in\Phi_0$ with $\im \phi_0\cap W\neq\emptyset$. Since $\phi_0^{-1}(W)\subset U$ is a non-empty open set, there is some $y\in \phi^{-1}_0(W)$ such that $\mathcal{G}|_U(y)$ is finite.
Since $\{\dom \phi\mid \phi\in\Phi_0\}$ is a locally finite family and $\mathcal{G}|_U(y)$ is finite, there are only finitely many maps in $\Phi_0$ defined on $\mathcal{G}|_U(y)$, so
\[
A_y:=\{\phi(z)\mid \phi\in\Phi_0, z\in \mathcal{G}|_U(y)\}
\]
is a finite set.
Let us show that $\mathcal{H}|_V(A_y)= A_y$. For every $h\phi(z)$, $h\in \mathcal{H}|_{V}$, there is some $\psi\in \Phi_0$ with $h\phi(z)\in \im \psi$, and therefore
\[
z':=\psi^{-1} h\phi(z)\in \mathcal{G}|_U(z)=\mathcal{G}|_U(y)
\]
by~\ref{d:equiv}(\ref{i:equivgamma}). Hence $h\phi(z)= \psi(z')$ with $z'\in \mathcal{G}|_U(y)$, showing that $\mathcal{H}|_V(A_y)= A_y$.
We have proved that every open set $W\subset V$ meets an $\mathcal{H}|_V$-invariant finite set $A_y$, so $\mathcal{H}$ has density of periodic orbits.
\end{proof}
\begin{corollary}\label{c:dpocg}
If $\mathcal{G}\curvearrowright X$ has density of periodic orbits and $W\subset X$ is a relatively compact open set meeting every orbit, then the finite $\mathcal{G}|_W$-orbits are dense in $W$.
\end{corollary}
\begin{proof}
Let $U\subset X$ be an open set satisfying the statement of Definition~\ref{d:dpo}, and let $F\subset \mathcal{G}$ be a finite set satisfying $\overline{W}\subset \bigcup_{f\in F}\im f$ and $\bigcup_{f\in F}\dom f\subset U$. Since $W$ meets every $\mathcal{G}$-orbit, $V:=\bigcup_{f\in F}f^{-1}(W)$ also meets every $\mathcal{G}$-orbit; moreover, $V\subset U$, so the set of finite $\mathcal{G}|_V$-orbits is dense in $V$. Hence $\Phi_0=\{f|_{f^{-1}(W)} \mid f\in F\}$ is a finite set satisfying $W=\bigcup_{\phi\in\Phi_0}\im \phi$, and, arguing as in the proof of the previous lemma, we get that the set of finite $\mathcal{G}|_W$-orbits is dense in $W$.
\end{proof}
Finally, we come to the definition of sensitivity for pseudogroups. A naive approach would suggest the following definition:
\begin{definition}[Naive sensitivity]\label{d:naive}
$\mathcal{G}\curvearrowright X$ is sensitive if there is $c>0$ such that, for every $x\in X$ and $r>0$, there are $g\in \mathcal{G}$ and $y\in B(x,r)$ with
$x,y\in\dom g$ and
$d(g(x),g(y))\geq c$.
\end{definition}
However, the following lemma shows that this condition is too weak to model our idea of sensitivity:
\begin{lemma}\label{l:naivefail}
Any topologically transitive pseudogroup $\mathcal{G}$ on a perfect Polish space $X$ satisfies Definition~\ref{d:naive}.
\end{lemma}
\begin{proof}
Choose non-empty open sets $W_1,W_2$ and $c>0$ satisfying
\begin{equation}\label{dw1w22c}
d(W_1,W_2)\geq 2c,
\end{equation} let $x\in X$ and $r>0$, and choose $0<r'<s<r$ such that $V=B(x,s)\setminus \overline{B(x,r')}$ is non-empty; such $r'$ and $s$ exist because $X$ is perfect. By topological transitivity, there are maps $g_1$ and $g_2$ in $\mathcal{G}$ such that
\[
g_1(V)\cap W_1\neq \emptyset,\qquad g_2(V)\cap W_2\neq \emptyset;
\]
we may assume $\dom g_1,\dom g_2\subset V$. Let $h_i$, $i=1,2$, be the partial map with domain
\[
\dom h_i = \dom g_i\cup B(x,r')
\]
defined by
\[
h_i|_{\dom g_i}=g_i,\qquad h_i|_{B(x,r')}=\id_{B(x,r')}.
\]
Axiom~\ref{d:pseudogroup}(\ref{i:combination}) ensures that $h_i$ belongs to $\mathcal{G}$, and $h_i(x)=x$. Choose $y_i\in\dom g_i$ with $g_i(y_i)\in W_i$; note that $y_i\in V\subset B(x,r)$. Since $d(W_1,W_2)\geq 2c$ by~\eqref{dw1w22c}, the triangle inequality yields some $i\in \{1,2\}$ such that
\[
d(h_i(x),h_i(y_i))=d(x,g_i(y_i))\geq c.\qedhere
\]
\end{proof}
This argument shows that the problem originates from Axiom~\ref{d:pseudogroup}(\ref{i:combination}) (closure under combinations). Following the ideas in~\cite{AlvarezCandel}, which in turn can be traced back to previous works (see~\cite{HectorHirsch,Matsumoto}), we phrase our definition of sensitivity for pseudogroups in terms of generating pseudo{\textasteriskcentered}groups. We also quantify over all compatible metrics in order to make it a topological condition.
\begin{definition}\label{d:sensitivity}
A pseudogroup $\mathcal{G}\curvearrowright X$ is \emph{sensitive to initial conditions} if, for every compatible metric $d$ and every generating pseudo{\textasteriskcentered}group $\mathcal{S}$, there is a \emph{sensitivity constant} $c:=c(\mathcal{S},d)>0$ such that, for every $x\in X$ and $r>0$, there are $s\in \mathcal{S}$ and $y\in X$ with $x,y\in\dom s$,
\[
d(x,y)<r\quad \text{and} \quad d(sx,sy)\geq c.
\]
\end{definition}
Note that $c$ clearly varies with the choice of metric and pseudo{\textasteriskcentered}group: if we fix $d$ and choose a sequence of pseudo{\textasteriskcentered}groups $S_n$ such that all sets $\im s$ ($s\in S_n$) have diameter less than $1/n$, then $c(S_n,d)\downarrow 0$.
Having obtained analogues of \ref{d:dev}(\ref{i:devtt})--(\ref{i:devsens}), we can introduce our definition of Devaney chaos for pseudogroups.
\begin{definition}
A pseudogroup $\mathcal{G}\curvearrowright X$ is \emph{chaotic} if it is topologically transitive, has density of periodic orbits, and is sensitive to initial conditions.
\end{definition}
\subsection{Equicontinuous points}
In this subsection we will generalize to the setting of pseudogroups some dynamical notions expressing regularity; we will need them in order to prove the Auslander-Yorke dichotomy.
\begin{definition}
A point $x\in X$ is \emph{$(S,d)$-equicontinuous} for a generating set $S$ and a metric $d$ on $X$ if there is an assignment $\epsilon\mapsto\delta(\epsilon)$ so that
\[
d(x,y)<\delta(\epsilon)\quad\Longrightarrow\quad d(s(x),s(y))<\epsilon
\]
for every $s\in S^*$ and $y\in X$ with $x,y\in \dom s$.
We say that a point $x$ is \emph{equicontinuous} if it is $(S,d)$-equicontinuous for some choice of $S$ and $d$.
\end{definition}
We will refer to any assignment $\epsilon\mapsto\delta$ satisfying the above condition as a \emph{modulus of equicontinuity} for $(S,d)$.
\begin{definition}
The pseudogroup $\mathcal{G}\curvearrowright X$ is \emph{almost equicontinuous} if there are $S$ and $d$ so that the set of $(S,d)$-equicontinuous points is dense in $X$.
\end{definition}
\begin{lemma}
If $x\in X$ is $(S,d)$-equicontinuous, then every $y\in \mathcal{G}x$ is $(S,d)$-equicontinuous (perhaps with a different modulus).
\end{lemma}
\begin{proof}
Let $y\in \mathcal{G}x$. Since $S$ is a generating set for $\mathcal{G}$, there is some $s\in S^*$ so that $s(x)=y$. For every $\epsilon>0$, let $\delta'(\epsilon)>0$ be small enough so that
\[
B(y,\delta'(\epsilon))\subset \im s,\qquad s^{-1}(B(y,\delta'(\epsilon)))\subset B(x,\delta(\epsilon)).
\] Then, for every $t\in S^*$, the restrictions of $t$ and $t s s^{-1}$ to $\dom t \cap B(y,\delta'(\epsilon))$ coincide, and thus
\[
t(B(y,\delta'(\epsilon)))=tss^{-1}(B(y,\delta'(\epsilon)))\subset ts(B(x,\delta(\epsilon))).
\]
Since $ts\in S^*$ and $\epsilon\mapsto\delta(\epsilon)$ is a modulus of equicontinuity for $(S,d)$ at $x$, we obtain
\[
t(B(y,\delta'(\epsilon)))\subset B(t(y),\epsilon). \qedhere
\]
\end{proof}
\subsection{Dynamics and compact generation}
We begin with some preliminary results for compactly generated pseudogroups. The following proposition reveals that, for every system of compact generation $(U,F,\widetilde F)$ and every point $x\in U$, either the pseudogroup is sensitive to initial conditions around $x$, or there is $\rho>0$ such that every map in $\langle F\rangle$ whose domain meets $B(x,\rho)$ has an extension in $\langle \widetilde F\rangle$ defined on all of $B(x,\rho)$.
\begin{proposition}\label{p:halo}
Let $\mathcal{G}$ be a compactly generated pseudogroup on $X$, let $d$ be a compatible metric, let $(U,F,\widetilde F)$ be a system of compact generation, and let
\[
\sigma:=\sigma(U,F,\widetilde F)=\sup\{r>0\mid B(u,r)\subset \dom \tilde f\ \ \forall f\in F,\ u\in\dom f\}>0.
\]
Then, for every $x\in U$, either
\begin{enumerate}[(i)]
\item \label{i:halo} there is $\rho>0$ such that $B(x,\rho)\subset \dom \tilde f$ for every $f\in \langle F\rangle$ with $\dom f\cap B(x,\rho)\neq \emptyset$; or,
\item \label{i:nothalo} for every $r>0$, there are $y\in B(x,r)$ and $\tilde f\in\langle \widetilde F\rangle$ such that
\[
x,y\in\dom \tilde f,\qquad d(\tilde f(x),\tilde f(y))\geq \sigma/2.
\]
\end{enumerate}
\end{proposition}
\begin{proof}
Suppose that~(\ref{i:halo}) does not hold, so that, for every $r>0$, there are $f\in \langle F\rangle$ and $y,z\in B(x,r)$ satisfying
\[
z\in\dom f,\qquad y\notin \dom \tilde f.
\]
Let $f=f_n\cdots f_1$, where $f_i\in F$ for $i=1,\ldots,n$, and let $j$ be the largest index $0\leq j<n$ satisfying $B(x,r)\subset \dom \tilde f_j\cdots\tilde f_1$ (for $j=0$ this composition is the identity; $j=n$ is excluded because $y\notin\dom\tilde f$). Then $B(x,r)\not\subset\dom \tilde f_{j+1}\cdots\tilde f_1$, so there is $y'\in B(x,r)$ such that
\[
\tilde f_j\cdots\tilde f_1(y')\notin \dom \tilde f_{j+1}.
\]
But $z\in \dom f$, so $\tilde f_j\cdots\tilde f_1(z)=f_j\cdots f_1(z)\in\dom f_{j+1}$, and therefore
\[
d(\tilde f_j\cdots\tilde f_1(y'),\tilde f_j\cdots\tilde f_1(z))\geq \sigma
\]
by the definition of $\sigma$. Now the triangle inequality yields that either
\[d(\tilde f_j\cdots\tilde f_1(x),\tilde f_j\cdots\tilde f_1(y'))\geq \sigma/2\quad \text{or} \quad d(\tilde f_j\cdots\tilde f_1(x),\tilde f_j\cdots\tilde f_1(z))\geq \sigma/2.\qedhere\]
\end{proof}
\begin{comment}
We take advantage of this result to relate equicontinuity of maps defined on a point $x$ and equicontinuity of maps whose domain intersects a neighborhood of $x$.
\begin{lemma}
Let $\mathcal{G}$ be a compactly generated pseudogroup in $X$, and let $\langle U,F,\widetilde F\rangle$ be a system of compact generation for $\mathcal{G}$. Assume that, for $x\in X$, the following condition is satisfied:
\begin{enumerate}
\item \label{i:pointeq} There is an assignment $\epsilon\mapsto \delta(\epsilon)$ such that, for every $\tilde f\in \langle \widetilde F\rangle$ and $y\in X$ with $x,y\in\dom \tilde f$,
\[
d(x,y)<\delta(\epsilon)\quad \Longrightarrow\quad d(\tilde f(x),\tilde f(y))<\epsilon.
\]
\end{enumerate}
Then the next condition is satisfied too:
\begin{enumerate}
\setcounter{enumi}{1}
\item \label{i:pointloceq} There is an assignment $\epsilon\mapsto \delta'(\epsilon)$ such that, for every $f\in \langle F\rangle$ and $y,z\in\dom f$,
\[
y,z\in B(x,\delta'(\epsilon))\quad \Longrightarrow\quad d(f(y),f(z))<\epsilon.
\]
\end{enumerate}
\end{lemma}
\begin{proof}
Note that (\ref{i:pointeq}) contradicts Proposition~\ref{p:halo}(\ref{i:nothalo}), so~\ref{p:halo}(\ref{i:halo}) has to hold. Suppose for the sake of contradiction that there is $c>0$ so that, for every $r>0$, there are $f\in\langle F\rangle$ and $y,z\in\dom f$ with
\[
y,z\in B(x,r),\qquad d(f(y),f(z))\geq c.
\]
Then the triangle inequality yields
\[
\max \{d(\tilde f(x),\tilde f(y)), d(\tilde f(x),\tilde f(z))\}\geq c/2.
\]
By choosing $y,z\in B(x,\delta(c/2))$, we obtain a contradiction with (\ref{i:pointeq}).
\end{proof}
\end{comment}
The following result, which will be of use later, is a generalization of the well-known fact that equicontinuity and uniform equicontinuity agree for actions on compact spaces.
\begin{lemma}\label{l:uniformity}
Let $\mathcal{G}$ be a compactly generated pseudogroup. If there is a metric $d$ and a generating pseudo{\textasteriskcentered}group $\mathcal{S}$ such that every point is a point of $(\mathcal{S},d)$-equicontinuity, then $\mathcal{G}$ is equicontinuous.
\end{lemma}
\begin{proof}
Let $(U,F,\widetilde F)$ be a system of compact generation with $\widetilde F^*\subset \mathcal{S}^*$ (Lemma~\ref{l:systemcg}). By Proposition~\ref{p:halo}, for every $x\in U$ there is $\rho_x>0$ such that $B(x,\rho_x)\subset U$ and
\begin{equation*}
B(x,\rho_x)\subset\dom\tilde f\qquad \text{for every}\ f\in \langle F\rangle\ \text{with}\ \dom f\cap B(x,\rho_x)\neq \emptyset
\end{equation*}
(alternative~\ref{p:halo}(\ref{i:nothalo}) is ruled out at every $x$ by the equicontinuity assumption).
The sets $U_x:=B(x,\rho_x)$, $x\in U$, form an open cover of $U$. Let $d_x$ be the metric on $U_x$ defined by setting
\[
d_x(y,z)=\sup\{ d(\tilde fy,\tilde fz)\mid f\in\langle F\rangle,\ \dom f\cap U_x\neq \emptyset \}.
\]
Let us show that every $d_x$ is compatible with the topology of $U_x$. We may assume without loss of generality that $\id_U\in F$, so $d\leq d_x$ for every $x\in U$. Now we only need to show that every ball $B_{d_x}(y,r)$ contains a ball $B_d(y,s)$ for some $s>0$.
\begin{claim}\label{c:quasilocal}
Let $x,y,u,v\in U$, and $f\in \langle F\rangle$ be such that $u,v\in\dom f\cap U_x$ and $fu,fv\in U_y$, and let $\epsilon \mapsto \delta_x(\epsilon)$ and $\epsilon \mapsto \delta_y(\epsilon)$ be moduli of $(S, d)$-equicontinuity at $x$ and $y$, respectively.
Then
\[
d_x(u,v)<\delta'(\epsilon)\quad\Longrightarrow \quad d_y(fu,fv)<\epsilon
\]
for $\delta'(\epsilon)=\delta_x(\delta_y(\epsilon/2)/2)$.
\end{claim}
The triangle inequality and $d\leq d_x$ yield
\[
f(B_{d_x}(x,\delta'(\epsilon)))\subset f(B_{d}(x,\delta'(\epsilon)))\subset B_{d}(y,\delta_y(\epsilon/2)/2).
\]
Applying again the triangle inequality we get $d(fu,fv)<\delta_y(\epsilon/2)$. Since $\delta_y$ is a modulus of equicontinuity at $y$, we obtain
\[
\sup \{d(\tilde g fu,\tilde gfv)\mid g\in\langle F\rangle,\ \dom g\cap B(y,\rho_y)\neq \emptyset\}\leq \epsilon/2,
\]
so $d_y(fu,fv)<\epsilon$, proving Claim~\ref{c:quasilocal}.
Let $V$ be an open set meeting every orbit and such that $\overline{V}\subset U$; choose a finite covering $\{U_{x_i}\}$ of $\overline V$. There are only a finite number of moduli of equicontinuity $\epsilon\mapsto \delta_{x_i}(\epsilon)$, so, by taking $\delta(\epsilon):=\min_i\{\delta_{x_i}(\epsilon)\}$, Claim~\ref{c:quasilocal} yields \[d_{x_i}(u,v)<\delta'(\epsilon)\quad\Longrightarrow\quad d_{x_j}(fu,fv)<\epsilon\]
for every $i,j,u,v,f$, and $\delta'=\delta(\delta(\epsilon/2)/2)$.
We have proved that $\mathcal{G}|_V$ is equicontinuous with respect to the cover by quasi-locally equal metric spaces $\{(U_{x_i},d_{x_i})\}$ (see Definition~\ref{d:qlm}). Since $\mathcal{G}|_V$ is equivalent to $\mathcal{G}$, the result follows from Proposition~\ref{p:equicontinuityinvariant}.
\end{proof}
\begin{proof}[Proof of Thm.~\ref{t:sicactionpseudogroup}]
Let us prove that, if the action $G\curvearrowright X$ is sensitive, then the induced pseudogroup $\mathcal{G}$ is sensitive too, the converse implication being trivial. Let $c_G>0$ be a sensitivity constant for $G\curvearrowright X$, let $H=\{f_1,\ldots,f_n\}\subset G$ be a symmetric finite generating set (in the group-theoretic sense), let $d$ be a metric on $X$, and let $\mathcal{S}$ be a generating pseudo{\textasteriskcentered}group for $\mathcal{G}$. Since $H\subset \mathcal{G}$ and $\mathcal{S}$ generates $\mathcal{G}$, Lemma~\ref{l:generate} yields a finite sequence of open coverings of $X$,
\[
\widetilde{\mathcal{U}}_i=\{\widetilde{U}_{i,j}\},\qquad i=1,\ldots,n,
\] such that
\[\tilde f_{i,j}:=(f_i)|_{\widetilde U_{i,j}}\in\mathcal{S}\qquad \text{for every} \ i,j;\] furthermore, we may assume that every $\widetilde{\mathcal{U}}_i$ is finite because $X$ is compact. Let $\mathcal{U}_i=\{U_{i,j}\}$ be a shrinking of $\widetilde{\mathcal{U}}_i$, let $ f_{i,j}:=(f_i)|_{U_{i,j}}$, and let
\[
F=\{f_{i,j}\},\qquad \widetilde F=\{\tilde f_{i,j}\}.
\]
Then $(X,F,\widetilde F)$ is a system of compact generation for $\mathcal{G}$.
Let us show that $(\mathcal{S},d)$ is sensitive with constant $c_F:=\min\{\sigma/2,c_G\}$, where $\sigma:=\sigma(X,F,\widetilde F)$ is given by Proposition~\ref{p:halo}.
If $(X,F,\widetilde F)$ satisfies \ref{p:halo}(\ref{i:nothalo}) at every point in $X$, then we are done, so let $x\in X$ and $r>0$ be given and suppose that \ref{p:halo}(\ref{i:halo}) holds at $x$ for some $\rho>0$; we may assume $\rho\leq r$. Since $G$ is sensitive, there are \[g=f_{i_k}\cdots f_{i_1}\in G\] and $y\in B(x,\rho)$ with $d(g(x),g(y))\geq c_G$. Clearly, there is a sequence $j_1,\ldots, j_k$ such that $x\in\dom h$, where $h=f_{i_k,j_k}\cdots f_{i_1,j_1}$. But $B(x,\rho)\subset\dom \tilde h$ by \ref{p:halo}(\ref{i:halo}), whence $y\in \dom \tilde h$ and
\[
d(\tilde h(x),\tilde h(y))\geq c_G\geq c_F.
\]
This shows that $\mathcal{G}$ is $(\mathcal{S},d)$-sensitive and, since both $\mathcal{S}$ and $d$ were arbitrary, the result follows.
\end{proof}
\begin{comment}
\begin{lemma}\label{l:uniequicg}
Let $\mathcal{G}\curvearrowright X$ be a compactly generated pseudogroup and let $(U,F,\widetilde{F})$ be a system of compact generation, then the following conditions are equivalent:
\begin{enumerate}
\item \label{i:uniequicgpg}$\mathcal{G}$ is equicontinuous.
\item \label{i:uniequicgpsg}There is a generating pseudo{\textasteriskcentered}group $S\subset \mathcal{G}$ and a compatible metric $d$ on $\overline{U}$ such that
every point in $U$ is a point of $(S,d)$-equicontinuity.
\end{enumerate}
\end{lemma}
\begin{proof}
By the shrinking lemma REF, we can take
\end{proof}
\end{comment}
\subsection{Sensitive group actions whose induced pseudogroups are not sensitive}\label{s:nonsensitiveaction}
In this section we construct the counterexamples of Theorem~\ref{t:linked}. Let us start by defining a family of linked twists on the $2$-torus $\mathbb{T}^2$:
Let $p_z$ ($z\in\mathbb{Z}$) denote the following integer-indexed sequence of real numbers:
\[
p_z=\begin{cases}
1-2^{-1-z} \quad &\text{if}\ z\geq 1,\\
2^{z-2} \quad &\text{if}\ z\leq 0.
\end{cases}
\]
Let
\[H=\{(x,y)\in \mathbb{T}^2\mid 1/4\leq y\leq 3/4\},\]
and
\[
V_z=\{(x,y)\in \mathbb{T}^2\mid p_z\leq x\leq p_{z+1}\}\qquad (z\in \mathbb{Z}).
\]
Let $T_h\colon \mathbb{T}^2\to \mathbb{T}^2$ be the horizontal twist defined by
\[
T_h(x,y)=\begin{cases}
(x+2(y-\frac{1}{4}),y)\quad &\text{if}\ (x,y)\in H,\\
(x,y)\quad &\text{else};
\end{cases}
\]
and, for $m\in\mathbb{N}$, let $T_{v,m}\colon\mathbb{T}^2\to \mathbb{T}^2$ be the vertical twist:
\[
T_{v,m}(x,y)=\begin{cases}
(x,y+2^{2+|z|}(x-p_z))\quad &\text{if}\ (x,y)\in V_z,\ |z|\leq m,\\
(x,y)\quad &\text{else}.
\end{cases}
\]
Letting $T_m=T_{v,m}\circ T_h$ we obtain a sequence of linked twist maps on $\mathbb{T}^2$.
By Theorem~\ref{t:twisttt}, $T_m$ is topologically transitive on
\[
M_m= H\cup \bigcup_{|z|\leq m} V_z
\]
and is the identity on $\mathbb{T}^2\setminus M_m$ for every $m\in\mathbb{N}$. Note that, by the definition of the sequence $p_z$, we have
\begin{equation}\label{unionmm}
\bigcup_{m\geq 0} M_m=\mathbb{T}^2 \setminus\big(\{0\}\times([0,1/4)\cup (3/4,1])\big).
\end{equation}
Moreover, $T_m$ is affine in $\mathbb{T}^2\setminus \Delta_m$, where
\[
\Delta_m:= \partial H\cup \bigcup_{|z|\leq m}T_h^{-1}(\partial V_z).
\]
\begin{lemma}
Let $(p/q,r/s)\in \mathbb{T}^2$ be a point with rational coordinates. Then, for every $m\geq 0$, its $T_m$-orbit is contained in the finite set
\[
\left\{\left(\frac{l_1}{d},\frac{l_2}{d}\right)\mid l_1,l_2\in \{0,\ldots,d-1\}\right\},
\]
where $d=\lcm(q,s,2^{m+2})$.
\end{lemma}
\begin{proof}
Follows from the definitions of $T_h$ and $T_{v,m}$: both maps send points with coordinates in $\frac{1}{d}\mathbb{Z}$ to points with coordinates in $\frac{1}{d}\mathbb{Z}$, since $4\mid d$, the twist coefficients $2^{2+|z|}$ are integers, and the endpoints $p_z$ with $|z|\leq m$ belong to $\frac{1}{d}\mathbb{Z}$.
\end{proof}
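As a quick sanity check of the lemma (and of the formulas defining $T_m$), the following Python sketch iterates $T_m$ using exact rational arithmetic and verifies that the orbit of a sample rational point stays in the grid $\frac1d\mathbb{Z}\times\frac1d\mathbb{Z}$; the values $m=2$ and $(1/3,2/5)$ are arbitrary choices made only for this test.
\begin{verbatim}
# Exact-arithmetic check that T_m preserves the grid (1/d)Z x (1/d)Z
# for a rational starting point, with d = lcm(q, s, 2^(m+2)).
from fractions import Fraction
from math import lcm

def p(z):
    # The sequence p_z bounding the vertical annuli V_z.
    return 1 - Fraction(1, 2**(1 + z)) if z >= 1 else Fraction(1, 2**(2 - z))

def T_h(x, y):
    # Horizontal twist on H = {1/4 <= y <= 3/4}.
    if Fraction(1, 4) <= y <= Fraction(3, 4):
        x = (x + 2 * (y - Fraction(1, 4))) % 1
    return x, y

def T_v(x, y, m):
    # Vertical twist on the annuli V_z with |z| <= m.
    for z in range(-m, m + 1):
        if p(z) <= x <= p(z + 1):
            y = (y + 2**(2 + abs(z)) * (x - p(z))) % 1
            break
    return x, y

def T(x, y, m):
    return T_v(*T_h(x, y), m)

m = 2
x, y = Fraction(1, 3), Fraction(2, 5)   # the rational point (p/q, r/s)
d = lcm(3, 5, 2**(m + 2))               # d = lcm(q, s, 2^(m+2))
for _ in range(1000):
    x, y = T(x, y, m)
    assert (d * x).denominator == 1 and (d * y).denominator == 1
print("orbit stays in (1/d)Z x (1/d)Z with d =", d)
\end{verbatim}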
We are now in a position to introduce our examples. We begin by showing that an action $G\curvearrowright X$ with $G$ finitely generated but $X$ non-compact might be sensitive while the induced pseudogroup is not. Let $X=\mathbb{T}^2\times \mathbb{Z}$ and let
\[
\sigma((x,y),z)= (T_{|z|}(x,y), z),\qquad \tau((x,y),z)=((x,y),z+1).
\]
\begin{proposition}\label{p:gmathcalg}
The subgroup of $\Homeo(X)$ generated by $\sigma$ and $\tau$ is sensitive as a group action. The pseudogroup generated by $\sigma$ and $\tau$, however, is not sensitive to initial conditions (in the sense of Definition~\ref{d:sensitivity}).
\end{proposition}
\begin{proof}
We start by proving that the induced group action is sensitive. Let $d$ be a compatible metric on $X$ that restricts to the standard flat metric on every $\mathbb{T}^2\times\{z\}\cong \mathbb{T}^2$. Choose $c>0$ such that
\begin{itemize}
\item the action of $T_0$ on $M_0$ is $c$-sensitive (with respect to the flat metric on $\mathbb{T}^2\times\{0\}$),
\item $c<1/8$, and
\item $d(\mathbb{T}^2\times \{0\},\mathbb{T}^2\times \{z\})\geq c$ for all $z\neq 0$.
\end{itemize}
Let $((x,y),z)\in X$, and let $U\times\{z\}$ be a neighborhood of $((x,y),z)$. By~\eqref{unionmm}, $\bigcup_z M_{|z|}$ is dense in $\mathbb{T}^2$, so there is $n>0$ such that
\[\tau^n(U\times\{z\})\cap \left(M_{n+z}\times\{n+z\}\right)\neq \emptyset.\]
Since $T_{n+z}$ is topologically transitive on $M_{n+z}$, there is $m>0$ so that
\[
\sigma^m\tau^n(U\times\{z\})\cap (M_{0}\times\{n+z\})\neq \emptyset,
\]
and therefore
\[
\tau^{-n-z}\sigma^m\tau^n(U\times\{z\})\cap (M_{0}\times\{0\})\neq \emptyset.
\]
Let $\phi=\tau^{-n-z}\sigma^m\tau^n$ for the sake of simplicity. Since the action of $T_0$ is $c$-sensitive on $M_0$, if $\phi((x,y),z)\in M_0\times\{0\}$, then there are $l>0$ and $((u,v),z)\in U\times\{z\}$ such that
\[
d(\sigma^l\phi((x,y),z),\sigma^l\phi((u,v),z))\geq c.
\]
If $\phi((x,y),z)\notin M_0\times\{0\}$, then, since
the action of $T_0$ is topologically transitive on $M_0$, there are $l>0$ and $((u,v),z)\in U\times\{z\}$ such that
\[
\sigma^l\phi((u,v),z)\in \left[\frac{3}{8},\frac{5}{8}\right]^2\times\{0\}.
\]
But $[3/8,5/8]^2\subset H\subset M_0$, the map $T_0$ is the identity outside $M_0$, and $\phi((x,y),z)\notin M_0\times\{0\}$, so $\sigma^l\phi((x,y),z)=\phi((x,y),z)$ and
\[
d(\sigma^l\phi((x,y),z),\sigma^l\phi((u,v),z))\geq\frac{1}{8}\geq c.
\]
Let us now show that the pseudogroup $\mathcal{G}$ generated by $\sigma$ and $\tau$ is not sensitive. Consider the point $((0,0),0)\in X$. Note that, since $(0,0)\notin M_m$ for every $m\in \mathbb{N}$, $\sigma$ is the identity on a neighborhood of $((0,0),z)$ for every $z\in \mathbb{Z}$.
For each $z\in \mathbb{Z}$, let $U_z,$ $O_z$ be open neighborhoods of $(0,0)$ such that $\overline{O_z}\subset \mathbb{T}^2 \setminus M_{|z|+1}$ and $\overline{U_z}\subset O_z$. Now let
\[
U=\bigcup_{z\in\mathbb{Z}} (\mathbb{T}^2\setminus U_z)\times \{z\},\qquad O=\bigcup_{z\in\mathbb{Z}} O_z\times \{z\},
\]
and consider the pseudo{\textasteriskcentered}group $\mathcal{S}$ generated by
\[
F=\{\tilde \sigma, \tau|_U, \tau|_O\},
\]
where $\tilde \sigma$ is the restriction of $\sigma$ to the open set
\[
\bigcup_{z\in\mathbb{Z}}\bigl(\mathbb{T}^2\setminus \overline{U_z}\bigr)\times \{z\}
\]
(with this choice, $F$ still generates $\mathcal{G}$: $\sigma$ is the combination of $\tilde\sigma$ and the identity on $O$, and $\tau$ is the combination of $\tau|_U$ and $\tau|_O$).
Since
\[
\mathcal{G}((0,0),0)=\{((0,0),z)\mid z\in\mathbb{Z}\},
\]
the only map in $F$ whose domain meets the orbit of $((0,0),0)$ is $\tau|_O$. This map is an isometry with respect to $d$, so every map in $\mathcal{S}$ defined on $((0,0),0)$ is an isometry with respect to $d$, and therefore $\mathcal{G}$ is not sensitive to initial conditions.
\end{proof}
We have constructed the first counterexample of Theorem~\ref{t:linked}.
We can repurpose this machinery to obtain the second counterexample: an action $F_\omega\curvearrowright \mathbb{T}^2$ that does not satisfy the conclusion of Theorem~\ref{t:sicactionpseudogroup}, where $F_\omega$ is the free group on countably many generators. Define the action by sending a sequence of free generators of $F_\omega$ to the sequence of maps $T_m$, $m\geq 0$. The proof that the action $F_\omega\curvearrowright \mathbb{T}^2$ is sensitive while the induced pseudogroup is not is virtually identical to that of Proposition~\ref{p:gmathcalg}, so we leave the details to the reader.
\subsection{Main results}\label{s:main}
In this section we will prove Theorems~\ref{t:invariant},~\ref{t:auslanderyorkepseudo}, and~\ref{t:ttdposicpseudo}, in that order.
We begin with the following preliminary result, which follows the arguments of Lemma~8.8 and Theorem~15.1 in~\cite{AlvarezCandel}.
\begin{proposition}\label{p:invariant}
Let $\mathcal{G}$ act on a locally compact and separable metric space $(X,d)$, let $\mathcal{S}\subset \mathcal{G}$ be a generating pseudo{\textasteriskcentered}group, and let $\Phi\colon (X,\mathcal{G})\to (Y,\mathcal{H})$ be an equivalence. Then there is a generating pseudo{\textasteriskcentered}group $\mathcal{T}$ for $\mathcal{H}$, a generating set $\Phi_0$ for $\Phi$, and a metric $d'$ on $Y$ satisfying the following condition: if there are $x\in X$, $\epsilon,\delta>0$ such that
\[
d(x,u)<\delta\quad\Longrightarrow\quad d(sx,su)<\epsilon
\]
for every $u\in X$ and $s\in \mathcal{S}$ with $x,u\in\dom s$,
then
\[
d'(y,v)<\delta\quad\Longrightarrow\quad d'(ty,tv)\leq\epsilon
\]
for every $y\in \Phi_0(x)$, $v\in Y$, and $t\in \mathcal{T}$ with $y,v\in\dom t$.
\end{proposition}
\begin{comment}
\begin{proposition}\label{p:equiv}
Let $\mathcal{G}$ act on a locally compact metric space $(X,d)$, let $S$ be a generating set, and let $\Phi\colon (X,\mathcal{G})\to (Y,\mathcal{H})$ be an equivalence. Then there is a generating set $R$ for $\mathcal{H}$, a metric $d'$ on $Y$, and a subset $\Phi_0\subset\Phi$ satisfying the following conditions:
\begin{enumerate}[(i)]
\item \label{i:equivphi0cov} $\{\im \phi\mid \phi\in\Phi_0\}$ is an open covering of $Y$; and,
\item \label{i:equivphi0eq} if $x\in \Eq(S,d)$ \textup{(}resp., $x\in \Eq_N(S,d)$\textup{)} with respect to an assignment $\epsilon\mapsto\delta$, then, for every $\phi\in \Phi_0$ with $x\in \dom \phi$, $\phi x\in\Eq(R,d')$ \textup{(}resp., $\phi x\in\Eq_N(R,d')$\textup{)} with respect to the assignment $\epsilon\mapsto \delta(\delta(\epsilon/2))$.
\item if $x\in X$ and $\epsilon>0$ are such that there is neighborhood $U$ of $x$ so that $d(sx,sy)<\epsilon$ for every $y\in U$, $s\in S$ with $x,y\in \dom s$, then, for every $\phi\in \Phi_0$ with $x\in \dom \phi$, there is a neighborhood $V$ of $\phi x$ so that $d(rz,r\phi x)\leq \epsilon$ for every $r\in R$, $z\in V$, $y,z\in\dom r$.
\item if $x\in X$ and $\epsilon>0$ are such that there is neighborhood $U$ of $x$ so that $d(sy,sz)<\epsilon$ for every $y,z\in U$, $s\in S$ with $y,z\in \dom s$, then, for every $\phi\in \Phi_0$ with $x\in \dom \phi$, there is a neighborhood $V$ of $\phi x$ so that $d(ru,rv)\leq \epsilon$ for every $r\in R$, $u,v\in V$, $u,v\in\dom r$.
\end{enumerate}
\end{proposition}
\end{comment}
\begin{proof}
We begin by proving the following preliminary result.
\begin{claim}\label{c:phizero}
There is a subset $\widehat{\Phi}_0\subset \Phi$ such that
\begin{enumerate}[(a)]
\item $\dom \phi$ and $\im \phi$ are relatively compact for every $\phi\in\widehat{\Phi}_0$,
\item \label{i:phizeros} the map $\psi^{-1}\phi$ belongs to $\mathcal{S}$ for every $\phi,\psi\in\widehat{\Phi}_0$, and
\item \label{i:phizerox} $\{\im\phi\mid \phi\in\widehat{\Phi}_0\}$ is a locally finite open covering of $Y$.
\end{enumerate}
\end{claim}
First note that, since $Y$ is a locally compact and separable metric space and $\Phi$ is an equivalence, we can find a sequence, $\phi_1, \phi_2, \ldots$, in $\Phi$ such that
\begin{itemize}
\item every $\phi_n$ has an extension $\tilde\phi_n\in\Phi$ with $\overline{\dom\phi_n}\subset\dom\tilde\phi_n$,
\item $\dom \tilde \phi_n$ and $\im\tilde \phi_n$ are relatively compact for every $n\geq 1$, and
\item $\{\im \phi_n\mid n\geq 1\}$ and $\{\im \tilde \phi_n\mid n\geq 1\}$ are locally finite open coverings of $Y$.
\end{itemize}
We define now by induction on $n$ an increasing sequence of finite subsets $\widehat{\Phi}_{0,n}\subset\Phi$ ($n\geq 1$)
so that $\psi^{-1}\phi$ belongs to $\mathcal{S}$ for all $\phi,\psi\in\widehat{\Phi}_{0,n}$ and
\[
\im\phi_1\cup\cdots\cup\im\phi_n\subset\bigcup_{\phi\in\widehat{\Phi}_{0,n}}\im\phi.
\]
Let $\widehat\Phi_{0,1}=\{\phi_1\}$ and, for $n>1$, assume that we have defined $\widehat\Phi_{0,n-1}$ satisfying the required properties. Lemma~\ref{l:generate} yields a finite open covering $\{U_i\}_{i\in I}$ of $\overline{\dom\phi_n}$ such that every $U_i$ is relatively compact and the restriction of $\psi^{-1}\tilde\phi_n$ to every $U_i$ belongs to $\mathcal{S}$ for every $\psi\in\widehat\Phi_{0,n-1}$. Letting
\[
\widehat\Phi_{0,n}=\widehat\Phi_{0,n-1}\cup\{\,\tilde{\phi}_{n|U_i}\mid i\in I\,\},
\]
we get
\[
\im\phi_n\subset\bigcup_{i\in I} \tilde\phi_n(U_i)= \bigcup_{i\in I}\im(\tilde{\phi}_{n}|_{U_i}).
\]
The induction hypothesis now yields
\[
\im\phi_1\cup\cdots\cup\im\phi_n\subset\bigcup_{\phi\in\widehat\Phi_{0,n}}\im\phi.
\]
Let us show by cases that $\psi^{-1}\phi$ belongs to $\mathcal{S}$ for all $\phi, \psi\in\widehat\Phi_{0,n}$.
If $\phi, \psi\in\widehat\Phi_{0,n-1}$, then it follows from the induction hypothesis, so assume first that $\phi=\tilde\phi_{n|U_i}$ for some $i\in I$. If $\psi$ is also of the form $\tilde\phi_{n|U_j}$ for some $j\in I$, then $\psi^{-1}\phi$ is the identity on its domain, so $\psi^{-1}\phi\in \mathcal{S}$.
If, on the other hand, $\psi\in\widehat\Phi_{0,n-1}$, then $\psi^{-1}\phi=\psi^{-1}\tilde\phi_{n|U_i}\in \mathcal{S}$ by the definition of $U_i$. The only case remaining is when $\phi\in\widehat\Phi_{0,n-1}$ and $\psi=\tilde\phi_{n|U_i}$ for some $i\in I$, which follows from the previous argument because $\mathcal{S}$ is symmetric. This completes the proof of Claim~\ref{c:phizero} by taking $\widehat\Phi_0=\bigcup_n\widehat\Phi_{0,n}$.
We turn to the task of defining $\Phi_0$ and the generating pseudo{\textasteriskcentered}group $\mathcal T$.
Let $\{V_{\hat\phi}\mid\hat\phi\in\widehat\Phi_0\}$ be a shrinking of the covering $\{\im\hat\phi\mid\hat\phi\in\widehat\Phi_0\}$; that is,
$\bigcup_{\hat\phi\in\Phi_0} V_{\hat\phi}=Y$ and
$V_{\hat\phi}$ is an open set satisfying $\overline{V_{\hat\phi}}\subset \im \hat\phi$
for every $\hat\phi\in\widehat\Phi_0$.
\begin{claim}\label{c:p}
For every $x\in X$, there is a open neighborhood $U_x$ of $x$ such that
\[
U_x\cap V_{\hat\phi}\neq\emptyset\quad\Longrightarrow\quad U_x\subset \im\hat\phi
\]
for every $\hat\phi\in\widehat\Phi_0$.
\end{claim}
Since $\{\im\hat\phi\mid\hat\phi\in\widehat\Phi_0\}$ is locally finite by Claim~\ref{c:phizero}(\ref{i:phizerox}),
\[
U_x=\Big(\bigcap_{\hat\phi\in\widehat\Phi_0,\ x\in\im\hat\phi}\im\hat\phi\Big)\setminus\Big(\bigcup_{\hat\phi\in\widehat\Phi_0,\ x\notin\overline{V_{\hat\phi}}}\overline{V_{\hat\phi}}\Big)
\] is an open set that satisfies the required properties, proving Claim~\ref{c:p}.
For every $\hat\phi\in\widehat\Phi_0$, let $\{P_i\mid i\in I_{\hat\phi}\}$ be an open covering of $V_{\hat\phi}$ by open sets $P_i\subset V_{\hat\phi}$ such that every $P_i$ satisfies
\begin{align}\label{imv}
P_i\cap V_{\hat\psi}\neq\emptyset\quad &\Longrightarrow\quad P_i\subset\im\hat\psi\qquad \forall \hat\psi\in\widehat\Phi_0,\\
\label{diamhatphi}\diam \hat\phi^{-1}(P_i)&<d(\hat\phi^{-1}(P_i),\dom\hat\phi\setminus \hat\phi^{-1}(V_{\hat\phi})).
\end{align}
From now on, given $\hat\phi\in \widehat\Phi_0$ and $i\in I_{\hat\phi}$, let $\phi_i$ be shorthand for the restriction of $\hat\phi$ with image $P_i$, that is, $\hat\phi|_{\hat\phi^{-1}(P_i)}$. Let
\begin{align*}
\Phi_0&=\{\,\phi_i\mid \hat\phi\in\widehat\Phi_0,\ i\in I_{\hat \phi}\,\},\\
\mathcal T&=\{\, \psi_j s \phi_i^{-1}\mid\hat\phi,\hat\psi\in\widehat{\Phi}_0,\ \phi_i,\psi_j\in\Phi_0,\ s\in \mathcal{S}\,\}\cup\{\id_Y\}.
\end{align*}
It is elementary to check that $\mathcal{T}$ is a pseudo{\textasteriskcentered}group, so let us prove that $\mathcal{T}$ generates $\mathcal{H}$. Let $h\in \mathcal{H}$ and let $y\in\dom h$. By the definitions of $\{V_{\hat\phi}\mid \hat\phi\in\widehat\Phi_0\}$ and $\{P_i\mid i\in I_{\hat\phi}\}$, the collection
\[
\{\,P_i\mid i\in I_{\hat\phi},\ \hat\phi\in\widehat{\Phi}_0\,\}=\{\,\im \phi_i\mid i\in I_{\hat\phi},\ \hat\phi\in\widehat{\Phi}_0\,\}
\]
is an open covering of $Y$. Therefore, there are $\phi_i$, $\psi_j\in\Phi_0$ so that $y\in\im\phi_i$, $h(y)\in\im\psi_j$. Thus $\psi^{-1}_jh\phi_i\in\mathcal{G}$ by Definition~\ref{d:equiv}(\ref{i:equivgamma}). Since $\mathcal{S}$ generates $\mathcal{G}$, there must be an open neighbourhood $U$ of $\phi^{-1}_iy$ such that the restriction $s$ of $\psi^{-1}_jh\phi_i$ to $U$ belongs to $\mathcal{S}$, by Lemma~\ref{l:generate}. Then $h$ coincides with $\psi_j s\phi_i^{-1}\in \mathcal{T}$ over $\phi_i(U)$. We have proved that, for every $h\in \mathcal{H}$ and $y\in\dom h$, the restriction of $h$ to some open neighbourhood of $y$ belongs to $\mathcal{T}$, so $\mathcal{T}$ generates $\mathcal{H}$ by Lemma~\ref{l:generate}.
We now prove some preliminary results needed to define the metric $d'$. For each $\hat\phi\in\widehat{\Phi}_0$, let $D_{\hat\phi}\colon \im\hat\phi\times\im\hat\phi\to \mathbb{R}_{\geq 0}$ be the metric defined on the open set $\im\hat\phi$ by $D_{\hat\phi}(x,y)=d(\hat\phi^{-1}x,\hat\phi^{-1}y)$.
\begin{comment}
\begin{claim}\label{c:dphi}
\begin{enumerate}[(i)]
\item \label{i:cdphieq}Let $y\in Y$ and $\phi\in\Phi'_0$ be such that $y\in \im \phi$ and $\phi^{-1}y\in\Eq(S,d)$ with respect to an assignment $\epsilon\mapsto\delta(\epsilon)$. Then, for every $\epsilon>0$, every $r\in R$ with $y \in \dom r$, and every $z\in\dom r$,
\[
D_{\phi}(y,z)<\delta(\epsilon)\quad\Longrightarrow\quad D_{\psi}(ry,rz)<\epsilon
\]
for every $\psi\in\Phi'_0$ such that $ry,rz\in\im\psi$.
\item \label{i:cdphineareq} If $y\in Y$ is such that there is $\phi\in\Phi'_0$ such that with respect to an assignment $\epsilon\mapsto\delta(\epsilon)$ $\phi^{-1}y\in\Eq_N(S,d)$; then, for every $\epsilon>0$, every $r\in R$, and every $u,v\in \dom r$,
\[
D_{\phi}(y,u)<\delta(\epsilon)\quad \&\quad D_{\phi}(y,v)<\delta(\epsilon) \quad\Longrightarrow\quad D_{\psi}(ru,rv)<\epsilon
\]
for every $\psi\in\Phi'_0$ such that $u,v\in \im \phi$ and $ru,rv\in \im \psi$.
\end{enumerate}
\end{claim}
Let us prove~(\ref{i:cdphieq}).
By the definition of $R$, $r=\chi_j s\xi_i^{-1}$ for some $\chi,\xi\in\Phi'_0$, $i\in I_\xi$, $j\in I_\chi$, $s\in S^*$. Let $\psi\in\Phi'_0$ be such that $ry,rz\in\im\psi$, then
\[
r=\chi_j s\xi_i^{-1}=\psi (\psi^{-1}\chi_j s\xi_i^{-1}\phi)\phi^{-1}=\psi g\phi^{-1},
\]
where $g\in S^*$ by Claim~\ref{c:phizero}(\ref{i:phizeros}). Since $D_{\phi}(y,z)<\delta(\epsilon)$, we have $d(\phi^{-1}y,\phi^{-1}z)<\delta(\epsilon)$, and now~\eqref{eq} yields
\[
d(s\phi^{-1}y,s\phi^{-1}z)=D_{\psi}(ry,rz)<\epsilon.
\]
Let us prove~(\ref{i:cdphineareq}).
By the definition of $R$, $r=\chi_j s\xi_i^{-1}$ for some $\chi,\xi\in\Phi'_0$, $i\in I_\xi$, $j\in I_\chi$, $s\in S^*$. Let $\psi\in\Phi'_0$ be such that $ru,rv\in\im\dot\psi$, then
\[
r=\chi_j s\xi_i^{-1}=\psi (\psi^{-1}\chi_j s\xi_i^{-1}\phi)\phi^{-1}=\psi g\phi^{-1},
\]
where $g\in S^*$ by Claim~\ref{c:phizero}(\ref{i:phizeros}). Since $D_{\phi}(y,u)<\delta(\epsilon)$ and $D_{\phi}(y,v)<\delta(\epsilon)$, we have
\[
d(\phi^{-1}y,\phi^{-1}u)<\delta(\epsilon)\quad \text{and}\quad d(\phi^{-1}y,\phi^{-1}v)<\delta(\epsilon),
\] and now~\eqref{eq} yields
\[
d(s\phi^{-1}u,s\phi^{-1}v)=D_{\psi}(ru,rv)<\epsilon.
\]
This completes the proof of Claim~\ref{c:dphi}.
\end{comment}
If $u,v\in\im\hat\phi$ for some $\hat\phi\in\widehat{\Phi}_0$, let
\begin{equation*}
\overline D(u,v)=\sup \{\,D_{\hat\phi}(u,v)\mid \hat\phi\in \widehat{\Phi}_0,\ u,v\in\im\hat\phi\,\}.
\end{equation*}
A pair $(u,v)\in Y\times Y$ is \emph{admissible} if there is $\hat\phi\in\widehat{\Phi}_0$ such that $u,v\in V_{\hat\phi}$ and
\[
\{u,v\}\cap V_{\hat\psi}\neq\emptyset\quad\Longrightarrow\quad\{u,v\}\subset \im\hat\psi\qquad \forall\hat\psi\in\widehat{\Phi}_0.
\]
Let $S_{u,v}$ be the set of sequences $(z_0,\ldots,z_n)$ of arbitrary finite length with $z_0=u$, $z_n=v$, and such that $(z_{i-1},z_{i})$ is an admissible pair for every $i=1,\ldots,n$. The following properties are elementary:
\begin{align}\label{suvref}
(u,u)&\in S_{u,u},\\
(z_0,\ldots,z_n)\in S_{u,v}&\Longrightarrow (z_n,\ldots,z_0)\in S_{v,u},\label{suvsym}\\
\label{suvtrans}
\begin{rcases*}
(z_0,\ldots,z_m)\in S_{u,v}\\
(z_m,\ldots,z_{m+n})\in S_{v,w}
\end{rcases*} &\Longrightarrow (z_0,\ldots,z_{m+n})\in S_{u,w}.
\end{align}
Set
\begin{equation}\label{definitiondprime}
d'(u,v)=\begin{cases}\infty \qquad &\text{if}\quad S_{u,v}=\emptyset,\\ \inf_{(z_0,\ldots,z_n)\in S_{u,v}}\sum_{k=1}^n\overline{D}(z_{k-1},z_k) \qquad &\text{if}\quad S_{u,v}\neq\emptyset.\end{cases}
\end{equation} It follows from~\eqref{suvref}--\eqref{suvtrans} that $d'$ is a pseudometric in $Y$. To prove that it is actually a metric, we need the following result.
\begin{claim}\label{c:vimphi}
Let $\hat\phi\in\widehat\Phi_0$, let $u\in \im\hat\phi$, and let $v\in Y$ be such that $S_{u,v}\neq\emptyset$. Then
\[
d'(u,v)\geq\begin{cases}
\min\{D_{\hat\phi}(u,v), D_{\hat\phi}(u,\im\hat\phi\setminus V_{\hat\phi})\} &\text{if}\ v\in V_{\hat\phi},\\
D_{\hat\phi}(u,\im\hat\phi\setminus V_{\hat\phi}) &\text{if}\ v\notin V_{\hat\phi}.
\end{cases}
\]
\end{claim}
We may assume that $u\in V_{\hat\phi}$, since otherwise $D_{\hat\phi}(u,\im\hat\phi\setminus V_{\hat\phi})=0$ and the claimed bounds are trivial. Let $(z_0,\ldots,z_n)\in S_{u,v}$. Suppose first that $z_k\in V_{\hat\phi}$ for every $k=0,\ldots,n$; then
\[
\sum_{k=1}^n\overline{D}(z_{k-1},z_k)\geq \sum_{k=1}^n D_{\hat\phi}(z_{k-1},z_k)\geq D_{\hat\phi}(z_0,z_n)=D_{\hat\phi}(u,v)
\]
by the triangle inequality. Otherwise, let $m$ be the largest index with $z_0,\ldots,z_m\in V_{\hat\phi}$, so that $0\leq m<n$ and $z_{m+1}\notin V_{\hat\phi}$. Since $(z_m,z_{m+1})$ is an admissible pair and $z_m\in V_{\hat\phi}$, we get $z_{m+1}\in \im\hat\phi\setminus V_{\hat\phi}$. Therefore
\[
\sum_{k=1}^n\overline{D}(z_{k-1},z_k)\geq \sum_{k=1}^{m+1} D_{\hat\phi}(z_{k-1},z_k)\geq D_{\hat\phi}(z_0,z_{m+1})\geq D_{\hat\phi}(u,\im\hat\phi\setminus V_{\hat\phi});
\]
this completes the proof of Claim~\ref{c:vimphi}.
\begin{claim}\label{c:compatible}
$d'$ is a compatible metric on $Y$.
\end{claim}
Let us prove that $d'$ is a metric: Let $u,v\in Y$ be such that $d'(u,v)=0$, so $S_{u,v}\neq\emptyset$. Take any $\hat\phi\in\widehat\Phi_0$ such that $u\in V_{\hat\phi}$. Since
\[
D_{\hat\phi}(u,\im\hat\phi\setminus V_{\hat\phi})>0,
\]
it follows from Claim~\ref{c:vimphi} that $v\in V_{\hat\phi}$ and $D_{\hat\phi}(u,v)\leq d'(u,v)=0$. But $D_{\hat\phi}$ is a metric on $\im\hat\phi$, so $u=v$ as desired.
Let us show that $d'$ is a compatible metric on $Y$. We start by showing that every neighborhood in $Y$ contains a $d'$-ball, so let $U$ be a neighborhood of a point $x\in Y$. Since $\{\im\hat\phi\mid\hat\phi\in\widehat{\Phi}_0\}$ is a locally finite cover, we may write
\[
\{\hat\phi\in\widehat{\Phi}_0\mid x\in V_{\hat\phi}\}=\{\hat\phi_1,\ldots,\hat\phi_n\}
\]
for some $n\in\mathbb{N}$.
The metrics $D_{\hat\phi_i}$ are compatible over $\im\hat\phi_i$, so we can find some $r>0$ satisfying
\begin{align*}
B_{D_{\hat\phi_i}}(x,r)\subset U,\qquad
D_{\hat\phi_i}(B_{D_{\hat\phi_i}}(x,r),\im\hat\phi_i\setminus V_{\hat\phi_i})>r
\end{align*}
for every $i=1,\ldots,n$; then, for every $y\in B_{d'}(x,r)$,
\[
r>d'(x,y)\geq D_{\hat\phi_i}(x,y)
\]
by Claim~\ref{c:vimphi}, so $y\in B_{D_{\hat\phi_i}}(x,r)$ and hence $y\in U$.
Consider now a ball $B_{d'}(x,r)$. Choose a neighborhood $U$ of $x$ small enough so that
\begin{align*}
x\in V_{\hat\phi_i}\ \Longrightarrow\ U\subset V_{\hat\phi_i},\qquad
U\subset B_{D_{\hat\phi_i}}(x,r)
\end{align*}
for $i=1,\ldots,n$.
This means that $(x,u)\in S_{x,u}$ for every $u\in U$, so
\[
d'(x,u)\leq\overline{D}(x,u)=\sup_i D_{\hat\phi_i}(x,u)<r.
\]
Hence $U\subset B_{d'}(x,r)$, proving Claim~\ref{c:compatible}.
\begin{comment}, note first that every metric $D_{\hat\phi}$ is compatible over $\im\hat\phi$ because $\hat\phi$ is a homeomorphism. Let $\hat\phi\in \widehat\Phi_0$ and let $y\in \im\hat\phi$. Claim~\ref{c:vimphi} yields that $d'(y,\cdot)\geq D_{\hat\phi}(y,\cdot)$ in a small neighborhood around $y$, so every $d'$-ball contains a neighborhood of $y$. On the other hand, since $\{\im\hat\phi\}$ is locally finite, there are only finitely many $\hat\phi$ with $y\in\im\hat\phi$. Hence, for every neighborhood $U$ of $y$, there is $s>0$ such that every closed ball $D_{\hat\phi_i}(y,s)$ is contained in $U$. Now~\eqref{defnoverlined} and~\eqref{definitiondprime} yield $B_{d'}(x,s)\subset U$. This concludes the proof of Claim~\ref{c:compatible}.
\end{comment}
Let $\epsilon,\delta>0$, and $x\in X$ be as in the statement of the theorem. Let $\phi_i\in\Phi$ with $x\in\dom\phi_i$, let $y=\phi_i(x)$, let $t\in \mathcal{T}$ satisfy $y\in \dom t$, and let $v\in \dom t\cap B_{d'}(y,\delta)$.
The map $t$ is of the form $\psi_j s\phi^{-1}_i$, where $s\in \mathcal{S}$ and $\psi_j,\phi_i\in\Phi_0$ for some $\hat\psi,\hat\phi\in\widehat{\Phi}_0$. In particular, $y,v\in\dom t$ implies $y,v\in \im\phi_i=P_i$. Hence $(y,v)\in S_{y,v}$ by~\eqref{imv} and
\[
D_{\hat\phi}(y,v)<D_{\hat\phi}(y,\im\hat\phi\setminus V_{\hat\phi})
\]
by~\eqref{diamhatphi}, so Claim~\ref{c:vimphi} yields
\[D_{\hat\phi}(y,v)\leq d'(y,v)< \delta, \]
and therefore
\[
d(x,u)<\delta
\]
by the definition of $D_{\hat\phi}$, where $u=\phi_i^{-1}v$.
Let $\hat\chi\in\widehat{\Phi}_0$, $\chi_k\in \Phi_0$ be such that $ty,tv\in \im \chi_k$. Then
\[
d(\chi_k^{-1}ty,\chi_k^{-1}tv)=d((\chi_k^{-1}\psi_j)sx,(\chi_k^{-1}\psi_j)su).
\]
Since $\chi_k^{-1}\psi_j\in \mathcal{S}$ by Claim~\ref{c:phizero}(\ref{i:phizeros}) and $d(x,u)<\delta$, we have
\[
D_{\hat\chi}(ty,tv)=d(\chi_k^{-1}ty,\chi_k^{-1}tv)<\epsilon.
\]
Hence $\overline{D}(ty,tv)\leq \epsilon$.
Both $sx$ and $su$ belong to $\dom\psi_j\subset V_{\hat\psi}$, so~\eqref{definitiondprime} yields
\[
d'(ty,tv)\leq\overline D(ty,tv)\leq\epsilon.\qedhere
\]
\begin{comment}
Finally, let $y\in Y$ and $\phi\in\Phi_0$ be such that $\phi^{-1}y\in\Eq_N(S,d)$, let us prove that $T$ and $d'$ satisfy~\eqref{neareq} for a correspondence $\epsilon\mapsto\delta'(\epsilon)$.
Define $\mu$ and $\delta'$ as before.
Let $\epsilon>0$, let $t\in T$, and let $u,v\in X'$ be such that $u,v\in\dom t$, $d'(y,u)<\delta'(\epsilon)$ and $d(y,v)<\delta'(\epsilon)$.
Then the map $t$ can be expressed as $\psi s\phi^{-1}$, where $\psi\in\Phi_0$ and $s\in S^*$. In particular, $u,v\in\dom t$ implies $u,v\in \im\phi$. Hence $\{y,u\}\in S_{y,u}$, $\{y,v\}\in S_{y,v}$,
\[
d(y,u)<d(y,\im\hat\phi\setminus \im\phi),\quad \text{and}\quad d(y,v)<d(y,\im\hat\phi\setminus \im\phi)
\]
by~\eqref{imv}, so Claim~\ref{c:vimphi} yields
\[
d'(y,u)\geq \overline{D}(y,u) \quad \text{and}\quad d'(y,v)\geq \overline{D}(y,v).
\]
Therefore
\begin{align*}
D_{\hat\phi}(y,u)=d(\phi^{-1}y,\phi^{-1}u)=d(y',u')<\delta'(\epsilon),\\
D_{\hat\phi}(x',z')=d(\phi^{-1}y,\phi^{-1}v)=d(y',v')<\delta'(\epsilon),\\
\end{align*}
where $y'=\phi^{-1}y$, $u'=\phi^{-1}u$, and $v'=\phi^{-1}v$.
Both $su'$ and $sv'$ belong to $\dom\psi=W(\hat\psi)$, so Claim~\ref{c:overlineD}(\ref{i:overlineDneareq}) yields
\[
\overline D(\psi su',\psi sv')=\overline D(tu,tv)\leq\epsilon/2<\epsilon.
\]
Again, $tu$, $tv\in \im\psi$, so $(tu,tv)\in S_{tu,tv}$, and therefore
\[
d'(tu,tv)<\epsilon
\]
by~\eqref{definitiondprime}.
\end{comment}
\end{proof}
\begin{corollary}\label{c:sensitive}
Being sensitive to initial conditions is invariant by equivalences of pseudogroups acting on locally compact spaces.
\end{corollary}
\begin{proof}
Suppose that $\mathcal{G}$ is not sensitive; then there is a metric $d$ and a generating pseudo{\textasteriskcentered}group $\mathcal{S}$ such that, for every $\epsilon>0$, there are $x_\epsilon$ and $\delta_\epsilon$ with
\[
d(x_\epsilon,u)<\delta_\epsilon\quad\Longrightarrow\quad d(s(x_\epsilon),s(u))<\epsilon
\]
for all $u\in X$ and $s\in\mathcal{S}$ with $x_\epsilon,u\in\dom s$.
Let $(Y,\mathcal{H})$ be a pseudogroup, and let $\Phi\colon (X,\mathcal{G})\to (Y,\mathcal{H})$ be an equivalence. Proposition~\ref{p:invariant} yields a generating pseudo{\textasteriskcentered}group $\mathcal{T}$ for $\mathcal H$, a generating set $\Phi_0\subset\Phi$, and a metric $d'$ on $Y$. Since $\Phi_0$ generates $\Phi$, the open set
\[U =\bigcup_{\phi\in\Phi_0}\dom\phi\] meets every $\mathcal{G}$-orbit, and therefore we may assume without loss of generality that every $x_\epsilon$ is contained in $U$. Letting $y_\epsilon\in\Phi_0(x_\epsilon)$, Proposition~\ref{p:invariant} yields
\[
d'(y_\epsilon,v)<\delta_\epsilon\quad\Longrightarrow\quad d'(t(y_\epsilon),t(v))\leq\epsilon
\]
for every $v\in Y$ and $t\in \mathcal{T}$ with $y_\epsilon,v\in \dom t$; this shows that $\mathcal{H}$ is not sensitive to initial conditions.
We have shown that if $\mathcal{G}$ is not sensitive, then neither is $\mathcal{H}$; the result now follows from the symmetry of the equivalence relation for pseudogroups.
\end{proof}
\begin{corollary}\label{c:equicontequivalence}
Let $\Phi\colon (X,\mathcal{G})\to (Y,\mathcal{H})$ be an equivalence of pseudogroups acting on locally compact spaces. If $x\in X$ is a point of $( \mathcal S,d)$-equicontinuity for $\mathcal{G}$, then, with the notation of Proposition~\ref{p:invariant}, every $y\in \Phi_0 x$ is a point of $(\mathcal T,d')$-equicontinuity for $\mathcal{H}$. In particular, $\mathcal{G}$ is almost equicontinuous if and only if $\mathcal{H}$ is.
\end{corollary}
\begin{proof}
The first assertion follows immediately from Proposition~\ref{p:invariant} and the definition of equicontinuous point. Suppose now that the points of $(\mathcal S,d)$-equicontinuity are dense in $X$; then they are also dense in the open set $\bigcup_{\phi\in\Phi_0} \dom \phi$. Since $\Phi_0$ sends points of $(\mathcal S,d)$-equicontinuity to points of $(\mathcal T,d')$-equicontinuity and $\{\im\phi\mid\phi\in\Phi_0\}$ is an open covering of $Y$, the points of $(\mathcal T,d')$-equicontinuity are dense in $Y$.
\end{proof}
Corollaries~\ref{c:sensitive} and~\ref{c:equicontequivalence} together with Lemma~\ref{l:dppinvariant} yield Theorem~\ref{t:invariant}.
We turn our attention to the Auslander-Yorke dichotomy for pseudogroups, which we will subsequently use to prove Theorem~\ref{t:ttdposicpseudo}.
\begin{proof}[Proof of Theorem~\ref{t:auslanderyorkepseudo}]
Suppose that $\mathcal{G}$ is not sensitive; thus, there is a metric $d$ and a generating pseudo{\textasteriskcentered}group $\mathcal{S}$ so that, for every $n\in \mathbb{N}$, there are $x_n\in X$ and $\delta_n>0$ satisfying
\begin{equation}\label{auslanderyorke}
d(x_n,y)<\delta_n\quad\Longrightarrow\quad d(s(x_n),s(y))<1/n
\end{equation} for every $y\in X$ and $s\in \mathcal{S}$ with $x_n,y\in\dom s$.
Using Lemma~\ref{l:systemcg}, choose a system of compact generation $(U,F,\widetilde F)$ with $\widetilde F\subset \mathcal{S}$; note that any point in $\mathcal{G}x_n$ still satisfies~\eqref{auslanderyorke}, perhaps with a different $\delta_n$, so we may assume without loss of generality that the sequence $x_n$ is contained in $U$. We also have $1/n<\sigma(U,F,\widetilde{F})$ for $n$ large enough, and now Proposition~\ref{p:halo} yields the existence of a sequence $r_n>0$ such that
\begin{equation}\label{auslanderyorkeftilde}
\dom f\cap B(x_n,r_n)\neq \emptyset\quad\Longrightarrow\quad B(x_n,r_n)\subset \dom \tilde f
\end{equation}
for every $f\in \langle F\rangle$. We will suppose, by passing to a subsequence if necessary, that every $x_n$ satisfies~\eqref{auslanderyorkeftilde}; moreover, we will also assume by decreasing $r_n$ that $B(x_n,r_n)\subset U$.
Note that
\begin{equation}\label{auslanderyorkediam}
\diam f(B(x_n,r_n))<2/n
\end{equation} for every $f\in F$. Indeed, otherwise there would be some $f\in F$ with
\begin{equation*}
B(x_n,r_n)\subset \dom \tilde f,\qquad\diam \tilde f(B(x_n,r_n))\geq 2/n
\end{equation*}
by~\eqref{auslanderyorkeftilde}.
But then the triangle inequality would yield $d(\tilde f(x_n),\tilde f(y))\geq 1/n$ for some $y\in B(x_n,r_n)$, contradicting~\eqref{auslanderyorke}.
Let
\[
V_n=\bigcup_{f\in\langle F\rangle} f(B(x_n,r_n))\qquad \text{for}\ n\geq 1,
\] which are clearly open subsets of $U$. Moreover, topological transitivity implies that every $V_n$ is dense in $U$, so by the Baire Category Theorem $\bigcap_n V_n$ is also a dense subset of $U$.
Let us show that every $x\in \bigcap_{n\geq 1} V_n$ is a point of $( F^*,d)$-equicontinuity. Assume for the sake of contradiction that there is $c>0$ such that, for every $r>0$, there are $f\in F^*$ and $y\in B(x,r)$ such that
\[x,y\in\dom f,\qquad d(f(x),f(y))\geq c.\]
Choose $m$ large enough so that $2/m<c/2$. Since $x\in \bigcap_{n\geq 1} V_n$, there is some $g\in F^*$ such that $g(x)\in B(x_m,r_m)$; by continuity of $g$, we may take $r$ small enough that $g(B(x,r)\cap\dom g)\subset B(x_m,r_m)$. By assumption, there are then also $y\in B(x,r)$ and $f\in F^*$ satisfying
\[
y\in \dom g\cap \dom f,\quad g(y)\in B(x_m,r_m),\quad d(f(x),f(y))\geq c.
\]
But then
\[
d(fg^{-1}(y'),fg^{-1}(x'))=d(f(x),f(y))\geq c
\]
for $x'=g(x)$, $y'=g(y)$,
and now~\eqref{auslanderyorkeftilde} and the triangle inequality yield
\[
\max\{d(\tilde f\tilde g^{-1}(x_m),\tilde f\tilde g^{-1}(x')),d(\tilde f\tilde g^{-1}(x_m),\tilde f\tilde g^{-1}(y'))\}\geq c/2>2/m,
\]
contradicting~\eqref{auslanderyorkediam}.
We have proved that, if $\mathcal{G}\curvearrowright X$ is topologically transitive and not sensitive to initial conditions, then there is a metric $d$ on $X$ and a system of compact generation $(U,F,\widetilde F)$ such that the set of points of $(F^*,d)$-equicontinuity is dense in $U$.
Since $\mathcal{G}$ is equivalent to $\mathcal{G}|_U$ by Lemma~\ref{l:equivid}, Corollary~\ref{c:equicontequivalence} yields that $\mathcal{G}$ is almost equicontinuous.
If $\mathcal{G}$ is minimal, then it is trivial to check that $\bigcap_{n\geq 1} V_n=U$, so by the previous argument there are $d$ and $(U,F,\widetilde F)$ so that every point in $U$ is a point of $(F^*,d)$-equicontinuity. The result then follows by Lemma~\ref{l:uniformity} and Proposition~\ref{p:equicontinuityinvariant}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{t:ttdposicpseudo}]
Let $\mathcal{S}$ be a generating pseudo{\textasteriskcentered}group for $\mathcal{G}$ and let $d$ be a compatible metric.
By the previous result, it is enough to show that, for every $(\mathcal{S},d)$, the set of points of $(\mathcal{S},d)$-equicontinuity is empty.
Let $(U,F,\widetilde F)$ be a system of compact generation for $\mathcal{G}$ satisfying $\widetilde F\subset \mathcal{S}$.
Suppose for the sake of contradiction that $x\in U$ is a point of $(\mathcal{S},d)$-equicontinuity. Then it must satisfy Proposition~\ref{p:halo}(\ref{i:nothalo}) with some $\rho>0$.
By Definition~\ref{d:dpo} and Corollary~\ref{c:dpocg}, there are points \[
q_1\in B(x,\rho/2),\qquad q_2\in B(x,\rho/2)\setminus \mathcal{G}q_1
\]with finite $\mathcal{G}|_U$-orbits. Letting
\[
\epsilon<\frac{1}{2}d(\mathcal{G}|_Uq_1,\mathcal{G}|_Uq_2) \]
and using the triangle inequality, we can choose a point $q\in\{q_1,q_2\}$ satisfying
\begin{equation}
\label{gq}d(x,\mathcal{G}q)> \epsilon.
\end{equation}
For every $n\geq 1$, let $p_n$ be a point in $B(x,\rho/n)$ with finite $\mathcal{G}|_U$-orbit. Since $\mathcal{G}p_n\cap B(x,\rho)$ is finite, there is a finite set $K_n\subset \langle F\rangle$ satisfying that, for every $y\in \mathcal{G}p_n\cap B(x,\rho)$, there is $k\in K_n$ with $ky=p_n$; moreover, since $y\in B(x,\rho)$, each map $k\in K_n$ may be extended to a map $\tilde k\in \langle \widetilde F\rangle$ with $B(x,\rho)\subset \dom \tilde k$. For each $n$, let $\widetilde K_n$ denote the collection of all such extensions. Since $\widetilde K_n$ is a finite set of maps, there is a neighborhood $W_n$ of $q$ so that $W_n\subset B(x,\rho/2)$ and $\tilde k(W_n)\subset B(\tilde k(q),\epsilon/4)$ for every $\tilde k\in\widetilde K_n$.
Since $\mathcal{G}|_U$ is topologically transitive, there are maps $f_n$ and points $v_n\in B(x,\rho/n)$ such that $f_nv_n\in W_n$; again, $B(x,\rho)\subset \dom\tilde f_n$ for all $n$.
If $d(\tilde f_nv_n,\tilde f_np_n)\geq \rho/2$ for infinitely many $n$, then the triangle inequality yields
\[\max\{d(\tilde f_nx,\tilde f_np_n),d(\tilde f_nx,\tilde f_nv_n)\}\geq \rho/4,\]
showing that $x$ is not a point of $(\mathcal{S},d)$-equicontinuity, a contradiction.
Hence, we may assume
$d(\tilde f_nv_n,\tilde f_np_n)< \rho/2$ for $n$ large enough.
In particular, since $\tilde f_nv_n\in W_n\subset B(x,\rho/2)$, we have $\tilde f_np_n\in B(x,\rho)$, so there are maps $k_n$ in $K_n$ satisfying $k_nf_n(p_n)=p_n$ and $B(x,\rho)\subset \dom \tilde k_n\tilde f_n$ for $n$ large enough.
Now we have
\begin{equation}\label{bigineq}
\max\{d(\tilde k_n\tilde f_n p_n,\tilde k_n\tilde f_n x),d(\tilde k_n\tilde f_n x, \tilde k_n\tilde f_n v_n)\}\geq \epsilon/4
\end{equation} because, otherwise, the triangle inequality and $\tilde k_n\tilde f_n p_n=p_n$ would yield
\begin{align*}
d(x,\tilde k_n q)&\leq d(x,p_n)+d(\tilde k_n\tilde f_n p_n,\tilde k_n\tilde f_n x)+\\
&\phantom{\leq d(x,} +d(\tilde k_n\tilde f_n x, \tilde k_n\tilde f_n v_n)+d(\tilde k_n\tilde f_n v_n,\tilde k_n q )\\
&<\epsilon/4+\epsilon/4+\epsilon/4+\epsilon/4\\
&=\epsilon,
\end{align*}
contradicting~\eqref{gq}.
The inequality in~\eqref{bigineq} and the fact that the sequences $\{v_n\}$ and $\{p_n\}$ converge to $x$ are at odds with the assumption that $x$ was a point of $(\mathcal{S},d)$-equicontinuity. Since $x$ was an arbitrary point, we infer that the set of points of $(\mathcal{S},d)$-equicontinuity is empty, as desired.
\end{proof}
\subsection{A non-compactly generated, countably generated and topologically transitive pseudogroup that is not sensitive}\label{s:cantor}
In this section we construct a counterexample showing that compact generation is a necessary condition in the statement of Theorem~\ref{t:ttdposicpseudo}.
Let $Y=\{0,1\}^\mathbb{Z}$, which is a Cantor set.
We use the Greek letters $\alpha,\beta,\ldots$ to denote elements of $Y$ and the notation $\alpha=(\alpha_i)_{i\in \mathbb{Z}}, \beta=(\beta_i)_{i\in \mathbb{Z}}$. Let $\sigma\colon Y\to Y$ be the shift function, defined by
\[
(\sigma(\alpha))_i= \alpha_{i+1}.
\]
Let $G$ denote the subgroup of $\Homeo(Y)$ generated by $\sigma$, endowed with the obvious action $G\curvearrowright Y$. It is well-known that the action $G\curvearrowright Y$ is topologically transitive and has density of periodic orbits.
The point $\mu:=(\ldots,0,0,0,\ldots)$ is a fixed point of $G$. For $n\geq 0$, let
\[
U_n =\{\, \alpha\in Y \mid \alpha_i=0\ \text{for}\ |i|<n \,\}.
\]
Note that $U_0=Y$ and that $\{U_n\}$ is a system of clopen neighbourhoods for $\mu$. We also have
\begin{equation}\label{sigmasigma-1}
\sigma(U_{n+1}),\ \sigma^{-1}(U_{n+1})\subset U_{n}\qquad \text{for all}\ n\geq 0.
\end{equation}
Let $X=\{\,(n,\alpha)\in\mathbb{N}\times Y\mid \alpha\in U_n\,\}$; that is, $X$ is the disjoint union $\bigsqcup_{i\geq 0}U_i$. Let $f,g\in\Ph(X)$ be defined by
\begin{align}
f(n,\alpha)&=(n,\sigma(\alpha)),\quad & &\dom f=\{\, (n,\alpha)\in X\mid\sigma(\alpha)\notin U_{n+2}\,\}, \label{defnf}\\
g(n,\alpha)&=(n+1,\alpha),\quad & &\dom g = \{\,(n,\alpha)\in X\mid \alpha\in U_{n+1}\,\}.
\end{align}
Note that $\dom f$, $\im f$, $\dom g$, and $\im g$ are clopen subsets of $X$. Finally, let $\mathcal{G}$ be the pseudogroup generated by $f$ and $g$.
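Although not needed for the arguments below, the generators are easy to experiment with on points of $Y$ having finitely many $1$'s. The following Python sketch is an informal illustration only: the finite-support encoding and all function names are ours, and it simply evaluates $f$ and $g$ (returning \texttt{None} outside their domains) on a sample point.
\begin{verbatim}
# Informal sketch: a point of Y = {0,1}^Z with finitely many 1's is encoded
# as the frozenset of positions carrying a 1.  All names are ad hoc.

def shift(a):
    """(sigma(alpha))_i = alpha_{i+1}: move every 1 one step to the left."""
    return frozenset(i - 1 for i in a)

def in_U(a, n):
    """alpha lies in U_n  iff  alpha_i = 0 whenever |i| < n."""
    return all(abs(i) >= n for i in a)

def f(p):
    """f(n, alpha) = (n, sigma(alpha)), defined iff sigma(alpha) not in U_{n+2}."""
    n, a = p
    return (n, shift(a)) if not in_U(shift(a), n + 2) else None

def g(p):
    """g(n, alpha) = (n + 1, alpha), defined iff alpha lies in U_{n+1}."""
    n, a = p
    return (n + 1, a) if in_U(a, n + 1) else None

alpha = frozenset({1})   # the sequence with a single 1, at position 1
p = (0, alpha)           # a point of X, since alpha lies in U_0 = Y
print(g(p))              # (1, alpha): alpha lies in U_1, so one climb is allowed
print(g(g(p)))           # None: alpha is not in U_2, so (1, alpha) is outside dom g
print(f(p))              # (0, frozenset({0})): sigma(alpha) is not in U_2
\end{verbatim}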
\begin{lemma}\label{l:gna}
For every $(n,\alpha)\in X$, $\mathcal{G}(n,\alpha)=\{(m,\beta)\in X\mid \beta\in G\alpha\}$.
\end{lemma}
\begin{proof}
It follows trivially from the definitions of $f$ and $g$ that every point in $\mathcal{G}(n,\alpha)$ is of the form $(m,\beta)\in X$ for some $\beta\in G\alpha$, so let us prove the reverse inclusion. First note that, since $\mu$ is a fixed point of $G$, the lemma is trivial for $\alpha=\mu$, so assume $\alpha\neq \mu$; clearly, it is enough to show that
\begin{equation}\label{magna}
\{\,(m,\sigma (\alpha))\mid m\in\mathbb{N}\,\}\cap X\subset \mathcal{G}(n,\alpha).
\end{equation}
Let us prove~\eqref{magna}. Let $(m,\sigma(\alpha))\in X$; we will show that
\begin{equation}\label{magna2}
\text{there is}\ k\in\mathbb{N}\quad \text{such that}\quad (k,\alpha), (k,\sigma(\alpha))\in X.
\end{equation}
Since $\alpha\neq \mu$, there is a largest $l$ such that $\sigma (\alpha)\in U_l$. If $l=0$ or $l=1$, then $(0,\alpha)\in \dom f$ and $f(0,\alpha)=(0,\sigma(\alpha))$ by~\eqref{defnf}; if $l\geq 2$, then $\alpha\in U_{l-1}$ by~\eqref{sigmasigma-1}, and since we chose $l$ so that $\sigma(\alpha)\notin U_{l+1}$, we get $(l-1,\alpha)\in \dom f$ and $f(l-1,\alpha)=(l-1,\sigma(\alpha))$ by~\eqref{defnf}. This proves~\eqref{magna2} and yields
\[
(m,\sigma(\alpha))=g^{m-k}fg^{k-n}(n,\alpha).
\]
This shows~\eqref{magna}.
\end{proof}
\begin{corollary}
$\mathcal{G}$ is topologically transitive and has density of periodic orbits.
\end{corollary}
\begin{proof}
Let us prove transitivity first. By Lemma~\ref{l:pt}, it is enough to show that there is a dense orbit; choose $\alpha\in Y$ with a dense $G$-orbit, then $\mathcal{G}(0,\alpha)$ is dense in $X$ by Lemma~\ref{l:gna}.
In order to prove density of periodic orbits, let $(m,\beta)\in X$ and let $V\subset U_m$ be an open neighborhood of $\beta$ in $Y$; we will prove
that there is a periodic point $(0,\alpha)$ with $\mathcal{G}(0,\alpha)\cap \{m\}\times V\neq \emptyset$.
Since the finite $G$-orbits are dense, there is some periodic $\alpha\in Y$ such that $\alpha\neq \mu$ and $G\alpha\cap V\neq \emptyset$, so $\mathcal{G}(0,\alpha)\cap(\{m\}\times V)\neq \emptyset$ by Lemma~\ref{l:gna}. Let us show that $(0,\alpha)$ has a finite $\mathcal{G}$-orbit. Indeed, there is some $k$ such that $G\alpha\cap U_k=\emptyset$, so the set
\[
\{(m,\beta)\in X\mid \beta\in G\alpha\}\subset\{0,\ldots,k-1\}\times G\alpha
\]
is finite, and therefore $(0,\alpha)$ is a periodic point by Lemma~\ref{l:gna}.
\end{proof}
\begin{lemma}\label{l:zeroezroinf}
$(0,\mu)$ is a point of equicontinuity for $\mathcal{G}$.
\end{lemma}
\begin{proof}
Let $S$ be the generating set $\{f,f^{-1},g,g^{-1}\}$, then $(0,\mu)\notin \dom h$ for $h\in S$, and the result follows.
\end{proof}
\section{Foliated dynamics}
\begin{comment}
\begin{definition}
A foliated space $X$ is topologically transitive if, for every non-empty open sets $U,V\subset X$, there is a leaf $L$ such that $L\cap U$, $L\cap V$ are non-empty.
\end{definition}
\begin{lemma}
A foliated space is topologically transitive if and only if its holonomy pseudogroup is.
\end{lemma}
\begin{proof}
Let $\{(U_i,\phi_i)\}$, $\phi_i\colon U_i\to \mathbb{R}^n\times T_i$, be a foliated atlas and let $\mathcal G\curvearrowright \bigsqcup_i T_i$ be the induced holonomy pseudogroup.
Then the result follows from the elementary observation that a leaf $L$ meets an open set $V$ if and only if the orbit of the holonomy pseudogroup corresponding to $L$ meets $\pi_i(U_i\cap V)$, where $\pi_i\colon U_i\to T_i$ is the composition of $\phi_i$ and the projection to the second component.
\end{proof}
\end{comment}
\subsection{A non-chaotic foliated space with chaotic holonomy pseudogroup}\label{s:chaoschaos}
Using a construction inspired by Example~\ref{e:rz}, we will show that density of periodic orbits in the holonomy pseudogroup does not imply density of compact leaves. Chaos for foliated spaces is, therefore, a stronger condition than chaos for pseudogroups, at least with our definitions.
Think of $\mathbb{T}^2$ as the quotient $\mathbb{R}^2/\mathbb{Z}^2$ and consider Arnold's cat map $f\colon \mathbb{T}^2\to \mathbb{T}^2$, which is obtained by factoring the linear map
\[
\tilde f\colon\mathbb{R}^2\to\mathbb{R}^2,\qquad\tilde f(x,y)=(2x+y,x+y)
\]
through the quotient $\pi \colon \mathbb{R}^2\to \mathbb{T}^2$.
The cat map $f$ is well-known to be chaotic, so, by Theorem~\ref{t:sicactionpseudogroup}, the pseudogroup $\mathcal{G}\curvearrowright\mathbb{T}^2$ generated by $f$ is chaotic too. It is also easy to check that the suspension foliation induced by the representation $\pi_1(\mathbb{S}^1)\to \Homeo(\mathbb{T}^2)$ sending a generator to $f$ satisfies Definition~\ref{d:chaosfs} and is, therefore, chaotic.
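As an informal illustration (no part of the argument relies on it), the cat map permutes the points of $\mathbb{T}^2$ with a fixed rational denominator $q$, so every such point is periodic; since these points are dense, one recovers the density of periodic orbits. The short Python sketch below, with ad hoc names, computes a few of these periods.
\begin{verbatim}
# Arnold's cat map on the finite grid (Z/q)^2, representing (a/q, b/q).
def cat(p, q):
    a, b = p
    return ((2 * a + b) % q, (a + b) % q)

def period(p, q):
    """Return the length of the cat-map orbit of the rational point p/q."""
    x, n = cat(p, q), 1
    while x != p:
        x, n = cat(x, q), n + 1
    return n

for q in (5, 8, 13, 21):
    print(q, period((1, 0), q))   # every rational point returns after finitely many steps
\end{verbatim}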
Consider now the pseudogroup $\mathcal{H}\curvearrowright \mathbb{R}^2$ generated by $\tilde f$ and the integer translations. As in Example~\ref{e:rz},
\[
\Phi:=\{\,\pi|_{U}\mid U\subset \mathbb{R}^2\ \text{open},\ \pi|_{U}\colon U\to \pi(U)\ \text{is a homeomorphism}\,\}
\]
is an equivalence $\Phi\colon (\mathbb{R}^2,\mathcal{H})\to (\mathbb{T}^2,\mathcal{G})$; $\mathcal{H}$ is then a chaotic pseudogroup by Theorem~\ref{t:invariant}. Let $S$ denote the closed surface of genus three, whose fundamental group has presentation
\[
\pi_1(S)=\langle\, a_1,b_1, a_2,b_2,a_3,b_3\mid [a_1,b_1][a_2,b_2][a_3,b_3]\,\rangle,
\]
and consider the suspension foliation $X$ induced by the representation $\phi\colon\pi_1(S)\to \Homeo(\mathbb{R}^2)$ defined by
\begin{align*}
\phi(a_1)&=\tilde f,& \phi(a_2)&=[(x,y)\mapsto (x+1,y)],\\ \phi(a_3)&=[(x,y)\mapsto (x,y+1)],& \phi(b_1)&=\phi(b_2)=\phi(b_3)=\id.
\end{align*}
The holonomy pseudogroup is obviously equivalent to $\mathcal{H}$, hence chaotic. The foliated space does not have any compact leaves, however, because all $\mathcal{H}$-orbits are infinite, so $X$ is not a chaotic foliated space according to Definition~\ref{d:chaosfs}.
\subsection{A non-compact topologically transitive foliated space with density of compact leaves which is not sensitive}
\label{s:ttdposicfol}
This section shows that compactness is a necessary condition in the statement of Theorem~\ref{t:ttdposicfol}.
Let us use the pseudogroup $\mathcal{G}\curvearrowright X$ from Section~\ref{s:cantor} to obtain a topologically transitive foliated space with density of compact leaves, but with an equicontinuous leaf. We start by constructing a directed graph $Z$, which can be defined as a pair $Z=(V(Z),E(Z))$ where $V(Z)$ is the vertex set and $E(Z)\subset V(Z)\times V(Z)$ is the set of directed edges. We think of $z$ as the origin and $z'$ as the end vertex of the directed edge $(z,z')\in E(Z)$.
Let $V(Z)=X$, and let $E(Z)$ consist of all edges of the form
$((m,a),f(m,a))$ and $((n,b),g(n,b))$ for $(m,a)\in\dom f$, $(n,b)\in \dom g$.
Let $\nu\colon X\to \{1,2,3,4\}$ be defined by
\[
\nu=\chi_{\dom f}+ \chi_{\im f}+\chi_{\dom g} + \chi_{\im g},
\]
where $\chi_A$ denotes the characteristic function of a subset $A$.
Since the sets $\dom f$, $\im f$, $\dom g$, and $\im g$ are clopen, the sets $\nu^{-1}(i)$ are clopen too. For $i=1,\ldots,4$, let $S_i$ denote $\mathbb{S}^2$ with $i$ open disks removed, and label the boundary circles as $\Delta_{\dom f}$, $\Delta_{\im f}$, $\Delta_{\dom g}$, and $\Delta_{\im g}$. Let
\begin{align*}
C=\mathbb{S}^1\times [0,1],\qquad \mathfrak{Y}_1=\bigsqcup_{i=1}^4\nu^{-1}(i)\times S_i,\qquad \mathfrak{Y}_2=E(Z)\times C.
\end{align*}
Let $\mathfrak{X}$ be the following quotient of $\mathfrak{Y}_1\sqcup \mathfrak{Y}_2$: for each $((m,a),h(m,a))$ with $h\in\{f,g\}$, glue $\{((m,a),h(m,a))\}\times C$ to $\mathfrak{Y}_1$ by identifying
\begin{align*}
\{((m,a),h(m,a))\}\times \mathbb{S}^1\times\{0\}\ &\sim \ \{((m,a),h(m,a))\}\times \Delta_{\dom h},\\ \{((m,a),h(m,a))\}\times \mathbb{S}^1\times\{1\} \ &\sim \ \{((m,a),h(m,a))\}\times \Delta_{\im h}.
\end{align*}
It is routine to check that $\mathfrak{X}$ is a topologically transitive matchbox manifold with density of compact leaves, and also that $\mathcal{G}\curvearrowright X$ is equivalent to the holonomy pseudogroup of $\mathfrak{X}$. Thus, the leaf corresponding to $(0,\mu)\in X$ is a leaf of equicontinuity by Lemma~\ref{l:zeroezroinf}.
\subsection{An affine pseudogroup}\label{s:affinepseudogroup}
Before embarking on the proof of Theorem~\ref{t:counterfol}, we need to obtain a modified version of the pseudogroup in Section~\ref{s:nonsensitiveaction}. The reason is that we will construct the foliated space counterexample with a particular representative of the holonomy pseudogroup in mind, but we cannot use the pseudogroup of Section~\ref{s:nonsensitiveaction} because all its orbits are infinite.
We start by fixing the following notation
\begin{equation}
\label{defnlr} l_n^-=\frac{1}{3\cdot2^{1+n}},\quad r_n^-=\frac{1}{3\cdot2^{n}},\quad l_n^+=1-r_n^-,\quad r_n^+=1-l_n^-,
\end{equation}
and then we fix the following intervals in order to define a sequence of toral linked twist maps:
\begin{align}
H&=\{(x,y)\in \mathbb{T}^2\mid 1/6\leq y\leq 5/6\},\notag\\
V_0&=\left\{(x,y)\in \mathbb{T}^2\mid 1/6\leq x\leq 5/6\right\},\notag\\
V_n^-&=\left\{(x,y)\in \mathbb{T}^2\mid l_n^-\leq x\leq r_n^-\right\}\qquad \text{for}\ n\geq 1,\notag\\
V_n^+&=\left\{(x,y)\in \mathbb{T}^2\mid l_n^+\leq x\leq r_n^+\right\} \qquad \text{for}\ n\geq 1.\notag
\end{align}
Let $T_h\colon \mathbb{T}^2\to \mathbb{T}^2$ be the horizontal twist defined by
\begin{equation}\label{defnth}
T_h(x,y)=\begin{cases}
(x+6(y-\frac{1}{6}),y)\quad &\text{if}\ (x,y)\in H,\\
(x,y)\quad &\text{else};
\end{cases}
\end{equation}
and, for $m\in\mathbb{N}$, let $T_{v,m}\colon\mathbb{T}^2\to \mathbb{T}^2$ be the vertical twist:
\begin{equation}\label{defntvm}
T_{v,m}(x,y)=\begin{cases}
(x,y+6(x-1/6))\quad &\text{if}\ (x,y)\in V_0,\\
(x,y+3\cdot 2^{1+n}(x-l_n^-))\quad &\text{if}\ (x,y)\in V_n^-,\ n\leq m,\\
(x,y+3\cdot 2^{1+n}(x-l_n^+))\quad &\text{if}\ (x,y)\in V_n^+,\ n\leq m,\\
(x,y)\quad &\text{else}.
\end{cases}
\end{equation}
Finally, let $T_m=T_{v,m}\circ T_h$ be the corresponding linked twist map.
We denote by $\Delta_n$ the set of points in $\mathbb{T}^2$ where $T_n$ is not smooth; i.e.,
\[
\Delta_n= \partial H\cup \bigcup_{m\leq n}T_h^{-1}(\partial V^-_m)\cup \bigcup_{m\leq n}T_h^{-1}(\partial V^+_m).
\]
Finally, let $\Delta=\bigcup_n \Delta_n$.
As in previous sections, let $M_n:=H\cup \bigcup_{m\leq n}V_m$ and note that $\Delta_n\subset M_n$.
The linear nature of these linked twists will allow us to find a common set of periodic orbits. Let
\[
Q_n= M_n\cap \{(\frac{l_1}{2^{n}},\frac{l_2}{2^{n}})\mid l_1,l_2\in \{0,\ldots, 2^{n}-1\}\}, \quad n\geq 1.
\]
\begin{lemma}\label{l:tq}
$T_m(Q_n)=Q_n$ for every $n$ and $m$.
\end{lemma}
\begin{proof}
Using~\eqref{defnlr}, we can rewrite~\eqref{defnth} and~\eqref{defntvm} as
\begin{align*}
T_h(x,y)&=\begin{cases}
(x+6y,y)\quad &\text{if}\ (x,y)\in H,\\
(x,y)\quad &\text{else};
\end{cases}\\
T_{v,m}(x,y)&=\begin{cases}
(x,y+6x)\quad &\text{if}\ (x,y)\in V_0,\\
(x,y+3\cdot 2^{1+n}x)\quad &\text{if}\ (x,y)\in V_n^-,\ n\leq m,\\
(x,y+3\cdot 2^{1+n}x)\quad &\text{if}\ (x,y)\in V_n^+,\ n\leq m,\\
(x,y)\quad &\text{else}.
\end{cases}
\end{align*}
The result now follows: since the coefficients $6$ and $3\cdot 2^{1+n}$ are integers, both maps permute the set of points whose coordinates are multiples of $2^{-n}$; moreover, no such point lies in $V_j^{\pm}$ for $j>n$, so every point of $Q_n$ is either fixed or moved within $H$, $V_0$, or some $V_j^{\pm}$ with $j\leq n$, and therefore stays in $M_n$.
\end{proof}
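The invariance $T_m(Q_n)=Q_n$ is also easy to confirm numerically with exact rational arithmetic. The following Python sketch is an informal check only; it reads $M_n$ as $H\cup V_0\cup\bigcup_{1\leq j\leq n}(V^-_j\cup V^+_j)$, which is our interpretation of the reference to previous sections, and all names are ad hoc.
\begin{verbatim}
from fractions import Fraction as F

def lm(n): return F(1, 3 * 2 ** (1 + n))     # l_n^-
def rm(n): return F(1, 3 * 2 ** n)           # r_n^-
def lp(n): return 1 - rm(n)                  # l_n^+
def rp(n): return 1 - lm(n)                  # r_n^+

def in_H(x, y):  return F(1, 6) <= y <= F(5, 6)
def in_V0(x, y): return F(1, 6) <= x <= F(5, 6)
def in_Vm(x, n): return lm(n) <= x <= rm(n)
def in_Vp(x, n): return lp(n) <= x <= rp(n)

def in_M(x, y, n):
    return in_H(x, y) or in_V0(x, y) or \
        any(in_Vm(x, j) or in_Vp(x, j) for j in range(1, n + 1))

def T_h(x, y):                               # horizontal twist (mod 1)
    return ((x + 6 * (y - F(1, 6))) % 1, y) if in_H(x, y) else (x, y)

def T_v(x, y, m):                            # vertical twist T_{v,m} (mod 1)
    if in_V0(x, y):
        return (x, (y + 6 * (x - F(1, 6))) % 1)
    for j in range(1, m + 1):
        if in_Vm(x, j):
            return (x, (y + 3 * 2 ** (1 + j) * (x - lm(j))) % 1)
        if in_Vp(x, j):
            return (x, (y + 3 * 2 ** (1 + j) * (x - lp(j))) % 1)
    return (x, y)

def T(x, y, m):                              # linked twist T_m = T_{v,m} o T_h
    return T_v(*T_h(x, y), m)

def Q(n):                                    # Q_n: dyadic grid points inside M_n
    grid = [F(l, 2 ** n) for l in range(2 ** n)]
    return {(x, y) for x in grid for y in grid if in_M(x, y, n)}

for n in (2, 3, 4):
    for m in (0, 1, 2, 3):
        q = Q(n)
        assert {T(x, y, m) for (x, y) in q} == q      # T_m(Q_n) = Q_n
print("T_m(Q_n) = Q_n confirmed for the sampled n and m")
\end{verbatim}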
Lemma~\ref{l:tq} and~\eqref{defnlr} together yield
\begin{equation} \label{qndelta}
Q_n\cap \Delta =\emptyset\qquad \text{for every} \ n.
\end{equation}
Also, by defining
\[
\widetilde{Q}_0=Q_0=\emptyset,\qquad
\widetilde{Q}_n=Q_n\setminus Q_{n-1} \quad\text{for}\ n\geq1,
\]
we obtain
\begin{equation}\label{tildeq}
T_m(\widetilde{Q}_n)=\widetilde{Q}_n\qquad \text{for every}\quad n,m.
\end{equation}
Up to this point, we have proceeded almost in the same way as in Section~\ref{s:nonsensitiveaction}. There, we used the pseudogroup on $Y:=\mathbb{T}^2\times \mathbb{N}$ defined by the maps
\[
((x,y),n)\mapsto (T_n(x,y),n)\qquad \text{and} \qquad ((x,y),n)\mapsto ((x,y),n+1).
\]
In order to get affine maps, we will restrict the first map to an appropriate subspace; to obtain density of finite orbits, we will ``cut'' open balls with centers in $Q_n$ out of the domain of the second map. Let us start by finding suitable radii.
\begin{lemma}\label{l:radii}
There is a decreasing sequence $r_0,r_1,\ldots$ of positive radii such that, for every $n\geq 0$ and $(x,y)\in \widetilde Q_n$,
\begin{enumerate}[(i)]
\item
\label{i:radiimn}$B((x,y),r_n)\subset M_n$,
\item \label{i:radiidelta}$d(B((x,y),r_n), \Delta_{n})>0$,
\item \label{i:radiidisjoint} for every $0\leq m<n$ and $(x',y')\in \widetilde Q_{m}$,
\[
d((x,y),(x',y'))>r_m-r_n\quad \Longrightarrow\quad d((x,y),(x',y'))>r_m+r_n,
\]
\item \label{i:radiisphere}and $S((x,y),r_n)$ consists of points with both coordinates irrational.
\end{enumerate}
\end{lemma}
\begin{proof}
We proceed by induction on $n\geq 0$. First, we may set $r_0$ arbitrarily because $\widetilde Q_0$ is empty. Assume now that we have chosen $r_0,\ldots,r_{n-1}$ satisfying the above conditions, then (\ref{i:radiimn})--(\ref{i:radiidelta}) hold for $r_n$ small enough because of~\eqref{qndelta}. To prove that~(\ref{i:radiidisjoint}) holds for small enough radii, note that, by the induction hypothesis with~(\ref{i:radiisphere}),
\[
d((x,y),\widetilde Q_m)\geq r_m\quad\Longrightarrow\quad d((x,y),\widetilde Q_m)> r_m
\] for every $0\leq m<n$. Since the sets $\widetilde Q_m$, $m\leq n$, are finite,
\[
d((x,y),\widetilde Q_m)\geq r_m\quad\Longrightarrow\quad d((x,y),\widetilde Q_m)> r_m+r_n
\] for every $0\leq m<n$ and $r_n$ small enough. Finally, we can choose $r_n$ satisfying (\ref{i:radiisphere}) because there are countably many points with both coordinates rational but uncountably many radii satisfying (\ref{i:radiimn})--(\ref{i:radiidisjoint}).
\end{proof}
Let
\begin{align}\label{un}
\mathcal U_n&:=\bigcup_{(x,y)\in \widetilde Q_n} B((x,y),r_{n}),\\
\mathcal{V}_n&:=\label{defnutilde} \bigcup_{0\leq m \leq n} \mathcal{U}_m.
\end{align}
By Lemma~\ref{l:radii}(\ref{i:radiidisjoint}), we can express $\mathcal{V}_n$ as a disjoint union of open balls. It follows that $\mathcal{V}_n\cap M_l$ is not dense in $M_l$ for any $n,l\geq 0$.
We are finally prepared to define what will be the holonomy pseudogroup of our counterexample foliated space. Let $\tilde f$ and $\tilde g$ be maps on $Y$ defined by
\begin{align}\label{defntildef}
\tilde f((x,y),n)&=(T_n(x,y),n),\quad &\dom \tilde f &=\{((x,y),n)\mid (x,y)\notin \Delta_n\},\\
\label{defntildeg}\tilde g((x,y),n)&=((x,y),n+1),\quad &\dom \tilde g& =\{((x,y),n)\mid (x,y)\notin\mathcal{V}_n\},
\end{align}
and let $\widetilde{\mathcal{G}}\curvearrowright Y$ be the pseudogroup generated by $\tilde f$ and $\tilde g$.
\begin{lemma}
The pseudogroup $\widetilde{\mathcal{G}}\curvearrowright Y$ is topologically transitive.
\end{lemma}
\begin{proof}
Since $T_n$ is topologically transitive on $M_n$ for every $n\geq 0$, there are residual sets $R_n\subset M_n$ consisting of points whose $T_n$-orbits are dense in $M_n$; hence, the fact that $\Delta$ is meager yields that
\[
R:=\left(\bigcap_{n\geq 0, z\in\mathbb{Z}} T^{z}_0(R_n) \right)\setminus \left(\bigcup_{n\geq 0, z\in\mathbb{Z}} T^{z}_n(\Delta) \right)
\]
is a residual subset of $M_0$ satisfying
\begin{equation}\label{deltar}
\Delta\cap \bigcup_{n\geq 0,z\in\mathbb{Z}} T_n^z(R)=\emptyset,
\end{equation}
\begin{equation}\label{r0r}
(x,y)\in R\qquad \Longrightarrow\qquad T_0^z(x,y)\in R_n \qquad \text{for every}\ n\geq 0, z\in\mathbb{Z}.
\end{equation}
\begin{comment}
\[
\bigcup_{z\in\mathbb{Z}} (T_n)^z(R)\cap M_0=R.
\]
\end{comment}
We will prove that any point $((x,y),0)\in Y$ with $(x,y)\in R$ has a dense $\widetilde{\mathcal{G}}$-orbit. Consider an open set $V\times\{n\}$ with $V\subset \mathbb{T}^2$ and $n\geq 0$; then there is a least $m\geq n$ such that $V\cap M_m\neq \emptyset$. Since $ \mathcal{V}_l\cap M_0$ is not dense in $M_0$ for any $l\in \mathbb{N}$ and the $T_0$-orbit of $(x,y)$ is dense in $M_0$, there is $z_1\in \mathbb{Z}$ such that
\[
T_0^{z_1}(x,y)\notin \mathcal{V}_l\qquad \text{for}\ l=0,\ldots,m.
\]
This means that $\tilde f^{z_1}((x,y),0)\in\dom\tilde g^m$ by~\eqref{defntildeg}.
Equations~\eqref{deltar} and~\eqref{r0r}
now yield
\[
T_0^{z_1}(x,y)\in R_m \setminus \bigcup_{z\in \mathbb{Z}}T_m^{z}(\Delta),
\]
and therefore
\[
\tilde g^m\tilde f^{z_1}(x,y)\in (R_m \setminus \bigcup_{z\in \mathbb{Z}}T_m^{z}(\Delta))\times \{m\}.
\]
By the definition of $R_m$, the orbit
\[\bigcup_{z\in\mathbb{Z}}T_m^z(T_0^{z_1}(x,y))\] is dense in $M_m$ and disjoint from $\Delta$, so, by~\eqref{defntildef}, there is $z_2\in\mathbb{Z}$ such that
\[
(x',y'):=\tilde f^{z_2}\tilde g^m\tilde f^{z_1}(x,y)\in (V\cap M_m)\times \{m\}.
\]
We defined $m$ as the least integer $\geq n$ satisfying $V\cap M_m\neq \emptyset$, so we have $V\cap M_l=\emptyset$ for every $n\leq l<m$. Hence $(x',y')\in \dom (\tilde g^{n-m})$ by~\eqref{defntildeg} and Lemma~\ref{l:radii}(\ref{i:radiimn}), yielding
\[
\tilde g^{n-m}\tilde f^{z_2}\tilde g^m\tilde f^{z_1}(x,y)\in V\times \{n\}.
\]
We have proved that, for $(x,y)\in R$, the $\widetilde{\mathcal{G}}$-orbit of $((x,y),0)$ meets every open set, so $\widetilde{\mathcal{G}}\curvearrowright Y$ is topologically transitive by Lemma~\ref{l:pt}.
\end{proof}
\begin{lemma}\label{l:finiteorbits}
Let $((x,y),m)\in \widetilde Q_n\times\{m\}$ with $m\leq n$; then the orbit $\widetilde{\mathcal{G}}((x,y),m)$ is contained in $ \widetilde Q_n\times\{0,\ldots,n\}$.
\end{lemma}
\begin{proof}
We have $T_l(\widetilde Q_n)=\widetilde Q_n$ for every $0\leq l\leq n$ by~\eqref{tildeq}, so
\[
\tilde f(\bigcup_{m\leq n} \widetilde Q_n\times\{m\})\subset \bigcup_{m\leq n} \widetilde Q_n\times\{m\},
\]
and similarly for $\tilde f^{-1}$. It is obvious from the definition of $\tilde g$ that
\[
\tilde g^{-1}(\bigcup_{m\leq n}\widetilde Q_n\times\{m\})\subset \bigcup_{m\leq n}\widetilde Q_n\times\{m\}, \quad \tilde g(\widetilde Q_n\times\{m\})\subset \widetilde Q_n\times\{m+1\}.
\]
Hence, to finish the proof it is enough to show that $(\widetilde Q_n\times\{n\})\cap \dom \tilde g=\emptyset$, but this follows from~\eqref{un},~\eqref{defnutilde} and~\eqref{defntildeg}.
\end{proof}
\begin{corollary}
The finite $\widetilde{\mathcal{G}}$-orbits are dense in $Y$.
\end{corollary}
The proof of the following statement is identical to that of Proposition~\ref{p:gmathcalg}.
\begin{proposition}\label{p:widetildegnotsic}
There is a metric $d$ on $Y$ and a generating pseudo{\textasteriskcentered}group $\mathcal{S}$ such that every map in $\mathcal{S}$ whose domain contains $((0,0),0)$ is an isometry. Hence, $\widetilde{\mathcal{G}}$ is not sensitive to initial conditions.
\end{proposition}
\subsection{A non-compact, topologically transitive affine foliation with a dense set of compact leaves which is not sensitive}
\label{s:ttdclnotsicaffine}
We are now in position to prove Theorem~\ref{t:counterfol} by constructing a suitable foliated space using the pseudogroup we have just defined.
Let $\Sigma$ be a smooth surface of genus two divided into three smooth manifolds with boundary, $\Sigma_0$, $\Sigma_\alpha$, and $\Sigma_\beta$,
that overlap only on their boundaries. Let $\Sigma_0$ be a two-sphere with four open disks removed, and denote the boundary circles by $S_\alpha^-$, $S_\alpha^+$, $S_\beta^-$, $S_\beta^+$, with $\Sigma_\alpha$ attaching at $S_\alpha^-$ and $S_\alpha^+$ (see Figure~\ref{f:doubletorus}).
\begin{figure}[tbh]
\includegraphics[width=0.6\textwidth]{doubletorus.pdf}
\caption{The surface $\Sigma$ and its partition}
\label{f:doubletorus}
\end{figure}
Let $\mathfrak Y_0:= (\Sigma_0\times Y)\setminus \mathfrak{C}$, where
\begin{align*}
\mathfrak{C}&=\mathfrak{C}_\alpha^-\cup \mathfrak{C}_\alpha^+\cup \mathfrak{C}_\beta^-\cup \mathfrak{C}_\beta^+,\\
\mathfrak{C}_\alpha^-&= S_\alpha^-\times (Y\setminus \dom \tilde f),\\
\mathfrak{C}_\alpha^+&= S_\alpha^+\times (Y\setminus \im \tilde f),\\
\mathfrak{C}_\beta^-&= S_\beta^-\times (Y\setminus \partial \dom \tilde g),\\
\mathfrak{C}_\beta^+&= S_\beta^+\times (Y\setminus \partial \im \tilde g).
\end{align*}
Now attach the borders of $\mathfrak Y_\alpha:=\Sigma_\alpha\times \dom \tilde f$ and $\mathfrak Y_\beta:=\Sigma_\beta\times \dom \tilde g$ to $\mathfrak Y_0$ using the identifications
\begin{align*}
(s,y)&\sim(s,y),\quad &(s,y)\in S_\alpha^-\times\dom \tilde f,\\
(s,y)&\sim (s,\tilde f(y)),\quad&(s,y)\in S_\alpha^+\times\dom \tilde f,\\
(s,y)&\sim (s,y),\quad&(s,y)\in S_\beta^-\times\dom \tilde g,\\
(s,y)&\sim (s,\tilde g(y)),\quad&(s,y)\in S_\beta^+\times\dom \tilde g.\\
\end{align*}
Denote by $\mathfrak{Y}$ the resulting space. The product foliated structures on $\mathfrak Y_0$, $\mathfrak Y_\alpha$, $\mathfrak{Y}_\beta$ with leaves $\{y\}\times \Sigma_i$ (where $i=0$, $\alpha$, or $\beta$, respectively) descend to $\mathfrak{Y}$. This makes $\mathfrak Y$ a foliated space with boundary
\[
\partial \mathfrak Y =(Y\setminus \overline{\dom \tilde g})\times S_\beta^- \cup (Y\setminus \overline{\im \tilde g})\times S_\beta^+.
\]
It is an elementary matter to check that, essentially by construction, $\mathfrak{Y}$ is $C^\infty$ and its holonomy pseudogroup is equivalent to $\widetilde{\mathcal{G}}\curvearrowright Y$.
\begin{corollary}\label{c:mathfraky}
The foliated space $\mathfrak{Y}$ is $C^\infty$, transversally affine and topologically transitive, but not sensitive to initial conditions.
\end{corollary}
\begin{lemma}\label{l:mathfrakydpo}
The foliated space
$\mathfrak{Y}$ has a dense set of leaves that are compact manifolds, possibly with boundary.
\end{lemma}
\begin{proof}
By Lemma~\ref{l:finiteorbits}, a leaf $L$ corresponding to a point $((x,y),m)$ with $(x,y)\in Q_n$ and $m\leq n$ corresponds to a finite orbit of $\widetilde{\mathcal G}$. Consider the inverse image of $L$ by the quotient map $\mathfrak{Y}_0\cup ( \Sigma_\alpha\times\dom \tilde f) \cup (\Sigma_\beta\times\dom \tilde g)\to \mathfrak{Y}$. Since it has a finite orbit, the inverse image of $L$ only intersects finitely many plaques in $\mathfrak{Y}_0$, $\Sigma_\alpha \times \dom \tilde f$, and $\Sigma_\beta\times\dom \tilde g$. By Lemma~\ref{l:radii} and~\eqref{un}--\eqref{defntildeg},
\[
Q_n\cap \bigcup_{l\geq 0}\partial \mathcal{V}_l=Q_n\cap \Delta=\emptyset,
\]
so the $\widetilde{\mathcal{G}}$-orbit of $((x,y),m)$ is disjoint from $\partial\dom \tilde g\cup\partial\im\tilde g$.
Thus, the plaques in $\mathfrak{Y}_0$ that project to $L$ are all compact; $L$ is then the quotient of a finite union of compact plaques, hence compact.
\end{proof}
To make $\mathfrak{Y}$ the counterexample we are looking for, we need to get rid of the boundary: Take two copies $\mathfrak{Y}^-$ and $\mathfrak{Y}^+$, and let $\mathfrak{Y}^\pm:= (\mathfrak{Y}^-\sqcup\mathfrak{Y}^+)/\sim$ be the quotient space obtained by identifying their boundaries. It is obvious that $\mathfrak{Y}^\pm$ is now a foliated space without boundary.
\begin{lemma}
The set of compact leaves is dense in $\mathfrak{Y}^\pm$.
\end{lemma}
\begin{proof}
Let $U$ be an open subset of $\mathfrak{Y}^\pm$, which without loss of generality we may assume contained in $\mathfrak{Y}^-$. By Lemma~\ref{l:mathfrakydpo}, there is a compact leaf $L^-$ in $\mathfrak{Y}^-$ intersecting $U$. If $L^-$ is without boundary, then $L^-$ is also a leaf in the quotient space $\mathfrak{Y}^\pm$ intersecting $U$. If $L^-$ has non-empty boundary, then it is contained in the leaf $L\cong L^-\cup L^+/\sim$, where $L^+$ is the leaf of $\mathfrak{Y}^+$ that corresponds to $L^-$. Then $L$ intersects $U$ and is a quotient of the compact space $L^-\cup L^+$, hence compact.
\end{proof}
\begin{lemma}
$\mathfrak{Y}^\pm$ is topologically transitive but not sensitive to initial conditions.
\end{lemma}
\begin{proof}
Let $Y^-$, $Y^+$ be two copies of the space $Y$, whose points we denote by $(n,t,-)$ and $(n,t,+)$, and let $\widetilde{\mathcal{G}}^-$, $\widetilde{\mathcal{G}}^+$ be two copies of the pseudogroup $\widetilde{\mathcal{G}}$ acting on $Y^-$ and $Y^+$, respectively. Denote by $\tilde{g}^-$, $\tilde{g}^+$, etc.~the maps $\tilde{f}$ and $\tilde{g}$ acting on $Y^-$ and $Y^+$, respectively. Essentially by construction, the holonomy pseudogroup of $\mathfrak{Y}^\pm$ is generated by the union of $\widetilde{\mathcal{G}}^-\curvearrowright Y^-$ and $\widetilde{\mathcal{G}}^+\curvearrowright Y^+$, and an extra map $\tilde h\colon Y^-\rightarrowtail Y^+$ defined by
\begin{align*}
\dom \tilde h&=(Y^-\setminus \overline{\dom\tilde{g}^-})\cup(Y^-\setminus \overline{\im\tilde{g}^-}),\\
\tilde h(n,t,-)&=(n,t,+).
\end{align*}
Let us prove that the holonomy pseudogroup is topologically transitive. By Corollary~\ref{c:mathfraky}, there is a dense orbit $\widetilde{\mathcal{G}}(n,t)$ in $Y$. Then the orbit $\widetilde{\mathcal{G}}^\pm(n,t,-)$ equals \[\widetilde{\mathcal{G}}^-(n,t,-)\cup \widetilde{\mathcal{G}}^+(n,t,+),\]
which is dense in $Y^\pm$.
Finally, let us prove that $\widetilde{\mathcal{G}}^\pm$ is not sensitive to initial conditions. By Proposition~\ref{p:widetildegnotsic}, there is a metric $d$ on $Y$ and a generating pseudo{\textasteriskcentered}group $S$ such that every map of $S$ whose domain contains $((0,0),0)$ is an isometry.
Let $d^\pm$ be a metric on $Y^\pm$ such that points of $Y^-$ and $Y^+$ are at infinite distance, and the restrictions of $d^\pm$ to $Y^-$ and $Y^+$ coincide with $d$. Finally, let $S^\pm=S^-\cup S^+\cup \{\tilde h\}$, where $S^-$ and $S^+$ are copies of $S$ acting on $Y^-$ and $Y^+$, respectively. It is immediate that every map in $S^\pm$ whose domain contains $((0,0),0,-)$ is an isometry with respect to $d^\pm$, and the result follows.
\end{proof}
It is obvious from the construction that $\mathfrak{Y}^\pm$ is also $C^\infty$ and transversally affine. This completes the proof of Theorem~\ref{t:counterfol}.
\begin{comment}
\subsection{A non-compact affine foliation satisfying (TT) and (DCL), but not (SIC)}
In this section we use the notation of Section~\ref{s:nonsensitiveaction}. Let $\Sigma$ be the orientable surface of genus two, and let $[\alpha], [\beta]\in \pi_1(\Sigma)$ be two homotopy classes of loops generating a free group. Let $\mathfrak{X}$ be the suspension foliation induced by the representation $\rho\colon\pi_1(\Sigma)\to \Homeo(X)$ sending $[\alpha]\mapsto f$, $[\beta]\mapsto g$.
It will be useful for our purposes to make this construction explicit in the following way: Divide $\Sigma$ into three smooth manifolds with boundary, $\Sigma_0$, $\Sigma_\alpha$, and $\Sigma_\beta$,
that overlap only on their boundaries. Let $\Sigma_0$ be a two-sphere with four open disks removed, and denote the boundary spheres by $S_\alpha^-$, $S_\alpha^+$, $S_\beta^-$, $S_\beta^+$, with $\Sigma_\alpha$ attaching at $S_\alpha^-$ and $S_\alpha^+$. Assume also that $[\alpha]$ is represented by a closed path that circles around $\Sigma_\alpha$ once going from $S_\alpha^-$ to $S_\alpha^+$, and similarly for $\beta$
Consider the product foliations $\Sigma_0\times X$, $\Sigma_\alpha\times X$, and $\Sigma_\beta\times X$, whose points we denote as $(s,x)_0$, $(s,x)_\alpha$, or $(s,x)_\beta$, respectively.
Then $\mathfrak{X}$ is the quotient of the disjoint union
\[
(\Sigma_0\times X)\sqcup(\Sigma_\alpha\times X)\sqcup (\Sigma_\beta\times X)
\]
by identifying their boundaries as follows:
\begin{alignat*}{2}
(s,x)_0&\sim (s,x)_\alpha\qquad&\text{for}\ s\in S_\alpha^-&,\\
(s,x)_0&\sim (s,f(x))_\alpha\qquad&\text{for}\ s\in S_\alpha^+&,\\
(s,x)_0&\sim (s,x)_\beta\qquad&\text{for}\ s\in S_\beta^-&,\\
(s,x)_0&\sim (s,g(x))_\beta\qquad &\text{for}\ s\in S_\beta^+&.
\end{alignat*}
Let $\pi\colon \mathfrak{X}\to \Sigma$ denote the bundle projection, let $\{V_i\}$ be a finite atlas for $\Sigma$, and let $V_0\subset \operatorname{int} \Sigma_0$. Then $\pi^{-1}(V_i)\cong V_i\times X$ is a foliated atlas for $\mathfrak{X}$. Choose a particular $V$, and let $X$ be the corresponding transversal in the holonomy pseudogroup. Then $X$ is an open set meeting every orbit; moreover, it can easily be checked that the restriction of the holonomy pseudogroup to $X$ coincides with the pseudogroup generated by $f$ and $g
$. It follows that this foliation is not sensitive to initial conditions.
This is not, however, the example we were looking for, and for two reasons:
\begin{enumerate}[(a)]
\item the foliation is not of transversal class $C^1$, and
\item there are no compact leaves.
\end{enumerate}
Regarding b, one should note that the leaves corresponding to rational coordinates are closed.
\end{comment}
\begin{comment}
Let us take care of a). The map $\tau\colon X\to X$ is smooth outside of the closed subset
\[
\Delta:=\{((x,y),n)\mid (x,y)\in \Delta_n\}.
\]
So, by the previous description of $\mathfrak{X}$,
\[
\mathfrak{X}_a:=\mathfrak{X}\setminus (\Sigma_\beta\times\Delta)
\]
is of tangential and transversal class $C^\infty$.
The problem now is that
blablabla
Let $P\subset\mathbb{T}^2$ consist of the points with rational coordinates, and let $O_n$, $n\geq 1$, be an enumeration of the orbits of the linked twist that are contained in $P$. Note that all orbits $O_n$ are finite. For $n\geq 1$, let $R_n\subset\mathbb{T}^2$ be a union of finitely many Euclidean closed balls satisfying:
\begin{enumerate}
\item $P_m\subset P_n$ if $m\leq n$,
\item $O_m\subset P_n$ if $m\leq n$, and
\item $P\subset \bigcup_n \operatorname{int}P_n$.
\end{enumerate}
Consider $Y_z:=\Sigma_0\times(\mathbb{T}^2\setminus P_{|z|})$, $Z_z:=\Sigma_0\times \operatorname{int} P_{|z|}$,
\[
Y=\{(x,z)\mid x\in \mathbb{T}^2\setminus P_{|z|} \},\quad Z=\{(x,z)\mid x\in \operatorname{int} P_{|z|} \}
\]
Let $\mathfrak{Y}$ be the quotient of the disjoint union
\[
(\Sigma_\beta\times Y)\sqcup (\Sigma_\beta\times Z)\sqcup (\Sigma_0\times X )\sqcup (\Sigma_\alpha\times X)
\]
by identifying the boundaries as follows
*******************************
Let
\[
Q_z=\{(\frac{l_1}{2^{|z|+2}},\frac{l_2}{2^{|z|+2}})\}\cap M_z.
\]
For every $n$, choose a small $r_n>0$ such that
\begin{enumerate}
\item for every $(x,y)\in Q_z$, $d(B_{\mathbb{T}}((x,y),r_{|z|}), \Delta_{|z|})>0$,
\item for every $(x,y)\in Q_z$, $(x',y')\in Q_{z'}$, $|z'|<|z|$, $d(B_{\mathbb{T}}((x,y),r_{|z|})\cap B_{\mathbb{T}}((x',y'),r_{|z'|}))>0$,
\item $S((x,y),r_{|z|})$ consist of points with both coordinates irrational.
\end{enumerate}
Let $\widetilde Q_z:=\bigcup_{|z'|<|z|}\bigcup_{(x,y)\in Q_{z}\setminus Q_{z'}} B_{\mathbb{T}}((x,y),r_{|z'|})$. It is trivial to check that, for every $m\geq 0$, $\bigcup_{|z|\leq m} \widetilde Q_z$ is not dense in any $M_{l}$, $l\leq m$.
Let $\tilde f$ be the restriction of $f$ to
\[
X\setminus \bigcup \{z\}\times \Delta_z.
\]
Let $A=\{(z,x,y)\in X\mid (x,y)\notin \widetilde Q_{z'}, z-1\leq z'\leq z+1\}$, and let $\tilde g$ be the restriction of $g$ to $A$.
\begin{lemma}
There is a residual subset $R$ of $M_0$ satisfying
\begin{enumerate}
\item $T_z(x,y)\cap \Delta_{z'}\neq \emptyset$,
\item $\{(T_z)^l(x,y), l\in\mathbb{Z}\}$ is dense in $M_z$, and
\item $T_z(x,y)\in R$
\end{enumerate}
for every $(x,y)\in R$, $z,z'\in\mathbb{Z}$.
\end{lemma}
\end{comment}
\begin{comment}
We can consider on $\Sigma_0\times Y$ the product foliation with leaves $\Sigma_0\times\{(n,x,y)\}$; this is a foliation of a manifold with boundary by manifolds with boundary. Since $\mathfrak{C}$ is a closed saturated subset of the boundary, $\mathfrak Y_0$ is also a foliated manifold.
Similarly, we can consider the product foliations on $\Sigma_\alpha \times \dom \tilde f$ and $\Sigma_\beta\times\dom \tilde g$. Let $\mathfrak{Y}$ be the quotient of the disjoint union
\[
\mathfrak{Y}_0\cup (\Sigma_\alpha \times \dom \tilde f) \cup (\Sigma_\beta\times\dom \tilde g)
\]
obtained by identifying blabla. The following result follows trivially from the definition.
\begin{lemma}
The holonomy pseudogroup of $\mathfrak{Y}$ is equivalent to the pseudogroup $\widetilde{\mathcal{G}}\curvearrowright Y$.
\end{lemma}
\begin{corollary}
$\mathfrak{Y}$ is topologically transitive but not sensitive to initial conditions.
\end{corollary}
\begin{lemma}
$\mathfrak{Y}$ has a dense set of leaves that are compact manifolds with boundary.
\end{lemma}
\begin{proof}
By REF, the leaves corresponding to points $(m,(x,y))$ with $(x,y)\in Q_n$ have finite orbits. MAYBE EXPLAIN. Consider the inverse image of $L$ by the quotient map $\mathfrak{Y}_0\cup (\Sigma_\alpha \times \dom \tilde f) \cup (\Sigma_\beta\times\dom \tilde g)\to \mathfrak{Y}$. Since it has a finite orbit, the inverse image of $L$ only intersects finitely many plaques in $\mathfrak{Y}_0$, $\Sigma_\alpha \times \dom \tilde f$, and $\Sigma_\beta\times\dom \tilde g$. By REF, $Q_n\cap \delta \widetilde{\mathcal{U}}_n=Q_n\cap \Delta=\emptyset$, so all the plaques in $\mathfrak{Y}_0$ that project to $L$ are compact. Thus $L$ is the quotient of a finite union of compact plaques, hence compact.
\end{proof}
At this point we have constructed a foliation of a non-compact manifold with boundary that satisfies blabla, but not bla.
\end{comment}
| {
"timestamp": "2022-02-23T02:13:41",
"yymm": "2202",
"arxiv_id": "2202.09983",
"language": "en",
"url": "https://arxiv.org/abs/2202.09983",
"abstract": "We generalize \"sensitivity to initial conditions\" to foliated spaces and pseudogroups, offering a definition of Devaney chaos in this setting. In contrast to the case of group actions, where sensitivity follows from the other two conditions of Devaney chaos, we show that this is true only for compact foliated spaces, exhibiting a counterexample in the non-compact case. Finally, we obtain an analogue of the Auslander-Yorke dichotomy for compact foliated spaces and compactly generated pseudogroups.",
"subjects": "Dynamical Systems (math.DS)",
"title": "Chaos for foliated spaces and pseudogroups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9873750503469331,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7095221685932364
} |
https://arxiv.org/abs/math/0701113 | Hardy-type Inequalities Via Auxiliary Sequences | We prove some Hardy-type inequalities via an approach that involves constructing auxiliary sequences. | \section{Introduction}
\label{sec 1} \setcounter{equation}{0}
Suppose throughout that $p\neq 0, \frac{1}{p}+\frac{1}{q}=1$.
Let $l^p$ be the Banach space of all complex sequences ${\bf a}=(a_n)_{n \geq 1}$ with norm
\begin{equation*}
||{\bf a}||: =(\sum_{n=1}^{\infty}|a_n|^p)^{1/p} < \infty.
\end{equation*}
Hardy's celebrated inequality (\cite[Theorem 326]{HLP}) asserts that for $p>1$,
\begin{equation}
\label{eq:1} \sum^{\infty}_{n=1}\big{|}\frac {1}{n}
\sum^n_{k=1}a_k\big{|}^p \leq (\frac
{p}{p-1})^p\sum^\infty_{k=1}|a_k|^p.
\end{equation}
Hardy's inequality can be regarded as a special case of the
following inequality:
\begin{equation*}
\label{01}
\sum^{\infty}_{j=1}\big{|}\sum^{\infty}_{k=1}c_{j,k}a_k
\big{|}^p \leq U \sum^{\infty}_{k=1}|a_k|^p,
\end{equation*}
in which $C=(c_{j,k})$ and the parameter $p$ are assumed
fixed ($p>1$), and the estimate is to hold for all complex
sequences ${\bf a}$. The $l^{p}$ operator norm of $C$ is
then defined as the $p$-th root of the smallest value of the
constant $U$:
\begin{equation*}
\label{02}
||C||_{p,p}=U^{\frac {1}{p}}.
\end{equation*}
Hardy's inequality thus asserts that the Ces\'aro matrix
operator $C$, given by $c_{j,k}=1/j , k\leq j$ and $0$
otherwise, is bounded on {\it $l^p$} and has norm $\leq
p/(p-1)$. (The norm is in fact $p/(p-1)$.)
We say a matrix $A$ is a summability matrix if its entries satisfy:
$a_{j,k} \geq 0$, $a_{j,k}=0$ for $k>j$ and
$\sum^j_{k=1}a_{j,k}=1$. We say a summability matrix $A$ is a weighted
mean matrix if its entries satisfy:
\begin{equation*}
a_{j,k}=\lambda_k/\Lambda_j, ~~ 1 \leq k \leq
j; \Lambda_j=\sum^j_{i=1}\lambda_i, \lambda_i \geq 0, \lambda_1>0.
\end{equation*}
Hardy's inequality \eqref{eq:1} now motivates one to
determine the $l^{p}$ operator norm of an arbitrary summability matrix $A$.
For example, the following two inequalities were claimed to hold by Bennett (\cite[p. 40-41]{B4}; see also \cite[p. 407]{B5}):
\begin{eqnarray}
\label{7}
\sum^{\infty}_{n=1}\Big{|}\frac
1{n^{\alpha}}\sum^n_{i=1}(i^{\alpha}-(i-1)^{\alpha})a_i\Big{|}^p &
\leq & \Big( \frac {\alpha p}{\alpha p-1} \Big )^p\sum^{\infty}_{n=1}|a_n|^p, \\
\label{8}
\sum^{\infty}_{n=1}\Big{|}\frac
1{\sum^n_{i=1}i^{\alpha-1}}\sum^n_{i=1}i^{\alpha-1}a_i\Big{|}^p &
\leq & \Big(\frac {\alpha p}{\alpha p-1} \Big
)^p\sum^{\infty}_{n=1}|a_n|^p,
\end{eqnarray}
whenever $\alpha>0, p>1, \alpha p >1$.
No proofs of the above two inequalities were supplied in \cite{B4}-\cite{B5}; recently, the author \cite{G} and Bennett himself \cite{Be1} independently proved inequality \eqref{7} for $p>1, \alpha \geq 1, \alpha p >1$ and inequality \eqref{8} for $p>1, \alpha \geq 2$ or $0< \alpha \leq 1, \alpha p >1$.
We point out here that Bennett was in fact able to prove \eqref{7} for $p \geq 1, \alpha >0, \alpha p >1$ (see \cite[Theorem 1]{Be1} with $\beta=1$ there), which leaves the case $p>1, 1< \alpha <2$ of inequality \eqref{8} as the only case open to us. Here Bennett expects inequality \eqref{8} to hold for $1+1/p < \alpha <2$ (see page 830 of \cite{Be1}) and, in support of this, he has shown \cite[Theorem 18]{Be1} that inequality \eqref{8} holds for $\alpha = 1+1/p, p \geq 1$.
In this paper, we will study inequality \eqref{8} using a
method of Knopp \cite{K} which involves constructing auxiliary
sequences. We will partially resolve the remaining case
$p>1, 1< \alpha <2$ of inequality \eqref{8} by proving in Section
\ref{sec 2} the following:
\begin{theorem}
\label{thm3}
Inequality \eqref{8} holds for $p \geq 2, 1 \leq
\alpha \leq 1+1/p$ or $1 < p \leq 4/3, 1+1/p \leq
\alpha \leq 2$.
\end{theorem}
We postpone a detailed explanation of Knopp's approach to Section~\ref{sec 2}, pointing out here only that it can be applied to prove other inequalities of Hardy type. As an example, we note that Theorem 359 of \cite{HLP} states:
\begin{theorem}
\label{thm4}
For $0 < p <1$ and $a_n \geq 0$,
\begin{equation*}
\sum^{\infty}_{n=1}\Big( \frac 1{n} \sum^{\infty}_{k=n}a_k \Big
)^p \geq p^p \sum^{\infty}_{n=1}a^p_n.
\end{equation*}
\end{theorem}
The constant $p^p$ in Theorem \ref{thm4} is not best possible
and this was fixed by Levin and Ste\v ckin
\cite[Theorem 61]{L&S} for $0<p \leq 1/3$ in the following
\begin{theorem}
\label{thm5}
For $0 < p \leq 1/3$ and $a_n \geq 0$,
\begin{equation*}
\sum^{\infty}_{n=1}\Big( \frac 1{n} \sum^{\infty}_{k=n}a_k \Big
)^p \geq \Big(\frac {p}{1-p} \Big
)^p \sum^{\infty}_{n=1}a^p_n.
\end{equation*}
\end{theorem}
We shall give another proof of this result in Section \ref{sec 3} using Knopp's approach. We point out here that, for each $1/3< p <1$, Levin and Ste\v ckin also gave a constant better than the $p^p$ of Theorem \ref{thm4}. For example, when $p=1/2$, they gave $\sqrt{3}/2$ instead of $1/\sqrt{2}$. In Section \ref{sec 5}, we shall consider an approach of Redheffer \cite{R1}, showing first that, when treating Hardy-type inequalities, it can be regarded as essentially the approach of Knopp.
We then use Redheffer's method to prove the following
\begin{theorem}
\label{thm6}
For $a_n \geq 0$,
\begin{equation*}
\sum^{\infty}_{n=1}\Big( \frac 1{n} \sum^{\infty}_{k=n}a_k \Big
)^{1/2} \geq 0.8967 \sum^{\infty}_{n=1}a^{1/2}_n.
\end{equation*}
\end{theorem}
This improves the result of Levin and Ste\v
ckin mentioned above. It is also pointed out in Section \ref{sec 5} that the same method
can be used to establish the result in Theorem \ref{thm5} for $p$ slightly bigger than $1/3$.
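Although they are no substitute for the proofs, the lower bounds in Theorems~\ref{thm5} and~\ref{thm6} are easy to probe numerically on finitely supported sequences. The Python sketch below (the test sequences and names are our own choices) evaluates the ratio of the two sides for a few such sequences and checks that it stays above the stated constants.
\begin{verbatim}
def ratio(a, p):
    """(sum_n (1/n sum_{k>=n} a_k)^p) / (sum_n a_n^p) for a finitely
    supported nonnegative sequence a = (a_1, ..., a_N)."""
    N = len(a)
    lhs = sum((sum(a[n - 1:]) / n) ** p for n in range(1, N + 1))
    return lhs / sum(x ** p for x in a)

tests = [[1.0] * 200,
         [1.0 / k for k in range(1, 201)],
         [1.0 / k ** 2 for k in range(1, 201)],
         [0.5 ** k for k in range(1, 201)]]

for a in tests:
    assert ratio(a, 1 / 3) >= (1 / 2) ** (1 / 3)   # Theorem thm5: (p/(1-p))^p at p = 1/3
    assert ratio(a, 1 / 2) >= 0.8967               # Theorem thm6
print("both lower bounds hold on the test sequences")
\end{verbatim}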
In our proofs of Theorems \ref{thm3}-\ref{thm4}, certain
auxiliary sequences are constructed and there can be many ways to
construct such sequences. In Section \ref{sec 4}, we give an
example regarding these possibilities by answering a
question of Bennett.
\section{Proof of Theorem \ref{thm3} }
\label{sec 2} \setcounter{equation}{0}
We begin this section by explaining Knopp's idea \cite{K} for proving
Hardy's inequality \eqref{eq:1}. In fact, we will explain this more generally for
the case involving weighted mean matrices. For real numbers $\lambda_1>0, \lambda_i \geq 0,
i \geq 2$, we write $\Lambda_n=\sum^n_{i=1}\lambda_i$ and we are
looking for a positive constant $U$ such that
\begin{equation}
\label{2.0}
\sum^{\infty}_{n=1}\Big{|} \frac {1}{\Lambda_n}\sum^{n}_{k=1}\lambda_ka_k
\Big{|}^p \leq U \sum^{\infty}_{k=1}|a_k|^p
\end{equation}
holds for all complex sequences ${\bf a}$ with $p>1$ being fixed.
Knopp's idea is to find an
auxiliary sequence ${\bf w}=\{ w_i \}^{\infty}_{i=1}$ of positive terms
such that by H\"older's inequality,
\begin{eqnarray*}
\label{eq:12}
\Big( \sum_{k=1}^n\lambda_k|a_k| \Big)^p &= & \Big( \sum_{k=1}^n\lambda_k|a_k|w_k^{-\frac {1}{p^*}} \cdot
w_k^{\frac {1}{p^*}} \Big )^p \\
& \leq & \Big( \sum_{k=1}^n\lambda^p_k|a_k|^pw_k^{-(p-1)} \Big ) \Big ( \sum_{j=1}^n w_j \Big)^{p-1}
\end{eqnarray*}
so that
\begin{eqnarray*}\label{eq:13}
\sum^{\infty}_{n=1}\Big{|} \frac {1}{\Lambda_n}\sum^{n}_{k=1}\lambda_ka_k
\Big{|}^p &\leq &
\sum^\infty_{n=1}\frac
{1}{\Lambda^p_n}\Big(\sum_{k=1}^{n}\lambda^p_k|a_k|^pw_k^{-(p-1)}\Big)\Big(\sum_{j=1}^n
w_j\Big)^{p-1} \\
&=& \sum_{k=1}^{\infty}w_k^{-(p-1)}\lambda^p_k\Big(\sum_{n=k}^{\infty}\frac
{1}{\Lambda^p_n}\Big(\sum_{j=1}^nw_j\Big)^{p-1}\Big)|a_k|^p.
\end{eqnarray*}
Suppose now one can find for each $p>1$ a positive constant $U$,
a sequence ${\bf w}$ of positive terms with $w_n^{p-1}/\lambda^p_n$ decreasing to $0$, such
that for any integer $n \geq 1$,
\begin{equation}
\label{eq:7}
(w_1+\cdots+w_n)^{p-1}< U\Lambda^p_n( \frac {w_n^{p-1}}{\lambda^p_n}-\frac {w_{n+1}^{p-1}}{\lambda^p_{n+1}} ),
\end{equation}
then it is easy to see that inequality \eqref{2.0} follows from this.
When $\lambda_n=1$ for all $n$, Knopp's choice for ${\bf w}$ is given by
$w_n=\binom{n-1-1/p}{n-1}$ and one can show that \eqref{eq:7}
holds in this case with $U=(p^{*})^p$ and Hardy's inequality \eqref{eq:1}
follows from this.
We now want to apply Knopp's approach to prove Theorem \ref{thm3}. For this, we replace $\alpha-1$ by $\alpha$
and rewrite \eqref{8} as
\begin{equation}
\label{2.10}
\sum^{\infty}_{n=1}\Big{|}\frac
1{\sum^n_{i=1}i^{\alpha}}\sum^n_{i=1}i^{\alpha}a_i\Big{|}^p \leq
\Big (\frac {(\alpha+1) p}{(\alpha +1)
p-1} \Big )^p\sum^{\infty}_{n=1}|a_n|^p.
\end{equation}
Note that we are interested in the case $0 \leq \alpha \leq 1$ here. From our
discussions above, we are looking for a sequence ${\bf w}$ of positive terms
with $w_n^{p-1}/\lambda^p_n$ decreasing to $0$, such
that for any integer $n \geq 1$,
\begin{equation}
\label{2.20}
(w_1+\cdots+w_n)^{p-1}< \Big (\frac {(\alpha+1) p}{(\alpha +1)
p-1} \Big )^p\Big(\sum^n_{i=1}i^{\alpha} \Big)^p \Big ( \frac {w_n^{p-1}}{n^{\alpha p}}-\frac
{w_{n+1}^{p-1}}{(n+1)^{\alpha p}} \Big ).
\end{equation}
Following Knopp's choice, we define a sequence ${\bf w}$ such that
\begin{equation}
\label{2.21}
w_{n+1}= \frac {n+\alpha-1/p}{n}w_n, \hspace{0.1in} n \geq 1.
\end{equation}
Note that the above sequence is determined up to the choice of $w_1>0$; since
inequality \eqref{2.20} is homogeneous in ${\bf w}$, we may assume $w_1=1$ here. We note further that we need
$\alpha > -1/p^{*}$ in order for $w_n >0$ for all $n$, and we also point
out that it is easy to show by induction that
\begin{equation}
\label{2.22}
\sum^{n}_{i=1}w_i=\frac {n+\alpha-1/p}{1+\alpha-1/p}w_n.
\end{equation}
Moreover, one can easily check that
\begin{equation*}
\frac {w_n^{p-1}}{n^{\alpha p}} = O(n^{-\alpha-1/p^{*}}),
\end{equation*}
so that $w_n^{p-1}/\lambda^p_n$ decreases to $0$ as $n$
approaches infinity as long as $\alpha > -1/p^{*}$.
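As a small numerical sanity check (not needed for the argument), the following Python sketch generates the sequence ${\bf w}$ from the recursion \eqref{2.21}, compares its partial sums with the closed form \eqref{2.22}, and displays the decay of $w_n^{p-1}/n^{\alpha p}$; the sample values $p=2.5$ and $\alpha=0.3$ are arbitrary choices made only for this illustration.
\begin{verbatim}
p, alpha = 2.5, 0.3

w = [1.0]                                      # w_1 = 1
for n in range(1, 2000):
    w.append((n + alpha - 1.0 / p) / n * w[-1])        # recursion (2.21)

s = 0.0
for n, wn in enumerate(w, start=1):
    s += wn
    closed = (n + alpha - 1.0 / p) / (1.0 + alpha - 1.0 / p) * wn   # formula (2.22)
    assert abs(s - closed) < 1e-8 * closed

# w_n^{p-1}/n^{alpha*p} should decay like n^{-alpha-1/p*}
print([w[n - 1] ** (p - 1) / n ** (alpha * p) for n in (10, 100, 1000)])
\end{verbatim}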
Now we need a lemma on sums of powers, which is due to Levin and Ste\v ckin
\cite[Lemma 1, 2, p.18]{L&S}:
\begin{lemma}
\label{lem0}
For an integer $n \geq 1$,
\begin{eqnarray}
\label{4}
\sum^n_{i=1}i^r &\geq & \frac {1}{r+1}n(n+1)^r, \hspace{0.1in} 0 \leq r \leq 1, \\
\label{201}
\sum^n_{i=1}i^r &\geq & \frac {r}{r+1}\frac
{n^r(n+1)^r}{(n+1)^r-n^r}, \hspace{0.1in} r \geq 1.
\end{eqnarray}
Inequality \eqref{201} reverses when $-1 <r \leq 1$.
\end{lemma}
We note that only the case $r \geq 0$ of \eqref{201} was proved in \cite{L&S}, but one
easily checks that the proof extends to the case $r >-1$.
As we are interested in $0 \leq \alpha \leq 1$ here, we can now combine
\eqref{2.21}-\eqref{4} to deduce that inequality \eqref{2.20} will
follow from
\begin{equation*}
(1+\frac {\alpha-1/p}{n})^{p-1} < \frac {n}{1+\alpha-1/p} \Big ((1+\frac 1{n})^{\alpha p}
- (1+\frac {\alpha-1/p}{n} )^{p-1}\Big ).
\end{equation*}
We can simplify the above inequality further by recasting it as
\begin{equation}
\label{2.23}
\Big( 1+ \frac {\alpha+1/p^{*}}{n}\Big )^{1/p}\Big( 1+ \frac {\alpha-1/p}{n}\Big
)^{1/p^{*}} < \Big( 1+ \frac {1}{n}\Big )^{\alpha}.
\end{equation}
Now we define for
fixed $n \geq 1, p>1$,
\begin{equation*}
f(x)=x \ln (1+1/n)-\frac 1{p}\ln(1+ \frac {x+1/p^{*}}{n})-\frac 1{p^{*}}\ln(1+ \frac
{x-1/p}{n}).
\end{equation*}
It is easy to see here that inequality \eqref{2.23} is equivalent
to $f(\alpha) > 0$. It is also easy to
see that $f(x)$ is a convex function of $x$ for $0 \leq x \leq 1$
and that $f(1/p)=0$. It follows from this that if $f'(1/p) \leq 0$
then $f(x) > 0$ for $0 \leq x < 1/p$ and if $f'(1/p) \geq 0$ then
$f(x) > 0$ for $1/p < x \leq 1$. We have
\begin{equation*}
f'(1/p)=\ln (1+1/n)-\frac 1{n}+\frac 1{pn(n+1)}.
\end{equation*}
We now use Taylor expansion to conclude for $x>0$,
\begin{equation}
\label{2.24}
x - x^2/2< \ln (1+x) < x - x^2/2+ x^3/3.
\end{equation}
It follows from this that for $p \geq 2$, $n \geq 2$,
\begin{equation*}
f'(1/p)< -\frac 1{2n^2}+\frac 1{3n^3}+\frac 1{pn(n+1)} \leq
-\frac 1{2n^2}+\frac 1{3n^3}+\frac 1{2n(n+1)}=\frac 1{3n^3}-\frac 1{2n^2(n+1)} \leq
0.
\end{equation*}
For $n = 1$,
\begin{equation*}
f'(1/p)= \ln 2-1 +\frac 1{2p} \leq \ln 2-1 +\frac 1{4} < 0.
\end{equation*}
It is also easy to check that for $1< p \leq 4/3$ and $n=1$,
\begin{equation*}
f'(1/p)=\ln 2-1+\frac 1{2p} > 0.
\end{equation*}
For $n \geq 2, 1< p \leq 4/3$, by using the first inequality of
\eqref{2.24} we get
\begin{equation*}
f'(1/p) > -\frac 1{2n^2}+\frac 1{pn(n+1)} \geq
0.
\end{equation*}
This now enables us to conclude the proof of Theorem \ref{thm3}.
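The sign claims on $f'(1/p)$ and the resulting positivity of $f(\alpha)$, that is, inequality \eqref{2.23}, can also be checked numerically; the following Python sketch does so on a finite grid of $n$ and a few sample values of $p$ and $\alpha$ in the two ranges of Theorem \ref{thm3}. The grids and sample values are arbitrary choices, and of course such a finite check does not replace the proof above.
\begin{verbatim}
import math

def fprime(n, p):                              # the value f'(1/p)
    return math.log(1 + 1 / n) - 1 / n + 1 / (p * n * (n + 1))

def f(a, n, p):                                # the function f at x = a
    q = p / (p - 1)                            # the conjugate exponent p*
    return (a * math.log(1 + 1 / n)
            - math.log(1 + (a + 1 / q) / n) / p
            - math.log(1 + (a - 1 / p) / n) / q)

for n in range(1, 200):
    for p in (2.0, 3.0, 5.0):                  # p >= 2 and 0 <= alpha < 1/p
        assert fprime(n, p) <= 0
        assert all(f(a, n, p) > 0 for a in (0.0, 0.5 / p, 0.99 / p))
    for p in (1.1, 1.25, 4 / 3):               # 1 < p <= 4/3 and 1/p < alpha <= 1
        assert fprime(n, p) >= 0
        assert all(f(a, n, p) > 0 for a in (1.01 / p, (1 / p + 1) / 2, 1.0))
print("all sign checks passed")
\end{verbatim}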
\section{Another Proof of Theorem \ref{thm5}}
\label{sec 3} \setcounter{equation}{0}
We use the idea of Levin and Ste\v ckin in the proof of Theorem
62 in \cite{L&S} to find an auxiliary sequence ${\bf w}=\{ w_i
\}^{\infty}_{i=1}$ of positive
terms so that for any finite summation from $n = 1$ to $N$ with $N \geq
1$, we have
\begin{equation*}
\sum^{N}_{n=1}a^p_n = \sum^{N}_{n=1}\frac {a^p_n}{\sum^n_{i=1}w_i} \sum^n_{k=1}w_k
= \sum^N_{n=1}w_n\sum^{N}_{k=n}\frac {a^p_k}{\sum^k_{i=1}w_i}.
\end{equation*}
On letting $N \rightarrow \infty$, we then have
\begin{equation*}
\sum^{\infty}_{n=1}a^p_n = \sum^{\infty}_{n=1}w_n\sum^{\infty}_{k=n}\frac {a^p_k}{\sum^k_{i=1}w_i}.
\end{equation*}
By H\"older's inequality, we have
\begin{equation*}
\sum^{\infty}_{k=n}\frac {a^p_k}{\sum^k_{i=1}w_i} \leq
\Big ( \sum^{\infty}_{k=n}\Big(\sum^k_{i=1}w_i \Big)^{-1/(1-p)}\Big
)^{1-p} \Big ( \sum^{\infty}_{k=n}a_k\Big
)^{p}.
\end{equation*}
Suppose now one can find
a sequence ${\bf w}$ of positive terms with $w^{-1/(1-p)}_nn^{-p/(1-p)}$ decreasing to $0$ for each $0<p \leq 1/3$, such
that for any integer $n \geq 1$,
\begin{equation}
\label{3.1}
(w_1+\cdots+w_n)^{-1/(1-p)} \leq
\Big ( \frac {1-p}{p} \Big )^{p/(1-p)}
\Big( \frac {w^{-1/(1-p)}_n}{n^{p/(1-p)}}- \frac {w^{-1/(1-p)}_{n+1}}{(n+1)^{p/(1-p)}}\Big),
\end{equation}
then it is easy to see that Theorem \ref{thm5} follows from this.
We now define our sequence ${\bf w}$ to be
\begin{equation}
\label{3.2}
w_{n+1}= \frac {n+1/p-2}{n}w_n, \hspace{0.1in} n \geq 1.
\end{equation}
Note that the above sequence is determined up to the choice of $w_1>0$; since
inequality \eqref{3.1} is homogeneous in ${\bf w}$, we may assume $w_1=1$ here. We note further that
$w_n >0$ for all $n$ as $0 < p \leq 1/3$, and it is easy to show by induction that
\begin{equation}
\label{3.3}
\sum^{n}_{i=1}w_i=\frac {n+1/p-2}{1/p-1}w_n.
\end{equation}
Moreover, one can easily check that
\begin{equation*}
\frac {w^{-1/(1-p)}_n}{n^{p/(1-p)}} = O(n^{-(1-p)/p}),
\end{equation*}
so that $w_n^{-1/(1-p)}n^{-p/(1-p)}$ decreases to $0$ as $n$
approaches infinity.
We now combine
\eqref{3.2}-\eqref{3.3} to recast inequality \eqref{3.1} as
\begin{equation*}
(n+1/p-2)^{-1/(1-p)} \leq \frac {p}{1-p} \Big (n^{-p/(1-p)}
- (n+1)^{-p/(1-p)}n^{1/(1-p)}(n+1/p-2)^{-1/(1-p)} \Big ).
\end{equation*}
We further rewrite the above inequality as
\begin{eqnarray*}
\frac {1-p}{p} & \leq & n^{-p/(1-p)}(n+1/p-2)^{1/(1-p)}
- (n+1)^{-p/(1-p)}n^{1/(1-p)} \\
&=& n \Big ( \Big(1+\frac {1/p-2}{n} \Big )^{1/(1-p)}
- \Big (1+ \frac 1{n} \Big )^{-p/(1-p)} \Big ).
\end{eqnarray*}
It is easy to see that the above inequality follows from $f(1/n) \geq 0$ where we define for $x \geq 0$,
\begin{equation*}
f(x)=\Big(1+ (1/p-2)x \Big )^{1/(1-p)}
- \Big (1+ x \Big )^{-p/(1-p)}-\frac {1-p}{p}x.
\end{equation*}
We now prove that $f(x) \geq 0$ for all $x \geq 0$ when $0 < p \leq 1/3$, and this will
conclude the proof of Theorem \ref{thm5}. We note that
\begin{eqnarray*}
f'(x) &=& \frac {1/p-2}{1-p}\Big(1+ (1/p-2)x \Big )^{p/(1-p)}+\frac p{1-p}\Big (1+ x \Big )^{-p/(1-p)-1}-\frac
{1-p}{p}, \\
f''(x) &=& \frac {p(1/p-2)^2}{(1-p)^2}\Big(1+ (1/p-2)x \Big )^{p/(1-p)-1}-\frac p{(1-p)^2}\Big (1+ x \Big
)^{-p/(1-p)-2}.
\end{eqnarray*}
We now define for $x \geq 0$,
\begin{equation*}
g(x)=(1/p-2)^{2(1-p)/(1-2p)}(1+x)^{(2-p)/(1-2p)}-(1+ (1/p-2)x).
\end{equation*}
It is easy to see that $g(x) \geq 0$ implies $f''(x) \geq 0$.
Note that $(2-p)/(1-2p) \geq 1$ so that
\begin{eqnarray*}
g'(x) &=&
(1/p-2)^{2(1-p)/(1-2p)}(2-p)/(1-2p)(1+x)^{(2-p)/(1-2p)-1}-(1/p-2)
\\
& \geq & (1/p-2)^{2(1-p)/(1-2p)}-(1/p-2) \geq 0,
\end{eqnarray*}
where the last inequality above follows from $2(1-p)/(1-2p) \geq
1$ and $0 < p \leq 1/3$, so that $1/p - 2 \geq 1$. It follows
that $f''(x) \geq 0$; since one also checks easily that
$f'(0)=0$, this implies $f'(x) \geq 0$ for $x \geq 0$, so that $f(x) \geq f(0)=0$,
which is exactly what we wanted to prove.
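The nonnegativity of $f$ just established can also be confirmed numerically; the following Python sketch evaluates $f$ on a grid of $x$-values for a few sample values of $p \in (0,1/3]$. The grid and the sample values are arbitrary choices, and the check merely illustrates the proof above.
\begin{verbatim}
def f(x, p):
    return ((1 + (1 / p - 2) * x) ** (1 / (1 - p))
            - (1 + x) ** (-p / (1 - p))
            - (1 - p) / p * x)

for p in (0.05, 0.2, 1 / 3):
    assert all(f(5.0 * k / 1000, p) >= -1e-12 for k in range(1001))
print("f >= 0 on the grid")
\end{verbatim}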
\section{Redheffer's Approach and Proof of Theorem \ref{thm6} }
\label{sec 5} \setcounter{equation}{0}
Redheffer's approach in \cite{R1} to Hardy-type inequalities via his ``recurrent inequalities'' can be put into the following form:
\begin{lemma}[{\cite[Lemma 2.4]{G}}]
\label{lem6.1}
Let $\{ \lambda_i \}^{\infty}_{i \geq 1}, \{ a_i \}^{\infty}_{i \geq 1}$ be two sequences of positive real numbers and
let $S_n=\sum_{i=1}^n \lambda_i
a_i$. Let $0 \neq p<1$ be fixed and let $\{ \mu_i \}^{\infty}_{i \geq 1}, \{ \eta_i \}^{\infty}_{i \geq 1}$ be two
positive sequences of real numbers such
that $\mu_i \leq \eta_i$ for $0<p<1$ and $\mu_i \geq \eta_i$ for
$p<0$, then for $n \geq 2$,
\begin{equation}
\label{6.1}
\sum_{i=2}^{n-1}\Big ( \mu_i-(\mu^q_{i+1}-\eta^q_{i+1})^{1/q} \Big )S_i^{1/p}+\mu_nS_n^{1/p}
\leq (\mu^q_{2}-\eta^q_{2})^{1/q}\lambda^{1/p}_1a_1^{1/p}
+\sum_{i=2}^n \eta_i \lambda^{1/p}_i a_i^{1/p}.
\end{equation}
\end{lemma}
We consider the case $0<p<1$ in the above lemma and we set $\eta_i =
\lambda^{-1/p}_i$ together with a change of variables: $\mu_i \mapsto \mu_i\eta_i$ to rewrite \eqref{6.1} as
\begin{equation*}
\sum_{i=2}^{n-1}\Big ( \frac {\mu_i}{\lambda^{1/p}_i}-
\frac {(\mu^q_{i+1}-1)^{1/q}}{\lambda^{1/p}_{i+1}} \Big )S_i^{1/p}+ \frac {\mu_n}{\lambda^{1/p}_n}S_n^{1/p}
\leq (\mu^q_{2}-1)^{1/q}\frac {\lambda^{1/p}_1}{\lambda^{1/p}_2}a_1^{1/p}
+\sum_{i=2}^n a_i^{1/p}.
\end{equation*}
We now set $\mu^q_{i}-1=\nu_i$ and make a further change of
variables: $p \mapsto 1/p$ to write the above inequality as:
\begin{equation}
\label{6.2}
\sum_{i=2}^{n-1}\Big ( \frac {(1+\nu_i)^{-(p-1)}}{\lambda^{p}_i}-
\frac {\nu^{-(p-1)}_{i+1}}{\lambda^{p}_{i+1}} \Big )S_i^{p}+ \frac {(1+\nu_n)^{-(p-1)}}{\lambda^{p}_n}S_n^{p}
\leq \nu^{-(p-1)}_2\frac {\lambda^{p}_1}{\lambda^{p}_2}a_1^{p}
+\sum_{i=2}^n a_i^{p}.
\end{equation}
Now if we set for $i \geq 2$,
\begin{equation*}
\nu_i=\frac {\sum^{i-1}_{j=1}w_j}{w_i},
\end{equation*}
we can rewrite inequality \eqref{6.2} as
\begin{eqnarray}
\label{6.3}
&& \sum_{i=2}^{n-1}
\Big(\sum^i_{j=1}w_j \Big )^{-(p-1)}\Big( \frac {w_i^{p-1}}{\lambda^p_i}-\frac {w_{i+1}^{p-1}}{\lambda^p_{i+1}} \Big )
\Lambda^p_i A_i^{p}+
\Big(\sum^n_{j=1}w_j \Big )^{-(p-1)}\frac {w_n^{p-1}}{\lambda^p_n}
\Lambda^p_n A_n^{p} \\
&\leq & \frac {w_2^{p-1}}{w^{p-1}_1}\frac {\lambda^{p}_1}{\lambda^{p}_2}a_1^{p}
+\sum_{i=2}^n a_i^{p}, \nonumber
\end{eqnarray}
where
\begin{equation*}
\Lambda_n=\sum^n_{i=1}\lambda_i, \hspace{0.1in} A_n=\frac {S_n}{\Lambda_n}, \hspace{0.1in} n \geq 1.
\end{equation*}
Suppose now we can find a sequence ${\bf w}=\{ w_i \}^{\infty}_{i=1}$ of positive
terms such that inequality \eqref{eq:7} holds for all $n \geq 1$.
Then inequality \eqref{6.3} implies
\begin{equation}
\label{6.4}
\frac 1{U} \sum_{i=1}^{n} A_i^{p} \leq \Big ( \frac 1{U}+
\frac {w_2^{p-1}}{w^{p-1}_1}\frac
{\lambda^{p}_1}{\lambda^{p}_2} \Big ) a_1^{p}
+\sum_{i=2}^n a_i^{p} \leq \sum_{i=1}^n a_i^{p},
\end{equation}
where the last inequality above follows from the case $n=1$ of inequality \eqref{eq:7}, which implies
\begin{equation*}
\frac 1{U} < 1-
\frac {w_2^{p-1}}{w^{p-1}_1}\frac
{\lambda^{p}_1}{\lambda^{p}_2}.
\end{equation*}
Thus we have seen that on letting $n \rightarrow +\infty$,
inequality \eqref{6.4} gives back inequality \eqref{2.0}. Hence Redheffer's approach can be regarded as essentially Knopp's approach when treating Hardy-type inequalities. The only difference is that one no longer requires that $w_n^{p-1}/\lambda^p_n$ decreases to $0$ when selecting the sequence ${\bf w}$ in Redheffer's approach.
Now we state a lemma similar to Lemma \ref{lem6.1}:
\begin{lemma}
\label{lem6.2}
Let $\{ \lambda_i \}^{\infty}_{i \geq 1}, \{ a_i \}^{\infty}_{i \geq 1}$ be two sequences of positive real numbers
and suppose $\sum^{\infty}_{i=1}\lambda_ia_i$ converges.
Let $S_n=\sum_{i=n}^{\infty} \lambda_i
a_i$ and let $0 < p<1$ be fixed. Let $\{ \mu_i \}^{\infty}_{i \geq 1}, \{ \eta_i \}^{\infty}_{i \geq 1}$ be two
positive sequences of real numbers such
that $\mu_i \geq \eta_i$, then for $n \geq 2$,
\begin{equation}
\label{6.5}
\mu_1S^p_1+\sum_{i=2}^{n}\Big ( \mu_i-(\mu^{\frac 1{1-p}}_{i-1}-\eta^{\frac 1{1-p}}_{i-1})^{1-p} \Big )S_i^{p}
-(\mu^{\frac 1{1-p}}_n-\eta^{\frac 1{1-p}}_n)^{1-p}S^{p}_{n+1}
\geq \sum_{i=1}^n \eta_i \lambda^{p}_i a_i^{p}.
\end{equation}
\end{lemma}
\begin{proof}
We note that for
$k \geq 1$,
\begin{equation}
\label{6.6}
\mu_k S^{p}_k- \eta_k \lambda^{p}_k a_k^{p}=S^{p}_{k+1}(\mu_k (1+t)^{p}
- \eta_kt^{p}) \geq (\mu^{\frac 1{1-p}}_k-\eta^{\frac 1{1-p}}_k)^{1-p}S^{p}_{k+1},
\end{equation}
with $t=\lambda_k a_k/S_{k+1}$.
The lemma then follows by adding \eqref{6.6} for $1 \leq k \leq n$
together.
\end{proof}
We set $\eta_i =
\lambda^{-p}_i$ together with a change of variables: $\mu_i \mapsto \mu_i\eta_i$ to rewrite \eqref{6.5} as
\begin{equation*}
\frac {\mu_1}{\lambda^{p}_1}S_1^{p}+ \sum_{i=2}^{n}\Big ( \frac {\mu_i}{\lambda^{p}_i}-
\frac {(\mu^{\frac 1{1-p}}_{i-1}-1)^{1-p}}{\lambda^{p}_{i-1}} \Big )S_i^{p}-
\frac {(\mu^{\frac 1{1-p}}_{n}-1)^{1-p}}{\lambda^{p}_{n}} S_{n+1}^{p}
\geq \sum_{i=1}^n a_i^{p}.
\end{equation*}
We now set $\mu^{\frac 1{1-p}}_{i}-1=\nu_i$ and write the above inequality as:
\begin{equation*}
\frac {(1+\nu_1)^{1-p}}{\lambda^{p}_1}S_1^{p}+\sum_{i=2}^{n}\Big ( \frac {(1+\nu_i)^{1-p}}{\lambda^{p}_i}-
\frac {\nu^{1-p}_{i-1}}{\lambda^{p}_{i-1}} \Big )S_i^{p}- \frac {\nu_n^{1-p}}{\lambda^{p}_n}S_{n+1}^{p}
\geq \sum_{i=1}^n a_i^{p}.
\end{equation*}
From now on we consider the case $\lambda_i=1$ for all $i$ in the above inequality and we set for $n \geq 1$,
\begin{equation*}
\nu_n=\frac {n-\beta}{c},
\end{equation*}
with $\beta \leq 1, c \geq \beta$ here. We want to choose $c, \beta$ such
that the following inequality holds for $n \geq 2$:
\begin{equation}
\label{6.49}
\max \Big( (1+c-\beta)^{1-p}, n^p\Big ((n+c-\beta)^{1-p}-(n-1-\beta)^{1-p} \Big ) \Big ) \leq c^{1-p}k(p),
\end{equation}
where $k(p)$ is a constant depending only on $p$ and we want
$k(p)$ to be as small as possible.
For this purpose, we further assume that $k(p)$ satisfies:
\begin{equation}
\label{6.50}
(1-p)(1+c) < c^{1-p}k(p),
\end{equation}
and define for $0 \leq x \leq 1/2$,
\begin{equation*}
f(x) = (1+(c-\beta)x)^{1-p}-(1-(1+\beta)x)^{1-p}-c^{1-p}k(p)x,
\end{equation*}
and note that with our assumption on $k(p)$, $f'(0)<0$. Note also
that
\begin{equation*}
f''(x)=p(1-p)(1+\beta)^2(1-(1+\beta)x)^{-p-1}-p(1-p)(c-\beta)^2(1+(c-\beta)x)^{-p-1}.
\end{equation*}
It follows from this that when $1+\beta \geq c-\beta$ then
$f''(x) \geq 0$ for $0 \leq x \leq 1/2$. Otherwise we note that
$f''(x)=0$ can have at most one root in $(0, 1/2)$ and
$f''(0)<0$. In either case, the maximum of $f$ over $[0,1/2]$ is attained at an endpoint, so that for $0 \leq x \leq
1/2$, $f(x) \leq \max (f(0), f(1/2))=\max (0, f(1/2))$.
We deduce from our discussion above on setting $x=1/n$ in $f(x)$
that in order for inequality \eqref{6.49} to hold, it suffices
to check the case $n=2$, namely,
\begin{equation}
\label{6.51}
\max \Big( (1+c-\beta)^{1-p}, 2^p\Big ((2+c-\beta)^{1-p}-(1-\beta)^{1-p} \Big ) \Big ) \leq c^{1-p}k(p),
\end{equation}
provided we assume \eqref{6.50}.
We now look at the case $p=1/2$ and in this case we choose
$c, \beta$ so that the following holds:
\begin{equation*}
(1+\frac {1-\beta}{c})^{1/2}=2^{1/2}\Big ((1+\frac {2-\beta}{c})^{1/2}-(\frac {1-\beta}{c})^{1/2} \Big ).
\end{equation*}
On setting $x=(1-\beta)/c, c'=1/c$, we can rewrite the above equation as
\begin{equation*}
(1+x)^{1/2}=2^{1/2}\Big ( (1+c'+x)^{1/2}-x^{1/2} \Big ).
\end{equation*}
Solving the above equation yields:
\begin{equation*}
x=\frac {1-\beta}{c}=\frac {\sqrt{(10+4c')^2+28(1+2c')^2}-(10+4c')}{14}.
\end{equation*}
To prove Theorem \ref{thm6}, we
take $c=5/2$ here, then $x \approx 0.2435$ with $\beta \approx
0.3912$ and $k(1/2) \approx 1.1151 < 1.1152$. We take $k(p)=1.1152$ here and
one can also check that
inequality \eqref{6.50} holds in this case. As $1/1.1152 > 0.8967$, Theorem \ref{thm6}
now follows.
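The numerical constants above can be reproduced directly; the following Python sketch recomputes $x$, $\beta$ and $k(1/2)$ for the choice $c=5/2$ and then checks inequalities \eqref{6.50} and \eqref{6.51}, as well as the bound $1/k(1/2) > 0.8967$, for the value $k(1/2)=1.1152$ used in the text.
\begin{verbatim}
import math

p, c = 0.5, 2.5
cp = 1 / c
x = (math.sqrt((10 + 4 * cp) ** 2 + 28 * (1 + 2 * cp) ** 2) - (10 + 4 * cp)) / 14
beta = 1 - c * x
print(x, beta, ((1 + c - beta) / c) ** (1 - p))   # approx 0.2435, 0.3912, 1.1151

k = 1.1152
lhs1 = (1 + c - beta) ** (1 - p)
lhs2 = 2 ** p * ((2 + c - beta) ** (1 - p) - (1 - beta) ** (1 - p))
assert max(lhs1, lhs2) <= c ** (1 - p) * k        # inequality (6.51)
assert (1 - p) * (1 + c) < c ** (1 - p) * k       # inequality (6.50)
assert 1 / k > 0.8967                             # the constant in Theorem 6
\end{verbatim}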
Now we consider other values of $p$. For this, we choose
$c=1/p-1, k(p)=c^p$ so that inequality \eqref{6.50} becomes an
equality. Because of this, we need to assume that
\begin{equation}
\label{6.54}
\beta < \frac 1{2p}-1,
\end{equation}
so that $f''(0)<0$ is satisfied for the function $f(x)$ defined above, and the argument from the above
discussion then goes through to ensure that when inequality \eqref{6.51} holds,
inequality \eqref{6.49} also holds. For the case $p=1/3$, it is easy to check that on taking $\beta = 3-2\sqrt{2}$, both inequalities \eqref{6.51} and
\eqref{6.54} are satisfied. This implies Theorem \ref{thm5} for the case $p =
1/3$. In view of this, one sees that it is possible to prove the
result in Theorem \ref{thm5} for $p$ beyond $1/3$. For example,
on taking $p=0.34$ and $\beta=0.21$, calculations show that both
inequalities \eqref{6.54} and \eqref{6.51} are satisfied, and
hence Theorem \ref{thm5} holds for $p=0.34$.
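The claim for $p=0.34$ can be verified with a few lines of Python; the sketch below checks inequalities \eqref{6.54} and \eqref{6.51} for $p=0.34$, $\beta=0.21$, $c=1/p-1$ and $k(p)=c^p$ (so that the right-hand side of \eqref{6.51} equals $c$), as an illustration of the calculations referred to above.
\begin{verbatim}
p, beta = 0.34, 0.21
c = 1 / p - 1
rhs = c ** (1 - p) * c ** p                       # = c, since k(p) = c^p

assert beta < 1 / (2 * p) - 1                     # inequality (6.54)
lhs1 = (1 + c - beta) ** (1 - p)
lhs2 = 2 ** p * ((2 + c - beta) ** (1 - p) - (1 - beta) ** (1 - p))
assert max(lhs1, lhs2) <= rhs                     # inequality (6.51)
print(lhs1, lhs2, rhs)
\end{verbatim}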
\section{Another look at Inequality \eqref{8} }
\label{sec 4} \setcounter{equation}{0}
In this section we return to the consideration of inequality \eqref{8} via our approach in
Section \ref{sec 2}, which boils down to a construction of a sequence ${\bf w}$ of positive terms
with $w_n^{p-1}/\lambda^p_n$ decreasing to $0$, such
that for any integer $n \geq 1$, inequality \eqref{2.20} is
satisfied. Certainly here the choice for ${\bf w}$ may not be
unique and in fact in the case $\alpha =0$,
Bennett asked in \cite{B4} (see the paragraph below Lemma 4.11) for other
sequences, not multiples of Knopp's, that satisfy \eqref{2.20}. He
also mentioned that the obvious choice, $w_n=n^{-1/p}$, does not
work.
We point out here that even though the choice $w_n=n^{-1/p}$ does not
satisfy \eqref{2.20} when $\alpha=0$ for all $p>1$,
as one can see by considering inequality \eqref{2.20} for the case $n=1$
with $p \rightarrow 1^{+}$,
it nevertheless works for $p \geq 3$, which we now show by first
rewriting \eqref{2.20} in our case as
\begin{equation}
\label{2.30}
\Big(\sum^n_{i=1}i^{-1/p} \Big )^{p-1}< \Big (\frac {p}{
p-1} \Big )^pn^p\Big ( n^{-(p-1)/p }-
(n+1)^{-(p-1)/p }
\Big ).
\end{equation}
We note that the case $n=1$ of \eqref{2.30} follows from the case $\alpha=0$ of the following inequality,
\begin{equation}
\label{2.4}
1-2^{-(p-1)/p-\alpha} > \Big ( 1- \frac {1}{(\alpha+1)p} \Big )^p,
\hspace{0.1in} 0 \leq \alpha \leq 1/p.
\end{equation}
To show \eqref{2.4}, we note that by Taylor expansion,
for $p > 2$ and $-1 < x<0$,
\begin{equation*}
(1+x)^p < 1+px+\frac {p(p-1)x^2}{2}.
\end{equation*}
Applying the above inequality with $x=-1/((\alpha+1) p)$, we obtain for $p \geq 3$,
\begin{equation*}
\Big ( 1- \frac {1}{(\alpha+1)p} \Big )^p < 1-\frac {1}{(\alpha+1)}+\frac {(p-1)}{2(\alpha+1)^2p}.
\end{equation*}
Hence inequality \eqref{2.4} will follow from
\begin{equation*}
1-\frac {p-1}{2(\alpha+1)p}-2^{-(p-1)/p}\frac {(\alpha+1)}{2^{\alpha}} >0.
\end{equation*}
It is easy to see that when $p \geq 3$, the function $\alpha \mapsto
(1+\alpha)2^{-\alpha}$ is an increasing function of $\alpha$ for
$0 \leq \alpha \leq 1/p$. It follows from this that for
$0 \leq \alpha \leq 1/p$,
\begin{equation*}
1-\frac {p-1}{2(\alpha+1)p}-2^{-(p-1)/p}\frac {(\alpha+1)}{2^{\alpha}}
> 1-\frac {p-1}{2p}-2^{-(p-1)/p}\frac {(1/p+1)}{2^{1/p}} = 0,
\end{equation*}
from which inequality \eqref{2.4} follows.
Now, to show \eqref{2.30} holds for all $n \geq 2, p \geq 3$, we
first note that for $p>1$,
\begin{equation*}
\sum^{n}_{i=1}i^{-1/p} < 1+ \int^n_{1}x^{-1/p}dx = \frac {p}{p-1}n^{1-1/p}-\frac
{1}{p-1}.
\end{equation*}
On the other hand, by the Hermite--Hadamard inequality,
which asserts that for a continuous convex function $f(x)$ on $[a, b]$,
\begin{equation*}
f(\frac {a+b}2) \leq \frac {1}{b-a}\int^b_a f(x)dx \leq \frac
{f(a)+f(b)}{2},
\end{equation*}
we have for $p>1$,
\begin{equation*}
n^{-(p-1)/p}-(n+1)^{-(p-1)/p}=\frac {p-1}{p}\int^{n+1}_{n}x^{-1-1/p^{*}}dx
\geq \frac {p-1}{p}(n+1/2)^{-1-1/p^{*}}.
\end{equation*}
Hence inequality \eqref{2.30} will follow from the following
inequality for $n \geq 2$,
\begin{equation*}
\frac {p}{p-1}n^{1-1/p}-\frac
{1}{p-1} \leq p^{*}n^{1/p^{*}}\Big ( 1 + \frac {1}{2n} \Big
)^{-(1+p^{*})/p}.
\end{equation*}
It is easy to see that for $p>1$,
\begin{equation*}
\Big ( 1 + \frac {1}{2n} \Big
)^{-(1+p^{*})/p} \geq 1 - \frac {1+p^{*}}{p}\frac {1}{2n}.
\end{equation*}
Hence it suffices to show
\begin{equation*}
\frac {p}{p-1}n^{1-1/p}-\frac
{1}{p-1} \leq p^{*}n^{1/p^{*}}\Big ( 1 - \frac {1+p^{*}}{p}\frac {1}{2n} \Big
),
\end{equation*}
or equivalently,
\begin{equation*}
\Big ( 1+ \frac {1}{2p-2} \Big )^p \leq n.
\end{equation*}
It is easy to check that the left-hand side above is a
decreasing function of $p \geq 3$ and is equal to $5^3/4^3<2$ when $p=3$.
Hence it follows that \eqref{2.30} holds for all $n \geq 2, p \geq
3$.
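The two assertions above can be illustrated numerically; the following Python sketch checks \eqref{2.30} for a finite range of $n$ when $p \geq 3$ and exhibits its failure at $n=1$ for a sample value of $p$ close to $1$. The ranges and the sample value $p=1.05$ are arbitrary choices for this illustration.
\begin{verbatim}
def lhs(n, p):
    return sum(i ** (-1 / p) for i in range(1, n + 1)) ** (p - 1)

def rhs(n, p):
    return ((p / (p - 1)) ** p * n ** p
            * (n ** (-(p - 1) / p) - (n + 1) ** (-(p - 1) / p)))

for p in (3.0, 4.0, 10.0):
    assert all(lhs(n, p) < rhs(n, p) for n in range(1, 500))

assert lhs(1, 1.05) >= rhs(1, 1.05)      # (2.30) fails at n = 1 for p close to 1
print("checks passed")
\end{verbatim}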
We consider lastly inequality \eqref{2.20} for other values of $\alpha$
and we take $w_n=n^{\alpha-1/p}$ for $n \geq 1$ so that
we can rewrite \eqref{2.20} as
\begin{equation}
\label{2.3}
\Big(\sum^n_{i=1}i^{\alpha-1/p} \Big )^{p-1}< \Big (\frac {(\alpha+1) p}{(\alpha +1)
p-1} \Big )^p\Big(\sum^n_{i=1}i^{\alpha} \Big)^p \Big ( n^{-(p-1)/p-\alpha }-
(n+1)^{-(p-1)/p-\alpha }
\Big ).
\end{equation}
We end our discussion here by considering the case $1 \leq \alpha \leq 1+1/p$
and we apply Lemma \ref{lem0} to obtain
\begin{eqnarray*}
\sum^n_{i=1}i^{\alpha-1/p} &\leq & \frac {\alpha-1/p}{\alpha-1/p+1}\frac
{n^{\alpha-1/p}(n+1)^{\alpha-1/p}}{(n+1)^{\alpha-1/p}-n^{\alpha-1/p}}
=\frac {1}{\alpha-1/p+1}\Big ( \int^{n+1}_n x^{-\alpha+1/p-1}dx \Big
)^{-1}, \\
\sum^n_{i=1}i^{\alpha} &\geq & \frac {\alpha}{\alpha+1}\frac
{n^{\alpha}(n+1)^{\alpha}}{(n+1)^{\alpha}-n^{\alpha}}
=\frac {1}{\alpha+1}\Big ( \int^{n+1}_n x^{-\alpha-1}dx \Big
)^{-1}
\end{eqnarray*}
We further write
\begin{equation*}
n^{-(p-1)/p-\alpha }-
(n+1)^{-(p-1)/p-\alpha
}=(\alpha-1/p+1)\int^{n+1}_{n}x^{-\alpha+1/p-2}dx,
\end{equation*}
so that inequality \eqref{2.3} will follow from
\begin{equation*}
\int^{n+1}_n x^{-\alpha-1}dx < \Big ( \int^{n+1}_n x^{-\alpha+1/p-1}dx \Big
)^{1-1/p} \Big ( \int^{n+1}_{n}x^{-\alpha+1/p-2}dx \Big )^{1/p}.
\end{equation*}
One can easily see that the above inequality holds by H\"older's
inequality and it follows that inequality \eqref{2.3} holds for
$p>1, 1 \leq \alpha \leq 1+1/p$. This provides another proof of
inequality \eqref{8} for $p>1, 1 \leq \alpha \leq 1+1/p$.
| {
"timestamp": "2007-05-25T05:29:35",
"yymm": "0701",
"arxiv_id": "math/0701113",
"language": "en",
"url": "https://arxiv.org/abs/math/0701113",
"abstract": "We prove some Hardy-type inequalities via an approach that involves constructing auxiliary sequences.",
"subjects": "Classical Analysis and ODEs (math.CA)",
"title": "Hardy-type Inequalities Via Auxiliary Sequences",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.987375050346933,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7095221685932362
} |
https://arxiv.org/abs/1907.06233 | Pointwise adaptive kernel density estimation under local approximate differential privacy | We consider non-parametric density estimation in the framework of local approximate differential privacy. In contrast to centralized privacy scenarios with a trusted curator, in the local setup anonymization must be guaranteed already on the individual data owners' side and therefore must precede any data mining tasks. Thus, the published anonymized data should be compatible with as many statistical procedures as possible. We suggest adding Laplace noise and Gaussian processes (both appropriately scaled) to kernel density estimators to obtain approximate differential private versions of the latter ones. We obtain minimax type results over Sobolev classes indexed by a smoothness parameter $s>1/2$ for the mean squared error at a fixed point. In particular, we show that taking the average of private kernel density estimators from $n$ different data owners attains the optimal rate of convergence if the bandwidth parameter is correctly specified. Notably, the optimal convergence rate in terms of the sample size $n$ is $n^{-(2s-1)/(2s+1)}$ under local differential privacy and thus deteriorated to the rate $n^{-(2s-1)/(2s)}$ which holds without privacy restrictions. Since the optimal choice of the bandwidth parameter depends on the smoothness $s$ and is thus not accessible in practice, adaptive methods for bandwidth selection are necessary and must, in the local privacy framework, be performed directly on the anonymized data. We address this problem by means of a variant of Lepski's method tailored to the privacy setup and obtain general oracle inequalities for private kernel density estimators. In the Sobolev case, the resulting adaptive estimator attains the optimal rate of convergence at least up to extra logarithmic factors. | \section{Introduction}
In the modern information era data are routinely collected in all areas of private and public life.
Although the availability of massive data sets is essential to answer important scientific and societal questions, the individual data owners (who may be individuals, households, research institutions, companies, \ldots) might refuse to share their, maybe sensitive, raw data with others.
Even more, in view of regularly reported data leaks, they may not even want to entrust their data to a central curator who stores the data and publishes anonymized summary statistics.
Finding ourselves in such a dilemma, the question whether and, if yes, how data analytics can still be performed is of special importance.
For the evaluation of this question, several aspects have to be taken into account.
Firstly, in absence of a trusted curator, privacy of the data has to be achieved already \emph{locally} at the individual data owners' level.
The $i$-th data holder takes its datum, say $X_i$, as the input of a privacy mechanism and creates an output $Z_i$ that is considered sufficiently anonymized, for instance, in the sense of any of the privacy definitions listed below.
For the purpose of the present paper, a privacy mechanism is a Markov kernel $Q_i$ between measurable spaces $(\mathfrak X,\mathscr X)$ and $(\mathfrak Z,\mathscr Z)$ generating $Z_i$ given $X_i=x$ according to the distribution $Q^{Z_i\mid X_i=x}$.
This definition of \emph{local} privacy is in contrast to the framework of \emph{centralized} or \emph{global} privacy where the trusted curator can take the whole data set $\{ X_1,\ldots,X_n \}$ to create an output $Z$.\footnote{Thus, the local privacy model can be seen as a proper submodel of the global one because the trusted curator can also mimic any conceivable procedure in the local model.}
Secondly, for the quantification of privacy, different solutions have been proposed so far
(see \cite{barber2014privacy}, Section~2 for a comprehensive overview of existing privacy definitions):
\begin{itemize}
\item In this paper, we will exclusively work in the framework of $\alpha$-differential privacy and its generalization $(\alpha,\beta)$-differential privacy as defined in Definition~\ref{DEF:APPROX:DP} below.
These two privacy definitions are also referred to as \emph{pure} and \emph{approximate differential privacy}, respectively.
Originally, these concepts have been suggested for the anonymization of microdata tables in a global privacy setup, more precisely in a framework where queries are answered by a server that has direct access to the sensitive data~\cite{dwork2006differential,dwork2006calibrating,dwork2008differential}.
In the statistics community, working under privacy constraints has been popularized in the past decade, amongst others, through the articles \cite{wasserman2010statistical,hall2013differential} (in the global setup) and \cite{duchi2018minimax} (in the local privacy setup).
Another strict relaxation of pure differential privacy is \emph{random differential privacy} as introduced in \cite{hall2011random}.
\item An alternative quantification of privacy can be given as follows:
Let $\phi$ be a function from $[0,\infty]$ to $\mathbb R \cup \{ +\infty \}$ with $\phi(1)=0$.
Then, the associated \emph{$\phi$-divergence} between two distributions $\mathbf{P},\mathbf{Q}$ is
\[ D_\phi(\mathbf{P} |\!| \mathbf{Q}) = \int \phi \left( \frac{\mathrm{d} \mathbf{P}}{\mathrm{d} \mathbf{Q}} \right) \mathrm{d} \mathbf{Q} = \int \phi\left( \frac{p}{q} \right) q \mathrm{d} \mu \]
where $\mu$ is a measure such that $\mathbf{P},\mathbf{Q} \ll \mu$ and $p,q$ denote the corresponding Radon-Nikodym densities.
Then, the mechanism $Q$ is called \emph{$\alpha$-$\phi$-divergence private} if
\[ \sup_{x,x^\prime \in \mathfrak X} D_\phi(Q(\cdot|X=x) |\!| Q(\cdot|X=x^\prime)) \leq \alpha. \]
\end{itemize}
The intersection of these two concepts is non-empty: For instance, taking $\phi(x) = \lvert x-1\rvert/2$, the $\phi$-divergence $D_\phi(\mathbf{P} |\!| \mathbf{Q})$ is the total variation distance, and the resulting notion of $\alpha$-$\phi$-divergence privacy is equivalent to $(0,\alpha)$-differential privacy.
\bigskip
Thirdly, the published data $Z_1,\ldots,Z_n$ should ideally be multi-purpose in the sense that they can serve as input data for several types of analyses.
Thus, when the unmasked data are for instance a sample from an unknown probability distribution, the anonymized data should contain as much information as possible about the whole distribution and not only about certain characteristics.
One main motivation for this work is to introduce novel methodology in the framework of density estimation that addresses this issue as well, by proposing a local approximate differential private version of kernel density estimators, that is, of the whole function $t \mapsto K((X_i-t)/h)/h$ for a bandwidth parameter $h > 0$, along with a study of its theoretical properties.
Figure~\ref{FIG:WORKFLOW} gives a foretaste and provides a graphical representation of the general workflow developed in this paper.
\begin{figure}[t]
\center
\includegraphics[width=0.99\textwidth]{foo.pdf}
\captionsetup{width=\linewidth}
\caption{General workflow of our procedure in the framework of univariate density estimation.
Objects in blue boxes can only be observed by the respective data holder in the same box. Every data owner computes a kernel density estimator based only on its proper observation (given by the first coordinate of the black points). These estimators can be published after being perturbed by an appropriate centred Gaussian process (output in the pistachio coloured boxes).
The pointwise mean of the private kernel density estimators (black solid curve) provides a natural estimator of the unknown density function (red dashed curve).}
\label{FIG:WORKFLOW}
\end{figure}
\subsection*{Roadmap of the article}
Throughout the article, we consider the paradigmatic example of non-parametric density estimation.
For the sake of simplicity, we assume that each of $n$ data holders $D_i$ observes a size-one sample $X_i$ from a (in this paper) univariate target density $f$, but refuses to share this observation.
In Section~\ref{SEC:PRIVACY}, we first introduce two mechanisms to anonymize the value of a kernel density estimator at a single fixed point $t \in \mathbb R$.
The first approach is based on adding appropriately scaled Laplace noise.
The second approach is based on adding Gaussian noise and can be extended, using the ideas introduced in \cite{hall2013differential}, to an anonymized version of the whole kernel density estimator (as a function from $\mathbb R$ to $\mathbb R$) via perturbation by a suitable Gaussian process.
In Section~\ref{SEC:MINIMAX}, we consider estimation of the unknown density function under approximate differential privacy from a minimax point of view.
As the performance measure to evaluate arbitrary estimators, we consider the mean squared error at a fixed point.
Both the Laplace and the Gaussian perturbation approach attain the optimal rate $n^{-(2s-1)/(2s+1)}$ in terms of $n$ over Sobolev ellipsoids with smoothness index $s$ which is slower than the rate $n^{-(2s-1)/(2s)}$ in the setup without privacy constraints.
The Gaussian process approach, however, makes it possible to estimate the value of the density at any point of the observation window and not only at one single point that has to be chosen prior to the anonymization procedure.
In addition, this approach enables the statistician to perform any kind of analysis that plugs kernel density estimators into other estimators.
Investigating theoretical guarantees of such plug-in procedures, however, is outside the scope of this work and deferred to future research.
As usual for kernel density estimators, the choice of the bandwidth parameter is crucial.
In the considered minimax framework over Sobolev classes, the optimal order of the bandwidth that leads to a rate optimal estimator depends on the smoothness index $s$ which is typically unknown.
In Section~\ref{SEC:ADAPTATION}, we apply a Lepski scheme tailored to the privacy framework to overcome this problem and obtain an adaptive choice of the bandwidth.
This issue specifically arises in the local privacy setup since in the global framework the trusted curator can apply the existing plethora of methods for bandwidth selection on the unmasked data, and then only publish the resulting estimator with the adaptively determined bandwidth in its anonymized form.
In order to perform the Lepski scheme, any data owner has to publish the kernel density estimator not only for one single bandwidth but for a finite set of potential bandwidths.
Such a multiple output still guarantees the desired privacy condition provided that the additive noise is multiplied by a factor proportional to the number of potential bandwidths, which is logarithmic in the number of data sources in our case.
We derive general oracle type inequalities for the estimator resulting from the Lepski procedure adapted to the privacy framework.
For the specific example of Sobolev ellipsoids, the rates of convergence are merely deteriorated by logarithmic factors with respect to the case of \emph{a priori} known smoothness.
\section{Privacy mechanisms}\label{SEC:PRIVACY}
\subsection{Definition of approximate differential privacy}
Let $(\mathfrak X,\mathscr X)$ and $(\mathfrak Z,\mathscr Z)$ be measurable spaces.
A privacy mechanism is a Markov kernel $Q: \mathfrak X \times \mathscr Z \to [0,1]$ with the interpretation that, given original data $X=x$, an anonymized output is randomly drawn from the probability measure $Q(\cdot | X=x)$.
In the non-interactive setup that we are going to consider, we work under the following definition of \emph{approximate} or \emph{$(\alpha,\beta)$-differential privacy}.
\begin{definition}\label{DEF:APPROX:DP}
Let $\alpha \geq 0, \beta \in [0,1]$.
We say that $Z \sim Q(\cdot \mid X)$ is a local $(\alpha,\beta)$-differentially private view of $X$ if for all $x,x^\prime \in \mathfrak X$, $A \in \mathscr Z$ the estimate
\begin{equation}\label{EQ:COND:APPROX:DP}
Q(A \mid X = x) \leq \exp(\alpha) Q(A \mid X = x^\prime) + \beta,
\end{equation}
holds true.
\end{definition}
Let us emphasize that in Definition~\ref{DEF:APPROX:DP} the spaces $(\mathfrak X,\mathscr X)$ and $(\mathfrak Z,\mathscr Z)$ do not need to coincide.
In fact, in Example~\ref{EX:FUNC:GAUSSIAN} the space $(\mathfrak X,\mathscr X)$ will be the real line equipped with its Borel sets and $(\mathfrak Z,\mathscr Z)$ a measurable space of random functions.
In the literature, the case $\beta = 0$ is also referred to as $\alpha$-differential privacy or \emph{pure} differential privacy.
Evidently, the privacy condition \eqref{EQ:COND:APPROX:DP} becomes more restrictive for smaller values of the two parameters $\alpha$ and $\beta$.
Although Definition~\ref{DEF:APPROX:DP} smoothly bridges the cases $\beta = 0$ and $\beta > 0$, the classical anonymization techniques used for $\beta = 0$ and $\beta > 0$ are essentially different:
In the case $\beta = 0$, Laplace perturbation as well as randomization techniques as considered in \cite{duchi2018minimax,rohde2018geometrizing} can be used.
In the case $\beta > 0$, adding appropriately scaled Gaussian noise has been suggested in \cite{hall2013differential}.
However, as proved in \cite{holohan2015differential}, appropriately scaled Laplace noise can also lead to approximately differential private outputs (see Proposition~\ref{PROP:UNIV:LAPLACE:MECH} below).
In the sequel, we discuss how to achieve approximate differential privacy by means of these classical subroutines and how they can be extended to deal with functional data as well.
\subsection{Univariate output using Laplace noise}
First, we consider the case that both the input and the output of the privacy mechanism are univariate and real-valued, that is $(\mathfrak X,\mathscr X)=(\mathfrak Z,\mathscr Z)=(\mathbb R,\Bs(\mathbb R))$.
For this case, we consider Laplace perturbation which is also used to derive an upper bound in Section~\ref{SEC:MINIMAX}.
More precisely, let $Y_i=g(X_i) \in \mathbb R$ be a quantity derived from $X_i$ that should be masked.
Define the sensitivity of $g$ as
\[ \Delta(g) = \sup_{x,x^\prime \in \mathfrak X} \lvert g(x) - g(x^\prime) \rvert. \]
Recall that the univariate Laplace distribution, denoted by $\Lc(b)$, is given by the probability density function $p_b(x)=\frac{1}{2b} \exp(-\lvert x \rvert / b)$ (we include also the case $b=0$; then the Laplace distribution is, by convention, the Dirac measure concentrated at $0$).
In particular, the variance of an $\Lc(b)$ distributed random variable is $2b^2$.
The following proposition establishes approximate differential privacy by Laplace perturbation.
\begin{proposition}[See~\cite{holohan2015differential}, Example~5]\label{PROP:UNIV:LAPLACE:MECH}
Let $\alpha > 0$, $\beta \in [0,1]$.
Then
\[ Z = g(X) + b \xi \]
with $\xi \sim \Lc(1)$ for $b \geq \Delta(g)/(\alpha - \log(1- \beta))$
provides an $(\alpha,\beta)$-differential private view of $g(X)$ (and of $X$ as well).
\end{proposition}
A benefit of Proposition~\ref{PROP:UNIV:LAPLACE:MECH} in contrast to the often proposed perturbation by Gaussian noise to establish approximate differential privacy is that it allows to deal with the cases $\beta = 0$ and $\beta > 0$ by the same approach.
Moreover, letting the parameter $\beta$ vary permits natural interpretations:
If $\beta = 0$, the variance of $\sqrt 2 b \xi$ corresponds to the one that is usually encountered in the case of pure differential privacy.
When $\beta$ tends to one, the privacy constraint gets weaker and the variance of the centred noise $\sqrt 2 b \xi$ tends to $0$.
In the extreme case $\beta = 1$ it is even allowed to publish $g(X)$ directly.
We now introduce kernel density estimators as the guiding example that we have in mind for the function $g$ for the rest of the paper.
\begin{example}\label{EX:UNIV:LAPLACE}
Let $X_1,\ldots,X_n$ i.i.d.\,according to an unknown probability density function $f \colon \mathbb R \to \mathbb R$.
Let $t \in \mathbb R$ be fixed.
Then the $i$-th dataholder, who observes $X_i \in \mathbb R$, can compute
\[ K_h(X_i-t) \vcentcolon= \frac{1}{h} K \left( \frac{X_i - t}{h} \right) \]
for a bounded kernel function $K$, that is, a bounded function $K \colon \mathbb R \to \mathbb R$ which is integrable and satisfies $\int K(u) \mathrm{d} u = 1$.
The quantity $K_h(X_i-t)$ will play the role of $g(X)$ in Proposition~\ref{PROP:UNIV:LAPLACE:MECH}.
By the triangle inequality $\Delta(K_h(\cdot-t)) \leq 2\lVert K \rVert_\infty/h$, and one can take any $b \geq 2 \lVert K \rVert_\infty/(h(\alpha - \log(1- \beta)))$ to obtain an approximate differential private view of $K_h(X_i-t)$.
Note that $t \in \mathbb R$ has been fixed in advance before the anonymization procedure.
\end{example}
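To illustrate the mechanism, the following Python sketch lets each data holder publish a Laplace-perturbed evaluation of its kernel density estimator at a fixed point $t$, with the noise scale taken from Proposition~\ref{PROP:UNIV:LAPLACE:MECH}. This is an illustration only: the Gaussian kernel, the sample size and all parameter values below are arbitrary choices, not prescriptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def K(u):                                 # Gaussian kernel, ||K||_inf = K(0) = 1/sqrt(2 pi)
    return np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)

def private_kde_at_point(x_i, t, h, alpha, beta):
    b = 2 * K(0.0) / (h * (alpha - np.log(1 - beta)))   # Laplace scale from the proposition
    return K((x_i - t) / h) / h + b * rng.laplace(0.0, 1.0)

n, t, h, alpha, beta = 10000, 0.0, 0.3, 1.0, 0.05
x = rng.standard_normal(n)                # one observation per data holder
z = np.array([private_kde_at_point(xi, t, h, alpha, beta) for xi in x])
print(z.mean())                           # average of the private views: estimates f(t) up to bias and noise
\end{verbatim}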
\subsection{Multivariate output}
In principle, also multivariate output could be dealt with by adding independent Laplace noise to any of the components of the vector to be published.
In this case, both $\alpha$ and $\beta$ for each component have to be appropriately scaled in order to obtain the desired level of approximate differential privacy for the whole vector (the scaling can be carried out, for instance, as described in Lemma~\ref{LEM:COMP} below).
This approach, however, results in an increase concerning the Laplace noise added at any single point where the kernel density estimator is evaluated, and thus might deteriorate the performance of subsequent analyses more than necessary.
We do not further pursue this course here, since we will introduce a method for the anonymization of functional data that does not inflate the noise at single points in the next subsection.
Having stated this general method, we can, for instance, anonymize the whole function $\cdot \mapsto K_h(X_i-\cdot)$ in Example~\ref{EX:UNIV:LAPLACE}, and as a by-product we obtain $(\alpha,\beta)$-differential privacy for all pointwise evaluations $K_h(X_i-t)$, $t \in \mathbb R$ without any extra cost on the noise to be added.
To achieve anonymization of functional data, adding Gaussian processes with appropriately chosen covariance structure turns out to be convenient.
This idea has been originally suggested in \cite{hall2013differential}, but we state the essential steps here again for a clear exposition, and refer to \cite{hall2013differential} only for the proofs.
The first stopover on our way along the results from \cite{hall2013differential} is the following proposition that provides a condition under which approximate differential privacy of a vector is obtained by adding multivariate Gaussian noise with not necessarily uncorrelated components.
\begin{proposition}\label{PROP:PRIV:MULTI}
Let $\alpha > 0$, $\beta \in (0,1/2)$.
Let further $\Sigma \in \mathbb R^{m \times m}$ be a positive definite matrix and $g: \mathfrak X \to \mathbb R^m$ for some $m \in \mathbb N^\ast$.
Assume that
\begin{equation}\label{EQ:COND:DELTA:MULTI}
\sup_{x,x^\prime \in \mathfrak X} \norm{ \Sigma^{-1/2} (g(x)-g(x^\prime)) }_2 \leq \Delta
\end{equation}
for all $x,x^\prime \in \mathfrak X$.
Then, $Z$ defined via
\begin{equation*}
Z = g(X) + \sigma \Xi, \qquad \Xi \sim \mathcal N(0,\Sigma),
\end{equation*}
is $(\alpha,\beta)$-differential private provided that
\begin{equation}\label{EQ:COND:SIGMA}
\sigma \geq \frac{\Delta}{\alpha} \sqrt{2 \log(1/(2\beta)) + 2\alpha}.
\end{equation}
\end{proposition}
Proposition~\ref{PROP:PRIV:MULTI} will unfold its full potential in the next subsection where the condition \eqref{EQ:COND:DELTA:MULTI} will be reformulated appropriately.
For the univariate case (taking $m=1$), Proposition~\ref{PROP:PRIV:MULTI} directly provides a result similar to the one in Example~\ref{EX:UNIV:LAPLACE}, again with $t \in \mathbb R$ fixed before anonymization.
\begin{example}\label{EX:MULTI:GAUSSIAN}
We consider $K_h(X_i - t)$ as in Example~\ref{EX:UNIV:LAPLACE} and apply Proposition~\ref{PROP:PRIV:MULTI} for $m=1$ and $\Sigma=\begin{pmatrix}
1
\end{pmatrix}$.
As in Example~\ref{EX:UNIV:LAPLACE},
\[ \sup_{x,x^\prime \in \mathbb R} \left\lvert \frac{1}{h} K \left( \frac{x-t}{h} \right) - \frac{1}{h} K \left( \frac{x^\prime-t}{h} \right) \right\rvert \leq \frac{2 \norm{K}_\infty}{h}, \]
and one can take $\Delta(K_h(\cdot - t)) = 2\norm{K}_\infty/h$ in \eqref{EQ:COND:DELTA:MULTI}.
Then, Proposition~\ref{PROP:PRIV:MULTI} guarantees that $Z_i$, $i=1,\ldots,n$, defined through
\begin{equation*}
Z_i = \frac{1}{h} K \left( \frac{X_i - t}{h} \right) + \frac{2 \norm{K}_\infty \sqrt{2\log (1/(2\beta))+2\alpha}} {\alpha h} \, \xi_i, \qquad \xi_i \sim \mathcal N(0,1),
\end{equation*}
is an $(\alpha,\beta)$-differential private view of $K_h(X_i-t)$ (and of $X_i$ as well) for any $\alpha > 0$ and $\beta \in (0,1/2)$.
\end{example}
\subsection{From multivariate to functional output}
The anonymization techniques used in Examples~\ref{EX:UNIV:LAPLACE} and \ref{EX:MULTI:GAUSSIAN} both suffer from the drawback that the output $Z_i$ provides information on the kernel density estimator $K_h(X_i-t)$ for one single $t$ only.
The aim of this section, based on Proposition~\ref{PROP:PRIV:MULTI} and ideas introduced in \cite{hall2013differential} in the context of global privacy, is to construct a privatized version of the whole function $t \mapsto K_h(X_i-t)$ by adding a suitable Gaussian process to the kernel density estimator.
As a consequence, the kernel density estimator anonymized in this vein can be evaluated at any single $t \in \mathbb R$.
For univariate and multivariate real-valued outputs of privacy mechanisms, the role of the $\sigma$-field $\mathscr Z$ in Definition~\ref{DEF:APPROX:DP} is canonically taken by the Borel sets on $\mathbb R$ or $\mathbb R^m$.
In the case of functional output $Z \colon \mathfrak X \to \mathbb R$ (where $\mathfrak X$ is an arbitrary set), its role is taken by the $\sigma$-field $\mathscr C$ which is generated by the cylinder sets
\[ C_{\mathfrak T,B} = \{ f \colon \mathfrak X \to \mathbb R : (f(t_1),\ldots,f(t_m)) \in B \} \]
where $\mathfrak T$ ranges over all finite sets $\mathfrak T = \{ t_1,\ldots,t_m \} \subseteq \mathfrak X$ and $B \in \Bs(\mathbb R^m)$.
The following result is a reformulation of Proposition~7 in \cite{hall2013differential} and we omit its proof.
See also Example~4 in \cite{holohan2015differential} for an alternative reasoning.
\begin{proposition}\label{PROP:PRIV:FUNC:GP}
Let $\Xi: \mathfrak X \to \mathbb R$ be a sample path of a centred Gaussian process with covariance kernel $K : \mathfrak X \times \mathfrak X \to \mathbb R$.
For $t_1,\ldots,t_m \in \mathfrak X$, consider the \emph{Gram matrix}
\begin{equation*}
\Sigma_{t_1,\ldots,t_m} = \begin{pmatrix}
K(t_1,t_1) & \ldots & K(t_1,t_m)\\
\vdots & \ddots & \vdots\\
K(t_m,t_1) & \ldots & K(t_m,t_m)
\end{pmatrix}.
\end{equation*}
Let $X\colon \mathfrak X \to \mathbb R$ be a (random) function in a function class $\mathfrak F$.
Then, the release of
\begin{equation*}
Z = X + \sigma \Xi
\end{equation*}
with $\sigma$ fulfilling \eqref{EQ:COND:SIGMA} is $(\alpha,\beta)$-differential private (with respect to $\mathscr C$) provided that
\begin{equation}\label{EQ:COND:DELTA:GP}
\sup_{f,g \in \mathfrak F} \sup_{m \in \mathbb N^\ast} \sup_{t_1,\ldots,t_m \in \mathfrak X} \left\lVert \Sigma_{t_1,\ldots,t_m}^{-1/2} \begin{pmatrix}
f(t_1) - g(t_1)\\
\vdots \\
f(t_m) - g(t_m)
\end{pmatrix}
\right\rVert_2 \leq \Delta
\end{equation}
where $\Delta$ is defined in \eqref{EQ:COND:DELTA:MULTI}.
\end{proposition}
The main question arising from Proposition~\ref{PROP:PRIV:FUNC:GP} is how the, on a first sight unhandy condition \eqref{EQ:COND:DELTA:GP}, might be verified.
The solution consists in transferring the problem into a reproducing kernel Hilbert space (RKHS) setup.
In fact, Proposition~\ref{PROP:PRIV:FUNC:GP} can be applied effectively when the random functions to be masked belong to the RKHS which corresponds to the covariance kernel of the Gaussian process $\Xi$.
In order to formulate this next result from \cite{hall2013differential}, we need to introduce some basic notation concerning the considered RKHS (we refer the reader to \cite{berlinet2004reproducing} for a comprehensive introduction to RKHS theory).
Let $K\colon \mathfrak X \times \mathfrak X \to \mathbb R$ be a positive definite kernel.
Recall that a real-valued kernel $K\colon \mathfrak X \times \mathfrak X \to \mathbb R$ is positive definite if
\begin{equation}\label{EQ:COND:PDK}
\sum_{i,j=1}^k a_ia_j K(x_i,x_j) \geq 0
\end{equation}
holds for any $k \in \mathbb N^\ast$, $\{ a_1,\ldots,a_k \} \subseteq \mathbb R$, and $\{ x_1,\ldots,x_k\} \subseteq \mathfrak X$.
For any $x\in \mathfrak X$, define the function $K_x: \mathfrak X \to \mathbb R$ by $K_x(\cdot) = K(x,\cdot)$.
Then the set
\begin{equation*}
\mathfrak H_0 \vcentcolon= \{ f : f = \sum_{i \in I} c_i K_{x_i} \text{ for some finite index set }I \}
\end{equation*}
is a pre-Hilbert space with respect to the norm $\norm{\cdot}_{\mathfrak H}$ induced by the scalar product
\begin{equation*}
\langle f,g \rangle_\mathfrak H = \sum_{i\in I} \sum_{j\in J} c_i d_j K(x_i, y_j)
\end{equation*}
for $f=\sum_{i\in I} c_i K_{x_i}$, $g=\sum_{j\in J} d_j K_{y_j}$.
The RKHS corresponding to the kernel $K$ is the Hilbert space $\mathfrak H$ resulting from the completion of $\mathfrak H_0$ with respect to the RKHS norm $\norm{\cdot}_{\mathfrak H}$.
The following two results are again taken from \cite{hall2013differential}.
\begin{proposition}[See \cite{hall2013differential}, Proposition~8]
For $f \in \mathfrak H$, where $\mathfrak H$ is the RKHS corresponding to the kernel $K: \mathfrak X \times \mathfrak X \to \mathbb R$, and for any finite sequence $t_1,\ldots,t_m$ of distinct points from $\mathfrak X$, we have
\begin{equation*}
\left\lVert \begin{pmatrix}
K(t_1,t_1) & \ldots & K(t_1,t_m)\\
\vdots & \ddots & \vdots\\
K(t_m,t_1) & \ldots & K(t_m,t_m)
\end{pmatrix}^{-1/2}
\begin{pmatrix}
f(t_1)\\
\vdots \\
f(t_m)
\end{pmatrix}
\right\rVert_2 \leq \norm{f}_{\mathfrak H}.
\end{equation*}
\end{proposition}
\begin{corollary}[See \cite{hall2013differential}, Corollary~9]\label{COR:FUNC:OUTPUT:RKHS}
For $X \in \mathfrak F \subseteq \mathfrak H$, the release of
\[ Z = X + \sigma \Xi \]
with $\sigma$ as in~\eqref{EQ:COND:SIGMA} is $(\alpha,\beta)$-differential private with respect to $\mathscr C$ provided that
\begin{equation}\label{EQ:COND:DELTA:RKHS}
\sup_{f,g \in \mathfrak F} \norm{f-g}_\mathfrak H \leq \Delta,
\end{equation}
and $\Xi$ is the sample path of centred Gaussian process with covariance kernel $K$ (given by the reproducing kernel of $\mathfrak H$).
\end{corollary}
We now apply Corollary~\ref{COR:FUNC:OUTPUT:RKHS} to kernel density estimators.
\begin{example}\label{EX:FUNC:GAUSSIAN}
In the case of univariate density estimation the $i$-th data holder observes $X_i$ drawn from the target density $f$, and we want him to be able to publish an approximately differential private version of the kernel density estimator
\begin{equation*}
\widetilde f_{i,h}(t) = \frac{1}{h} K \left( \frac{X_i-t}{h} \right), t \in \mathbb R,
\end{equation*}
based on his single observation $X_i$ only.
In order to apply the above theory we have to assume that the kernel $K(x,y) = K(x-y)$\footnote{We slightly abuse notation by denoting both the kernel of the kernel density estimator and the corresponding kernel $\mathbb R \times \mathbb R \to \mathbb R$ given through $(x,y) \mapsto K(x-y)$ by the letter $K$.} is also a positive definite kernel.
Under this additional assumption, Corollary~\ref{COR:FUNC:OUTPUT:RKHS} shows that the perturbed kernel density estimator
\begin{equation*}
Z_{i,h}(\cdot) = \widetilde f_{i,h}(\cdot) + \frac{\Delta}{\alpha} \sqrt{2 \log(1/(2\beta)) + 2\alpha} \Xi
\end{equation*}
where $\Xi$ is a Gaussian process with covariance kernel $hK_h(x,y)=K((x-y)/h)$, ensures $(\alpha,\beta)$-differential privacy provided that \eqref{EQ:COND:DELTA:RKHS} is satisfied.
For instance, for the Gaussian kernel $K_{\text{Gauss}}(\cdot) = \exp(-(\cdot)^2/2h^2)$ we have
\begin{align*}
\norm{K_h(x - \cdot) - K_h(x^\prime - \cdot)}_{\mathcal H}^2 = \frac{1}{2\pi h^2} (K_{\text{Gauss}}(0) + K_{\text{Gauss}}(0) - 2K_{\text{Gauss}}(x-x^\prime)) \leq \frac{1}{\pi h^2},
\end{align*}
and we can take $\Delta = 1/(\sqrt \pi h)$
(the same argument working for any non-negative bounded kernel, and with a slight modification for any bounded kernel).
\end{example}
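The following Python sketch releases such a perturbed kernel density estimator based on a single observation, evaluated on a finite grid: the estimator is computed on the grid and a sample path of a centred Gaussian process with covariance kernel $K((s-t)/h)$, scaled as in Corollary~\ref{COR:FUNC:OUTPUT:RKHS} with the bound $\Delta$ from Example~\ref{EX:FUNC:GAUSSIAN}, is added. This is an illustration only; the grid, the observation and all parameter values are arbitrary choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
h, alpha, beta = 0.5, 1.0, 0.05
x_i = 0.7                                  # the sensitive observation of one data holder
grid = np.linspace(-3, 3, 121)

kde = np.exp(-(x_i - grid) ** 2 / (2 * h ** 2)) / (h * np.sqrt(2 * np.pi))

Delta = 1 / (np.sqrt(np.pi) * h)           # RKHS bound derived in the example
sigma = Delta / alpha * np.sqrt(2 * np.log(1 / (2 * beta)) + 2 * alpha)

cov = np.exp(-(grid[:, None] - grid[None, :]) ** 2 / (2 * h ** 2))
cov += 1e-9 * np.eye(len(grid))            # small jitter for numerical stability
xi = rng.multivariate_normal(np.zeros(len(grid)), cov)   # Gaussian process on the grid

z = kde + sigma * xi                       # the published anonymized function on the grid
print(z[:5])
\end{verbatim}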
Let us emphasize that the property of positive definiteness is not satisfied for all kernels commonly used for kernel density estimators in non-parametric statistics.
In the following, we discuss some popular examples.
\begin{example}
The \emph{rectangular kernel} given by
\[ K_{\sqsubset\!\sqsupset}(x,y) \propto \mathbf 1_{\{ \lvert x-y\rvert \leq 1 \} } \]
for $x,y \in \mathbb R$ is \emph{not} positive definite.
In order to see this, set $x_1=0$, $x_2=\frac{3}{4}$, $x_3=\frac{3}{2}$, $a_1=a_3=1$, and $a_2=-1$.
Then
\begin{align*}
\sum_{i=1}^3 \sum_{j=1}^3 a_i K_{\sqsubset\!\sqsupset}(x_i,x_j) a_j \propto 3-4 < 0,
\end{align*}
which contradicts the defining property \eqref{EQ:COND:PDK} of positive definite kernels.
\end{example}
\begin{example}
The \emph{triangular kernel} given by
\[ K_\triangle(x,y) \propto (1-\lvert x-y \rvert) \mathbf 1_{ \{ \lvert x-y\rvert \leq 1 \} } \]
for $x,y \in \mathbb R$ is positive definite.
This follows from the fact that kernels of the form
\[ K(x,y) = \int_{\mathbb R^d} f(t+x)f(t+y)\mathrm{d} t \]
for $x,y \in \mathbb R^d$ with square integrable $f\colon \mathbb R^d \to \mathbb R$ are positive definite and
\[ (1-\lvert x-y \rvert) \mathbf 1_{ \{ \lvert x-y\rvert \leq 1 \} } = \int_\mathbb R \mathbf 1_{[0,1/2]}(t+x) \mathbf 1_{[0,1/2]}(t+y) \mathrm{d} t. \]
\end{example}
\begin{example}
The \emph{Gaussian kernel}
\[ K(x,y) \propto \exp(- \lvert x-y\rvert^2/2) \]
and the \emph{exponential kernel}
\[ K(x,y) \propto \exp(- \lvert x-y\rvert) \]
are positive definite.
These kernels of the form $K(x,y) \propto \exp(- |x-y|^\gamma)$ are positive definite if and only if $\gamma \in [0,2]$.
This follows by combination of Theorem~2.2 and Exercise~2.13, (b) in \cite{berg1984harmonic}.
\end{example}
\begin{example}\label{EX:SINC}
The \emph{$\sinc$ kernel} given by
\[ K_{\sinc}(x,y) = \sinc(x-y) = \frac{\sin(\pi(x-y))}{\pi (x-y)} \]
is positive semidefinite since the $\sinc$ function is the characteristic function of the uniform distribution on the interval $[-\pi,\pi]$.
The $\sinc$ kernel also attains negative values, but thanks to the estimate $1 \geq \sinc(\cdot) \geq -0.3$ we have, in analogy to the calculation in Example~\ref{EX:FUNC:GAUSSIAN},
\[ \lVert (K_{\sinc})_x - (K_{\sinc})_{x^\prime} \rVert^2_{\mathfrak H} = \frac{1}{h^2} (K_{\sinc}(x,x) + K_{\sinc}(x^\prime,x^\prime) -2K_{\sinc}(x,x^\prime)) \leq \frac{2.6}{h^2} \]
which yields a suitable bound for $\Delta$ in this example.
\end{example}
\begin{example}
The \emph{Epanechnikov kernel}
\[ K(x,y) = \frac{3}{4} (1-\lvert x-y\rvert^2) \mathbf 1_{ \{ \lvert x-y\rvert \leq 1 \} } \]
is \emph{not} positive definite. In order to see this, put $x_1=0$, $x_2=1/2$, $x_3=1$, $a_1=a_3=-0.9$ and $a_2=1$.
Then,
\begin{align*}
\sum_{i=1}^3 \sum_{j=1}^3 a_i K(x_i,x_j) a_j &= \frac{3}{4} \left[ 0.81 + 0.81 + 1 - 2 \cdot 0.9 \cdot 0.75 - 2 \cdot 0.9 \cdot 0.75 \right] = -0.06 < 0,
\end{align*}
in contradiction to the defining property \eqref{EQ:COND:PDK} of positive definite kernels.
\end{example}
\begin{example}
The \emph{biweight kernel}
\[ K(x,y) = \frac{15}{16} (1-\lvert x-y\rvert^2)^2 \mathbf 1_{ \{ \lvert x-y\rvert \leq 1 \} } \]
is \emph{not} positive definite.
To see this, put $x_1=1/4$, $x_2 = -1/4$, $x_3 = -3/4$, and $x_4=1/2$.
Then, consider the matrix $M=(K(x_i,x_j))_{i,j \in \llbracket 1,4\rrbracket}$.
We have
\[ \widetilde M \vcentcolon= \frac{16}{15}\cdot 256\, M = \begin{pmatrix}
256 & 144 & 0 & 225\\
144 & 256 & 144 & 49 \\
0 & 144 & 256 & 0 \\
225 & 49 & 0 & 256
\end{pmatrix}, \]
and the matrix $\widetilde M$ is not positive definite, since for $v = \begin{pmatrix}
0.7 & -0.4 & 0.2 & -0.5
\end{pmatrix}^\top$
\[ v^\top \widetilde M v = -0.94 < 0. \]
\end{example}
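These counterexamples, as well as the positive definiteness of the kernels discussed before, are easy to check numerically; the following Python sketch evaluates the quadratic forms used above and, for the triangular kernel, the smallest eigenvalue of a Gram matrix at randomly chosen points (the random points are an arbitrary choice for this illustration).
\begin{verbatim}
import numpy as np

def epanechnikov(d):
    return 0.75 * (1 - d ** 2) * (np.abs(d) <= 1)

def biweight(d):
    return 15 / 16 * (1 - d ** 2) ** 2 * (np.abs(d) <= 1)

def triangular(d):
    return (1 - np.abs(d)) * (np.abs(d) <= 1)

x = np.array([0.0, 0.5, 1.0]); a = np.array([-0.9, 1.0, -0.9])
print(a @ epanechnikov(x[:, None] - x[None, :]) @ a)        # approx -0.06 < 0

x = np.array([0.25, -0.25, -0.75, 0.5]); v = np.array([0.7, -0.4, 0.2, -0.5])
Mtilde = 256 * 16 / 15 * biweight(x[:, None] - x[None, :])  # the matrix displayed above
print(v @ Mtilde @ v)                                       # approx -0.94 < 0

rng = np.random.default_rng(3)
y = rng.uniform(-2, 2, size=20)
G = triangular(y[:, None] - y[None, :])
print(np.linalg.eigvalsh(G).min() >= -1e-10)                # True: Gram matrix is PSD
\end{verbatim}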
\subsection{A composition lemma for approximate differential privacy}
For kernel density estimation, bandwidth selection is usually a delicate issue and so it is in our local privacy setup.
Whereas in the centralized setup existing methods can be applied by the trusted curator on the unmasked data, this is not possible in our local setup.
Thus the data holders have to publish versions of the kernel density estimator for different bandwidths, and one has to adapt general strategies from the non-private framework to the one with local approximately differential private data.
To do this under our privacy constraint it is necessary to understand how multiple outputs influence the defining condition of approximate differential privacy.
The following lemma provides a result of this flavour and is known in the research literature on privacy for statistical databases.
The setup is the following:
Given the unmasked datum $X$, the data owner wants to publish not only $Z_1=Z_1(X)$ but also $Z_2=Z_2(X)$, i.e., the vector $(Z_1,Z_2)$.
The following result tells us how $\alpha$ and $\beta$ for the single components have to be scaled in order to obtain $(\alpha, \beta)$-differential privacy for multiple outputs.
\begin{lemma}[Composition lemma for $(\alpha,\beta)$-differential privacy]\label{LEM:COMP}
Let $Z_i$, $i=1,2$, be $(\alpha_i,\beta_i)$-differentially private and conditionally (on $X$) independent views of $X$, respectively.
Then $Z=(Z_1,Z_2)$ is an $(\alpha_1 + \alpha_2,\beta_1 + \beta_2)$-differentially private view of $X$.
\end{lemma}
Of course, Lemma~\ref{LEM:COMP} can be successively applied.
For instance, if we want to publish $Z_{i,h}$ from the above examples for different $h$ in a finite set $\mathcal H$, then $\alpha$ and $\beta$ should be replaced with $\privpar^\prime = \alpha/\# \mathcal H$ and $\beta^\prime = \beta/\# \mathcal H$, respectively, in order to get differential privacy for $Z = (Z_{i,h})_{h \in \mathcal H}$.
\section{Private minimax estimation}\label{SEC:MINIMAX}
Minimax theory provides a standard framework to study convergence properties of estimators in non-parametric statistics~\cite{tsybakov2009introduction}.
In this section, we apply this general toolbox to the specific case of density estimation under privacy constraints.
For fixed $t \in \mathbb R$ and any estimator $\widehat \ell$ of the linear functional $f(t)$ based on the private views $Z=\{ Z_1,\ldots,Z_n \}$, we study its mean squared error
\begin{equation*}
\mathbf{E}[(\widehat \ell - f(t))^2].
\end{equation*}
The guiding principle of minimax theory is to look for estimators that perform best in a worst-case scenario.
However, due to the privacy framework, we have not only the freedom of choosing the estimator $\widehat \ell$ but also the privacy mechanism $Q$ that generates the private outputs.
Hence, following \cite{duchi2018minimax}, classical minimax theory has to be adapted and a natural quantity to consider is the private minimax risk
\begin{equation*}
\inf_{\substack{\widehat \ell \in \sigma(Z)\\ Q \in \mathcal Q_{\alpha,\beta}}} \sup_{f \in \mathcal P} \, \mathbf{E}[(\widehat \ell - f(t))^2]
\end{equation*}
where $\mathcal P$ is some function class containing probability densities and the infimum is taken over all local $(\alpha,\beta)$-differentially private Markov kernels $Q \in \mathcal Q_{\alpha,\beta}$ and all estimators based on the locally approximately differentially private views $Z$ of the corresponding original sample $X_1,\ldots,X_n$.
We specify the function class $\mathcal P$ by so-called Sobolev ellipsoids $\mathcal S(s,L)$ that we define for $s > 1/2$ and $L>0$ by means of
\begin{equation*}
\mathcal S(s,L) = \{ f \colon \mathbb R \to [0,\infty) : \int f(x) \mathrm{d} x = 1, \int \lvert \mathcal F[f](\omega) \rvert^2 \lvert \omega \rvert^{2s} \mathrm{d} \omega \leq 2 \pi L^2 \},
\end{equation*}
which, for $s \in \mathbb N^\ast$, is equivalent to the definition
\begin{equation*}
\mathcal S(s,L) = \{ f \colon \mathbb R \to [0,\infty) : \int f(x)\mathrm{d} x = 1, \int (f^{(s)}(x))^2 \mathrm{d} x \leq L^2 \}.
\end{equation*}
In the first definition, $\mathcal F[f]$ denotes the Fourier transform of the density $f$; in the second one, $f^{(s)}$ denotes the weak $s$-th derivative of $f$.
\subsection{Upper bound}
We first derive an upper bound on the minimax risk by specializing both the privacy mechanism $Q \in \mathcal Q_{\alpha, \beta}$ and the estimator of $f(t)$.
Concerning the privacy mechanism, we consider the mechanisms mapping $X_i$ to private views $Z_{i,h}$ of $K_h(X_i - t)$ from Section~\ref{SEC:PRIVACY} for one single $h>0$.
More precisely, we consider the Laplace mechanism given through
\begin{equation}\label{EQ:OBS:LAPLACE}
Z_{i,h}(t) = K_h(X_i-t) + \underbrace{\frac{2\lVert K\rVert_\infty}{h(\alpha - \log(1-\beta))}}_{=\vcentcolon C^{\Lc}_{\alpha \beta}/(\sqrt 2 h)}\, \xi_{i,h}, \qquad \xi_{i,h} \text{ i.i.d.} \sim \Lc(1),
\end{equation}
and the Gaussian process mechanism given through
\begin{equation}\label{EQ:OBS:GP}
Z_{i,h}(t) = K_h(X_i-t) + \underbrace{\frac{\Delta^\prime \sqrt{2\log(1/(2\beta)) + 2\alpha}}{h\alpha}}_{=\vcentcolon C^{\mathrm{GP}}_{\alpha, \beta}/h}\, \Xi_{i,h}
\end{equation}
where $\Xi_{i,h}$ are i.i.d.\,Gaussian processes with covariance kernel $K((x-y)/h)$ and $\Delta^\prime$ is an upper bound on $\lVert (hK_h)_x - (hK_h)_{x^\prime} \rVert_\mathfrak H$ for $x,x^\prime \in \mathbb R$.
Given $Z_{1,h},\ldots,Z_{n,h}$ as in \eqref{EQ:OBS:LAPLACE} or \eqref{EQ:OBS:GP}, a natural estimator of $f(t)$ is given by
\begin{equation}\label{EQ:DEF:FHAT}
\widehat f_h(t) = \frac{1}{n} \sum_{i=1}^{n} Z_{i,h}(t).
\end{equation}
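To make the construction concrete, the following Python sketch simulates the Laplace mechanism \eqref{EQ:OBS:LAPLACE} with the $\sinc$-kernel and the averaging estimator \eqref{EQ:DEF:FHAT}. It is a minimal illustration on our part (the function names and the toy data are ours); we use the scaling $K_h(u)=h^{-1}K(u/h)$ and the fact that $\lVert K_{\sinc}\rVert_\infty = 1$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def laplace_private_views(x, t, h, alpha, beta):
    # Z_{i,h}(t) = K_h(X_i - t) + 2 ||K||_inf / (h (alpha - log(1 - beta))) * xi_i
    # with xi_i i.i.d. standard Laplace; here K is the sinc kernel, ||K||_inf = 1
    k_h = np.sinc((x - t) / h) / h        # numpy's sinc is sin(pi u) / (pi u)
    scale = 2.0 / (h * (alpha - np.log(1.0 - beta)))
    return k_h + rng.laplace(loc=0.0, scale=scale, size=x.shape)

def f_hat(z):
    # averaging estimator: n^{-1} sum_i Z_{i,h}(t)
    return z.mean()

# toy usage: n standard normal observations, density estimated at t = 0
n, t, alpha, beta = 5000, 0.0, 1.0, 1e-3
x = rng.normal(size=n)
h = n ** (-1.0 / 3.0)                     # h of order n^{-1/(2s+1)} with s = 1
print(f_hat(laplace_private_views(x, t, h, alpha, beta)))
\end{verbatim}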
The following proposition provides an upper risk bound for this estimator specialized with the $\sinc$-kernel over the Sobolev ellipsoids $\mathcal S(s,L)$ introduced above.
\begin{proposition}\label{PROP:UPPER}
Consider the kernel density estimator $\widehat f_h(t)$ for some fixed $t \in \mathbb R$ where the kernel used in the anonymization procedure \eqref{EQ:OBS:LAPLACE} or \eqref{EQ:OBS:GP} is the $\sinc$-kernel from Example~\ref{EX:SINC}.
Then, for any $s > 1/2$,
\[ \sup_{f \in \mathcal S(s,L)} \mathbf{E} [(\widehat f_h(t) - f(t))^2] \leq C \left[ h^{2s-1} + \frac{1}{nh} + \frac{1}{nh^2} \right] \]
for some $C=C(\alpha,\beta,L,s,\lVert f \rVert_\infty,K_{\sinc})$.
In particular, setting $h=h^\star$ with $h^\star \asymp n^{-1/(2s+1)}$, we obtain
\[ \sup_{f \in \mathcal S(s,L)} \mathbf{E} [(\widehat f_{h^\star}(t) - f(t))^2] \lesssim n^{-\frac{2s-1}{2s+1}}. \]
\end{proposition}
Since the noise added by the privacy mechanisms is centred, the bias term in the proof of Proposition~\ref{PROP:UPPER} remains unchanged in comparison to the standard setup without privacy constraints.
However, the variance term changes due to the additional Laplace or Gaussian noise, respectively, and the classical variance term $1/(nh)$ is joined by the additional term $1/(nh^2)$ which is of higher order for $h \to 0$.
Consequently, the optimal bandwidth is no longer of order $n^{-1/(2s)}$ as in the standard setup but of the larger order $n^{-1/(2s+1)}$.
However, consistency of $\widehat f_h$ is already guaranteed if $h \to 0$ and $nh^2 \to \infty$ simultaneously (in the standard density estimation setup one only needs $nh \to \infty$ in addition to $h \to 0$).
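For concreteness, the choice $h^\star \asymp n^{-1/(2s+1)}$ results from balancing the squared bias against the dominating privacy-induced variance term,
\[
h^{2s-1} \asymp \frac{1}{nh^2} \iff h \asymp n^{-\frac{1}{2s+1}}, \qquad\text{which yields}\qquad h^{2s-1} \asymp \frac{1}{nh^2} \asymp n^{-\frac{2s-1}{2s+1}},
\]
while the classical term $1/(nh)$ is then of the smaller order $n^{-2s/(2s+1)}$.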
\subsection{Lower bound}
The following result states a lower bound over Sobolev ellipsoids in the case of pure differential privacy ($\beta = 0$).
\begin{proposition}\label{PROP:LOWER}
Let $\alpha > 0$ be arbitrary.
Then,
\begin{equation*}
\inf_{\substack{\widehat \ell \in \sigma(Z) \\ Q \in \mathcal Q_{\alpha,0}}} \sup_{f \in \mathcal S(s,L)} \mathbf{E} [(\widehat \ell - f(t))^2] \geq C(\alpha) n^{-\frac{2s-1}{2s+1}}
\end{equation*}
where $C(\alpha)> 0$ depends on the privacy parameter, and the infimum is taken over all estimators $\widehat \ell$ based on private views $Z_1,\ldots,Z_{n}$ and privacy mechanisms providing $(\alpha,0)$-differential privacy.
\end{proposition}
\begin{remark}
The lower bound of Proposition~\ref{PROP:LOWER} still holds true when one allows a slight amount of interaction between the data holders, namely when the distribution of every $Z_i$ is determined by $X_i$ and the previously masked values $Z_1,\ldots,Z_{i-1}$.
The proof remains the same because the data processing inequality (14) from \cite{duchi2018minimax} still holds true in this more general setup.
\end{remark}
Proposition~\ref{PROP:LOWER} shows that, regarding the privacy parameter $\alpha$ as an \emph{a priori} fixed constant, the estimators $\widehat f_h(t)$ from Proposition~\ref{PROP:UPPER} attain the optimal rate $n^{-(2s-1)/(2s+1)}$ in terms of $n$ under pure local differential privacy.
Recall that without privacy restrictions the optimal rate over Sobolev ellipsoids is $n^{-(2s-1)/(2s)}$ (as mentioned in \cite{butucea2001exact}, this rate can, other than by a reduction scheme as used in our proof, be easily obtained via the theory developed in \cite{donoho1992renormalization}, see also \cite{tsybakov1998pointwise}).
In this work, we regard the parameters $\alpha$ and $\beta$ as fixed and are interested in the behaviour of the rate as a function of $n$ only; remarks concerning $\alpha$ analogous to the ones made in \cite{butucea2019local} could be made (as in that paper, $\alpha$ and $\beta$ could also be allowed to vary with $n$).
However, the optimal behaviour of the rates in terms of the privacy parameters $\alpha$ and $\beta$, especially when $\beta > 0$, remains an open issue.
\section{Adaptation to unknown smoothness}\label{SEC:ADAPTATION}
The estimators of the previous section are not completely satisfying since the optimal choice $h^\star_n$ of the bandwidth, as usual in non-parametric statistics, depends on \emph{a priori} knowledge of the smoothness of the unknown function $f$.
Such knowledge is usually not available in practice.
At least, the Gaussian process perturbation approach removes a drawback of the Laplace method, namely that only one functional of the form $f(t)$, for a single $t$ that has to be fixed before the anonymization, can be privatized.
Note that this drawback is, for instance, also present in the mechanisms suggested in \cite{rohde2018geometrizing}.
From this point of view, anonymization of the whole kernel density estimator via this approach should be preferred.
The purpose of this section is to address the remaining issue of adapting to the unknown smoothness of $f$.
In order to tackle this problem, we use a variant of Lepski's method (see~\cite{lepski1997optimal} for a general account in the Gaussian white noise model, and \cite{cavalier2001tomography} for an application to a tomography problem whose concise presentation has inspired ours).
Recall again that the necessity of novel methodology for adaptive estimation is specific to the local privacy setup: in the centralized case, the trusted curator can choose the bandwidth adaptively using all the data $X_1,\ldots,X_n$ and can thus build on the existing plethora of methods and theoretical results for this standard case, so bandwidth selection poses no additional difficulty when only the final output is anonymized.
In our local setup, where the data owners publish their data prior to any data analysis, adaptation must be addressed separately.
Note that the problem of adaptation has, to the best of the author's knowledge, only been addressed in the recent work~\cite{butucea2019local} so far, where the authors use wavelet estimators for density estimation on a compact interval.
The approach in that paper is thus conceptually different from the one presented in the sequel.
We will apply Lepski's method both on observations \eqref{EQ:OBS:LAPLACE} where $t \in \mathbb R$ has been fixed \emph{a priori} and on pathwise observations \eqref{EQ:OBS:GP} from the Gaussian process approach that we evaluate at the point $t \in \mathbb R$ of interest.
In order to apply Lepski's method, the observations \eqref{EQ:OBS:LAPLACE} and \eqref{EQ:OBS:GP} must be available for different values of the bandwidth parameter $h$, say $h \in \mathcal H_n$.
This can be realized using Lemma~\ref{LEM:COMP} provided that the privacy parameters $\alpha$ and $\beta$ are appropriately scaled.
Thus, we can assume that $Z_{i,h}(t)$ are accessible for any $i \in \llbracket 1,n\rrbracket$ and $h \in \mathcal H_n$ if we replace $\alpha$ and $\beta$ by $\privpar^\prime=\alpha/\# \mathcal H_n$ and $\beta^\prime = \beta/\# \mathcal H_n$, respectively.
For any $h \in \mathcal H_n$ and $t \in \mathbb R$, we can then consider the estimator defined in \eqref{EQ:DEF:FHAT}.
In our case, we define the set of potential bandwidths by a geometric grid,
\begin{equation*}
\mathcal H_n = \{ h \in [\underline h_n, \overline h_n] : h = a^{-j}\overline h_n, j \in \mathbb N \},
\end{equation*}
where $a>1$ is a fixed constant, $\overline h_n$ is such that $a \log(\overline h_n \sqrt n)/\sqrt n \leq \overline h_n \leq 1$, and $\underline h_n$ satisfies $\underline h_n = (\log(\overline h_n \sqrt n) \vee 1)/\sqrt n$.
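Note that with this geometric grid the number of candidate bandwidths grows only logarithmically: a crude count gives
\[
\# \mathcal H_n \leq 1 + \log_a \left( \overline h_n / \underline h_n \right) \leq 1 + \log_a \sqrt n \lesssim \log n,
\]
so that the rescaled privacy parameters $\privpar^\prime = \alpha/\#\mathcal H_n$ and $\beta^\prime = \beta/\#\mathcal H_n$ from Lemma~\ref{LEM:COMP} deteriorate by at most a logarithmic factor.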
For $h \in \mathcal H_n$ and some $M > 0$, define\footnote{In the sequel, we write $C_{\privpar^\prime \beta^\prime}$ for both $C_{\privpar^\prime \beta^\prime}^\Lc$ and $C_{\privpar^\prime \beta^\prime}^{\mathrm{GP}}$.}
\begin{equation*}
v^2(h) = \frac{M \int K^2(u)\mathrm{d} u}{nh} + \frac{C_{\privpar^\prime\beta^\prime}^2}{nh^2}
\end{equation*}
where $C_{\privpar^\prime\beta^\prime}$ is defined as in Section~\ref{SEC:MINIMAX}.
The proof of Proposition~\ref{PROP:UPPER} shows that
\[ \operatorname{Var} (\widehat f_h) \leq v^2(h) \]
if $\norm{f}_\infty \leq M$.
Put $\lambda(h) = \max ( 1, ( \kappa \log(\overline h_n/h) )^{1/2} )$ with $\kappa$ being a sufficiently large constant (an explicit value can be determined from the proof of Theorem~\ref{THM:ADAPTATION}) and define
\begin{equation}\label{EQ:DEF:HAST}
h^\ast_n = h_n^\ast(t,f) = \max \{ h \in \mathcal H_n : \lvert f_\eta(t) - f(t) \rvert \leq \frac{v(h)\lambda(h)}{2} \text{ for all }\eta \in \mathcal H_n, \eta \leq h \}.
\end{equation}
If the set in the definition of $h^\ast_n$ is empty, we set $h^\ast_n = \underline h_n$ by convention.
However, in the proof of Proposition~\ref{PROP:UPPER:ORACLE} we will show that this set is non-empty for $n$ large enough.
The bandwidth $h_n^\ast$ is an oracle in the sense that it is not accessible by the statistician since it depends on the unknown parameter $f$.
The definition of $h^\ast_n$ provides some kind of ideal criterion:
The bandwidth $h$ is increased along the grid $\mathcal H_n$ as long as the bias term $\lvert f_\eta(t) - f(t) \rvert$ is bounded by the `rate' $v(h)\lambda(h)$, a procedure that aims at mimicking the classical bias-variance tradeoff.
In order to state a risk bound for the pseudo estimator $\widehat f_{h^\ast_n}$, we further define
\begin{equation*}
r_n(t,f) = \inf_{\underline h_n \leq h \leq 1} \left[ \sup_{0 \leq \eta \leq h} (f_\eta(t) - f(t))^2 + \frac{M \int K^2(u)\mathrm{d} u\log(n)}{nh} + \frac{C_{\privpar^\prime\beta^\prime}^2 \log(n)}{nh^2} \right].
\end{equation*}
\begin{proposition}\label{PROP:UPPER:ORACLE}
Consider the pseudo-estimator $\widehat f_{h^\ast_n}$ defined via~\eqref{EQ:DEF:FHAT} and \eqref{EQ:DEF:HAST} where $\alpha$ and $\beta$ are replaced with $\privpar^\prime$ and $\beta^\prime$, respectively.
Assume that
\begin{align}\label{EQ:ASS:BOCHNER}
\lim_{h \to 0} \frac{1}{h} \int K \left( \frac{x-t}{h} \right) f(x) \mathrm{d} x = f(t).
\end{align}
Consider $\overline h_n = 1$. Then, for $n$ sufficiently large,
\[ \mathbf{E} [ ( \widehat f_{h^\ast_n}-f(t) )^2 ] \leq \frac{5}{4}v^2(h^\ast_n)\lambda^2(h^\ast_n) \leq C(a) r_n(t,f) \]
uniformly for all $f$ with $\lVert f\rVert_\infty \leq M$.
\end{proposition}
\begin{remark}
Assumption~\eqref{EQ:ASS:BOCHNER} is satisfied in many cases.
For instance, if $\int \lvert K(u) \rvert \mathrm{d} u < \infty$, then \eqref{EQ:ASS:BOCHNER} is a special case of Bochner's lemma (see~\cite{tsybakov2004introduction}, Lemma~1.1).
However, the $\sinc$-kernel is not absolutely integrable and thus Bochner's lemma cannot be applied.
In this case, one can alternatively assume that $f$ belongs at least to some Sobolev space $\mathcal S(s,L)$ for some $s > 1/2$.
Then, the analysis of the bias term as in the proof of Proposition~\ref{PROP:UPPER} guarantees the validity of \eqref{EQ:ASS:BOCHNER}.
\end{remark}
The pseudo estimator $\widehat f_{h^\ast_n}$ is a stopover on our road to an adaptive estimator.
We now construct a genuine estimator of $f$ that aims at mimicking this oracle.
For this, we first define
\begin{align*}
v^2(h,\eta) &= \frac{M}{n}\int (K_h(u) - K_\eta(u))^2 \mathrm{d} u + \frac{C_{\privpar^\prime\beta^\prime}^2}{nh^2} + \frac{C_{\privpar^\prime\beta^\prime}^2}{n\eta^2}.
\end{align*}
Then, calculations similar to those in the proof of Proposition~\ref{PROP:UPPER} show that
\[ \operatorname{Var}(\widehat f_h - \widehat f_\eta) \leq v^2(h,\eta) \]
if $\norm{f}_\infty \leq M$.
For $h, \eta \in \mathcal H_n$, put
\[ \psi(h,\eta) = v(h)\lambda(h) + v(h,\eta) \lambda(\eta). \]
Then, we define an adaptive choice of the bandwidth parameter by
\begin{equation}\label{EQ:DEF:HHAT}
\widehat h_n = \max \{ h \in \mathcal H_n : \lvert \widehat f_h(t) - \widehat f_\eta(t) \rvert \leq \psi(h,\eta) \text{ for all }\eta \in \mathcal H_n, \eta \leq h \}.
\end{equation}
This choice of the bandwidth is well-defined since the maximum is taken over a non-empty set.
The definition of $\widehat h_n$ is characteristic of Lepski's method \cite{lepski1990problem}, and the motivation of this procedure is neatly described in \cite{cavalier2001tomography}, p.~67:
One chooses the largest bandwidth $h$ such that the difference between the two estimators $\widehat f_h$ and $\widehat f_\eta$ is not too large (in the sense of \eqref{EQ:DEF:HHAT}) for all $\eta \leq h$.
Evidently, the motivation of this procedure is to mimic the trade-off between squared bias and variance in a purely data-driven manner.
Note also that \eqref{EQ:DEF:HHAT} provides, as well as the oracle version~\eqref{EQ:DEF:HAST}, a local choice of the bandwidth in the sense that $\widehat h_n$ depends on $t$.
Such a local criterion might result in a better adaptation to spatial inhomogeneity of the target density than global selection rules.
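In pseudocode, the selection rule \eqref{EQ:DEF:HHAT} amounts to the following short Python routine (a sketch on our part; the function and argument names are not from the construction above). It takes the precomputed private estimates $\widehat f_h(t)$ together with the quantities $v(h)$, $\lambda(h)$ and $v(h,\eta)$ on the grid $\mathcal H_n$ and returns $\widehat h_n$.
\begin{verbatim}
def lepski_bandwidth(f_hat, v, lam, v2, grid):
    # f_hat, v, lam : dicts indexed by the bandwidths in `grid`
    # v2            : dict indexed by pairs (h, eta) of bandwidths
    # grid          : list of bandwidths (the finite set H_n)
    admissible = [
        h for h in grid
        if all(abs(f_hat[h] - f_hat[eta])
               <= v[h] * lam[h] + v2[(h, eta)] * lam[eta]
               for eta in grid if eta <= h)
    ]
    # admissible is non-empty: the smallest bandwidth passes its only comparison
    return max(admissible)
\end{verbatim}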
\begin{theorem}\label{THM:ADAPTATION}
Consider the estimator $\widehat f_{\widehat h_n}$ defined via~\eqref{EQ:DEF:FHAT} and \eqref{EQ:DEF:HHAT} where $Z_{i,h}(t)$ for $h \in \mathcal H_n$ are defined via~\eqref{EQ:OBS:LAPLACE} or \eqref{EQ:OBS:GP} with $\alpha$ and $\beta$ replaced with $\privpar^\prime$ and $\beta^\prime$, respectively.
Then, uniformly for all $f$ with $\lVert f\rVert_\infty \leq M$,
\[ \mathbf{E}[(\widehat f_{\widehat h_n}(t) - f(t))^2] \leq C(a)v^2(h^\ast_n)\lambda^2(h^\ast_n). \]
As a consequence, taking $\overline h_n = 1$, we obtain
\[ \mathbf{E}[(\widehat f_{\widehat h_n}(t) - f(t))^2] \leq C(a) r_n(t,f). \]
\end{theorem}
\begin{remark}
By specifying Theorem~\ref{THM:ADAPTATION} with the $\sinc$-kernel and $\overline h_n = 1$, one obtains an adaptive estimator attaining the optimal rate of convergence over functions bounded by $M$ in a Sobolev ellipsoid up to an extra logarithmic factor.
A logarithmic loss for adaptation is commonly accepted and even known to be indispensable for pointwise estimation in the non-private framework \cite{brown1996constrained}.
\end{remark}
\section{Discussion}
We have suggested an approach to adaptive kernel density estimation via Lepski's method in the framework of local approximate differential privacy.
Although we have studied its theoretical properties in the prototypical example of univariate density estimation only, our methodology should also be transferable to the multivariate case.
We also conjecture that it might be possible to extend our results to the case of general linear functionals (different from pointwise evaluation of the density function at a fixed point) as investigated in~\cite{goldenshluger2000adaptive} via Lepski's method in an inverse problem setup.
Furthermore, our methodology might be applicable to obtain local private estimation procedures in functional data analysis.
However, a lot of questions remain open:
One drawback of our approach is that the perturbation by a Gaussian process provides only approximate differential privacy and cannot be extended to pure differential privacy.
The creation of new methods for kernel estimators that overcome this restriction provides a further direction for future research.
Moreover, the optimal power of the logarithmic factor in the adaptive rate of convergence deserves further investigation as well as the behaviour of the minimax optimal rates in terms of the privacy parameters $\alpha$ and $\beta$.
| {
"timestamp": "2019-07-16T02:16:37",
"yymm": "1907",
"arxiv_id": "1907.06233",
"language": "en",
"url": "https://arxiv.org/abs/1907.06233",
"abstract": "We consider non-parametric density estimation in the framework of local approximate differential privacy. In contrast to centralized privacy scenarios with a trusted curator, in the local setup anonymization must be guaranteed already on the individual data owners' side and therefore must precede any data mining tasks. Thus, the published anonymized data should be compatible with as many statistical procedures as possible. We suggest adding Laplace noise and Gaussian processes (both appropriately scaled) to kernel density estimators to obtain approximate differential private versions of the latter ones. We obtain minimax type results over Sobolev classes indexed by a smoothness parameter $s>1/2$ for the mean squared error at a fixed point. In particular, we show that taking the average of private kernel density estimators from $n$ different data owners attains the optimal rate of convergence if the bandwidth parameter is correctly specified. Notably, the optimal convergence rate in terms of the sample size $n$ is $n^{-(2s-1)/(2s+1)}$ under local differential privacy and thus deteriorated to the rate $n^{-(2s-1)/(2s)}$ which holds without privacy restrictions. Since the optimal choice of the bandwidth parameter depends on the smoothness $s$ and is thus not accessible in practice, adaptive methods for bandwidth selection are necessary and must, in the local privacy framework, be performed directly on the anonymized data. We address this problem by means of a variant of Lepski's method tailored to the privacy setup and obtain general oracle inequalities for private kernel density estimators. In the Sobolev case, the resulting adaptive estimator attains the optimal rate of convergence at least up to extra logarithmic factors.",
"subjects": "Statistics Theory (math.ST); Cryptography and Security (cs.CR)",
"title": "Pointwise adaptive kernel density estimation under local approximate differential privacy",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9873750503469328,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7095221685932361
} |
https://arxiv.org/abs/2205.14314 | On a singular limit of the Kobayashi--Warren--Carter energy | By introducing a new topology, a representation formula of the Gamma limit of the Kobayashi-Warren-Carter energy is given in a multi-dimensional domain. A key step is to study the Gamma limit of a single-well Modica-Mortola functional. The convergence introduced here is called the sliced graph convergence, which is finer than conventional $L^1$ convergence, and the problem is reduced to a one-dimensional setting by a slicing argument. | \section{Introduction} \label{S1}
We consider the Kobayashi--Warren--Carter energy, which is a sum of a weighted total variation and a single-well Modica--Mortola energy.
Their explicit forms are
\begin{align}
E^\varepsilon_\mathrm{KWC}(u,v) &:= \int_\Omega \alpha(v)|Du|
+ E^\varepsilon_\mathrm{sMM}(v), \label{eq:E_KWC}\\
{E^\varepsilon_\mathrm{sMM}(v)} &:= \frac\varepsilon2 \int_\Omega |\nabla v|^2 {\mathrm{d}\mathcal{L}^N}
+ \frac{1}{2\varepsilon} \int_\Omega F(v){\mathrm{d}\mathcal{L}^N},\notag
\end{align}
where $\Omega$ is a bounded domain in $\mathbf{R}^N$ with the Lebesgue measure $\mathcal{L}^N$, $\alpha\ge0$, $\varepsilon>0$ is a small parameter, and $F$ is a single-well potential which takes its minimum at $v=1$.
Typical examples of $\alpha$ and $F$ are $\alpha(v)=v^2$ and $F(v)=(v-1)^2$, respectively.
These are the original choices in \cite{KWC1,KWC3}.
The first term in \eqref{eq:E_KWC} is a weighted total variation with weight $\alpha(v)$.
This energy was first introduced by {\cite{KWC1,KWC3}} to model motion of {grain boundaries of polycrystal} which have some structures like the averaged angle of each grain.
This energy is quite popular in materials science.
We are interested in a singular limit of the Kobayashi--Warren--Carter energy $E^\varepsilon_\mathrm{KWC}$ as $\varepsilon$ tends to zero.
If we assume boundedness of $E^\varepsilon_\mathrm{KWC}$ for a sequence $(u,v_\varepsilon)$ for fixed $u$, {then} $v_\varepsilon$ tends to a unique minimum of $F$ as $\varepsilon\to0$ in the $L^2$ sense.
However, if $u$ has a jump discontinuity, its convergence is not uniform near such places, even in a one-dimensional setting, suggesting that we have to introduce a finer topology than $L^2$ or $L^1$ topology.
In fact, in a one-dimensional setting, the notion of graph convergence of $v^\varepsilon$ to a set-valued function is introduced, and representations of Gamma limits of $E^\varepsilon_\mathrm{KWC}$ and $E^\varepsilon_\mathrm{sMM}$ are given in \cite{GOU}.
In this paper, we extend these one-dimensional results to a multi-dimensional setting.
For this purpose, we introduce a new concept of convergence called sliced graph convergence.
Roughly speaking, it requires graph convergence on each line.
Under this convergence in $v_\varepsilon$ and the $L^1$-convergence in $u$, one is able to derive a representation formula for the Gamma limit of $E^\varepsilon_\mathrm{KWC}$ as $\varepsilon\to0$.
It is
\begin{align*}
E^0_\mathrm{KWC}(u,\Xi) &:= \alpha(1) \int_{\Omega\backslash J_u} |Du|
+ \int_{J_u} \min_{\xi^-\leq\xi\leq\xi^+} \alpha(\xi) \left|u^+ - u^- \right| {\mathrm{d}\mathcal{H}^{N-1}} + {E^0_\mathrm{sMM}(\Xi)}, \\
{E^0_\mathrm{sMM}(\Xi)} &:= 2 \int_\Sigma \left\{ G(\xi^-) + G(\xi^+) \right\} {\mathrm{d}\mathcal{H}^{N-1}}\notag
\end{align*}
when $v_\varepsilon$ converges to a set-valued function $\Xi$ of form
\begin{equation*}
\Xi(z) =
\begin{cases}
\left[\xi^-(z),\xi^+(z)\right],&z\in\Sigma,\\
\{1\},&z\not\in\Sigma,
\end{cases}
\end{equation*}
where $\Sigma$ is a countably $N-1$ rectifiable set, and $\xi^\pm$ are {$\mathcal{H}^{N-1}$-measurable functions with $\xi^- \le 1 \le \xi^+$. Here} $\mathcal{H}^{N-1}$ denotes the $N-1$ dimensional Hausdorff measure.
The function $G$ is defined by
\[
G(\sigma) := \left| \int^\sigma_1 \sqrt{F(\tau)} {\mathrm{d}\tau} \right|.
\]
The functions $u^+$ and $u^-$ denote upper and lower approximate limits in the measure-theoretic sense \cite{Fe}.
In the case $\alpha(v)=v^2$, we see
\[
E^0_\mathrm{KWC}(u,\Xi) = \int_{\Omega\backslash J_u} |Du|
+ \int_{J_u\cap\Sigma} \left(\xi^-_+ \right)^2 \left|u^+ - u^- \right| {\mathrm{d}\mathcal{H}^{N-1}}
+ {E^0_\mathrm{sMM}(\Xi)},
\]
where $a_+$ denotes the positive part of a function $a$, i.e., $a_+=\max(a,0)$.
In \cite{GOU}, the case $\alpha(v)=v^2$ is discussed for a one-dimensional setting.
(Unfortunately, $\xi^-_+$ has been misprinted as $\xi^-$ in \cite{GOU}.)
When $F(v)=(v-1)^2$,
\[
{E^0_\mathrm{sMM}(\Xi)} = \int_\Sigma \left\{ (\xi^- -1)^2 + (\xi^+ -1)^2 \right\} {\mathrm{d}\mathcal{H}^{N-1}}.
\]
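This follows directly from the definition of $G$: for $F(\tau)=(\tau-1)^2$ one has
\[
G(\sigma) = \left| \int^\sigma_1 |\tau - 1| \,\mathrm{d}\tau \right| = \frac{(\sigma-1)^2}{2},
\]
so that $2\left\{ G(\xi^-) + G(\xi^+) \right\} = (\xi^- -1)^2 + (\xi^+ -1)^2$.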
In a one-dimensional setting, the results in \cite{GOU} give a full characterization of the Gamma limit: a compactness result as well as the convergence result itself.
On the other hand, it is unclear what kind of set-valued functions should be considered {as} the limit of $v_\varepsilon$ in a multi-dimensional setting, assuming $E^\varepsilon_\mathrm{sMM}(v_\varepsilon)$ is bounded.
A compactness result is still missing in a multi-dimensional setting.
The basic idea is to reduce a multi-dimensional setting to a one-dimensional setting by a slicing argument based on the following disintegration
\[
\int_\Omega f(z) {\mathrm{d}\mathcal{L}^N}(z)
= \int_{\Omega_\nu} \left(\int_{\pi^{-1}_\nu(x)} f\,\mathrm{d}\mathcal{H}^1\right)\,\mathrm{d}\mathcal{L}^{N-1}(x),
\]
where $\pi_\nu$ denotes the projection of $\mathbf{R}^N$ to the {subspace} orthogonal to a unit vector $\nu$, and $\Omega_\nu=\pi_\nu(\Omega)$.
This idea is often used to study the singular limit of the Ambrosio--Tortorelli functional
\[
\mathcal{E}^\varepsilon (u,v) = \int_\Omega v^2 |\nabla u|^2\,\mathrm{d}\mathcal{L}^N
+ \lambda \int_\Omega (u-h)^2\,\mathrm{d}\mathcal{L}^N + {E^\varepsilon_\mathrm{sMM}(v)}, \quad \lambda \geq 0
\]
as in \cite{AT,AT2,FL}, where $h$ is a given $L^2$ function and $F(v)=(v-1)^2$.
This problem can be handled in $L^1$ topology, and its limit is known to be the Mumford--Shah functional
\[
\mathcal{E}^0 (u,K) = \int_{\Omega\backslash K} |\nabla u|^2\,\mathrm{d}\mathcal{L}^N
+ \mathcal{H}^{N-1}(K) + \lambda \int_\Omega (u-h)^2\,\mathrm{d}\mathcal{L}^N,
\]
where $K$ is a countably $N-1$ rectifiable set.
In this case, in our language, it suffices to consider the case $\xi^-=0$, $\xi^+=1$ on $\Sigma=K$ so that
\[
{E^0_\mathrm{sMM}(\Xi)} = \mathcal{H}^{N-1}(K).
\]
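Indeed, for $F(v)=(v-1)^2$ we have $G(0)=1/2$ and $G(1)=0$, so that
\[
E^0_\mathrm{sMM}(\Xi) = 2 \int_K \left\{ G(0) + G(1) \right\} \mathrm{d}\mathcal{H}^{N-1} = \mathcal{H}^{N-1}(K).
\]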
In our case, however, as observed in the one-dimensional problem \cite{GOU}, it is reasonable to study non-constant $\xi^\pm$.
Moreover, the fidelity term including $\lambda$ is also allowed in our case.
Our first main result is the Gamma-convergence of
\[
{E^\varepsilon_\mathrm{sMM} (v)} + \int_J \alpha(v) j(y) {\mathrm{d}\mathcal{H}^{N-1}}(y)
\]
for a given countably rectifiable set $J$, where $j$ is an $ \mathcal{H}^{N-1}${-}integrable function on $J$.
This energy is a special case of $E^\varepsilon_\mathrm{KWC} (u,v)$ when $u$ has a jump in $J$ while it is constant outside $J$.
To show liminf inequality, we decompose $\Sigma$ into a disjoint union of compact sets {$\{K_i\}_i$} lying in almost flat hypersurfaces.
Then we reduce the problem to a one-dimensional setting as in \cite{FL}.
To show limsup inequality, we approximate $\xi^\pm$ so that they are constants in each $K_i$.
This approximation procedure is quite involved because one must approximate not only the energies but also the set-valued functions themselves in the sliced graph topology.
The basic choice of recovery sequences is similar to \cite{AT,FL}.
This paper's main result is the Gamma-convergence of the Kobayashi--Warren--Carter energy.
The additional difficulty comes from the $\int\alpha(v)|Du|$ part, which is handled by decomposing the domain of integration into two parts: a neighbourhood of the set $\Sigma$ associated with the limit $\Xi$ of $v_\varepsilon$, and its complement.
The most difficult problem is how to choose a suitable topology for $v_\varepsilon$ to $\Xi$.
We take slices along straight lines passing through $x$ with direction $\nu$, for $\mathcal{L}^{N-1}$-almost every $x\in\pi_\nu(\Omega)$ and for $\nu$ in a countable dense set of directions.
We need several concepts of set-valued functions to formulate the topology, including measurability, as discussed in \cite{AF}.
The compactness is missing for the convergence of $E^\varepsilon_\mathrm{KWC}$ to $E^0_\mathrm{KWC}$.
Therefore, we do not know whether a minimizer of $E^0_\mathrm{KWC}$ exists under suitable boundary conditions or a minimizer of energy like $E^0_\mathrm{KWC}+\lambda\int_\Omega(u-h)^2 d\mathcal{L}^N$ exists.
If one minimizes $E^0_\mathrm{KWC}$ in the $\Xi$ variable, i.e.,
\[
TV_\mathrm{KWC}(u) := \inf_{\Xi\in\mathcal{A}_0} E^0_\mathrm{KWC}(u,\Xi),
\]
this can be calculated as
\[
TV_\mathrm{KWC}(u) = \int_\Sigma \sigma \left( |u^+ - u^-| \right) {\mathrm{d}\mathcal{H}^{N-1}} + \int_{\Omega\backslash J_u}|Du|
\]
with
\begin{align*}
\sigma(r) & := \min_{\xi^-,\xi^+} \left\{ r\min_{\xi^-\leq\xi\leq\xi^+} \alpha(\xi) + 2 \left( G(\xi^-)+G(\xi^+) \right) \right\} \\
& = \min_{\xi^-} \left\{ r\min_{\xi^-\leq\xi\leq1} \alpha(\xi) + 2G(\xi^-) \right\},\ r \geq 0
\end{align*}
if $\alpha(v)\geq\alpha(1)$ for $v\geq1$.
This $\sigma$ is always concave.
If $F(v)=(v-1)^2$, then
\[
\sigma(r) = \min_{\xi^-} \left\{ r(\xi^-_+)^2 + (\xi^- - 1)^2 \right\} = \frac{r}{r+1}.
\]
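Indeed, the function $\xi \mapsto r\xi^2 + (\xi-1)^2$ is minimized at $\xi^- = 1/(1+r) \in (0,1]$, where the positive part plays no role, and
\[
r \left( \frac{1}{1+r} \right)^2 + \left( \frac{1}{1+r} - 1 \right)^2
= \frac{r + r^2}{(1+r)^2} = \frac{r}{1+r}.
\]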
In other words,
\[
TV_\mathrm{KWC}(u) = \int_\Sigma \frac{|u^+ - u^-|}{1+|u^+ - u^-|}{\mathrm{d}\mathcal{H}^{N-1}}
+ \int_{\Omega\backslash J_u}|Du|.
\]
This functional is a kind of total variation but has different aspects.
For example, if $u$ is a piecewise constant monotone increasing function in a one-dimensional setting, the total variation $TV(u)=\int_\Omega|Du|$ equals $\sup u-\inf u$.
This case is often called a staircase problem since $TV$ does not care about the number and size of jumps for monotone functions.
In contrast to $TV$, $TV_\mathrm{KWC}$ costs less when the number of jumps is smaller, provided that each jump has the same size and $\sup u-\inf u$ is the same.
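To illustrate this for $F(v)=(v-1)^2$, let $u$ be a monotone piecewise constant function in a one-dimensional setting with $m$ jumps of equal size $r/m$, so that $\sup u - \inf u = r$ for every $m$. Then $TV(u)=r$ independently of $m$, whereas
\[
TV_\mathrm{KWC}(u) = m\, \sigma\!\left( \frac{r}{m} \right) = \frac{r}{1+r/m},
\]
which is strictly increasing in $m$; a single jump of size $r$ is the cheapest configuration.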
The energy like $TV_\mathrm{KWC}$ for a piecewise constant function is derived as the surface tension of grain boundaries in polycrystals \cite{LL}, which is an active area, as studied by \cite{GaSp}.
The Modica--Mortola functional is the sum of Dirichlet energy and potential energy.
The Gamma limit problem was first studied in \cite{MM1}.
Since then, there has been much literature studying the Gamma-convergence problems.
If $F$ is a double-well potential, say $F(v)=(v^2-1)^2$, then the Modica--Mortola functional reads
\[
E^\varepsilon_\mathrm{dMM}(v) = \frac{\varepsilon}{2} \int_\Omega |\nabla v|^2 {\mathrm{d}\mathcal{L}^N}
+ \frac{1}{2\varepsilon} \int_\Omega (v^2-1)^2 {\mathrm{d}\mathcal{L}^N}.
\]
If $E^\varepsilon_\mathrm{dMM}(v_\varepsilon)$ is bounded, $v_\varepsilon(z)$ converges to either $1$ or $-1$ for $\mathcal{L}^N$-almost all $z\in\Omega$ by taking a subsequence.
The interface between two states, $\{\lim v_\varepsilon=1\}$ {and} $\{\lim v_\varepsilon=-1\}$, is called a transition interface.
In a one-dimensional setting, its Gamma limit is considered in $L^1$ topology and is characterized by the number of transition points \cite{MM2}.
This result is extended to a multi-dimensional setting in \cite{M,St}, and the Gamma limit is a constant multiple of the surface area of the transition interface.
However, the topology of convergence of $v_\varepsilon$ is either {in} $L^1$ {topology or in measure} (including almost everywhere convergence).
If we consider its Gamma limit in the sliced graph convergence, we expect that the limit equals
\[
E^0_\mathrm{dMM}(\Xi) = 2 \int_\Sigma \left\{G_-(\xi^-)+G_+(\xi^+) \right\} {\mathrm{d}\mathcal{H}^{N-1}}
+ G_-(1)\mathcal{H}^{N-1} (\Sigma)
\]
for
\begin{equation*}
\Xi(z) := \left \{
\begin{array}{ll}
\left[ \xi^-(z), \xi^+(z) \right], &{\text{for}}\ z \in \Sigma, \\
\text{either}\ 1\ \text{or}\ -1, &{\text{otherwise}},
\end{array}
\right.
\end{equation*}
where $[-1,1]\subset[\xi^-,\xi^+]$.
Here, $G_\pm$ is defined as
\[
G_\pm(\sigma) = \left| \int^\sigma_{\pm 1} \sqrt{F(\tau)}{\mathrm{d}\tau} \right|.
\]
The first term in $E_{\mathrm{dMM}}^0(\Xi)$ is invisible in $L^1$ convergence, while the second term is the Gamma limit of $E_\mathrm{dMM}$ in the $L^1$ sense.
We do not give proof in this paper.
If compactness is available, the Gamma-convergence yields the convergence of a local minimizer and the global minimizer.
For $L^1$ convergence, based on this strategy, the convergence of a local minimizer has been established in \cite{KS} when the limit is a strict local minimizer.
The convergence of critical points is outside the framework of a general theory and should be discussed separately as in \cite{HT}.
In recent years, the Gamma limit of the double-well Modica--Mortola functional with spatial inhomogeneity has been studied from a homogenization point of view (see, e.g.\ \cite{CFHP1}, \cite{CFHP2}), but still under $L^1$ convergence or convergence in measure.
The Mumford--Shah functional $\mathcal{E}^0$ is difficult to handle because one of the variables is a set $K$.
This is the motivation for introducing $\mathcal{E}^\varepsilon$, called the Ambrosio--Tortorelli functional, to approximate $\mathcal{E}^0$ in \cite{AT}.
The Gamma limit of $\mathcal{E}^\varepsilon$ is by now well studied \cite{AT,AT2}, and with weights \cite{FL}.
The convergence of critical points is studied in \cite{FLS} in a one-dimensional setting; the higher-dimensional case was studied quite recently by \cite{BMR} by adjusting the idea of \cite{LSt}.
The Ambrosio--Tortorelli approximation is now used in various problems, including the decomposition of brittle fractures \cite{FMa} and the Steiner problem \cite{LS,BLM}.
However, in all these works, the energy for $u$ is a $v$-weighted Dirichlet energy, not $v$-weighted total variation energy.
The singular limit of the gradient flow of the double-well Modica--Mortola functional is well studied.
The sharp interface limit, i.e., $\varepsilon\to0$, yields the mean curvature flow of an interface.
For an early stage of development, see \cite{BL,XC,MSch}, on convergence to a smooth mean curvature flow and \cite{ESS} on convergence to a level-set mean curvature flow \cite{G}.
For more recent studies, see, for example, \cite{AHM,To}.
The gradient flow of the Kobayashi--Warren--Carter energy $E^\varepsilon_\mathrm{KWC}$ is proposed in \cite{KWC1} (see also \cite{KWC2,KWC3}) to model grain boundary motion when each grain has some structure.
Its explicit form is
\begin{align*}
\tau_1 v_t &= s \Delta v + (1-v) - 2sv |\nabla u|, \\
\tau_0 v^2 u_t &= s \operatorname{div} \left( v^2\frac{\nabla u}{|\nabla u|}\right),
\end{align*}
where $\tau_0$, $\tau_1$, and $s$ are positive parameters.
This system is regarded as the gradient flow of $E^\varepsilon_\mathrm{KWC}$ with $F(v)=(v-1)^2$, $\varepsilon=1$, {and} $\alpha(v)=v^2$.
Because of the presence of the singular term $\nabla u/|\nabla u|$, the meaning of the solution itself is non-trivial since, even if $v\equiv1$, the flow is the total variation flow, and a non-local quantity determines the speed \cite{KG}.
At this moment, the well-posedness of its initial-value problem is an open question.
If the second equation is replaced by
\[
\tau_0 (v^2+{\delta}) u_t = s \operatorname{div} \left( (v^2+\delta') \nabla u/|\nabla u| + \mu\nabla u \right)
\]
with $\delta>0$, $\delta'\geq0$ and $\mu\geq0$ satisfying $\delta'+\mu>0$, the existence and large-time behavior of solutions are established in \cite{IKY,MoSh,MoShW1,SWat,SWY,WSh} under several homogeneous boundary conditions.
However, its uniqueness is only proved in a one-dimensional setting under $\mu>0$ \cite[Theorem 2.2]{IKY}.
These results can be extended to the cases of non-homogeneous boundary conditions.
Under non-homogeneous Dirichlet boundary conditions, we are able to find various structural patterns of steady states; see \cite{MoShW2}.
The singular limit of the gradient flow of $E^\varepsilon_\mathrm{KWC}$ is not known even if $\alpha(v)=v^2+\delta'$, $\delta'>0$.
In \cite{ELM}, a gradient flow of
\[
E(u,\Sigma) = \int_\Sigma \sigma \left(\left|u^+-u^-\right|\right){\mathrm{d}\mathcal{H}^{N-1}}, \quad N=2
\]
is studied.
Here $u$ is a piecewise constant function outside a union $\Sigma$ of smooth curves, including triple junction, and $\sigma$ is a given non-negative function.
Our $TV_\mathrm{KWC}$ is a typical example.
They take the variation of $E$ not only in $u$ but also in $\Sigma$ and derive a weighted curvature flow coupled with the evolution of the boundary values of $u$ and the motion of triple junctions.
It is not clear that the singular limit of the gradient flow of $E^\varepsilon_\mathrm{KWC}$ gives this flow since, in the total variation flow, the variation is taken only in the direction of $u$ and does not include domain variation, which is the source of the mean curvature flow.
This paper is organized as follows.
In Section \ref{SSGC}, we introduce the notion of sliced graph convergence.
In Section \ref{SLSC}, we discuss the liminf inequality of the singular limit of $E^\varepsilon_\mathrm{sMM}$ with an additional term under the sliced graph convergence.
In Section \ref{SCRS}, we discuss the limsup inequality by constructing recovery sequences.
In Section \ref{SLKWC}, we discuss the singular limit of $E^\varepsilon_\mathrm{KWC}$.
{The results of this paper are based on the thesis \cite{O} of the second author.}
\section{Sliced graph convergence} \label{SSGC}
In this section, we introduce the notion of sliced graph convergence.
We first recall a few basic notions of a set-valued function, especially on the measurability.
We then review the slicing argument and introduce the concept of sliced graph convergence.
\subsection{A set-valued function and its measurability}
We first recall a few basic notions of a set-valued function; see \cite{AF}.
Let $M$ be a Borel set in $\mathbf{R}^d$ and $\Gamma$ be a set-valued function on $M$ with values in $2^{\mathbf{R}^m}\backslash\{\emptyset\}$ such that $\Gamma(z)$ is closed in $\mathbf{R}^m$ for all $z\in M$.
We say that such $\Gamma$ is a closed set-valued function.
We say that $\Gamma$ is \emph{Borel measurable} if $\Gamma^{-1}(U)$ is a Borel set whenever $U$ is an open set in $\mathbf{R}^m$.
Here, the inverse $\Gamma^{-1}(U)$ is defined as
\[
\Gamma^{-1}(U) := \left\{ z \in M \bigm|
\Gamma(z) \cap U \neq \emptyset \right\}.
\]
Similarly, we say that $\Gamma$ is \emph{Lebesgue measurable} if $\Gamma^{-1}(U)$ is Lebesgue measurable whenever $U$ is an open set.
Assume that $M$ is closed.
We say that $\Gamma$ is \emph{upper semicontinuous} if $\operatorname{graph}\Gamma$ is closed in $M\times\mathbf{R}^m$, where
\[
\operatorname{graph}\Gamma := \left\{ z=(x,y) \in M \times \mathbf{R}^m \bigm|
y \in \Gamma({x}),\ x \in M \right\}.
\]
If $\Gamma$ is upper semicontinuous, $\Gamma$ is Borel measurable \cite{AF}.
Assume that $M$ is compact.
Then, $\operatorname{graph}\Gamma$ is compact if it is closed.
We set
\[
\mathcal{C} = \left\{ \Gamma \mid
\operatorname{graph}\Gamma\ \text{is compact in}\ M\times \mathbf{R}^m\ \text{and}\ \Gamma(x)\neq\emptyset \ \text{for}\ {x} \in M\right\}.
\]
For $\Gamma_1, \Gamma_2 \in \mathcal{C}$, we set
\[
d_g(\Gamma_1, \Gamma_2) := d_H(\operatorname{graph}\Gamma_1, \operatorname{graph}\Gamma_2),
\]
where $d_H$ denotes the Hausdorff distance of two sets in $M\times\mathbf{R}^m$, defined by
\[
d_H(A,B) := \max \left\{ \sup_{z \in A} \operatorname{dist}(z,B), \sup_{w \in B} \operatorname{dist}(w,A) \right\}
\]
for $A,B \subset M\times\mathbf{R}^m$, and
\[
\operatorname{dist}(z,B) := \inf_{w \in B} \operatorname{dist}(z,w),\quad
\operatorname{dist}(z,w) = |z-w|,
\]
where $|\cdot|$ denotes the Euclidean norm in $\mathbf{R}^d\times\mathbf{R}^m$.
We recall a fundamental property of a Borel measurable set-valued function \cite[Theorem 8.1.4]{AF}.
\begin{theorem} \label{MBA}
Let $\Gamma$ be a closed set-valued function on a Borel set $M$ in $\mathbf{R}^d$ with values in $2^{\mathbf{R}^m}\backslash\{\emptyset\}$.
The following three statements are equivalent:
\begin{enumerate}
\item[(i)] $\Gamma$ is Borel (resp.\ Lebesgue) measurable.
\item[(i\hspace{-1pt}i)] $\operatorname{graph}\Gamma$ is a Borel set ($\mathbf{M}\otimes\mathbf{B}$ measurable set) in $M\times\mathbf{R}^m$.
\item[(i\hspace{-1pt}i\hspace{-1pt}i)] There is a sequence of Borel (Lebesgue) measurable functions $\{f_j\}^\infty_{j=1}$ such that
\[
\Gamma(z) = \overline{\left\{f_j(z)\bigm| j=1,2,\ldots\right\}}.
\]
\end{enumerate}
Here $\mathbf{M}$ denotes the $\sigma$-algebra of Lebesgue measurable sets in $M$ and $\mathbf{B}$ denotes the $\sigma$-algebra of Borel sets in $\mathbf{R}^m$.
\end{theorem}
\subsection{The definition of the sliced graph convergence}
We next recall the notation often used in the slicing argument \cite{FL}.
Let $S$ be a set in $\mathbf{R}^N$.
Let $S^{N-1}$ denote the unit sphere in $\mathbf{R}^N$ centered at the origin, i.e.,
\[
S^{N-1} = \left\{ \nu \in \mathbf{R}^N \bigm|
|\nu| = 1 \right\}.
\]
For a given $\nu$, let $\Pi_\nu$ denote the hyperplane whose normal equals $\nu$.
In other words,
\[
\Pi_\nu := \left\{ x \in \mathbf{R}^N \bigm|
\langle x,\nu \rangle = 0 \right\},
\]
where $\langle \ ,\ \rangle$ denotes the standard inner product in $\mathbf{R}^N$.
For $x\in\Pi_\nu$, let $S_{x,\nu}$ denote the intersection of {$S$} and the whole line with direction $\nu$, which contains $x$; that is,
\[
S_{x,\nu} := \left\{ x + t \nu \bigm|
t \in S^1_{x,\nu} \right\},
\]
where
\[
S^1_{x,\nu} := \left\{ t \in \mathbf{R} \bigm|
x + t\nu \in S \right\} \subset \mathbf{R}.
\]
We also set
\[
S_\nu := \left\{ x \in \Pi_\nu \bigm|
S_{x,\nu} \neq \emptyset \right\}.
\]
See Figure \ref{FSC}.
\begin{figure}[htb]
\centering
\includegraphics[width=5cm]{GOSUfigure_1.png}
\caption{Slicing} \label{FSC}
\end{figure}
For a given function $f$ on $S$, we associate it with a function $f_{x,\nu}$ on $S^1_{x,\nu}$ defined by
\[
f_{x,\nu}(t) := f(x + t \nu).
\]
Let $\Omega$ be a bounded domain in $\mathbf{R}^N$, and $\mathcal{T}$ denote the set of all Lebesgue measurable (closed) set-valued functions $\Gamma:\Omega\to2^\mathbf{R}$.
For $\nu \in S^{N-1}$, we consider $\Omega^1_{x,\nu}\subset\mathbf{R}$ and the (sliced) set-valued function $\Gamma_{x,\nu}$ on $\Omega^1_{x,\nu}$ defined by $\Gamma_{x,\nu}(t)=\Gamma(x+t\nu)$.
Let $\overline{\Gamma_{x,\nu}}$ denote its closure, defined on the closure $\overline{\Omega^1_{x,\nu}}$ of $\Omega^1_{x,\nu}$.
Namely, it is uniquely determined so that the graph of $\overline{\Gamma_{x,\nu}}$ equals the closure of $\operatorname{graph}\Gamma_{x,\nu}$ in $\mathbf{R}\times\mathbf{R}$.
As with usual measurable functions, $\Gamma^{(1)}$ and $\Gamma^{(2)}$ belonging to $\mathcal{T}$ are identified if $\Gamma^{(1)}(z)=\Gamma^{(2)}(z)$ for $\mathcal{L}^N$-a.e.\ $z\in\Omega$.
By Fubini's theorem, $\Gamma^{(1)}_{x,\nu}(t)=\Gamma^{(2)}_{x,\nu}(t)$ for $\mathcal{L}^1$-a.e.\ $t$ for $\mathcal{L}^{N-1}$-a.e.\ $x\in\Omega_\nu$.
With this identification, we consider its equivalence class, and we call each $\Gamma^{(1)}$, $\Gamma^{(2)}$ a representative of this equivalence class.
For $\nu\in S^{N-1}$, we define the subset $\mathcal{B}_\nu \subset \mathcal{T}$ as follows: $\Gamma \in \mathcal{B}_\nu$ if, for a.e.\ $x\in\Omega_\nu$,
\begin{itemize}
\item There is a representative of $\Gamma_{x,\nu}$ such that $\overline{\Gamma_{x,\nu}} = \Gamma_{x,\nu}$ on $\Omega^1_{x,\nu}$;
\item $\operatorname{graph}\overline{\Gamma_{x,\nu}}$ is compact in $\overline{\Omega^1_{x,\nu}}\times\mathbf{R}$.
\end{itemize}
We note that if $\Gamma^{(1)},\Gamma^{(2)}\in\mathcal{B}_\nu$, then $\overline{\Gamma^{(1)}_{x,\nu}},\overline{\Gamma^{(2)}_{x,\nu}}\in\mathcal{C}$ with $M=\overline{\Omega^1_{x,\nu}}$ by a suitable choice of representative of $\Gamma^{(1)}_{x,\nu}, \Gamma^{(2)}_{x,\nu}$, which follows from the definition.
In this situation, we have the following fact:
\begin{lemma}\label{lemma:distance}
The function
\[
f(x) = d_g \left( \overline{\Gamma^{(1)}_{x,\nu}},\overline{\Gamma^{(2)}_{x,\nu}} \right)
= d_H \left( \operatorname{graph}\Gamma^{(1)}_{x,\nu},\operatorname{graph}\Gamma^{(2)}_{x,\nu} \right)
\]
is Lebesgue measurable in $\Omega_\nu$.
\end{lemma}
\begin{proof}
Since each Lebesgue measurable function $f$ has a Borel measurable function $\overline{f}$ with $f(z)=\overline{f}(z)$ for $\mathcal{L}^N$-a.e.\ $z\in\Omega$, by Theorem~\ref{MBA}~(i\hspace{-1pt}i\hspace{-1pt}i), there is a Borel measurable representative of $\Gamma$.
By Theorem~\ref{MBA}~(i\hspace{-1pt}i),
$\operatorname{graph}\Gamma$ is a Borel set for the Borel representative of $\Gamma$.
Since the graph of the set-valued function $T:x\longmapsto\operatorname{graph}\overline{\Gamma_{x,\nu}}$ on $\Omega_\nu$ equals $\operatorname{graph}\Gamma$ for $\Gamma\in\mathcal{B}_\nu$ by taking a suitable representative of $\Gamma$,
we see that $T$ should be Borel measurable if $\Gamma$ is Borel measurable by Theorem~\ref{MBA}~(i\hspace{-1pt}i).
(Note that $T(x)$ is a compact set in $\mathbf{R}\times\mathbf{R}$.)
Since $d_H$ is continuous, the function $f$ is measurable.
\end{proof}
We now introduce a metric on $\mathcal{B}_\nu$ of form
\[
d_\nu \left( \Gamma^{(1)},\Gamma^{(2)} \right)
:= \int_{\Omega_\nu} \frac{d_g \left( \overline{\Gamma^{(1)}_{x,\nu}},\overline{\Gamma^{(2)}_{x,\nu}} \right)}{1+d_g \left( \overline{\Gamma^{(1)}_{x,\nu}},\overline{\Gamma^{(2)}_{x,\nu}} \right)} \,\mathrm{d}\mathcal{L}^{N-1}(x)
\]
for $\Gamma^{(1)},\Gamma^{(2)}\in\mathcal{B}_\nu$, where $\mathcal{L}^{N-1}$ denotes the Lebesgue measure on $\Pi_\nu$.
From Lemma~\ref{lemma:distance}, we see that this is a well-defined quantity for all $ \Gamma^{(1)},\Gamma^{(2)}\in\mathcal{B}_\nu$.
We identify $\Gamma^{(1)},\Gamma^{(2)}\in\mathcal{B}_\nu$ if $\Gamma^{(1)}_{x,\nu}=\Gamma^{(2)}_{x,\nu}$ for a.e.\ $x$.
With this identification, $(\mathcal{B}_\nu,d_\nu)$ is indeed a metric space.
By a standard argument, we see that $(\mathcal{B}_\nu,d_\nu)$ is a complete metric space; we do not give proof since we do not use this fact.
Let $D$ be a countable dense set in $S^{N-1}$.
We set
\[
\mathcal{B}_D := \bigcap_{\nu\in D}{\mathcal{B}_\nu}.
\]
It is a metric space with metric
\[
d_D \left( \Gamma^{(1)},\Gamma^{(2)} \right)
:= \sum^\infty_{j=1} \frac{1}{2^j}
\frac{d_{\nu_j} \left( \Gamma^{(1)},\Gamma^{(2)} \right)}{1+d_{\nu_j} \left( \Gamma^{(1)},\Gamma^{(2)} \right)},
\]
where $D=\{\nu_j\}^\infty_{j=1}$.
(This is also a complete metric space.)
We shall fix $D$.
The convergence with respect to $d_D$ is called the \emph{sliced graph convergence}.
If $\{\Gamma_k\}\subset\mathcal{B}_D$ converges to $\Gamma\in\mathcal{B}_D$ with respect to $d_D$, we write $\Gamma_k\xrightarrow{sg}\Gamma$ (as $k\to\infty$).
Roughly speaking, $\Gamma_k\xrightarrow{sg}\Gamma$ if the graph of the slice $\Gamma_k$ converges to that of $\Gamma$ for a.e. $x \in \Omega_\nu$ for any $\nu \in D$.
For a function $v$ on $\Omega$, we associate a set-valued function $\Gamma_v$ by $\Gamma_v(x)=\left\{v(x)\right\}$.
If $\Gamma_k=\Gamma_{v_k}$ for some $v_k$, we shortly write $v_k\xrightarrow{sg}\Gamma$ instead of $\Gamma_{v_k}\xrightarrow{sg}\Gamma$.
We note that if $v\in H^1(\Omega)$, the $L^2$-Sobolev space of order $1$, then $\Gamma_v\in\mathcal{B}_D$ for any $D$.
We conclude this subsection by showing that the notions of the graph convergence and the sliced graph convergence are unrelated for $N\geq2$.
First, we give an example that the graph convergence does not imply the sliced graph convergence.
Let $C(r)$ denote the circle of radius $r>0$ centered at the origin in $\mathbf{R}^2$.
It is clear that $d_H\left(C(r),C(r-\varepsilon)\right)\to0$ as $\varepsilon>0$ tends to zero.
However, for $\nu=(1,0)$, $C(r-\varepsilon)_{x,\nu}$ with $x=(0,\pm r)$ is empty and does not converge to a single point $C(r)_{x,\nu}=\left\{(0,\pm r)\right\}$.
In this case, $C(r-\varepsilon)_{x,\nu}$ converges to $C(r)_{x,\nu}$ in the Hausdorff sense except the case $x=(0,\pm r)$.
To make the exceptional set have a positive $\mathcal{L}^1$ measure in $\Pi_\nu$, we recall a thick Cantor set defined by
\begin{align*}
G &:= [0,1] \backslash U \\
U &:= \bigcup \left\{\left( \frac{a}{2^n} - \frac{1}{2^{2n+1}}, \frac{a}{2^n} + \frac{1}{2^{2n+1}} \right)
\biggm| n, a = 1,2,\ldots \right\}.
\end{align*}
This $G$ is a compact set with a positive $\mathcal{L}^1$ measure.
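One crude way to see the positivity is the following: for $n\geq2$, an interval with even $a=2a'$ is contained in the level-$(n-1)$ interval centered at $a'/2^{n-1}$, so at level $n$ only the at most $2^{n-1}$ intervals with odd $a$ meeting $[0,1]$ contribute new length. Together with the two level-one intervals around $1/2$ and $1$ (intersected with $[0,1]$), this gives
\[
\mathcal{L}^1\left( U \cap [0,1] \right) \leq \frac14 + \frac18 + \sum_{n\geq2} 2^{n-1}\cdot 2^{-2n} = \frac58,
\qquad\text{so}\qquad \mathcal{L}^1(G) \geq \frac38 > 0.
\]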
We set
\[
K := \bigcup_{r \in G} C(r), \quad
K_\varepsilon := \bigcup_{r \in G} C(r-\varepsilon).
\]
The set $K_\varepsilon$ converges to $K$ as $\varepsilon\to0$ in the Hausdorff distance sense.
However, for any $\nu\in S^1$, the slice $(K_\varepsilon)_{x,\nu}$ does not converge to {$K_{x,\nu}$} for $x\in\Pi_\nu$ with $|x|\in G$.
Based on this set, it is easy to construct an example showing that graph convergence does not imply sliced graph convergence.
Let $\Omega$ be an open unit disk centered at the origin.
We set
\begin{center}
\begin{minipage}[c][24pt][b]{0.35\textwidth}
\begin{eqnarray*}
\Gamma_\varepsilon(z) := \left\{
\begin{array}{cl}
[0,1], & z \in K_\varepsilon \\
\{ 1 \}, & z \in \Omega\backslash K_\varepsilon
\end{array}
\right.,
\end{eqnarray*}
\end{minipage}
\begin{minipage}[c][24pt][b]{0.35\textwidth}
\begin{eqnarray*}
\Gamma(z) := \left\{
\begin{array}{cl}
[0,1], & z \in K \\
\{ 1 \}, & z \in \Omega\backslash K
\end{array}
\right..
\end{eqnarray*}
\end{minipage}
\end{center}
The graph convergence of $\Gamma_\varepsilon$ to $\Gamma$ is equivalent to the Hausdorff convergence of $K_\varepsilon$ to $K$.
The sliced graph convergence is equivalent to saying $(K_\varepsilon)_{x,\nu}\to K_{x,\nu}$ for $\nu\in D$ and a.e.\ $x$, where $D$ is some dense set in $S^1$.
However, from the construction of $K_\varepsilon$ and $K$, we observe that for any $\nu\in S^1$, the slice $(K_\varepsilon)_{x,\nu}$ does not converge to $K_{x,\nu}$ for $x$ with $|x|\in G$, which has a positive $\mathcal{L}^1$ measure on $\Pi_\nu$.
Thus, we see that $\Gamma_\varepsilon$ does not converge to $\Gamma$ in the sense of the sliced graph convergence while $\Gamma_\varepsilon$ converges to $\Gamma$ in the sense of graph convergence.
The sliced graph convergence does not imply the graph convergence even if the graph convergence is interpreted in the sense of essential distance.
For any $\mathcal{H}^N$-measurable set $A$ in $\mathbf{R}^{N+1}$ and a point $p\in\mathbf{R}^{N+1}$, we set the essential distance from $p$ to $A$ as
\[
d_e(p,A) := \inf \left\{ r>0 \bigm| \mathcal{H}^N \left( B_r(p)\cap A \right) > 0 \right\},
\]
where $B_r(p)$ is a closed ball of radius $r$ centered at $p$.
We set
\[
N_\delta(A) := \left\{ q\in\mathbf{R}^{N+1} \bigm| d_e (q,A) < \delta \right\},
\]
and the essential Hausdorff distance is defined as
\[
d_{eH}(A,B) := \inf \left\{ \delta>0 \bigm| A \subset N_\delta(B),\ B \subset N_\delta(A) \right\}.
\]
Let $\Omega$ be a domain in $\mathbf{R}^N$ ($N\geq 2$) containing $B_1(0)$ and set
\[
\Gamma^\varepsilon(z) = \left\{ \left( 1-|z|/\varepsilon \right)_+ \right\}, \quad
\Gamma^0(z) = \{0\}
\]
for $z\in\Omega$ and $\varepsilon>0$.
Clearly, for any $\nu\in S^{N-1}$, $x\in\Omega_\nu$ with $x\neq0$,
\[
d_H \left( \operatorname{graph} \Gamma^\varepsilon_{x,\nu}, \operatorname{graph} \Gamma^0_{x,\nu} \right) \to 0
\]
holds as $\varepsilon\to0$.
However,
\[
d_{eH} \left( \operatorname{graph} \Gamma^\varepsilon, \operatorname{graph} \Gamma^0 \right) = 1;
\]
in particular, $\Gamma^\varepsilon$ does not converge to $\Gamma^0$ in the $d_{eH}$ convergence of the graphs.
\section{Lower semicontinuity} \label{SLSC}
We now introduce a single-well Modica--Mortola function $E^\varepsilon_\mathrm{sMM}$ on $H^1(\Omega)$ when $\Omega$ is a bounded domain in $\mathbf{R}^N$.
For $v\in H^1(\Omega)$, we set an integral
\[
E^\varepsilon_\mathrm{sMM} (v) := \frac{\varepsilon}{2} \int_\Omega |\nabla v|^2 \,\mathrm{d}\mathcal{L}^N
+ \frac{1}{2\varepsilon} \int_\Omega F(v) \,\mathrm{d}\mathcal{L}^N,
\]
where $\mathcal{L}^N$ denotes the $N$-dimensional Lebesgue measure.
Here, the potential energy $F$ is a single-well potential.
We shall assume that
\begin{enumerate}
\item[(F1)] $F\in C^1(\mathbf{R})$ is non-negative, and $F(v)=0$ if and only if $v=1$,
\item[(F2)] $\liminf_{|v|\to\infty} F(v) > 0$.
We occasionally impose the following monotonicity condition, which is (in combination with (F1)) stronger than (F2):
\item[(F2')] $F'(v)(v-1)\geq0$ for all $v\in\mathbf{R}$.
\end{enumerate}
We are interested in the Gamma limit of $E^\varepsilon_\mathrm{sMM}$ as $\varepsilon\to 0$ under the sliced graph convergence.
We define the subset $\mathcal{A}_0 := \mathcal{A}_0(\Omega) \subset \mathcal{B}_D$ as follows: $\Xi \in \mathcal{A}_0(\Omega)$ if there is a countably $N-1$ rectifiable set $\Sigma\subset\Omega$ such that
\begin{equation}
\Xi(z) = \left \{
\begin{array}{l} \label{SIG}
\{1\},\ z\in\Omega\backslash\Sigma \\
\left[\xi^-, \xi^+\right],\ z\in\Sigma
\end{array}
\right.
\end{equation}
with $\mathcal{H}^{N-1}$-measurable functions $\xi^\pm$ on $\Sigma$ and $\xi^-(z) \leq 1 \leq \xi^+(z)$ for $\mathcal{H}^{N-1}$-a.e.\ $z \in \Sigma$.
For the definition of countably $N-1$ rectifiability, see the beginning of Section~\ref{SSBP}.
Here $\mathcal{H}^m$ denotes the $m$-dimensional Hausdorff measure.
We briefly remark on the compactness of the graph of $\Xi\in\mathcal{A}_0$.
By definition, if $\Xi$ is of form \eqref{SIG}, then $\Xi(z)$ is compact.
However, there may be a chance that $\operatorname{graph}\overline{\Xi_{x,\nu}}$ is not compact, even in the one-dimensional case ($N=1$).
Indeed, if a set-valued function on $(0,1)$ is of form
\begin{equation*}
\Xi(z) = \left \{
\begin{array}{ll}
\left[1,m\right]&\text{for}\ z=1/m \\
\{1\}&\text{otherwise},
\end{array}
\right.
\end{equation*}
then $\overline{\Xi}$ is not compact in $[0,1]\times\mathbf{R}$.
It is also possible to construct an example that $\overline{\Xi}\neq\Xi$ in $(0,1)$, which is why we impose $\Xi\in\mathcal{B}_D$ in the definition of $\mathcal{A}_0$.
For $\Xi\in\mathcal{A}_0$, we define a functional
\[
E^0_\mathrm{sMM}(\Xi,\Omega) := 2\int_\Sigma \left\{ G(\xi^-) + G(\xi^+) \right\} {\,\mathrm{d}\mathcal{H}^{N-1}},\quad \text{where}\ G(\sigma) := \left| \int^\sigma_1 \sqrt{F(\tau)} {\,\mathrm{d}\tau} \right|.
\]
For later applications, it is convenient to consider a more general functional.
Let $J$ be a countably $N-1$ rectifiable set, and $\alpha:\mathbf{R}\to[0,\infty)$ be continuous.
Let $j$ be a non-negative $\mathcal{H}^{N-1}$-measurable function on $J$. We denote the triplet $(J,j,\alpha)$ by $\mathcal{J}$.
We set
\[
E^{0,{\mathcal{J}}}_\mathrm{sMM}(\Xi,\Omega) = E^0_\mathrm{sMM}(\Xi,\Omega)
+ \int_{J\cap\Sigma} \left( \min_{\xi^-\leq\xi\leq\xi^+} \alpha(\xi) \right) {\,\mathrm{d}\mathcal{H}^{N-1}}.
\]
For $v \in H^1(\Omega)$, we also set
\[
E^{\varepsilon,{\mathcal{J}}}_\mathrm{sMM}(v) := E^\varepsilon_\mathrm{sMM}(v)
+ \int_J \alpha(v)j{\,\mathrm{d}\mathcal{H}^{N-1}},
\]
which is important for the study of the Kobayashi--Warren--Carter energy.
\subsection{Liminf inequality} \label{SSLINF}
We shall state the ``liminf inequality'' for the convergence of $E^{\varepsilon,{\mathcal{J}}}_\mathrm{sMM}$.
\begin{theorem} \label{INF}
Let $\Omega$ be a bounded domain in $\mathbf{R}^N$.
Assume that $F$ satisfies (F1) and (F2).
For ${\mathcal{J}}=(J,j,\alpha)$, assume that $J$ is countably $N-1$ rectifiable in $\Omega$ with a non-negative $\mathcal{H}^{N-1}$-measurable function $j$ on $J$ and that $\alpha \in C(\mathbf{R})$ is non-negative.
Let $D$ be a countable dense subset of $S^{N-1}$.
Let $\{v_\varepsilon\}_{0<\varepsilon<1}$ be a family in $H^1(\Omega)$ such that $\Gamma_{v_\varepsilon}\in\mathcal{B}_D$.
If $v_\varepsilon\xrightarrow{sg}\Xi$ and $\Xi\in\mathcal{A}_0$, then
\[
E^{0,{\mathcal{J}}}_\mathrm{sMM}(\Xi,\Omega) \leq \liminf_{\varepsilon\to 0}E^{\varepsilon,{\mathcal{J}}}_\mathrm{sMM} (v_\varepsilon).
\]
\end{theorem}
\begin{remark} \label{INF1}
\begin{enumerate}
\item[(i)] The last inequality is called the liminf inequality.
Here, we assume that the limit $\Xi$ is in $\mathcal{A}_0$, which is a stronger assumption than in the one-dimensional result \cite[Theorem 2.1 (i)]{GOU}, where this condition follows automatically from the finiteness of the right-hand side of the liminf inequality.
\item[(i\hspace{-1pt}i)] In the one-dimensional setting of \cite{GOU}, the limit functional is considered on $\overline{\Omega}$.
Here we only consider it in $\Omega$.
Thus, our definition of $\mathcal{A}_0$ is different from \cite{GOU}.
Under suitable assumptions on the boundary, say $C^1$, we are able to extend the result to $\overline{\Omega}$.
Of course, we may replace $\Omega$ with a flat torus $\mathbf{T}^N=\mathbf{R}^N/\mathbf{Z}^N$.
\item[(i\hspace{-1pt}i\hspace{-1pt}i)] In \cite{GOU}, $\alpha(v)$ is taken to be $v^2$ so that
\[
E^{0,b}_\mathrm{sMM}(\Xi,M) = E^0_\mathrm{sMM}(\Xi,M) + b\left(\left(\min\Xi(a)\right)_+\right)^2,
\]
where $(f)_+$ denotes the positive part defined by $f_+=\max(f,0)$.
However, in \cite{GOU}, this positive-part operation was missing from the definition, which is incorrect there.
\end{enumerate}
\end{remark}
\subsection{Basic properties of a countably $N-1$ rectifiable set} \label{SSBP}
To prove Theorem \ref{INF}, we begin with the basic properties of a countably $N-1$ rectifiable set.
A set $J$ in $\mathbf{R}^N$ is said
to be countably $N-1$ rectifiable if
\[
J \subset J_0 \cup \left( \bigcup^\infty_{j=1} F_j\left(\mathbf{R}^{N-1}\right) \right)
\]
where $\mathcal{H}^{N-1}(J_0)=0$ and $F_j:\mathbf{R}^{N-1}\to\mathbf{R}^N$ are Lipschitz mappings for $j=1,2,\ldots$.
\begin{definition} \label{DEL}
Let $\delta>0$.
A set $K$ in $\mathbf{R}^N$ is $\delta$-flat if there are $V\subset\mathbf{R}^{N-1}$, a $C^1$ function $\psi\colon\mathbf{R}^{N-1}\to\mathbf{R}$, and a rotation $A\in SO(N)$ such that
\[
K = \left\{ \left(x,\psi(x) \right)A \bigm| x \in V \right\}
\]
and $\|\nabla\psi\|_\infty\leq \delta$.
\end{definition}
\begin{lemma} \label{CR}
Let $\Sigma$ be a countably $N-1$ rectifiable set.
For any $\delta>0$, there are a disjoint countable family $\{K_i\}^\infty_{i=1}$ of compact $\delta$-flat sets and an $\mathcal{H}^{N-1}$-measure zero set $N_0$ such that
\[
\Sigma = N_0 \cup \left( \bigcup^\infty_{i=1} K_i \right).
\]
\end{lemma}
\begin{proof}
By \cite[Lemma 11.1]{Sim}, there are a countable family of $C^1$ manifolds $\{M_i\}^\infty_{i=1}$ and a set $N$ with $\mathcal{H}^{N-1}(N)=0$ such that
\[
\Sigma \subset N \cup \left( \bigcup^\infty_{i=1} M_i \right).
\]
Since $M_i$ is a $C^1$ manifold, it can be written as a countable union of $\delta$-flat sets.
Thus, we may assume from the start that each $M_i$ is $\delta$-flat.
We define $\{N_i,\Sigma_i\}^\infty_{i=1}$ inductively by
\begin{align*}
&N_{{1}} := \Sigma \cap M_1, \quad \Sigma_1 := \Sigma \backslash N_1 \\
&N_{i+1} := \Sigma_i \cap M_{i+1}, \quad \Sigma_{i+1} := \Sigma_i \backslash N_{i+1}\ (i=1,{2,\ldots}).
\end{align*}
Here, $N_i$ is $\mathcal{H}^{N-1}$-measurable and $\mathcal{H}^{N-1}(N_i)<\infty$.
Since $\mathcal{H}^{N-1}$ is Borel regular, for any $\eta>0$, there exists a compact set $C\subset N_i$ such that $\mathcal{H}^{N-1}(N_i\backslash C)<\eta$.
Thus, there is a disjoint countable family $\{M_{ij}\}^\infty_{j=1}$ of compact sets, and an $\mathcal{H}^{N-1}$-zero set $N_{i0}$ such that
\[
N_i = N_{i0} \cup \left( \bigcup^\infty_{j=1} M_{ij} \right)\ (i=1,2,\ldots).
\]
Indeed, we define a sequence of compact sets $\{M_{ij}\}$ inductively by
\begin{align*}
& M_{i1} \subset N_i,\\
& M_{i,j+1} \subset N_i \backslash \bigcup^j_{k=1} M_{ik},\ j=1,2,\ldots
\end{align*}
such that $\mathcal{H}^{N-1} \left(N_i\backslash \bigcup^{{j}}_{k=1}M_{i{k}}\right)<1/2^{{j}}$. Then, setting $N_{i0}=N_i\backslash\bigcup^\infty_{j=1} M_{ij}$ yields the desired decomposition of $N_i$.
Setting
\[
N_0 = (N\cap\Sigma) \cup \left( \bigcup^\infty_{i=1} N_{i0} \right)
\]
and renumbering $\{M_{ij}\}$ as $\{K_i\}$, the desired decomposition is obtained.
\end{proof}
\subsection{Proof of liminf inequality} \label{SINF}
\begin{proof}[Proof of Theorem \ref{INF}]
By Lemma \ref{CR}, for $\delta\in(0,1)$, we decompose $\Sigma$ as
\[
\Sigma = N_0 \cup \left( \bigcup^\infty_{i=1} K_i \right),
\]
where $\{K_i\}^\infty_{i=1}$ is a disjoint family of compact $\delta$-flat sets and $\mathcal{H}^{N-1}(N_0)=0$.
We set
\[
\Sigma_m = \bigcup^m_{i=1} K_i
\]
and take a disjoint family of open sets $\{U^m_i\}^m_{i=1}$ such that $K_i\subset U^m_i$.
By definition, $K_i$ is of the form
\[
K_i = \left\{ \left(x,\psi_i(x)\right)A_i \bigm|
x \in V_i \right\}
\]
for some $A_i\in SO(N)$, a compact set $V_i\subset\mathbf{R}^{N-1}$, and $\psi_i\in C^1(\mathbf{R}^{N-1})$ with $\|\nabla\psi_i\|_\infty\leq\delta$.
Since $D$ is dense in $S^{N-1}$, we are able to take $\nu^i\in D$ close to the normal of the hyperplane
\[
P_i = \left\{ (x,0)A_i \bigm|
x \in \mathbf{R}^{N-1} \right\}
\]
for $i=1,\ldots,m$.
By rotating slightly, we may assume that $\nu^i$ is normal to $P_i$ and $\|\nabla\psi_i\|_\infty\leq 2\delta$.
See Figure \ref{FRK}.
\begin{figure}[htb]
\centering
\includegraphics[width=6.5cm]{GOSUfigure_2.png}
\caption{The set $\Sigma_2$} \label{FRK}
\end{figure}
We decompose
\[
E^\varepsilon_\mathrm{sMM}(v_\varepsilon) \geq \sum^m_{i=1} \int_{U^m_i} \left\{ \frac{\varepsilon}{2}|\nabla v_\varepsilon|^2 + \frac{1}{2\varepsilon} F(v_\varepsilon) \right\} \,\mathrm{d}\mathcal{L}^N.
\]
By slicing, we observe that each term on the right-hand side satisfies
\begin{align*}
\int_{U^m_i} &{\left\{ \frac{\varepsilon}{2}|\nabla v_\varepsilon|^2 + \frac{1}{2\varepsilon} F(v_\varepsilon) \right\}} \,\mathrm{d}\mathcal{L}^N\\
&= \int_{(U^m_i)_{\nu^i}} \left( \int_{(U^m_i)^1_{x,\nu^i}} \left\{ \frac{\varepsilon}{2}|\nabla v_\varepsilon|^2_{x,\nu^i} + \frac{1}{2\varepsilon} F(v_{\varepsilon,x,\nu^i}) \right\} \,\mathrm{d} t \right) \,\mathrm{d}\mathcal{L}^{N-1}(x) \\
&\geq \int_{(U^m_i)_{\nu^i}} \left( \int_{(U^m_i)^1_{x,\nu^i}} \left\{ \frac{\varepsilon}{2}\left|\partial_t(v_{\varepsilon,x,\nu^i}) \right|^2 + \frac{1}{2\varepsilon} F(v_{\varepsilon,x,\nu^i}) \right\} \,\mathrm{d} t \right) \,\mathrm{d}\mathcal{L}^{N-1}(x).
\end{align*}
Since $v_\varepsilon\xrightarrow{sg}\Xi$, we see that {$\overline{v_{\varepsilon,x,\nu}}$ converges to $\overline{\Xi_{x,\nu^i}}$ as $\varepsilon \to 0$ in the sense of the graph convergence in a one dimensional setting for $\mathcal{L}^{N-1}$-a.e. $x$.}
Applying the one-dimensional result \cite[Theorem 2.1 (i)]{GOU}, we have
\begin{equation} \label{ONE}
\begin{split}
\liminf_{\varepsilon\to 0} \int_{(U^m_i)^1_{x,\nu^i}} &\left\{ \frac{\varepsilon}{2}\left|\partial_t(v_{\varepsilon,x,\nu^i}) \right|^2 + \frac{1}{2\varepsilon} F(v_{\varepsilon,x,\nu^i}) \right\} \,\mathrm{d} t\\
&\geq \sum^\infty_{k=1} 2 \left\{ G \left(\xi^+_{x,\nu^i}(t_k)\right) + G \left(\xi^-_{x,\nu^i}(t_k)\right) \right\}
\end{split}
\end{equation}
for the set $\{t_k\}^\infty_{k=1}$ of points $t\in(U^m_i)^1_{x,\nu^i}$ at which $\Xi_{x,\nu^i}(t)$ is not a singleton.
This set contains a unique point $t_x$ such that
\[
(K_i)^1_{x,\nu^i} \cap (U^m_i)^1_{x,\nu^i} = \{t_x\},
\]
so the right-hand side of \eqref{ONE} is estimated from below by
\[
2 \left\{ G \left(\xi^+_{x,\nu^i}(t_x)\right) + G \left(\xi^-_{x,\nu^i}(t_x)\right) \right\}.
\]
By Fatou's lemma, we now observe that
\[
\liminf_{\varepsilon\to 0} E^\varepsilon_\mathrm{sMM}(v_\varepsilon) \geq \sum^m_{i=1} \int_{(U^m_i)_{\nu^i}} \widetilde{G} \left(x + t_x \nu^i \right) \,\mathrm{d}\mathcal{L}^{N-1}(x),
\]
where $\widetilde{G}(x)=2\left\{G \left(\xi^+(x)\right) + G \left(\xi^-(x)\right)\right\}$ ($x\in\Sigma$).
By the area formula, we see
\begin{align*}
\int_{K_i} \widetilde{G}(y) {\,\mathrm{d}\mathcal{H}^{N-1}}(y)
&= \int_{V_i} \widetilde{G}\left(\left(x,\psi_i(x)\right) A_i \right) \sqrt{1+\left|\nabla\psi_i(x)\right|^2} \,\mathrm{d} \mathcal{L}^{N-1}(x) \\
&\leq \sqrt{1+(2\delta)^2} \int_{(U^m_i)_{\nu^i}} \widetilde{G}(x + t_x \nu^i) \,\mathrm{d} \mathcal{L}^{N-1}(x).
\end{align*}
Thus
\begin{align*}
\liminf_{\varepsilon\to 0} E^\varepsilon_\mathrm{sMM}(v_\varepsilon)
&\geq \left( 1+(2\delta)^2 \right)^{-1/2} \sum^m_{i=1} \int_{K_i} \widetilde{G}(x) {\,\mathrm{d}\mathcal{H}^{N-1}}(x) \\
&= \left( 1+(2\delta)^2 \right)^{-1/2} \int_{\Sigma_m} \widetilde{G}(x) {\,\mathrm{d}\mathcal{H}^{N-1}}(x).
\end{align*}
Sending $m\to\infty$ and then $\delta\to 0$, we conclude
\[
\liminf_{\varepsilon\to 0} E^\varepsilon_\mathrm{sMM}(v_\varepsilon) \geq \int_\Sigma \widetilde{G}(x) {\,\mathrm{d}\mathcal{H}^{N-1}}(x).
\]
It remains to prove
\[
\liminf_{\varepsilon\to 0} \int_J \alpha(v_\varepsilon)j {\,\mathrm{d}\mathcal{H}^{N-1}}
\geq \int_{J\cap\Sigma} \left( \min_{\xi^-\leq\xi\leq\xi^+} \alpha(\xi) \right)j {\,\mathrm{d}\mathcal{H}^{N-1}}
\]
when $v_\varepsilon\xrightarrow{sg}\Xi$.
It suffices to prove that
\[
\liminf_{\varepsilon\to 0} \int_{J\cap K_i} \alpha(v_\varepsilon) j{\,\mathrm{d}\mathcal{H}^{N-1}}
\geq \int_{J\cap K_i} \left( \min_{\xi^-\leq\xi\leq\xi^+} \alpha(\xi) \right)j {\,\mathrm{d}\mathcal{H}^{N-1}}.
\]
By slicing, we may reduce the problem to a one-dimensional setting.
If the dimension equals one, this follows directly from the definition of graph convergence.
The proof is now complete.
\end{proof}
\section{Construction of recovery sequences} \label{SCRS}
Our goal in this section is to construct what is called a recovery sequence $\{w_\varepsilon\}$ to establish the limsup inequality.
\begin{theorem} \label{SUP}
Let $\Omega$ be a bounded domain in $\mathbf{R}^N$.
Assume that $F$ satisfies (F1) and (F2').
For $\mathcal{J}=(J,j,\alpha)$, assume that $J$ is countably $N-1$ rectifiable in $\Omega$ with a non-negative $\mathcal{H}^{N-1}$-integrable function $j$ on $J$ and that $\alpha\in C(\mathbf{R})$ is non-negative.
For any $\Xi\in\mathcal{A}_0$ with $E^{0,{\mathcal{J}}}_\mathrm{sMM}(\Xi,\Omega)<\infty$, there exists a sequence $\{w_\varepsilon\}\subset H^1(\Omega)$ such that
\begin{align*}
& E^{0,{\mathcal{J}}}_\mathrm{sMM}(\Xi,\Omega) \geq \limsup_{\varepsilon\to 0} E^{\varepsilon,{\mathcal{J}}}_\mathrm{sMM}(w_\varepsilon),\\
& \lim_{\varepsilon\to 0} d_\nu (\Gamma_{w_\varepsilon},\Xi) = 0\quad \text{for all}\quad \nu \in S^{N-1}.
\end{align*}
In particular, $w_\varepsilon\xrightarrow{sg}\Xi$ in $\mathcal{B}_D$ for any $D\subset S^{{N}-1}$ with $\overline{D}=S^{N-1}$.
By Theorem \ref{INF},
\[
E^{0,{\mathcal{J}}}_\mathrm{sMM}(\Xi,\Omega) = \lim_{\varepsilon\to 0} E^{\varepsilon,{\mathcal{J}}}_\mathrm{sMM}(w_\varepsilon).
\]
\end{theorem}
\subsection{Approximation} \label{SSAP}
We begin with various approximations.
\begin{lemma} \label{APP}
Assume the same hypotheses concerning $\Omega$ and $\mathcal{J}=(J,j,\alpha)$ as in Theorem \ref{SUP}. Assume that $F$ satisfies (F1).
Assume $\Xi\in\mathcal{A}_0$ so that its singular set $\Sigma=\left\{ y\in\Omega \bigm| \Xi(y)\neq\{1\} \right\}$ is countably $N-1$ rectifiable.
Let $\delta$ be an arbitrarily fixed positive number.
Then, there exists a sequence $\{\Xi_m\}^\infty_{m=1}\subset\mathcal{A}_0$ such that the following properties hold:
\begin{enumerate}
\item[(i)] $E^{0,{\mathcal{J}}}_\mathrm{sMM}(\Xi,\Omega) \geq \limsup_{m\to\infty}E^{0,{\mathcal{J}}}_\mathrm{sMM}(\Xi_m,\Omega)$,
\item[(i\hspace{-1pt}i)] $\lim_{m\to\infty}d_\nu(\Xi_m,\Xi)=0$ for all $\nu\in S^{N-1}$,
\item[(i\hspace{-1pt}i\hspace{-1pt}i)] $\Xi_m(y)\subset\Xi(y)$ for all $y\in\Omega$,
\item[(i\hspace{-1pt}v)] the singular set $\Sigma_m=\left\{ y\in\Omega \bigm| \Xi_m(y)\neq\{1\} \right\}$ consists of a disjoint finite union of compact $\delta$-flat sets $\{K_j\}^k_{j=1}$,
\item[{(v)}] $\xi^+_m$, $\xi^-_m$ are constant functions on each $K_j$ ($j=1,\ldots,k$), where $\Xi_m(y)=\left[\xi^-_m(y), \xi^+_m(y)\right]\ni1$ on $\Sigma_m$.
Here $k$ may depend on $m$.
\end{enumerate}
\end{lemma}
We recall an elementary fact.
\begin{proposition} \label{SEQ}
Let $h\in C(\mathbf{R})$ be a non-negative function that satisfies $h(1)=0$ and is strictly monotonically increasing for $\sigma\geq 1$.
Let $\{a_j\}^\infty_{j=1}$ be a sequence such that $a_j\geq 1$ ($j=1,2,\ldots$) and
\[
\sum^\infty_{j=1} h(a_j) < \infty.
\]
Then
\[
\lim_{m\to\infty} \sup_{j\geq m} (a_j-1) = 0.
\]
\end{proposition}
\begin{proof}
By monotonicity of $h$ for $\sigma\geq 1$, we observe that
\[
h \left( \sup_{j\geq m} a_j \right)
= \sup_{j\geq m} h(a_j)
\leq \sum_{j\geq m} h(a_j) \to 0
\]
as $m\to\infty$.
This yields the desired result since $h(\sigma)$ is strictly monotone for $\sigma \geq 1$.
\end{proof}
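As a simple illustration (not used later), take $h(a)=(a-1)^2$ and $a_j=1+1/j$; then
\[
\sum^\infty_{j=1} h(a_j)=\sum^\infty_{j=1}\frac{1}{j^2}<\infty
\quad\text{and}\quad
\sup_{j\geq m}(a_j-1)=\frac{1}{m}\to 0\quad (m\to\infty),
\]
in agreement with the proposition.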
We next recall a special case of the co-area formula \cite[12.7]{Sim} for a countably rectifiable set.
\begin{lemma} \label{CAR}
Let $\Sigma$ be a countably $N-1$ rectifiable set in $\Omega$, and let $g$ be an $\mathcal{H}^{N-1}$-measurable function on $\Sigma$.
For $\nu\in S^{N-1}$, let $\pi_\nu$ denote the restriction on $\Sigma$ of the orthogonal projection from $\mathbf{R}^N$ to $\Pi_\nu$.
Then
\[
\int_\Sigma gJ^*\pi_\nu {\,\mathrm{d}\mathcal{H}^{N-1}}
= \int_{\Omega_\nu} \left( \int_{\Sigma^1_{x,\nu}} g_{x,\nu} (t)\,\mathrm{d}\mathcal{H}^0(t) \right) \,\mathrm{d} \mathcal{L}^{N-1}(x).
\]
Here $J^*f$ denotes the Jacobian of a mapping $f$ from $\Sigma$ to $\Pi_\nu$.
\end{lemma}
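In the simplest case where $\Sigma$ is a bounded subset of the hyperplane $\{y_N=0\}$ contained in $\Omega$ and $\nu=e_N$ (a special case recorded only for orientation), we have $J^*\pi_\nu\equiv1$, the slice $\Sigma^1_{x,\nu}$ equals $\{0\}$ for $x\in\Sigma$ and is empty otherwise, so the formula reduces to
\[
\int_\Sigma g {\,\mathrm{d}\mathcal{H}^{N-1}} = \int_{\Sigma} g(x) \,\mathrm{d}\mathcal{L}^{N-1}(x),
\]
that is, to the identification of $\mathcal{H}^{N-1}$ with $\mathcal{L}^{N-1}$ on a hyperplane.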
\begin{proof}[Proof of Lemma \ref{APP}] We divide the proof into two steps.
\noindent
\textit{Step 1.}
We shall construct $\Xi_m$ satisfying (i)--(i\hspace{-1pt}v).
By Lemma \ref{CR}, for the set $\Sigma$ associated with $\Xi$, there is a disjoint family of compact $\delta$-flat sets $\{K_j\}^\infty_{j=1}$ such that $\Sigma=\bigcup^\infty_{j=1}K_j$ up to an $\mathcal{H}^{N-1}$-measure zero set.
By the co-area formula (Lemma \ref{CAR}) and $J^*\pi_\nu\leq 1$, we observe
\begin{equation} \label{ACA}
\begin{split}
\int_{K_j} \widetilde{G}(y) {\,\mathrm{d}\mathcal{H}^{N-1}}(y)
&\geq \int_{K_j} \widetilde{G} J^*\pi_\nu {\,\mathrm{d}\mathcal{H}^{N-1}}\\
&= \int_{\Omega_\nu} \left(\int_{(K_j)^1_{x,\nu}} \widetilde{G}_{x,\nu}(t)\,\mathrm{d}\mathcal{H}^0(t) \right) \,\mathrm{d} \mathcal{L}^{N-1}(x),
\end{split}
\end{equation}
where $\widetilde{G}(y)=2\bigl(G\left(\xi^+(y)\right)+G\left(\xi^-(y)\right)\bigr)$.
Since $E^{0,{\mathcal{J}}}_\mathrm{sMM}(\Xi,\Omega)<\infty$, we see that
\begin{equation} \label{BU}
\sum^\infty_{j=1} \int_{K_j} \widetilde{G} {\,\mathrm{d}\mathcal{H}^{N-1}}(y) < \infty.
\end{equation}
We then take
\begin{equation*}
\Xi_m(y)= \left \{
\begin{array}{cl}
\left[\xi^-(y), \xi^+(y)\right] &,\ y\in\Sigma_m = \bigcup^m_{j=1} K_j \\
\{1\} &,\ \text{otherwise.}
\end{array}
\right.
\end{equation*}
By definition, (i), (i\hspace{-1pt}i\hspace{-1pt}i), and (i\hspace{-1pt}v) are trivially fulfilled.
It remains to prove (i\hspace{-1pt}i).
By \eqref{ACA} and \eqref{BU}, we observe that
\[
\sum^\infty_{j=1}\int_{\Omega_\nu} \left(\int_{(K_j)^1_{x,\nu}} \widetilde{G}_{x,\nu}(t)\,\mathrm{d}\mathcal{H}^0(t) \right) \,\mathrm{d} \mathcal{L}^{N-1}(x) < \infty
\]
for $\Xi$.
Since all integrands are non-negative, the monotone convergence theorem implies that
\[
\sum^\infty_{j=1}\int_{\Omega_\nu} \left(\int_{(K_j)^1_{x,\nu}} \widetilde{G}_{x,\nu}\,\mathrm{d}\mathcal{H}^0 \right) \,\mathrm{d} \mathcal{L}^{N-1}(x)
= \int_{\Omega_\nu} \left( \sum^\infty_{j=1} \int_{(K_j)^1_{x,\nu}} \widetilde{G}_{x,\nu}\,\mathrm{d}\mathcal{H}^0 \right) \,\mathrm{d} \mathcal{L}^{N-1}(x).
\]
Thus
\[
\sum^\infty_{j=1} \int_{(K_j)^1_{x,\nu}}\widetilde{G}_{x,\nu}\,\mathrm{d}\mathcal{H}^0 < \infty
\]
for $\mathcal{L}^{N-1}$-a.e.\ $x\in\Omega_\nu$.
Proposition \ref{SEQ} yields
\[
\lim_{m\to \infty} \sup_{j\geq m} \sup_{t\in(K_j)^1_{x,\nu}} \left(\xi^+_{x,\nu}(t)-1\right) = 0
\]
and, similarly,
\[
\lim_{m\to \infty} \sup_{j\geq m} \sup_{t\in(K_j)^1_{x,\nu}} \left(1-\xi^-_{x,\nu}(t)\right) = 0.
\]
Since
\[
d_H\left(\left(\Xi_m\right)_{x,\nu}, \Xi_{x,\nu}\right)
= \sup_{j\geq m+1} \sup_{t\in(K_j)^1_{x,\nu}} \max\left\{ \left|\xi^+_{x,\nu}(t)-1\right|, \left|\xi^-_{x,\nu}(t)-1\right| \right\},
\]
we conclude that
\[
d_H\left(\left(\Xi_m\right)_{x,\nu}, \Xi_{x,\nu}\right) \to 0
\]
as $m\to\infty$ for a.e.\ $x\in\Omega_\nu$.
Since the integrand of
\[
d_\nu\left(\Xi_m,\Xi\right) = \int_{\Omega_\nu}
\frac{d_H\left(\left(\Xi_m\right)_{x,\nu}, \Xi_{x,\nu}\right)}{1+d_H\left(\left(\Xi_m\right)_{x,\nu}, \Xi_{x,\nu}\right)} \,\mathrm{d} \mathcal{L}^{N-1}(x)
\]
is bounded by $1$, the Lebesgue dominated convergence theorem implies (i\hspace{-1pt}i).
\noindent
\textit{Step 2.}
We next approximate the $\Xi_m$ constructed in Step 1 and construct a sequence $\{\Xi_{m,n}\}^\infty_{n=1}$ satisfying (i)--{(v)} with $\Xi$ replaced by $\Xi_m$.
If such a sequence exists, a diagonal argument yields the desired sequence.
We may assume that
\begin{equation*}
\Xi(y)= \left \{
\begin{array}{cl}
\left[\xi^-(y), \xi^+(y)\right], & y\in\Sigma_m = \bigcup^m_{j=1} K_j \\
\{1\} &,\ \text{otherwise.}
\end{array}
\right.
\end{equation*}
We approximate $\xi^+$ from below.
For a given integer $n$, we set
\[
\xi^+_n(y) := \inf \left\{ \xi^+(z) \Bigm| z \in I^k_n \right\}\ \text{for}\ y\in I^k_n, \quad
I^k_n := \left\{ y \in\Sigma_m \biggm| \frac{k-1}{n} \leq \xi^+(y)-1 < \frac{k}{n} \right\}
\]
for $k=1,2,\ldots$.
Since $I^k_n$ is an $\mathcal{H}^{N-1}$-measurable set, as in the proof of Lemma \ref{CR}, $I^k_n$ can be decomposed into a countable disjoint family of compact sets up to an $\mathcal{H}^{N-1}$-measure zero set.
We approximate $\xi^-$ from above similarly, and we set
\begin{equation*}
\Xi_{m,n}(y)= \left \{
\begin{array}{cl}
\left[\xi^-_n(y), \xi^+_n(y)\right] & ,\ y\in\Sigma_m \\
\{1\} & ,\ \text{otherwise.}
\end{array}
\right.
\end{equation*}
It is easy to see that $\Xi_{m,n}$ satisfies (i\hspace{-1pt}i\hspace{-1pt}i) and (i\hspace{-1pt}v) by replacing $m$ with $n$.
Since $E^0_\mathrm{sMM}(\Xi,\Omega)\geq E^0_\mathrm{sMM}(\Xi_{m,n},\Omega)$ and
\[
\min_{\xi^-_n(y)\leq\xi\leq\xi^+_{{n}}(y)} j(y)\alpha(\xi)
\to \min_{\xi^-(y)\leq\xi\leq\xi^+(y)} j(y)\alpha(\xi) \
\text{as}\ n \to \infty \ \text{for}\ \mathcal{H}^{N-1}\text{-a.e.}\ y
\]
with bound $j(y)\alpha(1)$, the property (i) follows from the Lebesgue dominated convergence theorem.
Since
\[
d_H\left(\left(\Xi_{m,n}\right)_{x,\nu}, \Xi_{x,\nu}\right)
= \sup_{t\in(\Sigma_m)^1_{x,\nu}} \max\left\{ \left|\xi^+_{x,\nu}-\xi^+_{n,x,\nu}\right|, \left|\xi^-_{x,\nu}-\xi^-_{n,x,\nu}\right| \right\} \leq 1/n,
\]
we now conclude (i\hspace{-1pt}i) as discussed at the end of Step 1.
\end{proof}
\subsection{Recovery sequences} \label{SSRC}
In this subsection, we shall prove Theorem \ref{SUP}.
An essential step is constructing a recovery sequence $\{w_\varepsilon\}$ when $\Xi$ has a simple structure, and the basic idea is similar to that of \cite{AT,FL}.
Besides the generalization from $F(z)=(z-1)^2$ to a general $F$ satisfying (F1) and (F2'), our situation is more involved because $\Xi(y)=[0,1]$ for $y\in\Sigma$ in their case, while in our case $\Xi(y)=\left[\xi^-(y),\xi^+(y)\right]$ for a general $\xi^-\leq1\leq\xi^+$.
Moreover, we must show the convergence in $d_\nu$ and handle the $\alpha$-term.
\begin{lemma} \label{REC}
Assume the same hypotheses concerning $\Omega$, $F$, and $\mathcal{J}=(J,j,\alpha)$ as {in} Theorem \ref{SUP}.
For $\Xi\in\mathcal{A}_0$, assume that its singular set $\Sigma=\left\{x\in\Omega \mid \Xi(x)\neq\{1\} \right\}$ consists of a disjoint finite union of compact $\delta$-flat sets $\{K_j\}^k_{j=1}$, and $\xi^-$ and $\xi^+$ are constant functions in each $K_j$ ($j=1,\ldots,k$), where $\Xi(x)=[\xi^-,\xi^+]$ on $\Sigma$.
Then there exists a sequence $\{w_\varepsilon\}\subset H^1(\Omega)$ such that
\begin{align*}
& E^{0,{\mathcal{J}}}_\mathrm{sMM}(\Xi,\Omega) \geq \limsup_{\varepsilon\to 0} E^{\varepsilon,{\mathcal{J}}}_\mathrm{sMM}(w_\varepsilon), \\
& \lim_{\varepsilon\to 0} d_\nu(\Gamma_{w_\varepsilon}, \Xi) = 0
\quad\text{for all}\quad \nu \in S^{N-1}.
\end{align*}
\end{lemma}
This lemma follows from the explicit construction of functions $\{w_\varepsilon\}$ similarly to the standard double-well Modica--Mortola functional.
\begin{proof}
We take a disjoint family of open sets $\{U_j\}^k_{j=1}$ with the property $K_j\subset U_j$.
It suffices to construct a desired sequence $\{w_\varepsilon\}$ so that the support of $w_\varepsilon-1$ is contained in $\bigcup^k_{j=1}U_j$, so we shall construct such $w_\varepsilon$ in each $U_j$.
We may assume $k=1$ and denote $K_1, U_1$ by $K, U$, and $\xi^-,\xi^+$ by $a,b$ ($a\leq1\leq b$), so that
\begin{equation*}
\Xi(y)= \left \{
\begin{array}{cl}
[a, b] & ,\ y\in K, \\
\{1\} & ,\ y \in U \backslash K.
\end{array}
\right.
\end{equation*}
For $c<1$ and $s>0$, let $\psi(s,c)$ be a function determined by
\[
\int^\psi_c \frac{1}{\sqrt{F(z)}} \,\mathrm{d} z = s.
\]
By (F1), this equation is uniquely solvable for all $s\in[0,s_*)$ with
\[
s_* := \int^1_c \frac{1}{\sqrt{F(z)}}\,\mathrm{d} z.
\]
This $\psi(s,c)$ solves the initial value problem
\begin{equation} \label{SR}
\left \{
\begin{array}{l}
\displaystyle{\frac{\mathrm{d}\psi}{\mathrm{d}s}} = \sqrt{F(\psi)}, \quad s \in (0,s_*) \\
\psi(0,c)=c,
\end{array}
\right.
\end{equation}
although this ODE may admit many solutions.
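For the model potential $F(z)=(z-1)^2$ (used only as an illustration), the defining integral can be evaluated explicitly: for $c<1$,
\[
s=\int^\psi_c \frac{\,\mathrm{d} z}{1-z} = \log\frac{1-c}{1-\psi},
\quad\text{hence}\quad
\psi(s,c) = 1-(1-c)e^{-s},\qquad s_*=\infty,
\]
and indeed $\frac{\mathrm{d}\psi}{\mathrm{d}s}=(1-c)e^{-s}=1-\psi=\sqrt{F(\psi)}$.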
For $c>1$, we define $\psi$ analogously by
\[
\int^c_\psi \frac{1}{\sqrt{F(z)}}\,\mathrm{d} z = s
\]
for $s\in(0,s_*)$ with
\[
s_* := \int^c_1 \frac{1}{\sqrt{F(z)}} \,\mathrm{d} z.
\]
In this case, $\psi$ also solves \eqref{SR}.
We consider the even extension of $\psi$ (still denoted by $\psi$) for $s<0$ so that $\psi(s,c)=\psi(-s,c)$.
For the case $c=1$, we set $\psi(s,c)\equiv 1$.
For $a,b$ with $[a,b]\ni 1$, we consider a rescaled function $\psi_\varepsilon(s,\cdot)=\psi(s/\varepsilon,\cdot)$ and then define
\begin{equation*}
\Psi_\varepsilon(s,a,b)= \left \{
\begin{array}{ll}
1 & ,\quad s \leq -2\sqrt{\varepsilon} \\
\alpha_1 s + \beta_1 & , \quad -2\sqrt{\varepsilon} \leq s \leq -\sqrt{\varepsilon}\\
\psi_\varepsilon(-s,a) & ,\quad -\sqrt{\varepsilon} \leq s \leq 0\\
\psi_\varepsilon(s,a) & ,\quad 0 \leq s \leq \sqrt{\varepsilon}\\
\alpha_2 s + \beta_2 & ,\quad \sqrt{\varepsilon} \leq s \leq 2\sqrt{\varepsilon}\\
\psi_\varepsilon(s-3\sqrt{\varepsilon},b) & ,\quad 2\sqrt{\varepsilon} \leq s \leq 4\sqrt{\varepsilon}\\
\alpha_3 s + \beta_3 & ,\quad 4\sqrt{\varepsilon} \leq s \leq 5\sqrt{\varepsilon}\\
1 & ,\quad 5\sqrt{\varepsilon} \leq s
\end{array}
\right.
\end{equation*}
with $\alpha_i,\beta_i\in\mathbf{R}$ ($i=1,2,3$) so that $\Psi_\varepsilon$ is Lipschitz continuous.
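For definiteness (the constants are dictated by continuity at the break points; we record only the first pair), continuity at $s=-2\sqrt{\varepsilon}$ and $s=-\sqrt{\varepsilon}$ gives
\[
\alpha_1=\frac{\psi(1/\sqrt{\varepsilon},a)-1}{\sqrt{\varepsilon}},
\qquad
\beta_1=2\psi(1/\sqrt{\varepsilon},a)-1,
\]
and $\alpha_2,\beta_2,\alpha_3,\beta_3$ are determined analogously from the values of $\psi_\varepsilon(\cdot,a)$ and $\psi_\varepsilon(\cdot,b)$ at the remaining break points.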
\begin{figure}[thbp]
\centering
\includegraphics[width=0.5\linewidth]{multi_kwc_recovery-2.pdf}
\label{fig:psi-graph}
\caption{The graph of $\Psi_\varepsilon(s,a,b)$. Thick lines are the part of the graph of $\psi_\varepsilon(s,a)$ or $\psi_\varepsilon(s,b)$, and other parts are linear.}
\end{figure}
Let $\eta$ be a minimizer of $\alpha$ in $[a,b]$.
We first consider the case when $\eta<1$ so that $a\leq\eta<1$.
In this case, by definition of $\Psi_\varepsilon$, there is a unique $s_0>0$ such that $\Psi_\varepsilon(s_0,a,b)=\eta$.
We then set
\[
\varphi_\varepsilon(s,a,b) = \Psi_\varepsilon(s+s_0,a,b).
\]
For the case $\eta\geq1$, we take the smallest positive $s_0>0$ such that $\Psi_\varepsilon(s_0,a,b)=\eta$.
This $s_0=s_0(\varepsilon)$ is of order $\varepsilon^{3/2}$ as $\varepsilon\to 0$. Since $K$ is a $\delta$-flat surface, after a rotation we may assume that it is contained in the graph of a $C^1$ function $p$, so we can write
\[K = \{(x',p(x')) \mid x' \in V\}.\] We set $A := \{(x',x_N) \mid p(x') \geq x_N \}$ and $B := \{(x',x_N) \mid p(x') < x_N \}.$
Let $\mathrm{sd}(z)$ be the signed distance of $z$ from $K$, i.e.
\[\mathrm{sd}(z) := d({z},A) - d({z},B).\]
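For instance, if the rotation is trivial and $p\equiv 0$ (an assumption made only for this illustration), then $A=\{x_N\leq 0\}$, $B=\{x_N>0\}$, and
\[
\mathrm{sd}(z) = z_N \quad\text{for every}\ z=(z_1,\ldots,z_N),
\]
so $\mathrm{sd}$ is the signed distance to the full graph of $p$ rather than to the compact piece $K$ itself.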
If $\mathrm{sd}(z)$ is non-negative, then we simply write it as $d(z)$. We then take
\[
w_\varepsilon(z) = \varphi_\varepsilon \left(\mathrm{sd}(z),a,b\right).
\]
This is the desired sequence such that the support of $w_\varepsilon-1$ is contained in $U$ for sufficiently small $\varepsilon>0$.
Since $w_\varepsilon$ is Lipschitz continuous, it is clear that $w_\varepsilon\in H^1(\Omega)$.
Since
\[
\nabla w_\varepsilon = (\partial_s \Psi_\varepsilon) \left( \mathrm{sd}(z)+s_0,a,b \right) \nabla \mathrm{sd}(z),
\]
we have {for $|\mathrm{sd}(z)| <\sqrt{\varepsilon} - s_0$,}
\begin{align*}
\nabla w_\varepsilon(z) & = (\partial_s \psi_\varepsilon) \left( \mathrm{sd}(z)+s_0,a \right) \nabla \mathrm{sd}(z) \\
& = \frac{1}{\varepsilon} (\partial_s \psi) \left(\left( \mathrm{sd}(z)+s_0\right)/\varepsilon,a \right) \nabla \mathrm{sd}(z).
\end{align*}
Thus, for $z$ with $ -\sqrt{\varepsilon}+s_0 < \mathrm{sd}(z) <\sqrt{\varepsilon} - s_0 $, we see that
\[
\left| \nabla w_\varepsilon(z) \right|^2
= \frac{1}{\varepsilon^2} \bigl| (\partial_s \psi) \left(\left( \mathrm{sd}(z)+s_0\right)/\varepsilon,a \right)\bigr|^2.
\]
Let $U_\varepsilon$ denote the set
\[
U_\varepsilon = \left\{ z \in \Omega \bigm| -\sqrt{\varepsilon} + s_0 < \mathrm{sd}(z) < \sqrt{\varepsilon} - s_0 \right\}.
\]
Since $s_0$ is of order $\varepsilon^{3/2}$, the closure $\overline{U}_\varepsilon$ converges to $K$ in the sense of Hausdorff distance.
We proceed
\begin{align*}
E^\varepsilon_\mathrm{sMM}&(w_\varepsilon,U_\varepsilon)
= \int_{U_\varepsilon} \left\{ \frac{\varepsilon}{2} |\nabla w_\varepsilon|^2 + \frac{1}{2\varepsilon} F(w_\varepsilon) \right\} \,\mathrm{d}\mathcal{L}^N \\
& = \frac{1}{2\varepsilon} \int_{U_\varepsilon} \bigl| (\partial_s \psi) \left(\left( \mathrm{sd}(z)+s_0\right)/\varepsilon,a \right)\bigr|^2
+ F \bigl(\psi \left(\left( \mathrm{sd}(z)+s_0\right)/\varepsilon,a \right)\bigr) \,\mathrm{d}\mathcal{L}^N(z) \\
& = \frac{1}{\varepsilon} \int_{U_\varepsilon} F \bigl(\psi \left(\left( \mathrm{sd}(z)+s_0\right)/\varepsilon,a \right)\bigr) \,\mathrm{d}\mathcal{L}^N(z)
\end{align*}
by \eqref{SR}.
To simplify the notation, we set
\[
f_\varepsilon(t) = \frac{1}{\varepsilon} F \bigl(\psi \left((t+s_0)/\varepsilon,a \right)\bigr)
\]
and observe that
\[
E^\varepsilon_\mathrm{sMM} (w_\varepsilon,U_\varepsilon)
= \int_{U_\varepsilon} f_\varepsilon \left(\mathrm{sd}(z)\right) \,\mathrm{d}\mathcal{L}^N(z)
= \int^{\beta(\varepsilon)}_{-\beta(\varepsilon)} f_\varepsilon(t) H(t) \,\mathrm{d} t, \quad \beta(\varepsilon) := \sqrt{\varepsilon}-s_0(\varepsilon)
\]
with $H(t):=\mathcal{H}^{N-1}\left(\left\{ z\in U_\varepsilon \bigm| \mathrm{sd}(z)=t \right\}\right)$ by the co-area formula.
We set $A(t):=\mathcal{L}^N\left(\left\{ z\in U_\varepsilon \bigm| |\mathrm{sd}(z)| < t \right\}\right)$ and observe that $A(t)=\int^t_{-t} H(s)ds$ by the co-area formula.
Integrating by parts, we observe that
\begin{align*}
\int^{\beta(\varepsilon)}_{-\beta(\varepsilon)} f_\varepsilon(t) H(t) \,\mathrm{d} t &= \int^{\beta(\varepsilon)}_0 f_\varepsilon(t) H(t) \,\mathrm{d} t + \int^0_{-\beta(\varepsilon)} f_\varepsilon(t) H(t) \,\mathrm{d} t\\
&=\int^{\beta(\varepsilon)}_0 f_\varepsilon(t) \left( H(t) + H(-t) \right) \,\mathrm{d} t \\
&= \int^{\beta(\varepsilon)}_0 f_\varepsilon(t) A'(t) \,\mathrm{d} t \\
&= f_\varepsilon \left( \beta(\varepsilon) \right) A\left(\beta(\varepsilon) \right) - \int^{\beta(\varepsilon)}_0 f'_\varepsilon(t) A(t) \,\mathrm{d} t.
\end{align*}
By the relation of Minkowski contents and area \cite[Theorem 3.2.39]{Fe}, we know that
\[
\lim_{t\downarrow 0} A(t)/2t = \mathcal{H}^{N-1}(K).
\]
In other words,
\[
A(t) = 2 \left( \mathcal{H}^{N-1}(K) + \rho(t) \right)t
\]
with $\rho$ such that $\rho(t)\to 0$ as $t\to 0$.
Thus,
\[
- \int^{\beta(\varepsilon)}_0 f'_\varepsilon(t) A(t) \,\mathrm{d} t
\leq - \int^{\beta(\varepsilon)}_0 f'_\varepsilon(t) 2t\,\mathrm{d} t \left( \mathcal{H}^{N-1}(K) + \max_{0\leq t\leq\beta(\varepsilon)} \rho(t)_+ \right)
\]
since $f'_\varepsilon(t)\leq 0$.
Here we invoke (F2') so that $F'(\sigma)\leq0$ for $\sigma<1$.
We thus observe that
\[
E^\varepsilon_\mathrm{sMM}(w_\varepsilon,U_\varepsilon)
\leq f_\varepsilon \left(\beta(\varepsilon)\right) A\left(\beta(\varepsilon)\right)
- \int^{\beta(\varepsilon)}_0 f'_\varepsilon(t) 2t\,\mathrm{d} t \left( \mathcal{H}^{N-1}(K) + \max_{0\leq t\leq\beta(\varepsilon)} \rho(t)_+ \right).
\]
Integrating by parts yields
\[
- \int^{\beta(\varepsilon)}_0 f'_\varepsilon(t) 2t\,\mathrm{d} t
= 2 \int^{\beta(\varepsilon)}_0 f_\varepsilon(t) \,\mathrm{d} t
-2 f_\varepsilon \left(\beta(\varepsilon)\right) \beta(\varepsilon).
\]
Since $\psi(s)=\psi(s,a)$ solves \eqref{SR}, we see
\begin{align*}
f_\varepsilon(t-s_0) &= \frac{1}{\varepsilon} F \left(\psi(t/\varepsilon) \right) \\
&= \frac{1}{\varepsilon} (\partial_s \psi) (t/\varepsilon)\sqrt{F\left(\psi(t/\varepsilon) \right)} \\
&= -{\frac{\mathrm{d}}{\mathrm{d}t}} \bigl(G\left(\psi(t/\varepsilon) \right)\bigr).
\end{align*}
Thus
\[
\int^{\beta(\varepsilon)}_0 f_\varepsilon(t) \,\mathrm{d} t
= G \left(\psi(s_0/\varepsilon) \right) - G \left(\psi(1/\sqrt{\varepsilon}) \right).
\]
Since $s_0/\varepsilon\to 0$ and $\psi(1/\sqrt{\varepsilon},a)\to 1$ as $\varepsilon\to 0$, we obtain
\[
\lim_{\varepsilon\to 0} \int^{\beta(\varepsilon)}_0 f_\varepsilon(t) \,\mathrm{d} t = G(a).
\]
Combining these manipulations, we obtain that
\begin{align*}
\limsup_{\varepsilon\to 0}& E^\varepsilon_\mathrm{sMM} (w_\varepsilon,U_\varepsilon) \\
&\leq \limsup_{\varepsilon\to 0} f_\varepsilon \left(\beta(\varepsilon)\right)
\left\{ A\left(\beta(\varepsilon)\right)
- 2\left(\mathcal{H}^{N-1} (K)-\max_{0\leq t\leq\beta(\varepsilon)} \left|\rho(t)\right|\right) \beta(\varepsilon) \right\} \\
&\qquad+ 2\mathcal{H}^{N-1} (K) G(a).
\end{align*}
We thus conclude that
\[
\limsup_{\varepsilon\to 0} E^\varepsilon_\mathrm{sMM} (w_\varepsilon,U_\varepsilon) \leq 2 \mathcal{H}^{N-1} (K) G(a)
\]
provided that
\[
\limsup_{\varepsilon\to0} f_\varepsilon \left(\beta(\varepsilon)\right) \beta(\varepsilon) < \infty
\]
since $\left(A(t)-2\mathcal{H}^{N-1}(K)t\right)\bigm/t=2\rho(t)\to0$ as $t\to0$.
This condition follows from the following lemma by setting $\varepsilon^{1/2}=\delta$.
Indeed, we obtain a stronger result
\[
\limsup_{\varepsilon\to0} f_\varepsilon\left(\beta(\varepsilon)\right) \beta(\varepsilon) \bigm/ \varepsilon^{1/2} < \infty.
\]
\begin{lemma} \label{ELPF}
Assume that $F$ satisfies (F1), (F2').
Then, for $c\in\mathbf{R}$,
\[
F \left(\psi(1/\delta,c)\right) \bigm/ \delta^2 \leq (1-c)^2 \
\quad\text{for}\quad \delta > 0.
\]
\end{lemma}
\begin{proof}[Proof of Lemma \ref{ELPF}]
We may assume $c<1$ since the argument for $c>1$ is symmetric and the case $c=1$ is trivial.
We write $\psi(s,c)$ as $\psi(s)$.
By definition and monotonicity (F2') of $F$, we see
\[
\frac{1}{\delta} = \int^{\psi(1/\delta)}_c \frac{1}{\sqrt{F(z)}} \,\mathrm{d} z
\leq \frac{\psi(1/\delta)-c}{\sqrt{F\left(\psi(1/\delta)\right)}}.
\]
Taking the square of both sides, we end up with
\[
F\left(\psi(1/\delta)\right) \bigm/ \delta^2
\leq \left(\psi(1/\delta)-c\right)^2 \leq (1-c)^2.
\]
\end{proof}
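As a quick consistency check with the model potential $F(z)=(z-1)^2$ (for which $\psi(s,c)=1-(1-c)e^{-s}$ when $c<1$), we have
\[
F\left(\psi(1/\delta,c)\right) \bigm/ \delta^2 = (1-c)^2\,\frac{e^{-2/\delta}}{\delta^2} \leq (1-c)^2,
\]
since $t^2e^{-2t}\leq e^{-2}<1$ for all $t>0$.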
Let $V_{\varepsilon}$ denote the set
\begin{align*}
V_{\varepsilon} := \left\{z \in \Omega \mid 2\sqrt{\varepsilon} < d(z) + s_0 < 4\sqrt{\varepsilon} \right\}.
\end{align*}
{We} observe that
\begin{align*}
E^\varepsilon_\mathrm{sMM} (w_\varepsilon,V_\varepsilon) &= \frac{1}{\varepsilon} \int_{V_\varepsilon} F \left(\psi \left(\frac{d(z) + s_0 -3\sqrt{\varepsilon}}{\varepsilon}, b \right) \right) \,\mathrm{d}\mathcal{L}^N(z) \\
&= \frac{1}{\varepsilon} \int_{2\sqrt{\varepsilon}-s_0}^{4\sqrt{\varepsilon}-s_0} F \left(\psi \left(\frac{t + s_0 -3\sqrt{\varepsilon}}{\varepsilon}, b \right) \right) H(t) \,\mathrm{d} t \\
&= \int_{2\sqrt{\varepsilon}-s_0}^{4\sqrt{\varepsilon}-s_0} \tilde{f}_{\varepsilon}(t - 3\sqrt{\varepsilon}) H(t) \,\mathrm{d} t,
\end{align*}
where $\tilde{f}_\varepsilon(t) := \frac{1}{\varepsilon} F \bigl(\psi \left((t+s_0)/\varepsilon,b \right)\bigr) .$ We set $\tilde{A}(t):=\mathcal{L}^N\left(\left\{ z\in V_\varepsilon \bigm| 0\leq d(z) < t \right\}\right)$ and observe that $ \tilde{A}(t)=\int^t_0 H(s)ds$ by the co-area formula. As before, we see
\[
\tilde{A}(t) = \left( \mathcal{H}^{N-1}(K) + \rho(t) \right)t
\]
with $\rho$ such that $\rho(t)\to 0$ as $t\to 0$. We set
\[b(\varepsilon) := \tilde{f}_{\varepsilon}(\sqrt{\varepsilon} - s_0) \tilde{A}(4\sqrt{\varepsilon} - s_0) - \tilde{f}_{\varepsilon}(-\sqrt{\varepsilon} - s_0) \tilde{A}(2\sqrt{\varepsilon} - s_0),\]
and observe that
\begin{align*}
E^\varepsilon_\mathrm{sMM}(w_\varepsilon,V_\varepsilon) &\leq b(\varepsilon) - \int_{2\sqrt{\varepsilon} - s_0}^{4\sqrt{\varepsilon} - s_0} \tilde{f}'_{\varepsilon}(t - 3\sqrt{\varepsilon}) t \,\mathrm{d} t \\
&\qquad\times\left(\mathcal{H}^{N-1}(K) + \max_{2\sqrt{\varepsilon} - s_0 \leq t \leq 4\sqrt{\varepsilon} - s_0} \rho(t)_{+} \right).
\end{align*}
Integration by parts yields
\[-\int_{2\sqrt{\varepsilon} - s_0}^{4\sqrt{\varepsilon} - s_0} \tilde{f}'_{\varepsilon}(t - 3\sqrt{\varepsilon}) t \,\mathrm{d} t = \int_{2\sqrt{\varepsilon} - s_0}^{4\sqrt{\varepsilon} - s_0} \tilde{f}_{\varepsilon} (t - 3\sqrt{\varepsilon}) \,\mathrm{d} t - 2 \sqrt{\varepsilon} \tilde{f}_{\varepsilon}(\beta(\varepsilon)), \]
and we see
\[\int_{2\sqrt{\varepsilon} - s_0}^{4\sqrt{\varepsilon} - s_0} \tilde{f}_{\varepsilon} (t - 3\sqrt{\varepsilon}) \,\mathrm{d} t = 2 \int_{0}^{\beta(\varepsilon)} \tilde{f}_{\varepsilon}(t) \,\mathrm{d} t. \]
As before, we thus conclude that
\[
\limsup_{\varepsilon\to 0} E^\varepsilon_\mathrm{sMM} (w_\varepsilon,V_\varepsilon) \leq 2 \mathcal{H}^{N-1} (K) G(b).
\]
The part corresponding to $\psi(s,b)$ is handled similarly, and the parts where $\Psi_\varepsilon$ is linear vanish as $\varepsilon\to 0$.
So, we conclude
\[
\limsup_{\varepsilon\to 0} E^\varepsilon_\mathrm{sMM} (w_\varepsilon,\Omega) \leq E^0_\mathrm{sMM} (\Xi,\Omega).
\]
The term related to $\alpha$ is independent of $\varepsilon$ because of the choice of $s_0$, which ensures that $w_\varepsilon(x)=\eta$ for $x\in K$.
Since $\mathcal{H}^{N-1}(K)<\infty$, by the co-area formula (Lemma \ref{CAR}), $K^1_{x,\nu}$ is a finite set for $\mathcal{L}^{N-1}$-a.e.\ $x\in\Omega_\nu$.
In the Hausdorff sense, $(S_\varepsilon)^1_{x,\nu}\to K^1_{x,\nu}$ holds, as observed in the following lemma for
\[
S_\varepsilon = \left\{ y\in\mathbf{R}^N \bigm|
d(y,K) = \varepsilon \right\}.
\]
Therefore, we observe that for $\mathcal{L}^{N-1}$-a.e.\ $x\in\Omega_\nu$,
\[
\textstyle \limsup^* w_{\varepsilon,x,\nu}=b, \quad
\liminf_* w_{\varepsilon,x,\nu}=a \ \text{on}\ K^1_{x,\nu}
\]
and outside $K^1_{x,\nu}$, $\limsup^* w_{\varepsilon,x,\nu}=\liminf_* w_{\varepsilon,x,\nu}=1$.
We conclude that $w_{\varepsilon,x,\nu}$ converges to $\Xi_{x,\nu}$ in the graph sense on $\Omega^1_{x,\nu}$, which proves the second assertion.
\end{proof}
\begin{lemma} \label{HAUS}
Let $K$ be a compact set in a bounded open subset $\Omega$ of $\mathbf{R}^N$ and set
\[
S_\varepsilon = \left\{ y \in \Omega \bigm| d(y,K) = \varepsilon \right\}.
\]
For $\nu\in S^{N-1}$, let $x\in\Omega_\nu$ be such that $K^1_{x,\nu}$ is a non-empty finite set.
Then, $(S_\varepsilon)^1_{x,\nu}\to K^1_{x,\nu}$ in Hausdorff distance in $\mathbf{R}$ as $\varepsilon\to0$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{HAUS}]
If $(S_\varepsilon)^1_{x,\nu}$ is not empty, it is clear that
\[
\sup_{y\in(S_\varepsilon)^1_{x,\nu}}
d(y, K^1_{x,\nu}) \leq \varepsilon \to 0
\]
as $\varepsilon\to0$.
It remains to prove that for any $t_0\in K^1_{x,\nu}$, there is a sequence $t_\varepsilon\in(S_\varepsilon)^1_{x,\nu}$ such that $t_\varepsilon\to t_0$ in $\mathbf{R}$.
We set
\[
f(\delta) = d \left(x+\nu(t_0 + \delta), K\right)
\quad\text{for}\quad \delta>0.
\]
Since $t_0$ is isolated and $K$ is compact, we see that $f(\delta)>0$ for sufficiently small $\delta$, say $\delta<\delta_0$.
Moreover, $f(\delta)$ is continuous on $(0,\delta_0)$ since $K$ is compact.
Since $f(\delta)\leq\delta$, $f$ satisfies $f(\delta)\to0$ as $\delta\to0$.
By the intermediate value theorem, for sufficiently small $\varepsilon$, say $\varepsilon\in(0,\varepsilon_0)$, there always exists $\delta(\varepsilon)$ such that $f\left(\delta(\varepsilon)\right)=\varepsilon$, which implies that
\[
t_\varepsilon = t_0 + \delta(\varepsilon) \in (S_\varepsilon)^1_{x,\nu}.
\]
Since $\delta(\varepsilon)\to0$ as $\varepsilon\to0$, this implies $t_\varepsilon\to t_0$.
The proof is now complete.
\end{proof}
\begin{proof}[Proof of Theorem \ref{SUP}]
This follows from Lemma \ref{APP} and Lemma \ref{REC} by a diagonal argument.
\end{proof}
\section{Singular limit of the Kobayashi--Warren--Carter energy} \label{SLKWC}
We first recall the Kobayashi--Warren--Carter energy.
For a given $\alpha\in C(\mathbf{R})$ with $\alpha\geq 0$, we consider the Kobayashi--Warren--Carter energy of the form
\[
E^\varepsilon_\mathrm{KWC}(u,v)
= \int_\Omega \alpha(v) |Du| + E^\varepsilon_\mathrm{sMM}(v)
\]
for $u\in BV(\Omega)$ and $v\in H^1(\Omega)$.
The first term is the weighted total variation of $u$ with weight $w=\alpha(v)$, defined by
\[
\int_\Omega w|Du|
:= \sup \left\{ -\int_\Omega u \operatorname{div}\varphi\, \,\mathrm{d}\mathcal{L}^N \Bigm|
\left|\varphi(z)\right| \leq w(z)\ \text{a.e.}\ z,\
\varphi\in C^1_c(\Omega;\mathbf{R}^N) \right\}
\]
for any non-negative Lebesgue measurable function $w$ on $\Omega$.
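For orientation we note a standard special case (not needed in the sequel): if $u=\chi_E$ is the characteristic function of a set $E$ of finite perimeter in $\Omega$ and $w$ is continuous and non-negative, then
\[
\int_\Omega w|D\chi_E| = \int_{\partial^*E\cap\Omega} w {\,\mathrm{d}\mathcal{H}^{N-1}},
\]
where $\partial^*E$ denotes the reduced boundary of $E$.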
We next define the functional, which turns out to be a singular limit of the Kobayashi--Warren--Carter energy.
For $\Xi\in\mathcal{A}_0(\Omega)$, let $\Sigma$ be its singular set in the sense that
\[
\Sigma = \left\{ z\in\Omega \bigm|
\Xi(z) \neq \{1\} \right\}.
\]
For $u\in BV(\Omega)$, let $J_u$ denote the set of its jump discontinuities.
In other words,
\[
J_u = \left\{ z\in\Omega \backslash \Sigma_0 \bigm|
j(z) := \left| u(z+0\nu) - u(z-0\nu) \right| > 0 \right\}.
\]
Here $\nu$ denotes the approximate normal of $J_u$, and $u(z\pm0\nu)$ denotes the trace of $u$ in the direction of $\pm\nu$.
We consider the triplet $\mathcal{J}(u)={(J_u,j,\alpha)}$ and the corresponding functional $E^{0,{\mathcal{J}(u)}}_\mathrm{sMM}(\Xi,\Omega)$, whose explicit form is
\[
E^{0,{\mathcal{J}(u)}}_\mathrm{sMM}(\Xi,\Omega)
= E^0_\mathrm{sMM}(\Xi,\Omega)
+ \int_{{J_u\cap\Sigma}} j \min_{\xi^-\leq\xi\leq\xi^+} \alpha(\xi)\, {\,\mathrm{d}\mathcal{H}^{N-1}},
\]
where $\Xi(z)=\left[\xi^-(z),\xi^+(z)\right]$ for $z\in\Sigma$.
We then define the limit Kobayashi--Warren--Carter energy:
\[
E^0_\mathrm{KWC}(u,\Xi,\Omega)
= \int_{\Omega\backslash J_u} \alpha(1) |Du|
+ E^{0,\mathcal{J}(u)}_\mathrm{sMM}(\Xi,\Omega),
\]
in which the explicit representation of the second term is
\[
E^{0,\mathcal{J}(u)}_\mathrm{sMM}(\Xi,\Omega)
= E^0_\mathrm{sMM}(\Xi,\Omega)
+ \int_{{J_u\cap\Sigma}} |u^+-u^-|\alpha_0(z)\, {\,\mathrm{d}\mathcal{H}^{N-1}}(z)
\]
with $u^\pm=u(z\pm0\nu)$ and
\[
\alpha_0(z) := \min\left\{ \alpha(\xi) \bigm|
\xi^-(z) \leq \xi \leq \xi^+(z) \right\}.
\]
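For the original Kobayashi--Warren--Carter choice $\alpha(v)=v^2$ (cf.\ Remark \ref{INF1}), this minimum can be computed explicitly: since $\xi^-(z)\leq1\leq\xi^+(z)$,
\[
\alpha_0(z) = \min_{\xi^-(z)\leq\xi\leq\xi^+(z)} \xi^2 = \left(\left(\xi^-(z)\right)_+\right)^2,
\]
which is the quantity appearing in the one-dimensional result of \cite{GOU}.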
Here $u^\pm$ are defined by
\begin{align*}
u^+(x) &:= \inf \left\{ t \in \mathbf{R} \biggm|
\lim_{r\to0} \frac{\mathcal{L}^{{N}}\left(B_r(x)\cap\{u>t\}\right)}{r^N}=0 \right\}, \\
u^-(x) &:= \sup \left\{ t \in \mathbf{R} \biggm|
\lim_{r\to0} \frac{\mathcal{L}^{{N}}\left(B_r(x)\cap\{u<t\}\right)}{r^N}=0 \right\},
\end{align*}
where $B_r(x)$ is the closed ball of radius $r$ centered at $x$ in $\mathbf{R}^N$.
These are the measure-theoretic upper and lower limits of $u$ at $x$.
If $u^+(x)=u^-(x)$, we say that $u$ is approximately continuous.
For more detail, see \cite{Fe}.
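As a one-dimensional illustration (for orientation only), take $u=\chi_{(0,\infty)}$ on $\mathbf{R}$. Then
\[
u^+(0)=1,\qquad u^-(0)=0,\qquad j(0)=\left|u^+(0)-u^-(0)\right|=1,
\]
while at every $x\neq0$ the function $u$ is approximately continuous with $u^+(x)=u^-(x)=u(x)$.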
We are now in a position to state our main results rigorously.
\begin{theorem} \label{GKWC1}
Let $\Omega$ be a bounded domain in $\mathbf{R}^N$.
Assume that $F$ satisfies (F1) and (F2) and that $\alpha\in C(\mathbf{R})$ is non-negative.
\begin{enumerate}
\item[(i)] (liminf inequality) Assume that $\{u_\varepsilon\}_{0<\varepsilon<1}\subset BV(\Omega)$ converges to $u\in BV(\Omega)$ in $L^1$, i.e., $\|u_\varepsilon-u\|_{L^1}\to0$.
Assume that $\{v_\varepsilon\}_{0<\varepsilon<1}\subset H^1(\Omega)$.
If $v_\varepsilon\xrightarrow{sg}\Xi$ and $\Xi\in\mathcal{A}_0$, then
\[
E^0_\mathrm{KWC} (u,\Xi,\Omega)
\leq \liminf_{\varepsilon\to0} E^\varepsilon_\mathrm{KWC} (u_\varepsilon,v_\varepsilon).
\]
\item[(i\hspace{-1pt}i)] (limsup inequality) For any $\Xi\in\mathcal{A}_0$ and $u\in {BV(\Omega)}$, there exists a family of Lipschitz functions $\{w_\varepsilon\}_{0<\varepsilon<1}$ such that
\[
E^0_\mathrm{KWC} (u,\Xi,\Omega)
= \lim_{\varepsilon\to0} E^\varepsilon_\mathrm{KWC} (u,w_\varepsilon).
\]
\end{enumerate}
\end{theorem}
\begin{corollary} \label{GKWC2}
Assume the same hypotheses of Theorem \ref{GKWC1}.
Assume that $f\in L^2(\Omega)$ and $\lambda\geq0$.
Then the results of Theorem \ref{GKWC1} with $E_{\mathrm{KWC}}^0(u,\Xi,\Omega)$ and $E_{\mathrm{KWC}}^\varepsilon(u,v)$ being replaced with
\[
E^0_\mathrm{KWC} (u,\Xi,\Omega) + \frac{\lambda}{2} \int_\Omega|u-f|^2 \,\mathrm{d}\mathcal{L}^N
\quad\text{and}\quad
E^\varepsilon_\mathrm{KWC} (u,v) + \frac{\lambda}{2} \int_\Omega|u-f|^2 \,\mathrm{d}\mathcal{L}^N,
\]
respectively, still hold, provided that $u\in L^2(\Omega)$.
\end{corollary}
\begin{remark} \label{GKWC3}
\begin{enumerate}
\item[(i)] In a one-dimensional case, the liminf inequality here is weaker than \cite[Theorem 2.3 (i)]{GOU} because we assume $u\in BV(\Omega)$, not $u\in BV(\Omega\backslash\Sigma_0)$ with
\[
\Sigma_0 = \left\{ z \in \Sigma \bigm|
\alpha_0(z) = 0 \right\}.
\]
It seems possible to extend our results to this situation, but we do not pursue this here in order to avoid technical complications.
\item[(i\hspace{-1pt}i)] It is clear that Corollary \ref{GKWC2} immediately follows from Theorem \ref{GKWC1} once we admit that $u_\varepsilon\to u$ in $L^1(\Omega)$ implies
\[
\| u-f \|^2_{L^2}
\leq \liminf_{\varepsilon\to0} \| u_\varepsilon-f \|^2_{L^2}.
\]
The last lower semicontinuity holds by Fatou's lemma since $u_{\varepsilon'}\to u$\ $\mathcal{L}^N$-a.e.\ by taking a suitable subsequence.
\end{enumerate}
\end{remark}
\begin{proof}[Proof of Theorem \ref{GKWC1}]
Part (i\hspace{-1pt}i) follows easily from Theorem \ref{SUP}.
Indeed, taking $w_\varepsilon$ in Theorem \ref{SUP} for $\mathcal{J}=\mathcal{J}(u)$, we see that
\[
E^{0,{\mathcal{J}}}_\mathrm{sMM} (\Xi,\Omega)
= \lim_{\varepsilon\to0} E^{\varepsilon,{\mathcal{J}}}_\mathrm{sMM} (w_\varepsilon).
\]
Since
\[
\int_\Omega \alpha(w_\varepsilon)|Du|
= \int_{\Omega\backslash J_u} \alpha(w_\varepsilon)|Du|
+ \int_{J_u} |u^+ - u^-| \alpha(w_\varepsilon){\,\mathrm{d}\mathcal{H}^{N-1}},
\]
it suffices to prove that
\[
\lim_{\varepsilon\to0} \int_{\Omega\backslash J_u} \alpha(w_\varepsilon)|Du|
= \int_{\Omega\backslash J_u} \alpha(1)|Du|.
\]
As in the proof of Theorem \ref{SUP}, by a diagonal argument, we may assume that $w_\varepsilon$ is bounded.
Since, by construction, $w_\varepsilon(z)\to1$ for $z\in\Omega\backslash\Sigma$ with a uniform bound for $\alpha(w_\varepsilon)$ and since
\[
|Du| \left(\Sigma\cap(\Omega\backslash J_u) \right) = 0,
\]
the Lebesgue dominated convergence theorem yields the desired convergence.
It remains to prove (i).
For this purpose, we recall a few properties of the measure $\langle Du,\nu\rangle$ for $u\in BV(\Omega)$, where $Du$ denotes the distributional gradient of $u$ and $\nu\in S^{N-1}$.
The following disintegration lemma is found in \cite[Theorem 3.107]{AFP}.
\begin{lemma} \label{5DI}
For $u\in BV(\Omega)$ and $\nu\in S^{N-1}$,
\[
\left|\langle Du,\nu\rangle\right| = \left(\mathcal{H}^{N-1} \lfloor \Omega_\nu\right) \otimes \left|Du_{x,\nu}\right|.
\]
In other words,
\[
\int_\Omega \varphi \left|\langle Du,\nu\rangle\right|
= \int_{\Omega_\nu} \left\{ \int_{\Omega^1_{x,\nu}} \varphi_{x,\nu} \left|Du_{x,\nu}\right| \right\}{\,\mathrm{d}\mathcal{H}^{N-1}}(x)
\]
for any bounded Borel function $\varphi\colon\Omega\to\mathbf{R}$.
\end{lemma}
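When $u$ is of class $C^1$ (a smooth special case, recorded only to fix ideas), $\langle Du,\nu\rangle=(\partial_\nu u)\,\mathcal{L}^N$ and the disintegration is simply Fubini's theorem along the direction $\nu$:
\[
\int_\Omega \varphi\,|\partial_\nu u| \,\mathrm{d}\mathcal{L}^N
= \int_{\Omega_\nu} \left\{ \int_{\Omega^1_{x,\nu}} \varphi_{x,\nu}(t)\,\bigl|\partial_t u_{x,\nu}(t)\bigr| \,\mathrm{d} t \right\}{\,\mathrm{d}\mathcal{H}^{N-1}}(x).
\]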
We also need a representation of the total variation of a vector-valued measure and its component.
Let $\tau > 0$ and a monotone increasing sequence $(a_j)_{j \in \mathbf{Z}}$ with $a_{j+1} < a_j + \tau$ be given. We consider a division of $\mathbf{R}^N$ into a family of rectangles of the form
\[
R^\tau_{J,(a_j)} = \prod^N_{i=1}[a_{j_i},a_{j_i+1}), \quad
J = (j_1, \ldots, j_N) \in \mathbf{Z}^N.
\]
We say that the division $\{R^\tau_{J,(a_j)}\}_{J \in \mathbf{Z}^N}$ is a $\tau$-rectangular division associated with $(a_j)$.
{Hereafter, we may omit $(a_j)$ and write $\{R_J^\tau\}_{J\in\mathbf{Z}^N}$ for short.}
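For example (only to fix ideas), the choice $a_j=j\tau/2$ gives the uniform grid of half-open cubes
\[
R^\tau_{J,(a_j)}=\prod^N_{i=1}\left[\tfrac{j_i\tau}{2},\tfrac{(j_i+1)\tau}{2}\right), \qquad J\in\mathbf{Z}^N,
\]
of side length $\tau/2$.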
\begin{lemma} \label{5VT}
Let $\mu$ be an $\mathbf{R}^d$-valued finite Radon measure in a domain $\Omega$ in $\mathbf{R}^N$.
Let $\{\tau_k\}$ be a decreasing sequence converging to zero as $k\to\infty$.
Let $\{R^{\tau_k}_J\}_J$ be a fixed $\tau_k$-rectangular division of $\mathbf{R}^N$.
Let $D$ be a dense subset of $S^{N-1}$.
Then
\[
|\mu|(A) = \sup \left\{ \left|\langle \mu,\nu_k \rangle\right|(A) \bigm|
\nu_k : \Omega\to D\ \text{is constant on}\ {R^{\tau_k}_J \cap \Omega,\ J \in \mathbf{Z}^N,}\ k=1,2,\ldots \right\},
\]
where $A$ is a Borel set.
\end{lemma}
We postpone its proof to the end of this section.
We shall prove (i).
We recall the decomposition of $\Sigma$ into a countable disjoint union of $\delta$-flat compact sets $K_i$ up to an $\mathcal{H}^{N-1}$-measure zero set, and take the corresponding ${\nu^i\in D}$ as in the proof of Theorem \ref{INF}.
We use the notation of that proof.
We may assume that $\bigcap^\infty_{m=1}U^m_i=K_i$.
By Lemma \ref{5DI}, we proceed
\begin{align*}
\int_{U^m_i} \alpha(v_\varepsilon)\left|Du_\varepsilon\right|
&\geq \int_{U^m_i} \alpha(v_\varepsilon)\left|\langle Du_\varepsilon, \nu^i\rangle\right|\\
&= \int_{(U^m_i)_{\nu^i}} \left\{ \int_{(U^m_i)^1_{x,\nu^i}} \alpha(v_{\varepsilon,x,\nu^i})\left| Du_{\varepsilon,x,\nu^i} \right| \right\}{\,\mathrm{d}\mathcal{H}^{N-1}}(x).
\end{align*}
By the one-dimensional result \cite[Lemma 5.1]{GOU}, we see that
\begin{align*}
&\liminf_{\varepsilon\to0} \int_{(U^m_i)^1_{x,\nu^i}} \alpha(v_{\varepsilon,x,\nu^i})\left|Du_{\varepsilon,x,\nu^i}\right| \\
&\geq \int_{(U^m_i\backslash\Sigma)^1_{x,\nu^i}} \alpha(1)\left|Du_{x,\nu^i}\right|
+ \sum_{t\in(\Sigma\cap U^m_i)^1_{x,\nu^i}} \left( \min_{\xi^-_{x,\nu^i}\leq\xi\leq\xi^+_{x,\nu^i}} \alpha(\xi) \right)
\left|u^+_{x,\nu^i}-u^-_{x,\nu^i}\right|(t).
\end{align*}
(In \cite[Lemma 5.1]{GOU}, $\alpha(v)$ is taken to be $v^2$, but the proof works for general $\alpha$; moreover, $|\xi^-_i|^2$ there should read $\left((\xi^-_i)_+\right)^2$.)
The last term is bounded from below by
\[
\alpha_0 \left(x + t^i_x \nu^i\right) \left|u^+ - u^-\right| \left(x + t^i_x \nu^i\right)
\]
since $(K^m_i)^1_{x,\nu^i}$ (${\subset \left(\Sigma\cap U^m_i \right)^1_{x,\nu^i} }$) is a singleton $\{t^i_x\}$.
By the area formula, we see
\begin{align}
\int_{K^m_i} &\alpha_0 \left|u^+ - u^-\right|{\,\mathrm{d}\mathcal{H}^{N-1}}\\
&\leq \sqrt{1+(2\delta)^2} \int_{(K^m_i)_{\nu^i}} \alpha_0 \left(x + t^i_x \nu^i\right) \left|u^+ - u^-\right| \left(x + t^i_x \nu^i \right){\,\mathrm{d}\mathcal{H}^{N-1}} (x).
\end{align}
Combining these observations, by Fatou's lemma, we conclude that
\[
\liminf_{\varepsilon\to0} \int_{U^m_i} \alpha\left(v_\varepsilon\right) \left|Du_\varepsilon\right|
\geq \frac{1}{\sqrt{1+(2\delta)^{2}}} \int_{K^m_i} \alpha_0 \left|u^+ - u^-\right|{\,\mathrm{d}\mathcal{H}^{N-1}}.
\]
Adding from $i=1$ to $m$, we conclude that
\[
\liminf_{\varepsilon\to0} \int_{V^m} \alpha\left(v_\varepsilon\right) \left|Du_\varepsilon\right|
\geq \frac{1}{\sqrt{1+(2\delta)^{2}}} \int_{\Sigma_m} \alpha_0 \left|u^+ - u^-\right|{\,\mathrm{d}\mathcal{H}^{N-1}}
\]
for $V^m=\bigcup^m_{i=1}U^m_i$.
For $W^m=\Omega\backslash V^m$, we take $\nu\in D$ and argue in the same way to get
\begin{align*}
\liminf_{\varepsilon\to0}& \int_{W^m} \alpha(v_\varepsilon)\left|Du_\varepsilon\right|\\
&\geq \int_{(W^m)_\nu} \Biggl\{ \int_{(W^m\backslash\Sigma)^1_{x,\nu}} \alpha(1)\left|Du_{x,\nu}\right| \\
&\qquad+ \sum_{t\in(\Sigma\cap W^m)^1_{x,\nu}} \left( \min_{\xi^-_{x,\nu}\leq\xi\leq\xi^+_{x,\nu}} \alpha(\xi) \right)
\left|u^+_{x,\nu}-u^-_{x,\nu}\right|(t) \Biggr\}{\,\mathrm{d}\mathcal{H}^{N-1}}(x) \\
&\geq \alpha(1) \int_{(W^m)_\nu} \left\{ \int_{(W^m\backslash\Sigma)^1_{x,\nu}} \left|Du_{x,\nu}\right| \right\}{\,\mathrm{d}\mathcal{H}^{N-1}}(x) \\
&= \alpha(1) \int_{W^m\backslash\Sigma} \left|\langle Du, \nu \rangle\right|.
\end{align*}
The last equality follows from Lemma \ref{5DI}.
Since $W^m\cap\Sigma_m=\emptyset$, combining the estimate of the integral on $V^m$, we now observe that
\begin{align*}
\liminf_{\varepsilon\to0}& \int_\Omega \alpha(v_\varepsilon)\left|Du_\varepsilon\right|\\
&\geq \liminf_{\varepsilon\to0} \int_{W^m\backslash(\Sigma\backslash\Sigma_m)} \alpha(v_\varepsilon) \left|\langle Du_\varepsilon,\nu \rangle\right|
+ \liminf_{\varepsilon\to0} \int_{V^m} \alpha(v_\varepsilon) \left| Du_\varepsilon \right| \\
&\geq \alpha(1) \int_{W^m\backslash(\Sigma\backslash\Sigma_m)} \left|\langle Du,\nu \rangle\right|
+ \frac{1}{\sqrt{1+(2\delta)^2}} \int_{\Sigma_m} \alpha_0 \left| u^+ - u^- \right|{\,\mathrm{d}\mathcal{H}^{N-1}}.
\end{align*}
Letting $m\to\infty$ yields
\[
\liminf_{\varepsilon\to0} \int_\Omega \alpha(v_\varepsilon)\left|Du_\varepsilon\right|
\geq \alpha(1) \int_{\Omega\backslash\Sigma} \left|\langle Du,\nu \rangle\right|
+ \frac{1}{\sqrt{1+(2\delta)^2}} \int_\Sigma \alpha_0 \left| u^+ - u^- \right|{\,\mathrm{d}\mathcal{H}^{N-1}}
\]
by Fatou's lemma.
Since $\delta>0$ can be taken arbitrarily small, we now conclude that
\[
{\liminf_{\varepsilon\to0}} \int_\Omega \alpha(v_\varepsilon)\left|Du_\varepsilon\right|
\geq \alpha(1) \int_{\Omega\backslash\Sigma} \left|\langle Du,\nu \rangle\right|
+ \int_{\Omega\cap\Sigma} \alpha_0 \left| u^+ - u^- \right|{\,\mathrm{d}\mathcal{H}^{N-1}}.
\]
For any $\nu \in D$, we may replace $\Omega$ with an open set in $\Omega$, for example, $\Omega_0 \cap \Omega$ where $\Omega_0$ is an open rectangle.
Applying the co-area formula (or Fubini's theorem) to the projection $(x_1,\ldots,x_N)\longmapsto x_i$, we have $\mathcal{H}^{N-1}\left(\Sigma\cap\{x_i=q\}\right)=0$ for $\mathcal{L}^1$-a.e.\ $q$, since otherwise we would have $\mathcal{L}^N(\Sigma)>0$.
Thus, for any $\tau>0$, there is a $\tau$-rectangular division $\{R^\tau_J\}_J$ with $\mathcal{H}^{N-1}({\partial R^\tau_J\cap\Sigma})=0$.
Since $\mathcal{H}^{N-1}({\partial R^\tau_J\cap\Sigma})=0$, by dividing $\Omega$ into ${\{\Omega\cap R^\tau_J\}_J}$, we conclude that
\[
{\liminf_{\varepsilon\to0}} \int_\Omega \alpha(v_\varepsilon)\left|Du_\varepsilon\right|
\geq \alpha(1) \int_{\Omega\backslash\Sigma} \left|\left\langle Du,\nu(x) \right\rangle\right|
+ \int_{\Omega\cap\Sigma} \alpha_0 \left| u^+ - u^- \right|{\,\mathrm{d}\mathcal{H}^{N-1}}
\]
where $\nu:\Omega\to D$ is constant on each rectangle.
Applying Lemma \ref{5VT}, we now conclude that
\[
{\liminf_{\varepsilon\to0}} \int_\Omega \alpha(v_\varepsilon)\left|Du_\varepsilon\right|
\geq \alpha(1) \int_{\Omega\backslash\Sigma} |Du|
+ \int_\Omega \alpha_0 \left| u^+ - u^- \right|{\,\mathrm{d}\mathcal{H}^{N-1}}.
\]
Since we already obtained
\[
\liminf_{\varepsilon\to0} {E^\varepsilon_\mathrm{sMM}} (v_\varepsilon)
\geq {E^0_\mathrm{sMM}} (\Xi,\Omega)
\]
by Theorem \ref{INF} and since
\[
E^\varepsilon_\mathrm{KWC} (u,v) = {E^\varepsilon_\mathrm{sMM} (v)}
+ \int_\Omega \alpha(v) |Du|,
\]
the desired liminf inequality follows.
\end{proof}
\begin{proof}[Proof of Lemma \ref{5VT}]
We may assume that $A$ is open since $\mu$ is a Radon measure.
By duality representation,
\[
|\mu|(A) = \sup \left\{ \sum^d_{i=1} \int_A \varphi_i \,\mathrm{d}\mu_i \biggm|
\varphi = (\varphi_1, \ldots, \varphi_d) \in C_c(A),\
\|\varphi\|_{L^\infty} \leq 1 \right\},
\]
where $C_c(A)$ denotes the space of ($\mathbf{R}^d$-valued) continuous functions compactly supported in $A$ and $\|\varphi\|_\infty:=\sup_{x\in\Omega} \left|\varphi(x)\right|$ with the Euclidean norm $|a|=\langle a,a\rangle^{1/2}$ for $a\in\mathbf{R}^d$.
Since $|\mu|(A)<\infty$, by this representation, we see that for any $\delta>0$, there exists $\varphi\in C_c(A)$ with $\|\varphi\|_\infty\leq1$ satisfying
\[
|\mu|(A) \leq \sum^d_{i=1} \int_A \varphi_i \,\mathrm{d}\mu_i + \delta.
\]
Since $\varphi$ is uniformly continuous in $A$ and $D$ is dense, for sufficiently large $k$, there are a $\tau_k$-rectangular division $\{R^{\tau_k}_J\}$ and a map $\nu^\delta_k:\Omega\to D$, constant on each $R^{\tau_k}_J\cap\Omega$, such that
\[
\left| \varphi - \nu^\delta_k c_k \right| < \delta
\quad\text{in}\quad R^{\tau_k}_J \cap \Omega
\]
with constants $0\leq c_k\leq1$ depending on the rectangle $R^{\tau_k}_J$.
This inequality implies that
\begin{align*}
\sum^d_{i=1} \int_A \varphi_i \,\mathrm{d}\mu_i &\leq {\sum_J} {\int_{R^{\tau_k}_J\cap A}} c_k \langle \mu, \nu^\delta_k \rangle + \delta |\mu|(A) \\
&\leq \left| \langle\mu,\nu^\delta_k \rangle\right| (A) +\delta |\mu|(A).
\end{align*}
Thus we obtain that
\[
|\mu|(A) \leq \left| \langle\mu,\nu^\delta_k \rangle\right| (A) + \delta + \delta |\mu|(A).
\]
Hence, by $|\mu|(A)<\infty$ and the arbitrariness of $\delta>0$, we have
\begin{multline*}
|\mu|(A) \leq \sup \left\{ \left| \langle\mu,\nu_k \rangle\right| (A) \bigm|
\nu_k : \Omega \to D,\ \nu_k\ \text{is constant on}\right.\\
\left.\ {R^{\tau_k}_J \cap \Omega,\ J \in \mathbf{Z}^N,}\ k=1,2,\ldots \right\}.
\end{multline*}
The reverse inequality is trivial, so the proof is now complete.
\end{proof}
\section*{Acknowledgments}
The work of the second author was supported by the Program for Leading Graduate Schools, MEXT, Japan.
The work of the first author was partly supported by the Japan Society for the Promotion of Science through the grants KAKENHI No.~19H00639, No.~18H05323, No.~17H01091, and by Arithmer Inc.~and Daikin Industries, Ltd.~through collaborative grants.
The work of the third author was partly supported by the Japan Society for the Promotion of Science through the grants KAKENHI No.~18K13455 {and No.~22K03425}.
| {
"timestamp": "2022-05-31T02:05:22",
"yymm": "2205",
"arxiv_id": "2205.14314",
"language": "en",
"url": "https://arxiv.org/abs/2205.14314",
"abstract": "By introducing a new topology, a representation formula of the Gamma limit of the Kobayashi-Warren-Carter energy is given in a multi-dimensional domain. A key step is to study the Gamma limit of a single-well Modica-Mortola functional. The convergence introduced here is called the sliced graph convergence, which is finer than conventional $L^1$ convergence, and the problem is reduced to a one-dimensional setting by a slicing argument.",
"subjects": "Analysis of PDEs (math.AP)",
"title": "On a singular limit of the Kobayashi--Warren--Carter energy",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9873750496039277,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7095221680593169
} |
https://arxiv.org/abs/1805.02210 | Formal factorization of higher order irregular linear differential operators | We study the problem of decomposition (non-commutative factorization) of linear ordinary differential operators near an irregular singular point. The solution (given in terms of the Newton diagram and the respective characteristic numbers) is known for quite some time, though the proofs are rather involved.We suggest a process of reduction of the non-commutative problem to its commutative analog, the problem of factorization of pseudopolynomials, which is known since Newton invented his method of rotating ruler. It turns out that there is an "automatic translation" which allows to obtain the results for formal factorization in the Weyl algebra from well known results in local analytic geometry.Besides, we draw some (apparently unnoticed) parallelism between the formal factorization of linear operators and formal diagonalization of systems of linear first order differential equations. | \section{Introduction}
The local theory of linear ordinary differential equations exists in two closely related but different flavors. First, one can consider systems of first order linear equations near a singular point. Such systems form an infinite-dimensional space on which several groups of gauge transformations act naturally. Then the classification problem arises: what is the simplest normal form to which a given system can be reduced by a gauge transformation. This theory is well developed; in particular, delicate results explaining the difference between formal and convergent classification (the Stokes phenomenon) were obtained half a century ago.
Another flavor of the theory deals with (scalar) higher order linear differential equations involving only one unknown function. Formally such equations can be reduced to systems of first order equations and vice versa, but the natural group action is lost by such reduction. Instead a notion of Weyl equivalence can be introduced, which makes the classification problem meaningful once again.
The two theories are closely parallel (but clearly different) for the mildest type of singularities, the Fuchsian (regular) ones, as was shown in \cite{shira}. In this paper we discuss the theorem by Bernard Malgrange \cite{malgrange} which is an analogue of a theorem of formal diagonalization of non-resonant irregular singularities \cite{thebook}*{Theorem 20.7}.
We start with a brief summary of the theory of systems of first order equations. For simplicity from the very beginning we concentrate on the formal case, leaving the issue of convergence for remarks.
\subsection{Systems of first order linear ordinary differential equations}
Denote by $\C[[t]]$ the differential ring of formal Taylor series and $\Bbbk=\C[t^{-1}][[t]]$ its quotient differential field of Laurent polynomials with the usual derivation $\d=\frac{\mathrm d}{\mathrm dt}$. A \emph{system of first order linear ordinary differential equations} over $\Bbbk$ is defined by an $n\times n$ matrix $M=\{M_{ij}\}\in\Mat(n,\Bbbk)$ and has the form
\begin{equation*}
\tfrac{\mathrm d}{\mathrm dt} x_i=\sum _{j=1}^n M_{ij}(t)x_j,\qquad i=1,\dots,n.
\end{equation*}
It is more convenient to write this equation in the matrix form with respect to the unknown $n\times n$-matrix function $X$, specifically singling out the order of the pole of the coefficients matrix as follows,
\begin{equation}\label{ls-2}
t^{1+r}\,\tfrac{\mathrm d}{\mathrm dt} X=A(t)X,\quad r\in\Z_+,\ A=A_0+A_1t+A_2t^2+\cdots\in\Mat(n,\C[[t]]).
\end{equation}
The integer $r\ge 0$ is called the \emph{Poincar\'e index} of the system \eqref{ls-2}; if $r=0$, the system is called \emph{Fuchsian}. The leading matrix $A_0$ of the matrix formal Taylor series $A(t)$ is assumed to be nonzero.
The group of \emph{formal gauge transformations} $\GL(n,\C[[t]])$ acts naturally on linear systems of the form \eqref{ls-2} by ``change of variables'': if $H(t)=H_0+H_1t+H_2t^2+\cdots$, $\det H_0\ne 0$ is a formal matrix series, then the transformed system for the new ``unknown'' matrix $Y=H(t)X$ takes the form $t^{1+r}\,\tfrac{\mathrm d}{\mathrm dt} Y=B(t)Y$ with the new matrix coefficient $B(t)=t^{r+1}(\tfrac{\mathrm d}{\mathrm dt}H)H^{-1}+HA(t)H^{-1}$.
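For the reader's convenience we spell out the short computation behind this formula (a direct verification included here only for illustration): differentiating $Y=H(t)X$ and using \eqref{ls-2},
\[
t^{1+r}\,\tfrac{\mathrm d}{\mathrm dt}Y
=t^{1+r}\bigl(\tfrac{\mathrm d}{\mathrm dt}H\bigr)X+H\,t^{1+r}\tfrac{\mathrm d}{\mathrm dt}X
=\Bigl(t^{1+r}\bigl(\tfrac{\mathrm d}{\mathrm dt}H\bigr)H^{-1}+HA(t)H^{-1}\Bigr)Y.
\]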
The natural question is to describe the orbits of this action, in particular, determine what is the ``simplest'' form to which a given system can be reduced by a suitable formal gauge transformation. This question is almost completely settled for Fuchsian systems, including the issue of convergence for holomorphic systems and holomorphic gauge transformations. The question for non-Fuchsian systems is much more subtle, especially the issue of convergence, yet the first step of the formal classification is rather simple.
\begin{Def}
A non-Fuchsian system \eqref{ls-2} is resonant, if among the eigenvalues $\l_1,\dots,\l_n$ of the leading matrix $A_0$ there are two equal ones (i.e., some pair has zero difference). Otherwise (when all eigenvalues are pairwise different) the system is non-resonant, see \cite{thebook}*{\parasymbol 20C}.
\end{Def}
\begin{Rem}
For Fuchsian systems the resonance condition means that some of the eigenvalues differ by a natural number, i.e., $\l_i-\l_j\in\N$ for some $i\ne j$.
\end{Rem}
\begin{Thm}\label{thm:diag}
A non-resonant non-Fuchsian system can be formally diagonalized, i.e., there exists a formal gauge transformation such that the corresponding transform $B(t)$ becomes a diagonal matrix.
\end{Thm}
In other words, in the non-resonant case the system can be decomposed into a Cartesian product of one-dimensional equations. Appearance of resonances (multiple eigenvalues of $A_0$) leads to a more involved formal normal form. The analytic reasons for the divergence (in general) of the diagonalizing gauge transform (the Stokes phenomenon) are also well understood, see \cite{thebook}*{\parasymbol 16 and \parasymbol 20}.
\subsection{Higher order linear operators}\label{sec:operators}
The other flavor of the theory deals with linear equations involving only one unknown (scalar) function $u$, but several derivatives. For simplicity we will consider only the \emph{homogeneous} equations of this type, which can always be written in the form
\begin{equation}\label{lode}
a_0(t)u^{(n)}+a_1(t)u^{(n-1)}+\cdots+a_{n-2}(t)u''+a_{n-1}(t)u'+a_n(t)u=0,
\end{equation}
where $a_0,\dots,a_n\in\Bbbk$ are the coefficients, defined modulo a multiplication by a nonzero Laurent series from $\Bbbk$. In particular, one can assume that the leading coefficient $a_0$ is identically one, or on the contrary, assume that all $a_0,\dots,a_n$ are formal Taylor series (not involving negative powers of $t$). However, it turns out that the smart decision, simplifying many formulations, is to use a different derivation when expanding a linear dependence between derivatives of the unknown function.
The equation \eqref{lode} can be rewritten in the operator form. Denote by $\d\:\Bbbk\to\Bbbk$, $\d\:u\mapsto\tfrac{\mathrm d u}{\mathrm dt}$ the standard derivation, and identify each element $a\in\Bbbk$ with a ``zero order operator'' $u\mapsto au$. Then the left hand side of \eqref{lode} can be interpreted as the result of application of the differential operator
\begin{equation}\label{lodo}
L=\sum_{j=0}^n a_j\d^{n-j},\qquad a_0,\dots, a_n\in\Bbbk,
\end{equation}
to the unknown function $u$ (which may well be in any extension of the field $\Bbbk$). The Leibniz rule implies the commutation law
\begin{equation}\label{leib}
\d\, t^k=t^k\d+kt^{k-1},\qquad k\in\Z.
\end{equation}
Denote by $\eu$ the Euler derivation,
\begin{equation}\label{euler}
\eu=t\,\d=t\,\tfrac{\mathrm d}{\mathrm dt}.
\end{equation}
Then any linear operator of the form \eqref{lodo} can be re-expanded as the sum
\begin{equation}\label{lodo-1}
L=\sum_{j=0}^n b_j(t)\eu^{n-j}, \qquad b_j\in\Bbbk,
\end{equation}
where the coefficients $b_j\in\Bbbk$ as before are defined modulo multiplication by a nonzero element of $\Bbbk$. The $\C$-linear space of such operators will be denoted $\Bbbk[\eu]$. It is a non-commutative algebra with respect to the operation of composition. The commutation law in $\Bbbk[\eu]$ is given by the formula
\begin{equation}\label{comm}
\eu^j t^k=t^k(\eu+k)^j, \qquad k\in\Z,\ j\in\Z_+,
\end{equation}
which looks especially simple compared with the law \eqref{leib} extended on arbitrary monomials.
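As a quick sanity check (a verification added here only for the reader's convenience), the case $j=1$ of \eqref{comm} follows directly from the Leibniz rule: for any $u$,
\[
\eu(t^k u)=t\,\tfrac{\mathrm d}{\mathrm dt}(t^k u)=t\bigl(kt^{k-1}u+t^k u'\bigr)=t^k(\eu+k)u,
\]
and the general case follows by induction on $j$.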
\begin{Def}
A \emph{canonical representation} of an $n$th order linear differential operator is the representation \eqref{lodo-1} in which:
\begin{itemize}
\item all coefficients $b_0,\dots,b_n\in\C[[t]]$ are formal Taylor series, not involving negative powers of $t$, and at least one of them has a nonzero free term, $b_j(0)\ne0$;
\item all coefficients $b_0,\dots,b_n$ appear to the left from the symbols of the iterated Euler derivations $\eu^j$.
\end{itemize}
\end{Def}
Alternatively, any operator in the canonical representation can be expanded as an infinite series of the form
\begin{equation}\label{oper-series}
L=\sum_{j\ge 0}t^j p_j(\eu),\qquad p_j\in\C[\eu],\ \max_j\deg_\eu p_j=n=\ord L.
\end{equation}
\begin{Def}\label{def:Fuchsian}
An operator is Fuchsian, or \emph{regular}, if in any canonical representation the leading coefficient is nonvanishing, $b_0(0)\ne0$. Otherwise it is called \emph{irregular}.
The expansion \eqref{oper-series} corresponds to a Fuchsian operator, if and only if $\deg_\eu p_0=n=\ord L$.
\end{Def}
\begin{Rem}\label{rem:reduction}
A Fuchsian equation \eqref{lodo-1} can be reduced to a Fuchsian system \eqref{ls-2} in the standard way by introducing the formal variables $x_j=\eu^{j-1}u$, $j=1,\dots,n$. The condition of regularity is defined for equations with meromorphic coefficients in terms of the growth rate of their solutions. For scalar equations regularity is equivalent to Fuchsianity.
Conversely, a Fuchsian system can be written in the (matrix) operator form as $(\eu-A)X=0$ which \emph{mutatis mutandis} is Fuchsian in the sense of the above Definition.
\end{Rem}
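To fix ideas, here is a standard example (added only for illustration and not used in what follows). The operator $\eu-\l$, $\l\in\C$, is Fuchsian, and the equation $(\eu-\l)u=0$ has the solution $u=t^\l$. On the other hand, the operator $L=t\eu-1=t^2\d-1$ has the canonical representation with $b_0(t)=t$, $b_1(t)=-1$, so $b_0(0)=0$ and $L$ is irregular; accordingly, the equation $t^2u'=u$ has the solution $u=e^{-1/t}$ with an essential singularity at the origin.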
\subsection{Weyl equivalence}
The (infinite-dimensional $\C$-linear) space of differential operators admits no natural action of a group of gauge transformations that would be large enough (changes of variable of the form $v=h(t)u$ with a formal series $h\in\C[[t]]$ are obviously insufficient for a meaningful classification). Instead one can use the fact that differential operators form a \emph{noncommutative algebra}. The following definition was suggested in \cite{shira} based on the fundamental work by \Ore.~Ore \cite{ore}.
\begin{Def}
Two linear ordinary differential operators $L,M\in\Bbbk[\eu]$ are \emph{Weyl equivalent}, if there exist two \emph{Fuchsian} operators $H,K\in\Bbbk[\eu]$ such that:
\begin{enumerate}
\item $MH=KL$, and
\item $\gcd (H,L)=1$, i.e., there is no nontrivial operator $P\in\Bbbk[\eu]$ such that both $H$ and $L$ are divisible (from the right) by $P$.
\end{enumerate}
\end{Def}
Informally, two operators are Weyl equivalent, if there exists a Fuchsian operator $u\mapsto v=Hu$ which maps in a bijective way solutions of the equation $Lu=0$ to solutions of the equation $Mv=0$. It is not obvious why this is indeed an equivalence relation (in particular, why it is symmetric), yet this can be verified \cite{ore,shira}.
It turns out that the Weyl classification of \emph{Fuchsian} operators is very much similar to that of the gauge equivalence of the Fuchsian systems of linear equations. In particular, see \cite{shira}:
\begin{itemize}
\item in the generic (non-resonant) case a Fuchsian operator is Weyl equivalent to an Euler operator from $\C[\eu]$ (i.e., with constant coefficients);
\item in the resonant case the normal form is a composition of \emph{polynomial} first order operators $\eu-\l_j(t)$, $\l_j\in\C[t]$, with the degrees of the polynomials $\l_j$ depending on the combinatorial structure of resonances;
\item the normal form is Liouville integrable;
\item for a Fuchsian operator $L$ with holomorphic (convergent) coefficients, the normal form and the conjugating operators $H,K$ also have holomorphic coefficients.
\end{itemize}
\subsection{Non-commutative factorization}
The mere possibility of non-commutative factorization of Fuchsian operators into terms of first order is a simple fact. It suffices to note that any Fuchsian equation always has a solution of the form $u(t)=t^\l v(t)$ with $\l\in\C$ and an invertible series $v\in\C[[t]]$ (in the analytic category this solution is generated by an eigenvector of the monodromy operator). Such solution immediately produces a right Fuchsian factor of order 1 for the corresponding operator. The difficult part of \cite{shira} is to reduce \emph{simultaneously} all factors to polynomial forms by a suitable Weyl equivalence.
In this paper we discuss a much simpler question on the \emph{possibility} of the noncommutative factorization of irregular differential operators. To the best of our knowledge, this problem was first addressed by B.~Malgrange, who in 1979 sketched a solution in a preprint published only in 2008 \cite{malgrange}. Soon a different proof based on the valuations theory was published by P. Robba \cite{robba}. In both cases the answer was given in terms of the Newton diagram of the differential operator, yet the proof was essentially noncommutative.
An analogous question in the commutative \emph{algebra of pseudopolynomials} $\C[[t]][\xi]$ was first studied by I.~Newton in 1676 in a letter to H.~Oldenburg that was published only in 1960 according to \cite{arnold,brieskorn}. Newton invented his method of a rotating ruler which today is formalized using the Newton polygon (resp., Newton diagram) to solve this problem.
Even in the commutative case Newton's solution was considerably involved, see \cite{vain-tren} for a modern exposition; an appropriate modification of this proof allows one to treat also the noncommutative case of differential operators, see the excellent textbook \cite{vdp-sing}. However, the modern techniques of singularity theory (blow-ups) allow one to obtain the same results in a much simpler way.
In our paper we develop a formal technique which allows one to transfer \emph{all} results for commutative pseudopolynomials to the noncommutative case of differential operators and outline the similarity between the respective results. In particular, we prove a result that is a direct analogue of Theorem~\ref{thm:diag} (the necessary definitions are introduced below).
\begin{Thm}
Let $L$ be a single-slope differential operator with the rational slope $r=p/q$, $\gcd(p,q)=1$, and assume that the roots $\l_1,\dots,\l_m\in\C$ of the corresponding characteristic polynomial are nonresonant, i.e., pairwise different.
Then the operator $L$ can be formally decomposed as a noncommutative product of $m$ irreducible operators, $L=L_1\cdots L_m$, with the same slope $r$ having the form $L_j=t^p\eu^q-\l_j+\cdots$.
\end{Thm}
Note that if the slope is integer, then $q=1$ and the irreducible factors are of order $1$.
The general factorization statement is given in Theorem~\ref{thm:main} below: its structure is completely analogous to the structure of the classical factorization theorem for pseudopolynomials. We start with a brief recap of the commutative theory in the form most suitable for our purposes.
\subsection{Acknowledgements}
This paper appeared after a thorough rethinking of the thesis of the first author \cite{leanne}. We are grateful to many friends and colleagues who came up with most helpful remarks after hearing conference presentations of the results, especially Jean-Pierre Ramis, Michael Singer, Daniel Bertrand, Gal Binyamini and Dmitry Novikov. The second author is incumbent of the Gershon Kekst Chair of Mathematics.
\section{Pseudopolynomials and their factorization}
\subsection{Pseudopolynomials}
A \emph{pseudopolynomial} is a family of polynomials of degree $\le n=\deg P$ which formally depends on a local parameter $t\in(\C,0)$. The space of pseudopolynomials can be naturally identified with the commutative algebra that is in a sense a cross-breed between the algebra of polynomials and the algebra of formal Taylor series,
\begin{equation}\label{comm-alg}
\^\Cs=\C[\xi]\otimes_\C\C[[t]].
\end{equation}
Each element of this algebra can be expanded into the formal series
\begin{equation}\label{taylor}
P(t,\xi)=\sum_{j=0}^\infty t^j p_j(\xi),\qquad p_j\in\C[\xi],\quad \deg P=\sup_{j}\deg p_j<+\infty
\end{equation}
(note the boundedness of $\deg p_j$) or as a formal double sum
\begin{equation}\label{double}
P(t,\xi)=\sum_{(i,j)\in S}c_{ij}t^j\xi^i,\qquad S\subset \Z^2_+
\end{equation}
The set $S\subset\Z^2_+$ which belongs to the vertical strip $0\le i\le n=\deg P$ is called the \emph{support} of the pseudopolynomial $P$ and denoted by $\supp P$.
\begin{Def}\label{def:NPC}
The Newton polygon $\D_P\subseteq\R_+^2$ of a pseudopolynomial $P\in\^\Cs$ is the minimal closed convex set containing the origin $(0,0)$ and the support $\supp P$, which is invariant by the vertical translation $(i,j)\mapsto (i,j+1)$.
\end{Def}
One can immediately see that the boundary of any Newton polygon consists of two vertical rays over the points $i=0$ and $i=n$ and the graph of a convex piecewise linear function $\chi_P\:[0,n]\to\R_+$, called the \emph{gap function}. This function is non-decreasing, which implies the following obvious conclusion.
\begin{figure}
\centering
\includegraphics[width=0.4\hsize]{NewtonDiag1}\\
\caption{Newton diagram of a pseudopolynomial}\label{fig:ND}
\end{figure}
\begin{Prop}\label{prop:left}
If $(i,j)\in \D_P$ and $0\le i'<i$, then $(i',j)\in\D_P$. \qed
\end{Prop}
\begin{Rem}\label{rem:local}
This definition is an obvious modification of the standard notion of the Newton polygon $\D_f$ for Taylor series $f$ from $\C[[x,y]]$, defined as the minimal closed convex set which contains the support $\supp f$ and is invariant by the shifts $(i,j)\mapsto (i,j+1)$ and $(i,j)\mapsto (i+1,j)$, see \cite{wall}. It suffices to notice that for a pseudopolynomial $P(t,\xi)$ of degree $n$ the Laurent series $f(x,y)=y^n P(x,\frac 1y)$ does not involve negative powers of $y$. The (usual) Newton polygon for $f$ is obtained by reflection of $\D_P$ in the horizontal axis and shift upwards by $n$.
\end{Rem}
The following properties of the gap function immediately follow from its construction:
\begin{enumerate}
\item $\chi$ is defined on $[0,n]$ and $\chi(0)=0$;
\item $\chi$ is convex, monotone non-decreasing and piecewise-linear (more accurately, piecewise-affine);
\item $\chi$ may be non-differentiable at a point $i\in[0,n]$ only if $i$ and $\chi(i)$ are both integer numbers (in which case the point $(i,\chi(i))\in\D_P$ is called a \emph{corner point}, or a \emph{vertex}, of $\D_P$).
\end{enumerate}
\begin{Rem}
The inverse to the gap function is the smallest concave majorant of the \emph{degree function} $j\mapsto \deg p_j$ derived from the expansion \eqref{taylor}.
\end{Rem}
\begin{Def}
The union of all finite edges of the Newton polygon $\D_P$ is called the \emph{Newton diagram} of the pseudopolynomial $P$ and denoted by $\G_P$. Thus the Newton polygon is the epigraph of the gap function.
\end{Def}
\begin{Def}\label{def:admissible}
A closed convex polygon $\D\subset\R^2_+$ which is the epigraph of a convex piecewise-linear function $\chi=\chi_\D$ as above, is called \emph{admissible}. The function $\chi=\chi_\D$ will be called the \emph{gap function} for $\D$.
The collection of different slopes (derivatives) of the affine pieces of the function $\chi$, all of them nonnegative rational numbers, will be called the \emph{Poincar\'e spectrum} $\PS(\D)\subset\Q_+$ of the polygon $\D$.
\end{Def}
\begin{Def}
We call a pseudopolynomial $P$ (resp., its Newton polygon $\D$) a \emph{single-slope} pseudopolynomial (resp., polygon), if its Poincar\'e spectrum consists of a single value, $\PS(P)=\{\rho\}$. The corresponding gap function is linear on some segment $[0,d]$, $\chi_P(i)=\rho i$, $\rho\in\Q_+$. The value $\rho=0$ is not excluded.
\end{Def}
\begin{Ex}\label{ex:FuchsianPP}
By this definition $\PS(P)=\{0\}$ if and only if $\deg P=\max_j \deg p_j=\deg p_0$. We call such (single-slope) pseudopolynomials \emph{Fuchsian}, cf.~with Definition~\ref{def:Fuchsian}.
\end{Ex}
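Two toy examples, chosen here only to fix ideas, may be helpful. For $P(t,\xi)=t\xi+1$ the support is $\{(1,1),(0,0)\}$, the gap function is $\chi_P(i)=i$ on $[0,1]$ and $\PS(P)=\{1\}$: a single-slope non-Fuchsian pseudopolynomial. For $P(t,\xi)=t\xi^2+\xi$ the support is $\{(2,1),(1,0)\}$, the gap function vanishes on $[0,1]$ and equals $i-1$ on $[1,2]$, so $\PS(P)=\{0,1\}$ and $P$ is not single-slope.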
\begin{Rem}
The reason why the collection of slopes is referred to as the Poincar\'e spectrum, is as follows. A linear system \eqref{ls-2} of Poincar\'e rank $r\in\Z_+$ can be written as $t^r\eu X=A(t)X$, and after reduction to a scalar equation as explained in Remark~\ref{rem:reduction}, it will generically produce a single-slope operator with the integer slope $r$.
\end{Rem}
\subsection{Newton polygon of a product}
The key property of the Newton polygon is its ``logarithmic behavior'' with respect to multiplication in $\^\Cs$, which generalizes the geometry of superscripts in the identity $\xi^n\xi^m=\xi^{n+m}$.
\begin{Prop}\label{prop:minksum}
For any $P,Q\in\^\Cs$,
\begin{equation}\label{NPlog}
\D_{PQ}=\D_P+\D_Q,
\end{equation}
where the right hand side is the Minkowski sum $\{u+v\:u\in \D_P,\ v\in \D_Q\}$. \qed
\end{Prop}
For monomials this follows from the identity for their (one-point) supports, $\supp (t^{i+i'}\xi^{j+j'})=\supp (t^i\xi^j)+\supp (t^{i'}\xi^{j'})$. This immediately implies the inclusions
\begin{equation}\label{additive}
\supp(PQ)\subseteq\supp (P)+\supp (Q),\qquad\text{hence}\qquad \D_{PQ}\subseteq\D_P+\D_Q.
\end{equation}
The inclusion for supports can be strict, since a lattice point from $\supp (P)+\supp (Q)$ can be represented in several possible ways as the sum of points from $\supp(P)$ and $\supp(Q)$. A cancellation of different contributions is possible so that the corresponding coefficient of $PQ$ could be zero. The not-so-obvious claim is that the coefficients corresponding to the \emph{corner} points of $\D_P+\D_Q$ cannot vanish because of such cancellation.
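A minimal example of such a cancellation (added purely for illustration): for $P=\xi+t$ and $Q=\xi-t$ one has $PQ=\xi^2-t^2$, so the point $(1,1)\in\supp(P)+\supp(Q)$, which admits the two representations $(1,0)+(0,1)$ and $(0,1)+(1,0)$, does not appear in $\supp(PQ)$.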
\begin{Cor}
$$
\PS(PQ)=\PS(P)\cup \PS(Q).\qed
$$
\end{Cor}
As follows from Proposition~\ref{prop:minksum}, the problem of factorization of a pseudopolynomial $R\in\^\Cs$ reduces (although it is \emph{not equivalent}) to the problem of representing the Newton polygon $\D=\D_R$ as the Minkowski sum of two \emph{admissible} polygons, $\D=\D'+\D''$. The admissibility constraints (nonnegativity, vertices only at the integer points of the lattice, two vertical bounding rays etc.) imply the following two geometrically rather obvious statements.
\begin{Lem}
An admissible \textup(in the sense of Definition~\ref{def:admissible}\textup) polygon is \emph{indecomposable}, i.e., cannot be represented as a Minkowski sum of two nontrivial admissible polygons, if and only if it has a single slope and the non-vertical edge carries no lattice points of $\Z^2_+$ other than its endpoints.
Any admissible polygon can be decomposed into the Minkowski sum of the indecomposable polygons.
\end{Lem}
\begin{proof}
It can be immediately verified that any admissible polygon can be decomposed into the Minkowski sum of the single-slope polygons. The claim of (in)decomposability for the single-slope polygons is essentially a one-dimensional statement about lattice segments in $\Z^1$.
\end{proof}
\subsection{Quasihomogeneous pseudopolynomials}\label{sec:qhg}
Let $w\in\Q_+$ be a nonnegative rational number and $\wgt=\wgt_w\:\Z^2\to\Q$ the weight function which associates to a monomial $t^j\xi^i$ the weight $\wgt(t^j\xi^i)=\wgt(i,j)=j-wi$.
\begin{Def}
A pseudopolynomial is called $w$-quasihomogeneous, if all its monomials have the same $w$-weight $\alpha\in\Q$,
$$P_\alpha=\sum_{(i,j)\:\wgt_w(i,j)=\alpha}c_{ij}t^j\xi^i.$$
One can instantly see that the support of a quasihomogeneous polynomial belongs to a line with the slope $w$ and is finite (i.e., $P_\alpha\in\C[t,\xi]$ is a genuine polynomial).
\end{Def}
A quasihomogeneous polynomial of \emph{weight zero} is essentially a polynomial of a single variable. If $w=p/q$ is an irreducible fraction, then all monomials of weight zero are necessarily powers of the generating monomial $t^p\xi^q$ of weight zero, thus $P_0(t,\xi)=\sigma(t^p\xi^q)$ for some $\sigma=\sigma_P\in\C[\l]$. It always splits into linear factors: if $\l_1,\dots,\l_k$ are the complex roots of $\sigma$, then
$$
P_0(t,\xi)=c\prod_{s=1}^k (\l_s-t^p\xi^q),\qquad c\in\C,\ c\ne0,\quad \sigma_P(\l_s)=0.
$$
An arbitrary quasihomogeneous (pseudo)polynomial $P_\alpha$ of weight $\alpha$ can be represented as a nontrivial monomial of weight $\alpha$ times a quasihomogeneous polynomial of weight zero. To make this representation unique, we will require that this quasihomogeneous polynomial is \emph{without zero roots},
$$
P_\alpha(t,\xi)=ct^j\xi^i\cdot \prod_{s=1}^k (\l_s-t^p\xi^q),\qquad c\prod_s\l_s\ne 0,\quad \sigma_P(\l_s)=0,\ \wgt(i,j)=\alpha.
$$
\begin{Def}\label{def:char-poly}
The univariate polynomial $\sigma=\sigma_P$ introduced by the above construction, is called the \emph{characteristic polynomial} of the quasihomogeneous pseudopolynomial $P$. Its (nonzero) roots are called \emph{characteristic numbers}.
\end{Def}
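For example (an illustration added here, with the specific data chosen only for concreteness), let $w=\tfrac12$, so that $p=1$, $q=2$ and the generating monomial of weight zero is $t\xi^2$. The quasihomogeneous pseudopolynomial $P_0(t,\xi)=2-3t\xi^2+t^2\xi^4=(1-t\xi^2)(2-t\xi^2)$ has weight zero, characteristic polynomial $\sigma_P(\l)=(\l-1)(\l-2)$ and characteristic numbers $1$ and $2$.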
\subsection{Graded algebra of pseudopolynomials}
Let as before $w\in\Q_+$ be a rational weight and $\wgt(\cdot)$ the corresponding weight function. The algebra $\^\Cs$ is naturally \emph{graded} by this weight, i.e., represented as a countable direct sum,
\begin{equation}\label{graded}
\^\Cs=\bigoplus_{\alpha\in\Q}\Cs_\alpha,\qquad \Cs_\alpha=\{P\in\^\Cs :\wgt P=\alpha\}.
\end{equation}
The index $\alpha$ effectively ranges over the subset $\Z_+-w\Z_+\subset\Q$; all other terms $\Cs_\alpha$ are trivial. For any fixed pseudopolynomial the weights of its nonzero quasihomogeneous components form a discrete set, bounded from below. This grading agrees with the structure of the algebra:
\begin{equation}\label{gr-alg}
\Cs_\alpha\cdot\Cs_\beta\subseteq\Cs_{\alpha+\beta},\qquad \forall \alpha,\beta\in\Q.
\end{equation}
Consequently, any pseudopolynomial $P\in\^\Cs$ can be expanded as a series $P=\sum_{\alpha\in\Q}P_\alpha$, which is in general infinite but always has a well-defined $w$-\emph{leading term} $P_*$ of the minimal weight $\alpha_*=\min \wgt\big|_{\D_P}$. It is tempting to use the imperfect notation $P=P_*+\cdots$ to state the corresponding fact.
\subsection{Commutative factorization problem}\label{sec:comm-factorization}
We will focus on the following special form of the factorization problem for pseudopolynomials: given $P\in\^\Cs$ with a Newton polygon $\D=\D_P$ and an admissible decomposition $\D=\D'+\D''$ into the Minkowski sum, construct a factorization $P=QR$ with $\D_Q=\D'$, $\D_R=\D''$. Some additional assumptions will be required.
\begin{Ex}
Assume that $P$ is a Fuchsian pseudopolynomial with pairwise distinct characteristic numbers $\l_1,\dots,\l_d$. Then its Newton polygon decomposes as the Minkowski sum of $d$ identical copies of $[0,1]\times\R_+$, so one can expect that it factors as a product of $d$ linear pseudopolynomials of the form
$P(t,\xi)=c(t)\prod_{s=1}^d (\xi-\boldsymbol \l_s(t))$. This is indeed the case, as follows from the (formal) Implicit Function Theorem: if all roots of $p_0$ are simple, $p_0'(\l_s)\ne 0$, then each root can be expressed as $\xi_s=\boldsymbol\l_s(t)\in\C[[t]]$, $\boldsymbol\l_s(0)=\l_s$.
\end{Ex}
\subsubsection{Factorization in $\C[[x,y]]$}
The factorization problem for pseudopolynomials can be instantly reduced to that for formal series in two variables, as mentioned in Remark~\ref{rem:local}. The factorization problem for such objects is well known, see \cite{wall}. If the series were convergent, then this would be the problem of determining all irreducible branches of the germ of a planar analytic curve $\{f(x,y)=0\}\subset(\C^2,0)$. The answer is determined by the (classical) Newton diagram of the germ $f$, which is the graph of a piecewise affine convex function $\chi_f\:\R^+\to\R^+$ which \emph{decreases} to zero at some point $\le d$, cf.~with Remark~\ref{rem:local}. The slopes of this function are negative; as before, $f$ is called \emph{single-slope} if its Newton diagram consists of a single edge.
\begin{Thm}\label{thm:locA}
A formal series $f\in\C[[x,y]]$ admits factorization into single-slope series $f=f_1\cdots f_m$. \qed
\end{Thm}
With a single-slope series $f$ one can associate its leading part (a quasihomogeneous polynomial), the corresponding characteristic polynomial $\sigma=\sigma_f(\l)$ and its (nonzero) roots $\l_1,\dots,\l_d$, the \emph{characteristic numbers}, exactly as in \secref{sec:qhg}. The only difference is that the weights assigned to $x,y$ are both natural numbers. Obviously, $\sigma_{f_1f_2}=\sigma_{f_1}\sigma_{f_2}\in\C[\l]$.
\begin{Thm}\label{thm:locB}
Assume that the characteristic numbers of a single-slope series $f\in\C[[x,y]]$ form two disjoint groups so that $\sigma_f(\l)=\sigma_1(\l)\sigma_2(\l)$ and $\gcd(\sigma_1,\sigma_2)=1$.
Then $f$ admits factorization $f=f_1f_2$ so that $\sigma_{f_i}=\sigma_i$, $i=1,2$. \qed
\end{Thm}
\begin{Cor}
Any single-slope series can be factored as a product of terms each having a single characteristic number, possibly with nontrivial multiplicity. \qed
\end{Cor}
These theorems immediately imply the following two factorization results for pseudopolynomials.
\begin{Thm}\label{thm:ppA}
Any pseudopolynomial $P\in\^\Cs$ admits factorization into single-slope terms $P=P_1\cdots P_m$.
\end{Thm}
\begin{Thm}\label{thm:ppB}
Assume that the characteristic numbers of a single-slope pseudopolynomial $P\in\^\Cs$ form two disjoint groups so that $\sigma_P$ factors as $\sigma_P(\l)=\sigma_1(\l)\sigma_2(\l)$ with $\gcd(\sigma_1,\sigma_2)=1$.
Then $P$ admits factorization $P=P_1P_2$ so that $\sigma_{P_i}=\sigma_i$, $i=1,2$.
\end{Thm}
\begin{Cor}
Any single-slope pseudopolynomial can be factored as a product of terms each having a single characteristic number, possibly with nontrivial multiplicity.
\end{Cor}
\begin{proof}[Proof by reduction to the local case]
For a pseudopolynomial $P(t,\xi)\in\^\Cs$ of degree $n$ denote $f(x,y)=y^n P(x,1/y)$. Then $f$ is a formal series in $x$ and a polynomial in $y$, that is, an element from $\C[[x,y]]$. Let $f=f_1f_2$ be the factorization of $f$ under the assumptions of Theorem~\ref{thm:locA} (Theorem~\ref{thm:locB} respectively). By the (formal) Weierstrass theorem, one can assume that, modulo an invertible series, the factors $f_i$ are polynomial in $y$ of degrees $n_1,n_2$ respectively, with $n_1+n_2=n$. The invertible series must also be polynomial in $y$ of degree $0$, that is, a formal series from $\C[[x]]$. Setting $P_i(t,\xi)=\xi^{n_i}f_i(t,1/\xi)$ gives the required factorization of $P$.
\end{proof}
\subsubsection{About the proofs}\begin{small}
The modern proof of Theorems~\ref{thm:ppA} and~\ref{thm:ppB} relies on desingularization, a sequence of rational monomial transformations which simplify the curve (or the formal series). These transformations (blow-ups) have the form
\begin{equation}\label{bup}
(x,y)\longmapsto (x,y/x)\qquad\text{or}\qquad (x,y)\longmapsto (x/y,y)
\end{equation}
and act on the support of a series by an affine transformation which allows to extract factors of the form $x^p$ or $y^q$. For instance, if $\D_f$ has a single ``homogeneous edge'' connecting the vertices $(0,p)$ and $(p,0)$ and the corresponding characteristic numbers $\l_1,\dots,\l_p$ are pairwise different, then after a single blow-up one can refer to the implicit function theorem for the proof that $f$ admits factorization into terms corresponding to nonsingular branches,
\begin{equation*}
f(x,y)=\prod_{i=1}^p (x-\boldsymbol \l_i(x)y),\qquad \boldsymbol \l_i\in\C[[x]],\quad \boldsymbol \l_i(0)=\l_i.
\end{equation*}
In case of multiple characteristic values one has to refer to the Weierstrass Preparation theorem instead of the implicit function theorem.
A single-slope series with the single edge connecting $(0,p)$ and $(q,0)$ requires several blow-ups whose number and types are determined by the Euclidean algorithm for the computation of $\gcd(p,q)$. Slightly more delicate considerations are required when $f$ has more than one slope, but the idea remains the same. \par\end{small}
\section{Homological equation and its solvability}
In this section we return to the algebra of pseudopolynomials $\^\Cs=\C[[t]][\xi]$ and attempt to construct factorization in this algebra directly, following Newton's ideas.
\subsection{Formal factorization}\label{sec:formfact}
Consider an admissible polygon $\D\subseteq\R_+^2$ and the weight function $\wgt=\wgt_w\:\Z^2\to\Q$ associated with a rational weight $w\in\Q_+$. Denote
\begin{equation}\label{qhn}
\Cs_\alpha(\D)=\Cs_\alpha\cap\{\supp P\subseteq\D\}.
\end{equation}
Then the property \eqref{gr-alg} can be refined as follows: for any two admissible polygons $\D',\D''\subseteq\R^2_+$ and any $\alpha,\beta\in\Q_+$,
\begin{equation}\label{gr-alg-D}
\Cs_\alpha(\D')\cdot\Cs_\beta(\D'')\subseteq\Cs_{\alpha+\beta}(\D'+\D'').
\end{equation}
Let $P\in\^\Cs$ be a pseudopolynomial expanded into $w$-quasihomogeneous terms as $P=\sum_\gamma P_\gamma$, and assume that $\D=\D_P=\D'+\D''$ is an admissible decomposition of its Newton polygon. A factorization of the form $P=QR$ can be achieved by two formal expansions $Q=\sum_\alpha Q_\alpha$, $R=\sum_\beta R_\beta$, if and only if
\begin{equation}\label{factor-comm}
P_\gamma=\sum_{\alpha+\beta=\gamma}Q_\alpha R_\beta,\qquad Q_\alpha\in\Cs_\alpha(\D'),\ R_\beta\in\Cs_\beta(\D'').
\end{equation}
Denote the leading terms of the three pseudopolynomials by $P_*,Q_*,R_*$ respectively (of weights $\gamma_*=\min \wgt\big|_\D$, $\alpha_*=\min \wgt\big|_{\D'}$, $\beta_*=\min \wgt\big|_{\D''}$) and assume that
\begin{equation}\label{factor-seed}
P_*=Q_*R_*\in\Cs_{\gamma_*}(\D).
\end{equation}
Then \eqref{factor-comm} becomes an infinite \emph{triangular} system of \emph{linear algebraic equations} with respect to the unknown terms $Q_\alpha,R_\beta$ from the corresponding finite-dimensional linear spaces $\Cs_\alpha(\D')$, $\Cs_\beta(\D'')$.
Indeed, each equation can be rewritten as
\begin{equation}\label{homolog}
Q_*R_{\gamma-\alpha_*}+Q_{\gamma-\beta_*}R_*=P_\gamma-\sum_{\alpha+\beta=\gamma,\ \alpha>\alpha_*,\ \beta>\beta_*}Q_\alpha R_\beta.
\end{equation}
The condition on the weights in the right hand side means that it involves only terms $Q_\alpha,R_\beta$ of weights strictly smaller than those of the unknowns $Q_{\gamma-\beta_*}$ (resp., $R_{\gamma-\alpha_*}$). If these terms were already determined recursively from the equations \eqref{homolog} solved for all smaller values of $\gamma$, then the right hand side is known and we can study the solvability of the equation in weight $\gamma$ as well. The equation \eqref{factor-seed} serves as the base for this inductive process.
Solvability of these equations depends on the following data: the weight $w$, the two admissible polygons $\D',\D''$ and the initial quasihomogeneous polynomials $Q_*,R_*$ of the appropriate weights. Denote by $\H$ the linear operator (more precisely, a family (sequence) of linear operators $\H_\gamma$, $\gamma\in\Q_+$)
\begin{equation}\label{hom-op}
\H\:\Cs_{\gamma-\alpha_*}(\D'')\times \Cs_{\gamma-\beta_*}(\D')\to\Cs_\gamma(\D'+\D''),\qquad (U,V)\mapsto Q_*U+R_*V.
\end{equation}
\begin{Def}\label{def:homolog}
The equation(s) $\H(U,V)=W$ is called the \emph{homological equation} associated with the data $\mathscr H=(w,\D',\D'',Q_*,R_*)$. The homological equation is called \emph{solvable}, if each operator $\H_\gamma$ is \emph{surjective} for all $\gamma\ge\gamma_*=\alpha_*+\beta_*$.
\end{Def}
This equation can be considered as the linearization of the nonlinear equation $P=QR$ at the ``point'' $Q_*,R_*$ in the same way as it appears in the theory of local normal forms of vector fields etc., see \cite{thebook}*{\parasymbol 4}. Its solvability very strongly depends on the corresponding data $\mathscr H$, in particular, on the choice of the seed polynomials $Q_*,R_*$.
\subsection{Examples}
We start with the extreme case where $w=0$. It implies that the ``Fuchsian'' part of a pseudopolynomial can be always factored out.
\begin{Ex}\label{ex:horizontal}
Let $w=0$. Then $\wgt=\deg_t$, and the quasihomogeneous components are of the form $P_j=t^j p_j(\xi)$, $j=0,1,2,\dots$. Let $d=\deg p_0<n=\deg P$. Since the Newton diagram of $P$ contains a nontrivial horizontal segment of length $d<n=\deg P$, we have $\D_P=\D'+\D''$, where $\D'$ is the vertical semistrip $[0,d]\times\R_+$. In other words, the gap function $\chi_{\D'}$ vanishes identically on $[0,d]$, and $\chi_{\D''}$ is strictly positive on $(0,n-d]$, i.e., the corresponding Newton diagram has only nonzero slopes. Let $Q_*=P_*=p_0(\xi)$, $R_*=1$. Substituting the expansions
$$
Q=p_0(\xi)+\sum_{j=1}^\infty t^j q_j(\xi), \deg q_j\le d,\quad R=1+\sum_{j=1}^\infty t^j r_j(\xi),\ \deg r_j\le n-d
$$
into the equation $P=QR$, we obtain an infinite series of identities in $\C[\xi]$,
\begin{equation}\label{fuchs-out}
\begin{aligned}
p_0&=p_0,\\
p_1&=q_1+r_1p_0,\\
p_2&=q_2+r_1q_1+r_2p_0,\\
\dots&{\makebox[0.3\columnwidth]{\dotfill}}\\
p_j&=q_j+r_1q_{j-1}+\cdots+r_j p_0,
\end{aligned}
\end{equation}
The initial identity is trivially satisfied. The requirement that the support of $Q$ belongs to $\D'$ means that $\deg q_j\le d$ (the analogous requirement for $R$ is then automatically satisfied).
The homological equation \eqref{fuchs-out} can be inductively solved with respect to $q_j,r_j$ by the division with remainder of the polynomial $p_j-\sum_{k=1}^{j-1}r_kq_{j-k}$ by $p_0$. The remainder term $q_j$ can be guaranteed to be of degree $\le d-1$ (and then it will be uniquely determined), while $r_j$ will be the respective quotient. This gives a direct proof of a particular case of Theorem~\ref{thm:ppA}.
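To make the recursion completely explicit, consider the toy pseudopolynomial $P(t,\xi)=\xi+t(\xi^2+1)$ (a worked example added only for illustration): here $p_0=\xi$, $d=1$, $p_1=\xi^2+1$ and $p_j=0$ for $j\ge2$. Division of $p_1$ by $p_0$ gives $r_1=\xi$, $q_1=1$; at the next step $p_2-r_1q_1=-\xi$, whence $r_2=-1$, $q_2=0$. Thus
$$
Q=\xi+t+O(t^3),\qquad R=1+t\xi-t^2+O(t^3),
$$
and indeed $QR=\xi+t(\xi^2+1)+O(t^3)$.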
\end{Ex}
\subsection{Sylvester map}
As was explained in \secref{sec:qhg}, quasihomogeneous polynomials can be expressed as univariate polynomials in the basic monomial of weight zero. An analog of the homological equation \eqref{homolog} for univariate polynomials looks as follows.
Denote by $\C_n[\l]$ the linear space of polynomials of degree $\le n-1$, so that $\dim_\C\C_n[\l]=n$, and let $q_*,r_*\in\C[\l]$ be polynomials of degrees $n$ and $m$ respectively. Then there is a linear map, called the \emph{Sylvester map},
\begin{equation}\label{sylv}
\boldsymbol S\:\C_m[\l]\times\C_n[\l]\to \C_{m+n}[\l], \qquad (u,v)\longmapsto q_*u+r_*v
\end{equation}
(the matrix of this map in the natural basis is the Sylvester matrix of the two polynomials $q_*,r_*$). It is well known that the Sylvester map is bijective if and only if $\gcd(q_*,r_*)=1$.
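In the simplest case $n=m=1$, $q_*=\l-a$, $r_*=\l-b$ (a small example added here for illustration), $u,v$ are constants and $\boldsymbol S(u,v)=(u+v)\l-(au+bv)$; the determinant of the corresponding $2\times2$ Sylvester matrix equals $b-a$, which is nonzero exactly when $a\ne b$, i.e., when $\gcd(q_*,r_*)=1$.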
However, it is very difficult to apply this result to study the homological equation \eqref{homolog}: the dimensions $\dim\Cs_\alpha(\D)$ depend on the weight $\alpha$ in a rather irregular way. In general, the homological operator $\H_\gamma$ acts between spaces of different dimensions. Thus proving its surjectivity directly is problematic. However, it follows indirectly from the results on factorization of pseudopolynomials established in \secref{sec:comm-factorization}.
\subsection{Solvability of the homological equation}\label{sec:solvability}
Let $\mathscr H=(w,\D',\D'',Q_*,R_*)$ be the data defining the homological operator $\H$.
Consider first the case where one of the polygons, say, $\D'$ is single-slope, $\PS(\D')=\{\rho\}$, and choose the weight $w=\rho$. Then $\alpha_*=0$, and $Q_*$ is a quasihomogeneous polynomial of weight zero with nonzero characteristic roots. If $\rho\notin \PS(\D'')$, then $\beta_*<0$, the weight achieves its minimum at a corner point and the leading term $R_*$ is a (nontrivial) monomial.
\begin{Thm}\label{thm:homA}
If $w=\rho\notin \PS(\D'')$, i.e., the polygons $\D',\D''$ have no common slope, then all homological operators are surjective and the homological equation is solvable in any weight $\gamma$.
\end{Thm}
The second case deals with factorization of the quasihomogeneous polynomials into terms of lower weight. Assume that $\PS(\D')=\PS(\D'')=\{\rho\}$ and the weight is chosen accordingly, $w=\rho$. Then $\alpha_*=\beta_*=0$, and the corresponding characteristic polynomials $\sigma'=\sigma_{Q_*}$ and $\sigma''=\sigma_{R_*}$ are defined as in Definition~\ref{def:char-poly}.
\begin{Thm}\label{thm:homB}
If $\gcd(\sigma',\sigma'')=1$, i.e., the two characteristic polynomials have no common roots, then all homological operators are surjective and the homological equation is solvable in any weight $\gamma$.
\end{Thm}
\begin{proof}[Proof of both theorems]
Consider a pseudopolynomial $P$ with the leading part $P_*=Q_*R_*$ with $Q_*,R_*$ as in, say, Theorem~\ref{thm:homA}. By Theorem~\ref{thm:ppA}, $P$ admits factorization of the form $P=(Q_*+\cdots)(R_*+\dots)$ \emph{regardless} of the higher terms of $P$.
Substituting this factorization, we see that for each weight $\gamma>\gamma_*$ the homological equation \eqref{homolog} admits a solution for \emph{some} right hand side. Yet since the term $P_\gamma$ can be changed \emph{arbitrarily} without affecting reducibility, we conclude that the equation $\H_\gamma(U,V)=W$, see \eqref{hom-op}, is solvable for \emph{any} $W$.
In exactly the same way Theorem~\ref{thm:homB} follows from Theorem~\ref{thm:ppB}.
\end{proof}
\begin{Rem}\label{rem:convergence}
Theorems~\ref{thm:ppA} and~\ref{thm:ppB} describe factorization of the pseudopolynomials both in the formal context (as stated) and in the analytic context. Consider the commutative algebra $\Cs=\C[\xi]\otimes_\C\mathscr O(t)$, cf.~with \eqref{comm-alg}, where $\mathscr O(t)$ is the algebra of germs of analytic functions at $(\C,0)$ which can be identified with the algebra of convergent Taylor series; the corresponding objects are called \emph{analytic pseudopolynomials}. Then each analytic pseudopolynomial $P\in\Cs$ can be factored as a product of two analytic pseudopolynomials $Q,R\in\Cs$. Moreover, among the (many) formal solutions $(Q,R)$ constructed using the homological equations, one can always find a convergent solution.
\end{Rem}
\section{Weyl algebra and factorization of differential operators}
\subsection{Weyl algebra}
Motivated by the arguments from \secref{sec:operators} on various representations of linear ordinary differential operators, we introduce the (formal) Weyl algebra $\W$ as the algebra of formal series\footnote{The classical Weyl algebra is \emph{generated} by two symbols with the same commutation relation, so consists of noncommutative \emph{polynomials} in these variables.} in the two non-commuting variables $t,\eu$ related by the commutation identity \eqref{comm}; these series are in fact \emph{polynomial} in $\eu$.
Using the commutation rule, any element $L\in\W$ can be reduced to the infinite formal sum
\begin{equation}\label{formal-oper}
L=L(t,\eu)=\sum_{(i,j)\in S}c_{ij} t^j\eu^i,\qquad S=\supp L\subset \{0,\dots,n\}\times\Z_+,\ c_{ij}\in\C\ssm\{0\},
\end{equation}
where all powers of $t$ always occur to the left from powers of $\eu$ (the canonical representation). The integer $n=\ord L$ is the order of the operator $L$, and $S$ is called its support, $S=\supp (L)$. The Newton diagram $\D_L$ is obtained from the support in exactly the same way as in the commutative case (convex hull and invariance by translations).
Because of the non-commutativity of $\W$, in general $\supp (LM)\not\subseteq \supp (L)+\supp (M)$. However, the identity \eqref{comm} implies that
\begin{equation}\label{e-decrease}
t^j\eu^i\cdot t^{j'}\eu^{i'}=t^{j+j'}\eu^{i+i'}+\sum_{k<i+i'}c_{k}\,t^{j+j'}\eu^k,\qquad c_k\in\C.
\end{equation}
This together with Proposition~\ref{prop:left} proves that
\begin{equation}\label{minksum-W}
\forall L,M\in\W\qquad \D_{LM}=\D_L+\D_M
\end{equation}
(cf.~with Proposition~\ref{prop:minksum}).
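A one-line example (added only for illustration) shows both effects at once: for $L=\eu$ and $M=t\eu+1$ one computes $LM=\eu t\eu+\eu=t(\eu+1)\eu+\eu=t\eu^2+t\eu+\eu$. The point $(1,1)$ (the term $t\eu$) belongs to $\supp(LM)$ but not to $\supp(L)+\supp(M)=\{(2,1),(1,0)\}$; nevertheless it lies in $\D_L+\D_M$, in agreement with \eqref{minksum-W}.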
\begin{Def}
For any $L\in\W$ with the canonical representation \eqref{formal-oper} the pseudopolynomial $P=P(t,\xi)=\sum_{\supp L}c_{ij}t^j\xi^i$ with the same coefficients $c_{ij}$ will be called\footnote{The classical notion of the symbol of a differential operator collects only the terms involving the highest order derivatives.} the \emph{pseudosymbol} of $L$ and denoted by $\PP_L$.
Conversely, for a pseudopolynomial $P=P(t,\xi)=\sum c_{ij}t^j\xi^i \in\^\Cs$ we will denote $P(t,\eu)\in\W$ the result of substitution of $\eu$ instead of $\xi$, $L=\sum_{i,j}c_{ij}t^j\eu^i$.
\end{Def}
Needless to say, the pseudosymbol is by no means functorial: in general $\PP_{LM}\ne\PP_L\PP_M$ and $P(t,\eu)Q(t,\eu)\ne (PQ)(t,\eu)$.
The correspondence $\W\to\^\Cs$, $L\mapsto\PP_L$ allows one to associate with operators from $\W$ all notions that were introduced for the pseudopolynomials. Thus we define Fuchsian operators, single-slope operators, the Poincar\'e spectrum, etc. Obviously, an operator is Fuchsian in the sense of Definition~\ref{def:Fuchsian} if and only if its pseudosymbol is a Fuchsian pseudopolynomial.
\subsection{Filtration of the Weyl algebra}
Let $w\in\Q_+$ be a rational weight and $\wgt_w(\cdot)$ the corresponding weight function. However (unfortunately), since $\PP_{LM}\ne \PP_L\PP_M$, we do not have a grading of $\W$ by this weight, only a \emph{filtration}.
Recall that each grading of an algebra, in particular, the grading $\Cs(\D)=\bigoplus_\alpha\Cs_\alpha(\D)$, canonically defines a filtration by subspaces
$$
\U_\alpha(\D)=\bigoplus_{\gamma\ge \alpha}\Cs_\gamma(\D),\qquad \alpha,\gamma\in\Q.
$$
This filtration is monotone decreasing, $\U_\alpha(\D)\subseteq\U_\beta(\D)$ if $\alpha\ge\beta$, and satisfies the condition $\U_\alpha(\D)\cdot\U_\beta(\D)\subseteq\U_{\alpha+\beta}(\D)$. Conversely, the grading can be restored from the filtration as follows,
\begin{equation}\label{filtograd}
\Cs_\alpha(\D)=\U_\alpha(\D)/\U_\alpha^+(\D), \qquad\text{where}\qquad \U_\alpha^+(\D)=\bigcup_{\gamma>\alpha}\U_\gamma(\D).
\end{equation}
\begin{Def}
Let $\alpha\in\Q_+$ be a rational number and $\D$ an admissible polygon. We define $\W_\alpha(\D)$ as the subspace
\begin{equation}\label{filter}
\W_\alpha(\D)=\{L\in\W: \PP_L\in\U_\alpha(\D)\}.
\end{equation}
In other words, $\W_\alpha(\D)$ denotes the $\C$-space of operators from $\W$ whose pseudosymbol contains only terms of weight $\alpha$ and higher.
\end{Def}
By definition, $\W_\alpha(\D)\subseteq\W_\beta(\D)$ if $\alpha\ge\beta$, so the spaces $\W_\alpha(\D)$ form a decreasing filtration of $\W(\D)$. This filtration agrees with the composition in $\W$ in the sense that
\begin{equation}\label{filter-mult}
\W_\alpha(\D)\cdot\W_\beta(\D)\subseteq\W_{\alpha+\beta}(\D)\qquad \forall \alpha,\beta\in\Q,
\end{equation}
cf.~with \eqref{gr-alg}.
Indeed, after reducing the composition of operators $L,M$ of weights $\alpha,\beta$ respectively to the canonical representation, where all powers of $t$ occur to the left of all powers of $\eu$, we affect only terms of weight strictly greater than $\alpha+\beta$, as follows from \eqref{e-decrease} (recall that $\eu$ has weight $-w$).
Recall that for any choice of the weight we used \emph{all} rational numbers for labeling in the graded algebra $\^\Cs=\bigoplus_{\alpha\in\Q}\Cs_\alpha$: the homogeneous spaces $\Cs_\alpha$ could be nonzero only for countably many values contained in an arithmetic progression (depending on $w$). In the same way the decreasing filtration of $\W$ by $\W_\alpha$ has ``jumps'' only at these values.
\begin{Prop}
For any rational $\alpha\in\Q$,
\begin{equation}\label{quotient}
\W_\alpha(\D)/\W^+_\alpha(\D)=\Cs_\alpha(\D),\qquad\text{where}\quad
\W_\alpha^+(\D)=\bigcup_{\gamma>\alpha}\W_{\gamma}(\D).
\end{equation}
\end{Prop}
\begin{proof}
This follows from \eqref{filtograd} and the definition of the subspaces $\W_\alpha^+(\D)$.
\end{proof}
\subsection{Factorization in the Weyl algebra}
Assume that $L\in\W$, $\ord L=n$ and $\D_L=\D'+\D''$. We look for conditions guaranteeing that $L$ can be decomposed as $L=MN$ with $M,N\in\W$ and $\D_M=\D'$, $\D_N=\D''$.
Choose a weight $w\in\Q$ and expand $L$ as the series $L=\sum_\gamma P_\gamma(t,\eu)$, where $P_\gamma$ are the corresponding quasihomogeneous components of the pseudopolynomial $P=\PP_L\in\^\Cs(\D)$.
We will look for the factorization in the form $L=MN$, defined by indeterminate pseudopolynomials $Q=\PP_M\in\Cs(\D')$, $R=\PP_N\in\Cs(\D'')$, constructing them inductively and trying to mimic the formal arguments from \secref{sec:formfact}. All notations will be kept as similar as possible to the commutative case. We assume that both $Q,R$ are expanded as sums of quasihomogeneous components $Q=\sum Q_\alpha$, $R=\sum R_\beta$.
The leading quasihomogeneous terms $Q_*,R_*$ of the minimal weights $\alpha_*,\beta_*$ respectively, must yield factorization of the leading term $P_*$. Fix them and consider the equation $\PP_L=\PP(MN)$. Since $\W$ is non-commutative, the right hand side is not equal to $\PP_M\PP_N$, but for any $\alpha,\beta$ from \eqref{e-decrease} it follows that
\begin{equation}\label{pssymb-weight}
Q_\alpha(t,\eu)R_\beta(t,\eu)=(Q_\alpha R_\beta)(t,\eu)\bmod \W_\gamma,\qquad \gamma=\alpha+\beta,
\end{equation}
that is, after reducing the composition of operators to the canonical form, the result will have the same leading terms of order $\gamma=\alpha+\beta$ as if the algebra were commutative.
This means that the pseudopolynomials $Q_\alpha$, $R_\beta$ can be inductively defined from the infinite ``triangular'' system of equations of the form
\begin{equation}\label{triang-w}
P_\gamma=\sum_{\alpha+\beta=\gamma}Q_\alpha R_\beta + S_\gamma, \qquad Q_\alpha\in\Cs_\alpha(\D'),\ R_\beta\in\Cs_\beta(\D''),
\end{equation}
cf.~with \eqref{factor-comm}, where $S_\gamma\in\Cs_\gamma(\D)$ is the collection of terms accumulated from re-expansion of the products $Q_{\alpha'}R_{\beta'}$ with $\alpha'+\beta'<\gamma$, which were already found by the induction hypothesis.
The equations \eqref{triang-w} have the same form as the equations \eqref{factor-comm}, and their solvability depends only on the properties of $Q_*,R_*$ and the Newton diagrams $\D',\D''$ as described in \secref{sec:solvability}. In particular, Theorems~\ref{thm:homA} and~\ref{thm:homB} imply the following results.
\begin{Thm}\label{thm:operA}
If $\D_L=\D'+\D''$ and the admissible polygons $\D',\D''$ have no common slope, then any operator $L\in\W(\D)$ admits a formal decomposition $L=MN$ with $M\in\W(\D')$, $N\in\W(\D'')$.
\end{Thm}
\begin{Thm}\label{thm:operB}
If $\D$ is a single-slope admissible polygon, $L\in\W(\D)$ has a characteristic polynomial $\sigma=\sigma_L\in\C[\l]$, then for any factorization $\sigma=\sigma'\sigma''$ with mutually prime polynomials $\sigma',\sigma''$ one can find a formal factorization $L=MN$ by two single-slope operators such that $\sigma_M=\sigma'$, $\sigma_N=\sigma''$.
\end{Thm}
\begin{proof}[Proof of both Theorems]
Each equation in the infinite series \eqref{triang-w} is of the form \eqref{homolog} with the only difference being an extra term $S_\gamma$ coming from the preceding equation. Its solvability follows from the surjectivity of the corresponding homological operator associated with the data $\mathscr H=(w,\D',\D'',Q_*,R_*)$.
\end{proof}
As an immediate corollary to these two theorems, we have the following result on reducibility.
\begin{Def}
A differential operator $L\in\W$ is called \emph{monic}, if it has a single slope, and the corresponding characteristic polynomial $\sigma_L$ has a single root.
\end{Def}
\begin{Thm}\label{thm:main}
Any differential operator $L\in\W$ admits a decomposition into the non-commutative product of monic operators.
\end{Thm}
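As a sanity check (the following example is constructed by hand and is not taken from the original argument), consider the operator $L=t\eu^2-t\eu-\eu+2$. Its Newton diagram has the two slopes $0$ and $1$, and indeed $L=(\eu-2)(t\eu-1)$: the first factor is a monic Fuchsian operator with the single characteristic root $2$, and the second is a monic single-slope operator with slope $1$ and the single characteristic number $1$.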
\subsection{Remark on the convergence}
It is absolutely imperative to stress that all results on factorization of the differential operators, unlike their counterparts on pseudopolynomials, are only formal (cf.~with Remark~\ref{rem:convergence}). Technically, the difference between the two theories can be attributed to the fact that the passage from grading to filtration results in the growth of the number of terms in the right hand side of the homological equation \eqref{triang-w} compared with \eqref{factor-comm}.
However, the issue of the divergence of formal transformations, diagonalizing (say, in the non-resonant case) irregular singularities was studied in detail, and geometric obstructions were identified as Stokes matrices \cite{thebook}*{\parasymbol 20G}.
The ambitious goal behind this paper and its precursor \cite{shira} is to identify analytic obstructions to the formal Weyl classification and formal factorization in a similar form, as a suitable cocycle over a punctured neighborhood $(\C,0)$. However, this project is still in its rudimentary stage.
\begin{bibdiv}
\begin{biblist}
\bib{arnold}{book}{
author={Arnold, V. I.},
title={Huygens and Barrow, Newton and Hooke},
note={Pioneers in mathematical analysis and catastrophe theory from
evolvents to quasicrystals;
Translated from the Russian by Eric J. F. Primrose},
publisher={Birkh\"auser Verlag, Basel},
date={1990},
pages={118},
isbn={3-7643-2383-3},
review={\MR{1078625}},
doi={10.1007/978-3-0348-9129-5},
}
\bib{brieskorn}{book}{
author={Brieskorn, Egbert},
author={Kn\"orrer, Horst},
title={Plane algebraic curves},
series={Modern Birkh\"auser Classics},
note={Translated from the German original by John Stillwell;
[2012] reprint of the 1986 edition},
publisher={Birkh\"auser/Springer Basel AG, Basel},
date={1986},
pages={x+721},
isbn={978-3-0348-0492-9},
review={\MR{2975988}},
doi={10.1007/978-3-0348-5097-1},
}
\bib{sqh}{article}{
author={Greuel, Gert-Martin},
author={Pfister, Gerhard},
title={On moduli spaces of semiquasihomogeneous singularities},
conference={
title={Algebraic geometry and singularities},
address={La R\'abida},
date={1991},
},
book={
series={Progr. Math.},
volume={134},
publisher={Birkh\"auser, Basel},
},
date={1996},
pages={171--185},
review={\MR{1395180}},
}
\bib{thebook}{book}{
author={Ilyashenko, Yulij},
author={Yakovenko, Sergei},
title={Lectures on analytic differential equations},
series={Graduate Studies in Mathematics},
volume={86},
publisher={American Mathematical Society, Providence, RI},
date={2008},
pages={xiv+625},
isbn={978-0-8218-3667-5},
review={\MR{2363178 (2009b:34001)}},
}
\bib{kamgarpour}{article}{
author={Kamgarpour, Masoud},
author={Wheatherhog, Samuel},
title={A New Approach to Jordan Decomposition for Formal Differential Operators},
journal={\texttt{ArXiv}},
volume={1702.03608v1},
year={2017},
month={2},
pages={1--12},
note={Preprint published on February 13, 2017.},
}
\bib{malgrange}{article}{
author={Malgrange, Bernard},
title={Sur la r\'eduction formelle des \'equations diff\'erentielles \'a singularit\'es irr\'eguli\`eres},
journal={Pr\'epublication de l'Inst. Fourier, Grenoble},
date={1979},
reprint={
title={Singularit\'es irr\'eguli\`eres},
series={Documents Math\'ematiques},
volume={5},
author={Deligne, Pierre},
author={Malgrange, Bernard},
author={Ramis, Jean-Pierre},
publisher={Soci\'et\'e Math\'ematique de France},
date={2007},
isbn={978-2-85629-241-9},
review={\MR{2387754}},
pages={97--107},
}
}
\bib{leanne}{thesis}{
author={Mezuman, Leanne},
school={Weizmann Institute of Science},
year={2017},
title={Classification of non-Fuchsian linear differential equations},
type={M.Sc.~thesis}
}
\bib{mero-flat}{article}{
author={Novikov, Dmitry},
author={Yakovenko, Sergei},
title={Lectures on meromorphic flat connections},
conference={
title={Normal forms, bifurcations and finiteness problems in
differential equations},
},
book={
series={NATO Sci. Ser. II Math. Phys. Chem.},
volume={137},
publisher={Kluwer Acad. Publ., Dordrecht},
},
date={2004},
pages={387--430},
review={\MR{2085816 (2005f:34255)}},
}
\bib{ore}{article}{
author={Ore, \Ore ystein},
title={Theory of noncommutative polynomials},
journal={Ann. of Math. (2)},
volume={34},
date={1933},
number={3},
pages={480--508},
issn={0003-486X},
review={\MR{1503119}},
doi={10.2307/1968173},
}
\bib{vdp-sing}{book}{
author={van der Put, Marius},
author={Singer, Michael F.},
title={Galois theory of linear differential equations},
series={Grundlehren der Mathematischen Wissenschaften [Fundamental
Principles of Mathematical Sciences]},
volume={328},
publisher={Springer-Verlag, Berlin},
date={2003},
pages={xviii+438},
isbn={3-540-44228-6},
review={\MR{1960772}},
doi={10.1007/978-3-642-55750-7},
}
\bib{robba}{article}{
author={Robba, P.},
title={Lemmes de Hensel pour les op\'erateurs diff\'erentiels. Application \`a
la r\'eduction formelle des \'equations diff\'erentielles},
language={French},
journal={Enseign. Math. (2)},
volume={26},
date={1980},
number={3-4},
pages={279--311 (1981)},
issn={0013-8584},
review={\MR{610528}},
}
\bib{shira}{article}{
author={Tanny, Shira},
author={Yakovenko, Sergei},
title={On local Weyl equivalence of higher order Fuchsian equations},
journal={Arnold Math. J.},
volume={1},
date={2015},
number={2},
pages={141--170},
issn={2199-6792},
review={\MR{3370063}},
doi={10.1007/s40598-015-0014-6},
}
\bib{vain-tren}{book}{
author={Vainberg, M. M.},
author={Trenogin, V. A.},
title={Theory of branching of solutions of non-linear equations},
note={Translated from the Russian by Israel Program for Scientific
Translations},
publisher={Noordhoff International Publishing, Leyden},
date={1974},
pages={xxvi+485},
review={\MR{0344960}},
}
\bib{wall}{book}{
author={Wall, C. T. C.},
title={Singular points of plane curves},
series={London Mathematical Society Student Texts},
volume={63},
publisher={Cambridge University Press, Cambridge},
date={2004},
pages={xii+370},
isbn={0-521-83904-1},
isbn={0-521-54774-1},
review={\MR{2107253}},
doi={10.1017/CBO9780511617560},
}
\end{biblist}
\end{bibdiv}
\end{document}
| {
"timestamp": "2018-05-08T02:11:45",
"yymm": "1805",
"arxiv_id": "1805.02210",
"language": "en",
"url": "https://arxiv.org/abs/1805.02210",
"abstract": "We study the problem of decomposition (non-commutative factorization) of linear ordinary differential operators near an irregular singular point. The solution (given in terms of the Newton diagram and the respective characteristic numbers) is known for quite some time, though the proofs are rather involved.We suggest a process of reduction of the non-commutative problem to its commutative analog, the problem of factorization of pseudopolynomials, which is known since Newton invented his method of rotating ruler. It turns out that there is an \"automatic translation\" which allows to obtain the results for formal factorization in the Weyl algebra from well known results in local analytic geometry.Besides, we draw some (apparently unnoticed) parallelism between the formal factorization of linear operators and formal diagonalization of systems of linear first order differential equations.",
"subjects": "Classical Analysis and ODEs (math.CA); Complex Variables (math.CV)",
"title": "Formal factorization of higher order irregular linear differential operators",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9873750488609223,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7095221675253973
} |
https://arxiv.org/abs/2206.11651 | Attractor separation and signed cycles in asynchronous Boolean networks | The structure of the graph defined by the interactions in a Boolean network can determine properties of the asymptotic dynamics. For instance, considering the asynchronous dynamics, the absence of positive cycles guarantees the existence of a unique attractor, and the absence of negative cycles ensures that all attractors are fixed points. In presence of multiple attractors, one might be interested in properties that ensure that attractors are sufficiently "isolated", that is, they can be found in separate subspaces or even trap spaces, subspaces that are closed with respect to the dynamics. Here we introduce notions of separability for attractors and identify corresponding necessary conditions on the interaction graph. In particular, we show that if the interaction graph has at most one positive cycle, or at most one negative cycle, or if no positive cycle intersects a negative cycle, then the attractors can be separated by subspaces. If the interaction graph has no path from a negative to a positive cycle, then the attractors can be separated by trap spaces. Furthermore, we study networks with interaction graphs admitting two vertices that intersect all cycles, and show that if their attractors cannot be separated by subspaces, then their interaction graph must contain a copy of the complete signed digraph on two vertices, deprived of a negative loop. We thus establish a connection between a dynamical property and a complex network motif. The topic is far from exhausted and we conclude by stating some open questions. |
\section{Introduction}
A \EM{Boolean network} (BN) is a finite dynamical system usually defined by a function
\[
f:\{0,1\}^n\to\{0,1\}^n,\qquad x=(x_1,\dots,x_n)\mapsto f(x)=(f_1(x),\dots,f_n(x)).
\]
BNs have many applications. In particular, since the seminal papers of McCulloch and Pitts \cite{MP43}, Hopfield \cite{H82}, Kauffman \cite{K69,K93} and Thomas \cite{T73,TA90}, they are omnipresent in the modeling of neural and gene networks (see \cite{B08,N15} for reviews). They are also essential tools in computer science, see \cite{ANLY00,GRF16,BGT14,CFG14a,GR15b} for instance.
\medskip
The ``network'' terminology comes from the fact that the \EM{interaction graph} of $f$ is often considered as the main parameter of $f$: the vertex set is $[n]=\{1,\dots,n\}$ and there is an arc from $j$ to $i$ if $f_i$ depends on input $j$. The \EM{signed interaction graph} of $f$, denoted \EM{$G(f)$}, provides useful additional information about interactions, and is commonly considered in the context of gene networks: the vertex set is $[n]$ and there is a positive (negative) arc from $j$ to $i$ if there are $x,y\in \{0,1\}^n$ that only differ in $x_j<y_j$ such that $f_i(y)-f_i(x)$ is positive (negative). Note that the presence of both a positive and a negative arc from one vertex to another is allowed.
\medskip
From a dynamical point of view, the successive iterations of $f$ describe the so called \EM{synchronous dynamics}: if $x^t$ is the configuration of the system at time $t$, then $x^{t+1}=f(x^t)$ is the configuration of the system at the next time. Hence, all components are updated in parallel at each time step. However, when BNs are used as models of natural systems, such as gene networks, synchronicity can be an issue. This led researchers to consider the \EM{(fully) asynchronous dynamics}, where one component is updated at each time step (see e.g. \cite{T91,TA90,TK01,A-J16}). If $x^t$ is the configuration of the system at time $t$, then the configuration at time $t+1$ is $x$ if $f(x)=x$ and, otherwise, a configuration $y$ obtained from $x$ by flipping a component $i$ such that $f_i(x)\neq x_i$. The asynchronous dynamics can be described by the paths of the \EM{asynchronous graph} of $f$, denoted \EM{$\Gamma(f)$}: the vertex set is $\{0,1\}^n$, and there is an arc from $x$ to $y$ if and only if $y$ is obtained from $x$ by flipping a component $i$ such that $x_i\neq f_i(x)$. The asymptotic behaviors are described by the \EM{attractors} of $\Gamma(f)$, which are the inclusion-minimal \EM{trap sets}, where $X\subseteq \{0,1\}^n$ is a trap set if $\Gamma(f)$ has no arc from a vertex in $X$ to a vertex outside $X$. In particular, we say that $\Gamma(f)$ is:
\begin{itemize}
\item
\EM{fixing} if all the attractors are of size one,
\item
\EM{converging} if there is a unique attractor.
\end{itemize}
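For small $n$, the asynchronous graph and its attractors can be computed by brute force. The following Python sketch is ours and is only meant as an illustration (the helper names \texttt{async\_graph}, \texttt{reachable} and \texttt{attractors} are hypothetical): it builds $\Gamma(f)$ as an adjacency map on $\{0,1\}^n$ and extracts the attractors as the terminal strong components, i.e.\ the inclusion-minimal non-empty trap sets.
\begin{verbatim}
# Minimal sketch: asynchronous graph of a Boolean network and its attractors.
# Configurations are 0/1-tuples of length n; components are 0-indexed.
from itertools import product

def async_graph(f, n):
    """Arcs x -> y of Gamma(f): y flips one component i with f_i(x) != x_i."""
    arcs = {}
    for x in product((0, 1), repeat=n):
        fx = f(x)
        arcs[x] = [x[:i] + (1 - x[i],) + x[i + 1:]
                   for i in range(n) if fx[i] != x[i]]
    return arcs

def reachable(arcs, x):
    """Configurations reachable from x (including x itself)."""
    seen, stack = {x}, [x]
    while stack:
        for y in arcs[stack.pop()]:
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def attractors(arcs):
    """Terminal strongly connected components: x lies in an attractor iff x is
    reachable back from every configuration reachable from x; the attractor of
    such an x is then exactly the set reachable from x."""
    found = set()
    for x in arcs:
        r = reachable(arcs, x)
        if all(x in reachable(arcs, y) for y in r):
            found.add(frozenset(r))
    return [set(a) for a in found]

# Example: f(x1, x2) = (x2, x1) has the two fixed points 00 and 11 as attractors.
f = lambda x: (x[1], x[0])
print(attractors(async_graph(f, 2)))
\end{verbatim}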
\medskip
In biological applications, and for gene networks in particular, the first reliable experimental information often concerns the signed interaction graph, while the actual dynamics are very difficult to observe \cite{TK01,N15}. One is thus faced with the following question: {\em what can be said about $\Gamma(f)$ according to $G(f)$?} An influential result in this direction is the following \cite{ADG04a,A08}: if $G(f)$ has no negative cycle, then $f$ has at least one fixed point; and if $G(f)$ has no positive cycle, then $f$ has at most one fixed point. Soon after, it was realized that ``fixed point'' can be replaced by ``asynchronous attractor'', giving: if $G(f)$ has no negative cycle, then $\Gamma(f)$ is fixing \cite{R10}; and if $G(f)$ has no positive cycle, then $\Gamma(f)$ is converging \cite{RC07}.
\medskip
In this paper, we are interested in conditions on $G(f)$ that imply asymptotic properties in $\Gamma(f)$ which are weaker than the fixing and converging properties. To describe them, we need additional definitions. A \EM{subspace} is a set of configurations $X$ such that, for some $I\subseteq [n]$ and $c:I\to \{0,1\}$, we have $x\in X$ if and only if $x_i=c(i)$ for all $i\in I$. Hence a subspace is obtained by fixing some components. Given a set of configurations $X$, we denote by $[X]$ the smallest subspace containing $X$. A \EM{trap space} is a trap set which is also a subspace. We denote by $\langle X\rangle$ the smallest trap space containing $X$. Obviously, $[X]\subseteq \langle X\rangle$. We say that $\Gamma(f)$ is
\begin{itemize}
\item
\EM{separating} if $[A]\cap [B]=\emptyset$ for all distinct attractors $A,B$,
\item
\EM{trap-separating} if $\langle A\rangle\cap \langle B\rangle=\emptyset$ for all distinct attractors $A,B$,
\item
\EM{trapping} if it is separating and $[A]=\langle A\rangle$ for each attractor $A$.
\end{itemize}
The trapping property has been introduced in \cite{NRT22} with the following equivalent definition: for each attractor $A$, $[A]=\langle A\rangle$ and $A$ is the unique attractor reachable from any state in $[A]$.
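Continuing the sketch above (and reusing its hypothetical helpers \texttt{async\_graph} and \texttt{attractors}), the hulls $[X]$ and $\langle X\rangle$ and the three properties can be tested directly for small networks; the smallest trap space is obtained by percolation, freeing the direction of any asynchronous arc that leaves the current subspace.
\begin{verbatim}
# Continuation of the previous sketch: subspace hull [X], trap-space hull <X>,
# and the separating / trap-separating / trapping tests (brute force, small n).
from itertools import product

def subspace_hull(X, n):
    """[X]: smallest subspace containing X, encoded as {component: fixed value}."""
    x0 = next(iter(X))
    return {i: x0[i] for i in range(n) if all(x[i] == x0[i] for x in X)}

def cube(fixed, n):
    """All configurations of the subspace given by its fixed components."""
    return [x for x in product((0, 1), repeat=n)
            if all(x[i] == v for i, v in fixed.items())]

def trap_space_hull(f, X, n):
    """<X>: smallest trap space containing X. Percolation: as long as some
    asynchronous arc leaves the current subspace, free its direction."""
    fixed = subspace_hull(X, n)
    while True:
        leaving = {i for x in cube(fixed, n) for i in fixed if f(x)[i] != x[i]}
        if not leaving:
            return fixed
        for i in leaving:
            del fixed[i]

def disjoint(h1, h2):
    """Two subspaces are disjoint iff some common fixed component differs."""
    return any(i in h2 and h2[i] != v for i, v in h1.items())

def classify(f, n):
    """Return (separating, trap-separating, trapping) for Gamma(f)."""
    atts = attractors(async_graph(f, n))      # helpers from the previous sketch
    sub = [subspace_hull(A, n) for A in atts]
    trap = [trap_space_hull(f, A, n) for A in atts]
    pairs = [(a, b) for a in range(len(atts)) for b in range(a + 1, len(atts))]
    sep = all(disjoint(sub[a], sub[b]) for a, b in pairs)
    trap_sep = all(disjoint(trap[a], trap[b]) for a, b in pairs)
    return sep, trap_sep, sep and all(s == t for s, t in zip(sub, trap))
\end{verbatim}
This sketch is reused below to double-check some of the examples.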
\medskip
Arriving at a description of the attractor landscape is important for the identification of phenomena such as differentiation or stable periodicity, but is in general a hard problem, subject of ongoing research~\cite{klarner2015approximating,rozum2021parity}.
On the other hand, trap spaces can be computed more easily, for instance using logic programming~\cite{klarner2015computing}.
The number of minimal trap spaces provides a lower bound on the number of attractors. Moreover, an analysis of published biological models~\cite{klarner2015approximating} found that minimal trap spaces are often good approximations of attractors, meaning that each minimal trap space contains only one attractor (``univocality''), all attractors are found inside minimal trap spaces (``completeness''), and oscillating variables in attractors span the minimal containing trap space in all directions (``faithfulness'').
Under these conditions, model analyses that investigate reachability of attractors or existence of control strategies (see e.g.~\cite{cf2022control}) can be greatly facilitated.
Of particular interest are structural conditions on the interaction graph that can guarantee these properties.
In this work we look for conditions for the dynamics to be trap-separating, which implies that minimal trap spaces are complete and univocal, and for the dynamics to be trapping, which adds faithfulness of the minimal trap spaces, and investigate the ``worst case scenario'' where attractors cannot be separated even by subspaces.
\medskip
One easily checks that fixing $\Rightarrow$ trapping $\Rightarrow$ trap-separating $\Rightarrow$ separating. Furthermore, converging $\Rightarrow$ trap-separating (but converging $\not\Rightarrow$ trapping). The situation is described at the top of \cref{fig:results}. We deduce that if $\Gamma(f)$ is not trap-separating, then it is neither converging nor fixing, and thus $G(f)$ has at least one positive cycle and at least one negative cycle. But can something stronger be said? This paper provides partial answers.
\medskip
In particular, we prove that if $\Gamma(f)$ is not trap-separating, then $G(f)$ has a path from a negative cycle to a positive cycle. If $\Gamma(f)$ is non-separating, we can say more:
\begin{itemize}
\item
$G(f)$ has a positive cycle which intersects a negative cycle, and
\item
$G(f)$ has at least two negative cycles, and
\item
at least two vertices must be removed from $G(f)$ to destroy all the positive cycles.
\end{itemize}
The first point is particularly interesting since little is known about the dynamical influence of such intersections (see however \cite{didier2012relations,MNRSS15,remy2016boolean,ARS17b,R18,mosse2020combinatorial}). Consider the following signed digraph, called $H_2$ (throughout the paper, green arcs are positive and red arcs are negative):
\[
H_2\qquad
\begin{array}{c}
\begin{tikzpicture}
\useasboundingbox (-2.2,-0.7) rectangle (2.2,0.7);
\node[outer sep=1,inner sep=2,circle,draw,thick] (1) at ({180}:1){$1$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (2) at ({0}:1){$2$};
\draw[Green,->,thick] (1.{180-20}) .. controls ({180-20}:2.3) and ({180+20}:2.3) .. (1.{180+20});
\draw[red,->,thick] (1.{180-60}) .. controls ({180-30}:2.8) and ({180+30}:2.8) .. (1.{180+60});
\draw[Green,->,thick] (2.{0-20}) .. controls ({0-20}:2.3) and ({0+20}:2.3) .. (2.{0+20});
\path[->,thick]
(1) edge[red,bend right=25] (2)
(1) edge[Green,bend right=55] (2)
(2) edge[red,bend right=25] (1)
(2) edge[Green,bend right=55] (1)
;
\end{tikzpicture}
\end{array}
\]
This is a minimal signed digraph which satisfies the three conditions given above, and we prove that if $\Gamma(f)$ is non-separating then, under some conditions on $G(f)$, the presence of $H_2$ is unavoidable. To be precise, let us say that a signed digraph $H$ with vertex set $V$ is \EM{embedded} in $G(f)$ if there is an injection $\phi:V\to [n]$ such that, for every positive (negative) arc of $H$ from $j$ to $i$, $G(f)$ contains a positive (negative) path from $\phi(j)$ to $\phi(i)$ whose internal vertices are not in $\phi(V)$. We prove that, if $\Gamma(f)$ is non-separating, then either $H_2$ is embedded in $G(f)$, or at least three vertices must be removed from $G(f)$ to destroy all the cycles. The dynamics associated to isolated complex motifs has been previously investigated, as well as relationships between feedback vertex numbers and number of attractors (see e.g. \cite{A08,didier2012relations,MNRSS15,remy2016boolean,ARS17,mosse2020combinatorial}); however, this is the first time, to our knowledge, that a connection between a dynamical property and the embedding of such a complex pattern is identified.
\medskip
A sufficient structural condition for the trapping property is identified in \cite{NRT22}: $\Gamma(f)$ is trapping if $G(f)$ has a \EM{linear cut}, that is, if it has no arc from a vertex of out-degree at least two to a vertex of in-degree at least two and every cycle contains a vertex of in- and out-degree one. Here, with the tools used to analyse signed digraphs that are non-separating, we provide a sufficient condition for $\Gamma(f)$ to be trapping which is rather different: we show that $\Gamma(f)$ is trapping when $G(f)$ is strong and has at most one negative cycle. We also provide in the same vein new sufficient conditions for $\Gamma(f)$ to be fixing or converging.
\medskip
We divide the results in statements that concern the intersection of positive and negative cycles (\cref{sec:intersection_positive_negative}), number of positive (\cref{sec:number-positive-cycles}) and negative cycles (\cref{sec:number-negative-cycles}), and graphs with feedback number two (\cref{sec:feedback-number-two}).
A summary of our results is given in \cref{fig:results}.
We conclude with some conjectures and open questions (\cref{sec:open-problems}).
\pgfdeclarelayer{background}
\pgfdeclarelayer{foreground}
\pgfsetlayers{background,main,foreground}
\def4cm{4cm}
\def2cm{2cm}
\def1cm{1cm}
\begin{figure}
\resizebox{\textwidth}{!}{
\begin{tikzpicture}
\node[rectangle,draw,minimum width=4cm,minimum height=0.5*4cm,line width=0.5mm] (fix) at (-0.1,0) {\begin{tabular}{c}\textbf{fixing}\\$|A|=1$ $\forall A$\end{tabular}};
\node[rectangle,draw,minimum width=4cm,minimum height=0.5*4cm,line width=0.5mm] (conv) at (1.2*4cm,0.3*4cm) {\begin{tabular}{c}\textbf{converging}\\$\exists! A$\end{tabular}};
\node[rectangle,draw,minimum width=4cm,minimum height=0.5*4cm,line width=0.5mm] (trap) at (2.4*4cm,0) {\begin{tabular}{c}\textbf{trapping}\\separating,\\$\langle A\rangle=[A]$ $\forall A$\end{tabular}};
\node[rectangle,draw,minimum width=1.2*4cm,minimum height=0.82*4cm,line width=0.5mm] (trapsep) at (3.7*4cm,0.15*4cm) {\begin{tabular}{c}\textbf{trap-separating}\\\hspace{2pt}\\$\langle A\rangle\cap \langle B\rangle=\emptyset$\\$\forall A\neq B$\end{tabular}};
\node[rectangle,draw,minimum width=2.0*4cm,minimum height=0.82*4cm,line width=0.5mm] (sep) at (5.5*4cm,0.15*4cm) {\begin{tabular}{c}\textbf{separating}\\\hspace{2pt}\\$[A]\cap [B]=\emptyset$ $\forall A\neq B$\end{tabular}};
\node[rectangle,draw,fill=black!10,minimum width=2cm,minimum height=1cm,line width=0.3mm,rounded corners=0.2cm] (nnc) at (-0.1,0-0.6*4cm) {no negative cycle};
\node[rectangle,draw,fill=black!10,minimum width=2cm,minimum height=1cm,line width=0.3mm,rounded corners=0.2cm] (npc) at (1.2*4cm,-0.6*4cm) {no positive cycle};
\node[rectangle,draw,fill=black!10,minimum width=2cm,minimum height=1cm,line width=0.3mm,rounded corners=0.2cm] (lc) at (2.2*4cm,0-0.6*4cm) {linear cut};
\node[rectangle,draw,fill=white,minimum width=2cm,minimum height=1cm,line width=0.3mm,rounded corners=0.2cm] (nopathnp) at (3.6*4cm,0-0.6*4cm) {\begin{tabular}{c}no path from neg.\\to pos. cycle\end{tabular}};
\node[rectangle,draw,fill=white,minimum width=2cm,minimum height=1cm,line width=0.3mm,rounded corners=0.2cm] (oneneg) at (0.05*4cm,0-1.16*4cm) {\begin{tabular}{c}at least 1 pos. cycle,\\unique neg. cycle\\ meets all cycles\end{tabular}};
\node[rectangle,draw,fill=white,minimum width=2cm,minimum height=1cm,line width=0.3mm,rounded corners=0.2cm] (stroneneg) at (0*4cm,0-1.76*4cm) {\begin{tabular}{c}strong, at least 1 pos.\\cycle, unique neg. cycle\\meets all cycles\end{tabular}};
\node[rectangle,draw,fill=white,minimum width=2cm,minimum height=1cm,line width=0.3mm,rounded corners=0.2cm] (onepos1) at (1.1*4cm,0-1.16*4cm) {\begin{tabular}{c}at least 1 neg. cycle,\\unique pos. cycle\\meets all cycles\end{tabular}};
\node[rectangle,draw,fill=white,minimum width=2cm,minimum height=1cm,line width=0.3mm,rounded corners=0.2cm] (stronepos) at (1.2*4cm,0-1.76*4cm) {\begin{tabular}{c}strong, at least 1 neg.\\cycle, unique pos. cycle\\meets all cycles\end{tabular}};
\node[rectangle,draw,fill=white,minimum width=2cm,minimum height=1cm,line width=0.3mm,rounded corners=0.2cm] (strmaxoneneg) at (2.4*4cm,0-1.22*4cm) {\begin{tabular}{c}strong and at most\\one neg. cycle\end{tabular}};
\node[rectangle,draw,fill=white,minimum width=2cm,minimum height=1cm,line width=0.3mm,rounded corners=0.2cm] (onepos) at (3.6*4cm,0-1.1*4cm) {\begin{tabular}{c}unique pos. cycle\\meets all cycles\end{tabular}};
\node[rectangle,draw,fill=white,minimum width=2cm,minimum height=1cm,line width=0.3mm,rounded corners=0.2cm] (no) at (4.8*4cm,0-0.6*4cm) {\begin{tabular}{c}no neg. and pos.\\cycles intersect\end{tabular}};
\node[rectangle,draw,fill=white,minimum width=2cm,minimum height=1cm,line width=0.3mm,rounded corners=0.2cm] (fvntwo) at (5.9*4cm,0-0.6*4cm) {\begin{tabular}{c}feedback number 2 and\\no embedding of $H_2$\end{tabular}};
\node[rectangle,draw,fill=white,minimum width=2cm,minimum height=1cm,line width=0.3mm,rounded corners=0.2cm] (pfvn) at (4.8*4cm,0-1.1*4cm) {\begin{tabular}{c}pos. feedback\\number 1\end{tabular}};
\node[rectangle,draw,fill=white,minimum width=2cm,minimum height=1cm,line width=0.3mm,rounded corners=0.2cm] (nfvn) at (6.1*4cm,0-1.1*4cm) {\begin{tabular}{c}neg. feedback\\number 1\end{tabular}};
\node[rectangle,draw,fill=white,minimum width=2cm,minimum height=1cm,line width=0.3mm,rounded corners=0.2cm] (atmostonep) at (4.8*4cm,0-1.6*4cm) {\begin{tabular}{c}at most one\\pos. cycle\end{tabular}};
\node[rectangle,draw,fill=white,minimum width=2cm,minimum height=1cm,line width=0.3mm,rounded corners=0.2cm] (atmostonen) at (5.8*4cm,0-1.6*4cm) {\begin{tabular}{c}at most one\\neg. cycle\end{tabular}};
\draw[->,ultra thick] ([shift={(0cm,-0.3cm)}]fix.east) -- ([shift={(0cm,-0.3cm)}]trap.west);
\draw[->,ultra thick] ([shift={(0cm,0.3cm)}]conv.east) -- ([shift={(0cm,0.9cm)}]trapsep.west);
\draw[->,ultra thick,red,dotted] ([shift={(0cm,-0.7cm)}]conv.east) -- ([shift={(0cm,0.5cm)}]trap.west) node[pos=0.5,above] {ex.\ref{ex:non-trapping}};
\draw[->,ultra thick] (trap) -- ([shift={(0cm,-0.58cm)}]trapsep.west);
\draw[->,ultra thick] (trapsep) -- (sep);
\draw[->,ultra thick,gray] (nnc) -- (fix) node[midway,right] {\cite{R10}};
\begin{pgfonlayer}{background}
\draw[->,ultra thick,gray] (npc) -- (conv) node[pos=0.2,right,on background layer] {\cite{RC07}};
\end{pgfonlayer}
\draw[->,ultra thick,gray] (lc) -- ([shift={(-0.8cm,-1cm)}]trap.center) node[midway,right] {\cite{NRT22}};
\draw[->,ultra thick] (nopathnp) -- ([shift={(-0.4cm,-1.65cm)}]trapsep.center) node[midway,right] {Thm.\ref{thm:trap-sep}};
\draw[->,ultra thick] (no) -- ([shift={(-2.8cm,-1.65cm)}]sep.center) node[midway,right] {Thm.\ref{thm:sep}};
\draw[->,ultra thick] (fvntwo) -- ([shift={(1.6cm,-1.65cm)}]sep.center) node[midway,right] {Thm.\ref{thm:fvs2}};
\draw[->,ultra thick] ([shift={(0.0cm,0.6cm)}]atmostonep.center) -- ([shift={(0.0cm,-0.6cm)}]pfvn.center);
\draw[->,ultra thick] ([shift={(1.0cm,0.6cm)}]atmostonen.center) -- ([shift={(-0.2cm,-0.6cm)}]nfvn.center);
\begin{pgfonlayer}{background}
\draw[->,ultra thick] ([shift={(-1.2cm,0cm)}]strmaxoneneg.north) -- ([shift={(-1.2cm,0cm)}]trap.south) node[pos=0.27,right] {Thm.\ref{thm:NEGATIVE}};
\draw[->,ultra thick] ([shift={(-2.0cm,0.6cm)}]stroneneg.center) -- ([shift={(-1.9cm,-1.0cm)}]fix.center) node[pos=0.1,right] {Prop.\ref{pro:fixing}};
\draw[->,ultra thick] ([shift={(+1.8cm,0cm)}]stronepos.north) -- ([shift={(+1.8cm,0cm)}]conv.south) node[pos=0.05,right] {Prop.\ref{pro:one_positive_cycle}};
\draw[->,ultra thick] ([shift={(-0.4cm,0.6cm)}]onepos.center) -- ([shift={(-0.8cm,-1.65cm)}]trapsep.center) node[pos=0.15,right] {Prop.\ref{pro:one_positive_cycle}};
\draw[->,ultra thick] ([shift={(-0.6cm,0.6cm)}]pfvn.center) -- ([shift={(-3.4cm,-1.65cm)}]sep.center) node[pos=0.15,right] {Thm.\ref{thm:PFN}};
\draw[->,ultra thick] ([shift={(-0.6cm,0.6cm)}]atmostonen.center) -- ([shift={(0.6cm,-1.65cm)}]sep.center) node[pos=0.08,right] {Thm.\ref{thm:NEGATIVE}};
\end{pgfonlayer}
\draw[->,ultra thick,red,dotted] ([shift={(1.4cm,0cm)}]oneneg.north) -- +(0cm,0.6cm) -- +(0.55cm,0.6cm) -- +(0.55cm,2.3cm) node[pos=0.4,right] {ex.\ref{ex:sep-not-conv-fix-graph}} -- +(0cm,2.3cm) -- ([shift={(1.7cm,0cm)}]fix.south);
\draw[->,ultra thick,red,dotted] ([shift={(-1.4cm,0cm)}]onepos1.north) -- +(0cm,0.6cm) -- +(-0.85cm,0.6cm) -- +(-0.85cm,2.3cm) -- +(0.05cm,2.3cm) -- ([shift={(-1.75cm,0cm)}]conv.south);
\draw[->,ultra thick,red,dotted] (nopathnp) -| node[pos=0.87,right] {ex.\ref{ex:non-trapping}} ([shift={(1.6cm,-1.0cm)}]trap.center);
\draw[->,ultra thick,red,dotted] ([shift={(0cm,0.3cm)}]onepos.west) -| node[pos=0.27,above] {ex.\ref{ex:pos-not-trapping}} ([shift={(1.4cm,-1.0cm)}]trap.center);
\draw[->,ultra thick,red,dotted] (no) -| node[pos=0.87,right] {ex.\ref{ex:not-trap-sep}} ([shift={(2.0cm,-1.65cm)}]trapsep.center);
\draw[->,ultra thick,red,dotted] (pfvn) -| ([shift={(2.0cm,-1.65cm)}]trapsep.center);
\draw[->,ultra thick,red,dotted] (atmostonep) -| ([shift={(2.0cm,-1.65cm)}]trapsep.center);
\draw[-,ultra thick,red,dotted] (atmostonen) -- (atmostonep);
\draw[-,ultra thick,red,dotted] (fvntwo) -- (no);
\draw[->,ultra thick,red,dotted] (nfvn) -| node[pos=0.55,right] {ex.\ref{ex:negative_feedback_1}} ([shift={(-0.85cm,-1.65cm)}]sep.center);
\end{tikzpicture}
}\caption{\label{fig:results} Summary of the main definitions and results of this work, and some known results (indicated with gray boxes and arrows). $A$ and $B$ stand for attractors of the asynchronous dynamics. Counterexamples are indicated by dotted red arrows.}
\end{figure}
\section{Definitions and background}
\subsection{Digraphs and signed digraphs}
A \EM{digraph} is a pair $G=(V,E)$ where $V$ is a set of vertices and $E\subseteq V^2$ is a set of arcs. Given $I\subseteq V$, the subgraph of $G$ induced by $I$ is denoted $G[I]$, and $G\setminus I$ means $G[V\setminus I]$. A \EM{strongly connected component} (\EM{strong component} for short) of a digraph $G$ is an induced subgraph which is \EM{strongly connected} (\EM{strong} for short) and maximal for this property. A strong component $G[I]$ is \EM{initial} if $G$ has no arc from $V\setminus I$ to $I$, and \EM{terminal} if $G$ has no arc from $I$ to $V\setminus I$. A digraph is \EM{trivial} if it has a unique vertex and no arc.
\medskip
A \EM{signed digraph} $G$ is a pair $(V,E)$ where $E\subseteq V^2\times\{-1,1\}$. If $(j,i,s)\in E$ then $G$ has an arc from $j$ to $i$ of sign $s$; we also say that $j$ is an in-neighbor of $i$ of sign $s$ and that $i$ is an out-neighbor of $j$ of sign $s$. We say that $G$ is \EM{simple} if it does not have both a positive arc and a negative arc from one vertex to another, and \EM{full-positive} if all its arcs are positive. A subgraph of $G$ is a signed digraph $(V',E')$ with $V'\subseteq V$ and $E'\subseteq E$. Cycles and paths of $G$ are regarded as simple subgraphs. The \EM{sign of a cycle or a path} of $G$ is the product of the signs of its arcs. We say that $I$ is a \EM{feedback vertex set} if $G\setminus I$ has no cycle. The \EM{feedback number} of $G$ is the minimum size of a \EM{feedback vertex set} of $G$. Similarly, we say that $I$ is a \EM{positive (negative) feedback vertex set} if $G\setminus I$ has no positive (negative) cycle. The \EM{positive (negative) feedback number} of $G$ is the minimum size of a \EM{positive (negative) feedback vertex set} of $G$. We say that $G$ has a \EM{linear cut} if it has no arc from a vertex of out-degree at least two to a vertex of in-degree at least two and every cycle contains a vertex of in- and out-degree one. The underlying (unsigned) digraph of $G$ has vertex set $V$ and an arc from $j$ to $i$ if $G$ has a positive or a negative arc from $j$ to $i$. Every graph concept applied to $G$ that does not involve signs is tacitly applied to its underlying digraph. For instance, $G$ is strongly connected if its underlying digraph is. In the following, $G$ always denotes a signed digraph with vertex set $V$.
\subsection{Configurations}
The set of maps from $V$ to $\{0,1\}$ is denoted \EM{$\{0,1\}^V$} and called set of \EM{configurations} on $V$. Given such a configuration $x$ and $i\in V$, we denote \EM{$x_i$} the image of $i$ by $x$ and for $I\subseteq V$, \EM{$x_I$} is the restriction of $x$ to $I$. In examples, we always have $V=[n]$ for some $n\geq 1$ and we identify $x$ and the binary sequence $x_1x_2\dots x_n$. We denote by \EM{$e_I$} the configuration such that $(e_I)_i=1$ if $i\in I$ and $(e_I)_i=0$ otherwise. We write \EM{$e_i$} instead of $e_{\{i\}}$. Let $x,y$ be two configurations on $V$. We denote by \EM{$x+y$} the configuration $z$ on $V$ with $z_i=x_i+y_i$ for all $i\in V$, where the addition is modulo $2$, and by $\bar{x}$ the configuration $x+e_V$. We denote by \EM{$\Delta(x,y)$} the set of $i\in V$ with $x_i\neq y_i$. The \EM{Hamming distance} between $x$ and $y$ is $\EMM{d(x,y)}=|\Delta(x,y)|$. We equip $\{0,1\}^V$ with the partial order \EM{$\leq $} defined by $x\leq y$ if and only if $x_i\leq y_i$ for $i\in V$. We denote by \EM{$\mathbf{0}$} and \EM{$\mathbf{1}$} the all-zero and all-one configurations, that is, the minimal and maximal element of $\{0,1\}^V$. If $x\leq y$ then \EM{$[x,y]$} is the set of configurations $z$ on $V$ such that $x\leq z\leq y$. Let $X\subseteq \{0,1\}^V$. We denote by \EM{$\Delta(X)$} the set of $i\in V$ such that $x_i\neq y_i$ for some $x,y\in X$. We say that $X$ is a \EM{subspace} if $X=[x,y]$ for some configurations $x,y$ on $V$. We denote by \EM{$[X]$} the smallest subspace of $\{0,1\}^V$ containing $X$.
\subsection{Boolean networks}
A \EM{Boolean network} (BN) with component set $V$ is a map $f:\{0,1\}^V\to\{0,1\}^V$. We denote by $F(V)$ the set of BNs with component set $V$, and for $n\geq 1$ we write $F(n)$ instead of $F([n])$. We say that $f$ is \EM{monotone} if $x\leq y$ implies $f(x)\leq f(y)$ for all configurations $x,y$ on $V$. We denote by $G(f)$ the \EM{signed interaction graph} of $f$: it is the signed digraph with vertex set $V$ such that, for all $i,j\in V$, there is a positive (negative) arc from $j$ to $i$ if there exists a configuration $x$ on $V$ such that $x_j=0$ and $f_i(x+e_j)-f_i(x)$ is positive (negative); we can have both a positive and a negative arc from one component to another. Given a signed digraph $G$, a BN \EM{on} $G$ is a BN with interaction graph equal to $G$. We denote by $F(G)$ the set of BNs on $G$. Let $x,y$ be two configurations on $V$ with $x\leq y$. The \EM{subnetwork} of $f$ induced by $[x,y]$ is the BN $h$ with component set $I=\Delta(x,y)$ defined by $h(z_I)=f(z)_I$ for all $z\in [x,y]$. Intuitively, $h$ is obtained from $f$ by fixing each component $i\in V\setminus I$ to $x_i=y_i$. One can easily check that $G(h)$ is a subgraph of $G(f)[I]$.
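For small networks, $G(f)$ can be computed directly from this definition; the following sketch (ours, with the hypothetical name \texttt{signed\_interaction\_graph}) returns it as a set of triples $(j,i,s)$, with $0$-indexed components.
\begin{verbatim}
# Minimal sketch: the signed interaction graph of a Boolean network,
# following the definition above (components are 0-indexed).
from itertools import product

def signed_interaction_graph(f, n):
    arcs = set()
    for x in product((0, 1), repeat=n):
        fx = f(x)
        for j in range(n):
            if x[j] == 0:                      # compare x with x + e_j
                fy = f(x[:j] + (1,) + x[j + 1:])
                for i in range(n):
                    if fy[i] > fx[i]:
                        arcs.add((j, i, 1))    # positive arc j -> i
                    elif fy[i] < fx[i]:
                        arcs.add((j, i, -1))   # negative arc j -> i
    return arcs

# Example: f(x1, x2) = (not x2, x1) has a negative arc 2 -> 1 and a positive
# arc 1 -> 2, written (1, 0, -1) and (0, 1, 1) in 0-indexed form.
f = lambda x: (1 - x[1], x[0])
print(sorted(signed_interaction_graph(f, 2)))
\end{verbatim}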
\subsection{Asynchronous graphs}
An \EM{asynchronous graph} $\Gamma$ on $\{0,1\}^V$ is a digraph with vertex set $\{0,1\}^V$ such that, for every arc $x\to y$, there is exactly one $i\in V$ such that $x_i\neq y_i$; this component $i$ is the \EM{direction} of the arc. We say that $x\to y$ is \EM{increasing} if $x\leq y$ and \EM{decreasing} if $y\leq x$. Let $X$ be a set of configurations on $V$. We say that an arc $x\to y$ of $\Gamma$ \EM{leaves} $X$ if $x\in X$ and $y\not\in X$. We say that $X$ is a \EM{trap set} of $\Gamma$ if no arc leaves $X$. A \EM{trap space} of $\Gamma$ is a subspace which is also a trap set. We denote by \EM{$\langle X\rangle$} the smallest trap space containing $X$ (which exists since $\{0,1\}^V$ is a trap space). An \EM{attractor} of $\Gamma$ is a terminal strong component of $\Gamma$ or, equivalently, an inclusion-minimal non-empty trap set (there is at least one attractor since $\{0,1\}^V$ is a trap set). Given a BN $f\in F(V)$, we denote by $\Gamma(f)$ the \EM{asynchronous graph} of $f$, that is, the asynchronous graph on $\{0,1\}^V$ with an arc $x\to y$ in the direction $i$ if and only if $f_i(x)\neq x_i$.
It is easy to see that each asynchronous graph on $\{0,1\}^V$ is the asynchronous graph of a unique BN with component set $V$. For a subspace $X$ of $\{0,1\}^V$, we denote by $\Gamma(f)[X]$ the subgraph of $\Gamma(f)$ induced by $X$. Clearly, this is the asynchronous dynamics of the subnetwork of $f$ induced by $X$.
\medskip
An asynchronous graph is:
\begin{itemize}
\item
\EM{fixing} if all the attractors are of size one,
\item
\EM{converging} if there is a unique attractor,
\item
\EM{separating} if $[A]\cap [B]=\emptyset$ for all distinct attractors $A,B$,
\item
\EM{trap-separating} if $\langle A\rangle\cap \langle B\rangle=\emptyset$ for all distinct attractors $A,B$,
\item
\EM{trapping} if it is separating and $[A]=\langle A\rangle$ for every attractor $A$.
\end{itemize}
We abusively say that a signed digraph $G$ is \EM{converging} (resp. \EM{fixing}, \EM{trapping}, \EM{trap-separating}, \EM{separating}) if the asynchronous graph of every $f\in F(G)$ is converging (resp. fixing, trapping, trap-separating, separating). One easily checks that fixing $\Rightarrow$ trapping $\Rightarrow$ trap-separating $\Rightarrow$ separating. Furthermore, converging $\Rightarrow$ trap-separating, but converging $\not\Rightarrow$ trapping; see \cref{ex:non-trapping}. Here are some sufficient conditions for $G$ to be fixing, converging or trapping.
\begin{theorem}\label{thm:bib}
For every signed digraph $G$,
\begin{itemize}
\item if $G$ has no cycle, then $G$ is converging and fixing \cite{R95};
\item if $G$ has no positive cycle, then $G$ is converging \cite{RC07};
\item if $G$ has no negative cycle, then $G$ is fixing \cite{R10};
\item if $G$ has a linear cut, then $G$ is trapping \cite{NRT22}.
\end{itemize}
\end{theorem}
For some proofs in this paper we will rely on the following results, which imply the second and third points of the previous theorem.
\begin{lemma}[\cite{A08,RC07}]\label{lem:A08}
Let $f\in F(G)$ and let $x,y$ be two configurations on $V$ such that $f_i(x)=x_i$ and $f_i(y)=y_i$ for all $i\in\Delta(x,y)$. Then $G[\Delta(x,y)]$ has a positive cycle $C$, and if $C$ contains an arc from $j$ to $i$ then the sign of this arc is $(y_j-x_j)(y_i-x_i)$.
\end{lemma}
\begin{lemma}[\cite{R10}]\label{lem:R10}
Let $f\in F(G)$ and suppose that $\Gamma(f)$ has an attractor $A$ of size at least two. Then $G[\Delta(A)]$ has a negative cycle.
\end{lemma}
By combining these two lemmas we can already state a result about non-intersecting positive and negative cycles in trap-separating networks that have at least two attractors and at least one cyclic attractor.
\begin{proposition}\label{pro:disjoint-opposite}
Let $f\in F(G)$. If $\Gamma(f)$ is trap-separating but neither converging nor fixing, then $G$ has vertex-disjoint cycles of distinct sign.
\end{proposition}
\begin{proof}
Suppose that $\Gamma(f)$ is trap-separating but neither converging nor fixing. Then it has two attractors $A,B$ with $|A|>1$ and disjoint trap spaces $X,Y$ with $A\subseteq X$ and $B\subseteq Y$. Consider configurations $x \in X$ and $y \in Y$ which minimize $d(x,y)$. Then $x$ and $y$ are fixed points for the subnetwork of $f$ induced by $[x,y]$: if, say, $f_i(x)\neq x_i$ for some $i\in\Delta(x,y)$, then $x+e_i\in X$ since $X$ is a trap space, and $x+e_i$ is strictly closer to $y$ than $x$, contradicting the minimality of $d(x,y)$. Hence $G[\Delta(x,y)]$ has a positive cycle by \cref{lem:A08}. By \cref{lem:R10}, $G[\Delta(A)]$ has a negative cycle. Moreover, $\Delta(x,y)\cap \Delta(A)=\emptyset$: every $i\in\Delta(A)$ is free in the subspace $X$, so if some $i\in\Delta(A)$ belonged to $\Delta(x,y)$ then $x+e_i$ would again be a configuration of $X$ strictly closer to $y$. This proves the proposition.
\end{proof}
The conclusion of the proposition does not hold if we replace trap-separating with separating:
the following is an example of an asynchronous graph that is separating but neither converging nor fixing, while its signed interaction graph has no vertex-disjoint positive and negative cycles.
\begin{example}\label{ex:sep-not-conv-fix}
The BN $f\in F(5)$ defined by $f_1(x)=x_4 x_5 \lor \bar x_4 \bar x_5$, $f_2(x)=x_1 \bar x_5 \lor x_5 \bar x_1$, $f_3(x)=x_2 \bar x_5 \lor x_5 \bar x_2$, $f_4(x)=x_3 \bar x_5 \lor x_5 \bar x_3$, $f_5(x)=x_1 x_3 \bar x_2 \lor x_1 x_4 \bar x_3 \lor x_2 \bar x_1 \bar x_3 \lor x_3 \bar x_1 \bar x_4$ has two cyclic attractors $A$ and $B$, with $[A]=\{x_5=0\}$ and $[B]=\{x_5=1\}$.
In addition, $\Gamma(f)$ has an arc from $00001$ to $00000$ and an arc from $10100$ to $10101$.
So $\Gamma(f)$ is separating but not converging, not fixing and not trap-separating. $G(f)$ does not have a positive and a negative cycle that are disjoint.
\[
\begin{array}{c}
\begin{tikzpicture}
\pgfmathparse{1}
\node (00000) at (0,0){{\boldmath \textcolor{Blue}{$00000$} \unboldmath}};
\node (00100) at (1,1){$00100$};
\node (01000) at (0,2){$01000$};
\node (01100) at (1,3){$01100$};
\node (10000) at (2,0){{\boldmath \textcolor{Blue}{$10000$} \unboldmath}};
\node (10100) at (3,1){$10100$};
\node (11000) at (2,2){{\boldmath \textcolor{Blue}{$11000$} \unboldmath}};
\node (11100) at (3,3){{\boldmath \textcolor{Blue}{$11100$} \unboldmath}};
\node (00010) at (5,0){{\boldmath \textcolor{Blue}{$00010$} \unboldmath}};
\node (00110) at (6,1){{\boldmath \textcolor{Blue}{$00110$} \unboldmath}};
\node (01010) at (5,2){$01010$};
\node (01110) at (6,3){{\boldmath \textcolor{Blue}{$01110$} \unboldmath}};
\node (10010) at (7,0){$10010$};
\node (10110) at (8,1){$10110$};
\node (11010) at (7,2){$11010$};
\node (11110) at (8,3){{\boldmath \textcolor{Blue}{$11110$} \unboldmath}};
\node (00001) at (0-2.5,0-5){$00001$};
\node (00101) at (1-2.5,1-5){{\boldmath \textcolor{Plum}{$00101$} \unboldmath}};
\node (01001) at (0-2.5,2-5){{\boldmath \textcolor{Plum}{$01001$} \unboldmath}};
\node (01101) at (1-2.5,3-5){{\boldmath \textcolor{Plum}{$01101$} \unboldmath}};
\node (10001) at (2-2.5,0-5){$10001$};
\node (10101) at (3-2.5,1-5){{\boldmath \textcolor{Plum}{$10101$} \unboldmath}};
\node (11001) at (2-2.5,2-5){$11001$};
\node (11101) at (3-2.5,3-5){$11101$};
\node (00011) at (5-2.5,0-5){$00011$};
\node (00111) at (6-2.5,1-5){$00111$};
\node (01011) at (5-2.5,2-5){{\boldmath \textcolor{Plum}{$01011$} \unboldmath}};
\node (01111) at (6-2.5,3-5){$01111$};
\node (10011) at (7-2.5,0-5){{\boldmath \textcolor{Plum}{$10011$} \unboldmath}};
\node (10111) at (8-2.5,1-5){{\boldmath \textcolor{Plum}{$10111$} \unboldmath}};
\node (11011) at (7-2.5,2-5){{\boldmath \textcolor{Plum}{$11011$} \unboldmath}};
\node (11111) at (8-2.5,3-5){$11111$};
\path[thick,->,draw,black]
(00000) edge[ultra thick,Blue] (10000)
(00001) edge[Gray,bend right=15] (00011)
(00001) edge (01001)
(00001) edge (00101)
(00001) edge[Gray,bend left=15] (00000)
(00010) edge[ultra thick,Blue,bend left=15] (00000)
(00011) edge (00111)
(00011) edge (01011)
(00011) edge (10011)
(00011) edge[Gray,bend left=15] (00010)
(00100) edge (10100)
(00100) edge[Gray,bend right=15] (00101)
(00100) edge[Gray,bend right=15] (00110)
(00100) edge (00000)
(00101) edge[ultra thick,Purple] (01101)
(00110) edge[ultra thick,Blue] (00010)
(00111) edge (10111)
(00111) edge[Gray,bend left=15] (00101)
(00111) edge[Gray,bend left=15] (00110)
(00111) edge (01111)
(01000) edge[Gray,bend right=15] (01001)
(01000) edge (01100)
(01000) edge (11000)
(01000) edge (00000)
(01001) edge[ultra thick,Purple,bend left=15] (01011)
(01010) edge[Gray,bend left=15] (01000)
(01010) edge[Gray,bend right=15] (01011)
(01010) edge (01110)
(01010) edge (00010)
(01011) edge[ultra thick,Purple] (11011)
(01100) edge (00100)
(01100) edge[Gray,bend right=15] (01101)
(01100) edge[Gray,bend left=15] (01110)
(01100) edge (11100)
(01101) edge[ultra thick,Purple] (01001)
(01110) edge[ultra thick,Blue] (00110)
(01111) edge (11111)
(01111) edge[Gray,bend right=15] (01101)
(01111) edge (01011)
(01111) edge[Gray,bend left=15] (01110)
(10000) edge[ultra thick,Blue] (11000)
(10001) edge[Gray,bend left=15] (10000)
(10001) edge (10101)
(10001) edge[Gray,bend right=15] (10011)
(10001) edge (00001)
(10010) edge[Gray,bend left=15] (10000)
(10010) edge (11010)
(10010) edge[Gray,bend left=15] (10011)
(10010) edge (00010)
(10011) edge[ultra thick,Purple] (10111)
(10100) edge (10000)
(10100) edge[Gray,bend right=15] (10110)
(10100) edge (11100)
(10100) edge[Gray,bend left=15] (10101)
(10101) edge[ultra thick,Purple] (00101)
(10110) edge (11110)
(10110) edge[Gray,bend left=15] (10111)
(10110) edge (10010)
(10110) edge (00110)
(10111) edge[ultra thick,Purple,bend left=15] (10101)
(11000) edge[ultra thick,Blue] (11100)
(11001) edge[Gray,bend left=15] (11011)
(11001) edge (01001)
(11001) edge (10001)
(11001) edge[Gray,bend right=15] (11000)
(11010) edge (11110)
(11010) edge[Gray,bend right=15] (11000)
(11010) edge (01010)
(11010) edge[Gray,bend left=15] (11011)
(11011) edge[ultra thick,Purple] (10011)
(11100) edge[ultra thick,Blue,bend left=15] (11110)
(11101) edge (10101)
(11101) edge (01101)
(11101) edge (11001)
(11101) edge[Gray,bend right=15] (11100)
(11110) edge[ultra thick,Blue] (01110)
(11111) edge (11011)
(11111) edge (10111)
(11111) edge[Gray,bend right=15] (11110)
(11111) edge[Gray,bend right=15] (11101)
;
\end{tikzpicture}
\end{array}
\quad
\begin{array}{c}
\begin{tikzpicture}
\node[outer sep=1,inner sep=2,circle,draw,thick] (1) at (90:1.5){$1$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (2) at ({0}:1.5){$2$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (4) at ({180}:1.5){$4$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (3) at (-90:1.5){$3$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (5) at (0:0){$5$};
\path[->,thick]
(1) edge[Green,bend left=40] (2)
(1) edge[red,bend left=15] (2)
(2) edge[Green,bend left=40] (3)
(2) edge[red,bend left=15] (3)
(3) edge[Green,bend left=40] (4)
(3) edge[red,bend left=15] (4)
(4) edge[Green,bend left=40] (1)
(4) edge[red,bend left=15] (1)
(1) edge[Green,bend left=40] (5)
(1) edge[red,bend left=15] (5)
(2) edge[Green,bend left=40] (5)
(2) edge[red,bend left=15] (5)
(3) edge[Green,bend left=40] (5)
(3) edge[red,bend left=15] (5)
(4) edge[Green,bend left=40] (5)
(4) edge[red,bend left=15] (5)
(5) edge[Green,bend left=40] (1)
(5) edge[red,bend left=15] (1)
(5) edge[Green,bend left=40] (2)
(5) edge[red,bend left=15] (2)
(5) edge[Green,bend left=40] (3)
(5) edge[red,bend left=15] (3)
(5) edge[Green,bend left=40] (4)
(5) edge[red,bend left=15] (4)
;
\end{tikzpicture}
\end{array}
\]
\end{example}
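The claims of this example can be double-checked numerically with the helpers sketched in the introduction (\texttt{async\_graph}, \texttt{attractors}, \texttt{subspace\_hull} and \texttt{classify} are our hypothetical names); components are $0$-indexed, so \texttt{x[4]} stands for $x_5$.
\begin{verbatim}
# Sketch: reproduce the claims of the example above with the earlier helpers.
def f(x):
    return (int(x[3] == x[4]),   # f_1 = x4 x5 or (not x4)(not x5)
            x[0] ^ x[4],         # f_2 = x1 xor x5
            x[1] ^ x[4],         # f_3 = x2 xor x5
            x[2] ^ x[4],         # f_4 = x3 xor x5
            int((x[0] and x[2] and not x[1]) or (x[0] and x[3] and not x[2])
                or (x[1] and not x[0] and not x[2])
                or (x[2] and not x[0] and not x[3])))

atts = attractors(async_graph(f, 5))
print([subspace_hull(A, 5) for A in atts])   # expected: {4: 0} and {4: 1}
print(classify(f, 5))                        # expected: (True, False, False)
\end{verbatim}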
If we are interested in signed digraphs that are separating but neither converging nor fixing and that have no vertex-disjoint positive and negative cycles, then we can identify examples in smaller dimension.
\begin{example}\label{ex:sep-not-conv-fix-graph}
Consider the following signed digraph $G$, which has exactly one negative and one positive cycle that intersect:
\[
\begin{tikzpicture}
\node[outer sep=1,inner sep=2,circle,draw,thick] (1) at ({180}:1){$1$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (2) at ({0}:1){$2$};
\draw[red,->,thick] (2.{-60}) .. controls ({0-30}:2.8) and ({0+30}:2.8) .. (2.{0+60});
\draw[Green,->,thick] (2.{0-20}) .. controls ({0-20}:2.3) and ({0+20}:2.3) .. (2.{0+20});
\path[->,thick]
(1) edge[red,bend left=25] (2)
(1) edge[Green,bend right=25] (2)
;
\end{tikzpicture}
\]
It is neither converging nor fixing, but is separating, since all BNs in $F(G)$ are either converging or fixing.
Indeed, the asynchronous graphs of the four BNs in $F(G)$ are as follows:
\[
\begin{array}{c}
\begin{tikzpicture}
\pgfmathparse{1}
\node (00) at (0,0){{\boldmath \textcolor{Blue}{$00$} \unboldmath}};
\node (01) at (0,1.5){{\boldmath \textcolor{Blue}{$01$} \unboldmath}};
\node (10) at (1.5,0){$10$};
\node (11) at (1.5,1.5){$11$};
\path[thick,->,draw,black]
(00) edge[bend left=15,Blue,ultra thick] (01)
(11) edge (01)
(10) edge (00)
(01) edge[bend left=15,Blue,ultra thick] (00)
;
\end{tikzpicture}
\end{array}
\quad
\begin{array}{c}
\begin{tikzpicture}
\pgfmathparse{1}
\node (00) at (0,0){$00$};
\node (01) at (0,1.5){$01$};
\node (10) at (1.5,0){{\boldmath \textcolor{Purple}{$10$} \unboldmath}};
\node (11) at (1.5,1.5){{\boldmath \textcolor{Blue}{$11$} \unboldmath}};
\path[thick,->,draw,black]
(00) edge[bend left=15] (01)
(01) edge (11)
(00) edge (10)
(01) edge[bend left=15] (00)
;
\end{tikzpicture}
\end{array}
\quad
\begin{array}{c}
\begin{tikzpicture}
\pgfmathparse{1}
\node (00) at (0,0){$00$};
\node (01) at (0,1.5){$01$};
\node (10) at (1.5,0){{\boldmath \textcolor{Blue}{$10$} \unboldmath}};
\node (11) at (1.5,1.5){{\boldmath \textcolor{Blue}{$11$} \unboldmath}};
\path[thick,->,draw,black]
(11) edge[bend left=15,Blue,ultra thick] (10)
(01) edge (11)
(00) edge (10)
(10) edge[bend left=15,Blue,ultra thick] (11)
;
\end{tikzpicture}
\end{array}
\quad
\begin{array}{c}
\begin{tikzpicture}
\pgfmathparse{1}
\node (00) at (0,0){{\boldmath \textcolor{Purple}{$00$} \unboldmath}};
\node (01) at (0,1.5){{\boldmath \textcolor{Blue}{$01$} \unboldmath}};
\node (10) at (1.5,0){$10$};
\node (11) at (1.5,1.5){$11$};
\path[thick,->,draw,black]
(11) edge[bend left=15] (10)
(11) edge (01)
(10) edge (00)
(10) edge[bend left=15] (11)
;
\end{tikzpicture}
\end{array}
\]
\end{example}
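The four BNs in $F(G)$ can also be enumerated mechanically: the sketch below (reusing the hypothetical helpers \texttt{signed\_interaction\_graph}, \texttt{async\_graph} and \texttt{attractors} from earlier) lists all BNs on two components whose signed interaction graph equals $G$ and checks that each of them is converging or fixing; vertex $1$ is index $0$.
\begin{verbatim}
# Sketch: enumerate F(G) for the two-vertex graph above and check the claim.
from itertools import product

G = {(0, 1, 1), (0, 1, -1), (1, 1, 1), (1, 1, -1)}    # 0-indexed arcs of G
states = list(product((0, 1), repeat=2))
bns = []
for t0, t1 in product(product((0, 1), repeat=4), repeat=2):  # all truth tables
    table = {x: (t0[k], t1[k]) for k, x in enumerate(states)}
    h = lambda x, table=table: table[x]
    if signed_interaction_graph(h, 2) == G:
        bns.append(h)

print(len(bns))                                       # the example claims 4
for h in bns:
    atts = attractors(async_graph(h, 2))
    print(len(atts) == 1 or all(len(A) == 1 for A in atts))  # converging or fixing
\end{verbatim}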
\subsection{Switches}
An \EM{isometry} on $\{0,1\}^V$ is a bijection $\pi\colon\{0,1\}^V\to\{0,1\}^V$ such that $d(x,y)=d(\pi(x),\pi(y))$ for all $x,y\in\{0,1\}^V$. Let $\Gamma,\Gamma'$ be two asynchronous graphs on $\{0,1\}^V$. We say that $\Gamma$ and $\Gamma'$ are isometric if there is an isometry $\pi$ on $\{0,1\}^V$ such that $x\to y$ is an arc of $\Gamma$ if and only if $\pi(x)\to \pi(y)$ is an arc of $\Gamma'$. One can easily check that if $\Gamma$ and $\Gamma'$ are isometric, then $\Gamma$ is \EM{converging} (resp. \EM{fixing}, \EM{trapping}, \EM{trap-separating}, \EM{separating}) if and only if $\Gamma'$ is converging (resp. fixing, trapping, trap-separating, separating).
Let $I\subseteq V$ and, for all $i\in V$, let $\sigma_I(i)=1$ if $i\in I$ and $\sigma_I(i)=-1$ otherwise. The \EM{$I$-switch} of $G$ is the signed digraph $\EMM{G^I}=(V,E^I)$ with $\EMM{E^I}=\{(j,i,\sigma_I(j)\cdot s\cdot\sigma_I(i))\mid (j,i,s)\in E\}$; note that $G^I=G^{V\setminus I}$ and $(G^I)^I=G$. We say that $G$ is \EM{switch equivalent} to $H$ if $H=G^I$ for some $I\subseteq V$. Obviously, $G$ and $G^I$ have the same underlying digraph. Note also that $C$ is a cycle in $G$ if and only if $C^I$ is a cycle in $G^I$, and $C$ and $C^I$ have the same sign. Thus if $G$ has no positive (negative) cycles then every switch of $G$ has no positive (negative) cycles: this property is invariant by switch. The \EM{symmetric version} of $G$ is the signed digraph $\EMM{G^s}=(V,E^s)$ where $\EMM{E^s}=E\cup \{(i,j,s)\mid (j,i,s)\in E\}$. A well-known result concerning switch is the following adaptation of Harary's theorem \cite{H53}.
\begin{proposition}\label{pro:harary}
A signed digraph $G$ is switch equivalent to a full-positive signed digraph if and only if $G^s$ has no negative cycle. Furthermore, if $G$ is strong, then $G^s$ has no negative cycle if and only if $G$ has no negative cycle.
\end{proposition}
There is an analogue of the switch operation for BNs. Let $f\in F(V)$ and $I\subseteq V$. The \EM{$I$-switch} of $f$ is the BN $h\in F(V)$ defined by $h(x)=f(x+e_I)+e_I$ for all configurations $x$ on $V$; note that if $h$ is the $I$-switch of $f$ then $f$ is the $I$-switch of $h$. The analogy comes from the first point of the following easy property.
\begin{proposition}\label{pro:BN_switch}
If $h$ is the $I$-switch of $f$, then
\begin{itemize}
\item $G(h)$ is the $I$-switch of $G(f)$,
\item $\Gamma(h)$ is isometric to $\Gamma(f)$, with the isometry $x\mapsto x+e_I$.
\end{itemize}
\end{proposition}
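The proposition is easy to check numerically on toy examples. The sketch below (ours; it reuses the hypothetical helpers \texttt{signed\_interaction\_graph} and \texttt{async\_graph} introduced earlier, with $0$-indexed components) implements the $I$-switch of a BN and of a signed digraph and verifies both items on a small network.
\begin{verbatim}
# Sketch: the I-switch of a BN and of a signed digraph, and a toy check of
# the proposition (reuses signed_interaction_graph and async_graph from above).
def switch_bn(f, I, n):
    """h(x) = f(x + e_I) + e_I, addition modulo 2 componentwise."""
    flip = lambda x: tuple(x[i] ^ (1 if i in I else 0) for i in range(n))
    return lambda x: flip(f(flip(x)))

def switch_graph(arcs, I):
    """The I-switch of a signed digraph given as triples (j, i, s)."""
    sigma = lambda v: 1 if v in I else -1
    return {(j, i, sigma(j) * s * sigma(i)) for (j, i, s) in arcs}

f = lambda x: (1 - x[1], x[0])      # toy example
I, n = {0}, 2
h = switch_bn(f, I, n)
# item 1: G(h) is the I-switch of G(f)
assert signed_interaction_graph(h, n) == switch_graph(signed_interaction_graph(f, n), I)
# item 2: Gamma(h) is Gamma(f) transported by the isometry x -> x + e_I
pi = lambda x: tuple(x[i] ^ (1 if i in I else 0) for i in range(n))
gf, gh = async_graph(f, n), async_graph(h, n)
assert all({pi(y) for y in gf[x]} == set(gh[pi(x)]) for x in gf)
\end{verbatim}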
\section{Intersections between positive and negative cycles}\label{sec:intersection_positive_negative}
What can we say about non-separating and non-trap-separating signed digraphs? If $G$ is non-separating or non-trap-separating then it is both non-fixing and non-converging. Hence, by \cref{thm:bib}, $G$ has both a positive and a negative cycle. Can we say something more? In this section, we provide the following answers: if $G$ is non-separating then it has intersecting cycles with opposite signs, and if it is non-trap-separating, then it has a path from a negative cycle to a positive cycle.
\begin{theorem}\label{thm:sep}
If $G$ has no two intersecting cycles with opposite signs, then $G$ is separating.
\end{theorem}
\begin{theorem}\label{thm:trap-sep}
If $G$ has no path from a negative cycle to a positive cycle, then $G$ is trap-separating.
\end{theorem}
Note that the condition of \cref{thm:trap-sep} is stronger than the condition of \cref{thm:sep}: if a vertex $i$ meets both a positive cycle $C^+$ and a negative cycle $C^-$, then the trivial graph with $i$ as its single vertex is a path (of length zero) from $C^-$ to $C^+$ (and from $C^+$ to $C^-$). Examples below show that the condition of \cref{thm:sep} (resp. \cref{thm:trap-sep}) is not sufficient to guarantee that $G$ is trap-separating (resp. trapping).
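For small signed digraphs, both hypotheses can be tested by brute force: enumerate the simple cycles together with their signs, then check whether two cycles of opposite signs share a vertex, or whether some vertex of a positive cycle is reachable from a vertex of a negative cycle. The following sketch (ours, exponential in general and meant only for illustration) does exactly this.
\begin{verbatim}
# Sketch: brute-force tests of the hypotheses of the two theorems above.
# A signed digraph is a set of arcs (j, i, s) with s in {+1, -1}.
def simple_cycles(arcs):
    """Yield (vertex set, sign) for every simple cycle (loops included)."""
    verts = sorted({v for j, i, _ in arcs for v in (j, i)})
    out = {v: [(i, s) for j, i, s in arcs if j == v] for v in verts}
    def extend(start, path, sign):
        for i, s in out[path[-1]]:
            if i == start:
                yield set(path), sign * s
            elif i > start and i not in path:
                yield from extend(start, path + [i], sign * s)
    for v in verts:                      # each cycle is rooted at its least vertex
        yield from extend(v, [v], 1)

def reach(arcs, sources):
    """Vertices reachable from `sources` in the underlying digraph."""
    seen, stack = set(sources), list(sources)
    while stack:
        v = stack.pop()
        for j, i, _ in arcs:
            if j == v and i not in seen:
                seen.add(i)
                stack.append(i)
    return seen

def intersecting_opposite_cycles(arcs):      # hypothesis of the first theorem fails
    cyc = list(simple_cycles(arcs))
    return any(s1 != s2 and c1 & c2 for c1, s1 in cyc for c2, s2 in cyc)

def path_neg_to_pos(arcs):                   # hypothesis of the second theorem fails
    cyc = list(simple_cycles(arcs))
    neg = set().union(*(c for c, s in cyc if s < 0))
    pos = set().union(*(c for c, s in cyc if s > 0))
    return bool(reach(arcs, neg) & pos)

# The interaction graph of the next example: a negative 3-cycle feeding a positive loop.
G = {(1, 2, -1), (2, 3, -1), (3, 1, -1), (1, 4, 1), (2, 4, 1), (3, 4, 1), (4, 4, 1)}
print(intersecting_opposite_cycles(G))   # False
print(path_neg_to_pos(G))                # True
\end{verbatim}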
\begin{example}\label{ex:not-trap-sep}
Let $f\in F(4)$ be defined by $f_1(x)=\bar x_3$, $f_2(x)=\bar x_1$, $f_3(x)=\bar x_2$ and $f_4(x)=x_1x_2x_3\lor x_4x_1\lor x_4x_2\lor x_4x_3$. Then $\Gamma(f)$ is separating but not trap-separating, and $G(f)$ has exactly two cycles, of opposite signs, which are vertex-disjoint; hence $G(f)$ satisfies the condition of \cref{thm:sep}.
\[
\begin{array}{c}
\begin{tikzpicture}
\pgfmathparse{1}
\node (0000) at (0,0){$0000$};
\node (0010) at (1,1){{\boldmath \textcolor{Blue}{$0010$} \unboldmath}};
\node (0100) at (0,2){{\boldmath \textcolor{Blue}{$0100$} \unboldmath}};
\node (0110) at (1,3){{\boldmath \textcolor{Blue}{$0110$} \unboldmath}};
\node (1000) at (2,0){{\boldmath \textcolor{Blue}{$1000$} \unboldmath}};
\node (1010) at (3,1){{\boldmath \textcolor{Blue}{$1010$} \unboldmath}};
\node (1100) at (2,2){{\boldmath \textcolor{Blue}{$1100$} \unboldmath}};
\node (1110) at (3,3){$1110$};
\node (0001) at (5,0){$0001$};
\node (0011) at (6,1){{\boldmath \textcolor{Plum}{$0011$} \unboldmath}};
\node (0101) at (5,2){{\boldmath \textcolor{Plum}{$0101$} \unboldmath}};
\node (0111) at (6,3){{\boldmath \textcolor{Plum}{$0111$} \unboldmath}};
\node (1001) at (7,0){{\boldmath \textcolor{Plum}{$1001$} \unboldmath}};
\node (1011) at (8,1){{\boldmath \textcolor{Plum}{$1011$} \unboldmath}};
\node (1101) at (7,2){{\boldmath \textcolor{Plum}{$1101$} \unboldmath}};
\node (1111) at (8,3){$1111$};
\path[thick,->,draw,black]
(0100) edge[ultra thick,Blue] (1100)
(1100) edge[ultra thick,Blue] (1000)
(1000) edge[ultra thick,Blue] (1010)
(1010) edge[ultra thick,Blue] (0010)
(0010) edge[ultra thick,Blue] (0110)
(0110) edge[ultra thick,Blue] (0100)
(1110) edge (0110)
(1110) edge (1010)
(1110) edge (1100)
(1110) edge[bend left=20] (1111)
(0000) edge (1000)
(0000) edge (0100)
(0000) edge (0010)
(0101) edge[ultra thick,Plum] (1101)
(1101) edge[ultra thick,Plum] (1001)
(1001) edge[ultra thick,Plum] (1011)
(1011) edge[ultra thick,Plum] (0011)
(0011) edge[ultra thick,Plum] (0111)
(0111) edge[ultra thick,Plum] (0101)
(1111) edge (0111)
(1111) edge (1011)
(1111) edge (1101)
(0001) edge (1001)
(0001) edge (0101)
(0001) edge (0011)
(0001) edge[bend left=20] (0000)
;
\end{tikzpicture}
\end{array}
\qquad
\begin{array}{c}
\begin{tikzpicture}
\node[outer sep=1,inner sep=2,circle,draw,thick] (1) at (90:1){$1$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (2) at ({90+120}:1){$2$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (3) at ({90-120}:1){$3$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (4) at (-90:2){$4$};
\draw[Green,->,thick] (4.-112) .. controls (-100:2.8) and (-80:2.8) .. (4.-68);
\path[->,thick]
(1) edge[red,bend right=15] (2)
(2) edge[red,bend right=15] (3)
(3) edge[red,bend right=15] (1)
(1) edge[Green] (4)
(2) edge[Green] (4)
(3) edge[Green] (4)
;
\end{tikzpicture}
\end{array}
\]
\end{example}
\begin{example}\label{ex:non-trapping}
Let $f\in F(4)$ be defined by $f_1(x)=\bar x_3$, $f_2(x)=\bar x_1$, $f_3(x)=\bar x_2$ and $f_4(x)=x_1x_2x_3$. Since $G(f)$ has no positive cycle, $\Gamma(f)$ has a unique attractor~$A$, but $[A]$ is not a trap space: $x_4=0$ for all $x\in A$ but $\Gamma(f)$ has an arc from $1010$ to $1011$. Hence $\Gamma(f)$ is converging, and thus trap-separating, but not trapping. Furthermore, since $G(f)$ has no positive cycle, it satisfies the condition of \cref{thm:trap-sep}.
\[
\begin{array}{c}
\begin{tikzpicture}
\pgfmathparse{1}
\node (0000) at (0,0){$0000$};
\node (0010) at (1,1){{\boldmath \textcolor{Blue}{$0010$} \unboldmath}};
\node (0100) at (0,2){{\boldmath \textcolor{Blue}{$0100$} \unboldmath}};
\node (0110) at (1,3){{\boldmath \textcolor{Blue}{$0110$} \unboldmath}};
\node (1000) at (2,0){{\boldmath \textcolor{Blue}{$1000$} \unboldmath}};
\node (1010) at (3,1){{\boldmath \textcolor{Blue}{$1010$} \unboldmath}};
\node (1100) at (2,2){{\boldmath \textcolor{Blue}{$1100$} \unboldmath}};
\node (1110) at (3,3){$1110$};
\node (0001) at (5,0){$0001$};
\node (0011) at (6,1){$0011$};
\node (0101) at (5,2){$0101$};
\node (0111) at (6,3){$0111$};
\node (1001) at (7,0){$1001$};
\node (1011) at (8,1){$1011$};
\node (1101) at (7,2){$1101$};
\node (1111) at (8,3){$1111$};
\path[thick,->,draw,black]
(0100) edge[ultra thick,Blue] (1100)
(1100) edge[ultra thick,Blue] (1000)
(1000) edge[ultra thick,Blue] (1010)
(1010) edge[ultra thick,Blue] (0010)
(0010) edge[ultra thick,Blue] (0110)
(0110) edge[ultra thick,Blue] (0100)
(1110) edge (0110)
(1110) edge (1010)
(1110) edge (1100)
(0000) edge (1000)
(0000) edge (0100)
(0000) edge (0010)
(0101) edge (1101)
(1101) edge (1001)
(1001) edge (1011)
(1011) edge (0011)
(0011) edge (0111)
(0111) edge (0101)
(1111) edge (0111)
(1111) edge (1011)
(1111) edge (1101)
(0001) edge (1001)
(0001) edge (0101)
(0001) edge (0011)
(0001) edge[bend left=20] (0000)
(0011) edge[bend left=20] (0010)
(0101) edge[bend right=20] (0100)
(0111) edge[bend right=20] (0110)
(1001) edge[bend left=20] (1000)
(1011) edge[bend left=20] (1010)
(1101) edge[bend right=20] (1100)
(1110) edge[bend left=20] (1111)
;
\end{tikzpicture}
\end{array}
\qquad
\begin{array}{c}
\begin{tikzpicture}
\node[outer sep=1,inner sep=2,circle,draw,thick] (1) at (90:1){$1$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (2) at ({90+120}:1){$2$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (3) at ({90-120}:1){$3$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (4) at (-90:2){$4$};
\path[->,thick]
(1) edge[red,bend right=15] (2)
(2) edge[red,bend right=15] (3)
(3) edge[red,bend right=15] (1)
(1) edge[Green] (4)
(2) edge[Green] (4)
(3) edge[Green] (4)
;
\end{tikzpicture}
\end{array}
\]
\end{example}
\medskip
The strategy to prove \cref{thm:sep} is roughly the following. Suppose that any two intersecting cycles of $G$ have the same sign. Then, in each strong component $H$, all the cycles have the same sign and thus, by \cref{thm:bib}, $H$ is fixing or converging, and thus separating. This suggests a proof by induction on the number of strong components, the base case ($G$ itself is strong) being given by the above argument. However, if all the strong components of $G$ are separating, then $G$ is not necessarily separating, as shown by the following examples (note that \cref{ex:not-trap-sep} and \cref{ex:non-trapping} show that, if all the strong components of $G$ are trap-separating or trapping, then $G$ is not necessarily trap-separating or trapping).
\begin{example}
\label{ex:not-sep}
Let $f\in F(3)$ be defined by $f_1(x)=\bar x_1$, $f_2(x)=\bar x_1 x_3 \lor x_2 \bar x_3$ and $f_3(x)=x_1 x_2 \lor \bar x_2 x_3$. Then $\Gamma(f)$ is non-separating and $G(f)$ has exactly two strong components, $G[\{1\}]$ and $G[\{2,3\}]$, both converging (the second trivially so, since $F(G[\{2,3\}])=\emptyset$).
\[
\begin{array}{c}
\begin{tikzpicture}
\pgfmathparse{1}
\node (000) at (0,0){{\boldmath \textcolor{Plum}{$000$} \unboldmath}};
\node (001) at (1,1){{\boldmath \textcolor{Blue}{$001$} \unboldmath}};
\node (010) at (0,2){{\boldmath \textcolor{Blue}{$010$} \unboldmath}};
\node (011) at (1,3){{\boldmath \textcolor{Blue}{$011$} \unboldmath}};
\node (100) at (2,0){{\boldmath \textcolor{Plum}{$100$} \unboldmath}};
\node (101) at (3,1){{\boldmath \textcolor{Blue}{$101$} \unboldmath}};
\node (110) at (2,2){{\boldmath \textcolor{Blue}{$110$} \unboldmath}};
\node (111) at (3,3){{\boldmath \textcolor{Blue}{$111$} \unboldmath}};
\path[ultra thick,->,draw,Blue]
(000) edge[bend left=10,Plum] (100)
(001) edge (011)
(001) edge[bend left=10] (101)
(010) edge[bend left=10] (110)
(011) edge[bend left=10] (111)
(011) edge (010)
(100) edge[bend left=10,Plum] (000)
(101) edge[bend left=10] (001)
(110) edge (111)
(110) edge[bend left=10] (010)
(111) edge (101)
(111) edge[bend left=10] (011)
;
\end{tikzpicture}
\end{array}
\qquad
\begin{array}{c}
\begin{tikzpicture}
\node[outer sep=1,inner sep=2,circle,draw,thick] (1) at (90:1){$1$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (2) at ({90+120}:1){$2$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (3) at ({90-120}:1){$3$};
\draw[red,->,thick] (1) to [out=120,in=60,looseness=8] (1);
\draw[Green,->,thick] (2) to [out=120,in=210,looseness=6] (2);
\draw[Green,->,thick] (3) to [out=60,in=-30,looseness=6] (3);
\path[->,thick]
(1) edge[red,bend right=15,->] (2)
(1) edge[Green,bend left=15] (3)
(2) edge[Green,bend right=13] (3)
(2) edge[red,bend right=40,->] (3)
(3) edge[Green,bend right=13] (2)
(3) edge[red,bend right=40,->] (2)
;
\end{tikzpicture}
\end{array}
\]
\end{example}
\begin{example}
\label{ex:not-sep-2}
We give a second example, where the set $F(H)$ is non-empty for all strong components $H$ of $G$.
Consider $f\in F(4)$ defined by $f_1(x)=x_1x_3 \vee \bar x_2 x_3$, $f_2(x)= x_2 x_3 \vee \bar x_1 x_3$, $f_3(x)= x_1x_2 \vee \bar x_4$ and $f_4(x)=\bar x_4$. Then $\Gamma(f)$ is non-separating as shown in the figure below. $G(f)$ has two strong components, $G[\{4\}]$, which is converging, and $G[\{1,2,3\}]$ which is separating by~\cref{thm:fvs2}, since it has feedback number two and all of its negative cycles contain all three vertices.
\[
\begin{array}{c}
\begin{tikzpicture}
\pgfmathparse{1}
\node (0000) at (0,0){{\boldmath \textcolor{Blue}{$0000$} \unboldmath}};
\node (0010) at (1,1){{\boldmath \textcolor{Blue}{$0010$} \unboldmath}};
\node (0100) at (0,2){{\boldmath \textcolor{Blue}{$0100$} \unboldmath}};
\node (0110) at (1,3){{\boldmath \textcolor{Blue}{$0110$} \unboldmath}};
\node (1000) at (2,0){{\boldmath \textcolor{Blue}{$1000$} \unboldmath}};
\node (1010) at (3,1){{\boldmath \textcolor{Blue}{$1010$} \unboldmath}};
\node (1100) at (2,2){$1100$};
\node (1110) at (3,3){{\boldmath \textcolor{Plum}{$1110$} \unboldmath}};
\node (0001) at (5,0){{\boldmath \textcolor{Blue}{$0001$} \unboldmath}};
\node (0011) at (6,1){{\boldmath \textcolor{Blue}{$0011$} \unboldmath}};
\node (0101) at (5,2){{\boldmath \textcolor{Blue}{$0101$} \unboldmath}};
\node (0111) at (6,3){{\boldmath \textcolor{Blue}{$0111$} \unboldmath}};
\node (1001) at (7,0){{\boldmath \textcolor{Blue}{$1001$} \unboldmath}};
\node (1011) at (8,1){{\boldmath \textcolor{Blue}{$1011$} \unboldmath}};
\node (1101) at (7,2){$1101$};
\node (1111) at (8,3){{\boldmath \textcolor{Plum}{$1111$} \unboldmath}};
\path[thick,->,draw,black]
(0000) edge[ultra thick,bend right=20,Blue] (0001)
(0000) edge[ultra thick,Blue] (0010)
(0001) edge[ultra thick,bend left=20,Blue] (0000)
(0010) edge[ultra thick,bend right=20,Blue] (0011)
(0010) edge[ultra thick,Blue] (0110)
(0010) edge[ultra thick,Blue] (1010)
(0011) edge[ultra thick,Blue] (0001)
(0011) edge[ultra thick,bend left=20,Blue] (0010)
(0011) edge[ultra thick,Blue] (0111)
(0011) edge[ultra thick,Blue] (1011)
(0100) edge[ultra thick,bend left=20,Blue] (0101)
(0100) edge[ultra thick,Blue] (0110)
(0100) edge[ultra thick,Blue] (0000)
(0101) edge[ultra thick,Blue] (0001)
(0101) edge[ultra thick,bend right=20,Blue] (0100)
(0110) edge[ultra thick,bend left=20,Blue] (0111)
(0111) edge[ultra thick,Blue] (0101)
(0111) edge[ultra thick,bend right=20,Blue] (0110)
(1000) edge[ultra thick,Blue] (1010)
(1000) edge[ultra thick,bend right=20,Blue] (1001)
(1000) edge[ultra thick,Blue] (0000)
(1001) edge[ultra thick,Blue] (0001)
(1001) edge[ultra thick,bend left=20,Blue] (1000)
(1010) edge[ultra thick,bend right=20,Blue] (1011)
(1011) edge[ultra thick,Blue] (1001)
(1011) edge[ultra thick,bend left=20,Blue] (1010)
(1100) edge (0100)
(1100) edge (1000)
(1100) edge (1110)
(1100) edge[bend left=20] (1101)
(1101) edge[bend right=20] (1100)
(1101) edge (0101)
(1101) edge (1001)
(1101) edge (1111)
(1110) edge[ultra thick,bend left=20,Plum] (1111)
(1111) edge[ultra thick,bend right=20,Plum] (1110)
;
\end{tikzpicture}
\end{array}
\qquad
\begin{array}{c}
\begin{tikzpicture}
\node[outer sep=1,inner sep=2,circle,draw,thick] (1) at (0,0){$1$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (2) at (2,0){$2$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (3) at (0,-2){$3$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (4) at (2,-2){$4$};
\draw[red,->,thick] (4) to [out=60,in=-30,looseness=6] (4);
\draw[Green,->,thick] (1) to [out=120,in=210,looseness=6] (1);
\draw[Green,->,thick] (2) to [out=60,in=-30,looseness=6] (2);
\path[->,thick]
(1) edge[red,bend right=15] (2)
(1) edge[Green,bend left=15] (3)
(2) edge[red,bend right=15] (1)
(2) edge[Green,bend right=15] (3)
(3) edge[Green,bend left=15] (1)
(3) edge[Green,bend right=15] (2)
(4) edge[red] (3)
;
\end{tikzpicture}
\end{array}
\]
\end{example}
On the other hand, if all the cycles of $H$ have the same sign, then $H$ is more than separating: it is ``robustly'' separating (a formal definition will be given below), and it turns out that if each strong component of $G$ is ``robustly'' separating, then $G$ is separating and we are done.
\medskip
Formally, $G$ is \EM{robustly separating} if, for any non-empty set $F$ of BNs such that $G(f)$ is a spanning subgraph of $G$ for all $f\in F$, the (joint) union $\bigcup_{f\in F} \Gamma(f)$ is separating (a spanning subgraph of $G$ is a subgraph of $G$ with vertex set $V$). We define similarly the notions of \EM{robustly converging} and \EM{robustly trapping}.
\begin{example}
The graph $G[\{1,2,3\}]$ of \cref{ex:not-sep-2} is separating but not robustly separating:
the maps $f(x)=(\bar x_2 \lor x_1 x_3, x_2 \bar x_1 \lor x_3 \bar x_1, x_1 \lor x_2)$ and $g(x)=(x_1 x_3 \lor x_3 \bar x_2, x_3 \lor x_2 \bar x_1, x_1 x_2)$ are fixing, but the union of $\Gamma(f)$ and $\Gamma(g)$ is not separating. The asynchronous graphs of $f$ and $g$ and their union are as follows:
\[
\begin{array}{c}
\begin{tikzpicture}
\pgfmathparse{1}
\node (000) at (0,0){{$000$}};
\node (001) at (1,1){{$001$}};
\node (010) at (0,2){$010$};
\node (011) at (1,3){{\boldmath \textcolor{Blue}{$011$} \unboldmath}};
\node (100) at (2,0){$100$};
\node (101) at (3,1){{\boldmath \textcolor{Plum}{$101$} \unboldmath}};
\node (110) at (2,2){$110$};
\node (111) at (3,3){$111$};
\path[thick,->,draw]
(000) edge (100)
(001) edge (101)
(001) edge (000)
(001) edge (011)
(010) edge (011)
(100) edge (101)
(110) edge (100)
(110) edge (111)
(110) edge (010)
(111) edge (101)
;
\end{tikzpicture}
\end{array}
\qquad
\begin{array}{c}
\begin{tikzpicture}
\pgfmathparse{1}
\node (000) at (0,0){{\boldmath \textcolor{Plum}{{$000$}} \unboldmath}};
\node (001) at (1,1){{$001$}};
\node (010) at (0,2){{\boldmath \textcolor{Blue}{$010$} \unboldmath}};
\node (011) at (1,3){$011$};
\node (100) at (2,0){$100$};
\node (101) at (3,1){$101$};
\node (110) at (2,2){$110$};
\node (111) at (3,3){{\boldmath \textcolor{Emerald}{$111$} \unboldmath}};
\path[thick,->,draw]
(001) edge (101)
(001) edge (000)
(001) edge (011)
(011) edge (010)
(100) edge (000)
(101) edge (100)
(101) edge (111)
(110) edge (100)
(110) edge (111)
(110) edge (010)
;
\end{tikzpicture}
\end{array}
\qquad
\begin{array}{c}
\begin{tikzpicture}
\pgfmathparse{1}
\node (000) at (0,0){{\boldmath \textcolor{Plum}{$000$} \unboldmath}};
\node (001) at (1,1){$001$};
\node (010) at (0,2){{\boldmath \textcolor{Blue}{$010$} \unboldmath}};
\node (011) at (1,3){{\boldmath \textcolor{Blue}{$011$} \unboldmath}};
\node (100) at (2,0){{\boldmath \textcolor{Plum}{$100$} \unboldmath}};
\node (101) at (3,1){{\boldmath \textcolor{Plum}{$101$} \unboldmath}};
\node (110) at (2,2){$110$};
\node (111) at (3,3){{\boldmath \textcolor{Plum}{$111$} \unboldmath}};
\path[thick,->,draw]
(000) edge[bend left=15,Plum,ultra thick] (100)
(001) edge (101)
(001) edge (000)
(001) edge (011)
(010) edge[bend left=15,Blue,ultra thick] (011)
(011) edge[bend left=15,Blue,ultra thick] (010)
(100) edge[bend left=15,Plum,ultra thick] (000)
(100) edge[bend left=15,Plum,ultra thick] (101)
(101) edge[bend left=15,Plum,ultra thick] (100)
(101) edge[bend left=15,Plum,ultra thick] (111)
(110) edge (100)
(110) edge (111)
(110) edge (010)
(111) edge[bend left=15,Plum,ultra thick] (101)
;
\end{tikzpicture}
\end{array}
\]
\end{example}
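The above claim is easy to check mechanically. The following Python sketch (an illustration of ours, not part of the formal development; components are $0$-indexed and configurations are stored as bit tuples) builds the asynchronous graphs of $f$ and $g$, computes the attractors of a graph as its terminal strongly connected components, and tests whether the subcubes enclosing two distinct attractors intersect.
\begin{verbatim}
# Check of the example above (illustration only; 0-indexed components,
# configurations as bit tuples).
from itertools import product

CONFS = list(product((0, 1), repeat=3))

def f(x):  # f = (~x2 | x1 x3, x2 ~x1 | x3 ~x1, x1 | x2)
    x1, x2, x3 = x
    return ((1 - x2) | (x1 & x3), (x2 & (1 - x1)) | (x3 & (1 - x1)), x1 | x2)

def g(x):  # g = (x1 x3 | x3 ~x2, x3 | x2 ~x1, x1 x2)
    x1, x2, x3 = x
    return ((x1 & x3) | (x3 & (1 - x2)), x3 | (x2 & (1 - x1)), x1 & x2)

def async_arcs(h):
    # arcs x -> x + e_i of the asynchronous graph of h
    return {(x, x[:i] + (1 - x[i],) + x[i+1:])
            for x in CONFS for i in range(3) if h(x)[i] != x[i]}

def attractors(arcs):
    # attractors = terminal strongly connected components
    succ = {x: [y for (z, y) in arcs if z == x] for x in CONFS}
    def reach(x):
        seen, stack = {x}, [x]
        while stack:
            for y in succ[stack.pop()]:
                if y not in seen:
                    seen.add(y); stack.append(y)
        return seen
    R = {x: reach(x) for x in CONFS}
    return {frozenset(R[x]) for x in CONFS if all(x in R[y] for y in R[x])}

def separating(arcs):
    atts = list(attractors(arcs))
    return not any(A != B and all({a[i] for a in A} & {b[i] for b in B}
                                  for i in range(3))
                   for A in atts for B in atts)

print(separating(async_arcs(f)), separating(async_arcs(g)))  # True True
print(separating(async_arcs(f) | async_arcs(g)))             # False
\end{verbatim}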
\medskip
Below, we prove that if $G$ is strong and has only negative (positive) cycles, then $G$ is robustly converging (trapping) and thus robustly separating.
\begin{lemma}\label{lem:robustly_converging}
If all the cycles of $G$ are negative, then $G$ is robustly converging.
\end{lemma}
\begin{proof}
Suppose that all the cycles of $G$ are negative. Let $F$ be a set of BNs such that $G(f)$ is a spanning subgraph of $G$ for all $f\in F$, and let $\Gamma=\bigcup_{f\in F} \Gamma(f)$. Suppose that $\Gamma$ has two distinct attractors $A$ and $B$, and let $f\in F$. Since $\Gamma(f)$ is a subgraph of $\Gamma$, $A$ and $B$ are trap sets of $\Gamma(f)$, and thus $\Gamma(f)$ has at least two distinct attractors, one included in $A$, and the other included in $B$. But since $G(f)$ is a subgraph of $G$, it has only negative cycles. Thus $\Gamma(f)$ is converging by \cref{thm:bib}, and we obtain a contradiction. This proves that $\Gamma$ is converging.
\end{proof}
To treat the case where all the cycles are positive, we need the following lemma.
\begin{lemma}\label{lem:monotone_union}
Let $f^1,\dots,f^\ell$ be $\ell$ monotone BNs with component set $V$. Let
\[
\Gamma=\Gamma(f^1)\cup\cdots\cup \Gamma(f^\ell)
\]
be the joint union of the corresponding asynchronous graphs. Then $\Gamma$ is trapping.
\end{lemma}
\begin{proof}
We need:
\begin{quote}
(1) {\em If $\Gamma$ has no decreasing arc starting from $a$ then $[a,\mathbf{1}]$ is a trap space, and if $\Gamma$ has no increasing arc starting from $b$ then $[\mathbf{0},b]$ is a trap space.}
\smallskip
Suppose that $[a,\mathbf{1}]$ is not a trap space. Then $\Gamma$ has an arc $x\to y$ leaving $[a,\mathbf{1}]$. Let $i$ be the direction of this arc. Then $x_i=a_i=1$ and $y_i=0$. Thus $f^k_i(x)=0$ for some $1\leq k\leq \ell$, and since $f^k$ is monotone and $a\leq x$, we have $f^k_i(a)\leq f^k_i(x)=0$. Since $a_i=1$, we deduce that $\Gamma(f^k)$, and thus $\Gamma$, has an arc $a\to a'$ with $a'_i=0$, which is decreasing. This proves the first assertion, and the second is similar.
\end{quote}
Let $A$ be an attractor of $\Gamma$.
\begin{quote}
(2) {\em $A$ has a unique minimal element and a unique maximal element.}
\smallskip
Consider $a,a'$ minimal elements of $A$.
Since $A$ is a trap set, every arc of $\Gamma$ starting from $a$ ends in $A$; by the minimality of $a$, none of these arcs is decreasing. Thus, by (1), $[a,\mathbf{1}]$ is a trap space, and $[a',\mathbf{1}]$ is a trap space by the same argument.
Since $a' \in A$, we have $a'\in [a,\mathbf{1}]$, and therefore $a'\geq a$, and symmetrically $a \geq a'$, which proves $a=a'$. We prove similarly that $A$ has a unique maximal element.
\end{quote}
Let $a$ and $b$ be the minimal and maximal element of $A$. Then $[A]=[a,b]$ and, by (1), $[a,\mathbf{1}]$ and $[\mathbf{0},b]$ are trap spaces.
\begin{quote}
(3) {\em $[A]$ is a trap space of $\Gamma$.}
\smallskip
Suppose that $x\to y$ is an arc leaving $[A]$, and let $i$ be the direction of this arc. Then $x_i=a_i=b_i\neq y_i$. If $x_i=1$ we deduce that $x\to y$ leaves $[a,\mathbf{1}]$ and if $x_i=0$ we deduce that $x\to y$ leaves $[\mathbf{0},b]$, and in both cases we obtain a contradiction.
\end{quote}
\begin{quote}
(4) {\em $\Gamma$ has a path from every configuration in $[A]$ to $A$.}
\smallskip
For every $x\in [A]$ we prove, by induction on $d(x,b)$, that $\Gamma$ has a path from $x$ to $b$. If $d(x,b)=0$ then there is nothing to prove. So suppose that $d(x,b)>0$. If $\Gamma$ has no increasing arc starting from $x$ then, by (1), $[\mathbf{0},x]$ is a trap space, which contains $a$ but not $b$. Since $a,b\in A$, $\Gamma$ has a path from $a$ to $b$, and thus $[\mathbf{0},x]$ is not a trap space, a contradiction. Hence $\Gamma$ has an increasing arc $x\to y$. By (3), $[A]$ is a trap space, so $y\leq b$, and since $x\leq y$ we have $d(y,b)<d(x,b)$. Thus, by induction, $\Gamma$ has a path from $y$ to $b$, and by adding $x\to y$ to this path, we obtain the desired path.
\end{quote}
Suppose that $\Gamma$ has an attractor $B\neq A$. For $X\in\{A,B\}$, let $R(X)$ be the set of configurations $x$ such that $\Gamma$ has a path from $x$ to $X$. By (3) and (4), we have $[A]=\langle A\rangle$ and $[A]\subseteq R(A)$ and, similarly, $[B]=\langle B\rangle$ and $[B]\subseteq R(B)$. Suppose, for a contradiction, that there is $x\in [A]\cap [B]$. Then $x\in R(A)$, so $\Gamma$ has a path from $x$ to some configuration of $A$, and since $x\in\langle B\rangle$ and $\langle B\rangle=[B]$ is a trap space, this path stays in $[B]$; hence $A\cap [B]\neq\emptyset$. Then $A\cap[B]$ is a non-empty trap set included in $A$ and, since $A$ is an inclusion-minimal trap set, $A\subseteq [B]\subseteq R(B)$. Thus $\Gamma$ has a path from $A$ to $B$, which is a contradiction since $A,B$ are distinct attractors. Thus $[A]\cap [B]=\emptyset$, and thus $\Gamma$ is trapping.
\end{proof}
We deduce:
\begin{lemma}\label{lem:robustly_trapping}
If $G$ is strong and has only positive cycles, then $G$ is robustly trapping.
\end{lemma}
\begin{proof}
Suppose that $G$ is strong and has only positive cycles. By \cref{pro:harary} and \cref{pro:BN_switch} we can suppose that $G$ is full-positive. Let $F$ be a set of BNs such that $G(f)$ is a spanning subgraph of $G$ for all $f\in F$, and let $\Gamma=\bigcup_{f\in F} \Gamma(f)$. For every $f\in F$, since $G(f)$ is full-positive, $f$ is monotone and we deduce from Lemma~\ref{lem:monotone_union} that $\Gamma$ is trapping.
\end{proof}
Going back to the proof of \cref{thm:sep}, we now know that if any two intersecting cycles of $G$ have the same sign, then each strong component of $G$ is robustly separating. It remains to prove that this implies that $G$ is separating. For that, we need a decomposition technique for non-strong signed digraphs. If $G$ is not strong, then there is a partition $(I_1,I_2)$ of the vertices such that $G$ has no arc from $I_2$ to $I_1$. Given $f\in F(G)$, we then show that each attractor of $\Gamma(f)$ can be regarded as the Cartesian product of an attractor of the asynchronous graph of the ``restriction'' of $f$ to $I_1$ and an attractor of a union of asynchronous graphs of BNs whose signed interaction digraph is a spanning subgraph of $G[I_2]$. (The union involved in the definition of robustly separating is actually motivated by the union involved in this decomposition.) The details follow.
\medskip
Let $f\in F(V)$, and let $(I_1,I_2)$ be a partition of $V$ without empty part. We identify $\{0,1\}^V$ with $\{0,1\}^{I_1}\times \{0,1\}^{I_2}$. Thus we regard each configuration $x$ on $V$ as a pair $x=(x_{I_1},x_{I_2})$. We denote by $f^1$ the subnetwork of $f$ induced by $[(\mathbf{0},\mathbf{0}),(\mathbf{1},\mathbf{0})]$ and set $\Gamma^1=\Gamma(f^1)$. Hence $f^1$ is obtained by fixing to $0$ each component in $I_2$. Next, for every configuration $x$ on $I_1$, we denote by $f^x$ the subnetwork of $f$ induced by $[(x,\mathbf{0}),(x,\mathbf{1})]$. Hence $f^x$ is obtained by fixing to $x_i$ each component $i$ in $I_1$. Let $A$ be an attractor of $\Gamma(f)$. We set:
\[
A^1=\{a_{I_1}\mid a\in A\},\qquad A^2=\{a_{I_2}\mid a\in A\},\qquad \Gamma^2_A=\bigcup_{x\in A^1} \Gamma(f^x).
\]
\begin{lemma}\label{lem:decomposition}
Let $f\in F(V)$. Let $(I_1,I_2)$ be a partition of $V$ without empty part. Suppose that $G(f)$ has no arc from $I_2$ to $I_1$. For every attractor $A$ of $\Gamma(f)$:
\begin{itemize}
\item $A=A^1\times A^2$,
\item $A^1$ is an attractor of $\Gamma^1$,
\item $A^2$ is an attractor of $\Gamma^2_A$.
\end{itemize}
\end{lemma}
\begin{proof}
Let $x$ be a configuration on $I_1$ and let $a$ be a configuration on $I_2$. Since $G(f)$ has no arc from $I_2$ to $I_1$, we have $f(x,a)_{I_1}=f(x,\mathbf{0})_{I_1}=f^1(x)$ and we deduce that
\begin{quote}
(1) {\em $x\to y$ is an arc of $\Gamma^1$ if and only if $(x,a)\to (y,a)$ is an arc of $\Gamma(f)$.}
\end{quote}
\medskip
It follows that $A^1$ is an attractor of $\Gamma^1$. Indeed, let $x\in A^1$ and let $a$ be a configuration on $I_2$ such that $(x,a)\in A$. If $x\to y$ is an arc of $\Gamma^1$ then, by (1), $(x,a)\to (y,a)$ is an arc of $\Gamma(f)$ and since $(x,a)\in A$ we have $(y,a)\in A$ and thus $y\in A^1$. So $A^1$ is a trap set of $\Gamma^1$. Let $B^1$ be a strict subset of $A^1$. Let $x\in B^1$, $y\in A^1\setminus B^1$, and let $a,b$ be configurations on $I_2$ such that $(x,a),(y,b)\in A$. Since $A$ is an attractor of $\Gamma(f)$, there is a path from $(x,a)$ to $(y,b)$, and this path contains an arc $(z,c)\to (z',c)$ with $z\in B^1$ and $z'\in A^1\setminus B^1$. Then, by (1), $z\to z'$ is an arc of $\Gamma^1$ leaving $B^1$. Hence $A^1$ is an inclusion-minimal trap set of $\Gamma^1$, as desired.
\medskip
We now prove that $A=A^1\times A^2$. It is sufficient to prove that $A^1\times A^2\subseteq A$ since the other direction is clear. Let $(y,a)\in A^1\times A^2$. Since $a\in A^2$, there is $x\in A^1$ such that $(x,a)\in A$. Since $A^1$ is an attractor of $\Gamma^1$, $\Gamma^1$ has a path from $x$ to $y$ and we deduce from (1) that $\Gamma(f)$ has a path from $(x,a)$ to $(y,a)$, and thus $(y,a)\in A$.
\medskip
We finally prove that $A^2$ is an attractor of $\Gamma^2_A$. Suppose that $\Gamma^2_A$ has an arc $a\to b$ with $a\in A^2$. There is $x\in A^1$ such that $a\to b$ is an arc of $\Gamma(f^x)$, and we deduce that $(x,a)\to (x,b)$ is an arc of $\Gamma(f)$. Since $A=A^1\times A^2$, we have $(x,a)\in A$ and thus $(x,b)\in A$ and we deduce that $b\in A^2$. So $A^2$ is a trap set of $\Gamma^2_A$. Let $B^2$ be a strict subset of $A^2$. Let $a\in B^2$, $b\in A^2\setminus B^2$, and let $x,y$ be configurations on $I_1$ such that $(x,a),(y,b)\in A$. Since $A$ is an attractor of $\Gamma(f)$, there is a path from $(x,a)$ to $(y,b)$, and this path contains an arc $(z,c)\to (z,c')$ with $c\in B^2$ and $c'\in A^2\setminus B^2$. Since $z\in A^1$, $c\to c'$ is an arc of $\Gamma(f^z)$, and thus an arc of $\Gamma^2_A$ leaving $B^2$. Hence $A^2$ is an inclusion-minimal trap set of $\Gamma^2_A$, as desired.
\end{proof}
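To illustrate the statement on a concrete instance, the following Python sketch runs the decomposition on a toy BN of our own (it is not taken from the text): $V=\{1,2,3\}$, $I_1=\{1\}$, $I_2=\{2,3\}$, $f_1=x_1$, $f_2=\bar x_3$, $f_3=x_2$, so that $G(f)$ has no arc from $I_2$ to $I_1$. It checks that every attractor of $\Gamma(f)$ factors as $A=A^1\times A^2$.
\begin{verbatim}
# Toy example (ours): V = {1,2,3}, I1 = {1}, I2 = {2,3}, no arc from I2 to I1.
from itertools import product

N = 3
CONFS = list(product((0, 1), repeat=N))

def f(x):  # f1 = x1, f2 = ~x3, f3 = x2
    x1, x2, x3 = x
    return (x1, 1 - x3, x2)

def succ(x):
    # asynchronous successors of x
    y = f(x)
    return [x[:i] + (1 - x[i],) + x[i+1:] for i in range(N) if y[i] != x[i]]

def reach(x):
    seen, stack = {x}, [x]
    while stack:
        for y in succ(stack.pop()):
            if y not in seen:
                seen.add(y); stack.append(y)
    return seen

R = {x: reach(x) for x in CONFS}
atts = {frozenset(R[x]) for x in CONFS if all(x in R[y] for y in R[x])}

for A in atts:
    A1 = {a[:1] for a in A}    # projection on I1 = {1}
    A2 = {a[1:] for a in A}    # projection on I2 = {2,3}
    assert A == {a1 + a2 for a1 in A1 for a2 in A2}   # A = A^1 x A^2
    print(sorted(A1), sorted(A2))
\end{verbatim}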
We deduce the following which, together with \cref{lem:robustly_converging} and \cref{lem:robustly_trapping}, implies \cref{thm:sep}.
\begin{lemma}\label{lem:decomposition_2}
If each strong component of $G$ is robustly separating, then $G$ is separating.
\end{lemma}
\begin{proof}
Suppose that every strong component of $G$ is robustly separating. We proceed by induction on the number of strong components. If $G$ is strong then the result is obvious. Otherwise, there is a partition $(I_1,I_2)$ of the vertices of $G$ such that $G$ has no arc from $I_2$ to $I_1$ and such that $G[I_2]$ is a strong component of $G$. Let $f\in F(G)$. Since $G(f^1)=G[I_1]$, by induction, $\Gamma^1$ is separating. Furthermore, for every attractor $A$ of $\Gamma(f)$ and $x\in A$, $G(f^x)$ is a spanning subgraph of $G[I_2]$, which is robustly separating, and we deduce that $\Gamma^2_A$ is separating. Let $A,B$ be distinct attractors of $\Gamma(f)$. By \cref{lem:decomposition}, we have $A=A^1\times A^2$ and $B=B^1\times B^2$. If $A^1\neq B^1$ then, by the same lemma, $A^1,B^1$ are distinct attractors of $\Gamma^1$, which is separating, thus $[A^1]\cap [B^1]=\emptyset$ and we deduce that $[A]\cap [B]=\emptyset$. Suppose now that $A^1=B^1$. Then, by the same lemma, $A^2,B^2$ are distinct attractors of $\Gamma^2_A=\Gamma^2_B$, which is separating. Thus $[A^2]\cap [B^2]=\emptyset$ and we deduce that $[A]\cap [B]=\emptyset$. This proves that $\Gamma(f)$ is separating.
\end{proof}
The proof of \cref{thm:trap-sep} follows the same line and is easier. Let us say that $G$ is \EM{perfectly fixing} if each subgraph of $G$ is fixing. Clearly, if all the cycles of $G$ are positive, then $G$ is perfectly fixing. Suppose now that the conditions of \cref{thm:trap-sep} are satisfied, that is, $G$ has no path from a negative cycle to a positive cycle. Then each strong component is either perfectly fixing (if all the cycles are positive) or robustly converging (if all the cycles are negative, \cref{lem:robustly_converging}), and there is no arc from a robustly converging component to a perfectly fixing component. We prove below that this is enough for $G$ to be trap-separating, and this proves \cref{thm:trap-sep}.
\begin{lemma}\label{lem:decomposition_3}
Suppose that each strong component of $G$ is either perfectly fixing or robustly converging, and that there is no arc from a robustly converging component to a perfectly fixing component. Then $G$ is trap-separating.
\end{lemma}
We need the following:
\begin{lemma}\label{lem:decomposition_4}
If each strong component of $G$ is perfectly fixing, then $G$ is perfectly fixing.
\end{lemma}
\begin{proof}
Suppose that every strong component of $G$ is perfectly fixing. We proceed by induction on the number of strong components. If $G$ is strong then the result is obvious. Otherwise, there is a partition $(I_1,I_2)$ of the vertices of $G$ such that $G$ has no arc from $I_2$ to $I_1$ and such that $G[I_2]$ is a strong component of $G$. Let $f\in F(G)$. Since $G(f^1)=G[I_1]$, by induction, $\Gamma^1$ is fixing. Let $A$ be an attractor of $\Gamma(f)$. By \cref{lem:decomposition}, we have $A=A^1\times A^2$ and $A^1$ is an attractor of $\Gamma^1$. Thus $A^1=\{a\}$ for some fixed point $a$ of $f^1$. By the same lemma, $A^2$ is an attractor of $\Gamma^2_{A}=\Gamma(f^{a})$. Since $G(f^{a})$ is a subgraph of $G[I_2]$, which is perfectly fixing, $G(f^{a})$ is fixing, and thus $|A^2|=1$. We deduce that $|A|=1$, and thus $\Gamma(f)$ is fixing. Consequently, $G$ is fixing. Let $G'$ be a subgraph of $G$. Then each strong component of $G'$ is perfectly fixing (being a subgraph of a strong component of $G$) and the argument above shows that $G'$ is fixing. Consequently, $G$ is perfectly fixing.
\end{proof}
\begin{lemma}\label{lem:decomposition_5}
Suppose that there is $I_2\subseteq V$ such that $G[I_2]$ is a robustly converging terminal strong component of $G$ and that $G\setminus I_2$ is trap-separating. Then $G$ is trap-separating.
\end{lemma}
\begin{proof}
Let $f\in F(G)$ and $I_1=V\setminus I_2$. Since $G(f^1)=G[I_1]$, $\Gamma^1$ is trap-separating. Furthermore, for every attractor $A$ of $\Gamma(f)$ and $x\in A$, $G(f^x)$ is a spanning subgraph of $G[I_2]$, which is robustly converging, and we deduce that $\Gamma^2_A$ is converging. Let $A,B$ be distinct attractors of $\Gamma(f)$. By \cref{lem:decomposition}, we have $A=A^1\times A^2$ and $B=B^1\times B^2$. If $A^1\neq B^1$ then, by the same lemma, $A^1,B^1$ are distinct attractors of $\Gamma^1$, which is trap-separating, thus $\langle A^1\rangle\cap \langle B^1\rangle=\emptyset$. Consequently, $\langle A^1 \rangle \times\{0,1\}^{I_2}$ and $\langle B^1\rangle\times \{0,1\}^{I_2}$ are disjoint trap spaces of $\Gamma(f)$ containing $A$ and $B$, and thus $\langle A\rangle\cap \langle B\rangle=\emptyset$. Suppose now that $A^1=B^1$. Then, by the same lemma, $A^2,B^2$ are distinct attractors of $\Gamma^2_A=\Gamma^2_B$, which is converging, a contradiction. This proves that $\Gamma(f)$ is trap-separating.
\end{proof}
\begin{proof}[\BF{Proof of \cref{lem:decomposition_3}}]
We proceed by induction on the number of strong components. If $G$ is strong then $G$ is either (perfectly) fixing or (robustly) converging and thus $G$ is trap-separating. So suppose that $G$ is not strong. If all the strong components of $G$ are perfectly fixing, then, by \cref{lem:decomposition_4}, $G$ is fixing and thus trap-separating. So suppose that $G$ has a strong component which is robustly converging. Since there is no path from a robustly converging strong component to a perfectly fixing strong component, there is a partition $(I_1,I_2)$ of the vertices of $G$ such that $G$ has no arc from $I_2$ to $I_1$ and such that $G[I_2]$ is a robustly converging strong component of $G$. By induction hypothesis, $G[I_1]$ is trap-separating, and thus, by \cref{lem:decomposition_5}, $G$ is trap-separating.
\end{proof}
\section{Number of positive cycles}\label{sec:number-positive-cycles}
We have proved that if $G$ is non-separating, then it has a positive cycle intersecting a negative cycle. In this section, we prove the following, which says more concerning positive cycles.
\begin{theorem}\label{thm:PFN}
If the positive feedback number of $G$ is at most one, then $G$ is separating.
\end{theorem}
\cref{ex:not-trap-sep} shows that, in the theorem, separating cannot be replaced by trap-separating. For the proof we need the following lemma.
\begin{lemma}\label{lem:Delta_A}
Let $f\in F(G)$ and $A,B$ be distinct attractors of $\Gamma(f)$. For every $i\in\Delta(A)$, $G\setminus i$ has a positive cycle.
\end{lemma}
\begin{proof}
Let $i\in \Delta(A)$ and $b\in B$. Then there is $a\in A$ with $a_i=b_i$. Let $f'\in F(V)$ be defined as follows: for all configurations $x$ on $V$, $f'_j(x)=f_j(x)$ for $j\neq i$ and $f'_i(x)=x_i$. Then $\Gamma(f')$ is the spanning subgraph of $\Gamma(f)$ obtained by deleting all the arcs in the direction $i$. Let $A'$ and $B'$ be attractors of $\Gamma(f')$ which are reachable in $\Gamma(f')$ from $a$ and $b$, respectively. Then $A'\subseteq A$ and $B'\subseteq B$, thus $A'\cap B'=\emptyset$. Furthermore, since $a_i=b_i$ we have $x_i=a_i=b_i$ for all $x\in A'\cup B'$. Let $(\alpha,\beta)\in A'\times B'$ with $\Delta(\alpha,\beta)$ minimum. Then $f'_j(\alpha)=\alpha_j$ and $f'_j(\beta)=\beta_j$ for all $j\in\Delta(\alpha,\beta)$. By \cref{lem:A08}, the subgraph of $G(f')$ induced by $\Delta(\alpha,\beta)$ has a positive cycle $C'$, which does not contain $i$ since $i\not\in \Delta(\alpha,\beta)$. Thus $C'$ is a positive cycle of $G(f')\setminus i=G\setminus i$.
\end{proof}
\begin{proof}[\BF{Proof of \cref{thm:PFN}}]
If the positive feedback number of $G$ is zero, then $G$ is converging. So suppose that there is a vertex $i$ such that $G\setminus i$ has no positive cycle. Let $f\in F(G)$ and let $A,B$ be distinct attractors of $\Gamma(f)$. By \cref{lem:Delta_A}, we have $i\not\in\Delta(A)\cup\Delta(B)$. Let $(a,b)\in A\times B$ with $\Delta(a,b)$ minimum. Then $f_j(a)=a_j$ and $f_j(b)=b_j$ for all $j\in\Delta(a,b)$. Hence, by \cref{lem:A08}, $G[\Delta(a,b)]$ has a positive cycle $C$. Since $G\setminus i$ has no positive cycle, $C$ contains $i$, and thus $a_i\neq b_i$. Since $i\not\in\Delta(A)\cup\Delta(B)$, every configuration of $[A]$ has $i$-component $a_i$ and every configuration of $[B]$ has $i$-component $b_i$; hence $[A]\cap [B]=\emptyset$ and thus $\Gamma(f)$ is separating.
\end{proof}
Hence, if $G$ is non-separating, then $G$ has at least two disjoint positive cycles or at least three positive cycles. The example below shows that if $G$ is non-separating, then $G$ does not necessarily have two disjoint positive cycles.
\begin{example}
Consider $f\in F(3)$ defined by $f_1(x)=x_2 \bar x_3 \lor x_3 \bar x_1 \lor x_3 \bar x_2$, $f_2(x)=x_1 \bar x_3 \lor x_3 \bar x_1 \lor x_3 \bar x_2$ and $f_3(x)=x_1 \bar x_2 \lor x_2 \bar x_1 \lor x_2 \bar x_3$. $\Gamma(f)$ is non-separating, and $G(f)$ does not have two disjoint positive cycles.
\[
\begin{array}{c}
\begin{tikzpicture}
\pgfmathparse{1}
\node (000) at (0,0){{\boldmath \textcolor{Plum}{$000$} \unboldmath}};
\node (001) at (1,1){$001$};
\node (010) at (0,2){$010$};
\node (011) at (1,3){{\boldmath \textcolor{Blue}{$011$} \unboldmath}};
\node (100) at (2,0){$100$};
\node (101) at (3,1){{\boldmath \textcolor{Blue}{$101$} \unboldmath}};
\node (110) at (2,2){{\boldmath \textcolor{Blue}{$110$} \unboldmath}};
\node (111) at (3,3){{\boldmath \textcolor{Blue}{$111$} \unboldmath}};
\path[thick,->,draw,black]
(001) edge (000)
(001) edge (011)
(001) edge (101)
(010) edge (000)
(010) edge (011)
(010) edge (110)
(011) edge[ultra thick,Blue,bend right=10] (111)
(101) edge[ultra thick,Blue,bend right=10] (111)
(110) edge[ultra thick,Blue,bend right=10] (111)
(100) edge (000)
(100) edge (110)
(100) edge (101)
(111) edge[ultra thick,Blue,bend right=10] (011)
(111) edge[ultra thick,Blue,bend right=10] (101)
(111) edge[ultra thick,Blue,bend right=10] (110)
;
\end{tikzpicture}
\end{array}
\qquad
\begin{array}{c}
\begin{tikzpicture}
\node[outer sep=1,inner sep=2,circle,draw,thick] (1) at ({-120-90}:1){$1$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (2) at ({120-90}:1){$2$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (3) at (-90:1){$3$};
\draw[red,->,thick] (1.{-120-90-20}) .. controls ({-120-90-20}:2) and ({-120-90+20}:2) .. (1.{-120-90+20});
\draw[red,->,thick] (2.{120-90-20}) .. controls ({120-90-20}:2) and ({120-90+20}:2) .. (2.{120-90+20});
\draw[red,->,thick] (3.{-90-20}) .. controls ({-90-20}:2) and ({-90+20}:2) .. (3.{-90+20});
\path[->,thick]
(1) edge[red,bend left=40] (2)
(2) edge[red,bend right=15] (1)
(2) edge[Green,bend left=20] (1)
(1) edge[Green,bend left=70] (2)
(1) edge[red,bend right=15] (3)
(3) edge[red,bend left=40] (1)
(3) edge[Green,bend left=70] (1)
(1) edge[Green,bend left=20] (3)
(2) edge[red,bend left=40] (3)
(3) edge[red,bend right=15] (2)
(3) edge[Green,bend left=20] (2)
(2) edge[Green,bend left=70] (3)
;
\end{tikzpicture}
\end{array}
\]
\end{example}
With \cref{lem:Delta_A}, we can give new sufficient conditions for $G$ to be trap-separating or converging.
\begin{proposition}\label{pro:one_positive_cycle}
Suppose that $G$ has a unique positive cycle $C$, and that every negative cycle of $G$ intersects $C$. Then $G$ is trap-separating. If, in addition, $G$ is strong and has at least one negative cycle, then $G$ is converging.
\end{proposition}
We need the following lemma.
\begin{lemma}[\cite{R18}]\label{lem:R18a}
Suppose that $G$ is strong, has a unique positive cycle, and at least one negative cycle. Then every $f\in F(G)$ has at most one fixed point.
\end{lemma}
\begin{proof}[\BF{Proof of \cref{pro:one_positive_cycle}}]
Suppose that $G$ has a unique positive cycle $C$, and that every negative cycle of $G$ intersects $C$. Let $f\in F(G)$. We prove that $\Gamma(f)$ is fixing or converging, and thus $G$ is trap-separating. Suppose that $\Gamma(f)$ is not converging. Let $A,B$ be distinct attractors of $\Gamma(f)$. By \cref{lem:Delta_A}, $\Delta(A)$ is disjoint from the vertex set of $C$. Hence, by \cref{lem:R10}, if $|A|\geq 2$ then $G$ has a negative cycle disjoint from $C$, a contradiction. Thus $|A|=1$, that is, $\Gamma(f)$ is fixing. Suppose now that, in addition, $G$ is strong and has at least one negative cycle. If $\Gamma(f)$ is not converging, then it is fixing and thus $f$ has at least two fixed points, and this contradicts \cref{lem:R18a}. Thus $\Gamma(f)$ is always converging.
\end{proof}
The following example demonstrates that we cannot replace trap-separating with trapping in the first part of \cref{pro:one_positive_cycle}.
\cref{ex:sep-not-conv-fix-graph} shows that we cannot drop the hypothesis that $G$ is strong in the second part.
\begin{example}\label{ex:pos-not-trapping}
Consider $f\in F(3)$ defined by $f_1(x)=\bar x_1 x_2$, $f_2(x)=x_1\lor \bar x_2$ and $f_3(x)=x_1 \bar x_2$. $G(f)$ has a unique positive cycle that intersects all cycles. $\Gamma(f)$ is trap-separating but not trapping.
\[
\begin{array}{c}
\begin{tikzpicture}
\pgfmathparse{1}
\node (000) at (0,0){{\boldmath \textcolor{Blue}{$000$} \unboldmath}};
\node (001) at (1,1){$001$};
\node (010) at (0,2){{\boldmath \textcolor{Blue}{$010$} \unboldmath}};
\node (011) at (1,3){$011$};
\node (100) at (2,0){$100$};
\node (101) at (3,1){$101$};
\node (110) at (2,2){{\boldmath \textcolor{Blue}{$110$} \unboldmath}};
\node (111) at (3,3){$111$};
\path[thick,->,draw,black]
(000) edge[bend left=10,ultra thick,Blue] (010)
(010) edge[bend left=10,ultra thick,Blue] (000)
(010) edge[bend left=10,ultra thick,Blue] (110)
(110) edge[bend left=10,ultra thick,Blue] (010)
(001) edge (000)
(001) edge[bend left=10] (011)
(011) edge (010)
(011) edge[bend left=10] (111)
(011) edge[bend left=10] (001)
(100) edge (101)
(100) edge (110)
(100) edge (000)
(101) edge (001)
(101) edge (111)
(111) edge[bend left=10] (011)
(111) edge (110)
;
\end{tikzpicture}
\end{array}
\qquad
\begin{array}{c}
\begin{tikzpicture}
\useasboundingbox (-2.2,-1.3) rectangle (2.2,0.7);
\node[outer sep=1,inner sep=2,circle,draw,thick] (1) at ({180}:1){$1$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (2) at ({0}:1){$2$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (3) at (-90:1){$3$};
\draw[red,->,thick] (1.{180-20}) .. controls ({180-20}:2.3) and ({180+20}:2.3) .. (1.{180+20});
\draw[red,->,thick] (2.{0-20}) .. controls ({0-20}:2.3) and ({0+20}:2.3) .. (2.{0+20});
\path[->,thick]
(1) edge[Green,bend right=25] (2)
(1) edge[Green,bend right=15] (3)
(2) edge[red,bend left=15] (3)
(2) edge[Green,bend right=25] (1)
;
\end{tikzpicture}
\end{array}
\]
\end{example}
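The failure of the trapping property in this example can be checked mechanically. The sketch below (again an illustration of ours, with $0$-indexed components and configurations stored as bit tuples) computes the unique attractor $A$ of $\Gamma(f)$ (so that trap-separation holds vacuously), verifies that the enclosing subcube $[A]$ is not a trap space, and computes the smallest trap space containing $A$, which is here the whole cube.
\begin{verbatim}
# Check of the example above (illustration only; 0-indexed components,
# configurations as bit tuples).
from itertools import product

N = 3
CONFS = list(product((0, 1), repeat=N))

def f(x):  # f1 = ~x1 x2, f2 = x1 | ~x2, f3 = x1 ~x2
    x1, x2, x3 = x
    return ((1 - x1) & x2, x1 | (1 - x2), x1 & (1 - x2))

def succ(x):
    y = f(x)
    return [x[:i] + (1 - x[i],) + x[i+1:] for i in range(N) if y[i] != x[i]]

def reach(x):
    seen, stack = {x}, [x]
    while stack:
        for y in succ(stack.pop()):
            if y not in seen:
                seen.add(y); stack.append(y)
    return seen

R = {x: reach(x) for x in CONFS}
atts = {frozenset(R[x]) for x in CONFS if all(x in R[y] for y in R[x])}
A = next(iter(atts))
print(len(atts), sorted(A))     # 1 attractor: {000, 010, 110}

def hull(S):      # smallest subcube containing S, as a list of value sets
    return [{x[i] for x in S} for i in range(N)]

def members(H):   # configurations of the subcube described by H
    return {x for x in CONFS if all(x[i] in H[i] for i in range(N))}

def is_trap_space(H):
    S = members(H)
    return all(y in S for x in S for y in succ(x))

print(is_trap_space(hull(A)))   # False: [A] is not a trap space

# smallest trap space containing A: close the subcube hull under the dynamics
S = set(A)
while not is_trap_space(hull(S)):
    S = members(hull(S)) | {y for x in members(hull(S)) for y in succ(x)}
print(hull(S))                  # the whole cube {0,1}^3
\end{verbatim}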
What if $G$ is strong and has only one positive cycle? By \cref{thm:PFN} we know that $G$ is separating.
The following example shows that $G$ is not necessarily trapping. It also shows that $G$ is not necessarily trapping if it is strong and has feedback number equal to one.
Whether $G$ is trap-separating in these cases remains an open question.
\begin{example}\label{ex:strong-not-trapping}
Let $f\in F(4)$ be defined by $f_1(x)=\bar x_3$, $f_2(x)=\bar x_1$, $f_3(x)=\bar x_2 \bar x_4$ and $f_4(x)=x_1x_2x_3$. $G(f)$ is strong, has only one positive cycle and has feedback number one, but is not trapping.
\[
\begin{array}{c}
\begin{tikzpicture}
\pgfmathparse{1}
\node (0000) at (0,0){$0000$};
\node (0010) at (1,1){{\boldmath \textcolor{Blue}{$0010$} \unboldmath}};
\node (0100) at (0,2){{\boldmath \textcolor{Blue}{$0100$} \unboldmath}};
\node (0110) at (1,3){{\boldmath \textcolor{Blue}{$0110$} \unboldmath}};
\node (1000) at (2,0){{\boldmath \textcolor{Blue}{$1000$} \unboldmath}};
\node (1010) at (3,1){{\boldmath \textcolor{Blue}{$1010$} \unboldmath}};
\node (1100) at (2,2){{\boldmath \textcolor{Blue}{$1100$} \unboldmath}};
\node (1110) at (3,3){$1110$};
\node (0001) at (5,0){$0001$};
\node (0011) at (6,1){$0011$};
\node (0101) at (5,2){$0101$};
\node (0111) at (6,3){$0111$};
\node (1001) at (7,0){$1001$};
\node (1011) at (8,1){$1011$};
\node (1101) at (7,2){$1101$};
\node (1111) at (8,3){$1111$};
\path[thick,->,draw,black]
(0100) edge[ultra thick,Blue] (1100)
(1100) edge[ultra thick,Blue] (1000)
(1000) edge[ultra thick,Blue] (1010)
(1010) edge[ultra thick,Blue] (0010)
(0010) edge[ultra thick,Blue] (0110)
(0110) edge[ultra thick,Blue] (0100)
(1110) edge (0110)
(1110) edge (1010)
(1110) edge (1100)
(0000) edge (1000)
(0000) edge (0100)
(0000) edge (0010)
(0101) edge (1101)
(1101) edge (1001)
(1011) edge (0011)
(1011) edge (1001)
(0011) edge (0001)
(0011) edge (0111)
(0111) edge (0101)
(1111) edge (0111)
(1111) edge (1011)
(1111) edge (1101)
(0001) edge (1001)
(0001) edge (0101)
(0001) edge[bend left=20] (0000)
(0011) edge[bend left=20] (0010)
(0101) edge[bend right=20] (0100)
(0111) edge[bend right=20] (0110)
(1001) edge[bend left=20] (1000)
(1011) edge[bend left=20] (1010)
(1101) edge[bend right=20] (1100)
(1110) edge[bend left=20] (1111)
;
\end{tikzpicture}
\end{array}
\qquad
\begin{array}{c}
\begin{tikzpicture}
\node[outer sep=1,inner sep=2,circle,draw,thick] (1) at (90:1){$1$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (2) at ({90+120}:1){$2$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (3) at ({90-120}:1){$3$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (4) at (-90:2){$4$};
\path[->,thick]
(1) edge[red,bend right=15] (2)
(2) edge[red,bend right=15] (3)
(3) edge[red,bend right=15] (1)
(1) edge[Green] (4)
(2) edge[Green] (4)
(3) edge[Green,bend right=15] (4)
(4) edge[red,bend right=15] (3)
;
\end{tikzpicture}
\end{array}
\]
\end{example}
\section{Number of negative cycles}\label{sec:number-negative-cycles}
In this section, we say more about negative cycles in non-separating signed digraphs. There are non-separating signed digraphs $G$ with negative feedback number one, and even with negative arc-feedback number one (that is, with one arc belonging to every negative cycle), as shown by the examples below.
\begin{example}\label{ex:negative_feedback_1}
Let $f\in F(3)$ be defined by $f_1(x)=x_1+x_2$, $f_2(x)=\bar x_1x_2\lor x_3$ and $f_3(x)=x_1$. Then $\Gamma(f)$ is non-separating and $\{1\}$ is a negative feedback vertex set of $G(f)$.
\[
\begin{array}{c}
\begin{tikzpicture}
\pgfmathparse{1}
\node (000) at (0,0){{\boldmath \textcolor{Plum}{$000$} \unboldmath}};
\node (001) at (1,1){$001$};
\node (010) at (0,2){{\boldmath \textcolor{Blue}{$010$} \unboldmath}};
\node (011) at (1,3){{\boldmath \textcolor{Blue}{$011$} \unboldmath}};
\node (100) at (2,0){{\boldmath \textcolor{Blue}{$100$} \unboldmath}};
\node (101) at (3,1){{\boldmath \textcolor{Blue}{$101$} \unboldmath}};
\node (110) at (2,2){{\boldmath \textcolor{Blue}{$110$} \unboldmath}};
\node (111) at (3,3){{\boldmath \textcolor{Blue}{$111$} \unboldmath}};
\path[thick,->,draw,black]
(001) edge (011)
(001) edge (000)
(010) edge[bend left=10,ultra thick,Blue] (110)
(011) edge[bend left=10,ultra thick,Blue] (111)
(011) edge[ultra thick,Blue] (010)
(100) edge[ultra thick,Blue] (101)
(101) edge[ultra thick,Blue] (111)
(110) edge[bend left=10,ultra thick,Blue] (010)
(110) edge[ultra thick,Blue] (100)
(110) edge[ultra thick,Blue] (111)
(111) edge[bend left=10,ultra thick,Blue] (011)
;
\end{tikzpicture}
\end{array}
\qquad
\begin{array}{c}
\begin{tikzpicture}
\useasboundingbox (-2.2,-1.3) rectangle (2.2,0.7);
\node[outer sep=1,inner sep=2,circle,draw,thick] (1) at ({180}:1){$1$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (2) at ({0}:1){$2$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (3) at (-90:1){$3$};
\draw[Green,->,thick] (1.{180-20}) .. controls ({180-20}:2.3) and ({180+20}:2.3) .. (1.{180+20});
\draw[red,->,thick] (1.{180-60}) .. controls ({180-30}:2.8) and ({180+30}:2.8) .. (1.{180+60});
\draw[Green,->,thick] (2.{0-20}) .. controls ({0-20}:2.3) and ({0+20}:2.3) .. (2.{0+20});
\path[->,thick]
(1) edge[red,bend right=25] (2)
(1) edge[Green,bend right=15] (3)
(3) edge[Green,bend right=15] (2)
(2) edge[red,bend right=25] (1)
(2) edge[Green,bend right=55] (1)
;
\end{tikzpicture}
\end{array}
\]
\end{example}
\begin{example}\label{ex:negative_arc_feedback_1}
Let $f\in F(4)$ be defined by $f_1(x)=x_2\bar x_3\lor \bar x_2x_3\lor x_3\bar x_4$, $f_2(x)=x_2\bar x_3\lor x_4$, $f_3(x)=x_1$ and $f_4(x)=x_3$. Then $\Gamma(f)$ is non-separating and every negative cycle contains the positive arc from $1$ to $3$.
\[
\begin{array}{c}
\begin{tikzpicture}
\pgfmathparse{1}
\node (0000) at (0,0){{\boldmath \textcolor{Plum}{$0000$} \unboldmath}};
\node (0010) at (1,1){$0010$};
\node (0100) at (0,2){{\boldmath \textcolor{Blue}{$0100$} \unboldmath}};
\node (0110) at (1,3){$0110$};
\node (1000) at (2,0){$1000$};
\node (1010) at (3,1){{\boldmath \textcolor{Blue}{$1010$} \unboldmath}};
\node (1100) at (2,2){{\boldmath \textcolor{Blue}{$1100$} \unboldmath}};
\node (1110) at (3,3){{\boldmath \textcolor{Blue}{$1110$} \unboldmath}};
\node (0001) at (5,0){$0001$};
\node (0011) at (6,1){$0011$};
\node (0101) at (5,2){{\boldmath \textcolor{Blue}{$0101$} \unboldmath}};
\node (0111) at (6,3){{\boldmath \textcolor{Blue}{$0111$} \unboldmath}};
\node (1001) at (7,0){$1001$};
\node (1011) at (8,1){{\boldmath \textcolor{Blue}{$1011$} \unboldmath}};
\node (1101) at (7,2){{\boldmath \textcolor{Blue}{$1101$} \unboldmath}};
\node (1111) at (8,3){{\boldmath \textcolor{Blue}{$1111$} \unboldmath}};
\path[thick,->,draw,black]
(0010) edge (1010)
(0010) edge (0000)
(0010) edge[bend right=20] (0011)
(0100) edge[ultra thick,Blue] (1100)
(0110) edge (1110)
(0110) edge (0010)
(0110) edge (0100)
(0110) edge[bend left=20] (0111)
(1000) edge (0000)
(1000) edge (1010)
(1010) edge[ultra thick,bend right=20,Blue] (1011)
(1100) edge[ultra thick,Blue] (1110)
(1110) edge[ultra thick,Blue] (1010)
(1110) edge[ultra thick,bend left=20,Blue] (1111)
(0001) edge (0101)
(0001) edge[bend left=20] (0000)
(0011) edge (1011)
(0011) edge (0111)
(0011) edge (0001)
(0101) edge[ultra thick,Blue] (1101)
(0101) edge[ultra thick,bend right=20,Blue] (0100)
(0111) edge[ultra thick,Blue] (0101)
(1001) edge (0001)
(1001) edge (1101)
(1001) edge (1011)
(1001) edge[bend left=20] (1000)
(1011) edge[ultra thick,Blue] (1111)
(1101) edge[ultra thick,Blue] (1111)
(1101) edge[ultra thick,Blue,bend right=20] (1100)
(1111) edge[ultra thick,Blue] (0111)
;
\end{tikzpicture}
\end{array}
\qquad
\begin{array}{c}
\begin{tikzpicture}
\node[outer sep=1,inner sep=2,circle,draw,thick] (1) at (90:1.5){$1$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (2) at ({0}:1.5){$2$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (4) at ({180}:1.5){$4$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (3) at (-90:1.5){$3$};
\draw[Green,->,thick] (2.{0-20}) .. controls ({0-20}:2.8) and ({0+20}:2.8) .. (2.{0+20});
\path[->,thick]
(1) edge[Green,bend left=15] (3)
(2) edge[Green,bend right=10] (1)
(2) edge[red,bend right=40] (1)
(3) edge[Green,bend left=15] (1)
(3) edge[red,bend left=50] (1)
(3) edge[red, bend right=40] (2)
(3) edge[Green,bend left=40] (4)
(4) edge[Green] (2)
(4) edge[red, bend left=40] (1)
;
\end{tikzpicture}
\end{array}
\]
\end{example}
Our main observation concerning negative cycles is then the following.
\begin{theorem}\label{thm:NEGATIVE}
If $G$ has at most one negative cycle, then $G$ is separating. If $G$ is strong and has at most one negative cycle, then $G$ is trapping.
\end{theorem}
\cref{ex:not-trap-sep} shows that, in the theorem, separating cannot be replaced by trap-separating. For the proof we need some lemmas.
\begin{lemma}\label{lem:almost_fixed_points}
Let $f\in F(G)$. Suppose that $G\setminus i$ has no negative cycle for some vertex $i$ and that $\Gamma(f)$ has an attractor $A$ of size at least two. Then there are $x,y\in A$ with $x_i\neq y_i$ such that $f(x)=x+e_i$ and $f(y)=y+e_i$.
\end{lemma}
\begin{proof}
Let $f'\in F(V)$ be defined by $f'_i(x)=x_i$ and $f'_j(x)=f_j(x)$ for all $j\neq i$. Then $\Gamma(f')$ is a spanning subgraph of $\Gamma(f)$, and $G(f')\setminus i=G\setminus i$. So $G(f')$ has no negative cycle.
\medskip
Let $a\in A$. Since $G(f')$ has no negative cycle, by \cref{thm:bib}, $\Gamma(f')$ is fixing and thus it has a path $P$ from $a$ to a fixed point $x$ of $f'$. By the definition of $f'$, we have $z_i=a_i$ for every configuration $z$ in $P$. In particular, $x_i=a_i$, and since $a\in A$ and $P$ is a path of $\Gamma(f)$, we have $x\in A$. Since $A$ is of size at least two, $x$ is not a fixed point of $f$, thus $f(x)=x+e_i$. Let $b=x+e_i$. We have $b\in A$ and $b_i\neq a_i$, and we prove similarly that there is $y\in A$ with $y_i=b_i$ such that $f(y)=y+e_i$.
\end{proof}
\begin{lemma}\label{lem:fixed_point}
Suppose that all the negative arcs of $G$ have the same terminal vertex $i$. Let $f\in F(G)$. If $x_i=0$ and $x\leq f(x)$ then $[x,\mathbf{1}]$ is a trap space, and if $x_i=1$ and $x\geq f(x)$ then $[\mathbf{0},x]$ is a trap space.
\end{lemma}
\begin{proof}
Suppose that $x_i=0$ and $x\leq f(x)$. Let $z\in [x,\mathbf{1}]$. Then $0=x_i\leq f_i(z)$ and for all $j\neq i$ we have $x_j\leq f_j(x)\leq f_j(z)$ since $f_j$ is monotone and $x\leq z$. Thus $x\leq f(z)$ and we deduce that $[x,\mathbf{1}]$ is a trap space. We prove similarly that $[\mathbf{0},x]$ is a trap space if $x_i=1$ and $x\geq f(x)$.
\end{proof}
\begin{lemma}\label{lem:trap_spaces}
Suppose that all the negative arcs of $G$ have the same terminal vertex $i$. Let $f\in F(G)$ and $A$ an attractor of $\Gamma(f)$ of size at least two. There are $x,y\in A$ with $x_i<y_i$ and $x\leq y$ such that $[A]=[x,y]$. Furthermore, $[A]$, $[x,\mathbf{1}]$ and $[\mathbf{0},y]$ are trap spaces.
\end{lemma}
\begin{proof}
By \cref{lem:almost_fixed_points}, there are $x,y\in A$ with $x_i<y_i$ such that $f(x)=x+e_i$ and $f(y)=y+e_i$, and thus $x\leq f(x)$ and $y\geq f(y)$. By \cref{lem:fixed_point}, $[x,\mathbf{1}]$ and $[\mathbf{0},y]$ are trap spaces. Since $x,y\in A$ and $A$ is an attractor, every configuration of $A$ is reachable from $x$ and from $y$, so $A\subseteq [x,\mathbf{1}]$ and $A\subseteq [\mathbf{0},y]$. In particular $x\leq y$ and $A\subseteq [x,y]=[x,\mathbf{1}]\cap[\mathbf{0},y]$, which is a trap space; since $x,y\in A$, we get $[A]=[x,y]$.
\end{proof}
\begin{lemma}\label{lem:special_vertex}
If all the negative arcs of $G$ have the same terminal vertex, then $G$ is trapping.
\end{lemma}
\begin{proof}
Let $i$ be the terminal vertex of every negative arc of $G$. Let $f\in F(G)$ and suppose that $A,B$ are distinct attractors of $\Gamma(f)$. By \cref{lem:trap_spaces}, $[A]$ and $[B]$ are trap spaces, so it remains to prove that $[A]\cap [B]=\emptyset$. Suppose, for a contradiction, that $[A]\cap [B]\neq\emptyset$. Then at least one of $A,B$ is of size at least two, say $A$. By \cref{lem:trap_spaces}, there are $x^A,y^A\in A$ with $x^A_i<y^A_i$ such that $[x^A,\mathbf{1}]$ and $[A]=[x^A,y^A]$ are trap spaces.
\medskip
Suppose first that $B$ is of size one, that is, consists of a fixed point, say $z$. Then $z\in [x^A,y^A]$. If $z_i=0$ then, by \cref{lem:fixed_point}, $[z,\mathbf{1}]$ is a trap space. We have $y^A\in [z,\mathbf{1}]$ and since $z\neq x^A$, we have $x^A\not\in [z,\mathbf{1}]$. We deduce that there is no path in $\Gamma(f)$ from $y^A$ to $x^A$, a contradiction. If $z_i=1$ we obtain a contradiction similarly.
\medskip
Consequently, $B$ is of size at least two. Hence, by \cref{lem:trap_spaces}, there are $x^B,y^B\in B$ with $x^B_i<y^B_i$ such that $[x^B,\mathbf{1}]$ and $[B]=[x^B,y^B]$ are trap spaces. If $y^B_j<x^A_j$ for some vertex $j$, then $[A]\cap [B]=\emptyset$, a contradiction. So $x^A\leq y^B$. Hence $y^B\in [x^A,\mathbf{1}]$ and since $[x^A,\mathbf{1}]$ is a trap space, we deduce that $B\subseteq [x^A,\mathbf{1}]$. In particular, $x^A\leq x^B$. By symmetry we have $x^B\leq x^A$. Thus $x^A=x^B$, a contradiction.
\end{proof}
\begin{lemma}\label{lem:switch}
If $G$ is strong and contains an arc from $j$ to $i$, or a vertex $i$, that belongs to every negative cycle and to no positive cycle, then, up to a switch of $G$, all the negative arcs have $i$ as terminal vertex.
\end{lemma}
\begin{proof}
Let $G'$ be obtained from $G$ by deleting all the in-coming arcs of $i$. Since $G$ is strong, $G'$ has a unique initial strong component, whose unique vertex is $i$, and $G$ has an arc from each terminal strong component of $G'$ to $i$. Since each strong component of $G'$ only contains positive cycles, up to a switch we can assume that all the strong components of $G'$ only contain positive arcs (using \cref{pro:harary,pro:BN_switch}). Let $I_1,\dots,I_r$ be the vertex sets of the strong components of $G'$ in the topological order (so $I_1=\{i\}$). Let $H$ be the signed digraph on $\{I_1,\dots,I_r\}$ with a positive (negative) arc from $I_p$ to $I_q$ if $G$ has a positive (negative) arc from some vertex in $I_p$ to some vertex in $I_q$. Let $s_1=1$ and, for $1<p\leq r$, let $s_p=1$ if $H$ has a positive path from $I_1$ to $I_p$ and $s_p=-1$ otherwise. A consequence of (1) below is that, actually, all the paths from $I_1$ to $I_p$ have the same sign.
\begin{quote}
(1) {\em For $1\leq p<q\leq r$, the sign of an arc of $H$ from $I_p$ to $I_q$ is $s_p\cdot s_q$.}
\smallskip
Suppose that $H$ has an arc from $I_p$ to $I_q$ of sign $s\neq s_p\cdot s_q$. Since $H$ has a path from $I_q$ to $I_1$ with internal vertices in $\{I_{q+1},\dots,I_r\}$, $G$ has a path $P$ from some $k\in I_q$ to $i$ whose internal vertices are in $I_{q+1}\cup\dots \cup I_r$. Since $s\neq s_p\cdot s_q$, $H$ has a positive and a negative path from $I_1$ to $I_q$ whose vertices are in $\{I_1,\dots,I_q\}$. Since $I_q$ is strong and has only positive arcs, we deduce that $G$ has a positive path $P^+$ from $i$ to $k$ and a negative path $P^-$ from $i$ to $k$ whose internal vertices are in $I_2\cup\cdots\cup I_q$. Thus $C_1=P^+\cup P$ and $C_2=P^-\cup P$ are cycles with different signs.
So $i$ belongs to cycles of different signs. In particular, $i$ belongs to a positive cycle, so the hypothesis must hold in its arc form: $i$ has an in-neighbor $j$ such that the arc from $j$ to $i$ belongs to all the negative cycles and to no positive cycle. This arc then belongs to the negative cycle among $C_1,C_2$, and since it enters $i$, it is the last arc of $P$; hence it belongs to both $C_1$ and $C_2$, one of which is positive, and we obtain a contradiction.
\end{quote}
\medskip
Let $J$ be the set of vertices $I_p$ of $H$ with $s_p=-1$. We deduce from (1) that an arc of $H$ is negative if and only if it leaves $J$ or enters $J$. Hence the $J$-switch of $H$ is full-positive. Let $J'$ be the union of the sets $I_p$ contained in $J$. Then the $J'$-switch of $G'$ is full-positive, so, in the $J'$-switch of $G$, all the negative arcs have $i$ as terminal vertex.
\end{proof}
As a consequence:
\begin{lemma}\label{lem:special_vertex_bis}
If $G$ has an arc or a vertex that belongs to all the negative cycles and to no positive cycle, then $G$ is separating.
\end{lemma}
\begin{proof}
Suppose that $G$ is a smallest counterexample with respect to the number of vertices. There are $f\in F(G)$ and distinct attractors $A,B$ of $\Gamma(f)$ with $[A]\cap[B]\neq\emptyset$.
\begin{quote}
(1) {\em For every $j\in V$ there is $a\in A$ and $b\in B$ with $a_j\neq b_j$.}
\smallskip
Suppose that there is $j\in V$ and $c\in\{0,1\}$ such that $a_j=b_j=c$ for all $a\in A$ and $b\in B$. Let $h$ be the BN with component set $I=V\setminus j$ defined by $h(x_I)=f(x)_I$ for all $x$ with $x_j=c$. Then $G(h)$ is a subgraph of $G\setminus j$. Let $A'=\{a_I\mid a\in A\}$ and $B'=\{b_I\mid b\in B\}$. Then $A',B'$ are distinct attractors of $\Gamma(h)$ with $[A']\cap [B']\neq\emptyset$. Hence $G(h)$ is non-separating. So, by \cref{thm:bib}, $G(h)$ has a negative cycle $C$. Hence $C$ contains an arc or a vertex that belongs to all the negative cycles of $G$ and to no positive cycle of $G$. Since $G(h)$ is a subgraph of $G\setminus j$, this arc or this vertex belongs to all the negative cycles of $G(h)$ and to no positive cycle of $G(h)$. We deduce that $G(h)$ is a smaller counterexample, a contradiction.
\end{quote}
\begin{quote}
(2) {\em $G$ is strong.}
\smallskip
If not, there is a partition $(I_1,I_2)$ of the vertices such that $G$ has no arc from $I_2$ to $I_1$ and $G[I_2]$ is strong. We then use the notations of \cref{lem:decomposition}. Let $i$ be a vertex of $G$ meeting every negative cycle, which exists by hypothesis (if the hypothesis provides an arc, take for $i$ its terminal vertex).
If $i\in I_1$, then $G[I_1]$ is separating, since otherwise it is a smaller counterexample. Thus $\Gamma^1$ is separating. We then deduce from \cref{lem:decomposition} that if $A^1\neq B^1$ then $[A^1]\cap [B^1]=\emptyset$ and thus $[A]\cap [B]=\emptyset$, a contradiction. So suppose that $A^1=B^1$, which implies $\Gamma^2_A=\Gamma^2_B$ and $A^2\neq B^2$. Since $i$ is in $I_1$, $G[I_2]$ has only positive cycles, so by \cref{lem:robustly_trapping} it is robustly trapping, and thus $\Gamma^2_A$ is separating. We then deduce from \cref{lem:decomposition} that $[A^2]\cap [B^2]=\emptyset$ and thus $[A]\cap [B]=\emptyset$, a contradiction.
If $i\not\in I_1$ then $G[I_1]$ has only positive cycles. So $G[I_1]$ is fixing, and we deduce from \cref{lem:decomposition} that there are $x^A,x^B$, fixed points of $f^1$, such that $A=\{x^A\}\times A^2$ and $B=\{x^B\}\times B^2$. If $x^A\neq x^B$ then $[A]\cap [B]=\emptyset$, a contradiction. Thus $x^A=x^B$ so $a_{I_1}=b_{I_1}$ for all $a\in A$ and $b\in B$, which contradicts~(1). This proves (2).
\end{quote}
From (2) and \cref{lem:switch}, in some switch $G'$ of $G$, all the negative arcs have the same terminal vertex. By \cref{lem:special_vertex}, $G'$ is separating and we deduce that $G$ is separating by \cref{pro:BN_switch}, a contradiction.
\end{proof}
\begin{lemma}[\cite{R18}]\label{lem:R18}
If $G$ has a unique negative cycle, then some arc of this cycle belongs to no positive~cycle.
\end{lemma}
\begin{proof}[\BF{Proof of \cref{thm:NEGATIVE}}]
Suppose that $G$ has a unique negative cycle $C$. By \cref{lem:R18}, $C$ has an arc that belongs to no positive cycle. Since this arc obviously belongs to all the negative cycles of $G$, we deduce from \cref{lem:special_vertex_bis} that $G$ is separating. Suppose, in addition, that $G$ is strong. By \cref{lem:switch}, in some switch $G'$ of $G$, all the negative arcs have the same terminal vertex. Hence, by \cref{lem:special_vertex}, $G'$ is trapping and, by \cref{pro:BN_switch}, $G$ is trapping.
\end{proof}
Using the previous tools, we provide a new sufficient condition for $G$ to be fixing.
\begin{proposition}\label{pro:fixing}
If $G$ is strong, has a unique negative cycle $C$, at least one positive cycle, and if no cycle of $G$ is disjoint from $C$, then $G$ is fixing.
\end{proposition}
We need the following lemma.
\begin{lemma}\label{lem:one_negative}
Suppose that $G$ has a unique negative arc, say from $j$ to $i$, where $i$ is of in-degree at least two, and suppose that every positive cycle intersects every negative cycle. Then $G$ is~fixing.
\end{lemma}
\begin{proof}
Let $f\in F(G)$ and suppose, for a contradiction, that $\Gamma(f)$ has an attractor $A$ of size at least two. By \cref{lem:trap_spaces}, there are $x,y\in A$ with $x_i<y_i$ and $x\leq y$ such that $[x,\mathbf{1}]$, $[\mathbf{0},y]$ and $[A]=[x,y]$ are trap spaces. By \cref{lem:R10}, $G$ has a negative cycle $C$ such that $x_k<y_k$ for every vertex $k$ in $C$.
\medskip
Suppose that $x=\mathbf{0}$ and $y=\mathbf{1}$. Then, for every vertex $k\neq i$ in $G$, all the in-coming arcs of $k$ are positive, so $f_k$ is monotone. Furthermore, since $\mathbf{0},\mathbf{1}\in A$ and $A$ is an attractor, $\Gamma(f)$ has a path from $\mathbf{0}$ to $\mathbf{1}$ and a path from $\mathbf{1}$ to $\mathbf{0}$; along these paths the component $k$ is both increased and decreased, so $f_k$ is not constant. By monotonicity, we then have $f_k(x)=0=x_k$ and $f_k(y)=1=y_k$. We deduce that $f_i(x)=1$ and $f_i(y)=0$ (otherwise $x$ or $y$ would be a fixed point, which is not possible since $x,y\in A$ and $|A|\geq 2$). Let $z$ be any configuration on $V$. If $z_j=0$ then we have $x_j=z_j$ and $x\leq z$, and since all the in-coming arcs of $i$ other than the arc from $j$ are positive, we deduce that $1=f_i(x)\leq f_i(z)$, thus $f_i(z)=1$. Similarly, if $z_j=1$ then we have $z_j=y_j$ and $z\leq y$, and we deduce that $f_i(z)\leq f_i(y)=0$, thus $f_i(z)=0$. Consequently, $f_i(z)=z_j+1$ for all configurations $z$. But this means that $j$ is the unique in-neighbor of $i$, a contradiction. Consequently, $x\neq\mathbf{0}$ or $y\neq\mathbf{1}$.
\medskip
Suppose first that $x\neq\mathbf{0}$, that is, $I=\{k\mid x_k=1\}$ is non-empty. Let $k\in I$. Since $[x,\mathbf{1}]$ is a trap space, we have $x\leq f(x)$ thus $f_k(x)=1$. Since $k\neq i$ (because $x_i=0$), all the in-coming arcs of $k$ are positive, so it has an in-neighbor $\ell$ with $x_\ell=1$. Hence $\ell\in I$. Consequently, $G[I]$ has a cycle, which is positive since $i\not\in I$. Since $x_k=0$ for all vertices $k$ in $C$, this positive cycle is disjoint from $C$, a contradiction. If $y\neq\mathbf{1}$ we prove with similar arguments that $\{k\mid y_k=0\}$ induces a positive cycle disjoint from $C$.
\end{proof}
\begin{proof}[\BF{Proof of \cref{pro:fixing}}]
By \cref{lem:R18}, the unique negative cycle $C$ has an arc that belongs to no positive cycle. Since some positive cycle intersects $C$, some vertex in $C$ has in-degree at least two, and we deduce that $C$ has an arc $a$, say from $j$ to $i$, which belongs to no positive cycle and such that $i$ is of in-degree at least two. Hence, since $G$ is strong, by \cref{lem:switch}, in some switch $G'$ of $G$, all the negative arcs have $i$ as terminal vertex. Let $a'\neq a$ be an arc with terminal vertex $i$, and let $C'$ be a cycle of $G'$ containing $a'$, which exists since $G$ is strong. In $G'$, all the arcs of $C'$ distinct from $a'$ are positive, and if $a'$ is negative, then $C'$ is a negative cycle distinct from $C$, a contradiction. Thus $a$ is the unique negative arc of $G'$. Since $i$ is of in-degree at least two, by \cref{lem:one_negative}, $G'$ is fixing, and so is $G$.
\end{proof}
\cref{ex:sep-not-conv-fix-graph} shows that we cannot drop the hypothesis that $G$ is strong in \cref{pro:fixing}. If $G$ is strong, has a unique negative cycle $C$ and at least one positive cycle, but some positive cycle is disjoint from $C$, then $G$ is not necessarily fixing, as shown by \cref{ex:fix1} below. Furthermore, if $G$ is strong, has two negative cycles and every positive cycle intersects every negative cycle, then $G$ is not necessarily fixing, as shown by \cref{ex:fix2} below.
\begin{example}\label{ex:fix1}
Let $f\in F(2)$ be defined by $f_1(x)=\bar x_1\lor x_2$ and $f_2(x)=x_1 x_2$. Then $\Gamma(f)$ is not fixing since $\{00,10\}$ is an attractor, while $G(f)$ is strong, has a unique negative cycle and two positive cycles.
\[
\begin{array}{c}
\begin{tikzpicture}
\pgfmathparse{1}
\node (00) at (0,0){{\boldmath \textcolor{Blue}{$00$} \unboldmath}};
\node (01) at (0,1.5){$01$};
\node (10) at (1.5,0){{\boldmath \textcolor{Blue}{$10$} \unboldmath}};
\node (11) at (1.5,1.5){{\boldmath \textcolor{Plum}{$11$}\unboldmath}};
\path[thick,->,draw,black]
(00) edge[ultra thick,bend left=15,Blue] (10)
(01) edge (11)
(01) edge (00)
(10) edge[ultra thick,bend left=15,Blue] (00)
;
\end{tikzpicture}
\end{array}
\qquad
\begin{array}{c}
\begin{tikzpicture}
\node[outer sep=1,inner sep=2,circle,draw,thick] (1) at ({180}:1){$1$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (2) at ({0}:1){$2$};
\draw[red,->,thick] (1.{180-20}) .. controls ({180-20}:2.3) and ({180+20}:2.3) .. (1.{180+20});
\draw[Green,->,thick] (2.{0-20}) .. controls ({0-20}:2.3) and ({0+20}:2.3) .. (2.{0+20});
\path[->,thick]
(1) edge[Green,bend right=25] (2)
(2) edge[Green,bend right=25] (1)
;
\end{tikzpicture}
\end{array}
\]
\end{example}
\begin{example}\label{ex:fix2}
Let $f\in F(2)$ be defined by $f_1(x)=\bar x_1\lor x_2$ and $f_2(x)=x_1 \bar x_2$. Then $\Gamma(f)$ is not fixing since $\{00,10,11\}$ is an attractor, while $G(f)$ has a unique positive cycle, which intersects the two negative cycles.
\[
\begin{array}{c}
\begin{tikzpicture}
\pgfmathparse{1}
\node (00) at (0,0){{\boldmath \textcolor{Blue}{$00$} \unboldmath}};
\node (01) at (0,1.5){$01$};
\node (10) at (1.5,0){{\boldmath \textcolor{Blue}{$10$} \unboldmath}};
\node (11) at (1.5,1.5){{\boldmath \textcolor{Blue}{$11$} \unboldmath}};
\path[thick,->,draw,black]
(00) edge[ultra thick,bend left=15,Blue] (10)
(10) edge[ultra thick,bend left=15,Blue] (00)
(10) edge[ultra thick,bend left=15,Blue] (11)
(11) edge[ultra thick,bend left=15,Blue] (10)
(01) edge (11)
(01) edge (00)
;
\end{tikzpicture}
\end{array}
\qquad
\begin{array}{c}
\begin{tikzpicture}
\node[outer sep=1,inner sep=2,circle,draw,thick] (1) at ({180}:1){$1$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (2) at ({0}:1){$2$};
\draw[red,->,thick] (1.{180-20}) .. controls ({180-20}:2.3) and ({180+20}:2.3) .. (1.{180+20});
\draw[red,->,thick] (2.{0-20}) .. controls ({0-20}:2.3) and ({0+20}:2.3) .. (2.{0+20});
\path[->,thick]
(1) edge[Green,bend right=25] (2)
(2) edge[Green,bend right=25] (1)
;
\end{tikzpicture}
\end{array}
\]
\end{example}
\section{Non-separating signed digraphs with feedback number two}\label{sec:feedback-number-two}
We can say more on non-separating signed digraphs $G$ when the feedback number of $G$ is exactly two. Let $K^\pm_n$ be the signed digraph with vertex set $[n]$ and with both a positive and a negative arc from $i$ to $j$ for any $i,j\in [n]$. It is an easy exercise to prove that $K^\pm_2$ is the unique non-separating signed digraph on two vertices, and that $F(K^\pm_2)$ is {\em exactly} the set of BNs in $F(2)$ with a non-separating asynchronous graph. An example follows.
\begin{example}\label{ex:K2}
Let $f\in F(K^\pm_2)$ be defined by $f_1(x)=x_1+x_2$ and $f_2(x)=x_1+x_2$. $\Gamma(f)$ has two attractors: $A=\{00\}$ and $B=\{01,10,11\}$. Since $[B]=\{0,1\}^2$, $\Gamma(f)$ is non-separating.
\[
\begin{array}{c}
\begin{tikzpicture}
\pgfmathparse{1}
\node (00) at (0,0){{\boldmath \textcolor{Plum}{$00$} \unboldmath}};
\node (01) at (0,1.5){{\boldmath \textcolor{Blue}{$01$} \unboldmath}};
\node (10) at (1.5,0){{\boldmath \textcolor{Blue}{$10$} \unboldmath}};
\node (11) at (1.5,1.5){{\boldmath \textcolor{Blue}{$11$} \unboldmath}};
\path[thick,->,draw,black]
(01) edge[ultra thick,bend left=15,Blue] (11)
(10) edge[ultra thick,bend left=15,Blue] (11)
(11) edge[ultra thick,bend left=15,Blue] (10)
(11) edge[ultra thick,bend left=15,Blue] (01)
;
\end{tikzpicture}
\end{array}
\qquad
\begin{array}{c}
\begin{tikzpicture}
\useasboundingbox (-2.2,-0.7) rectangle (2.2,0.7);
\node[outer sep=1,inner sep=2,circle,draw,thick] (1) at ({180}:1){$1$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (2) at ({0}:1){$2$};
\draw[Green,->,thick] (1.{180-20}) .. controls ({180-20}:2.3) and ({180+20}:2.3) .. (1.{180+20});
\draw[red,->,thick] (1.{180-60}) .. controls ({180-30}:2.8) and ({180+30}:2.8) .. (1.{180+60});
\draw[Green,->,thick] (2.{0-20}) .. controls ({0-20}:2.3) and ({0+20}:2.3) .. (2.{0+20});
\draw[red,->,thick] (2.{0-60}) .. controls ({0-30}:2.8) and ({0+30}:2.8) .. (2.{0+60});
\path[->,thick]
(1) edge[red,bend right=25] (2)
(1) edge[Green,bend right=55] (2)
(2) edge[red,bend right=25] (1)
(2) edge[Green,bend right=55] (1)
;
\end{tikzpicture}
\\
K^\pm_2
\end{array}
\]
\end{example}
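The ``easy exercise'' mentioned before \cref{ex:K2} can also be carried out by exhaustive enumeration. The following Python sketch (an illustration of ours; truth tables are indexed by the configuration $(x_1,x_2)$) enumerates all $16\times 16$ BNs on two components and checks that the BNs with a non-separating asynchronous graph are exactly those whose signed interaction digraph is $K^\pm_2$.
\begin{verbatim}
# Exhaustive check over all 16 x 16 BNs on two components (illustration
# only; truth tables are indexed by the configuration (x1, x2)).
from itertools import product

CONFS = list(product((0, 1), repeat=2))

def make(t1, t2):
    # a BN as a pair of truth tables conf -> bit
    return ({x: t1[2*x[0] + x[1]] for x in CONFS},
            {x: t2[2*x[0] + x[1]] for x in CONFS})

def attractors(f):
    succ = {x: [x[:i] + (1 - x[i],) + x[i+1:]
                for i in range(2) if f[i][x] != x[i]] for x in CONFS}
    def reach(x):
        seen, stack = {x}, [x]
        while stack:
            for y in succ[stack.pop()]:
                if y not in seen:
                    seen.add(y); stack.append(y)
        return seen
    R = {x: reach(x) for x in CONFS}
    return {frozenset(R[x]) for x in CONFS if all(x in R[y] for y in R[x])}

def separating(f):
    atts = list(attractors(f))
    return not any(A != B and all({a[i] for a in A} & {b[i] for b in B}
                                  for i in range(2))
                   for A in atts for B in atts)

def is_K_pm_2(f):
    # every arc j -> i present with both signs
    for i in range(2):
        for j in range(2):
            diffs = {f[i][x[:j] + (1,) + x[j+1:]] - f[i][x[:j] + (0,) + x[j+1:]]
                     for x in CONFS}
            if not {1, -1} <= diffs:
                return False
    return True

tables = list(product((0, 1), repeat=4))
bns = [make(t1, t2) for t1 in tables for t2 in tables]
non_sep = [f for f in bns if not separating(f)]
print(len(non_sep), all(is_K_pm_2(f) for f in non_sep))     # expected: 4 True
print(all(not separating(f) for f in bns if is_K_pm_2(f)))  # expected: True
\end{verbatim}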
Let $H_2$ be obtained from $K^\pm_2$ by deleting the negative loop on vertex $2$.
\[
\begin{array}{c}
\begin{tikzpicture}
\useasboundingbox (-2.2,-0.7) rectangle (2.2,0.7);
\node[outer sep=1,inner sep=2,circle,draw,thick] (1) at ({180}:1){$1$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (2) at ({0}:1){$2$};
\draw[Green,->,thick] (1.{180-20}) .. controls ({180-20}:2.3) and ({180+20}:2.3) .. (1.{180+20});
\draw[red,->,thick] (1.{180-60}) .. controls ({180-30}:2.8) and ({180+30}:2.8) .. (1.{180+60});
\draw[Green,->,thick] (2.{0-20}) .. controls ({0-20}:2.3) and ({0+20}:2.3) .. (2.{0+20});
\path[->,thick]
(1) edge[red,bend right=25] (2)
(1) edge[Green,bend right=55] (2)
(2) edge[red,bend right=25] (1)
(2) edge[Green,bend right=55] (1)
;
\end{tikzpicture}
\\
H_2
\end{array}
\]
We prove below that if $G$ is non-separating and has feedback number two, then $G$ contains $H_2$ in a certain sense. To make this precise we need some definitions. Let $H$ be a signed digraph with vertex set $U$. We say that $H$ is \EM{embedded} in $G$ if there is an injection $\phi:U\to V$ such that, for every positive (negative) arc of $H$ from $j$ to $i$, $G$ contains a positive (negative) path from $\phi(j)$ to $\phi(i)$ whose internal vertices are not in $\phi(U)$.
\begin{theorem}\label{thm:fvs2}
If $G$ is non-separating and has feedback number $2$, then $H_2$ is embedded in $G$.
\end{theorem}
\begin{remark}
The proof also shows that if $G$ has feedback number $2$ and $\Gamma(f)$ is non-separating for some $f\in F(G)$, then $\Gamma(f)$ has exactly two attractors, say $A$ and $B$, with $|A|=1$ and $|B|\geq 3$.
\end{remark}
Note that if $G$ is non-separating and has feedback number $2$, then $K^\pm_2$ is not necessarily embedded in $G$, as illustrated by \cref{ex:negative_feedback_1}.
\begin{example}
The interaction graph $G$ of \cref{ex:negative_arc_feedback_1} is non-separating and has feedback number~$2$. $H_2$ is indeed embedded in $G$, with $\phi(1)=1$ and $\phi(2)=2$, because: $1~\textcolor{Green}{\to}~3~\textcolor{Green}{\to}~1$ and $1~\textcolor{Green}{\to}~3~\textcolor{red}{\to}~1$ are positive and negative cycles containing $1$ but not $2$; $1~\textcolor{Green}{\to}~3~\textcolor{red}{\to}~2$ and $1~\textcolor{Green}{\to}~3~\textcolor{Green}{\to}~4~\textcolor{Green}{\to}~2$ are positive and negative paths from $1$ to $2$; $2~\textcolor{Green}{\to}~1$ and $2~\textcolor{red}{\to}~1$ are positive and negative paths from $2$ to $1$; and $2~\textcolor{Green}{\to}~2$ is a positive cycle containing $2$ but not $1$.
\end{example}
We will use several times a lemma whose statement needs some definitions. Let $P$ be a path of an asynchronous graph $\Gamma$ of length $\ell\geq 1$, with configurations $x^0,\dots,x^\ell$ in order. For $0\leq k<\ell$, let $i_k$ be the direction of the arc $x^k\to x^{k+1}$. We call $i_0,\dots,i_{\ell-1}$ the \EM{direction sequence} of $P$. We say that $P$ is a \EM{geodesic} if its direction sequence has no repetition. We say that $P$ is \EM{increasing} if all its arcs are increasing. A \EM{walk} $W$ in a signed digraph $G$ is a sequence of arcs $a^0,\dots,a^\ell$ such that, for $0\leq k<\ell$, the terminal vertex of $a^k$ is the initial vertex of $a^{k+1}$. The sign of $W$ is the product of the signs of its arcs. For $0\leq k\leq\ell$, let $i_k$ be the initial vertex of $a^k$, and let $i_{\ell+1}$ be the terminal vertex of $a^\ell$. Then $i_0,\dots,i_{\ell+1}$ is the \EM{vertex sequence} of $W$, and we say that $W$ is a walk from $i_0$ to $i_{\ell+1}$. A sequence $s'$ is a \EM{subsequence} of $s$ if we can obtain $s'$ by removing some elements of~$s$ (so, for instance, $ii$ is a subsequence of $iji$).
\begin{lemma}[\cite{R10}]\label{lem:R10b}
Let $f\in F(G)$, and let $P$ be a path of $\Gamma(f)$ of length $\ell\geq 2$, with configurations $x^1,x^2,\dots,x^\ell,x^{\ell+1}$ in order. Let $i$ be the direction of the arc from $x^\ell$ to $x^{\ell+1}$, and suppose that $f_i(x^k)=x^k_i$ for $1\leq k<\ell$. There is a component $j$ with $f_j(x^1)\neq x^1_j$ such that $G$ has a walk $W$ from $j$ to $i$ of sign $(f_j(x^1)-x^1_j)(f_i(x^\ell)-x^\ell_i)$ such that the vertex sequence of $W$ is a subsequence of the direction sequence of $P$. Furthermore, if $P$ is increasing then $W$ is a full-positive path (its vertex sequence has no repetition, and all its arcs are positive) and $j\in\Delta(x^1,x^{\ell+1})$.
\end{lemma}
The following is a well-known result of Robert.
\begin{lemma}[\cite{R95}]\label{lem:R95}
Suppose that $G$ is acyclic and let $f\in F(G)$. Then $f$ has a unique fixed point and $\Gamma(f)$ has a geodesic from any configuration on $V$ to this fixed point.
\end{lemma}
\begin{proof}[\BF{Proof of \cref{thm:fvs2}}]
Let $\{i_1,i_2\}$ be a feedback vertex set of $G$. Let $f\in F(G)$, and let $A,B$ be distinct attractors of $\Gamma=\Gamma(f)$ with $[A]\cap [B]\neq\emptyset$. We will prove that $H_2$ is embedded in $G$. For $c_1,c_2\in\{0,1\}$, let $X^{c_1c_2}$ be the set of configurations on $V$ with $x_{i_1}=c_1$ and $x_{i_2}=c_2$.
\begin{quote}
(1) {\em For every $c_1,c_2\in \{0,1\}$, there is $x^{c_1c_2}\in X^{c_1c_2}$ with $f_j(x^{c_1c_2})=x^{c_1c_2}_j$ for all $j\neq i_1,i_2$ such that $\Gamma$ has a geodesic from any configuration in $X^{c_1c_2}$ to $x^{c_1c_2}$.}
\smallskip
Let $f'\in F(V)$ be defined by $f'_{i_1}(x)=c_1$, $f'_{i_2}(x)=c_2$ and $f'_j(x)=f_j(x)$ for all $j\neq i_1,i_2$. Then $G(f')$ is obtained from $G$ by removing all the arcs with $i_1$ or $i_2$ as terminal vertex, and thus it is acyclic. By \cref{lem:R95}, $f'$ has a unique fixed point, say $x^{c_1c_2}$, and $\Gamma(f')$ has a geodesic from any configuration to $x^{c_1c_2}$. Since $x^{c_1c_2}\in X^{c_1c_2}$, and since $\Gamma(f')[X^{c_1c_2}]=\Gamma[X^{c_1c_2}]$, we deduce that $x^{c_1c_2}$ has the desired properties.
\end{quote}
We deduce from (1) that $A$ and $B$ cannot intersect the same set $X^{c_1c_2}$ (since otherwise $x^{c_1c_2}\in A\cap B$). Suppose that $A$ and $B$ intersect at most two of the four sets $X^{00},X^{01},X^{10},X^{11}$, and that $A$ intersects $X^{c_1c_2}$. If $A$ also intersects $X^{\bar c_1\bar c_2}$ then it must intersect $X^{\bar c_1 c_2}$ or $X^{c_1 \bar c_2}$, and we obtain a contradiction. We deduce that either $A\subseteq X^{c_1c_2}\cup X^{\bar c_1c_2}$ and $B\subseteq X^{\bar c_1\bar c_2}\cup X^{c_1\bar c_2}$, or $A\subseteq X^{c_1c_2}\cup X^{c_1\bar c_2}$ and $B\subseteq X^{\bar c_1\bar c_2}\cup X^{\bar c_1c_2}$. In both cases we have $[A]\cap [B]=\emptyset$, contradicting the choice of $A$ and $B$. Hence we can suppose that one of $A,B$ intersects three of the four sets, and the other intersects the remaining one. Up to a switch of $f$, we can suppose that:
\[
x^{00}=\mathbf{0}, \quad B\subseteq X^{00}\quad\textrm{and}\quad
A\subseteq X^{01}\cup X^{10}\cup X^{11}.
\]
From (1) we have $\mathbf{0}\in B$ and $x^{01},x^{10},x^{11}\in A$.
\begin{quote}
(2) {\em $f(\mathbf{0})=\mathbf{0}$, $f(x^{01})=x^{01}+e_{i_1}$ and $f(x^{10})=x^{10}+e_{i_2}$.}
\smallskip
If $f_{i_1}(\mathbf{0})=1$ then $\Gamma$ has an arc from $\mathbf{0}$ to $e_{i_1}$ and we deduce that $e_{i_1}\in B\cap X^{10}$, a contradiction. Thus $f_{i_1}(\mathbf{0})=0$ and we prove similarly that $f_{i_2}(\mathbf{0})=0$. We deduce from (1) that $f(\mathbf{0})=\mathbf{0}$. If $f_{i_2}(x^{01})=0$, then $\Gamma$ has an arc from $x^{01}$ to $x^{01}+e_{i_2}$ and we deduce that $x^{01}+e_{i_2}\in A\cap X^{00}$, a contradiction. Thus $f_{i_2}(x^{01})=1$, and since $x^{01}$ is not a fixed point, we deduce from (1) that $f(x^{01})=x^{01}+e_{i_1}$. We prove similarly that $f(x^{10})=x^{10}+e_{i_2}$.
\end{quote}
\begin{quote}
(3) {\em $G\setminus i_2$ has a full-positive cycle containing $i_1$, and $G\setminus i_1$ has a full-positive cycle containing $i_2$.}
\smallskip
Let $f'\in F(V)$ be defined by $f'_{i_2}(x)=0$ and $f'_j(x)=f_j(x)$ for all $j\neq i_2$. Then $G(f')\setminus i_2=G\setminus i_2$. By (2), we have $f'(x^{10})=x^{10}$. Since $f'(\mathbf{0})=\mathbf{0}$ and $x^{10}_{i_2}=0$, we deduce from \cref{lem:A08} that $G(f')\setminus i_2=G\setminus i_2$ has a full-positive cycle~$C$. Since $i_1$ is a feedback vertex set of $G\setminus i_2$, we deduce that $C$ contains $i_1$. We prove similarly that $G\setminus i_1$ has a full-positive cycle containing $i_2$.
\end{quote}
\begin{quote}
(4) {\em $G$ has a full-positive path from $i_1$ to $i_2$, and from $i_2$ to $i_1$.}
\smallskip
We prove that $G$ has a full-positive path from $i_1$ to $i_2$; for the other path the argument is similar. If $f_{i_2}(e_{i_1})=1$ then, since $f_{i_2}(\mathbf{0})=0$, $G$ has a positive arc from $i_1$ to $i_2$ and we are done. Suppose $f_{i_2}(e_{i_1})=0$. Let $P$ be a geodesic of $\Gamma$ from $e_{i_1}$ to $x^{10}$, which exists by (1) and is increasing since $e_{i_1}\leq x^{10}$. Let $x$ be the first configuration of $P$ with $f_{i_2}(x)=1$; it exists since $f_{i_2}(x^{10})=1$ by (2), and it is not the initial configuration $e_{i_1}$ of $P$, by our assumption that $f_{i_2}(e_{i_1})=0$. Let $P'$ be obtained by adding the arc from $x$ to $x+e_{i_2}$ to the subpath of $P$ from $e_{i_1}$ to $x$. Then $P'$ is increasing and, by \cref{lem:R10b}, there is $j\neq i_1,i_2$ such that $f_j(e_{i_1})\neq (e_{i_1})_j$ and such that $G$ has a full-positive path from $j$ to $i_2$ (which does not contain $i_1$). So $f_j(e_{i_1})=1$ and, since $f_j(\mathbf{0})=0$, $G$ has a positive arc from $i_1$ to $j$, and thus $G$ has a full-positive path from $i_1$ to $i_2$.
\end{quote}
\begin{quote}
(5) {\em If $f_{i_1}(x^{11})=0$ then $G$ has a negative cycle containing $i_1$ but not $i_2$, and if $f_{i_2}(x^{11})=0$ then $G$ has a negative cycle containing $i_2$ but not $i_1$.}
\smallskip
Suppose that $f_{i_1}(x^{11})=0$. By (2) we have $f(x^{01})=x^{01}+e_{i_1}$, and since $x^{01}\in A$, we have $x^{01}+e_{i_1}\in A\cap X^{11}$. By~(1), $\Gamma$ has a geodesic path from $x^{01}+e_{i_1}$ to $x^{11}$. Hence it has a shortest geodesic path $P$ from $x^{01}+e_{i_1}$ to a state $x\in X^{11}$ such that $f_{i_1}(x)=0$. Let $P'$ be obtained from $P$ by adding the arc from $x^{01}$ to $x^{01}+e_{i_1}$ and from $x$ to $x+e_{i_1}$. By \cref{lem:R10b}, $G\setminus i_2$ has a negative walk from $i_1$ to itself. Hence $G\setminus i_2$ has a negative cycle and since $i_1$ is a feedback vertex set of $G\setminus i_2$, this negative cycle contains $i_1$. We prove similarly the second assertion.
\end{quote}
\begin{quote}
(6) {\em If $f_{i_1}(x^{11})=0$, then $G$ has a negative path from $i_2$ to $i_1$, and if $f_{i_2}(x^{11})=0$, then $G$ has a negative path from $i_1$ to $i_2$.}
\smallskip
Suppose that $f_{i_1}(x^{11})=0$. By (2) we have $f(x^{10})=x^{10}+e_{i_2}$, and since $x^{10}\in A$, we have $x^{10}+e_{i_2}\in A\cap X^{11}$. By~(1), $\Gamma$ has a geodesic path from $x^{10}+e_{i_2}$ to $x^{11}$. Hence it has a shortest geodesic path $P$ from $x^{10}+e_{i_2}$ to a state $x\in X^{11}$ such that $f_{i_1}(x)=0$. Let $P'$ be obtained from $P$ by adding the arc from $x^{10}$ to $x^{10}+e_{i_2}$ and from $x$ to $x+e_{i_1}$. By \cref{lem:R10b}, $G$ has a negative walk $W$ from $i_2$ to $i_1$ such that, denoting $j_1,\dots,j_\ell$ the vertex sequence of $W$ (thus $j_1=i_2$ and $j_\ell=i_1$), we have $i_1,i_2\not\in\{j_2,\dots,j_{\ell-1}\}$. Since $G\setminus\{i_1,i_2\}$ is acyclic, the vertex sequence has no repetition. Hence $W$ corresponds to a path. We prove similarly the second assertion.
\end{quote}
Since $x^{11}$ is not a fixed point, by (1) we have $f_{i_1}(x^{11})=0$ or $f_{i_2}(x^{11})=0$. If $f_{i_1}(x^{11})=0$ and $f_{i_2}(x^{11})=0$ then, by (3)-(6), $K^\pm_2$ is embedded in $G$ and so is $H_2$. So suppose, without loss of generality, that $f_{i_2}(x^{11})=1$, and thus $f_{i_1}(x^{11})=0$ (since otherwise, by (1), $x^{11}$ is a fixed point, a contradiction). By (3)-(6), it only remains to prove that $G$ has a negative path from $i_1$ to $i_2$.
\medskip
Suppose, for a contradiction, that all the paths from $i_1$ to $i_2$ are positive. Let $I$ be the set of vertices that belong to a path from $i_1$ to $i_2$; by (4) there is at least one path from $i_1$ to $i_2$, thus $I$ is not empty and $i_1,i_2\in I$. Let $H$ be obtained from $G[I]$ by removing all the incoming arcs of $i_1$ and all the outgoing arcs of $i_2$.
\begin{quote}
(7) {\em There is $L\subseteq I\setminus \{i_1,i_2\}$ such that the $L$-switch of $H$ is full-positive.}
\smallskip
Let $H'$ be obtained from $H$ by adding a positive arc from $i_2$ to $i_1$. Let $j_1,j_2$ be two vertices in $H'$. Then $H$ has a path from $i_1$ to $j_2$ and a path from $j_1$ to $i_2$. Since $H'$ has an arc from $i_2$ to $i_1$, it has a path from $j_1$ to $j_2$. So $H'$ is strongly connected. Suppose that $H'$ has a negative cycle $C$. Since $G\setminus \{i_1,i_2\}$ is acyclic, $H$ is acyclic, and thus $C$ contains the positive arc from $i_2$ to $i_1$. Hence the path of $C$ from $i_1$ to $i_2$ is negative, and since it is in $G$ we obtain a contradiction. Hence $H'$ has only positive cycles. Since $H'$ is strong, by \cref{pro:harary}, there is $L\subseteq I$ such that the $L$-switch of $H'$ is full-positive, and the $(I\setminus L)$-switch of $H'$ is also full-positive. If $i_1,i_2\not\in L$ then we are done. Otherwise, since the arc from $i_2$ to $i_1$ is positive in $H'$, we have $i_1,i_2\in L$, and we are done with $(I\setminus L)$ instead of $L$.
\end{quote}
Hence, up to an $L$-switch of $G$ and $f$ with $i_1,i_2\not\in L$, we can suppose that $H$ is full-positive, and since $i_1,i_2\not\in L$, $B$ still intersects $X^{00}$ and $A$ still intersects $X^{01},X^{10},X^{11}$ (but $x^{00}$ is no longer necessarily equal to $\mathbf{0}$). Let $R$ be the set of vertices reachable from $i_1$ in $G$; so $I\subseteq R$. Let $J=R\setminus I$ and $K=V\setminus R$. Note that $G$ has no arc from $J$ to $I\setminus i_1$ (if there is an arc from $j \in J$ to $I\setminus i_1$, then $j$ is in a path from $i_1$ to $i_2$ so it belongs to $I$, a contradiction). Let $\preceq$ be the partial order on $\{0,1\}^V$ defined by $x\preceq y$ if and only if $x_{I\setminus i_1}\leq y_{I\setminus i_1}$ and $x_K=y_K$.
\begin{quote}
(8) {\em If $x\preceq y$ and $x_{i_1}\leq y_{i_1}$ then $f(x)\preceq f(y)$.}
\smallskip
Suppose that $x\preceq y$ and $x_{i_1}\leq y_{i_1}$. Since $\Delta(x,y)\subseteq R$ and $G$ has no arc from $R$ to $K$ we have $f(x)_K=f(y)_K$. Let $z$ be the configuration on $V$ defined by $z_I=y_I$ and $z_{V\setminus I}=x_{V\setminus I}$. We have $\Delta(x,z)\subseteq I$ and $x\leq z$. Given any $i\in I\setminus i_1$, since every arc of $G$ from a vertex in $I$ to $i$ is positive, we deduce that $f_i(x)\leq f_i(z)$. Since $\Delta(z,y)\subseteq J$ and $G$ has no arc from $J$ to $i$, we have $f_i(z)=f_i(y)$ and thus $f_i(x)\leq f_i(y)$.
\end{quote}
Since $\Gamma[A]$ has a path from $X^{01}$ to $X^{10}$, it has a path $P$ from $X^{01}$ to $X^{10}$ whose internal configurations are all in $X^{11}$. Let $y^0,\dots,y^{\ell+1}$ be the configurations of $P$ in order, and let $j_0,\dots,j_\ell$ be the direction sequence of $P$; thus $y^0\in X^{01}$, $y^{\ell+1}\in X^{10}$, and $y^k\in X^{11}$ for $1\leq k\leq\ell$. Since $y^0\in X^{01}$ and $y^1\in X^{11}$, we have $j_0=i_1$; similarly, since $y^\ell\in X^{11}$ and $y^{\ell+1}\in X^{10}$, we have $j_\ell=i_2$.
\medskip
Let $x^0=y^0$ and, for $1\leq k\leq\ell$, let $x^k=x^{k-1}+e_{j_k}$ if $\Gamma$ has an arc from $x^{k-1}$ in the direction $j_k$, and $x^k=x^{k-1}$ otherwise. Hence $\Gamma$ has a path from $x^0$ to $x^\ell$ whose direction sequence is a subsequence of $j_1,\dots,j_\ell$.
\medskip
We have $x^0_{i_1}=y^0_{i_1}=0$ and $y^1_{i_1}=1$. Since $i_1\not\in\{j_1,\dots,j_\ell\}$, we deduce that $x^k_{i_1}<y^{k+1}_{i_1}$ for all $0\leq k\leq \ell$. Hence we have $x^k\preceq y^{k+1}$ for all $0\leq k\leq \ell$. Indeed, for $k=0$ we have $x^0=y^0\preceq y^0+e_{i_1}=y^1$, and if $x^{k-1}\preceq y^k$ for some $1\leq k\leq\ell$, then, since $x^{k-1}_{i_1} <y^k_{i_1}$, we have $f(x^{k-1})\preceq f(y^k)$ by (8) and we deduce that $x^k\preceq y^{k+1}$. In particular $x^\ell\preceq y^{\ell+1}$. Since $y^{\ell+1}\in X^{10}$, we have $x^\ell_{i_2}=0$ and thus $x^\ell\in X^{00}$ because $x^\ell_{i_1}=0$. Since $\Gamma$ has a path from $x^0$ to $x^\ell$ and $x^0\in A$, we have $x^\ell\in A\cap X^{00}$, a contradiction. This proves that $G$ has a negative path from $i_1$ to $i_2$.
\end{proof}
If $G$ has feedback number at least $3$, then $H_2$ is not necessarily embedded in $G$ as illustrated by the following example.
\begin{example}\label{ex:fn3}
Let $f\in F(3)$ be defined by $f_1(x)=\bar x_3x_1\lor \bar x_3x_2$, $f_2(x)=\bar x_1x_2\lor \bar x_1x_3$ and $f_3(x)=\bar x_2x_3\lor \bar x_2x_1$. Then $\Gamma(f)$ is non-separating, and $H_2$ is not embedded in $G(f)$ since all the paths from $1$ to $3$ are positive, all the paths from $3$ to $2$ are positive and all the paths from $2$ to $1$ are positive.
\[
\begin{array}{c}
\begin{tikzpicture}
\pgfmathparse{1}
\node (000) at (0,0){{\boldmath \textcolor{Plum}{$000$} \unboldmath}};
\node (001) at (1,1){{\boldmath \textcolor{Blue}{$001$} \unboldmath}};
\node (010) at (0,2){{\boldmath \textcolor{Blue}{$010$} \unboldmath}};
\node (011) at (1,3){{\boldmath \textcolor{Blue}{$011$} \unboldmath}};
\node (100) at (2,0){{\boldmath \textcolor{Blue}{$100$} \unboldmath}};
\node (101) at (3,1){{\boldmath \textcolor{Blue}{$101$} \unboldmath}};
\node (110) at (2,2){{\boldmath \textcolor{Blue}{$110$} \unboldmath}};
\node (111) at (3,3){$111$};
\path[thick,->,draw,black]
(010) edge[ultra thick,Blue] (110)
(110) edge[ultra thick,Blue] (100)
(100) edge[ultra thick,Blue] (101)
(101) edge[ultra thick,Blue] (001)
(001) edge[ultra thick,Blue] (011)
(011) edge[ultra thick,Blue] (010)
(111) edge (011)
(111) edge (101)
(111) edge (110)
;
\end{tikzpicture}
\end{array}
\qquad
\begin{array}{c}
\begin{tikzpicture}
\node[outer sep=1,inner sep=2,circle,draw,thick] (1) at ({-120-90}:1){$1$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (2) at ({120-90}:1){$2$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (3) at (-90:1){$3$};
\draw[Green,->,thick] (1.{-120-90-20}) .. controls ({-120-90-20}:2) and ({-120-90+20}:2) .. (1.{-120-90+20});
\draw[Green,->,thick] (2.{120-90-20}) .. controls ({120-90-20}:2) and ({120-90+20}:2) .. (2.{120-90+20});
\draw[Green,->,thick] (3.{-90-20}) .. controls ({-90-20}:2) and ({-90+20}:2) .. (3.{-90+20});
\path[->,thick]
(1) edge[red,bend left=15] (2)
(2) edge[red,bend left=15] (3)
(3) edge[red,bend left=15] (1)
(1) edge[Green,bend left=15] (3)
(3) edge[Green,bend left=15] (2)
(2) edge[Green,bend left=15] (1)
;
\end{tikzpicture}
\end{array}
\]
\end{example}
Note that the signed digraph of the example has positive feedback number equal to three.
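The claims of \cref{ex:fn3} are small enough to verify by brute force. The following Python sketch (our own illustration, not code distributed with this work; all function and variable names are ours) builds the asynchronous state graph of $f$, extracts its attractors as terminal strongly connected components, and checks that the smallest subspaces containing the two attractors intersect, i.e.\ that $\Gamma(f)$ is non-separating.
\begin{verbatim}
from itertools import product

def f(x):
    x1, x2, x3 = x
    return ((1 - x3) & (x1 | x2),    # f_1 = ~x3 x1 | ~x3 x2
            (1 - x1) & (x2 | x3),    # f_2 = ~x1 x2 | ~x1 x3
            (1 - x2) & (x3 | x1))    # f_3 = ~x2 x3 | ~x2 x1

n = 3
configs = list(product((0, 1), repeat=n))

def successors(x):
    """Asynchronous successors: update one disagreeing component at a time."""
    fx = f(x)
    return [x[:i] + (fx[i],) + x[i+1:] for i in range(n) if fx[i] != x[i]]

def reachable(x):
    seen, stack = {x}, [x]
    while stack:
        for y in successors(stack.pop()):
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return frozenset(seen)

# attractors = terminal strongly connected components:
# sets R such that reachable(y) = R for every y in R
attractors = {R for R in map(reachable, configs) if all(reachable(y) == R for y in R)}

def subspace(A):
    """The smallest subspace [A] containing A: the values taken by each component."""
    return tuple({x[i] for x in A} for i in range(n))

A, B = attractors                      # the fixed point 000 and a cyclic attractor
print(sorted(map(sorted, attractors)))
print(all(subspace(A)[i] & subspace(B)[i] for i in range(n)))  # True: [A] and [B] intersect
\end{verbatim}
The same brute-force check applies verbatim to the other small networks given explicitly in this section.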
\cref{ex:not-sep} shows that when $G$ is non-separating and has positive feedback number equal to two, $H_2$ is not necessarily embedded in $G$.
The following example shows that $H_2$ need not be embedded even if we additionally require $G$ to be strongly connected.
\begin{example}
\label{ex:strong-h2-not-embedded}
Consider $f\in F(4)$ defined by $f_1(x)=x_3 \vee x_1 \bar x_2$, $f_2(x)= x_4 \vee x_2 \bar x_1$, $f_3(x)= x_2 \bar x_3$ and $f_4(x)= x_1$. Then $\Gamma(f)$ is non-separating as shown in the figure below. $G(f)$ is strongly connected, has feedback number three and positive feedback number two. $H_2$ is not embedded in $G(f)$: vertices 1 and 2 are the only vertices that belong to disjoint positive cycles, and all negative cycles that contain one of them contain both.
\[
\begin{array}{c}
\begin{tikzpicture}
\pgfmathparse{1}
\node (0000) at (0,0){{\boldmath \textcolor{Plum}{$0000$} \unboldmath}};
\node (0010) at (1,1){$0010$};
\node (0100) at (0,2){{\boldmath \textcolor{Blue}{$0100$} \unboldmath}};
\node (0110) at (1,3){{\boldmath \textcolor{Blue}{$0110$} \unboldmath}};
\node (1000) at (2,0){{\boldmath \textcolor{Blue}{$1000$} \unboldmath}};
\node (1010) at (3,1){{\boldmath \textcolor{Blue}{$1010$} \unboldmath}};
\node (1100) at (2,2){{\boldmath \textcolor{Blue}{$1100$} \unboldmath}};
\node (1110) at (3,3){{\boldmath \textcolor{Blue}{$1110$} \unboldmath}};
\node (0001) at (5,0){$0001$};
\node (0011) at (6,1){$0011$};
\node (0101) at (5,2){{\boldmath \textcolor{Blue}{$0101$} \unboldmath}};
\node (0111) at (6,3){{\boldmath \textcolor{Blue}{$0111$} \unboldmath}};
\node (1001) at (7,0){{\boldmath \textcolor{Blue}{$1001$} \unboldmath}};
\node (1011) at (8,1){{\boldmath \textcolor{Blue}{$1011$} \unboldmath}};
\node (1101) at (7,2){{\boldmath \textcolor{Blue}{$1101$} \unboldmath}};
\node (1111) at (8,3){{\boldmath \textcolor{Blue}{$1111$} \unboldmath}};
\path[thick,->,draw,black]
(0001) edge (0101)
(0001) edge[bend left=20] (0000)
(0010) edge (0000)
(0010) edge (1010)
(0011) edge (0001)
(0011) edge[bend left=20] (0010)
(0011) edge (0111)
(0011) edge (1011)
(0100) edge[Blue,ultra thick,bend right=10] (0110)
(0101) edge[Blue,ultra thick,bend right=20] (0100)
(0101) edge[Blue,ultra thick,bend right=10] (0111)
(0110) edge[Blue,ultra thick,bend right=10] (0100)
(0110) edge[Blue,ultra thick] (1110)
(0111) edge[Blue,ultra thick,bend right=10] (0101)
(0111) edge[Blue,ultra thick,bend right=20] (0110)
(0111) edge[Blue,ultra thick] (1111)
(1000) edge[Blue,ultra thick,bend right=20] (1001)
(1001) edge[Blue,ultra thick] (1101)
(1010) edge[Blue,ultra thick] (1000)
(1010) edge[Blue,ultra thick,bend right=20] (1011)
(1011) edge[Blue,ultra thick] (1001)
(1011) edge[Blue,ultra thick] (1111)
(1100) edge[Blue,ultra thick] (0100)
(1100) edge[Blue,ultra thick] (1000)
(1100) edge[Blue,ultra thick,bend right=10] (1110)
(1100) edge[Blue,ultra thick,bend left=20] (1101)
(1101) edge[Blue,ultra thick] (0101)
(1101) edge[Blue,ultra thick,bend right=10] (1111)
(1110) edge[Blue,ultra thick,bend right=10] (1100)
(1110) edge[Blue,ultra thick,bend left=20] (1111)
(1110) edge[Blue,ultra thick] (1010)
(1111) edge[Blue,ultra thick,bend right=10] (1101)
;
\end{tikzpicture}
\end{array}
\qquad
\begin{array}{c}
\begin{tikzpicture}
\node[outer sep=1,inner sep=2,circle,draw,thick] (1) at (0,0){$1$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (2) at (2,0){$2$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (3) at (0,-2){$3$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (4) at (2,-2){$4$};
\draw[red,->,thick] (3) to [out=120,in=210,looseness=6] (3);
\draw[Green,->,thick] (1) to [out=120,in=210,looseness=6] (1);
\draw[Green,->,thick] (2) to [out=60,in=-30,looseness=6] (2);
\path[->,thick]
(1) edge[red,bend right=15] (2)
(1) edge[Green] (4)
(2) edge[red,bend right=15] (1)
(2) edge[Green] (3)
(3) edge[Green] (1)
(4) edge[Green] (2)
;
\end{tikzpicture}
\end{array}
\]
\end{example}
\section{Open problems}\label{sec:open-problems}
In this work we introduced notions of ``separation'' for asynchronous attractors of Boolean networks.
The mildest notion of separation requires that the minimal subspaces containing different attractors do not intersect.
If the minimal subspaces are also closed with respect to the dynamics, we speak of trap-separation.
\medskip
Previous results examined the cases of interaction graphs with no cycles, no positive cycles, or no negative cycles. In all cases, the attractors have strong separation properties, in particular they can always be separated using trap spaces.
Here we showed that the existence of at most one positive cycle (\cref{thm:PFN}) or at most one negative cycle (\cref{thm:NEGATIVE}) is sufficient for separation but not for trap-separation (\cref{ex:not-trap-sep}).
Separation is also guaranteed if there is no intersection between positive or negative cycles (\cref{thm:sep}), and trap-separation if there are no paths from negative to positive cycles (\cref{thm:trap-sep}).
\medskip
If we add the requirement that the graph is strongly connected, then trap-separation holds if there is at most one negative cycle (\cref{thm:NEGATIVE}). The theorem actually shows that a stronger property holds: the minimal subspaces containing attractors are trap spaces. Informally, we can say that the attractors are well approximated by minimal trap spaces. Whether the existence of at most one positive cycle in a strongly connected graph also implies conditions stronger than separation remains an open question. \cref{ex:strong-not-trapping} shows that under these conditions the attractors are not necessarily well approximated by minimal trap spaces.
\medskip
We also looked at how the existence of several attractors affects the existence of separate cycles of opposite signs.
We saw that if a network has multiple attractors, at least one of which is not a fixed point, and its dynamics is trap-separating, then its interaction graph must have disjoint positive and negative cycles (\cref{pro:disjoint-opposite}); this is not necessarily true if the dynamics is only separating (\cref{ex:sep-not-conv-fix}).
For non-separating dynamics, all the examples we provided have interaction graphs with disjoint positive and negative cycles, and with at least one positive cycle all of whose vertices belong to a negative cycle. We formulate the following conjecture.
\begin{conjecture}
Suppose that $G$ is non-separating. Then
\begin{itemize}
\item $G$ has a negative and a positive cycle that are disjoint, and
\item $G$ has a positive cycle $C$ such that every vertex of $C$ belongs to a negative cycle.
\end{itemize}
\end{conjecture}
We saw that non-separating graphs have feedback number at least two (\cref{thm:PFN}), and that if the feedback number is exactly two then the graph contains $H_2$ (in the sense of \cref{thm:fvs2}). Starting from $H_2$ we can construct a non-separating strongly connected graph with $n\geq 3$ vertices by replacing the positive arc from $1$ to $2$ with a full-positive path with $n-2$ internal vertices (for instance, the signed digraph in \cref{ex:negative_feedback_1} is obtained by replacing the positive arc from $1$ to $2$ with a full-positive path with one internal vertex).
\begin{example}\label{ex:H2-n}
For each $n\geq 3$, let $f\in F(n)$ be defined by
\[f_1(x) = x_1 + x_2, \ \ f_2(x) = \bar{x}_1x_2 \vee x_n, \ \ f_3(x) = x_1, \ \ f_k(x) = x_{k-1} \text{ for } k=4,\dots,n.\]
The interaction graph of $f$ is as follows (the dotted green arrow represents a full-positive path).
\[
\begin{tikzpicture}
\useasboundingbox (-2.2,-1.8) rectangle (2.2,0.7);
\node[outer sep=1,inner sep=2,circle,draw,thick] (1) at ({180}:1){$1$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (2) at ({0}:1){$2$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (3) at (-1.5,-1.5){$3$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (4) at (-0.5,-1.5){$4$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (n) at (1.5,-1.5){$n$};
\draw[Green,->,thick] (1.{180-20}) .. controls ({180-20}:2.3) and ({180+20}:2.3) .. (1.{180+20});
\draw[red,->,thick] (1.{180-60}) .. controls ({180-30}:2.8) and ({180+30}:2.8) .. (1.{180+60});
\draw[Green,->,thick] (2.{0-20}) .. controls ({0-20}:2.3) and ({0+20}:2.3) .. (2.{0+20});
\path[->,thick]
(1) edge[red,bend right=25] (2)
(1) edge[Green,bend left=5] (3)
(n) edge[Green,bend left=5] (2)
(2) edge[red,bend right=25] (1)
(2) edge[Green,bend right=55] (1)
(3) edge[Green] (4)
(4) edge[Green,dotted] (n)
;
\end{tikzpicture}
\]
It is strongly connected, has $n+5$ arcs and 7 cycles, of which 4 are positive and 3 are negative. This signed digraph is non-separating since $\Gamma(f)$ is non-separating. Indeed, $\mathbf{0}$ is the unique fixed point of $f$. Furthermore, the set $T$ of configurations $x$ with $x_1=1$ or $x_2=1$ is a trap set which thus contains an attractor $A$. We can easily check that $\Gamma(f)$ has a path from any configuration in $T$ to $\mathbf{1}$, and thus $\mathbf{1}\in A$. Since $\Gamma(f)$ has a path from $\mathbf{1}$ to $e_2$ (with direction sequence $1,3,4,\dots,n$) and a path from $e_2$ to $e_1$ (with direction sequence $1,2$), we have $e_1,e_2,\mathbf{1}\in A$, and thus $[A]=\{0,1\}^n$, so $\Gamma(f)$ is not separating.
\end{example}
Non-separating signed digraphs in which $H_2$ is not embedded do exist (\cref{ex:fn3});
however, we conjecture that the signed digraphs derived from $H_2$ realize the minimum number of arcs and of cycles among strongly connected non-separating signed digraphs.
\begin{conjecture}
Every non-separating strongly connected signed digraph with $n\geq 3$ vertices has at least $n+5$ arcs and at least 7 cycles. At least 4 cycles are positive and at least 3 are negative.
\end{conjecture}
For signed digraphs that are separating but not trap-separating we can suggest analogous bounds, based on the following example.
\begin{example}\label{ex:sep-not-trap-sep-n}
For each $n\geq 4$ let $f\in F(n)$ be defined by
\[f_1(x) = \bar{x}_{n-1} \bar{x}_{n}, \ \ f_n(x)= x_n \lor x_1\bar x_2x_3,\ \ f_k(x) = x_{k-1} \text{ for } k=2,\dots,n-1.\]
The interaction graph of $f$ is as follows (the dotted green arrow represents a full-positive path).
\[
\begin{tikzpicture}
\node[outer sep=1,inner sep=2,circle,draw,thick] (1) at (90:1.5){$1$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (2) at ({0}:1.5){$2$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (3) at (-90:1.5){$3$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (n-1) at (180:1.5){\scriptsize $n-1$};
\node[outer sep=1,inner sep=2,circle,draw,thick] (n) at (3,0){$n$};
\draw[Green,->,thick] (n.{0-20}) .. controls ({0-15}:4.3) and ({0+15}:4.3) .. (n.{0+20});
\path[->,thick]
(1) edge[Green,bend left=15] (2)
(2) edge[Green,bend left=15] (3)
(3) edge[Green,bend left=15,dotted] (n-1)
(n-1) edge[red,bend left=15] (1)
(1) edge[Green,bend left=15] (n)
(2) edge[red] (n)
(3) edge[Green,bend right=15] (n)
(n) edge[red,bend right=40] (1)
;
\end{tikzpicture}
\]
It is separating since it has feedback number two ($\{1,n\}$ is a feedback vertex set) and no embedding of $H_2$ (see \cref{thm:fvs2}). It is strongly connected, has $n+5$ arcs and 5 cycles, of which 2 are positive and 3 are negative. However, it is not trap-separating since $\Gamma(f)$ is not trap-separating. Indeed, $e_n$ is a fixed point of $f$ and $\Gamma(f)$ has a cyclic attractor $A$ whose $2(n-1)$ configurations are $\sum_{i=1}^k e_i$ and $\sum_{i=k+1}^{n-1} e_i$ for $k=1,\dots,n-1$ (the empty sum being $\mathbf{0}$). Thus $[A]=\{x_n=0\}$, and since $\Gamma(f)$ has an arc from $e_1+e_3$ to $e_1+e_3+e_n$ we have $\langle A\rangle=\{0,1\}^n$, and thus $\Gamma(f)$ is not trap-separating.
\end{example}
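For $n=4$ the assertions of this example can be checked directly. The sketch below (ours, using the same brute-force approach as the earlier check of \cref{ex:fn3}; the names are ours) lists the attractors of $\Gamma(f)$, the coordinates that are free in $[A]$, and the arc from $e_1+e_3$ used above.
\begin{verbatim}
from itertools import product

n = 4

def f(x):
    x1, x2, x3, x4 = x
    return ((1 - x3) & (1 - x4),          # f_1 = ~x_{n-1} ~x_n
            x1,                           # f_2 = x_1
            x2,                           # f_3 = x_2
            x4 | (x1 & (1 - x2) & x3))    # f_n = x_n | x_1 ~x_2 x_3

def successors(x):
    fx = f(x)
    return [x[:i] + (fx[i],) + x[i+1:] for i in range(n) if fx[i] != x[i]]

def reachable(x):
    seen, stack = {x}, [x]
    while stack:
        for y in successors(stack.pop()):
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return frozenset(seen)

configs = list(product((0, 1), repeat=n))
attractors = {R for R in map(reachable, configs) if all(reachable(y) == R for y in R)}
for A in sorted(attractors, key=len):
    free = [i + 1 for i in range(n) if len({x[i] for x in A}) == 2]
    print(sorted(A), "free coordinates of [A]:", free)
# printed: the fixed point e_4 = (0,0,0,1) with no free coordinate, and a cyclic
# attractor of 6 configurations, all with x_4 = 0, so that [A] = {x : x_4 = 0}
print((1, 0, 1, 1) in successors((1, 0, 1, 0)))  # True: arc from e_1+e_3 to e_1+e_3+e_4
\end{verbatim}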
\begin{conjecture}
If a strongly connected signed digraph with $n\geq 4$ vertices is separating but not trap-separating, then it has at least $n+5$ arcs and at least 5 cycles, of which at least 2 are positive and at least 3 are negative.
\end{conjecture}
\section*{Acknowledgments}
The authors are grateful to Heike Siebert and the participants to the {\em Berlin Workshop on Theory and applications of Boolean interaction networks 2019} for helpful discussions.
Funding: Elisa Tonello was partially funded by the Volkswagen Stiftung (Volkswagen Foundation) under the initiative Life?—A fresh scientific approach to the basic principles of life (Project ID: 93063). Adrien Richard was supported by the Young Researcher project ANR-18-CE40-0002-01 ``FANs''.
| {
"timestamp": "2022-06-24T02:15:27",
"yymm": "2206",
"arxiv_id": "2206.11651",
"language": "en",
"url": "https://arxiv.org/abs/2206.11651",
"abstract": "The structure of the graph defined by the interactions in a Boolean network can determine properties of the asymptotic dynamics. For instance, considering the asynchronous dynamics, the absence of positive cycles guarantees the existence of a unique attractor, and the absence of negative cycles ensures that all attractors are fixed points. In presence of multiple attractors, one might be interested in properties that ensure that attractors are sufficiently \"isolated\", that is, they can be found in separate subspaces or even trap spaces, subspaces that are closed with respect to the dynamics. Here we introduce notions of separability for attractors and identify corresponding necessary conditions on the interaction graph. In particular, we show that if the interaction graph has at most one positive cycle, or at most one negative cycle, or if no positive cycle intersects a negative cycle, then the attractors can be separated by subspaces. If the interaction graph has no path from a negative to a positive cycle, then the attractors can be separated by trap spaces. Furthermore, we study networks with interaction graphs admitting two vertices that intersect all cycles, and show that if their attractors cannot be separated by subspaces, then their interaction graph must contain a copy of the complete signed digraph on two vertices, deprived of a negative loop. We thus establish a connection between a dynamical property and a complex network motif. The topic is far from exhausted and we conclude by stating some open questions.",
"subjects": "Discrete Mathematics (cs.DM); Combinatorics (math.CO)",
"title": "Attractor separation and signed cycles in asynchronous Boolean networks",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9873750484894196,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7095221672584375
} |
https://arxiv.org/abs/1701.06377 | Counting Arithmetical Structures on Paths and Cycles | Let $G$ be a finite, simple, connected graph. An arithmetical structure on $G$ is a pair of positive integer vectors $\mathbf{d},\mathbf{r}$ such that $(\mathrm{diag}(\mathbf{d})-A)\mathbf{r}=0$, where $A$ is the adjacency matrix of $G$. We investigate the combinatorics of arithmetical structures on path and cycle graphs, as well as the associated critical groups (the cokernels of the matrices $(\mathrm{diag}(\mathbf{d})-A)$). For paths, we prove that arithmetical structures are enumerated by the Catalan numbers, and we obtain refined enumeration results related to ballot sequences. For cycles, we prove that arithmetical structures are enumerated by the binomial coefficients $\binom{2n-1}{n-1}$, and we obtain refined enumeration results related to multisets. In addition, we determine the critical groups for all arithmetical structures on paths and cycles. | \section{Introduction}\label{sec:intro}
This paper is about the combinatorics of arithmetical structures on path and cycle graphs. We begin by recalling some basic facts about graphs, Laplacians, and critical groups.
Let $G$ be a finite, connected graph with $n\geq 2$ vertices, let $A$ be its adjacency matrix, and let $D$ be the diagonal matrix of vertex degrees. The \emph{Laplacian matrix} $L=D-A$ has rank $n-1$, with nullspace spanned by the all-ones vector $\mathbf{1}$. If we regard $L$ as a ${\mathbb Z}$-linear transformation ${\mathbb Z}^n\to{\mathbb Z}^n$, the cokernel ${\mathbb Z}^n/\im L$ has the form ${\mathbb Z}\oplus K(G)$; here $K(G)$, the \emph{critical group}, is finite abelian, with cardinality equal to the number of spanning trees of $G$, by the Matrix-Tree Theorem. The critical group is also known as the \emph{sandpile group} or the \emph{Jacobian}. The elements of the critical group represent long-term behaviors of the well-studied \emph{abelian sandpile model} on $G$; see, e.g., \cite{BTW,Dhar,WhatIs}.
More generally, an \emph{arithmetical structure} on $G$ is a pair $(\mathbf{d}, \mathbf{r})$ of positive integer vectors such that $\mathbf{r}$ is primitive (the gcd of its coefficients is $1$) and
\[
(\diag(\mathbf{d})-A)\mathbf{r}={\bf 0}.
\]
This definition generalizes the Laplacian arithmetical structure just described, where $\mathbf{d}$ is the vector of vertex degrees and $\mathbf{r}=\mathbf{1}$. Note that each of $\mathbf{d}$ and $\mathbf{r}$ determines the other uniquely, so we may regard any of $\mathbf{d}$, $\mathbf{r}$, or the pair $(\mathbf{d},\mathbf{r})$ as an arithmetical structure on $G$. Where appropriate, we will use the terms \emph{arithmetical $d$-structure} and \emph{arithmetical $r$-structure} to avoid ambiguity.
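As a concrete illustration of the definition (a small sketch of our own, not code from the references), one can check the equation $(\diag(\mathbf{d})-A)\mathbf{r}=\mathbf{0}$ directly. On the path with three vertices, besides the Laplacian structure $\mathbf{d}=(1,2,1)$, $\mathbf{r}=(1,1,1)$, the pair $\mathbf{d}=(2,1,2)$, $\mathbf{r}=(1,2,1)$ is also an arithmetical structure.
\begin{verbatim}
from functools import reduce
from math import gcd

def is_arithmetical(adj, d, r):
    """Check that (diag(d) - A) r = 0 with d, r positive integer vectors and r primitive."""
    n = len(d)
    if any(di < 1 for di in d) or any(ri < 1 for ri in r):
        return False
    if reduce(gcd, r) != 1:                 # r must be primitive
        return False
    return all(d[i] * r[i] == sum(adj[i][j] * r[j] for j in range(n)) for i in range(n))

# adjacency matrix of the path on three vertices: 1 -- 2 -- 3
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]

print(is_arithmetical(A, (1, 2, 1), (1, 1, 1)))   # Laplacian structure: True
print(is_arithmetical(A, (2, 1, 2), (1, 2, 1)))   # True
print(is_arithmetical(A, (2, 2, 2), (1, 1, 1)))   # False
\end{verbatim}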
The set of all arithmetical structures on $G$ is denoted $\Arith(G)$, and the data $G,\mathbf{d},\mathbf{r}$ together determine an \emph{arithmetical graph}. As in the classical case, the matrix $L(G,\mathbf{d})=\diag(\mathbf{d})-A$ has rank $n-1$ \cite[Proposition~1.1]{Lorenzini89}. The torsion part of $\coker L$ is the \emph{critical group} of the arithmetical graph.
Arithmetical graphs were introduced by Lorenzini in~\cite{Lorenzini89} to model degenerations of curves. Specifically, the vertices of $G$ represent components of a degeneration of a given curve, edges represent intersections of components, and the entries of $\mathbf{d}$ are self-intersection numbers. The critical group is then the group of components of the N\'eron model of the Jacobian of the generic curve (an observation attributed by Lorenzini to Raynaud). In this paper, we will not consider the geometric motivation, but instead study arithmetical graphs from a purely combinatorial point of view.
It is known~\cite[Lemma~1.6]{Lorenzini89} that $\Arith(G)$ is finite for all connected graphs~$G$. The proof of this fact is non-constructive (by reduction to Dickson's lemma), raising the question of enumerating arithmetical structures for a particular graph or family of graphs. We will see that when $G$ is a path or a cycle, the enumeration of arithmetical structures on $G$ is controlled by the combinatorics of Catalan numbers. In brief, the path $\P_n$ and the cycle ${\mathcal C}_n$ on $n$ vertices satisfy
\[|\Arith(\P_n)|=C_{n-1}=\frac{1}{n}\binom{2n-2}{n-1},\qquad\qquad
|\Arith({\mathcal C}_n)|=\binom{2n-1}{n-1}=(2n-1) C_{n-1}\]
(Theorems~\ref{path-count} and~\ref{main-cycle-theorem}, respectively).
These results were announced in \cite{arithmetical}.
We will refine these results, and show for example that the numbers of $\mathbf{d}$-structures on $\P_n$ with one prescribed $d_i$ entry are given by the \emph{ballot numbers}, a well-known combinatorial refinement of the Catalan numbers first investigated by Carlitz~\cite{ballot}. For cycles, we get a similar result where the ballot numbers are replaced by binomial coefficients. The critical group of an arithmetical structure on a path is always trivial, while for a cycle it is always cyclic of order equal to the number of occurrences of 1 in the associated arithmetical $r$-structure. Our approaches for these two families are similar: ballot sequences yield information about arithmetical structures for paths, while multisets produce information in the case of cycles. Our main results for paths and cycles mirror each other, as do the proof techniques we use.
Two graph operations that play a central role in our work are \emph{subdivision} (or \emph{blowup}) and \emph{smoothing}. On the level of graphs, subdividing an edge inserts a new degree-2 vertex between its endpoints, while smoothing a vertex of degree~2 removes the vertex and replaces its two incident edges with a single edge between the adjacent vertices. These operations extend to arithmetical structures and preserve the critical group, as shown by Corrales and Valencia \cite[Thms.~5.1, 5.3, 6.5]{arithmetical}, following Lorenzini~\cite[pp.484--485]{Lorenzini89}. These operations turn out to be key in enumerating arithmetical structures. Note that paths and cycles are special because they are precisely the connected graphs of maximum degree 2, hence can be obtained from very small graphs by repeated subdivision.
Looking ahead, many open questions remain about arithmetical graphs. It is natural to ask to what extent the arithmetical critical group $K(G,\mathbf{d},\mathbf{r})$ behaves like the standard critical group. The matrix $L(G,\mathbf{d})$ is an \emph{M-matrix} in the sense of numerical analysis (see, e.g., Plemmons~\cite{Plemmons}), so it admits a generalized version of chip-firing as described by Guzm\'{a}n and Klivans \cite{GuzmanKlivans}. One could also look for an analogue of the matrix-tree theorem, asserting that the cardinality of the critical group enumerates some tree-like structures on the corresponding arithmetical graphs, or for a version of Dhar's burning algorithm~\cite{Dhar} that gives a bijection between those structures and objects like parking functions.
Enumerating arithmetical structures for graphs other than paths and cycles appears to be more difficult. For example, the arithmetical d-structures on the star $K_{n,1}$ can be shown to be the positive integer solutions to the equation
\[d_0 = \sum_{i=1}^n \frac{1}{d_i}.\]
A solution to this Diophantine equation is often called an \emph{Egyptian fraction representation} of $d_0$. The numbers of solutions for $n\leq 8$ are given by sequence \href{http://oeis.org/A280517}{A280517} in~\cite{OEIS}. A related problem, with the additional constraints $d_0=1$ and $d_1\leq\cdots\leq d_n$, was studied by S\'andor~\cite{sandor}, who gave upper and lower bounds for the number of solutions; the upper bound was subsequently improved by Browning and Elsholtz \cite{browning_elsholtz}. The lower and upper bounds are far apart, and it is unclear even what asymptotic growth to expect.
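To make the enumeration concrete, here is a small search sketch (ours; the nondecreasing enumeration with pruning is a standard device for Egyptian fraction problems, and the function name is an assumption of this sketch rather than notation from the literature). It counts the ordered tuples $(d_1,\dots,d_n)$ whose reciprocals sum to a positive integer; for $n=2$ the star is the path on three vertices, so the count should be~$2$.
\begin{verbatim}
from fractions import Fraction
from math import factorial, floor
from collections import Counter

def count_star_structures(n):
    """Count ordered tuples (d_1, ..., d_n) of positive integers whose reciprocals
    sum to a positive integer d_0, i.e. the arithmetical d-structures on the star
    with n leaves; nondecreasing tuples are enumerated and weighted by orderings."""
    total = 0

    def orderings(values):
        count = factorial(len(values))
        for mult in Counter(values).values():
            count //= factorial(mult)
        return count

    def search(k, smallest, acc, chosen):
        nonlocal total
        if k == 0:
            if acc.denominator == 1 and acc >= 1:
                total += orderings(chosen)
            return
        # the final sum strictly exceeds acc, so it is at least max(1, floor(acc)+1);
        # the k remaining reciprocals are each at most 1/d, which bounds d below
        target = max(1, floor(acc) + 1)
        d = smallest
        while Fraction(k, d) >= target - acc:
            search(k - 1, d, acc + Fraction(1, d), chosen + [d])
            d += 1

    search(n, 1, Fraction(0), [])
    return total

for n in range(1, 5):
    print(n, count_star_structures(n))   # for n = 2 the star is P_3, so the count is 2
\end{verbatim}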
{\bf Acknowledgments:} This project began at the ``Sandpile Groups'' workshop at Casa Matem\'{a}tica Oaxaca (CMO) in November 2015, funded by Mexico's Consejo Nacional de Ciencia y Tecnolog\'{i}a (CONACYT). The authors thank CMO and CONACYT for their hospitality, as well as Carlos A.\ Alfaro, Lionel Levine, Hiram H.\ Lopez, and Criel Merino for helpful discussions and suggestions. The authors also thank the anonymous referees for their helpful suggestions.
\section{Paths}\label{sec:paths}
We have two main goals in this section.
First, we show in Theorem~\ref{main-path-theorem} that using $r$-structures one can partition the arithmetical structures on a fixed path into sets with cardinality given by ballot numbers, generalizing Theorem~\ref{path-count} which states that the total number of arithmetical structures is given by a Catalan number.
Second, we show in Theorem~\ref{ballot-path} that using $d$-structures one can produce additional partitions of the arithmetical structures of a fixed path that again has distribution given by ballot numbers.
We begin with some basic results about arithmetical structures on paths.
\begin{lemma}\label{path-lemma}
If $\mathbf{r} = (r_1,\ldots, r_n)$ is an arithmetical $r$-structure on the path $\P_n$ with $n \ge 2$ vertices, then
\[
r_1 = r_n = 1.
\]
Moreover, if $r_j = 1$ for some $1 < j < n$, then $(r_1,\ldots, r_j)$ is an arithmetical $r$-structure on $\P_j$ and $(r_j,\ldots, r_n)$ is an arithmetical $r$-structure on $\P_{n-(j-1)}$.
\end{lemma}
This result follows from~\cite[Theorem 4.2]{arithmetical}, but for the sake of illustrating typical methods, we include a short self-contained proof.
\begin{proof}
First, note that $(\mathbf{d},\mathbf{r})$ is an arithmetical structure on $\P_n$ if and only if the following equalities hold:
\begin{equation} \label{path-d-r-equalities}
\begin{aligned}
r_1d_1&=r_2;\\
r_id_i&=r_{i-1}+r_{i+1}\quad\text{ for }1<i<n;\quad\text{ and }\\
r_nd_n&=r_{n-1}.
\end{aligned}
\end{equation}
Starting with the first equation and moving down, we obtain the sequence of divisibilities:
\[
r_1|r_2 \implies r_1|r_3 \implies \dotsm \implies r_1|r_n.
\]
Since $\mathbf{r}$ is a primitive vector, we conclude that $r_1=1$. The same argument starting with the last equation and moving up yields $r_n=1$.
On the other hand, if $r_j=1$ for some $1<j<n$, then the following slight modification of the first $j$ equations defining $(\mathbf{d},\mathbf{r})$ shows that $(r_1,\dots,r_j)$ is an arithmetical structure on $\P_j$:
\begin{align*}
d_1&=r_2 && (\textrm{since $r_1=1$, as shown above}),\\
r_id_i&=r_{i-1}+r_{i+1} \quad\text{ for }1<i<j;\quad\text{ and }\\
\tilde{d}_j&:=r_{j-1}.
\end{align*}
A similar argument, using the final $n-(j-1)$ equations defining the pair $(\mathbf{d},\mathbf{r})$, shows that $(r_j,\dots,r_n)$ is an arithmetical structure on $\P_{n-(j-1)}$.
\end{proof}
\begin{cor} \label{path-characterization}
Let $\mathbf{r}=(r_1,\dots,r_n)$ be a primitive positive integer vector. Then
$\mathbf{r}$ is an arithmetical $r$-structure on $\P_n$ if and only if
\begin{enumerate}[label=(\alph*)]
\item $r_1=r_n=1$, and \\
\item $r_i|(r_{i-1}+r_{i+1})$ for all $i\in[2,n-1]$.
\end{enumerate}
\end{cor}
\begin{proof}
Condition (a) is part of Lemma~\ref{path-lemma}, and the necessity of condition (b) follows from~\eqref{path-d-r-equalities}. On the other hand, if $\mathbf{r}$ satisfies these two conditions then the corresponding $d$-structure can be recovered from the equations given in~\eqref{path-d-r-equalities}.
\end{proof}
For an arithmetical $r$-structure $\mathbf{r}$, let
\[\mathbf{r}(1) = \#\{i \st r_i=1\}.\]
\begin{theorem} \label{path-count}
The number of arithmetical structures on $\P_n$ is the Catalan number $C_{n-1}=\frac{1}{n}\binom{2n-2}{n-1}$. Moreover, the number of arithmetical $r$-structures with $\mathbf{r}(1) = 2$ is the Catalan number $C_{n-2}$.
\end{theorem}
\begin{proof}
For the second assertion, the description in Corollary~\ref{path-characterization} is a known interpretation of the Catalan numbers; see \cite{AignerSchulze} or \cite[p.~34, Problem~92]{catalan}. The first assertion then follows from the standard Catalan recurrence $C_{n-1} = \sum_{i=0}^{n-2}C_iC_{n-2-i}$, since by Lemma~\ref{path-lemma} the same recurrence holds for arithmetical structures.
\end{proof}
As a consequence of Theorem~\ref{path-count}, the path $\P_2$ has only one arithmetical structure, namely $(\mathbf{d},\mathbf{r})=(\mathbf{1},\mathbf{1})$, and it is the only path with a unique arithmetical structure.
In the rest of this section, we study a finer enumeration of arithmetical structures on paths in terms of their $\mathbf{r}$-vectors (Theorem~\ref{main-path-theorem}) and $\mathbf{d}$-vectors (Theorem~\ref{ballot-path}). The arithmetical structure on $\P_n$ that comes from the Laplacian of $\P_n$ has $\mathbf{r} = (1,\ldots, 1)$ and $\mathbf{d} = (1,2,2,\ldots, 2,1)$. We call this pair $(\mathbf{d},\mathbf{r})$ the \emph{Laplacian arithmetical structure}.
We recall the following result of Corrales and Valencia.
\begin{prop}[{\cite[Thm.~6.1]{arithmetical}}]\label{only-two-path}
There is exactly one arithmetical structure $(\mathbf{d},\mathbf{r})$ on $\P_n$ such that $d_i\geq 2$ for all $1<i<n$, namely the Laplacian arithmetical structure.
\end{prop}
Given a graph $G$, the \emph{subdivision} of an edge $e = uv$ with endpoints $u$ and $v$ yields a graph containing one new vertex $w$, and with a pair of edges $uw$ and $vw$ replacing $e$.
The reverse operation, known as \emph{smoothing} a 2-valent vertex $w$ incident to edges $e_1 = uw$ and $e_2 = wv$, removes both edges $e_1, e_2$ and adds a new edge connecting $u$ and $v$.
One of our starting points is the following result relating the arithmetical structures on $\P_n$ with the arithmetical structures on $\P_{n+1}$, regarding the latter graph as an edge subdivision of the former. The construction is a particular case of the \emph{blow-up} operation described in \cite[pp.~484--485]{Lorenzini89} as well as of the \emph{clique-star transformation}~\cite[Theorem 5.1]{arithmetical}. The recursive structure of the $\mathbf{r}$-vector was also described in \cite[Lemma~2]{AignerSchulze}. We include a unified proof of these facts.
\begin{prop}
\label{blowup-path}
(a) Let $n\geq 2$ and let $(\mathbf{d}',\mathbf{r}')\in\Arith(\P_n)$. Given $i$ with $2\leq i \leq n$, define integer vectors $\mathbf{d}$ and $\mathbf{r}$ of length~$n+1$ as follows:
\begin{equation} \label{path-subdivision}
d_j = \begin{cases}
d'_j &\text{ if } j < i-1,\\
d'_{i-1}+1 &\text{ if } j=i-1,\\
1 &\text{ if } j=i,\\
d'_{i}+1 &\text{ if } j=i+1,\\
d'_{j-1} &\text{ if } j>i+1,
\end{cases}
\qquad\qquad
r_j = \begin{cases}
r'_j &\text{ if } j<i,\\
r'_{i-1}+r'_i &\text{ if } j=i,\\
r'_{j-1} &\text{ if } j>i,
\end{cases}
\end{equation}
for $1\leq j\leq n+1$. Then $(\mathbf{d},\mathbf{r})$ is an arithmetical structure on $\P_{n+1}$.
Moreover, the cokernels of $L(\P_n,\mathbf{d}')$ and $L(\P_{n+1},\mathbf{d})$ are isomorphic.
(b)
Let $n\geq 3$ and let $(\mathbf{d},\mathbf{r})\in\Arith(\P_n)$ such that $d_i=1$ for some $1<i<n$.
For $j\in[n-1]$, define
\begin{equation} \label{path-smoothing}
d'_j = \begin{cases} d_j &\text{ if } j<i-1, \\ d_{i-1}-1 &\text{ if } j=i-1,\\ d_{i+1}-1 &\text{ if } j=i,\\ d_{j+1}&\text{ if } i<j\le n-1,\end{cases}\qquad\qquad
r'_j = \begin{cases} r_j &\text{ if } j<i,\\ r_{j+1}&\text{ if } j\ge i.\end{cases}\qquad\qquad
\end{equation}
Then $(\mathbf{d}',\mathbf{r}')$ is an arithmetical structure on $\P_{n-1}$.
Again, the cokernels of $L(\P_n,\mathbf{d})$ and $L(\P_{n-1},\mathbf{d}')$ are isomorphic.
\end{prop}
In case (a), we say that $(\mathbf{d},\mathbf{r})$ is the \emph{subdivision} of $(\mathbf{d}',\mathbf{r}')$ at position~$i$. In case~(b), we say that $(\mathbf{d}',\mathbf{r}')$ is the \emph{smoothing} of $(\mathbf{d},\mathbf{r})$ at position $i$.
\begin{proof}
Given a graph $G$ and a clique $C$ in $G$, the \emph{clique-star transformation} removes every edge in $C$ and adds a new vertex $w$ together with all the edges between $w$ and every vertex in $C$. Clearly, given an edge $e$ in $\P_n$, the clique-star transformation at $e$ is precisely the subdivision of $e$.
It follows as a special case of \cite[Theorem 5.1]{arithmetical} that equation~\eqref{path-subdivision} gives an arithmetical structure on $\P_{n+1}$.
On the other hand, it is clear that if $(\mathbf{d}',\mathbf{r}')$ is the smoothing of $(\mathbf{d},\mathbf{r})$ at position $i$ then $(\mathbf{d},\mathbf{r})$ is the subdivision of $(\mathbf{d}',\mathbf{r}')$ at position $i$. Hence equation~\eqref{path-smoothing} gives an arithmetical structure on $\P_{n-1}$.
It remains to show that smoothing is well-defined, that is, if $d_i=1$ for some $1<i<n$, then $d_{i-1} \geq 2$ and $d_{i+1} \geq 2$.
In other words, we need to show that if $(\mathbf{d},\mathbf{r})$ is an arithmetical structure on $\P_n$ with $n\geq 3$, then there are no consecutive 1's in $\mathbf{d}$. This follows immediately from Lemma~\ref{isolated-ones-in-d} below, which applies to arithmetical structures on arbitrary graphs.
Finally, the fact that the corresponding cokernels are isomorphic follows directly from \cite[\S 1.8]{Lorenzini89}. Explicitly, let $(\mathbf{d},\mathbf{r})$ be the \emph{subdivision} of $(\mathbf{d}',\mathbf{r}')$ at position~$i$, let $M'=L(\P_{n},\mathbf{d}')$, and let $M=L(\P_{n+1},\mathbf{d})$. Then $M$ is ${\mathbb Z}$-equivalent to the matrix
\[
M_Q = \begin{pmatrix}M'+Q^tQ & -Q^t\\ -Q & 1\end{pmatrix},
\]
where $Q$ is the vector of length $n$ with 1's in the two positions $i-1$ and $i$, and 0's elsewhere. (Here ``${\mathbb Z}$-equivalent'' means $M=AM_QB$ where $A$ and $B$ are invertible over ${\mathbb Z}$.)
Moreover, $M_Q$ is ${\mathbb Z}$-equivalent to the matrix $\begin{pmatrix}
M' & 0 \\ 0 & 1\end{pmatrix}$. Therefore, the cokernel of $M$ is isomorphic to the cokernel of~$M'$.
\end{proof}
\begin{lemma}\label{isolated-ones-in-d}
Let $G$ be a connected graph on the vertex set $\{v_1,v_2,\dots,v_n\}$ with $n\ge 3$ and adjacency matrix $A=(a_{ij})$. In addition, suppose that $v_1$ and $v_2$ are neighbors (that is, $a_{12}>0$). If $\mathbf{d}$ is an arithmetical $d$-structure on $G$ with $d_1=1$, then $d_2>1$.
\end{lemma}
\begin{proof}
Let $\mathbf{r}=(r_1,r_2,\dots,r_n)$ denote the $r$-structure corresponding to the $d$-structure $\mathbf{d}$. By the definition of arithmetical structure, we have
\[
d_1r_1=\sum_{i=1}^na_{1i}r_i \qquad \textrm{and} \qquad
d_2r_2=\sum_{j=1}^na_{2j}r_j.
\]
Now suppose, contrary to the claim, that $d_1=d_2=1$. Since $n\ge 3$, at least one of the vertices $v_1,v_2$ has another neighbor; without loss of generality, suppose that $v_2$ and $v_3$ are connected by an edge of $G$, so that $a_{23}>0$. Then
\begin{eqnarray*}
r_1r_2&=&d_1r_1d_2r_2\\
&=&\left(\sum_{i=1}^na_{1i}r_i\right)\left(\sum_{j=1}^na_{2j}r_j\right)\\
&=&a_{12}^2r_1r_2+a_{12}a_{23}r_2r_3+\sum_{\substack{i,j=1 \\ (i,j)\ne (2,1),(2,3)}}^na_{1i}a_{2j}r_ir_j.
\end{eqnarray*}
Subtracting $r_1r_2$ yields the equation
\begin{eqnarray*}
0=(a_{12}^2-1)r_1r_2+a_{12}a_{23}r_2r_3+
\sum_{\substack{i,j=1 \\ (i,j)\ne (2,1), (2,3)}}^na_{1i}a_{2j}r_ir_j.
\end{eqnarray*}
But this is impossible: the term $(a_{12}^2-1)r_1r_2$ is nonnegative because $a_{12}$ is a positive integer, the remaining sum is nonnegative since $a_{k\ell}\ge 0$ and $r_k\ge 1$ for all $k,\ell\in[n]$, and the term $a_{12}a_{23}r_2r_3$ is strictly positive because $a_{12},a_{23}>0$ by assumption.
\end{proof}
\begin{theorem}\label{critical-group-path}
Let $(\mathbf{d},\mathbf{r})\in\Arith(\P_n)$ be an arithmetical structure on the path. Then the associated critical group $K(\P_n,\mathbf{d},\mathbf{r})$ is trivial. Moreover,
\begin{equation} \label{r-and-d-path}
\mathbf{r}(1)=3n-2-\sum_{j=1}^n d_j.
\end{equation}
\end{theorem}
\begin{proof}
We proceed by induction on $n$. It is easy to check that the theorem holds for the base case $n=2$, where the only arithmetical structure is the Laplacian arithmetical structure $(\mathbf{d},\mathbf{r})=(\mathbf{1},\mathbf{1})$.
For any $n\ge 3$, if $\mathbf{d}=(1,2,2,\dots,2,1)$ and $\mathbf{r}=\mathbf{1}$, then $3n-2-\sum_{i=1}^n d_i=n=\mathbf{r}(1)$. In this case $L$ is the Laplacian, and $K(\P_n,\mathbf{d},\mathbf{1})$ is the standard critical group of $\P_n$, which is trivial (since $\P_n$ has only one spanning tree, namely itself).
Now suppose that $n\geq3$ and $\mathbf{d}\neq(1,2,2,\dots,2,1)$. By Proposition~\ref{only-two-path}, $d_i=1$ for some $1<i<n$. Proposition~\ref{blowup-path} implies that
$(\mathbf{d},\mathbf{r})$ can be obtained by subdividing some $(\mathbf{d}',\mathbf{r}')\in\Arith(\P_{n-1})$ at position~$i$, and $K(\P_n,\mathbf{d},\mathbf{r})=K(\P_{n-1},\mathbf{d}',\mathbf{r}')$. By induction, these groups are trivial. Moreover, we note that $d_ir_i=r_{i-1}+r_{i+1} \ge 2$, which implies that $r_i>1$. We therefore have
\begin{align*}
\mathbf{r}(1) &= \mathbf{r}'(1) = 3(n-1) - 2 - \sum_{j=1}^{n-1} d'_j && \text{(by induction)}\\
&= 3(n-1) - 2 - \left(\sum_{j=1}^n d_j - (2+d_i)\right)\\
&= 3n - 2 - \sum_{j=1}^n d_j && \text{(because $d_i=1$).}\qedhere
\end{align*}
\end{proof}
We next consider a finer enumeration of arithmetical $r$-structures on paths, based on the value of~$\mathbf{r}(1)$.
A \emph{lattice path} is a walk from $(0,0)$ to $(p,q)\in{\mathbb N}^2$ consisting of $p$ east steps and $q$ north steps. It is a standard fact that the number of such paths is $\binom{p+q}{p}$, and that the number of paths from $(0,0)$ to $(k,k)$ that do not cross above the line $y=x$ is the Catalan number $C_k$. More generally, let $B(k,\ell)$ denote the number of lattice paths from $(0,0)$ to $(k,\ell)$ that do not cross above the line $y=x$. The numbers $B(k,\ell)$ are a generalization of the Catalan numbers known as \emph{ballot numbers}, sequence \href{https://oeis.org/A009766}{A009766} in the On-Line Encyclopedia of Integer Sequences \cite{OEIS}. Ballot numbers were first observed by Bertrand in 1887~\cite{Bertrand} and studied in detail by Carlitz~\cite{ballot}. Our $B(k,\ell)$ corresponds to $f(k+1,\ell+1)$ in Carlitz's notation. A formula is given by $B(k,\ell)=\frac{k-\ell+1}{k+1}\binom{k+\ell}{k}$ \cite[eqn.~2.12]{ballot}.
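The ballot numbers are easy to tabulate. The following sketch (ours) counts the lattice paths in the definition of $B(k,\ell)$ by dynamic programming and checks the values against the closed formula quoted above.
\begin{verbatim}
from math import comb

def ballot_dp(k, l):
    """Count lattice paths of east/north steps from (0,0) to (k,l) that never
    cross above the line y = x, by dynamic programming."""
    N = [[0] * (l + 1) for _ in range(k + 1)]
    N[0][0] = 1
    for x in range(k + 1):
        for y in range(min(x, l) + 1):
            if (x, y) == (0, 0):
                continue
            N[x][y] = (N[x - 1][y] if x >= 1 else 0) + (N[x][y - 1] if y >= 1 else 0)
    return N[k][l]

def ballot_formula(k, l):
    return (k - l + 1) * comb(k + l, k) // (k + 1)

assert all(ballot_dp(k, l) == ballot_formula(k, l) for k in range(8) for l in range(k + 1))
print([ballot_dp(k, k) for k in range(8)])   # the Catalan numbers 1, 1, 2, 5, 14, 42, 132, 429
\end{verbatim}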
\begin{Remark} \label{Ballot-Triangulation}
According to Pak's historical survey in the appendix to \cite{catalan}, the first appearance of the Catalan numbers $C_k$ in the literature was as the number of triangulations of a $(k+2)$-gon, as shown by Segner \cite{Segner} and Euler \cite{Euler}. In this context, the ballot number $B(k,\ell)$ counts the number of triangulations of a $(k+3)$-gon with a distinguished vertex that has $k-\ell+1$ triangles incident to it.
\end{Remark}
\begin{theorem} \label{main-path-theorem}
Let $n\geq 2$ and $1\leq k\leq n$, and let $A(n,k)$ be the number of arithmetical structures $(\mathbf{d},\mathbf{r})$ on $\P_n$ such that $\mathbf{r}(1)=k$. Then
\begin{equation} \label{main-path-thm-eqn}
A(n,k) = B(n-2,n-k) = \frac{k-1}{n-1} \binom{2n-2-k}{n-2}.
\end{equation}
\end{theorem}
Combining Theorems~\ref{critical-group-path} and~\ref{main-path-theorem} immediately yields the following result.
\begin{cor}\label{cor:10}
For $n\geq 2$ and any $k$, we have
\[\#\left\{(\mathbf{d},\mathbf{r})\in\Arith(\P_n) \st \sum_{j=1}^nd_j=k\right\} = B(n-2,k-2n+2).
\]
In particular, there are no arithmetical structures with $\sum_{j=1}^nd_j=k$ unless $2n-2\le k\le 3n-4$.
\end{cor}
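For small $n$, Theorem~\ref{path-count}, Theorem~\ref{main-path-theorem} and Corollary~\ref{cor:10} can all be confirmed by brute force. The sketch below (our own; it is not an implementation used in this paper) enumerates candidate $\mathbf{d}$-vectors, recovers $\mathbf{r}$ from the equations~\eqref{path-d-r-equalities} with $r_1=1$, and tabulates the resulting structures by $\mathbf{r}(1)$; the search range for each $d_i$ is justified by Corollary~\ref{cor:10}.
\begin{verbatim}
from itertools import product
from math import comb
from collections import Counter

def r_from_d(d):
    """Solve the path equations with r_1 = 1; return r if all entries are positive
    integers and the last equation d_n r_n = r_{n-1} holds, and None otherwise."""
    n = len(d)
    r = [1, d[0]]                              # r_1 = 1 and r_2 = d_1 r_1
    for i in range(1, n - 1):
        r.append(d[i] * r[i] - r[i - 1])       # r_{i+1} = d_i r_i - r_{i-1}
    if any(ri < 1 for ri in r) or d[n - 1] * r[n - 1] != r[n - 2]:
        return None
    return r

def arith_structures_on_path(n):
    # every d_i is at most sum_j d_j <= 3n - 4, by the corollary above
    return [(d, r) for d in product(range(1, 3 * n - 3), repeat=n)
            if (r := r_from_d(d)) is not None]

for n in range(2, 6):
    structures = arith_structures_on_path(n)
    by_ones = Counter(r.count(1) for _, r in structures)
    print(n,
          len(structures) == comb(2 * n - 2, n - 1) // n,                 # Catalan C_{n-1}
          all(by_ones[k] == (k - 1) * comb(2 * n - 2 - k, n - 2) // (n - 1)
              for k in range(2, n + 1)))                                  # ballot refinement
\end{verbatim}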
We give two proofs of Theorem \ref{main-path-theorem}. The first is bijective and involves identifying lattice paths with \emph{ballot sequences} and interpreting arithmetical structures on $\P_n$ in terms of sequences of edge subdivisions of arithmetical structures on shorter paths. The second proof is phrased in terms of recurrences for certain classes of lattice paths.
To set up the first proof, we describe how to obtain an arithmetical structure on $\P_n$ starting from an arithmetical structure on a path $\P_m$ with $m<n$ and repeatedly subdividing edges in an order specified by an integer sequence $\mathbf{b} = (b_1,\ldots, b_{n-m})$. We first note the following result, which can be found in \cite[Thm.~6.1]{arithmetical}, but also follows as an immediate consequence of Propositions \ref{only-two-path} and \ref{blowup-path}.
\begin{prop}\label{subdivision-prop}
Any arithmetical structure $(\mathbf{d},\mathbf{r})$ on $\P_n$ not equal to the Laplacian arithmetical structure can be obtained from an arithmetical structure on $\P_{n-1}$ by subdividing an edge.
\end{prop}
This result implies that every arithmetical structure on $\P_n$ comes from taking the Laplacian arithmetical structure on $\P_m$ for some $m$ satisfying $2\le m\le n$ and subdividing $n-m$ edges. Consider the standard ordering on the $m-1$ edges of $\P_m$. Let $\mathbf{b} = (b_1,\ldots, b_{n-m})$ be a sequence of $n-m$ positive integers satisfying $1 \le b_i \le (m-1) + (i-1)=m+i-2$. We inductively define an arithmetical structure $\mathbf{A}_n(\mathbf{b})$ on $\P_n$ from this sequence $\mathbf{b}$. Let $(\mathbf{d}_0, \mathbf{r}_0)$ be the Laplacian arithmetical structure on $\P_m$. Let $(\mathbf{d}_i, \mathbf{r}_i)$ be the arithmetical structure on $\P_{m+i}$ obtained from the arithmetical structure $(\mathbf{d}_{i-1}, \mathbf{r}_{i-1})$ by subdividing the edge $b_i$ of the path $\P_{m+i-1}$; equivalently, the position of the vector specified by the index $i$ in part (a) of Proposition~\ref{blowup-path} is given by $b_i+1$. Proposition~\ref{blowup-path} gives explicit formulas for the entries of $(\mathbf{d}_i, \mathbf{r}_i)$. Note that if $n=m$, then $\mathbf{b}$ is the empty sequence and $\mathbf{A}_n(\mathbf{b})$ is the Laplacian arithmetical structure on $\P_n$. Proposition \ref{blowup-path} implies that the value of $\mathbf{r}(1)$ is preserved under edge subdivision, so if $\mathbf{b} = (b_1,\ldots, b_{n-m})$ then $\mathbf{A}_n(\mathbf{b})$ is an arithmetical structure on $\P_n$ with $\mathbf{r}(1) = m$.
\begin{Example}
Let $n=5$ and $m=2$, with $\mathbf{b}=(1,2,2)$.
Then our subdivision process on arithmetical $d$-structures yields
\[
(1,1)\mapsto (2,1,2)\mapsto (2,2,1,3) \mapsto (2,3,1,2,3) \, ,
\]
and thus $\mathbf{A}_5(1,2,2)=(2,3,1,2,3)$.
The corresponding arithmetical $r$-structure transformation is
\[
(1,1)\mapsto (1,2,1)\mapsto (1,2,3,1)\mapsto (1,2,5,3,1)\, .
\]
\end{Example}
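The subdivision operation of Proposition~\ref{blowup-path}(a) is straightforward to implement. The sketch below (ours) applies it along the sequence $\mathbf{b}=(1,2,2)$ and reproduces $\mathbf{A}_5(1,2,2)$ from the example above; recall that subdividing edge $b$ corresponds to position $i=b+1$ in the proposition.
\begin{verbatim}
def subdivide(d, r, i):
    """Subdivision of an arithmetical structure (d, r) on a path at position i
    (1-indexed, 2 <= i <= len(d)), following equation (path-subdivision)."""
    n = len(d)
    assert 2 <= i <= n
    new_d, new_r = [], []
    for j in range(1, n + 2):                  # positions on the path with n+1 vertices
        if j < i - 1:
            new_d.append(d[j - 1])
        elif j == i - 1:
            new_d.append(d[j - 1] + 1)         # d'_{i-1} + 1
        elif j == i:
            new_d.append(1)                    # the inserted vertex gets d = 1
        elif j == i + 1:
            new_d.append(d[i - 1] + 1)         # d'_i + 1
        else:
            new_d.append(d[j - 2])
        if j < i:
            new_r.append(r[j - 1])
        elif j == i:
            new_r.append(r[i - 2] + r[i - 1])  # r'_{i-1} + r'_i
        else:
            new_r.append(r[j - 2])
    return new_d, new_r

# A_5(1, 2, 2): start from the Laplacian structure on P_2 and subdivide
# edges 1, 2, 2 in turn; subdividing edge b means taking position i = b + 1.
d, r = [1, 1], [1, 1]
for b in (1, 2, 2):
    d, r = subdivide(d, r, b + 1)
    print(d, r)
# last line printed: [2, 3, 1, 2, 3] [1, 2, 5, 3, 1], as in the example
\end{verbatim}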
Two different sequences can produce the same arithmetical structure on $\P_n$, but it is straightforward to characterize when this occurs.
\begin{lemma}\label{lemma_bseq}
Let $n \ge m \ge 2$ and $\mathbf{b} = (b_1,b_2,\ldots, b_{n-m})$ be a sequence of positive integers satisfying $1 \le b_i \le i+m-2$. Suppose $i$ is a positive integer satisfying $1 \le i < n-m$ with $b_i > b_{i+1}$. Then, $\mathbf{A}_n(\mathbf{b}) = \mathbf{A}_n(\mathbf{b'})$, where $\mathbf{b}' = (b_1',\ldots, b_{n-m}')$ is defined by
\[
b_j' = \begin{cases}
b_{i+1} & \text{if } j=i,\\
b_i + 1 & \text{if } j = i+1,\\
b_j & \text{otherwise}.
\end{cases}
\]
\end{lemma}
\begin{proof}
By the definition of $(\mathbf{d}_i, \mathbf{r}_i)$ and Proposition \ref{blowup-path} we see that if $b_i > b_{i+1}$, then starting with the arithmetical structure $(\mathbf{d}_{i-1}, \mathbf{r}_{i-1})$, subdividing $\P_{m+i-1}$ at edge $b_i$ and then subdividing $\P_{m+i}$ at edge $b_{i+1}$, gives the same arithmetical structure on $\P_{m+i+1}$ as subdividing $\P_{m+i-1}$ at edge $b_{i+1}$ and then subdividing $\P_{m+i}$ at edge $b_i + 1$.
\end{proof}
Repeatedly applying this lemma gives a bijection between arithmetical structures on $\P_n$ and sequences of a certain type.
\begin{prop}\label{prop_bseq}
Every arithmetical structure on $\P_n$ is equal to $\mathbf{A}_n(\mathbf{b})$ for a unique sequence $\mathbf{b} = (b_1,\ldots, b_{n-m})$ satisfying $1\le b_i \le i+m - 2$ and $b_i \le b_{i+1}$ for all $i$.
\end{prop}
\begin{proof}
Lemma \ref{lemma_bseq} shows that every arithmetical structure on $\P_n$ is equal to $\mathbf{A}_n(\mathbf{b})$ for some sequence $\mathbf{b}$ of this type. Proposition \ref{blowup-path} implies that the arithmetical structures arising from such sequences are distinct.
\end{proof}
In order to complete our first proof of Theorem \ref{main-path-theorem} we need only count the number of sequences described in the statement of Proposition \ref{prop_bseq}.
\begin{lemma}\label{bseq_count}
Let $n$ and $m$ be integers satisfying $2 \le m \le n$. The number of sequences $\mathbf{b} = (b_1,\ldots, b_{n-m})$ satisfying $1\le b_i \le i+m - 2$ and $b_i \le b_{i+1}$ for all $i$ is equal to $B(n-2, n-m)$.
\end{lemma}
Prepending an initial string of $m-2$ entries equal to $1$, we see that the sequences of Lemma \ref{bseq_count} are in bijection with sequences $(b_1,\ldots, b_{n-2}) = (1,\ldots, 1, b_{m-1}, \ldots, b_{n-2})$ satisfying $b_i \le i$ and $b_i \le b_{i+1}$. A sequence $(b_1,\ldots, b_k)$ of positive integers satisfying $b_i \le i$ and $b_i \le b_{i+1}$ is called a \emph{ballot sequence of length $k$}. We need only count the number of ballot sequences of length $n-2$ that begin with at least $m-2$ entries equal to $1$. The following lemma is equivalent to Lemma \ref{bseq_count}.
\begin{lemma}\label{lemma-initial_string}
The number of sequences $(b_1,\ldots, b_{n-2})$ of positive integers satisfying $b_i \le i$ and $b_i \le b_{i+1}$ that begin with an initial string of at least $m-2$ entries equal to $1$ is given by $B(n-2, n-m)$.
\end{lemma}
\begin{proof}
Recall that $B(n-2, n-2)$ counts the number of lattice paths from $(0,0)$ to $(n-2, n-2)$ that do not cross above the line $y=x$. It is trivial to note that this also counts the number of lattice paths from $(1,1)$ to $(n-1, n-1)$ that do not cross above the line $y=x$. Every such lattice path $L$ can be identified uniquely with a ballot sequence $\mathbf{b}_L = (b_1,\ldots, b_{n-2})$ where $(i, b_i)$ is the point along the path on the vertical line $x = i$ with the largest $y$-coordinate.
We want to count the number of these ballot sequences that begin with at least $m-2$ entries equal to $1$, or equivalently, the number of lattice paths from $(1,1)$ to $(n-1,n-1)$ that do not cross above the line $y=x$ and begin with at least $m-2$ east steps. Reversing the order of such a path, and then replacing each north step with an east step and each east step with a north step, gives a bijection with the set of lattice paths from $(1,1)$ to $(n-1,n-1)$ that do not cross above the line $y=x$ and end with at least $m-2$ north steps. This set is clearly in bijection with the set of lattice paths from $(1,1)$ to $(n-1, n-m+1)$ that do not cross above the line $y=x$, the number of which is given by $B(n-2,n-m)$.
\end{proof}
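For small parameters, Lemma~\ref{lemma-initial_string} can also be checked by brute force. The sketch below (helper names are ours) computes the ballot number $B(a,b)$ by a lattice-path dynamic program and compares it with a direct enumeration of the sequences in question.
\begin{verbatim}
from itertools import product

def ballot_number(a, b):
    # B(a, b): NE lattice paths from (0,0) to (a, b) that never cross above y = x
    N = {(0, 0): 1}
    for x in range(a + 1):
        for y in range(min(x, b) + 1):
            if (x, y) != (0, 0):
                N[(x, y)] = N.get((x - 1, y), 0) + N.get((x, y - 1), 0)
    return N[(a, b)]

def count_sequences(n, m):
    # sequences (b_1,...,b_{n-2}) with b_i <= i, b_i <= b_{i+1}, and b_1 = ... = b_{m-2} = 1
    total = 0
    for seq in product(*(range(1, i + 2) for i in range(n - 2))):
        if (all(seq[i] <= seq[i + 1] for i in range(len(seq) - 1))
                and all(v == 1 for v in seq[:m - 2])):
            total += 1
    return total

assert all(count_sequences(n, m) == ballot_number(n - 2, n - m)
           for n in range(2, 9) for m in range(2, n + 1))
\end{verbatim}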
\begin{proof}[First proof of Theorem \ref{main-path-theorem}]
Lemma \ref{bseq_count} together with the earlier observation that an arithmetical structure of $\P_n$ has $\mathbf{r}(1) = m$ if and only if it is equal to $\mathbf{A}_n(\mathbf{b})$ for some sequence $\mathbf{b} = (b_1,\ldots, b_{n-m})$ satisfying $1\le b_i \le i+m - 2$ and $b_i \le b_{i+1}$, completes the first proof of Theorem \ref{main-path-theorem}.
\end{proof}
We give a second proof based on recurrences satisfied by counts for certain classes of lattice paths.
\begin{proof}[Second proof of Theorem \ref{main-path-theorem}]
We argue by induction on $k=\mathbf{r}(1)$. In the case $k=1$, we have $B(n-2,n-1)=0$ and Lemma~\ref{path-lemma} implies that there are no arithmetical structures on $\P_n$ with $\mathbf{r}(1) = 1$, completing the proof in this case. In the case $k=2$, Theorem \ref{path-count} implies that $A(n,2) = B(n-2,n-2)=C_{n-2}$.
Now suppose that $2\le k < n$ and that~\eqref{main-path-thm-eqn} holds, for paths of every length, for all $j$ satisfying $2\le j \le k$. Suppose that $\mathbf{r} = (r_1,\ldots, r_n) \in \Arith(\P_n)$ with $\mathbf{r}(1) = k+1$. Let $m=\min\{i>1\st r_i=1\}$. Note that $2 \le m \le n-k+1$.
By Lemma \ref{path-lemma}, the truncated vector $\mathbf{r}' = (r_1,\ldots, r_m)$ is an arithmetical $r$-structure on $\P_m$ with $\mathbf{r}'(1) = 2$ and $\mathbf{r}'' = (r_m,\ldots, r_n)$ is an arithmetical $r$-structure on $\P_{n-m+1}$ with $\mathbf{r}''(1) = k$. Moreover, every such pair $\mathbf{r}',\mathbf{r}''$ gives rise to an arithmetical $r$-structure $\mathbf{r}$ on $\P_n$ with $\mathbf{r}(1)=k+1$. The number of possible choices for such a pair is $A(m,2) A(n-m+1,k)$. Therefore, by induction,
\begin{align*}
A(n,k+1) &= \sum_{m=2}^{n-k+1} A(m,2) A(n-m+1,k) \\
&=\sum_{m=2}^{n-k+1} B(m-2,m-2) B(n-m-1,n-m-k+1)\\
&= B(n-2,n-k-1)
\end{align*}
\noindent where the last equality follows from \cite[4.9]{ballot} after a change of variables. (Replace Carlitz's $j,n,k$ with $m-1,n-k-1,n-k$ respectively, and recall that Carlitz's $f(n,k)$ is our $B(n-1,k-1)$.)
\end{proof}
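The chain of equalities in the preceding display can be verified numerically for small parameters. The following sketch assumes the usual closed form $(a-b+1)\binom{a+b}{b}/(a+1)$ for the ballot-type numbers $B(a,b)$ and uses Theorem~\ref{main-path-theorem} for $A(n,k)$; the helper names are ours.
\begin{verbatim}
from math import comb

def B(a, b):
    # assumed closed form for NE paths from (0,0) to (a,b) weakly below y = x
    return (a - b + 1) * comb(a + b, b) // (a + 1)

def A(n, k):
    # Theorem main-path-theorem: A(n, k) = B(n - 2, n - k)
    return B(n - 2, n - k)

assert all(A(n, k + 1) == sum(A(m, 2) * A(n - m + 1, k) for m in range(2, n - k + 2))
           for n in range(4, 10) for k in range(2, n))
\end{verbatim}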
Our next goal is to prove a refined counting result for arithmetical $d$-structures of $\P_n$ with a fixed entry.
\begin{theorem} \label{ballot-path}
For each $i=1,\ldots,n$ and $0\leq k\leq n-2$, the number of arithmetical $d$-structures $(d_1,\ldots, d_n)$ on $\P_n$ with $d_i=n-k-1$ is equal to $B(n-2,k)$.
\end{theorem}
We prove Theorem \ref{ballot-path} in two parts. We first introduce the notation $d_0=3n-3-\sum_{j=1}^n d_j$ and note that by Corollary~\ref{cor:10} the result extends to the special case $i=0$. We then define a bijection between triangulations of an $(n+1)$-gon and arithmetical $d$-structures on $\mathcal{P}_n$. Under this bijection, clockwise rotation of a triangulation induces a correspondence between the set of arithmetical $d$-structures $(d_1,\dots, d_n)$ on $\mathcal{P}_n$ with $d_0 = n-k-1$ (resp. $d_i = n-k-1$) and the set of arithmetical $d$-structures on $\mathcal{P}_n$ with $d_1 = n-k-1$ (resp. $d_{i+1} = n-k-1$).
\begin{proof}
First, by the above definition and by Theorem \ref{critical-group-path} we have $d_0 = \mathbf{r}(1)-1$. By Theorem \ref{main-path-theorem} the number of arithmetical $d$-structures with $d_0=n-k-1$ is $B(n-2,k)$.
For the main part of the proof, we fix a labeling of the vertices of an $(n+1)$-gon as $0,1,2,\dots,n$, in clockwise order. For each triangulation $T$ of the $(n+1)$-gon, define $D(T) = (D_0,D_1,\dots, D_n)$ by
$$D_i = \#\left\{\mathrm{triangles~incident~to~vertex~}i\right\}.$$
The theorem then reduces to the following claim.
\begin{Claim} \label{triangulation-to-d}
The sequence $(d_1,\dots,d_n) = (D_1,\dots, D_n)$ is an arithmetical $d$-structure of $\mathcal{P}_n$. Moreover, the map $D$ is a bijection from the set of triangulations of the $(n+1)$-gon to the set of arithmetical $d$-structures on $\mathcal{P}_n$.
\end{Claim}
Observe that each triangulation consists of $n-1$ triangles and each triangle will be adjacent to three vertices. In particular, $\sum_{j=0}^n D_j=3n-3$ so that $D_0 = 3n-3 - \sum_{j=1}^n D_j$. In particular, $D_0=d_0$, motivating the above notation.
\begin{proof} We now prove Claim~\ref{triangulation-to-d} by induction. If $n=2$, then the unique and trivial triangulation of a $3$-gon corresponds to the unique arithmetical $d$-structure of $\mathcal{P}_2$, namely $(d_1,d_2) = (1,1)$. For $n=2$ and this arithmetical $d$-structure, we also have $d_0 = 3-1-1 =1$.
For $n\geq 3$, a triangulation $T$ of an $(n+1)$-gon is obtained by gluing a triangle to the exterior of a triangulation $T'$ of an $n$-gon. After relabeling the vertices as described below, let $D(T') = (d_0', d_1',\dots, d_{n-1}')$, where $d_j'$ is the number of triangles incident to vertex $j$ in the triangulation $T'$, so by our inductive hypothesis we have that $(d_1',\dots, d_{n-1}')$ is an arithmetical $d$-structure of $\mathcal{P}_{n-1}$.
We next follow a procedure related to Conway and Coxeter's work \cite[(23)]{CC} on frieze patterns and triangulated polygons.
See also \cite[Theorem 2.1]{T} for a more recent description of this procedure.
In this language, the tuple $D(T')$ is known as the \emph{quiddity sequence} associated to the triangulation $T'$.
For any $0 \le i \le n-1$ we consider the effect of gluing a triangle on the edge between vertices $i$ and $i+1$ of $T'$ and increasing the label on all vertices $j>i$ by one (note that when $i=n-1$ we are gluing a triangle between vertices $n-1$ and $0$, and we do not need to renumber any vertices). This will create a new triangulation $T$, and if we set $d_j$ to be the number of triangles adjacent to vertex $j$ in this new triangulation, we see that:
\begin{equation}
d_j = \begin{cases}
d'_j &\text{ if } 0 \le j < i-1,\\
d'_{i-1}+1 &\text{ if } j=i-1,\\
1 &\text{ if } j=i,\\
d'_{i}+1 &\text{ if } j=i+1,\\
d'_{j-1} &\text{ if } i+1 < j \le n,
\end{cases}
\end{equation}
agreeing with (\ref{path-subdivision}) and showing that $D(T) = (d_0,d_1,\dots, d_n)$ where $\mathbf{d} = (d_1,\dots, d_n)$ is the subdivision of $\mathbf{d}' = (d_1',\dots, d_{n-1}')$ at position $i$. Hence, by Propositions \ref{blowup-path} and \ref{subdivision-prop}, the resulting $\mathbf{d}$ is in fact an arithmetical $d$-structure. We conclude that this map is a bijection, since the ways in which two sequences of glued triangles can construct the same triangulation are dictated by the same relations given in Lemma \ref{lemma_bseq}. This completes the proof of Claim~\ref{triangulation-to-d}.
\end{proof}
By Remark \ref{Ballot-Triangulation}, there are in fact $B(n-2,k)$ triangulations of an $(n+1)$-gon that have $d_i= n-k-1$ triangles incident to a given vertex $i$.
Alternatively, we observe that if $T$ is a triangulation of an $(n+1)$-gon and $\rho(T)$ is its clockwise rotation, then $D(T) = (d_0,d_1,\dots, d_n)$ and $D(\rho(T)) = (d_n, d_0,d_1,\dots, d_{n-1})$. Hence rotation induces a bijection between arithmetical $d$-structures $(d_1,\dots, d_n)$ on $\mathcal{P}_n$ with $d_i=n-k-1$ and arithmetical $d$-structures on $\mathcal{P}_n$ with $d_{i+1} = n-k-1$ (for $0\leq i \leq n-1$). Combining this bijection with the $i=0$ case completes the proof of Theorem~\ref{ballot-path}.
\end{proof}
\begin{Example}
We illustrate the above ideas with the following example, where $T'$ is a triangulation of a pentagon and $T$ is the triangulation of a hexagon obtained by gluing a single triangle to $T'$.
\begin{figure}[h]
\begin{tikzpicture}
[scale=.4,auto=left,minimum size=.7cm,every node/.style={circle,fill=blue!20}]
\foreach \a/\text in {180/4,120/0,60/1,0/2,240/3}
\node (\text) at (\a:4cm) {\text};
\foreach \from/\to in {4/0,0/1,1/2,2/3,3/4,1/4,1/3}
\draw (\from) -- (\to);
\node [draw=none,fill=none] at (0,-6) {(i) $T'$};
\begin{scope}[shift={(15,0)}]
\foreach \a/\text in {180/5,120/0,60/1,0/2,300/3,240/4}
\node (\text) at (\a:4cm) {\text};
\foreach \from/\to in {5/0,0/1,1/2,2/3,3/4,4/5, 2/4,1/4, 5/1}
\draw (\from) -- (\to);
\node [draw=none,fill=none] at (0,-6) {(ii) $T$};
\end{scope}
\end{tikzpicture}
\end{figure}
Recalling that for any triangulation $T$ we have $D(T)=(d_0,\ldots, d_n)$ where $d_i$ is the number of triangles adjacent to vertex $i$, we see that $D(T') = (1,3,1,2,2)$ and therefore $T'$ corresponds to the arithmetical $d$-structure $(3,1,2,2)$ on $\mathcal{P}_4$. After gluing on a triangle between vertices $2$ and $3$ and updating the vertex labels, we obtain the triangulation $T$ and the corresponding $D(T)=(1,3,2,1,3,2)$, giving the arithmetical $d$-structure $(3,2,1,3,2)$ on $\mathcal{P}_5$.
\end{Example}
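The map $D$ is straightforward to compute from a list of triangles. In the sketch below the triangle lists are read off the figure above and the function name is ours; it reproduces the two quiddity sequences in this example.
\begin{verbatim}
from collections import Counter

def D(triangles, num_vertices):
    # D_i = number of triangles incident to vertex i
    counts = Counter(v for tri in triangles for v in tri)
    return [counts[i] for i in range(num_vertices)]

T_prime = [(0, 1, 4), (1, 3, 4), (1, 2, 3)]         # triangulation of the pentagon above
T = [(0, 1, 5), (1, 4, 5), (1, 2, 4), (2, 3, 4)]    # after gluing a triangle and relabeling

assert D(T_prime, 5) == [1, 3, 1, 2, 2]
assert D(T, 6) == [1, 3, 2, 1, 3, 2]
\end{verbatim}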
\begin{Remark}
Questions (17) and (18) of \cite{CC} suggest a relationship between the quiddity sequence, which corresponds to the second row of the associated frieze pattern, and the diagonals in the same pattern. In fact, these diagonals correspond to the $\mathbf{r}$-vectors of the arithmetical structures with a given quiddity sequence as its $\mathbf{d}$-vector.
\end{Remark}
In the proof of Theorem \ref{ballot-path} we defined a bijection $D$ from triangulations to arithmetical structures on the path, and in Proposition~\ref{prop_bseq} we gave a bijection $\mathbf{A}_n^{-1}$ from arithmetical $d$-structures on $\P_n$ to ballot sequences. Composing these two bijections together, we get one mapping triangulations to ballot sequences. Given this composite bijection, it is natural to consider the map $f_n$ from ballot sequences to ballot sequences that corresponds to the action of rotating a given triangulation $T$ of a polygon. More explicitly, we are interested in the map $f_n$ so that $f_n(\mathbf{A}_n^{-1}(D(T))) = \mathbf{A}_n^{-1}(D(\rho(T)))$. We now give a description of this map.
In particular, let $\mathcal{B}(n)$ denote the set of ballot sequences of length $n$, i.e., $n$-tuples $\mathbf{b} = (b_1,\ldots, b_n)$ consisting of $n$ nonnegative integers, where $b_i \le b_{i+1}$, and $b_i \le i$ for all $i$. We inductively define the bijection $f_n \colon\ \mathcal{B}(n) \rightarrow \mathcal{B}(n)$. Let $f_1((1)) = (0)$, and $f_1((0)) = (1)$. Suppose we have defined $f_{n-1}(\mathbf{b})$ for all sequences $\mathbf{b} \in \mathcal{B}(n-1)$. Given a sequence $\mathbf{b} = (b_1,\ldots, b_{n-1},b_n) \in \mathcal{B}(n)$, let $\mathbf{b}' = (b_1,\ldots, b_{n-1}) \in \mathcal{B}(n-1)$.
Define
\[
f_n(\mathbf{b}) := \begin{cases}
f_{n-1}(\mathbf{b}') \text{ with a } b_n+1 \text{ appended to the end of it} & \text{if } b_n < n,\\
f_{n-1}(\mathbf{b}') \text{ with a } 0 \text{ appended to the beginning of it} & \text{if }b_n=n.
\end{cases}
\]
\begin{Remark}\label{rem:fn}
It is possible to give an explicit description of $f_n(\mathbf{b})$. In particular, let $\mathbf{b} = (b_1,\ldots, b_n) \in\mathcal{B}(n)$ and create a new sequence $\mathbf{b}+(1,1,\ldots,1)$, where addition in the $i$\textsuperscript{th} coordinate is taken modulo $i+1$.
Then $f_n(\mathbf{b})$ is the vector obtained by ``right-justifying" the nonzero entries in $\mathbf{b}+(1,1,\ldots,1)$. In particular, $f_n(\mathbf{b})$ begins with the same number of zeroes that are in the string $\mathbf{b}+(1,1,\ldots,1)$ and then contains the same numbers as the nonzero entries of this string in the same order. It is straightforward to verify that this definition of $f_n$ agrees with the one given above.
\end{Remark}
\begin{Example}
If $n=3$, then
\begin{center}
\begin{tabular}{l@{\extracolsep{2cm}}l@{\extracolsep{2cm}}l}
$(1,1,1)\overset{f_3}{\longmapsto} (0,2,2)$, & $(0,1,1)\overset{f_3}{\longmapsto} (1,2,2)$, & $(0,0,1)\overset{f_3}{\longmapsto} (1,1,2)$, \\
$(1,1,2)\overset{f_3}{\longmapsto} (0,2,3)$, & $(0,1,2)\overset{f_3}{\longmapsto} (1,2,3)$, & $(0,0,2)\overset{f_3}{\longmapsto} (1,1,3)$, \\
$(1,1,3)\overset{f_3}{\longmapsto} (0,0,2)$, & $(0,1,3)\overset{f_3}{\longmapsto} (0,1,2)$, & $(0,0,3)\overset{f_3}{\longmapsto} (0,1,1)$, \\
$(1,2,2)\overset{f_3}{\longmapsto} (0,0,3)$, & $(0,2,2)\overset{f_3}{\longmapsto} (0,1,3)$, & $ (0,0,0) \overset{f_3}{\longmapsto} (1,1,1)$. \\
$(1,2,3)\overset{f_3}{\longmapsto} (0,0,0)$, & $(0,2,3)\overset{f_3}{\longmapsto} (0,0,1)$, &
\end{tabular}
\end{center}
\end{Example}
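The recursive definition of $f_n$ and the explicit description in Remark~\ref{rem:fn} are easy to compare in code. The following sketch (function names are ours) checks that the two agree on all of $\mathcal{B}(3)$ and reproduces entries of the table above.
\begin{verbatim}
from itertools import product

def f_rec(b):
    # recursive definition: f_1((1)) = (0), f_1((0)) = (1); then append b_n + 1 or prepend 0
    n = len(b)
    if n == 1:
        return (0,) if b == (1,) else (1,)
    head = f_rec(b[:-1])
    return head + (b[-1] + 1,) if b[-1] < n else (0,) + head

def f_explicit(b):
    # Remark rem:fn: add 1 to the i-th coordinate modulo i + 1, then right-justify
    c = [(v + 1) % (i + 2) for i, v in enumerate(b)]
    nonzero = [v for v in c if v != 0]
    return tuple([0] * (len(c) - len(nonzero)) + nonzero)

B3 = [b for b in product(range(2), range(3), range(4))
      if all(b[i] <= b[i + 1] for i in range(2))]
assert len(B3) == 14
assert all(f_rec(b) == f_explicit(b) for b in B3)
assert f_rec((1, 1, 1)) == (0, 2, 2) and f_rec((1, 2, 3)) == (0, 0, 0)
\end{verbatim}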
We leave the fact that $f_n$ is a bijection and that it corresponds to rotation of a triangulation as exercises for the reader. This approach leads to an alternative proof of Theorem \ref{ballot-path}.
It is natural to ask for analogues of Theorem~\ref{main-path-theorem} for arithmetical $d$-structures. That is, how many arithmetical structures $(\mathbf{d}, \mathbf{r})$ on $\P_n$ have $\mathbf{d}(1) = k$? Results of Aigner and Schulze~\cite{AignerSchulze} give a partial answer.
\begin{prop}
Let $n$ be a positive integer. The number of arithmetical structures $(\mathbf{d}, \mathbf{r})$ on $\P_{n+2}$ with $\mathbf{r}(1) = 2$ and $\mathbf{d}(1) = k$ is given by
\[
\binom{n-1}{2k-2} 2^{n+1-2k} C_{k-1}.
\]
\end{prop}
\begin{proof}
Call a sequence of integers $a_1, a_2, \dots, a_n$ \emph{admissible} if all $a_i > 1$ and $a_i$ divides $a_{i-1} + a_{i+1}$ for $i = 1,\ldots, n$, where we define $a_0 = a_{n+1} = 1$. An admissible sequence has a \emph{local maximum} in position $i$ if and only if $a_i = a_{i-1} + a_{i+1}$. Aigner and Schulze show that the expression in the proposition is the number of admissible sequences of length $n$ with precisely $k$ local maxima \cite[equation~(1)]{AignerSchulze}.
Admissible sequences of length $n$ are in bijection with arithmetical $r$-structures on $\P_{n+2}$ that have $\mathbf{r}(1) = 2$. Let $(\mathbf{d}, \mathbf{r})$ be an arithmetical structure on $\P_{n+2}$ with $\mathbf{d} = (d_0,d_1,\ldots, d_{n+1})$ and $\mathbf{r} = (r_0, r_1, \ldots, r_{n+1})$. Then $(r_1,\ldots, r_n)$ is an admissible sequence of length $n$ if and only if $\mathbf{r}(1) = 2$. For $i$ satisfying $1\le i \le n$ we see that $d_i = 1$ if and only if $r_i = r_{i-1} + r_{i+1}$, that is, when the corresponding admissible sequence of length $n$ has a local maximum in position $i$. We see that $d_0 = 1$ if and only if $r_0 = r_1 = 1$, and similarly $d_{n+1} = 1$ if and only if $r_{n+1} = r_n = 1$. Neither of these can happen when $\mathbf{r}(1) = 2$ since $n+1 > 1$.
\end{proof}
It seems significantly more challenging to give formulas for the number of arithmetical structures $(\mathbf{d}, \mathbf{r})$ on $\P_n$ for which $\mathbf{r}(1) = m$ and $\mathbf{d}(1) = k$ when $m > 2$.
When we restrict to arithmetical structures with $\mathbf{r}(1) = 2$ we can derive similar results for other properties considered in \cite{AignerSchulze}. For example, equation~(2) of that paper gives the number of admissible sequences of length $n$ with precisely $k$ rises $a_i < a_{i+1}$, and equation~(3) gives the number of such sequences without monotone subsequences of length $3$. We do not pursue these directions here. We do note that equation~(4) of \cite{AignerSchulze} can be interpreted as counting the number of arithmetical structures on $\P_n$ with $\mathbf{r}(1) = 2$ and $d_1 = k$, which is closely related to a special case of Theorem~\ref{ballot-path}.
\section{Cycles}\label{sec:cycles}
As in the case of paths, the arithmetical structures on the $n$-cycle ${\mathcal C}_n$ are controlled by Catalan combinatorics. The main result of this section, Theorem~\ref{main-cycle-theorem}, states that for each $k\in[n]$, the number of arithmetical structures $(\mathbf{d},\mathbf{r})$ on ${\mathcal C}_n$ with $\mathbf{r}(1)=k$ is
\[\multiset{n}{n-k}=\binom{2n-k-1}{n-k}\]
(where the first symbol denotes the number of multisubsets of $[n]$ of cardinality $n-k$), and consequently
\[|\Arith({\mathcal C}_n)|=\binom{2n-1}{n-1}.\]
As for Theorem \ref{main-path-theorem}, we give two separate proofs of this result. The first proof constructs an explicit bijection between multisets and arithmetical structures, equivariant with respect to a natural ${\mathbb Z}_n$-action on each. The second proof proceeds via enumeration of lattice paths.
In addition, we show that the critical group $K({\mathcal C}_n,\mathbf{d},\mathbf{r})$ is always cyclic, with cardinality $\mathbf{r}(1)=3n-\sum_{j=1}^n d_j$ (Theorem~\ref{critical-group}).
We first state some basic results about arithmetical structures on cycles, some of which have been proved elsewhere.
\begin{prop} \label{c-two}
The cycle ${\mathcal C}_2$ has three arithmetical structures, namely
\[(\mathbf{d},\mathbf{r})\in\{((2,2),(1,1)),\ ((1,4),(2,1)),\ ((4,1),(1,2))\}.\]
\end{prop}
\begin{proof}
The adjacency matrix of ${\mathcal C}_2$ is $A=\begin{bmatrix}0&2\\2&0\end{bmatrix}$, so in order for $L$ to be singular we must have $d_1d_2=4$, leading to the three possibilities listed above.
\end{proof}
More generally, we note that for the cycle graph ${\mathcal C}_n$, a vector $\mathbf{r}$ is in the nullspace of $L({\mathcal C}_n,\mathbf{d})$ if and only if $r_{i-1}+r_{i+1} = r_id_i$ for each $i$, where the subscripts are taken mod $n$. In analogy to Corollary~\ref{path-characterization} for paths, we therefore have:
\begin{prop} \label{cycle-characterization}
Let $\mathbf{r}=(r_1,\dots,r_n)$ be a primitive positive integer vector. Then $\mathbf{r}$ is an arithmetical $r$-structure on ${\mathcal C}_n$ if and only if
\[
r_i|(r_{i-1}+r_{i+1}) \ \ \forall i\in[n]
\]
with the indices taken modulo $n$.
\end{prop}
The next two results are analogous to Propositions~\ref{only-two-path} and~\ref{blowup-path} for paths.
\begin{prop}[{\cite[Thm.~6.5]{arithmetical}}]\label{only-two}
There is exactly one arithmetical structure $(\mathbf{d},\mathbf{r})$ on ${\mathcal C}_n$ such that $d_i\geq 2$ for all $i$, namely
$\mathbf{d}=\mathbf{2}=(2,2,\dots,2)$ and $\mathbf{r}=\mathbf{1}=(1,1,\dots,1)$.
\end{prop}
\begin{prop}[{\cite[pp.~484--485]{Lorenzini89}}; {\cite[Thm.~5.1]{arithmetical}}] \label{subdivision}
\begin{enumerate}
\item Let $n\geq 2$ and let $(\mathbf{d}',\mathbf{r}')\in\Arith({\mathcal C}_n)$. For $1\leq i\leq n$, define integer vectors $\mathbf{d}$ and $\mathbf{r}$ of length $n+1$ as in~\eqref{path-subdivision}, with the conventions $d'_0=d'_n$ and $r'_0=r'_n$.
Then $(\mathbf{d},\mathbf{r})$ is an arithmetical structure on~${\mathcal C}_{n+1}$.
Moreover, the cokernels of $L({\mathcal C}_{n+1},\mathbf{d})$ and $L({\mathcal C}_n,\mathbf{d}')$ are isomorphic.
\item Let $n\geq 3$ and let $(\mathbf{d},\mathbf{r})\in\Arith({\mathcal C}_n)$ such that $d_{i-1}>d_i=1<d_{i+1}$ for some $i\in[n]$. Define integer vectors $\mathbf{d}',\mathbf{r}'$ of length~$n-1$ as in~\eqref{path-smoothing}, with the conventions $d_0=d_n$ and $r_0=r_n$. Then $(\mathbf{d}',\mathbf{r}')$ is an arithmetical structure on ${\mathcal C}_{n-1}$, and the cokernels of $L({\mathcal C}_n,\mathbf{d})$ and $L({\mathcal C}_{n-1},\mathbf{d}')$ are isomorphic.
\end{enumerate}
\end{prop}
As in Proposition~\ref{blowup-path}, we say that $(\mathbf{d},\mathbf{r})$ is the \emph{subdivision of $(\mathbf{d}',\mathbf{r}')$ at position~$i$} and $(\mathbf{d}',\mathbf{r}')$ is the \emph{smoothing} of $(\mathbf{d},\mathbf{r})$ at position~$i$.
Proposition~\ref{subdivision} can be proved in the same way as Proposition~\ref{blowup-path}, because subdivision and smoothing are local operations that only concern vertices of degree 2.
Recall that $\mathbf{r}(1)$ denotes the number of occurrences of 1 in an arithmetical $r$-structure $\mathbf{r}$.
\begin{cor}
We have $\mathbf{r}(1)>0$ for all arithmetical $r$-structures on ${\mathcal C}_n$.
\end{cor}
\begin{proof}
For $n=2$, the assertion follows immediately from Proposition~\ref{c-two}. For $n\ge3$, the claim is obvious if $\mathbf{d}=\mathbf{2}$ and $\mathbf{r}=\mathbf{1}$. If $\mathbf{d}\neq\mathbf{2}$, then by Proposition \ref{only-two} and Lemma~\ref{isolated-ones-in-d} there exists $i\in [n]$ such that $d_{i-1}>d_i=1<d_{i+1}$. But then $(\mathbf{d},\mathbf{r})$ is the subdivision of an arithmetical structure $(\mathbf{d}',\mathbf{r}')$ on ${\mathcal C}_{n-1}$ as described in Proposition~\ref{subdivision}. Since $\mathbf{r}(1)=\mathbf{r}'(1)$, the result follows by induction.
\end{proof}
\begin{theorem}\label{critical-group}
Let $(\mathbf{d},\mathbf{r})\in\Arith({\mathcal C}_n)$ be an arithmetical structure of the cycle. Then
\begin{equation} \label{r-and-d}
\mathbf{r}(1)=3n-\sum_{j=1}^n d_j
\end{equation}
and
\begin{equation} \label{critical-cyclic}
K({\mathcal C}_n,\mathbf{d},\mathbf{r})=\mathbb{Z}_{\mathbf{r}(1)}.
\end{equation}
\end{theorem}
\begin{proof}
We induct on $n$. First, in the base case $n=2$, both claims can be checked by direct computation for the three arithmetical structures listed in Proposition~\ref{c-two}.
Second, if $\mathbf{d}=\mathbf{2}$ and $\mathbf{r}=\mathbf{1}$, then $3n-\sum_{i=1}^n d_i=n=\mathbf{r}(1)$. In this case $L$ is the Laplacian, and $K({\mathcal C}_n,\mathbf{2},\mathbf{1})=\mathbb{Z}_n$, the standard critical group of ${\mathcal C}_n$.
Third, suppose that $n\geq3$ and $\mathbf{d}\neq\mathbf{2}$. Then by Proposition~\ref{subdivision},
$(\mathbf{d},\mathbf{r})$ can be obtained from subdividing some $(\mathbf{d}',\mathbf{r}')\in\Arith({\mathcal C}_{n-1})$ at position~$i$, and $K({\mathcal C}_n,\mathbf{d},\mathbf{r})=K({\mathcal C}_{n-1},\mathbf{d}',\mathbf{r}')$. The recursive description of $\mathbf{r}$ in \eqref{path-smoothing} implies that $\mathbf{r}(1)=\mathbf{r}'(1)$, establishing the isomorphism \eqref{critical-cyclic}. Moreover,
\begin{align*}
\mathbf{r}(1) &= \mathbf{r}'(1) = 3(n-1) - \sum_{j=1}^{n-1} d'_j && \text{(by induction)}\\
&= 3(n-1) - \left(\sum_{j=1}^n d_j - (2+d_i)\right)\\
&= 3n - \sum_{j=1}^n d_j && \text{(because $d_i=1$).}
\end{align*}
\end{proof}
We now come to the main theorem of this section, enumerating arithmetical $r$-structures on ${\mathcal C}_n$ by the value of $\mathbf{r}(1)$. We first need to set up notation for multisets. A \emph{multiset} is a list $S=[a_1,\dots,a_\ell]$, where order does not matter, and repeats are allowed. The number $\ell$ is the \emph{size} or \emph{cardinality} of $S$. We will use square brackets to distinguish multisets from ordinary sets.
For a set $T$, if $a_i\in T$ for all $i$ then we say that $S$ is a \emph{multisubset} of $T$.
We write $\Multiset_{\ell}(T)$ to denote the set of multisubsets of $T$ of size $\ell$
and let $\multiset{n}{\ell} = |\Multiset_{\ell}([n])| = \binom{n+\ell-1}{\ell}$.
Likewise, $\Multiset_{\leq\ell}(T)$ denotes the set of multisubsets of $T$ of size at most $\ell$.
If $T=[n]$, we abbreviate $\Multiset_{\ell}([n])$ as $\Multiset_{\ell}(n)$ and $\Multiset_{\leq\ell}([n])$ as $\Multiset_{\leq\ell}(n)$. There is a bijection $\Multiset_{n-1}(n+1)\to\Multiset_{\leq n-1}(n)$ that erases all instances of $n+1$, which implies that $\sum_{\ell=0}^{n-1} \multiset{n}{\ell} = \multiset{n+1}{n-1}$.
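For completeness, the identity $\sum_{\ell=0}^{n-1}\multiset{n}{\ell}=\multiset{n+1}{n-1}=\binom{2n-1}{n-1}$ can be checked numerically as follows (the helper name is ours).
\begin{verbatim}
from math import comb

def multiset(n, l):
    # ((n multichoose l)) = C(n + l - 1, l)
    return comb(n + l - 1, l)

assert all(sum(multiset(n, l) for l in range(n))
           == multiset(n + 1, n - 1) == comb(2 * n - 1, n - 1)
           for n in range(1, 13))
\end{verbatim}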
\begin{theorem} \label{main-cycle-theorem}
Let $1\leq k\leq n$ and $\ell=n-k$. Then
\[\#\{(\mathbf{d},\mathbf{r})\in\Arith({\mathcal C}_n) \st \mathbf{r}(1)=k\} = \multiset{n}{n-k}=\binom{2n-k-1}{n-k}.\]
In particular, the total number of arithmetical structures on ${\mathcal C}_n$ is
\[\sum_{k=1}^n \multiset{n}{n-k} = \sum_{\ell=0}^{n-1} \multiset{n}{\ell}
= \multiset{n+1}{n-1} = \binom{2n-1}{n-1}.\]
\end{theorem}
Combining Theorems~\ref{critical-group} and~\ref{main-cycle-theorem} immediately yields the following result.
\begin{cor}
For $n\geq 2$ and $2n\leq k\leq 3n-1$, we have
\[\#\left\{(\mathbf{d},\mathbf{r})\in\Arith({\mathcal C}_n) \st k = \sum_{j=1}^n d_j\right\} = \multiset{n}{k-2n}=\binom{k-n-1}{k-2n}.\]
\end{cor}
As we did with Theorem \ref{main-path-theorem}, we give two separate combinatorial proofs of Theorem~\ref{main-cycle-theorem}. The first is bijective and the second involves recurrences for lattice paths. Before giving the first proof, we describe actions $\rho$ and $\phi$ of the cyclic group ${\mathbb Z}_n$ (written additively) on the sets $\Arith({\mathcal C}_n)$ and $\Multiset_{\ell}(n)$. Specifically, $c\in {\mathbb Z}_n$ acts on $\Arith({\mathcal C}_n)$ by rotating positions:
\[\rho_c(r_1,\dots,r_n) = (r_{c+1},\dots,r_n,r_1,\dots,r_c),\]
and on multisets by rotating values:
\[ \phi_c([a_1,\dots,a_\ell]) = [a_1+c,\dots,a_\ell+c], \]
with all elements taken modulo~$n$. For example, the $\rho$-orbit of the arithmetical $r$-structure $(3,1,2,1,2)$ on ${\mathcal C}_5$ is
\[\big\{(3,1,2,1,2), \ \ (1,2,1,2,3), \ \ (2,1,2,3,1), \ \ (1,2,3,1,2), \ \ (2,3,1,2,1)\big\} \]
and the $\phi$-orbit of $[1,3,3,4]$ in $\Multiset_4(5)$ is
\[\big\{[1,3,3,4], \ \ [2,4,4,5], \ \ [3,5,5,1], \ \ [4,1,1,2], \ \ [5,2,2,3]\big\}.\]
\begin{proof}[First proof of Theorem~\ref{main-cycle-theorem}]
We will construct an explicit bijection
\[\Omega:\Multiset_{\leq n-1}(n)\to\Arith({\mathcal C}_n),\]
which for each multiset constructs an arithmetical structure on the cycle. The method of doing so is analogous to the first proof of Theorem \ref{main-path-theorem}, in which we constructed an arithmetical structure on the path for each ballot sequence. We note that our bijection $\Omega$ will have the following properties:
\begin{enumerate}
\item $\Omega(S)=(\mathbf{d},\mathbf{r})$ with $\mathbf{r}(1)=n-|S|$ for all $S$.
\item $\Omega$ is equivariant with respect to the actions of ${\mathbb Z}_n$ just described, i.e., $\Omega(\phi_t(S))=\rho_t(\Omega(S))$.
\item Given a nonempty multiset $S$, let $\tilde{S} = \phi_c(S)$ be the element of the $\phi$-orbit of $S$ that is first in reverse-lex\footnote{If $A=[a_1\leq\cdots\leq a_k]$ and $B=[b_1\leq\cdots\leq b_k]$ are $k$-multisets, then $A$ precedes $B$ in reverse-lex order if and only if $a_j<b_j$, where $j=\max\{i\st a_i\neq b_i\}$.} order. Then $\Omega(\tilde{S}) = \tilde{\mathbf{r}}$ is first in reverse-lex order in its $\rho$-orbit. In particular, $\tilde{r}_n = 1$.
\end{enumerate}
We begin by setting $\Omega(\emptyset) = (\mathbf{2},\mathbf{1})$. Given a nonempty multiset $S$, let $\tilde{S}$ be defined as in property (C) above. Write $\tilde{S}$ as $[s_1,\ldots,s_\ell]$ where each $s_i \le s_{i+1}$. Because $|S|=\ell<n$, some value in $[n]$ does not occur in $S$, so some element of its $\phi$-orbit contains no instance of $n$; since any multiset containing $n$ comes later in reverse-lex order, $\tilde S$ itself contains no instance of $n$. This implies $s_{\ell} < n$. Also, $s_1 = 1$, since otherwise subtracting $1$ from each element produces an element of the $\phi$-orbit earlier in reverse-lex order.
We now define an arithmetical $r$-structure $\tilde{\mathbf{r}}=(\tilde{r}_1,\dots,\tilde{r}_{n})$ (which we regard as labels on the vertices of ${\mathcal C}_n$) by the following algorithm, with the steps numbered for later reference. Much as Proposition \ref{subdivision-prop} constructed arithmetical structures on paths by describing a series of subdivisions on the normal Laplacian arithmetical structure on a shorter path, this algorithm will start with the Laplacian arithmetical structure on ${\mathcal C}_1$ and make a sequence of subdivisions based on the given multiset.
{\bf Algorithm A}
\begin{enumerate}
\item[Input:] A multiset $\tilde{S}=\phi_c(S)=[s_1,\ldots,s_\ell]$ as above.\\
\item[(1)] Initialize $\tilde{r}_0=1$ and $n_0=1$ on ${\mathcal C}_1$ (by convention, we set $\tilde{r}_0=\tilde{r}_n$).
\item[(2)] For each integer $i$ with $1 \le i \le \ell$, we construct an $r$-structure on the cycle graph ${\mathcal C}_{n_i}$ ($1=n_0< n_1 \leq n_2 \leq n_3 \leq \dots \leq n_\ell \le n$) as follows:
\begin{enumerate}
\item If $n_{i-1}<s_i$, add vertices with label $\tilde{r}_j=1$ until there are $s_i$ vertices; then add a vertex with label $\tilde{r}_{s_i}=2$, and set $n_i=s_i+1$.
\item If $n_{i-1}=s_i$, add a vertex with label $\tilde{r}_{s_i}=\tilde{r}_{s_i-1}+1$, and set $n_i=s_i+1$.
\item If $n_{i-1} > s_i$, then insert a vertex with label $\tilde{r}_{s_i}+\tilde{r}_{s_i-1}$ into position $s_i$, which has the effect of moving all later labels forward one vertex. Set $n_i=n_{i-1}+1$.
\end{enumerate}
\item[(3)] If $n_{\ell}<n$, add $n-n_\ell$ vertices with a label of $1$.
\item[(4)] The resulting arithmetical $r$-structure is $(\tilde{r}_1,\dots, \tilde{r}_n)=\tilde{\mathbf{r}}=\Omega(\tilde{S})$, recalling that $\tilde{r}_0=\tilde{r}_n$. Set $\mathbf{r}=\Omega(S) = \rho_{-c}(\tilde{\mathbf{r}})$.
\end{enumerate}
We emphasize that exactly one of steps (2a), (2b) and (2c) is executed for each~$i$. Moreover, we claim that after each iteration of step~(2) the labeled vertices form an arithmetical $r$-structure on a smaller cycle ${\mathcal C}_{n_i}$, and in particular the output $\mathbf{r}$ produced by this algorithm is indeed an arithmetical $r$-structure on ${\mathcal C}_n$ for the following reasons:
\begin{enumerate}
\item[i)] Because we know that $s_1=1$, after the first iteration of step~(2) we have the vector $(1,2)$ which is an arithmetical $r$-structure on ${\mathcal C}_2$ by Proposition \ref{c-two}.
\item[ii)] After each iteration of the procedure in step~(2a), an arithmetical $r$-structure on ${\mathcal C}_{n_i}$ with $\tilde{r}_0=1$ now also ends with a sequence of $1$'s followed by a $2$. The divisibilities of Proposition \ref{cycle-characterization} are thus preserved.
\item[iii)] The procedures in steps~(2b) and (2c) amount to inserting a vertex that is labeled with the sum of the labels of its neighbors, which is essentially the subdivision operation discussed for paths. In particular, the divisibilities of Proposition \ref{cycle-characterization} still hold.
\item[iv)] Step~(3) will simply add a string of vertices labeled with a $1$ where there was only one such vertex before. Therefore, this also preserves the fact that the output is an arithmetical $r$-structure.
\item[v)] Finally, applying a cyclic rotation as in step~(4) does not break this property.
\end{enumerate}
Before analyzing this algorithm further, we give two examples, with $n=6$. Index the vertices of ${\mathcal C}_6$ according to the following figure:
\begin{center}
\begin{tikzpicture}
[scale=.4,auto=left,minimum size=.7cm,every node/.style={circle,fill=blue!20}]
\node (a) at (-1.5,2) {$v_1$};
\node (b) at (1.5,2) {$v_2$};
\node (c) at (3,0) {$v_3$};
\node (d) at (1.5,-2) {$v_4$};
\node (e) at (-1.5,-2) {$v_5$};
\node (f) at (-3,0) {$v_0$};
\foreach \from/\to in {a/b,b/c,c/d,d/e,e/f,f/a}
\draw (\from) -- (\to);
\end{tikzpicture}
\end{center}
For $\tilde{S} = [1,1,3,5]$, Algorithm~A proceeds as in Figure~\ref{AlgmAEx:1}. The first panel shows the initialization, and each subsequent panel shows one iteration of the procedure in step~(2).
\begin{figure}[h]
\begin{tikzpicture}
[scale=.35,auto=left,minimum size=.7cm,every node/.style={circle,fill=blue!20}]
\coordinate (a) at (-1.5,2) {};
\coordinate (b) at (1.5,2) {};
\coordinate (c) at (3,0) {};
\coordinate (d) at (1.5,-2) {};
\coordinate (e) at (-1.5,-2) {};
\node (f) at (-3,0) {1};
\foreach \from/\to in {a/b,b/c,c/d,d/e,e/f,f/a}
\draw (\from) -- (\to);
\node [draw=none,fill=none] at (0,-4) {(i)};
\end{tikzpicture}
\hfill
\begin{tikzpicture}
[scale=.35,auto=left,minimum size=.7cm,every node/.style={circle,fill=blue!20}]
\node (a) at (-1.5,2) {2};
\coordinate (b) at (1.5,2) {};
\coordinate (c) at (3,0) {};
\coordinate (d) at (1.5,-2) {};
\coordinate (e) at (-1.5,-2) {};
\node (f) at (-3,0) {1};
\foreach \from/\to in {a/b,b/c,c/d,d/e,e/f,f/a}
\draw (\from) -- (\to);
\node [draw=none,fill=none] at (0,-4) {(ii)};
\end{tikzpicture}
\hfill
\begin{tikzpicture}
[scale=.35,auto=left,minimum size=.7cm,every node/.style={circle,fill=blue!20}]
\node (a) at (-1.5,2) {3};
\node (b) at (1.5,2) {2};
\coordinate (c) at (3,0) {};
\coordinate (d) at (1.5,-2) {};
\coordinate (e) at (-1.5,-2) {};
\node (f) at (-3,0) {1};
\foreach \from/\to in {a/b,b/c,c/d,d/e,e/f,f/a}
\draw (\from) -- (\to);
\node [draw=none,fill=none] at (0,-4) {(iii)};
\end{tikzpicture}
\hfill
\begin{tikzpicture}
[scale=.35,auto=left,minimum size=.7cm,every node/.style={circle,fill=blue!20}]
\node (a) at (-1.5,2) {3};
\node (b) at (1.5,2) {2};
\node (c) at (3,0) {3};
\coordinate (d) at (1.5,-2) {};
\coordinate (e) at (-1.5,-2) {};
\node (f) at (-3,0) {1};
\foreach \from/\to in {a/b,b/c,c/d,d/e,e/f,f/a}
\draw (\from) -- (\to);
\node [draw=none,fill=none] at (0,-4) {(iv)};
\end{tikzpicture}
\hfill
\begin{tikzpicture}
[scale=.35,auto=left,minimum size=.7cm,every node/.style={circle,fill=blue!20}]
\node (a) at (-1.5,2) {3};
\node (b) at (1.5,2) {2};
\node (c) at (3,0) {3};
\node (d) at (1.5,-2) {1};
\node (e) at (-1.5,-2) {2};
\node (f) at (-3,0) {1};
\foreach \from/\to in {a/b,b/c,c/d,d/e,e/f,f/a}
\draw (\from) -- (\to);
\node [draw=none,fill=none] at (0,-4) {(v)};
\end{tikzpicture}
\caption{Algorithm A, with $n=6$ and $\tilde{S}=[1,1,3,5]$.\label{AlgmAEx:1}}
\end{figure}
\begin{enumerate}[label=(\roman*)]
\item The leftmost figure is the result of the initialization (step~1).
\item When $i=1$, we have $s_i=1=n_{i-1}$. Step~(2b) inserts a vertex with label $1+1=2$ in position 1 and sets $n_1=2$. Note that we always have $s_1=1=n_0$, so this will always be the output of the first iteration.
\item When $i=2$ we have $s_i=1<n_{i-1}$. Step~(2c) inserts a vertex with label $1+2=3$ in position 1 and moves the vertex labeled 2 into position 2. We then set $n_2=3$.
\item When $i=3$ we have $s_i=3=n_2$. Step~(2b) places a vertex with label $\tilde{r}_{s_3-1}+1=2+1=3$ in position 3, and sets $n_3=4$.
\item When $i=4$ we have $s_i=5>n_{i-1}$. Step~(2a) places a vertex with a label of $1$ in position 4, and a vertex with label $2$ in position 5. We now set $n_4=6=n$, so step (3) does not occur.
\end{enumerate}
As a second example, if $\tilde{S} = [1,1,4,4]$ then the algorithm proceeds as in Figure~\ref{AlgmAEx:2}.
\begin{figure}[h]
\begin{tikzpicture}
[scale=.35,auto=left,minimum size=.7cm,every node/.style={circle,fill=blue!20}]
\coordinate (a) at (-1.5,2) {};
\coordinate (b) at (1.5,2) {};
\coordinate (c) at (3,0) {};
\coordinate (d) at (1.5,-2) {};
\coordinate (e) at (-1.5,-2) {};
\node (f) at (-3,0) {1};
\foreach \from/\to in {a/b,b/c,c/d,d/e,e/f,f/a}
\draw (\from) -- (\to);
\node [draw=none,fill=none] at (0,-4) {(i)};
\end{tikzpicture}
\hfill
\begin{tikzpicture}
[scale=.35,auto=left,minimum size=.7cm,every node/.style={circle,fill=blue!20}]
\node (a) at (-1.5,2) {2};
\coordinate (b) at (1.5,2) {};
\coordinate (c) at (3,0) {};
\coordinate (d) at (1.5,-2) {};
\coordinate (e) at (-1.5,-2) {};
\node (f) at (-3,0) {1};
\foreach \from/\to in {a/b,b/c,c/d,d/e,e/f,f/a}
\draw (\from) -- (\to);
\node [draw=none,fill=none] at (0,-4) {(ii)};
\end{tikzpicture}
\hfill
\begin{tikzpicture}
[scale=.35,auto=left,minimum size=.7cm,every node/.style={circle,fill=blue!20}]
\node (a) at (-1.5,2) {3};
\node (b) at (1.5,2) {2};
\coordinate (c) at (3,0) {};
\coordinate (d) at (1.5,-2) {};
\coordinate (e) at (-1.5,-2) {};
\node (f) at (-3,0) {1};
\foreach \from/\to in {a/b,b/c,c/d,d/e,e/f,f/a}
\draw (\from) -- (\to);
\node [draw=none,fill=none] at (0,-4) {(iii)};
\end{tikzpicture}
\hfill
\begin{tikzpicture}
[scale=.35,auto=left,minimum size=.7cm,every node/.style={circle,fill=blue!20}]
\node (a) at (-1.5,2) {3};
\node (b) at (1.5,2) {2};
\node (c) at (3,0) {1};
\node (d) at (1.5,-2) {2};
\coordinate (e) at (-1.5,-2) {};
\node (f) at (-3,0) {1};
\foreach \from/\to in {a/b,b/c,c/d,d/e,e/f,f/a}
\draw (\from) -- (\to);
\node [draw=none,fill=none] at (0,-4) {(iv)};
\end{tikzpicture}
\hfill
\begin{tikzpicture}
[scale=.35,auto=left,minimum size=.7cm,every node/.style={circle,fill=blue!20}]
\node (a) at (-1.5,2) {3};
\node (b) at (1.5,2) {2};
\node (c) at (3,0) {1};
\node (d) at (1.5,-2) {3};
\node (e) at (-1.5,-2) {2};
\node (f) at (-3,0) {1};
\foreach \from/\to in {a/b,b/c,c/d,d/e,e/f,f/a}
\draw (\from) -- (\to);
\node [draw=none,fill=none] at (0,-4) {(v)};
\end{tikzpicture}
\caption{Algorithm A, with $n=6$ and $\tilde{S}=[1,1,4,4]$.\label{AlgmAEx:2}}
\end{figure}
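For concreteness, here is a sketch of Algorithm~A in code (the function name and the list representation are ours; the list entry at index $j$ holds $\tilde r_j$, with $\tilde r_0$ stored at index $0$). It reproduces the two runs illustrated in Figures~\ref{AlgmAEx:1} and~\ref{AlgmAEx:2}.
\begin{verbatim}
def omega_tilde(S_tilde, n):
    # Algorithm A for S_tilde = [s_1 <= ... <= s_l] with s_1 = 1 and s_l < n.
    # labels[j] holds the label of vertex j; labels[0] is r~_0 = r~_n.
    labels = [1]                                  # step (1)
    for s in sorted(S_tilde):                     # step (2)
        m = len(labels)                           # m = n_{i-1}
        if m < s:                                 # step (2a)
            labels += [1] * (s - m)
            labels.append(2)
        elif m == s:                              # step (2b)
            labels.append(labels[s - 1] + 1)
        else:                                     # step (2c)
            labels.insert(s, labels[s] + labels[s - 1])
    labels += [1] * (n - len(labels))             # step (3)
    return labels[1:] + [labels[0]]               # (r~_1, ..., r~_n), with r~_n = r~_0

assert omega_tilde([1, 1, 3, 5], 6) == [3, 2, 3, 1, 2, 1]   # Figure AlgmAEx:1
assert omega_tilde([1, 1, 4, 4], 6) == [3, 2, 1, 3, 2, 1]   # Figure AlgmAEx:2
\end{verbatim}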
We now show that the function $\Omega:\Multiset_{\leq n-1}(n)\to\Arith({\mathcal C}_n)$ defined by Algorithm~A satisfies the desired properties (A), (B), and (C), and is a bijection. First, each iteration of the procedure in step~(2) adds one new vertex with a label $\tilde{r}_i$ greater than $1$. Thus, the number of $1$'s in $\Omega(S)$, which is unaffected by the cyclic rotation $\mathbf{r} = \rho_{-c}(\tilde{\mathbf{r}})$ of Step (4), equals $\Omega(S)(1)=n-|S|$, and we can regard $\Omega$ as the union of maps
\[
\Omega_\ell:\Multiset_{\ell}(n)\to\{\text{arithmetical } r\text{-structures on } {\mathcal C}_n \text{ with } \mathbf{r}(1)=n-\ell\}.
\]
Second, to see that this map is equivariant, we let $T=\phi_t(S)$ for $t \in {\mathbb Z}_n$. Then one can check that $\tilde{T}=\tilde{S}=\phi_{-t+c}(T)$. We then get that $\Omega(T) = \rho_{t-c}(\tilde{\mathbf{r}})= \rho_t(\Omega(S))$, as desired.
Third, by starting with $\tilde{S} = [s_1,s_2,\dots, s_\ell]$, which is the multiset in its $\phi$-orbit that comes first in reverse-lex order, the arithmetical $r$-structure $\tilde{\mathbf{r}} = \Omega(\tilde{S})$ is constructed so that $\tilde{r}_j = 1$ for $s_\ell< j \leq n$ and $\tilde{r}_{s_\ell} > 1$. The only way that $\tilde{\mathbf{r}}$ can contain a longer string of $1$'s earlier is if there exists a choice of $(i,i+1)$ so that the gap $s_{i+1}-s_i-1$ is greater than the gap $n-s_\ell$. However, that would contradict the fact that $\tilde{S}$ is first in reverse-lex order, since adding $n-s_{i+1}+1$ (modulo $n$) to all entries of $\tilde{S}$ would then produce a new maximal entry $n - s_{i+1}+s_i+1$, which would be smaller than $s_\ell$.
We will now show that each $\Omega_\ell$ is a bijection. This is clear for $\ell=0$, i.e., when $S=\emptyset$. When $\ell=1$, we have $S=[a_1]$ and $\tilde S=[1]=\phi_{n-a_1+1}(S)$. The procedure in step~(2) is executed only once, and step~(3) sets $\tilde{\mathbf{r}}=(2,1,\dots,1)$, so the output of the algorithm via step~(4) will be $\rho_{-n+a_1-1}(2,1,\dots,1) = (r_1,\ldots,r_n)$ where $r_{n-a_1+2}=2$ and all other $r_i=1$. Again, we see that $\Omega_1$ is a bijection.
Suppose now that $\ell\geq 2$. After each iteration of the procedure in step~(2), the label in position $s_i$ is a \emph{local maximum}; that is, $\tilde{r}_{s_i-1}<\tilde{r}_{s_i}$, and either $\tilde{r}_{s_i+1}<\tilde{r}_{s_i}$ or there is no vertex $s_i+1$, in which case $\tilde{r}_0=\tilde{r}_n=1$ would be the next label on a vertex. We claim in addition that if $m>s_i$ then $\tilde{r}_m$ is not a local maximum. This is clear if step~(2a) or (2b) was executed, for then $s_i=n_i-1$. On the other hand, if step~(2c) was just executed $b$ times in a row, then each step inserted a label that is greater than the label to its right, and each insertion occurred to the right of all previous insertions (since the sequence $(s_i)$ is weakly increasing), which proves the claim.
Therefore, we can recover $\tilde S$ from $\mathbf{r}$ by the following algorithm. First, let $\tilde{\mathbf{r}}=\rho_{c}(\mathbf{r})$ be the element of the $\rho$-orbit of $\mathbf{r}$ that is first in reverse-lex order (so in particular $\tilde{r}_n=1$). Label the vertices of ${\mathcal C}_n$ with $\tilde{\mathbf{r}}$, and perform the following steps.
{\bf Algorithm B}
\begin{enumerate}
\item[Input:] An arithmetical $r$-structure $\tilde{\mathbf{r}}=\rho_c(\mathbf{r})=(\tilde{r}_1,\ldots,\tilde{r}_n)$ as above.\\
\item[(1)] Let $\tilde S$ be the empty multiset.
\item[(2)] Let $j$ be the greatest integer $1 \le j \le n$ so that $\tilde{r}_j$ is a local maximum, and add $j$ to the multiset $\tilde S$.
\item[(3)] Delete $\tilde{r}_j$ from $\tilde{\mathbf{r}}$. What remains is an arithmetical $r$-structure on the graph ${\mathcal C}_{n-1}$.
\item[(4)] Repeat the previous two steps until we are left with the arithmetical $r$-structure $\mathbf{1}$ on the graph ${\mathcal C}_{n-\ell}$. The multiset $\tilde S$ will now contain $\ell = n-\mathbf{r}(1)$ elements, and will be first in reverse-lex order in its $\phi$-orbit.
\end{enumerate}
Having recovered $\tilde S$, we set $S=\phi_{-c}(\tilde S)$.
Steps~(2) and~(3) of Algorithm B will be executed exactly $\ell$ times, since each iteration removes one entry greater than 1 from $\tilde{\mathbf{r}}$. The fact that we have an arithmetical $r$-structure on ${\mathcal C}_{n-1}$ after step~(3) follows from part~(B) of Proposition~\ref{subdivision}.
\end{proof}
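Algorithm~B can be sketched in the same style (the function name is ours; the input is $\tilde{\mathbf{r}}$, already rotated so that it is first in reverse-lex order in its $\rho$-orbit). On the two examples above it inverts the sketch of Algorithm~A.
\begin{verbatim}
def algorithm_B(r_tilde):
    # repeatedly remove the last strict local maximum (cyclically), recording its position
    r = list(r_tilde)
    S = []
    while any(x > 1 for x in r):
        n = len(r)
        j = max(j for j in range(1, n + 1)
                if r[j - 1] > r[j - 2] and r[j - 1] > r[j % n])
        S.append(j)
        del r[j - 1]
    return sorted(S)

assert algorithm_B([3, 2, 3, 1, 2, 1]) == [1, 1, 3, 5]
assert algorithm_B([3, 2, 1, 3, 2, 1]) == [1, 1, 4, 4]
\end{verbatim}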
We now give a second proof of Theorem~\ref{main-cycle-theorem} using enumeration of lattice paths. This proof relies on our refined count for arithmetical structures on paths given in Theorem~\ref{main-path-theorem}.
Before giving the proof we explain how an arithmetical $r$-structure on a cycle gives rise to arithmetical $r$-structures on paths of vertices contained within that cycle.
\begin{lemma}\label{cycle-lemma}
Let $n \ge 2$.
\begin{enumerate}
\item Suppose that $\mathbf{r} = (r_1,\ldots, r_n)$ is an arithmetical $r$-structure on ${\mathcal C}_n$ with $r_j = 1$ for some $1 \le j \le n$. Then $(r_j, r_{j+1},\ldots, r_n, r_1,\ldots, r_j)$ is an arithmetical $r$-structure on $\P_{n+1}$.
\item Suppose that $\mathbf{r} = (r_1,\ldots, r_n)$ is an arithmetical $r$-structure on ${\mathcal C}_n$ with $r_\alpha = 1$ and $r_\beta = 1$ for some $1 \le \alpha < \beta \le n$. Then $(r_\alpha, r_{\alpha+1},\ldots, r_\beta)$ is an arithmetical $r$-structure on $\P_{\beta-(\alpha-1)}$ and $(r_\beta,r_{\beta+1},\ldots, r_n, r_1,\ldots, r_\alpha)$ is an arithmetical structure on $\P_{n-(\beta-\alpha)+1}$.
\end{enumerate}
\end{lemma}
\begin{proof}
The proofs are similar to those in Lemma~\ref{path-lemma}. For (A), note that $\mathbf{r}$ is an arithmetical $r$-structure on ${\mathcal C}_n$ if and only if there exists a positive integral vector $\mathbf{d}$ such that the following equations hold, where the indices are taken modulo~$n$:
\[
r_id_i=r_{i-1}+r_{i+1} \qquad 1\le i\le n.
\]
Without loss of generality, assume that $r_1=1$. Then we have the following set of equations, showing that $(r_1,r_2,\dots,r_n,r_1)$ is an arithmetical $r$-structure on $\P_{n+1}$:
\begin{align*}
\tilde{d}_1 &:= r_2\\
d_ir_i &= r_{i-1}+r_{i+1} \qquad 1<i\le n\\
\tilde{d}_{n+1} &:= r_n.
\end{align*}
Similarly for (B), if $r_1=r_\beta=1$, then we have the equations:
\begin{align*}
\tilde{d}_1 &:= r_2\\
d_ir_i &= r_{i-1}+r_{i+1} \qquad 1<i<\beta\\
\tilde{d}_{\beta} &:= r_{\beta-1},
\end{align*}
showing that $(r_1,r_2,\dots,r_\beta)$ is an arithmetical $r$-structure on $\P_{\beta}$. An identical argument shows that $(r_\beta,r_{\beta+1},\dots,r_n,r_1)$ is an arithmetical $r$-structure on $\P_{n-\beta+2}$.
\end{proof}
\begin{proof}[Second Proof of Theorem~\ref{main-cycle-theorem}]
First, consider the case $k=1$. By Lemma \ref{cycle-lemma}, for each $j\in[n]$, the map
\[
(r_1,\dots,r_n) \mapsto (r_j,r_{j+1},\dots,r_n,r_1,\dots,r_j)
\]
is a bijection between arithmetical $r$-structures $(r_1,\dots,r_n)$ on ${\mathcal C}_n$ with a unique 1 in position $j$, and arithmetical $r$-structures on $\P_{n+1}$ with exactly two entries equal to $1$. By Theorem~\ref{main-path-theorem},
\begin{align*}
\#\{\mathbf{r}\in\Arith({\mathcal C}_n) \st \mathbf{r}(1)=1\} &= n\cdot\#\{\mathbf{r}\in\Arith(\P_{n+1}) \st \mathbf{r}(1)=2\}\\
&= n\cdot A(n+1,2)\\
&= n\cdot C_{n-1} = \binom{2n-2}{n-1} = \multiset{n}{n-1}
\end{align*}
as claimed.
Now suppose that $k > 1$. There are $\binom{2n-k-1}{n-k}$ lattice paths from $(0,0)$ to $(n-1,n-k)$ consisting of north and east steps. Since $n-1 > n-k$, every such lattice path $P$ touches the line $x=y$ for a last time at some point $(z,z)$, where $0 \le z \le n-k$. That is, $P$ consists of a lattice path $P_1$ from $(0,0)$ to $(z,z)$, followed by a step east to $(z+1,z)$, followed by a lattice path $P_2$ from $(z+1,z)$ to $(n-1,n-k)$ that does not cross (although it may touch) the diagonal line $x = y+1$. There are $\binom{2z}{z} = (z+1) C_z$ choices for the subpath $P_1$. The possibilities for the path $P_2$ are in bijection with lattice paths from $(0,0)$ to $(n-z-2,n-z-k)$ that do not cross above the line $x=y$. This gives $B(n-z-2,n-z-k)=A(n-z,k)$ possibilities for the path $P_2$. Therefore
\begin{equation} \label{lattice-path-count}
\binom{2n-k-1}{n-k} = \sum_{z=0}^{n-k} (z+1) C_z \cdot A(n-z,k).
\end{equation}
We show that this expression counts the arithmetical $r$-structures on ${\mathcal C}_n$ with $\mathbf{r}(1) = k$.
It is now convenient to think of the vertices of ${\mathcal C}_n$ as $v_0,\ldots, v_{n-1}$. Let $\mathbf{r}=(r_0,\dots,r_{n-1})$ be an arithmetical $r$-structure on ${\mathcal C}_n$ such that $\mathbf{r}(1)=k$. Let
\[
\alpha = \min\{i \st r_i=1\}, \qquad \beta=\max\{i\st r_i=1\}.
\]
Note that $0 \le \alpha < \beta \le n-1$.
First, by part (B) of Lemma \ref{cycle-lemma}, $\mathbf{r}'=(r_\alpha,r_{\alpha+1},\dots,r_{\beta -1},r_\beta)$ is an arithmetical $r$-structure on the path $\P_{\beta-\alpha+1}$ with $\mathbf{r}'(1)=k$. In particular, $\beta-\alpha + 1 \ge k$. By Theorem~\ref{main-path-theorem}, the number of possibilities for $\mathbf{r}'$ is $A(\beta-\alpha+1,k)$.
Second, observe that $\mathbf{r}''=(r_\beta, r_{\beta+1},\dots, r_{n-1}, r_0,r_1, \dots, r_\alpha)$ is an arithmetical $r$-structure on the path $\P_{n-(\beta-\alpha)+1}$ with $\mathbf{r}''(1)=2$. Again by Theorem~\ref{main-path-theorem}, the number of possibilities for $\mathbf{r}''$ is $A(n-\beta+\alpha+1,2)$, which is equal to the Catalan number $C_{n-(\beta-\alpha)-1}$.
Let $z = n-(\beta-\alpha)-1$ and note that $0\le z \le n-k$. For fixed $n, k$, each choice of $\alpha$ and $\beta$ satisfying $\beta-\alpha + 1 \ge k$ gives exactly $C_z \cdot A(n-z,k)$ possible arithmetical $r$-structures. Moreover, each value of $z$ arises from precisely $z+1$ pairs $(\alpha,\beta)$, namely $(0,n-z-1),(1,n-z),\dots,(z,n-1)$. Therefore, the number of possible arithmetical structures is
\[
\sum_{z=0}^{n-k} (z+1) C_z \cdot A(n-z,k)
\]
which, combined with \eqref{lattice-path-count}, completes the proof.
\end{proof}
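The identity \eqref{lattice-path-count} can also be checked numerically for small $n$ and $k$. The sketch below assumes the usual closed form $(a-b+1)\binom{a+b}{b}/(a+1)$ for the ballot-type numbers and uses Theorem~\ref{main-path-theorem} for $A(n,k)$; the helper names are ours.
\begin{verbatim}
from math import comb

def B(a, b):
    # assumed closed form for NE paths from (0,0) to (a,b) weakly below y = x
    return (a - b + 1) * comb(a + b, b) // (a + 1)

def A(n, k):
    # Theorem main-path-theorem: A(n, k) = B(n - 2, n - k)
    return B(n - 2, n - k)

def catalan(z):
    return comb(2 * z, z) // (z + 1)

assert all(comb(2 * n - k - 1, n - k)
           == sum((z + 1) * catalan(z) * A(n - z, k) for z in range(n - k + 1))
           for n in range(3, 10) for k in range(2, n + 1))
\end{verbatim}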
We do not know as much about the distribution of individual entries in the arithmetical structures on the cycle as we do for the path. Clearly each entry is distributed identically, since ${\mathcal C}_n$ is vertex-transitive, but we do not have a full analogue of Theorem~\ref{ballot-path}. However, we can observe the following pattern.
\begin{prop}
Let $n\geq 3$ and $1\leq i\leq n$. Then $(1,r_2,\dots,r_n)$ is an arithmetical $r$-structure on ${\mathcal C}_n$ if and only if $(1,r_2,\dots,r_n,1)$ is an arithmetical $r$-structure on $\P_{n+1}$. In particular, the number of arithmetical $d$-structures on ${\mathcal C}_n$ with $d_i=1$ is the Catalan number $C_{n-1}$.
\end{prop}
\begin{proof}
The equivalence follows immediately from the characterizations of arithmetical $r$-structures on paths and cycles (respectively Corollary~\ref{path-characterization} and Proposition~\ref{cycle-characterization}), and the enumeration then follows from Theorem~\ref{ballot-path}.
\end{proof}
\bibliographystyle{amsalpha}
| {
"timestamp": "2018-07-25T02:03:20",
"yymm": "1701",
"arxiv_id": "1701.06377",
"language": "en",
"url": "https://arxiv.org/abs/1701.06377",
"abstract": "Let $G$ be a finite, simple, connected graph. An arithmetical structure on $G$ is a pair of positive integer vectors $\\mathbf{d},\\mathbf{r}$ such that $(\\mathrm{diag}(\\mathbf{d})-A)\\mathbf{r}=0$, where $A$ is the adjacency matrix of $G$. We investigate the combinatorics of arithmetical structures on path and cycle graphs, as well as the associated critical groups (the cokernels of the matrices $(\\mathrm{diag}(\\mathbf{d})-A)$). For paths, we prove that arithmetical structures are enumerated by the Catalan numbers, and we obtain refined enumeration results related to ballot sequences. For cycles, we prove that arithmetical structures are enumerated by the binomial coefficients $\\binom{2n-1}{n-1}$, and we obtain refined enumeration results related to multisets. In addition, we determine the critical groups for all arithmetical structures on paths and cycles.",
"subjects": "Combinatorics (math.CO); Number Theory (math.NT)",
"title": "Counting Arithmetical Structures on Paths and Cycles",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9873750496039275,
"lm_q2_score": 0.7185943805178138,
"lm_q1q2_score": 0.70952216210888
} |
https://arxiv.org/abs/2106.00380 | The influence of the symmetry of identical particles on flight times | In this work, our purpose is to show how the symmetry of identical particles can influence the time evolution of free particles in the nonrelativistic and relativistic domains. For this goal, we consider a system of either two distinguishable or indistinguishable (bosons and fermions) particles. Two classes of initial conditions have been studied: different initial locations with the same momenta, and the same locations with different momenta. The flight time distribution of particles arriving at a `screen' is calculated in each case. Fermions display broader distributions as compared with either distinguishable particles or bosons, leading to earlier and later arrivals for all the cases analyzed here. The symmetry of the wave function seems to speed up or slow down propagation of particles. Due to the cross terms, certain initial conditions lead to bimodality in the fermionic case. Within the nonrelativistic domain and when the short-time survival probability is analyzed, if the cross term becomes important, one finds that the decay of the overlap of fermions is faster than for distinguishable particles which in turn is faster than for bosons. These results are of interest in the short time limit since they imply that the well-known quantum Zeno effect would be stronger for bosons than for fermions.Fermions also arrive earlier than bosons when they are scattered by a delta barrier. Furthermore, the particle symmetry does not affect the mean tunneling flight time and it is given by the phase time for the distinguishable particle. | \section{Introduction}
It is well understood that the symmetry of indistinguishable particles has a
profound influence on their dynamics. A feature which is well documented is
the ``bunching" of bosons \cite{brown1956,jeltes2007} and ``anti-bunching" of
fermions \cite{henny1999,oliver1999,kiesel2002,ianuzzi2006,rom2006}.
Consider two identical particles, each described for simplicity by an
initial Gaussian wavepacket. When the two Gaussians of the two particles are
located sufficiently far from each other in phase space, there is no overlap between them
and the symmetry of particles plays no role. The fermions and bosons may be
considered as two independent distinguishable particles. However, when they
come close, the symmetry leads to important consequences. Bosons, whose overall function is
symmetric with respect to exchange, may overlap with each other; hence the interference term
`increases' the density, causing the ``bunching'' phenomenon. Fermions
on the other hand, due to the anti-symmetry, cannot be located at the same
place and the `hole' in the distribution created by the overlap term creates
a distancing between the particles, which is understood as the
``anti-bunching" effect.
These effects show up also in the temporal dynamics \cite%
{grossmann2014,buchholz2018}. Consider the scattering of two
indistinguishable particles on each other and the relative distance
(squared) between them as a function of time \cite{grossmann2014}. As they
come closer to each other the distance is reduced and as they move again
away it increases. Yet, when comparing such scattering with the exact same
potential, incident energy, etc. of bosons and fermions, one finds that the
distance between the fermions as they separate is larger than that of bosons
-- another reflection of the bunching and anti-bunching phenomenon \cite%
{grossmann2014,buchholz2018}. Some researchers have tried to describe the
repulsion of fermions in terms of an artificial repulsive potential -- the ``Pauli potential". \cite{wilets1977,dorso1987,boal1988,latora1994,gu2016} In statistical mechanics, this situation leads to the so-called statistical interparticle potential which is temperature-dependent since
it is related to what is known as the mean thermal wavelength or thermal de Broglie wavelength. \cite{Pathria} One then speaks about statistical
attraction and repulsion for bosons and fermions, respectively.
To the best of our knowledge the effect of symmetry on flight time
distributions \cite{petersen2017,petersen2018,rivlin2020,ianconescu2021} has
not been addressed.
The central objective of the present work is to study how the symmetry affects temporal evolution, one-particle flight time distributions and, in the presence of an interaction potential, mean flight times when considering non-interacting identical particles. We will show that fermions have a
broader time distribution than bosons so that the former will be detected arriving at a suitably placed screen earlier and later, a direct result of the anti-bunching effect.
Effectively, the symmetry can speed up or slow down the time evolving particles in the nonrelativistic and relativistic domains. In the second framework, we argue that one cannot speak of a superluminal effect since it can be seen as a mirror, or direct reflection, of the corresponding initial spatial distributions.
Specifically, consider first two identical particles initiated close to each other about a (mean) point in phase space, which then continue moving as free particles until they are detected on a screen some distance away. One may in principle measure the time at which a particle hits the screen
and thus obtain a flight time distribution. We will show that this flight time
distribution may be different for bosons and fermions as compared to
distinguishable particles. The same happens when their motion is not free but they
are individually scattered by a delta barrier potential.
We find that fermions have a broader time distribution than bosons, irrespective of whether they are
scattered through a potential or not. Fermions will be detected arriving at
the screen earlier and later than bosons, a direct result of the anti-bunching effect.
A related question has to do with the survival probability of the initial
wavefunction. We shall show that the bosonic survival probability of free
particles decays slower than that of distinguishable particles while for fermions it decays more rapidly. These results are also of interest in the short time limit, which should imply that the well-known quantum Zeno effect \cite{Zeno1,Zeno2,Zeno3}
would be stronger for bosons than for fermions.
However, at least for scattering through a
delta potential, the particle symmetry does not affect the mean tunneling flight time and it is given by the phase time for the distinguishable particle \cite%
{rivlin2020,dumont2020}.
The paper is organized as follows. In Section II we consider the case of free particle propagation in the nonrelativistic and relativistic domains. In
Section III the scattering of identical particles from a delta barrier
potential is analyzed in detail since closed analytical expressions can be obtained. Section IV presents and discusses our results for free and tunneling dynamics. The role played by the initial width of the Gaussian wavepackets describing the identical particles in mean flight times is also analyzed.
We argue that the early arrival of fermions at the screen relative to bosons (for example, photons) in the relativistic case, which might seem to be superluminal, is not; it is a reflection of the anti-bunching effect on the initial density distribution.
We also consider further generalizations and
implications of these results to realistic systems in the last Section.
\renewcommand{\theequation}{2.\arabic{equation}} \setcounter{section}{1} %
\setcounter{equation}{0}
\section{\protect\bigskip Free dynamics of nonrelativistic identical particles}
\subsection{General considerations}
Our model system is two one-dimensional non-interacting
identical particles (with coordinates $x_{1}$ and $x_{2}$) of mass $M$
which scatter from a potential $V\left( x_{j}\right)$. We place a screen
to the right or left of the potential and register a particle whenever it
hits the screen. The questions we seek to answer are: what is the
distribution of times at which one of the particles hits the screen, and what
is the mean time it takes, assuming that the mean time exists. The
Hamiltonian for a single particle (operators are denoted with carets) is
\begin{equation}
\hat{H}_{j}=\frac{\hat{p}_{j}^{2}}{2M}+V\left( \hat{x}_{j}\right) ,j=1,2
\label{2.1}
\end{equation}%
with $\hat{p}_{j}$ and $\hat{x}_{j}$ the momentum and position operators of
the j-th particle respectively. The full Hamiltonian is the sum of the two%
\begin{equation}
\hat{H}=\hat{H}_{1}+\hat{H}_{2}. \label{2.2}
\end{equation}%
Initially, the single particle wavefunction will be a coherent state
localized about the mean position $x_{ji}$ and mean momentum $p_{ji}$ with
width parameter $\Gamma$%
\begin{equation}
\Psi _{j}\left( x_{j}\right) =\left( \frac{\Gamma }{\pi }\right) ^{1/4}\exp %
\left[ -\frac{\Gamma \left( x_{j}-x_{ji}\right) ^{2}}{2}+\frac{i}{\hbar }%
p_{ji}\left( x_{j}-x_{ji}\right) \right] ,j=1,2. \label{2.3}
\end{equation}%
To simplify, we introduce at this point the reduced coordinates of position, momenta, and time to be%
\begin{equation}
X=\sqrt{\Gamma }x,K=\frac{p}{\hbar \sqrt{\Gamma }},\tau =\frac{\hbar \Gamma
}{M}t \label{2.4}
\end{equation}%
so that the single particle wavefunction has the form%
\begin{equation}
\Psi _{j}\left( X_{j}\right) =\left( \frac{1}{\pi }\right) ^{1/4}\exp \left[
-\frac{1}{2}\left( X_{j}-X_{ji}\right) ^{2}+iK_{ji}\left( X_{j}-X_{ji}\right) %
\right] ,j=1,2. \label{2.5}
\end{equation}%
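Note that in these reduced units the center of a free nonrelativistic wavepacket moves with velocity $dX/d\tau =\sqrt{\Gamma }\,v\,M/\left( \hbar \Gamma \right) =p/\left( \hbar \sqrt{\Gamma }\right) =K$, so that reduced free flight times are simply reduced distances divided by $K$.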
The composite wavefunction of the two particles is
\begin{equation}
\Psi _{k}\left( X_{1},X_{2}\right) =\frac{1}{N_{k}}\left[ \Psi _{1}\left(
X_{1}\right) \Psi _{2}\left( X_{2}\right) +h_{k}\Psi _{2}\left( X_{1}\right)
\Psi _{1}\left( X_{2}\right) \right] \label{2.6}
\end{equation}%
where the coefficient $h_{k}$ is
\begin{equation}
h_{k}=\left(
\begin{array}{c}
1 \\
-1 \\
0%
\end{array}%
\right) \text{ for }\left(
\begin{array}{c}
\text{bosons} \\
\text{fermions} \\
\text{distinguishable particles}%
\end{array}%
\right) \label{2.7}
\end{equation}%
and the corresponding normalization constant is
\begin{equation}
N_{k}^{2}=2\left[ 1+h_{k}\exp \left( -\frac{\left( X_{1i}-X_{2i}\right)
^{2}+\left( K_{1i}-K_{2i}\right) ^{2}}{2}\right) \right] \text{, }k=B,F
\label{2.8}
\end{equation}%
where $B$ and $F$ denote bosons and fermions, respectively. The
normalization for distinguishable particles ($k=D$) is unity. The time-evolved wavefunction is
\begin{eqnarray}
N_{k}\Psi _{k}\left( X_{1},X_{2};\tau \right) &=&\left[ \exp \left( -i\hat{H}%
_{1}\tau \right) \Psi _{1}\left( X_{1}\right) \right] \left[ \exp \left( -i%
\hat{H}_{2}\tau \right) \Psi _{2}\left( X_{2}\right) \right] \notag \\
&&+h_{k}\left[ \exp \left( -i\hat{H}_{1}\tau \right) \Psi _{2}\left(
X_{1}\right) \right] \left[ \exp \left( -i\hat{H}_{2}\tau \right) \Psi
_{1}\left( X_{2}\right) \right] \notag \\
&\equiv &\left[ \Psi _{1}\left( X_{1};\tau \right) \Psi _{2}\left(
X_{2};\tau \right) +h_{k}\Psi _{2}\left( X_{1};\tau \right) \Psi _{1}\left(
X_{2};\tau \right) \right] . \label{2.9}
\end{eqnarray}
We now put a `screen' at the point $X=X_f$ such that the initial wave function has negligible overlap with the screen.
Particle $1$ may reach the screen at time $\tau $ when particle $2$ is found at any location. In other
words, the probability that a particle reaches the screen while the second
particle is found at that time at some point, say, $z$, is
proportional to $\left\vert \Psi _{k}\left( X_f,X_{2}=z;\tau \right)
\right\vert ^{2}$. We are interested in the distribution of times at
which a particle will hit the screen, irrespective of where the other
particle is, so that the probability of finding a particle hitting the screen
at (the reduced) time $\tau $ is defined to be
\begin{equation}
P_{k}\left( X_f;\tau \right) =\frac{\int_{-\infty }^{\infty }dz\left\vert \Psi
_{k}\left( X_f,z;\tau \right) \right\vert ^{2}}{\int_{0}^{\infty }d\tau
\int_{-\infty }^{\infty }dz\left\vert \Psi _{k}\left( X_f,z;\tau \right)
\right\vert ^{2}},k=B,F,D. \label{2.10}
\end{equation}%
The mean time is then naturally given as
\begin{equation}
\left\langle \tau \right\rangle _{k}=\int_{0}^{\infty }d\tau \tau
P_{k}\left( X_f;\tau \right) ,k=B,F,D. \label{2.11}
\end{equation}%
The distributions and the means are well defined if the time integrals
converge, as in potential scattering, where the density decays at long times
as $\tau ^{-3}$ \cite{Muga-2008,Eli-2018}. For free particles, the density decays as $\tau ^{-1}$, so
that the best one can do is to consider the relative probability of having a
particle arrive at the screen at time $\tau $. This we denote as
\begin{equation}
\rho _{k}\left( X_f,\tau \right) =\int_{-\infty }^{\infty }dz\left\vert \Psi
_{k}\left( X_f,z;\tau \right) \right\vert ^{2},k=B,F,D. \label{2.12}
\end{equation}
\subsection{Symmetry and free particles}
The single free particle ($V=0$) time evolved wavefunction is
\begin{equation}
\Psi _{j}\left( X,X_{i},K_{i},\tau \right) =\frac{1}{\sqrt{\left( 1+i\tau
\right) }}\left( \frac{1}{\pi }\right) ^{1/4}\exp \left( -\frac{1}{2}\frac{%
\left[ \left( X_{j}-X_{ji}\right) -iK_{ji}\right] ^{2}}{\left( 1+i\tau
\right) }-\frac{K_{ji}^{2}}{2}\right) ,j=1,2 \label{2.13}
\end{equation}%
and the density is
\begin{equation}
\left\vert \Psi _{j}\left( X,X_{ji},K_{ji},\tau \right) \right\vert ^{2}=%
\frac{1}{\sqrt{\pi \left( 1+\tau ^{2}\right) }}\exp \left( -\frac{\left(
X-X_{ji}-K_{ji}\tau \right) ^{2}}{\left( 1+\tau ^{2}\right) }\right) ,j=1,2.
\label{2.14}
\end{equation}%
The free particle time-dependent density has a maximum at the (free particle) time $\tau
=\left( X-X_{ji}\right) /K_{ji}$. It is normalized when integrating over the
position $X$, but diverges when integrating over time due to the long-time
tail, which goes as $1/\tau $.
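As an illustration, for a packet launched from $X_{ji}=-300$ with $K_{ji}=10$ (values adopted in the numerical examples below) and a screen placed at $X=450$, this maximum occurs at the free-flight time $\tau =\left( 450+300\right) /10=75$.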
After some Gaussian integrations one finds that
\begin{eqnarray}
&&\rho _{k}\left( X;\tau \right) =\frac{1}{N_{k}^{2}}\left( \left\vert \Psi
_{1}\left( X,X_{1i},K_{1i},\tau \right) \right\vert ^{2}+\left\vert \Psi
_{2}\left( X,X_{2i},K_{2i},\tau \right) \right\vert ^{2}\right) \notag \\
&&+h_{k}\frac{2}{N_{k}^{2}}\left\vert \Psi _{1}\left( X,X_{1i},K_{1i},\tau
\right) \right\vert \left\vert \Psi _{2}\left( X,X_{2i},K_{2i},\tau \right)
\right\vert \notag \\
&&\exp \left( -\frac{\left( X_{2i}-X_{1i}\right) ^{2}+\left(
K_{2i}-K_{1i}\right) ^{2}}{4}\right) \cos \left( \Phi -\frac{\left(
X_{2i}-X_{1i}\right) \left( K_{2i}+K_{1i}\right) }{2}\right) ,k=B,F,D \notag
\\
&& \label{2.15}
\end{eqnarray}%
where the phase $\Phi $ is%
\begin{equation}
\Phi =\frac{\tau \left[ \left( X-X_{1i}\right) ^{2}-K_{1i}^{2}\right] }{%
2\left( 1+\tau ^{2}\right) }-\frac{\tau \left[ \left( X-X_{2i}\right)
^{2}-K_{2i}^{2}\right] }{2\left( 1+\tau ^{2}\right) }-\frac{K_{2i}\left(
X-X_{2i}\right) }{\left( 1+\tau ^{2}\right) }+\frac{\left( X-X_{1i}\right)
K_{1i}}{\left( 1+\tau ^{2}\right) }. \label{2.16}
\end{equation}%
As also shown below, in the long-time limit ($\tau \rightarrow \infty $) the
density scales as $\rho _{k}\left( X;\tau \right) \sim 1/\tau $, irrespective of
whether one is considering bosons, fermions or distinguishable particles, so
that, strictly speaking, for freely evolving particles the time integral in the denominator of Eq. \ref{2.10}
diverges.
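As a consistency check, the relative arrival-time density of Eq. (\ref{2.12}) can also be evaluated by brute force from Eqs. (\ref{2.9}) and (\ref{2.13}), integrating numerically over the position of the second particle. The following Python sketch (not part of the original calculations; the grid and the single value $\tau =75$ are illustrative choices) does this for the initial conditions (I) used later in the numerical section:
\begin{verbatim}
import numpy as np

def psi_free(X, Xi, Ki, tau):
    # single-particle free wavepacket in reduced units, Eq. (2.13)
    a = 1.0 + 1j*tau
    return (1.0/np.pi)**0.25/np.sqrt(a)*np.exp(
        -0.5*((X - Xi) - 1j*Ki)**2/a - Ki**2/2.0)

def rho(Xf, tau, X1i, K1i, X2i, K2i, hk, z):
    # relative arrival density of Eq. (2.12): integrate |Psi_k(Xf, z; tau)|^2 over z
    # (for hk = 0 the unsymmetrized product wavefunction of Eq. (2.6) is used)
    Nk2 = 2.0*(1.0 + hk*np.exp(-((X1i - X2i)**2 + (K1i - K2i)**2)/2.0)) if hk else 1.0
    Psi = (psi_free(Xf, X1i, K1i, tau)*psi_free(z, X2i, K2i, tau)
           + hk*psi_free(Xf, X2i, K2i, tau)*psi_free(z, X1i, K1i, tau))
    return np.sum(np.abs(Psi)**2)*(z[1] - z[0])/Nk2

z = np.linspace(-600.0, 1200.0, 9001)      # grid for the second particle
for hk, label in [(1, "bosons"), (-1, "fermions"), (0, "distinguishable")]:
    # conditions (I): X1i = -301, X2i = -299, K1i = K2i = 10, screen at Xf = 450
    print(label, rho(450.0, 75.0, -301.0, 10.0, -299.0, 10.0, hk, z))
\end{verbatim}
Scanning $\tau $ with such a routine can be used to verify numerically the $\tau ^{-1}$ long-time tail discussed above.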
\subsubsection{\protect\bigskip Bosons}
If the two bosons are initially placed such that $X_{1i}=X_{2i}=X_{i}$ and $%
K_{1i}=K_{2i}=K_{i}$ then the phase $\Phi $ vanishes, there is no effect of
interference, and the time dependent density is the same as for a
distinguishable particle. Similarly, if the initial distance between the two
wavepackets is sufficiently large, the interference cross term will vanish and the
result will again reduce to the single particle distinguishable case. The
interesting case is when the two particles are close to each other. If the
initial momenta are identical, that is, $K_{1i}=K_{2i}=K_{i}$ ($\Delta_{K}=K_{2i}-K_{1i}=0$) and the initial coordinates are written as average and difference coordinates
\begin{equation}
X_{i}=\frac{X_{1i}+X_{2i}}{2},\Delta _{X}=X_{2i}-X_{1i} \label{2.17}
\end{equation}%
one finds that the density for finding a boson at the screen position $X$ at time $\tau $ is
\begin{eqnarray}
&&\rho _{B}\left( X;\tau \right) \left( \Delta _{K}=0\right) =\frac{%
\left\vert \Psi \left( X,X_{i},K_{i},\tau \right) \right\vert ^{2}}{\left[
1+\exp \left( -\frac{\Delta _{X}^{2}}{2}\right) \right] }\exp \left( -\frac{%
\Delta _{X}^{2}}{4\left( 1+\tau ^{2}\right) }\right) \notag \\
&&\left[ \cosh \left( \frac{\Delta _{X}\left( X-X_{i}-K_{i}\tau \right) }{%
\left( 1+\tau ^{2}\right) }\right) +\exp \left( -\frac{\Delta _{X}^{2}}{4}%
\right) \cos \left( \frac{\tau \Delta _{X}\left[ \left( X-X_{i}\right) -\tau
K_{i}\right] }{\left( 1+\tau ^{2}\right) }\right) \right] \notag \\
&& \label{2.18}
\end{eqnarray}%
and indeed one may check to see that%
\begin{equation}
\int_{-\infty }^{\infty }dX\rho _{B}\left( X;\tau \right) =1. \label{2.19}
\end{equation}%
The long time limit is%
\begin{equation}
\lim_{\tau \rightarrow \infty }\rho _{B}\left( X;\tau \right) =\frac{%
\left\vert \Psi _{0}\left( X,X_{i},K_{i},\tau \right) \right\vert ^{2}}{%
\left[ 1+\exp \left( -\frac{\Delta _{X}^{2}}{2}\right) \right] }\left[
1+\exp \left( -\frac{\Delta _{X}^{2}}{4}\right) \cos \left( \Delta
_{X}K_{i}\right) \right] \label{2.20}
\end{equation}%
so that as already mentioned, the free-particle time distribution of bosons
decays at long time as $\tau ^{-1}$ just like the single
free particle.
Similarly let us consider two bosons that are initiated at the same point ($%
X_{1i}=X_{2i}=X_{i})$ but with different momenta and use the difference and
mean momenta%
\begin{equation}
K_{i}=\frac{K_{1i}+K_{2i}}{2},\Delta _{K}=K_{2i}-K_{1i} . \label{2.21}
\end{equation}%
In this case
\begin{eqnarray}
&&\rho _{B}\left( X;\tau \right) \left( \Delta _{X}=0\right) =\frac{2}{N^{2}%
\sqrt{\pi \left( 1+\tau ^{2}\right) }}\exp \left( -\frac{\left(
X-X_{i}-K_{i}\tau \right) ^{2}}{\left( 1+\tau ^{2}\right) }-\frac{\Delta
_{K}^{2}\tau ^{2}}{4\left( 1+\tau ^{2}\right) }\right) \notag \\
&&\left[ \cosh \left( \frac{\Delta _{K}\tau \left( X-X_{i}-K_{i}\tau \right)
}{\left( 1+\tau ^{2}\right) }\right) +\exp \left( -\frac{\Delta _{K}^{2}}{4}%
\right) \cos \left( \frac{\Delta _{K}\left[ \tau K_{i}-\left( X-X_{i}\right) %
\right] }{\left( 1+\tau ^{2}\right) }\right) \right] . \notag \\
&& \label{2.23}
\end{eqnarray}%
and this again decays at long times as $\tau ^{-1}$.
\subsubsection{Fermions}
Following the same algebra as in the bosonic case, one finds that the
fermionic density is
\begin{eqnarray}
&&\rho _{F}\left( X;\tau \right) =\frac{1}{N_{F}^{2}}\left( \left\vert \Psi
\left( X,X_{1i},K_{1i},\tau \right) \right\vert ^{2}+\left\vert \Psi \left(
X,X_{2i},K_{2i},\tau \right) \right\vert ^{2}\right) \notag \\
&&-\frac{2}{N_{F}^{2}}\left\vert \Psi \left( X,X_{1i},K_{1i},\tau \right)
\right\vert \left\vert \Psi \left( X,X_{2i},K_{2i},\tau \right) \right\vert
\notag \\
&&\cos \left( \Phi -\frac{\left( X_{2i}-X_{1i}\right) \left(
K_{2i}+K_{1i}\right) }{2}\right) \exp \left( -\frac{\left(
X_{2i}-X_{1i}\right) ^{2}+\left( K_{2i}-K_{1i}\right) ^{2}}{4}\right)
\label{2.24}
\end{eqnarray}%
The normalization is
\begin{equation}
N_{F}^{2}=2\left[ 1-\exp \left( -\frac{\Delta _{X}^{2}+\Delta _{K}^{2}}{2}%
\right) \right] \label{2.25}
\end{equation}%
and it vanishes if $\Delta _{X}=\Delta _{K}=0$, so care must be taken in this
limit, since the numerator also vanishes while the ratio does not. More
specifically, at time $\tau =0$, with $K_{1i}=K_{2i}=K_{i}$ and $\Delta _{X}=X_{2i}-X_{1i}$, we have
\begin{eqnarray}
&&\lim_{\Delta _{X}\rightarrow 0}\Psi \left( X_{1},X_{2};0\right) =-\frac{1}{%
\sqrt{\pi }}\left( X_{2}-X_{1}\right) \notag \\
&&\cdot \exp \left[ iK_{i}\left( X_{1}-X_{1i}\right) +iK_{i}\left(
X_{2}-X_{2i}\right) -\frac{1}{2}\left[ \left( X_{1}-X_{1i}\right)
^{2}+\left( X_{2}-X_{2i}\right) ^{2}\right] \right] \label{2.26}
\end{eqnarray}%
and this vanishes if $X_{1}=X_{2}$. Fermions cannot exist at the same point
in phase space. Note however that
\begin{equation}
\int_{-\infty }^{\infty }dX_{1}\int_{-\infty }^{\infty }dX_{2}\lim_{\Delta _{X}\rightarrow 0}\left\vert \Psi \left( X_{1},X_{2};0\right) \right\vert
^{2}=1 \label{2.27}
\end{equation}%
as it should be. There is no difficulty in preparing an initial wavefunction
in the fermionic case even if both wavepackets are localized around the same
centers both in coordinate and momentum space. The density vanishes at one
point only.
Using the average and difference coordinates as above we readily find that
when the two incident momenta are identical
\begin{eqnarray}
&&\rho _{F}\left( X;\tau \right) \left( \Delta _{K}=0\right) =\frac{%
\left\vert \Psi _{0}\left( X,X_{i},K_{i},\tau \right) \right\vert ^{2}}{%
2\sinh \left( \frac{\Delta _{X}^{2}}{4}\right) }\exp \left( \frac{\tau
^{2}\Delta _{X}^{2}}{4\left( 1+\tau ^{2}\right) }\right) \notag \\
&&\left[ \cosh \left( \frac{\Delta _{X}\left( X-X_{i}-K_{i}\tau \right) }{%
\left( 1+\tau ^{2}\right) }\right) -\exp \left( -\frac{\Delta _{X}^{2}}{4}%
\right) \cos \left( \frac{\tau \Delta _{X}\left[ \left( X-X_{i}\right) -\tau
K_{i}\right] }{\left( 1+\tau ^{2}\right) }\right) \right] . \notag \\
&& \label{2.28}
\end{eqnarray}%
It is also straightforward to see that
\begin{equation}
\int_{-\infty }^{\infty }dX\rho _{F}\left( X;\tau \right) \left( \Delta
_{K}=0\right) =1. \label{2.29}
\end{equation}%
When the (mean) distance between the two particles becomes small
\begin{equation}
\lim_{\Delta _{X}\rightarrow 0}\rho _{F}\left( X;\tau \right) =\left\vert
\Psi _{0}\left( X,X_{i},K_{i},\tau \right) \right\vert ^{2}\left[ \frac{%
\left( X-X_{i}-K_{i}\tau \right) ^{2}}{\left( 1+\tau ^{2}\right) }+\frac{1}{2%
}\right] \label{2.30}
\end{equation}%
and at long times the fermion density is
\begin{equation}
\lim_{\tau \rightarrow \infty }\rho _{F}\left( X;\tau \right) \left( \Delta
_{K}=0\right) =\frac{\left\vert \Psi _{0}\left( X,X_{i},K_{i},\tau \right)
\right\vert ^{2}}{\left[ 1-\exp \left( -\frac{\Delta _{X}^{2}}{2}\right) %
\right] }\left[ 1-\exp \left( -\frac{\Delta _{X}^{2}}{4}\right) \cos \left(
K_{i}\Delta _{X}\right) \right] \label{2.31}
\end{equation}%
and it too decays as $\tau ^{-1}$ as for bosons. The difference is only in
the coefficients.
\subsection{Survival probability and symmetry}
The time-dependent overlap or survival amplitude for a single particle is%
\begin{eqnarray}
S_{j}\left( \tau \right) &=&\langle \Psi \left( X_{ji},K_{ji},0\right) |\Psi
\left( X_{ji},K_{ji},\tau \right) \rangle \notag \\
&=&\sqrt{\frac{2}{\left( 2+i\tau \right) }}\exp \left( -\frac{i\tau
K_{ji}^{2}}{\left( 2+i\tau \right) }\right) . \label{2.32}
\end{eqnarray}%
Analogously, the time-dependent overlap or survival amplitude for the two particle wavefunction is then
\begin{equation*}
\Sigma \left( \tau \right) =\frac{2}{N_{k}^{2}}S_{1}\left( \tau \right)
S_{2}\left( \tau \right) \left[ 1+h_{k}\exp \left( -\frac{\Delta
_{X}^{2}+\Delta _{K}^{2}}{\left( 2+i\tau \right) }\right) \right]
\end{equation*}%
and its square is%
\begin{equation}
\left\vert \Sigma _{k}\left( \tau \right) \right\vert ^{2}=\left\vert
S_{1}\left( \tau \right) S_{2}\left( \tau \right) \right\vert
^{2}O_{k}\left( \tau \right) \label{2.33}
\end{equation}%
with
\begin{eqnarray}
O_{k}\left( \tau \right) & =&\frac{\left[ 1+h_{k}^{2}\exp \left( -\frac{%
4\left( \Delta _{X}^{2}+\Delta _{K}^{2}\right) }{\left( 4+\tau ^{2}\right) }%
\right) +2h_{k}\exp \left( -\frac{2\left( \Delta _{X}^{2}+\Delta
_{K}^{2}\right) }{\left( 4+\tau ^{2}\right) }\right) \cos \left( \frac{%
\left( \Delta _{X}^{2}+\Delta _{K}^{2}\right) }{\left( 4+\tau ^{2}\right) }%
\tau \right) \right] }{\left[ 1+h_{k}\exp \left( -\frac{\Delta
_{X}^{2}+\Delta _{K}^{2}}{2}\right) \right] ^{2}}, \notag \\
k&=&B,F,D. \label{2.34}
\end{eqnarray}
It is then of interest to study this overlap $ O_{k}\left( \tau \right)$ in some limits. First, we note
that when the initial distances between the wavepackets are sufficiently
large, such that $\Delta _{X}^{2}+\Delta _{K}^{2}\gg 1$, then for times
shorter than $\sim \sqrt{\Delta _{X}^{2}+\Delta _{K}^{2}}$, this overlap
function reduces to unity. This is what is expected: when the initial
distance between the particles is large, they behave as independent
distinguishable particles. The interesting case is when the initial
distances between the two wavepackets are small and the interference term is
no longer negligible at short times. For bosons, one finds to leading order
\begin{equation}
\lim_{\Delta _{X},\Delta _{K}\rightarrow 0}O_{B}\left( \tau \right) =1+\frac{%
\left( \Delta _{X}^{2}+\Delta _{K}^{2}\right)
\tau ^{2}}{2\left( 4+\tau
^{2}\right) }\geq 1 =O_{D}\left( \tau \right) \label{2.35}
\end{equation}%
showing that in this limit, the bosonic survival probability is greater than
the distinguishable particle overlap and this is so for all times. For
fermions, though, one has that
\begin{equation}
\lim_{\Delta _{X},\Delta _{K}\rightarrow 0}O_{F}\left( \tau \right) =\frac{4%
}{\left( 4+\tau ^{2}\right) }\leq 1=O_{D}\left( \tau \right) \label{2.37}
\end{equation}%
showing that the fermionic overlap decays faster than the distinguishable
particle case in this limit. In other words, when the distance in phase
space between the centers of the two particles is small, which is the case
when the interference term becomes most important, one finds that the decay
of the overlap of fermions is faster than distinguishable particles which in
turn is faster than bosons. These results are also of interest in the short
time limit, where one finds that%
\begin{equation}
\lim_{\Delta _{X},\Delta _{K},\tau \rightarrow 0}O_{B}\left( \tau \right) =1+%
\frac{\left( \Delta _{X}^{2}+\Delta _{K}^{2}\right) \tau ^{2}}{8}
\label{2.38}
\end{equation}%
\begin{equation}
\lim_{\Delta _{X},\Delta _{K},\tau \rightarrow 0}O_{F}\left( \tau \right)
=\left( 1-\frac{\tau ^{2}}{4}\right) \label{2.39}
\end{equation}%
which indicates that the quantum Zeno effect \cite{Zeno1,Zeno2,Zeno3} would be stronger for bosons
than for fermions, as the fermionic survival probability decays faster also
at short times.
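These limits are easily checked numerically. A minimal Python sketch, evaluating Eq. (\ref{2.34}) and comparing it with the small-separation limits of Eqs. (\ref{2.35}) and (\ref{2.37}) for an illustrative (not taken from the figures) phase-space separation, is:
\begin{verbatim}
import numpy as np

def O_k(tau, dX, dK, hk):
    # symmetry factor of the survival probability, Eq. (2.34)
    d = dX**2 + dK**2
    s = 4.0 + tau**2
    num = 1 + hk**2*np.exp(-4*d/s) + 2*hk*np.exp(-2*d/s)*np.cos(d*tau/s)
    den = (1 + hk*np.exp(-d/2))**2
    return num/den

tau = np.linspace(0.0, 10.0, 6)
dX, dK = 0.2, 0.0                                   # small initial separation
print(O_k(tau, dX, dK, +1))                         # bosons: slightly above unity
print(1 + (dX**2 + dK**2)*tau**2/(2*(4 + tau**2)))  # limit of Eq. (2.35)
print(O_k(tau, dX, dK, -1))                         # fermions: decays below unity
print(4/(4 + tau**2))                               # limit of Eq. (2.37)
\end{verbatim}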
\subsection{\protect\bigskip Free dynamics of relativistic identical particles}
To investigate the relativistic regime, we consider relativistic electrons and photons. The wavepackets describing the bosons -- the photons -- travel dispersion-free at the speed of light. The wavepackets describing the fermions -- the electrons -- are four component spinors with time evolution determined by the Dirac equation. As we consider only free particle motion of two non-interacting (except via particle statistics) electrons, spin is conserved and the wavepackets reduce to two component spinors. Relativistic wavepacket propagation is much like non-relativistic propagation except that the velocity is no longer directly proportional to wavenumber -- the former asymptotes to the speed of light -- and wavepacket broadening is greatly suppressed due to the dispersion relation -- quadratic in the non-relativistic case -- approaching linearity. In particular, the time scale for wavepacket broadening scales with $\gamma^2$, where $\gamma = 1/\sqrt{1-v^2/c^2}$. (See Eq. (2.19) in \cite{dumont2020}.) For example, if $v=0.99c$, broadening takes 50 times longer than it does for non-relativistic velocities.
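Indeed, for $v=0.99c$ one has $\gamma =1/\sqrt{1-0.99^{2}}\simeq 7.1$, so that $\gamma ^{2}\simeq 50$.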
The single free relativistic electron time evolved wavefunction, in the highly accurate (for cases we considered) steepest descent approximation, is
\begin{equation}
\Psi _{j}\left( X,X_{i},K_{i},\tau \right) =\frac{\hat{u}}{\sqrt{\left( 1+i\left(\tau/\gamma^2 \right)
\right) }}\left( \frac{1}{\pi }\right) ^{1/4}\exp \left( -\frac{1}{2}\frac{%
\left[ \left( X_{j}-X_{ji}\right) -iK_{ji}\right] ^{2}}{\left( 1+i\left(\tau/\gamma^2 \right)
\right) }-\frac{K_{ji}^{2}}{2}\right) ,j=1,2, \label{3.1}
\end{equation}%
where $\hat{u}=u/\lVert u \rVert$ and
\begin{equation}
u=\left(
\begin{array}{c}
1 \\
\left(\frac{\hbar \Gamma^{1/2}}{mc} \right) \frac{K_{ji}}{1+\gamma}
\end{array}%
\right)
\end{equation}
is the two-component spinor for a spin-up electron. These single-particle wavepackets are then inserted into the symmetrized two-particle wavefunction, Eq. (\ref{2.6}), for the two photons and the two electrons, respectively.
\bigskip
\bigskip
\renewcommand{\theequation}{3.\arabic{equation}} \setcounter{section}{2} %
\setcounter{equation}{0}
\section{\protect\bigskip Flight times of identical non-relativistic particles scattered by a
delta function barrier}
\subsection{Preliminaries}
The Hamiltonian for the delta function barrier is
\begin{equation}
\hat{H}=-\frac{\hbar ^{2}}{2M}\frac{d^{2}}{dx^{2}}+\varepsilon \delta \left(
x\right)  \label{3.1}
\end{equation}%
with the coupling coefficient $\varepsilon >0$. The eigenfunctions of the
Hamiltonian at energy
\begin{equation}
E=\frac{\hbar ^{2}k^{2}}{2M} \label{3.2}
\end{equation}%
are
\begin{equation}
\psi \left( x\right) =\left(
\begin{array}{c}
\exp \left( ikx\right) +R\left( k\right) \exp \left( -ikx\right) ,\text{ \ \
}x<0 \\
T\left( k\right) \exp \left( ikx\right) ,\text{ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ }x>0%
\end{array}%
\right) \label{3.3}
\end{equation}%
with the reflection amplitude given as
\begin{equation}
R\left( k\right) =\frac{-i\alpha \left( k\right) }{1+i\alpha \left( k\right)
},\text{ \ \ }\alpha \left( k\right) =\frac{M\varepsilon }{\hbar ^{2}k}.
\label{3.4}
\end{equation}%
The transmission amplitude is
\begin{equation}
T\left( k\right) =\frac{1}{1+i\alpha \left( k\right) } \label{3.5}
\end{equation}%
and one readily sees that
\begin{equation}
\left\vert R\left( k\right) \right\vert ^{2}+\left\vert T\left( k\right)
\right\vert ^{2}=1. \label{3.6}
\end{equation}
The phase time delays are defined to be
\begin{equation}
\delta t_{T,R}=\frac{M}{\hbar k}\mathrm{Im}\left( \frac{1}{Y}\frac{\partial Y%
}{\partial k}\right) ,\text{ \ \ }Y=R,T \label{3.7}
\end{equation}%
so that
\begin{equation}
\delta t_{T}=\delta t_{R}=\frac{M}{\hbar k}\frac{\alpha \left( k\right) }{k%
\left[ 1+\alpha ^{2}\left( k\right) \right] }\equiv \delta t. \label{3.8}
\end{equation}%
The phase time then implies that for a repulsive delta function potential ($%
\alpha \left( k\right) >0$) the flight time is lengthened while for an
attractive delta function potential it is shortened. In the limit that the
coupling coefficient $\varepsilon \rightarrow \infty $, which is the
equivalent of a hard wall potential, the transmission amplitude vanishes
while the reflection amplitude goes to $-1$. The reflection time delay
vanishes in this case; the interference of the forward and reflected waves
does not change the reflected phase time delay. For a fixed nonzero value
of $\varepsilon >0$ in the limit that the energy vanishes ($k\rightarrow 0)$
the delay diverges as $k^{-1}$. For an attractive delta function potential,
the flight time is shortened and the reduction diverges as $k^{-1}$. Due to
the zero width of the delta function potential, the dwell time \cite{Mugabook} in the
barrier always vanishes.
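These properties are straightforward to verify numerically. The short Python check below (the value chosen for $M\varepsilon /\hbar ^{2}$ is an arbitrary test value, not a parameter used elsewhere in this work) confirms Eq. (\ref{3.6}) and recovers the phase time delay of Eq. (\ref{3.8}), in units of $M/\hbar $, by differentiating the phases of $R$ and $T$:
\begin{verbatim}
import numpy as np

a = 0.5                                              # stands for M*eps/hbar^2 (test value)
alpha = lambda k: a/k                                # alpha(k) of Eq. (3.4)
R = lambda k: -1j*alpha(k)/(1 + 1j*alpha(k))         # reflection amplitude, Eq. (3.4)
T = lambda k: 1.0/(1 + 1j*alpha(k))                  # transmission amplitude, Eq. (3.5)

k, dk = 2.0, 1e-6
print(abs(R(k))**2 + abs(T(k))**2)                   # unitarity, Eq. (3.6): equals 1

# (hbar/M)*delta_t = (1/k) d arg(Y)/dk, evaluated by central differences
for Y in (R, T):
    darg = (np.angle(Y(k + dk)) - np.angle(Y(k - dk)))/(2*dk)
    print(darg/k, alpha(k)/(k**2*(1 + alpha(k)**2))) # both columns agree, Eq. (3.8)
\end{verbatim}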
It is worthwhile here also to consider the imaginary time defined as \cite{Pollak1984}
\begin{equation}
t_{im,R,T}=\hbar \text{Re}\left( \frac{1}{Y}\frac{\partial Y}{\partial E}%
\right) ,\text{ \ \ }Y=R,T \label{3.9}
\end{equation}%
so that the transmitted imaginary time is positive%
\begin{equation}
t_{im,T}=\frac{M\alpha ^{2}\left( k\right) }{\hbar k^{2}\left( 1+\alpha
^{2}\left( k\right) \right) } \label{3.10}
\end{equation}%
while the reflected imaginary time is negative%
\begin{equation}
t_{im,R}=-\frac{M}{\hbar k^{2}\left( 1+\alpha ^{2}\left( k\right) \right) }.
\label{3.11}
\end{equation}%
As the momentum increases, the transmission probability increases while the
reflection probability decreases, so that the transmitted imaginary time is
positive, and the reflected is negative.
In reduced variables (with $\epsilon_i=\alpha(k)$), the phase time delay takes the simple form%
\begin{equation}
\delta \tau \left( K_{i}\right) =\frac{\epsilon _{i}}{K_{i}^{2}\left(
1+\epsilon _{i}^{2}\right) }
\end{equation}%
while the imaginary time delays are
\begin{equation}
\tau _{im,T}\left( K_{i}\right) =\frac{\epsilon _{i}^{2}}{K_{i}^{2}\left(
1+\epsilon _{i}^{2}\right) }
\end{equation}%
and%
\begin{equation}
\tau _{im,R}\left( K_{i}\right) =-\frac{1}{K_{i}^{2}\left( 1+\epsilon
_{i}^{2}\right) }.
\end{equation}%
The transmission and reflection probabilities become
\begin{equation}
\left\vert T\left( K_{i}\right) \right\vert ^{2}=\frac{1}{1+\epsilon _{i}^{2}%
},\left\vert R\left( K_{i}\right) \right\vert ^{2}=\frac{\epsilon _{i}^{2}}{%
1+\epsilon _{i}^{2}}.
\end{equation}%
\subsubsection{The single-particle dynamics: momentum filtering}
Initially we consider a Gaussian wavepacket as in Eq. \ref{2.3}
\begin{equation}
\Psi \left( x,0\right) =\left( \frac{\Gamma }{\pi }\right) ^{1/4}\exp \left[
-\frac{\Gamma }{2}\left( x-x_{i}\right) ^{2}+ik_{i}\left( x-x_{i}\right) %
\right] \label{3.12}
\end{equation}%
whose momentum representation is
\begin{equation}
\Psi \left( k,0\right) =\left( \frac{1}{\pi \Gamma }\right) ^{1/4}\exp
\left( -\frac{\left( k-k_{i}\right) ^{2}}{2\Gamma }-ikx_{i}\right) .
\label{3.13}
\end{equation}%
The time dependent wavepacket in the transmitted region is
\begin{equation}
\Psi _{T}\left( x,t\right) =\int_{-\infty }^{\infty }\frac{dk}{\sqrt{2\pi }}%
\Psi \left( k,0\right) T\left( k\right) \exp \left( ikx-i\frac{\hbar k^{2}}{%
2M}t\right) ,x\geq 0 \label{3.14}
\end{equation}%
and in the reflected region is%
\begin{equation}
\Psi _{R}\left( x,t\right) =\int_{-\infty }^{\infty }\frac{dk}{\sqrt{2\pi }}%
\exp \left( -i\frac{\hbar k^{2}}{2M}t\right) \Psi \left( k,0\right) \left[
\exp \left( ikx\right) +R\left( k\right) \exp \left( -ikx\right) \right]
,x\leq 0 \label{3.15}
\end{equation}
Using the reduced variables as in Eq. \ref{2.4} and the reduced delta
function coupling variable
\begin{equation}
\epsilon =\frac{M\varepsilon }{\hbar ^{2}\sqrt{\Gamma }} \label{3.16}
\end{equation}%
and carrying out the momentum integrations in Eqs. \ref{3.14} and \ref{3.15}
one finds that the transmitted time-dependent wavepacket ($X\geq 0$) is
\begin{eqnarray}
&&\Psi _{T}\left( X,\tau \right) =\Psi _{fp}\left( X,\tau \right) \notag \\
&&\left[ 1-\epsilon \frac{\sqrt{\pi \left( 1+i\tau \right) }}{\sqrt{2}}\exp %
\left[ -\frac{\left( 1+i\tau \right) }{2}\left( Z_{T}+i\epsilon \right) ^{2}%
\right] \mathrm{{erf}c}\left( -\frac{i\sqrt{\left( 1+i\tau \right) }}{\sqrt{2%
}}\left( Z_{T}+i\epsilon \right) \right) \right] . \label{3.17}
\end{eqnarray}%
where $\Psi _{fp}\left( X,\tau \right) $ is the free particle time-dependent
wavepacket as in Eq. \ref{2.13}, $\mathrm{erfc}$ is the complementary
error function, and
\begin{equation}
Z_{T}=\frac{\left[ K_{i}+i\left( X-X_{i}\right) \right] }{\left( 1+i\tau
\right) }. \label{3.18}
\end{equation}%
The reflected time-dependent wavefunction ($X\leq 0$) is
\begin{eqnarray}
&&\Psi _{R}\left( X,\tau \right) =\Psi _{fp}\left( X,\tau \right) \notag \\
&&-\Psi _{fp}\left( -X,\tau \right) \epsilon \frac{\sqrt{\pi }}{\sqrt{2}}%
\sqrt{\left( 1+i\tau \right) }\exp \left( -\frac{\left( 1+i\tau \right) }{2}%
\left( i\epsilon +Z_{R}\right) ^{2}\right) \mathrm{\mathrm{{erf}c}}\left( -i%
\sqrt{\frac{\left( 1+i\tau \right) }{2}}\left( i\epsilon +Z_{R}\right)
\right) \notag \\
&& \label{3.19}
\end{eqnarray}%
with
\begin{equation}
Z_{R}=\frac{\left[ K_{i}-i\left( X+X_{i}\right) \right] }{\left( 1+i\tau
\right) }. \label{3.20}
\end{equation}
In practice, if the incident (reduced) momentum is sufficiently large, which
will be the case in all of our computations, and since we will be using small
momentum variances, one may safely replace the complementary error function
with its asymptotic expansion so that to leading order
\begin{equation}
\Psi _{T}\left( X,\tau \right) \simeq \Psi _{fp}\left( X,\tau \right) \left[
\frac{Z_{T}}{\left( Z_{T}+i\epsilon \right) }\right] ,X\geq 0 \label{3.21}
\end{equation}%
and
\begin{equation}
\Psi _{R}\left( X,\tau \right) \simeq \Psi _{fp}\left( X,\tau \right) -\Psi
_{fp}\left( -X,\tau \right) \frac{\epsilon }{\left( \epsilon -iZ_{R}\right) }%
,X\leq 0. \label{3.22}
\end{equation}%
Eqs. \ref{3.21} and \ref{3.22} will be the `workhorses' for the numerical
implementations below; we stress that we have checked the validity of the
asymptotic expansion and that it is quantitatively accurate for the conditions used here. To
see the long time limit we note that
\begin{equation}
\left\vert \frac{Z_{T}}{\left( Z_{T}+i\epsilon \right) }\right\vert ^{2}=%
\frac{\left[ K_{i}^{2}+\left( X-X_{i}\right) ^{2}\right] }{\left[ \left(
K_{i}-\epsilon \tau \right) ^{2}+\left( X-X_{i}+\epsilon \right) ^{2}\right]
} \label{3.23}
\end{equation}
and this goes as $\tau ^{-2}$ in the long time limit so that the transmitted
single particle density decays as $\tau ^{-3}$. The calculation is a bit
more involved for the reflected density but it also decays as $\tau ^{-3}$.
In contrast to the free particle, due to the potential, the mean flight time
(Eqs. \ref{2.10} and \ref{2.11}) is well-defined.
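As an illustration of these statements, the following Python sketch evaluates the transmitted density of Eq. (\ref{3.21}) at a screen placed at $X_{f}=450$ for a single particle launched from $X_{i}=-300$ with $K_{i}=10$ and $\epsilon =1$ (the parameters used in the numerical section below), normalizes it according to Eq. (\ref{2.10}) and computes the mean flight time of Eq. (\ref{2.11}). It also compares, at a single value of $\tau $, the asymptotic form with a direct numerical evaluation of the momentum integral of Eq. (\ref{3.14}) written in reduced variables; the grids are illustrative choices, not those of the original computations:
\begin{verbatim}
import numpy as np

def psi_fp(X, Xi, Ki, tau):
    # free-particle wavepacket in reduced units, Eq. (2.13)
    a = 1.0 + 1j*tau
    return (1/np.pi)**0.25/np.sqrt(a)*np.exp(-0.5*((X - Xi) - 1j*Ki)**2/a - Ki**2/2)

def psi_T(X, Xi, Ki, tau, eps):
    # leading-order transmitted wavepacket (X >= 0), Eqs. (3.18) and (3.21)
    ZT = (Ki + 1j*(X - Xi))/(1.0 + 1j*tau)
    return psi_fp(X, Xi, Ki, tau)*ZT/(ZT + 1j*eps)

Xi, Ki, eps, Xf = -300.0, 10.0, 1.0, 450.0
tau = np.linspace(0.0, 400.0, 40001)
dtau = tau[1] - tau[0]
dens = np.abs(psi_T(Xf, Xi, Ki, tau, eps))**2
P = dens/(np.sum(dens)*dtau)               # arrival-time distribution, Eq. (2.10)
print(np.sum(tau*P)*dtau)                  # mean flight time, Eq. (2.11); close to (Xf-Xi)/Ki = 75

# brute-force check of Eq. (3.14), in reduced variables, at tau = 75
K = np.linspace(5.0, 15.0, 20001)
phiK = (1/np.pi)**0.25*np.exp(-(K - Ki)**2/2 - 1j*K*Xi)    # Eq. (3.13) in reduced units
integrand = phiK*K/(K + 1j*eps)*np.exp(1j*K*Xf - 1j*75.0*K**2/2)
exact = np.sum(integrand)*(K[1] - K[0])/np.sqrt(2*np.pi)
print(abs(exact)**2, abs(psi_T(Xf, Xi, Ki, 75.0, eps))**2) # should nearly coincide
\end{verbatim}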
We can now rewrite the density as
\begin{equation}
\left\vert \Psi _{T}\left( X,\tau \right) \right\vert ^{2}\simeq \frac{%
\left( 1+\tau _{fp}^{2}\right) }{\sqrt{\pi \left( 1+\tau ^{2}\right) }}\exp
\left( -\frac{K_{i}^{2}\left[ \tau _{fp}-\tau \right] ^{2}}{\left( 1+\tau
^{2}\right) }-\ln \left[ \left( \epsilon _{i}\tau -1\right) ^{2}+\left( \tau
_{fp}+\epsilon _{i}\right) ^{2}\right] \right)
\end{equation}%
and ask when the exponent is maximized as a function of the reduced time $%
\tau $. Defining
\begin{equation}
G\left( \tau \right) =\frac{K_{i}^{2}\left[ \tau _{fp}-\tau \right] ^{2}}{%
\left( 1+\tau ^{2}\right) }+\ln \left[ \left( \epsilon _{i}\tau -1\right)
^{2}+\left( \tau _{fp}+\epsilon _{i}\right) ^{2}\right]
\end{equation}%
we note that%
\begin{equation}
\frac{dG\left( \tau \right) }{d\tau }=-2\frac{K_{i}^{2}\left[ \tau
_{fp}-\tau \right] }{\left( 1+\tau ^{2}\right) }-2\tau \frac{K_{i}^{2}\left[
\tau _{fp}-\tau \right] ^{2}}{\left( 1+\tau ^{2}\right) ^{2}}+\frac{\epsilon
_{i}2\left( \epsilon _{i}\tau -1\right) }{\left[ \left( \epsilon _{i}\tau
-1\right) ^{2}+\left( \tau _{fp}+\epsilon _{i}\right) ^{2}\right] }.
\end{equation}%
Setting the derivative equal to zero
\begin{equation}
\frac{K_{i}^{2}\left[ \tau _{fp}-\tau \right] }{\left( 1+\tau ^{2}\right) }%
+\tau \frac{K_{i}^{2}\left[ \tau _{fp}-\tau \right] ^{2}}{\left( 1+\tau
^{2}\right) ^{2}}=\frac{\epsilon _{i}\left( \epsilon _{i}\tau -1\right) }{%
\left[ \left( \epsilon _{i}\tau -1\right) ^{2}+\left( \tau _{fp}+\epsilon
_{i}\right) ^{2}\right] }
\end{equation}%
and looking for a solution
\begin{equation}
\tau =\tau _{fp}\left( 1-\Delta \tau \right)
\end{equation}%
and assuming that $\Delta \tau \ll 1$ and remembering that $\tau _{fp}\gg 1$
leads to the solution%
\begin{equation}
\Delta \tau \simeq \frac{\epsilon _{i}^{2}}{K_{i}^{2}\left[ 1+\epsilon
_{i}^{2}\right] }=\tau _{im,T}\left( K_{i}\right)
\end{equation}%
and this is precisely the momentum filtering effect. Due to the increase of the transmission probability with energy, the high-energy components of the incident wavepacket are preferentially transmitted, so that the flight time is reduced \cite{Filinov}.
\subsubsection{Two-particle dynamics}
The composite initial wavefunction of the two particles is given by Eq. (\ref{2.6}) and the time evolved wavefunctions by Eq. (\ref{2.9}) for free particle evolution. For the $\delta$-tunneling dynamics, the initial wave function is the same but Eq. (\ref{2.9}) has to be replaced by
\begin{equation}\label{3.34}
\Psi _{k,T}\left( X, z, \tau \right) = \Psi _{1,T}\left( X,\tau \right) \left[\Psi _{2,T}\left(z,\tau \right)+\Psi _{2,R}\left(z,\tau \right)\right] + h_k \, \Psi _{2,T}\left( X,\tau \right) \left[\Psi _{1,T}\left(z,\tau \right)+\Psi _{1,R}\left(z,\tau \right)\right] \,
\end{equation}
for the total transmitted wavefunction and
\begin{equation}\label{3.35}
\Psi _{k,R}\left( X, z, \tau \right) = \Psi _{1,R}\left( X,\tau \right) \left[\Psi _{2,T}\left(z,\tau \right)+\Psi _{2,R}\left(z,\tau \right)\right] + h_k \, \Psi _{2,R}\left( X,\tau \right) \left[\Psi _{1,T}\left(z,\tau \right)+\Psi _{1,R}\left(z,\tau \right)\right] \,
\end{equation}
for the total reflected wavefunction, $ \Psi _{1,T}$, $ \Psi _{2,T}$, $ \Psi _{1,R}$ and $ \Psi _{2,R}$ being the
transmitted and reflected wave functions for each particle when considered to be independent. The one-particle mean flight time is given by
\begin{equation}
\left\langle \tau_k \right\rangle _{T,R}=\int_{0}^{\infty }d\tau \, \tau \, P_{k;T,R}\left( X_f, \tau \right) , \label{3.36}
\end{equation}%
where the probability distribution is
\begin{equation}
P_{k;T,R}\left( X_f, \tau \right) =\frac{\int_{-\infty }^{\infty }dz\left\vert \Psi
_{k;T,R}\left( X_f,z;\tau \right) \right\vert ^{2}}{\int_{0}^{\infty }d\tau
\int_{-\infty }^{\infty }dz\left\vert \Psi _{k;T,R}\left( X_f,z,\tau \right)
\right\vert ^{2}} \label{3.37}
\end{equation}%
with $\, k=D, B, F$. The screen is located at $X=\pm X_f$ depending on whether we are considering the transmitted (plus sign) or reflected (minus sign) total wave function. As mentioned above,
these distributions and mean times are well defined under the presence of a potential since the density decays at long times
as $\tau ^{-3}$.
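The transmitted one-particle mean flight time of Eq. (\ref{3.36}) can be evaluated in practice along the lines of the following Python sketch, which uses the leading-order wavefunctions of Eqs. (\ref{3.21}) and (\ref{3.22}) and the conditions (I) of the next section; the grids and cutoffs are illustrative choices, and the overall normalization $N_{k}$, which cancels in Eq. (\ref{3.37}), is omitted:
\begin{verbatim}
import numpy as np

def psi_fp(X, Xi, Ki, tau):
    # free wavepacket in reduced units, Eq. (2.13)
    a = 1.0 + 1j*tau
    return (1/np.pi)**0.25/np.sqrt(a)*np.exp(-0.5*((X - Xi) - 1j*Ki)**2/a - Ki**2/2)

def psi_T(X, Xi, Ki, tau, eps):
    # transmitted wavepacket (X >= 0), leading order, Eq. (3.21)
    ZT = (Ki + 1j*(X - Xi))/(1.0 + 1j*tau)
    return psi_fp(X, Xi, Ki, tau)*ZT/(ZT + 1j*eps)

def psi_R(X, Xi, Ki, tau, eps):
    # wavefunction in the reflection region (X <= 0), leading order, Eq. (3.22)
    ZR = (Ki - 1j*(X + Xi))/(1.0 + 1j*tau)
    return psi_fp(X, Xi, Ki, tau) - psi_fp(-X, Xi, Ki, tau)*eps/(eps - 1j*ZR)

def psi_full(z, Xi, Ki, tau, eps):
    # full scattered one-particle wavefunction: reflection region for z < 0, transmission for z >= 0
    out = np.empty(z.shape, dtype=complex)
    pos = z >= 0
    out[pos] = psi_T(z[pos], Xi, Ki, tau, eps)
    out[~pos] = psi_R(z[~pos], Xi, Ki, tau, eps)
    return out

# conditions (I): X1i = -301, X2i = -299, K1i = K2i = 10, eps = 1, screen at Xf = +450
X1i, X2i, K1i, K2i, eps, Xf = -301.0, -299.0, 10.0, 10.0, 1.0, 450.0
z = np.linspace(-1800.0, 1800.0, 7201)
taus = np.arange(0.0, 150.0, 0.1)
dz, dtau = z[1] - z[0], taus[1] - taus[0]

for hk, label in [(1, "bosons"), (-1, "fermions"), (0, "distinguishable")]:
    dens = np.empty_like(taus)
    for n, tau in enumerate(taus):
        # transmitted two-particle amplitude of Eq. (3.34), up to normalization
        Psi = (psi_T(Xf, X1i, K1i, tau, eps)*psi_full(z, X2i, K2i, tau, eps)
               + hk*psi_T(Xf, X2i, K2i, tau, eps)*psi_full(z, X1i, K1i, tau, eps))
        dens[n] = np.sum(np.abs(Psi)**2)*dz
    P = dens/(np.sum(dens)*dtau)                 # Eq. (3.37)
    print(label, np.sum(taus*P)*dtau)            # transmitted mean flight time, Eq. (3.36)
\end{verbatim}
The values obtained in this way can be compared with the transmitted entries of Table \ref{table1}.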
\renewcommand{\theequation}{4.\arabic{equation}} \setcounter{section}{3} %
\setcounter{equation}{0}
\section{Numerical Results}
\subsection{Free particle non-relativistic flight times}
To analyze the role played by the symmetry of the total wave function in systems of identical particles when considering free dynamics
and tunneling from a delta barrier, we have identified two types of initial conditions (in reduced coordinates): (I) different locations with the same
momenta, $X_{1i}=-301$, $X_{2i}= -299$ with $K_{1i}=K_{2i}=10$ and (II) the same locations with different momenta, $X_{1i}=X_{2i}= -300$
with $K_{1i}=10.1$ and $K_{2i}=9.9$. When the differences in initial positions
and momenta differ more, the cross terms in the density of the two-particle system become smaller, and one rapidly reaches the
distinguishable particle limit. Unless otherwise stated, we will assume $\Gamma = 0.01$ for the initial width of the Gaussian functions and
$\epsilon = 1$, for the strength of the coupling to the delta barrier. In all cases, the position of the screen is at $X_f= \pm 450$ and the delta barrier is located
at the origin.
\begin{figure}
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{Initial-distribution1-eps-converted-to.pdf}
\label{fig1a}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{Time-distribution1-eps-converted-to.pdf}
\label{fig1b}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{Initial-distribution2-eps-converted-to}
\label{fig1c}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{Time-distribution2-eps-converted-to.pdf}
\label{fig1d}
\end{subfigure}
\caption{Non-relativistic free particle dynamics. The left panels show the initial spatial distributions, the right panels the relative one-particle flight time distributions, as defined in Eq. \ref{2.12}, for particles hitting the screen located at $X_f=450$. The other parameters used are $\Gamma= 0.01$ for the initial width of the coherent states; for case (I) (initial spatial difference) $X_{1i}=-301$, $X_{2i}= -299$, $K_{1i}=K_{2i}=10$, left and right top panels; and for case (II) (initial momentum difference) $X_{1i}=X_{2i}= -300$, $K_{1i}=10.1$, $K_{2i}=9.9$, left and right bottom panels.} \label{fig1}
\end{figure}
\begin{figure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{O-function1-eps-converted-to.pdf}
\label{fig2a}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{O-function2-eps-converted-to.pdf}
\label{fig2b}
\end{subfigure}
\caption{Overlap decay (O-function) defined in Eq. \ref{2.34} for non-relativistic free particles. The left and right panels are for initial spatial (case I) and momentum (case II) differences, respectively. The initial width of the Gaussians here is taken to be $\Gamma=0.01$. } \label{fig2}
\end{figure}
In Figure \ref{fig1}, we plot the initial spatial distributions (left panels) and the one-particle flight time distributions (right panels; see Eq. \ref{2.12}) for the two sets of initial conditions. The top two panels are for case (I) (initial spatial difference), the bottom two are for case (II) (initial momentum difference).
The solid red curve is used for distinguishable particles, the
long-dashed blue curve for bosons and the dot-dashed brown curve for fermions. The one-particle flight time distribution is broader for fermions, displaying both early and late arrivals. Bosons tend to arrive later than distinguishable particles, showing the narrowest time distribution.
Furthermore, the flight time distributions of distinguishable particles
and bosons tend to have a similar shape, losing the two lobes of the initial density, whereas
the fermions display a double-peaked time distribution, reflecting their initial
density. In the bottom-right panel, bosons and distinguishable particles behave essentially identically,
whereas fermions display a bimodal time distribution. Fermions not only arrive at the screen earlier and later; there is also a distinctive time asymmetry
in the flight time distribution.
We attribute these different behaviors to the bunching and anti-bunching properties of bosons and fermions, respectively. Specifically, the early arrival of fermions at the screen is related to the `front' of the initial fermionic density distribution, which, due to the anti-bunching effect, is closer to the origin than in the case of bosons and distinguishable particles. The same is true for the portion of the fermionic flight time distribution which arrives later at the screen. It is due to the back of the initial fermionic density, which is further from the origin when compared to bosons or distinguishable particles.
In a previous work where we studied the MacColl-Hartman effect we have argued against the `front' of the wavepacket being used to explain away supposedly superluminal propagation for \textit{tunneling} particles \cite{dumont2020,rivlin2020}, noting that the superluminality cannot be used for the purpose of early signaling. When considering the MacColl-Hartman effect, one is comparing final time distributions of tunneled particles with free particles, but the two have the same initial density distribution. In the case considered here, the initial wavepacket of the fermions is broadened when compared to that of the bosons. It is this broadening which leads to early and late arrival times of fermions as compared to bosons, meaning that one does not have to consider here the possibility of superluminality.
Another interesting aspect considered here is the analysis of the survival amplitude, using Eq. (\ref{2.34}), its limits for small initial spatial ($\Delta_X$) and momentum ($\Delta_K$) differences, and the associated time dependence, as in Eqs. (\ref{2.38}) and (\ref{2.39}). In Figure \ref{fig2} we plot the overlap functions (O-function, see Eq. \ref{2.34}) for initial conditions (I) (initial spatial difference) in the left panel and for case (II) (initial momentum difference) in the right panel.
As in Fig. \ref{fig1}, the red line indicates the behavior for distinguishable particles, the long-dashed blue curve is used for bosons and the dot-dashed brown curve for
fermions. Dotted black and green curves correspond to the limits of small values of $\Delta_X$, $\Delta_K$ and time. The time dependence of the corresponding overlap functions is quite different for bosons and fermions: the bosonic survival probability decays slowly, whereas the fermionic one decays rapidly.
\subsection{Free particle relativistic flight time distributions}
Figure \ref{fig3} shows the time-dependent density at the screen (at $X=0$) for two photons and two electrons traveling near the speed of light. In the left panel, the two Gaussians are centered at $X_{1i}=-3.5$ and $X_{2i}=-3$ and at a wavenumber consistent with the velocity $v=0.99c$. In the right panel, the two Gaussians are centered at $X_{1i}=X_{2i}=-3$ and at wavenumbers consistent with $v=0.984c$ and $0.996c$. In both cases, an electron is more likely to arrive at the screen before a photon. However, this is simply because the initial density for the electrons is broader than that of the photons. To show this, the density that would be seen if the electrons traveled dispersion-free at the speed of light is also shown (dotted lines). The observed electron density clearly travels with a speed less than $c$. The early and late arrivals of the fermions are just a reflection of their initial density, which, as may be seen from the left panels of Fig. \ref{fig1}, is broader than the initial distribution for bosons, due to the anti-bunching effect of fermions. The early arrival times are a reflection of the initial width of the packet -- this is the same for both the relativistic and the non-relativistic regimes.
\begin{figure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{zm60m70vp99p99w20.png}
\label{fig3a}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{zm60m60vp984p996w20.png}
\label{fig3b}
\end{subfigure}
\caption{\noindent Time-dependent densities for two photons (dashed lines) and two relativistic electrons (solid lines). The left and right panels are for initial spatial (case I) and momentum (case II) differences, respectively. The initial width of the Gaussians here is taken to be $\Gamma=0.0025$. Also shown (dotted lines) are the densities that would be seen if the electrons traveled dispersion-free at the speed of light.} \label{fig3}
\end{figure}
\subsection{Identical particle, non-relativistic flight times for scattering on a delta potential barrier}
In this case, as noted above, the mean flight times (Eq. \ref{3.36}) are well defined, and they provide a clear measure of the effect of particle symmetry on the flight time.
In Table \ref{table1}, we provide the transmitted and reflected mean flight times (in reduced coordinates) for bosons $<\tau_{1,B}>_{T,R}$ and
fermions $<\tau_{1,F}>_{T,R}$ with $\Gamma=0.01$ and $\epsilon=1$ for the two cases of initial spatial (I) and momentum (II) differences. Transmitted mean times
are always shorter than reflected mean times due to the momentum filtering effect, and those of fermions are always greater than those of bosons due to their anti-bunching
and bunching properties, respectively.
\begin{table}
\caption{Initial conditions and transmitted (T) and reflected (R) mean flight times (in reduced coordinates) for bosons, $<\tau_{1,B}>_{T,R}$, and fermions, $<\tau_{1,F}>_{T,R}$, with $\Gamma=0.01$ and $\epsilon=1$.}
\begin{tabular}{||c|c|c|c||c|c||c|c||}
\hline $X_{1i}$& $X_{2i}$ & $X_f$ & $K_{1i}, K_{2i}$ & $<\tau_{1,B}>_T$ & $<\tau_{1,B}>_R $ & $<\tau_{1,F}>_T$ & $<\tau_{1,F}>_R $ \\
%
\hline -301 & -299 & $\pm$ 450 & 10, 10 &75.2908 & 75.8847 & 75.4992 & 76.5237 \\
%
\hline -300 & -300& $\pm$ 450 & 10.1, 9.9 &75.3749 & 76.1740 & 76.7417 &77.3525 \\
%
\hline
\hline
%
%
%
\end{tabular}
\label{table1}
\end{table}
The corresponding flight time probability distributions (Eq. \ref{3.37}) are shown in Fig. \ref{fig4}.
The top and bottom panels correspond to initial spatial (case I) and initial momentum (case II) differences, the left and right panels correspond to the transmitted and reflected flight time distributions, respectively. The trends are similar to those found in the free particle dynamics scenario. The effect of symmetry on flight times seems to be robust and is not changed much in the presence of an interaction barrier. Time distributions are broadest for fermions and narrowest for bosons, with distinguishable particles in between. The asymmetry of the bimodal reflected distributions for fermions becomes less pronounced when compared with the transmitted ones.
\begin{figure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{Transmitted-distribution1-eps-converted-to.pdf}
\label{fig1a}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{Reflected-distribution1-eps-converted-to.pdf}
\label{fig1b}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{Transmitted-distribution2-eps-converted-to.pdf}
\label{fig1c}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{Reflected-distribution2-eps-converted-to.pdf}
\label{fig1d}
\end{subfigure}
\caption{Tunneling $\delta$-barrier dynamics with $\Gamma= 0.01$. One-particle transmitted and reflected flight time distributions for conditions (I) are shown in the left and right top panels and for conditions (II) in the left and right bottom panels. } \label{fig4}
\end{figure}
Finally, it is also of interest to analyze the role played by the initial width parameter ($\Gamma$) of the Gaussian wavepackets in the mean tunneling flight times, which are well defined, especially when $\Gamma$ approaches zero such that the spatial extent of the two Gaussians is large, which creates large initial overlaps. This analysis is carried out by fitting the mean times obtained from the numerics to a linear function ($b \Gamma + c$) for eight values of the initial width,
$\Gamma= 10^{-2}, 0.25 \times 10^{-2}, 9.0 \times 10^{-4}, 4.0 \times 10^{-4}, 10^{-4}, 0.25 \times 10^{-4}, 9.0 \times 10^{-6}, 4.0 \times 10^{-6}$.
In Figure \ref{fig5}, the transmitted (left panel) and reflected (right panel) mean flight times versus $\Gamma$ are plotted for distinguishable particles, bosons and fermions.
The legend is the same as used throughout this work for each particle. In every case, the quality of the fit is very good. All particles tend to the value $c= 75.005$, which is the phase time under conditions (I). Thus, the symmetry seems to play no role in the mean tunneling flight times, since the phase time for a single particle is recovered in the limit $\Gamma \rightarrow 0$.
\begin{figure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{GammaT1-eps-converted-to.pdf}
\label{fig2a}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{GammaR1-eps-converted-to.pdf}
\label{fig2b}
\end{subfigure}
\caption{One-particle transmitted (left panel) and reflected (right panel) mean flight times versus $\Gamma$ for distinguishable particles, bosons and fermions under conditions (I). The legend is the same as used throughout this work for each particle.} \label{fig5}
\end{figure}
\section{Discussion}
In this work, we have analyzed the influence of the symmetry of the wavefunction for a system consisting of two non-interacting identical particles. The well-known bunching and anti-bunching properties exhibited by bosons and fermions, respectively, are reflected in the spreading of the initial spatial densities and of the flight time distributions, with or without an interaction potential such as a delta barrier, leading to early (and late) arrivals for fermions. Interestingly, the effect of the symmetry of the wave function on the flight time distributions seems to be robust, but the symmetry does not affect the mean tunneling flight times. Analyzing the short-time dynamics through the survival probability, we find that fermions tend to decay faster than bosons, which has profound implications for the well-known Zeno effect.
In our model, we have considered non-interacting identical particles. This is clearly an oversimplification of the real problem since, for example, electrons interact with each other through the long-range Coulomb repulsion. As far as we know, very few studies have addressed the issue of the effect of symmetry on flight times and the Zeno effect, even though it should be readily accessible. For example, the triplet states of two electrons give an antisymmetric (fermionic) spatial wavefunction while the singlet state gives a symmetric (bosonic) one. In a scattering experiment similar to the one discussed in Refs. \cite{grossmann2014,buchholz2018}, the two particles in the center of mass frame will approach each other such that the distance between the two particles becomes small and the symmetry will affect the time-of-flight distribution of the two particles. Due to the anti-bunching property of fermions, one should expect broader temporal distributions of the scattered particles.
Lozovik {\it et al.} \cite{Filinov2} considered tunneling of two interacting particles in a double-well potential. They used quantum molecular dynamics within the Wigner representation and found that exchange effects are very important and affect the tunneling. However, this work does not mention flight time distributions. It is thus possible, at least in principle, to study the effect of particle symmetry on flight time distributions of identical electrons, without neglecting the repulsive interaction between them, though the actual numerical implementation, especially in the relativistic regime, is much more challenging.
Finally, we note that the study of symmetry on flight time distributions presented in this paper may be generalized to anyons, by introducing a phase in the initial distribution.
\vspace{1cm}
\noindent
{\bf Acknowledgements}
TR and EP acknowledge support from the Israel Science Foundation, SMA acknowledges support from the Ministerio de Ciencia, Innovaci\'on y Universidades (Spain)
under the Project FIS2017-83473-C2-1-P and Fundaci\'on Humanismo y Ciencia.
\bigskip
\section{Introduction}
It is well understood that the symmetry of indistinguishable particles has a
profound influence on their dynamics. A feature which is well documented is
the ``bunching" of bosons \cite{brown1956,jeltes2007} and ``anti-bunching" of
fermions \cite{henny1999,oliver1999,kiesel2002,ianuzzi2006,rom2006}.
Consider two identical particles, each described for simplicity by an
initial Gaussian wavepacket. When the two Gaussians of the two particles are
located sufficiently far from each other in phase space, there is no overlap between them
and the symmetry of particles plays no role. The fermions and bosons may be
considered as two independent distinguishable particles. However, when they
come close the symmetry leads to important consequences. Bosons, whose overall function is
symmetric with respect to exchange may overlap with each other, hence the interference term
`increases' the density, causing the ``bunching'' phenomenon. Fermions
on the other hand, due to the anti-symmetry, cannot be located at the same
place and the `hole' in the distribution created by the overlap term creates
a distancing between the particles, which is understood as the
``anti-bunching" effect.
These effects show up also in the temporal dynamics \cite%
{grossmann2014,buchholz2018}. Consider the scattering of two
indistinguishable particles on each other and the relative distance
(squared) between them as a function of time \cite{grossmann2014}. As they
come closer to each other the distance is reduced and as they move again
away it increases. Yet, when comparing such scattering with the exact same
potential, incident energy, etc. of bosons and fermions, one finds that the
distance between the fermions as they separate is larger than that of bosons
-- another reflection of the bunching and anti-bunching phenomenon \cite%
{grossmann2014,buchholz2018}. Some researchers have tried to describe the
repulsion of fermions in terms of an artificial repulsive potential -- the ``Pauli potential". \cite{wilets1977,dorso1987,boal1988,latora1994,gu2016} In statistical mechanics, this situation leads to the so-called statistical interparticle potential which is temperature-dependent since
it is related to what is known as the mean thermal wavelength or thermal de Broglie wavelength. \cite{Pathria} One then speaks about statistical
attraction and repulsion for bosons and fermions, respectively.
To the best of our knowledge the effect of symmetry on flight time
distributions \cite{petersen2017,petersen2018,rivlin2020,ianconescu2021} has
not been addressed.
The central objective of this present work is to study how the symmetry affects temporal evolution, one-particle flight time distributions and, under the presence of an interaction potential, mean flight times when considering non-interacting identical particles. We will show that fermions have a
broader time distribution than bosons so that the former will be detected arriving at a suitably placed screen earlier and later, a direct result of the anti-bunching effect.
Effectively, the symmetry can speed up or slow down the time evolving particles in the nonrelativistic and relativistic domains. In the second framework, we argue that one cannot speak of a superluminal effect since it can be seen as a mirror, or direct reflection, of the corresponding initial spatial distributions.
Specifically, consider first two identical particles initiated close to each other about a (mean) point in phase space, which then continue moving as free particles until they are detected on a screen some distance away. One may in principle measure the time at which a particle hits the screen
and thus obtain a flight time distribution. We will show that this flight time
distribution may be different for bosons and fermions as compared to
distinguishable particles. The same happens when their motion is not free but they
are individually scattered by a delta barrier potential.
We find that fermions have a broader time distribution than bosons, irrespective of whether they are
scattered through a potential or not. Fermions will be detected arriving at
the screen earlier and later than bosons, a direct result of the anti-bunching effect.
A related question has to do with the survival probability of the initial
wavefunction. We shall show that the bosonic survival probability of free
particles decays slower than that of distinguishable particles while for fermions it decays more rapidly. These results are also of interest in the short time limit, which should imply that the well-known quantum Zeno effect \cite{Zeno1,Zeno2,Zeno3}
would be stronger for bosons than for fermions.
However, at least for scattering through a
delta potential, the particle symmetry does not affect the mean tunneling flight time and it is given by the phase time for the distinguishable particle \cite%
{rivlin2020,dumont2020}.
The paper is organized as follows. In Section II we consider the case of free particle propagation in the nonrelativistic and relativistic domains. In
Section III the scattering of identical particles from a delta barrier
potential is analyzed in detail since closed analytical expressions can be obtained. Section IV presents and discusses our results for free and tunneling dynamics. The role played by the initial width of the Gaussian wavepackets describing the identical particles in mean flight times is also analyzed.
We argue that the implications of the early arrival of fermions versus bosons as, for example, photons at the screen in the relativistic case, which might seem to be superluminal, is not. It is a reflection of the anti-bunching effect on the initial density distribution.
We also consider further generalizations and
implications of these results to realistic systems in the last Section.
\renewcommand{\theequation}{2.\arabic{equation}} \setcounter{section}{1} %
\setcounter{equation}{0}
\section{\protect\bigskip Free dynamics of nonrelativistic identical particles}
\subsection{General considerations}
Our model system consists of two (one-dimensional) non-interacting
identical particles (with coordinates $x_{1}$ and $x_{2}$) of mass $M$
which scatter from a potential $V\left( x_{j}\right)$. We place a screen
to the right or left of the potential and record a particle whenever it
hits the screen. The questions we seek to answer are: what is the
distribution of times at which one of the particles hits the screen, and what
is the mean arrival time, assuming that it exists. The
Hamiltonian for a single particle (operators are denoted with carets) is
\begin{equation}
\hat{H}_{j}=\frac{\hat{p}_{j}^{2}}{2M}+V\left( \hat{x}_{j}\right) ,j=1,2
\label{2.1}
\end{equation}%
with $\hat{p}_{j}$ and $\hat{x}_{j}$ the momentum and position operators of
the j-th particle respectively. The full Hamiltonian is the sum of the two%
\begin{equation}
\hat{H}=\hat{H}_{1}+\hat{H}_{2}. \label{2.2}
\end{equation}%
Initially, the single particle wavefunction will be a coherent state
localized about the mean position $x_{ji}$ and mean momentum $p_{ji}$ with
width parameter $\Gamma$%
\begin{equation}
\Psi _{j}\left( x_{j}\right) =\left( \frac{\Gamma }{\pi }\right) ^{1/4}\exp %
\left[ -\frac{\Gamma \left( x_{j}-x_{ji}\right) ^{2}}{2}+\frac{i}{\hbar }%
p_{ji}\left( x_{j}-x_{ji}\right) \right] ,j=1,2. \label{2.3}
\end{equation}%
To simplify, we introduce at this point reduced position, momentum, and time variables,%
\begin{equation}
X=\sqrt{\Gamma }x,K=\frac{p}{\hbar \sqrt{\Gamma }},\tau =\frac{\hbar \Gamma
}{M}t \label{2.4}
\end{equation}%
so that the single particle wavefunction has the form%
\begin{equation}
\Psi _{j}\left( X_{j}\right) =\left( \frac{1}{\pi }\right) ^{1/4}\exp \left[
-\frac{1}{2}\left( X_{j}-X_{ji}\right) ^{2}+iK_{ji}\left( X_{j}-X_{ji}\right) %
\right] ,j=1,2. \label{2.5}
\end{equation}%
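We note in passing that in these reduced variables the free single-particle time-dependent Schr\"{o}dinger equation becomes dimensionless,
\begin{equation*}
i\hbar \frac{\partial \Psi }{\partial t}=-\frac{\hbar ^{2}}{2M}\frac{\partial ^{2}\Psi }{\partial x^{2}}\;\;\Longrightarrow \;\;i\frac{\partial \Psi }{\partial \tau }=-\frac{1}{2}\frac{\partial ^{2}\Psi }{\partial X^{2}},
\end{equation*}%
so that the free evolution below depends only on the dimensionless combinations $X$, $K$ and $\tau $.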
The composite wavefunction of the two particles is
\begin{equation}
\Psi _{k}\left( X_{1},X_{2}\right) =\frac{1}{N_{k}}\left[ \Psi _{1}\left(
X_{1}\right) \Psi _{2}\left( X_{2}\right) +h_{k}\Psi _{2}\left( X_{1}\right)
\Psi _{1}\left( X_{2}\right) \right] \label{2.6}
\end{equation}%
where the coefficient $h_{k}$ is
\begin{equation}
h_{k}=\left(
\begin{array}{c}
1 \\
-1 \\
0%
\end{array}%
\right) \text{ for }\left(
\begin{array}{c}
\text{bosons} \\
\text{fermions} \\
\text{distinguishable particles}%
\end{array}%
\right) \label{2.7}
\end{equation}%
and the corresponding normalization constant is
\begin{equation}
N_{k}^{2}=2\left[ 1+h_{k}\exp \left( -\frac{\left( X_{1i}-X_{2i}\right)
^{2}+\left( K_{1i}-K_{2i}\right) ^{2}}{2}\right) \right] \text{, }k=B,F
\label{2.8}
\end{equation}%
where $B$ and $F$ denote bosons and fermions, respectively. The
normalization for distinguishable particles ($k=D$) is unity. The time-evolved wavefunction is
\begin{eqnarray}
N_{k}\Psi _{k}\left( X_{1},X_{2};\tau \right) &=&\left[ \exp \left( -i\hat{H}%
_{1}\tau \right) \Psi _{1}\left( X_{1}\right) \right] \left[ \exp \left( -i%
\hat{H}_{2}\tau \right) \Psi _{2}\left( X_{2}\right) \right] \notag \\
&&+h_{k}\left[ \exp \left( -i\hat{H}_{1}\tau \right) \Psi _{2}\left(
X_{1}\right) \right] \left[ \exp \left( -i\hat{H}_{2}\tau \right) \Psi
_{1}\left( X_{2}\right) \right] \notag \\
&\equiv &\left[ \Psi _{1}\left( X_{1};\tau \right) \Psi _{2}\left(
X_{2};\tau \right) +h_{k}\Psi _{2}\left( X_{1};\tau \right) \Psi _{1}\left(
X_{2};\tau \right) \right] . \label{2.9}
\end{eqnarray}
We now put a `screen' at the point $X=X_f$, chosen such that the initial wave function has negligible overlap with it.
Particle $1$ may reach the screen at time $\tau $ while particle $2$ may be found at any location. In other
words, the probability that a particle reaches the screen while the second
particle is found at that time at some point, say $z$, is
proportional to $\left\vert \Psi \left( X_f,X_{2}=z;\tau \right)
\right\vert ^{2}$. We are interested in the distribution of times at
which a particle hits the screen, irrespective of where the other
particle is, so the probability of finding a particle hitting the screen
at (the reduced) time $\tau $ is defined to be
\begin{equation}
P_{k}\left( X_f;\tau \right) =\frac{\int_{-\infty }^{\infty }dz\left\vert \Psi
_{k}\left( X_f,z;\tau \right) \right\vert ^{2}}{\int_{0}^{\infty }d\tau
\int_{-\infty }^{\infty }dz\left\vert \Psi _{k}\left( X_f,z;\tau \right)
\right\vert ^{2}},k=B,F,D. \label{2.10}
\end{equation}%
The mean time is then naturally given as
\begin{equation}
\left\langle \tau \right\rangle _{k}=\int_{0}^{\infty }d\tau \tau
P_{k}\left( X_f;\tau \right) ,k=B,F,D. \label{2.11}
\end{equation}%
The distribution and the means are well defined if the time integrals
converge as in potential scattering where the density decays at long times
as $\tau ^{-3}$ \cite{Muga-2008,Eli-2018}. For free particles, the density decays as $\tau ^{-1}$ so
that the best one can do is to consider the relative probability of having a
particle arrive at the screen at time $\tau $. This we denote as
\begin{equation}
\rho _{k}\left( X_f,\tau \right) =\int_{-\infty }^{\infty }dz\left\vert \Psi
_{k}\left( X_f,z;\tau \right) \right\vert ^{2},k=B,F,D. \label{2.12}
\end{equation}
\subsection{Symmetry and free particles}
The single free particle ($V=0$) time evolved wavefunction is
\begin{equation}
\Psi _{j}\left( X,X_{i},K_{i},\tau \right) =\frac{1}{\sqrt{\left( 1+i\tau
\right) }}\left( \frac{1}{\pi }\right) ^{1/4}\exp \left( -\frac{1}{2}\frac{%
\left[ \left( X_{j}-X_{ji}\right) -iK_{ji}\right] ^{2}}{\left( 1+i\tau
\right) }-\frac{K_{ji}^{2}}{2}\right) ,j=1,2 \label{2.13}
\end{equation}%
and the density is
\begin{equation}
\left\vert \Psi _{j}\left( X,X_{ji},K_{ji},\tau \right) \right\vert ^{2}=%
\frac{1}{\sqrt{\pi \left( 1+\tau ^{2}\right) }}\exp \left( -\frac{\left(
X-X_{ji}-K_{ji}\tau \right) ^{2}}{\left( 1+\tau ^{2}\right) }\right) ,j=1,2.
\label{2.14}
\end{equation}%
The free particle time-dependent density has a maximum at the (free particle) time $\tau
=\left( X-X_{ji}\right) /K_{ji}$. It is normalized when integrating over the
position $X$, but its time integral diverges due to the long-time tail, which goes as $1/\tau $.
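The relative density of Eq. \ref{2.12} can also be evaluated by brute-force numerical quadrature over the partner coordinate, which provides an independent check of the closed-form expression derived next. A minimal Python sketch of such a check (ours, not part of the original computation; the integration grid and the example parameters are merely indicative and must cover the spreading wavepackets) reads:
\begin{verbatim}
import numpy as np

def psi_free(X, Xi, Ki, tau):
    # Single-particle free Gaussian wavepacket in reduced variables, Eq. (2.13).
    return ((1.0 / np.pi) ** 0.25 / np.sqrt(1.0 + 1j * tau)
            * np.exp(-0.5 * ((X - Xi) - 1j * Ki) ** 2 / (1.0 + 1j * tau)
                     - 0.5 * Ki ** 2))

def rho(Xf, tau, X1i, K1i, X2i, K2i, h):
    # Relative one-particle density at the screen, Eq. (2.12); h = +1, -1, 0
    # for bosons, fermions and distinguishable particles, respectively.
    N2 = 1.0 if h == 0 else 2.0 * (1.0 + h * np.exp(
        -0.5 * ((X1i - X2i) ** 2 + (K1i - K2i) ** 2)))
    z, dz = np.linspace(-3000.0, 3000.0, 60001, retstep=True)
    psi = (psi_free(Xf, X1i, K1i, tau) * psi_free(z, X2i, K2i, tau)
           + h * psi_free(Xf, X2i, K2i, tau) * psi_free(z, X1i, K1i, tau))
    return np.sum(np.abs(psi) ** 2) * dz / N2

# Example: two bosons with case (I)-like parameters, screen at X_f = 450.
print(rho(450.0, 75.0, -301.0, 10.0, -299.0, 10.0, +1))
\end{verbatim}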
After some Gaussian integrations one finds that
\begin{eqnarray}
&&\rho _{k}\left( X;\tau \right) =\frac{1}{N_{k}^{2}}\left( \left\vert \Psi
_{1}\left( X,X_{1i},K_{1i},\tau \right) \right\vert ^{2}+\left\vert \Psi
_{2}\left( X,X_{2i},K_{2i},\tau \right) \right\vert ^{2}\right) \notag \\
&&+h_{k}\frac{2}{N_{k}^{2}}\left\vert \Psi _{1}\left( X,X_{1i},K_{1i},\tau
\right) \right\vert \left\vert \Psi _{2}\left( X,X_{2i},K_{2i},\tau \right)
\right\vert \notag \\
&&\exp \left( -\frac{\left( X_{2i}-X_{1i}\right) ^{2}+\left(
K_{2i}-K_{1i}\right) ^{2}}{4}\right) \cos \left( \Phi -\frac{\left(
X_{2i}-X_{1i}\right) \left( K_{2i}+K_{1i}\right) }{2}\right) ,k=B,F,D \notag
\\
&& \label{2.15}
\end{eqnarray}%
where the phase $\Phi $ is%
\begin{equation}
\Phi =\frac{\tau \left[ \left( X-X_{1i}\right) ^{2}-K_{1i}^{2}\right] }{%
2\left( 1+\tau ^{2}\right) }-\frac{\tau \left[ \left( X-X_{2i}\right)
^{2}-K_{2i}^{2}\right] }{2\left( 1+\tau ^{2}\right) }-\frac{K_{2i}\left(
X-X_{2i}\right) }{\left( 1+\tau ^{2}\right) }+\frac{\left( X-X_{1i}\right)
K_{1i}}{\left( 1+\tau ^{2}\right) }. \label{2.16}
\end{equation}%
As also shown below, in the long-time limit ($\tau \rightarrow \infty $) the
density scales as $\rho _{k}\left( X;\tau \right) \sim 1/\tau $ irrespective of
whether one is considering bosons, fermions or distinguishable particles so
that strictly speaking for freely evolving particles the time integral in the denominator of Eq. \ref{2.10}
diverges.
\subsubsection{\protect\bigskip Bosons}
If the two bosons are initially placed such that $X_{1i}=X_{2i}=X_{i}$ and $%
K_{1i}=K_{2i}=K_{i}$ then the phase $\Phi $ vanishes, there is no effect of
interference, and the time dependent density is the same as for a
distinguishable particle. Similarly, if the initial distance between the two
wavepackets is sufficiently large, the interference cross term will vanish and the
result will again reduce to the single particle distinguishable case. The
interesting case is when the two particles are close to each other. If the
initial momenta are identical, that is, $K_{1i}=K_{2i}=K_{i}$ ($\Delta_{K}=K_{2i}-K_{1i}=0$) and the initial coordinates are written as average and difference coordinates
\begin{equation}
X_{i}=\frac{X_{1i}+X_{2i}}{2},\Delta _{X}=X_{2i}-X_{1i} \label{2.17}
\end{equation}%
one finds that the density for finding a boson at the screen $X$ at time $\tau $ is
\begin{eqnarray}
&&\rho _{B}\left( X;\tau \right) \left( \Delta _{K}=0\right) =\frac{%
\left\vert \Psi \left( X,X_{i},K_{i},\tau \right) \right\vert ^{2}}{\left[
1+\exp \left( -\frac{\Delta _{X}^{2}}{2}\right) \right] }\exp \left( -\frac{%
\Delta _{X}^{2}}{4\left( 1+\tau ^{2}\right) }\right) \notag \\
&&\left[ \cosh \left( \frac{\Delta _{X}\left( X-X_{i}-K_{i}\tau \right) }{%
\left( 1+\tau ^{2}\right) }\right) +\exp \left( -\frac{\Delta _{X}^{2}}{4}%
\right) \cos \left( \frac{\tau \Delta _{X}\left[ \left( X-X_{i}\right) -\tau
K_{i}\right] }{\left( 1+\tau ^{2}\right) }\right) \right] \notag \\
&& \label{2.18}
\end{eqnarray}%
and indeed one may check to see that%
\begin{equation}
\int_{-\infty }^{\infty }dX\rho _{B}\left( X;\tau \right) =1. \label{2.19}
\end{equation}%
The long time limit is%
\begin{equation}
\lim_{\tau \rightarrow \infty }\rho _{B}\left( X;\tau \right) =\frac{%
\left\vert \Psi _{0}\left( X,X_{i},K_{i},\tau \right) \right\vert ^{2}}{%
\left[ 1+\exp \left( -\frac{\Delta _{X}^{2}}{2}\right) \right] }\left[
1+\exp \left( -\frac{\Delta _{X}^{2}}{4}\right) \cos \left( \Delta
_{X}K_{i}\right) \right] \label{2.20}
\end{equation}%
so that as already mentioned, the free-particle time distribution of bosons
decays at long time as $\tau ^{-1}$ just like the single
free particle.
Similarly let us consider two bosons that are initiated at the same point ($%
X_{1i}=X_{2i}=X_{i})$ but with different momenta and use the difference and
mean momenta%
\begin{equation}
K_{i}=\frac{K_{1i}+K_{2i}}{2},\Delta _{K}=K_{2i}-K_{1i} . \label{2.21}
\end{equation}%
In this case
\begin{eqnarray}
&&\rho _{B}\left( X;\tau \right) \left( \Delta _{X}=0\right) =\frac{2}{N^{2}%
\sqrt{\pi \left( 1+\tau ^{2}\right) }}\exp \left( -\frac{\left(
X-X_{i}-K_{i}\tau \right) ^{2}}{\left( 1+\tau ^{2}\right) }-\frac{\Delta
_{K}^{2}\tau ^{2}}{4\left( 1+\tau ^{2}\right) }\right) \notag \\
&&\left[ \cosh \left( \frac{\Delta _{K}\tau \left( X-X_{i}-K_{i}\tau \right)
}{\left( 1+\tau ^{2}\right) }\right) +\exp \left( -\frac{\Delta _{K}^{2}}{4}%
\right) \cos \left( \frac{\Delta _{K}\left[ \tau K_{i}-\left( X-X_{i}\right) %
\right] }{\left( 1+\tau ^{2}\right) }\right) \right] . \notag \\
&& \label{2.23}
\end{eqnarray}%
This again decays at long times as $\tau ^{-1}$.
\subsubsection{Fermions}
Following the same algebra as in the bosonic case, one finds that the
fermionic density is
\begin{eqnarray}
&&\rho _{F}\left( X;\tau \right) =\frac{1}{N_{F}^{2}}\left( \left\vert \Psi
\left( X,X_{1i},K_{1i},\tau \right) \right\vert ^{2}+\left\vert \Psi \left(
X,X_{2i},K_{2i},\tau \right) \right\vert ^{2}\right) \notag \\
&&-\frac{2}{N_{F}^{2}}\left\vert \Psi \left( X,X_{1i},K_{1i},\tau \right)
\right\vert \left\vert \Psi \left( X,X_{2i},K_{2i},\tau \right) \right\vert
\notag \\
&&\cos \left( \Phi -\frac{\left( X_{2i}-X_{1i}\right) \left(
K_{2i}+K_{1i}\right) }{2}\right) \exp \left( -\frac{\left(
X_{2i}-X_{1i}\right) ^{2}+\left( K_{2i}-K_{1i}\right) ^{2}}{4}\right)
\label{2.24}
\end{eqnarray}%
The normalization is
\begin{equation}
N_{F}^{2}=2\left[ 1-\exp \left( -\frac{\Delta _{X}^{2}+\Delta _{K}^{2}}{2}%
\right) \right] \label{2.25}
\end{equation}%
and it vanishes if $\Delta _{X}=\Delta _{K}=0$, so care must be taken in this
limit, since the numerator also vanishes but the ratio does not. More
specifically, at time $\tau =0$, with $K_{1i}=K_{2i}=K_{i}$ and $\Delta _{X}=X_{2i}-X_{1i}$, we have
\begin{eqnarray}
&&\lim_{\Delta _{X}\rightarrow 0}\Psi \left( X_{1},X_{2};0\right) =-\frac{1}{%
\sqrt{\pi }}\left( X_{2}-X_{1}\right) \notag \\
&&\cdot \exp \left[ iK_{i}\left( X_{1}-X_{1i}\right) +iK_{i}\left(
X_{2}-X_{2i}\right) -\frac{1}{2}\left[ \left( X_{1}-X_{1i}\right)
^{2}+\left( X_{2}-X_{2i}\right) ^{2}\right] \right] \label{2.26}
\end{eqnarray}%
and this vanishes if $X_{1}=X_{2}$. Fermions cannot exist at the same point
in phase space. Note however that
\begin{equation}
\int_{-\infty }^{\infty }dX_{1}\int_{-\infty }^{\infty }dX_{2}\lim_{\Delta
_{X}\rightarrow 0}\left\vert \Psi \left( X_{1},X_{2};0\right) \right\vert
^{2}=1 \label{2.27}
\end{equation}%
as it should be. There is no difficulty in preparing an initial wavefunction
in the fermionic case even if both wavepackets are localized around the same
centers both in coordinate and momentum space. The density vanishes at one
point only.
Using the average and difference coordinates as above we readily find that
when the two incident momenta are identical
\begin{eqnarray}
&&\rho _{F}\left( X;\tau \right) \left( \Delta _{K}=0\right) =\frac{%
\left\vert \Psi _{0}\left( X,X_{i},K_{i},\tau \right) \right\vert ^{2}}{%
2\sinh \left( \frac{\Delta _{X}^{2}}{4}\right) }\exp \left( \frac{\tau
^{2}\Delta _{X}^{2}}{4\left( 1+\tau ^{2}\right) }\right) \notag \\
&&\left[ \cosh \left( \frac{\Delta _{X}\left( X-X_{i}-K_{i}\tau \right) }{%
\left( 1+\tau ^{2}\right) }\right) -\exp \left( -\frac{\Delta _{X}^{2}}{4}%
\right) \cos \left( \frac{\tau \Delta _{X}\left[ \left( X-X_{i}\right) -\tau
K_{i}\right] }{\left( 1+\tau ^{2}\right) }\right) \right] . \notag \\
&& \label{2.28}
\end{eqnarray}%
It is also straightforward to see that
\begin{equation}
\int_{-\infty }^{\infty }dX\rho _{F}\left( X;\tau \right) \left( \Delta
_{K}=0\right) =1. \label{2.29}
\end{equation}%
When the (mean) distance between the two particles becomes small
\begin{equation}
\lim_{\Delta _{X}\rightarrow 0}\rho _{F}\left( X;\tau \right) =\left\vert
\Psi _{0}\left( X,X_{i},K_{i},\tau \right) \right\vert ^{2}\left[ \frac{%
\left( X-X_{i}-K_{i}\tau \right) ^{2}}{\left( 1+\tau ^{2}\right) }+\frac{1}{2%
}\right] \label{2.30}
\end{equation}%
and at long times the fermion density is
\begin{equation}
\lim_{\tau \rightarrow \infty }\rho _{F}\left( X;\tau \right) \left( \Delta
_{K}=0\right) =\frac{\left\vert \Psi _{0}\left( X,X_{i},K_{i},\tau \right)
\right\vert ^{2}}{\left[ 1-\exp \left( -\frac{\Delta _{X}^{2}}{2}\right) %
\right] }\left[ 1-\exp \left( -\frac{\Delta _{X}^{2}}{4}\right) \cos \left(
K_{i}\Delta _{X}\right) \right] \label{2.31}
\end{equation}%
and it too decays as $\tau ^{-1}$ as for bosons. The difference is only in
the coefficients.
\subsection{Survival probability and symmetry}
The time-dependent overlap or survival amplitude for a single particle is%
\begin{eqnarray}
S_{j}\left( \tau \right) &=&\langle \Psi \left( X_{ji},K_{ji},0\right) |\Psi
\left( X_{ji},K_{ji},\tau \right) \rangle \notag \\
&=&\sqrt{\frac{2}{\left( 2+i\tau \right) }}\exp \left( -\frac{i\tau
K_{ji}^{2}}{\left( 2+i\tau \right) }\right) . \label{2.32}
\end{eqnarray}%
Analogously, the time-dependent overlap or survival amplitude for the two particle wavefunction is then
\begin{equation*}
\Sigma _{k}\left( \tau \right) =\frac{2}{N_{k}^{2}}S_{1}\left( \tau \right)
S_{2}\left( \tau \right) \left[ 1+h_{k}\exp \left( -\frac{\Delta
_{X}^{2}+\Delta _{K}^{2}}{\left( 2+i\tau \right) }\right) \right]
\end{equation*}%
and its square is%
\begin{equation}
\left\vert \Sigma _{k}\left( \tau \right) \right\vert ^{2}=\left\vert
S_{1}\left( \tau \right) S_{2}\left( \tau \right) \right\vert
^{2}O_{k}\left( \tau \right) \label{2.33}
\end{equation}%
with
\begin{eqnarray}
O_{k}\left( \tau \right) & =&\frac{\left[ 1+h_{k}^{2}\exp \left( -\frac{%
4\left( \Delta _{X}^{2}+\Delta _{K}^{2}\right) }{\left( 4+\tau ^{2}\right) }%
\right) +2h_{k}\exp \left( -\frac{2\left( \Delta _{X}^{2}+\Delta
_{K}^{2}\right) }{\left( 4+\tau ^{2}\right) }\right) \cos \left( \frac{%
\left( \Delta _{X}^{2}+\Delta _{K}^{2}\right) }{\left( 4+\tau ^{2}\right) }%
\tau \right) \right] }{\left[ 1+h_{k}\exp \left( -\frac{\Delta
_{X}^{2}+\Delta _{K}^{2}}{2}\right) \right] ^{2}}, \notag \\
k&=&B,F,D. \label{2.34}
\end{eqnarray}
It is then of interest to study this overlap $ O_{k}\left( \tau \right)$ in some limits. First, we note
that when the initial distances between the wavepackets are sufficiently
large, such that $\Delta _{X}^{2}+\Delta _{K}^{2}\gg 1$, then for times
shorter than $\sim \sqrt{\Delta _{X}^{2}+\Delta _{K}^{2}}$, this overlap
function reduces to unity. This is what is expected: when the initial
distance between the particles is large, they behave as independent
distinguishable particles. The interesting case is when the initial
distances between the two wavepackets are small and the interference term is
no longer negligible at short times. For bosons, one finds to leading order
\begin{equation}
\lim_{\Delta _{X},\Delta _{K}\rightarrow 0}O_{B}\left( \tau \right) =1+\frac{%
\left( \Delta _{X}^{2}+\Delta _{K}^{2}\right)
\tau ^{2}}{2\left( 4+\tau
^{2}\right) }\geq 1 =O_{D}\left( \tau \right) \label{2.35}
\end{equation}%
showing that in this limit, the bosonic survival probability is greater than
the distinguishable particle overlap and this is so for all times. For
fermions, though, one has that
\begin{equation}
\lim_{\Delta _{X},\Delta _{K}\rightarrow 0}O_{F}\left( \tau \right) =\frac{4%
}{\left( 4+\tau ^{2}\right) }\leq 1=O_{D}\left( \tau \right) \label{2.37}
\end{equation}%
showing that the fermionic overlap decays faster than the distinguishable
particle case in this limit. In other words, when the distance in phase
space between the centers of the two particles is small, which is the case
when the interference term becomes most important, one finds that the decay
of the overlap of fermions is faster than distinguishable particles which in
turn is faster than bosons. These results are also of interest in the short
time limit, where one finds that%
\begin{equation}
\lim_{\Delta _{X},\Delta _{K},\tau \rightarrow 0}O_{B}\left( \tau \right) =1+%
\frac{\left( \Delta _{X}^{2}+\Delta _{K}^{2}\right) \tau ^{2}}{8}
\label{2.38}
\end{equation}%
\begin{equation}
\lim_{\Delta _{X},\Delta _{K},\tau \rightarrow 0}O_{F}\left( \tau \right)
=\left( 1-\frac{\tau ^{2}}{4}\right) \label{2.39}
\end{equation}%
which indicates that the quantum Zeno effect \cite{Zeno1,Zeno2,Zeno3} would be stronger for bosons
than for fermions, as the fermionic survival probability decays faster also
at short times.
\subsection{\protect\bigskip Free dynamics of relativistic identical particles}
To investigate the relativistic regime, we consider relativistic electrons and photons. The wavepackets describing the bosons -- the photons -- travel dispersion-free at the speed of light. The wavepackets describing the fermions -- the electrons -- are four component spinors with time evolution determined by the Dirac equation. As we consider only free particle motion of two non-interacting (except via particle statistics) electrons, spin is conserved and the wavepackets reduce to two component spinors. Relativistic wavepacket propagation is much like non-relativistic propagation except that the velocity is no longer directly proportional to wavenumber -- the former asymptotes to the speed of light -- and wavepacket broadening is greatly suppressed due to the dispersion relation -- quadratic in the non-relativistic case -- approaching linearity. In particular, the time scale for wavepacket broadening scales with $\gamma^2$, where $\gamma = 1/\sqrt{1-v^2/c^2}$. (See Eq. (2.19) in \cite{dumont2020}.) For example, if $v=0.99c$, broadening takes 50 times longer than it does for non-relativistic velocities.
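Explicitly, for $v=0.99c$ one has $\gamma =\left( 1-0.99^{2}\right) ^{-1/2}\simeq 7.1$, so that $\gamma ^{2}\simeq 50$.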
The single free relativistic electron time-evolved wavefunction, in the highly accurate (for the cases we considered) steepest descent approximation, is
\begin{equation}
\Psi _{j}\left( X,X_{i},K_{i},\tau \right) =\frac{\hat{u}}{\sqrt{\left( 1+i\left(\tau/\gamma^2 \right)
\right) }}\left( \frac{1}{\pi }\right) ^{1/4}\exp \left( -\frac{1}{2}\frac{%
\left[ \left( X_{j}-X_{ji}\right) -iK_{ji}\right] ^{2}}{\left( 1+i\left(\tau/\gamma^2 \right)
\right) }-\frac{K_{ji}^{2}}{2}\right) ,j=1,2, \label{3.1}
\end{equation}%
where $\hat{u}=u/\lVert u \rVert$ and
\begin{equation}
u=\left(
\begin{array}{c}
1 \\
\left(\frac{\hbar \Gamma^{1/2}}{mc} \right) \frac{K_{ji}}{1+\gamma}
\end{array}%
\right)
\end{equation}
is the two-component spinor for a spin-up electron. This is then used in the symmetrized two-particle wavefunctions for the two bosons and the two electrons, respectively.
\bigskip
\bigskip
\renewcommand{\theequation}{3.\arabic{equation}} \setcounter{section}{2} %
\setcounter{equation}{0}
\section{\protect\bigskip Flight times of identical non-relativistic particles scattered by a
delta function barrier}
\subsection{Preliminaries}
The Hamiltonian for the delta function barrier is
\begin{equation}
\hat{H}=-\frac{\hbar ^{2}}{2M}\frac{d^{2}}{dx^{2}}+\varepsilon \delta \left(
x\right) . \label{3.1}
\end{equation}%
The coupling coefficient is taken to be positive, $\varepsilon >0$ (a repulsive barrier). The eigenfunctions of the
Hamiltonian at energy
\begin{equation}
E=\frac{\hbar ^{2}k^{2}}{2M} \label{3.2}
\end{equation}%
are
\begin{equation}
\psi \left( x\right) =\left(
\begin{array}{c}
\exp \left( ikx\right) +R\left( k\right) \exp \left( -ikx\right) ,\text{ \ \
}x<0 \\
T\left( k\right) \exp \left( ikx\right) ,\text{ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ }x>0%
\end{array}%
\right) \label{3.3}
\end{equation}%
with the reflection amplitude given as
\begin{equation}
R\left( k\right) =\frac{-i\alpha \left( k\right) }{1+i\alpha \left( k\right)
},\text{ \ \ }\alpha \left( k\right) =\frac{M\varepsilon }{\hbar ^{2}k}.
\label{3.4}
\end{equation}%
The transmission amplitude is
\begin{equation}
T\left( k\right) =\frac{1}{1+i\alpha \left( k\right) } \label{3.5}
\end{equation}%
and one readily sees that
\begin{equation}
\left\vert R\left( k\right) \right\vert ^{2}+\left\vert T\left( k\right)
\right\vert ^{2}=1. \label{3.6}
\end{equation}
The phase time delays are defined to be
\begin{equation}
\delta t_{T,R}=\frac{M}{\hbar k}\mathrm{Im}\left( \frac{1}{Y}\frac{\partial Y%
}{\partial k}\right) ,\text{ \ \ }Y=R,T \label{3.7}
\end{equation}%
so that
\begin{equation}
\delta t_{T}=\delta t_{R}=\frac{M}{\hbar k}\frac{\alpha \left( k\right) }{k%
\left[ 1+\alpha ^{2}\left( k\right) \right] }\equiv \delta t. \label{3.8}
\end{equation}%
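For the delta barrier this follows directly from Eqs. \ref{3.4} and \ref{3.5}: since $d\alpha /dk=-\alpha /k$,
\begin{equation*}
\frac{1}{T}\frac{\partial T}{\partial k}=-\frac{i}{1+i\alpha \left( k\right) }\frac{d\alpha }{dk}=\frac{i\alpha \left( k\right) }{k\left[ 1+i\alpha \left( k\right) \right] }=\frac{\alpha ^{2}\left( k\right) +i\alpha \left( k\right) }{k\left[ 1+\alpha ^{2}\left( k\right) \right] },
\end{equation*}%
whose imaginary part gives Eq. \ref{3.8}; the reflection amplitude yields the same imaginary part, since $\ln R=\ln \left( -i\alpha \right) -\ln \left( 1+i\alpha \right) $ and the additional term has a purely real derivative with respect to $k$.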
The phase time then implies that for a repulsive delta function potential ($\alpha \left( k\right) >0$) the flight time is lengthened, while for an
attractive delta function potential it is shortened. In the limit that the
coupling coefficient $\varepsilon \rightarrow \infty $, which is equivalent
to a hard wall potential, the transmission amplitude vanishes
while the reflection amplitude goes to $-1$. The reflection time delay
vanishes in this case; the interference of the forward and reflected waves
does not change the reflected phase time delay. For a fixed nonzero value
of $\varepsilon >0$, in the limit that the energy vanishes ($k\rightarrow 0$)
the delay diverges as $k^{-1}$. For an attractive delta function potential,
the flight time is shortened and the reduction diverges as $k^{-1}$. Due to
the zero width of the delta function potential, the dwell time \cite{Mugabook} in the
barrier always vanishes.
It is worthwhile here also to consider the imaginary time defined as \cite{Pollak1984}
\begin{equation}
t_{im,R,T}=\hbar \text{Re}\left( \frac{1}{Y}\frac{\partial Y}{\partial E}%
\right) ,\text{ \ \ }Y=R,T \label{3.9}
\end{equation}%
so that the transmitted imaginary time is positive%
\begin{equation}
t_{im,T}=\frac{M\alpha ^{2}\left( k\right) }{\hbar k^{2}\left( 1+\alpha
^{2}\left( k\right) \right) } \label{3.10}
\end{equation}%
while the reflected imaginary time is negative%
\begin{equation}
t_{im,R}=-\frac{M}{\hbar k^{2}\left( 1+\alpha ^{2}\left( k\right) \right) }.
\label{3.11}
\end{equation}%
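Both expressions follow from the real part of the same logarithmic derivatives used for the phase times, together with $\partial /\partial E=\left( M/\hbar ^{2}k\right) \partial /\partial k$, namely
\begin{equation*}
\mathrm{Re}\left( \frac{1}{T}\frac{\partial T}{\partial k}\right) =\frac{\alpha ^{2}\left( k\right) }{k\left[ 1+\alpha ^{2}\left( k\right) \right] },\qquad \mathrm{Re}\left( \frac{1}{R}\frac{\partial R}{\partial k}\right) =-\frac{1}{k\left[ 1+\alpha ^{2}\left( k\right) \right] }.
\end{equation*}%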
As the momentum increases, the transmission probability increases while the
reflection probability decreases, so that the transmitted imaginary time is
positive, and the reflected is negative.
In reduced variables (with $\epsilon_i=\alpha(k)$), the phase time delay takes the simple form%
\begin{equation}
\delta \tau \left( K_{i}\right) =\frac{\epsilon _{i}}{K_{i}^{2}\left(
1+\epsilon _{i}^{2}\right) }
\end{equation}%
while the imaginary time delays are
\begin{equation}
\tau _{im,T}\left( K_{i}\right) =\frac{\epsilon _{i}^{2}}{K_{i}^{2}\left(
1+\epsilon _{i}^{2}\right) }
\end{equation}%
and%
\begin{equation}
\tau _{im,R}\left( K_{i}\right) =-\frac{1}{K_{i}^{2}\left( 1+\epsilon
_{i}^{2}\right) }.
\end{equation}%
The transmission and reflection probabilities become
\begin{equation}
\left\vert T\left( K_{i}\right) \right\vert ^{2}=\frac{1}{1+\epsilon _{i}^{2}%
},\left\vert R\left( K_{i}\right) \right\vert ^{2}=\frac{\epsilon _{i}^{2}}{%
1+\epsilon _{i}^{2}}.
\end{equation}%
\subsubsection{The single particle dynamics. Momentum filtering}
Initially we consider a Gaussian wavepacket as in Eq. \ref{2.3}
\begin{equation}
\Psi \left( x,0\right) =\left( \frac{\Gamma }{\pi }\right) ^{1/4}\exp \left[
-\frac{\Gamma }{2}\left( x-x_{i}\right) ^{2}+ik_{i}\left( x-x_{i}\right) %
\right] \label{3.12}
\end{equation}%
whose momentum representation is
\begin{equation}
\Psi \left( k,0\right) =\left( \frac{1}{\pi \Gamma }\right) ^{1/4}\exp
\left( -\frac{\left( k-k_{i}\right) ^{2}}{2\Gamma }-ikx_{i}\right) .
\label{3.13}
\end{equation}%
The time dependent wavepacket in the transmitted region is
\begin{equation}
\Psi _{T}\left( x,t\right) =\int_{-\infty }^{\infty }\frac{dk}{\sqrt{2\pi }}%
\Psi \left( k,0\right) T\left( k\right) \exp \left( ikx-i\frac{\hbar k^{2}}{%
2M}t\right) ,x\geq 0 \label{3.14}
\end{equation}%
and in the reflected region is%
\begin{equation}
\Psi _{R}\left( x,t\right) =\int_{-\infty }^{\infty }\frac{dk}{\sqrt{2\pi }}%
\exp \left( -i\frac{\hbar k^{2}}{2M}t\right) \Psi \left( k,0\right) \left[
\exp \left( ikx\right) +R\left( k\right) \exp \left( -ikx\right) \right]
,x\leq 0 \label{3.15}
\end{equation}
Using the reduced variables as in Eq. \ref{2.4} and the reduced delta
function coupling variable
\begin{equation}
\epsilon =\frac{M\varepsilon }{\hbar ^{2}\sqrt{\Gamma }} \label{3.16}
\end{equation}%
and carrying out the momentum integrations in Eqs. \ref{3.14} and \ref{3.15}
one finds that the transmitted time-dependent wavepacket ($X\geq 0$) is
\begin{eqnarray}
&&\Psi _{T}\left( X,\tau \right) =\Psi _{fp}\left( X,\tau \right) \notag \\
&&\left[ 1-\epsilon \frac{\sqrt{\pi \left( 1+i\tau \right) }}{\sqrt{2}}\exp %
\left[ -\frac{\left( 1+i\tau \right) }{2}\left( Z_{T}+i\epsilon \right) ^{2}%
\right] \mathrm{{erf}c}\left( -\frac{i\sqrt{\left( 1+i\tau \right) }}{\sqrt{2%
}}\left( Z_{T}+i\epsilon \right) \right) \right] . \label{3.17}
\end{eqnarray}%
where $\Psi _{fp}\left( X,\tau \right) $ is the free particle time-dependent
wavepacket as in Eq. \ref{2.13}, $\mathrm{erfc}$ is the complementary
error function, and
\begin{equation}
Z_{T}=\frac{\left[ K_{i}+i\left( X-X_{i}\right) \right] }{\left( 1+i\tau
\right) }. \label{3.18}
\end{equation}%
The reflected time-dependent wavefunction ($X\leq 0$) is
\begin{eqnarray}
&&\Psi _{R}\left( X,\tau \right) =\Psi _{fp}\left( X,\tau \right) \notag \\
&&-\Psi _{fp}\left( -X,\tau \right) \epsilon \frac{\sqrt{\pi }}{\sqrt{2}}%
\sqrt{\left( 1+i\tau \right) }\exp \left( -\frac{\left( 1+i\tau \right) }{2}%
\left( i\epsilon +Z_{R}\right) ^{2}\right) \mathrm{\mathrm{{erf}c}}\left( -i%
\sqrt{\frac{\left( 1+i\tau \right) }{2}}\left( i\epsilon +Z_{R}\right)
\right) \notag \\
&& \label{3.19}
\end{eqnarray}%
with
\begin{equation}
Z_{R}=\frac{\left[ K_{i}-i\left( X+X_{i}\right) \right] }{\left( 1+i\tau
\right) }. \label{3.20}
\end{equation}
In practice, if the incident (reduced) momentum is sufficiently large, which
will be the case in all of our computations, and since we will be using small
momentum variances, one may safely replace the complementary error function
with its asymptotic expansion so that to leading order
\begin{equation}
\Psi _{T}\left( X,\tau \right) \simeq \Psi _{fp}\left( X,\tau \right) \left[
\frac{Z_{T}}{\left( Z_{T}+i\epsilon \right) }\right] ,X\geq 0 \label{3.21}
\end{equation}%
and
\begin{equation}
\Psi _{R}\left( X,\tau \right) \simeq \Psi _{fp}\left( X,\tau \right) -\Psi
_{fp}\left( -X,\tau \right) \frac{\epsilon }{\left( \epsilon -iZ_{R}\right) }%
,X\leq 0. \label{3.22}
\end{equation}%
Eqs. \ref{3.21} and \ref{3.22} will be the `workhorses' for the numerical
implementations below, but we stress that we have checked the validity of the
asymptotic expansion and it is quantitatively accurate for the conditions used here. To
see the long time limit we note that
\begin{equation}
\left\vert \frac{Z_{T}}{\left( Z_{T}+i\epsilon \right) }\right\vert ^{2}=%
\frac{\left[ K_{i}^{2}+\left( X-X_{i}\right) ^{2}\right] }{\left[ \left(
K_{i}-\epsilon \tau \right) ^{2}+\left( X-X_{i}+\epsilon \right) ^{2}\right]
} \label{3.23}
\end{equation}
and this goes as $\tau ^{-2}$ in the long time limit so that the transmitted
single particle density decays as $\tau ^{-3}$. The calculation is a bit
more involved for the reflected density but it also decays as $\tau ^{-3}$.
In contrast to the free particle, due to the potential, the mean flight time
(Eqs. \ref{2.10} and \ref{2.11}) is well-defined.
We can now rewrite the density as
\begin{equation}
\left\vert \Psi _{T}\left( X,\tau \right) \right\vert ^{2}\simeq \frac{%
\left( 1+\tau _{fp}^{2}\right) }{\sqrt{\pi \left( 1+\tau ^{2}\right) }}\exp
\left( -\frac{K_{i}^{2}\left[ \tau _{fp}-\tau \right] ^{2}}{\left( 1+\tau
^{2}\right) }-\ln \left[ \left( \epsilon _{i}\tau -1\right) ^{2}+\left( \tau
_{fp}+\epsilon _{i}\right) ^{2}\right] \right)
\end{equation}%
and ask when the exponent is maximized as a function of the reduced time $%
\tau $. Defining
\begin{equation}
G\left( \tau \right) =\frac{K_{i}^{2}\left[ \tau _{fp}-\tau \right] ^{2}}{%
\left( 1+\tau ^{2}\right) }+\ln \left[ \left( \epsilon _{i}\tau -1\right)
^{2}+\left( \tau _{fp}+\epsilon _{i}\right) ^{2}\right]
\end{equation}%
we note that%
\begin{equation}
\frac{dG\left( \tau \right) }{d\tau }=-2\frac{K_{i}^{2}\left[ \tau
_{fp}-\tau \right] }{\left( 1+\tau ^{2}\right) }-2\tau \frac{K_{i}^{2}\left[
\tau _{fp}-\tau \right] ^{2}}{\left( 1+\tau ^{2}\right) ^{2}}+\frac{\epsilon
_{i}2\left( \epsilon _{i}\tau -1\right) }{\left[ \left( \epsilon _{i}\tau
-1\right) ^{2}+\left( \tau _{fp}+\epsilon _{i}\right) ^{2}\right] }.
\end{equation}%
Setting the derivative equal to zero
\begin{equation}
\frac{K_{i}^{2}\left[ \tau _{fp}-\tau \right] }{\left( 1+\tau ^{2}\right) }%
+\tau \frac{K_{i}^{2}\left[ \tau _{fp}-\tau \right] ^{2}}{\left( 1+\tau
^{2}\right) ^{2}}=\frac{\epsilon _{i}\left( \epsilon _{i}\tau -1\right) }{%
\left[ \left( \epsilon _{i}\tau -1\right) ^{2}+\left( \tau _{fp}+\epsilon
_{i}\right) ^{2}\right] }.
\end{equation}%
Looking for a solution of the form
\begin{equation}
\tau =\tau _{fp}\left( 1-\Delta \tau \right)
\end{equation}%
and assuming that $\Delta \tau \ll 1$ and remembering that $\tau _{fp}\gg 1$
leads to the solution%
\begin{equation}
\Delta \tau \simeq \frac{\epsilon _{i}^{2}}{K_{i}^{2}\left[ 1+\epsilon
_{i}^{2}\right] }=\tau _{im,T}\left( K_{i}\right)
\end{equation}%
and this is precisely the momentum filtering effect. Due to the increase of the transmission probability with energy, the high-energy components of the incident wavepacket are preferentially transmitted so that the flight time is reduced \cite{Filinov}.
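To make the leading-order balance explicit: with $\tau =\tau _{fp}\left( 1-\Delta \tau \right) $, $\Delta \tau \ll 1$ and $\tau _{fp}\gg 1$, the left-hand side of the stationarity condition reduces to $K_{i}^{2}\Delta \tau /\tau _{fp}$ while, to leading order, the right-hand side reduces to $\epsilon _{i}^{2}/\left[ \tau _{fp}\left( 1+\epsilon _{i}^{2}\right) \right] $, so that
\begin{equation*}
\frac{K_{i}^{2}\Delta \tau }{\tau _{fp}}\simeq \frac{\epsilon _{i}^{2}}{\tau _{fp}\left( 1+\epsilon _{i}^{2}\right) },
\end{equation*}%
from which the expression for $\Delta \tau $ above follows immediately.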
\subsubsection{Two-particle dynamics}
The composite initial wavefunction of the two particles is given by Eq. (\ref{2.6}) and the time evolved wavefunctions by Eq. (\ref{2.9}) for free particle evolution. For the $\delta$-tunneling dynamics, the initial wave function is the same but Eq. (\ref{2.9}) has to be replaced by
\begin{equation}\label{3.34}
\Psi _{k,T}\left( X, z, \tau \right) = \Psi _{1,T}\left( X,\tau \right) \left[\Psi _{2,T}\left(z,\tau \right)+\Psi _{2,R}\left(z,\tau \right)\right] + h_k \, \Psi _{2,T}\left( X,\tau \right) \left[\Psi _{1,T}\left(z,\tau \right)+\Psi _{1,R}\left(z,\tau \right)\right] \,
\end{equation}
for the total transmitted wavefunction and
\begin{equation}\label{3.35}
\Psi _{k,R}\left( X, z, \tau \right) = \Psi _{1,R}\left( X,\tau \right) \left[\Psi _{2,T}\left(z,\tau \right)+\Psi _{2,R}\left(z,\tau \right)\right] + h_k \, \Psi _{2,R}\left( X,\tau \right) \left[\Psi _{1,T}\left(z,\tau \right)+\Psi _{1,R}\left(z,\tau \right)\right] \,
\end{equation}
for the total reflected wavefunction, $ \Psi _{1,T}$, $ \Psi _{2,T}$, $ \Psi _{1,R}$ and $ \Psi _{2,R}$ being the
transmitted and reflected wave functions for each particle when considered to be independent. The one-particle mean flight time is given by
\begin{equation}
\left\langle \tau_k \right\rangle _{T,R}=\int_{0}^{\infty }d\tau \, \tau \, P_{k;T,R}\left( X_f, \tau \right) , \label{3.36}
\end{equation}%
where the probability distribution is
\begin{equation}
P_{k;T,R}\left( X_f, \tau \right) =\frac{\int_{-\infty }^{\infty }dz\left\vert \Psi
_{k;T,R}\left( X_f,z;\tau \right) \right\vert ^{2}}{\int_{0}^{\infty }d\tau
\int_{-\infty }^{\infty }dz\left\vert \Psi _{k;T,R}\left( X_f,z,\tau \right)
\right\vert ^{2}} \label{3.37}
\end{equation}%
with $\, k=D, B, F$. The screen is located at $X=\pm X_f$ depending on whether we are considering the transmitted (plus sign) or reflected (minus sign) total wave function. As mentioned above,
these distributions and mean times are well defined in the presence of a potential, since the density decays at long times
as $\tau ^{-3}$.
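To illustrate how these quantities can be evaluated in practice, the following minimal Python sketch (ours, not the code used for the results reported below) computes the transmitted one-particle mean flight time of Eq. \ref{3.36} from the distribution of Eq. \ref{3.37}, using the asymptotic wavepackets of Eqs. \ref{3.21} and \ref{3.22}. We read the bracketed sums in Eqs. \ref{3.34} and \ref{3.35} piecewise, i.e. the reflected branch for $z<0$ and the transmitted branch for $z>0$; the integration grids are merely indicative and must cover the support of the wavepackets and of the time distribution:
\begin{verbatim}
import numpy as np

def psi_fp(X, Xi, Ki, tau):
    # Free-particle wavepacket, Eq. (2.13).
    return ((1.0 / np.pi) ** 0.25 / np.sqrt(1.0 + 1j * tau)
            * np.exp(-0.5 * ((X - Xi) - 1j * Ki) ** 2 / (1.0 + 1j * tau)
                     - 0.5 * Ki ** 2))

def psi_T(X, Xi, Ki, tau, eps):
    # Transmitted wavepacket, asymptotic form Eq. (3.21), valid for X >= 0.
    ZT = (Ki + 1j * (X - Xi)) / (1.0 + 1j * tau)
    return psi_fp(X, Xi, Ki, tau) * ZT / (ZT + 1j * eps)

def psi_R(X, Xi, Ki, tau, eps):
    # Reflected wavepacket, asymptotic form Eq. (3.22), valid for X <= 0.
    ZR = (Ki - 1j * (X + Xi)) / (1.0 + 1j * tau)
    return psi_fp(X, Xi, Ki, tau) - psi_fp(-X, Xi, Ki, tau) * eps / (eps - 1j * ZR)

def psi_both(z, Xi, Ki, tau, eps):
    # Partner wavefunction over the whole line: reflected branch for z < 0,
    # transmitted branch for z > 0 (our reading of the brackets in Eq. (3.34)).
    return np.where(z < 0.0, psi_R(z, Xi, Ki, tau, eps),
                    psi_T(z, Xi, Ki, tau, eps))

def mean_transmitted_time(Xf, X1i, K1i, X2i, K2i, h, eps):
    tau = np.linspace(1.0e-3, 150.0, 1500)      # covers the peak near tau ~ 75
    z, dz = np.linspace(-1500.0, 2000.0, 35001, retstep=True)
    p = np.empty_like(tau)
    for i, t in enumerate(tau):                  # unnormalized P_{k;T}(X_f, tau)
        psi = (psi_T(Xf, X1i, K1i, t, eps) * psi_both(z, X2i, K2i, t, eps)
               + h * psi_T(Xf, X2i, K2i, t, eps) * psi_both(z, X1i, K1i, t, eps))
        p[i] = np.sum(np.abs(psi) ** 2) * dz
    dtau = tau[1] - tau[0]
    p /= np.sum(p) * dtau                        # normalization, Eq. (3.37)
    return np.sum(tau * p) * dtau                # mean flight time, Eq. (3.36)

# Example: conditions (I), bosons (h = +1), screen at X_f = 450, eps = 1.
print(mean_transmitted_time(450.0, -301.0, 10.0, -299.0, 10.0, +1, 1.0))
\end{verbatim}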
\renewcommand{\theequation}{4.\arabic{equation}} \setcounter{section}{3} %
\setcounter{equation}{0}
\section{Numerical Results}
\subsection{Free particle non-relativistic flight times}
To analyze the role played by the symmetry of the total wave function in systems of identical particles, both for free dynamics
and for tunneling through a delta barrier, we have identified two types of initial conditions (in reduced coordinates): (I) different locations with the same
momenta, $X_{1i}=-301$, $X_{2i}= -299$ with $K_{1i}=K_{2i}=10$, and (II) the same location with different momenta, $X_{1i}=X_{2i}= -300$
with $K_{1i}=10.1$ and $K_{2i}=9.9$. When the differences in initial positions
and momenta are larger, the cross terms in the density of the two-particle system become smaller, and one rapidly reaches the
distinguishable-particle limit. Unless otherwise stated, we will assume $\Gamma = 0.01$ for the initial width of the Gaussian functions and
$\epsilon = 1$ for the strength of the coupling to the delta barrier. In all cases, the position of the screen is at $X_f= \pm 450$ and the delta barrier is located
at the origin.
\begin{figure}
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{Initial-distribution1-eps-converted-to.pdf}
\label{fig1a}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{Time-distribution1-eps-converted-to.pdf}
\label{fig1b}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{Initial-distribution2-eps-converted-to}
\label{fig1c}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{Time-distribution2-eps-converted-to.pdf}
\label{fig1d}
\end{subfigure}
\caption{Non-relativistic free particle dynamics. The left panels show the initial spatial distributions, the right panels the relative one-particle flight time distributions, as defined in Eq. \ref{2.12}, for particles hitting the screen located at $X_f=450$. The other parameters used are $\Gamma= 0.01$ for the initial width of the coherent states; for case (I) (initial spatial difference) $X_{1i}=-301$, $X_{2i}= -299$, $K_{1i}=K_{2i}=10$, left and right top panels; and for case (II) (initial momentum difference) $X_{1i}=X_{2i}= -300$, $K_{1i}=10.1$, $K_{2i}=9.9$, left and right bottom panels.} \label{fig1}
\end{figure}
\begin{figure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{O-function1-eps-converted-to.pdf}
\label{fig2a}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{O-function2-eps-converted-to.pdf}
\label{fig2b}
\end{subfigure}
\caption{Overlap decay (O-function) defined in Eq. \ref{2.34} for non-relativistic free particles. The left and right panels are for initial spatial (case I) and momentum (case II) differences, respectively. The initial width of the Gaussians here is taken to be $\Gamma=0.01$. } \label{fig2}
\end{figure}
In Figure \ref{fig1}, we plot the initial spatial distributions (left panels) and the one-particle flight time distributions (see Eq. \ref{2.12}) (right panels) for the two sets of initial conditions. The top two panels are for case (I) (initial spatial difference); the bottom two are for case (II) (initial momentum difference).
The solid red curve is used for distinguishable particles, the
long-dashed blue curve for bosons and the dot-dashed brown curve for fermions. The one-particle flight time distribution is broader for fermions, displaying early and late arrivals. Bosons tend to arrive later than distinguishable particles, showing the narrowest time distribution.
Furthermore, the flight time distributions of distinguishable particles
and bosons tend to have a similar shape, losing the two lobes of the initial density, whereas
the fermions display a double-peaked time distribution, reflecting their initial
density. In the bottom-right panel, bosons and distinguishable particles behave essentially identically
whereas fermions display a bimodal time distribution. Fermions not only arrive at the screen earlier and later; there is also a distinctive time asymmetry
in the flight time distribution.
We attribute these different behaviors to the bunching and anti-bunching properties of bosons and fermions, respectively. Specifically, the early arrival of fermions at the screen is related to the `front' of the initial fermionic density distribution, which, due to the anti-bunching effect, is closer to the origin than in the case of bosons and distinguishable particles. The same is true for the portion of the fermionic flight time distribution which arrives later at the screen. It is due to the back of the initial fermionic density, which is further from the origin, when compared to bosons or distinguishable particles.
In a previous work where we studied the MacColl-Hartman effect we have argued against the `front' of the wavepacket being used to explain away supposedly superluminal propagation for \textit{tunneling} particles \cite{dumont2020,rivlin2020}, noting that the superluminality cannot be used for the purpose of early signaling. When considering the MacColl-Hartman effect, one is comparing final time distributions of tunneled particles with free particles, but the two have the same initial density distribution. In the case considered here, the initial wavepacket of the fermions is broadened when compared to that of the bosons. It is this broadening which leads to early and late arrival times of fermions as compared to bosons, meaning that one does not have to consider here the possibility of superluminality.
Another interesting aspect considered here is the analysis of the survival probability, using Eq. (\ref{2.34}) together with its limits for small initial spatial ($\Delta_X$) and momentum ($\Delta_K$) differences and the associated short-time dependence, Eqs. (\ref{2.38}) and (\ref{2.39}). In Figure \ref{fig2} we plot the overlap functions ($O$-functions, see Eq. \ref{2.34}) for initial conditions (I) (initial spatial difference) in the left panel and for case (II) (initial momentum difference) in the right panel.
As in Fig. \ref{fig1}, the red line indicates the behavior for distinguishable particles, the long-dashed blue curve is used for bosons and the dot-dashed brown curve for
fermions. Dotted black and green curves correspond to the limits of small values of $\Delta_X$, $\Delta_K$ and time. The time dependence of the corresponding overlap functions is quite different for bosons and fermions. The bosonic overlap stays close to unity for a long time whereas the fermionic overlap decays rapidly.
\subsection{Free particle relativistic flight time distributions}
Figure \ref{fig3} shows the time-dependent density at the screen (at $X=0$) for two photons and two electrons traveling near the speed of light. In the left panel, the two Gaussians are centered at $X_{1i}=-3.5$ and $X_{2i}=-3$ and at a wavenumber consistent with the velocity $v=0.99c$. In the right panel, the two Gaussians are centered at $X_{1i}=X_{2i}=-3$ and at wavenumbers consistent with $v=0.984c$ and $0.996c$. In both cases, an electron is more likely to arrive at the screen before a photon. However, this is simply because the initial density for the electrons is broader than that of the photons. To show this, the density that would be seen if the electrons traveled dispersion-free at the speed of light is also shown (dotted lines). The observed electron density clearly travels with a speed less than $c$. The early and late arrivals of the fermions are just a reflection of their initial density, which, as may be seen from the left panels of Fig. \ref{fig1}, is broader than the initial distribution for bosons, due to the anti-bunching effect of fermions. The early arrival times are a reflection of the initial width of the packet -- this is the same for both the relativistic and the non-relativistic regimes.
\begin{figure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{zm60m70vp99p99w20.png}
\label{fig3a}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{zm60m60vp984p996w20.png}
\label{fig3b}
\end{subfigure}
\caption{\noindent Time-dependent densities for two photons (dashed lines) and two relativistic electrons (solid lines). The left and right panels are for initial spatial (case I) and momentum (case II) differences, respectively. The initial width of the Gaussians here is taken to be $\Gamma=0.0025$. Also shown (dotted lines) are the densities that would be seen if the electrons traveled dispersion-free at the speed of light.} \label{fig3}
\end{figure}
\subsection{Identical particle, non-relativistic flight times for scattering on a delta potential barrier}
In this case, as noted above, the mean flight times (Eqs. \ref{3.36} and \ref{3.37}) are well defined, and they provide a clear measure of the effect of particle symmetry on the flight time.
In Table \ref{table1}, we provide the transmitted and reflected mean flight times (in reduced coordinates) for bosons $<\tau_{1,B}>_{T,R}$ and
fermions $<\tau_{1,F}>_{T,R}$ with $\Gamma=0.01$ and $\epsilon=1$ for the two cases of initial spatial (I) and momentum (II) differences. Transmitted mean times
are always shorter than reflected mean times due to the momentum filtering effect, and those of fermions are always greater than those of bosons due to their anti-bunching
and bunching properties, respectively.
\begin{table}
\caption{Initial conditions and transmitted (T) and reflected (R) mean flight times (in reduced coordinates) for bosons, $<\tau_{1,B}>_{T,R}$, and fermions, $<\tau_{1,F}>_{T,R}$, with $\Gamma=0.01$ and $\varepsilon=1$.}
\begin{tabular}{||c|c|c|c||c|c||c|c||}
\hline $X_{1i}$& $X_{2i}$ & $X_f$ & $K_{1i}, K_{2i}$ & $<\tau_{1,B}>_T$ & $<\tau_{1,B}>_R $ & $<\tau_{1,F}>_T$ & $<\tau_{1,F}>_R $ \\
%
\hline -301 & -299 & $\pm$ 450 & 10, 10 &75.2908 & 75.8847 & 75.4992 & 76.5237 \\
%
\hline -300 & -300& $\pm$ 450 & 10.1, 9.9 &75.3749 & 76.1740 & 76.7417 &77.3525 \\
%
\hline
\hline
%
%
%
\end{tabular}
\label{table1}
\end{table}
The corresponding flight time probability distributions (Eq. \ref{3.37}) are shown in Fig. \ref{fig4}.
The top and bottom panels correspond to initial spatial (case I) and initial momentum (case II) differences; the left and right panels correspond to the transmitted and reflected flight time distributions, respectively. The trends are similar to those found in the free particle dynamics scenario. The effect of symmetry on flight times seems to be robust and is not changed much in the presence of an interaction barrier. Time distributions are broadest for fermions and narrowest for bosons, with distinguishable particles in between. The asymmetry of the bimodal reflected distributions for fermions is less pronounced than that of the transmitted ones.
\begin{figure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{Transmitted-distribution1-eps-converted-to.pdf}
\label{fig4a}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{Reflected-distribution1-eps-converted-to.pdf}
\label{fig4b}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{Transmitted-distribution2-eps-converted-to.pdf}
\label{fig4c}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{Reflected-distribution2-eps-converted-to.pdf}
\label{fig4d}
\end{subfigure}
\caption{Tunneling $\delta$-barrier dynamics with $\Gamma= 0.01$. One-particle transmitted and reflected flight time distributions for conditions (I) are shown in the left and right top panels and for conditions (II) in the left and right bottom panels. } \label{fig4}
\end{figure}
Finally, it is also of interest to analyze the role played by the initial width ($\Gamma$) of the Gaussian wavepackets in the mean tunneling flight times, which are well defined, especially when $\Gamma$ approaches zero, such that the spatial extent of the two Gaussians is large, which creates large initial overlaps. This analysis is carried out by fitting a linear function ($b \Gamma + c$) to the numerical results for eight values of the initial width,
$\Gamma= 10^{-2}, 0.25 \times 10^{-2}, 9.0 \times 10^{-4}, 4.0 \times 10^{-4}, 10^{-4}, 0.25 \times 10^{-4}, 9.0 \times 10^{-6}, 4.0 \times 10^{-6}$.
In Figure \ref{fig5}, the transmitted (left panel) and reflected (right panel) mean flight times versus $\Gamma$ for distinguishable particles, bosons and fermions
are plotted. The legend is the same one used throughout this work for each particle. In every case, the quality of the fit is very good. All particles tend to a value of $c= 75.005$, which is the phase time under conditions (I). Thus, the symmetry seems to play no role in the mean tunneling flight times, since the phase time for a single particle is recovered in the limit $\Gamma \rightarrow 0$.
\begin{figure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{GammaT1-eps-converted-to.pdf}
\label{fig5a}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{GammaR1-eps-converted-to.pdf}
\label{fig5b}
\end{subfigure}
\caption{One-particle transmitted (left panel) and reflected (right panel) mean flight times versus $\Gamma$ for distinguishable particles, bosons and fermions under conditions (I). The legend is the same one used throughout this work for each particle.} \label{fig5}
\end{figure}
\section{Discussion}
In this work, we have analyzed the influence of the symmetry of the wavefunction for a system consisting of two non-interacting identical particles. The well-known bunching and anti-bunching properties exhibited by bosons and fermions, respectively, translate into the spreading of the initial spatial densities and of the flight time distributions, with and without an interaction potential such as a delta barrier, leading to early arrivals for fermions. Interestingly, the effect of the symmetry of the wave function on the flight time distributions seems to be robust, but the symmetry does not affect the mean tunneling flight times. When the short-time dynamics is analyzed through the survival probability, fermions tend to decay faster than bosons, which has profound implications for the well-known Zeno effect.
In our model, we have considered non-interacting identical particles. This is clearly an oversimplification of the real problem since, for example, electrons interact with each other through the long-range Coulomb repulsion. As far as we know, very few studies have addressed the effect of symmetry on flight times and the Zeno effect, even though it should be readily accessible. For example, the triplet states of two electrons will give a fermionic spatial wavefunction while the singlet state gives a bosonic one. In a scattering experiment similar to the one discussed in Refs. \cite{grossmann2014,buchholz2018}, the two particles in the center of mass frame will approach each other such that the distance between the two particles becomes small, and the symmetry will affect the time of flight distribution of the two particles. Due to the anti-bunching property of fermions, one should expect broader temporal distributions of the scattered particles.
Lozovik {\it et al.} \cite{Filinov2} considered tunneling of two interacting particles in a double-well potential. They used quantum molecular dynamics within the Wigner representation and found that exchange effects are very important and affect the tunneling. However, this work does not mention flight time distributions. It is thus possible, at least in principle, to study the effect of particle symmetry on flight time distributions of identical electrons without neglecting the repulsive interaction between them, though the actual numerical implementation, especially in the relativistic regime, is much more challenging.
Finally, we note that the study of symmetry on flight time distributions presented in this paper may be generalized to anyons, by introducing a phase in the initial distribution.
\vspace{1cm}
\noindent
{\bf Acknowledgements}
TR and EP acknowledge support from the Israel Science Foundation, SMA acknowledges support from the Ministerio de Ciencia, Innovaci\'on y Universidades (Spain)
under the Project FIS2017-83473-C2-1-P and Fundaci\'on Humanismo y Ciencia.
\bigskip
| {
"timestamp": "2021-06-02T02:19:26",
"yymm": "2106",
"arxiv_id": "2106.00380",
"language": "en",
"url": "https://arxiv.org/abs/2106.00380",
"abstract": "In this work, our purpose is to show how the symmetry of identical particles can influence the time evolution of free particles in the nonrelativistic and relativistic domains. For this goal, we consider a system of either two distinguishable or indistinguishable (bosons and fermions) particles. Two classes of initial conditions have been studied: different initial locations with the same momenta, and the same locations with different momenta. The flight time distribution of particles arriving at a `screen' is calculated in each case. Fermions display broader distributions as compared with either distinguishable particles or bosons, leading to earlier and later arrivals for all the cases analyzed here. The symmetry of the wave function seems to speed up or slow down propagation of particles. Due to the cross terms, certain initial conditions lead to bimodality in the fermionic case. Within the nonrelativistic domain and when the short-time survival probability is analyzed, if the cross term becomes important, one finds that the decay of the overlap of fermions is faster than for distinguishable particles which in turn is faster than for bosons. These results are of interest in the short time limit since they imply that the well-known quantum Zeno effect would be stronger for bosons than for fermions.Fermions also arrive earlier than bosons when they are scattered by a delta barrier. Furthermore, the particle symmetry does not affect the mean tunneling flight time and it is given by the phase time for the distinguishable particle.",
"subjects": "Quantum Physics (quant-ph)",
"title": "The influence of the symmetry of identical particles on flight times",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9362850057480346,
"lm_q2_score": 0.7577943767446202,
"lm_q1q2_score": 0.709511512386165
} |
https://arxiv.org/abs/0707.1357 | Calculations of canonical averages from the grand canonical ensemble | Grand canonical and canonical ensembles become equivalent in the thermodynamic limit, but when the system size is finite the results obtained in the two ensembles deviate from each other. In many important cases, the canonical ensemble provides an appropriate physical description but it is often much easier to perform the calculations in the corresponding grand canonical ensemble. We present a method to compute averages in canonical ensemble based on calculations of the expectation values in grand canonical ensemble. The number of particles, which is fixed in the canonical ensemble, is not necessarily the same as the average number of particles in the grand canonical ensemble. | \section{Introduction}
Whenever we need to work with a system of many quantum particles, it is much easier to perform the calculation in the grand canonical ensemble
than the corresponding calculation in the canonical ensemble.
For example, the calculation of the grand canonical partition function for an ideal gas of
fermions or bosons is trivial, whereas the computation of the canonical partition
function for the same system becomes a formidable task even for a small number of particles.
Provided that the system is thermodynamically large, the canonical and
grand canonical descriptions agree with each other.
There are many quantum systems where the canonical description is more appropriate;
these include hot nuclei \cite{rossignoli94}, ultrasmall metallic grains \cite{delft},
Bose and Fermi gases in atomic traps \cite{politzer96,herzog97} and atoms in plasmas \cite{gilleron}.
Therefore, it is important to have a practical theoretical and computational method which enables us
to extract canonical averages from the corresponding grand canonical calculations.
The problem which we would like to solve in this paper is the
following. Suppose that we can perform calculations (or
measurements) in grand canonical ensemble which is characterized
by temperature $T$ and chemical potential $\mu$. We would like to
compute expectation values of an arbitrary quantity $O$ in the
canonical ensemble with temperature $T$ and the number of
particles $n$ by using only the averages in the grand canonical
ensemble. The number of particles $n$, which is fixed in canonical
ensemble, is not necessarily the same as the average number of
particles $\langle N \rangle$ in the grand canonical ensemble.
Earlier works on particle number projection in grand canonical
ensembles have been performed
to treat hot nuclei\cite{rossignoli94,rodriguez05,tanabe05,nakada06}, Bose-Einstein
condensation\cite{politzer96,herzog97,fujiwara70}
and to formulate canonical statistical mean field approximation for mesoscopic systems\cite{ ponomarenko}.
Our approach is
very different. First, we do not rely on projection operators but
extract information from the grand canonical averages by
inverting a fluctuation matrix. Second, $\langle N \rangle$ does not
necessarily equal $n$, although it can.
\section{Theory}
We consider a quantum mechanical system with Hamiltonian $H$. The Hamiltonian $H$
commutes with the number of particles operator $N$:
\begin{equation}
[H,N]=0.
\end{equation}
Therefore the Hamiltonian and the number of particles operator have the same set of eigenvectors:
\begin{equation}
N |n\alpha \rangle =n |n \alpha \rangle,
\end{equation}
\begin{equation}
H |n\alpha \rangle =E_{n\alpha} |n\alpha \rangle.
\end{equation}
Let $O$ be an arbitrary operator.
The average in the grand canonical ensemble is
\begin{equation}
\langle O \rangle =\frac{1}{Z} \sum_{n \alpha}
e^{-\beta(E_{n\alpha} -\mu n)} \langle n\alpha |O|n \alpha
\rangle, \label{gca}
\end{equation}
with $Z$ being the grand canonical partition function
\begin{equation}
Z=\sum_{n \alpha} e^{-\beta(E_{n\alpha} -\mu n)},
\label{gcpart}
\end{equation}
and $\beta=1/kT$.
The average in canonical ensemble is
\begin{equation}
\langle O \rangle_n=\frac{1}{Z_n} \sum_{ \alpha} e^{-\beta
E_{n\alpha} } \langle n\alpha |O|n \alpha \rangle,
\end{equation}
with $Z_n$ being the canonical partition function
\begin{equation}
Z_n=\sum_{ \alpha} e^{-\beta E_{n\alpha} }.
\end{equation}
We can consider $\langle O \rangle_n$ as a function of $n$ and
expand it as a power series:
\begin{equation}
\langle O\rangle_n=\sum_{k=0}^{\infty} q_k n^k.
\end{equation}
With this expansion the grand canonical average (\ref{gca}) becomes (appendix A):
\begin{equation}
\langle O \rangle = \langle \sum_{k=0}^{\infty} q_k N^k \rangle.
\end{equation}
Next, we take the canonical expectation value and add/subtract the grand canonical average
\begin{equation}
\langle O\rangle_n= \sum_{k=0}^{\infty} q_k n^k + \langle O\rangle - \langle
\sum_{k=0}^{\infty} q_k N^k \rangle.
\end{equation}
Then the $k=0$ terms in the two sums cancel, and the canonical expectation value becomes
\begin{equation}
\langle O \rangle_n= \langle O \rangle -\sum_{k=1}^{\infty} q_k
(\langle N^k \rangle -n^k). \label{expansion}
\end{equation}
This expression for the canonical average of the operator $O$ involves
only grand canonical expectation values.
The coefficients $q_k$ remain to be determined. To determine them, we introduce the operator
\begin{equation}
\bar{O}= O- \sum_{k=1}^{\infty} q_k N^k
\label{baro}
\end{equation}
and compute the expectation value
$ \langle\bar{O} f(N) \rangle$, where $f(N)$ is an arbitrary
function of $N$. We show in appendix A that this expectation value
can be split:
\begin{equation}
\langle \bar{O} f(N)\rangle = \langle \bar{O} \rangle \langle
f(N)\rangle. \label{eqforq}
\end{equation}
Since $f(N)$ is an arbitrary function of $N$, Eq.(\ref{eqforq}) is equivalent to the following system of equations
\begin{equation}
\left.
\begin{array}{c}
\langle \bar{O} N \rangle
= \langle \bar{O}\rangle \langle N\rangle \\
\langle \bar{O} N^2\rangle
= \langle \bar{O} \rangle\langle N^2\rangle \\
\vdots \\
\langle \bar{O} N^k\rangle
= \langle \bar{O} \rangle\langle N^k\rangle \\
\vdots
\end{array}
\right\}
\label{soe}
\end{equation}
If we use the explicit form for operator $\bar{O}$ (\ref{baro}), the system of equations (\ref{soe}) becomes:
\begin{equation}
\sum_{k=1}^{\infty} q_k A_{km} =\langle ON^m\rangle -\langle O\rangle \langle N^m\rangle,
\label{soe2}
\end{equation}
where the matrix $A_{km}$ is built from the fluctuations
\begin{equation}
A_{km} =\langle N^{k+m}\rangle -\langle N^k\rangle\langle N^m\rangle.
\end{equation}
The system of linear algebraic equations (\ref{soe2}), along with the expansion (\ref{expansion}),
represents the main result of the paper. Similar equations (although for the case
$\langle N\rangle=n$) were obtained by methods of thermofield dynamics in the context of
superconducting nuclei at finite temperature \cite{kosov96a,kosov96b}.
It can be readily shown by direct differentiation of the partition function
(\ref{gcpart}) that
\begin{equation}
A_{km} =\frac{1}{Z^2 \beta^{k+m}} \left( Z \frac{\partial^{k+m} Z}{\partial \mu^{k+m}} -
\frac{\partial^{k} Z}{\partial \mu^{k}} \frac{\partial^{m} Z}{\partial \mu^{m}}
\right).
\label{adir}
\end{equation}
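For readers who wish to experiment with the scheme numerically, the following minimal sketch
(in Python with NumPy; the function and variable names are ours and are not part of the formalism)
shows how, given the grand canonical moments $\langle N^j\rangle$ and the mixed moments
$\langle ON^m\rangle$, one may truncate the sums at some order $k_{max}$, build the fluctuation
matrix directly from the moments, solve the truncated system (\ref{soe2}), and evaluate the
truncated expansion (\ref{expansion}):
\begin{verbatim}
import numpy as np

def canonical_average(O_avg, ON_moments, N_moments, n, kmax):
    # O_avg           : grand canonical average <O>
    # ON_moments[m-1] : <O N^m>,  m = 1 .. kmax
    # N_moments[j-1]  : <N^j>,    j = 1 .. 2*kmax
    # n               : particle number fixed in the canonical ensemble
    # kmax            : truncation order of the expansion
    A = np.empty((kmax, kmax))
    rhs = np.empty(kmax)
    for k in range(1, kmax + 1):
        for m in range(1, kmax + 1):
            # A_{km} = <N^{k+m}> - <N^k><N^m>
            A[k - 1, m - 1] = (N_moments[k + m - 1]
                               - N_moments[k - 1] * N_moments[m - 1])
    for m in range(1, kmax + 1):
        # right-hand side of the truncated system (soe2)
        rhs[m - 1] = ON_moments[m - 1] - O_avg * N_moments[m - 1]
    q = np.linalg.solve(A, rhs)            # coefficients q_1, ..., q_kmax
    correction = sum(q[k - 1] * (N_moments[k - 1] - n ** k)
                     for k in range(1, kmax + 1))
    return O_avg - correction              # truncated Eq. (expansion)
\end{verbatim}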
Now we prove the convergence of the expansion (\ref{expansion}).
It can be demonstrated by simple considerations that
the difference between the canonical and grand canonical averages scales as \cite{mcquarrie}
\begin{equation}
\langle O \rangle - \langle O \rangle_n \sim \frac{1}{\langle N \rangle}.
\label{o-on}
\end{equation}
The calculation in appendix C shows that for $k>1$
\begin{equation}
q_{k}\sim1/\left\langle N\right\rangle ^{k}.
\label{qscale}
\end{equation}
Suppose that $n=\langle N \rangle$. Then $(\langle N^k \rangle -n^k)
\sim \langle N \rangle^{k-1}$ for large $\langle N \rangle$.
Therefore, the corrections to the grand canonical average in Eq.(\ref{expansion}) become in this case
\begin{equation}
\sum_{k=1}^{\infty} q_k
(\langle N^k \rangle -n^k) \simeq \frac{1}{\langle N \rangle} \sum_{k=1}^{\infty} c_k ,
\label{limit}
\end{equation}
where $c_k$ are some coefficients.
Comparing (\ref{o-on}) and (\ref{limit}) we see that $\sum_{k=1}^{\infty} c_k$ is finite. This means that
the expansion (\ref{expansion}) converges when $\langle N \rangle =n$.
We would like to note at this point that each term in the sum (\ref{expansion}) is
$\sim 1/\langle N \rangle$. This means that the method becomes computationally efficient when it is applied to large systems, since fewer terms need to be included in the expansion to achieve the same accuracy.
Let us consider the corrections to the grand canonical average in Eq.(\ref{expansion})
for the case $n=\langle N \rangle +j$, where $j$ is an integer,
\begin{eqnarray}
\langle O \rangle - \langle O \rangle_n =
\sum_{k=1}^{\infty} q_k
(\langle N^k \rangle -[\langle N \rangle +j]^k).
\label{j1}
\end{eqnarray}
We assume that $ j/\langle N \rangle \ll1 $. If we substitute the expression for
$\langle N^k \rangle$ (\ref{c17}) into (\ref{j1}) and use the binomial expansion
up to the first order in $j/\langle N\rangle$ for $[\langle N \rangle +j]^k$, we obtain
\begin{equation}
\langle O \rangle - \langle O \rangle_n =
\sum_{k=1}^{\infty} q_k \langle N \rangle^{k-1} \frac{c k (k-1)}{2} \left [1- \frac{2j}{c (k-1)} \right].
\label{series}
\end{equation}
The series (\ref{series}) always converges for $j=0$ as we just demonstrated. To prove the convergence for $j\ne0$ we split the sum (\ref{series}) into two parts: the first part is for $0 < k \le K$ and the second part is for $K<k< \infty$, where $K$ is some positive integer. The first part is always finite and $\sim 1/ \langle N \rangle $. By choosing $K$, we can always make
$\left [1- \frac{2j}{c (k-1)} \right] \simeq 1$ for $k>K$. Therefore, the convergence of the
expansion (\ref{expansion}) for $\langle N \rangle =n$ case implies the convergence for any finite
difference $|\langle N \rangle -n |$ for which $\frac{|\langle N \rangle -n |}{\langle N \rangle} \ll 1$.
We note that these arguments may not work when the matrix $A_{km}$ is singular.
For example, in the case of a low temperature Fermi gas, all matrix
elements $A_{km}$ tend to zero; therefore the canonical and grand
canonical descriptions may deviate from each other in the
thermodynamic limit due to the persistence of
few-particle fluctuations in the grand canonical ensemble
\cite{bowen}.
\section{Example calculations}
As an example we consider a system of noninteracting quantum particles distributed over
$n_{levels}$ single-particle energy levels with
energies $\varepsilon_{l}$. The logarithm of the grand canonical partition function is
\begin{equation}
\ln Z= \eta \sum_{l=1}^{n_{levels}} \ln\left( 1+ \eta e^{\beta(\mu-\varepsilon_l)} \right),
\label{fdpartition}
\end{equation}
where $\eta=+1$ is for fermions and $\eta=-1$ is for bosons.
We set $\beta=1$, $\varepsilon_l=l$, and $n_{levels}=5$ in all our calculations.
\begin{table}
\caption{Occupation numbers and total energy for fermions. FD
refers to the Fermi-Dirac statistics in grand canonical ensemble
($\langle N \rangle=4$). The number of particles in the canonical ensemble is 2. $k_{max} $ is the number of terms
included in expansion (\ref{expansion}). }
\begin{tabular}{ccccccccc}
\hline
\hline
& & & \multicolumn{5}{c}{$k_{max}$} & \\
l & $\varepsilon_l$ & FD & 1 & 2 & 3 & 4 & 6 & exact\\
\hline
1 & 1.0 & 0.98 & 0.92 & 0.87 & 0.86 & 0.87 & 0.87 & 0.87 \\
2 & 2.0 & 0.95 & 0.80 & 0.69 & 0.67 & 0.67 & 0.68 & 0.68 \\
3 & 3.0 & 0.87 & 0.52 & 0.34 & 0.34 & 0.30 & 0.30 & 0.30 \\
4 & 4.0 & 0.72 & 0.07 & 0.01 & 0.06 & 0.11 & 0.12 & 0.12 \\
5 & 5.0 & 0.48 & -0.31 & 0.10 & 0.06 & 0.04 & 0.04 & 0.04 \\
\hline
\multicolumn{2}{c}{total energy} & 10.77 & 2.82 & 3.77& 3.78 & 3.79 & 3.79 & 3.79 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Occupation numbers and total energy for bosons. BE refers
to the Bose-Einstein statistics in grand canonical ensemble
($\langle N \rangle=4$). The number of particles in the canonical ensemble is 2. $k_{max} $ is the number of terms
included in expansion (\ref{expansion}). }
\begin{tabular}{ccccccccc}
\hline
\hline
& & & \multicolumn{5}{c}{$k_{max}$} & \\
l & $\varepsilon_l$ & BE & 1 & 3 & 5 & 7 & 11 & exact\\
\hline
1 & 1.0 & 3.43 & 1.52 & 1.52 & 1.47 & 1.44 & 1.42 & 1.42 \\
2 & 2.0 & 0.40 & 0.33 & 0.33 & 0.37 & 0.39 & 0.39 & 0.39 \\
3 & 3.0 & 0.12 & 0.10 & 0.10 & 0.11 & 0.12 & 0.13 & 0.13 \\
4 & 4.0 & 0.04 & 0.04 & 0.04 & 0.04 & 0.04 & 0.05 & 0.05 \\
5 & 5.0 & 0.01 & 0.01 & 0.01 & 0.01 & 0.02 & 0.02& 0.02 \\
\hline
\multicolumn{2}{c}{total energy} & 4.81& 2.68 & 2.69 & 2.77& 2.82 & 2.84 & 2.85 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Occupation numbers and total energy for fermions. FD
refers to the Fermi-Dirac statistics in grand canonical ensemble
($\langle N \rangle=2$). The number of particles in the
canonical ensemble is 4. $k_{max} $ is the number of terms
included in expansion (\ref{expansion}). }
\begin{tabular}{ccccccccc}
\hline
\hline
& & & \multicolumn{5}{c}{$k_{max}$} & \\
l & $\varepsilon_l$ & FD & 1 & 2 & 3 & 4 & 5 & exact\\
\hline
1 & 1.0 & 0.80 & 1.18 & 0.92 & 0.98 & 0.99 & 0.99 & 0.99 \\
2 & 2.0 & 0.60 & 1.18 & 1.01 & 0.97 & 0.93 & 0.97 & 0.97 \\
3 & 3.0 & 0.36 & 0.91 & 1.02 & 0.96 & 0.97 & 0.91 & 0.91 \\
4 & 4.0 & 0.17 & 0.51 & 0.70 & 0.72 & 0.73 & 0.77 & 0.77 \\
5 & 5.0 & 0.07 & 0.23 & 0.35 & 0.38 & 0.37 & 0.36 & 0.36 \\
\hline
\multicolumn{2}{c}{total energy} & 4.10 & 9.42 & 10.53& 10.54 & 10.55 & 10.55 & 10.55 \\
\hline
\end{tabular}
\end{table}
First, we extract canonical ensemble averages for the smaller
system from the grand canonical ensemble of the larger system. We
select the chemical potential $\mu$ in such a way that the average
number of particles $\langle N\rangle$
in the grand canonical ensemble is 4.
We would like to extract the information about the canonical
ensemble with $n=2$ particles from this grand canonical ensemble.
We compute the occupation numbers; all physical quantities,
such as the total energy, can then be calculated from these
occupation numbers. To start our calculations we set $O=n_{l}$,
where $n_l$ is the operator of the number of particles on level
$l$. Then we solve the system of linear equations
(\ref{soe2}) to find the coefficients $q_k$ and use these $q_k$s
to calculate the canonical occupation numbers by
Eq.(\ref{expansion}). The matrix elements $A_{km}$ are computed
by Eq.(\ref{adir}) with the help of the recurrence relation from
appendix B. The expectation value on the right hand side of
Eq.(\ref{soe2}) is computed as the following derivative of the
grand canonical partition function (\ref{fdpartition}):
\begin{equation}
\langle n_l N^m \rangle =- \frac{1}{Z \beta^{m+1}} \frac{\partial^{m+1} Z}{\partial \varepsilon_l \partial \mu^{m}} .
\label{dirme}
\end{equation}
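As an illustration of this procedure, a small sketch is given below (Python with NumPy; it is our
own illustration and not the code used to produce the tables). Instead of the derivative relations
(\ref{adir}) and (\ref{dirme}), it obtains the required grand canonical input for the fermionic
example by direct enumeration of the $2^{5}$ occupation configurations; the chemical potential
$\mu$ has to be tuned separately so that $\langle N\rangle$ takes the desired value.
\begin{verbatim}
import itertools
import numpy as np

beta, eps = 1.0, np.arange(1.0, 6.0)     # beta = 1, eps_l = l, 5 levels

def fermi_gc_data(mu, kmax):
    # grand canonical averages by enumerating the 2^5 fermionic configurations
    occ = np.array(list(itertools.product([0, 1], repeat=5)), dtype=float)
    Ntot = occ.sum(axis=1)
    Etot = occ @ eps
    w = np.exp(-beta * (Etot - mu * Ntot))
    Z = w.sum()
    N_mom = np.array([(w * Ntot ** j).sum() / Z
                      for j in range(1, 2 * kmax + 1)])       # <N^j>
    nl = (w[:, None] * occ).sum(axis=0) / Z                   # <n_l>
    nlN = np.array([[(w * occ[:, l] * Ntot ** m).sum() / Z    # <n_l N^m>
                     for m in range(1, kmax + 1)] for l in range(5)])
    return N_mom, nl, nlN

def canonical_occupations(mu, n, kmax):
    # truncated expansion (expansion) with O = n_l for each level l
    N_mom, nl, nlN = fermi_gc_data(mu, kmax)
    A = np.array([[N_mom[k + m - 1] - N_mom[k - 1] * N_mom[m - 1]
                   for m in range(1, kmax + 1)]
                  for k in range(1, kmax + 1)])
    result = np.empty(5)
    for l in range(5):
        q = np.linalg.solve(A, nlN[l] - nl[l] * N_mom[:kmax])
        result[l] = nl[l] - sum(q[k - 1] * (N_mom[k - 1] - n ** k)
                                for k in range(1, kmax + 1))
    return result
\end{verbatim}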
The results of these calculations are shown in Table I (fermions)
and Table II (bosons). The fermionic and bosonic systems both show
convergence to the exact results as more terms are included in
the expansion (\ref{expansion}). The convergence for bosons is slower
than that for fermions. This is due to the fact that the
fluctuations of the occupation numbers $\langle \Delta n_l^2
\rangle = \langle n_l \rangle - \eta \langle n_l \rangle^2$ tend
to be larger for bosons ($\eta=-1$) than for fermions ($\eta=+1$).
The method also works in the opposite direction; that is, we can
compute canonical ensemble averages for the larger system using
grand canonical averages for the smaller system. We select the
grand canonical ensemble with $\langle N\rangle=2$ and
compute the occupation numbers in the canonical ensemble of $n=4$
particles. Table III shows the results of these calculations for
noninteracting fermions. The convergence to the exact values is
as good as in the previous case, which demonstrates that the
method can also be used to extract canonical ensemble
information for the larger system from grand canonical
calculations of the smaller system. Very similar results were
obtained for bosons and are not shown here.
\section{Conclusions}
We formulated a method to compute averages in the canonical ensemble based on calculations
in the grand canonical ensemble. The number of particles $n$, which is fixed in the canonical ensemble,
is not necessarily the same as the average number of particles $\langle N \rangle$ in the grand canonical ensemble.
The expansion (\ref{expansion}) and the system of linear algebraic equations (\ref{soe2}) for its coefficients
are the main result of the paper.
We performed test calculations for ideal Fermi and Bose gases,
compared them with the exact results, and demonstrated the convergence
properties of the expansion (\ref{expansion}).
\begin{acknowledgments}
This work has been supported by NSF-MRSEC
DMR0520471 at the University of Maryland and by the American
Chemical Society Petroleum Research Fund (44481-G6).
\end{acknowledgments}
| {
"timestamp": "2008-02-04T00:34:38",
"yymm": "0707",
"arxiv_id": "0707.1357",
"language": "en",
"url": "https://arxiv.org/abs/0707.1357",
"abstract": "Grand canonical and canonical ensembles become equivalent in the thermodynamic limit, but when the system size is finite the results obtained in the two ensembles deviate from each other. In many important cases, the canonical ensemble provides an appropriate physical description but it is often much easier to perform the calculations in the corresponding grand canonical ensemble. We present a method to compute averages in canonical ensemble based on calculations of the expectation values in grand canonical ensemble. The number of particles, which is fixed in the canonical ensemble, is not necessarily the same as the average number of particles in the grand canonical ensemble.",
"subjects": "Statistical Mechanics (cond-mat.stat-mech)",
"title": "Calculations of canonical averages from the grand canonical ensemble",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9585377296574669,
"lm_q2_score": 0.740174367770488,
"lm_q1q2_score": 0.7094850580333745
} |
https://arxiv.org/abs/1008.1458 | The index quasi-periodicity and multiplicity of closed geodesics | In this paper, we prove the existence of at least two distinct closed geodesics on every compact simply connected irreversible or reversible Finsler (including Riemannian) manifold of dimension not less than 2. | \section{Introduction and main results}
The closed geodesic problem has been a traditional and active topic in dynamical
systems and differential geometry for more than one hundred years.
Studies of closed geodesics can be traced back to J. Jacobi, J. Hadamard,
H. Poincar\'e, G. D. Birkhoff, M. Morse, L. Lyusternik, L. Schnirelmann,
and others. In particular, G. D. Birkhoff established the existence of at least
one closed geodesic on every Riemannian sphere $S^d$ with $d\ge 2$
(cf. \cite{Bir1}). Later L. Lyusternik and A. Fet proved the existence of
at least one closed geodesic on every compact Riemannian manifold (cf.
\cite{LyF1}). Such a variational proof works also for Finsler metrics
on compact manifolds and produces at least one closed geodesic on every
such manifold. An important breakthrough on this study is due to V. Bangert
\cite{Ban2} and J. Franks \cite{Fra1} around 1990, who proved the existence
of infinitely many closed geodesics on every Riemannian $2$-sphere (cf.
also \cite{Hin2} and \cite{Hin3} for new proofs of some parts of this result).
For irreversible Finsler manifolds, the closed geodesic problem is more
delicate as discovered by A. Katok via his famous example of 1973 which yields
some irreversible Finsler metrics on $S^d$ with precisely $2[(d+1)/2]$ distinct
prime closed geodesics (cf. \cite{Kat1} and \cite{Zil1}). In \cite{HWZ1} of
2003, H. Hofer, K. Wysocki and E. Zehnder proved that there exist either two
or infinitely many distinct prime closed geodesics on a Finsler $(S^2,F)$
provided that all the iterates of all closed geodesics are non-degenerate and
the stable and unstable manifolds of all hyperbolic closed geodesics intersect
transversally. In \cite{BaL1} of 2005 published in 2010, V. Bangert and Y. Long
proved that on every irreversible Finsler $S^2$ there exist always at least two
distinct prime closed geodesics (cf. also \cite{LoW2}).
Here recall that on a Finsler manifold $(M,F)$, a closed geodesic
$c:S^1={\bf R}/{\bf Z}\to M$ is {\it prime}, if it is not a multiple covering (i.e., iteration)
of any other closed geodesic. Here the $m$-th iteration $c^m$ of $c$ is defined by
$c^m(t)=c(mt)$ for $m\in{\bf N}$. The inverse curve $c^{-1}$ of $c$ is defined by
$c^{-1}(t)=c(1-t)$ for $t\in S^1$. Two prime closed geodesics $c_1$ and $c_2$ on
a Finsler manifold $(M,F)$ (or Riemannian manifold $(M,g)$) are {\it distinct}
(or {\it geometrically distinct}), if they do not differ by an $S^1$-action
(or $O(2)$-action). We denote by ${\rm CG}(M,F)$ the set of all distinct closed geodesics
on $(M,F)$ for Finsler or Riemannian metric $F$ on $M$.
A long-standing conjecture on the closed geodesics is
\begin{equation} \;^{\#}{\rm CG}(M,g) = +\infty, \label{1.1}\end{equation}
for every Riemannian metric $g$ on any compact manifold $M$ with $\dim M\ge 2$.
Correspondingly for Finsler manifolds, it is conjectured (cf. \cite{Lon6}) that
for each positive integer $n$ there exist positive integers $1\le p_n\le q_n$ with
$p_n\to +\infty$ as $n\to +\infty$ such that there holds
\begin{equation} \;^{\#}{\rm CG}(M,F) \in [p_n,\,q_n]\cup \{+\infty\}, \label{1.2}\end{equation}
for every Finsler metric $F$ on each compact manifold $M$ satisfying $\dim M=n$.
Note that by the results of \cite{Ban2} and \cite{Fra1} and the classification of
$2$-dimensional compact manifolds, the conjecture (\ref{1.1}) was proved when
$\dim M=2$. Similarly by the results of \cite{Kat1} and \cite{BaL1}, we have $p_2=2$.
In the study of the conjecture (\ref{1.1}), D. Gromoll and W. Meyer \cite{GrM1}
in 1969 proved the following result:
{\bf Theorem A.} (\cite{GrM1}) {\it On a compact Riemannian manifold
there exist infinitely many closed geodesics, if the free loop space of
this manifold has an unbounded sequence of Betti numbers.}
Stimulated by this result, M. Vigu\'e-Poirrier and D. Sullivan \cite{ViS1}
in 1976 proved:
{\bf Theorem B.} (\cite{ViS1}) {\it The free loop space of a
compact simply connected Riemannian manifold $M$ has no unbounded sequence
of Betti numbers if and only if the rational cohomology algebra of $M$
possesses only one generator. }
Both of these two theorems were generalized to corresponding Finsler manifolds by
H. Matthias in 1980 (cf. \cite{Mat1}). Therefore based on these two theorems,
the most interesting manifolds in this multiplicity problem are those
compact simply connected manifolds satisfying
\begin{equation} H^*(M;{\bf Q})\cong T_{d,h+1}(x)={\bf Q}[x]/(x^{h+1}=0) \label{1.3}\end{equation}
with a generator $x$ of degree $d\ge 2$ and height $h+1\ge 2$. The most important
examples here are certainly spheres $S^d$ of dimension $d$.
Besides these results, when the dimension of a compact simply connected manifold
is greater than $2$, we are not aware of any multiplicity results on the existence
of at least two closed geodesics without pinching, generic, bumpy or other
conditions even on spheres (cf. \cite{Ano1}, \cite{Ban1}, \cite{Kli1}, \cite{BTZ1},
\cite{BTZ2}, \cite{DuL1}, \cite{DuL2}, \cite{LoW1}, \cite{Rad3}, \cite{Rad4},
\cite{Rad5}, \cite{Rad6}), except the Theorem C below proved recently in \cite{LoD1}
for the $3$-dimensional case and \cite{DuL3} for the $4$-dimensional case.
{\bf Theorem C.} (\cite{LoD1}, \cite{DuL3}) {\it There exist always at least two
distinct prime (geometrically distinct) closed geodesics for every irreversible
(or reversible, specially Riemannian) Finsler metric on every $3$ or
$4$-dimensional compact simply connected manifold. }
\smallskip
In this paper, we further generalize Theorem C to all compact simply
connected Finsler as well as Riemannian manifolds and prove the following
results.
{\bf Theorem 1.1.} {\it For every irreversible Finsler metric $F$ on
any compact simply connected manifold of dimension at least $2$, there exist
always at least two distinct prime closed geodesics.}
\smallskip
{\bf Theorem 1.2.} {\it For every reversible Finsler metric $F$ on
any compact simply connected manifold of dimension at least $2$, there exist
always at least two geometrically distinct closed geodesics. In particular,
it holds for every such Riemannian manifold.}
\smallskip
Next we briefly describe the main ideas in the proofs of Theorems 1.1 and 1.2.
In \cite{LoD1}, we classified all closed geodesics into two classes, {\it rational} and
{\it irrational}, according to the basic normal form decomposition of
their linearized Poincar\'e maps as symplectic matrices introduced in \cite{Lon2}
in 1999. Then in \cite{LoD1}, we established periodicity of the Morse indices
and homological information of iterates of orientable rational closed geodesics
on any Finsler manifold $(M,F)$. In particular, we proved
\begin{equation} i(c^{n+m}) = i(c^n) + i(c^m) + \overline{p}(c), \qquad \nu(c^{n+m})=\nu(c^m),
\quad \forall\, m\in{\bf N}, \label{1.4}\end{equation}
where $n=n(c)$ is the analytical period of a prime closed geodesic $c$, cf. (\ref{4.1})
below, and $\overline{p}(c)$ is a constant depending only on the linearized Poincar\'e
map $P_c$ of $c$. We proved also a boundedness property of Morse indices in iterates of
every prime orientable rational closed geodesic $c$:
\begin{equation} i(c^m) + \nu(c^m) \le i(c^n) + \overline{p}(c) + \dim M -3,
\quad \forall\,1\le m\le n-1. \label{1.5}\end{equation}
If $(M,F)$ is a compact simply connected Finsler manifold and possesses only one
prime closed geodesic $c$, and if $c$ is rational, based on the properties (\ref{1.4})
and (\ref{1.5}) we established in \cite{LoD1} and \cite{DuL3} the following identity
\begin{equation} B(d,h)(i(c^n) + \overline{p}(c)) + (-1)^{i(c^n)+\mu}{\kappa} = \sum_{j=\mu-\overline{p}(c)+1}^{i(c^n)+\mu}(-1)^j b_j,
\label{1.6}\end{equation}
for some integer ${\kappa}\ge 0$, where $\mu=\overline{p}(c)+\dim M -3$, and $B(d,h)$ depends
only on $d$ and $h$ and is given in Lemma 2.4 below, $b_j$s are Betti numbers of the
relative free loop spaces defined in Lemmas 2.5 and 2.6 below. Then using (\ref{1.6}),
and our computations on the precise sum of Betti numbers, we obtain a contradiction
and conclude that the unique prime closed geodesic $c$ on $M$ cannot be rational.
Now in the current paper, our main idea is to generalize the above method on rational
closed geodesics to every closed geodesic on compact simply connected manifolds. Suppose
that there exists only one prime closed geodesic $c$ on a compact simply connected
Finsler manifold $(M,F)$. The new observations in the current paper are the following:
(i) When $c$ is irrational, suppose the basic normal form decomposition of the
linearized Poincar\'{e} map $P_c$ of $c$ contains $k$ irrational rotation matrices.
In this case, we can not hope the periodicity (\ref{1.4}) to be still true
anymore for the analytical period $n=n(c)$ and the constant $\overline{p}(c)$. But using the mod
one uniform distribution property of irrational numbers, we can still get a local version
of (\ref{1.4}), i.e., there exists a large enough even integer $T\in n(c){\bf N}$ such that
for some integer $m_0=m_0(c)>1$ depending on $c$ only there holds
\begin{equation} i(c^{T+m}) = i(c^T) + i(c^m) + p(c), \quad \nu(c^{T+m})=\nu(c^m),
\qquad \forall\,1\le m\le m_0, \label{1.7}\end{equation}
where $p(c)=\overline{p}(c)+2(A-k)$ for some integer $1\le A\le k$ depending on $P_c$ only.
We call such a property the {\it quasi-periodicity} of Morse indices of iterates
$c^m$.
(ii) Similarly, for irrational $c$ we cannot expect (\ref{1.5}) to hold for all
multiples of $n$. But using estimates on Morse indices of iterates of irrational closed
geodesics established in \cite{DuL3}, we can get also a similar version of (\ref{1.5}),
i.e., we can further choose the integer $T\in n(c){\bf N}$ so that there holds
\begin{equation} i(c^m) + \nu(c^m) \le i(c^T) + p(c) + \dim M -3, \qquad
\forall\,1\le m\le T-1. \label{1.8}\end{equation}
(iii) Now by computing the alternating sum of the dimensions of all the critical
modules of $c^m$ with $1\le m \le T$, and then comparing with the Betti numbers of
the free loop space pairs on $M$ (i.e., $b_j$s below), we obtain the following version
of (\ref{1.6}) which holds at the iteration $T$: i.e., there exists an integer
${\kappa}\ge 0$ such that
\begin{equation} B(d,h)(i(c^T) + p(c)) + (-1)^{i(c^T)+\mu}{\kappa} = \sum_{j=\mu-p(c)+1}^{i(c^T)+\mu}(-1)^j b_j,
\label{1.9}\end{equation}
where $\mu=p(c)+\dim M -3$. Note that (\ref{1.7})-(\ref{1.9}) are automatically reduced
to (\ref{1.4})-(\ref{1.6}) when $c$ is rational.
(iv) Then the precise sum of Betti numbers on the right hand side of (\ref{1.9})
yields a contradiction, and shows that there must exist at least two distinct closed
geodesics.
Here we should point out that the identity (\ref{1.9}) (or (\ref{1.6})) is rather
different from the Morse inequalities, because the term $B(d,h)(i(c^T)+p(c))$ in
(\ref{1.9}) (or the corresponding term in (\ref{1.6})) represents the alternating
sum of dimensions of all local critical modules of $c^m$ with $1\le m\le T$
(or $1\le m\le n$), which is the alternating sum of all terms on or below the
$T$th (or $n$th) horizontal line in the Figure (\ref{5.54}) below, and is not the
alternating sum of Morse type numbers of $c^m$s with dimensions less than a fixed
integer, which is the alternating sum of all terms on the left of some fixed
vertical line in the Figure (\ref{5.54}) below. In fact in our case, firstly the
alternating sum of Morse type numbers with dimensions less than some fixed integer
in the Morse inequality may not be computable, because in general there may not
exist such a vertical line in the Figure (\ref{5.54}) below such that all
non-trivial critical modules of each iterate $c^m$ appear only on one side of
this vertical line. Secondly, even if it is computable, it is still not clear
whether the corresponding Morse inequalities may yield any contradiction.
Note that in his famous book \cite{Mor1}, M. Morse proved that for any given integer
$N>0$ the global homology of a $d$-dimensional ellipsoid ${\cal E}_d$ at all dimensions
less than $N$ can be produced by iterates of the $(d+1)$ main ellipses only, provided
${\cal E}_d$ is sufficiently close to the ball and all of its semi-axes are different.
This example of his explains why the iterate $T$ in our proof should be sufficiently large
and carefully chosen.
\smallskip
For the reader's convenience, in Section 2 we briefly review some known results
on closed geodesics and Betti numbers of the $S^1$-invariant free loop space
of compact simply connected manifolds satisfying the condition (\ref{1.3}).
In Section 3 we briefly review basic normal form decompositions of symplectic
matrices and the precise index iteration formulae of symplectic paths established by
Y. Long in \cite{Lon2} and \cite{Lon3} together with the orientability of closed
geodesics. In Section 4, we establish the quasi-periodicity (\ref{1.7}) and the
boundedness estimate (\ref{1.8}) of iterated indices of closed geodesics. In Section 5,
using the index quasi-periodicity we prove some homological isomorphism theorems of
energy critical level pairs when there exists only one prime closed geodesic, and then
establish the identity (\ref{1.9}). In Section 6 we give proofs of Theorems 1.1 and 1.2.
In this paper, we denote by ${\bf N}$, ${\bf N}_0$, ${\bf Z}$, ${\bf Q}$, ${\bf R}$, and ${\bf C}$ the
sets of positive integers, non-negative integers, integers, rational numbers,
real numbers, and complex numbers respectively. We define the functions
$[a]=\max\{k\in{\bf Z}\,|\,k\le a\}$, $\{a\}=a-[a]$,
$E(a)=\min\{k\in{\bf Z}\,|\,k\ge a\}$ and ${\varphi}(a)=E(a)-[a]$. Denote by $\,^{\#}A$ the
number of elements in a finite set $A$. When $S^1$ acts on a topological space $X$,
we denote by $\overline{X}$ the quotient space $X/S^1$. In this paper, we use only
singular homology modules with ${\bf Q}$-coefficients.
\setcounter{equation}{0}
\section{Critical point theory of closed geodesics}
\subsection{Critical modules for closed geodesics}
Let $M$ be a manifold with a Finsler metric $F$. Closed geodesics are critical
points of the energy functional
$E(\gamma)=\frac{1}{2}\int_{S^1}F(\gamma(t),\dot{\gamma}(t))^2dt$
on the Hilbert manifold $\Lambda M$ of $H^1$-maps from $S^1$ to $M$. An
$S^1$-action is defined by $(s\cdot\gamma)(t)=\gamma(t+s)$ for all
$\gamma\in{\Lambda} M$ and $s, t\in S^1$. The index form of the functional
$E$ is well defined along any closed geodesic $c$ on $M$, which we
denote by $E''(c)$. As usual, denote by $i(c)$ and $\nu(c)$ the
Morse index and nullity of $E$ at $c$. For a closed geodesic $c$,
denote by $c^m$ the $m$-fold iteration of $c$ and
${\Lambda}(c^m)=\{{\gamma}\in{\Lambda} M\,|\, E({\gamma})<E(c^m)\}$. Recall that
respectively the {\it mean index} $\hat{i}(c)$ and the $S^1$-{\it
critical modules} of $c^m$ are defined by
\begin{equation} \hat{i}(c)=\lim_{m\rightarrow\infty}\frac{i(c^m)}{m},
\quad \overline{C}_*(E,c^m) = H_*\left(({\Lambda}(c^m)\cup S^1\cdot c^m)/S^1,{\Lambda}(c^m)/S^1\right).
\label{2.1}\end{equation}
If $c$ has multiplicity $m$, then the subgroup ${\bf Z}_m=\{\frac{n}{m}:0\le n<m\}$
of $S^1$ acts on $\overline{C}_k(E,c)$. As on page 59 of \cite{Rad2}, for $m\ge1$, let
$H_*(X,A)^{\pm {\bf Z}_m}=\{[\xi]\in H_*(X,A):T_*[\xi]=\pm\xi\}$, where $T$ is a
generator of the ${\bf Z}_m$ action. On $S^1$-critical modules of $c^m$, the following
lemma holds:
\smallskip
{\bf Lemma 2.1.} (cf. \cite{Rad2}, \cite{BaL1}, \cite{LoD1}) {\it Suppose $c$ is
a prime closed geodesic on a Finsler manifold $M$. Then
there exist two sets $U_{c^m}^-$ and $N_{c^m}$, the so-called local negative
disk and the local characteristic manifold at $c^m$ respectively,
such that $\nu(c^m)=\dim N_{c^m}$ and
\begin{eqnarray} \overline{C}_q( E,c^m)
&\equiv& H_q\left(({\Lambda}(c^m)\cup S^1\cdot c^m)/S^1, {\Lambda}(c^m)/S^1\right)\nonumber\\
&=& \left(H_{i(c^m)}(U_{c^m}^-\cup\{c^m\},U_{c^m}^-)
\otimes H_{q-i(c^m)}(N_{c^m}^-\cup\{c^m\},N_{c^m}^-)\right)^{+{\bf Z}_m}, \nonumber\end{eqnarray}
(i) When $\nu(c^m)=0$, there holds
$$ \overline{C}_q( E,c^m) = \left\{\matrix{
{\bf Q}, &\quad {\it if}\;\; i(c^m)=i(c)\,({\rm mod} 2)\;\;{\it and}\;\;
q=i(c^m),\; \cr
0, &\quad {\it otherwise}, \cr}\right. $$
(ii) When $\nu(c^m)>0$, let ${\epsilon}(c^m)=(-1)^{i(c^m)-i(c)}$, then
there holds
$$ \overline{C}_q( E,c^m)=H_{q-i(c^m)}(N_{c^m}^-\cup\{c^m\},N_{c^m}^-)^{{\epsilon}(c^m){\bf Z}_m}. $$}
Let
\begin{equation} k_j(c^m) \equiv \dim\, H_j( N_{c^m}^-\cup\{c^m\},N_{c^m}^-), \quad
k_j^{\pm 1}(c^m) \equiv \dim\, H_j(N_{c^m}^-\cup\{c^m\},N_{c^m}^- )^{\pm{\bf Z}_m}.
\label{2.2}\end{equation}
Then we have
\smallskip
{\bf Lemma 2.2.} (cf. \cite{Rad2}, \cite{BaL1}, \cite{LoD1}) {\it Let $c$
be a closed geodesic on a Finsler manifold $M$.
(i) There hold $0\le k_j^{\pm 1}(c^m) \le k_j(c^m)$ for $m\ge 1$ and
$j\in{\bf Z}$, $k_j(c^m)=0$ whenever $j\not\in [0,\nu(c^m)]$ and
$k_0(c^m)+k_{\nu(c^m)}(c^m)\le1$. If $k_0(c^m)+k_{\nu(c^m)}(c^m)=1$,
then $k_j(c^m)=0$ when $j\in(0,\nu(c^m))$.
(ii) For any $m\in{\bf N}$, there hold $k_0^{+1}(c^m) = k_0(c^m)$ and
$k_0^{-1}(c^m) = 0$. In particular, if $c^m$ is non-degenerate,
there hold $k_0^{+1}(c^m) = k_0(c^m)=1$, and $k_0^{-1}(c^m) =
k_j^{\pm 1}(c^m)=0$ for all $j\neq 0$.
(iii) Suppose for some integer $m=np\ge 2$ with $n$ and $p\in{\bf N}$ the
nullities satisfy $\nu(c^m)=\nu(c^n)$. Then there hold
$k_j(c^m)=k_j(c^n)$ and ${k}_j^{\pm 1}(c^m)={k}^{\pm 1}_j(c^n)$ for
any integer $j$.}
\medskip
Let $(M,F)$ be a compact and simply connected Finsler manifold with
finitely many prime closed geodesics. It is well known that for every
prime closed geodesic $c$ on $(M,F)$, there holds either
$\hat{i}(c)>0$ and then $i(c^m)\to +\infty$ as $m\to +\infty$, or
$\hat{i}(c)=0$ and then $i(c^m)=0$ for all $m\in{\bf N}$. Denote those
prime closed geodesics on $(M,F)$ with positive mean indices by
$\{c_j\}_{1\le j\le k}$. In \cite{Rad1} and \cite{Rad2}, Rademacher
established a celebrated mean index identity relating all the $c_j$s
with the global homology of $M$ (cf. Section 7, especially Satz 7.9
of \cite{Rad2}) for compact simply connected Finsler manifolds. A
refined version of this identity with precise coefficients was
proved in \cite{BaL1}, \cite{LoW1}, and \cite{LoD1}.
For each $m\in{\bf N}$, let ${\epsilon}={\epsilon}(c^m)=(-1)^{i(c^m)-i(c)}$ and
\begin{eqnarray} K(c^m)
&\equiv&(k_0^{\epsilon}(c^m), k_1^{\epsilon}(c^m), \ldots, k_{2\dim M-2}^{\epsilon}(c^m))\nonumber\\
&=&(k_0^{{\epsilon} (c^m)}(c^m), k_1^{{\epsilon}(c^m)}(c^m), \ldots,
k_{\nu (c^m)}^{{\epsilon} (c^m)}(c^m), 0, \ldots, 0). \label{2.3}\end{eqnarray}
{\bf Lemma 2.3.} (cf. Lemmas 7.1 and 7.2 of \cite{Rad2}, cf. also \cite{LoD1})
{\it Let $c$ be a prime closed geodesic on a compact Finsler manifold $(M,F)$.
Then there exists a minimal integer $N=N(c)\in{\bf N}$ such that $\nu(c^{m+N})=\nu(c^m)$,
$i(c^{m+N})-i(c^m)\in 2{\bf Z}$, and $K(c^{m+N})=K(c^m)$ for all $m\in{\bf N}$.}
\smallskip
{\bf Lemma 2.4.} (cf. \cite{Rad2}, \cite{BaL1}, \cite{LoW1}, \cite{LoD1})
{\it Let $(M,F)$ be a compact simply connected Finsler manifold with
$\,H^{\ast}(M,{\bf Q})=T_{d,h+1}(x)$ for some integers $d\ge 2$ and $h\ge 1$.
Denote prime closed geodesics on $(M,F)$
with positive mean indices by $\{c_j\}_{1\le j\le k}$ for some $k\in{\bf N}$.
Then the following identity holds
\begin{equation} \sum_{j=1}^k\frac{\hat{\chi}(c_j)}{\hat{i}(c_j)}=B(d,h)
=\left\{\matrix{
-\frac{h(h+1)d}{2d(h+1)-4}, &\quad d\,{\rm even},\cr
\frac{d+1}{2d-2}, &\quad d\,{\rm odd}, \cr}\right. \label{2.4}\end{equation}
where $\dim M=hd$, $h=1$ when $M$ is a sphere $S^d$ of dimension $d$ and
\begin{equation} \hat{\chi}(c) = \frac{1}{N(c)}
\sum_{0\le l_m\le \nu(c^m) \atop 1\le m\le N(c)}(-1)^{i(c^m)+l_m}k_{l_m}^{{\epsilon}(c^m)}(c^m)\;
\in \;{\bf Q}. \label{2.5}\end{equation}}
\subsection{The structure of $H_*({\Lambda} M/S^1, {\Lambda}^0 M/S^1;{\bf Q})$}
Set $\overline{{\Lambda}}^0=\overline{\Lambda}^0M =\{{\rm
constant\;point\;curves\;in\;}M\}\cong M$. Let $(X,Y)$ be a
space pair such that the Betti numbers $b_i=b_i(X,Y)=\dim
H_i(X,Y;{\bf Q})$ are finite for all $i\in {\bf Z}$. As usual the {\it
Poincar\'e series} of $(X,Y)$ is defined by the formal power series
$P(X, Y)=\sum_{i=0}^{\infty}b_it^i$. We need the following version
of results on Betti numbers. The precise computations on
each Betti number in Lemma 2.6 and sums of Betti numbers in Lemmas 2.5
and 2.6 were given in \cite{LoD1} and \cite{DuL3}.
{\bf Lemma 2.5.} (cf. Theorem 2.4 and Remark 2.5 of \cite{Rad1}, Proposition 2.4
of \cite{LoD1}, Lemma 2.5 of \cite{DuL3}) {\it Let $(S^d,F)$ be a
$d$-dimensional Finsler sphere.}
(i) {\it When $d$ is odd, the Betti numbers are given by
\begin{eqnarray} b_j
&=& {\rm rank} H_j({\Lambda} S^d/S^1,{\Lambda}^0 S^d/S^1;{\bf Q}) \nonumber\\
&=& \left\{\matrix{
2,&\quad {\it if}\quad j\in {\cal K}\equiv \{k(d-1)\,|\,2\le k\in{\bf N}\}, \cr
1,&\quad {\it if}\quad j\in \{d-1+2k\,|\,k\in{\bf N}_0\}\setminus{\cal K}, \cr
0 &\quad {\it otherwise}. \cr}\right. \label{2.6}\end{eqnarray}
For any $k\in {\bf N}$ and $k\ge d-1$, there holds }
\begin{eqnarray} \sum_{j=0}^k(-1)^jb_j
&=& \sum_{0\le 2j\le k}b_{2j} \nonumber\\
&=& \frac{k(d+1)}{2(d-1)} - \frac{d-1}{2} - {\epsilon}_{d,1}(k) \nonumber\\
&\le& \frac{k(d+1)}{2(d-1)} - \frac{d-1}{2}. \label{2.7}\end{eqnarray}
where ${\epsilon}_{d,1}(k) = \{\frac{k}{d-1}\} + \{\frac{k}{2}\}\in [0,\frac{3}{2}-\frac{1}{2(d-1)})$.
(ii) {\it When $d$ is even, the Betti numbers are given by
\begin{eqnarray} b_j
&=& {\rm rank} H_j({\Lambda} S^d/S^1,{\Lambda}^0 S^d/S^1;{\bf Q}) \nonumber\\
&=& \left\{\matrix{
2,&\quad {\it if}\quad j\in {\cal K}\equiv \{k(d-1)\,|\,3\le k\in (2{\bf N}+1)\}, \cr
1,&\quad {\it if}\quad j\in \{d-1+2k\,|\,k\in{\bf N}_0\}\setminus{\cal K}, \cr
0 &\quad {\it otherwise}. \cr}\right. \label{2.8}\end{eqnarray}
For any $k\in {\bf N}$ and $k\ge d-1$, there holds
\begin{eqnarray} -\sum_{j=0}^k(-1)^jb_j
= \sum_{0\le 2j-1\le k}b_{2j-1}
\le \frac{k d}{2(d-1)} - \frac{d-2}{2}. \label{2.9}\end{eqnarray}}
\medskip
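The case distinctions in (\ref{2.6}) and (\ref{2.8}) are straightforward to tabulate. The following
small sketch (in Python; it is our own illustration and plays no role in the proofs) returns $b_j$
for a sphere $S^d$ and can be used to check the partial sums in (\ref{2.7}) and (\ref{2.9})
numerically.
\begin{verbatim}
def betti_sphere(j, d):
    # b_j = rank H_j(Lambda S^d/S^1, Lambda^0 S^d/S^1; Q), cf. (2.6) and (2.8)
    if j < d - 1 or (j - (d - 1)) % 2 != 0:
        return 0
    if d % 2 == 1:                         # d odd, Eq. (2.6)
        if j % (d - 1) == 0 and j // (d - 1) >= 2:
            return 2
        return 1
    k, r = divmod(j, d - 1)                # d even, Eq. (2.8)
    if r == 0 and k >= 3 and k % 2 == 1:
        return 2
    return 1

# e.g. sum((-1)**j * betti_sphere(j, d) for j in range(k + 1)) gives the sum in (2.7)
\end{verbatim}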
For a compact and simply connected Finsler manifold $M$ with
$H^*(M;{\bf Q})\cong T_{d,h+1}(x)$, when $d$ is odd, then $x^2=0$ and
$h=1$ in $T_{d,h+1}(x)$. Thus $M$ is rationally homotopy equivalent to
$S^d$ (cf. Remark 2.5 of \cite{Rad1}). Therefore, next we only consider
the case when $d$ is even.
\medskip
{\bf Lemma 2.6.} (cf. Theorem 2.4 of \cite{Rad1}, Lemma 2.6 of \cite{DuL3})
{\it Let $M$ be a compact simply connected manifold with $H^*(M;{\bf Q})\cong T_{d,h+1}(x)$
for some integer $h\ge 1$ and even integer $d\ge 2$. Let $D=d(h+1)-2$ and
\begin{eqnarray} {\Omega}(d,h) = \{k\in 2{\bf N}-1&\,|\,& iD\le k-(d-1)=iD+jd\le iD+(h-1)d\; \nonumber\\
&& \mbox{for some}\;i\in{\bf N}\;\mbox{and}\;j\in [1,h-1]\}. \label{2.10}\end{eqnarray}
Then the Betti numbers of the free loop space of $M$ defined by
$b_q = {\rm rank} H_q({\Lambda} M/S^1,{\Lambda}^0 M/S^1;{\bf Q})$ for $q\in{\bf Z}$ are given by
\begin{equation} b_q = \left\{\matrix{
0, & \quad \mbox{if}\ q\ \mbox{is even or}\ q\le d-2, \cr
[\frac{q-(d-1)}{d}]+1, & \quad \mbox{if}\ q\in 2{\bf N}-1\;\mbox{and}\;d-1\le q < d-1+(h-1)d, \cr
h+1, & \quad \mbox{if}\ q\in {\Omega}(d,h), \cr
h, & \quad \mbox{otherwise}. \cr}\right.
\label{2.11}\end{equation}
For every integer $k\ge d-1+(h-1)d=hd-1$, we have
\begin{eqnarray} \sum_{q=0}^kb_q
&=& \frac{h(h+1)d}{2D}(k-(d-1)) - \frac{h(h-1)d}{4} + 1 + {\epsilon}_{d,h}(k) \nonumber\\
&<& h(\frac{D}{2}+1)\frac{k-(d-1)}{D} - \frac{h(h-1)d}{4} + 2, \label{2.12}\end{eqnarray}
where
\begin{eqnarray} {\epsilon}_{d,h}(k)
&=& \{\frac{D}{hd}\{\frac{k-(d-1)}{D}\}\} - (\frac{2}{d}+\frac{d-2}{hd})\{\frac{k-(d-1)}{D}\} \nonumber\\
&&\quad - h\{\frac{D}{2}\{\frac{k-(d-1)}{D}\}\} - \{\frac{D}{d}\{\frac{k-(d-1)}{D}\}\}, \label{2.13}\end{eqnarray}
and there hold ${\epsilon}_{d,h}(k)\in (-(h+2),1)$ and ${\epsilon}_{d,1}(k)\in (-2,0]$ for all integer $k\ge d-1$. }
\setcounter{equation}{0}
\section{A review of the precise index iteration formulae for symplectic paths}
For $d\in{\bf N}$ and $\tau>0$, denote by ${\rm Sp}(2d)$ the symplectic group whose elements are $2d\times 2d$
real symplectic matrices and let
$$ {\cal P}_{\tau}(2d)=\{{\gamma}\in C([0,\tau],{\rm Sp}(2d))\,|\,{\gamma}(0)=I\}. $$
An index function theory $(i_{{\omega}}({\gamma}),\nu_{{\omega}}({\gamma}))$ for every symplectic path
${\gamma}\in{\cal P}_{\tau}(2d)$ parametrized by ${\omega}\in {\bf U}=\{z\in{\bf C}\,|\,|z|=1\}$ was introduced
by Y. Long in \cite{Lon2} of 1999. This index function theory is based on the
Maslov-type index theory $(i_1({\gamma}),\nu_1({\gamma}))$ for symplectic paths in ${\cal P}_{\tau}(2d)$
established by C. Conley and E. Zehnder in \cite{CoZ1} of 1984, Y. Long and E. Zehnder in
\cite{LZe1} of 1990, and Y. Long in \cite{Lon1} of 1990 (cf. \cite{Lon5}). In \cite{Lon2},
Y. Long also established the basic normal form decomposition of symplectic matrices.
Based on this result he further established the precise iteration formulae of indices
of symplectic paths in \cite{Lon3} of 2000. These results form the basis of our study
on the Morse indices and homological properties of iterates of closed geodesics. Here
we briefly review these results.
As in \cite{Lon5}, denote by
\begin{eqnarray}
N_1({\lambda}, a) &=& \left(\matrix{{\lambda} & a\cr
0 & {\lambda}\cr}\right), \qquad {\rm for\;}{\lambda}=\pm 1, \; a\in{\bf R}, \label{3.1}\\
H(b) &=& \left(\matrix{b & 0\cr
0 & b^{-1}\cr}\right), \qquad {\rm for\;}b\in{\bf R}\setminus\{0, \pm 1\}, \label{3.2}\\
R({\theta}) &=& \left(\matrix{\cos{\theta} & -\sin{\theta} \cr
\sin{\theta} & \cos{\theta}\cr}\right), \qquad {\rm for\;}{\theta}\in (0,\pi)\cup (\pi,2\pi), \label{3.3}\\
N_2(e^{{\theta}\sqrt{-1}}, B) &=& \left(\matrix{ R({\theta}) & B \cr
0 & R({\theta})\cr}\right), \qquad {\rm for\;}{\theta}\in (0,\pi)\cup (\pi,2\pi)\;\; {\rm and}\; \nonumber\\
&&B=\left(\matrix{b_1 & b_2\cr
b_3 & b_4\cr}\right)\quad {\rm with}\; b_j\in{\bf R}, \;\;
{\rm and}\;\; b_2\not= b_3. \label{3.4}\end{eqnarray}
Here $N_2(e^{{\theta}\sqrt{-1}}, B)$ is non-trivial if $(b_2-b_3)\sin\theta<0$, and trivial
if $(b_2-b_3)\sin\theta>0$. In \cite{Lon2}-\cite{Lon4}, these matrices are called
{\it basic normal forms} of symplectic matrices.
As in \cite{Lon5}, given any two real matrices of the square block form
$$ M_1=\left(\matrix{A_1 & B_1\cr C_1 & D_1\cr}\right)_{2i\times 2i},\qquad
M_2=\left(\matrix{A_2 & B_2\cr C_2 & D_2\cr}\right)_{2j\times 2j},$$
the $\diamond$-sum (direct sum) of $M_1$ and $M_2$ is defined by the
$2(i+j)\times2(i+j)$ matrix
$$ M_1\diamond M_2=\left(\matrix{A_1 & 0 & B_1 & 0 \cr
0 & A_2 & 0& B_2\cr
C_1 & 0 & D_1 & 0 \cr
0 & C_2 & 0 & D_2}\right). $$
{\bf Definition 3.1.} (cf. \cite{Lon3} and \cite{Lon5}) {\it For every
$P\in{\rm Sp}(2d)$, the homotopy set $\Omega(P)$ of $P$ in ${\rm Sp}(2d)$ is defined by
$$ {\Omega}(P)=\{N\in{\rm Sp}(2d)\,|\,{\sigma}(N)\cap{\bf U}={\sigma}(P)\cap{\bf U}\equiv\Gamma\;\mbox{and}
\;\nu_{{\omega}}(N)=\nu_{{\omega}}(P)\, \forall{\omega}\in\Gamma\}, $$
where ${\sigma}(P)$ denotes the spectrum of $P$,
$\nu_{{\omega}}(P)\equiv\dim_{{\bf C}}\ker_{{\bf C}}(P-{\omega} I)$ for all ${\omega}\in{\bf U}$.
The homotopy component ${\Omega}^0(P)$ of $P$ in ${\rm Sp}(2d)$ is defined by
the path connected component of ${\Omega}(P)$ containing $P$ (cf. page 38 of
\cite{Lon5}). }
Note that ${\Omega}^0(P)$ defines an equivalence relation among symplectic
matrices. In particular, we call two matrices $N$ and $P\in{\rm Sp}(2d)$ {\it homotopic},
if $N\in{\Omega}^0(P)$, and in this case we write $N\approx P$.
Then the following decomposition theorem is proved in \cite{Lon2}
and \cite{Lon3}.
\medskip
{\bf Theorem 3.2.} (cf. Theorem 7.8 of \cite{Lon2}, Lemma 2.3.5 and
Theorem 1.8.10 of \cite{Lon5}) {\it For every $P\in{\rm Sp}(2d)$, there
exists a continuous path $f\in{\Omega}^0(P)$ such that $f(0)=P$ and
\begin{eqnarray} f(1)
&=& N_1(1,1)^{{\diamond} p_-}\,{\diamond}\,I_{2p_0}\,{\diamond}\,N_1(1,-1)^{{\diamond} p_+} \nonumber\\
&&{\diamond}\,N_1(-1,1)^{{\diamond} q_-}\,{\diamond}\,(-I_{2q_0})\,{\diamond}\,N_1(-1,-1)^{{\diamond} q_+} \nonumber\\
&&{\diamond}\,R({\theta}_1)\,{\diamond}\,\cdots\,{\diamond}\,R({\theta}_k)\,{\diamond}\,R({\theta}_{k+1})\,{\diamond}\,\cdots\,{\diamond}\,R({\theta}_r) \nonumber\\
&&{\diamond}\,N_2(e^{{\alpha}_{1}\sqrt{-1}},A_{1})\,{\diamond}\,\cdots\,{\diamond}\,N_2(e^{{\alpha}_{k_{\ast}}\sqrt{-1}},A_{k_{\ast}}) \nonumber\\
&&\qquad\qquad \,{\diamond}\,N_2(e^{{\alpha}_{k_{\ast}+1}\sqrt{-1}},A_{k_{\ast}+1})\,{\diamond}\,\cdots
\,{\diamond}\,N_2(e^{{\alpha}_{r_{\ast}}\sqrt{-1}},A_{r_{\ast}}) \nonumber\\
&&{\diamond}\,N_2(e^{{\beta}_{1}\sqrt{-1}},B_{1})\,{\diamond}\,\cdots\,{\diamond}\,N_2(e^{{\beta}_{k_{0}}\sqrt{-1}},B_{k_{0}}) \nonumber\\
&&\qquad\qquad \,{\diamond}\,N_2(e^{{\beta}_{k_0+1}\sqrt{-1}},B_{k_0+1})\,{\diamond}\,\cdots\,
{\diamond}\,N_2(e^{{\beta}_{r_{0}}\sqrt{-1}},B_{r_{0}}) \nonumber\\
&&{\diamond}\,H(2)^{{\diamond} h_+}\,{\diamond}\,H(-2)^{{\diamond} h_-}, \label{3.5}\end{eqnarray}
where $\frac{{\theta}_{j}}{2\pi}\not\in{\bf Q}$ for $1\le j\le k$ and
$\frac{{\theta}_{j}}{2\pi}\in{\bf Q}$ for $k+1\le j\le r$;
$N_2(e^{{\alpha}_{j}\sqrt{-1}},A_{j})$'s are nontrivial basic normal
forms with $\frac{{\alpha}_{j}}{2\pi}\not\in{\bf Q}$ for $1\le j\le k_{\ast}$
and $\frac{{\alpha}_{j}}{2\pi}\in{\bf Q}$ for $k_{\ast}+1\le j\le r_{\ast}$;
$N_2(e^{{\beta}_{j}\sqrt{-1}},B_{j})$'s are trivial basic normal forms
with $\frac{{\beta}_{j}}{2\pi}\not\in{\bf Q}$ for $1\le j\le k_0$ and
$\frac{{\beta}_{j}}{2\pi}\in{\bf Q}$ for $k_0+1\le j\le r_0$; $p_-=p_-(P)$,
$p_0=p_0(P)$, $p_+=p_+(P)$, $q_-=q_-(P)$, $q_0=q_0(P)$,
$q_+=q_+(P)$, $r=r(P)$, $k=k(P)$, $r_{j}=r_{j}(P)$, $k_j=k_j(P)$ with $j=\ast, 0$ and
$h_+=h_+(P)$ are nonnegative integers, and $h_-=h_-(P)\in \{0,1\}$;
${\theta}_j$, ${\alpha}_j$, ${\beta}_j \in (0,\pi)\cup (\pi,2\pi)$; these integers
and real numbers are uniquely determined by $P$ and satisfy}
\begin{equation} p_- + p_0 + p_+ + q_- + q_0 + q_+ + r + 2r_{\ast} + 2r_0 + h_- + h_+ = d. \label{3.6}\end{equation}
\medskip
Based on Theorem 3.2, the homotopy invariance and symplectic
additivity of the indices, the following precise iteration formula
was proved in \cite{Lon3}:
\medskip
{\bf Theorem 3.3.} (cf. \cite{Lon3}, Theorem 8.3.1 and Corollary
8.3.2 of \cite{Lon5}) {\it Let ${\gamma}\in{\cal P}_{\tau}(2d)$. Denote the
basic normal form decomposition of $P\equiv {\gamma}(\tau)$ by
(\ref{3.5}). Then we have
\begin{eqnarray} i_1({\gamma}^m)
&=& m(i_1({\gamma})+p_-+p_0-r ) + 2\sum_{j=1}^rE\left(\frac{m{\theta}_j}{2\pi}\right) - r \nonumber\\
&& - p_- - p_0 - {{1+(-1)^m}\over 2}(q_0+q_+) \nonumber\\
&& + 2\sum_{j=k_{\ast}+1}^{r_{\ast}}{\varphi}\left(\frac{m{\alpha}_j}{2\pi}\right) - 2(r_{\ast}-k_{\ast}), \label{3.7}\\
\nu_1({\gamma}^m) &=& \nu_1({\gamma}) + {{1+(-1)^m}\over 2}(q_-+2q_0+q_+) + 2{\varsigma}(m,{\gamma}(\tau)), \label{3.8}\\
\hat{i}({\gamma}) &=& i_1({\gamma}) + p_- + p_0 - r +
\sum_{j=1}^r\frac{{\theta}_j}{\pi}, \label{3.9}\end{eqnarray}
where we denote by}
\begin{eqnarray}
{\varsigma}(m,{\gamma}(\tau)) &=& (r-k) - \sum_{j=k+1}^r{\varphi}\left(\frac{m{\theta}_j}{2\pi}\right) \nonumber\\
&& + (r_{\ast}-k_{\ast}) -
\sum_{j=k_{\ast}+1}^{r_{\ast}}{\varphi}\left(\frac{m{\alpha}_j}{2\pi}\right)
+ (r_0-k_0) - \sum_{j=k_0+1}^{r_0}{\varphi}\left(\frac{m{\beta}_j}{2\pi}\right).
\label{3.10}
\end{eqnarray}
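Since the right hand sides of (\ref{3.7})-(\ref{3.10}) are completely explicit once the data in the
normal form (\ref{3.5}) are known, they are easy to evaluate numerically. The following sketch (in
Python; our own illustration, not part of the proofs) computes $i_1({\gamma}^m)$ and $\nu_1({\gamma}^m)$ from
these data; for rational angles, exact rational arithmetic should be used to avoid rounding errors
at integer points of $E$ and ${\varphi}$.
\begin{verbatim}
import math

def E(a):                 # E(a) = min{ j in Z : j >= a }
    return math.ceil(a)

def phi(a):               # phi(a) = E(a) - [a]; 0 if a is an integer, else 1
    return math.ceil(a) - math.floor(a)

def iterated_index(m, i1, nu1, nf):
    # nf is a dictionary holding the data of the normal form (3.5):
    # integers p_minus, p0, q_minus, q0, q_plus, r, k, r_star, k_star, r0, k0
    # and angle lists theta (length r), alpha (length r_star), beta (length r0),
    # ordered so that the first k, k_star, k0 entries are the irrational ones.
    th, al, be = nf['theta'], nf['alpha'], nf['beta']
    i_m = (m * (i1 + nf['p_minus'] + nf['p0'] - nf['r'])
           + 2 * sum(E(m * t / (2 * math.pi)) for t in th)
           - nf['r'] - nf['p_minus'] - nf['p0']
           - ((1 + (-1) ** m) // 2) * (nf['q0'] + nf['q_plus'])
           + 2 * sum(phi(m * a / (2 * math.pi)) for a in al[nf['k_star']:])
           - 2 * (nf['r_star'] - nf['k_star']))                    # Eq. (3.7)
    varsigma = ((nf['r'] - nf['k'])
                - sum(phi(m * t / (2 * math.pi)) for t in th[nf['k']:])
                + (nf['r_star'] - nf['k_star'])
                - sum(phi(m * a / (2 * math.pi)) for a in al[nf['k_star']:])
                + (nf['r0'] - nf['k0'])
                - sum(phi(m * b / (2 * math.pi)) for b in be[nf['k0']:]))
    nu_m = (nu1 + ((1 + (-1) ** m) // 2)
            * (nf['q_minus'] + 2 * nf['q0'] + nf['q_plus'])
            + 2 * varsigma)                                        # Eq. (3.8)
    return i_m, nu_m
\end{verbatim}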
Let
\begin{equation} {\cal M}\equiv\{N_1(1, b_1),\,b_1=0,1; \;\;N_1(-1,b_2),\,b_2=0,\pm1;\;\;R({\theta}),
\,{\theta}\in(0,\pi)\cup(\pi,2\pi);\,H(-2)\}. \label{3.11}\end{equation}
By Theorems 8.1.4-8.1.7 and 8.2.1-8.2.4 on pp. 179-187 of \cite{Lon5}, we have in particular
\medskip
{\bf Proposition 3.4.} {\it Every path ${\gamma}\in{\cal P}_{\tau}(2)$ with end
matrix homotopic to some matrix in ${\cal M}$ must have odd index
$i_1({\gamma})$. Paths $\xi\in{\cal P}_{\tau}(2)$ ending at $N_1(1,-1)$ or $H(2)$ and
$\eta\in{\cal P}_{\tau}(4)$ with end matrix homotopic to $N_2({\omega},B)$ must have
even indices $i_1(\xi)$ and $i_1(\eta)$.}
\medskip
The relation between the Morse indices of closed geodesics on Finsler manifolds
and the above Maslov-type index theory for symplectic paths was studied
by C. Liu and Y. Long in \cite{LLo1} and C. Liu in \cite{Liu1}. In particular,
we have
\medskip
{\bf Proposition 3.5.} (Theorem 1.1 and Remark 4.2 of \cite{Liu1}, cf. also
Theorem 1.1 of \cite{LLo1}) {\it For any closed geodesic $c$ on a Finsler
manifold $(M,F)$ with $d=\dim M<+\infty$, denote its linearized Poincar\'e
map by $P_c$. Then there exists a path ${\gamma}\in C([0,1],{\rm Sp}(2d-2))$
satisfying ${\gamma}(0)=I$, ${\gamma}(1)=P_c$, and}
\begin{eqnarray}
(i(c),\nu(c)) &=& (i_1({\gamma}), \nu_1({\gamma})), \qquad {\it if\;}c\;\;{\it is\;orientable},
\label{3.12}\\
(i(c),\nu(c)) &=& (i_{-1}({\gamma}), \nu_{-1}({\gamma})), \qquad {\it if\;}c\;\;
{\it is\;unorientable\;and\;}d\;{\it is\;even}. \label{3.13}\end{eqnarray}
By this result, the above index iteration formulae (Theorem 3.3) can be applied to
every orientable closed geodesic on Finsler and Riemannian manifolds. For unorientable
closed geodesics, one can get a similar iteration formula using results in \cite{Lon5}.
\medskip
{\bf Remark 3.6.} Note that every closed geodesic $c$ on a simply connected Finsler
manifold is always orientable and thus Theorem 3.3 can be applied to get $i(c^m)$
directly (cf. Section 2.1-Appendix on pages 136-141 of \cite{Kli1}). In this
paper we are interested in orientable closed geodesics.
Next we need the following results from \cite{DuL3}.
\smallskip
{\bf Proposition 3.7.} (Corollary 3.19 of \cite{DuL3}) {\it Let
$v=(v_1,\ldots, v_k)\in ({\bf R}\setminus{\bf Q})^k$. Then there exists
an integer $A$ satisfying $[(k+1)/2]\le A\le k$ and a subset $P$ of
$\{1, \ldots, k\}$ containing $A$ integers, such that for any integer
$n\in{\bf N}$ and any small ${\epsilon}>0$ there exist infinitely many even integers
$T_1$ and $T_2\in n{\bf N}$ satisfying respectively }
\begin{eqnarray}
&& \left\{\matrix{ \{T_1v_i\} > 1-{\epsilon}, &\qquad& {\it for}\;\;i\in P, \cr
\{T_1v_j\} < {\epsilon}, &\qquad& {\it for}\;\; j\in \{1,\ldots,k\}\setminus P, \cr}\right. \label{3.14}\\
&{\it or}\quad & \left\{\matrix{ \{T_2v_i\} < {\epsilon}, &\qquad& {\it for}\;\;i\in P, \cr
\{T_2v_j\} > 1- {\epsilon}, &\qquad& {\it for}\;\; j\in \{1,\ldots,k\}\setminus P. \cr}\right.
\label{3.15}\end{eqnarray}
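The existence statement in Proposition 3.7 can be explored numerically by a direct scan, as in the
following sketch (in Python; our own illustration, with the caveat that a naive scan carries no a
priori bound on how far one must search). It looks for even multiples $T$ of $n$ whose fractional
parts $\{Tv_i\}$ follow the pattern (\ref{3.14}) within the prescribed ${\epsilon}$.
\begin{verbatim}
def find_T(v, P, n, eps, T_max=10**7):
    # v   : list of irrational numbers v_1, ..., v_k
    # P   : set of 1-based indices on which {T v_i} should exceed 1 - eps
    # n   : T is required to be an even element of n*N
    # returns the first admissible T <= T_max, or None if the scan fails
    step = n if n % 2 == 0 else 2 * n
    T = step
    while T <= T_max:
        ok = True
        for i, vi in enumerate(v, start=1):
            frac = (T * vi) % 1.0
            if (i in P and frac <= 1.0 - eps) or (i not in P and frac >= eps):
                ok = False
                break
        if ok:
            return T
        T += step
    return None

# example: import math; find_T([math.sqrt(2), math.sqrt(3)], {1, 2}, 1, 0.05)
\end{verbatim}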
\smallskip
{\bf Theorem 3.8.} (Theorem 3.21 of \cite{DuL3}) ({\bf Quasi-monotonicity of index growth
for closed geodesics}) {\it Let $c$ be an orientable closed geodesic with mean index $\hat{i}(c)>0$ on
a Finsler manifold $(M,F)$ of dimension $d\ge 2$. Denote the basic
normal form decomposition of the linearized Poincar\'e map $P_c$ of $c$ by (\ref{3.5}). Then
there exist an integer $A$ with $[(k+1)/2]\le A\le k$ and a subset $P$ of integers
$\{1, \ldots, k\}$ with $A$ integers such that for any ${\epsilon}\in (0,1/4)$ there exist infinitely many
sufficiently large even integers $T\in n{\bf N}$ satisfying
\begin{eqnarray}
\left\{\frac{T{\theta}_j}{2\pi}\right\} &>& 1-{\epsilon}, \qquad {\it for}\;\; j\in P, \label{3.16}\\
\left\{\frac{T{\theta}_j}{2\pi}\right\} &<& {\epsilon}, \qquad {\it for}\;\; j\in \{1,\ldots,k\}\setminus P. \label{3.17}\end{eqnarray}
Consequently we have
\begin{eqnarray}
i(c^m)-i(c^T) &\ge& K_1 \equiv {\lambda} + (q_0+q_+) + 2(r-k) + 2(r_{\ast}-k_{\ast}) + 2A,
\quad \forall\, m\ge T+1, \label{3.18}\\
i(c^T)-i(c^m) &\ge& K_2 \equiv {\lambda} - (q_0+q_+) + 2k - 2(r_{\ast}-k_{\ast}) - 2A,
\quad \forall\, 1\le m\le T-1, \label{3.19}\end{eqnarray}
where ${\lambda} = i(c)+p_-+p_0-r$, the integers $p_-$, $p_0$, $q_0$, $q_+$, $r$, $k$, $r_{\ast}$
and $k_{\ast}$ are defined in (\ref{3.5}). }
\setcounter{equation}{0}
\section{Properties of Morse indices of iterates of closed geodesics}
Let $(M,F)$ be a Finsler manifold of dimension $d$. As in \cite{LoD1}, a matrix
$P\in{\rm Sp}(2d-2)$ is {\it rational}, if no basic normal form in (\ref{3.5}) of $P$
is of the form $R({\theta})$ with ${\theta}/\pi\in{\bf R}\setminus{\bf Q}$, and is {\it irrational}, otherwise.
Let $c$ be a closed geodesic on $(M,F)$ whose linearized Poincar\'e map is
denoted by $P_c$ and then $P_c\in{\rm Sp}(2d-2)$. The closed geodesic $c$
is called {\it rational} or {\it irrational} if so is $P_c$. The {\it analytical period}
$n(c)$ of $c$ is defined by
\begin{equation} n(c) = \min\{j\in{\bf N}\,|\,\nu(c^j)=\max_{m\ge 1}\nu(c^m),\;\;
i(c^{m+j})-i(c^{m})\in 2{\bf Z}, \;\;\forall\,m\in{\bf N}\}. \label{4.1}\end{equation}
One of the most important properties of $n=n(c)$ is
\begin{equation} n(c) = N(c), \label{4.2}\end{equation}
where $N(c)$ is defined by Lemma 2.3. This was proved by Lemma 3.10 of \cite{DuL3}.
Next we need
{\bf Definition 4.1.} {\it For any closed geodesic $c$ with $\hat{i}(c)>0$ on
a Finsler manifold $(M,F)$ of dimension $d$, denote the
basic normal form decomposition of the linearized Poincar\'e
map $P_c$ of $c$ by (\ref{3.5}). We define $m_0=m_0(c)$ as follows
$$ m_0(c)=\min\{m\in{\bf N}\,|\,i(c^{j+m})\ge d+4k,\,\forall\,j\ge1\}. $$
Here $k$ is defined in (\ref{3.5}). Note that $\hat{i}(c)>0$ implies that
$i(c^m)\rightarrow +\infty$ as $m\rightarrow +\infty$. Thus the integer $m_0$
is well-defined.}
\medskip
{\bf Lemma 4.2.} {\it Let $c$ be an orientable prime closed geodesic on a Finsler
manifold $M=(M,F)$ with $\dim M<+\infty$. Then for every $m\in{\bf N}$, we have
\begin{equation} i(c^{2m}) = i(c^2) \;\; {\rm mod} \; 2, \quad i(c^{2m+1}) = i(c) \;\;{\rm mod}\;2. \label{4.3}\end{equation}
For any two positive integers $q|p$, we have}
\begin{equation} i(c^p) \ge i(c^q) \quad {\it and} \quad \nu(c^p) \ge \nu(c^q). \label{4.4}\end{equation}
{\bf Proof.} (\ref{4.3}) follows from (\ref{3.7}) of Theorem 3.3 immediately.
The two inequalities in (\ref{4.4}) follow from the Bott formulae (cf. Theorem 1 and
its corollary on pages 177-178 of \cite{Bot1}) immediately. In fact using notations
of \cite{Lon5} we have
$$ i(c^p) = \sum_{{\omega}^p=1}i_{{\omega}}(c) = \sum_{{\omega}^q=1}i_{{\omega}}(c) + \sum_{{\omega}^p=1,\, {\omega}^q\not=1}i_{{\omega}}(c)
\ge \sum_{{\omega}^q=1}i_{{\omega}}(c) = i(c^q), $$
and
$$ \nu(c^p) = \sum_{{\omega}^p=1}\nu_{{\omega}}(c)
= \sum_{{\omega}^q=1}\nu_{{\omega}}(c) + \sum_{{\omega}^p=1,\, {\omega}^q\not=1}\nu_{{\omega}}(c)
\ge \sum_{{\omega}^q=1}\nu_{{\omega}}(c) = \nu(c^q). $$
Here the fact that both $i_{{\omega}}(c)$ and $\nu_{{\omega}}(c)$ are non-negative integers for
any ${\omega}\in{\bf U}$ follows from Proposition 1.3 of \cite{Bot1}. The proof is complete. \hfill\vrule height0.18cm width0.14cm $\,$
\medskip
The following theorem gives some precise index properties of iterates of closed
geodesics, which is a crucial step in the proofs of Theorems 1.1 and 1.2, and
generalizes Theorem 3.7 in \cite{LoD1} for rational closed geodesics to the
non-rational closed geodesics.
\medskip
{\bf Theorem 4.3.} ({\bf Index quasi-periodicity of closed geodesics})
{\it Let $c$ be an orientable closed geodesic with $\hat{i}(c)>0$ on a
Finsler manifold of dimension $d$. Denote the basic normal form decomposition
of the linearized Poincar\'e map $P_c$ of $c$ by (\ref{3.5}). Let $n=n(c)$ be
the analytical period of $c$.
Then when $k\ge 1$, there exist an integer $A$ with $[(k+1)/2]\le A\le k$ and a subset
$P$ of integers $\{1, \ldots, k\}$ with $A$ integers such that for any given
integer $m_0\in{\bf N}$ and any small ${\epsilon}>0$ there exists a sufficiently large even
integer $T\in n{\bf N}$ satisfying
\begin{eqnarray}
\left\{\frac{T{\theta}_j}{2\pi}\right\} &>& 1-{\epsilon}, \qquad {\it for}\;\; j\in P, \label{4.5}\\
\left\{\frac{T{\theta}_j}{2\pi}\right\} &<& {\epsilon}, \qquad {\it for}\;\; j\in \{1,\ldots,k\}\setminus P. \label{4.6}\end{eqnarray}
Note that both of (\ref{4.5}) and (\ref{4.6}) should be omitted when $k=0$, and the following
conclusions (A)-(D) still hold by Theorem 3.7 of \cite{LoD1}.
Let
\begin{equation} p(c) \equiv p_- + p_0 + q_0 + q_+ +2r_{\ast}-2k_{\ast}+r+2A-2k \ge 0. \label{4.7}\end{equation}
Then the following conclusions hold always:
(A) (Quasi-periodicity) For any $1\le m\le m_0$, there hold
\begin{eqnarray}
i(c^{m+T}) &=& i(c^m) + i(c^{T}) + p(c), \label{4.8}\\
\nu(c^{m+T}) &=& \nu(c^m). \label{4.9}\end{eqnarray}
(B) (Relative parity) There holds
\begin{equation} i(c^T) = p(c) \quad ({\rm mod} \, 2). \label{4.10}\end{equation}
(C) (Nullity-periodicity) There holds
\begin{equation} \nu(c^n)=\nu(c^T) \le p(c) + d-1-2A. \label{4.11}\end{equation}
(D) (Period-mean index) If $\hat{i}(c)>0$ is a rational number, there holds
\begin{equation} T\hat{i}(c) = i(c^T) + p(c). \label{4.12}\end{equation}
If $\hat{i}(c)>0$ is irrational, then for any small $\tau>0$ we can further
require the above chosen $T\in n{\bf N}$ to be even larger to satisfy }
\begin{equation} |T\hat{i}(c) - (i(c^T) + p(c))| < \tau. \label{4.13}\end{equation}
\medskip
{\bf Proof.} Note that by the definitions of $E(\cdot)$ and ${\varphi}(\cdot)$ there hold
\begin{eqnarray}
E(z+b)=z+E(b), && {\varphi}(z+b)={\varphi}(b)\;\;\;{\rm for}\;\;z\in{\bf Z},\;\; b\not\in{\bf Z}, \label{4.14}\\
E(b)+E(-b) =1, && {\rm for}\;\;b\in (0,1), \label{4.15}\\
E(a)+E(-a) - {\varphi}(a) = 0, && {\varphi}(a)={\varphi}(-a) \quad \forall \; a\in{\bf R}. \label{4.16}\end{eqnarray}
Let $n=n(c)$ be the analytical period of $c$. For the integer $m_0$ given in the
assumption of the theorem, we specially set
\begin{equation} {\epsilon} = \min\left\{\{\frac{m{\theta}_j}{2\pi}\},\,1-\{\frac{m{\theta}_j}{2\pi}\}\,|\,1\le m\le m_0;\,1\le j\le k\right\}.
\label{4.17}\end{equation}
Note that, when $k\ge 1$, we fix an even integer $T\in n{\bf N}$ obtained from Proposition 3.7
satisfying (\ref{4.5}) and (\ref{4.6}) for this ${\epsilon}>0$.
We use short hand notations as in (\ref{3.5}) and carry out the proof in several steps.
\medskip
{\bf Step 1.} {\it Proof of the quasi-periodicity (A). }
\medskip
By (\ref{3.7}) of Theorem 3.3 for any $m\in{\bf N}$ we obtain
\begin{eqnarray} i(c^{m+T})
&=& (m+T)(i(c)+p_-+p_0-r ) + 2\sum_{j=1}^rE(\frac{(m+T){\theta}_j}{2\pi}) - r \nonumber\\
&& - p_- - p_0 - {{1+(-1)^m}\over 2}(q_0+q_+) + 2\sum_{j=k_{\ast}+1}^{r_{\ast}}{\varphi}(\frac{m{\alpha}_j}{2\pi})
- 2(r_{\ast}-k_{\ast}) \nonumber\\
&=& i(c^m)+i(c^T) + (r + p_- + p_0 + q_0 + q_+ +2r_{\ast}-2k_{\ast}) \nonumber\\
&& \qquad+ 2\sum_{j=1}^rE(\frac{(m+T){\theta}_j}{2\pi})-2\sum_{j=1}^rE(\frac{m{\theta}_j}{2\pi})
- 2\sum_{j=1}^rE(\frac{T{\theta}_j}{2\pi}). \label{4.18}\end{eqnarray}
where we have used the evenness of $T$ and the fact $T\in n{\bf N}$. Note that
\begin{eqnarray} \frac{1}{2}\Theta(m,T)
&\equiv& \sum_{j=1}^rE(\frac{(m+T){\theta}_j}{2\pi})-\sum_{j=1}^rE(\frac{m{\theta}_j}{2\pi})
-\sum_{j=1}^rE(\frac{T{\theta}_j}{2\pi}) \nonumber\\
&=& \sum_{j=1}^kE(\frac{(m+T){\theta}_j}{2\pi})-\sum_{j=1}^kE(\frac{m{\theta}_j}{2\pi})
-\sum_{j=1}^kE(\frac{T{\theta}_j}{2\pi}) \nonumber\\
&=& \sum_{j=1}^kE\left(\{\frac{m{\theta}_j}{2\pi}\}+\{\frac{T{\theta}_j}{2\pi}\}\right)
-\sum_{j=1}^kE\left(\{\frac{m{\theta}_j}{2\pi}\}\right)
-\sum_{j=1}^kE\left(\{\frac{T{\theta}_j}{2\pi}\}\right) \nonumber\\
&=& \sum_{j=1}^kE\left(\{\frac{m{\theta}_j}{2\pi}\}+\{\frac{T{\theta}_j}{2\pi}\}\right)-2k \nonumber\\
&=& \sum_{j=1}^AE\left(\{\frac{m{\theta}_j}{2\pi}\}+\{\frac{T{\theta}_j}{2\pi}\}\right)
+\sum_{j=A+1}^kE\left(\{\frac{m{\theta}_j}{2\pi}\}+\{\frac{T{\theta}_j}{2\pi}\}\right)-2k. \label{4.19}\end{eqnarray}
So it follows from (\ref{4.5})-(\ref{4.6}) and (\ref{4.19}) that
\begin{equation} \Theta(m,T)=2(A-k),\qquad \forall\, 1\le m\le m_0. \label{4.20}\end{equation}
Together with (\ref{4.18}), it yields
\begin{equation} i(c^{m+T})= i(c^m)+i(c^T) + r + p_- + p_0 + q_0 + q_+ + 2(r_{\ast}-k_{\ast})+2(A-k),\quad
\forall\,1\le m\le m_0. \label{4.21}\end{equation}
Thus (\ref{4.8}) holds. And (\ref{4.9}) follows from the definition of $T\in n{\bf N}$.
\medskip
{\bf Step 2.} {\it Proof of the relative parity (B).}
\medskip
By Theorem 3.3 and the definition (\ref{4.7}) of $p(c)$ we have
\begin{eqnarray} i(c^T)-p(c)
&=& T(i(c)+p_-+p_0-r )+2\sum_{j=1}^r E(\frac{T{\theta}_j}{2\pi}) \nonumber\\
& & - r - p_- - p_0 - {{1+(-1)^T}\over 2}(q_0+q_+)-2(r_{\ast}- k_{\ast}) \nonumber\\
& & - (p_- + p_0 + q_+ + q_0 +2r_{\ast}-2k_{\ast}+ r +2A-2k ) \nonumber\\
&=& T(i(c)+p_-+p_0-r ) + 2\sum_{j=1}^r E(\frac{T{\theta}_j}{2\pi}) - 2r - 2p_- - 2p_0 \nonumber\\
& & - {{3+(-1)^T}\over 2}(q_0+q_+) - 4(r_{\ast}-k_{\ast})-2(A-k). \nonumber\end{eqnarray}
Because $T$ is even, it yields the relative parity (B).
\medskip
{\bf Step 3.} {\it Proof of the nullity-periodicity (C).}
\medskip
Because $\nu(c)=p_-+2p_0+p_+$, by Theorem 3.3 we have
\begin{eqnarray} \nu(c^T)-p(c)&=&\nu(c^n)-p(c)\nonumber\\
&=& p_- + 2p_0 + p_+ + (q_-+2q_0+q_+) + 2(r-k+r_{\ast}-k_{\ast}+r_0-k_0) \nonumber\\
&& \quad - (p_0+p_-+q_0+q_++2r_{\ast}-2k_{\ast}+r+2A-2k) \nonumber\\
&=& p_0 + p_+ + q_0 + q_- + r + 2r_0-2k_0-2A \nonumber\\
&\le& d-1-2A. \nonumber\end{eqnarray}
This yields (C).
\medskip
{\bf Step 4.} {\it Proof of the period-mean index (D). }
When $k=0$, (D) is proved in Theorem 3.7 of \cite{LoD1}. Now we consider the case
$k\ge 1$.
When $\hat{i}(c)=i(c)+p_-+p_0-r+\sum_{j=1}^r{\theta}_j/\pi$ is a rational
number, we must have $k\ge 2$ and then $A\ge 1$ by Proposition 3.7.
Let $\sum_{j=1}^k{\theta}_j/2\pi=q/p$ for some integers $0<p,q\in{\bf N}$ with $(p,q)=1$.
Further choose $0<{\epsilon}<\frac{1}{k}$ and an even $T\in np{\bf N}$ satisfying (\ref{4.5})
and (\ref{4.6}). Note that $\sum_{j=1}^k\{\frac{T{\theta}_j}{2\pi}\}$ is an integer
because $T$ is an integer multiple of $p$.
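For completeness, this integrality can be seen directly: since $p\,|\,T$ and
$\sum_{j=1}^k{\theta}_j/(2\pi)=q/p$,
$$ \sum_{j=1}^k\{\frac{T{\theta}_j}{2\pi}\}
=\sum_{j=1}^k\frac{T{\theta}_j}{2\pi}-\sum_{j=1}^k[\frac{T{\theta}_j}{2\pi}]
=\frac{T}{p}\,q-\sum_{j=1}^k[\frac{T{\theta}_j}{2\pi}]\in{\bf Z}. $$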
If $A=k\ge 2$, by (\ref{4.5}) it yields a contradiction
\begin{equation} \sum_{j=1}^k\{\frac{T{\theta}_j}{2\pi}\}=\sum_{j=1}^A\{\frac{T{\theta}_j}{2\pi}\}\in (A(1-{\epsilon}),A)\cap{\bf Z}=\emptyset.
\label{4.22}\end{equation}
If $[\frac{k+1}{2}]\le A\le k-1$, by (\ref{4.5}) and (\ref{4.6}) we obtain
\begin{equation} \sum_{j=1}^k\{\frac{T{\theta}_j}{2\pi}\}\in (A(1-{\epsilon}),A+(k-A){\epsilon})\cap{\bf Z}=\{A\}. \label{4.23}\end{equation}
Together with (\ref{3.7}) of Theorem 3.3 and the definition (\ref{4.7}) of $p(c)$, it
yields
\begin{eqnarray} T\hat{i}(c)
&=& T\left(i(c) + p_- + p_0 - r + \sum_{j=1}^r\frac{{\theta}_j}{\pi}\right) \nonumber\\
&=& i(c^T) + (r+p_-+p_0+2r_{\ast}-2k_{\ast})\nonumber\\
&& +\frac{1+(-1)^T}{2}(q_0+q_+)
+ 2\left(\sum_{j=1}^k\frac{T{\theta}_j}{2\pi} -\sum_{j=1}^k E(\frac{T{\theta}_j}{2\pi})\right)\nonumber\\
&=& i(c^T) + (r+p_-+p_0+2r_{\ast}-2k_{\ast}+q_0+q_+) \nonumber\\
&& +2\left(\sum_{j=1}^k\{\frac{T{\theta}_j}{2\pi}\} -\sum_{j=1}^k E(\{\frac{T{\theta}_j}{2\pi}\})\right)\nonumber\\
&=& i(c^T)+(r+p_-+p_0+2r_{\ast}-2k_{\ast}+q_0+q_++2A-2k)\nonumber\\
&=& i(c^T) + p(c), \label{4.24}\end{eqnarray}
where we have used the fact that $T\in np{\bf N}$ is even, and the
rationality of $\hat{i}(c)$ is used only to get the second last equality.
When $\hat{i}(c)>0$ is irrational, we further require that ${\epsilon}>0$ satisfies $2k{\epsilon}<\tau$.
Thus in this case (\ref{4.22}) and (\ref{4.23}) become
$$ \sum_{j=1}^k\{\frac{T{\theta}_j}{2\pi}\}\in (A(1-{\epsilon}),A+k{\epsilon})\subset (A-\tau/2, A+\tau/2). $$
Then from the third equality of (\ref{4.24}) we obtain
$$ |T\hat{i}(c) - (i(c^T) + p(c))| \le 2|\sum_{j=1}^k\{\frac{T{\theta}_j}{2\pi}\}-A| < \tau. $$
i.e., (D) holds.
This completes the proof of Theorem 4.3. \hfill\vrule height0.18cm width0.14cm $\,$
\medskip
{\bf Lemma 4.4.} {\it For any orientable closed geodesic $c$ with $\hat{i}(c)>0$ on a
Finsler manifold $(M,F)$ of dimension $d$, denote the basic normal form decomposition
of the linearized Poincar\'e map $P_c$ of $c$ by (\ref{3.5}). Then for any even $T\in n{\bf N}$
and $m_0=m_0(c)$ given by Definition 4.1, there holds
\begin{eqnarray} i(c^{T+m_0+m})-i(c^T)\ge p(c)+d, \qquad\forall\, m\ge 1. \label{4.25}\end{eqnarray}}
{\bf Proof.} By (\ref{3.7}) of Theorem 3.3 and Definition 4.1, we obtain
\begin{eqnarray} i(c^{T+m_0+m})
&=& i(c^T)+(m+m_0)(i(c)+p_-+p_0-r ) + {{1-(-1)^{m_0+m}}\over 2}(q_0+q_+) \nonumber\\
&& +2\sum_{j=1}^r\left(E(\frac{(m_0+m+T){\theta}_j}{2\pi}) - E(\frac{T{\theta}_j}{2\pi})\right)
+ 2\sum_{j=k_{\ast}+1}^{r_{\ast}}{\varphi}(\frac{(m_0+m){\alpha}_j}{2\pi}) \nonumber\\
&=& i(c^T)+i(c^{m_0+m})+r+p_-+p_0+2(r_{\ast}-k_{\ast})+q_0+q_++2A-2k-(2A-2k)\nonumber\\
&& +2\sum_{j=1}^r\left(E(\frac{(m_0+m+T){\theta}_j}{2\pi})
-E(\frac{(m_0+m){\theta}_j}{2\pi})-E(\frac{T{\theta}_j}{2\pi})\right)\nonumber\\
&=& i(c^T)+i(c^{m_0+m})+p(c)+2k-2A\nonumber\\
&& +2\sum_{j=1}^k\left(E(\{\frac{m_0{\theta}_j}{2\pi}\}
+\{\frac{m{\theta}_j}{2\pi}\}+\{\frac{T{\theta}_j}{2\pi}\})
-E(\{\frac{m_0{\theta}_j}{2\pi}\}+\{\frac{m{\theta}_j}{2\pi}\})
-E(\{\frac{T{\theta}_j}{2\pi}\})\right) \nonumber\\
&\ge& i(c^T)+i(c^{m_0+m})+p(c)-2\sum_{j=1}^kE(\{\frac{m_0{\theta}_j}{2\pi}\}+\{\frac{m{\theta}_j}{2\pi}\}) \nonumber\\
&\ge& i(c^T)+i(c^{m_0+m})+p(c)-4k\nonumber\\
&\ge& i(c^T)+p(c)+d,\qquad\forall \,m\ge 1. \label{4.26}\end{eqnarray}
This completes the proof of Lemma 4.4. \hfill\vrule height0.18cm width0.14cm $\,$
\medskip
The next result generalizes Proposition 3.11 of \cite{LoD1} from rational closed geodesics to
irrational ones.
{\bf Theorem 4.5.} {\it For every orientable closed geodesic $c$ with $\hat{i}(c)>0$
on a Finsler manifold $(M,F)$ of dimension $d\ge 2$, denote the basic normal
form decomposition of the linearized Poincar\'e map $P_c$ of $c$ by (\ref{3.5}). Then
there exist an integer $A$ with $[(k+1)/2]\le A\le k$ and a subset $P$ of
$\{1, \ldots, k\}$ consisting of $A$ integers such that for any small ${\epsilon}>0$
there exists a sufficiently large even integer $T\in n{\bf N}$ such that (\ref{4.5}) and (\ref{4.6})
and the following estimate hold}
\begin{equation} i(c^m)+\nu(c^m)\le i(c^T)+p(c)+d-3, \qquad\forall\,1\le m\le T-1. \label{4.27}\end{equation}
{\bf Proof.} When $k=0$, i.e., the closed geodesic $c$ is rational, this result was
proved in Proposition 3.11 of \cite{LoD1}, whose proof there in fact did not use
the fact $\;^{\#}{\rm CG}(M,F)=1$. Therefore here we only consider the case of $k\ge 1$.
On the one hand, by Theorem 3.3, for any $1\le m\le T-1$, we have
\begin{eqnarray}
i(c^m)
&+& i(c^{T-m})+\nu(c^m) \nonumber\\
&=& i(c^{T}) + 2\sum_{j=1}^r\left(E(\frac{m{\theta}_j}{2\pi})+E(\frac{(T-m){\theta}_j}{2\pi})
-E(\frac{T{\theta}_j}{2\pi})\right)- (r+p_-+p_0) \nonumber\\
&& - (-1)^m(q_0+q_+) +2\sum_{j=k_{\ast}+1}^{r_{\ast}}{\varphi}(-\frac{m{\alpha}_j}{2\pi})
+2\sum_{j=k_{\ast}+1}^{r_{\ast}}{\varphi}(\frac{m{\alpha}_j}{2\pi})-2(r_{\ast}-k_{\ast}) \nonumber\\
&& +p_-+2p_0+p_++\frac{1+(-1)^m}{2}(q_-+2q_0+q_+)
+2(r-k+r_{\ast}-k_{\ast}+r_0-k_0) \nonumber\\
&&-2[\sum_{j=k+1}^r{\varphi}\left(\frac{m{\theta}_j}{2\pi}\right)+
\sum_{j=k_{\ast}+1}^{r_{\ast}}{\varphi}\left(\frac{m{\alpha}_j}{2\pi}\right)
+ \sum_{j=k_0+1}^{r_0}{\varphi}\left(\frac{m{\beta}_j}{2\pi}\right)] \nonumber\\
&=& i(c^{T}) + 2\sum_{j=1}^r\left(E(\frac{m{\theta}_j}{2\pi})+E(\frac{(T-m){\theta}_j}{2\pi})
-E(\frac{T{\theta}_j}{2\pi})\right)-r+p_0+p_+ \nonumber\\
&& - (-1)^m(q_0+q_+)+\frac{1+(-1)^m}{2}(q_-+2q_0+q_+)+2(r-k+r_0-k_0) \nonumber\\
&& +2\sum_{j=k_{\ast}+1}^{r_{\ast}}{\varphi}(-\frac{m{\alpha}_j}{2\pi})
-2[\sum_{j=k+1}^r{\varphi}\left(\frac{m{\theta}_j}{2\pi}\right)+
\sum_{j=k_0+1}^{r_0}{\varphi}\left(\frac{m{\beta}_j}{2\pi}\right)], \label{4.28}\end{eqnarray}
where we have used the fact $T\in n{\bf N}$ is even and the fact $\nu(c)=p_-+p_++2p_0$
by the definitions of $p_{\ast}$s in Theorem 3.2.
Note that by (\ref{4.16}) we get
\begin{eqnarray}
&& 2\sum_{j=1}^r\left(E(\frac{m{\theta}_j}{2\pi}) + E(\frac{(T-m){\theta}_j}{2\pi})
-E(\frac{T{\theta}_j}{2\pi})\right)-2\sum_{j=k+1}^r{\varphi}\left(\frac{m{\theta}_j}{2\pi}\right)\nonumber\\
&&\qquad =2\sum_{j=1}^k\left(E(\frac{m{\theta}_j}{2\pi})+E(\frac{(T-m){\theta}_j}{2\pi})
-E(\frac{T{\theta}_j}{2\pi})\right)\nonumber\\
&&\qquad\qquad+2\sum_{j=k+1}^r\left(E(\frac{m{\theta}_j}{2\pi})
+E(-\frac{m{\theta}_j}{2\pi})\right)-2\sum_{j=k+1}^r{\varphi}\left(\frac{m{\theta}_j}{2\pi}\right)\nonumber\\
&&\qquad=2\sum_{j=1}^k\left(E(\{\frac{m{\theta}_j}{2\pi}\})+E(\{\frac{T{\theta}_j}{2\pi}\}
-\{\frac{m{\theta}_j}{2\pi}\})-E(\{\frac{T{\theta}_j}{2\pi}\})\right)\nonumber\\
&&\qquad=2\sum_{j=1}^k\left(E(\{\frac{T{\theta}_j}{2\pi}\}
-\{\frac{m{\theta}_j}{2\pi}\})\right)\nonumber\\
&&\qquad\le 2k. \label{4.29}\end{eqnarray}
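The final inequality in (\ref{4.29}) uses only that each argument lies in $(-1,1)$ and that
$E(y)\le 1$ whenever $y\le 1$:
$$ \{\frac{T{\theta}_j}{2\pi}\}-\{\frac{m{\theta}_j}{2\pi}\}\in(-1,1)
\quad\Longrightarrow\quad
E\left(\{\frac{T{\theta}_j}{2\pi}\}-\{\frac{m{\theta}_j}{2\pi}\}\right)\le 1,
\qquad 1\le j\le k. $$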
Together with (\ref{4.28}), it yields
\begin{eqnarray} i(c^m) &+& i(c^{T-m})+\nu(c^m) \nonumber\\
&&\le i(c^{T}) + 2k-r+p_0+p_++2(r-k+r_0-k_0)- (-1)^m(q_0+q_+)\nonumber\\
&& \quad+\frac{1+(-1)^m}{2}(q_-+2q_0+q_+)
+2\sum_{j=k_{\ast}+1}^{r_{\ast}}{\varphi}(\frac{m{\alpha}_j}{2\pi})
-2\sum_{j=k_0+1}^{r_0}{\varphi}\left(\frac{m{\beta}_j}{2\pi}\right)\nonumber\\
&&=i(c^{T}) + r+p_0+p_++q_0+2(r_0-k_0)+\frac{1- (-1)^m}{2}q_+\nonumber\\
&&\qquad \quad+\frac{1+(-1)^m}{2}q_-
+2\sum_{j=k_{\ast}+1}^{r_{\ast}}{\varphi}(\frac{m{\alpha}_j}{2\pi})
-2\sum_{j=k_0+1}^{r_0}{\varphi}\left(\frac{m{\beta}_j}{2\pi}\right)\nonumber\\
&&\le i(c^T)+(p_-+p_0+q_0+q_++2r_{\ast}-2k_{\ast}+r+2A-2k)+p_++2(r_0-k_0)\nonumber\\
&&\qquad -p_--2(A-k)-\frac{1+(-1)^m}{2}q_++\frac{1+(-1)^m}{2}q_-
-2\sum_{j=k_0+1}^{r_0}{\varphi}\left(\frac{m{\beta}_j}{2\pi}\right)\nonumber\\
&&\le i(c^T)+p(c)+p_++q_-+2r_0-2(A-k)\nonumber\\
&&\le i(c^T)+p(c)+p_++q_-+2r_0+k, \qquad\forall\,1\le m\le T-1. \label{4.30}\end{eqnarray}
In other words, we obtain
\begin{equation} i(c^m)+\nu(c^m)\le i(c^T)+p(c)-i(c^{T-m})+p_++q_-+2r_0+k, \qquad 1\le m\le T-1. \label{4.31}\end{equation}
On the other hand, it follows from Theorem 3.8 that
\begin{eqnarray} i(c^m)
&\le& i(c^T)-i(c)-p_0-p_-+r+q_0+q_++2(r_{\ast}-k_{\ast}) +2(A-k) \nonumber\\
&=& i(c^T)+p(c)-i(c)-2(p_0+p_-), \qquad \forall\,1\le m\le T-1. \label{4.32}\end{eqnarray}
Note that $p_+ + q_- + 2r_0+k\le d-1$ always holds in (\ref{4.31}) by (\ref{3.6})
with $d$ replaced by $d-1$. If
$p_+ + q_- + 2r_0+k \le d-3$, then (\ref{4.31}) yields (\ref{4.27}).
Therefore to continue our proof, it suffices to consider the following two distinct cases.
\medskip
{\bf Case 1.} {$p_+ + q_- + 2r_0+k = d-1$.}
\medskip
In this case, by (\ref{3.8}) and (\ref{3.10}) for all $m\ge 1$ we have
\begin{equation} \nu(c^m) \le \nu(c^n) = p_++q_-+2r_0 = d-1-k. \label{4.33}\end{equation}
Thus together with (\ref{4.32}) it yields
\begin{equation} i(c^m)+\nu(c^m)\le i(c^T)+p(c)-i(c)+d-1-k. \label{4.34}\end{equation}
So in order to prove (\ref{4.27}), it suffices to consider the case of $i(c)=0$
and $k=1$. By the fact $i(c)=0$ and Proposition 3.4, we must have $q_-\in 2{\bf N}-1$
and thus $n\in 2{\bf N}$ by the definition of $n=n(c)$.
Therefore we have
\begin{equation} P_c\approx N_1(1,-1)^{{\diamond} p_+}{\diamond} N_1(-1,1)^{{\diamond} q_-}
{\diamond} ({\diamond}_{j=1}^{r_0}N_2(e^{{\beta}_j\sqrt{-1}},B_j)){\diamond} R({\theta}_1), \label{4.35}\end{equation}
where ${\theta}_1/\pi\in (0,2)\setminus{\bf Q}$. Thus by Theorem 3.3, we have
\begin{eqnarray}
i(c^m) &=& -m + 2E(\frac{m{\theta}_1}{2\pi}) - 1, \qquad \forall\; m\in{\bf N}, \label{4.36}\\
\nu(c^n) &=& p_+ + q_- + 2r_0 = d-1-k=d-2. \label{4.37}\end{eqnarray}
When $m\in ({\bf N}\setminus n{\bf N})$, we have $\nu(c^m)< \nu(c^n)=\nu(c^T)$. Thus by
(\ref{4.32}) and (\ref{4.37}) we get
$$ i(c^m) + \nu(c^m) \le i(c^T) + p(c) + \nu(c^n) -1 = i(c^T) + p(c) + d-3, $$
i.e., (\ref{4.27}) holds.
Then for $1\le mn<T$ and the $T$ chosen above, by $\hat{i}(c)>0$ we get
\begin{eqnarray} i(c^T)-i(c^{mn})&=& mn-T+2 \left(E(\frac{T{\theta}_1}{2\pi})-E(\frac{mn{\theta}_1}{2\pi})\right) \nonumber\\
&=& mn-T+(T-mn)\frac{{\theta}_1}{\pi}+2 \left(\{\frac{mn{\theta}_1}{2\pi}\}-\{\frac{T{\theta}_1}{2\pi}\}\right) \nonumber\\
&=& (T-mn)\hat{i}(c)+2 \left(\{\frac{mn{\theta}_1}{2\pi}\}-\{\frac{T{\theta}_1}{2\pi}\}\right) \nonumber\\
&>& 2 \left(\{\frac{mn{\theta}_1}{2\pi}\}-\{\frac{T{\theta}_1}{2\pi}\}\right) \nonumber\\
&>& -2. \label{4.38}\end{eqnarray}
Since both $n$ and $T$ are even, it follows from (\ref{4.36}) that $i(c^T)-i(c^{mn})$ is even.
Thus by the irrationality of $\frac{{\theta}_1}{\pi}$ and (\ref{4.38}) we obtain
\begin{equation} i(c^T)\ge i(c^{mn}),\qquad\,\forall\, 1\le mn<T. \label{4.39}\end{equation}
Then by the fact $p(c)=1$ and (\ref{4.37})-(\ref{4.39}) we have
\begin{equation} i(c^{mn}) + \nu(c^{mn}) \le i(c^T)+\nu(c^n)
\le i(c^T) + p(c)-1 + d-2=i(c^T)+p(c)+d-3. \label{4.40}\end{equation}
That is, (\ref{4.27}) holds.
\medskip
{\bf Case 2.} {$p_+ + q_- + 2r_0+k = d-2$.}
\medskip
In this case, $ p_- + p_0 + q_0 + q_+ + r-k + 2r_{\ast} + h_- + h_+=1$ by (\ref{3.6})
with $d$ replaced by $d-1$, which implies $r_{\ast}=0$. By Theorem 3.3 we have
\begin{eqnarray} \nu(c^m)
&\le& \nu(c^n) \nonumber\\
&=& p_+ + q_- + 2r_0+(p_- +q_++ 2p_0 + 2q_0 + 2(r-k)) \nonumber\\
&=& d-2-k+(p_- +q_++ 2p_0 + 2q_0 + 2(r-k)),\quad\forall\,m\ge 1. \label{4.41}\end{eqnarray}
If $k\ge 3$, by (\ref{4.41}) it yields $\nu(c^m)\le d-3$, $\forall\,m\ge 1$. Thus
together with (\ref{4.32}), it yields (\ref{4.27}).
If $k=2$, by (\ref{4.32}) and (\ref{4.41}) it suffices to consider the following case
\begin{equation} i(c)=p_0=p_-=q_+=h_+=h_-=r_{\ast}=0, \quad q_0+(r-k)=1, \quad k=2, \label{4.42}\end{equation}
because otherwise (\ref{4.32}) would imply (\ref{4.27}) already.
Similarly, if $k=1$, by (\ref{4.31}), (\ref{4.32}) and (\ref{4.41}) it suffices to
consider the following case
\begin{equation} i(c^{T-m})=p_0=p_-=h_+=h_-=r_{\ast}=0, \quad q_++q_0+(r-k)=1, \quad k=1, \label{4.43}\end{equation}
because otherwise (\ref{4.31}) and (\ref{4.32}) would imply (\ref{4.27}) already.
Now we consider (\ref{4.42}) and (\ref{4.43}) respectively.
\medskip
{\bf Case 2.1.} {\it (\ref{4.42}) happens.}
\medskip
Viewing $-I$ as $R(\pi)$ if $q_0=1$, it suffices to consider the case
$r-k=1$. Thus in addition to (\ref{4.42}) we have
\begin{equation} r=3, \qquad q_0=0. \label{4.44}\end{equation}
Therefore we have
\begin{equation} P_c\approx N_1(1,-1)^{{\diamond} p_+}{\diamond} N_1(-1,1)^{{\diamond} q_-}
{\diamond} ({\diamond}_{j=1}^{r_0}N_2(e^{{\beta}_j\sqrt{-1}},B_j)){\diamond} R({\theta}_1){\diamond} R({\theta}_2){\diamond} R({\theta}_3), \label{4.45}\end{equation}
where ${\theta}_1/\pi$ and ${\theta}_2/\pi\in (0,2)\setminus{\bf Q}$ and ${\theta}_3/\pi\in(0,2)\cap{\bf Q}$. Thus by
Theorem 3.3, we have
\begin{eqnarray}
i(c^m) &=& -3m + 2\sum_{j=1}^3E(\frac{m{\theta}_j}{2\pi}) - 3, \qquad \forall\; m\in{\bf N}, \label{4.46}\\
\nu(c^n) &=& p_+ + q_- + 2r_0 + 2 = d-2. \label{4.47}\end{eqnarray}
By the fact $i(c)=0$, (\ref{4.45}) and Proposition 3.4, there holds $q_-\in 2{\bf N}-1$.
By the definition of $n$, it further yields
\begin{equation} n\in 2{\bf N}. \label{4.48}\end{equation}
When $m\in ({\bf N}\setminus n{\bf N})$, we have $\nu(c^m)< \nu(c^n)=\nu(c^T)$. Thus by
(\ref{4.32}) and (\ref{4.47}) we get
$$ i(c^m) + \nu(c^m) \le i(c^T) + p(c) + \nu(c^n) -1 = i(c^T) + p(c) + d-3, $$
i.e., (\ref{4.27}) holds.
When $m\in {\bf N}$ satisfies $1\le mn<T$, by (\ref{4.46}) and (\ref{4.48}) we have
$i(c^{T-mn})\in 2{\bf N}-1$. Therefore by (\ref{4.31}), for any $1\le mn<T$, we get
\begin{eqnarray} i(c^{mn}) + \nu(c^{mn})
&\le & i(c^T)+ p(c)-i(c^{T-mn})+d-2 \nonumber\\
&\le& i(c^T) + p(c) + d-3. \label{4.49}\end{eqnarray}
That is, (\ref{4.27}) holds.
\medskip
{\bf Case 2.2.} {\it (\ref{4.43}) happens.}
\medskip
Viewing $-I$ (or $N_1(-1,-1)$) as $R(\pi)$ if $q_0=1$ (or $q_+=1$),
although their nullities may differ by $1$ (cf. (\ref{4.53}) below), it suffices to
consider the case $r-k=1$. Thus in addition to (\ref{4.43}) we have
\begin{equation} p(c)=2,\quad r=2, \quad q_+=q_0=0. \label{4.50}\end{equation}
Therefore we have
\begin{equation} P_c\approx N_1(1,-1)^{{\diamond} p_+}{\diamond} N_1(-1,1)^{{\diamond} q_-}
{\diamond} ({\diamond}_{j=1}^{r_0}N_2(e^{{\beta}_j\sqrt{-1}},B_j)){\diamond} R({\theta}_1){\diamond} R({\theta}_2), \label{4.51}\end{equation}
where ${\theta}_1/\pi\in (0,2)\setminus{\bf Q}$ and ${\theta}_2/\pi\in(0,2)\cap{\bf Q}$. Thus by Theorem 3.3, we have
\begin{eqnarray}
i(c^m) &=& (i(c)-2)m + 2\sum_{j=1}^2E(\frac{m{\theta}_j}{2\pi}) - 2, \qquad \forall\; m\in{\bf N}, \label{4.52}\\
\nu(c^n) &=& p_+ + q_- + 2r_0 + 2(r-k) = d-1. \label{4.53}\end{eqnarray}
If $q_-\in 2{\bf N}-1$, by (\ref{4.51}) and Proposition 3.4, there holds
\begin{equation} n\in 2{\bf N} \qquad \mbox{and}\qquad i(c^{T-m})\ge i(c)\in 2{\bf N}-1,\quad\forall\,1\le m\le T-1.\label{4.54}\end{equation}
Therefore, by (\ref{4.31}), (\ref{4.50}) and (\ref{4.53})-(\ref{4.54}) we get
\begin{equation} i(c^m)+\nu(c^m)\le i(c^T)+p(c)-i(c^{T-m})+d-2\le i(c^T)+p(c)+d-3. \label{4.55}\end{equation}
That is, (\ref{4.27}) holds.
If $q_-\in2{\bf N}$, by (\ref{4.51}) and Proposition 3.4, it yields $i(c)\in 2{\bf N}_0$.
So it follows from (\ref{4.52}) that $i(c^m)\in 2{\bf N}_0$ for all $m\ge 1$. Let
$\frac{q}{p}=\frac{{\theta}_2}{2\pi}$ with integers $p$ and $q$ satisfying $(p,q)=1$.
When $m\in({\bf N}\setminus p{\bf N})$, we have $\nu(c^m)\le\nu(c^n)-2=d-3$ by (\ref{4.53}). Thus by
(\ref{4.32}) it yields
\begin{equation} i(c^m)+\nu(c^m)\le i(c^T)+p(c)+d-3. \label{4.56}\end{equation}
When $m\in p{\bf N}$, similarly to (\ref{4.38}),
we can obtain $i(c^T)\ge i(c^m)$. Therefore, by (\ref{4.50}) and
(\ref{4.53}), we get
\begin{equation} i(c^m)+\nu(c^m)\le i(c^T)+\nu(c^n)\le i(c^T)+d-1+p(c)-2=i(c^T)+p(c)+d-3. \label{4.57}\end{equation}
That is, (\ref{4.27}) holds.
The proof is complete. \hfill\vrule height0.18cm width0.14cm $\,$
\setcounter{equation}{0}
\section{Homological quasi-periodicity}
In this section, we study properties of homologies of energy level sets determined
by closed geodesics and establish certain periodicity of homological modules of
energy level set pairs when there exists only one prime closed geodesic.
For any $m\in{\bf N}$, denote the energy level $E(c^m)$ of $c^m$ by
\begin{equation} {\kappa}_m=E(c^m). \label{5.1}\end{equation}
It is well known that ${\kappa}_m=E(c^m)=m^2E(c)$ is strictly increasing to $+\infty$.
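The scaling ${\kappa}_m=m^2{\kappa}_1$ can be checked directly. Writing, as is standard, the $m$-th
iterate as $c^m(t)=c(mt)$ and the energy functional on ${\Lambda} M$ as
$E({\gamma})=\frac{1}{2}\int_0^1F({\gamma}(t),\dot{\gamma}(t))^2\,dt$ (the precise normalizing constant,
assumed here as in Section 2, plays no role in the argument), the positive homogeneity of $F$ in the
velocity, the substitution $s=mt$ and the $1$-periodicity of $c$ give
$$ E(c^m)=\frac{1}{2}\int_0^1F\bigl(c(mt),m\,\dot c(mt)\bigr)^2dt
=\frac{m}{2}\int_0^mF\bigl(c(s),\dot c(s)\bigr)^2ds
=\frac{m^2}{2}\int_0^1F\bigl(c(s),\dot c(s)\bigr)^2ds=m^2E(c). $$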
Set ${\kappa}_0=0$. The next lemma follows from Theorem 3 of \cite{GrM1}, the Theorem
on p.367 of \cite{GrM2}, Lemma 3.1 to Theorem 3.7 of \cite{Lon4}, and
Theorem I.4.2 of \cite{Cha1}.
\medskip
{\bf Lemma 5.1.} (Lemma 4.2 of \cite{LoD1}) {\it Let $M=(M,F)$ be a Finsler
manifold with $\dim M<+\infty$. Let $c$ be a closed geodesic on $M$ each of whose
iteration $S^1\cdot c^m$ is an isolated critical orbit of $E$ in the loop space
${\Lambda} M$. Suppose that there are integers $m\in{\bf N}$ and $p\in 2{\bf N}_0$ such that
\begin{equation} i(c^m) = i(c) + p, \qquad \nu(c^m)=\nu(c). \label{5.2}\end{equation}
Then the iteration map $\psi^m$ induces an isomorphism }
\begin{equation} \psi^m_{\ast}: \overline{C}_{\ast}(E,c) \to \overline{C}_{\ast+p}(E,c^m). \label{5.3}\end{equation}
\medskip
One of the key results in \cite{LoD1} is the homological isomorphism Theorem 4.3
there for rational closed geodesics. Below we restate this theorem and give more
details on two points of the proof given in \cite{LoD1}.
\medskip
{\bf Theorem 5.2.} (Theorem 4.3 of \cite{LoD1}) {\it Let $M=(M,F)$ be a
Finsler manifold possessing only one prime closed geodesic $c$ which is
rational and orientable. Let $n=n(c)$ be the analytical period of $c$.
Recall that by Theorem 3.7 of \cite{LoD1} there hold
\begin{equation} i(c^{m+n}) = i(c^m) + \overline{p}, \quad \nu(c^{m+n})=\nu(c^m), \qquad \forall\, m\in{\bf N}, \label{5.4}\end{equation}
where $\overline{p}=i(c^n)+p(c)$ is even. Then for any non-negative integers $b>a$ and any integer
$h\in{\bf Z}$, the iteration maps $\{\psi^m\}$ and inclusion maps of corresponding level sets
induce a map $f$ on singular chains which yields an isomorphism }
\begin{equation} f_{\ast}: H_h(\overline{{\Lambda}}^{{\kappa}_b},\overline{{\Lambda}}^{{\kappa}_a})
\to H_{h+\overline{p}}(\overline{{\Lambda}}^{{\kappa}_{n+b}},\overline{{\Lambda}}^{{\kappa}_{n+a}}). \label{5.5}\end{equation}
{\bf Proof.} Proof of this theorem was given in \cite{LoD1} based on the above
Lemma 5.1. Note that an important condition in Lemma 5.1 is that the constant $p$
in (\ref{5.2}) should be even. In the applications of Lemma 5.1 (i.e., Lemma 4.2 of
\cite{LoD1}) in the proof of Theorem 4.3 in \cite{LoD1}, there are two points in its
Step 1 on which we did not give details on how to get this evenness condition. Below
we provide details of the proofs for these two points.
Because there is only one prime closed geodesic $c$ on $M$,
and ${\kappa}_m=E(c^m)=m^2E(c)=m^2{\kappa}_1>0$ is strictly increasing to $+\infty$, the critical
module of $E$ at $S^1\cdot c^m$ can be defined by
\begin{equation} \overline{C}_j(E,c^m) = H_j(\overline{{\Lambda}}^{{\kappa}_m},\overline{{\Lambda}}^{{\kappa}_m\#})
= H_j(\overline{{\Lambda}}^{{\kappa}_m},\overline{{\Lambda}}^{{\kappa}_{m-1}}), \label{5.6}\end{equation}
where and below we denote by
\begin{equation} \overline{{\Lambda}}^{{\kappa}_m\#} \equiv \overline{{\Lambda}^{{\kappa}_m}\setminus(S^1\cdot c^m)}. \label{5.7}\end{equation}
Given a level set pair $(\overline{{\Lambda}}^{{\kappa}_p},\overline{{\Lambda}}^{{\kappa}_p\#})$ with $p\in{\bf N}$,
for any ${\gamma}\in{\Lambda}^{{\kappa}_p}$ and $m\in{\bf N}$ we have
$$ E(\psi^m({\gamma}))=m^2E({\gamma})\le m^2{\kappa}_p=m^2E(c^p)=E(c^{mp})={\kappa}_{mp}. $$
Therefore the iteration map $\psi^m$ maps the level set $\overline{{\Lambda}}^{{\kappa}_p}$ into
$\overline{{\Lambda}}^{{\kappa}_{mp}}$. We denote the image of the pair $(\overline{{\Lambda}}^{{\kappa}_p},\overline{{\Lambda}}^{{\kappa}_p\#})$
under the iteration map $\psi^m$ by
\begin{equation} (\overline{{\Lambda}}^{{\kappa}_p},\overline{{\Lambda}}^{{\kappa}_p\#})^m=(\psi^m(\overline{{\Lambda}}^{{\kappa}_p}),\psi^m(\overline{{\Lambda}}^{{\kappa}_p\#}))
=(\overline{\psi^m({\Lambda}^{{\kappa}_p})},\overline{\psi^m({\Lambda}^{{\kappa}_p}\setminus(S^1\cdot c^p))}\;). \label{5.8}\end{equation}
Note that we have (cf. (4.13)-(4.16) of \cite{LoD1})
\begin{eqnarray}
&& b=kn+q \qquad {\rm for\;some\;} k\in{\bf N}_0 \;\;{\rm and}\;\;0\le q\le n-1, \label{5.9}\\
&& i(c^b) = k\overline{p} + i(c^q), \quad i(c^{n+b})=(k+1)\overline{p} + i(c^q), \label{5.10}\\
&& \nu(c^{n+b})=\nu(c^b)=\nu(c^q), \quad {\rm when}\;\;q\not= 0. \label{5.11}\end{eqnarray}
{\bf Point 1.} {\it The Proof of Case (i) with $\nu(c^b)=\nu(c)$ on Page 1787 of \cite{LoD1} }
Below (4.17) on Page 1787 of \cite{LoD1}, we defined $\hat{p}=i(c^q)-i(c)$.
Now if $\hat{p}$ is even, then the constant $k\overline{p}+\hat{p}$ is even, and we
can use Lemma 5.1 to get the isomorphism (4.23) on Page 1787 of \cite{LoD1}. Thus the
proof on Page 1787 for Case (i) in \cite{LoD1} goes through.
Now if $\hat{p}$ is odd, then both $q$ and $n=n(c)$ must be even by (\ref{4.3}) of Lemma 4.2
and the definition of $n(c)$. Therefore $b$ is even by (\ref{5.9}). By (\ref{4.4}) of
Lemma 4.2 and (\ref{5.11}) we then obtain
$$ \nu(c^{n+b})=\nu(c^b)=\nu(c^q)\ge \nu(c^2)\ge \nu(c). $$
Thus equalities must hold here and we get
\begin{equation} \nu(c^{n+b})=\nu(c^b)=\nu(c^q) = \nu(c^2). \label{5.12}\end{equation}
We define $\td{p}=i(c^q)-i(c^2)$. Then $\td{p}$ is even by Lemma 4.2. We have also
\begin{eqnarray}
i(c^b) &=& k\overline{p} + i(c^q) = k\overline{p} + \td{p} + i(c^2), \label{5.13}\\
i(c^{n+b}) &=& (k+1)\overline{p} + i(c^q) = (k+1)\overline{p} + \td{p} + i(c^2). \label{5.14}\end{eqnarray}
Thus we can replace (4.18)-(4.23) in \cite{LoD1} by the following arguments,
and obtain that the two iteration maps
\begin{eqnarray}
\psi^{b/2}:&&(\overline{{\Lambda}}^{{\kappa}_2},\overline{{\Lambda}}^{{\kappa}_2\#})
\to (\overline{{\Lambda}}^{{\kappa}_2},\overline{{\Lambda}}^{{\kappa}_2\#})^{b/2}
\subseteq (\overline{{\Lambda}}^{{\kappa}_b},\overline{{\Lambda}}^{{\kappa}_b\#}), \label{5.15}\\
\psi^{(n+b)/2}:&& (\overline{{\Lambda}}^{{\kappa}_2},\overline{{\Lambda}}^{{\kappa}_2\#}) \to (\overline{{\Lambda}}^{{\kappa}_2},\overline{{\Lambda}}^{{\kappa}_2\#})^{(n+b)/2}
\subseteq (\overline{{\Lambda}}^{{\kappa}_{n+b}},\overline{{\Lambda}}^{{\kappa}_{n+b}\#}), \label{5.16}\end{eqnarray}
induce two isomorphisms on homological modules:
\begin{eqnarray}
\psi^{b/2}_{\ast}:&&H_{h-k\overline{p}-\td{p}}(\overline{{\Lambda}}^{{\kappa}_2},\overline{{\Lambda}}^{{\kappa}_2\#})
=\overline{C}_{h-k\overline{p}-\td{p}}(E,c^2) \to \overline{C}_{h}(E,c^b)
=H_h(\overline{{\Lambda}}^{{\kappa}_b},\overline{{\Lambda}}^{{\kappa}_b\#}), \label{5.17}\\
\psi^{(n+b)/2}_{\ast}:&& H_{h-k\overline{p}-\td{p}}(\overline{{\Lambda}}^{{\kappa}_2},\overline{{\Lambda}}^{{\kappa}_2\#})
=\overline{C}_{h-k\overline{p}-\td{p}}(E,c^2) \nonumber\\
&& \qquad\qquad \to \overline{C}_{h+\overline{p}}(E,c^{n+b})
=H_{h+\overline{p}}(\overline{{\Lambda}}^{{\kappa}_{n+b}},\overline{{\Lambda}}^{{\kappa}_{n+b}\#}). \label{5.18}\end{eqnarray}
Therefore the composed iteration map
\begin{equation} f=\psi^{(n+b)/2}\circ \psi^{-b/2}: (\overline{{\Lambda}}^{{\kappa}_2},\overline{{\Lambda}}^{{\kappa}_2\#})^{b/2}
\to (\overline{{\Lambda}}^{{\kappa}_2},\overline{{\Lambda}}^{{\kappa}_2\#})^{(n+b)/2} \label{5.19}\end{equation}
is a homeomorphism and induces an isomorphism on homological modules:
\begin{eqnarray} f_{\ast}:&& H_{h}(\overline{{\Lambda}}^{{\kappa}_b},\overline{{\Lambda}}^{{\kappa}_b\#})=\overline{C}_{h}(E,c^b) \to \overline{C}_{h+\overline{p}}(E,c^{n+b})
=H_{h+\overline{p}}(\overline{{\Lambda}}^{{\kappa}_{n+b}},\overline{{\Lambda}}^{{\kappa}_{n+b}\#}), \label{5.20}\end{eqnarray}
where we denote by $\psi^{-h}=(\psi^h)^{-1}$, the inverse map of $\psi^h$. Thus Theorem 5.2
holds in this case.
{\bf Point 2.} {\it The Proof of Case (iii-2) with $\nu(c^b)>\nu(c)$, $q>0$ in (\ref{5.9}),
and that there is some integer $t\in [1,q-1]$ such that $t|q$, $t|n$ and $\nu(c^t)=\nu(c^q)$ hold,
on Page 1789 of \cite{LoD1}. }
As in Page 1789 of \cite{LoD1}, let $s\in [1,q-1]$ be the minimal integer possessing the
property of the above integer $t$. Then $q=us$ and $n=vs$ hold for some $u, v\in{\bf N}$, and
as in \cite{LoD1} we obtain
\begin{eqnarray}
&& b = kn + (q-s) + s = (kv+u)s, \label{5.21}\\
&& n+b = (k+1)n + (q-s) + s = ((k+1)v+u)s, \label{5.22}\\
&& \nu(c^s) = \nu(c^q) = \nu(c^b) = \nu(c^{n+b}). \label{5.23}\end{eqnarray}
Let $\hat{p} = i(c^q) - i(c^s)$.
Now if $\hat{p}$ is even, then the constant $k\overline{p}+\hat{p}$ is even, and we
can use Lemma 5.1 to get the isomorphism (4.46) on Page 1789 of \cite{LoD1}. Thus the
proof on Page 1789 for Case (iii-2) in \cite{LoD1} goes through.
Now if $\hat{p}$ is odd, then $n=n(c)$ must be even by (\ref{4.3}) of Lemma 4.2 and
the definition of $n(c)$. For the same reason, $s$ and $q$ must have different parities.
Now if $s$ is even, then $q$ must be odd. Thus $b$ is odd by the evenness of $n$ and
(\ref{5.9}). This contradicts (\ref{5.21}). Therefore $s$ must be odd and $q$ is even.
Because $s$ is odd, and both $s|q$ and $2|q$ hold, we have $(2s)|q$. Similarly $s|n$
and $2|n$ imply $(2s)|n$. Then by (\ref{5.9}) we obtain $(2s)|b$ and $(2s)|(n+b)$.
On the other hand, by (\ref{4.4}) of Lemma 4.2 and the fact $(2s)|q$, we obtain
$$ \nu(c^q)\ge \nu(c^{2s}) \ge \nu(c^s). $$
Together with (\ref{5.23}) we then obtain
\begin{equation} \nu(c^{n+b}) = \nu(c^b) = \nu(c^q) = \nu(c^{2s}) = \nu(c^s). \label{5.24}\end{equation}
In this case we define $\td{p} = i(c^q)-i(c^{2s})$. Then $\td{p}$ is even by Lemma 4.2.
We have also
\begin{eqnarray}
i(c^b) &=& k\overline{p} + i(c^q) = k\overline{p} + \td{p} + i(c^{2s}), \label{5.25}\\
i(c^{n+b}) &=& (k+1)\overline{p} + i(c^q) = (k+1)\overline{p} + \td{p} + i(c^{2s}). \label{5.26}\end{eqnarray}
Thus we can replace (4.41)-(4.46) in \cite{LoD1} by the following arguments, and
obtain from Lemma 5.1 that the two iteration maps
\begin{eqnarray}
\psi^{b/(2s)}:&&(\overline{{\Lambda}}^{{\kappa}_{2s}},\overline{{\Lambda}}^{{\kappa}_{2s}\#})
\to (\overline{{\Lambda}}^{{\kappa}_{2s}},\overline{{\Lambda}}^{{\kappa}_{2s}\#})^{b/(2s)}
\subseteq (\overline{{\Lambda}}^{{\kappa}_b},\overline{{\Lambda}}^{{\kappa}_b\#}), \label{5.27}\\
\psi^{(n+b)/(2s)}:&& (\overline{{\Lambda}}^{{\kappa}_{2s}},\overline{{\Lambda}}^{{\kappa}_{2s}\#})
\to (\overline{{\Lambda}}^{{\kappa}_{2s}},\overline{{\Lambda}}^{{\kappa}_{2s}\#})^{(n+b)/(2s)}
\subseteq (\overline{{\Lambda}}^{{\kappa}_{n+b}},\overline{{\Lambda}}^{{\kappa}_{n+b}\#}), \label{5.28}\end{eqnarray}
induce two isomorphisms on homological modules:
\begin{eqnarray}
\psi^{b/(2s)}_{\ast}:&&H_{h-k\overline{p}-\td{p}}(\overline{{\Lambda}}^{{\kappa}_{2s}},\overline{{\Lambda}}^{{\kappa}_{2s}\#})
=\overline{C}_{h-k\overline{p}-\td{p}}(E,c^{2s}) \to \overline{C}_{h}(E,c^b)
=H_h(\overline{{\Lambda}}^{{\kappa}_b},\overline{{\Lambda}}^{{\kappa}_b\#}), \label{5.29}\\
\psi^{(n+b)/(2s)}_{\ast}:&& H_{h-k\overline{p}-\td{p}}(\overline{{\Lambda}}^{{\kappa}_{2s}},\overline{{\Lambda}}^{{\kappa}_{2s}\#})
=\overline{C}_{h-k\overline{p}-\td{p}}(E,c^{2s}) \nonumber\\
&& \qquad\qquad \to \overline{C}_{h+\overline{p}}(E,c^{n+b})
=H_{h+\overline{p}}(\overline{{\Lambda}}^{{\kappa}_{n+b}},\overline{{\Lambda}}^{{\kappa}_{n+b}\#}). \label{5.30}\end{eqnarray}
Therefore the composed iteration map
\begin{equation} f=\psi^{(n+b)/(2s)}\circ \psi^{-b/(2s)}: (\overline{{\Lambda}}^{{\kappa}_{2s}},\overline{{\Lambda}}^{{\kappa}_{2s}\#})^{b/(2s)}
\to (\overline{{\Lambda}}^{{\kappa}_{2s}},\overline{{\Lambda}}^{{\kappa}_{2s}\#})^{(n+b)/(2s)} \label{5.31}\end{equation}
is a homeomorphism and induces an isomorphism on homological modules:
\begin{equation}
f_{\ast}: H_{h}(\overline{{\Lambda}}^{{\kappa}_b},\overline{{\Lambda}}^{{\kappa}_b\#})=\overline{C}_{h}(E,c^b) \to \overline{C}_{h+\overline{p}}(E,c^{n+b})
=H_{h+\overline{p}}(\overline{{\Lambda}}^{{\kappa}_{n+b}},\overline{{\Lambda}}^{{\kappa}_{n+b}\#}). \label{5.32}\end{equation}
Thus Theorem 5.2 holds in this case too.
Now the rest of the proof of Theorem 4.3 of \cite{LoD1} yields Theorem 5.2. \hfill\vrule height0.18cm width0.14cm $\,$
\medskip
The above homological isomorphism theorem is for rational closed geodesics. Our next
result generalizes it to irrational closed geodesics, and will play a crucial role in the
proofs of Theorems 1.1 and 1.2. Here the quasi-periodicity which we established in the
above Theorem 4.3 is crucial in the proof.
\medskip
{\bf Theorem 5.3.} {\it Let $(M,F)$ be a Finsler manifold possessing only one
prime closed geodesic $c$ which is orientable and $n=n(c)$ be the analytical
period of $c$. Recall that by Theorem 4.3 there exists an even integer $T\in n{\bf N}$ such
that for $m_0=m_0(c)$ given by Definition 4.1 there hold
\begin{equation} i(c^{m+T}) = i(c^m) + \overline{p}, \quad \nu(c^{m+T})=\nu(c^m), \qquad \forall\, 1\le m\le m_0,
\label{5.33}\end{equation}
where $\overline{p}=i(c^T)+p(c)$. Then we can further require $T\in (m_0!n){\bf N}$ such that for any
non-negative integers $a$ and $b$ satisfying $0\le a<b\le m_0$, the iteration maps $\{\psi^m\}$
and inclusion maps of corresponding level sets induce a map $f$ on singular chains which
yields an isomorphism }
\begin{equation} f_{\ast}: H_h(\overline{{\Lambda}}^{{\kappa}_b},\overline{{\Lambda}}^{{\kappa}_a})
\to H_{h+\overline{p}}(\overline{{\Lambda}}^{{\kappa}_{T+b}},\overline{{\Lambda}}^{{\kappa}_{T+a}}),\qquad \forall\, h\in {\bf Z}. \label{5.34}\end{equation}
{\bf Proof.} Here we follow the main ideas from pages 1786-1792 of \cite{LoD1}.
{\bf Step 1.} {\it The isomorphism in the case of $\;b-a=1$. }
In this case ${\kappa}_a$ and ${\kappa}_b$ are the only two critical values in $[{\kappa}_a, {\kappa}_b]$
and so are ${\kappa}_{T+a}$ and ${\kappa}_{T+b}$ in $[{\kappa}_{T+a}, {\kappa}_{T+b}]$. Then, since
$0\le a<b\le m_0$, (\ref{5.33}) yields
\begin{equation} i(c^{T+b}) = \overline{p} + i(c^b),\qquad \nu(c^{T+b})=\nu(c^b). \label{5.35}\end{equation}
Here we require the even integer $T$ chosen by Theorems 4.3-4.5 to further satisfy
$T\in (m_0!n){\bf N}$. Thus it yields
\begin{equation} b\,|(T+b), \qquad\forall\, 0\le a<b\le m_0. \label{5.36}\end{equation}
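For completeness, (\ref{5.36}) is a direct consequence of this choice of $T$: every integer $b$
with $1\le b\le m_0$ divides $m_0!$, and $m_0!$ divides $T$ since $T\in (m_0!n){\bf N}$, so that
$$ b\,|\,T \qquad{\rm and\ hence}\qquad b\,|\,(T+b), \qquad \forall\,1\le b\le m_0. $$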
Note first that $\overline{p}=i(c^T)+p(c)$ is always even by (B) of Theorem 4.3.
Therefore by (\ref{5.35}) we get
\begin{equation} {\epsilon}(c^{T+b})=(-1)^{i(c^{T+b})-i(c)}=(-1)^{i(c^b)-i(c)}={\epsilon}(c^b). \label{5.37}\end{equation}
For any $0\le a<b\le m_0$, since $\nu(c^{T+b})=\nu(c^b)$ holds in (\ref{5.35})
and $b|(T+b)$ holds in (\ref{5.36}), it follows from Lemma 2.1 and (iii) of Lemma 2.2 that
\begin{eqnarray} H_{h}(\overline{{\Lambda}}^{{\kappa}_b},\overline{{\Lambda}}^{{\kappa}_b\#})
&=& \overline{C}_{h}(E,c^b)\nonumber\\
&=& H_{h-i(c^b)}(N_{c^b}^-\cup\{c^b\},N_{c^b}^-)^{{\epsilon}(c^b){\bf Z}_b}\nonumber\\
&=& H_{h-i(c^b)}(N_{c^{T+b}}^-\cup\{c^{T+b}\},N_{c^{T+b}}^-)^{{\epsilon}(c^{T+b}){\bf Z}_{T+b}}\nonumber\\
&=& H_{h+\overline{p}-i(c^{T+b})}(N_{c^{T+b}}^-\cup\{c^{T+b}\},N_{c^{T+b}}^-)^{{\epsilon}(c^{T+b}){\bf Z}_{T+b}}\nonumber\\
&=& \overline{C}_{h+\overline{p}}(E,c^{T+b})\nonumber\\
&=& H_{h+\overline{p}}(\overline{{\Lambda}}^{{\kappa}_{T+b}},\overline{{\Lambda}}^{{\kappa}_{T+b}\#}). \label{5.38}\end{eqnarray}
Here we used Lemma 2.1 in the second and fifth equalities, (iii) of Lemma 2.2
and (\ref{5.35})-(\ref{5.36}) in the third one and (\ref{5.35}) in the fourth one.
The case of $b-a=1$ is proved.
\medskip
{\bf Step 2.} {\it The induction argument for general $b>a$.}
\medskip
Now we can follow precisely the proof in the Step 2 on pages 1789-1792 of Theorem 4.3 of
\cite{LoD1} and complete the proof of Theorem 5.3 here. Thus we omit all these details
here. \hfill\vrule height0.18cm width0.14cm $\,$
\medskip
Next we generalize Proposition 5.1 of \cite{LoD1} from rational closed geodesics to
irrational ones. Here we denote by ${\bf Q}^m$ the direct sum of $m$ copies of the module ${\bf Q}$,
instead of using the notation $m{\bf Q}$, in order to make the text clearer.
\medskip
{\bf Theorem 5.4.} {\it Let $c$ be the only one prime closed geodesic on a compact Finsler
manifold $(M,F)$ of dimension $d$. Suppose $c$ is orientable. Let $n=n(c)$ be the
analytical period of $c$ and $m_0=m_0(c)$ be given by Definition 4.1. Let
$T\in (m_0!n){\bf N}$ be the even integer given by Theorems 4.3-4.5 and 5.3. Denote by
$X_j=H_j(\overline{{\Lambda}},\overline{{\Lambda}}^{{\kappa}_T})={\bf Q}^{x_j}$ for all $j\in{\bf Z}$. Then there holds
\begin{equation} x_j = b_{j-i(c^T)-p(c)} \qquad \forall\;0\le j\le i(c^T)+p(c)+d-2, \label{5.39}\end{equation}
where $b_j$'s are the Betti numbers of the loop space ${\Lambda} M$ defined in Section 2. }
\medskip
{\bf Proof.} Let $R=i(c^T)$. Firstly we fix an integer $j\le R+p(c)+d-2$. Because there is
only one prime closed geodesic $c$ on $M$, there holds $\hat{i}(c)>0$. Thus we have
$i(c^m)\to +\infty$ as $m\to +\infty$. According to Definition 4.1 and Lemma 4.4 we have
$$ i(c^m)\ge R+p(c)+d, \qquad \forall\;m\ge T+m_0. $$
It then implies
$$ \overline{C}_q(E,c^m) = 0, \qquad \forall\; m\ge T+m_0, \quad q\le j+1\le R+p(c)+d-1. $$
Therefore by Theorem II.1.5 on page 89 of \cite{Cha1} we obtain
\begin{equation} H_q(\overline{{\Lambda}},\overline{{\Lambda}}^{{\kappa}_m})=0 \qquad \forall\;m\ge T+m_0,\;\; 0\le q\le j+1. \label{5.40}\end{equation}
Thus the exact sequence of the triple $(\overline{{\Lambda}},\overline{{\Lambda}}^{{\kappa}_{m_0+T}},\overline{{\Lambda}}^{{\kappa}_T})$ yields
\begin{equation} 0=H_{j+1}(\overline{{\Lambda}},\overline{{\Lambda}}^{{\kappa}_{m_0+T}})\to H_{j}(\overline{{\Lambda}}^{{\kappa}_{m_0+T}},\overline{{\Lambda}}^{{\kappa}_T})
\to H_{j}(\overline{{\Lambda}},\overline{{\Lambda}}^{{\kappa}_T})\to H_{j}(\overline{{\Lambda}},\overline{{\Lambda}}^{{\kappa}_{m_0+T}})=0, \label{5.41}\end{equation}
which then implies the isomorphism:
\begin{equation} H_j(\overline{{\Lambda}}^{{\kappa}_{m_0+T}},\overline{{\Lambda}}^{{\kappa}_T}) = H_j(\overline{{\Lambda}},\overline{{\Lambda}}^{{\kappa}_T})={\bf Q}^{x_j}. \label{5.42}\end{equation}
On the other hand, by Theorem 5.3 we obtain an isomorphism:
\begin{equation} H_{j-R-p(c)}(\overline{{\Lambda}}^{{\kappa}_{m_0}},\overline{{\Lambda}}^0) = H_j(\overline{{\Lambda}}^{{\kappa}_{m_0+T}},\overline{{\Lambda}}^{{\kappa}_T}),
\quad\forall\,j\le R+p(c)+d-2. \label{5.43}\end{equation}
Fix an integer $l\le d-2$. By Definition 4.1 we have $i(c^m)\ge d+4k$ for all $m\ge m_0$,
which implies
\begin{equation} H_q(\overline{{\Lambda}},\overline{{\Lambda}}^{{\kappa}_m})=0 \qquad \forall\;m\ge m_0,\;\; 0\le q\le l+1. \label{5.44}\end{equation}
Then the exact sequence of the triple $(\overline{{\Lambda}},\overline{{\Lambda}}^{{\kappa}_{m_0}},\overline{{\Lambda}}^0)$ yields
\begin{eqnarray} 0
&=& H_{j-R-p(c)+1}(\overline{{\Lambda}},\overline{{\Lambda}}^{{\kappa}_{m_0}})\to H_{j-R-p(c)}(\overline{{\Lambda}}^{{\kappa}_{m_0}},\overline{{\Lambda}}^0) \nonumber\\
&& \qquad\qquad \to H_{j-R-p(c)}(\overline{{\Lambda}},\overline{{\Lambda}}^0)\to H_{j-R-p(c)}(\overline{{\Lambda}},\overline{{\Lambda}}^{{\kappa}_{m_0}})=0,
\quad\forall\,j\le R+p(c)+d-2. \nonumber\end{eqnarray}
It then implies the isomorphism:
\begin{equation} H_{j-R-p(c)}(\overline{{\Lambda}}^{{\kappa}_{m_0}},\overline{{\Lambda}}^0) = H_{j-R-p(c)}(\overline{{\Lambda}},\overline{{\Lambda}}^0)
= {\bf Q}^{b_{j-R-p(c)}},\quad\forall\,j\le R+p(c)+d-2. \label{5.45}\end{equation}
Therefore (\ref{5.42})-(\ref{5.43}) and (\ref{5.45}) yield the claim (\ref{5.39}). \hfill\vrule height0.18cm width0.14cm $\,$
\medskip
Next we generalize Theorem 5.2 of \cite{LoD1} from rational closed geodesics to
irrational ones.
{\bf Theorem 5.5.} {\it Let $(M,F)$ be a compact simply connected $dh$-dimensional Finsler
manifold with $H^{\ast}(M,{\bf Q})=T_{d,h+1}(x)$ for some integers $d\ge 2$ and $h\ge 1$.
Suppose $c$ is the only one prime closed geodesic on $M$, and let $\mu=p(c)+dh-3$. Denote
by $n=n(c)$ and $m_0=m_0(c)$ given by (\ref{4.1}) and Definition 4.1. Then there exist an
even integer $T\in (m_0!n){\bf N}$ and an integer ${\kappa}\ge 0$ such that
\begin{equation} B(d,h)(i(c^T) + p(c)) + (-1)^{\mu+i(c^T)}{\kappa} = \sum_{j=\mu-p(c)+1}^{i(c^T)+\mu}(-1)^j b_j,
\label{5.46}\end{equation}
where $B(d,h)$ is given in Lemma 2.4. }
\medskip
{\bf Proof.} Note first that by Proposition 3.5 and Remark 3.6 the closed geodesic $c$
on $M$ is orientable because $M$ is simply connected.
Since there exists only one prime closed geodesic $c$, it follows that
$\hat{i}(c)>0$ and $0\le i(c)\le d-1$. In particular, by Lemma 2.4 we obtain
\begin{equation} \hat{i}(c)\in {\bf Q}. \label{5.47}\end{equation}
Let
\begin{equation} d_j = k_j^{{\epsilon}(c^n)}(c^n), \qquad \forall j\in{\bf Z}. \label{5.48}\end{equation}
Then by the definition of $n=n(c)$, Lemma 2.3 and (\ref{4.2}) we obtain
\begin{equation} k_j^{{\epsilon}(c^{mn})}(c^{mn}) = d_j, \qquad \forall j\in{\bf Z}, \quad m\in{\bf N}. \label{5.49}\end{equation}
Fix $T\in (m_0!n){\bf N}$ to be an even integer determined by Theorems 4.3-4.5 and 5.3.
In particular, we require that this $T$ makes (\ref{4.12}) hold.
Then we claim the following four conditions hold:
\begin{eqnarray}
&& i(c^{m+T}) = i(c^T) + i(c^m) + p(c), \qquad \forall\; 1\le m\le m_0, \label{5.50}\\
&& i(c^m)+\nu(c^m) \le i(c^T)+\mu, \qquad \forall\; 1\le m<T, \label{5.51}\\
&& d_j = 0, \qquad \forall\;j\ge \mu+2, \label{5.52}\\
&& H_{i(c^T)+\mu+1}(\overline{{\Lambda}},\overline{{\Lambda}}^{{\kappa}_T})=0. \label{5.53}\end{eqnarray}
In fact, (\ref{5.50}) follows from (A) of Theorem 4.3, and (\ref{5.51}) follows from
Theorem 4.5.
Note that if $k\ge 1$ in Theorem 4.3, there holds $A\ge 1$ by Proposition 3.7. Thus for
$j\ge\mu+2=p(c)+dh-1$, it yields $j>\nu(c^n)$ by (C) of Theorem 4.3, which implies
that (\ref{5.52}) holds. If $k=0$, then (\ref{5.52}) was proved in the proof of Theorem
6.1 of \cite{LoD1} when verifying the condition (5.11) there via Hingston's Theorem
of \cite{Hin2} (cf. Theorem 4.1 of \cite{LoD1}).
Note that $i(c^T)+\mu+1=i(c^T)+p(c)+dh-2$ holds. So by Theorem 5.4, Lemmas 2.5 and 2.6, we
obtain
$$ \dim H_{i(c^T)+\mu+1}(\overline{{\Lambda}},\overline{{\Lambda}}^{{\kappa}_T})=b_{dh-2}=0. $$
Thus (\ref{5.53}) holds, and the proof of the four conditions (\ref{5.50})-(\ref{5.53})
is complete.
Let $R=i(c^T)$. Then by (\ref{5.50})-(\ref{5.53}) and Lemma 4.4 we obtain the
following distribution diagram (\ref{5.54}) of $\dim\overline{C}_j(E,c^m)$ for any
$j\ge 0$ and $m\ge 1$.
{\footnotesize\begin{eqnarray}
\begin{tabular}{c|ccc ccc ccc ccc ccc}
$\cdots$& & & & & &&&&&&$\ast$&$\cdots$\\
$T+m_0+1$& & & & & &&&&&&$\ast$&$\cdots$\\
$T+m_0$& & & & & &$\ast$&$\cdots$&$\cdots$&$\cdots$&$\cdots$&$\ast$&$\cdots$\\
$\cdots$& & & & & &$\cdots$&$\cdots$&$\cdots$&$\cdots$&$\cdots$&$\cdots$&$\cdots$\\
$T+1$& & & & & &$\ast$&$\cdots$&$\cdots$&$\cdots$&$\cdots$&$\ast$&$\cdots$\\
$T$& & &$0$&$d_0$&$\cdots$&$d_{p(c)}$&$\cdots$&$d_{\mu}$&$d_{\mu+1}$&$d_{\mu+2}$&$0$&$0$\\
$T-1$&$\ast$&$\cdots$&$\cdots$&$\cdots$&$\cdots$&$\cdots$&$\cdots$&$\ast$ \\
$\cdots$&$\cdots$&$\cdots$&$\cdots$&$\cdots$&$\cdots$&$\cdots$&$\cdots$&$\cdots$ \\
$1$&$\ast$&$\cdots$&$\cdots$&$\cdots$&$\cdots$&$\cdots$&$\cdots$&$\ast$ \\
\cline{1-13}
$m$ in $c^m$&$c_{0}$&$\cdots$&$c_{R-1}$&$c_{R}$
&$\cdots$&$c_{R+p(c)}$&$\cdots$&$c_{R+\mu}$&$c_{R+\mu+1}$
&$c_{R+\mu+2}$&$c_{R+\mu+3}$&$\cdots$ \\
\end{tabular} \label{5.54}\end{eqnarray}}
Here, as coordinates of the diagram (\ref{5.54}), the leftmost column lists the
iteration time $m$ of $c^m$ from $1$ up to $T+m_0+1$ and upwards, and the
bottom row lists the dimensions $c_j=\dim\overline{C}_j(E,c^{\ast})$ of the
$S^1$-equivariant critical modules $\overline{C}_j=\overline{C}_j(E,c^{\ast})$ from $j=0$ to
$j=R+\mu+3$ and rightwards. The entry $D_j(c^m)$ in this diagram at the $m$-th row and
$j$-th column is given by $D_j(c^m)=\dim \overline{C}_j(E,c^m)$. The values $d_j=\dim \overline{C}_j(E,c^T)$
are shown in this diagram. Asterisks and dots in this diagram indicate entries which may be
nonzero and whose precise values depend on $\dim \overline{C}_j(E,c^m)$. Entries at the empty places
in the diagram are all $0$.
Now the proof is similar to that of Theorem 5.2 in \cite{LoD1} (cf. pages 1795-1799 for
more details). For the reader's convenience, we include certain details of the proof
here.
Denote by ${\kappa}_m=E(c^m)$ for $m\ge 1$. As in the Step 1 of the proof of Theorem 5.2 of
\cite{LoD1}, for $j\in{\bf Z}$, we denote by
\begin{equation} U_j=H_j(\overline{{\Lambda}}^{{\kappa}_T},\overline{{\Lambda}}^0)={\bf Q}^{u_j}, \quad B_j=H_j(\overline{{\Lambda}},\overline{{\Lambda}}^0)={\bf Q}^{b_j},
\quad X_j=H_j(\overline{{\Lambda}},\overline{{\Lambda}}^{{\kappa}_T})={\bf Q}^{x_j}. \label{5.55}\end{equation}
Then the long exact sequence of the triple
$(\overline{{\Lambda}},\overline{{\Lambda}}^{{\kappa}_T},\overline{{\Lambda}}^0)$ yields the following diagram:
\begin{eqnarray}
\begin{tabular}{ccc ccc ccc ccc ccc}
$X_{R+\mu+1}$&$\to$&$U_{R+\mu}$&$\to$&$B_{R+\mu}$&$\to$&$X_{R+\mu}$
&$\to$&$\cdots$&$\to$&$U_0$&$\to$&$B_0$&$\to$&$X_0$ \\
$\parallel$& &$\parallel$& &$\parallel$& &$\parallel$& & & &$\parallel$& &$\parallel$& &$\parallel$ \\
$0$& &${\bf Q}^{u_{R+\mu}}$& &${\bf Q}^{b_{R+\mu}}$& &${\bf Q}^{x_{R+\mu}}$& &$\cdots$& &${\bf Q}^{u_0}$& &$0$& &$\;0$, \\
\end{tabular} \nonumber\end{eqnarray}
where $X_{R+\mu+1}=0=X_0$ follows from (\ref{5.53}), Theorem 5.4 and Lemmas 2.5 and 2.6.
$B_0=0$ follows from Lemmas 2.5 and 2.6. Then this long exact sequence yields
\begin{equation} 0 = \sum_{j=0}^{R+\mu}(-1)^j(u_j - b_j + x_j). \label{5.56}\end{equation}
Because $T\ge 2$, for $j\in{\bf Z}$ besides $U_j$ defined in (\ref{5.55}) we denote by
$$ V_j=H_j(\overline{{\Lambda}}^{{\kappa}_{T-1}},\overline{{\Lambda}}^0)={\bf Q}^{v_j},
\quad E_j=H_j(\overline{{\Lambda}}^{{\kappa}_T},\overline{{\Lambda}}^{{\kappa}_{T-1}})={\bf Q}^{e_j}. $$
Then the exact sequence of the triple $(\overline{{\Lambda}}^{{\kappa}_T},\overline{{\Lambda}}^{{\kappa}_{T-1}},\overline{{\Lambda}}^0)$ and
the diagram (\ref{5.54}) yield the following diagram:
\begin{eqnarray}
\begin{tabular}{ccc ccc ccc ccc ccc}
$V_{R+\mu+1}$&$\to$&$U_{R+\mu+1}$&$\to$&$E_{R+\mu+1}$&$\to$&$V_{R+\mu}$&$\to$&$\cdots$ \\
$\parallel$& &$\parallel$& &$\parallel$& &$\parallel$& & & \\
$0$& &${\bf Q}^{u_{R+\mu+1}}$& &${\bf Q}^{e_{R+\mu+1}}$& &${\bf Q}^{v_{R+\mu}}$& &$\cdots$& \\
&&&&&&&& \\
&$\to$&$V_{R}$&$\to$&$U_{R}$&$\to$&$E_{R}$&$\to$&$\cdots$ \\
& &$\parallel$& &$\parallel$& &$\parallel$& & \\
& &${\bf Q}^{v_{R}}$& &${\bf Q}^{u_{R}}$& &${\bf Q}^{e_{R}}$& &$\cdots$ \\
&&&&&&&& \\
&$\to$&$V_{0}$&$\to$&$U_{0}$&$\to$&$E_{0}$&$\to$&$0$ \\
& &$\parallel$& &$\parallel$& &$\parallel$& \\
& &${\bf Q}^{v_{0}}$& &${\bf Q}^{u_{0}}$& &${\bf Q}^{e_0}$ \\
\end{tabular} \nonumber\end{eqnarray}
where $V_{R+\mu+1}=0$ follows from (\ref{5.51}) and the diagram (\ref{5.54}). Then this long exact
sequence yields
\begin{equation} \sum_{j=0}^{R+\mu}(-1)^ju_j = (-1)^{R+\mu}u_{R+\mu+1} + \sum_{j=0}^{R+\mu}(-1)^jv_j
+ \sum_{j=0}^{R+\mu+1}(-1)^je_j. \label{5.57}\end{equation}
Note that by (\ref{5.49}) we have
\begin{equation} e_j = \left\{\matrix{
d_{j-R}, & \quad {\rm for}\;\;R \le j\le R+\mu+1, \cr
0, & \quad {\rm otherwise}. \cr}\right. \label{5.58}\end{equation}
Thus we obtain
\begin{equation} \sum_{j=0}^{R+\mu}(-1)^ju_j = (-1)^{R+\mu}u_{R+\mu+1} + \sum_{j=0}^{R+\mu}(-1)^jv_j
+ \sum_{j=0}^{\mu+1}(-1)^{R+j}d_j. \label{5.59}\end{equation}
Now combining (\ref{5.56}) and (\ref{5.59}) we obtain
\begin{equation} 0 = \sum_{j=0}^{R+\mu}(-1)^jv_j + \sum_{j=0}^{\mu+1}(-1)^{R+j}d_j
- \sum_{j=0}^{R+\mu}(-1)^jb_j
+\sum_{j=0}^{R+\mu}(-1)^jx_j + (-1)^{R+\mu}u_{R+\mu+1}. \label{5.60}\end{equation}
Now as in \cite{LoD1}, we can apply the procedure above to decrease the level sets
one by one by induction. In this way, each time we pass through a critical level
$E(c^m)$ with $m\le T$, the term $\sum_{j=0}^{R+\mu}(-1)^jv_j$ on the right hand
side of (\ref{5.60}) will be replaced by the sum of a similar alternating sum of
dimensions of homological modules of a new lower level set pair
$(\overline{{\Lambda}}^{{\kappa}_{m-1}},\overline{{\Lambda}}^0)$ and a term $\sum_{j=0}^{\nu(c^m)}(-1)^{i(c^m)+j}k_j^{{\epsilon}(c^m)}(c^m)$.
Here the sign $(-1)^{i(c^m)}$ reflects the parity of the column number in which the term
$k_0^{{\epsilon}(c^m)}(c^m)$ appears. Then by induction from (\ref{5.56})-(\ref{5.60}) repeating
the proof of Theorem 5.2 in \cite{LoD1} by using the above diagram (\ref{5.54}) and our
Theorem 5.4, similarly to (5.22) of \cite{LoD1} we obtain
\begin{eqnarray} 0
&=& \frac{T}{n}\sum_{j=0}^{\mu+1}(-1)^{i(c^n)+j}d_j
+ \frac{T}{n}\sum_{m=1}^{n-1}\sum_{j=0}^{\nu(c^m)}(-1)^{i(c^m)+j}k_j^{{\epsilon}(c^m)}(c^m) \nonumber\\
&& \qquad - \sum_{j=0}^{R+\mu}(-1)^j b_j + \sum_{j=0}^{R+\mu}(-1)^jb_{j-R-p(c)}
+ (-1)^{R+\mu}u_{R+\mu+1}. \label{5.61}\end{eqnarray}
Note that in the proof of (\ref{5.61}), the facts that $T$ is an integer multiple of
$n(c)$, the $n(c)$-periodicity of critical modules in iterates given by Lemma 2.3,
(\ref{4.2}) and (\ref{5.49}) are crucial.
Now similarly to (5.23) of \cite{LoD1} we can apply the mean index identity
Lemma 2.4 to further obtain
\begin{eqnarray} B(d,h)n\hat{i}(c)
&=& \sum_{1\le m\le n\atop 0\le j\le 2dh-2}(-1)^{i(c^m)+j}k_j^{{\epsilon}(c^m)}(c^m) \nonumber\\
&=& \sum_{1\le m\le n-1\atop 0\le j\le 2dh-2}(-1)^{i(c^m)+j}k_j^{{\epsilon}(c^m)}(c^m)
+\sum_{j=0}^{\nu(c^n)}(-1)^{i(c^n)+j}k_j^{{\epsilon}(c^n)}(c^n) \nonumber\\
&=& \sum_{m=1}^{n-1}\sum_{j=0}^{\nu(c^m)}(-1)^{i(c^m)+j}k_j^{{\epsilon}(c^m)}(c^m)
+\sum_{j=0}^{\nu(c^n)}(-1)^{i(c^n)+j}k_j^{{\epsilon}(c^n)}(c^n) \nonumber\\
&=& \sum_{m=1}^{n-1}\sum_{j=0}^{\nu(c^m)}(-1)^{i(c^m)+j}k_j^{{\epsilon}(c^m)}(c^m) \nonumber\\
& & \qquad +\sum_{j=0}^{\mu+1}(-1)^{i(c^n)+j}d_j+\sum_{j=\mu+2}^{\nu(c^n)}(-1)^{i(c^n)+j}d_j \nonumber\\
&=& \sum_{m=1}^{n-1}\sum_{j=0}^{\nu(c^m)}(-1)^{i(c^m)+j}k_j^{{\epsilon}(c^m)}(c^m)
+\sum_{j=0}^{\mu+1}(-1)^{i(c^n)+j}d_j, \label{5.62}\end{eqnarray}
where we have used the condition $d_j=0$ for all $j\ge\mu+2$ of (\ref{5.52}) in the last equality.
Now by (D) of Theorem 4.3, the rationality (\ref{5.47}) of $\hat{i}(c)$, (\ref{5.61})
and (\ref{5.62}) we obtain
\begin{eqnarray} 0
&=& B(d,h)T\hat{i}(c) - \sum_{j=0}^{R+\mu}(-1)^jb_j
+ \sum_{j=0}^{R+\mu}(-1)^jb_{j-R-p(c)} + (-1)^{R+\mu}u_{R+\mu+1} \nonumber\\
&=& B(d,h)(R + p(c)) - \sum_{j=0}^{R+\mu}(-1)^jb_j
+ \sum_{j=0}^{R+\mu}(-1)^jb_{j-R-p(c)} + (-1)^{R+\mu}u_{R+\mu+1} \nonumber\\
&=& B(d,h)(R + p(c)) - \sum_{j=\mu-p(c)+1}^{R+\mu}(-1)^jb_j
+ (-1)^{R+\mu}u_{R+\mu+1}. \label{5.63}\end{eqnarray}
That is, (\ref{5.46}) holds with ${\kappa}=u_{R+\mu+1}\ge 0$.
This completes the proof of Theorem 5.5. \hfill\vrule height0.18cm width0.14cm $\,$
\setcounter{equation}{0}
\section{Proofs of Theorems 1.1 and 1.2}
In this section, we will follow ideas from Section 6 of \cite{LoD1} and Section 4 of
\cite{DuL3} to give the proofs of Theorems 1.1 and 1.2 via replacing $n=n(c)$ by the
integer $T$ obtained by Theorems 4.3, 4.5, 5.3 and 5.5, and modifying related arguments
using our above results. For the reader's convenience and completeness, we give all the
details here.
\medskip
{\bf Proof of Theorem 1.1.} Let $M$ be a compact simply connected manifold of dimension
not less than $2$ with a Finsler metric $F$. By Theorems A and B in the Section 1, it
suffices to assume that the condition (\ref{1.3}) on $M$ holds, i.e.,
$$ H^*(M;{\bf Q})\cong T_{d,h+1}(x)={\bf Q}[x]/(x^{h+1}=0) $$
with a generator $x$ of degree $d\ge 2$ and height $h+1\ge 2$.
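For orientation only (these standard examples are not needed in the argument below), manifolds
satisfying (\ref{1.3}) include
$$ S^{d}\;(h=1),\qquad {\bf C}P^{n}\;(d=2,\;h=n),\qquad {\bf H}P^{n}\;(d=4,\;h=n),\qquad
{\rm Ca}P^{2}\;(d=8,\;h=2), $$
and, more generally, every compact simply connected manifold with the same rational cohomology
ring as one of these.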
We prove the theorem by contradiction. Thus we assume that there exists only one prime
closed geodesic $c$ on $(M,F)$. To generate the non-trivial $H_{d-1}({\Lambda} M/S^1,{\Lambda} M^0/S^1;{\bf Q})$
(cf. Lemmas 2.5 and 2.6), this $c$ must satisfy
\begin{equation} 0\le i(c)\le d-1, \quad \hat{i}(c)>0, \quad \hat{i}(c)\in {\bf Q}, \label{6.1}\end{equation}
where the last conclusion follows from Lemma 2.4.
For the analytic period $n=n(c)$ and $m_0=m_0(c)$ given by Definition 4.1, fix a large
even integer $T\in (m_0!n){\bf N}$ determined by Theorems 4.3, 4.5, 5.3 and 5.5. Then by (\ref{6.1})
and (D) of Theorem 4.3 we have
$$ i(c^T) + p(c) = T \hat{i}(c)>0. $$
Note that $i(c^T)=p(c)$ $({\rm mod}\;2)$ by (B) of Theorem 4.3, so we obtain
\begin{equation} i(c^T) + p(c) \in 2{\bf N}. \label{6.2}\end{equation}
Let $\mu=p(c)+(dh-3)$. Then by (\ref{6.2}) we have
\begin{equation} i(c^T) + \mu \ge dh-1 \ge 1, \qquad i(c^T) + \mu \in 2{\bf N}_0+(dh-1). \label{6.3}\end{equation}
Then by Theorem 5.5, we obtain for some integer ${\kappa}\ge 0$:
\begin{equation} B(d,h)(i(c^T) + p(c)) + (-1)^{i(c^T)+\mu}{\kappa}
= \sum_{j=\mu-p(c)+1}^{i(c^T)+\mu}(-1)^jb_j. \label{6.4}\end{equation}
Note that when $d$ is odd, $h=1$ holds by Remark 2.5 of \cite{Rad1}. And when $h=1$, $M$
is rationally homotopic to the sphere $S^d$. So we can classify the manifolds $M$ satisfying
(\ref{1.3}) into two classes according to the parity of $d$, and continue our proof
correspondingly.
\medskip
{\bf Case 1.} {\it $d\ge 2$ is even and $h\ge 1$.}
\medskip
Note that, in this case, $i(c^T)+\mu$ is odd by (\ref{6.3}). And there holds
$b_{2j}=0$ for all $j\in{\bf N}_0$ by Lemma 2.6. Thus by (\ref{6.4}) we obtain
\begin{equation} B(d,h)(i(c^T) + p(c)) \ge -\sum_{2j-1=\mu-p(c)+1}^{i(c^T)+\mu}b_{2j-1}.
\label{6.5}\end{equation}
Let $D=d(h+1)-2$. By Lemma 2.4 we have
$$ B(d,h) = -\frac{h(h+1)d}{2D}<0. $$
Thus from (B) of Theorem 4.3, we have
\begin{equation} i(c^T)+\mu-(d-1)=i(c^T)+p(c)+dh-d-2 \in 2{\bf N}. \label{6.6}\end{equation}
By (\ref{6.5}), (\ref{6.6}) and (\ref{2.12}) we obtain
\begin{eqnarray} i(c^T) + p(c)
&\le& -\frac{1}{B(d,h)}\sum_{2j-1=\mu-p(c)+1}^{i(c^T)+\mu}b_{2j-1} \nonumber\\
&=& \frac{2D}{h(h+1)d}\left(\sum_{2j-1=1}^ {i(c^T)+\mu}b_{2j-1}-\sum_{2j-1=1}^{dh-2}b_{2j-1}\right).
\label{6.7}\end{eqnarray}
Note that here because $i(c^T)+p(c)\ge 2$ by (\ref{6.2}), we have
\begin{equation} i(c^T)+\mu = i(c^T)+p(c)+dh-3 \ge dh-1=d-1+(h-1)d. \label{6.8}\end{equation}
By Lemma 2.6 we have
\begin{equation} \sum_{2j-1=1}^{i(c^T)+\mu}b_{2j-1} =
\frac{h(h+1)d}{2D}\left(i(c^T)+\mu-(d-1)\right) - \frac{h(h-1)d}{4} + 1 +{\epsilon}_{d,h}(i(c^T)+\mu). \label{6.9}\end{equation}
On the other hand, because $dh-3< dh-1= d-1+(h-1)d$, by Lemma 2.6 we have
\begin{eqnarray} \sum_{0\le 2j-1\le dh-3}b_{2j-1}
&=& \sum_{d-1\le 2j-1\le dh-3}\left([\frac{2j-1-(d-1)}{d}]+1\right) \nonumber\\
&=& \sum_{d\le 2j\le dh-2}[\frac{2j}{d}] \nonumber\\
&=& \sum_{\frac{d}{2}\le j\le \frac{dh}{2}-1}[\frac{j}{d/2}] \nonumber\\
&=& \sum_{i=1}^{h-1}\sum_{j=\frac{id}{2}}^{\frac{(i+1)d}{2}-1}[\frac{j}{d/2}] \nonumber\\
&=& \frac{d}{2}\sum_{i=1}^{h-1}i \nonumber\\
&=& \frac{dh(h-1)}{4}. \label{6.10}\end{eqnarray}
Therefore we get
\begin{eqnarray}
&& \sum_{0\le 2j-1\le i(c^T)+\mu}b_{2j-1} - \sum_{0\le 2j-1\le dh-3}b_{2j-1} \nonumber\\
&&\qquad\quad = \frac{h(h+1)d}{2D}\left(i(c^T)+\mu-(d-1)\right) - \frac{h(h-1)d}{4}
+ 1 +{\epsilon}_{d,h}(i(c^T)+\mu) - \frac{dh(h-1)}{4} \nonumber\\
&&\qquad\quad = \frac{h(h+1)d}{2D}\left(i(c^T)+p(c)+dh-d-2\right) - \frac{dh(h-1)}{2}
+ 1 + {\epsilon}_{d,h}(i(c^T)+\mu). \quad \label{6.11}\end{eqnarray}
Then (\ref{6.7}) becomes
$$ i(c^T) + p(c)
\le i(c^T)+p(c) + dh -d-2 + \frac{2D}{h(h+1)d}\left(1-\frac{dh(h-1)}{2}+{\epsilon}_{d,h}(i(c^T)+\mu)\right), $$
that is,
\begin{eqnarray} {\epsilon}_{d,h}(i(c^T)+\mu)
&\ge& \frac{h(h+1)d}{2D}\left(d+2 + \frac{(h-1)D}{h+1} - dh - \frac{2D}{h(h+1)d}\right) \nonumber\\
&=& \frac{dh-(d-2)}{dh+(d-2)}. \label{6.12}\end{eqnarray}
Note that by (\ref{6.6}) we have
\begin{equation} i(c^T)+\mu-(d-1) = i(c^T)+p(c)+dh-d-2 = i(c^T)+p(c)-2d + D. \label{6.13}\end{equation}
Let $\eta\in [0,D/2-1]$ be an integer such that
\begin{equation} \frac{2\eta}{D} = \{\frac{i(c^T)+p(c)-2d}{D}\} = \{\frac{i(c^T)+\mu-(d-1)}{D}\}. \label{6.14}\end{equation}
Such an integer $\eta$ exists because $i(c^T)+\mu-(d-1)\in 2{\bf N}$ by (\ref{6.6}) and
$D=d(h+1)-2$ is even when $d$ is even.
By the definition (\ref{2.13}) of ${\epsilon}_{d,h}(i(c^T)+\mu)$ and (\ref{6.14}), we obtain
\begin{eqnarray} {\epsilon}_{d,h}(i(c^T)+\mu)
&=& \{\frac{D}{dh}\{\frac{i(c^T)+\mu-(d-1)}{D}\}\}
- (\frac{2}{d}+\frac{d-2}{dh})\{\frac{i(c^T)+\mu-(d-1)}{D}\} \nonumber\\
&&\qquad - h\{\frac{D}{2}\{\frac{i(c^T)+\mu-(d-1)}{D}\}\}
- \{\frac{D}{d}\{\frac{i(c^T)+\mu-(d-1)}{D}\}\} \nonumber\\
&=& \{\frac{2\eta}{dh}\} - (\frac{2}{d}+\frac{d-2}{dh})\frac{2\eta}{D}
- h\{\frac{2\eta}{2}\} - \{\frac{2\eta}{d}\} \nonumber\\
&=& \{\frac{2\eta}{dh}\} - (\frac{2}{d}+\frac{d-2}{dh})\frac{2\eta}{D} - \{\frac{2\eta}{d}\} \nonumber\\
&\equiv& {\epsilon}(2\eta). \label{6.15}\end{eqnarray}
Now we claim
\begin{equation} {\epsilon}(2\eta) < \frac{dh-(d-2)}{dh+(d-2)}, \qquad \forall\; 2\eta\in [0,dh-2]. \label{6.16}\end{equation}
In fact, we write
\begin{equation} 2\eta = pd + 2m \qquad \mbox{with some}\;\; p\in {\bf N}_0, \;\; 2m\in [0,d-2]. \label{6.17}\end{equation}
Then from $pd+2m=2\eta \le dh-2=(h-1)d+d-2$ we have
\begin{equation} p\in [0, h-1]. \label{6.18}\end{equation}
Therefore in this case we obtain
\begin{eqnarray} {\epsilon}(2\eta)
&=& \frac{pd+2m}{dh} - (\frac{2}{d}+\frac{d-2}{dh})\frac{pd+2m}{D} - \frac{2m}{d} \nonumber\\
&=& \frac{p}{h} - \frac{(2h+d-2)p}{hD} + \frac{2m}{dh} - \frac{(2h+d-2)2m}{dhD} - \frac{2m}{d} \nonumber\\
&=& \frac{p}{h}(1-\frac{2h+d-2}{D}) + \frac{2m}{d}(\frac{1}{h} - \frac{2h+d-2}{hD} - 1) \nonumber\\
&=& \frac{p(d-2) - 2mh}{D} \nonumber\\
&\le& \frac{(h-1)(d-2)}{D}. \label{6.19}\end{eqnarray}
Now if (\ref{6.16}) does not hold, we then obtain
$$ \frac{dh-(d-2)}{D} \le {\epsilon}(2\eta) \le \frac{(h-1)(d-2)}{D}, $$
that is,
$$ dh - d +2 \le dh - d + 2 - 2h. $$
Because $h\ge 1$, this yields a contradiction and completes the proof of (\ref{6.16}).
If $d=2$, there holds $D-2=dh+d-4=dh-2$. Thus (\ref{6.16}) holds for any
integer $2\eta\in[0,D-2]$.
If $d\ge 4$, for any $2\eta\in [dh,D-2]$, write $2\eta = pdh + 2m$ for some $p\in{\bf N}_0$ and
$2m\in [0,dh-2]$. Then from $D-2=(h+1)d-4=hd+d-4$ we obtain $p\le 1$ and $2m\le d-4$.
Thus we have
\begin{eqnarray} {\epsilon}(2\eta)
&=& \frac{2m}{dh} - (\frac{2}{d}+\frac{d-2}{dh})\frac{pdh+2m}{D} - \frac{2m}{d} \nonumber\\
&=& {\epsilon}(2m) - (\frac{2}{d}+\frac{d-2}{dh})\frac{pdh}{D} \nonumber\\
&\le& {\epsilon}(2m). \label{6.20}\end{eqnarray}
Therefore from (\ref{6.16}) and (\ref{6.20}), it yields
\begin{equation}{\epsilon}(2\eta)<\frac{dh-(d-2)}{dh+(d-2)},\qquad\forall\,\eta\in [0,D/2-1].\label{6.21}\end{equation}
Together with (\ref{6.12}), (\ref{6.15}), and the
choice (\ref{6.14}) of $2\eta$, we then obtain
\begin{equation} \frac{dh-(d-2)}{dh+(d-2)} \le {\epsilon}_{d,h}(i(c^T)+\mu) = {\epsilon}(2\eta) < \frac{dh-(d-2)}{dh+(d-2)}.
\label{6.22}\end{equation}
This contradiction completes the proof of Case 1.
\medskip
{\bf Case 2.} {\it $d\ge 2$ is odd and $h=1$.}
\medskip
In this case, $M$ is rationally homotopic to the sphere $S^d$. Note that $i(c^T)+\mu$
is even by (\ref{6.3}). Because $\mu-p(c)+1=d-2$, there holds $b_j=0$ for any
$j\le\mu-p(c)+1$ by Lemma 2.5. Thus by Theorem 5.5 and Lemma 2.5 we obtain
\begin{eqnarray} i(c^T) + p(c)
&\le& \frac{1}{B(d,1)}\sum_{2j=0}^{i(c^T)+\mu}b_{2j} \nonumber\\
&\le& i(c^T) + p(c) +d-3 - \frac{d-1}{2}\frac{2(d-1)}{d+1} \nonumber\\
&=& i(c^T) + p(c) - \frac{4}{d+1}. \label{6.23}\end{eqnarray}
This is a contradiction.
\medskip
Now the proof of Theorem 1.1 is complete. \hfill\vrule height0.18cm width0.14cm $\,$
\medskip
Now we give
{\bf Proof of Theorem 1.2.} Here the arguments are the same as in Section 7 of
\cite{LoD1}. For any reversible
Finsler as well as Riemannian metric $F$ on a compact manifold $M$, the energy
functional $E$ is symmetric on every loop $f\in {\Lambda} M$ and its inverse curve
$f^{-1}$ defined by $f^{-1}(t)=f(1-t)$. Thus these two curves have the same
energy $E(f)=E(f^{-1})$ and play the same roles in the variational structure of
the energy functional $E$ on ${\Lambda} M$. In particular, the $m$-th iterates $c^m$ and
$c^{-m}$ of a prime closed geodesic $c$ and its inverse curve $c^{-1}$ have
precisely the same Morse indices, nullities, and critical modules. Let
$n=n(c)=n(c^{-1})$. Then there holds
\begin{equation} \dim\overline{C}_*(E,c^m)=\dim\overline{C}_*(E,c^{-m}). \label{6.24}\end{equation}
Thus if $c$ is the only geometrically distinct prime closed geodesic on $M$,
each entry in the diagram (\ref{5.54}) in the reversible case should be doubled.
So (\ref{5.46}) in Theorem 5.5 becomes
\begin{equation} B(d,h)(i(c^T) + p(c)) + (-1)^{\mu+i(c^T)}2{\kappa}
= \sum_{j=\mu-p(c)+1}^{\mu+i(c^T)}(-1)^j b_j. \label{6.25}\end{equation}
These changes bring no influence to our proofs in Section 6. Therefore our above
proof yields two geometrically distinct closed geodesics for reversible Finsler
metrics too. \hfill\vrule height0.18cm width0.14cm $\,$
\bibliographystyle{abbrv}
| {
"timestamp": "2010-08-24T02:01:38",
"yymm": "1008",
"arxiv_id": "1008.1458",
"language": "en",
"url": "https://arxiv.org/abs/1008.1458",
"abstract": "In this paper, we prove the existence of at least two distinct closed geodesics on every compact simply connected irreversible or reversible Finsler (including Riemannian) manifold of dimension not less than 2.",
"subjects": "Symplectic Geometry (math.SG); Differential Geometry (math.DG); Dynamical Systems (math.DS)",
"title": "The index quasi-periodicity and multiplicity of closed geodesics",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9585377296574668,
"lm_q2_score": 0.7401743620390163,
"lm_q1q2_score": 0.7094850525395425
} |
https://arxiv.org/abs/1411.2023 | Schrödinger spectrum generated by the Cornell potential | The eigenvalues $E_{n\ell}^d(a,c)$ of the $d$-dimensional Schrödinger equation with the Cornell potential $V(r)=-a/r+c\,r$, $a,c>0$ are analyzed by means of the envelope method and the asymptotic iteration method (AIM). Scaling arguments show that it is sufficient to know $E(1,\lambda)$, and the envelope method provides analytic bounds for the equivalent complete set of coupling functions $\lambda(E)$. Meanwhile the easily-implemented AIM procedure yields highly accurate numerical eigenvalues with little computational effort. | \section{Introduction}\label{intro}
\noindent The Schr\"dinger equation with the Cornell potential is an important non-relativistic model for the study of quark-antiquark systems \cite{alford,chung, claudio,eichten,eichten1978,eichten1980, evans,chen,hamz}. For example, it is used in describing the masses and decay widths of charmonium states. This Coulomb-plus-linear pair potential was originally proposed for describing quarkonia with heavy quarks \cite{eichten,eichten1978,eichten1980}. It takes into account general properties expected from the interquark interaction, namely Coulombic behavior at short
distances and a linear confining term at long distances \cite{claudio}. By varying the parameters one can obtain good fits to lattice measurements for the heavy-quark-antiquark static potential \cite{bali}. Although such models have been studied for many years, exact solutions of Schr\"odinger's equation with this potential are unknown. Most of the earlier work relies either on direct numerical integration of the Schr\"odinger equation or on various techniques for approximating the eigenenergies \cite{chung,kang,hall7}. Without specific reference to a particular physical system, we present a simple and very effective general method for solving Schr\"odinger's equation to any degree of precision in arbitrary dimension $d>1$. We write the Cornell potential in the form
\begin{equation}\label{cornell}
V(r)=-\frac{a}{r}+c\,r,
\end{equation}
where $a>0$ is a parameter representing the Coulomb strength,
and $c>0$ measures the strength of the linear confining term. The method we use does not require any particular constraint on the potential parameters and is thus appropriate for any physical problem that may be modelled by this class of potential. The method of solution is based on a special application of the asymptotic iteration method (AIM, \cite{aim}). AIM is an iterative algorithm originally introduced to investigate the analytic
and approximate solutions of a second-order linear differential equation of the form
\begin{equation}\label{AIM_Eq}
y''=\lambda_0(r) y'+s_0(r) y,\quad\quad ({}^\prime={d\over dr})
\end{equation}
where $\lambda_0(r)$ and $s_0(r)$ are $C^{\infty}-$differentiable
functions. It states \cite{aim} that: \emph{Given $\lambda_0$ and $s_0$ in
$C^{\infty}(a,b),$ the differential equation (\ref{AIM_Eq}) has the
general solution
\begin{eqnarray}\label{AIM_solution}
\nonumber y(r)= \exp\left(-\int\limits^{r}{s_{n-1}(t)\over \lambda_{n-1}(t)} dt\right)
\left[C_2
+C_1\int\limits^{r}\exp\left(\int\limits^{t}\left[\lambda_0(\tau) +
{2s_{n-1}\over \lambda_{n-1}}(\tau)\right] d\tau \right)dt\right]
\end{eqnarray}
if for some $n>0$
\begin{equation}\label{tm_cond}
\delta_n=\lambda_n s_{n-1}-\lambda_{n-1}s_n=0,
\end{equation}
where $\lambda_n$ and $s_n$ are given by
\begin{equation}\label{AIM_seq}
\lambda_{n}=
\lambda_{n-1}^\prime+s_{n-1}+\lambda_0\lambda_{n-1}\hbox{ ~~and~~
} s_{n}=s_{n-1}^\prime+s_0\lambda_{n-1}.
\end{equation}}
\noindent Applications of AIM to a variety of problems have been reported in numerous publications over the past few years. In most applications the functions $\lambda_0(r)$ and $s_0(r)$ are taken to be polynomials or rational functions. However, we show in this paper that the applicability of the method is not restricted to a particular class of differentiable functions. We consider the case where $\lambda_0(r)$ and $s_0(r)$ involve higher transcendental functions, specifically Airy functions. Provided the computer-algebra system employed has sufficient information about the functions and their derivatives, they present no difficulty. The paper is organized as follows. In section \ref{sec2}, we set up the $d$-dimensional Schr\"odinger equation for the Cornell potential and present some analytical
spectral bounds based on envelope methods \cite{hall1,hall3,hall4,hall5,hall6}. In particular we generalize to $d>1$ dimensions an analytical formula, first derived \cite{hall7} for $d=3$, which exhibits
energy upper and lower bounds for all the discrete eigenvalues of the problem. In section \ref{sec3}, we present an asymptotic solution that allows us to express Schr\"odinger's equation in a form suitable for the application of AIM. In section \ref{sec4}, we apply AIM to the Cornell potential and discuss some of its numerical results, in particular comparisons with the earlier results of Eichten et al. \cite{eichten1978} and the recent work of Chung and Lee \cite{chung}.
\section{Formulation of the problem and analytical estimates in $d$ dimensions}\label{sec2}
\noindent The $d$-dimensional Schr\"odinger equation,
in atomic units $\hbar=2\mu=1$, with a spherically symmetric
potential $V(r)$ can be written as
\begin{equation}\label{Sch_eq}
\left[-\Delta_d +V(r)\right]\psi(r)=E\psi(r),
\end{equation}
where $\Delta_d$ is the $d$-dimensional Laplacian operator, $d > 1,$ and
$r^2=\sum_{i=1}^d x_i^2$. In order to express (\ref{Sch_eq}) in terms of
$d$-dimensional spherical coordinates $(r, \theta_1, \theta_2,
\dots, \theta_{d-1})$, we separate variables using
\begin{equation}\label{gs_Sch_eq}
\psi(r)=r^{-(d-1)/2}\,u(r)\, Y_{\ell_1,\dots,\ell_{d-1}}(\theta_1\dots\theta_{d-1}),
\end{equation}
where $Y_{\ell_1,\dots,\ell_{d-1}}(\theta_1\dots\theta_{d-1})$ is a
normalized spherical harmonic \cite{atkin} with characteristic value $\ell(\ell+d-2),$ and $\ell=\ell_1=0, 1, 2, \dots$ (the principal angular-momentum quantum number). One obtains the
radial Schr\"odinger equation as
\begin{eqnarray}\label{gs_Sch_eq1}
&\left[-{d^2\over dr^2}+{(k-1)(k-3)\over
4r^2}+V(r)-E\right] \psi_{n\ell}^{(d)}(r)=0, \\
\nonumber &\int_0^\infty \left\{\psi_{n\ell}^{(d)}(r)\right\}^2dr=1, ~~~
\psi_{n\ell}^{(d)}(0)=0,
\end{eqnarray}
where $k=d+2\ell$. We assume that the potential $V(r)$ is less singular than the centrifugal term so that
for $(k-1)(k-3)\ne 0$ we have
\begin{equation}\label{ursmall}
u(r)\sim A\,r^{({k-1})/{2}},\quad r\rightarrow 0,\quad {\rm where}~A~{\rm is~a~constant}.
\end{equation}
Since $d>1$ it follows that $k>1,$ and meanwhile $k=3$ only when $\ell=0$ and $d=3.$
Thus in the very special case $k=3$, $u(r)\sim Ar$ (as we have for the Hydrogen atom), and we see that \eq{ursmall} is also valid when $k=3.$
We note that the Hamiltonian and the boundary
conditions of \eq{gs_Sch_eq1} are invariant under the transformation
$$(d, \ell)\rightarrow (d\mp2, \ell\pm 1), $$
thus, given any solution for fixed $d$ and $\ell$, we can immediately
generate others for different values of $d$ and $\ell$. Further, the
energy is unchanged if $k=d+2\ell$ and the number of nodes $n$ is
constant: this point has been discussed, for example, by Doren \cite{doren1986}.
Repeated application of this transformation produces a
large collection of states. In the present work, we study the $d$-dimension Schr\"odinger eigenproblem
\begin{eqnarray}\label{Sch}
&\left[-{d^2\over dr^2}+{(k-1)(k-3)\over 4r^2}-{a\over r}+c\,r\right]u_{nl}^d(r)=E_{nl}^d u_{nl}^d(r),\\
\nonumber &k=d+2\ell,~a>0, ~ 0<r<\infty, \quad u_{nl}^d(0) = 0.
\end{eqnarray}
Because of the presence of the linear confining term in the potential, for $c>0$ the spectrum of this problem is entirely discrete: a formal proof for $d>2$ is given in Reed-Simon IV \cite{simon}.
\medskip
If the parametric dependence of the eigenvalues on the potential coefficients $a$ and $c$ is written $E = E(a,c),$
then elementary scaling arguments reduce the dimension of the parameter space to one by means of the equation
\begin{equation}\label{ebounds}
E(a,c) = a^{2}\,E(1,\lambda),\quad {\rm where}\quad \lambda = \frac{c}{a^3}.
\end{equation}
Since $V(r)$ is at once a convex function of $-1/r$ and a concave function of $r^2$, the envelope method
\cite{hall1,hall3,hall4,hall5,hall6} can be used to derive lower and upper energy bounds based on the comparison theorem and the known exact solutions for the pure Hydrogenic and oscillator problems in $d$ dimensions. It turns out \cite{hall7} that the bounds can be expressed by a formula for $\lambda$ as a function of $E(1,\lambda).$ We have generalized the $d=3$ result of Ref.\kern 2pt \cite{hall7} to $d>1$ dimensions and we obtain:
\begin{equation}\label{eformula}
\lambda = \frac{2\nu^2E^3 -E^2\left[(1+3\nu^2E)^{\frac{1}{2}}-1\right]}
{\left[(1+3\nu^2E)^{\frac{1}{2}}-1\right]^3}\equiv g(E),\quad E\ge -\frac{1}{4\nu^2},
\end{equation}
which formula yields an upper bound when $\nu = 2n+\ell+d/2$ and a lower bound when $\nu = n+\ell +(d-1)/2.$ It is interesting that
this entire set of lower and upper (energy) curves are all scaled versions, for example, of the single ground-state curve. Again,
$n = 0,1,2,\dots$ counts the nodes in the radial eigenfunction. Thus by using a computer solve routine to invert the function $g(E)$ in \eq{eformula} for each of the two values of $\nu$, the energy bounds can be written in the form
\begin{equation}
E(a,c) = a^2g^{-1}_{\nu}(c/a^3).
\end{equation}
For the $s$-states, sharper upper bounds may be obtained (via envelopes of the linear potential) in terms of the zeros of the Airy function. This is about as far as we can go generally and analytically with this spectral problem.
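For illustration, the envelope bounds \eq{eformula} are easy to evaluate numerically. The following Python sketch (our own code; the helper names are ours, and it assumes the target energy $E(1,\lambda)$ is positive so that a simple positive bracket suffices) inverts $g(E)$ for the two values of $\nu$ and then rescales via \eq{ebounds}:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def cornell_energy_bounds(a, c, n, ell, d):
    """Invert lambda = g(E) of Eq. (eformula) numerically for the two values
    of nu, then rescale via E(a,c) = a^2 E(1, c/a^3).  Illustrative only:
    assumes g(E) - lambda changes sign on the scanned positive grid."""
    lam = c / a**3

    def g(E, nu):
        w = np.sqrt(1.0 + 3.0 * nu**2 * E) - 1.0
        return (2.0 * nu**2 * E**3 - E**2 * w) / w**3

    def invert(nu):
        grid = np.linspace(1e-3, 200.0, 4000)
        vals = np.array([g(E, nu) - lam for E in grid])
        idx = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0][0]
        return brentq(lambda E: g(E, nu) - lam, grid[idx], grid[idx + 1])

    nu_upper = 2 * n + ell + d / 2.0        # yields an upper bound
    nu_lower = n + ell + (d - 1) / 2.0      # yields a lower bound
    return a**2 * invert(nu_lower), a**2 * invert(nu_upper)

# d = 3 ground state of V(r) = -1/r + r: the pair should bracket ~1.3979
print(cornell_energy_bounds(a=1.0, c=1.0, n=0, ell=0, d=3))
\end{verbatim}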
\section{Asymptotic solution}\label{sec3}
\noindent We note first that the differential equation (\ref{Sch}) has one regular singular
point at $r = 0$ with exponents given by the roots of the indicial equation
\begin{equation}\label{indicial}
s(s-1)-{1\over 4}(k-1)(k-3)=0,
\end{equation}
and an irregular singular point at $r = \infty$. For large $r$, the differential equation \eq{Sch} assumes the asymptotic form
\begin{equation}\label{Sch_asy}
\left[-{d^2\over dr^2}+c\,r\right]u_{nl}^d(r)\approx 0
\end{equation}
with a solution
\begin{equation}\label{asy_sol}
u_{nl}^d(r)\approx Ai\left(c^{1/3}\,r\right),\qquad u_{nl}^d(\infty)\approx 0,
\end{equation}
where $Ai(z)$ is the well-known Airy function \cite{abr}. Since the roots $s$ of Eq.(\ref{indicial}), namely,
\begin{equation*}
s_1=\frac{1}{2}(3-k),\qquad s_2=\frac{1}{2}(k-1),
\end{equation*}
determine the behavior of $u_{nl}^d(r)$ as $r$ approaches $0$, only $s>1/2$ is acceptable, since only in this case is the mean value of the kinetic energy finite \cite{landau}.
Thus, the exact solution of (\ref{Sch}) assumes the form
\begin{equation}\label{gen_sol}
u_{nl}^d(r)=r^{(k-1)/2}Ai\left(c^{1/3}\,r\right)~f_n(r),\quad c\neq 0,\quad k=d+2l,
\end{equation}
where we note that $u_{nl}^d(r)\sim r^{(k-1)/2}$ as $r\rightarrow 0$. On insertion of this ansatz wave function into (\ref{Sch}), we obtain the differential equation for the functions $f_n(r)$ as
\begin{eqnarray}\label{secondorderde}
-r\,f_n''(r)+\left(1-k-2\,r\,\frac{d}{dr}\ln[Ai(c^{1/3}\,r)]\right)f_n'(r)+\left(-a-E\,r-(k-1)\frac{d}{dr}\ln[Ai(c^{1/3}\,r)]
\right)f_n(r)=0.
\end{eqnarray}
\section{Application of the asymptotic iteration method}\label{sec4}
\noindent For arbitrary values of the potential parameters $a$ and $c$, AIM is an effective method to compute the eigenvalues accurately as roots of the termination condition \eq{tm_cond}, which plays a crucial role. The AIM sequences $\lambda_n(r)$ and $s_n(r)$, $n=0,1,\dots$, depend on the (unknown) eigenvalue $E$ and the variable $r$: thus $\delta_n$ is an implicit function
of $E$ and $r$. If the eigenvalue problem is analytically solvable, the roots of the termination condition \eq{tm_cond} are independent of the variable $r$ in the sense that the roots of $\delta_n=0$ are independent of any particular value of $r$. In this case, the eigenvalues are simple zeros of this function. For instance, consider the case of a pure Coulomb potential $V(r)=-a/r$, $a>0$, for which the Schr\"odinger equation reads
\begin{eqnarray}\label{Sch_Coulomb}
&\left[-{d^2\over dr^2}+{(k-1)(k-3)\over 4r^2}-{a\over r}\right]u_{nl}^d(r)=E_{nl}^d u_{nl}^d(r),\\
\nonumber &k=d+2\ell,~a>0, ~ 0<r<\infty, \quad u_{nl}^d(0) = 0.
\end{eqnarray}
By means of the asymptotic solutions near $r=0$ and $r=\infty$, the solution of \eq{Sch_Coulomb} assumes the form
\begin{equation}\label{gen_sol}
u_{nl}^d(r)=r^{(k-1)/2}e^{-\kappa\,r}~f_n(r),\quad k=d+2l,\quad \kappa =\sqrt{-E_n},
\end{equation}
where the functions $f_n$ satisfy the differential equation
\begin{equation}\label{secondorderdeReducd}
\nonumber f_n''(r)=\left(2\kappa+\frac{\left(1-k\right)}{r}\right)f_n'(r)+\frac{(-a+(k-1)\kappa)}{r}f_n(r),
\end{equation}
for $n=0,1,2,\dots$ Thus, continuing the pure Coulomb case, with
\begin{equation}\label{AIM_Seq_1}
\lambda_0(r)=2\kappa+\frac{\left(1-k\right)}{r},\quad
s_0(r)=\frac{-a+(k-1)\kappa}{r}
\end{equation}
we use AIM to compute the sequences $\lambda_n$ and $s_n$, $n=0,1,2,\dots$, initiated with $\lambda_{-1}(r)=1$ and $s_{-1}(r)=0$. The termination condition is $\delta_n=0$, $n=0,1,2,\dots$ We observe that every root of $\delta_{n}=0$ remains a root of $\delta_{n+1}=0$. Direct computation implies
\begin{eqnarray*}
\delta_0=0,& \quad E_0=-\frac{a^2}{(k-1)^2}\\
\delta_1=0,&\quad E_0=-\frac{a^2}{(k-1)^2},\quad E_1=-\frac{a^2}{(k+1)^2}\\
\delta_2=0,&\quad E_0=-\frac{a^2}{(k-1)^2},\quad E_1=-\frac{a^2}{(k+1)^2},\quad E_2=-\frac{a^2}{(k+3)^2}\\
\delta_3=0,&\quad E_0=-\frac{a^2}{(k-1)^2},\quad E_1=-\frac{a^2}{(k+1)^2},\quad E_2=-\frac{a^2}{(k+3)^2},
E_3=-\frac{a^2}{(k+5)^2}\\
\end{eqnarray*}
and in general
\begin{eqnarray*}
\delta_n&=0\quad\Longrightarrow\quad E_j=-\frac{a^2}{(k+2j-1)^2},\quad j=0,1,2,\dots, n.
\end{eqnarray*}
which is the well-known eigenvalue formula for the Coulomb potential in $d$ dimensions. The situation is quite different in the case of $c\neq 0$. Here we use AIM with (see equation \eq{secondorderde})
\begin{eqnarray}\label{AIM_Seq_g}
\lambda_0(r)&=&\frac{(1-k)}{r}-2\frac{d}{dr}\ln[Ai(c^{1/3}\,r)],\\
s_0(r)&=&-E-\frac{a}{r}-\frac{(k-1)}{r}\frac{d}{dr}\ln[Ai(c^{1/3}\,r)],
\end{eqnarray}
where the termination condition $\delta_n=0$ is a function of both $r$ and $E$, namely
\begin{equation}\label{deltafunction}
\delta_n\equiv \delta_n(E;r)=0.
\end{equation}
The problem is then finding an initial value $r=r_0$ that would stabilize the recursive computation of the roots by the termination condition \eq{deltafunction} for all $n$. This is still an open problem with no general strategy to locate this initial value. A good choice for $r_0$ depends on the shape of the potential under consideration and sometimes on the asymptotic solution process itself. Thus two policies for the choice of $r_0$ are: (1) the point where the minimum of the potential occurs if it is not infinity; (2) the point where the maximum of the ground-state asymptotic solution occurs.
For the Cornell potential, because of the attractive Coulomb term, the potential function is not bounded below and we therefore choose $r_0$ to be the location of the maximum of the ground-state wave function as follows. The asymptotic solution is given by:
\begin{equation}\label{asy_sol}
u_{\rm as}(r)\approx r^{(k-1)/2}Ai\left(c^{1/3}\,r\right),
\end{equation}
and we suppose that $\hat{r}$ is the position of the maximum of $u_{\rm as}(r).$
We start with $r_0= \hat{r},$ then
we gradually increase the value of $r_0$ until we reach stability in the computational process, in the sense that it converges in a few iterations. Thus, once a suitable value is found for $r_0$ for a parameter patch, the actual eigenvalue calculations are extremely fast. We only found one difficulty with this approach for the present problem, namely when $c$ is small so that the wave function is very spread out (like the pure Coulomb case). In order to deal with this, we adopted the following strategy: we took $r_0$ as a point at which the tail of the asymptotic solution
\eq{asy_sol} starts to diminish rapidly. In Figure 1, we show plots of $u_{\rm as}$ for different values of $c$. These graphs suggest a starting value of $r_0=20$ for the potential $V(r)=-1/r+0.01\,r$, $r_0=5$ for the potential $V(r)=-1/r+r$, and $r_0=1$ for $V(r)=-1/r+100\,r$.
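To make the procedure concrete, the following Python/SymPy sketch (our own illustrative code, not the Maple implementation used for the tables; with a small number of iterations it only locates the eigenvalue roughly, and it assumes that $\delta_{n}$ changes sign in the chosen energy bracket) carries out the iteration \eq{AIM_seq} with the functions \eq{AIM_Seq_g} and bisects the termination condition \eq{deltafunction} on $E$ at fixed $r_0$:
\begin{verbatim}
import sympy as sp

def aim_delta_root(a, c, k, r0, E_lo, E_hi, n_max=6, bisect_steps=30):
    """Toy AIM sketch for the Cornell potential: for a numeric trial energy E,
    build the AIM sequences symbolically in r, evaluate delta_{n_max}(E; r0),
    and bisect on E.  The paper's runs use 50-100 iterations for 12 digits."""
    r = sp.symbols('r', positive=True)
    dlogAi = sp.diff(sp.log(sp.airyai(sp.cbrt(sp.Float(c)) * r)), r)

    def delta(E):
        lam0 = (1 - k) / r - 2 * dlogAi
        s0 = -E - a / r - (k - 1) / r * dlogAi
        lam_prev, s_prev = lam0, s0
        for _ in range(n_max):
            lam_n = sp.diff(lam_prev, r) + s_prev + lam0 * lam_prev
            s_n = sp.diff(s_prev, r) + s0 * lam_prev
            d_n = lam_n * s_prev - lam_prev * s_n
            lam_prev, s_prev = lam_n, s_n
        return float(d_n.subs(r, r0).evalf())

    lo, hi, f_lo = E_lo, E_hi, delta(E_lo)
    for _ in range(bisect_steps):
        mid = 0.5 * (lo + hi)
        f_mid = delta(mid)
        if f_lo * f_mid <= 0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)

# Ground state of V(r) = -1/r + r in d = 3 (k = 3); compare with 1.3978756...
print(aim_delta_root(a=1, c=1.0, k=3, r0=5.0, E_lo=1.0, E_hi=2.0))
\end{verbatim}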
\begin{figure}[!h]
\centering
\includegraphics[width=5cm, height=3cm]{clfig1a.eps}
\includegraphics[width=5cm, height=3cm]{clfig1b.eps}
\includegraphics[width=5cm, height=3cm]{clfig1c.eps}
\caption{The spatial spread of the asymptotic solution $u_{\rm as}$ as $c$ increases.}\label{Fig1}
\end{figure}
For the purpose of consistency we have calculated each eigenvalue to 12 significant figures and recorded in a subscript the minimum number of iterations required to reach this precision. The computation of the Airy function is
straightforward, thanks to Maple, where the \emph{`AiryAi'} and its derivative are built-in functions. The eigenvalues reported in Table \ref{table:tab1} were computed using Maple version 16 running on an Apple iMAC computer in a high-precision environment. In order to accelerate our computation we have written our own code for a root-finding algorithm instead of using the default procedure {\tt Solve} of \emph{Maple 16}. The results of AIM may be obtained to any desired degree of precision: we have reported most of our results to twelve decimal places, and those of Table
\ref{table:tab3} to fifteen places, as an illustration. Of course, once the energy eigenvalue has been determined accurately, it is straightforward to integrate \eq{Sch} to find the corresponding wave function $u(r):$
we exhibit the result in Fig. \ref{Fig2}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{clfig2.eps}
\caption{ The wave function $u(r)$ obtained by integrating \eq{Sch} with $ k = 4$ and
the energy eigenvalue $E = E^{d}_{n\ell} = 8.997414071$ taken from Table \ref{table:tab1}. This corresponds, for example, to
the case $d = 4, \ell = 0,$ and $n = 6.$}
\label{Fig2}
\end{figure}
\begin{table}[h] \caption{Eigenvalues $E_{nl}^{d=3,4}$ for $V(r)=-1/r+r$. The initial value used by AIM is $r_0=5$. The subscript $N$ refers to the number of iterations used by AIM.\\ }
\centering
\begin{tabular}{|c|c |p{1.4in}||c|c| p{1.5in}|}
\hline
$\ell$&$n$&$E_{n0}^{d=3}$&$\ell$&$n$&$E_{0l}^{d=3}$\\ \hline
$0$&0&$~1.397~875~641~660_{N=58}$&$0$&0&$~1.397~875~641~660_{N=70}$\\
~&1&$~3.475~086~545~396_{N=73}$&$1$&~&$~2.825~646~640~704_{N=56}$\\
~&2&$~5.032~914~359~536_{N=73}$&$2$&~&$~3.850~580~006~803_{N=51}$\\
~&3&$~6.370~149~125~486_{N=72}$&$3$&~& $~4.726~752~007~096_{N=43}$ \\
~&4&$~7.574~932~640~591_{N=66}$&$4$&~&$~5.516~979~644~329_{N=37}$\\
~&5&$~8.687~914~590~401_{N=82}$&$5$&~&$~6.248~395~598~411_{N=33}$\\
\hline
\hline
$\ell$&$n$&$E_{n0}^{d=4}$&$\ell$&$n$&$E_{0\ell}^{d=4}$\\ \hline
$0$&0&$~2.202~884~354~411_{N=56}$&$0$&0&$~2.202~884~354~411_{N=56}$\\
~&1&$~3.998~899~718~709_{N=67}$&$1$& ~ &$~3.363~722~259~378_{N=54}$\\
~&2&$~5.457~656~703~862_{N=68}$&$2$&~&$~4.301~971~630~406_{N=48}$\\
~&3&$~6.740~670~678~009_{N=67}$&$3$&~& $~5.130~492~519~711_{N=41}$ \\
~&4&$~7.909~993~263~956_{N=63}$&$4$&~&$~7.085~515~480~564_{N=37}$\\
~&5&$~8.997~414~071~258_{N=58}$&$5$&~&$~8.799~435~022~938_{N=41}$\\
\hline
\hline
\end{tabular}
\label{table:tab1}
\end{table}
\vskip0.1true in
\noindent In Table \ref{table:tab2} we report the eigenvalues for the Schr\"odinger equation with the potential $V(r)=-1/r+0.01\,r$. The AIM iterations used $r_0=20$. In Table \ref{table:tab3}, we report the eigenvalues for the Schr\"odinger equation with the potential $V(r)=-1/r+100\,r$, using $r_0=1$. In Table \ref{table:tab4} we compare our AIM ground-state eigenenergies for the potential $V(r)=-a/r+r$ and different values of the parameter $a$, with those computed earlier by Eichten et al. \cite{eichten1978} using an interpolation technique and that
of Chung and Lee \cite{chung} using the Crank-Nicholson method. Since the asymptotic solution \eq{asy_sol} is independent of the Coulombic parameter $a$ we use AIM with $r_0=1$, as shown in Figure \ref{Fig1}.
\begin{table}[h] \caption{Eigenvalues $E_{nl}^{d=3,4}$ for $V(r)=-1/r+0.01\,r$. The initial value used by AIM is $r_0=20$ or as indicated. The subscript $N$ refers to the number of iterations used by AIM.\\ }
\centering
\begin{tabular}{|c|c |p{2in}||c|c| p{2.0in}|}
\hline
$\ell$&$n$&$E_{n0}^{d=3}$&$\ell$&$n$&$E_{0l}^{d=3}$\\ \hline
$0$&0&$~-0.221~030~563~404_{N=79}$&$0$&0&$~-0.221~030~563~404_{N=79}$\\
~&1&$~~~~~0.034~722~241~998_{N=70}$&$1$&~&$~~~~~0.017~400~552~510_{N=61}$\\
~&2&$~~~~~0.141~913~022~811_{N=66}$&$2$&~&$~~~~~0.102~472~150~415_{N=47}$\\
~&3&$~~~~~0.220~287~171~811_{N=60}$&$3$&~& $~~~~~0.159~830~894~613_{N=39}$ \\
~&4&$~~~~~0.344~602~792~592_{N=75}$&$4$&~&$~~~~~0.206~238~109~687_{N=41}$\\
~&5&$~~~~~0.448~055~673~514_{N=85}$&$5$&~&$~~~~~0.246~682~072~100_{N=34}$\\
\hline
\hline
$\ell$&$n$&$E_{n0}^{d=4}$&$\ell$&$n$&$E_{0\ell}^{d=4}$\\ \hline
$0$&0&$~-0.057~503~250~143_{N=69}$&$0$&0&$~-0.057~503~250~143_{N=69}$\\
~&1&$~~~~~0.087~181~857~064_{N=63}$&$1$& ~ &$~~~~~0.065~687~904~463_{N=54}$\\
~&2&$~~~~~0.176~559~165~345_{N=72}$&$2$&~&$~~~~~0.133~067~612~356_{N=43}$\\
~&3&$~~~~~0.247~865~703~619_{N=67}$&$3$&~& $~~~~~0.183~984~697~123_{N=36}$ \\
~&4&$~~~~~0.309~777~243~695_{N=69,r_0=25}$&$4$&~&$~~~~~0.227~037~524~190_{N=37,r_0=25}$\\
~&5&$~~~~~0.365~723~900~484_{N=71,r_0=25}$&$5$&~&$~~~~~0.287~224~084~341_{N=39,r_0=25}$\\
\hline
\hline
\end{tabular}
\label{table:tab2}
\end{table}
\begin{table}[h] \caption{Eigenvalues $E_{nl}^{d=3}$ for $V(r)=-1/r+100\,r$. The initial value used by AIM is $r_0=1$ or as indicated. The subscript $N$ refers to the number of iterations used by AIM.\\ }
\centering
\begin{tabular}{|c|c |p{1.7in}||c|c| p{2.0in}|}
\hline
$\ell$&$n$&$E_{n0}^{d=3}$&$\ell$&$n$&$E_{0l}^{d=3}$\\ \hline
$0$&0&~~$46.402~258~652~779_{N=104}$&$0$&0&$~~46.402~258~652~779_{N=75}$\\
~&1&~~$85.339~271~687~574_{N=106}$&$1$&~&$~~70.016~058~921~076_{N=62}$\\
~&2&~$116.728~692~980~119_{N=103}$&$2$&~&$~~89.715~370~910~984_{N=51}$\\
~&3&$~144.315~456~241~781_{N=99}$&$3$&~& $~107.334~329~106~273_{N=46}$ \\
~&4&$~169.460~543~870~657_{N=102}$&$4$&~&$~123.561~985~764~157_{N=56,r_0=1.5}$\\
~&5&$~192.850~291~861~086_{N=103}$&$5$&~&$~138.761~138~633~388_{N=50,r_0=1.5}$\\
\hline
\hline
\end{tabular}
\label{table:tab3}
\end{table}
\begin{table}[h]
\caption{A comparison between the S-wave heavy-quarkonium eigenvalues of Eichten et al.\cite{eichten1978} and of Chung and Lee \cite{chung} and those of the present work, $E_{00}^{d=3}$, for the ground state with the Coulombic parameter $a$ in the potential $V(r)=-a/r+r$. The initial value used by AIM was fixed at $r_0=6$. The subscript $N$ refers to the number of iterations used by AIM.\\ }
\centering
\begin{tabular}{|c|c|c|}\hline
$a$& $E_{00}^3$ (Eichten et al.\cite{eichten1978}) & $E_{0,0}^3$(AIM) \\ \hline
$0.2$ & $2.167~316$ & $2.167~316~208~772~717_{N=104}$ \\ \hline
$0.4$ & $1.988~504$ & $1.988~503~899~750~869_{N=105}$ \\ \hline
$0.6$ & $1.801~074$ & $1.801~073~805~646~947_{N=104}$ \\ \hline
$0.8$ & $1.604~410$ & $1.604~408~543~236~585_{N=103}$ \\ \hline
$1.0$ & $1.397~877$ & $1.397~875~641~659~907_{N=102}$ \\ \hline
$1.2$ & $1.180~836$ & $1.180~833~939~744~787_{N=109}$ \\ \hline
$1.4$ & $0.952~644$ & $0.952~640~495~218~560_{N=110}$ \\ \hline
$1.6$ & $0.712~662$ & $0.712~657~680~461~034_{N=115}$ \\ \hline
$1.8$ & $0.460~266$ & $0.460~260~113~873~608_{N=117}$ \\ \hline
\end{tabular}
\vspace{ 0.3 in}
\begin{tabular}{|c||c||c|}\hline
$a$& $E_{00}^3$ (Chung and Lee \cite{chung}) & $E_{0,0}^3$ (AIM) \\ \hline
$0.1$ &$2.253~678$ & $2.253~678~098~810~761_{104}$\\ \hline
$0.3$ &$2.078~949$ & $2.078~949~440~194~840_{105}$\\ \hline
$0.5$ &$1.895~904$ & $1.895~904~238~476~994_{106}$\\ \hline
$0.7$ &$1.703~935$& $1.703~934~818~031~980_{104}$\\ \hline
$0.9$ &$1.502~415$& $1.502~415~495~453~739_{99~}$ \\ \hline
$1.1$ & $1.290~709$&$1.290~708~615~983~606_{105}$ \\ \hline
$1.3$ &$1.068~171$ &$1.068~171~244~486~971_{109}$ \\ \hline
$1.5$ & $0.834~162$& $0.834~162~211~049~953_{111}$ \\ \hline
$1.7$ & $0.588~049$&$0.588~049~168~557~953_{115}$ \\ \hline
\end{tabular}
\label{table:tab4}
\end{table}
\vskip0.1true in
\section{Conclusion}
\noindent The solution procedure presented in this paper is based on the asymptotic iteration method
and is very simple. It yields highly accurate eigenvalues with little computational effort. To our knowledge, this work is the first attempt to employ the asymptotic iteration method where the AIM sequences $\lambda_n$ and $s_n, n=0,1,2,\dots$, are computed in terms of higher transcendental functions, rather than polynomials or rational functions. This simple and practical method can easily be implemented with any available symbolic mathematical software to elucidate the dependence of the energy spectrum on potential parameters. Once accurate eigenvalues are at hand, it is straightforward to obtain the corresponding wave functions.
\vskip0.1true in
\section{Acknowledgments}
\medskip
\noindent Partial financial support of this work under Grant Nos. GP3438 and GP249507 from the
Natural Sciences and Engineering Research Council of Canada
is gratefully acknowledged by us (respectively RLH and NS).
\medskip
\section*{References}
| {
"timestamp": "2014-11-10T02:12:58",
"yymm": "1411",
"arxiv_id": "1411.2023",
"language": "en",
"url": "https://arxiv.org/abs/1411.2023",
"abstract": "The eigenvalues $E_{n\\ell}^d(a,c)$ of the $d$-dimensional Schrödinger equation with the Cornell potential $V(r)=-a/r+c\\,r$, $a,c>0$ are analyzed by means of the envelope method and the asymptotic iteration method (AIM). Scaling arguments show that it is sufficient to know $E(1,\\lambda)$, and the envelope method provides analytic bounds for the equivalent complete set of coupling functions $\\lambda(E)$. Meanwhile the easily-implemented AIM procedure yields highly accurate numerical eigenvalues with little computational effort.",
"subjects": "Mathematical Physics (math-ph)",
"title": "Schrödinger spectrum generated by the Cornell potential",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9585377284730286,
"lm_q2_score": 0.7401743620390163,
"lm_q1q2_score": 0.7094850516628518
} |
https://arxiv.org/abs/math/0501305 | On Aumann's Theorem that the sphere does not admit a mean | We prove that the circle S_1 does not have a 2-mean, i.e., S_1 times S_1 cannot have a retraction r onto its diagonal with r(x,y) = r(y,x), whenever x,y in S_1. Our proof is combinatorial and topological rather than analytical. | \section{Introduction}
{\sc Aumann} and {\sc Caratheodory} \cite{AuCa34}, \cite{Au35} and
\cite{Au43} were among the pioneers who first considered the question
about the structure of spaces for which the topological product $X^n$ has
a symmetric retraction onto its diagonal, \ie \ {\em an $n$-mean}. They
studied such objects in the complex plane and in the Euclidean $n$-space
using analytical tools. For example {\sc Aumann} in \cite{Au43} proved
that the $n$-dimensional sphere does not have a mean. For more information
about means see \cite{Hilton}. The aim of this note is to prove that the
circle $S_1$ does not have a 2-mean, using only combinatorial and
topological tools. For this purpose we use a method comparable to the one
used in \cite{kuS}. It is interesting to notice that this method has been
used (in dimension 2) to prove, among some other results, the {\sc
Brouwer} fixed point theorem and the special hexagonal chessboard theorem
(see {\sc Gale} \cite{Gal}, who, as far as we know, introduced the
method), and the {\sc Borsuk-Ulam} antipodal theorem (see \cite{KuTu}).
We point out that in the case of the {\sc Brouwer} fixed point theorem,
the combinatorial proof in \cite{KKM} is based on {\sc Sperner's} Lemma
\cite{Sp} but in the case of the {\sc Borsuk-Ulam} antipodal theorem
\cite{Bor}, for the combinatorial proof {\sc Tucker's} Lemma
\cite{Tucker45} is used (in this case {\sc Sperner's} Lemma is not enough)
({\sc Ky Fan} \cite{Fan52} extended {\sc Tucker}'s result to arbitrary
$n$). For more information about fixed point theory see \cite{D-G}. Here
we have (in dimension 2) one universal combinatorial lemma (see next
section). We wonder if it is possible to generalize this method to
arbitrary $n$.
\section{Combinatorial part}
Let us fix a natural number $k > 1$ and let
$$
Z_k = \left\{ \frac{i}{k}: i \in \{0,..., k \} \right\}
$$
and denote by
$$
D^{2}(k)
= (Z_k \times Z_k)
= \left\{0,\frac{1}{k},..., \frac{k-1}{k},1\right\}^2;
$$
$D^{2}(k)$ is called {\em a combinatorial square.}
\begin{definition}
Denote by ${\mathbf e}_0 = ( \frac{1}{k}, 0), {\mathbf e}_1 = ( 0,
\frac{1}{k})$ the basic vectors of length $\frac{1}{k}$. An ordered set
$z = [z_0, z_1, z_2]$ is said to be a {\em simplex} if and only if
$$z_1 = z_0 + \mathbf{e}_i, z_2 = z_1 + \mathbf{e}_{1-i}
\quad\text{where $i \in \{ 0, 1 \}$.}$$
Any subset $[z_0, z_1 ], [z_1, z_2 ]$ and $[z_2, z_0] \subset z$ is
said to be a face of the simplex $z$.
\end{definition}
\begin{figure}[h]
\setlength{\unitlength}{1cm}
\begin{picture}(3,2)
\thicklines
\put(.8,0){$\rightarrow$}
\put(.4,.1){\line(1,0){.7}}
\put(1.32,.3){\line(0,1){.7}}
\put(.3,.25){\line(1,1){.75}}
\put(1.2,0){$z_1$}
\put(1.23,.8){$\uparrow$}
\put(0,0){$z_0$}
\put(1.1,1.2){$z_2$}
\put(.23,.25){$\swarrow$}
\put(0.6,-.3){$\mathbf{e}_0$}
\put(1.6,.5){$\mathbf{e}_1$}
\put(2.35,1.3){\line(1,0){.65}}
\put(2.15,.3){\line(0,1){.7}}
\put(2.3,.25){\line(1,1){.75}}
\put(2,0){$z_0$}
\put(2,1.2){$z_1$}
\put(3.1,1.2){$z_2$}
\put(2.7,1.2){$\rightarrow$}
\put(2.05,.8){$\uparrow$}
\put(2.23,.25){$\swarrow$}
\put(2.6,1.5){$\mathbf{e}_0$}
\end{picture}
\caption{}
\end{figure}
\begin{observation}
Any face of a simplex $z$ contained in $D^{2}(k)$
is a face of exactly one or two simplexes from $D^{2}(k)$, depending on
whether or not it lies on the boundary of $D^{2}(k)$.
\end{observation}
\begin{definition}
Let ${\mathcal{P}}(k)$ be the family of all simplexes in $D^{2}(k)$ and
let ${\mathcal{V}}(k)$ be the set of all vertices of the simplexes from
${\mathcal{P}}(k)$. {\em A coloring} of ${\mathcal{P}}(k)$ is any
function $f:{ \mathcal{V}}(k) \longrightarrow \{1, -1\}$, and any face $s$
of any simplex $z$ is called {\em an $f$-gate} (or simply a gate if there
is no ambiguity of what $f$ is) if $f[s] = \{1, -1\}$.
\end{definition}
\begin{observation}
Let $w$ be a simplex, $\mathcal{W}$ be the set of vertices of $w$
and $f: {\mathcal{W}} \longrightarrow \{1, -1\}$ be a function. Then
$w$ has an even number of gates.
\end{observation}
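These combinatorial objects are straightforward to enumerate by computer; the following Python sketch (our own illustration, with vertices encoded by integer pairs $(i,j)$ standing for $(i/k,j/k)$) lists the simplexes of $D^{2}(k)$ and confirms Observation 2 for a random coloring:
\begin{verbatim}
from itertools import product
import random

def simplexes(k):
    """Ordered simplexes of D^2(k): from each corner z0, go e0 then e1,
    or e1 then e0 (Definition 1), giving two simplexes per unit cell."""
    e0, e1 = (1, 0), (0, 1)             # basic vectors, in units of 1/k
    for i, j in product(range(k), repeat=2):
        z0 = (i, j)
        for first, second in ((e0, e1), (e1, e0)):
            z1 = (z0[0] + first[0], z0[1] + first[1])
            z2 = (z1[0] + second[0], z1[1] + second[1])
            yield (z0, z1, z2)

def gates(simplex, f):
    """Faces of a simplex whose two endpoints receive both colors."""
    z0, z1, z2 = simplex
    faces = [(z0, z1), (z1, z2), (z2, z0)]
    return [face for face in faces if {f[face[0]], f[face[1]]} == {1, -1}]

# Observation 2: under any coloring, every simplex has an even number of gates.
k = 4
f = {v: random.choice([1, -1]) for v in product(range(k + 1), repeat=2)}
print(all(len(gates(s, f)) % 2 == 0 for s in simplexes(k)))   # True
\end{verbatim}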
\begin{definition}
If $f:{ \mathcal{V}}(k) \longrightarrow \{1, -1\}$ is a coloring, two
simplexes $w$ and $v$ from ${\mathcal {P}}(k) $ are in the relation $\sim$
if $w \cap v$ is a gate. A subset ${\mathcal{S}} \subset {\mathcal{P}}(k)$
is called a chain in ${\mathcal{P}}(k)$ if ${\mathcal{S}} = \{w_0,
w_1,..., w_n\}$ and for each $i\in\{0,..., n-1\}, w_{i} \sim w_{i+1}$.
\end{definition}
\begin{observation}
For each chain $\{v_1,...,v_n \}\subset {\mathcal{P}}(k)$ there exists no
more than one $v \in \mathcal{P}(k)$ and one $w \in \mathcal{P}(k)$ such that
$\{v_1,...,v_n,v \}$ and $\{w,v_1,...,v_n \}$ are chains.
Also, if ${\mathcal{S}}_1$ and ${\mathcal{S}}_2$ are maximal chains in
${\mathcal{P}}(k)$, then either
${\mathcal{S}}_1 \cap {\mathcal{S}}_2 = \emptyset$ or
${\mathcal{S}}_1 = {\mathcal{S}}_2$.
\end{observation}
Let $a$ and $b$ be two different elements of $D^2(k)$.
Consider the rectangle $R$ with $a$ and $b$ as opposite vertices and
right-hand-orient its boundary. By $\overline{ab}$ we mean the part of
the boundary that goes from $a$ to $b$. We define similarly $\overline{ba}$.
The boundary of $R$ is denoted by $\partial R$.
\begin{lemma}
No maximal chain $\mathcal{S} \subseteq \mathcal{P}(k)$ ever finishes at a gate of an
interior simplex, \ie \ a simplex disjoint from $\partial R$.
\end{lemma}
\begin{proof}
The proof consists in showing that if a simplex
$S_1$ in $\mathcal{S}$ is disjoint from $\partial R$, there is always another
simplex $S_2$ with a common gate (Observation 2). Thus the only
possibility for $\mathcal{S}$ to stop is at $\partial R$.
There are twelve possible combinations of simplex and chain flow (if directed)
to be considered,
each of them with two possible outcomes. We picture some of them with the
following in mind: arrows mean flow, thick lines are NOT gates and
thin lines are gates:
\begin{figure}[h]
\setlength{\unitlength}{1cm}
\begin{picture}(8,2)
\thicklines
\put(.35,1.1){\line(1,0){.65}}
\put(2.35,1.1){\line(1,0){.65}}
\put(3.15,0.29){\line(0,1){.65}}
\put(4.35,1.1){\line(1,0){.65}}
\put(4.35,.1){\line(1,0){.6}}
\thinlines
\put(.15,.3){\line(0,1){.65}}
\put(2.15,.3){\line(0,1){.65}}
\put(2.35,.1){\line(1,0){.6}}
\put(4.15,.3){\line(0,1){.65}}
\put(.3,.25){\line(1,1){.75}}
\put(2.3,.25){\line(1,1){.75}}
\put(4.3,.25){\line(1,1){.75}}
\put(5.15,0.29){\line(0,1){.65}}
\put(0,0){$\oplus$}
\put(2,0){$\oplus$}
\put(4,0){$\oplus$}
\put(0,1){$\ominus$}
\put(2,1){$\ominus$}
\put(4,1){$\ominus$}
\put(1,1){$\ominus$}
\put(3,1){$\ominus$}
\put(5,0){$\oplus$}
\put(5,1){$\ominus$}
\put(3,0){$\ominus$}
\put(-.05,.5){$\rightarrow$}
\put(1.95,.5){$\rightarrow$}
\put(3.95,.5){$\rightarrow$}
\put(4.95,.5){$\rightarrow$}
\put(.55,.5){$\searrow$}
\put(2.55,.5){$\searrow$}
\put(4.55,.5){$\searrow$}
\put(2.55,-.1){$\downarrow$}
\put(.55,1.5){A}
\put(2.5,1.5){A1}
\put(4.5,1.5){A2}
\end{picture}
\setlength{\unitlength}{1cm}
\begin{picture}(8,2)
\thicklines
\put(.35,1.1){\line(1,0){.65}}
\put(3.35,1.1){\line(1,0){.65}}
\put(6.35,1.1){\line(1,0){.65}}
\put(2.2,0.3){\line(1,1){.75}}
\put(5.35,.15){\line(1,0){.6}}
\thinlines
\put(.15,.3){\line(0,1){.65}}
\put(3.15,.3){\line(0,1){.65}}
\put(6.15,.3){\line(0,1){.65}}
\put(.3,.25){\line(1,1){.75}}
\put(3.3,.25){\line(1,1){.75}}
\put(6.3,.25){\line(1,1){.75}}
\put(5.2,0.3){\line(1,1){.75}}
\put(2.35,.15){\line(1,0){.6}}
\put(0,0){$\oplus$}
\put(3,0){$\oplus$}
\put(6,0){$\oplus$}
\put(2,0){$\ominus$}
\put(5,0){$\oplus$}
\put(0,1){$\ominus$}
\put(3,1){$\ominus$}
\put(6,1){$\ominus$}
\put(1,1){$\ominus$}
\put(4,1){$\ominus$}
\put(7,1){$\ominus$}
\put(-.05,.5){$\leftarrow$}
\put(2.95,.5){$\leftarrow$}
\put(5.95,.5){$\leftarrow$}
\put(5.25,.5){$\leftarrow$}
\put(.55,.5){$\nwarrow$}
\put(3.55,.5){$\nwarrow$}
\put(6.55,.5){$\nwarrow$}
\put(2.55,0){$\downarrow$}
\put(.55,1.5){B}
\put(3.55,1.5){B1}
\put(6.55,1.5){B2}
\end{picture}
\caption{}
\end{figure}
\end{proof}
\begin{corollary}
Any maximal chain $\mathcal{S} \subseteq \mathcal{P}(k)$ beginning at $\partial R$ must
finish at $\partial R$.
\end{corollary}
\begin{combinatoriallemma}
Let $\mathcal{P}(k)$ be the set of simplexes of
$D^2(k)$ and $f:\mathcal{V}(k)\to \{-1,1\}$ be a coloring of $\mathcal{V}(k)$. If
$a$ and $b$ belong to $\mathcal{V}(k)$, and
$f(b)=-f(a)$, then there exists a chain $\mathcal{S} \subseteq \mathcal{P}(k)$ such
that $\mathcal{S} \cap \overline{ab} \neq \emptyset \neq \mathcal{S} \cap \overline{ba}$.
\end{combinatoriallemma}
This result was proved originally in \cite{Tur1}.
Here we present a different argument.
\begin{proof}
We first define two equivalence relations on $\mathcal{V}(k)$:
If $u,v \in D^2(k) \cap R$, we will say that $u\approx v$ if $u=v$ or
if there
are vertices $u=x_0, x_1,...,x_{n-1},x_n=v$ in $D^2(k) \cap R$ such that \
$[x_i,x_{i+1}]$ is a face of a simplex ($i=0,...,n-1$) and
$f(x_i)=f(x_{i+1})$. Clearly $\approx$ is an equivalence relation on
$D^2(k) \cap R$.
Let $\mathcal{S} \subseteq \mathcal{P}(k)$ be a maximal chain beginning
at the boundary of $R$. If $u,v \in D^2(k) \cap R$, we will say that
$u\simeq v$ if $u=v$ or
if there are vertices $u=x_0, x_1,...,x_{n-1},x_n=v$ in $D^2(k) \cap R$
with $[x_i,x_{i+1}]$ being a face of a simplex ($i\in \{0,...,n-1\}$) and
no $[x_i,x_{i+1}]$ is a gate belonging to a simplex belonging to $\mathcal{S}$.
Clearly $\simeq$ is an equivalence relation on $D^2(k) \cap R$ as well.
Let $\mathcal{C}$ be the $\approx$-component of $b$. Walking from $a$ to $b$ let
$x$ be the vertex on $\overline{ab}$ found right before $\mathcal{C} \cap
\overline{ab}$, and $y$ be the vertex on $\overline{ab}$ right after $x$.
Then $y \in \mathcal{C}$ and $f(x)=f(a)$. Thus $[x,y]$ is a gate.
Let $\mathcal{S}$ be the unique maximal chain to which the simplex containing
$[x,y]$ belongs to (Observation 3). By Corollary 1, $\mathcal{S}$ ends on
$\partial R$. By the choice of $x$ and $y$ and since points in $\mathcal{C}$ are
all $\simeq$-equivalent, $\mathcal{S}$ must end on $\overline{ba}$, as required.
\end{proof}
\section{Topological Part}
We borrow the following from \cite{kuS}.
\begin{definition}
If $\{A_m: m \in \mathbb{N} \}$ is a sequence of subsets of a compact metric
space $X$, we define its {\em upper limit} $Ls\{A_n: n \in \mathbb{N}\}$ as the
set of points $x \in X$ such that \ there is an infinite $M \subseteq \mathbb{N}$
such that for every $m \in M$ there is $x_m \in A_m$ with $x_m \to x$.
\end{definition}
In the paper \cite{kuS} the following result has been proved. See also
\cite{Kur2} (5.47.6).
\begin{lemma}
Let $\{A_m: m \in \mathbb{N} \}$ be a sequence of connected subsets of a compact
metric space $X$ such that some sequence $\{a_n: n \in \mathbb{N}\}$ of points
$a_n \in A_n$ converges in $X$. Then the set $Ls\{A_n: n \in \mathbb{N}\}$ is
compact and connected.
\end{lemma}
\section{Main Result}
In this section we prove the result mentioned in the abstract.
Let $X$ be a space, and denote by $\Delta(X^2):=\{(x,x):x\in X\}$.
Obviously $\Delta(X^2)$ is homeomorphic to $X$. Identify $S_1$ with
$I:=[0,1]$ with the endpoints $0$ and $1$ identified.
Suppose that there exists a symmetric retraction $r$ from $S_1\times S_1$
onto its diagonal $\Delta(S_1^2)$, \ie \ a continuous map
$r:S_1\times S_1 \to \Delta(S_1^2)$ satisfying:\\
a) $r(x,y)=r(y,x)$ for each $x$ and $y$ from $S_1$, and \\
b) $r(x,x)=(x,x)$. \\
We call $r$ a {\em 2-mean,} and say that $S_1$ has a 2-mean.
To prove that the existence of $r:S_1^2\to \Delta(S_1^2)$ with properties
(a-b) is impossible, we consider two cases:
(1) Assume that $r[(I\times \{1\}) \cup (\{0\}\times I)]\neq \{(0,0)\}$.
Notice that if we consider $I^2$ instead of $S_1^2$, and $r:I^2\to
\Delta(I^2)$ rather than $r:S_1^2\to \Delta(S_1^2)$, then $r$ has the
following additional properties: \\
c) $r(0,0)=(0,0),\:r(1,1)=(1,1)$,\\
d) $r(0,x)=r(1,x),\:r(x,0)=r(x,1).$
For illustrative purposes, we call $\{0\}\times I$ the ``left'' side, $\{1\}\times I$ the ``right'' side, $I\times \{0\}$ the
``bottom'' side and
$ I \times \{1\}$ the ``top'' side. Our assumption now reads
$r[(I\times \{1\}) \cup (\{0\}\times I)]\neq \{(0,0),(1,1)\}$. (c) and the
Intermediate Value Theorem imply that
$r[(I\times \{1\}) \cup (\{0\}\times I)]=\Delta(I^2)$.
Fix $k \in \mathbb{N}$, and if
$p:I\times I \to I$ denotes the projection on the first coordinate,
define the coloring $f:V(k)\to \{\pm1\}$ as follows:
$$f(i/k,j/k)=\left\{\begin{array}{rc}
-1& \mbox{if}\:\: \cos(2\pi p(r(i/k,j/k)))\leq 0,\\
1& \mbox{otherwise.}
\end{array} \right.$$
This coloring is symmetric with respect to \
$\Delta(I^2)$ and each side of the square has exactly the same number of
gates: The gates at the left and right sides are at the same vertical
positions, and those at the bottom and top sides are at the same
horizontal positions, respectively.
Considering once again $r:S_1^2\to \Delta(S_1^2)$, we identify the points
$(0,i/k)$ with $(1,i/k)$ and $(i/k,0)$ with $(i/k,1)$ ($i=0,...,k$).
Walking to the right of $(0,0)$, one finds the first gate $g_b^1$
($b$, $l$, $r$ and $t$
stand for ``bottom'', ``left'', ``right'' and ``top'')
on $I \times \{0\}$ which gives rise to a chain $\mathcal{S}_k$ ``going'' on top
of the $\approx$-component $A$ of $(0,0)$ (the relation $\approx$ was
defined in the proof of the Combinatorial Lemma). By the case we are
dealing with, $\mathcal{S}_k$ intersects $\{0\}\times I$
in the last gate $g_l^\infty$ from top to bottom.
By the identification of $(0,i/k)$ with $(1,i/k)$ $\mathcal{S}_k$ reappears through
the first gate $g_r^1$ in $\{1\}\times I$ from bottom to top, and thus
$\mathcal{S}_k$ ``goes'' above the $\approx$-component $B$ of $(1,0)$.
$\mathcal{S}_k$ intersects $I \times \{0\}$ in the last gate $g_b^\infty$
going from left to right. By the identification of $(0,i/k)$ with
$(1,i/k)$ $\mathcal{S}_k$ reappears through
the first gate $g_t^1$ of the top side from right to left, and thus
$\mathcal{S}_k$ ``goes'' under
the $\approx$-component $C$ of $(1,1)$. Again $\mathcal{S}_k$ intersects
$I \times \{1\}$ in the last gate $g_r^\infty$ of the right side
going from bottom to top,
thus $\mathcal{S}_k$ reappears on the first gate $g_l^1$ of the left side
going from top to bottom,
going under the $\approx$-component $D$ of $(0,1)$ and intersecting
the last gate $g_t^\infty$ of the top side
from right to left, reappearing on $g_b^1$
and beginning the whole cycle once again.
The union of the simplexes from the chain $\mathcal{S}_k$ is a connected set
for each natural number $k$.
According to Lemma 2 the upper limit $C=Ls\{\mathcal{S}_k: k\in \mathbb{N}\}$ is
connected, and we have that $C\subset r^{-1}(p^{-1}(\cos^{-1}(0)))$, thus
$r$ maps the continuum $C$ onto two points in $\Delta(S_1^2)$; a
contradiction.
This concludes the proof in case (1).
(2) Assume that $r[(I\times \{1\}) \cup (\{0\}\times I)]= \{(0,0)\}$.
This would mean that $r[\partial I^2]=\{(0,0)\}$, and thus would imply
that any copy $S$ of $S_1$
in the sphere $S_2$ is a retract: The sphere $S_2$ is the image of
$S_1 \times S_1$ by identifying the left (and right) and
bottom (and top) sides of $S_1 \times S_1$.
Since $S_2$ equals the union of two copies of the unit disk,
sharing the same boundary, this is impossible by the following corollary
to the Combinatorial Lemma:
\begin{corollary}[Borsuk's non-retraction theorem]
$S_1$ is not a retract of the unit disk.
\end{corollary}
\begin{proof}
Identify the disk with the square $I^2$, and $S_1$ with its boundary
$\partial I^2$.
If $k \in \mathbb{N}$ consider $D^2(k)$ and color it according to what points
get mapped to the bottom and left sides, and to the top and right sides.
There are only two gates in $\partial I^2$. By corollary 1 there is one and
only one chain connecting these two gates. Then we proceed similarly
as in the end of case (1).
\end{proof}
\bibliographystyle{amsplain}
\makeatletter
\renewcommand{\@biblabel}[1]{\hfill#1.}
\makeatother
| {
"timestamp": "2005-01-19T23:07:29",
"yymm": "0501",
"arxiv_id": "math/0501305",
"language": "en",
"url": "https://arxiv.org/abs/math/0501305",
"abstract": "We prove that the circle S_1 does not have a 2-mean, i.e., S_1 times S_1 cannot have a retraction r onto its diagonal with r(x,y) = r(y,x), whenever x,y in S_1. Our proof is combinatorial and topological rather than analytical.",
"subjects": "General Topology (math.GN)",
"title": "On Aumann's Theorem that the sphere does not admit a mean",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9585377284730286,
"lm_q2_score": 0.7401743563075446,
"lm_q1q2_score": 0.70948504616902
} |
https://arxiv.org/abs/1609.08121 | Improving the Randomization Step in Feasibility Pump | Feasibility pump (FP) is a successful primal heuristic for mixed-integer linear programs (MILP). The algorithm consists of three main components: rounding fractional solution to a mixed-integer one, projection of infeasible solutions to the LP relaxation, and a randomization step used when the algorithm stalls. While many generalizations and improvements to the original Feasibility Pump have been proposed, they mainly focus on the rounding and projection steps.We start a more in-depth study of the randomization step in Feasibility Pump. For that, we propose a new randomization step based on the WalkSAT algorithm for solving SAT instances. First, we provide theoretical analyses that show the potential of this randomization step; to the best of our knowledge, this is the first time any theoretical analysis of running-time of Feasibility Pump or its variants has been conducted. Moreover, we also conduct computational experiments incorporating the proposed modification into a state-of-the-art Feasibility Pump code that reinforce the practical value of the new randomization step. | \section{Introduction}
Primal heuristics are used within mixed-integer linear programming (MILP) solvers for finding good integer feasible solutions quickly~\cite{lodiF:2011}. \emph{Feasibility pump} (FP) is a very successful primal heuristic for mixed-binary LPs that was introduced in~\cite{FischettiGL05}. At its core, Feasibility Pump is an \emph{alternating projection method}, as described below.
\begin{algorithm}[H]
\caption{Feasibility Pump (Na{\"i}ve version)}
\label{hello}
\begin{algorithmic}[1]
\State \textbf{Input:} mixed-binary LP (with binary variables $x$ and continuous variables $y$)
\smallskip
\State Solve the linear programming relaxation, and let ${(\bar{x}, \bar{y})}$ be an optimal solution
\While{$\bar{x}$ is not integral}
\State (Round) Round each coordinate of $\bar{x}$ to the closest integer, call the obtained vector $\wt{x}$ \label{alg:naiveRound}
\State (Project) Let $(\bar{x}, \bar{y})$ be the point in the LP relaxation that minimizes
$\sum_i |{x}_i - \wt{x}_i|$ \label{alg:naiveProj}
\EndWhile
\State Return $(\bar{x}, \bar{y})$
\end{algorithmic}
\end{algorithm}
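For concreteness, here is a minimal Python rendering of this loop (our own sketch using \texttt{scipy.optimize.linprog}; it starts from an arbitrary LP-feasible point rather than the LP optimum, assumes the relaxation is nonempty, and omits the randomization step discussed next). Since $\wt{x}$ is binary and $0\le x\le 1$, the $\ell_1$-distance $\sum_i |x_i-\wt{x}_i|$ is linear in $x$, so each projection is a plain LP:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def naive_feasibility_pump(A, b, n_bin, max_iter=1000):
    """Sketch of Algorithm 1 for {z : A z <= b}, z_i binary for i < n_bin."""
    n = A.shape[1]
    bounds = [(0, 1)] * n_bin + [(None, None)] * (n - n_bin)

    # start from any LP-feasible point (zero objective); assumes feasibility
    z = linprog(np.zeros(n), A_ub=A, b_ub=b, bounds=bounds).x
    for _ in range(max_iter):
        x_tilde = np.round(z[:n_bin])                  # rounding step
        if np.all(np.abs(z[:n_bin] - x_tilde) < 1e-6):
            return z                                   # binary part integral
        # l1-projection: min sum_{x_tilde_i=0} x_i + sum_{x_tilde_i=1}(1-x_i)
        c = np.zeros(n)
        c[:n_bin] = 1.0 - 2.0 * x_tilde
        z = linprog(c, A_ub=A, b_ub=b, bounds=bounds).x
    return None   # stalled / iteration limit (real FP would perturb here)
\end{verbatim}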
The scheme presented above may \emph{stall}, since the same infeasible integer point may be visited in Step \ref{alg:naiveRound} at different iterations. Whenever this happens, the paper~\cite{FischettiGL05} recommends a \emph{randomization step} that, after Step \ref{alg:naiveRound}, flips the value of some of the binary variables as follows: define the \emph{fractionality} of variable $x_i$ as $|\bar{x}_i - \tilde{x}_i|$, let $NN$ be the number of variables with positive fractionality, randomly generate a positive integer $TT$, and flip the $\textup{min}\{TT, NN\}$ variables with largest fractionality.
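A sketch of this randomization step in the same style (our own rendering; the distribution of $TT$ below is an illustrative choice, not the one used in the original code) is:
\begin{verbatim}
import random

def original_fp_perturbation(x_bar, x_tilde, big_t=20):
    """Flip the min(TT, NN) entries of x_tilde with largest fractionality
    |x_bar_i - x_tilde_i|; TT is drawn here as a uniform positive integer."""
    frac = [abs(xb - xt) for xb, xt in zip(x_bar, x_tilde)]
    nn = sum(f > 0 for f in frac)            # entries with positive fractionality
    tt = random.randint(1, 2 * big_t)        # random positive integer TT
    order = sorted(range(len(x_tilde)), key=lambda i: frac[i], reverse=True)
    flipped = list(x_tilde)
    for i in order[:min(tt, nn)]:
        flipped[i] = 1 - flipped[i]
    return flipped
\end{verbatim}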
Together with a few other tweaks, this surprisingly simple method works very well. On MIPLIB 2003 instances, FP finds feasible solutions for $96.3\%$ of the instances in reasonable time~\cite{FischettiGL05}.
Due to its success, many improvements and generalizations of FP (both for MILPs and mixed integer non-linear programs (MINLPs)) have been studied~\cite{AchterbergB07,BertaccoFL07,BonamiCLM09,FischettiS09,santis:lu:ri:2010,DAmbrosioFLL10,BolandEET12,DAmbrosioFLL12,BolandEEFST14}.
However, the focus of these improvements has been on the projection and rounding steps or generalization for MINLPs; to the best of our knowledge, they use essentially the same randomization step as proposed in the original algorithm~\cite{FischettiGL05} (and its generalization to the general integer MILP case of \cite{BertaccoFL07}).
Moreover, even though FP is so successful and so many variants have been proposed, there is very limited theoretical analysis of its properties~\cite{BolandEET12}. In particular, to the best of our knowledge there are no known bounds on the expected running time of FP.
\section{Our contributions} \label{sec:contrib}
In this paper, we start a more in-depth study of the randomization step in Feasibility Pump. For that, we propose a new randomization step $\perturb{\ell}$ and provide both \emph{theoretical analysis} and \emph{computational experiments} in a state-of-the-art Feasibility Pump code that show the potential of this method.
\paragraph{Theoretical justification of $\perturb{\ell}$.}
The new randomization step $\perturb{\ell}$
is inspired by the classical algorithm \emph{\textsc{WalkSAT}\xspace}~\cite{Schoning99} for solving SAT instances (see also~\cite{Papadimitriou91,MintonJPL92}).
The key idea of $\perturb{\ell}$ is that whenever Feasibility Pump stalls, namely when an infeasible mixed-binary solution is revisited, it should flip a binary variable that participates in an \emph{infeasible constraint}. More precisely, $\perturb{\ell}$ constructs a \emph{minimal (projected) infeasibility certificate} for this solution and \emph{randomly picks} a binary variable in it to be flipped (see Section \ref{sec:WalkSAT} for exact definitions).
While the vague intuition that such randomization is trying to ``fix'' the infeasible constraint is clear, we go further and provide theoretical analyses that formally justify this and highlight more subtle advantageous properties of $\perturb{\ell}$.
First, we analyze what happens if we simply repeatedly use \emph{only} the new proposed randomization step $\perturb{\ell}$, which gives a simple primal heuristic that we denote by \textsc{mbWalkSAT}\xspace. Not only do we show that \textsc{mbWalkSAT}\xspace is guaranteed to find a solution if one exists, but its behavior is related to the \emph{(almost) decomposability} and \emph{sparsity} of the instance. To make this precise, consider a decomposable mixed-binary set with $k$ blocks:
%
\begin{gather}
P^I = P^I_1 \times \ldots \times P^I_k \textrm{, where for all $i \in [k]$ we have} \notag\\
P^I_i = P _i \cap (\{0,1\}^{n_i} \times \R^{d_i}) \textrm{, } P_i = \{(x^i,y^i) \in [0,1]^{n_i} \times \R^{d_i} : A^i x^i + B^i y^i \le b^i\}. \label{eq:decomp}\\
\textrm{Let } P = P_1 \times \ldots \times P_k \textrm{ denote the LP relaxation of $P^I$}. \notag
\end{gather}
Note that since we allow $k=1$, this also captures a general mixed-binary set. We then have the following running-time guarantee for the primal heuristic \textsc{mbWalkSAT}\xspace.
\begin{theorem} \label{thm:decomp}
Consider a feasible decomposable mixed-binary set as in equation \eqref{eq:decomp}. Let $s_i$ be such that each constraint in $P_i^I$ has at most $s_i$ binary variables, and define $c_i := \min\{ s_i \cdot (d_i + 1), n_i\}$. Then with probability at least $1-\delta$, \textsc{mbWalkSAT}\xspace with parameter $\ell=1$ returns a feasible solution within $\ln(k/\delta)\, \sum_i n_i \, 2^{n_i \log c_i}$ iterations. In particular, this bound is at most $\bar{n} k \, 2^{\bar{n} \log \bar{n}} \cdot \ln(k/\delta)$, where $\bar{n} = \max_i n_i$.
\end{theorem}
There are a few interesting features of this bound that indicates good properties of the proposed randomization step, apart from the fact that it is already able to find feasible solutions by itself. First, it depends on the \emph{sparsity} $s_i$ of the blocks, giving better running times on sparser problems. More importantly, the bound indicates that the algorithm works almost \emph{independently} on each of the blocks, that is, it just takes about $2^{n_i}$ iterations to find a solution for each of the blocks, instead of $2^{n_1 + \ldots + n_k}$ of a complete enumeration over the whole problem. In fact, the proof of Theorem \ref{thm:decomp} makes explicit this almost independence of the algorithm over the blocks, and motivates the uses of \emph{minimal} infeasibility certificates. Moreover, we note the important point that the algorithm is not provided the knowledge of the decomposability of the instance, it just \emph{automatically} runs ``fast'' when the problem is decomposable. This gives some indication that the proposed randomization could still exhibit good behavior on the \emph{almost decomposable} instances often found in practice (see discussion in~\cite{dey:molinaro:wang:2016}).
\paragraph{$\perturb{\ell}$ in conjunction with FP.}
Next, we analyze $\perturb{\ell}$ in the context of Feasibility Pump by adding it as a randomization step to the Na\"ive Feasbility Pump algorithm (Algorithm \ref{hello}); we call the resulting algorithm \textsc{WFP}\xspace. This now requires understanding the complicated interplay of the randomization, rounding and projection steps: While in practice rounding and projection greatly help finding feasible solutions, their worst-case behavior is difficult to analyze and in fact they could take the iterates far away from feasible solutions. Although the general case is elusive at this point, we are nonetheless able to analyze the running time of \textsc{WFP}\xspace for \emph{decomposable subset-sum} instances.
\begin{definition}
A \emph{separable subset-sum set} is one of the form
\begin{align}
\{(x^1, x^2, \ldots, x^k) \in \{0,1\}^{n_1 + n_2 + \ldots + n_k}: a^i x^i = b_i ~~\forall i\} \label{eq:subset}
\end{align} for non-negative $(a^i,b_i)$'s.
\end{definition}
While this may seem like a simple class of problems, on these instances Feasibility Pump with the original randomization step from \cite{FischettiGL05} (without restarts) may not even converge, as illustrated next.
\begin{remark}
Consider the feasible subset-sum problem
%
\begin{align*}
\max ~& x_2\\
s.t. ~& 3 x_1 + x_2 = 3\\
& x_1, x_2 \in \{0,1\}.
\end{align*}
Consider the execution of the original Feasibility Pump algorithm (without restarts). The starting point is an optimal LP solution; without loss of generality, suppose it is the solution $(\frac{2}{3}, 1)$. This solution is then rounded to the point $(1, 1)$, which is infeasible. This point is then $\ell_1$-projected to the LP, giving back the point $(\frac{2}{3}, 1)$, which is then rounded again to $(1, 1)$. At this point the algorithm has stalled and applies the randomization step. Since only variable $x_2$ has strictly positive fractionality $|\frac{2}{3} - 1| = \frac{1}{3}$, only the first coordinate of $(1,1)$ is a candidate to be flipped. So suppose this coordinate is flipped. The infeasible point $(0,1)$ obtained is then $\ell_1$-projected to the LP, giving again the point $(\frac{2}{3}, 1)$. This sequence of iterates repeats indefinitely and the algorithm does not find the feasible solution $(1,0)$.
\end{remark}
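The cycling described in the remark is easy to reproduce numerically; the following toy script (ours, using \texttt{scipy.optimize.linprog}) shows that the $\ell_1$-projection of both rounded points onto the LP relaxation returns $(\tfrac{2}{3},1)$:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# LP relaxation of the remark: {3 x1 + x2 = 3, 0 <= x <= 1}
A_eq, b_eq = np.array([[3.0, 1.0]]), np.array([3.0])
bounds = [(0, 1), (0, 1)]

def l1_project(x_tilde):
    # for binary x_tilde the l1-distance is linear in x (coefficient 1-2*x_tilde_i)
    c = 1.0 - 2.0 * np.asarray(x_tilde, dtype=float)
    return linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds).x

for x_tilde in [(1, 1), (0, 1)]:
    print(x_tilde, "->", l1_project(x_tilde))   # both print (0.6667, 1.0)
\end{verbatim}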
The issue in this example is that the original randomization step never flips a variable with zero fractionality. Moreover, in Section \ref{app:3choice} of the appendix we show that even if such flips are considered, there is a more complicated subset-sum instance where the algorithm stalls.
On the other hand, we show that algorithm \textsc{WFP}\xspace with the proposed randomization step always finds a feasible solution of feasible subset-sum instances, and moreover its running time again depends on the sparsity and the decomposability of the instance (in order to simplify the proof, we assume that whenever $\wt{x} \notin P$, the projection $\textrm{$\ell_1$-proj}(P, \wt{x})$ returns a vertex of $P$; notice that since $\textrm{$\ell_1$-proj}(P, \wt{x})$ is a linear programming problem and subset-sum instances are bounded, there is always a vertex with the desired properties).
\begin{theorem} \label{thm:WFP}
Consider a feasible separable subset-sum set $P$ as in \eqref{eq:subset}.
Then with probability at least $1-\delta$, \textsc{WFP}\xspace with $\ell = 2$ returns a feasible solution within $T = \lceil\ln(k/\delta)\rceil\, \sum_i n_i \, 2^{2n_i \log n_i} \le \bar{n} k \, 2^{2 \bar{n} \log \bar{n}} \cdot \ln(k/\delta)$ iterations, where $\bar{n} = \max_i n_i$.
\end{theorem}
To the best of our knowledge this is the first theoretical analysis of the running time of a variant of the Feasibility Pump algorithm, even for a special class of instances. As in the case of repeatedly using just $\perturb{\ell}$, the algorithm \textsc{WFP}\xspace essentially works independently on each of the blocks (inequalities) of the problem, and has reduced running time on sparser instances.
The high-level idea of the proof of Theorem \ref{thm:WFP} is to: 1) Show that the combination of projection plus rounding is \emph{idempotent} for these instances, namely applying them once or repeatedly yields the same effect (Lemma \ref{lemma:stabAltProj}); 2) Show that a round of randomization step plus projection plus rounding has a non-zero probability of generating an iterate closer to a feasible solution (Lemma \ref{lemma:2choice}).
\paragraph{Computational experiments.} While the analyses above give insights on the usefulness of using $\perturb{\ell}$ in the randomization step of FP, in order to attest its practical value it is important to understand how it interacts with complex engineering components present in current Feasibility Pump codes. To this end, we considered the state-of-the-art code of~\cite{FischettiS09} and modified its randomization step based on $\perturb{\ell}$. While the full details of the experiments are presented in Section \ref{sec:computation}, we summarize some of the main findings here.
We conducted experiments on MIPLIB~2010~\cite{MIPLIB2010} instances and on randomly generated two-stage stochastic models. In the first testbed there was a small but consistent improvement in both running time and number of iterations. More importantly, the success rate of the heuristic improved consistently. In the second testbed, the new algorithm performs even better, according to all measures. It is somewhat surprising that our small modification of the randomization step could provide noticeable improvements over the code in~\cite{FischettiS09}, especially considering that it already includes several improvements over the original Feasibility Pump (e.g. constraint propagation). In addition, the proposed modification is generic and could be easily incorporated in essentially any Feasibility Pump code. Moreover, for virtually all the seeds and instances tested the modified algorithm performed better than the original version in~\cite{FischettiS09}; this indicates that, in practice, the modified randomization step dominates the previous one.
The rest of the paper is organized as follows: In Section~\ref{sec:WalkSAT} we discuss and present our analysis of the proposed randomization scheme $\perturb{\ell}$, Section~\ref{sec:toyFPWalkSAT} presents the analysis of the new randomization scheme $\perturb{\ell}$ in conjunction with Feasibility Pump, and Section~\ref{sec:computation} describes the details of our empirical experiments.
\medskip
\noindent \textbf{Notation.} We use $\R_+$ to denote the non-negative reals, and $[k] := \{1, 2, \ldots, k\}$. For a vector $v \in \R^n$, we use $\textrm{supp}(v) \subseteq [n]$ to denote its support, namely the set of coordinates $i$ where $v_i \neq 0$. We also use $\|v\|_0 = |\textrm{supp}(v)|$, and $\|v\|_1 = \sum_i |v_i|$ to denote the $\ell_1$ norm.
\section{New randomization step $\perturb{\ell}$}\label{sec:WalkSAT}
\subsection{Description of the randomization step}
We start by describing the \textsc{WalkSAT}\xspace algorithm~\cite{Schoning99}, which serves as the inspiration for the proposed randomization step $\perturb{\ell}$, in the context of pure-binary linear programs. The vanilla version of \textsc{WalkSAT}\xspace starts with a random point $\bar{x} \in \{0,1\}^n$; if this point is feasible, the algorithm returns it, and otherwise selects any constraint violated by it. The algorithm then selects a random index $i$ from the support of the selected constraint and flips the value of the entry $\bar{x}_i$ of the solution. This process is repeated until a feasible solution is obtained. It is known that this simple algorithm finds a feasible solution in expected time at most $2^n$ (see \cite{mitz} for a proof for 3-SAT instances), and Sch\"oning~\cite{Schoning99} showed that if the algorithm is restarted every $3n$ iterations, a feasible solution is found in expected time at most a polynomial factor from $(2(1-\frac{1}{s}))^n$, where $s$ is the largest support size of the constraints.
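The following Python sketch illustrates this vanilla \textsc{WalkSAT}\xspace loop for a pure-binary program; for simplicity the constraints are written as $Ax \le b$ (an equality can be modeled by two inequalities), and the function name and parameters are illustrative assumptions of ours.
\begin{verbatim}
import numpy as np

def walksat(A, b, rng=np.random.default_rng(0), max_iters=10**6):
    """Vanilla WalkSAT over {x in {0,1}^n : A x <= b} (sketch).  While
    some constraint is violated, pick one and flip a uniformly random
    variable in its support."""
    x = rng.integers(0, 2, size=A.shape[1])
    for _ in range(max_iters):
        violated = np.flatnonzero(A @ x > b)
        if violated.size == 0:
            return x                          # feasible point found
        row = A[rng.choice(violated)]         # a violated constraint
        i = rng.choice(np.flatnonzero(row))   # random index in its support
        x[i] = 1 - x[i]                       # flip it
    return None
\end{verbatim}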
Based on this \textsc{WalkSAT}\xspace algorithm, to obtain a randomization step for mixed-binary problems we are going to work on the projection onto the binary variables, so instead of looking for violated constraints we look for a \emph{certificate of infeasibility} in the space of binary variables. Importantly, we use a \textbf{minimal} certificate, which makes sure that for decomposable instances the certificate does not ``mix'' the different blocks of the problem.
Now we proceed with a formal description of the proposed randomization step $\perturb{\ell}$. Consider a mixed-binary set
\begin{gather}
\hspace{-3pt}P^I = P \cap (\{0,1\}^n \times \R^d), \textrm{ where }
P = \{(x,y) \in [0,1]^n \times \R^d : Ax + By \le b\}.\label{eq:MBS}
\end{gather}
We use $\proj_{bin} P$ to denote the projection of $P$ onto the binary variables $x$.
\begin{definition}[Projected certificates]
Given a mixed-binary set $P^I$ as in \eqref{eq:MBS} and a point $(\bar{x}, \bar{y}) \in \{0,1\}^n \times \R^d$ such that $\bar{x} \notin \proj_{bin} P$, a \emph{projected certificate} for $\bar{x}$ is an inequality $\lambda A x + \lambda B y \le \lambda b$ with $\lambda \in \R^m_+$ such that: (i) $\bar{x}$ does not satisfy this inequality; (ii) $\lambda B = 0$. A \emph{minimal} projected certificate is one where the support of the vector $\lambda$ is minimal (i.e. the certificate uses a minimal set of the original inequalities).
\end{definition}
Standard Fourier-Motzkin theory guarantees us that projected certificates always exist, and furthermore Caratheodory's theorem~\cite{SchrijverIntBook} guarantees that minimal projected certificates use at most $d+1$ inequalities. Together these give the following lemma.
\begin{lemma} \label{lemma:minimalCert}
Consider a mixed-binary set $P^I$ as in \eqref{eq:MBS} and a point $(\bar{x}, \bar{y}) \in \{0,1\}^n \times \R^d$ such that $\bar{x} \notin \proj_{bin} P$. There exists a vector $\lambda \in \R^m_+$ with support of size at most $d+1$ such that $\lambda Ax + \lambda B y \le \lambda b$ is a minimal projected certificate for $\bar{x}$. Moreover, this minimal projected certificate can be obtained in polynomial-time (by solving a suitable LP).
\end{lemma}
For completeness, see Appendix \ref{app:minPolytime} for a proof of Lemma~\ref{lemma:minimalCert}.
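As an illustration of how such a certificate can be found in practice, the following Python sketch solves one plausible LP (not necessarily the one used in Appendix \ref{app:minPolytime}): it normalizes the violation to 1 and minimizes $\sum_i \lambda_i$, so that a basic optimal solution has at most $d+1$ nonzero multipliers; inclusion-wise minimality of the support would require the more careful procedure of the appendix. The helper name and the use of \texttt{scipy} are assumptions of ours.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def projected_certificate(A, B, b, x_bar):
    """Look for lambda >= 0 with lambda B = 0 and lambda A x_bar > lambda b
    (a projected certificate for x_bar); returns None if x_bar lies in
    proj_bin(P).  A basic optimal solution has at most d+1 nonzeros,
    matching the support bound of the lemma, though whether the solver
    returns a basic solution depends on the method used."""
    m = A.shape[0]
    viol = A @ x_bar - b                   # lambda @ viol is the violation
    A_eq = np.vstack([B.T, viol.reshape(1, -1)])
    b_eq = np.zeros(B.shape[1] + 1)
    b_eq[-1] = 1.0                         # normalize the violation to 1
    res = linprog(c=np.ones(m), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * m, method="highs")
    return res.x if res.success else None
\end{verbatim}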
Now we can formally define the randomization step $\perturb{\ell}$ (notice that the condition $\lambda B = 0$ guarantees that a projected certificate has the form $ax \le b$).
\begin{algorithm}[H] \caption{$\perturb{\ell}(\bar{x})$}
\begin{algorithmic}[1]
\State //Assumes that $\bar{x}$ does not belong to $\proj_{bin} P$
\State Let $a x \le b$ be a minimal projected certificate for $\bar{x}$ \label{algo:wStep1}
\State Sample $\ell$ indices from the support $\textrm{supp}(a)$ uniformly and independently, let $\mathbf{I}$ be the set of indices obtained \label{algo:randCoord}
\State (Flip coordinates) For all $i \in \mathbf{I}$, set $\bar{x}_i \leftarrow 1 - \bar{x}_i$ \label{algo:wStep2}
\end{algorithmic}
\end{algorithm}
Note that in the pure-binary case with $\ell=1$, this reduces to the main step executed during \textsc{WalkSAT}\xspace. We remark that the flexibility of introducing the parameter $\ell$ will be needed in Section~\ref{sec:toyFPWalkSAT}.
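Once the left-hand side $a$ of the certificate is available, the step itself is a few lines of Python; the following sketch (the helper name and the random generator argument are ours) mirrors the sampling and flipping in the algorithm above.
\begin{verbatim}
import numpy as np

def perturb(x_bar, a, ell, rng=np.random.default_rng(0)):
    """perturb_ell (sketch): given the left-hand side a of a projected
    certificate a x <= b violated by x_bar, sample ell indices uniformly
    and independently from supp(a) and flip those coordinates."""
    support = np.flatnonzero(a)
    chosen = set(rng.choice(support, size=ell, replace=True))
    x_new = x_bar.copy()
    for i in chosen:
        x_new[i] = 1 - x_new[i]
    return x_new
\end{verbatim}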
\subsection{Analyzing the behavior of $\perturb{\ell}$}
In this section we consider the behavior of the algorithm $\textsc{mbWalkSAT}\xspace$ that tries to find a feasible mixed-binary solution by just repeatedly applying the randomization step $\perturb{\ell}$.
\begin{algorithm}[H] \caption{\textsc{mbWalkSAT}\xspace}
\begin{algorithmic}[1]
\State \textbf{input parameter:} Integer $\ell \ge 1$
\State (Starting solution) Consider any mixed-binary point $(\bar{\mathbf{x}}, \bar{\mathbf{y}}) \in \{0,1\}^n \times \R^d$
\Loop
\If{$\bar{\mathbf{x}}$ does not belong to $\proj_{bin} P$}
\State $\perturb{\ell}(\bar{\mathbf{x}})$
\Else
\State (Output feasible lift of $\bar{\mathbf{x}}$) Find $\bar{\mathbf{y}} \in \R^d$ such that $(\bar{\mathbf{x}}, \bar{\mathbf{y}}) \in P$, return $(\bar{\mathbf{x}},\bar{\mathbf{y}})$
\EndIf
\EndLoop
\end{algorithmic}
\end{algorithm}
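A Python sketch of this loop is given below; it reuses the \texttt{projected\_certificate} and \texttt{perturb} helpers sketched earlier, and tests membership in $\proj_{bin} P$ with one additional feasibility LP. All names and the use of \texttt{scipy} are illustrative assumptions rather than part of any reference implementation.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def in_proj_bin(A, B, b, x_bar):
    """Check x_bar in proj_bin(P): is {y : B y <= b - A x_bar} nonempty?
    (one feasibility LP; sketch)"""
    res = linprog(c=np.zeros(B.shape[1]), A_ub=B, b_ub=b - A @ x_bar,
                  bounds=[(None, None)] * B.shape[1], method="highs")
    return res.success

def mb_walksat(A, B, b, ell=1, rng=np.random.default_rng(0), max_iters=10**5):
    """Sketch of mbWalkSAT: repeat the randomization step until the
    binary part of the iterate lies in proj_bin(P)."""
    x = rng.integers(0, 2, size=A.shape[1])
    for _ in range(max_iters):
        if in_proj_bin(A, B, b, x):
            return x              # a feasible lift y can be found with one more LP
        lam = projected_certificate(A, B, b, x)
        x = perturb(x, lam @ A, ell, rng)   # certificate reads (lam A) x <= lam b
    return None
\end{verbatim}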
As mentioned in the introduction, we show that this algorithm finds a feasible solution if one exists, and that its running time improves with the sparsity and decomposability of the instance. Recall the definition of a decomposable mixed-binary problem from equation \eqref{eq:decomp}, and let $\textrm{certSupp}_i$ denote the maximum support size of a minimal projected certificate for the instance $P^I_i$ which consists only of the $i$th block.
\begin{theorem}[Theorem \ref{thm:decomp} restated] \label{thm:decompR}
Consider a feasible decomposable mixed-binary set as in equation \eqref{eq:decomp}. Then with probability at least $1-\delta$, \textsc{mbWalkSAT}\xspace with parameter $\ell=1$ returns a feasible solution within $T = \lceil\ln(k/\delta)\rceil\, \sum_i n_i \, 2^{n_i \log \textrm{certSupp}_i}$ iterations.
\end{theorem}
In light of Lemma \ref{lemma:minimalCert}, if each constraint in $P_i$ has at most $s_i$ integer variables, we have $\textrm{certSupp}_i \le \min\{s_i \cdot (d_i+1), n_i\}$, and thus this statement indeed implies Theorem \ref{thm:decomp} stated in the introduction. We remark that similar guarantees can be obtained for general $\ell$, but we focus on the case $\ell = 1$ to simplify the exposition.
The high-level idea of the proof of Theorem \ref{thm:decompR} is the following:
\begin{enumerate}
\item First we show that if we run \textsc{mbWalkSAT}\xspace over a single block $P^I_i$, then with high probability the algorithm returns a feasible solution within $n_i\ 2^{n_i \log \textrm{certSupp}_i} \cdot \ln(1/\delta)$ iterations. This analysis is inspired by the one given by Sch\"oning~\cite{Schoning99} and argues that with a small, but non-zero, probability the iteration of the algorithm makes the iterate $\bar{\bx}$ closer (in Hamming distance) to a fixed solution $x^*$ for the instance.
\medskip
\item Next, we show that when running \textsc{mbWalkSAT}\xspace over the whole decomposable instance each iteration only depends on \textbf{one} of the blocks $P^I_i$; this uses the minimality of the certificates. So in effect the execution of \textsc{mbWalkSAT}\xspace can be split up into independent executions over each block, and thus we can put together the analysis from Item 1 for all blocks with a union bound to obtain the result.
\end{enumerate}
For the remainder of the section we prove Theorem \ref{thm:decompR}. We start by considering a general mixed-binary set as in equation \eqref{eq:MBS}. Given such mixed-binary set $P^I$, we use $\textrm{certSupp} = \textrm{certSupp}(P^I)$ to denote the maximum support size of all minimal projected certificates.
\begin{theorem} \label{thm:walkSATMI2}
Consider the execution of \textsc{mbWalkSAT}\xspace over a feasible mixed-binary program as in equation \eqref{eq:MBS}. The probability that \textsc{mbWalkSAT}\xspace does not find a feasible solution within the first $T$ iterations is at most $(1-p)^{\lfloor T/n \rfloor}$, where $p = \textrm{certSupp}^{-n}$. In particular, for $T = n \cdot 2^{n \log(\textrm{certSupp})} \cdot \lceil\ln(1/\delta)\rceil$ this probability is at most $\delta$ (this follows from the inequality $(1-x) \le e^{-x}$ valid for $x \ge 0$).
\end{theorem}
\begin{proof}
Consider a fixed solution $x^* \in \proj_{bin} P$. To analyze \textsc{mbWalkSAT}\xspace, we only keep track of the Hamming distance of the (random) iterate $\bar{\mathbf{x}}$ to $x^*$; let $\bX_t$ denote this (random) distance at iteration $t$, for $t \ge 1$. If at some point this distance vanishes, i.e. $\bX_t = 0$, we know that $\bar{\mathbf{x}} = x^*$ and thus $\bar{\mathbf{x}} \in \proj_{bin} P$; at this point the algorithm returns a feasible solution for $P^I$.
Fix an iteration $t$. To understand the probability that $\bX_t = 0$, suppose that in this iteration $\bar{\mathbf{x}}$ does not belong to $\proj_{bin} P$, and let $a x \le b$ be the minimal projected certificate for it used in $\perturb{1}$. Since the feasible point $x^*$ satisfies the inequality $a x \le b$ but $\bar{\mathbf{x}}$ does not, there must be at least one index $\mathbf{i}^*$ in the support of $a$ where $x^*$ and $\bar{\mathbf{x}}$ differ. Then if algorithm \textsc{mbWalkSAT}\xspace makes a ``lucky move'' and chooses $\mathbf{I} = \{\mathbf{i}^*\}$ in Line \ref{algo:randCoord}, the modified solution after flipping this coordinate (the next line of the algorithm) is one unit closer to $x^*$ in Hamming distance, hence $\bX_{t+1} = \bX_t - 1$. Moreover, since the single index in $\mathbf{I}$ is sampled uniformly from $\textrm{supp}(a)$, the probability of choosing $\mathbf{I} = \{\mathbf{i}^*\}$ is $1/|\textrm{supp}(a)| \ge 1/\textrm{certSupp}$.
Therefore, if we start at iteration $t$ and for all the next $\bX_t$ iterations either the iterate belongs to $\proj_{bin} P$ or the algorithm makes a ``lucky move'', it terminates by time $t + \bX_t$. Thus, with probability at least $(1/\textrm{certSupp})^{\bX_t} \ge (1/\textrm{certSupp})^n = p$ the algorithm terminates by time $t + \bX_t \le t + n$.
To conclude the proof, let $\alpha = \lfloor T/n \rfloor$ and call iterations $i \cdot n$, \ldots, $(i+1) \cdot n -1$ the $i$-th block of iterations. If the algorithm has not terminated by iteration $i \cdot n - 1$, then with probability at least $p$ it terminates within the next $n$ iterations, and hence within the $i$-th block. Putting these bounds together for all $\alpha$ blocks, the probability that the algorithm \emph{does not} stop by the end of block $\alpha$ is at most $(1-p)^\alpha$. This concludes the proof.
\end{proof}
Going back to decomposable problems, we now make formal the claim that minimal projected certificates for decomposable mixed-binary sets do not mix the constraints from different blocks. Notice that projected certificates for a decomposable mixed-binary set as in equation \eqref{eq:decomp} have the form $\sum_i \lambda^i A^i x^i \le \sum_i \lambda^i b^i$ and $\lambda^i B^i = 0$ for all $i \in [k]$.
\begin{lemma} \label{lemma:minCertificate}
Consider a decomposable mixed-binary set as in equation \eqref{eq:decomp}. Consider a point $\bar{x} \notin \proj_{bin} P$ and let $\sum_i \lambda^i A^i x^i \le \sum_i \lambda^i b^i$ be a minimal projected certificate for $\bar{x}$. Then this certificate uses only inequalities from one block $P_j$, i.e. there is $j$ such that $\lambda^i = 0$ for all $i \neq j$. Moreover, $\bar{x}^j \notin \proj_{bin} P_j$.
\end{lemma}
\begin{proof}
Let $\bar{x} = (\bar{x}^1, \bar{x}^2, \ldots, \bar{x}^k)$ and call the certificate $(ax \le b) \triangleq (\sum_i \lambda^i A^i x^i \le \sum_i \lambda^i b^i)$. By definition of projected certificate we have $\sum_i \lambda^i A^i \bar{x}^i > \sum_i \lambda^i b^i$, and thus by linearity there must be an index $j$ such that $\lambda^{j} A^{j} \bar{x}^{j} > \lambda^{j} b^{j}$. Moreover, as remarked earlier, decomposability implies that the certificate satisfies $\lambda^i B^i = 0$ for all $i$, so in particular for $j$. Thus, the inequality $\lambda^{j} (A^{j}, B^{j}) (x^{j}, y^{j}) \le \lambda^j b^{j}$ obtained by combining only the inequalities from $P_{j}$ is a projected certificate for $\bar{x}$. The minimality of the original certificate $ax \le b$ implies that $\lambda^i = 0$ for all $i \neq j$. This concludes the first part of the proof.
Moreover, since $\lambda^{j} A^{j} \bar{x}^{j} > \lambda^{j} b^{j}$ and $\lambda^j B^j = 0$ we have that $\lambda^j (A^j, B^j) (\bar{x}^j, y) > \lambda^j b^j$ for all $y$, and hence $\bar{x}^j$ does not belong to $\proj_{bin} P_j$. This concludes the proof.
\end{proof}
We can finally prove the desired theorem.
\begin{proof}[Proof of Theorem \ref{thm:decompR}.] We use the natural decomposition $\bar{\mathbf{x}} = (\bar{\mathbf{x}}^1, \ldots, \bar{\mathbf{x}}^k) \in \{0,1\}^{n_1} \times \ldots \times \{0,1\}^{n_k}$ of the iterates of the algorithm. From Lemma \ref{lemma:minCertificate}, we have that, in every realization of the algorithm's randomness, each iteration of \textsc{mbWalkSAT}\xspace is associated with just one of the blocks $P^I_j$'s, namely the $P^I_j$ containing all the inequalities in the minimal projected certificate used in this iteration; let $\mathbf{J}_t \in [k]$ denote the (random) index $j$ of the block associated to iteration $t$. Notice that at iteration $t$, only the binary variables $x^{\mathbf{J}_t}$ can be modified by the algorithm.
Let $T_i = n_i\, 2^{n_i \log \textrm{certSupp}_i}\,\lceil \ln(k/\delta) \rceil$. Applying the proof of Theorem \ref{thm:walkSATMI2} to the iterations $\{t : \mathbf{J}_t = i\}$ with index $i$, we get that with probability at least $1-\frac{\delta}{k}$ the algorithm finds some $\bar{\mathbf{x}}^i$ in $\proj_{bin} P_i$ within the first $T_i$ of these iterations. Moreover, after the algorithm finds such a point, it does not change it (that is, the remaining iterations have index $\mathbf{J}_t \neq i$, due to the second part of Lemma \ref{lemma:minCertificate}).
Therefore, by taking a union bound we get that with probability at least $1 - \delta$, \emph{for all} $i \in [k]$ the algorithm finds $\bar{\mathbf{x}}^i \in \proj_{bin} P_i$ within the first $T_i$ iterations with index $i$ (for a total of $\sum_i T_i = T$ iterations). When this happens, the total solution $\bar{\mathbf{x}}$ belongs to $\proj_{bin} P$ and the algorithm returns. This concludes the proof.
\end{proof}
\section{Randomization step $\perturb{\ell}$ within Feasibility Pump}\label{sec:toyFPWalkSAT}
In this section we incorporate the randomization step $\perturb{\ell}$ into the Na\"ive Feasibility Pump, the resulting algorithm being called \textsc{WFP}\xspace. We describe this algorithm in a slightly different way and using a notation more convenient for the analysis.
Consider a mixed-binary set $P^I$ as in equation \eqref{eq:MBS}. Given a 0/1 point $\wt{x} \in \{0,1\}^n$, let $\textrm{$\ell_1$-proj}(P, \wt{x})$ denote a point $(x,y)$ in $P$ where $\|\wt{x} - x\|_1$ is as small as possible. Also, for a vector $v \in [0,1]^p$, we use $\textrm{round}(v)$ to denote the vector obtained by rounding each component of $v$ to the closest integer; we use the convention that $\frac{1}{2}$ is rounded to 1, but any consistent rounding would suffice. Notice that operations `$\textrm{$\ell_1$-proj}$' and `$\textrm{round}$' correspond precisely to Steps \ref{alg:naiveProj} and \ref{alg:naiveRound} in the Na\"ive Feasibility Pump. With this notation, algorithm \textsc{WFP}\xspace can be described as follows.
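Both operations admit short sketches in Python. The LP below encodes the objective $\sum_{\{j : \wt{x}_j = 0\}} x_j + \sum_{\{j : \wt{x}_j = 1\}} (1-x_j)$ up to its constant term, and the rounding follows the convention that $\frac{1}{2}$ is rounded to 1; the helper names and the use of \texttt{scipy} are our own illustrative choices.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def l1_proj(A, B, b, x_tilde):
    """l1-proj(P, x_tilde) (sketch): for a 0/1 point x_tilde the objective
    sum_{x_tilde_j=0} x_j + sum_{x_tilde_j=1} (1-x_j) equals, up to a
    constant, (1 - 2*x_tilde) . x, so one LP over P suffices."""
    n, d = A.shape[1], B.shape[1]
    c = np.concatenate([1.0 - 2.0 * x_tilde, np.zeros(d)])
    res = linprog(c, A_ub=np.hstack([A, B]), b_ub=b,
                  bounds=[(0, 1)] * n + [(None, None)] * d, method="highs")
    return res.x[:n], res.x[n:]

def round_binary(v):
    """Round componentwise to the nearest integer, with 1/2 rounded to 1."""
    return np.floor(np.asarray(v) + 0.5).astype(int)
\end{verbatim}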
\begin{algorithm}[h] \caption{\textsc{WFP}\xspace} \label{alg:WFP}
\begin{algorithmic}[1]
\State \textbf{input parameter:} integer $\ell \ge 1$
\smallskip
\State Let $(\bar{x}^0, \bar{y}^0)$ be an optimal solution of the LP relaxation
\State Let $\wt{x}^0 = \textrm{round}(\bar{x}^0)$
\For{t = 1,2,\ldots}
\State $(\bar{\mathbf{x}}^t, \bar{\mathbf{y}}^t) = \textrm{$\ell_1$-proj}(P, \widetilde{\x}^{t-1})$ \label{alg:lproj}
\State $\widetilde{\x}^t = \textrm{round}(\bar{\mathbf{x}}^t)$ \label{alg:round}
\smallskip
\If{$(\widetilde{\x}^t, \bar{\mathbf{y}}^t) \in P$} \Comment{equivalently, $\widetilde{\x}^t \in \proj_{bin}(P)$}
\State Return $(\widetilde{\x}^t, \bar{\mathbf{y}}^t)$ \label{algo:WFPRet}
\EndIf
\smallskip
\If{$\widetilde{\x}^t = \widetilde{\x}^{t-1}$} \Comment{iterations have stalled}
\State $\widetilde{\x}^t = \perturb{\ell}(\widetilde{\x}^t)$
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
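For completeness, a Python sketch of this main loop, reusing the helpers \texttt{l1\_proj}, \texttt{round\_binary}, \texttt{in\_proj\_bin}, \texttt{projected\_certificate} and \texttt{perturb} sketched earlier, is given below; it is illustrative only and does not reproduce the engineering of an actual Feasibility Pump code.
\begin{verbatim}
import numpy as np

def wfp(A, B, b, x_bar0, ell=2, rng=np.random.default_rng(0), max_iters=10**4):
    """Main loop of WFP (sketch); x_bar0 is the binary part of an optimal
    LP-relaxation solution.  Reuses the helper sketches above."""
    x_tilde = round_binary(x_bar0)
    for _ in range(max_iters):
        x_bar, _ = l1_proj(A, B, b, x_tilde)       # projection step
        new_tilde = round_binary(x_bar)            # rounding step
        if in_proj_bin(A, B, b, new_tilde):
            return new_tilde                       # a feasible y is one more LP away
        if np.array_equal(new_tilde, x_tilde):     # iterations have stalled
            lam = projected_certificate(A, B, b, new_tilde)
            new_tilde = perturb(new_tilde, lam @ A, ell, rng)
        x_tilde = new_tilde
    return None
\end{verbatim}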
Note that stalling in the above algorithm is determined using the condition $\widetilde{\x}^t = \widetilde{\x}^{t-1}$. What about `long cycle' stalling, that is, $\widetilde{\x}^t = \widetilde{\x}^{t'}$ where $t' < t -1$ but $\widetilde{\x}^{t'}, \dots, \widetilde{\x}^{t -1}$ are all distinct binary vectors? As it turns out (assuming no numerical errors), a consistent rounding rule implies that stalling will always occur with cycles of length two.
\begin{theorem}\label{thm:sidenote}
With consistent rounding, long cycles cannot occur.
\end{theorem}
We present a proof of Theorem \ref{thm:sidenote} in Appendix \ref{sec:side}.
For the remainder of the section, we analyze the behavior of algorithm \textsc{WFP}\xspace on separable subset-sum instances, proving Theorem \ref{thm:WFP} stated in the introduction.
\subsection{Running time of \textsc{WFP}\xspace for separable subset-sum instances: Proof of Theorem \ref{thm:WFP}}
Notice that the operators `$\textrm{$\ell_1$-proj}$' and `$\textrm{round}$' used by \textsc{WFP}\xspace also act on each block independently, namely given a point $x = (x^1, \ldots, x^k) \in \R^{n_1} \times \ldots \times \R^{n_k}$, if $(\check{x}^1, \ldots, \check{x}^k) = \textrm{$\ell_1$-proj}(P, x)$ then $\check{x}^i = \textrm{$\ell_1$-proj}(P_i, x^i)$ for all $i \in [k]$, and similarly for `$\textrm{round}$'.
Therefore, as in the proof of Theorem \ref{thm:decompR}, it suffices to analyze the execution of algorithm \textsc{WFP}\xspace over a single block/inequality of the separable subset-sum problem. More precisely, it suffices to prove the following guarantee for $\textsc{WFP}\xspace$ on a general subset-sum instance.
\begin{theorem} \label{thm:WFPGen}
Consider a feasible subset-sum problem $P \subseteq \R^n$. Then for every $T \ge 1$, the probability that \textsc{WFP}\xspace with $\ell = 2$ does not find a feasible solution within the first $2T$ iterations is at most $(1-p)^{\lfloor T/n \rfloor}$, where $p = (1/n^2)^n$. In particular, for $T = n \cdot 2^{2n \log n} \cdot \lceil\ln(1/\delta)\rceil$ this probability is at most $\delta$.
\end{theorem}
The high-level idea of the proof of this theorem is the following. We use a similar strategy as before, where we consider a fixed feasible solution $x^*$ and track its distance to the iterates $\wt{\bx}^t$ generated by algorithm \textsc{WFP}\xspace. However, while again the randomization step $\perturb{2}$ brings $\wt{\bx}^t$ closer to $x^*$ with small but non-zero probability, the issue is that the projections `$\textrm{$\ell_1$-proj}$' and `$\textrm{round}$' in the next iterations could send the iterate even further from $x^*$. To analyze the algorithm we then use the structure of subset-sum instances to: 1) First control the combination `$\textrm{$\ell_1$-proj} + \textrm{round}$' in Steps \ref{alg:lproj} and \ref{alg:round}, showing that in this case they are \emph{idempotent}, namely applying them once or repeatedly yields the same effect (Lemma \ref{lemma:stabAltProj}); 2) Strengthen the analysis of Theorem \ref{thm:decompR} to show that a round of $\perturb{2}$ \emph{plus} `$\textrm{$\ell_1$-proj} + \textrm{round}$' still has a non-zero probability of generating a point closer to $x^*$ (Lemma \ref{lemma:2choice}). For this, it will be actually important that we use $\ell = 2$ in algorithm \textsc{WFP}\xspace (actually $\ell \ge 2$ suffices).
For the remainder of the section we prove Theorem \ref{thm:WFPGen}. To simplify the notation we omit the polytope $P$ from the notation of $\textrm{$\ell_1$-proj}$. We assume that our subset-sum problem $P = \{x \in [0,1]^n : ax = b\}$ is such that \emph{all} coordinates of $a$ are positive, since components with $a_i = 0$ do not affect the problem (more precisely, after the first iteration of the algorithm, the value of $\wt{\bx}^t_i$ is set to 0 or 1 and does not change anymore, and this value does not affect the feasibility of the solutions $\wt{\bx}^t$'s). Also remember that subset-sum problems only have binary variables.
Given a point $\wt{x} \in \{0,1\}^n$, let $\textrm{AltProj}(\wt{x}) \in \{0,1\}^n$ be the result of applying to $\wt{x}$ first $\textrm{$\ell_1$-proj}(.)$ and then $\textrm{round}(.)$. Notice that if $\wt{x}$ belongs to $P$, then $\textrm{AltProj}(\wt{x}) = \wt{x}$. Then algorithm \textsc{WFP}\xspace can be thought of as performing an $\textrm{AltProj}$ operation, then checking if the iterate obtained either belongs to $P$ (in which case it exits) or if it equals the previous iterate (in which case it applies $\perturb{2}$); if neither of these occurs, then another $\textrm{AltProj}$ operation is performed. So an important component for analyzing this algorithm is getting good control over a sequence of $\textrm{AltProj}$ operations. For that, define the iterated operation $\textrm{AltProj}^t(\wt{x}) = \textrm{AltProj}\left(\textrm{AltProj}^{t-1}(\wt{x})\right)$ (with $\textrm{AltProj}^1 = \textrm{AltProj}$) and if the sequence $(\textrm{AltProj}^t(\wt{x}))$ stabilizes at a point, let $\textrm{AltProj}^*(\wt{x})$ denote this point.
A crucial observation, given by the next lemma, is that for subset-sum instances the operation of $\textrm{AltProj}$ is idempotent, namely it stabilizes after just one operation.
\begin{lemma} \label{lemma:stabAltProj}
Let $P$ be a subset-sum instance. Then for every $\wt{x} \in \{0,1\}^n$, $\textrm{AltProj}^*_P(\wt{x}) = \textrm{AltProj}_P(\wt{x})$.
\end{lemma}
\begin{proof}
Again to simplify the notation we omit the polyhedron $P$ when writing $\textrm{$\ell_1$-proj}$ and $\textrm{AltProj}$. Let $\bar{x}=\textrm{$\ell_1$-proj}(\wt{x})$ and recall it is an extreme point of $P$. Clearly, if $\wt{x} \in P$ then $\textrm{AltProj}(\wt{x})=\wt{x}$ and hence $\textrm{AltProj}^*(\wt{x}) = \textrm{AltProj}(\wt{x})$. Similarly, if $\bar{x}$ is a 0/1 point then $\textrm{AltProj}(\wt{x})=\bar{x}$, and again $\textrm{AltProj}^*(\wt{x}) = \textrm{AltProj}(\wt{x})$.
Thus, assume that $\wt{x}\notin P$ and $\bar{x}$ is not a 0/1 point. Since $\bar{x}$ is an extreme point of the subset-sum LP $P$ it has exactly 1 fractional coordinate, so by permuting indices we assume without loss of generality:
%
\begin{enumerate}
\item $\bar{x}_1 = \dots = \bar{x}_k = 1$.
\item $\bar{x}_{k + 1} \in (0, \ 1)$.
\item $\bar{x}_{k + 2}= \dots = \bar{x}_n = 0$
\item $a_{k + 2} \geq a_{k + 3}\geq \dots \geq a_{n}$.
\item $a_1 \leq a_2 \leq a_3 \leq \dots \leq a_{k}$.
\end{enumerate}
Now we look at the points obtained after applying $\textrm{round}(.)$ and $\textrm{$\ell_1$-proj}(.)$ to $\bar{x}$, namely let $\wt{x}' := \textrm{round}(\bar{x}) = \textrm{AltProj}(\wt{x})$ and let $\bar{x}' := \textrm{$\ell_1$-proj}(\wt{x}')$. Notice that $\bar{x}'$ is obtained by solving:
\begin{eqnarray}
\textup{min}~&\sum_{ \{j \,|\, \wt{x}'_j = 0\}}x_j + \sum_{ \{j \,|\, \wt{x}'_j = 1\}}(1 - x_j) \notag\\
\textup{s.t.}~& ax=b \label{eq:lproj}\\
&\ 0\le x\le 1.\notag
\end{eqnarray}
\medskip
\noindent \textbf{Case 1:} $\bar{x}_{k+1}<1/2$. Then $\wt{x}'_i=1$ for all $i \le k$, $\wt{x}'_i =0$ for all $i \ge k+1$; also notice $\wt{x}'\le \bar{x}$, and hence $a\wt{x}'<b$; thus $\bar{x}'$ is obtained from $\wt{x}'$ by increasing some components of 0 value. We have three subcases:
\begin{enumerate}
\item[a.] If $a_{k+1}>a_{k+2}$: then $a_{k+1}$ is the largest coordinate of $a$ where $\wt{x}'$ has value 0, so it follows from \eqref{eq:lproj} that $\bar{x}'$ is obtained from $\wt{x}'$ by raising its $(k+1)$-component from 0 to $\bar{x}_{k+1}$. Thus, $\bar{x}' = \bar{x}$, and hence $\textrm{AltProj}(\textrm{AltProj}(\wt{x})) = \textrm{round}(\bar{x}')$ equals $\textrm{round}(\bar{x}) = \textrm{AltProj}(\wt{x})$; this implies $\textrm{AltProj}^*(\wt{x}) = \textrm{AltProj}(\wt{x})$.
\medskip
\item[b.] If $a_{k+1} < a_{k+2}$: then $\bar{x}'$ is obtained from $\wt{x}'$ by raising its $(k+2)$-component to a value that is at most $\bar{x}_{k+1} < 1/2$. Now, $\textrm{round}(\bar{x}')=\wt{x}'$, so again we get $\textrm{AltProj}(\textrm{AltProj}(\wt{x})) = \textrm{round}(\bar{x}') = \wt{x}' = \textrm{AltProj}(\wt{x})$ and we are done.
\medskip
\item[c.] If $a_{k+1} = a_{k+2}$: Since $\bar{x}'$ is a vertex of the subset-sum LP $P$, again it only has 1 fractional component (either $k+1$ or $k+2$) and then it is easy to see that $\bar{x}'$ is equal to the one in either Case (a) or Case (b) above; thus the result also holds for this case.
\end{enumerate}
\noindent \textbf{Case 2:} $\bar{x}_{k+1} \ge 1/2$. Then $\wt{x}'$ is such that $\wt{x}'_i=1$ for all $i \le k+1$ and $\wt{x}'_i=0$ for all $i \ge k+2$; also notice $\wt{x}' \ge \bar{x}$ and hence $a\wt{x}'>b$.
Now, consider $\bar{x}'=\textrm{$\ell_1$-proj}(\wt{x}')$:
\begin{enumerate}
\item[a.] If $a_{k} < a_{k+1}$: This is analogous to Case 1a: $\bar{x}'$ is obtained by lowering the $(k+1)$-coordinate of $\wt{x}'$ from 1 to $\bar{x}_{k+1}$, and thus $\bar{x}' = \bar{x}$; the rest of the proof is identical to Case 1a.
\medskip
\item[b.] If $a_{k} > a_{k+1}$: In this case, $\bar{x}'$ is obtained by lowering the $k$-component of $\wt{x}'$. Since $a\bar{x}=a\bar{x}'=b$, and $k$ and $(k+1)$ are the only components where $\bar{x}$ and $\bar{x}'$ differ, we have: $a_k + a_{k+1} \bar{x}_{k+1} = a_k \bar{x}'_k + a_{k+1} $. Hence $\bar{x}'_k = 1-\frac{a_{k+1}}{a_k}(1-\bar{x}_{k+1})\ge 1/2$ and $\textrm{round}(\bar{x}') = \wt{x}'$; the rest of the proof is identical to Case 1b.
\medskip
\item[c.] If $a_{k} = a_{k+1}$: Identical to Case 1c.
\end{enumerate}
\end{proof}
Therefore, there is not much loss in looking at a ``compressed'' version of algorithm \textsc{WFP}\xspace that packs repeated applications of $\textrm{AltProj}$ until stalling happens into a single $\textrm{AltProj}^*$; more formally, we have the following algorithm (stated in the pure-binary case to simplify the notation).
\begin{algorithm}[h] \caption{\textsc{WFP}\xspace-Compressed} \label{alg:WFPComp}
\begin{algorithmic}[1]
\State \textbf{input parameter:} integer $\ell \ge 1$
\smallskip
\State Let $\bar{x}^0$ be an optimal solution of the LP relaxation
\State Let $\wt{z}^0 = \textrm{round}(\bar{x}^0)$
\For{$\tau$ = 1,2,\ldots}
\State $\wt{\bz}^{\tau} = \textrm{AltProj}^*(\wt{\bz}^{\tau - 1})$ \label{alg:lprojComp}
\smallskip
\If{$\wt{\bz}^{\tau} \in P$}
\State Return $\wt{\bz}^{\tau}$ \label{algo:WFPRetComp}
\EndIf
\smallskip
\State $\wt{\bz}^{\tau} = \perturb{\ell}(\wt{\bz}^{\tau})$
\EndFor
\end{algorithmic}
\end{algorithm}
Intuitively, Lemma \ref{lemma:stabAltProj} should imply that packing the repeated applications of $\textrm{AltProj}$ into a single $\textrm{AltProj}^*$ should not save more than 1 iteration. To see this more formally, assume that both algorithms use as starting point the same optimal solution of the LP, so $\wt{z}^0 = \wt{x}^0$. Now condition on a scenario where we have $\wt{\bz}^{\tau} = \wt{\mathbf{x}}^t$ at the beginning of iterations $\tau$ and $t$ of algorithms \textsc{WFP}\xspace-Compressed and \textsc{WFP}\xspace respectively (for $\tau, t \ge 1$). Then we claim that either both algorithms return at the current iteration, or $\wt{\bz}^{\tau + 1}$ has the same distribution as either $\wt{\mathbf{x}}^{t + 1}$ or $\wt{\mathbf{x}}^{t + 2}$ (at the beginning of their respective iterations): If $\wt{\bz}^{\tau} = \wt{\mathbf{x}}^t \in P$, then both algorithms return; if $\wt{\mathbf{x}}^t \notin P$ but $\wt{\mathbf{x}}^t = \wt{\mathbf{x}}^{t-1}$, then both algorithms \textsc{WFP}\xspace-Compressed and \textsc{WFP}\xspace employ $\perturb{2}$ over $\wt{\bz}^{\tau}=\wt{\mathbf{x}}^t$, in which case $\wt{\bz}^{\tau + 1}$ has the same distribution as $\wt{\mathbf{x}}^{t + 1}$; finally, if $\wt{\mathbf{x}}^t \neq \wt{\mathbf{x}}^{t-1}$, then \textsc{WFP}\xspace at the beginning of the next iteration will have $\wt{\mathbf{x}}^{t+1} = \textrm{AltProj}(\wt{\mathbf{x}}^t)$, which by Lemma \ref{lemma:stabAltProj} (and $t \ge 1$) equals $\wt{\mathbf{x}}^t$ itself, and so it will employ $\perturb{2}$ to $\wt{\mathbf{x}}^{t+1} = \wt{\mathbf{x}}^t$ and again we have that $\wt{\mathbf{x}}^{t+2}$ has the same distribution as $\wt{\bz}^{\tau + 1}$.
Therefore, since we can employ this argument to couple iterations $\le \tau$ of \textsc{WFP}\xspace-Compressed with iterations $\le 2\tau$ of \textsc{WFP}\xspace, we have the following result.
\begin{lemma} \label{lemma:coupling}
Consider the application of algorithms \textsc{WFP}\xspace and \textsc{WFP}\xspace-Compressed over the subset-sum problem $P$. Then the probability that algorithm \textsc{WFP}\xspace returns after at most $2T$ iterations is at least the probability that algorithm \textsc{WFP}\xspace-Compressed returns after at most $T$ iterations.
\end{lemma}
Therefore, it suffices to upper bound the number of iterations of \textsc{WFP}\xspace-Compressed until it returns. To avoid ambiguity, let $\bz^{\tau}$ be the value of $\wt{\bz}^{\tau}$ at the \emph{beginning} of iteration $\tau$ of \textsc{WFP}\xspace-Compressed. Notice that $z^1 = \textrm{AltProj}^*(\wt{x}^0)$, and $\bz^{\tau+1} = \textrm{AltProj}^*(\perturb{2}(\bz^\tau))$ for $\tau \ge 1$. It suffices to show that with probability at least $1 - (1-p)^{\lfloor T/n \rfloor}$, there is $\tau \le T$ such that $\bz^\tau$ belongs to $P$.
To do so, for $\wt{x} \in \{0,1\}^n$ and $I \subseteq [n]$ let $\textrm{flip}(\wt{x}, I)$ denote the 0/1 vector obtained starting from $\wt{x}$ and flipping the value of all coordinates that belong to $I$. Notice that (up to scaling) the only possible projected certificates for our subset-sum problem are $ax \ge b$ and $ax \le b$.
Since we have assumed that the vector $a$ has full support, it follows that on this problem $\perturb{2}(\wt{x}) = \textrm{flip}(\wt{x}, \mathbf{I})$ for $\mathbf{I}$ being the set obtained by sampling independently two indices uniformly from $[n]$.
The next lemma then shows that there is always a ``lucky choice'' of set $\mathbf{I}$ in $\perturb{2}(\bz^\tau)$ that brings $\bz^{\tau + 1} = \textrm{AltProj}^*(\perturb{2}(\bz^\tau))$ closer to a fixed solution $x^*$ to the subset-sum problem.
The following definition is convenient.
\begin{definition}\label{defn:stall}
A point $\wt{x} \in \{0, 1\}^n$ is called a stalling solution if $\textrm{AltProj}(\wt{x}) = \wt{x}$.
\end{definition}
\begin{lemma} \label{lemma:2choice}
Let $x^* \in \{0,1\}^n$ be a feasible solution to the subset-sum problem. Consider $\wt{x} \in \{0,1\}^n$ with $a \wt{x} \neq b$ that satisfies the fixed point condition $\textrm{AltProj}(\wt{x}) = \wt{x}$. Then there is a set $I \subseteq [n]$ of size at most 2 such that the point $x' = \textrm{AltProj}^*_P(\textrm{flip}(\wt{x}, I))$ is closer to $x^*$ than $\wt{x}$, namely $\|x' - x^*\|_0 \le \|\wt{x} - x^*\|_0 - 1$.
\end{lemma}
\begin{proof}
Again to simplify the notation we omit $P$ from $\textrm{$\ell_1$-proj}$ and $\textrm{AltProj}$, and use $\textrm{flip}(\wt{x}, j)$ instead of $\textrm{flip}(\wt{x}, \{j\})$ in the singleton case.
We start with a couple of claims.
\paragraph{Claim 1} Suppose $\wt{x} \in \{0,1\}^n$ is a stalling point. If $a \wt{x} < b$, then there is $k \notin \textrm{supp}(\wt{x})$ such that $\textrm{$\ell_1$-proj}(\wt{x})_i = \wt{x}_i$ for all $i \neq k$, and $\textrm{$\ell_1$-proj}(\wt{x})_k \in (0,\frac{1}{2})$. Similarly, if $a \wt{x} > b$, then there is $k \in \textrm{supp}(\wt{x})$ such that $\textrm{$\ell_1$-proj}(\wt{x})_i = \wt{x}_i$ for all $i \neq k$, and $\textrm{$\ell_1$-proj}(\wt{x})_k \in [\frac{1}{2}, 1)$.
\begin{proof}[Proof of Claim 1]
We only prove the first statement, the proof of the second is completely analogous. Since $\wt{x}$ is stalling we have that $\textrm{round}(\textrm{$\ell_1$-proj}(\wt{x})) = \wt{x}$, and since $\textrm{$\ell_1$-proj}(\wt{x})$ is an extreme point of the subset-sum problem $P$ it has at most 1 fractional component, and hence only differs in one component $k$ from $$\textrm{round}(\textrm{$\ell_1$-proj}(\wt{x})) = \wt{x}.$$ Since $a \cdot \textrm{$\ell_1$-proj}(\wt{x}) = b > a \cdot \wt{x}$, we have that $\wt{x}_k = 0$ and $\textrm{$\ell_1$-proj}(\wt{x})_k > 0$; since $\textrm{round}(\textrm{$\ell_1$-proj}(\wt{x})_k) = \wt{x}_k = 0$, we have $\textrm{$\ell_1$-proj}(\wt{x})_k < \frac{1}{2}$.
\end{proof}
\paragraph{Claim 2}
Consider a point $\wt{x} \in \{0,1\}^n$.
\begin{enumerate}
\item If the objective value of \eqref{eq:lproj} is strictly less than $\frac{1}{2}$, then $\textrm{AltProj}(\wt{x}) = \wt{x}$.
\item If the objective value of \eqref{eq:lproj} is strictly less than 1, then $\|\textrm{AltProj}(\wt{x}) - \wt{x}\|_0 \le 1$.
\end{enumerate}
\begin{proof}[Proof of Claim 2]
Let $\bar{x} = \textrm{$\ell_1$-proj}(\wt{x})$ be an optimal solution for \eqref{eq:lproj}. Proof of Part 1: the assumption implies that $|\bar{x}_i - \wt{x}_i| < \frac{1}{2}$ for all $i$, which directly implies that $\textrm{AltProj}(\wt{x}) = \textrm{round}(\bar{x}) = \wt{x}$.
Proof of Part 2: the assumption implies that there can be at most one index $j$ with $|\bar{x}_j - \wt{x}_j|\geq \frac{1}{2}$, which implies that for all $i \neq j$, $\textrm{AltProj}(\wt{x})_i = \textrm{round}(\bar{x}_i) = \wt{x}_i$ and the result follows.
\end{proof}
Now we are ready to present the proof of Lemma \ref{lemma:2choice}. Let $x^*$ and $\tilde{x}$ be as in the statement of the Lemma.
From Lemma \ref{lemma:stabAltProj} we know that $$\textrm{AltProj}^*(\textrm{flip}(\wt{x}, J)) = \textrm{AltProj}(\textrm{flip}(\wt{x}, J)),$$ so it suffices to work with the right-hand side instead. Since $\wt{x}\neq x^*$
we have $\textrm{supp}(\wt{x}) \neq \textrm{supp}(x^*)$. We separate the proof in three cases depending on the relationship between these supports.
\medskip \noindent \textbf{Case 1:} $\textrm{supp}(\wt{x}) \subsetneq \textrm{supp}(x^*)$: Pick any $j \in \textrm{supp}(x^*) \setminus \textrm{supp}(\wt{x})$ and notice that $\|\textrm{flip}(\wt{x},j) - x^*\|_0 = \|\wt{x} - x^*\|_0 - 1$. Notice that both $\textrm{supp}(\wt{x})$ and $\textrm{supp}(\textrm{flip}(\wt{x},j))$ are
contained in the support of $x^*$, and hence we have $a \wt{x} \le b$ and $a \cdot \textrm{flip}(\wt{x},j) \le b$. Moreover, since $\textrm{flip}(\wt{x},j) \ge \wt{x}$, it is easy to see that the optimal value of \eqref{eq:lproj} for $\textrm{flip}(\wt{x}, j)$ is \emph{strictly less than} that for $\wt{x}$ (we need to raise fewer variables to make the point satisfy $ax = b$), which by Claim 1 is at most $\frac{1}{2}$. Thus, employing Part 1 of Claim 2 to $\textrm{flip}(\wt{x},j)$ gives that $\textrm{AltProj}(\textrm{flip}(\wt{x},j)) = \textrm{flip}(\wt{x},j)$, which is the desired point closer to $x^*$.
\medskip \noindent \textbf{Case 2:} $\textrm{supp}(x^*) \subsetneq \textrm{supp}(\wt{x})$: The proof is the same as above, with the only change that we take $j \in \textrm{supp}(\wt{x}) \setminus \textrm{supp}(x^*)$.
\medskip \noindent \textbf{Case 3:} The supports $\textrm{supp}(x^*)$ and $\textrm{supp}(\wt{x})$ are not contained in one another. In this case $a\wt{x}$ can be either $< b$ or $>b$:
\begin{enumerate}
\item If $a\wt{x}< b$. Take $m\in \textrm{supp}(x^*)\setminus \textrm{supp}(\wt{x})$. If $a\cdot \textrm{flip}(\wt{x},m)\le b$, then we can argue exactly as in Case 1 to get that $\textrm{AltProj}(\textrm{flip}(\wt{x},m)) = \textrm{flip}(\wt{x},m)$, which is closer to $x^*$ than $\wt{x}$. So consider the case $a \cdot \textrm{flip}(\wt{x},m) > b$. Take $i \in \textrm{supp}(\wt{x})\setminus \textrm{supp}(x^*)$ and consider $\textrm{flip}(\wt{x}, \{m,i\})$, which is 2 units closer to $x^*$ in Hamming distance.
We claim that the optimal value of \eqref{eq:lproj} for $\textrm{flip}(\wt{x}, \{m,i\})$ is strictly less than 1. Suppose $a \cdot \textrm{flip}(\wt{x}, \{m,i\}) \le b$; since $a \cdot \textrm{flip}(\wt{x}, m) > b$ (notice $\textrm{flip}(\wt{x}, m)$ is obtained from $\textrm{flip}(\wt{x}, \{m,i\})$ by increasing coordinate $i$ to 1), this means that we can make $\textrm{flip}(\wt{x}, \{m,i\})$ satisfy $ax = b$ by increasing coordinate $i$ to a value \emph{strictly less} than 1, thus upper bounding the optimum of \eqref{eq:lproj}. On the other hand, consider $a \cdot \textrm{flip}(\wt{x}, \{m,i\}) > b$; notice $a \cdot \textrm{flip}(\wt{x}, i) \le a \cdot \wt{x} < b$ (the last uses a running assumption), and thus again we can make $\textrm{flip}(\wt{x}, \{m,i\})$ satisfy $ax = b$ by decreasing coordinate $m$ to a value strictly smaller than 1. This proves the claim.
With this claim in place, we can just employ Part 2 of Claim 2 to $\textrm{flip}(\wt{x}, \{m,i\})$ and triangle inequality to obtain that $\|\textrm{AltProj}(\textrm{flip}(\wt{x}, \{m,i\})) - x^*\|_0$ is at most $$1 + \|\textrm{flip}(\wt{x}, \{m,i\}) - x^*\|_0 = 1 + \|\wt{x} - x^*\|_0 - 2,$$ which gives the desired result.
\item If $a\wt{x}> b$. The proof of this case mirrors that of the above case (only with the inequalities $<$ and $>$ reversed throughout).
\end{enumerate}
\end{proof}
Notice that since $\bz^\tau$ is obtained from $\textrm{AltProj}^*(.)$, it satisfies the fixed point condition $\textrm{AltProj}(\bz^\tau) = \bz^\tau$. Thus, as long as $\bz^{\tau}$ does not belong to $P$ we can apply the above lemma to obtain that with probability at least $\frac{1}{n^2}$ we have $\mathbf{I}$ in $\perturb{2}$ equal to the set $I$ in the lemma and thus the iterate moves closer to a feasible solution; more formally we have the following.
\begin{corollary} \label{cor:2choice}
Let $x^* \in \{0,1\}^n$ be a feasible solution to the subset-sum problem $P$. Then $$\Pr\Big(\|\bz^{\tau + 1} - x^*\|_0 \le \|\bz^\tau - x^*\|_0 - 1 ~\Big\vert~ \bz^{\tau} \notin P \Big) \ge \frac{1}{n^2}.$$
\end{corollary}
Now we can conclude the proof of Theorem \ref{thm:WFPGen} arguing just like in the proof of Theorem \ref{thm:walkSATMI2}.
\begin{proof}[Proof of Theorem \ref{thm:WFPGen}]
Consider $x^* \in P$ and let $\bZ_\tau = \|\bz^{\tau} - x^*\|_0$. Notice that $\bZ_\tau = 0$ implies $\bz^\tau = x^*$ and hence $\bz^\tau \in P$. Corollary \ref{cor:2choice} gives that $\Pr(\bZ_{\tau + 1} \le \bZ_{\tau} - 1 \mid \bz^\tau \notin P) \ge \frac{1}{n^2}$. Therefore, if we start at iteration $\tau$ and for all the next $\bZ_{\tau}$ iterations either the iterate $\bz^{\tau'}$ belongs to $P$ or the algorithm reduces $\bZ_{\tau'}$, it terminates by time $\tau + \bZ_{\tau}$. Thus, with probability at least $(1/n^2)^{\bZ_{\tau}} \ge (1/n^2)^n = p$ the algorithm terminates by time $\tau + \bZ_{\tau} \le \tau + n$.
To conclude the proof, let $\alpha = \lfloor T/n \rfloor$ and call time steps $i \cdot n$, \ldots, $(i+1) \cdot n -1$ the $i$-th block of time. From the above paragraph, the probability that there is $\tau$ in the $i$th block of time such that $\bz^\tau \in P$ conditioned on $\bz^{i \cdot n - 1} \notin P$ is at least $p$. Using the chain rule of probability gives that the probability that there is no $\bz^\tau \in P$ within any of the $\alpha$ blocks is at most $(1-p)^\alpha$. This concludes the proof.
\end{proof}
\section{Computations}\label{sec:computation}
In this section, we describe the algorithms that we have implemented and report computational experiments comparing the performance of the original Feasibility Pump 2.0 algorithm from~\cite{FischettiS09}, which we denote by \textsc{FPorig}\xspace, to our modified code that uses the new perturbation procedure. The code is based on the current version of the Feasibility Pump 2.0 code (the one available on the NEOS servers), which is implemented in C++ and linked to IBM ILOG CPLEX 12.6.3~\cite{CPLEX} for preprocessing and solving LPs. All features such as constraint propagation which are part of the Feasibility Pump 2.0 code have been left unchanged.
All algorithms have been run on a cluster of identical machines, each equipped with an Intel Xeon CPU E3-1220 V2 running at 3.10GHz and 16 GB of RAM. Each run had a time limit of half an hour.
\subsection{WalkSAT-based perturbation}
In preliminary tests, we implemented the algorithm \textsc{WFP}\xspace as described in the previous section. However, its performance was not competitive with \textsc{FPorig}\xspace. In hindsight, this can be justified by the following reasons:
\begin{itemize}
\item Picking a fixed $\ell$ can be tricky. Too small or too big a value can lead to slow convergence in practical implementations.
\item Using $\perturb{\ell}$ at each perturbation step can be overkill, as in most cases the original perturbation scheme does just fine.
\item Computing the minimal certificate is too expensive, as it requires solving LPs.
\end{itemize}
For the reasons above, we devised a more conservative implementation of a perturbation procedure inspired by \textsc{WalkSAT}\xspace, which we denote by \textsc{WFPbase}\xspace. The algorithm works as follows. Let $F\subset [n]$ be the set of indices with positive fractionality $|\wt{x}_j - \bar{x}_j|$. If $TT \le |F|$, then the perturbation procedure is just the original one in \textsc{FPorig}\xspace.
Else, let $S$ be the union of the supports of the constraints that are not satisfied by the current point $(\wt{x}, \bar{y})$.
We select the $|F|$ indices with largest fractionality $|\wt{x}_j - \bar{x}_j|$ and select uniformly at random $\textup{min}\{|S|, TT-|F|\}$ indices from $S$, and flip the values in $\wt{x}$ for all the selected indices.
Note also that the above procedure applies only to the case in which a cycle of length one is detected. In case of longer cycles, we use the very same restart strategy as \textsc{FPorig}\xspace.
\subsection{Computational results}
We tested the two algorithms on two classes of models: two-stage stochastic models, and the MIPLIB 2010 dataset.
\paragraph{Two-stage stochastic models.} In order to validate the hypothesis suggested by the theoretical results that our walkSAT-based perturbation should work well on almost-decomposable models, we tested \textsc{WFPbase}\xspace on two-stage stochastic models. These are the deterministic equivalent of two-stage stochastic programs and have the form
\begin{align*}
&Ax + D^i y^i \le b^i ~~, i \in \{1, \ldots, k\}\\
&x \in \{0,1\}^p\\
&y^i \in \{0,1\}^q ~~, i \in \{1, \ldots, k\}.
\end{align*}
The variables $x$ are the first-stage variables, and $y^i$ are the second-stage variables for the $i$th scenario. Notice that these second-stage variables are different for each scenario, and are only coupled through the first-stage variables $x$. Thus, as long as the number of scenarios is reasonably large compared to the dimensions of $x, y^1, \ldots, y^k$, these problems are to some extent almost-decomposable.
For our experiments we randomly generated instances of this form as follows: (1) the entries in $A$ and the $D^i$'s are independently and uniformly sampled from $\{-10, \ldots, 10\}$; (2) to guarantee feasibility, a 0/1 point is sampled uniformly at random from $\{0,1\}^{p + k \cdot q}$ and the right-hand sides $b^i$ are set to be the smallest ones that make this point feasible. We generated 50 instances, 5 for each setting of parameters $k=\{5,15,25,35,45\}$, $p = \{10,20\}$, $q = 10$.
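A Python sketch of this generator is shown below; the number of rows in each scenario block is not specified above, so the sketch fixes it to $q$ by default (an assumption of ours), and the helper name is illustrative.
\begin{verbatim}
import numpy as np

def random_two_stage_instance(k, p, q, rows_per_scenario=None,
                              rng=np.random.default_rng(0)):
    """Random two-stage stochastic test instance (sketch): entries of A
    and the D^i uniform in {-10,...,10}; a uniform 0/1 point is sampled
    and each b^i is set to the smallest right-hand side keeping it
    feasible.  The number of rows per scenario defaults to q here."""
    m = rows_per_scenario or q
    A = rng.integers(-10, 11, size=(m, p))
    D = [rng.integers(-10, 11, size=(m, q)) for _ in range(k)]
    x = rng.integers(0, 2, size=p)
    y = [rng.integers(0, 2, size=q) for _ in range(k)]
    b = [A @ x + D[i] @ y[i] for i in range(k)]   # smallest rhs keeping (x, y) feasible
    return A, D, b
\end{verbatim}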
We compared the two algorithms \textsc{FPorig}\xspace and \textsc{WFPbase}\xspace over these instances using ten different random seeds. A seed by seed comparison is reported in Table~\ref{tab:stoch}. In the tables, \texttt{\#found} denotes the number of models for which a feasible solution was found, while \texttt{time} and \texttt{itr.} report the shifted geometric means~\cite{Achterberg07} of running times and iterations, respectively.
\begin{table}[ht]
\begin{center}
\begin{tabular}{lrrrrrr}
\toprule
& \multicolumn{2}{c}{\# found} & \multicolumn{2}{c}{time (s)} & \multicolumn{2}{c}{itr.}\\
\cmidrule{1-7}
Seed & \textsc{FPorig}\xspace & \textsc{WFPbase}\xspace & \textsc{FPorig}\xspace & \textsc{WFPbase}\xspace & \textsc{FPorig}\xspace & \textsc{WFPbase}\xspace \\
\midrule
1 & 28 & \textbf{31} & 4.12 & \textbf{3.36} & 124.43 & \textbf{76.02} \\
2 & 26 & \textbf{35} & 4.06 & \textbf{3.17} & 122.51 & \textbf{82.85} \\
3 & 25 & \textbf{37} & 4.00 & \textbf{3.02} & 117.74 & \textbf{72.50} \\
4 & 26 & \textbf{36} & 4.28 & \textbf{3.40} & 119.82 & \textbf{75.17} \\
5 & 25 & \textbf{31} & 4.20 & \textbf{3.44} & 124.41 & \textbf{81.66} \\
6 & 26 & \textbf{35} & 3.98 & \textbf{3.56} & 122.74 & \textbf{79.73} \\
7 & 25 & \textbf{27} & 4.22 & \textbf{3.98} & 126.77 & \textbf{91.59} \\
8 & 28 & \textbf{38} & 3.82 & \textbf{3.10} & 112.91 & \textbf{73.92} \\
9 & 25 & \textbf{31} & 4.22 & \textbf{3.67} & 117.61 & \textbf{83.46} \\
10 & 25 & \textbf{32} & 4.12 & \textbf{3.57} & 116.92 & \textbf{88.23} \\
\bottomrule
\end{tabular}
\caption{Aggregated results on two-stage stochastic models.}
\label{tab:stoch}
\end{center}
\end{table}
Notice that \textsc{WFPbase}\xspace performed substantially better than \textsc{FPorig}\xspace, in agreement with our theoretical results. Using the walkSAT-based perturbation the average number of successful instances increased by $28\%$, while average runtime was reduced by $17\%$ and average number of iterations was reduced by $33\%$.
\paragraph{MIPLIB 2010.} We also compared the algorithms on a subset of models from MIPLIB 2010~\cite{MIPLIB2010}.
The subset is defined by the models for which at least one of the two algorithms took more than 20 iterations to find a feasible solution (if any); the remaining models are basically too easy and not useful for comparing the two perturbation procedures. We are thus left with a subset of 82 models. Again we compared the two algorithms using ten different random seeds. A seed by seed comparison is reported in Table~\ref{tab:mip2010}.
Even though the improvement in this heterogeneous testbed was less dramatic than in the two-stage stochastic models, as expected, \textsc{WFPbase}\xspace still consistently dominates \textsc{FPorig}\xspace: it finds more solutions in 7 out of 10 cases (in the remaining 3 cases it is a tie), always taking less time and almost always fewer iterations. On average over the seeds, \textsc{WFPbase}\xspace increased the number of successfully solved instances by $6\%$, reduced the computation time by $8.4\%$ and reduced the number of iterations by $5.9\%$.
In conclusion, given that the suggested modification is very simple to implement and appears to dominate \textsc{FPorig}\xspace consistently, it seems a good idea to add it as a feature in future Feasibility Pump codes.
\begin{table}[ht]
\begin{center}
\begin{tabular}{lrrrrrr}
\toprule
& \multicolumn{2}{c}{\# found} & \multicolumn{2}{c}{time (s)} & \multicolumn{2}{c}{itr.}\\
\cmidrule{1-7}
Seed & \textsc{FPorig}\xspace & \textsc{WFPbase}\xspace & \textsc{FPorig}\xspace & \textsc{WFPbase}\xspace & \textsc{FPorig}\xspace & \textsc{WFPbase}\xspace \\
\midrule
1 & 33 & \textbf{34} & 1070.35 & \textbf{1068.09} & \textbf{103.38} & 104.59 \\
2 & \textbf{34} & \textbf{34} & 1073.03 & \textbf{1004.84} & 108.65 & \textbf{104.05} \\
3 & 34 & \textbf{39} & 1125.44 & \textbf{976.16} & 107.10 & \textbf{96.18} \\
4 & 34 & \textbf{36} & 1045.10 & \textbf{976.31} & 101.30 & \textbf{96.24} \\
5 & 31 & \textbf{32} & 1033.60 & \textbf{974.56} & 96.67 & \textbf{94.36} \\
6 & \textbf{34} & \textbf{34} & 974.47 & \textbf{880.05} & 99.61 & \textbf{91.20} \\
7 & 33 & \textbf{36} & 972.96 & \textbf{877.45} & 102.39 & \textbf{95.04} \\
8 & 29 & \textbf{32} & 1085.82 & \textbf{1049.22} & 104.63 & \textbf{103.22} \\
9 & \textbf{37} & \textbf{37} & 1065.50 & \textbf{937.19} & 101.44 & \textbf{91.73} \\
10 & 32 & \textbf{37} & 1096.99 & \textbf{913.50} & 103.01 & \textbf{90.85} \\
\bottomrule
\end{tabular}
\caption{Aggregated results on MIPLIB2010.}
\label{tab:mip2010}
\end{center}
\end{table}
\section*{Acknowledgments}
We would like to thank Andrea Lodi for discussions and clarifications on Feasibility Pump. Santanu S. Dey and Andres Iroume would like to gratefully acknowledge the support of NSF grants CMMI 1562578 and CMMI 1149400 respectively.
\bibliographystyle{alpha}
| {
"timestamp": "2016-09-27T02:12:49",
"yymm": "1609",
"arxiv_id": "1609.08121",
"language": "en",
"url": "https://arxiv.org/abs/1609.08121",
"abstract": "Feasibility pump (FP) is a successful primal heuristic for mixed-integer linear programs (MILP). The algorithm consists of three main components: rounding fractional solution to a mixed-integer one, projection of infeasible solutions to the LP relaxation, and a randomization step used when the algorithm stalls. While many generalizations and improvements to the original Feasibility Pump have been proposed, they mainly focus on the rounding and projection steps.We start a more in-depth study of the randomization step in Feasibility Pump. For that, we propose a new randomization step based on the WalkSAT algorithm for solving SAT instances. First, we provide theoretical analyses that show the potential of this randomization step; to the best of our knowledge, this is the first time any theoretical analysis of running-time of Feasibility Pump or its variants has been conducted. Moreover, we also conduct computational experiments incorporating the proposed modification into a state-of-the-art Feasibility Pump code that reinforce the practical value of the new randomization step.",
"subjects": "Optimization and Control (math.OC)",
"title": "Improving the Randomization Step in Feasibility Pump",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9585377272885904,
"lm_q2_score": 0.7401743563075446,
"lm_q1q2_score": 0.7094850452923291
} |
https://arxiv.org/abs/1304.5010 | Small-Bias Sets for Nonabelian Groups: Derandomizing the Alon-Roichman Theorem | In analogy with epsilon-biased sets over Z_2^n, we construct explicit epsilon-biased sets over nonabelian finite groups G. That is, we find sets S subset G such that | Exp_{x in S} rho(x)| <= epsilon for any nontrivial irreducible representation rho. Equivalently, such sets make G's Cayley graph an expander with eigenvalue |lambda| <= epsilon. The Alon-Roichman theorem shows that random sets of size O(log |G| / epsilon^2) suffice. For groups of the form G = G_1 x ... x G_n, our construction has size poly(max_i |G_i|, n, epsilon^{-1}), and we show that a set S \subset G^n considered by Meka and Zuckerman that fools read-once branching programs over G is also epsilon-biased in this sense. For solvable groups whose abelian quotients have constant exponent, we obtain epsilon-biased sets of size (log |G|)^{1+o(1)} poly(epsilon^{-1}). Our techniques include derandomized squaring (in both the matrix product and tensor product senses) and a Chernoff-like bound on the expected norm of the product of independently random operators that may be of independent interest. | \section{Introduction}
Small-bias sets are useful combinatorial objects for derandomization,
and are particularly well-studied over the Boolean hypercube $\{0,1\}^n$. Specifically, if we identify the hypercube with the group ${\mathbb{Z}}_2^n$, then a \emph{character} $\chi$ is a homomorphism from ${\mathbb{Z}}_2^n$ to ${\mathbb{C}}$. We say that a set $S \subseteq {\mathbb{F}}_2^n$ is \emph{$\varepsilon$-biased} if, for all characters $\chi$,
\[
\abs{ \Exp_{x \in S} \chi(x) } \le \varepsilon \, ,
\]
except for the trivial character $\mathds{1}$, which is identically equal to $1$. Since any character of ${\mathbb{F}}_2^n$ can be written $\chi(x) = (-1)^{k \cdot x}$ where $k \in {\mathbb{Z}}_2^n$ is the ``frequency vector,''
this is equivalent to the familiar definition which demands that on any nonzero set of bits, $x$'s parity should be odd or even with roughly equal probability, $(1 \pm \varepsilon)/2$.
It is easy to see that $\varepsilon$-biased sets of size $O(n / \varepsilon^2)$ exist: random sets suffice. Moreover, several efficient deterministic constructions are
known~\cite{NaorN:Small,AlonBNNR:Construction,DBLP:journals/rsa/AlonGHP92,BenAroyaTS:Constructing} of size polynomial in $n$ and $1/\varepsilon$. These constructions have been used to derandomize a wide variety of randomized algorithms, replacing random sampling over all of $\{0,1\}^n$ with deterministic sampling on $S$ (see e.g.~\cite{BogdanovV:Pseudorandom}). In particular, sampling a function on an $\varepsilon$-biased set yields a good estimate of its expectation if its Fourier spectrum has bounded $\ell_1$ norm.
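For small $n$ the bias of a given set can be checked by brute force, directly from the definition above; the following Python sketch (names ours) enumerates all nonzero frequency vectors $k$.
\begin{verbatim}
import numpy as np
from itertools import product

def bias(S, n):
    """Exact bias of a set S of n-bit vectors: max over nonzero k of
    |E_{x in S} (-1)^{k.x}| (brute force, only sensible for small n)."""
    S = np.array(S)
    worst = 0.0
    for k in product([0, 1], repeat=n):
        if not any(k):
            continue                               # skip the trivial character
        worst = max(worst, abs(((-1) ** (S @ np.array(k) % 2)).mean()))
    return worst

# the whole cube is 0-biased, a single point is 1-biased
print(bias(list(product([0, 1], repeat=4)), 4), bias([[0, 0, 0, 0]], 4))
\end{verbatim}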
The question of whether similar constructions exist for
nonabelian groups has been a topic of intense interest. Given a group $G$, a \emph{representation} is a homomorphism $\rho$ from $G$ into the group ${\textsf{U}}(d)$ of $d \times d$ unitary matrices for some $d=d_\rho$. If $G$ is finite, then up to isomorphism there is a finite set $\widehat{G}$ of \emph{irreducible} representations, or \emph{irreps} for short, such that any representation $\sigma$ can be written as a direct sum of irreps. These irreps form the basis for harmonic analysis over $G$, analogous to classic discrete Fourier analysis on abelian groups such as ${\mathbb{Z}}_p$ or ${\mathbb{Z}}_2^n$.
Generalizing the standard notion from characters to matrix-valued representations, we say that a set $S \subseteq G$ is \emph{$\varepsilon$-biased} if, for all nontrivial irreps $\rho \in \widehat{G}$,
\[
\Bnorm{\Exp_{x \in S} \rho(x) } \le \varepsilon \, ,
\]
where $\norm{\cdot}$ denotes the operator norm. There is a natural connection with expander graphs. If we define a Cayley graph on $G$ using $S$ as a set of generators, then $G$ becomes an expander if and only if $S$ is $\varepsilon$-biased. Specifically, if $M$ is the stochastic matrix equal to $1/|S|$ times the adjacency matrix, corresponding to the random walk where we multiply by a random element of $S$ at each step, then $M$'s second eigenvalue has absolute value at most $\varepsilon$. Thus $\varepsilon$-biased sets $S$ are precisely sets of generators that turn $G$ into an expander of degree $|S|$.
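For small groups this quantity can be computed without listing the irreps explicitly: the regular representation contains every irrep, so the bias equals the operator norm of the averaged permutation action of $S$ restricted to the orthogonal complement of the all-ones vector. The following Python sketch (the helper names and the $S_3$ example are ours) computes it from a multiplication table.
\begin{verbatim}
import numpy as np
from itertools import permutations

def bias_from_regular_rep(mult_table, S):
    """Bias of S in a finite group G via the regular representation
    (sketch): the spectral norm of the averaged permutation action of S
    on the complement of the all-ones vector equals the maximum of
    ||E_{x in S} rho(x)|| over nontrivial irreps rho.
    mult_table[g][h] is the index of the product g*h."""
    N = len(mult_table)
    M = np.zeros((N, N))
    for x in S:
        for g in range(N):
            M[mult_table[g][x], g] += 1.0 / len(S)  # right multiplication sends g to g*x
    P = np.eye(N) - np.ones((N, N)) / N             # project out the trivial direction
    return np.linalg.norm(P @ M @ P, 2)

# Example: G = S_3 as permutations of {0,1,2}; S = all non-identity elements.
elems = list(permutations(range(3)))
index = {p: i for i, p in enumerate(elems)}
mult = [[index[tuple(p[q[j]] for j in range(3))] for q in elems] for p in elems]
S = [i for i, p in enumerate(elems) if p != (0, 1, 2)]
print(bias_from_regular_rep(mult, S))   # approx 0.2 = 1/5, since nontrivial irreps sum to 0 over G
\end{verbatim}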
The Alon-Roichman theorem~\cite{AlonR:Random} asserts that a uniformly random set of $O((\log |G|) / \varepsilon^2)$ group elements is $\varepsilon$-biased with high probability. Thus, our goal is to
derandomize the Alon-Roichman theorem---finding explicit constructions
of $\varepsilon$-biased sets of size polynomial in $\log |G|$ and $1/\varepsilon$. (For another notion of derandomizing the Alon-Roichman theorem, in time $\mathrm{poly}(|G|)$, see Wigderson and Xiao~\cite{WigdersonX:Derandomizing}.)
Throughout, we apply the technique of ``derandomized
squaring''---analogous to the principal construction in Rozenman and
Vadhan's alternate proof of Reingold's theorem~\cite{Rozenman:2005fk}
that Undirected Reachability is in \textsf{LOGSPACE}. In particular,
we observe that derandomized squaring provides a generic amplification tool in our setting; specifically, given a constant-bias set $S$, we can obtain an $\varepsilon$-biased set of size $O(|S| \varepsilon^{-11})$. We also use a tensor product version of derandomized squaring to build $\varepsilon$-biased sets from $G$ recursively, from $\varepsilon$-biased sets for its subgroups or quotients.
\paragraph{Homogeneous direct products and branching programs}
Groups of the form $G^n$ where $G$ is fixed have been actively studied by the
pseudorandomness community as a specialization of the class of
constant-width branching programs. The problem of fooling ``read-once'' group programs induces an alternate notion of
$\varepsilon$-biased sets over groups of the form $G^n$ defined by Meka and
Zuckerman~\cite{Meka:2009fk}. Specifically, a read-once branching
program on $G$ consists of a tuple $\vec{g} = (g_1,\ldots,g_n) \in
G^n$ and takes a vector of $n$ Boolean variables $\vec{b} =
(b_1,\ldots,b_n)$ as input. At each step, it applies $g_i^{b_i}$, i.e., $g_i$ if $b_i=1$ and $1$ if $b_i=0$.
They say a set $S \subset G^n$ is $\varepsilon$-biased if, for all
$\vec{b} \ne 0$, the distribution of $\vec{g}^\vec{b}$ is close to
uniform, i.e.,
\begin{equation}\label{prop:MZ}
\forall h \in G:
\abs{
\Pr_{\vec{g} \in S} \left[ \vec{g}^\vec{b} = h \right] - \frac{1}{|G|}
} \le \varepsilon
\quad \text{where} \quad
\vec{g}^\vec{b} = \prod_{i=1}^n g_i^{b_i} \, .
\end{equation}
As they comment, there is no obvious relationship
between this definition and the one we consider.\footnote{In particular,
there is no obvious way to amplify in their setting: for instance,
squaring a set $S$ by multiplication in $G^n$ squares the operator
norm of any representation, but it has a very complicated effect on
the distribution of $\vec{g}^\vec{b}$.} We are unable to establish
such a connection in general. However, we show in
Section~\ref{sec:homogeneous} that a particular set shown to have property~\eqref{prop:MZ}
in~\cite{Meka:2009fk} is also $\varepsilon$-biased in our sense; the proof
is completely different. This yields $\varepsilon$-biased sets of size
$O(n \cdot \mathrm{poly}(\varepsilon^{-1}))$.
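As a concrete illustration of property~\eqref{prop:MZ}, the following Python sketch (ours, not from~\cite{Meka:2009fk}) measures the left-hand side by brute force for a subset of $G^n$ with $G = S_3$; it is practical only for very small $n$, and the helper names are ours.
\begin{verbatim}
# Brute-force sketch (ours) of the Meka-Zuckerman property for G = S_3:
# the maximum over b != 0 and h in G of |Pr_{g in S}[g^b = h] - 1/|G||.
# Elements of S_3 are permutation tuples; (p*q)(i) = p[q[i]].
from itertools import product

S3 = [p for p in product(range(3), repeat=3) if len(set(p)) == 3]
IDENT = (0, 1, 2)

def mul(p, q):
    return tuple(p[q[i]] for i in range(3))

def mz_bias(S, n, group=S3):
    worst = 0.0
    for b in product(range(2), repeat=n):
        if not any(b):
            continue
        counts = {h: 0 for h in group}
        for vec in S:
            prod_elt = IDENT
            for gi, bi in zip(vec, b):
                if bi:
                    prod_elt = mul(prod_elt, gi)
            counts[prod_elt] += 1
        for h in group:
            worst = max(worst, abs(counts[h] / len(S) - 1 / len(group)))
    return worst

if __name__ == "__main__":
    n = 2
    # the full product S_3 x S_3 satisfies the property with epsilon = 0
    print(mz_bias(list(product(S3, repeat=n)), n))
\end{verbatim}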
\paragraph{Inhomogeneous direct products} For the more general case of groups of the form $G = G_1
\times \cdots \times G_n$, we show that a tensor product adaptation of derandomized squaring yields a recursive
construction of $\varepsilon$-biased sets of size $\mathrm{poly}(\max_i |G_i|, n,
1/\varepsilon)$.
\paragraph{Normal extensions and ``smoothly solvable'' groups}
Finally, we show that if $G$ is solvable and has abelian quotients of bounded exponent, we can construct $\varepsilon$-biased sets of size $(\log |G|)^{1+o(1)} \,\mathrm{poly}(\varepsilon^{-1})$.
Here we use the representation theory of solvable groups to build an $\varepsilon$-biased set for $G$ recursively from those for a normal subgroup $H$ and the quotient $G/H$.
\section{An explicit set for $G^n$ with constant $\varepsilon$}
\label{sec:homogeneous}
Meka and Zuckerman~\cite{Meka:2009fk} considered the following construction for fooling
read-once group branching programs:
\begin{definition}\label{def:MZ}
Let $G$ be a group and $n \in \mathbb{N}$. Then, given an
$\varepsilon$-biased set $S$ over ${\mathbb{Z}}_{|G|}^n$, define
\[
T_S \triangleq\{(g^{s_1},\dots,g^{s_n} )\mid g\in G, (s_1,\dots,s_n)\in
S\} \, .
\]
\end{definition}
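The construction is straightforward to carry out computationally. The following Python sketch (ours) builds $T_S$ for a group specified by a multiplication rule; the toy example uses $G = {\mathbb{Z}}_6$ written additively and an arbitrary exponent set $S$ that is not claimed to be $\varepsilon$-biased.
\begin{verbatim}
# Sketch (ours) of the set T_S of the definition above, for a group specified
# by a multiplication rule.  Here G = Z_6 (written additively) and S is an
# arbitrary toy exponent set, not claimed to be epsilon-biased.
def power(mul, identity, g, e):
    out = identity
    for _ in range(e):
        out = mul(out, g)
    return out

def T_of(S, group, mul, identity):
    return {tuple(power(mul, identity, g, s) for s in svec)
            for g in group for svec in S}

if __name__ == "__main__":
    G = list(range(6))
    add = lambda a, b: (a + b) % 6
    S = [(0, 1, 2), (3, 4, 5)]
    print(sorted(T_of(S, G, add, 0)))
\end{verbatim}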
We prove the following theorem, showing that this construction
yields sets of small bias in our sense (and, hence, expander Cayley
graphs over $G^n$).
\begin{theorem}\label{thm:constant}
If $S$ is $\varepsilon$-biased over $\mathbb{Z}_{|G|}^n$ then $T_S$ is $(1
- \Omega(1/\log\log |G|)^2 + \varepsilon)$-biased over $G^n$.
\end{theorem}
\noindent
Anticipating the proof, we set down the following definition.
\begin{definition}
Let $G$ be a finite group. For a representation $\rho \in
\widehat{G}$ and a subgroup $H$, define
\[
\Pi_H^\rho \triangleq
\Exp_{h \in H} \rho(h)
\]
to be the projection operator induced by the subgroup $H$ in $\rho$.
In the case where $H=\langle g \rangle$ is the cyclic
group generated by $g$, we use the following shorthand:
\[
\Pi^\rho_g = \Pi^\rho_{\langle g \rangle} \, .
\]
Finally, for groups of the form $G^n$ we use the following convention. Recall that any irreducible representation $\bar \rho \in \widehat{G^n}$ is a tensor product, $\bar \rho = \bigotimes_{i=1}^n \rho_i$ where $\rho_i \in \widehat{G}$ for each $i$. That is, if $\bar{g} = (g_1,\ldots,g_n)$, then $\bar \rho(\bar{g}) = \bigotimes_{i=1}^n \rho_i(g_i)$. Then for an element $g \in G$, we write
\begin{equation}\label{def:proj}
\Pi_g^{\bar \rho} \triangleq \Pi_{\langle g\rangle^n}^{\bar \rho} =
\bigotimes_{i=1}^n \Pi^{\rho_i}_g
\end{equation}
for the projection operator determined by the abelian subgroup
$\langle g \rangle^n$.
\end{definition}
\begin{lemma}\label{lem:power-avg}
Let $G$ be a finite group and $\rho$ a nontrivial irreducible
representation of $G$. Then
\[
\Bnorm{ \Exp_{g \in G} \Pi_g^\rho }
\leq 1 - \frac{\phi(|G|)}{|G|} \leq 1 - \Omega\left(\frac{1}{\log \log |G|} \right) \, ,
\]
where $\phi(\cdot)$ denotes the Euler totient function.
\end{lemma}
\begin{proof}
Expanding the definition of $\Pi_{\langle g\rangle}^\rho$, we have
\[
\Bnorm{ \Exp_{g \in G} \Pi_g^\rho }
= \Bnorm{ \Exp_{g \in G} \Exp_{t \in {\mathbb{Z}}_{|G|}} \rho(g^t) }
\leq \Exp_{t \in {\mathbb{Z}}_{|G|}} \Bnorm{ \Exp_g \rho(g^t) } \, .
\]
Recall that the function $x \mapsto x^k$ is a bijection in any
group $G$ for which $\gcd(|G|, k) = 1$. Moreover, for such $k$, $\Exp_g
\rho(g^k) = \Exp_g \rho(g) = 0$ as $\rho \neq 1$. Assuming
pessimistically that $\norm{\Exp_g \rho(g^k)} = 1$ for all other
$k$ yields the bound $\norm{ \Exp_{g \in G} \Pi_{\langle g\rangle}^\rho } \leq
1 - \phi(|G|)/|G|$ promised in the statement of the lemma. The
function $\phi(n)$ has the property that
\[
{\phi(n)} > \frac{n}{e^\gamma \log\log n + \frac{3}{\log \log n}}
\]
for $n > 3$, where $\gamma \approx .5772\ldots$ is the Euler
constant~\cite{RosserS1962}; this yields the second estimate in the
statement of the lemma.
\end{proof}
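As a numerical sanity check of the lemma (ours, not part of the argument), take $G = S_3$ and $\rho$ its two-dimensional standard representation: the operator $\Exp_{g \in G} \Pi_g^\rho$ can be computed explicitly and its norm compared against $1 - \phi(6)/6 = 2/3$. The Python sketch below realizes $\rho$ by compressing permutation matrices to the sum-zero subspace; all helper names are ours.
\begin{verbatim}
# Numerical sanity check (ours) of the lemma for G = S_3 and rho its
# 2-dimensional standard representation, realized by compressing permutation
# matrices to the sum-zero subspace of R^3.  The printed norm (about 0.417)
# is at most 1 - phi(6)/6 = 2/3.
import numpy as np
from itertools import permutations

def perm_matrix(p):
    m = np.zeros((3, 3))
    for i, pi in enumerate(p):
        m[pi, i] = 1.0
    return m

# isometry from R^2 onto the sum-zero subspace of R^3
Q = np.linalg.qr(np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, -1.0]]))[0]

def rho(p):
    return Q.T @ perm_matrix(p) @ Q

def projector(p):
    # Pi_g: average of rho over the cyclic group generated by p
    elems, cur, ident = [], tuple(range(3)), tuple(range(3))
    while True:
        cur = tuple(p[cur[i]] for i in range(3))
        elems.append(cur)
        if cur == ident:
            break
    return sum(rho(h) for h in elems) / len(elems)

avg = sum(projector(p) for p in permutations(range(3))) / 6
print(np.linalg.norm(avg, 2), "<=", 1 - 2 / 6)
\end{verbatim}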
Our proof will rely on the following tail bound for products of
operator-valued random variables, proved in Appendix~\ref{app:tail}.
\begin{theorem}\label{thm:operator-avg}
Let $\positive(H)$ denote the cone of positive operators on the
Hilbert space $H$. Let $P_1,\ldots, P_k$ be independent random
variables taking values in $\positive(H)$ for which $\norm{ P_i } \le 1$
and $\bnorm{ \Exp[P_i] } \leq 1 - \delta$.
Then
\[
\Pr\left[ \bnorm{ P_k \cdots P_1} \geq
\sqrt{\dim H} \exp\left(-\frac{k\delta}{6}\right)\right] \leq
\dim H \cdot \exp\left(-\frac{k\delta^2}{13} \right) \, . \qedhere
\]
\end{theorem}
We return to the proof of Theorem~\ref{thm:constant}.
\begin{proof}[Proof of Theorem~\ref{thm:constant}]
For a non-trivial irrep $\bar \rho=\rho_1 \otimes \cdots \otimes \rho_n \in
\widehat{G^n}$, we write
\[
\Exp_{\bar t\in T_S}\;\bar \rho(\bar t)
= \Exp_{g \in G} \Exp_{\bar s \in S}\;\bar\rho(g^{\bar s})
= \Exp_{g \in G} \Exp_{\bar s \in S}\;\left( \Res_{\langle g \rangle^n} \bar\rho \right)(g^{\bar s}) \, ,
\]
where $\bar s=(s_1, \dots, s_n)$, $g^{\bar s}=(g^{s_1},\dots,
g^{s_n})$, and $\Res_H \bar{\rho}$ denotes the restriction of $\bar{\rho}$ to the subgroup $H \subseteq G^n$.
For a particular $g \in G$, we decompose the restricted representation $\Res_{\langle g \rangle^n} \bar\rho$
into a direct sum of irreps of the abelian group $\langle g \rangle^n \cong {\mathbb{Z}}_{|\langle g \rangle|}^n$. This yields
\[
\Res_{\langle g\rangle^n} \bar\rho
= \bigoplus_{\chi \in \widehat{\langle g \rangle^n}} \chi^{\oplus a_\chi} \, ,
\]
where each $\chi$ is a one-dimensional representation of the abelian
group $\langle g\rangle^n$ and $a_\chi$ denotes the multiplicity with which $\chi$ appears in the
decomposition.
Now, as $S$ is an $\varepsilon$-biased set over ${\mathbb{Z}}_{|G|}^n$, its
quotient modulo any divisor $d$ of $|G|$ is
$\varepsilon$-biased over ${\mathbb{Z}}_d^n$. It follows that
\[
\left| \Exp_{\bar s \in S} \chi(\bar s) \right| \leq \varepsilon
\]
for any nontrivial $\chi$; when $\chi$ is trivial, the expectation is 1.
Thus for any fixed $g \in G$ we may write
\[
\Exp_{\bar s\in S}\; \left( \Res_{\langle g \rangle^n} \bar\rho\right)(g^{\bar s})
= \Pi_{g}^{\bar \rho} + E_{g}^{\bar \rho} \, .
\]
Recall that $\Pi_g^{\bar \rho}$ is the projection operator onto the space associated with the copies of the trivial representation of
$\langle g \rangle^n$ in $\Res_{\langle g\rangle^n} \bar\rho$, i.e., the expectation we would obtain if $\bar{s}$ ranged over all of
$\langle g \rangle^n$ instead over just $S$. The ``error operator'' $E_{g}^{\bar\rho}$ arises from the nontrivial representations
of $\langle g \rangle^n$ appearing in $\Res_{\langle g\rangle^n} \bar\rho$, and has operator norm bounded by $\varepsilon$.
It follows that
\begin{align*}
\Bnorm{ \Exp_{\bar t \in T_S} \bar\rho(\bar t) }
&= \Bnorm{ \Exp_{g \in G} \left(\Exp_{\bar s \in S} \bar\rho(g^{\bar s})\right) }
= \Bnorm{ \Exp_{g \in G} \left( \Pi_g^{\bar\rho} + E_g^{\bar\rho}\right) } \\
& \leq \Bnorm{ \Exp_{g \in G} \Pi_g^{\bar\rho} }
+ \Bnorm{ \Exp_{g \in G} E_g^{\bar\rho} }
\leq \Bnorm{ \Exp_{g \in G} \Pi_g^{\bar\rho} } + \varepsilon \, ,
\end{align*}
and it remains to bound $\norm{ \Exp_{g\in G} \Pi^{\bar \rho}_g }$.
As $\Exp_{g\in G} \Pi^{\bar \rho}_g$ is Hermitian, for any positive $k$ we have
\begin{equation}
\label{eq:power}
\Bnorm{ \Exp_{g\in G} \Pi^{\bar \rho}_g }
= \sqrt[k]{\left\| \left(\Exp_{g\in G} \Pi^{\bar \rho}_g\right)^k \right\| } \, ,
\end{equation}
so we focus on the operator $\left(\Exp_{g\in G} \Pi^{\bar \rho}_g\right)^k$.
Expanding $\Pi^{\bar \rho}_g = \bigotimes_i \Pi^{\rho_i}_g$, we may
write
\begin{equation}\label{eq:expansion}
\left(\Exp_{g\in G} \Pi^{\bar \rho}_g\right)^k = \Exp_{g_1, \ldots,
g_k} \left[ \Pi_{g_1}^{\bar \rho} \cdots \Pi_{g_k}^{\bar \rho}\right] = \Exp_{g_1, \ldots,
g_k} \left[\bigotimes_{i=1}^n \Pi_{g_1}^{\rho_i} \cdots \Pi_{g_k}^{\rho_i}\right] \, .
\end{equation}
As $\bar\rho$ is nontrivial, there is some coordinate $j$ for which
$\rho_j$ is nontrivial. Combining~\eqref{eq:expansion} with the fact
that $\norm{A \otimes B} = \norm{A} \norm{B}$, we conclude that
\begin{equation}\label{eq:triangle}
\left\| \left(\Exp_{g\in G} \Pi^{\bar \rho}_g \right)^k \right\|
\leq \Exp_{g_1, \ldots, g_k} \left\| \bigotimes_{i=1}^n \Pi_{g_1}^{\rho_i} \cdots \Pi_{g_k}^{\rho_i} \right\|
\leq \Exp_{g_1,\ldots, g_k} \left\| \Pi^{\rho_j}_{g_1} \cdots \Pi^{\rho_j}_{g_k} \right\| \, .
\end{equation}
Lemma~\ref{lem:power-avg} asserts that $\norm{\Exp_g \Pi_g^{\rho_j} } \leq 1 - \delta_G$,
where $\delta_G = \Omega(1 / \log \log |G|)$. It follows then from
Theorem~\ref{thm:operator-avg} that
\begin{equation}\label{eq:tail-bound}
\Pr_{g_1, \ldots, g_k} \Biggl[\underbrace{\Bnorm{ \Pi^{\rho_j}_{g_1} \cdots \Pi^{\rho_j}_{g_k} }
\geq \sqrt{d_j} \exp(-k \delta_G/6)}_{(\ddag)} \Biggr] \leq d_j \cdot \exp\left(-{k \delta_G^2}/{13}\right) \, ,
\end{equation}
where $d_j = \dim \rho_j$. This immediately provides a bound on $\norm{ (\Exp_g \Pi_g^{\bar \rho})^k }$. Specifically,
combining~\eqref{eq:triangle} with~\eqref{eq:tail-bound}, let us pessimistically assume that
$\norm{ \Pi^{\rho_j}_{g_1}\cdots \Pi^{\rho_j}_{g_k} } = \sqrt{d_j} \exp(-k\delta_G/6)$
for tuples $(g_1, \ldots, g_k)$ that do not enjoy property $(\ddag)$, and $1$ for tuples that do. Then
\begin{align*}
\left\| \left(\Exp_{g\in G} \Pi^{\bar \rho}_g\right)^k \right\|
& \leq \Exp_{g_1,\ldots, g_k} \bnorm{ \Pi^{\rho_j}_{g_1} \ldots \Pi^{\rho_j}_{g_k} } \\
&\leq d_j \exp\left(-{k
\delta_G^2}/{13}\right) + \left(1 - d_j \exp\left(-{k
\delta_G^2}/{13}\right)\right) \sqrt{d_j} \exp\left(-{k
\delta_G}/{6}\right)\\
&\leq 2d_j \exp\left(-{k \delta_G^2}/{13}\right) \, ,
\end{align*}
and hence
\[
\left\| \Exp_{g \in G} \Pi_g^{\bar \rho}\right\|
\leq \inf_k \left( \sqrt[k]{2d_j}\right) \cdot \exp(-{\delta_G^2}/{13})
= \exp(-{\delta_G^2}/{13})
\le 1 - \Omega\left({1}/{\log\log |G|}\right)^2 \, ,
\]
where the infimum is approached in the limit of large $k$, and the final inequality uses $\exp(-x) \le 1 - x/2$ for $0 \le x \le 1$.
\end{proof}
\section{Derandomized squaring and amplification}
\label{sec:amplification}
In this section we discuss how to amplify $\varepsilon$-biased sets in a generic way. Specifically, we use derandomized squaring to prove the following.
\begin{theorem}
\label{thm:amplify}
Let $G$ be a group and $S$ a $1/10$-biased set on $G$. Then for any $\varepsilon > 0$, there is an $\varepsilon$-biased set $S_\varepsilon$ on $G$ of size $O(|S| \varepsilon^{-11})$. Moreover, assuming that multiplication can be efficiently implemented in $G$, the set $S_\varepsilon$ can be constructed from $S$ in time polynomial in $|S_\varepsilon|$.
\end{theorem}
\noindent
We have made no attempt to improve the exponent of $\varepsilon$ in $|S_\varepsilon|$.
Our approach is similar to~\cite{Rozenman:2005fk}. Roughly, if $S$ is an $\varepsilon$-biased set on $G$ we can place a degree-$d$ expander graph $\Gamma$ on the elements of $S$ to induce a new set
\[
S \times_\Gamma S \triangleq \{ st \mid \text{$(s, t)$ an edge of $\Gamma$}\} \, .
\]
If $\rho: G \rightarrow \textsf{U}(V)$ is a nontrivial representation of $G$, by
assumption $\|\Exp_{s \in S} \rho(s) \| \leq \varepsilon$. Applying
a natural operator-valued Rayleigh quotient for expander graphs (see
Lemma~\ref{lem:expander-operators} below), we conclude that
\[
\left\|\Exp_{(s,t) \in \Gamma} \rho(s)\rho(t)\right\| = \left\|\Exp_{(s,t) \in \Gamma}
\rho(st)\right\| \leq \lambda(\Gamma) + \varepsilon^2 \, .
\]
If $\Gamma$ comes from a family of Ramanujan-like expanders, then
$\lambda(\Gamma) = \Theta(1/\sqrt{d})$, and we can guarantee that
$\lambda(\Gamma) = O(\varepsilon^2)$ by selecting $d = \Theta(\varepsilon^{-4})$.
The size of the set then grows by a factor of $|S \times_\Gamma S| /
|S| = d = \Theta(\varepsilon^{-4})$. We make this precise in
Lemma~\ref{lem:amplify} below, which regrettably loses an additional
factor of $\varepsilon^{-1}$.
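The set-level operation itself is simple. The following Python sketch (ours) carries it out for a toy group, with a circulant graph standing in for the expander $\Gamma$; it illustrates only the bookkeeping $|S \times_\Gamma S| = d\,|S|$ and not any spectral guarantee, and the helper names are ours.
\begin{verbatim}
# Sketch (ours) of the set operation S x_Gamma S = { s*t : (s,t) an edge }.
# Gamma below is a d-regular circulant graph used only as a placeholder for a
# genuine expander; the point is the bookkeeping |S x_Gamma S| = d |S|.
def derandomized_square(S, edges, mul):
    return [mul(S[i], S[j]) for (i, j) in edges]

def circulant_edges(m, d):
    return [(i, (i + k) % m) for i in range(m) for k in range(1, d + 1)]

if __name__ == "__main__":
    add7 = lambda a, b: (a + b) % 7      # toy group Z_7
    S = [0, 1, 3]                        # toy subset
    edges = circulant_edges(len(S), 2)
    print(derandomized_square(S, edges, add7), "size:", len(edges))
\end{verbatim}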
Preparing for the proof of Theorem~\ref{thm:amplify}, we record some
related material on expander graphs.
\paragraph{Expanders and derandomized products}
For a $d$-regular graph $G = (V, E)$, let $A$
denote its normalized adjacency matrix:
$A_{uv} = 1/d$ if $(u,v) \in E$ and $0$ otherwise.
Then $A$ is stochastic, normal, and has operator norm $\norm{A} =
1$; the uniform eigenvector $\vec{y^+}$ given by $y^+_s = 1$ for all $s \in V$
has eigenvalue $1$. When $G$ is connected, the
eigenspace associated with $1$ is spanned by this eigenvector, and
all other eigenvalues lie in $[-1, 1)$.
Bipartite graphs will play a special role in our analysis. We write a
bipartite graph $G$ on the bipartition $U, V$ as the tuple $G = (U, V; E)$. In
a regular bipartite graph, we have $|U| = |V|$ and $-1$ is an eigenvalue of
$A$ associated with the eigenvector $\vec{y^-}$ which is $+1$ for $s \in U$ and $-1$ for $s \in V$.
When $G$ is connected, the eigenspace associated with $-1$
is one-dimensional, and all other eigenvalues lie in $(-1, 1)$: we let
$\lambda(G) < 1$ be the leading nontrivial eigenvalue:
\[
\lambda(G) = \sup_{\vec{y} \; \perp\; {\vec{y^{\pm}}}} {\| A\vec{y} \|}/{\| \vec{y} \|} \, .
\]
When $\vec{y} \perp \vec{y^{\pm}}$, observe that $|{\langle \vec{y}, A\vec{y} \rangle}| \leq \| \vec{y} \| \cdot \| A\vec{y}
\| \leq \lambda \| \vec{y}\|^2$ by Cauchy-Schwarz.
We say that a $d$-regular, connected, bipartite graph $G = (U, V; E)$
for which $|U| = |V| = n$ and $\lambda(G) \leq \lambda$ is a \emph{bipartite $(n, d, \lambda)$-expander}. A well-known consequence of expansion is that the ``Rayleigh quotient'' determined by the expander is bounded: for
any function $f: U \cup V \rightarrow {\mathbb{R}}$ defined on the vertices of an
$(n, d, \lambda)$-expander for which $\sum_{u \in U} f(u) = \sum_{v \in V} f(v) = 0$,
\[
\Exp_{(u,v) \in E} f(u)f(v) \leq \lambda \|f\|_2^2 \, .
\]
We will apply a version of this property pertaining to operator-valued functions.
\begin{lemma}
\label{lem:expander-operators}
Let $G = (U , V; E)$ be a bipartite $(n, d, \lambda)$-expander. Associate with each vertex $s \in U \cup V$ a linear operator $X_s$ on the vector space $\mathbb{C}^m$ such that $\norm{X_s} \leq 1$, $\bnorm{ \Exp_{u \in U} X_u } \leq \varepsilon_U$, and $\bnorm{ \Exp_{v \in V} X_v } \leq \varepsilon_V$. Then
\[
\Bnorm{ \Exp_{(u,v) \in E} X_u X_v } \leq \lambda + (1 - \lambda) \varepsilon_U \varepsilon_V \, .
\]
\end{lemma}
\noindent
We will sometimes apply Lemma~\ref{lem:expander-operators} to the tensor product of operators. That is, given the same assumptions, we have
\[
\Bnorm{ \Exp_{(u,v) \in E} X_u \otimes X_v } \leq \lambda + (1 - \lambda) \varepsilon_U \varepsilon_V \, .
\]
To see this, simply apply the lemma to the operators $X_u \otimes \mathds{1}$ and $\mathds{1} \otimes X_v$.
Critical in our setting is the fact that this conclusion is
independent of the dimension $m$. A proof of this folklore lemma
appears in Appendix~\ref{appendix:expanders}; see
also~\cite{De:Pseudorandomness} for a related application to branching
programs over groups.
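A quick numerical illustration of the lemma (ours) in its easiest case: the complete bipartite graph $K_{m,m}$ has $\lambda = 0$, and there the edge average factors as $(\Exp_u X_u)(\Exp_v X_v)$, so its norm is at most $\varepsilon_U \varepsilon_V$. The Python sketch below checks this with random unitaries; all helper names are ours.
\begin{verbatim}
# Numerical illustration (ours) of the lemma in the easiest case: the complete
# bipartite graph K_{m,m} has lambda = 0, and there the edge average factors
# as (E_u X_u)(E_v X_v), so its norm is at most eps_U * eps_V.
import numpy as np

rng = np.random.default_rng(0)
m, dim = 5, 4

def random_unitary(d):
    # a unitary matrix obtained from the QR decomposition of a random matrix
    q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q

X_U = [random_unitary(dim) for _ in range(m)]
X_V = [random_unitary(dim) for _ in range(m)]
avg_edge = sum(Xu @ Xv for Xu in X_U for Xv in X_V) / m**2
eps_U = np.linalg.norm(sum(X_U) / m, 2)
eps_V = np.linalg.norm(sum(X_V) / m, 2)
print(np.linalg.norm(avg_edge, 2), "<=", eps_U * eps_V)
\end{verbatim}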
\paragraph{Amplification} We return now to the problem of amplifying
$\varepsilon$-biased sets over general groups.
\begin{lemma}
\label{lem:amplify}
Let $S$ be an $\varepsilon$-biased set on the group $G$. Then there is an $\varepsilon'$-biased set $S'$ on $G$ for which $\varepsilon' \le 5 \varepsilon^2$ and $|S'| \le C |S| \varepsilon^{-5}$, where $C$ is a universal constant. Moreover, assuming that multiplication can be efficiently implemented in $G$, the set $S'$ can be constructed from $S$ in time polynomial in $|S'|$.
\end{lemma}
\begin{proof}
We proceed as suggested above. The only wrinkle is that we need to
introduce an expander graph on the elements of $S$ that achieves
second eigenvalue $\Theta(\varepsilon^2)$.
We apply the explicit family of Ramanujan graphs due to Lubotzky,
Phillips, and Sarnak~\cite{Lubotzky:Ramanujan}. For each pair of
primes $p$ and $q$ congruent to $1$ modulo $4$, they obtain a graph $\Gamma_{p,q}$ with $p(p^2-1)$ vertices, degree $q+1$, and
$\lambda(\Gamma_{p,q}) = 2\sqrt{q}/(q+1) < 2/\sqrt{q}$. We treat
$\Gamma_{p,q}$ as a bipartite graph by taking the double cover: this
introduces a pair of vertices, $v_\text{A}$ and $v_\text{B}$, for
each vertex $v$ of $\Gamma_{p,q}$ and introduces the two edges $(u_\text{A},
v_\text{B})$ and $(v_\text{A}, u_\text{B})$ for each edge $(u,v)$. This graph has eigenvalues $\pm \lambda$ for each eigenvalue $\lambda$ of $\Gamma_{p,q}$, so except for the $\pm 1$ eigenspaces the spectral radius is unchanged.
As we do not have precise control over the number of vertices in
this expander family, we will use a larger graph and approximately
tile each side with copies of $S$. Specifically, we select the
smallest primes $p,q \equiv 1 \pmod{4}$ for which
\begin{equation}\label{eq:size-degree}
p(p^2 - 1) > |S|
\cdot \lceil \varepsilon^{-1}\rceil\quad\text{and}\quad 2/\sqrt{q} \leq
\varepsilon^2 \, .
\end{equation}
We now associate elements of $S$ with the vertices (of each side) of
$\Gamma = \Gamma_{p,q} = (U, V; E)$ as uniformly as possible;
specifically, we partition the vertices of $U$ and $V$ into a family
of blocks, each of size $|S|$; this leaves a set of less than $|S|$
elements uncovered on each side. Then elements in the blocks are
directly associated with elements of $S$; the ``uncovered'' elements
may in fact be assigned arbitrarily. As $|U| = |V| \geq |S|
\lceil\varepsilon^{-1}\rceil$, the uncovered elements above comprise
less than an $\varepsilon$-fraction of the vertices. As above, we
define the set $S \times_{\Gamma} S \triangleq \{ uv \mid (u,v) \in E\}$ (where we
blur the distinction between a vertex and the element of $S$ to
which it has been associated).
Consider, finally, a nontrivial representation $\rho$ of $G$. As the
average over any block of $U$ or $V$ has operator norm no more
than $\varepsilon$, and we have an $\varepsilon$-fraction of uncovered
elements, the average of $\rho$ over each of $U$
and $V$ is no more than $(1 - \varepsilon)\varepsilon + \varepsilon \leq 2 \varepsilon$. Applying Lemma~\ref{lem:expander-operators}, we conclude
that $\| \Exp_{s \in S \times_\Gamma S} \rho(s) \| \leq
(2\varepsilon)^2 + \lambda(\Gamma) \leq 5 \varepsilon^2$ by our choice of
$q$ (the degree less one).
By Dirichlet's theorem on the density of primes in arithmetic progressions, $p$ and $q$ need be no more than (say) a constant factor larger than the lower bounds $p(p^2-1) > |S| \varepsilon^{-1}$ and $q \ge 4 \varepsilon^{-4}$ implied by~\eqref{eq:size-degree}. Thus there is a constant $C$ such that
$|S'| = p(p^2-1)(q+1) \le C |S| \cdot \varepsilon^{-5}$.
\end{proof}
\paragraph{Remarks} The construction above is saddled with the tasks of identifying appropriate primes $p$ and $q$, and constructing the generators for the associated expander of~\cite{Lubotzky:Ramanujan}. While these can clearly be carried out in time polynomial in $|S'|$, alternate explicit constructions of expander graphs~\cite{Morgenstern:Existence} can significantly reduce this overhead. However, no known explicit family of Ramanujan graphs appears to provide enough density to avoid the tiling construction above. On the other hand, expander graphs with significantly weaker properties would suffice for the construction: any uniform bound of the form $\lambda \leq c\sqrt{\text{degree}}$ would be enough.
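The parameter bookkeeping in the proof is mechanical; the following Python sketch (ours) finds the smallest primes $p, q \equiv 1 \pmod 4$ satisfying~\eqref{eq:size-degree} and reports the resulting size $p(p^2-1)(q+1)$ of $S'$. The helper names are ours.
\begin{verbatim}
# Sketch (ours) of the parameter bookkeeping: the smallest primes
# p, q = 1 (mod 4) with p(p^2-1) > |S| * ceil(1/eps) and 2/sqrt(q) <= eps^2,
# and the resulting size |S'| = p(p^2-1)(q+1).
from math import ceil, isqrt

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, isqrt(n) + 1))

def next_prime_1mod4(lower):
    p = max(5, lower)
    while not (p % 4 == 1 and is_prime(p)):
        p += 1
    return p

def lps_parameters(size_S, eps):
    p = next_prime_1mod4(2)
    while p * (p * p - 1) <= size_S * ceil(1 / eps):
        p = next_prime_1mod4(p + 1)
    q = next_prime_1mod4(ceil(4 / eps ** 4))
    return p, q, p * (p * p - 1) * (q + 1)

if __name__ == "__main__":
    print(lps_parameters(size_S=1000, eps=0.1))
\end{verbatim}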
\begin{proof}[Proof of Theorem~\ref{thm:amplify}]
We apply Lemma~\ref{lem:amplify} iteratively. Set $\varepsilon_0 = 1/10$. After $t$ applications, we have an $\varepsilon_t$-biased set where $\varepsilon_t = 2^{-2^t} / 5$. After $t = \lceil \log_2 \log_2 (1/5 \varepsilon) \rceil$ steps, we have $5 \varepsilon^2 \le \varepsilon_t \le \varepsilon$. The total increase in size is
\[
\frac{|S_\varepsilon|}{|S|}
= C^t \left( \prod_{i=0}^{t-1} \varepsilon_i \right)^{\!-5}
= C^t \left( \frac{2 \varepsilon_t}{5^{t-1}} \right)^{-5}
\le (5^5 C)^t (50 \varepsilon^2)^{-5}
= O\big( \varepsilon^{-10} (\log \varepsilon^{-1})^{O(1)} \big)
= O(\varepsilon^{-11}) \, .
\qedhere
\]
\end{proof}
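The recursion is easy to tabulate. The Python sketch below (ours) iterates $\varepsilon_{t+1} = 5\varepsilon_t^2$ from $\varepsilon_0 = 1/10$ and accumulates the size factor $C\varepsilon_t^{-5}$ per step; the unspecified constant $C$ is set to $1$ purely to exhibit the shape of the growth, and the function name is ours.
\begin{verbatim}
# Sketch (ours) of the iteration in the proof: eps_{t+1} = 5 eps_t^2 starting
# from eps_0 = 1/10, with a multiplicative size factor C * eps_t^{-5} at each
# step.  C is the unspecified constant of the lemma; it is set to 1 here
# purely to exhibit the shape of the growth.
def amplification_schedule(target_eps, C=1.0):
    eps, factor, steps = 0.1, 1.0, 0
    while eps > target_eps:
        factor *= C * eps ** -5
        eps = 5 * eps ** 2
        steps += 1
    return steps, eps, factor

if __name__ == "__main__":
    for target in (1e-2, 1e-3, 1e-4):
        print(target, amplification_schedule(target))
\end{verbatim}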
Combining Theorem~\ref{thm:amplify} with the $\varepsilon$-biased sets constructed in
Section~\ref{sec:homogeneous} we establish a family of
$\varepsilon$-biased set over $G^n$ for smaller $\varepsilon$:
\begin{theorem}
\label{thm:homogeneous}
Fix a group $G$. There is an $\varepsilon$-biased set in $G^n$ of size
$O(n \varepsilon^{-11})$ that can be constructed in time
polynomial in $n$ and $\varepsilon^{-1}$.
\end{theorem}
\begin{proof}
Alon et al.~\cite{AlonBNNR:Construction} construct families of
explicit codes over finite fields which, in particular, offer
$\delta$-biased sets over ${\mathbb{Z}}_p^n$ of size $O(n)$ for any constant
$\delta$. As $G$ is fixed, applying Theorem~\ref{thm:constant} to
these sets over ${\mathbb{Z}}_{|G|}^n$ with sufficiently small $\delta \approx 1/\log\log |G|$ yields an $\varepsilon_0$-biased set $S_0$ over $G^n$, where
$\varepsilon_0$ is a constant close to one (depending on the size of $G$
and the constant $\delta$). We cannot directly apply
Theorem~\ref{thm:amplify} to $S_0$, as the bias may exceed
$1/10$. To bridge this constant gap (from $\varepsilon_0$ to $1/10$),
we apply the construction of the proof of Theorem~\ref{thm:amplify}
with a slight adaptation. Selecting a small constant $\alpha$, we may enlarge the expander graph to ensure
that it has size at least $|S_0| (1/\alpha)$; then the resulting error guarantee on each side of the graph bipartition
is no more than $\alpha + (1 - \alpha)\varepsilon$ and the product set
has bias no more than $(\alpha + \varepsilon)^2 + \lambda(\Gamma)$.
This can be brought as close as desired to $\varepsilon^2$ with
appropriate selection of the constants $\alpha$ and $\lambda(\Gamma)$. As
$\lambda(\Gamma)$ is constant, this transformation likewise increases the
size of the set by a constant, and this
method can reduce the error to $1/10$, say, with a constant-factor
penalty in the size of $S_0$. At this point,
Theorem~\ref{thm:amplify} applies, and establishes the bound of the theorem.
\end{proof}
\section{Inhomogeneous direct products}
\label{sec:direct}
Groups of the form $G = G_1 \times \cdots \times G_n$ appear to frustrate natural attempts to borrow $\varepsilon$-biased sets directly from abelian groups as we did for $G^n$ in Section~\ref{sec:homogeneous}. In this section, we build an $\varepsilon$-biased set for groups of this form by iterating a construction that takes $\varepsilon$-biased sets on two groups $G_1$ and $G_2$ and stitches them together, again with an expander graph, to produce an $\varepsilon'$-biased set on $G_1 \times G_2$. In essence, we again use derandomized squaring, but now for the tensor product of two operators rather than their matrix product.
\begin{construction}\label{cons:direct}
Let $G_1$ and $G_2$ be two groups; for each $i =1,2$, let $S_i$ be
an $\varepsilon_i$-biased set on $G_i$. We assume that $|S_1| \leq
|S_2|$. Let $\Gamma = (U, V; E)$ be a
bipartite $(|S_2|, d, \lambda)$-expander. Associate elements of $V$
with elements of $S_2$ and, as in the proof of
Lemma~\ref{lem:amplify}, associate elements of $S_1$ with $U$ as
uniformly as possible. As above, we order the elements of $U$ and tile them with copies of $S_1$, leaving a collection of no more than $|S_1|$ vertices ``uncovered''; these vertices are then assigned to an initial subset of $S_1$ of appropriate size. Define $S_1 \otimes_\Gamma S_2
\subset G_1 \times G_2$ to be the set of edges of $\Gamma$ (realized
as group elements according to the association above).
\end{construction}
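The following Python sketch (ours) carries out the set-level part of the construction, with a circulant bipartite graph in place of a genuine expander; it illustrates only the tiling of $U$ and the size bound $d\,|S_2|$, not any bias guarantee, and the helper names are ours.
\begin{verbatim}
# Sketch (ours) of the set-level construction: tile the left side of a
# bipartite graph with copies of S_1, identify the right side with S_2, and
# take the edge set as a subset of G_1 x G_2.  The circulant bipartite graph
# below is only a placeholder for a genuine expander.
def tensor_square(S1, S2, degree):
    m = len(S2)
    left = [S1[i % len(S1)] for i in range(m)]       # tile U with copies of S_1
    edges = [(i, (i + k) % m) for i in range(m) for k in range(degree)]
    return [(left[i], S2[j]) for (i, j) in edges]

if __name__ == "__main__":
    S1 = [0, 1]            # toy subset of Z_2
    S2 = [0, 1, 2, 3, 4]   # toy subset of Z_5
    T = tensor_square(S1, S2, degree=3)
    print(len(T), "=", 3 * len(S2))
\end{verbatim}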
Recall that an irreducible representation $\rho$ of $G_1 \times G_2$ is a tensor product $\rho_1 \otimes \rho_2$, where each $\rho_i$
is an irrep of $G_i$ and $\rho(g_1, g_2) = \rho_1(g_1) \otimes \rho_2(g_2)$. If $\rho$ is nontrivial, then one or both of $\rho_1$ and $\rho_2$ is nontrivial, and the bias we achieve on $\rho$ will depend on which of these is the case.
\begin{claim}\label{claim:direct-cons}
Assuming that $|S_1| \leq |S_2|$, the set $S_1 \otimes_\Gamma S_2$
of Construction~\ref{cons:direct} has size $d|S_2|$ and bias no more than
\[
\max\left(\varepsilon_2, \varepsilon_1 + \frac{|S_1|}{|S_2|}, \lambda + \varepsilon_2\left(\varepsilon_1 + \frac{|S_1|}{|S_2|}\right) \right) \, .
\]
\end{claim}
\begin{proof}
The size bound is immediate. As for the bias, let $\rho = \rho_1 \otimes \rho_2$ be nontrivial. If
$\rho_1 = \mathds{1}$,
\begin{equation}\label{eq:trivial-good}
\Bnorm{ \Exp_{s \in S_1 \otimes_\Gamma S_2} (\rho_1 \otimes \rho_2)(s) }
= \Bnorm{ \Exp_{v \in V} \rho_2(v) }
\leq \varepsilon_2 \, ,
\end{equation}
as $S_2$ is in one-to-one correspondence with $V$. In
contrast, if $\rho_2 = \mathds{1}$, the best we can say is that
\begin{equation}
\label{eq:trivial-bad}
\Bnorm{ \Exp_{s \in S_1 \otimes_\Gamma S_2} (\rho_1 \otimes \rho_2)(s) }
= \Bnorm{ \Exp_{u \in U} \rho_1(u) }
\leq \left( 1 - \frac{|S_1|}{|S_2|} \right) \varepsilon_1 + \frac{|S_1|}{|S_2|}
\leq \varepsilon_1 + \frac{|S_1|}{|S_2|}
\end{equation}
as in the proof of Lemma~\ref{lem:amplify}. When both $\rho_i$ are
nontrivial, applying Lemma~\ref{lem:expander-operators} to~\eqref{eq:trivial-good} and~\eqref{eq:trivial-bad} implies that
\begin{equation}\label{eq:nontrivial}
\Bnorm{ \Exp_{s \in S_1 \otimes_\Gamma S_2} (\rho_1 \otimes \rho_2)(s) }
\leq \lambda + \varepsilon_2 \left( \varepsilon_1 + \frac{|S_1|}{|S_2|} \right) \, ,
\end{equation}
as desired.
\end{proof}
Finally, we apply Construction~\ref{cons:direct} to groups of the form
$G_1 \times \cdots \times G_n$.
\begin{theorem}\label{thm:direct}
Let $G = G_1 \times \cdots \times G_n$. Then, for any
$\varepsilon$, there is an $\varepsilon$-biased set in $G$ of
size $\mathrm{poly}(\max_i |G_i|, n, \varepsilon^{-1})$. Furthermore, the set
can be constructed in time polynomial in its size.
\end{theorem}
\begin{proof}
Given the amplification results of Section~\ref{sec:amplification}, we may focus on constructing sets
of constant bias. We start by adopting the
entire group $G_i$ as a $0$-biased set for each $G_i$, and then recursively apply
Construction~\ref{cons:direct}. This process will only
involve expander graphs of constant degree, which simplifies the
task of finding the expander required for
Construction~\ref{cons:direct}. In this case, one can construct a
constant degree expander graph of desired constant spectral gap
on a set $X$ by covering the vertices of $X$ with a family of
overlapping expander graphs, uniformizing the degree arbitrarily,
and forming a small power of the result. So long as the pairwise
intersections of the covering expanders are not too small, the
resulting spectral gap can be controlled uniformly. (This luxury
was not available to us in the proof of Lemma~\ref{lem:amplify}, since in that setting we required $\lambda$ tending to zero, and insisted on a Ramanujan-like relationship between $\lambda$ and the degree.)
The recursive construction proceeds by dividing $G$ into two factors: $A = G_1 \times \cdots \times G_{n'}$ and $B = G_{n'+1} \times \cdots \times G_n$, where $n' = \lceil n/2 \rceil$. Given small-biased sets $S_A$ and $S_B$, we combine them using Construction~\ref{cons:direct}. Examining Claim~\ref{claim:direct-cons}, we wish to ensure that $|S_A| / |S_B|$ is a small enough constant. To arrange for this, we assume without loss of generality that $|S_B| \geq |S_A|$ and duplicate $S_B$ five times, resulting in a (multi-)set $S'_B$ such that $|S_A| / |S'_B| \le 1/5$.
Assume that each of the recursively constructed sets $S_A, S_B$ has bias at most $1/4$. We apply Construction~\ref{cons:direct} to $S_A$ and $S'_B$ with an expander $\Gamma$ of degree $d$ for which $\lambda \leq 1/8$, producing the set $S = S_A \otimes_\Gamma S'_B$. Ideally, we would like $S$ to also be $1/4$-biased, in which case a set of constant bias and size $\mathrm{poly}(\max_i |G_i|, n)$ would follow by induction.
Let $\rho = \rho_A \otimes \rho_B$ be nontrivial, where $\rho_A \in \widehat{A}$ and $\rho_B \in \widehat{B}$. If $\rho_A = \mathds{1}$ then, as in~\eqref{eq:trivial-good}, $\bnorm{ \Exp_{s \in S} \rho(s) } \leq 1/4$. Likewise, if both $\rho_A$ and $\rho_B$ are nontrivial, \eqref{eq:nontrivial} gives $\bnorm{ \Exp_{s \in S} \rho(s) } \leq 1/8 + 1/4(1/4 + 1/5) \leq 1/4$. At first inspection, the case where $\rho_B = \mathds{1}$ appears problematic, as~\eqref{eq:trivial-bad} only provides the discouraging estimate $\bnorm{ \Exp_{s \in S} \rho(s) } \leq 1/4 + 1/5$. Thus it seems possible that iterative application of Construction~\ref{cons:direct} could lose control of the error. However, as long as the tiling of $U$, the left side of the expander in Construction~\ref{cons:direct}, is carried out in a way that ensures that the uncovered elements of $U$ are tiled with respect to \emph{previous} stages of the recursive construction, it is easy to check that subsequent recursive appearances of this case can contribute no more than the geometric series $1/5 + (1/5)^2 + \cdots = 1/4$ to the bias. Any following recursive application of the construction in which the representation is nontrivial in both blocks will then drive the error back to $1/4$, as $1/8 + 1/4(1/4 + 1/4) = 1/4$. (If this case occurs at the last stage of recursion, then $S$ still has bias at most $1/4+1/5 \le 1/2$.)
Recall that for the base case of the induction, we treat each $G_i$ as a $0$-biased set for itself. Since there are $\log_2 n$ layers of recursion, and each layer multiplies the size of the set by the constant factor $5d$, we end with a $1/2$-biased set $S$ of size at most $(5d)^{\log_2 n} \max_i |G_i| = \mathrm{poly}( \max_i |G_i|, n)$. Finally, applying the amplification of Theorem~\ref{thm:amplify}, after first driving the bias down to $1/10$ as in Theorem~\ref{thm:homogeneous}, completes the proof.
\end{proof}
We note that if the $G_i$ are of polynomial size, then we can use the results of Wigderson and Xiao~\cite{WigdersonX:Derandomizing} to find $\varepsilon$-biased sets of size $O(\log |G_i|)$ in time $\mathrm{poly}(|G_i|)$. Using these sets in the base case of our recursion then gives an $\varepsilon$-biased set for $G$ of size $\mathrm{poly}(\max_i \log |G_i|, n, \varepsilon^{-1})$.
\section{Normal extensions and smoothly solvable groups}
While applying these techniques to arbitrary groups (even in the case when they have plentiful subgroups) seems difficult, for solvable groups we can again use a form of derandomized squaring. First, recall the derived series: if $G$ is solvable, then setting $G^{(0)}=G$ and taking commutator subgroups $G^{(i+1)} = [G^{(i)},G^{(i)}]$ gives a series of normal subgroups,
\[
1 = G^{(\ell)} \lhd \cdots \lhd G^{(1)} \lhd G^{(0)} = G \, .
\]
We say that $\ell$ is the \emph{derived length} of $G$.
Each factor $G^{(i)}/G^{(i+1)} = A_i$ is abelian, and $G^{(i)}$ is
normal in $G$ for all $i$. Since $|A_i| \geq 2$, it is obvious that $\ell = O(\log |G|)$. However, more is true. The
\emph{composition series} is a refinement of the derived series where each quotient is a cyclic group of prime order,
and the length $c$ of this refined series is the \emph{composition length}. Clearly $c \le \log_2 |G|$.
Glasby~\cite{glasby} showed that $\ell \le 3 \log_2 c + 9 = O(\log c)$, so $\ell = O(\log \log |G|)$.
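For a concrete feel for these parameters, the Python sketch below (ours) computes the derived series of a small permutation group by brute force; for $S_4$ it reports subgroup orders $24, 12, 4, 1$, i.e.\ derived length $3$. All helper names are ours, and the naive closure computation is only suitable for tiny groups.
\begin{verbatim}
# Sketch (ours): derived series of a small permutation group by brute force.
# Elements are permutation tuples; closure and commutators are computed
# naively, which is fine only for tiny groups such as S_4.
from itertools import permutations

def mul(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def inv(p):
    out = [0] * len(p)
    for i, pi in enumerate(p):
        out[pi] = i
    return tuple(out)

def closure(gens, identity):
    group, frontier = {identity}, set(gens)
    while frontier:
        known = group | frontier
        new = {mul(a, b) for a in known for b in known} - group
        group |= frontier
        frontier = new - group
    return group

def commutator_subgroup(G, identity):
    comms = {mul(mul(a, b), mul(inv(a), inv(b))) for a in G for b in G}
    return closure(comms, identity)

if __name__ == "__main__":
    n = 4
    identity = tuple(range(n))
    series = [set(permutations(range(n)))]          # S_4, order 24
    while len(series[-1]) > 1:
        series.append(commutator_subgroup(series[-1], identity))
    print([len(H) for H in series])                 # [24, 12, 4, 1]
\end{verbatim}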
We focus on groups that are \emph{smoothly solvable}~\cite{friedletal}, in the sense that the abelian factors have constant exponent. (Their definition of smooth solvability allows the factors to be somewhat more general, but we avoid that here for simplicity.) We then have the following:
\begin{theorem}
\label{thm:solvable}
Let $G$ be a solvable group,
and let its abelian factors be of the form $A_i = {\mathbb{Z}}_{p_i}^t$ (or factors of such groups) where $p_i=O(1)$. Then $G$ possesses an $\varepsilon$-biased set $S_\varepsilon$ of size
$(\log |G|)^{1+o(1)} \,\mathrm{poly}(\varepsilon^{-1})$.
\end{theorem}
We deliberately gloss over the issue of explicitness. However, we claim that if $G$ is polynomially uniform in the sense of~\cite{GenericQFT}, so that we can efficiently express group elements and products as a string of coset representatives in the derived series, then $S_\varepsilon$ can be computed in time polynomial in its size.
\begin{proof}
Solvable groups can be approached via Clifford theory, which controls the structure of representations of a group $G$ when restricted to a normal subgroup. In fact, we require only a simple fact about this setting. Namely, if $H \lhd G$ and $\rho$ is an irrep of $G$, then either $\Res_H \rho$ contains only copies of the trivial representation so that $\rho(h) = \mathds{1}_{d_\rho}$ for all $h \in H$, or $\Res_H \rho$ contains \emph{no} copies of the trivial representation.
It is easy to see that the irreps $\rho$ of $G$ for
which $\Res_H \rho$ is trivial are in one-to-one correspondence with
irreps of the group $G/H$, and we will blur this distinction. With this
perspective, it is natural to attempt to assemble an $\varepsilon$-biased set for $G$
from $S_H$, an $\varepsilon_H$-biased set for $H$, and $S_{G/H}$, an
$\varepsilon_{G/H}$-biased set for $G/H$. While $S_H \subset H \subset
G$, there is---in general---no subgroup of $G$ isomorphic to $G/H$, so
it is not clear how to appropriately embed $S_{G/H}$ into $G$. Happily, we will
see that reasonable bounds can be obtained even with an arbitrary
embedding. In particular, we treat $S_{G/H}$ as a subset of $G$ by
lifting each element $x \in S_{G/H}$ to an arbitrary element $\hat{x} \in G$ lying in the
$H$-coset associated with $x$.
If $S_H$ and $S_{G/H}$ were the same size, and we could directly
introduce an expander graph $\Gamma$ on $S_H \times S_{G/H}$, then
Lemma~\ref{lem:expander-operators} could still be used to control the
bias of $S = \{ s \hat{t} \mid (s,t) \in \Gamma\}$. Specifically, consider a nontrivial representation $\rho$ of
$G$. If $\Res_H \rho$ is trivial, then analogous to~\eqref{eq:trivial-good} we have
$\bnorm{ \Exp_{s \in S} \rho(s) } = \bnorm{ \Exp_{s \in S_{G/H}} \rho(s) } \leq \varepsilon_{G/H}$.
On the other hand, if $\Res_H
\rho$ restricts to $H$ without any appearances of the trivial
representation, then $\bnorm{ \Exp_{h \in S_H} \rho(h) } \leq \varepsilon_H$.
In this case, the action of the elements of $S_{G/H}$ on
$\rho$ may be quite pathological, permuting and ``twiddling'' the $H$-irreps
appearing in $\Res_H \rho$.
However, as $\norm{\rho(s)} = 1$ (by unitarity) for all $s \in S_{G/H}$, we can conclude from
Lemma~\ref{lem:expander-operators} that
$\bnorm{ \Exp_{s \in S} \rho(s) } \leq \lambda(\Gamma) + \varepsilon_H$.
We recursively apply the construction outlined above, accounting for the ``tiling error'' of finding an appropriate expander. Specifically, let us inductively assume we have $\epsilon$-biased sets $S^+$ on $G^{+} = G/G^{(k)}$ and $S^-$ on $G^- = G^{(k)}$ for $k = \lceil \ell/2\rceil$, where $\ell$ is the derived length of $G$. Selecting an expander graph $\Gamma$ of size at least $\alpha^{-1} \max(|S^-|, |S^+|)$ and
$\lambda(\Gamma) \leq \alpha$, for an $\alpha$ to be determined, we
tile each side of the graph with elements from $S^-$ and $S^+$,
completing them arbitrarily on the ``uncovered elements.''
Since at most a fraction $\alpha$ of the elements on either side are uncovered, the average of a nontrivial representation
over either side of the expander has operator norm no more than $\epsilon + \alpha$.
Lemma~\ref{lem:expander-operators} then implies that the bias of the set $S = \{s \hat{t} \mid (s,t) \in \Gamma\}$ is at most
$\lambda(\Gamma) + (\epsilon + \alpha) \leq \epsilon + 2\alpha$.
If we use the Ramanujan graphs of~\cite{Lubotzky:Ramanujan} described above, we can achieve degree $O(\alpha^{-2})$ and size
$O(\alpha^{-1} \max(|S^-|, |S^+|))$. Thus, each recursive step of this
process scales the sizes of the sets by a factor $O(\alpha^{-3})$ and
introduces additive error $2\alpha$. The number of levels of recursion is $\lceil \log_2 \ell \rceil$,
so if we choose $\alpha < 1/(4 \lceil \log \ell \rceil)$ then the total accumulated error is less than $1/2$.
Assuming that we have $\alpha$-biased sets for each abelian factor $A_i$ of size no more than $s$, this yields a $1/2$-biased set $S$ for $G$ of size $s \alpha^{-3 \log_2 \ell} = s (\log \ell)^{O(\log \ell)}$. For constant $p$, there are $\alpha$-biased sets for ${\mathbb{Z}}_p^n$~\cite{AlonBNNR:Construction} of size $s = O(n / \alpha^3) = (\log |G|)(\log \ell)^{O(1)}$. Using the fact~\cite{glasby} that $\ell = O(\log \log |G|)$, the total size of $S$ is
\[
(\log |G|) (\log \ell)^{O(\log \ell)}
= (\log |G|) (\log \log \log |G|)^{O(\log \log \log |G|)}
= (\log |G|)^{1+o(1)} \, .
\]
Finally, we amplify $S$ to an $\varepsilon$-biased set $S_{\varepsilon}$ for whatever $\varepsilon$ we desire with Theorem~\ref{thm:amplify}, introducing a factor $O(\varepsilon^{-11})$.
\end{proof}
\section*{Acknowledgments}
We thank Amnon Ta-Shma, Emanuele Viola, and Avi Wigderson for helpful discussions. This work was supported by NSF grant CCF-1117426 and ARO contract W911NF-04-R-0009.
| {
"timestamp": "2013-05-01T02:03:15",
"yymm": "1304",
"arxiv_id": "1304.5010",
"language": "en",
"url": "https://arxiv.org/abs/1304.5010",
"abstract": "In analogy with epsilon-biased sets over Z_2^n, we construct explicit epsilon-biased sets over nonabelian finite groups G. That is, we find sets S subset G such that | Exp_{x in S} rho(x)| <= epsilon for any nontrivial irreducible representation rho. Equivalently, such sets make G's Cayley graph an expander with eigenvalue |lambda| <= epsilon. The Alon-Roichman theorem shows that random sets of size O(log |G| / epsilon^2) suffice. For groups of the form G = G_1 x ... x G_n, our construction has size poly(max_i |G_i|, n, epsilon^{-1}), and we show that a set S \\subset G^n considered by Meka and Zuckerman that fools read-once branching programs over G is also epsilon-biased in this sense. For solvable groups whose abelian quotients have constant exponent, we obtain epsilon-biased sets of size (log |G|)^{1+o(1)} poly(epsilon^{-1}). Our techniques include derandomized squaring (in both the matrix product and tensor product senses) and a Chernoff-like bound on the expected norm of the product of independently random operators that may be of independent interest.",
"subjects": "Computational Complexity (cs.CC); Combinatorics (math.CO); Group Theory (math.GR); Representation Theory (math.RT)",
"title": "Small-Bias Sets for Nonabelian Groups: Derandomizing the Alon-Roichman Theorem",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9916842227513689,
"lm_q2_score": 0.7154239897159439,
"lm_q1q2_score": 0.7094746831791392
} |
https://arxiv.org/abs/1903.05548 | Schubert polynomials as projections of Minkowski sums of Gelfand-Tsetlin polytopes | Gelfand-Tsetlin polytopes are classical objects in algebraic combinatorics arising in the representation theory of $\mathfrak{gl}_n(\mathbb{C})$. The integer point transform of the Gelfand-Tsetlin polytope $\mathrm{GT}(\lambda)$ projects to the Schur function $s_{\lambda}$. Schur functions form a distinguished basis of the ring of symmetric functions; they are also special cases of Schubert polynomials $\mathfrak{S}_{w}$ corresponding to Grassmannian permutations.For any permutation $w \in S_n$ with column-convex Rothe diagram, we construct a polytope $\mathcal{P}_{w}$ whose integer point transform projects to the Schubert polynomial $\mathfrak{S}_{w}$. Such a construction has been sought after at least since the construction of twisted cubes by Grossberg and Karshon in 1994, whose integer point transforms project to Schubert polynomials $\mathfrak{S}_{w}$ for all $w \in S_n$. However, twisted cubes are not honest polytopes; rather one can think of them as signed polytopal complexes. Our polytope $\mathcal{P}_{w}$ is a convex polytope. We also show that $\mathcal{P}_{w}$ is a Minkowski sum of Gelfand-Tsetlin polytopes of varying sizes. When the permutation $w$ is Grassmannian, the Gelfand-Tsetlin polytope is recovered. We conclude by showing that the Gelfand-Tsetlin polytope is a flow polytope. | \section{Introduction}
\label{sec:intro}
Schubert polynomials, introduced by Lascoux and Sch\"utzenberger in 1982 \cite{LS}, are extensively studied in algebraic combinatorics \cite{BJS, FK1993, laddermoves, nilcoxeter, thomas, prismtableaux, lenart, manivel, multidegree, KM, sottile}. They represent cohomology classes of Schubert cycles in flag varieties, and they generalize Schur functions, a distinguished basis of the ring of symmetric functions.
A well-known property of the Schur function $s_{\lambda}$ is that it is a projection of the integer point transform of the Gelfand-Tsetlin polytope $\GT(\lambda)$. This has inspired the following natural question for Schubert polynomials:
\medskip
\noindent{\bf Question 1.} {\it For $w\in S_n$, is there a natural polytope $\mathcal{P}_{w}$ and a projection map $\pi_{w}$ such that the projection of the integer point transform of $\mathcal{P}_{w}$ under the map $\pi_{w}$ equals the Schubert polynomial $\mathfrak{S}_{w}$?}
\medskip
The construction of twisted cubes by Grossberg and Karshon in 1994 \cite{botttowers} is the first attempt at an answer to the above question. The integer point transforms of twisted cubes project to any Schubert polynomial. Indeed, Grossberg and Karshon show that for both flag and Schubert varieties, their (virtual) characters are projections of integer point transforms of twisted cubes.
The one catch with twisted cubes is that they are not always honest polytopes; intuitively one can think of them as signed polytopal complexes. For the Grassmannian case they do not yield the Gelfand-Tsetlin polytope. Kiritchenko's beautiful work \cite{divdiff} explains how to make certain corrections to the Grossberg-Karshon twisted cubes in order to obtain the Gelfand-Tsetlin polytope for Grassmannian permutations.
\medskip
Recall that given a partition $\lambda = (\lambda_1,\dots,\lambda_n)\in \mathbb{Z}^n_{\geq 0}$, the \textbf{Gelfand-Tsetlin polytope} $\GT(\lambda)$ is the set of all nonnegative triangular arrays
\begin{center}
\begin{tabular}{ccccccc}
$x_{11}$&&$x_{12}$&&$\cdots$&&$x_{1n}$\\
&$x_{22}$&&$x_{23}$&$\cdots$&$x_{2n}$&\\
&&$\cdots$&&$\cdots$&&\\
&&$x_{n-1,n-1}$&&$x_{n-1,n}$&&\\
&&&$x_{nn}$&&&
\end{tabular}
\end{center}
such that
\begin{align*}
x_{1i}=\lambda_i &\mbox{ for all } 1\leq i\leq n,\\
x_{i-1,j-1}\geq x_{ij}\geq x_{i-1,j} &\mbox{ for all } 2\leq i \leq j\leq n.
\end{align*}
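As a sanity check of the projection property recalled above (ours, not part of the paper's constructions), the following Python sketch enumerates the lattice points of $\GT(\lambda)$ under these conventions and applies the specialization appearing in Theorem~\ref{thm:gtsum} below; for $\lambda = (2,1,0)$ it returns exactly the monomials of $s_{(2,1)}(x_1,x_2,x_3)$. The helper names are ours, and the enumeration is feasible only for small $\lambda$.
\begin{verbatim}
# Sketch (ours): enumerate the lattice points of GT(lambda) under the
# conventions above and apply the specialization x_{1j} -> x_1,
# x_{ij} -> x_{i-1}^{-1} x_i (i > 1) used later in the paper.  Each lattice
# point contributes the monomial prod_i x_i^(r_i - r_{i+1}), where r_i is the
# sum of row i (and r_{n+1} = 0).  Feasible only for small lambda.
from itertools import product
from collections import Counter

def gt_points(lam):
    """Lattice points of GT(lam): lists of rows, row 1 equal to lam, with
    x_{i-1,j-1} >= x_{i,j} >= x_{i-1,j} between consecutive rows."""
    pats = [[tuple(lam)]]
    for _ in range(len(lam) - 1):
        pats = [p + [nxt]
                for p in pats
                for nxt in product(*[range(p[-1][j + 1], p[-1][j] + 1)
                                     for j in range(len(p[-1]) - 1)])]
    return pats

def projected_transform(lam):
    """Counter sending an exponent vector of (x_1, ..., x_n) to its coefficient."""
    n, poly = len(lam), Counter()
    for pat in gt_points(lam):
        r = [sum(row) for row in pat] + [0]
        poly[tuple(r[i] - r[i + 1] for i in range(n))] += 1
    return poly

if __name__ == "__main__":
    # the 8 monomials of the Schur polynomial s_{(2,1)}(x_1, x_2, x_3)
    print(projected_transform((2, 1, 0)))
\end{verbatim}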
To state our main result, which is a partial answer to Question 1, we need to consider the Minkowski sums of Gelfand-Tsetlin polytopes of partitions with different lengths.
Fix $n$, and for each $k\in[n]$, let $\lambda^{(k)}$ be a partition with $k$ parts (with empty parts allowed). We wish to study the Minkowski sum
\[\GT(\lambda^{(1)})+\GT(\lambda^{(2)})+\cdots+\GT(\lambda^{(n)}).\]
To make this Minkowski sum well-defined, we embed $\mathbb{R}^{\binom{k+1}{2}}$ into $\mathbb{R}^{\binom{n+1}{2}}$ for each $k$. To do this, let $y_{ij}$ be coordinates of $\mathbb{R}^{\binom{k+1}{2}}$ and $x_{ij}$ be coordinates of $\mathbb{R}^{\binom{n+1}{2}}$ as in the definition of the Gelfand-Tsetlin polytope. The embedding is given by
\[y_{ij}\mapsto x_{i,j+n-k} \text{ for all } 1\leq i\leq j\leq k.\]
Given a column-convex diagram $D$ with $n$ rows, we associate to it a family of partitions $\mathrm{Par}_D=\{\lambda^{(1)}, \ldots, \lambda^{(n)}\}$ in the following way. The shape $\lambda^{(i)}$, $i \in [n]$, has $i$ parts (some possibly zero) and is obtained from $D$ as follows: take the columns of $D$ whose lowest box is in the $i$th row, order them by decreasing length and bottom-justify them, and read off $\lambda^{(i)}$ as the row lengths of the resulting shape in French notation, from the bottom row up. Note that $\lambda^{(i)}$ is empty if there is no column of $D$ whose lowest box is in the $i$th row.
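The following Python sketch (ours) computes $\mathrm{Par}_D$ from a diagram given as a set of boxes; the reading of the French notation as the conjugate of the sorted column lengths is our interpretation of the description above, the helper names are ours, and the example reproduces the diagram of Figure~\ref{fig:rothe}.
\begin{verbatim}
# Sketch (ours): compute Par_D from a column-convex diagram D, given as a set
# of (row, column) boxes.  For each i, the columns whose lowest box is in
# row i are sorted by decreasing length; the conjugate of this list of
# lengths, padded with zeros to i parts, is taken as lambda^(i).
from collections import defaultdict

def conjugate(heights):
    if not heights:
        return ()
    return tuple(sum(1 for h in heights if h >= r) for r in range(1, heights[0] + 1))

def par_D(D, n):
    by_col = defaultdict(list)
    for (i, j) in D:
        by_col[j].append(i)
    lam = {}
    for i in range(1, n + 1):
        heights = sorted((len(rows) for rows in by_col.values() if max(rows) == i),
                         reverse=True)
        parts = conjugate(heights)
        lam[i] = parts + (0,) * (i - len(parts))
    return lam

if __name__ == "__main__":
    # Rothe diagram of w = 256413 from Figure 1
    D = {(1,1),(2,1),(3,1),(4,1),(2,3),(3,3),(4,3),(2,4),(3,4)}
    for i, lam in sorted(par_D(D, 6).items()):
        print(i, lam)
\end{verbatim}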
\begin{theorem}
\label{thm:gtsum}
The character $\mathfrak{s}_D$ of the flagged Schur module associated to a column-convex diagram $D$ with $n$ rows and $\mathrm{Par}_D=\{\lambda^{(1)}, \ldots, \lambda^{(n)}\}$ is a projection of the integer point transform of
\begin{equation} \label{eq:ms}
\mathcal{P}_D\coloneqq\mathrm{GT}(\lambda^{(1)})+\mathrm{GT}(\lambda^{(2)})+\cdots+\mathrm{GT}(\lambda^{(n)})
\end{equation}
with the embedding specified above. We obtain $\mathfrak{s}_D(x_i)$ from the integer point transform $\sigma_{\mathcal{P}_D}({x}_{ij})$ via the specialization
\[
x_{ij}\mapsto
\begin{cases}
x_1&\text{when }\,i=1,\\
x_{i-1}^{-1} x_{i}&\text{when }\,i>1.
\end{cases}
\]
\end{theorem}
In the case that $D$ is the Rothe diagram of a permutation $w \in S_n$, the character $\mathfrak{s}_D$ of the flagged Schur module associated to $D$ is the Schubert polynomial $\mathfrak{S}_{w}$. Thus, Theorem \ref{thm:gtsum} answers Question 1 for permutations whose Rothe diagram is column-convex. The necessary background for and the proof of Theorem \ref{thm:gtsum} is in Section \ref{sec:proj}. It is interesting to note that the Newton polytope of a Schubert polynomial is a generalized permutahedron \cite{FMS, MTY}; thus, the affine projection specified in Theorem \ref{thm:gtsum} maps $\mathcal{P}_{D(w)}$ to a generalized permutahedron for column-convex $D(w)$.
Theorem \ref{thm:gtsum} recovers Gelfand-Tsetlin polytopes for Grassmannian permutations. We conclude our paper by showing in Theorem \ref{thm:gtflowpolytope} that Gelfand-Tsetlin polytopes are flow polytopes and by showing how to view $\mathcal{P}_D$ in Theorem \ref{thm:gtsum} in the context of flow polytopes.
\begin{theorem} \label{thm:gtflowpolytope}
$\GT(\lambda)$ is integrally equivalent to the flow polytope $\mathcal{F}_{G_\lambda}$.
\end{theorem}
Section \ref{sec:gtflowpolytope} contains the background for and the proof of Theorem \ref{thm:gtflowpolytope}, as well as some of its corollaries.
\section{Polytopes projecting to Schubert polynomials}
\label{sec:proj}
This section is devoted to proving Theorem~\ref{thm:gtsum} and explaining the relevant terminology. We start by defining diagrams, flagged Schur modules, and their characters.
\subsection{Background}
A \textbf{diagram} is a finite subset of $\mathbb{N} \times \mathbb{N}$. Its elements $(i,j)\in D$ are called \textbf{boxes}. We will think of $\mathbb{N}\times \mathbb{N}$ as a grid of boxes in matrix notation, so $(1,1)$ is the topmost and leftmost box. Canonically associated to each permutation is its Rothe diagram.
\begin{definition}
The \textbf{Rothe diagram} of a permutation $w\in S_n$ is the collection of boxes
\[D(w)=\{(i,j) \mid 1\leq i,j\leq n,\, w(i)>j,\, w^{-1}(j)>i \}.\]
We can visualize $D(w)$ as the set of boxes remaining in the $n\times n$ grid after crossing out all boxes below or to the right of $(i,w(i))$ for each $i\in [n]$.
\end{definition}
\begin{figure}[ht]
\begin{tikzpicture}
\draw (0,0)--(3,0)--(3,3)--(0,3)--(0,0);
\draw[]
(3,2.75)
-- (.75,2.75) node {$\bullet$}
-- (0.75,0);
\draw[]
(3,2.25)
-- (2.25,2.25) node {$\bullet$}
-- (2.25,0);
\draw[]
(3,1.75)
-- (2.75,1.75) node {$\bullet$}
-- (2.75,0);
\draw[]
(3,1.25)
-- (1.75,1.25) node {$\bullet$}
-- (1.75,0);
\draw[]
(3,.75)
-- (.25,.75) node {$\bullet$}
-- (0.25,0);
\draw[]
(3,.25)
-- (1.25,.25) node {$\bullet$}
-- (1.25,0);
\draw (.5,3)--(.5,1)--(0,1);
\draw (0,2.5)--(.5,2.5);
\draw (0,2)--(.5,2);
\draw (0,1.5)--(.5,1.5);
\draw (1,2.5)--(1,1)--(1.5,1)--(1.5,1.5)--(2,1.5)--(2,2.5)--(1,2.5);
\draw (1,2)--(2,2);
\draw (1,1.5)--(1.5,1.5);
\draw (1.5,2.5)--(1.5,1.5);
\end{tikzpicture}
\caption{The permutation $w=256413$ has the column-convex Rothe diagram \\ $D(w)=\{(1,1),(2,1),(3,1),(4,1),(2,3),(3,3),(4,3),(2,4),(3,4)\}$.}
\label{fig:rothe}
\end{figure}
\begin{definition}
A diagram $D$ is \textbf{column-convex} if for each $j$, the set $\{i \mid (i,j)\in D \}$ is an interval in $\mathbb{N}$.
\end{definition}
Note that a Rothe diagram $D(w)$ is column-convex if and only if $w$ avoids the patterns $3142$ and $4132$.
Let $D$ be a diagram with $n$ rows. Denote by $\Sigma_D$ the symmetric group on the boxes in $D$. Let $\mathrm{Col}(D)$ be the subgroup of $\Sigma_D$ permuting the boxes of $D$ within each column, and define $\mathrm{Row}(D)$ similarly for rows. Let $\mathcal{T}_D$ denote the $\mathbb{C}$-vector space with basis indexed by fillings $T\colon D\to [n]$ of $D$. Observe that $\Sigma_D$, $\mathrm{Col}(D)$, and $\mathrm{Row}(D)$ act on $\mathcal{T}_D$ on the right by permuting the filled boxes.
Define idempotents $\alpha_D$, $\beta_D$ in the group algebra $\mathbb{C}[\Sigma_D]$ by
\[
\alpha_D = {1 \over |\mathrm{Row}(D)|} \sum_{w \in \mathrm{Row}(D)} w,\mbox{\hspace{3ex}}
\beta_D = {1 \over |\mathrm{Col}(D)|} \sum_{w \in \mathrm{Col}(D)} \mathrm{sgn}(w) w ,
\]
where $\mathrm{sgn}(w)$ is the sign of the permutation $w$. Given a filling $T\in\mathcal{T}_D$, define $e_T\in\mathcal{T}_D$ to be the linear combination
\[e_T=T\cdot\alpha_D\beta_D. \]
Identify $\mathcal{T}_D$ with the tensor product $V^{\otimes N}$, where $V=\mathbb{C}^n$ and $N$ is the number of boxes of $D$, in the following manner. First, fix an order on the boxes of $D$. Then read each filling $T$ in this order to obtain a word $i_1,\ldots,i_N$ on $[n]$, and identify this word with the tensor $e_{i_1}\otimes e_{i_2}\otimes\cdots\otimes e_{i_N} \in V^{\otimes N}$, where $e_1,\ldots,e_n$ is the standard basis of $\mathbb{C}^n$. As $GL_n(\mathbb{C})$ acts on $V$, it acts diagonally on $V^{\otimes N}$ by acting on each component. This left action of $GL_n(\mathbb{C})$ on $\mathcal{T}_D$ commutes with the right action of $\Sigma_D$. Thus, the subspace of $\mathcal{T}_D$ spanned by all elements $e_T$ is a submodule, called the \textbf{Schur module} of $D$.
Call a filling $T$ of $D$ \textbf{row-flagged} if $T(i,j)\leq i$ for all $i,j$. Let $B_n$ be the subgroup of $GL_n(\mathbb{C})$ consisting of upper triangular matrices. The subspace of $\mathcal{T}_D$ spanned by the elements $e_T$ for $T$ row-flagged forms a $B_n$-submodule of $\mathcal{T}_D$, called the flagged Schur module of $D$.
\begin{definition}
The \textbf{flagged Schur module} $\mathcal{S}_D$ of a diagram $D$ is the $B_n$-submodule of $\mathcal{T}_D$ spanned by \[\{e_T \mid T\mbox{ is a row-flagged filling of }D\}.\]
The \textbf{formal character} $\mathrm{char}(\mathcal{S}_D)$, denoted by $\mathfrak{s}_D$, is the polynomial
\[\mathfrak{s}_D=\mathrm{char}(\mathcal{S}_D)(x_1,\ldots,x_n) = \mathrm{Trace}(X\colon \mathcal{S}_D\to\mathcal{S}_D), \]
where $X$ is the diagonal matrix in $B_n$ with diagonal entries $x_1,\ldots,x_n$.
\end{definition}
A particularly important subclass of characters of flagged Schur modules is that of Schubert polynomials as explained in Theorem \ref{thm:kp} below. Schubert polynomials are associated to permutations, and they admit various combinatorial and algebraic definitions. For a permutation $w\in S_n$, we will define the Schubert polynomial $\mathfrak{S}_w$ via divided difference operators $\partial_i$ on polynomials.
\begin{definition}
The Schubert polynomial of the long word $w_0 \in S_n$ $(w_0(i)=n-i+1$ for $1\leq i\leq n)$ is defined as \[\mathfrak{S}_{w_0}\coloneqq x_1^{n-1}x_2^{n-2}\cdots x_{n-1}.\]
For $w\neq w_0$, there exists $i\in [n-1]$ such that $w(i)<w(i+1)$. For any such~$i$, the \textbf{Schubert polynomial} $\mathfrak{S}_{w}$ is defined by
\[\mathfrak{S}_{w}\coloneqq\partial_i \mathfrak{S}_{ws_i},\]
where
\[\partial_i (f)= \frac{f-s_if}{x_i-x_{i+1}}=
\frac{f(x_1,\ldots,x_n)-f(x_1,\ldots,x_{i-1},x_{i+1},x_{i},\ldots,x_n)}{x_i-x_{i+1}},\]
and $s_i$ is the transposition swapping $i$ and $i+1$. The operators $\partial_i$ can be shown to satisfy the braid relations, so the Schubert polynomials $\mathfrak{S}_{w}$ are well-defined.
\end{definition}
Schubert polynomials appear as the characters of flagged Schur modules of Rothe diagrams.
\begin{theorem}[\cite{KP}] \label{thm:kp}
Let $w\in S_n$ be a permutation, $D(w)$ be the Rothe diagram of $w$, and $\mathfrak{s}_{D(w)}$ be the character of the associated flagged Schur module $\mathcal{S}_{D(w)}$. Then,
\[\mathfrak{S}_w(x_1,\ldots,x_n) = \mathfrak{s}_{D(w)}(x_1,\ldots,x_n). \]
\end{theorem}
\subsection{Minkowski sums of Gelfand-Tsetlin polytopes}
We now move towards proving Theorem~\ref{thm:gtsum}, which for any column-convex diagram $D$, relates the character $\mathfrak s_D$ with the Minkowski sum \[\mathcal{P}_D=\GT(\lambda^{(1)}) + \cdots + \GT(\lambda^{(n)})\] defined in equation \eqref{eq:ms}. To begin, we describe this Minkowski sum in terms of inequalities. We will need the following Lemma~\ref{lem:gtsum}, which is proved in Section \ref{sec:gtflowpolytope}.
\medskip
\begin{lemma} \label{lem:gtsum}
If $\lambda$ has $n$ parts, then the Gelfand-Tsetlin polytope $\GT(\lambda)$ decomposes as a Minkowski sum:
\[\GT(\lambda) = \sum_{k=1}^{n}(\lambda_k-\lambda_{k+1})\mathrm{GT}(1^k0^{n-k}),\]
where by convention $\lambda_{n+1}=0$.
\end{lemma}
\begin{proposition} \label{prop:gtsum}
Let $\lambda^{(1)}, \dots, \lambda^{(n)}$ be partitions such that $\lambda^{(i)}$ has $i$ (possibly empty) parts. The Minkowski sum $\GT(\lambda^{(1)}) + \cdots + \GT(\lambda^{(n)})$ is defined by the following inequalities:
\begin{itemize}
\item for all $1 \leq i \leq j \leq n$, $x_{i-1,j-1} \geq x_{ij}$; and
\item for any positive integer $k$ and nonempty sequence $I$ of even length $0 \leq i_k < i_{k-1} < \cdots < i_1 < j_1 < j_2 < \cdots < j_k \leq n$,
\[\sum_{s=1}^k x_{j_s - i_s, j_s} - \sum_{s=1}^{k-1} x_{j_{s+1}-i_s, j_{s+1}} \geq \sum_{s=0}^{i_k} \lambda_{j_1-s}^{(n-s)}, \tag{$*$}\]
with equality when $k=1$ and $j_1=i_1+1$.
\end{itemize}
\end{proposition}
\begin{remark}
A simple calculation shows that if, for instance, $i_{s+1} = i_s$ for some $s$, then neither side of $(*)$ would change if we simply remove $i_{s+1}$ and $j_{s+1}$ from the sequence. Likewise, if $j_s = j_{s+1}$ for some $s$, then neither side would change if we remove $i_s$ and $j_s$ from the sequence. Therefore we may equivalently take the inequalities $(*)$ for sequences $0 \leq i_k \leq \cdots \leq i_1 < j_1 \leq \cdots \leq j_k \leq n$.
\end{remark}
One should observe that the entries occurring on the left side of $(*)$ lie at the corners of a path that zigzags southeast and southwest inside the triangular array.
\begin{example}\label{ex:3rows}
Suppose $n=3$. We first have inequalities $x_{11} \geq x_{22} \geq x_{33}$ and $x_{12} \geq x_{23}$ as with ordinary Gelfand-Tsetlin patterns. Then for $k=1$, we get equalities
\[x_{11} = \lambda_1^{(3)}, \qquad x_{12} = \lambda_1^{(2)} + \lambda_2^{(3)}, \qquad x_{13} = \lambda_1^{(1)} + \lambda_2^{(2)} + \lambda_3^{(3)},\]
as well as inequalities
\[x_{22} \geq \lambda_2^{(3)}, \qquad x_{23} \geq \lambda_2^{(2)} + \lambda_3^{(3)}, \qquad \text{and} \qquad x_{33} \geq \lambda_3^{(3)}.\]
Finally, for $k=2$, there is one more inequality, namely
\[x_{12} - x_{23} + x_{33} \geq \lambda_2^{(3)}.\]
\end{example}
\begin{proof}[Proof of Proposition~\ref{prop:gtsum}]
Let $P = P(\lambda^{(1)}, \dots, \lambda^{(n)}) = \GT(\lambda^{(1)}) + \cdots + \GT(\lambda^{(n)})$, and let $Q$ be the polytope given by the inequalities above, $Q = Q(\lambda^{(1)}, \dots, \lambda^{(n)})$. We first show that $P \subseteq Q$. For any point $(x_{ij})_{1 \leq i \leq j \leq n} \in P$, choose, for each $0 \leq m < n$, points $(y_{ij}^{(n-m)})_{1 \leq i \leq j \leq n-m} \in \GT(\lambda^{(n-m)})$ summing to it, so that $x_{ij} = \sum_{s = 0}^{j-i} y_{i,j-s}^{(n-s)}$. In particular, $\GT(\lambda^{(n-m)})$ will contribute to a coordinate of the form $x_{j-i,j}$ if and only if $m \leq i$.
Inequalities of the form $x_{i-1,j-1} \geq x_{ij}$ are derived by summing the respective inequalities $y_{i-1,j-1-s}^{(n-s)} \geq y_{i,j-s}^{(n-s)}$ over all $0 \leq s \leq j-i$. For inequalities of type $(*)$, consider a sequence $I$, and suppose first that $0 \leq m \leq i_k$. Then
\[
\sum_{s=1}^k y^{(n-m)}_{j_s-i_s, j_s-m} - \sum_{s=1}^{k-1} y^{(n-m)}_{j_{s+1}-i_s, j_{s+1}-m} = y_{j_1 - i_1, j_1 - m}^{(n-m)} + \sum_{s=1}^{k-1} (y^{(n-m)}_{j_{s+1}-i_{s+1}, j_{s+1}-m} - y^{(n-m)}_{j_{s+1}-i_s,j_{s+1}-m})\geq \lambda^{(n-m)}_{j_1-m},\]
since each term in the sum is nonnegative by the defining inequalities of $\GT(\lambda^{(n-m)})$.
If instead $m>i_k$, then let $k'<k$ be the largest value such that $m \leq i_{k'}$. Then
\begin{align*}
\sum_{s=1}^{k'} y^{(n-m)}_{j_s-i_s, j_s-m} - \sum_{s=1}^{k'} y^{(n-m)}_{j_{s+1}-i_s, j_{s+1}-m} &= \sum_{s=1}^{k'} (y^{(n-m)}_{j_{s}-i_{s}, j_{s}-m} - y^{(n-m)}_{j_{s+1}-i_s,j_{s+1}-m}) \geq 0
\end{align*}
since again each term in the sum is nonnegative. Summing these inequalities over all $m$ then gives the desired inequality. In the case that $k=1$ and $j_1=i_1+1$, we get equality since
\[x_{1,j_1} = \sum_{s=0}^{j_1-1} y_{1,j_1-s}^{(n-s)} = \sum_{s=0}^{i_1} \lambda_{j_1-s}^{(n-s)}.\]
To show $Q \subseteq P$, we induct on $n$ and then the size of $\lambda^{(n)}$. First suppose $\lambda^{(n)} = \varnothing$. The inequalities involving $x_{jj}$ are $x_{11} \geq x_{22} \geq \cdots \geq x_{nn}$, and, when $i_k = 0$,
\[\sum_{s=1}^{k-1} (x_{j_s-i_s,j_s}-x_{j_{s+1}-i_s,j_{s+1}}) + x_{j_k,j_k} \geq \lambda_{j_1}^{(n)} = 0\]
with equality if also $k=1$ and $j_1 = 1$. These imply that $x_{jj} = 0$ for all $1 \leq j \leq n$ and impose no additional constraints on the other entries. Removing the diagonal of entries $x_{jj}$ then yields a triangular array that satisfies the inequalities defining $Q(\lambda^{(1)}, \cdots, \lambda^{(n-1)})$. Therefore by induction
\[Q(\lambda^{(1)}, \dots, \lambda^{(n-1)}, \varnothing) = Q(\lambda^{(1)}, \dots, \lambda^{(n-1)}) = P(\lambda^{(1)}, \dots, \lambda^{(n-1)}) = P(\lambda^{(1)}, \dots, \lambda^{(n-1)}, \varnothing).\]
If $\lambda^{(n)} \neq \varnothing$, then let $m = \ell(\lambda^{(n)})$ be the number of nonzero parts. We will prove that $Q \subseteq \GT(1^m0^{n-m}) +Q'$, where we let $Q' = Q(\lambda^{(1)}, \dots, \lambda^{(n-1)}, \mu^{(n)})$ for $\mu^{(n)} = (\lambda^{(n)}_1-1, \dots, \lambda^{(n)}_m-1, 0, \dots, 0)$. This will prove the result by induction using Lemma \ref{lem:gtsum} since then $\GT(1^m0^{n-m}) + \GT(\mu^{(n)}) = \GT(\lambda^{(n)})$.
Recall that Gelfand-Tsetlin polytopes are integral polytopes. Given any integer point $(x_{ij}) \in Q$, set $t_j = 1$ for $1 \leq j \leq m$, while for $m < j \leq n$, set $t_j$ to be the minimum value such that $t_j > t_{j-1}$ and $x_{t_j-1,j-1} = x_{t_j,j}$ (if such an index exists, otherwise set $t_j = \infty$). Then define the point $(z_{ij})_{1 \leq i \leq j \leq n} \in \GT(1^m, 0^{n-m})$ by $z_{ij} = 1$ if $i \geq t_j$, otherwise $z_{ij}=0$.
We claim that $(x'_{ij}) = (x_{ij}-z_{ij}) \in Q'$. Our choice of $t_j$ guarantees that $x_{i-1,j-1} - x_{i,j} \geq 1$ whenever $z_{i-1,j-1}-z_{i,j} = 1$, which ensures that $x'_{i-1,j-1} \geq x'_{ij}$ for all $1 \leq i \leq j \leq n$. Therefore it suffices to show inequalities of type $(*)$.
Given any sequence $I$, suppose that for some $s$, $z_{j_s-i_{s-1},j_s}=0$ but $z_{j_s-i_s,j_s}=1$. Consider what happens to the left hand side of $(*)$ if we insert $j'=j_{s}-1$ between $j_{s-1}$ and $j_s$, and we insert $i'=j_s-t_{j_{s}}$ between $i_s$ and $i_{s-1}$ to get a new sequence $I'$. (Note that $j_{s-1} \leq j' < j_s$ and $i_s \leq i' < i_{s-1}$.) This reduces the left hand side of $(*)$ by
\begin{align*}
(x'_{j' - i_{s-1},j'}-x'_{j_s-i_{s-1},j_s}) - (x'_{j'-i',j'}-x'_{j_s-i',j_s}) &=
(x'_{j_s-1-i_{s-1},j_s-1}-x'_{j_s-i_{s-1},j_s})-(x'_{t_{j_s}-1,j_s-1}-x'_{t_{j_s},j_s})\\
&= x'_{j_s-1-i_{s-1},j_s-1}-x'_{j_s-i_{s-1},j_s}\\ &\geq 0,
\end{align*}
while the right hand side of $(*)$ is unchanged. Thus $(*)$ for the sequence $I$ is implied by $(*)$ for the new sequence $I'$. Since $z_{j_s-i',j_s} = z_{t_{j_s},j_s} = 1$, by iteratively applying this procedure to the new sequence, we will eventually arrive at a sequence for which such an $s$ does not exist.
It therefore suffices to prove inequality $(*)$ in the case that there exists some $s'$ such that $z_{j_s-i_{s-1},j_s}=1$ and $z_{j_s-i_s,j_s}= 1$ exactly when $s \leq s'$. If $j_1 \leq m$, then the left hand side of $(*)$ is
\begin{align*}
\sum_{s=1}^k x'_{j_s - i_s, j_s} - \sum_{s=1}^{k-1} x'_{j_{s+1}-i_s, j_{s+1}} &= \left(\sum_{s=1}^k x_{j_s - i_s, j_s}-s'\right) - \left(\sum_{s=1}^{k-1} x_{j_{s+1}-i_s, j_{s+1}}-s'+1\right)\\
&= \sum_{s=1}^k x_{j_s - i_s, j_s} - \sum_{s=1}^{k-1} x_{j_{s+1}-i_s, j_{s+1}} - 1,
\end{align*}
while the right hand side is
\[\mu_{j_1}^{(n)} + \sum_{s=1}^{i_k} \lambda_{j_1-s}^{(n-s)} = \sum_{s=0}^{i_k} \lambda_{j_1-s}^{(n-s)} -1,\]
so this inequality follows from the corresponding inequality for $(x_{ij}) \in Q$. If $j_1 > m$, then consider the sequence obtained by inserting $m, m+1, \dots, j_1-1$ before $j_1$, and $j_1-t_{j_1}, j_1-1-t_{j_1-1}, \dots, m+1-t_{m+1}$ after $i_1$ in the sequence. For $(x_{ij}) \in Q$, this yields the inequality
\[\left(\sum_{j = m}^{j_1-1} x_{t_{j+1}-1,j}+\sum_{s=1}^k x_{j_s-i_s, j_s}\right) - \left(\sum_{j =m}^{j_1-1} x_{t_{j+1},j+1} + \sum_{s=1}^{k-1} x_{j_{s+1}-i_s,j_{s+1}}\right) \geq \sum_{s=0}^{i_k} \lambda_{m-s}^{(n-s)}.\]
But $x_{t_{j+1}-1,j} = x_{t_{j+1},j+1}$, and the right side is strictly greater than $\sum_{s=0}^{i_k} \lambda_{j_1-s}^{(n-s)}$ (since $\lambda^{(n)}_{m} > 0 = \lambda^{(n)}_{j_1}$). Thus
\[\sum_{s=1}^k x_{j_s-i_s, j_s} - \sum_{s=1}^{k-1} x_{j_{s+1}-i_s,j_{s+1}} \geq \sum_{s=0}^{i_k} \lambda_{j_1-s}^{(n-s)} + 1,\]
or equivalently,
\[\left(\sum_{s=1}^k x_{j_s-i_s, j_s}-s'\right) - \left(\sum_{s=1}^{k-1} x_{j_{s+1}-i_s,j_{s+1}}-s'+1\right) \geq \sum_{s=0}^{i_k} \lambda_{j_1-s}^{(n-s)} = \mu_{j_1}^{(n)} + \sum_{s=1}^{i_k} \lambda_{j_1-s}^{(n-s)},\]
which is the inequality $(*)$ for $(x'_{ij}) \in Q'$. This completes the proof.
\end{proof}
\subsection{Demazure operators and parapolytopes}
To prove Theorem~\ref{thm:gtsum}, we will need a formula for the character $\mathfrak s_D$. The following formula is essentially a particular case of one due to Magyar \cite{magyar}. (See also Reiner-Shimozono \cite{dpeel}.) We first define the \textbf{isobaric divided difference operator} (or \textbf{Demazure operator}) $\pi_i$ acting on polynomials $f(x_1, \dots, x_n)$ by
\[\pi_if(x_1, \dots, x_n) = \partial_i (x_if)=\frac{x_if - x_{i+1}s_if}{x_i-x_{i+1}},\]
where $s_if$ is the polynomial obtained from $f$ by switching $x_i$ and $x_{i+1}$. Note that $\pi_i f = f$ if $f$ is symmetric in $x_i$ and $x_{i+1}$.
\begin{proposition} \label{prop:sd}
Let $D$ be a column-convex diagram with $n$ rows with ${\rm{Par}}_D=\{\lambda^{(1)}, \ldots, \lambda^{(n)}\}$. Define $\widetilde D$ to be the diagram with $n-1$ rows such that $\mathrm{Par}_{\widetilde D} = (\widetilde \lambda^{(1)}, \dots, \widetilde \lambda^{(n-1)})$, where $\widetilde \lambda^{(i)}_j = \lambda^{(i+1)}_j - \lambda^{(i+1)}_{i+1}$. (Here, $\widetilde D$ is obtained from $D$ by removing any column with a box in the first row and then shifting all remaining boxes up by one row.) Also let
\[\mu = (\lambda^{(1)}_1 + \lambda^{(2)}_2+\cdots + \lambda^{(n)}_n, \lambda^{(2)}_2 + \cdots + \lambda^{(n)}_n, \dots, \lambda^{(n)}_n),\]
the partition formed from all columns of $D$ with a box in the first row. Then
\[\mathfrak s_{D} = x_1^{\mu_1}\cdots x_n^{\mu_n} \pi_1\pi_2\cdots \pi_{n-1} (\mathfrak s_{\widetilde D}).\]
\end{proposition}
\begin{proof}
Note that $D$ can be obtained from $\widetilde D$ by switching the $i$th and $(i+1)$st row for $i = n-1, n-2, \dots, 1$, and then adding $\mu_i$ columns with boxes in rows $\{1, \dots, i\}$ for each $i = 1, \dots, n$. The result then follows immediately from \cite{magyar} (see, for instance, Proposition 15).
\end{proof}
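The recursion of Proposition~\ref{prop:sd} can also be run directly. The sketch below (ours, for illustration only) assumes that $\mathrm{Par}_D$ is supplied explicitly as a list $[\lambda^{(1)},\dots,\lambda^{(n)}]$ with $\lambda^{(i)}$ a weakly decreasing integer list of length $i$; it returns $\mathfrak s_D$ as a sympy polynomial.
\begin{verbatim}
# Sketch: the character s_D of a column-convex diagram via Proposition prop:sd.
from sympy import symbols, cancel, expand

def pi(f, i, x):
    # Demazure operator: pi_i f = (x_i f - x_{i+1} s_i f) / (x_i - x_{i+1})
    sf = f.subs({x[i - 1]: x[i], x[i]: x[i - 1]}, simultaneous=True)
    return cancel((x[i - 1] * f - x[i] * sf) / (x[i - 1] - x[i]))

def character(par, x):
    # par = [lambda^(1), ..., lambda^(n)], with lambda^(i) of length i
    n = len(par)
    mu = [sum(par[i][i] for i in range(k, n)) for k in range(n)]
    if n == 1:
        return x[0] ** mu[0]
    tilde = [[par[i + 1][j] - par[i + 1][i + 1] for j in range(i + 1)]
             for i in range(n - 1)]
    f = character(tilde, x)               # s_{tilde D}, in x_1, ..., x_{n-1}
    for i in range(n - 1, 0, -1):         # apply pi_1 ... pi_{n-1}, pi_{n-1} first
        f = pi(f, i, x)
    mono = 1
    for k in range(n):
        mono *= x[k] ** mu[k]
    return expand(mono * f)

x = symbols('x1:4')
# e.g. lambda^(1) = (0), lambda^(2) = (1,0), lambda^(3) = (2,1,0):
print(character([[0], [1, 0], [2, 1, 0]], x))
\end{verbatim}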
We now show that the polytope for $D$ can be constructed iteratively in a way that mimics the application of the operator $\pi_i$. This geometric operation is the same as the operator $D_i$ given by Kiritchenko in \cite{divdiff} specialized for our current situation.
The key lemma is the following calculation.
\begin{lemma} \label{lem:Di}
Choose nonnegative integers $N_1$, $N_2$ and $\mu_1 \leq \nu_1$, \dots, $\mu_k \leq \nu_k$ such that $\sum_{i=1}^k (\mu_i + \nu_i) \leq N_1+N_2$. Define the polynomial
\[f(x_1, x_2) = \sum_{c_1 = \mu_1}^{\nu_1} \cdots \sum_{c_k = \mu_k}^{\nu_k} x_1^{N_1 - c_1 - \cdots - c_k}x_2^{c_1 + \cdots + c_k-N_2} .\]
Then
\[\pi_1 f(x_1, x_2) = \sum_{c_1 = \mu_1}^{\nu_1} \cdots \sum_{c_k = \mu_k}^{\nu_k} \sum_{c_{k+1} = 0}^{\nu_{k+1}}x_1^{N_1 - c_1 - \cdots - c_k - c_{k+1}}x_2^{c_1 + \cdots + c_k + c_{k+1}-N_2} ,\]
where $\nu_{k+1} = N_1+N_2 - \sum_{i=1}^k (\mu_i + \nu_i)$.
\end{lemma}
\begin{proof}
Note that reversing the order of each of the summations in the expression for $f$ gives
\[f = \sum_{c_1 = \mu_1}^{\nu_1} \cdots \sum_{c_k = \mu_k}^{\nu_k} x_1^{\nu_{k+1}-N_2+c_1 + \cdots + c_k}x_2^{N_1-\nu_{k+1}-c_1-\cdots - c_k} = (\tfrac{x_1}{x_2})^{\nu_{k+1}} \cdot s_1f.\]
Hence
\[\pi_1f = \frac{x_1f - x_2s_1f}{x_1-x_2} = f \cdot \frac{1 - \left(\tfrac{x_2}{x_1}\right)^{\nu_{k+1}+1}}{1-\tfrac{x_2}{x_1}} = f \cdot \sum_{c_{k+1}=0}^{\nu_{k+1}} x_1^{-c_{k+1}}x_2^{c_{k+1}},\]
as desired.
\end{proof}
Consider $\mathbb R^{\binom{n+1}{2}}$ with coordinates $x_{ij}$ for $1 \leq i \leq j \leq n$. Let $\varphi_k \colon \mathbb R^{\binom{n+1}{2}} \to \mathbb R^{\binom{n+1}{2} - n-1+k}$ be the projection onto the coordinates $x_{ij}$ for all $i \neq k$.
\begin{definition}[\cite{divdiff}]\label{def:parapolytope}
A \textbf{parapolytope} $P \subset \mathbb R^{\binom{n+1}{2}}$ is a convex polytope such that, for all $k$, every fiber of the projection $\varphi_k$ on $P$ is a coordinate parallelepiped.
In other words, for every $k$ and every set of constants $c_{ij}$ ($i \neq k$), there exist constants $\mu_j$ and $\nu_j$ (depending on the $c_{ij}$) such that $(x_{ij}) \in P$ with $x_{ij} = c_{ij}$ for $i \neq k$ if and only if $\mu_j \leq x_{kj} \leq \nu_j$.
We denote this parallelepiped (which depends on $k$ and $c_{ij}$ for $i \neq k$) by \[\Pi(\mu_k, \dots, \mu_n; \nu_k, \dots, \nu_n)=\Pi(\mu,\nu) = \{(x_{kj})_{j=k}^n \mid \mu_j \leq x_{kj} \leq \nu_j\} \subset \mathbb R^{n+1-k}.\]
\end{definition}
Given a polytope $P \subset \mathbb R^{\binom{n+1}{2}}$, let $\sigma_P$ be its {\bf integer point transform}
\[\sigma_P(x_{ij}) = \sum_{(c_{ij}) \in P \cap \mathbb Z^{\binom{n+1}{2}}} \prod_{1 \leq i \leq j \leq n} x_{ij}^{c_{ij}},\]
and define $s_P(x_i)$ to be the image of $\sigma_P(x_{ij})$ under the specialization sending
\[
x_{ij}\mapsto
\begin{cases}
x_1&\text{when }\,i=1,\\
x_{i-1}^{-1} x_{i}&\text{when }\,i>1.
\end{cases}
\]
In other words, the point $(c_{ij}) \in P \cap \mathbb Z^{\binom{n+1}{2}}$ corresponds to the monomial in which the exponent of $x_i$ is $C_i - C_{i+1}$, where $C_i = \sum_{j=i}^n c_{ij}$.
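Concretely, the specialization can be applied point by point; the following small Python sketch (ours, with illustrative names) computes $s_P$ from a list of lattice points, each given as a dictionary $\{(i,j)\colon c_{ij}\}$.
\begin{verbatim}
# Sketch: the specialization sigma_P -> s_P applied to a list of lattice points.
from sympy import symbols, Add, Mul

def s_P(points, n):
    x = symbols(f'x1:{n + 1}')
    terms = []
    for c in points:
        # C_i = sum_j c_{ij}, with C_{n+1} = 0
        C = [sum(c.get((i, j), 0) for j in range(i, n + 1)) for i in range(1, n + 2)]
        # the exponent of x_i is C_i - C_{i+1}
        terms.append(Mul(*[x[i - 1] ** (C[i - 1] - C[i]) for i in range(1, n + 1)]))
    return Add(*terms)

# Example: s_P([{(1, 1): 2, (2, 2): 1}], 2) returns x1*x2.
\end{verbatim}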
\begin{lemma}\label{lem:Di2}
Fix $2 \leq k \leq n$, and let $P, Q \subset \mathbb R^{\binom{n+1}{2}}$ be parapolytopes. Suppose that for any fixed integer point $c = (c_{ij})_{i \neq k}$, the fiber over $c$ of the projection $\varphi_k$ on $P$ is the (integer) parallelepiped
\[\Pi_P = \Pi(\mu_k, \dots, \mu_{n-1}, 0; \nu_k, \dots, \nu_{n-1}, 0),\]
while the fiber over $c$ of $\varphi_k$ on $Q$ is
\[\Pi_Q =\Pi(\mu_k, \dots, \mu_{n-1}, \mu_n; \nu_k, \dots, \nu_{n-1}, \nu_n),\]
where $\mu_n = 0$ and \[\nu_n = \sum_{j=k-1}^n c_{k-1,j} + \sum_{j=k+1}^n c_{k+1,j} - \sum_{j=k}^{n-1} (\mu_j+\nu_j) \geq 0.\]
Then $s_Q = \pi_{k-1} s_P$.
\end{lemma}
\begin{proof}
For fixed $c$, the contribution to $s_P$ of the fiber over $c$ has the form \[M \cdot \sum_{(c_{kk}, \dots, c_{k,n-1}) \in \Pi_P} x_{k-1}^{C_{k-1} - \sum_j c_{kj}} x_k^{\sum_j c_{kj} - C_{k+1}},\]
where $M$ is a monomial that does not contain $x_{k-1}$ nor $x_k$, and $C_i = \sum_{j=i}^n c_{ij}$ only depends on $c$ for $i \neq k$. This summation has the same form as the one in Lemma~\ref{lem:Di}, so applying $\pi_{k-1}$ as per the lemma immediately gives the result.
\end{proof}
\begin{remark} \label{rem:para}
The operator that produces $Q$ from $P$ is denoted by $D_{k-1}$ in \cite{divdiff}. However, it is important to note that the operator $D_{k-1}$ will not in general yield a parapolytope or even necessarily a polytope from a general parapolytope $P$.
\end{remark}
We are now ready to prove our main theorem.
\begin{proof} [Proof of Theorem~\ref{thm:gtsum}]
Let $D$ be a column-convex diagram with $n$ rows with $\mathrm{Par}_D = \{\lambda^{(1)}, \dots, \lambda^{(n)}\}$, and let
\[\mathcal{P}_D = \GT(\lambda^{(1)}) + \cdots + \GT(\lambda^{(n)}).\] We first claim that we can reduce to the case when $D$ does not contain any boxes in the first row. Indeed, adding a column with boxes in rows $1, 2, \dots,k$ to $D$ serves to add $1$ to each part of $\lambda^{(k)}$, which, by Lemma \ref{lem:gtsum}, translates $\GT(\lambda^{(k)})$ by the single point $\GT(1^k)$ and hence does the same to $\mathcal{P}_D$. This translation adds $k+1-i$ to the sum of row $i$ for $i=1, \dots, k$, so it multiplies $s_{\mathcal{P}_D}$ by $x_1x_2 \cdots x_k$. Since Proposition~\ref{prop:sd} shows that adding this column also multiplies $\mathfrak s_D$ by $x_1x_2 \cdots x_k$, the claim follows.
Therefore, we may assume that $D$ has no boxes in the first row, so that $\lambda^{(k)}_k = 0$ for all $k$, which implies that $\mathcal{P}_D$ is contained in the hyperplane $x_{1n} = 0$. Denote by $\mathcal{P}_D^{(m)}$ the intersection of $\mathcal{P}_D$ with the subspace $x_{1n} = x_{2n} = \cdots = x_{mn} = 0$. In fact, $\mathcal{P}_D^{(m)}$ is also the orthogonal projection of $\mathcal{P}_D$ onto this subspace. To see this, note that for any Gelfand-Tsetlin pattern $(y_{ij})_{1 \leq i \leq j \leq k} \in \GT(\lambda^{(k)})$, setting $y_{in} = 0$ for any $i \leq m$ again yields a valid Gelfand-Tsetlin pattern. Thus for any $(x_{ij}) \in \mathcal{P}_D$, setting $x_{in} = 0$ for all $i \leq m$ will again yield an element of $\mathcal{P}_D$.
Note also that $\mathcal{P}_D^{(n)}$ is just a translate of $\mathcal{F}_{\widetilde D}$ where $\widetilde D$ is the diagram obtained from $D$ by shifting each box up by one row. (Any Gelfand-Tsetlin pattern for $\lambda^{(k)}$ in which the last entry in each row is $0$ is just a Gelfand-Tsetlin pattern for $\lambda^{(k)}$, thought of as a partition of length $k-1$.) It follows that $\mathfrak s_{\widetilde D} = s_{\mathcal{F}_{\widetilde D}} = s_{\mathcal{P}_D^{(n)}}$.
We first show that the inequalities defining $\mathcal{P}_D^{(m)}$ are precisely the inequalities for $\mathcal{P}_D$ described in Proposition~\ref{prop:gtsum} that do not involve any $x_{in}$ for $i \leq m$ (together with $x_{1n} = x_{2n} = \cdots = x_{mn} = 0$). Clearly any inequality of the form $x_{i-1,n-1} \geq x_{in}$ for $i \leq m$ is redundant since it is implied by inequality $(*)$ for $i_1 = n-i$ and $j_1 = n-1$. Then consider any inequality $(*)$ for a sequence $I$ with $j_k = n$ and $i_{k-1} \geq n-m$:
\[\sum_{s=1}^{k-1} x_{j_s - i_s, j_s} - \sum_{s=1}^{k-2} x_{j_{s+1}-i_s, j_{s+1}} + x_{n-i_k, n} - x_{n-i_{k-1}, n}\geq \sum_{s=0}^{i_k} \lambda_{j_1-s}^{(n-s)}.\]
Let $I'$ be the sequence obtained from $I$ by removing $i_k$ and $j_k$. The corresponding inequality is
\[\sum_{s=1}^{k-1} x_{j_s - i_s, j_s} - \sum_{s=1}^{k-2} x_{j_{s+1}-i_s, j_{s+1}} \geq \sum_{s=0}^{i_{k-1}} \lambda_{j_1-s}^{(n-s)}.\]
Since $x_{n-i_k,n} \geq 0$, $x_{n-i_{k-1},n} = 0$, and $i_{k-1} > i_k$, we see that the inequality for $I$ follows immediately from that for $I'$.
Since none of the inequalities defining $\mathcal{P}_D^{(m)}$ involve two coordinates in the same row, $\mathcal{P}_D^{(m)}$ is a parapolytope. It therefore suffices to show that $\mathcal{P}_D^{(m)}$ and $\mathcal{P}_D^{(m-1)}$ are related as in Lemma~\ref{lem:Di2}, for it will then follow that $s_{\mathcal{P}_D^{(m-1)}} = \pi_{m-1}s_{\mathcal{P}_D^{(m)}}$, which combined with $\mathfrak s_{\widetilde D} = s_{\mathcal{P}_D^{(n)}}$ and $s_{\mathcal{P}_D} = s_{\mathcal{P}_D^{(1)}} $ will imply that $s_{\mathcal{P}_D} = \pi_1\pi_2 \cdots \pi_{n-1}(\mathfrak s_{\widetilde D}) = \mathfrak s_D$ by Proposition~\ref{prop:sd}, as desired.
Therefore, fix $c_{ij}$ for $i \neq m$, with $c_{in} = 0$ for $i < m$, and define $\mu_m, \dots, \mu_{n}, \nu_m, \dots, \nu_{n}$ as in Definition~\ref{def:parapolytope} for $\mathcal{P}_D^{(m-1)}$. We claim that $\nu_j + \mu_{j-1} = c_{m-1,j-1} + c_{m+1,j}$. It will then follow by summing over all $j$ that \[\nu_n = \sum_{j=m-1}^{n-1} c_{m-1,j} + \sum_{j=m+1}^{n} c_{m+1,j} - \sum_{j = m}^{n-1}(\mu_j + \nu_j).\]
Together with noting that the only lower bound on $x_{mn}$ is $0$, this will complete the proof by Lemma~\ref{lem:Di2}.
Consider the upper bounds on $x_{mj}$ in $\mathcal{P}_D^{(m-1)}$. We need to show that if $x_{mj} \leq C$ (where $C$ is some function of $c_{ij}$ for $i \neq m$), then $x_{m,j-1} \geq c_{m-1,j-1} + c_{m+1,j} - C$. This is immediate for the inequality $x_{mj} \leq c_{m-1,j-1}$ since $x_{m,j-1} \geq c_{m+1, j}$. Then consider a sequence $I$ such that $j_{s'+1}-i_{s'} = m$ and $j_{s'+1} = j$ for some $s'$, so that $-x_{mj}$ appears on the left side of $(*)$. Thus $C-x_{mj} \geq 0$, where
\[C = \sum_{s=1}^k c_{j_s-i_s,j_s} - \sum_{\substack{1 \leq s \leq k-1\\ s \neq s'}} c_{j_{s+1}-i_s,j_{s+1}} - \sum_{s=0}^{i_k} \lambda_{j_1-s}^{(n-s)}.\]
By inserting $j_{s'+1}-1 = j-1$ before $j_{s'+1} = j$ and $i_{s'}-1 = j-m-1$ before $i_{s'} = j-m$ in $I$ to get a new sequence $I'$, the left side of $(*)$ for $I'$ differs from the left side of $(*)$ for $I$ by $x_{m,j-1} + x_{m,j} - c_{m-1,j-1} - c_{m+1,j}$. Therefore the inequality $(*)$ for $I'$ is equivalent to
\[C+x_{m,j-1} - c_{m-1,j-1} - c_{m+1,j} \geq 0,\]
or $x_{m,j-1} \geq c_{m-1,j-1} + c_{m+1,j} - C$, as desired. A similar argument shows that any lower bound $x_{m,j-1} \geq C'$ yields an upper bound $x_{mj} \leq c_{m-1,j-1} + c_{m+1,j} - C'$, which completes the proof.
\end{proof}
\begin{example} \label{ex:3rows2}
Let $n=3$, and let $D$ be the column-convex diagram shown below with $\lambda^{(3)} = (a+b,a,0)$, $\lambda^{(2)} = (c,0)$, and $\lambda^{(1)} = (0)$.
\[
\begin{tikzpicture}[scale=.5]
\node at (0,2) {$1$};
\node at (0,1) {$2$};
\node at (0,0) {$3$};
\draw (1,-.5) rectangle (2,.5) rectangle (1,1.5);
\node at (3,0){$\cdots$};
\node at (3,1){$\cdots$};
\draw (4,-.5) rectangle (5,.5) rectangle (4,1.5);
\draw (2,-.5) rectangle (4,.5) rectangle (2,1.5);
\draw[decorate,decoration={brace,amplitude=5pt,raise=1ex}] (4.9,-.5)--(1.1,-.5) node[midway,yshift=-2em]{$a$};
\draw (5,-.5) rectangle (9,.5) (6,-.5)--(6,.5) (8,-.5)--(8,.5);
\node at (7,0){$\cdots$};
\draw[decorate,decoration={brace,amplitude=5pt,raise=1ex}] (8.9,-.5)--(5.1,-.5) node[midway,yshift=-2em]{$b$};
\draw (9,.5) rectangle (13,1.5) (10,.5)--(10,1.5) (12,.5)--(12,1.5);
\node at (11,1){$\cdots$};
\draw[decorate,decoration={brace,amplitude=5pt,raise=1ex}] (12.9,-.5)--(9.1,-.5) node[midway,yshift=-2em]{$c$};
\end{tikzpicture}
\]
Using the notation in the proof of Theorem~\ref{thm:gtsum}, all the polytopes $\mathcal{P}_D^{(m)}$ for $m=1,2,3$ have $x_{11} = a+b$, $x_{12} = a+c$, and $x_{13} = 0$.
\begin{itemize}
\item For $m=3$, $\mathcal{P}_D^{(3)}$ is a segment since we have $a \leq x_{22} \leq a+b$.
\item For $m=2$, the fiber of $\mathcal{P}_D^{(2)}$ above a point of $\mathcal{P}_D^{(3)}$ is defined by $0 \leq x_{33} \leq x_{22}$, making $\mathcal{P}_D^{(2)}$ a trapezoid. Note that for fixed $x_{33}$, the condition on $x_{22}$ is that $\max\{a, x_{33}\} \leq x_{22} \leq a+b$.
\item For $m=1$, the fiber of $\mathcal{P}_D = \mathcal{P}_D^{(1)}$ above a point of $\mathcal{P}_D^{(2)}$ is defined by
\begin{align*}
0 \leq x_{23} &\leq x_{11} + x_{12} + x_{13} + x_{33} - (\mu_2 + \nu_2)\\
&= (a+b)+(a+c) +0 + x_{33} - (\max\{a, x_{33}\} + a+b)\\
&= c+ \min\{a, x_{33}\}.
\end{align*}
This is equivalent to the inequalities on $x_{23}$ given in Example~\ref{ex:3rows}:
\begin{align*}
\lambda^{(2)}_2+\lambda^{(3)}_3 = 0 \leq x_{23} &\leq c+a = x_{12},\\
x_{23} &\leq c+x_{33} = x_{12}- \lambda_2^{(3)} + x_{33}.
\end{align*}
\end{itemize}
See Figure~\ref{fig:3rows} for a depiction of $\mathcal{P}^{(m)}_D$ for $m=3,2,1$.
\begin{figure}
\begin{tikzpicture}[z = {(250:.6)}]
\filldraw[fill=blue!10] (0,0,2)--(2,0,2)--(4,0,4)--(0,0,4);
\node at (1.5,0,3){$\mathcal{P}_D^{(2)}$};
\node (1) at (-1.5,0,2){$\mathcal{P}_D^{(3)}$};
\draw[->,>=stealth] (1)--(-.1,0,2.7);
\draw[ultra thick, red] (0,0,2)--(0,0,4);
\draw (0,2,2)--(2,4,2)--(2,4,4)--(0,2,4)--cycle (2,4,2)--(4,4,4)--(2,4,4);
\draw[gray!50] (0,0,2)--(0,2,2) (2,0,2)--(2,4,2);
\draw (0,0,4)--(0,2,4) (4,0,4)--(4,4,4);
\node at (1.5,2,3){$\mathcal{P}_D^{(1)}$};
\end{tikzpicture}
\caption{$\mathcal{P}_D=\mathcal{P}^{(1)}_D$ with faces $\mathcal{P}^{(2)}_D$ and $\mathcal{P}^{(3)}_D$ as in Example~\ref{ex:3rows2}. (See also Example~\ref{ex:3rows}.)}
\label{fig:3rows}
\end{figure}
\end{example}
\begin{remark}
The results of Magyar \cite{magyar} allow one to compute the character of the flagged Schur module for any diagram whose columns form a so-called \emph{strongly separated family} (or equivalently, for any \emph{percentage-avoiding diagram} \cite{dpeel}), which includes all Rothe diagrams of permutations. The technique above can be used to find suitable polytopes for a somewhat more general class of diagrams and permutations as Minkowski sums of faces of Gelfand-Tsetlin polytopes (such as the intermediate steps $\mathcal{P}_D^{(m)}$ in the proof of Theorem~\ref{thm:gtsum}), but it does not apply in full generality to all Schubert polynomials due to the ill behavior of general parapolytopes (see Remark~\ref{rem:para}).
\end{remark}
\section{Gelfand-Tsetlin polytopes as flow polytopes}
\label{sec:gtflowpolytope}
In this section we show that the Gelfand-Tsetlin polytope is integrally equivalent to a flow polytope and give alternative proofs of several known results using flow polytopes. We start by defining flow polytopes and providing the necessary background on them.
\subsection{Background on flow polytopes}
Let $G$ be a loopless directed acyclic connected (multi-)graph on the vertex set $[n+1]$ with $m$ edges. An integer vector $a=(a_1,\ldots,a_n,-\sum_{i=1}^na_i)\in \mathbb{Z}^{n+1}$ is called a \textbf{netflow vector}. A pair $(G,a)$ will be referred to as a \textbf{flow network}. To minimize notational complexity, we will typically omit the netflow $a$ when referring to a flow network $G$, describing it only when defining $G$. When not explicitly stated, we will always assume vertices of $G$ are labeled so that $(i,j)\in E(G)$ implies $i<j$.
To each edge $(i,j)$ of $G$, associate the type $A$ positive root $e_i-e_j\in\mathbb{R}^n$. Let $M_G$ be the incidence matrix of $G$, the matrix whose columns are the multiset of vectors $e_i-e_j$ for $(i,j)\in E(G)$. A \textbf{flow} on a flow network $G$ with netflow $a$ is a vector $f=(f(e))_{e\in E(G)}$ in $\mathbb{R}_{\geq 0}^{E(G)}$ such that $M_Gf=a$. Equivalently, for all $1\leq i \leq n$, we have
\[\sum_{e=(k,i)\in E(G)}f(e)+a_i = \sum_{e=(i,k)\in E(G)} f(e). \]
The fact that the netflow of vertex $n+1$ is $-\sum_{i=1}^n a_i$ is implied by these equations.
Define the \textbf{flow polytope} $\mathcal{F}_G(a)$ of a graph $G$ with netflow $a$ to be the set of all flows on $G$:
\[\mathcal{F}_G=\mathcal{F}_G(a)=\{f\in\mathbb{R}^{E(G)}_{\geq 0} \mid M_Gf=a \}. \]
\begin{remark}
\label{rem:flownetwork}
When $G$ is a flow network $(G,a)$, we will write $\mathcal{F}_G$ for $\mathcal{F}_G(a)$.
\end{remark}
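As a concrete check of the definitions, the following short Python sketch (ours) tests whether a given vector is a flow on a flow network, with the graph encoded as a list of edges $(i,j)$, $i<j$, on the vertex set $[n+1]$.
\begin{verbatim}
# Sketch: check that f is a flow on (G, a), i.e. f >= 0 and M_G f = a.
def is_flow(edges, netflow, f, tol=1e-9):
    if any(v < -tol for v in f):
        return False
    excess = list(netflow)              # a_i + inflow - outflow should vanish
    for (i, j), fe in zip(edges, f):
        excess[i - 1] -= fe             # the edge leaves vertex i
        excess[j - 1] += fe             # the edge enters vertex j
    return all(abs(e) < tol for e in excess)

# Example: the path 1 -> 2 -> 3 with netflow (1, 0, -1) carries the unit flow.
print(is_flow([(1, 2), (2, 3)], [1, 0, -1], [1, 1]))   # True
\end{verbatim}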
\subsection{The Gelfand-Tsetlin polytope as a flow polytope}
\begin{namedtheorem}[\ref{thm:gtflowpolytope}]
$\mathrm{GT}(\lambda)$ is integrally equivalent to $\mathcal{F}_{G_\lambda}$.
\end{namedtheorem}
\medskip
Recall that given a partition $\lambda = (\lambda_1,\dots,\lambda_n)\in \mathbb{Z}^n_{\geq 0}$, the \textbf{Gelfand-Tsetlin polytope} $\mathrm{GT}(\lambda)$ is the set of all nonnegative triangular arrays
\begin{center}
\begin{tabular}{ccccccc}
$x_{11}$&&$x_{12}$&&$\cdots$&&$x_{1n}$\\
&$x_{22}$&&$x_{23}$&$\cdots$&$x_{2n}$&\\
&&$\cdots$&&$\cdots$&&\\
&&$x_{n-1,n-1}$&&$x_{n-1,n}$&&\\
&&&$x_{nn}$&&&
\end{tabular}
\end{center}
such that
\begin{align*}
x_{1j}=\lambda_j &\mbox{ for all } 1\leq j\leq n\\
x_{i-1,j-1}\geq x_{ij}\geq x_{i-1,j} &\mbox{ for all } 2\leq i \leq j\leq n.
\end{align*}
Recall also that two integral polytopes $\mathcal{P}$ in $\mathbb{R}^d$ and $\mathcal{Q}$ in $\mathbb{R}^m$ are \textbf{integrally equivalent}
if there is an affine transformation
$\varphi\colon\mathbb{R}^d \to \mathbb{R}^m$ whose restriction to
$\mathcal{P}$ is a bijection $\varphi\colon \mathcal{P} \to \mathcal{Q}$
that preserves the lattice, i.e., $\varphi$ is a
bijection between $\mathbb{Z}^d \cap {\rm aff}(\mathcal{P})$ and
$\mathbb{Z}^m \cap {\rm aff}(\mathcal{Q})$, where ${\rm aff}(\cdot)$ denotes affine span. The map $\varphi$ is called an \textbf{integral equivalence}. Note that integrally equivalent polytopes have the same Ehrhart polynomials, and therefore the same volume.
We now define the flow network $G_\lambda$, describing the graph and its associated netflow (see Remark \ref{rem:flownetwork}). For an illustration of $G_{\lambda}$, see Figure \ref{Glambda}.
\begin{definition}
\label{def:Glambda}
For a partition $\lambda\in\mathbb{Z}^n_{\geq 0}$, let $G_\lambda$ be defined as follows:
\noindent If $n=1$, let $G_\lambda$ be a single vertex $v_{22}$, whose flow polytope is defined to consist of the single point $0$. Otherwise ($n\geq 2$), let $G_\lambda$ have vertices
\[V(G_\lambda) = \{v_{ij} \mid 2\leq i\leq j \leq n\}\cup\{v_{i,i-1} \mid 3\leq i\leq n+2\}\cup\{v_{i,n+1} \mid 3\leq i \leq n+1\} \]
and edges
\begin{equation*}
\begin{split}
E(G_\lambda) &= \{(v_{ij},v_{i+1,j}) \mid 2\leq i\leq j\leq n \}\cup\{(v_{i,n+1},v_{i+1,n+1}) \mid 3\leq i \leq n+1 \}\\
&\quad\cup\{(v_{ij},v_{i+1,j+1}) \mid 2\leq i\leq j\leq n \}\cup\{(v_{i,i-1},v_{i+1,i}) \mid 3\leq i \leq n+1 \}.
\end{split}
\end{equation*}
The default netflow vector on $G_\lambda$ is as follows:
\begin{itemize}
\item To vertex $v_{2j}$ for $2\leq j \leq n$, assign netflow $\lambda_{j-1}-\lambda_{j}$.
\item To vertex $v_{n+2,n+1}$, assign netflow $\lambda_{n}-\lambda_{1}$.
\item To all other vertices, assign netflow $0$.
\end{itemize}
Given a flow on $G_\lambda$, denote the flow value on each edge $(v_{ij},v_{i+1,j})$ by $a_{ij}$, and denote the flow value on each edge $(v_{ij},v_{i+1,j+1})$ by $b_{ij}$.
\end{definition}
\begin{figure}[ht]
\includegraphics[scale=.55]{GTGraphEdgesVertices.pdf}
\caption{The flow network $G_\lambda$ with $\ell(\lambda)=5$.}
\label{Glambda}
\end{figure}
\bigskip
\begin{proof}[Proof of Theorem \ref{thm:gtflowpolytope}.]
To map a point $(x_{ij})_{i,j}\in \mathrm{GT}(\lambda)$ to $\mathcal{F}_{G_{\lambda}}$, use the map
\begin{align*}
a_{i\,j}&=x_{i-1,j-1}-x_{ij},\\
b_{i\,j}&=x_{ij}-x_{i-1,j}.
\end{align*}
Conversely, to map a flow $f\in\mathcal{F}_{G_\lambda}$ to $\mathrm{GT}(\lambda)$, use either
\[x_{ij}=\lambda_j+\sum_{k=2}^{i}b_{kj}\mbox{\hspace{2ex}or\hspace{2ex} } x_{ij}=\lambda_{j-i+1}-\sum_{k=0}^{i-2}a_{i-k,j-k}.\]
It is easily checked that these two maps are inverses of each other and are both integral, completing the proof.
\end{proof}
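The two maps in the proof are easy to check numerically. The sketch below (ours) stores a Gelfand-Tsetlin pattern as a dictionary $\{(i,j)\colon x_{ij}\}$ with first row $x_{1j}=\lambda_j$, computes the flow coordinates $a_{ij}$, $b_{ij}$ for $2\le i\le j\le n$, and reconstructs the pattern from the $b$-coordinates; the remaining edge flows of $G_\lambda$ are determined by conservation and are not shown.
\begin{verbatim}
# Sketch: round trip GT(lambda) <-> flow coordinates from the proof above.
def gt_to_flow(x, n):
    a = {(i, j): x[i - 1, j - 1] - x[i, j]
         for i in range(2, n + 1) for j in range(i, n + 1)}
    b = {(i, j): x[i, j] - x[i - 1, j]
         for i in range(2, n + 1) for j in range(i, n + 1)}
    return a, b

def flow_to_gt(b, lam):
    n = len(lam)
    return {(i, j): lam[j - 1] + sum(b.get((k, j), 0) for k in range(2, i + 1))
            for i in range(1, n + 1) for j in range(i, n + 1)}

lam = (3, 1, 0)
x = {(1, 1): 3, (1, 2): 1, (1, 3): 0, (2, 2): 2, (2, 3): 1, (3, 3): 1}
a, b = gt_to_flow(x, 3)
assert flow_to_gt(b, lam) == x      # the two maps are mutually inverse
\end{verbatim}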
\begin{example}
\label{exp:coordinates}
For $n=5$, the integral equivalences between $\mathrm{GT}(\lambda)$ and $\mathcal{F}_{G_\lambda}$ are: \newline
\begin{minipage}{.4\linewidth}
\centering
\includegraphics[scale=.40]{ArraysToFlows.pdf}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[scale=.40]{FlowsToArrays.pdf}
\end{minipage}
\end{example}
\subsection{Consequences of the Gelfand-Tsetlin polytope being a flow polytope} Here we provide a few corollaries to the Gelfand-Tsetlin polytope $\mathrm{GT}(\lambda)$ being integrally equivalent to the flow polytope $\mathcal{F}_{G_{\lambda}}$. In \cite{paper2} we give further applications of this result, particularly about the volume and Ehrhart polynomial of Gelfand-Tsetlin polytopes. The corollaries presented below are all well-known; we include them here to demonstrate proofs via flow polytopes. We begin with two well-known results about flow polytopes, and then we give their applications to Gelfand-Tsetlin polytopes.
\begin{lemma}[\cite{BV2}]
\label{prop:flowminkowskidecomposition}
For a graph $G$ on $[n+1]$ and nonnegative integers $a_1,\ldots,a_n$,
\[\mathcal{F}_G(a_1,\ldots,a_n,-\sum_{i=1}^{n}a_i) = a_1\mathcal{F}_G(e_1-e_{n+1})+a_2\mathcal{F}_G(e_2-e_{n+1})+\cdots+a_n\mathcal{F}_G(e_n-e_{n+1}).\]
\end{lemma}
\begin{proof}
One inclusion is proven by adding flows edgewise. The other is shown by induction on the number of nonzero $a_i$.
\end{proof}
\begin{corollary}
If $G$ is a graph on $[n+1]$ and $a_1,\ldots,a_n,b_1,\ldots,b_n$ are nonnegative integers, then
\[\mathcal{F}_G(a_1,\ldots,a_n,-\sum_{i=1}^{n}a_i)+\mathcal{F}_G(b_1,\ldots,b_n,-\sum_{i=1}^{n}b_i) = \mathcal{F}_G(a_1+b_1,\ldots,a_n+b_n,-\sum_{i=1}^{n}(a_i+b_i)). \]
\end{corollary}
\begin{proof}
Induct on the number of nonzero $b_i$ and use Lemma \ref{prop:flowminkowskidecomposition}.
\end{proof}
As a consequence of the previous two results and the integral equivalence of $\mathrm{GT}(\lambda)$ and $\mathcal{F}_{G_\lambda}$, we obtain the following two well-known facts about Gelfand-Tsetlin polytopes.
\begin{namedlemma}[\ref{lem:gtsum}]
If $\lambda$ is a partition with $n$ parts, then the Gelfand-Tsetlin polytope $\mathrm{GT}(\lambda)$ decomposes as the Minkowski sum
\[\mathrm{GT}(\lambda) = \sum_{k=1}^{n}(\lambda_k-\lambda_{k+1})\mathrm{GT}(1^k0^{n-k}),\]
where $\lambda_{n+1}$ is taken to be zero.
\end{namedlemma}
\begin{lemma}
If $\lambda$ and $\mu$ are partitions with $n$ parts, then
\[\mathrm{GT}(\lambda)+\mathrm{GT}(\mu) = \mathrm{GT}(\lambda+\mu).\]
\end{lemma}
\vspace{2ex}
\noindent Recall that the \textbf{Schur polynomial} $s_\lambda$ can be expressed as
\[s_{\lambda}(x_1,\ldots,x_n)=\sum_{P\in \mathrm{GT}(\lambda)\cap \mathbb{Z}^{\binom{n+1}{2}}} x_1^{wt(P)_1}x_2^{wt(P)_2}\cdots x_n^{wt(P)_n} \]
where $wt\colon\mathbb{R}^{\binom{n+1}{2}}\to\mathbb{R}^n$ is the \textbf{weight map}, defined by
\[wt(P)_i = \sum_{j=i}^{n}x_{ij}-\sum_{j=i+1}^{n} x_{i+1,j}\]
for $P\in \mathrm{GT}(\lambda)$.
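As a sanity check, one can enumerate the integer Gelfand-Tsetlin patterns row by row and recover $s_\lambda$ from the weight map; the following Python sketch (ours, with illustrative names) does exactly this.
\begin{verbatim}
# Sketch: the Schur polynomial s_lambda from integer Gelfand-Tsetlin patterns.
from itertools import product
from sympy import symbols, Add, Mul

def gt_patterns(lam):
    # rows[0] = lambda; each following row interlaces the previous one
    def extend(rows):
        prev = rows[-1]
        if len(prev) == 1:
            yield rows
            return
        ranges = [range(prev[k + 1], prev[k] + 1) for k in range(len(prev) - 1)]
        for nxt in product(*ranges):
            yield from extend(rows + [list(nxt)])
    yield from extend([list(lam)])

def schur(lam):
    n = len(lam)
    x = symbols(f'x1:{n + 1}')
    terms = []
    for rows in gt_patterns(lam):
        sums = [sum(r) for r in rows] + [0]
        terms.append(Mul(*[x[i] ** (sums[i] - sums[i + 1]) for i in range(n)]))
    return Add(*terms)

# schur((2, 1, 0)) evaluates to 8 at x1 = x2 = x3 = 1 (one term per pattern).
\end{verbatim}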
We now introduce the flow polytopal analogue of $wt$ and study it. Recall the variables $\{a_{ij}\}_{i,j}\cup\{b_{ij}\}_{i,j}$ of Definition \ref{def:Glambda}: in $\mathcal{F}_{G_\lambda}$, $a_{ij}$ represents the flow on the edge $(v_{ij},v_{i+1,j})$ and $b_{ij}$ represents the flow on the edge $(v_{ij},v_{i+1,j+1})$.
\begin{definition}
Let $\lambda$ be a partition with $n$ parts. Define the \textbf{graphical weight map} $gwt\colon\mathbb{R}^{E(G_\lambda)}\to \mathbb{R}^n$ by setting
\[gwt(x_{(v_{ij},v_{i+1,j})}) =e_{i-1}\mbox{\hspace{2ex} and \hspace{1ex}} gwt(x_{(v_{ij},v_{i+1,j+1})}) =0, \]
so in particular
\[gwt(a_{ij}) =e_{i-1}\mbox{\hspace{2ex} and \hspace{1ex}} gwt(b_{ij}) =0. \]
\end{definition}
\begin{proposition}
\label{prop:graphicalweightshift}
For a partition $\lambda$ with $n$ parts, let $f\in\mathcal{F}_{G_{\lambda}}$ correspond to $P_f\in \mathrm{GT}(\lambda)$. Then, the maps $gwt$ and $wt$ are related by the translation
\[wt(P_f)=gwt(f) +\lambda_n \bm{1}_n, \]
where $\bm{1}_n$ denotes the vector of all ones in $\mathbb{R}^n$.
\end{proposition}
\begin{proof}
We have
\begin{align*}
gwt(f)_i&=a_{i+1,i+1}+\cdots+a_{i+1,n}+a_{i+1,n+1}\\
&=a_{i+1,i+1}+\cdots+a_{i+1,n}+b_{2n}+\cdots+b_{in}.
\end{align*}
Using the integral equivalence $x_{ij}=\lambda_{j-i+1}-\sum_{k=0}^{i-2}a_{i-k,j-k}$ between $\mathrm{GT}(\lambda)$ and $\mathcal{F}_{G_\lambda}$,
\begin{align*}
wt(P_f)_i&=\sum_{j=i}^{n}{x_{ij}}-\sum_{j=i+1}^{n}x_{i+1,j}\\
&=x_{in}+\sum_{j=i}^{n-1}\left(x_{ij}-x_{i+1,j+1}\right)\\
&=x_{in}+\sum_{j=i+1}^{n}a_{i+1,j}.
\end{align*}
Now, using the integral equivalence $x_{ij}=\lambda_j+\sum_{k=2}^{i}b_{kj}$, we have
\begin{align*}
(gwt(f)-wt(P_f))_i&=x_{in}-\sum_{k=2}^{i}b_{kn}\\
&=\left(\lambda_n+\sum_{k=2}^{i}b_{kn}\right)-\sum_{k=2}^{i}b_{kn}\\
&=\lambda_n.\qedhere
\end{align*}
\end{proof}
Using the map $gwt$, we now describe the polytopes $\mathrm{GT}(1^k0^{n-k})$ and rederive a result of Postnikov from \cite{beyond}.
\begin{proposition}
\label{prop:gwt}
If $\lambda$ is of the form $1^k0^{n-k}$ with $1\leq k\leq n$, then $gwt(\mathcal{F}_{G_\lambda})$ equals the hypersimplex $\Delta_{k,n}=\mathrm{Conv}(\{x\in[0,1]^n \mid x_1+x_2+\cdots+x_n=k\})$.
\end{proposition}
\begin{proof}
If $\lambda$ is of the form $1^k0^{n-k}$, then $G_\lambda$ will have a single source with netflow $1$ and a single sink with netflow $-1$. Ignoring all edges and vertices not lying on a path from the source to the sink (which will carry zero flow), we are left with a rectangular grid as shown in Figure \ref{fig:rectgwt}. A path from source to sink in the grid requires $k$ NW steps and $n-k$ SW steps. Recall (cf. \cite{qcp}, Lemma 3.1) that the vertices of a flow polytope with a single source and sink are exactly the flows that are nonzero only on a path from source to sink.
Thus, the vertices of $\mathcal{F}_{G_\lambda}$ are exactly the flows with support a path from source to sink in the grid. These paths are in bijection with length $n$ words on $\{N,S\}$ having $k$ $N$'s (corresponding to NW steps in the path) and $n-k$ $S$'s (corresponding to SW steps in the path). By definition, the map $gwt$ takes a vertex of $\mathcal{F}_{G_\lambda}$ to the vector with ones in the positions of the $N$'s in the corresponding string, and zero elsewhere. Thus,
\[gwt(V(\mathcal{F}_{G_\lambda})) = \{x\in \{0,1\}^n \mid x_1+\cdots+x_n=k \} = V(\Delta_{k,n}),\]
so $gwt(\mathcal{F}_{G_\lambda})=\Delta_{k,n}$.
\end{proof}
\begin{figure}
\begin{center}
\includegraphics[scale=.45]{RectangularGrid.pdf}
\end{center}
\caption{$\mathrm{GT}(1,1,1,0,0)$ and the associated map $gwt$.}
\label{fig:rectgwt}
\end{figure}
\begin{corollary}[\cite{beyond}]
The permutahedron $\mathcal{P}_\lambda=\mathrm{Conv}(S_n\cdot \lambda)$ of $\lambda$ equals the Minkowski sum of hypersimplices
\[\mathcal{P}_\lambda = (\lambda_1-\lambda_2)\Delta_{1,n}+(\lambda_2-\lambda_3)\Delta_{2,n}+\cdots+(\lambda_{n-1}-\lambda_n)\Delta_{n-1,n}+\lambda_{n}\Delta_{n,n}. \]
\end{corollary}
\begin{proof}
Since $wt(\mathrm{GT}(\lambda)) = \mathcal{P}_\lambda$, applying $gwt$ to both sides of
\[\mathcal{F}_{G_\lambda}=\sum_{k=1}^{n-1} (\lambda_k-\lambda_{k+1})\mathcal{F}_{G_{(1^k0^{n-k})}} \]
and using Propositions \ref{prop:gwt} and \ref{prop:graphicalweightshift} yields
\[\mathcal{P}_\lambda - \lambda_n\bm{1}_n = (\lambda_1-\lambda_2)\Delta_{1,n}+(\lambda_2-\lambda_3)\Delta_{2,n}+\cdots+(\lambda_{n-1}-\lambda_n)\Delta_{n-1,n}.\qedhere \]
\end{proof}
\bigskip
\subsection{The Minkowski sum of Gelfand-Tsetlin polytopes}
\label{sec:sum}
In this section we observe that the Minkowski sum of Gelfand-Tsetlin polytopes $\mathcal{P}_D$ appearing in Theorem \ref{thm:gtsum} can be viewed naturally as a subset of a larger Gelfand-Tsetlin polytope.
Recall the embedding of the Gelfand-Tsetlin polytopes in the sum $\mathcal{P}_D=\mathrm{GT}(\lambda^{(1)})+\mathrm{GT}(\lambda^{(2)})+\cdots+\mathrm{GT}(\lambda^{(n)})$ from Section \ref{sec:intro}. In light of Theorem \ref{thm:gtflowpolytope}, $\mathcal{P}_D$ should be integrally equivalent to a sum of flow polytopes \[\mathcal{F}_{G_{\lambda^{(1)}}}+\cdots+\mathcal{F}_{G_{\lambda^{(n)}}}.\] Just like for the Gelfand-Tsetlin polytope sum, we must specify how the graphs $G_{\lambda^{(i)}}$, $i \in [n]$, are embedded.
Let us embed $G_{\lambda^{(k)}}$, $k \in [n]$, into $G_{\lambda^{(n)}}$ by identifying $v_{ij}$ (see Definition \ref{def:Glambda}) in $G_{\lambda^{(k)}}$ with $v_{i,j+n-k}$ in $G_{\lambda^{(n)}}$. Note that the trivial case $G_{\lambda^{(1)}}$ is just a single vertex with netflow $0$ and flow polytope defined to be the single point $0$.
Lemmas \ref{lem:ms} and \ref{lem:g} follow readily from the definitions and the integral equivalence given in Theorem \ref{thm:gtflowpolytope}:
\begin{lemma} \label{lem:ms}
The Minkowski sum
\[\mathrm{GT}(\lambda^{(1)})+\cdots+\mathrm{GT}(\lambda^{(n)}) \]
is integrally equivalent to
\[\mathcal{F}_{G_{\lambda^{(1)}}}+\cdots+\mathcal{F}_{G_{\lambda^{(n)}}} \] with the embedding specified above.
\end{lemma}
\begin{definition}
Given partitions $\lambda^{(k)}$ with $k$ parts for $k\in[n]$, let $G({\lambda^{(1)}},\ldots,{\lambda^{(n)}})$ denote the flow network obtained by overlaying the flow networks $G_{\lambda^{(1)}},\ldots,G_{\lambda^{(n)}}$ according to the embedding specified above and adding the corresponding netflows. Let $\widehat{G}({\lambda^{(1)}},\ldots,{\lambda^{(n)}})$ denote the flow network obtained from $G_{\lambda^{(1)}},\ldots,G_{\lambda^{(n)}}$ by moving all negative netflows to $v_{n+2,n+1}$ and replacing them by zero netflows. The case $n=4$ is demonstrated in Figure \ref{fig:sumexample}.
\end{definition}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=.8]{SumGTGraphEdgesVertices3.pdf}
\end{center}
\caption{The flow networks $G({\lambda^{(1)}},{\lambda^{(2)}},{\lambda^{(3)}},{\lambda^{(4)}})$ (left) and $\widehat{G}({\lambda^{(1)}},{\lambda^{(2)}},{\lambda^{(3)}},{\lambda^{(4)}})$ (right)}
\label{fig:sumexample}
\end{figure}
\begin{lemma}\label{lem:g} The following polytope inclusions hold:
\[\mathcal{F}_{G_{\lambda^{(n)}}}+\cdots+\mathcal{F}_{G_{\lambda^{(1)}}} \subset \mathcal{F}_{G(\lambda^{(1)},\ldots,\lambda^{(n)})} \subset \mathcal{F}_{\widehat{G}(\lambda^{(1)},\ldots,\lambda^{(n)})},\]
the latter being true up to an integral translation of $\mathcal{F}_{G(\lambda^{(1)},\ldots,\lambda^{(n)})}$.
\medskip
\noindent In general, none of the above inclusions is an equality. The polytope $\mathcal{F}_{\widehat{G}(\lambda^{(1)},\ldots,\lambda^{(n)})}$ is integrally equivalent to the Gelfand-Tsetlin polytope ${\rm GT }(\mu)$ where $\mu_n$ is arbitrary, and for $k<n$,
\[\mu_k=\mu_{k+1}+\sum_{j=0}^{k-1} \lambda_{k-j}^{(n-j)}-\lambda_{k-j+1}^{(n-j)}. \]
\end{lemma}
Thus, we conclude that for a column-convex diagram $D$ the polytope $\mathcal{P}_D$ can be thought of as obtained from ${\rm GT }(\mu)$ specified in Lemma \ref{lem:g} via further hyperplane cuts. Recall also Proposition \ref{prop:gtsum}, which gives another view on $\mathcal{P}_D$.
\section*{Acknowledgments}
We are grateful to Allen Knutson for inspiring conversations about Schubert polynomials.
\bibliographystyle{plain}
| {
"timestamp": "2019-03-28T01:00:46",
"yymm": "1903",
"arxiv_id": "1903.05548",
"language": "en",
"url": "https://arxiv.org/abs/1903.05548",
"abstract": "Gelfand-Tsetlin polytopes are classical objects in algebraic combinatorics arising in the representation theory of $\\mathfrak{gl}_n(\\mathbb{C})$. The integer point transform of the Gelfand-Tsetlin polytope $\\mathrm{GT}(\\lambda)$ projects to the Schur function $s_{\\lambda}$. Schur functions form a distinguished basis of the ring of symmetric functions; they are also special cases of Schubert polynomials $\\mathfrak{S}_{w}$ corresponding to Grassmannian permutations.For any permutation $w \\in S_n$ with column-convex Rothe diagram, we construct a polytope $\\mathcal{P}_{w}$ whose integer point transform projects to the Schubert polynomial $\\mathfrak{S}_{w}$. Such a construction has been sought after at least since the construction of twisted cubes by Grossberg and Karshon in 1994, whose integer point transforms project to Schubert polynomials $\\mathfrak{S}_{w}$ for all $w \\in S_n$. However, twisted cubes are not honest polytopes; rather one can think of them as signed polytopal complexes. Our polytope $\\mathcal{P}_{w}$ is a convex polytope. We also show that $\\mathcal{P}_{w}$ is a Minkowski sum of Gelfand-Tsetlin polytopes of varying sizes. When the permutation $w$ is Grassmannian, the Gelfand-Tsetlin polytope is recovered. We conclude by showing that the Gelfand-Tsetlin polytope is a flow polytope.",
"subjects": "Combinatorics (math.CO)",
"title": "Schubert polynomials as projections of Minkowski sums of Gelfand-Tsetlin polytopes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9916842217682944,
"lm_q2_score": 0.7154239836484143,
"lm_q1q2_score": 0.7094746764587507
} |
https://arxiv.org/abs/1705.10293 | Solutions of the Schroedinger equation for piecewise harmonic potentials: remarks on the asymptotic behavior of the wave functions | We discuss the solutions of the Schroedinger equation for piecewise potentials, given by the harmonic oscillator potential for $\vert x\vert >a$ and an arbitrary function for $\vert x\vert <a$, using elementary methods. The study of this problem sheds light on usual errors when discussing the asymptotic behavior of the eigenfunctions of the quantum harmonic oscillator and can also be used for the analysis of the eigenfunctions of the hydrogen atom. We present explicit results for the energy levels of a potential of this class, used to model the confinement of electrons in nanostructures. | \section{Introduction}\label{sec:intro}
The Schroedinger equation for the linear harmonic oscillator reads
\begin{equation}
\frac{d^2\psi}{dz^2}+({\mathcal E}-\frac{z^2}{4})\psi=0\, ,
\label{Schroed}
\end{equation}
where $z=\sqrt{2 m \omega/\hbar}\, x$, $\mathcal{E}=E/(\hbar\omega)$, $x$ is the coordinate of the oscillator, and $m$ and $\omega$ its mass and frequency, respectively.
The traditional approach for solving this equation consists in proposing a solution of the form
\begin{eqnarray}
\psi(z)&=&e^{-z^2/4}\left[\sum_{n\geq 0} a_{2n}z^{2n}+ \sum_{n\geq 0} a_{2n+1}z^{2n+1}\right]
\nonumber\\
&\equiv& e^{-z^2/4}\left[a_0 \, S_{even}(z)+a_1 \, S_{odd}(z)\right]\, ,
\label{prop}
\end{eqnarray}
and obtain recurrence relations for the coefficients
\begin{equation}
a_{n+2}=\frac{n-{\mathcal E}+1/2}{(n+1)(n+2)}a_n\, .
\label{rec}
\end{equation}
Note that these are two sets of independent recurrence relations for the even and odd coefficients, which are fully determined once one fixes $a_0$ and $a_1$.
The energy eigenvalues are obtained by imposing the condition that $\vert\psi(z)\vert^2$ should be integrable. In many textbooks
\cite{text1,text2,text3,text4,text5,text6,text7,text8}
it is remarked
that, for large $n$, $a_{2n+2}/a_{2n}\simeq a_{2n+3}/a_{2n+1}\simeq 1/(2n)$, which is also the behavior for the coefficients of the series of $e^{z^2/2}$. Then, it is
(incorrectly) argued \cite{text1,text2,text3,text4} that, for large $z$,
\begin{equation}
S_{even}\simeq S_{odd}/z \simeq e^{z^2/2}\, ,
\label{incor}
\end{equation}
and therefore $\vert\psi(z)\vert^2$ would not be integrable, unless
the series include only a finite number of terms. As a consequence, the allowed energy eigenvalues are ${\mathcal E}=n+1/2$, for some nonnegative integer $n$.
It was pointed out long ago that, although the conclusion for the eigenvalues is correct, the argument is wrong. \cite{Buchdahl} It is not true that, if two power series with coefficients
$a_n$ and $b_n$ are such that $a_{n+1}/a_n\simeq b_{n+1}/b_n$ for large $n$, then they have the same behavior for large arguments. This is
implicitly assumed in other textbooks, \cite{text5,text6,text7,text8} where it is pointed out that $a_{2n+2}/a_{2n}\simeq 1/(2n)$ is the behavior of the coefficients of the series
of $z^k e^{z^2/2}$ for {\it any value of} $k$. This property is used to argue that the asymptotic behavior of the odd and even series
should be of this form for some particular values of $k$. Note that this claim is at least
incomplete, since $z^k e^{z^2/2}$ admits a representation in powers of $z$ (for all $z$) only if $k$ is a natural number (or possibly an integer, if one also considers
Laurent series). Moreover, in principle there could exist other functions with different asymptotic behavior and the same ratio
$a_{2n+2}/a_{2n}$ in the large $n$ limit.
A correct reasoning is as follows. \cite{Bowen} If $a_{n+1}/a_n > b_{n+1}/b_n$ and $a_n, b_n >0$ for $n\geq N$, then one can show that
\begin{equation} \label{bound}
\sum_{n\geq 0} a_{n}z^{n}\geq k \sum_{n\geq 0} b_{n}z^{n} + P(z)\, ,
\end{equation}
where $k$ is a positive constant and $P(z)$ a polynomial of degree $N$. One can use this bound to show that the odd and even series
in Eq.~\eqref{prop} diverge faster than $e^{\alpha z^2}$ with $\alpha<1/2$, unless they contain a finite number of terms. From this property one can derive the allowed eigenvalues for the harmonic oscillator.
In the present paper we discuss a related problem: the Schroedinger equation in the presence of piecewise potentials that coincide with the harmonic oscillator potential for $\vert x \vert >a$. The analysis of this potential makes more evident the usual mistakes in the discussions of the asymptotic behavior of the wave functions, as the following (erroneous) argument shows: if ${\mathcal E}-1/2$ were not an integer, as the odd and even series cannot cancel each other for $z\to +\infty$ (Eq.~\eqref{incor}), the wave function would not be quadratically integrable. Therefore, ${\mathcal E}-1/2$ must be a nonnegative integer, and the eigenvalues for the piecewise potentials would coincide with those of the harmonic oscillator, irrespective of the form of the potential for $\vert x \vert <a$. This is obviously nonsense. The error in the argument goes back to the assumed asymptotic behavior in Eq.~\eqref{incor}. As we will see, $S_{even}/S_{odd}\to const$ as $z\to + \infty$, for any value of ${\mathcal E}$ such that ${\mathcal E}-1/2\neq 0,1,2,... \, .$
It is worth noting that, for solving this problem, it is not enough to obtain a lower bound for the series: the leading behavior of both series is needed. As this behavior is not difficult to obtain, it will be useful even when discussing the usual quantum harmonic oscillator. Moreover, when considering the radial Schroedinger equation for the hydrogen atom, one also encounters vague arguments in the analysis of the asymptotic behavior of the solutions. Our results shed light on the discussion of this and related problems.
Piecewise potentials involving the harmonic potential have been considered before by other authors. \cite{similar1, similar2, similar3,similar4,similar5} In some works,
\cite {similar1,similar2,similar3}
the harmonic part of the potential is restricted to a bounded region ($\vert x\vert < a$ in our notation), the opposite situation of the one considered here, and therefore the discussion of the asymptotic behavior of the series Eq.~\eqref{prop}
is not relevant there. In other works, \cite{similar4, similar5} the authors consider the combination of a harmonic potential for $x>a$ and a finite potential step for $x<a$. In this case,
the analysis of the large-$x$ behavior of the solutions is relevant, and could be discussed using the elementary methods proposed below. Alternatively, in Ref.\ \onlinecite{similar4} the problem is tackled by solving the Schroedinger equation in terms of special functions, while in Ref.\ \onlinecite{similar5} the eigenvalue equation is solved using an integral representation method.
\section{Schroedinger equation with piecewise harmonic potentials}
Let us now consider the Schroedinger equation
\begin{equation}
\frac{d^2\psi}{dz^2}+({\mathcal E}-V(z))\psi=0\, ,
\label{SchroedV}
\end{equation}
with
\begin{equation}
\label{ppotential}
V(z)=\left\{\begin{matrix}
\frac{1}{4} (z+l)^2\,\,\,\,\, &&z<-l,
\\ f(z) \,\, && -l<z<l ,
\\ \frac{1}{4} (z-l)^2 \,\, &&z>l,
\end{matrix}\right.
\end{equation}
where $f(z)$ is an arbitrary function and $l=\sqrt{2 m \omega/\hbar}\, a$. The potential is harmonic for $\vert z\vert >l$ and arbitrary otherwise.
Let us first analyze the asymptotic behavior of the solutions of Eq.~\eqref{SchroedV}. As the potential is defined in three different regions, one can
study the behavior for $z<-l$ and $z>l$ separately. It will be enough to analyze the asymptotic behavior in the region $z > l$. We
introduce the notation $y=z-l$. Given the form of the potential, one expects that, for $y\to +\infty$:
\begin{equation}\label{asymp}
\psi(y)\simeq y^\gamma e^{\beta y^2}\, ,
\end{equation}
for some constants $\beta$ and $\gamma$. Indeed, inserting this ansatz into Eq.~\eqref{SchroedV} we obtain
\begin{equation}\label{betagamma}
4 \beta^2-\frac{1}{4} + y^{-2} ({\mathcal E}+2\beta(1+2\gamma)) +O(y^{-4})=0\, ,
\end{equation}
that is satisfied, in the limit
$y\gg 1$, when
\begin{equation}
\beta^2= \frac{1}{16}\quad \gamma=-\frac{1}{4\beta} ({\mathcal E}+ 2 \beta)\, .
\label{asympcoef}
\end{equation}
We conclude that the Schroedinger equation has a solution that converges at $y\to+\infty$ ($\beta=-1/4, \gamma={\mathcal E}- 1/2$) and a linearly independent solution that diverges in the same
limit ($\beta=1/4, \gamma=-{\mathcal E}-1/2$). The analysis could be pursued systematically by assuming
\begin{equation}\label{asymp2}
\psi(y)\simeq y^\gamma e^{\beta y^2}(1+\frac{\gamma_1}{y}+\frac{\gamma_2}{y^2}+...)\, ,
\end{equation}
but this will not be necessary for what follows.
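The expansion behind Eq.~\eqref{betagamma} can also be checked symbolically; the following minimal sympy sketch (ours) verifies the coefficient structure.
\begin{verbatim}
# Sketch: substitute psi = y**gamma * exp(beta*y**2) into Eq. (SchroedV).
import sympy as sp

y, beta, gamma, E = sp.symbols('y beta gamma E')
psi = y**gamma * sp.exp(beta * y**2)
ratio = sp.simplify((sp.diff(psi, y, 2) + (E - y**2 / 4) * psi) / psi)
target = (4*beta**2 - sp.Rational(1, 4)) * y**2 \
         + E + 2*beta*(2*gamma + 1) + gamma*(gamma - 1) / y**2
print(sp.simplify(ratio - target))   # expected: 0
# Setting beta = -1/4, gamma = E - 1/2 (or beta = 1/4, gamma = -E - 1/2)
# cancels the y**2 and y**0 terms, leaving only the O(y**-2) contribution.
\end{verbatim}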
In the usual discussions of the asymptotic behavior of the solutions of the harmonic oscillator, only the leading term is kept in Eq.~\eqref{betagamma}. This gives $\beta=\pm 1/4$, and no information on $\gamma$.
We now propose a solution of Eq.~\eqref{SchroedV} for $y>0\, (z > l)$ of the form given in Eq.~\eqref{prop}, with $y$ instead of $z$. As we expect
that the eigenvalues for the piecewise potentials will differ from those of the usual harmonic oscillator, in what follows we will assume that
${\mathcal E}-1/2\neq 0,1,2,...\, .$ The usual eigenvalues will be obtained in the limiting case $l\to 0$.
The bounds Eq.~\eqref{bound} on the odd and even series
imply that both
$e^{-y^2/4}S_{even}(y)$ and $e^{-y^2/4}S_{odd}(y)$ diverge as $y\to +\infty$. However, the existence of solutions with the asymptotic behavior given
in Eq.~\eqref{asymp} with $\beta=-1/4$ implies that there should be a {\it unique choice} of $a_1/a_0$ such that the combination
\begin{equation}\label{psiy}
\psi(y)=e^{-y^2/4}\left[ a_0 S_{even}(y)+ a_1 S_{odd}(y)\right]
\end{equation}
converges as $y\to\ +\infty$. When $a_1/a_0\equiv a_*$ is properly chosen,
the linear combination of the two divergent series becomes convergent. It is important to remark that this should happen for any value of ${\mathcal{E}}$. The value of $a_*$ is clearly unique, otherwise one would obtain two convergent, linearly independent solutions of the differential equation, and
the divergent solutions would not exist.
In the next section we will obtain the precise value of $a_*$. Assuming that this value is known, it is easy to find the set of equations that determines
the energy eigenvalues. We introduce the notation
\begin{equation}\label{weber}
D_{\mathcal E-1/2}(y) = S_{even}(y)+ a_* S_{odd}(y)\, .
\end{equation}
In terms of this function, the quadratically integrable solution of Eq.~\eqref{SchroedV} can be written as
\begin{equation}
\psi(z)=\left\{\begin{matrix}
A e^{-(z+l)^2/4}D_{\mathcal E-1/2}(-(z+l)) \,\,\,\,\, &&z<-l,
\\ B \psi_1(z)+C\psi_2(z) \,\, && -l<z<l,
\\ F e^{-(z-l)^2/4}D_{\mathcal E-1/2}(z-l)\,\, &&z>l,
\end{matrix}\right.
\label{soln}
\end{equation}
where $\psi_1$ and $\psi_2$ are two linearly independent solutions in the region $\vert z\vert <l$, while $A,B,C,$ and $F$ are constants.
The wavefunction $\psi$ and its first derivative should be continuous both at $z=\pm l$. These four conditions and the normalization of the wavefunction
determine the four constants and the allowed values of the energy.
\subsection{Calculation of $a_*$}
From the recurrence relation Eq.~\eqref{rec} one can see that
\begin{eqnarray}
a_2&=&\frac{a_0(-\mathcal E+1/2)}{2}=\frac{a_0(-\mathcal E /2+1/4)}{2\times 1/2},\nonumber\\
a_4&=&\frac{a_0(-\mathcal E+1/2)\times (-\mathcal E+1/2+2)}{2\times 3\times 4}=\frac{a_0 (-\mathcal E/2+1/4)\times (-\mathcal E/2+1/4+1)}{2^2 \times 1/2\times 3/2 \times 2 },\nonumber\\
a_6&=&\frac{a_0 (-\mathcal E/2+1/4)\times (-\mathcal E/2+1/4+1)\times (-\mathcal E/2+1/4+2) }{2^3 \times 1/2\times 3/2\times 5/2 \times 2\times 3 },
\end{eqnarray}
and, in general,
\begin{eqnarray}
a_{2n}&=&\frac{a_0}{2^n n!}\frac{(-\mathcal E /2+1/4)\times\cdots \times(-\mathcal E /2+1/4+ n-1)}{1/2\times3/2\times\cdots\times (1/2+n-1)}
\nonumber\\ &=&
\frac{a_0}{2^n n!}\frac{\Gamma[1/2]}{\Gamma[-{\mathcal E}/2+1/4]}\frac{\Gamma[-{\mathcal E}/2+1/4+n]}{\Gamma[1/2+n]} \, ,\label{a_npar}
\end{eqnarray}
where $\Gamma[z]$ denotes the Gamma function. Note that in the last equality we have made repeated use of the well known identity $\Gamma[z+1]=z\Gamma[z]$. Following similar steps, one can verify that
\begin{equation}
a_{2n+1} = \frac{a_1}{2^n n!}\frac{\Gamma[3/2]}{\Gamma[-{\mathcal E}/2+3/4]}\frac{\Gamma[-{\mathcal E}/2+3/4+n]}{\Gamma[3/2+n]} \, .
\label{coef2}
\end{equation}
The large-$n$ behavior of the coefficients $a_n$ can be analyzed using
Stirling's approximation for the Gamma function at large arguments \cite{Abra}
\begin{equation} \label{Stirling}
\Gamma[z+1] \simeq z^ze^{-z}\sqrt{2\pi z}\, ,
\end{equation}
from which we obtain
\begin{equation}
\frac{\Gamma[n+b]}{\Gamma[n+c]}\simeq n^{b-c}\, .
\end{equation}
Inserting this approximation into Eq.~\eqref{a_npar} and Eq.~\eqref{coef2} we obtain, for large $n$,
\begin{eqnarray}
a_{2n} &\simeq& \frac{a_0}{2^n n!}\frac{\Gamma[1/2]}{\Gamma[-{\mathcal E}/2+1/4]}n^{-{\mathcal E}/2-1/4}, \label{coef1approx}\\
a_{2n+1} &\simeq& \frac{a_1}{2^n n!}\frac{\Gamma[3/2]}{\Gamma[-{\mathcal E}/2+3/4]} n^{-{\mathcal E}/2-3/4} \, .
\label{coef2approx}
\end{eqnarray}
From Eq.~\eqref{coef1approx} and Eq.~\eqref{coef2approx} we see that the asymptotic behavior of $S_{even}$ and $S_{odd}$
can be studied by considering the series
\begin{equation}\label{serieS}
S(\omega)=\sum_{n=1}^\infty \frac{n^{-r}\omega^n}{n!}\, .
\end{equation}
Indeed, if two power series with positive coefficients $A_n$ and $B_n$ are such that $A_n/B_n\to 1$ for $n\to\infty$, then they have the same asymptotic behavior. Hence, by virtue of Eq.~\eqref{coef1approx}, putting $\omega=y^2/2$ and $r={\mathcal E}/2+1/4$, and multiplying both sides of Eq.~\eqref{serieS} by $\Gamma[1/2]/\Gamma[-{\mathcal E}/2+1/4]$, we see that
\begin{equation}\label{SevenS}
S_{even}(y)\simeq \frac{\Gamma[1/2]}{\Gamma[-{\mathcal E}/2+1/4]} S\left( \frac{y^2}{2} \right)
\end{equation}
for large values of $y$. If, instead, we put $r={\mathcal E}/2+3/4$ and multiply by ${a_1\, \Gamma[3/2]/\Gamma[-{\mathcal E}/2+3/4]}$ on both sides of Eq.~\eqref{serieS}, we obtain
\begin{equation}
\label{SoddS}
S_{odd}(y) \simeq \frac{y\, \Gamma[3/2]}{\Gamma[-{\mathcal E}/2+3/4]} S\left(\frac{y^2}{2}\right)
\end{equation}
for large $y$.
In order to study the asymptotic behavior of $S(\omega)$, the key observation is that,
for a fixed large value of $\omega$,
the coefficients
\begin{equation}
\label{coefS2}
c_n(\omega)=\frac{\omega^n}{n!}\, ,
\end{equation}
have, as a function of $n$, a peak at $n=\omega$. Moreover, the width of the peak is much smaller than $\omega$. It is an interesting exercise
to verify these properties by plotting $c_n(\omega)$ as a function of $n$ for large values of $\omega$. We can prove them analytically
using Stirling's approximation
Eq.~\eqref{Stirling} for the factorial $n!=\Gamma[n+1]$,
and evaluating for $n\simeq \omega$. We obtain
\begin{equation}
n!\simeq\sqrt{2\pi\omega}e^{-n}\omega^n e^{n\ln(1+\frac{(n-\omega)}{\omega})}\, .
\end{equation}
Expanding the logarithm in the exponential in powers of $(n-\omega)/\omega$ we get
\begin{equation}
\label{coefSapprox}
c_n(\omega)\simeq \frac{e^\omega}{\sqrt{2\pi\omega}}e^{-\frac {(n-\omega)^2}{2\omega}} \, .
\end{equation}
Therefore, for large (fixed) $\omega$, $c_n(\omega)$ is a Gaussian function of $n$, with a peak at $n=\omega$ and width $\sqrt\omega$.
Thus, for the relevant values of $n$ we can approximate
$n^{-r}$ by $\omega^{-r}$ in $S(\omega)$ obtaining, for large $\omega$,
\begin{equation}\label{serieSSapprox}
S(\omega)\simeq \omega^{-r}\sum_{n=1}^\infty \frac{\omega^n}{n!}\simeq \omega^{-r}e^\omega\, .
\end{equation}
We present a more rigorous proof of this asymptotic behavior in the Appendix.
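The Gaussian shape of the coefficients, Eq.~\eqref{coefSapprox}, is also easy to verify numerically; the short script below (a sketch added for illustration, not part of the original text) compares $c_n(\omega)=\omega^n/n!$ with the Gaussian approximation for a large value of $\omega$:
\begin{verbatim}
# Compare c_n(w) = w^n/n! with the Gaussian approximation of Eq. (coefSapprox).
import numpy as np
from scipy.special import gammaln

w = 200.0
n = np.arange(1, 401)
c_exact = np.exp(n * np.log(w) - gammaln(n + 1))                  # w^n / n!
c_gauss = np.exp(w) / np.sqrt(2 * np.pi * w) * np.exp(-(n - w)**2 / (2 * w))
# maximum deviation relative to the peak height (about 1-2% for w = 200)
print(np.max(np.abs(c_exact - c_gauss)) / c_exact.max())
\end{verbatim}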
Taking into account Eq.~\eqref{SevenS}, Eq.~\eqref{SoddS} and Eq.~\eqref{serieSSapprox} we obtain, for large $y$,
\begin{eqnarray}
\label{sevens} S_{even}(y) &\simeq& \frac{\Gamma[1/2]}{\Gamma[-{\mathcal E}/2+1/4]}\left[\frac{y^2}{2}\right]^{-\frac{\mathcal E}{2}-\frac{1}{4}}e^{\frac{y^2}{2}}, \\
\label{sodds} S_{odd}(y) &\simeq& \frac{\sqrt 2 \Gamma[3/2]}{\Gamma[-{\mathcal E}/2+3/4]}\left[\frac{y^2}{2}\right]^{-\frac{\mathcal E}{2}-\frac{1}{4}}e^{\frac{y^2}{2}}\, .
\end{eqnarray}
This calculation reproduces the asymptotic behavior of the solutions anticipated in Eq.~\eqref{asymp} and Eq.\eqref{asympcoef}. Both series lead to linearly independent solutions to the Schroedinger equation that have the same asymptotic behavior with $\beta=+1/4$ and $\gamma=-{\mathcal E}-1/2$ (see Eq.~\eqref{asymp} and Eq.\eqref{asympcoef}).
These two linearly independent solutions of the Schroedinger equation are not quadratically integrable, and therefore physically unacceptable. However, we know that
a linear combination of them should produce a solution with the adequate behavior ($\beta=-1/4$). A necessary condition for this to happen is that
the exponentially growing behavior of both series should cancel each other. Therefore, using Eq.~\eqref{psiy}, Eq.~\eqref{sevens} and Eq.~\eqref{sodds} we obtain
\begin{equation}\label{astar}
a_*=\frac{a_1}{a_0}=- \sqrt 2 \frac{\Gamma[-{\mathcal E}/2+3/4]} {\Gamma[-{\mathcal E}/2+1/4]}\, .
\end{equation}
With this result we can construct the function $D_{\mathcal E-1/2}(y)$ in Eq.~\eqref{weber}, and obtain the formal solution of the Schroedinger equation Eq.~\eqref{soln}.
We point out, for the advanced reader, that for this particular value of $a_*$ Eq.~\eqref{psiy} reproduces the series expansion of the parabolic Weber function $D_\sigma(y)$, \cite{similar4, watson} which is the unique solution of Eq.~\eqref{SchroedV} that tends to zero as $y\to+\infty$.
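As a consistency check (added here for illustration; it is not part of the original argument), Eq.~\eqref{astar} can be compared with the ratio $D'_{\mathcal E-1/2}(0)/D_{\mathcal E-1/2}(0)$ evaluated numerically, e.g. with SciPy's parabolic cylinder routine, assuming SciPy's $D_\nu$ follows the same standard (Whittaker) convention:
\begin{verbatim}
# Compare a_* of Eq. (astar) with D'_{E-1/2}(0)/D_{E-1/2}(0) from SciPy.
import numpy as np
from scipy.special import gamma, pbdv

for E in [0.7, 1.7, 3.2]:             # generic energies, away from the poles of a_*
    a_star = -np.sqrt(2) * gamma(-E/2 + 3/4) / gamma(-E/2 + 1/4)
    D0, D0prime = pbdv(E - 0.5, 0.0)  # D_nu(0) and its derivative at y = 0
    print(E, a_star, D0prime / D0)    # the two columns should agree
\end{verbatim}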
\section{Example: a particle in a ``bathtub" potential}
In order to illustrate the usefulness of the previous results, we will consider the particular case of a particle in a box bounded by harmonic walls, i.e., we will analyze the Schroedinger equation with the potential given in Eq.~\eqref{ppotential} with $f(z)=0$. These so called ``bathtub" potentials have been used as confining potentials for electrons in nanostructures, in particular when analyzing the quantum Hall effect. \cite{nano1,nano2,nano3,nano4,nano5, nano6}
As the potential is an even function, it is convenient to take as independent solutions in the region
$ -l<z<l$
\begin{eqnarray}
\psi_1(z)&=& \cos\left(\sqrt{\mathcal E}\, z\right),\nonumber\\
\psi_2(z)&=& \sin\left(\sqrt{\mathcal E}\, z\right)\, ,
\end{eqnarray}
and look for solutions of the Schroedinger equation which are either even or odd. For the even solutions we take $C=0$ in Eq.~\eqref{soln} and impose continuity of the function
and its first derivative at $z=l$. The transcendental equation that determines the energy eigenvalues is
\begin{equation}
\label{eveneigen}
\sqrt{\mathcal E}\,\tan\!\left(\sqrt{\mathcal E}\, l\right) =-\frac{D'_{\mathcal E-1/2} (0)}{D_{\mathcal E-1/2} (0)}=-a_*\, .
\end{equation}
Similarly, for the odd solutions ($B=0$) the condition reads
\begin{equation}
\label{oddeigen}
\sqrt{\mathcal E}\,\cot\!\left(\sqrt{\mathcal E}\, l\right) =\frac{D'_{\mathcal E-1/2} (0)}{D_{\mathcal E-1/2} (0)}=a_*\, .
\end{equation}
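For readers who want to reproduce the eigenvalue curves, a possible (and deliberately simple) numerical strategy is to scan the energy axis and refine the sign changes of these conditions. The sketch below uses Eq.~\eqref{astar} for $a_*$ and a crude heuristic to discard the spurious sign changes at the poles of $\tan$, $\cot$ and the Gamma functions; it is an illustration, not the code used for the figures:
\begin{verbatim}
# Sketch: eigenvalues of the "bathtub" potential from Eqs. (eveneigen)/(oddeigen).
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def a_star(E):
    return -np.sqrt(2) * gamma(-E/2 + 3/4) / gamma(-E/2 + 1/4)

def f_even(E, l):   # sqrt(E) tan(sqrt(E) l) + a_*(E) = 0
    return np.sqrt(E) * np.tan(np.sqrt(E) * l) + a_star(E)

def f_odd(E, l):    # sqrt(E) cot(sqrt(E) l) - a_*(E) = 0
    return np.sqrt(E) / np.tan(np.sqrt(E) * l) - a_star(E)

def eigenvalues(f, l, Emax=8.0, nE=8000, jump=50.0):
    Es = np.linspace(1e-6, Emax, nE)
    vals = np.array([f(E, l) for E in Es])
    roots = []
    for a, b, fa, fb in zip(Es[:-1], Es[1:], vals[:-1], vals[1:]):
        # keep genuine sign changes; discard the jumps at poles (crude heuristic)
        if np.isfinite(fa) and np.isfinite(fb) and fa*fb < 0 and abs(fa - fb) < jump:
            roots.append(brentq(f, a, b, args=(l,)))
    return roots

print("even levels, l=1:", eigenvalues(f_even, 1.0))
print("odd levels,  l=1:", eigenvalues(f_odd, 1.0))
\end{verbatim}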
\begin{figure}[!ht]
\includegraphics[scale=.4]{Fig01.pdf}
\caption{Eigenvalues for the ``bathtub" potential, as a function of $l$. According to the uncertainty principle, the eigenvalues are decreasing functions of
$l$. The wave functions associated with the particular values (a), (b), (c), and (d) are plotted in Figs. 2 and 4.}
\end{figure}
In the limit $l\to 0$ one recovers the eigenvalues of the harmonic oscillator. On the one hand, for the even solutions, in this limit the condition Eq.~\eqref{eveneigen}
reads $a_*=0$. As the Gamma function does not have zeros for real arguments, and has poles on the non-positive integers, from Eq.~\eqref{astar} we see
that the argument of the Gamma function in the denominator must be a non-positive integer $-n$, and therefore ${\mathcal E}= 2 n + 1/2$, the usual eigenvalues
for even eigenfunctions. On the other hand,
for the odd solutions the condition Eq.~\eqref{oddeigen} is $a_*=\infty$, which is satisfied for ${\mathcal E}= 2 n + 1+ 1/2$, i.e. the usual energy levels for odd wave functions.
In Fig. 1 we plot the eigenvalues of the energy $\mathcal E$ as a function of $l$. The eigenvalues start at the harmonic oscillator values $n+1/2$ for $l=0$, and are decreasing
functions of $l$, as suggested by the Heisenberg uncertainty principle. In Fig. 2 we plot the wave function of the second excited state for increasing values of $l$. At $l=0$ the wave function is the usual solution for the harmonic oscillator with energy ${\mathcal E}= 5/2$. The wave function has two nodes for all values of $l$. They are located in the harmonic region for $0<l<1.28$, and in the flat region for $l>1.28$. For this critical value of $l$, the eigenvalue of the second excited state equals $\mathcal E=3/2$, i.e., the value
of the first excited state of the usual harmonic oscillator (point (b) in Fig. 1).
\begin{figure}[!ht]
\includegraphics[scale=.4]{Fig02.pdf}
\caption{Plot the normalized wave function for the second excited state, for different values of $l$. As expected, the wave function has two nodes. Note that they are located in the harmonic region for $0<l<1.28$, at $z=l$ for l=1.28 and in the flat bottom for $ l>1.28 $. The corresponding eigenvalues are given by the points (a), (b), and (c) in Fig. 1 }
\end{figure}
\begin{figure}[!ht]
\includegraphics[scale=.4]{Fig03.pdf}
\caption{Eigenvalues for the ``bathtub" potential, normalized to the ground state, as a function of $l$. At large values of $l$ the spectrum
coincides with that of an infinite square well. The wave functions associated with the particular values (a), (b), (c), (d), and (e) are plotted in Figs. 2 and 4.}
\end{figure}
An interesting property of the eigenvalues is their behavior in the limit $l\gg 1$. When $a\gg\sqrt{\hbar/(2m\omega)}$,
the scale of variation of the harmonic potential is much shorter than the size of the flat bottom of the potential. Therefore, the harmonic walls act as infinite potential barriers, and we expect to recover the spectrum of a particle in a box, that is, ${\mathcal E}_n/{\mathcal E}_0\simeq (n+1)^2$. This behavior is illustrated in Fig. 3.
We also expect the wave functions to evolve from those of the usual harmonic oscillator at $l=0$ to those of the infinite square well for $l\gg 1$. This fact is illustrated in Fig. 4, where we plotted the first excited state for different values of $l$. Note that for large values of $l$ the wave function tends to zero in the harmonic region, in a spatial scale much shorter than the size of the flat bottom.
\begin{figure}[!ht]
\includegraphics[scale=.4]{Fig04.pdf}
\caption{Plot of the wave function for the first excited state, for different values of $l$. This wave function has only one node at $z=0$ for all values
of $l$. The figure illustrates the fact that the wave function for the piecewise potential tends to that of a particle in a box, and therefore vanishes in the harmonic region in the large
$l$ limit. The corresponding eigenvalues are given by the points (d) and (e) in Fig. 3. Point (f) is outside the scale of Fig. 3, lying to the right of point (e) on the curve $n=1$. For the sake of clarity, in this figure we normalized each wave function to its maximum value.}
\end{figure}
\section{The hydrogen atom}
The remarks about the behavior of the series for the harmonic oscillator also apply to the solutions of the Schroedinger equation for the hydrogen atom.
The radial wave function is usually written as
\begin{equation}\label{radial}
\psi(\rho)=\rho^{L+1}e^{-\rho}F(\rho)\, ,
\end{equation}
where $\rho$ is a dimensionless radius, $L$ is the angular momentum, and
\begin{equation}
\label{F}
F(\rho)=\sum_{n\geq 0} c_n\rho^n\, .
\end{equation}
The coefficients of the power series satisfy the recurrence relation
\begin{equation}
c_{n+1}=\frac{(-\xi+ 2 L +2 + 2 n)}{(n+1)(2L+2+n)} c_n\, ,
\end{equation}
where $\xi$ is the inverse of the (dimensionless) energy. Note that, once again, for large $n$ we have $c_{n+1}/c_n\simeq 2/n$, and
one would be tempted to conclude that, if the series does not have a finite number of terms, $F(\rho)\simeq e^{2\rho}$ as $\rho\to\infty$. However, a
more careful analysis along the lines of the previous sections shows that this is not the case. Indeed, the coefficients are given by
\begin{equation}
c_n=c_0\frac{2^n}{n!}\frac{\Gamma(2L+2)}{\Gamma(-\xi/2+L+1)}\frac{\Gamma(-\xi/2+L+n+1)}{\Gamma(2L+n+2)}\, ,
\end{equation}
and tend, for large $n$, to
\begin{equation}
c_n\simeq c_0\,\frac{\Gamma(2L+2)}{\Gamma(-\xi/2+L+1)}\,\frac{ 2^n}{n!}\, n^{-\xi/2-L-1}\, .
\end{equation}
Therefore
\begin{equation}
F(\rho)\simeq c_0 \,\frac{\Gamma(2L+2)}{\Gamma(-\xi/2+L+1)}\, e^{2\rho} (2\rho)^{-\xi/2-L-1}\,
\end{equation}
for large $\rho$. This is the correct behavior of the series, that of course leads to an unacceptable wave function, unless the series
has a finite number of non-vanishing terms.
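A quick numerical check of this large-$n$ behaviour (an illustration added here, with arbitrary non-integer $\xi$ so that the series does not terminate) can be obtained directly from the recurrence relation:
\begin{verbatim}
# Check the large-n form of the hydrogen series coefficients from the recurrence.
import numpy as np
from scipy.special import gammaln

xi, L = 3.3, 1.0        # arbitrary; non-integer xi, so the series never terminates
n_max = 2000
log_c = 0.0             # log of c_n with c_0 = 1 (all factors are positive here)
for n in range(n_max):
    log_c += np.log((-xi + 2*L + 2 + 2*n) / ((n + 1) * (2*L + 2 + n)))
log_asympt = (gammaln(2*L + 2) - gammaln(-xi/2 + L + 1)
              + n_max*np.log(2.0) - gammaln(n_max + 1)
              + (-xi/2 - L - 1) * np.log(n_max))
print(np.exp(log_c - log_asympt))   # approaches 1 as n_max grows
\end{verbatim}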
We leave the details for the reader. She/he could also address the problem of a particle in a piecewise Coulomb potential given by
\begin{equation}
\label{ppotentialcoul}
V(r)=\begin{cases}
-\dfrac{k}{R}, & 0<r<R, \\[4pt]
-\dfrac{k}{r}, & r>R,
\end{cases}
\end{equation}
following the procedure described for the piecewise harmonic oscillator.
\section{Conclusions}
We have discussed in detail the asymptotic behavior of the solutions of the Schroedinger equation with harmonic-like potentials. Following the standard approach we looked for solutions of the form given in Eq.~\eqref{prop}. We have shown that when the even and odd series contain an infinite number of terms, they have, up to a constant, the same divergent asymptotic behavior as $z\to +\infty$, contrary to previous claims in many textbooks. This is a necessary property, given that there should be a linear combination of the odd and even series that produce a solution that is convergent for $z\to +\infty$, for any value of ${\mathcal E}$.
For the usual harmonic oscillator, Eq.~\eqref{prop} should be the solution to the Schroedinger equation for all values of $z$. If we choose $a_*$ such that the wave function converges as $z\to +\infty$, then it will diverge as $z\to -\infty$ (and vice versa). Therefore, the physically acceptable solutions are those for which both series contain a finite number of terms,
and ${\mathcal E}= n+1/2$. However, for piecewise potentials, we can consider independent linear combinations of the even and odd series in the regions $z<-l$ and $z>l$, such that $\vert \psi(z)\vert^2$ is integrable. The continuity conditions of the wave function and its first derivative at $z=\pm\, l$ fix the allowed energy eigenvalues. We illustrated the procedure by computing the eigenvalues of a potential with a ``bathtub'' shape.
The main mathematical result in our discussion is the large-$\omega$ behavior of the series
\begin{equation}
S(\omega)=\sum_{n=1}^\infty \frac{n^{-r}\omega^n}{n!}\simeq \omega^{-r}e^{\omega}\, ,
\end{equation}
that can be derived as described above and in the Appendix. It can even be checked numerically by students using Mathematica or similar programs, by plotting
$S(\omega)\, \omega^{r}e^{-\omega}$ as a function of $\omega$, for different values of $r$.
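For instance, a possible Python version of this check (an illustration added here, computing the terms in log space to avoid overflow) is:
\begin{verbatim}
# Numerical check that S(w) w^r e^{-w} -> 1 for large w, cf. Eq. (serieSSapprox).
import numpy as np
from scipy.special import gammaln

def scaled_S(w, r, n_terms=5000):
    n = np.arange(1, n_terms + 1)
    log_terms = -w + r*np.log(w) - r*np.log(n) + n*np.log(w) - gammaln(n + 1)
    return np.exp(log_terms).sum()    # = e^{-w} w^r S(w)

for r in [0.5, 1.0, 2.5]:
    print(r, [scaled_S(w, r) for w in [10.0, 100.0, 1000.0]])
\end{verbatim}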
\section*{Acknowledgements} This research was supported by ANPCyT, CONICET, and UNCuyo.
\section*{Appendix : Asymptotic behavior of the series $S(\omega)$}
In this Appendix we provide an alternative and more rigorous proof of Eq.~\eqref{serieSSapprox}, which is the main mathematical ingredient in our work.
The derivation is somewhat cumbersome, but only uses elementary bounds for different series.
For simplicity we will assume $r>0$ (the case $r<0$ can be treated using similar arguments).
Let us consider
\begin{equation}\label{serieSS}
e^{-\omega}\omega^r S(\omega)=e^{-\omega}\sum_{n=1}^\infty \left(\frac{\omega}{n}\right)^{r}\frac{\omega^n}{n!}\, ,
\end{equation}
and introduce the notation
\begin{equation}\label{serieT}
T(\omega,n_1,n_2)=e^{-\omega}\sum_{n=n_1}^{n_2} \left(\frac{\omega}{n}\right)^{r}\frac{\omega^n}{n!}\, .
\end{equation}
We would like to see that $T(\omega,1,\infty)\to 1$ as $\omega\to\infty$.
On one hand, given any $0<\lambda<1$, we split the series as
\begin{equation}\label{split}
T(\omega,1,\infty)=T(\omega,1,[\lambda \omega]) +
T(\omega, [\lambda \omega]+1,\infty)\, ,
\end{equation}
where the brackets denote integer part. As $\omega/n \leq \omega$, the first term can be bounded by
\begin{equation}\label{primercotaT}
T(\omega,1,[\lambda \omega]) \leq e^{-\omega}\omega^r\sum_{n=1}^{[\lambda\omega]}\frac{\omega^n}{n!}.
\end{equation}
Noting that $\omega^n/n!$ is an increasing function of $n$ for $n\leq [\omega]$, we see that the series on the right hand side of Eq.~\eqref{primercotaT} satisfies
\begin{equation}\label{cotarhs}
\sum_{n=1}^{[\lambda\omega]}\frac{\omega^n}{n!} \leq \sum_{n=1}^{[\lambda\omega]}\frac{\omega^{[\lambda\omega]}}{[\lambda\omega]!} = [\lambda\omega] \frac{\omega^{[\lambda\omega]}}{[\lambda\omega]!} \leq \lambda \omega \frac{\omega^{[\lambda\omega]}}{[\lambda\omega]!}
\end{equation}
and, hence, putting Eq.~\eqref{primercotaT} and Eq.\eqref{cotarhs} together we obtain $T(\omega,1,[\lambda \omega]) \leq \lambda e^{-\omega} \omega^{r+1}
\omega^{[\lambda\omega]}/[\lambda\omega]!$. Let us show that $\lambda e^{-\omega} \omega^{r+1}
\omega^{[\lambda\omega]}/[\lambda\omega]!$ and, hence, $T(\omega,1,[\lambda \omega])$, vanishes as $\omega\to\infty$. Using Stirling's approximation we see that, for large $\omega$,
\begin{equation}
e^{-\omega}\frac{\omega^{[\lambda\omega]}}{[\lambda\omega]!} \simeq e^{-\omega}\frac{\omega^{[\lambda\omega]}}{\sqrt{2\pi [\lambda\omega]}} \left(\frac{e}{[\lambda\omega]}\right)^{[\lambda\omega]}.
\end{equation}
Then, observing that
\begin{equation}
\left(\frac{\omega}{[\lambda\omega]}\right)^{[\lambda\omega]} \lesssim \left(\frac{1}{\lambda}\right)^{\lambda\omega}
\end{equation}
and $\sqrt{2\pi [\lambda\omega]} \simeq \sqrt{2\pi \lambda\omega}$, we obtain
\begin{equation}\label{zeroexponentially}
e^{-\omega}\frac{\omega^{[\lambda\omega]}}{[\lambda\omega]!} \lesssim \frac{e^{-\omega}}{\sqrt{2\pi \lambda\omega}} \left(\frac{e}{\lambda}\right)^{\lambda\omega}
\end{equation}
and, since $\left(e/\lambda\right)^{\lambda} < e$ for $0<\lambda < 1$, we deduce from Eq.~\eqref{zeroexponentially} that $e^{-\omega}\omega^{[\lambda\omega]}/[\lambda\omega]!$ goes to zero exponentially as $\omega \to \infty$. This proves that $\lambda e^{-\omega} \omega^{r+1}
\omega^{[\lambda\omega]}/[\lambda\omega]!$ vanishes as $\omega \to \infty$. Therefore the first term in Eq.~\eqref{split} also vanishes.
The second term in Eq.~\eqref{split} can be bounded by
\begin{equation}\label{segundacotaT}
T(\omega, [\lambda \omega]+1,\infty)\leq e^{-\omega}\left(\frac{\omega}{[\lambda\omega]+1} \right)^r\sum_{n=[\lambda\omega]+1}^\infty\frac{\omega^n}{n!}
\end{equation}
simply noting that $\omega/n \leq \omega/([\lambda\omega]+1)$. Since $\omega/([\lambda\omega]+1) \leq 1/\lambda$ and
\begin{equation}
\sum_{n=[\lambda\omega]+1}^\infty\frac{\omega^n}{n!} \leq e^{\omega},
\end{equation}
we deduce that
$T(\omega, [\lambda \omega]+1,\infty) \leq \left(1/\lambda\right)^r$
and, therefore,
\begin{equation}\label{final1}
T(\omega,1,\infty)\leq T(\omega,1,[\lambda \omega]) + \left(\frac{1}{\lambda}\right)^r \xrightarrow[\omega\to\infty]{}\, \left(\frac{1}{\lambda}\right)^r.
\end{equation}
On the other hand, given any $\sigma>1$ we have
\begin{equation}\label{bound3}
T(\omega,1,\infty)\geq T(\omega,1,[\sigma\omega])
\geq e^{-\omega}\left(\frac{\omega}{[\sigma\omega]}\right)^r
\, \sum_{n=1}^{[\sigma\omega]}\frac{\omega^n}{n!}.
\end{equation}
We will see that $e^{-\omega}\, \sum_{n=1}^{[\sigma\omega]}\omega^n/n!$ tends to one as $\omega \to \infty$.
Given that
\begin{equation}\label{bound2}
e^{-\omega}\, \sum_{n=1}^{[\sigma\omega]}\frac{\omega^n}{n!}
= e^{-\omega} \left(e^\omega-1-\sum_{[\sigma\omega]+1}^\infty\frac{\omega^n}{n!}\right) = 1 - e^{-\omega} - e^{-\omega}\sum_{[\sigma\omega]+1}^\infty\frac{\omega^n}{n!},
\end{equation}
it suffices to show that $e^{-\omega}\sum_{[\sigma\omega]+1}^\infty\omega^n/n!$ vanishes as $\omega \to \infty$. Now, since
\begin{eqnarray}
\sum_{[\sigma\omega]+1}^\infty\frac{\omega^n}{n!} &=& \frac{\omega^{[\sigma\omega]+1}}{([\sigma\omega]+1)!} \left(1+\frac{\omega}{[\sigma\omega]+2}+ \frac{\omega^2}{([\sigma\omega]+3)([\sigma\omega]+2)}+\cdots \right) \nonumber\\
&\leq& \frac{\omega^{[\sigma\omega]+1}}{([\sigma\omega]+1)!} \left(1+\frac{\omega}{[\sigma\omega]+1}+ \frac{\omega^2}{([\sigma\omega]+1)^2}+\cdots \right)
\end{eqnarray}
and $\omega/([\sigma\omega]+1) \leq 1/\sigma$, we deduce
\begin{eqnarray}\label{bound4}
e^{-\omega}\sum_{[\sigma\omega]+1}^\infty\frac{\omega^n}{n!}
\leq e^{-\omega}\frac{\omega^{[\sigma\omega]+1}}{([\sigma\omega]+1)!}
\sum_{k\geq 0}\left(\frac{1}{\sigma}\right)^k
= e^{-\omega}\frac{\sigma}{\sigma -1}
\frac{\omega^{[\sigma\omega]+1}}{([\sigma\omega]+1)!}\, .
\end{eqnarray}
Once more, one can check that this last term vanishes as $\omega\to\infty$. Then, the left hand side of Eq.~\eqref{bound2} tends to one as $\omega \to \infty$ and, therefore, from Eq.~\eqref{bound3} we see that
\begin{equation}
\label{final2}
T(\omega,1,\infty)\geq e^{-\omega}\left(\frac{\omega}{[\sigma\omega]}\right)^r
\, \sum_{n=1}^{[\sigma\omega]}\frac{\omega^n}{n!} \xrightarrow[\omega\to\infty]{}\, \left(\frac{1}{\sigma}\right)^r.
\end{equation}
Now, since $\lambda<1$ and $\sigma>1$ can be taken arbitrarily close to $1$ (from below and above, respectively), Eq.~\eqref{final1} and Eq.~\eqref{final2} imply that
$
\lim_{\omega\to\infty} T(\omega,1,\infty) =1\, ,
$
which is the desired statement.
| {
"timestamp": "2017-05-30T02:12:25",
"yymm": "1705",
"arxiv_id": "1705.10293",
"language": "en",
"url": "https://arxiv.org/abs/1705.10293",
"abstract": "We discuss the solutions of the Schroedinger equation for piecewise potentials, given by the harmonic oscillator potential for $\\vert x\\vert >a$ and an arbitrary function for $\\vert x\\vert <a$, using elementary methods. The study of this problem sheds light on usual errors when discussing the asymptotic behavior of the eigenfunctions of the quantum harmonic oscillator and can also be used for the analysis of the eigenfunctions of the hydrogen atom. We present explicit results for the energy levels of a potential of this class, used to model the confinement of electrons in nanostructures.",
"subjects": "Quantum Physics (quant-ph)",
"title": "Solutions of the Schroedinger equation for piecewise harmonic potentials: remarks on the asymptotic behavior of the wave functions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.954647415574754,
"lm_q2_score": 0.7431680199891789,
"lm_q1q2_score": 0.7094634296204767
} |
https://arxiv.org/abs/1803.06376 | A Generalised Method for Empirical Game Theoretic Analysis | This paper provides theoretical bounds for empirical game theoretical analysis of complex multi-agent interactions. We provide insights in the empirical meta game showing that a Nash equilibrium of the meta-game is an approximate Nash equilibrium of the true underlying game. We investigate and show how many data samples are required to obtain a close enough approximation of the underlying game. Additionally, we extend the meta-game analysis methodology to asymmetric games. The state-of-the-art has only considered empirical games in which agents have access to the same strategy sets and the payoff structure is symmetric, implying that agents are interchangeable. Finally, we carry out an empirical illustration of the generalised method in several domains, illustrating the theory and evolutionary dynamics of several versions of the AlphaGo algorithm (symmetric), the dynamics of the Colonel Blotto game played by human players on Facebook (symmetric), and an example of a meta-game in Leduc Poker (asymmetric), generated by the PSRO multi-agent learning algorithm. | \section{Conclusion}\label{sec:conclusions}
In this paper we have generalised the heuristic payoff table method introduced by Walsh et al. \cite{Walsh02} to two-population asymmetric games. We call such games \textit{meta-games} as they consider complex strategies instead of atomic actions as found in normal-form games. As such they are well suited to investigate real-world multi-agent interactions, as they summarize behaviour in terms of high-level strategies rather than primitive actions. We have shown that a Nash equilibrium of the meta-game is a $2\epsilon$-Nash equilibrium of the true underlying game, providing theoretical bounds on how many data samples are required to build a reliable meta payoff table. As such our method allows for an equilibrium analysis with a certain confidence that this game is a good approximation of the underlying real game. Finally, we have carried out an empirical illustration of this method in three complex domains, i.e., \textit{AlphaGo}, Colonel Blotto and PSRO, showing the feasibility and strengths of the approach.
\section*{Acknowledgments}
We wish to thank Angeliki Lazaridou and Guy Lever for insightful comments, the DeepMind AlphaGo team for support with the analysis of the AlphaGo dataset, and Pushmeet Kohli for supporting us with the Colonel Blotto dataset.
\section{Experiments}\label{sec:exps}
This section presents experiments that illustrate the meta-game approach and its feasibility for examining strengths and weaknesses of higher-level strategies in various domains, including \textit{AlphaGo}, Colonel Blotto, and the meta-game generated by PSRO. Note that we restrict the meta-games to three strategies, as we can nicely visualise this in a phase plot, and these still provide useful information about the dynamics in the full strategy spaces.
\vspace{-\secspace cm}
\subsection{AlphaGo}
The data set under study consists of $7$ \textit{AlphaGo} variations and a number of different Go strategies such as Crazystone and Zen (previously the state-of-the-art). $\alpha$ stands for the algorithm and the indexes $r,v,p$ for the use of, respectively, \emph{rollouts}, \emph{value nets} and \emph{policy nets} (e.g. $\alpha_{rvp}$ uses all 3). For a detailed description of these strategies see \cite{DSilverHMGSDSAPL16}.
The meta-game under study here concerns a $2$-type NFG with $|S|=9$. We will look at various $2$-faces of the larger simplex. Table 9 in \cite{DSilverHMGSDSAPL16} summarises all wins and losses between these various strategies (meeting several times), from which we can compute meta-game payoff tables.
\subsubsection{Experiment 1: strong strategies}
\noindent This first experiment examines three of the strongest \textit{AlphaGo} strategies in the data-set, i.e., $\alpha_{rvp}, \alpha_{vp}, \alpha_{rp}$.
\noindent As a first step we created a meta-game payoff table involving these three strategies, by looking at their pairwise interactions in the data set (summarised in Table 9 of \cite{DSilverHMGSDSAPL16}). This set contains data for all strategies on how they interacted with the other $8$ strategies, listing the win rates that strategies achieved against one another (playing either as white or black) over several games.
The meta-game payoff table derived for these three strategies is described in Table \ref{table:exp1Go}.
\begin{table}[!ht]
\vspace{-0.3cm}
\footnotesize
\begin{center}
$\left( \begin{array}{ccccccc}
\alpha_{rvp} & \alpha_{vp} & \alpha_{rp} & \vline & U_{i1} & U_{i2} & U_{i3} \\
\hline
2 & 0 & 0 & \vline & 0.5 & 0 & 0 \\
1 & 0 & 1 & \vline & 0.95 & 0 & 0.05 \\
0 & 2 & 0 & \vline & 0 & 0.5 & 0 \\
1 & 1 & 0 & \vline & 0.99 & 0.01 & 0 \\
0 & 0 & 2 & \vline & 0 & 0 & 0.5 \\
0 & 1 & 1 & \vline & 0 & 0.39 & 0.61 \\
\end{array} \right)$
\end{center}
\caption{\small Meta-game payoff table generated from Table 9 in \cite{DSilverHMGSDSAPL16} for strategies $\alpha_{rvp}, \alpha_{vp}, \alpha_{rp}$}
\label{table:exp1Go}
\vspace{-0.8cm}
\end{table}
\noindent In Figure \ref{fig:alphago-exp1-df} we have plotted the directional field of the meta-game payoff table using the replicator dynamics for a number of strategy profiles $\mathbf{x}$ in the simplex strategy space. From each of these points in strategy space an arrow indicates the direction of flow, or change, of the population composition over the three strategies. Figure \ref{fig:alphago-exp1-tp} shows a corresponding trajectory plot. From these plots one can easily observe that strategy $\alpha_{rvp}$ is a strong attractor and consumes the entire strategy space over the three strategies. This restpoint is also a Nash equilibrium. This result is in line with what we would expect from the knowledge we have of the strengths of these various learned policies. Still, the arrows indicate how the strategy landscape flows into this attractor and therefore provides useful information as we will discuss later.
\begin{figure*}[!tbp]
\centering
\begin{minipage}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{DF-arvp-avp-arp.png}
\vspace{-1.2cm}
\caption{\small Directional field plot for the 2-face consisting of strategies $\alpha_{rvp}, \alpha_{vp}, \alpha_{rp}$}
\label{fig:alphago-exp1-df}
\end{minipage}
\qquad
\begin{minipage}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{TP-arvp-avp-arp.png}
\vspace{-1.2cm}
\caption{\small Trajectory plot for the 2-face consisting of strategies $\alpha_{rvp}, \alpha_{vp}, \alpha_{rp}$}
\label{fig:alphago-exp1-tp}
\end{minipage}
\end{figure*}
\subsubsection{Experiment 2: evolution and transitivity of strengths}
\noindent We start by investigating the 2-face simplex involving strategies $\alpha_{rp}$, $\alpha_{vp}$ and $\alpha_{rv}$, for which we created a meta-game payoff table similarly as in the previous experiment (not shown). The evolutionary dynamics of this 2-face can be observed in Figure \ref{AG:exp3}a. Clearly strategy $\alpha_{rp}$ is a strong attractor and dominates the two other strategies. We now replace this attractor by strategy $\alpha_{rvp}$ and plot its evolutionary dynamics in Figure \ref{AG:exp3}b.
What can be observed from both trajectory plots in Figure \ref{AG:exp3} is that the curvature is less pronounced in plot \ref{AG:exp3}b than it is in plot \ref{AG:exp3}a. The reason for this is that the difference in strength between $\alpha_{rv}$ and $\alpha_{vp}$ is less obvious in the presence of an even stronger attractor than $\alpha_{rp}$. This means that $\alpha_{rvp}$ is now pulling much stronger on both $\alpha_{rv}$ and $\alpha_{vp}$ and consequently the flow goes more directly to $\alpha_{rvp}$.
So even when a strategy space is dominated by one strategy, the curvature (or curl) is a promising measure for the strength of a meta-strategy.
What is worthwhile to observe from the \textit{AlphaGo} dataset, and illustrated as a series in Figures \ref{AG:exp2} and \ref{AG:exp3}, is that there is clearly an incremental increase in the strength of the \textit{AlphaGo} algorithm going from version $\alpha_r$ to $\alpha_{rvp}$, building on previous strengths, without any intransitive behaviour occurring, when only considering a strategy space formed by the \textit{AlphaGo} versions.
Finally, as discussed in Section~\ref{TheoreticalInsights}, we can now examine how good an approximation an estimated game is. In the \textit{AlphaGo} domain we only do this analysis for the games displayed in Figures \ref{AG:exp3}a and \ref{AG:exp3}b, as it is similar for the other experiments. We know that \emph{$\alpha_{rp}$} is a Nash equilibrium of the estimated game analyzed in Figure~\ref{AG:exp3}a (meta-table not shown). The outcome of $\alpha_{rp}$ against $\alpha_{rv}$ was estimated with $n_{\alpha_{rp},\alpha_{rv}} = 63$ games (for the other pairs of strategies we have $n_{\alpha_{vp},\alpha_{rp}} = 65$ and $n_{\alpha_{vp},\alpha_{rv}} = 133$). Because of the symmetry of the problem, the bound in section~\ref{batchScenario} is reduced to:
\vspace{-0.2cm}
\footnotesize
\begin{align*}
P\left( \sup_{\bm{\pi},i} |r^i(\bm{\pi})-\hat{r}^i(\bm{\pi})| < \epsilon \right) \geq \left(1-2e^{\left(-2\epsilon^2 n_{\alpha_{rp},\alpha_{rv}}\right)}\right) &\times \left(1-2e^{\left(-2\epsilon^2 n_{\alpha_{vp},\alpha_{rp}}\right)}\right) \\
&\times \left(1-2e^{\left(-2\epsilon^2 n_{\alpha_{vp},\alpha_{rv}}\right)}\right)
\end{align*}
\normalsize
Therefore, we can conclude that the strategy \emph{$\alpha_{rp}$} is a $2\epsilon$-Nash equilibrium (with $\epsilon=0.15$) for the real game with probability at least $0.78$. The same calculation would also give a confidence of $0.85$ for the RD studied in Figure~\ref{AG:exp3}b for an $\epsilon = 0.15$ (as the numbers of samples are $(n_{\alpha_{rv},\alpha_{vp}},n_{\alpha_{vp},\alpha_{rvp}},n_{\alpha_{rvp},\alpha_{rv}}) = (65,106,91)$).
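\noindent The confidence levels quoted above follow directly from this bound and the sample counts; a few lines of Python (added here as an illustration) reproduce them:
\begin{verbatim}
# Reproduce the confidence levels quoted above from the sample counts.
import math

def confidence(eps, counts):
    # product over strategy pairs of (1 - 2 exp(-2 eps^2 n)), cf. the bound above
    return math.prod(1.0 - 2.0*math.exp(-2.0*eps**2*n) for n in counts)

print(confidence(0.15, [63, 65, 133]))   # Figure (a): approximately 0.78
print(confidence(0.15, [65, 106, 91]))   # Figure (b): approximately 0.85
\end{verbatim}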
\begin{figure*}[!tbp]
\vspace{-0.3cm}
\centering
\begin{minipage}{0.33\textwidth}
\begin{subfigure}{\textwidth}
\centering
\vspace{-0.1cm}
\includegraphics[width=\textwidth]{TP-av-ap-ar.png}
\vspace{-1cm}
\caption{\footnotesize Trajectory plot for $\alpha_v$, $\alpha_p$, and $\alpha_r$}
\end{subfigure}\\
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\textwidth]{TP-arv-av-ap.png}
\vspace{-1cm}
\caption{\footnotesize Trajectory plot for $\alpha_{rv}$, $\alpha_v$, and $\alpha_p$}
\end{subfigure
\vspace{-0.3cm}
\caption{}\label{AG:exp2}
\end{minipage}%
\hfill
\begin{minipage}{.33\textwidth}
\begin{subfigure}{\textwidth}
\centering
\vspace{-0.1cm}
\includegraphics[width=\textwidth]{TP-arp-avp-arv.png}
\vspace{-1cm}
\caption{\footnotesize Trajectory plot for $\alpha_{rp}$, $\alpha_{vp}$, and $\alpha_{rv}$}
\end{subfigure}\\
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\textwidth]{TP-arvp-avp-arv.png}
\vspace{-1cm}
\caption{\footnotesize Trajectory plot for $\alpha_{rvp}$, $\alpha_{vp}$, and $\alpha_{rv}$}
\end{subfigure
\vspace{-0.3cm}
\caption{}\label{AG:exp3}
\end{minipage}
\hfill
\begin{minipage}{.33\textwidth}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\textwidth]{AlphaGo_cycle.png}
\vspace{-1.2cm}
\caption{}
\end{subfigure
\vspace{-0.3cm}
\caption{\footnotesize Intransitive behaviour for $\alpha_v$, $\alpha_p$, and $Zen$.}\label{AG:exp4}
\end{minipage
\end{figure*}
\subsubsection{Experiment 3: cyclic behaviour}
A final experiment investigates what happens if we add a \textit{pre-AlphaGo} state-of-the-art algorithm to the strategy space. We have observed that even though $\alpha_{rvp}$ remains the strongest strategy, dominating all other \textit{AlphaGo} versions and previous state-of-the-art algorithms, cyclic behaviour can occur, something that cannot be measured or seen from Elo ratings.\footnote{An Elo rating or score is a measure to express the relative strength of a player, or strategy. It was named after Arpad Elo and originally introduced to rate chess players. For an introduction see e.g. \cite{Coulom08}} More precisely, we constructed a meta-game payoff table for strategies $\alpha_v$, $\alpha_p$ and $Zen$ (one of the previous commercial state-of-the-art algorithms). In Figure \ref{AG:exp4} we have plotted the evolutionary dynamics for this meta-game, and as can be observed there is a mixed equilibrium in strategy space, around which the dynamics cycle, indicating that $Zen$ is capable of introducing in-transitivity, as $\alpha_v$ dominates $\alpha_p$, $\alpha_p$ dominates $Zen$ and $Zen$ dominates $\alpha_v$.
\vspace{-\secspace cm}
\subsection{Colonel Blotto}
\noindent Colonel Blotto is a resource allocation game originally introduced by Borel \cite{Borel}. Two players interact, each allocating $m$ troops over $n$ locations. They do this separately without communication, after which both distributions are compared to determine the winner. When a player has more troops in a specific location, it wins that location. The player winning the most locations wins the game. This game has many game theoretic intricacies, for an analysis see \cite{KohliKBHSG12}. Kohli et al. have run Colonel Blotto on Facebook (project Waterloo), collecting data describing how humans play this game, with each player having $m=100$ troops and considering $n=5$ battlefields. The number of strategies in the game is vast: a game with $m$ troops and $n$ locations has $\binom{m + n - 1}{n - 1}$ strategies.
Based on Kohli et al. we carry out a meta-game analysis of the \emph{strongest strategies} and the \emph{most frequently played strategies} on Facebook. We have a look at several $3$-strategy simplexes, which can be considered as $2$-faces of the entire strategy space.
\noindent An instance of a strategy in the game of Blotto will be denoted as follows: $[t_1,t_2,t_3,t_4,t_5]$ with $\sum_it_i=100$. All permutations $\sigma_i$ in this division of troops belong to the same strategy.
We assume that permutations are chosen uniformly by a player. Note that in this game there is no need to carry out the theoretical analysis of the approximation of the meta-game, as we are not examining heuristics or strategies over Blotto strategies, but rather these strategies themselves, for which the payoff against any other strategy will always be the same (by computation). Nevertheless, carrying out a meta-game analysis reveals interesting information.
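\noindent A small sketch of how such pairwise entries can be computed (assuming, as stated above, that permutations are drawn uniformly, and adopting the convention that a drawn game is worth $1/2$ to each player) is the following:
\begin{verbatim}
# Sketch: normalised expected payoff of Blotto allocation s1 against s2,
# averaging over all (uniform) permutations of both allocations.
# The tie-breaking convention (a drawn game counts 1/2) is an assumption here.
from itertools import permutations

def expected_payoff(s1, s2):
    total, score = 0, 0.0
    for p1 in set(permutations(s1)):
        for p2 in set(permutations(s2)):
            won1 = sum(a > b for a, b in zip(p1, p2))   # battlefields won by player 1
            won2 = sum(b > a for a, b in zip(p1, p2))   # battlefields won by player 2
            score += 1.0 if won1 > won2 else (0.5 if won1 == won2 else 0.0)
            total += 1
    return score / total

print(expected_payoff([36, 35, 24, 3, 2], [35, 35, 26, 2, 2]))
\end{verbatim}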
\subsubsection{Experiment 1: Top performing strategies}
In this first experiment we examine the dynamics of the simplex consisting of the three best scoring strategies from the study of \cite{KohliKBHSG12}: $[36,35,24,3,2]$, $[37,37,21,3,2]$, and $[35,35,26,2,2]$, see Table \ref{tab:blotto_strategies_strong}.
In a first step we compute a meta-game payoff table for these three strategies. The interactions are pairwise, and the expected payoff can be easily computed, assuming a uniform distribution for different permutations of a strategy. This normalised payoff is shown in Table \ref{table:exp1Blotto}.
\begin{table}
\vspace{-1cm}
\footnotesize
\begin{center}
\begin{tabular}{ |p{2cm}|p{1cm}|p{1cm}| }
\hline
\multicolumn{3}{|c|}{Strongest strategies} \\
\hline
Strategy & Frequency & Win rate\\
\hline
$[36, 35, 24, 3, 2]$ & 1 & .74 \\
$[37, 37, 21, 3, 2]$ & 17 & .73\\
$[35, 35, 26, 2, 2]$ & 1 & .73\\
$[35, 34, 25, 3, 3]$ & 3 & .70\\
$[35, 35, 24, 3, 3]$ & 13 & .70 \\
\hline
\end{tabular}
\end{center}
\caption{\small 5 of the strongest strategies played on Facebook.}
\label{tab:blotto_strategies_strong}
\vspace{-1.1cm}
\end{table}
\begin{table}[!ht]
\vspace{-0.2cm}
\footnotesize
\begin{center}
$\left( \begin{array}{ccccccc}
s_1 & s_2 & s_3 & \vline & U_{i1} & U_{i2} & U_{i3} \\
\hline
2 & 0 & 0 & \vline & 0.5 & 0 & 0 \\
1 & 0 & 1 & \vline & 0.66 & 0 & 0.34 \\
0 & 2 & 0 & \vline & 0 & 0.5 & 0 \\
1 & 1 & 0 & \vline & 0.33 & 0.67 & 0 \\
0 & 0 & 2 & \vline & 0 & 0 & 0.5 \\
0 & 1 & 1 & \vline & 0 & 0.75 & 0.25 \\
\end{array} \right)$
\end{center}
\caption{\small Meta-game payoff table generated for strategies $s_1=[36,35,24,3,2]$, $s_2=[37,37,21,3,2]$, and $s_3=[35,35,26,2,2]$.}
\label{table:exp1Blotto}
\vspace{-0.5cm}
\end{table}
\noindent Using table \ref{table:exp1Blotto} we can compute evolutionary dynamics using the standard replicator equation. The resulting trajectory plot can be observed in Figure \ref{fig:blotto-exps}a.
The first thing we see is that we have one strong attractor, i.e, strategy $s_2=[37,37,21,3,2]$ and there is transitive behaviour, meaning that $[36,35,24,3,2]$ dominates $[35,35,26,2,2]$, $[37,37,21,3,2]$ dominates $[36,35,24,3,2]$, and $[37,37,21,3,2]$ dominates $[35,35,26,2,2]$. Although $[37,37,21,3,2]$ is the strongest strategy in this $3$-strategy meta-game, the win rates (computed over all played strategies in project Waterloo) indicate that strategy $[36,35,24,3,2]$ was more successful on Facebook. The differences are minimal, and on average it is better to choose $[37,37,21,3,2]$, which was also the most frequently chosen strategy from the set of strong strategies, see Table \ref{tab:blotto_strategies_strong}.
We show a similar plot for the evolutionary dynamics of strategies $[35,34,25,3,3]$, $[37,37,21,3,2]$, and $[35,35,24,3,3]$ in Figure \ref{fig:blotto-exps}b, which are three of the most frequently played strong strategies from Table \ref{tab:blotto_strategies_strong}.
\begin{figure*}[!tbp]
\centering
\begin{minipage}[b]{0.33\textwidth}
\begin{subfigure}{\textwidth}
\includegraphics[width=\textwidth]{Blotto_topperformers.png}
\vspace{-1.2cm}
\caption{}
\end{subfigure}
\end{minipage}
\begin{minipage}[b]{0.33\textwidth}
\begin{subfigure}{\textwidth}
\includegraphics[width=\textwidth]{mostfrequenttopperformers.png}
\vspace{-1.2cm}
\caption{}
\end{subfigure}
\end{minipage}
\vspace{-0.5cm}
\caption{\small (a) dynamics of $[36,35,24,3,2]$, $[37,37,21,3,2]$, and $[35,35,26,2,2]$. (b) dynamics of $[35,34,25,3,3]$, $[37,37,21,3,2]$, and $[35,35,24,3,3]$. }\label{fig:blotto-exps}
\end{figure*}
\subsubsection{Experiment 2: most frequently played strategies}
We compared the evolutionary dynamics of the eight most frequently played strategies and present here a selection of some of the results. The meta-game under study in this domain concerns a 2-type repeated NFG G with $|S|=8$.
We will look at various $2$-faces of the $8$-simplex.
The top eight most frequently played strategies are shown in Table \ref{tab:blotto_strategies_frequent}.
\begin{table}
\footnotesize
\begin{center}
\begin{tabular}{ |p{2cm}|p{2cm}| }
\hline
\multicolumn{2}{|c|}{Most played strategies} \\
\hline
Strategy & Frequency \\
\hline
$[34, 33, 33, 0, 0]$ & 271 \\
$[20, 20, 20, 20, 20]$ & 235 \\
$[33, 1, 33, 0, 33]$ & 127 \\
$[1, 32, 33, 1, 33]$ & 97 \\
$[35, 30, 35, 0, 0]$ & 68 \\
$[0, 100, 0, 0, 0]$ & 67 \\
$[10, 10, 35, 35, 10]$ & 58 \\
$[25, 25, 25, 25, 0]$ & 50 \\
\hline
\end{tabular}
\end{center}
\caption{\small The 8 most frequently played strategies on Facebook.}
\label{tab:blotto_strategies_frequent}
\vspace{-1cm}
\end{table}
First we investigate the strategies $[20,20,20,20,20]$, $[1,32,33,1,33]$, and $[10,10,35,35,10]$ from our strategy set. In Table \ref{table:exp2Blotto} we show the resulting meta-game payoff table of this $2$-face simplex. Using this table we can again compute the replicator dynamics and investigate the trajectory plots in Figure \ref{fig:exp2BlottoNash}a. We observe that the dynamics cycle around a mixed Nash equilibrium (every interior rest point
is a Nash equilibrium). This intransitive behaviour makes sense by looking at the pairwise interactions between strategies and the corresponding payoffs they receive from Table \ref{table:exp2Blotto}. The expected payoff for $[20,20,20,20,20]$, when playing against $[1,32,33,1,33]$, will be lower than the expected payoff for $[1,32,33,1,33]$. Similarly, $[1,32,33,1,33]$ will be dominated by $[10,10,35,35,10]$ when they meet, and to make the cycle complete, $[10,10,35,35,10]$ will receive a lower expected payoff against $[20,20,20,20,20]$. As such, the behaviour will cycle around the Nash equilibrium.
\begin{table}[!h]
\vspace{-0.2cm}
\footnotesize
\begin{center}
$\left( \begin{array}{ccccccc}
s_1 & s_2 & s_3 & \vline & U_{i1} & U_{i2} & U_{i3} \\
\hline
2 & 0 & 0 & \vline & 0.5 & 0 & 0 \\
1 & 0 & 1 & \vline & 1 & 0 & 0 \\
0 & 2 & 0 & \vline & 0 & 0.5 & 0 \\
1 & 1 & 0 & \vline & 0 & 1 & 0 \\
0 & 0 & 2 & \vline & 0 & 0 & 0.5 \\
0 & 1 & 1 & \vline & 0 & 0.1 & 0.9 \\
\end{array} \right)$
\end{center}
\caption{\small Meta-game payoff table generated for strategies $s_1=[20,20,20,20,20]$, $s_2=[1,32,33,1,33]$, and $s_3=[10,10,35,35,10]$.}
\label{table:exp2Blotto}
\vspace{-0.7cm}
\end{table}
\begin{figure*}[!tbp]
\vspace{-0.5cm}
\centering
\begin{minipage}[b]{0.33\textwidth}
\begin{subfigure}{\textwidth}
\includegraphics[width=\textwidth]{Blotto-exp3.png}
\vspace{-1.2cm}
\caption{}
\end{subfigure}
\end{minipage}
\begin{minipage}[b]{0.33\textwidth}
\begin{subfigure}{\textwidth}
\includegraphics[width=\textwidth]{Blotto-exp4.png}
\vspace{-1.2cm}
\caption{}
\end{subfigure}
\end{minipage}
\begin{minipage}[b]{0.33\textwidth}
\begin{subfigure}{\textwidth}
\includegraphics[width=\textwidth]{cyclemostfrequentstrategies.png}
\vspace{-1.2cm}
\caption{}
\end{subfigure}
\end{minipage}
\vspace{-0.5cm}
\caption{\small Dynamics of 3 $2$-faces of the $8$-simplex: (a) Nash eq. (b) Human play (c) Another example of intransitive behaviour}\label{fig:exp2BlottoNash}
\vspace{-0.3cm}
\end{figure*}
\noindent An interesting question is where human players are situated in this cyclic behaviour landscape. In Figure \ref{fig:exp2BlottoNash}b we show the same trajectory plot but added a red marker to indicate the strategy profile based on the frequencies of these 3 strategies played by human players. This is derived from Table \ref{tab:blotto_strategies_frequent} and the profile vector is $(0.6,0.25,0.15)$. If we assume that the human agents optimise their behaviour in a \emph{survival of the fittest} style they will cycle along the red trajectory.
In Figure \ref{fig:exp2BlottoNash}c we illustrate similar intransitive behaviour for three other frequently played strategies.
\vspace{-\secspace cm}
\subsection{PSRO-generated Meta-Game}
We now turn our attention to an asymmetric game. Policy Space Response Oracles (PSRO) is a multiagent reinforcement learning process that reduces the strategy space of large extensive-form games via iterative best response computation. PSRO can be seen as a generalized form of fictitious play that produces approximate best responses, with arbitrary distributions over generated responses computed by meta-strategy solvers.
PSRO was applied to a commonly used benchmark problem known as Leduc poker~\citep{Southey05}, except with a fixed action space and penalties for taking illegal moves. Therefore PSRO learned to play from scratch, without knowing which moves were legal. Leduc poker has a deck of 6 cards (jack, queen, king in two suits). Each player receives an initial private card and can bet a fixed amount of 2 chips in the first round and 4 chips in the second round, with a maximum of two raises in each round. A public card is revealed before the second round starts.
In Table \ref{fig:PSROgame} we present such an asymmetric $3 \times 3$ 2-player game generated by the first few epochs of PSRO learning to play Leduc Poker. In the game illustrated here, each player has three strategies that, for ease of the exposition, we call $\{A, B, C\}$ for player 1, and $\{D, E, F\}$ for player 2. Each one of these strategies represents an approximate best response to a distribution over previous opponent strategies.
In Table \ref{tab:CPtables} we show the two symmetric counterpart games (see section \ref{sec:theor}) of the empirical game produced by PSRO.
\begin{table}[h!]
\vspace{-0.6cm}
\footnotesize
\centering
\begin{game}{3}{3}[][]
& D & E & F \\
A & $-2.26,0.02$ & $-2.06,-1.72$ & $-1.65,-1.43$ \\
B & $-4.77,-0.13$ & $-4.02,-3.54$ & $-5.96,-2.30$ \\
C & $-2.71,-1.77$ & $-2.52,-2.94$ & $-6.10,1.06$ \\
\end{game}
\caption{\small Asymmetric PSRO meta game applied to Leduc poker.}
\label{fig:PSROgame}
\vspace{-1cm}
\end{table}
\begin{table}[htb]
\vspace{-0.5cm}
\footnotesize
\centering
\begin{minipage}{.25\textwidth}
\begin{game}{3}{3}[][]
& A & B & C \\
A & $-2.26$ & $-2.06$ & $-1.65$\\
B & $-4.77$ & $-4.02$ & $-5.96$\\
C & $-2.71$ & $-2.52$ & $-6.10$\\
\end{game}
\label{fig:CP-PSRO1}
\end{minipage
\begin{minipage}{.25\textwidth}
\begin{game}{3}{3}[][]
& D & E & F \\
D & $0.02$ & $-1.72$ & $-1.43$\\
E & $-0.13$ & $-3.54$ & $-2.30$\\
F & $-1.77$ & $-2.94$ & $1.06$\\
\end{game}
\label{fig:CP-PSRO2}
\end{minipage}
\caption{\small Left - first counterpart game of the PSRO empirical game. Right - second counterpart game of the PSRO empirical game.}\label{tab:CPtables}
\vspace{-0.8cm}
\end{table}
Again we can analyse the equilibrium landscape of this game, but now using the asymmetric meta-game payoff table and the decomposition result introduced in section \ref{sec:theor}. Since the PSRO meta-game is asymmetric we need two populations for the asymmetric replicator equations. Analysing and plotting the evolutionary asymmetric replicator dynamics quickly becomes very tedious as we deal with two simplices, one for each player. More precisely, if we consider a strategy profile for one player in its corresponding simplex, and that player is adjusting its strategy, this will immediately cause the second simplex to change, and vice versa. Consequently, it is no longer straightforward to analyse the dynamics.
In order to facilitate the process of analysing the dynamics we can apply the counterpart theorems to remedy the problem.
In Figures \ref{fig:dfieldPSROP1} and \ref{fig:dfieldPSROP2} we show the evolutionary dynamics of the counterpart games. As can be observed in Figure \ref{fig:dfieldPSROP1} the first counterpart game has only one equilibrium, i.e., a pure Nash equilibrium in which both players play strategy $A$, which absorbs the entire strategy space. Looking at Figure \ref{fig:dfieldPSROP2} we see that the situation is a bit more complex in the second counterpart game; here we observe three equilibria: one pure at strategy $D$, one pure at strategy $F$, and one unstable mixed equilibrium at the 1-face formed by strategies $D$ and $F$. All these equilibria are Nash in the respective counterpart games\footnote{Banach solver (\url{http://banach.lse.ac.uk/}) is used to check Nash equilibria \cite{Avis10}}. By applying the theory of section \ref{sec:theor} we now know that we only maintain the combination $((1,0,0),(1,0,0))$ as a pure Nash equilibrium of the asymmetric PSRO empirical game, since these strategies have the same support as a Nash equilibrium in the counterpart games. The other equilibria in the second counterpart game can be discarded as candidates for Nash equilibria in the PSRO empirical game since they do not appear as equilibria for player 1.
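\noindent The pure part of this analysis is easy to verify directly on the payoff bimatrix of Table \ref{fig:PSROgame}; the following sketch (added here for illustration) enumerates the pure Nash equilibria:
\begin{verbatim}
# Enumerate pure Nash equilibria of the asymmetric PSRO meta game (Table fig:PSROgame).
import numpy as np

R = np.array([[-2.26, -2.06, -1.65],    # row player: strategies A, B, C
              [-4.77, -4.02, -5.96],
              [-2.71, -2.52, -6.10]])
C = np.array([[ 0.02, -1.72, -1.43],    # column player: strategies D, E, F
              [-0.13, -3.54, -2.30],
              [-1.77, -2.94,  1.06]])

pure_nash = [(i, j) for i in range(3) for j in range(3)
             if R[i, j] >= R[:, j].max() and C[i, j] >= C[i, :].max()]
print(pure_nash)                        # [(0, 0)], i.e. the profile (A, D)
\end{verbatim}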
\begin{figure}[h!]
\vspace{-0.6cm}
\centering
\begin{minipage}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{PSRO-player1oldgame.png}
\vspace{-1.2cm}
\caption{\small Trajectory plot of the first CP game.}
\label{fig:dfieldPSROP1}
\end{minipage}
\vspace{-0.4cm}
\end{figure}
\begin{figure}[h!]
\vspace{-0.3cm}
\centering
\begin{minipage}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{PSRO-player2-oldgame.png}
\vspace{-1.2cm}
\caption{\small Trajectory plot of the 2nd CP game.}
\label{fig:dfieldPSROP2}
\end{minipage}
\vspace{-0.4cm}
\end{figure}
Finally, each joint action of the game was estimated with $100$ samples. As the outcome of the game is bounded in the interval $[-13,13]$ we can only guarantee that the Nash equilibrium of the meta game we studied is a $2\epsilon$-Nash equilibrium of the unknown underlying game. It turns out that with $n=100$ and $\epsilon=0.05$, the confidence can only be guaranteed to be above $10^{-8}$. To guarantee a confidence of at least $0.95$ for the same value of $\epsilon=0.05$, we would need at least $n=886 \times 10^3$ samples.
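\noindent One plausible reading of this estimate (an illustration added here, assuming Hoeffding's inequality for payoffs with range $R=26$, applied independently to the $9$ joint actions and $2$ players, i.e. $18$ estimated payoff entries) reproduces the order of magnitude of the required sample size:
\begin{verbatim}
# Sketch: smallest n with (1 - 2 exp(-2 n eps^2 / R^2))^18 >= 0.95.
# The reading of the bound (18 independent payoff entries, range R = 26) is an
# assumption made for this illustration.
import math

eps, R, entries, target = 0.05, 26.0, 18, 0.95
n_required = math.ceil(R**2 / (2*eps**2) * math.log(2.0 / (1.0 - target**(1.0/entries))))
print(n_required)   # roughly 886 * 10^3 samples per joint action
\end{verbatim}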
\section{Introduction}
\noindent Using game theory to examine multi-agent interactions in complex systems is a non-trivial task. Works by Walsh et al. \cite{Walsh02,Walsh03} and Wellman et al. \cite{Wellman06}, have shown the great potential of using heuristic strategies and empirical game theory to examine such interactions at a higher meta-level, instead of trying to capture the decision-making processes at the level of the atomic actions involved. Doing this turns the interaction in a smaller normal form game, or meta-game, with the higher-level strategies now being the primitive actions of the game, making the complex multi-agent interaction amenable to game theoretic analysis.
\noindent Others have built on this empirical game theoretic methodology and applied these ideas to no limit Texas hold'em Poker and various types of double auctions for example, see \cite{PhelpsPM04,PonsenTKR09,PhelpsCMNPS07,KaisersTTP08,TuylsP07}, showing that a game theoretic analysis at the level of meta-strategies yields novel insights into the type and form of interactions in complex systems.
\noindent A major limitation of this empirical game theoretic approach is that it comes without theoretical guarantees on the approximation of the true underlying game by an estimated game based on sampled data, and that it is unclear how many data samples are required to achieve a good approximation. Additionally, the method remains limited to symmetric situations, in which the agents or players have access to the same set of strategies, and are interchangeable. One approach is to ignore asymmetry (types of players), and average over many samples of types resulting in a single expected payoff to each player in each entry of the meta-game payoff table. Many real-world situations though are asymmetric in nature and involve various roles for the agents that participate in the interactions. For instance, buyers and sellers in auctions, or games such as Scotland Yard, but also different roles in e.g. robotic soccer (defender vs striker) and even natural language (hearer vs speaker).
\noindent In this paper we tackle these problems. We prove that a Nash equilibrium of the estimated game is a $2\epsilon$-Nash equilibrium of the real underlying game, showing that we can closely approximate the real Nash equilibrium as long as we have enough data samples from which to build the meta-game payoff table. Furthermore, we also examine how many data samples are required to confidently approximate the underlying game. We also show how to generalise the heuristic payoff or meta-game method introduced by Walsh \textit{et al.} to two-population asymmetric games.
\noindent Finally, we illustrate the generalised method in several domains. We carry out an experimental illustration on the \textit{AlphaGo} algorithm \cite{DSilverHMGSDSAPL16}, Colonel Blotto \cite{KohliKBHSG12} and an asymmetric Leduc poker game. In the \textit{AlphaGo} experiments we show how a symmetric meta-game analysis can provide insights into the evolutionary dynamics and strengths of various versions of the \textit{AlphaGo} algorithm while it was being developed, and how intransitive behaviour can occur by introducing a non-related strategy. In the Colonel Blotto game we illustrate how the methodology can provide insights into how humans play this game, constructing several symmetric meta-games from data collected on Facebook. Finally, we illustrate the method in Leduc poker, by examining an asymmetric meta-game, generated by a recently introduced multiagent reinforcement learning algorithm, policy-space response oracles (PSRO) \cite{Lanctot17}. For this analysis we rely on some theoretical results that connect an asymmetric normal form game to its symmetric counterparts \cite{TuylsSym}.
\section{Method}\label{sec:method}
\noindent There are now two possibilities, either the meta-game is symmetric, or it is asymmetric. We will start with the simpler symmetric case, which has been studied in empirical game theory, then we continue with asymmetric games, in which we consider two populations, or roles.
\vspace{-\secspace cm}
\subsection{Symmetric Meta Games}
We consider a set of agents or players $A$ with $|A|=n$ that can choose a strategy from a set $S$ with $|S|=k$ and can participate in one or more $p$-type meta-games with $p \leq n$.
If the game is symmetric then the formulation of meta strategies has the advantage that the payoff for a strategy does not depend on which player has chosen that strategy and consequently the payoff for that strategy only depends on the composition of strategies it is facing in the game and not on who is playing the strategy. This symmetry has been the main focus of the use of empirical game theory analysis \cite{Walsh02,Wellman06,PonsenTKR09,PhelpsCMNPS07}.
\noindent If we were to construct a classical payoff table for $\mathbf{r}$ we would require $k^p$ entries in the table (which becomes large very quickly). Since all players can choose from the same strategy set and all players receive the same payoff for being in the same situation, we can simplify our payoff table.
\noindent Let $N$ be a matrix, where each row $N_i$ contains a discrete distribution of $p$ players over $k$ strategies. The matrix yields $\binom{p+k-1}{p}$ rows. Each distribution over strategies can be simulated (or derived from data), returning a vector of expected rewards $u(N_i)$. Let $U$ be a matrix which captures the rewards corresponding to the rows in $N$, i.e., $U_i = u(N_i)$. We refer to a meta payoff table as $M = (N, U)$.
\noindent So each row yields a \emph{discrete profile} $(n_{\pi_1}, \ldots, n_{\pi_k})$ indicating exactly how many players play each strategy, with $\sum_j n_{\pi_j}=p$. A strategy profile $\mathbf{x}$ then equals $(\frac{n_{\pi_1}}{p}, \ldots, \frac{n_{\pi_k}}{p})$.
\noindent Suppose we have a meta-game with $3$ meta-strategies ($|S|=3$) and $6$ players ($|A|=6$) that interact in a $6$-type game; this leads to a meta-game payoff table of $28$ entries, which is a good reduction from the $3^6=729$ cells of a classical payoff table. An important advantage of this type of table is that it easily extends to many agents, as opposed to the classical payoff matrix. Table \ref{table:hpt} provides an example for three strategies $\pi_1, \pi_2$ and $\pi_3$. The left-hand side expresses the discrete profiles and corresponds to matrix $N$, while the right-hand side gives the payoffs for playing any of the strategies given the discrete profile and corresponds to matrix $U$.
\begin{table}[!ht]
\footnotesize
\begin{center}
$P = \left( \begin{array}{ccccccc}
N_{i1}& N_{i2} & N_{i3} & \vline & U_{i1} & U_{i2} & U_{i3} \\
\hline
6 & 0 & 0 & \vline & 0 & 0 & 0 \\
& ... & & \vline & & ... & \\
4 & 0 & 2 & \vline & -0.5 & 0 & 1 \\
& ... & & \vline & & ... & \\
0 & 0 & 6 & \vline & 0 & 0 & 0 \\
\end{array} \right)$
\end{center}
\caption{\small An example of a meta game payoff table}
\label{table:hpt}
\vspace{-1cm}
\end{table}
In order to analyse the evolutionary dynamics of high-level meta-strategies, we also need to estimate the expected payoff of such strategies relative to each other. In evolutionary game theoretic terms, this is the relative fitness of the various strategies, dependent on the current frequencies of those strategies in the population.
In order to approximate the payoff for an arbitrary mix of strategies in an infinite population of replicators distributed over the species according to $\mathbf{x}$, $p$ individuals are drawn randomly from the infinite distribution. The probability for selecting a specific row $N_i$ can be computed from $\mathbf{x}$ and $N_i$ as
\begin{equation*}
\phantom{.}P(N_i | \mathbf{x}) = \binom{p}{N_{i1}, N_{i2}, \ldots, N_{ik}} \prod_{j=1}^{k} x_j^{N_{ij}}.
\end{equation*}
The expected payoff of strategy $\pi^i$, $r^i(\mathbf{x})$, is then computed as the weighted combination of the payoffs given in all rows:
\begin{equation*}
\phantom{.}r^i(\mathbf{x}) = \frac{\sum_j P(N_j | \mathbf{x}) U_{ji}}{1 - (1-x_i)^p}.
\end{equation*}
This expected payoff function can be used in Equation~\ref{eq:singleRD} to compute the evolutionary population change according to the replicator dynamics by replacing $(A x)_i$ by $r^i(\mathbf{x})$. Note that the denominator re-normalizes the payoff: it is the probability that strategy $\pi^i$ is present at all, so rows of the HPT in which $\pi^i$ does not appear, and which therefore do not contribute to its payoff, are effectively ignored.
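\noindent As an illustration (a sketch added here, not taken from the original text), the two formulas above can be implemented in a few lines; applied, for example, to the $2$-player, $3$-strategy table of Table \ref{table:exp1Go}, it returns the expected payoff of each meta-strategy under a uniform profile:
\begin{verbatim}
# Sketch: expected payoffs r^i(x) of the meta-strategies from a table M = (N, U),
# following the two formulas above.
import numpy as np
from scipy.stats import multinomial

def expected_payoffs(N, U, x):
    p = int(N[0].sum())                                # players per game
    probs = np.array([multinomial.pmf(row, p, x) for row in N])
    r = np.zeros(len(x))
    for i in range(len(x)):
        present = N[:, i] > 0                          # rows where strategy i occurs
        r[i] = probs[present] @ U[present, i] / (1.0 - (1.0 - x[i])**p)
    return r

# example: the 2-player, 3-strategy meta table of Table exp1Go
N = np.array([[2,0,0], [1,0,1], [0,2,0], [1,1,0], [0,0,2], [0,1,1]])
U = np.array([[0.5,0,0], [0.95,0,0.05], [0,0.5,0], [0.99,0.01,0], [0,0,0.5], [0,0.39,0.61]])
print(expected_payoffs(N, U, np.array([1/3, 1/3, 1/3])))
\end{verbatim}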
\vspace{-\secspace cm}
\subsection{Asymmetric Meta Games}
\noindent One can now wonder how the previously introduced method extends to asymmetric games, which has not been considered in the literature.
An example of an asymmetric game is the famous battle of the sexes game illustrated in Table \ref{fig:BoS}. In this game both players have the same strategy sets, i.e., go to the opera or go to the movies; however, the corresponding payoffs for each are different, expressing the differences in preferences that both players have.
\begin{table}[htb]
\vspace{-0.5cm}
\footnotesize
\centering
\begin{minipage}{.20\textwidth}
\begin{game}{2}{2}[][]
& O & M \\
O & $3,2$ & $0,0$ \\
M & $0,0$ & $2,3$ \\
\end{game}
\caption{\footnotesize Battle of the Sexes game: strategies $O$ and $M$ correspond to going to the Opera and going to the Movies respectively.}\label{fig:BoS}
\end{minipage
\begin{minipage}{.30\textwidth}
\begin{game}{3}{3}[][]
& $C_1$ & $C_2$ & $C_3$ \\
$R_1$ & $r_{11},c_{11}$ & $r_{12},c_{12}$ & $r_{13},c_{13}$\\
$R_2$ & $r_{21},c_{21}$ & $r_{22},c_{22}$ & $r_{23},c_{23}$\\
$R_3$ & $r_{31},c_{31}$ & $r_{32},c_{32}$ & $r_{33},c_{33}$\\
\end{game}
\caption{\footnotesize General 3x3 normal form game.}
\label{fig:gentab}
\end{minipage}
\vspace{-0.8cm}
\end{table}
\noindent If we aim to carry out a similar evolutionary analysis as in the symmetric case, restricting ourselves to two populations or roles, we will need two meta game payoff tables, one for each player over its own strategy set. We will also need to use the asymmetric version of the replicator dynamics as shown in Equation \ref{eq:asymRD2}.
Additionally, in order to compute the right payoffs for every situation we will have to interpret a discrete strategy profile in the meta-table slightly differently. Suppose we have a 2-type meta game, with three strategies in each player's strategy set. We introduce a generalisation of our meta-table for both players by means of an example shown in Table \ref{table:asym-hpt}, which corresponds to the general NFG shown in Table \ref{fig:gentab}.
\begin{table}[!ht]
\vspace{-0.3cm}
\footnotesize
\begin{center}
$P = \left( \begin{array}{ccccccc}
N_{i1,j1}& N_{i2,j2} & N_{i3,j3} & \vline & U_{i1,j1} & U_{i2,j2} & U_{i3,j3} \\
\hline
(1,1) & 0 & 0 & \vline & (r_{11},c_{11}) & 0 & 0 \\
& ... & & \vline & & ... & \\
(1,0) & (0,1) & 0 & \vline & (r_{12},0) & (0,c_{12}) & 0 \\
(0,1) & (1,0) & 0 & \vline & (0,c_{21}) & (r_{21},0) & 0 \\
& ... & & \vline & & ... & \\
0 & 0 & (1,1) & \vline & 0 & 0 & (r_{33},c_{33}) \\
\end{array} \right)$
\end{center}
\caption{\small An example of an asymmetric meta game payoff table}
\label{table:asym-hpt}
\vspace{-1cm}
\end{table}
\noindent Let's have a look at the first entry in Table \ref{table:asym-hpt}, i.e., $[(1,1), 0, 0]$. This entry means that both agents ($i$ and $j$) are playing their first strategy, expressed by $N_{i1,j1}$, meaning that the number of agents $N_{i1}$ playing strategy $\pi^1_i$ in the first population equals $1$ and that the number of agents $N_{j1}$ playing strategy $\pi^2_j$ in the second population equals $1$ as well. The corresponding payoff for each player, $U_{i1,j1}$, equals $(r_{11},c_{11})$. Now let's have a look at the discrete profiles $[(1,0),(0,1),0]$ and $[(0,1),(1,0),0]$. The first one means that the first player plays its first strategy while the second player plays its second strategy. The corresponding payoffs are $r_{12}$ for the first player and $c_{12}$ for the second player.
The profile $[(0,1),(1,0),0]$ shows the reversed situation, in which the first player plays its second strategy and the second player plays its first strategy, yielding payoffs $r_{21}$ and $c_{21}$ for the first and the second player, respectively.
In order to turn the table into a similar format as for the symmetric case, we can now introduce $p$ meta-tables, one for each player. More precisely, we get Tables \ref{table:asym-hpt-decomp1} and \ref{table:asym-hpt-decomp2} for players 1 and 2 respectively.
\begin{table}[!ht]
\vspace{-0.5cm}
\footnotesize
\begin{center}
$P = \left( \begin{array}{ccccccc}
N_{i1,j1}& N_{i2,j2} & N_{i3,j3} & \vline & U_{i1,j1} & U_{i2,j2} & U_{i3,j3} \\
\hline
2 & 0 & 0 & \vline & r_{11} & 0 & 0 \\
& ... & & \vline & & ... & \\
1 & 1 & 0 & \vline & r_{12} & r_{21} & 0 \\
& ... & & \vline & & ... & \\
0 & 0 & 2 & \vline & 0 & 0 & r_{33} \\
\end{array} \right)$
\end{center}
\caption{\small A decomposed asymmetric meta payoff table for Player 1.}
\label{table:asym-hpt-decomp1}
\vspace{-1.1cm}
\end{table}
\begin{table}[!ht]
\footnotesize
\begin{center}
$Q = \left( \begin{array}{ccccccc}
N_{i1,j1}& N_{i2,j2} & N_{i3,j3} & \vline & U_{i1,j1} & U_{i2,j2} & U_{i3,j3} \\
\hline
2 & 0 & 0 & \vline & c_{11} & 0 & 0 \\
& ... & & \vline & & ... & \\
1 & 1 & 0 & \vline & c_{12} & c_{21} & 0 \\
& ... & & \vline & & ... & \\
0 & 0 & 2 & \vline & 0 & 0 & c_{33} \\
\end{array} \right)$
\end{center}
\caption{\small A decomposed asymmetric meta payoff table for Player 2.}
\label{table:asym-hpt-decomp2}
\vspace{-0.8cm}
\end{table}
\noindent One needs to take care in correctly interpreting these tables. Consider, for instance, the row $[1, 1, 0]$. It should now be interpreted in two ways: first, the first player plays its first strategy while the other player plays its second strategy, and the first player receives a payoff of $r_{12}$; second, the first player plays its second strategy while the other player plays its first strategy, and the first player receives a payoff of $r_{21}$.
The expected payoff $r^i(\mathbf{x})$ can now be estimated in the same way as explained for the symmetric case, since we will be relying on symmetric replicator dynamics by decoupling asymmetric games into their \textit{symmetric counterpart} games (explained in the next section).
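The decomposition of Tables \ref{table:asym-hpt-decomp1} and \ref{table:asym-hpt-decomp2} can be generated mechanically from the bimatrix $(A,B)$ of the general NFG in Table \ref{fig:gentab}. The sketch below is ours and only illustrative; the array layout is a hypothetical choice.
\begin{verbatim}
# Illustrative sketch: build the two decomposed meta payoff tables (one per
# player) of a 2-player bimatrix game (A, B), mirroring the tables above.
import numpy as np
from itertools import combinations_with_replacement

def decomposed_tables(A, B):
    A, B = np.asarray(A, float), np.asarray(B, float)
    k = A.shape[0]
    rows_N, rows_P, rows_Q = [], [], []
    for (s, t) in combinations_with_replacement(range(k), 2):
        counts = np.zeros(k, int)
        counts[s] += 1
        counts[t] += 1
        uP, uQ = np.zeros(k), np.zeros(k)
        if s == t:                         # both agents use the same strategy
            uP[s], uQ[s] = A[s, s], B[s, s]
        else:                              # two readings of the row, as in the text
            uP[s], uP[t] = A[s, t], A[t, s]
            uQ[s], uQ[t] = B[s, t], B[t, s]
        rows_N.append(counts); rows_P.append(uP); rows_Q.append(uQ)
    return np.array(rows_N), np.array(rows_P), np.array(rows_Q)
\end{verbatim}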
\vspace{-\secspace cm}
\subsection{Linking symmetric and asymmetric games}\label{sec:theor}
Here we summarize the most important results on the link between an asymmetric game and its symmetric counterpart games. For a full treatment and discussion of these results see \cite{TuylsSym}.
In a nutshell, this work proves that if $x,y$ is a Nash equilibrium of the bimatrix game $(A,B)$ (where $x$ and $y$ have the same support\footnote{$x$ and $y$ have the same support if $I_x=I_y$ where $I_x = \{i \; | \; x_i>0\}$ and $I_y = \{i \; | \; y_i>0\}$}), then $y$ is a Nash equilibrium of the single population, or symmetric, game $A$ and $x$ is a Nash equilibrium of the single population, or symmetric, game $B^\top$. Both symmetric games are called the \emph{counterpart games} of the asymmetric game $(A,B)$. The reverse is also true: If $y$ is a Nash equilibrium of the single population game $A$ and $x$ is a Nash equilibrium of the single population game $B^\top$ (and if $x$ and $y$ have the same support), then $x,y$ is a Nash equilibrium of the game $(A,B)$.
In our empirical analysis, we use this property to analyze an asymmetric game $(A,B)$ by looking at the counterpart single population games $A$ and $B^\top$.
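As a sanity check of this property, the following small sketch (ours, purely illustrative) verifies it numerically on the Battle of the Sexes payoffs of Table \ref{fig:BoS}, for the pure equilibrium in which both players go to the Opera.
\begin{verbatim}
# Illustrative sketch: numerically check the counterpart-game link on the
# Battle of the Sexes game; the check functions themselves are generic.
import numpy as np

def is_single_population_nash(A, x, tol=1e-9):
    """x is a Nash eq. of the single-population game A iff x^T A x >= max_i (A x)_i."""
    Ax = A @ x
    return x @ Ax >= Ax.max() - tol

def is_bimatrix_nash(A, B, x, y, tol=1e-9):
    return (x @ A @ y >= (A @ y).max() - tol) and (x @ B @ y >= (x @ B).max() - tol)

A = np.array([[3.0, 0.0], [0.0, 2.0]])   # row player payoffs (Battle of the Sexes)
B = np.array([[2.0, 0.0], [0.0, 3.0]])   # column player payoffs
x = y = np.array([1.0, 0.0])             # both go to the Opera
print(is_bimatrix_nash(A, B, x, y),
      is_single_population_nash(A, y),
      is_single_population_nash(B.T, x))  # all True, as the counterpart result predicts
\end{verbatim}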
\section{Preliminaries}\label{sec:prelim}
In this section, we introduce the necessary background to describe our game theoretic meta-game analysis of the repeated interaction between $p$ players.
\vspace{-\secspace cm}
\subsection{Normal Form Games:} In a $p$-player Normal Form Game (NFG), players are involved in a single round strategic interaction. Each player $i$ chooses a strategy $\pi^i$ from a set of $k$ strategies $S^i = \{\pi_1^i, \dots,\pi_k^i\}$ and receives a payoff $r^i(\pi^1, \dots, \pi^p): S^1 \times \dots \times S^p \rightarrow \mathbb{R}$. For the sake of simplicity, we will write $\bm{\pi}$ for the joint strategy $(\pi^1,\dots,\pi^p)$ and $\bm{r}(\bm{\pi})$ for the joint reward $(r^1(\bm{\pi}), \dots,r^p(\bm{\pi}))$. A $p$-player NFG is then the tuple $G=(S^1, \dots, S^p, r^1, \dots, r^p)$.
Each player interacts in this game by following a strategy profile $x^i$ which is a probability distribution over $S^i$.
A symmetric NFG captures interactions where players can be interchanged. The first condition is therefore that the strategy sets are the same for all players (\textit{i.e.} $\forall i,j \; S^i=S^j$, which we will write $S$). In a symmetric NFG, if a permutation is applied to the joint strategy $\bm{\pi}$, the joint payoff is permuted accordingly. Formally, a game $G$ is symmetric if for all permutations $\sigma$ of $p$ elements we have $\bm{r}(\bm{\pi}_{\sigma}) = \bm{r}_{\sigma}(\bm{\pi}) $ (where $\bm{\pi}_{\sigma} = (\pi^{\sigma(1)}, \dots, \pi^{\sigma(p)})$ and $\bm{r}_{\sigma}(\bm{\pi}) = (r^{\sigma(1)}(\bm{\pi}), \dots,r^{\sigma(p)}(\bm{\pi}))$). So for a game to be symmetric two conditions must hold: the players need to have access to the same strategy set, and the payoff structure needs to be symmetric, such that players are interchangeable. If one of these two conditions is violated, the game is asymmetric.
In the asymmetric case our analysis will focus on the two-player case (two roles) and thus we introduce specific notations for the sake of simplicity. In a two-player normal-form game, each player's payoff can be seen as a $k \times k$ matrix. We will write $A = (a_{l,l'})_{1 \leq l,l' \leq k}$ for the payoff matrix of player one (\textit{i.e.} $a_{l,l'} = r^1(\pi^1_l, \pi^2_{l'})$) and $B = (b_{l,l'})_{1 \leq l,l' \leq k}$ for the payoff matrix of player two (\textit{i.e.} $b_{l,l'} = r^2(\pi^1_l, \pi^2_{l'})$). In this two-player game, the column vector $x$ is the strategy of player one and $y$ the one of player two. In the end, a two player NFG is defined by the following tuple $G=(S^1, S^2, A, B)$.
\vspace{-\secspace cm}
\subsection{Nash Equilibrium}
In a two-player game, a pair of strategies $(x,y)$ is a Nash equilibrium of the game $(A,B)$ if no player has an incentive to switch from its current strategy. In other words, $(x,y)$ is a Nash equilibrium if $x^\top A y = \max_{x'} {x'}^\top A y$ and $x^\top B y = \max_{y'} x^\top B y'$.
Evolutionary game theory often considers a single strategy $x$ that plays against itself. In this situation, the game is said to have a single population. In a single population game, $x$ is a Nash equilibrium if $x^\top A x = \max_{x'} {x'}^\top A x$.
\vspace{-\secspace cm}
\subsection{Replicator Dynamics}
\label{ReplicatorDynamics}
The replicator dynamics equation describes how a strategy profile evolves in the midst of others. This evolution is described according to a first order dynamical system. In a two-player NFG $(A, B, S^1, S^2)$, the replicator equations are defined as:
\vspace{-0.2cm}
\begin{align}\label{eq:asymRD1}
&\dot{x}_l = x_l \left( (A y)_l - x^\top A y\right)
&\dot{y}_{l'} = y_{l'} \left( (x^\top B)_{l'} - x^\top B y \right)
\end{align}
The dynamics defined by these two coupled differential equations change the strategy profiles so as to increase the probability of the strategies that have the best return, i.e., are the \textit{fittest}.
In the case of a symmetric two-player game ($A=B^\top$), the replicator equations assume that both players play the same strategy profile (\textit{i.e.} player one and two play according to $x$) and the dynamics is defined as follows:
\vspace{-0.2cm}
\begin{align}\label{eq:singleRD}
&\dot{x}_l = x_l \left( (A x)_l - x^\top A x\right)
\end{align}
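For completeness, a simple forward-Euler integration of the two-population dynamics in Equation \ref{eq:asymRD1} can be sketched as follows; the step size and horizon are illustrative choices of ours, and the projection back onto the simplex is a numerical safeguard rather than part of the dynamics.
\begin{verbatim}
# Illustrative sketch: forward-Euler integration of the two-population
# replicator dynamics for a bimatrix game (A, B).
import numpy as np

def replicator_trajectory(A, B, x0, y0, dt=0.01, steps=5000):
    x, y = np.array(x0, float), np.array(y0, float)
    traj = [(x.copy(), y.copy())]
    for _ in range(steps):
        dx = x * (A @ y - x @ A @ y)          # x_l ((A y)_l - x^T A y)
        dy = y * (B.T @ x - x @ B @ y)        # y_l ((x^T B)_l - x^T B y)
        x, y = x + dt * dx, y + dt * dy
        x, y = np.clip(x, 0, None), np.clip(y, 0, None)
        x, y = x / x.sum(), y / y.sum()       # keep the profiles on the simplex
        traj.append((x.copy(), y.copy()))
    return traj
\end{verbatim}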
\vspace{-\secspace cm}
\subsection{Meta Games}
A meta game is a simplified model of a complex interaction. In order to analyze complex games, e.g. poker, we do not need to consider all possible strategies but only a set of relevant meta-strategies that are often played~\cite{PonsenTKR09}. These meta-strategies (or styles of play), defined over atomic actions, are commonly adopted by players, such as ``passive/aggressive'' or ``tight/loose'' play in poker. A $p$-type meta game is now a $p$-player repeated NFG where players play a limited number of meta-strategies. Following our poker example, the strategy set of the meta game will now be defined as the set $S=\{\textbf{``aggressive''}, \textbf{``tight''}, \textbf{``passive''}\}$ and the reward function as the outcome of a game between $p$ players using different profiles.
\section{Theoretical Insights}
\label{TheoreticalInsights}
As illustrated in the previous section, the procedure for empirical meta-game analysis consists of two parts. Firstly, one needs to construct an empirical meta-game utility function for each player. This step can be performed using logs of interactions between players, or by playing the game sufficiently often. Secondly, one expects that analyzing the empirical game will give insights into the true underlying game itself (i.e. the game from which we sample).
This section provides insights into the following questions: how much data is enough to generate a good approximation of the true underlying game, and is uniform sampling over actions or strategies the right method?
\vspace{-\secspace cm}
\subsection{Main Lemma}
Sometimes players receive a stochastic reward $R^i(\pi^1, \dots, \pi^p)$ for a given joint action $\bm{\pi}$. The underlying game we study is $r^i(\pi^1, \dots,\pi^p) = E \left[ R^i(\pi^1, \dots,\pi^p) \right]$ and for the sake of simplicity the joint action of every player but player $i$ will be written $\bm{\pi}^{-i}$.
In the following two definitions, we introduce the concepts of Nash equilibrium and $\epsilon$-Nash equilibrium in $p$-player games (so far we only introduced them in the $2$-player case):
\textbf{Definition :} A joint strategy $\bm{x} = (x^1, \dots,x^p) = (x^{i}, \bm{x}^{-i})$ is a Nash equilibrium if for all $i$:
$$E_{\bm{\pi} \sim \bm{x}} \left[ r^i(\bm{\pi})\right] = \max_{\pi^i} E_{\bm{\pi}^{-i} \sim \bm{x}^{-i}} \left[ r^i(\pi^i, \bm{\pi}^{-i})\right]$$
\textbf{Definition :} A joint strategy $\bm{x} = (x^1, \dots,x^p) = (x^{i}, \bm{x}^{-i})$ is an $\epsilon$-Nash equilibrium if for all $i$:
$$\max_{\pi^i} E_{\bm{\pi}^{-i} \sim \bm{x}^{-i}} \left[ r^i(\pi^i, \bm{\pi}^{-i})\right] - E_{\bm{\pi} \sim \bm{x}} \left[ r^i(\bm{\pi})\right] \leq \epsilon$$
When running an analysis on a meta game, we do not have access to the average reward function $r^i(\pi^1, \dots,\pi^p)$ but to an empirical estimate $\hat{r}^i(\pi^1, \dots,\pi^p)$. The following lemma shows that a Nash equilibrium for the empirical game $\hat{r}^i(\pi^1, \dots,\pi^p)$ is a $2\epsilon$-Nash equilibrium for the game $r^i(\pi^1, \dots,\pi^p)$, where $\epsilon = \sup_{\bm{\pi},i} |\hat{r}^i(\bm{\pi})-r^i(\bm{\pi})|$.
\textbf{Lemma:} If $\bm{x}$ is a Nash equilibrium for $\hat{r}^i(\pi^1, \dots,\pi^p)$, then it is a $2\epsilon$-Nash equilibrium for the game $r^i(\pi^1, \dots,\pi^p)$, where $\epsilon = \sup_{\bm{\pi},i} |r^i(\bm{\pi})-\hat{r}^i(\bm{\pi})|$.
\textbf{Proof:}\newline
First we have the following relation:
\begin{align*}
&E_{\bm{\pi} \sim \bm{x}} \left[ r^i(\bm{\pi})\right] = E_{\bm{\pi} \sim \bm{x}} \left[ \hat{r}^i(\bm{\pi})\right] + E_{\bm{\pi} \sim \bm{x}} \left[ r^i(\bm{\pi}) - \hat{r}^i(\bm{\pi})\right]
\end{align*}
Then:
\footnotesize
\begin{align*}
E_{\bm{\pi}^{-i} \sim \bm{x}^{-i}} \left[ r^i(\pi^i, \bm{\pi}^{-i})\right] &= E_{\bm{\pi}^{-i} \sim \bm{x}^{-i}} \left[ \hat{r}^i(\pi^i, \bm{\pi}^{-i})\right]\\
&\quad + E_{\bm{\pi}^{-i} \sim \bm{x}^{-i}} \left[ r^i(\pi^i, \bm{\pi}^{-i}) - \hat{r}^i(\pi^i, \bm{\pi}^{-i})\right]\\
\max_{\pi^i} E_{\bm{\pi}^{-i} \sim \bm{x}^{-i}} \left[ r^i(\pi^i, \bm{\pi}^{-i})\right] &\leq \max_{\pi^i} E_{\bm{\pi}^{-i} \sim \bm{x}^{-i}} \left[ \hat{r}^i(\pi^i, \bm{\pi}^{-i})\right]\\
& \quad + \max_{\pi^i} E_{\bm{\pi}^{-i} \sim \bm{x}^{-i}} \left[ r^i(\pi^i, \bm{\pi}^{-i}) - \hat{r}^i(\pi^i, \bm{\pi}^{-i})\right]
\end{align*}
\normalsize
Finally,
\footnotesize
\begin{align*}
&\max_{\pi^i} E_{\bm{\pi}^{-i} \sim \bm{x}^{-i}} \left[ r^i(\pi^i, \bm{\pi}^{-i})\right] - E_{\bm{\pi} \sim \bm{x}} \left[ r^i(\bm{\pi})\right]\\
&\qquad\qquad\qquad \leq \underbrace{\max_{\pi^i} E_{\bm{\pi}^{-i} \sim \bm{x}^{-i}} \left[ \hat{r}^i(\pi^i, \bm{\pi}^{-i})\right] - E_{\bm{\pi} \sim \bm{x}} \left[ \hat{r}^i(\bm{\pi})\right]}_{=0 \textrm{ since $\bm{x}$ is a Nash equilibrium for $\hat{r}^i$}}\\
&\qquad\qquad\qquad\qquad \underbrace{ + \max_{\pi^i} E_{\bm{\pi}^{-i} \sim \bm{x}^{-i}} \left[ r^i(\pi^i, \bm{\pi}^{-i}) - \hat{r}^i(\pi^i, \bm{\pi}^{-i})\right]}_{\leq \epsilon}\\
&\qquad\qquad\qquad\qquad \underbrace{ - E_{\bm{\pi} \sim \bm{x}} \left[ r^i(\bm{\pi}) - \hat{r}^i(\bm{\pi})\right]}_{\leq \epsilon}
\end{align*}
\normalsize
\qed
This lemma shows that if one can control the deviation $|r^i(\bm{\pi})-\hat{r}^i(\bm{\pi})|$ uniformly over players and joint strategies, then an equilibrium for the empirical game $\hat{r}^i(\pi^1, \dots,\pi^p)$ is almost an equilibrium for the game defined by the average reward function $r^i(\pi^1, \dots,\pi^p)$.
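As a small numerical illustration of the lemma (ours, not from the paper), one can perturb the payoff matrices of a random two-player game by at most $\epsilon$, look for a pure Nash equilibrium of the perturbed game by enumeration, and verify that its Nash gap in the true game never exceeds $2\epsilon$ (the check is vacuous if the perturbed game happens to have no pure equilibrium).
\begin{verbatim}
# Illustrative sketch: empirically check the 2*eps guarantee on a random
# two-player game whose payoffs are perturbed by at most eps.
import numpy as np

def nash_gap(A, B, x, y):
    """Largest unilateral improvement available to either player at (x, y)."""
    return max((A @ y).max() - x @ A @ y, (x @ B).max() - x @ B @ y)

rng = np.random.default_rng(0)
eps = 0.05
A, B = rng.uniform(size=(3, 3)), rng.uniform(size=(3, 3))
A_hat = A + rng.uniform(-eps, eps, size=A.shape)
B_hat = B + rng.uniform(-eps, eps, size=B.shape)
for i in range(3):                 # brute-force search for a pure empirical Nash
    for j in range(3):
        x, y = np.eye(3)[i], np.eye(3)[j]
        if nash_gap(A_hat, B_hat, x, y) <= 1e-12:
            assert nash_gap(A, B, x, y) <= 2 * eps
\end{verbatim}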
\subsection{Finite Samples Analysis}
This section details some concentration results. In practice, we often have access to a batch of observations of the underlying game. We will run our analysis on an empirical estimate of the game denoted by $\hat{r}^i(\bm{\pi})$. The question is then either with which confidence we can say that a Nash equilibrium for $\bm{\hat{r}}$ is a $2\epsilon$-Nash equilibrium for $\bm{r}$, or, for a fixed confidence, for which $\epsilon$ a Nash equilibrium for $\bm{\hat{r}}$ is a $2\epsilon$-Nash equilibrium for $\bm{r}$. In case we have access to game play, the question is how many samples $n$ we need so that a Nash equilibrium for $\bm{\hat{r}}$ is a $2\epsilon$-Nash equilibrium for $\bm{r}$, for a fixed confidence and a fixed $\epsilon$.
For the sake of simplicity, we will assume that all payoffs are bounded in $[0, 1]$.
\subsubsection{The batch scenario}
\label{batchScenario}
Here we assume that we are given $n(i,\bm{\pi})$ independent samples to compute the empirical average $\hat{r}^i(\bm{\pi})$. Then, by applying Hoeffding's inequality we can prove the following result:
\footnotesize
$$P\left( \sup_{\bm{\pi},i} |r^i(\bm{\pi})-\hat{r}^i(\bm{\pi})| < \epsilon \right) \geq \prod_{i \in \{1,\dots,p\}} \prod_{\bm{\pi}} \left(1-2e^{\left(-2\epsilon^2 n(i,\bm{\pi})\right)}\right)$$
\normalsize
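This bound is straightforward to evaluate; the sketch below (ours) takes the per-entry sample counts $n(i,\bm{\pi})$ as a flat array and returns the resulting lower bound on the confidence (clipping negative per-entry factors at zero, where the bound is uninformative).
\begin{verbatim}
# Illustrative sketch: lower bound on P(sup |r - r_hat| < eps) in the batch
# setting, given the sample count n(i, pi) of every (player, joint action) cell.
import numpy as np

def batch_confidence(eps, sample_counts):
    n = np.asarray(sample_counts, dtype=float)
    per_entry = 1.0 - 2.0 * np.exp(-2.0 * eps ** 2 * n)
    return float(np.prod(np.clip(per_entry, 0.0, 1.0)))
\end{verbatim}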
\subsubsection{Uniform sampling}
In this section we assume that we have a budget of $n$ samples per joint action $\bm{\pi}$ and per player $i$. In that case we have the following bound:
\footnotesize
$$P\left( \sup_{\bm{\pi},i} |r^i(\bm{\pi})-\hat{r}^i(\bm{\pi})| < \epsilon \right) \geq \left(1-2e^{\left(-2\epsilon^2 n\right)}\right)^{|S^1| \times \dots \times |S^p| \times p}$$
\normalsize
Then, if we want $\sup_{\bm{\pi},i} |r^i(\bm{\pi})-\hat{r}^i(\bm{\pi})| < \epsilon $ to hold with a probability of at least $1-\delta$, we need at least $n= - \frac{\ln\left(1-(1-\delta)^{\frac{1}{|S^1| \times \dots \times |S^p| \times p}}\right)}{2\epsilon^2}$ samples.
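The closed-form sample size is easy to evaluate numerically; the short sketch below (ours, illustrative) computes it for given $\epsilon$, $\delta$ and strategy-set sizes.
\begin{verbatim}
# Illustrative sketch: number of samples n per (player, joint action) cell so
# that sup |r - r_hat| < eps holds with probability at least 1 - delta.
from math import log, ceil

def required_samples(eps, delta, strategy_set_sizes, p):
    cells = p
    for size in strategy_set_sizes:
        cells *= size                      # |S^1| * ... * |S^p| * p
    return ceil(-log(1.0 - (1.0 - delta) ** (1.0 / cells)) / (2.0 * eps ** 2))

print(required_samples(0.05, 0.05, [3, 3], p=2))   # two players, 3 strategies each
\end{verbatim}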
| {
"timestamp": "2018-03-20T01:01:11",
"yymm": "1803",
"arxiv_id": "1803.06376",
"language": "en",
"url": "https://arxiv.org/abs/1803.06376",
"abstract": "This paper provides theoretical bounds for empirical game theoretical analysis of complex multi-agent interactions. We provide insights in the empirical meta game showing that a Nash equilibrium of the meta-game is an approximate Nash equilibrium of the true underlying game. We investigate and show how many data samples are required to obtain a close enough approximation of the underlying game. Additionally, we extend the meta-game analysis methodology to asymmetric games. The state-of-the-art has only considered empirical games in which agents have access to the same strategy sets and the payoff structure is symmetric, implying that agents are interchangeable. Finally, we carry out an empirical illustration of the generalised method in several domains, illustrating the theory and evolutionary dynamics of several versions of the AlphaGo algorithm (symmetric), the dynamics of the Colonel Blotto game played by human players on Facebook (symmetric), and an example of a meta-game in Leduc Poker (asymmetric), generated by the PSRO multi-agent learning algorithm.",
"subjects": "Computer Science and Game Theory (cs.GT); Multiagent Systems (cs.MA)",
"title": "A Generalised Method for Empirical Game Theoretic Analysis",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9546474142844408,
"lm_q2_score": 0.743168019989179,
"lm_q1q2_score": 0.7094634286615573
} |
https://arxiv.org/abs/1705.02298 | Consistent Sensor, Relay, and Link Selection in Wireless Sensor Networks | In wireless sensor networks, where energy is scarce, it is inefficient to have all nodes active because they consume a non-negligible amount of battery. In this paper we consider the problem of jointly selecting sensors, relays and links in a wireless sensor network where the active sensors need to communicate their measurements to one or multiple access points. Information messages are routed stochastically in order to capture the inherent reliability of the broadcast links via multiple hops, where the nodes may be acting as sensors or as relays. We aim at finding optimal sparse solutions where both, the consistency between the selected subset of sensors, relays and links, and the graph connectivity in the selected subnetwork are guaranteed. Furthermore, active nodes should ensure a network performance in a parameter estimation scenario. Two problems are studied: sensor and link selection; and sensor, relay and link selection. To solve such problems, we present tractable optimization formulations and propose two algorithms that satisfy the previous network requirements. We also explore an extension scenario: only link selection. Simulation results show the performance of the algorithms and illustrate how they provide a sparse solution, which not only saves energy but also guarantees the network requirements. | \section{Convex relaxation}
\label{Convexrelaxation}
We relax the nonconvex program~\eqref{Problem.nonconvex} by substituting the $\ell_0$-pseudo norm, with the $\ell_1$ norm, and by substituting the nonconvex constraint~\eqref{eq.Tminw} with the convex surrogate
\begin{equation}\label{eq.Tminw_cvx}
T_{ip} \leq \min\{r_i, r_p\}, \quad i\in\mathcal{V}_{\textrm{s}}, p \in \mathcal{V}.
\end{equation}
These operations transform the original problem~\eqref{Problem.nonconvex} into
\begin{subequations}
\label{Problem:Relaxed}
\begin{align}
& \underset{\r,\mathbold{T}}{\text{minimize}}
& & \alpha_1 \|\r\|_1 + \alpha_2 \| \mathbold{T} \|_1 \\
& \text{subject to} & & r_i \in [0,1],\, T_{ip} \in [0,1], \quad i\in\mathcal{V}_{\textrm{s}}, \, p \in\mathcal{V} \\
&
&& \eqref{eq.Tminw_cvx}, \eqref{eq.prob1}, \eqref{eq.flow} \\
&
&& f(\r) \leq \gamma.
\end{align}
\end{subequations}
Under the assumption that the now continuous function $f: \mathbf{R}^J \to \mathbf{R}$ is convex in $\r$ (as is the case for all the aforementioned quality measurement examples~\cite{Joshi09}), the program~\eqref{Problem:Relaxed} is convex. In addition, for the mentioned examples of $f(\r)$, \eqref{Problem:Relaxed} is a semidefinite program, whose solution can be computed efficiently, in polynomial time, with off-the-shelf software. We indicate with $(\hat{\r}, \hat{\mathbold{T}})$ any possible solution of~\eqref{Problem:Relaxed}.
It is important to note that the couple $(\hat{\r}, \hat{\mathbold{T}})$ is only an approximation of the sought solution $(\r^*, \mathbold{T}^*)$. However, we will see in the simulation section that $(\hat{\r}, \hat{\mathbold{T}})$ is usually a sufficiently sparse approximate solution. An additional feature of the approximate couple $(\hat{\r}, \hat{\mathbold{T}})$ is that it is feasible w.r.t. the constraint set of the original problem~\eqref{Problem.nonconvex}, and therefore it does not have to be mapped into a different set (as usually happens in relaxed sensor selection problems). The reason for this is that we are working with rates and not Boolean variables.
A strategy to increase the sparsity of the approximate couple $(\hat{\r}, \hat{\mathbold{T}})$, which has been proposed in~\cite{Candes08}, is to use a reweighted $\ell_1$ minimization mechanism. In this paper, we also use this strategy, which goes as follows. Consider the relaxed problem~\eqref{Problem:Relaxed}, with the different cost function $\alpha_1 \|\mathbold{w}\odot\r\|_1 + \alpha_2 \|\mathbold{W}\odot \mathbold{T} \|_1 $, where $\mathbold{w} \in \mathbf{R}^{J}$ and $\mathbold{W} \in \mathbf{R}^{J\times (J+K)}$ are a weighting vector and matrix, respectively. The weights can be determined so as to push small components of $\r$ and $\mathbold{T}$ to zero, and to boost large ones towards one. In particular, initialize $w^0_i = 1$ and $W^0_{ip} = 1$; then for each $\tau\geq 0$ solve the problem
\begin{subequations}
\label{Problem:RelaxedWeighted}
\begin{align}
& \underset{\r,\mathbold{T}}{\text{minimize}}
& & \alpha_1 \|\mathbold{w}^{\tau}\odot\r\|_1 + \alpha_2 \|\mathbold{W}^{\tau}\odot \mathbold{T} \|_1 \\
& \text{subject to} & & r_i \in [0,1],\, T_{ip} \in [0,1], \quad i\in\mathcal{V}_{\textrm{s}}, \, p \in\mathcal{V} \\
&
&& \eqref{eq.Tminw_cvx}, \eqref{eq.prob1}, \eqref{eq.flow} \\
&
&& f(\r) \leq \gamma, \label{eq.MSErate}
\end{align}
\end{subequations}
whose solution is $(\hat{\r}^\tau, \hat{\mathbold{T}}^\tau)$, and whose weights for $\tau \geq 1$ are $w^\tau_i = w^{\tau-1}_i/(\epsilon + \hat{r}^{\tau-1}_i)$ and $W^{\tau}_{ip} = W^{\tau-1}_{ip}/(\epsilon + \hat{T}^{\tau-1}_{ip})$, with $\epsilon$ a small positive constant.
This iterative (reweighted) procedure delivers sparser solutions, as we will show in the simulation results. We have summarized the resulting sparse sensor and link selection (SSLS) iterative algorithm in Algorithm \ref{alg:SeLiS}.
{{\bf Connectivity guarantees of Algorithm~\ref{alg:SeLiS}.}
We notice that due to~\eqref{eq.flow}, the solution coming from Algorithm \ref{alg:SeLiS} guarantees that the measurements acquired at the sensors are delivered at the APs, i.e., each of the active sensors has a path back to at least one AP. To formally prove this statement, consider~\eqref{eq.flow}:
\begin{equation*}
r_i \bar{r}_i + \sum_{p\in\mathcal{V}_{\mathrm{s}}} T_{pi}R_{pi} \leq \sum_{p\in\mathcal{V}}T_{ip}R_{ip}, \quad i \in \mathcal{V}_{\textrm{s}},
\end{equation*}
this constraint has to hold for each active sensor (those for which $r_i>0$), and it reads $0 \leq 0$ for the inactive ones (due to constraint~\eqref{eq.Tminw_cvx}; in this case also $T_{pi}$ and $T_{ip}$ are $0$). Since it has to hold for all active sensors, each of them has to send out more rate than it receives (the difference being its measurement rate), that is
$$
\sum_{p\in\mathcal{V}_{\mathrm{s}}} T_{pi}R_{pi} < \sum_{p\in\mathcal{V}}T_{ip}R_{ip}, \quad i \in \{j \in \mathcal{V}_{\textrm{s}}|r_j>0\}.
$$
Therefore, first: no active sensor can be a sink (it has to send out more than it receives). Second: there cannot be loops of active sensors not connected to a sink. In fact, if there were, since the rate accumulates along the loop, constraint~\eqref{eq.flow} would be violated for at least one pair of connected active sensors. Thus, the only possibility is that each active sensor eventually has a path to a sink. This is also what we observe in simulations. \hfill $\Box$
}
\begin{algorithm}[t]
\footnotesize
\begin{algorithmic}[1]
\REQUIRE Number of iterations $N$, reweighting tolerance $\epsilon>0$, sensor importance $\alpha_1\geq 0$, link importance $\alpha_2\geq 0$.
\STATE Set the weighting vector and matrix as $w^0_i = 1$ and $W^0_{ip} = 1$ for all $i\in \mathcal{V}_\textrm{s}$ and $p \in \mathcal{V}$
\FOR {$\tau = 0$ to $N-1$}
\STATE Solve the convex program~\eqref{Problem:RelaxedWeighted} with off-the-shelf interior point methods (e.g., SDPT3~\cite{Toh1999} or SeDuMi~\cite{Sturm1998}). Let the solution be $(\hat{\r}^\tau, \hat{\mathbold{T}}^\tau)$.
\STATE Compute the new weights $\mathbold{w}^{\tau+1}$ and $\mathbold{W}^{\tau+1}$ as
$$
w^{\tau+1}_i = \frac{w^{\tau}_i}{\epsilon + \hat{r}^{\tau}_i}, \quad W^{\tau+1}_{ip} = \frac{W^{\tau}_{ip}}{\epsilon + \hat{T}^{\tau}_{ip}}
$$
\ENDFOR
\STATE Output the solution couple $(\hat{\r}^N, \hat{\mathbold{T}}^N)$
\end{algorithmic}
\caption{Sparse Sensor and Link Selection}
\label{alg:SeLiS}
\end{algorithm}
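For concreteness, one iteration of the reweighted problem~\eqref{Problem:RelaxedWeighted} can be prototyped with a modelling tool such as \texttt{cvxpy} (an assumption of ours; the authors mention SDPT3 and SeDuMi). In the sketch below the performance constraint $f(\r)\leq\gamma$ is replaced by a generic convex placeholder, the APs are treated as always able to receive (i.e., $r_p=1$ for $p\in\mathcal{V}_{\textrm{AP}}$ in~\eqref{eq.Tminw_cvx}), and $R$, $\bar{r}$ and $\gamma$ are assumed given; only the constraints visible in this section are modelled. Iterating this step with the weight update of Algorithm~\ref{alg:SeLiS} yields the full reweighted procedure.
\begin{verbatim}
# Illustrative sketch (not the authors' code): one reweighted l1 iteration of
# the relaxed sensor and link selection problem, with a placeholder for f(r).
import cvxpy as cp
import numpy as np

def ssls_step(R, rbar, gamma, w, W, alpha1=1.0, alpha2=1.0):
    J, JK = R.shape                       # J sensors, JK = J + K nodes in total
    r = cp.Variable(J)
    T = cp.Variable((J, JK))
    r_all = cp.hstack([r, np.ones(JK - J)])        # APs treated as always active
    constraints = [r >= 0, r <= 1, T >= 0, T <= 1]
    for i in range(J):
        for p in range(JK):               # T_ip <= min{r_i, r_p}
            constraints += [T[i, p] <= r[i], T[i, p] <= r_all[p]]
    for i in range(J):                    # flow constraint
        inflow = cp.sum(cp.multiply(T[:, i], R[:, i]))
        outflow = cp.sum(cp.multiply(T[i, :], R[i, :]))
        constraints += [rbar[i] * r[i] + inflow <= outflow]
    constraints += [cp.sum(r) >= gamma]   # placeholder for the convex f(r) <= gamma
    # since r and T are nonnegative, the weighted l1 norms reduce to weighted sums
    objective = cp.Minimize(alpha1 * cp.sum(cp.multiply(w, r))
                            + alpha2 * cp.sum(cp.multiply(W, T)))
    cp.Problem(objective, constraints).solve()
    return r.value, T.value
\end{verbatim}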
\begin{remark}\label{rem.distributed}
\emph{(Distributed algorithms)} Although the algorithms in this paper are centralized, one could devise distributed algorithms in a standard fashion. For instance, Problems (10) and (19) with the choice for $f(\r)$ of (2) fit the general structure presented in \cite{Simonetto2015}. In particular, one needs to consider the local decision variables $x_i$ as the vector $(r_i, \{T_{ip}\}_{p\in \mathcal{V}_{\mathrm{s}}})$. In this case, with the use of consensus-based dual decomposition each sensor could decide its on/off strategy and to whom to communicate. Nonetheless, first, the re-weighting procedure is not trivial to implement in this case, and second, the sensors could spend a considerable amount of battery power to decide their on/off strategy. We believe that developing distributed and yet efficient (i.e., power-aware) algorithms for sensor selection is still an open research area, which is left for future investigations.
\end{remark}
\begin{remark}\label{rem.stochastic}
{\emph{(Stochasticity of the reliability matrix $R_{ip}$)} Although here we assume to know each element $R_{ip}$ in a deterministic sense, one could also think of estimating $R_{ip}$ online. If one then possesses a pdf for $R_{ip}$, one could replace the deterministic constraint~\eqref{eq.flow} with a stochastic variant of it. Another approach to the estimation would be the one of~\cite{Kim2011a}. Finally, a third approach would consider a time-varying online algorithm to track $R_{ip}$ as it (possibly) varies in time, which is in line with the research proposed in~\cite{Paper1}.
}
\end{remark}
\begin{remark}\label{rem.energy}
{\emph{(Energy efficiency)} Energy efficiency can also be considered in the proposed approach. For instance, one could re-run the selection algorithm to take into account that the battery charge of the devices has changed, so as to keep a balance in the usage of the whole sensor network. A way to include battery charge into the optimization problem is, e.g., to initialize the weights $w_i^0$ not to $1$ but to the inverse of the battery level: $1$ if fully charged, $\infty$ if out of charge.
}
\end{remark}
\section{Introduction}
\label{Introduction}
Nowadays, wireless sensor networks are developed to provide fast, cheap, reliable, and scalable hardware solutions for a large number of industrial applications, ranging from surveillance \cite{Biswas2006, Raty2010} and tracking \cite{Songhwai2007, Liu2007a} to exploration \cite{Sun2005, Leonard2007}, monitoring \cite{Corke2010, Sun2011}, and other sensing tasks \cite{Arampatzis2005}. From the software perspective, an increasing effort is spent on designing algorithms that can provide high reliability with limited computation, communication, and energy requirements for the sensor nodes.
In this paper, we consider a network of battery-powered sensors that take measurements related to some important environmental parameter and that need to communicate their measurements to one or multiple access points (APs), or sinks, which are responsible for processing the gathered information. Communication with the APs is achieved through multihop routes defined via a connectivity graph which considers the sensors' communication range.
Resources (mainly energy) in this network are scarce so it is inefficient to have all sensors active. Some sensors may not be informative enough and hardly contribute to achieve a minimum desired network performance; nonetheless, if active, they would consume a non-negligible amount of resources. Moreover, communication efforts are among the most energy demanding tasks in wireless sensor networks~\cite{Raghunathan02} and they should be minimized by properly selecting not only the suitable sensors but also the proper active links. Knowledge of the network topology should be exploited in order to make a better selection of the links that are in charge of conveying the information because information may be degraded over long distances and transmissions should be avoided to reduce energy expenditure.
With the reduction of energy expenditure in mind, in this paper we consider a distributed estimation scenario in wireless sensor networks, where each sensor takes local measurements of a phenomenon of interest at a particular rate and communicates them in a multihop way to one or multiple APs. In this scenario, we study the problem of judiciously and consistently selecting the optimal minimum set of sensors \emph{and} links that ought to be active in the network, so that a prescribed network performance (e.g., the mean squared error of the estimation of the parameter of interest) \emph{as well as} graph connectivity among the selected active sensors are guaranteed. Only the measurements taken by the active sensors must be reported back to the APs via the active sensors and active links. This is the reason for requiring graph connectivity among the selected active sensors. Moreover, the optimal sensing rates supported by the active sensors are calculated. We analyze the problem from a stochastic point of view, where information messages are routed stochastically thereby capturing the inherent reliability of the broadcast wireless links.
\subsection{State of the art}
The concept of sensor selection has been extensively studied in the context of parameter and state estimation. The resulting minimum cardinality combinatorial problem has been tackled by using different tools, from convex relaxations, e.g., \cite{Joshi09, Chepuri15, Jamali-Rad15}, to sub-modularity~\cite{Shamaiah2010, Liao09, Naeem09} and frame theory~\cite{Ranieri14,Zhao12}. These tools have their pros and cons.
More along the lines of this paper, in \cite{Liu15} not only is the best subset of sensors selected that communicate with the fusion center but also the collaboration scheme that allows each sensor to combine its raw measurements with those coming from other sensors according to certain weights.
Stochastic routing in multihop networks has been introduced in the literature in order to cope with the random nature of wireless links \cite{Sivrikaya09, Ribeiro08}. Transmissions are based on a reliability matrix, where each element of the matrix reflects the probability of satisfactorily transmitting and receiving a message between two given sensors.
In \cite{Zavlanos13}, the authors define the concept of connectivity within a context of mobile robotic networks in terms of communication rates, and based on this definition, the authors propose a distributed algorithm to find the optimal operating points of wireless networks when the link metric is the link reliability.
The work of~\cite{Shah2014} considers the problem of optimizing the routing and sensor selection given a total budget constraint. Yet, the approach presented in~\cite{Shah2014} is heuristic and divides the estimation and routing problems, by tackling them in two separated phases, which could cause additional suboptimality of the solution.
Oftentimes, a distinction is made between sensor and relay nodes. Relay nodes help the source nodes (sensing nodes) in forwarding the messages to the APs: they receive a message from the source nodes, process it and forward it towards the intended APs. Relaying is especially beneficial when there is no line-of-sight path between the source and the destination. This distinction between sensor types may be motivated, for instance, by economic reasons (relay devices may be cheaper than sensors given that their functionality is more limited), or by design prerequisites (sensors need to achieve a certain performance while relays do not, because they are only limited to forwarding the information). Previous state-of-the-art works only propose relay selection schemes (e.g., \cite{Ibrahim08, Lo09}, and references therein): given a source sensor and a sink, they try to choose the best relays among a collection of available ones based on different criteria. Other works are aimed at optimally placing wireless relay nodes and sinks \cite{Bhattacharya14}.
\subsection{Our contributions}
All the aforementioned state-of-the-art works address either the sensor selection problem \emph{or} the stochastic routing, but what has never been addressed in the literature before is the challenge of jointly selecting the optimal minimum set of active sensors (and their corresponding sensing rates) which satisfies a prescribed estimation performance metric and consistently finding the optimal multihop routes so that the selected subgraph is connected. Hence, in this paper we do not focus on devising new methods to solve selection problems or on comparing them; instead, we are mainly interested in formulating a stochastic framework for consistent sensor and link selection. Even the closest prior work \cite{Liu15}, which is a ``dual'' problem w.r.t. ours, differs from this paper in several ways: in \cite{Liu15} all sensors can directly communicate with the fusion center (i.e., it is not a multihop scenario, so graph connectivity is not an issue), communication links are established based on inter-sensor collaboration before transmitting a processed message to the fusion center, and the optimal transmission rates of transmitting sensors are not determined.
The problem at hand becomes even more challenging when there is a distinction between sensor and relay nodes. In a scenario with these two types of nodes, we want to consistently determine which of the nodes, placed at well-determined positions, should play the role of sensors (and hence have their sensing rate determined) and which ones the role of relays, while guaranteeing both a prescribed network performance and connectivity in the selected subgraph. To find an optimal solution, a joint source and relay selection should be performed, which implicitly entails activating suitable links.
The main contributions of this paper can be summarized as follows.
1) From a stochastic point of view and in a multihop scenario, we formulate a tractable optimization problem to consistently select the optimal subsets of sensors (with their sensing rates) and links that guarantee both a required network performance and graph connectivity in the selected subnetwork. To solve this problem, we propose a sparsity-aware algorithm based on a convex relaxation (Section \ref{ProblemFormulation} to Section \ref{SimulationResults1}).
2) The previous framework is also well-suited for the joint selection of sensors, relays and links (which is not the case for other approaches in the literature, e.g.,~\cite{Shah2014}). Under a slight modification of the previous optimization problem and applying a convex relaxation technique, we propose another sparsity-aware consistent sensor-relay-and-link selection algorithm. This algorithm assigns the optimal sensing rates to the active sensors and ensures network connectivity as well as a prescribed network performance (Sections \ref{Relays} and \ref{SimulationResults2}).
3) Finally, we also extend the work to a special case where only link selection is considered (Section \ref{SpecialCasesExtensions}).
Contributions 1)-3) rely on a reformulation of the problems as $\ell_1$ convex optimization problems. This allows for efficient and well-performing algorithms. Different approaches, e.g.~\cite{Dai2011}, would yield more complex problem formulations, which rely on dedicated non-convex solvers. This is avoided here. In addition, since an $\ell_1$ relaxation is leveraged, distributed algorithms can be envisioned (see Remark~\ref{rem.distributed}).
\begin{figure}
\includegraphics[width=1\textwidth]{Figures/schema}
\caption{General structure of the paper with the three problem formulations and relations among them.}
\label{fig.schema}
\end{figure}
Numerical simulation results support our claims and illustrate a satisfactory performance of the proposed algorithms. As a last note, we highlight that the presented algorithms are stated in a static framework, i.e., given a certain network, we provide a selection strategy. Yet, they could be implemented in a dynamical way, by repeating their execution, so as to balance the energy level of the active and non-active sensors and relays (see Remark~\ref{rem.energy}).
\vskip2mm
\textbf{Notation.} Notation is where possible standard: we indicate with boldfaced small letters, such as $\mathbold{x}$, real vectors, whereas capital boldfaced letters, e.g. $\mathbold{A}$, represent real matrices. Vector $p$-norms are indicated with $\|\cdot\|_p$, while $p$-norms for matrices are intended \emph{element-wise}, e.g., $\|\mathbold{A}\|_1$ is the sum of the absolute values of the elements of the matrix $\mathbold{A}$. Pseudo-norms, such as the $0$-norm, follow the same notation.
\section{Convex relaxation}
\label{Convexrelaxation}
We relax the nonconvex program~\eqref{Problem.nonconvex} by substituting the $\ell_0$-pseudo norm, with the $\ell_1$ norm, and by substituting the nonconvex constraint~\eqref{eq.Tminw} with the convex surrogate
\begin{equation}\label{eq.Tminw_cvx}
T_{ip} \leq \min\{r_i, r_p\}, \quad i\in\mathcal{V}_{\textrm{s}}, p \in \mathcal{V}.
\end{equation}
These operations transform the original problem~\eqref{Problem.nonconvex} into
\begin{subequations}
\label{Problem:Relaxed}
\begin{align}
& \underset{\r,\mathbold{T}}{\text{minimize}}
& & \alpha_1 \|\r\|_1 + \alpha_2 \| \mathbold{T} \|_1 \\
& \text{subject to} & & r_i \in [0,1],\, T_{ip} \in [0,1], \quad i\in\mathcal{V}_{\textrm{s}}, \, p \in\mathcal{V} \\
&
&& \eqref{eq.Tminw_cvx}, \eqref{eq.prob1}, \eqref{eq.flow} \\
&
&& f(\r) \leq \gamma.
\end{align}
\end{subequations}
With the assumption that the now continuous function $f: \mathbf{R}^J \to \mathbf{R}$ is convex in $\r$ (as it happens with all the aforementioned quality measurement examples~\cite{Joshi09}), then the program~\eqref{Problem:Relaxed} is convex. In addition, for the mentioned examples of $f(\r)$, \eqref{Problem:Relaxed} is a semidefinite program, which makes its solution efficient to compute polynomially with off-the-shelf software. We indicate with $(\hat{\r}, \hat{\mathbold{T}})$ any possible solution of~\eqref{Problem:Relaxed}.
It is important to note that the couple $(\hat{\r}, \hat{\mathbold{T}})$ is only an approximation of the sought solution $(\r^*, \mathbold{T}^*)$. However, we will see in the simulation section that $(\hat{\r}, \hat{\mathbold{T}})$ is usually a sparsely enough approximate solution. An additional feature of the approximate couple $(\hat{\r}, \hat{\mathbold{T}})$ is that it is feasible w.r.t. the constraint set of the original problem~\eqref{Problem.nonconvex}, and therefore it does not have to be mapped into a different set (as it usually happens in relaxed sensor selection problems). The reason for this is that we are working with rates and not Boolean variables.
A strategy to increase the sparsity of the approximate couple $(\hat{\r}, \hat{\mathbold{T}})$, which has been proposed in~\cite{Candes08}, is to use a reweighted $\ell_1$ minimization mechanism. In this paper, we also use this strategy, which goes as follows. Consider the relaxed problem~\eqref{Problem:Relaxed}, with the different cost function $\alpha_1 \|\mathbold{w}\odot\r\|_1 + \alpha_2 \|\mathbold{W}\odot \mathbold{T} \|_1 $, where $\mathbold{w} \in \mathbf{R}^{J}$ and $\mathbold{W} \in \mathbf{R}^{J\times (J+K)}$ are a weighting vector and matrix, respectively. The weights can be determined so to push small components of $\r$ and $\mathbold{T}$ to zero, and boost big ones to one. In particular, initialize $w^0_i = 1$ and $W^0_{ip} = 1$, then for each $\tau\geq 0$ solve the problem
\begin{subequations}
\label{Problem:RelaxedWeighted}
\begin{align}
& \underset{\r,\mathbold{T}}{\text{minimize}}
& & \alpha_1 \|\mathbold{w}^{\tau}\odot\r\|_1 + \alpha_2 \|\mathbold{W}^{\tau}\odot \mathbold{T} \|_1 \\
& \text{subject to} & & r_i \in [0,1],\, T_{ip} \in [0,1], \quad i\in\mathcal{V}_{\textrm{s}}, \, p \in\mathcal{V} \\
&
&& \eqref{eq.Tminw_cvx}, \eqref{eq.prob1}, \eqref{eq.flow} \\
&
&& f(\r) \leq \gamma, \label{eq.MSErate}
\end{align}
\end{subequations}
whose solution is $(\hat{\r}^\tau, \hat{T}^\tau)$, and whose weights for $\tau \geq 1$ are $w^\tau_i = w^{\tau-1}_i/(\epsilon + \hat{r}^{\tau-1}_i)$ and $W^{\tau}_{ip} = W^{\tau-1}_{ip}/(\epsilon + \hat{T}^{\tau-1}_{ip})$, with $\epsilon$ a small positive constant.
This iterative (reweighted) procedure delivers sparser solutions, as we will show in the simulation results. We have summarized the resulting sparse sensor and link selection (SSLS) iterative algorithm in Algorithm \ref{alg:SeLiS}.
{{\bf Connectivity guarantees of Algorithm~\ref{alg:SeLiS}.}
We notice that due to~\eqref{eq.flow}, the solution coming from Algorithm \ref{alg:SeLiS} guarantees that the measurements acquired at the sensors are delivered at the APs, i.e., each of the active sensors has a path back to at least one AP. To formally prove this statement, consider~\eqref{eq.flow}:
\begin{equation*
r_i \bar{r}_i + \sum_{p\in\mathcal{V}_{\mathrm{s}}} T_{pi}R_{pi} \leq \sum_{p\in\mathcal{V}}T_{ip}R_{ip}, \quad i \in \mathcal{V}_{\textrm{s}},
\end{equation*}
this constraint has to be true for each active sensor (the one for which $r_i>0$), and it reads $0 \leq 0$ for the not active ones (due to constraint~\eqref{eq.Tminw_cvx}, i.e., in this case also $T_{pi}$ and $T_{ip}$ are $0$). Since it has to be true for all active sensors, each of them has to send out more rate than what it receives (and the difference is given by its measurement rate), that is
$$
\sum_{p\in\mathcal{V}_{\mathrm{s}}} T_{pi}R_{pi} < \sum_{p\in\mathcal{V}}T_{ip}R_{ip}, \quad i \in \{j \in \mathcal{V}_{\textrm{s}}|r_j>0\}.
$$
Therefore, first: no active sensor can be a sink (it has to send out more than it receives). Second: there cannot be loops of active sensors not connected to a sink. In fact, if there were, since the rate augments along the loop, constraint~\eqref{eq.flow} would not be satisfied for at least a pair of active sensors connected together. Thus, the only possibility is that eventually each sensor has a path to a sink. This is also what we observe in simulations. \hfill $\Box$
}
\begin{algorithm}[t]
\footnotesize
\begin{algorithmic}[1]
\REQUIRE Number of iterations $N$, reweighting tolerance $\epsilon>0$, sensor importance $\alpha_1\geq 0$, link importance $\alpha_2\geq 0$.
\STATE Set the weighting vector and matrix as $w^0_i = 1$ and $W^0_{ip} = 1$ for all $i\in \mathcal{V}_\textrm{s}$ and $p \in \mathcal{V}$
\FOR {$\tau = 0$ to $N-1$}
\STATE Solve the convex program~\eqref{Problem:RelaxedWeighted} with off-the-shelf interior point methods (e.g., SDPT3~\cite{Toh1999} or SeDuMi~\cite{Sturm1998}). Let the solution be $(\hat{\r}^\tau, \hat{\mathbold{T}}^\tau)$.
\STATE Compute the new weights $\mathbold{w}^{\tau+1}$ and $\mathbold{W}^{\tau+1}$ as
$$
w^{\tau+1}_i = \frac{w^{\tau}_i}{\epsilon + \hat{r}^{\tau}_i}, \quad W^{\tau+1}_{ip} = \frac{W^{\tau}_{ip}}{\epsilon + \hat{T}^{\tau}_{ip}}
$$
\ENDFOR
\STATE Output the solution couple $(\hat{\r}^N, \hat{\mathbold{T}}^N)$
\end{algorithmic}
\caption{Sparse Sensor and Link Selection}
\label{alg:SeLiS}
\end{algorithm}
\begin{remark}\label{rem.distributed}
\emph{(Distributed algorithms)} Although the algorithms in this paper are centralized, one could devise distributed algorithms in a standard fashion. For instance, Problem (10) and (19) with the choice for $f(\r)$ of (2) fit the general structure presented in \cite{Simonetto2015}. In particular one needs to consider the local decision variables $x_i$ as the vector $(r_i, \{T_{ip}\}_{p\in \mathcal{V}_{\mathrm{s}}})$. In this case, with the use of consensus-based dual decomposition each sensor could decide their on/off strategy and to whom to communicate. Nonetheless, first, the re-weighting procedure is not trivial to implement in this case, and second, the sensors could spend a considerable amount of battery power to decide their on/off strategy. We believe that developing distributed and yet efficient (i.e., power-aware) algorithms for sensor selection is still an open research area, which is left for future investigations.
\end{remark}
\begin{remark}\label{rem.stochastic}
{\emph{(Stochasticity of the reliability matrix $R_{ip}$)} Although here we assume to know each element $R_{ip}$ in a deterministic sense, one could also think of estimating $R_{ip}$ online. If then one possesses a pdf for $R_{ip}$, one could replace the deterministic constraint~\eqref{eq.flow} with a stochastic variant of it. Another approach in the estimation would be the one of~\cite{Kim2011a}. Finally, a third approach would consider a time-varying online algorithm to track $R_{ij}$ as it (possibly) varies in time, which is in line with the research proposed in~\cite{Paper1}.
}
\end{remark}
\begin{remark}\label{rem.energy}
{\emph{(Energy efficiency)} Energy efficiency can also be considered in the proposed approach. For instance, one could re-run the selection algorithm to take into account that the battery charge of the devices has changed, so to keep a balance in the usage of the whole sensor network. A way to include battery charge into the optimization problem is, e.g., to initialize the weights $w_i^0$'s not to $1$ but to the inverse of the battery level: $1$ if fully charged, $\infty$ if out of charge.
}
\end{remark}
\section{Introduction}
\label{Introduction}
Nowadays, wireless sensor networks are developed to provide fast, cheap, reliable, and scalable hardware solutions for a large number of industrial applications, ranging from surveillance \cite{Biswas2006, Raty2010} and tracking \cite{Songhwai2007, Liu2007a} to exploration \cite{Sun2005, Leonard2007}, monitoring \cite{Corke2010, Sun2011}, and other sensing tasks \cite{Arampatzis2005}. From the software perspective, an increasing effort is spent on designing algorithms that can provide high reliability with limited computation, communication, and energy requirements for the sensor nodes.
In this paper, we consider a network of battery-powered sensors that take measurements related to some important environmental parameter and that need to communicate their measurements to one or multiple access points (APs), or sinks, which are responsible for processing the gathered information. Communication with the APs is achieved through multihop routes defined via a connectivity graph which considers the sensors' communication range.
Resources (mainly energy) in this network are scarce so it is inefficient to have all sensors active. Some sensors may not be informative enough and hardly contribute to achieve a minimum desired network performance; nonetheless, if active, they would consume a non-negligible amount of resources. Moreover, communication efforts are among the most energy demanding tasks in wireless sensor networks~\cite{Raghunathan02} and they should be minimized by properly selecting not only the suitable sensors but also the proper active links. Knowledge of the network topology should be exploited in order to make a better selection of the links that are in charge of conveying the information because information may be degraded over long distances and transmissions should be avoided to reduce energy expenditure.
With the reduction of energy expenditure in mind, in this paper we consider a distributed estimation scenario in wireless sensor networks, where each sensor takes local measurements of a phenomenon of interest at a particular rate and communicates them in a multihop way to one or multiple APs. In this scenario, we study the problem of judiciously and consistently selecting the optimal minimum set of sensors \emph{and} links that ought to be active in the network, so that a prescribed network performance (e.g., the mean squared error of the estimation of the parameter of interest) \emph{as well as} graph connectivity among the selected active sensors are guaranteed. Only the measurements taken by the active sensors must be reported back to the APs via the active sensors and active links. This is the reason for requiring graph connectivity among the selected active sensors. Moreover, the optimal sensing rates supported by the active sensors are calculated. We analyze the problem from a stochastic point of view, where information messages are routed stochastically thereby capturing the inherent reliability of the broadcast wireless links.
\subsection{State of the art}
The concept of sensor selection has been extensively studied in the context of parameter and state estimation. The resulting minimum cardinality combinatorial problem has been tackled by using different tools, from convex relaxations, e.g., \cite{Joshi09, Chepuri15, Jamali-Rad15}, to sub-modularity~\cite{Shamaiah2010, Liao09, Naeem09} and frame theory~\cite{Ranieri14,Zhao12}. These tools have their pros and cons.
More along the lines of this paper, in \cite{Liu15} not only is the best subset of sensors selected that communicate with the fusion center but also the collaboration scheme that allows each sensor to combine its raw measurements with those coming from other sensors according to certain weights.
Stochastic routing in multihop networks has been introduced in the literature in order to cope with the random nature of wireless links \cite{Sivrikaya09, Ribeiro08}. Transmissions are based on a reliability matrix, where each element of the matrix reflects the probability of satisfactorily transmitting and receiving a message between two given sensors.
In \cite{Zavlanos13}, the authors define the concept of connectivity within a context of mobile robotic networks in terms of communication rates, and based on this definition, the authors propose a distributed algorithm to find the optimal operating points of wireless networks when the link metric is the link reliability.
The work of~\cite{Shah2014} considers the problem of optimizing the routing and sensor selection given a total budget constraint. Yet, the approach presented in~\cite{Shah2014} is heuristic and divides the estimation and routing problems, by tackling them in two separated phases, which could cause additional suboptimality of the solution.
Often times, a distinction is made between sensor and relay nodes. Relay nodes help the source nodes (sensing nodes) in forwarding the messages to the APs: they receive a message from the source nodes, process it and forward it towards the intended APs. Relaying is especially beneficial when there is no line-of-sight path between the source and the destination. This distinction between sensor types may be motivated, for instance, by economical reasons (relay devices may be cheaper than sensors given that their functionality is more limited), or by design prerequisites (sensors need to achieve a certain performance while relays do not because they are only limited to forwarding the information). Previous state-of-the-art works are only based on proposing relay selection schemes (e.g., \cite{Ibrahim08, Lo09}, and references therein): given a source sensor and a sink, they try to choose the best relays among a collection of available ones based on different criteria. Other works are aimed at optimally placing wireless relay nodes and sinks \cite{Bhattacharya14}.
\subsection{Our contributions}
All the aforementioned state-of-the-art works either face the sensor selection problem \emph{or} the stochastic routing, but what has never been addressed in the literature before is the challenge of jointly selecting the optimal minimum set of active sensors (and their corresponding sensing rates) which satisfies a prescribed estimation performance metric and consistently finding the optimal multihop routes so that the selected subgraph is connected. Hence, in this paper we do not focus on devising new methods to solve selection problems or on comparing them, instead we are mainly interested in formulating a stochastic framework for consistent sensor and link selection. Even the closest prior work \cite{Liu15}, which is a ``dual'' problem w.r.t. ours, differs from this paper in several ways: in \cite{Liu15} all sensors can directly communicate with the fusion center (i.e., it is not a multihop scenario so the graph connectivity is not a problem), communication links are established based on inter-sensor collaboration before transmitting a processed message to the fusion center, and the optimal transmission rates of transmitting sensors are not determined.
The problem at hand becomes even more challenging when there is a distinction between sensor and relay nodes. In a scenario where there are the two types of nodes, we want to consistently determine which of the nodes, placed at well-determined positions, should play the role of sensors (and hence their sensing rate should be determined) and which ones the role of relays while guaranteeing both a prescribed network performance and connectivity in the selected subgraph. To find an optimal solution, a joint source and relay selection should be performed, which implicitly implies to activate suitable links.
The main contributions of this paper can be summarized as follows.
1) From a stochastic point of view and in a multihop scenario, we formulate a tractable optimization problem to select consistently the optimal subsets of sensors (with their sensing rates) and links that guarantee both, a required network performance and graph connectivity in the selected subnetwork. To solve this problem, we propose a sparsity-aware algorithm based on a convex relaxation (Section \ref{ProblemFormulation} to Section \ref{SimulationResults1}).
2) The previous framework is also well-suited for the joint selection of sensors, relays and links (which is not the case for other approaches in the literature, e.g.,~\cite{Shah2014}). Under a slight modification of the previous optimization problem and applying a convex relaxation technique, we propose another sparsity-aware consistent sensor-relay-and-link selection algorithm. This algorithm assigns the optimal sensing rates to the active sensors and ensures network connectivity as well as a prescribed network performance (Sections \ref{Relays} and \ref{SimulationResults2}).
3) Finally, we also extend the work to a special case where only link selection is considered (Section \ref{SpecialCasesExtensions}).
Contributions 1)-3) rely on a reformulation of the problems as $\ell_1$ convex optimization problems. This allows for efficient and well-performing algorithms. Different approaches, e.g.,~\cite{Dai2011}, would yield more complex problem formulations that rely on dedicated non-convex solvers, which is avoided here. In addition, since an $\ell_1$ relaxation is leveraged, distributed algorithms can be envisioned (see Remark~\ref{rem.distributed}).
\begin{figure}
\includegraphics[width=1\textwidth]{Figures/schema}
\caption{General structure of the paper with the three problem formulations and relations among them.}
\label{fig.schema}
\end{figure}
Numerical simulation results support our claims and illustrate the satisfactory performance of the proposed algorithms. As a last note, we highlight that the presented algorithms are formulated in a static framework, i.e., given a certain network, we provide a selection strategy. Yet, they could be implemented in a dynamic way, by repeating their execution, so as to balance the energy levels of the active and non-active sensors and relays (see Remark~\ref{rem.energy}).
\vskip2mm
\textbf{Notation.} Notation is standard where possible: boldfaced small letters, such as $\mathbold{x}$, denote real vectors, whereas capital boldfaced letters, e.g., $\mathbold{A}$, denote real matrices. Vector $p$-norms are indicated with $\|\cdot\|_p$, while $p$-norms for matrices are intended \emph{element-wise}, e.g., $\|\mathbold{A}\|_1$ is the sum of the absolute values of the elements of the matrix $\mathbold{A}$. Pseudo-norms, such as the $0$-norm, follow the same notation.
\section{Problem Formulation}
\label{ProblemFormulation}
\textbf{High-level problem description. }
In this paper, we face the problem of consistently selecting the smallest subset of sensors and links out of all available ones such that a certain performance measure and network connectivity (which ensures a path from the active sensors to the APs) are guaranteed. The motivation behind selecting a low number of sensors (and, subsequently, an appropriately reduced number of links) comes from the need to minimize the economic and communication costs in wireless sensor networks. Clearly, this saving should not jeopardize the performance or the network connectivity: communication between the active sensors and the APs, as well as a prescribed network performance, must be guaranteed.
\vskip2mm
We consider a static wireless sensor network composed of $J$ sensor nodes and $K$ access points (APs) or sinks. At this point, we do not consider any relays yet.
We denote with $\mathcal{V}=\{1,2,..., J, J+1, ..., J+K \}$ the set of sensors and access points, where $i\in\mathcal{V}_{\textrm{s}} = \{1,...,J\}$ are the indexes corresponding to the sensor nodes and $i\in\mathcal{V}_{\textrm{AP}}=\{J+1, ..., J+K\}$ are the indexes corresponding to the APs. The network topology is determined by the physical locations of the sensors and APs, collected in the stacked vector $\mathbold{x}=[\mathbold{x}_1^\mathsf{T},...,\mathbold{x}_J^\mathsf{T}, \mathbold{x}_{J+1}^\mathsf{T},\dots,\mathbold{x}_{J+K}^\mathsf{T} ]^\mathsf{T}$, where the vector $\mathbold{x}_i$ indicates the position of sensor or AP $i$.
\subsection{Communication Network}
Sensors need to communicate their measurement to the APs in a multi-hop fashion (due to energy/power constraints). An important feature of this paper is that we can only use active sensors to transmit messages. We model the communication quality among sensors and APs using a link reliability metric, denoted as $R_{ip} := R(\|\mathbold{x}_i-\mathbold{x}_p\|)$, which represents the probability that sensor $p$ (if $p \leq J$) or an AP (otherwise) receives successfully a message sent from sensor $i$. We model this probability as a smooth non-increasing function with compact support, and in particular, $R(0) = 1$ and $R(d) = 0$ for all $d\geq \bar{d}$, for a predefined cut-off distance $\bar{d}$.
The link reliability metric induces a specific undirected communication graph on the wireless sensor network: whenever $R_{ip}$ is nonzero, there is a possible link between sensor $i$ and sensor or AP $p$. We describe this communication graph in terms of the edge set $\mathcal{E}$, given by $\mathcal{E} = \{ (i,p), i \in \mathcal{V}_{\textrm{s}}, p \in \mathcal{V} | i\neq p, R_{ip} > 0\}$, and we denote the graph as $\mathcal{G} = (\mathcal{V}, \mathcal{E})$.
\subsection{Sensing}
Sensors take measurements of a parameter $\mathbold{\theta} \in \mathbf{R}^m$, $m \ll J$, according to the linear measurement model,
\begin{equation}
y_i = \a_i^\mathsf{T} \mathbold{\theta} + n_i,\quad i\in\mathcal{V}_{\textrm{s}},
\end{equation}
where the vectors $\a_i\in\mathbf{R}^m$ represent the regressors, while $n_i$ is a Gaussian noise term with mean 0 and covariance $\sigma_i^2$.
Sensor $i$ acquires measurements $y_i$ with a rate $r_i \bar{r}_{i}$ (we assume that the maximum relative rate $\bar{r}_{i}$ is known and fixed, while the relative rate $r_i \in [0,1]$ is a design parameter). If $r_i=0$, the node will not take any measurements and will not be active.
As we mentioned, the collected measurements need to be communicated back to the APs in a multi-hop fashion. The APs are in charge of combining the measurements $y_i$, coming from different sensors at different rates, to estimate the value of the parameter $\mathbold{\theta}$. The \emph{quality} of the estimate can be evaluated a priori based on which sensors are measuring (more specifically their regressors $\a_i$ and noise variances $\sigma_i^2$) and their rates. Examples of such quality metrics are rate versions of the mean square error (MSE), the worst case error variance, or the volume of the confidence ellipsoid~\cite{Joshi09}. For instance, if we select the MSE-rate as quality metric and assume that the noise experienced at different sensors is uncorrelated, then we would have
\begin{equation}
f(\r) := \mathrm{tr}\Big(\sum_{i\in\mathcal{V}_{\textrm{s}}} r_i \bar{r}_{i} \a_i \a_i^\mathsf{T} /\sigma_i^2\Big)^{-1}\!\!\!,
\label{CostFunction}
\end{equation}
where we have collected the relative rates in $\r = [r_1, \dots, r_J]^\mathsf{T}$. Note that if a sensor is not active, its relative rate is zero. The lower the value of $f(\r)$, the better the quality of the estimate, and vice versa. {Other types of function $f(\r)$ can be found in~\cite{Joshi09, Jamali-Rad15}, both for uncorrelated and correlated noise}. In order to keep the presentation as general as possible, we will not specify which quality metric we select: we will simply write the metric as the function $f(\r)$.
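As an illustration, the MSE-rate metric \eqref{CostFunction} can be evaluated in a few lines of code. The following Python/NumPy snippet is a minimal sketch; the function name \texttt{mse\_rate} and the toy data at the end are illustrative placeholders, not part of the simulation setup of this paper.
\begin{verbatim}
import numpy as np

def mse_rate(r, rbar, A, sigma2):
    """MSE-rate metric f(r) = tr( (sum_i r_i*rbar_i*a_i*a_i^T / sigma_i^2)^{-1} )."""
    J, m = A.shape
    M = np.zeros((m, m))
    for i in range(J):
        M += r[i] * rbar[i] * np.outer(A[i], A[i]) / sigma2[i]
    return np.trace(np.linalg.inv(M))   # M must be invertible (enough active sensors)

# toy example with placeholder data
rng = np.random.default_rng(0)
J, m = 20, 2
A = rng.standard_normal((J, m))     # regressors a_i as rows
sigma2 = np.ones(J)                 # noise variances
rbar = 0.7 * np.ones(J)             # maximum rates
r = rng.uniform(0.0, 1.0, J)        # relative rates (the design variables)
print(mse_rate(r, rbar, A, sigma2))
\end{verbatim}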
\subsection{Connectivity Modeling}
Before formalizing the problem mathematically, we need to introduce how we will model the communication links and the induced connectivity. In this paper, we use a stochastic point of view and we use the stochastic routing framework of \cite{Zavlanos13}.
In our multihop wireless network, messages will be routed stochastically, i.e., sensor nodes select a neighbor, either a sensor or an AP, to forward the message according to a certain probability. A set of variables $T_{ip} \in [0,1]$ will denote the probability that node $i$ selects node $p$, either a sensor or an AP, as a destination of the transmitted messages. In this sense, the variables $T_{ip}$ can be seen as the probability that node $i$ selects the link that joins sensors $i$ and $p$. The matrix $\mathbold{T}$, of size $J \times (J+K)$, gathers all these probability values. Further, the matrix $\mathbold{T}$ needs to satisfy a certain number of constraints. First, if either one of the sensors $i$ or $p$ is not active, then $T_{ip}$ must be zero: this models the fact that if a sensor is not active then it cannot send or receive messages. This can be formulated as
\begin{equation}\label{eq.Tminw}
T_{ip} = 0 \quad \textrm{iff}\quad r_i r_p = 0, \quad i\in\mathcal{V}_{\textrm{s}}, p \in \mathcal{V},
\end{equation}
since $T_{ip}$ will be nonzero if and only if both $r_i$ and $r_p$ are nonzero, meaning that the sensors are active (we can fix $r_p = 1$ for APs, without loss of generality). Second, since we are dealing with link probability values, the sum of all link probabilities of an active sensor should be at most $1$:
\begin{equation}\label{eq.prob1}
\sum_{p\in\mathcal{V}} T_{ip} \leq 1, \quad i\in\mathcal{V}_{\textrm{s}}.
\end{equation}
We notice that we can use $i\in\mathcal{V}_{\textrm{s}}$ in the condition~\eqref{eq.prob1}, since non-active sensors have $T_{ip} = 0$ due to condition~\eqref{eq.Tminw}, and therefore \eqref{eq.prob1} is automatically satisfied.
To complete the formulation, we want to ensure the delivery of messages to the APs, which is achieved by guaranteeing network connectivity among the active sensors and APs. To that aim, let $R_0$ be the transmission rate of the sensors; we assume that all sensors have the same transmission rate (an easy-to-lift constraint) and consider normalized rates by setting $R_0=1$. Then the effective transmission rate on the active link between nodes $i$ and $p$ is $R_0 R_{ip}$ (recall that $R_{ip} = R(\|\mathbold{x}_i-\mathbold{x}_p\|)$ is the link reliability between sensors or APs).
Each sensor stores messages in a queue between their generation or arrival from other sensors and their transmission. An active sensor $i$, apart from generating messages locally at rate $r_i \bar{r}_i$, also receives messages from other sensors $p$ at rate $T_{pi}R_{pi}$ over the corresponding active links. Thus, the aggregate rate at which messages arrive at sensor node $i$ is
\begin{equation}
\label{r_in}
r_i^{\textrm{in}}= r_i \bar{r}_i + \sum_{p\in\mathcal{V}_{\mathrm{s}}} T_{pi}R_{pi}.
\end{equation}
In a similar way, the rate at which sensor $i$ sends messages to other nodes $p$, which may be sensors or APs, is given by
\begin{equation}
\label{r_out}
r_i^{\textrm{out}} = \sum_{p\in\mathcal{V}}T_{ip}R_{ip}.
\end{equation}
If the average rate at which messages leave the sensor's queue is at least as high as the rate at which messages arrive, i.e., $r_i^{\textrm{out}} \geq r_i^{\textrm{in}}$, that is,
\begin{equation}\label{eq.flow}
r_i \bar{r}_i + \sum_{p\in\mathcal{V}_{\mathrm{s}}} T_{pi}R_{pi} \leq \sum_{p\in\mathcal{V}}T_{ip}R_{ip}, \quad i \in \mathcal{V}_{\textrm{s}},
\end{equation}
then the queue empties infinitely often with probability one and there is an almost sure guarantee that the messages are delivered to the AP \cite{Zavlanos13} ({a formal statement of this fact will be given in the following}).
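Given a candidate pair $(\r, \mathbold{T})$, the constraints \eqref{eq.Tminw}, \eqref{eq.prob1} and \eqref{eq.flow} can be verified numerically. The following Python/NumPy sketch (with hypothetical inputs and a small numerical tolerance) performs these checks.
\begin{verbatim}
import numpy as np

def check_constraints(r, rbar, T, R, tol=1e-9):
    """Sanity checks of (eq.Tminw), (eq.prob1), (eq.flow) for a candidate (r, T).

    r, rbar : (J,)      relative and maximum sensing rates
    T, R    : (J, J+K)  routing probabilities and link reliabilities
                        (rows: sensors; columns: sensors first, then APs)
    """
    J = r.shape[0]
    inactive = r <= tol
    # (eq.Tminw): an inactive sensor neither sends nor receives messages
    coupling = np.all(T[inactive, :] <= tol) and np.all(T[:, :J][:, inactive] <= tol)
    # (eq.prob1): the link probabilities of each sensor sum to at most one
    prob = np.all(T.sum(axis=1) <= 1 + tol)
    # (eq.flow): the incoming rate does not exceed the outgoing rate
    r_in = r * rbar + (T[:, :J] * R[:, :J]).sum(axis=0)
    r_out = (T * R).sum(axis=1)
    flow = np.all(r_in <= r_out + tol)
    return coupling, prob, flow
\end{verbatim}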
\textbf{Problem statement}: Given the measurement model for the different sensors and a prescribed performance measure value $\gamma>0$, we want to find the relative rates $\r \in [0,1]^J$, which select the minimum subset of sensors, and the probabilistic routing matrix $\mathbold{T} \in [0,1]^{J \times (J+K)} $, which selects the minimum subset of links, so that the performance measure $f(\r) \leq \gamma$ is satisfied and the messages are delivered to the APs. This can be stated as
\begin{subequations}
\label{Problem.nonconvex}
\begin{align}
& \underset{\r,\mathbold{T}}{\text{minimize}}
& & \alpha_1 \|\r\|_0 + \alpha_2 \| \mathbold{T} \|_0 \\
& \text{subject to} & & r_i \in [0,1],\, T_{ip} \in [0,1], \quad i\in\mathcal{V}_{\textrm{s}}, \, p \in\mathcal{V} \\
&
&& \eqref{eq.Tminw}, \eqref{eq.prob1}, \eqref{eq.flow} \\
&
&& f(\r) \leq \gamma,
\end{align}
\end{subequations}
where the non-negative scalars $\alpha_1$ and $\alpha_2$ determine the importance of the sensors and the links. If $\alpha_1 = 0$, then the problem becomes link selection with stochastic routing, while for $\alpha_2 = 0$, the problem is sensor selection. We denote as $(\r^*, \mathbold{T}^*)$ any optimal couple determined by the solution of the problem~\eqref{Problem.nonconvex}.
We can readily notice that \eqref{Problem.nonconvex} is a nonconvex program, which makes finding any optimal couple $(\r^*, \mathbold{T}^*)$ computationally expensive in practice. In this paper, we are interested in finding an approximate solution of~\eqref{Problem.nonconvex} by a suitable convex relaxation.
\section{Sensor and Relay Selection}
\label{Relays}
When dealing with wireless sensor networks deployed over large areas, it is often useful to employ relays to facilitate the transmission of measurements back to the APs. In this spirit, we also consider the possible presence of relays. In particular, from here on, all the nodes deployed in the sensor network may act as sensors or as relays. Note that sensors can also act as relays while sensing, as discussed in the previous section. Our goal is to consistently determine which of the nodes, placed at well-defined positions, should play the role of sensors and which ones the role of relays, while guaranteeing both a prescribed network performance and connectivity in the selected subgraph. Notice that relays have lower energy requirements than sensors, and therefore the distinction between sensors and relays is beneficial to further reduce the overall energy consumption. Notice also that, as mentioned in the introduction, the proposed solution may be reiterated in time, to assign different roles at different times.
In order to model the possibility for a node to act as a sensor or as a relay, we introduce a new Boolean variable $\mathbold{\nu} \in \{0,1\}^{J+K}$: a node $i \in \mathcal{V}_{\textrm{s}}$, either a sensor or a relay, is on if $\nu_i = 1$ and off otherwise (we fix $\nu_p =1$ for APs). Among the nodes that are on, those with a strictly positive rate $r_i$ act as sensors, while the others act as relays.
We also reformulate the constraints accordingly. The constraint~\eqref{eq.Tminw} gets reformulated as
\begin{equation}\label{eq.Tminnu}
T_{ip} \leq \min\{\nu_i, \nu_p\}, \quad i\in\mathcal{V}_{\textrm{s}}, p \in \mathcal{V},
\end{equation}
as the relays can exchange information. Notice that the constraint has a simplified form w.r.t.~\eqref{eq.Tminw}, since the variable $\mathbold{\nu}$ is Boolean. In addition, we need a constraint that makes sure that a sensor has a positive relative rate only when its node is activated, that is
\begin{equation}\label{eq.wminnu}
r_i \leq \nu_i, \quad i\in\mathcal{V}_{\textrm{s}}.
\end{equation}
Finally, the constraint~\eqref{eq.flow} carries over unchanged:
\begin{equation}\label{eq.flowrelay}
r_i \bar{r}_i + \sum_{p\in\mathcal{V}_{\mathrm{s}}} T_{pi}R_{pi} \leq \sum_{p\in\mathcal{V}}T_{ip}R_{ip}, \quad i \in \mathcal{V}_{\textrm{s}}.
\end{equation}
With this in place, the problem we want to solve is how to consistently select minimum rates, relays, and links so as to guarantee a certain network performance and connectivity. We can formulate this as
\begin{subequations}
\label{SensorRelayOptizProblemDeter0}
\begin{align}
& \underset{\r,\mathbold{T},\mathbold{\nu}}{\text{minimize}}
& & \alpha_1 \|\r\|_0 + \alpha_2 \|\mathbold{T}\|_0 + \alpha_3 \|\mathbold{\nu}\|_0 \\
& \text{subject to}& & r_i \in [0,1],\, T_{ip} \in [0,1], \nonumber \\ &&& \nu_i \in \{0,1\}, \quad i\in\mathcal{V}_{\textrm{s}}, \, p \in\mathcal{V} \\
&
&& \eqref{eq.prob1}, \eqref{eq.Tminnu}, \eqref{eq.wminnu}, \eqref{eq.flowrelay}, \\
&
&& f(\r) \leq \gamma.
\end{align}
\end{subequations}
Any possible solution of this problem is indicated as the triplet $(\r^*, \mathbold{T}^*, \mathbold{\nu}^*)$. This problem is a nonconvex mixed-integer program, and therefore finding any triplet $(\r^*, \mathbold{T}^*, \mathbold{\nu}^*)$ would in general be too computationally expensive. As done for the case where the relays are not present, we relax the problem to a convex one. In particular, we substitute the $\ell_0$ pseudo-norm with the convex surrogate $\ell_1$ norm, and we let the Boolean vector $\mathbold{\nu}$ become real and live in the set $[0,1]^{J+K}$. With this, we arrive at the convex problem
\begin{subequations}
\label{SensorRelayOptizProblemDeter}
\begin{align}
& \underset{\r,\mathbold{T},\mathbold{\nu}}{\text{minimize}}
& & \alpha_1 \|\r\|_1 + \alpha_2 \|\mathbold{T}\|_1 + \alpha_3 \|\mathbold{\nu}\|_1 \\
& \text{subject to}& & r_i \in [0,1],\, T_{ip} \in [0,1], \nonumber \\ &&& \nu_i \in [0,1], \quad i\in\mathcal{V}_{\textrm{s}}, \, p \in\mathcal{V} \\
&
&& \eqref{eq.prob1}, \eqref{eq.Tminnu}, \eqref{eq.wminnu}, \eqref{eq.flowrelay}, \\
&
&& f(\r) \leq \gamma,
\end{align}
\end{subequations}
whose solution is denoted by $(\hat{\r}, \hat{\mathbold{T}}, \hat{\mathbold{\nu}})$. Once again, the approximate triplet $(\hat{\r}, \hat{\mathbold{T}}, \hat{\mathbold{\nu}})$ is in general different from the sought one $(\r^*, \mathbold{T}^*, \mathbold{\nu}^*)$. An important difference with problem~\eqref{Problem.nonconvex} and its relaxed version is the presence of the Boolean vector $\mathbold{\nu}$: this makes the triplet $(\hat{\r}, \hat{\mathbold{T}}, \hat{\mathbold{\nu}})$ in general infeasible w.r.t. the constraints of the nonconvex problem~\eqref{SensorRelayOptizProblemDeter0} (the reason is that $\hat{\nu}_i$ need not be either $0$ or $1$). In this paper, we project $\hat{\nu}_i$ to $1$ whenever $\hat{\nu}_i > 0$. In this way, the new triplet becomes feasible w.r.t. the constraints of the nonconvex problem~\eqref{SensorRelayOptizProblemDeter0}.
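To make the relaxation concrete, the following sketch sets up one solve of \eqref{SensorRelayOptizProblemDeter} in Python, assuming the CVXPY modeling package together with an SDP-capable solver; the MSE-rate constraint $f(\r)\leq\gamma$ with $f$ as in \eqref{CostFunction} is expressed through CVXPY's matrix-fractional atom. All problem data are placeholders, and the reweighting of Algorithm~\ref{alg:SeReLiS} below would simply repeat this solve with the objective terms multiplied by the current weights.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def solve_relaxed_ssrls(A, sigma2, rbar, R, gamma, alpha=(1.0, 1.0, 1.0)):
    """One (unweighted) solve of the convex relaxation above.

    A      : (J, m)    regressors a_i as rows
    sigma2 : (J,)      noise variances
    rbar   : (J,)      maximum sensing rates
    R      : (J, J+K)  link reliabilities (columns: sensors first, then APs)
    gamma  : float     performance threshold f(r) <= gamma
    """
    J, m = A.shape
    N = R.shape[1]                      # N = J + K
    a1, a2, a3 = alpha
    r = cp.Variable(J, nonneg=True)
    T = cp.Variable((J, N), nonneg=True)
    nu = cp.Variable(N, nonneg=True)

    # information matrix M(r); f(r) = tr(M(r)^{-1})
    M = sum(rbar[i] / sigma2[i] * r[i] * np.outer(A[i], A[i]) for i in range(J))

    cons = [r <= 1, T <= 1, nu <= 1,
            nu[J:] == 1,                                        # APs are always on
            cp.sum(T, axis=1) <= 1,                             # (eq.prob1)
            T <= cp.reshape(nu[:J], (J, 1)) @ np.ones((1, N)),  # (eq.Tminnu), part 1
            T <= np.ones((J, 1)) @ cp.reshape(nu, (1, N)),      # (eq.Tminnu), part 2
            r <= nu[:J],                                        # (eq.wminnu)
            cp.multiply(rbar, r)
              + cp.sum(cp.multiply(T, R)[:, :J], axis=0)
              <= cp.sum(cp.multiply(T, R), axis=1),             # (eq.flowrelay)
            cp.matrix_frac(np.eye(m), M) <= gamma]              # f(r) <= gamma

    # l1 norms reduce to plain sums because all variables are nonnegative
    obj = cp.Minimize(a1 * cp.sum(r) + a2 * cp.sum(T) + a3 * cp.sum(nu))
    cp.Problem(obj, cons).solve()

    # project nu_hat to 1 whenever it is (numerically) positive
    nu_hat = np.where(nu.value > 1e-9, 1.0, 0.0)
    return r.value, T.value, nu_hat
\end{verbatim}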
In Algorithm~\ref{alg:SeReLiS}, we summarize the procedure for consistent sparse sensor, relay, and link selection (SSRLS), where we once again use the sparsity-enhancing reweighting procedure.
\begin{algorithm}[t]
\footnotesize
\begin{algorithmic}[1]
\REQUIRE Number of iterations $N$, reweighting tolerance $\epsilon>0$, sensor importance $\alpha_1\geq 0$, link importance $\alpha_2\geq 0$, relay importance $\alpha_3\geq 0$.
\STATE Set the weighting vectors and matrix as $w^0_i = 1$, $v^0_i = 1$, and $W^0_{ip} = 1$ for all $i\in \mathcal{V}_\textrm{s}$ and $p \in \mathcal{V}$
\FOR {$\tau = 0$ to $N-1$}
\STATE Solve the convex program
\begin{align*}
& \underset{\r,\mathbold{T},\mathbold{\nu}}{\text{minimize}}
& & \alpha_1 \|\mathbold{w}^{\tau}\odot\r\|_1 + \alpha_2 \|\mathbold{W}^{\tau}\odot\mathbold{T}\|_1 + \alpha_3 \|\v^{\tau}\odot\mathbold{\nu}\|_1 \\
& \text{subject to}& & r_i \in [0,1],\, T_{ip} \in [0,1], \nonumber \\ &&& \nu_i \in [0,1], \quad i\in\mathcal{V}_{\textrm{s}}, \, p \in\mathcal{V} \\
&
&& \eqref{eq.prob1}, \eqref{eq.Tminnu}, \eqref{eq.wminnu}, \eqref{eq.flowrelay}, \\
&
&& f(\r) \leq \gamma,
\end{align*}
with off-the-shelf interior point methods (e.g., SDPT3~\cite{Toh1999} or SeDuMi~\cite{Sturm1998}). Let the solution be $(\hat{\r}^\tau, \hat{\mathbold{T}}^\tau, \hat{\mathbold{\nu}}^\tau)$.
\STATE Compute the new weights $\mathbold{w}^{\tau+1}$, $\v^{\tau+1}$ and $\mathbold{W}^{\tau+1}$ as
$$
w^{\tau+1}_i = \frac{w^{\tau}_i}{\epsilon + \hat{r}^{\tau}_i}, \quad W^{\tau+1}_{ip} = \frac{W^{\tau}_{ip}}{\epsilon + \hat{T}^{\tau}_{ip}}, \quad v^{\tau+1}_i = \frac{v^{\tau}_i}{\epsilon + \hat{\nu}^{\tau}_i}
$$
\ENDFOR
\STATE Project $\hat{\nu}_i^N$ to $1$, if $\hat{\nu}_i^N> 0$.
\STATE Output the solution triplet $(\hat{\r}^N, \hat{\mathbold{T}}^N, \hat{\mathbold{\nu}}^N)$
\end{algorithmic}
\caption{Sparse Sensor, Relay, and Link Selection}
\label{alg:SeReLiS}
\end{algorithm}
{{\bf Connectivity guarantees of Algorithm~\ref{alg:SeReLiS}.}
We now formalize the claim that from each active sensor there exists a path (formed by relays and other active sensors) to an AP. The argument is the same as the one used to prove the connectivity guarantees of Algorithm~\ref{alg:SeLiS} (where no relays were considered). Consider~\eqref{eq.flowrelay}: this constraint must hold for each active sensor and each active relay (a node for which $r_i = 0$ and $\nu_i>0$), and it reads $0 \leq 0$ for the inactive ones (due to constraints~\eqref{eq.Tminnu}-\eqref{eq.wminnu}). Since it must hold for all active sensors and relays, the sensors have to send out more rate than what they receive (the difference being $r_i \bar{r}_i$), while the relays can send out exactly what they receive. Therefore, first, no active sensor or relay can be a sink. Second, there cannot be loops containing active sensors that are not connected to a sink, since the rate augments along the loop and~\eqref{eq.flowrelay} would be violated for at least one pair of connected active elements (either sensor-sensor, sensor-relay, or relay-relay). Third, there cannot be loops containing only active relays: although~\eqref{eq.flowrelay} would be satisfied along the loop for any $T_{ip} = T_{pi}$ for all pairs of active relays $(i,p)$ on the loop, the solution $T_{ip}=0$ is the optimal one given the selected cost function, which induces all the relays in the loop to become inactive. Thus, the only possibility is that eventually each sensor has a path to a sink, and no relays are used without purpose. \hfill $\Box$
}
\section{Numerical Results for Sensors and Links}
\label{SimulationResults1}
In this section, we assess the performance of the proposed SSLS algorithm in terms of the amount of resources that are used, i.e., the number of both active sensors and links. We also verify the consistency and the subgraph connectivity.
We consider an estimation scenario where sensors are randomly deployed according to a uniform distribution in a square area of side $5$ units. The regression matrix $\mathbold{A}=[\a_1, \dots, \a_J]^\mathsf{T} \in \mathbb{R}^{J\times m}$ has entries drawn from a zero-mean Gaussian distribution with unit variance.
The noise level is the same at all sensors, $\sigma_i=1/ \sqrt{\text{SNR}}$, where the SNR is set to $0$ dB. We use the cost $f(\r)$ of \eqref{CostFunction} and set the parameter $\gamma$ in \eqref{eq.MSErate} to 0.5.
The link reliability metric that we use in the simulations is given by:
\begin{equation}
\label{LinkReliability}
R_{ip}= \left\{
\begin{array}{ll}
1 - \frac{1}{2}({\frac{\parallel \mathbold{x}_i - \mathbold{x}_p \parallel}{d}})^{2\beta} & \mbox{if} ~ 0 \le \parallel \mathbold{x}_i - \mathbold{x}_p \parallel < d \\
\frac{1}{2}(2-\frac{\parallel \mathbold{x}_i - \mathbold{x}_p \parallel}{d})^{2\beta} & \mbox{if} ~ d \le \parallel \mathbold{x}_i - \mathbold{x}_p \parallel < 2d \\
0 & \mbox{otherwise} \\
\end{array} \right.
\end{equation}
with $\beta$ the power attenuation factor ($2 \leqslant \beta\leqslant 6$) and $d$ the communication radius. We have considered $\beta=2$ and $d=1.74$ \cite{Arroyo-Valles07}.
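For reproducibility, the reliability model \eqref{LinkReliability} can be coded as follows; the snippet is a small Python/NumPy sketch with the simulation values $\beta=2$ and $d=1.74$ as defaults.
\begin{verbatim}
import numpy as np

def link_reliability(xi, xp, d=1.74, beta=2):
    """Link reliability R_ip of (LinkReliability) as a function of node positions."""
    dist = np.linalg.norm(np.asarray(xi, float) - np.asarray(xp, float))
    if dist < d:
        return 1.0 - 0.5 * (dist / d) ** (2 * beta)
    if dist < 2 * d:
        return 0.5 * (2.0 - dist / d) ** (2 * beta)
    return 0.0
\end{verbatim}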
The number of iterations in the reweighted $\ell_1$ minimization is empirically set to 30 to trade off sparsity of the solution against computational time.
Due to the application of the reweighted $\ell_1$ minimization mechanism, only the sensors and links with relatively high acquisition rates and link probabilities remain active. We round off to 0 the link probabilities and sensor rates lower than a sufficiently small constant $\delta$, which is set to $\delta = 2\cdot 10^{-4}$. Further, we set $\alpha_1=\alpha_2=1$. {Notice that rounding off the probabilities to $0$ could incur a loss of connectivity. This is, however, unlikely in practice, since the reweighting procedure makes sure that the non-zero probabilities have values well above the selected threshold $\delta$. The experimental results support this claim, since we have not witnessed any loss of connectivity.}
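The rounding-off step amounts to a simple thresholding, sketched below in Python/NumPy with the value of $\delta$ given above; the function name is illustrative.
\begin{verbatim}
import numpy as np

def round_off(r_hat, T_hat, delta=2e-4):
    """Set relative rates and link probabilities below delta to zero."""
    return np.where(r_hat < delta, 0.0, r_hat), np.where(T_hat < delta, 0.0, T_hat)
\end{verbatim}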
Fig. \ref{fig:SLSelection} shows an example of a 100-node sensor network with a single AP. The parameter to estimate has dimension $m=2$ and the maximum rate is $\bar{r}=0.7$. Active sensors are colored in green while the AP is in black. The results show the sparsity of the solution, since only a few sensors (4\%) and links (0.072\%) are active. It can also be seen that the selected subgraph is connected and there is always a path between the active sensors and the AP. The solution also satisfies the other constraints of the optimization problem. Fig. \ref{fig:SLSelection_Index} shows the relative rates of the active sensors.
\begin{figure}[tb]
\centering
\psfrag{a}{\small \text{y-axis}}
\psfrag{b}{\small \text{x-axis}}
\includegraphics[scale=1]{./Figures/SLSel_J100_K1_c1_5_c2_1_MSE_05_r_07_n2}
\caption[Network topology with selected sensors and links]{Active links and sensors in a one-AP network for $J=100$ nodes.}
\label{fig:SLSelection}
\end{figure}
\begin{figure}[tb]
\centering
\psfrag{a}{\small \hskip1cm $\hat{r}_i$}
\psfrag{b}{\small \text{Sensor index $i$}}
\includegraphics[scale=1]{./Figures/IndexSLSel_J100_K1_c1_5_c2_1_MSE_05_r_07_n2Mod}
\caption{Relative rates of the active sensors.}
\label{fig:SLSelection_Index}
\end{figure}
Next, our purpose is to show average performance results. Hence, we run $250$ Monte Carlo simulations for each network configuration. The number of deployed sensors, $J$, varies from 30 to 100, and there is one AP. Two values of $\bar{r}$ are considered, 0.4 and 0.7, assumed identical for all nodes in the sensor network. We also consider two values for the dimension of the parameter to estimate, $m=2$ and $m=4$.
Two metrics are considered to assess the performance of the networks. They try to measure the amount of resources that are used in the network.
Since we are dealing with acquisition rates, let us first define the total relative acquisition rate of the whole network as the sum of the acquisition rates of the sensors in the network, i.e., $\sum_{i\in\mathcal{V}_{\textrm{s}}} \hat{r}_i^N$. In order to make the performance measurement independent of the number of sensors in the network, we define the percentage of the total relative acquisition rate of the whole network, $P_{\textrm{trr}}$, as
\begin{equation}
P_{\textrm{trr}}=\frac{\sum_{i\in\mathcal{V}_{\textrm{s}}} \hat{r}_i^N}{J} \cdot 100.
\end{equation}
Recall that the relative acquisition rate $\hat{r}_i^N \in [0,1]$. Note that only the active sensors contribute to the sum since their acquisition rate is different from 0, so this measure gives us information about the percentage of active sensors.
Considering that $\sum_{p \in \mathcal{V}} \hat{T}_{ip}^N \leq 1$ for $i\in\mathcal{V}_{\textrm{s}}$, we next define the percentage of the aggregate network link probability, $P_{\textrm{alp}}$, as
\begin{equation}
P_{\textrm{alp}}=\frac{\sum_{i\in\mathcal{V}_{\textrm{s}}} \sum_{p \in \mathcal{V}} \hat{T}_{ip}^N}{J} \cdot 100.
\end{equation}
Note that only active sensors and APs contribute to the sum, since the remaining link probabilities are 0. In this case, the metric is related to the percentage of active links in the network.
In both cases, the lower the metrics are, the fewer resources (in terms of active sensors and links) are used.
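Both metrics are simple functions of the returned solution $(\hat{\r}^N, \hat{\mathbold{T}}^N)$; the following Python/NumPy sketch (the function name is illustrative) computes them.
\begin{verbatim}
import numpy as np

def resource_metrics(r_hat, T_hat):
    """Percentage metrics P_trr and P_alp computed from a returned solution."""
    J = r_hat.shape[0]
    P_trr = 100.0 * r_hat.sum() / J   # total relative acquisition rate (percent)
    P_alp = 100.0 * T_hat.sum() / J   # aggregate network link probability (percent)
    return P_trr, P_alp
\end{verbatim}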
\begin{figure}[t]
\centering
\psfrag{a}{\small \hskip1cm $P_{\textrm{trr}}, P_{\textrm{alp}}$}
\psfrag{b}{\small \text{Number of Sensor Nodes}}
\psfrag{lg1}{\small \text{sensors} $\bar{r}=0.4$}
\psfrag{lg2}{\small \text{links} $\bar{r}=0.4$}
\psfrag{lg3}{\small \text{sensors} $\bar{r}=0.7$}
\psfrag{lg4}{\small \text{links} $\bar{r}=0.7$}
\includegraphics[scale=1]{./Figures/AveragePercRates_SLSel_K1_c1_5_c2_1_MSE_05_r_04_r_07_n2_newStd}
\caption {Average performance and its standard deviation for $m=2$ and for different amount of sensors and $\bar{r}$.}
\label{fig:SLSelectionAvg_n2}
\end{figure}
\begin{figure}[t]
\centering
\psfrag{a}{\small \hskip1cm $P_{\textrm{trr}}, P_{\textrm{alp}}$}
\psfrag{b}{\small \text{Number of Sensor Nodes}}
\psfrag{lg1}{\small \text{sensors} $\bar{r}=0.4$}
\psfrag{lg2}{\small \text{links} $\bar{r}=0.4$}
\psfrag{lg3}{\small \text{sensors} $\bar{r}=0.7$}
\psfrag{lg4}{\small \text{links} $\bar{r}=0.7$}
\includegraphics[scale=1]{./Figures/AveragePercRates_SLSel_K1_c1_5_c2_1_MSE_05_r_04_r_07_n4_newStd}
\caption{Average performance and its standard deviation for $m=4$ and for different amount of sensors and $\bar{r}$.}
\label{fig:SLSelectionAvg_n4}
\end{figure}
\begin{figure}[t]
\centering
\psfrag{a}{\small \hskip-.8cm Percentage of active sensors / links}
\psfrag{b}{\small \text{Number of Sensor Nodes}}
\psfrag{lg1}{\small \text{sensors} $\bar{r}=0.4$}
\psfrag{lg2}{\small \text{links} $\bar{r}=0.4$}
\psfrag{lg3}{\small \text{sensors} $\bar{r}=0.7$}
\psfrag{lg4}{\small \text{links} $\bar{r}=0.7$}
\includegraphics[scale=1]{./Figures/AveragePercSenLinks_SLSel_K1_c1_5_c2_1_MSE_05_r_04_r_07_n2_newStd}
\caption {Average percentage of active sensors and links and its standard deviation for $m=2$ and for different amount of sensors and $\bar{r}$.}
\label{fig:SLSelectionAvg_n2_SLPerc}
\end{figure}
\begin{figure}[h!]
\centering
\psfrag{a}{\small \hskip-.8cm Percentage of active sensors / links}
\psfrag{b}{\small \text{Number of Sensor Nodes}}
\psfrag{lg1}{\small \text{sensors} $\bar{r}=0.4$}
\psfrag{lg2}{\small \text{links} $\bar{r}=0.4$}
\psfrag{lg3}{\small \text{sensors} $\bar{r}=0.7$}
\psfrag{lg4}{\small \text{links} $\bar{r}=0.7$}
\includegraphics[scale=1]{./Figures/AveragePercSenLinks_SLSel_K1_c1_5_c2_1_MSE_05_r_04_r_07_n4_newStd}
\caption {Average percentage of active sensors and links and its standard deviation for $m=4$ and for different amount of sensors and $\bar{r}$.}
\label{fig:SLSelectionAvg_n4_SLPerc}
\end{figure}
Fig. \ref{fig:SLSelectionAvg_n2} and Fig. \ref{fig:SLSelectionAvg_n4} show the average performance and the standard deviation for $m=2$ and $m=4$, respectively, for different amounts of deployed sensors and the two values of $\bar{r}$. Even for the worst case scenario, i.e., for $\bar{r}=0.4$ and a $30$-node network, the $P_{\textrm{trr}}$ and $P_{\textrm{alp}}$ values are $8$\% and $5.5$\% for $m=2$ and $22$\% and $17$\% for $m=4$, respectively (which represents a small percentage of used resources).
In order to verify whether those metric values correspond to the activation of a low number of sensors with high relative rate values or to a high number of active sensors with low relative rate values, Fig. \ref{fig:SLSelectionAvg_n2_SLPerc} and Fig. \ref{fig:SLSelectionAvg_n4_SLPerc} illustrate the average percentage of active sensors and links. For networks of $30$ nodes and $\bar{r}=0.4$, the average percentage of active sensors and links is $12$\% (i.e., $3.6$ sensors) and $0.55$\% for $m=2$, and $30$\% (i.e., $9$ sensors) and $1.7$\% for $m=4$, respectively. Thus, this result corroborates that the amount of used resources is small: there is a low percentage of active sensors, each with a high relative rate, which is why the $P_{\textrm{trr}}$ values are the highest in Fig. \ref{fig:SLSelectionAvg_n2} and Fig. \ref{fig:SLSelectionAvg_n4}.
\subsection{Case $\bar{r}=0.7$ (yellow and red bars)}
From Fig. \ref{fig:SLSelectionAvg_n2} and Fig. \ref{fig:SLSelectionAvg_n4}, it can be seen that the values of the two metrics decrease as the number of sensors increases (regardless of $m$), reaching lower values than those of the $30$-node network.
Let us examine the scenario with $m=2$ (analogous conclusions hold for networks with $m=4$). If we also analyze the trend in the percentage of active sensors (Fig. \ref{fig:SLSelectionAvg_n2_SLPerc}), it first decreases and later increases slightly starting from networks of $80$ nodes.
Even though networks with $80$ to $100$ nodes have between $6\%$ and $8\%$ of active nodes (i.e., $7$ sensors), those networks have a slightly higher amount of active resources than networks of $30$ nodes ($10\%$, approximately $3$ nodes). However, in general, the total number of active sensors stays low in comparison with the total number of sensors, which corroborates the sparsity of the solution.
\subsection{Case $\bar{r}=0.4$ (dark and light blue bars)}
Let us analyze the behavior of the metrics for $\bar{r}=0.4$ and $m=2$ (analogous conclusions hold for networks with $m=4$). In Fig. \ref{fig:SLSelectionAvg_n2}, the $P_{\textrm{trr}}$ values decrease from $7.5$\% for networks of $30$ nodes to $2.7$\% for networks of $70$ nodes, and from that point increase slightly up to a value of $3.2$\% for networks of $100$ nodes. If we now look at Fig. \ref{fig:SLSelectionAvg_n2_SLPerc}, the percentage of active sensors goes from $11.8$\% in $30$-node networks (i.e., $3.5$ sensors) to $8$\% in $50$-node networks (i.e., $4$ sensors) and later increases until reaching a value of $22$\% in $90$-node networks (i.e., $20$ nodes).
While the number of active nodes is similar in networks with a low number of sensors ($30$ to $50$ nodes), it increases slightly for denser networks ($60$ to $100$ nodes). In this latter case, sensors are closer to each other, so the reliability values among sensors are similar and more sensors may be activated. First, although not reported here, we have observed that increasing the number of reweighting iterations does help in this case in reducing the number of active sensors, at the cost of increasing the computational time. Second, we will see that this is not an issue when relays are considered.
Regarding the percentage of active links, the values are below $0.7$\% for $m=2$ and $2$\% for $m=4$ for all the network sizes. Hence, the networks are sparse in the number of active links.
\vskip5mm
From the figures, it can be seen that, for a given number of nodes, the percentage of used resources is lower when estimating a parameter of dimension 2 than one of dimension 4 (compare the metric values as well as the percentages of active sensors and links). Furthermore, the percentage of used resources (active sensors and links) is lower when $\bar{r}=0.7$ is considered.
Subgraph connectivity and consistency have been also checked for every run. All the activated sensors have a path to the AP. Furthermore, connectivity of the network in the sense of \eqref{eq.flow} is guaranteed.
\section{Numerical Results with Relays}
\label{SimulationResults2}
Similarly to the sensor and link selection scenario, in this section we assess the performance of the new SSRLS algorithm in terms of the amount of resources that are used. We also verify the consistency and the subgraph connectivity.
Once again, we consider an estimation scenario, where the parameters are the same\footnote{In particular, the number of iterations in the reweighted $\ell_1$ minimization is set to 30 while $\delta = 2\cdot 10^{-4}$.} as those used in the sensor and link selection case (Section \ref{SimulationResults1}). We consider $\alpha_1=\alpha_2=\alpha_3=1$.
Fig. \ref{fig:SRLSelectionSc1} shows an example of a 50-node sensor network with two APs, with $m=4$ and $\bar{r}=0.4$. The active sensors are colored in green, the active relays in blue, and the APs in black. Looking at the figure, it is evident that the obtained solution is sparse: of the 50 nodes (excluding the APs), 5 are selected as sensors and 3 as relays. The percentage of active links (i.e., those that have a probability value higher than 0) is $8$\%. Observe the connectivity of the selected subgraph: there is a path from each active sensor to the APs via the relays, and messages are routed stochastically according to the link probabilities. The solution also satisfies the other constraints of the optimization problem.
\begin{figure}[tb]
\centering
\psfrag{a}{\small \text{y-axis}}
\psfrag{b}{\small \text{x-axis}}
\includegraphics[scale=1]{./Figures/SLSel_J50_K2_c1_5_c2_1_c4_1_MSE_05_r_04_n4_rl2}
\caption{Selected sensors, relays and links in a two-AP 50-sensor network where $m=4$. }
\label{fig:SRLSelectionSc1}
\end{figure}
Next, following an analysis parallel to the one carried out in the sensor and link selection scenario, we show average performance results. The number of sensors varies from 30 to 100, $\bar{r}$ is 0.4 or 0.7, and $m$ is either 2 or 4; 250 Monte Carlo simulations are run for each network configuration. The metrics used to assess the network performance are the ones introduced in Section \ref{SimulationResults1}. To check the sparsity in the number of relays, we also evaluate the percentage of active relays in the network.
Fig. \ref{fig:SLRelSelectionAvg_n2} and Fig. \ref{fig:SLRelSelectionAvg_n4} show the average performance and the standard deviation, for $m=2$ and $m=4$, respectively, for different numbers of deployed sensor nodes and the two values of the maximum acquisition rate.
{From these figures it can be seen that the $P_{\textrm{trr}}$ and $P_{\textrm{alp}}$ values decrease when the number of sensor nodes increases, going from a value of 6.3$\%$ ($m=2$, $\bar{r}=0.4$) or 19$\%$ ($m=4$, $\bar{r}=0.4$) for networks of $30$ nodes to values lower than 2$\%$ ($m=2$, $\bar{r}=0.4$) or 5$\%$ ($m=4$, $\bar{r}=0.4$) for networks of $100$ nodes.
To verify if those metric values correspond to the activation of a few sensors with high relative rate value, Fig. \ref{fig:SLRelSelectionAvg_n2_SLPerc} and Fig. \ref{fig:SLRelSelectionAvg_n4_SLPerc} illustrate the average percentage of active sensors and links. For $\bar{r}=0.4$, the average percentage of sensors goes from approximately $8\%$ (i.e., $2.4$ sensors for $m=2$) or $22\%$ (i.e., $6.5$ sensors for $m=4$) in 30-node networks to around $2\%$ (i.e., $2$ sensors for $m=2$) or $5\%$ (i.e., $5$ sensors for $m=4$) in 100-node networks, respectively.
Fig. \ref{fig:SLRelSelection_AvgRel} also illustrates the percentage of active relays for sensor networks of different sizes, $\bar{r}$ and $m$. For $m=4$ and $\bar{r}=0.4$, $2$ relays (or 5.5$\%$ of the nodes) are active in 30-sensor networks, while $1$ relay ($0.9\%$) is active in 100-sensor networks.
The conclusions from these figures are two-fold:
First, independently of the total number of available nodes, the algorithm robustly selects a similar number of sensors, relays and links to satisfy the constraint on the measurement errors. This strongly suggests that, given the constraints, the sensing resources are used close to optimally.
Second, the active sensors are those with high relative rates. Moreover, we obtain sparse solutions not only in the number of active sensors but also in the number of active links and relays.
As observed in the sensor and link selection scenario, the demand for resources (percentage of active sensors, links and relays) is lower when higher maximum rates are considered (see Fig. \ref{fig:SLRelSelectionAvg_n2_SLPerc}, Fig. \ref{fig:SLRelSelectionAvg_n4_SLPerc} and Fig. \ref{fig:SLRelSelection_AvgRel}). Also, for a given number of nodes, the amount of used resources grows as the dimension of the parameter to estimate increases.
Besides, the variability in the results of the sensor, relay and link selection problem is lower than in the sensor and link selection problem, as can be observed from the standard deviations in the figures. All in all, it appears that when the presence of relays is also considered, one obtains better performance in terms of reduced active resources than in the case without relays. A more in-depth characterization is left as future research.}
\begin{figure}
\centering
\psfrag{a}{\small $P_{\textrm{trr}}, P_{\textrm{alp}}$}
\psfrag{b}{\small \text{Number of Sensor Nodes}}
\psfrag{lg1}{\small \text{sensors} $\bar{r}=0.4$}
\psfrag{lg2}{\small \text{links} $\bar{r}=0.4$}
\psfrag{lg3}{\small \text{sensors} $\bar{r}=0.7$}
\psfrag{lg4}{\small \text{links} $\bar{r}=0.7$}
\includegraphics[scale=1]{./Figures/AveragePercRates_SLSelRelays_K1_c1_5_c2_1_MSE_05_r_04_r_07_n2_newStd}
\caption {Average performance and its standard deviation for $m=2$ and for different amount of sensors and $\bar{r}$.}
\label{fig:SLRelSelectionAvg_n2}
\end{figure}
\begin{figure}
\centering
\psfrag{a}{\small $P_{\textrm{trr}}, P_{\textrm{alp}}$}
\psfrag{b}{\small \text{Number of Sensor Nodes}}
\psfrag{lg1}{\small \text{sensors} $\bar{r}=0.4$}
\psfrag{lg2}{\small \text{links} $\bar{r}=0.4$}
\psfrag{lg3}{\small \text{sensors} $\bar{r}=0.7$}
\psfrag{lg4}{\small \text{links} $\bar{r}=0.7$}
\includegraphics[scale=1]{./Figures/AveragePercRates_SLSelRelays_K1_c1_5_c2_1_MSE_05_r_04_r_07_n4_newStd}
\caption {Average performance and its standard for $m=4$ and for different amount of sensors and $\bar{r}$.}
\label{fig:SLRelSelectionAvg_n4}
\end{figure}
\begin{figure}
\centering
\psfrag{a}{\small \hskip-1cm Percentage of active sensors / links}
\psfrag{b}{\small \text{Number of Sensor Nodes}}
\psfrag{lg1}{\small \text{sensors} $\bar{r}=0.4$}
\psfrag{lg2}{\small \text{links} $\bar{r}=0.4$}
\psfrag{lg3}{\small \text{sensors} $\bar{r}=0.7$}
\psfrag{lg4}{\small \text{links} $\bar{r}=0.7$}
\includegraphics[scale=1]{./Figures/AveragePercSenLinks_SLSelRelays_K1_c1_5_c2_1_MSE_05_r_04_r_07_n2_newStd}
\caption {Average percentage of active sensors and links and its standard deviation for $m=2$ and for different amount of sensors and $\bar{r}$.}
\label{fig:SLRelSelectionAvg_n2_SLPerc}
\end{figure}
\begin{figure}
\centering
\psfrag{a}{\small \hskip-1.5cm Percentage of active sensors / links}
\psfrag{b}{\small \text{Number of Sensor Nodes}}
\psfrag{lg1}{\small \text{sensors} $\bar{r}=0.4$}
\psfrag{lg2}{\small \text{links} $\bar{r}=0.4$}
\psfrag{lg3}{\small \text{sensors} $\bar{r}=0.7$}
\psfrag{lg4}{\small \text{links} $\bar{r}=0.7$}
\includegraphics[scale=1]{./Figures/AveragePercSenLinks_SLSelRelays_K1_c1_5_c2_1_MSE_05_r_04_r_07_n4_newStd}
\caption {Average percentage of active sensors and links and its standard deviation for $m=4$ and for different amount of sensors and $\bar{r}$.}
\label{fig:SLRelSelectionAvg_n4_SLPerc}
\end{figure}
\begin{figure}
\centering
\psfrag{a}{\small \hskip-.5cm \text{Percentage of active relays}}
\psfrag{b}{\small \text{Number of Sensor Nodes}}
\psfrag{lg1}{\small $\bar{r}=0.4, m=2$}
\psfrag{lg2}{\small $\bar{r}=0.7, m=2$}
\psfrag{lg3}{\small $\bar{r}=0.4, m=4$}
\psfrag{lg4}{\small $\bar{r}=0.7, m=4$}
\includegraphics[scale=1]{./Figures/AveragePercRelays_SLSelRelays_K1_c1_5_c2_1_MSE_05_r_04_r_07_n2_n4_newStd}
\caption {Average percentage of active relays and its standard deviation for different amount of sensors, $\bar{r}$ and $m$.}
\label{fig:SLRelSelection_AvgRel}
\end{figure}
Therefore, the SSRLS algorithm provides a consistent solution to the sensor and relay selection problem: it always finds a connected path among the active sensors, relays and APs, no matter the size of the network and the dimension $m$ of the parameter to estimate, and it satisfies the network performance constraint with the active sensors.
However, the following question may arise: are the active sensors obtained by the SSRLS algorithm the same as those that would be active in a problem aiming only at selecting the minimum number of sensors and their corresponding acquisition rates to satisfy a certain MSE-rate, i.e., the relaxed version of the following problem: $$\underset{\r}{\text{minimize}} \quad \|\r\|_0 \quad \text{subject to} \quad f(\r) \leq \gamma,$$ where sensors satisfying $r_i>\delta$ (i.e., with an acquisition rate different from 0) are selected?
\begin{figure}
\centering
{
\psfrag{a}{\small \text{y-axis}}
\psfrag{b}{\small \text{x-axis}}
\includegraphics[scale=1]{./Figures/SLRelSel_J100_K1_c1_5_c2_1_MSE_05_r_07_n4}
}
\caption{Selected sensors, relays and links in a one-AP 100-sensor network where $m=4$ and $\bar{r}=0.7$. }
\label{fig:SRLSel_comp1}
\end{figure}
\begin{figure}
\centering
\psfrag{a}{\small \text{y-axis}}
\psfrag{b}{\small \text{x-axis}}
\includegraphics[scale=1]{./Figures/OnlySensorSel_J100_K1_c1_5_c2_1_MSE_05_r_07_n4}
\caption{Selected sensors in a one-AP 100-sensor network where $m=4$ and $\bar{r}=0.7$. }
\label{fig:SLSel_comp1}
\end{figure}
Clearly, the answer is no. As an example, compare the active sensors in Fig. \ref{fig:SRLSel_comp1} and Fig. \ref{fig:SLSel_comp1}, for a one-AP 100-sensor network with $m=4$ and $\bar{r}=0.7$. The solution provided by the SSRLS algorithm not only takes into account the sensors with the highest acquisition rates (to satisfy the MSE-rate constraint), but also selects, in a robust, coherent and consistent way, relays and link probabilities such that the active sensors are connected to the APs (it also accounts for the sensor deployment). On the contrary, the solution provided by the sensor selection problem does not consider the spatial distribution of the sensors: the only issue that matters is the selection of the sensors with the best acquisition rates. Obviously, this does not mean that the two solutions cannot activate some common sensors; in the previous example, sensors with indexes 26, 49, 74 and 93 are selected in both solutions.
\section{Link Selection}
\label{SpecialCasesExtensions}
This last scenario is a particular case of the general one, where we assume that all sensors are active, acquire measurements with a relative rate of at least $r_{i0}$, and communicate with the APs in a multi-hop fashion. The remaining problem is to determine the probabilistic routing matrix $\mathbold{T}$ that selects the minimum subset of links so that a certain constraint is satisfied. In particular, we want to ensure network integrity, defined according to \cite{Zavlanos13} as the ability of the network to support the desired communication rates in a certain network topology.
As in the general scenario, the network needs to satisfy the flow inequality constraint \eqref{eq.flow} in order to guarantee that messages are delivered to the APs. Furthermore, it is also required that sensors communicate their measurements to the APs at a nominal rate of $r_{i0}$ messages per time unit, which means that the relative acquisition rate should satisfy $r_i \geq r_{i0}$. Thus, we aim at finding the appropriate relative rates $\r \in [0,1]^J$ and a sparse probabilistic routing matrix $\mathbold{T}$.
The network objective to be optimized is the social utility of the optimization variables, with $U(r_i)$ the utility of the relative rate $r_i$ and $V_{ip}(T_{ip})$ the utility of the link $T_{ip}$, defined as $\sum_{i=1}^J U(r_i) + \sum_{i=1}^J \sum_{p=1}^{J+K} V_{ip}(T_{ip})$.
Following \cite{Zavlanos13}, we measure the utility associated with the rate as $U(r_i)= \log(r_i)$, which penalizes small rates $r_i$, and the utility of the links as $V_{ip}(T_{ip})=-T_{ip}^2$.
Then, the optimization problem to solve is given by\footnote{Note that these utilities could also be incorporated in the objective functions of the previous optimization problems. However, for the sake of simplicity, we only consider them in this scenario.}
\begin{subequations}
\label{LinkOptizProblemV0}
\begin{align}
& \underset{\r,\mathbold{T}}{\text{maximize}}
& & \sum_{ i \in \mathcal{V}_{\textrm{s}}} U(r_i) + \sum_{ i \in \mathcal{V}_{\textrm{s}}} \sum_{p\in\mathcal{V}} V_{ip}(T_{ip}) - \alpha \| \mathbold{T} \|_0 \\
& \text{subject to} & & r_i \in [0,1],\, T_{ip} \in [0,1], \quad i\in\mathcal{V}_{\textrm{s}}, \, p \in\mathcal{V} \\
&
&& \eqref{eq.prob1}, \eqref{eq.flow} \\
&
&& r_i \geq r_{i0}, \quad i \in \mathcal{V}_{\textrm{s}}.
\end{align}
\end{subequations}
The problem in \eqref{LinkOptizProblemV0} is not convex due to the $\ell_0$-norm in the objective function. Thus, we relax the non-convex term of \eqref{LinkOptizProblemV0} by substituting the $\ell_0$ pseudo-norm with the $\ell_1$ norm. Then, the previous optimization problem is transformed into the following one
\begin{subequations}
\label{LinkOptizProblem}
\begin{align}
& \underset{\r,\mathbold{T}}{\text{maximize}}
& & \sum_{ i \in \mathcal{V}_{\textrm{s}}} U(r_i) + \sum_{ i \in \mathcal{V}_{\textrm{s}}} \sum_{p\in\mathcal{V}} V_{ip}(T_{ip}) - \alpha \| \mathbold{T} \|_1 \\
& \text{subject to} & & r_i \in [0,1],\, T_{ip} \in [0,1], \quad i\in\mathcal{V}_{\textrm{s}}, \, p \in\mathcal{V} \\
&
&& \eqref{eq.prob1}, \eqref{eq.flow} \\
&
&& r_i \geq r_{i0}, \quad i \in \mathcal{V}_{\textrm{s}}.
\end{align}
\end{subequations}
The objective function is strictly concave and the constraints are linear inequalities, so the problem can be solved efficiently using convex optimization tools. Note that the optimal utility depends on the spatial configuration of the sensors, and consequently so do the optimal rate variables and link probabilities, which are denoted by $r_{\textbf{x},i}^*$ and $T_{\textbf{x},ip}^*$, respectively.
The number of selected links depends on the parameter $\alpha$, which controls the sparsity level (the higher it is, the fewer links are selected). In order to increase sparsity and avoid tuning the parameter $\alpha$, we apply the iterative reweighted $\ell_1$ minimization algorithm only to the third term of the objective function (we call this partial reweighted $\ell_1$ minimization), which diminishes the influence of that parameter and helps in the link selection process. We round off to 0 the link probabilities lower than a sufficiently small constant $\delta$.
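As before, a minimal Python sketch of one solve of \eqref{LinkOptizProblem} is given below, assuming CVXPY (the logarithmic utility requires an exponential-cone-capable solver, such as the ones shipped with CVXPY); the partial reweighted $\ell_1$ procedure described above would repeat this solve with the last term replaced by a weighted sum over the entries of $\mathbold{T}$. All inputs are placeholders.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def solve_link_selection(rbar, R, r0, alpha=1.0):
    """One solve of the relaxed link-selection problem above.

    rbar : (J,)      maximum sensing rates
    R    : (J, J+K)  link reliabilities (columns: sensors first, then APs)
    r0   : (J,)      nominal relative rates r_{i0} (strictly positive)
    """
    J, N = R.shape
    r = cp.Variable(J)
    T = cp.Variable((J, N), nonneg=True)

    # social utility: U(r_i) = log(r_i), V_ip(T_ip) = -T_ip^2
    utility = cp.sum(cp.log(r)) - cp.sum_squares(T)
    obj = cp.Maximize(utility - alpha * cp.sum(T))   # ||T||_1 = sum(T) since T >= 0

    cons = [r >= r0, r <= 1, T <= 1,
            cp.sum(T, axis=1) <= 1,                          # (eq.prob1)
            cp.multiply(rbar, r)
              + cp.sum(cp.multiply(T, R)[:, :J], axis=0)
              <= cp.sum(cp.multiply(T, R), axis=1)]          # (eq.flow)

    cp.Problem(obj, cons).solve()
    return r.value, T.value
\end{verbatim}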
In the remainder of this section, we show the performance of the link selection scheme. Fig. \ref{fig:LinkSelExam} shows an example of a 50-node sensor network with one AP. Sensors are colored in red and the AP in black. The nominal rate is $r_{i0}=0.2$, identical for all the sensor nodes, and the weighting parameter is $\alpha=1$; $\delta$ and the rest of the parameters are identical to those defined in Section \ref{SimulationResults1}. The color and the thickness of the links are related to the routing probability values, which are grouped into different ranges: blue links have a probability value between $\delta$ and $0.25$ and the finest line; red links have a probability between $0.25$ and $0.5$; black links between $0.5$ and $0.75$; and green links between $0.75$ and $1$, with the thickest line.
Note that only $51$ links are active (i.e., the solution is sparse). Every sensor is connected via multiple hops (links that connect the sensors) to the AP, and the links with higher probabilities are always established between the AP and some of its neighboring nodes. This is logical, given that a message that has been routed through multiple sensors should have a higher probability of arriving successfully at the AP. In general, the sensors placed far from the AP tend to establish links with low probability values.
\begin{figure}
\centering
\psfrag{a}{\small \text{y-axis}}
\psfrag{b}{\small \text{x-axis}}
\psfrag{lg1}{\footnotesize \text{sensors}}
\psfrag{lg2}{\footnotesize \text{sink}}
\psfrag{lg3}{\footnotesize \text{$\delta \leq T_{i,j} < 0.25$}}
\psfrag{lg4}{\footnotesize \text{$0.25 \leq T_{i,j} < 0.5$}}
\psfrag{lg5}{\footnotesize \text{$0.5 \leq T_{i,j} < 0.75$}}
\psfrag{lg6}{\footnotesize \text{$0.75 \leq T_{i,j} \leq 1$}}
\includegraphics[scale=1, trim=0cm 0 0 0, clip=on]{./Figures/SLSel_J50_K1_c_1_r_02_ProbCrit025_ReweightDeploy1_seed100_Mod2}
\caption {Selected links in a one-AP 50-node network or $\alpha=1$, and using partial reweighted $l_1$ . Sensors are colored in red, the AP in black and different colors in the links represent the different probability ranges.}
\label{fig:LinkSelExam}
\end{figure}
\begin{figure}
\centering
\psfrag{a}{\small \hskip0cm\text{Percentage of active links}}
\psfrag{b}{\small \text{Number of Sensor Nodes}}
\psfrag{lg1}{\footnotesize \text{Total - $\delta \leq T_{i,j} \leq 1$}}
\psfrag{lg2}{\footnotesize \text{$\delta \leq T_{i,j} < 0.25$}}
\psfrag{lg3}{\footnotesize \text{$0.25 \leq T_{i,j} < 0.5$}}
\psfrag{lg4}{\footnotesize \text{$0.5 \leq T_{i,j} < 0.75$}}
\psfrag{lg5}{\footnotesize \text{$0.75 \leq T_{i,j} \leq 1$}}
\includegraphics[scale=1]{./Figures/AveragePercActiveLink_K1_n2_r02_Std}
\caption {Average percentage of total active links and its standard deviation for different number of sensor nodes, and when link probabilities are graded in different ranges. }
\label{fig:LSelectionAvg_n2}
\end{figure}
Fig. \ref{fig:LSelectionAvg_n2} illustrates the average percentage of active links (the total percentage and the percentage by probability ranges) for sensor networks whose number of sensors varies between 30 and 100, with $r_{i0}=0.2$; 250 Monte Carlo simulations are run for each network configuration. First, the figure shows the low number of active links, so the matrix $\mathbold{T}$ is sparse. For 30-node networks, the average percentage of active links is slightly higher than 5\%. The percentage values decrease as the number of nodes in the network increases. Regarding the different probability ranges, the highest percentage of active links corresponds to values of $T_{ij}$ between $\delta$ and 0.25, followed by links with probabilities $T_{ij}$ between 0.75 and 1. As in the earlier example, most of those links are established between the AP and its neighbors, which ensures that messages arrive at the AP.
\section{Conclusions}
In this paper, we have proposed two optimization methods for optimally and consistently selecting the minimum set of sensors (and their corresponding sensing rates) and links (and their link probability values), or of sensors, relays and links, in wireless sensor networks. The chosen scenario has been parameter estimation, where the selected sensors have to guarantee a prescribed network performance based on the MSE-rate.
Numerical results showed the sparsity of the solution, which translates into a smart use of the network resources. The proposed algorithms provide a consistent solution to the selection problem by always finding a connected path among the selected set of sensors, relays and APs, no matter the size of the network and the dimension of the parameter to estimate. This ensures that the selected sensors comply with the network performance constraint.
Future work will consider the study of these algorithms from a decentralized point of view, eliminating the need for an AP that collects all the measurements. The application of these algorithms to other scenarios is also a matter of further study.
\section{Problem Formulation}
\label{ProblemFormulation}
\textbf{High-level problem description. }
In this paper, we are facing the problem of consistently selecting the smallest subset of sensors and links out of all available ones such that a certain performance measure and network connectivity (which ensures a path from the active sensors to the APs) is guaranteed. The motivation behind selecting a low number of sensors (and subsequently, an appropriate reduced amount of links) comes from the need of minimizing the economical and communication costs in wireless sensor networks. Clearly, this saving should not jeopardize the performance or the network connectivity. Communication between the active sensors and the APs as well as a network performance must be guaranteed.
\vskip2mm
We consider a static wireless sensor network composed of $J$ sensor nodes and $K$ access points (APs) or sinks. At this point, we don not consider any relays yet.
We denote with $\mathcal{V}=\{1,2,..., J, J+1, ..., J+K \}$ the set of sensors and access points, where $i\in\mathcal{V}_{\textrm{s}} = \{1,...,J\}$ are the indexes corresponding to the sensor nodes and $i\in\mathcal{V}_{\textrm{AP}}=\{J+1, ..., J+K\}$ are the indexes corresponding to the APs. The network topology is determined by the physical locations of the sensors and APs, collected in the stacked vector $\mathbold{x}=[\mathbold{x}_1^\mathsf{T},...,\mathbold{x}_J^\mathsf{T}, \mathbold{x}_{J+1}^\mathsf{T},\dots,\mathbold{x}_{J+K}^\mathsf{T} ]^\mathsf{T}$, where the vector $\mathbold{x}_i$ indicates the position of sensor or AP $i$.
\subsection{Communication Network}
Sensors need to communicate their measurement to the APs in a multi-hop fashion (due to energy/power constraints). An important feature of this paper is that we can only use active sensors to transmit messages. We model the communication quality among sensors and APs using a link reliability metric, denoted as $R_{ip} := R(\|\mathbold{x}_i-\mathbold{x}_p\|)$, which represents the probability that sensor $p$ (if $p \leq J$) or an AP (otherwise) receives successfully a message sent from sensor $i$. We model this probability as a smooth non-increasing function with compact support, and in particular, $R(0) = 1$ and $R(d) = 0$ for all $d\geq \bar{d}$, for a predefined cut-off distance $\bar{d}$.
The link reliability metric induces a specific undirected communication graph on the wireless sensor network: whenever $R_{ip}$ is nonzero, there is a possible link between sensor $i$ and sensor or AP $p$. We describe this communication graph in terms of the edge set $\mathcal{E}$, given by $\mathcal{E} = \{ (i,p), i \in \mathcal{V}_{\textrm{s}}, p \in \mathcal{V} | i\neq p, R_{ip} > 0\}$, and we denote the graph as $\mathcal{G} = (\mathcal{V}, \mathcal{E})$.
\subsection{Sensing}
Sensors take measurements of a parameter $\mathbold{\theta} \in \mathbf{R}^m$, $m \ll J$, according to the linear measurement model,
\begin{equation}
y_i = \a_i^\mathsf{T} \mathbold{\theta} + n_i,\quad i\in\mathcal{V}_{\textrm{s}},
\end{equation}
where the vectors $\a_i\in\mathbf{R}^m$ represent the regressors, while $n_i$ is a Gaussian noise term with zero mean and variance $\sigma_i^2$.
Sensor $i$ acquires measurements $y_i$ at a rate $r_i \bar{r}_{i}$ (we assume that the maximum relative rate $\bar{r}_{i}$ is known and fixed, while the relative rate $r_i \in [0,1]$ is a design parameter). If $r_i=0$, the node does not take any measurements and is not active.
As we mentioned, the collected measurements need to be communicated back to the APs in a multi-hop fashion. The APs are in charge of combining the measurements $y_i$, coming from different sensors at different rates, to estimate the value of the parameter $\mathbold{\theta}$. The \emph{quality} of the estimate can be evaluated a priori based on which sensors are measuring (more specifically their regressors $\a_i$ and noise variances $\sigma_i^2$) and their rates. Examples of such quality metrics are rate versions of the mean square error (MSE), the worst case error variance, or the volume of the confidence ellipsoid~\cite{Joshi09}. For instance, if we select the MSE-rate as quality metric and assume that the noise experienced at different sensors is uncorrelated, then we would have
\begin{equation}
f(\r) := \mathrm{tr}\Big(\sum_{i\in\mathcal{V}_{\textrm{s}}} r_i \bar{r}_{i} \a_i \a_i^\mathsf{T} /\sigma_i^2\Big)^{-1}\!\!\!,
\label{CostFunction}
\end{equation}
where we have collected the relative rates in $\r = [r_1, \dots, r_J]^\mathsf{T}$. Note that if a sensor is not active, its relative rate is zero. The higher the value of $f(\r)$, the worse the quality of the estimate, and vice versa. {Other types of functions $f(\r)$ can be found in~\cite{Joshi09, Jamali-Rad15}, both for uncorrelated and correlated noise}. In order to keep the presentation as general as possible, we will not specify which quality metric we select: we will simply write the metric as the function $f(\r)$.
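As a purely illustrative aid (not part of the original formulation), the MSE-rate metric \eqref{CostFunction} can be evaluated numerically as follows; the Python function name and the random toy data are our own assumptions.
\begin{verbatim}
import numpy as np

def mse_rate(r, r_bar, A, sigma2):
    """MSE-rate metric: trace of the inverse rate-weighted information matrix."""
    J, m = A.shape
    M = sum(r[i] * r_bar[i] * np.outer(A[i], A[i]) / sigma2[i] for i in range(J))
    return np.trace(np.linalg.inv(M))

# toy data (ours): J = 10 sensors, m = 2 dimensional parameter
rng = np.random.default_rng(0)
J, m = 10, 2
A = rng.normal(size=(J, m))        # regressors a_i as rows
sigma2 = np.ones(J)                # noise variances
r_bar = 0.7 * np.ones(J)           # maximum relative rates
r = rng.uniform(size=J)            # candidate relative rates (design variables)
print(mse_rate(r, r_bar, A, sigma2))
\end{verbatim}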
\subsection{Connectivity Modeling}
Before formalizing the problem mathematically, we need to introduce how we model the communication links and the induced connectivity. In this paper, we take a stochastic point of view and adopt the stochastic routing framework of \cite{Zavlanos13}.
In our multi-hop wireless network, messages are routed stochastically, i.e., sensor nodes select a neighbor, either a sensor or an AP, to forward the message according to a certain probability. A set of variables $T_{ip} \in [0,1]$ denotes the probability that node $i$ selects node $p$, either a sensor or an AP, as the destination of the transmitted messages. In this sense, the variable $T_{ip}$ can be seen as the probability that node $i$ selects the link that joins nodes $i$ and $p$. The matrix $\mathbold{T}$, of size $J \times (J+K)$, gathers all these probability values. Further, the matrix $\mathbold{T}$ needs to satisfy a certain number of constraints. First, if either one of the sensors $i$ or $p$ is not active, then $T_{ip}$ must be zero: this models the fact that if a sensor is not active then it cannot send or receive messages. This can be formulated as
\begin{equation}\label{eq.Tminw}
T_{ip} = 0 \quad \textrm{iff}\quad r_i r_p = 0, \quad i\in\mathcal{V}_{\textrm{s}}, p \in \mathcal{V},
\end{equation}
since $T_{ip}$ will be nonzero if and only if both $r_i$ and $r_p$ are nonzero, meaning that the sensors are active (we can fix $r_p = 1$ for APs, without loss of generality). Second, since we are dealing with link probability values, the sum of all link probabilities of an active sensor should be at most $1$:
\begin{equation}\label{eq.prob1}
\sum_{p\in\mathcal{V}} T_{ip} \leq 1, \quad i\in\mathcal{V}_{\textrm{s}}.
\end{equation}
We notice that we can use $i\in\mathcal{V}_{\textrm{s}}$ in the condition~\eqref{eq.prob1}, since non-active sensors have $T_{ip} = 0$ due to condition~\eqref{eq.Tminw}, and therefore \eqref{eq.prob1} is automatically satisfied.
To complete the formulation, we want to ensure the delivery of messages to the APs, which is achieved by guaranteeing network connectivity among the active sensors and APs. To that aim, let $R_0$ be the transmission rate of the sensors. Then the effective transmission rate over the active link between nodes $i$ and $p$ is $R_0 R_{ip}$ (recall that $R_{ip} := R(\|\mathbold{x}_i-\mathbold{x}_p\|)$ is the link reliability between sensors or APs). We assume that all sensors have the same transmission rate $R_0$, which is an easy-to-lift constraint, and we consider normalized rates by setting $R_0=1$.
Each sensor stores messages in a queue between their generation or arrival from other sensors and their transmission. An active sensor $i$, apart from generating messages locally at rate $r_i \bar{r}_i$, also receives messages from other sensors $p$ over active links at rate $T_{pi}R_{pi}$. Thus, the aggregate rate at which messages arrive at sensor node $i$ is
\begin{equation}
\label{r_in}
r_i^{\textrm{in}}= r_i \bar{r}_i + \sum_{p\in\mathcal{V}_{\mathrm{s}}} T_{pi}R_{pi}.
\end{equation}
In a similar way, the rate at which sensor $i$ sends messages to other nodes $p$, which may be sensors or APs, is given by
\begin{equation}
\label{r_out}
r_i^{\textrm{out}} = \sum_{p\in\mathcal{V}}T_{ip}R_{ip}.
\end{equation}
If we require that the average rate at which messages leave the sensor's queue is at least as high as the rate at which messages arrive, i.e., $r_i^{\textrm{out}} \geq r_i^{\textrm{in}}$, or equivalently,
\begin{equation}\label{eq.flow}
r_i \bar{r}_i + \sum_{p\in\mathcal{V}_{\mathrm{s}}} T_{pi}R_{pi} \leq \sum_{p\in\mathcal{V}}T_{ip}R_{ip}, \quad i \in \mathcal{V}_{\textrm{s}},
\end{equation}
then the queue empties infinitely often with probability one and there is an almost sure guarantee that the messages are delivered to the AP \cite{Zavlanos13} ({a formal statement of this fact will be given in the following}).
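The flow condition \eqref{eq.flow} is easy to check numerically for a candidate pair $(\r,\mathbold{T})$. The following Python sketch is ours (it assumes that $\mathbold{T}$ and the reliabilities are stored as $J\times(J+K)$ arrays with the sensor columns first) and is a direct transcription of the inequality.
\begin{verbatim}
import numpy as np

def flow_feasible(r, r_bar, T, R, tol=1e-9):
    """Check the flow constraints: for every sensor i, locally generated
    traffic plus incoming relayed traffic must not exceed outgoing traffic."""
    J = len(r)                                    # sensors occupy the first J columns
    r_in = r * r_bar + (T[:, :J] * R[:, :J]).sum(axis=0)   # arrivals at each sensor
    r_out = (T * R).sum(axis=1)                             # departures from each sensor
    return bool(np.all(r_in <= r_out + tol))
\end{verbatim}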
\textbf{Problem statement}: Given the measurement model for the different sensors and a prescribed performance measure value $\gamma>0$, we want to find the relative rates $\r \in [0,1]^J$ that select the minimum subset of sensors, and the probabilistic routing matrix $\mathbold{T} \in [0,1]^{J \times (J+K)}$ that selects the minimum subset of links, so that the performance requirement $f(\r) \leq \gamma$ is satisfied and the messages are delivered to the APs. This can be stated as
\begin{subequations}
\label{Problem.nonconvex}
\begin{align}
& \underset{\r,\mathbold{T}}{\text{minimize}}
& & \alpha_1 \|\r\|_0 + \alpha_2 \| \mathbold{T} \|_0 \\
& \text{subject to} & & r_i \in [0,1],\, T_{ip} \in [0,1], \quad i\in\mathcal{V}_{\textrm{s}}, \, p \in\mathcal{V} \\
&
&& \eqref{eq.Tminw}, \eqref{eq.prob1}, \eqref{eq.flow} \\
&
&& f(\r) \leq \gamma,
\end{align}
\end{subequations}
where the non-negative scalars $\alpha_1$ and $\alpha_2$ weight the relative importance of the sensors and the links. If $\alpha_1 = 0$, the problem reduces to link selection with stochastic routing, while for $\alpha_2 = 0$ it reduces to sensor selection. We denote by $(\r^*, \mathbold{T}^*)$ any optimal pair obtained by solving problem~\eqref{Problem.nonconvex}.
We can readily notice that \eqref{Problem.nonconvex} is a nonconvex program, which makes finding an optimal pair $(\r^*, \mathbold{T}^*)$ computationally expensive in practice. In this paper, we are interested in finding an approximate solution of~\eqref{Problem.nonconvex} via a suitable convex relaxation.
\section{Sensor and Relay Selection}
\label{Relays}
When dealing with wireless sensor networks deployed over large areas, it is often useful to employ relays to facilitate the transmission of measurements back to the APs. In this spirit, we also consider the possible presence of relays. In particular, from here on, all the nodes deployed in the sensor network may act as sensors or as relays. Note that sensors can also act as relays while sensing, as discussed in the previous section. Our goal is to consistently determine which of the nodes, placed at well-defined positions, should play the role of sensors and which ones the role of relays, while guaranteeing both a prescribed network performance and connectivity in the selected subgraph. Notice that relays have lower energy requirements than sensors, and therefore the distinction between sensors and relays is beneficial to further reduce the overall energy consumption. Notice also that, as discussed in the introduction, the proposed solution may be re-applied over time, assigning different roles at different times.
In order to model the possibility for a node to act as a sensor or as a relay, we introduce a new Boolean variable $\mathbold{\nu} \in \{0,1\}^{J+K}$, and we define a node $i \in \mathcal{V}_{\textrm{s}}$, a sensor or relay, to be on if $\nu_i = 1$ and off otherwise ($\nu_p =1$ for APs). Among the nodes that are on, those whose $r_i$ is strictly positive act as sensors, while the others act as relays.
We also reformulate the constraints accordingly. The constraint~\eqref{eq.Tminw} gets reformulated as
\begin{equation}\label{eq.Tminnu}
T_{ip} \leq \min\{\nu_i, \nu_p\}, \quad i\in\mathcal{V}_{\textrm{s}}, p \in \mathcal{V},
\end{equation}
as the relays can exchange information. Notice that the constraint has a simplified form w.r.t.~\eqref{eq.Tminw}, since the variable $\mathbold{\nu}$ is Boolean. In addition, we need a constraint ensuring that a sensor has a positive relative rate only when its node is activated, that is
\begin{equation}\label{eq.wminnu}
r_i \leq \nu_i, \quad i\in\mathcal{V}_{\textrm{s}}.
\end{equation}
Finally, the constraint~\eqref{eq.flow} carries over as it is:
\begin{equation}\label{eq.flowrelay}
r_i \bar{r}_i + \sum_{p\in\mathcal{V}_{\mathrm{s}}} T_{pi}R_{pi} \leq \sum_{p\in\mathcal{V}}T_{ip}R_{ip}, \quad i \in \mathcal{V}_{\textrm{s}}.
\end{equation}
With this in place, the problem we want to solve is how to consistently select minimum rates, relays, and links so as to guarantee a certain network performance and connectivity. We can formulate this as
\begin{subequations}
\label{SensorRelayOptizProblemDeter0}
\begin{align}
& \underset{\r,\mathbold{T},\mathbold{\nu}}{\text{minimize}}
& & \alpha_1 \|\r\|_0 + \alpha_2 \|\mathbold{T}\|_0 + \alpha_3 \|\mathbold{\nu}\|_0 \\
& \text{subject to}& & r_i \in [0,1],\, T_{ip} \in [0,1], \nonumber \\ &&& \nu_i \in \{0,1\}, \quad i\in\mathcal{V}_{\textrm{s}}, \, p \in\mathcal{V} \\
&
&& \eqref{eq.prob1}, \eqref{eq.Tminnu}, \eqref{eq.wminnu}, \eqref{eq.flowrelay}, \\
&
&& f(\r) \leq \gamma.
\end{align}
\end{subequations}
Any possible solution of this problem is indicated by the triplet $(\r^*, \mathbold{T}^*, \mathbold{\nu}^*)$. This problem is a nonconvex mixed-integer program, and therefore finding any triplet $(\r^*, \mathbold{T}^*, \mathbold{\nu}^*)$ would in general be too computationally expensive. As done for the case without relays, we relax the problem to a convex one. In particular, we substitute the $\ell_0$ pseudo-norm with the convex surrogate $\ell_1$ norm, and we let the Boolean vector $\mathbold{\nu}$ become real and live in the set $[0,1]^{J+K}$. With this, we arrive at the convex problem
\begin{subequations}
\label{SensorRelayOptizProblemDeter}
\begin{align}
& \underset{\r,\mathbold{T},\mathbold{\nu}}{\text{minimize}}
& & \alpha_1 \|\r\|_1 + \alpha_2 \|\mathbold{T}\|_1 + \alpha_3 \|\mathbold{\nu}\|_1 \\
& \text{subject to}& & r_i \in [0,1],\, T_{ip} \in [0,1], \nonumber \\ &&& \nu_i \in [0,1], \quad i\in\mathcal{V}_{\textrm{s}}, \, p \in\mathcal{V} \\
&
&& \eqref{eq.prob1}, \eqref{eq.Tminnu}, \eqref{eq.wminnu}, \eqref{eq.flowrelay}, \\
&
&& f(\r) \leq \gamma,
\end{align}
\end{subequations}
whose solution is indicated by $(\hat{\r}, \hat{\mathbold{T}}, \hat{\mathbold{\nu}})$. Once again, the approximate triplet $(\hat{\r}, \hat{\mathbold{T}}, \hat{\mathbold{\nu}})$ is in general going to differ from the sought one $(\r^*, \mathbold{T}^*, \mathbold{\nu}^*)$. An important difference with respect to problem~\eqref{Problem.nonconvex} and its relaxed version is the presence of the Boolean vector $\mathbold{\nu}$: this makes the triplet $(\hat{\r}, \hat{\mathbold{T}}, \hat{\mathbold{\nu}})$ in general infeasible w.r.t. the constraints of the nonconvex problem~\eqref{SensorRelayOptizProblemDeter0} (the reason is that $\hat{\nu}_i$ need not be either $0$ or $1$). In this paper, we project $\hat{\nu}_i$ to $1$ whenever $\hat{\nu}_i > 0$. In this way, the new triplet becomes feasible w.r.t. the constraints of the nonconvex problem~\eqref{SensorRelayOptizProblemDeter0}.
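For concreteness, the relaxed program \eqref{SensorRelayOptizProblemDeter} can be prototyped with an off-the-shelf modeling tool. The following CVXPY sketch is our own illustration and not the authors' implementation: the toy data, the value of $\gamma$, the solver choice and the use of the MSE-rate metric \eqref{CostFunction} as $f(\r)$ are assumptions.
\begin{verbatim}
import cvxpy as cp
import numpy as np

# toy problem data (assumptions, for illustration only)
rng = np.random.default_rng(0)
J, K, m = 8, 1, 2                      # sensors, APs, parameter dimension
gamma = 1.0                            # chosen so that this toy instance is feasible
alpha1 = alpha2 = alpha3 = 1.0
A = rng.normal(size=(J, m))            # regressors a_i
sigma2 = np.ones(J)                    # noise variances
r_bar = 0.7 * np.ones(J)               # maximum relative rates
R = rng.uniform(size=(J, J + K))       # link reliabilities R_ip (sensor columns first)

# decision variables
r = cp.Variable(J, nonneg=True)              # relative rates
T = cp.Variable((J, J + K), nonneg=True)     # routing probabilities
nu = cp.Variable(J + K, nonneg=True)         # relaxed node activation

# f(r): matrix_frac(I, M) = trace(M^{-1}), with M affine in r
M = sum(r[i] * (r_bar[i] / sigma2[i]) * np.outer(A[i], A[i]) for i in range(J))

ones_row = np.ones((1, J + K))
ones_col = np.ones((J, 1))
incoming = cp.sum(cp.multiply(T[:, :J], R[:, :J]), axis=0)   # sum_p T_pi R_pi
outgoing = cp.sum(cp.multiply(T, R), axis=1)                 # sum_p T_ip R_ip

constraints = [
    r <= 1, T <= 1, nu <= 1,
    nu[J:] == 1,                                   # APs are always on
    cp.sum(T, axis=1) <= 1,                        # eq.prob1
    T <= cp.reshape(nu[:J], (J, 1)) @ ones_row,    # eq.Tminnu: T_ip <= nu_i
    T <= ones_col @ cp.reshape(nu, (1, J + K)),    # eq.Tminnu: T_ip <= nu_p
    r <= nu[:J],                                   # eq.wminnu
    cp.multiply(r, r_bar) + incoming <= outgoing,  # eq.flowrelay
    cp.matrix_frac(np.eye(m), M) <= gamma,         # f(r) <= gamma
]

# the l1 norms reduce to plain sums because all variables are nonnegative
objective = cp.Minimize(alpha1 * cp.sum(r) + alpha2 * cp.sum(T) + alpha3 * cp.sum(nu))
prob = cp.Problem(objective, constraints)
prob.solve(solver=cp.SCS)
print(prob.status, prob.value)
\end{verbatim}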
In Algorithm~\ref{alg:SeReLiS}, we summarize the procedure for consistent sparse sensor, relay, and link selection (SSRLS), where we once again use the sparsity-enhancing reweighting procedure.
\begin{algorithm}[t]
\footnotesize
\begin{algorithmic}[1]
\REQUIRE Number of iterations $N$, reweighting tolerance $\epsilon>0$, sensor importance $\alpha_1\geq 0$, link importance $\alpha_2\geq 0$, relay importance $\alpha_3\geq 0$.
\STATE Set the weighting vectors and matrix as $w^0_i = 1$, $v^0_i = 1$, and $W^0_{ip} = 1$ for all $i\in \mathcal{V}_\textrm{s}$ and $p \in \mathcal{V}$
\FOR {$\tau = 0$ to $N-1$}
\STATE Solve the convex program
\begin{align*}
& \underset{\r,\mathbold{T},\mathbold{\nu}}{\text{minimize}}
& & \alpha_1 \|\mathbold{w}^{\tau}\odot\r\|_1 + \alpha_2 \|\mathbold{W}^{\tau}\odot\mathbold{T}\|_1 + \alpha_3 \|\v^{\tau}\odot\mathbold{\nu}\|_1 \\
& \text{subject to}& & r_i \in [0,1],\, T_{ip} \in [0,1], \nonumber \\ &&& \nu_i \in [0,1], \quad i\in\mathcal{V}_{\textrm{s}}, \, p \in\mathcal{V} \\
&
&& \eqref{eq.prob1}, \eqref{eq.Tminnu}, \eqref{eq.wminnu}, \eqref{eq.flowrelay}, \\
&
&& f(\r) \leq \gamma,
\end{align*}
with off-the-shelf interior point methods (e.g., SDPT3~\cite{Toh1999} or SeDuMi~\cite{Sturm1998}). Let the solution be $(\hat{\r}^\tau, \hat{\mathbold{T}}^\tau, \hat{\mathbold{\nu}}^\tau)$.
\STATE Compute the new weights $\mathbold{w}^{\tau+1}$, $\v^{\tau+1}$ and $\mathbold{W}^{\tau+1}$ as
$$
w^{\tau+1}_i = \frac{w^{\tau}_i}{\epsilon + \hat{r}^{\tau}_i}, \quad W^{\tau+1}_{ip} = \frac{W^{\tau}_{ip}}{\epsilon + \hat{T}^{\tau}_{ip}}, \quad v^{\tau+1}_i = \frac{v^{\tau}_i}{\epsilon + \hat{\nu}^{\tau}_i}
$$
\ENDFOR
\STATE Project $\hat{\nu}_i^N$ to $1$, if $\hat{\nu}_i^N> 0$.
\STATE Output the solution triplet $(\hat{\r}^N, \hat{\mathbold{T}}^N, \hat{\mathbold{\nu}}^N)$
\end{algorithmic}
\caption{Sparse Sensor, Relay, and Link Selection}
\label{alg:SeReLiS}
\end{algorithm}
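The two non-standard steps of Algorithm~\ref{alg:SeReLiS}, namely the weight update and the final projection, are simple element-wise operations. The fragment below is a sketch with our own function names; the inner convex solve is omitted (see the CVXPY example above).
\begin{verbatim}
import numpy as np

def reweighting_step(w, W, v, r_hat, T_hat, nu_hat, eps=1e-3):
    """Element-wise weight update of the reweighted l1 scheme: small entries of
    the previous solution get large weights and are pushed toward zero next."""
    return w / (eps + r_hat), W / (eps + T_hat), v / (eps + nu_hat)

def project_nu(nu_hat):
    """Final projection: every strictly positive entry is set to 1 so that the
    triplet becomes feasible for the mixed-integer problem."""
    return (nu_hat > 0).astype(float)
\end{verbatim}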
{{\bf Connectivity guarantees of Algorithm~\ref{alg:SeReLiS}.}
We now formalize the claim that from each active sensor there exists a path (formed by relays and other active sensors) to an AP. The argument we use to prove this claim is the same as the one used to prove the connectivity guarantees of Algorithm~\ref{alg:SeLiS} (where no relays were considered). Consider~\eqref{eq.flowrelay}: this constraint has to hold for each active sensor and active relay (i.e., the nodes for which $r_i = 0$ and $\nu_i>0$), and it reads $0 \leq 0$ for the inactive ones (due to constraints~\eqref{eq.Tminnu}-\eqref{eq.wminnu}). Since it has to hold for all active sensors and relays, the sensors have to send out more rate than they receive (the difference being $r_i \bar{r}_i$), while the relays can send out exactly what they receive. Therefore, first: no active sensor or relay can be a sink. Second: there cannot be loops containing active sensors not connected to a sink, since the rate accumulates along the loop and~\eqref{eq.flowrelay} would not be satisfied for at least one pair of connected active elements (either sensor-sensor, sensor-relay, or relay-relay). Third: there cannot be loops containing only active relays. The reason for the last claim is that, although~\eqref{eq.flowrelay} would be satisfied along the loop for any $T_{ip} = T_{pi}$ for all pairs of active relays $(i,p)$ on the loop, the solution $T_{ip}=0$ is the optimal one given the selected cost function, which induces all the relays in the loop to become inactive. Thus, the only possibility is that eventually each sensor has a path to a sink, and no relays are used without purpose. \hfill $\Box$
}
\section{Numerical Results for sensors and links}
\label{SimulationResults1}
In this section, we assess the performance of the proposed SSLS algorithm in terms of the amount of resources that are used, i.e., the number of both active sensors and links. We also verify the consistency and the subgraph connectivity.
We consider an estimation scenario where sensors are randomly deployed according to a uniform distribution in a square area of side $5$ units. The regression matrix $\mathbold{A}=\left[\a_1, \cdots, \a_J\right]^\mathsf{T} \in \mathbb{R}^{J\times m}$ is drawn from a zero-mean Gaussian distribution with variance $1$.
The noise standard deviation is the same at all sensors, $\sigma_i=1/\sqrt{\text{SNR}}$, where the SNR is set to $0$ dB. We use the cost $f(\r)$ of \eqref{CostFunction} and set the performance parameter $\gamma$ in the constraint $f(\r) \leq \gamma$ to 0.5.
The link reliability metric that we use in the simulations is given by:
\begin{equation}
\label{LinkReliability}
R_{ip}= \left\{
\begin{array}{ll}
1 - \frac{1}{2}({\frac{\parallel \mathbold{x}_i - \mathbold{x}_p \parallel}{d}})^{2\beta} & \mbox{if} ~ 0 \le \parallel \mathbold{x}_i - \mathbold{x}_p \parallel < d \\
\frac{1}{2}(2-\frac{\parallel \mathbold{x}_i - \mathbold{x}_p \parallel}{d})^{2\beta} & \mbox{if} ~ d \le \parallel \mathbold{x}_i - \mathbold{x}_p \parallel < 2d \\
0 & \mbox{otherwise} \\
\end{array} \right.
\end{equation}
with $\beta$ the power attenuation factor ($2 \leqslant \beta\leqslant 6$) and $d$ the communication radius. We have considered $\beta=2$ and $d=1.74$ \cite{Arroyo-Valles07}.
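For reference, the reliability model \eqref{LinkReliability} can be transcribed directly into code; the function name and the example call below are ours.
\begin{verbatim}
import numpy as np

def link_reliability(x_i, x_p, d=1.74, beta=2.0):
    """Piecewise link reliability used in the simulations."""
    dist = np.linalg.norm(np.asarray(x_i) - np.asarray(x_p))
    if dist < d:
        return 1.0 - 0.5 * (dist / d) ** (2 * beta)
    if dist < 2 * d:
        return 0.5 * (2.0 - dist / d) ** (2 * beta)
    return 0.0

print(link_reliability([0.0, 0.0], [1.0, 1.0]))   # two nodes sqrt(2) apart
\end{verbatim}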
The number of iterations in the reweighted $\ell_1$ minimization is empirically set to 30 to trade off sparsity of the solution against computational time.
Due to the application of the reweighted $\ell_1$ minimization mechanism, only the sensors and links with relatively high acquisition rates and link probabilities remain active. We round off to 0 the link probabilities and sensor rates lower than a sufficiently small constant $\delta$, which is set to $\delta = 2\cdot 10^{-4}$. Further, we consider $\alpha_1=\alpha_2=1$. {Notice that rounding the probabilities off to $0$ could incur a loss of connectivity. This is, however, unlikely in practice, since the reweighting procedure makes sure that the non-zero probabilities have values well above the selected threshold $\delta$. The experimental results support this claim, since we have not witnessed any loss of connectivity.}
Fig. \ref{fig:SLSelection} shows an example of a 100-node sensor network with a single AP. The parameter to estimate has dimension $m=2$ and the maximum rate is $\bar{r}=0.7$. Active sensors are colored in green while the AP is in black. The results show the sparsity of the solution, since only a few sensors (4\%) and links (0.072\%) are active. It can also be seen that the selected subgraph is connected and there is always a path between the active sensors and the AP. The solution also satisfies the other constraints of the optimization problem. Fig. \ref{fig:SLSelection_Index} shows the relative rates of the active sensors.
\begin{figure}[tb]
\centering
\psfrag{a}{\small \text{y-axis}}
\psfrag{b}{\small \text{x-axis}}
\includegraphics[scale=1]{./Figures/SLSel_J100_K1_c1_5_c2_1_MSE_05_r_07_n2}
\caption[Network topology with selected sensors and links]{Active links and sensors in a one-AP network for $J=100$ nodes.}
\label{fig:SLSelection}
\end{figure}
\begin{figure}[tb]
\centering
\psfrag{a}{\small \hskip1cm $\hat{r}_i$}
\psfrag{b}{\small \text{Sensor index $i$}}
\includegraphics[scale=1]{./Figures/IndexSLSel_J100_K1_c1_5_c2_1_MSE_05_r_07_n2Mod}
\caption{Relative rates of the active sensors.}
\label{fig:SLSelection_Index}
\end{figure}
Next, our purpose is to show average performance results. Hence, we run $250$ Monte Carlo simulations for each network configuration. The number of deployed sensors, $J$, varies from 30 to 100 and there is one AP. Two values of $\bar{r}$ are considered, 0.4 and 0.7. Simulations are run considering that $\bar{r}$ is identical for all nodes in the sensor network. Also, we consider two values for the dimension of the parameter to estimate, $m=2$ and $m=4$.
Two metrics are considered to assess the performance of the networks; they measure the amount of resources used in the network.
Since we are dealing with acquisition rates, let us first define the total relative acquisition rate of the whole network as the sum of the acquisition rates of the sensors in the network, i.e., $\sum_{i\in\mathcal{V}_{\textrm{s}}} \hat{r}_i^N$. In order to make the performance measurement independent of the number of sensors in the network, we define the percentage of the total relative acquisition rate of the whole network, $P_{\textrm{trr}}$, as
\begin{equation}
P_{\textrm{trr}}=\frac{\sum_{i\in\mathcal{V}_{\textrm{s}}} \hat{r}_i^N}{J} \cdot 100.
\end{equation}
Recall that the relative acquisition rate $\hat{r}_i^N \in [0,1]$. Note that only the active sensors contribute to the sum since their acquisition rate is different from 0, so this measure gives us information about the percentage of active sensors.
Considering that $\sum_{p \in \mathcal{V}} \hat{T}_{ip}^N \leq 1$ for $i\in\mathcal{V}_{\textrm{s}}$, we next define the percentage of the aggregate network link probability, $P_{\textrm{alp}}$, as
\begin{equation}
P_{\textrm{alp}}=\frac{\sum_{i\in\mathcal{V}_{\textrm{s}}} \sum_{p \in \mathcal{V}} \hat{T}_{ip}^N}{J} \cdot 100.
\end{equation}
Note that only active sensors and APs contribute to the sum, since the remaining link probabilities are 0. In this case, the metric is related to the percentage of active links in the network.
In both cases, the lower the metrics are, the fewer resources (in terms of active sensors and links) are used.
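Both metrics are straightforward to compute from the output of the algorithm; the helper below (names ours) is a direct transcription of the two definitions above.
\begin{verbatim}
import numpy as np

def resource_metrics(r_hat, T_hat):
    """Percentage of total relative acquisition rate (P_trr) and
    percentage of aggregate network link probability (P_alp)."""
    J = len(r_hat)
    p_trr = 100.0 * np.sum(r_hat) / J
    p_alp = 100.0 * np.sum(T_hat) / J
    return p_trr, p_alp
\end{verbatim}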
\begin{figure}[t]
\centering
\psfrag{a}{\small \hskip1cm $P_{\textrm{trr}}, P_{\textrm{alp}}$}
\psfrag{b}{\small \text{Number of Sensor Nodes}}
\psfrag{lg1}{\small \text{sensors} $\bar{r}=0.4$}
\psfrag{lg2}{\small \text{links} $\bar{r}=0.4$}
\psfrag{lg3}{\small \text{sensors} $\bar{r}=0.7$}
\psfrag{lg4}{\small \text{links} $\bar{r}=0.7$}
\includegraphics[scale=1]{./Figures/AveragePercRates_SLSel_K1_c1_5_c2_1_MSE_05_r_04_r_07_n2_newStd}
\caption{Average performance and its standard deviation for $m=2$, for different numbers of sensors and values of $\bar{r}$.}
\label{fig:SLSelectionAvg_n2}
\end{figure}
\begin{figure}[t]
\centering
\psfrag{a}{\small \hskip1cm $P_{\textrm{trr}}, P_{\textrm{alp}}$}
\psfrag{b}{\small \text{Number of Sensor Nodes}}
\psfrag{lg1}{\small \text{sensors} $\bar{r}=0.4$}
\psfrag{lg2}{\small \text{links} $\bar{r}=0.4$}
\psfrag{lg3}{\small \text{sensors} $\bar{r}=0.7$}
\psfrag{lg4}{\small \text{links} $\bar{r}=0.7$}
\includegraphics[scale=1]{./Figures/AveragePercRates_SLSel_K1_c1_5_c2_1_MSE_05_r_04_r_07_n4_newStd}
\caption{Average performance and its standard deviation for $m=4$, for different numbers of sensors and values of $\bar{r}$.}
\label{fig:SLSelectionAvg_n4}
\end{figure}
\begin{figure}[t]
\centering
\psfrag{a}{\small \hskip-.8cm Percentage of active sensors / links}
\psfrag{b}{\small \text{Number of Sensor Nodes}}
\psfrag{lg1}{\small \text{sensors} $\bar{r}=0.4$}
\psfrag{lg2}{\small \text{links} $\bar{r}=0.4$}
\psfrag{lg3}{\small \text{sensors} $\bar{r}=0.7$}
\psfrag{lg4}{\small \text{links} $\bar{r}=0.7$}
\includegraphics[scale=1]{./Figures/AveragePercSenLinks_SLSel_K1_c1_5_c2_1_MSE_05_r_04_r_07_n2_newStd}
\caption{Average percentage of active sensors and links and its standard deviation for $m=2$, for different numbers of sensors and values of $\bar{r}$.}
\label{fig:SLSelectionAvg_n2_SLPerc}
\end{figure}
\begin{figure}[h!]
\centering
\psfrag{a}{\small \hskip-.8cm Percentage of active sensors / links}
\psfrag{b}{\small \text{Number of Sensor Nodes}}
\psfrag{lg1}{\small \text{sensors} $\bar{r}=0.4$}
\psfrag{lg2}{\small \text{links} $\bar{r}=0.4$}
\psfrag{lg3}{\small \text{sensors} $\bar{r}=0.7$}
\psfrag{lg4}{\small \text{links} $\bar{r}=0.7$}
\includegraphics[scale=1]{./Figures/AveragePercSenLinks_SLSel_K1_c1_5_c2_1_MSE_05_r_04_r_07_n4_newStd}
\caption{Average percentage of active sensors and links and its standard deviation for $m=4$, for different numbers of sensors and values of $\bar{r}$.}
\label{fig:SLSelectionAvg_n4_SLPerc}
\end{figure}
Fig. \ref{fig:SLSelectionAvg_n2} and Fig. \ref{fig:SLSelectionAvg_n4} show the average performance and the standard deviation for $m=2$ and $m=4$, respectively, for different amounts of deployed sensors and the two values of $\bar{r}$. Even for the worst case scenario, i.e., for $\bar{r}=0.4$ and a $30$-node network, the $P_{\textrm{trr}}$ and $P_{\textrm{alp}}$ values are $8$\% and $5.5$\% for $m=2$ and $22$\% and $17$\% for $m=4$, respectively (which represents a small percentage of used resources).
In order to verify whether those metric values correspond to a small number of active sensors with high relative rates or to a large number of active sensors with low relative rates, Fig. \ref{fig:SLSelectionAvg_n2_SLPerc} and Fig. \ref{fig:SLSelectionAvg_n4_SLPerc} illustrate the average percentage of active sensors and links. For networks of $30$ nodes and $\bar{r}=0.4$, the average percentage of active sensors and links is $12$\% (i.e., $3.6$ sensors) and $0.55$\% for $m=2$, and $30$\% (i.e., $9$ sensors) and $1.7$\% for $m=4$, respectively. Thus, this result corroborates that the amount of used resources is small: there is a low percentage of active sensors, which have high relative rates, since the $P_{\textrm{trr}}$ values are the highest in Fig. \ref{fig:SLSelectionAvg_n2} and Fig. \ref{fig:SLSelectionAvg_n4}.
\subsection{Case $\bar{r}=0.7$ (yellow and red bars)}
From Fig. \ref{fig:SLSelectionAvg_n2} and Fig. \ref{fig:SLSelectionAvg_n4} it can be seen that the values of the two metrics decrease as the number of sensors increases (regardless of $m$), reaching values lower than those of the $30$-node network.
Let us examine the scenario with $m=2$ (analogous conclusions hold for networks with $m=4$). If we also analyze the trend in the percentage of active sensors (Fig. \ref{fig:SLSelectionAvg_n2_SLPerc}), it first decreases and then increases slightly starting from networks of $80$ nodes.
Even though networks with $80$ to $100$ nodes have between $6\%$ and $8\%$ of active nodes (i.e., about $7$ sensors), those networks activate slightly more resources in absolute terms than networks of $30$ nodes ($10\%$, approximately $3$ nodes). However, in general, the total number of active sensors stays low in comparison to the total number of sensors, which corroborates the sparsity of the solution.
\subsection{Case $\bar{r}=0.4$ (dark and light blue bars)}
Let us analyze the behavior of the metrics for $\bar{r}=0.4$ and $m=2$ (analogous conclusions hold for networks with $m=4$). In Fig. \ref{fig:SLSelectionAvg_n2}, the $P_{\textrm{trr}}$ values decrease from $7.5$\% for networks of $30$ nodes to $2.7$\% for networks of $70$ nodes, and from that point increase slightly up to a value of $3.2$\% for networks of $100$ nodes. If we now look at Fig. \ref{fig:SLSelectionAvg_n2_SLPerc}, the percentage of active sensors goes from $11.8$\% in $30$-node networks (i.e., $3.5$ sensors) to $8$\% in $50$-node networks (i.e., $4$ sensors) and later increases until reaching a value of $22$\% in $90$-node networks (i.e., $20$ nodes).
While the number of active nodes is similar in networks with a low number of sensors ($30$ to $50$ nodes), it increases slightly for denser networks ($60$ to $100$ nodes). In this latter case, sensors are closer to each other, so the reliability values among sensors are similar and more sensors may be activated. First, although not reported here, we have observed that increasing the number of reweighting iterations does help in this latter case in reducing the number of active sensors, at the cost of an increased computational time. Second, we will see that this is not an issue when relays are considered.
As for the percentage of active links, the values are below $0.7$\% for $m=2$ and $2$\% for $m=4$ for all network sizes. Hence, the networks are sparse in the number of active links.
\vskip5mm
From the figures, it can be appreciated that, for a given number of nodes, the percentage of used resources is lower when estimating a parameter of dimension 2 than one of dimension 4 (compare the metric values as well as the percentages of active sensors and links). Furthermore, the percentage of used resources (active sensors and links) is lower when $\bar{r}=0.7$ is considered.
Subgraph connectivity and consistency have also been checked for every run. All the activated sensors have a path to the AP. Furthermore, connectivity of the network in the sense of \eqref{eq.flow} is guaranteed.
\section{Numerical Results with Relays}
\label{SimulationResults2}
Similarly to the sensor and link selection scenario, in this section we assess the performance of the new SSRLS algorithm in terms of the amount of resources that are used. We also verify the consistency and the subgraph connectivity.
Once again, we consider an estimation scenario, where the parameters are the same\footnote{In particular, the number of iterations in the reweighted $\ell_1$ minimization is set to 30 while $\delta = 2\cdot 10^{-4}$.} as those used in the sensor and link selection case (Section \ref{SimulationResults1}). We consider $\alpha_1=\alpha_2=\alpha_3=1$.
Fig. \ref{fig:SRLSelectionSc1} shows an example of a 50-node sensor network with two APs, $m=4$ and $\bar{r}=0.4$. The active sensors are colored in green, the active relays in blue and the APs in black. Looking at the figure, it is evident that the obtained solution is sparse: of the 50 nodes (excluding the APs), 5 are selected as sensors and 3 as relays. The amount of active links (i.e., those with a probability value higher than 0) is $8$\%. Observe the connectivity of the selected subgraph: there is a path from each active sensor to the APs via the relays, and messages are routed stochastically according to the link probabilities. The solution also satisfies the other constraints of the optimization problem.
\begin{figure}[tb]
\centering
\psfrag{a}{\small \text{y-axis}}
\psfrag{b}{\small \text{x-axis}}
\includegraphics[scale=1]{./Figures/SLSel_J50_K2_c1_5_c2_1_c4_1_MSE_05_r_04_n4_rl2}
\caption{Selected sensors, relays and links in a two-AP 50-sensor network where $m=4$. }
\label{fig:SRLSelectionSc1}
\end{figure}
Next, following an analysis parallel to the one carried out for the sensor and link selection scenario, we show average performance results. The number of sensors varies from 30 to 100, $\bar{r}$ is 0.4 or 0.7, and $m$ is either 2 or 4. 250 Monte Carlo simulations are run for each network configuration. The metrics used to assess the network performance are the ones introduced in Section \ref{SimulationResults1}. To check the sparsity in the number of relays, we also evaluate the percentage of active relays in the network.
Fig. \ref{fig:SLRelSelectionAvg_n2} and Fig. \ref{fig:SLRelSelectionAvg_n4} show the average performance and the standard deviation, for $m=2$ and $m=4$, respectively, for different numbers of deployed sensor nodes and the two values of the maximum acquisition rate.
{From these figures it can be seen that the $P_{\textrm{trr}}$ and $P_{\textrm{alp}}$ values decrease when the number of sensor nodes increases, going from a value of 6.3$\%$ ($m=2$, $\bar{r}=0.4$) or 19$\%$ ($m=4$, $\bar{r}=0.4$) for networks of $30$ nodes to values lower than 2$\%$ ($m=2$, $\bar{r}=0.4$) or 5$\%$ ($m=4$, $\bar{r}=0.4$) for networks of $100$ nodes.
To verify whether those metric values correspond to the activation of a few sensors with high relative rate values, Fig. \ref{fig:SLRelSelectionAvg_n2_SLPerc} and Fig. \ref{fig:SLRelSelectionAvg_n4_SLPerc} illustrate the average percentage of active sensors and links. For $\bar{r}=0.4$, the average percentage of active sensors goes from approximately $8\%$ (i.e., $2.4$ sensors for $m=2$) or $22\%$ (i.e., $6.5$ sensors for $m=4$) in 30-node networks to around $2\%$ (i.e., $2$ sensors for $m=2$) or $5\%$ (i.e., $5$ sensors for $m=4$) in 100-node networks, respectively.
Fig. \ref{fig:SLRelSelection_AvgRel} also illustrates the percentage of active relays for sensor networks of different sizes, $\bar{r}$ and $m$. For $m=4$ and $\bar{r}=0.4$, $2$ relays (or 5.5$\%$ of the nodes) are active in 30-sensor networks, while $1$ relay ($0.9\%$) is active in 100-sensor networks.
The conclusions from these figures are two-fold:
First, independently of the total number of available nodes, the algorithm robustly selects a similar number of sensors, relays and links to satisfy the constraint on the estimation error. This strongly suggests that, given the constraints, the sensing resources are allocated close to optimally.
Second, the active sensors are those with high relative rates. Moreover, we obtain sparse solutions not only in the number of active sensors but also in the number of active links and relays.
As observed for the sensor and link selection scenario, the demand for resources (percentage of active sensors, links and relays) is lower when higher maximum rates are considered (see Fig. \ref{fig:SLRelSelectionAvg_n2_SLPerc}, Fig. \ref{fig:SLRelSelectionAvg_n4_SLPerc} and Fig. \ref{fig:SLRelSelection_AvgRel}). Also, for a given number of nodes, the amount of used resources grows when the dimension of the parameter to estimate increases.
Besides, the variability in the results of the sensor, relay and link selection problem is lower than in the sensor and link selection problem, as can be observed from the standard deviations in the figures. All in all, it appears that when the presence of relays is also considered, one obtains better performance in terms of reduced active resources than in the case without relays. A more in-depth characterization is left for future research.}
\begin{figure}
\centering
\psfrag{a}{\small $P_{\textrm{trr}}, P_{\textrm{alp}}$}
\psfrag{b}{\small \text{Number of Sensor Nodes}}
\psfrag{lg1}{\small \text{sensors} $\bar{r}=0.4$}
\psfrag{lg2}{\small \text{links} $\bar{r}=0.4$}
\psfrag{lg3}{\small \text{sensors} $\bar{r}=0.7$}
\psfrag{lg4}{\small \text{links} $\bar{r}=0.7$}
\includegraphics[scale=1]{./Figures/AveragePercRates_SLSelRelays_K1_c1_5_c2_1_MSE_05_r_04_r_07_n2_newStd}
\caption{Average performance and its standard deviation for $m=2$, for different numbers of sensors and values of $\bar{r}$.}
\label{fig:SLRelSelectionAvg_n2}
\end{figure}
\begin{figure}
\centering
\psfrag{a}{\small $P_{\textrm{trr}}, P_{\textrm{alp}}$}
\psfrag{b}{\small \text{Number of Sensor Nodes}}
\psfrag{lg1}{\small \text{sensors} $\bar{r}=0.4$}
\psfrag{lg2}{\small \text{links} $\bar{r}=0.4$}
\psfrag{lg3}{\small \text{sensors} $\bar{r}=0.7$}
\psfrag{lg4}{\small \text{links} $\bar{r}=0.7$}
\includegraphics[scale=1]{./Figures/AveragePercRates_SLSelRelays_K1_c1_5_c2_1_MSE_05_r_04_r_07_n4_newStd}
\caption{Average performance and its standard deviation for $m=4$, for different numbers of sensors and values of $\bar{r}$.}
\label{fig:SLRelSelectionAvg_n4}
\end{figure}
\begin{figure}
\centering
\psfrag{a}{\small \hskip-1cm Percentage of active sensors / links}
\psfrag{b}{\small \text{Number of Sensor Nodes}}
\psfrag{lg1}{\small \text{sensors} $\bar{r}=0.4$}
\psfrag{lg2}{\small \text{links} $\bar{r}=0.4$}
\psfrag{lg3}{\small \text{sensors} $\bar{r}=0.7$}
\psfrag{lg4}{\small \text{links} $\bar{r}=0.7$}
\includegraphics[scale=1]{./Figures/AveragePercSenLinks_SLSelRelays_K1_c1_5_c2_1_MSE_05_r_04_r_07_n2_newStd}
\caption{Average percentage of active sensors and links and its standard deviation for $m=2$, for different numbers of sensors and values of $\bar{r}$.}
\label{fig:SLRelSelectionAvg_n2_SLPerc}
\end{figure}
\begin{figure}
\centering
\psfrag{a}{\small \hskip-1.5cm Percentage of active sensors / links}
\psfrag{b}{\small \text{Number of Sensor Nodes}}
\psfrag{lg1}{\small \text{sensors} $\bar{r}=0.4$}
\psfrag{lg2}{\small \text{links} $\bar{r}=0.4$}
\psfrag{lg3}{\small \text{sensors} $\bar{r}=0.7$}
\psfrag{lg4}{\small \text{links} $\bar{r}=0.7$}
\includegraphics[scale=1]{./Figures/AveragePercSenLinks_SLSelRelays_K1_c1_5_c2_1_MSE_05_r_04_r_07_n4_newStd}
\caption{Average percentage of active sensors and links and its standard deviation for $m=4$, for different numbers of sensors and values of $\bar{r}$.}
\label{fig:SLRelSelectionAvg_n4_SLPerc}
\end{figure}
\begin{figure}
\centering
\psfrag{a}{\small \hskip-.5cm \text{Percentage of active relays}}
\psfrag{b}{\small \text{Number of Sensor Nodes}}
\psfrag{lg1}{\small $\bar{r}=0.4, m=2$}
\psfrag{lg2}{\small $\bar{r}=0.7, m=2$}
\psfrag{lg3}{\small $\bar{r}=0.4, m=4$}
\psfrag{lg4}{\small $\bar{r}=0.7, m=4$}
\includegraphics[scale=1]{./Figures/AveragePercRelays_SLSelRelays_K1_c1_5_c2_1_MSE_05_r_04_r_07_n2_n4_newStd}
\caption{Average percentage of active relays and its standard deviation for different numbers of sensors, $\bar{r}$ and $m$.}
\label{fig:SLRelSelection_AvgRel}
\end{figure}
Therefore, the SSRLS algorithm provides a consistent solution to the sensor and relay selection problem: it always finds a connected path among the active sensors, relays and APs, no matter the size of the network or the dimension $m$ of the parameter to estimate, while satisfying the network performance constraint with the active sensors.
However, the following question may arise: are the active sensors obtained by the SSRLS algorithm the same as those that would be active in a problem which only aims at selecting the minimum number of sensors and their corresponding acquisition rates to satisfy a certain MSE-rate, i.e., the relaxed version of the following problem: $$\underset{\r}{\text{minimize}} \quad \|\r\|_0 \quad \text{subject to} \quad f(\r) \leq \gamma,$$ where the sensors that satisfy $r_i>\delta$ (i.e., have an acquisition rate different from 0) are selected?
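A minimal sketch of this relaxed sensor-only baseline is given below; it is our own transcription (the function name, solver and rounding threshold $\delta$ are assumptions), and it ignores routing and sensor deployment by construction.
\begin{verbatim}
import cvxpy as cp
import numpy as np

def sensor_only_selection(A, sigma2, r_bar, gamma, delta=2e-4):
    """Relaxed sensor-selection baseline: min ||r||_1 s.t. f(r) <= gamma,
    with f(r) the MSE-rate metric; routing and deployment are ignored."""
    J, m = A.shape
    r = cp.Variable(J, nonneg=True)
    M = sum(r[i] * (r_bar[i] / sigma2[i]) * np.outer(A[i], A[i]) for i in range(J))
    constraints = [r <= 1, cp.matrix_frac(np.eye(m), M) <= gamma]
    cp.Problem(cp.Minimize(cp.sum(r)), constraints).solve(solver=cp.SCS)
    return np.flatnonzero(r.value > delta)   # indices of the selected sensors
\end{verbatim}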
\begin{figure}
\centering
{
\psfrag{a}{\small \text{y-axis}}
\psfrag{b}{\small \text{x-axis}}
\includegraphics[scale=1]{./Figures/SLRelSel_J100_K1_c1_5_c2_1_MSE_05_r_07_n4}
}
\caption{Selected sensors, relays and links in a one-AP 100-sensor network where $m=4$ and $\bar{r}=0.7$. }
\label{fig:SRLSel_comp1}
\end{figure}
\begin{figure}
\centering
\psfrag{a}{\small \text{y-axis}}
\psfrag{b}{\small \text{x-axis}}
\includegraphics[scale=1]{./Figures/OnlySensorSel_J100_K1_c1_5_c2_1_MSE_05_r_07_n4}
\caption{Selected sensors in a one-AP 100-sensor network where $m=4$ and $\bar{r}=0.7$. }
\label{fig:SLSel_comp1}
\end{figure}
Clearly, the answer is no. As an example, compare the active sensors in Fig. \ref{fig:SRLSel_comp1} and Fig. \ref{fig:SLSel_comp1}, for a one-AP 100-sensor network where $m=4$ and $\bar{r}=0.7$. The solution provided by the SSRLS algorithm not only takes into account the sensors with the highest acquisition rates (to satisfy the MSE-rate constraint), but also selects in a robust, coherent and consistent way relays and link probabilities such that the active sensors are connected to the APs (it also considers the sensor deployment). On the contrary, the solution provided by the sensor selection problem does not consider the spatial distribution of the sensors, and the only issue that matters is the selection of the sensors with the best acquisition rates. This does not mean, however, that the two solutions share no active sensors: in the previous example, the sensors with indices 26, 49, 74 and 93 are selected in both solutions.
\section{Link selection}
\label{SpecialCasesExtensions}
This last scenario is a particular case of the general one, where we assume that all sensors are active, acquire measurements with a relative rate of at least $r_{i0}$, and communicate with the APs in a multi-hop fashion. The problem that remains is to determine the probabilistic routing matrix $\mathbold{T}$ that selects the minimum subset of links so that a certain constraint is satisfied. In particular, we want to ensure network integrity, defined according to \cite{Zavlanos13} as the ability of the network to support the desired communication rates in a certain network topology.
As in the general scenario, the network needs to satisfy the flow inequality constraint \eqref{eq.flow} in order to guarantee that messages are delivered to the APs. Furthermore, it is also required that sensors communicate their measurements to the APs at a nominal rate of $r_{i0}$ messages per time unit. This means that the relative acquisition rate should satisfy the inequality $r_i \geq r_{i0}$. Thus, we aim at finding the appropriate relative rates $\r \in [0,1]^J$ and the sparse probabilistic routing matrix $\mathbold{T}$.
The network's objective function to be optimized is the social utility of the optimization variables, $U(r_i)$ for the relative rate $r_i$ and $V_{ip}(T_{ip})$ for the link probability $T_{ip}$, defined as $\sum_{i=1}^J U(r_i) + \sum_{i=1}^J \sum_{p=1}^{J+K} V_{ip}(T_{ip})$.
Following \cite{Zavlanos13}, we measure the utility associated with the rate as $U(r_i)= \log(r_i)$, which penalizes small rates $r_i$, and the utility of the links as $V_{ip}(T_{ip})=-T_{ip}^2$.
Then, the optimization problem that we have to solve is given by\footnote{Note that these utilities could also be incorporated in the objective functions of the previous optimization problems. However, for the sake of simplicity, we only consider them in this scenario.}
\begin{subequations}
\label{LinkOptizProblemV0}
\begin{align}
& \underset{\r,\mathbold{T}}{\text{maximize}}
& & \sum_{ i \in \mathcal{V}_{\textrm{s}}} U(r_i) + \sum_{ i \in \mathcal{V}_{\textrm{s}}} \sum_{p\in\mathcal{V}} V_{ip}(T_{ip}) - \alpha \| \mathbold{T} \|_0 \\
& \text{subject to} & & r_i \in [0,1],\, T_{ip} \in [0,1], \quad i\in\mathcal{V}_{\textrm{s}}, \, p \in\mathcal{V} \\
&
&& \eqref{eq.prob1}, \eqref{eq.flow} \\
&
&& r_i \geq r_{i0}, \quad i \in \mathcal{V}_{\textrm{s}},
\end{align}
\end{subequations}
The problem in \eqref{LinkOptizProblemV0} is not convex due to the $\ell_0$-norm in the objective function. Thus, we relax the non-convex term of \eqref{LinkOptizProblemV0} by substituting the $\ell_0$ pseudo-norm with the $\ell_1$ norm. Then, the previous optimization problem is transformed into the following one
\begin{subequations}
\label{LinkOptizProblem}
\begin{align}
& \underset{\r,\mathbold{T}}{\text{maximize}}
& & \sum_{ i \in \mathcal{V}_{\textrm{s}}} U(r_i) + \sum_{ i \in \mathcal{V}_{\textrm{s}}} \sum_{p\in\mathcal{V}} V_{ip}(T_{ip}) - \alpha \| \mathbold{T} \|_1 \\
& \text{subject to} & & r_i \in [0,1],\, T_{ip} \in [0,1], \quad i\in\mathcal{V}_{\textrm{s}}, \, p \in\mathcal{V} \\
&
&& \eqref{eq.prob1}, \eqref{eq.flow} \\
&
&& r_i \geq r_{i0}, \quad i \in \mathcal{V}_{\textrm{s}},
\end{align}
\end{subequations}
The objective function is strictly concave and the constraints are linear inequalities, so the problem can be solved efficiently using convex optimization tools. Note that the optimal utility depends on the spatial configuration of the sensors, and consequently so do the optimal link probabilities and rate variables, which are denoted by $r_{\textbf{x},i}^*$ and $T_{\textbf{x},ip}^*$.
The number of selected links depends on the parameter $\alpha$, which controls the sparsity level (the higher it is, the fewer links are selected). In order to increase sparsity and avoid tuning the parameter $\alpha$, we apply the iterative reweighted $\ell_1$ minimization algorithm only to the third term of the objective function (we call this partial reweighted $\ell_1$ minimization), which diminishes the influence of that parameter and helps in the link selection process. We round off to 0 the link probabilities lower than a sufficiently small constant $\delta$.
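The relaxed problem \eqref{LinkOptizProblem} is directly expressible in a convex modeling tool. The CVXPY sketch below is ours (function name, data layout and solver choice are assumptions); note that $\| \mathbold{T} \|_1$ reduces to the plain sum of the entries because they are nonnegative.
\begin{verbatim}
import cvxpy as cp
import numpy as np

def link_selection(R, r_bar, r_min=0.2, alpha=1.0):
    """Relaxed link selection: maximize sum_i log(r_i) - sum_ip T_ip^2
    - alpha * ||T||_1 subject to the probability and flow constraints."""
    J, N = R.shape                        # N = J + K possible destinations
    r = cp.Variable(J)
    T = cp.Variable((J, N), nonneg=True)
    incoming = cp.sum(cp.multiply(T[:, :J], R[:, :J]), axis=0)
    outgoing = cp.sum(cp.multiply(T, R), axis=1)
    objective = cp.Maximize(cp.sum(cp.log(r)) - cp.sum_squares(T)
                            - alpha * cp.sum(T))
    constraints = [r >= r_min, r <= 1, T <= 1,
                   cp.sum(T, axis=1) <= 1,
                   cp.multiply(r, r_bar) + incoming <= outgoing]
    cp.Problem(objective, constraints).solve(solver=cp.SCS)
    return r.value, T.value
\end{verbatim}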
In the remainder of this section, we show the performance of the link selection scheme. Fig. \ref{fig:LinkSelExam} shows an example of a 50-node sensor network with one AP. Sensors are colored in red and the AP in black. The nominal rate is $r_{i0}=0.2$, identical for all the sensor nodes, and the weighting parameter is $\alpha=1$. The threshold $\delta$ and the rest of the parameters are identical to those defined in Section \ref{SimulationResults1}. The color and the thickness of the links are related to the routing probability values, which are graded into different ranges. Blue links have a probability value between $\delta$ and $0.25$ and are drawn with the thinnest line. The red links have a probability between $0.25$ and $0.5$, the black links between $0.5$ and $0.75$, and the green links between $0.75$ and $1$, drawn with the thickest line.
Note that only $51$ links are active (i.e., the solution is sparse). Every sensor is connected via multiple hops (links that connect the sensors) to the AP, and the links with higher probabilities are always established between the AP and some of its neighboring nodes. This is logical, given that a message that has been routed through multiple sensors should have a higher probability of successfully arriving at the AP. In general, the sensors placed far from the AP tend to establish links with low probability values.
\begin{figure}
\centering
\psfrag{a}{\small \text{y-axis}}
\psfrag{b}{\small \text{x-axis}}
\psfrag{lg1}{\footnotesize \text{sensors}}
\psfrag{lg2}{\footnotesize \text{sink}}
\psfrag{lg3}{\footnotesize \text{$\delta \leq T_{i,j} < 0.25$}}
\psfrag{lg4}{\footnotesize \text{$0.25 \leq T_{i,j} < 0.5$}}
\psfrag{lg5}{\footnotesize \text{$0.5 \leq T_{i,j} < 0.75$}}
\psfrag{lg6}{\footnotesize \text{$0.75 \leq T_{i,j} \leq 1$}}
\includegraphics[scale=1, trim=0cm 0 0 0, clip=on]{./Figures/SLSel_J50_K1_c_1_r_02_ProbCrit025_ReweightDeploy1_seed100_Mod2}
\caption{Selected links in a one-AP 50-node network for $\alpha=1$, using partial reweighted $\ell_1$ minimization. Sensors are colored in red, the AP in black, and the different colors of the links represent the different probability ranges.}
\label{fig:LinkSelExam}
\end{figure}
\begin{figure}
\centering
\psfrag{a}{\small \hskip0cm\text{Percentage of active links}}
\psfrag{b}{\small \text{Number of Sensor Nodes}}
\psfrag{lg1}{\footnotesize \text{Total - $\delta \leq T_{i,j} \leq 1$}}
\psfrag{lg2}{\footnotesize \text{$\delta \leq T_{i,j} < 0.25$}}
\psfrag{lg3}{\footnotesize \text{$0.25 \leq T_{i,j} < 0.5$}}
\psfrag{lg4}{\footnotesize \text{$0.5 \leq T_{i,j} < 0.75$}}
\psfrag{lg5}{\footnotesize \text{$0.75 \leq T_{i,j} \leq 1$}}
\includegraphics[scale=1]{./Figures/AveragePercActiveLink_K1_n2_r02_Std}
\caption{Average percentage of total active links and its standard deviation for different numbers of sensor nodes, with link probabilities graded into different ranges.}
\label{fig:LSelectionAvg_n2}
\end{figure}
Fig. \ref{fig:LSelectionAvg_n2} illustrates the average percentage of active links (the total percentage and the percentage by probability ranges) for sensor networks whose number of sensors varies between 30 and 100, with $r_{i0}=0.2$. 250 Monte Carlo simulations are run for each network configuration. First, the figure shows the low number of active links, so the matrix $\mathbold{T}$ is sparse: for 30-node networks, the average percentage of active links is slightly higher than 5$\%$. The percentage values decrease as the number of nodes in the network increases. Regarding the different probability ranges, the highest percentage of active links corresponds to values of $T_{ij}$ between $\delta$ and 0.25, followed by links with probabilities $T_{ij}$ between 0.75 and 1. As in the earlier example, most of those links are established between the AP and its neighbors, which ensures that messages arrive at the AP.
\section{Conclusions}
In this paper we have proposed two optimization methods for optimally and consistently selecting the minimum set of sensors (and their corresponding sensing rates) and links (and their link probability values), or of sensors, relays and links, in wireless sensor networks. The chosen scenario is parameter estimation, where the selected sensors have to guarantee a prescribed network performance based on the MSE-rate.
Numerical results showed the sparsity of the solution, which translates into an efficient use of the network resources. The proposed algorithms provide a consistent solution to the selection problem by always finding a connected path among the selected sensors, relays and APs, no matter the size of the network or the dimension of the parameter to estimate, while the selected sensors comply with the network performance constraint.
Future work will consider the study of these algorithms from a decentralized point of view, eliminating the need for an AP that collects all the measurements. The application of these algorithms to other scenarios is also left for future study.
| {
"timestamp": "2017-05-08T02:08:38",
"yymm": "1705",
"arxiv_id": "1705.02298",
"language": "en",
"url": "https://arxiv.org/abs/1705.02298",
"abstract": "In wireless sensor networks, where energy is scarce, it is inefficient to have all nodes active because they consume a non-negligible amount of battery. In this paper we consider the problem of jointly selecting sensors, relays and links in a wireless sensor network where the active sensors need to communicate their measurements to one or multiple access points. Information messages are routed stochastically in order to capture the inherent reliability of the broadcast links via multiple hops, where the nodes may be acting as sensors or as relays. We aim at finding optimal sparse solutions where both, the consistency between the selected subset of sensors, relays and links, and the graph connectivity in the selected subnetwork are guaranteed. Furthermore, active nodes should ensure a network performance in a parameter estimation scenario. Two problems are studied: sensor and link selection; and sensor, relay and link selection. To solve such problems, we present tractable optimization formulations and propose two algorithms that satisfy the previous network requirements. We also explore an extension scenario: only link selection. Simulation results show the performance of the algorithms and illustrate how they provide a sparse solution, which not only saves energy but also guarantees the network requirements.",
"subjects": "Information Theory (cs.IT)",
"title": "Consistent Sensor, Relay, and Link Selection in Wireless Sensor Networks",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9546474194456936,
"lm_q2_score": 0.7431680029241321,
"lm_q1q2_score": 0.7094634162061324
} |
https://arxiv.org/abs/1808.07260 | On an improvement of LASSO by scaling | Sparse modeling is a major topic in machine learning and statistics. LASSO (Least Absolute Shrinkage and Selection Operator) is a popular sparse modeling method, but it has been known to yield an unexpectedly large bias, especially at a sparse representation. There have been several studies aimed at improving this problem, such as the introduction of non-convex regularization terms. The important point is that this bias problem directly affects model selection in applications, since a sparse representation cannot be selected by a prediction-error-based model selection criterion even if it is a good representation. In this article, we consider improving this problem by introducing a scaling that expands the LASSO estimator to compensate for the excessive shrinkage, thus the large bias, in the LASSO estimator. We give an empirical value for the amount of scaling. There are two advantages of this scaling method. First, since the proposed scaling value is calculated using the LASSO estimator, we only need the LASSO estimator, which can be obtained by a fast and stable optimization procedure such as LARS (Least Angle Regression) under the LASSO modification or coordinate descent. Second, the simplicity of our scaling method enables us to derive SURE (Stein's Unbiased Risk Estimate) for the modified LASSO estimator with scaling. Our scaling method, together with model selection based on SURE, is fully empirical and does not need additional hyper-parameters. In a simple numerical example, we verify that our scaling method actually improves LASSO and that the SURE-based model selection criterion can stably choose an appropriate sparse model. | \section{}
\section{Introduction}
A sparse modeling is a major topic in machine learning and
statistics. Especially, LASSO (Least Absolute Shrinkage and Selection
Operator) is a popular method that has been extensively
studied\cite{LARS,KF2000,FL2001,LASSO,HZ2006,NM2007,ZHT2007}. LASSO is
an $\ell_1$ penalized least squares method and has a nature of
soft-thresholding that implements thresholding and shrinkage of
coefficients; see \cite{DJ1994,DJ1995}. These two properties are
simultaneously controlled by a single regularization parameter. This
causes excessive shrinkage, thus a large bias, which is directly related to the consistency of model selection by LASSO. This has been pointed out by \cite{LLW2006,FL2001,NM2007}, and several methods have been proposed for solving this problem\cite{FL2001,HZ2006,NM2007,MCP}. \cite{HZ2006} has proposed adaptive LASSO, which employs a weighted $\ell_1$ penalty by which a small penalty is assigned to large coefficient values. \cite{NM2007} has proposed relaxed LASSO to overcome the limitation of controlling thresholding and shrinkage with a single parameter, by introducing an additional parameter. On the other hand, \cite{FL2001} has proposed SCAD (Smoothly Clipped Absolute Deviation), which employs a non-convex penalty instead of the $\ell_1$ penalty. \cite{MCP} has also introduced a different type of non-convex penalty called MCP (minimax concave penalty). The introduction of a non-convex penalty has the effect of suppressing the bias at large estimator values. Since the methods with non-convex penalties are difficult to optimize, solution strategies for them have been investigated; see, e.g., \cite{ZZ2012,LW2015}. \cite{ZZ2012} has shown that a gradient descent started from a LASSO solution yields a local minimum of an objective function with a non-convex penalty, and that this local minimum can be a good solution for the objective function. In \cite{LW2015}, it is shown that local minima of non-convex penalty methods, including SCAD and MCP, are of good quality with respect to the true coefficient values.
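To make the two-fold role of the single regularization parameter concrete, the following small Python example (ours) shows the soft-thresholding operator, which coincides with the LASSO estimate in the orthonormal design case: small coefficients are set to zero (selection), while the surviving ones are all shrunk toward zero by the same amount, which is the source of the bias discussed above.
\begin{verbatim}
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding: entries below lam in magnitude are set to zero
    (selection); the surviving ones are shrunk toward zero by lam (bias)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

z = np.array([3.0, 0.4, -2.5, 0.1])
print(soft_threshold(z, 0.5))    # -> [ 2.5  0.  -2.   0. ]
\end{verbatim}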
In this article, we focus on a model selection problem in applications of sparse modeling. \cite{ZY2006} has shown that LASSO has a consistent model selection property under a certain condition that is, however, known to be somewhat restrictive. On the other hand, under milder conditions than for LASSO, adaptive LASSO, SCAD and MCP enjoy the oracle property, which consists of consistency of model selection and asymptotic normality of the estimators of the non-zero coefficients\cite{FL2001,MCP}. All these results are based on an appropriate setting of the regularization parameter. Therefore, they do not tell us how to choose the regularization parameter in applications. Usually, the choice relies on cross validation; e.g., it is commonly implemented in many software packages. We emphasize that the bias problem of LASSO directly affects the choice of model (regularization parameter) under cross validation. Since the bias is high at a sparse representation in LASSO, a good sparse representation may not be selected by a prediction error based criterion such as the cross validation error; see \cite{LLW2006}. We need to take this point into account, rather than only the improvement of the estimators. This model selection problem is alleviated in cross validation for adaptive LASSO, SCAD and MCP, since the bias problem of LASSO is improved in these methods. However, despite the good quality of SCAD and MCP estimators reported in \cite{ZZ2012,LW2015}, local minima and optimization issues may yield fluctuations of the estimators among the training sets in cross validation. The impact of this fluctuation on the validation error may not be well evaluated. Especially, since these methods need another hyper-parameter for specifying the shape of the penalty term, we need to conduct cross validation with a grid search over two hyper-parameters.
In this article, we consider introducing a scaling of the LASSO estimator, i.e., a scalar multiple of the LASSO estimator. We give an appropriate empirical scaling value which actually improves the excessive shrinkage, thus the large bias, of LASSO. The empirical scaling value has a simple form in terms of the LASSO estimator, i.e., the LASSO estimator is plugged into the scaling value. Therefore, in our method, we only need the LASSO estimator, which can be obtained by a fast and stable method such as LARS (Least Angle Regression)\cite{LARS} under the LASSO modification or coordinate descent\cite{WL2008,FHHT2007}. This is a benefit of our method compared with the other methods, including the non-convex ones. Moreover, the simplicity of our scaling method enables us to derive an analytic model selection criterion, namely a $C_p$-type criterion based on SURE (Stein's Unbiased Risk Estimate). For the naive LASSO, SURE has already been derived in \cite{ZHT2007}, and we apply this result to derive SURE for the LASSO with scaling. However, SURE is not available for adaptive LASSO, relaxed LASSO and SCAD. Although it has been derived for MCP under a specific condition, its effectiveness in applications is not clear; e.g., many software packages that implement MCP employ cross validation. On the other hand, our scaling method is closely related to adaptive LASSO and relaxed LASSO. Adaptive LASSO controls biases componentwise through coefficient-wise weights in the $\ell_1$ regularizer. The weights are calculated based on an initial estimator such as the least squares estimator. Note that we may need ridge estimators as the initial estimator for stable training in applications. The cost function including the weighted $\ell_1$ regularizer can simply be optimized by a modified LARS-LASSO\cite{HZ2006}. On the other hand, in relaxed LASSO, shrinkage and thresholding parameters are introduced separately and are simultaneously optimized by an algorithm based on LARS-LASSO. Relaxed LASSO can thus be viewed as controlling the bias independently of the threshold. From this point of view, in our scaling method, thresholding is achieved by LASSO and the amount of shrinkage is controlled by the scaling value. Although some important asymptotic results have been derived for adaptive LASSO and relaxed LASSO, it may be difficult to derive an analytic solution to model selection for them. On the other hand, as an improvement of adaptive LASSO, multi-step adaptive LASSO has been proposed in \cite{BM2008}; see also \cite{XX2015}. Multi-step adaptive LASSO applies adaptive LASSO at each cycle, in which LASSO estimators are employed as initial estimators in the weights. Multi-step adaptive LASSO is similar to our scaling method since both methods employ the LASSO estimator in the parameters that improve the bias problem of LASSO. Unfortunately, a method of model selection has not been discussed for multi-step adaptive LASSO. In conclusion, we can say that the possibility of deriving SURE is another benefit of our scaling method.
In Section 2, we give a regression framework including LASSO and a
definition of the risk with its Stein's formula. In Section 3, we introduce
a scaling of the LASSO estimator. In particular, we give a reasonable empirical
scaling value and derive a model selection criterion under the given
scaling value. In Section 4, we verify the results of Section 3 through a
simple numerical experiment, which includes comparisons with other
modeling methods such as MCP and adaptive LASSO. Section 5 is
devoted to conclusions and future works.
\section{Regression framework, LASSO and risk}
\subsection{Regression problem and LASSO}
Let ${\boldsymbol{x}}=(x_1,\ldots,x_m)$ and $y$ be explanatory variables and a
response variable, for which we have $n$ samples:
$\{(x_{i,1},\ldots,x_{i,m},y_i):i=1,\ldots,n\}$. We define
${\boldsymbol{x}}_j=(x_{1,j},\ldots,x_{n,j})'\in\mathbb{R}^n$ for $j=1,\ldots,m$, where $'$
stands for the transpose operator. We define ${\bf{X}}=({\boldsymbol{x}}_1,\ldots,{\boldsymbol{x}}_m)$
and ${\boldsymbol{y}}=(y_1,\ldots,y_n)'$. In this article, we assume that $m\le n$
holds and ${\boldsymbol{x}}_1,\ldots,{\boldsymbol{x}}_m$ are linearly independent. Therefore,
${\bf{X}}'{\bf{X}}$ is not singular here. Let $\varepsilon_1,\ldots,\varepsilon_n$
be i.i.d. samples from $N(0,\sigma^2)$; i.e. normal distribution with
mean $0$ and variance $\sigma^2$. Thus, by defining
${\boldsymbol{\varepsilon}}=(\varepsilon_1,\ldots,\varepsilon_n)'$, ${\boldsymbol{\varepsilon}}\sim
N({\bf{0}}_n,\sigma^2{\bf{I}}_n)$, where ${\bf{0}}_n$ is an $n$-dimensional zero vector
and ${\bf{I}}_n$ is an $n\times n$ identity matrix. We assume
${\boldsymbol{y}}={\boldsymbol{\mu}}+{\boldsymbol{\varepsilon}}$. We therefore have ${\boldsymbol{\mu}}=\mathbb{E}{\boldsymbol{y}}$, where $\mathbb{E}$ is
the expectation with respect to the joint probability distribution of
${\boldsymbol{y}}$. We consider a regression problem by ${\bf{X}}{\boldsymbol{b}}$, where
${\boldsymbol{b}}=(b_1,\ldots,b_m)'$ is a coefficient vector. Let
$\widehat{\boldsymbol{b}}=(\widehat{b}_1,\ldots,\widehat{b}_m)'$ be an estimator of ${\boldsymbol{b}}$. LASSO is a method
for obtaining coefficient estimators that minimize the $\ell_1$-regularized
cost function defined by
\begin{equation}
\label{eq:cost-lasso}
C_{\lambda}({\boldsymbol{b}})=\|{\boldsymbol{y}}-{\bf{X}}{\boldsymbol{b}}\|^2+\lambda\|{\boldsymbol{b}}\|_1,
\end{equation}
where $\|\cdot\|$ is the Euclidean norm and
$\|{\boldsymbol{b}}\|_1=\sum_{j=1}^m|b_j|$. $\lambda\ge 0$ is a regularization
parameter. The second term of the right hand side of
(\ref{eq:cost-lasso}) is called $\ell_1$ regularizer. Let
$\widehat{\boldsymbol{b}}_{\lambda}=(\widehat{b}_{1,\lambda},\ldots,\widehat{b}_{m,\lambda})$ be a LASSO
solution. Since LASSO is known to yield a sparse representation
under an appropriate choice of $\lambda$, some of the elements of
$\widehat{\boldsymbol{b}}_{\lambda}$ are exactly zero.
We denote a LASSO output vector by $\widehat{\boldsymbol{\mu}}_{\lambda}=(\widehat{\mu}_{\lambda,1},\ldots,\widehat{\mu}_{\lambda,n})'$
that is given by $\widehat{\boldsymbol{\mu}}_{\lambda}={\bf{X}}\widehat{\boldsymbol{b}}_{\lambda}$.
We define $\widehat{B}_{\lambda}=\{i:\widehat{b}_{i,\lambda}\neq 0\}$ and
$\widehat{k}_{\lambda}=|\widehat{B}_{\lambda}|$. $\widehat{B}_{\lambda}$ is called an active
set. There are regularization parameter values at which the active set
changes. We denote those by $\lambda_0>\cdots>\lambda_J=0$, in which
$\widehat{\boldsymbol{b}}_{\lambda}={\bf{0}}_m$ for $\lambda>\lambda_0$ under a given ${\boldsymbol{y}}$.
$\lambda_j$ is called a transition point.
Let ${\bf{X}}_{\widehat{B}_{\lambda}}$ be an $n\times\widehat{k}_{\lambda}$ matrix whose
column vectors are ${\boldsymbol{x}}_j$, $j\in\widehat{B}_{\lambda}$. We write
$\widehat{\bf X}_{\lambda}={\bf{X}}_{\widehat{B}_{\lambda}}$ for simplicity. Also we define
$\widehat{\boldsymbol{\beta}}$ as a $\widehat{k}$-dimensional vector whose elements are
$\{\widehat{b}_k:k\in\widehat{B}_{\lambda}\}$. We write
$\widehat{\boldsymbol{\beta}}_{\lambda}=(\widehat{\beta}_{1,\lambda},\ldots,\widehat{\beta}_{\widehat{k},\lambda})'$;
i.e. $\widehat{\beta}_k$ is a member of $\{\widehat{b}_k:k\in\widehat{B}_{\lambda}\}$ under an
appropriate enumeration. Under this definition, we have
$\widehat{\boldsymbol{\mu}}_{\lambda}=\widehat{\bf X}_{\lambda}\widehat{\boldsymbol{\beta}}_{\lambda}$ since
$\widehat{b}_{k,\lambda}=0$ for $k\notin\widehat{B}_{\lambda}$. Let
$\widehat{\bf S}_{\lambda}=(\widehat{S}_{1,\lambda},\ldots,\widehat{S}_{\widehat{k},\lambda})'$ be a sign
vector of $\widehat{\boldsymbol{\beta}}_{\lambda}$; i.e.
\begin{equation}
\widehat{S}_{k,\lambda}=\begin{cases}
1 & \widehat{\beta}_{k,\lambda}>0\\
0 & \widehat{\beta}_{k,\lambda}=0\\
-1 & \widehat{\beta}_{k,\lambda}<0\\
\end{cases}.
\end{equation}
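For concreteness, the following minimal sketch (an editorial illustration, not part of the original formulation) fits a LASSO solution and extracts the active set $\widehat{B}_{\lambda}$, its size $\widehat{k}_{\lambda}$, the sign vector $\widehat{\bf S}_{\lambda}$ and the output vector $\widehat{\boldsymbol{\mu}}_{\lambda}$. It assumes scikit-learn's \texttt{Lasso}, which minimizes $\frac{1}{2n}\|{\boldsymbol{y}}-{\bf{X}}{\boldsymbol{b}}\|^2+\alpha\|{\boldsymbol{b}}\|_1$, so its parameter corresponds to $\alpha=\lambda/(2n)$ for the cost (\ref{eq:cost-lasso}); the data and the value of $\lambda$ are hypothetical.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m = 100, 50
X = rng.standard_normal((n, m))
b_true = np.zeros(m)
b_true[[5, 18, 31, 45]] = [1.0, -2.0, 2.0, -1.0]
y = X @ b_true + rng.standard_normal(n)

lam = 10.0                       # lambda in the cost ||y - Xb||^2 + lam*||b||_1
fit = Lasso(alpha=lam / (2 * n), fit_intercept=False,
            max_iter=100000).fit(X, y)

b_hat = fit.coef_                # LASSO solution \hat{b}_lambda
B_hat = np.flatnonzero(b_hat)    # active set \hat{B}_lambda
k_hat = B_hat.size               # \hat{k}_lambda = |\hat{B}_lambda|
S_hat = np.sign(b_hat[B_hat])    # sign vector \hat{S}_lambda
mu_hat = X @ b_hat               # LASSO output vector \hat{mu}_lambda
\end{verbatim}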
\subsection{Some facts on LASSO estimate}
By Lemma 1 in \cite{ZHT2007}, the LASSO estimator satisfies that
\begin{equation}
\label{eq:lemma1-ZHT2007}
\widehat{\boldsymbol{\beta}}_{\lambda}=(\widehat{\bf X}_{\lambda}'\widehat{\bf X}_{\lambda})^{-1}
\left(\widehat{\bf X}_{\lambda}'{\boldsymbol{y}}-\lambda\widehat{\bf S}_{\lambda}\right)
\end{equation}
if $\lambda$ is not a transition point. Therefore, we have
\begin{equation}
\label{eq:evmu-lemma1-ZHT2007}
\widehat{\boldsymbol{\mu}}_{\lambda}=\widehat{\bf H}_{\lambda}{\boldsymbol{y}}-\lambda\widehat{\boldsymbol{q}}_{\lambda},
\end{equation}
where $\widehat{\bf H}_{\lambda}=\widehat{\bf X}_{\lambda}(\widehat{\bf X}_{\lambda}'\widehat{\bf X}_{\lambda})^{-1}\widehat{\bf X}_{\lambda}'$ and
$\widehat{\boldsymbol{q}}_{\lambda}=\widehat{\bf X}_{\lambda}(\widehat{\bf X}_{\lambda}'\widehat{\bf X}_{\lambda})^{-1}\widehat{\bf S}_{\lambda}$.
It is easy to check that $\widehat{\bf H}_{\lambda}$ is an idempotent matrix.
We define
$\widetilde{\boldsymbol{\beta}}_{\lambda}=(\widehat{\bf X}_{\lambda}'\widehat{\bf X}_{\lambda})^{-1}\widehat{\bf X}_{\lambda}'{\boldsymbol{y}}$.
This is the least squares estimator under $\widehat{\bf X}_{\lambda}$, i.e. the
post-selection estimator. Note that this is not a linear estimator of ${\boldsymbol{y}}$ since
$\widehat{\bf X}_{\lambda}$ is already chosen according to ${\boldsymbol{y}}$. We also define
$\widetilde{\boldsymbol{\mu}}_{\lambda}=\widehat{\bf X}_{\lambda}\widetilde{\boldsymbol{\beta}}_{\lambda}$. Obviously, this can
be written as $\widetilde{\boldsymbol{\mu}}_{\lambda}=\widehat{\bf H}_{\lambda}{\boldsymbol{y}}$. Therefore,
(\ref{eq:evmu-lemma1-ZHT2007}) can be written as
\begin{equation}
\label{eq:evmu-lemma1-ZHT2007-2}
\widehat{\boldsymbol{\mu}}_{\lambda}=\widetilde{\boldsymbol{\mu}}_{\lambda}-\lambda\widehat{\boldsymbol{q}}_{\lambda}
\end{equation}
for a non-transition $\lambda$.
We summarize some facts that are derived by
(\ref{eq:evmu-lemma1-ZHT2007}) and are used in this article.
\begin{lemma}
\label{lemma:several-equations}
If $\lambda$ is not a transition point, the following equations hold.
\begin{align}
\label{eq:evmu-eqv}
\widehat{\boldsymbol{\mu}}_{\lambda}'\widehat{\boldsymbol{q}}_{\lambda}&=\|\widehat{\boldsymbol{\beta}}_{\lambda}\|_1\\
\label{eq:evmu2-evmuy}
\|\widehat{\boldsymbol{\mu}}_{\lambda}\|^2&=\widehat{\boldsymbol{\mu}}_{\lambda}'{\boldsymbol{y}}-\lambda\widehat{\boldsymbol{q}}_{\lambda}'{\boldsymbol{y}}
+\lambda^2\|\widehat{\boldsymbol{q}}_{\lambda}\|^2\\
\label{eq:evmu2-evmuy=l1}
\|\widehat{\boldsymbol{\mu}}_{\lambda}\|^2&=\widehat{\boldsymbol{\mu}}_{\lambda}'{\boldsymbol{y}}-\lambda\|\widehat{\boldsymbol{\beta}}_{\lambda}\|_1\\
\label{eq:H-evmu}
\widehat{\bf H}_{\lambda}\widehat{\boldsymbol{\mu}}_{\lambda}&=\widehat{\boldsymbol{\mu}}_{\lambda}.
\end{align}
\end{lemma}
\begin{proof}
In this proof, we drop $\lambda$ from symbols for simplifying the
description of terms. By (\ref{eq:evmu-lemma1-ZHT2007}), we have
\begin{align}
\widehat{\boldsymbol{\mu}}'\widehat{\boldsymbol{q}}&=\widehat{\bf S}'(\widehat{\bf X}'\widehat{\bf X})^{-1}\widehat{\bf X}'\widehat{\bf X}(\widehat{\bf X}'\widehat{\bf X})^{-1}(\widehat{\bf X}'{\boldsymbol{y}}-\lambda\widehat{\bf S})
=\widehat{\bf S}'\widehat{\boldsymbol{\beta}}.
\end{align}
We then obtain (\ref{eq:evmu-eqv}) by the definition of $\widehat{\bf S}$.
We define $\widehat{\bf P}={\bf{I}}_n-\widehat{\bf H}$. By the definition of $\widehat{\bf H}$ and $\widehat{\bf P}$, we have
\begin{equation}
\label{eq:lemma-qHy=qy}
\widehat{\boldsymbol{q}}'\widehat{\bf H}{\boldsymbol{y}}=\widehat{\boldsymbol{q}}'{\boldsymbol{y}}
\end{equation}
and, thus,
\begin{equation}
\widehat{\boldsymbol{q}}'\widehat{\bf P}{\boldsymbol{y}}=\widehat{\boldsymbol{q}}'{\boldsymbol{y}}-\widehat{\boldsymbol{q}}'\widehat{\bf H}{\boldsymbol{y}}=0.
\end{equation}
Since $\widehat{\bf H}$ is an idempotent matrix, we have $\widehat{\bf H}\widehat{\bf P}=O_n$, where
$O_n$ is an $n\times n$ zero matrix. Thus, by
(\ref{eq:lemma1-ZHT2007}), (\ref{eq:evmu2-evmuy}) is obtained as
\begin{align}
\widehat{\boldsymbol{\mu}}'{\boldsymbol{y}}-\|\widehat{\boldsymbol{\mu}}\|^2
&=\widehat{\boldsymbol{\mu}}'({\boldsymbol{y}}-\widehat{\boldsymbol{\mu}})\notag\\
&=(\widehat{\bf H}{\boldsymbol{y}}-\lambda\widehat{\boldsymbol{q}})'(\widehat{\bf P}{\boldsymbol{y}}+\lambda\widehat{\boldsymbol{q}})\notag\\
&=\lambda\widehat{\boldsymbol{q}}'{\boldsymbol{y}}-\lambda^2\|\widehat{\boldsymbol{q}}\|^2.
\end{align}
Moreover, by (\ref{eq:lemma-qHy=qy}),
(\ref{eq:evmu-lemma1-ZHT2007}) and (\ref{eq:evmu-eqv}), we have
\begin{align}
\lambda\widehat{\boldsymbol{q}}'{\boldsymbol{y}}-\lambda^2\|\widehat{\boldsymbol{q}}\|^2
&=\lambda\widehat{\boldsymbol{q}}'\widehat{\bf H}{\boldsymbol{y}}-\lambda^2\|\widehat{\boldsymbol{q}}\|^2\notag\\
&=\lambda\widehat{\boldsymbol{q}}'(\widehat{\bf H}{\boldsymbol{y}}-\lambda\widehat{\boldsymbol{q}})\notag\\
&=\lambda\widehat{\boldsymbol{q}}'\widehat{\boldsymbol{\mu}}\notag\\
&=\lambda\|\widehat{\boldsymbol{\beta}}\|_1.
\end{align}
Finally, by the definition of $\widehat{\bf H}$ and $\widehat{\boldsymbol{\mu}}$, we obtain
\begin{align}
\widehat{\bf H}\widehat{\boldsymbol{\mu}}=\widehat{\bf X}(\widehat{\bf X}'\widehat{\bf X})^{-1}\widehat{\bf X}'\widehat{\bf X}(\widehat{\bf X}'\widehat{\bf X})^{-1}(\widehat{\bf X}'{\boldsymbol{y}}-\lambda\widehat{\bf S})=\widehat{\boldsymbol{\mu}}.
\end{align}
\end{proof}
\subsection{Definition of risk and its Stein's formula}
Let $\widehat{\boldsymbol{\mu}}=(\widehat{\mu}_1,\ldots,\widehat{\mu}_n)'\in\mathbb{R}^n$ be a regression estimate of
${\boldsymbol{\mu}}=\mathbb{E}\left[{\boldsymbol{y}}\right]$. A prediction capability of $\widehat{\boldsymbol{\mu}}$ is
measured by the risk:
\begin{equation}
R_n=\frac{1}{n}\mathbb{E}\left[\|\widehat{\boldsymbol{\mu}}-{\boldsymbol{\mu}}\|^2\right],
\end{equation}
where $\mathbb{E}$ is the expectation with respect to the joint probability
distribution of ${\boldsymbol{y}}$. It is easily verified that
\begin{align}
\label{eq:general-risk}
R_n&=\frac{1}{n}\mathbb{E}\left[\|\widehat{\boldsymbol{\mu}}-{\boldsymbol{y}}\|^2\right]-\sigma^2+{\rm DF}_n,
\end{align}
where
\begin{equation}
\label{eq:general-df}
{\rm DF}_n=\frac{2}{n}\mathbb{E}\left[(\widehat{\boldsymbol{\mu}}-\mathbb{E}\left[\widehat{\boldsymbol{\mu}}\right])'({\boldsymbol{y}}-{\boldsymbol{\mu}})\right]
\end{equation}
that is a covariance between $\widehat{\boldsymbol{\mu}}$ and ${\boldsymbol{y}}$.
${\rm DF}_n$ is often called the degree of freedom.
Let $\partial\widehat{\boldsymbol{\mu}}/\partial{\boldsymbol{y}}$ be an $n\times n$ matrix whose
$(i,j)$ entry is $\partial\widehat{\mu}_i/\partial y_j$.
We define
\begin{align}
\nabla\cdot\widehat{\boldsymbol{\mu}}={\rm trace}\frac{\partial\widehat{\boldsymbol{\mu}}}{\partial{\boldsymbol{y}}}=
\sum_{i=1}^n\frac{\partial\widehat{\mu}_i}{\partial y_i}
\end{align}
in which ${\rm trace}$ denotes the trace of a matrix. In \cite{CS1981}, it has
been shown that
\begin{align}
\label{eq:general-df-stein}
{\rm DF}_n&=\frac{2\sigma^2}{n}\mathbb{E}\left[\nabla\cdot\widehat{\boldsymbol{\mu}}\right]
\end{align}
holds if $\widehat{\mu}_i=\widehat{\mu}_i({\boldsymbol{y}}):\mathbb{R}^n\mapsto\mathbb{R}$, $i=1,\ldots,n$ are almost
differentiable in the sense of \cite{CS1981} and the expectation on the
right hand side exists. $\nabla\cdot\widehat{\boldsymbol{\mu}}$ is called the divergence of
$\widehat{\boldsymbol{\mu}}$. By this result,
\begin{equation}
\widehat{R}_n(\sigma^2)=-\sigma^2
+\frac{1}{n}\|\widehat{\boldsymbol{\mu}}-{\boldsymbol{y}}\|^2+\frac{2\sigma^2}{n}\nabla\cdot\widehat{\boldsymbol{\mu}}
\end{equation}
is an unbiased estimator of a risk $R_n$. $\widehat{R}_n(\sigma^2)$ is
called SURE (Stein's Unbiased Risk Estimate). We can then construct a
$C_p$-type model selection criterion by replacing $\sigma^2$ with an
appropriate estimate $\widehat{\sigma}^2$; e.g. \cite{ZHT2007}.
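As an illustration of the above formula (not from the original text), the following Monte Carlo sketch checks the unbiasedness of $\widehat{R}_n(\sigma^2)$ for a simple linear estimator, the least squares projection, for which the divergence equals the trace of the hat matrix; the data and the settings are hypothetical.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, m, sigma2, trials = 100, 10, 1.0, 2000
X = rng.standard_normal((n, m))
mu = X @ rng.standard_normal(m)          # true mean, inside the column space of X
H = X @ np.linalg.solve(X.T @ X, X.T)    # hat matrix of least squares
div = np.trace(H)                        # divergence of the linear estimator (= m)

sure, loss = [], []
for _ in range(trials):
    y = mu + np.sqrt(sigma2) * rng.standard_normal(n)
    mu_hat = H @ y
    sure.append(-sigma2 + np.sum((mu_hat - y) ** 2) / n
                + 2 * sigma2 * div / n)
    loss.append(np.sum((mu_hat - mu) ** 2) / n)
print(np.mean(sure), np.mean(loss))      # the two averages nearly coincide
\end{verbatim}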
\section{LASSO with scaling}
\subsection{An optimal scaling}
We now consider assigning a single positive scaling parameter to the LASSO
estimator. More precisely, the scaling parameter is denoted by
$\alpha>0$ and the modified LASSO estimator with scaling is given by
$\alpha\widehat{\boldsymbol{\beta}}_{\lambda}$, where $\widehat{\boldsymbol{\beta}}_{\lambda}$ is the vector of
non-zero elements of the LASSO estimator. The output vector with a single
scaling parameter is given by
$\widehat{\boldsymbol{\mu}}_{\lambda,\alpha}=\alpha\widehat{\boldsymbol{\mu}}_{\lambda}$. Thus,
$\widehat{\boldsymbol{\mu}}_{\lambda,1}$ is a LASSO output vector. We write
$\widehat{\boldsymbol{\mu}}_{\lambda,\alpha}=(\widehat{\mu}_{\lambda,\alpha,1},\ldots,\widehat{\mu}_{\lambda,\alpha,n})'$,
where $\widehat{\mu}_{\lambda,\alpha,k}=\alpha\widehat{\mu}_{\lambda,k}$.
A risk of LASSO with scaling is
\begin{equation}
R_n(\lambda,\alpha)=\frac{1}{n}\mathbb{E}\left[\|\widehat{\boldsymbol{\mu}}_{\lambda,\alpha}-{\boldsymbol{\mu}}\|^2\right].
\end{equation}
Especially, $R_n(\lambda,1)$ is a risk of LASSO.
By the previous discussion, it is given by
\begin{align}
\label{eq:risk-ls-1}
R_n(\lambda,\alpha)
&=\frac{1}{n}\mathbb{E}\|\widehat{\boldsymbol{\mu}}_{\lambda,\alpha}-{\boldsymbol{y}}\|^2-\sigma^2
+{\rm DF}_n(\lambda,\alpha),
\end{align}
where
\begin{equation}
{\rm DF}_n(\lambda,\alpha)=\frac{2}{n}\mathbb{E}(\widehat{\boldsymbol{\mu}}_{\lambda,\alpha}-\mathbb{E}\widehat{\boldsymbol{\mu}}_{\lambda,\alpha})'({\boldsymbol{y}}-{\boldsymbol{\mu}}).
\end{equation}
In \cite{ZHT2007}, for LASSO estimate,
\begin{equation}
\label{eq:risk-ls-2}
{\rm DF}_n(\lambda,1)=\frac{2\sigma^2}{n}\mathbb{E}\widehat{k}_{\lambda}
\end{equation}
has been shown via the above Stein's formula. By the definition of $\widehat{\boldsymbol{\mu}}_{\lambda,\alpha}$, we
thus have
\begin{equation}
\label{eq:risk-ls-3}
R_n(\lambda,\alpha)
=\frac{1}{n}\mathbb{E}\|\alpha\widehat{\boldsymbol{\mu}}_{\lambda}-{\boldsymbol{y}}\|^2-\sigma^2+\frac{2\alpha\sigma^2}{n}\mathbb{E}\widehat{k}_{\lambda}.
\end{equation}
Of course, this reduces to a risk of LASSO when $\alpha=1$. By
(\ref{eq:risk-ls-3}), SURE for LASSO is given by
\begin{equation}
\label{eq:ube-risk-LASSO}
\widehat{R}_n(\lambda,\sigma^2)
=-\sigma^2+\frac{1}{n}\|\widehat{\boldsymbol{\mu}}_{\lambda}-{\boldsymbol{y}}\|^2+\frac{2\sigma^2}{n}\widehat{k}_{\lambda}.
\end{equation}
On the other hand, by setting the derivative of (\ref{eq:risk-ls-3})
with respect to $\alpha$ to zero, the minimizing scaling value of
$R_n(\lambda,\alpha)$ is given by
\begin{equation}
\label{eq:alphaopt}
\alpha_{\rm opt}=\frac{\mathbb{E}\widehat{\boldsymbol{\mu}}_{\lambda}'{\boldsymbol{y}}-\sigma^2\mathbb{E}\widehat{k}_{\lambda}}{\mathbb{E}\|\widehat{\boldsymbol{\mu}}_{\lambda}\|^2}
\end{equation}
if $\mathbb{E}\|\widehat{\boldsymbol{\mu}}_{\lambda}\|^2\neq 0$. If $\lambda$ is not a transition point
then we have
\begin{equation}
\label{eq:alphaopt-1}
\alpha_{\rm opt}
=1+\frac{\lambda\mathbb{E}\|\widehat{\boldsymbol{\beta}}_{\lambda}\|_1}{\mathbb{E}\|\widehat{\boldsymbol{\mu}}_{\lambda}\|^2}
-\frac{\sigma^2\mathbb{E}\widehat{k}_{\lambda}}{\mathbb{E}\|\widehat{\boldsymbol{\mu}}_{\lambda}\|^2}
\end{equation}
by (\ref{eq:evmu2-evmuy=l1}).
Through a simple calculation using (\ref{eq:risk-ls-3}) and
(\ref{eq:alphaopt}), we have
\begin{equation}
\label{eq:R1-Ralphaopt}
R_n(\lambda,1)-R_n(\lambda,\alpha_{\rm opt})
=\frac{1}{n}(\alpha_{\rm opt}-1)^2\mathbb{E}\|\widehat{\boldsymbol{\mu}}_{\lambda}\|^2.
\end{equation}
Therefore, the optimal scaling value improves on the naive LASSO at
any $\lambda$. In the case of an orthogonal design in a nonparametric
regression problem such as wavelet regression \cite{DJ1994,DJ1995}, it is shown in
\cite{KH2016a} that the right hand side of (\ref{eq:R1-Ralphaopt}) is
$O(n^{-1}\log n)$.
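The following Monte Carlo sketch (an editorial illustration, not part of the original text) locates the risk-minimizing scaling value on a grid of $\alpha$ for one fixed $\lambda$, using hypothetical data and scikit-learn's \texttt{Lasso} with the parameter correspondence noted earlier.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, m, lam, trials = 100, 50, 20.0, 500
X = rng.standard_normal((n, m))
b_true = np.zeros(m)
b_true[[5, 18, 31, 45]] = [1.0, -2.0, 2.0, -1.0]
mu = X @ b_true                          # true mean vector
alphas = np.linspace(0.8, 1.6, 17)
risk = np.zeros_like(alphas)
for _ in range(trials):
    y = mu + rng.standard_normal(n)
    coef = Lasso(alpha=lam / (2 * n), fit_intercept=False,
                 max_iter=100000).fit(X, y).coef_
    mu_hat = X @ coef
    risk += np.array([np.sum((a * mu_hat - mu) ** 2) / n for a in alphas])
risk /= trials
print(alphas[np.argmin(risk)])           # Monte Carlo estimate of the minimizing alpha
\end{verbatim}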
\subsection{Data-dependent empirical scaling value}
One choice of a scaling value in applications is
$(\widehat{\boldsymbol{\mu}}_{\lambda}'{\boldsymbol{y}}-\sigma^2\widehat{k}_{\lambda})/\|\widehat{\boldsymbol{\mu}}_{\lambda}\|^2$ that
is an empirical estimate of $\alpha_{\rm opt}$. In LASSO,
$\widehat{\boldsymbol{\mu}}_{\lambda}={\bf{0}}_n$ can occur when $\lambda$ is
large. Therefore, this scaling value may not be stable. Moreover, the
scaling value can be smaller than one depending on the noise variance.
Also, this estimate is difficult to handle since $\widehat{k}_{\lambda}$
is a discontinuous function of ${\boldsymbol{y}}$. As another choice, we may use
$\widehat{\boldsymbol{\mu}}_{\lambda}'{\boldsymbol{y}}/\|\widehat{\boldsymbol{\mu}}_{\lambda}\|^2$, which minimizes the squared
distance between ${\boldsymbol{y}}$ and $\alpha\widehat{\boldsymbol{\mu}}_{\lambda}$; i.e. it moves the
LASSO estimator toward the least squares one. However, again, this may not be
stable. Then, for a stable scaling value, we consider
\begin{equation}
\label{eq:def-ealpha-1}
\widehat{\alpha}=\frac{\widehat{\boldsymbol{\mu}}_{\lambda}'{\boldsymbol{y}}+\delta}{\|\widehat{\boldsymbol{\mu}}_{\lambda}\|^2+\delta},
\end{equation}
where $\delta$ is a fixed positive constant. Note that $\delta$ is not a
tuning parameter (hyper-parameter) and is a constant for stabilizing
$\widehat{\alpha}$. Therefore, it is set to be a small value, say, $10^{-6}$ in
applications. By (\ref{eq:evmu2-evmuy=l1}), we can write
\begin{equation}
\label{eq:def-ealpha-2}
\widehat{\alpha}=1+\frac{\lambda\|\widehat{\boldsymbol{\beta}}_{\lambda}\|_1}{\|\widehat{\boldsymbol{\mu}}_{\lambda}\|^2+\delta}
\end{equation}
for non-transition $\lambda$. Therefore, $\widehat{\alpha}\ge 1$ holds; i.e. it
really behaves as an expansion parameter. Moreover, $\widehat{\alpha}\simeq 1$
for a small $\lambda$. This is a nice property since the bias problem in
LASSO is serious when $\lambda$ is large and is not essential when it is
small. We have three facts relating to $\widehat{\alpha}$.
The first one shows an effect of the introduction of $\widehat{\alpha}$.
\begin{property}
\label{lemma:smaller-RSS}
For a non-transition $\lambda$,
\begin{equation}
\|{\boldsymbol{y}}-\widetilde{\boldsymbol{\mu}}_{\lambda}\|^2\le
\|{\boldsymbol{y}}-\widehat{\boldsymbol{\mu}}_{\lambda,\widehat{\alpha}}\|^2\le\|{\boldsymbol{y}}-\widehat{\boldsymbol{\mu}}_{\lambda,1}\|^2
\end{equation}
holds.
\end{property}
\begin{proof}
The first inequality is obvious because $\widetilde{\boldsymbol{\mu}}_{\lambda}$ is the least squares
solution under $\widehat{\bf X}_{\lambda}$; i.e. it is a projection of ${\boldsymbol{y}}$ onto a
linear subspace determined by column vectors of $\widehat{\bf X}_{\lambda}$.
For simplicity, we define $m_2=\|\widehat{\boldsymbol{\mu}}_{\lambda,1}\|^2$ and
$p_1=\lambda\|\widehat{\boldsymbol{\beta}}_{\lambda}\|_1$. We then obtain
\begin{align}
\label{eq:lemma-smaller-RSS}
&\|{\boldsymbol{y}}-\widehat{\boldsymbol{\mu}}_{\lambda,\widehat{\alpha}}\|^2\notag\\
&=\|{\boldsymbol{y}}-\widehat{\alpha}\widehat{\boldsymbol{\mu}}_{\lambda,1}\|^2\notag\\
&=\|{\boldsymbol{y}}-\widehat{\boldsymbol{\mu}}_{\lambda,1}+\widehat{\boldsymbol{\mu}}_{\lambda,1}-\widehat{\alpha}\widehat{\boldsymbol{\mu}}_{\lambda,1}\|^2\notag\\
&=\|{\boldsymbol{y}}-\widehat{\boldsymbol{\mu}}_{\lambda,1}\|^2+(1-\widehat{\alpha})^2m_2
+2(1-\widehat{\alpha})\widehat{\boldsymbol{\mu}}_{\lambda,1}'({\boldsymbol{y}}-\widehat{\boldsymbol{\mu}}_{\lambda,1})\notag\\
&=\|{\boldsymbol{y}}-\widehat{\boldsymbol{\mu}}_{\lambda,1}\|^2+(1-\widehat{\alpha})^2m_2+2(1-\widehat{\alpha})p_1\notag\\
&=\|{\boldsymbol{y}}-\widehat{\boldsymbol{\mu}}_{\lambda,1}\|^2+(1-\widehat{\alpha})^2m_2-2(1-\widehat{\alpha})^2(m_2+\delta)\notag\\
&=\|{\boldsymbol{y}}-\widehat{\boldsymbol{\mu}}_{\lambda,1}\|^2-(1-\widehat{\alpha})^2(m_2+2\delta),
\end{align}
where we used (\ref{eq:evmu2-evmuy=l1}) in the fourth line and
(\ref{eq:def-ealpha-2}) in the fifth line.
\end{proof}
Therefore, the introduction of $\widehat{\alpha}$ surely reduces the residual sum
of squares compared to the LASSO estimate. This implies that $\widehat{\alpha}$ moves the
LASSO estimator toward the least squares estimator at each $\lambda$.
We here consider
\begin{equation}
\widehat{d}(\lambda)=(1-\widehat{\alpha})^2(m_2+2\delta).
\end{equation}
As found in (\ref{eq:lemma-smaller-RSS}),
$\widehat{d}(\lambda)$ is the difference between residuals of naive LASSO and
LASSO with scaling. Note that this is a function of
$\lambda$ if the training data is given and ${\bf{X}}$ is determined.
\begin{property}
\label{lemma:ed-lambda}
For simplicity, we consider a specific case where $\delta=0$. We assume
that $\|\widehat{\boldsymbol{\beta}}_{\lambda}\|_1\neq 0$ holds and $\lambda$ is a
non-transition point.
Let $\rho_{\min}$ and $\rho_{\max}$ be the
minimum and maximum eigenvalues of ${\bf{X}}'{\bf{X}}/n$ and assume $\rho_{\min}>0$.
Then we have
\begin{equation}
\label{eq:lemma-ed-lambda}
\frac{\lambda^2}{n\rho_{\max}}\le
\widehat{d}(\lambda)\le
\frac{\lambda^2m^2}{n\rho_{\min}}.
\end{equation}
\end{property}
\begin{proof}
Since
\begin{equation}
\widehat{d}(\lambda)
=\lambda^2\|\widehat{\boldsymbol{\beta}}_{\lambda}\|_1^2\frac{\|\widehat{\boldsymbol{\mu}}_{\lambda,1}\|^2+2\delta}
{(\|\widehat{\boldsymbol{\mu}}_{\lambda,1}\|^2+\delta)^2}
=\frac{\lambda^2\|\widehat{\boldsymbol{\beta}}_{\lambda}\|_1^2}{\|\widehat{\boldsymbol{\mu}}_{\lambda,1}\|^2}
\end{equation}
holds in case of $\delta=0$, we have
\begin{equation}
\frac{\lambda^2\|\widehat{\boldsymbol{\beta}}_{\lambda}\|_1^2}
{n\rho_{\max}\|\widehat{\boldsymbol{\beta}}_{\lambda}\|^2}\le
\widehat{d}(\lambda)
\le\frac{\lambda^2\|\widehat{\boldsymbol{\beta}}_{\lambda}\|_1^2}
{n\rho_{\min}\|\widehat{\boldsymbol{\beta}}_{\lambda}\|^2}.
\end{equation}
By the equivalence of the norms, this reduces to
(\ref{eq:lemma-ed-lambda}), where we used $\widehat{k}_{\lambda}\le m$.
\end{proof}
Therefore, the introduction of $\widehat{\alpha}$ improves the degree of fitting
to the given data especially when $\lambda$ is large; i.e. a sparse
situation. We next argue on a probabilistic behavior of $\widehat{\alpha}$.
\begin{property}
\label{lemma:ub-1-ealpha}
\begin{equation}
\label{eq:ub-1-ealpha}
\mathbb{E}\left[\widehat{\alpha}-1\right]\le \max\left(1/\delta,m^2/\rho_{\min}\right)\frac{\lambda}{\sqrt{n}}
\end{equation}
holds.
\end{property}
\begin{proof}
Since the probability that a fixed $\lambda$ is a transition point is
zero as in \cite{ZHT2007}, $\lambda$ is assumed not to be a transition
point below. We define an event
$E=\left\{\|\widehat{\boldsymbol{\beta}}_{\lambda}\|_1\le\theta_n\right\}$, where
$\theta_n>0$. $E^C$ denotes the complement of $E$.
By (\ref{eq:def-ealpha-2}), we have
\begin{align}
\mathbb{E}\left[\widehat{\alpha}-1|E\right]\le\frac{1}{\delta}\mathbb{E}\left[\left.\lambda\|\widehat{\boldsymbol{\beta}}_{\lambda}\|_1\right|E\right]\le\lambda\theta_n/\delta.
\end{align}
We also have
\begin{align}
\mathbb{E}\left[\widehat{\alpha}-1|E^C\right]
&\le\mathbb{E}\left[\left.\frac{\lambda\|\widehat{\boldsymbol{\beta}}_{\lambda}\|_1}{n\rho_{\min}\|\widehat{\boldsymbol{\beta}}_{\lambda}\|^2}\right|E^C\right]\notag\\
&\le\mathbb{E}\left[\left.\frac{\lambda\widehat{k}_{\lambda}^2}{n\rho_{\min}\|\widehat{\boldsymbol{\beta}}_{\lambda}\|_1}\right|E^C\right]\notag\\
&\le\frac{\lambda m^2}{n\rho_{\min}\theta_n}.
\end{align}
Since $\mathbb{E}[\widehat{\alpha}-1]=\mathbb{E}[\widehat{\alpha}-1|E]\mathbb{P}[E]+\mathbb{E}[\widehat{\alpha}-1|E^C]\mathbb{P}[E^C]$
is a convex combination of the two conditional expectations, we obtain
(\ref{eq:ub-1-ealpha}) by taking $\theta_n=1/\sqrt{n}$.
\end{proof}
We consider the case where $\rho_{\min}$ and $m$ are constants. This is
a natural setting of a classical linear regression problem. In this
case, by the above result, the expectation of the degree of expansion is
bounded above by $O(1/\sqrt{n})$. Therefore, the effect of expansion by
$\widehat{\alpha}$ is small when $n$ is large and ${\bf{X}}$ is fixed. This is also
found in the previous result.
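Before turning to model selection, a minimal sketch (not from the original text) of computing the empirical scaling from a LASSO fit; it reuses the hypothetical variables \texttt{mu\_hat}, \texttt{b\_hat} and \texttt{y} from the earlier sketch, and the value of $\delta$ follows the suggestion above.
\begin{verbatim}
delta = 1e-6                    # stabilizing constant, not a tuning parameter
alpha_hat = (mu_hat @ y + delta) / (mu_hat @ mu_hat + delta)  # eq. (def-ealpha-1)
mu_hat_s = alpha_hat * mu_hat   # LASSO-S output vector
b_hat_s = alpha_hat * b_hat     # scaled coefficient estimator
\end{verbatim}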
\subsection{Model selection criterion under empirical scaling}
Now, we consider deriving a $C_p$-type model selection criterion for
$\widehat{\boldsymbol{\mu}}_{\lambda,\widehat{\alpha}}$. For this purpose, we derive an unbiased
estimate of the risk of $\widehat{\boldsymbol{\mu}}_{\lambda,\widehat{\alpha}}$. To do this, by
(\ref{eq:risk-ls-1}), we need to calculate the degree of freedom of
$\widehat{\boldsymbol{\mu}}_{\lambda,\widehat{\alpha}}$. We define it by
\begin{equation}
{\rm DF}_n^{\rm sca}(\lambda)=
\frac{2}{n}\mathbb{E}\left[(\widehat{\boldsymbol{\mu}}_{\lambda,\widehat{\alpha}}-\mathbb{E}\left[\widehat{\boldsymbol{\mu}}_{\lambda,\widehat{\alpha}}\right])'
({\boldsymbol{y}}-{\boldsymbol{\mu}})\right].
\end{equation}
\begin{theorem}
\label{theorem:DFsca}
We have
\begin{equation}
{\rm DF}_n^{\rm sca}(\lambda)=\frac{2\sigma^2}{n}\mathbb{E}\left[\widehat{d}_1+\widehat{d}_2\right],
\end{equation}
where
\begin{align}
\label{eq:def-ed_1}
\widehat{d}_1&=(1-\widehat{\alpha})\frac{\|\widehat{\boldsymbol{\mu}}_{\lambda}\|^2-\delta}{\|\widehat{\boldsymbol{\mu}}_{\lambda}\|^2+\delta}\\
\label{eq:def-ed_2}
\widehat{d}_2&=\widehat{\alpha}\widehat{k}_{\lambda}.
\end{align}
\end{theorem}
\begin{proof}
We drop $\lambda$ from expressions for simplicity since we fix $\lambda$
below. We thus write $\widehat{\boldsymbol{\beta}}=\widehat{\boldsymbol{\beta}}_{\lambda}$, $\widehat{\bf S}=\widehat{\bf S}_{\lambda}$,
$\widehat{\boldsymbol{\mu}}_{\alpha}=\widehat{\boldsymbol{\mu}}_{\lambda,\alpha}$, $\widehat{B}=\widehat{B}_{\lambda}$ and $\widehat{k}=\widehat{k}_{\lambda}$.
We can write $\widehat{\boldsymbol{\mu}}_{\alpha}=\alpha{\bf{X}}_{\widehat{B}}\widehat{\boldsymbol{\beta}}$. Especially,
$\widehat{\boldsymbol{\mu}}_1$ is a LASSO output. For simplicity, we write $\widehat{\boldsymbol{\mu}}=\widehat{\boldsymbol{\mu}}_1$
below. We denote the $k$th member of $\widehat{\boldsymbol{\mu}}_{\alpha}$ by
$\widehat{\mu}_{\alpha,k}$. In \cite{ZHT2007}, it is shown that, for any fixed
$\lambda$, $\widehat{\mu}_{1,k}:\mathbb{R}^n\mapsto\mathbb{R}$, $k=1,\ldots,n$ are almost
differentiable. By (\ref{eq:def-ealpha-1}),
$\widehat{\alpha}\widehat{\mu}_{1,k}:\mathbb{R}^n\mapsto\mathbb{R}$ is calculated by arithmetic operations
of the components of ${\boldsymbol{y}}$ and $\widehat{\boldsymbol{\mu}}$. Therefore, $\widehat{\alpha}\widehat{\mu}_{1,k}$
is almost differentiable since almost differentiability essentially requires
only coordinate-wise absolute continuity. As a result, Stein's lemma can be applied to
$\widehat{\mu}_{\widehat{\alpha},k}$ and, by (\ref{eq:general-df-stein}), we have
\begin{align}
{\rm DF}_n^{\rm sca}(\lambda)&=\frac{2\sigma^2}{n}\mathbb{E}\left[\nabla\cdot\widehat{\boldsymbol{\mu}}_{\widehat{\alpha}}\right],
\end{align}
where
\begin{align}
\nabla\cdot\widehat{\boldsymbol{\mu}}_{\widehat{\alpha}}=
{\rm trace}\frac{\partial\widehat{\boldsymbol{\mu}}_{\widehat{\alpha}}}{\partial{\boldsymbol{y}}}=
\sum_{i=1}^n\frac{\partial\widehat{\mu}_{\widehat{\alpha},i}}{\partial y_i}.
\end{align}
Since
\begin{align}
\sum_{i=1}^n\frac{\partial }{\partial y_i}\widehat{\mu}_{\lambda,\widehat{\alpha},i}
=\sum_{i=1}^n\widehat{\mu}_{1,i}\left(\frac{\partial }{\partial y_i}\widehat{\alpha}\right)
+\widehat{\alpha}\sum_{i=1}^n\left(\frac{\partial }{\partial y_i}\widehat{\mu}_{1,i}\right),
\end{align}
holds, we have
\begin{equation}
\label{eq:nabla-evmu-ealpha}
\nabla\cdot\widehat{\boldsymbol{\mu}}_{\widehat{\alpha}}=\widehat{\boldsymbol{\mu}}'\left(\frac{\partial\widehat{\alpha}}{\partial{\boldsymbol{y}}}\right)+
\widehat{\alpha}\nabla\cdot\widehat{\boldsymbol{\mu}},
\end{equation}
where $\partial \widehat{\alpha}/\partial {\boldsymbol{y}}$ is an $n$-dimensional vector whose
$i$th entry is $\partial \widehat{\alpha}/\partial y_i$.
Since the probability that a fixed $\lambda$ is a transition point is
zero as in \cite{ZHT2007}, $\lambda$ is assumed not to be a
transition point below.
For the second term of
(\ref{eq:nabla-evmu-ealpha}), it has been shown in \cite{ZHT2007} that
\begin{equation}
\label{eq:d-evmu-y-ZHT2007}
\frac{\partial\widehat{\boldsymbol{\mu}}}{\partial{\boldsymbol{y}}}=\widehat{\bf H}
\end{equation}
by (\ref{eq:lemma1-ZHT2007}) and
the local constancy of $\widehat{\boldsymbol{q}}$ under a fixed $\lambda$. We thus have
\begin{equation}
\label{eq:divergence-LASSO}
\nabla\cdot\widehat{\boldsymbol{\mu}}={\rm trace}\widehat{\bf H}=\widehat{k}
\end{equation}
by the idempotence of $\widehat{\bf H}$. Therefore, the second term of
(\ref{eq:nabla-evmu-ealpha}) is equal to $\widehat{d}_2$.
We evaluate the first term below. Since
\begin{equation}
\frac{\partial}{\partial y_k}\|\widehat{\boldsymbol{\mu}}\|^2
=\frac{\partial}{\partial y_k}\sum_{j=1}^n\widehat{\mu}_j^2
=2\sum_{j=1}^n\widehat{\mu}_j\frac{\partial\widehat{\mu}_j}{\partial y_k},
\end{equation}
we have
\begin{equation}
\label{eq:d-evmu2}
\frac{\partial\|\widehat{\boldsymbol{\mu}}\|^2}{\partial{\boldsymbol{y}}}
=2\left(\frac{\partial\widehat{\boldsymbol{\mu}}}{\partial{\boldsymbol{y}}}\right)\widehat{\boldsymbol{\mu}}=2\widehat{\bf H}\widehat{\boldsymbol{\mu}}=2\widehat{\boldsymbol{\mu}}
\end{equation}
by (\ref{eq:d-evmu-y-ZHT2007}) and (\ref{eq:H-evmu}).
On the other hand, we have
\begin{equation}
\label{eq:d-evmu-y}
\frac{\partial\widehat{\boldsymbol{\mu}}'{\boldsymbol{y}}}{\partial{\boldsymbol{y}}}
=\frac{\partial}{\partial{\boldsymbol{y}}}
\left\{\|\widehat{\boldsymbol{\mu}}\|^2+\lambda\widehat{\boldsymbol{q}}'{\boldsymbol{y}}-\lambda^2\|\widehat{\boldsymbol{q}}\|^2\right\}
=2\widehat{\boldsymbol{\mu}}+\lambda\widehat{\boldsymbol{q}}
\end{equation}
by (\ref{eq:d-evmu2}), (\ref{eq:evmu2-evmuy}) in Lemma
\ref{lemma:several-equations} and
local constancy of $\widehat{\boldsymbol{q}}$ as in \cite{ZHT2007}.
By (\ref{eq:d-evmu2}), (\ref{eq:d-evmu-y}) and (\ref{eq:evmu2-evmuy}) in
Lemma \ref{lemma:several-equations}, we have
\begin{align}
\frac{\partial\widehat{\alpha}}{\partial{\boldsymbol{y}}}
&=\frac{
\left(\|\widehat{\boldsymbol{\mu}}\|^2+\delta\right)\frac{\partial}{\partial{\boldsymbol{y}}}\widehat{\boldsymbol{\mu}}'{\boldsymbol{y}}
-(\widehat{\boldsymbol{\mu}}'{\boldsymbol{y}}+\delta)\frac{\partial}{\partial{\boldsymbol{y}}}\|\widehat{\boldsymbol{\mu}}\|^2}
{\left(\|\widehat{\boldsymbol{\mu}}\|^2+\delta\right)^2}\notag\\
&=\frac{\left(\|\widehat{\boldsymbol{\mu}}\|^2+\delta\right)(2\widehat{\boldsymbol{\mu}}+\lambda\widehat{\boldsymbol{q}})
-2(\widehat{\boldsymbol{\mu}}'{\boldsymbol{y}}+\delta)\widehat{\boldsymbol{\mu}}}{\left(\|\widehat{\boldsymbol{\mu}}\|^2+\delta\right)^2}\notag\\
&=\frac{\left(\|\widehat{\boldsymbol{\mu}}\|^2+\delta\right)(2\widehat{\boldsymbol{\mu}}+\lambda\widehat{\boldsymbol{q}})
-2\widehat{\alpha}\left(\|\widehat{\boldsymbol{\mu}}\|^2+\delta\right)\widehat{\boldsymbol{\mu}}}
{\left(\|\widehat{\boldsymbol{\mu}}\|^2+\delta\right)^2}\notag\\
&=\frac{1}{\|\widehat{\boldsymbol{\mu}}\|^2+\delta}
\left\{2\widehat{\boldsymbol{\mu}}+\lambda\widehat{\boldsymbol{q}}-2\widehat{\alpha}\widehat{\boldsymbol{\mu}}\right\},
\end{align}
where the third line comes from (\ref{eq:def-ealpha-1}).
Therefore, we obtain
\begin{align}
\widehat{\boldsymbol{\mu}}'\left(\frac{\partial\widehat{\alpha}}{\partial{\boldsymbol{y}}}\right)
&=\frac{1}{\|\widehat{\boldsymbol{\mu}}\|^2+\delta}
\left\{2\|\widehat{\boldsymbol{\mu}}\|^2+\lambda\widehat{\boldsymbol{\mu}}'\widehat{\boldsymbol{q}}-2\widehat{\alpha}\|\widehat{\boldsymbol{\mu}}\|^2\right\}\notag\\
&=\frac{1}{\|\widehat{\boldsymbol{\mu}}\|^2+\delta}
\left\{2\|\widehat{\boldsymbol{\mu}}\|^2+\lambda\|\widehat{\boldsymbol{\beta}}\|_1-2\widehat{\alpha}\|\widehat{\boldsymbol{\mu}}\|^2\right\}\notag\\
&=\frac{2}{\|\widehat{\boldsymbol{\mu}}\|^2+\delta}(1-\widehat{\alpha})\|\widehat{\boldsymbol{\mu}}\|^2+
\frac{\lambda\|\widehat{\boldsymbol{\beta}}\|_1}{\|\widehat{\boldsymbol{\mu}}\|^2+\delta}\notag\\
&=\frac{2}{\|\widehat{\boldsymbol{\mu}}\|^2+\delta}(1-\widehat{\alpha})\|\widehat{\boldsymbol{\mu}}\|^2+
\frac{\widehat{\boldsymbol{\mu}}'{\boldsymbol{y}}+\delta-\delta-\|\widehat{\boldsymbol{\mu}}\|^2}{\|\widehat{\boldsymbol{\mu}}\|^2+\delta}\notag\\
&=\frac{2}{\|\widehat{\boldsymbol{\mu}}\|^2+\delta}(1-\widehat{\alpha})\|\widehat{\boldsymbol{\mu}}\|^2-(1-\widehat{\alpha})\notag\\
&=(1-\widehat{\alpha})\frac{\|\widehat{\boldsymbol{\mu}}\|^2-\delta}{\|\widehat{\boldsymbol{\mu}}\|^2+\delta}
\end{align}
where we used (\ref{eq:def-ealpha-1}) and
(\ref{eq:evmu-eqv}), (\ref{eq:evmu2-evmuy=l1}).
\end{proof}
We have two remarks on this theorem.
\begin{itemize}
\item Our discussion is always applicable when ${\bf{X}}'{\bf{X}}$ is not
singular.
\item $\mathbb{E}[\widehat{d}_1]\le O\left(1/\sqrt{n}\right)$ by Property
\ref{lemma:ub-1-ealpha} since $|\widehat{d}_1|\le\widehat{\alpha}-1$.
\end{itemize}
By this theorem, the risk for $\widehat{\boldsymbol{\mu}}_{\lambda,\widehat{\alpha}}$ is given by
\begin{align}
R_n^{\rm sca}(\lambda)
&=\frac{1}{n}\mathbb{E}\left[\|{\boldsymbol{\mu}}-\widehat{\boldsymbol{\mu}}_{\lambda,\widehat{\alpha}}\|^2\right]\notag\\
&=-\sigma^2 +\frac{1}{n}\mathbb{E}\left[\|{\boldsymbol{y}}-\widehat{\boldsymbol{\mu}}_{\lambda,\widehat{\alpha}}\|^2\right]
+{\rm DF}_n^{\rm sca}(\lambda).
\end{align}
Therefore, SURE for LASSO with scaling is given by
\begin{equation}
\label{eq:ube-risk-LASSO-S}
\widehat{R}_n^{\rm sca}(\lambda,\sigma^2)=
-\sigma^2
+\frac{1}{n}\|{\boldsymbol{y}}-\widehat{\boldsymbol{\mu}}_{\lambda,\widehat{\alpha}}\|^2+\frac{2\sigma^2}{n}\left(\widehat{d}_1+\widehat{d}_2\right),
\end{equation}
where $\widehat{d}_1$ and $\widehat{d}_2$ are defined by (\ref{eq:def-ed_1}) and
(\ref{eq:def-ed_2}) respectively.
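A minimal sketch (not part of the original text) of evaluating (\ref{eq:ube-risk-LASSO-S}), reusing the hypothetical variables \texttt{mu\_hat}, \texttt{y}, \texttt{alpha\_hat}, \texttt{k\_hat}, \texttt{delta} and \texttt{n} from the earlier sketches; the value of $\sigma^2$ is a placeholder here and would be replaced by the estimate of the next subsection.
\begin{verbatim}
m2 = mu_hat @ mu_hat
d1 = (1.0 - alpha_hat) * (m2 - delta) / (m2 + delta)   # eq. (def-ed_1)
d2 = alpha_hat * k_hat                                 # eq. (def-ed_2)
sigma2_hat = 1.0            # placeholder for an estimate of the noise variance
sure_sca = (-sigma2_hat
            + np.sum((y - alpha_hat * mu_hat) ** 2) / n
            + 2.0 * sigma2_hat * (d1 + d2) / n)
\end{verbatim}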
\subsection{Estimate of noise variance}
To compute a $C_p$-type model selection criterion based on SURE, we need
an appropriate estimate of $\sigma^2$. For estimating the noise variance in a
regression problem, \cite{CE1992} has recommended to apply
\begin{equation}
\label{eq:esigma2}
\widehat{\sigma}^2_{\rm CE}=\frac{{\boldsymbol{y}}'({\bf{I}}_n-{\bf{H}}_{\gamma})^2{\boldsymbol{y}}} {{\rm trace}[({\bf{I}}_n-{\bf{H}}_{\gamma})^2]},
\end{equation}
where ${\bf{H}}_{\gamma}={\bf{X}}({\bf{X}}'{\bf{X}}+\gamma{\bf{I}}_m)^{-1}{\bf{X}}'$ with
$\gamma>0$. ${\bf{H}}_{\gamma}$ can be viewed as the hat matrix in a ridge
regression with a ridge parameter $\gamma>0$ or, equivalently,
an $\ell_2$ regularization with a regularization parameter $\gamma$. In
general, the $\ell_2$ regularization is introduced for better
generalization and stabilization. For the former purpose, the parameter
value needs to be selected carefully. Here, however, the purpose of
introducing $\gamma$ is only to stabilize an estimate of the noise
variance. Therefore, we just set $\gamma$ to a small value, say,
$10^{-6}$ in applications. This is especially effective when $m$ is
large; i.e. when a collinearity problem arises under the full model.
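A minimal sketch of (\ref{eq:esigma2}) (an editorial illustration; the function name is hypothetical):
\begin{verbatim}
import numpy as np

def sigma2_ce(X, y, gamma=1e-6):
    """Noise variance estimate of eq. (esigma2), with a ridge hat matrix."""
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X + gamma * np.eye(p), X.T)
    R = np.eye(n) - H
    return float(y @ (R @ R) @ y / np.trace(R @ R))
\end{verbatim}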
\section{Numerical examples}
In this section, through a simple numerical example, we verify our result
on SURE for LASSO with scaling and compare our method with naive LASSO,
MCP and Adaptive LASSO. We refer to Adaptive LASSO as A-LASSO and LASSO
with scaling $\widehat{\alpha}$ as LASSO-S.
\subsection{Setting of experiments}
For $u\in\mathbb{R}$, we define
$g_{\tau}(u,\xi)=\exp\left\{-(u-\xi)^2/(2\tau)\right\}$, where $\xi\in\mathbb{R}$
and $\tau>0$. Let $u_i$, $i=1,\ldots,n$ be equidistant points in
$[-5,5]$. Let $\{\xi_1,\ldots,\xi_m\}$ be a subset of
$\{u_1,\ldots,u_n\}$, where $m\le n$. We take $\xi_j=u_{(n/m)j}$,
$j=1,\ldots,m$ by assuming $n/m$ is an integer. We define $n\times m$
matrix ${\bf{X}}_1$ whose $(i,j)$ entry is $g_{\tau}(u_i,\xi_j)$; i.e. the
$j$th column vector of ${\bf{X}}_1$ is an output vector of
$g_{\tau}(\cdot,\xi_j)$. Let ${\bf{X}}_2$ be a normalized version of ${\bf{X}}_1$;
i.e. the mean and squared norm of each column vector of ${\bf{X}}_2$ are
equal to zero and $n$, respectively. To account for the intercept, we
construct the design matrix as ${\bf{X}}=({\bf{1}}_n,{\bf{X}}_2)$. Therefore, we consider a
curve fitting problem using a linear combination of $m$ Gaussian basis
functions whose centers are input data points that are appropriately
chosen. We generate $y_i$ by
$y_i=\sum_{k=1}^m\beta_k^*g_{\tau}(u_i,\xi_k)+\varepsilon_i$, where
$\varepsilon_i\sim N(0,\sigma^2)$. We define $K^*=\{k|\beta_k^*\neq
0\}$ and consider the case where $|K^*|\ll m$. This corresponds to the
case that there exists an exact sparse representation; i.e. there is a
small true representation.
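The following sketch (not from the original text) generates data according to this setting; the function names and the random seed are hypothetical, and the normalization follows the description of ${\bf{X}}_2$ above.
\begin{verbatim}
import numpy as np

def gauss_basis(u, xi, tau):
    return np.exp(-(u - xi) ** 2 / (2.0 * tau))      # g_tau(u, xi)

def make_data(n=100, m=50, tau=0.1, sigma2=1.0, seed=0):
    rng = np.random.default_rng(seed)
    u = np.linspace(-5.0, 5.0, n)                    # equidistant inputs in [-5, 5]
    xi = u[(n // m) * np.arange(1, m + 1) - 1]       # centers xi_j = u_{(n/m) j}
    X1 = gauss_basis(u[:, None], xi[None, :], tau)   # n x m basis-output matrix
    X2 = X1 - X1.mean(axis=0)                        # zero-mean columns
    X2 = np.sqrt(n) * X2 / np.linalg.norm(X2, axis=0)  # squared column norms = n
    X = np.hstack([np.ones((n, 1)), X2])             # add the intercept column
    beta = np.zeros(m)
    beta[[4, 17, 30, 44]] = [1.0, -2.0, 2.0, -1.0]   # K* = {5,18,31,45} (1-based)
    y = X1 @ beta + np.sqrt(sigma2) * rng.standard_normal(n)
    return X, y
\end{verbatim}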
\subsection{Verification of risk estimate}
In the first numerical experiment, we verify our theoretical result of
SURE for LASSO-S. We here refer to $\widehat{R}_n(\lambda,\widehat{\sigma}^2_{\rm CE})$
in (\ref{eq:ube-risk-LASSO}) and $\widehat{R}_n^{\rm sca}(\lambda,\widehat{\sigma}^2_{\rm
CE})$ in (\ref{eq:ube-risk-LASSO-S}) as SUREs of LASSO and LASSO-S
respectively; i.e. the noise variance is replaced with $\widehat{\sigma}^2_{\rm
CE}$ defined in (\ref{eq:esigma2}). These are fully empirical and,
thus, can be applied as model selection criteria.
We set $n=100$, $m=50$, $\sigma^2=1$, $K^*=\{5,18,31,45\}$
and $(\beta_{5}^*,\beta_{18}^*,\beta_{31}^*,\beta_{45}^*)=(1,-2,2,-1)$;
i.e. $\xi_j$'s of non-zero coefficients are almost equally positioned.
We also set $\delta=1/n$ for LASSO-S and $\gamma=10^{-6}$ in calculating
$\widehat{\sigma}^2_{\rm CE}$. We here consider two cases of $\tau=0.1$ and
$\tau=0.4$. In both cases, some Gaussian functions that are close to
each other are relatively correlated. However, $4$ Gaussian functions
with non-zero coefficients (components of a target function) are nearly
orthogonal in the former case while those are still correlated in the
latter case. This condition of correlation among components in a target
function affects the consistency of model selection of LASSO,
A-LASSO and MCP.
We here employ LARS-LASSO for calculating the LASSO path \cite{LARS} and use the
``lars'' package \cite{LARS} in R. Since the regularization parameter
corresponds to the number of un-removed coefficients, we here observe
the relationship between the number of un-removed coefficients and
risk. Since we know the true representation, we can calculate the actual
risk by the mean squared error between the true output and estimated
output. We repeat this procedure $1000$ times and calculate the averages
of actual risks and SUREs.
The averages of actual risks and SUREs of LASSO and LASSO-S are depicted
in Fig.\ref{fig:papsim03g}. The horizontal axis is the average of the
number of non-zero coefficients (members of the active set) at each step
of LARS-LASSO. Note that, at a fixed step of LARS-LASSO, the number of
non-zero coefficients may differ among the $1000$ trials. Therefore, we
take an average of those; i.e. the horizontal axis corresponds to the
LARS-LASSO steps, while the values shown on it are the averages of the
numbers of non-zero coefficients at those steps. In
Fig.\ref{fig:papsim03g}, we depict the results only at some specific steps
(not at all steps) for clarity of the graphs. We have some
remarks on these results.
\begin{itemize}
\item SURE agrees well with the actual risk for both LASSO and
LASSO-S. In particular, the agreement for LASSO-S verifies Theorem
\ref{theorem:DFsca}.
\item When the number of non-zero coefficients is small ($\lambda$ is
large), LASSO-S shows a lower risk than LASSO.
This is notable for $\tau=0.1$; i.e. when the components of the target
function are nearly orthogonal.
\item The number of non-zero coefficients at which the averaged risk is
minimized is smaller for LASSO-S than for LASSO.
This is also notable for $\tau=0.1$.
\end{itemize}
As a result, we can expect that $\widehat{R}_n^{\rm sca}(\lambda,\widehat{\sigma}^2_{\rm
CE})$ can be a good selector of $\lambda$ in applications; i.e. it can
choose a sufficiently sparse model with low risk.
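A minimal end-to-end sketch of such a selection procedure (an editorial illustration; it reuses \texttt{make\_data} and \texttt{sigma2\_ce} from the sketches above, and the $\lambda$ grid and $\delta$ are hypothetical choices):
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

X, y = make_data(n=100, m=50, tau=0.1)
n = len(y)
delta = 1e-6
s2 = sigma2_ce(X, y)
best = None
for lam in np.logspace(-1, 2, 20):       # hypothetical grid for lambda
    coef = Lasso(alpha=lam / (2 * n), fit_intercept=False,
                 max_iter=100000).fit(X, y).coef_
    mu_hat = X @ coef
    k_hat = np.count_nonzero(coef)
    m2 = mu_hat @ mu_hat
    a_hat = (mu_hat @ y + delta) / (m2 + delta)
    d1 = (1.0 - a_hat) * (m2 - delta) / (m2 + delta)
    d2 = a_hat * k_hat
    sure = (-s2 + np.sum((y - a_hat * mu_hat) ** 2) / n
            + 2.0 * s2 * (d1 + d2) / n)
    if best is None or sure < best[0]:
        best = (sure, lam, a_hat * coef)
sure_min, lam_sel, coef_scaled = best    # selected lambda and LASSO-S coefficients
\end{verbatim}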
\begin{figure}[p]
\begin{center}
\begin{minipage}[t]{80mm}
\begin{center}
\includegraphics[width=80mm]{papsim03.eps}
(a)~$\tau=0.1$
\end{center}
\end{minipage}
\begin{minipage}[t]{80mm}
\begin{center}
\includegraphics[width=80mm]{papsim13.eps}
(b)~$\tau=0.4$
\end{center}
\end{minipage}
\caption{Averages of actual risks and SUREs for LASSO and LASSO-S.}
\label{fig:papsim03g}
\end{center}
\end{figure}
\begin{figure}[p]
\begin{center}
\begin{minipage}[t]{80mm}
\begin{center}
\includegraphics[width=80mm]{papsim06risk.eps}
(a) $n=100$
\end{center}
\end{minipage}
\begin{minipage}[t]{80mm}
\begin{center}
\includegraphics[width=80mm]{papsim16risk.eps}
(b) $n=400$
\end{center}
\end{minipage}
\caption{Risk of selected model ($\tau=0.1$).}
\label{fig:papsim06risk}
\end{center}
\end{figure}
\begin{figure}[p]
\begin{center}
\begin{minipage}[t]{80mm}
\begin{center}
\includegraphics[width=80mm]{papsim06df.eps}
(a) $n=100$
\end{center}
\end{minipage}
\begin{minipage}[t]{80mm}
\begin{center}
\includegraphics[width=80mm]{papsim16df.eps}
(b) $n=400$
\end{center}
\end{minipage}
\caption{The number of non-zero coefficients of selected model ($\tau=0.1$).}
\label{fig:papsim06df}
\end{center}
\end{figure}
\begin{figure}[p]
\begin{center}
\begin{minipage}[t]{80mm}
\begin{center}
\includegraphics[width=80mm]{papsim07risk.eps}
(a) $n=100$
\end{center}
\end{minipage}
\begin{minipage}[t]{80mm}
\begin{center}
\includegraphics[width=80mm]{papsim17risk.eps}
(b) $n=400$
\end{center}
\end{minipage}
\caption{Risk of selected model ($\tau=0.4$).}
\label{fig:papsim07risk}
\end{center}
\end{figure}
\begin{figure}[p]
\begin{center}
\begin{minipage}[t]{80mm}
\begin{center}
\includegraphics[width=80mm]{papsim07df.eps}
(a) $n=100$
\end{center}
\end{minipage}
\begin{minipage}[t]{80mm}
\begin{center}
\includegraphics[width=80mm]{papsim17df.eps}
(b) $n=400$
\end{center}
\end{minipage}
\caption{The number of non-zero coefficients of selected model ($\tau=0.4$).}
\label{fig:papsim07df}
\end{center}
\end{figure}
\subsection{Comparison to the other methods}
We here compare LASSO, LASSO-S, MCP and A-LASSO in the previous experimental
setting, but we now test the cases of $n=100$ and $n=400$.
We use the ``glmnet'' package \cite{glmnet} for LASSO, LASSO-S and A-LASSO, and
the ``ncvreg'' package \cite{ncvreg} for MCP in R. We conduct simulations of model
selection in which the number of simulations is $S=100$. Basically, in
all methods, the candidate values of the regularization parameter are
$20$ points in $[0.01,0.5]$ on a log-scale. In LASSO and LASSO-S, we
employ SUREs with $\widehat{\sigma}^2_{\rm CE}$ for model selection. In A-LASSO,
the weight for the penalty term is set to the reciprocal of the absolute
value of the ridge estimator. This is a substitute of the least squares
estimator to avoid a collinearity problem. The ridge parameter in doing
this is selected by $10$-fold cross validation in $10$ points in
$[0.01,10]$ with log-scale. By using this initial estimator, the
regularization parameter of A-LASSO and $\gamma$-parameter (exponent of
weights) are selected by a grid search of $10$-fold cross validation in
which the candidate values for $\gamma$-parameter are
$\{0.5,1.0,2.0\}$. For MCP, the regularization parameter and
$\gamma$-parameter are selected by a grid search of $10$-fold cross
validation, in which the candidate values of $\gamma$-parameter are
$\{2.5,3.0,3.5,4.0\}$. In MCP, the choice of $\gamma$-parameter seems to
largely affect the generalization performance. At each simulation, we
calculate the number of non-zero coefficients and actual risk of a
selected model. The boxplots of the risk and of the number of non-zero
coefficients of the selected model are depicted in
Fig.\ref{fig:papsim06risk} and Fig.\ref{fig:papsim06df} for $\tau=0.1$
and Fig.\ref{fig:papsim07risk} and Fig.\ref{fig:papsim07df} for
$\tau=0.4$.
In Fig.\ref{fig:papsim06risk} and Fig.\ref{fig:papsim06df}, we can see
that LASSO-S tends to select a sparser model with lower risk compared
with LASSO. In particular, the selection of a sparse representation by LASSO-S
is notable. This shows that our scaling method surely contributes to
improving the model selection property even though it is a simple modification
of LASSO. Therefore, the introduction of scaling really
resolves the bias problem of LASSO. LASSO-S is also comparable or superior
to A-LASSO in terms of both sparseness and risk even though we choose
the hyper-parameters in A-LASSO by cross validation. MCP shows the best
performance in sparseness and risk. This is notable for $n=400$, the
relatively large sample case. However, when $n=100$, the risk of MCP tends
to be larger than that of the other methods on some data sets.
On the other hand, as mentioned above, Fig.\ref{fig:papsim07risk} and
Fig.\ref{fig:papsim07df} show results when the components of the target
function are relatively correlated. In this case, we can see that MCP
shows a worse overall performance compared to the other methods even when
$n=400$. In contrast, LASSO-S shows the best performance, while LASSO also
performs well. These results tell us that LASSO-S brings a
stable improvement over LASSO regardless of the number of
samples and of the conditions on the target function. Additionally, both the
optimization and the model choice of LASSO-S are very simple and fast.
\section{Conclusions and future works}
LASSO is known to suffer from a bias problem that is caused by
excessive shrinkage. In this article, we considered improving it by a
simple scaling method. We gave an appropriate empirical scaling value
that expands the LASSO estimator and actually moves the LASSO estimator close to
the post-selection least squares estimator. This is shown to be
especially effective when the regularization parameter is large; i.e. for a
sparse representation. Since it can be calculated based on the LASSO
estimator, we just need to run a fast and stable LASSO optimization procedure
such as LARS-LASSO or coordinate descent. We also derived SURE under the
modified LASSO with scaling. This analytic solution for model selection
is also a benefit of the proposed scaling method. As a result, we obtained a
fully empirical sparse modeling procedure based on the scaling method. In a
simple numerical example, we verified that the proposed scaling method
actually fixes the problem in LASSO and shows stability in model
selection compared to MCP and adaptive LASSO. As future work, we need
more application results for our scaling method. Although we considered
assigning a single scaling value to all coefficients in this article,
the assignment of coefficient-wise scaling values is expected to further improve
the prediction performance. This extension of our scaling method is also
part of our future work.
\section*{Acknowledgements}
This work was supported in part by Japan Society for the Promotion
of Science (JSPS) KAKENHI Grant Number 18K11433.
| {
"timestamp": "2018-08-23T02:06:19",
"yymm": "1808",
"arxiv_id": "1808.07260",
"language": "en",
"url": "https://arxiv.org/abs/1808.07260",
"abstract": "A sparse modeling is a major topic in machine learning and statistics. LASSO (Least Absolute Shrinkage and Selection Operator) is a popular sparse modeling method while it has been known to yield unexpected large bias especially at a sparse representation. There have been several studies for improving this problem such as the introduction of non-convex regularization terms. The important point is that this bias problem directly affects model selection in applications since a sparse representation cannot be selected by a prediction error based model selection even if it is a good representation. In this article, we considered to improve this problem by introducing a scaling that expands LASSO estimator to compensate excessive shrinkage, thus a large bias in LASSO estimator. We here gave an empirical value for the amount of scaling. There are two advantages of this scaling method as follows. Since the proposed scaling value is calculated by using LASSO estimator, we only need LASSO estimator that is obtained by a fast and stable optimization procedure such as LARS (Least Angle Regression) under LASSO modification or coordinate descent. And, the simplicity of our scaling method enables us to derive SURE (Stein's Unbiased Risk Estimate) under the modified LASSO estimator with scaling. Our scaling method together with model selection based on SURE is fully empirical and do not need additional hyper-parameters. In a simple numerical example, we verified that our scaling method actually improves LASSO and the SURE based model selection criterion can stably choose an appropriate sparse model.",
"subjects": "Machine Learning (cs.LG); Machine Learning (stat.ML)",
"title": "On an improvement of LASSO by scaling",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9664104982195784,
"lm_q2_score": 0.7341195327172402,
"lm_q1q2_score": 0.7094608233659921
} |
https://arxiv.org/abs/math/0703381 | Polynomial ideals and directed graphs | In this paper it is shown that it is possible to associate several polynomial ideals to a directed graph $D$ in order to find properties of it. In fact by using algebraic tools it is possible to give appropriate procedures for automatic reasoning on cycles and directed cycles of graphs. | \section{Introduction}
The aim of this paper is to study some problems in graph theory
with commutative algebra tools. Connections between simplicial
complexes and polynomial
rings were first studied in (\cite{Sta96}). There were also studies on the
connections between ideals and undirected graphs, as in Simis, Vasconcelos and Villarreal
(\cite{SVV94} and \cite{SVV98}). In 1995 Villarreal (\cite{Vi95}) and in 1998
Hibi and Ohsugi (\cite{HO98}) associated a
toric ideal, called the {\em edge ideal}, to an undirected graph. They found some relations
between ideal properties and even closed walks of the associated graph.
Other studies of undirected graphs with the edge ideal can be found
in Villarreal (\cite{Vi01}) and Herzog and Hibi (\cite{HH05}). Building
on these papers, the authors found other ideals associated to a
graph and other properties of cycles and
minimal vertex covers (\cite{CF04}).\\
In this paper we extend the paper \cite{CF04} to the case of
directed graphs by finding a toric ideal such that its
generators are in one-to-one correspondence with directed and
undirected cycles. The existence of such an ideal was proved in a
different way in 2005 by Reyes (\cite{Re05}) and in 2006 by
Gitler, Reyes and Villarreal (\cite{GRV06}). Some binomial ideals
associated to a digraph can also be found in \cite{IsIm00} and
\cite{BPS01}, but there is no characterization of cycles.
Furthermore we introduce the notion of sink and source covers and
we find characteristic conditions for directly bipartite graphs.
Many properties of a digraph $D$ are studied through some
corresponding properties of two undirected graphs associated to
$D$. Relations between the Cohen-Macaulay property of such undirected
graphs and properties of the digraph will be investigated in
another paper.\\
Digraphs are very useful for many applications,
such as in computational molecular biology (\cite{dJ02}), in the
study of phylogenetic trees (\cite{Er04} and \cite{SS05}) and in
the minimum cost flow problem in networks, which has many physical
applications (\cite{GT89}). By using the packages {\em Groebner}
and {\em networks} of Maple 10 we have procedures for automatic
deduction in graph theory in order to find bases of directed and
undirected cycles of the given digraph and to check the existence
of sink and source covers.
\section{Preliminary tools} \label{tools}
\subsection{Gr\"{o}bner Bases}
In this section we introduce some basic notions and properties of
Gr\"{o}bner bases and toric ideals. Let ${\bf N}_{0}$=$\{ 0,1,2,\ldots,n,\ldots \}$ and
let $X_{1},\ldots,X_{n}$ be $n$ variables. Let $K$ be a field of
characteristic zero. Let $A=K[X_{1},\ldots,X_{n}]$ and let
$PP(X_{1},\ldots,X_{n})$ =\{$X_{1}^{a_{1}} \cdots X_{n}^{a_{n}}$:
$(a_{1},\ldots,a_{n}) \in {\bf N}_{0}^{n}$\} be the set of power
products in $\{X_1,\ldots,X_n\}$, that is equal to the set $T_{A}$
of terms of $A$.
\begin{defn}
A term ordering $\sigma$ on $T_{A}$ is a total order such that:\\
(i) $1<_{\sigma} t$ for all $t \in T_{A} \setminus \{1\}$; \\
(ii) $t_{1}<_{\sigma} t_{2}$ implies $t_{1}t'<_{\sigma} t_{2}t'$
for all $t' \in T_{A}$.
\end{defn}
\smallskip
\noindent If $\sigma$ is a term ordering on $T_{A}$ and
$f$=$\sum_{i=1,\ldots,r}c_{i}t_{i} \in A$ with $c_i \neq 0$ for all $i$,
then $M_{\sigma}(f)$ is the monomial $c_{j}t_{j}$ such that
$t_{i}<_{\sigma}t_{j}$ for all $i \neq j$, $i=1,\ldots,r$.
$t_j$=$T_{\sigma}(f)$ is the {\em leading term} of $f$, while
$M_{\sigma}(f)$ is the {\em leading monomial} of $f$.
\begin{defn}
Let $f$ be a polynomial in $A$, let $F=(f_{1},\ldots,f_{r})$ be a
finite subset of $A$ and let $\sigma$ be a term ordering on $T_A$.
If $f$=$\sum_{i=1,\ldots,s}c_{i}t_{i} \in A$, then $f$ is {\em
reduced with respect to} $F$ iff $t_{i} \neq tT_{\sigma}(f_h)$ for
all $t \in T_{A}$, all $i=1,\ldots,s$ and all $h=1,\ldots,r$.
\end{defn}
\begin{defn}
Let $\sigma$ be a term ordering on $T_A$ and let $I$ be an ideal
in $A$. The monomial ideal $M_{\sigma}(I)=(M_{\sigma}(f): f \in
I)$ is the {\em initial ideal} of $I$.
\end{defn}
\begin{defn}
{\em (\cite{Bu76})} Let $I$ be an ideal in $A$ and let $\sigma$ be
a term ordering on $T_A$. If $I=(f_{1},\ldots,f_{r})$, then
$\{f_{1},\ldots,f_{r}\}$ is a Gr\"{o}bner basis of $I$ with
respect to $\sigma$ on $T_{A}$ iff
$M_{\sigma}(I)=(M_{\sigma}(f_{1}),\ldots,M_{\sigma}(f_{r}))$.\\
$\{f_{1},\ldots,f_{r}\}$ is a reduced Gr\"{o}bner basis iff
$M_{\sigma}(f_{h})$=$T_{\sigma}(f_{h})$ and $f_h$ is reduced with
respect to $F \setminus f_{h}$ for all $h=1,\ldots,r$.
\end{defn}
\smallskip
\noindent Every ideal $I$ in $A$ has a Gr\"{o}bner basis and a
reduced Gr\"{o}bner basis with respect to a given term ordering
$\sigma$ (\cite{Bu76}).
\begin{defn}\label{univ}
Let $I$ be a nonzero ideal in $A$. The {\em universal Gr\"{o}bner
basis of} $I$ is the union of all reduced Gr\"{o}bner bases of
$I$.
\end{defn}
\subsection{Toric Ideals}
Here we introduce the notion and some properties of a toric
ideal. Let $K$ be a field and let $A=K[X_{1},\ldots,X_{n}]$ be as
above. Let $B=K[X_{1},\ldots,X_{n},X_{1}^{-1},\ldots,X_{n}^{-1}]$
be the Laurent polynomial ring in the indeterminates
$\{X_{1},\ldots,X_{n}\}$ and let $EPP(X_{1},\ldots,X_{n})$
=\{$X_{1}^{a_{1}} \cdots X_{n}^{a_{n}}$: $(a_{1},\ldots,a_{n}) \in
{\bf Z}^{n}$\} be the set of power products in
$\{X_1,\ldots,X_n,X_{1}^{-1},\ldots,X_{n}^{-1}\}$, that is equal
to the set $ET_{A}$ of extended terms of $A$.
\begin{defn}\label{deftoric}
{\em (\cite{St95})}. Let
$M$=$(m_{ij})_{i=1,\ldots,m,j=1,\ldots,n}$ be an $(m,n)$-matrix
with $m_{ij}$ in ${\bf Z}$ for all $i,j$. Let $\pi_{M}$ : ${\bf
N}_{0}^{n} \longrightarrow {\bf Z}^{m}$ be the semigroup
homomorphism defined by
$\pi_{M}(u_1,\ldots,u_n)=(\sum_{j=1,\ldots,n}u_{j}m_{1j},\ldots,
\sum_{j=1,\ldots,n}u_{j}m_{mj})$. \\Let $\exp_{n}$ : ${\bf
N}_{0}^{n} \longrightarrow PP(X_{1}, \ldots, X_{n})$ be the
semigroup isomorphism defined by $\exp_{n}(u_1,\ldots,u_n)=
\prod_{j=1,\ldots,n}X_{j}^{u_{j}}$ and let $\exp_{\bf Z}$ : ${\bf
Z}^{m} \longrightarrow EPP(t_{1},\ldots, t_{m})$ be the semigroup
isomorphism defined by $\exp_{\bf Z}(a_1,\ldots, a_m)$=
$\prod_{i=1,\ldots,m}t_{i}^{a_{i}}$. \\Let $\pi$ :
$PP(X_{1},\ldots,X_{n}) \longrightarrow EPP(t_{1},\ldots,t_{m})$
be the semigroup homomorphism induced by $\pi_{M}$, which
is defined by $\pi(X_j)=\prod_{i=1,\ldots,m}t_{i}^{m_{ij}}$ for
all $j=1,\ldots,n$. $\pi$ extends uniquely to the homomorphism of
semigroup algebras $\pi':K[X_1,\ldots,X_n] \longrightarrow
K[t_1,\ldots,t_m,t_{1}^{-1},\ldots, t_{m}^{-1}]$ defined by
$\pi'(X_j)$ =$\pi(X_j)$ for all $j=1,\ldots,n$.
$I_{M}$=$ker(\pi')$ is an ideal in $A=K[X_1,\ldots,X_n]$, that is
called the {\em toric ideal of} $M$.
\end{defn}
\begin{rem}
Let $M$ be a matrix as above and let $I_M$ be the toric ideal of
$M$. It is not too hard to show that $I_M$ is an elimination
ideal. More precisely
$I_M$=$($$X_j-\prod_{i=1,\ldots,m}t_{i}^{m_{ij}},
t_{i}t_{i}^{-1}-1$: $j=1,\ldots,n$, $i=1,\ldots,m$$)$ $\cap$
$K[X_1,\ldots,X_n]$ $($\cite{St95}$)$.\\
If we put $z_i=t_{i}^{-1}$ for all $i=1,\ldots,m$, then
$I_M$ is equal to the toric ideal $I_{M'}$, where
$M'$=$(m_{ij}')_{i=1,\ldots,2m,j=1,\ldots,n}$ is a $(2m,n)$-matrix
with $m_{ij}'$ in ${\bf N}_{0}$ for all $i,j$. More precisely,
$m_{ij}'$=$m_{ij}$ for all $i=1,\ldots,m$ with $m_{ij} \in {\bf
Z}^{+}$, $m_{ij}'$=$0$ for all $i=1,\ldots,m$ with $m_{ij} \in
{\bf Z}^{-}$, $m_{ij}'$=$-m_{(i-m)j}$ for all $i=m+1,\ldots,2m$
with $m_{(i-m)j} \in {\bf Z}^{-}$ and $m_{ij}'$=$0$ for all
$i=m+1,\ldots,2m$ with $m_{(i-m)j} \in {\bf Z}^{+}$.
\end{rem}
\smallskip
\noindent It is well known that every toric ideal is a prime
binomial ideal. Other properties of binomial ideals can be found
in (\cite{ES96}), while properties of toric ideals can be found in
(\cite{St95}) and (\cite{KR05}, chap.2).
\begin{thm}
{\em (\cite{St95})}. $I_M$ is generated by binomials of the type
$X^{u^{+}}-X^{u^{-}}$, where $u^{+}, u^{-} \in {\bf Z}^{n}$ are
nonnegative with disjoint support.
\end{thm}
\begin{defn}
A binomial $X^{u^{+}}-X^{u^{-}}$ is {\em primitive} if there
exists no other binomial $X^{v^{+}}-X^{v^{-}} \in I_M$ such that
$X^{v^{+}}$ divides $X^{u^{+}}$ and $X^{v^{-}}$ divides
$X^{u^{-}}$. A binomial $X^{u^{+}}-X^{u^{-}}$ in $I_M$ is a {\em
circuit} if $supp(u^{+}-u^{-})$ is minimal with respect to
inclusion and the
coordinates of $u^{+}-u^{-}$ are relatively prime. \\
$Gr_{M}$=\{ primitive binomials in $I_M$\} is the {\em Graver
basis of} $I_M$.
\end{defn}
\begin{thm}
{\em (\cite{St95})}. Let $U_M$ be the universal Gr\"{o}bner basis
of $I_M$ and let $C_M$ be the set of all circuits in
$I_M$. Then:\\
(i) every binomial in $U_M$ is primitive.\\
(ii) $C_M \subseteq U_M \subseteq Gr_{M}$ for every $M$.
\end{thm}
\subsection{Graphs and digraphs}
In this paper $G$=$(V(G),E(G))$ will be a finite graph with
$V(G)=\{v_1,\ldots,v_n\}$ and $E(G)=\{e_1,\ldots,e_m\}$.
Furthermore $[v_{i},v_{j}]$ denotes the oriented edge from
$v_{i}$ to $v_{j}$, while
every non oriented edge between $v_{i}$ and $v_{j}$ is denoted by $\{v_i,v_j\}$.\\
The {\em underlying graph} $G_u$ of a directed graph $G$ is the
undirected graph with $V(G_u)=V(G)$ and the same undirected edges
of $G$. Often $D$ will denote a directed graph, that will be also
called a {\em digraph}.
All graphs in this paper will be
simple, i.e. without multiple edges.
\begin{defn}
Let $D$ be a digraph. A vertex $v_i$ in $V(D)$ is called a {\em
source} if no edge is directed into $v_i$. A vertex $v_i$ in $D$
is called a {\em sink} if every adjacent edge is directed into
$v_i$.
\end{defn}
\begin{defn}
Let $D$ be a digraph. A {\em walk} of length $n$ from a vertex
$v_i$ to a vertex $v_j$ in $D$ is a sequence of vertices
$v_i$=$v_{i(1)}$, \ldots, $v_j$=$v_{i(n+1)}$, such that either
$[v_{i(h)},v_{i(h+1)}]$ or $[v_{i(h+1)},v_{i(h)}]$ is in $E(D)$ for
all $h=1,\ldots,n$. A walk is called a {\em directed walk} if
$[v_{i(h)},v_{i(h+1)}] \in E(D)$ for all $h$. If
$v_{i(1)}=v_{i(n+1)}$ in a directed walk, then it is called a {\em
directed cycle}. A walk is called {\em simple} if there are no
repeated edges, while it is called {\em elementary} whenever there
are no repeated vertices.
\end{defn}
\begin{defn}
Let $G$ be an undirected graph. $G$ is {\em bipartite} if its
vertices can be divided into two sets, such that no edge connects
vertices in the same set. Equivalently, $G$ is {\em bipartite} iff
all cycles in
$G$ are even.\\
$G$ is {\em acyclic} if it has no cycle, while it is a {\em tree}
if it is connected and acyclic.
\end{defn}
\smallskip
\noindent
Some generalizations of such definitions in the directed
case are the following ones.
\begin{defn}
Let $D$ be a digraph. $D$ is called {\em directly bipartite}
whenever its underlying graph $D_u$ is bipartite with bipartition
sets $W$ and $W'$, such that every $w \in W$ is a source in $D$,
and every $w' \in W'$ is a sink in $D$. $D$ is called a {\em
directed acyclic graph}, {\em DAG} for short, when there are no
directed cycles.
\end{defn}
\section{Binomial ideals arising from a digraph}
Here we introduce the binomial ideals associated to a
digraph that we will use in the paper. First of all, we introduce
some binomial ideals associated to the edges of a digraph.
\begin{defn}
The binomial {\em extended diedge ideal} of a digraph $D$ is the
ideal $I(D,E)$=$($ $e_h-z_{i}v_{j}, z_{i}v_{i}-1$:
$e_h=[v_{i},v_{j}] \in E(D)$, $i=1,\ldots,n$ $)$ in
$K[e_1,\ldots,e_m,v_1,\ldots,v_n,z_{1},\ldots,z_{n}]$. The ideal
$I(E)_D=I(D,E)\cap K[e_1,\ldots, e_m]$ is the {\em binomial diedge
ideal of the digraph} $D$.
\end{defn}
The definitions above extend the analogous ones given in the case
of undirected graphs in \cite{HO98} and \cite{CF04}.
\begin{rem}
$I(E)_D$ is the toric ideal of the matrix $IM(D)^{t}$, that is the
transpose of the incidence matrix
$IM(D)=(a_{ih})_{i=1,\ldots,n,h=1,\ldots,m}$ of $D$ defined by
$a_{ih}=-1$ if $e_{h}$ leaves $v_{i}$, $a_{ih}=1$ if $e_{h}$
arrives at $v_{i}$ and $a_{ih}=0$ if $v_{i} \notin e_{h}$, for
every $v_{i} \in V(D)$ and $e_{h} \in E(D)$.
\end{rem}
\begin{ex} \label{D1}
Let $D_1$ be the digraph with $V(D_1)=\{v_1, v_2, v_3, v_4,v_5\}$
and $E(D_1)$ = $\{e_1=[v_1,v_2]$, $e_2=[v_2,v_3]$,
$e_3=[v_3,v_1]$, $e_4=[v_1,v_4]$, $e_5=[v_3,v_4]$,
$e_6=[v_3,v_5]\}$. The binomial extended diedge ideal is
$I(D_1,E)=(e_1-z_1 v_2, e_2- z_2 v_3, e_3- z_3 v_1, e_4-z_1 v_4,
e_5-z_3 v_4, e_6-z_3 v_5, z_1 v_1-1, z_2v_2-1, z_3v_3-1, z_4v_4-1,
z_5v_5-1)$. The binomial diedge ideal of $D_1$ is
$I(E)_{D_1}$=$(e_1e_2e_3-1, e_3e_4-e_5, e_2 e_1 e_5-e_4)$.
\end{ex}
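\smallskip
\noindent The elimination $I(E)_{D_1}=I(D_1,E)\cap K[e_1,\ldots,e_6]$ in the
example above can be reproduced with any Gr\"{o}bner basis engine. The paper
relies on the {\em Groebner} package of Maple 10; the following sketch is an
illustrative alternative in Python with the SymPy library (not the authors'
code). It uses a lexicographic order in which the $v_i$ and $z_i$ are larger
than the $e_h$, so that the basis elements involving only edge variables
generate the elimination ideal.
\begin{verbatim}
# Sketch (SymPy, not the paper's Maple procedures): eliminate the z_i and
# v_i from the extended diedge ideal of D_1 to obtain I(E)_{D_1}.
import sympy as sp

v = sp.symbols('v1:6')     # vertex variables v1..v5
z = sp.symbols('z1:6')     # auxiliary variables z1..z5
e = sp.symbols('e1:7')     # edge variables e1..e6

arcs = [(1, 2), (2, 3), (3, 1), (1, 4), (3, 4), (3, 5)]   # e_h = [v_i, v_j]

gens = [e[h] - z[i - 1] * v[j - 1] for h, (i, j) in enumerate(arcs)]
gens += [z[i] * v[i] - 1 for i in range(5)]

# lex order with all v's and z's before the e's eliminates them
G = sp.groebner(gens, *(list(v) + list(z) + list(e)), order='lex')
in_edges = [g for g in G.exprs if g.free_symbols <= set(e)]
print(in_edges)   # expected, up to sign and order:
                  #   e1*e2*e3 - 1, e1*e2*e5 - e4, e3*e4 - e5
\end{verbatim}
\noindent The analogous call for the digraph of example \ref{D2} should
recover $I(E)_{D_2}=(e_1e_2-e_3)$.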
\begin{ex} \label{D2}
Let $D_2$ be the digraph with $V(D_2)=\{v_1,v_2,v_3,v_4,v_5\}$ and
$E(D_2)$ = $\{e_1=[v_1,v_2]$, $e_2=[v_2,v_3]$, $e_3=[v_1,v_3]$,
$e_4=[v_4,v_3]$, $e_5=[v_3,v_5]\}$. The binomial extended diedge
ideal of $D_2$ is $I(D_2,E)=(e_1-z_1 v_2, e_2- z_2 v_3, e_3- z_1
v_3, e_4-z_4 v_3, e_5-z_3 v_5, z_1 v_1-1, z_2v_2-1, z_3v_3-1,
z_4v_4-1, z_5v_5-1)$ while the binomial diedge ideal of $D_2$ is
$I(E)_{D_2}$=$(e_1e_2-e_3)$.
\end{ex}
\section{The undirected graph $H_{D}$ associated to $D$}
Here we associate an undirected bipartite graph $H_{D}$ to a
digraph $D$. The introduction of such a graph allows us to prove
properties of the cycles of $D$ through the properties of the
cycles of $H_{D}$.
\begin{defn}
Let $D$ be a digraph. Let $H_D$ be the undirected graph with
$V(H_D)$ = $V(D) \cup \{z_{1},\ldots,z_{n}\}$ and $E(H_D)$ =
\{$e=\{z_{i},v_{j}\}$: $[v_{i},v_{j}] \in E(D)$\} $\cup$
\{$f_{i}=\{z_{i},v_{i}\}$: $i=1,\ldots,n\}$. Let
$R=K[v_{1},z_{1},f_{1}, \ldots, v_{n},z_{n},f_{n}, e_{1}, \ldots,
e_{m}]$ and let $\pi: R \rightarrow R/(f_{1}-1,\ldots, f_{n}-1)$ be
the canonical ring homomorphism defined by $\pi(v_{i})=v_{i}$,
$\pi(z_{i})=z_{i}$, $\pi(f_{i})=1$, $\pi(e_{j})=e_{j}$ for all
$i=1,\ldots,n$ and $j=1,\ldots,m$.
\end{defn}
\smallskip
\noindent If $I(H_D,E)$ is the binomial extended edge
ideal of the undirected graph $H_D$ as in \cite{CF04}, then
$\pi(I(H_D,E))=I(D,E)$ by definition of $\pi$.
\smallskip
\noindent
The graph $H_D$ has some properties that are
independent of the properties of the digraph $D$.
\begin{defn} A {\em matching} of an undirected graph $G$ is a
subset of independent edges of $G$ (i.e.\ edges that do not share
vertices). A matching is called {\em perfect} if it covers every
vertex of $G$.
\end{defn}
\smallskip
\noindent The following theorem shows the existence of a
perfect matching of $H_D$.
\begin{thm} \label{HDbip}
Let $D$ be a digraph. The undirected graph $H_D$ is bipartite and
it has a perfect matching.
\end{thm}
\textbf{Proof}. Let $V=\{v_1, \ldots, v_n\}$ and $Z=\{z_1, \ldots,
z_n\}$. $V$ and $Z$ are disjoint subsets of $V(H_D)$, whose union
is exactly $V(H_D)$. By definition of $E(H_D)$ every edge in $H_D$
connects a vertex in $V$ with a vertex in $Z$. So $H_D$ is
bipartite with bipartition sets $V$ and $Z$, by definition of
bipartite graph. Finally, the edges $f_i=\{z_i,v_i\}$ for $i=1,
\ldots, n$ are independent and there are $n$ of them in a graph with $2n$
vertices, so they cover every vertex. Hence the set $M=\{f_1, \ldots, f_n\}$
is a perfect matching for $H_D$. \qed
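\smallskip
\noindent The construction of $H_D$ and the statement of the theorem above
can also be checked mechanically. The following Python sketch (an
illustration only, assuming the {\em networkx} library; it is not part of
the paper's Maple procedures) builds $H_{D_1}$ for the digraph of example
\ref{D1} and verifies that it is bipartite and admits a perfect matching.
\begin{verbatim}
# Sketch: build H_D for the digraph D_1 and check that it is bipartite
# and has a perfect matching (theorem above).
import networkx as nx
from networkx.algorithms import bipartite

arcs = [(1, 2), (2, 3), (3, 1), (1, 4), (3, 4), (3, 5)]   # [v_i, v_j] of D_1
n = 5

H = nx.Graph()
H.add_edges_from((f"z{i}", f"v{j}") for i, j in arcs)           # e = {z_i, v_j}
H.add_edges_from((f"z{i}", f"v{i}") for i in range(1, n + 1))   # f_i = {z_i, v_i}

print(bipartite.is_bipartite(H))      # True
top = [f"v{i}" for i in range(1, n + 1)]
M = bipartite.maximum_matching(H, top_nodes=top)
print(len(M) // 2 == n)               # True: a perfect matching exists
\end{verbatim}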
\smallskip
\noindent
Now, given a bipartite graph $G$, we want to find a digraph $D$, with $H_D=G$.
The following theorem shows some results in this direction.
\begin{thm}
Let $G$ be a bipartite undirected graph with $V(G)=\{v_1,\ldots,
v_n,\\ z_1,\ldots,z_n\}$. Then the following propositions are
equivalent:
\begin{enumerate}
\item $G$ has a perfect matching; \item there exists a digraph
$D$, with $V(D)=\{v_1, \ldots, v_n\}$ such that $H_D=G$.
\end{enumerate}
\end{thm}
\textbf{Proof}. (1) $\Rightarrow$ (2). If $G$ has a perfect
matching, then it has $n$ edges that share no vertices. We can
relabel the vertices in such a way that the $n$ independent edges
are $f_i=\{z_i, v_i\}$, for $i=1, \ldots, n$. Then $V=\{v_1, \ldots,
v_n\}$ and $Z=\{z_1, \ldots, z_n\}$ are the two bipartition sets:
otherwise a vertex $z_i$ would lie in the same bipartition set
as some $v_j$ (or a vertex $v_i$ in the same bipartition set
as some $z_j$), which would imply that the edge $f_i$ has
two vertices in the same bipartition set, against the definition
of a bipartite graph. Now it is sufficient to define $D$ in such a
way that $V(D)=\{v_1, \ldots, v_n\}$ and $[v_i,v_j]\in
E(D)$ whenever $\{z_i, v_j\} \in E(G)$.\\
(2) $\Rightarrow$ (1). It follows from theorem \ref{HDbip}. \qed
\section{Cycles in $D$ and $H_{D}$}
Let $G$ be an undirected graph. There exists a binomial ideal
$I(G,E)$ in the vertices and edges associated to $G$ (\cite{CF04}),
while there exists a binomial ideal $I(E)_{G} \subseteq I(G,E)$ in
the edges associated to the even closed walks in $G$ (\cite{HO98}).
Given an even closed walk
$C=(e_{i_{1}}=\{v_{i_{1}},v_{i_{2}}\},\ldots,
e_{i_{2q-1}}=\{v_{i_{2q-1}},v_{i_{2q}}\},
e_{i_{2q}}=\\\{v_{i_{2q}},v_{i_{1}}\})$ of $G$, let
$f_{C}=\prod_{k=1,\ldots,q}e_{i_{2k-1}}-\prod_{k=1,\ldots,q}e_{i_{2k}}$
be the corresponding binomial in $I(E)_{G}$.\\
The following theorem is a well known result about even closed
walks in an undirected graph.
\begin{thm}\label{Hibi}$($\cite{Vi95}$)$,$($\cite{HO98}$)$\\
If $G$ is an undirected graph, then the toric ideal $I(E)_{G}$,
associated to its incidence matrix, is generated by all binomials
$f_C$, where $C$ is an even closed walk of $G$.
\end{thm}
\smallskip
\noindent Now we want to extend this result to the case of
digraphs.
\begin{thm}\label{cycles}
Let $D$ be a digraph. The toric ideal $I(E)_D$ is generated by all
binomials $f_{C}$, where $C$ is a cycle of $D$.
\end{thm}
\textbf{Proof}. Consider the undirected graph $H_D$
associated to $D$. The ideal $I(E)_{H_D}$ = $I(H_D, E) \cap
K[e_{1},\ldots,e_{m},f_{1},\ldots,f_{n}]$ is generated by the binomials of
all even closed walks in $H_D$, and hence by those of all even cycles,
because $H_D$ is bipartite by theorem \ref{HDbip}. Now $I(D,E)=\pi(I(H_D,E))$
and the binomial edge ideal $I(E)_D$=$I(D,E)$ $\cap$
$K[e_{1},\ldots,\\e_{m}]$ is equal to $\pi(I(H_D,E)) \cap
K[e_{1},\ldots,e_{m},f_{1},\ldots,f_{n}]$. It follows that
$I(E)_D$ is generated by binomials $f$=$\prod_{i\in
I}f_{i}\prod_{i'\in I'}e_{i'}-\prod_{j\in J}f_{j}\prod_{j'\in
J'}e_{j'}$ with $I, J \subseteq \{1,\ldots,n\}$, $I', J'
\subseteq \{1,\ldots,m\}$, $|I|+|I'|=|J|+|J'|$ and
$f_{i}$=$f_{j}$=$1$ for all $i,j=1,\ldots,n$. If $C$ is a directed
cycle in $D$, then we can relabel the vertices and the edges in
such a way that $C$= \{$e_{1}=[v_{1},v_{2}]$,
\ldots,$e_{q-1}=[v_{q-1},v_{q}]$,$e_{q}=[v_{q},v_{1}]\}$ and
$C'$=$\{e_{1},f_{2},e_{2},f_{3}, \ldots, e_{q-1},f_{q},
e_{q},f_{1}\}$ is an even cycle in $H_D$. So the binomial
$f_{C'}$=$\prod_{h=1,\ldots,q}e_{h}- \prod_{h=1,\ldots,q}f_{h}$ is
in $I(H_D,E) \cap K[e_{1}, \ldots, e_{m}, f_{1}, \ldots, f_{n}]$
and the binomial $f_{C}$ = $\prod_{h=1,\ldots,q}e_{h}-1$ is in
$I(D,E) \cap
K[e_{1},\ldots,e_{m}]$.\\
Now let $C$ be an undirected cycle in $D$. Once again we can
relabel the vertices and the edges in such a way that $C$=
$\{\{v_{1},v_{2}\}, \ldots, \{v_{q-1},
v_{q}\},\{v_{q},v_{q+1}=v_{1}\}\}$. Let $C'$ be the path in $H_D$ given
in the following way: $C'$=$g_{1},\ldots,g_{q}$, where
$g_{h}=e_{h}f_{h+1}$ if $e_{h}=\{z_{h},v_{h+1}\}$ and
$[v_{h},v_{h+1}] \in E(D)$, while $g_{h}=f_{h+1}e_{h}f_{h}$ if
$e_{h}=\{z_{h+1},v_{h}\}$ and $[v_{h+1},v_{h}] \in E(D)$.
$C'$=$\{z_{1},v_{2},z_{2},v_{3},
\ldots,z_{q-1},v_{q},z_{q},v_{1}\}$ is an even cycle in $H_D$. The
corresponding binomial $f_{C'}$ is in the ideal $I(H_D,E) \cap
K[e_{1},\ldots,e_{m},f_{1},\ldots,f_{n}]$, while its image in
$K[e_{1},\ldots,e_{m}]$ is the binomial $f_{C}$
corresponding to $C$.\\
Conversely given a cycle $C'$ in $H_D$, then it is an even cycle,
because $H_D$ is bipartite. So
$C'$=$\{z_{i(1)},v_{i(2)},z_{i(3)},v_{i(4)},
\ldots,z_{i(q-3)},v_{i(q-2)},z_{i(q-1)},v_{i(q)},z_{i(q+1)}$=$z_{i(1)}\}$.
Let $e_{h}=\{z_{i(h)},v_{i(h+1)}\}$ whenever $i(h) \neq i(h+1)$
and let $f_{h}=\{z_{i(h)},v_{i(h+1)}\}$ whenever $i(h)=i(h+1)$.
$C'$ determines the cycle $C$= $\{v_{i(1)},v_{i(2)}, \ldots,
v_{i(q)},v_{i(q+1)}=v_{i(1)}\}$, where $v_{i(h)}=v_{i(h+1)}$,
whenever $i(h)=i(h+1)$ and $e_{i(h)}=\{v_{i(h)}, v_{i(h+1)}\}$ is
in $E(D)$, whenever $i(h) \neq i(h+1)$. \qed
\begin{rem}
The existence of the toric ideal as above was proved in a different
way, by using circuits associated to the transpose of the incidence
matrix, in 2005 by Reyes (\cite{Re05}) and in 2006 by Gitler, Reyes
and Villarreal (\cite{GRV06}).
\end{rem}
\begin{cor}\label{cycles undirected}
Let $G$ be an undirected graph and let $G^d$ be a directed graph,
whose underlying graph is $G$ (i.e.\ let $G^d$ be a directed graph
such that $G^d_{u}=G$). Then the ideal $I(E)_{G^d}$ is generated by
all binomials $f_C$, such that $C$ is a cycle of $G$.
\end{cor}
\textbf{Proof}. Let $V(G)=\{v_1, \ldots, v_n\}$ and
$E(G)=\{e_{i,j}=\{v_i,v_j\}, i,j \in \{1, \ldots, n\}\}$. It is
possible to construct such directed graph $G^d$ by setting
randomly a direction in each edge. Let $V(G^d)=V(G)$ and let
$E(G^d)=\{e_{i,j}=[v_i,v_j]$, such that $\{v_i,v_j\} \in E(G)\}$.
By the theorem above, $I(E)_{G^d}$ is generated by binomials
$f_C=\prod_{i \in I} e_i - \prod_{j \in J} e_j$, where $C$ is a
cycle of $G^d$ and $J=\emptyset$ whenever $C$ is a directed cycle.
Since every cycle in $G^d$ is a cycle in $G$, the claim
follows. \qed
\noindent The theorem as above gives also decision procedures for
the existence of directed and undirected cycles in a digraph as in
the following remark.
\begin{rem}\label{diundicycles}
The proof of the theorem above allows us to check whether a given
digraph $D$ has cycles and whether each cycle is directed
or undirected. By using Maple 10 we implemented the corresponding
decision procedures. The algorithm works as follows: given a
digraph $D$ by using the package {\em networks} we find the
incidence matrix $M$ of $D$ and then by using the package {\em
Groebner} we get a Gr\"{o}bner basis $B$ of the toric ideal $I(E)_D$
of $M$. If $B$ contains a polynomial of the form $f=\prod_{i \in I}
e_i -1$, then in $D$ there is a directed cycle of length $|I|$.
Furthermore if $B$ contains a polynomial of the form $g=\prod_{i \in
I} e_i - \prod_{j \in J} e_j$, then in $D$ there is an undirected
cycle of length $|I|+|J|$. Of course the binomials associated to
even and odd cycles in a digraph are in the edge toric ideal, while
in the undirected case only even cycles appear in the edge toric
ideal associated to the graph. By using corollary \ref{cycles undirected} we are able to
check whether an undirected graph $G$ has even and odd cycles. We
can construct $G^{d}$ by simply choosing a random direction for
each edge and we can find the toric ideal $I(E)_{G^{d}}$. Such an ideal
is generated by binomials of the form either $f=\prod_{i \in
I}e_{i}-1$ or $g=\prod_{i \in I}e_{i}-\prod_{j \in J}e_{j}$, that
are in one to one correspondence with cycles in $G$.
\end{rem}
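\noindent As a small illustration of this decision procedure (again in
Python with SymPy rather than Maple, and taking as input the Gr\"{o}bner
basis computed in example \ref{tD1} below), the basis binomials can be
classified as follows.
\begin{verbatim}
# Sketch: classify the basis binomials of I(E)_{D_1} into directed cycles
# (one monomial minus 1) and undirected cycles (two nonconstant monomials).
import sympy as sp

e = sp.symbols('e1:7')
e1, e2, e3, e4, e5, e6 = e
basis = [e1*e2*e3 - 1, e1*e2*e5 - e4, e3*e4 - e5]   # from example tD1

for f in basis:
    terms = sp.Poly(f, *e).terms()            # [(exponent tuple, coeff), ...]
    has_constant = any(sum(m) == 0 for m, _ in terms)
    length = sum(sum(m) for m, _ in terms)    # |I| or |I| + |J|
    kind = "directed" if has_constant else "undirected"
    print(f, "->", kind, "cycle of length", length)
\end{verbatim}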
\noindent By using the algorithm sketched in the last remark it is
possible to deduce properties about the digraphs $D_1$ and $D_2$ in
examples \ref{D1} and \ref{D2}.
\begin{ex}\label{tD1}
The Gr\"{o}bner basis of the toric ideal $I(E)_{D_1}$ with respect to
the lexicographic term order $\sigma_{1}$ with
$v_{1}>_{\sigma_{1}}
z_{1}>_{\sigma_{1}}v_{2}>_{\sigma_{1}}z_{2}>_{\sigma_{1}}v_{3}>_{\sigma_{1}}z_{3}
>_{\sigma_{1}}v_{4}>_{\sigma_{1}}z_{4}>_{\sigma_{1}}v_{5}>_{\sigma_{1}}
z_{5}>_{\sigma_{1}}e_{3}>_{\sigma_{1}}e_{1}>_{\sigma_{1}}e_{4}
>_{\sigma_{1}}e_{2}>_{\sigma_{1}}e_{5}>_{\sigma_{1}}e_6$ is
$(e_1 e_2 e_3-1, e_1 e_2 e_5 - e_4, e_3 e_4-e_5)$. So the digraph
$D_1$ has a directed cycle with the edges $e_1, e_2, e_3$ and two
undirected cycles with the edges $e_3, e_4, e_5$ and $e_1, e_2,
e_4, e_5$.
\end{ex}
\begin{ex}\label{tD2}
The digraph $D_2$ is a DAG; in fact, the Gr\"{o}bner basis of the
toric ideal $I(E)_{D_2}$ with respect to the lexicographic term order
$\sigma_{2}$ with $v_{1}>_{\sigma_{2}}
z_{1}>_{\sigma_{2}}v_{2}>_{\sigma_{2}}z_{2}>_{\sigma_{2}}v_{3}>_{\sigma_{2}}z_{3}
>_{\sigma_{2}}v_{4}>_{\sigma_{2}}z_{4}>_{\sigma_{2}}v_{5}>_{\sigma_{2}}
z_{5}>_{\sigma_{2}}e_{1}>_{\sigma_{2}}e_{3}>_{\sigma_{2}}e_{2}
>_{\sigma_{2}}e_{4}>_{\sigma_{2}}e_{5}$ is
$(e_1 e_2 - e_3)$ and the only cycle with the edges $e_1, e_2,
e_3$ is undirected.
\end{ex}
\smallskip
\noindent
Now we study a property of digraphs that can be easily
checked by using theorem \ref{cycles}.
\begin{defn}
Given a digraph $D$, a vertex $v$ is reachable from another vertex
$u$ if there is a directed path that starts from $u$ and ends at
$v$. If $v$ is reachable from $u$, then $u$ is a {\em predecessor}
of $v$ and $v$ is a {\em successor} of $u$.
\end{defn}
\begin{defn}
Let $D$ be a digraph. $D$
is a $UPD$ (Unique Path Digraph) iff, for all vertices $u$ and $v$ in $D$,
whenever $v$ is a successor of $u$, there is a unique elementary path from
$u$ to $v$.
\end{defn}
\smallskip
\noindent It is easy to show that a cycle of a $UPD$ is directed.
The following theorem is useful for a characterization of a $UPD$.
\begin{thm}\label{UPD}
Let $D$ be a digraph and let $C_1$ and $C_2$ be two directed
cycles in $D$, such that $C_1$ and $C_2$ are not edge-disjoint.
Then $C=(C_1 \cup C_2) \backslash (C_1 \cap C_2)$
is an undirected cycle.
\end{thm}
\textbf{Proof}. Let $I(E)_D$ be the edge ideal associated to $D$.
By theorem \ref{cycles} the binomials representing the two
directed cycles $C_1$ and $C_2$ lie in $I(E)_D$. Let $f_1=\prod_{i
\in I} e_i -1$ be the binomial associated to $C_1$ and let
$f_2=\prod_{j \in J} e_j -1$ be the binomial associated to $C_2$.
Since $C_1 \cap C_2 \neq \emptyset$, we have $K=I \cap J \neq
\emptyset$ and $f_1=(\prod_{k \in K} e_k \cdot \prod_{i \in I
\backslash K} e_i) -1$, while $f_2=(\prod_{k \in K} e_k \cdot
\prod_{j \in J \backslash K} e_j) -1$. Let $\sigma$ be a
lexicographic term ordering in $K[e_1, \ldots, e_m]$,
where $m$ is the number of edges in $D$. The S-polynomial between
$f_1$ and $f_2$ with respect to the term ordering $\sigma$ is the
binomial $f= -\prod_{j \in J \backslash K} e_j + \prod_{i \in I
\backslash K} e_i$. By remark \ref{diundicycles} the cycle $C$
corresponding to $f$ is an undirected cycle lying in $D$ and
$C$=$(C_1 \cup C_2) \backslash (C_1 \cap C_2)$. \qed
\begin{cor}\label{corUPD}
Let $D$ be a digraph. $D$ is a $UPD$ if and only if its cycles
are directed with at most one vertex in common.
\end{cor}
\textbf{Proof}. If $D$ is a tree or a forest there is nothing to
prove. Otherwise all cycles have to be directed and, by theorem
\ref{UPD}, they can have at most one vertex in common. \qed
\begin{rem}
The UPD property of a digraph $D$ can be easily checked by using
theorem \ref{cycles} and corollary \ref{corUPD}. In fact, it is easy
to show that $D$ is a $UPD$ if and only if either the binomial edge
ideal $I(E)_{D}=(0)$, i.e.\ $D$ has no cycles, or $I(E)_{D}$ is
generated by binomials $f_{h}=\prod_{j \in J(h)}e_{j}-1$, where the
monomials $\prod_{j \in J(h)}e_{j}$ and $\prod_{j \in J(h')}e_{j}$ are
coprime for all $h \neq h'$.
\end{rem}
\section{Linear ideals arising from digraphs}
\label{linear}
Here we introduce some linear ideals that can be associated to
a digraph and used in decision procedures. By using algorithms
from linear algebra we may lose some properties of a digraph.
Nevertheless, such algorithms are very useful, because they are fast.
\noindent Let $D$ be a digraph and let $H_D$ be the associated
undirected graph defined as before. In \cite{CF04} the extended
linear edge ideal (respectively the linear edge ideal) can be
associated to $H_D$ and relations between the extended linear edge
ideal and the extended binomial edge ideal (respectively the linear
edge ideal and the binomial edge ideal) are shown.
\smallskip
\noindent Let $R=K[v_{1},z_{1},f_{1}, \ldots, v_{n},z_{n},f_{n},
e_{1}, \ldots, e_{m}]$ and let $\pi': R \rightarrow
R/(f_{1},\ldots, f_{n})$ be the canonical ring homomorphism defined
by $\pi'(v_{i})=v_{i}$, $\pi'(z_{i})=z_{i}$, $\pi'(f_{i})=0$,
$\pi'(e_{j})=e_{j}$ for all $i=1,\ldots,n$ and $j=1,\ldots,m$.
\begin{defn}
The ideal $LI(D,E)$=$\pi'(LI(H_D,E))$ in $K[e_1, \ldots, e_m, v_1,
\ldots, v_n]$ is the {\em linear extended edge ideal of} $D$. The
ideal $LI(E)_D$ = $LI(D,E)\cap K[e_1, \ldots, e_m]$ is the {\em
linear edge ideal of} $D$.
\end{defn}
\begin{rem}
It is easy to show that the ideal $LI(D,E)$=$($ $e_h+v_{i}-v_{j}$:
$e_h=[v_{i},v_{j}] \in E(D)$ $)$ in $K[e_1, \ldots, e_m, v_1,
\ldots, v_n]$ by definition of $\pi'$.
\end{rem}
\noindent By repeating the proof as in \cite{CF04}, if we take the
matrix $M=IM(D)$ and the linear homomorphism of semigroup algebras
$\psi$ : $K[e_1,\ldots,e_m] \longrightarrow
K[v_1,\ldots,v_n,v_{1}^{-1},\ldots,v_{n}^{-1}]$ defined by
$\psi(e_h)=v_j-v_i$ whenever $e_h=[v_i,v_j]$ for all
$h=1,\ldots,m$, then $LI(E)_{D}$ coincides with the ideal
$LI_{M}$=$ker(\psi)$ in $K[e_1,\ldots,e_m]$, which is called the
{\em linear ideal of} $M$.
\begin{ex}
Let $D_1$ be the digraph as in examples \ref{D1} and \ref{tD1}. The
linear extended edge ideal of $D_1$ is $LI(D_1,E)=(e_1+v_1-v_2,
e_2+v_2-v_3, e_3+v_3-v_1, e_4+v_1-v_4, e_5+v_3-v_4, e_6+v_3-v_5)$,
while the linear edge ideal of $D_1$ is
$LI(E)_{D_1}=(e_3+e_4-e_5,e_1+e_5+e_2-e_4)$.
\end{ex}
\begin{ex}
Let $D_2$ be the digraph as in examples \ref{D2} and \ref{tD2}. The
linear extended edge ideal of $D_2$ is $LI(D_2,E)=(e_1+v_1-v_2,
e_2+v_2-v_3, e_3+v_1-v_3, e_4+v_4-v_3, e_5+v_3-v_5)$, while the
linear edge ideal of $D_2$ is $LI(E)_{D_2}=(e_1-e_3+e_2)$.
\end{ex}
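\smallskip
\noindent Since, by the discussion above, $LI(E)_D$ is determined by the
kernel of the map $\psi$ defined through the incidence matrix, its
generators can also be obtained by plain linear algebra (cf.\ the discussion
of cycle spaces at the end of this section). The following sketch (Python
with SymPy; illustrative only) recovers generators of $LI(E)_{D_1}$ from the
nullspace of $IM(D_1)$; the basis returned by the solver spans the same
space as the generators listed in the example above, although the individual
vectors may differ.
\begin{verbatim}
# Sketch: generators of the linear edge ideal LI(E)_{D_1} from the
# nullspace of the incidence matrix IM(D_1).
import sympy as sp

arcs = [(1, 2), (2, 3), (3, 1), (1, 4), (3, 4), (3, 5)]   # e_1..e_6 of D_1
n, m = 5, len(arcs)

IM = sp.zeros(n, m)                  # rows = vertices, columns = edges
for h, (i, j) in enumerate(arcs):
    IM[i - 1, h] = -1                # e_h leaves v_i
    IM[j - 1, h] = 1                 # e_h arrives at v_j

e = sp.symbols('e1:7')
for c in IM.nullspace():             # dimension |E| - |V| + 1 = 2
    print(sp.expand(sum(ci * ei for ci, ei in zip(c, e))))
\end{verbatim}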
\smallskip \noindent In order to show the relations between the
linear ideal and the toric ideal associated to the incidence
matrix $IM(D)$ we need the following definition.
\begin{defn}
{\em (\cite{St95})} Let $M$=$(m_{ij})_{i=1,\ldots,m,j=1,\ldots,n}$
be an $(m,n)$-matrix with $m_{ij}$ in ${\bf N}_{0}$ for all $i,j$,
let $\pi_{M}$ : ${\bf N}_{0}^{n} \longrightarrow {\bf Z}^{m}$ be the
semigroup homomorphism of definition \ref{deftoric} and let
$\pi_{M}'$ : ${\bf Z}^{n} \longrightarrow {\bf Z}^{m}$ be its natural
extension to ${\bf Z}^{n}$. $Ker(M)$
is the kernel of the homomorphism $\pi_{M}'$. Moreover,
if a finite set $F$ generates $Ker(M)$ as a {\bf Z}-module, then
$I(F)$= $(e^{u^{+}}-e^{u^{-}}, u \in F)$ is a {\em lattice ideal
associated to} $M$.
\end{defn}
\begin{rem}\label{rem5.1}
$\phi_{n}^{-1}(Ker(M))$ is a {\bf Z}-submodule of
$\bigoplus_{j=1,\ldots,n}{\bf Z}X_{j}$ and it is the kernel of the
{\bf Z}-module homomorphism $\phi_{m}\pi_{M}'\phi_{n}$ by definition
of the homomorphisms $\pi_{M}'$, $\phi_{n}$ and $\phi_{m}$. Finally
since $\bigoplus_{j=1,\ldots,n}{\bf Z}X_{j}$ is a {\bf Z}-submodule
of $A=K[X_1,\ldots,X_n]$, then $\phi_{n}^{-1}(Ker(M))$=$ker(\psi)
\cap \bigoplus_{j=1,\ldots,n}{\bf Z}X_{j}$.
\end{rem}
\smallskip
\noindent The following definition is also useful and
related to the notion of saturation.
\begin{defn}
Let $I$ be an ideal in the polynomial ring $A$ and let $f \in A$.
$I:f^{\infty}$=($g$: $gf^{m} \in I$ for some $m \in {\bf N}_{0}$).
\end{defn}
\smallskip
\noindent Of course $I \subseteq I:f^{\infty}$ for all $f \in A$.
\smallskip
\noindent The relation between the ideals $LI(E)_D$ and
$I(E)_D$ is
given by the following facts.\\
First of all, in \cite{HS95} and (\cite{St95}, p.114) it is shown
that if a finite set $B$ generates $Ker(IM(D))$ as a {\bf Z}-module,
$J_{0}$=$I(B)$=$(e^{u^{+}}-e^{u^{-}}, u \in B)$ and
$J_{i}$=$(J_{i-1}:e_{i}^\infty)$ for all $i=1,\ldots,m$, then
$J_{m}=I(E)_{D}$. \\
Furthermore, the generators of $J_{0}$ are in one to one
correspondence with the elements of $B$ and hence in one to one
correspondence with the elements of $\phi_{n}^{-1}(B)$, which
generate the {\bf Z}-module $\phi_{n}^{-1}(Ker(IM(D)))$=$LI_{IM(D)}
\cap \bigoplus_{j=1,\ldots,n}{\bf Z}e_{j}$=$LI(E)_{D} \cap
\bigoplus_{j=1,\ldots,n}{\bf Z}e_{j}$ by remark \ref{rem5.1}.
\begin{rem}
The algorithm in \cite{HS95}, called the {\em saturation
algorithm}, is one of the existing algorithms for finding such an
ideal. Another well-known algorithm is the {\em Lift-and-Project
Algorithm} in \cite{BLR99}; more recently, see the algorithm in
\cite{HM06}.
\end{rem}
\smallskip
\noindent Now the relation between the ideals $LI(D,E)$ and
$I(D,E)$ is the same as the relation between the ideals $LI(G,E)$
and $I(G,E)$ when $G$ is an undirected graph as in \cite{CF04}.
The relations between binomial and linear ideals associated to a
digraph give also the following theorem.
\begin{thm}\label{licycles}
Let $D$=$(V(D),E(D))$ be a simple digraph.\\
(i) The directed cycle
$C$=$(e_{1}, \ldots, e_{k})$ is in $D$ iff the linear
polynomial $h_{C}$=$\sum_{h=1, \ldots, k}e_{h}$ is in $LI(D,E)$.\\
(ii) The undirected cycle $C$=$(e_1, \ldots, e_k)$ is in $D$ iff
the linear polynomial $h_{C}$=$\sum_{i \in I} e_i - \sum_{j \in J}
e_j$, with $|I|+|J|=k$, is in $LI(E)_{D}$.
\end{thm}
\textbf{Proof}. The proof of (i) and (ii)
is the same as the proof of the theorems about the corresponding
binomial ideals. \qed
\begin{rem}
The hypothesis on the
characteristic of the field $K$ is necessary here. In fact, the theorem
does not hold when the characteristic of $K$ is different from
$0$.
\end{rem}
\begin{ex}\label{lD1}
Let $D_1$ be the graph as in examples \ref{D1} and \ref{tD1}. Then
the Gr\"{o}bner basis of $LI(D_1,E)$ with respect to $\sigma_1$ is
$(e_4-e_5+e_3, -e_4+e_5+e_1+e_2, -e_5+v_4+e_6-v_5, e_6+v_3-v_5,
v_2+e_2-v_5+e_6, v_1-v_5+e_4+e_6-e_5)$.
\end{ex}
\begin{ex}\label{lD2}
Let $D_2$ be the graph as in examples \ref{D2} and \ref{tD2}. The
Gr\"obner basis of the ideal $LI(D_2,E)$ with respect to
$\sigma_2$ is $(e_1-e_3+e_2,e_5-v_5+e_4+v_4, e_3+v_1+e_5-v_5,
e_5-v_5+v_3, v_2- v_5+e_2+e_5)$.
\end{ex}
\noindent By using linear ideals, the procedures above are very
fast. However, theorem \ref{licycles} establishes only that a
polynomial $p$ corresponding to a directed cycle is in the
ideal; there is no guarantee that $p$ appears in some Gr\"obner basis of
$LI(E)_D$ (as in example \ref{lD1} with $p=e_1+e_2+e_3$), and therefore we
cannot decide whether $D$ is a DAG. Nevertheless, we can use the linear
ideals whenever we only want to check the existence of cycles.
\medskip
\noindent The same results can be obtained by using the classical
tools of computational graph theory applied to cycle spaces and
cycle bases as in \cite{KMMP04} and \cite{KM05}.
\begin{defn}
Let $D$=$(V,E)$ be a simple digraph and let $C$ be cycle in $D$.
The {\em incidence vector of} $C$ is a vector in
$\{-1,0,1\}^{|E|}$ defined as follows. For every $e \in E$, $C(e)$ is
equal to $1$ if $e=[v_i,v_j] \in C$, $C(e)$ is equal to $-1$ if
$-e=[v_j,v_i] \in C$ and $C(e)$ is equal to $0$ if $e \notin C$.
The {\em cycle space of} $D$ is the vector space over ${\bf Q}$
spanned by the incidence vectors of its cycles. A {\em cycle basis
of} $D$ is a basis of the cycle space.
\end{defn}
\begin{rem}
If $D$ is connected, then the cycle space of $D$ has dimension
$d=|E|-|V|+1$.
\end{rem}
It is well known (\cite{GLS03} and \cite{Bo98}) that if $D$ is a
connected digraph, then the cycle space of $D$ is the kernel of
the linear map from the edge space $E(D)$ to the vertex space
$V(D)$ defined by the incidence matrix $IM(D)$ and a basis of such
vector space is a cycle basis of $D$. Furthermore if $D$ is
strongly connected, i.e. for every pair of vertices $u$ and $v$ in
$D$, there are directed paths both from $u$ to $v$ and from $v$ to
$u$, then it is possible to find a cycle basis that contains all
cycles of length $2$ and all directed cycles.
\medskip
\noindent Now let $D=(V(D),E(D))$ be a simple digraph. Let
$n=|V(D)|$ and let $m=|E(D)|$. The ideal $LI(D,E)$
is generated by the kernel of the linear map associated to
the $(m,n+m)$-matrix $(IM(D)^t,-I(m))$, while $LI(E)_D$ is
generated by a cycle basis of $D$, when we identify the incidence
vector of a cycle $C$ with the corresponding linear polynomial in
the edges of $C$. The results in \cite{GLS03} show that we can
determine all directed cycles in a digraph when some additional hypotheses
are satisfied (in particular, digraphs with double edges, i.e.\
edges connecting two vertices in both directions, are allowed), while a
basis of the toric ideal associated to the transpose of the
incidence matrix, which is also an ideal in the edges containing
the lattice ideal associated to the same matrix, contains a
minimal set of directed cycles of $D$. In fact, in general the set
of all directed cycles of a digraph $D$ is not a basis of a vector
space, because the sum of two directed cycles is not well defined.
\smallskip
\noindent
There are many digraphs that are not strongly
connected, for which we can apply neither the algorithm in
\cite{GLS03} nor our theorem \ref{licycles}. In these cases we have to use
theorem \ref{cycles}. Finally, in \cite{KMMP04} the authors give a
fast algorithm to compute a cycle basis of minimal weight in
undirected graphs.
\section{Sink and source covers of a digraph}
Here we introduce another undirected graph $K_{D}$ that we can
associate to a digraph $D$. We can prove some properties of $D$
through the properties of such a graph.
\begin{defn}
Let $D$ be a digraph. Let $K_D$ be the undirected subgraph of
$H_D$ with $V(K_D)$=$V(D) \cup \{z_{1},\ldots,z_{n}\}$
and $E(K_D)$=\{$e=\{z_{i},v_{j}\}$: $[v_{i},v_{j}]
\in E(D)$\}. $K_D$ is called the {\em sink-source undirected graph} associated to $D$.
\end{defn}
\smallskip
\noindent In \cite{CF04} the extended vertex
ideal of an undirected graph $G_u$ is introduced as $I(G_u,V(G_u))$=($v_i-\prod
e_{h}$: the product runs over all $e_h \in E(G_u)$ containing $v_i$).
\begin{defn}
Let $D$ be a digraph and let $D^*=K_D \setminus L$, where $L$ is the
set of isolated vertices. The {\em extended divertex ideal} of $D$
is equal to $I(D^*,V)$. The {\em divertex ideal} $I(V)_{D}$ of $D$ is its
intersection with $K[v_1,\ldots,v_n,z_1,\ldots,z_n]$.
\end{defn}
\smallskip
\noindent The following result can be found in \cite{CF04}.
\begin{thm}\label{bip undirected}
Let $G$=$(V(G),E(G))$ be a simple undirected connected graph without
isolated vertices and let $I(G,V)$ be the extended vertex ideal of
$G$. Then the ideal $I(V)_{G}$=$I(G,V)\cap K[v_1,\ldots,v_n]$
contains an irreducible polynomial $p$ of the form $p$=$\prod_{j \in
J}v_{j}-\prod_{k \in K}v_k$, if and only if $G$ is bipartite.
Moreover the partition sets are $V'$=$\{v_j: j \in J\}$ and
$V''$=$\{v_k: k \in K\}$.
\end{thm}
\smallskip
\noindent A possible generalization of the last theorem to
digraphs uses the notion of a directly bipartite graph.
\begin{thm}
Let $D$=$(V(D),E(D))$ be a digraph and let $I(V)_{D}$ be the
divertex ideal of $D$. Then $D$ is directly bipartite if and only if
$I(V)_{D}$ contains an irreducible polynomial of the form
$p=\prod_{i \in I} z_i - \prod_{j \in J} v_j$, with $I \cap
J=\emptyset$, $I \cup J= \{1, \ldots, n\}$.
\end{thm}
\textbf{Proof}. $D$ is directly bipartite if and only if $V(D)=V_I
\cup V_J$, with $V_I=\{v_i: i \in I\}$, $V_J=\{v_j: j \in J\}$,
$I \cap J=\emptyset$, $I \cup J=\{1, \ldots, n\}$ and every edge
in $E(D)$ goes from a vertex in $V_I$ to a vertex in $V_J$. By
definition of $K_D$ this last fact is equivalent to say that $K_D$
is bipartite with partition sets $Z=\{z_i: i \in I\}$ and
$V=\{v_j: j \in J\}$ and with the $n$ isolated vertices $\{z_j: j
\in J\} \cup \{v_i: i \in I\}$. Now, by theorem \ref{bip
undirected}, this fact is equivalent to saying that the binomial
$p=\prod_{i \in I} z_i - \prod_{j \in J} v_j$ is in the extended
vertex ideal of $K_D$. Finally, this last ideal coincides with the
extended divertex ideal of $D$ by definition. \qed
\smallskip
\noindent Let us recall that a {\em vertex cover} $W$ of an
undirected graph $G=(V,E)$ is a subset of $V$, such that every
edge in $E$ is incident with at least one vertex in $W$. A vertex
cover $W$ is called {\em minimal} if no proper subset of $W$ is a
vertex cover. It is possible to generalize the concept of minimal
vertex cover for digraphs, in the following way:
\begin{defn}
Let $G=(V,E)$ be a digraph. A {\em source cover of} $G$ is a vertex
cover $V'$ of $G$, such that every edge in $G$ leaves some vertex
in $V'$. A source cover $V'$ of $G$ is called {\em minimal} if no
proper subset of
$V'$ is a source cover of $G$.\\
A {\em sink cover of} $G$ is a vertex cover $V'$ of $G$, such that
no edge in $G$ leaves any vertex in $V'$. A sink cover
$V'$ of $G$ is called {\em minimal} if no proper subset of $V'$ is a sink
cover of $G$.
\end{defn}
\noindent The following proposition is the key result for our
purposes.
\begin{prop} \label{Villarreal} $($ \cite{Vi01}$)$
Let $K[v]=K[v_1, \ldots, v_n]$ be a polynomial ring over a field $K$
and let $G$ be an undirected graph. If {\em P} is the ideal of
$K[v]$ generated by $A=\{v_{i1}, \ldots, v_{ir}\}$, then {\em P}
is a minimal prime ideal containing the edge ideal $I(G)_{E}$ if
and only if $A$ is a minimal vertex cover of $G$.
\end{prop}
\smallskip
\noindent By using the undirected graph $K_G$ it is possible to
find source and sink covers of a digraph $G$, according to the
following proposition.
\begin{prop}\label{sscovers}
Let $G$ be a directed graph and let $K_G$ be the associated
source-sink undirected graph.
Let $V'=\{v_{i(1)},\ldots,v_{i(l)}\}$ be a subset of $V(G)$. Then $V'$ is a source cover of $G$ if and
only if $\{z_{i(1)},\ldots,z_{i(l)}\}$ is a vertex cover of $K_G$.
Similarly,
$V'=\{v_{i(1)},\ldots,v_{i(l)}\}$ is a sink cover of $G$ if and only if $\{v_{i(1)},\ldots,v_{i(l)}\}$
is a vertex cover of $K_G$.
\end{prop}
\textbf{Proof}. Let $V'=\{v_{i(1)},\ldots,v_{i(l)}\}$ be a source
cover of $G$.
So $V'$ is a vertex cover of $G$, such that each edge of $G$ leaves at least one vertex in $V'$. It
follows that $\{z_{i(1)},\ldots,z_{i(l)}\}$ is a vertex cover of $K_G$ by its own definition. Similarly,
if $V'$ is a sink cover of $G$, then
$V'$ is a vertex cover of $G$ such that no edge of $G$ leaves any vertex in $V'$. So $V'$
is a vertex cover of $K_G$ by its own definition. Conversely, let $Z'$=$\{z_{i(1)},\ldots,z_{i(l)}\}$
be a vertex cover of $K_G$. So each edge in $K_G$ is incident with at least one vertex in $Z'$. It
follows that each edge in $G$ is incident with at least one vertex in
$V'=\{v_{i(1)},\ldots,v_{i(l)}\}$ and it leaves such vertex by definition of $K_G$. So $V'$ is a
source cover of $G$. Similarly, if $V'=\{v_{i(1)},\ldots,v_{i(l)}\}$ is a vertex cover of $K_G$,
then it is a vertex cover of $G$. Moreover $V'$ is a sink cover of
$G$ by definition of $K_G$. \qed
\begin{ex}
Let $D_1$ be the graph as in example \ref{D1}. $\{z_1,z_2,z_3\}$
and $\{v_1,v_2,v_3,v_4,v_5\}$ are minimal vertex covers of
$K_{D_1}$. $\{v_1,v_2,v_3\}$ is a source cover of $D_1$ and
$\{v_1,v_2,v_3,v_4,v_5\}$ is a sink cover of $D_1$. $D_1$ is not
directly bipartite.
\end{ex}
\begin{ex}
Let $D_2$ be the graph as in example \ref{D2}. $\{z_1,z_2,z_3,z_4\}$
and $\{v_2,v_3,v_5\}$ are minimal vertex covers of $K_{D_2}$.
$\{v_1,v_2,v_3,v_4\}$ is a source cover of $D_2$ and
$\{v_2,v_3,v_5\}$ is a sink cover of $D_2$. $D_2$ is not directly
bipartite.
\end{ex}
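\noindent Proposition \ref{sscovers} reduces the source and sink cover tests
to ordinary vertex-cover tests in $K_D$. A direct transcription in Python
(illustrative only; the paper's own procedures use the {\em networks}
package of Maple) reads as follows, applied to the digraph $D_1$.
\begin{verbatim}
# Sketch: check source/sink covers of D_1 via vertex covers of K_{D_1}.
arcs = [(1, 2), (2, 3), (3, 1), (1, 4), (3, 4), (3, 5)]   # [v_i, v_j] of D_1
K_edges = [(f"z{i}", f"v{j}") for i, j in arcs]           # edges {z_i, v_j}

def is_vertex_cover(edges, cover):
    return all(a in cover or b in cover for a, b in edges)

def is_source_cover(vertex_indices):    # test {z_i : v_i in V'} on K_D
    return is_vertex_cover(K_edges, {f"z{i}" for i in vertex_indices})

def is_sink_cover(vertex_indices):      # test {v_i : v_i in V'} on K_D
    return is_vertex_cover(K_edges, {f"v{i}" for i in vertex_indices})

print(is_source_cover({1, 2, 3}))        # True, as in the example for D_1
print(is_sink_cover({1, 2, 3, 4, 5}))    # True
print(is_sink_cover({1, 2, 3}))          # False
\end{verbatim}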
\begin{ex}
Let $D_4$ be the digraph with vertex set $V(D_4)=\{v_1, \ldots,
v_5\}$ and edge set $E(D_4)$ = $\{e_1=[v_1,v_2]$, $e_2=[v_3,v_2]$,
$e_3=[v_1,v_4]$, $e_4=[v_3, v_4]$, $e_5=[v_3, v_5]\}$. $D_4$ is
directly bipartite and it has a sink cover and a source cover. In
fact the divertex ideal of $D_4$, obtained by intersecting the
ideal $(z_1-e_1 e_3, v_2-e_1 e_2, z_3-e_2 e_4 e_5, v_4-e_3 e_4,
v_5-e_5)$ with $K[z_1, \ldots, z_5, v_1, \ldots, v_5]$, is
$I(V)_{D_4}=(v_5 v_2 v_4-z_1 z_3)$. So $D_4$ is directly
bipartite, $\{z_1, z_3\}$ is a source cover and $\{v_2, v_4,
v_5\}$ is a sink cover. The same result can be obtained by directly finding
sink and source covers of $D_4$ through the vertex covers
of the undirected graph $K_{D_4}$.
\end{ex}
| {
"timestamp": "2007-03-13T16:48:23",
"yymm": "0703",
"arxiv_id": "math/0703381",
"language": "en",
"url": "https://arxiv.org/abs/math/0703381",
"abstract": "In this paper it is shown that it is possible to associate several polynomial ideals to a directed graph $D$ in order to find properties of it. In fact by using algebraic tools it is possible to give appropriate procedures for automatic reasoning on cycles and directed cycles of graphs.",
"subjects": "Commutative Algebra (math.AC); Combinatorics (math.CO)",
"title": "Polynomial ideals and directed graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9664104972521579,
"lm_q2_score": 0.7341195327172402,
"lm_q1q2_score": 0.7094608226557899
} |
https://arxiv.org/abs/1911.08801 | Ray Effect Mitigation for the Discrete Ordinates Method Using Artificial Scattering | Solving the radiative transfer equation with the discrete ordinates (S$_N$) method leads to a non-physical imprint of the chosen quadrature set on the solution. To mitigate these so-called ray effects, we propose a modification of the S$_N$ method, which we call artificial scattering S$_N$ (as-S$_N$). The method adds an artificial forward-peaked scattering operator which generates angular diffusion to the solution and thereby mitigates ray effects. Similar to artificial viscosity for spatial discretizations, the additional term vanishes as the number of ordinates approaches infinity. Our method allows an efficient implementation of explicit and implicit time integration according to standard S$_N$ solver technology. For two test cases, we demonstrate a significant reduction of the error for the as-S$_N$ method when compared to the standard S$_N$ method, both for explicit and implicit computations. Furthermore, we show that a prescribed numerical precision can be reached with less memory due to the reduction in the number of ordinates. |
\section{Results}
In the following, we evaluate the proposed method within the scope of two numerical test cases: (i) the line-source problem is used as it is inherently prone to ray-effects when using the S$_N$ method, and (ii) the lattice test case models---in a very simplified way---neutrons in a fission reactor with a source and heterogeneous materials. For both problems, we present results for the explicit and implicit methods, respectively.
Both test cases are computed on a two-dimensional regular grid for the spatial variable. We project the ${\bm{\Omega}}_q \in \mathbb{S}^2$ for $q=1,\ldots,N_q$ onto the $x$-$y$-plane.
The code used to compute the numerical results is published under the MIT license in a public repository at \texttt{https://github.com/camminady/SN}.
\label{sec:res}
\subsection{Line-source test case}
\label{subsec:results_linesource}
The goal of this test case is to numerically compute the Green's function for an initial isotropic Dirac-mass at the origin, i.e.\ $\psi(t=0,{\bm{x}},{\bm{\Omega}}) = \sfrac{1}{4\pi}\,\delta({\bm{x}})$, which is realized as a narrow Gaussian in space with $\psi(t=0,{\bm{x}},{\bm{\Omega}}) = \max\{10^{-4},\sfrac{1}{4\pi \delta}\,\exp( \sfrac{-{\bm{x}}^2}{4\delta})\}$ and $\delta = 0.03^2$. We choose $\sigma_s = \sigma_t = 1$. The spatial discretization varies from $50\times 50$ for a coarse grid to $200\times 200$ points on the domain $[-1.5,1.5]\times [-1.5,1.5]$ for a fine grid. There exists a semi-analytical solution to the full transport equation for this problem due to Ganapol et al.~\cite{ganapol2001homogeneous}. The exact solution consists of a circular front moving away from the origin as well as a tail of particles which have been scattered or not emitted perpendicularly from the center. We chose the line-source because it is a test case that lays bare almost any artifact an angular discretization might suffer from.
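For reference, the smoothed initial condition can be set up as follows; the sketch below (Python/NumPy, for illustration only and not taken from the published repository) uses the $200\times 200$ grid on $[-1.5,1.5]^2$, $\delta=0.03^2$, and the pointwise cutoff $10^{-4}$ stated above.
\begin{verbatim}
# Sketch: isotropic initial condition of the line-source test case.
import numpy as np

n_x, n_y = 200, 200
x = np.linspace(-1.5, 1.5, n_x)
y = np.linspace(-1.5, 1.5, n_y)
X, Y = np.meshgrid(x, y, indexing="ij")

delta = 0.03 ** 2
r2 = X ** 2 + Y ** 2
psi0 = np.maximum(1e-4, np.exp(-r2 / (4.0 * delta)) / (4.0 * np.pi * delta))
# the same value is assigned to every ordinate q = 1..N_q (isotropic), so
# the initial scalar flux is 4*pi*psi0 when the weights sum to 4*pi
\end{verbatim}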
The parameters of the artificial scattering are $\sigma_{\text{as}}$ and $\varepsilon$, where we set
$\varepsilon = \beta / N_q$ with $N_q$ the number of quadrature points.
Obviously, choosing these parameters requires some experience. However, as in the case of filtering for $P_N$ \cite{mcclarren2010robust}, both parameters can be adjusted for coarse angular and spatial grids, and are expected to be valid for finer grids, which seems to be the case for the line-source problem as well.
The CFL number, i.e.\ the ratio of the time step to the spatial cell size, is $0.95$ for the explicit calculations and $2$ for the implicit calculation.
For the implicit discretization, the tolerance for the GMRES solver was set to $1.5\cdot10^{-8}$, and we considered the inner source iteration to be converged at an estimated error of $10^{-4}$.
An overview of the analytics that we perform is given in Fig.~\ref{fig:1summarysn}. We evaluate the scalar flux $\Phi(t,{\bm{x}}) = \int_{\mathbb{S}^2} \psi(t,{\bm{x}},{\bm{\Omega}}')\,d{\bm{\Omega}}'$ at the final time step.
We have performed an explicit S$_4$ computation with $N_q=92$ and $N_x \times N_y = 200\times 200$. In both rows of Fig.~\ref{fig:1summarysn}, the left column shows the scalar flux at the final time step. The first row shows the solution along cuts through the domain on the right with the respective cuts on the left in white. The second row shows the solution along circles with different radii on the right and the respective circles on the left. Strong oscillations are visible due to ray-effects. For the first row, the analytical solution is given in green in the right column image. In the lower row, the analytical solution is constant along a circle with a certain radius, visualized for $r=0.2$, $r=0.6$, and $r=0.9$ in green.
In Fig.~\ref{fig:1summaryassn} and Fig.~\ref{fig:1summaryassnimpl} we see the same summary of results, now for an explicit and implicit computation, respectively. In both computations, ray-effects have been reduced significantly when comparing the results with Fig.~\ref{fig:1summarysn} despite the same number of quadrature points. The implicit calculation looks slightly more diffusive. However, the line-source problem is not a problem that would be computed implicitly in the first place and we only use it to illustrate the expected behavior for implicit computations.
The values for $\beta$ and $\varepsilon$ in the fine calculations are determined from a parameter study using coarse spatial and angular grids. The results of this parameter study are given in Fig.~\ref{fig:heatmapabsl1normalized} for the explicit algorithm and in Fig.~\ref{fig:heatmapabsl1normalizedimpl} for the implicit algorithm.
A single simulation for the coarse configuration takes $\sim \sfrac{1}{400}$ times the time of a single computation for the fine configuration. Consequently, the full parameter study with all $306$ configurations can be performed for less than the costs of a single fine computation. For the optimal parameter configuration, the error decreases down to $37.8\%$ for the explicit case and down to $41.4\%$ for the implicit case.
In both cases, implicit and explicit, we observe a region of parameters that yield similarly good results. This behavior mostly matches the predicted relation from the asymptotic analysis, i.e. when $\varepsilon$ is small, $\sigma_{\text{as}}\cdot \varepsilon$ controls the effect of artificial scattering.
\begin{figure}[h!]
\centering
\includegraphics[width=0.99\linewidth]{figures/s4_expl.png}
\caption{$S_4$ solution with ray-effects. We choose $N_q=92$ quadrature points, the spatial domain is composed of $N_x \times N_y = 200\times 200$ spatial cells and the CFL number is 0.95. Cuts through the domain and along circles with different radii are visualized in the right column. Only the solution along the horizontal cut is symmetric for the icosahedron quadrature.}
\label{fig:1summarysn}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{figures/ass4_expl.png}
\caption{as-$S_4$ solution with mitigated ray-effects. We choose $N_q=92$ quadrature points, the spatial domain is composed of $N_x \times N_y = 200\times 200$ spatial cells and the CFL number is 0.95. Cuts through the domain and along circles with different radii are visualized in the right column.
We set $\sigma_{\text{as}}=5$ and $\beta=4.5$. Only the solution along the horizontal cut is symmetric for the icosahedron quadrature.}
\label{fig:1summaryassn}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{figures/ass4_impl.png}
\caption{as-$S_4$ solution with mitigated ray-effects. We choose $N_q=92$ quadrature points, the spatial domain is composed of $N_x \times N_y = 200\times 200$ spatial cells and the CFL number is 2. Cuts through the domain and along circles with different radii are visualized in the right column. We set $\sigma_{\text{as}}=7$ and $\beta=4$. Only the solution along the horizontal cut is symmetric for the icosahedron quadrature.}
\label{fig:1summaryassnimpl}
\end{figure}
\begin{landscape}
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{figures/heatmap_explicit_L2.png}
\caption{Parameter study for $\sigma_{\text{as}}$ and $\varepsilon=\beta/N_q$ on a grid of $N_q\times N_x \times N_y = 12\times50\times50$ in an explicit calculation.
For every simulation we compute the L${}^2$ error of the scalar flux $\Phi$ with respect to a semi-analytical reference solution on the same spatial grid. The number in each field of the heatmap is then the baseline normalized error, i.e. the L${}^2$ error obtained for that specific parameter configuration divided by the error obtained without artificial scattering. For the case of $\beta=4.5$ and $\sigma_{\text{as}}=5$ (highlighted in yellow) the error drops down to $37.8\%$ of the original error without artificial scattering.}
\label{fig:heatmapabsl1normalized}
\end{figure}
\end{landscape}
\begin{landscape}
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{figures/heatmap_implicit_L2.png}
\caption{Parameter study for $\sigma_{\text{as}}$ and $\varepsilon=\beta/N_q$ on a grid of $N_q\times N_x \times N_y = 12\times50\times50$ in an implicit calculation.
For every simulation we compute the L${}^2$ error of the scalar flux $\Phi$ with respect to a semi-analytical reference solution on the same spatial grid. The number in each field of the heatmap is then the baseline normalized error, i.e. the L${}^2$ error obtained for that specific parameter configuration divided by the error obtained without artificial scattering. For the case of $\beta=4$ and $\sigma_{\text{as}}=7$ (highlighted in yellow) the error drops down to $41.4\%$ of the original error without artificial scattering.}%
\label{fig:heatmapabsl1normalizedimpl}
\end{figure}
\end{landscape}
\begin{figure}[h!]
\centering
\includegraphics[width=0.99\linewidth]{figures/relL2.png}
\caption{We computed $\delta_1 = \Vert\Phi_\text{numerical}-\Phi_\text{analytical}\Vert_2$ for the line-source test case using the implicit S$_N$ and as-S$_N$ method for $N_x \times N_y = 200\times 200$. Computations were performed on a quad core Intel\textsuperscript{\textregistered} i5-7300U CPU (2.60 GHz) with 12 GB memory.}
\label{fig:rell2}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.99\linewidth]{figures/drelL2.png}
\caption{We computed $\delta_2 = \Vert\nabla \Phi_\text{numerical}-\nabla\Phi_\text{analytical}\Vert_2$ for the line-source test case using the implicit S$_N$ and as-S$_N$ method for $N_x \times N_y = 200\times 200$. Computations were performed on a quad core Intel\textsuperscript{\textregistered} i5-7300U CPU (2.60 GHz) with 12 GB memory.}
\label{fig:rell2grad}
\end{figure}
We also investigate the performance of the as-S$_N$ method when measured in runtime and in memory consumption. Consider therefore the results presented in Fig.~\ref{fig:rell2} and Fig.~\ref{fig:rell2grad}. Both figures summarize the results for the line-source test case computed with the S$_N$\, and as-S$_N$\, methods for different values of $N$. Fig.~\ref{fig:rell2} measures the error between the numerical solution and the analytical solution in the L${}^2$ norm, called $\delta_1$. Fig.~\ref{fig:rell2grad} considers the $H^1$ semi-norm of the error, i.e.\ the L${}^2$ norm of the difference of the gradients of the numerical and the analytical scalar flux, called $\delta_2$.
We observe an increase in runtime when activating artificial scattering, but a decrease in the errors $\delta_1$ and $\delta_2$. On the right, the errors are plotted against the number of ordinates which ultimately dictates the memory consumption. For example, an S$_8$ takes about as long as an as-S$_5$ computation and yields a similar $\delta_1$ error. However, the number of ordinates can be reduced from 492 to 162.
For both, $\delta_1$ and $\delta_2$, the effect of artificial scattering vanishes in the limit of $N_q \rightarrow \infty$.
\subsection{Lattice test case}
We also investigate the lattice test case \cite{brunner2002forms,brunner2005two}, depicted in Fig.~\ref{fig:cb}. A constant, isotropic source is placed in the center of the domain in the orange square. In the white cells, the material is purely scattering, whereas the orange and black squares are purely absorbing. The boundary conditions are vacuum. All test case parameters are listed in Table \ref{tab:tabcheckerboard}.
\begin{minipage}{\textwidth}
\begin{minipage}{0.35\textwidth}
\includegraphics[width=0.99\linewidth]{figures/lattice_layout}
\captionof{figure}{Layout of the lattice test case.\\ Different materials (black, white, orange) \\with the source in the center (orange).}
\label{fig:cb}
\end{minipage}\hfill
\begin{minipage}{0.55\textwidth}
\begin{tabular}{c|c|c|c}
Color & $\sigma_a$ in cm${}^{-1}$& $\sigma_s$ in cm${}^{-1}$ & $Q$ in cm${}^{-2}$s${}^{-1}$\\
\hline
\hline
white & 0 & 1 & 0 \\
black & 10 & 0 & 0 \\
orange & 10 & 0 & 1 \\
\end{tabular}
\captionof{table}{Material properties for the lattice test case.\\ The domain is of size $[0\text{ cm},7\text{ cm}]^2$ and $t_{\text{end}}=3.2$ s. }
\label{tab:tabcheckerboard}
\end{minipage}
\end{minipage}
In Fig.~\ref{fig:checkerboard1} we see the as-S$_4$ solution to the lattice problem on the left, the S$_{15}$ solution in the center, and the S$_4$ solution on the right. Here, S$_{15}$ uses $1962$ ordinates while S$_4$ and as-S$_4$ use $92$ ordinates.
We take the S$_{15}$ solution with $N_q=1962$ as our reference solution.
When comparing the as-S$_4$ solution with the S$_4$ solution, we see an improvement in the solution quality. Ray-effects are better mitigated in regions where the scalar flux is small. The number of ordinates is kept constant.
Additionally, Fig.~\ref{fig:checkerboard2} puts the as-S$_4$ solution and the S$_{15}$ solution side-by-side in the center frame.
Minor ray-effects are visible when looking at the white isoline.
However, the number of ordinates has been reduced by a factor of $\sim 21$.
Similar to the line-source test case, we set $\beta=4.5$ and $\sigma_{\text{as}}=5.0$ for explicit calculation, and $\beta=4.0$ and $\sigma_{\text{as}}=7.0$ for the implicit calculation.
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\linewidth]{figures/lattice_3_side_by_side.png}
\caption{Comparison for the lattice test case at $t_{\text{end}}=3.2\, s$. Left the as-S$_4$ solution for the optimal parameter choice; center: the S$_{15}$ solution; right: the S$_{4}$ solution. Isolines are drawn at four different levels, highlighted inside the colorbar. We used $280\times280$ spatial cells.}
\label{fig:checkerboard1}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\linewidth]{figures/lattice_2_plus_merged.png}
\caption{Comparison for the lattice test case at $t_{\text{end}}=3.2\, s$. Left: the as-S$_4$ solution; right: the S$_{15}$ solution; center: the image merges the left half of the left image with the right half of the right image. Isolines are drawn at four different levels, highlighted inside the colorbar. We used $280\times280$ spatial cells.}
\label{fig:checkerboard2}
\end{figure}
\FloatBarrier
We also perform simulations for the lattice problem with the implicit time discretization. However, since the chosen scheme is only L${}^2$-stable, the solution becomes negative for the lattice test case as illustrated in Fig.~\ref{fig:lattice_impl_1}. Nevertheless, Fig.~\ref{fig:cfloverview} demonstrates the inherent advantage when performing implicit computations: We are able to use a very large CFL number, thus reducing the number of time steps and the overall computational costs drastically. Note that the scheme preserves positivity for the chosen CFL numbers.
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.99\linewidth]{figures/lattice_implicit_cfl_02_sn.png}
\caption{S$_4$.}
\label{fig:cfl2}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.99\linewidth]{figures/lattice_implicit_cfl_02_assn.png}
\caption{as-S$_4$ with $\sigma_{\text{as}}=7$ and $\beta=4$.}
\label{fig:cfl2with}
\end{subfigure}
\caption{Solutions to the lattice problem with an implicit computation for a CFL number of $2$, $N_q=92$, and $N_x\times N_y=280\times 280$. The white regions indicate negativity of the solution.}
\label{fig:lattice_impl_1}
\end{figure}
\begin{figure}%
\centering
\includegraphics[width=0.99\linewidth]{figures/zoomdifferentcfl.png}
\caption{Solutions to the lattice problem with an implicit computation for different CFL numbers and $N_q=92$, $N_x\times N_y=280\times 280$, $\sigma_{\text{as}}=7$, and $\beta=4$. Zoom into the region $[3.5,7]\times[3.5,7]$.}
\label{fig:cfloverview}
\end{figure}
\section{Discretization}
\label{sec:impl}
In the following section, we will discuss discretization and implementation of the presented as-S$_N$\, method, laying the focus on how to incorporate artificial scattering into existing S$_N$\, codes.
\subsection{S$_N$ discretization}
For sake of completeness, we briefly summarize the S$_N$\, method. Given a finite number of ordinates ${\bm{\Omega}}_1,\dots,{\bm{\Omega}}_{N_q}$ and defining the S$_N$\, solution $\psi_q(t,{\bm{x}}) \approx \psi(t,{\bm{x}},{\bm{\Omega}}_q)$ the S$_N$\, method solves the semi-discretized system of $N_q$ equations
\begin{linenomath*}\begin{equation}
\begin{split}
\label{eq:SN}
\partial_t \psi_q(t,{\bm{x}}) + {\bm{\Omega}}_q \cdot \nabla_{\bm{x}} \psi_q(t,{\bm{x}}) + \sigma_{t}({\bm{x}}) \psi_q(t,{\bm{x}})= \sigma_{s}({\bm{x}}) \sum_{p=1}^{N_q} w_p \cdot s({\bm{\Omega}}_q\cdot{\bm{\Omega}}_p) \psi_p(t,{\bm{x}}) + q(t,{\bm{x}}).
\end{split}
\end{equation}\end{linenomath*}
Here, $w_p$ are quadrature weights, chosen such that
\begin{linenomath*}\begin{equation}
\int_{\mathbb{S}^2} \psi(t,{{\bm{x}}}, {{\bm{\Omega}}})\, d{\bm{\Omega}} \approx \sum_{q=1}^{N_q} w_q \cdot \psi(t,{\bm{x}},{\bm{\Omega}}_q).
\end{equation}\end{linenomath*}
To compute numerical solutions, we still need to discretize \eqref{eq:SN} in space and time. In this work, we will investigate solutions for both implicit and explicit time discretizations. The explicit code uses Heun's method as well as a minmod slope limiter. It is based on \cite{camminady2019ray,garrett2013comparison}, which provide a detailed description of the chosen methods. The implicit discretization and an efficient strategy to integrate artificial scattering in a given implicit code framework will be discussed in Sections~\ref{sec:Implicit} and \ref{sec:ImplicitImplementation}.
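To make the role of the quadrature weights concrete, the following Python sketch evaluates the discrete angular integral and the resulting in-scattering sum for an isotropic kernel. The tensorized Gauss--Legendre quadrature used here is only an illustrative stand-in for the icosahedron quadrature of Section~\ref{subsec:quadrature}; it is not part of the codes described above.
\begin{verbatim}
import numpy as np

def tensorized_quadrature(n_mu, n_phi):
    """Gauss-Legendre in mu, midpoint rule in azimuth; weights sum to 4*pi."""
    mu, w_mu = np.polynomial.legendre.leggauss(n_mu)
    phi = 2.0 * np.pi * (np.arange(n_phi) + 0.5) / n_phi
    w_phi = 2.0 * np.pi / n_phi
    omega = np.array([[np.sqrt(1.0 - m**2) * np.cos(p),
                       np.sqrt(1.0 - m**2) * np.sin(p), m]
                      for m in mu for p in phi])
    weights = np.array([wm * w_phi for wm in w_mu for _ in phi])
    return omega, weights

omega, w = tensorized_quadrature(8, 16)
print(abs(w.sum() - 4.0 * np.pi))        # the weights integrate the constant 1 to 4*pi

# discrete in-scattering sum for an isotropic kernel s = 1/(4*pi):
psi = np.ones(len(w))                    # constant angular flux as a test case
in_scatter = np.sum(w * (1.0 / (4.0 * np.pi)) * psi)
print(in_scatter)                        # approximately 1 for psi = 1
\end{verbatim}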
\subsection{Adding artificial scattering to the S$_N$\, equations}
\label{sec:implassn}
Our goal is to include artificial scattering in the S$_N$ equations in \eqref{eq:SN}. By simply approximating the artificial scattering term in \eqref{eq:assneq} with the chosen quadrature rule, we obtain the as-S$_N$\, equations
\begin{linenomath*}\begin{equation}
\begin{split}
\label{eq:asSNeq}
\partial_t \psi_q(t,{\bm{x}}) + {\bm{\Omega}}_q &\cdot \nabla_{\bm{x}} \psi_q(t,{\bm{x}}) + \sigma_t({\bm{x}}) \psi_q(t,{\bm{x}}) + \sigma_{\text{as}}({\bm{x}})\psi_q(t,{\bm{x}})
\\=&\sigma_s({\bm{x}}) \sum_{p=1}^{N_q} w_p \cdot c_q \cdot s({\bm{\Omega}}_q\cdot{\bm{\Omega}}_p) \psi_p(t,{\bm{x}})
\\
&+\sigma_{\text{as}}({\bm{x}})\sum_{p=1}^{N_q}w_p \cdot c_q^{(\varepsilon)}\cdot s_\varepsilon({\bm{\Omega}}_q\cdot {\bm{\Omega}}_p)\psi_p(t,{\bm{x}}) \\
&+ q(t,{\bm{x}})
.
\end{split}
\end{equation}\end{linenomath*}
Here, $c_q := 1 / \sum_p w_p \cdot s({\bm{\Omega}}_q\cdot {\bm{\Omega}}_p)$ and $c_q^{(\varepsilon)} := 1 / \sum_p w_p \cdot s_\varepsilon({\bm{\Omega}}_q\cdot {\bm{\Omega}}_p)$ are normalization factors. While on the continuous level, these factors are the same for every direction, we obtain a dependency on the chosen ordinate due to the non-uniform discretization in angle. These normalization factors are needed to obtain a simple expression for the out-scattering terms. Moving these terms to the left-hand side of \eqref{eq:asSNeq} stabilizes the source iteration used in the implicit method.
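As a minimal illustration of how these normalization factors can be computed in practice, the following Python sketch evaluates $c_q$ and $c_q^{(\varepsilon)}$ for an assumed quadrature and builds the row-normalized artificial in-scattering matrix. The tensorized quadrature, the isotropic physical kernel, and the value of $\varepsilon$ are assumptions made only for a self-contained example.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import erf

def s_eps(mu, eps):
    """Forward-peaked Gaussian kernel used later for the artificial scattering."""
    return 2.0 / (np.sqrt(np.pi) * eps * erf(2.0 / eps)) * np.exp(-(1.0 - mu)**2 / eps**2)

# assumed quadrature: tensorized Gauss-Legendre x uniform azimuth, sum(w) = 4*pi
n_mu, n_phi = 8, 16
mu1, w_mu = leggauss(n_mu)
phi = 2.0 * np.pi * (np.arange(n_phi) + 0.5) / n_phi
w_phi = 2.0 * np.pi / n_phi
omega = np.array([[np.sqrt(1 - m**2) * np.cos(p), np.sqrt(1 - m**2) * np.sin(p), m]
                  for m in mu1 for p in phi])
w = np.array([wm * w_phi for wm in w_mu for _ in phi])

def normalization(kernel):
    """c_q = 1 / sum_p w_p * kernel(Omega_q . Omega_p)."""
    Kmat = kernel(omega @ omega.T)
    return 1.0 / (Kmat @ w), Kmat

c, K_phys = normalization(lambda m: np.full_like(m, 1.0 / (4.0 * np.pi)))  # isotropic
c_eps, K_as = normalization(lambda m: s_eps(m, 0.5))                       # eps assumed

print(c.min(), c.max())          # close to 1: the isotropic kernel integrates to 1
print(c_eps.min(), c_eps.max())  # direction dependent on the discrete angular grid

# row-normalized artificial in-scattering matrix; each row sums to one
S_as = (c_eps[:, None] * K_as) * w[None, :]
print(np.allclose(S_as.sum(axis=1), 1.0))
\end{verbatim}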
\subsection{Quadrature}
\label{subsec:quadrature}
It remains to pick an adequate quadrature set. When applying artificial scattering, the solution smears out along the imprint of the discrete set of ordinates. To ensure an evenly spread artificial scattering effect, a quadrature with a highly uniform ordinate distribution should be chosen. Commonly, the construction of a quadrature set starts from a chosen planar geometry, which is discretized and then mapped onto the surface of the sphere. The mapped nodes of the chosen discretization are taken to be the quadrature points, while the weights are determined by the area associated with these points. An even node distribution is achieved by using an icosahedron as the initial planar geometry, as shown in Figure~\ref{fig:icosahedronQuadrature}. Each face of the icosahedron is triangulated to generate the nodes which are then mapped onto the surface of the sphere. There are different strategies to perform this triangulation; we choose an equidistant spacing of points on each edge of the triangle as well as for the corresponding points inside the triangle. The weight of a node is the area of the hexagon surrounding it, defined by connecting the midpoints of the neighboring triangles. For more details on the icosahedron quadrature, see \cite{icosahedron}.
From now on, we will exclusively use this quadrature in all S$_N$ and as-S$_N$ computations.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{figures/icosahedron.png}
\caption{Construction of the icosahedron quadrature set. One can see the icosahedron geometry with a triangulated face which is then mapped onto the sphere. For the quadrature set, we align one of the vertices with the point $(0,0,1)$.}
\label{fig:icosahedronQuadrature}
\end{figure}
\subsection{Implicit time discretization}
\label{sec:Implicit}
Implicit time discretization methods provide stability for large time steps, which are crucial in applications involving different time scales. However, when discretizing the radiative transfer equation, they require a matrix solve in every time step, which is commonly performed by a Krylov solver \cite{faber1988look,ashby1991preconditioned}. We start with an implicit Euler discretization, where we, in an abuse of notation, denote the flux at the new time step by $\psi({\bm{x}},{\bm{\Omega}})$ and at the old time step by $\psi^{\text{old}}({\bm{x}},{\bm{\Omega}})$. The equivalent as-S$_N$\, system is
\begin{linenomath*}
\begin{align}
{\bm{\Omega}} \cdot \nabla_{\bm{x}} \psi + \left(\sigma_a+ \sigma_s + \sigma_{\text{as}} + \frac{1}{\Delta t} \right)\psi = \sigma_s S^+\psi+\sigma_{\text{as}}S_{\text{as}}^+\psi+q + \frac{\psi^{\text{old}}}{\Delta t}.
\end{align}
\end{linenomath*}
Defining the streaming operator $L\psi := {\bm{\Omega}} \cdot \nabla_{\bm{x}} \psi + \left(\sigma_a+ \sigma_s + \sigma_{\text{as}} + \frac{1}{\Delta t} \right)\psi$ as well as the modified source $\tilde q := q + \psi^{\text{old}} / \Delta t$, we can put this into more compact notation
\begin{linenomath*}\begin{align}
\label{eq:asSNImplCompact}
L\psi = \sigma_s S^+\psi+\sigma_{\text{as}}S_{\text{as}}^+\psi+\tilde q.
\end{align}\end{linenomath*}
First, let us numerically treat the artificial scattering in the same way as commonly done for physical scattering. The physical in-scattering kernel can be written as
\begin{align*}
S^+ = O \Sigma M,
\end{align*}
where $\Sigma$ carries the respective expansion coefficients of the scattering kernel, $M$ maps from the ordinates to the moments and $O$ from the moments back to the angular space. Making use of this strategy to represent the artificial scattering, we get
\begin{linenomath*}\begin{align}
S_{\text{as}}^+ = O\Sigma_{\text{as}}M.
\end{align}\end{linenomath*}
When denoting the moments as $\phi = M\psi$, equation \eqref{eq:asSNImplCompact} becomes
\begin{linenomath*}
\begin{align}
L\psi = \sigma_s O \Sigma\phi +\sigma_{\text{as}}O \Sigma_{\text{as}}\phi+\tilde q\;.
\end{align}
\end{linenomath*}
Inverting $L$ and applying $M$ to both sides yields the fixed point equations
\begin{linenomath*}
\begin{align}\label{eq:KrylovNaive}
\phi = \sigma_s M L^{-1} O \Sigma\phi +\sigma_{\text{as}}ML^{-1}O\Sigma_{\text{as}} \phi+ML^{-1}\tilde q \;.
\end{align}
\end{linenomath*}
Note that with $\sigma_{\text{as}}=0$, this is the standard equation to which a Krylov solver is applied. Choosing a non-zero artificial scattering strength can result in significantly increased numerical costs when solving \eqref{eq:KrylovNaive} with a Krylov method: To show this, let us move to the discrete level, i.e. discretizing the directional domain, which requires picking a finite number of moments. In this case $\Sigma$ becomes a diagonal matrix with entries falling rapidly to zero (in the case of isotropic scattering, only the first entry is non-zero). Hence, few moments are required to capture the effects of physical scattering. However, since the artificial scattering kernel is strongly forward-peaked, the entries of the diagonal matrix $\Sigma_{\text{as}}$ do not fall to zero quickly, meaning that the method requires a large number of moments to include artificial scattering, which results in a heavily increased run time \cite{hauck2019filtered}. The slow decay of the Legendre moments $k_{\varepsilon,n} = 2\pi \int_{-1}^{+1} s_\varepsilon(\mu)P_n(\mu)\, d\mu$ for $\varepsilon\rightarrow 0$ is visualized in Fig.~\ref{fig:kerneldecay}.
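The slow decay can also be checked numerically; the following Python sketch evaluates the Legendre moments $k_{\varepsilon,n}$ of the forward-peaked kernel with a high-order Gauss--Legendre rule. The chosen values of $\varepsilon$ and $n$ are illustrative.
\begin{verbatim}
import numpy as np
from scipy.special import erf, eval_legendre

def s_eps(mu, eps):
    return 2.0 / (np.sqrt(np.pi) * eps * erf(2.0 / eps)) * np.exp(-(1.0 - mu)**2 / eps**2)

# high-order Gauss-Legendre rule to resolve the peak of the kernel at mu = 1
mu, wq = np.polynomial.legendre.leggauss(2000)

for eps in (1.0, 0.1, 0.05):
    k = [2.0 * np.pi * np.sum(wq * s_eps(mu, eps) * eval_legendre(n, mu))
         for n in (0, 5, 10, 20)]
    print(eps, ["%.3e" % v for v in k])
# as eps -> 0 every moment approaches 2*pi*P_n(1) = 2*pi, i.e. the decay in n slows down
\end{verbatim}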
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{figures/legendremoments.png}
\caption{Decay of the Legendre moments $k_{\varepsilon,n} = 2\pi \int_{-1}^{+1} s_\varepsilon(\mu)P_n(\mu)\, d\mu$ for different values of $\varepsilon$ and the expansion order $n$.}
\label{fig:kerneldecay}
\end{figure}
In order to be able to choose the reduced number of moments required to resolve physical scattering, we move the artificial scattering into the sweeping step. Hence, going back to equation \eqref{eq:asSNImplCompact}, we only perform the moment decomposition on the physical scattering to obtain
\begin{linenomath*}\begin{align}
(L - \sigma_{\text{as}}S_{\text{as}}^+)\psi = \sigma_s O \Sigma\phi + \tilde q\;.
\end{align}\end{linenomath*}
Moving the operator $(L - \sigma_{\text{as}}S_{\text{as}}^+)$ to the right hand side and taking moments yields
\begin{linenomath*}\begin{align}\label{eq:KrylovFixedPoint}
\phi = \sigma_s M(L - \sigma_{\text{as}}S_{\text{as}}^+)^{-1} O \Sigma\phi + M(L - \sigma_{\text{as}}S_{\text{as}}^+)^{-1}\tilde q\;.
\end{align}\end{linenomath*}
The Krylov solver is then applied to this fixed-point iteration. In contrast to \eqref{eq:KrylovNaive}, the physical scattering term dictates the number of moments.
The computation of $(L-\sigma_{\text{as}}S_{\text{as}}^+)^{-1}$ is performed by a source iteration, where the general equation $(L-\sigma_{\text{as}}S_{\text{as}}^+)\psi = R$ is solved by
iterating on
\begin{linenomath*}\begin{align}\label{eq:sourceIteration}
L\psi^{(l+1)} = \sigma_{\text{as}}S_{\text{as}}^+\psi^{(l)} + R.
\end{align}\end{linenomath*}
This iteration is expected to converge fast since effects of artificial scattering will be small in comparison to physical scattering.
\subsection{Implementation details}
\label{sec:ImplicitImplementation}
At this point, we choose a finite number of ordinates and moments, i.e. the flux $\psi$ is now a vector with dimension $N_q$ and the moments $\phi$ have finite dimension $N$. Consequently, operators applied to the directional space become matrices. For better readability, we abuse notation and reuse the same symbols as before.
We observed that a second-order spatial scheme is required to capture the behavior of the test cases used in this work. To ensure an efficient sweeping step, we use a second-order upwind stencil without a limiter. Let us denote the operator $L$ discretized in space and direction by $L_{\Delta}$. For ease of presentation, we assume a slab geometry, i.e., we have the spatial variable $x\in\mathbb{R}$ and the directional variable $\mu\in[-1,1]$. In the following, we split the directional variable into $\mu_-\in[-1,0]$ and $\mu_+\in(0,1]$. An extension to arbitrary dimension is straightforward. Now with $\lambda_{\pm}:= \mu_{\pm}\frac{ \Delta t}{\Delta x}$ and $\sigma_t := \sigma_a+ \sigma_s + \sigma_{\text{as}} + \frac{1}{\Delta t}$, we can write the discretized streaming operator as
\begin{linenomath*}\begin{align}
L_{\Delta}\psi := \lambda_{\pm} ( g_{j+1/2} - g_{j-1/2} ) + \Delta t \sigma_t \psi.
\end{align}\end{linenomath*}
The numerical flux for $\mu_+$ is then given by
\begin{linenomath*}\begin{align}
g_{j+1/2} := a \psi_j + b \psi_{j-1}, \qquad \text{ with } a := \frac32, \enskip b:= -\frac12
\end{align}\end{linenomath*}
and for $\mu_-$ by
\begin{linenomath*}\begin{align}
g_{j+1/2} := a \psi_{j+1} + b \psi_{j+2}.
\end{align}\end{linenomath*}
This scheme is L$^2$ stable, which we show in Appendix~\ref{app:upwind}. Let us now discuss the implementation of the implicit method in more detail. As mentioned earlier, a source iteration \eqref{eq:sourceIteration} is required to invert the operator $(L-\sigma_{\text{as}}S_{\text{as}}^+)$. For an initial guess $\psi^{(0)}$ and an arbitrary right hand side $R$, this iteration is given by Alg.~\ref{alg:SourceIteration}. Note that the discrete artificial in-scattering $S_{\text{as}}^+$ is a sparse matrix, which guarantees an efficient evaluation of the matrix vector product $S_{\text{as}}^+\psi^{(l)}$ in \eqref{eq:sourceIteration}. Furthermore, the inverse of $L_{\Delta}$ can be computed by a sweeping procedure.
\begin{algorithm}[H]
\begin{algorithmic}[1]
\Procedure{SourceIteration}{$\psi^{(0)},R$}
\State $\ell \leftarrow 0$
\State $\psi^{(\ell+1)} \leftarrow L_{\Delta}^{-1}\left(\sigma_{\text{as}} S_{\text{as}}^+ \psi^{(\ell)} + R\right)$
\While{$\Vert\psi^{(\ell+1)} - \psi^{(\ell)}\Vert_2\geq\epsilon\tilde c$}
\State $\psi^{(\ell+1)} \leftarrow L_{\Delta}^{-1}\left(\sigma_{\text{as}} S_{\text{as}}^+ \psi^{(\ell)} + R\right)$
\State $\ell \leftarrow \ell+1$
\EndWhile
\Return $\psi^{(\ell)}$
\EndProcedure
\end{algorithmic}
\caption{Source Iteration algorithm}
\label{alg:SourceIteration}
\end{algorithm}
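For orientation, the following Python sketch mimics Alg.~\ref{alg:SourceIteration} in slab geometry: the operator $L_{\Delta}$ is inverted ordinate by ordinate with the second-order upwind stencil introduced above, and the artificial in-scattering is applied as a row-normalized coupling matrix between ordinates. The coupling kernel, the zero-inflow boundary treatment, the plain tolerance (instead of the Lipschitz-based constant $\tilde c$), the $\Delta t$ scaling convention of the right-hand side, and all parameter values are simplifying assumptions made here for a self-contained example.
\begin{verbatim}
import numpy as np

nx, nq = 200, 16
dx, dt = 1.0 / nx, 0.01
sig_a, sig_s, sig_as = 1.0, 1.0, 5.0
sig_t = sig_a + sig_s + sig_as + 1.0 / dt

mu, w = np.polynomial.legendre.leggauss(nq)        # slab ordinates, sum(w) = 2

# row-normalized, forward-peaked coupling between ordinates (stand-in kernel)
K = np.exp(-((mu[:, None] - mu[None, :]) / 0.5) ** 2)
S_as = (K * w[None, :]) / (K * w[None, :]).sum(axis=1, keepdims=True)

def sweep(R):
    """Invert L_Delta per ordinate: forward sweep for mu > 0, backward for mu < 0."""
    psi = np.zeros((nq, nx))
    for q in range(nq):
        lam = abs(mu[q]) * dt / dx
        diag = 1.5 * lam + dt * sig_t
        cells = range(nx) if mu[q] > 0 else range(nx - 1, -1, -1)
        up1 = up2 = 0.0                            # zero-inflow boundary (assumption)
        for j in cells:
            psi[q, j] = (R[q, j] + lam * (2.0 * up1 - 0.5 * up2)) / diag
            up2, up1 = up1, psi[q, j]
    return psi

def source_iteration(R, tol=1e-8, max_iter=200):
    """Iterate L_Delta psi^(l+1) = dt*sigma_as*S_as^+ psi^(l) + R, cf. Alg. 1.
    The dt factor reflects the dt scaling built into L_Delta in this sketch."""
    psi = sweep(R)
    for _ in range(max_iter):
        psi_new = sweep(dt * sig_as * (S_as @ psi) + R)
        if np.linalg.norm(psi_new - psi) < tol:
            return psi_new
        psi = psi_new
    return psi

# isotropic source in the middle of the slab; the dt factor matches the scaling of L_Delta
R = np.zeros((nq, nx))
R[:, nx // 2] = dt * 1.0
psi = source_iteration(R)
print(psi.min(), psi.max())
\end{verbatim}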
In order to get a good error estimator, we set the constant $\tilde c := (1-T)/T$ in Algorithm~\ref{alg:SourceIteration}, where $T$ is an estimate of the Lipschitz constant and $\epsilon$ is a user-determined parameter. Our implementation solves the linear system of equations
\begin{linenomath*}\begin{subequations}
\begin{align}
A \phi &= b \label{eq:KrylovFixedPoint1} \\
\text{with }A &:= I - \sigma_s M(L - \sigma_{\text{as}}S_{\text{as}}^+)^{-1} O \Sigma \\
b &:= M(L - \sigma_{\text{as}}S_{\text{as}}^+)^{-1}\tilde q
\end{align}
\end{subequations} \end{linenomath*}
using a GMRES solver. The solver requires the evaluation of the left-hand side for a given $\psi$ with an initial guess $\psi^{(0)}$, which is given by Alg.~\ref{alg:LinearFunction}.
\begin{algorithm}[H]
\begin{algorithmic}[1]
\Procedure{LHS}{$\psi^{(0)},\phi$}
\State $\tilde\psi \leftarrow \text{SourceIteration}(\psi^{(0)},\sigma_s O\Sigma \phi)$
\State \Return $\phi - M\tilde\psi$
\EndProcedure
\end{algorithmic}
\caption{Left-hand side of \eqref{eq:KrylovFixedPoint1}}
\label{alg:LinearFunction}
\end{algorithm}
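To illustrate the matrix-free structure of \eqref{eq:KrylovFixedPoint1}, the following Python sketch wraps the left-hand side in a \texttt{LinearOperator} and hands it to GMRES. For brevity it uses a single spatial cell (so $L$ is diagonal and no sweep is needed), three Legendre moments with isotropic physical scattering in slab geometry, and a direct inverse of $(L-\sigma_{\text{as}}S_{\text{as}}^+)$ in place of the source iteration; the coupling kernel and all parameter values are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres
from scipy.special import eval_legendre

nq, n_mom, dt = 16, 3, 0.1
sig_a, sig_s, sig_as = 1.0, 1.0, 5.0
mu, w = np.polynomial.legendre.leggauss(nq)          # slab ordinates, sum(w) = 2

# row-normalized forward-peaked artificial in-scattering (stand-in kernel)
K = np.exp(-((mu[:, None] - mu[None, :]) / 0.5) ** 2)
S_as = (K * w[None, :]) / (K * w[None, :]).sum(axis=1, keepdims=True)

# single spatial cell: no streaming term, so L is diagonal and no sweep is needed
L = np.diag(np.full(nq, sig_a + sig_s + sig_as + 1.0 / dt))
Linv_as = np.linalg.inv(L - sig_as * S_as)           # stands in for the source iteration

# moment maps: M (ordinates -> moments), O (moments -> ordinates), Sigma (kernel coefficients)
M = np.vstack([eval_legendre(n, mu) for n in range(n_mom)]) * w
O = np.column_stack([(2 * n + 1) / 2 * eval_legendre(n, mu) for n in range(n_mom)])
Sigma = np.diag([1.0, 0.0, 0.0])                     # isotropic slab kernel s = 1/2

q_tilde = np.ones(nq)                                # isotropic source + old-time-step term

def lhs(phi):
    """phi - sigma_s * M (L - sigma_as S_as^+)^{-1} O Sigma phi, cf. Alg. 2."""
    return phi - sig_s * (M @ (Linv_as @ (O @ (Sigma @ phi))))

A = LinearOperator((n_mom, n_mom), matvec=lhs)
b = M @ (Linv_as @ q_tilde)
phi, info = gmres(A, b)
psi = Linv_as @ (sig_s * (O @ (Sigma @ phi)) + q_tilde)   # recover the angular flux
print(info, phi, psi.min(), psi.max())
\end{verbatim}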
The main time stepping scheme is then given by Alg.~\ref{alg:Sweeping-Krylov}. After initializing $\psi$ and $\phi$, the right hand side to \eqref{eq:KrylovFixedPoint1} is set up in line 4. Line 5 then solves the linear system \eqref{eq:KrylovFixedPoint1} and Line 6 determines the time-updated flux $\psi$ from the moments $\phi^{\text{new}}$.
\begin{algorithm}[H]
\begin{algorithmic}[1]
\State $\psi^{\text{old}}\leftarrow \text{InitialCondition}()$
\State $\phi^{\text{old}} \leftarrow M\psi^{\text{old}}$
\While{$t<t_{\text{end}}$}
\State $b \leftarrow M \cdot \text{SourceIteration}(\psi^{\text{old}},q + \frac{1}{\Delta t}\psi^{\text{old}})$
\State $\phi^{\text{new}} \leftarrow \text{Krylov}(\text{LHS}(\psi^{\text{old}},\phi^{\text{old}}),b)$
\State $\psi^{\text{new}} \leftarrow \text{SourceIteration}(\psi^{\text{old}},\sigma_s O \Sigma \phi^{\text{new}} + \frac{1}{\Delta t}\psi^{\text{old}})$
\State $\psi^{\text{old}} \leftarrow \psi^{\text{new}}$
\State $\phi^{\text{old}} \leftarrow \phi^{\text{new}}$
\EndWhile
\end{algorithmic}
\caption{Sweeping-Krylov algorithm}
\label{alg:Sweeping-Krylov}
\end{algorithm}
There exist several ways to modify the presented algorithm to achieve higher performance. For example,
one can modify the presented method by not fully converging the source iteration in Alg.~\ref{alg:SourceIteration}. Instead, only a single iteration can be performed to drive the moments $\phi$ and the respective angular flux $\psi$ to their corresponding fixed points simultaneously. In numerical tests, we observe that this will significantly speed up the calculation. However, since we do not focus on runtime optimization, we do not further discuss this idea and leave it to future work.
\section{Main idea}
In this section, we summarize the relevant mathematical background and introduce notation. We illustrate the problem of ray effects that occurs when discretizing the transport equation in angle and how artificial scattering can be used to mitigate these ray effects. We demonstrate that artificial scattering behaves like a Fokker-Planck operator in the appropriate limit.
\subsection{Radiative transfer equation}
The radiative transfer equation describes the evolution of the angular flux $\psi(t,{\bm{x}}, {\bm{\Omega}})$ via
\begin{linenomath*}\begin{equation}\label{eq:kineticEquation}
\partial_t \psi(t,{\bm{x}},{\bm{\Omega}}) + {\bm{\Omega}} \cdot \nabla_{\bm{x}} \psi(t,{\bm{x}},{\bm{\Omega}}) +
\sigma_{t}({\bm{x}}) \psi(t,{\bm{x}},{\bm{\Omega}})= \sigma_{s}({\bm{x}}) (S^+ \psi) (t,{\bm{x}},{\bm{\Omega}}) + q(t,{\bm{x}}),
\end{equation}\end{linenomath*}
where $t\in\mathbb{R}_+$ denotes time, ${\bm{x}}\in\mathbb{R}^3$ is the spatial variable, and ${\bm{\Omega}} \in \mathbb{S}^2$ represents the direction.
The total cross section is $\sigma_{t}({{\bm{x}}})=\sigma_{a}({{\bm{x}}}) + \sigma_{s}({{\bm{x}}})$.
In the case of scattering, the in-scattering kernel operator $S^+(\psi)(t,{\bm{x}}, {\bm{\Omega}})$ describes the gain of particles that were previously traveling along direction ${\bm{\Omega}}'$ and changed to direction ${\bm{\Omega}}$. It is given by
\begin{linenomath*}\begin{equation}\label{eq:in-scattering}
(S^+ \psi) (t,{\bm{x}},{\bm{\Omega}}) = \int_{\mathbb{S}^2} s({\bm{\Omega}}\cdot{\bm{\Omega}}')\psi(t,{\bm{x}},{\bm{\Omega}}') d{\bm{\Omega}}',
\end{equation}\end{linenomath*}
where $s({\bm{\Omega}}\cdot{\bm{\Omega}}')$ is the probability of transitioning from direction ${\bm{\Omega}}'$ into direction ${\bm{\Omega}}$ or vice versa.
We assume---for simplicity---that the source $q(t,{\bm{x}})$ is isotropic.
\subsection{Ray effects}
As previously explained,
the S$_N$ method preserves positivity but suffers from ray effects.
An example of these artifacts is demonstrated for the line-source benchmark in Fig.~\ref{fig:illustratingrayeffects}.
While the true scalar flux $\Phi(t,{\bm{x}}):=\int_{\mathbb{S}^2}\psi(t,{\bm{x}},{\bm{\Omega}}') d{\bm{\Omega}}'$ is radially symmetric, the numerical solution has artifacts in the form of oscillations. We will discuss the line-source problem in more detail in Section \ref{subsec:results_linesource}.
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.99\linewidth]{figures/linesource_reference.png}
\caption{The semi-analytical reference solution to the line source problem.}
\label{fig:semianalyticallinesource}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.99\linewidth]{figures/linesource_rayeffects.png}
\caption{Numerical solution using the S$_N$ method with a tensorized quadrature. Ray effects are clearly visible.}
\label{fig:numericallinesource}
\end{subfigure}
\caption{Illustrating ray effects with the line-source problem.}
\label{fig:illustratingrayeffects}
\end{figure}
\subsection{Artificial scattering}
We propose to address the problem of ray effects by adding artificial scattering to the right-hand side of \eqref{eq:kineticEquation}, in the form of an anisotropic scattering operator.
The modified system is then
\begin{linenomath*}\begin{equation}\label{eq:assneq}
\begin{split}
\partial_t \psi(t,{\bm{x}},{\bm{\Omega}}) &+ {\bm{\Omega}} \cdot \nabla_{\bm{x}} \psi(t,{\bm{x}},{\bm{\Omega}}) +
\sigma_{t}({\bm{x}}) \psi(t,{\bm{x}},{\bm{\Omega}}) \\
&= \sigma_{s}({\bm{x}}) (S^+ \psi) (t,{\bm{x}},{\bm{\Omega}})+\sigma_{\text{as}}({\bm{x}}) (S_{\text{as}}\psi)(t,{\bm{x}},{\bm{\Omega}}) + q(t,{\bm{x}}),
\end{split}
\end{equation}\end{linenomath*}
where
\begin{linenomath*}\begin{equation} \label{eq:artificialscattering}
\sigma_{\text{as}}({\bm{x}}) (S_{\text{as}}\psi)(t,{\bm{x}},{\bm{\Omega}}) = \sigma_{\text{as}}({\bm{x}}) \int_{\mathbb{S}^2} s_\varepsilon({\bm{\Omega}}' \cdot {\bm{\Omega}})
\left(\psi(t,{\bm{x}},{\bm{\Omega}}')-\psi(t,{\bm{x}},{\bm{\Omega}})\right)\, d{\bm{\Omega}}'.
\end{equation}\end{linenomath*}
Here, $s_\varepsilon$ can be any Dirac-like sequence\footnote{While the idea of artificial scattering works with any Dirac-sequence, the asymptotic analysis that is performed later imposes stronger requirements to obtain a Fokker-Planck operator in the respective limit.}, i.e.
\begin{linenomath*}
\begin{align}
\int_{-1}^{+1} s_\varepsilon(\mu)\, d\mu =1
\text{ and }\int_{-1}^{+1}s_\varepsilon(\mu) f(\mu)\, d\mu \to f(1)
\end{align}
\end{linenomath*}
for any sufficiently smooth function $f$ as $\varepsilon\to 0$. In our experiments, we choose
\begin{linenomath*}\begin{equation} \label{eq:scatteringkernel_artificialscattering}
s_\varepsilon(\mu) = \frac{2}{\sqrt{\pi}\, \varepsilon \,\text{Erf}\left(\frac{2}{\varepsilon}\right)}\, e^{-\sfrac{(1-\mu)^2}{\varepsilon^2}},
\end{equation}\end{linenomath*}
where the error function satisfies $\text{Erf}(x)\to 1$ as $x\to \infty$.
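A quick numerical sanity check of the Dirac-sequence properties of this kernel can be done as follows; the chosen values of $\varepsilon$ are arbitrary.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def s_eps(mu, eps):
    return 2.0 / (np.sqrt(np.pi) * eps * erf(2.0 / eps)) * np.exp(-(1.0 - mu)**2 / eps**2)

for eps in (1.0, 0.5, 0.1, 0.01):
    norm, _ = quad(s_eps, -1.0, 1.0, args=(eps,), points=[0.9, 0.99])
    mean, _ = quad(lambda m: m * s_eps(m, eps), -1.0, 1.0, points=[0.9, 0.99])
    print(eps, norm, mean)   # the norm stays at 1, the mean approaches 1 as eps -> 0
\end{verbatim}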
The proposed method, which we call artificial scattering-S$_N$ (as-S$_N$), has the following effects:
\begin{enumerate}
\item Similar to the artificial viscosity used to stabilize spatial discretizations of hyperbolic operators \cite[Chapter~16.1]{leveque1992numerical}, the artificial scattering adds an angular diffusion term to the radiative transfer equations.
This term should vanish when the discretization is refined. Therefore, the variance of the artificial scattering kernel should be chosen to vanish in the limit $N_q\rightarrow\infty$.
We choose the variance to be $\varepsilon = \beta/N_q$, where $\beta$ is a constant, user-determined parameter.
This choice ensures that $\varepsilon$ scales with the average spacing of the quadrature points, meaning that the domain of influence covers roughly the same number of ordinates as $N_q$ increases. In the limit, the as-S$_N$\, solution converges to the classical S$_N$\, solution.
\item The total number of particles is preserved by the artificial scattering term. Higher-order moments are, however, damped by the magnitude of artificial scattering. A beam of particles inside a void will be subject to scattering by the as-S$_N$\, method; however, artifacts that result from the standard S$_N$\, method dominate the overall error, unless the beam is aligned with the quadrature set.
\item as-S$_N$\, has similarities to the P$_{N-1}$-equivalent S$_N$\, method \cite{miller1977ray}: To mitigate ray effects, this method adds a fictitious source to the radiative transfer equation. This source, though derived by a different strategy, requires similar modifications of the standard S$_N$\, implementation. The main difference is that the artificial scattering kernel of as-S$_N$\, is forward peaked, which can be used to design an efficient numerical treatment.
\item as-S$_N$ can be compared to filtered P$_N$ \cite{mcclarren2010robust,hauck2019filtered}, since the artificial scattering acts as a filter on the moment level.
\item The as-S$_N$ equation \eqref{eq:assneq} can---with appropriate boundary and initial conditions---be solved in a straight-forward manner using common S$_N$ implementations. When discussing one such implementation, we will focus on implicit discretization techniques and derive an efficient algorithm to treat the artificial scattering term.
\end{enumerate}
\subsection{Artificial scattering kernel}
To better distinguish between the two types of scattering, we will call the naturally occurring scattering of \eqref{eq:kineticEquation} \textit{physical scattering} and the scattering in \eqref{eq:artificialscattering} \textit{artificial scattering}. The way the artificial scattering is written in \eqref{eq:artificialscattering}, it includes in-scattering and out-scattering. We can split this further into
\begin{linenomath*}\begin{equation} \label{eq:split_in_out_scattering}
(S_{\text{as}}\psi)(t,{\bm{x}},{\bm{\Omega}}) = \int_{\mathbb{S}^2} s_\varepsilon({\bm{\Omega}}' \cdot {\bm{\Omega}})
\left(\psi(t,{\bm{x}},{\bm{\Omega}}')-\psi(t,{\bm{x}},{\bm{\Omega}})\right)\, d{\bm{\Omega}}' = (S_{\text{as}}^+\psi)(t,{\bm{x}},{\bm{\Omega}}) -(S_{\text{as}}^-\psi)(t,{\bm{x}},{\bm{\Omega}}),
\end{equation}\end{linenomath*}
with
\begin{linenomath*}\begin{equation}\label{eq:artificial_inscattering}
(S_{\text{as}}^+\psi)(t,{\bm{x}},{\bm{\Omega}}) = \int_{\mathbb{S}^2} s_\varepsilon({\bm{\Omega}}' \cdot {\bm{\Omega}})
\psi(t,{\bm{x}},{\bm{\Omega}}')\, d{\bm{\Omega}}'
\end{equation}\end{linenomath*}
and
\begin{linenomath*}\begin{equation}\label{eq:artificial_outscattering}
(S_{\text{as}}^-\psi)(t,{\bm{x}},{\bm{\Omega}}) = \psi(t,{\bm{x}},{\bm{\Omega}}).
\end{equation}\end{linenomath*}
\subsection{Modified equation analysis}
According to Pomraning \cite{pomraning1992fokker}, the Fokker-Planck operator can be a legitimate description of highly peaked scattering.
This is true if (i) the scattering kernel $s_\varepsilon(\mu)$ is a Dirac sequence, and (ii) the transport coefficients $p_{\varepsilon,i}:= \int_{-1}^{+1} (1-\mu)^i s_\varepsilon(\mu)\, d\mu$ are of order $\mathcal{O}(\varepsilon^i)$.
The resulting modified equation then reads
\begin{linenomath*}\begin{equation} \label{eq:fokkerplancklimit}
\begin{split}
\partial_t \psi(t,{\bm{x}},{\bm{\Omega}}) &+ {\bm{\Omega}}\cdot \nabla_{\bm{x}} \psi + (\sigma_a+ \sigma_s ) \psi \\
&= \sigma_s \cdot \Phi +\pi\cdot p_{\varepsilon,1}\cdot \sigma_{as}\cdot \Delta_{\bm{\Omega}}\, \psi+ \mathcal{O}\left(\varepsilon^2\right),
\end{split}
\end{equation}\end{linenomath*}
where $\Delta_{\bm{\Omega}}$ is the Laplace operator in spherical coordinates.
We have already shown (i).
To verify (ii), let $y=(1-\mu) / \varepsilon$. Then
\begin{linenomath*}\begin{align}
p_{\varepsilon,i} &= \int_{2/\varepsilon}^0 (\varepsilon \, y)^i \frac{2}{\sqrt{\pi}\, \varepsilon \,\text{Erf}\left(\frac{2}{\varepsilon}\right)}
e^{-y^2} (-\varepsilon) \, dy \\
&=
\frac{2\,\varepsilon^{i}}{\sqrt{\pi}\, \text{Erf}\left(\frac{2}{\varepsilon}\right)}\, \int_0^{2/\varepsilon}y^i e^{-y^2}\, dy \\
&= \frac{\varepsilon^{i}}{\sqrt{\pi}\, \text{Erf}\left(\frac{2}{\varepsilon}\right)}
\left[\Gamma\left(\frac{1+i}{2}\right)-\Gamma\left(\frac{1+i}{2},\frac{4}{\varepsilon^2}\right)\right] \\
&= \mathcal{O}(\varepsilon^i),
\end{align}\end{linenomath*}
where $\Gamma(\cdot)$ and $\Gamma(\cdot,\cdot)$ denote the gamma function and the upper incomplete gamma function, respectively.
Since (ii) implies that $p_{\varepsilon,1} = \mathcal{O}(\varepsilon)$, the angular diffusion operator in \eqref{eq:fokkerplancklimit} vanishes if we let $\varepsilon\to 0$.
We set $\varepsilon=\beta /N_q$ in the discrete case so that the angular diffusion vanishes if the number of ordinates $N_q$ tends to infinity.
This analysis shows that the product $\sigma_{\text{as}}\cdot \beta$ controls the strength of the added angular diffusion. Section \ref{subsec:results_linesource} confirms this numerically.
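The scaling of the transport coefficients can also be checked numerically; the following Python sketch evaluates $p_{\varepsilon,i}$ for the chosen kernel and a few illustrative values of $\varepsilon$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def s_eps(mu, eps):
    return 2.0 / (np.sqrt(np.pi) * eps * erf(2.0 / eps)) * np.exp(-(1.0 - mu)**2 / eps**2)

for i in (1, 2):
    for eps in (0.4, 0.2, 0.1, 0.05):
        p, _ = quad(lambda m: (1.0 - m)**i * s_eps(m, eps), -1.0, 1.0,
                    points=[0.8, 0.95])
        print(i, eps, p, p / eps**i)   # p / eps^i settles to a constant as eps decreases
\end{verbatim}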
\section{Introduction}
Several applications in the field of physics require an accurate solution of the radiative transfer equation. This equation describes the evolution of the angular flux of particles moving through a material medium. Examples include nuclear engineering \cite{duderstadt1976nuclear,henry1977nuclear}, high-energy astrophysics \cite{lowrie1999coupling,mcclarren2008manufactured}, supernovae \cite{fryer2006snsph,swesty2009numerical}, and fusion \cite{matzen2005pulsed,marinak2001three}. A major challenge when solving the radiative transfer equation numerically is the high-dimensional phase space on which it is defined. There are three spatial dimensions, two directional (angular) parameters, velocity, and time. In many applications, there is additional frequency or energy dependence. Hence, numerical methods to approximate the solution require a carefully chosen phase space discretization.
There are several strategies to discretize the angular variables, and they all have certain strengths and weaknesses \cite{brunner2002forms}. The spherical harmonics (P$_N$) method
\cite{case1967linear,pomraning1973equations,lewis1984computational} is a spectral Galerkin discretization of the radiative transfer equation. It uses the spherical harmonics basis functions to represent the solution in terms of
angular variables with finitely many expansion coefficients, called moments. The P$_N$ method preserves rotational symmetry and shows spectral convergence for smooth solutions. However, like most spectral methods, P$_N$ yields oscillatory solution approximations in non-smooth regimes, which can lead to negative, non-physical angular flux values. Filtering of the expansion coefficients has been shown to mitigate oscillations \cite{mcclarren2010robust}, and a modified equation analysis has shown that filtering adds an artificial forward-peaked scattering operator to the equation if a certain scaling strength of the filter is chosen.
The discrete ordinates (S$_N$) method
\cite{lewis1984computational} approximates the radiative transfer equation on a set of discrete angular directions.
The S$_N$\, discretization preserves positivity of the angular flux while yielding an efficient and straight forward implementation of time-implicit methods.
However, the method is plagued by numerical artifacts, known as ray effects, when there are not enough ordinates to resolve the angular flux.
Because increasing the number of ordinates significantly increases numerical costs of simulations,
a major task to improve the solution accuracy of S$_N$\, methods is to mitigate these ray effects \cite{lathrop1968ray,morel2003analysis,mathews1999propagation} without simply adding more ordinates.
Various strategies to mitigate ray effects at affordable costs have been developed.
In \cite{abu2001angular}, a biased quadrature set, which reflects the importance of certain ordinates, is used.
Furthermore, \cite{lathrop1971remedies} presents a method combining the P$_N$\, with the S$_N$\, method. Further studies for this method can be found in \cite{jung1972discrete,reed1972spherical,miller1977ray}, which show a reduction of ray effects.
In \cite{morel2003analysis}, a comparison of these methods can be found.
In \cite{tencer2016ray}, computing the angular flux for differently oriented quadrature sets and averaging over different solutions has been proposed to reduce ray artifacts.
In \cite{camminady2019ray}, a rotated S$_N$\, method has been developed, which rotates the quadrature set after every time step. Consequently, particles can move on a heavily increased set of directions of travel, leading to a reduction of ray effects. Analytic results show that rotating the quadrature set plays the role of an angular diffusion operator, which smears out artifacts that stem from the finite number of ordinates. Unfortunately, this method does not allow a straightforward implementation of sweeping, complicating the use of implicit methods.
The idea of this work is to add angular diffusion directly with the help of a forward-peaked artificial scattering operator. We choose this operator so that the effect of artificial scattering vanishes in the limit of infinitely many ordinates, but at finite order adds angular diffusion in such a way that it mitigates ray effects. Unlike the rotated S$_N$\, method in \cite{camminady2019ray}, the current approach allows for a straightforward implementation of sweeping, which we use to implement an implicit method.
\section{Implicit second order upwind scheme}
\label{app:upwind}
In the following, we show that the chosen numerical flux is $L^2$ stable. For simplicity, we look at the one-dimensional advection equation
\begin{linenomath*}\begin{align}
\partial_t \psi + \Omega \partial_x \psi = 0
\end{align}\end{linenomath*}
with $\Omega\in\mathbb{R}_+$. A finite volume discretization is given by
\begin{linenomath*}\begin{align}\label{eq:scheme}
\psi_j^{n+1} = \psi_j^n - \lambda \left( g_{j+1/2} - g_{j-1/2}\right),
\end{align}\end{linenomath*}
where we use $\lambda := \Omega\Delta t / \Delta x$. A second order, implicit numerical flux is given by
\begin{linenomath*}\begin{align}
g_{j+1/2} := a \psi_j^{n+1} + b \psi_{j-1}^{n+1}
\end{align}\end{linenomath*}
with
\begin{linenomath*}\begin{align}
a := \frac32 , \enskip b:= -\frac12.
\end{align}\end{linenomath*}
Let us check if the scheme dissipates the $L^2$ entropy $\eta(\psi) := \psi^2/2$. For this we multiply our scheme \eqref{eq:scheme} with $\psi_j^{n+1}$, i.e. we obtain
\begin{linenomath*}\begin{align}\label{eq:term1}
\psi_j^{n+1}\psi_j^{n+1} = \psi_j^n \psi_j^{n+1} - \lambda \left( g_{j+1/2} - g_{j-1/2}\right)\psi_j^{n+1}.
\end{align}\end{linenomath*}
Now one needs to remove the cross term $\psi_j^n \psi_j^{n+1}$ which can be done by reversing the binomial formula
\begin{linenomath*}\begin{align}
\psi_j^n \psi_j^{n+1} = \frac12 (\psi_j^{n+1})^2 + \frac12 \left(\psi_j^{n}\right)^2 - \frac12 (\psi_j^{n+1}-\psi_j^{n})^2.
\end{align}\end{linenomath*}
Plugging this formulation for the cross term into \eqref{eq:term1} and making use of the definition of the square entropy $\eta$ gives
\begin{linenomath*}\begin{align}
\eta(\psi_j^{n+1}) = \eta(\psi_j^{n}) - \frac12 (\psi_j^{n+1}-\psi_j^{n})^2 - \lambda \left( g_{j+1/2} - g_{j-1/2}\right)\psi_j^{n+1}.
\end{align}\end{linenomath*}
This shows that in order to achieve entropy dissipation, i.e.
\begin{linenomath*}\begin{align}
\sum_{j=1}^{N_x}\eta(\psi_j^{n+1}) \leq \sum_{j=1}^{N_x}\eta(\psi_j^{n}),
\end{align}\end{linenomath*}
we need
\begin{linenomath*}\begin{align}\label{eq:dissipationL2Term}
\mathcal{E} = \sum_{j=1}^{N_x}\frac12 (\psi_j^{n+1}-\psi_j^{n})^2 + \lambda \sum_{j=1}^{N_x}\left( g_{j+1/2} - g_{j-1/2}\right)\psi_j^{n+1} \stackrel{!}{\geq} 0.
\end{align}\end{linenomath*}
Note that the first term of $\mathcal{E}$, which essentially comes from the implicit time discretization, is always non-negative. It remains to show that $\sum_{j=1}^{N_x}\left( g_{j+1/2} - g_{j-1/2}\right)\psi_j^{n+1}$ is non-negative as well. Let us rewrite this term for all spatial cells as a matrix-vector product. That is, when collecting the solution at time step $n+1$ for all $N_x$ spatial cells in a vector $\psi\in\mathbb{R}^{N_x}$, this term becomes
\begin{linenomath*}\begin{align}
\sum_{j=1}^{N_x}\left(\left( g_{j+1/2} - g_{j-1/2}\right)\psi_j^{n+1}\right) = \psi^T B\psi
\end{align}\end{linenomath*}
where $B\in\mathbb{R}^{N_x\times N_x}$ is a lower triangular matrix. This product can be symmetrized with $S:=\frac{1}{2}(B+B^T)$, meaning that we have $\psi^T B\psi = \psi^T S\psi$. For our stencil, the matrix $S$ has entries $s_{jj} = \frac32$ on the diagonal, $s_{j,j-1} = s_{j-1,j} = -1$ on the first off-diagonals, and $s_{j,j-2} = s_{j-2,j} = \frac14$ on the second off-diagonals. Positivity of $\psi^T S\psi$, and thereby of the entropy dissipation term $\mathcal{E}$ in \eqref{eq:dissipationL2Term}, is guaranteed if $S$ is positive definite, i.e. has positive eigenvalues. The eigenvalues of $S$ have been computed numerically to verify positivity in Fig.~\ref{fig:eigenvaluesS}.
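The check is easy to reproduce; the following Python sketch assembles the banded matrix $S$ for an assumed grid size and computes its eigenvalues.
\begin{verbatim}
import numpy as np

Nx = 200
S = (1.5 * np.eye(Nx)
     - 1.0 * (np.eye(Nx, k=1) + np.eye(Nx, k=-1))
     + 0.25 * (np.eye(Nx, k=2) + np.eye(Nx, k=-2)))
eigs = np.linalg.eigvalsh(S)
print(eigs.min(), eigs.max())   # the smallest eigenvalue is small but positive
\end{verbatim}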
\begin{figure}[h!]
\centering
\includegraphics[width=0.95\linewidth]{figures/eigenvalues_of_S}
\caption{Eigenvalues of $S$ for $N_x=200$. All eigenvalues remain positive.}
\label{fig:eigenvaluesS}
\end{figure}
\section{Icosahedron quadrature}
The quadrature points and weights for the quadrature in Section \ref{subsec:quadrature} are given for order 2 (12 quadrature points). Every line contains four entries: the $x$, $y$, and $z$ position, as well as the quadrature weight. The quadrature weights sum to $4\pi$. All entries are in \texttt{double} precision. The quadratures for order 2, order 3 (42 quadrature points), order 4 (92 quadrature points), and order 5 (162 quadrature points) can be downloaded as \texttt{.txt} files from a public repository at \texttt{github.com/camminady/IcosahedronQuadrature}.
\VerbatimInput{chapters/Order_2.txt}
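A minimal Python sketch for reading such a quadrature file and checking the stated properties is given below; the local file name is an assumption, and the file must first be obtained from the repository above.
\begin{verbatim}
import numpy as np

data = np.loadtxt("Order_2.txt")      # assumed local copy of the order-2 file
omega, w = data[:, :3], data[:, 3]
print(omega.shape)                                        # (12, 3) for order 2
print(np.allclose(np.linalg.norm(omega, axis=1), 1.0))    # nodes lie on the unit sphere
print(abs(w.sum() - 4.0 * np.pi))                         # weights sum to 4*pi
\end{verbatim}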
\section{Conclusion \& Outlook}
We have presented a new ray effect mitigation technique that relies on an additional, artificial scattering operator introduced into the radiative transfer equation. When the number of ordinates tends to infinity, the artificial scattering vanishes and the modified equation reduces to the original transport equation. In this limit, when the product of the scattering strength and the variance is held constant, the term tending to zero at the slowest rate is the Fokker-Planck operator.
The artificial scattering operator can be integrated into standard S$_N$\, codes. Solution algorithms, both for the explicit and implicit case, have been presented and rigorously analyzed in the non-standard, implicit case. To avoid using a large number of moments for the Krylov solver in the implicit case, we propose to invert the artificial scattering operator by a source iteration.
We have presented numerical results for the line-source and lattice test case. The results demonstrate that artificial scattering yields the same accuracy as S$_N$\,, but for a reduced number of ordinates.
For the second-order implicit computations, the solutions might turn negative since L${}^2$ stability does not guarantee positivity of the solution. However, when choosing a sufficiently large CFL number, the solution values in our numerical experiments remain positive. A rigorous investigation of this effect and possibly the derivation of a CFL number ensuring positivity is left to future work.
Note that our test cases used a constant value for the artificial scattering strength; however, it seems plausible to make this strength spatially dependent to ensure that artificial scattering is only turned on where required. It remains to demonstrate the feasibility of the as-S$_N$ method in real-world applications using large-scale, highly parallelizable codes.
\section*{Acknowledgment}
The authors wish to thank Ryan G. McClarren (University of Notre Dame) for many fruitful discussions.
| {
"timestamp": "2019-11-22T02:09:19",
"yymm": "1911",
"arxiv_id": "1911.08801",
"language": "en",
"url": "https://arxiv.org/abs/1911.08801",
"abstract": "Solving the radiative transfer equation with the discrete ordinates (S$_N$) method leads to a non-physical imprint of the chosen quadrature set on the solution. To mitigate these so-called ray effects, we propose a modification of the S$_N$ method, which we call artificial scattering S$_N$ (as-S$_N$). The method adds an artificial forward-peaked scattering operator which generates angular diffusion to the solution and thereby mitigates ray effects. Similar to artificial viscosity for spatial discretizations, the additional term vanishes as the number of ordinates approaches infinity. Our method allows an efficient implementation of explicit and implicit time integration according to standard S$_N$ solver technology. For two test cases, we demonstrate a significant reduction of the error for the as-S$_N$ method when compared to the standard S$_N$ method, both for explicit and implicit computations. Furthermore, we show that a prescribed numerical precision can be reached with less memory due to the reduction in the number of ordinates.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Ray Effect Mitigation for the Discrete Ordinates Method Using Artificial Scattering",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.966410497252158,
"lm_q2_score": 0.7341195327172402,
"lm_q1q2_score": 0.7094608226557899
} |
https://arxiv.org/abs/quant-ph/0410127 | A systematic study on the exact solution of the position dependent mass Schroedinger equation | An algebraic method of constructing potentials for which the Schroedinger equation with position dependent mass can be solved exactly is presented. A general form of the generators of su(1,1) algebra has been employed with a unified approach to the problem. Our systematic approach reproduces a number of earlier results and also leads to some novelties. We show that the solutions of the Schroedinger equation with position dependent mass are free from the choice of parameters for position dependent mass. Two classes of potentials are constructed that include almost all exactly solvable potentials. | \section{Introduction}
The study of the position dependent mass (PDM) Schr\"{o}dinger equation has recently attracted some interest\cite{roy, milan}, arising from the study of electronic properties of semiconductors, liquid crystals, and quantum dots, and from the recent progress of crystal-growth techniques for the production of non-uniform semiconductor specimens in which the carrier effective mass depends on position\cite{serra}. It is obvious that the study of the PDM Schr\"{o}dinger equation has considerable impact on condensed matter physics as well as related fields of physics.
Exact solvability of the Schr\"{o}dinger equation with constant mass has
been the main interest since the early days of quantum mechanics\cite{levai}%
. It has been solved exactly for a large number of potentials by employing
various techniques. In fact, for exactly solvable potentials its general
solution can be obtained in terms of some special functions by transforming
the original Schr\"{o}dinger equation into the second order differential
equation . Systematic studies of these transformations have been given in%
\cite{natan} regarding the confluent hypergeometric and hypergeometric
functions. The relations between the algebraic technique and the special
function theory have been discussed in\cite{cordero}. Recently various
approaches have been presented in a unified way and a number of earlier
results have been reproduced\cite{levai}. In the present work we use the Lie
algebraic technique to construct the Hamiltonian for the PDM Schr\"{o}dinger
equation and obtain the solutions in terms of the special functions.
In the PDM Schr\"{o}dinger equation the mass and momentum operator no longer
commute. The general expression for the kinetic energy operator has been
introduced by von Roos\cite{roos}:%
\begin{equation}
T=\frac{1}{4}\left( m^{\eta }\mathbf{p}m^{\varepsilon }\mathbf{p}m^{\rho
}+m^{\rho }\mathbf{p}m^{\varepsilon }\mathbf{p}m^{\eta }\right) \label{eq:1}
\end{equation}%
where $\eta +\varepsilon +\rho =-1$ is a constraint. One of the problems is the choice of these parameters\cite{dutra, dekar}. In our approach we obtain the exact solution of the PDM Schr\"{o}dinger equation without any particular choice, which leads to a general solution in which the choice of the parameters distinguishes the physical systems.
The Lie algebraic technique has found a number of fruitful applications, in particular in atomic and nuclear physics and in other fields of physics. Our task here is to obtain the exact solution of the PDM Schr%
\"{o}dinger equation by the use of the $su(1,1)$ algebra technique.
The paper is organized as follows. In section 2 we present a general
Hamiltonian by using $su(1,1)$ algebra and we discuss its relation with the
PDM Schr\"{o}dinger equation. We obtain a general expression for the
potential. In section 3 we describe the application of the $su(1,1)$ algebra
to obtain Coulomb, harmonic oscillator and Morse family potentials. In
section 4 we construct hyperbolic and trigonometric potentials. Finally we
discuss our results in section 5.
\section{Structure of the $su(1,1)$ Lie algebra and PDM Schr\"{o}dinger
equation}
The Lie algebraic technique is suitable for studying the PDM Schr\"{o}dinger equation, because such equations contain a first-derivative term. The $su(1,1)$ Lie algebra is
described by the commutation relations,%
\begin{equation}
\left[ J_{+},J_{-}\right] =-2J_{0},\quad \left[ J_{0},J_{\pm }\right] =\pm
J_{\pm }. \label{eq:2}
\end{equation}%
Casimir operator of this structure is given by%
\begin{equation}
J^{2}=-J_{\pm }J_{\mp }+J_{0}^{2}\mp J_{0}. \label{eq:3}
\end{equation}%
The eigenstate of $J^{2}$ and $J_{0}$ can be denoted by $|jN>$ where%
\begin{equation}
J^{2}|jN>=j(j+1)|jN>,\quad J_{0}|jN>=N|jN> \label{eq:4}
\end{equation}%
while the allowed values of $N$ are%
\begin{equation}
N=-j,-j+1,-j+2,\cdots =(n+j) \label{eq:5}
\end{equation}%
where $n$ is an integer. We consider the most general form of the generators
of the algebra, which was introduced by Sukumar\cite{sukumar}%
\begin{eqnarray}
J_{\pm } &=&e^{\pm i\phi }\left( \pm h(x)\frac{\partial }{\partial x}\right)
\pm g(x)+f(x)J_{0}+c(x) \notag \\
J_{0} &=&-i\frac{\partial }{\partial \phi }. \label{eq:6}
\end{eqnarray}%
The commutation relations (\ref{eq:2}) are satisfied when the functions $h(x)$, $f(x)$ and $c(x)$ take the forms%
\begin{equation}
h(x)=\frac{r}{r^{\prime }},\quad f(x)=\frac{1+ar^{2}}{1-ar^{2}},\quad c(x)=-%
\frac{br}{1-ar^{2}} \label{eq:7}
\end{equation}%
where $r=r(x)$ and $a$ and $b$ are constants. The differential realization (%
\ref{eq:6}) can be used to derive the second order differential equations of
the orthogonal polynomials. The differential equations of these polynomials
can be expressed in terms of Casimir operator $J^{2}$:%
\begin{equation}
H=J^{2};\quad H|jN>=j(j+1)|jN>. \label{eq:8}
\end{equation}%
Let us consider the basis function,%
\begin{equation}
|jN>=e^{-iN\phi }\Re _{jN}(x). \label{eq:9}
\end{equation}%
In terms of the realizations (\ref{eq:6}) and with the basis (\ref{eq:9}), the
Hamiltonian (\ref{eq:8}) takes the form
\begin{eqnarray}
H &=&\frac{r^{2}}{r^{\prime 2}}\frac{d^{2}}{dx^{2}}+\frac{r}{r^{\prime }}%
\left( 2g-\frac{2ar^{2}}{1-ar^{2}}-\frac{rr^{\prime \prime }}{r^{\prime 2}}%
\right) \frac{d}{dx}+ \notag \\
&&4g^{2}+g+\frac{rg^{\prime }}{r^{\prime }}-\frac{2g}{1-ar^{2}}-\frac{%
r(2N+br)(2aNr+b)}{(1-ar^{2})^{2}}. \label{eq:10}
\end{eqnarray}%
Let us now turn our attention to the PDM Schr\"{o}dinger equation which can
be written as%
\begin{equation}
H^{\prime }=T+V(x),\quad H^{\prime }\psi (x)=E\psi (x) \label{eq:12}
\end{equation}%
where $V(x)$ is the potential of the physical system and $\psi (x)$ and $E$
are eigenstates and eigenvalues of the PDM Schr\"{o}dinger equation.
Introducing the eigenfunction and momentum operator $p$%
\begin{equation}
\psi (x)=-\frac{2mr^{2}}{r^{\prime 2}}\Re (x);\quad p=-i\frac{d}{dx}
\label{eq:13}
\end{equation}%
respectively, then the position dependent mass Hamiltonian takes the form%
\begin{eqnarray}
H^{\prime } &=&\frac{r^{2}}{r^{\prime 2}}\frac{d^{2}}{dx^{2}}+\frac{r}{%
r^{\prime }}\left( 4-\frac{4rr^{\prime \prime }}{r^{\prime 2}}+\frac{%
rm^{\prime }}{r^{\prime }m}\right) \frac{d}{dx}+ \notag \\
&&2+\frac{2r}{r^{\prime 2}}\left( \frac{3rr^{\prime \prime 2}}{r^{\prime 2}}-%
\frac{rr^{\prime \prime \prime }}{r^{\prime }}-3r^{\prime \prime }\right) +
\notag \\
&&\frac{m^{\prime }r^{2}}{mr^{\prime 2}}\left( \frac{(1+\eta )(\varepsilon
+\eta )m^{\prime }}{m}+\frac{(1-\varepsilon )m^{\prime \prime }}{2m}+\frac{%
2(r^{\prime 2}-rr^{\prime \prime })}{rr^{\prime }}\right) - \label{eq:13x}
\\
&&\frac{2mr^{2}}{r^{\prime 2}}V(x)
\end{eqnarray}%
Then, comparing (\ref{eq:13x}), (\ref{eq:8}) and (\ref{eq:10}), we obtain
the following general expression for the potential,%
\begin{eqnarray}
V(x)-E &=& \notag \\
&&\frac{(2bN+r(b^{2}+a(4N^{2}-1)+2abNr))r^{\prime 2}}{2mr(1-ar^{2})^{2}}+
\notag \\
&&\frac{(j(j+1))r^{\prime 2}}{2mr^{2}}+\frac{3r^{\prime \prime ^{2}}}{%
8mr^{\prime 2}}-\frac{r^{\prime \prime \prime }}{4mr^{\prime }}+V_{m}(x)
\label{eq:14}
\end{eqnarray}%
where $V_{m}(x)$ is given by%
\begin{equation}
V_{m}(x)=\frac{1}{4m^{2}}\left( \frac{\left( 4\varepsilon (1+\eta )+(1+2\eta
)^{2}\right) m^{\prime 2}}{2m}-\varepsilon m^{\prime \prime }\right) .
\label{eq:15}
\end{equation}%
when the function $g(x)$ is constrained to%
\begin{equation}
g(x)=\frac{ar^{2}-2}{ar^{2}-1}+\frac{m^{\prime }r}{2mr^{\prime }}-\frac{%
3rr^{\prime \prime }}{2r^{\prime 2}}. \label{eq:11}
\end{equation}%
We note here that the potential reduces to the Natanzon class potentials for
constant mass. In the following sections we construct the quantum
mechanical potentials.
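As an illustration of how the mass-dependent contribution enters, the following short \texttt{sympy} sketch evaluates $V_m(x)$ from (\ref{eq:15}) and the transformed coordinate $u=\int_0^x\sqrt{m}\,dx$ used below, for a sample mass profile and an arbitrary admissible choice of the ordering parameters; both choices are assumptions made only for illustration.
\begin{verbatim}
import sympy as sp

x, t = sp.symbols('x t', positive=True)
eta, epsilon = sp.Integer(0), sp.Rational(-1, 2)   # then rho = -1 - eta - epsilon = -1/2
m = 1 / (1 + x**2)                                 # sample mass profile (assumption)

mp = sp.diff(m, x)
mpp = sp.diff(m, x, 2)
# mass-dependent contribution V_m(x) of Eq. (15)
Vm = ((4*epsilon*(1 + eta) + (1 + 2*eta)**2) * mp**2 / (2*m) - epsilon * mpp) / (4*m**2)
# transformed coordinate u(x) = int_0^x sqrt(m) dx used for the potentials below
u = sp.integrate(sp.sqrt(m.subs(x, t)), (t, 0, x))

print(sp.simplify(Vm))
print(sp.simplify(u))                              # asinh(x) for this mass profile
\end{verbatim}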
\section{Coulomb, Harmonic oscillator and Morse family potentials}
In order to obtain the corresponding potentials we choose $a=0,$ then the
potential (\ref{eq:14}) takes the form%
\begin{eqnarray}
V(x)-E &=&\left( \frac{b^{2}}{2}+\frac{j(j+1)}{2r^{2}}+\frac{bN}{r}\right)
\frac{r^{\prime 2}}{m}+ \notag \\
&&\frac{3r^{\prime \prime ^{2}}}{8mr^{\prime 2}}-\frac{r^{\prime \prime
\prime }}{4mr^{\prime }}+V_{m}(x). \label{eq:16}
\end{eqnarray}%
In the above potential the energy term on the left-hand side should be
represented by a constant term on the right-hand side. This condition can be
satisfied when%
\begin{equation}
\left( \lambda _{0}+\lambda _{1}r^{-1}+\lambda _{2}r^{-2}\right) \frac{%
r^{\prime 2}}{m}=1 \label{eq:17}
\end{equation}%
where $\lambda _{0},\quad \lambda _{1}$ and $\lambda _{2}$ are constants.
Choosing appropriate values of $\lambda _{0},\quad \lambda _{1}$ and $%
\lambda _{2}$ one can generate quantum mechanical potentials.
\subsection{Coulomb family potentials}
In order to generate Coulomb family potentials we choose $\lambda _{0}=1,$
and$\quad \lambda _{1}=$ $\lambda _{2}=0$. Solving (\ref{eq:17}) for $r$ and
substituting in to (\ref{eq:16}) we obtain the following potential%
\begin{equation}
V(x)=\frac{j(j+1)}{2u^{2}}+\frac{Ze^{2}}{2u}+U_{m}(x) \label{eq:18}
\end{equation}%
with the eigenvalues
\begin{equation}
E=-\frac{Z^{2}e^{4}}{2N^{2}}. \label{eq:19}
\end{equation}%
where $u=\int_{0}^{x}\sqrt{m}dx,$ and the parameter $b$ of the potential (%
\ref{eq:16}) is defined as $\quad b=Ze^{2}/N$. The potential is isospectral
with the constant mass Schr\"{o}dinger equation. The mass dependent function
U$_{m}(x)$ is given by%
\begin{equation}
U_{m}(x)=\frac{m^{\prime }}{8m^{2}}\left( \frac{5m^{\prime }}{4m}-\frac{m^{\prime \prime }}{m^{\prime }}\right) +V_{m}(x).
\end{equation}
\subsection{Harmonic oscillator potential}
The harmonic oscillator potential can be generated from (\ref{eq:16}) when
we set the parameter $A=-\frac{3}{16}(1+2j)^{2}$, under the condition $%
\lambda _{1}=1/2,$ and$\quad \lambda _{0}=$ $\lambda _{2}=0$. In this case $%
r=\frac{u^{2}}{2}$, and the potential takes the form
\begin{equation}
V=\frac{3+16j(j+1)}{8u^{2}}+\frac{b}{2}u^{2}+U_{m}(x) \label{eq:20}
\end{equation}%
with the eigenvalues%
\begin{equation}
E=2bN \label{eq:21}
\end{equation}
\subsection{Morse family potential}
Our last example in this class of potential is the Morse family potential.
This potential can be obtained by setting parameters $\lambda _{2}=1,$ and$%
\quad \lambda _{0}=$ $\lambda _{1}=0$. Solving (\ref{eq:17}) for $r$ we
obtain $r=e^{-\alpha u}$ and the potential takes the form%
\begin{equation}
V(x)=Nb\alpha ^{2}e^{-\alpha u}+\frac{b^{2}\alpha ^{2}}{2}e^{-2\alpha
u}+U_{m}(x) \label{eq:22}
\end{equation}%
with the eigenvalues%
\begin{equation}
E=-\frac{\alpha ^{2}}{8}\left( (1+2j)^{2}\right) \label{eq:23}
\end{equation}
\section{Hyperbolic and Trigonometric Potentials}
In this section we construct hyperbolic and trigonometric potentials. Some
of these potentials have important applications in condensed matter
phenomena because of their periodicity. As we mentioned before, in the potential (\ref{eq:14}) the energy term should be represented by a constant term. We discuss below the problem for various potentials.
\subsection{P\"{o}schl-Teller family potential}
For the choice of $r=e^{-2\alpha u},\quad a=-1$, the result is
\begin{subequations}
\begin{eqnarray}
V(x) &=&\frac{\alpha ^{2}}{8}((b-2N)^{2}-1)\csc h^{2}\alpha u-
\label{eq:24a} \\
&&\frac{\alpha ^{2}}{8}((b+2N)^{2}-1)\sec h^{2}\alpha u+U_{m}(x) \notag \\
E &=&-\frac{\alpha ^{2}}{2}\left( (1+2j)^{2}\right) \label{eq:24b}
\end{eqnarray}%
which is the P\"{o}schl-Teller potential. The function $u$ is given by
\end{subequations}
\begin{equation}
u=\int_{0}^{x}\sqrt{m}dx. \label{eq:25}
\end{equation}%
For the given mass term, $u$ should be integrable. The trigonometric form of
the P\"{o}schl-Teller potential can be obtained by substituting $\alpha
\rightarrow i\alpha .$ In this case the potential and its eigenvalues are
given by
\begin{subequations}
\begin{eqnarray}
V(x) &=&\frac{\alpha ^{2}}{8}((b-2N)^{2}-1)\csc ^{2}\alpha x+ \label{et:1}
\\
&&\frac{\alpha ^{2}}{8}(\left( b+2N\right) ^{2}-1)\sec ^{2}\alpha x+U_{m}(x)
\notag \\
E &=&\frac{\alpha ^{2}}{2}\left( (1+2j)^{2}\right) \label{et:2}
\end{eqnarray}
\subsection{Generalized P\"{o}schl-Teller family potential}
In order to construct the generalized P\"{o}schl-Teller family potential we
introduce
\end{subequations}
\begin{equation}
r=e^{-\alpha u},a=-1. \label{eq:26}
\end{equation}%
Substituting $r$ into (\ref{eq:14}) the resulting potential and
corresponding eigenvalues read as
\begin{subequations}
\begin{eqnarray}
V(x) &=&\frac{\alpha ^{2}}{8}(b^{2}+4N^{2}-1)\csc h^{2}\alpha u-
\label{eq:27a} \\
&&\frac{\alpha ^{2}}{2}bN\coth \alpha u\csc h\alpha u+U_{m}(x) \notag \\
E &=&-\frac{\alpha ^{2}}{8}\left( 4A+(1+2j)^{2}\right) \label{eq:27b}
\end{eqnarray}%
The trigonometric form of this potential can be obtained by replacing $\alpha $ by $i\alpha $. Then the potential is given by
\end{subequations}
\begin{subequations}
\begin{eqnarray}
V(x) &=&\frac{\alpha ^{2}}{8}(b^{2}+4N^{2}-1)\csc ^{2}\alpha u- \label{et:3}
\\
&&\frac{\alpha ^{2}}{2}bN\cot \alpha u\csc \alpha u+U_{m}(x) \notag \\
E &=&\frac{\alpha ^{2}}{8}\left( (1+2j)^{2}\right) . \label{et:4}
\end{eqnarray}
\subsection{Scarf family potential}
Let us now construct another potential by substituting $r=ie^{-\alpha
u},\quad a=-1$ into equation (\ref{eq:14}). In this case we obtain PT
symmetric Scarf family potential\cite{bender},
\end{subequations}
\begin{subequations}
\begin{eqnarray}
V(x) &=&-\frac{\alpha ^{2}}{8}(b^{2}+4N^{2}-1)\sec h^{2}\alpha u+
\label{eq:28a} \\
&&\frac{i\alpha ^{2}}{2}bN\sec h\alpha u\tanh \alpha u+U_{m}(x) \notag \\
E &=&-\frac{\alpha ^{2}}{8}\left( (1+2j)^{2}\right) . \label{eq:28b}
\end{eqnarray}%
When we replace $b\rightarrow ib$, the potential becomes the Scarf family potential. When we replace $\alpha $ by $i\alpha $ we obtain:%
\end{subequations}
\begin{subequations}
\begin{eqnarray}
V(x) &=&\frac{\alpha ^{2}}{8}(b^{2}+4N^{2}-1)\sec h^{2}\alpha u+
\label{et:5} \\
&&\frac{\alpha ^{2}}{2}bN\sec \alpha u\tan \alpha u+U_{m}(x) \notag \\
E &=&\frac{\alpha ^{2}}{8}\left( (1+2j)^{2}\right) \label{et:6}
\end{eqnarray}%
The Scarf, PT symmetric Scarf and Generalized P\"{o}schl-Teller potentials
are isospectral potentials. The last six potentials have already been constructed by choosing $r$ as an exponential function. This property implies that these potentials form the same family of potentials and that they can be obtained from each other by a simple coordinate transformation.
\subsection{Eckart family potential}
The Eckart family potential can be constructed by introducing $\ r=\coth
\frac{\alpha x}{2},\quad a=-1$. The corresponding potential and eigenvalues
are given by
\end{subequations}
\begin{subequations}
\begin{eqnarray}
V(x) &=&-\frac{\alpha ^{2}}{2}bN\coth \alpha u+ \label{eq:29a} \\
&&\frac{\alpha ^{2}}{2}(A+j(j+1))\csc h^{2}\alpha u+U_{m}(x) \notag \\
E &=&-\frac{\alpha ^{2}}{8}(b^{2}+N^{2}) \label{eq:29b}
\end{eqnarray}%
Trigonometric form of this potential can be obtained by the choice of
\end{subequations}
\begin{equation}
r=\cot \frac{\alpha x}{2},a=-1,b\rightarrow ib \label{et:7}
\end{equation}%
then the potential (\ref{eq:14}) takes the form
\begin{subequations}
\begin{eqnarray}
V(x) &=&\frac{\alpha ^{2}}{2}bN\cot \alpha u+ \label{et:8} \\
&&\frac{\alpha ^{2}}{2}(j(j+1))\csc ^{2}\alpha u+U_{m}(x) \notag \\
E &=&-\frac{\alpha ^{2}}{8}(b^{2}-4N^{2}) \label{et:9}
\end{eqnarray}
\subsection{Hulthen family potential}
Another important potential of quantum mechanics is the Hulthen potential. The choice of $r=\coth \frac{\alpha x}{4},\quad a=-1$ produces the
following potential,
\end{subequations}
\begin{subequations}
\begin{eqnarray}
V &=&\frac{(j(j+1)-bN/2)\alpha ^{2}e^{-\alpha u}}{2(1-e^{-\alpha u})}+
\label{eq:30a} \\
&&\frac{(j(j+1))\alpha ^{2}e^{-2\alpha u}}{2(1-e^{-\alpha u})^{2}}+U_{m}(x)
\notag \\
E &=&-\frac{\alpha ^{2}}{32}(b-2N)^{2} \label{eq:30b}
\end{eqnarray}
\subsection{Rosen-Morse family potential}
The last example in this category is the Rosen-Morse family potential. This
potential is isospectral with the Eckart family potential and can be
obtained by introducing
\end{subequations}
\begin{equation}
r=\coth \left( \frac{\alpha x}{2}+i\frac{\pi }{4}\right) ,\quad a=-1
\label{eq:31}
\end{equation}%
Substituting (\ref{eq:31}) into (\ref{eq:14}) we obtain the following
potential with the eigenvalues $E$%
\begin{subequations}
\begin{eqnarray}
V(x) &=&-\frac{\alpha ^{2}}{2}bN\tanh \alpha u- \label{eq:32a} \\
&&\frac{\alpha ^{2}}{2}(j(j+1))\sec h^{2}\alpha u+U_{m}(x) \notag \\
E &=&-\frac{\alpha ^{2}}{8}(b^{2}+4N^{2}) \label{eq:32b}
\end{eqnarray}%
In order to obtain trigonometric form of the Rosen-Morse family potential we
substitute
\end{subequations}
\begin{equation}
r=-i\cot \left( \frac{\alpha x}{2}+\frac{\pi }{4}\right) ,a=-1,b\rightarrow
ib \label{et:10}
\end{equation}%
into (\ref{eq:14}) and we obtain the following potential
\begin{subequations}
\begin{eqnarray}
V(x) &=&-\frac{\alpha ^{2}}{2}bN\tan \alpha u+ \label{et:11} \\
&&\frac{\alpha ^{2}}{2}(j(j+1))\sec ^{2}\alpha u+U_{m}(x) \notag \\
E &=&-\frac{\alpha ^{2}}{8}(b^{2}-4N^{2}) \label{et:12}
\end{eqnarray}%
It is obvious that the Eckart, Hulthen and Rosen-Morse family potentials can be mapped onto each other by a simple coordinate transformation.
\section{Conclusions}
In this work we have made a systematic study to obtain the exact solution of the PDM Schr\"{o}dinger equation within the context of the $su(1,1)$ algebra. We have obtained a number of potentials, some of which are already known while others are new. Another issue here is the choice of the parameters $\rho $, $\eta $ and $\varepsilon $. It has been shown that the exact solvability of the PDM Schr\"{o}dinger equation is independent of these parameters.
| {
"timestamp": "2004-10-17T12:12:18",
"yymm": "0410",
"arxiv_id": "quant-ph/0410127",
"language": "en",
"url": "https://arxiv.org/abs/quant-ph/0410127",
"abstract": "An algebraic method of constructing potentials for which the Schroedinger equation with position dependent mass can be solved exactly is presented. A general form of the generators of su(1,1) algebra has been employed with a unified approach to the problem. Our systematic approach reproduces a number of earlier results and also leads to some novelties. We show that the solutions of the Schroedinger equation with position dependent mass are free from the choice of parameters for position dependent mass. Two classes of potentials are constructed that include almost all exactly solvable potentials.",
"subjects": "Quantum Physics (quant-ph)",
"title": "A systematic study on the exact solution of the position dependent mass Schroedinger equation",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9664104914476339,
"lm_q2_score": 0.7341195327172401,
"lm_q1q2_score": 0.7094608183945753
} |
https://arxiv.org/abs/hep-lat/9609047 | Monte Carlo Study of Cluster-Diameter Distribution: A New Observable to Estimate Correlation Lengths | We report numerical simulations of two-dimensional $q$-state Potts models with emphasis on a new quantity for the computation of spatial correlation lengths. This quantity is the cluster-diameter distribution function $G_{diam}(x)$, which measures the distribution of the diameter of stochastically defined cluster. Theoretically it is predicted to fall off exponentially for large diameter $x$, $G_{diam} \propto \exp(-x/\xi)$, where $\xi$ is the correlation length as usually defined through the large-distance behavior of two-point correlation functions. The results of our extensive Monte Carlo study in the disordered phase of the models with $q=10$, 15, and $20$ on large square lattices of size $300 \times 300$, $120 \times 120$, and $80 \times 80$, respectively, clearly confirm the theoretically predicted behavior. Moreover, using this observable we are able to verify an exact formula for the correlation length $\xi_d(\beta_t)$ in the disordered phase at the first-order transition point $\beta_t$ with an accuracy of about $1%-2%$ for all considered values of $q$. This is a considerable improvement over estimates derived from the large-distance behavior of standard (projected) two-point correlation functions, which are also discussed for comparison. | \section{Introduction}
The physics of phase transitions is essentially governed by the behavior of
the spatial correlation length $\xi$. While in some problems, e.g. at a
continuous phase transition where $\xi$ diverges, it is often sufficient to
know the qualitative behavior, there are also many applications which rely
on quantitative estimates of $\xi$. This applies in particular to the
finite-size scaling behavior near a first-order phase transition \cite{review}
where $\xi$ stays finite and sets the length scale above which asymptotic
considerations should apply \cite{athens}. Since analytical predictions are
scarce it is therefore of great practical importance to develop refined
numerical methods for reliable computations of correlation lengths.
In order to evaluate the accuracy of a newly proposed method one should apply
it first to models where analytical predictions are available. The best known
example is the two-dimensional (2D) Ising model where $\xi$ is exactly known
at all temperatures \cite{ising}. But already the generalization to the 2D
$q$-state Potts model \cite{wu} complicates the theoretical analysis
considerably, and much less is known analytically. It was therefore a great
success when a few years ago at least the correlation length $\xi_d(\beta_t)$
in the disordered phase {\em at\/} the first-order transition point $\beta_t$
for $q \ge 5$ could be calculated exactly \cite{xi_1,xi_2,bj92}. Apart from
heuristic arguments, no analytical predictions are available for the
correlation length $\xi_o(\beta_t)$ in the ordered phase, and previous
numerical simulations \cite{previous1,previous2,previous3} turned out to be
difficult to interpret. This was
the physical motivation to start a project \cite{jk95a,jk95b} with the goal
to clarify conflicting conjectures for the ratio $\xi_o/\xi_d$ at $\beta_t$.
The idea was, of course, to test the employed numerical methods first for
the exactly known correlation length in the disordered phase \cite{jk95a}
and then to proceed to the so far unexplored ordered phase \cite{jk95b}.
One often-employed way to extract correlation lengths is to study the
exponential decay of two-point correlation functions in the asymptotic limit
of large distances. While this method works perfectly for the 2D Ising model,
for $\xi_d(\beta_t)$ of the 2D $q$-state Potts models with $q=10$, 15, and 20
we experienced quite nasty systematic deviations from the exact answer by
about $10\% - 20\%$ \cite{jk95a}. The deviations could be traced back to the
unexpected importance of higher order excitations, but even though the
Monte Carlo simulations were performed on quite large lattices and with a
high statistics of about $50\,000 - 100\,000$ uncorrelated measurements,
least-square fits with sufficiently many correction terms turned out to be
too unstable to predict reliable numbers.
A way out of this problem is to search for a different estimator of $\xi$
which is less affected by correction terms. A systematic search is certainly
very difficult, but one possible candidate was recently suggested in
analytical work \cite{bc94} making extensive use of the Fortuin-Kasteleyn
cluster representation \cite{FoKa} of the Potts model. In Ref.~\cite{bc94} it
was shown that the distribution of the cluster diameter, $G_{\rm diam}(x)$,
decays exponentially for large diameter $x$, and that the decay constant is
identical to the inverse correlation length (as defined from the decay of the
two-point correlation function). This prompted us to investigate if the
cluster-diameter distribution function is better suited for a numerical
determination of the correlation length. In the following we report
high-statistics Monte Carlo simulations of the models with $q=10$, 15, and 20,
focussing on the properties of the new observable. As the main
result it turns out to be indeed very well suited in the disordered phase,
allowing for the first time a confirmation of the analytical formula for
$\xi_d(\beta_t)$ with an accuracy of about $1\% - 2\%$. Since we used larger
lattices and considerably higher statistics than in our previous
studies \cite{jk95a}, we discuss for comparison also the newly obtained
estimates for $\xi_d(\beta_t)$ from two different projections of the
standard two-point correlation function.
The remainder of the paper is organized as follows. In Sec.~2 we first recall
the definition of the model and some exact results. We then discuss the
simulation techniques and in particular describe the various estimators used
to measure the correlation length. The results of our simulations are
presented in Sec.~3, and in Sec.~4 we conclude with a brief summary of the
main results and some final remarks.
\section{Model and observables}
In our Monte Carlo simulations we used the standard definition of the Potts
model partition function \cite{wu},
\begin{equation}
Z = \sum_{\{s_i\}} e^{-\beta E}; \, E = -\sum_{\langle ij \rangle}
\delta_{s_i s_j}; \, s_i = 1,\dots,q,
\label{eq:model}
\end{equation}
where $\beta = J/k_BT$ is the inverse temperature in natural units, $i$ denote
the lattice sites of a square lattice, $\langle ij \rangle$
are nearest-neighbor pairs, $\delta_{s_i s_j}$ is the Kronecker delta
symbol, and $q$ is the number of states per spin. In all simulations we used
periodic boundary conditions to minimize finite-size effects.
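For orientation, a minimal sketch of the energy entering Eq.~(\ref{eq:model}) for a given spin configuration on an $L\times L$ periodic lattice is shown below (Python/NumPy; the array \texttt{s} and its random initialization are placeholders and this is not the production code used in this study).
\begin{verbatim}
# Energy E = -sum_<ij> delta(s_i,s_j) of a q-state Potts configuration on an
# L x L square lattice with periodic boundaries; each bond is counted once.
import numpy as np

def potts_energy(s):
    right = (s == np.roll(s, -1, axis=1))   # bonds in the +x direction
    down  = (s == np.roll(s, -1, axis=0))   # bonds in the +y direction
    return -(int(right.sum()) + int(down.sum()))

rng = np.random.default_rng(0)
L, q = 16, 10
s = rng.integers(1, q + 1, size=(L, L))     # placeholder random configuration
print(potts_energy(s))
\end{verbatim}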
In the following we report results for the models with $q=10$, 15, and 20,
employing lattices of size $V =L \times L$ with $L = 300$, 120, and 80,
respectively. All simulations were performed in the canonical ensemble at
the infinite volume first-order transition point
$\beta_t = \ln (1+\sqrt{q})$, at which the ordered and disordered phase can
coexist. In a Monte Carlo simulation, the system can be biased into one
of the two phases by the choice of the initial spin configuration.
To update the spins we used the Wolff single-cluster
algorithm \cite{wolff_algo}. From a previous comparative study \cite{jk95a}
we knew that in the disordered phase this
algorithm clearly outperforms all other standard algorithms such as Metropolis,
heat-bath, and Swendsen-Wang multiple cluster \cite{sw}.
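A schematic version of one such single-cluster update is sketched below (Python; an illustration of the algorithm, not the code used for the production runs). The bond-activation probability $p=1-e^{-\beta}$ corresponds to the normalization of Eq.~(\ref{eq:model}); after the cluster of equal spins has been grown, it is flipped to a randomly chosen different state.
\begin{verbatim}
# Schematic Wolff single-cluster update for the q-state Potts model: grow a
# cluster of sites sharing the seed spin with bond probability 1 - exp(-beta),
# then assign the whole cluster a new, different spin value.
import numpy as np

def wolff_update(s, beta, q, rng):
    L = s.shape[0]
    p_add = 1.0 - np.exp(-beta)
    seed = tuple(rng.integers(0, L, size=2))
    old = s[seed]
    new = old
    while new == old:                       # pick a different spin value
        new = rng.integers(1, q + 1)
    stack, cluster = [seed], {seed}
    while stack:
        x, y = stack.pop()
        s[x, y] = new
        for nb in (((x + 1) % L, y), ((x - 1) % L, y),
                   (x, (y + 1) % L), (x, (y - 1) % L)):
            if nb not in cluster and s[nb] == old and rng.random() < p_add:
                cluster.add(nb)
                stack.append(nb)
    return len(cluster)                     # cluster size |C|
\end{verbatim}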
The lattice sizes were chosen such that, for each value of $q$,
$L \approx 28 \xi_d(\beta_t)$, with $\xi_d(\beta_t) = 10.559519\dots$,
$4.180954\dots$, and $2.695502\dots$ for $q=10$, 15, and 20,
respectively \cite{xi_1,xi_2,bj92}. Starting from a completely random
configuration of spins it is then extremely probable
that the system will stay in the disordered phase for a sufficiently long
time, allowing statistically meaningful measurements of quantities being
characteristic for the pure disordered phase. More precisely, by recalling
that the escape probability ${\cal P}$ is proportional to
$\exp(-2\sigma_{od}L)$ and that in two dimensions the interface tension
$\sigma_{od}$ can be expressed in terms of the correlation length of the
disordered phase \cite{bj92}, $\sigma_{od} = 1/2\xi_d(\beta_t)$, one easily
arrives at the order-of-magnitude estimate
${\cal P} \propto \exp(-L/\xi_d(\beta_t)) \approx \exp(-28) \approx 10^{-12}$.
Finite-size corrections in the pure disordered phase are expected to be of
the same order.
In this work we mainly focussed on measurements of the
probability distribution of the cluster diameter, ${\rm diam} \,C_{i_0}$,
which, in general, is defined as the ma\-xi\-mal extension of a cluster
in any of the $D$ coordinate directions of a hypercubic lattice; for an
illustration see Fig.~\ref{fig:sketch}. The cluster-diameter distribution
function $\mathop{G^{^{\rm diam}}(x)}$ is then the probability,
\begin{equation}
\mathop{G^{^{\rm diam}}(x)} = \mu(\, {\rm diam} \,C_{i_0} = x),
\label{eq:gdiam}
\end{equation}
that the cluster $C_{i_0}$ connected to a lattice site $i_0$ has a given
diameter $x$ \cite{bc94}. To increase the statistics we took advantage of the
periodic boundary conditions and ave\-raged $\mathop{G^{^{\rm diam}}(x)}$ over all lattice sites
$i_0$. In practice this amounts to recording a histogram $H^{\rm diam}(x)$,
whose entries at $x= {\rm diam} \,C$ are incremented by the size or weight
$|C|$ of each simulated cluster. If properly normalized, $H^{\rm diam}(x)$
is then an estimator of the probability distribution $\mathop{G^{^{\rm diam}}(x)}$. As discussed in
the Introduction the theoretically expected asymptotic behavior of $\mathop{G^{^{\rm diam}}(x)}$ in
the disordered phase is an exponential decay governed by the correlation
length $\xi_d$ \cite{bc94},
\begin{equation}
\mathop{G^{^{\rm diam}}(x)} = a \exp(-x/\xi_d) + \dots.
\label{eq:Gdfit}
\end{equation}
By taking the logarithm of $\mathop{G^{^{\rm diam}}}$ and performing linear two-parameter fits
it is then straightforward to extract $\xi_d$.
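A minimal sketch of this measurement and of the subsequent linear fit to (\ref{eq:Gdfit}) is given below (Python/NumPy; the iterable \texttt{clusters} of (diameter, size) pairs and the fit window are placeholders for the actual simulation output).
\begin{verbatim}
# Accumulate the size-weighted cluster-diameter histogram and extract xi_d
# from a linear fit to log G_diam(x) = log(a) - x/xi_d.
import numpy as np

def fit_xi_from_diameters(clusters, L, x_min, x_max):
    H = np.zeros(L + 1)
    for diam, size in clusters:
        H[diam] += size                     # weight each cluster by |C|
    G = H / H.sum()                         # estimator of G_diam(x)
    x = np.arange(x_min, x_max + 1)
    slope, intercept = np.polyfit(x, np.log(G[x_min:x_max + 1]), 1)
    return -1.0 / slope                     # the slope is -1/xi_d
\end{verbatim}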
For comparison we considered also in the new simulations the
$k_y^{(n)} = 2\pi n/L$ momentum projections ($i=(i_x,i_y)$),
\begin{equation}
g^{(n)}(i_x,j_x) = \frac{1}{L} \sum_{i_y,j_y} G(i,j)
e^{i k_y^{(n)}(i_y-j_y)},
\label{eq:g}
\end{equation}
with $n=0$ and 1 of the two-point correlation function
\begin{equation}
G(i,j) \equiv \langle \delta_{s_i s_j} - \frac{1}{q} \rangle.
\label{eq:G}
\end{equation}
For the measurements we actually decomposed the whole spin configuration
into stochastic Swendsen-Wang (multiple) clusters and used the improved
cluster estimator \cite{sokal}
\begin{equation}
G(i,j) = \frac{q-1}{q} \langle \Theta(i,j) \rangle,
\label{eq:Gimp}
\end{equation}
where $\Theta(i,j)=1$, if $i$ and $j$ belong to the same cluster, and
$\Theta = 0$ otherwise. In particular for small average single-cluster sizes
(cp. Table~\ref{tab:stat}), this procedure is more efficient than using
directly the corresponding improved single-cluster estimator.
As discussed previously \cite{jk95a}, to extract $\xi_d$ from the
large distance behavior of (\ref{eq:g}),
non-linear four-parameter fits of the form
\begin{equation}
g^{(n)}(x) \equiv g^{(n)}(i_x,0) = a \cosh\!\left( \frac{L/2-x}{\xi_d^{(n)}} \right)
+ b \cosh\!\left( c\, \frac{L/2-x}{\xi_d^{(n)}} \right),
\label{eq:fit_4}
\end{equation}
with
\begin{equation}
\xi_d \approx \xi_d^{(n)} / \sqrt{1 \!-\! (2 \pi n \xi_d^{(n)}/L)^2}.
\label{eq:xi}
\end{equation}
are necessary. Below we shall report results for the first two projections
with $n=0$ and $n=1$. While the $n=0$ projection has been studied before on
smaller lattices \cite{jk95a}, the use of the $n=1$ projection
in the disordered phase is actually also new. Originally this projection
was applied in the ordered phase where it is essential for removing
constant background terms caused by the non-zero
magnetization \cite{previous3,jk95b}. Notice that for the large lattice sizes
used in this study, $L \approx 28 \xi_d$, the difference in (\ref{eq:xi})
between the fit parameter $\xi_d^{(n)}$ and $\xi_d$ is only about $2.4\%$.
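A minimal sketch of such a fit, together with the conversion (\ref{eq:xi}), is shown below (Python/SciPy; the data arrays and the initial parameter guess are placeholders, and in practice the fits have to be repeated over several fit windows as described in the text).
\begin{verbatim}
# Non-linear four-parameter fit of the cosh Ansatz to the projected correlation
# function g^(n)(x), followed by the finite-momentum correction for xi_d.
import numpy as np
from scipy.optimize import curve_fit

def fit_xi_projected(x, g, L, n):
    def model(x, a, b, c, xi):
        u = (L / 2.0 - x) / xi
        return a * np.cosh(u) + b * np.cosh(c * u)
    p0 = (g[-1], 0.1 * g[-1], 2.0, 5.0)     # placeholder initial guess
    popt, _ = curve_fit(model, x, g, p0=p0)
    xi_n = popt[3]
    return xi_n / np.sqrt(1.0 - (2.0 * np.pi * n * xi_n / L) ** 2)
\end{verbatim}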
The computer code was implemented on a T3D parallel computer in a trivial
way by running 64 independent simulations in parallel.
This allowed us to generate the very high statistics
compiled in Table~\ref{tab:stat}. Here we followed the usual convention
and defined $V/\langle |C| \rangle_{\rm SC}$ single-cluster steps as one
Monte Carlo update sweep (MCS), where $\langle |C| \rangle_{\rm SC}$
is the average cluster size, and rescaled the integrated autocorrelation
time of the internal energy, $\tau_{\rm int,e}$, to this unit of time.
Per $\tau_{\rm int,e}$ we performed about two measurements of the projected
correlation functions. The size and diameter of the clusters were measured
for each generated cluster. The statistical error bars are estimated from
the fluctuations among the 64 independent copies by using the standard
jack-knife procedure \cite{jack}. The total running time of the simulations
amounts to about five years of CPU time on a typical workstation.
\section{Results}
In all simulations we monitored the time evolution of the energy and
magnetization to convince ourselves that the system never escaped into
the ordered phase. As a more quantitative measure we also computed energy
and magnetization moments which can be compared with exact \cite{wu,baxter1}
or series expansion results \cite{large_q}. The average and maximum cluster
sizes and the maximum cluster diameter found in the
simulations are given in Table~\ref{tab:stat}. As a result
of these tests we are convinced that, despite the very long
run times, our results for $\xi_d$ can be identified with the {\em pure}
disordered phase correlation length.
The data for $\mathop{G^{^{\rm diam}}(x)}$ and $\mathop{g^{(0)}(x)}$ are shown for $q=10$, 15, and 20 in the
semi-log plots of Fig.~\ref{fig:gdiam}. The continuous lines are
one- and three-parameter fits to the Ansatz (\ref{eq:Gdfit}) and
(\ref{eq:fit_4}), respectively, with $\xi_d$ held fixed at its theoretical
value ($ = 10.559519\dots$, $4.180954\dots$, and $2.695502\dots$ for $q=10$,
15, and 20 \cite{xi_1,xi_2,bj92}). Let us first concentrate on the new
observable, the cluster-diameter probability distribution $\mathop{G^{^{\rm diam}}(x)}$. At first
sight the constrained one-parameter fit to $\mathop{G^{^{\rm diam}}}$ looks
less perfect than the constrained three-parameter fit to $\mathop{g^{(0)}}$, since the data
points are more randomly scattered around the fit. The reason is that the
correlations between the estimates at $x$ and $x + \Delta x$ are much smaller
for $\mathop{G^{^{\rm diam}}(x)}$ than for $\mathop{g^{(0)}(x)}$. This can be understood by noting that a cluster of
diameter $x_0$ contributes only to the {\em one\/} estimate
of $\mathop{G^{^{\rm diam}}(x)}$ at $x=x_0$, but to {\em all\/} estimates of $\mathop{g^{(0)}(x)}$ with $x \le x_0$
(recall the cluster estimator (\ref{eq:Gimp})).
The correlation length estimates resulting from various unconstrained
two-pa\-ra\-me\-ter fits to $\mathop{G^{^{\rm diam}}}$ in intervals $x_{\rm min} \dots x_{\rm max}$
with $x_{\rm max} = 130$, 50, and 40 for $q=10$, 15, and 20, respectively,
are collected in Table~\ref{tab:xi.2Lx2L.fit}.
We see that the results are in very good agreement with the theoretically
expected values, with only slight systematic deviations of about $1\%-2\%$.
Contrary to the results \cite{jk95a} obtained from $\mathop{g^{(0)}(x)}$ the fitted values
tend now to be overestimates for small $x_{\rm min}$. This tendency becomes
obvious in Fig.~\ref{fig:gdiam_eff} where we show the effective correlation
lengths
\begin{equation}
\xi^{\rm eff}_d(x) = 1/\ln [C(x)/C(x+1)],
\label{eq:xi_eff}
\end{equation}
with $C = \mathop{G^{^{\rm diam}}}$ or $\mathop{g^{(0)}}$. The $\xi^{\rm eff}_d(x)$ are just the inverse of the
local slopes in Fig.~\ref{fig:gdiam}. By recalling that neighboring values of
$\mathop{G^{^{\rm diam}}}$ are much less correlated than those of $\mathop{g^{(0)}}$, this explains
the much larger error bars on the data for $\xi^{\rm eff}_d$ derived
from $\mathop{G^{^{\rm diam}}}$. Observe that $\xi^{\rm eff}_d$ obtained from $\mathop{G^{^{\rm diam}}}$
develop a much more pronounced plateau for $q=15$ and $20$ than for $q=10$,
before also here the statistical errors increase
and the data start to fluctuate around the theoretically expected value.
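In practice the effective correlation length of Eq.~(\ref{eq:xi_eff}) is obtained directly from neighbouring data points; a one-line sketch (Python/NumPy, with \texttt{C} a placeholder array holding either $\mathop{G^{^{\rm diam}}}$ or $\mathop{g^{(0)}}$ as a function of $x$) reads:
\begin{verbatim}
# Effective correlation length xi_eff(x) = 1 / log[ C(x) / C(x+1) ].
import numpy as np

def xi_eff(C):
    return 1.0 / np.log(C[:-1] / C[1:])
\end{verbatim}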
To conclude this subsection, by using the cluster-diameter probability
distribution as an estimator for the correlation length, we succeeded in
confirming the theoretical prediction for $\xi_d(\beta_t)$ at the $1\% - 2\%$
level.
It would of course be unfair to compare the final estimates obtained from
$\mathop{G^{^{\rm diam}}(x)}$ of the present study with the results from $\mathop{g^{(0)}(x)}$ of previous
work \cite{jk95a} which used smaller lattices and lower statistics.
In the present study we have therefore analyzed again $\mathop{g^{(0)}(x)}$. Furthermore we
discuss for the first time also $g^{(1)}(x)$ in the disordered phase. In
Fig.~\ref{fig:gdiam} we see that the constrained three-parameter fit to $\mathop{g^{(0)}}$
yields an excellent description of the fall-off of $\mathop{g^{(0)}(x)}$ over more than four
decades. Still, from an unconstrained four-parameter fit over the same
$x$ range with $\xi_d$ as a free parameter we obtain for $q=10$ an about
$10\%$ smaller value of $\xi_d = 9.5(4)$. This confirms our earlier
observation in Ref.~\cite{jk95a} that four-parameter fits
to $\mathop{g^{(0)}}$ systematically underestimate $\xi_d$. This is demonstrated
in more detail in Table~\ref{tab:xi.2Lx2L.fit} where we have collected
the results of various fits in the intervals $x_{\rm min} \dots x_{\rm max} =
L/2$. For all three values of $q$ we observe a clear tendency of increasing
estimates for $\xi_d$ with increasing $x_{\rm min}$. Still, the estimates in
the last line for $\mathop{g^{(0)}(x)}$ are about $8\%$ - $10\%$ below the theoretical
values. This tendency is also clearly visible in the behavior of the
$\xi^{\rm eff}_d(x)$ of $\mathop{g^{(0)}}$ shown in Fig.~\ref{fig:gdiam_eff}.
Compared with Ref.~\cite{jk95a}
the statistics of the present simulations is higher by more than one order of
magnitude. This allowed us to include larger $x$ values in the fits and,
as expected, improved the estimates of $\xi_d$, in particular for
$q=15$ and 20. This clearly indicates that
by further increasing the statistics also the remaining discrepancies could
be removed. We can therefore conclude that there is nothing wrong,
in principle, in using the standard two-point correlation function to
estimate $\xi_d$. Numerically, however, accurate estimates would require
an enormous effort and would thus be a rather expensive enterprise.
In Table~\ref{tab:xi.2Lx2L.fit} we also give the results of unconstrained
four-parameter fits of the form (\ref{eq:fit_4}) to
$g^{(1)}(x)$ where, by using (\ref{eq:xi}), we have already converted
$\xi_d^{(1)}$ to $\xi_d \approx 1.02 \xi_d^{(1)}$. We see that the estimates
from $\mathop{g^{(0)}}$ and $g^{(1)}$ are strongly correlated, so that in the disordered
phase nothing is gained by studying also the higher projections.
Further investigations of the cluster-diameter distribution in the
disordered phase of the two-dimensional Ising and 3-state Potts models
revealed, however, that the new observable is not always advantageous.
Our results for the Ising model from a very long simulation of a
$80 \times 80$ lattice with
${\rm MCS}/\tau_{{\rm int},e} \approx 12\,288\,000 = N_{\rm meas}$
in the disordered phase at $\beta = 0.70340888 \approx 0.8 \beta_c$ are
shown in Fig.~\ref{fig:gdiam_is_eff}. For $\beta < \beta_c$
the exact expression for the 2D Ising model correlation length is
$\xi_d = 1/(\beta^{\star} - \beta)$ \cite{ising}, where the dual inverse
temperature
$\beta^{\star}$ is given by $(\exp(\beta)-1)(\exp(\beta^{\star})-1) = q = 2$.
We see that here the $\xi_d^{\rm eff}$ derived
from $\mathop{G^{^{\rm diam}}}$ clearly overshoot the exact value of $\xi_d = 2.6202906\dots$
before they slowly approach it from above. Notice that $\beta$ was adjusted
such that $\xi_d$ agrees roughly with the value of the $q=20$ model at
$\beta_t$. The $\xi_d^{\rm eff}$ of $\mathop{g^{(0)}}$,
on the other hand, coincide with the exact value already for very small $x$,
and a simple two-parameter fit of the form (\ref{eq:fit_4}) with $b=c=0$
in the range $x=1,\dots,40 = L/2$ yields $\xi_d=2.62029(14)$, in perfect
agreement with the exact result. A fit of $\mathop{G^{^{\rm diam}}}$ according to (\ref{eq:Gdfit})
using only large $x$ values in the interval $x=40,\dots,56$ gives a
considerably higher estimate of $\xi_d = 2.89(8)$ which, despite its
large error bar, is only barely compatible with the theoretical value. The
deviation is still about $10\%$.
If we choose $\beta = 0.7891847 \approx 0.9 \beta_c$ (the dual
inverse temperature of $\beta^{\star}=0.98$), such that the exact
correlation length is twice as large, $\xi_d = 5.2405812\dots$, we obtain
qualitatively the same picture. This is shown in Fig.~\ref{fig:gdiam_is160_eff}
for a simulation of a $160 \times 160$ lattice with
${\rm MCS}/\tau_{{\rm int},e} \approx 3\,200\,000 = N_{\rm meas}$. Here
the linear fit of the data for $\mathop{g^{(0)}}$ in
the range $x=1,\dots,80 = L/2$ yields $\xi_d = 5.2400(6)$, again in nice
agreement with the exact value. A fit of $\mathop{G^{^{\rm diam}}}$ in the interval
$x=80,\dots,100$, on the other hand, yields $\xi_d = 5.6(3)$, which again deviates
considerably, by about $7\%$.
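The quoted inverse temperatures and exact correlation lengths follow directly from the duality relation; a minimal numerical sketch reproducing them (Python/NumPy) is:
\begin{verbatim}
# 2D Ising (q=2) check: beta* from (exp(beta)-1)(exp(beta*)-1) = q, and
# xi_d = 1/(beta* - beta) in the disordered phase.
import numpy as np

def xi_ising(beta, q=2.0):
    beta_dual = np.log(1.0 + q / np.expm1(beta))
    return 1.0 / (beta_dual - beta)

print(xi_ising(0.70340888))   # approx. 2.6203
print(xi_ising(0.7891847))    # approx. 5.2406
\end{verbatim}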
Finally we show in Fig.~\ref{fig:gdiam_q=3_eff} our results for the
two-dimensional $q=3$ Potts model at
$\beta = 0.95179503 \approx 0.95 \beta_c$ (where
$\beta^{\star}=1.06$). Here the lattice size is
$160 \times 160$, ${\rm MCS}/\tau_{{\rm int},e} \approx 2\,285\,000$, and
$N_{\rm meas} = 3\,200\,000$. For small $x$ we observe the influence of
higher excitations in $\mathop{g^{(0)}}$ which, however, die out rapidly. Discarding
therefore the smallest distances and choosing a fit interval of
$x=7,\dots,80$ we obtain an estimate of $\xi_d = 5.838(2)$, which is shown
as the horizontal line in the lower plot of Fig.~\ref{fig:gdiam_q=3_eff}.
Here the cluster-diameter distribution $\mathop{G^{^{\rm diam}}}$ is already slightly better behaved
than for the two-dimensional Ising model, and a fit in the interval
$x=80,\dots,106$ gives a compatible value of $\xi_d = 6.0(2)$, which
now deviates only by $2\%$ from the result of $\mathop{g^{(0)}}$.
\section{Discussion}
Our numerical results clearly show that the cluster-diameter distribution
$\mathop{G^{^{\rm diam}}(x)}$ is very well suited to extract the correlation length $\xi_d(\beta_t)$
of two-dimensional $q$-state Potts models with relatively large values of $q$.
While analyses of the standard (projected) two-point function are plagued by
large systematic errors, with the new observable we succeeded for the first
time to reproduce the theoretically expected values at a $1\% - 2\%$ level.
For small values of $q$, however, the standard correlation function
gives much more reliable results. For reasons not well understood to date,
the two quite different correlators thus seem to behave in a complementary fashion.
Also for the three-dimensional $q$-state Potts models with $q=3$, 4, and 5,
which undergo a first-order phase transition already for $q \ge 3$,
our results \cite{tobe} for $\mathop{G^{^{\rm diam}}}$ and $\mathop{g^{(0)}}$ in the disordered phase at
the transition point $\beta_t$ as well as the corresponding effective
correlation lengths look qualitatively as for the 2D Ising model
in Fig.~\ref{fig:gdiam_is_eff}. Also in these cases we found that $\mathop{g^{(0)}}$ gives
much more reliable estimates of $\xi_d$. This suggests that the behavior
of $\mathop{G^{^{\rm diam}}}$ does depend crucially on the value of $q$, but certainly not on the
fact that the two-dimensional Potts models with $q=10$, 15, and 20 were
studied at their first-order transition point $\beta_t$. The details of
the 3D study will be published elsewhere \cite{tobe}.
\section*{Acknowledgements}
WJ thanks the DFG for a Heisenberg fellowship and
SK gratefully acknowledges a fellowship by the
Graduierten\-kolleg ``Physik und Chemie supra\-moleku\-larer Systeme''.
Work supported by computer grants hkf001 of HLRZ J\"ulich and
bvpf03 of Norddeutscher Vektorrechnerverbund (NVV) Berlin-Hannover-Kiel.
| {
"timestamp": "1996-09-28T21:12:16",
"yymm": "9609",
"arxiv_id": "hep-lat/9609047",
"language": "en",
"url": "https://arxiv.org/abs/hep-lat/9609047",
"abstract": "We report numerical simulations of two-dimensional $q$-state Potts models with emphasis on a new quantity for the computation of spatial correlation lengths. This quantity is the cluster-diameter distribution function $G_{diam}(x)$, which measures the distribution of the diameter of stochastically defined cluster. Theoretically it is predicted to fall off exponentially for large diameter $x$, $G_{diam} \\propto \\exp(-x/\\xi)$, where $\\xi$ is the correlation length as usually defined through the large-distance behavior of two-point correlation functions. The results of our extensive Monte Carlo study in the disordered phase of the models with $q=10$, 15, and $20$ on large square lattices of size $300 \\times 300$, $120 \\times 120$, and $80 \\times 80$, respectively, clearly confirm the theoretically predicted behavior. Moreover, using this observable we are able to verify an exact formula for the correlation length $\\xi_d(\\beta_t)$ in the disordered phase at the first-order transition point $\\beta_t$ with an accuracy of about $1%-2%$ for all considered values of $q$. This is a considerable improvement over estimates derived from the large-distance behavior of standard (projected) two-point correlation functions, which are also discussed for comparison.",
"subjects": "High Energy Physics - Lattice (hep-lat)",
"title": "Monte Carlo Study of Cluster-Diameter Distribution: A New Observable to Estimate Correlation Lengths",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9664104962847372,
"lm_q2_score": 0.7341195269001831,
"lm_q1q2_score": 0.7094608163239224
} |
https://arxiv.org/abs/2202.04096 | Berry phase in the rigid rotor: the emergent physics of odd antiferromagnets | The rigid rotor is a classic problem in quantum mechanics, describing the dynamics of a rigid body with its centre of mass held fixed. The configuration space of this problem is $SO(3)$, the space of all rotations in three dimensions. This is a topological space with two types of closed loops: trivial loops that can be adiabatically shrunk to a point and non-trivial loops that cannot. In the traditional formulation of the problem, stationary states are periodic over both types of closed loops. However, periodicity conditions may change if Berry phases are introduced. We argue that time-reversal-symmetry allows for only one new possibility -- a Berry phase of $\pi$ attached to all non-trivial loops. We derive the corresponding stationary states by exploiting the connection between $SO(3)$ and $SU(2)$ spaces. The solutions are anti-periodic over any non-trivial loop, i.e., stationary states reverse sign under a $2\pi$ rotation about any axis. Remarkably, this framework is realized in the low-energy physics of certain quantum magnets. The magnets must satisfy the following conditions: (a) the classical ground states are unpolarized, carrying no net magnetization, (b) the set of classical ground states is indexed by $SO(3)$, and (c) the product $N\times S$ is a half-integer, where $N$ is the number of spins and $S$ is the spin quantum number. We demonstrate this result in a family of Heisenberg antiferromagnets defined on polygons with an odd number of vertices. At each vertex, we have a spin-$S$ moment that is coupled to its nearest neighbours. In the classical limit, these magnets have coplanar ground states. Their quantum spectra, at low energies, correspond to `spherical top' and `symmetric top' rigid rotors. For integer values of $S$, we recover traditional rigid rotor spectra. With half-integer-$S$, we obtain rotor spectra with a Berry phase of $\pi$. | \section{Introduction}
The rotation of a rigid body is a fundamental problem in classical and quantum mechanics. It is one of the early problems where quantum spectra could be worked out and compared against experiments. It laid the foundation for the field of microwave rotational spectroscopy
\cite{Bunker2006,Xu2011}, with applications ranging from laboratory organic chemistry to interstellar space\cite{Arunan2015}. The solutions of the quantum rigid rotor have been known since the work of Casimir in 1931\cite{casimir}. Over the decades, this problem has been expanded to include details such as asymmetries, centrifugal distortion, etc. In this article, we revisit this problem and show that it allows for a non-trivial Berry phase structure. This can be viewed as reframing the problem with a new boundary condition that is consistent with the underlying topology. We evaluate the resulting spectrum and suggest magnetic realizations where this Berry phase structure is realized.
The Berry phase or geometric phase can play a strong role in quantum mechanics\cite{Cohen2019}. As a simple illustration, we consider a particle on a circle -- a one-dimensional space with periodic boundaries. Solving the Schr\"odinger equation leads to wave-like solutions. Periodic boundary conditions then restrict the particle to quantized levels. If the circle were threaded by a magnetic flux, the particle accrues an Aharonov-Bohm phase with every revolution. This changes the boundary condition and thereby, the spectrum of the particle. The symmetries of the system strongly constrain the Aharonov-Bohm phase. For example, time-reversal symmetry only allows for two values: $0$ or $\pi$. In the latter case, the wavefunction at a given point cannot be defined uniquely. It must necessarily be double-valued as it switches sign after one revolution. The Aharonov-Bohm phase is a particular example of a more general notion -- the Berry phase, which may arise even in the absence of external fields. For example, in crystalline solids, Bloch wavefunctions may accrue phases over loops in the Brillouin zone\cite{Hasan2010}. This phenomenon serves as the starting point for the field of topological insulators. These phases are strongly constrained by time-reversal symmetry, space group symmetries, etc. They may even give rise to wavefunctions that are multi-valued. In this article, we consider Berry phase structure in the rigid rotor, constrained by time-reversal symmetry. We discuss magnetic realizations where the Berry phase is intrinsic and can be switched on or off by changing the spin quantum number. Our results can be viewed in analogy with the `Haldane gap' in one-dimensional spin chains, where Berry phase effects lead to a qualitative change in the excitation spectrum\cite{Haldane1983,Affleck1989}.
We present a class of magnetic systems as realizations of these ideas. We consider rotationally-symmetric magnets with Heisenberg couplings. In the classical limit, they possess degenerate ground states that are related to one another by rotations. We focus on magnets where each classical ground state can be associated to a unique rotation operation. In such systems, the quantum problem shows a characteristic low-energy spectrum, precisely that of a rigid rotor. For certain system sizes and values of the spin quantum number $S$, this `emergent' rotor accrues a non-trivial Berry phase. This modifies the eigenvalues and degeneracies, resulting in a new spectrum that is markedly different from the traditional rigid rotor. We support these assertions with analytic arguments and exact diagonalization results on a family of antiferromagnets.
\section{The rigid rotor and the topology of $SO(3)$}
\label{sec.rrotor}
The elements of $SO(3)$ can be expressed in various representations\cite{Morrison1987}. We use the axis-angle representation as it best brings out the connectivity of the space. Following Euler's rotation theorem, any rotation in three dimensions can be characterized as $R(\hat{n},\theta)$ using two quantities: an axis $\hat{n}$ (a unit vector) and an angle $\theta$. We can view any given rotation as a vector $\vec{\rho} = \theta ~\hat{n}$, with orientation fixed by the axis and length set by the angle. We can restrict the length of the vectors to $\theta \leq \pi$, using a property of rotations about opposite axes: $R(\hat{n},\pi+\theta) \equiv R(-\hat{n},\pi - \theta)$. With these arguments, we can give a geometric interpretation to $SO(3)$. It corresponds to a solid sphere of radius $\pi$, with each point within the sphere corresponding to a unique rotation operation as depicted in Fig.~\ref{fig.twoloops}(left). However, careful attention must be paid to the surface of this sphere. As $R(\hat{n},\pi) \equiv R(-\hat{n},\pi)$, antipodal points on the surface are identified. That is, each point on the surface is, in fact, the same as its partner at the other end of a diagonal passing through the centre.
We now seek to characterize closed loops within this space\cite{Balakrishnan2018}. We have simple loops as shown in Fig.~\ref{fig.twoloops}(centre). They can be smoothly deformed to a point. We have a second type consisting of loops that connect antipodal points on the surface. While these loops are closed, they cannot be shrunk to a point. We designate these two classes of loops as trivial and non-trivial respectively. A loop of any complexity, e.g., one that passes through several pairs of antipodal points, can be smoothly deformed into one of these classes.
\begin{figure}
\includegraphics[width=3in]{twoloops_B}
\caption{Left: $SO(3)$ as a solid sphere. Each point within the sphere corresponds to a unique rotation. The direction from the origin to the point represents the axis of rotation. The distance from the origin to the point represents the angle of rotation. Centre: A `trivial' loop consisting of a closed path that can be smoothly shrunk to a point. Right: A `non-trivial' loop, consisting of a path that connects antipodal points on the surface.}
\label{fig.twoloops}
\end{figure}
The $SO(3)$ space is the configuration space of a rigid rotor. To see this, consider an object that is free to rotate, with its centre of mass held fixed. We may designate a particular configuration of the object as our reference, say the configuration at a particular instant of time. The configuration at any other time can be obtained by effecting a rotation, $R (\hat{n},\theta)$, on the reference configuration. In other words, the configuration at any time can be described using a rotation matrix, $R (\hat{n},\theta)$. When this problem is quantized, we arrive at a wavefunction defined on $SO(3)$ space, $\psi(\hat{n},\theta)$. The system is described by the Hamiltonian,
\bea
\hat{H} = \frac{\hat{L}_x^2}{2 I_x} + \frac{\hat{L}_y^2}{2 I_y} + \frac{\hat{L}_z^2}{2 I_z},
\label{eq.Ham}
\eea
where $\hat{L}_{x/y/z}$ are angular momentum operators defined in the body-fixed frame along the three principal axes. The quantities $I_{x/y/z}$ represent the corresponding moments of inertia. In the rest of this article, in the interest of simplicity, we will consider two cases: (i) $I_x = I_y= I_z \equiv I_0$, known as the `spherical top'. This case arises when the rotating object is a perfect sphere. (ii) $I_x = I_y = I_z/\alpha \equiv I_0$, with $\alpha \neq 1$. This case is known as the `symmetric top', arising in the context of an ellipsoidal rigid body.
The spectrum of this problem was first worked out by Casimir in 1931. Eigenstates can be labelled by three quantum numbers, $j$, $m$ and $m'$. The first, $j$, represents the total angular momentum. It takes non-negative integer values, $j=0,1,2,3$, etc. The second and third quantum numbers denote angular momenta in the body-fixed frame and in the space-fixed frame respectively\cite{Zare}. They are both defined with respect to an arbitrarily chosen $z$-axis. Each takes one of $(2j+1)$ possible values with $m, m'\in \{-j,-j+1,\ldots,j-1,j\}$. The eigenstates (wavefunctions) are complex functions defined over $SO(3)$ space, given by Wigner $D$ matrices\cite{Varshalovich1988,Edmondsbook,Atkinsbook},
\bea
\psi_{j,m,m'} (\hat{n},\theta)= D_{m',m}^j (\hat{n},\theta)^*
\eea
where
\bea
D_{m',m}^j (\hat{n},\theta) &=& \langle j,m'\vert e^{-i\theta \hat{n}\cdot \vec{\hat{L}} } \vert j,m\rangle.
\label{eq.Dmat}
\eea
Here, $\vec{\hat{L}} = (\hat{L}_x,\hat{L}_y,\hat{L}_z)$ is the angular momentum vector and $\vert j,m\rangle$'s are the usual spherical harmonics. Note that we have expressed the Wigner $D$ matrices in the axis-angle representation here, as opposed to the more-commonly-used Euler angle representation\cite{Rose1957}. In the spherical top rotor, the eigenenergies are given by $\epsilon_j = \frac{\hbar^2}{2I_0} j(j+1)$. As the energies only depend on $j$, each level has a $(2j+1)^2$-fold degeneracy -- corresponding to all possible values of $m$ and $m'$.
In the symmetric top rotor, eigenenergies are given by $\epsilon_{j,m} = \frac{\hbar^2}{2I_0}\big[ j(j+1) - \gamma m^2 \big] $, where $\gamma= 1 -\alpha $. Each level has a degeneracy of $(2j+1)$, corresponding to different choices for $m'$.
\section{Berry phase structure in the rigid rotor}
\label{sec.berry}
We consider the Berry phase attached to a loop in a generic time-reversal-symmetric system. A loop can be traversed in two directions, which will result in opposite values for the Berry phase. However, two trajectories that correspond to motion in opposite directions are time-reversed copies of one another. In a system with time-reversal symmetry, they must accrue the same Berry phase. With these arguments, the Berry phase must satisfy $\theta_B \equiv -\theta_B$. This has two possible solutions: $\theta_B = 0$ or $\pi$. As these are two discrete values, we argue that Berry phase should be a topological quantity that is invariant under smooth deformations of the path. In particular, if the Berry phase is $\pi$ for a certain path, it must have the same value for all topologically equivalent paths. A trivial path in $SO(3)$, by definition, can be smoothly deformed to a point -- a limiting case with no scope for a Berry phase. We conclude that no trivial path can have an attached Berry phase.
We next consider non-trivial paths as shown in Fig.~\ref{fig.twoloops}(right). As argued above, time-reversal symmetry constrains the Berry phase of a closed loop to be $0$ or $\pi$. With either value, the Berry phase must be robust to smooth deformations of paths. In $SO(3)$, it can be shown that all non-trivial paths are topologically equivalent\cite{Balakrishnan2018}. That is, any two non-trivial paths can be smoothly deformed into one another. As a result, there are only two possibilities: (i) all trivial and non-trivial paths have a Berry phase of zero. This is the traditional formulation of the rigid rotor problem whose solutions were discussed in Sec.~\ref{sec.rrotor} above. (ii) All trivial paths have Berry phase of zero while all non-trivial paths have a Berry phase of $\pi$. The latter case is the focus of this article. It amounts to a non-trivial boundary condition for the rigid rotor problem. The stationary states are smooth in the interior of the $SO(3)$ sphere. However, they reverse sign under a non-trivial path. This translates to anti-periodic boundary conditions across any diameter of the $SO(3)$ sphere.
In the context of a rigid rotor, this situation can be described as follows. Consider rotations about an arbitrary axis, $\hat{n}$, denoted as $R(\hat{n},\theta)$. If $\theta$ is taken to run from $-\pi$ to $\pi$, these rotations lie on a diagonal in the $SO(3)$ sphere of Fig.~\ref{fig.twoloops}. As we traverse this non-trivial loop connecting antipodal points, the rigid body effectively rotates by $2\pi$ about a fixed axis. This operation should attach a negative sign to the wavefunction. In the usual problem of rigid body dynamics, such a negative sign does not arise. It can not be easily realized in a physical setup, say by distributing electric charge on the body and threading a magnetic field. However, this situation naturally arises in magnetic analogues of the rigid rotor as we show in the following sections.
To conclude this section, we discuss a simpler problem where the role of a $\pi$-Berry phase can be easily understood. Consider a particle on a circle, a space parametrized by an angle variable, $\phi\in(0,2\pi]$. The non-trivial paths here correspond to a full revolution around the circle. If the circle is threaded by a $\pi$ flux, the particle's wavefunction picks up a negative sign after a revolution. The resulting wavefunctions are of the form $e^{i (n+\frac{1}{2})\phi}$, where $n$ is any integer. These states reverse sign under $\phi \rightarrow \phi+2\pi$, but are periodic under $\phi\rightarrow \phi+4\pi$. They are double-valued, with two possible values for any given $\phi$. These features are closely mirrored by the rigid rotor when a Berry phase of $\pi$ is introduced.
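For completeness, the corresponding spectrum can be recorded in one line. Taking a particle of moment of inertia $I$ on the ring, the solutions $e^{i(n+\theta_B/2\pi)\phi}$ associated with a Berry phase $\theta_B$ per revolution have energies
\bea
\epsilon_n = \frac{\hbar^2}{2I}\Big(n+\frac{\theta_B}{2\pi}\Big)^2, \qquad n = 0,\pm1,\pm2,\ldots,
\eea
so that $\theta_B = \pi$ shifts the allowed angular momenta to half-odd-integer values and converts the non-degenerate $n=0$ ground state into the doubly degenerate pair $n=0,-1$.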
\section{Drawing solutions from the $SU(2)$ rotor }
\label{sec.sol}
We have established that time-reversal symmetry allows for a situation in which non-trivial paths in $SO(3)$ are associated with a $\pi$-Berry phase. In order to find the corresponding stationary states, we appeal to the $SU(2)$ rotor. A generic $SU(2)$ matrix can be represented using three parameters: a unit vector $\hat{n}$, an angle $\theta\in [0,\pi]$ and an Ising variable $\mu = \pm 1$,
\bea
R_{SU(2)}(\hat{n},\theta,\mu) = \mu ~e^{ i \frac{\theta}{2} \hat{n}\cdot \vec{\sigma}},
\label{eq.su2def}
\eea
where $\vec{\sigma} = (\sigma_x,\sigma_y,\sigma_z)$ is a vector of Pauli matrices. This form brings out the relation between $SU(2)$ and $SO(3)$ spaces\cite{Wignerbook,Edmondsbook}. They bear a two-to-one relation, with $SU(2)$ consisting of two copies of the $SO(3)$ solid sphere -- one for each value of $\mu$. The two solid spheres of $SU(2)$ share a common boundary, with
\bea
R_{SU(2)}(\hat{n},\pi,+1) = R_{SU(2)}(-\hat{n},\pi,-1),
\eea
which follows from the definition in Eq.~\ref{eq.su2def}.
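This identification is easily verified at the level of explicit $2\times 2$ matrices; the short numerical sketch below (Python/NumPy with SciPy's matrix exponential, included only as a consistency check) does so for a randomly chosen axis.
\begin{verbatim}
# Check that R_SU(2)(n,pi,+1) = R_SU(2)(-n,pi,-1) for mu*exp(i*(theta/2)*n.sigma).
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(1)
n = rng.normal(size=3)
n /= np.linalg.norm(n)
n_dot_sigma = n[0] * sx + n[1] * sy + n[2] * sz

def R_su2(n_dot_sigma, theta, mu):
    return mu * expm(1j * (theta / 2.0) * n_dot_sigma)

assert np.allclose(R_su2(n_dot_sigma, np.pi, +1),
                   R_su2(-n_dot_sigma, np.pi, -1))
\end{verbatim}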
The $SU(2)$ rotor is very similar to the $SO(3)$ rigid rotor, with a Hamiltonian of the same form as in Eq.~\ref{eq.Ham}. Its stationary states are also similar, given by Wigner $D$ matrices of Eq.~\ref{eq.Dmat} above. However, in the $SU(2)$ case, the total angular momentum quantum number may take non-negative values that are integers or half-integers. That is, $j=0, \frac{1}{2}, 1, \frac{3}{2},2,$ etc. For each $j$, the $m$ and $m'$ quantum numbers take values from $\{-j,-j+1,\ldots,j-1,j\}$. The stationary state wavefunctions are also similar to the $SO(3)$ case. However, careful attention must be paid to continuity at the boundary of the sphere, as we discuss below.
The Wigner $D$ matrices are complex-valued functions of $\hat{n}$ and $\theta$. However, $SU(2)$ space has an additional coordinate in $\mu$. In order to define wavefunctions in a consistent fashion, we note that the Wigner $D$ matrices satisfy
\bea
\psi(-\hat{n},\pi) =(-1)^{2j}\psi(\hat{n},\pi).
\label{eq.jrelation}
\eea
This relation, demonstrated explicitly in App.~\ref{app.periodicity}, connects the values of the wavefunction at antipodal points of the sphere. We see that stationary states fall into two classes, based on the value of $j$. Solutions with integer values ($j=0,1,2,3,\ldots$) have the same value at any pair of antipodal points. Those with half-integer values ($j=\frac{1}{2},\frac{3}{2},\frac{5}{2},\ldots$) have a relative minus sign. To define wavefunctions that are smooth over the $SU(2)$ configuration space, we take
\bea
\psi_{SU(2)}(\hat{n},\theta,\mu) = \left\{
\begin{array}{c}
D_{m',m}^j (\hat{n},\theta)^*,~~j=0,1,2,\ldots \\
\mu ~D_{m',m}^j (\hat{n},\theta)^*,~~j=\frac{1}{2},\frac{3}{2},\ldots
\end{array}\right..
\eea
For integer-valued $j$'s, the wavefunctions are identical between the two spheres. For half-integer $j$'s, the wavefunctions differ by a negative sign. These forms lead to smooth evolution across the boundary that separates the two spheres of $SU(2)$. In the language of Ref.~\onlinecite{Wignerbook}, integer and half-integer $j$ values correspond to even and odd representations respectively.
We now revert to the problem of the $SO(3)$ rigid rotor with a $\pi$-Berry phase. We seek to find eigenstates of the Hamiltonian in Eq.~\ref{eq.Ham} that are smooth within the $SO(3)$ sphere and \textit{anti-periodic} across diagonals. These requirements are precisely met in the $SU(2)$ solutions with half-integer $j$ values, as seen from Eq.~\ref{eq.jrelation}. We conclude that the required stationary states are Wigner $D$ matrices with $j=\frac{1}{2}, \frac{3}{2}, \frac{5}{2}$, etc. The expressions for the eigenenergies and level degeneracies are the same as those given in Sec.~\ref{sec.rrotor}, but with $j$ taking half-integer values. In Tab.~\ref{tab.spec}, we describe the resulting spectrum and compare it with that of the traditional Berry-phase-free formulation.
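The periodicity property (\ref{eq.jrelation}) used here can also be confirmed numerically (equivalently for the $D$ matrices themselves, since $\psi = D^*$). The sketch below (Python/NumPy, with the spin matrices built from the standard ladder-operator construction; an illustration rather than anything used in the derivations) checks the antipodal relation for the first few integer and half-integer $j$.
\begin{verbatim}
# Verify psi(-n,pi) = (-1)^(2j) psi(n,pi) using D^j(n,theta) = exp(-i*theta*n.J).
import numpy as np
from scipy.linalg import expm

def spin_matrices(j):
    m = np.arange(j, -j - 1, -1)            # m = j, j-1, ..., -j
    jz = np.diag(m)
    jp = np.zeros((len(m), len(m)))
    for k in range(1, len(m)):              # <m+1| J+ |m> = sqrt(j(j+1)-m(m+1))
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    return (jp + jp.T) / 2.0, (jp - jp.T) / 2.0j, jz

def wigner_D(j, n, theta):
    jx, jy, jz = spin_matrices(j)
    return expm(-1j * theta * (n[0] * jx + n[1] * jy + n[2] * jz))

rng = np.random.default_rng(2)
n = rng.normal(size=3)
n /= np.linalg.norm(n)
for j in (0.5, 1.0, 1.5, 2.0):
    sign = (-1) ** int(round(2 * j))
    assert np.allclose(wigner_D(j, -n, np.pi), sign * wigner_D(j, n, np.pi))
\end{verbatim}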
\begin{table*}
\begin{tabular}{|c|c|c|c|c|}
\hline
Berry phase & Total angular momentum quantum number & $m$ and $m'$ quantum numbers & Eigenvalue & Degeneracy\\
\hline
$\begin{array}{c}
0 \\
\pi \end{array}$
&
$\begin{array}{c}
j = 0,1,2,3,\ldots \\
j = \frac{1}{2},\frac{3}{2},\frac{5}{2},\ldots
\end{array}$
&
$m / m' = -j, -j+1,\ldots,j-1,j$
&
$\epsilon_j = \frac{\hbar^2}{2I_0}\, j(j+1)$ & $(2j+1)^2$ \\
\hline
\end{tabular}
\caption{Spectrum of the spherical-top rigid rotor with and without Berry phase.}
\label{tab.spec}
\end{table*}
\section{Rigid rotor as an effective description: the triangle antiferromagnet}
The rigid rotor can emerge as the low energy description of certain quantum magnets. Our discussion below follows a general principle laid down in Ref.~\onlinecite{Khatua2019}. The physics of a quantum magnet, at low energies, resembles that of a single particle problem. The particle moves in the abstract space of all classical ground states. This mapping is readily seen from the low-lying portion of the magnet's energy spectrum. Below, we will consider a family of quantum magnets where the set of classical ground states is isomorphic to $SO(3)$. Their low-energy physics corresponds to a particle moving in $SO(3)$ -- a rigid rotor.
We begin with perhaps the simplest example -- a three-spin magnet with spin-$S$ moments at the corners of an equilateral triangle. Neighbouring spins interact via an antiferromagnetic Heisenberg coupling. The Hamiltonian for this system can be written as
\bea
H = J\sum_{(jk)} \vec{\hat{S}}_{j} \cdot \vec{\hat{S}}_{k} \sim \frac{J}{2}\big(\sum_j\vec{\hat{S}}_j\big)^2,
\eea
where the sum over $(jk)$ runs over all pairs of spins. Up to a constant term, the Hamiltonian is thus proportional to the square of the total spin. In the classical limit, the spins can be viewed as three-component vectors. The classical energy is minimized when the three spin vectors add to zero. This can only happen if all three spins lie on a plane and form the sides of an equilateral triangle. As is typical in frustrated systems, there are, in fact, many classical ground states. The choice of the ordering plane corresponds to a choice of normal unit vector, $\hat{n}$. Within the plane, the first spin may be oriented in any direction. This corresponds to an angle variable, $\theta$. These two parameters, $\hat{n}$ and $\theta$, suggest that the space of classical ground states is equivalent to $SO(3)$, which is also parametrized by a unit vector and an angle. Indeed, it can be rigorously shown that any classical ground state can be obtained from a reference ground state, by effecting a global $SO(3)$ rotation. We say that the set of classical ground states is $SO(3)$, i.e., the set of classical ground states has a one-to-one and onto mapping to $SO(3)$.
\subsection{Low-energy effective theory }
\label{ssec.tri_eff}
The low energy physics of this magnet can be systematically studied using a non-linear sigma model approach. A detailed calculation, applicable to an entire family of magnets, is outlined in Sec.~\ref{ssec.nlsm} below. Here, we describe the final result for the case of the triangle magnet. The partition function can be expressed within the spin path integral formalism. It involves an integral over all paths in configuration space,
\bea
\mathcal{Z} &=&\int \big\{\prod_{j=1}^3 \mathcal{D}S_j \big\} e^{-\mathcal{L}} = \sum_{\mathrm{loops}} e^{-\mathcal{L}_{loop}},\\
\mathcal{L}_{loop} &=& i{\Theta}_{loop} + J\int_0^\beta d\tau \sum_{(jk)} \vec{S}_j (\tau) \cdot \vec{S}_k (\tau).
\label{eq.path_int_action}
\eea
The loops lie within the space of all classical configurations of spins. They run over imaginary time, from $\tau=0$ to $\beta$, where $\beta$ is the inverse temperature. Each loop contributes with an action that has two terms, the Berry phase, $\Theta_{loop}$, and the energy. The latter can be written as an expansion in powers of $S$, with the leading $\mathcal{O}(S^2)$ term corresponding to the classical energy.
In the large-$S$ low-energy regime, we may restrict the path integral to configurations that are close to classical ground states. Any `hard' deviation (one that increases the classical energy) is exponentially suppressed. In this regime, we may restrict our attention to loops that can be decomposed into two pieces: (i) a closed loop within the space of classical ground states, and (ii) a small fluctuation out of the ground state space. The former determines the topological character of the loop. Here, the space of classical ground states is $SO(3)$. As a result, a loop can be classified as trivial (shrinkable to a point) or non-trivial, as shown in Fig.~\ref{fig.twoloops}. The fluctuation out of the ground state space represents a small deformation that does not modify topological character.
After several simplifications, the action for a given loop takes a remarkably simple form,
\bea
\mathcal{L}_{loop} = 6\pi i S \nu -i \vec{L}'\cdot \vec{V} + \beta_3 \vec{L}'^2,
\eea
Here, $\nu=0,1$ is a topological index. It is $0$ for a trivial path within $SO(3)$ and $1$ for a non-trivial path. The remaining terms have a standard form -- the action of a spherical top rigid rotor. Here, $\vec{L}'$ represents the angular momentum, $\vec{V}$ represents angular velocity and $\beta_3$ is an energy scale, proportional to $J$. Explicit expressions for these quantities are given in Sec.~\ref{ssec.nlsm} below.
We have arrived at a remarkable picture. The effective theory for a triangle antiferromagnet is precisely a spherical top rigid rotor. This result is well known and has been used in field theoretic studies of triangular lattice antiferromagnets, where each triangular motif is viewed as a rigid rotor\cite{Dombre1989,Azaria1993,Diptiman1993}. However, the Berry phase term has not been adequately appreciated in earlier studies. Our analysis shows that it can play a strong role. For integer $S$, the Berry phase is inconsequential as it is an integer multiple of $2\pi$. This leads to an effective theory of a traditional spherical top rotor. However, for half-integer values of $S$, a Berry phase of $\pi$ is attached to non-trivial paths in $SO(3)$. The effective theory is the spherical top rigid rotor with a $\pi$-Berry phase, precisely as described in Secs.~\ref{sec.berry} and \ref{sec.sol} above.
\subsection{Rigid rotor in the quantum spectrum}
\label{ssec.tri_spectrum}
We now discuss the energy spectrum of the triangle antiferromagnet. As the Hamiltonian is proportional to the total-spin-squared, its eigenvalues are $\frac{J}{2} S_t (S_t+1)$, where $S_t$ is the total spin. We arrive at a simple problem of angular momentum addition,
\bea
S\otimes S\otimes S = \{0 \oplus1 \oplus 2\ldots \oplus 2S\}\otimes S.
\eea
We have added the first two spins before adding the third. The final result depends on the nature of $S$.
We first consider integer values of $S$, where we find $S_t \in \{0,1,2,\ldots\}$. The ground state, corresponding to $S_t=0$, is non-degenerate. The value $S_t=0$ only arises in the case where the sum of the first two spins is $S$. The first excited state corresponds to $S_t = 1$, which occurs when the first two spins add to $S-1$, $S$ or $S+1$. In each case, as $S_t = 1$ is a triplet, we have an additional three-fold degeneracy. This results in a net nine-fold degeneracy of the first excited state. Higher energy levels can be found from similar arguments. We summarize as follows: eigenstates correspond to $S_t=0,1,2,\ldots$ with each level having a degeneracy of $(2 S_t +1)^2$. Note that this only represents the low-energy spectrum -- the pattern holds for $S_t \leq S$. Remarkably, the low-energy spectrum is precisely that of a spherical top rigid rotor with no Berry phase, as described in Sec.~\ref{sec.rrotor}.
For half-integer values of $S$, the spectrum is qualitatively different as we have $S_t = \frac{1}{2},\frac{3}{2},\frac{5}{2},$ etc. The ground state corresponds to $S_t=\frac{1}{2}$, which can occur when the first two spins add to $S\pm \frac{1}{2}$. In addition, each $S_t=\frac{1}{2}$ level has an inherent two-fold degeneracy. This leads to a net four-fold ground state degeneracy. Along the same lines, the first excited state corresponds to $S_t =\frac{3}{2}$ and is sixteen-fold degenerate. In summary, eigenstates are labelled by $S_t = \frac{1}{2},\frac{3}{2},\frac{5}{2},\ldots$, with degeneracies $(2S_t+1)^2$. This picture holds for $S_t \leq S$, representing the low-energy spectrum. Remarkably, this is the spectrum of a spherical top rigid rotor with a Berry phase of $\pi$. The spectrum for the case of $S=5/2$ is plotted in Fig.~\ref{fig.trispec}. The effective description in terms of a spherical top with a $\pi$-Berry phase holds at low energies, for $E\lesssim \frac{35J}{8}$.
\begin{figure}
\includegraphics[width=3.4in]{trimer_spec_mod.pdf}
\caption{The spectrum of a triangle antiferromagnet with a half-integer value of $S$. The degeneracy of each level is shown in parentheses. Within the low energy window indicated, the spectrum is quantitatively equivalent to that of a spherical top rigid rotor with a $\pi$-Berry phase.}
\label{fig.trispec}
\end{figure}
The energy spectra, obtained from analytical arguments, are consistent with the low-energy theory outlined in Sec.~\ref{ssec.tri_eff} above. The triangular antiferromagnet, at low energies, is a realization of the spherical top rigid rotor. Depending on the value of $S$, the rotor may have a Berry phase of zero or $\pi$.
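These degeneracy patterns are easily reproduced by brute-force angular momentum addition. The short sketch below (Python, using exact fractions; purely illustrative) tabulates the multiplicity of each $S_t$ in $S\otimes S\otimes S$ and checks that the resulting level degeneracy equals $(2S_t+1)^2$ for $S_t\le S$.
\begin{verbatim}
# Multiplicities of the total spin S_t in S x S x S; the degeneracy of the
# level E = (J/2) S_t (S_t+1) is multiplicity x (2 S_t + 1).
from fractions import Fraction
from collections import Counter

def add_spin(multiplets, S):
    out = Counter()
    for s12, n in multiplets.items():
        s = abs(s12 - S)
        while s <= s12 + S:                 # triangle rule, steps of 1
            out[s] += n
            s += 1
    return out

for S in (Fraction(1), Fraction(5, 2)):
    three = add_spin(add_spin(Counter({S: 1}), S), S)
    for St in sorted(three):
        degeneracy = three[St] * (2 * St + 1)
        if St <= S:
            assert degeneracy == (2 * St + 1) ** 2
        print("S =", S, " S_t =", St, " degeneracy =", degeneracy)
\end{verbatim}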
\section{Odd-polygon antiferromagnets}
\label{sec.oddgon_antiferromagnet}
We next consider the more general case of odd-polygon antiferromagnets. We take polygons with $N$ vertices, where $N=3,5,7,9,$ etc. We have spin-$S$ moments at each vertex with Heisenberg antiferromagnetic couplings between nearest neighbours,
\bea
H = J\sum_{i=1}^N \vec{\hat{S}}_{i} \cdot \vec{\hat{S}}_{i+1},
\label{eq.Haml}
\eea
where $\vec{\hat{S}}_{N+1}\equiv \vec{\hat{S}}_{1}$. Unlike the triangle antiferromagnet, this Hamiltonian cannot be written as the square of the total spin. Nevertheless, as we show below, its structure and high degree of symmetry lead to an elegant low-energy description.
We first discuss the classical ground states of this problem. For polygons with even $N$, there is no frustration. The classical ground state is a N\'eel antiferromagnet at the ordering wavevector $\pi$. Moments alternate between two opposite orientations as we move from one site to the next, reverting to the initial orientation when we return to the starting point. However, when $N$ is odd, N\'eel order cannot be accommodated. Energy is minimized by ordering at $\pi \pm \pi/N$. These are the closest wavevectors to $\pi$ that leave the system invariant after $N$ unit translations. Effectively, the classical ground state is a coplanar state with the ordering-plane chosen spontaneously. Within the plane, neighbouring spins subtend an angle $\pi \pm \pi/N$ with one another. When moving from one site to the next clockwise, the subtended angle is always the same, either $\pi + \pi/N$ or $\pi - \pi/N$. A rigorous discussion can be found in Ref.~\onlinecite{Schmidt2003}.
Furthermore, all classical ground states can be reached from an arbitrary reference state by effecting a global spin rotation. The choice of ordering plane corresponds to choosing a normal unit vector $\hat{n}$. We may fix its orientation to be parallel to $\vec{S}_2 \times \vec{S}_1$. Within the plane, the orientation of the first spin corresponds to choosing an angle $\theta$. In order to avoid double-counting, $\theta$ can be restricted from 0 to $\pi$ while $\hat{n}$ ranges over all orientations. This parametrization is equivalent to specifying an $SO(3)$ rotation using the axis-angle representation. We assert that the set of all classical ground states is isomorphic to $SO(3)$.
\begin{figure}
\includegraphics[width=3.3in]{odd_gon_ordering.pdf}
\caption{Ordering in odd antiferromagnetic polygons. (a) A reference classical ground state on the pentagon. All spins lie on the same plane with neighbouring spins subtending an angle of $4\pi/5$ with one another. (b) Any classical ground state can be obtained by rotating the reference state. The orientations of the five spins are shown. The plane of ordering is fixed by the normal vector $\hat{n}$ while the position of the first spin is set by $\theta$, measured from an arbitrary axis. All ground states can be obtained by varying the normal vector $\hat{n}$ or the angle $\theta$. (c) A reference ground state on a heptagon ($N=7$). (d) A reference ground state on a nonagon ($N=9$). }
\label{fig.oddorder}
\end{figure}
Below, we discuss a low-energy effective theory for odd-$N$ polygons. For $N=3$, our theory yields a spherical top rotor while $N>3$ gives rise to a symmetric top. Subsequently, we present the energy spectrum obtained from exact diagonalization.
\subsection{Effective theory}
\label{ssec.nlsm}
We derive a low-energy effective description using the non-linear sigma model approach. Our approach applies to the entire family of odd-polygon antiferromagnets, where $N=3,5,7,9,$ etc. For any $N$, the classical ground states are coplanar and accessible by global rotations acting on a reference ground state as shown in Fig.~\ref{fig.oddorder}. We define a reference state in the $x-y$ plane,
\bea
\nonumber \{\vec{S}_1, \vec{S}_2, \cdots,\vec{S}_N \}_{ref.} \equiv S\{\hat{n}_1, \hat{n}_{2},\ldots, \hat{n}_{N-1}, \hat{n}_{N} \} \\
= S\{\hat{\nu}_\phi, \hat{\nu}_{\phi+\theta_N}, \hat{\nu}_{\phi+2 \theta_N}, \ldots, \hat{\nu}_{\phi+(N-1)\theta_N} \},
\label{eq.Sref}
\eea
where $\hat{\nu}_\xi = \cos \xi ~\hat{x} + \sin \xi~ \hat{y}$ and $\theta_N = \pi + \pi/N$. The orientation of the first spin is fixed by an angle $\phi$ ranging from $0$ to $2\pi$. A generic classical ground state can now be written as $\{\vec{S}_1, \vec{S}_2, \cdots,\vec{S}_N \} = SR(\hat{n},\theta)\{\hat{n}_1,\hat{n}_2,\cdots,\hat{n}_N\}$, where the rotation $R(\hat{n},\theta)$ acts on each of the $N$ unit vectors.
In the spirit of a low energy theory, we introduce a small deviation away from the ground state. The deviation is parametrized by a vector, $\vec{L}$, which will turn out to be proportional to uniform polarization,
\begin{eqnarray}
\label{eq.polygon_fluc}
&&\vec{S}_i = S\hat{\Omega}_i=\frac{S R(\hat{n},\theta) \big\{ \hat{n}_i + M_i \vec{L}/S \big\}}{\sqrt{1+ (M_i\vec{L}/S)^2}}\nonumber\\
&\approx & S R(\hat{n},\theta) \left\{ \hat{n}_i \left(1-\frac{\vec{L}\cdot M_i\vec{L}}{2S^2}\right)+ M_i \vec{L}/S \right\},
\end{eqnarray}
where $M_i^{ab} = (\delta^{ab} -\hat{n}_i^a \hat{n}_i^b )$ is a tensor operator that projects onto the plane perpendicular to $\hat{n}_i$. When it acts on a vector to the right, it picks out the component perpendicular to $\hat{n}_i$. Note that $M_i^2 = M_i$. In this parametrization, $\hat{n}$ and $\theta$ represent `soft' modes that preserve the classical energy. The vector $\vec{L}$ encodes modes that are canonically conjugate to the soft modes, as we will show below. It represents a deviation that cants each spin towards the direction of $\vec{L}$.
Apart from $\vec{L}$, we may introduce many other deformations in Eq.~\ref{eq.polygon_fluc}, e.g., a staggered canting along each direction. However, such fluctuations can be integrated out from the path integral action. We are interested in the low-energy physics involving the soft modes and their conjugate degrees of freedom.
We next describe the magnet using the well-known spin-path-integral formalism\cite{Auerbach,Fradkin}. The partition function is written as a sum over loops in the space of all classical configurations. Each loop is associated with an action that has two terms: the Berry phase and the energy. We evaluate these contributions using the low energy spin-parametrization of Eq.~\ref{eq.polygon_fluc}.\\
{\bf{Berry phase:}} This represents a geometric contribution, proportional to the sum of solid angles swept out by each spin. It can be written as
\bea
iS\int_0^\beta d\tau \sum_{j=1}^N\vec{A}(\hat{\Omega}_j)\cdot\partial_\tau(\hat{\Omega}_j),
\eea
where $\vec{A}(\hat{\Omega}_j)$ is the vector potential of a magnetic monopole at the origin. With the parametrization of Eq.~\ref{eq.polygon_fluc}, the Berry phase takes the form
\bea
\nonumber iS\int_0^\beta d\tau \sum_{j=1}^N\vec{A}\left(R \big\{ \hat{n}_j + M_j \frac{\vec{L}}{S} \big\}\right)\\
\cdot\partial_\tau\left(R \big\{ \hat{n}_j + M_j \frac{\vec{L}}{S} \big\}\right)
+ \mathcal{O}\Big(\frac{1}{S}\Big).
\label{eq.Berryexp}
\eea
We retain terms of $\mathcal{O}(S^0)$, in the spirit of an expansion in powers of $S$. As we operate in the low-energy regime, we take each trajectory to consist of two parts: a closed loop entirely within the classical ground state space and a small deviation out of it. The latter amounts to a small, smooth deformation of the former. This picture, after some straightforward simplifications, leads to a remarkably simple form,
\bea
2\pi i N S\nu - i\vec{L}^{'}\cdot\vec{V} + \mathcal{O}\Big(\frac{1}{S}\Big).
\label{eq.Berryform}
\eea
The leading $\mathcal{O}(S)$ term only depends on the trajectory within the classical ground state space. This gives a topological contribution, with $\nu=0$ for contractible loops and $\nu=1$ for non-trivial loops. This is explicitly demonstrated in App.~\ref{app.Berry_phase}.
In writing the $\mathcal{O}(S^0)$ term, we have defined two new variables. The first is $\vec{L}^{'} = \sum_{j=1}^N \vec{S}_j = RM\vec{L}$, where $M$ is given by $\sum_{j=1}^N M_j = \mathrm{Diag} \{N/2,N/2,N\}$. The vector $\vec{L}^{'}$ has a clear physical interpretation, as the net magnetization of the system.
The second variable is $V_\alpha = -\frac{1}{2}\epsilon_{\alpha\beta\gamma}\{(\partial_\tau R)R^{-1}\}^{\beta\gamma}$. If we take $R$ to denote the configuration of a rigid body, $V_\alpha$ represents its angular velocity. From the linear coupling between $\vec{L}^{'}$ and the angular velocity, we see that $\vec{L}^{'}$ is canonically conjugate to $R$.
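As a quick numerical illustration (a minimal sketch, not part of the derivation), the claimed form of $M$ can be verified directly by summing the projectors for the coplanar reference state of Eq.~\ref{eq.Sref}:
\begin{verbatim}
# Illustrative check: for the coplanar reference state,
# M = sum_j (1 - n_j n_j^T) should equal diag(N/2, N/2, N) for every odd N.
import numpy as np

def summed_projector(N, phi=0.3):
    theta_N = np.pi + np.pi / N                    # angle between neighbours
    angles = phi + theta_N * np.arange(N)
    n = np.stack([np.cos(angles), np.sin(angles), np.zeros(N)], axis=1)
    return sum(np.eye(3) - np.outer(v, v) for v in n)

for N in (3, 5, 7, 9):
    ok = np.allclose(summed_projector(N), np.diag([N / 2, N / 2, N]))
    print(N, ok)                                   # True in every case
\end{verbatim}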
{\bf{Energy:}} Using the parametrization of Eq.~\ref{eq.polygon_fluc} in the Hamiltonian of Eq.~\ref{eq.Haml}, we obtain
\bea
\beta E_{CGS}
+ \int_0^\beta d\tau E_2.
\label{eq.oddgon_energy}
\eea
The leading $\mathcal{O}(S^2)$ contribution is $E_{CGS}$, the classical ground state energy. We have $E_{CGS} = NJS^2\cos \theta_N$, where $\theta_N$ is as defined below Eq.~\ref{eq.Sref}. The $\mathcal{O}(S)$ contribution, which is linear in $\vec{L}$, vanishes: as we are expanding about an extremum of the classical energy, all linear terms vanish. The second term in Eq.~\ref{eq.oddgon_energy} is $\mathcal{O}(S^0)$, or equivalently $\mathcal{O}(L^2)$. It is given by
\bea
E_2 &=& J\left[NL^2 - 2\sum_{j=1}^N (\hat{n}_j\cdot \vec{L})^2- \cos(\theta_N)(M\vec{L}\cdot\vec{L}) \right.\nonumber\\
&&\left. \hspace{0.3cm}+ \cos(\theta_N)\sum_{j=1}^N (\hat{n}_j\cdot \vec{L})(\hat{n}_{j+1}\cdot \vec{L})\right]\nonumber\\
&=& \beta_N\vec{L}^{'2} - \gamma_N (M\vec{L})_z^{2} = \beta_N\vec{\tilde{L}}^{2} - \gamma_N {\tilde{L}}_z^{2},
\label{eq.efftheory}
\eea
where the explicit forms for $\beta_N$ and $\gamma_N$ are given in App.~\ref{app.exps}. We have defined $\vec{\tilde{L}} \equiv M \vec{L}$ as the net magnetization in the body-fixed frame.
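The reduction in the last line can also be checked numerically. The sketch below (an illustration only, with $J=1$) evaluates the explicit bracketed form of $E_2$ on the reference state and extracts $\beta_N$ and $\gamma_N$ by probing the quadratic form with $\vec{L}$ along $\hat{x}$ and along $\hat{z}$, using $E_2=\beta_N|M\vec{L}|^2-\gamma_N(M\vec{L})_z^2$ with $M=\mathrm{Diag}\{N/2,N/2,N\}$:
\begin{verbatim}
# Illustrative sketch: extract beta_N and gamma_N from the explicit quadratic
# form E_2.  L along x gives beta_N*(N/2)^2; L along z gives
# (beta_N - gamma_N)*N^2.
import numpy as np

def E2(L, N, J=1.0):
    theta = np.pi + np.pi / N
    ang = theta * np.arange(N)
    n = np.stack([np.cos(ang), np.sin(ang), np.zeros(N)], axis=1)   # n_j
    M = np.diag([N / 2, N / 2, N])
    d = n @ L                                                       # n_j . L
    ring = np.sum(d * np.roll(d, -1))                # sum_j (n_j.L)(n_{j+1}.L)
    return J * (N * L @ L - 2 * np.sum(d**2)
                - np.cos(theta) * (M @ L) @ L + np.cos(theta) * ring)

for N in (3, 5, 7):
    beta = 4 * E2(np.array([1.0, 0.0, 0.0]), N) / N**2
    gamma = beta - E2(np.array([0.0, 0.0, 1.0]), N) / N**2
    print(f"N={N}:  beta_N = {beta:.5f} J,  gamma_N = {gamma:.5f} J")
# N=3 gives gamma_3 = 0 (spherical top); N=5 reproduces the effective-theory
# values beta_5 = 0.58541 J and gamma_5 = 0.22361 J quoted below.
\end{verbatim}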
Putting together the Berry phase and energy terms, the action for a given loop takes the form
\bea
\mathcal{L}_{\mathrm{loop}} \approx 2\pi i NS\nu - i\vec{L}^{'}\cdot\vec{V} + \beta_N\vec{\tilde{L}}^{2}- \gamma_N {\tilde{L}}_z^2.
\label{eq.sym_top}
\eea
We have neglected a constant contribution given by $\beta E_{CGS}$, as well as $\mathcal{O}(\frac{1}{S})$ contributions. The result is a recognizable form -- the action of a symmetric top rigid rotor with an additional Berry phase term. For integer values of $S$, the Berry phase can be neglected as it is always an integer multiple of $2\pi$. For half-integer $S$ values, a $\pi$-Berry phase is attached to non-trivial loops. For $N=3$, it turns out that the asymmetry coefficient vanishes ($\gamma_3=0$). This is a special case where we obtain a spherical top rotor.
The effective action above has characteristic symmetries. It is invariant under rotations of the space-fixed frame. This corresponds to a transformation $R (\hat{n},\theta) \rightarrow R_0 R(\hat{n},\theta)$ in the parametrization of Eq.~\ref{eq.polygon_fluc}, where $R_0$ is a constant $SO(3)$ matrix. In contrast, a rotation of the body-fixed frame corresponds to modifying the choice of reference state in Eq.~\ref{eq.Sref}. We immediately see that the action is independent of the angle $\phi$ used in Eq.~\ref{eq.Sref}. A generic body-frame rotation may also change the plane of ordering. This only modifies the last term in the action, with ${\tilde{L}}_z^2 \rightarrow {\tilde{L}}_n^2$, where ${\tilde{L}}_n$ is the component normal to the plane of the reference state.
The action of Eq.~\ref{eq.sym_top} corresponds to characteristic patterns in the energy spectrum. The level-spacing and degeneracies depend on the relative strengths of $\beta_N$ and $\gamma_N$ coefficients. In App.~\ref{app.exps}, we discuss the variation of these parameters with $N$. For the limiting value of $N=3$, $\gamma_3$ vanishes. This leads to the spectrum of a spherical top rigid rotor, with eigenvalues $\beta_3 j(j+1)$. When $S$ is a half-integer, $j=\frac{1}{2},\frac{3}{2},\frac{5}{2},\ldots$, as discussed in Sec.~\ref{ssec.tri_eff} above. For all higher $N$, energy eigenvalues are given by $\beta_N j (j+1) - \gamma_N m^2$. As long as $S$ is a half-integer, $j=\frac{1}{2},\frac{3}{2},\frac{5}{2},\ldots$ with $m = -j,\ldots,j$. Each $(j,m)$ level has a $(2j+1)$-degeneracy, arising from all allowed values of the $m'$ quantum number. For all $N>3$, we find a small, positive value for the asymmetry coefficient $\gamma_N$, with $\gamma_N \lesssim \beta_N/2$. In this regime, the ground state corresponds to $(j=\frac{1}{2}, m =\pm \frac{1}{2})$ with four-fold degeneracy; the first excited state corresponds to $(j=\frac{3}{2}, m=\pm \frac{3}{2})$ with eight-fold degeneracy, and so on. The effective theory yields the same pattern in the energy spectrum for all $N>3$.
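For concreteness, the level pattern implied by this form can be tabulated directly. A short illustrative sketch (the coefficients are the pentagon effective-theory values quoted in the next subsection) lists the lowest levels of $E=\beta_N\, j(j+1)-\gamma_N m^2$, with each $(j,m)$ carrying a $(2j+1)$-fold degeneracy:
\begin{verbatim}
# Sketch: low-lying levels of E = beta*j(j+1) - gamma*m^2, each (j, m)
# carrying a (2j+1)-fold degeneracy.  half_integer=True uses j = 1/2, 3/2, ...
from fractions import Fraction
from collections import defaultdict

def rotor_levels(beta, gamma, half_integer=True, jmax=Fraction(7, 2)):
    levels = defaultdict(int)
    j = Fraction(1, 2) if half_integer else Fraction(0)
    while j <= jmax:
        m = -j
        while m <= j:
            levels[beta * j * (j + 1) - gamma * m * m] += int(2 * j + 1)
            m += 1
        j += 1
    return sorted(levels.items())

# pentagon coefficients: beta_5 ~ 0.585 J, gamma_5 ~ 0.224 J
for E, d in rotor_levels(Fraction(585, 1000), Fraction(224, 1000))[:3]:
    print(f"E = {float(E):.3f} J,  degeneracy {d}")
# -> 4-fold ground state (j=1/2), then 8-fold (j=3/2, m=+-3/2),
#    then 8-fold (j=3/2, m=+-1/2)
\end{verbatim}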
In the thermodynamic limit, where $N\rightarrow \infty$, the effective theory retains its symmetric-top-rotor character. The asymmetry remains finite, with $\gamma_N / \beta_N \rightarrow \frac{1}{2}$. However, both $\beta_N$ and $\gamma_N$ decrease as $\sim1/N$. This can be interpreted as stiffening of the rotor with the moment of inertia increasing linearly with $N$.
\subsection{Energy spectra using exact diagonalization}
We have argued that odd-polygon antiferromagnets, at low energies, acquire an emergent description in terms of symmetric-top rigid rotors. We now support this assertion with evidence from exact diagonalization spectra.
{\bf{Methodology:}} We discuss the $N$-site antiferromagnetic chain, where $N$ is odd. The spectrum of the $N=3$ problem can be solved analytically, as presented in Sec.~\ref{ssec.tri_spectrum} above. Here, we discuss a numerical approach for $N=5,7,9$, for various $S$ values.
The Hilbert space is $(2S+1)^N$-dimensional, growing rapidly with $N$ and $S$. For larger values of $N$ and $S$, we use the following four symmetries to diagonalize the Hamiltonian: a) Invariance under global spin rotations about the $z$ axis. This divides the Hilbert space into orthogonal sectors labelled by $S_z^{total}$. b) Invariance under a global spin rotation by $\pi$ about the spin-$x$ axis. This operation takes $S_z^{total}\rightarrow -S_z^{total}$. As this is a symmetry, we only need to solve the problem in sectors with positive values of $S_z^{total}$. c) Invariance under a unit translation in real space, i.e., $\vec{\hat{S}}_1\rightarrow\vec{\hat{S}}_2\rightarrow\vec{\hat{S}}_3\rightarrow\cdots\rightarrow\vec{\hat{S}}_N\rightarrow\vec{\hat{S}}_1$. This symmetry allows us to subdivide the Hilbert space into blocks of different momenta, $k = 2p\pi/N$ where $p = 0,1,2,\cdots,N-1$. d) Invariance under a mirror operation which transforms sites as $(1,2,\ldots,N-1,N)\rightarrow (N-1,N-2,\ldots,2,1,N)$. This symmetry relates $k=2p \pi/N$ and $k=2(N-p) \pi/N$ subsectors within a given $S_z^{total}$ sector. For $k \neq 0$, this reduces the computational effort by half.
For smaller values of $N$ and $S$ (e.g., for $N=5$, $S\leq 9$), we diagonalize each block of the Hamiltonian matrix to find the spectrum. For larger $S$, block sizes are too large for full diagonalization. However, the Hamiltonian is sparse as it consists only of nearest-neighbour couplings. This allows us to use Lanczos diagonalization implemented using the ARPACKPP package\cite{arpackpp}, focussing on the lowest few eigenvalues.
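For orientation, a minimal dense-matrix sketch (for illustration only; the results reported here were obtained with the symmetry-resolved sparse scheme just described) builds the ring Hamiltonian by Kronecker products and inspects the low-lying degeneracies, e.g.\ for $N=5$ and $S=\frac{1}{2}$:
\begin{verbatim}
# Minimal dense exact-diagonalization sketch (illustration only):
# H = J sum_i S_i . S_{i+1} on an N-site ring of spin-1/2 moments.
import numpy as np

def heisenberg_ring(N, J=1.0):
    sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
    sy = np.array([[0, -1j], [1j, 0]]) / 2
    sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

    def site_op(op, i):            # op on site i, identity elsewhere
        out = np.array([[1.0 + 0j]])
        for k in range(N):
            out = np.kron(out, op if k == i else np.eye(2))
        return out

    H = np.zeros((2**N, 2**N), dtype=complex)
    for i in range(N):
        j = (i + 1) % N            # periodic boundary condition
        for op in (sx, sy, sz):
            H += J * site_op(op, i) @ site_op(op, j)
    return H

evals = np.linalg.eigvalsh(heisenberg_ring(5))
vals, counts = np.unique(np.round(evals, 8), return_counts=True)
for E, d in list(zip(vals, counts))[:3]:
    print(f"E = {E:+.6f} J   degeneracy {d}")
# the lowest level comes out four-fold degenerate, as discussed in the text
\end{verbatim}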
{\bf{Ground state energy:}} We first discuss the ground state energy, obtained by numerical diagonalization of the Hamiltonian. Fig.~\ref{fig.ground_state_energy} plots the ground state energy per bond ($E_{gs}/N$) vs. $S$ for the pentagon ($N=5$) and the heptagon ($N=7$). The points fall on smooth curves of the form $E_{gs}/N\sim a S^2 + b S +c$, where $a$, $b$ and $c$ are obtained as fitting parameters. We interpret the $\mathcal{O}(S^2)$ term as the classical energy, while the others are quantum corrections. From fitting the data, the $\mathcal{O}(S^2)$ term comes out to be $-0.808579 J S^2$ for the pentagon and $-0.900543 J S^2$ for the heptagon. These estimates are consistent with the semiclassical analysis underlying the effective theory. The starting point of our analysis was the assertion that the classical ground states are coplanar, with neighbouring spins subtending an angle $\pi\pm\pi/N$ with each other. The resulting classical energy per bond is $E_{coplanar}/N = \cos(\pi +\pi/N) JS^2$, yielding $-0.809017 JS^2$ for the pentagon and $-0.900969 JS^2$ for the heptagon. These are in excellent agreement with the estimates obtained by fitting the numerical data.
\begin{figure}
\includegraphics[height = 5.3 cm]{Pentagon_EgsVs.pdf}\\
\includegraphics[height = 5.3 cm]{Heptagon_EgsVs.pdf}
\caption{Top: Ground state energy per bond in the pentagon vs. spin $(S)$. Datapoints are fit to the curve $f(S) = 0.0347328 - 0.496368S - 0.808579 S^2$. Bottom: Ground state energy per bond in the heptagon vs. spin $(S)$. Datapoints are fit to $g(S) = -0.0019346 - 0.448329S - 0.900543S^2$. Energies are measured in units of $J$.}
\label{fig.ground_state_energy}
\end{figure}
{\bf{Energy spectrum of pentagon:}} Fig.~\ref{fig.pent_half_int} shows numerically-obtained low-energy spectra for the pentagon $(N = 5)$ for various half-integer values of $S$. The low-lying levels show a degeneracy pattern that is consistent with the effective description -- that of a symmetric-top rigid rotor with a $\pi$-Berry phase. To perform a quantitative comparison, for each $S$, we fit the numerically-obtained spectrum (within a suitable low-energy window) to the form
\bea
E = -\Delta_5 + \beta_5 j(j+1) - \gamma_5 m^2,
\label{eq.form}
\eea
where $j=\frac{1}{2}, \frac{3}{2}, \frac{5}{2}$, etc. and $m=-j,-j+1,\ldots,j-1,j$. We attach a degeneracy of $(2j+1)$ to each $(j,m)$. The coefficient $\Delta_5$ represents a constant shift. The rotor coefficients $\beta_5$ and $\gamma_5$ are found by a least-squares fit. As seen from Fig.~\ref{fig.pent_half_int}, we obtain excellent agreement with this form. Note that each panel only shows a low-energy window where the spectrum is in good agreement with Eq.~\ref{eq.form}. At higher energies, the numerically-obtained degeneracies deviate from the rigid-rotor pattern.
The form in Eq.~\ref{eq.form} is motivated by the effective theory derived in Sec.~\ref{ssec.nlsm}. The analytically obtained values for the coefficients are $\beta_5^{eff.theory} = 0.58541J$ and $\gamma_5^{eff.theory} = 0.22361J$ (see App.~\ref{app.exps}). For all $S$, the values obtained by fitting the numerical data are close to the analytic result. As $S$ increases towards $\infty$, the fit values come closer to the analytic result. This can be seen in Fig.~\ref{fig.pent_beta_gamma}, where the $S$-dependence of $\beta_5$ and $\gamma_5$ is plotted. As described in the caption, they can be fit to polynomial forms to extrapolate to the $S\rightarrow\infty$ limit. This yields $\beta_5^{S\rightarrow \infty}=0.58507J$ and $\gamma_5^{S\rightarrow \infty}=0.22286J$ respectively. These values are very close to the analytic result, as we expect from the large-$S$ approach of the effective theory.
\begin{figure*}
\includegraphics[height = 10.5 cm]{Pent_hint_spin.pdf}
\caption{Low-lying spectra of the pentagon antiferromagnet for various half-integer values of $S$. Plots show numerically-obtained spectra as well as fits to the effective theory. Fitting parameters are shown in each panel. Energies are measured in units of $J$. Numbers in parenthesis denote the degeneracy of each level. }
\label{fig.pent_half_int}
\end{figure*}
\begin{figure}
\includegraphics[height=6 cm]{pentagon_beta_gamma.pdf}
\caption{$\beta_5$ and $\gamma_5$ vs. $S$. $\beta_5$ values are fit to the curve $\beta_5(S) = 0.58507 + 0.07628/S- 0.11165/S^2$. Those for $\gamma_5$ are fit to $\gamma_5(S) = 0.22286 + 0.09662/S-0.12751/S^2$. Both quantities are measured in units of $J$.}
\label{fig.pent_beta_gamma}
\end{figure}
\begin{figure*}
\includegraphics[height = 5.3 cm]{Pent_int_spin.pdf}
\caption{Low energy spectra obtained from numerical diagonalization of the pentagon antiferromagnet for two integer values of $S$. Data have been fit to the rigid rotor form, with fitting parameters shown in each panel. Energies are measured in units of $J$.}
\label{fig.pent_int}
\end{figure*}
\begin{figure*}
\includegraphics[height = 5.3 cm]{Hept_hint_spin.pdf}
\caption{Low energy spectra of the heptagon antiferromagnet for two half-integer values of $S$. Energies are measured in units of $J$.}
\label{fig.hept_half_int}
\end{figure*}
\begin{figure*}
\includegraphics[height = 5.3 cm]{Hept_int_spin.pdf}
\caption{Low energy spectra of the heptagon antiferromagnet for two integer values of $S$. Energies are measured in units of $J$. }
\label{fig.hept_int}
\end{figure*}
As a counterpoint, we present results for \textit{integer} values of $S$ in Fig.~\ref{fig.pent_int}. The spectra show excellent agreement with a symmetric-top rigid rotor \textit{without} Berry phases. Note that the degeneracy pattern is very different from those of half-integer $S$. The change in pattern is a direct manifestation of the Berry phase.
\begin{figure*}
\includegraphics[height = 5.3 cm]{Non_int_spin.pdf}
\caption{Low energy spectra of the nonagon antiferromagnet for two integer spin values. Energies are measured in units of $J$.}
\label{fig.non_int}
\end{figure*}
\begin{figure}
\includegraphics[height = 5.3 cm]{Non_hint_spin.pdf}
\caption{Low energy spectrum of the nonagon antiferromagnet with $S = \frac{3}{2}$. Energies are measured in units of $J$.}
\label{fig.non_half_int}
\end{figure}
{\bf{Spectrum of larger polygons:}} We follow the same numerical approach and fitting procedure for the heptagon ($N=7$) and nonagon ($N=9$) antiferromagnets. Heptagon spectra for half-integer and integer $S$ values are presented in Figs.~\ref{fig.hept_half_int} and \ref{fig.hept_int}, respectively. Nonagon spectra with integer and half-integer spins are shown in Figs.~\ref{fig.non_int} and \ref{fig.non_half_int}, respectively. In all cases, spectra resemble those of a symmetric-top rotor. Integer-spin spectra resemble the case with no Berry phase, while half-integer spins carry a Berry phase of $\pi$. For example, integer $S$ always leads to a non-degenerate ground state and a six-fold first excited state. In contrast, half-integer $S$ is invariably associated with a four-fold degenerate ground state and an eight-fold degenerate first excited state.
\section{Discussion}
The topology of $SO(3)$ has drawn the attention of physicists for many decades. It is well known that its fundamental group is $\mathbb{Z}_2$, with two types of closed loops -- trivial and non-trivial. We may expect this rich topological structure to give rise to many observable consequences. However, relatively few examples are known. Perhaps the best known is the $\mathbb{Z}_2$ vortex. This represents a topological defect in $SO(3)$-field theories, with a textured $SO(3)$ order parameter in two dimensions\cite{Kawamura1984}. While this concept has been explored in many theoretical studies\cite{Kawamura2010,Rahmani2013,Osorio2019}, it has recently been invoked in experiments as well\cite{Tomiyasu2021}. Our results bring out a more direct consequence of $SO(3)$'s topology -- at the level of a single rotor, without requiring a field theory. Our results apply to a potentially large family of magnets. Our arguments regarding the Berry phase hold for any magnet that satisfies the three conditions listed in the abstract -- as emphasized in App.~\ref{app.Berry_phase}.
As specific examples, we have discussed odd-polygon antiferromagnets. Our discussion here builds upon previous studies on antiferromagnetic chains. Our results are consistent with early studies including analytic solutions for small spin values\cite{Kouzoudis1997} and observations from exact diagonalization spectra\cite{Schnack2000,Barwinkel2000a,Barwinkel2000b,Barwinkel2003}. Our analysis shows that features in low-energy spectra can be cleanly understood in terms of an effective rigid rotor description. These include degeneracy and momentum carried by the ground state.
Effective low-energy theories offer a starting point to understand spontaneous symmetry breaking\cite{Anderson1952}. Classical ordering requires breaking the symmetries of the system. However, this is not possible in any finite quantum system. Rather, we obtain a characteristic spectrum of states, called the `Anderson tower' or the `thin spectrum'. Our results can be viewed in this perspective, as determining the nature of the Anderson tower for non-collinear ordering in a class of magnets. The tower of states corresponds precisely to rigid rotor spectra. We provide an effective low-energy theory for polygons with $N$ vertices. We support this with an analytic calculation of the spectrum for $N=3$ and numerically obtained spectra for $N=5,7$ and $9$.
A deeper view of our results reveals a propensity towards ordering when $N\rightarrow \infty$. In the effective rigid-rotor picture, we find that the moment of inertia increases with $N$. In the thermodynamic limit, we have a `massive' rotor that can be easily pinned by an external symmetry breaking field. Formally, this can be stated as a sequence of limits. If a symmetry breaking field is held at a fixed strength while $N$ is increased, the system will order. This order will persist if the external field is then smoothly taken to zero.
Within this paradigm, our results show a qualitative difference between systems with integer and half-integer $S$. The nature of the low-lying spectra are different in the two cases. An interesting future direction is to examine whether this leads to observable differences in the approach to classical ordering.
The thin spectrum is of significant interest in numerical studies on quantum magnets\cite{Wietek2017}. It serves as a signature for classical ordering that is otherwise inaccessible in a finite system. This line of reasoning has played an important role in demonstrating the emergence of classical ordering in the Heisenberg antiferromagnet on the triangular lattice\cite{Bernu1992,Bernu1994}. Rigid-rotor based field theories have been used to determine the thin spectrum in triangle-based antiferromagnets, with the structure also appearing in the entanglement spectrum\cite{Rademaker2015}.
Our study highlights the role of the Berry phase in this problem. This gives rise to an odd-even effect where the spectrum oscillates between two characteristic patterns as $S$ is varied.
\section*{Acknowledgement}
We thank G. Baskaran, R. Shankar, and Diptiman Sen for many insightful discussions.
| {
"timestamp": "2022-05-05T02:06:12",
"yymm": "2202",
"arxiv_id": "2202.04096",
"language": "en",
"url": "https://arxiv.org/abs/2202.04096",
"abstract": "The rigid rotor is a classic problem in quantum mechanics, describing the dynamics of a rigid body with its centre of mass held fixed. The configuration space of this problem is $SO(3)$, the space of all rotations in three dimensions. This is a topological space with two types of closed loops: trivial loops that can be adiabatically shrunk to a point and non-trivial loops that cannot. In the traditional formulation of the problem, stationary states are periodic over both types of closed loops. However, periodicity conditions may change if Berry phases are introduced. We argue that time-reversal-symmetry allows for only one new possibility -- a Berry phase of $\\pi$ attached to all non-trivial loops. We derive the corresponding stationary states by exploiting the connection between $SO(3)$ and $SU(2)$ spaces. The solutions are anti-periodic over any non-trivial loop, i.e., stationary states reverse sign under a $2\\pi$ rotation about any axis. Remarkably, this framework is realized in the low-energy physics of certain quantum magnets. The magnets must satisfy the following conditions: (a) the classical ground states are unpolarized, carrying no net magnetization, (b) the set of classical ground states is indexed by $SO(3)$, and (c) the product $N\\times S$ is a half-integer, where $N$ is the number of spins and $S$ is the spin quantum number. We demonstrate this result in a family of Heisenberg antiferromagnets defined on polygons with an odd number of vertices. At each vertex, we have a spin-$S$ moment that is coupled to its nearest neighbours. In the classical limit, these magnets have coplanar ground states. Their quantum spectra, at low energies, correspond to `spherical top' and `symmetric top' rigid rotors. For integer values of $S$, we recover traditional rigid rotor spectra. With half-integer-$S$, we obtain rotor spectra with a Berry phase of $\\pi$.",
"subjects": "Strongly Correlated Electrons (cond-mat.str-el)",
"title": "Berry phase in the rigid rotor: the emergent physics of odd antiferromagnets",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9664104924150546,
"lm_q2_score": 0.7341195269001831,
"lm_q1q2_score": 0.7094608134831129
} |
https://arxiv.org/abs/1903.07995 | Bose-Einstein condensation in spherically symmetric traps | We present a pedagogical introduction to Bose-Einstein condensation in traps with spherical symmetry, namely the spherical box and the thick shell, sometimes called bubble trap. In order to obtain the critical temperature for Bose-Einstein condensation, we describe how to calculate the cumulative state number and density of states in these geometries, using numerical and analytical (semi-classical) approaches. The differences in the results of both methods are a manifestation of Weyl's theorem, i.e., they reveal how the geometry of the trap (boundary condition) affects the number of the eigenstates counted. Using the same calculation procedure, we analyzed the impact of going from three-dimensions to two-dimensions, as we move from a thick shell to a two-dimensional shell. The temperature range we obtained, for most commonly used atomic species and reasonable confinement volumes, is compatible with current cold atom experiments, which demonstrates that these trapping potentials may be employed in experiments. | \section{Introduction}
A Bose-Einstein condensate (BEC) corresponds to the macroscopic occupation
of the lowest energy quantum state by the particles of a system
\cite{bose24}. Bose-Einstein condensation occurs when the system is cooled below a critical
temperature $T_c$ and the mean interparticle distance
$\bar{l}=\rho^{-1/3}$, $\rho$ being the number density of $N$
particles in a volume $V$, becomes comparable to the de Broglie wavelength,
\begin{equation}\label{eq:deBroglie}
\lambda=\frac{h}{Mv},
\end{equation}
where $M$ is the mass of the atoms, and
$v=\sqrt{k_B T/M}$ is their thermal velocity, $k_B$ being
the Boltzmann constant. Imposing $\lambda\sim \bar{l}$ implies that a homogeneous gas will undergo a Bose-Einstein condensation at a temperature
\begin{equation}
T_c \sim \frac{h^2\rho^{2/3}}{M k_B}.
\end{equation}
This simple qualitative argument differs from the accurate result
only by a factor of $\approx$ 3.3 \cite{pethick02}.
The first experimental realizations of Bose-Einstein condensation in
dilute gases were achieved in 1995 \cite{anderson95,bradley95,davis95}, and currently several
laboratories around the world produce BECs on a daily basis.
One feature of experiments with cold atomic gases that led to rapid advances in the field is the ability to control the parameters of the system \cite{griffin96,ketterle99}.
The interatomic interactions and trapping potentials can be changed by external electromagnetic fields, with
unprecedented control. Although harmonic potentials are the most commonly used
traps in experiments, other geometries, such as box traps
\cite{gaunt13}, recently became available.
In this work we are interested in dilute gases. Here we study a BEC trapped in spherically symmetric potentials,
the spherical box and the thick shell, sometimes called bubble
trap. Our theoretical studies are motivated by the experimental possibility of confining the atoms in this kind of trap \cite{zobay01,zobay04,garraway16}, which has to be inserted in a microgravity setting to produce a spherical atom distribution \cite{elliott18}.
We determined the cumulative state number and density of
states in these geometries in order to calculate
the critical temperature for Bose-Einstein condensation.
The temperature range we obtained
is compatible with current cold atom experiments, which
demonstrates that these trapping potentials
may be employed in experiments.
We also discuss, very briefly,
the effects
of reducing the dimensionality of the system of interest from 3D to 2D,
which is what happens when the thickness of the shell goes to zero.
The study of cold gases has proven to be a very rich research field, and the investigation of low-dimensional systems has become an active area in this context \cite{giorgini08,bloch08}.
We wrote this manuscript in a pedagogical way, hoping that
dedicated undergraduate students will find all the necessary
ingredients to reproduce the results presented here.
Moreover, we wish to show that even if some problems in statistical
physics do not have analytical solutions,
numerical methods offer some insight into the underlying
physics of the system, as we will show here.
This work is structured as it follows.
In Sec.~\ref{sec:cumu_dos} we introduce the concepts related
to the cumulative state number and density of states.
We begin by calculating the energy levels of a particle in a
rigid box, Sec.~\ref{sec:rigid_box}; then we show how the density
of states can be obtained from the cumulative state number,
Sec.~\ref{sec:nspace}; we write expressions for these quantities
in the high-energy limit, Sec.~\ref{sec:analytic_cumu_dos},
and semi-classical approximations, Sec.~\ref{sec:semi-classical}.
Weyl's theorem is presented in Sec.~\ref{sec:weyl}.
Bose-Einstein condensation is introduced in
Sec.~\ref{sec:bose}, where we derive the expression for the critical
temperature in three-dimensions.
Sec.~\ref{sec:sph_sym_pot} deals with the solution of Schr\"odinger's
equation for a spherically symmetric potential, which is then applied to
two different trapping potentials: the spherical box
and the thick shell, Secs.~\ref{sec:sphere} and \ref{sec:shell},
respectively.
The critical temperatures are calculated in
Sec.~\ref{sec:temp}, for three-dimensional,
Sec.~\ref{sec:temp_3D}, and two-dimensional systems,
Sec.~\ref{sec:3D_2D}. Finally, we summarize our findings in
Sec.~\ref{sec:summary}.
Appendix~\ref{app:Boseint} deals with the generalization of the critical temperature expression for $D$ dimensions.
\section{Cumulative state number and density of states}
\label{sec:cumu_dos}
\subsection{Particle in a rigid box}
\label{sec:rigid_box}
The concept of density of states (DOS) is ubiquitous to many areas
of physics, such as: specific heat calculations, black-body radiation,
phonon spectra, reaction rates in nuclear physics, and many more.
For a pedagogical overview the reader is referred to Ref.~\cite{mulhall14}.
In this work, we are going to use the DOS to calculate the critical temperature
of a trapped BEC.
In statistical physics many quantities can be expressed as
integrations over the phase space, which can be very complicated.
An alternative is to replace the variables in terms of the energy
of the system, thus replacing the volume in phase space by a
weight factor in the energy integral. This weight factor is the
density of states, which typically makes the integrals more
tractable.
Let us begin with the case of a particle in a rigid box, that is,
subjected to a potential which is zero inside the box and
infinite outside it.
Although it is a very simple example, it exhibits the nonclassical
behavior expected from a quantum mechanical problem, and it also serves
as a building block to more complex examples
(scattering, double-well, among many others).
A nonrelativistic particle of mass $M$ inside a one-dimensional box
of size $L$ has energy levels given by \cite{griffiths18}
\begin{equation}
\varepsilon_n^{\rm 1D}=\frac{\hbar^2}{2M}\frac{\pi^2}{L^2}n_x^2=\varepsilon_0 n_x^2,
\end{equation}
where we defined $\varepsilon_0=\pi^2\hbar^2/(2ML^2)$ and $n_x$
is an integer. In a two-dimensional square box of sides $L$,
the energy levels are simply
$\varepsilon_n^{\rm 2D}=\varepsilon_0 (n_x^2+n_y^2)$, where we introduced
an extra integer $n_y$ to take into account the $y$-dimension. Finally,
a straightforward generalization to three-dimensions yields
$\varepsilon_n^{\rm 3D}=\varepsilon_0 (n_x^2+n_y^2+n_z^2)$.
\subsection{$n$-space representation}
\label{sec:nspace}
For the following discussion we are going to assume the two-dimensional
case because its visualization is easier, but the arguments hold in
the other cases. The momentum space is defined by the variables $p_x$
and $p_y$, but they only differ from $n_x$ and $n_y$ by a constant,
$p_i=\hbar k_i=n_i\pi\hbar/L$ with $i=x$, $y$. So let us call
this space, defined by $n_x$ and $n_y$, $n$-space.
We can think of each quantum number being a line, and the intersection
of the lines correspond to the allowed quantum states $(n_x,n_y)$.
In Fig.~\ref{fig:enegy2d} we represent the two-dimensional $n$-space,
and for each quantum state we write the energy $\varepsilon_n^{\rm 2D}$
in units of $\varepsilon_0$.
A curve of constant energy or, equivalently, of constant $n^2$, is a
quarter circle of radius $n=\sqrt{n_x^2+n_y^2}$.
When independent states correspond to the same energy we say they
are degenerate. This is illustrated in Fig.~\ref{fig:enegy2d}
by the quarter circle $n=\sqrt{n_x^2+n_y^2}=5$ which intersects
two grid points, (3,4) and (4,3), corresponding to the two degenerate
energy states. Notice, however, that not all energies
are allowed, for example $n=\sqrt{n_x^2+n_y^2}=6$ does not intersect
any points.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\columnwidth]{energy_2d.pdf}
\caption{Energies, in units of $\varepsilon_0$, of a particle in a 2D square box as a
function of the integers $n_x$ and $n_y$.
The quarter circles correspond to $n=\sqrt{n_x^2+n_y^2}=$ 5 and 6.
Notice that $n=5$ intersects two grid points, (3,4) and (4,3),
corresponding to the degeneracy of this energy level, whereas $n=6$
does not intersect any points.
}
\label{fig:enegy2d}
\end{figure}
If we list all the allowed energies $\varepsilon$ of our system, or more
practically
all the possible energies up to a cutoff, and their corresponding
degeneracies $d(\varepsilon)$, we could make a plot of $d(\varepsilon)$,
which would correspond to the
``number of states
with
energy $\varepsilon$'' vs $\varepsilon$. This graph would be a series
of spikes, at the allowed energies $\varepsilon$, each with height
$d(\varepsilon)$.
At this point it is helpful to introduce a new quantity, the cumulative
state number $\mathcal{N}(\varepsilon)$ defined as
the number of states with energy less than or equal to $\varepsilon$.
Its graph is a staircase where each step has a height $d(\varepsilon)$
and a width given by the gap between two consecutive energy levels.
Finally, we can introduce the density of states function $g(\varepsilon)$
as being related to the cumulative state number through
$g(\varepsilon)d\varepsilon=d\mathcal{N}(\varepsilon)$, so
we identify $g(\varepsilon)$ with the slope of $\mathcal{N}(\varepsilon)$.
From a computational point of view, we can take the numerical
derivative using a finite difference expression,
\begin{equation}
\label{eq:g_der}
g(\varepsilon)=\frac{d\mathcal{N}}{d\varepsilon}=
\frac{\mathcal{N}(\varepsilon+\delta\varepsilon)-\mathcal{N}(\varepsilon-\delta\varepsilon)}{2\delta\varepsilon},
\end{equation}
where $\delta\varepsilon$ is small compared to $\varepsilon$.
Then, if we divide the energy interval into bins of
width $\delta\varepsilon$,
$g(\varepsilon)$ will correspond to ``number of states in a bin'' divided
by the ``width of the bin'', in accordance with our definition of
the density of states.
Throughout this paper, we favor working with $\mathcal{N}(\varepsilon)$ rather
than $g(\varepsilon)$. From the theoretical point of view,
they contain the same physical information and they are interchangeable.
However, from the computational perspective, the
cumulative state number will be a smoother function because
it corresponds simply to the addition of integers, whereas the density
of states corresponds to numerical derivatives, hence it suffers
more from noisy data.
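As a concrete illustration (a minimal sketch; the cutoff and bin width below are arbitrary choices, not values used elsewhere in this paper), the staircase $\mathcal{N}(\varepsilon)$ and the finite-difference estimate of $g(\varepsilon)$ of Eq.~(\ref{eq:g_der}) can be generated for the three-dimensional rigid box as follows:
\begin{verbatim}
# Sketch: cumulative state number and DOS of the 3D rigid box,
# eps_n = eps0*(nx^2+ny^2+nz^2), compared with the smooth high-energy forms.
import numpy as np

eps0 = 1.0
nmax = 60                           # enumeration complete for eps <= eps0*nmax^2
n = np.arange(1, nmax + 1)
nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
levels = np.sort((eps0 * (nx**2 + ny**2 + nz**2)).ravel())

def N_of(eps):                      # staircase N(eps)
    return int(np.searchsorted(levels, eps, side="right"))

def g_of(eps, d_eps=5.0):           # centered finite difference, Eq. (g_der)
    return (N_of(eps + d_eps) - N_of(eps - d_eps)) / (2 * d_eps)

for eps in (100.0, 400.0, 900.0):
    print(f"eps = {eps:5.0f} eps0 :  N = {N_of(eps):5d}"
          f"  (smooth {np.pi/6*eps**1.5:7.1f}),"
          f"  g = {g_of(eps):6.2f}  (smooth {np.pi/4*np.sqrt(eps):6.2f})")
\end{verbatim}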
\subsection{Analytic expressions for the cumulative state number
and density of states}
\label{sec:analytic_cumu_dos}
Equation~(\ref{eq:g_der}) corresponds to a numerical representation
of $g(\varepsilon)$. However, there are analytic expressions
for the rigid box potentials we introduced earlier,
when the DOS is large and well approximated by a smooth function.
The states in the energy interval between $\varepsilon$ and
$\varepsilon+d\varepsilon$ are represented in $n$-space by
a spherical shell of thickness $dn$ with positive coordinates.
In the two-dimensional example of Fig.~\ref{fig:enegy2d}, the number
of states between $n$ and $n+dn$ is proportional to the area of the band.
Clearly this is an approximation, since $n_x$ and $n_y$ are discrete,
however this becomes increasingly accurate when the energy levels become
closely spaced. Hence, the 2D DOS is given by
$g_{\rm 2D}(\varepsilon)d\varepsilon=(1/4)(2\pi)(n dn)$,
where the factor of 1/4
corresponds to the positive quadrant, and we consider polar coordinates
such that the radial coordinate is $n=\sqrt{n_x^2+n_y^2}$ and the factor
of $2\pi$ accounts for the angular direction (supposing that the function
is isotropic). Thus, we can write the DOS as
$g_{\rm 2D}(\varepsilon)=(1/2)\pi n(\varepsilon) dn/d\varepsilon$.
Substituting $n(\varepsilon)=\sqrt{\varepsilon/\varepsilon_0}$ yields
$g_{\rm 2D}(\varepsilon)=\pi/(4\varepsilon_0)$, that is, a constant.
Since the cumulative state number is the integral of $g(\varepsilon)$, then
$\mathcal{N}_{\rm 2D}(\varepsilon)=(\pi/(4\varepsilon_0))\varepsilon$
is a straight line.
For the three-dimensional case, the appropriate construction in
$n$-space is a shell of thickness $dn$ in the all positive coordinates octant of a sphere, which leads to
$g_{\rm 3D}(\varepsilon)d\varepsilon=(1/8)(4\pi)(n^2 dn)$, where the factor
of 1/8 corresponds to only one octant, and we consider spherical coordinates, such that $n=\sqrt{n_x^2+n_y^2+n_z^2}$ is the
radial coordinate, and the factor of $4\pi$ corresponds to the solid angle
average. Hence,
\begin{equation}
\label{eq:g3D}
g_{\rm 3D}(\varepsilon)=\frac{\pi}{4\varepsilon_0^{3/2}}\sqrt{\varepsilon},
\end{equation}
and
\begin{equation}
\label{eq:N3D}
\mathcal{N}_{\rm 3D}(\varepsilon)=\frac{\pi}{6\varepsilon_0^{3/2}}\varepsilon^{3/2}.
\end{equation}
So far we were restricted to the problem of one particle in a D-dimensional
box. If we have $N$ noninteracting particles in a cube,
then the total energy
is the sum of the energy of individual particles, which can be related
to the surface of a $D$-dimensional hypersphere, with $D=3N$.
The ``content'' (in 2D it is the area, in 3D the volume, and so on)
of a $D$-dimensional hypersphere of radius $R$ is given by
\cite{sommerville29}
\begin{equation}
V_D=\frac{\pi^{D/2}}{\Gamma(D/2+1)}R^D=C_D'R^D,
\end{equation}
where $\Gamma$ is the gamma function \cite{arfken11}, and we defined
$C_D'=\pi^{D/2}/\Gamma(D/2+1)$.
Notice that this formula reproduces the familiar results
$C_2'=\pi$, and $C_3'=4\pi/3$.
The hyper-surface area (in 2D the perimeter, and in 3D the surface)
is given by $S_D=D C_D' R^{D-1}$, and its portion in the all
positive coordinates region is given by $(1/2^D)S_D$.
Thus, the cumulative state number is given by the $n$-space volume
(restricted to positive coordinates) enclosed by the hypersphere of radius $n=\sqrt{\varepsilon/\varepsilon_0}$,
\begin{equation}
\label{eq:cumu_D}
\mathcal{N}_D(\varepsilon)=\frac{1}{2^D}C_D'n^D=\frac{1}{2^D}C_D'
\left( \frac{\varepsilon}{\varepsilon_0}\right)^{D/2}.
\end{equation}
The DOS is obtained by deriving the expression above,
\begin{equation}
\label{eq:DOS_D}
g_D(\varepsilon)=\frac{1}{2^{D+1}}C_D' D
\frac{\varepsilon^{D/2-1}}{\varepsilon_0^{D/2}}.
\end{equation}
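These expressions can again be checked by brute-force counting. The sketch below (an illustration only, with arbitrary energy cutoffs) compares Eq.~(\ref{eq:cumu_D}) with a direct enumeration for $D=2$, 3 and 4:
\begin{verbatim}
# Sketch: compare the high-energy formula N_D(eps) with brute-force counting
# of states with eps0*(n_1^2+...+n_D^2) <= eps (positive integers n_i).
from math import gamma, pi
import numpy as np

def count_states(D, eps, eps0=1.0):
    nmax = int(np.sqrt(eps / eps0))
    axes = np.meshgrid(*([np.arange(1, nmax + 1)] * D), indexing="ij")
    return int(np.count_nonzero(eps0 * sum(a**2 for a in axes) <= eps))

def N_D(D, eps, eps0=1.0):
    CD = pi ** (D / 2) / gamma(D / 2 + 1)      # hypersphere "content" C'_D
    return CD / 2**D * (eps / eps0) ** (D / 2)

for D in (2, 3, 4):
    for eps in (400.0, 900.0):
        print(f"D={D}, eps={eps:5.0f}: counted {count_states(D, eps):6d},"
              f" smooth {N_D(D, eps):8.1f}")
\end{verbatim}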
\subsection{The semi-classical approximation}
\label{sec:semi-classical}
The energy levels we employed in the sections above were obtained
analytically. However, such
calculations are possible only for a few systems in quantum mechanics.
Nevertheless, it is possible to calculate the density of
states employing the so-called semi-classical approximation \cite{bagnato87}. The main idea behind it is that
the volume in phase space between two surfaces of energy $\varepsilon$
and $\varepsilon+d\varepsilon$ is proportional to the number of
states in that interval.
The uncertainty principle sets the smallest cell in phase space to $h^3$,
so the number of states contained in a phase-space element is
$d^3r\, d^3p/h^3$.
If we want to calculate the cumulative state number as a function
of the momentum $p$, then
\begin{equation}
\mathcal{N}_{\rm SC}(p) = \frac{1}{h^3}\int d^3r \int_{0}^{p} 4\pi p'^2 dp' =\frac{4\pi}{3 h^3}\int d^3r \ p^3,
\end{equation}
where we used spherical coordinates to do the integral over the momenta.
The total energy is equal to $\varepsilon=p^2/(2M)+U(\bvec{r})$,
and solving for $p$ yields
$p=\left[2M\left(\varepsilon-U(\bvec{r})\right)\right]^{1/2}$, so that
\begin{equation}
\label{eq:cumu_semi_classical}
\mathcal{N}_{\rm SC}(\varepsilon)=
\frac{1}{6\pi^2}\left(\frac{2M}{\hbar^2}\right)^{3/2}
\hspace{-0.25cm}
\int\limits_{V^*(\varepsilon)}
\hspace{-0.25cm} d^3r
\left( \varepsilon-U(\bvec{r}) \right)^{3/2},
\end{equation}
where the integration is done over the volume $V^*(\varepsilon)$
available to the particle with energy $\varepsilon$. Note that the external potential $U(\bvec{r})$ has an important contribution to the calculation of
the DOS, since it constrains the space available to the system.
Taking the derivative of Eq.~(\ref{eq:cumu_semi_classical})
gives us the 3D DOS in this semi-classical approximation,
\begin{equation}
\label{eq:DOS_semiclassical}
g_{\rm SC}(\varepsilon)=\frac{1}{4\pi^2}\left(\frac{2M}{\hbar^2}\right)^{3/2}
\hspace{-0.25cm}
\int\limits_{V^*(\varepsilon)} \hspace{-0.25cm} d^3r \sqrt{\varepsilon-U(\bvec{r})}.
\end{equation}
For the rigid box,
$\mathcal{N}_{\rm SC}(\varepsilon)$
agrees with $\mathcal{N}_{\rm 3D}(\varepsilon)$,
Eq.~(\ref{eq:N3D}), and
$g_{\rm SC}(\varepsilon)$ agrees with $g_{\rm 3D}(\varepsilon)$,
Eq.~(\ref{eq:g3D}).
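To make this agreement explicit, note that for the rigid box $U(\bvec{r})=0$ throughout the accessible volume $V^*=L^3$, so the integrand in Eq.~(\ref{eq:cumu_semi_classical}) is constant and
\begin{equation}
\mathcal{N}_{\rm SC}(\varepsilon)=\frac{L^3}{6\pi^2}\left(\frac{2M}{\hbar^2}\right)^{3/2}\varepsilon^{3/2}
=\frac{\pi}{6}\left(\frac{2ML^2}{\pi^2\hbar^2}\right)^{3/2}\varepsilon^{3/2}
=\frac{\pi}{6\varepsilon_0^{3/2}}\,\varepsilon^{3/2},
\end{equation}
which is precisely Eq.~(\ref{eq:N3D}); differentiating with respect to $\varepsilon$ reproduces Eq.~(\ref{eq:g3D}).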
\subsection{Weyl's theorem}
\label{sec:weyl}
So far we discussed only $D$-dimensional rigid boxes, and
Eqs.~(\ref{eq:cumu_D}) and (\ref{eq:DOS_D}) were derived
for the high energy limit assuming these cubical geometries.
One might ask if these expressions would be modified in different
geometries.
If the box is sufficiently large, the shape of the ``box''
(we use this word in the sense of the region in which the
particle is trapped, much like $V^*$ in Eq.~(\ref{eq:cumu_semi_classical}))
should not affect the particle, as long as $\lambda^D\ll V$,
where $\lambda=2\pi/k$ is the de Broglie wavelength of the particle (see Eq.~(\ref{eq:deBroglie})).
Thus a slow particle, with long wavelength,
will know about the edge of the box,
whereas a fast particle, with short wavelength, will not be
sensitive to the walls.
This physical intuition is in agreement with the so-called
Weyl's theorem \cite{weyl1912}, which can be paraphrased as
``high energy eigenvalues of the wave function are insensitive
to the shape of the boundary''.
A good explanation about the emergence of the theorem is given
in Ref.~\cite{kac66}, and an explicit proof for the sphere is
given in Ref.~\cite{lambert68}.
Hence, the conclusion is that for $\lambda^D\ll V$, the high energy
limit, the density of states and the cumulative state number
are unaffected by the shape of the box. This is also why the
semi-classical approximation yields good results for large values of $k$.
As we will see, for $\lambda^D\gg V$, deviations from
Eqs.~(\ref{eq:cumu_D}) and (\ref{eq:DOS_D}) might occur, and they can affect considerably the calculation of thermodynamical quantities, as we will demonstrate here.
\section{Bose-Einstein condensation}
\label{sec:bose}
We work within the grand-canonical ensemble, that is,
our system is in contact with heat and particle baths.
For a didactic approach to the topic
of ensembles in statistical physics,
the reader is referred to
Ref.~\cite{salinas13}.
The thermodynamical quantities are functions of the
volume $V$, the temperature $T$, and the chemical potential $\mu$.
The grand-canonical partition function is given by
\begin{equation}
\ln \Xi(T,V,\mu)=-\sum_j \ln\left\{1-\exp\left[-\beta(\varepsilon_j-\mu)\right]\right\},
\end{equation}
where the sum is done over single-particle states,
$\beta=1/(k_B T)$, and
$\varepsilon_j$ is the energy of the $j$-th level of the system.
From the partition function it is possible to obtain the expected value
of the occupation of the $j$-th level,
\begin{equation}
\label{eq:occ}
\langle n_j \rangle = \frac{1}{\exp\left[\beta(\varepsilon_j-\mu)\right]-1},
\end{equation}
and the total number of particles,
\begin{equation}
\label{eq:Ndiscrete}
N=\sum_j\langle n_j \rangle =\sum_j\frac{1}{\exp\left[\beta(\varepsilon_j-\mu)\right]-1}.
\end{equation}
These equations only make sense if $\varepsilon_j-\mu > 0$ for all $j$;
taking the lowest single-particle energy to be zero, this means a strictly negative chemical potential.
For the classical limit of high temperatures, it is easy to
see that $\mu<0$. However, in the quantum mechanical context,
$\mu=0$ gives rise to the Bose-Einstein condensation.
In order to calculate the critical temperature $T_c$ where $\mu\to 0^-$,
let us take Eq.~(\ref{eq:Ndiscrete}) with $\mu=0$. Furthermore,
let us assume that these are free-particles, with an energy spectrum
of $\varepsilon_j=\hbar^2 k^2/(2M)$. In the thermodynamical limit,
the sum may be replaced by an integral, and the set of expected occupation
numbers $\langle n_j \rangle$ becomes a smooth function of the energy,
that we denote by
$f(\varepsilon)=1/(\exp\left[\beta(\varepsilon-\mu)\right]-1)$.
This function is often called Bose-Einstein distribution.
Putting all this information together, we have an expression that
relates the number of particles with the temperature,
\begin{eqnarray}
\label{eq:Ncont}
N=\int d\varepsilon g(\varepsilon) f(\varepsilon).
\end{eqnarray}
Here we see the importance of the DOS function, see Sec.~\ref{sec:cumu_dos}.
The Bose-Einstein distribution $f(\varepsilon)$
gives us the expected occupation of a single-particle state with
energy $\varepsilon$ which, for bosons, can be any non-negative number.
However, the energies might be degenerate,
so
we use $g(\varepsilon)d\varepsilon$ to count the number of available
states between $\varepsilon$ and $\varepsilon+d\varepsilon$.
A straightforward substitution of Eq.~(\ref{eq:g3D}) into (\ref{eq:Ncont})
yields
\begin{eqnarray}
N=\frac{V}{4\pi^2}\left(\frac{2M}{\hbar^2}\right)^{3/2}
\int_0^\infty d\varepsilon
\frac{\varepsilon^{1/2}}{\exp(\beta_c\varepsilon)-1},
\end{eqnarray}
where we defined $\beta_c=1/(k_B T_c)$. This integral can
be solved analytically, see Appendix~\ref{app:Boseint} for a step by step solution.
Solving for $T_c$ yields
\begin{eqnarray}
\label{eq:Tc}
T_c=\frac{\hbar^2}{2 M k_B}\left[
\frac{4\pi^2}{\Gamma\left(\frac{3}{2}\right)\zeta\left(\frac{3}{2}\right)}
\right]^{2/3} \left(\frac{N}{V}\right)^{2/3},
\end{eqnarray}
where $\zeta$ is the Riemann zeta function \cite{abramowitz12}.
Notice that if we rewrite the expression above in terms of the thermal
de Broglie wavelength $\lambda_{dB}=h/\sqrt{2\pi M k_B T}$, it takes the
familiar form $\rho\lambda_{dB}^3=\zeta(3/2)\approx 2.612$. In Appendix~\ref{app:Boseint}
we also present the critical temperature expression of a $D$-dimensional
gas.
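The numbers involved are easy to check numerically. The sketch below verifies the value of the Bose integral and evaluates Eq.~(\ref{eq:Tc}) for an illustrative, assumed density of $^{87}$Rb atoms (neither the density nor the species is a value taken from this section):
\begin{verbatim}
# Sketch: numerical check of the Bose integral and of Eq. (eq:Tc).
# The species (87Rb) and the density below are assumed, for illustration only.
import numpy as np
from scipy import constants, integrate, special

# integral_0^inf sqrt(x)/(e^x-1) dx  should equal Gamma(3/2)*zeta(3/2) ~ 2.315
val, _ = integrate.quad(lambda x: np.sqrt(x) / np.expm1(x), 0, np.inf)
print(val, special.gamma(1.5) * special.zeta(1.5, 1))

hbar, kB = constants.hbar, constants.k
M = 87 * constants.atomic_mass            # ~1.44e-25 kg (87Rb, assumed)
rho = 1e20                                # assumed number density, m^-3
Tc = (hbar**2 / (2 * M * kB)
      * (4 * np.pi**2 / (special.gamma(1.5) * special.zeta(1.5, 1))) ** (2 / 3)
      * rho ** (2 / 3))
print(f"Tc ~ {Tc * 1e9:.0f} nK")          # a few hundred nK for these numbers
\end{verbatim}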
\section{Spherically symmetric potentials}
\label{sec:sph_sym_pot}
Let us consider a particle of mass $M$ and energy $E>0$
subjected to an external potential $V(r)$ which depends only of the
distance $r$ from the origin.
The time-independent Schr\"odinger equation obeyed by the wave function
of the particle
$\Psi(\mathbf{r})$ is
\begin{equation}
\label{eq:sch}
-\frac{\hbar^2}{2M}\nabla^{2}\Psi(\bvec{r})+V(r)\Psi(\bvec{r})= E \Psi(\bvec{r}).
\end{equation}
The fact that the potential is spherically symmetric suggests that
our calculations might be easier in spherical coordinates, where
we employ the usual convention for $(r,\theta,\phi)$.
Equation~(\ref{eq:sch}) takes the form
\begin{eqnarray}
&&-\frac{\hbar^2}{2M}
\left[
\frac{1}{r^2}\frac{\partial}{\partial r}\left(
r^2\frac{\partial\Psi}{\partial r}
\right)
+\frac{1}{r^2 \sin\theta}\frac{\partial}{\partial \theta}\left(
\sin\theta\frac{\partial\Psi}{\partial\theta}\right)
\right.\nonumber\\
&&\left.
+\frac{1}{r^2\sin^2\theta}\left(\frac{\partial^2\Psi}{\partial\varphi^2}\right)
\right]
+V(r)\Psi= E \Psi.
\end{eqnarray}
Let us look for solutions that are separable into products \cite{butkov68,griffiths18},
\begin{equation}
\label{eq:psi_sep}
\Psi _{nlm} \left ( r,\theta ,\varphi \right )=R_{nl}\left ( r \right )Y_{lm}\left ( \theta ,\varphi \right ).
\end{equation}
After a few mathematical manipulations,
\begin{flalign}
&\left[
\frac{1}{R_{nl}}\frac{d}{dr}\left(
r^2\frac{dR_{nl}}{dr}
\right)
-\frac{2Mr^2}{\hbar^2}(V(r)-E)
\right]
\nonumber\\
&+\frac{1}{Y_{lm}}
\left[
\frac{1}{\sin\theta}\frac{\partial}{\partial \theta}\left(
\sin\theta\frac{\partial Y_{lm}}{\partial\theta}\right)
+\frac{1}{\sin^2\theta}\left(\frac{\partial^2Y_{lm}}{\partial\varphi^2}\right)
\right]=0.
\end{flalign}
The terms inside the first brackets depend only on $r$,
while the terms inside the second brackets contain only terms that
depend on $\theta$
and $\varphi$. For this equation to be true for all values of
$r$, $\theta$, and $\varphi$, the first term must be equal to a constant,
and the second one to minus the same constant.
For convenience, we will call this constant $l(l+1)$,
\begin{flalign}
\label{eq:radial}
&\frac{1}{R_{nl}}\frac{d}{dr}\left(
r^2\frac{dR_{nl}}{dr}
\right)
-\frac{2Mr^2}{\hbar^2}(V(r)-E)
=l(l+1),
\\
&\frac{1}{Y_{lm}}\left[
\frac{1}{\sin\theta}\frac{\partial}{\partial \theta}\left(
\sin\theta\frac{\partial Y_{lm}}{\partial\theta}\right)
+\frac{1}{\sin^2\theta}\left(\frac{\partial^2Y_{lm}}{\partial\varphi^2}\right)
\right]
=\nonumber\\
&-l(l+1).
\label{eq:angular}
\end{flalign}
In principle $l(l+1)$ could be any complex number, and there is no
loss of generality in writing the separation constant this way.
However, if the reader is familiar with quantum mechanics,
it is known
that $l$ turns out to be an integer, $l=0,1,\cdots$, and the
quantum number associated with orbital angular momentum.
The angular equation gives rise to the spherical harmonics,
\begin{eqnarray}
Y_{lm}(\theta,\varphi)=
\epsilon\sqrt{\frac{(2l+1)}{4\pi}\frac{(l-|m|)!}{(l+|m|)!}}
e^{im\varphi} P_l^m(\cos\theta),\quad
\end{eqnarray}
where $\epsilon=(-1)^m$ for $m\geqslant 0$ and $\epsilon=1$ for
$m\leqslant 0$, and $P_l^m$ is the associated Legendre function \cite{griffiths18}. The quantum number $m$, sometimes called
magnetic quantum number, takes the integer values $m=-l,\cdots,0,\cdots,l$.
We do not discuss the angular solutions in detail -- the reader is
referred to an undergraduate-level quantum mechanics textbook for
this matter \cite{griffiths18} -- because we will see that, for
our purposes,
the only
pertinent detail of the angular solutions that we need is their degeneracy.
For a fixed value of $l$ the degeneracy is $2l+1$,
corresponding to how many values $m$ can take.
Notice that, so far, we did not specify $V(r)$.
That is because the angular equation, Eq.~(\ref{eq:angular}),
does not depend on the potential,
it only appears in the radial equation, Eq.~(\ref{eq:radial}).
In Secs.~\ref{sec:sphere} and \ref{sec:shell} we solve the radial
equation for two cases: a spherical box and a spherical shell of finite
thickness.
\section{Spherical box}
\label{sec:sphere}
Let us consider the external potential
\begin{equation}
\label{eq:Vsph}
V(r)=
\begin{cases}
0\phantom{+\infty} \text{ if } 0\leqslant r<a,\\
+\infty\phantom{0} \text{ if } r\geqslant a,
\end{cases}
\end{equation}
$a$ being the radius of the sphere where the particle is confined.
Equation~(\ref{eq:radial}) for the region $0\leqslant r<a$ now reads
\begin{eqnarray}
\label{eq:bessel_sphere}
\frac{d^2R_{nl}}{dr^2}+\frac{2}{r}\frac{dR_{nl}}{dr}+
\left(k^2-\frac{l(l+1)}{r^2}\right)R_{nl}=0,
\end{eqnarray}
where we introduced $k^2=2ME/\hbar^2$. The change in variables
$z=kr$ allows us to recast this equation into
\begin{eqnarray}
\label{eq:besselz}
\frac{d^2R_{nl}}{dz^2}+\frac{2}{z}\frac{dR_{nl}}{dz}+
\left(1-\frac{l(l+1)}{z^2}\right)R_{nl}=0,
\end{eqnarray}
which is the spherical Bessel differential equation \cite{abramowitz12}.
Its solutions are given by linear combinations of
\begin{eqnarray}
j_l(z)=(-z)^l\left(\frac{1}{z}\frac{d}{dz}\right)^l\frac{\sin z}{z},
\label{eq:besselj}
\\
y_l(z)=-(-z)^l\left(\frac{1}{z}\frac{d}{dz}\right)^l\frac{\cos z}{z}.
\label{eq:bessely}
\end{eqnarray}
The functions of Eq.~(\ref{eq:besselj}) are known as
spherical Bessel functions of the first kind, while the second kind
functions are given by Eq.~(\ref{eq:bessely}).
In Fig.~\ref{fig:bessel} we plot these functions for the orders $l=0,1,2$.
\begin{figure}[!htb]
\centering
\includegraphics[angle=-90,width=\columnwidth]{bessel.pdf}
\caption{
(Color online)
Examples of Bessel functions of the first, Eq.~(\ref{eq:besselj}), and second,
Eq.~(\ref{eq:bessely}), kinds.
We plot the first three orders, $l=0,1,2$, using
solid (red),
dashed (green),
short-dashed (blue),
long-dashed (magenta),
dash-dotted (orange), and
short-dash-dotted (gray)
curves to denote $j_0$, $j_1$, $j_2$, $y_0$, $y_1$, and $y_2$,
respectively.
The (black) solid circles denote the Bessel zeros $z_{10}$, $z_{11}$, and
$z_{12}$. Notice that the Bessel functions of the first kind are
well-behaved near the origin, whereas the ones of the second kind diverge.
}
\label{fig:bessel}
\end{figure}
To obtain the energy levels, we need to apply the
boundary conditions of our problem into the
solutions of Eq.~(\ref{eq:besselz}).
The wave function must be well-behaved at the origin;
the spherical Bessel functions of the second kind diverge at $r=0$
and are therefore not acceptable solutions, so
we retain only the spherical Bessel functions of the first kind,
which are regular there.
The boundary condition at $r=a$, where the wave function must vanish,
gives us the condition $j_l(ka)=0$. Denoting the
$n$-th zero of $j_l$ by $z_{nl}$, we have $k=z_{nl}/a$, and the
energy levels are
\begin{equation}
\label{eq:levels_sphere}
\varepsilon_{nl}=\frac{\hbar^2}{2M} \frac{z^2_{nl}}{a^2}.
\end{equation}
Thus our problem of determining the energy levels for this system
reduces to finding the zeros of Bessel functions of the first kind.
In Fig.~\ref{fig:bessel} we show the first zeros for $l=0,1,2$.
Although there are no analytical expressions for the $z_{nl}$,
we can easily find them numerically \cite{hamming12}.
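One simple way to find the $z_{nl}$ is to bracket sign changes of $j_l$ on a grid and refine them with a standard root finder. The Python sketch below is one possible implementation, not necessarily the method used for our results; it assumes the \texttt{numpy} and \texttt{scipy} libraries, and the grid spacing \texttt{dz} is an illustrative choice:
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn
from scipy.optimize import brentq

def bessel_zeros(l, z_max, dz=0.01):
    """Zeros z_{nl} of the spherical Bessel function j_l on (0, z_max]."""
    z = np.arange(dz, z_max, dz)
    f = spherical_jn(l, z)
    zeros = []
    for i in range(len(z) - 1):
        if f[i] * f[i + 1] < 0.0:  # a sign change brackets a zero
            zeros.append(brentq(lambda x: spherical_jn(l, x), z[i], z[i + 1]))
    return np.array(zeros)

# first zeros of j_0, j_1, j_2: approximately 3.14, 4.49, 5.76
print([bessel_zeros(l, 10.0)[0] for l in range(3)])
\end{verbatim}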
As we found out in Sec.~\ref{sec:sph_sym_pot}, each of these
levels has a $2l+1$ degeneracy corresponding to the angular
part of the solution.
Now that we determined the energy levels and their degeneracies,
the cumulative state number
function $\mathcal{N}(\varepsilon)$,
Sec.~\ref{sec:analytic_cumu_dos}, can be easily calculated.
The steps can be summarized as follows (a code sketch implementing them is given after the list):
\begin{enumerate}
\item Choose a maximum value of the energy $\varepsilon_m$,
or equivalently, a
maximum value of $k$, $k_m=\sqrt{2M\varepsilon_m}/\hbar$.
\item Choose a number of bins, $n_{\rm bin}$. Each bin will correspond
to an energy interval of width $\hbar^2k_m^2/(2M n_{\rm bin})$,
centered at $\varepsilon_{\rm bin}$.
\item Find all the $z_{nl} \leqslant k_m a$. For each zero, add its $2l+1$-fold degeneracy to the corresponding bin.
\item For each of the bins, add the value of all the preceding bins
to it. This guarantees that we are counting the total number of states with
energy $\varepsilon\leqslant \varepsilon_{\rm bin}$,
as required by the definition
of $\mathcal{N}(\varepsilon)$.
\end{enumerate}
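The following Python sketch is one possible implementation of these steps (it is not the code used to produce Fig.~\ref{fig:sphere}). It relies on a helper such as the \texttt{bessel\_zeros} function sketched above, and it measures energies in units of $\hbar^2/(2Ma^2)$, in which the dimensionless energy of each level is simply $z_{nl}^2$:
\begin{verbatim}
import numpy as np

def cumulative_states(k_m_a, n_bin, zeros_of_jl):
    """Cumulative state number for the spherical box.

    k_m_a      : dimensionless cutoff k_m * a
    n_bin      : number of energy bins
    zeros_of_jl: function (l, z_max) -> zeros of j_l up to z_max
    Energies are in units of hbar^2/(2 M a^2), i.e. eps_{nl} = z_{nl}^2.
    """
    eps_max = k_m_a ** 2
    counts = np.zeros(n_bin)
    l = 0
    while True:
        z = zeros_of_jl(l, k_m_a)          # step 3: all z_{nl} <= k_m * a
        if len(z) == 0:                    # no zeros below the cutoff: stop
            break
        idx = np.minimum((z ** 2 / eps_max * n_bin).astype(int), n_bin - 1)
        for i in idx:
            counts[i] += 2 * l + 1         # add the 2l+1 degeneracy to the bin
        l += 1
    eps_bin = (np.arange(n_bin) + 0.5) * eps_max / n_bin
    return eps_bin, np.cumsum(counts)      # step 4: cumulative sum over bins
\end{verbatim}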
We used this procedure to calculate the
cumulative state number and density of states of a spherical
box, Fig.~\ref{fig:sphere}, which we compared with the
predictions of the semi-classical approximation,
Eqs.~(\ref{eq:cumu_semi_classical}) and (\ref{eq:DOS_semiclassical}).
Two main features are illustrated in this plot.
First, the cumulative state number obtained from our quantum mechanical
calculation lies slightly below the semi-classical result,
which means that thermodynamic quantities
differ in the two schemes, as we will see in Sec.~\ref{sec:temp_3D}.
Second, the numerically calculated cumulative state number is smoother than the
corresponding density of states, as discussed in Sec.~\ref{sec:nspace}.
\begin{figure}[!htb]
\centering
\includegraphics[angle=-90,width=\columnwidth]{sphere.pdf}
\caption{
(Color online)
Cumulative state number and density of states of a spherical
box as a function of the energy.
The points correspond to our numerical calculations,
(red) squares denote the cumulative state number $\mathcal{N}(\varepsilon)$,
while (green) circles represent the density of states $g(\varepsilon)$.
The curves are given by the semi-classical approximation,
the solid (blue) curve corresponds to Eq.~(\ref{eq:cumu_semi_classical}),
$\mathcal{N}_{\rm SC}(\varepsilon)$, and the dashed (cyan) curve to
Eq.~(\ref{eq:DOS_semiclassical}), $g_{\rm SC}(\varepsilon)$.
The energies are expressed in terms of the
energy unit $\varepsilon_{\rm sp}=\hbar^2/(2Ma^2)$.
Notice that the $\mathcal{N}(\varepsilon)$ from our quantum calculation
is slightly lower than the expected result from the semi-classical
approximation.
Another feature this plot illustrates is that
numerical calculations of the cumulative state number
are smoother than the density of states.
}
\label{fig:sphere}
\end{figure}
The energy levels of the sphere, Eq.~(\ref{eq:levels_sphere}),
can be written as $\varepsilon_{nl}=\varepsilon_{\rm sp}\,z_{nl}^2$, with
$\varepsilon_{\rm sp}=\hbar^2/(2Ma^2)$. That is why we chose to
express energy-dependent quantities in
units of $\varepsilon_{\rm sp}$.
This has the advantage of making our results system-independent,
in the sense that the calculation is the same for different values
of the atomic mass $M$ and sphere radius $a$. Once
values of $M$ and $a$ are chosen, the energies are rescaled
by the corresponding value of $\varepsilon_{\rm sp}$.
Equation (\ref{eq:cumu_D}) gives us the cumulative state number
for a $D$-dimensional system.
In particular, for the 3D sphere we can rewrite the
equation as
\begin{equation}
\label{eq:cumu_sphere}
\mathcal{N}(\varepsilon)=C_{\rm sp}\, \varepsilon^\alpha,
\end{equation}
where
\begin{equation}
\label{eq:coef_sphere}
C_{\rm sp}=\frac{2}{9\pi \varepsilon_{\rm sp}^{3/2}} \quad\text{and}\quad \alpha=\frac{3}{2}.
\end{equation}
A close inspection of Fig.~\ref{fig:sphere} reveals that
the relative difference
between our numerical results and the semi-classical approximation
is of the order of 1\% for
$\varepsilon=10^4\varepsilon_{\rm sp}$.
If the energy cutoff is increased
beyond the range of the graph,
the relative difference drops to $\approx 0.1$\% at
$\varepsilon=1.5\times 10^5\,\varepsilon_{\rm sp}$, and
it continues to
decrease
as the cutoff grows.
This is in agreement with the findings of Sec.~\ref{sec:analytic_cumu_dos}:
for large energies the two expressions should coincide.
However, this difference impacts the behavior of the system for
small energies. In order to quantify this deviation, we took the logarithm
of Eq.~(\ref{eq:cumu_sphere}),
\begin{equation}
\label{eq:sphere_log_log}
\ln\mathcal{N}(\varepsilon)=\ln C_{\rm sp} +\alpha\ln\varepsilon.
\end{equation}
A plot of $\ln\mathcal{N}$ vs.\ $\ln\varepsilon$ is then simply a straight line, with angular coefficient (slope) $\alpha$
and linear coefficient (intercept) $\ln C_{\rm sp}$.
In Fig.~\ref{fig:ang_lin_weyl} we show the angular and linear coefficients
for the energy range $\varepsilon\leqslant 12000\, \varepsilon_{\rm sp}$.
Each point $\varepsilon_i$ corresponds
to a linear fit of our data, up to that energy, to
Eq.~(\ref{eq:sphere_log_log}).
We can see that increasing the energy cutoff yields coefficients
that are much closer to the expected high-energy limits.
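As an illustration of how these coefficients are extracted, assuming arrays \texttt{eps} and \texttt{N} that hold the numerically computed energies and cumulative state numbers (the array names are ours), a simple least-squares fit in log-log space gives the two coefficients for a chosen cutoff:
\begin{verbatim}
import numpy as np

def weyl_coefficients(eps, N, eps_cut):
    """Fit ln N = ln C_sp + alpha * ln eps using all data with eps <= eps_cut."""
    mask = (eps <= eps_cut) & (N > 0)
    alpha, lnC = np.polyfit(np.log(eps[mask]), np.log(N[mask]), 1)
    return alpha, lnC  # angular (slope) and linear (intercept) coefficients

# repeating the fit for a sequence of increasing cutoffs traces curves
# such as those shown in the figure below
\end{verbatim}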
\begin{figure}[!htb]
\centering
\includegraphics[angle=-90,width=\columnwidth]{ang_lin_weyl.pdf}
\caption{
(Color online) Angular coefficient $\alpha$, linear coefficient $\ln C_{\rm sp}$,
and volume over wavelength cubed $V/\lambda^3$, for a spherical
box as a function of the energy.
The dashed lines correspond to the classical (high energy) limit of
$\alpha=3/2$ and $\ln C_{\rm sp}=\ln(2/(9\pi\varepsilon_{\rm sp}^{3/2}))$.
The bottom panel illustrates Weyl's theorem: we fixed the volume $V$
and varied the wavelength $\lambda=2\pi/k$. Larger values of
$V/\lambda^3$
correspond to angular and linear coefficients that are closer to
the expected classical limit.
}
\label{fig:ang_lin_weyl}
\end{figure}
Another feature that we chose to illustrate in Fig.~\ref{fig:ang_lin_weyl}
is Weyl's theorem.
The bottom panel shows, for a fixed volume, the ratio
$V/\lambda^3$, which increases with the energy.
As the ratio increases, the angular and linear coefficients approach the high-energy limit given by
Eq.~(\ref{eq:coef_sphere}).
This is consistent with what we presented in Sec.~\ref{sec:weyl}:
as the energy of the particle increases, it becomes insensitive to
the shape of the container (here, a sphere), and its cumulative state number
approaches the expression we derived for a rigid box.
\section{Thick shell}
\label{sec:shell}
Let us consider the external potential
\begin{equation}
\label{eq:Vshell}
V(r)=
\begin{cases}
0\phantom{+\infty} \text{ if } a<r<b,\\
+\infty\phantom{0} \text{ otherwise}.
\end{cases}
\end{equation}
We refer to this potential as a thick shell because
a shell is a two-dimensional object, whereas
the potential of Eq.~(\ref{eq:Vshell}) traps
the particle in a spherically symmetric region
with
thickness $\delta=b-a$.
Equation~(\ref{eq:radial}) for the region $a<r<b$
is the same as Eq.~(\ref{eq:bessel_sphere}), which means
that linear combinations of the spherical Bessel functions
of the first and second kinds, Eqs.~(\ref{eq:besselj}) and
(\ref{eq:bessely}), are also solutions to this
equation.
However, the boundary conditions differ from the ones
employed in the spherical box of Sec.~\ref{sec:sphere}:
now $R_{nl}(r=a)=R_{nl}(r=b)=0$. This yields the system of linear
equations
\begin{eqnarray}
A j_l(ka)+B y_l(ka)&=&0,\nonumber\\
A j_l(kb)+B y_l(kb)&=&0,
\end{eqnarray}
where $A$ and $B$ are constants that need to be determined.
The non-trivial solution requires
\begin{equation}
\label{eq:shell}
j_l(ka)y_l(kb)-j_l(kb)y_l(ka)=0.
\end{equation}
Again, our problem reduces to finding the values of $k$
that satisfy the equation above. We employ numerical
methods to find them \cite{hamming12}.
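In practice, one can proceed as for the spherical box: scan a grid in $k$, detect sign changes of the left-hand side of Eq.~(\ref{eq:shell}), and refine each bracket with a root finder. A minimal Python sketch (again assuming \texttt{numpy} and \texttt{scipy}; the grid spacing is an illustrative choice) reads:
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn, spherical_yn
from scipy.optimize import brentq

def shell_wavenumbers(l, a, b, k_max, dk=1e-3):
    """Values of k <= k_max solving j_l(ka) y_l(kb) - j_l(kb) y_l(ka) = 0."""
    det = lambda k: (spherical_jn(l, k * a) * spherical_yn(l, k * b)
                     - spherical_jn(l, k * b) * spherical_yn(l, k * a))
    k = np.arange(dk, k_max, dk)
    f = det(k)
    roots = []
    for i in range(len(k) - 1):
        if f[i] * f[i + 1] < 0.0:          # a sign change brackets a root
            roots.append(brentq(det, k[i], k[i + 1]))
    return np.array(roots)                 # energy levels: hbar^2 k^2 / (2M)
\end{verbatim}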
Unlike the spherical box, where the only length scale of the
problem is the radius of the sphere, there are two length
scales present in the thick shell: the radii $a$ and $b$ or,
equivalently, the thickness $\delta$ and the mean radius
of the shell $R=(a+b)/2$.
This means that the approach we employed in the case of the sphere, of defining quantities in energy
units of $\varepsilon_{\rm sp}$, will not work here.
Hence, the parameters were chosen keeping in mind typical
values of the number density employed in trapped BECs \cite{dalfovo99},
which yields values of $a$ and $b$
in the range between 10 and 15 $\mu$m.
In Fig.~\ref{fig:cumu_box_shell} we plot the cumulative
state number for the spherical box and the thick shell.
For both sets of internal radii $a=$ 10 $\mu$m
and $a=$ 14 $\mu$m, with the external radius $b=$ 15 $\mu$m fixed,
our (quantum) numerical calculations yield slightly lower
values than the semi-classical approximation
of Eq.~(\ref{eq:cumu_semi_classical}).
Again it is possible to see the manifestation of Weyl's
theorem. The spherical box with radius
$a=(15^3-14^3)^{1/3}\mu$m $\approx 8.6$ $\mu$m
and the thick shell with $a=$ 14 $\mu$m and $b=$ 15 $\mu$m
have the same volume
but completely different shapes. Their cumulative state number functions show only a small deviation from each other, and this deviation grows as the trap volume decreases.
\begin{figure}[!htb]
\centering
\includegraphics[angle=-90,width=\columnwidth]{cumu_box_shell.pdf}
\caption{(Color online) Cumulative state number for the spherical box and thick shell
as a function of the energy.
The points correspond to our numerical calculations, and the curves
to the semi-classical approximation of Eq.~(\ref{eq:cumu_semi_classical}).
The open (red) circles correspond to the spherical box
with radius $a=(15^3-14^3)^{1/3}\mu$m $\approx 8.6$ $\mu$m, which
was chosen such that the sphere has the same volume as
the thick shell with $a=$ 14 $\mu$m and $b=$ 15 $\mu$m, open (green)
triangles.
We also plot the cumulative state number for a different internal
radius, $a=$ 10 $\mu$m, while keeping the external radius fixed at
$b=$ 15 $\mu$m, denoted by the solid (green) triangles, and the spherical box of the same volume, with radius $\approx 13.3$ $\mu$m, denoted by the solid (red) circles.
The semi-classical approximations for $a=$ 10 $\mu$m
and $a=$ 14 $\mu$m, solid and dashed (blue) curves
respectively, are slightly above the corresponding quantum
calculations.
}
\label{fig:cumu_box_shell}
\end{figure}
In order to quantify this difference, we proceeded analogously
to what we did in Sec.~\ref{sec:sphere}. The logarithm of the
state number function is given by
\begin{equation}
\label{eq:ln_shell}
\ln \mathcal{N}(\varepsilon)=\ln C_{\rm sh}+\alpha\ln \varepsilon ,
\end{equation}
where the high energy limit corresponds to $\alpha=3/2$
and $C_{\rm sh}=[2(b^3-a^3)/(9\pi)](2M/\hbar^2)^{3/2}$.
In Fig.~\ref{fig:fit_shell} we show the
linear fit of our data to Eq.~(\ref{eq:ln_shell}).
It is possible to see that larger values of the thickness yield
angular and linear coefficients that are closer to the high
energy limit, as expected.
We should note that the angular coefficients $\alpha$
are slightly lower than 3/2 for $\delta \gtrsim$ 8 $\mu$m.
This is explained by the fact that increasing the volume, or
the energy cutoff, makes the angular coefficient approach
3/2 from below, as was the case with the spherical box,
see Fig.~\ref{fig:ang_lin_weyl}. For the range
$\delta \lesssim$ 8 $\mu$m there is competition
between the energy cutoff, the change in volume, and also
the change in dimensionality, as $\delta/R \ll 1$.
We also verified Weyl's theorem by varying both the volume $V$
and the wavelength $\lambda$ and calculating the ratio
$V/\lambda^3$. For a fixed value of the thickness
(for example $\delta$= 1 $\mu$m) the larger the ratio,
the closer the angular and linear coefficients are to the expected
limits.
\begin{figure}[!htb]
\centering
\includegraphics[angle=-90,width=\columnwidth]{fit_shell.pdf}
\caption{(Color online) Angular coefficient $\alpha$,
linear coefficient $\ln C_{\rm sh}$,
and the ratio $V/\lambda^3$, for a thick
shell as a function of the thickness $\delta$.
The external radius was kept fixed at 15 $\mu$m, while the
internal radius $a$ was varied between 4 and 14 $\mu$m.
We plot the data points corresponding to our numerical calculations
for the cutoffs $k_m=$ 40, 50, and 60 $\mu$m$^{-1}$, (blue) triangles,
(green) circles, and (red) squares, respectively.
The dashed lines correspond to the classical (high energy) limit of
$\alpha=3/2$ and $\ln C_{\rm sh}=\ln\{[2(b^3-a^3)/(9\pi)](2M/\hbar^2)^{3/2}\}$.
The bottom panel illustrates Weyl's theorem:
for different values of $k$ we calculated the
ratio $V/\lambda^3$, with $\lambda=2\pi/k$.
We show the ratios for $k=$ 40, 50, and 60 $\mu$m$^{-1}$,
(blue) short-dashed,
(green) dashed, and (red) solid curve, respectively.
Larger values of $V/\lambda^3$
correspond to angular and linear coefficients that are closer to
the expected classical limit as illustrated, for example,
by the values of $\alpha$
for $\delta=$ 1 $\mu$m.
}
\label{fig:fit_shell}
\end{figure}
\section{Critical temperature}
\label{sec:temp}
\subsection{Three-dimensional systems}
\label{sec:temp_3D}
Finally, we have all the ingredients to calculate the critical
temperature for Bose-Einstein condensation in the spherical box
and thick shell traps. The semi-classical calculation corresponds to
Eq.~(\ref{eq:Tc}) with the pertinent volume. We assume $N=10^5$ particles,
which is consistent with cold gases in harmonic traps \cite{dalfovo99}.
We considered 3 atomic species which are commonly employed in cold
atoms experiments: $^{23}$Na, $^{87}$Rb, and $^{133}$Cs. We disregard the interaction between the atoms, i.e., we are assuming an ideal Bose gas.
Their atomic masses are available in Ref.~\cite{wang17}
in unified atomic mass units.
A useful reference for physical constants is the
``2014 CODATA (Committee on Data for Science and Technology) recommended
values'', which are widely adopted across science and technology \cite{mohr16}.
We used their values for the atomic mass unit [u\,c$^2$], $\hbar c$ [eV $\mu$m],
and $k_B$ [eV/K] to compute Eq.~(\ref{eq:Tc}).
We present our results for the semi-classical values
of $T_c$
in Fig.~\ref{fig:temp} as open symbols.
Equation~(\ref{eq:Tc}) shows that
$T_c$ is inversely proportional to the atomic mass $M$; hence,
for a given geometry, $^{23}$Na displays the highest critical temperature
and $^{133}$Cs the lowest.
We should also note that the spherical trap with
$a=(15^3-14^3)^{1/3}\mu$m $\approx 8.6$ $\mu$m and the thick shell
with $a=$ 14 $\mu$m and $b=$ 15 $\mu$m have the same volumes, thus
their critical temperatures are the same in the semi-classical scheme.
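To give a sense of the scale of these semi-classical values, the short Python sketch below evaluates the ideal Bose gas result $k_B T_c=(2\pi\hbar^2/M)\left[n/\zeta(3/2)\right]^{2/3}$ with $n=N/V$, which we assume is the content of Eq.~(\ref{eq:Tc}); the constants and atomic masses are rounded values quoted only for illustration (the precise values used in our calculations are taken from Refs.~\cite{wang17} and \cite{mohr16}):
\begin{verbatim}
import numpy as np
from scipy.special import zeta

hbar_c = 0.1973270   # eV micrometer (approximate)
kB     = 8.617e-5    # eV / K        (approximate)
u_c2   = 931.494e6   # eV, atomic mass unit times c^2 (approximate)

def Tc_box(mass_u, N, volume_um3):
    """Ideal-gas k_B Tc = (2 pi hbar^2 / M) (N / (V zeta(3/2)))^(2/3), in kelvin."""
    Mc2 = mass_u * u_c2                 # rest energy M c^2 in eV
    n = N / volume_um3                  # number density in micrometer^-3
    return 2 * np.pi * hbar_c ** 2 / (Mc2 * kB) * (n / zeta(1.5)) ** (2.0 / 3.0)

a = (15.0 ** 3 - 14.0 ** 3) ** (1.0 / 3.0)   # sphere matching the 14-15 um shell volume
V = 4.0 * np.pi * a ** 3 / 3.0
for species, mass in [("23Na", 22.99), ("87Rb", 86.91), ("133Cs", 132.91)]:
    print(species, Tc_box(mass, 1e5, V))
\end{verbatim}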
\begin{figure}[!htb]
\centering
\includegraphics[angle=-90,width=\columnwidth]{temp.pdf}
\caption{(Color online) Critical temperature for
Bose-Einstein condensation for different atomic species
in spherically symmetric traps.
Open symbols stand for the semi-classical approximation of
Eq.~(\ref{eq:Tc}), while solid symbols correspond to our numerical
calculations. We denote
$^{23}$Na, $^{87}$Rb, and $^{133}$Cs by
(red) squares, (green) circles, and (blue) triangles, respectively.
Note that the spherical trap with
$a=(15^3-14^3)^{1/3}\mu$m $\approx 8.6$ $\mu$m and the thick shell
with $a=$ 14 $\mu$m and $b=$ 15 $\mu$m contain the same volumes, thus
their critical temperatures are the same in the semi-classical
approximation. The same is true for the sphere with
$a=(15^3-10^3)^{1/3}\mu$m $\approx 13.3$ $\mu$m and the thick shell
with $a=$ 10 $\mu$m and $b=$ 15 $\mu$m.
}
\label{fig:temp}
\end{figure}
We also calculated the critical temperature using our numerical
calculations of the density of states $g(\varepsilon)$ and
Eq.~(\ref{eq:Ncont}). We show the results in Fig.~\ref{fig:temp}
using solid symbols. Although many of the results are within the error
bars (the computation of the density of
states introduces numerical errors), our quantum results are
consistently larger than the semi-classical ones, especially for the thinner shell.
This is in agreement with our findings in Secs.~\ref{sec:sphere} and
\ref{sec:shell}, where our cumulative state number functions are
smaller than the semi-classical approximation.
\subsection{From 3D to 2D}
\label{sec:3D_2D}
As the thickness $\delta$ of the shell approaches zero, we expect
the behavior of the system to transition from 3D to 2D.
Let us see what happens when the external radius
$b=a+\delta$ approaches the internal radius $a$, that is, when $\delta\to 0$.
We can perform a Taylor expansion of the spherical Bessel
functions, Eqs.~(\ref{eq:besselj}) and (\ref{eq:bessely}),
\begin{flalign}
&f_l(k(a+\delta))=f_l(ka)\nonumber\\
&+\frac{\delta}{2}
\left(
k f_{l-1}(ka)-\frac{f_l(ka)}{a+\delta}-k f_{l+1}(ka)
\right)
+ \mathcal{O}(\delta^2),
\end{flalign}
where $f_l$ can denote either $j_l$ or $y_l$, and we used the
property $d f_l(z)/dz=(1/2)\left(f_{l-1}(z)-f_l(z)/z-f_{l+1}(z)\right)$.
Substituting this into Eq.~(\ref{eq:shell}) yields
\begin{equation}
k\delta\left(j_l(ka)y_{l-1}(ka)-j_{l-1}(ka)y_l(ka)\right)=0.
\end{equation}
Another property of the spherical functions is \cite{abramowitz12}
\begin{equation}
j_l(z)y_{l-1}(z)-j_{l-1}(z)y_l(z)=\frac{1}{z^2}.
\end{equation}
Putting everything together we have,
\begin{equation}
\left(\frac{\delta}{a}\right)\left(\frac{1}{ka}\right)=0.
\end{equation}
This should not be surprising: as $\delta/a$ goes to zero we need
an infinite amount of energy, here represented by $ka\to\infty$, to excite the radial degree of freedom.
The proper way to determine the energy levels of a \textit{truly}
two-dimensional shell is to start from the 2D Schr\"odinger equation.
However, we already saw in Sec.~\ref{sec:sph_sym_pot} that the
spherical harmonics are the solutions for this case,
\begin{equation}
-\frac{\hbar^2}{2M}\nabla^2 Y_{lm}=\frac{\hbar^2}{2Ma^2}\,l(l+1)\,Y_{lm},
\end{equation}
from where we get the energy levels,
\begin{equation}
\varepsilon_{l}=\varepsilon_{\rm sp}l(l+1),
\end{equation}
with degeneracy $2l+1$, as argued in Sec.~\ref{sec:sph_sym_pot}.
The total number of bosons is given by Eq.~(\ref{eq:Ndiscrete}),
\begin{equation}
N=\sum_{l=0}^{+\infty} \frac{2l+1}{\exp[(\varepsilon_l-\mu)/(k_B T)]-1}.
\end{equation}
In the Bose-Einstein condensed phase we can set $\mu=0$ and
separate out the number of atoms in the lowest energy state, $N_0$,
\begin{equation}
N=N_0+\sum_{l=1}^{+\infty} \frac{2l+1}{\exp[\varepsilon_l/(k_B T)]-1}.
\end{equation}
The critical temperature is the temperature above which $N_0=0$.
Within a semi-classical approximation
\footnote{A. Tononi and L. Salasnich, private communication.},
we can take
$\sum_{l=1}^{+\infty}\to \int_1^{+\infty} dl$; the substitution $u=l(l+1)$, $du=(2l+1)\,dl$, then gives the integral in closed form, yielding
\begin{flalign}
&N=N_0+\frac{4\pi a^2 Mk_B T}{2\pi\hbar^2}\times
\nonumber\\
&
\left(\frac{\hbar^2}{Ma^2 k_B T} -\ln\left(
\exp\left[\frac{\hbar^2}{Ma^2 k_B T}\right]-1
\right)
\right).
\end{flalign}
In the low-temperature limit, the second term on the right hand side
vanishes and $N$ coincides with $N_0$. At $T_c$, $N_0$ must be zero,
hence we have the implicit equation for $T_c$:
\begin{equation}
\label{eq:temp_2D}
T_c=
\frac{\frac{2\pi\hbar^2}{Mk_B}\left(\frac{N}{A}\right)}{
\left(\frac{\hbar^2}{Ma^2 k_B T_c} -\ln\left(
\exp[\hbar^2/(Ma^2 k_B T_c)]-1
\right)
\right)
},
\end{equation}
where $A=4\pi a^2$ is the area of the shell.
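Equation~(\ref{eq:temp_2D}) defines $T_c$ only implicitly, but it can be solved with a standard bracketing root finder. The Python sketch below is a minimal illustration (the constants and the mass are approximate, and the bracket passed to the root finder is an illustrative choice):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

hbar_c = 0.1973270   # eV micrometer (approximate)
kB     = 8.617e-5    # eV / K        (approximate)
u_c2   = 931.494e6   # eV            (approximate)

def Tc_2d_shell(mass_u, N, a_um):
    """Solve the implicit 2D-shell equation for a shell of radius a (micrometers)."""
    Mc2 = mass_u * u_c2
    A = 4.0 * np.pi * a_um ** 2
    x = lambda T: hbar_c ** 2 / (Mc2 * a_um ** 2 * kB * T)   # hbar^2/(M a^2 kB T)
    rhs = lambda T: (2.0 * np.pi * hbar_c ** 2 / (Mc2 * kB) * (N / A)
                     / (x(T) - np.log(np.expm1(x(T)))))
    return brentq(lambda T: T - rhs(T), 1e-9, 1e-3)          # bracket in kelvin

# e.g. a 2D shell of radius 12.5 um, to be compared with the 10-15 um thick shell
print(Tc_2d_shell(86.91, 1e5, 12.5))
\end{verbatim}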
We used Eq.~(\ref{eq:temp_2D}) to compute the critical temperature
for 2D shells of radii compatible with the thick shells
we studied in Sec.~\ref{sec:temp_3D}. For example, the thick
shell with internal radius 10 $\mu$m
and external radius 15 $\mu$m was compared with a 2D shell
of radius 12.5 $\mu$m. We found that the critical temperature of the 2D
shells is 1.5 to 2 times larger than that of the thick
shells. This means that our thick shells are far from
being two-dimensional systems.
It is worth mentioning that the semi-classical approximation for the
two-dimensional shell, the 2D equivalent
of Eq.~(\ref{eq:DOS_semiclassical}), does not give a finite critical temperature for
Bose-Einstein condensation, with $T_c$ being zero in the limit of a plane geometry. It is the curvature of the spherical shell
that allows a finite critical temperature.
\section{Summary}
\label{sec:summary}
One of the main goals of this work was to compare and contrast the
semi-classical approximation for the density of states, and cumulative
state number, with quantum mechanical calculations.
We found differences in the low-energy regime, the one most relevant
for cold atomic gases, which impact the thermodynamic properties
of these systems.
We also verified the manifestation of Weyl's theorem by comparing
the same geometry with different energy regimes, or the
spherical box and thick shell with the same volume.
The critical temperature range we obtained, see
Fig.~\ref{fig:temp},
is compatible with current cold atom experiments.
Indeed, systems with thick shell trapping potentials, usually
called bubble traps, are being investigated theoretically \cite{padavic17}
and experimentally \cite{elliott18,becker18}.
In Sec.~\ref{sec:3D_2D} we discussed the effects
of reducing the dimensionality of the system of interest from 3D to 2D,
which is what happens when the thickness of the shell goes to zero.
The change of dimensionality is an active topic of research in
cold atoms \cite{goerlitz01,bloch08}.
We consider the calculations presented in this paper good
introductory examples for
numerical computations in statistical physics.
Understandably, undergraduate physics courses tend to focus on analytically solvable problems. However, it is of paramount importance that
students learn to perform numerical calculations, since analytical
solutions are very rare in active research areas.
This manuscript can also be used as a starting point to study
trapping geometries with other symmetries.
For example, cylindrical geometries are useful in the study
of vortex lines in cold gases \cite{vitiello96,madeira16,madeira19}.
In two dimensions, disks can be used to investigate
point-like vortices
\cite{ortiz95,giorgini96,madeira17}.
\begin{acknowledgments}
We thank A. Tononi and L. Salasnich
for sharing their findings concerning Bose-Einstein
condensation on the surface
of a sphere.
This work was supported by
the São Paulo Research Foundation (FAPESP)
under the grant 2018/09191-7 and the grant 2013/07276-1.
We also thank Centro de Pesquisa em Ótica e Fotônica (CePOF) and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES/PROEX) for
their
financial support.
\end{acknowledgments}
| {
"timestamp": "2019-03-20T01:21:31",
"yymm": "1903",
"arxiv_id": "1903.07995",
"language": "en",
"url": "https://arxiv.org/abs/1903.07995",
"abstract": "We present a pedagogical introduction to Bose-Einstein condensation in traps with spherical symmetry, namely the spherical box and the thick shell, sometimes called bubble trap. In order to obtain the critical temperature for Bose-Einstein condensation, we describe how to calculate the cumulative state number and density of states in these geometries, using numerical and analytical (semi-classical) approaches. The differences in the results of both methods are a manifestation of Weyl's theorem, i.e., they reveal how the geometry of the trap (boundary condition) affects the number of the eigenstates counted. Using the same calculation procedure, we analyzed the impact of going from three-dimensions to two-dimensions, as we move from a thick shell to a two-dimensional shell. The temperature range we obtained, for most commonly used atomic species and reasonable confinement volumes, is compatible with current cold atom experiments, which demonstrates that these trapping potentials may be employed in experiments.",
"subjects": "Quantum Gases (cond-mat.quant-gas)",
"title": "Bose-Einstein condensation in spherically symmetric traps",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9664104962847373,
"lm_q2_score": 0.7341195210831258,
"lm_q1q2_score": 0.7094608107022573
} |
https://arxiv.org/abs/2205.08609 | Bagged Polynomial Regression and Neural Networks | Series and polynomial regression are able to approximate the same function classes as neural networks. However, these methods are rarely used in practice, although they offer more interpretability than neural networks. In this paper, we show that a potential reason for this is the slow convergence rate of polynomial regression estimators and propose the use of bagged polynomial regression (BPR) as an attractive alternative to neural networks. Theoretically, we derive new finite sample and asymptotic $L^2$ convergence rates for series estimators. We show that the rates can be improved in smooth settings by splitting the feature space and generating polynomial features separately for each partition. Empirically, we show that our proposed estimator, the BPR, can perform as well as more complex models with more parameters. Our estimator also performs close to state-of-the-art prediction methods in the benchmark MNIST handwritten digit dataset. | \section{Introduction}
Deep learning models have become ubiquitous in the machine learning literature. A cornerstone of the success of deep neural networks is that they can over-fit the training data while maintaining excellent generalization error. This feat is possible by training very large (over-parametrized) models with methods that induce self-regularization. At the same time, an extensive literature on universal approximators (starting with \cite{cybenko1989approximation} and \cite{hornik1991approximation} among others) suggests that most neural network architectures can approximate the same function classes as polynomial regression models. In fact, polynomial regression approximations are used to study the generalization capacities of neural networks, for example, recently in \cite{emschwiller2020neural} in a student/teacher model. In light of this, a question arises: why is polynomial regression not used more often in practice for predictive tasks?
One of the main reasons is that the number of transformed features in polynomial regression models increases exponentially with the dimension of the feature space and the degree of the polynomial embedding. This large number of features leads to two important problems in practice: (1) computationally, the model becomes very expensive very fast, and (2) the finite sample rates for the $L^2$ estimation error may require a prohibitively large sample size (i.e., slow convergence). Together these two problems imply that even in big data settings with access to computational power, it may be unfeasible to use polynomial regression for predictive tasks. Besides explicit regularization (shrinkage through penalty terms), some of the frequently suggested solutions to this problem include (1) embedding/dimensionality reduction, for example, by using PCA or by extracting an auto-encoder embedding \cite{kramer1991nonlinear}; (2) constraining the feature map, for instance for image data by adding a convolution step in generating the features \cite{krizhevsky2012imagenet}, or (3) model ensembling, for example by bagging models \cite{breiman1996bagging}.
In this paper, we focus on the second solution by showing theoretically how constraining the feature map in series regression affects the finite sample rate of the learning problem and proposing the use of a more flexible estimator, the \textit{bagged} polynomial regression (BPR). Our theoretical contribution is characterizing a class of series regression models for which the $L^2$ finite sample rate for the learning problem can be optimized by building the polynomial features group-wise for a partition of the feature space rather than for all features. Intuitively, our theoretical results show how partitioning the feature space can optimally trade off the estimation and approximation errors to improve the rate of convergence and, therefore, the prediction error. These results are especially useful for smooth dense cases (not sparse) in which good models need to include high-order interactions to achieve a good fit.
Drawing from the theoretical insights, we propose the use of BPR as an alternative to neural networks that is computationally attractive and readily implementable in most machine learning software packages. By only building the polynomial embeddings for subsets of the feature space and averaging across multiple models, BPR can reduce the number of parameters needed to estimate models while maintaining low prediction and generalization errors. We analyze the performance of BPR in a standard prediction task to the MNIST hand-written digit database \cite{lecun1998mnist}. Our application shows that training BPR using only a fraction of the available features in each estimator performs comparably well to larger models. Furthermore, BPR performs better than standard polynomial regressions and achieves a test accuracy on MNIST close to convolutional neural networks and state-of-the-art methods. We believe that a fully tuned BPR model could perform like state-of-the-art in standard prediction tasks.
\noindent \textbf{Related Literature} On the theoretical side, this paper is mostly related to the series regression literature (see for example \cite{newey1997convergence}) and the non-parametric regression literature (see for example \cite{stone1982optimal}). Our paper builds on \cite{belloni2015} by using \cite{rudelson2007sampling} to get new asymptotic $L^2$ rates and then provides new finite sample rates for series regression models. Then, it uses these results to show that the rates can be optimized for partitioned series regression models. Theoretical treatments of similar partitioning models have been explored in the literature, for example \cite{andrews1990additive} for additive interactive regression models, \cite{cattaneo2013optimal} for partitioning estimators, or more recently \cite{khosravi2019non} on sub-sampling ensembling in high dimensions. Our results can complement this literature by providing new finite sample rates for a related but different class of models. This paper may also be of interest to the machine learning literature regarding universal approximators for neural networks and polynomial regression and complement the recent literature comparing the two models, for example, \cite{emschwiller2020neural} or \cite{cheng2018polynomial}. In particular, our results improve on the \cite{emschwiller2020neural} finding that polynomial regression can perform comparably well to neural networks.
\noindent \textbf{Notation} In what follows we use $E$ to denote the expectation operator and $\mathbb{E}_n$ to denote the empirical expectation operator, such that $\mathbb{E}_n[f(x)] = 1/n\sum_{i=1}^n f(x_i)$. Furthermore, unless stated otherwise, we denote the $l_2$ norm of a vector by $\| \cdot \|$, the operator norm of a matrix by $\| \cdot \|_{op}$, and let $a \lor b = \max\{a,b\}$. Finally, we write $ a \lesssim b$ to mean $a \leq Cb$ for some fixed constant $C>0$, $ a \lesssim_P b$ to mean $a = O_p(b)$, and $a\asymp b$ to mean $a$ is asymptotic to $b$. All proofs can be found in the supplementary material appendix.
\section{Theoretical results}
We derive asymptotic and finite sample rates for sequences of models indexed by sample size $n$ that satisfy the following sampling assumption.
\begin{assumption}[Sampling model]
\label{eq:assump_sample_model}
For each $n$, random vectors $\{(y_i, x_i')\}_{i=1}^n$ are $i.i.d$ and given by the series regression model
\begin{equation}
y_i = g(x_i) +\epsilon_i,\quad E[\epsilon_i|x_i]=0,\quad x_i\in\mathcal{X}\subset \mathbb{R}^d,\quad i=1,\dots, n,
\end{equation}
where $y_i\in \mathbb{R}$ is the response variable, $x_i$ are the basic features in some bounded set $\mathcal{X}$, $\epsilon_i$ is a noise term and $g: \mathcal{X} \to \mathbb{R}$ is the conditional expectation function belonging to an arbitrary function class $\mathcal{G}$.
\end{assumption}
Given the sampling process, we focus on the set of models in which $g$ is approximated by linear forms $p(x)' b$, where $p(x):\mathbb{R}^d \to \mathbb{R}^k$ is a tensor operator that generates polynomial features, with $k$ being the total number of features generated from the basic features $x$. For an i.i.d. sample $\{y_i, x_i'\}$, $i = 1,\dots, n$, we obtain the series coefficients by minimizing the statistical risk for the square loss function (i.e., the least squares problem):
\begin{equation}
\beta_g = \text{argmin}_{b\in \mathbb{R}^k} E[(g(x_i) - p(x_i)'b)^2],
\end{equation}
\noindent where $\beta_g$ is the population least squares coefficient for a given conditional mean function $g$. This setup follows \cite{belloni2015} and the standard framework in the series regression literature (see \cite{newey1997convergence} or \cite{andrews1990additive}). A key object in this literature is the approximation error $r_g(x)$ for features $x$ and target function $g$, defined as
\begin{equation}
r_g(x) = g(x) - p(x)'\beta_g.
\end{equation}
\noindent We can then re-write the sampling model as a linear regression model
\begin{equation}
y_i = p_i'\beta + u_i, \quad E[u_ix_i] = 0,\quad u_i = r_i + \epsilon_i,
\end{equation}
and define the standard least squares projection estimator:
\begin{equation}
\hat{\beta} = \text{argmin}_{b\in \mathbb{R}^k}\mathbb{E}_n [(y_i - p_i'b)^2] = \mathbb{E}_n[p_ip_i']^{-1}\mathbb{E}_n[p_iy_i].
\end{equation}
Given the regression model we can decompose the error in estimating the target function $g(x)$ in two components, an estimation error and an approximation error
\begin{equation}
\hat{g}(x) - g(x) = \underbrace{p(x)'(\hat{\beta} - \beta)}_{\text{estimation error}}\quad - \underbrace{r(x)}_{\text{approximation error}}.
\end{equation}
Intuitively, the richer the polynomial embedding, the smaller the approximation error will be; however, this may come at the cost of a larger estimation error as the number of model parameters to estimate increases. Our theoretical results will show how to trade off these two errors in models in which we can constrain the feature map $p(x)$ to generate polynomial features for partitions of the feature space rather than for all features. Observe that this trade-off and our finite sample results go beyond the standard bias-variance trade-off, which only considers the estimation error. Even if the model optimally trades off bias and variance to control the estimation error, it may still be beneficial to reduce or increase the richness of the feature map depending on the approximation error.
An important quantity to understand the role of the estimation and approximation errors when estimating the target function $g$ is
\begin{equation}
\xi_k = \sup_{x\in \mathcal{X}} \|p(x)\|.
\end{equation}
When we have multiple basic features, and $\mathcal{X}$ is multi-dimensional, building a series of polynomial degree $J$ for each basic feature and then constructing all interactions leads to $k = \sum_{b=1}^{d} {d\choose b} J^b = (J+1)^d -1$ total features\footnote{This number can be thought of as an upper bound on the number of polynomial features built in practice; for instance, with $d=2$ and $J=2$, so that $x = (x_1,x_2)$, it means we build \textit{all} interactions $\{x_1,x_1x_2,x_1x_2^2, x_1^2, x_1^2x_2, x_1^2x_2^2,x_2,x_2^2\}$. Our results, however, are easily amenable to other constructions of the polynomial embedding.}. Furthermore, when $\mathcal{X}$ is bounded (as required by assumption \ref{eq:assump_sample_model}), it can be shown that $\xi_k \lesssim k$, which explodes exponentially in $d$, the dimension of the feature space $\mathcal{X}$. This is the main problem with polynomial regression that we motivated in the introduction. As $J$ and $d$ grow, $k$ becomes very large. For example, for $J=2$ and $d=40$, not a very high-dimensional setting, $k$ is already of the order of $10^{19}$. On the one hand, this leads to computational intractability, as the OLS method requires inverting a matrix of size $k\times k$; on the other hand, given that the $L^2$ convergence rate depends on $k$ and requires $k\to \infty$, as we will show in Theorem \ref{theorem:l_2_rates}, the sample size necessary for convergence also has to grow exponentially.
We derive the $L^2$ convergence rates with respect to $\|\cdot\|_{F,2}$ when $x \sim F$ for a probability measure $F$ over $\mathcal{X}$, where
\begin{equation}
\|r_g\|_{F,2} \equiv \sqrt{\int_{x\in\mathcal{X}} r^2_g(x)dF(x)},\quad \|r_g\|_{F,\infty} \equiv \sup_{x\in\mathcal{X}}|r_g(x)|,
\end{equation}
\noindent characterize the approximation properties of the underlying class of functions under $L^2(\mathcal{X},F)$ and uniform distances for any function $g\in \mathcal{G}$.
Our main result, Theorem \ref{theorem:l_2_rates}, extends the $L^2$ convergence result in \cite{belloni2015} by deriving new finite sample rates as well as asymptotic rates using the results from \cite{rudelson2007sampling}. This result is valid for a large class of series regression models that satisfy assumption \ref{eq:assump_sample_model}. Therefore, it is of interest beyond the case of polynomial regression further discussed in this paper.
\begin{theorem}[$L^2$ rates]
\label{theorem:l_2_rates}
Suppose that the following assumptions hold:
\begin{enumerate}
\item For each $n$, random vectors $\{(y_i, x_i')\}_{i=1}^n$ are $i.i.d$ and given by the series regression model (1) with $\bar{\sigma}^2 \equiv \sup_{x\in \mathcal{X}}E[\epsilon_i^2 |x_i=x] < \infty$.
\item Uniformly over all $n$, eigenvalues of $Q\equiv E[p_ip_i']$ are bounded above and away from zero.
\item For each $n$ and $k$ there exists a finite constant $c_k$ such that for all $g\in \mathcal{G}$
$$
\|r_g\|_{F,2} \leq c_k.
$$
\end{enumerate}
Then, if $c_k \to 0$,
\begin{equation}
\| \hat{g} - g\|_{F,2} = O_p\left(\left(\sqrt{\frac{\log(n)}{n}}\xi_k + 1\right)\left(\sqrt{\frac{k}{n}} + c_k\right)\right),
\end{equation}
and if $\epsilon_i \sim \text{subG}(\sigma_i^2)$, for $t \in (0,1)$ such that $c_k \leq t/2$, and $\frac{c_k \xi_k}{\sqrt{n}}\lor\sqrt{k/n} < t/8$,
\begin{equation}
P(\| \hat{g} - g\|_{F,2} > t)
\lesssim \exp\left\{-\frac{t}{a^2}\right\} + \exp\left\{ -\frac{nt^2}{\xi_k c_k}\right\} + \exp\left\{ -\frac{n t^2}{\xi_k\bar{\sigma}^2}\right\},
\end{equation}
where $a = \sqrt{\frac{\log n}{n}} \xi_k$.
\end{theorem}
The main takeaway from the first part of Theorem \ref{theorem:l_2_rates} is that the error in estimating the target function $g$ by series regression is bounded by the sum of an estimation error term $\sqrt{\frac{k}{n}}$ and an approximation error term $c_k$. This bound highlights the dimensionality problem of polynomial regression: $\xi_k\lesssim k$ and $\sqrt{\frac{\log(n)}{n}}\xi_k \to 0$ together imply that $n$ has to grow at a rate of at least $k^2$. Since for a polynomial embedding $k = O(J^d)$, this quickly yields unfeasible sample sizes as $d$ increases. The second part of Theorem \ref{theorem:l_2_rates} provides finite sample rates valid for all $n$ and $k$ when the bound on the approximation error is small enough. While this finite sample rate is not of immediate practical interest, as it does require large $n$ to be useful (and to yield a rate smaller than one), it clarifies the roles of the approximation and estimation errors. This situation is similar to the approximation rates developed in \cite{farrell2021deep} and in \cite{yarotsky2017error} (and its 2018 follow-up) for neural networks with growing layer size, which also require large $n$ to be of practical relevance.
Theorem \ref{theorem:l_2_rates} can be used to characterize which models have the fastest rates for a given class of embeddings $p(x)$ and target function spaces $\mathcal{G}$. Since different modelling settings imply different bounds for the terms $\xi_k$ and $c_k$, the finite sample rate will depend on the modelling assumptions. In particular, in this paper we consider $\mathcal{G}$ to belong to $\Sigma_s(\mathcal{X})$, the class of functions of H\"older smoothness of order $s$, defined by
\begin{equation}
\Sigma_s(\mathcal{X}) = \{g: \mathcal{X} \to \mathbb{R}\quad | \quad \forall x, \tilde{x} \in \mathcal{X}\quad |g(x) - g(\tilde{x})| \lesssim (\sum_{j=1}^d(x_j - \tilde{x}_j)^2)^{s/2}\},
\end{equation}
when $s\in (0,1]$. The definition can be extended for $s>1$ by bounding the difference in derivatives (see the supplementary materials appendix).
This implies that if $\mathcal{G}$ is contained in a ball of finite radius in $\Sigma_s(\mathcal{X})$, for the polynomial series
\begin{equation}
c_k \lesssim k^{-s/d},
\end{equation}
which means the approximation error is bounded by a negative power of the number of terms in the polynomial series; see, for example, \cite{newey1997convergence} for references. Since $k \asymp (J+1)^d$ for the full polynomial embedding, in this case the approximation error is bounded by a function of order $J^{-s}$.
To solve the curse of dimensionality problem in series regression, we study \textit{partitioned} series regression models in which features are built group-wise.
\begin{definition}[Partitioned Polynomial Regression]
A series regression model in which features $(x_i^1, \dots, x_i^d)$ are divided in $B\in \{1,\dots, d\}$ equally sized groups and polynomial embeddings of order $J$ are generated independently for each group.
\end{definition}
A \textit{partitioned} series regression model with $B$ groups reduces the number of transformed features from $k = (J+1)^d-1$ to $k \leq B((J+1)^{\lceil d/B\rceil}-1)$. This change implies a trade-off: it decreases the estimation error by reducing the number of parameters to estimate, but it increases the approximation error. A rough bound on the approximation error for the \textit{partitioned} polynomial series regression model when $\mathcal{G}\subset \Sigma_s(\mathcal{X})$ is
\begin{equation}
c_k \lesssim B^{-s/d}(J+1)^{-\lfloor s/B\rfloor},
\end{equation}
which is increasing in $B$. This bound, while not applying directly to BPR, is informative for ensemble polynomial regression models. Indeed, as shown by \cite{breiman1996bagging}, bagged models with random sampling of features (like random forests) have lower generalization error than their unbagged counterparts. Hence, one can think of this bound as being useful for BPR, in which $M$ polynomial models are built by generating polynomial features for $\lceil d/M\rceil$ features sampled randomly without replacement, since then $M = B$. So, there is a \textit{trade-off} for BPR between the approximation error and the estimation error that is parameterized by the number of weak estimators and the number of sampled features per estimator. Corollary \ref{corollary:opt_rate_bpr} relates the group size $B$ to the $L^2$ rate of Theorem \ref{theorem:l_2_rates}.
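Before stating the corollary, it is useful to make the reduction in the number of transformed features concrete. The following short Python computation (a direct evaluation of the counting formulas above) compares the full and partitioned embeddings in the $d=40$, $J=2$ example of the introduction:
\begin{verbatim}
from math import ceil

def n_features(d, J, B=1):
    """Transformed-feature count: (J+1)^d - 1 when B = 1 (full polynomial
    regression) and the bound B*((J+1)^ceil(d/B) - 1) for B groups."""
    return B * ((J + 1) ** ceil(d / B) - 1)

print(n_features(40, 2, B=1))    # about 1.2e19 features
print(n_features(40, 2, B=10))   # 800 features
\end{verbatim}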
\begin{corollary}[Optimized rate for Partitioned Polynomial Regression]
\label{corollary:opt_rate_bpr}
Let $g\in \mathcal{G}$ where $\mathcal{G}$ is a bounded subset of $\Sigma_s(\mathcal{X})$. Under the assumptions of Theorem \ref{theorem:l_2_rates} when $k\asymp B(J)^{d/B}$ it follows that $c_k \lesssim B^{-s/d}( J)^{-\lfloor s/B\rfloor}$ and $\xi_k \lesssim B(J)^{d/B}$. Furthermore, for $t \in (0,1)$ such that $B^{-s/d}(J)^{-s/B} \leq t/2$, and large enough $n$, let $G$ be defined as follows
\begin{equation}
G(n,d,J,s,t, B) = \exp\left\{-\frac{nt}{\log(n)d^2(J/B)^{2d}}\right\} + \exp\left\{ -\frac{nt^2}{B^{(d-s)/d}(J/B)^{d-s}}\right\} + \exp\left\{ -\frac{n t^2}{d(J/B)^{d}}\right\},
\end{equation}
then,
\begin{equation}
P(\| \hat{g} - g\|_{F,2} > t)
\lesssim G(n,d,J,s,t, B),
\end{equation}
and given $n, d, J$, $s$ and $t$ there exists an optimized rate given by the function
\begin{equation}
G^*(n,d,J,s, t) = \text{argmin}_{B\in \mathbb{Z}^+_{\leq d}} G(n,d,J,s,t,B)
\end{equation}
that is achieved at
\begin{equation}
B^* = \sup\{ B\in\{1,\dots, d\}| B^{-s/d}(J)^{-s/B} \leq t/2\}
\end{equation}
when $d\geq s$.
\end{corollary}
Corollary \ref{corollary:opt_rate_bpr} states that when features are built at the rate of the leading term in the number of features for \textit{partitioned} polynomial regression or BPR, $k\asymp B(J)^{d/B}$, there exists an optimal finite sample rate that minimizes the rate found in Theorem \ref{theorem:l_2_rates}. Furthermore, for a specific setting (fixing $n, d, J, s$ and $t$), the optimal group size $B^*$ is the largest possible $B$ that keeps the approximation error bound $c_k$ small enough with respect to $t$. The optimal $B^*$ is increasing in the degree $J$ and the smoothness parameter $s$, and slowly decreasing in the feature space dimension $d$. Intuitively, this result means that in settings in which approximating the target function is easier (low dimensions, smoother spaces, or the ability to build polynomials of high degree) we are better off splitting the feature space into more groups, as the estimation error will be more important than the approximation error.
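The optimal group size of Corollary \ref{corollary:opt_rate_bpr} is also easy to evaluate numerically. The Python sketch below implements the defining condition $B^{-s/d}J^{-s/B}\leq t/2$ directly; the parameter values in the example call are illustrative only:
\begin{verbatim}
def B_star(d, J, s, t):
    """Largest B in {1,...,d} with B**(-s/d) * J**(-s/B) <= t/2, as in the
    corollary above; returns 1 if no B satisfies the constraint."""
    feasible = [B for B in range(1, d + 1)
                if B ** (-s / d) * J ** (-s / B) <= t / 2]
    return max(feasible) if feasible else 1

# illustrative parameter values, in the spirit of the figure below
print([(J, B_star(40, J, s=5, t=0.05)) for J in (3, 5, 10, 15)])
\end{verbatim}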
The implications of Corollary \ref{corollary:opt_rate_bpr} are better seen in Figure \ref{figure:finite_rate_partition_choice}. Panels (a) and (b) show that for smooth settings (high $s$), or when we have series of high polynomial degree (high $J$), it is optimal to partition the features in groups. In fact, in some very smooth cases ($J=15$) it may be optimal to have groups of just 2 features (i.e., $B^* = d/2$). This relationship holds regardless of the tolerance parameter $t$, as shown by the confidence bands in the figure. Panel (c) shows that not partitioning the features in groups can lead to very slow rates as $d$ increases. Indeed, when $B=1$ the rate rapidly increases with $d$, whereas when $B=10$ the rate is controlled even in large dimensions. This highlights the problem with standard polynomial regression and the potential benefits of using \textit{partitioned} regression or BPR in high dimensions. Finally, panel (d) shows that $B$ also has a big impact on the sample size required for the rate to converge to zero. In a smooth setting with $J=3$, only when $B$ is high do we have convergence at $n=10^8$. This again speaks to the slow rate of standard polynomial regression and to how \textit{partitioned} and \textit{bagged} models may be able to offer a faster alternative.
\newpage
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=1.1\linewidth]{max_b_Js.png}
\caption*{(a) $B^*$ by $J$ and $d$.}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=1.1\linewidth]{max_B_ss.png}
\caption*{(b) $B^*$ by $s$ and $J$.}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=1.1\linewidth]{rate_J3_n10e6_s5_dlt05.png}
\caption*{(c) Rate by $B$ and $d$.}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=1.1\linewidth]{rate_J3_s5_dlt05_overd.png}
\caption*{(d) Rate by $B$ and $n$.}
\end{subfigure}
\caption{Finite sample rates and the choice of partition number $B$.}
\begin{tablenotes}
\small\item\textbf{Notes}: Panel (a) plots the upper bound on $B^*$ for different $J$ and $d$ values when $s=5$. Panel (b) shows the upper bound on $B^*$ for different $s$ and $J$ values when $d=40$. In both panels the solid lines and the bands are the means and confidence bands over values for $t \in [0.001, 0.5]$. Panel (c) plots the finite sample rate, normalized to be bounded by 1, for different $B$ and $d$ values when $J=3$, $n=10^7$, $s=5$ and $t=0.05$. Panel (d) plots the same rate as panel (c) over different values of $n$, with confidence bands for values over $d \in [1,50]$. Discrete jumps in the curves are due to $d/B$ being coarsened to have integer values.
\end{tablenotes}
\label{figure:finite_rate_partition_choice}
\end{figure}
\section{Bagged Polynomial Regression}
Motivated by the theoretical insights we propose a \textit{bagged} polynomial regression method. The aim of the method is to address the computational problem and slow rates when we have to estimate a large number of parameters while maintaining interpretability. We do so by relying on the feature splitting idea of \textit{partitioned} series regression and through explicit regularization by penalization and model ensembling. This method draws from the drop-out idea for neural networks of only using some neurons in each training iteration \cite{srivastava2014dropout} and from the model ensembling and bagging literature \cite{breiman2001random}.
The BPR averages $M$ regression models trained on polynomial embeddings of degree $J$ of a random subset of basic features of size $F$. This setup allows us to choose $F$ and $k$ to control model complexity and make the problem computationally feasible even when we build polynomials of high degree. More precisely, we can write the output of the BPR model as
\begin{equation}
Y = \frac{1}{M}\sum_{m=1}^M \frac{1}{|N_m|} \sum_{i\in N_m}\langle W^m, \Psi^J(X_i^{F_m})\rangle,
\end{equation}
\noindent where $N_m$ is the sample of observations of size $|N_m|$ used to train model $m$, $W^m$ the weights for model $m$, and $\Psi^J$ is a function that generates polynomial features of degree $J$ from a random set of features denoted by $X_i^{F_m}$ of size $F$. The polynomial regression estimator for a model $m$ is defined by the $\hat{W}^m$ weights computed by solving the least squares problem for the restricted sample $N_m$ of observations and the polynomial features generated by $\Psi^J(X_i^{F_m})$. For simplicity, we assume that $S = |N_m|$ for all $m$, so that all models are trained on random samples of the same size. The following algorithm details the procedure:
\begin{algorithm}[H]
\label{algorithm:BPR}
\SetAlgoLined
\KwResult{$\hat{Y}$.}
\kwInput{$Y \in \mathbb{R}^n, X\in \mathbb{R}^{n\times d}$, $J$ degree of polynomial, $F$ number of features used, $M$ number of models trained, $S$ sample size used for training and $\lambda$ regularization parameter.}
\For{each $m \in \{1, \dots, M\}$}{
$X^{F_m}$ $\leftarrow$ random sample of $F$ features from $X$\;
$N_m$ $\leftarrow$ random sample of $S$ observations indices\;
generate polynomial features $\Psi^J(X_i^{F_m})$\;
get $\hat{W}^m$ by minimizing
$$
\sum_{i\in N_m} (Y_i - \langle W^m, \Psi^J(X_i^{F_m})\rangle)^2 + \lambda \|W^m\|_2^2
$$
via stochastic gradient descent\;
$\hat{Y}^m_i \leftarrow \langle \hat{W}^m, \Psi^J(X_i^{F_m})\rangle$ for each observation $i$
}
$\hat{Y} \leftarrow \frac{1}{M} \sum_{m=1}^M \hat{Y}^m$\;
\caption{\textit{Bagged} Polynomial Regression}
\end{algorithm}
Note that we also include an $l_2$ regularization term, with strength $\lambda$, to avoid collinearity problems and improve the out-of-sample fit; $\lambda$ can be hyper-tuned, along with the other parameters, using cross-validation.
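A compact version of Algorithm 1 can be written with standard scientific Python tools. The sketch below is only an illustration of the procedure, not the code used for our experiments: it fits each sub-model with closed-form ridge regression rather than stochastic gradient descent, and the class and variable names are ours.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures

class BaggedPolynomialRegression:
    def __init__(self, J=2, F=40, M=100, S=None, lam=1.0, seed=0):
        self.J, self.F, self.M, self.S, self.lam = J, F, M, S, lam
        self.rng = np.random.default_rng(seed)
        self.models = []

    def fit(self, X, y):
        n, d = X.shape
        S = self.S or n
        for _ in range(self.M):
            feats = self.rng.choice(d, size=self.F, replace=False)  # random feature subset
            rows = self.rng.choice(n, size=S, replace=False)        # random training subsample
            # polynomial embedding of degree J (Ridge adds the intercept)
            poly = PolynomialFeatures(degree=self.J, include_bias=False)
            Z = poly.fit_transform(X[np.ix_(rows, feats)])
            model = Ridge(alpha=self.lam).fit(Z, y[rows])           # l2-penalized least squares
            self.models.append((feats, poly, model))
        return self

    def predict(self, X):
        preds = [m.predict(p.transform(X[:, f])) for f, p, m in self.models]
        return np.mean(preds, axis=0)                               # average the M sub-models
\end{verbatim}
For classification, one would replace the ridge step with an $l_2$-penalized logistic regression and aggregate the sub-models by majority vote, as discussed below.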
\noindent \textbf{Performance}. By hyper-tuning the parameters $J$, $F$, $M$, $S$ and $\lambda$, BPR chooses how to partition the feature space when building the polynomial features, navigating the approximation vs. estimation error trade-off. Choosing the hyper-parameters through cross-validation can be thought of as estimating the optimal $B^*$ of Corollary \ref{corollary:opt_rate_bpr} for the data setting at hand. Furthermore, the additional $l_2$ penalty and the model ensembling help keep the generalization error low.
This procedure can be used for both classification and regression problems. In the case of classification, we use logistic regression and minimize a logistic loss in step 5 of Algorithm 1, and instead of taking the average of the $M$ models we take the majority vote (median). In this case, bagging can be particularly useful: the activation function is non-linear, so the models we train will have low bias and high variance. As in random forests, by taking the median/average of the $M$ models we can control the variance of the overall model and achieve a lower generalization error.
\noindent \textbf{Interpretability}. The BPR estimator is a good alternative to neural networks both in terms of performance and interpretability. Focusing on the case of classification, our estimator can be thought of as a weighted sum of logistic regression coefficients on our polynomial regressors. We weight all $M$ of our regressions equally, so if a regressor $X_1$ is included in a specific regression then its logistic coefficient $\beta_1$ receives weight $1/M$. Using logistic regression enables these $\beta_1$ parameters to have log-odds interpretations. Furthermore, using logistic regression avoids the prediction problems of linear probability models, which can predict values outside the support of the outcome. These features make understanding how our variables impact the prediction of the outcome much more transparent than in the neural network case.
\section{Application}
In this section, we describe our computational results when the BPR is applied to the MNIST \href{http://yann.lecun.com/exdb/mnist/}{dataset} \cite{lecun1998mnist}. We selected this dataset as it is a standard benchmark for training and testing deep learning methods. Using MNIST enables us to compare our method to state-of-the-art prediction methods easily. We approach the prediction task as a classification problem with ten classes. The data has 60,000 observations and 784 features, where each feature is one pixel of the handwritten image and the labels are the digits 0--9. Given the high dimension of the feature space ($d=784$), standard polynomial regression may be ill-suited for this task, so we consider this example a suitable setting to compare BPR to other models.
\textbf{Main model}. Our primary model for predicting the ten digits is based on combining ten different BPR models, one for each digit we are trying to predict, similarly to \cite{emschwiller2020neural}. For each one of these ten models, we train a BPR model to predict whether the observed image is that digit or not. Each digit-specific model is trained following the BPR algorithm described in the last section. In practice, the number of bags we use is in the range 20--500. However, the main empirical results we report in this paper are for models that use a subset of features of size between 20 and 80 and polynomial embeddings of degree 2. When we create the polynomial embedding for a given subset of features, we include all of the base terms, all of the interactions between these terms, and their second-order polynomials. Note that this yields a number of transformed features smaller than the one used in the theoretical derivations, where \textit{all} interactions were built. We choose not to include higher-order interactions to reduce the computational burden of training the model.
\textbf{Results}. We start by showing how the choice of the number of bags and the number of features per sub-model affects performance for a one-digit prediction task. Figure \ref{figure:test_accuracy} shows that models trained on \textit{only} 40 out of the 784 features already achieve near-perfect accuracy, while increasing the number of bags improves the convergence path. This can also be seen in Figure \ref{figure:test_accuracy}, as larger ensemble models (dashed lines) only require 30 features per model to reach near-perfect accuracy. This empirical result highlights the theoretical intuition that it may be optimal to generate polynomial features only for subgroups to improve the convergence rate.
Our second empirical finding is that BPR can perform close to the state-of-the-art for the MNIST task while using significantly fewer parameters. Table \ref{table:mnist_10_digit} shows the out-of-sample accuracy for a BPR trained with 100 bags, 80 features per model, and degree-2 polynomial transformations.
\begin{figure}[H]
\centering
\includegraphics[scale = .07]{plot.png}
\caption{Test Accuracy By Number of Estimators.}
\label{figure:test_accuracy}
\begin{tablenotes}
\small\item\textbf{Notes}: This figure shows test accuracy for predicting the digit 1 in the MNIST data. On the x-axis we have max features - the number of base regressors used in each weak estimator. On the y-axis we have the test accuracy which gives the percent of the digits correctly classified. The different dashed lines represent the number of models bagged in each case.
\end{tablenotes}
\end{figure}
\begin{table}[H]
\centering
\renewcommand\arraystretch{1.5}
\caption{MNIST 10 digit out-of-sample accuracy comparison between methods.}
\label{table:mnist_10_digit}
\begin{tabular}[t]{lcccc}
\toprule
Method & BPR & Polynomial regression & Convolutional Neural Net & State-of-the-Art \\
\midrule
Accuracy & 97.58\% & 95.94\% & 99.03\% & 99.79\% \\
\bottomrule
\end{tabular}
\end{table}%
\textbf{Comparison}. In Table \ref{table:mnist_10_digit} we compare our BPR method to the polynomial regression method of \cite{emschwiller2020neural} and the two neural net approaches cited in their paper. Our model achieves higher accuracy than \cite{emschwiller2020neural}, while estimating fewer parameters: \cite{emschwiller2020neural} have 18,740 parameters per sub-model and our BPR is trained on 3,321 per sub-model. This is striking, as we sample our features randomly for each sub-model, while \cite{emschwiller2020neural} use the less naive approach of including the baseline interaction terms of features that correspond to bordering pixels. More importantly, both our method and \cite{emschwiller2020neural} use fewer parameters than the state-of-the-art methods and convolutional neural networks, which typically contain on the order of hundreds of thousands if not millions of parameters per model. Finally, it is important to note that we did not fully hyper-tune our BPR. We expect that a fully tuned BPR, with careful hyper-tuning of the parameters $J$, $F$, $M$, $S$ and $\lambda$, would be able to perform closer to the state-of-the-art.
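For reference, the per-sub-model parameter count quoted above is consistent with the embedding described in the previous section: with $F=80$ base features, counting the base terms, all pairwise interactions, the squared terms, and (we assume) an intercept gives
$$
F + \binom{F}{2} + F + 1 = 80 + 3160 + 80 + 1 = 3321.
$$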
\section{Conclusion}
In this paper, we highlight why polynomial regression, while offering more interpretability than neural networks and being able to approximate the same function classes, is rarely used in practice. By deriving new finite sample and asymptotic $L^2$ rates for series regression estimators, we show that the convergence rate for polynomial regression can be very slow. However, the rate can be improved when the polynomial embeddings are generated group-wise for partitions of the feature space rather than for all features. The improvement is particularly salient when the function class we are trying to estimate is smooth. Motivated by these results, we propose the use of BPR instead of standard polynomial regression as a potential substitute for neural networks. BPR draws from the theoretical insight of building polynomial embeddings for subsets of the feature space and from ensembling models by averaging multiple estimators to improve the out-of-sample performance. Finally, we show that it can perform similarly to more complex models, using fewer parameters, in an application to the MNIST handwritten digit data set.
\noindent \textbf{Limitations and Future Work}. This paper provides a formal reason why polynomial regression is ill-suited for prediction tasks in high-dimensional settings. However, a limitation of the paper is that, while the main theorem (Theorem \ref{theorem:l_2_rates}) applies to a large class of series regression models, the result for polynomial regression (Corollary \ref{corollary:opt_rate_bpr}) focuses on specific function classes (H\"older smoothness of degree $s$) and a subset of models (\textit{partitioned} polynomial regression). Future work should expand the application of the main result to include ensembles of polynomial regression models and formally link our theory with our proposed estimator, the BPR. Furthermore, a more extensive benchmarking exercise should be carried out to compare BPR with existing state-of-the-art methods. Given that in this paper we did not fully tune BPR, we expect future results to validate that BPR can perform as well as neural networks.
\noindent \textbf{Societal and Ethical Impact}. BPR could be used in a wide variety of applications, including recommender systems, modeling in economics, and classification problems more generally. Our method can help researchers develop high-performance classification models that retain interpretability. The interpretability of BPR relative to neural networks could help researchers better understand the underlying models and the importance of certain features in their data. At the same time, more accurate classification methods carry the risk of helping to automate or disrupt various industries. For example, better recommender systems could lead to more market concentration by firms that gather large amounts of data, hurting consumer welfare. By proposing a method that is more interpretable and fully open source, we hope to improve competition and transparency.
\newpage
\printbibliography
\newpage
\centerline{\Large\bf Supplementary Materials}
\medskip
| {
"timestamp": "2022-05-19T02:01:54",
"yymm": "2205",
"arxiv_id": "2205.08609",
"language": "en",
"url": "https://arxiv.org/abs/2205.08609",
"abstract": "Series and polynomial regression are able to approximate the same function classes as neural networks. However, these methods are rarely used in practice, although they offer more interpretability than neural networks. In this paper, we show that a potential reason for this is the slow convergence rate of polynomial regression estimators and propose the use of bagged polynomial regression (BPR) as an attractive alternative to neural networks. Theoretically, we derive new finite sample and asymptotic $L^2$ convergence rates for series estimators. We show that the rates can be improved in smooth settings by splitting the feature space and generating polynomial features separately for each partition. Empirically, we show that our proposed estimator, the BPR, can perform as well as more complex models with more parameters. Our estimator also performs close to state-of-the-art prediction methods in the benchmark MNIST handwritten digit dataset.",
"subjects": "Machine Learning (stat.ML); Machine Learning (cs.LG); Methodology (stat.ME)",
"title": "Bagged Polynomial Regression and Neural Networks",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9664104933824754,
"lm_q2_score": 0.734119521083126,
"lm_q1q2_score": 0.7094608085716503
} |
https://arxiv.org/abs/1503.04499 | Cumulative Conditional Expectation Index | In this paper we study the cumulative conditional expectation function (CCEF) in the copula context. It is shown how to compute CCEF in terms of the cumulative copula function, this natural representation allows to deduce some useful properties, for instance with applications to convex combination of copulas. We introduce approximations of CCEF based on Bernstein polynomial copulas. We introduce estimators for CCEF, which were constructed through Bernstein polynomial estimators for copulas. The estimators are asymptotically normal and biased for CCEF. | \section{Introduction}
In this paper we explore $\mathbb{E}[V\vert U \leq u]$ as a function of $u \in (0,1)$; we call this quantity the {\it cumulative conditional expectation function}. One motivation for working with this quantity is its use for decision making in applied problems.
Our goal is to lay the foundations for the construction of approximations and estimators of the cumulative conditional expectation function. For the approximations we present the rate of convergence, and for the estimators the asymptotic distribution of the associated empirical process. \\
\section{Cumulative conditional expectation}\label{concept}
For some applied problems, it is helpful to know the mean behavior of one variable conditioned on another variable restricted to a region, and not just to a single observation. To capture this, we study the following measure.
\begin{definition}Let $(U,V)$ be a random vector with associated 2-copula $C.$ Then the cumulative conditional expectation function of $V$ on $U$ is given by $R_C(u)=\mathbb{E}[V \vert U \leq u], \,\,\, \forall u \in (0,1).$
\end{definition}
\begin{Remark}
\noindent The cases $u=1$ and $u=0$ are excluded because they reduce to already known quantities. Indeed, $\mathbb{E}[V \vert U \leq 1]=\mathbb{E}[V]=0.5$ is trivial, and $\mathbb{E}[V \vert U \leq 0]=\mathbb{E}[V \vert U=0]$ is the copula regression function at $0$, which has been studied in \cite{sungur2}.
\end{Remark}
\noindent We note that the previous concept was already explored in \cite{MFandVG-L2014}, but that work focused on some bounds of the measure $R_C$. The following theorem gives a representation for $R_C(u)$ that lets us simplify its computation.
\begin{theorem}\label{le:integral}
Let $(U,V)$ be a random vector with associated 2-copula $C.$ Then $R_C(u)=1- \frac{1}{u}\int_0^1C(u,v)dv,$ $\forall u \in (0,1).$
\end{theorem}
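For intuition, here is a short sketch of the argument behind this representation, using $\mathbb{E}[X]=\int_0^1 \mathbb{P}(X>v)\,dv$ for a random variable $X$ taking values in $[0,1]$ and the fact that $\mathbb{P}(U\le u, V\le v)=C(u,v)$:
\[
\mathbb{E}[V \vert U \leq u]
= \frac{1}{u}\,\mathbb{E}\bigl[V\,\mathbf{1}\{U \leq u\}\bigr]
= \frac{1}{u}\int_0^1 \mathbb{P}(V> v,\, U \leq u)\,dv
= \frac{1}{u}\int_0^1 \bigl(u - C(u,v)\bigr)\,dv
= 1- \frac{1}{u}\int_0^1 C(u,v)\,dv .
\]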
\noindent In order to simplify the notation, hereafter we denote the partial derivatives as done in \cite{Janssen2014}, this is $C^{(1)}(u,v)=\frac{\partial C(u,v)}{\partial u}$, $C^{(2)}(u,v)=\frac{\partial C(u,v)}{\partial v}$, $C^{(1,1)}(u,v)=\frac{\partial^2 C(u,v)}{\partial u^2}$ and $C^{(2,2)}(u,v)=\frac{\partial^2 C(u,v)}{\partial v^2}$. It can be seen that $\mathbb{E}[V\vert U \leq u ] $ is the average of the copula regression function $\mathbb{E}[V \vert U=u]$.
\begin{corollary}
Under the hypotheses of theorem \ref{le:integral}, $R_{C}(u)= \frac{1}{u} \int_0^u \mathbb{E}[V \vert U=w] dw,\,\,\,\, \forall u \in (0,1).$
\end{corollary}
\noindent{ \it {Proof.~}}
By Theorem \ref{le:integral} we have $R_C(u)=1- \frac{1}{u}\int_0^1C(u,v)dv $, then
$$
R_C(u)= \frac{1}{u}\int_0^u dw -\frac{1}{u}\int_{0}^1 \int_0^u C^{(1)} (w,v)dw dv \\
= \frac{1}{u}\int_0^u \Big(1- \int_0^1 C^{(1)}(w,v)dv\Big) dw.
$$
The last term is equal to $\frac{1}{u} \int_0^u \mathbb{E}[V \vert U =w] dw$ according to \cite{sungur2}.$\hfill \blacksquare$
By means of Theorem \ref{le:integral} we can compute the cumulative conditional expectation function for several cases, as it was explored in \cite{MFandVG-L2014}.
The following result shows that the cumulative conditional expectation of a convex mixture of co\-pu\-las is the convex mixture of the cumulative conditional expectations.
\begin{corollary}\label{cor:convex}Under the hypotheses of theorem \ref{le:integral}, let $C$ be the convex combination copula, $C(u,v)=\sum_{\gamma=1}^\Gamma p_{\gamma}C_{\gamma}(u,v)$ where $C_{\gamma}$ is a copula function, $0 \leq p_{\gamma} \leq 1$ for $1 \leq \gamma \leq \Gamma$ and $\sum_{\gamma=1}^{\Gamma} p_{\gamma}=1.$ Then, $R_C(u) = \sum_{\gamma=1}^{\Gamma} p_{\gamma} R_{C_{\gamma}}(u)$.
\end{corollary}
\noindent In the next result we study a specific class of copulas, the polynomial ones. Recall that a copula has polynomial cross sections in $u$ if it can be written as $C(u,v)=\sum_{i=1}^k \alpha_i(v) u^i$ for each $u \in [0,1]$, for suitable functions $\alpha_i$, $i=1,\ldots, k$. More details about these copulas can be found in \cite{nelsen} \S3.2. As an immediate consequence of Theorem \ref{le:integral} we have the following corollary.
\begin{corollary}
Under the hypotheses of theorem \ref{le:integral}, if $C(u,v)=\sum_{i=1}^k \alpha_i(v) u^i$, then $R_C(u)$ is a polynomial of degree $k-1$ given by $R_C(u) = 1-\sum_{i=0}^{k-1} u^i \int_0^1 \alpha_{i+1}(v)dv.$
\end{corollary}
\begin{example}
Quadratic and cubic cross sections copulas.
\begin{enumerate}
\item The Farlie-Gumbel-Morgenstern (FGM) family contains all the copulas with quadratic cross sections in both variables. $C(u,v)=uv +\theta uv(1-u)(1-v)$ with $\theta \in [-1,1],$ then $R_C(u)=\big(3+\theta (u-1)\big)/6$.
\item The Lin's iterated FGM family is defined by $C(u,v) = uv + \theta uv(1-u)(1-v) \big( 1+\varphi (1-u)(1-v) \big)$ with $\theta$ and $\varphi$ real numbers satisfying $\theta \in [-1,1]$ and $-1-\theta \leq \theta(1+\varphi) \leq \big(3-\theta+(9-6\theta-3\theta^2)^{1/2}\big)/2.$ $C$ has cubic cross sections in both variables. For $u \in [0,1],$ $C(u,v)=\sum_{i=1}^3 \alpha_i(v) u^i$ where
\begin{eqnarray*}
\alpha_1(v) &=& (1+\theta +\varphi \theta) v+ (-\theta -2\varphi \theta) v^2 +\varphi \theta v^3 \\
\alpha_2(v) &=& (-\theta -2\varphi \theta)v + (\theta +4\varphi \theta) v^2 - 2 \varphi \theta v^3 \\
\alpha_3(v) &=& \varphi \theta v -2 \varphi \theta v^2 +\varphi \theta v^3.
\end{eqnarray*}
Then, $R_C(u)=(\frac{1}{2} -\frac{\theta}{6} -\frac{\varphi \theta}{12}) + (\frac{\theta}{6}+\frac{\theta \varphi }{6})u - \frac{\varphi \theta}{12} u^2$.
\end{enumerate}
\end{example}
\section{Bernstein Copulas}\label{sec:polynomial}
\begin{definition}\label{eq:BC}
Given a 2-copula $C$ and a grid of points $(u_k,\ v_l) \in [0,1]^2$ with $k,l=1, \dots, m,$ the Bernstein copula approximation of $C$ of order $m$ is given by $$B_mC(u,v)=\sum_{k=1}^m\sum_{l=1}^m C\Big(\frac{k}{m}, \frac{l}{m}\Big)P_{k,m}(u)P_{l,m}(v),\,\,\forall (u,v) \in[0,1]^2,$$ where $P_{j,m}(x)=\binom{m}{j} x^j(1-x)^{m-j},\,\,x \in [0,1].$
\end{definition}
The function $B_mC$ approximates the copula $C$ and is a copula itself.
The next definition and the subsequent convergence results simplify the numerical computation of $R_C$, reducing it to polynomial integrals, when $C$ is a parametric copula described by a complex analytical expression.
\begin{definition}\label{eq:aproximaR}
Consider a 2-copula $C,$ and a grid of points $(u_k,\ v_l) \in [0,1]^2$ with $k,l=1, \dots, m.$ Set the Bernstein approximation of $R_C$ of order $m$ as
\begin{equation*}
R_{B_mC}(u)=1-\frac{1}{(m+1)u}\sum_{k=1}^m\sum_{l=1}^m C\Big(\frac{k}{m}, \frac{l}{m}\Big)P_{k,m}(u),\,\,\,\forall u \in (0,1),
\end{equation*}
where $P_{k,m}(x)=\binom{m}{k} x^k(1-x)^{m-k},\,\,x \in [0,1].$
\end{definition}
\begin{theorem}\label{teo:converg}
Under the hypotheses of theorem \ref{le:integral}, $R_{B_mC}(u)$ converges to $R_C(u)$ pointwise, when $m \to \infty .$
\end{theorem}
\noindent For appropriate copulas it is possible to give a rate of approximation for $R_C$; we also observe that this rate improves as $u$ increases.
\begin{theorem}\label{teo:unif_converg}
Let $C$ be a continuous copula whose first order partial derivatives are Lipschitz, and let $u_0 \in (0,1)$. For $u \in [u_0,1)$ it holds that $\big|R_{C}(u) - R_{B_mC}(u)\big| \leq \frac{7M}{12u_0m}$ for a constant $M$. Hence, $R_{B_mC}$ converges uniformly to $R_C$ on $[u_0,1)$ as $m \to \infty$.
\end{theorem}
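The approximation $R_{B_mC}$ and its convergence can also be checked numerically. The following is a minimal sketch, assuming NumPy and SciPy are available, for the FGM copula of the example above with illustrative values $\theta = 0.5$, $u = 0.3$ and increasing orders $m$:
\begin{verbatim}
import numpy as np
from scipy.stats import binom

def fgm(u, v, theta=0.5):
    # Farlie-Gumbel-Morgenstern copula C(u,v) = uv + theta*u*v*(1-u)*(1-v)
    return u * v + theta * u * v * (1 - u) * (1 - v)

def bernstein_R(C, u, m):
    # R_{B_m C}(u) = 1 - (1/((m+1)u)) * sum_{k,l} C(k/m, l/m) * P_{k,m}(u)
    k = np.arange(1, m + 1)
    C_grid = C(k[:, None] / m, k[None, :] / m)   # values C(k/m, l/m)
    P_k = binom.pmf(k, m, u)                     # P_{k,m}(u) = binom(m,k) u^k (1-u)^(m-k)
    return 1.0 - (P_k @ C_grid.sum(axis=1)) / ((m + 1) * u)

u, theta = 0.3, 0.5
exact = (3 + theta * (u - 1)) / 6                # closed-form R_C(u) for the FGM family
for m in (10, 50, 200):
    print(m, bernstein_R(fgm, u, m), exact)      # the approximation approaches the exact value
\end{verbatim}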
\section{Estimation}\label{sec:estimation}
Consider a bivariate random sample $\{ (U_j,V_j)\}_{j=1}^n$ of the vector $(U,V)$ with associated unknown 2-copula $C.$ Let $R_{U_j}$ be the rank of $U_j$ in $\left\{ U_1,\ldots, U_n\right\}$ and $R_{V_j}$ the rank of $V_j$ in $\left\{ V_1,\ldots, V_n\right\}$.
Denote the empirical copula as $$C_n(u,v)=\frac{1}{n} \sum_{j=1}^n I\Big( \frac{R_{U_j}}{n} \leq u\Big) I\Big(\frac{R_{V_j}}{n} \leq v\Big),\,\,\,\forall u, v \in [0,1].$$ By using the Bernstein estimator of the copula function $C$ proposed in \cite{sancetta} we obtain the estimator of the cumulative conditional expectation function.
\begin{definition}\label{eq:estimador}
Given a 2-copula $C,$ the estimator of $R_C(u)$ is
\begin{equation*}
\hat{R}_{B_mC_n}(u)=1-\frac{1}{(m+1)u}\sum_{k=0}^{m}\sum_{l=0}^{m}C_n\Big(\frac{k}{m},\frac{l}{m}\Big)P_{k,m}(u), \,\,\, \forall u \in (0,1),
\end{equation*}
where $C_n$ is the empirical copula.
\end{definition}
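Analogously, the estimator $\hat{R}_{B_mC_n}$ can be computed directly from a bivariate sample through the ranks. Below is a minimal sketch, again assuming NumPy and SciPy; the simulated sample, its dependence structure and the order $m$ are only illustrative:
\begin{verbatim}
import numpy as np
from scipy.stats import binom, rankdata

def R_hat(U, V, u, m=30):
    # estimator of R_C(u) via the empirical copula C_n and Bernstein polynomials
    n = len(U)
    rU, rV = rankdata(U) / n, rankdata(V) / n    # R_{U_j}/n and R_{V_j}/n
    def Cn(s, t):                                # empirical copula C_n(s, t)
        return np.mean((rU <= s) & (rV <= t))
    total = sum(binom.pmf(k, m, u) * sum(Cn(k / m, l / m) for l in range(m + 1))
                for k in range(m + 1))
    return 1.0 - total / ((m + 1) * u)

# toy example with positive dependence between the coordinates
rng = np.random.default_rng(1)
U = rng.uniform(size=1000)
V = 0.7 * U + 0.3 * rng.uniform(size=1000)
print(R_hat(U, V, u=0.5))
\end{verbatim}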
In order to explore the asymptotic behavior of the estimator $\hat{R}_{B_mC_n}(u)$ we make use of some known results about the Bernstein estimator. In \cite{Janssen2012} (Theorem 2 and Remark 3) it is proved that for a copula $C$ with bounded third order partial derivatives on $(0,1)^2,$ if $\sqrt{n}/m \rightarrow d$, $0 \leq d < \infty,$ when $n \to \infty ,$ then the process $\sqrt{n}\Big( B_mC_n(u,v) - C(u,v) \Big)\leadsto {\mathcal{G}}_{C}(u,v)$ in the space $l^{\infty}\big((0,1)^2\big)$ of bounded functions, where $\leadsto$ denotes weak convergence. In addition, the limiting process ${\mathcal{G}}_{C}(u,v)$ is a tight Gaussian process with mean function $db(u,v)$ where
\begin{equation}\label{funcionb}
b(u,v)=\frac{1}{2}\Big(u(1-u)C^{(1,1)}(u,v)+v(1-v) C^{(2,2)}(u,v)\Big)
\end{equation}
and covariance function given by $E[h(u,v)h(u^{\prime},v^{\prime}) ]$ for $0<u,\ u^{\prime},\ v,\ v^{\prime}<1$ with
\begin{equation}\label{funcionh}
h(u,v)= I(U \leq u, V \leq v)-C(u,v)- C^{(1)}(u,v)\big(I(U \leq u)-u\big) - C^{(2)}(u,v)\big(I(V \leq v)-v\big).
\end{equation}
We now consider the process $\frac{1}{u}\int_{0}^{1}{\mathcal{G}}_{C}(u,v)dv$, $u \in (0,1)$, which governs the asymptotic behavior of the proposed estimator of $R_C$.
\begin{theorem}\label{teo:asypmtotic}
Let $C$ be a 2-copula with bounded third order partial derivatives on $(0,1)^2$. If $n$ and $m$ are positive integers such that $\sqrt{n}/m \rightarrow d$ with $0 \leq d < \infty$ when $n, m \to \infty,$ then for $u \in (0,1)$,
$$
\sqrt{n}\Big( \hat{R}_{B_mC_n}(u)-R_{C}(u)\Big) \leadsto -\frac{1}{u}\int_{0}^{1}{\mathcal{G}}_{C}(u,v)dv.
$$
$-\frac{1}{u}\int_{0}^{1}{\mathcal{G}}_{C}(u,v)dv$ is a Gaussian process with mean function $d(\frac{1}{2}-R_C(u))+\frac{d(u-1)}{2}\int_{0}^{1}C^{(1,1)}(u,v)dv$ and
variance function
\begin{eqnarray*}
\hspace{-0.5cm}Var\Big[-\frac{1}{u}\int_{0}^{1}{\mathcal{G}}_{C}(u,v)dv\Big]&=&-4R_C^2(u)+\big(4-\frac{1}{u}\big)R_C(u)-4\big(2-\frac{1}{u}\big)R_C(u)\mathcal{H}_1(u,u)\\
&&\hspace{-1.5cm}+2\big(3-\frac{2}{u}\big)\mathcal{H}_1(u,u)-4\big(1-\frac{1}{u}\big)\mathcal{H}_1^2(u,u)-2+\frac{1}{u}+\frac{1}{u^2}\int_{0}^{1}\big(C^{(2)}(u,v)\big)^2vdv.
\end{eqnarray*}
where $2 \mathcal{H}_1(u,u)=\int_0^1C^{(1)}(u,v)dv.$
\end{theorem}
\begin{Remark}\label{remark:covar}
The covariance function of the process $-\frac{1}{u}\int_{0}^{1}{\mathcal{G}}_{C}(u,v)dv$ is
\begin{eqnarray*}
\hspace{-1cm}Cov\Big[-\frac{1}{u}\int_{0}^{1}{\mathcal{G}}_{C}(u,v)dv,-\frac{1}{u'}\int_{0}^{1}{\mathcal{G}}_{C}(u',v')dv'\Big]=\\
&&\hspace{-7cm}=\frac{1}{uu'}\int_{0}^{1}\int_{0}^{1}C( u\wedge u', v\wedge v')dv'dv+\big(1-R_C(u)\big)\big(R_C(u')-1\big)+\mathcal{H}_3(u,u') +\mathcal{H}_3(u',u)\\
&&\hspace{-7cm}+\big( \frac{1}{u\vee u'}-1\big)\int_0^1C^{(1)}(u',v)dv\int_0^1C^{(1)}(u,v)dv -\mathcal{H}_2(u,u')-\mathcal{H}_2(u',u)\\
&&\hspace{-7cm}+\frac{1}{uu'}\int_{0}^{1}\int_{0}^{1}C^{(2)}(u',v')C^{(2)}(u,v)(v\wedge v')dvdv'-R_C(u)R_C(u')+\mathcal{H}_1(u,u')+\mathcal{H}_1(u',u).
\end{eqnarray*}
with
\begin{eqnarray*}
\mathcal{H}_1(w_1,w_2) &=& \frac{1}{w_1w_2}\int_0^1C^{(1)}(w_1,v)dv\int_{0}^{1}C( w_1,v)C^{(2)}(w_2,v)dv\\
\mathcal{H}_2(w_1,w_2) &=& \frac{1}{w_1w_2}\int_{0}^{1}\int_{0}^{1}C^{(2)}(w_2,v)C(w_1,v\wedge v')dvdv'-\big(1-R_C(w_1)\big)R_C(w_2)\\
\mathcal{H}_3(w_1,w_2)&=&\Big(\frac{1}{w_1\vee w_2}\big(R_C(w_1\wedge w_2)-1\big)+1-2R_C( w_1)\Big)\int_0^1C^{(1)}(w_2,v)dv.
\end{eqnarray*}
for $w_1\vee w_2 = \max\{w_1,w_2\}$ and $w_1\wedge w_2 = \min\{w_1,w_2\}$.
\end{Remark}
| {
"timestamp": "2015-03-17T01:11:57",
"yymm": "1503",
"arxiv_id": "1503.04499",
"language": "en",
"url": "https://arxiv.org/abs/1503.04499",
"abstract": "In this paper we study the cumulative conditional expectation function (CCEF) in the copula context. It is shown how to compute CCEF in terms of the cumulative copula function, this natural representation allows to deduce some useful properties, for instance with applications to convex combination of copulas. We introduce approximations of CCEF based on Bernstein polynomial copulas. We introduce estimators for CCEF, which were constructed through Bernstein polynomial estimators for copulas. The estimators are asymptotically normal and biased for CCEF.",
"subjects": "Methodology (stat.ME); Applications (stat.AP)",
"title": "Cumulative Conditional Expectation Index",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9664104924150547,
"lm_q2_score": 0.734119521083126,
"lm_q1q2_score": 0.7094608078614479
} |
https://arxiv.org/abs/1408.5271 | An algorithmic framework for obtaining lower bounds for random Ramsey problems | In this paper we introduce a general framework for proving lower bounds for various Ramsey type problems within random settings. The main idea is to view the problem from an algorithmic perspective: we aim at providing an algorithm that finds the desired colouring with high probability. Our framework allows to reduce the probabilistic problem of whether the Ramsey property at hand holds for random (hyper)graphs with edge probability $p$ to a deterministic question of whether there exists a finite graph that forms an obstruction.In the second part of the paper we apply this framework to address and solve various open problems. In particular, we extend the result of Bohman, Frieze, Pikhurko and Smyth (2010) for bounded anti-Ramsey problems in random graphs to the case of $2$ colors and to hypergraph cliques. As a corollary, this proves a matching lower bound for the result of Friedgut, Rödl and Schacht (2010) and, independently, Conlon and Gowers (2014+) for the classical Ramsey problem for hypergraphs in the case of cliques. Finally, we provide matching lower bounds for a proper-colouring version of anti-Ramsey problems introduced by Kohayakawa, Konstadinidis and Mota~(2014) in the case of cliques and cycles. | \section{Applications}\label{sec:applications}
\subsection{Anti-Ramsey property -- proper coloring} \label{sec:ar_proper}
The key ingredient for the proof of Theorem \ref{thm:ar_proper} is the following lemma whose proof we defer to the next section.
\begin{lemma} \label{lemma:ar_proper_cases}
Let $F$ be a graph isomorphic to either a cycle on at least $7$ vertices or a complete graph on at least $19$ vertices. Then for any graph $G$ such that $m(G) \le m_2(F)$ it holds that $G \xslashedrightarrowa[{\text{\raisebox{7.5pt}[\ht\strutbox]{{\tiny $prp$}}}}]{\text{{\tiny $a$-$ram$}}} F$.
\end{lemma}
\begin{proof}[Proof of Theorem \ref{thm:ar_proper}]
Let $F$ be some graph as stated in the theorem and $c$ a constant given by Corollary~\ref{lemma:main_density} when applied to $F$. Let $p \le cn^{-1/m_2(F)}$ and $G \sim G(n, p)$. We use Algorithm \ref{algo:ar_proper} to find a proper coloring of $G$ without a rainbow $F$-copy.
{\LinesNumbered
\begin{algorithm}
\DontPrintSemicolon
$\hat G \leftarrow G$\;
$\textrm{col} \leftarrow 0$\;
\While{$\exists e_1, e_2 \in E(\hat G) \; : \; e_1 \equiv_F e_2$ in $\hat G$ and $e_1 \cap e_2 = \emptyset$}{
color $e_1, e_2$ with $\textrm{col}$\;
$\hat G \leftarrow \hat G \setminus \{e_1, e_2\}$ and $\textrm{col} \leftarrow \textrm{col} + 1$ \label{line:proper_f_equiv}\;
}
\While{$\exists e \in E(\hat G) \; : \; e$ does not belong to an $F$-copy}{
color $e$ with $\textrm{col}$\;
$\hat G \leftarrow \hat G \setminus \{e\}$ and $\textrm{col} \leftarrow \textrm{col} + 1$
}
Remove isolated vertices in $\hat G$\; \label{line:proper_after_removing}
$\{B_1, \ldots, B_k\} \leftarrow $ $F$-blocks obtained by applying Lemma \ref{lemma:block_split} on $\hat G$ \;
Color (properly) each $B_j$ without a rainbow $F$-copy using distinct sets of colors (cf.~the text below for why this is possible) \label{line:proper_G_hat} \vspace{0.2cm}
\caption{Proper colouring without rainbow $F$-copy.} \label{algo:ar_proper}
\end{algorithm}}
To see the correctness of the algorithm, observe first that it suffices to argue that the graph $\hat G$ obtained in line \ref{line:proper_after_removing} can be properly colored without a rainbow copy of $F$. Indeed, we only remove edges that are not contained in an $F$-copy (and can thus be colored arbitrarily) or pairs of (non-adjacent) edges that are both contained in exactly the same $F$-copies (and can thus not be contained in a rainbow copy, if we give them the same color).
It thus remains to prove that line \ref{line:proper_G_hat} is indeed possible. We first show that the graph $\hat G$ is $F$-closed.
Assume otherwise. Then there has to exist an $F$-copy $F'$ which has at most two closed edges (as there are no vertices or edges which are not part of an $F$-copy). If $F' \cong K_\ell$ then, as $\ell \ge 19$, there are at least
$\binom{\ell}{2} -2 > \ell$ edges of $E(F')$ which are not closed. One easily checks that this implies that there are two edges $e_1, e_2 \in E(F')$ that satisfy $e_1 \cap e_2 = \emptyset$ and are not closed. Thus, $F'$ is the only $F$-copy to which $e_1$ and $e_2$ belong, implying that $e_1 \equiv_F e_2$. However, this cannot be, as such a pair would have been removed in line~\ref{line:proper_f_equiv} of the algorithm.
If $F' \cong C_\ell$ then there are at least $\ell - 2 \geq 5$ edges of $F'$ which are not closed and, as $F'$ is a cycle, two of those must be non-intersecting, again yielding a contradiction similarly to the previous case.
So we know that $\hat G$ is $F$-closed. We thus can apply Lemma \ref{lemma:main} to deduce that we have w.h.p. that each $F$-block $B$ in $G$ satisfies $m(B) \le m_2(F)$. By Lemma \ref{lemma:block_split}, coloring one block $B_i$ does not influence the coloring of any $F$-copy which does not lie in $B_i$ and all $B_i$'s are edge-disjoint. Finally, by Lemma \ref{lemma:ar_proper_cases} there exists a desired proper coloring of every block $B_i$, which gives a proper coloring of $\hat G$ (and of the graph $G$) without a rainbow $F$-copy.
\end{proof}
\subsubsection{Proof of Lemma \ref{lemma:ar_proper_cases}}
We start with a technical observation that will help us prove the case of forbidden complete graphs
\begin{claim} \label{claim:cliques_overlap}
Let $\ell \ge 4$ be an integer and let $G$ be a graph with $m(G) \le (\ell + 1)/2$. Then for any vertex $v \in V(G)$ and a subset $A \subseteq N_G(v)$ of size $|A| \le \ell + 1$, there exist at most $\lfloor 6 \cdot \ell / (\ell - 3)\rfloor$ vertices $w \in V(G) \setminus (A \cup \{v\})$ with the property that $G[A' \cup \{v, w\}] \cong K_{\ell}$ for some $A' \subseteq A$.
\end{claim}
\begin{proof}
First, note that if $G[A]$ does not contain a copy of $K_{\ell - 2}$ then there is no such vertex $w \in V(G) \setminus (A \cup \{v\})$. Therefore, we can assume that $|A| \ge \ell - 2$ and $G[A]$ contains at least $\binom{\ell - 2}{2}$ edges. Note that then $e(G[A \cup \{v\}]) \ge \binom{\ell - 2}{2} + \ell - 2$. Assume now that there are $k$ vertices $W = \{w_1, \ldots, w_k\} \subseteq V(G) \setminus (A \cup \{v\})$ with the described property. Then each such vertex $w_i$ has at least $\ell - 1$ neighbours among vertices in $A \cup \{v\}$, thus
\begin{align}
e(G[A \cup \{v\} \cup W]) &\ge \frac{(\ell-2)(\ell - 3)}{2} + \ell - 2 + k\cdot(\ell - 1) \notag \\
&= \frac{(\ell - 2)(\ell - 1)}{2} + k(\ell - 1) = (\ell - 1)(\ell/2 - 1 + k). \label{eq:contra1}
\end{align}
On the other hand, from $m(G) \le (\ell + 1)/2$ and $|A| \le \ell + 1$ we have
\begin{equation}
e(G[A \cup \{v\} \cup W]) \le \frac{\ell + 1}{2} (\ell + 2 + k) = (\ell + 1)(\ell/2 + 1 + k/2). \label{eq:contra2}
\end{equation}
Finally, combining \eqref{eq:contra1} and \eqref{eq:contra2} gives $k(\ell - 3)/2 \le 3\ell$, that is, $k \le 6 \cdot \ell / (\ell - 3)$, which concludes the proof of the claim as $k$ has to be an integer.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma:ar_proper_cases} - complete graphs]
Let $\ell \ge 19$ and $G$ be a graph on $n$ vertices with $m(G) \le m_2(K_\ell) = (\ell + 1)/2$. By Lemma~\ref{thm:k-deg} there exists an ordering $v_1, \ldots, v_n$ of the vertices of $G$ such that \begin{equation} \label{eq:order}
|N(v_i) \cap \{v_1, \ldots, v_{i-1}\}| \le \ell + 1
\end{equation}
for every $i \in [n]$ and let $G_i := G[\{v_1, \ldots, v_i\}]$. Given a (partial) edge-coloring $p$ of $G$, we say that an edge $e \in E(G)$ is \emph{$i$-new} if no edge in $G_{i-1}$ is colored with $p(e)$. We will inductively find a proper coloring $p_i$ of $G_i$ such that the following holds,
\begin{enumerate}[(i)]
\item $G_i$ does not contain a rainbow copy of $K_\ell$ under coloring $p_i$,
\item for every $j \in [i]$: all but at most three edges incident to $v_j$ in $G_j$ are $j$-new, and
\item for every $j < r \le i$: if an edge $\{v_j, v_r\} \in E(G)$ is not $j$-new, then there exists a subset of vertices $S \subseteq \{v_1, \ldots, v_{j-1}\}$ such that $G[\{v_j, v_r\} \cup S] \cong K_\ell$.
\end{enumerate}
The base of the induction trivially holds, thus assume that the induction hypothesis holds for all $i < k$, for some $2 \le k \le n$.
Let $p_{k-1}$ be any coloring of $G_{k-1}$ which satisfies $(i)$-$(iii)$. We create a coloring $p_k$ by extending the coloring $p_{k-1}$ to the edges incident to $v_k$ in $G_k$. Note that this implies that the only $K_\ell$-copies we have to take care of are those which contain the vertex $v_k$. Similarly, the only edges which might violate properties $(ii)$ and $(iii)$ are those incident to $v_k$.
Let $v_{i_1}, \ldots, v_{i_q}$ be the neighbours of $v_k$ in $G_k$, with $i_{j} < i_{j+1}$ for all $j \in [q-1]$. It follows from \eqref{eq:order} that $q \le \ell + 1$. Initially, assign an arbitrary new color to each edge $\{v_k, v_{i_j}\}$ for $j \le \min\{q, \ell - 2\}$. Note that this leaves at most three edges of $G_k$ uncolored, thus the property $(ii)$ is guaranteed to be satisfied. If $q < \ell - 1$, then the vertex $v_k$ does not belong to any copy of $K_\ell$ in $G_k$ and properties $(i)$ and $(iii)$ remain satisfied as well -- in which case we are done. Therefore, from now on we assume that $q \in \{\ell - 1, \ell, \ell + 1\}$.
Let $R = \{v_{i_{\ell-1}}, \ldots, v_{i_q}\}$ be the set of the remaining neighbours of $v_k$, i.e. endpoints of edges that are not yet colored. We first ``clean'' $R$ as follows: for any $v_j \in R$ for which there does not exist a subset $S \subseteq \{v_1, \ldots, v_{j-1}\}$ such that $G[S \cup \{v_j, v_k\}] \cong K_{\ell}$, assign an arbitrary new color to $\{v_j, v_k\}$ and set $R := R \setminus \{v_j\}$. Note that if $R = \emptyset$ after this procedure, then $v_k$ does not belong to a copy of $K_\ell$ in $G_k$ and it is easy to see that properties $(i)$-$(iii)$ are satisfied. Therefore, we can assume that $R \neq \emptyset$ and observe that any coloring we assign to the remaining edges will satisfy $(iii)$. Furthermore, note that every copy of $K_\ell$ which contains $v_k$ in $G_k$ also contains at least one vertex from $R$.
Before we proceed with the coloring of the remaining edges, we first make an observation about the coloring of the edges in $G_{k-1}$. Let $v_j \in R$ be an arbitrary vertex. An application of Claim~\ref{claim:cliques_overlap} to $A:=N(v_j)\cap\{v_1,\ldots,v_{j-1}\}$, whose size is by~\eqref{eq:order} at most $\ell+1$, yields that there exist at most
\begin{equation}
\lfloor6\ell/(\ell-3)\rfloor \stackrel{(\ell \ge 19)}{\le} 7 \label{eq:7}
\end{equation}
vertices $v_z \in V(G) \setminus (A \cup \{v_j\})$ such that there exists $S_z \subseteq A$ with $G[\{v_j, v_z\} \cup S_z] \cong K_\ell$. Since, by the definition of $R$, $v_k$ is such a vertex, it follows from \eqref{eq:7} and property $(iii)$ that there are at most $6$ vertices $v_z$, $j < z < k$, such that the edge $\{v_j, v_z\}$ is not $j$-new. Combining this observation with property $(ii)$, we have that there are at most $9$ colors assigned to edges incident to $v_j$ which are also assigned to some edge in $G_{j-1}$. Let us denote the set of such colors by $C_j$, so that
\begin{equation}
|C_j| \le 9. \label{eq:bound_C}
\end{equation}
With this observation at hand, we go back to the coloring of the remaining edges.
Let $W := \{v_{i_1}, \ldots, v_{i_{\ell - 2}}\}$.
Our aim now is as follows: for each vertex $v_j \in R$ we want to find pairwise disjoint $2$-sets $S_j \subseteq W$ such that either $S_j \notin E(G)$ or $p_{k-1}(S_j) \notin C_j$ and $p_{k-1}(S_j) \neq p_{k-1}(S_{j'})$ for distinct $v_j, v_{j'} \in R$. Then the coloring can be completed by setting $p_k(\{v_k, v_j\}) := p_{k-1}(S_j)$ if $S_j \in E(G)$ and assigning an arbitrary new color otherwise. Clearly, a rainbow $K_\ell$-copy which contains $v_k$ and $v_j \in R$ cannot contain both vertices in $S_j$, thus if it contains $v_k$ then it has to miss at least $|R|$ vertices from $W \cup R$. As $|W| \le \ell - 2$ this shows that no such rainbow $K_\ell$-copy exists, which finishes the proof.
We find these sets $S_j$ in a greedy fashion as follows. Let $R' := R$ and $W' := W$ and repeat the following until $R' = \emptyset$: if there exist two vertices $a,b\in W'$ such that $a$ and $b$ do not form an edge, choose $v_j \in R'$ arbitrarily and set $S_j :=\{a,b\}$, $R' := R' \setminus \{v_j\}$ and $W' := W' \setminus \{a, b\}$. Otherwise, choose $v_j \in R'$ arbitrarily, let $a, b \in W'$ be such that $p_{k-1}(\{a,b\}) \notin C_j$ and $p_{k-1}(\{a,b\}) \neq p_{k-1}(S_{j'})$ for all previously defined sets $S_{j'}$, and update $S_j := \{a,b\}$, $R'$ and $W'$ in the same way. If this procedure exhausts $R'$, then by the construction of the sets $S_j$ we are done. Furthermore, since in each iteration the size of $R'$ decreases, it suffices to show that both cases are well-defined.
If there exist two vertices $a, b \in W'$ that do not form an edge then there is nothing to show. Therefore, we can assume that $W'$ induces a clique. Note that, for each $v_j \in R$, at most $11$ colors are forbidden; at most two because of the previously defined sets $S_{j'}$ and at most $9$ because of $C_j$. Thus, in order to show that we can find an edge $S_j$ in $W'$ which satisfies the desired property, it suffices to show that there are more than $11$ different colors appearing in the clique $W'$. Since $|R| \le 3$ and $|W| = \ell - 2$ we have $|W'| \ge \ell - 2 - 2 \cdot 2 = \ell - 6$ as long as $R' \neq \emptyset$. On the other hand, every proper coloring of a clique on at least $\ell - 6$ vertices contains at least $\ell - 7 > 11$ different colors, which finishes the proof.
\end{proof}
We remark that more careful counting of the number of different colors in the clique $W'$ gives a slightly better lower bound on $\ell$. Next, we prove the case of cycles.
\begin{proof}[Proof of Lemma \ref{lemma:ar_proper_cases} - cycles]
Let $\ell \ge 7$ and $G$ be a graph on $n$ vertices such that $m(G) \le m_2(C_\ell) = 1 + 1/(\ell - 2)$. Let us assume towards a contradiction that $G$ is a minimal graph with respect to the number of vertices such that $G \xrightarrow[{\text{\raisebox{7.5pt}[\ht\strutbox]{{\tiny $prp$}}}}]{\text{{\tiny $a$-$ram$}}} C_\ell$.
First, observe that in $G$ no two vertices of degree $2$ are adjacent. To see this, let us assume that two such vertices $v_1, v_2 \in V(G)$ exist. Then $N(v_1) \cap N(v_2) = \emptyset$ as otherwise $v_1$ and $v_2$ do not belong to a $C_\ell$-copy thus contradicting the minimality of $G$. Therefore, the edges $e_1$ and $e_2$ incident to $v_1$ and $v_2$, different from the edge $\{v_1, v_2\}$, satisfy $e_1 \cap e_2 = \emptyset$. Furthermore, it follows again from the minimality of $G$ that
$$ G \setminus \{v_1, v_2\} \xslashedrightarrowa[{\text{\raisebox{7.5pt}[\ht\strutbox]{{\tiny $prp$}}}}]{\text{{\tiny $a$-$ram$}}} C_\ell.$$
Consider an arbitrary coloring of $G \setminus \{v_1, v_2\}$ without a rainbow $C_\ell$-copy. We assign the same (new) color to $e_1$ and $e_2$. Observe that no rainbow $C_\ell$-copy can contain both $v_1$ and $v_2$. On the other hand, since $e_1 \equiv_{C_\ell} e_2$ in $G$ and there is no rainbow $C_\ell$-copy in $G \setminus \{v_1, v_2\}$ this implies $G \xslashedrightarrowa[{\text{\raisebox{7.5pt}[\ht\strutbox]{{\tiny $prp$}}}}]{\text{{\tiny $a$-$ram$}}} C_\ell$, a contradiction.
Next, it is easy to see that $G$ does not contain a vertex $v$ of degree $1$ as such a vertex does not belong to a $C_\ell$-copy and would contradict the minimality of $G$. Therefore, $\delta(G) \ge 2$ and by the previous observation the set $V_2 \subseteq V(G)$ of all the vertices of degree $2$ is an independent set. We estimate the size of $V_2$ as follows,
$$ 2m(G) n \ge 2e(G)=\sum_{v \in G} \deg(v) \ge \sum_{v \in V_2}2 + \sum_{v \in V(G) \setminus V_2}3 = |V_2| \cdot 2 + (n - |V_2|) \cdot 3 $$
and therefore $|V_2| \ge (1 - 2/(\ell - 2)) n$. Since $V_2$ is an independent set this implies
$$(1+\tfrac{1}{\ell-2})n \ge e(G) \ge e(V_2, V(G) \setminus V_2) = |V_2| \cdot 2 \ge (2 - \tfrac4{\ell - 2}) n.$$
One easily checks that this is a contradiction for all $\ell\ge 8$: indeed, $1+\tfrac{1}{\ell-2} < 2 - \tfrac4{\ell - 2}$ holds precisely when $\ell > 7$. For $\ell=7$ the left hand side is equal to the right hand side, so all of the above inequalities hold with equality; in particular every edge of $G$ joins $V_2$ and $V(G)\setminus V_2$, which implies that the graph $G$ is bipartite.
Since $C_7$ is not bipartite, $G$ does not contain a $C_\ell$-copy, implying the desired contradiction also in this case.
\end{proof}
\subsection{Anti-Ramsey property -- 2-bounded colorings} \label{sec:ar_bounded}
Here we give a proof of Theorem \ref{thm:2_bnd}. We use the following three lemmas which provide a density condition of graphs that are not anti-Ramsey corresponding to the three cases from Theorem \ref{thm:2_bnd}. We defer the proofs to the next subsection.
\begin{lemma} \label{lemma:nbounded_general}
Let $F$ be a strictly $2$-balanced graph on at least $4$ vertices which contains a cycle and is not isomorphic to $C_4$. Then for any graph $G$ such that $m(G) \le m_2(F)$ it holds that $G \naramcolk[2] F$.
\end{lemma}
\begin{lemma} \label{lemma:nbounded_C4}
For any graph $G$ such that $m(G) < m_2(C_4)$ it holds that $G \naramcolk[2] C_4$. Moreover, there exists a graph $G$ with $m(G) = m_2(C_4)$ such that $G \aramcolk[2] C_4$.
\end{lemma}
\begin{lemma} \label{lemma:nbounded_hyper}
Let $r, \ell \in \mathbb{N}$ be such that $2 \le \ell \le r - 1$ and $(\ell,r) \notin \{(2,3), (3,4)\}$. Then for any $\ell$-graph $G$ with $m(G) \le m_{\ell}(K_r^{(\ell)})$ it holds that $G \naramcolk[2] K_r^{(\ell)}$.
\end{lemma}
\begin{proof}[Proof of Theorem \ref{thm:2_bnd}]
Let $\ell \ge 2$ be an integer and consider some strictly $\ell$-balanced $\ell$-graph $F$ which satisfies one of the conditions of the theorem and let $c$ be a constant given by Corollary \ref{lemma:main_density} when applied to $F$. Let $G \sim G^{(\ell)}(n, p)$ for $p$ which we will specify later. We use Algorithm~\ref{algo:ar_2bnd} to find a $2$-bounded coloring of $G$ without a rainbow $F$-copy.
{\LinesNumbered
\begin{algorithm}
\DontPrintSemicolon
$\hat G \leftarrow G$\;
$\textrm{col} \leftarrow 0$\;
\While{$\exists e_1, e_2 \in E(\hat G) \; : \; e_1 \equiv_F e_2$ in $\hat G$}{
color $e_1, e_2$ with $\textrm{col}$\;
$\hat G \leftarrow \hat G \setminus \{e_1, e_2\}$ and $\textrm{col} \leftarrow \textrm{col} + 1$
}
\While{$\exists e \in E(\hat G) \; : \; e$ does not belong to an $F$-copy}{
color $e$ with $\textrm{col}$\;
$\hat G \leftarrow \hat G \setminus \{e\}$ and $\textrm{col} \leftarrow \textrm{col} + 1$
}
Remove isolated vertices in $\hat G$\; \label{line:proper_after_removing_2bnd}
$\{B_1, \ldots, B_k\} \leftarrow $ $F$-blocks obtained by applying Lemma \ref{lemma:block_split} with $\hat G$\;
Color ($2$-bounded) each $B_i$ without a rainbow $F$-copy using a distinct set of colors
\vspace{0.2cm}
\caption{$2$-bounded colouring of $G$ without rainbow $F$-copy.} \label{algo:ar_2bnd}
\end{algorithm}}
The only difference between Algorithm \ref{algo:ar_proper} and Algorithm \ref{algo:ar_2bnd} is in the condition in line 3. In particular, in Algorithm \ref{algo:ar_2bnd} we do not require the edges $e_1$ and $e_2$ to be disjoint. Following the same lines as in the proof of Theorem \ref{thm:ar_proper}, together with Lemma \ref{lemma:nbounded_general} (provided $p \le cn^{-1/m_2(F)}$), Lemma \ref{lemma:nbounded_C4} (provided $F\cong C_4$ and $p \ll n^{-1/m_2(F)}$) and Lemma \ref{lemma:nbounded_hyper} (provided $p \le cn^{-1/m_\ell(F)}$), we conclude that w.h.p.\ $G$ is such that Algorithm \ref{algo:ar_2bnd} finds the desired colouring.
\end{proof}
\subsubsection{Proof of Lemmas \ref{lemma:nbounded_general} and \ref{lemma:nbounded_C4}}
The proof of Lemma \ref{lemma:nbounded_general} splits into several cases. We first state the claims which cover these cases. Throughout this section, we say that a $2$-bounded coloring of the edges incident to some vertex $v$ is \emph{maximal} if all but at most one color appears exactly twice.
\begin{claim} \label{claim:min_degree_col}
Let $G$ and $F$ be graphs such that $m(G) < \delta(F) - 1/2$. Then $G \naramcolk[2] F$.
\end{claim}
\begin{proof}
Consider some graph $F$ and assume towards a contradiction that there exists a graph $G$ on $n$ vertices with $m(G) < \delta(F) - 1/2$ such that $G \aramcolk[2] F$. Furthermore, let us assume that $G$ is a minimal such graph with respect to the number of vertices. It then follows from
$$\sum_{v \in V(G)} \deg(v) = 2e(G) \le 2m(G) n$$
that there exists a vertex $u \in V(G)$ with $\deg(u) \le 2m(G) < 2\delta(F) - 1$. Since $\deg(u) \in \mathbb{Z}$ we can further improve this bound to $\deg(u) \le \lfloor 2m(G) \rfloor \leq 2(\delta(F) - 1)$. Now consider an arbitrary maximal $2$-bounded coloring of the edges incident to $u$ and color $G - \{u\}$ using the minimality assumption. Then in any rainbow subgraph of $G$ the vertex $u$ has degree at most $\delta(F) - 1$, thus $u$ cannot belong to a rainbow $F$-copy. However, as there are no rainbow $F$-copies in $G - \{u\}$ we have a $2$-bounded coloring of $G$ without a rainbow $F$-copy, which is a contradiction.
\end{proof}
The proof of the next claim uses similar ideas as the proof of Lemma \ref{lemma:ar_proper_cases} in the case of cycles.
\begin{claim} \label{claim:2_deg_adj}
Let $G$ and $F$ be graphs such that $m(G) < \delta(F) - 2/7$, $\delta(F) \ge 2$ and $F$ does not contain two adjacent vertices of degree $\delta(F)$. Then $G \naramcolk[2] F$.
\end{claim}
\begin{proof}
Let us consider some graph $F$ as in the statement of the claim and assume towards a contradiction that there exists a graph $G$ on $n$ vertices with $m(G) < \delta(F) - 2/7$ such that $G \aramcolk[2] F$. Furthermore, assume that $G$ is a minimal such graph with respect to the number of vertices.
First, we can assume that $\delta(G) \ge 2\delta(F) - 1$ as otherwise the claim follows from the same arguments as in the proof of Claim \ref{claim:min_degree_col}. Furthermore, similarly as in the proof of the cycle case of Lemma \ref{lemma:ar_proper_cases} we can show that $G$ does not contain two adjacent vertices $v_1, v_2 \in V(G)$ with $\deg(v_1) = \deg(v_2) = 2\delta(F) - 1$. Indeed, assume that two such vertices $v_1, v_2 \in V(G)$ exist. Then we color $G\setminus\{v_1,v_2\}$ by the minimality assumption without a rainbow $F$-copy, assign a new color to the edge $\{v_1, v_2\}$ and color the remaining edges incident to $v_1$ and $v_2$ both by a maximal $2$-bounded coloring. Then the degree of $v_1$ and $v_2$ in any rainbow subgraph $R \subseteq G$ is at most $\delta(F)$. If $\{v_1, v_2\} \subseteq R$ then $R \ncong F$ since $F$ does not contain two adjacent vertices of degree $\delta(F)$. Otherwise, $v_1$ and $v_2$ can have degree at most $\delta(F)-1$ in $R$, which again implies that $R \ncong F$ or $v_1,v_2\not\in R$. Therefore, any rainbow $F$-copy has to lie completely in $G - \{v_1, v_2\}$ which is not possible.
To summarize, we have $\delta(G) \ge 2\delta(F) - 1$ and the set $S \subseteq V(G)$ of all the vertices of degree exactly $2\delta(F) - 1$ is an independent set. We estimate the number of edges in $G$ as follows,
\begin{align*}
2m(G) n &\ge \sum_{v \in V(G)} \deg(v) \ge |S|(2\delta(F) - 1) + (n - |S|)2\delta(F) = n \cdot 2\delta(F) - |S|
\end{align*}
and thus $|S| \ge 2n (\delta(F) - m(G))$. Now $m(G) < \delta(F) - 2/7$ implies that $|S| > 4/7 \cdot n$. Since $S$ is an independent set, we further have
\begin{multline*}
(\delta(F) - 2/7)n \ge m(G)n\ge e(G) \ge e(S, V(G) \setminus S) \geq \\ |S| \cdot (2\delta(F) - 1) > n(8/7 \cdot \delta(F) - 4/7),
\end{multline*}
which easily implies $\delta(F) < 2$, hence a contradiction. Therefore, such a graph $G$ does not exist.
\end{proof}
\begin{claim} \label{claim:orientation}
Let $F$ and $G$ be graphs such that
\begin{enumerate}[(i)]
\item $\lceil m(G) / 2 \rceil < m(F)$ or
\item $\lceil m(G) / 2 \rceil = m(F)$, $m(G) < \lceil m(G) \rceil$ and $\lceil m(G) \rceil$ is odd.
\end{enumerate}
Then $G \naramcolk[2] F$.
\end{claim}
\begin{proof}
Let $F$ and $G$ be graphs which satisfy condition $(i)$ of the claim. By Lemma~\ref{thm:k-orient} there exists an orientation of the edges of $G$ such that each vertex has out-degree at most $\lceil m(G) \rceil$. Let us consider one such orientation and arbitrarily pair the out-edges incident to each vertex. Assigning the same color to edges in each pair, in any rainbow (oriented) subgraph $R \subseteq G$ we have for the out-degree of any vertex $v\in V(R)$
\begin{equation} \label{eq:orient}
\deg^+_R(v) \le \left\lceil \frac{\lceil m(G) \rceil}{2} \right\rceil = \left\lceil \frac{m(G)}{2} \right\rceil < m(F).
\end{equation}
In particular, the density of $R$ is strictly smaller than $m(F)$ thus $R \ncong F$.
Let now $F$ and $G$ be graphs such that condition $(ii)$ holds. As in the previous case, let us fix an orientation of the edges of $G$ such that each vertex has out-degree at most $\lceil m(G) \rceil$. Note that in every (oriented) subgraph $G' \subseteq G$ there exists a vertex with out-degree strictly smaller than $\lceil m(G) \rceil$ as otherwise we would have that the density of such a subgraph is $\lceil m(G) \rceil > m(G)$. Therefore, we can greedily arrange the vertices of $G$ into a sequence $v_1, \ldots, v_n$ such that $N_i := N^+(v_i) \cap \{v_{i+1}, \ldots, v_{n}\}$ is of size at most $\lceil m(G) \rceil-1$. Now the coloring strategy is as follows: for each vertex $v_i$, first arbitrarily pair all the out-edges corresponding to $N_i$ and then all other out-edges incident to $v_i$ and assign a new color to each pair. It remains to prove that there are no rainbow $F$-copies under such coloring.
Consider some rainbow subgraph $R \subseteq G$. It follows from the pairing strategy that every vertex in $R$ has out-degree at most $\lceil \lceil m(G) \rceil / 2\rceil = \lceil m(G)/ 2 \rceil = m(F)$. Now consider
the vertex $v_i \in V(R)$ with the smallest index $i$ among all the vertices in $R$. Observe that all out-neighbours of $v_i$ in $R$ have index larger than $i$. Since $|N_i| \le \lceil m(G) \rceil - 1$ the pairing strategy ensures that the out-degree of $v_i$ in $R$ is at most
$$ \left\lceil \frac{\lceil m(G) \rceil - 1}{2} \right\rceil < \left\lceil \frac{\lceil m(G) \rceil}{2} \right\rceil = m(F), $$
where the strict inequality follows from the fact that $\lceil m(G) \rceil$ is odd. Thus all the vertices in $R$ have out-degree at most $m(F)$ and at least one vertex has out-degree strictly smaller than $m(F)$. Therefore, the density of any rainbow subgraph $R$ is strictly smaller than $m(F)$ hence there is no rainbow $F$-copy in $G$.
\end{proof}
It remains to cover the case $F = K_4$.
\begin{lemma} \label{lemma:k4_2bnd}
Let $G$ be a graph such that $m(G) \le m_2(K_4) = 2.5$. Then $G \naramcolk[2] K_4$.
\end{lemma}
\begin{proof}
Let us assume towards a contradiction that there exists a graph $G$ on $n$ vertices with $m(G) \le 2.5$ and such that $G \aramcolk[2] K_4$. Without loss of generality let $G$ be a minimal such graph with respect to the number of vertices.
First, observe that $G$ does not contain a vertex $v \in V(G)$ with $\deg(v) < 5$. Otherwise, by taking any maximal coloring of edges incident to $v$, we have that no rainbow $K_4$-copy can contain $v$. Since it follows from the minimality of $G$ that there is no rainbow $K_4$-copy in $G \setminus \{v\}$ we get that $G$ does not contain a rainbow $F$-copy, thus a contradiction. Therefore, $\delta(G) \ge 5$ and since
$$\sum_{v \in V(G)} \deg(v) \le m(G) \cdot 2n \le m_2(K_4) \cdot 2n = 5 n$$
it follows that $G$ is $5$-regular. Observe that $G \ncong K_6$, as the coloring (see Figure~\ref{fig:k6})
\begin{align*}
& (\{v_1, v_2\}, \{v_1, v_3\}), (\{v_1, v_4\}, \{v_1, v_5\}), \\
& (\{v_1, v_6\}, \{v_5, v_6\}), (\{v_2, v_4\}, \{v_2, v_6\}),\\
& (\{v_3, v_4\}, \{v_3, v_6\}), (\{v_3, v_5\}, \{v_2, v_5\}), (\{v_4, v_5\}, \{v_4, v_6\})
\end{align*}
shows that $K_6 \naramcolk[2] K_4$.
\begin{figure}[h]
\center
\includegraphics[trim = 30mm 60mm 20mm 45mm, clip, scale = 0.2]{k6.jpg}
\caption{A coloring of $K_6$ without a rainbow $K_4$-copy.}\label{fig:k6}
\end{figure}
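The colouring above can also be verified mechanically. The following brute-force check, a small sketch using only Python's standard library, confirms that under this pairing (with the one remaining edge $\{v_2,v_3\}$ necessarily receiving its own colour) no $4$-subset of the vertices of $K_6$ induces a rainbow $K_4$-copy:
\begin{verbatim}
from itertools import combinations

# monochromatic pairs of K_6-edges as listed above; {2,3} gets its own colour
pairs = [({1,2},{1,3}), ({1,4},{1,5}), ({1,6},{5,6}), ({2,4},{2,6}),
         ({3,4},{3,6}), ({3,5},{2,5}), ({4,5},{4,6})]
colour = {}
for c, (e1, e2) in enumerate(pairs):
    colour[frozenset(e1)] = colour[frozenset(e2)] = c
colour[frozenset({2, 3})] = len(pairs)

# a K_4-copy is rainbow iff its six edges receive six distinct colours
rainbow = [S for S in combinations(range(1, 7), 4)
           if len({colour[frozenset(e)] for e in combinations(S, 2)}) == 6]
print(rainbow)  # prints []: no rainbow K_4-copy, so this 2-bounded colouring works
\end{verbatim}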
Let now $v \in G$ be an arbitrary vertex and $N(v) = \{w_1, \ldots, w_5\}$. Assume first that
$\delta(G[N(v)])\leq 2$ and w.l.o.g.\ let $w_1, w_2$ and $w_3$ be the vertices such that
$\{w_1, w_2\}, \{w_1, w_3\} \notin E(G)$. Consider the following coloring of the edges incident to $v$:
$$
(\{v, w_1\} ), (\{v, w_2\}, \{v, w_3\}), (\{v, w_4\}, \{v, w_5\}).
$$
Now any possible rainbow $K_4$-copy which contains the vertex $v$ must also contain the vertex $w_1$ and one of the vertices from $\{w_2, w_3\}$. However, that is not possible as $w_1$ is not connected to any of $w_2$ and $w_3$. On the other hand, by the minimality of $G$ no rainbow $K_4$-copy lies completely in $G \setminus \{v\}$. Thus $G$ contains no rainbow $K_4$-copy, which is a contradiction with the choice of $G$.
Therefore, we can assume that $\delta(G[N(v)])\geq 3$. As $G$ is $5$-regular, this implies that every vertex $w_i\in N(v)$ has at most one neighbor in $V(G) \setminus (N(v)\cup\{v\})$. Thus, any $K_4$-copy
that contains a vertex from $N(v)\cup\{v\}$ can contain at most one vertex from $V(G) \setminus (N(v)\cup\{v\})$, which in turn implies that any such clique has to contain three vertices in $N(v)$. However, one easily checks that this can only be if one of the remaining two vertices in $N(v)$ has degree at most two within $G[N(v)]$, which we have already excluded. Thus, there exists no $K_4$-copy which contains a vertex in $N(v)$ and a vertex in $V(G) \setminus (N(v)\cup\{v\})$. We can thus color $G[N(v) \cup \{v\}]$ and $G[V(G) \setminus (N(v) \cup \{v\})]$ separately and by the minimality of $G$ a coloring without rainbow $K_4$-copy exists for both these graphs. This concludes the proof of the lemma.
\end{proof}
We are now ready to combine the previous claims.
\begin{proof}[Proof of Lemma \ref{lemma:nbounded_general}]
Let us first consider a graph $F$ on four vertices. There exist only two such graphs that are strictly $2$-balanced: $C_4$ and $K_4$. Therefore, if $F$ is a graph on four vertices then $F \cong K_4$ and the conclusion of the lemma follows from Lemma \ref{lemma:k4_2bnd}. For the rest of the proof we assume that $F$ contains at least $5$ vertices and since $F$ is a strictly $2$-balanced graph we have $\delta(F) \ge 2$.
Let $m_2(F) = k + x$ for some $k \in \mathbb{N}$, $k\ge 1$ and $x \in [0, 1)$. Observe that $\delta(F) > m_2(F)$ as otherwise removing a vertex with degree at most $m_2(F)$ would result in a graph with the same or larger $2$-density, which cannot be since $F$ is strictly $2$-balanced. Thus $\delta(F) \ge k + 1$. If $x < 1/2$ then $m(G) < k + 1/2 = (k+1) - 1/2$ and the lemma follows from Claim \ref{claim:min_degree_col}. So we may assume in the following that $x\ge 1/2$.
One easily checks that
$$\frac{3}{4} v(F)^2 - v(F) > \binom{v(F)}{2} \ge e(F)$$
(as $v(F)\ge 5$) and thus
$$ \frac{e(F)}{v(F)} + 3/2 > \frac{e(F) - 1}{v(F) - 2}. $$
As $x \ge 1/2$ this implies $m(F) > m_2(F) - 3/2 \ge k - 1$. For $k\ge 3$ we therefore have
$$\lceil m(G) / 2 \rceil \le \lceil (k+1)/2 \rceil \stackrel{(k\ge 3)}{\leq} k - 1 < m(F),$$
and $G \naramcolk[2] F$ follows from Claim \ref{claim:orientation}. So from now on we may assume that $x\ge 1/2$ and $k\in\{1,2\}$.
Furthermore, if $F$ contains two adjacent vertices of degree $\delta(F)$ then from the fact that $F$ is strictly $2$-balanced and $v(F) \ge 5$ we have
$$ \frac{e(F) - 1 - (2\delta(F) - 1)}{v(F) - 2 - 2} < \frac{e(F) - 1}{v(F) - 2} $$
and so $(2\delta(F) - 1)/2 > m_2(F)\ge k+1/2$. Therefore, either $\delta(F) \ge k + 2$ or $\delta(F) = k + 1$ and $F$ does not contain two adjacent vertices of degree $\delta(F)$. In the first case we trivially have $m(G) \le m_2(F)< k + 1 < k + 2 - 1/2$ and the lemma follows again from Claim \ref{claim:min_degree_col}. In the latter case, if we additionally assume that $x < 5/7$ then
$$\delta(F) - 2/7 \ge k + 5/7 > k + x = m_2(F)$$
and the lemma follows from Claim \ref{claim:2_deg_adj}. Thus we may assume from now on that $x\ge 5/7$ and $k\in\{1,2\}$.
Finally, if $e(F) < (5v(F)^2 - 3v(F))/14$ then
$$ \frac{e(F)}{v(F)} + \frac{5}{7} > \frac{e(F) - 1}{v(F) - 2},$$
and $x \ge 5/7$ implies that $m(F) > m_2(F) - 5/7 \ge k$. Similarly as before we have
$$ \lceil m(G) / 2 \rceil \le \lceil (k+1)/2 \rceil \le k < m(F), $$
for $k \in \{1,2\}$ and the lemma follows from Claim \ref{claim:orientation}.
To summarize, we have shown that $G \naramcolk[2] F$ unless the following three conditions hold simultaneously:
\begin{enumerate}[(a)]
\item $x \ge 5/7$,
\item $k \in \{1,2\}$ and
\item $e(F) \ge (5v(F)^2 - 3v(F))/14$.
\end{enumerate}
Let us consider some $F$ such that all three properties apply. Then from (b) and (c) we have
\begin{equation}
3 > m_2(F) \ge \frac{(5v(F)^2 - 3v(F))/14 - 1}{v(F) - 2}. \label{eq:v_F}
\end{equation}
A simple calculation yields that \eqref{eq:v_F} implies $v(F) < 7$. If $v(F) = 6$ then from (c) we have $e(F) \ge 12$ while from $m_2(F)<3$ we obtain $e(F) \le 12$.
But then $m(F)\ge 2$ and $\lceil m_2(F) \rceil = 3$ and the lemma follows from the part $(ii)$ of Claim \ref{claim:orientation}. Otherwise, if $v(F) = 5$ then from (c) we have $e(F) \ge 8$ while from $m_2(F)<3$ we obtain $e(F) \le 9$. However, for $e(F) \in \{8,9\}$ we have $m_2(F) \in \{2 + 1/3, 2 + 2/3\}$ thus $F$ does not satisfy (a). This finishes the proof.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma:nbounded_C4}]
Assume towards a contradiction that $G$ is a graph on $n$ vertices such that $m(G) < m_2(C_4) = 3/2$ and $G \aramcolk[2] C_4$. Furthermore, let $G$ be a minimal such graph with respect to the number of vertices. Then
$$\sum_{v \in V(G)} \deg(v) \le 2m(G) n < 3n$$
implies that there exists a vertex $v \in V(G)$ with $\deg(v) \le 2$. Coloring $G\setminus\{v\}$ by the minimality assumption on $G$, and the (at most two) edges incident to $v$ with the same new color, yields a coloring of $G$ with no rainbow $C_4$-copy, contradicting our choice of $G$.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale = 0.8]
\node [draw, circle] (0) at (-0.75, 3) {};
\node [draw, circle] (1) at (0.75, 3) {};
\node [draw, circle] (2) at (-2.5, 0) {};
\node [draw, circle] (3) at (-1.5, -1.25) {};
\node [draw, circle] (4) at (1.5, -1.25) {};
\node [draw, circle] (5) at (2.5, 0) {};
\draw (0) to (1);
\draw (1) to (5);
\draw (5) to (4);
\draw (4) to (3);
\draw (3) to (2);
\draw (2) to (0);
\draw (0) to (4);
\draw (1) to (3);
\draw (2) to (5);
\end{tikzpicture}
\caption{A counter-example for the case $F = C_4$.} \label{fig:C6}
\end{figure}
For the second part of the lemma, consider the graph $C_6^{3+}$ given in Figure~\ref{fig:C6}. It is easy to see that $m(C_6^{3+}) = 3/2$. Furthermore, it follows from the fact that the graph is $3$-regular that every pair of edges is contained in at most two $C_4$-copies. As there are $9$ edges in $C_6^{3+}$, in every $2$-bounded coloring there are at most $4$ pairs of edges which are colored the same. It now follows from the previous observation that every such pair of edges can prevent at most two $C_4$-copies from being rainbow. However, there are $9$ copies of $C_4$, thus at least one copy has to be rainbow. This finishes the proof.
\end{proof}
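The structural facts used in the counting argument for $C_6^{3+}$ are also easy to confirm by exhaustive search. The sketch below, using only Python's standard library, verifies that the graph has $9$ edges, exactly $9$ copies of $C_4$, and that every pair of edges is contained in at most two of these copies:
\begin{verbatim}
from itertools import combinations

# the graph C_6^{3+}: the cycle on vertices 1..6 plus the three long diagonals
edges = [frozenset(e) for e in
         [(1,2),(2,3),(3,4),(4,5),(5,6),(6,1),(1,4),(2,5),(3,6)]]

# a C_4-copy is a 4-subset spanning exactly four edges, each vertex having degree two
copies = []
for S in combinations(range(1, 7), 4):
    spanned = [e for e in edges if e <= set(S)]
    if len(spanned) == 4 and all(sum(v in e for e in spanned) == 2 for v in S):
        copies.append(spanned)

print(len(edges), len(copies))                       # 9 edges, 9 copies of C_4
print(max(sum(e1 in c and e2 in c for c in copies)   # every pair of edges lies in
          for e1, e2 in combinations(edges, 2)))     # at most 2 copies of C_4
\end{verbatim}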
\subsubsection{Proof of Lemma \ref{lemma:nbounded_hyper}}
We use the following notion of a \emph{link} in a hypergraph.
\begin{definition}[Hypergraph link]
Let $\ell \ge 2$ be an integer and $G$ an $\ell$-graph. Then for a vertex $v \in V(G)$ we define the \emph{link} of $v$ in $G$ to be the $(\ell-1)$-graph $G_v$ induced by the set of edges
$$\{e \setminus \{v\} \; : e \in E(G) \; \text{and} \; v \in e\; \}.$$
Furthermore, define the link of two vertices $v$, $w$ in $G$ to be the $(\ell-2)$-graph $G_{v,w}$ induced by the set of edges
$$\{e \setminus \{v,w\} \; : e \in E(G) \; \text{and} \; v,w \in e\; \}.$$
\end{definition}
We make a series of claims towards the proof of Lemma~\ref{lemma:nbounded_hyper}.
\begin{claim}\label{obs:link_col}
Let $G$ be a vertex minimal $\ell$-graph such that $G\aramcolk[2] K^{(\ell)}_r$. Then
\[
G_u \;\aramcolk[2]\; K_{r-1}^{(\ell - 1)}
\]
for every vertex $u$.
\end{claim}
\begin{proof}
Assume the contrary. Then there exists a $2$-bounded coloring $c_u$ of $G_u$ without a rainbow $K^{(\ell-1)}_{r-1}$-copy. Let $c$ be the partial coloring of $G$ given by
$$ c(e) := c_u(e \setminus \{u\}) $$
for all $e \in E(G)$ with $u \in e$. Then $u$ cannot belong to a rainbow $K_{r}^{(\ell)}$-copy in $G$. As we can also color $G\setminus\{u\}$ without a rainbow $K_{r}^{(\ell)}$-copy by the minimality assumption on $G$, this contradicts the assumption $G\aramcolk[2] K^{(\ell)}_r$ of the claim.
\end{proof}
\begin{claim}\label{obs:except_1}
Let $G$ be a graph with at most $8$ edges. Then $G \aramcolk[2] K_3$ if and only if $G$ contains a copy of $K_4$. Furthermore, if $G[K] \cong K_4$ for some $K \subseteq V(G)$, then for every $T\in\binom{K}{3}$ there is a $2$-bounded colouring of $G$ with $G[T]$ being the only rainbow $K_3$-copy in $G$.
\end{claim}
\begin{proof}
One easily checks that $K_4 \aramcolk[2] K_3$, thus if $G$ contains $K_4$ then $G \aramcolk[2] K_3$ as well. In the other direction, let $G$ be a vertex-minimal graph with at most $8$ edges and without a copy of $K_4$ such that $G \aramcolk[2] K_3$. If $v(G)\ge 6$ then $\delta(G)\le \lfloor16/6\rfloor= 2$, thus allowing a $2$-bounded colouring without a rainbow $K_3$-copy, similarly to the argument in the proof of Lemma~\ref{lemma:nbounded_C4}. Otherwise, for $v(G) \in \{4,5\}$ one easily checks that $G \naramcolk[2] K_3$, thus contradicting the choice of $G$.
For the furthermore-part, observe that if $G[K] \cong K_4$ for some $K \subseteq V(G)$, then $G$ contains at most two additional edges $e_1, e_2 \notin G[K]$. Let us assume that $K = \{v_1, v_2, v_3, v_4\}$ and, without loss of generality, $T = \{v_1, v_2, v_3\}$.
Then the following $2$-bounded colouring has the required property:
\[
(e_1, e_2), (\{v_1, v_2\},\{v_1,v_4\}), (\{v_1,v_3\},\{v_3,v_4\}), (\{v_2,v_3\},\{v_2,v_4\}).
\]
\end{proof}
\begin{claim}\label{obs:except_2}
Let $G$ be a $3$-graph with at most $16$ edges and no isolated vertices.
Then $G \aramcolk[2] K^{(3)}_4$ if and only if $G$ is isomorphic to a $3$-graph which consists of two copies of $K^{(3)}_5$ that share $4$ vertices.
\end{claim}
\begin{proof}
If $G$ consists of two copies of $K^{(3)}_5$ that share $4$ vertices, then $v(G)=6$,
$e(G)=16$ and $G$ contains $9$ copies of $K^{(3)}_4$. Since any pair of edges coloured the
same can prevent at most one rainbow $K^{(3)}_4$-copy and in any $2$-bounded colouring of $G$ there are at most $8$ different pairs of edges which
are coloured the same, it follows that one copy of $K^{(3)}_4$ will always be rainbow.
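Indeed, writing the two $K^{(3)}_5$-copies on the vertex sets $\{a,b_1,b_2,b_3,b_4\}$ and $\{c,b_1,b_2,b_3,b_4\}$, the counts read
\[
e(G) = 10 + 10 - \binom{4}{3} = 16
\qquad\text{and}\qquad
\#\bigl\{K^{(3)}_4\text{-copies}\bigr\} = 5 + 5 - 1 = 9,
\]
since the four edges inside $\{b_1,\ldots,b_4\}$ and the $K^{(3)}_4$-copy on $\{b_1,\ldots,b_4\}$ are counted twice, and no $4$-set containing both $a$ and $c$ spans a copy.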
In the other direction, let $G$ be a vertex-minimal $3$-graph on the vertex set $\{v_1, \ldots, v_n\}$ with at most $16$ edges such that $G \aramcolk[2] K^{(3)}_4$. If $n \le 5$ then $G \subseteq K_5^{(3)}$ and the following $2$-bounded colouring of $K_5^{(3)}$ gives a contradiction with the choice of $G$:
\begin{align*}
&(\{v_1,v_2,v_5\},\{v_1,v_3,v_5\}), (\{v_1,v_4,v_5\},\{v_3,v_4,v_5\}), (\{v_2,v_4,v_5\},\{v_1,v_2,v_4\}),\\
& (\{v_2,v_3,v_4\},\{v_2,v_3,v_5\}), (\{v_1,v_2,v_3\},\{v_1,v_3,v_4\}).
\end{align*}
Therefore, from now on we can assume that $n \ge 6$. Next, let us assume towards a contradiction that $G$ does not contain a $K_5^{(3)}$-copy. Let $v$ be a vertex of minimum degree; then $\deg(v)\le \lfloor16\cdot 3/n \rfloor\le 48/6= 8$. By Claims~\ref{obs:link_col} and~\ref{obs:except_1} and the minimality assumption on $G$, the link $G_v$ contains a $K_4$-copy.
As $G$ does not contain a $K^{(3)}_5$-copy, we know that there exists a $3$-subset $T$ of the vertices of a $K_4$-copy in $G_v$ such that $T \notin E(G)$.
Since $e(G_v)\le 8$, Claim~\ref{obs:except_1} asserts the existence of a $2$-bounded colouring $c_v$ of $G_v$ such that the only rainbow $K_3$-copy is induced by $T$.
By the minimality of $G$ we can colour $G\setminus\{v\}$ without a rainbow $K^{(3)}_4$-copy. We then extend this colouring to $G$ by using $c_v$, on a disjoint set of colours, to colour the edges containing $v$, thus obtaining a $2$-bounded colouring of $G$ without a rainbow $K^{(3)}_4$-copy. This contradicts the choice of $G$.
Without loss of generality, we may now assume $G[\{v_1, \ldots, v_5\}] \cong K_5^{(3)}$. Then from $e(G)\le16$ and $e(K^{(3)}_5)=10$ it follows that $\deg_G(v_i) \le 6$ for every $v_i \in \{v_6, \ldots, v_n\}$. By Claims~\ref{obs:link_col} and~\ref{obs:except_1}, the link of every such vertex has to contain a copy of $K_4$. Thus $G_{v_6} \cong K_4$ and, since $e(G)\le 16 = 10 + \deg_G(v_6)$, every edge of $G$ has to either contain $v_6$ or belong to $G[\{v_1, \ldots, v_5\}]$. This is only possible if $v(G) = 6$, and so $G$ is isomorphic to two copies of $K^{(3)}_5$ that share $4$ vertices.
\end{proof}
We combine the previous claims to derive the following lemma, which we then use as a base for the induction in the proof of Lemma \ref{lemma:nbounded_hyper}.
\begin{lemma}\label{lem:four_critical}
If $G$ is a $4$-graph with $m(G) \le 4$ then $G\naramcolk[2] K^{(4)}_5$.
\end{lemma}
\begin{proof}
Suppose the claim is false and let $G$ be a vertex-minimal $4$-graph with $m(G)\le 4$ and $G \aramcolk[2] K_5^{(4)}$. Since $4\ge m(G)\ge e(G)/v(G) = \sum_{x\in V(G)} \deg(x)/(4 v(G))$, a vertex of minimum degree has degree at most $16$; by the minimality of $G$ and Claims~\ref{obs:link_col} and~\ref{obs:except_2}, its link is isomorphic to two copies of $K^{(3)}_5$ sharing $4$ vertices and thus has exactly $16$ edges. As the average degree is at most $16$, it follows that for all $x \in V(G)$ we have $\deg(x)=16$ and the link $G_x$ is isomorphic to two copies of $K^{(3)}_5$ sharing $4$ vertices. Consider any vertex $x\in V(G)$ and let the two copies of $K^{(3)}_5$ in $G_x$ be on the vertex sets $\{a_1,b_1,b_2, b_3, b_4\}$ and $\{a_2,b_1,b_2,b_3, b_4\}$. Note that $\{x, a_1, a_2, b_i\} \notin E(G)$ for every $b_i \in \{b_1, b_2, b_3, b_4\}$.
Next, we consider the link $G_{a_1}$. Then $\{b_1, b_2, b_3, b_4, x\} \subseteq V(G_{a_1})$, and we let $a'$ be the remaining vertex. If $a' \neq a_2$ then there exists some $b_i$, say $b_1$, such that $\{b_2, b_3, b_4, a_1, a_2, a', x\} \subseteq V(G_{b_1})$, which is not possible as $|V(G_{b_1})| = 6$. Applying the same argument to $G_{a_2}$, we have
$$ V(G_{a_1}) = \{b_1, b_2, b_3, b_4, x, a_2\} \quad \text{and} \quad V(G_{a_2}) = \{b_1, b_2, b_3, b_4, x, a_1\}. $$
It now follows from $\{x, a_1, a_2, b_i\} \notin E(G)$ that $\{b_1, b_2, b_3, b_4\}$ induces a $K_4^{(3)}$-copy in both $G_{a_1}$ and $G_{a_2}$, and furthermore that $\{b_1, b_j, a_1, a_2\} \in E(G)$ for every $b_j \in \{b_2, b_3, b_4\}$. This implies $\deg(b_1) \ge 18 > 16$, a contradiction.
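In detail: the $9$ edges of $G$ containing both $x$ and $b_1$ come from the $9$ triples of $G_x$ that contain $b_1$; the $K_4^{(3)}$-copy on $\{b_1,\ldots,b_4\}$ in $G_{a_1}$ contributes the $3$ edges $\{a_1,b_1,b_i,b_j\}$, and the edges $\{b_1,b_j,a_1,a_2\}$ contribute $3$ more, none of which contain $x$; finally, the $K_4^{(3)}$-copy on $\{b_1,\ldots,b_4\}$ in $G_{a_2}$ contributes the $3$ edges $\{a_2,b_1,b_i,b_j\}$, which contain neither $x$ nor $a_1$. Altogether, $\deg(b_1) \ge 9 + 3 + 3 + 3 = 18$.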
\end{proof}
We are now ready to prove Lemma \ref{lemma:nbounded_hyper}. We split the proof into two parts. First, we consider cliques of the type $K_{\ell + 1}^{(\ell)}$.
\begin{proof}[Proof of Lemma \ref{lemma:nbounded_hyper} -- small cliques $K^{(\ell)}_{\ell+1}$, $\ell\ge 4$ ]
We prove the assertion by induction on $\ell$. The case $\ell=4$ follows from Lemma~\ref{lem:four_critical} as
$m_4(K^{(4)}_5)=4$. Next, let $\ell>4$ and assume that the claim holds for $K^{(\ell-1)}_{\ell}$. Let us assume towards a contradiction that there exists an $\ell$-graph $G$ with $m(G)\le m_{\ell}(K^{(\ell)}_{\ell+1})=\ell$ such that $G \aramcolk[2] K_{\ell+1}^{(\ell)}$. Furthermore, let $G$ be a vertex-minimal such $\ell$-graph.
Claim~\ref{obs:link_col} implies
\[
G_u \aramcolk[2] K^{(\ell-1)}_{\ell}
\]
for every vertex $u\in V(G)$.
By the induction hypothesis we must have
$$m(G_u)>m_{\ell-1}(K^{(\ell-1)}_{\ell})=\ell-1.$$
Consider some $S\subseteq V(G_u)$ such that $m(G_u) = e(G_u[S]) / |S|$. Note that $|S|\ge \ell+2$ as otherwise $e(G_u[S]) \le \binom{\ell+1}{\ell-1} = \binom{\ell+1}{2}$ and thus $m(G_u) \le \ell/2<\ell-1$, contradicting our assumption. Hence, $e(G_u) \ge e(G_u[S]) = m(G_u) \cdot |S| \ge(\ell-1) (\ell+2) > \ell^2$. On the other hand, a vertex $u$ of minimum degree satisfies
\begin{equation}\label{mindegree}
e(G_u)\le \ell\cdot m(G)\le\ell \cdot m_{\ell}(K^{(\ell)}_{\ell+1})=\ell^2,
\end{equation}
yielding the desired contradiction.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma:nbounded_hyper} -- large cliques $K^{(\ell)}_{r}$, $r\ge \ell+2$ ]
We prove the lemma by induction on $\ell$. For $\ell = 2$ the claim follows from Lemma \ref{lemma:nbounded_general}. Let now $\ell > 2$ and assume that the claim holds for all $K_{r}^{(\ell-1)}$ with $r \ge \ell + 2$.
Let us assume towards a contradiction that there exists some $r \ge \ell + 2$ and an $\ell$-graph $G$ with $m(G) \le m_\ell(K_r^{(\ell)})$ such that $G \aramcolk[2] K_r^{(\ell)}$. Furthermore, we assume that $G$ is a minimal such $\ell$-graph with respect to the number of vertices. We show that then
\begin{equation}
\delta(G) > (r + 1) \cdot m_{\ell - 1}(K_{r-1}^{(\ell - 1)}). \label{eq:hyper_min_degree}
\end{equation}
Assuming that equation \eqref{eq:hyper_min_degree} holds, we can lower bound $m(G)$ as follows,
\begin{align}
e(G) / v(G) &= \frac{ \sum_{v \in V(G)}\deg(v)}{v(G) \cdot \ell} > \frac{r+1}{\ell} \cdot m_{\ell - 1}(K_{r-1}^{(\ell - 1)}) = \frac{(r+1) \cdot (\binom{r-1}{\ell - 1} - 1)}{ \ell (r - 1 - \ell + 1)} \nonumber \\
&= \frac{(r + 1) \binom{r-1}{\ell - 1}}{\ell (r - \ell)} - \frac{r + 1}{\ell (r - \ell)} \stackrel{(r\ge\ell+2)} \ge \frac{(r + 1) \cdot \tfrac{\ell}{r} \binom{r}{\ell}}{\ell (r - \ell)} - \frac{r + 1}{r} \nonumber \\
&> \frac{r + 1}{r} \cdot m_\ell({K_r^{(\ell)}}) - \frac{r + 1}{r} = m_\ell(K_r^{(\ell)}) + \frac{m_\ell(K_r^{(\ell)}) - (r + 1)}{r}. \label{eq:hyper_density_est}
\end{align}
On the other hand, for $r \geq \ell+3$ and $\ell \ge 3$ with $(r,\ell)\neq(6,3)$, so that in particular $r\ge 7$, we have
$$ m_\ell(K_r^{(\ell)}) = \frac{\binom{r}{\ell} - 1}{r - \ell} \ge \frac{\binom{r}{3} - 1}{r - 3} \ge r + 1. $$
Furthermore, for $r=\ell+2\ge 6$ we have $m_\ell(K_r^{(\ell)})\ge r+1$ as well.
Together with \eqref{eq:hyper_density_est} this implies $m(G) > m_\ell(K_r^{(\ell)})$ for all $r\ge \ell+2$ with $(r,\ell)\notin\{(5,3),(6,3)\}$, which contradicts our choice of $G$ in these cases. It remains to consider the cases $(r,\ell)\in\{(5,3),(6,3)\}$.
One easily checks that in both of the remaining cases
$$ m(G) \stackrel{\eqref{eq:hyper_density_est}}> \frac{r + 1}{\ell} \cdot m_{\ell-1}(K_{r-1}^{(\ell - 1)}) \ge m_\ell(K_r^{(\ell)}), $$
again contradicting the assumption on $G$. Therefore, no such $G$ exists and the claim follows.
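For concreteness, the two remaining checks amount to
\begin{align*}
(r,\ell)=(5,3)\colon&\quad \tfrac{6}{3}\cdot m_2(K_4) = 2\cdot\tfrac{5}{2} = 5 \ge \tfrac{9}{2} = m_3(K^{(3)}_5),\\
(r,\ell)=(6,3)\colon&\quad \tfrac{7}{3}\cdot m_2(K_5) = \tfrac{7}{3}\cdot 3 = 7 \ge \tfrac{19}{3} = m_3(K^{(3)}_6).
\end{align*}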
It remains to prove equation \eqref{eq:hyper_min_degree}. Consider some vertex $u \in V(G)$ of minimum degree. Similarly to the case of $K^{(\ell)}_{\ell+1}$ cliques, the minimality of $G$ implies that
\begin{equation}
G_u \;\aramcolk[2]\; K_{r-1}^{(\ell - 1)}. \label{eq:link_G_v}
\end{equation}
With~\eqref{eq:link_G_v} it follows from the induction assumption that
\begin{equation}
m(G_u) > m_{\ell - 1}(K_{r-1}^{(\ell - 1)}). \label{eq:m_G_u}
\end{equation}
One easily checks that
\begin{align*}
m(K_{r}^{(\ell - 1)}) = \frac 1 r \binom{r}{\ell - 1} = \frac{1}{r} \cdot \frac{r}{r - \ell + 1} \binom{r-1}{\ell - 1} < \frac{\binom{r-1}{\ell - 1} - 1}{r - \ell} = m_{\ell-1}(K_{r-1}^{(\ell - 1)}).
\end{align*}
Together with \eqref{eq:m_G_u} this implies that the densest subgraph of $G_u$ has to be a graph on at least $r+1$ vertices. Thus, we get from \eqref{eq:m_G_u} that
$e(G_u) > (r+1) \cdot m_{\ell-1}(K_{r-1}^{(\ell - 1)})$ and as $\delta(G) \geq e(G_u)$ this concludes the proof of \eqref{eq:hyper_min_degree}.
\end{proof}
\subsection{The Ramsey problem for hypergraph cliques} \label{sec:ramsey_cliques}
As a last application of our method we give a proof of Theorem \ref{thm:ramsey_cliques}.
\begin{proof}[Proof of Theorem \ref{thm:ramsey_cliques}]
Observe that if a hypergraph $G$ is not $2$-bounded anti-Ramsey for $F$ then it is also not Ramsey for $F$ and two colours and hence, as a $2$-colouring is in particular an $r$-colouring, not Ramsey for $F$ and any number $r \ge 2$ of colours. Indeed, consider some $2$-bounded colouring of $G$ without a rainbow copy of $F$. As each colour occurs at most twice, we can recolour one edge of each colour class red and the other one (if it exists) blue. Now observe that any monochromatic subgraph in this red/blue colouring corresponds to a rainbow subgraph in the original colouring. Thus, no monochromatic copy of $F$ appears. As an immediate consequence of Theorem \ref{thm:2_bnd} we get the $0$-statement of Theorem \ref{thm:ramsey_cliques} for all $\ell$-graphs which are cliques of size at least $\ell + 1$, with the exception of the (hyper)graphs $K_3$ and $K^{(3)}_4$. The case of $K_3$ was already shown in Theorem~\ref{thm:rr}, thus it remains to consider $K^{(3)}_4$.
Note that Algorithm \ref{algo:ar_2bnd}, with line $4$ changed such that it assigns red colour to $e_1$ and blue to $e_2$, provides a $2$-colouring of the hypergraph $G$. As the analysis and the correctness of the algorithm remains the same as in the proof of Theorem \ref{thm:2_bnd}, it suffices to show that if $m(G') \le m_3(K^{(3)}_4)=3$ then $G' \nramcolk[2] K^{(3)}_4$.
Let $G$ be a vertex-minimal $3$-graph with $m(G) \le 3$ and $G \ramcolk[2] K^{(3)}_4$, and let
$u \in V(G)$ be a vertex of minimum degree. The analogue of Claim~\ref{obs:link_col} for the Ramsey arrow, which follows by the same proof, yields $G_u \ramcolk[2] K_3$. However, $\deg(u)\le 3\cdot m(G)\le 3\cdot m_{3}(K^{(3)}_4)=9$, and it is easy to see that any graph with fewer than $15$ edges is not Ramsey for $K_3$ and two colours, see e.g.~\cite{EFRS78}. This contradiction completes the proof.
\end{proof}
In a forthcoming paper~\cite{GNPSST} we extend Theorem~\ref{thm:ramsey_cliques} to various classes of $\ell$-graphs other than cliques. Furthermore, we find examples of $\ell$-graphs $F$ for which the threshold $p$ for $G^{(\ell)}(n, p)\ramcolk[k] F$ is neither determined by $m_{\ell}(F)$ nor by the density $m(G)$ of some obstruction $\ell$-graph $G$, but rather exhibits some asymmetric behaviour.
\section{Introduction and Results} \label{sec:intro}
A hypergraph $G$ is \emph{Ramsey} for a hypergraph $F$ and an integer $r$, if every colouring of the edges of $G$ with $r$ colours contains a copy of $F$ with all its edges having the same colour. A celebrated theorem of Ramsey~\cite{ramsey1930problem} states that if $G$ is a large enough complete hypergraph then $G$ is Ramsey for $F$ and $r$. A priori it is not clear whether this follows from the density of a complete hypergraph or its rich structure. It was shown only later that actually the latter is the case: there exist sparse graphs with rich enough structure so that they are Ramsey for $F$. For example, a result of Ne\v set\v ril and R\"odl~\cite{nevsetvril1976ramsey} states that for every $k$ there exists a sparse graph $G$ that does not contain a clique of size $k+1$, but that nevertheless is Ramsey for a clique of size $k$. Nowadays, the easiest way to prove such a result is by studying Ramsey properties of random (hyper)graphs.
Over the last decades the study of various Ramsey-type problems for random (hyper)graphs received a lot of attention. In their landmark result, R\"odl and Ruci\'nski~\cite{RR93,RR94,Rodl:1995} gave a precise characterization of all edge probabilities $p = p(n)$ for which Ramsey's theorem holds in
the random graph $G(n, p)$ for a given graph $F$ and $r$ colors.
The corresponding problem for hypergraphs remained open for more than 15 years. Only recently, Friedgut, R\"odl and Schacht~\cite{friedgut2010ramsey} and independently Conlon and Gowers~\cite{conlon2010combinatorial} obtained an upper bound analogous to the graph case. However, the question whether there exists a matching lower bound remained open.
More recently, other variations on Ramsey-type problems in random graphs have been
investigated. These are so-called anti-Ramsey properties such as finding
rainbow copies of a given graph $F$ in any $r$-bounded colouring of $G(n, p)$, initiated by Bohman, Frieze, Pikhurko and Smyth~\cite{Bohman:2010}, and
in any proper edge-colouring of $G(n, p)$, introduced by Kohayakawa, Konstadinidis and Mota~\cite{Kohayakawa:2011,Kohayakawa:2014}.
The aim of our paper is twofold. First, we introduce a general framework for proving lower bounds for Ramsey-type problems for random hypergraphs. Roughly speaking, the framework allows us to reduce the probabilistic problem
\begin{center}
\emph{Does the Ramsey property at hand hold for}\\
\emph{random (hyper)graphs with edge probability $p$ w.h.p.?}
\end{center}
to a deterministic question of whether there exists a (hyper)graph that forms an \emph{obstruction}, or more precisely
\begin{center}
\emph{Does there exist a (hyper)graph with density at most $d(F,r)$ on at most $v(F,r)$ vertices }\\
\emph{that does not have the given Ramsey property?}
\end{center}
In the second part of the paper we then apply this framework to various Ramsey-type problems in random (hyper)graphs by
providing proofs of lower bounds that match the known upper bounds up to a constant factor.
\subsection{Definitions and Notations}
For background on graph theory we refer the reader to standard text books, see e.g.~\cite{Bollobas_book}.
In particular, we denote the number of vertices and edges of a graph $G = (V, E)$ with $v(G)$ and $e(G)$, respectively. For a subset of vertices $V' \subseteq V$, we denote with $G[V']$ the subgraph of $G$ induced by the vertices in $V'$. Furthermore, for a subset of vertices $S \subseteq V$ we use the shorthand $G \setminus S$ to denote the subgraph $G[V \setminus S]$. Similarly, for $E' \subseteq E$ we write $G \setminus E'$ to denote the graph $(V, E \setminus E')$, and by $G[E']$ we mean a graph with the edge set $E'$ on the vertex set $\cup_{e\in E'}e$. Given a graph $G$ and a vertex $v \in V(G)$, we write $N_G(v)$ for the set of neighbours of $v$ in $G$, $\deg_G(v) := |N_G(v)|$ for its degree and $\delta(G) = \min_{v \in V(G)} \deg_G(v)$
denotes the minimum degree of $G$. If the graph $G$ is clear from the context, we omit it in the subscript. For two graphs $G_1$ and $G_2$, we write $G_1 \cong G_2$ if they are isomorphic.
An $\ell$-uniform hypergraph $G$, or $\ell$-graph for short, is a pair $(V,E)$ with the vertex set $V$ and $E\subseteq\binom{V}{\ell}$ the set of (hyper)edges. We will use the same notation as above for hypergraphs. Furthermore, a $k$-set is a set of cardinality $k$.
The classical Ramsey problem is the following. Given two $\ell$-graphs $F$ and $G$ and an integer~$r$, we write
$$G \ramcolk[r] F$$
if every edge colouring of $G$ with $r$ colours contains a monochromatic copy of $F$. Clearly, it is essential to restrict the number of colours. Otherwise, using a different colour for each edge in $G$ trivially avoids any monochromatic copy of $F$. Ramsey's theorem~\cite{ramsey1930problem} states that for every $\ell$ and $r$ and
every $\ell$-graph $F$
we have, for a large enough $n$, that
\[
K^{(\ell)}_n \ramcolk[r] F,
\]
where $K^{(\ell)}_n$ denotes the complete $\ell$-graph $\left([n],\binom{[n]}{\ell}\right)$ on $n$ vertices.
It is natural to study analogues of Ramsey's theorem in the random setting. More precisely, we consider a binomial random $\ell$-uniform hypergraph $G^{(\ell)}(n, p)$ on $n$ vertices in which every subset of size $\ell$ forms an edge with probability $p$ independently. In the case $\ell=2$ (the graph case) we use $G(n, p)$ instead of $G^{(2)}(n,p)$. Given a (hyper)graph property $\mathcal{P}$, we say that a function $p_0 = p_0(n)$ is a threshold for $\mathcal{P}$ if
$$
\lim_{n \rightarrow \infty} \Pr[G^{(\ell)}(n, p) \in \mathcal{P}] = \begin{cases}
1 \quad \text{if} &p \gg p_0(n),\\
0 \quad \text{if} &p \ll p_0(n).
\end{cases}
$$
We say that an event $\mathcal{E}$ holds with high probability (w.h.p.\ for short) if $\lim_{n \rightarrow \infty} \Pr[\mathcal{E}] = 1$. It is easy to see that the Ramsey problem induces a monotone property and it follows from the result of Bollob{\'a}s and Thomason~\cite{bollobas1987threshold} that there has to exist some threshold $p_0(n)$.
In this paper we will study $0$-statements of the above Ramsey-type problem and its variations for random $\ell$-graphs. Before giving an account of previous results and our new ones, let us provide some intuition for where the threshold for the various Ramsey properties may be located (for most graphs $F$). Observe that the expected number of copies of $F$ in $G^{(\ell)}(n, p)$ is of order $n^{v(F)}p^{e(F)}$, where by $v(F)$ and $e(F)$ we denote the number of vertices and edges of $F$, respectively. On the other hand, the expected number of edges of $G^{(\ell)}(n, p)$ is of order $n^\ell p$. That is, if $n^{v(F)}p^{e(F)} \ll n^\ell p$ then we expect the copies of $F$ to be loosely scattered -- and finding a colouring that avoids the desired copy of $F$ should be an easy task. Similarly, if $n^{v(F)}p^{e(F)} \gg n^\ell p$ we expect that the copies of $F$ overlap so heavily that any colouring should contain the desired copy of $F$.
Actually, the same argument holds for any subgraph of $F$ and this thus motivates the definition of the so-called \emph{$\ell$-density} that we now give. For an $\ell$-graph $G=(V,E)$ on at least $\ell + 1$ vertices, we set $d_\ell(G) := (e(G) - 1)/(v(G)-\ell)$ and denote by $m_\ell(G)$ the maximum $\ell$-density of any subgraph of $G$,
$m_\ell(G) = \max_{{J \subseteq G, v(J) \geq \ell + 1}} d_\ell(J)$.
If $m_\ell(G) = d_\ell(G)$, we say that $G$ is \emph{$\ell$-balanced}, and if in addition $m_\ell(G) > d_\ell(J)$ for every subgraph $J \subsetneq G$ with $v(J) \geq \ell + 1$, we say that $G$ is \emph{strictly $\ell$-balanced}. Another related notion which will be used extensively throughout the paper is the \emph{density} of an $\ell$-graph defined as $d(G) = e(G) / v(G)$. Similarly, we denote with $m(G)$ the maximum density over all subgraphs of $G$, i.e.
$m(G) = \max_{J \subseteq G} d(J)$.
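For instance, equating the two expected counts from the heuristic above, $n^{v(F)}p^{e(F)} = n^{\ell} p$, and solving for $p$ gives
\[
p = n^{-\frac{v(F)-\ell}{e(F)-1}} = n^{-1/d_\ell(F)},
\]
which is exactly where one expects the behaviour of the various Ramsey properties to change; taking the maximum over all subgraphs of $F$ leads to the exponent $-1/m_\ell(F)$ that appears in the results below.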
\subsection{Results -- old and new}
\subsubsection{Ramsey's theorem for random $\ell$-graphs}
The systematic study of Ramsey properties of random graphs was initiated by
\L{}uczak, Ruci\'nski and Voigt~\cite{LRV92} in the early nineties. Shortly thereafter
R\"odl and Ruci\'nski determined the threshold function
of the graph Ramsey property
for all graphs $F$. Below we state their result for all but a very special class of acyclic graphs.
\begin{theorem}[\cite{RR93,RR94,Rodl:1995}] \label{thm:rr}
Let $r \ge 2$ and let $H$ be a graph that is neither a forest of stars nor, in the case $r = 2$, a forest of stars and paths of length $3$. Then there exist constants $c, C > 0$ such that
\begin{equation*}
\lim_{n\to \infty}\Pr[G(n, p) \ramcolk[r] H] =
\begin{cases}
1,&\text{if \(p \geq Cn^{-1/m_2(H)}\)}, \\
0,&\text{if \(p \leq cn^{-1/m_2(H)}\)}.
\end{cases}
\end{equation*}
\end{theorem}
In the case when $F$ is a triangle, Friedgut, R\"odl, Ruci\'nski and Tetali~\cite{friedgut_sharp} have strengthened Theorem \ref{thm:rr} by showing that there exists a sharp threshold.
Extending Theorem \ref{thm:rr} to hypergraphs, R\"odl and Ruci\'nski~\cite{rodl1998ramsey} proved that for the $3$-uniform clique on $4$ vertices and $2$ colours the $1$-statement is determined by the $3$-density, as one would expect. They also conjectured that, similarly to the graph case, the threshold should be determined by the $\ell$-density for ``most'' of the $\ell$-graphs $F$. R\"odl, Ruci\'nski and Schacht~\cite{rodl2007ramsey} later showed that the $1$-statement actually holds for all $\ell$-partite $\ell$-graphs. In full generality the $1$-statement was resolved only recently by Friedgut, R\"{o}dl and Schacht~\cite{friedgut2010ramsey} and independently by Conlon and Gowers~\cite{conlon2010combinatorial}.
\begin{theorem}[\cite{friedgut2010ramsey,conlon2010combinatorial}]\label{thm:hypergraph-1-statement}
Let $F$ be an $\ell$-graph with maximum degree at least 2 and let $r \geq 2$. Then there exists a constant $C > 0$ such that for $p \geq C n^{-1/m_\ell(F)}$ we have
\begin{equation*}
\lim_{n\to \infty} \Pr[G^{(\ell)}(n, p) \ramcolk[r] F] = 1.
\end{equation*}
\end{theorem}
Recall that $K_k^{(\ell)}$ denotes the complete $\ell$-graph on $k$ vertices.
In this paper we make progress towards providing the missing lower bounds by resolving the case of cliques.
\begin{theorem} \label{thm:ramsey_cliques}
Let $k, \ell$ be such that $2 \le \ell < k$ and let $r \ge 2$. Then there exist constants $c, C > 0$ such that
\begin{equation*}
\lim_{n\to \infty}\Pr[G^{(\ell)}(n, p) \ramcolk[r] K_k^{(\ell)}] =
\begin{cases}
1,&\text{if \(p \geq Cn^{-1/m_\ell(K_k^{(\ell)})}\)}, \\
0,&\text{if \(p \leq cn^{-1/m_\ell(K_k^{(\ell)})}\)}.
\end{cases}
\end{equation*}
\end{theorem}
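For cliques the relevant density is explicit: the maximum in the definition of $m_\ell$ is attained by the clique itself, so that
\[
m_\ell\bigl(K_k^{(\ell)}\bigr) = \frac{\binom{k}{\ell}-1}{k-\ell};
\]
for instance, $m_2(K_4) = 5/2$ and $m_3(K^{(3)}_4) = 3$.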
We will deduce Theorem~\ref{thm:ramsey_cliques} as a straightforward corollary from the results in the next subsection. We note that~\cite{PhDhenning} also contains a proof of Theorem~\ref{thm:ramsey_cliques} by different means.
\subsubsection{Anti-Ramsey property for $r$-bounded colourings}
If we allow colourings with an unbounded number of colours we arrive at the so-called anti-Ramsey problem where we are interested in finding a rainbow copy of $F$, i.e., a copy of $F$ in which each edge uses a different colour.
Again, to avoid trivialities one needs to forbid colourings with too few colours. This has been done in several different ways. Here we insist that each colour is used at most $r$ times (we call this an $r$-bounded colouring).
We write
$$G \aramcolk[r] F$$
if every $r$-bounded edge colouring of $G$ contains a rainbow copy of $F$.
Lefmann, R\"odl and Wysocka~\cite{Lefmann:1996} considered the following question: given a complete graph $G$ whose edges are coloured by an $r$-bounded colouring, what is the largest $k$ such that $G$ contains a rainbow copy of $K_k$?
Bohman, Frieze, Pikhurko and Smyth~\cite{Bohman:2010} initiated the study of a similar question in $G(n, p)$. The authors proved that given a graph $F$ and a constant $r \ge r(F)$, the threshold for the property of being $r$-bounded anti-Ramsey matches the intuition.
\begin{theorem}[\cite{Bohman:2010}] \label{thm:bohman_ar}
Let $F$ be a graph which contains a cycle. Then there exists a constant $r_0 = r_0(F)$ such that for each $r \ge r_0(F)$ there exist constants $c, C > 0$ such that
\begin{equation*}
\lim_{n\to \infty}\Pr[G(n, p) \aramcolk[r] F] =
\begin{cases}
1,&\text{if \(p \geq Cn^{-1/m_2(F)}\)}, \\
0,&\text{if \(p \leq cn^{-1/m_2(F)}\)}.
\end{cases}
\end{equation*}
\end{theorem}
It is easy to see that for the case $F = K_3$ and $2$-bounded colourings there exists an obstruction, namely the complete graph on $4$ vertices. We refer the reader to \cite{Bohman:2010} for details regarding the results in the case $F = K_3$. For other graphs $F$ it is not obvious whether the restriction on $r$ is really needed. Indeed, the following theorem strengthens the 0-statement of Theorem \ref{thm:bohman_ar} by showing that $r=2$ actually suffices for most cases. In part $(iii)$ we also provide an extension to hypergraphs in the case of cliques.
\begin{theorem}\label{thm:2_bnd}
Let $\ell \ge 2$ and $F$ be an $\ell$-graph. Let $F' \subseteq F$ be a strictly $\ell$-balanced subgraph such that $m_\ell(F') = m_\ell(F)$. Then there exists a constant $c > 0$ such that $G \sim G^{(\ell)}(n, p)$ w.h.p.\ satisfies $G \naramcolk[2] F$ if one of the following holds,
\begin{enumerate}[(i)]
\item $\ell = 2$, $F'$ contains a cycle, $F' \ncong K_3$, $F' \ncong C_4$ and $p \le cn^{-1/m_2(F)}$, or
\item $\ell=2$, $F' \cong C_4$ and $p \ll n^{-1/m_2(C_4)}$, or
\item $\ell \ge 3$, $r \ge \ell + 1$ and $(\ell, r) \neq (3, 4)$, $F' \cong K_r^{(\ell)}$ and $p \le cn^{-1/m_\ell(K_r^{(\ell)})}$.
\end{enumerate}
\end{theorem}
As an interesting corollary of Theorem \ref{thm:2_bnd}, we briefly mention the question of Maker-Breaker $F$-games on random (hyper)graphs.
We write
$$G \xrightarrow{\text{{\tiny $game$}}} F$$
if in the following game Maker has a winning strategy: two players,
Maker and Breaker, alternately claim unclaimed edges of $G$ until all the edges are claimed. Maker wins if he claims all the edges of some copy of $F$; otherwise Breaker wins. (For the sake of definiteness we assume that Maker has the first move.)
It is easy to see that the property of not being $2$-bounded anti-Ramsey for $F$ is stronger than being a Breaker's win in the Maker-Breaker $F$-game. Indeed, assume that a hypergraph $G$ is such that $G \naramcolk[2] F$. Then Breaker can apply the following strategy: fix some $2$-bounded colouring of $G$ without a rainbow copy of $F$ and, whenever Maker claims an edge, claim the other edge of the same colour (if no such unclaimed edge exists, claim an arbitrary unclaimed edge). Then Maker's graph corresponds to a rainbow subgraph of $G$ and thus does not contain an $F$-copy. Therefore, Theorem \ref{thm:2_bnd} slightly extends the result of Nenadov, Steger and Stojakovi\'c \cite{NenadovSteger:2014} by also providing a lower bound in the case of hypergraph cliques.
\subsubsection{Anti-Ramsey property for proper edge colourings}
We write
$$G \xrightarrow[{\text{\raisebox{7.5pt}[\ht\strutbox]{{\tiny $prp$}}}}]{\text{{\tiny $a$-$ram$}}} F$$
if every proper edge colouring of $G$ contains a rainbow copy of $F$.
The first result on the relation between random graphs and the proper-colouring version of the anti-Ramsey property comes from the following question raised by Spencer: is it true that for every $g$ there exists a graph $G$ with girth at least $g$ such that $G \xrightarrow[{\text{\raisebox{7.5pt}[\ht\strutbox]{{\tiny $prp$}}}}]{\text{{\tiny $a$-$ram$}}} C_k$ for some $k$? The question was answered in the positive by R\"odl and Tuza~\cite{rodl1992rainbow}. They proved that for every $k$ there exists some sufficiently small $p = p(n)$ such that w.h.p.\ $G(n, p) \xrightarrow[{\text{\raisebox{7.5pt}[\ht\strutbox]{{\tiny $prp$}}}}]{\text{{\tiny $a$-$ram$}}} C_k$. Only much later, Kohayakawa, Konstadinidis and Mota~\cite{Kohayakawa:2011, Kohayakawa:2014} started a systematic study of this property in the random setting. In particular, they proved that the upper bound is as expected.
\begin{theorem}[\cite{Kohayakawa:2014}]
Let $F$ be a graph. Then there exists a constant $C > 0$ such that for $p \ge Cn^{-1/m_2(F)}$ we have
$$ \lim_{n \rightarrow \infty} \Pr[G(n, p) \xrightarrow[{\text{\raisebox{7.5pt}[\ht\strutbox]{{\tiny $prp$}}}}]{\text{{\tiny $a$-$ram$}}} F] = 1. $$
\end{theorem}
Note that $F = K_3$ is a trivial case since $K_3$ is an obvious obstruction. Therefore, any graph $F$ which contains $K_3$ as the $2$-densest subgraph is a potential candidate for having an obstruction. Indeed, the above authors showed in~\cite{Kohayakawa-Anti:2014} that there exists an infinite family of graphs for which the threshold is asymptotically below the guessed one. Here we prove that at least in the case of sufficiently large complete graphs and cycles, the situation is as expected.
\begin{theorem}\label{thm:ar_proper}
Let $F$ be a graph isomorphic to either a cycle on at least $7$ vertices or a complete graph on at least $19$ vertices. Then there exist constants $c,C > 0$ such that
\begin{equation*}
\lim_{n\to \infty}\Pr[G(n, p) \xrightarrow[{\text{\raisebox{7.5pt}[\ht\strutbox]{{\tiny $prp$}}}}]{\text{{\tiny $a$-$ram$}}} F] =
\begin{cases}
1,&\text{if \(p \geq Cn^{-1/m_2(F)}\)}, \\
0,&\text{if \(p \leq cn^{-1/m_2(F)}\)}.
\end{cases}
\end{equation*}
\end{theorem}
We remark that our bounds on the minimum size of the cliques resp.\ cycles are simply a consequence of our proof and probably not tight. As far as we know, the result could actually hold for all cliques and cycles of size at least $4$.
\subsection{Outline of the Proof and organisation of the paper}\label{sec:outline}
The main goal of this paper is to provide a unifying framework for proving $0$-statements for Ramsey-type properties. The main idea is to view the problem from an algorithmic perspective: we aim at providing an algorithm that finds the desired colouring with high probability. To do this we take the given random hypergraph $G^{(\ell)}(n, p)$ as input and first ``strip off'' easily colourable edges, where the definition of ``easily colourable'' depends on the type of the given Ramsey problem. We then argue that whatever remains after the end of this stripping procedure can be partitioned into blocks that can be coloured separately. Our key result (Theorem~\ref{lemma:main}) states that with probability $1-o(1)$ these blocks will have size at most some constant $L$ that depends (in some well-understood way) on the graph $F$. It is well known that in a typical random hypergraph with edge probability $n^{-\alpha}$ all subgraphs of constant size have density at most $1/\alpha$. This implies that it suffices to prove that a statement of the form
\begin{equation}\label{eq:deterministic}
\text{all $\ell$-graphs } G\text{ with $m(G)\le m_\ell(F)$ satisfy}\quad G\xslashedrightarrowa{\ *\ } F
\end{equation}
holds \emph{deterministically}, where by $\xrightarrow{\ *\ }$ we mean any of the discussed Ramsey properties. Note that any graph with density $m_\ell(F)$ appears in $G^{(\ell)}(n, p)$ with constant probability for $p=cn^{-1/m_\ell(F)}$ (cf. proof of Corollary~\ref{lemma:main_density} for details). Thus, the condition in \eqref{eq:deterministic} is actually {\em necessary} for the $0$-statement to hold.
Formally, we call a graph $G$ an \emph{obstruction} for $F$ if $m(G)\le m_\ell(F)$ and $G\xrightarrow{\ *\ } F$.
Note that such obstructing graphs $G$ indeed do exist. For some Ramsey type problems there are only a few, for others there exist infinitely many. We comment on that in more detail later.
Our aim is to show that the condition in \eqref{eq:deterministic} is also {\em sufficient}, i.e.\ in order to prove the $0$-statement it is sufficient to show that obstructions do not exist.
We summarize this in the following ``meta-theorem''.
\begin{metathm}
Let $F$ be an $\ell$-graph for which \eqref{eq:deterministic} holds. Then
\begin{equation*}
\lim_{n\to \infty}\Pr[G^{(\ell)}(n, p) \xrightarrow{\ *\ } F] =
\begin{cases}
1,&\text{if \(p \geq Cn^{-1/m_\ell(F)}\)}, \\
0,&\text{if \(p \leq cn^{-1/m_\ell(F)}\)}.
\end{cases}
\end{equation*}
\end{metathm}
Recall from the previous section that the $1$-statements are known to hold for all Ramsey problems considered in this paper. The key statement of our meta theorem is thus that the bound from the $1$-statement is actually tight, whenever \eqref{eq:deterministic} holds.
In Section~\ref{sec:grow_approach} we prove our framework theorem, Theorem~\ref{lemma:main}. In Section~\ref{sec:applications} we provide the proofs for Theorems~\ref{thm:ramsey_cliques},~\ref{thm:2_bnd} and~\ref{thm:ar_proper} by showing deterministic statements corresponding to~\eqref{eq:deterministic}.
\section{A general framework}\label{sec:grow_approach}
\subsection{Outline of the Method}
The key idea for the proof of the Meta-Theorem from Section~\ref{sec:outline} is to introduce appropriate notions that capture the structure of overlapping copies of $F$. In the following definitions we always assume that $F$ contains at least two edges.
\begin{definition}[$F$-equivalence]
Given $\ell$-graphs $F$ and $G$, we say that two edges $e_1, e_2 \in E(G)$ are \emph{$F$-equivalent}, with notation $e_1 \equiv_F e_2$, if for every $F$-copy $F'$ in $G$ we have $e_1 \in E(F')$ if and only if $e_2 \in E(F')$.
\end{definition}
\begin{definition}
Given an $\ell$-graph $F$ we define $\gamma(F)$ to be the largest intersection of two distinct edges in $F$, i.e.
$$ \gamma(F) := \max \{ |e_1 \cap e_2| \; : \; e_1, e_2 \in E(F) \text{ and } e_1 \neq e_2\}. $$
\end{definition}
\begin{definition}[$F$-closed property]
For given $\ell$-graphs $F$ and $G$, we define the property of being \emph{$F$-closed} as follows:
\begin{itemize}
\item an edge $e \in E(G)$ is \emph{$F$-closed} if
\begin{enumerate}[(a)]
\item $\gamma(F) = \ell - 1$ and $e$ belongs to at least two $F$-copies in $G$ or
\item $\gamma(F) < \ell - 1$ and $e$ belongs to at least two $F$-copies in $G$ and no edge $e' \in E(G) \setminus \{e\}$ is $F$-equivalent to $e$,
\end{enumerate}
\item an $F$-copy $F'$ in $G$ is \emph{$F$-closed} if at least three edges from $E(F')$ are closed,
\item the $\ell$-graph $G$ is \emph{$F$-closed} if every vertex and edge of $G$ belongs to at least one $F$-copy and every $F$-copy in $G$ is closed.
\end{itemize}
If the $\ell$-graph $F$ is clear from the context, we simply write \emph{closed}.
\end{definition}
\begin{definition}[$F$-blocks]
Given $\ell$-graphs $F$ and $G$ such that $G$ is $F$-closed, we say that $G$ is an $F$-block if for every non-empty proper subset of edges $E' \subsetneq E(G)$ there exists an $F$-copy $F'$ in $G$ such that $E(F') \cap E' \neq \emptyset$ and $E(F') \setminus E' \neq \emptyset$ (in other words, there exists an $F$-copy which partially lies in $E'$).
\end{definition}
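For example, for $F = K_3$ (so $\ell = 2$ and $\gamma(F) = 1$) the complete graph $K_4$ is $F$-closed, since every edge of $K_4$ lies in exactly two triangles; one readily checks that $K_4$ is moreover an $F$-block.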
With these definitions at hand we can now formulate our key result:
\begin{theorem} \label{lemma:main}
Let $\ell \ge 2$ be an integer and $F$ a strictly $\ell$-balanced $\ell$-graph such that either $F$ has exactly three edges and $\gamma(F) = \ell - 1$ or $F$ contains at least $4$ edges. Then there exist constants $c, L > 0$ such that for $p \leq cn^{-1/m_{\ell}(F)}$, $G \sim G^{(\ell)}(n, p)$ satisfies w.h.p. that every $F$-block $B \subseteq G$ contains at most $L$ vertices.
\end{theorem}
In all our applications we will use the following corollary of Theorem \ref{lemma:main} which gives a bound on the density $m$ of $F$-blocks.
\begin{corollary} \label{lemma:main_density}
Let $\ell \ge 2$ be an integer and $F$ a strictly $\ell$-balanced $\ell$-graph such that either $\gamma(F) = \ell - 1$ and $F$ contains at least $3$ edges or $F$ contains at least $4$ edges. Then there exists a constant $c > 0$ such that for $p \le cn^{-1/m_\ell(F)}$, $G \sim G^{(\ell)}(n, p)$ w.h.p. satisfies that for every $F$-block $B \subseteq G$ we have $m(B) \le m_\ell(F)$. Moreover, if $p \ll n^{-1/m_\ell(F)}$ then strict inequality holds.
\end{corollary}
We conclude this section with a basic property of $F$-closed graphs that will be used throughout the applications.
\begin{lemma} \label{lemma:block_split}
Let $F$ be an $\ell$-graph. Then if an $\ell$-graph $G$ is $F$-closed, there exists a partitioning $E(G) = E_1 \cup \ldots \cup E_k$, for some $k \in \mathbb{N}$, such that each subgraph $B_i$ induced by the set of edges $E_i$ is an $F$-block and each $F$-copy in $G$ is entirely contained in some block $B_i$.
\end{lemma}
\begin{proof}
Let $G$ be an $F$-closed $\ell$-graph and consider a smallest non-empty subset of edges $E' \subseteq E(G)$ such that every $F$-copy is either completely contained in $E'$ or avoids edges in $E'$. Observe that if an $F$-copy $F'$ in $G$ contains an edge $e \in E'$, then by the choice of $E'$ we have $E(F') \subseteq E'$. Similarly, if an $F$-copy $F'$ in $G$ contains an edge $e \in E(G) \setminus E'$ then $E(F') \subseteq E(G) \setminus E'$. Therefore, every edge $e \in E'$, resp. $e \in E(G) \setminus E'$ which was $F$-closed in $G$ remains $F$-closed in $G[E']$, resp. $G \setminus E'$, thus both $G[E']$ and $G \setminus E'$ are $F$-closed. By the minimality of $E'$ it follows that $G[E']$ is an $F$-block. We can now set $E_1 := E'$ and repeat the procedure on $G' := G \setminus E'$. In this way we obtain the desired partition $E_1, \ldots, E_k$.
\end{proof}
\subsection{Some useful facts} \label{sec:prelim}
The following lemma is a standard exercise in graph theory that we leave to the reader.
\begin{lemma}[$k$-degeneracy] \label{thm:k-deg}
Let $G$ be a graph with $m(G) \le k$ for some $k \in \mathbb{R}$. Then there exists an ordering $(v_1, \ldots, v_n)$ of the vertices of $G$ such that
$$ |N(v_i) \cap \{v_1, \ldots, v_{i-1}\}| \le \lfloor 2k \rfloor $$
for every $i \in [n]$.
\end{lemma}
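One way to see this: since every subgraph $J \subseteq G$ satisfies $e(J) \le k \cdot v(J)$, every subgraph has average degree at most $2k$ and hence contains a vertex of degree at most $\lfloor 2k \rfloor$; repeatedly removing such a vertex and listing the vertices in the reverse order of removal yields the desired ordering.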
The proof of the following fact follows easily from Hall's theorem, cf.\ e.g.~\cite{NenadovSteger:2014}.
\begin{lemma} \label{thm:k-orient}
Let $G$ be a graph with $m(G) \le k$ for some $k \in \mathbb{N}$. Then there exists an orientation of the edges of $G$ such that in the resulting directed graph each vertex has out-degree at most $k$.
\end{lemma}
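One standard argument, sketched: form the bipartite graph between $E(G)$ and $k$ copies of each vertex of $G$, joining an edge to all copies of its endpoints. Since $m(G) \le k$, any set $E'$ of edges spans at least $|E'|/k$ vertices and thus has at least $|E'|$ neighbours, so by Hall's theorem there is a matching saturating $E(G)$; orienting every edge out of its matched vertex gives out-degree at most $k$.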
\begin{lemma}[Markov's Inequality] \label{thm:markov}
Let $X$ be a non-negative random variable. For all $t > 0$ we have $\Pr[X \geq t] \le \frac{\mathbb{E}[X]}{t}$.
\end{lemma}
\subsection{Proof of Theorem \ref{lemma:main}} \label{sec:proof_of_main}
Here we show that $F$-blocks are with high probability only of constant size (Theorem~\ref{lemma:main}).
Before we prove Theorem \ref{lemma:main}, we first show how it implies Corollary \ref{lemma:main_density}.
\begin{proof}[Proof of Corollary \ref{lemma:main_density}]
Let $L$ and $c$ be constants given by Theorem \ref{lemma:main} when applied to an $\ell$-graph $F$. Without loss of generality, we may assume that $c < 1$. We first consider the case $p \le cn^{-1/m_\ell(F)}$.
Let $\alpha \in \mathbb{R}$ be a strictly positive constant such that for every $\ell$-graph $S$ on at most $L$ vertices with $m(S) > m_\ell(F)$ we have $m(S) \ge m_\ell(F) + \alpha$. More formally, we define
an $\alpha>0$ as follows,
$$ \alpha := \min \{ m(S) - m_\ell(F) \mid v(S) \le L \; \text{ and } \; m(S) > m_\ell(F) \}.$$
Since there are only finitely many such $\ell$-graphs $S$, $\alpha$ is well-defined. Consider now some $\ell$-graph $S$ on at most $L$ vertices with $m(S) \ge m_\ell(F) + \alpha$ and let $S' \subseteq S$ be a subgraph such that $e(S') / v(S') = m(S)$. Let $X_{S'}$ be the random variable which denotes the number of $S'$-copies in $G$.
Then the expected number $\ensuremath{\mathbb E} X_{S'}$ of $S'$-copies in $G \sim G^{(\ell)}(n, p)$ is at most
\begin{align*}
\ensuremath{\mathbb E} X_{S'} &\le n^{v(S')} p^{e(S')} \le n^{v(S') - e(S') / m_\ell(F)} \\
&= \left( n^{1 - m(S')/m_\ell(F)} \right)^{v(S')} \le n^{- \alpha \cdot v(S')/m_\ell(F)} = o(1).
\end{align*}
Therefore, by Markov's inequality (Lemma \ref{thm:markov}) we have
$$\Pr[G \; \text{contains an} \; S\text{-copy}] \le \Pr[G\; \text{contains an} \; S'\text{-copy}] = \Pr[X_{S'} \ge 1] \le \ensuremath{\mathbb E} X_{S'}.$$
As there exist less than $2^{\binom{L}{\ell}}$ different $\ell$-graphs on at most $L$ vertices, a union-bound over all such $\ell$-graphs thus also gives
$$ \Pr[ \exists S \subseteq G \; \text{ such that } \; v(S) \le L \; \text{and} \; m(S) > m_\ell(F)] = o(1).$$
In particular, since w.h.p.\ $G$ is such that every $F$-block $B \subseteq G$ contains at most $L$ vertices it follows that $m(B) \le m_\ell(F)$, as required.
Let us now assume that $p \ll n^{-1/m_\ell(F)}$. Similarly as in the previous case, if $S$ is an $\ell$-graph on at most $L$ vertices with $m(S) \ge m_\ell(F)$, then for $p \ll n^{-1/m_\ell(F)}$ we have
that the expected number of $S'$-copies is
$$ \ensuremath{\mathbb E} X_{S'} \le n^{v(S')} p^{e(S')} = o(n^{v(S') - e(S') / m_\ell(F)}) = o(1),$$
where $S' \subseteq S$ is such that $e(S') / v(S') = m(S)$. The same argument as before shows that $G$ contains no copy of $S$, which finishes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{lemma:main}]
Our proof is a generalization of the approach from~\cite{NenadovSteger:2014Ramsey} to hypergraphs and general Ramsey problems.
The proof is essentially a first moment argument. We enumerate all possible $F$-blocks on more than $L$ vertices and show that the probability that one or more of them appears in $G \sim G^{(\ell)}(n, p)$ is $o(1)$. The difficulty lies in the fact that straightforward enumerations (like choosing subsets of edges) do not work: we have too many choices. We thus have to design a more efficient way to encode $F$-blocks. To do that we make use of Algorithm~\ref{alg:grow_seq} that enumerates $F$-copies of a block in some clever way.
{\LinesNumbered
\begin{algorithm}[h]\label{alg:grow_seq}
\DontPrintSemicolon
$F_0 \leftarrow$ an arbitrary $F$-copy in $B$\;
$G_0 \leftarrow F_0$\;
$i \leftarrow 0$\;
\While{$G_i \neq B$}{
$i \leftarrow i + 1$\;
\eIf{$G_{i-1}$ contains an $F$-copy which is not closed}{
$j \leftarrow$ smallest index $j < i$ such that $F_j$ is not closed\;
$e \leftarrow$
an edge in $F_j$ which is not closed in $G_{i-1}$ but closed in $B$ \label{line:choose_e}\;
$F_i \leftarrow$ an $F$-copy in $B$ but not $G_{i-1}$ which contains $e$\label{line:open-case}\;
}{
$F_i \leftarrow$ an arbitrary $F$-copy in $B$ but not
$G_{i-1}$ which intersects $G_{i-1}$ in at least one edge\label{line:closed-case}\;
}
$G_i \leftarrow G_{i-1} \cup F_{i}$\;
}
$s \leftarrow i$
\caption{Construction of a grow sequence for an $F$-block $B$.}\label{algo:grow-sequence-algo}
\end{algorithm}}
Let $B$ be an $F$-block.
Algorithm \ref{algo:grow-sequence-algo} maps $B$ to a sequence $(F_0, \dotsc, F_s)$ of copies of $F$. In order to see that the algorithm is well-defined it suffices to show that lines \ref{line:open-case} and \ref{line:closed-case} can always be executed. For line \ref{line:open-case} this follows directly from the condition in the if-statement: an $F$-copy that is not yet closed contains an edge $e$ that is closed in $B$ but not yet in $G_{i-1}$. As in line \ref{line:choose_e} we choose exactly such an edge, the desired copy in line \ref{line:open-case} exists. Similarly, if at some point the execution of line \ref{line:closed-case} would not be possible, this would imply that there exists a subgraph $G_i \subsetneq B$ such that every $F$-copy in $B$ is completely contained in either $G_i$ or $\overline{G}_i = B \setminus E(G_i)$. Since $G_i$ is non-empty (it contains $F_0$) this contradicts the assumption that $B$ is an $F$-block. Thus line \ref{line:closed-case} is well-defined. Finally, as the number of edges in $G_i$ increases with each iteration and $E(G_i) \subseteq E(B)$, at some point $G_i$ will be equal to $B$ and the algorithm will stop.
Note that the sequence $(F_0, \dotsc, F_s)$ fully describes a run of the algorithm.
We call it a \emph{grow sequence} for $B$ and each $F_i$ in
it a \emph{step} of the sequence, $0 \leq i \leq s$.
Given some grow sequence $S := (F_0, \dotsc, F_s)$ for $B$ we can easily reconstruct $B$ as the union of all $F_i$, $0 \leq i \leq s$.
We now turn to the question of how to enumerate such sequences efficiently.
Let us fix an arbitrary labeling of the vertices of $F$, say $V(F) = \{w_1, \ldots, w_{v(F)}\}$. Every $F$-copy in $B$ can be specified by an injective mapping $f: V(F) \rightarrow V(B)$, thus we can represent every $F$-copy in $B$ as a $v(F)$-tuple of vertices of $B$ where the $i$-th element of the tuple determines $f(w_i)$, for $1 \le i \le v(F)$. Accordingly, we could represent every grow sequence as a sequence of $v(F)$-tuples of vertices in $V(B)$. Unfortunately, such an encoding is still too inefficient. We improve on this by using the fact that every $F$-copy $F_i$ from a grow sequence $(F_0, \ldots, F_s)$
has a non-empty intersection with $F_0 \cup \ldots \cup F_{i-1}$.
We now make this more precise.
%
We distinguish three step types. We call $F_0$ the \emph{first}
step.
For $i \geq 1$ we call the step $F_i$ \emph{regular} if the intersecting subgraph $G_{i-1} \cap F_i$ corresponds to exactly one edge, and \emph{degenerate} otherwise.
In the first moment argument that we elaborate on below we choose the type of each step (regular or degenerate). For each type we then have to multiply the number of choices by the probability that the new edges (the edges in $E(F_i)\setminus E(G_{i-1})$) are present in $G$.
For a regular step $F_i$ created in line \ref{line:open-case}, the intersection with $G_{i-1}$ corresponds exactly to a non-closed edge $e$ in $F_j$, where $j$ is the smallest index $j < i$ such that $F_j$ is not closed. Note that the index $j$ can be uniquely reconstructed from the previously constructed copies $F_0, \ldots, F_{i-1}$. That is, we do not have to choose it. This edge can be chosen in $e(F)$ ways. Furthermore, we have to choose which vertices in $F_i$ correspond to these vertices, giving another factor of $v(F)^\ell$. It remains to choose the other $v(F)-\ell$ new vertices of $F_i$, which in turn describe the $e(F)-1$ new edges that are required to be present. The total contribution of such a step is thus
\begin{equation}\label{eq:regular-open}
e(F)v(F)^{\ell} n^{v(F)-\ell}p^{e(F)-1} \leq e(F)v(F)^{\ell} c^{e(F)-1} \leq c < 1,
\end{equation}
where $c$ is the constant in Theorem~\ref{lemma:main} which we
choose small enough for the above to hold.
In contrast to regular steps created in line \ref{line:open-case}, if a regular step $F_i$ is created in line \ref{line:closed-case} then the edge in which $F_i$ intersects $G_{i-1}$ is not determined by the previously constructed copies and we need to choose
it. By construction, the $\ell$-graph $G_{i-1}$ contains at most $v(F)\cdot i$
vertices, thus there are at most $(v(F) \cdot i)^\ell$ choices for the vertices in the attachment edge in $G_{i-1}$ and the contribution of such a step is
\begin{equation}\label{eq:regular-closed}
e(F) (v(F)\cdot i)^{\ell} n^{v(F)-\ell}p^{e(F)-1} \stackrel{\eqref{eq:regular-open}}{\le} i^{\ell},
\end{equation}
again using the assumptions on the choice of $c$ in \eqref{eq:regular-open}.
Now consider the case of degenerate steps, i.e.\ those for which \(H := F_i
\cap G_{i-1}\) satisfies $v(H) > \ell$. We can choose which vertices of $G_{i-1}$ correspond to $H$ in $(v(F) \cdot i)^{v(H)}$ many ways. Furthermore, recall that $F$ is strictly
$\ell$-balanced, so for any subgraph $H \subsetneq F$ with $v(H) > \ell$ we
have
\begin{equation*}
\frac{e(H)-1}{v(H) - \ell} < \frac{e(F)- 1}{v(F)-\ell} = m_\ell(F)
\end{equation*}
and thus
\begin{equation}\label{eq:degenerate-hurts}
\frac{e(F)-e(H)}{v(F)-v(H)} = \frac{(e(F)-1) - (e(H)-1)}{(v(F)-\ell) -
(v(H)-\ell)} > m_\ell(F).
\end{equation}
This implies that we can choose a constant $\alpha > 0$ such that
for all $H \subsetneq F$ with $v(H) > \ell$ it holds that
\begin{equation*}
v(F) - v(H) - \frac{e(F) - e(H)}{m_\ell(F)} < -\alpha.
\end{equation*}
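For instance, in the illustrative case $\ell=2$ and $F=K_4$ (so that $m_\ell(F)=5/2$), the extremal proper subgraph with more than $\ell$ vertices is $H=K_3$: here $v(F)-v(H)-\frac{e(F)-e(H)}{m_\ell(F)}=1-\frac{3}{5/2}=-\frac15$, so any constant $\alpha<1/5$ would do in this example.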
Applying this to a degenerate step $F_i$, we obtain that the contribution is upper-bounded by
\begin{align}\label{eq:degenerate}
\sum_{\substack{H\subsetneq F\\v(H) > \ell}}(v(F)\cdot i)^{v(H)}
n^{v(F)-v(H)}p^{e(F) - e(H)} \nonumber
& \leq
i^{v(F)} \cdot c n^{-\alpha} \sum_{\substack{H\subsetneq F\\v(H) > \ell}}v(F)^{v(H)} \nonumber
\\
& \leq i^{v(F)} \cdot c n^{-\alpha} \cdot v(F)^{v(F)} 2^{v(F)^2} \\
& \le i^{v(F)} n^{-\alpha},\nonumber
\end{align}
where we again assume that $c$ is chosen small enough for the above to hold.
Thus, degenerate steps introduce a factor $i^{v(F)}n^{-\alpha}$, which suggests that sequences containing (constantly) many of them are very unlikely to appear in $G$. Similarly, regular steps created in line~\ref{line:open-case} introduce a factor of $c<1$, which suggests that sequences containing $\Theta(\log n)$ of these steps are also unlikely to appear in $G$.
The next claim provides bounds on the number of degenerate and regular steps created in line \ref{line:closed-case} that will allow us to conclude the proof.
\begin{claim}\label{claim:open-edges-bound}
Let $S = (F_0, \ldots, F_s)$ be a grow sequence corresponding to an execution of Algorithm~\ref{algo:grow-sequence-algo}. Then the following holds:
\begin{enumerate}[(a)]
\item If $S$ contains at most $d$ degenerate steps, then $s \leq 3d \cdot v(F)$. \label{item:claim-open-edges-bound-1}
\item If a prefix $S'$ of $S$ contains at most $d$ degenerate steps, then every regular step $F_j$ in $S'$, with $j \ge 3d\cdot v(F) + 2$, is created in line \ref{line:open-case}. \label{item:claim-open-edges-bound-2}
\end{enumerate}
\end{claim}
Intuitively, what Claim \ref{claim:open-edges-bound} tells us is that in a long grow sequence either there will be
many degenerate steps or most of the steps will be regular steps created in line \ref{line:open-case}.
Note that every degenerate step, as Equation~\eqref{eq:degenerate} shows, introduces a factor of $\Theta(n^{-\alpha+o(1)})$ to the expectation of the number of appearances of $S$ (for $i=O(\log n)$) and regular step created in line \ref{line:open-case} introduces a constant factor $c < 1$. We defer the formal proof of Claim \ref{claim:open-edges-bound} to the next section.
With the help of Claim \ref{claim:open-edges-bound} we can now finish our first moment argument. Set $d_{\text{max}} := v(F)/\alpha + 1$
and $L := 3d_{\text{max}} v(F) + 1$ and let $S = (F_0, \ldots, F_s)$ be a grow sequence of length more than $L$. By Claim~\ref{claim:open-edges-bound}\eqref{item:claim-open-edges-bound-1} every such sequence $S$
must contain at least $d_{\text{max}}$ degenerate steps.
We now distinguish two cases. Let $s_d$ be the step in which the $d_{\text{max}}$-th degenerate step occurs in $S$.
If $s_d <s_{\max}$, where $s_{\max} := v(F)\log n + d_{\text{max}}+L$, then we set $S' := (F_0, \ldots, F_{s_d})$. Otherwise, we set $S' := (F_0, \ldots, F_{s_{\max}})$.
We prove that in both cases the expected number of possible grow sequences $S$ longer than $L$ which have a prefix $S'$ is $o(1)$.
Observe that, in any case, $S'$ is a prefix of $S$ that contains at most $d_{\text{max}}$ degenerate steps. Then, by Claim~\ref{claim:open-edges-bound}\eqref{item:claim-open-edges-bound-2}, if $F_i$ is a regular step from $S'$ created in line \ref{line:closed-case}, we have $i \le L$. Let us first consider the case when the $d_{\max}$-th degenerate step occurs before step~$s_{\max}$, that is $s_d \in \{d_{\max}, \ldots, s_{\max} - 1\}$. For a fixed such $s_d$ there are $\binom{s_d-1}{d_{\max} - 1}$ ways to choose the steps in which the first $d_{\max} - 1$ degenerate steps have occurred. We can now upper bound the expected number of such sequences $S'$ as follows
\begin{multline*}
\sum_{s_d = d_{\max}}^{s_{\max}-1} \binoms{s_d - 1}{d_{\text{max}} - 1} n^{v(F)}
\bigl( \underbrace{\strut s_d^{v(F)}n^{-\alpha}}_{ \text{eq. }\eqref{eq:degenerate}}\bigr)^{d_{\text{max}}}
\bigl(\underbrace{(L)^{\ell}}_{\text{eq. } \eqref{eq:regular-closed}}\bigr)^{L} = \mathrm{polylog}(n) \cdot n^{v(F)}n^{-\alpha\cdot d_{\text{max}}} = o(1).
\end{multline*}
Here we bound the contribution of the first step by $n^{v(F)}$, drop the
contribution of $c < 1$ for all regular steps created in line \ref{line:open-case}, and use the fact that only
the first $L + 1$ steps can be regular steps created in line \ref{line:closed-case}.
Let us now consider the case $s_d \ge s_{\max}$. Note that then there are $d \in \{0, \ldots, d_{\text{max}}\}$ degenerate steps within the first $s_{\max}$ steps. Similarly as in the previous case, we can upper bound the expected number of such sequences $S'$ as follows:
\begin{multline*}
\sum_{d = 0}^{d_{\text{max}}}\binoms{s_{\max}}{d} n^{v(F)}
\bigl( \underbrace{\strut s_{\max}^{v(F)}n^{-\alpha}}_{ \text{eq. }\eqref{eq:degenerate}}\bigr)^{d}
\bigl(\underbrace{(
L)^{\ell}}_{\text{eq. } \eqref{eq:regular-closed}}\bigr)^{L}
\underbrace{\strut c^{s_{\max}-d-L}}_{\text{eq. } \eqref{eq:regular-open}} \\= \mathrm{polylog}(n)\cdot n^{v(F)}\cdot c^{s_{\max}-d-L} =
\mathrm{polylog}(n)\cdot
2^{v(F) \log n }
c^{v(F)\log n}
= o(1),
\end{multline*}
where we used the fact that $c$ is small enough and in particular smaller than $1/2$.
We can now bound the probability that $G$ contains a possible grow sequence $S$ of length longer than $L$ as follows:
$$ \Pr[S \text{ of length longer than } L] \le \Pr[S \text{ contains a prefix $S'$ as described}] = o(1),$$
where the last estimate follows from Markov's inequality. Thus, with probability $1-o(1)$, every $F$-block in $G$
contains at most $v(F)\cdot (L+1)$ vertices.
\end{proof}
\subsubsection{Proof of Claim~\ref{claim:open-edges-bound}}
\label{sec:proof-open-edges-lemma}
Let $S_i := (F_0, \ldots, F_i)$, for $0 \leq i \leq s$. For any $S_i$ and any
regular step $F_j$, $j \leq i$ we call the
edge $e := E(G_{j-1}) \cap E(F_j)$ the \emph{attachment edge} of $F_j$ and
the vertices in $V(F_{j}) \setminus V(G_{j-1})$ the \emph{inner vertices} of
$F_j$. For $j\le i$, we say that a regular step $F_j$ is \emph{fully-open} in $S_i$ if $\bigcup_{j' = j + 1}^i V(F_{j'})$ does not contain any inner vertex of $F_j$ (i.e., the inner vertices of $F_j$ have not been touched by any of the copies $F_{j+1},\ldots,F_i$). The first step $F_0$ is always fully-open by definition, and all its vertices are inner. Finally, we denote by
$\textup{reg}(S_i)$, $\deg(S_i)$ and $\textup{fo}(S_i)$ the number of regular, degenerate and fully-open steps in $S_i$.
It follows from the definition that a newly added regular step $F_i$ is fully-open in $S_i$. Next, we show a series of claims which will be used later in the proof of Claim~\ref{claim:open-edges-bound}.
\begin{claim}\label{claim:tilde-F-is-F-f+e}
Let $F$ be a strictly $\ell$-balanced $\ell$-graph with at least three edges. Furthermore, let $G$ be an arbitrary $\ell$-graph and $e\in E(G)$ an edge in $G$. Let $F_e$ be an $F$-copy such that $G \cap F_e = (e,\{e\})$.
Then all $F$-copies $\tilde F$ in $G^+ := G \cup F_e$ which are
not contained in $G$ have the form
\begin{equation*}
\tilde F = F_e - e + \tilde e := \bigl((V(F_e) \setminus
e) \cup \tilde e, (E(F_e) \setminus \{e\}) \cup \{\tilde e\}\bigr),
\end{equation*}
where $\tilde e \in E(G)$ and $\abs{\tilde e \cap e} > \gamma(F)$,
cf.~Figure~\ref{fig:tilde-F-is-F-f+e}.
\end{claim}
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.7]
\draw[dotted] (-2, -0.4) .. controls (-1.3,1) and (0.3,0.9) .. (0.3, 0.9);
\draw[dotted] (9, -0.4) .. controls (8.3,1) and (6.7,0.9) .. (6.7, 0.9);
\fill[rounded corners=5pt,fill=white,draw=black] (0,0) rectangle (7,1);
\draw (0,0.7) .. controls (0,5) and (7,5) .. (7,0.7);
\draw[dashed] (0.2,0.8) .. controls (0.2,4.8) and (6.8,4.8) .. (6.8,0.8);
\draw[dashed, rounded corners=5pt] (3, 0.8) -- (6.8, 0.8) -- (6.8, -1) -- (5, -1) -- (5, 0.2) -- (3,0.2) -- cycle;
\draw[dashed] (0.2, 0.8) -- (3,0.8);
\node (G) at (-1.5,-0.4) {$G$};
\node (e) at (1.5,0.4) {$e$};
\node (et) at (5.9,-0.5) {$\tilde e$};
\node (Ft) at (3.5, 2.2) {$F_e - e + \tilde e$};
\end{tikzpicture}
\caption{The possible copies of $F$ created in a regular step. The solid
lines represent $F_e$, the dashed ones $\tilde F$.}
\label{fig:tilde-F-is-F-f+e}
\end{figure}
\begin{proof}
\newcommand{\text{new}}{\text{new}}
\newcommand{\text{old}}{\text{old}}
Let $\tilde F$ be some $F$-copy in $G^+$ which is not fully
contained in $G$. If $\tilde F = F_e$, then the lemma is true for $\tilde e =
e$, so we assume $\tilde F \neq F_e$.
Let $\tilde e$ be an arbitrary edge of $\tilde F$ which is not contained
in $E(F_e)$. Note that this implies $\tilde e \in E(G)$.
First we show that $E(\tilde F) \setminus \{\tilde e\}$ must be contained
in $E(F_e)\setminus \{e\}$, which implies that the two sets are
equal. Assume this is not true. Set $\tilde F_\text{new} := \tilde F[V(F_e)]$,
$\tilde F_\text{old} := \tilde F[V(G)]$ and $\tilde F_{\text{new}}^{+e} = \tilde F_\text{new} + e$. As we assumed that \(E(\tilde F)\setminus \{\tilde e\}
\nsubseteq E(F_e)\setminus\{e\}\) we know that \(\tilde F\) must contain an
edge different from \(\tilde e\) that is not contained in
$E(F_e) \setminus \{e\}$,
and is thus contained in \(E(G)\). This implies that
$e(\tilde
F_{\text{old}}) \geq 2$. As \(\tilde F\) is not fully contained in \(G\) it must
contain at least one edge of \(E(F_e) \setminus E(G)\), which in turn implies that \(e(\tilde
F_\text{new}^{+e}) \geq 2\).
Subgraph $\tilde F_\text{old}$ is a strict subgraph of $F$ as $\tilde F$ is not fully contained in $G$.
Moreover, $\tilde F_\text{new}^{+e}$ is also a strict subgraph of $F$ as by definition
$E(\tilde F_\text{new}^{+e}) \subseteq E(F_e)$ and $|E(\tilde F_\text{new}^{+e})| < |E(F_e)|$.
One easily checks that regardless of whether \(e\) is an edge of \(\tilde F_{\text{new}}\) or not we have
\begin{equation*}
e(\tilde F) = e(\tilde F_\text{old}) + e(\tilde F_\text{new}^{+e}) - 1
\quad\text{and}\quad
v(\tilde F) \geq v(\tilde F_\text{old}) + v(\tilde F_\text{new}^{+e}) - \ell.
\end{equation*}
Thus
\begin{equation*}
m_\ell(\tilde F) = \frac{e(\tilde F)-1}{v(\tilde F)-\ell} \leq
\frac{e(\tilde F_\text{old}) - 1 + e(\tilde F_\text{new}^{+e}) -1}{v(\tilde F_\text{old}) -
\ell + v(\tilde F_\text{new}^{+e}) - \ell} < m_\ell(F),
\end{equation*}
which is a contradiction, as $\tilde F$ is an $F$-copy. (Here the last inequality follows from the fact that $F$ is strictly $\ell$-balanced and $\tilde F_\text{new}^{+e}, \tilde F_\text{old} \subsetneq \tilde F$ are copies of a proper subgraph of $F$, each with at least $\ell+1$ vertices.) Hence, our assumption $E(\tilde F) \setminus \{\tilde e\}\nsubseteq E(F_e)\setminus \{e\}$ is not valid.
It remains to show that $\abs{\tilde e \cap e} > \gamma(F)$.
Let $X := \tilde{e} \setminus e$ and assume $\abs X \geq \ell - \gamma(F)$, i.e.
$ \abs{\tilde e \cap e} \leq \gamma(F)$. As
$\tilde F \setminus \{ \tilde e\} = F_e \setminus \{e\}$ we know that
no edge of $\tilde F$, except $\tilde e$, can contain a vertex in $X$.
Let $H := \tilde F \setminus X$. By the previous observation we have
$$
v(H) = v(\tilde F) - \abs{X}\ge \ell+1 \quad \text{and} \quad e(H) = e(\tilde F) - 1\ge 2,
$$
thus
\begin{equation}\label{eq:claim9}
m_\ell(H) \geq \frac{e(H) - 1}{v(H) - \ell} = \frac{e(\tilde F) - 1 - 1}{v(\tilde F) - \abs{X} - \ell} \geq
\frac{e(\tilde F) - 1 - 1}{v(\tilde F) - \ell - (\ell - \gamma(F))},
\end{equation}
where the last inequality holds because of the assumption on $X$. We have $m_\ell(F) = \frac{e(\tilde F)-1}{v(\tilde F)-\ell}$ by the assumptions on $F$ being strictly $\ell$-balanced and $m_\ell(F) \ge \frac{1}{\ell - \gamma(F)}$ by the definition of $\gamma(F)$. Inequality \eqref{eq:claim9} thus implies that $m_\ell(H) \ge m_\ell(F)$, which is a contradiction as $H$ is a copy of a proper subgraph of $\tilde F$ with more than one edge. Thus we have $|\tilde e \cap e| > \gamma(F)$, as desired.
\end{proof}
Note that Claim~\ref{claim:tilde-F-is-F-f+e} implies that for $\ell$-graphs $F$ with $\gamma(F)=\ell-1$ (and in particular for {\em graphs}) we have that $G^+ = G \cup F_e$ does not contain {\em any} $F$-copy that intersects both $F_e\setminus e$ {\em and} $G\setminus e$. For these $\ell$-graphs the following claim is thus straightforward while for all other $\ell$-graphs it needs a small argument.
\begin{claim}\label{claim:regular-fully-open}
Let $1\le j \le i$ and $F_j$ be a fully-open step in $S_i$. Let $e_j\in E(F_j)$ denote the attachment edge of $F_j$. Then any two distinct edges $e, e' \in E(F_j)\setminus\{e_j\}$ of $F_j$ are $F$-equivalent in $G_i$.
\end{claim}
\begin{proof}
As $F_j$ is fully-open in $S_i$ we know
that $G_i$ can be partitioned as $G_i = F_j \cup G_i'$ such that $F_j \cap G_i' = e_j$. From Claim~\ref{claim:tilde-F-is-F-f+e} we know that
any $F$-copy in $G_i$ which contains some edge $e\in E(F_j)\setminus\{e_j\}$ must also contain all other edges $e'\in E(F_j)\setminus\{e_j\}$, hence the claim follows.
\end{proof}
For $i \geq 1$, let $\Delta(i)$ denote the number of fully-open copies
``destroyed'' by step $F_i$, i.e.\ let
\begin{equation*}
\Delta(i) = \abs{\{j < i \mid \text{$F_j$ fully-open in $S_{i-1}$ but not in $S_i$}\}}.
\end{equation*}
\begin{claim} \label{claim:max_delta}
$$
\Delta(i) \le
\begin{cases}
1,&\mbox{if $F_i$ is a regular step}\\
v(F) - \ell +1,&\mbox{if $F_i$ is a degenerate step}.
\end{cases}$$
\end{claim}
\begin{proof}
Fix any edge $e \in E(G_{i-1})$ and
let $F_t$, $t<i$, be a step with $e\in E(F_t)$. Note that such a step has to exist as $e\in E(G_{i-1})$. Assume $e$ contains an inner vertex of some step $F_j$, $j<i$, which is fully-open in $S_{i-1}$.
If $t>j$ then $F_t$ contains an inner vertex
of $F_j$, which contradicts our assumption that $F_j$ is fully-open in $S_{i-1}$. If $t<j$ then some inner vertex of $F_j$ is contained in an edge of $F_t$, which contradicts the definition of inner vertices of $F_j$. It follows that $t=j$ and $e\in E(F_j)$.
This easily implies the first part of the claim. Indeed, let $F_i$ be a regular step and $e_i = F_i \cap G_{i-1}$ its attachment edge. From the previous observation we have that $e_i$ can contain inner vertices of at most one $F$-copy $F_j$ which is fully-open in $S_{i-1}$, thus $\Delta(i) \le 1$ as required.
Next, similarly as in the case of edges we show that any vertex $v \in V(G_{i-1})$ can be an inner vertex of at most one $F$-copy $F_j$ which is fully-open in $S_{i-1}$. Fix any vertex $v\in V(G_{i-1})$ and assume that $F_j$
is fully-open in $S_{i-1}$ with $v$ being its inner vertex. Let $F_t$, $t < i$, be a step containing $v$.
Then, by the same argument as above, it cannot be that $t < j$.
By the definition of fully-open, the set $\bigcup_{j' = j+1}^{i-1} V(F_{j'})$ does not contain any inner vertex of $F_j$. In particular, if $t > j$ then this also holds for $F_t$. Therefore, $v$ can be an inner vertex only of step $F_j$.
We can now derive the second part of the claim. Let $F_i$ be a degenerate step and $e \in E(F_i \cap G_{i-1})$ an arbitrary edge of $F_i$ which exists in $G_{i-1}$. By the first observation we have that $e$ contains inner vertices of at most one fully-open step in $S_{i-1}$. By the second observation, every vertex $v \in V(F_i \cap G_{i-1}) \setminus V(e)$ is an inner vertex of at most one fully-open step in $S_{i-1}$. In total, the step $F_i$ can touch inner vertices of at most $v(F) - \ell + 1$ fully-open copies.
\end{proof}
\begin{claim}\label{claim:consecutive-regular-bound}
Let $F_i$ and $F_{i+1}$ be consecutive regular steps. If $\Delta(i) = 1$ then $\Delta(i+1) = 0$.
\end{claim}
\begin{proof}
As $\Delta(i) = 1$ we know that $F_i$ is the first step which
intersects the inner vertices of a fully-open step $F_j$ in $S_{i-1}$, for some $j <
i$. Denote the attachment edges of $F_j$ and $F_i$ by $e_j$ and $e_i$, respectively. Before step $F_i$, by Claim \ref{claim:regular-fully-open} (if $\gamma(F) < \ell - 1$) and Claim \ref{claim:tilde-F-is-F-f+e} (if $\gamma(F) = \ell - 1$) the step $F_j$ had $e(F)-1 \ge 2$ edges which were not closed in $G_{i-1}$. We show below that step $F_i$ closes exactly one edge of $F_j$. Thus, after the step $F_i$ the copy $F_j$ still contains at least one edge that is
not closed. Therefore, in the $(i+1)$-st iteration of Algorithm~\ref{algo:grow-sequence-algo}, $F_{i+1}$ will be chosen in such a way that it intersects one of the edges of $F_j$ which are not yet closed. As $F_{i+1}$ is regular, it follows from the same arguments as in the proof of Claim \ref{claim:regular-fully-open} that it does not intersect the inner vertices of any other fully-open step in $S_i$ and we can conclude that $\Delta(i+1) = 0$.
It remains to show that $F_i$ closes exactly one edge in $F_j$. We do this by a case distinction based on $\gamma(F)$.
Assume first that $\gamma(F) = \ell - 1$ and consider some edge $e \in E(F_j) \setminus \{e_i,e_j\}$. By Claim~\ref{claim:tilde-F-is-F-f+e} the only $F$-copy in $G_{i-1}$ that contains $e$ is $F_j$. Moreover, again by Claim~\ref{claim:tilde-F-is-F-f+e} the only $F$-copy in $G_i$ which does not belong to $G_{i-1}$ is $F_i$. Since $e \notin E(F_i)$, the edge $e$ also belongs to fewer than two copies in $G_i$ and thus it remains non-closed.
Assume now that $\gamma(F) < \ell - 1$. Since in this case $F$ contains at least $4$ edges, let us consider any two distinct edges $e', e'' \in E(F_j) \setminus \{e_i, e_j\}$. First, it follows from Claim \ref{claim:regular-fully-open} that $e' \equiv_F e''$ in $G_{i-1}$. Furthermore, let us assume that there exists an $F$-copy $F'$ in $G_i$, not fully contained in $G_{i-1}$, which contains $e'$. Then, by Claim \ref{claim:tilde-F-is-F-f+e} there exists a unique such copy $F' = F_i - e_i + e'$ and $|e_i \cap e'| > \gamma(F)$. However, as $e_i$ and $e'$ both belong to the copy $F_j$, this contradicts the definition of $\gamma(F)$. Therefore, such an $F$-copy $F'$ does not exist and, by symmetry, the same is true for the edge $e''$. In other words, the property that an $F$-copy $\hat F$ in $G_{i}$ contains $e'$ if and only if it contains $e''$ remains true, thus $e'$ is not closed in $G_i$.
\end{proof}
As a final step before proving Claim \ref{claim:open-edges-bound}, we prove a lower bound on the number of fully-open steps that must be contained in any grow sequence of length $s$ with at most $d$ degenerate steps. Using Claim \ref{claim:consecutive-regular-bound}, the proof of the following claim is identical to the proof of Claim 11 from \cite{NenadovSteger:2014}. We include it for the sake of completeness.
\begin{claim}\label{claim:fully-open-bound}
For all $1 \leq i \leq s$ it holds that
\begin{equation}\label{eq:fully-open-bound}
\textup{fo}(S_i) \geq \textup{reg}(S_i)/2 -
\deg(S_i)\cdot v(F).
\end{equation}
\end{claim}
\begin{proof}
Let us denote by $\phi(i) := \textup{reg}(S_i)/2 - \deg(S_i) \cdot v(F)$ the right hand side of Equation \eqref{eq:fully-open-bound}. We use induction to prove the following slightly stronger statement,
$$
\textup{fo}(S_i) \geq
\begin{cases}
\phi(i)&\text{ if $F_i$ is a regular step}\\
\phi(i) + 1&\text{ if $F_i$ is a degenerate step,}
\end{cases}$$
for all $1 \le i \le s$.
One easily checks that this holds for $i=1$: if $F_1$ is a regular step then $\textup{fo}(S_1) = 1 > 1/2$, otherwise $\textup{fo}(S_1) = 0 > -v(F) + 1$. Consider now some $i \ge 2$.
If $F_i$ is a degenerate step then from Claim \ref{claim:max_delta} we have $\Delta(i) \le v(F) - \ell + 1 \le v(F) - 1$ and so $\textup{fo}(S_i) = \textup{fo}(S_{i - 1}) - \Delta(i) \geq \textup{fo}(S_{i - 1}) - v(F) + 1$. The claim now easily follows from $\textup{reg}(S_i) = \textup{reg}(S_{i - 1})$ and $\deg(S_i) = \deg(S_{i-1}) + 1$.
Otherwise, assume that $F_i$ is a regular step and let
$$j := \max\{ 1\le j < i \mid \Delta(j) > 0\text{ or } F_{j} \text{ is a degenerate step}\}.$$
Note that $j$ is well defined, as $\Delta(1)=1$. Further, by the definition of $j$, $F_{i'}$ is a regular step for all $j < i' \le i$, thus $\phi(i) = \phi(j) + (i -j)/2$. In addition, we deduce from $\Delta(i') = 0$ for $j<i'<i$ that all steps $F_{i'}$ are fully-open in $S_{i-1}$. We thus have
$$
\textup{fo}(S_i) = \textup{fo}(S_j) + (1 - \Delta(i)) + (i - j - 1) = \textup{fo}(S_j) + i - j - \Delta(i).
$$
If $F_{j}$ is a degenerate step then the induction assumption implies $\textup{fo}(S_j) \geq \phi(j) + 1$. As $F_i$ is a regular step and thus $\Delta(i) \le 1$, this implies $\textup{fo}(S_i) \geq \phi(j) + i - j \ge \phi(i)$, as claimed. Finally, assume that $F_{j}$ is a regular copy. If $\Delta(i)=0$, then the claim follows trivially by the induction. Otherwise we have $\Delta(i) = 1$ and as $\Delta(j) = 1$ by Claim~\ref{claim:consecutive-regular-bound} we have that $i \geq j + 2$. Therefore
$$\textup{fo}(S_i) = \textup{fo}(S_j) + i - j - 1 \geq \textup{fo}(S_j) + (i-j)/2 \ge\phi(j) + (i-j)/2=\phi(i),$$ similarly as before. This finishes the proof of the claim.
\end{proof}
Finally, we are ready to prove Claim~\ref{claim:open-edges-bound}.
\begin{proof}[Proof of Claim~\ref{claim:open-edges-bound}]
We prove part (\ref{item:claim-open-edges-bound-1}) first.
Let us assume that $S=(F_0,\ldots,F_s)$ contains at most $d$
degenerate steps. Every $F$-copy in $B:=\cup_{i=0}^s F_i$ is closed by the property of $S$, thus by Claim \ref{claim:regular-fully-open} there are no fully-open steps in $S$. By Claim~\ref{claim:fully-open-bound} this implies that
\begin{equation}\label{eq:no-open-edges-bound}
\deg(S)\cdot v(F) \geq \textup{reg}(S) / 2
\end{equation}
must hold. We have $\deg(S) \leq d$ and $\textup{reg}(S) \geq s - d $ (the first step is neither degenerate nor regular). We obtain
\begin{equation*}
d\cdot v(F) \geq (s - d)/2.
\end{equation*}
Solving for $s$ we get
\begin{equation*}
s \leq 2 d \left(v(F) + 1/2 \right) \leq 3d\cdot v(F),
\end{equation*}
which proves the first part.
For part~(\ref{item:claim-open-edges-bound-2}) of Claim~\ref{claim:open-edges-bound} let $S_i$ be a prefix of $S$, for some $1 \leq i \leq s$, with at most $d$ degenerate steps. Note that before any regular step $F_j$, $j \leq i$ created in line \ref{line:closed-case}, all $F$-copies of $G_{j-1}$ are closed and thus by Claim \ref{claim:regular-fully-open} we have $\textup{fo}(S_{j-1}) = 0$. Similarly as above, we have $\deg(S_{j-1}) v(F) \geq \textup{reg}(S_{j-1})/2$.
As we know that $\deg(S_{j-1}) \leq d$ we obtain $ j - 1 \leq 3 d v(F)$, which concludes the proof.
\end{proof}
\input{applications_jctb}
\bibliographystyle{abbrv}
| {
"timestamp": "2014-08-25T02:08:54",
"yymm": "1408",
"arxiv_id": "1408.5271",
"language": "en",
"url": "https://arxiv.org/abs/1408.5271",
"abstract": "In this paper we introduce a general framework for proving lower bounds for various Ramsey type problems within random settings. The main idea is to view the problem from an algorithmic perspective: we aim at providing an algorithm that finds the desired colouring with high probability. Our framework allows to reduce the probabilistic problem of whether the Ramsey property at hand holds for random (hyper)graphs with edge probability $p$ to a deterministic question of whether there exists a finite graph that forms an obstruction.In the second part of the paper we apply this framework to address and solve various open problems. In particular, we extend the result of Bohman, Frieze, Pikhurko and Smyth (2010) for bounded anti-Ramsey problems in random graphs to the case of $2$ colors and to hypergraph cliques. As a corollary, this proves a matching lower bound for the result of Friedgut, Rödl and Schacht (2010) and, independently, Conlon and Gowers (2014+) for the classical Ramsey problem for hypergraphs in the case of cliques. Finally, we provide matching lower bounds for a proper-colouring version of anti-Ramsey problems introduced by Kohayakawa, Konstadinidis and Mota~(2014) in the case of cliques and cycles.",
"subjects": "Combinatorics (math.CO)",
"title": "An algorithmic framework for obtaining lower bounds for random Ramsey problems",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.966410494349896,
"lm_q2_score": 0.7341195152660687,
"lm_q1q2_score": 0.7094608036601875
} |
https://arxiv.org/abs/1001.0489 | Absence of torsion for NK_1(R) over associative rings | When R is a commutative ring with identity, and if k is a natural number with kR = R, then C. Weibel proved that SK_1(R[X]) has no k-torsion. We reprove his result for any associative ring R with identity in which kR = R. | \section{Introduction}
~~~~Let $R$ be an associative ring with identity element $1$.
Let ${\rm K_1}(R)$ denote the Whitehead group. In case $R$ is commutative, let ${\rm SK_1}(R)$ be the kernel of
the determinant map from ${\rm K_1}(R)$ to the group of units of $R$.
Let ${\rm W}(R)$ be the ring of big Witt vectors. We denote ${\rm NK_1}(R)=\textnormal{ker}
({\rm K_1}(R[X])\rightarrow {\rm K_1}(R))$; $X=0$.
In \cite{STI}, J. Stienstra, using ideas of S. Bloch in \cite{BL1}, showed that
${\rm NK_1}(R)$ is a ${\rm W}(R)$-module. Consequently, as noted by C. Weibel in
(\cite{WEL}, \S 3), if $k$ is a unit in $R$, then ${\rm SK_1}(R[X])$ has no
$k$-torsion, when $R$ is a commutative local ring. Recall that for a commutative local ring with identity,
${\rm NK_1}(R)$ coincides with
${\rm SK_1}(R[X])$.
In this note we generalize Weibel's observation, which concerns ${\rm NK_1}(R)$ for a commutative local ring $R$.
We prove that for an \textit{associative} ring $R$ with identity, ${\rm NK_1}(R)$ has no $k$-torsion if $k$ is a unit in $R$. In particular, Weibel's result becomes a special case.
The method of proof may be considered as a simplified version of J. Stienstra's approach via big Witt vectors. This will help the reader to appreciate why big
Witt vectors come into the picture in the functorial approach of S. Bloch,
{\it et al}. \rm \vspace{0.1cm} \\
\begin{tr} \label{th1} Let $R$ be an associative ring with identity. If $k$ is a unit
in $R$, then ${\rm NK_1}(R)$ has no $k$-torsion.
\end{tr}
\begin{co}
Let $R$ be a commutative local ring with identity. If $k$ is a unit in $R$, then
${\rm SK_1}(R[X])$ has no $k$-torsion.
\end{co}
As a consequence of Theorem 1 we prove
\begin{tr} \label{th2}
Let $R=R_0\oplus R_1\oplus \cdots $ be a graded commutative ring with identity.
Let $k$ be a unit in $R_0$.
Let $N=N_0+N_1+\cdots+N_r \in {\rm M}_r(R)$ be a nilpotent matrix.
If $[(I+N)]^k=[I]$ in ${\rm SK_1}(R)$, then $[I+N] = [I+N_0]$.
In particular, if $R_0$ is a reduced local ring, then ${\rm SK_1}(R)$ has no
$k$-torsion.
\end{tr}
\section{Prologue}
~~ Let $R$ be an associative ring with identity.
${\rm GL}_n(R)$ denotes the group of invertible matrices, ${\rm SL}_n(R)$ its subgroup
of matrices of determinant $1$ (when $R$ is a commutative ring),
${\rm E}_n(R)$ the subgroup of elementary matrices,
{\it i.e.} generated by $\{{\rm E}_{ij}(\lambda):\lambda \in R, i\ne j \}$, where
${\rm E}_{ij}(\lambda)=I+\lambda e_{ij}$ and $e_{ij}$ is the matrix with $1$ on the
$ij$-th position and 0's elsewhere. For $\alpha\in {\rm M}_r(R)$, $\beta\in {\rm M}_s(R)$
we have $\alpha\perp \beta\in {\rm M}_{r+s}(R)$, where $$\alpha\perp \beta =
\left(\begin{array}{cc} \alpha & 0 \\ 0 & \beta
\end{array}\right).$$
There is an infinite counterpart: identifying each matrix $\alpha\in {\rm GL}_n(R)$
with the large matrix $(\alpha\perp 1)$
gives an embedding of ${\rm GL}_n(R)$ into ${\rm GL}_{n+1}(R)$.
Let ${\rm GL}(R)=\cup_{n=1}^{\infty} {\rm GL}_n(R)$,
${\rm SL}(R)=\cup_{n=1}^{\infty} {\rm SL}_n(R)$, and
${\rm E}(R)=\cup_{n=1}^{\infty} {\rm E}_n(R)$ be the corresponding
infinite linear groups.
The well known Whitehead's Lemma asserts that if $\alpha\in {\rm GL}_n(R)$ then we have
$(\alpha\perp \alpha^{-1})\in {\rm E}_{2n}(R)$. Thus we have $$[{\rm GL}(R),{\rm GL}(R)]=[{\rm E}(R),{\rm E}(R)]={\rm E}(R)$$ and hence
${\rm E}(R)$ is a normal subgroup of ${\rm GL}(R)$.
The quotient ${\rm GL}(R)/{\rm E}(R)$ is called the
{\bf Whitehead group} of the ring $R$ and is denoted by ${\rm K_1}(R)$.
For $\alpha\in {\rm GL}_n(R)$ let $[\alpha]$ denote its equivalence class in
${\rm K_1}(R)$.
Also, as a consequence of Whitehead's lemma
one sees that if $\alpha,\beta\in {\rm GL}_n(R)$ then
$[\alpha,\beta]\in {\rm E}_{2n}(R)$;
whence ${\rm K_1}(R)$ is an abelian group. For details
{\it cf.} \cite{B}.
In case $R$ is commutative the determinant map from
${\rm GL}_n(R)$ to $R^{*}$ induces a map,
$\det: {\rm K_1}(R) \rightarrow R^{*}$ given by $\alpha E(R)\mapsto \det \alpha$.
The kernel of this map is denoted by ${\rm SK_1}(R)$ and equals ${\rm SL}(R)/{\rm E}(R)$.
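For example, if $R$ is a field or, more generally, a commutative local ring, then ${\rm SL}_n(R)={\rm E}_n(R)$ for all $n>0$, so that ${\rm SK_1}(R)$ is trivial and the determinant induces an isomorphism ${\rm K_1}(R)\simeq R^{*}$; we recall this well-known fact only for orientation.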
We write ${\rm NK_1}(R)$ for $\textnormal{ker} ({\rm K_1}(R[X])\rightarrow {\rm K_1}(R))$; $X=0$,
{\it i.e.} the subgroup consisting of elements $[\alpha(X)]\in {\rm K_1}(R[X])$
such that $[\alpha(0)]=[I]$. Note that if $R$ is a commutative local ring then
${\rm SK_1}(R[X])$ coincides with ${\rm NK_1}(R)$; indeed, if $R$ is a local ring then
${\rm SL}_n(R)={\rm E}_n(R)$ for all $n>0$. Therefore, we may replace $\alpha(X)$ by
$\alpha(X)\alpha(0)^{-1}$ and assume that $[\alpha(0)]=[I]$. \rm \vspace{0.1cm}\\
For a commutative ring $R$ the group ${\rm W}(R)$ of big {\bf Witt vectors} is
defined by: $${\rm W}(R)=(1+XR[[X]])^{\times}.$$
For $P(X)\in (1+XR[[X]])$, let $\omega(P)$ denote the corresponding
element of ${\rm W}(R)$. The group structure in ${\rm W}(R)$ is given by:
$$\omega(P)+\omega(Q)=\omega(P.Q).$$
Any $P(X)\in (1+XR[[X]])$ can be written uniquely as a
product: $$P(X)=\underset{n\ge 1}\Pi (1-a_nX^n)^{-1}, \,\, a_n\in R.$$
The elements $(a_1,a_2,\dots \dots)$ are called the
{\bf Witt co-ordinates} of $\omega(P)$. Also, there exists a unique structure
of commutative ring on ${\rm W}(R)$ such that
$$\omega ((1-aX^m)^{-1}).\omega ((1-bX^n)^{-1})=
\omega ((1-a^{n/r}b^{m/r}X^{mn/r})^{-r}),$$
where $r=\gcd(m,n)$. The identity element in ${\rm W}(R)$ is represented by the power
series $(1-X)^{-1}$. For details {\it cf.} (\cite{BL1}, Prop. I.I), \cite{L}.
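To illustrate the formula, note that $\omega((1-aX^m)^{-1})\cdot \omega((1-X)^{-1})=\omega((1-aX^m)^{-1})$ (take $n=1$, $b=1$, so $r=1$), in accordance with $(1-X)^{-1}$ being the identity, while for instance $\omega((1-aX)^{-1})\cdot \omega((1-bX^{2})^{-1})=\omega((1-a^{2}bX^{2})^{-1})$. These small computations are recorded only to fix ideas.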
For the ${\rm W}(R)$-module structure of ${\rm NK_1}(R)$ see \cite{WEL1}
(for previous articles {\it cf.} \cite{BL1}, \cite{BL2}, \cite{STI},
\cite{WEL}).
\section{Higman Linearization}
~~Two matrices $\alpha\in {\rm M}_r(R)$ and $\beta\in {\rm M}_s(R)$ are said
to be {\bf stably equivalent} if there exists $\varepsilon_1, \varepsilon_2\in {\rm E}_t(R)$
(for some $t\ge \textnormal{max} \{r,s\}$) such that
$\varepsilon_1 (\alpha \perp I_{t-r})\varepsilon_2 = (\beta \perp I_{t-s})$.
\begin{lm} \label{hig} {\bf (Higman Linearization Process)}
Let $\alpha(X)$ be a matrix over $R[X]$. Then $\alpha(X)$ is
stably equivalent to a linear matrix in ${\rm M}_s(R[X])$ for some $s$.
\end{lm}
{\bf Proof.} We may assume that $n\ge 2$.
Let $$\alpha(X)=a_0+a_1X+a_2X^2+\cdots +a_nX^n\in {\rm M}_r(R[X]), \,\,\, a_i\in {\rm M}_r(R),\ r>1.$$
Then $\alpha(X)$ is stably equivalent to a matrix of degree $n-1$ over
$R[X]$ in the following manner:
$$\left(\begin{array}{cc}
I_r & -a_nX\\
0 & I_r
\end{array} \right)
\left(\begin{array}{cc}
a_0+\cdots +a_nX^n & 0\\
0 & I_r
\end{array} \right)
\left(\begin{array}{cc}
I_r & 0\\
X^{n-1}I_r & I_r
\end{array} \right) \\ $$
$$=
\left(\begin{array}{cc}
a_0+\cdots +a_{n-1}X^{n-1} & -a_nX\\
X^{n-1}I_r & I_r
\end{array} \right)=\alpha_1$$
has degree $(n-1)$. Hence $\alpha$ is stably equivalent to $\alpha_1$.
Repeating the above process $(n-2)$ times we get the result. \hfill$\Box$
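For instance, in the first non-trivial case $n=2$ a single step of the above process already yields a linear matrix: $\alpha(X)=a_0+a_1X+a_2X^2$ is stably equivalent to
$$\left(\begin{array}{cc}
a_0+a_1X & -a_2X\\
XI_r & I_r
\end{array} \right),$$
which has degree one in $X$.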
\begin{co} \label{hig1} Let $R$ be an associative ring with identity.
Let $\alpha(X)\in {\rm GL}_r(R[X])$ with $\alpha(0)=I_r$. Then in ${\rm K_1}(R[X])$
we have $[\alpha(X)]=[I_s+NX]$ for some $s>0$ and some matrix $N\in \rm M_s(R)$.
\end{co}
{\bf Proof.} By Lemma \ref{hig}
there exist $\varepsilon_1, \varepsilon_2\in {\rm E}_t(R[X])$ (for some $t>r$) such that
$(\alpha(X) \perp I_{t-r})= \varepsilon_1((I_s+NX) \perp I_{t-s})\varepsilon_2$ for
some $s>0$ and $N\in \rm M_s(R)$. Now as ${\rm E}(R[X])$ is a normal subgroup of
${\rm GL}(R[X])$, for some integer $u$ there exists $\varepsilon_1'\in {\rm E}_{t+u}(R[X])$ such that
$$(\varepsilon_1\perp I_u)((I_s+NX) \perp I_{t-s+u})=
((I_s+NX) \perp I_{t-s+u})\varepsilon_1'.$$
Hence in ${\rm K_1}(R[X])$ we have
$$[\alpha(X)]=[((I_s+NX) \perp I_{t-s+u})\varepsilon_1'(\varepsilon_2\perp I_u)]=
[I_s+NX].$$ \hfill$\Box$
\section{Main Theorem}
~~Let $R_t$ denote the ring $R[X]/(X^{t+1})$.
\begin{lm} \label{la3}
Let $R$ be a ring and $P(X)\in R[X]$ be any polynomial.
Then the following identity holds in the ring $R_t:$
\begin{equation*}
(1+X^r P(X))=(1+X^rP(0))(1+X^{r+1}Q(X)),
\end{equation*}
where $r>0$ and $Q(X)\in R[X]$, with $\deg(Q(X))< t-r$.
\end{lm}
{\bf Proof.} Let us write $P(X)=a_0+a_1X+\cdots+a_{t}X^{t}$. Then we can
write $P(X)=P(0)+XP'(X)$ for some $P'(X)\in R[X]$. Now, in $R_t$
{\small
\begin{align*}
(1+X^r P(X))(1+X^r P(0))^{-1}
& = (1+X^r P(0)+X^{r+1}P'(X))(1+X^r P(0))^{-1}\\
& = 1+X^{r+1}P'(X)(1-X^rP(0)+X^{2r}(P(0))^2-\cdots)\\
& = 1+X^{r+1}Q(X)
\end{align*}}
\!\!where $Q(X)\in R[X]$ with $\deg(Q(X))< t-r$.
Hence the lemma follows. \hfill$\Box$ \rm \vspace{0.1cm} \\
{\bf Remark.} Iterating the above process we can write for any polynomial
$P(X)\in R[X]$,
$(1+XP(X))=\Pi_{i=1}^t(1+a_iX^i)$ in $R_t$,
for some $a_i\in R$. By ascending induction it will follow that the $a_i$'s
are uniquely determined. In fact, if $R$ is commutative then
$a_i$'s are the $i$-th component of the
ghost vector corresponding to the big Witt vector of
$(1+XP(X))\in {\rm W}(R)=(1+XR[[X]])^{\times}$. For details see
(\cite{BL1}, \S I).
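For instance, in $R_3=R[X]/(X^4)$ one checks directly that
$$1+aX+bX^2+cX^3=(1+aX)(1+bX^2)\bigl(1+(c-ab)X^3\bigr),$$
so that in this truncation $a_1=a$, $a_2=b$ and $a_3=c-ab$; we include this small computation only as an illustration of the above remark.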
\begin{lm} \label{la4}
Let $R$ be a ring with $\frac{1}{k}\in R$ and $P(X)\in R[X]$.
Assume $P(0)$ lies in the center of $R$.
Then
$$(1+X^rP(X))^{k^r}=1 \Rightarrow (1+X^rP(X))=(1+X^{r+1}Q(X))$$ in the ring $R_t$,
where $r>0$ and $Q(X)\in R[X]$ is some polynomial with $\deg(Q(X))< t-r$.
\end{lm}
{\bf Proof.} By Lemma \ref{la3}
\begin{equation} \label{use4}
(1+X^rP(X))=(1+X^rP(0))(1+X^{r+1}P_1(X)),
\end{equation}
for some $P_1(X)\in R[X]$ with $\deg(P_1(X))< t-r$. Therefore, in $R_t$
$$(1+X^rP(X))^{k^r}=1\Rightarrow
(1+X^rP(0))^{k^r}=(1+X^{r+1}P_1(X))^{-{k^r}}.$$
As $\frac{1}{k}\in R$, we have
$$(1+{k^r}X^rP(0)+X^{r+1}P_2(X))=(1+X^{r+1}P_1(X))^{-{k^r}}.$$ This implies
\begin{align*}
(1+{k^r}X^rP(0))
& = \,\, (1+X^{r+1}P_1(X))^{-{k^r}}
(1+(1+{k^r}X^rP(0))^{-1}X^{r+1}P_2(X))^{-1}\\
& = \,\, (1+X^{r+1}P_3(X))
\end{align*}
for some $P_2(X),P_3(X)\in R[X]$ with $\deg P_2(X), \deg P_3(X)< t-r$. Now,
applying the homomorphism $X\mapsto \frac{1}{k}X$ we get
$$(1+X^rP(0))=(1+X^{r+1}P_4(X))$$ for some $P_4(X)\in R[X]$
with $\deg(P_4(X))< t-r$.
Substituting this in \eqref{use4} we get
$$(1+X^rP(X))=(1+X^{r+1}Q(X))$$ for some $Q(X)\in R[X]$ with
$\deg (Q(X))< t-r$. \hfill$\Box$ \rm \vspace{0.1cm} \\
{\bf Proof of Theorem} \ref{th1}.
Let $\alpha(X)\in {\rm GL}_n(R[X])$ with $[\alpha(0)]=[I]$ represent a $k$-torsion element of ${\rm NK_1}(R)$.
By Corollary \ref{hig1} in ${\rm K_1}(R[X])$,
$[\alpha(X)]=[(I_s+NX)]$ for some $s>0$ and
$N\in {\rm M}_s(R)$.
Since $(I_s+NX)$ is invertible, $N$ is nilpotent.
Let $N^{t+1}=0$. Since $[(I_s+NX)]^k=[I]$ in ${\rm K_1}(R[X])$, it follows that
$$[I_s+kNX+N^2X^2P_1(NX)] = [I]$$ for some $P_1(X)\in R[X]$. Hence, as before,
as $\frac{1}{k}\in R$,
\begin{align*}
[(I_s+kNX)^{-1}] & = [I_s+(I_s+kNX)^{-1}N^2X^2P_1(NX)]\\
& = [I_s+(I_s-kNX+N^2X^2P_2(NX))N^2X^2P_1(NX)]\\
& = [I_s+N^2X^2P(NX)]
\end{align*}
for some $P(X)\in R[X]$.
Since $[(I_s+N^2X^2P(NX))]^k=[I]$, arguing as in the proof of Lemma \ref{la4}
we get in ${\rm K_1}(R[X])$
$$[I_s+N^2X^2P(NX)]=[I_s+N^3X^3Q(NX)]$$ for some $Q(X)\in R[X]$.
Now by repeating the above mentioned
argument we get $$[I_s+N^2X^2P(NX)]=[I].$$
Finally, applying the homomorphism
$X\mapsto \frac{1}{k}X$ we get the desired result.\hfill$\Box$ \rm \vspace{0.1cm} \\
Now we prove Theorem \ref{th2} as a consequence of the Swan-Weibel homotopy trick.
For details see (\cite{GUB1}, Proof of Prop. 2.22).
First we recall the Local-Global Principle for a graded ring.
\rm \vspace{0.1cm} \\
{\bf Graded Local-Global Principle:}
Let $R=R_0\oplus R_1\oplus \cdots$ be a graded commutative ring with
$k$ a unit in $R_0$ and $\alpha(X)\in {\rm GL}_n(R[X])$ with $\alpha(0)=I_n$. If
$\alpha_{\mf{m}}(X)\in {\rm E}_n(R_{\mf{m}}[X])$ for all $\mf{m} \in {\rm Max} \, (R_0)$,
then $\alpha(X)\in {\rm E}_n(R[X])$. \rm \vspace{0.1cm}
This is derived from the usual Local-Global Principle by using the following
homotopy trick due to Swan and Weibel.
\rm \vspace{0.1cm} \\
{\bf Proof of Theorem \ref{th2}.}
Consider the ring homomorphism
$\theta: R\rightarrow R[X]$, given by
$(a_0+a_1+\cdots)\mapsto a_0+a_1X+\cdots$.
Then
\begin{align*}
[(I+N)]^k=[I] &\Rightarrow \theta([(I+N)]^k)=[(\theta(I+N))]^k=[I]\\
& \Rightarrow [(I+N_0+N_1X+\cdots+N_rX^r)]^k=[I]
\end{align*}
Let $\mf{m}$ be a maximal ideal in $R_0$.
By Theorem 1
\begin{equation*}[(I+N_0+N_1X+\cdots+N_rX^r)]=[I]
\end{equation*} in ${\rm SK_1}(R_0)_{\mf{m}}.$
Hence by the Graded Local-Global Principle $[(I+N)]=[(I+N_0)]$ in ${\rm SK_1}(R)$.
In particular, if $R_0$ is a reduced local ring then
the units of $R_0[X]$ coincide with the units of $R_0$.
Since $N_0$ is nilpotent,
$\det(I+N_0X)$ is a constant; evaluating at $X=0$ shows that this constant equals $1$, and evaluating at $X=1$ gives $\det(I+N_0)=1$. Hence
$I+N_0$ lies in ${\rm SL}_r(R_0)={\rm E}_r(R_0)$. This completes the proof. \hfill$\Box$
\section{Examples}
~~We show by an example that the condition
$\frac{1}{k}\in R$ is necessary in Theorem \ref{th2}, whence in
Theorem \ref{th1}. \rm \vspace{0.1cm} \\
{\bf Example.}
Let $R$ be a commutative ring with identity and $\alpha$ a $2 \times 2$
completion of $(1 - XY, X^2)$ over $R[X^2, XY, Y^2]$.
In (\cite{GUB}, \S 8, Example 8.2) it is shown that $\alpha \in
{\rm SK_1}(R[X^2, XY, Y^2]) \setminus {\rm SK_1}(R)$.
Now take $R = \ZZ_2$.
Then the square of the Mennicke symbol of the vector
$(1 - XY, X^2)$ is clearly trivial,
{\it i.e.} $\alpha^2 \in {\rm E}_3(\mathbb{Z}_2[X^2, XY, Y^2])$.
(See \cite{B} for definition of Mennicke symbol.) \rm \vspace{0.1cm} \\
{\bf Remark.}
Let $P$ be a finitely generated projective $R$-module. A theorem of
M.R. Gabel in \cite{GAB} asserts that $mP=P\oplus \cdots \oplus P$ (m times)
is free, for some $m$. (A result of T.Y. Lam in \cite{LAM} sharpens this bound
on $m$).
Ravi A. Rao has asked if the converse of the above is true over polynomial rings $R[X]$
with $R$ local.
More precisely, if $R$ is a local ring with $kR = R$, does ${\rm K}_0(R[X])$ have
non-trivial $k$-torsion?
| {
"timestamp": "2010-01-04T12:48:47",
"yymm": "1001",
"arxiv_id": "1001.0489",
"language": "en",
"url": "https://arxiv.org/abs/1001.0489",
"abstract": "When R is a commutative ring with identity, and if k is a natural number with kR = R, then C. Weibel proved that SK_1(R[X]) has no k-torsion. We reprove his result for any associative ring R with identity in which kR = R.",
"subjects": "K-Theory and Homology (math.KT); Commutative Algebra (math.AC)",
"title": "Absence of torsion for NK_1(R) over associative rings",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.966410494349896,
"lm_q2_score": 0.7341195152660687,
"lm_q1q2_score": 0.7094608036601875
} |
https://arxiv.org/abs/math/0610165 | Multiplication maps and vanishing theorems for toric varieties | We use multiplication maps to give a characteristic-free approach to vanishing theorems on toric varieties. Our approach is very elementary but is powerful enough to prove vanishing theorems. | \section{Introduction}\label{sec1}
The main purpose of this paper is to understand
various vanishing theorems on toric varieties
through multiplication maps.
We give an elementary and unified approach to vanishing theorems on
toric varieties.
The following theorem is the main theorem of this paper.
Some important special cases were already investigated in
various papers. See, for example, \cite[7.5.2.~Theorem]{danilov},
\cite[Section 7]{bc},
\cite[Theorem 5]{btlm}, and \cite[Section 2]{mustata}.
\begin{thm}[Main theorem I]\label{main}
Let $X$ be a toric variety defined over a field $k$ of
arbitrary characteristic and $B$
a reduced torus invariant Weil divisor on $X$.
Let $L$ be a line bundle on $X$. If
$H^i(X, \widetilde {\Omega}^a_{X}(\log B)\otimes L^l)=0$ for
some positive integer $l$,
then $H^i(X, \widetilde {\Omega}^a_{X}(\log B)\otimes L)=0$.
In particular, if $X$ is projective and
$L$ is ample, then $H^i(X, \widetilde {\Omega}^a_{X}(\log B)\otimes L)=0$
for any $i>0$.
\end{thm}
Before we go further, let us recall the definition of $\widetilde \Omega^a_X(\log B)$
(cf.~\cite[15.2]{danilov}).
\begin{defn}\label{12}
Let $W$ be any Zariski open set of $X$ such that $W$ is non-singular and
$\codim _X(X\setminus W)\geq 2$.
Furthermore, we assume that $B$ is a simple normal crossing divisor on
$W$. Under this assumption,
$\Omega^a_{W}(\log B)$
is a well-defined locally free sheaf on $W$.
Let $\iota:W\hookrightarrow X$ be the natural
open immersion.
Then we put $\widetilde \Omega^a_X(\log B)=\iota_*\Omega^a_W(\log B)$ for any
$a\geq 0$.
It is easy to see that the reflexive sheaf
$\widetilde \Omega^a_{X}(\log B)$ on $X$ does not
depend on the choice of $W$.
Note that $B$ is a simple normal crossing
divisor on $W$ if $W$ is a non-singular
toric variety.
If $B=0$, then we write $\widetilde \Omega^a_{X}=
\widetilde \Omega^a_X(\log B)$ for any $a\geq 0$.
\end{defn}
The above theorem contains the
following important vanishing theorems.
If $B=0$, then we obtain the famous
Bott type vanishing theorem for
toric varieties. It was first claimed
in \cite[7.5.2~Theorem]{danilov} without
proof. See \cite[Theorem 5]{btlm}.
The readers can find that this famous
vanishing theorem is stated in the standard reference
\cite[p.130]{odasensei} without proof.
\begin{cor}[Bott, Danilov, $\cdots$]\label{bott}
Let $X$ be a projective toric variety over $k$
and $L$ an ample line bundle on $X$.
Then $H^i(X, \widetilde \Omega^a_X\otimes L)=0$ for any $i>0$ and $a\geq 0$.
\end{cor}
In the main theorem:~Theorem \ref{main}, if we put $a=\dim X$, then we obtain
the toric version of Norimatsu type vanishing theorem.
It is nothing but Musta\c t\u a's vanishing theorem in \cite[Corollary 2.5 (iii)]{mustata}.
The readers can find the original formulation in
Corollary \ref{honyaku} below. One of my motivations
is to give an elementary proof of Musta\c t\u a's vanishing theorem.
\begin{cor}[Norimatsu, Musta\c t\u a, $\cdots$]\label{nori}
Let $X$ be a projective toric variety over $k$ and $B$ a reduced torus invariant
Weil divisor on $X$. Let $L$ be an ample line bundle on $X$.
Then $H^i(X, \mathcal O_X(K_X+B)\otimes L)=0$ for any $i>0$.
\end{cor}
Note that $K_X$ is the canonical divisor of $X$.
It is well known that $\mathcal O_X(K_X)\simeq \mathcal O_X(-\sum _iD_i)$,
where the summation $\sum _iD_i$ runs over
all the torus invariant prime divisors on $X$.
The final one is the Kodaira type vanishing theorem for toric varieties.
It is sufficiently powerful in the toric geometry (see Section \ref{sec3}).
\begin{cor}[Kodaira, $\cdots$]\label{kodaira}
Let $X$ be a projective toric variety over $k$ and $L$ an ample line bundle on $X$.
Then $H^i(X, \mathcal O_X(K_X)\otimes L)=0$ for any $i>0$.
\end{cor}
The next theorem is another main theorem of this paper.
It contains the Kawamata-Viehweg type vanishing theorem obtained by Musta\c t\u a
(see \cite[Corollary 2.5 (i) and (ii)]{mustata}).
Our formulation is very similar to Musta\c t\u a's
theorem:~\cite[Theorem 0.1]{mustata}, but is slightly different.
We will
quickly see the relationship between
Musta\c t\u a's original statement and
Theorem \ref{main2} in \ref{214}.
His statement is a special case of our theorem (see Corollary
\ref{musumusu}).
See also Proposition \ref{vari},
where we will treat a variant of Theorem \ref{main2}.
\begin{thm}[Main theorem II]\label{main2}
Let $X$ be a toric variety defined
over a field
$k$ of arbitrary characteristic and $D$ a torus invariant $\mathbb Q$-Weil
divisor on $X$. Assume that $lD$ is an integral
Weil divisor for some positive integer $l$.
If $H^i(X, \mathcal O_X(lD))=0$ $($resp.~$H^i(X, \mathcal O_X(K_X+lD))=0$$)$,
then we have
$H^i(X, \mathcal O_X(\llcorner D\lrcorner ))=0$ $($resp.~$H^i(X, \mathcal
O_X(K_X+\ulcorner D\urcorner ))=0$$)$.
\end{thm}
The following corollary easily follows from Theorem
\ref{main2}.
However, \cite[Theorem 0.1]{mustata} proves
it only when $D$ is an ample $\mathbb Q$-Cartier
divisor (see \cite[Corollary 2.5 (i)
and (ii)]{mustata}).
\begin{cor}[Kawamata-Viehweg, Musta\c t\u a, $\cdots$]\label{kavi}
Let $X$ be a complete toric variety over $k$ and $D$ a nef
$\mathbb Q$-Cartier torus invariant $\mathbb Q$-Weil divisor on $X$
with
the Iitaka dimension
$\kappa (X, D)=\kappa$.
Then we obtain $H^i(X, \mathcal O_X(\llcorner D\lrcorner))=0$ for
$i\ne 0$ and
$H^i(X, \mathcal O_X(K_X+\ulcorner D\urcorner ))=0$ for
$i\ne n-\kappa$, where $n=\dim X$.
\end{cor}
We note that for a $\mathbb Q$-Weil divisor
$D=\sum _{j=1}^{r}d_jD_j$ on $X$, we
define the round-up
$\ulcorner D\urcorner =\sum _{j=1}^{r}\ulcorner d_j\urcorner
D_j$ (resp.~the round-down
$\llcorner D\lrcorner =\sum _{j=1}^{r}\llcorner d_j\lrcorner D_j$),
where for any real number $x$, $\ulcorner x\urcorner $ (resp.~$\llcorner
x\lrcorner$) is the integer defined by $x\leq \ulcorner x\urcorner <x+1$ (resp.~$x-1
<\llcorner x\lrcorner \leq x$).
The fractional part $\{D\}$ of the
$\mathbb Q$-Weil divisor $D$ denotes $D-\llcorner D\lrcorner$.
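For example, if $D=\frac{1}{2}D_1-\frac{1}{3}D_2$, then $\ulcorner D\urcorner =D_1$, $\llcorner D\lrcorner =-D_2$, and $\{D\}=\frac{1}{2}D_1+\frac{2}{3}D_2$.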
We summarize the contents of this
paper:~In Section \ref{sec2}, we will prove Theorem \ref{main}
and Theorem \ref{main2}.
The main ingredient of our proof is the
{\em{multiplication map}}.
It is a mystery that no standard references on the
toric geometry treat the multiplication map systematically.
Let us introduce the $l$ times multiplication
map for toric varieties.
We consider $\mathbb P^n$ and
a finite surjective
morphism $F:\mathbb P^n\to \mathbb P^n: [X_0:\cdots: X_n]\mapsto
[X^l_0:\cdots:X^l_n]$.
It is the simplest example of $l$ times
multiplication maps for projective
toric varieties.
On the big torus $T\subset \mathbb P^n$, the
restriction $F_T:=F|_T:T\to T$ is nothing
but the group homomorphism expressed
by $(x_1, \cdots, x_n)\mapsto
(x_1^l, \cdots, x_n^l)$. For
an arbitrary $n$-dimensional
toric variety $X$, $F_T:T\to T$ naturally
extends to a finite surjective toric morphism
$F:X\to X$.
We call this $F:X\to X$ the {\em{$l$ times
multiplication map}} of $X$.
I believe that the multiplication map
will play important roles in the toric geometry.
Here, I will show its usefulness by proving
various vanishing theorems.
Our approach is very
elementary but is sufficiently powerful to prove
vanishing theorems.
Related topics are in Section 7 of \cite{arapura}.
We do not use Frobenius morphisms (cf.~\cite{btlm} and \cite{blickle})
nor Cox's homogeneous coordinate rings (cf.~\cite{mustata}).
We do not need any cumbersome combinatorial arguments
nor the Hodge theory (cf.~\cite{bc}).
We recommend the readers to compare our proof with the
others (cf.~\cite{bc}, \cite{btlm}, \cite{mustata}, etc.).
In Section \ref{varivari}, we consider slight generalizations
of the main theorems:~Theorem \ref{main} and Theorem
\ref{main2}. Let $\mathcal E$ be a reflexive sheaf on $X$.
Roughly speaking, we treat
$(\mathcal E\otimes {\widetilde \Omega}^a(\log B))^{**}$,
$(\mathcal E\otimes \mathcal O_X(\llcorner
D\lrcorner))^{**}$,
and $(\mathcal E\otimes \mathcal O_X(K_X+\ulcorner
D\urcorner))^{**}$
instead of ${\widetilde \Omega}^a(\log B)$,
$\mathcal O_X(\llcorner D\lrcorner)$,
and $\mathcal O_X(K_X+\ulcorner D\urcorner)$
respectively. We note
that $\mathcal E$ is not assumed to be equivariant.
In Section \ref{sec3}, we will treat Koll\'ar's injectivity
theorem for toric varieties. For toric varieties, it easily follows from
the Kodaira type vanishing theorem.
In Section \ref{sec4}, which is an appendix,
we will state relative vanishing
theorems explicitly for future uses.
We note that our reference list does not cover all the papers
treating the related topics. We apologize in advance to the colleagues whose
works are not appropriately mentioned in this paper.
\begin{ack}
I would like to express my gratitude to Professors Kazuhiro Fujiwara and
Takeshi Abe for giving me much advice and encouraging me
during the preparation of this paper. I thank
Doctor Hiroshi Sato for valuable
discussions and Professor Noboru Nakayama for
pointing out a little mistake. I was
partially supported by The Sumitomo
Foundation and by the Grant-in-Aid for
Young Scientists (A) $\sharp$17684001
from JSPS.
I would like to thank Professor Donu Arapura and
Doctor Sam Payne, who gave me comments by e-mails
after I circulated the first version of this paper.
Professor Donu Arapura informed me of his recent paper \cite{arapura}.
Doctor Sam Payne sent me his private notes on vanishing theorems
for toric varieties.
The discussions with him helped me revise this paper.
\end{ack}
Let $k$ be a fixed field of arbitrary characteristic $p$ ($p$ may be zero).
In this paper, everything is defined over $k$.
We do not assume that $k$ is algebraically closed.
\section{Multiplication maps and
vanishing theorems}\label{sec2}
We fix our notation and define the multiplication map.
\begin{say}\label{21}
Let $N\simeq \mathbb Z^n$ be a lattice
and $M=\Hom _{\mathbb Z}(N, \mathbb Z)$ the dual lattice.
For a fan $\Delta$ in $N_{\mathbb R}=N\otimes _{\mathbb Z}\mathbb R$, we
have the associated toric variety $X=X(\Delta)$.
We put $N'=\frac{1}{l}N$ and $M'=\Hom _{\mathbb Z}(N', \mathbb Z)$ for
any positive integer $l$.
We note that $M'=lM$.
Since $N_{\mathbb R}=N'_{\mathbb R}$, $\Delta$ is also a fan in
$N'_{\mathbb R}$.
We write $\Delta'$ to express the fan $\Delta$ in $N'_{\mathbb R}$.
Let $X'=X(\Delta')$ be the associated toric variety.
We note that $X\simeq X'$ as toric varieties.
We consider
the natural inclusion $\varphi:N\to N'$.
Then $\varphi$ induces a finite surjective toric morphism $F:X\to X'$.
We call it the {\em{$l$ times multiplication map of $X$}}.
The following is the most important example of $l$
times multiplication maps.
\begin{ex}
The finite surjective morphism
$F:\mathbb A^n\to \mathbb A^n$ given by $(a_1, \cdots, a_n)\mapsto
(a_1^l, \cdots, a_n^l)$ is the $l$ times multiplication
map of $\mathbb A^n$.
\end{ex}
\end{say}
Let us start the proof of the main theorem I:~Theorem \ref{main}.
\begin{say}\label{22}
Let $\mathcal A$ be an object on $X$. Then we write $\mathcal A'$
to indicate the corresponding
object on $X'$.
Let $T$ be the big torus of $X$.
We construct a split injection $\Omega^1_{T'}\to F_*\Omega^1_{T}$.
Note that $\Omega ^1_T$ is nothing but a $k[M]$-module
$M\otimes _\mathbb Z k[M]$.
\end{say}
We recall the toric description of $\Omega^1_T$ more precisely.
For the details, see \cite[\S 4]{danilov} and \cite{ishida}.
\begin{say}\label{23}
By choosing a base suitably, we have
$k[M]\simeq k[x_1, x^{-1}_1, \cdots, x_n, x^{-1}_n]$.
We can write $x^m=x^{m_1}_1 x^{m_2}_2\cdots x^{m_n}_n$ for
$m=(m_1, \cdots, m_n)\in \mathbb Z^n=M$.
Then we have the isomorphism of $k[M]$-modules
$M\otimes _{\mathbb Z}k[M]\to H^0(T, \Omega^1_T)$ induced by
$m\otimes
x^{\widetilde m}\mapsto \frac{dx^m}{x^m}\cdot
x^{\widetilde m}=x^{\widetilde m-m}dx^m$,
where $m, \widetilde m\in \mathbb Z^n=M$.
Note that $\wedge ^a M\otimes _{\mathbb Z}k[M]\simeq
H^0(T, \Omega^a_T)$ as $k[M]$-modules for any $a\geq 0$.
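For instance, when $n=1$ the above isomorphism sends $m\otimes x^{\widetilde m}$ to $m\,x^{\widetilde m}\frac{dx}{x}$ for $m, \widetilde m\in \mathbb Z$; in particular, $1\otimes 1$ corresponds to the generator $\frac{dx}{x}$ of $H^0(T, \Omega^1_T)$.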
\end{say}
We go back to the proof of the main theorem.
\begin{say}\label{24}
Therefore, $F_*\Omega^1_T$ corresponds to a $k[M']$-module $M\otimes
_{\mathbb Z}k[M]$.
We consider the $k[M']$-module homomorphism
$M'\otimes _{\mathbb Z}k[M']\to M\otimes _{\mathbb Z}k[M]$ induced
by $m_{\alpha}\otimes x^{m_{\beta}}\mapsto m_{\alpha}\otimes x^{lm_{\beta}}$.
This gives an injection $\Omega^1_{T'}\to F_*\Omega^1_T$.
We also consider the $k[M']$-module homomorphism
$M\otimes _{\mathbb Z}k[M]\to M'\otimes _{\mathbb Z}k[M']$ obtained
from $m_\alpha\otimes x^{m_\gamma}\mapsto m_\alpha\otimes x^{m_\beta}$ if
$m_\gamma=lm_\beta$ and $m_\alpha\otimes x^{m_\gamma}\mapsto 0$ otherwise.
By this homomorphism, the above injection
$\Omega^1_{T'}\to F_*\Omega^1_T$
splits.
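Indeed, the composition $M'\otimes _{\mathbb Z}k[M']\to M\otimes _{\mathbb Z}k[M]\to M'\otimes _{\mathbb Z}k[M']$ sends $m_{\alpha}\otimes x^{m_{\beta}}$ first to $m_{\alpha}\otimes x^{lm_{\beta}}$ and then back to $m_{\alpha}\otimes x^{m_{\beta}}$, that is, it is the identity.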
We can generalize the above construction to $\wedge ^{a}M'\otimes
_{\mathbb Z}k[M']$ and $\wedge ^{a}M\otimes _{\mathbb Z}k[M]$.
More precisely, we consider the $k[M']$-module
homomorphisms $\wedge ^aM'\otimes _{\mathbb Z}k[M']
\to \wedge ^aM\otimes _{\mathbb Z}k[M]$ given by
$m_{\alpha_1}\wedge \cdots \wedge
m_{\alpha_a}\otimes x^{m_{\beta}}\mapsto
m_{\alpha_1}\wedge \cdots \wedge
m_{\alpha_a}\otimes x^{lm_{\beta}}$,
and $\wedge ^a M\otimes _{\mathbb Z}k[M]\to
\wedge^aM'\otimes _{\mathbb Z}k[M']$ induced by
$m_{\alpha_1}\wedge \cdots \wedge
m_{\alpha_a}\otimes x^{m_{\gamma}}\mapsto
m_{\alpha_1}\wedge \cdots \wedge
m_{\alpha_a}\otimes x^{m_{\beta}}$ if $m_{\gamma}=lm_{\beta}$
and $m_{\alpha_1}\wedge \cdots \wedge
m_{\alpha_a}\otimes x^{m_{\gamma}}\mapsto 0$ otherwise.
So,
we obtain split injections
$\Omega^a_{T'}\to F_*\Omega^a_T$ for any $a\geq 0$.
\end{say}
\begin{say}\label{26}
Let $\Delta_V$ be the fan in $N_{\mathbb R}$ that is
obtained from $\Delta$ by removing the cones with
dimensions $\geq 2$.
Then $V=X(\Delta_V)$ is a non-singular toric variety such that
$\codim _X(X\setminus V)\geq 2$.
Let $B$ be a reduced torus invariant Weil divisor on $X$.
Then we can construct split injections
$\psi: \Omega^a_{V'}(\log B')\to F_*\Omega^a_V(\log B)$
for all $a\geq 0$, which are induced by
$\Omega^a_{T'}\to F_*\Omega^a_T$.
Note that we can see $\Omega^a_W(\log B)\subset \Omega^a_T$
for each affine toric open set $W$ of $V$.
To check that $\psi: \Omega^a_{V'}(\log B')\to F_*\Omega^a_V(\log B)$
is a split injection, it is sufficient to check
it on $U=k\times (k^{\times})^{n-1}\subset V$ since
$V$ is covered by finitely many
$k\times (k^{\times})^{n-1}$. On the open set
$U$, it is easy to see that
$\psi$ is a split injection by direct local computations.
\end{say}
\begin{say}\label{27}
Let $\iota: V\hookrightarrow X$ be the natural open immersion.
Since the following diagram
$$
\begin{CD}
V@>{\iota}>>X\\
@V{F}VV @VV{F}V \\
V'@>{\iota'}>> X'
\end{CD}
$$
is commutative,
we obtain split injections
$\widetilde \psi =\iota'_*\psi:
\widetilde \Omega^a_{X'}(\log B')\to F_*\widetilde \Omega^a_{X}(\log B)$
for all $a$.
Note that
$\widetilde \Omega^a_X(\log B)=\iota_*\Omega^a_{V}(\log B)$ by Definition
\ref{12}.
\end{say}
\begin{say}\label{28}
Let $L$ be a line bundle on $X$. Since $L\simeq \mathcal O_X(G)$
for some torus invariant Cartier divisor $G$, we can see that
$F^*L'\simeq L^l$.
By combining these results,
\begin{align*}
H^i(X, \widetilde \Omega^a_X(\log B)\otimes L)&\simeq
H^i(X', \widetilde \Omega^a_{X'}(\log B')\otimes L')
\\
&\subset
H^i(X', F_*\widetilde \Omega^a_X(\log B)\otimes L')\\&\simeq
H^i(X, \widetilde \Omega^a_X(\log B)\otimes L^l).
\end{align*}
This inclusion and Serre's vanishing theorem imply Theorem \ref{main}.
\end{say}
\begin{say}
The corollaries in Section \ref{sec1} directly follow from
the main theorem I:~Theorem \ref{main}. We note that
Corollary \ref{nori} is equivalent to
the following statement.
This formulation seems to be more
useful for various applications.
\begin{cor}[{cf.~\cite[Corollary 2.5 (iii)]{mustata}}]\label{honyaku}
Let $X$ be a projective toric variety over $k$ and $L$ an ample
line bundle on $X$.
If $D_{j_1}, \cdots, D_{j_r}$ are distinct
torus invariant prime divisors, then
$H^i(X, L\otimes \mathcal O_X(-D_{j_1}-\cdots -D_{j_r}))=0$ for every
$i>0$.
\end{cor}
\end{say}
Let us go to the proofs
of the main theorem II:~Theorem \ref{main2}, and Corollary \ref{kavi}.
\begin{say}[Proof of Theorem \ref{main2}]\label{210}
Let $F:X\to X'$ be the $l$ times multiplication
map constructed in \ref{21}. Then there
exist natural split injections
$\mathcal O_{V'}(\llcorner D'\lrcorner)\to F_*\mathcal O_V(lD)$ and
$\mathcal O_{V'}(K_{V'}+\ulcorner D'\urcorner)\to F_*\mathcal O_V(K_V+lD)$,
which are induced by the split injections
$\mathcal O_{T'}\to F_*\mathcal O_T$ and
$\Omega^n_{T'}\to F_*\Omega^n_{T}$ (see \ref{24}).
By pushing them forward to $X'$, we obtain split injections
$\mathcal O_{X'}(\llcorner D'\lrcorner)\to F_*\mathcal O_X(lD)$ and
$\mathcal O_{X'}(K_{X'}+\ulcorner D'\urcorner )\to F_*
\mathcal O_X(K_X+lD)$.
So, we obtain
\begin{align*}
H^i(X, \mathcal O_{X}(\llcorner D\lrcorner))&\simeq
H^i(X', \mathcal O_{X'}(\llcorner D'\lrcorner))\\
&\subset
H^i(X', F_*\mathcal O_X(lD) )\simeq
H^i(X, \mathcal O_X(lD))
\end{align*}
and
\begin{align*}
H^i(X, \mathcal O_{X}(K_X+\ulcorner D\urcorner))&\simeq
H^i(X', \mathcal O_{X'}(K_{X'}+\ulcorner D'\urcorner))\\
&\subset
H^i(X', F_*\mathcal O_X(K_X+lD) )\\&\simeq
H^i(X, \mathcal O_X(K_X+lD)).
\end{align*}
Thus, Theorem \ref{main2} follows.
\end{say}
\begin{say}[Proof of Corollary \ref{kavi}]
We take a positive integer $l$ such that $lD$ is integral and Cartier.
Then $\mathcal O_X(K_X+lD)\simeq \mathcal O_X(K_X)\otimes
\mathcal O_X(lD)$ since
$\mathcal O_X(lD)$ is locally free.
Thus, $H^i(X, \mathcal O_X(K_X)\otimes \mathcal
O_X(lD))=0$ for
$i\ne n-\kappa$ (see Theorem \ref{kol} below) and
$H^i(X, \mathcal O_X(lD))=0$ for
$i\ne 0$ since $lD$ is a nef Cartier divisor (see, for example,
\cite[p.74 Corollary]{fulton}).
This implies the desired vanishing theorems in
Corollary \ref{kavi}.
\end{say}
\begin{rem}
Note that there are complete toric varieties that have no
non-trivial nef line bundles (see \cite{f-k} and \cite{fp}).
\end{rem}
The next remark is due to Nakayama.
\begin{rem}\label{linear}
In Theorem \ref{main2} and Corollary \ref{kavi}, the assumption that $D$
is a torus invariant $\mathbb Q$-Weil
divisor on $X$ can be slightly weakened.
It is sufficient to assume that the fractional
part $\{D\}$ is a torus invariant $\mathbb Q$-Weil
divisor on $X$. We note that
the integral part $\llcorner D\lrcorner$ is
always linearly equivalent to a torus invariant Weil divisor on $X$.
Similar modifications work for
Propositions \ref{vari}, \ref{sp}, Corollary \ref{musumusu},
Theorems \ref{43}, and \ref{44} below.
We leave the details as an exercise for the reader.
\end{rem}
\begin{say}
The following proposition is a variant of Theorem \ref{main2}.
\begin{prop}\label{vari}
We use the same notation as in {\em{Theorem \ref{main2}}}.
Let $B$ be a reduced torus invariant Weil divisor on $X$ such that
$B$ and $\{D\}$ have no common irreducible components.
If $H^i(X, \mathcal O_X(K_X+B+lD))=0$,
then $H^i(X, \mathcal O_X(K_X+B+\ulcorner D\urcorner))=0$.
We further assume that
$X$ is projective and $D$ is an ample $\mathbb Q$-Cartier $\mathbb Q$-Weil
divisor. Then
$H^i(X, \mathcal O_X(K_X+B+\ulcorner D\urcorner))=0$ for
$i>0$.
\end{prop}
The proof is essentially the same as that of Theorem \ref{main2} if we
use Corollary \ref{nori}. We leave it as an exercise for the reader.
\end{say}
\begin{say}\label{214}
Let us compare
Musta\c t\u a's original vanishing
theorem:~\cite[Theorem 0.1]{mustata}
with Theorem \ref{main2}.
The following corollary is nothing but a reformulation
of Theorem \ref{main2}, which is a slight but
important generalization
of Musta\c t\u a's vanishing theorem.
We note that Sam Payne independently
obtained the first part of Corollary \ref{musumusu}
by another method.
\begin{cor}\label{musumusu}
Let $X$ be a toric variety defined over $k$ and $D$ a
torus invariant Weil divisor on $X$.
Suppose that we have $E=\sum _{j=1}^{d}a_jD_j$
with $0\leq a_j\leq 1$, where $D_1, \cdots, D_d$ are
distinct torus invariant prime divisors on $X$,
such that $mE$ is an integral Weil divisor for some integer
$m\geq 1$. If $H^i(X, \mathcal O_X(D+m(D+E)))=0$ for
some $i\geq 0$, then $H^i(X, \mathcal O_X(D))=0$.
Moreover, if $H^i(X, \mathcal O_X(K_X+D+m(D+E)))=0$
for some $i\geq 0$, then
$H^i(X, \mathcal O_X(K_X+D+\ulcorner E\urcorner))=0$.
\end{cor}
\begin{proof}
We put $l:=m+1$ and consider a $\mathbb Q$-Weil divisor
$D^{\dag}:=D+\frac{m}{m+1}E$.
Then, apply Theorem \ref{main2}.
We note that $l D^{\dag}=D+m(D+E)$,
$\llcorner D^{\dag}\lrcorner =D$,
and $\ulcorner D^{\dag}\urcorner =D+\ulcorner E\urcorner$.
\end{proof}
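The bookkeeping in the proof above takes place entirely on the coefficients of the prime divisors. The following small Python sketch is only an illustrative numerical check (the specific coefficients are hypothetical and not taken from the text): it verifies the identities $l D^{\dag}=D+m(D+E)$, $\llcorner D^{\dag}\lrcorner =D$, and $\ulcorner D^{\dag}\urcorner =D+\ulcorner E\urcorner$ coefficientwise.
\begin{verbatim}
from fractions import Fraction
from math import floor, ceil

# Hypothetical coefficients on three prime divisors D_1, D_2, D_3:
# D is an integral Weil divisor, E = sum a_j D_j with 0 <= a_j <= 1 and mE integral.
m = 3
D = [2, -1, 0]
E = [Fraction(1, 3), Fraction(2, 3), Fraction(1)]

l = m + 1
D_dag = [d + Fraction(m, m + 1) * a for d, a in zip(D, E)]  # D^dag = D + (m/(m+1))E

assert [l * c for c in D_dag] == [d + m * (d + a) for d, a in zip(D, E)]  # lD^dag = D + m(D+E)
assert [floor(c) for c in D_dag] == D                                     # round-down recovers D
assert [ceil(c) for c in D_dag] == [d + ceil(a) for d, a in zip(D, E)]    # round-up is D + round-up(E)
print("coefficientwise identities verified")
\end{verbatim}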
\begin{rem}
In Corollary \ref{musumusu}, we do not assume that
$m(D+E)$ is Cartier. So, the first statement is slightly
better than Musta\c t\u a's original
one:~\cite[Theorem 0.1]{mustata}.
This difference may look very small.
However, it causes big differences in various
applications (see Corollary \ref{kavi} and Remark \ref{2200} below).
The latter statement is new.
As we saw in Remark \ref{linear}, we do not have to assume
that $D$ is torus invariant.
\end{rem}
\begin{rem}\label{2200}
Let $D$ be a torus invariant $\mathbb Q$-Weil divisor on $X$ such that
$lD$ is integral.
If we put $D^{\spadesuit}:=\llcorner D\lrcorner$
and $E^{\spadesuit}:=\frac{l}{l-1}\{D\}$,
and apply Corollary \ref{musumusu}
to $D^{\spadesuit}$ and $E^{\spadesuit}$ with $m:=l-1$,
then we can recover Theorem \ref{main2} from
Corollary \ref{musumusu}.
To recover Theorem \ref{main2} from
Musta\c t\u a's theorem:~\cite[Theorem 0.1]{mustata},
we have to assume that $m(D^{\spadesuit}+E^{\spadesuit})
=lD-\llcorner D\lrcorner$ is Cartier.
This seems to be a rather artificial assumption.
Thus, we believe that our theorem is a genuine improvement.
\end{rem}
\end{say}
\begin{say}
In \cite{vie}, Viehweg obtained his vanishing theorems
as applications of the Bogomolov type vanishing
theorem (cf.~\cite[Theorem III]{vie}).
For toric varieties, we can easily check the following
Bogomolov type vanishing theorem.
\begin{thm}[{Bogomolov, $\cdots$}]\label{bogo}
Let $X$ be a complete toric
variety defined over a field $k$ and $B$ a reduced torus invariant
Weil divisor on $X$.
Let $L$ be a line bundle on $X$ with
the Iitaka dimension $\kappa (X, L)\geq 0$.
Then $H^0(X, \widetilde {\Omega}^a_X(\log B)\otimes L^{-1})=0$
for any $a\geq 0$ unless $L\simeq \mathcal O_X$.
\end{thm}
\begin{proof}
Assume that $H^0(X, \widetilde {\Omega}^a_X(\log B)
\otimes L^{-1})\ne 0$.
Since $\widetilde {\Omega}^a_{X}(\log B)\subset \wedge^a
M\otimes \mathcal O_X$, we obtain
$H^0(X, L^{-1})\ne 0$. Therefore,
$L\simeq \mathcal O_X$ by the
assumption $\kappa (X, L)\geq 0$.
\end{proof}
We think that the Kawamata-Viehweg type vanishing theorem for toric
varieties (cf.~Corollary \ref{kavi}) does not directly follow from
Theorem \ref{bogo}.
\end{say}
\begin{say} We close this section with the following three remarks.
\begin{rem}\label{coxx}
In \cite[Theorem 7.1]{bc}, Corollary \ref{bott} was proved under the
assumption that the toric variety is $\mathbb Q$-factorial, equivalently,
has only quotient singularities.
Batyrev and Cox proved it as a special case of \cite[Theorem 7.2]{bc}.
We note that we can easily prove \cite[Theorem 7.2]{bc} by
\cite[Theorem 5.4]{bc}, which is \cite[15.7]{danilov}, and Corollary \ref{bott}
using induction on $k$ (not on $p-k$).
For $k$ and $p-k$, see the proof of \cite[Theorem 7.2]{bc}.
Therefore, we can obtain \cite[Lemma 7.4]{bc} as a corollary
of the vanishing theorem:~Corollary \ref{bott}.
We do not pursue this subject further here, since it requires Hodge theory.
\end{rem}
\begin{rem}[Frobenius morphisms]\label{fro}
If $l=p$ and $k$ is a perfect
field, then $F:V\to V'$ is the relative Frobenius
morphism and $\psi$ induces
the inverse Cartier isomorphisms
$\wedge ^aC^{-1}: \Omega^a_{V'/k}(\log B')\simeq \mathcal H^a(F_*
\Omega^{\bullet}_{V/k}(\log B))$ for any $a\geq 0$.
All the computations we need were described in \cite[9.14.~Theorem]{ev}.
We note that
this technique produces the
$E_1$-degeneration of the spectral sequence
$E^{ij}_1=H^j(X, \widetilde \Omega^i_{X}(\log B))\Rightarrow
\mathbb H^{i+j}(X, \widetilde \Omega^{\bullet}_{X}(\log B))$
(see \cite[Remark 1]{btlm}).
We do not pursue this topic since it was already
treated in \cite{btlm} and \cite{blickle}.
\end{rem}
\begin{rem}[Applications of vanishing theorems]
In Section 4 in \cite{mustata}, Musta\c t\u a
obtained various results on linear systems on toric
varieties as applications of
his vanishing theorem (cf.~\cite[Corollary 2.5 (iii)]{mustata}
or Corollaries \ref{nori} and \ref{honyaku}).
In those applications, the considered toric varieties are always
non-singular.
In \cite{fujino}, Musta\c t\u a's results in \cite[Section 4]{mustata}
were reproved and some of them were
generalized for singular toric varieties.
See \cite[Section 4 and Remark 3.3]{fujino}.
However, the proofs in \cite{fujino} are
quite different from Musta\c t\u a's.
They depend on the toric Mori theory.
Note that the foundation of the toric Mori theory was constructed
without using vanishing theorems (see \cite{reid}, \cite{fs}, \cite{f-3}, and
\cite{sato}).
See also \cite[\S 4.~Applications]{sato} for some generalizations of
Musta\c t\u a's results for the relative setting.
\end{rem}
\end{say}
\section{Variants of the main vanishing theorems}\label{varivari}
In this section, we treat slight
generalizations of the main vanishing theorems.
We need no new arguments.
\begin{say}\label{300}
The following proposition is a small generalization of Theorem \ref{main}.
It may be useful in the future, so we state it here.
\begin{prop}\label{newnew}
Let $X$ and $B$ be the same as in
{\em{Theorem \ref{main}}}.
Let $D$ be a $($not necessarily torus invariant$)$ Weil divisor on $X$
and $\mathcal E$ a reflexive sheaf on $X$.
We consider the $l$ times multiplication
map $F:X\to X'\simeq X$, which was defined in {\em{\ref{21}}},
for some positive integer $l$.
If $H^i(X, (\widetilde \Omega^a_X(\log B)
\otimes F^*\mathcal E\otimes \mathcal O_X(lD))^{**})=0$,
then $H^i(X, (\widetilde \Omega^a_X(\log B)\otimes
\mathcal E\otimes \mathcal O_X(D))^{**})=0$.
In particular,
$H^i(X, (\widetilde \Omega^a_X(\log B)
\otimes \mathcal O_X(lD))^{**})=0$ implies
$H^i(X, (\widetilde \Omega^a_X(\log B)
\otimes \mathcal O_X(D))^{**})=0$.
\end{prop}
We will prove this proposition
after the proof of Proposition \ref{sp}.
\begin{rem}
In Proposition \ref{newnew},
if $\mathcal E$ is locally free
and $D$ (resp.~$lD$) is Cartier,
or $X$ is non-singular, then
$(\widetilde \Omega^a_X(\log B)
\otimes \mathcal E\otimes \mathcal O_X(D))^{**}
\simeq \widetilde \Omega^a_X(\log B)
\otimes \mathcal E\otimes \mathcal O_X(D)
$
(resp.~$(\widetilde \Omega^a_X(\log B)
\otimes F^*\mathcal E\otimes \mathcal O_X(lD))^{**}
\simeq \widetilde \Omega^a_X(\log B)
\otimes F^*\mathcal E\otimes \mathcal O_X(lD)
$)
in Proposition \ref{newnew}.
See Remark \ref{2111} (ii) below.
\end{rem}
\end{say}
\begin{say}
We treat a similar variant of Theorem \ref{main2} here.
Sam Payne independently obtained
a special case of the
following proposition under the
extra assumption that $\mathcal E$ is equivariant.
I was inspired by his private notes.
\begin{prop}\label{sp}
We use the same notation as in {\em{Theorem \ref{main2}}}.
Let $\mathcal E$ be a reflexive sheaf on $X$.
Let $F:X\to X'\simeq X$ be the
$l$ times multiplication map as in {\em{\ref{21}}}.
If $H^i(X, (F^*\mathcal E\otimes \mathcal O_X(lD))^{**})=0$
$($resp.~$H^i(X, (F^*\mathcal E
\otimes \mathcal O_X(K_X+lD))^{**})
=0$$)$, then we have $H^i(X, (\mathcal E\otimes
\mathcal O_X(\llcorner D\lrcorner ))^{**})=0$
$($resp.~$H^i(X, (\mathcal E\otimes
\mathcal O_X(K_X+\ulcorner
D\urcorner))^{**})=0$$)$.
\end{prop}
\begin{rem}
If $\mathcal E$ is a locally free sheaf
or $X$ is non-singular, then we
do not need to take double duals in Proposition \ref{sp}.
See Remark \ref{2111} (ii) below.
If $\mathcal E\simeq \mathcal O_X$, then Proposition
\ref{sp} is nothing but Theorem \ref{main2}.
\end{rem}
Before we go to the proofs, we make some remarks on
reflexive sheaves.
\begin{rem}\label{2111}
(i) Let $\mathcal F$ be a coherent sheaf on a normal variety $X$.
Then $\mathcal F^{**}$
denotes the double dual of $\mathcal F$.
(ii) Let $\mathcal F_1$ and $\mathcal F_2$ be reflexive
sheaves on a normal variety $X$.
Then $(\mathcal F_1\otimes \mathcal F_2)^{**}
\simeq \mathcal F_1\otimes \mathcal F_2$ if
one of the $\mathcal F_i$
is locally free.
\end{rem}
\begin{proof}[Proof of {\em{Proposition \ref{sp}}}]
Let $V'$ be the Zariski open set of $X'$ as in \ref{26}.
We take a Zariski open set
$W'$ of $V'$ such that
$\mathcal E'$ is locally free on $W'$ and $\codim _{X'}(X'\setminus
W')\geq 2$.
Note that $W'$ is not torus invariant when $W'\ne V'$.
We put $W=F^{-1}(W')\subset V$. Then we obtain
the following commutative diagram
$$
\begin{CD}
W@>>>X\\
@V{F}VV @VV{F}V\\
W'@>>>X'
\end{CD}
$$
as in \ref{27}, where the horizontal arrows are
natural open immersions.
We have split injections
$$
\mathcal E'\otimes \mathcal O_{W'}(\llcorner
D'\lrcorner)\to \mathcal E'\otimes F_*\mathcal O_W(lD)
\simeq F_*(F^*\mathcal E'\otimes \mathcal O_W(lD))
$$
and
\begin{align*}
\mathcal E'\otimes \mathcal O_{W'}(K_{W'}+\ulcorner
D'\urcorner )&\to \mathcal E'\otimes
F_*\mathcal O_W(K_W+lD)\\
&\simeq
F_*(F^*\mathcal E'\otimes \mathcal O_{W}(K_W+lD))
\end{align*}
on $W'$ by \ref{210} and the projection formula.
By pushing them forward to $X'$, we
obtain the desired vanishing theorems by the same arguments
as in \ref{210}.
\end{proof}
\end{say}
\begin{proof}[Proof of {\em{Proposition \ref{newnew}}}]
We note that we can replace $D$ by a linearly equivalent torus
invariant Weil divisor. So, we assume that $D$ is
torus invariant.
By the arguments in \ref{26} and \ref{28},
we can check that there exist split injections
$$\Omega^a_{W'}(\log B')\otimes \mathcal E\otimes
\mathcal O_{W'}(D')\to
F_*(\Omega ^a_W(\log B)\otimes F^*\mathcal E
\otimes \mathcal O_W(lD))$$ for
any $a\geq 0$, where $W'$ (resp.~$W$) is
the Zariski open set of $V'$ (resp.~$V$)
defined in the proof of Proposition \ref{sp}.
Note that $F^*\mathcal O_{V'}(D')\simeq
\mathcal O_V(lD)$ since $D$ is Cartier
on $V$; this is because $V$ is non-singular.
Therefore, by pushing the above split injections to
$X'$,
we have split injections
$(\widetilde \Omega^a_{X'}(\log B')\otimes
\mathcal E\otimes
\mathcal O_{X'}(D'))^{**}
\to F_*((\widetilde \Omega ^a_X(\log B)
\otimes F^*\mathcal E\otimes \mathcal O_X(lD))^{**})$ for
all $a\geq 0$ (see \ref{27}).
This obviously implies Proposition \ref{newnew}.
\end{proof}
By applying Proposition
\ref{newnew} in place of Theorem \ref{main},
some vanishing theorems in this paper can be generalized slightly.
We leave the details as an exercise for the reader.
We also leave to the interested readers the pleasure of
combining
the latter part of Proposition \ref{sp} with
Proposition \ref{vari}.
\section{Koll\'ar's injectivity theorem}\label{sec3}
In this section, we treat Koll\'ar's injectivity theorem
(cf.~\cite[Theorem 2.2]{koll}) for toric varieties.
It is an application of Corollary \ref{kodaira}.
\begin{thm}\label{kol}
Let $X$ be a complete toric variety defined over $k$ and $L$ a nef line
bundle on $X$.
Let $s$ be a non-zero global section of $L^l$, where
$l\geq 0$.
Then
$$
\times s:H^i(X, \mathcal O_X(K_X)\otimes
L^m)\to H^i(X, \mathcal O_X(K_X)\otimes L^{m+l})
$$
is injective for any $m\geq 1$ and $i\geq 0$,
where $\times s$ is the morphism induced by the tensor product with $s$.
More precisely, $H^i(X, \mathcal O_X(K_X)\otimes L^m)=0$ for any $m\geq 1$
when $i\ne n-\kappa$.
Here, $n=\dim X$ and $\kappa =\kappa (X, L)$.
\end{thm}
The following lemma is well known; it can be found in any textbook on
toric geometry (see, for example, \cite[p.76
Proposition and p.89 Proposition]{fulton}).
\begin{lem}\label{classical}
Let $f:X\to Y$ be a proper birational toric morphism.
Then $R^if_*\mathcal O_X=0$ and $R^if_*\mathcal O_X(K_X)=0$ for all
$i>0$,
$f_*\mathcal O_X\simeq \mathcal O_Y$, and $f_*\mathcal O_X(K_X)\simeq
\mathcal O_Y(K_Y)$.
\end{lem}
The next lemma is a slight generalization of Lemma \ref{classical}.
\begin{lem}\label{new}
Let $f:X\to Y$ be a proper surjective toric morphism with connected fibers.
Then $R^if_*\mathcal O_X=0$ for $i>0$ and
$f_*\mathcal O_X\simeq \mathcal O_Y$.
Moreover, $R^{n-m}f_*\mathcal O_X(K_X)\simeq
\mathcal O_Y(K_Y)$ and $R^if_*\mathcal O_X(K_X)=0$ for $i\ne n-m$,
where $n=\dim X$ and $m=\dim Y$.
\end{lem}
\begin{proof}[Sketch of the proof]
The former statement is an exercise if we use Lemma \ref{classical}.
For the proof, see, for example, \cite[Theorem 3.2]{ishida}.
The latter part follows from the Grothendieck duality and the former
statement.
\end{proof}
Lemma \ref{new} implies that Koll\'ar's torsion-freeness (cf.~\cite
[Theorem 2.1 (i)]{koll}) is obvious
for toric varieties and Koll\'ar's vanishing theorem (cf.~\cite[Theorem
2.1 (iii)]{koll}) is a special case
of Corollary \ref{kodaira} in the toric geometry.
\begin{proof}[{Proof of {\em{Theorem \ref{kol}}}}]
Since $L$ is nef, there exists a proper surjective toric
morphism with connected fibers $f:X\to Y$ such that
$L\simeq f^*H$, where $H$ is an ample line bundle on $Y$.
By the definition of $\kappa$, we have $\dim Y=\kappa$.
We consider the spectral sequence $H^i(Y, R^jf_*\mathcal O_X(K_X)
\otimes H^b)\Rightarrow H^{i+j}(X, \mathcal O_X(K_X)\otimes L^b)$ for any
integer $b$.
By Lemma \ref{new}, we obtain $H^i(Y, \mathcal O_Y(K_Y)\otimes
H^b)\simeq H^{i+n-\kappa}(X, \mathcal O_X(K_X)\otimes L^b)$.
Therefore, we have $H^{n-\kappa}(X, \mathcal O_X(K_X)\otimes L^b)\simeq
H^0(Y, \mathcal O_Y(K_Y)\otimes H^b)$ and
$H^{i}(X, \mathcal O_X(K_X)\otimes L^m)=0$ for $i\ne n-\kappa$
and $m\geq 1$ by Corollary \ref{kodaira}.
Note that $H^0(X, L^l)\simeq H^0(Y, H^l)$.
So, there exists a non-zero $t\in H^0(Y, H^l)$ such that $s=f^*t$.
Thus,
$
\times s:H^{n-\kappa}(X, \mathcal O_X(K_X)\otimes
L^m)\to H^{n-\kappa}(X, \mathcal O_X(K_X)\otimes L^{m+l})$
is nothing but
$
\times t :H^0(Y, \mathcal O_Y(K_Y)\otimes
H^m)\to H^0(Y, \mathcal O_Y(K_Y)\otimes H^{m+l})
$.
Therefore, $\times s$ is injective since $\times t$ is injective.
\end{proof}
\begin{say}
As we saw in Theorem \ref{kol},
the Kodaira type vanishing theorem (cf.~Corollary \ref{kodaira})
holds for nef and big line bundles.
However, the Norimatsu type vanishing theorem (cf.~Corollary \ref{nori})
does not always hold for nef and big line bundles by the next example.
\begin{ex}
In this example, we assume $k=\mathbb C$,
the complex number field, for simplicity.
Let $P\in \mathbb P^2$ be a torus invariant
closed point
and let $f:X\to \mathbb P^2$ be the
blow-up at $P$. Let $B$ be the $f$-exceptional curve on $X$.
Then we obtain
$0\to \mathcal O_X(K_X)\to \mathcal
O_X(K_X+B)\to \mathcal O_B(K_B)\to 0$ by adjunction.
By applying $R^if_*$, we obtain
$f_*\mathcal O_X(K_X+B)\simeq
\mathcal O_{\mathbb P^2}(K_{\mathbb P^2})$
and $R^1f_*\mathcal O_X(K_X+B)\simeq
\mathbb C(P)$ since $R^if_*\mathcal O_X(K_X)=0$ for
$i>0$. We put $H=\mathcal O_{\mathbb P^2}(1)$
and $L=f^*H$. Note that
$L$ is nef and big. Then,
by the Leray spectral sequence,
we have the following exact sequence:
$0\to H^1(\mathbb P^2, f_*\mathcal O_X(K_X+B)\otimes H)\to
H^1(X, \mathcal O_X(K_X+B)\otimes L)\to
H^0(\mathbb P^2, R^1f_*\mathcal O_X(K_X+B)\otimes H)\to
H^2(\mathbb P^2, f_*\mathcal O_X(K_X+B)\otimes H)\to \cdots .$
Since the first and the last terms are zero, $H^1(X, \mathcal O_X(K_X+B)\otimes L)
\simeq H^0(\mathbb P^2, \mathbb C(P)\otimes H)\simeq \mathbb C$.
\end{ex}
\end{say}
\section{Appendix:~Relative vanishing theorems}\label{sec4}
In this appendix, we state relative vanishing theorems explicitly
for future uses.
All the vanishing theorems easily follow from the main theorems
and their proofs. We only give a proof of Theorem \ref{43} for the
readers' convenience. The others are similar and easier to prove.
\begin{say}
Let $f:X\to Y$ be a proper
surjective toric morphism and $B$ a reduced
torus invariant Weil divisor on $X$. We put $n=\dim X$
and $m=\dim Y$.
\begin{thm}\label{th-a}
Let $L$ be an $f$-ample line bundle
on $X$.
Then we have $R^if_*(\widetilde \Omega^a_{X}(\log B)\otimes
L)=0$ for $i>0$.
In particular, $R^if_*(\widetilde \Omega^a_X\otimes L)=0$,
$R^if_*(\mathcal O_X(K_X+B)\otimes L)=0$, and
$R^if_*(\mathcal O_X(K_X)\otimes L)=0$ for $i>0$.
\end{thm}
\begin{thm}\label{43}
Let $D$ be a torus invariant $\mathbb Q$-Weil divisor on $X$.
Assume that $D$ is $\mathbb Q$-Cartier and $f$-nef
with the relative Iitaka dimension
$\kappa (X/Y, D)=\kappa$.
Then $R^if_*\mathcal O_X(K_X+\ulcorner D\urcorner )=0$ for
$i\ne n-m-
\kappa$ and $R^if_*\mathcal O_X(\llcorner D\lrcorner )=0$ for
$i\ne 0$.
\end{thm}
\begin{proof}
Let $l$ be a positive integer such that
$lD$ is integral and Cartier.
We note that $f^*f_*\mathcal O_X(lD)\to \mathcal O_X(lD)$ is
surjective since $lD$ is an $f$-nef
Cartier divisor on $X$ (see, for example, \cite[Chapter
IV, 1.13 Lemma]{nakayamabon}).
We can assume that $Y$ is affine since the problem is
local.
By \ref{210},
it is sufficient to prove that
$R^if_*(\mathcal O_X(K_X)\otimes
\mathcal O_X(lD))=0$ for $i\ne n-m-\kappa$ and
$R^if_*\mathcal O_X(lD)=0$ for $i\ne 0$.
First, we prove $R^if_*\mathcal O_X(lD)=0$ for
$i>0$.
In this case, $R^if_*\mathcal O_X(lD)\simeq
H^i(X, \mathcal O_X(lD))=0$ by \cite[p.74 Corollary]{fulton}
since $\mathcal O_X(lD)$ is generated by its global
sections and the support of
the fan associated to
$X$ is convex.
Next, we prove
$R^if_*(\mathcal O_X(K_X)\otimes
\mathcal O_X(lD))=0$ for $i\ne n-m-\kappa$.
Let $g:X\to Z$ be a proper surjective
toric morphism over $Y$ with connected fibers such
that $\mathcal O_X(lD)\simeq g^*\mathcal O_Z(H)$,
where $H$ is a Cartier divisor on $Z$ which is ample over $Y$.
We note that $\dim Z=m+\kappa$.
By Lemma \ref{new} and
Leray's spectral sequence, we obtain
$R^if_*(\mathcal O_X(K_X)\otimes \mathcal O_X(lD))\simeq
R^{i-(n-m-\kappa)}h_*(\mathcal O_Z(K_Z)\otimes
\mathcal O_Z(H))$,
where $h:Z\to Y$ is the induced morphism.
In particular, $R^if_*(\mathcal O_X(K_X)\otimes \mathcal O_X(lD))
=0$ for $i<n-m-\kappa$.
By the same arguments as in \ref{28}, we have
$R^{i-(n-m-\kappa)}h_*(\mathcal O_Z(K_Z)\otimes
\mathcal O_Z(H))\subset
R^{i-(n-m-\kappa)}h_*(\mathcal O_Z(K_Z)\otimes
\mathcal O_Z(l'H))$ for any positive integer $l'$.
If $l'\gg 0$, then $R^{i-(n-m-\kappa)}h_*(\mathcal
O_Z(K_Z)\otimes \mathcal O_Z(l'H))=0$ for
$i>n-m-\kappa$ by Serre's vanishing theorem.
Thus, $R^{i-(n-m-\kappa)}h_*(\mathcal O_Z(K_Z)
\otimes \mathcal O_Z(H))=0$ for
$i>n-m-\kappa$.
Therefore, we obtain the desired vanishing theorems.
\end{proof}
\begin{thm}\label{44}
Let $D$ be an $f$-ample
$\mathbb Q$-Cartier torus invariant $\mathbb Q$-Weil divisor on $X$ such that
$B$ and $\{D\}$ have no common irreducible components.
Then $R^if_*\mathcal O_X(K_X+B+\ulcorner D\urcorner )=0$ for $i>0$.
\end{thm}
We note that Theorem \ref{kol} can be generalized for the relative
setting if we use Theorem \ref{th-a} instead of Corollary \ref{kodaira}.
\end{say}
\ifx\undefined\bysame
\newcommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\,}
\fi
| {
"timestamp": "2006-11-30T04:35:33",
"yymm": "0610",
"arxiv_id": "math/0610165",
"language": "en",
"url": "https://arxiv.org/abs/math/0610165",
"abstract": "We use multiplication maps to give a characteristic-free approach to vanishing theorems on toric varieties. Our approach is very elementary but is enough powerful to prove vanishing theorems.",
"subjects": "Algebraic Geometry (math.AG)",
"title": "Multiplication maps and vanishing theorems for toric varieties",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9664104924150547,
"lm_q2_score": 0.7341195152660688,
"lm_q1q2_score": 0.7094608022397828
} |
https://arxiv.org/abs/2202.11842 | U-statistics of growing order and sub-Gaussian mean estimators with sharp constants | This paper addresses the following question: given a sample of i.i.d. random variables with finite variance, can one construct an estimator of the unknown mean that performs nearly as well as if the data were normally distributed? One of the most popular examples achieving this goal is the median of means estimator. However, it is inefficient in the sense that the constants in the resulting bounds are suboptimal. We show that a permutation-invariant modification of the median of means estimator admits deviation guarantees that are sharp up to a $1+o(1)$ factor if the underlying distribution possesses more than $\frac{3+\sqrt{5}}{2}\approx 2.62$ moments and is absolutely continuous with respect to the Lebesgue measure. This result yields potential improvements for a variety of algorithms that rely on the median of means estimator as a building block. At the core of our argument are new deviation inequalities for U-statistics whose order is allowed to grow with the sample size, a result that could be of independent interest. | \section{Introduction.}
\label{sec:intro}
Let $X_1,\ldots,X_N$ be i.i.d. random variables with distribution $P$ having mean $\mu$ and finite variance $\sigma^2$. At the core of this paper is the following question: given $1\leq t \leq t_{\max}(N)$, construct an estimator $\widetilde \mu_N = \widetilde \mu_N(X_1,\ldots,X_N)$ such that
\begin{equation}{
\label{eq:sg}
\pr{\left| \widetilde\mu_N-\mu \right|\geq \sigma\sqrt{\frac t N}} \leq 2 e^{-\frac{t}{L}}
}
for some absolute positive constant $L$. Estimators that satisfy this deviation property are called sub-Gaussian. For example, the sample mean $\bar X_N = \frac{1}{N}\sum_{j=1}^N X_j$ is sub-Gaussian for $t_{\max} \asymp q(N,P)$ where $q(N,P)\to \infty$ as $N\to\infty$ and the constant $L$ equals $2$: this immediately follows from the fact that convergence of the distribution functions is uniform in the central limit theorem. However, $q(N,P)$ can grow arbitrarily slow in general, and it grows as $\log^{1/2}(N)$ if $\mathbb E|X|^{2+\varepsilon}<\infty$ for some $\varepsilon>0$ in view of the Berry-Esseen theorem \citep[for instance, see the book by][]{Petrov_1975}. At the same time, the so-called median of means (MOM) estimator, originally introduced by \citet[][]{Nemirovski1983Problem-complex00,alon1996space,jerrum1986random} and studied recently in relation to the problem at hand satisfies inequality \eqref{eq:sg} with $t_{\max}$ of order $N$ and $L = 24e$ \citep{lerasle2011robust}, although the latter can be improved. A large body of existing work used the MOM estimator as a core subroutine to relax underlying assumptions for a variety of statistical problems, in particular the methods based on the empirical risk minimization; we refer the reader to an excellent survey paper by \citet{lugosi2019mean} for a detailed overview of the recent advances.
The exact value of constant $L$ in inequality \eqref{eq:sg} is less important in problems where only the minimax rates are of interest, but it becomes crucial in terms of practical value and sample efficiency of the algorithms. The benchmark here is the situation when observations are normally distributed: \citet{catoni2012challenging} showed that no estimator can outperform the sample mean in this situation. The latter satisfies the relation
\[
\pr{\left| \bar X_N - \mu \right|\geq \sigma\frac{\Phi^{-1}(1-e^{-t/2})}{\sqrt N}} = 2 e^{-\frac{t}{2}}
\]
where $\Phi^{-1}(\cdot)$ denotes the quantile function of the standard normal law. As $\Phi^{-1}(1-e^{-t/2})=(1+o(1))\sqrt{t}$ as $t\to\infty$, the best guarantee of the form \eqref{eq:sg} one can hope for is attained for $L = 2$. It is therefore natural to ask whether there exist sharp sub-Gaussian estimators of the mean, that is, estimators satisfying \eqref{eq:sg} with $L=2(1+o(1))$ where $o(1)$ is a sequence that converges to $0$ as $N\to\infty$, under minimal assumptions on the underlying distribution. This question has previously been posed by \citet{devroye2016sub} as an open problem, and several results appeared since then that give partial answers. We proceed with a brief review of the state of the art.
\subsection{Overview of the existing results.}
\citet{catoni2012challenging} presented the first known example of a sharp sub-Gaussian estimator with $t_{\max} = o(N/\kappa)$ for distributions with finite fourth moment and a known value of the kurtosis $\kappa$ (or its upper bound). \citet{devroye2016sub} presented another construction that also required finite fourth moment but did not explicitly depend on the value of the kurtosis as an input while satisfying required guarantees for $t_{\max} = o\left( (N/\kappa)^{2/3}\right)$. \citet{minsker2021robust} designed an asymptotically efficient sub-Gaussian estimator $\widetilde \mu_N$ that satisfies $\sqrt{N}\left( \widetilde\mu_N - \mu\right)\xrightarrow{d} N(0,\sigma^2)$ assuming only the finite second moment plus a mild, ``small-ball'' type condition. However, the constants in the non-asymptotic version of their bounds were not sharp.
Finally, \citet{lee2020optimal} constructed an estimator with required properties assuming just the finite second moment, however, their guarantees hold with optimal constants only for $t_{\min}\leq t\leq t_{\max}$ where $t_{\max} = o(N)$ and $t_{\min}\to\infty$ as $N\to\infty$. In particular, this range excludes $t$ in the neighborhood of $0$ which is often the region of most practical interest.
\subsection{Summary of the main contributions.}
The reasons for the popularity of the MOM estimator are many: it is simple to define and compute, it admits strong theoretical guarantees, and it is scale-invariant and therefore essentially tuning-free.
Thus, we believe that any quantifiable improvements to its performance are worth investigating.
We start by showing that the standard MOM estimator achieves bound \eqref{eq:sg} with $L=\pi(1+o(1))$ where $o(1)\to 0$ as $N\to\infty$; this fact is formally stated in Theorem \ref{th:dev-1}. We then define a permutation-invariant version of MOM, denoted $\widehat\mu_N$, and show in Corollary \ref{th:clt-1} that, surprisingly, it is asymptotically optimal in the sense that $\sqrt{N}\left( \widehat\mu_N - \mu\right)\xrightarrow{d} N(0,\sigma^2)$ under minimal assumptions; compare this to the standard MOM estimator, which has limiting variance $\frac{\pi}{2}\sigma^2$. The main result of the paper, Theorem \ref{th:U-mom}, demonstrates that optimality of $\widehat\mu_N$ also holds in the non-asymptotic sense, namely, that \eqref{eq:sg} is valid for a wide range of the confidence parameter assuming the distribution of $X_1$ possesses $3+p$ moments for some possibly unknown $p>0$ and that its characteristic function satisfies a mild decay bound.
Analysis of the estimator $\widehat\mu_N$ requires new inequalities for the U-statistics of order that grows with the sample size. In particular, we prove novel bounds for large deviations of the degenerate, higher order terms of Hoeffding decomposition (Theorem \ref{th:concentration}), and deduce sub-Gaussian deviation guarantees for the non-degenerate U-statistics (Corollary \ref{th:bernstein}) with ``correct'' sub-Gaussian parameter.
Finally, in a situation when the distribution $P$ is fixed and the sample size grows, we present an estimator that satisfies sub-Gaussian deviation inequalities with optimal constant $L=2+o(1)$ in the whole range $1\leq t \leq t_{\max} = o(N)$ and requires only the finite variance assumption. Detailed statement is given in Theorem \ref{th:hybrid}.
\subsection{Notation.}
\label{section:def}
Unspecified absolute constants will be denoted $C,c,C_1,c'$, etc., and may take different values in different parts of the paper. Given $a,b\in \mathbb R$, we will write $a\wedge b$ for $\min(a,b)$ and $a\vee b$ for
$\max(a,b)$. For a positive integer $M$, $[M]$ denotes the set $\{1,\ldots,M\}$.
We will frequently use the standard big-O and small-o notation for asymptotic relations between functions and sequences. Moreover, given two sequences $\{a_n\}_{n\geq 1}$ and $\{b_n\}_{n\geq 1}$ where $b_n\ne 0$ for all $n$, we will write that $a_n\ll b_n$ if $\frac{a_n}{b_n}=o(1)$ as $n\to\infty$.
For a function $f:\mathbb R\mapsto\mathbb R$, $f^{(m)}$ will denote its $m$-th derivative whenever it exists. Similarly, given $g:\mathbb R^d\mapsto\mathbb R$, $\partial_{x_j} g(x_1,\ldots,x_d)$ will stand for the partial derivative of $g$ with respect to the $j$-th variable. Finally, the sup-norm of $g$ is defined via $\|g\|_\infty:=\mathrm{ess \,sup}\{ |g(y)|: \, y\in \mathbb R^d\}$.
Given i.i.d. random variables $X_1,\ldots,X_N$ distributed according to $P$, $P_N:=\frac{1}{N}\sum_{j=1}^N \delta_{X_j}$ will stand for the associated empirical measure, where $\delta_{X}(f):=f(X)$.
For a real-valued function $f$ and a signed measure $Q$, we will write $Qf$ for $\int f dQ$, assuming that the last integral is well-defined. Additional notation and auxiliary results will be introduced on demand.
\section{Optimal constants for the median of means estimator.}
Recall that we are given an i.i.d. sample $X_1,\ldots,X_N$ from distribution $P$ with mean $\mu$ and variance $\sigma^2$. The median of means estimator of $\mu$ is constructed as follows: let $G_1\cup\ldots\cup G_k\subseteq [N]$ be an arbitrary (possibly random but independent from the data) collection of $k\leq N/2$ disjoint subsets (``blocks'') of cardinality $\lfloor N/k\rfloor$ each, $\bar X_j:=\frac{1}{|G_j|}\sum_{i\in G_j} X_i$ and
\[
\widehat\mu_{\mathrm{MOM}} = \med{\bar X_1,\ldots,\bar X_k}.
\]
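For concreteness, the following Python sketch computes $\widehat\mu_{\mathrm{MOM}}$ using consecutive blocks of size $\lfloor N/k\rfloor$; this particular block assignment, the test distribution, and the value of $k$ are illustrative choices only, since any data-independent collection of disjoint blocks is allowed.
\begin{verbatim}
import numpy as np

def median_of_means(x, k):
    """Median-of-means estimate of the mean of x using k consecutive blocks."""
    x = np.asarray(x, dtype=float)
    m = len(x) // k                              # block size floor(N/k)
    block_means = x[: m * k].reshape(k, m).mean(axis=1)
    return np.median(block_means)

rng = np.random.default_rng(0)
sample = rng.standard_t(df=3, size=10_000)       # heavy-tailed data with mean 0
print(median_of_means(sample, k=20))
\end{verbatim}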
It is known that $\widehat\mu_{\mathrm{MOM}}$ is asymptotically normal: specifically, it follows from Theorem 5 in \citep{minsker2017distributed} that if $k\to\infty$ sufficiently slow so that the bias of $\widehat\mu_{\mathrm{MOM}}$ is of order $o(N^{-1/2})$, then
\begin{equation}{
\label{eq:clt1}
\sqrt{N}(\widehat\mu_{\mathrm{MOM}} - \mu)\xrightarrow{d} N\left(0, \frac{\pi}{2}\sigma^2\right)
}
as $k,N/k\to\infty$. In particular, if $\mathbb E|X|^{2+\delta}<\infty$ for some $0<\delta\leq 1$, then $k=o\left(N^{\delta/(1+\delta)}\right)$ suffices for the asymptotic unbiasedness and asymptotic normality to hold.
The asymptotic relation \eqref{eq:clt1} suggests that the best value of the constant $L$ in the deviation inequality \eqref{eq:sg} for the estimator $\widehat\mu_{\mathrm{MOM}}$ is $\pi+o(1)$. We will demonstrate that this is indeed the case. Denote
\begin{equation}{
\label{eq:g}
g(m):= \frac{1}{\sqrt m} \mathbb E\left[ \left(\frac{X_1 - \mu}{\sigma}\right)^2 \min\left(\left|\frac{X_1-\mu}{\sigma}\right|, \sqrt m\right) \right].
}
Clearly, $g(m)\to 0$ as $m\to\infty$ for distributions with finite variance. \citet{feller1968berry} proved that $\sup_{t\in \mathbb R}\left| \Phi_m(t) - \Phi(t)\right|\leq 6g(m)$, where $\Phi_m$ and $\Phi$ are the distribution functions of $\frac{\sum_{j=1}^m (X_j - \mu)}{\sigma\sqrt{m}}$ and of the standard normal law, respectively.
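As an aside, the quantity $g(m)$ is straightforward to approximate by Monte Carlo for any given distribution; the sketch below (with an arbitrary Student-$t_3$ example and a hypothetical simulation size) merely illustrates its slow decay for heavy-tailed laws.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
Z = rng.standard_t(df=3, size=2_000_000) / np.sqrt(3.0)   # standardized: mean 0, variance 1

def g(m):
    # Monte Carlo approximation of g(m) for the standardized sample Z
    return np.mean(Z**2 * np.minimum(np.abs(Z), np.sqrt(m))) / np.sqrt(m)

for m in (10, 100, 1_000, 10_000):
    print(m, g(m))
\end{verbatim}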
The next result can be viewed as a non-asymptotic analogue of relation \eqref{eq:clt1}.
\begin{theorem}
\label{th:dev-1}
The following bound holds:
\begin{equation}{
\label{eq:mom-subopt}
\pr{\left|\sqrt{N}(\widehat\mu_{\mathrm{MOM}} - \mu)\right|\geq \sigma\sqrt t} \leq 2\exp{- \frac{t}{\pi}(1+o(1))}.
}
Here, $o(1)$ is a function that goes to $0$ as $k,N/k\to\infty$, uniformly over $t\in \left[ l_{k,N},u_{k,N} \right]$ for any sequences $l_{k,N}\gg k\,g^2(N/k)$ and $u_{k,N} \ll k$.
\end{theorem}
Note that the bound of the theorem holds in some range of the confidence parameter (such estimators are often called ``multiple-$\delta$'' in the literature, e.g., see \citet{devroye2016sub}), however, this range is distribution-dependent. In particular, if $\sqrt{k}\,g(N/k)\to 0$ as $k,N\to\infty$, the previous bound holds in the range $1\leq t \ll k$, but the function $g(\cdot)$ depends on $P$ and may converge to $0$ arbitrarily slow. Under additional assumptions, more concrete bounds can be deduced: for instance, if $\mathbb E|X/\sigma|^{2+\varepsilon}<\infty$ for some $0<\varepsilon\leq 1$, the condition $\sqrt{k}\,g(N/k)\to 0$ is satisfied if $k=o\left( N^{\frac{\varepsilon}{1+\varepsilon}} \right)$ as $N\to\infty$. In general, by choosing $k$ appropriately, we can construct a version of the median of means estimator that satisfies required guarantees for any $1\leq t \ll N$.
As we observed earlier, one of the advantages of the MOM estimator is its scale-invariance; in particular, we may allow $\sigma^2$ to depend on $N$. If one is willing to give up this property (and, with it, much of the practical appeal of the estimator), then it is possible to get the optimal constant $2$ instead of $\pi$ in the tail bound, assuming only the existence of a second moment. This can be achieved simply by using a Huber/Catoni-type estimator of location in place of the median. Such an estimator can be viewed as a ``hybrid'' between MOM and Catoni estimators, and we formally define and analyze it in Appendix \ref{sec:hybrid}.
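To make the last remark concrete, here is a schematic Python sketch of one such ``hybrid'' construction: a standard Huber M-estimator of location (computed by iteratively reweighted averaging) applied to the block means, with the truncation level tied to a known $\sigma$. The tuning constant, the iteration scheme, and the assumption that $\sigma$ is known are illustrative choices; this is not the precise construction analyzed in Appendix \ref{sec:hybrid}.
\begin{verbatim}
import numpy as np

def huber_location(z, scale, c=1.345, n_iter=50):
    """Huber M-estimator of location via iteratively reweighted averaging."""
    mu = np.median(z)
    for _ in range(n_iter):
        r = np.abs(z - mu) / scale
        w = np.where(r <= c, 1.0, c / np.maximum(r, 1e-12))
        mu = np.sum(w * z) / np.sum(w)
    return mu

def huber_of_means(x, k, sigma):
    """Apply the Huber location estimate to k block means (sigma assumed known)."""
    x = np.asarray(x, dtype=float)
    m = len(x) // k
    block_means = x[: m * k].reshape(k, m).mean(axis=1)
    return huber_location(block_means, scale=sigma / np.sqrt(m))
\end{verbatim}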
\begin{proof}[Proof of Theorem \ref{th:dev-1}]
As $\widehat\mu_{\mathrm{MOM}}$ is scale-invariant, we can assume without loss of generality that
$\sigma^2=1$.
Denote $m=\lfloor N/k\rfloor$ for brevity, let $\rho(x) = |x|$, and note that the equivalent characterization of $\widehat\mu_{\mathrm{MOM}}$ is
\[
\widehat\mu_{\mathrm{MOM}}\in \argmin_{z\in \mathbb R} \sum_{j=1}^k \rho\left( \sqrt{m}\left( \bar X_j - z\right)\right).
\]
The necessary conditions for the minimum of $F(z):=\sum_{j=1}^k \rho\left( \sqrt{m}\left( \bar X_j - z\right)\right)$ imply that $0\in \partial F(\widehat\mu_{\mathrm{MOM}})$ -- the subgradient of $F$, hence the left derivative
$F'_-(\widehat\mu_{\mathrm{MOM}})\leq 0$. Therefore, if $\sqrt{N}\left(\widehat\mu_{\mathrm{MOM}} - \mu\right)\geq \sqrt t$ for some $t > 0$, then $\widehat\mu_{\mathrm{MOM}} \geq \mu + \sqrt{t/N}$ and, due to $F'_-$ being nondecreasing,
$F'_-\left( \mu + \sqrt{t/N} \right)\leq 0$. It implies that
\mln{
\label{eq:b01}
\pr{\sqrt{N}(\widehat\mu_{\mathrm{MOM}} - \mu)\geq \sqrt t} \leq \pr{\sum_{j=1}^k \rho'_-\left( \sqrt{m}\left( \bar X_j - \mu - \sqrt{t/N}\right)\right) \geq 0}
\\
= \pr{\frac{1}{\sqrt k}\sum_{j=1}^k \left(\rho'_-\left( \sqrt{m}\left( \bar X_j - \mu - \sqrt{t/N}\right)\right) - \mathbb E\rho'_-\right) \geq
-\sqrt{k}\mathbb E\rho'_- }
}
where we used the shortcut $\mathbb E\rho'_-$ in place of $\mathbb E\rho'_-\left( \sqrt{m}\left( \bar X_j - \mu - \sqrt{t/N}\right)\right)$.
Note that
\mln{
\label{eq:b11}
-\sqrt{k} \mathbb E\rho'_-\left( \sqrt{m}\left( \bar X_j - \mu - \sqrt{t/N}\right)\right) =
-\sqrt{k}\left(1 - 2 \pr{ \sqrt{m}\left( \bar X_j - \mu - \sqrt{t/N}\right) \leq 0} \right)
\\
= 2\sqrt{k}\left( \Phi\left( \frac{\sqrt t}{\sqrt{k}}\right) - \Phi(0) \right) - 2\sqrt{k}\left(\Phi\left( \frac{\sqrt t}{\sqrt{k}} \right) - \pr{ \sqrt{m}\left( \bar X_j - \mu \right) \leq \frac{\sqrt t}{\sqrt{k}}} \right)
\\
\leq 2\sqrt{k} \cdot g(m) + 2\sqrt t \frac{1}{\sqrt t/\sqrt{N/m}}\left( \Phi\left( \frac{\sqrt t}{\sqrt{N/m}}\right) - \Phi(0) \right).
}
Since
\mln{
\label{eq:b12}
2 \sqrt{t}\frac{1}{\sqrt t/\sqrt{N/m}}\left( \Phi\left( \frac{\sqrt t}{\sqrt{N/m}}\right) - \Phi(0) \right) = 2 \sqrt t \left( \phi(0) + O(t/\sqrt{N/m}) \right)
\\
= \sqrt t \left(\sqrt{\frac 2 \pi} + O(t/\sqrt{N/m}) \right)
}
where $\phi(t) = \Phi'(t)$, we see that
\[
-\sqrt{k}\, \mathbb E\rho'_-\left( \sqrt{m}\left( \bar X_j - \mu - \sqrt{t/N}\right)\right)
\leq 2\sqrt{k}\cdot g(m) + \sqrt t\left( \sqrt{\frac 2 \pi} + O(\sqrt{t/k}) \right)
\]
which is $\sqrt t\sqrt{\frac 2 \pi} \left(1 + o(1)\right)$ whenever $t\ll k$ and $t \gg k\, g^2(m)$.
It remains to apply Bernstein's inequality to the right-hand side in \eqref{eq:b01}. Observe that
\ml{
\var\left( \rho'_- \left( \sqrt{m}\left( \bar X_j - \mu - \sqrt{t/N}\right)\right) \right) = 4 \var\left( I \left\{ \sqrt{m}\left( \bar X_j - \mu \right) \leq \sqrt{t/k} \right\} \right)
\\
=4 \pr{\sqrt{m}\left( \bar X_j - \mu \right) \leq \sqrt{t/k}}\left( 1 - \pr{\sqrt{m}\left( \bar X_j - \mu \right) \leq \sqrt{t/k}} \right)
\leq 1,
}
therefore
\ml{
\pr{\sqrt{N}(\widehat\mu_{\mathrm{MOM}} - \mu)\geq \sqrt t} \leq
\exp{-\frac{t}{\pi(1+o(1)) + \frac{2\sqrt t\sqrt{2\pi}}{3}\frac{1}{\sqrt k} \left(1 + o(1)\right) }}
\\
= \exp{- \frac{t}{\pi}(1+o(1))}
}
whenever $k\,g^2(m) \ll t\ll k$.
Similar reasoning gives a matching bound for $\pr{\sqrt{N}(\widehat\mu_{\mathrm{MOM}} - \mu)\leq -\sqrt t}$, and the result follows.
\end{proof}
One may ask whether the median of means estimator admits a more sample-efficient modification, one that would satisfy inequality \eqref{eq:sg} with a constant $L$ smaller than $\pi$. A natural idea is to require that the estimator is invariant with respect to permutations of the data or, equivalently, is a function of order statistics only. Such an extension of the MOM estimator was proposed by \citet{minsker2017distributed}, however no provable improvements for the performance over the standard MOM estimator were established rigorously. The question of such improvements, especially the guarantees expressed in the form \eqref{eq:sg}, is addressed next.
Let us recall the proposed construction. Assume that $2\leq m < N$ and, given $J\subseteq [N]$ of cardinality $|J|=m$, set $\bar X_J:= \frac{1}{m}\sum_{j\in J}X_j$. Define $\mathcal A_{N}^{(m)} = \left\{ J\subset [N]: \ |J|=m\right\}$ and
\begin{equation}{
\label{eq:U-mom-est}
\widehat\mu_{N} : =\med{\bar X_J, \ J\in \mathcal A_{N}^{(m)}},
}
where $\left\{\bar X_J, \ J\in \mathcal A_{N}^{(m)}\right\}$ denotes the set of sample averages computed over all possible subsets of $[N]$ of cardinality $m$; in particular, unlike the standard median-of-means estimator,
$\widehat\mu_{N}$ is uniquely defined.
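For small $N$ and $m$, the estimator can be evaluated exactly by enumerating all ${N\choose m}$ subsets, as in the Python sketch below (the data and the value of $m$ are placeholders); computational aspects for realistic sample sizes are not discussed here.
\begin{verbatim}
from itertools import combinations
from statistics import mean, median

def u_median_of_means(x, m):
    """Median of the averages over all subsets of size m (exact enumeration)."""
    return median(mean(block) for block in combinations(x, m))

data = [0.3, -1.2, 0.7, 2.5, -0.4, 0.1, 1.8, -0.9]
print(u_median_of_means(data, m=3))
\end{verbatim}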
Note that for $m=2$, $\widehat\mu_N$ coincides with the well known Hodges-Lehmann estimator of location \citep{hodges1963estimates}. When $m$ is a fixed integer greater than $2$, $\widehat\mu_N$ is known as the generalized Hodges-Lehmann estimator. Its asymptotic properties are well-understood, and can be deduced from results by \citet{serfling1984generalized}, among other works. For example, its breakdown point is $1-(1/2)^{1/m}$ and, in case of normally distributed data, the asymptotic distribution of $\sqrt{N}(\widehat\mu_N-\mu)$ is centered normal with variance $\Delta_m^2 = m\sigma^2\arctan\left( \frac{1}{\sqrt{m^2-1}}\right)$. In particular, $\Delta_m^2 = \sigma^2(1+o(1))$ as $m\to\infty$. When the underlying distribution is not symmetric however, $\widehat\mu_N$ is biased for the mean, and the properties of this estimator in the regime $m\to\infty$ have not been investigated in the robust statistics literature (to the best of our knowledge). Only very recently, \cite{diciccio2022clt} proved that whenever $m\to\infty$, $m=o(\sqrt{N})$ and the sample is normally distributed, $\sqrt{N}(\widehat\mu_N-\mu)\to N(0,\sigma^2)$. We will extend this result in several directions: first, by allowing a much wider class of underlying distributions, second, by including the case when $\sqrt{N} \ll m \ll N$ which is interesting as $\mathrm{bias}\left( \widehat\mu_N\right)$ is $o\left( N^{-1/2}\right)$ in this regime, and finally by presenting sharp sub-Gaussian deviation inequalities for $\widehat\mu_N$ that hold for heavy-tailed data.
Let us remark that the argument behind Theorem \ref{th:dev-1}, combined with a version of Bernstein's inequality for U-statistics due to \cite{hoeffding1963probability}, immediately implies that $\widehat\mu_N$ satisfies relation \eqref{eq:mom-subopt}, so in this sense it performs at least as well as $\widehat\mu_{\mathrm{MOM}}$.
Analysis of the estimator $\widehat\mu_{N}$ is most naturally carried out using the language of U-statistics. The following section introduces the necessary background, while additional useful facts are summarized in section \ref{sec:tools}.
\section{Asymptotic normality of U-statistics and the implications for $\widehat\mu_N$.}
\label{sec:normality}
Let $Y_1,\ldots,Y_N$ be i.i.d. random variables with distribution $P_Y$ and assume that $h_m:\mathbb R^m\mapsto \mathbb R, \ m\geq 1$ are square-integrable with respect to $P_Y^{m}$ and permutation-symmetric functions, meaning that
$\mathbb E h_m^2(Y_1,\ldots,Y_m)<\infty$ and $h_m(x_{\pi(1)},\ldots,x_{\pi(m)}) = h_m(x_1,\ldots,x_m)$ for any $x_1,\ldots,x_m\in \mathbb R$ and any permutation $\pi:[m]\mapsto [m]$. Without loss of generality, we will also assume that
$\mathbb E h_m:=\mathbb Eh_m(Y_1,\ldots,Y_m)=0$. Recall that $\mathcal A_{N}^{(m)} = \left\{ J\subseteq [N]: \ |J|=m\right\}$. The U-statistic with kernel $h_m$ is defined as
\begin{equation}{
\label{eq:U-stat1}
U_{N,m} = \frac{1}{{N\choose m}}\sum_{J\in \mathcal A_{N}^{(m)}} h_m(Y_i, \ i\in J).
}
For $i\in[N]$, let
\begin{equation}{
\label{eq:a00}
h_m^{(1)}(Y_i) = \mathbb E\left[ h_m(Y_1,\ldots,Y_{m})\,|\,Y_i\right].
}
We will assume that $\pr{h_m^{(1)}(Y_1)\ne 0}>0$ for all $m$, meaning that the kernels $h_m$ are non-degenerate. The random variable
\[
S_{N,m}:=\sum_{j=1}^N \mathbb E \left[U_{N,m} \,|\, Y_j \right] = \frac{m}{N} \sum_{j=1}^N h_m^{(1)}(Y_j),
\]
known as the H\'{a}jek projection of $U_{N,m}$, is essentially the best approximation of $U_{N,m}$ by a sum of i.i.d. random variables of the form $f(Y_1)+\ldots+f(Y_N)$.
We are interested in sufficient conditions guaranteeing that $\frac{U_{N,m} - S_{N,m}}{\sqrt{\var(S_{N,m})}} = o_P(1)$ as $N,m\to\infty$. Such an asymptotic relation immediately implies that the limiting behavior of $U_{N,m}$ is determined by the H\'{a}jek projection $S_{N,m}$.
Results of this type for U-statistics of fixed order $m$ are standard and well known \citep{hoeffding1948class,serfling2009approximation,lee1990u}. However, we are interested in the situation when $m$ is allowed to grow with $N$, possibly up to the order $m=o(N)$. U-statistics of growing order were studied, for example, by \citet{frees1989infinite}; however, existing results are not readily applicable in our framework. Very recently, such U-statistics have been investigated in relation to the performance of Breiman's random forests algorithm (e.g., see the papers by \citet{song2019approximating} and \citet{peng2022rates}). The following theorem is essentially due to \cite{peng2022rates}; we give a different proof of this fact in Appendix \ref{proof:U-stat} as we rely on parts of the argument elsewhere in the paper.
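Before stating the result, the following toy Python computation (the kernel, the distribution, and the sample size are arbitrary illustrative choices) shows the U-statistic and its H\'ajek projection side by side for a kernel whose projection is available in closed form.
\begin{verbatim}
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
N = 300
Y = rng.uniform(size=N)                                   # Uniform(0,1), E[Y] = 1/2

# Centered kernel of order m = 2: h(y1, y2) = y1*y2 - 1/4, so E[h] = 0.
U = np.mean([Y[i] * Y[j] - 0.25 for i, j in combinations(range(N), 2)])

# h^(1)(y) = E[h(y, Y')] = y/2 - 1/4, hence the Hajek projection and its s.d.
S = (2.0 / N) * np.sum(Y / 2 - 0.25)
sd_S = (2.0 / N) * np.sqrt(N / 48)                        # Var(h^(1)(Y)) = Var(Y)/4 = 1/48

print(f"U = {U:.5f}, S = {S:.5f}, (U - S)/sd(S) = {(U - S) / sd_S:.3f}")
\end{verbatim}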
\begin{theorem}
\label{th:U-stat}
Assume that $\frac{\var\left( h_m(Y_1,\ldots,Y_m)\right)}{\var \left(h_m^{(1)}(Y_1)\right)} = o(N)$ as $N,m\to\infty$. \footnote{It is well known \citep{hoeffding1948class} that $\var \left(h^{(1)}(Y_1)\right) \leq \frac{\var(h_m)}{m}$, therefore the condition imposed on the ratio of variances implies that $m=o(N)$.}
Then $\frac{U_{N,m} - S_{N,m}}{\sqrt{\var(S_{N,m})}} = o_P(1)$ as $N,m\to\infty$.
\end{theorem}
It is easy to see that asymptotic normality of $\frac{U_{N,m}}{\sqrt{\var(S_{N,m})}}$ immediately follows from the previous theorem whenever its assumptions are satisfied. Next, we will apply this result to establish asymptotic normality of the estimator $\widehat\mu_N$ defined via \eqref{eq:U-mom-est}.
\begin{corollary}
\label{th:clt-1}
Let $X_1,\ldots,X_N$ be i.i.d. with finite variance $\sigma^2$. Moreover, assume that $\sqrt{\frac{N}{m}}\, g(m)\to 0$ as $N/m$ and $m\to\infty$. Then
\[
\sqrt{N}\left( \widehat\mu_N - \mu\right)\xrightarrow{d} \mathcal N(0,\sigma^2)
\]
as $N/m$ and $m\to\infty$.
\end{corollary}
\begin{remark}
Requirement $\sqrt{\frac{N}{m}}\, g(m)\to 0$ guarantees that $\mathrm{bias}(\widehat\mu_N)=o(N^{-1/2})$. Without this requirement, asymptotic normality can be established for the debiased estimator $\widehat\mu_N - \mathbb E\widehat\mu_N$.
\end{remark}
\begin{proof}
Let $\rho(x)=|x|$ and note that the equivalent characterization of $\widehat\mu_{N}$ is
\[
\widehat\mu_{N}\in \argmin_{z\in \mathbb R} \sum_{J\in \mathcal A_N^{(m)}} \rho\left( \sqrt{m}\left( \bar X_J - z\right)\right).
\]
The necessary conditions for the minimum of this problem imply that for any fixed $t\geq 0$,
\aln{
\label{eq:c00}
&\pr{\sum_{J\in \mathcal A_N^{(m)}} \rho'_-\left( \sqrt{m}\left( \bar X_J - \mu - tN^{-1/2}\right)\right) > 0} \leq
\pr{\sqrt{N}(\widehat\mu_{N} - \mu)\geq t} \text{ and }
\\
\nonumber
&\pr{\sqrt{N}(\widehat\mu_{N} - \mu)\geq t} \leq \pr{\sum_{J\in \mathcal A_N^{(m)}} \rho'_-\left( \sqrt{m}\left( \bar X_J - \mu - tN^{-1/2}\right)\right) \geq 0}.
}
Therefore, it suffices to show that the upper and lower bounds for $\pr{\sqrt{N}(\widehat\mu_{N} - \mu)\geq t}$ converge to the same limit. To this end, we see that
\mln{
\label{eq:b01}
\pr{\sum_{J\in \mathcal A_N^{(m)}} \rho'_-\left( \sqrt{m}\left( \bar X_J - \mu - tN^{-1/2}\right)\right) \geq 0}
\\
= \pr{\frac{\sqrt{N/m}}{{N \choose m}}\sum_{J\in \mathcal A_N^{(m)}} \left(\rho'_-\left( \sqrt{m}\left( \bar X_J - \mu - tN^{-1/2}\right)\right) - \mathbb E\rho'_-\right) \geq -\sqrt{N/m}\mathbb E\rho'_- },
}
where $\mathbb E\rho'_-$ stands for $\mathbb E\rho'_-\left( \sqrt{m}\left( \bar X_J - \mu - tN^{-1/2}\right)\right)$.
As in the proof of Theorem \ref{th:dev-1}, we deduce that
$-\sqrt{N/m}\,\mathbb E\rho'_-\left( \sqrt{m}\left( \bar X_J - \mu - tN^{-1/2}\right)\right) \to \frac{t}{\sigma}\sqrt{\frac{2}{\pi}}$ whenever $\sqrt{N/m}\,g(m)\to 0$ and $N/m\to\infty$.
It remains to analyze the U-statistic
\[
\sqrt{\frac N m} \,U_{N,m} = \frac{\sqrt{N/m}}{{N \choose m}}\sum_{J\in \mathcal A_N^{(m)}} \left(\rho'_-\left( \sqrt{m}\left( \bar X_J - \mu - tN^{-1/2}\right)\right) - \mathbb E\rho'_-\right).
\]
As the expression above is invariant with respect to the shift $X_j \mapsto X_j - \mu$, we can assume that $\mu=0$.
To complete the proof, we will verify the conditions of Theorem \ref{th:U-stat} allowing one to reduce the asymptotic behavior of $U_{N,m}$ to the analysis of sums of i.i.d. random variables.
For $i \in [N]$, let
\begin{equation*}{
h^{(1)}(X_i)
= \sqrt{\frac{N}{m}} \,\mathbb E\left[ \rho'_-\left( \frac{1}{\sqrt m}\sum_{j=1}^{m-1} \tilde X_j + \frac{X_i}{\sqrt m} - t/\sqrt{N/m} \right)\,\big|\,X_i \right] - \sqrt{\frac{N}{m}}\mathbb E\rho'_-,
}
where $(\tilde X_1,\ldots,\tilde X_m)$ is an independent copy of $(X_1,\ldots,X_m)$. Our goal is to understand the size of $\var(h^{(1)}(X_1))$: specifically, we will show that
$\var\left( \frac{m}{\sqrt N} h^{(1)}(X_1) \right) \to \frac{2}{\pi}$ as both $m$ and $N/m\to \infty$.
Given an integer $l\geq 1$, let $\widetilde\Phi_{l}(t)$ be the cumulative distribution function of $\sum_{j=1}^l X_j$. Then
\ml{
\frac{m}{\sqrt N} h^{(1)}(X_1) = \sqrt{m}\left(2\widetilde\Phi_{m-1}\left( \frac{tm}{\sqrt{N}} - X_1\right) -1 \right) - \sqrt{m} \mathbb E\,\rho'_-
\\
= 2\sqrt{m}\left( \widetilde\Phi_{m-1}\left(\frac{tm}{\sqrt{N}} - X_1 \right) -
\mathbb E\widetilde\Phi_{m-1}\left( \frac{tm}{\sqrt{N}} - X_1\right) \right)
\\
=2\sqrt{m}\int_\mathbb R \left( \widetilde\Phi_{m-1}\left(\frac{tm}{\sqrt{N}} - X_1 \right) - \widetilde\Phi_{m-1}\left( \frac{tm}{\sqrt{N}} - x \right) \right)dP(x).
}
We will apply the dominated convergence theorem to analyze this expression. Consider first the situation when the distribution of $X_1$ is non-lattice
\footnote{We say that $X_1$ has a lattice distribution if $\pr{X_1 \in \alpha+k\beta, \ k\in \mathbb Z} = 1$ for some $\alpha\in\mathbb R$ and $\beta>0$, and there is no proper arithmetic progression $A\subsetneq \mathbb Z$ such that $\pr{X_1 \in \alpha+k\beta, \ k\in A} = 1$.}. Then the local limit theorem for non-lattice distributions \citep[][Theorem 2]{shepp1964local} implies that
\[
\widetilde\Phi_{m-1}\left(a+h\right) - \widetilde\Phi_{m-1}\left( a \right) = \frac{h}{\sqrt{2\pi (m-1)}\sigma} \exp{-\frac{a^2}{2(m-1)\sigma^2}}+ o(m^{-1/2}),
\]
where $\sqrt{m}\cdot o(m^{-1/2})$ converges to $0$ as $m\to\infty$ for every $h$ and uniformly in $a$. Therefore, we see that conditionally on $X_1$ and for every $x$,
\mln{
\label{eq:c01}
\widetilde\Phi_{m-1}\left(\frac{tm}{\sqrt{N}} - x + (x - X_1) \right) - \widetilde\Phi_{m-1}\left( \frac{tm}{\sqrt{N}} - x \right)
\\
= \frac{x - X_1}{\sqrt{2\pi (m-1)}\sigma } \exp{-(tm/\sqrt{N}-x)^2/(2(m-1)\sigma^2)} + o(m^{-1/2})
}
uniformly in $x$. Since $m=o(N)$ by assumption, $\exp{-(tm/\sqrt{N}-x)^2/(2(m-1)\sigma^2)} = 1 + o(1)$ as $m,N\to\infty$, hence
\[
2\sqrt{m}\left( \widetilde\Phi_{m-1}\left(\frac{tm}{\sqrt{N}} - x + (x - X_1) \right) - \widetilde\Phi_{m-1}\left( \frac{tm}{\sqrt{N}} - x \right) \right)
= 2\frac{x - X_1}{\sqrt{2\pi}\sigma } + o(1)
\]
$P$-almost everywhere. Next, we will show that $q_m(x,X_1):=\sqrt m\left( \widetilde\Phi_{m-1}\left(\frac{tm}{\sqrt{N}} - X_1 \right) - \widetilde\Phi_{m-1}\left( \frac{tm}{\sqrt{N}} - x \right) \right)$ admits an integrable majorant that does not depend on $m$.
Note that
\[
|q_m(x,X_1)| \leq \sup_{z}\sqrt m\,\pr{\sum_{j=1}^{m-1} X_j \in \big(z, z+|x-X_1| \big]}
\leq C |x-X_1|,
\]
where the last inequality follows from the well known bound for the concentration function (Theorem 2.20 in the book by \cite{petrov1995limit}); here, $C=C(P)>0$ is a constant that may depend on the distribution of $X_1$.
We conclude that by the dominated convergence theorem,
\[
\frac{m}{\sqrt N} h^{(1)}(X_1) \to \sqrt{\frac{2}{\pi}}\frac{X_1}{\sigma}
\]
as $m,N/m\to\infty$, $P$-almost everywhere. As
\[
\left| \frac{m}{\sqrt N} h^{(1)}(X_1) \right|\leq 2\left| \int_\mathbb R q_m(x,X_1) dP(x) \right| \leq C\int_\mathbb R |x-X_1| dP(x)
\]
and $\mathbb E\left( \int_\mathbb R |x-X_1| dP(x) \right)^2<\infty,$ the second application of the dominated convergence theorem yields that $\var\left( \frac{m}{\sqrt N} h^{(1)}(X_1)\right) \to \var\left(\sqrt{\frac{2}{\pi}}\frac{X_1}{\sigma} \right) = \frac{2}{\pi}$ as $N/m$ and $m\to\infty$.
It remains to consider the case when $X_1$ has a lattice distribution. In this case, a version of the local limit theorem \citep{petrov1995limit} states that
\begin{equation*}{
\pr{\sum_{j=1}^{m-1} X_j = (m-1)\alpha + q\beta} = \frac{\beta}{\sqrt{2\pi(m-1)}\sigma} e^{-\frac{((m-1)\alpha + q\beta)^2}{2\sigma^2 (m-1)}}+o(m^{-1/2})
}
where the $o(m^{-1/2})$ term is uniform in $q\in \mathbb Z$.
For any $y$ in the interval $\big( \frac{tm}{\sqrt{N}} - x, \frac{tm}{\sqrt{N}} - x + (x - X_1) \big]$ of the form $y = (m-1)\alpha + q\beta$, we have that $e^{-\frac{y^2}{2\sigma^2 (m-1)}} = 1 + o(1)$ as $\frac m N \to 0$. Therefore, similarly to \eqref{eq:c01}, in this case
\begin{equation*}{
2\sqrt{m}\left(\widetilde\Phi_{m-1}\left(\frac{tm}{\sqrt{N}} - x + (x - X_1) \right) - \widetilde\Phi_{m-1}\left( \frac{tm}{\sqrt{N}} - x \right) \right)
= 2\frac{x-X_1}{\sqrt{2\pi} \sigma} + o(1)
}
$P$-almost everywhere, where we also used the fact that the number of points of the form $(m-1)\alpha+q\beta$ in the interval of interest equals $\frac{x - X_1}{\beta}$. The rest of the proof proceeds exactly as in the case of non-lattice distributions, and concludes the part of the argument related to $\var\left( \frac{m}{\sqrt N} h^{(1)}(X_1) \right)$.
To finish the proof, note that, since $\|\rho'_-\|_\infty=1$, $\var\left( \sqrt{N/m}\,\rho'_-\left( \sqrt{m}\left( \bar X_J - \mu - tN^{-1/2}\right)\right)\right)\leq \frac{N}{m}$, hence
\[
\frac{\var\left( \sqrt{N/m}\,\rho'_-\left( \sqrt{m}\left( \bar X_J - \mu - tN^{-1/2}\right)\right)\right)}{\var\left( h^{(1)}(X_1)\right)}
\leq \frac{N/m}{\frac{2}{\pi}(1+o(1))N/m^2} = \frac{m}{\frac{2}{\pi}(1+o(1))} = o(N)
\]
as $m\to\infty$ and $N/m\to\infty$.
Therefore, Theorem \ref{th:U-stat} applies and yields that
\[
\frac{\sqrt{\frac N m}U_{N,m} - \frac{m}{N}\sum_{j=1}^N h^{(1)}(X_j)}{ \frac{m^2}{N} \var\left( h^{(1)}(X_j) \right)} = o_P(1),
\]
where $\frac{m^2}{N} \var\left( h^{(1)}(X_j) \right) = \frac{2}{\pi}(1+o(1))$.
In view of the Central Limit Theorem,
$\frac{m}{N}\sum_{j=1}^N h^{(1)}(X_j) \xrightarrow{d} N\left(0,\frac{2}{\pi}\right)$, and we conclude that
$\sqrt{\frac{N}{m}} U_{N,m} \xrightarrow{d} N\left(0,\frac{2}{\pi}\right)$. Recalling \eqref{eq:b01}, we see that
\[
\pr{\sqrt{\frac{N}{m}} U_{N,m} \leq \sqrt{\frac N m} \mathbb E\rho'_-} \to 1 - \widetilde\Phi\left( \frac{t}{\sigma}\right),
\]
or $\limsup\limits_{m,N/m\to\infty}\pr{\sqrt{N}\left( \widehat\mu_N - \mu\right)\geq t}\leq 1 - \widetilde\Phi\left( \frac{t}{\sigma}\right)$.
Repeating the preceding argument for the lower bound for $\pr{\sqrt{N}\left( \widehat\mu_N - \mu\right)\geq t}$, we get that $\liminf\limits_{m,N/m\to\infty}\pr{\sqrt{N}\left( \widehat\mu_N - \mu\right)\geq t}\geq 1 - \widetilde\Phi\left( \frac{t}{\sigma}\right)$, whence the claim of the theorem follows.
\end{proof}
Corollary \ref{th:clt-1} implies that asymptotically, the estimator $\widehat\mu_N$ improves upon
$\widehat\mu_{\mathrm{MOM}}$. The more interesting, and more difficult, question is whether non-asymptotic sub-Gaussian deviation bounds with improved constants can be established for $\widehat\mu_N$, and over what range of the deviation parameter such bounds remain valid.
\section{Deviation inequalities for U-statistics of growing order.}
\label{sec:growing-order}
The ultimate goal of this section is to establish a non-asymptotic analogue of Corollary \ref{th:clt-1}. Recall that its proof relied on the classical strategy of showing that the higher-order terms in the Hoeffding decomposition of certain U-statistics are asymptotically negligible. To prove the desired non-asymptotic extension, one has to be able to show that these higher-order terms are sufficiently small with exponentially high probability. However, classical tools used to prove such bounds rely on decoupling inequalities due to \cite{PM-decoupling-1995}. Unfortunately, the constants appearing in decoupling inequalities grow very fast with respect to the order $m$ of U-statistics, at least like $m^m$. As $m$ is allowed to grow with the sample size $N$ in our examples, such tools become insufficient to get the desired bounds in our framework. \citet{MA-ustat} derived an improved version of Bernstein's inequality for non-degenerate U-statistics where the sub-Gaussian deviation regime is controlled by $m\var(\h{1}_m(X))$ defined in equation \eqref{eq:a00}, rather than the larger quantity $\var(h_m)$ appearing in the inequality due to \citet{hoeffding1963probability}; however, this result is only useful when $m$ is fixed. \citet{song2019approximating} made significant progress in studying U-statistics of growing order and developed tools that avoid using decoupling inequalities; however, their techniques apply when $m=o\left(\sqrt{N}\right)$, while we only require that $m=o(N)$.
We will be interested in U-statistics with kernels of special structure that assumes ``weak'' dependence on each of the variables. Specifically, let the kernel be centered and written in the form $h_m\left(\frac{x_1}{\sqrt m},\ldots,\frac{x_m}{\sqrt m}\right)$, whence the corresponding U-statistic is
\begin{equation}{
\label{eq:U-stat1}
U_{N,m} = \frac{1}{{N\choose m}}\sum_{J\in \mathcal A_{N}^{(m)}} h_m\left( \frac{X_i}{\sqrt{m}}, \ i\in J \right).
}
The Hoeffding decomposition of $U_{N,m}$ is defined as the sum
\begin{equation}{
\label{eq:f02}
U_{N,m} = \frac{m}{N}\sum_{j=1}^N h_m^{(1)}(X_j) + \sum_{j=2}^m \frac{{m\choose j} }{{N\choose j}} \sum_{J\in \mathcal A_N^{(j)}} \h{j}_m(X_i,\, i\in J),
}
where $\h{j}_m(x_1,\ldots,x_j)=(\delta_{x_1} - P)\times\ldots\times(\delta_{x_j}-P)\times P^{m-j} h_m$. We refer the reader to section \ref{sec:tools}, where the Hoeffding decomposition and related background material are reviewed in more detail.
We will assume that $U_{N,m}$ is non-degenerate; in this case, one can expect that the behavior of $U_{N,m}$ is determined by the first term $\frac{m}{N}\sum_{j=1}^N h_m^{(1)}(X_j)$ in the decomposition.
In order to make this intuition rigorous, we need to prove that the higher-order terms are of smaller order with exponentially high probability. It is shown in the course of the proof of Theorem \ref{th:U-stat} that
$\var\left( \frac{{m\choose j} }{{N\choose j}} \sum_{J\in \mathcal A_N^{(j)}} \h{j}_m(X_i,\, i\in J) \right)\leq \var(h_m) \left( \frac{m}{N}\right)^j$.
However, to achieve our current goal, bounds for the moments of higher order are required.
More specifically, the key technical difficulty lies in establishing the correct rate of decay of the higher moments with respect to the order $m$ of the U-statistic. We will show that under suitable assumptions, $\mathbb E^{1/q}\left| \frac{{m\choose j} }{{N\choose j}} \sum_{J\in \mathcal A_N^{(j)}} \h{j}_m(X_i,\, i\in J)\right|^q = O\left(j^{\gamma}\left( \frac{m}{N}\right)^{j/2}\right)$ for some $\gamma>0$ and for all $q\geq 2$ and $2\leq j\leq j_{\mathrm{\max}}$ for a sufficiently large $j_{\mathrm{max}}$.
The following result, essentially implied by the moment inequalities of this form, is the main technical novelty and a key ingredient needed to control large deviations of the higher-order terms in the Hoeffding decomposition.
\begin{theorem}
\label{th:concentration}
Let
\[
V_{N,j}= \frac{ {m\choose j}^{1/2} }{{N\choose j}^{1/2}} \sum_{J\in \mathcal A_N^{(j)}} \h{j}_m\left(\frac{X_i}{\sqrt m},\, i\in J\right), \ f_j(x_1,\ldots,x_j)=\mathbb E h_m\left(\frac{x_1}{\sqrt m},\ldots,\frac{x_j}{\sqrt m},\frac{X_{j+1}}{\sqrt m},\ldots,\frac{X_m}{\sqrt m}\right)
\]
and $\nu_k = \mathbb E^{1/k} |X_1|^k$.
If the kernel $h_m$ is uniformly bounded, then there exists an absolute constant $c>0$ such that
\begin{equation*}{
\pr{|V_{N,j}|\geq t} \leq \exp{ -\min\left( \frac{1}{c}\left(\frac{t^2}{\var(h_m)}\right)^{{\frac{1}{j}}}, \frac{\left( \frac{t}{\|h_m\|_\infty} \sqrt{\frac N j}\right)^{\frac{2}{j+1}}}{ c\left(m/j \right)^{\frac{j}{j+1}}} \right)}
}
whenever $\min\left( \frac{1}{c}\left(\frac{t^2}{\var(h_m)}\right)^{{\frac{1}{j}}}, \frac{\left( \frac{t}{\|h_m\|_\infty} \sqrt{\frac N j}\right)^{\frac{2}{j+1}}}{ c\left(m/j \right)^{\frac{j}{j+1}}} \right) \geq 2 $.
Alternatively, suppose that
\begin{enumerate}
\item[(i)] $\left\| \partial_{x_1}\ldots\partial_{x_j} f_j \right\|_\infty \leq \left(\frac{C_1(P)}{m}\right)^{j/2} j^{\gamma_1 j}$ for some $\gamma_1\geq \frac{1}{2}$;
\item[(ii)] $\nu_k \leq k^{\gamma_2}M$ for all integers $k\geq 2$ and some $\gamma_2,M>0$.
\end{enumerate}
Then there exist constants $c_1,c_2>0$ that depend on $\gamma_1$ and $\gamma_2$ only such that
\begin{equation*}{
\pr{|V_{N,j}|\geq t}
\leq \exp{ -\min\left( \frac{1}{c_1}\left(\frac{t^2}{\var(h_m)}\right)^{{\frac{1}{j}}}, \left( \frac{t\sqrt{N/j}}{\left(c_2 M j^{\gamma_1-1/2}\right)^j}\right)^{\frac{2}{1+j(2\gamma_2+1)}} \right) }
}
whenever $\min\left( \frac{1}{c_1}\left(\frac{t^2}{\var(h_m)}\right)^{{\frac{1}{j}}}, \left( \frac{t\sqrt{N/j}}{\left(c_2 M j^{\gamma_1-1/2}\right)^j}\right)^{\frac{2}{1+j(2\gamma_2+1)}} \right) \geq \max\left(2, \frac{\log(N/j)}{\gamma_2 j}\right)$.\footnote{In the course of the proof, we also deduce a version of the bound for smaller values of $t$.}
\end{theorem}
The proof of the theorem is given in section \ref{proof:concentration1}. Let us briefly discuss the imposed conditions. The first inequality requires only the boundedness of the kernel and follows from a standard argument; it is mostly useful for the degenerate kernels of higher order $j$, for instance when $j \geq C m/\log(m)$. The main result is the second inequality of the theorem, which provides a much better dependence of the tails on $m$ for small and moderate values of $j$. Assumption (ii) is a standard one: for instance, it holds with $\gamma_2=0$ for bounded random variables, with $\gamma_2=1/2$ for sub-Gaussian, and with $\gamma_2=1$ for sub-exponential random variables. As for assumption (i), suppose that the kernel $h_m$ is sufficiently smooth. In this case,
\[
\partial_{x_1}\ldots\partial_{x_j} f_j(x_1,\ldots,x_j) = m^{-j/2} \mathbb E\left[ \left(\partial_{x_1}\ldots\partial_{x_j} h_m\right)\left(\frac{x_1}{\sqrt m},\ldots,\frac{x_j}{\sqrt m}, \frac{X_{j+1}}{\sqrt m},\ldots,\frac{X_m}{\sqrt m}\right)\right],
\]
which is indeed of order $m^{-j/2}$ with respect to $m$. However, the functions $f_j$ are often smooth even if the kernel $h_m$ is not, as we will show later for the case of an indicator function (specifically, we will prove that the required inequalities hold with $\gamma_1=\frac12$ for all $j\ll m/\log(m)$ under mild assumptions on the distribution of $X_1$).
Next, we state a corollary -- a deviation inequality that takes a particularly simple form and suffices for most of the applications discussed later. It can be viewed as an extension of Arcones' version \citep{MA-ustat} of Bernstein's inequality for the case of U-statistics of growing order.
\begin{corollary}
\label{th:bernstein}
Suppose that
\begin{enumerate}
\item[(i)] assumptions of Theorem \ref{th:concentration} hold for all $2\leq j\leq j_{\mathrm{max}}$ with $\gamma_1=\frac{1}{2}$;
\item[(ii)] the kernel $h_m$ is uniformly bounded;
\item[(iii)] $\liminf_{m\to\infty}\var\left( \sqrt{m}\, h_m^{(1)}(X_1) \right)>0$;
\item[(iv)] $mM^2 = o\left(N^{1-\delta}\right)$ for some $\delta>0$.
\end{enumerate}
Moreover, let $q(N,m)$ be an increasing function such that
\[
q(N,m)=o\left(\min\left(\left(\frac{N}{mM^2}\right)^{\frac{1}{1+2\gamma_2}},j_{\mathrm{max}}\log(N/m)\right)\right) \text{ as } N/m\to\infty.
\]
Then for all $2\leq t\leq q(N,m)$,
\[
\pr{\left|U_{N,m}\right| \geq \sqrt{\frac{tm}{N}}} \leq 2\exp{-\frac{t}{2(1+o(1))\var\left( \sqrt{m}\, h_m^{(1)}(X_1) \right)}},
\]
where $o(1)\to 0$ as $N/m\to \infty$ uniformly over $2\leq t\leq q(N,m)$.
If $m = o\left( \frac{N^{1/2}}{\log(N)} \right)$, we can instead choose $q(N,m)$ such that $q(N,m)=o\left(\min\left(\left(\frac{N}{mM^2}\right)^{\frac{1}{1+2\gamma_2}}, \frac{Nj_{\mathrm{max}}}{m^2} \right)\right)$.
\end{corollary}
\begin{remark}
The key point of the inequality is that the sub-Gaussian deviations are controlled by $\var\left( \sqrt{m}\, h_m^{(1)}(X_1) \right)$ rather than the sub-optimal quantity $\var(h_m)$ appearing in Hoeffding's version of Bernstein's inequality for U-statistics. Moreover, the range in which $U_{N,m}$ admits sub-Gaussian deviations is much wider compared to the implications of Arcones' inequality when $m$ is allowed to grow with $N$. Several comments regarding the additional assumptions are in order:
\begin{enumerate}
\item Assumption of uniform boundedness of the kernel $h_m$ is needed to ensure that we can apply Bernstein's concentration inequality to the first term of the Hoeffding decomposition. This suffices for our purposes but in general this condition can be relaxed.
\item The assumption on the asymptotic behavior of the variance is made to simplify the statement and the proof; if it does not hold, the result is still valid once the definition of $q(N,m)$ is modified to reflect the different behavior of this quantity.
We include the following heuristic argument which shows that $\lim_{m\to\infty}\var\left( \sqrt{m}\, h_m^{(1)}(X_1) \right)$ often admits a simple closed-form expression. Indeed, note that $\sqrt{m}\left( h_m^{(1)}(X_1) - h_m^{(1)}(0)\right) = \int_0^{X_1} \sqrt{m}\,\partial_{u} h_m^{(1)}(u) du$. If $\left\|\partial^2_{u} h_m^{(1)}\right\|_\infty = o(m^{-1/2})$, then
\begin{equation*}{
\sqrt{m} \left| \partial_{u} h_m^{(1)}(u) - \partial_{u} h_m^{(1)}(0) \right|
\leq \sqrt{m} \left\| \partial^2_{u} h_m^{(1)}\right\|_\infty u \to 0
}
pointwise as $m\to\infty$. If the limit $\sqrt{m}\,\partial_{u} h_m^{(1)}(0)$ exists, then
$
\sqrt{m}\left( h_m^{(1)}(X_1) - h_m^{(1)}(0)\right) \to \lim_{m\to\infty} \sqrt m \,\partial_u h_m^{(1)}(0) X_1
$, $P$-almost everywhere. Moreover, as $\sqrt m\| \partial_{u} h_m^{(1)}\|_\infty$ admits an upper bound independent of $m$ by assumption (i) of Theorem \ref{th:concentration} and $X_1$ is sufficiently integrable, Lebesgue's dominated convergence theorem applies and yields that $\var\left( \sqrt{m}\, h_m^{(1)}(X_1)\right) \to \left(\lim_{m\to\infty} \sqrt m\,\partial_u h_m^{(1)}(0)\right)^2 \var(X_1)$. For instance, this heuristic argument can often be made precise for kernels of the form $h\left( \sum_{j=1}^m \frac{x_j}{\sqrt m}\right)$; a short worked example illustrating this computation is given after this list.
\item Finally, the condition requiring that $mM^2 = o\left(N^{1-\delta}\right)$ is used to ensure that $\left(\frac{N}{mM^2}\right)^\tau \gg \log(m)$ for any fixed $\tau>0$, which simplifies the statement and the proof.
\end{enumerate}
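To illustrate the heuristic from the second item above, consider the following example (it is ours and is included only for concreteness; we assume that $\mathbb E X_1=0$, write $\sigma^2:=\var(X_1)$, and take $\varphi:\mathbb R\to\mathbb R$ twice continuously differentiable with bounded first and second derivatives). Let $h_m(x_1,\ldots,x_m) = \varphi\left( \sum_{j=1}^m \frac{x_j}{\sqrt m}\right) - \mathbb E\,\varphi\left( \sum_{j=1}^m \frac{X_j}{\sqrt m}\right)$. Then
\[
h_m^{(1)}(u) = \mathbb E\,\varphi\left( \frac{u}{\sqrt m} + \frac{1}{\sqrt m}\sum_{j=2}^m X_j\right) - \mathbb E\,\varphi\left( \frac{1}{\sqrt m}\sum_{j=1}^m X_j\right),
\qquad
\sqrt{m}\,\partial_u h_m^{(1)}(0) = \mathbb E\,\varphi'\left( \frac{1}{\sqrt m}\sum_{j=2}^m X_j\right),
\]
while $\left\|\partial^2_u h_m^{(1)}\right\|_\infty \leq \frac{\|\varphi''\|_\infty}{m} = o(m^{-1/2})$. The central limit theorem combined with the dominated convergence theorem yields $\sqrt{m}\,\partial_u h_m^{(1)}(0) \to \mathbb E\,\varphi'(\sigma Z)$, where $Z$ is a standard normal random variable, so the heuristic suggests that $\var\left( \sqrt{m}\, h_m^{(1)}(X_1)\right) \to \left( \mathbb E\,\varphi'(\sigma Z)\right)^2 \var(X_1)$.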
\end{remark}
\begin{proof}
\begin{comment}
For the first part, it suffices to check that Theorem \ref{th:concentration} applies with with $t := \frac{\sqrt s}{j^2} \left( \frac{N}{m} \right)^{\frac{j-1}{2}}$, and that
\[
\min\left( \frac{t^{\frac{2}{j}}}{c}, \left( \frac{t\sqrt{N/j}}{\left(c M j^{\gamma_1-1/2}\right)^j}\right)^{\frac{2}{1+j(2\gamma_2+1)}} \right) \gg s
\]
under the stated assumptions. We omit simple algebraic calculations. To establish the second part
\end{comment}
The union bound together with Hoeffding's decomposition entails that for any $t>0$ and $0<\varepsilon<1$ (to be chosen later),
\ml{
\pr{\left|U_{N,m}\right| \geq \sqrt{\frac{tm}{N}}}
\\
\leq \pr{\left| \frac{m}{N}\sum_{j=1}^N h_m^{(1)}(X_j) \right|\geq (1-\varepsilon) \sqrt t\sqrt{\frac{m}{N}}}
+ \pr{\left| \sum_{j=2}^m \frac{{m\choose j} }{{N\choose j}} \sum_{J\in \mathcal A_N^{(j)}} \h{j}_m(X_i,\, i\in J) \right| \geq \varepsilon\sqrt t\sqrt{\frac{m}{N}}}.
}
Bernstein's inequality yields that
\ml{
\pr{\left| \frac{m}{N}\sum_{j=1}^N h_m^{(1)}(X_j) \right|\geq (1-\varepsilon)\sqrt t\sqrt{\frac{m}{N}}}
\\
\leq 2\exp{-\frac{(1-\varepsilon)^2 \,t/2}{ \var\left( \sqrt{m}\, h_m^{(1)}(X_1) \right) + (1-\varepsilon)\frac13\sqrt{\frac{m}{N}}\|h_m\|_\infty t^{1/2}}}
\\
=2\exp{-\frac{(1-\varepsilon)^2\, t}{2\,\var\left( \sqrt{m}\, h_m^{(1)}(X_1) \right)(1+o(1))}}
}
where $o(1)\to 0$ as $N/m\to\infty$ uniformly over $2\leq t\leq q(N,m)$.
It remains to control the term involving higher order Hoeffding decomposition terms: specifically, we will show that under our assumptions, it is bounded from above by $\exp{-\frac{t}{2\,\var\left( \sqrt{m}\, h_m^{(1)}(X_1) \right)}} \cdot o(1)$ where $o(1)\to 0$ uniformly over the range of $t$.
To this end, denote $t_\varepsilon:=\varepsilon^2 t$ and $j_\ast:=\min\left( j_{\mathrm{max}}, \lfloor \log(N/m) \rfloor +1\right)$. Observe that
\mln{
\label{eq:e01}
\pr{\left| \sum_{j=2}^m \frac{{m\choose j} }{{N\choose j}} \sum_{J\in \mathcal A_N^{(j)}} \h{j}_m(X_i,\, i\in J) \right| \geq \sqrt{t_\varepsilon}\sqrt{\frac{m}{N}}}
\\ \leq
\pr{\left| \sum_{j=2}^{j_\ast} \frac{{m\choose j} }{{N\choose j}} \sum_{J\in \mathcal A_N^{(j)}} \h{j}_m(X_i,\, i\in J) \right| \geq \frac{\sqrt{t_\varepsilon}}{3}\sqrt{\frac{m}{N}}}
\\
+\pr{\left| \sum_{j=j_\ast+1}^{j_{\mathrm{max}}} \frac{{m\choose j} }{{N\choose j}} \sum_{J\in \mathcal A_N^{(j)}} \h{j}_m(X_i,\, i\in J) \right| \geq \frac{\sqrt{t_\varepsilon}}{3}\sqrt{\frac{m}{N}}}
\\
+\pr{\left| \sum_{j>j_{\mathrm{max}}} \frac{{m\choose j} }{{N\choose j}} \sum_{J\in \mathcal A_N^{(j)}} \h{j}_m(X_i,\, i\in J) \right| \geq \frac{\sqrt{t_\varepsilon}}{3}\sqrt{\frac{m}{N}}},
}
where the second sum may be empty depending on the value of $j_\ast$.
First, we estimate the last term using Chebyshev's inequality: repeating the reasoning leading to equation \eqref{eq:a12} in the proof of Theorem \ref{th:U-stat}, we see that
$\var\left( \sum_{j>j_{\mathrm{max}}} \frac{{m\choose j} }{{N\choose j}} \sum_{J\in \mathcal A_N^{(j)}} \h{j}_m(X_i,\, i\in J) \right) \leq \var(h_m)\left(\frac{m}{N}\right)^{ j_{\mathrm{max}}+1 } \left(1-m/N\right)^{-1}$, hence
\ml{
\pr{\left| \sum_{j>j_{\mathrm{max}}} \frac{{m\choose j} }{{N\choose j}} \sum_{J\in \mathcal A_N^{(j)}} \h{j}_m(X_i,\, i\in J) \right| \geq \frac{\sqrt{t_\varepsilon}}{3}\sqrt{\frac{m}{N}}} \leq \frac{18\var(h_m)}{t_\varepsilon}\left(\frac{m}{N}\right)^{ j_{\mathrm{max}}}
\\
= 18\var(h_m) \exp{- j_{\mathrm{max}}\log(N/m) + \log(t_\varepsilon) }
}
whenever $N/m\geq 2$.
Alternatively, we can apply the first inequality of Theorem \ref{th:concentration} instead of Chebyshev's inequality to each term corresponding to $j>j_{\mathrm{max}}$ individually, with $t = t_{j,\varepsilon}:= \frac{\sqrt{t_\varepsilon}}{3 j^2} \left( \frac{N}{m} \right)^{\frac{j-1}{2}}$. It implies that
\ml{
\pr{\left| \sum_{j>j_{\mathrm{max}}} \frac{{m\choose j} }{{N\choose j}} \sum_{J\in \mathcal A_N^{(j)}} \h{j}_m(X_i,\, i\in J) \right| \geq \frac{\sqrt{t_\varepsilon}}{3}\sqrt{\frac{m}{N}}}
\\
\leq \sum_{j>j_{\mathrm{max}}} \pr{\left| \frac{{m\choose j} }{{N\choose j}} \sum_{J\in \mathcal A_N^{(j)}} \h{j}_m(X_i,\, i\in J) \right| \geq \frac{\sqrt{t_{j,\varepsilon}}}{3}\sqrt{\frac{m}{N}} }
\\
\leq m \max_{j>j_{\mathrm{max}}} \exp{-c\min\left(t_\varepsilon^{1/j}\left( \frac Nm\right)^{\frac{j-1}{j}},\left( \frac{t_\varepsilon}{\|h\|_\infty^2}\right)^{\frac{1}{j+1}} \left( \frac{Nj}{m^2} \right)^{\frac{j}{j+1}} \right)}.
}
This bound is useful when $\left( \frac{Nj_{\mathrm{max}}}{m^2} \right)^{\frac{j_{\mathrm{max}}}{j_{\mathrm{max}}+1}} \gg j_{\mathrm{max}} \log(N/m)$, which is true whenever $m^2 \ll \frac{N}{\log^2(N)}$. If moreover $\varepsilon \gg \frac{1}{\sqrt{\log(N)}}$, then the last probability is bounded from above by
\[
\max_{j>j_{\mathrm{max}}} \exp{-c'\min\left(t_\varepsilon^{1/j}\left( \frac Nm\right)^{\frac{j-1}{j}},\left( \frac{t_\varepsilon}{\|h\|_\infty^2}\right)^{\frac{1}{j+1}} \left( \frac{Nj}{m^2} \right)^{\frac{j}{j+1}} \right)}.
\]
To estimate the middle term (the probability involving the terms indexed by $j_\ast+1\leq j \leq j_{\mathrm{max}}$), we apply Theorem \ref{th:concentration} to each term individually for $t = t_{j,\varepsilon}:= \frac{\sqrt{t_\varepsilon}}{3 j^2} \left( \frac{N}{m} \right)^{\frac{j-1}{2}}$, keeping in mind that $\sum_{j\geq j_\ast+1} t_{j,\varepsilon}\leq \frac{\pi^2}{18} \left( \frac{N}{m} \right)^{\frac{j-1}{2}}\sqrt{t_\varepsilon}$. Note that for any $2\leq t \leq \frac{N}{m}$, $\varepsilon > \frac{m}{N}$ and $j\geq \lfloor \log(N/m)\rfloor + 1$,
\[
\min\left( \frac{t_{j,\varepsilon}^{\frac{2}{j}}}{c}, \left( \frac{t_{j,\varepsilon}\sqrt{N/j}}{\left(c M j^{\gamma_1-1/2}\right)^j}\right)^{\frac{2}{1+j(2\gamma_2+1)}} \right) \geq \frac{c_1}{M^{\frac{2}{1+2\gamma_2}}} \left( \frac{N}{m}\right)^{\frac{1}{1+2\gamma_2}},
\]
whence
\ml{
\pr{\left| \sum_{j=j_\ast+1}^{j_{\mathrm{max}}} \frac{{m\choose j} }{{N\choose j}} \sum_{J\in \mathcal A_N^{(j)}} \h{j}_m(X_i,\, i\in J) \right| \geq \frac{\sqrt{t_\varepsilon}}{3}\sqrt{\frac{m}{N}}}
\\
\leq j_{\mathrm{max}} \exp{ -\frac{c_1}{M^{\frac{2}{1+2\gamma_2}}} \left( \frac{N}{m}\right)^{\frac{1}{1+2\gamma_2}} }
\leq \exp{ -\frac{c_2}{M^{\frac{2}{1+2\gamma_2}}}\left( \frac{N}{m}\right)^{\frac{1}{1+2\gamma_2}} }.
}
Finally, to estimate the first term in the right side of inequality \eqref{eq:e01}, we again apply Theorem \ref{th:concentration}. With $t_{j,\varepsilon}$ defined as above,
\ml{
\pr{\left| \sum_{j=2}^{j_\ast} \frac{{m\choose j} }{{N\choose j}} \sum_{J\in \mathcal A_N^{(j)}} \h{j}_m(X_i,\, i\in J) \right| \geq \frac{\sqrt{t_\varepsilon}}{3}\sqrt{\frac{m}{N}}}
\\
\leq \sum_{j=2}^{j_\ast} \pr{\left| \frac{{m\choose j} }{{N\choose j}} \sum_{J\in \mathcal A_N^{(j)}} \h{j}_m(X_i,\, i\in J) \right| \geq \frac{6}{\pi^2}\frac{\sqrt{t_{j,\varepsilon}}}{3}\sqrt{\frac{m}{N}}}
\\
\leq \sum_{j=2}^{j_\ast} \exp{- c\min\left(t^{1/j}_{\varepsilon} \left( \frac N m\right)^{\frac{j-1}{j}}, \frac{t_{\varepsilon}^{\frac{1}{1+j(1+2\gamma_2)}}}{M^{\frac{2j}{1+j(1+2\gamma_2)}}} \left( \frac N m\right)^{\frac{j}{1+j(1+2\gamma_2)}} \right)}
\\
\leq j_\ast \max_{2\leq j\leq j_\ast} \exp{- c\min\left(t^{1/j}_{\varepsilon} \left( \frac N m\right)^{\frac{j-1}{j}}, \frac{t_{\varepsilon}^{\frac{1}{1+j(1+2\gamma_2)}}}{M^{\frac{2j}{1+j(1+2\gamma_2)}}} \left( \frac N m\right)^{\frac{j}{1+j(1+2\gamma_2)}} \right)}.
}
Whenever $\varepsilon\geq \frac{1}{\sqrt{N/m}}$, the last expression is upper bounded by
\begin{equation*}{
\max_{2\leq j\leq j_\ast} \exp{- c_3\min\left(t^{1/j}_{\varepsilon} \left( \frac N m\right)^{\frac{j-1}{j}}, \frac{t_{\varepsilon}^{\frac{1}{1+j(1+2\gamma_2)}}}{M^{\frac{2j}{1+j(1+2\gamma_2)}}} \left( \frac N m\right)^{\frac{j}{1+j(1+2\gamma_2)}} \right)}
}
for $c_3$ small enough. Combining all the estimates, we obtain the inequality
\mln{
\label{eq:base-1}
\pr{\left| \sum_{j=2}^m \frac{{m\choose j} }{{N\choose j}} \sum_{J\in \mathcal A_N^{(j)}} \h{j}_m(X_i,\, i\in J) \right| \geq \sqrt{t_\varepsilon}\sqrt{\frac{m}{N}}}
\leq
\\ \max_{2\leq j\leq j_\ast} \exp{- c_3\min\left(t^{1/j}_{\varepsilon} \left( \frac N m\right)^{\frac{j-1}{j}}, t_{\varepsilon}^{\frac{1}{1+j(1+2\gamma_2)}} \left( \frac{N}{mM^2}\right)^{\frac{j}{1+j(1+2\gamma_2)}} \right)}
\\
+ \exp{ -c_2\left( \frac{N}{mM^2}\right)^{\frac{1}{1+2\gamma_2}} } + c_4\var(h_m) \exp{- j_{\mathrm{max}}\log(N/m) + \log(t_\varepsilon) }
}
that holds if $\varepsilon\geq \frac{1}{\sqrt{N/m}}$ and $2\leq t\leq \frac{N}{m}$.
If $t<\left( \frac{N}{mM^2}\right)^{\frac{1}{1+2\gamma_2}} \varepsilon^4$, then the first two terms on the right hand side of the previous display are bounded by $e^{-\frac{ct_{\varepsilon}}{\varepsilon^3} } = e^{-\frac{ct}{\varepsilon}}$ each, and if $t<\varepsilon (j_{\mathrm{max}}-1) \log(N/m)$, the same is true for the last term. Therefore,
if
\[
t<\varepsilon^4\min\left( \left( \frac{N}{mM^2}\right)^{\frac{1}{1+2\gamma_2}}\,,(j_{\mathrm{max}}-1) \log(N/m)\right),
\]
then
\ml{
\pr{\left| \sum_{j=2}^m \frac{{m\choose j} }{{N\choose j}} \sum_{J\in \mathcal A_N^{(j)}} \h{j}_m(X_i,\, i\in J) \right| \geq \sqrt{t_\varepsilon}\sqrt{\frac{m}{N}}}
\\
\leq 3 \exp{-\frac{ct}{\varepsilon}} = \exp{-\frac{t}{2\,\var\left( \sqrt{m}\, h_m^{(1)}(X_1) \right)}} \cdot o(1)
}
where the last equality holds whenever we choose $\varepsilon:=\varepsilon(N,m)$ such that $\varepsilon(N,m)\to 0$ as $N/m\to\infty$. Specifically, take $\varepsilon = \left( \frac{q(N,m)}{\min\left( \left( \frac{N}{mM^2}\right)^{\frac{1}{1+2\gamma_2}}\,,j_{\mathrm{max}}\log(N/m)\right)} \right)^{1/4} $ where the function $q$ was defined in the statement of the corollary, and the conclusion follows immediately.
If $m^2\ll \frac{N}{\log^2(N)}$, we can replace the last term in equation \eqref{eq:base-1} by
\[
\max_{j>j_{\mathrm{max}}} \exp{-c'\min\left(t_\varepsilon^{1/j}\left( \frac Nm\right)^{\frac{j-1}{j}},\left( \frac{t_\varepsilon}{\|h\|_\infty^2}\right)^{\frac{1}{j+1}} \left( \frac{Nj}{m^2} \right)^{\frac{j}{j+1}} \right)},
\]
which is bounded by $e^{-\frac{ct}{\varepsilon}}$ whenever $t<\frac{Nj_{\mathrm{max}}}{m^2} \varepsilon^4$. The final result in this case follows similarly.
\end{proof}
\section{Implications for the median of means estimator.}
We are going to apply the results of the previous section to deduce non-asymptotic bounds for the permutation-invariant version of the median of means estimator. Recall that it was defined as
\begin{equation*}{
\widehat\mu_{N} : =\med{\bar X_J, \ J\in \mathcal A_{N}^{(m)}}.
}
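The following is a minimal numerical sketch (ours, included only for illustration; it is not used in any of the arguments below): it evaluates $\widehat\mu_{N}$ by brute force, computing the mean of every block $J\in \mathcal A_N^{(m)}$ and taking the median of the resulting values. As discussed in section \ref{sec:discussion}, this exhaustive computation is only feasible for very small $N$ and $m$.
\begin{verbatim}
# Illustrative sketch (not part of the paper's analysis): brute-force
# evaluation of the permutation-invariant median-of-means estimator.
# The number of blocks is C(N, m), so this is feasible only for small N, m.
from itertools import combinations
from statistics import median
import random

def mom_all_subsets(sample, m):
    """Median of the means of all subsets of size m."""
    block_means = [sum(block) / m for block in combinations(sample, m)]
    return median(block_means)

if __name__ == "__main__":
    rng = random.Random(0)
    # toy data: standard normal observations contaminated by two outliers
    sample = [rng.gauss(0, 1) for _ in range(12)] + [25.0, -30.0]
    print("sample mean:", sum(sample) / len(sample))
    print("estimator  :", mom_all_subsets(sample, m=3))
\end{verbatim}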
\begin{theorem}
\label{th:U-mom}
Assume that $X_1,\ldots,X_N$ are i.i.d. copies of a random variable $X$ with mean $\mu$ and variance $\sigma^2$. Moreover, suppose that
\begin{enumerate}
\item[(i)] the distribution of $X_1$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb R$ with density $\phi_1$;
\item[(ii)] the Fourier transform $\widehat \phi_1$ of the density satisfies the inequality $\left| \widehat\phi_1(x)\right| \leq \frac{C_1}{(1+|x|)^\delta}$ for some positive constants $C_1$ and $\delta$;
\item[(iii)] $\mathbb E \left|X_1/\sigma \right|^{3+p}<\infty$ for some $p>0$;
\item[(iv)] $m=O\left( N^{1-\varepsilon}\right)$ for some $\varepsilon>0$.
\end{enumerate}
Then the estimator $\widehat\mu_{N}$ satisfies
\[
\pr{\left|\sqrt{N}(\widehat\mu - \mu)\right|\geq \sigma\sqrt t} \leq 2\exp{-\frac{t}{2(1+o(1))}}
\]
where $o(1)\to 0$ as $N/m\to\infty$ uniformly for all $t\in \left[ l_{N,m},u_{N,m} \right]$ for any sequences $\{ l_{N,m}\}\,,\{u_{N,m}\}$ such that $l_{N,m}\gg \frac{N}{m^2}$ and $u_{N,m} \ll \frac{N}{m^{2-\frac{p}{3+p}} \vee \log^2(N)}$.
\end{theorem}
Let us recall the well-known fact (the Riemann--Lebesgue lemma) stating that $|\widehat\phi_1(x)|\to 0$ as $|x|\to\infty$ for any absolutely continuous distribution, so the additional assumption (ii) is mild and non-restrictive.
\begin{proof}
Throughout the course of the proof, we will assume without loss of generality that $\sigma^2=1$; the general case follows by rescaling.
Note that a direct application of Corollary \ref{th:bernstein} requires the existence of all moments of $X_1$, which is too restrictive. Therefore, we will first show that we can reduce the problem to the case of bounded random variables.
Define $\rho(x)=|x|$. Proceeding as in the proof of Theorem \ref{th:dev-1}, we observe that
\begin{equation*}{
\pr{\sqrt{N}(\widehat\mu - \mu)\geq \sqrt t} \leq \pr{ \frac{\sqrt{N/m}}{{N\choose m}}\sum_{J\in \mathcal A_N^{(m)}} \rho_-'\left( \sqrt{m}\left( \bar X_J - \mu - \sqrt{t/N}\right)\right)\geq 0 }.
}
Given a random variable $Y$ and $R>0$, set $Y^{(R)}:=(Y-\mathbb EY) \cdot I\{ |Y-\mathbb EY| \leq R\}$. We will also denote $\bar X_J^{(R)}:= \frac{1}{m}\sum_{j\in J}X_j^{(R)}$ and $J^{(R)}:=\{j\in [N]: \, |X_j-\mu|>R \}$. Our next goal is to show that for sufficiently large $R$, the U-statistic with kernel $\rho'_-$ evaluated at $X_1,\ldots,X_N$ can be replaced by the corresponding U-statistic evaluated at $X_{1,R},\ldots,X_{N,R}$, where $X_{j,R}$ are i.i.d. with the law $P_R(B):=\pr{X-\mu\in B\,\big| \, |X-\mu|\leq R}$. To this end, note that
\mln{
\label{eq:g01}
\left| \sum_{J\in \mathcal A_N^{(m)}} \rho_-'\left( \sqrt{m}\left( \bar X_J - \mu - \sqrt{t/N}\right)\right) - \sum_{J\in \mathcal A_N^{(m)}:\,J\cap J^{(R)}=\emptyset} \rho_-'\left( \sqrt{m}\left( \bar X_J^{(R)} - \sqrt{t/N}\right)\right)\right|
\\
\leq {N\choose m} - {N - |J^{(R)}| \choose m}.
}
Next, one can easily verify that whenever $m=o(N)$ and $|J^{(R)}| \leq N/2 -m$, $\frac{{N - |J^{(R)}| \choose m}}{{N\choose m}} \geq \left( \frac{N-m-|J^{(R)}|+1}{N-|J^{(R)}|+1}\right)^{|J^{(R)}|}\geq \exp{-\frac{|J^{(R)}|}{N/(2m)}} \geq 1 - \frac{|J^{(R)}|}{N/(2m)}$. In particular, this inequality holds on the event $\left\{|J^{(R)}| \leq \sqrt{\frac{Nt}{m}}\, o(1)\right\}$ where $o(1)$ is any function such that $o(1)\to 0$ as $N/m\to\infty$. Conditionally on the event $\{ J^{(R)} = I \}$ where $I\subseteq [N]$ is arbitrary, the random variables $X_j^{(R)}, \ j\in \bar I$ are i.i.d. with the law $P_R$. Therefore,
\ml{
\pr{\sqrt{N}(\widehat\mu - \mu)\geq \sqrt t}
\\
\leq \sum_{I:\,|I|\leq \sqrt{\frac N m} o(1)} \pr{\left\{ \frac{\sqrt{N/m}}{{N\choose m}}\sum_{J\cap I=\emptyset} \rho_-'\left( \sqrt{m}\left( \bar X^{(R)}_J - \sqrt{t/N}\right)\right)\geq -\sqrt{\frac{N}{m}}\frac{|J^{(R)}|}{\frac{N}{2m}} \right\} \cap \{ J^{(R)} = I\}}
\\
+\pr{|J^{(R)}| > \sqrt{\frac{Nt}{m}}\, o(1) }
\\
\leq \max_{I:\,|I|\leq \sqrt{\frac N m} o(1)} \pr{ \frac{\sqrt{N/m}}{{N\choose m}}\sum_{J\cap I=\emptyset} \rho_-'\left( \sqrt{m}\left( \bar X_{J,R} - \sqrt{t/N}\right)\right) \geq -o(1)}
\\
+ \pr{|J^{(R)}| > \sqrt{\frac{Nt}{m}}\, o(1) }
}
where $\bar X_{J,R} = \frac{1}{|J|}\sum_{j\in J} X_{j,R}$. Finally, we can repeat the reasoning similar to \eqref{eq:g01} to replace the sum $\sum_{J\cap I=\emptyset}$ by the sum ranging over $\mathcal A_{N}^{(m)}$ to get that
\ml{
\pr{\sqrt{N}(\widehat\mu - \mu)\geq \sqrt t}
\leq \pr{ \frac{\sqrt{N/m}}{{N\choose m}}\sum_{J\in \mathcal A_{N}^{(m)}} \rho_-'\left( \sqrt{m}\left( \bar X_{J,R} - \sqrt{t/N}\right)\right) \geq -\sqrt{t}o(1)}
\\
+ \pr{|J^{(R)}| > \sqrt{\frac{Nt}{m}}\, o(1) }
\\
= \pr{ \frac{\sqrt{N/m}}{{N\choose m}}\sum_{J\in \mathcal A_{N}^{(m)}} \left( \rho_-'\left( \sqrt{m}\left( \bar X_{J,R} - \sqrt{t/N}\right)\right) - \mathbb E \rho'_- \right)\geq -o(1) - \sqrt{N/m}\,\mathbb E \rho'_-}
\\
+ \pr{|J^{(R)}| > \sqrt{\frac{Nt}{m}}\, o(1) }
}
where we used the shortcut $\mathbb E\rho'_-$ in place of $\mathbb E \rho_-'\left( \sqrt{m}\left( \bar X_{J,R} - \sqrt{t/N}\right)\right)$. Set $p_R:=\pr{|X-\mu| > R}$, and assume that $p_R \leq \left(t/(Nm)\right)^{1/2} o^2(1)$ where $o(1)$ is the same function as above. Chernoff bound immediately yields that
\begin{equation}{
\label{eq:g02}
\pr{|J^{(R)}| > \sqrt{\frac{Nt}{m}}\, o(1) } \leq \exp{-c\sqrt{\frac{Nt}{m}}\,o(1)}.
}
Next, consider the expression $\mathbb E \rho_-'\left( \sqrt{m}\left( \bar X_{J,R} - \sqrt{t/N}\right)\right)$. Let $q=3+p$ and note that $\left|\mathbb E X_{1,R}\right| \leq \frac{\mathbb E |X_1-\mu|^q I\{ |X_1-\mu |>R\}}{R^{q-1}}$ in view of H\"{o}lder and Markov inequalities, hence $\left|\mathbb E X_{1,R}\right| = o(\sqrt{t/N})$ whenever $t\geq c\left(\mathbb E |X_1-\mu|^q \right)^2 NR^{-2(q-1)}$ for some $c>0$.
Assuming that $R$ satisfies this condition and repeating the steps from equations \eqref{eq:b11}, \eqref{eq:b12} in the proof of Theorem \ref{th:dev-1}, we see that
\ml{
-\sqrt{N/m}\, \mathbb E \rho_-'\left( \sqrt{m}\left( \bar X_{J,R} - \sqrt{t/N}\right)\right) =
-\sqrt{N/m}\,\mathbb E \rho_-'\left( \sqrt{m}\left( \bar X_{J,R} - \mathbb E X_{1,R} - (1+o(1))\sqrt{t/N}\right)\right)
\\
\leq C\sqrt{k}\cdot g(m) + \sqrt{t}\left( \sqrt{\frac 2 \pi} + O(\sqrt{t/k}) \right)
= \sqrt{t} \sqrt{\frac 2 \pi} \left(1 + o(1)\right)
}
whenever $t\ll N/m$ and $t \gg \frac{N}{m}\, g^2(m)$, where we also used the fact that the moments of $X_{1,R}$ are dominated by the corresponding moments of $X-\mu$. Indeed, one immediately checks using Jensen's inequality that for any $q>1$,
\[
\mathbb E|X_{1,R} - \mathbb EX_{1,R}|^q \leq \mathbb E |X_{1,R} - X'_{1,R}|^q \leq 2^{q-1} \mathbb E|X_{1,R}|^q \leq \frac{2^{q-1}}{1-p_R}\mathbb E |X_1|^q,
\]
where $X'_{1,R}$ is an independent copy of $X_{1,R}$. We also remark that in view of imposed moment assumptions, $g(m)\leq C\frac{\mathbb E|X_1-\mu|^3}{\sqrt m}\leq C\frac{\left(\mathbb E|X_1-\mu|^q\right)^{3/q}}{\sqrt m}$. Let us now summarize the previously made assumptions: (a) $\frac{N \left(\mathbb E|X_1-\mu|^3\right)^{2}}{m^2} \ll t \ll N/m$, (b) $p_R=\pr{|X_1-\mu| > R}\ll \sqrt{\frac{t}{Nm}}$ and (c) $R^{q-1}\geq c \mathbb E |X_1-\mu|^q \sqrt{N/t}$. In view of Markov's inequality, (b) holds whenever $R^q \geq c \mathbb E|X_1-\mu|^q \sqrt{\frac{Nm}{t}}$, whence it suffices to assume that
\[
R\geq c \max\left(\left(\frac{Nm \left(\mathbb E |X_1-\mu|^q \right)^2}{t}\right)^{\frac{1}{2q}}, \left( \frac{N\left(\mathbb E |X_1-\mu|^q\right)^2}{t}\right)^{\frac{1}{2(q-1)}}\right)
\]
which implies that $R = \begin{cases}
\left(\frac{Nm\left(\mathbb E |X_1-\mu|^q \right)^2}{t}\right)^{\frac{1}{2q}}, & t\geq c'\frac{N \left(\mathbb E|X_1-\mu|^q\right)^2}{m^{q-1}}, \\
\left(\frac{N\left(\mathbb E |X_1-\mu|^q \right)^2}{t}\right)^{\frac{1}{2(q-1)}}, & 1\leq t < c'\frac{N\left( \mathbb E|X_1-\mu|^q\right)^2}{m^{q-1}}.
\end{cases}$.
But, as $t\gg \frac{N \left(\mathbb E|X_1-\mu|^3\right)^{2}}{m^2}$ and $\frac{\mathbb E|X_1-\mu|^q}{\mathbb E|X_1-\mu|^3}\leq cm^{\frac{1}{2(q-3)}}$ by assumption, the second option above is not possible, hence we deduce that it suffices to choose $R \geq \left(\frac{Nm\left(\mathbb E |X_1-\mu|^q \right)^2}{t}\right)^{\frac{1}{2q}}$ for all admissible values of $t$. In particular, $R = m^{\frac{3}{2q}}\left(\frac{\mathbb E |X_1-\mu|^q}{\mathbb E|X_1-\mu|^3} \right)^{1/q}$ satisfies the requirements, and the necessary conditions reduce to
(a) $\frac{N \left(\mathbb E|X_1-\mu|^q\right)^{6/q}}{m^2} \ll t \ll N/m$ and (b) $R = m^{\frac{3}{2q}}\left(\frac{\mathbb E |X_1-\mu|^q}{\mathbb E|X_1-\mu|^3} \right)^{1/q}$. It remains to estimate
\begin{equation}{
\label{eq:U-bdd}
\pr{ \frac{\sqrt{N/m}}{{N\choose m}}\sum_{J\in \mathcal A_{N}^{(m)}} \left(\rho_-'\left( \sqrt{m}\left( \bar X_{J,R} - \mathbb E X_{1,R} - (1+o(1))\sqrt{t/N}\right)\right) - \mathbb E \rho'_- \right)\geq \sqrt{t} \sqrt{\frac 2 \pi} \left(1 + o(1)\right) }.
}
Note that the U-statistic is now a function of bounded random variables, hence we can apply Corollary \ref{th:bernstein} with $\gamma_2=0$. As $\|\rho'_-\|_\infty =1$, condition (ii) of the corollary holds.
Let $\sqrt{\frac{m}{N}}\sum_{j=1}^N h^{(1)}(X_{j,R})$ be the first term in Hoeffding decomposition of the U-statistic
\[
\frac{\sqrt{N/m}}{{N\choose m}}\sum_{J\in \mathcal A_{N}^{(m)}} \left(\rho_-'\left( \sqrt{m}\left( \bar X_{J,R} - \mathbb E X_{1,R} - (1+o(1))\sqrt{t/N}\right)\right) - \mathbb E \rho'_- \right)
\]
It has been established in the course of the proof of Theorem \ref{th:clt-1} that
\begin{equation*}{
\var\left( \sqrt{m} h^{(1)}(X_{1,R}) \right) = \frac{2}{\pi}(1+o(1))
}
where $o(1)\to 0$ as $m,N/m\to\infty$, validating assumption (iii) of the corollary. It remains to verify assumption (i) and specify the value of $j_{\max}$.
Recall that $\rho'_-(x) = I\{ x\geq 0\} - I\{x<0\}$. The function $f_j(u_1,\ldots,u_j)$ appearing in the statement of Theorem \ref{th:concentration} can therefore be expressed as
\ml{
f_j(u_1,\ldots,u_j) = \mathbb E \rho'_-\left( \frac{1}{\sqrt m}\sum_{i=1}^j u_i + \sqrt{\frac{m-j}{m}}\frac{\sum_{i=j+1}^m X_i}{\sqrt{m-j}} - t\sqrt{\frac m N}\right)
\\
= 2 \Phi_{m-j}\left( \frac{1}{\sqrt{m-j}}\sum_{i=1}^j u_i - t\sqrt{\frac{m}{m-j}}\sqrt{\frac m N}\right) - 1,
}
where for any integer $k\geq 1$, $\Phi_k$ stands for the cumulative distribution function of $\frac{1}{\sqrt k}\sum_{j=1}^k X_j$ and $\phi_k$ is the corresponding density function that exists by assumption. Therefore,
\[
\partial_{u_j}\ldots\partial_{u_1} f_j(u_1,\ldots,u_j) = \frac{2}{(m-j)^{j/2}}\phi^{(j-1)}_{m-j}\left( \frac{1}{\sqrt{m-j}}\sum_{i=1}^j u_i - t\sqrt{\frac{m}{m-j}}\sqrt{\frac m N}\right).
\]
The following lemma demonstrates that Theorem \ref{th:concentration} applies with $\gamma_1=1/2$ and that $j_{\max} = \frac{m}{\log(m)} \,o(1)$ in the statement of Corollary \ref{th:bernstein}.
\begin{lemma}
\label{lemma:deriv-bound}
Let assumptions of Theorem \ref{th:U-mom} hold. Then for $m$ large enough and $j=o(m/\log m)$,
\begin{equation*}{
\left\| \phi_{m-j}^{(j-1)} \right\|_\infty \leq C \left(\frac{2j}{e}\right)^{j/2}
}
for a sufficiently large constant $C=C(P)$.
\end{lemma}
We postpone the proof of this lemma to section \ref{proof:deriv-bound}. As all the conditions have been verified, the inequality of Corollary \ref{th:bernstein} applies and yields that the probability in \eqref{eq:U-bdd} can be bounded from above by $\exp{-\frac{t}{2\sigma^2(1+o(1))}}$ for all $2\leq t\leq q(N,m)$ whenever
\[
q(N,m)=\min\left(\frac{N}{m R^2}, \frac{N}{m \log^2(N)}\right)\cdot o\left(1\right) \text{ as } N/m\to\infty.
\]
To get the expression for the second term in the minimum above from the bound of the corollary, it suffices to consider the cases when $m\geq \frac{\sqrt N}{\log(N)}o(1)$ and $m\leq \frac{\sqrt N}{\log(N)}o(1)$ separately; we omit the simple algebra.
Choosing $R^2 = m^{\frac{3}{q}} \left(\frac{\mathbb E |X_1-\mu|^q}{\mathbb E|X_1-\mu|^3} \right)^{2/q}$ and recalling that $q=3+p$, we deduce the final form of the bound stating that
\[
\pr{\sqrt{N}(\widehat\mu - \mu)\geq \sigma\sqrt t} \leq \exp{-\frac{t}{2(1+o(1))}}
\]
uniformly for all $\frac{N\left(\mathbb E|X_1-\mu|^3\right)^{2}}{m^2}\ll t \ll \frac{N}{\left(\frac{\mathbb E |X_1-\mu|^q}{\mathbb E|X_1-\mu|^3} \right)^{2/q} m^{2-\frac{p}{3+p}} \vee \log^2(N)}$. The argument needed to estimate $\pr{\sqrt{N}(\widehat\mu - \mu)\leq -\sigma\sqrt t}$ is identical.
\end{proof}
\section{Open questions.}
\label{sec:discussion}
Several potentially interesting questions and directions have not been addressed in this paper. We summarize a few of them below.
\begin{itemize}
\item[(i)] The first question is related to the assumptions of Theorem \ref{th:U-mom}: does it still hold for distributions with only $2$ (or $2+\varepsilon$) moments instead of $3+\varepsilon$? And can the assumptions requiring absolute continuity and a bound on the rate of decay of the characteristic function be dropped? For example, Corollary \ref{th:clt-1} holds for lattice distributions as well.
\item[(ii)] It is known \citep{hanson1971bound} that the sample mean based on i.i.d. observations from the multivariate normal distribution $N(\mu,\Sigma)$ satisfies the inequality
\[
\left\| \bar X_N - \mu\right\|_2 \leq \sqrt{\frac{\mathrm{trace}(\Sigma)}{N}}+\sqrt{\frac{2t\|\Sigma\|}{N}}
\]
with probability at least $1-e^{-t}$. Does there exist an estimator of the mean that achieves this bound (up to $o(1)$ factors) for the heavy-tailed distributions? A natural candidate would be the tournaments-type estimator \citep[see][]{lugosi2019mean,lugosi2019near} that uses the univariate estimator $\widehat\mu_n$ defined in \eqref{eq:U-mom-est} as a subroutine.
\item[(iii)] Exact computation of the estimator $\widehat\mu_N$ is infeasible, as it requires evaluation and sorting of $\asymp \left( \frac{N}{m}\right)^m$ sample means. Therefore, it is interesting to understand whether it can be replaced by $\med{\bar X_J, \ J\in \mathcal B}$ where $\mathcal B$ is a (deterministic or random) subset of $\mathcal A_{N}^{(m)}$ of much smaller cardinality, while preserving the deviation guarantees. For instance, it is easy to deduce from results on incomplete U-statistics in section 4.3 of the book by \citet{lee1990u} combined with the proof of Corollary \ref{th:clt-1} that if $\mathcal B$ consists of $M$ subsets selected at random with replacement from $\mathcal A_N^{(m)}$, then the asymptotic distribution of $\sqrt{N}\left( \med{\bar X_J, \ J\in \mathcal B} - \mu\right)$ is still $N(0,\sigma^2)$ as long as $M\gg N$. However, establishing results in the spirit of Theorem \ref{th:U-mom} appears to be more difficult. A minimal sketch of this randomized variant is given after this list.
\end{itemize}
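The following sketch (ours; the paper does not prescribe a specific implementation) illustrates the randomized variant mentioned in item (iii): the median is taken over the means of $M$ blocks drawn independently and uniformly at random from $\mathcal A_N^{(m)}$, rather than over all ${N\choose m}$ blocks.
\begin{verbatim}
# Illustrative sketch (not from the paper): incomplete median-of-means, where
# blocks are size-m subsets drawn independently (i.e., with replacement from
# the collection A_N^{(m)}); each block itself consists of distinct indices.
import random
from statistics import median

def incomplete_mom(sample, m, n_blocks, seed=0):
    """Median of the means of n_blocks randomly chosen size-m blocks."""
    rng = random.Random(seed)
    n = len(sample)
    block_means = []
    for _ in range(n_blocks):
        block = rng.sample(range(n), m)   # a uniformly random size-m subset
        block_means.append(sum(sample[i] for i in block) / m)
    return median(block_means)

if __name__ == "__main__":
    rng = random.Random(1)
    sample = [rng.gauss(0, 1) for _ in range(1000)] + [50.0] * 5  # outliers
    print(incomplete_mom(sample, m=20, n_blocks=20000))
\end{verbatim}
Whether guarantees in the spirit of Theorem \ref{th:U-mom} carry over to such incomplete versions is, as noted above, an open question.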
\section{Remaining proofs.}
\label{sec:proofs}
The proofs omitted in the main text are presented in this section.
\subsection{Technical tools.}
\label{sec:tools}
Let us recall the definition of Hoeffding's decomposition \citep{hoeffding1948class} and closely related concepts that are at the core of many arguments related to U-statistics. Assume that $Y_1,\ldots,Y_N$ are i.i.d. random variables with distribution $P_Y$.
Recall that $\mathcal A_{N}^{(m)} = \left\{ J\subseteq [N]: \ |J|=m\right\}$ and that the U-statistic with permutation-symmetric kernel $h_m$ is defined as
\[
U_{N,m} = \frac{1}{{N\choose m}}\sum_{J\in \mathcal A_{N}^{(m)}} h_m(Y_i, \ i\in J),
\]
where we assume that $\mathbb Eh_m=0$. Moreover, for $j=1,\ldots,m,$ define the projections
\begin{equation}{
\label{eq:pi}
(\pi_j h_m)(y_1,\ldots,y_j):=(\delta_{y_1} - P_Y)\times\ldots\times(\delta_{y_j}-P_Y)\times P_Y^{m-j} h_m.
}
For brevity and to ease notation, we will often write $h_m^{(j)}$ in place of $\pi_j h_m$. The variances of these projections will be denoted by
\[
\delta_j^2 :=\var\left( \h{j}_m(Y_1,\ldots,Y_j) \right).
\]
In particular, $\delta_m^2 = \var(h_m)$. It is well known \citep{lee1990u} that $\h{j}_m$ can be viewed geometrically as orthogonal projections of $h_m$ onto a particular subspace of $L_2(P_Y^{m})$.
The kernels $\h{j}_m$ have the property of complete degeneracy, meaning that $\mathbb E \h{j}_m(y_1,\ldots,y_{j-1},Y_j) = 0$ for $P_Y$-almost all $y_1,\ldots,y_{j-1}$ while $\h{j}_m(Y_1,\ldots,Y_j)$ is non-zero with positive probability. One can easily check that $h_m(y_1,\ldots,y_m) = \sum_{j=1}^m \sum_{J\subseteq [m]: |J|=j} \h{j}_m(y_i,\, i\in J)$; in particular, the partial sum $\sum_{j=1}^k \sum_{J\subseteq [m]: |J|=j} \h{j}_m(y_i,\, i\in J)$ is the best approximation of $h_m$, in the mean-squared sense, in terms of sums of functions of at most $k$ variables.
The Hoeffding decomposition states that (see \citep{hoeffding1948class} as well as the book by \citet{lee1990u})
\begin{equation}{
\label{eq:hoeffding}
U_{N,m} = \sum_{j=1}^m {m\choose j} U_{N,m}^{(j)},
}
where $U_{N,m}^{(j)}$ are U-statistics with kernels $\h{j}_m$, namely
$U_{N,m}^{(j)}:=\frac{1}{{N\choose j}} \sum\limits_{J\in \mathcal A_{N}^{(j)}} \h{j}_m(Y_i, \ i\in J)$. Moreover, all terms in representation \eqref{eq:hoeffding} are uncorrelated.
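As a simple numerical sanity check (ours, not needed for any of the proofs), the following sketch estimates $\delta_1^2$, $\delta_2^2$ and $\var(h_m)$ by Monte Carlo for the order-two centered kernel $h(y_1,y_2)=y_1y_2+y_1+y_2$ with standard normal data; here $h^{(1)}(y)=y$ and $(\pi_2 h)(y_1,y_2)=y_1y_2$, so the identity $\var(h) = {2\choose 1}\delta_1^2 + {2\choose 2}\delta_2^2$ reads $3 = 2 + 1$.
\begin{verbatim}
# Illustrative Monte Carlo check (not from the paper) of the variance
# identity var(h) = sum_j C(m, j) * delta_j^2 for m = 2 and the kernel
# h(y1, y2) = y1*y2 + y1 + y2 with i.i.d. standard normal inputs.
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
y1, y2 = rng.standard_normal(n), rng.standard_normal(n)

h = y1 * y2 + y1 + y2            # centered kernel: E h = 0
h1_first, h1_second = y1, y2     # first-order projections h^{(1)}(y) = y
pi2 = h - h1_first - h1_second   # completely degenerate part: (pi_2 h) = y1*y2

print("var(h)      ~", h.var())              # expected value: 3
print("2*delta_1^2 ~", 2 * h1_first.var())   # expected value: 2
print("delta_2^2   ~", pi2.var())            # expected value: 1
\end{verbatim}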
\begin{comment}
When dealing with U-statistics, we are often interested in the exponential moments of the form
$\mathbb E \exp{|Z|^{\alpha}}$ for some random variable $Z$ and $0<\alpha<1$. As the function $\psi_\alpha(z) = \exp{|z|^\alpha}$ is only convex in the region $\{ |z|^\alpha\geq \frac{1-\alpha}{\alpha} \}$, we will often use its convex modification
\begin{equation}{
\label{eq:psi}
\tilde \psi_\alpha(z) = \begin{cases} e^{|z|^\alpha}, & |z|^\alpha > \frac{1-\alpha}{\alpha}, \\
1 + \frac{e^{\frac{1-\alpha}{\alpha}}-1}{\left( \frac{1-\alpha}{\alpha}\right)^{1/\alpha}} |z|, & |z|^\alpha \leq \frac{1-\alpha}{\alpha}. \end{cases}
}
It is also easy to see that
\begin{equation*}{
\label{eq:psi2}
\tilde\psi_\alpha(z) \leq \psi_\alpha(z) \leq \min\left( \tilde \psi_\alpha(z) + e^{\frac{1-\alpha}{\alpha}}, \tilde \psi_\alpha(z) e^{\frac{1-\alpha}{\alpha}}\right).
}
\end{comment}
Next, we recall some useful moment bounds, found for instance in the book by \citet{Decoupling}, for the Rademacher chaos variables. Let $\varepsilon_1,\ldots,\varepsilon_N$ be i.i.d. Rademacher random variables (random signs), $\{ a_J, \ J\in \mathcal A_N^{(l)} \} \subset \mathbb R$, and $Z = \sum_{J\in \mathcal A_{N}^{(l)} } a_J \prod_{i\in J} \varepsilon_i$. Here, $\prod_{i\in J} \varepsilon_i = \varepsilon_{i_1}\cdot\ldots\cdot \varepsilon_{i_l}$ for $J=\{i_1,\ldots,i_l\}$.
\begin{fact}[Bonami inequality]
\label{fact:1}
Let $\sigma^2(Z) = \var(Z) = \sum_{J\in \mathcal A_{N}^{(l)}} a_J^2$. Then for any $q>2$,
\[
\mathbb E |Z|^q \leq \left( q-1\right)^{ql/2} \left(\sigma^2(Z) \right)^{q/2}.
\]
\end{fact}
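For illustration (this numerical check is ours and is only meant to make the statement concrete), the following sketch compares a Monte Carlo estimate of $\mathbb E|Z|^q$ with the bound of Fact \ref{fact:1} for a Rademacher chaos of order $l=2$ and $q=4$.
\begin{verbatim}
# Illustrative Monte Carlo check (not from the paper) of the Bonami bound
# E|Z|^q <= (q-1)^{q*l/2} * (sigma^2(Z))^{q/2} for a chaos of order l = 2,
# Z = sum_{i<j} a_{ij} eps_i eps_j with i.i.d. Rademacher signs eps_i.
import numpy as np

rng = np.random.default_rng(0)
n, q, l = 8, 4, 2
a = np.triu(rng.standard_normal((n, n)), k=1)   # coefficients a_J, |J| = 2

eps = rng.choice([-1.0, 1.0], size=(10**6, n))  # random signs
z = np.einsum('ki,ij,kj->k', eps, a, eps)       # realizations of Z
sigma2 = (a ** 2).sum()                         # sigma^2(Z) = sum of a_J^2

print("Monte Carlo E|Z|^q:", np.mean(np.abs(z) ** q))
print("Bonami upper bound:", (q - 1) ** (q * l / 2) * sigma2 ** (q / 2))
\end{verbatim}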
Now we state a version of the symmetrization inequality for completely degenerate U-statistics due to \citet{sherman1994maximal}, also see the paper by \citet{song2019approximating} for the modern exposition of the proof. The main feature of this inequality, put forward by \citet{song2019approximating}, is the fact that its proof does not rely on decoupling, and yields constants that do not grow too fast with the order of U-statistics.
\begin{fact}
\label{fact:2}
Let $h$ be a completely degenerate kernel of order $l$, and $\Phi$ -- a convex, nonnegative, non-decreasing function. Moreover, assume that $\varepsilon_1,\ldots,\varepsilon_N$ are i.i.d. Rademacher random variables. Then
\[
\mathbb E\Phi\left( \sum_{ 1\leq j_1<\ldots<j_l\leq N} h(Y_{j_1},\ldots,Y_{j_l})\right) \leq \mathbb E\Phi\left(2^l \sum_{1\leq j_1<\ldots<j_l\leq N} \varepsilon_{j_1}\ldots\varepsilon_{j_l} h(Y_{j_1},\ldots,Y_{j_l}) \right).
\]
\end{fact}
Next is the well-known identity, due to \citet{hoeffding1963probability}, that allows one to reduce many problems for non-degenerate U-statistics to the corresponding problems for sums of i.i.d. random variables.
\begin{fact}
\label{fact:3}
The following representation holds:
\[
U_{N,m} = \frac{1}{N!} \sum_{\pi} W_{\pi},
\]
where the sum is over all permutations $\pi:[N]\mapsto[N]$, and
\[
W_{\pi} = \frac{1}{k}\left( h_m\left(Y_{\pi(1)},Y_{\pi(2)},\ldots,Y_{\pi(m)}\right) + \ldots + h_m\left(Y_{\pi((k-1)m+1)},Y_{\pi((k-1)m+2)},\ldots,Y_{\pi(km)}\right)\right)
\]
for $k=\lfloor N/m \rfloor$.
\end{fact}
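The following toy computation (ours, included only to make the identity concrete) verifies the representation of Fact \ref{fact:3} for $N=5$, $m=2$ and a simple symmetric kernel: averaging $W_\pi$ over all $N!$ permutations reproduces $U_{N,m}$ exactly.
\begin{verbatim}
# Illustrative check (not from the paper): the average of W_pi over all
# permutations of [N] equals the U-statistic U_{N,m} (here N = 5, m = 2).
from itertools import combinations, permutations
from math import comb, factorial, floor

M = 2
SAMPLE = [0.3, -1.2, 2.5, 0.7, -0.4]

def kernel(y1, y2):                       # symmetric kernel of order 2
    return (y1 - y2) ** 2

def u_statistic(sample):
    return sum(kernel(*b) for b in combinations(sample, M)) / comb(len(sample), M)

def w_pi(sample, perm):
    k = floor(len(sample) / M)            # number of disjoint blocks
    return sum(kernel(sample[perm[M * i]], sample[perm[M * i + 1]])
               for i in range(k)) / k

avg = sum(w_pi(SAMPLE, p) for p in permutations(range(len(SAMPLE)))) \
      / factorial(len(SAMPLE))
print(u_statistic(SAMPLE), avg)           # the two values coincide
\end{verbatim}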
Finally, we state a version of Rosenthal's inequality for the moments of sums of independent, nonnegative random variables with explicit constants, see \citep{boucheron2013concentration,chen2012masked}.
\begin{fact}
\label{fact:rosenthal}
Let $Y_1,\ldots,Y_N$ be independent random variables such that $Y_j\geq 0$ with probability $1$ for all $j\in [N]$. Then for any $q\geq 1$,
\[
\left(\mathbb E\left| \sum_{j=1}^N Y_j \right|^q\right)^{1/q}\leq \left( \left(\sum_{j=1}^N \mathbb E Y_j\right)^{1/2} + 2\sqrt{eq} \left( \mathbb E\max_{j=1,\ldots,N}Y_j^{q}\right)^{1/2q}\right)^2.
\]
\end{fact}
\begin{comment}
Finally, we recall the result due to \citet{bourgain1979walsh} and its version with better constants due to \citet{kwapien2010hoeffding} regarding the continuity of the operators $\pi_j$ in $L_p$ (we will only state the version for $p>2$ but the general result holds for $p>1$).
\begin{fact}
\label{fact:4}
Let $h:\mathbb R^m\mapsto \mathbb R$ be in $L_p(P^m)$ for some $p>2$. Then
\[
\mathbb E^{1/p} \left| \sum_{|J|=j}(\pi_j h)(X_i, \ i\in J)\right|^p \leq \left(C \frac{p}{\log p}\right)^j \mathbb E^{1/p} |h(X_1,\ldots,X_m)|^p,
\]
where $C>0$ is an absolute constant.
\end{fact}
The Hermite polynomials $H_j(x), \ j \geq 0$ are defined via $H_j(x)=(-1)^j e^{x^2} \frac{d^j}{dx^j}e^{-x^2}$, while the closely related Hermite functions are $\psi_j(x) = (2^j j!)^{-1/2} \pi^{-1/4} e^{-x^2/2}H_j(x)$.
\begin{fact}[Cram\'{e}r's inequality, see \cite{indritz1961inequality}]
\label{fact:5}
The following inequality holds:
\[
\sup_{x\in \mathbb R} |\psi_j(x)| \leq \pi^{-1/4}.
\]
\end{fact}
\begin{fact}
\label{fact:6}
Let $\xi_j, \ j=1,\ldots,N$ be i.i.d. centered random variables such that $|\xi_1|\leq M$ almost surely and $\var(\xi_1) = \sigma^2$. Then
\[
\mathbb E\exp{t\sum_{j=1}^N \xi_j}\leq \exp{\frac{N\sigma^2 t^2}{2- \frac{2}{3}Mt}}
\]
for $t<\frac{3}{M}$.
\end{fact}
\end{comment}
\subsection{Proof of Theorem \ref{th:U-stat}.}
\label{proof:U-stat}
Recall that
\al{
\h{j}_m(y_1,\ldots,y_j)&:=(\delta_{y_1} - P_Y)\times\ldots\times(\delta_{y_j}-P_Y)\times P_Y^{m-j} h_m,
\\
\delta_j^2 &:=\var\left( \h{j}_m(Y_1,\ldots,Y_j) \right).
}
It is easy to verify that
\ml{
h_m(Y_1,\ldots,Y_m) = (\delta_{Y_1} - P_Y + P_Y)\times\ldots\times(\delta_{Y_m}- P_Y + P_Y) h_m
= \sum_{j=1}^m \sum_{J\subseteq [m]: |J|=j} \h{j}_m(Y_i,\, i\in J)
}
and that the terms in the sum above are mutually orthogonal, yielding that
\begin{equation}{
\label{eq:a01}
\var\left(h_m(Y_1,\ldots,Y_m)\right) = \sum_{j=1}^m {m\choose j} \delta_j^2.
}
Moreover, as a corollary of Hoeffding's decomposition, one can get the well known identities
\al{
&\var(U_{N,m}) = \sum_{j=1}^m \frac{{m\choose j}^2}{{N\choose j}}\delta_j^2,
\quad\var(S_{N,m}) = \frac{m^2}{N}\delta_1^2,
\\
&\var(U_{N,m} - S_{N,m}) = \var(U_{N,m}) - \var(S_{N,m}) = \sum_{j=2}^m \frac{{m\choose j}^2}{{N\choose j}}\delta_j^2.
}
See Chapters 1.6 and 1.7 in the book by \cite{lee1990u} for detailed derivations of these facts.
The simple but key observation following from equation \eqref{eq:a01} is that for any $j\in[m]$, $\var(h_m)\geq {m\choose j}\delta_j^2$, or
\begin{equation}{
\label{eq:a02}
\delta_j^2\leq \frac{\var(h_m)}{{m\choose j}}.
}
Therefore,
\mln{
\label{eq:a12}
\var(U_{N,m} - S_{N,m}) = \sum_{j=2}^m \frac{{m\choose j}^2}{{N\choose j}}\delta_j^2
\leq \var(h)\sum_{j=2}^m \frac{{m\choose j}}{{N\choose j}} \leq \var(h)\sum_{j\geq 2} \left( \frac{m}{N}\right)^j
\\
=\var(h)\left(\frac{m}{N}\right)^2 \left(1-m/N\right)^{-1},
}
where we used the fact that $\frac{{m\choose j}}{{N\choose j}}\leq \left( \frac m N\right)^j$ for $m\leq N$: indeed, the latter easily follows from the identity $\frac{{m\choose j}}{{N\choose j}} = \frac{m(m-1)\ldots(m-j+1)}{N(N-1)\ldots (N-j+1)}$. It is well known \citep{hoeffding1948class} that $\var \left(h^{(1)}(Y_1)\right) \leq \frac{\var(h_m)}{m}$, therefore the condition $\frac{\var\left( h_m(Y_1,\ldots,Y_m)\right)}{\var \left(h_m^{(1)}(Y_1)\right)} = o(N)$ imposed on the ratio of variances implies that $m=o(N)$. Therefore, for $m,N$ large enough (so that $m/N\leq 1/2$),
\begin{equation*}{
\frac{\var(U_{N,m} - S_{N,m})}{\var(S_{N,m})} \leq 2 \frac{\var(h_m)\left(\frac{m}{N}\right)^2 }{\delta_1^2 m^2/N}
= 2 \frac{\var(h_m)}{N\delta_1^2} = o(1)
}
by assumption, yielding that $\frac{U_{N,m} - S_{N,m}}{\var^{1/2}(S_{N,m})} = o_P(1)$ as $N,m\to\infty$.
\subsection{Proof of Theorem \ref{th:concentration}.}
\label{proof:concentration1}
We are going to estimate $\mathbb E|V_{N,j}|^q$ for an arbitrary $q>2$. It follows from the symmetrization inequality (Fact \ref{fact:2}) followed by the moment bound stated in Fact \ref{fact:1} that
\ml{
\mathbb E|V_{N,j}|^q \leq 2^{jq}\,\mathbb E_X \mathbb E_\varepsilon \left| \frac{{m\choose j}^{1/2} }{{N\choose j}^{1/2}} \sum_{(i_1,\ldots,i_j)\in \mathcal A_N^{(j)}} \varepsilon_{i_1}\ldots\varepsilon_{i_j}\h{j}_m(X_{i_1},\ldots, X_{i_j}) \right|^q
\\
\leq 2^{jq}(q-1)^{jq/2} \mathbb E \left| \frac{{m\choose j} }{{N\choose j}}\sum_{(i_1,\ldots,i_j)\in \mathcal A_N^{(j)}} \left(\h{j}_m(X_{i_1},\ldots, X_{i_j})\right)^2 \right|^{q/2}.
}
Next, Hoeffding's representation of the U-statistic (Fact \ref{fact:3}) together with Jensen's inequality yields that
\begin{equation*}{
\mathbb E \left| \frac{{m\choose j} }{{N\choose j}}\sum_{(i_1,\ldots,i_j)\in \mathcal A_N^{(j)}} \left(\h{j}_m(X_{i_1},\ldots, X_{i_j})\right)^2 \right|^{q/2}
\leq \mathbb E \left|\frac{{m\choose j}}{\lfloor N/j \rfloor} \sum_{i=1}^{\lfloor N/j \rfloor} W_i\right|^{q/2},
}
where $W_i:=\left(\h{j}_m(X_{(i-1)j+1},\ldots,X_{ij}) \right)^2$.
We are going to estimate $\mathbb E \max_{i=1,\ldots,\lfloor N/j\rfloor} W_i^p$ in two different ways. First, recall that
\begin{equation*}{
(\pi_j h_m)(x_1,\ldots,x_j)=(\delta_{x_1} - P_X)\times\ldots\times(\delta_{x_j}-P_X)\times P_X^{m-j} h_m.
}
Therefore, $(\pi_j h_m)(x_1,\ldots,x_j)$ is a linear combination of $2^j$ terms of the form $\prod_{i\in I}\delta_{x_i} \,P_X^{m - | I |} \,h_m$, for all choices of $I\subseteq [j]$. Consequently, $\left|(\pi_j h_m)(x_1,\ldots,x_j)\right|^2 \leq 2^{2j} \|h_m\|^2_\infty$, and the same bound also holds (almost surely) for the maximum of the $W_i$'s. Therefore, $\mathbb E \max_{i=1,\ldots,\lfloor N/j\rfloor} W_i^p \leq 2^{2jp}\|h_m\|^{2p}_\infty$
and $\mathbb E \left( {m\choose j}W_1 \right)^p \leq (2e)^{2jp}\left( \frac{m}{j}\right)^{jp} \|h_m\|_\infty^{2p}$.
Moreover, equation \eqref{eq:a02} in the proof of Theorem \ref{th:U-stat} implies that
$\mathbb E W_1 \leq \frac{\var(h_m)}{{m\choose j}}$. Therefore,
Rosenthal's inequality for nonnegative random variables (Fact \ref{fact:rosenthal}) entails that for $q\geq 2$,
\ml{
\mathbb E \left|\frac{{m\choose j}}{\lfloor N/j \rfloor} \sum_{i=1}^{\lfloor N/j \rfloor} W_i\right|^{q/2}
\leq C^{q/2}\left( \var^{q/2}(h_m) + \left( \frac{q}{2}\right)^{q/2} \left( \frac{j}{N}\right)^{q/2} \mathbb E \left( {m\choose j}\max_{i=1,\ldots,\lfloor N/j \rfloor}W_i \right)^{q/2} \right)
\\
\leq C^{q/2}\left( \var^{q/2}(h_m) + \left( \frac{q}{2}\right)^{q/2} \left( \frac{j}{N}\right)^{q/2} (2e)^{jq}\left( \frac{m}{j}\right)^{jq/2} \|h_m\|_\infty^{q} \right)
}
and
\begin{equation*}{
\mathbb E|V_{N,j}|^q \leq (Cq^{1/2})^{qj}\left( \var^{q/2}(h_m) \vee \left( \left( \frac{qj}{N}\right)^{1/2} \left( \frac{m}{j}\right)^{j/2} \|h_m\|_\infty \right)^q \right).
}
Markov's inequality therefore yields that
\begin{equation*}{
\pr{|V_{N,j}|\geq (C_1 q)^{j/2}\left( \var^{1/2}(h_m)\vee \left( \frac{qj}{N}\right)^{1/2} \left( \frac{m}{j}\right)^{j/2} \|h_m\|_\infty \right)} \leq e^{-q}.
}
Let $A(q) = (C_1 q)^{j/2}\var^{1/2}(h_m)$ and $B(q) = \|h_m\|_\infty\left( \frac{qj}{N}\right)^{1/2}\left( C_1 q^{1/2} \left( \frac{m}{j}\right)^{1/2} \right)^{j}$. If $t=A(q)\vee B(q)$, then $q = A^{-1}(t)\wedge B^{-1}(t)$.
We can solve the inequalities explicitly to get, after some algebra, that
\begin{equation}{
\label{eq:bound-f1}
\pr{|V_{N,j}|\geq t} \leq \exp{ -\min\left( \frac{1}{c}\left(\frac{t^2}{\var(h_m)}\right)^{{\frac{1}{j}}}, \frac{\left( \frac{t}{\|h_m\|_\infty} \sqrt{\frac N j}\right)^{\frac{2}{j+1}}}{ \left( \frac{cm}{j} \right)^{\frac{j}{j+1}}} \right)}.
}
The previous bound is mostly useful when $\frac{m}{j}$ is not too large. Now we will present a second way to estimate $\mathbb E \max_{i=1,\ldots,\lfloor N/j\rfloor} W_i^p$ that will yield much better inequalities for small values of $j$. The key technical element that we rely on is the following lemma that allows one to control the growth of moments of $W_1$ with respect to $m$. Define
\[
f_j(x_1,\ldots,x_j):=\mathbb E h_m(x_1,\ldots,x_j,X_{j+1},\ldots,X_m).
\]
\begin{lemma}
\label{lemma:variance}
Let the conditions of the theorem hold.
Then there exists $C=C(P)>0$ such that for any $p>2$,
\begin{equation*}{
\mathbb E \left| (\pi_j h_m)(X_1,\ldots,X_j)\right|^p \leq
C^{pj}\, \left\|\partial_{u_j}\ldots\partial_{u_1} f_j \right\|^p_\infty \mathbb E \left| X_1\ldots X_j\right|^p.
}
\end{lemma}
\noindent The proof of the lemma is outlined in section \ref{proof:variance}. As $\left\| \partial_{u_j}\ldots\partial_{u_1} f_j \right\|_\infty \leq \left(\frac{C_1(P)}{m}\right)^{j/2} j^{\gamma_1 j}$ by assumption, the bound of the lemma can be written as
\[
\mathbb E \left| (\pi_j h_m)(X_1,\ldots,X_j)\right|^p \leq
C_2^{pj} m^{-jp/2} j^{\gamma_1 pj} \left(\mathbb E \left| X_1\right|^{p}\right)^j.
\]
Recall that $\nu_k = \mathbb E^{1/k} |X_1|^k$ and that under the stated assumptions, $\nu_k \leq k^{\gamma_2}M$ for all integers $k\geq 2$ and some $\gamma_2,M>0$.
Therefore,
\begin{equation}{
\label{eq:moment01}
\mathbb E W_1^p \leq C^{2pj} j^{2\gamma_1 p\,j} m^{-pj}\nu_{2p}^{2pj}
\leq \left( C' M j^{\gamma_1} m^{-1/2} p^{\gamma_2} \right)^{2pj},
}
and consequently $\mathbb E \left( {m\choose j}W_1 \right)^p \leq \left( C' M j^{\gamma_1-1/2} p^{\gamma_2} \right)^{2pj}$.
The rest of the argument proceeds in a similar way as before. Recall again that $\mathbb E W_1 \leq \frac{\var(h_m)}{{m\choose j}}$. Rosenthal's inequality for nonnegative random variables (Fact \ref{fact:rosenthal}) implies that for $q\geq 2$,
\begin{equation*}{
\mathbb E \left|\frac{{m\choose j}}{\lfloor N/j \rfloor} \sum_{i=1}^{\lfloor N/j \rfloor} W_i\right|^{q/2}
\leq C^{q/2}\left( \var^{q/2}(h_m) + \left( \frac{q}{2}\right)^{q/2} \left( \frac{j}{N}\right)^{q/2} \mathbb E \left( {m\choose j}\max_{i=1,\ldots,\lfloor N/j \rfloor}W_i \right)^{q/2} \right).
}
With the inequality for $\mathbb E W_1^p$ in hand, the expectation $\mathbb E \left( {m\choose j}\max_{i=1,\ldots,\lfloor N/j \rfloor}W_i \right)^{q/2}$ can be upper bounded in two ways: first, trivially,
\[
\mathbb E \left( {m\choose j}\max_{i=1,\ldots,\lfloor N/j \rfloor}W_i \right)^{q/2}\leq \lfloor N/j \rfloor \mathbb E \left( {m\choose j} W_1 \right)^{q/2} \leq \lfloor N/j \rfloor \left( C_1M j^{\gamma_1-1/2} q^{\gamma_2} \right)^{qj}.
\]
On the other hand, for any identically distributed $\xi_1,\ldots,\xi_k$ and any $p>1$, $\mathbb E\max_{i=1,\ldots,k}|\xi_i| \leq k^{1/p} \max_{i=1,\ldots,k}\mathbb E^{1/p}|\xi_i|^p$. Choosing $\xi_i = \left({m\choose j} W_i\right)^{q/2}$ and $p = \lfloor \log(N/j) \rfloor +1$, we obtain the inequality
\[
\mathbb E \left( {m\choose j}\max_{i=1,\ldots,\lfloor N/j \rfloor}W_i \right)^{q/2}\leq
\left( \log(N/j)\right)^{\gamma_2 qj} \left( C_1M j^{\gamma_1-1/2} q^{\gamma_2} \right)^{qj}.
\]
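Indeed, with this choice, $\left(\lfloor N/j\rfloor\right)^{1/p}\leq e$, while the bound on $\mathbb E\left({m\choose j}W_1\right)^p$ above, applied with $pq/2$ in place of $p$, gives
\[
\mathbb E^{1/p}\left( {m\choose j} W_1\right)^{pq/2} \leq \left( C' M j^{\gamma_1-1/2} \left(\tfrac{pq}{2}\right)^{\gamma_2} \right)^{qj}
\leq \left( C_1 \log^{\gamma_2}(N/j)\, M j^{\gamma_1-1/2} q^{\gamma_2} \right)^{qj},
\]
and the constant factor $e$ is absorbed into $C_1^{qj}$.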
The second bound is better whenever $q\leq \frac{\log(N/j)}{\gamma_2 j\log\log(N/j)}$; therefore, we obtain the estimate
\begin{equation*}{
\mathbb E \left|\frac{{m\choose j}}{\lfloor N/j \rfloor} \sum_{i=1}^{\lfloor N/j \rfloor} W_i\right|^{q/2}
\leq C^{q/2}\left( \var^{q/2}(h_m) + \left( C_3^j\left( \frac{qj}{N}\right)^{1/2}
\left(\log^{\gamma_2 }(N/j) Mj^{\gamma_1-1/2} q^{\gamma_2}\right)^j \right)^q \right)
}
and
\begin{equation*}{
\mathbb E|V_{N,j}|^q \leq (Cq^{1/2})^{qj}\left( \var^{q/2}(h_m) \vee \left( \left( \frac{qj}{N}\right)^{1/2}
\left(\log^{\gamma_2 }(N/j) Mj^{\gamma_1-1/2} q^{\gamma_2}\right)^j \right)^q \right)
}
that we will use for $2\leq q\leq \frac{\log(N/j)}{\gamma_2 j}$, while for larger values of $q$, $(N/j)^{1/q}\leq e^{\gamma_2 j}$ and
\begin{equation*}{
\mathbb E|V_{N,j}|^q
\leq (Cq^{1/2})^{qj}\left( \var^{q/2}(h_m) \vee \left( \left( \frac{qj}{N}\right)^{1/2} \left(Mj^{\gamma_1-1/2} q^{\gamma_2}\right)^j \right)^q \right).
}
Markov's inequality therefore yields that for small values of $q$ (that is, whenever $2\leq q \leq \frac{\log(N/j)}{\gamma_2 j}$),
\begin{equation*}{
\pr{|V_{N,j}|\geq (C q)^{j/2}\left( \var^{1/2}(h_m)\vee\left( \frac{qj}{N}\right)^{1/2}\left( \log^{\gamma_2 }(N/j) M j^{\gamma_1-1/2} q^{\gamma_2} \right)^{j} \right)} \leq e^{-q}.
}
Let $A(q) = (C q)^{j/2}\var^{1/2}(h_m)$ and $B(q) = \left( \frac{qj}{N}\right)^{1/2}\left( Cq^{1/2}\log^{\gamma_2 }(N/j) M j^{\gamma_1-1/2} q^{\gamma_2} \right)^{j}$. If $t=A(q)\vee B(q)$, then $q = A^{-1}(t)\wedge B^{-1}(t)$.
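In this regime, $B(q) = \sqrt{j/N}\left( C\log^{\gamma_2}(N/j)\, M j^{\gamma_1-1/2}\right)^{j} q^{\frac{1+j(2\gamma_2+1)}{2}}$, so that
\[
B^{-1}(t) = \left( \frac{t\sqrt{N/j}}{\left(C \log^{\gamma_2}(N/j)\, M j^{\gamma_1-1/2}\right)^j}\right)^{\frac{2}{1+j(2\gamma_2+1)}},
\]
which is the source of the exponent $\frac{2}{1+j(2\gamma_2+1)}$ appearing below.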
Solving these inequalities explicitly, we get, after some algebra, that
\begin{equation*}{
\pr{|V_{N,j}|\geq t}\leq \exp{ -\min\left( \frac{1}{c}\left(\frac{t^2}{\var(h_m)}\right)^{\frac{1}{j}}, \left( \frac{t\sqrt{N/j}}{\left(c \log^{\gamma_2}(N/j) M j^{\gamma_1-1/2}\right)^j}\right)^{\frac{2}{1+j(2\gamma_2+1)}} \right) }
}
for values of $t$ satisfying $2\leq \min\left( \frac{1}{c}\left(\frac{t^2}{\var(h_m)}\right)^{{\frac{1}{j}}}, \left( \frac{t\sqrt{N/j}}{\left(c \log^{\gamma_2}(N/j) M j^{\gamma_1-1/2}\right)^j}\right)^{\frac{2}{1+j(2\gamma_2+1)}} \right) \leq \frac{\log(N/j)}{\gamma_2 j}$.
Similarly, for $q\geq \max\left(2, \frac{\log(N/j)}{\gamma_2 j}\right)$, the previously established bounds yield that
\begin{equation*}{
\pr{|V_{N,j}|\geq (Cq)^{j/2}\left( \var^{1/2}(h_m) \vee \left( \frac{qj}{N}\right)^{1/2} \left(Mj^{\gamma_1-1/2} q^{\gamma_2}\right)^j \right)} \leq e^{-q},
}
or equivalently
\begin{equation}{
\label{eq:bound-f2}
\pr{|V_{N,j}|\geq t}
\leq \exp{ -\min\left( \frac{1}{c}\left(\frac{t^2}{\var(h_m)}\right)^{\frac{1}{j}}, \left( \frac{t\sqrt{N/j}}{\left(c M j^{\gamma_1-1/2}\right)^j}\right)^{\frac{2}{1+j(2\gamma_2+1)}} \right) }
}
whenever $ \min\left( \frac{1}{c}\left(\frac{t^2}{\var(h_m)}\right)^{\frac{1}{j}}, \left( \frac{t\sqrt{N/j}}{\left(c M j^{\gamma_1-1/2}\right)^j}\right)^{\frac{2}{1+j(2\gamma_2+1)}} \right) \geq \max\left(2, \frac{\log(N/j)}{\gamma_2 j}\right)$.
Combining inequalities \eqref{eq:bound-f1} and \eqref{eq:bound-f2} yields the final result.
\subsection{Proof of Lemma \ref{lemma:variance}.}
\label{proof:variance}
Recall that $f_j(x_1,\ldots,x_j)=\mathbb E h_m\left(\frac{x_1}{\sqrt m},\ldots,\frac{x_j}{\sqrt m},\frac{X_{j+1}}{\sqrt m},\ldots,\frac{X_m}{\sqrt m}\right)$
where $j<m$. It is easy to see from the definition of $\pi_j$ that $(\pi_j h_m)(x_1,\ldots,x_j) = (\pi_j f_j)(x_1,\ldots,x_j)$.
Next, observe that for any function $g:\mathbb R^{j-1}\mapsto \mathbb R$ of $j-1$ variables such that $\mathbb Eg^2(X_1,\ldots,X_{j-1})<\infty,$
$\pi_j g = 0$ $P^{j-1}$-almost everywhere. Indeed, this follows immediately from the definition \eqref{eq:pi} of the operator $\pi_j$ since $g$ is a constant when viewed as a function of $y_j$.
Based on this fact, it is easy to see that $f_j(x_1,\ldots,x_j)$ and $f_j(x_1,\ldots,x_j) - f_j|_{x_1=0}(x_2,\ldots,x_j)$, where $f_j|_{x_1=0}(x_2,\ldots,x_j) := f_j(0,x_2,\ldots,x_j)$, are mapped to the same function by $\pi_j$. In particular, $(\pi_j h_m)(x_1,\ldots,x_j) = \left(\pi_j (f_j - f_j|_{x_1=0})\right)(x_1,\ldots,x_j)$.
Moreover,
\begin{equation*}{
f_j(x_1,\ldots,x_j) - f_j|_{x_1=0}(x_2,\ldots,x_j) = \int_0^{x_1} \partial_{u_1} f_j(u_1,x_2,\ldots,x_j)\, du_1.
}
Next, we repeat the same argument with $f_j$ replaced by
\[
f_{j,2}(x_2,\ldots,x_j; u_1):=\partial_{u_1} f_j(u_1,x_2,\ldots,x_j)
\]
and note that
\begin{equation*}{
f_{j,2}(x_2,\ldots,x_j;u_1) - f_{j,2}|_{x_2=0}(x_3,\ldots,x_j;u_1)
= \int_{0}^{x_2} \partial_{u_2} f_{j,2}(u_2,x_3,\ldots,x_j;u_1) du_2.
}
The expression $\int_0^{x_1} f_{j,2} |_{x_2=0}(x_3,\ldots,x_j;u_1) du_1$ is a function of $j-1$ variables, hence $\pi_j$ maps it to $0$ so that
\begin{equation*}{
(\pi_j h_m)(x_1,\ldots,x_j)
= \pi_j \left( \int_0^{x_1}\int_0^{x_2} \partial_{u_2} f_{j,2}(u_2,x_3,\ldots,x_j;u_1) du_2 du_1 \right).
}
Iterating this process, we arrive at the expression
\begin{equation}{
\label{eq:f03}
(\pi_j h_m)(x_1,\ldots,x_j)
= \pi_j \left( \int_0^{x_1}\ldots\int_0^{x_j} \partial_{u_j}\ldots\partial_{u_1} f_j(u_1,\ldots,u_j) du_j\ldots du_1\right).
}
\begin{comment}
Given $l\geq 1$, let $\Phi_l(y)$ be the distribution function of $\frac{1}{\sqrt l}\sum_{j=1}^l X_j$ and $\phi_l(y)$ - the corresponding density function. Recalling that $\rho'(x) = I\{ x\geq 0\} - I\{x<0\}$, we deduce that
\ml{
f_1(u_1,\ldots,u_j) = \mathbb E \rho'\left( \frac{1}{\sqrt m}\sum_{i=1}^j u_i + \sqrt{\frac{m-j}{m}}\frac{\sum_{i=j+1}^m X_i}{\sqrt{m-j}} - t\sqrt{\frac m N}\right)
\\
= 2 \Phi_{m-j}\left( \frac{1}{\sqrt{m-j}}\sum_{i=1}^j u_i - t\sqrt{\frac{m}{m-j}}\sqrt{\frac m N}\right) - 1.
}
Therefore,
\begin{equation*}{
\partial_{u_j}\ldots\partial_{u_1} f_1(u_1,\ldots,u_j) = \frac{2}{(m-j)^{j/2}}\phi^{(j-1)}_{m-j}\left( \frac{1}{\sqrt{m-j}}\sum_{i=1}^j u_i - t\sqrt{\frac{m}{m-j}}\sqrt{\frac m N}\right).
}
\end{comment}
\noindent Next, observe that
\ml{
(\pi_j h_m)(x_1,\ldots,x_j)=(\delta_{x_1} - P_X)\times\ldots\times(\delta_{x_j}-P_X)\times P_X^{m-j} h_m
\\
= \mathbb E_{\tilde X} \left[ (\delta_{x_1} - \delta_{\tilde X_1})\times\ldots\times(\delta_{x_j}-\delta_{\tilde X_j})\times P_X^{m-j} h_m\right],
}
where $\tilde X_1,\ldots,\tilde X_j$ are i.i.d. with the same law as $X$, and independent of $X_1,\ldots,X_N$. Therefore,
$(\pi_j h_m)(x_1,\ldots,x_j)$ is a linear combination of $2^j$ terms of the form $\mathbb E_{\tilde X}\left(\prod_{i\in I}\delta_{x_i}\prod_{i\in I^c}\delta_{\tilde X_i} \,P_X^{m-j} \,h_m \right)$, for all choices of $I\subseteq [j]$. Since $X_1,\ldots,X_j, \tilde X_1,\ldots,\tilde X_j$ are i.i.d. and in view of the convexity of the function $x\mapsto |x|^p$ for $p\geq 1$,
\ml{
\mathbb E \left| (\pi_j h_m)(X_1,\ldots,X_j)\right|^p \leq 2^{(p-1)j}
\mathbb E \left|\int_0^{X_1}\ldots\int_0^{X_j} \partial_{u_j}\ldots\partial_{u_1} f_j(u_1,\ldots,u_j) \,du_j\ldots du_1\right|^p
\\
\leq 2^{(p-1)j} \left\| \partial_{u_j}\ldots\partial_{u_1} f_j \right\|^p_\infty \mathbb E \left| X_1\ldots X_j\right|^p.
}
\subsection{Proof of Lemma \ref{lemma:deriv-bound}.}
\label{proof:deriv-bound}
The proof proceeds using the standard Fourier-analytic tools. Let $\widehat \phi_1:=\mathcal F[\phi_1]$ be the Fourier transform of $\phi_1$, whence $\mathcal F\left[\phi_{m-j}\right](t) = \left( \widehat \phi_1\left(\frac{t}{\sqrt{m-j}}\right)\right)^{m-j}$. Therefore,
\begin{equation*}{
\phi_{m-j}^{(j-1)}(t) = \frac{1}{2\pi}\int_\mathbb R \exp{-itx} (ix)^{j-1}\left( \widehat \phi_1\left(\frac{x}{\sqrt{m-j}}\right)\right)^{m-j}dx
}
and
$\left\| \phi_{m-j}^{(j-1)} \right\|_\infty \leq \frac{1}{2\pi} \int_\mathbb R |x|^{j-1} \left| \widehat \phi_1\left(\frac{x}{\sqrt{m-j}}\right)\right|^{m-j}dx
= \frac{(m-j)^{j/2}}{2\pi}\int_\mathbb R |x|^{j-1} \left| \widehat\phi_1(x)\right|^{m-j} dx$.
As $\left| \widehat\phi_1(x)\right| \leq \frac{C_1}{(1+|x|)^\delta}$ by assumption, the integral is finite when $\delta(m-j)>j$ (in particular, this inequality holds when $m$ is large enough and $j=o(m)$ as $m\to\infty$).
To get an explicit bound, we will estimate the integral over $[-\eta,\eta]$ and $\mathbb R \setminus [-\eta,\eta]$ separately, for a specific choice of $\eta>0$. To this end, observe that $\widehat\phi_1(x) = \psi_\sigma(x)+o(x^2)$ where $\psi_\sigma(x)=\exp{-\frac{\sigma^2 x^2}{2}}$ is the characteristic function of the normal law $N(0,\sigma^2)$. Therefore, there exists $\eta>0$ such that for all $|x|\leq \eta$, $\left| \widehat\phi_1(x)\right| \leq \exp{-\frac{\sigma^2 x^2}{4}}$, and
\ml{
(m-j)^{j/2}\int_{-\eta}^\eta |x|^{j-1} \left| \widehat\phi_1(x)\right|^{m-j} dx
\leq
(m-j)^{j/2}\int_{\mathbb R} |x|^{j-1} \exp{-\frac{\sigma^2 x^2 (m-j)}{4}} dx
\\
= \int_\mathbb R |y|^{j-1} \exp{-\frac{\sigma^2 y^2}{4}}dy = \frac{2^{j}}{\sigma^{j}} \Gamma\left( \frac{j}{2}\right)
}
where we used the exact expression for the absolute moments of the normal distribution. As
$\Gamma(x+1)\leq C_2 \sqrt{2\pi x}\left(\frac{x}{e} \right)^{x}$ for all $x\geq 1$ and an absolute constant $C_2$ large enough, $ \frac{2^{j}}{\sigma^{j}} \Gamma\left( \frac{j}{2}\right) \leq \frac{C_2}{\sigma^{j}} \left(\frac{2j}{e}\right)^{j/2}$. At the same time,
\ml{
(m-j)^{j/2}\int_{\mathbb R\setminus [-\eta,\eta]} |x|^{j-1} \left| \widehat\phi_1(x)\right|^{m-j} dx
= (m-j)^{j/2}\int_{\mathbb R\setminus [-(2C_1)^{2/\delta},(2C_1)^{2/\delta}]} |x|^{j-1} \left| \widehat\phi_1(x)\right|^{m-j} dx
\\
+ (m-j)^{j/2}\int_{[-(2C_1)^{2/\delta},(2C_1)^{2/\delta}]\setminus [-\eta,\eta]} |x|^{j-1} \left| \widehat\phi_1(x)\right|^{m-j} dx
}
where $C_1\geq 1$ is a constant such that $\left| \widehat\phi_1(x)\right| \leq \frac{C_1}{(1+|x|)^\delta}$. The first term can be estimated via
\ml{
(m-j)^{j/2}\int_{\mathbb R\setminus [-(2C_1)^{2/\delta},(2C_1)^{2/\delta}]} |x|^{j-1} \left| \widehat\phi_1(x)\right|^{m-j} dx
\\
\leq
C_1^{m-j} (m-j)^{j/2} \int_{\mathbb R\setminus [-(2C_1)^{2/\delta},(2C_1)^{2/\delta}]} \frac{|x|^{j-1}}{(1+|x|)^{\delta(m-j)}} dx
\\
\leq \frac{2 C_1^{m-j} (m-j)^{j/2}}{\delta (m - j) - j } \frac{1}{(2C_1)^{2(m - j) - 2j/\delta }}.
}
Whenever $m > 2j+2j/\delta $, we can bound the last expression from above by
$C_3 m^{j/2} 2^{-m}$. Finally, as $\sup_{|x|>\eta}| \widehat \phi_1(x) | \leq 1-\gamma$ for some $0<\gamma<1$,
\begin{equation*}{
(m-j)^{j/2}\int_{[-(2C_1)^{2/\delta},(2C_1)^{2/\delta}]\setminus [-\eta,\eta]} |x|^{j-1} \left| \widehat\phi_1(x)\right|^{m-j} dx
\leq 2(m-j)^{j/2} (1-\gamma)^{m-j} \frac{(2C_1)^{2j/\delta}}{j}.
}
Putting the estimates together, we deduce that
\ml{
\left\| \phi_{m-j}^{(j-1)} \right\|_\infty \leq \frac{(m-j)^{j/2}}{2\pi}\int_{\mathbb R} |x|^{j-1} \left| \widehat\phi_1(x)\right|^{m-j} dx
\\
\leq \frac{C_2}{\sigma^{j}} \left(\frac{2j}{e}\right)^{j/2} + C_3 m^{j/2} 2^{-m} + C_4\left( (2C_1)^{4/\delta}m \right)^{j/2} (1-\gamma)^{m-j}.
}
Whenever $j = o(m/\log m)$, the last two terms in the sum above are negligible so that for $m$ large enough,
\begin{equation*}{
\left\| \phi_{m-j}^{(j-1)} \right\|_\infty \leq \frac{C_5}{\sigma^{j}} \left(\frac{2j}{e}\right)^{j/2},
}
as claimed.
\begin{comment}
\noindent Result of the lemma implies that
\ml{
\mathbb E \left| (\pi_j h)(X_1,\ldots,X_j)\right|^p \leq \frac{2^{p(j+1)}}{\left( m-j \right)^{pj/2}} \, \left\| \phi_{m-j}^{(j-1)} \right\|^p_\infty \mathbb E\left| X_1\ldots X_j\right|^p
\\
\leq C_6^p \left( \frac{2\sqrt 2}{\sigma} \right)^{pj}\left( \frac{j}{m}\right)^{pj/2}\,\mathbb E\left| X_1\ldots X_j\right|^p,
}
hence the result follows.
\end{comment}
| {
"timestamp": "2022-06-22T02:45:34",
"yymm": "2202",
"arxiv_id": "2202.11842",
"language": "en",
"url": "https://arxiv.org/abs/2202.11842",
"abstract": "This paper addresses the following question: given a sample of i.i.d. random variables with finite variance, can one construct an estimator of the unknown mean that performs nearly as well as if the data were normally distributed? One of the most popular examples achieving this goal is the median of means estimator. However, it is inefficient in a sense that the constants in the resulting bounds are suboptimal. We show that a permutation-invariant modification of the median of means estimator admits deviation guarantees that are sharp up to $1+o(1)$ factor if the underlying distribution possesses more than $\\frac{3+\\sqrt{5}}{2}\\approx 2.62$ moments and is absolutely continuous with respect to the Lebesgue measure. This result yields potential improvements for a variety of algorithms that rely on the median of means estimator as a building block. At the core of our argument is are the new deviation inequalities for the U-statistics of order that is allowed to grow with the sample size, a result that could be of independent interest.",
"subjects": "Statistics Theory (math.ST); Probability (math.PR)",
"title": "U-statistics of growing order and sub-Gaussian mean estimators with sharp constants",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126506901791,
"lm_q2_score": 0.7248702880639791,
"lm_q1q2_score": 0.7094397210376507
} |
https://arxiv.org/abs/2010.10072 | Starlike Functions associated with a Petal Shaped Domain | This paper deals with some radius results and inclusion relations that are established for functions in a newly defined subclass of starlike functions associated with a petal shaped domain. | \section{Introduction}
Let the open unit disk $\{z\in\mathbb{C}:|z|<1\}$ be represented by $\mathbb{D}$ and denote the class of all analytic functions in $\mathbb{D}$ by $\mathcal{H}$. Consider $\mathcal{A}_n$ as the class of analytic functions $f$ in $\mathbb{D}$ represented by
\begin{equation}\label{A_n}
f(z)=z+a_{n+1}z^{n+1}+a_{n+2}z^{n+2}+\cdots.
\end{equation}
In particular, denote $\mathcal{A}_1:=\mathcal{A}$ and let $\mathcal{S}$ be the subclass of $\mathcal{A}$ consisting of all univalent functions $f(z)$ in $\mathbb{D}$. Let $g, h$ be two analytic functions. If there exists a Schwarz function $\omega$, satisfying $\omega(0)=0$ and $|\omega(z)|\leq|z|$, such that $g(z)=h(\omega(z))$, then $g$ is said to be subordinate to $h$, written $g \prec h$. If $h$ is univalent, then $g\prec h$ iff $g(0)=h(0)$ and $g(\mathbb{D})\subset h(\mathbb{D})$. Ma and Minda \cite{minda94} introduced the univalent functions $\psi$ satisfying $\RE{\psi(\mathbb{D})}>0$, with $\psi(\mathbb{D})$ starlike with respect to $\psi(0)=1$, $\psi'(0)>0$ and the domain $\psi(\mathbb{D})$ symmetric about the real axis. Further, they defined the general subclasses of starlike and convex functions, respectively, as follows:
\begin{equation}
\mathcal{S}^*(\psi):=\left\{f\in\mathcal{S}: zf'(z)/f(z)\prec\psi(z)\right\}
\end{equation}
and
\begin{equation}
\mathcal{K}(\psi):= \left\{f\in\mathcal{S}: 1 + zf''(z)/f'(z) \prec \psi(z)\right\}.
\end{equation}
For different choices of $\psi$, many subclasses of $\mathcal{S}^*$ and $\mathcal{K}$ can be obtained. For example, the notable classes of Janowski starlike and convex functions \cite{janow} are represented by $\mathcal{S}^*[C,D] := \mathcal{S}^*((1+Cz)/(1+Dz))$ and $\mathcal{K}[C,D]:= \mathcal{K}((1+Cz)/(1+Dz))$ for $-1 \leq D < C \leq 1$, respectively. Further, $\mathcal{S}^*_{\alpha}:= \mathcal{S}^*[1-2\alpha,-1]$ and $\mathcal{K}_{\alpha}:= \mathcal{K}[1-2\alpha,-1]$ represent the classes of starlike and convex functions of order $\alpha \in [0,1)$, respectively. Note that $\mathcal{S}^*:= \mathcal{S}^*_0$ and $\mathcal{K}:=\mathcal{K}_0$ represent the well-known classes of starlike and convex functions, respectively. We denote $\mathcal{SS}^*(\gamma):= \mathcal{S}^*(((1+z)/(1-z))^\gamma)$ and $\mathcal{SK}(\gamma):= \mathcal{K}(((1+z)/(1-z))^\gamma)$ representing the classes of strongly starlike and strongly convex functions of order $\gamma \in (0,1]$, respectively.
Recall that for two subfamilies $G_1$ and $G_2$ of $\mathcal{A}$, we say that $r_0$ is the $G_1$-radius for the class $G_2$ if $r_0$ is the largest number such that $r^{-1}g(rz)\in G_1$ for every $0 < r \leq r_0$ and every $g \in G_2$. Moreover, the starlike classes $\mathcal{S}^*(\psi)$ for different choices of $\psi(z)$ were considered by many authors, whose works examined the geometrical properties, radius results and coefficient estimates of the functions of their respective classes. Sok\'{o}\l \; and Stankiewicz \cite{sokol96,sokol09} considered the class $\mathcal{S}^*_{L}:= \mathcal{S}^*(\sqrt{1+z})$ and Mendiratta $et\; al.$ \cite{mendi} worked on the class $\mathcal{S}^*_{RL}:= \mathcal{S}^*(\sqrt{2} - (\sqrt{2}-1) ((1-z)/((1+2(\sqrt{2}-1)z)))^{1/2})$. Sharma $et\; al.$ \cite{naveen14} studied the class $\mathcal{S}^*_{C}:= \mathcal{S}^*(1+4z/3+2z^2/3)$ while the class $\mathcal{S}^*_s:=\mathcal{S}^*(1+\sin{z})$ was examined by Cho $et\; al.$ \cite{sinefun}. The classes $\mathcal{S}^*_{e}:=\mathcal{S}^*(e^z)$ and $\Delta^*:= \mathcal{S}^*(z+\sqrt{1+z^2})$ were considered by Mendiratta $et\; al.$ \cite{mendi2exp} and Raina $et\; al.$ \cite{raina}, respectively. Kargar $et\; al.$ \cite{kargarbooth} introduced and studied the class $\mathcal{BS}^*(\alpha) := \mathcal{S}^*(1+z/(1-\alpha z^2)), \; \alpha \in [0,1]$, associated with the Booth lemniscate which was also investigated by Cho $et\;al.$ \cite{chobooth}. Some more recent work on radius problems can be found in \cite{aghalary, chobell, bano, priyanka, swaminathan}.
Motivated by the classes defined in \cite{sokol96,kargarbooth,mendi,naveen14,mendi2exp,sinefun,raina}, we consider the petal shaped region ${\Omega}_{\rho}:=\{ w\in \mathbb{C}: |\sinh(w - 1)| < 1 \}$, characterised via the function $\rho(z)=1+\sinh^{-1}(z)$, to define our class. Clearly, $\rho(z)$ is a Ma-Minda function. See Figure {\ref{fig:incl_rel}} for its boundary curve $\gamma_0$, which is petal shaped. Note that $\sinh^{-1}(z)$ is a multivalued function with branch cuts along the line segments $(-i\infty,-i) \cup (i,i\infty)$ on the imaginary axis,
and hence it is analytic in $\mathbb{D}$. Now we introduce a new class of starlike functions
\begin{equation}
\label{func_def}
\mathcal{S}^*_{\rho}:=\left\{f\in\mathcal{A}: \frac{zf'(z)}{f(z)} \prec 1+\sinh^{-1}(z)\right\} \quad (z\in\mathbb{D}),
\end{equation}
which is associated with the petal-shaped domain $\rho(\mathbb{D})$. From the above definition, we deduce that $f\in \mathcal{S}^*_{\rho} $ iff there exists an analytic function $q(z)\prec \rho(z)$ such that
\begin{equation}\label{gen_int_rep}
f(z) = z \exp\left(\int^z_0 \frac{q(t)-1}{t}dt\right).
\end{equation}
Table \ref{func_exa_table} presents some functions in the class $\mathcal{S}^*_{\rho}$ where $q_j \prec \rho$.
\begin{table}[ht]
\begin{center}
\begin{tabular}{ccc}
\hline
$j$ & $q_j(z)$ & $f_j(z)$\\
\hline
$1$ & $1 + z/5$ & $z \exp(z/5)$\\
$2$ & $(5+2z)/(5+z)$ & $z + z^2/5$\\
$3$ & $(7+4z)/(7+z)$ & $z(1+z/7)^3$\\
\hline
\end{tabular}
\end{center}
\caption{Some functions in the class $\mathcal{S}^*_{\rho}$}
\label{func_exa_table}
\end{table}
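For instance, for the entry $j=2$ of the table, $q_2(t)-1 = t/(5+t)$, so that
\[
\int^z_0 \frac{q_2(t)-1}{t}\,dt = \int^z_0 \frac{dt}{5+t} = \ln\left(\frac{5+z}{5}\right)
\quad\text{and hence}\quad
f_2(z) = z\,\frac{5+z}{5} = z+\frac{z^2}{5};
\]
the remaining rows are verified similarly.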
Since $\rho$ is univalent in $\mathbb{D},\, q_j(\mathbb{D}) \subset \rho(\mathbb{D})$ and $q_j(0)=\rho(0) \; (j=1, 2, 3)$, it follows that each $q_j \prec \rho$. Thus the functions $f_j(z)$ obtained from \eqref{gen_int_rep} are in the class $\mathcal{S}^*_{\rho}$. In particular, if we choose
\[
q(z)=1+\sinh^{-1}(z) = 1+z-\dfrac{z^3}{6}+\dfrac{3z^5}{40}-\dfrac{5z^7}{112}+\cdots,
\]
then \eqref{gen_int_rep} gives
\begin{equation}\label{func_int_rep}
f_0(z) = z \exp\left(\int^z_0\frac{\sinh^{-1}(t)}{t}dt\right) = z + z^2 + \frac{z^3}{2} + \frac{z^4}{9} - \frac{z^5}{72} - \frac{z^6}{225}\cdots,
\end{equation}
which often acts as the extremal function for the class $\mathcal{S}^*_{\rho}$ yielding sharp results.
\begin{remark}
Note that $\sinh^{-1}(z) = \ln(z+\sqrt{1+z^2})$. Let $w=zf'(z)/f(z)$, where $f \in \mathcal{S}^*_{\rho}$. Then the class $\mathcal{S}^*_{\rho}$ can be alternatively represented by
$\exp(w-1) \prec z+\sqrt{1+z^2}$, where $z+\sqrt{1+z^2}$ represents the Crescent shaped domain \cite{raina}. Thus, there exists an exponential relation among the functions in the classes $\mathcal{S}^*_{\rho}$ and $\Delta^*$.
\end{remark}
In the present investigation, the geometrical properties of the function $1+\sinh^{-1}(z)$ are studied and certain inclusion properties as well as radius problems are established for the class $\mathcal{S}^*_{\rho}$.
\section{Properties of the function $1+\sinh^{-1}(z)$}
The current section deals with the study of some geometric properties of the function $1+\sinh^{-1}(z)$.
\begin{theorem}\label{convexfunction}
The function $\rho(z)=1+\sinh^{-1}(z)$ is a convex univalent function.
\end{theorem}
\begin{proof}
Let $h(z) = \sinh^{-1}(z)$. Clearly, $h(0)=0$. Since $h'(z)=1/\sqrt{1+z^2}$ and $\sqrt{1+z^2} \prec \sqrt{1+z} \in \mathcal{P}$, where $\mathcal{P}$ is the Carath\'{e}odory class, we have $1/\sqrt{1+z^2} \in \mathcal{P}$, which implies that $\RE h'(z) > 0$. Hence $\rho$ is univalent.
Now a calculation yields
\begin{equation*}
1+\frac{zh''(z)}{h'(z)}=\frac{1}{1+z^2}.
\end{equation*}
Moreover, since
\begin{equation*}
\frac{1}{1+z^2}\prec \frac{1}{1+z} \in \mathcal{P},
\end{equation*}
we conclude that $\RE(1+zh''(z)/h'(z))>0$, which implies that $h$ (and thus $\rho$) is a convex univalent function.
\qed \end{proof}
\begin{remark}\label{real_symmetry}
Note that $\rho'(0)>0$ and the function $\varphi(z)= z+\sqrt{1+z^2}$ satisfies $\varphi(\bar{z})=\overline{\varphi(z)}$. Therefore, $\rho(\bar{z})=\overline{\rho(z)}$ and hence, the domain $\Omega_{\rho}=\rho(\mathbb{D})$ is symmetric about the real axis.
\end{remark}
\begin{theorem}\label{imag_symmetry}
The domain $\Omega_{\rho}$ is symmetric about the line $\RE(w)=1$.
\end{theorem}
\begin{proof}
Since $\Omega_{\rho}$ is symmetric about the real axis, it suffices to consider $0 \leq \theta \leq \pi/2$. Recall that, for $f \in \mathcal{A}$, the image of the circle $|z|=r$ under $f$ is symmetric about the imaginary axis if $\RE(f(re^{i\theta})) = -\RE(f(re^{i(\pi-\theta)}))$ and $\IM(f(re^{i\theta})) = \IM(f(re^{i(\pi-\theta)}))$.
Now let $h(z) = \sinh^{-1}(z) = \ln(z+\sqrt{1+z^2})$. Then $\IM(h(z)) = \arg(z+\sqrt{1+z^2})$. For $z = re^{it}$, $t \in [0,\pi]$ and fixed $r \in (0,1)$, we have the following expressions: for $t = \theta$,
\[
\begin{array}{ll}
I_1 &= \arg\left(r(\cos\theta + i\sin\theta) + \sqrt{1+r^2(\cos(2\theta) + i\sin(2\theta))}\right)\\
&= \arg\left(z+\sqrt{1+z^2}\right),
\end{array}
\]
and for $t = \pi-\theta$,
\[
\begin{array}{ll}
I_2 &= \arg\left(r(\cos(\pi-\theta) + i\sin(\pi-\theta)) + \sqrt{1+r^2(\cos(2(\pi-\theta)) + i\sin(2(\pi-\theta)))}\right)\\
&= \arg\left(r(-\cos\theta + i\sin\theta) + \sqrt{1+r^2(\cos(2\theta) - i\sin(2\theta))}\right)\\
&= \arg\left(-\overline{z}+\sqrt{1+\overline{z}^2}\right).
\end{array}
\]
Now let us consider $(z+\sqrt{1+z^2})/({-\overline{z}+\sqrt{1+\overline{z}^2}})$. On rationalising the denominator, we get
\[
\displaystyle{\frac{z+\sqrt{1+z^2}}{-\overline{z}+\sqrt{1+\overline{z}^2}}} = \displaystyle{\frac{(z+\sqrt{1+z^2})(-z+\sqrt{1+z^2})}{(-\overline{z}+\sqrt{1+\overline{z}^2})(-z+\sqrt{1+z^2})}} = \displaystyle{\frac{1}{|-z+\sqrt{1+z^2}|^2}} = k > 0,
\]
where $k$ is some real positive constant. Thus,
\[
\begin{array}{ll}
&\arg\left( \displaystyle{\frac{z+\sqrt{1+z^2}}{-\overline{z}+\sqrt{1+\overline{z}^2}}} \right) = \arg(k) = 0\\
\Rightarrow & \arg\left(z+\sqrt{1+z^2}\right) = \arg\left(-\overline{z}+\sqrt{1+\overline{z}^2}\right)\\
\Rightarrow & I_1 = I_2.
\end{array}
\]
Similarly, $\RE(h(re^{i\theta})) = -\RE(h(re^{i(\pi-\theta)}))$ for $0 \leq \theta \leq \pi/2$. Hence, $h(\mathbb{D})$ is symmetric about the imaginary axis and thus, by translation, $\rho(\mathbb{D})$ is symmetric about the line $\RE(w) = 1$.
\qed \end{proof}
Now using Theorem~\ref{imag_symmetry}, we obtain the next result:
\begin{corollary}\label{g-disk}
The disk $\{w: |w-1|\leq \sinh^{-1}(r)\}$ is contained in $\rho(|z|\leq r)$ and is maximal.
\end{corollary}
\begin{proof}
Since $\min_{|z|=r} |\sinh^{-1}(z)| = |\sinh^{-1}(-r)|=\sinh^{-1}(r)$, the conclusion follows at once.
\qed \end{proof}
\begin{figure}[t]
\centering
\includegraphics[scale=0.55]{modulus.pdf}
\caption{ $\rho(\mathbb{D})$ lies in the annular region bounded between the circles $C1$ and $C2$.}
\label{fig:modulus}
\end{figure}
\begin{theorem}\label{geo_bounds}
We find that the following properties hold for $\rho(z)=1+\sinh^{-1}(z)$:
\begin{enumerate}[(i)]
\item $\rho(-r) \leq \RE \rho(z) \leq \rho(r) \quad (|z|\leq r<1)$;
\item $|\IM \rho(z)| \leq \pi/2 \quad (|z|\leq 1)$;
\item $\rho(-r) \leq |\rho(z)| \leq \rho(r)\quad (|z|\leq r<1)$;
\item $|\arg \rho(z)| \leq \tan^{-1}(1/t)$ where $t= \dfrac{4}{\pi} \sqrt{\sinh^{-1}(1)(1-\sinh^{-1}(1))}$.
\end{enumerate}
\end{theorem}
\begin{proof} (i) Since $\rho(z)$ is convex and typically real, the value of $\RE \rho(z)$ falls between $\lim_{\theta\rightarrow 0}\rho(re^{i\theta})$ and $\lim_{\theta\rightarrow \pi}\rho(re^{i\theta})$; thus the result follows.
(ii) Using Theorem \ref{imag_symmetry}, it suffices to take $\theta \in [0,\pi/2]$. Then the inequality follows by letting $r$ tend to $1^-$ and observing that the function
\[
\IM \rho(z) = \arg \left(r \cos (\theta) + \sqrt{1+r^2 (\cos (2\theta)+i \sin (2\theta))}+i r\sin (\theta)\right)
\]
is strictly increasing in the interval $[0,\pi/2]$, and hence the result follows at once. (iii) The radially farthest and nearest points of $\rho(\mathbb{D})$ from the origin are respectively $B$ and $A$ (see Figure \ref{fig:modulus}), and therefore the result holds. Moreover, we observe that these points $A$ and $B$ lie on the real line, and hence the bounds of $|\rho(z)|$ and $\RE \rho(z)$ coincide.
The proof of (iv) is evident from the proof of Theorem \ref{incl_rel}(iii), so it is skipped here.
\qed \end{proof}
Next we have the following important result:
\begin{lemma}
\label{disk_lem}
For $1-\sinh^{-1}(1)<a<1+\sinh^{-1}(1)$, let $r_a$ be given by
\[
r_a=
\left\{
\begin{array}{lr}
a-(1-\sinh^{-1}(1)), & 1-\sinh^{-1}(1)<a\leq1; \\
1+\sinh^{-1}(1)-a, & 1\leq a<1+\sinh^{-1}(1).
\end{array}
\right.
\]
Then
$
\{w : |w-a|<r_a\} \subset \Omega_{\rho}.
$
\end{lemma}
We omit the proof of Lemma \ref{disk_lem} as it directly follows from Theorem~\ref{imag_symmetry} and Corollary~\ref{g-disk}.
\begin{remark}
Evidently the domain $ \Omega_{\rho}$ is contained inside the disk
$\{w: |w-1|<\pi/2\}.$
\end{remark}
\section{Inclusion Relations}
This section establishes some inclusion results involving the class $\mathcal{S}^*_{\rho}$ with some well-known classes.
We consider the class $M(\beta)$, first studied by Uralegaddi $et\; al.$ \cite{uralegaddi}, given by
\[
M(\beta) := \left\{ f \in \mathcal{A}: \RE\left( \frac{zf'(z)}{f(z)}\right) < \beta,\; z \in \mathbb{D}, \beta>1\right\},
\]
and another interesting class introduced by Kanas and Wis\'niowska \cite{kanas} of $k$-starlike functions, denoted by $k-\mathcal{ST}$ and defined by
\[
k-\mathcal{ST} \coloneqq \left\{ f \in \mathcal{A}: \RE\left( \frac{zf'(z)}{f(z)}\right) > k\left| \frac{zf'(z)}{f(z)}-1 \right|,\; z \in\mathbb{D}, k \geq 0 \right\}.
\]
Note that $\mathcal{S}^* = 0-\mathcal{ST}$ and $\mathcal{S}^*_p = 1-\mathcal{ST}$, where $\mathcal{S}^*_p$ is the class of parabolic starlike functions \cite{ronn}.
We establish the following inclusion relations for the class $\mathcal{S}^*_{\rho}$.
\begin{theorem}
\label{incl_rel}
The class $\mathcal{S}^*_{\rho}$ satisfies the following relationships:
\begin{enumerate}[(i)]
\item $\mathcal{S}^*_{\rho} \subset \mathcal{S}^*_{\alpha} \subset \mathcal{S}^*$ for $0 \leq \alpha \leq 1-\sinh^{-1}(1)$;
\item $\mathcal{S}^*_{\rho} \subset M(\beta)$ for $\beta \geq 1+\sinh^{-1}(1)$;
\item $\mathcal{S}^*_{\rho} \subset \mathcal{SS}^*(\gamma)$ for $(2/\pi)\tan^{-1}(1/t) \leq \gamma \leq 1$ where $t=\dfrac{4}{\pi} \sqrt{\sinh^{-1}(1)(1-\sinh^{-1}(1))}$;
\item $k-\mathcal{ST} \subset \mathcal{S}^*_{\rho}$ for $k \geq 1+1/\sinh^{-1}(1)$.
\end{enumerate}
\end{theorem}
\begin{proof}
Consider $f \in \mathcal{S}^*_{\rho}$ which implies $zf'(z)/f(z) \prec 1+\sinh^{-1}(z)$. By Theorem \ref{geo_bounds}, it is evident that for $z \in \mathbb{D}$,
\[
1-\sinh^{-1}(1) = \min_{|z|=1} \RE (1+\sinh^{-1}(z)) \leq \RE \frac{zf'(z)}{f(z)}
\]
and
\[
\RE \frac{zf'(z)}{f(z)} \leq \max_{|z|=1}\RE (1+\sinh^{-1}(z)) = 1+\sinh^{-1}(1).
\]
This proves (i) and (ii). For (iii), let $w \in \mathbb{C}$, $X = \RE(w)$, $Y = \IM(w)$, and $b = 1-\sinh^{-1}(1)$. Now consider the parabolic domain $\Gamma_P$ with the boundary curve $\partial\Gamma_P = \gamma_p: Y^2 = 4a(X-b)$. The smallest such parabola containing $\Omega_{\rho}$ passes through the peak points $1 \pm i\pi/2$ of $\Omega_{\rho}$, which gives $a = \pi^2/(16\sinh^{-1}(1))$. Let $P$ be any point on the parabola $\gamma_p$ with parametric coordinates $(b+at^2, 2at)$ such that the tangent OE at $P$ passes through the origin for some parameter $t$. Let the equation of the tangent OE be $y=mx$, where $m = dy/dx = (dy/dt)/(dx/dt) = 1/t$. Therefore at $P$, we have
\begin{equation*}
m =\frac{y}{x} \Rightarrow \frac{1}{t} = \frac{2at}{b+at^2},
\end{equation*}
which yields
\begin{equation}\label{t}
t = \sqrt{\dfrac{b}{a}} = \dfrac{4}{\pi} \sqrt{\sinh^{-1}(1)(1-\sinh^{-1}(1))}
\end{equation}
and the argument of the tangent to $\gamma_p$ at $P$ is $\tan^{-1}(1/t)$. Since $\Omega_{\rho} \subset \Gamma_P$, we obtain
\[
\left|\arg \frac{zf'(z)}{f(z)}\right| \leq \max_{|z|=1} \arg (\rho(z)) = \max \arg(\gamma_p) = \tan^{-1}(1/t),
\]
which demonstrates $f \in \mathcal{SS}^*((2/\pi)\tan^{-1}(1/t))$, where $t$ is given by \eqref{t}.
To show (iv), consider $f \in k-\mathcal{ST}$ along with the conic domain $\Gamma_k = \{w \in \mathbb{C}: \RE w > k|w-1|\}$. For $k>1$, let $\partial\Gamma_k$ represent the horizontal ellipse $\gamma_k: x^2 = k^2(x-1)^2+k^2y^2$ which may be rewritten as
\[
\frac{(x-x_0)^2}{a^2} + \frac{(y-y_0)^2}{b^2} = 1,
\]
where $x_0=k^2/(k^2-1),\, y_0=0,\, a=k/(k^2-1)$ and $b=1/\sqrt{k^2-1}$. For $\gamma_k \subset \Omega_{\rho}$, the condition $x_0+a\leq 1+\sinh^{-1}(1)$ must hold, or equivalently $k \geq 1+1/\sinh^{-1}(1)$. Since $\Gamma_{k_1} \subseteq \Gamma_{k_2}$ for $k_1 \geq k_2$, it follows that for $k \geq 1+1/\sinh^{-1}(1)$, $k-\mathcal{ST} \subset \mathcal{S}^*_{\rho}$. Figure \ref{fig:incl_rel} clearly depicts these relations.
\qed \end{proof}
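For reference, $1-\sinh^{-1}(1)\approx 0.1186$, $1+\sinh^{-1}(1)\approx 1.8814$, $(2/\pi)\tan^{-1}(1/t)\approx 0.75$ and $1+1/\sinh^{-1}(1)\approx 2.1346$ in parts (i)--(iv) of Theorem \ref{incl_rel}, respectively.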
\begin{figure}[t]
\begin{minipage}{0.55\textwidth}
\includegraphics[width=0.95\linewidth, height=8cm]{inclRel.pdf}
\end{minipage}%
\begin{minipage}{0.45\textwidth}
\parbox{0.9\linewidth}{
\small
$\gamma_0: w = 1+\sinh^{-1}(z)$\\
$\gamma_1: \RE(w)=1-\sinh^{-1}(1)$\\
$\gamma_2: \RE(w)=1+\sinh^{-1}(1)$\\
$\gamma_3: |\arg(w)|\leq \tan^{-1}(1/t)$\\
where $t$ is given by \eqref{t}\\
$\gamma_ 4= \gamma_p: Y^2 = 4a(X-1+\sinh^{-1}(1))$\\
where $a = \pi^2/(16\sinh^{-1}(1))$\\
$\gamma_5= \gamma_k: \RE(w)>k|w-1|$\\
where $\; k=1+1/\sinh^{-1}(1)$\\
$\gamma_6= D_L: |w-1|<\sinh^{-1}(1)$\\
$\gamma_7= D_S: |w-1|<\pi/2$\\
$\RE$(A) $= 1-\sinh^{-1}(1)$\\
$\RE$(B) $= 1+\sinh^{-1}(1)$\\
$\IM$(C) $= -\IM(D) = \pi/2$\\
$\arg($E$)=-\arg($F$)= \tan^{-1}(1/t)$\\
where $t$ is given by \eqref{t}\\
O: origin\\
P: point of tangency of $\gamma_4$ w.r.t. OE
}
\end{minipage}
\caption{Boundary curves, depicting some inclusion relations for $w=1+\sinh^{-1}(z)$.}
\label{fig:incl_rel}
\end{figure}
For our next result, we consider $\mathcal{P}_n[C,D],$ the class of functions $p(z)$ of the form $1 + \sum_{k=n}^{\infty}c_{k}z^{k}$, satisfying $ p(z) \prec (1+Cz)/(1+Dz),$ where $-1 \leq D < C \leq 1$.
Denote by $\mathcal{P}_n(\alpha) := \mathcal{P}_n[1-2\alpha,-1] \; \text{and} \; \mathcal{P}_n:= \mathcal{P}_n(0)$. For n=1, $\mathcal{P} = \mathcal{P}_1$ is the Carath\'{e}odory class. We need the following lemmas:
\begin{lemma}
\label{p-nAlpha_lem}\normalfont{\cite{shah}}
For $p \in \mathcal{P}_n(\alpha)$, we have
\[
\left|\frac{zp'(z)}{p(z)}\right| \leq \frac{2(1-\alpha)nr^n}{(1-r^n)(1+(1-2\alpha)r^n)}, \; (|z|=r).
\]
\end{lemma}
\begin{lemma}
\label{p-nCD_lem}\normalfont{\cite{ravi-ron}}
For $p\in\mathcal{P}_n[C,D]$, we have
\[
\left|p(z)-\frac{1-CDr^{2n}}{1-D^2r^{2n}}\right| \leq \frac{(C-D)r^n}{1-D^2r^{2n}}, \; (|z|=r).
\]
Especially, for $p \in \mathcal{P}_n(\alpha)$, we have
\[
\left| p(z) - \frac{1+(1-2\alpha)r^{2n}}{1-r^{2n}}\right| \leq \frac{2(1-\alpha)r^n}{1-r^{2n}}, \; (|z|=r).
\]
\end{lemma}
\begin{theorem}
Let $-1 < D < C \leq 1$. If either of the following two conditions holds:
\begin{enumerate}[(i)]
\item $(1-\sinh^{-1}(1))(1-D^2) < 1-CD\leq 1-D^2$ and $C-D\leq (1-D)\sinh^{-1}(1)$;
\item $1-D^2 \leq 1-CD < (1+\sinh^{-1}(1))(1-D^2)$ and $C-D\leq (1+D)\sinh^{-1}(1)$.
\end{enumerate}
Then $\mathcal{S}^{*}[C,D]\subset \mathcal{S}^*_{\rho}$.
\end{theorem}
\begin{proof}
Let $f\in\mathcal{S}^{*}[C,D]$ which implies $zf'(z)/f(z)\in\mathcal{P}[C,D]$. Using Lemma \ref{p-nCD_lem} we have
\begin{equation}
\label{p-thm-eq}
\left|\frac{zf'(z)}{f(z)}-\frac{1-CD}{1-D^2}\right| \leq \frac{(C-D)}{1-D^2}.
\end{equation}
Let $a=(1-CD)/(1-D^2)$ and assume that (i) holds. Now multiplying $1+D$ and dividing by $(1-D^2)$ on either sides of the inequality $(C-D) \leq (1-D)\sinh^{-1}(1)$ gives $(C-D)/(1-D^2)\leq a-(1-\sinh^{-1}(1))$ on simplification. Also, the inequality $(1-\sinh^{-1}(1))(1-D^2) < 1-CD \leq 1-D^2$ is equivalent to $1-\sinh^{-1}(1)<(1-CD)/(1-D^2)\leq 1$. Therefore, from \eqref{p-thm-eq} we find $w=zf'(z)/f(z)$ is contained inside the disk $|w-a|<r_a$, where $r_a=a-(1-\sinh^{-1}(1))$ and $1-\sinh^{-1}(1) < a\leq 1$. Hence $f\in\mathcal{S}^*_{\rho}$ by Lemma \ref{disk_lem}. A similar proof can be shown when (ii) holds. \qed \end{proof}
\section{Radius Problems}
\noindent In this section, radius results for various subclasses of $\mathcal{A}$ are established. We begin by determining sharp $\mathcal{S}^*_{\alpha}\;(0\leq\alpha<1),\; \mathcal{M}(\beta)\;(\beta>1) $ and $k-\mathcal{ST}$-radii $(k \geq 0)$ for the class $\mathcal{S}^*_{\rho}$. Using Theorem \ref{incl_rel}, we can establish that $R_{\mathcal{S}^*_{\alpha}}(\mathcal{S}^*_{\rho})= R_{M(\beta)}(\mathcal{S}^*_{\rho})= 1$ for $0\leq\alpha\leq 1-\sinh^{-1}(1)$ and $\beta > 1+\sinh^{-1}(1)$.
\begin{theorem}
If $f \in \mathcal{S}^*_{\rho}$, then the following results hold:
\begin{enumerate}[(i)]
\item For $1-\sinh^{-1}(1)\leq\alpha<1$, we have $f \in \mathcal{S}^*_{\alpha}$ in $|z| \leq \sinh(1-\alpha)$.
\item For $1<\beta\leq 1+\sinh^{-1}(1)$, we have $f \in \mathcal{M}(\beta)$ in $|z| \leq \sinh(\beta-1)$.
\item For $k>0$, we have $f \in k-\mathcal{ST}$ in $|z| \leq \sinh(1/(k+1))$.
\end{enumerate}
The results are sharp.
\end{theorem}
\begin{proof}
Since $f \in \mathcal{S}^*_{\rho},\; zf'(z)/f(z) \prec 1+\sinh^{-1}(z)$ and hence for $|z|=r<1$ Theorem \ref{geo_bounds} gives
\[
1-\sinh^{-1}(r) \leq \RE \frac{zf'(z)}{f(z)} \leq 1+\sinh^{-1}(r),
\]
thereby validating the first two parts. Also, the constants $\sinh(1-\alpha)$ and $\sinh(\beta-1)$ are optimal for the function $f_0$ given by \eqref{func_int_rep}. Now to prove (iii), note that $f \in k-\mathcal{ST}$ in $|z|<r$, if
\[
\RE (1+\sinh^{-1}(w(z))) \geq k|1+\sinh^{-1}(w(z))-1| = k|\sinh^{-1}(w(z))|.
\]
Here $w$ denotes the Schwarz function. Since $\RE (1+\sinh^{-1}(w(z))) \geq 1-\sinh^{-1}(r)$ and $|\sinh^{-1}(w(z))| \leq \sinh^{-1}(r)$,
the inequality $\RE (1+\sinh^{-1}(w(z))) \geq k|\sinh^{-1}(w(z))|$ holds whenever $ 1-\sinh^{-1}(r) \geq k\sinh^{-1}(r)$, which implies $r\leq\sinh(1/(1+k))$. For the function $f_0$ given by \eqref{func_int_rep} and for $z_0=-\sinh(1/(1+k))$, we have
\[
\RE \frac{z_0f'_0(z_0)}{f_0(z_0)} = \RE (1+\sinh^{-1}(z_0)) = \frac{k}{k+1} = k|\sinh^{-1}(z_0)| = k\left|\frac{z_0f'_0(z_0)}{f_0(z_0)}-1\right|.
\]
This concludes the proof.
\qed \end{proof}
\begin{corollary}
Substituting $k=1$ in part (iii) above, we find that $f \in \mathcal{S}^*_{\rho}$ is parabolic starlike \normalfont{\cite{ronn}} in $|z| \leq \sinh(1/2)$.
\end{corollary}
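Numerically, $\sinh(1/2) \approx 0.5211$.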
\noindent In the next result, we find the $\mathcal{K}_{\alpha}$-radius for the class $\mathcal{S}^*_{\rho}$.
\begin{theorem}
Let $f\in \mathcal{S}^*_{\rho}$. Then $f\in \mathcal{K}_{\alpha}$ in $|z|<r_{\alpha}$, where $r_{\alpha}$ is the least positive root of
\begin{equation}
\label{rconv}
(1-r^2)\sqrt{1+r^2}\left(1-\sinh^{-1}(r)\right)\left(1-\alpha-\sinh^{-1}(r)\right)-r = 0 \quad (0\leq\alpha<1).
\end{equation}
\end{theorem}
\begin{proof}
Let $f \in \mathcal{S}^*_{\rho}$ and $w$ be a Schwarz function. Then $zf'(z)/f(z)= 1+\sinh^{-1}(w(z))$ such that
\[
1+\frac{zf''(z)}{f'(z)} = 1+\sinh^{-1}(w(z)) + \frac{zw'(z)}{(1+\sinh^{-1}(w(z)))\sqrt{1+w^2(z)}}
\]
which yields
\[
\RE\left(1+\frac{zf''(z)}{f'(z)}\right) \geq \RE\left(1+\sinh^{-1}(w(z))\right) - \left|\frac{zw'(z)}{(1+\sinh^{-1}(w(z)))\sqrt{1+w^2(z)}}\right|.
\]
We know for the Schwarz function $w$, the inequality $|w'(z)| \leq (1-|w(z)|^2)/(1-|z|^2)$ holds. Thus
we observe that
\begin{align*}
\RE\left(1+\frac{zf''(z)}{f'(z)}\right) &\geq 1-\sinh^{-1}(|z|) - \frac{|z|(1-|w(z)|^2)}{(1-\sinh^{-1}(|z|))(1-|z|^2)\sqrt{1+|z|^2}}\\
&\geq 1-\sinh^{-1}(|z|) - \frac{|z|}{(1-\sinh^{-1}(|z|))(1-|z|^2)\sqrt{1+|z|^2}}.
\end{align*}
Now consider the function $q(r):=1-\sinh^{-1}(r)-r/\left((1-\sinh^{-1}(r))(1-r^2)\sqrt{1+r^2}\right)$. This is a decreasing function in $[0,1)$ with $q(0)=1$. Therefore $\RE(1+zf''(z)/f'(z))>\alpha$ in $|z|<r_{\alpha}<1$, where $r_{\alpha}$ is given as the least positive root of the equation $q(r)=\alpha$, which is the same as \eqref{rconv}, and hence the result follows.
\qed \end{proof}
\begin{remark} Note that for $\alpha = 0$, $r_0 \approx 0.37198$, which is not sharp, so the result can be further improved. The sharp $\mathcal{K}_{0}$-radius for the class $\mathcal{S}^*_{\rho}$ appears to be $r_0 \approx 0.400435$, which can be conjectured graphically, but a mathematical proof is yet to be derived.
\end{remark}
For our next results, Theorems \ref{Kvn-rad-thm-1}--\ref{Kvn-rad-thm-4}, the following subclasses are required:\\
Let $\mathcal{S}^*_n[C,D]:= \{f\in\mathcal{A}_n : zf'(z)/f(z)\in \mathcal{P}_n[C,D] \}$. Also, let $\mathcal{S}^*_n(\alpha) := \mathcal{S}^*_n[1-2\alpha,-1] = \mathcal{A}_n \cap \mathcal{S}^*_{\alpha}\; \text{and}\; \mathcal{S}^*_{\rho,n} := \mathcal{A}_n \cap \mathcal{S}^*_{\rho}$.
Further, Ali $et\; al.$ \cite{ali12} studied the three classes $\mathcal{S}_n := \{f \in \mathcal{A}_n : f(z)/z \in \mathcal{P}_n\}, \, \mathcal{S}^*_n[C,D]$ and
\[
\mathcal{CS}_n(\alpha) := \left\{f \in \mathcal{A}_n : \frac{f(z)}{g(z)} \in \mathcal{P}_n, \; g \in \mathcal{S}^*_n(\alpha)\right\}.
\]
Now we obtain the $\mathcal{S}^*_{\rho,n}$-radii for the classes defined above.
\begin{theorem}
\label{Kvn-rad-thm-1}
For the class $\mathcal{S}_n$, the sharp $\mathcal{S}^*_{\rho,n}$-radius is given by:
\[
R_{\mathcal{S}^*_{\rho,n}}(\mathcal{S}_n) = \left(\frac{\sinh^{-1}(1)}{n + \sqrt{n^2 + \left(\sinh^{-1}(1)\right)^2}} \right)^{1/n}.
\]
\end{theorem}
\begin{proof}
Let $f \in \mathcal{S}_n$. Define $s: \mathbb{D} \rightarrow \mathbb{C}$ by $s(z) = f(z)/z$. Then $s \in \mathcal{P}_n$ and we can obtain $zf'(z)/f(z) - 1 = zs'(z)/s(z)$ from the above definition of $s$. Using Lemma \ref{disk_lem} and Lemma \ref{p-nAlpha_lem}, the following holds
\[
\left| \frac{zf'(z)}{f(z)} -1 \right| = \left|\frac{zs'(z)}{s(z)}\right| \leq \frac{2nr^n}{1-r^{2n}} \leq \sinh^{-1}(1),
\]
or equivalently $(\sinh^{-1}(1))r^{2n} + 2nr^n - \sinh^{-1}(1) \leq 0$. Therefore, the $\mathcal{S}^*_{\rho,n}$-radius of $\mathcal{S}_n$ is the least positive root of $(\sinh^{-1}(1))r^{2n} + 2nr^n - \sinh^{-1}(1)=0$ for $r\in(0,1)$. We can verify $\RE(f_0(z)/z)>0$ holds in $\mathbb{D}$ where $f_0(z) = z(1+z^n)/(1-z^n)$. Thus $f_0 \in \mathcal{S}_n$ and $zf'_0(z)/f_0(z) = 1 + 2nz^n/(1-z^{2n})$. Moreover, the result is sharp since at $z = R_{\mathcal{S}^*_{\rho,n}}(\mathcal{S}_n)$, we obtain
\[
\frac{zf'_0(z)}{f_0(z)} -1 = \frac{2nz^n}{1-z^{2n}} = \sinh^{-1}(1).
\]
The proof is complete.
\qed \end{proof}
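Explicitly, writing $x=r^n$, the equation $(\sinh^{-1}(1))x^2 + 2nx - \sinh^{-1}(1)=0$ has the unique positive root
\[
x=\frac{-n+\sqrt{n^2+\left(\sinh^{-1}(1)\right)^2}}{\sinh^{-1}(1)}
=\frac{\sinh^{-1}(1)}{n+\sqrt{n^2+\left(\sinh^{-1}(1)\right)^2}},
\]
which equals $\left(R_{\mathcal{S}^*_{\rho,n}}(\mathcal{S}_n)\right)^n$, in agreement with the statement of Theorem \ref{Kvn-rad-thm-1}.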
Let $\mathcal{F}$ denote the class of functions $f \in \mathcal{A}$ satisfying $f(z)/z \in \mathcal{P}$. The radius of univalence and starlikeness of the class $\mathcal{F}$ is $\sqrt{2}-1$, as shown in \cite{macgreg}.
\begin{corollary}
\label{Kvn-rad-thm-2}
For the class $\mathcal{F}$, the $\mathcal{S}^*_{\rho}$-radius is stated as
\[
R_{\mathcal{S}^*_{\rho}}(\mathcal{F}) = -e+\sqrt{1+e^2} \approx 0.178105.
\]
\end{corollary}
\begin{theorem}
\label{Kvn-rad-thm-3}
For the class $\mathcal{CS}_n(\alpha)$, the sharp $\mathcal{S}^*_{\rho,n}$-radius is given by
\[
R_{\mathcal{S}^*_{\rho,n}}(\mathcal{CS}_n(\alpha)) = \left(\frac{\sinh^{-1}(1)}{n-\alpha+1 +\sqrt{(n-\alpha+1)^2 + (\sinh^{-1}(1)+2(1-\alpha))\sinh^{-1}(1)}}\right)^{1/n}.
\]
\end{theorem}
\begin{proof}
Let $f \in \mathcal{CS}_n(\alpha)$ and $g \in \mathcal{S}^*_n(\alpha)$. Setting $s(z)= f(z)/g(z)$, we clearly have $s \in \mathcal{P}_n$. Also, it gives
\[
\frac{zf'(z)}{f(z)} = \frac{zs'(z)}{s(z)} + \frac{zg'(z)}{g(z)}.
\]
Using Lemmas \ref{p-nAlpha_lem} and \ref{p-nCD_lem}, we obtain
\begin{equation}
\label{eq_CSn-1}
\left| \frac{zf'(z)}{f(z)} - \frac{1+(1-2\alpha)r^{2n}}{1-r^{2n}} \right| \leq \frac{2(n-\alpha+1)r^n}{1-r^{2n}}.
\end{equation}
Since $(1+(1-2\alpha)r^{2n})/(1-r^{2n}) \geq 1$, the relation $f \in \mathcal{S}^*_{\rho,n}$ follows from \eqref{eq_CSn-1} and Lemma \ref{disk_lem} provided the following inequality holds:
\[
\frac{1 + 2(n-\alpha+1)r^n + (1-2\alpha)r^{2n}}{1-r^{2n}} \leq 1+\sinh^{-1}(1)
\]
or equivalently, $(2-2\alpha+\sinh^{-1}(1))r^{2n} + 2(n-\alpha+1)r^n - \sinh^{-1}(1) \leq 0$ holds. Thus, the least positive root of
\[
(2-2\alpha+\sinh^{-1}(1))r^{2n} + 2(n-\alpha+1)r^n - \sinh^{-1}(1) = 0
\]
gives the $\mathcal{S}^*_{\rho,n}$-radius for the class $\mathcal{CS}_n(\alpha)$.
Next examine the following functions
\begin{equation}
\label{eq_CSn-2}
f_0(z) = \frac{z(1+z^n)}{(1-z^n)^{(n+2-2\alpha)/n}} \; \text{and} \; g_0(z) = \frac{z}{(1-z^n)^{2(1-\alpha)/n}},
\end{equation}
which implies $f_0(z)/g_0(z) = (1+z^n)/(1-z^n)$ and $zg'_0(z)/g_0(z) = (1+(1-2\alpha)z^n)/(1-z^n)$. Moreover, it is obvious that $\RE(f_0(z)/g_0(z))>0$ and $\RE(zg'_0(z)/g_0(z))>\alpha$ in the unit disk $\mathbb{D}$. Hence $f_0 \in \mathcal{CS}_n(\alpha)$. At $z = R_{\mathcal{S}^*_{\rho,n}}(\mathcal{CS}_n(\alpha))$, the function $f_0$ defined in \eqref{eq_CSn-2} satisfies
\[
\frac{zf'_0(z)}{f_0(z)} = \frac{1 + 2(n-\alpha+1)z^n + (1-2\alpha)z^{2n}}{1-z^{2n}} = 1+\sinh^{-1}(1),
\]
which establishes the sharpness of the result.
\qed \end{proof}
\begin{theorem}
\label{Kvn-rad-thm-4}
For the class $\mathcal{S}^*_n[C,D]$, the $\mathcal{S}^*_{\rho,n}$-radius is given by
\[
R_{\mathcal{S}^*_{\rho,n}}(\mathcal{S}^*_n[C,D]) = \left\{
\begin{array}{ll}
\min \{ 1;R_1\}, & -1 \leq D < 0 < C \leq 1;\\
\min \{1;R_2\}, & 0 < D < C \leq 1,
\end{array}
\right.
\]
where
\[
R_1 := \left( \frac{2\sinh^{-1}(1)}{C-D + \sqrt{(C-D)^2 + 4(D^2(1+\sinh^{-1}(1))-CD)\sinh^{-1}(1)}}\right)^{1/n}
\]
and
\[
R_2 := \left( \frac{2\sinh^{-1}(1)}{C-D + \sqrt{(C-D)^2 + 4(D^2(\sinh^{-1}(1) - 1) + CD)\sinh^{-1}(1)}}\right)^{1/n}.
\]
\end{theorem}
\begin{proof}
Let $f \in \mathcal{S}^*_n[C,D]$. From Lemma \ref{p-nCD_lem}, we have
\begin{equation}
\label{eq_S*nCD}
\left| \frac{zf'(z)}{f(z)} -b \right| \leq \frac{(C-D)r^n}{1-D^2r^{2n}},
\end{equation}
where $b = (1-CDr^{2n})/(1-D^2r^{2n}), \; |z|=r,$ represents the center of the disk. We infer that $b \geq 1$ for $-1 \leq D < 0 < C \leq 1$. By Lemma \ref{disk_lem}, $f \in \mathcal{S}^*_{\rho,n}$ provided the following condition holds:
\[
\frac{1 + (C-D)r^n - CDr^{2n}}{1-D^2 r^{2n}} \leq 1+\sinh^{-1}(1),
\]
which reduces to
\[
r \leq \left( \frac{2\sinh^{-1}(1)}{C-D + \sqrt{(C-D)^2 + 4(D^2(1+\sinh^{-1}(1)) - CD)\sinh^{-1}(1)}}\right)^{1/n} = R_1.
\]
Further, taking $D=0$, we get $b=1$. Then \eqref{eq_S*nCD} yields
\[
\left| \frac{zf'(z)}{f(z)} -1 \right| \leq Cr^n, \; (0 < C \leq 1).
\]
Now applying Lemma \ref{disk_lem} with $a=1$ gives $f \in \mathcal{S}^*_{\rho,n}$, if $r \leq ((\sinh^{-1}(1))/C)^{1/n}$.
For $0 < D < C \leq 1$, we have $b < 1$. Thus, using Lemma \ref{disk_lem} and \eqref{eq_S*nCD}, we have $f \in \mathcal{S}^*_{\rho,n}$ if the following holds:
\[
\frac{CDr^{2n} + (C-D)r^n -1}{1-D^2 r^{2n}} \leq \sinh^{-1}(1) - 1,
\]
or equivalently, if
\[
r \leq \left( \frac{2\sinh^{-1}(1)}{C-D + \sqrt{(C-D)^2 + 4(D^2(\sinh^{-1}(1) - 1) + CD)\sinh^{-1}(1)}}\right)^{1/n} = R_2.
\]
This concludes the proof.
\qed \end{proof}
The next theorem establishes radius results for some well-known classes mentioned earlier.
\begin{theorem}
The sharp $\mathcal{S}^*_{\rho}$-radii for the classes $\mathcal{S}^*_{L}, \mathcal{S}^*_{RL}, \mathcal{S}^*_{C}, \mathcal{S}^*_{e}, \Delta^* \, \text{and} \, \mathcal{BS}^*(\alpha)$ are:
\begin{enumerate}[(i)]
\item $R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{L}) = \sinh^{-1}(1)(2-\sinh^{-1}(1)) \approx 0.985928$.
\item $R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{RL}) = \dfrac{\left(2+(1+\sqrt{2})\sinh^{-1}(1)\right)\sinh^{-1}(1)}{5-3\sqrt{2} + \left(4(\sqrt{2}-1)+2\sinh^{-1}(1)\right)\sinh^{-1}(1)} \approx 0.964694$.
\item $R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{C}) = \dfrac{1}{2}\left(\sqrt{2\left(2+3\sinh^{-1}(1)\right)}-2\right) \approx 0.523831$.
\item $R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{e}) = \ln(1+\sinh^{-1}(1)) \approx 0.632002.$
\item $R_{\mathcal{S}^*_{\rho}}(\Delta^*) = \dfrac{\sinh^{-1}(1)(2+\sinh^{-1}(1))}{2(1+\sinh^{-1}(1))} \approx 0.674924$.
\item $R_{\mathcal{S}^*_{\rho}}(\mathcal{BS}^*(\alpha)) = \dfrac{-1+\sqrt{1+\alpha\left(2\sinh^{-1}(1)\right)^2}}{2\alpha\sinh^{-1}(1)}, \; \alpha \in [0,1]$.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}[(i)]
\item Suppose $f \in \mathcal{S}^*_{L}$. We have $zf'(z)/f(z) \prec \sqrt{1+z}$. When $|z|=r$, we obtain
\[
\left|\frac{zf'(z)}{f(z)} -1 \right| \leq 1 - \sqrt{1-r} \leq \sinh^{-1}(1),
\]
such that $r \leq (2-\sinh^{-1}(1))\sinh^{-1}(1) = R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{L})$ holds. Next examine the function
\[
f_0(z) = \frac{4z}{\left(1+\sqrt{1+z}\right)^2} {e^{2\left(\sqrt{1+z}-1\right)}}.
\]
Since $zf_0'(z)/f_0(z) = \sqrt{1+z}$, it follows that $f_0 \in \mathcal{S}^*_{L}$. As $zf_0'(z)/f_0(z) -1 = -\sinh^{-1}(1)$ is obtained at $z=-R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{L})$, the result is sharp.
\item Suppose $f \in \mathcal{S}^*_{RL}$. Then we obtain
\[
\frac{zf'(z)}{f(z)} \prec \sqrt{2} - (\sqrt{2}-1) \sqrt{\frac{1-z}{(1+2(\sqrt{2}-1)z)}}.
\]
For $|z|=r$, the subsequent inequality holds
\[
\left|\frac{zf'(z)}{f(z)}-1 \right| \leq 1 - \sqrt{2} + (\sqrt{2}-1) \sqrt{\frac{1+r}{(1-2(\sqrt{2}-1)r)}} \leq \sinh^{-1}(1),
\]
provided
\[
r \leq \frac{\left(2+(1+\sqrt{2})\sinh^{-1}(1)\right)\sinh^{-1}(1)}{5-3\sqrt{2} + \left(4(\sqrt{2}-1)+2\sinh^{-1}(1)\right)\sinh^{-1}(1)} = R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{RL}).
\]
Next observe the following function defined as
\[
f_0(z)= z \exp\left(\int^z_0\frac{q_0(t)-1}{t}dt \right),
\]
where
\[
q_0(t) = \sqrt{2} - (\sqrt{2}-1) \sqrt{\frac{1-t}{(1+2(\sqrt{2}-1)t)}}.
\]
From the definition of $f_0$, at $z=-R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{RL})$, we have
\[
\frac{zf_0'(z)}{f_0(z)} = \sqrt{2} - (\sqrt{2}-1) \sqrt{\frac{1-z}{(1+2(\sqrt{2}-1)z)}} = 1-\sinh^{-1}(1).
\]
This confirms the sharpness.
\item Suppose $f \in \mathcal{S}^*_{C}$. So $zf'(z)/f(z) \prec 1+4z/3+2z^2/3$. This gives
\[
\left|\frac{zf'(z)}{f(z)}-1\right| \leq \frac{4r}{3} + \frac{2r^2}{3} \leq \sinh^{-1}(1), \; |z|=r,
\]
for $r \leq \frac{1}{2}\left(\sqrt{2\left(2+3\sinh^{-1}(1)\right)}-2\right) = R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{C})$. The sharpness of the result is established using the subsequent function
\[
f_0(z) = z \exp\left(\frac{4z+z^2}{3}\right),
\]
where $zf_0'(z)/f_0(z) = 1+(4z+2z^2)/3$ yields $f_0 \in \mathcal{S}^*_{C}$, and substituting $z = R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{C})$ gives $zf_0'(z)/f_0(z) = 1+\sinh^{-1}(1)$, thereby proving the sharpness.
\item Suppose $f \in \mathcal{S}^*_{e}$. Then $zf'(z)/f(z) \prec e^z$, which yields
\[
\left| \frac{zf'(z)}{f(z)}-1 \right| \leq e^r-1 \leq \sinh^{-1}(1) \;\text{holds in} \; |z|=r,
\]
provided $r \leq \ln(1+\sinh^{-1}(1)) = R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{e})$.
Now consider
\[
f_0(z)= z \exp\left(\int^z_0\frac{e^t -1}{t}dt \right).
\]
Since $zf_0'(z)/f_0(z) = e^z$, $f_0 \in \mathcal{S}^*_{e}$, and at $z=R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{e})$, we have $zf_0'(z)/f_0(z) = 1+\sinh^{-1}(1)$, which shows the sharpness of the result.
\item Suppose $f \in \Delta^*$ which gives $zf'(z)/f(z) \prec z+ \sqrt{1+z^2}$. Then,
\[
\left| \frac{zf'(z)}{f(z)}-1 \right|\leq r+\sqrt{1+r^2} -1 \leq \sinh^{-1}(1), \, |z|=r,
\]
for $r \leq \dfrac{\sinh^{-1}(1)(2+\sinh^{-1}(1))}{2(1+\sinh^{-1}(1))} = R_{\mathcal{S}^*_{\rho}}(\Delta^*)$.
For sharpness, define $f_0$ as
\[
f_0(z)= z \exp\left(\int^z_0\frac{t+\sqrt{1+t^2}-1}{t}dt \right).
\]
Since $zf_0'(z)/f_0(z) = z+\sqrt{1+z^2}$, $f_0 \in \Delta^*$, so at $z=R_{\mathcal{S}^*_{\rho}}(\Delta^*)$, we have $zf_0'(z)/f_0(z)$ $= 1+\sinh^{-1}(1)$ which shows the sharpness of the result.
\item Suppose $f \in \mathcal{BS}^*(\alpha),\;\alpha \in [0,1]$, which gives $zf'(z)/f(z) \prec 1+ z/(1-\alpha z^2)$. Then,
\[
\left| \frac{zf'(z)}{f(z)}-1 \right|\leq \frac{r}{1-\alpha r^2} \leq \sinh^{-1}(1), \, |z|=r,
\]
for $r \leq \dfrac{-1+\sqrt{1+\alpha\left(2\sinh^{-1}(1)\right)^2}}{2\alpha\sinh^{-1}(1)} = R_{\mathcal{S}^*_{\rho}}(\mathcal{BS}^*(\alpha)), \, \alpha \in (0,1]$. For $\alpha=0,\, r \leq \sinh^{-1}(1)$.
Next examine the function $f_0$ defined as
\[
f_0(z)= z \left(\frac{1+\sqrt{\alpha}z}{1-\sqrt{\alpha}z}\right)^{1/(2\sqrt{\alpha})}.
\]
Since $zf_0'(z)/f_0(z) = 1+ z/(1-\alpha z^2)$, $f_0 \in \mathcal{BS}^*(\alpha)$, so at $z=-R_{\mathcal{S}^*_{\rho}}(\mathcal{BS}^*(\alpha))$, we have $zf_0'(z)/f_0(z) = 1-\sinh^{-1}(1)$, which ensures the sharpness of the result.
\end{enumerate}
\noindent Note that $ R_{\mathcal{S}^*_{\rho}}(\mathcal{BS}^*(1)) = \left(-1+\sqrt{1+(2\sinh^{-1}(1))^2}\right)/(2\sinh^{-1}(1)) \approx 0.58241$ and $R_{\mathcal{S}^*_{\rho}}(\mathcal{BS}^*(0))$ $= \sinh^{-1}(1) \approx 0.881374$.
\qed \end{proof}
Next we present some radius problems for certain classes of functions expressed as ratio of functions:
\[
\mathcal{F}_1 := \left\{ f \in \mathcal{A}_n : \RE \left(\frac{f(z)}{g(z)}\right) > 0 \;\text{and}\; \RE \left(\frac{g(z)}{z}\right) > 0,\; g \in \mathcal{A}_n\right\},
\]
\[
\mathcal{F}_2 := \left\{ f \in \mathcal{A}_n : \RE \left(\frac{f(z)}{g(z)}\right) > 0 \;\text{and}\; \RE \left(\frac{g(z)}{z}\right) > 1/2,\; g \in \mathcal{A}_n\right\},
\]
and
\[
\mathcal{F}_3 := \left\{ f \in \mathcal{A}_n : \left|\frac{f(z)}{g(z)} -1 \right| < 1 \;\text{and}\; \RE \left(\frac{g(z)}{z}\right) > 0,\; g \in \mathcal{A}_n\right\}.
\]
\begin{theorem} \label{ratio_func}
For functions in the classes $\mathcal{F}_1,\, \mathcal{F}_2$ and $ \mathcal{F}_3$, the sharp $\mathcal{S}^*_{\rho,n}$-radii respectively, are:
\begin{enumerate}[(i)]
\item $R_{\mathcal{S}^*_{\rho,n}}(\mathcal{F}_1) = \left(\dfrac{\sqrt{4n^2 + (\sinh^{-1}(1))^2}-2n}{\sinh^{-1}(1)}\right)^{1/n}$.
\item $R_{\mathcal{S}^*_{\rho,n}}(\mathcal{F}_2) = \left( \dfrac{\sqrt{9n^2 + 4\sinh^{-1}(1)(n + \sinh^{-1}(1))} -3n}{2(n+\sinh^{-1}(1))} \right)^{1/n}$.
\item $R_{\mathcal{S}^*_{\rho,n}}(\mathcal{F}_3) = R_{\mathcal{S}^*_{\rho,n}}(\mathcal{F}_2)$.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}[(i)]
\item Let $f \in \mathcal{F}_1$ and consider the functions $s,d: \mathbb{D} \rightarrow \mathbb{C}$ where $s(z)=f(z)/g(z)$ and $d(z)=g(z)/z$. Clearly, $s,d \in \mathcal{P}_n$. As $f(z)=zd(z)s(z)$, applying Lemma \ref{p-nAlpha_lem} here gives
\[
\left|\frac{zf'(z)}{f(z)} -1 \right| \leq \frac{4nr^n}{1-r^{2n}} \leq \sinh^{-1}(1)
\]
such that
\[
r \leq \left(\frac{\sqrt{4n^2 + (\sinh^{-1}(1))^2}-2n}{\sinh^{-1}(1)}\right)^{1/n} = R_{\mathcal{S}^*_{\rho,n}}(\mathcal{F}_1)
\]
holds. Next examine the functions
\[
f_0(z)= z \left(\frac{1+z^n}{1-z^n}\right)^2 \; \text{and} \; g_0(z) = z \left(\frac{1+z^n}{1-z^n}\right).
\]
Evidently, $\RE(f_0(z)/g_0(z))>0$ and $\RE(g_0(z)/z)>0$, which implies $f_0 \in \mathcal{F}_1$. A further calculation yields, at $z = R_{\mathcal{S}^*_{\rho,n}}(\mathcal{F}_1)e^{i\pi/n}$,
\[
\frac{zf'_0(z)}{f_0(z)} = 1 + \frac{4nz^n}{1-z^{2n}} = 1 - \sinh^{-1}(1),
\]
which validates the result is sharp.
\item Let $f \in \mathcal{F}_2$ and consider the functions $s,d: \mathbb{D} \rightarrow \mathbb{C}$ where $s(z)=f(z)/g(z)$ and $d(z)=g(z)/z$. Clearly, $s \in \mathcal{P}_n(1/2)$ and $d \in \mathcal{P}_n$. As $f(z)=zd(z)s(z)$, applying Lemma \ref{p-nAlpha_lem} here gives
\[
\left|\frac{zf'(z)}{f(z)} -1 \right| \leq \frac{2nr^n}{1-r^{2n}} + \frac{nr^n}{1-r^n} = \frac{3nr^n + nr^{2n}}{1-r^{2n}} \leq \sinh^{-1}(1),
\]
whenever
\[
r \leq \left( \frac{\sqrt{9n^2 + 4\sinh^{-1}(1)(n + \sinh^{-1}(1))} -3n}{2(n+\sinh^{-1}(1))} \right)^{1/n} = R_{\mathcal{S}^*_{\rho,n}}(\mathcal{F}_2).
\]
Therefore, $f \in \mathcal{S}^*_{\rho,n}$ holds for $r \leq R_{\mathcal{S}^*_{\rho,n}}(\mathcal{F}_2)$.
Next observe that $\RE(g_0(z)/z)>1/2$ while $\RE(f_0(z)/g_0(z))>0$ for the functions
\[
f_0(z) = \frac{z(1+z^n)}{(1-z^n)^2} \; \text{and} \; g_0(z) = \frac{z}{1-z^n}.
\]
Therefore, $f_0 \in \mathcal{F}_2$, and the sharpness is verified at $z = R_{\mathcal{S}^*_{\rho,n}}(\mathcal{F}_2)$, where
\[
\frac{zf'_0(z)}{f_0(z)} -1 = \frac{3nz^n + nz^{2n}}{1-z^{2n}} = \sinh^{-1}(1).
\]
\item Let $f \in \mathcal{F}_3$ and consider the functions $s,d: \mathbb{D} \rightarrow \mathbb{C}$ where $s(z)=g(z)/f(z)$ and $d(z) = g(z)/z$. Then $d \in \mathcal{P}_n$. We can verify that $|1/s(z)-1|<1$ holds if and only if $\RE(s(z))>1/2$, and therefore $s \in \mathcal{P}_n(1/2)$. As $f(z)=zd(z)/s(z)$, on applying Lemma \ref{p-nAlpha_lem}, we obtain
\[
\left|\frac{zf'(z)}{f(z)} -1 \right| \leq \frac{3nr^n + nr^{2n}}{1-r^{2n}} \leq \sinh^{-1}(1).
\]
The rest of the proof is omitted, as it is analogous to the proof of part (ii). The sharpness can be verified as follows. Examine the functions
\[
f_0(z) = \frac{z(1+z^n)^2}{1-z^n} \; \text{and} \; g_0(z) = \frac{z(1+z^n)}{1-z^n}.
\]
Using above definitions of $f_0$ and $g_0$, we see that
\[
\RE\left(\frac{g_0(z)}{f_0(z)}\right) = \RE\left(\frac{1}{1+z^n}\right) > \frac{1}{2} \; \text{and} \; \RE\left(\frac{g_0(z)}{z}\right) = \RE\left(\frac{1+z^n}{1-z^n}\right) > 0,
\]
and therefore, $f_0 \in \mathcal{F}_3$. Now at $z = R_{\mathcal{S}^*_{\rho,n}}(\mathcal{F}_3) e^{i\pi/n}$, we obtain
\[
\frac{zf'_0(z)}{f_0(z)} -1 = \frac{3nz^n - nz^{2n}}{1-z^{2n}} = -\sinh^{-1}(1),
\]
which serves as validation for the sharp result.
\end{enumerate}
This concludes the proof.
\qed \end{proof}
\section{Introduction}
Let the open unit disk $\{z\in\mathbb{C}:|z|<1\}$ be represented by $\mathbb{D}$ and denote the class of all analytic functions in $\mathbb{D}$ by $\mathcal{H}$. Consider $\mathcal{A}_n$ as the class of analytic functions $f$ in $\mathbb{D}$ represented by
\begin{equation}\label{A_n}
f(z)=z+a_{n+1}z^{n+1}+a_{n+2}z^{n+2}+...
\end{equation}
In particular, denote $\mathcal{A}_1:=\mathcal{A}$ and let $\mathcal{S}$ be the subclass of $\mathcal{A}$ such that it involves all univalent functions $f(z)$ in $\mathbb{D}$. Let $g, h$ be two analytic functions and $\omega$ be a Schwarz function satisfying $\omega(0)=0$ and $|\omega(z)|\leq|z|$ such that $g(z)=h(\omega(z))$ then $g$ is said to be subordinate to $h$, or $g \prec h$. If $h$ is univalent, then $g\prec h$ iff $g(0)=h(0)$ and $g(\mathbb{D})\subset h(\mathbb{D})$. Ma and Minda \cite{minda94} introduced the univalent function $\psi$ satisfying $\RE{\psi(\mathbb{D})}>0$, $\psi(\mathbb{D})$ starlike with respect to $\psi(0)=1$ and $\psi'(0)>0$ and the domain $\psi(\mathbb{D})$ being symmetric about the real axis. Further, they gave the definitions for the general subclasses of starlike and convex functions, respectively as follows:
\begin{equation}
\mathcal{S}^*(\psi):=\left\{f\in\mathcal{S}: zf'(z)/f(z)\prec\psi(z)\right\}
\end{equation}
and
\begin{equation}
\mathcal{K}(\psi):= \left\{f\in\mathcal{S}: 1 + zf''(z)/f'(z) \prec \psi(z)\right\}.
\end{equation}
For different choices of $\psi$, many subclasses of $\mathcal{S}^*$ and $\mathcal{K}$ can be obtained. For example, the notable classes of Janowski starlike and convex functions \cite{janow} are represented by $\mathcal{S}^*[C,D] := \mathcal{S}^*((1+Cz)/(1+Dz))$ and $\mathcal{K}[C,D]:= \mathcal{K}((1+Cz)/(1+Dz))$ for $-1 \leq D < C \leq 1$, respectively. Further, $\mathcal{S}^*_{\alpha}:= \mathcal{S}^*[1-2\alpha,-1]$ and $\mathcal{K}_{\alpha}:= \mathcal{K}^*[1-2\alpha,-1]$ represents the classes of starlike and convex functions of order $\alpha \in [0,1)$, respectively. Note that $\mathcal{S}^*:= \mathcal{S}^*_0$ and $\mathcal{K}:=\mathcal{K}_0$ represent the well-known classes of starlike and convex functions, respectively. We denote $\mathcal{SS}^*(\gamma):= \mathcal{S}^*(((1+z)/(1-z))^\gamma)$ and $\mathcal{SK}(\gamma):= \mathcal{K}(((1+z)/(1-z))^\gamma)$ representing the class of strongly starlike and strongly convex functions of order $\gamma \in (0,1]$ respectively.
Recall that for two subfamilies $G_1$ and $G_2$ of $\mathcal{A}$, the $G_1$-radius of the class $G_2$ is the largest number $r_0\in(0,1]$ such that $r^{-1}g(rz)\in G_1$ for every $0 < r \leq r_0$ and every $g \in G_2$. Moreover, the classes $\mathcal{S}^*(\psi)$ for various choices of $\psi$ have been considered by many authors, whose works examined the geometric properties, radius results and coefficient estimates of the functions in the respective classes. Sok\'{o}\l \; and Stankiewicz \cite{sokol96,sokol09} considered the class $\mathcal{S}^*_{L}:= \mathcal{S}^*(\sqrt{1+z})$ and Mendiratta $et\; al.$ \cite{mendi} worked on the class $\mathcal{S}^*_{RL}:= \mathcal{S}^*(\sqrt{2} - (\sqrt{2}-1) ((1-z)/((1+2(\sqrt{2}-1)z)))^{1/2})$. Sharma $et\; al.$ \cite{naveen14} studied the class $\mathcal{S}^*_{C}:= \mathcal{S}^*(1+4z/3+2z^2/3)$ while the class $\mathcal{S}^*_s:=\mathcal{S}^*(1+\sin{z})$ was examined by Cho $et\; al.$ \cite{sinefun}. The classes $\mathcal{S}^*_{e}:=\mathcal{S}^*(e^z)$ and $\Delta^*:= \mathcal{S}^*(z+\sqrt{1+z^2})$ were considered by Mendiratta $et\; al.$ \cite{mendi2exp} and Raina $et\; al.$ \cite{raina}, respectively. Kargar $et\; al.$ \cite{kargarbooth} introduced and studied the class $\mathcal{BS}^*(\alpha) := \mathcal{S}^*(1+z/(1-\alpha z^2)), \; \alpha \in [0,1]$, associated with the Booth lemniscate, which was also investigated by Cho $et\;al.$ \cite{chobooth}. Some more recent work on radius problems can be found in \cite{aghalary, chobell, bano, priyanka, swaminathan}.
Motivated by the classes defined in \cite{sokol96,kargarbooth,mendi,naveen14,mendi2exp,sinefun,raina}, we consider the petal shaped region ${\Omega}_{\rho}:=\{ w\in \mathbb{C}: |\sinh(w - 1)| < 1 \}$, described by the function $\rho(z)=1+\sinh^{-1}(z)$, which we use to define our class. Clearly, $\rho(z)$ is a Ma-Minda function. See Figure {\ref{fig:incl_rel}} for its boundary curve $\gamma_0$, which is petal shaped. Note that the principal branch of $\sinh^{-1}(z)$ is multivalued with branch cuts along the segments $(-i\infty,-i) \cup (i,i\infty)$ of the imaginary axis,
and hence it is analytic in $\mathbb{D}$. Now we introduce a new class of starlike functions
\begin{equation}
\label{func_def}
\mathcal{S}^*_{\rho}:=\left\{f\in\mathcal{A}: \frac{zf'(z)}{f(z)} \prec 1+\sinh^{-1}(z)\right\} \quad (z\in\mathbb{D}),
\end{equation}
which is associated with the petal-shaped domain $\rho(\mathbb{D})$. From the above definition, we deduce that $f\in \mathcal{S}^*_{\rho} $ iff there exists an analytic function $q(z)\prec \rho(z)$ such that
\begin{equation}\label{gen_int_rep}
f(z) = z \exp\left(\int^z_0 \frac{q(t)-1}{t}dt\right).
\end{equation}
Table \ref{func_exa_table} presents some functions in the class $\mathcal{S}^*_{\rho}$ where $q_j \prec \rho$.
\begin{table}[ht]
\begin{center}
\begin{tabular}{ccc}
\hline
$j$ & $q_j(z)$ & $f_j(z)$\\
\hline
$1$ & $1 + z/5$ & $z \exp(z/5)$\\
$2$ & $(5+2z)/(5+z)$ & $z + z^2/5$\\
$3$ & $(7+4z)/(7+z)$ & $z(1+z/7)^3$\\
\hline
\end{tabular}
\end{center}
\caption{Some functions in the class $\mathcal{S}^*_{\rho}$}
\label{func_exa_table}
\end{table}
Since $\rho$ is univalent in $\mathbb{D},\, q_j(\mathbb{D}) \subset \rho(\mathbb{D})$ and $q_j(0)=\rho(0) \; (j=1, 2, 3)$, it follows that each $q_j \prec \rho$. Thus the functions $f_j(z)$ obtained from \eqref{gen_int_rep} are in the class $\mathcal{S}^*_{\rho}$. In particular, if we choose
\[
q(z)=1+\sinh^{-1}(z) = 1+z-\dfrac{z^3}{6}+\dfrac{3z^5}{40}-\dfrac{5z^7}{112}+\cdots,
\]
then \eqref{gen_int_rep} gives
\begin{equation}\label{func_int_rep}
f_0(z) = z \exp\left(\int^z_0\frac{\sinh^{-1}(t)}{t}dt\right) = z + z^2 + \frac{z^3}{2} + \frac{z^4}{9} - \frac{z^5}{72} - \frac{z^6}{225}\cdots,
\end{equation}
which often acts as the extremal function for the class $\mathcal{S}^*_{\rho}$ yielding sharp results.
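For an independent numerical check of the expansion \eqref{func_int_rep}, the following short Python sketch (an illustration only, not part of the paper's argument; the truncation order $N$ is arbitrary) recomputes the first few Taylor coefficients of $f_0$ by exact power-series arithmetic.
\begin{verbatim}
# Sketch: recompute the Taylor coefficients of
# f_0(z) = z*exp( int_0^z asinh(t)/t dt ) with exact rational arithmetic.
from fractions import Fraction
from math import comb

N = 7  # truncate all series after z^N

# asinh(t)/t = sum_{m>=0} (-1)^m C(2m,m)/(4^m (2m+1)) t^(2m)
c = [Fraction(0)] * N
for m in range((N + 1) // 2):
    if 2 * m < N:
        c[2 * m] = Fraction((-1) ** m * comb(2 * m, m), 4 ** m * (2 * m + 1))

# termwise integration: g(z) = int_0^z asinh(t)/t dt
g = [Fraction(0)] * (N + 1)
for k in range(N):
    g[k + 1] = c[k] / (k + 1)

# exponentiate the series via (exp g)' = g' * exp g
e = [Fraction(0)] * (N + 1)
e[0] = Fraction(1)
for k in range(1, N + 1):
    e[k] = sum(Fraction(j) * g[j] * e[k - j] for j in range(1, k + 1)) / k

f0 = [Fraction(0)] + e[:N]   # coefficients of z*exp(g(z))
print(f0[:7])  # expected: [0, 1, 1, 1/2, 1/9, -1/72, -1/225]
\end{verbatim}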
\begin{remark}
Note that $\sinh^{-1}(z) = \ln(z+\sqrt{1+z^2})$. Let $w=zf'(z)/f(z)$, where $f \in \mathcal{S}^*_{\rho}$. Then the class $\mathcal{S}^*_{\rho}$ can be alternatively represented by
$\exp(w-1) \prec z+\sqrt{1+z^2}$, where $z+\sqrt{1+z^2}$ represents the Crescent shaped domain \cite{raina}. Thus, there exists an exponential relation among the functions in the classes $\mathcal{S}^*_{\rho}$ and $\Delta^*$.
\end{remark}
In the present investigation, the geometrical properties of the function $1+\sinh^{-1}(z)$ are studied and certain inclusion properties as well as radius problems are established for the class $\mathcal{S}^*_{\rho}$.
\section{Properties of the function $1+\sinh^{-1}(z)$}
The current section deals with the study of some geometric properties of the function $1+\sinh^{-1}(z)$.
\begin{theorem}\label{convexfunction}
The function $\rho(z)=1+\sinh^{-1}(z)$ is a convex univalent function.
\end{theorem}
\begin{proof}
Let $h(z) = \sinh^{-1}(z)$. Clearly, $h(0)=0$. Since $h'(z)=1/\sqrt{1+z^2}$ and $\sqrt{1+z^2} \prec \sqrt{1+z} \in \mathcal{P}$, where $\mathcal{P}$ is the Carath\'{e}odory class, it follows that $1/\sqrt{1+z^2} \in \mathcal{P}$, that is, $\RE h'(z) > 0$. Hence $h$, and therefore $\rho$, is univalent in $\mathbb{D}$.
Now a calculation yields
\begin{equation*}
1+\frac{zh''(z)}{h'(z)}=\frac{1}{1+z^2}.
\end{equation*}
Since
\begin{equation*}
\frac{1}{1+z^2}\prec \frac{1}{1+z} \in \mathcal{P},
\end{equation*}
we get $\RE(1+zh''(z)/h'(z))>0$, which implies that $h$ (and thus $\rho$) is a convex univalent function.
\qed \end{proof}
\begin{remark}\label{real_symmetry}
Note that $\rho'(0)>0$ and the function $\varphi(z)= z+\sqrt{1+z^2}$ satisfies $\varphi(\bar{z})=\overline{\varphi(z)}$. Therefore, $\rho(\bar{z})=\overline{\rho(z)}$ and hence, the domain $\Omega_{\rho}=\rho(\mathbb{D})$ is symmetric about the real axis.
\end{remark}
\begin{theorem}\label{imag_symmetry}
The domain $\Omega_{\rho}$ is symmetric about the line $\RE(w)=1$.
\end{theorem}
\begin{proof}
Since $\Omega_{\rho}$ is symmetric about the real axis, it suffices to consider $0 \leq \theta \leq \pi/2$. Recall that, for fixed $r$, the image of the circle $|z|=r$ under $f \in \mathcal{A}$ is symmetric about the imaginary axis if $\RE(f(re^{i\theta})) = -\RE(f(re^{i(\pi-\theta)}))$ and $\IM(f(re^{i\theta})) = \IM(f(re^{i(\pi-\theta)}))$.
Now let $h(z) = \sinh^{-1}(z) = \ln(z+\sqrt{1+z^2})$. Then $\IM(h(z)) = \arg(z+\sqrt{1+z^2})$. For $z = re^{it}$, $t \in [0,\pi]$, and fixed $r \in (0,1)$, we have the following expressions: for $t = \theta$,
\[
\begin{array}{ll}
I_1 &= \arg\left(r(\cos\theta + i\sin\theta) + \sqrt{1+r^2(\cos(2\theta) + i\sin(2\theta))}\right)\\
&= \arg\left(z+\sqrt{1+z^2}\right),
\end{array}
\]
and for $t = \pi-\theta$,
\[
\begin{array}{ll}
I_2 &= \arg\left(r(\cos(\pi-\theta) + i\sin(\pi-\theta)) + \sqrt{1+r^2(\cos(2(\pi-\theta)) + i\sin(2(\pi-\theta)))}\right)\\
&= \arg\left(r(-\cos\theta + i\sin\theta) + \sqrt{1+r^2(\cos(2\theta) - i\sin(2\theta))}\right)\\
&= \arg\left(-\overline{z}+\sqrt{1+\overline{z}^2}\right).
\end{array}
\]
Now let us consider $(z+\sqrt{1+z^2})/({-\overline{z}+\sqrt{1+\overline{z}^2}})$. On rationalising the denominator, we get
\[
\displaystyle{\frac{z+\sqrt{1+z^2}}{-\overline{z}+\sqrt{1+\overline{z}^2}}} = \displaystyle{\frac{(z+\sqrt{1+z^2})(-z+\sqrt{1+z^2})}{(-\overline{z}+\sqrt{1+\overline{z}^2})(-z+\sqrt{1+z^2})}} = \displaystyle{\frac{1}{|-z+\sqrt{1+z^2}|^2}} = k > 0,
\]
where $k$ is some real positive constant. Thus,
\[
\begin{array}{ll}
&\arg\left( \displaystyle{\frac{z+\sqrt{1+z^2}}{-\overline{z}+\sqrt{1+\overline{z}^2}}} \right) = \arg(k) = 0\\
\Rightarrow & \arg\left(z+\sqrt{1+z^2}\right) = \arg\left(-\overline{z}+\sqrt{1+\overline{z}^2}\right)\\
\Rightarrow & I_1 = I_2.
\end{array}
\]
Similarly, $\RE(h(re^{i\theta})) = -\RE(h(re^{i(\pi-\theta)}))$ for $0 \leq \theta \leq \pi/2$. Hence the image of $|z|=r$ under $h$ is symmetric about the imaginary axis and thus, since $\rho(z)=1+h(z)$, the domain $\Omega_\rho$ is symmetric about the line $\RE(w) = 1$.
\qed \end{proof}
Now using Theorem~\ref{imag_symmetry}, we obtain the next result:
\begin{corollary}\label{g-disk}
The disk $\{w: |w-1|\leq \sinh^{-1}(r)\}$ is contained in $\rho(|z|\leq r)$ and is maximal.
\end{corollary}
\begin{proof}
Since $\min_{|z|=r} |\sinh^{-1}(z)| = |\sinh^{-1}(-r)|=\sinh^{-1}(r)$, the conclusion follows at once.
\qed \end{proof}
\begin{figure}[t]
\centering
\includegraphics[scale=0.55]{modulus.pdf}
\caption{ $\rho(\mathbb{D})$ lies in the annular region bounded between the circles $C1$ and $C2$.}
\label{fig:modulus}
\end{figure}
\begin{theorem}\label{geo_bounds}
We find that the following properties hold for $\rho(z)=1+\sinh^{-1}(z)$:
\begin{enumerate}[(i)]
\item $\rho(-r) \leq \RE \rho(z) \leq \rho(r) \quad (|z|\leq r<1)$;
\item $|\IM \rho(z)| \leq \pi/2 \quad (|z|\leq 1)$;
\item $\rho(-r) \leq |\rho(z)| \leq \rho(r)\quad (|z|\leq r<1)$;
\item $|\arg \rho(z)| \leq \tan^{-1}(1/t)$ where $t= \dfrac{4}{\pi} \sqrt{\sinh^{-1}(1)(1-\sinh^{-1}(1))}$.
\end{enumerate}
\end{theorem}
\begin{proof} (i) Since $\rho(z)$ is convex and typically real, for $|z|\leq r$ the value of $\RE \rho(z)$ lies between $\rho(re^{i\pi})=\rho(-r)$ and $\rho(re^{i0})=\rho(r)$, and the result follows.
(ii) Using Theorem \ref{imag_symmetry}, it suffices to take $\theta \in [0,\pi/2]$. The inequality then follows by letting $r$ tend to $1^-$ and observing that the function
\[
\IM \rho(z) = \arg \left(r \cos (\theta) + \sqrt{1+r^2 (\cos (2\theta)+i \sin (2\theta))}+i r\sin (\theta)\right)
\]
is strictly increasing on the interval $[0,\pi/2]$, and hence the result follows at once. (iii) The radially farthest and nearest points in $\rho(\mathbb{D})$ from the origin are respectively $B$ and $A$ (see Figure \ref{fig:modulus}), and therefore the result holds. Moreover, these points $A$ and $B$ lie on the real line, and hence the bounds for $|\rho(z)|$ and $\RE \rho(z)$ coincide.
The proof of (iv) is evident from Theorem \ref{incl_rel}(iii), so it is skipped here.
\qed \end{proof}
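These bounds can also be checked numerically; the sketch below (not part of the proof; sample sizes and tolerances are arbitrary) samples $\rho$ on circles $|z|=r$ and compares the observed extrema with the bounds in parts (i)--(iii).
\begin{verbatim}
# Sketch: numerical sanity check of parts (i)-(iii) of the theorem above.
import cmath, math

def rho(z):
    return 1 + cmath.asinh(z)

for r in (0.3, 0.7, 0.99):
    pts = [rho(r * cmath.exp(1j * 2 * math.pi * k / 4000)) for k in range(4000)]
    re = [w.real for w in pts]
    im = [w.imag for w in pts]
    mod = [abs(w) for w in pts]
    lo, hi = 1 - math.asinh(r), 1 + math.asinh(r)   # rho(-r), rho(r)
    print(r,
          (min(re) >= lo - 1e-9, max(re) <= hi + 1e-9),   # part (i)
          max(abs(v) for v in im) <= math.pi / 2 + 1e-9,  # part (ii)
          (min(mod) >= lo - 1e-9, max(mod) <= hi + 1e-9)) # part (iii)
\end{verbatim}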
Next we have the following important result:
\begin{lemma}
\label{disk_lem}
For $1-\sinh^{-1}(1)<a<1+\sinh^{-1}(1)$, let $r_a$ be given by
\[
r_a=
\left\{
\begin{array}{lr}
a-(1-\sinh^{-1}(1)), & 1-\sinh^{-1}(1)<a\leq1; \\
1+\sinh^{-1}(1)-a, & 1\leq a<1+\sinh^{-1}(1).
\end{array}
\right.
\]
Then
$
\{w : |w-a|<r_a\} \subset \Omega_{\rho}.
$
\end{lemma}
We omit the proof of Lemma \ref{disk_lem} as it directly follows from Theorem~\ref{imag_symmetry} and Corollary~\ref{g-disk}.
\begin{remark}
Evidently the domain $ \Omega_{\rho}$ is contained inside the disk
$\{w: |w-1|<\pi/2\}.$
\end{remark}
\section{Inclusion Relations}
This section establishes some inclusion results involving the class $\mathcal{S}^*_{\rho}$ with some well-known classes.
We consider the class $M(\beta)$, first studied by Uralegaddi $et\; al.$ \cite{uralegaddi}, given by
\[
M(\beta) := \left\{ f \in \mathcal{A}: \RE\left( \frac{zf'(z)}{f(z)}\right) < \beta,\; z \in \mathbb{D}, \beta>1\right\},
\]
and another interesting class introduced by Kanas and Wis\'niowska \cite{kanas} of $k$-starlike functions, denoted by $k-\mathcal{ST}$ and defined by
\[
k-\mathcal{ST} \coloneqq \left\{ f \in \mathcal{A}: \RE\left( \frac{zf'(z)}{f(z)}\right) > k\left| \frac{zf'(z)}{f(z)}-1 \right|,\; z \in\mathbb{D}, k \geq 0 \right\}.
\]
Note that $\mathcal{S}^* = 0-\mathcal{ST}$ and $\mathcal{S}^*_p = 1-\mathcal{ST}$, where $\mathcal{S}^*_p$ is the class of parabolic starlike functions \cite{ronn}.
We establish the following inclusion relations for the class $\mathcal{S}^*_{\rho}$.
\begin{theorem}
\label{incl_rel}
The class $\mathcal{S}^*_{\rho}$ satisfies the following relationships:
\begin{enumerate}[(i)]
\item $\mathcal{S}^*_{\rho} \subset \mathcal{S}^*_{\alpha} \subset \mathcal{S}^*$ for $0 \leq \alpha \leq 1-\sinh^{-1}(1)$;
\item $\mathcal{S}^*_{\rho} \subset M(\beta)$ for $\beta \geq 1+\sinh^{-1}(1)$;
\item $\mathcal{S}^*_{\rho} \subset \mathcal{SS}^*(\gamma)$ for $(2/\pi)\tan^{-1}(1/t) \leq \gamma \leq 1$ where $t=\dfrac{4}{\pi} \sqrt{\sinh^{-1}(1)(1-\sinh^{-1}(1))}$;
\item $k-\mathcal{ST} \subset \mathcal{S}^*_{\rho}$ for $k \geq 1+1/\sinh^{-1}(1)$.
\end{enumerate}
\end{theorem}
\begin{proof}
Consider $f \in \mathcal{S}^*_{\rho}$ which implies $zf'(z)/f(z) \prec 1+\sinh^{-1}(z)$. By Theorem \ref{geo_bounds}, it is evident that for $z \in \mathbb{D}$,
\[
1-\sinh^{-1}(1) = \min_{|z|=1} \RE (1+\sinh^{-1}(z)) \leq \RE \frac{zf'(z)}{f(z)}
\]
and
\[
\RE \frac{zf'(z)}{f(z)} \leq \max_{|z|=1}\RE (1+\sinh^{-1}(z)) = 1+\sinh^{-1}(1).
\]
This proves (i) and (ii). For (iii), let $w \in \mathbb{C}$, $X = \RE(w)$, $Y = \IM(w)$, and $b = 1-\sinh^{-1}(1)$. Consider the parabolic domain $\Gamma_P$ with boundary curve $\partial\Gamma_P = \gamma_p: Y^2 = 4a(X-b)$. The value of $a$ for which the smallest such parabola contains $\Omega_{\rho}$ and touches its peak points $1 \pm i\pi/2$ is $a=\pi^2/(16\sinh^{-1}(1))$. Let $P$ be the point on $\gamma_p$, with parametric coordinates $(b+at^2, 2at)$, at which the tangent line $OE$ through the origin touches the parabola for some parameter $t$. Let the equation of the tangent $OE$ be $y=mx$, where $m = dy/dx = (dy/dt)/(dx/dt) = 1/t$. Therefore at $P$ we have
\begin{equation*}
m =\frac{y}{x} \Rightarrow \frac{1}{t} = \frac{2at}{b+at^2},
\end{equation*}
which yields
\begin{equation}\label{t}
t = \sqrt{\dfrac{b}{a}} = \dfrac{4}{\pi} \sqrt{\sinh^{-1}(1)(1-\sinh^{-1}(1))}
\end{equation}
and the argument of the tangent to $\gamma_p$ at $P$ is $\tan^{-1}(1/t)$. Since $\Omega_{\rho} \subset \Gamma_P$, we obtain
\[
\left|\arg \frac{zf'(z)}{f(z)}\right| \leq \max_{|z|=1} \arg (\rho(z)) = \max \arg(\gamma_p) = \tan^{-1}(1/t),
\]
which demonstrates $f \in \mathcal{SS}^*((2/\pi)\tan^{-1}(1/t))$, where $t$ is given by \eqref{t}.
To show (iv), consider $f \in k-\mathcal{ST}$ along with the conic domain $\Gamma_k = \{w \in \mathbb{C}: \RE w > k|w-1|\}$. For $k>1$, let $\partial\Gamma_k$ represent the horizontal ellipse $\gamma_k: x^2 = k^2(x-1)^2+k^2y^2$ which may be rewritten as
\[
\frac{(x-x_0)^2}{a^2} + \frac{(y-y_0)^2}{b^2} = 1,
\]
where $x_0=k^2/(k^2-1),\, y_0=0,\, a=k/(k^2-1)$ and $b=1/\sqrt{k^2-1}$. For $\gamma_k \subset \Omega_{\rho}$, the condition $x_0+a\leq 1+\sinh^{-1}(1)$ must hold, or equivalently $k \geq 1+1/\sinh^{-1}(1)$. Since $\Gamma_{k_1} \subseteq \Gamma_{k_2}$ for $k_1 \geq k_2$, it follows that for $k \geq 1+1/\sinh^{-1}(1)$, $k-\mathcal{ST} \subset \mathcal{S}^*_{\rho}$. Figure \ref{fig:incl_rel} clearly depicts these relations.
\qed \end{proof}
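For reference, the numerical values of the constants appearing in the theorem can be computed as in the following sketch (illustrative only).
\begin{verbatim}
# Sketch: the numerical constants appearing in the inclusion relations.
import cmath, math

s = math.asinh(1.0)                       # sinh^{-1}(1) = log(1+sqrt(2))
print("alpha* =", 1 - s)                  # ~0.118626, part (i)
print("beta*  =", 1 + s)                  # ~1.881374, part (ii)
a = math.pi ** 2 / (16 * s)               # parameter of the parabola gamma_p
t = (4 / math.pi) * math.sqrt(s * (1 - s))
print("gamma  =", (2 / math.pi) * math.atan(1 / t))       # part (iii)
print("k*     =", 1 + 1 / s)              # ~2.134573, part (iv)
# the "peak points" 1 +- i*pi/2 of Omega_rho are rho(+-i):
print(1 + cmath.asinh(1j), 1 + 1j * math.pi / 2)
# the parabola Y^2 = 4a(X-b) passes through 1 + i*pi/2:
b = 1 - s
print((math.pi / 2) ** 2, 4 * a * (1 - b))
\end{verbatim}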
\begin{figure}[t]
\begin{minipage}{0.55\textwidth}
\includegraphics[width=0.95\linewidth, height=8cm]{inclRel.pdf}
\end{minipage}%
\begin{minipage}{0.45\textwidth}
\parbox{0.9\linewidth}{
\small
$\gamma_0: w = 1+\sinh^{-1}(z)$\\
$\gamma_1: \RE(w)=1-\sinh^{-1}(1)$\\
$\gamma_2: \RE(w)=1+\sinh^{-1}(1)$\\
$\gamma_3: |\arg(w)|\leq \tan^{-1}(1/t)$\\
where $t$ is given by \eqref{t}\\
$\gamma_ 4= \gamma_p: Y^2 = 4a(X-1+\sinh^{-1}(1))$\\
where $a = \pi^2/(16\sinh^{-1}(1))$\\
$\gamma_5= \gamma_k: \RE(w)>k|w-1|$\\
where $\; k=1+1/\sinh^{-1}(1)$\\
$\gamma_6= D_L: |w-1|<\sinh^{-1}(1)$\\
$\gamma_7= D_S: |w-1|<\pi/2$\\
$\RE$(A) $= 1-\sinh^{-1}(1)$\\
$\RE$(B) $= 1+\sinh^{-1}(1)$\\
$\IM$(C) $= -\IM(D) = \pi/2$\\
$\arg($E$)=-\arg($F$)= \tan^{-1}(1/t)$\\
where $t$ is given by \eqref{t}\\
O: origin\\
P: point of tangency of $\gamma_4$ w.r.t. OE
}
\end{minipage}
\caption{Boundary curves, depicting some inclusion relations for $w=1+\sinh^{-1}(z)$.}
\label{fig:incl_rel}
\end{figure}
For our next result, we consider $\mathcal{P}_n[C,D],$ the class of functions $p(z)$ of the form $1 + \sum_{k=n}^{\infty}c_{k}z^{k}$, satisfying $ p(z) \prec (1+Cz)/(1+Dz),$ where $-1 \leq D < C \leq 1$.
Denote by $\mathcal{P}_n(\alpha) := \mathcal{P}_n[1-2\alpha,-1] \; \text{and} \; \mathcal{P}_n:= \mathcal{P}_n(0)$. For $n=1$, $\mathcal{P} = \mathcal{P}_1$ is the Carath\'{e}odory class. We need the following lemmas:
\begin{lemma}
\label{p-nAlpha_lem}\normalfont{\cite{shah}}
For $p \in \mathcal{P}_n(\alpha)$, we have
\[
\left|\frac{zp'(z)}{p(z)}\right| \leq \frac{2(1-\alpha)nr^n}{(1-r^n)(1+(1-2\alpha)r^n)}, \; (|z|=r).
\]
\end{lemma}
\begin{lemma}
\label{p-nCD_lem}\normalfont{\cite{ravi-ron}}
For $p\in\mathcal{P}_n[C,D]$, we have
\[
\left|p(z)-\frac{1-CDr^{2n}}{1-D^2r^{2n}}\right| \leq \frac{(C-D)r^n}{1-D^2r^{2n}}, \; (|z|=r).
\]
Especially, for $p \in \mathcal{P}_n(\alpha)$, we have
\[
\left| p(z) - \frac{1+(1-2\alpha)r^{2n}}{1-r^{2n}}\right| \leq \frac{2(1-\alpha)r^n}{1-r^{2n}}, \; (|z|=r).
\]
\end{lemma}
\begin{theorem}
Let $-1 < D < C \leq 1$. If either of the following two conditions holds:
\begin{enumerate}[(i)]
\item $(1-\sinh^{-1}(1))(1-D^2) < 1-CD\leq 1-D^2$ and $C-D\leq (1-D)\sinh^{-1}(1)$;
\item $1-D^2 \leq 1-CD < (1+\sinh^{-1}(1))(1-D^2)$ and $C-D\leq (1+D)\sinh^{-1}(1)$.
\end{enumerate}
Then $\mathcal{S}^{*}[C,D]\subset \mathcal{S}^*_{\rho}$.
\end{theorem}
\begin{proof}
Let $f\in\mathcal{S}^{*}[C,D]$ which implies $zf'(z)/f(z)\in\mathcal{P}[C,D]$. Using Lemma \ref{p-nCD_lem} we have
\begin{equation}
\label{p-thm-eq}
\left|\frac{zf'(z)}{f(z)}-\frac{1-CD}{1-D^2}\right| \leq \frac{(C-D)}{1-D^2}.
\end{equation}
Let $a=(1-CD)/(1-D^2)$ and assume that (i) holds. Multiplying both sides of the inequality $(C-D) \leq (1-D)\sinh^{-1}(1)$ by $(1+D)$ and dividing by $(1-D^2)$ gives, after simplification, $(C-D)/(1-D^2)\leq a-(1-\sinh^{-1}(1))$. Also, the inequality $(1-\sinh^{-1}(1))(1-D^2) < 1-CD \leq 1-D^2$ is equivalent to $1-\sinh^{-1}(1)<(1-CD)/(1-D^2)\leq 1$. Therefore, from \eqref{p-thm-eq} we find that $w=zf'(z)/f(z)$ is contained in the disk $|w-a|<r_a$, where $r_a=a-(1-\sinh^{-1}(1))$ and $1-\sinh^{-1}(1) < a\leq 1$. Hence $f\in\mathcal{S}^*_{\rho}$ by Lemma \ref{disk_lem}. A similar argument applies when (ii) holds. \qed \end{proof}
\section{Radius Problems}
\noindent In this section, radius results for various subclasses of $\mathcal{A}$ are established. We begin by determining sharp $\mathcal{S}^*_{\alpha}\;(0\leq\alpha<1),\; \mathcal{M}(\beta)\;(\beta>1) $ and $k-\mathcal{ST}$-radii $(k \geq 0)$ for the class $\mathcal{S}^*_{\rho}$. Using Theorem \ref{incl_rel}, we can establish that $R_{\mathcal{S}^*_{\alpha}}(\mathcal{S}^*_{\rho})= R_{M(\beta)}(\mathcal{S}^*_{\rho})= 1$ for $0\leq\alpha\leq 1-\sinh^{-1}(1)$ and $\beta > 1+\sinh^{-1}(1)$.
\begin{theorem}
If $f \in \mathcal{S}^*_{\rho}$, then the following results hold:
\begin{enumerate}[(i)]
\item For $1-\sinh^{-1}(1)\leq\alpha<1$, we have $f \in \mathcal{S}^*_{\alpha}$ in $|z| \leq \sinh(1-\alpha)$.
\item For $1<\beta\leq 1+\sinh^{-1}(1)$, we have $f \in \mathcal{M}(\beta)$ in $|z| \leq \sinh(\beta-1)$.
\item For $k>0$, we have $f \in k-\mathcal{ST}$ in $|z| \leq \sinh(1/(k+1))$.
\end{enumerate}
The results are sharp.
\end{theorem}
\begin{proof}
Since $f \in \mathcal{S}^*_{\rho},\; zf'(z)/f(z) \prec 1+\sinh^{-1}(z)$ and hence for $|z|=r<1$ Theorem \ref{geo_bounds} gives
\[
1-\sinh^{-1}(r) \leq \RE \frac{zf'(z)}{f(z)} \leq 1+\sinh^{-1}(r),
\]
thereby validating the first two parts. Also, the constants $\sinh(1-\alpha)$ and $\sinh(\beta-1)$ are optimal for the function $f_0$ given by \eqref{func_int_rep}. Now to prove (iii), note that $f \in k-\mathcal{ST}$ in $|z|<r$, if
\[
\RE (1+\sinh^{-1}(w(z))) \geq k|1+\sinh^{-1}(w(z))-1| = k|\sinh^{-1}(w(z))|.
\]
Here $w$ denotes the Schwarz function. Since $\RE (1+\sinh^{-1}(w(z))) \geq 1-\sinh^{-1}(r)$ and $|\sinh^{-1}(w(z))| \leq \sinh^{-1}(r)$,
the inequality $\RE (1+\sinh^{-1}(w(z))) \geq k|\sinh^{-1}(w(z))|$ holds whenever $ 1-\sinh^{-1}(r) \geq k\sinh^{-1}(r)$, which implies $r\leq\sinh(1/(1+k))$. For the function $f_0$ given by \eqref{func_int_rep} and for $z_0=-\sinh(1/(1+k))$, we have
\[
\RE \frac{z_0f'_0(z_0)}{f_0(z_0)} = \RE (1+\sinh^{-1}(z_0)) = \frac{k}{k+1} = k|\sinh^{-1}(z_0)| = k\left|\frac{z_0f'_0(z_0)}{f_0(z_0)}-1\right|.
\]
This concludes the proof.
\qed \end{proof}
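A quick numerical check of the three radii at the extremal function $f_0$ of \eqref{func_int_rep} (the parameter values below are arbitrary):
\begin{verbatim}
# Sketch: the radii above, checked at the extremal point of f_0.
import math

s = math.asinh(1.0)

alpha = 0.5                       # any 1 - s <= alpha < 1
r = math.sinh(1 - alpha)
print(1 + math.asinh(-r), alpha)  # Re(z f0'/f0) at z = -r equals alpha

beta = 1.5                        # any 1 < beta <= 1 + s
r = math.sinh(beta - 1)
print(1 + math.asinh(r), beta)    # Re(z f0'/f0) at z = r equals beta

k = 2.0                           # any k > 0
z0 = -math.sinh(1 / (k + 1))
print(1 + math.asinh(z0), k * abs(math.asinh(z0)))  # both equal k/(k+1)
\end{verbatim}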
\begin{corollary}
Substituting $k=1$ in part (iii) above, we find that $f \in \mathcal{S}^*_{\rho}$ is parabolic starlike \normalfont{\cite{ronn}} in $|z| \leq \sinh(1/2)$.
\end{corollary}
\noindent In the next result, we find the $\mathcal{K}_{\alpha}$-radius for the class $\mathcal{S}^*_{\rho}$.
\begin{theorem}
Let $f\in \mathcal{S}^*_{\rho}$. Then $f\in \mathcal{K}_{\alpha}$ in $|z|<r_{\alpha}$, where $r_{\alpha}$ is the least positive root of
\begin{equation}
\label{rconv}
(1-r^2)\sqrt{1+r^2}\left(1-\sinh^{-1}(r)\right)\left(1-\alpha-\sinh^{-1}(r)\right)-r = 0 \quad (0\leq\alpha<1).
\end{equation}
\end{theorem}
\begin{proof}
Let $f \in \mathcal{S}^*_{\rho}$ and $w$ be a Schwarz function. Then $zf'(z)/f(z)= 1+\sinh^{-1}(w(z))$ such that
\[
1+\frac{zf''(z)}{f'(z)} = 1+\sinh^{-1}(w(z)) + \frac{zw'(z)}{(1+\sinh^{-1}(w(z)))\sqrt{1+w^2(z)}}
\]
which yields
\[
\RE\left(1+\frac{zf''(z)}{f'(z)}\right) \geq \RE\left(1+\sinh^{-1}(w(z))\right) - \left|\frac{zw'(z)}{(1+\sinh^{-1}(w(z)))\sqrt{1+w^2(z)}}\right|.
\]
We know for the Schwarz function $w$, the inequality $|w'(z)| \leq (1-|w(z)|^2)/(1-|z|^2)$ holds. Thus
we observe that
\begin{align*}
\RE\left(1+\frac{zf''(z)}{f'(z)}\right) &\geq 1-\sinh^{-1}(|z|) - \frac{|z|(1-|w(z)|^2)}{(1-\sinh^{-1}(|z|))(1-|z|^2)\sqrt{1+|z|^2}}\\
&\geq 1-\sinh^{-1}(|z|) - \frac{|z|}{(1-\sinh^{-1}(|z|))(1-|z|^2)\sqrt{1+|z|^2}}.
\end{align*}
Now consider the function $q(r):=1-\sinh^{-1}(r)-r/\left((1-\sinh^{-1}(r))(1-r^2)\sqrt{1+r^2}\right)$. This is a decreasing function in $[0,1)$ with $q(0)=1$. Therefore $\RE(1+zf''(z)/f'(z))>\alpha$ in $|z|<r_{\alpha}<1$, where $r_{\alpha}$ is given as the least positive root of the equation $q(r)=\alpha$, which is same as \eqref{rconv} and hence the result.
\qed \end{proof}
\begin{remark} Note that for $\alpha = 0$ the root of \eqref{rconv} is $r_0 \approx 0.37198$, which is not sharp, so the result can be further improved. The sharp $\mathcal{K}_{0}$-radius for the class $\mathcal{S}^*_{\rho}$ appears, graphically, to be approximately $0.400435$, but a rigorous proof of this is yet to be found.
\end{remark}
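The root $r_\alpha$ of \eqref{rconv} can be computed by bisection; the sketch below (illustrative only; the bracketing interval and iteration count are ad hoc) reproduces the value $r_0 \approx 0.37198$ quoted above.
\begin{verbatim}
# Sketch: solve equation (rconv) for r_alpha by bisection.
import math

def F(r, alpha):
    s = math.asinh(r)
    return (1 - r * r) * math.sqrt(1 + r * r) * (1 - s) * (1 - alpha - s) - r

def r_alpha(alpha, lo=0.0, hi=0.999, iters=80):
    for _ in range(iters):       # F is positive at lo and negative at hi
        mid = (lo + hi) / 2
        if F(mid, alpha) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(r_alpha(0.0))   # ~0.37198
print(r_alpha(0.5))   # K_{1/2}-radius
\end{verbatim}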
For our next theorems \ref{Kvn-rad-thm-1} - \ref{Kvn-rad-thm-4}, the following subclasses are required:\\
Let $\mathcal{S}^*_n[C,D]:= \{f\in\mathcal{A}_n : zf'(z)/f(z)\in \mathcal{P}_n[C,D] \}$. Also, let $\mathcal{S}^*_n(\alpha) := \mathcal{S}^*_n[1-2\alpha,-1] = \mathcal{A}_n \cap \mathcal{S}^*_{\alpha}\; \text{and}\; \mathcal{S}^*_{\rho,n} := \mathcal{A}_n \cap \mathcal{S}^*_{\rho}$.
Further, Ali $et\; al.$ \cite{ali12} studied the three classes $\mathcal{S}_n := \{f \in \mathcal{A}_n : f(z)/z \in \mathcal{P}_n\}, \, \mathcal{S}^*_n[C,D]$ and
\[
\mathcal{CS}_n(\alpha) := \left\{f \in \mathcal{A}_n : \frac{f(z)}{g(z)} \in \mathcal{P}_n, \; g \in \mathcal{S}^*_n(\alpha)\right\}.
\]
Now we obtain the $\mathcal{S}^*_{\rho,n}$-radii for the classes defined above.
\begin{theorem}
\label{Kvn-rad-thm-1}
For the class $\mathcal{S}_n$, the sharp $\mathcal{S}^*_{\rho,n}$-radius is given by:
\[
R_{\mathcal{S}^*_{\rho,n}}(\mathcal{S}_n) = \left(\frac{\sinh^{-1}(1)}{n + \sqrt{n^2 + \left(\sinh^{-1}(1)\right)^2}} \right)^{1/n}.
\]
\end{theorem}
\begin{proof}
Let $f \in \mathcal{S}_n$. Define $s: \mathbb{D} \rightarrow \mathbb{C}$ by $s(z) = f(z)/z$. Then $s \in \mathcal{P}_n$ and we can obtain $zf'(z)/f(z) - 1 = zs'(z)/s(z)$ from the above definition of $s$. Using Lemma \ref{disk_lem} and Lemma \ref{p-nAlpha_lem}, the following holds
\[
\left| \frac{zf'(z)}{f(z)} -1 \right| = \left|\frac{zs'(z)}{s(z)}\right| \leq \frac{2nr^n}{1-r^{2n}} \leq \sinh^{-1}(1),
\]
or equivalently $(\sinh^{-1}(1))r^{2n} + 2nr^n - \sinh^{-1}(1) \leq 0$. Therefore, the $\mathcal{S}^*_{\rho,n}$-radius of $\mathcal{S}_n$ is the least positive root of $(\sinh^{-1}(1))r^{2n} + 2nr^n - \sinh^{-1}(1)=0$ for $r\in(0,1)$. We can verify $\RE(f_0(z)/z)>0$ holds in $\mathbb{D}$ where $f_0(z) = z(1+z^n)/(1-z^n)$. Thus $f_0 \in \mathcal{S}_n$ and $zf'_0(z)/f_0(z) = 1 + 2nz^n/(1-z^{2n})$. Moreover, the result is sharp since at $z = R_{\mathcal{S}^*_{\rho,n}}(\mathcal{S}_n)$, we obtain
\[
\frac{zf'_0(z)}{f_0(z)} -1 = \frac{2nz^n}{1-z^{2n}} = \sinh^{-1}(1).
\]
The proof is complete.
\qed \end{proof}
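Equivalently, $R_{\mathcal{S}^*_{\rho,n}}(\mathcal{S}_n)$ is the positive root of $\sinh^{-1}(1)\,x^2+2nx-\sinh^{-1}(1)=0$ in $x=r^n$; a short numerical sketch (illustrative only):
\begin{verbatim}
# Sketch: the closed form above solves the quadratic in x = r^n.
import math

s = math.asinh(1.0)
for n in (1, 2, 3):
    R = (s / (n + math.sqrt(n * n + s * s))) ** (1.0 / n)
    x = R ** n
    print(n, R, s * x * x + 2 * n * x - s)   # last entry ~ 0
\end{verbatim}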
Let $\mathcal{F}$ define the class of functions $f \in \mathcal{A}$ satisfying $f(z)/z \in \mathcal{P}$. The radius of univalence and starlikeness of the class $\mathcal{F}$ is $\sqrt{2}-1$, as shown in \cite{macgreg}.
\begin{corollary}
\label{Kvn-rad-thm-2}
For the class $\mathcal{F}$, the $\mathcal{S}^*_{\rho}$-radius is stated as
\[
R_{\mathcal{S}^*_{\rho}}(\mathcal{F}) = -e+\sqrt{1+e^2} \approx 0.178105.
\]
\end{corollary}
\begin{theorem}
\label{Kvn-rad-thm-3}
For the class $\mathcal{CS}_n(\alpha)$, the sharp $\mathcal{S}^*_{\rho,n}$-radius is given by
\[
R_{\mathcal{S}^*_{\rho,n}}(\mathcal{CS}_n(\alpha)) = \left(\frac{\sinh^{-1}(1)}{n-\alpha+1 +\sqrt{(n-\alpha+1)^2 + (\sinh^{-1}(1)+2(1-\alpha))\sinh^{-1}(1)}}\right)^{1/n}.
\]
\end{theorem}
\begin{proof}
Let $f \in \mathcal{CS}_n(\alpha)$ and $g \in \mathcal{S}^*_n(\alpha)$. Setting $s(z)= f(z)/g(z)$, we clearly have $s \in \mathcal{P}_n$. Moreover, logarithmic differentiation gives
\[
\frac{zf'(z)}{f(z)} = \frac{zs'(z)}{s(z)} + \frac{zg'(z)}{g(z)}.
\]
Lemmas \ref{p-nAlpha_lem} and \ref{p-nCD_lem} now give
\begin{equation}
\label{eq_CSn-1}
\left| \frac{zf'(z)}{f(z)} - \frac{1+(1-2\alpha)r^{2n}}{1-r^{2n}} \right| \leq \frac{2(n-\alpha+1)r^n}{1-r^{2n}}.
\end{equation}
Since the centre satisfies $(1+(1-2\alpha)r^{2n})/(1-r^{2n}) \geq 1$, it follows from \eqref{eq_CSn-1} and Lemma \ref{disk_lem} that $f \in \mathcal{S}^*_{\rho,n}$ provided the following inequality holds:
\[
\frac{1 + 2(n-\alpha+1)r^n + (1-2\alpha)r^{2n}}{1-r^{2n}} \leq 1+\sinh^{-1}(1)
\]
or equivalently, $(2-2\alpha+\sinh^{-1}(1))r^{2n} + 2(n-\alpha+1)r^n - \sinh^{-1}(1) \leq 0$ holds. Thus, the least positive root of
\[
(2-2\alpha+\sinh^{-1}(1))r^{2n} + 2(n-\alpha+1)r^n - \sinh^{-1}(1) = 0
\]
gives the $\mathcal{S}^*_{\rho,n}$-radius for the class $\mathcal{CS}_n(\alpha)$.
Next examine the following functions
\begin{equation}
\label{eq_CSn-2}
f_0(z) = \frac{z(1+z^n)}{(1-z^n)^{(n+2-2\alpha)/n}} \; \text{and} \; g_0(z) = \frac{z}{(1-z^n)^{2(1-\alpha)/n}},
\end{equation}
which implies $f_0(z)/g_0(z) = (1+z^n)/(1-z^n)$ and $zg'_0(z)/g_0(z) = (1+(1-2\alpha)z^n)/(1-z^n)$. Moreover, it is obvious that $\RE(f_0(z)/g_0(z))>0$ and $\RE(zg'_0(z)/g_0(z))>\alpha$ in the unit disk $\mathbb{D}$. Hence $f_0 \in \mathcal{CS}_n(\alpha)$. At $z = R_{\mathcal{S}^*_{\rho,n}}(\mathcal{CS}_n(\alpha))$, the function $f_0$ defined in \eqref{eq_CSn-2} satisfies
\[
\frac{zf'_0(z)}{f_0(z)} = \frac{1 + 2(n-\alpha+1)z^n + (1-2\alpha)z^{2n}}{1-z^{2n}} = 1+\sinh^{-1}(1),
\]
which establishes the sharpness of the result.
\qed \end{proof}
\begin{theorem}
\label{Kvn-rad-thm-4}
For the class $\mathcal{S}^*_n[C,D]$, the $\mathcal{S}^*_{\rho,n}$-radius is given by
\[
R_{\mathcal{S}^*_{\rho,n}}(\mathcal{S}^*_n[C,D]) = \left\{
\begin{array}{ll}
\min \{ 1;R_1\}, & -1 \leq D < 0 < C \leq 1;\\
\min \{1;R_2\}, & 0 < D < C \leq 1,
\end{array}
\right.
\]
where
\[
R_1 := \left( \frac{2\sinh^{-1}(1)}{C-D + \sqrt{(C-D)^2 + 4(D^2(1+\sinh^{-1}(1))-CD)\sinh^{-1}(1)}}\right)^{1/n}
\]
and
\[
R_2 := \left( \frac{2\sinh^{-1}(1)}{C-D + \sqrt{(C-D)^2 + 4(D^2(\sinh^{-1}(1) - 1) + CD)\sinh^{-1}(1)}}\right)^{1/n}.
\]
\end{theorem}
\begin{proof}
Let $f \in \mathcal{S}^*_n[C,D]$. From Lemma \ref{p-nCD_lem}, we have
\begin{equation}
\label{eq_S*nCD}
\left| \frac{zf'(z)}{f(z)} -b \right| \leq \frac{(C-D)r^n}{1-D^2r^{2n}},
\end{equation}
where $b = (1-CDr^{2n})/(1-D^2r^{2n}), \; |z|=r,$ is the centre of the disk. Note that $b \geq 1$ for $-1 \leq D < 0 < C \leq 1$. By Lemma \ref{disk_lem}, $f \in \mathcal{S}^*_{\rho,n}$ provided the following condition holds:
\[
\frac{1 + (C-D)r^n - CDr^{2n}}{1-D^2 r^{2n}} \leq 1+\sinh^{-1}(1),
\]
which reduces to
\[
r \leq \left( \frac{2\sinh^{-1}(1)}{C-D + \sqrt{(C-D)^2 + 4(D^2(1+\sinh^{-1}(1)) - CD)\sinh^{-1}(1)}}\right)^{1/n} = R_1.
\]
Further, taking $D=0$, we get $b=1$. Then \eqref{eq_S*nCD} yields
\[
\left| \frac{zf'(z)}{f(z)} -1 \right| \leq Cr^n, \; (0 < C \leq 1).
\]
Now applying Lemma \ref{disk_lem} with $a=1$ gives $f \in \mathcal{S}^*_{\rho,n}$, if $r \leq ((\sinh^{-1}(1))/C)^{1/n}$.
For $0 < D < C \leq 1$, we have $b < 1$. Thus, using Lemma \ref{disk_lem} and \eqref{eq_S*nCD}, we have $f \in \mathcal{S}^*_{\rho,n}$ if the following holds:
\[
\frac{CDr^{2n} + (C-D)r^n -1}{1-D^2 r^{2n}} \leq \sinh^{-1}(1) - 1,
\]
or equivalently, if
\[
r \leq \left( \frac{2\sinh^{-1}(1)}{C-D + \sqrt{(C-D)^2 + 4(D^2(\sinh^{-1}(1) - 1) + CD)\sinh^{-1}(1)}}\right)^{1/n} = R_2.
\]
This concludes the proof.
\qed \end{proof}
The next theorem establishes radius results for some well-known classes mentioned earlier.
\begin{theorem}
The sharp $\mathcal{S}^*_{\rho}$-radii for the classes $\mathcal{S}^*_{L}, \mathcal{S}^*_{RL}, \mathcal{S}^*_{C}, \mathcal{S}^*_{e}, \Delta^* \, \text{and} \, \mathcal{BS}^*(\alpha)$ are:
\begin{enumerate}[(i)]
\item $R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{L}) = \sinh^{-1}(1)(2-\sinh^{-1}(1)) \approx 0.985928$.
\item $R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{RL}) = \dfrac{\left(2+(1+\sqrt{2})\sinh^{-1}(1)\right)\sinh^{-1}(1)}{5-3\sqrt{2} + \left(4(\sqrt{2}-1)+2\sinh^{-1}(1)\right)\sinh^{-1}(1)} \approx 0.964694$.
\item $R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{C}) = \dfrac{1}{2}\left(\sqrt{2\left(2+3\sinh^{-1}(1)\right)}-2\right) \approx 0.523831$.
\item $R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{e}) = \ln(1+\sinh^{-1}(1)) \approx 0.632002.$
\item $R_{\mathcal{S}^*_{\rho}}(\Delta^*) = \dfrac{\sinh^{-1}(1)(2+\sinh^{-1}(1))}{2(1+\sinh^{-1}(1))} \approx 0.674924$.
\item $R_{\mathcal{S}^*_{\rho}}(\mathcal{BS}^*(\alpha)) = \dfrac{-1+\sqrt{1+\alpha\left(2\sinh^{-1}(1)\right)^2}}{2\alpha\sinh^{-1}(1)}, \; \alpha \in [0,1]$.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}[(i)]
\item Suppose $f \in \mathcal{S}^*_{L}$. We have $zf'(z)/f(z) \prec \sqrt{1+z}$. When $|z|=r$, we obtain
\[
\left|\frac{zf'(z)}{f(z)} -1 \right| \leq 1 - \sqrt{1-r} \leq \sinh^{-1}(1),
\]
such that $r \leq (2-\sinh^{-1}(1))\sinh^{-1}(1) = R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{L})$ holds. Next examine the function
\[
f_0(z) = \frac{4z}{\left(1+\sqrt{1+z}\right)^2} {e^{2\left(\sqrt{1+z}-1\right)}}.
\]
Since $zf_0'(z)/f_0(z) = \sqrt{1+z}$, it follows that $f_0 \in \mathcal{S}^*_{L}$. As $zf_0'(z)/f_0(z) -1 = -\sinh^{-1}(1)$ is obtained at $z=-R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{L})$, the result is sharp.
\item Suppose $f \in \mathcal{S}^*_{RL}$; then
\[
\frac{zf'(z)}{f(z)} \prec \sqrt{2} - (\sqrt{2}-1) \sqrt{\frac{1-z}{(1+2(\sqrt{2}-1)z)}}.
\]
For $|z|=r$, the subsequent inequality holds
\[
\left|\frac{zf'(z)}{f(z)}-1 \right| \leq 1 - \sqrt{2} + (\sqrt{2}-1) \sqrt{\frac{1+r}{(1-2(\sqrt{2}-1)r)}} \leq \sinh^{-1}(1),
\]
provided
\[
r \leq \frac{\left(2+(1+\sqrt{2})\sinh^{-1}(1)\right)\sinh^{-1}(1)}{5-3\sqrt{2} + \left(4(\sqrt{2}-1)+2\sinh^{-1}(1)\right)\sinh^{-1}(1)} = R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{RL}).
\]
Next observe the following function defined as
\[
f_0(z)= z \exp\left(\int^z_0\frac{q_0(t)-1}{t}dt \right),
\]
where
\[
q_0(t) = \sqrt{2} - (\sqrt{2}-1) \sqrt{\frac{1-t}{(1+2(\sqrt{2}-1)t)}}.
\]
From the definition of $f_0$, at $z=-R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{RL})$, we have
\[
\frac{zf_0'(z)}{f_0(z)} = \sqrt{2} - (\sqrt{2}-1) \sqrt{\frac{1-z}{(1+2(\sqrt{2}-1)z)}} = 1-\sinh^{-1}(1).
\]
which confirms the sharpness.
\item Suppose $f \in \mathcal{S}^*_{C}$. So $zf'(z)/f(z) \prec 1+4z/3+2z^2/3$. This gives
\[
\left|\frac{zf'(z)}{f(z)}-1\right| \leq \frac{4r}{3} + \frac{2r^2}{3} \leq \sinh^{-1}(1), \; |z|=r,
\]
for $r \leq \frac{1}{2}\left(\sqrt{2\left(2+3\sinh^{-1}(1)\right)}-2\right) = R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{C})$. The sharpness of the result is established using the subsequent function
\[
f_0(z) = z \exp\left(\frac{4z+z^2}{3}\right),
\]
where $zf_0'(z)/f_0(z) = 1+(4z+2z^2)/3$ yields $f_0 \in \mathcal{S}^*_{C}$, and substituting $z = R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{C})$ gives $zf_0'(z)/f_0(z) = 1+\sinh^{-1}(1)$, thereby proving the sharpness.
\item Suppose $f \in \mathcal{S}^*_{e}$; then $zf'(z)/f(z) \prec e^z,$ which yields
\[
\left| \frac{zf'(z)}{f(z)}-1 \right| \leq e^r-1 \leq \sinh^{-1}(1) \;\text{holds in} \; |z|=r,
\]
provided $r \leq \ln(1+\sinh^{-1}(1)) = R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{e})$.
Now consider
\[
f_0(z)= z \exp\left(\int^z_0\frac{e^t -1}{t}dt \right).
\]
Since $zf_0'(z)/f_0(z) = e^z$, $f_0 \in \mathcal{S}^*_{e}$, and at $z=R_{\mathcal{S}^*_{\rho}}(\mathcal{S}^*_{e})$, we have $zf_0'(z)/f_0(z) = 1+\sinh^{-1}(1)$, which shows the sharpness of the result.
\item Suppose $f \in \Delta^*$ which gives $zf'(z)/f(z) \prec z+ \sqrt{1+z^2}$. Then,
\[
\left| \frac{zf'(z)}{f(z)}-1 \right|\leq r+\sqrt{1+r^2} -1 \leq \sinh^{-1}(1), \, |z|=r,
\]
for $r \leq \dfrac{\sinh^{-1}(1)(2+\sinh^{-1}(1))}{2(1+\sinh^{-1}(1))} = R_{\mathcal{S}^*_{\rho}}(\Delta^*)$.
For sharpness, define $f_0$ as
\[
f_0(z)= z \exp\left(\int^z_0\frac{t+\sqrt{1+t^2}-1}{t}dt \right).
\]
Since $zf_0'(z)/f_0(z) = z+\sqrt{1+z^2}$, $f_0 \in \Delta^*$, so at $z=R_{\mathcal{S}^*_{\rho}}(\Delta^*)$, we have $zf_0'(z)/f_0(z)$ $= 1+\sinh^{-1}(1)$ which shows the sharpness of the result.
\item Suppose $f \in \mathcal{BS}^*(\alpha),\;\alpha \in [0,1]$, which gives $zf'(z)/f(z) \prec 1+ z/(1-\alpha z^2)$. Then,
\[
\left| \frac{zf'(z)}{f(z)}-1 \right|\leq \frac{r}{1-\alpha r^2} \leq \sinh^{-1}(1), \, |z|=r,
\]
for $r \leq \dfrac{-1+\sqrt{1+\alpha\left(2\sinh^{-1}(1)\right)^2}}{2\alpha\sinh^{-1}(1)} = R_{\mathcal{S}^*_{\rho}}(\mathcal{BS}^*(\alpha)), \, \alpha \in (0,1]$. For $\alpha=0,\, r \leq \sinh^{-1}(1)$.
Next examine the function $f_0$ defined as
\[
f_0(z)= z \left(\frac{1+\sqrt{\alpha}z}{1-\sqrt{\alpha}z}\right)^{1/(2\sqrt{\alpha})}.
\]
Since $zf_0'(z)/f_0(z) = 1+ z/(1-\alpha z^2)$, we have $f_0 \in \mathcal{BS}^*(\alpha)$, so at $z=-R_{\mathcal{S}^*_{\rho}}(\mathcal{BS}^*(\alpha))$ we have $zf_0'(z)/f_0(z)$ $= 1-\sinh^{-1}(1)$, which ensures the sharpness of the result.
\end{enumerate}
\noindent Note that $ R_{\mathcal{S}^*_{\rho}}(\mathcal{BS}^*(1)) = \left(-1+\sqrt{1+(2\sinh^{-1}(1))^2}\right)/(2\sinh^{-1}(1)) \approx 0.58241$ and $R_{\mathcal{S}^*_{\rho}}(\mathcal{BS}^*(0))$ $= \sinh^{-1}(1) \approx 0.881374$.
\qed \end{proof}
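Each of the radii above is the point at which the corresponding bound first reaches $\sinh^{-1}(1)$; the sketch below (illustrative only) evaluates the left-hand sides at the claimed radii.
\begin{verbatim}
# Sketch: evaluate the defining bound of each class at the claimed radius;
# every printed pair should agree with sinh^{-1}(1) ~ 0.881374.
import math

s = math.asinh(1.0)
sqrt2 = math.sqrt(2.0)

R_L  = s * (2 - s)
R_RL = (2 + (1 + sqrt2) * s) * s / (5 - 3 * sqrt2 + (4 * (sqrt2 - 1) + 2 * s) * s)
R_C  = 0.5 * (math.sqrt(2 * (2 + 3 * s)) - 2)
R_e  = math.log(1 + s)
R_D  = s * (2 + s) / (2 * (1 + s))
alpha = 1.0
R_B  = (-1 + math.sqrt(1 + alpha * (2 * s) ** 2)) / (2 * alpha * s)

print(1 - math.sqrt(1 - R_L), s)                                   # S*_L
print((sqrt2 - 1) * (math.sqrt((1 + R_RL) /
      (1 - 2 * (sqrt2 - 1) * R_RL)) - 1), s)                       # S*_RL
print(4 * R_C / 3 + 2 * R_C ** 2 / 3, s)                           # S*_C
print(math.exp(R_e) - 1, s)                                        # S*_e
print(R_D + math.sqrt(1 + R_D ** 2) - 1, s)                        # Delta*
print(R_B / (1 - alpha * R_B ** 2), s)                             # BS*(1)
\end{verbatim}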
Next we present some radius problems for certain classes of functions defined as a ratio of functions:
\[
\mathcal{F}_1 := \left\{ f \in \mathcal{A}_n : \RE \left(\frac{f(z)}{g(z)}\right) > 0 \;\text{and}\; \RE \left(\frac{g(z)}{z}\right) > 0,\; g \in \mathcal{A}_n\right\},
\]
\[
\mathcal{F}_2 := \left\{ f \in \mathcal{A}_n : \RE \left(\frac{f(z)}{g(z)}\right) > 0 \;\text{and}\; \RE \left(\frac{g(z)}{z}\right) > 1/2,\; g \in \mathcal{A}_n\right\},
\]
and
\[
\mathcal{F}_3 := \left\{ f \in \mathcal{A}_n : \left|\frac{f(z)}{g(z)} -1 \right| < 1 \;\text{and}\; \RE \left(\frac{g(z)}{z}\right) > 0,\; g \in \mathcal{A}_n\right\}.
\]
\begin{theorem} \label{ratio_func}
For functions in the classes $\mathcal{F}_1,\, \mathcal{F}_2$ and $ \mathcal{F}_3$, the sharp $\mathcal{S}^*_{\rho,n}$-radii respectively, are:
\begin{enumerate}[(i)]
\item $R_{\mathcal{S}^*_{\rho,n}}(\mathcal{F}_1) = \left(\dfrac{\sqrt{4n^2 + (\sinh^{-1}(1))^2}-2n}{\sinh^{-1}(1)}\right)^{1/n}$.
\item $R_{\mathcal{S}^*_{\rho,n}}(\mathcal{F}_2) = \left( \dfrac{\sqrt{9n^2 + 4\sinh^{-1}(1)(n + \sinh^{-1}(1))} -3n}{2(n+\sinh^{-1}(1))} \right)^{1/n}$.
\item $R_{\mathcal{S}^*_{\rho,n}}(\mathcal{F}_3) = R_{\mathcal{S}^*_{\rho,n}}(\mathcal{F}_2)$.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}[(i)]
\item Let $f \in \mathcal{F}_1$ and consider the functions $s,d: \mathbb{D} \rightarrow \mathbb{C}$ where $s(z)=f(z)/g(z)$ and $d(z)=g(z)/z$. Clearly, $s,d \in \mathcal{P}_n$. As $f(z)=zd(z)s(z)$, applying Lemma \ref{p-nAlpha_lem} here gives
\[
\left|\frac{zf'(z)}{f(z)} -1 \right| \leq \frac{4nr^n}{1-r^{2n}} \leq \sinh^{-1}(1)
\]
such that
\[
r \leq \left(\frac{\sqrt{4n^2 + (\sinh^{-1}(1))^2}-2n}{\sinh^{-1}(1)}\right)^{1/n} = R_{\mathcal{S}^*_{\rho,n}}(\mathcal{F}_1)
\]
holds. Next examine the functions
\[
f_0(z)= z \left(\frac{1+z^n}{1-z^n}\right)^2 \; \text{and} \; g_0(z) = z \left(\frac{1+z^n}{1-z^n}\right).
\]
Evidently, $\RE(f_0(z)/g_0(z))>0$ and $\RE(g_0(z)/z)>0$, which implies $f_0 \in \mathcal{F}_1$. Further calculation yields at $z = R_{\mathcal{S}^*_{\rho,n}}(\mathcal{F}_1)e^{i\pi/n}$
\[
\frac{zf'_0(z)}{f_0(z)} = 1 + \frac{4nz^n}{1-z^{2n}} = 1 - \sinh^{-1}(1),
\]
which validates the result is sharp.
\item Let $f \in \mathcal{F}_2$ and consider the functions $s,d: \mathbb{D} \rightarrow \mathbb{C}$ where $s(z)=f(z)/g(z)$ and $d(z)=g(z)/z$. Clearly, $s \in \mathcal{P}_n(1/2)$ and $d \in \mathcal{P}_n$. As $f(z)=zd(z)s(z)$, applying Lemma \ref{p-nAlpha_lem} here gives
\[
\left|\frac{zf'(z)}{f(z)} -1 \right| \leq \frac{2nr^n}{1-r^{2n}} + \frac{nr^n}{1-r^n} = \frac{3nr^n + nr^{2n}}{1-r^{2n}} \leq \sinh^{-1}(1),
\]
whenever
\[
r \leq \left( \frac{\sqrt{9n^2 + 4\sinh^{-1}(1)(n + \sinh^{-1}(1))} -3n}{2(n+\sinh^{-1}(1))} \right)^{1/n} = R_{\mathcal{S}^*_{\rho,n}}(\mathcal{F}_2).
\]
Therefore, $f \in \mathcal{S}^*_{\rho,n}$ holds for $r \leq R_{\mathcal{S}^*_{\rho,n}}(\mathcal{F}_2)$.
Next observe that $\RE(g_0(z)/z)>1/2$ while $\RE(f_0(z)/g_0(z))>0$ for the functions
\[
f_0(z) = \frac{z(1+z^n)}{(1-z^n)^2} \; \text{and} \; g_0(z) = \frac{z}{1-z^n}.
\]
Therefore $f_0 \in \mathcal{F}_2$, and sharpness follows since at $z = R_{\mathcal{S}^*_{\rho,n}}(\mathcal{F}_2)$ we have
\[
\frac{zf'_0(z)}{f_0(z)} -1 = \frac{3nz^n + nz^{2n}}{1-z^{2n}} = \sinh^{-1}(1).
\]
\item Let $f \in \mathcal{F}_3$ and consider the functions $s,d: \mathbb{D} \rightarrow \mathbb{C}$ where $s(z)=g(z)/f(z)$ and $d(z) = g(z)/z$. Then $d \in \mathcal{P}_n$. Since $|1/s(z)-1|<1$ holds if and only if $\RE(s(z))>1/2$, we have $s \in \mathcal{P}_n(1/2)$. As $f(z)=zd(z)/s(z)$, on applying Lemma \ref{p-nAlpha_lem}, we obtain
\[
\left|\frac{zf'(z)}{f(z)} -1 \right| \leq \frac{3nr^n + nr^{2n}}{1-r^{2n}} \leq \sinh^{-1}(1).
\]
The rest of the proof is omitted as it is analogous to the proof of Theorem \ref{ratio_func}(ii). The sharpness can be verified as follows. Examine the functions
\[
f_0(z) = \frac{z(1+z^n)^2}{1-z^n} \; \text{and} \; g_0(z) = \frac{z(1+z^n)}{1-z^n}.
\]
Using the above definitions of $f_0$ and $g_0$, we see that
\[
\RE\left(\frac{g_0(z)}{f_0(z)}\right) = \RE\left(\frac{1}{1+z^n}\right) > \frac{1}{2} \; \text{and} \; \RE\left(\frac{g_0(z)}{z}\right) = \RE\left(\frac{1+z^n}{1-z^n}\right) > 0,
\]
and therefore, $f_0 \in \mathcal{F}_3$. Now at $z = R_{\mathcal{S}^*_{\rho,n}}(\mathcal{F}_3) e^{i\pi/n}$, we obtain
\[
\frac{zf'_0(z)}{f_0(z)} -1 = \frac{3nz^n - nz^{2n}}{1-z^{2n}} = -\sinh^{-1}(1),
\]
which serves as validation for the sharp result.
\end{enumerate}
This concludes the proof.
\qed \end{proof}
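The radii in parts (i) and (ii) are the positive roots of $\sinh^{-1}(1)\,x^{2}+4nx-\sinh^{-1}(1)=0$ and $(n+\sinh^{-1}(1))x^{2}+3nx-\sinh^{-1}(1)=0$ in $x=r^{n}$, which can be checked numerically (illustrative sketch only):
\begin{verbatim}
# Sketch: the radii of the last theorem as roots of quadratics in x = r^n.
import math

s = math.asinh(1.0)
for n in (1, 2, 3):
    x1 = (math.sqrt(4 * n * n + s * s) - 2 * n) / s               # R(F_1)^n
    x2 = (math.sqrt(9 * n * n + 4 * s * (n + s)) - 3 * n) / (2 * (n + s))
    print(n, s * x1 * x1 + 4 * n * x1 - s,                         # ~ 0
             (n + s) * x2 * x2 + 3 * n * x2 - s)                   # ~ 0
\end{verbatim}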
| {
"timestamp": "2020-10-21T02:13:48",
"yymm": "2010",
"arxiv_id": "2010.10072",
"language": "en",
"url": "https://arxiv.org/abs/2010.10072",
"abstract": "This paper deals with some radius results and inclusion relations that are established for functions in a newly defined subclass of starlike functions associated with a petal shaped domain.",
"subjects": "Complex Variables (math.CV)",
"title": "Starlike Functions associated with a Petal Shaped Domain",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126500692714,
"lm_q2_score": 0.7248702880639791,
"lm_q1q2_score": 0.7094397205875732
} |
https://arxiv.org/abs/1303.1052 | Random walk attachment graphs | We consider the random walk attachment graph introduced by Saramäki and Kaski and proposed as a mechanism to explain how behaviour similar to preferential attachment may appear requiring only local knowledge. We show that if the length of the random walk is fixed then the resulting graphs can have properties significantly different from those of preferential attachment graphs, and in particular that in the case where the random walks are of length 1 and each new vertex attaches to a single existing vertex the proportion of vertices which have degree 1 tends to 1, in contrast to preferential attachment models. AMS 2010 Subject Classification: Primary 05C82. Key words and phrases:random graphs; preferential attachment; random walk. | \section{Introduction}
There is currently great interest in the preferential attachment
model of network growth, usually called the Barab\'{a}si-Albert
\cite{ba,scalefreedefs} model, though it dates back at least to Yule \cite{yule}, and was
discussed also by Simon \cite{simon}. In the simplest version of this an
existing graph is incremented at each stage by adding a
single new vertex which then attaches to a single pre-existing
vertex; this latter is chosen from amongst those of the
pre-existing graph with probability proportional to the degree of
that vertex. In the Barab\'{a}si-Albert model the new vertex will
connect to $m$ vertices, where $m$ is fixed and is a parameter of the model, but here we only consider the
case $m=1$. One of the best known properties of the model is that
it produces a power law degree distribution, as shown rigorously by
Bollob\'{a}s et al \cite{brst}.
One weakness of this model and its
generalisations is that this implicitly requires a calculation
across all the existing vertices, or at least a knowledge of the
total degree (sum of the vertex degrees) of the graph. This
requirement then destroys the potential for this model
to have emergent properties from local behaviour.
A possible solution to this was proposed by Saram\"{a}ki and Kaski
\cite{sarkaski}. In their model the new vertex simply chooses a single vertex
from the graph and then executes a random walk of length $\ell$ step
initiated from that vertex. Saram\"{a}ki and Kaski \cite{sarkaski} and Evans
and Saram\"{a}ki \cite{evans} claim that this reproduces the
Barab\'{a}si-Albert degree distribution, even when $\ell=1$. It is
clear that this is the case if the random walk is run for long
enough to have converged to its stationary distribution. However
we will prove that in the particular case $\ell=1$ the degree sequence does not converge to a power law distribution, but rather to a degenerate limiting distribution in which almost every
vertex has degree $1$.
\section{The Model}
Let $G_0$ be an arbitrary (perhaps connected) graph, with $v_0$ vertices and $e_0$ edges. Form $G_{n+1}$
from $G_n$ by adding a single vertex. This vertex chooses a single
vertex (i.e. this corresponds to $m=1$ in the Barab\'{a}si-Albert model) to connect to by picking a vertex uniformly at
random in $G_n$ and then, conditional on the vertex chosen, performing a simple
random walk of length $\ell$ on $G_n$, starting from the randomly chosen vertex, and then choosing to connect to the destination vertex. Most of the time we will assume that $\ell$ is deterministic, but we will also consider a particular case where $\ell$ is replaced by a random variable.
\section{Number of leaves}
We first consider the number of leaves in the graph. Let $\hier[p]{n}{d}$ be the proportion of vertices in $G_n$ with
degree $d$, and let $L_n=\hier[p]{n}{1}$, i.e. the proportion of
leaves. The number of edges in $G_n$ will be $n+e_0$, the total
degree will thus be $2(n+e_0)$, and the number of vertices will be
$n+v_0$. Let $V_n$ be
the vertex initially chosen at random at step $n$, and let $W_n$ be
the vertex selected by the random walk, so the new vertex connects
to $W_n$. We now prove the main result, which applies to the case where $\ell=1$.
\begin{thm}\label{l1}When $\ell=1$, as $n\to\infty$, $L_n\to 1$, almost surely.\end{thm}
\begin{proof}We assume that $G_0$ is not a star. If $G_0$ is a star,
then it is clear that, with probability $1$, $G_n$ will eventually
not be a star, so we can just wait until this happens and re-label
the first non-star graph as $G_0$. If $G_n$ is not a star each vertex has at least one neighbour which is not a leaf,
and in particular no leaves have a leaf as their neighbour. If $V_n$ is a leaf,
which has probability $L_n$, then $W_n$ will be one of its
neighbours, which will not be a leaf, so in this case the number of
leaves increases by $1$. Hence, considering the conditional expectation of the number of leaves in $G_{n+1}$,
\begin{equation}\label{submart}\E((n+v_0+1)L_{n+1}|G_n)\geq
(n+v_0)L_n+L_n=(n+v_0+1)L_n,\end{equation} and so $\E(L_{n+1}|G_n)\geq L_n$; hence
$(L_n)_{n \in \N}$ is a submartingale taking values in $[0,1]$,
and thus converges almost surely and in $\mathcal{L}^2$ to a limit, which we call
$L_{\infty}$.
To show that $L_{\infty}=1$ almost surely, note that conditional on $V_n$ having
degree $d$ the probability of $W_n$ not being a leaf is at
least $1/d$, so we can make \eqref{submart} sharper, getting
\begin{equation}\E(L_{n+1}|G_n) \geq
L_n+\sum_{d=2}^{\infty}\frac{\hier[p]{n}{d}}{(n+v_0+1)d}.\end{equation}
The total degree of non-leaves in $G_n$ is $2(n+e_0)-L_n(n+v_0)=(2-L_n)(n+v_0)+2(e_0-v_0)$, and the
number of non-leaves is $(1-L_n)(n+v_0)$, so the average degree of
non-leaves is $\frac{2-L_n}{1-L_n}+\frac{2(e_0-v_0)}{(n+v_0)(1-L_n)}$. Hence at least half the non-leaves have degree at most
$2\left(\frac{2-L_n}{1-L_n}+\frac{2(e_0-v_0)}{(n+v_0)(1-L_n)}\right)$ and so
\begin{equation}\E(L_{n+1}|G_n) \geq
L_n+\frac{1-L_n}{2(n+1)}\left(2\left(\frac{2-L_n}{1-L_n}+\frac{2(e_0-v_0)}{(n+v_0)(1-L_n)}\right)\right)^{-1}\end{equation} and so \begin{equation}\label{expiter}\E(L_{n+1}) \geq
\E(L_n)+\frac{1}{2(n+1)}\E\left(\frac{1-L_n}{2}\left(\frac{2-L_n}{1-L_n}+\frac{2(e_0-v_0)}{(n+v_0)(1-L_n)}\right)^{-1}\right).\end{equation}
If $\E(L_{\infty})=\lim_{n\to\infty}\E(L_n)<1$, then for some fixed $c<1$ we must have $L_{n}\leq c$ with probability bounded away from zero for all large $n$. The expectation on the right of \eqref{expiter} is then bounded away from zero for large $n$, so summing \eqref{expiter} over $n$ would force $\E(L_n)\to\infty$, contradicting $L_n\leq 1$. Hence $\E(L_{\infty})=1$ and thus
$L_{\infty}=1$ almost surely.\end{proof}
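The following Python sketch (illustrative only; the choice of $G_0$ and the step counts are arbitrary) simulates the model with $\ell=1$ and reports the empirical proportion of leaves, which slowly increases with $n$, consistent with Theorem \ref{l1}.
\begin{verbatim}
# Sketch: simulate the random walk attachment model and track the
# proportion of leaves L_n for walk length ell = 1.
import random

def simulate(steps, ell=1, seed=0):
    rng = random.Random(seed)
    # G_0 = path on 4 vertices (connected, not a star)
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    for _ in range(steps):
        v = rng.randrange(len(adj))          # uniformly chosen start vertex
        for _ in range(ell):                 # simple random walk of length ell
            v = rng.choice(adj[v])
        new = len(adj)                       # new vertex attaches to v
        adj[new] = [v]
        adj[v].append(new)
    return sum(1 for u in adj if len(adj[u]) == 1) / len(adj)

for steps in (10**3, 10**4, 10**5):
    print(steps, simulate(steps))            # leaf proportion slowly increases
\end{verbatim}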
It should be noted that the argument for Theorem \ref{l1} is dependent on the walk length being fixed at $1$. For example, define a sequence of random variables $(X_n)_{n\in\N}$ which are independent and identically distributed with $P(X_n=0)=p$ and $P(X_n=1)=1-p$, and let the walk length from $V_n$ to $W_n$ be $X_n$, rather than a fixed $\ell$ as previously.
Then, by the same argument as before
$$\E(L_{n+1}-L_n|G_n,X_{n+1}=1)\geq \frac{1-L_n}{2}\frac{1-L_n}{2(n+v_0+1)(2-L_n)}+O(n^{-2}).$$ As there can be at
most one more leaf in $G_{n+1}$ than in $G_n$, we also have $$\E(L_{n+1}-L_n|G_n,X_{n+1}=1)\leq
\frac{1-L_n}{n+v_0+1}+O(n^{-2}).$$
Also, if there are no random walk steps from the initially chosen vertex the probability that the new vertex connects to
a leaf is simply $L_n$, so $$\E((n+v_0+1)L_{n+1}|G_n,X_{n+1}=0)=(n+v_0)L_n+1-L_n,$$ and hence
$$\E(L_{n+1}-L_n|G_n,X_{n+1}=0)=\frac{1}{n+v_0+1}(1-2L_n).$$
So, if we have $X_{n}=0$ with probability $p$ and $1$ with probability $1-p$ for all $n$ independently of each other \begin{equation}\label{ub}\E(L_{n+1}-L_n|G_n)\geq
\frac{1}{n+v_0+1}\left[p(1-2\lambda)+(1-p)\frac{(1-\lambda)^2}{4(2-\lambda)}\right]+O(n^{-2}).\end{equation} Similarly, \begin{equation}\label{lb}\E(L_{n+1}-L_n|G_n)\leq
\frac{1}{n+v_0+1}\left[1-\lambda(1+p)\right]+O(n^{-2}).\end{equation}
The right hand side of \eqref{ub} is positive if $$L_n<\frac{1+9p-2\sqrt{8p^2+p}}{1+7p}$$ and $n$ is sufficiently large, and the right hand side of
\eqref{lb} is negative if $L_n>\frac{1}{1+p}$ and $n$ is sufficiently large.
Note that $$\frac{1}{1+p}-\frac{1+9p-2\sqrt{8p^2+p}}{1+7p} \ge 0$$ for $p
\in [0,1]$ with equality only at $p=0$ and $p=1$, and that $$\frac{1}{1+p}\leq 1,$$ with equality only if $p=0$.
A version of the argument of Lemma 2.6 of \cite{pemantlesurvey} now shows that, almost surely, $$\liminf_{n\to\infty}L_n\geq \frac{1+9p-2\sqrt{8p^2+p}}{1+7p}$$ and $$\limsup_{n\to\infty}L_n\leq \frac{1}{1+p}.$$ So we do not get a similar result to Theorem \ref{l1} in this setting.
\section{$G_0$ Bipartite}
We now consider a special case which demonstrates that, for all odd
$\ell$, the random walk model of \cite{sarkaski} differs
fundamentally from that of the Barab\'{a}si-Albert model.
Assume that $G_0$ is a bipartite graph, with the two parts coloured
as red and blue. Then, in both models, for all $n$ the graph $G_n$ will be
bipartite, and the parts can be coloured red and blue consistently
for each $n$. Let the proportion of red vertices in $G_n$ be $R_n$.
We begin with the random walk model.
\begin{thm}
There exists a random variable $R_{\infty}$ such that $R_n$ converges almost surely to
$R_{\infty}$. If $\ell$ is even, then $R_{\infty}=\half$, almost
surely, while if $\ell$ is odd $R_{\infty}$ is a random variable
with a Beta distribution.
\end{thm}
\begin{proof}
Conditional on $G_n$, $V_n$ will be red with probability $R_n$. If
$\ell$ is odd $W_n$ will be of opposite colour to $V_n$, which
implies that the new vertex (which connects to $W_n$) will be of the
same colour as $V_n$, and thus, conditional on $G_n$, will be red
with probability $R_n$ and blue with probability $1-R_n$. Hence in
this case the colours of vertices are equivalent to the colours of
the balls in a standard P\'{o}lya urn (where when a ball is drawn
two of the same colour are returned), and so by classical results on the P\'{o}lya urn (see, for example, Theorem 2.1 in \cite{pemantlesurvey}) $R_n$ converges almost
surely to $R_{\infty}$ where $R_{\infty}$ has a Beta distribution
whose parameters depend on $G_0$.
If $\ell$ is even then $W_n$ is of the same colour as $V_n$ and so
the new vertex is of opposite colour to $V_n$. Hence this case
corresponds to a two-colour generalised P\'{o}lya urn where a ball
is selected and a ball of the opposite colour is added, namely a
Friedman urn with $\alpha=0$ and $\beta=1$. In this case
$R_n\to\half$ almost surely; see for example Freedman \cite{freedman}, and Theorem 2.2 in \cite{pemantlesurvey}.
\end{proof}
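A quick simulation (illustrative only; $G_0$ a single edge and the seeds and step counts are arbitrary) contrasts the two cases: for odd $\ell$ the red proportion has a random limit, so different runs spread out, while for even $\ell$ it concentrates near $1/2$.
\begin{verbatim}
# Sketch: colour proportions in the bipartite case for ell = 1 and ell = 2.
import random

def red_fraction(steps, ell, seed):
    rng = random.Random(seed)
    adj = {0: [1], 1: [0]}          # G_0 = a single edge
    colour = {0: 0, 1: 1}           # 0 = red, 1 = blue
    for _ in range(steps):
        v = rng.randrange(len(adj))
        for _ in range(ell):
            v = rng.choice(adj[v])
        new = len(adj)
        adj[new] = [v]
        adj[v].append(new)
        colour[new] = 1 - colour[v]  # opposite colour to the attachment vertex
    return sum(1 for u in colour if colour[u] == 0) / len(colour)

print([round(red_fraction(20000, 1, s), 3) for s in range(5)])  # spread out
print([round(red_fraction(20000, 2, s), 3) for s in range(5)])  # near 0.5
\end{verbatim}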
\begin{thm} In the Barab\'{a}si-Albert model $R_{\infty}=\half$
almost surely.
\end{thm}
\begin{proof}
In this model it is possible to associate the selection of a vertex
with an urn model by considering half-edges, and giving each
half-edge the colour of its associated vertex, i.e. each edge is
split into a red half and a blue half. The selection of a vertex
with probability proportional to its degree is then equivalent to
selecting a half-edge uniformly at random and then selecting the
associated vertex. As the new edge added in $G_{n+1}$ will always
consist of a blue half and a red half, the proportion of red
half-edges must converge to $\half$, and as a red vertex is added if
and only if a blue vertex is selected, the proportion of red
vertices will converge to $\half$, almost surely.
\end{proof}
So in this respect the behaviour of the random walk model is
different from the Barab\'{a}si-Albert model when $\ell$ is odd,
regardless of the size of $\ell$.
\section{Discussion}
We have demonstrated that the model of Saram\"{a}ki and Kaski is
fundamentally different from that of Barab\'{a}si and Albert, unless
we allow an indefinite length for the random walk component. It does
have the advantage of not requiring a global calculation, retaining
the local behaviour characteristic which is desirable in models of
emergent behaviour. An alternate approach might be to imagine that the
addition of edges is affected by the vertices in $G_n$, rather than
by the new vertex. Thus each vertex in $G_n$ could link to a new
vertex as it arises with probability proportional to its degree,
independently of all other vertices, as in the variant of preferential attachment studied by Dereich and M\"{o}rters \cite{dm1,dm2}. This, of course, destroys one of
the usual assumptions of the preferential attachment model that the
number of new links is some fixed value $m$, though we could
substitute the condition that the average number added was fixed.
The urn model approach is interesting particularly since there is
much known about these (see for example the survey paper by Pemantle \cite{pemantlesurvey}). We might
generalise the model to consider directed graphs where there are $k$
colours $c_i$, $i=0,\ldots,k-1$, with directed edges only between a vertex
of colour $c_i$ and one of colour $c_{(i+1)(\mathrm{mod}~k)}$. When a new
vertex is added it links at random to a vertex and then takes $\ell$
random steps along directed edges, its colour then being determined.
The case $\ell \not\equiv 0 \pmod{k}$ will have the proportions of each
colour converging to $1/k$, whereas for $\ell \equiv 0 \pmod{k}$ there will
be a Dirichlet distribution with parameters depending on $G_0$.
\section{Acknowledgement}
The first author acknowledges support from the European Union
through funding under FP7-ICT-2011-8 project HIERATIC (316705).
| {
"timestamp": "2013-07-24T02:04:15",
"yymm": "1303",
"arxiv_id": "1303.1052",
"language": "en",
"url": "https://arxiv.org/abs/1303.1052",
"abstract": "We consider the random walk attachment graph introduced by Saramäki and Kaski and proposed as a mechanism to explain how behaviour similar to preferential attachment may appear requiring only local knowledge. We show that if the length of the random walk is fixed then the resulting graphs can have properties significantly different from those of preferential attachment graphs, and in particular that in the case where the random walks are of length 1 and each new vertex attaches to a single existing vertex the proportion of vertices which have degree 1 tends to 1, in contrast to preferential attachment models. AMS 2010 Subject Classification: Primary 05C82. Key words and phrases:random graphs; preferential attachment; random walk.",
"subjects": "Probability (math.PR)",
"title": "Random walk attachment graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126488274566,
"lm_q2_score": 0.7248702880639791,
"lm_q1q2_score": 0.7094397196874185
} |
https://arxiv.org/abs/1501.07774 | Near Optimal Subdivision Algorithms for Real Root Isolation | We describe a subroutine that improves the running time of any subdivision algorithm for real root isolation. The subroutine first detects clusters of roots using a result of Ostrowski, and then uses Newton iteration to converge to them. Near a cluster, we switch to subdivision, and proceed recursively. The subroutine has the advantage that it is independent of the predicates used to terminate the subdivision. This gives us an alternative and simpler approach to recent developments of Sagraloff (2012) and Sagraloff-Mehlhorn (2013), assuming exact arithmetic.The subdivision tree size of our algorithm using predicates based on Descartes's rule of signs is bounded by $O(n\log n)$, which is better by $O(n\log L)$ compared to known results. Our analysis differs in two key aspects. First, we use the general technique of continuous amortization from Burr-Krahmer-Yap (2009), and second, we use the geometry of clusters of roots instead of the Davenport-Mahler bound. The analysis naturally extends to other predicates. | \section{Introduction}
Given a polynomial $f \in \RR[x]$ of degree $n$, the problem is to isolate the
real roots of $f$ in an input interval $I_0$, i.e.,
compute disjoint intervals which contain exactly one real root of
$f$, and together contain all roots of $f$ in $I_0\cap \RR$.
Subdivision based algorithms
have been successful in addressing the problem.
A general subdivision algorithm uses two predicates,
given an interval $I$: the exclusion predicate $C_0(I)$, which if true means
$I$ has no roots; the inclusion predicate, $C_1(I)$, which if true means $I$ has exactly one root.
The algorithm outputs a \dt{root-partition} $\calP$ of $I_0$, i.e., a set of pairwise disjoint open intervals
such that for each interval either $C_0$ or $C_1$ holds, and $I_0 \sm \calP$ contains no roots of $f$.
To compute isolating intervals for roots of $f$, check the sign of $f$ at the endpoints of the intervals
in $\calP$ (this works if $f$ is square-free). The following generic subdivision algorithm constructs
a root-partition:
\progb{
\texttt{Isolate}$(I_0)$\\
{\sc Input:} $f \in \RR[x]$ and an interval $I_0 \ib \RR$.\\
{\sc Output:} A root-partition $\calP$ of $f$ in $I_0$.\\
0.\> Preprocessing step.\\
1.\> Initialize a queue $Q$ with $I_0$, and $\calP \ass \es$.\\
2.\> While $Q$ is not empty\\
\>\> Remove an interval $I=(a,b)$ from $Q$.\\
\>\> If $C_0(I) \vee C_1(I)$ then add $I$ to $\calP$.\\
\>\> else\Comment{Subdivide $I$}\\
\>\>\> Let $m \ass (a+b)/2$.\\
\>\>\> Push $(a, m)$ and $(m,b)$ into $Q$.\\
3.\> Output $\calP$.
}
The algorithm is guaranteed to terminate for square-free polynomials;
otherwise we get an infinite sequence of intervals converging to a root of multiplicity
greater than one.
Some standard choices of the predicates and the corresponding algorithms are:
\begin{tightenum}{r}
\item Sturm sequences and Sturm's method \cite{davenport:85},
\item Descartes's rule of signs and the Descartes method \cite{collins-akritas:76},
\item Interval-arithmetic based approaches and \texttt{Eval} \cite{burr-krahmer-yap:continuousAmort:09}.
\end{tightenum}
The complexity of these algorithms is well understood for the
benchmark problem of isolating all real roots of a square-free integer polynomial with coefficients
of bit-length $L$. One measure of complexity is the size of the subdivision tree constructed by
the algorithm. For the first two algorithms a bound of $O(n(L+\log n))$ was shown in
\cite{davenport:85} and \cite{eigenwillig-sharma-yap:descartes:06}, respectively.
For \texttt{Eval} a weaker bound of $O(n(L+n))$ was established in \cite{sharma-yap:near-optimal:12}.
It is also known \cite{eigenwillig-sharma-yap:descartes:06} that the bound $O(n(L+\log n))$
is essentially tight for any algorithm doing uniform subdivision, i.e.,
reduces the width at every step by some constant (in our case by half).
Uniform subdivision cannot improve on the bound mentioned above because
it only gives linear convergence to a ``root cluster'', i.e., roots which
are relatively closer to each other than to any other root. But it is known that
from points sufficiently far away from the cluster,
Newton iteration (more precisely, its variants for multiple roots)
converges quadratically to the cluster.
This has been an underlying idea in improving the linear convergence
of subdivision algorithms for root isolation \cite{pan:approx-poly-zeros:00,sagraloff-newton+descartes:12,sagraloff-mehlhorn:13},
and has also been combined with homotopy based approaches \cite{yakoubsohn:zero-clusters:00,sasaki-terui:cluster:09}.
We follow the same idea with some key differences.
Given $C_0$ and $C_1$,
our algorithm can be described as follows (we only
give the inner loop, see \refSec{algorithm} for complete details):
\progb{
\texttt{Newton-Isol}$(I_0)$\\
\> \dots\\
\> If $C_0(I)\vee C_1(I)$ then add $I$ to $\calP$.\\
\> else if a cluster $\calC$ of roots is detected in $I$ then\\
\>\> Apply Newton iteration to approximate $\calC$\\
\>\> while quadratic convergence holds.\\
\>\> Estimate an interval $J$ containing $\calC$.\\
\>\> Push $J$ into $Q$.\\
\> else \Comment{Subdivide $I$}\\
\>\> \dots
}
For detecting root clusters, we use a result of Ostrowski
based on the Newton diagram of a polynomial \cite{ostrowski:graeffe1:40}; other choices are
based on a generalization of Smale's $\alpha$-theory (see \cite{giusti+2:zeros-analytic:05}
and the references therein); the details can be found in \refSec{notation}.
These tools and approaches have been used earlier
\cite{pan:approx-poly-zeros:00}, however, our approach has the following differences:
\begin{tightenum}{r}
\item The tools used to detect and estimate the size of a cluster are independent
of the particular choice of the exclusion-inclusion predicates
(cf. \cite{sagraloff-newton+descartes:12}).
This way we obtain a general approach to improve any subdivision algorithm.
\item Another difference is the method that is combined with bisection to improve convergence.
In \cite{sagraloff-newton+descartes:12} Abbott's QIR method
is combined with the Schr\"oder operator \cite{giusti+2:zeros-analytic:05}, whereas we
apply standard Newton iteration to a suitable derivative of $f$.
The former combination is a backtracking approach to get quadratic convergence; the latter gives quadratic
convergence right away (though possibly at the cost of more subdivisions). This has the
advantage of separating the Newton iteration steps from the subdivision tree,
which is reflected in the bounds on the subdivision tree size for the two approaches:
for the former we have $O(n\log (nL))$, and for the latter we have $O(n\log n)$.
The number of quadratically converging steps remains the same in both cases.
\item Our approach can be modified to
isolate complex roots; replace binary subdivision with a quad-tree subdivision, and choose
appropriate predicates (e.g., Ostrowski's result mentioned above, or Pellet's test). This avoids
Graeffe iteration (cf. \cite{pan:approx-poly-zeros:00}),
and yet the modified algorithm can be shown to attain a near optimal bound on subdivision tree size.
\end{tightenum}
In this paper, we focus on bounding the size of subdivision
tree of \texttt{Newton-Isol}.
For this purpose, we use the general framework
of continuous amortization \cite{burr-krahmer-yap:continuousAmort:09,burr:contamort:13}.
The key idea here is to bound the tree size
by $\int_{I_0} G(x) dx$, where $G$ is a suitable ``charging'' function
corresponding to the predicates used in the algorithm (e.g., see \cite{burr:contamort:13}).
Our key contributions are the following:
\begin{tightenum}{r}
\item We derive a near optimal bound of $O(n\log n)$ on the size of the subdivision tree
of \texttt{Newton-Isol} when $C_0$, $C_1$ are based on Descartes's rule of signs
(see \refThm{bound}). This is the first application of the continuous amortization framework
to a non-uniform subdivision algorithm.
\item
We show that if the distance of the cluster center to the nearest root outside the cluster
exceeds roughly $n^3$ times the diameter of the cluster,
then Ostrowski's criterion for cluster detection works,
and we obtain quadratic convergence to the cluster center (see \refLem{converse}).
\item Our analysis crucially uses the cluster tree of the polynomial (see \refPro{ct}).
We derive an integral bound on the size of the subdivision tree (see \refThm{intbound}).
The usual approach to upper bound this integral is to break it over
the (real) Voronoi regions of the roots \cite{burr:contamort:13}.
We instead break the integral over the Voronoi regions corresponding to the clusters
in an inductive manner based on the cluster tree.
The integral over the portion of the region outside the cluster is bounded using known techniques.
However, for the portion inside the cluster, we devise an amortized bound on the integral (see \refLem{dense}), which is
of independent interest, and is analogous to the improvement given by Davenport-Mahler bound
over repeated applications of the root separation bound. It is this result that underlies the $O(n\log n)$ bound.
A simple argument extends these bounds to Sturm's method and the \texttt{Eval} algorithm.
The details are in \refSec{complexity}.
\end{tightenum}
\section{Notation and Basic Results}\label{sec:notation}
Let $f \in \RR[x]$ be a square-free polynomial of degree $n\ge 2$
and $Z(f)\ibp \CC$ be its set of roots.
Given a finite pointset $S\ib \RR^2$, let $D_S$ be the disc $D(m_S,r_S)$
such that $m_S$ is the centroid of the points in $S$,
and $r_S$ is the least radius such that all the points in $S$ are contained in $D(m_S, r_S)$.
Given a $\lambda \in \RR_{> 0}$, define $\lambda D_S \as D(m_S, \lambda r_S)$.
We borrow the following definition from \cite{sagraloff-sharma-yap:analytic:13}:
A subset $\calC \ib Z(f)$ of size at least two is called a (root) \dt{cluster} if
the only roots in $3D_\calC$ are from $\calC$.
We treat individual roots as (trivial) clusters. In this paper,
the non-real roots in $\calC$ come in conjugate pairs.
Therefore, the center of $D_\calC$ will always be in $\RR$.
Define $R_\calC$ as the distance from $m_\calC$ to the nearest point in the set $Z(f)\sm \calC$.
From the definition it follows that $Z(f)$ trivially forms a cluster and $R_{Z(f)}=\infty$.
Given an interval $I$, let $m(I)$ denote its midpoint and $w(I)$ its width.
We will often use the shorthand $I=[m(I)\pm w(I)/2]$, and for $\lambda > 0$,
$\lambda I \as [m(I)\pm \lambda w(I)/2]$.
An \dt{interval $I$ contains a cluster} $\calC$ if $\calC \ib D(m(I), w(I)/2)$.
We use the following convenient notation in the subsequent definitions:
for $x, y \in \RR$, `$x \gg y$' if there is a constant $c \ge 1$
such that $x \ge c y$; similarly define $x \ll y$.
A \dt{strongly-separated cluster (ssc)} is a cluster $\calC$ for which
$R_\calC/r_\calC \gg n^3$; the exact
constant can be found in \refCor{con}. For a ssc $\calC$, define the following three quantities:
\begin{tightenum}{r}
\item The interval $I_\calC\as [m_\calC\pm c\cdot kr_\calC]$, where $k \as |\calC|$ and $c\ge 1$ is a constant.
\item The interval $\calI_\calc \as \set{x: |x-m_\calC|\ll R_\calC/n^2}$.
\item The annulus $\calA_\calc \as \calI_\calc \sm I_\calC=\set{z \in \CC: |\calC|r_\calC \ll |z-m_\calC|\ll R_\calC/n^2}$.
\end{tightenum}
The exact constants in these definitions are given in \refLem{converse}.
See \refFig{ssc} for an illustration of these concepts.
If $\calC$ is not a ssc, then we define
$ I_\calC \as [m_\calC \pm r_\calC]$ and $\calI_\calc\as 2I_\calC$.
Note that for all clusters $\calC$, $I_\calC \ib \calI_\calc$.
We will need the following result later in our analysis \cite[Lemma 2.1]{sagraloff-sharma-yap:analytic:13}:
\bprol{ct}
Given a root cluster $\calC$ of $f$, there is a unique unordered tree $T_\calC$
rooted at $\calC$ whose set of nodes are the clusters contained in $\calC$, and
whose parent-child relation is subset inclusion. Let $T_f$ be the tree whose root is
the cluster $Z(f)$ of all roots.
\eprol
The result is originally stated for root clusters of $f \in \CC[x]$. However, for $f \in \RR[x]$
the clusters come in conjugate pairs, and by taking the union of such pairs the result
still holds. The tree $T_\calC$ is called the \dt{cluster tree of $\calC$}. The leaves of this tree are the roots
in $\calC$.
\vfigpdf{Geometry of a ssc $\calC$. We focus on the relative geometry,
overlooking the exact constants involved in the definition of the intervals.
}{ssc}{0.45}
\subsection{Cluster Detection and Approximation}
The literature on detection and approximation of root clusters is vast
(see \cite{giusti+2:zeros-analytic:05} and the references therein).
One approach is based on Pellet's test: if for a complex polynomial $f(x)=\sum_{i=0}^na_i x^i$
there is an $r > 0$ such that $|a_k|r^k > \sum_{i \neq k} |a_i|r^i$ then the disc
$D(0, r)$ contains exactly $k$ roots of $f$. A point $z \in \CC$
is said to satisfy Pellet's test, if there is a $k$ and $r$
for which the test holds with the coefficients of $f(x+z)$.
Results in \cite{yakoubsohn:zero-clusters:00,giusti+2:zeros-analytic:05}
generalize Smale's $\alpha$-theory and relate it to Pellet's test;
an alternative derivation based on tropical algebra is given in \cite{sharify:thesis}.
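For illustration, here is a minimal Python sketch of Pellet's test as stated above, together with a Taylor shift producing the coefficients of $f(x+z)$; this is only a restatement of the classical test, not the criterion adopted in this paper, and the helper names are choices of the example.
\begin{verbatim}
def taylor_shift(coeffs, z):
    """Coefficients of f(x+z), i.e. f^(j)(z)/j!, via iterated Horner shifts."""
    b = list(coeffs)
    n = len(b) - 1
    for i in range(n):
        for j in range(n - 1, i - 1, -1):
            b[j] += z * b[j + 1]
    return b

def pellet_holds(coeffs, k, r):
    """True iff |a_k| r^k > sum_{i != k} |a_i| r^i for f = sum_i a_i x^i,
    in which case D(0, r) contains exactly k roots of f."""
    lhs = abs(coeffs[k]) * r ** k
    rhs = sum(abs(a) * r ** i for i, a in enumerate(coeffs) if i != k)
    return lhs > rhs

def satisfies_pellet(coeffs, z, k, r):
    """Pellet's test at the point z: run the test on the shifted coefficients."""
    return pellet_holds(taylor_shift(coeffs, z), k, r)
\end{verbatim}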
We instead use a result by Ostrowski \cite{ostrowski:graeffe1:40}.
We need the following definitions.
Let $f(x) = \sum_{i=0}^n a_i x^i$, where $a_i \in \CC$.
With each index $i$, $a_i \neq 0$, associate the point
$P_i \as (i, -\log |a_i|) \in \RR^2$. The lower-hull of the convex-hull of these points
is called the \dt{Newton diagram} of $f$.
Given an index $k\in \set{0\dd n}$, let $y_k$ be the point such that $(k, y_k)$ is on the diagram.
Define $\rho_k \as e^{y_k-y_{k-1}}$, for $1 \le k\le n$,
$\rho_{n+1}\as \infty$, and the
$k$th \dt{deviation} $\Delta_k \as \rho_{k+1}/\rho_k$, for $0 < k < n$.
Let $\alpha_1 \dd \alpha_n \in \CC$ be the roots of $f$ ordered such that
$|\alpha_1| \le |\alpha_2| \le \cdots \le |\alpha_n|$. Ostrowski showed the following
fundamental relation between the absolute values of the roots and $\rho_k$'s
\cite[p.~143]{ostrowski:graeffe1:40}:
\beql{ost}
\frac{1}{2k} < \frac{|\alpha_k|}{\rho_k}< 2(n-k+1).
\eeql
Given $z\in \CC$, we will be interested in the Newton diagram of $f(x+z)$.
If $f_j(z) \as f^{(j)}(z)/j!$, then from a result of
Ostrowski \cite[p.~128]{ostrowski:graeffe1:40} we get:
\beql{rkr}
\rho_k(z) = \max_{j<k} \abs{\frac{f_j(z)}{f_k(z)}}^{\frac{1}{(k-j)}},\text{ and }
\rho_{k+1}(z) = \min_{j>k} \abs{\frac{f_k(z)}{f_j(z)}}^{\frac{1}{(j-k)}}.
\eeql
The RHS is defined for any $k$ such that $f_k(z) \neq 0$; however, we
are only interested in those $k$ for which $P_k$ is
on the diagram. The $k$th deviation $\Delta_k(z)\as \rho_{k+1}(z)/\rho_{k}(z)$.
We have the following result for detecting clusters:
\bleml{cluster}
If $\Delta_k(z) \ge 27$, for some index $0 < k< n$, then
there are exactly $k$ roots in $D(z, 3\rho_k(z))$ and $D(z, \rho_{k+1}(z)/3)$. Moreover,
as $\rho_{k+1}(z)/3 \ge 9\rho_k(z)$, these roots form a cluster.
\eleml
The proof shows that the inequality $\Delta_{k}(z)\ge 27$ implies that Pellet's
test holds for $D(z, r)$, for all $3\rho_k(z) \le r \le \rho_{k+1}(z)/3$
(see \cite[Thm.~1.5]{giusti+2:zeros-analytic:05}).
Since the $P_i$'s are sorted by x-coordinate,
all the $\rho_k$'s can be computed in $O(n)$ steps using, e.g., Graham's scan for convex hull computation.
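As an illustration, the following Python sketch evaluates $\rho_k$, $\rho_{k+1}$ and the deviation $\Delta_k$ directly from the formulas in \refeq{rkr} and applies the test of \refLem{cluster}; the input is assumed to be the list of Taylor coefficients $f_j(z)$ (e.g.\ obtained by a Taylor shift of $f$ to $z$), and the straightforward per-index evaluation is used instead of the convex-hull computation of all the $\rho_k$'s mentioned above.
\begin{verbatim}
import math

def rho_pair(coeffs, k):
    """rho_k and rho_{k+1} from the max/min formulas, for 0 < k < n
    and coeffs[k] != 0, where coeffs[j] = f_j(z)."""
    n = len(coeffs) - 1
    ak = abs(coeffs[k])
    rho_k = max(abs(coeffs[j] / ak) ** (1.0 / (k - j)) for j in range(k))
    rho_k1 = min(abs(ak / coeffs[j]) ** (1.0 / (j - k))
                 for j in range(k + 1, n + 1) if coeffs[j] != 0)
    return rho_k, rho_k1

def detect_cluster(coeffs, k, threshold=27.0):
    """Ostrowski-style test: if Delta_k >= threshold, return the radius
    3*rho_k of a disc around z containing exactly k roots; else None."""
    rk, rk1 = rho_pair(coeffs, k)
    delta_k = math.inf if rk == 0.0 else rk1 / rk
    return 3.0 * rk if delta_k >= threshold else None
\end{verbatim}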
Once we have detected a cluster $\calC$ near $z$, we want a good approximation to
$m_\calC$. A standard way is to do the iteration $z_{i+1}= z_i - kf(z_i)/f'(z_i)$, starting
from $z$,
but this may not be numerically desirable, as both $f$ and $f'$ are small near
$\calC$. Another option is to use the standard Newton iteration applied to $f^{(k-1)}$.
We show that if $\Delta_k(z) \ge 27$, then $z$ is an approximate zero,
in the sense of Smale et al.~\cite[p.~160, Thm.~2]{bcss:bk},
to the root of $f^{(k-1)}$ in $D(z, \frac{3\rho_k(z)}{2k})$.
Subsequently we show that if $\Delta_k(z)\ge c_0$, for some constant $c_0\ge 27$,
then for all $z'$ in this disc $\Delta_k(z') \ge 27$, and hence there is a cluster of $k$ roots in $D(z', 3\rho_k(z'))$.
Moreover, the cluster is exactly $\calC$.
These results are summarized in the following:
\bleml{correctness}
Let $z \in \CC$ be such that $\Delta_k(z) \ge c_0$, for some $k \ge 2$,
$\calC$ be the cluster in $D(z, 3\rho_k(z))$, and $D' \as D(z, \frac{3\rho_k(z)}{2k})$.
Then the following hold:
\begin{tightenum}{r}
\item $z$ is an approximate zero to the root $z^*$ of $f^{(k-1)}$ in $D'$ and
the Newton iterates starting from $z$ are in $D'$.
\item For all $z' \in D'$,
$\Delta_k(z') \ge 27$, and
$\calC$ is the cluster in $D(z', 3\rho_k(z'))$.
\item If $z, w$ are such that $\Delta_k(z), \Delta_k(w) \ge 27$ and $D(z, 3\rho_k(z))$,
$D(w, 3\rho_k(w))$ intersect then the discs have the same cluster.
\end{tightenum}
\eleml
The proof is given in the appendix.
We choose $c_0 \as 27\times 6e^6$.
Given $z\in \CC$, a value of $k$ satisfying the condition $\Delta_k(z) \ge c_0$ is called an \dt{admissible
value} for $z$, with the corresponding \dt{inclusion disc} $D(z, 3\rho_k(z))$.
Note that there can be more than one admissible value for a point $z$ corresponding to
clusters of different sizes.
\section{The Algorithm}\label{sec:algorithm}
Let $C_0$ and $C_1$ be some exclusion and inclusion predicate respectively.
The following algorithm takes as input $f$ and an interval $I_0$ and outputs a root partition
of $I_0$.
\progb{
\texttt{Newton-Isol}$(f, I_0)$\\
1.\> Initialize $\calP \ass \es$, $\Phi \ass \es$; let $Q$ be an empty queue.\\
1.a.\> If this is a recursive call then subdivide $I_0$ and \\
\> push the two halves into $Q$; else $Q \ass \set{I_0}$.\\
2.\> While $Q$ is not empty do\\
\>\> Remove an interval $I$ from $Q$.\\
2.a.\>\> If $C_0(I)\vee C_1(I)$ then add $I$ to $\calP$.\\
\>\>else if \texttt{Newton-Incl-Exc}($I$) is successful then\\
\>\>\> Let $(J,k)$ be the pair returned.\\
2.b.\>\>\> If $\forall J' \in \Phi$, $J\si J'=\es$ and $J \si I_0 \neq \es$ then\\
2.c.\>\>\>\> $\forall$ $I'\in Q$, $I'\ass I'\sm D(m(J),\frac{\rho_{k+1}(m(J))}{3})$.\\
2.d.\>\>\>\> Add $J\si I_0$ to $\Phi$.\\
\>\> else subdivide $I$ and push the two halves into $Q$.\\
3.\> Return $\calP \su_{J \in \Phi} \texttt{Newton-Isol}(f, J)$.
}
The input to \texttt{Newton-Incl-Exc} is an interval $I=(a,b)$. If the predicate is successful then
it returns an interval $J$ containing a cluster such that $w(J) < w(I)/2$, and an
admissible value $k$ for $m(J)$; otherwise it returns failure.
\progb{
\texttt{Newton-Incl-Exc} $(f,I)$\\
1.\> Let $m \as (a+b)/2$.\\
2.\> For $p \in \set{a,m,b}$, let $k_p \ge 2$ be the {\em smallest} admissible\\
\> value $k$ for $p$ such that $I\ib D\paren{p, \frac{\rho_{k+1}(p)}{3}}$. \\
3.\> If the three admissible values are equal and the three\\
\> inclusion discs are contained in $D\paren{m, \frac{\rho_{k_m+1}(m)}{3}}$ then:\\
3.a.\>\> $z_0 \as m$, $k \as k_m$, $g \as f^{(k-1)}$, $i \as 0$.\\
4.\>\> While $\rho_k(z_{i}) \le 2^{5-2^{i}}\rho_k(z_0)$\\
\>\>\> $z_{i+1} \as z_i- g(z_i)/g'(z_i)$; $i \as i+1$.\\
\>\> $J \as [z_{i-1} \pm 3 \rho_k(z_{i-1})]$\\%, z_{i-1} +3 \rho_k(z_{i-1})]$. \\
5.\>\>If $w(J) \ge w(I)/2$ then return failure\\
6.\>\>else return $(J, k)$.\\
7.\> Return failure.
}
We first explain some steps in the predicate above:
\begin{tightenum}{}
\item {\bf Step 2.} A point $p$ in $I$ can have more than one admissible value associated with it.
The right admissible value is governed by $w(I)$,
since we should only consider those clusters $\calC$ for which $r_\calC \ll w(I) \ll R_\calC$.
\item {\bf Step 3.} As $D(m, \rho_{k_m+1}(m)/3)$ contains all the three inclusion discs,
they all contain the same cluster $\calC$. Otherwise, it is possible that the three
inclusion discs contain different clusters but of the same size.
\item {\bf Step 4.} This ensures that as $z_i$ converges to the root of $f^{(k-1)}$, the distance to $\calC$
decreases quadratically; this fails when we are near $\calC$,
or the root of $f^{(k-1)}$ is not near $\calC$.
\item {\bf Step 5.}
Required to ensure linear convergence to $\calC$.
\item {\bf Step 6.} The interval $J$ contains the cluster $\calC$. Moreover,
as $I \ib D(m, \rho_{k+1}(m)/3)$, the roots in $I$ are a
subset of $\calC$, and hence are inside $J$. At this point $w(J)< w(I)/2$,
so it suffices to return $J$.
\end{tightenum}
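The following Python sketch mirrors Steps 3.a--6 of \texttt{Newton-Incl-Exc} (the quadratic-regime Newton loop and the final bracketing); the callables \texttt{rho\_k}, \texttt{g}, \texttt{dg} and the tuple representation of intervals are assumptions of the example, and the admissibility checks of Steps 2--3 are omitted.
\begin{verbatim}
def newton_to_cluster(rho_k, g, dg, z0, width_I, k):
    """Steps 3.a--6 (sketch): Newton-iterate on g = f^(k-1) while rho_k
    decreases quadratically, then bracket the cluster by an interval J.
    rho_k(z): callable returning rho_k of f at z (e.g. via its Newton diagram).
    Assumes f is square-free, so the loop terminates."""
    zs = [z0]
    rho0 = rho_k(z0)
    i = 0
    while rho_k(zs[i]) <= 2.0 ** (5 - 2 ** i) * rho0:   # Step 4
        zs.append(zs[i] - g(zs[i]) / dg(zs[i]))         # standard Newton step
        i += 1
    z = zs[i - 1]                      # z_{i-1}: last iterate in the regime
    r = 3.0 * rho_k(z)
    J = (z - r, z + r)                 # J = [z_{i-1} +- 3 rho_k(z_{i-1})]
    if J[1] - J[0] >= width_I / 2:     # Step 5: linear progress required
        return None                    # failure
    return J, k                        # Step 6
\end{verbatim}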
We now comment on some steps in \texttt{Newton-Isol}:
\begin{tightenum}{}
\item {\bf Step 1.a.} Ensures that a successful call to \texttt{Newton-Incl-Exc} is followed by a subdivision step.
Thus the recursion tree is a binary tree. The predicate can still be successful on
an interval $J$ returned by an earlier successful call.
But the convergence in this case would only be linear, and so we prefer subdivision, though in practice
one can go ahead with the linear convergence.
\item {\bf Step 2.b.}
Checks that $\calC$ has not been found before (see \refLem{correctness}(iii)),
and that $J$ intersects $I_0$; if either of these tests fails, then $I$ contains no roots and can be excluded.
\item {\bf Step 2.c.} As the only cluster in $D(m(J), \frac{\rho_{k+1}(m(J))}{3})$
is $\calC$, we can remove this disc from the intervals in $Q$. It is this exclusion step that significantly
contributes to the improvement of the subdivision algorithm.
\item {\bf Step 2.d.} This step adds the interval $J\si I_0$ containing the newly discovered cluster $\calC$
to the set $\Phi$.
\end{tightenum}
There are only two loops in the algorithm:
first, the while-loop in step 2 of the algorithm, and second, the Newton iteration
in step 4 of \texttt{Newton-Incl-Exc}.
The argument for the termination of the first loop is the same as for \texttt{Isolate}.
The termination of the second loop is guaranteed: if the values
$\rho_k(z_i)$ kept decreasing indefinitely, they would converge to zero;
but each disc $D(z_i, 3\rho_k(z_i))$ contains exactly $k$ roots,
and in the limit the $z_i$'s tend to a root $z^*$ of $f^{(k-1)}$, so $z^*$ would be a $k$-fold root
of $f$, contradicting the square-freeness of $f$.
The following is a proof of correctness of the algorithm.
\bthm
Given a polynomial $f$ and an interval $I_0$, \texttt{Newton-Isol}$(f, I_0)$ outputs a
root partition $\calP$ of $I_0$.
\ethm
\bpf
We need to show the following claims:\\
1. $I_0\sm \calP$ contains no real roots of $f$.\\
2. $\calP$ contains (interior) pairwise disjoint intervals.\\
3. For all $I \in \calP$, $C_0$ or $C_1$ holds (follows from step 2.a.).
\refLem{correctness} gives us
the correctness of \texttt{Newton-Incl-Exc}$(I)$, i.e., if the test is successful then it returns
an interval $J$ such that any roots in $I$ are contained in $J$.
We only argue for the first claim.
For every interval $J$ returned by a successful call of the predicate, define
\beql{aj}
A_J \as D\paren{m(J) , {\rho_{k+1}(m(J))}/{3}} \sm D(m(J), w(J)/2),
\eeql
i.e., the annulus around $J$ that does not contain any roots. We exclude
intervals if step (2.b) fails for the interval $J$, or a portion of an interval is removed in
step (2.c.).
In the former case, either the cluster contained in $J$ was already detected, or it is outside $I_0$.
In the latter case, we do not lose any roots since $A_J$ has no roots.
So $I_0\sm \calP$ contains no roots.
\epf
\section{Complexity Analysis}\label{sec:complexity}
The main result is that \texttt{Newton-Incl-Exc} will be successful near a ssc $\calC$.
Let $c_0> 20$ be the constant in \refLem{correctness},
and $\calC$ a ssc throughout this section.
Our first claim is that $|\calC|$ is an admissible value for all points in $\calI_\calc$.
\bleml{con}
If $|z-m_\calC| \leq R_\calC/(8c_0n^2)$ then $\Delta_{k}(z) \ge c_0$.
\eleml
\bpf
Let $\alpha_1 \dd \alpha_k \in \calC$ and $\alpha_{k+1} \dd \alpha_n \in Z(f)\sm \calC$.
Moreover, assume that they are ordered in increasing distance from $z$.
From \refeq{ost}, we know that $2k|z-\alpha_{k+1}| > \rho_{k+1}(z) > |z-\alpha_{k+1}|/(2(n-k+1))$.
Moreover, $|z-\alpha_{k+1}| > R_\calC - |z-m_\calC| \ge R_\calC/2$; similarly,
$|z-\alpha_{k+1}|< 3R_\calC/2$. Therefore,
\beql{rk1}
\frac{R_\calC}{4n} \le \rho_{k+1}(z) \le 3|\calC|R_\calC.
\eeql
From \refeq{ost}, we again have $\rho_k(z) < 2k|z-\alpha_k|$.
But $|z-\alpha_k| \le |z-m_\calC|+r_\calC$, which gives us
\beql{rk2}
\rho_k(z) \le 2k (|z-m_\calC|+ r_\calC).
\eeql
Since $|z-m_\calC|, r_\calC \le R_\calC/(8c_0n^2)$, we get $\rho_k(z)\le kR_\calC/(2c_0n^2)$.
Combining this with \refeq{rk1}, and the observation that $(n-k)k \le n^2/4$,
we obtain that $\Delta_k \ge 2c_0n^2/(8(n-k)k)\ge c_0$.
\epf
Recall the definition of the intervals $I_\calC$, $\calI_\calc$ and the annulus $\calA_\calc$ from
\refSec{notation}, and $A_J$ from \refeq{aj}.
\bleml{converse}
If an interval $I$ is such that
$$I \ib \calI_\calc=[m_\calC\pm R_\calC/(8c_0n^2)] \text{ and } w(I) > 72|\calC|r_\calC$$
then the pair $(J, k)$ returned by \texttt{Newton-Incl-Exc}$(I)$ is such that $k=|\calC|$,
$J \ib I_\calC=[m_\calC\pm 20kr_\calC]$, and $A_J\ip\calA_\calc$.
\eleml
\bpf
We show that the conditions on $I$ above imply that
\texttt{Newton-Incl-Exc}$(I)$ reaches step 6 of \texttt{Newton-Incl-Exc} (all the steps below refer to the steps in the predicate).
This requires showing the following:
(i) all the conditions in step 3 are met; (ii) Newton-iteration in step 4
converges quadratically terminating with an interval $J$ with $w(J) < w(I)/2$,
and (iii) $J \ib I_\calC$. The following claims provide the proof.
Let $I=[a,b]$ and $m= m(I)$.
\begin{tightenum}{}
\item {\bf Claim 1:} For all $p \in \set{a, m , b}$, $k_p=|\calC|$.
Recall from Step 2 that $k_p$
is defined as the {\em smallest} admissible value $k$ for which $I \ibp D(p, \rho_{k+1}(p)/3)$.
From \refLem{con}, we have $k_p \le |\calC|$.
Since the roots in
$I$ can only come from $\calC$, any smaller admissible value corresponds to a subcluster $\calC'$ of $\calC$,
which implies $R_{\calC'} \le r_\calC$. From \refeq{rk1} we know that
$\rho_{|\calC'|+1}(p) \le 3(|\calC'|+1) R_{\calC'} \le 3|\calC|r_\calC$.
Since $w(I) \ge 72|\calC|r_\calC$, clearly $I\ibn D(p, \rho_{|\calC'|+1}(p)/3)$
for any subcluster $\calC' \ibp \calC$. Thus $k_p \ge |\calC|$. \\
\item {\bf Claim 2:} For all $p \in I$,
$I\ib D(p, \rho_{k+1}(p)/3)$. This will follow from the more general claim that
$$D_1 \as D(m_\calC, R_\calC/(8c_0n^2)) \ib D(z, \rho_{|\calC|+1}(z)/3) \sa D_2,$$
for all $z \in D_1$; since $a, m, b \in I \ib D_1$, the claim holds.
But for any $z \in D_1$, we know from \refeq{rk1} that
$\frac{\rho_{|\calC|+1}(z)}{3} \ge \frac{R_\calC}{12n}$
which is greater than $\frac{R_\calC}{4c_0n^2}$, the diameter of $D_1$, for $c_0 \ge 3$.
\item {\bf Claim 3:} For all $z, w \in D_1$, the inclusion disc
$D(z, 3\rho_k(z)) \ib D(w, \frac{\rho_{k+1}(w)}{3})$. This follows if
\beql{zw} |z-w| + 3\rho_k(z) \le \frac{\rho_{k+1}(w)}{3}. \eeql
But $|z-w|, r_\calC \le R_\calC/(8c_0n^2)$, which along with \refeq{rk2} implies that
$3\rho_k(z) \le 6kR_\calC/(4c_0n^2)$. Therefore, LHS of \refeq{zw} is smaller than
$13kR_\calC/(8c_0n^2)$, which is smaller than $R_\calC/(12n)$ for $c_0 \ge 20$, but
from \refeq{rk1} we know that the latter is smaller than the RHS of \refeq{zw}.\\
\item {\bf Claim 4:} Let $z_i$ be the sequence of iterates computed in
the while-loop in Step 4. If $z_i \in D(m_\calC, \frac{R_\calC}{8c_0n^2})\sm D(m_\calC, 2r_\calC)$,
then $\rho_k(z_i)< 2^{5-2^i} \rho_k(z_0)$. Since $z_i \nin D(m_\calC, 2r_\calC)$, $r_\calC \le |z_i - m_\calC|$,
and hence from \refeq{rk2} we obtain
$\rho_k(z_i) \le 4k |z_i - m_\calC|$. From \cite[Thm.~2.2]{pawlowski:zeros-of-derivatives:99}
we know that there is a unique root $z^*$ of $f^{(k-1)}$ in $ D(m_\calC, r_\calC)$.
Therefore, $|z_i -m_\calC| \le |z_i - z^*|+r_\calC$. But as $z_i \nin D(m_\calC, 2r_\calC)$
and $z^* \in D(m_\calC, r_\calC)$,
we have $r_\calC \le |z_i- z^*|$, and hence $|z_i - m_\calC| \le 2|z_i - z^*|$.
Thus, $\rho_k(z_i) \le 8k |z_i - z^*|$. As $z_0$ is an approximate zero to $z^*$ (see \refLem{correctness}(i)),
we know
$|z_i - z^*| \le 2^{1-2^i} |z_0 - z^*|$, which implies that $\rho_k(z_i) \le 2^{4-2^i}k|z_0-z^*|$.
Furthermore, from \refLem{correctness}(i) we know
$k|z_0-z^*|< 2\rho_k(z_0)$. Hence $\rho_k(z_i)< 2^{5 - 2^i}\rho_k(z_0)$.\\
\item {\bf Claim 5:} The interval $J \ib I_\calC$ and $w(J)< w(I)/2$.
The previous claim shows that if
$z_i \nin D(m_\calC, 2r_\calC)$, then we will obtain quadratically decreasing values of $\rho_k(z_i)$.
Thus when the iteration stops $z_i \in D(m_\calC, 2r_\calC)$, and it follows from
\refeq{rk2} that $\rho_k(z_i) \le 6kr_\calC$. Hence the interval
$J = z_i \pm 3\rho_k(z_i)$ is contained in $I_\calC$, for $k \ge 2$.
Moreover, $w(J) \le 36kr_\calC < w(I)/2$, and hence the condition in
Step 5 fails and we return $J$. The claim on the annulus follows from \refeq{rk1}.
\end{tightenum}
\epf
The following result translates the result above in terms of the subdivision tree:
\bcorl{con}
Let $\calC$ be a ssc such that $\calI_\calc \ib I_0$.
If $I$ is the first interval such that
\texttt{Newton-Incl-Exc}$(I)$ is successful and the interval returned contains $\calC$, then
$\calI_\calc \ib I' \su I''$, where $I'$ is the parent-interval of $I$
and $I''$ is one of $I'$'s neighbors.
\ecorl
\bpf
In the worst case, $\calC$ will be detected
the first time an interval $I$ of the subdivision tree satisfies $I\ib\calI_\calc$.
For such an $I$, we show $w(I) \gg kr_\calC$.
Since $I$ is the first interval to fall in $\calI_\calc$, both
$I'$ and $I''$ have endpoints outside $\calI_\calc$, thus $\calI_\calc \ib I' \su I''$.
So $2w(I) \ge R_\calC/(16c_0n^2)>72 kr_\calC$, as $\calC$ is ssc.
The claim clearly holds if $\calC$ is detected at an ancestor of $I$.
\epf
\Remark The proof above gives us the explicit constant in the definition of ssc, namely, we require
$R_\calC/r_\calC > 16c_0\times 72 n^3$. A careful working out of the proofs shows that
the weaker inequality $R_\calC/r_\calC > 4c_0 \times 72 (n-|\calC|)|\calC|^2$ (or even
$50 c_0n^3$) is sufficient.
Recall that the set of all roots $Z(f)$ is a cluster.
As a consequence of \refLem{converse}, we assume that $I_0\ib n I_{Z(f)}$;
otherwise $\texttt{Newton-Incl-Exc}$ will be successful right away and the
interval returned will satisfy the property.
\subsection{An Integral Bound on the Subdivision Tree}
Let $\calN(I_0)$ be the set of leaves in the subdivision tree of \texttt{Newton-Isol}$(f,I_0)$.
Step 1.a. of the algorithm ensures that
the subdivision tree is a binary tree. Therefore, it suffices to bound $|\calN(I_0)|$.
For this purpose, we use the general framework of continuous amortization developed in
\cite{burr-krahmer-yap:continuousAmort:09} and generalized in \cite{burr:contamort:13}.
The idea is to bound $|\calN(I_0)|$ by an integral and then derive an upper bound on this integral.
For this purpose we need the following notion: Given a choice of predicates $C_0$, $C_1$, a
function $G:\RR\to\RR_{\ge 0}$ is called a \dt{stopping function} corresponding to $C_0$ and $C_1$
if the following holds for every interval $I$: whenever there is an $x\in I$ such that
$w(I)G(x) \le 1$, either $C_0(I)$ or $C_1(I)$ holds.
Stopping functions, corresponding to different predicates, are
provided in \cite{burr:contamort:13}. The crucial property of $G(x)$ is the following:
\bleml{cp}
If $C_0(I)$ and $C_1(I)$ fail for an interval $I$, then for all $J \ib I$, such that
$2w(J) \ge w(I)$, $2\int_{J} G(x) dx \ge 1$.
\eleml
\bpf
From the definition of $G(x)$, we have
for all $x\in I$, $G(x) w(I) \ge 1$. As $J \ib I$, $\forall x\in J$, $2G(x)w(J) \ge G(x)w(I) \ge 1$.
Thus $2\int_{J} G(x) dx \ge 2 w(J) \min_{x \in J}G(x) \ge 1$.
\epf
The main result of this section is the following:
\bthml{intbound}
$$|\calN(I_0)| \le 4n + 2\int_{I_0 \sm \su_\calC \calA_\calc} G(x) dx,$$
where the union is over all ssc $\calC$ in $T_f$.
\ethml
We bound $\calN(I_0)$ recursively.
The leaves in $\calN(I_0)$ correspond to three types of intervals:
\begin{tightenum}{r}
\item intervals in the root partition $\calP$,
\item intervals that were discarded in step 2.c., and
\item intervals for which condition 2.b fails to hold (either cluster already found, or $J\si I_0=\es$).
\end{tightenum}
We will bound each of these three types.
We analyse what happens before the first set of recursive calls.
Let $\Phi$ be the set of intervals collected in Step 2.d.~of the algorithm,
$A_J$ be as defined in \refeq{aj}, and $\calI_J \as J\su A_J$.
From the construction of $\Phi$, we know that all intervals $J \in \Phi$ are contained in $I_0$
and each contains a unique cluster.
For each $J \in \Phi$, let $L_J$ be the set of {\em parent-intervals} of intervals in $Q$
that intersect $\calI_J$; the type (ii) intervals are children of intervals in $L_J$. Let
$M_J$ be the set of intervals that do not intersect $\calI_J$ and are of type (iii).
See \refFig{3types} for an illustration of these types.
Note that if $I\in L_J$ contains an endpoint of $\calI_J$, then
$I\sm \calI_J$ can be of type (i) or (iii); but there can be
at most two such intervals for each $J$ on either side of $\calI_J$.
We abuse notation and use $L_J$ to represent a set as well as the union
of the intervals in it; same for $M_J$.
\vfigpdf{The three types of intervals in $\calN(I_0)$. Intervals in $L_J$ are shown in green.
The remaining intervals could be in $M_J$ or $\calP$.
The width of the red colored intervals can be much smaller than their parents.
But there are at most two such intervals.}{3types}{0.5}
For an $I\in M_J$, both $C_0$ and $C_1$
failed. Therefore, from \refLem{cp} we get
$|M_J| \le 2\sum_{I \in M_J}\int_I G(x) dx= 2\int_{M_J}G(x) dx$.
As the predicates $C_0$ and $C_1$ also fail for the intervals in $L_J$,
we can similarly bound $|L_J|$. But this
effectively amounts to doing subdivision on $J$.
To avoid this we do the following: since the width of the intervals in $L_J$ is
more than $w(J)$, we know that there are at most two neighboring
intervals $I'_J$ and $I''_J$ that contain $J$. We count them separately, and for the rest
we use \refLem{cp} to get
$|L_J|\le 2 + 2\int_{L_J \sm (I'_J \su I''_J)} G(x) dx$.
For an interval $I \in \calP$, we expect $2\int_IG(x) dx \ge 1$, as the predicates
must have failed for the parent $I'$ of $I$. However,
\refLem{cp} requires that $w(I')\le 2w(I)$.
This can fail to happen near the boundary of $\calI_J$, as noted earlier.
But then there are at most two such intervals.
Therefore, the number of intervals in $\calP$ coming from the non-recursive calls is at most
$2|\Phi| + 2\int_{I_0 \sm \su_J (L_J \su M_J)} G(x) dx$.
Combining this with the bounds on $|L_J|$ and $|M_J|$ we get
\beql{ni1}
|\calN(I_0)| \le 4|\Phi| + 2\int_{I_0 \sm \su_J(I'_J \su I''_J)} G(x) dx + \sum_{J \in \Phi}|\calN(J)|.
\eeql
To open the RHS recursively, we introduce the notion of \dt{cluster tree $T_{I_0}$
with respect to an interval $I_0$}: It is the smallest subtree $T_\calc$ of $T_f$ rooted at
a cluster $\calC$ such that $I_0\ib I_\calC$; since by assumption
$I_0 \ib nI_{Z(f)}$, in the worst case, $T_{I_0}$ is $T_f$.
Moreover, as enlarging $I_0$ increases the integral in \refeq{ni1}, we further
make the simplifying assumption that $I_0=2I_{\calc_{0}}$,
where $\calc_{0}$ is the root of $T_{I_0}$.
Let $\calC$ be the cluster associated with a node $u$ in $T_{I_0}$.
Let $J_u \in \Phi$ be the interval returned the first time
$\calC$ is detected by \texttt{Newton-Incl-Exc}~.
Define $A_u \as (I'_{J_u} \su I''_{J_u}) \sm J_u$; if $\calC$ is not detected,
let $A_u = J_u = \es$.
Using this notation, the following bound can be derived from \refeq{ni1} by induction
\beql{ni2}
|\calN(I_0)| \le 4|T_{I_0}| + 2 \int_{I_0 \sm \bsu_{u \in T_{I_0}} A_u}G(x) dx.
\eeql
For a ssc $\calC\in T_{I_0}$, the assumption $I_0=2I_{\calc_{0}}$ ensures that
$\calI_\calc\ib I_0$. So \refCor{con} implies that $I'_u \su I''_u\ip \calI_\calc$, and
\refLem{converse} implies that $J_u\ib I_\calC$; hence, $A_u \ip \calA_\calc$.
Considering only the ssc in $T_{I_0}$ on the RHS of \refeq{ni2} we obtain
\beql{nie3}
|\calN(I_0)| \le
4|T_{I_0}| + 2 \int_{I_0 \sm \su_{\calC} \calA_\calc}G(x)\, dx.
\eeql
As $|T_{I_0}|\le n$, we get \refThm{intbound}.
\subsection{Bound for the Descartes's rule of signs}\label{sec:ndsc}
In this section, we derive the following bound:
\bthml{bound}
Given a square-free polynomial $f\in \RR[x]$ of degree $n$,
the size of the subdivision tree constructed by \texttt{Newton-Isol}($f, I_0$) using
predicates based on the Descartes's rule of signs is bounded by $O(n\ln n)$.
\ethml
We bound the RHS of \refeq{nie3},
where the stopping function corresponds to the Descartes's rule of signs.
We use the same stopping function as described in \cite{burr:contamort:13},
but explain why the argument there fails to give us the bound above.
Let $V \as Z(f)$, the set of roots of $f$. Define $d(x, V)$ as the distance from
$x$ to the closest point in $V$, and $d_2(x,V)$ as the distance to the second closest point in $V$.
The crucial idea in \cite{burr:contamort:13} is to partition the integral
over the (real) Voronoi interval $I_\alpha$ of each root $\alpha$ (for the moment
suppose $\alpha \in \RR$).
Define $J_\alpha\as [\alpha\pm \frac{d_2(\alpha, V)}{2}]$. Then
for $x\in J_\alpha$, $G(x) \as 2/d_2(x,V)$, and for $x\in I_\alpha\sm J_\alpha$, $G(x) \as 1/|x-\alpha|$.
Break $\int_{I_\alpha}G(x) dx$ as $\int_{J_\alpha}G(x) dx + \int_{I_\alpha\sm J_\alpha}G(x) dx$.
In \cite{burr:contamort:13} it is shown that the first integral is $O(1)$,
and the second integral is $O(\log w(I_\alpha)/d_2(\alpha, V))$;
from Cauchy's bound we can assume that $w(I_\alpha)=2^{O(L)}$.
The problem is that
in the worst case this ratio can be $\Omega(n(L+\log n))$. E.g., if all the other roots
are of the form $\alpha \pm i t$, for increasing values of $t$, then $I_\alpha$ is the x-axis.
Therefore,
$\int_{I_\alpha\sm J_\alpha}G(x) dx= \Omega(L - \log d_2(\alpha, V))$; in the worst case $d_2(\alpha, V)$ can be
the root separation bound.
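To make the stopping function of \cite{burr:contamort:13} concrete, the following Python sketch evaluates $G(x)$ from a given root set $V$ and crudely estimates $\int_{I_0} G(x)\,dx$ by a midpoint rule; the names and the numerical integration are only meant to make the continuous-amortization quantity tangible and play no role in the algorithm or its analysis.
\begin{verbatim}
def stopping_G(x, roots):
    """Descartes-style stopping function (sketch): 2/d_2(x,V) inside
    J_alpha for the nearest root alpha, and 1/d(x,V) outside."""
    dists = sorted(abs(x - a) for a in roots)
    d1, d2 = dists[0], dists[1]
    alpha = min(roots, key=lambda a: abs(x - a))          # nearest root
    d2_alpha = sorted(abs(alpha - a) for a in roots)[1]   # d_2(alpha, V)
    return 2.0 / d2 if abs(x - alpha) <= d2_alpha / 2 else 1.0 / d1

def integral_G(a, b, roots, steps=10000):
    """Crude midpoint-rule estimate of the integral of G over [a, b]."""
    h = (b - a) / steps
    return sum(stopping_G(a + (i + 0.5) * h, roots) for i in range(steps)) * h
\end{verbatim}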
Our idea is based on the observation that roots with very small separation give rise to root clusters.
For clusters that are not ssc, the ratio $R_\calC/r_\calC=O(n^3)$, therefore, the number of subdivisions
needed to bridge this gap is $O(\log n)$. For a ssc, the gap is bridged by \texttt{Newton-Incl-Exc}
so that the subdivision is restricted to the ranges $R_\calC$ to roughly $R_\calC/n^2$ and
$|\calC|r_\calC$ to $r_\calC$, both of which take $O(\log n)$ subdivisions.
Doing this for all clusters basically gives the bound in \refThm{bound}.
Let $P\ib \CC$ be a pointset such that any non-real point in $P$ also has its complex conjugate in $P$.
Such a set of points is called \dt{dense} if no proper subset of $P$ forms a non-trivial cluster, i.e.,
for all $S \ibp P$, such that $|S| \ge 2$,
the disc $3D_S$ contains a point from $P\sm S$. This structure plays a fundamental
role in our arguments, as do the following two integrals (see \cite{burr:contamort:13,sharma-yap:near-optimal:12}):
\bleml{int}
Let $\gamma\in \CC$ and $J=[r,s]$.
\\{\bf(Re)} If $\gamma\in \RR\sm J$, then
\beql{areal}
\int_J \frac{dx}{|\gamma-x|} = \ln
\left| \frac{\gamma-s}{\gamma-r}\right|^{\delta(J>\gamma)},
\eeql
where $\delta(J> \gamma)= +1$ if $r > \gamma$ and $-1$ if $s < \gamma$.
\\{\bf(Im)} If $\gamma \in \CC\sm \RR$ then
\beql{acomp}
\int_J \frac{dx}{|\gamma-x|}
\le
O\paren{\ln \frac{\max \set{|s-\gamma|,|r-\gamma|}}{|\Im(\gamma)|}}.
\eeql
\eleml
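Since \refLem{int} is used repeatedly but quoted without proof, we record a minimal verification sketch. For (Re), in the case $\gamma<r\le s$ (the case $s<\gamma$ is symmetric),
$$
\int_r^s \frac{dx}{|\gamma-x|} \;=\; \int_r^s \frac{dx}{x-\gamma}
\;=\; \ln\frac{s-\gamma}{r-\gamma}
\;=\; \ln\left|\frac{\gamma-s}{\gamma-r}\right|.
$$
For (Im), let $b\as |\Im(\gamma)|>0$ and $M \as \max\set{|s-\gamma|,|r-\gamma|}\ge b$; since $|x-\Re(\gamma)|\le M$ on $J$ and $|\gamma-x|\ge\max\set{|x-\Re(\gamma)|,\, b}$, splitting $J$ according to whether $|x-\Re(\gamma)|\le b$ gives
$$
\int_J \frac{dx}{|\gamma-x|} \;\le\; \frac{2b}{b} + 2\int_b^M\frac{dt}{t}
\;=\; 2+2\ln\frac{M}{b},
$$
which is the bound in \refeq{acomp}, with the additive constant absorbed in the $O(\cdot)$.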
We now give the proof of \refThm{bound}.\\
\bpf
The proof is by induction on $|T_{I_0}|$.
We claim that
\beql{claim1}
\int_{I_0 \sm \su_{\calC} \calA_\calc } G(x) dx = O(|T_{I_0}|\ln n).
\eeql
Let $\calc_{0}$ be the root of $T_{I_0}$; by assumption we have $I_0= 2I_{\calc_{0}}$.
Let $\calM_0$ be the children of $\calc_{0}$ in $T_{I_0}$.
Consider a ssc $\calC \in \calM_0$. Then $I_0 \sm \calA_\calc \ib (I_0 \sm \calI_\calc) \su 2I_\calC$.
If $\calC'$ is a ssc contained in $\calC$, we can inductively remove $\calA_{\calC'}$ from $I_\calC$.
This also works for clusters that are not ssc in $\calM_0$, since by definition $\calI_\calc=2I_\calC$.
Therefore,
$$I_0 \sm \su_{\calC} \calA_\calc
\ib (I_0 \sm \su_{\calC \in \calM_0} \calI_\calc) \su \paren{\su_{\calC \in \calM_0}\paren{2I_\calC \sm \su_{\calC'\ibp\calC} \calA_{\calC'}}}.$$
We claim that
\beql{claim2}
\int_{I_0 \sm \su_{\calC \in \calM_0} \calI_\calc}G(x) dx = O(|\calM_0|\ln n).
\eeql
As $|T_\calc| < |T_{I_0}|$, for $\calC \in \calM_0$, by induction we obtain
$$\int_{2I_\calC \sm \su_{\calC'\ibp\calC} \calA_{\calC'}}G(x) dx = O(|T_\calc|\ln n).$$
This bound along with \refeq{claim2} and the
observation that $|\calM_0|+\sum_{\calC \in \calM_0}|T_\calc|<|T_{I_0}|$ gives us \refeq{claim1}.
The base case is when $\calM_0$ contains only leaves,
in which case \refeq{claim1} reduces to \refeq{claim2}.
We next claim that
$$
\int_{I_0 \sm \su_{\calC \in \calM_0} \calI_\calc}G(x) dx= O(\ln n) +
\int_{I'_0 \sm \su_{\calC \in \calM_0}\calI_\calc} G(x) dx,
$$
where $I'_0 \as [m_{\calc_{0}} \pm 2r_{\calc_{0}}]$.
If $\calc_{0}$ is not a ssc, then this is clear as $I'_0 =2I_{\calc_{0}}=I_0$.
If $\calc_{0}$ is a ssc, then $I_0 = [m_{\calc_{0}}\pm |\calc_{0}| r_{\calc_{0}}]$. Break $I_0$ as
$I'_0$, $[m_{\calc_{0}}+2r_{\calc_{0}}, m_{\calc_{0}}+ |\calc_{0}|r_{\calc_{0}}]$
and $[m_{\calc_{0}}-2r_{\calc_{0}}, m_{\calc_{0}}- |\calc_{0}|r_{\calc_{0}}]$.
The closest root to any $x$ in these intervals
is from $\calc_{0}$. Moreover, as $|x-m_{\calc_{0}}| \ge 2r_{\calc_{0}}$, we get
$G(x) \as 1/d(x, V) \le 2/|x-m_{\calc_{0}}|$. Therefore, from \refLem{int}(Re) it follows that
$\int_{m_{\calc_{0}}+2r_{\calc_{0}}}^{m_{\calc_{0}}+ |\calc_{0}|r_{\calc_{0}}} \frac{2\,dx}{|x-m_{\calc_{0}}|} = O(\ln |\calc_{0}|)$.
Similarly for the other interval. Hence to prove \refeq{claim2}, it suffices to show
\beql{claim3}
\int_{I'_0 \sm \su_{\calC \in \calM_0}\calI_\calc} G(x) dx = O(|\calM_0|\ln n).
\eeql
Let $\calM_0$ also denote the pointset obtained
by replacing each $\calC \in \calM_0$ by its center $m_\calC$.
We will use \refLem{dense} to prove \refeq{claim3}.
As no subset of $\calM_0$ forms a cluster, $\calM_0$ is a dense pointset, and \refLem{dense} is applicable.
However, we first remove some region around every $p \in \calM_0\si \RR$ to be able
to invoke \refLem{dense}. For each such $p$, define
$J_p \as [p \pm d_2(p, \calM_0)/2]$. If $p=m_\calC$, for $\calC \in \calM_0$, then
$\calI_p \as \calI_\calc \ib J_p$. We claim
\beql{claim4}
\int_{\su_{p\in \calM_0} (J_p \sm \calI_p)}G(x) dx = O(|\calM_0|\ln n).
\eeql
From \refLem{dense} we get
$$\int_{I'_0 \sm (\su_{p \in \calM_0}J_p)}G(x) dx = O(|\calM_0|\ln n).$$
Combining these two bounds, along with the observation that the union of the sets
$I'_0 \sm (\su_{p \in \calM_0}J_p)$ and $\su_{p\in \calM_0} (J_p \sm \calI_p)$ is the set
$I'_0\sm \su_{p\in \calM_0} \calI_p$, completes the proof of \refeq{claim3}.
To prove \refeq{claim4}, we show that $\int_{J_p\sm \calI_p}G(x) dx= O(\ln n)$, and then sum over all
$p \in \calM_0$. There are three cases to consider:
\begin{tightenum}{r}
\item $p=m_\calC$ for some cluster $\calC \in \calM_0$ that is not a ssc.
Then $J_p= [m_\calC \pm R_\calC/2]$ and $\calI_p=\calI_\calc=[m_\calC\pm 2r_\calC]$.
Therefore, $J_p \sm \calI_p$ contains $[m_\calC+2r_\calC, m_\calC+R_\calC/2]$
and $[m_\calC-R_\calC/2, m_\calC - 2r_\calC]$.
The nearest root to any $x$ in these two intervals is from $\calC$.
Since $x$ is outside $2I_\calC$, it follows that $d(x,V) \ge |x-m_\calC|/2$.
Therefore, $G(x)\as\frac{1}{d(x,V)} \le 2/|x-m_\calC|$.
From \refLem{int}(Re), we obtain
$\int_{m_\calC+2r_\calC}^{m_\calC+R_\calC/2}G(x) dx=O(\ln R_\calC/r_\calC)$.
Since $\calC$ is not a ssc, $R_\calC/r_\calC = O(n^3)$, which gives us the desired bound.
The same applies to the other interval.
\item Suppose $p=m_\calC$, where $\calC \in \calM_0$ is a ssc.
Then $J_p = [m_\calC \pm R_\calC/2]$ and $\calI_p=\calI_\calc=[m_\calC\pm R_\calC/n^2]$.
Let $I'_\calC \as [m_\calC +\frac{R_\calC}{n^2}, m_\calC + \frac{R_\calC}{2}]$ be one of the
intervals in $J_p \sm \calI_p$. The nearest root to any $x\in I'_\calC$ is from $\calC$.
Since $x\nin 2I_\calC$, it follows that $d(x,V) \ge |x-m_\calC|/2$.
Therefore, $G(x)\as\frac{1}{d(x,V)} \le 2/|x-m_\calC|$.
Applying \refLem{int}(Re), we get $\int_{I'_\calC} G(x) dx \le 4\ln n$.
Similarly, for the other interval.
\item $p$ is a real root then $\calI_p=\es$.
For $x \in J_p$, our stopping function $G(x)=2/d_2(x,P)$, i.e., corresponding
to the inclusion predicate. Suppose $q \in P$ is such that $d_2(p, P)=|p-q|$.
Then for all $x \in J_p$, $d_2(x,P) \ge |p-q|-|p-x|\ge \sigma_p/2$,
and hence
$\int_{J_p} \frac{2dx}{d_2(x,P)} \le 4\int_{p-\sigma_p/2}^{p+\sigma_p/2} \frac{dx}{\sigma_p}=O(1)$.
\end{tightenum}
\epf
The proof above can be carried out with the exact constants involved in the definitions of
$I_\calC$, $\calI_\calc$ and $\calA_\calc$ (see \refLem{converse}),
but they will be absorbed by the big-O notation. Note that the $O(n \ln n)$ bound counts
the number of calls to the $C_0$ predicate.
The specialization of $G(x)$ for $C_0$ is $1/d(x,V)$.
The corresponding specialization for Sturm sequences is $1/d(x, V\si \RR) \le 1/d(x,V)$.
Therefore, $O(n\ln n)$ holds for \texttt{Newton-Isol} combined with Sturm sequences. For \texttt{Eval},
one specialization of the stopping function for the $C_0$ predicate is $n/d(x,V)$,
which immediately gives an $O(n^2\ln n)$
bound for \texttt{Newton-Isol} combined with \texttt{Eval}. Whether it can be improved
using the more precise specialization $\sum_{\alpha\in V}\frac{1}{|x-\alpha|}$ remains open.
Let $P$ be a dense pointset.
Given a point $p \in P$, define $\sigma_p \as \min|p-q|$, where $q\in P\sm \set{p}$,
and $J_p \as [p\pm \sigma_p/2]$.
We want to bound $\int_{(2D_P \si \RR) \sm \su_p J_p }dx/d(x,P) $.
We first show an $O(|P|^2)$ bound, essentially following \cite{burr:contamort:13}.
Let $\calV_p$ be the set of points in $2D_P \si \RR$ closer to $p$ than to any other point in $P$.
It is clear that $J_p \ib \calV_p$.
The intervals $\calV_p$ partition $2D_P \si \RR $.
Then $\int_{\calV_p\sm J_p} dx/d(x,P)$ can be shown to be bounded by
$O(\ln (r(D_p)/\sigma_p))$. Using the density of $P$, it can be shown
that if $p, q$ are such that $\sigma_p = |p-q|$ then $P \ib 3^{O(|P|)}D_{\set{p,q}}$,
which implies that $r(D_P) \le 3^{O(|P|)}\sigma_p$,
for all $p \in P$.
This gives an $O(|P|^2)$ bound instead of the bound in \refThm{bound}.
To obtain that we need to amortize the integral carefully.
The intuition is that if $\sigma_p$ is very small then there must be a lot of other points
close to $p$, and hence the width of $\calV_p$ cannot be very large compared to $\sigma_p$.
The challenge is to get an ``almost cluster-like'' decomposition of $P$.
We construct a tree on $P$ that gives us this decomposition.
We describe an iterative bottom-up procedure to construct a tree $\calT_{P}$ with leaves from $P$.
Let $\sigma \as \min_{p \in P}\sigma_p$.
For all points $p \in P$, draw a disc of radius $\sigma/2$ centered at $p$.
As $\sigma$ is the smallest distance between any pair of points,
two such discs can at most touch each other. The discs touching each other form a connected
component. The collection of the largest connected components partitions $P$ (leaves are
considered as components).
Moreover, there is at least one component $G\ib P$ that has cardinality strictly greater than one;
the components with cardinality one are the leaves.
For each such component $G$,
we introduce an internal node $u$ in $\calT_{P}$ with children as the leaves $p$, where $p \in G$;
let $G_u \as G$, the associated component, and $\sigma_u \as \sigma$.
Now redefine $\sigma$ as the minimum separation between the
components constructed so far,
draw a disc of radius $\sigma/2$ centered at each $p\in P$, and continue as above.
Let $\calT_{P}$ be the tree constructed in this bottom-up manner; see \refFig{ripples}.
Further define the following quantities for each $u \in \calT_{P}$:
\begin{tightenum}{r}
\item $\nu_u$ as the number of children of $u$,
\item $m_u$ be the center and $r_u$ be the radius of $D(G_u)$.
\end{tightenum}
\vfigpdf{A dense pointset $P$ and construction of $\calT_{P}$. Circles of different colors correspond
to different $\sigma$'s. The first choice of $\sigma$ corresponds to blue colored circles,
followed by green, orange and red.
The components formed are shown in the corresponding colors.
We only draw some of the relevant circles to give an idea.}{ripples}{0.5}
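A Python sketch of this bottom-up construction is given below; the points are assumed to be Python complex numbers, and the dictionary representation of nodes as well as the quadratic-time separation computation are choices of the example rather than of the construction itself.
\begin{verbatim}
from itertools import combinations

def cluster_tree(points):
    """Bottom-up construction of T_P (sketch). A node is a dict holding the
    component 'points', its 'children', and the value 'sigma' at which it
    was created (0.0 for leaves)."""
    comps = [{'points': frozenset([p]), 'children': [], 'sigma': 0.0}
             for p in points]

    def sep(c1, c2):  # separation between two components
        return min(abs(p - q) for p in c1['points'] for q in c2['points'])

    while len(comps) > 1:
        sigma = min(sep(a, b) for a, b in combinations(comps, 2))
        # Discs of radius sigma/2 around the points touch iff the point
        # distance is <= sigma; merge components linked by such pairs.
        merged, used = [], [False] * len(comps)
        for i in range(len(comps)):
            if used[i]:
                continue
            group, stack = [i], [i]
            used[i] = True
            while stack:
                a = stack.pop()
                for j in range(len(comps)):
                    if not used[j] and sep(comps[a], comps[j]) <= sigma:
                        used[j] = True
                        group.append(j)
                        stack.append(j)
            if len(group) == 1:
                merged.append(comps[i])          # component unchanged
            else:                                # new internal node u
                merged.append({
                    'points': frozenset().union(
                        *(comps[g]['points'] for g in group)),
                    'children': [comps[g] for g in group],
                    'sigma': sigma})
        comps = merged
    return comps[0]   # root: the component containing all of P
\end{verbatim}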
Let $u, v \in \calT_{P}$ be such that $v$ is a child of $u$. We have the following properties of $\calT_{P}$:
\begin{tightenum}{P}
\item$\sigma_u \le \min_{p\in G_v, q\in P\sm G_v}|p-q| \le 3 r_v$. The upper bound follows from the density of $P$.
The lower bound follows from the observation that the discs with radius
$\sigma/2$ centered at $p \in G_v$, where $\sigma_v <\sigma < \sigma_u$, do not touch
the discs of any other component, except when the radius is $\sigma_u/2$.
\item $r_u \le |G_u|\sigma_u$. Consider the graph $\calG$ with the vertices as $G_u$ and edges
between two vertices $p, q$ if $D(p, \sigma_u/2)\si D(q, \sigma_u/2)\neq \es$.
As $G_u$ is a connected component of these discs, we know that $\calG$ is connected.
Therefore, if $m$ is the number of vertices on the path joining $p, q$ in $\calG$,
then by triangular inequality $|p-q| \le m\sigma_u\le |G_u|\sigma_u$.
\item If $p$ is a leaf-child of $u$ then $\sigma_u = \sigma_p$.
It is clear that any disc $D(p,r)$, with $r< \sigma_p/2$, cannot touch $D(q,r)$, for any other point $q$.
The first time they touch is when $\sigma_u=\sigma_p$.
If $p \in \CC \sm \RR$, then we further obtain that $|\Im(p)| \ge \sigma_p \ge \sigma_u$.
\item $|\calT_{P}| = O(|P|)$.
Every level has a node with more than one child, as
there are pairs of components with separation exactly $\sigma$.
\item $P$ is the component associated with the root of $\calT_{P}$.
\end{tightenum}
The next result is an amortization analogous to that of the Davenport-Mahler bound over the
root separation bound.
\bleml{dense}
If $P$ is a dense pointset then
\beql{mcint}
\int_{(2D_P \si \RR) \sm \su_{p\in P} J_p} \frac{dx}{d(x,P)} = O(|P| \ln |P|),
\eeql
where for $p \in P \si \RR$, $J_p \as [p\pm \sigma_p/2]$, and $J_p=\es$ otherwise.
\eleml
\bpf
We break the integral recursively over the nodes of $\calT_{P}$.
For an internal node $u$ of $\calT_{P}$, we will show the following claim:
\beql{intu}
\int_{(2D(G_u)\si \RR )\sm \su_{p \in G_u}J_p} \frac{dx}{d(x,P)} = O(\nu_u \ln |G_u|).
\eeql
We take the sum over all internal nodes $u$. From (P4) we know that
$|\calT_{P}|=O(|P|)$, and hence $\sum_u \nu_u=O(|P|)$; moreover,
from (P5) we know that the component associated with the root of $\calT_{P}$ is $P$.
These observations then give us \refeq{mcint}.
For a point $p \in P$, recall that $\calV_p$ is the set of points in $2D_P\si \RR$
closer to $p$ than to any other point of $P$; by definition $J_p \ib \calV_p$.
Suppose $u$ is the parent of $p$.
We will bound the integral over $\calV_p$ in two steps:
the portion of $\calV_p$ inside $2D(G_u)$
and the portion outside $2D(G_u)$. The latter portion is where amortization occurs, as
for an $x \nin 2D(G_u)$, the distance of $x$ to $p \in G_u$ is roughly $|x- m_u|$.
Let $v$ be a child of $u$. There are three cases to consider:
\begin{tightenum}{}
\item {\bf Case 1.} $v$ is a leaf $p\in \RR$.
We first bound the portion $I_p$ of $\calV_p$ inside $2D(G_u)$;
the portion outside will be handled collectively for all points in the third case.
For all $x \in I_p \sm J_p$, it is clear that $d(x, P) = |x-p|$. From
\refLem{int}(Re) we obtain that
$\int_{I_p \sm J_p} \frac{dx}{|x-p|}=O\paren{\ln \frac{w(I_p)}{\sigma_p}}$.
But as $I_p \ib 2D(G_u) \si \RR$, we know that $w(I_p) \le 4r_u$.
From (P3), we know that $\sigma_p = \sigma_u$.
Therefore, $\int_{I_p \sm J_p} \frac{dx}{|x-p|} = O(\ln r_u/\sigma_u) = O(\ln |G_u|)$, from (P2).
\item {\bf Case 2.} $v$ is a leaf $p=\Re(p)+i\Im(p) \in \CC \sm \RR$. Again consider the interval
$I_p \as \calV_p \si 2D(G_u)$; in this case $J_p=\es$. As $p$ is the closest point to any $x \in I_p$,
$d(x, P)=|x-p|$. Moreover, $p$ and both the endpoints of $I_p$ are
in $2D(G_u)$, so the maximum distance of an endpoint of $I_p$ from $p$ is $\le 2r_u$.
Therefore, from \refLem{int}(Im) we have
$$\int_{I_p} \frac{dx}{d(x,P)} = O\paren{\ln \frac{r_u}{|\Im(p)|}}.$$
But recall from (P3) that $|\Im(p)| \ge \sigma_u$, hence $r_u/|\Im(p)| \le r_u /\sigma_u \le |G_u|$,
where the last inequality follows from (P2). Therefore, $\int_{I_p} \frac{dx}{d(x,P)} = O(\ln |G_u|)$.
\item {\bf Case 3.} $v$ is an internal node. Inductively, we have already bounded
the integral $\int_{(2D(G_v)\si \RR) \sm \su_{p\in G_v}J_p}dx/d(x,P)$. However, it is possible that $\calV_p$, for
some point $p\in G_v$ extends beyond $2D(G_v)\si \RR$.
Suppose $p$ is such a point, and $x\in W_p \as \calV_p \si (2 D(G_u) \sm 2D(G_v))$.
Then we know that $|x- p| \ge |x- m_v|/2$, where $m_v$ is the center of $D(G_v)$.
Therefore,
\begin{align*}
\sum_{p \in G_v}\int_{W_p}\frac{dx}{|x-p|} \le \int_{(2D(G_u) \sm 2D(G_v))\si \RR}\frac{2dx}{|x-m_v|}. \end{align*}
As the interval $2D(G_u)\si\RR$ has width $4r_u$, it follows from \refLem{int}(Re) that the integral on the RHS is bounded
by $O(\ln r_u/r_v)$. But from (P2) we have $r_u \le |G_u|\sigma_u$, and $\sigma_u\le 3r_v$ from (P1).
Therefore, we obtain
\begin{align*}\sum_{p \in G_v}\int_{W_p}\frac{dx}{d(x,P)}=O(\ln |G_u|).\end{align*}
This is the case where the amortization of the integral over the Voronoi regions takes place.
\end{tightenum}
Summing the bounds for all children $v$ of $u$ gives \refeq{intu}.
\epf
The following is the analogue of \refLem{dense} in $\CC$:
define $D_p \as D(p, \sigma_p/2)$, then
$$\int_{2D_P \sm \su_p D_p }\frac{dz}{d(z,P)} = O(|P|\ln |P|).$$
\ignore{
\subsection{Bound for Newton+EVAL}\label{sec:neval}
In this section, we derive an upper bound on the integral in the RHS of \refeq{nie3}
where the stopping function corresponds to the centered form interval arithmetic
based predicates used in the EVAL algorithm \cite{burr:contamort:13,sharma-yap:near-optimal:12}:
\beql{evalp}
\begin{split}
C_0(I) \equiv |f(m(I))| > \sum_{j \ge 1}\abs{\frac{f^{(j)}(m(I))}{j!}} \paren{\frac{w(I)}{2}}^j\text{, and}\\
C_1(I) \equiv |f'(m(I))| > \sum_{j \ge 1}\abs{\frac{f^{(j+1)}(m(I))}{j!}} \paren{\frac{w(I)}{2}}^{j-1}.
\end{split}
\eeql
The stopping function in this case is $G(x) \as 1.5\min\set{S_0(x), S_1(x)}$, where
\beql{seval}
S_0(x) \as \sum_{\alpha \in Z(f)} \frac{1}{|x-\alpha|}
\text{, and }
S_1(x) \as \sum_{\alpha' \in Z(f')} \frac{1}{|x-\alpha'|}.
\eeql
The idea is similar to what was done in \refSec{ndsc}, namely to charge $S_0$ on the region between
clusters, and $S_1$ on the roots. However, there is an added complication as $S_0$ depends on all
the roots, and not just the nearest root; similarly, for $S_1$ which depends on all critical points.
The bound is roughly $n$ times the bound in \refThm{bound}. This is because earlier only the
distance to a nearest root to $x$ played a significant role for $C_0$; however, $S_0$ is governed
by the distance to all the roots, and if all of them are equidistant from
$x$ then $S_0 \sim n/d(x,V)$, which gives us the additional factor of $n$. In fact, the analysis
reveals that such a equi-distributed geometry of roots can perhaps achieve the bound.
We show the following:
\bthml{neval}
Given a square-free polynomial $f\in \RR[x]$ of degree $n$,
the size of the subdivision tree constructed by \texttt{Newton-Isol}($f, I_0$) using
predicates given in \refeq{evalp} is bounded by $O(n^2\ln n)$.
\ethml
\bpf
\epf
}
\section{Concluding Remarks}
Our aim has been to devise a general approach to
improve any subdivision based algorithm for real root isolation.
This is achieved by the \texttt{Newton-Incl-Exc}~ predicate, which detects strongly separated clusters,
and hence reduces the number of subdivisions needed to bridge the gap between $R_\calC$ and $r_\calC$
for a cluster $\calC$ from $O(\log (R_\calC/r_\calC))$ to $O(\log n)$. The crucial ingredient is Ostrowski's criterion based on
deviations of the Newton diagram of a polynomial. The criterion works
for complex polynomials, so we expect an analogue of \texttt{Newton-Isol}~ for isolating
complex roots that is conceptually simpler than the existing approaches.
We have not explored the practical aspects of the algorithm, nevertheless,
we think that the analysis based on the geometry of clusters provides tools and
techniques for an alternate approach to understand existing algorithms.
We can bound the arithmetic complexity of \texttt{Newton-Isol}~ as follows.
The Newton diagram computation takes $O(n)$, and the Taylor shift
$O(n \log n)$ operations. The number of Newton iterations
to approximate $\calC$ is bounded by $O(\log \log \frac{R_\calC}{r_\calC})$,
which is $O(\log (nL))$ from root separation bounds.
Therefore, the arithmetic complexity, ignoring poly-log factors,
is bounded by $\wt{O}(n^2)$. The extension
to the bitstream model involves deriving a robust
version of Ostrowski's result and bounding precision requirements.
The latter will be governed by perturbation bounds for clusters.
For a cluster of size $k$, we expect these bounds to be
$O(\eps^{1/k})$, for $\eps$-perturbation in the coefficients.
In the worst case, this would give an $O(n(L+\log n))$ bound on
the precision.
\input{q-online.bbl}
\newpage
\section*{Appendix}
We give the proof of \refLem{correctness}; the arguments are based on standard manipulations
with Taylor series from the $\alpha$-theory of Smale et al. We first prove \refLem{correctness}(i), for
which we need the following functions from \cite{bcss:bk} defined for $f^{(k-1)}$:
\beql{gammakz}
\beta_k(z)= \abs{\frac{f^{(k-1)}(z)}{f^{(k)}(z)}},\;
\gamma_k(z)= \max_{j \ge 1}\abs{\frac{f^{(k+j)}(z)}{(j+1)! f^{(k)}(z)}}^{\frac{1}{j}}
\eeql
and $\alpha_k(z) =\beta_k(z)\gamma_k(z)$.
We derive relations between these quantities and $\rho_k(z)$'s given in \refeq{rkr}.
Considering the RHS of $\rho_k(z)$ for $j=k-1$, we immediately have
\beql{betakbound}
\rho_k(z) \ge \abs{\frac{f_{k-1}(z)}{f_k(z)}}
= k \beta_k(z).
\eeql
Multiplying and dividing the inner term on the RHS of $\gamma_k(z)$ in \refeq{gammakz}
by $(k+j)!/k!$ we obtain that
\begin{align*}
\gamma_k(z)
&\le \max_{j \ge 1} \paren{\frac{(k+j)!}{k!(j+1)!}}^{1/j}\max_{j > k}\abs{\frac{f_{j}(z)}{f_{k}(z)}}^{1/(j-k)}\\
&= \max_{j \ge 1} \paren{\frac{(k+j)!}{k!(j+1)!}}^{1/j} \frac{1}{\rho_{k+1}(z)}.
\end{align*}
The max-term is bounded by $(k+1)$,
which implies that
\beql{gammakbound}
\gamma_k( z) \rho_{k+1}(z) \le (k+1).
\eeql
Multiplying \refeq{betakbound} and \refeq{gammakbound} we obtain
that $\alpha_k(z) \Delta_k(z) \le 2.$ Therefore, if $\Delta_k(z) \ge 12$ then
$z$ is an approximate zero of $f^{(k-1)}$ with associated root in $D(z, 1.5\beta_k(z))\ib D(z,\frac{3\rho_k(z)}{2k})$,
where the inclusion follows from \refeq{betakbound}. The claim on
Newton iterates follows from \cite[p.~160,Thm.2]{bcss:bk}.
We now prove \refLem{correctness}(ii). We will need the following result \cite[p.~161, Lem.~3]{bcss:bk}:
for a $u \in [0, 1)$
\beql{binom}
\sum_{j \ge 0} {k+j \choose j}u^j = \frac{1}{(1-u)^{k+1}}.
\eeql
Let $\eps \as 1.5$, $\delta \as \eps\rho_k/k$, and $u \as \frac{\delta}{\rho_{k+1}}=\frac{\eps}{k\Delta_k}$;
here we abbreviate $\rho_k(z)$ to $\rho_k$ (similarly for the other quantities).
\bleml{fklb}
If $z$ is such that $\Delta_k(z) \ge 16$ then for $z' \in D(z, \delta)$, we have
$|f_k(z')| \ge |f_k(z)|(1-u)^{-(k+1)}/2$.
\eleml
\bpf
Take the absolute values in the Taylor expansion of $f^{(k)}(z')$
and apply the triangular inequality to obtain
$$\abs{f_k(z')} \ge \frac{1}{k!} \paren{|f^{(k)}(z)| - \sum_{j \ge 1} \abs{\frac{f^{(k+j)}(z)}{j!}} \delta^j}.$$
Dividing both sides by $|f_k(z)|$, and
multiplying and dividing the summation term on the RHS by $k!$ and $(k+j)!$
we obtain that
$$\abs{\frac{f_k(z')}{f_k(z)}} \ge \paren{1 - \sum_{j \ge 1} {k+j \choose j}\abs{\frac{f_{k+j}(z)}{f_{k}(z)}} \delta^j}.$$
From the expression of $\rho_{k+1}(z)$ in \refeq{rkr} and definition of $u$, we deduce that
\begin{align*}
\abs{\frac{f_k(z')}{f_k(z)}}
&\ge \paren{2 - \sum_{j \ge 0} {k+j \choose j}u^j}.
\end{align*}
Using \refeq{binom}, and the bound on $\Delta_k$, the RHS can be simplified to $(1-u)^{-(k+1)}/2$.
\epf
\blem
If $\Delta_k(z)\ge 16$ then for all $z' \in D(z, \delta)$
we have $\rho_k(z') < 2e^6 \rho_k(z)$ and $\rho_{k+1}(z') \ge \rho_{k+1}(z)/3$.
Therefore, $\Delta_k(z') \ge \Delta_k(z)/(6e^6)$.
\elem
\bpf
For $j < k$, take absolute values in the Taylor expansion of $f_j(z')$, apply the triangle inequality,
and split the summation at $\ell = k-j$, to get
\begin{align*}
|f_j(z')|
\le |f_{j}(z)| + \sum_{\ell =1}^{k-j} {\ell+j \choose \ell}\abs{{f_{\ell+j}(z)}} \delta^\ell +
\sum_{\ell > k-j} {\ell+j \choose \ell}\abs{f_{\ell+j}(z)} \delta^\ell .
\end{align*}
Divide by $|f_k(z)|$ and use the expressions in \refeq{rkr} to obtain
$$\abs{\frac{f_j(z')}{f_k(z)}}\le \rho_k^{k-j}+ \sum_{\ell =1}^{k-j} {\ell+j \choose \ell}\rho_k^{k-\ell-j} \delta^\ell + \sum_{\ell > k-j} {\ell+j \choose \ell}\frac{\delta^\ell}{\rho_{k+1}^{\ell+j-k}}.$$
Since $\delta =\eps\rho_k/k$ and $\Delta_k>1$, we can pull
out $\rho_k^{k-j}$ from the RHS to get
$$
\abs{\frac{f_j(z')}{f_k(z)}}
\le \rho_k^{k-j}\paren{1+ \sum_{\ell =1}^{k-j} {\ell+j \choose \ell}\paren{\frac{\eps}{k}}^\ell
+ \sum_{\ell > k-j} {\ell+j \choose \ell}\paren{\frac{\eps}{k}}^\ell}.
$$
Assuming $k \ge 2$, from \refeq{binom} we obtain that
$$ \abs{\frac{f_j(z')}{f_k(z)}}
\le \rho_k^{k-j}\paren{1- \frac{\eps}{k}}^{-(j+1)}.$$
Combining this bound with \refLem{fklb}, and doing some further simplifications
we obtain the upper bound on $\rho_k(z')$. Note that we require $k \ge 2 > \eps$.
To derive a lower bound on $\rho_{k+1}(z')$ in terms of $\rho_{k+1}(z)$,
we take absolute values in the Taylor expansion of $f_j(z')$, for $j >k$,
apply the triangle inequality, and divide both sides by $|f_k(z)|$, to get
$$\abs{\frac{f_j(z')}{f_k(z)}}
\le \abs{\frac{f_j(z)}{f_k(z)}} + \sum_{\ell \ge 1} {\ell+j\choose \ell}\abs{\frac{f_{\ell+j}(z)}{f_k(z)}} \delta^\ell.$$
From the expression for $\rho_{k+1}$ in \refeq{rkr} and \refeq{binom} it follows that
$$\abs{\frac{f_j(z')}{f_k(z)}}
\le \frac{1}{\rho_{k+1}^{j-k}} \paren{1-u}^{-(j+1)}.$$
Combining this with the lower bound in \refLem{fklb}, and using the lower bound on
$\Delta_k$, we further obtain that
$$\abs{\frac{f_j(z')}{f_k(z')}}^{1/(j-k)} \le \frac{2^{1/(j-k)}}{(1-u)\rho_{k+1}}.$$
Since $u \le \eps/(k\Delta_k)$ and $\Delta_k \ge16$, we get that
$$\abs{\frac{f_j(z')}{f_k(z')}}^{1/(j-k)} \le \frac{3}{\rho_{k+1}},$$
which implies the desired lower bound on $\rho_{k+1}(z')$.
\epf
To show \refLem{correctness}(iii), we suppose that $\rho_k(w) \le \rho_k(z)$.
As the two inclusion discs intersect, it follows that
$$|w \pm 3\rho_k(w) -z| \le |z-w|+3\rho_k(w) \le 9\rho_k(z) \le \frac{\rho_{k+1}(z)}{3},$$
where the last inequality follows from $\Delta_k(z)\ge 27$. This implies that
$D(w, 3\rho_k(w)) \ib D(z, \frac{\rho_{k+1}(z)}{3})$, and hence both the discs have the same cluster.
We next give a self-contained proof of Pawlowski's result \cite[Thm.~2.2]{pawlowski:zeros-of-derivatives:99}.
The difference in our proof is that we avoid using the Enestr\"om-Kakeya theorem. The key idea of using
Walsh's representation theorem \cite[Thm.~3.4.1c]{rahman-schmeisser:polynomials:bk}, however, is common to both proofs.
Given a cluster $\calC$ of size $k$, and $z\in \CC$,
let $f_1(z)=\lead(f)\prod_{\alpha \in \calC}(z-\alpha)$ and $f_2(z) =\prod_{\beta \in \ol{\calC}}(z-\beta)$.
From Leibniz's formula we have
$$f^{(j)}(z)=\sum_{i=\max\set{0, j+k-n}}^{\min\set{j,k}} {j \choose i} f_1^{(i)}(z) f_2^{(j-i)}(z).$$
We will focus on the case when $j \le k$, in which case the upper bound of the summation is $j$.
The bounds on the summation are required because $f_1$ cannot be differentiated more than $k$ times and
similarly for $f_2$.
Applying Walsh's representation theorem first to $f_1$, we obtain that there is an $\alpha \in D(m,r_\calC)$ such that
$$f^{(j)}(z)=\sum_{i=\max\set{0, j+k-n}}^{j} {j \choose i} k(k-1) \cdots (k-i+1) (z-\alpha)^{k-i}f_2^{(j-i)}(z).$$
Now applying Walsh's representation theorem to $f_2$, we know that there is a $\beta \nin D(m,R_\calC)$ such that
$$f^{(j)}(z)=\sum_{i=\max\set{0, j+k-n}}^{j} {j \choose i} k(k-1) \cdots (k-i+1) (z-\alpha)^{k-i}
(n-k)(n-k-1)\cdots (n-k-j+i+1) (z-\beta)^{n-k-j+i}.$$
Expanding the binomial coefficient ${j \choose i}$ and simplifying, we obtain
$$f_j(z) = \frac{f^{(j)}(z)}{j!}=\sum_{i=\max\set{0, j+k-n}}^{j} {k\choose i} (z-\alpha)^{k-i}
{n-k\choose j-i} (z-\beta)^{n-k-j+i}.$$
Pulling out the last term from the RHS we obtain that
$$f_j(z)={k \choose j}(z-\alpha)^{k-j}(z-\beta)^{n-k}\sum_{i=\max\set{0, j+k-n}}^{j} \frac{{k\choose i} {n-k\choose j-i}}{{k \choose j}}\paren{\frac{z-\alpha}{z-\beta}}^{j-i};$$
note that when $j=k$, the term $(z-\alpha)^{k-j}$ is one. Now we substitute $i$ by $j-i$ to obtain
\beql{fjz}
f_j(z)={k \choose j}(z-\alpha)^{k-j}(z-\beta)^{n-k}\sum_{i=0}^{\min\set{j, n-k}} \frac{{k\choose j-i} {n-k\choose i}}{{k \choose j}}\paren{\frac{z-\alpha}{z-\beta}}^{i}.
\eeql
The fraction satisfies
$$\frac{{k\choose j-i} {n-k\choose i}}{{k \choose j}}
= \frac{1}{i!}\frac{(n-k)!}{(n-k-i)!} \frac{j!}{(j-i)!} \frac{(k-j)!}{(k-j+i)!}
\le \frac{1}{i!} \paren{\frac{(n-k)j}{(k-j)}}^i;$$
note that for $j=k$ the denominator $(k-j)$ does not appear; we capture this by using the notation
$(x)_1 \as \max\set{1, x}$.
Therefore, the summation in \refeq{fjz}
does not vanish if $z$ satisfies the following inequality:
$$\half\ge \sum_{i=1}^{\min\set{j,n-k}} \frac{1}{i!}\paren{\frac{j(n-k)}{(k-j)_1}\abs{\frac{z-\alpha}{z-\beta}}}^{i}.$$
Substituting the upper bound of the summation by infinity, we get the following stronger constraint:
$$\half \ge \sum_{i\ge 1} \frac{1}{i!}\paren{\frac{j(n-k)}{(k-j)_1}\abs{\frac{z-\alpha}{z-\beta}}}^{i}.$$
Adding one to both sides, and observing that the RHS then becomes the expansion of
$\exp\paren{\frac{j(n-k)}{(k-j)_1}\abs{\frac{z-\alpha}{z-\beta}}}$, we see that the inequality follows
if
\beql{ab}
\ln 1.5 \ge \frac{1}{4} \ge \paren{\frac{j(n-k)}{(k-j)_1}\abs{\frac{z-\alpha}{z-\beta}}}.
\eeql
Since $\alpha \in D(m_\calC, r_\calC)$, we have
$$|z-\alpha| \le |z-m_\calC| + r_\calC \le 2\max(|z-m_\calC|,r_\calC),$$
and
$$|z-\beta| \ge |m_\calC- \beta| - |z-m_\calC| \ge R_\calC - |z-m_\calC| \ge R_\calC - \max\set{|z-m_\calC|, r_\calC}.$$
These bounds imply that
$$\frac{|z-\alpha|}{|z-\beta|} \le 2 \frac{\max\set{|z-m_\calC|, r_\calC}}{(R_\calC - \max\set{|z-m_\calC|, r_\calC})},$$
and hence \refeq{ab} follows
if
$$R_\calC \ge \max\set{|z-m_\calC|, r_\calC} \paren{1 + \frac{8j(n-k)}{(k-j)_1}}.$$
To summarize, we have the following result:
\bleml{derivroots}
Let $\calC$ be a cluster of size $k$ and $j \in \set{0 \dd k}$.
If $z \in \CC$ is such that
\beql{rcalc}
R_\calC \ge 2\max\set{|z-m_\calC|, r_\calC}\paren{\frac{8j(n-k)}{(k-j)_1}}_1,
\eeql
then
$$f_j(z) = {k \choose j}(z-\alpha)^{k-j}(z-\beta)^{n-k}(1 \pm \half),$$
for some $\alpha \in D(m, r_\calC)$ and $\beta \nin D(m,R_\calC)$;
the notation ``$1 \pm \half$'' stands for $1 + \frac{\theta}{2}$ for some $\theta\in \CC$ with $|\theta| \le 1$.
Moreover, if $z$ also satisfies $|z-m_\calC| > r_\calC$ then $f^{(j)}(z) \neq 0$.
\elem
We specialize this result to the case of a strongly separated cluster:
\bcorl{derivrootsa}
For a strongly separated cluster $\calC$, if $z$ is such that
$$|z-m_\calC| \le \frac{R_\calC}{ 4n^2}$$
then for $0 \le j \le k$,
$$f_j(z) = {k \choose j}(z-\alpha)^{k-j}(z-\beta)^{n-k}(1 \pm \half),$$
for some $\alpha \in D(m, r_\calC)$ and $\beta \nin D(m,R_\calC)$.
Moreover, if $z$ also satisfies $|z-m_\calC| > r_\calC$ then $f^{(j)}(z) \neq 0$, for $0 \le j \le k$.
\ecorl
\bpf
Note that the maximum value of the term $j(n-k)/(k-j)_1$ is obtained at $j=k$, and it is
$k(n-k)$. From the AM-GM inequality, we know that $k(n-k) \le n^2/4$. Therefore,
\refeq{rcalc} follows if $z$ and $r_\calC$ are such that
$$R_\calC \ge 4n^2\max\set{|z-m_\calC|, r_\calC}.$$
For a strongly separated cluster $\calC$ we know that $R_\calC \ge 4n^2r_\calC$, so it
suffices that $4n^2|z-m_\calC| \le R_\calC$, which is precisely the condition on $z$ in the corollary.
\epf
We use this result to show that $(k-j)$ roots of the $j$th derivative are in $D(m_\calC,r_\calC)$ and the remaining are
outside $D(m_\calC, R_\calC/(2n^2))$.
Let $\alpha_1 \dd \alpha_k$ be the roots of $f$ in $\calC$ and let $\beta_1 \dd \beta_{n-k}$
be the remaining roots. Let $g_t$ be the polynomial with roots
$(1-t)m_\calC+t\alpha_1 \dd (1-t)m_\calC+t\alpha_k, \beta_1 \dd \beta_{n-k}$. Thus
$g_0(z)=(z-m_\calC)^k\prod_{\beta \in \ol{\calC}}(z-\beta)$ and $g_1(z)=f(z)$.
Since $g_t(z)$ has a strongly separated cluster of size $k$ in $D(m_\calC, tr_\calC)$, from the lemma above we know that
$g_t^{(j)}(z)$ does not vanish on the boundary of $D(m_\calC, r_\calC)$. As the roots vary continuously
with $t$, and $g_0^{(j)}(z)$ has a root of multiplicity $(k-j)$ at $m_\calC$, it follows that $g_t^{(j)}(z)$ has
$k-j$ roots in $D(m_\calC, tr_\calC)$ and the remaining roots outside $D(m_\calC, R_\calC/(2n^2))$.
Substituting $t=1$ gives us the desired result. To summarize, we have obtained the following result:
\bleml{derivrootsb}
Given a strongly separated cluster $\calC$ of size $k$, for $j \le k$, there are $k-j$ roots of the derivative
$f^{(j)}(z)$ in $D(m_\calC, r_\calC)$ and the remaining $(n-k)$ roots are outside $D(m_\calC, R_\calC/(2n^2))$.
\eleml
\end{document}
| {
"timestamp": "2015-02-02T02:10:57",
"yymm": "1501",
"arxiv_id": "1501.07774",
"language": "en",
"url": "https://arxiv.org/abs/1501.07774",
"abstract": "We describe a subroutine that improves the running time of any subdivision algorithm for real root isolation. The subroutine first detects clusters of roots using a result of Ostrowski, and then uses Newton iteration to converge to them. Near a cluster, we switch to subdivision, and proceed recursively. The subroutine has the advantage that it is independent of the predicates used to terminate the subdivision. This gives us an alternative and simpler approach to recent developments of Sagraloff (2012) and Sagraloff-Mehlhorn (2013), assuming exact arithmetic.The subdivision tree size of our algorithm using predicates based on Descartes's rule of signs is bounded by $O(n\\log n)$, which is better by $O(n\\log L)$ compared to known results. Our analysis differs in two key aspects. First, we use the general technique of continuous amortization from Burr-Krahmer-Yap (2009), and second, we use the geometry of clusters of roots instead of the Davenport-Mahler bound. The analysis naturally extends to other predicates.",
"subjects": "Numerical Analysis (math.NA); Symbolic Computation (cs.SC)",
"title": "Near Optimal Subdivision Algorithms for Real Root Isolation",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126469647337,
"lm_q2_score": 0.7248702880639791,
"lm_q1q2_score": 0.709439718337186
} |
https://arxiv.org/abs/1806.01163 | VADU 2018 Open Problem Session | We state the problems discussed in the open problem session at Variational Analysis Down Under (VADU2018) conference held in honour of Prof. Asen Dontchev's 70th birthday on 19--21 February 2018 at Federation University Australia, this https URL. | \section{Existence of local calm selections}
This problem was proposed by Asen Dontchev. All background material, including notation, history, etc. can be found in \cite{book}. We are grateful to Asen for providing this description.
\proclaim Theorem (Bartle-Graves (1952)). Let $X$ and $Y$ be
Banach spaces and let $f:X \to Y$ be a function which is strictly
differentiable at $\bar{x}$ and such that the derivative $D f(\bar{x})$ is
surjective. Then there exist a neighborhood $V$ of $ f(\bar{x})$ and
a constant $\gamma > 0$ such that $f^{-1}$ has a continuous selection $s$ on $V$ which is calm with constant $\gamma$; that is,
$$
\|s(y) - \bar{x}\| \leq \gamma\|y - f(\bar{x})\|
\;\text{for every } y \in V.
$$
When $X$ and $Y$ are finite dimensional, or even Hilbert spaces, the proof is easy. For Banach spaces, the proof is highly nontrivial. A generalization of the Bartle-Graves theorem to set-valued mappings was obtained in \cite{a}.
Here is the open problem:
\proclaim Conjecture.
Consider a function $f:\mathbb{R}^n \to \mathbb{R}^m$ which is Lipschitz continuous around $\bar{x}$
and suppose that all matrices $A$ in Clarke's generalized Jacobian of $f$ at $\bar x$ are surjective. Then
$f^{-1}$ has a continuous
local selection around $\bar{y}$ for $\bar{x}$ which is calm at $\bar y = f(\bar x)$.
If $n=m$ the conjecture reduces to Clarke's inverse function theorem. For $m \leq n$, according to a theorem by Pourciau \cite{P}, under the same condition the function $f$ is metrically regular. This last result was generalized recently to Banach spaces in \cite{r}.
\section{Are $6$-polytopes $3$-linked?}
This problem was presented by Bui Thi Hoa.
A graph $G$ is $k$-linked ($k \ge 1$) if for any selection $Y:=\{(s_1,t_1),\ldots,(s_k,t_k)\}$ of $k$ pairs of pairwise distinct vertices there exist $k$ pairwise disjoint paths connecting the $k$ pairs of points in $Y$. If the graph of a polytope is {\it $k$-linked} we say that the polytope is also {\it $k$-linked}.
Recall that a $d$-polytope is a $d$-dimensional polytope, i.e. the affine hull of the polytope is $d$-dimensional. The initial question is whether or not every $d$-polytope is {\it $\lfloor d/2 \rfloor$-linked}. A negative answer was given by Gallivan (see \cite{Gal}) with a construction of a $d$-polytope which is not {\it $\lfloor 2(d+4)/5\rfloor$-linked}.
It has already been proven that $4$-polytopes and $5$-polytopes are {\it $2$-linked} (see \cite{Tho}, \cite{Sey}), whereas not all $8$-polytopes are {\it $4$-linked}. The remaining question is whether all $6$-polytopes are {\it $3$-linked}.
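To make the definition concrete, the following small Python sketch (for illustration only; the choice of the complete graph $K_6$ and the brute-force search are assumptions of the example, and the sketch says nothing about polytope graphs) checks $2$-linkedness of a toy graph by looking for pairwise disjoint path pairs.
\begin{verbatim}
# Illustrative sketch: brute-force check that a small graph is 2-linked, i.e.
# every two pairs (s1,t1),(s2,t2) of pairwise distinct vertices are joined by
# two vertex-disjoint paths.  The graph (here K6, the complete graph on six
# vertices) is an assumption chosen so that the check succeeds quickly.
from itertools import permutations

n = 6
adj = {v: set(range(n)) - {v} for v in range(n)}   # complete graph K6

def simple_paths(s, t, banned):
    # all simple s-t paths avoiding the vertex set `banned`
    stack = [(s, [s])]
    while stack:
        v, path = stack.pop()
        if v == t:
            yield path
            continue
        for w in adj[v]:
            if w not in banned and w not in path:
                stack.append((w, path + [w]))

def two_linked():
    for s1, t1, s2, t2 in permutations(range(n), 4):
        ok = False
        for p in simple_paths(s1, t1, {s2, t2}):
            if any(True for _ in simple_paths(s2, t2, set(p))):
                ok = True
                break
        if not ok:
            return False
    return True

print("K6 is 2-linked:", two_linked())
\end{verbatim}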
\section{Is FFS3 polytope decomposable?}
This problem was suggested by David Yost, and communicated during the open problem section by Scott Lindstrom and Vera Roshchina.
A polytope is called decomposable \cite{PY} if it can be represented as a Minkowski sum of dissimilar convex bodies. Two polytopes are similar if one can be obtained from the other by a dilation and a translation.
David Yost, in collaboration with Debra Briggs, has classified all but one of the 3-polytopes with up to 16 edges in terms of decomposability (manuscript in preparation). The only remaining case is the (combinatorial) polytope FFS3, whose graph is shown in Fig.~\ref{fig:FFS3}.
\begin{figure}[ht]
{\centering \includegraphics[width=0.3\textwidth]{polytope}\\}
\caption{Graph of the polytope FFS3}
\label{fig:FFS3}
\end{figure}
It is conjectured that this polytope has no decomposable geometric realisation.
All polytopes with up to 15 edges are classified in terms of their decomposability \cite{briggs}, and the resolution of the decomposability question for FFS3 polytope will settle the 16-edge case. However further case-by-case decomposability classification of polyhedra with higher number of edges presents a tedious challenge, and a more interesting question is developing an algorithm to check decomposability. We note that in an overwhelming number of cases indecomposability can be checked using combinatorial conditions from \cite{PY}.
\section{Projections onto compact convex sets}
This problem was proposed by Andrew Eberhard.
Let $C_1$ and $C_2$ be compact convex sets in a Hilbert space $\mathcal{H}$. The conjecture states that there always exists a point $x\in \mathcal{H}$ such that for each of its projections $p_i$ onto $C_i$, $i\in \{1,2\}$ the relevant normals $x-p_1$ and $x-p_2$ define the hyperplanes that strongly expose the faces $\{p_1\}$ and $\{p_2\}$ of $C_1$ and $C_2$ respectively.
Recall (see \cite[Definition 8.27]{Fabian}) that a point $x\in C$ is strongly exposed by a linear functional $f$ if $f(x) = \sup_{x'\in C} f(x')$ and $x_k\to x$ for all sequences $\{x_k\} \subset C$ such that $\lim_{k\to\infty} f(x_k) = \sup_{x'\in C}f(x')$.
\section{Convergence of the continuous time Douglas-Rachford algorithm}
This problem was proposed by Scott Lindstrom.
For the feasibility problem of finding a point in the nonempty intersection $A\cap B \ne \emptyset$ of proximal sets $A$ and $B$, the Douglas-Rachford method for a given starting point $x_0$ generates a sequence
\begin{equation*}
x_n \in Tx_{n-1}:=\left(\lambda(2P_B-{\rm Id})(2P_A-{\rm Id})+(1-\lambda){\rm Id}\right)x_{n-1}
\end{equation*}
where $P_A,P_B$ denote the usual projection operators for $A,B$ respectively and $\lambda \in (0,1]$ is usually taken to be $1/2$. When $A,B$ are also convex, the sequence $(x_n)_{n\in \mathbb{N}}$ converges weakly to a fixed point of the method (see \cite{LM} and \cite{BCL}).
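For illustration, the following Python sketch (not part of the problem statement; the unit circle, the line height, the starting point and the iteration count are assumptions chosen for the example) runs the Douglas-Rachford operator $T$ above with $\lambda=1/2$ for the circle/line pair discussed below.
\begin{verbatim}
# Minimal sketch (illustrative only): Douglas-Rachford iteration for the
# feasibility problem with A the unit circle and B a horizontal line in R^2.
# The projector formulas below are the usual closed-form expressions for
# these two sets; lambda = 1/2 as in the text.
import numpy as np

def P_A(x):                      # projection onto the unit circle
    n = np.linalg.norm(x)
    return x / n if n > 0 else np.array([1.0, 0.0])

def P_B(x, height=0.5):          # projection onto the line y = height
    return np.array([x[0], height])

def dr_step(x, lam=0.5):
    R_A = 2 * P_A(x) - x         # reflection in A
    R_B = 2 * P_B(R_A) - R_A     # reflection in B applied to R_A x
    return lam * R_B + (1 - lam) * x

x = np.array([2.0, -1.0])        # arbitrary starting point
for _ in range(200):
    x = dr_step(x)
print(x, P_A(x))                 # the shadow P_A(x) approaches a feasible point
\end{verbatim}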
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.45\textwidth]{circleflow}\quad \includegraphics[width=.515\textwidth]{ellipseflow}
\end{center}
\caption{The flowfield \eqref{DR_DE} with a circle/line (left) and ellipse/line (right). Images courtesy of Veit Elser.}
\label{fig:circleflow}
\end{figure}
For the nonconvex case where $A$ is a circle and $B$ a line, Borwein and Sims \cite{BS} considered the ``continuous time'' version of the algorithm---whose flow field is shown at left in Figure~\ref{fig:circleflow} and corresponds to the solution of the differential equation given by
\begin{equation}\label{DR_DE}
\frac{dx}{dt}=T(x) \quad \text{when} \; \lambda \rightarrow 0^+
\end{equation}
---as a means of approaching the question of convergence in the usual case $\lambda=1/2$ under subtransversality, a question which Benoist \cite{Benoist} answered in the affirmative by means of a Lyapunov function and which has since been extended by Minh N. Dao and Matthew K. Tam \cite{DT}.
The generalization to a subtransversal ellipse and line, and also to a $p$-sphere and a line, was considered by Borwein et al.~\cite{BLSSS}, who showed that local convergence remains while the global behaviour becomes far more complicated. See, for example, Figure~\ref{fig:ellipseandline}. Veit Elser has suggested analysing the continuous time version of the method in these more general settings and has generously furnished the images in Figure~\ref{fig:circleflow}.
\begin{figure}
\begin{center}
\includegraphics[angle=90,width=\textwidth]{bigellipsecindy}
\end{center}
\caption{Behaviour of the Douglas-Rachford method with an ellipse and a line differs from the case of a circle and a line.}\label{fig:ellipseandline}
\end{figure}
\section{Minimal distance problem}
This problem was proposed by Alex Kruger.
Given a finite set of points $a_1,\dots, a_m\in X$, where $X$ is a Euclidean space, find the solution to the problem
\begin{equation}\label{eq:pb-alex}
\min_{x\in X}\max_{i\in \{1,\dots, m\}}\|a_i-x\|.
\end{equation}
The problem has a unique solution: $x$ is the centre of the smallest Euclidean ball that contains all the points. However, it is unclear whether there exists a neat way to write this solution explicitly.
This is a particular case of a more general problem. The space $X$ can be an arbitrary normed linear or even a metric space. In the latter case, the norm of the difference in \eqref{eq:pb-alex} should be replaced by the distance. Instead of the maximum in \eqref{eq:pb-alex}, it could be an arbitrary norm in $\mathbb{R}^m$.
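Numerically, the minimax problem \eqref{eq:pb-alex} in the Euclidean case is easy to approximate; the following Python sketch (the sample points and the choice of a derivative-free solver are assumptions made only for illustration) computes an approximate centre of the smallest enclosing ball.
\begin{verbatim}
# Illustrative sketch: numerically approximating the solution of the
# min-max problem min_x max_i ||a_i - x|| (the centre of the smallest
# enclosing Euclidean ball).  The data points and the solver choice are
# assumptions made for the example only.
import numpy as np
from scipy.optimize import minimize

a = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.5]])   # sample points a_1,...,a_m

def radius(x):                    # objective: largest distance to a point
    return np.max(np.linalg.norm(a - x, axis=1))

res = minimize(radius, x0=a.mean(axis=0), method='Nelder-Mead')
print(res.x, radius(res.x))       # approximate centre and minimal radius
\end{verbatim}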
\section{Demyanov-Ryabova conjecture}
This problem was communicated by Vera Roshchina.
The problem was originally stated in \cite[Conjecture 1]{DR}. Recently two different special cases were confirmed in \cite{DP,TS}. During the preparation of this file a counterexample was found \cite{Vera-DR}.
Given a finite family $\Omega$ of convex polytopes in $\mathbb{R}^n$, for each unit vector $g\in S_{n-1}$ we construct a new polytope as the convex hull of all support faces of all polytopes in the family $\Omega$, i.e. we define the function
$$
C(g) := \mathrm{conv\,} \{\Argmax_{x\in P}\langle x, g\rangle \,|\, P \in \Omega\}.
$$
Collecting all such polytopes, we obtain a new finite family of polytopes,
$$
F(\Omega) = \{C(g) : g\in S_{n-1}\}.
$$
Now, starting from a given finite collection of polytopes $\Omega_0$, we apply this transformation repeatedly, obtaining a sequence $\Omega_0$, $\Omega_1$, $\Omega_2$, \dots, where $\Omega_i = F(\Omega_{i-1})$, $i\in \mathbb{N}$.
The original Demyanov-Ryabova conjecture claimed that this sequence eventually reaches a two-cycle, i.e. for a sufficiently large $N$ we have $\Omega_{N+2} = \Omega_N$. Since we now know that the conjecture is false, the question is to find a characterisation of such collections of polytopes that yield two-cycles, extending and generalising the results of \cite{DP,TS}.
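For small planar examples the transformation $F$ can be explored computationally. The Python sketch below is illustrative only: polytopes are stored by their extreme points, $S_{n-1}$ is replaced by a finite sample of directions (so some polytopes $C(g)$ may be missed), and the starting family is a toy example. It iterates $F$ and stops when a two-cycle $\Omega_{N+2}=\Omega_N$ is detected.
\begin{verbatim}
# Illustrative sketch of the Demyanov-Ryabova transformation in R^2.
# Each polytope is represented by the tuple of its extreme points; directions
# are sampled on the unit circle, which only approximates the full family F.
import math

def hull(points):
    # extreme points of a finite planar point set (Andrew's monotone chain)
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def argmax_face(vertices, g):
    # vertices of a polytope attaining max <x, g> (generators of the support face)
    vals = [x[0]*g[0] + x[1]*g[1] for x in vertices]
    m = max(vals)
    return [x for x, v in zip(vertices, vals) if v > m - 1e-9]

def transform(family, directions):
    # F(Omega): one polytope C(g) per sampled direction g
    new = set()
    for g in directions:
        pts = []
        for P in family:
            pts.extend(argmax_face(P, g))
        new.add(tuple(hull(pts)))
    return new

directions = [(math.cos(2*math.pi*k/720), math.sin(2*math.pi*k/720))
              for k in range(720)]
triangle = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)]
square   = [(1.0, 1.0), (3.0, 1.0), (3.0, 3.0), (1.0, 3.0)]
family = {tuple(hull(triangle)), tuple(hull(square))}

seen = [family]
for i in range(10):
    family = transform(family, directions)
    if len(seen) >= 2 and family == seen[-2]:
        print("two-cycle reached after", i + 1, "steps")
        break
    seen.append(family)
else:
    print("no two-cycle detected within 10 iterations")
\end{verbatim}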
\section{D\"urer's conjecture}
This problem was communicated by Vera Roshchina.
Albrecht D\"urer dedicated a nontrivial part of his career to laying out the geometric foundations of drawing and perspective. His five centuries old work \cite{Durer} is available online via Google books. The mathematical statement known as D\"urer's conjecture was motivated by this work and proposed in 1975 by Shephard \cite{Shephard}. A net (or unfolding) of a 3-polytope is the process of cutting it along its edges, so that the resulting connected shape can be flattened (developed) into the plane \cite{GhomiNotices}. It is not difficult to find examples of polytopes for which certain cuts result in overlapping unfoldings, such as the truncated tetrahedron shown in Fig.~\ref{fig:unfolding}
\begin{figure}[ht]
{\centering
\includegraphics[scale=1]{unfold-truncated.pdf}\\}
\caption{Two different nets of the same truncated tetrahedron}
\label{fig:unfolding}
\end{figure}
(see \cite{GhomiProof}).
The D\"urer's conjecture is a claim that any polytope has a nonoverlapping net. A significant recent contribution in this direction is the work by Mohammed Ghomi who showed that every polytope is combinatorially equivalent to an unfoldable one \cite{GhomiProof}. For more details we refer the reader to an overview \cite{GhomiNotices} by the same author.
\section*{Acknowledgements}
We are grateful to Asen Dontchev, Andrew Eberhard, Alex Kruger and David Yost for patiently clarifying the mathematical details of their open problems to us.
\bibliographystyle{plain}
| {
"timestamp": "2018-06-05T02:19:08",
"yymm": "1806",
"arxiv_id": "1806.01163",
"language": "en",
"url": "https://arxiv.org/abs/1806.01163",
"abstract": "We state the problems discussed in the open problem session at Variational Analysis Down Under (VADU2018) conference held in honour of Prof. Asen Dontchev's 70th birthday on 19--21 February 2018 at Federation University Australia,this https URL.",
"subjects": "Optimization and Control (math.OC)",
"title": "VADU 2018 Open Problem Session",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126469647337,
"lm_q2_score": 0.7248702880639791,
"lm_q1q2_score": 0.709439718337186
} |
https://arxiv.org/abs/1107.5971 | Injective hulls of certain discrete metric spaces and groups | Injective metric spaces, or absolute 1-Lipschitz retracts, share a number of properties with CAT(0) spaces. In the 1960es, J. R. Isbell showed that every metric space X has an injective hull E(X). Here it is proved that if X is the vertex set of a connected locally finite graph with a uniform stability property of intervals, then E(X) is a locally finite polyhedral complex with finitely many isometry types of n-cells, isometric to polytopes in l^n_\infty, for each n. This applies to a class of finitely generated groups G, including all word hyperbolic groups and abelian groups, among others. Then G acts properly on E(G) by cellular isometries, and the first barycentric subdivision of E(G) is a model for the classifying space \underbar{E}G for proper actions. If G is hyperbolic, E(G) is finite dimensional and the action is cocompact. In particular, every hyperbolic group acts properly and cocompactly on a space of non-positive curvature in a weak (but non-coarse) sense. | \section{Introduction}
A metric space $Y$ is called {\em injective} if for every metric space~$B$
and every $1$-Lipschitz map $f \colon A \to Y$ defined on a set $A \subset B$
there exists a $1$-Lipschitz extension $\overline f \colon B \to Y$ of $f$.
The terminology is in accordance with the notion of an injective object
in category theory.
Basic examples of injective metric spaces are the real line,
all complete $\mathbb{R}$-trees, and $l_\infty(I)$ for an arbitrary index set $I$.
Every injective metric space $Y$ is complete, geodesic, and satisfies
Busemann's non-positive curvature condition in a restricted form
(see~\eqref{eq:gamxy} below); in particular, $Y$ is contractible.
By an old construction of Isbell~\cite{Isb},
every metric space $X$ possesses an essentially unique
{\em injective hull} $({\rm e},\E{X})$;
that is, $\E{X}$ is an injective metric space, ${\rm e} \colon X \to \E{X}$
is an isometric embedding, and every isometric embedding of $X$ into some
injective metric space $Z$ factors through ${\rm e}$.
If $X$ is compact then so is $\E{X}$, and if $X$ is finite then
the injective hull is a finite polyhedral complex of dimension at
most $\frac12 |X|$ whose $n$-cells are isometric to polytopes
in~$l^n_\infty = l_\infty(\{1,\dots,n\})$.
A detailed account of injective metric spaces and hulls is given
below, in Sections~\ref{Sect:inj} and~\ref{Sect:hull}.
Isbell's construction was rediscovered twenty years later by Dress~\cite{Dre}
(and even another time in~\cite{ChrL}).
Due to this independent work and a characterization of injective
metric spaces from~\cite{AroP}, metric injective hulls
are also called {\em tight spans}
or {\em hyperconvex hulls} in the literature;
furthermore, ``hull'' is often substituted by ``envelope''.
Tight spans are widely known in discrete mathematics
and have notably been used in phylogenetic analysis
(see~\cite{DreMT,DreHM} for some surveys). Apart from the two foundational
papers~\cite{Isb,Dre} and some work referring to Banach spaces
(see, for instance,~\cite{Isb2,Rao,CiaD}), the vast literature on metric
injective hulls deals almost exclusively with finite metric spaces.
Dress proved that for certain discrete metric spaces $X$ the tight span
$T_X$ still has a polyhedral structure~\cite[(5.19), (6.2), (6.6)]{Dre};
these results, however, presuppose that $T_X$ is locally finite dimensional.
A simple sufficient, geometric condition on $X$ to this effect has been
missing (but see~\cite[Theorem~9 and~(5.14)]{Dre}).
Here it is now shown that, in the case of integer valued metrics,
a weak form of the fellow traveler property for discrete geodesics
serves the purpose and even ensures that $\E{X}$ is proper, provided $X$ is;
see Theorem~\ref{Thm:intro-ex} below.
The polyhedral structure of $\E{X}$ and the possible isometry types of cells
are described in detail and no prior knowledge of the constructions
in~\cite{Isb, Dre} is assumed.
With regard to applications in geometric group theory,
a general fixed point theorem for injective metric spaces is pointed out
(Proposition~\ref{Prop:intro-fix}), which closely parallels the well-known
result for $\text{\rm CAT}(0)$ spaces.
Furthermore, it has been known for some time that if the metric space $X$ is
$\delta$-hyperbolic, then so is $\E{X}$, and this implies that $\E{X}$ is within
finite distance of ${\rm e}(X)$, provided $X$ is geodesic or discretely geodesic
(Proposition~\ref{Prop:intro-hyp}). Despite this fact, the injective
hull of the hyperbolic plane has infinite topological dimension. Yet, it is
shown that for a word hyperbolic group~$\Gamma$ the injective hull
is a finite dimensional polyhedral complex, on which $\Gamma$ acts properly
and cocompactly. This is part of a more general result,
Theorem~\ref{Thm:intro-groups}, which provides a new source for geometric
models of finitely generated groups and universal spaces for proper actions.
To state these results in detail, we introduce some
general notation used throughout the paper. Let $X$ be a
metric space with metric $d$. For $x,y \in X$,
\[
\bw(x,y) := \{ v \in X : d(x,v) + d(v,y) = d(x,y) \}
\]
denotes the {\em interval} between $x$ and $y$ (compare~\cite{Mul}),
and for $x,v \in X$,
\begin{equation} \label{eq:c}
\co(x,v) := \{ y \in X : v \in \bw(x,y) \}
\end{equation}
is the {\em cone} determined by the directed pair $(x,v)$.
Given a reference point $z \in X$, $d_z \colon X \to \mathbb{R}$ denotes the
distance function to $z$, thus $d_z(x) = d(x,z)$.
The metric space $X$ is called {\em discretely geodesic} if the metric
is integer valued and for every pair of points $x,y \in X$ there
exists an isometric embedding $\gamma \colon \{0,1,\dots,d(x,y)\} \to X$
such that $\gamma(0) = x$ and $\gamma(d(x,y)) = y$.
We say that a discretely geodesic metric space $X$ has
{\em $\beta$-stable intervals}, for some constant $\beta \ge 0$,
if for every triple of points $x,y,y' \in X$ with $d(y,y') = 1$ we have
\begin{equation} \label{eq:stable}
d_\H(\bw(x,y),\bw(x,y')) \le \beta,
\end{equation}
where $d_\H$ denotes the Hausdorff distance in $X$. To verify this condition
it suffices, by symmetry, to show that for every $v \in \bw(x,y)$ there exists
a $v' \in \bw(x,y')$ with $d(v,v') \le \beta$; this means that
some (but not necessarily every) discrete geodesic from~$x$ to~$y'$
passes close to $v$.
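For a finite graph with its path metric, the condition \eqref{eq:stable} can be verified by brute force. The following Python sketch (the graph is a toy assumption made only for illustration) computes all intervals and the smallest constant $\beta$ for which the intervals are $\beta$-stable.
\begin{verbatim}
# Illustrative sketch (toy graph): computing intervals bw(x, y) in a finite
# graph metric and the smallest beta in the stability condition (eq:stable),
# via the Hausdorff distance between bw(x, y) and bw(x, y') when d(y, y') = 1.
from collections import deque
from itertools import combinations

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # a small example graph
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def dist(x, y):                       # graph metric via breadth-first search
    seen, queue = {x: 0}, deque([x])
    while queue:
        v = queue.popleft()
        if v == y:
            return seen[v]
        for w in adj[v]:
            if w not in seen:
                seen[w] = seen[v] + 1
                queue.append(w)
    raise ValueError("graph not connected")

def interval(x, y):                   # bw(x,y) = {v : d(x,v) + d(v,y) = d(x,y)}
    return {v for v in adj if dist(x, v) + dist(v, y) == dist(x, y)}

def hausdorff(A, B):
    d = lambda v, S: min(dist(v, s) for s in S)
    return max(max(d(a, B) for a in A), max(d(b, A) for b in B))

beta = 0
for x in adj:
    for y, yp in combinations(adj, 2):
        if dist(y, yp) == 1:
            beta = max(beta, hausdorff(interval(x, y), interval(x, yp)))
print("smallest admissible beta for this graph:", beta)
\end{verbatim}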
We have the following result.
\begin{Thm} \label{Thm:intro-ex}
Let $X$ be a discretely geodesic metric space such that all bounded
subsets are finite. If $X$ has $\beta$-stable intervals, then the
injective hull $\E{X}$ is proper (that is, bounded closed subsets are
compact) and has the structure of a locally finite polyhedral
complex with only finitely many isometry types of $n$-cells, isometric
to injective polytopes in $l^n_\infty$, for every $n \ge 1$.
\end{Thm}
In this article, polytopes are understood to be convex and compact.
The polyhedral (or rather polytopal) structure
of $\E{X}$ is discussed in detail, under some weaker but technical
assumption, in Section~\ref{Sect:poly}.
Then, in Section~\ref{Sect:cones}, this condition is approached through
the uniform stability of intervals in $X$. The proof of the theorem is
completed in Section~\ref{Sect:proofs}, where also the other results
stated in this introduction are proved.
Inequality~\eqref{eq:stable} is used through the following two
consequences. First, for every fixed vertex $v \in X$ there are only
finitely many distinct cones $\co(x,v)$ as $x$ ranges over $X$;
the argument goes back to Cannon~\cite{Can}.
Second, for all $x,y,z \in X$ there exists
$v \in \bw(x,y)$ such that $d_z(v) \le \beta(d_z(x) + d_z(y) - d(x,y))$
(this implies in turn that $X$ has $2\beta$-stable intervals).
We derive upper bounds on the local dimension and
complexity of $\E{X}$ in terms of the distance to a point ${\rm e}(z)$ in the
image of the embedding ${\rm e} \colon X \to \E{X}$,
the cardinality of balls centered at $z$, and the constant $\beta$.
In particular, if there is a uniform bound on the number of points at
distance one from any point in $X$, then every subcomplex of $\E{X}$
contained in a tubular neighborhood of ${\rm e}(X)$ is finite dimensional.
Note that this applies to finitely generated groups, discussed further below.
Injective (or hyperconvex) metric spaces have some remarkable
fixed point properties.
For instance, every $1$-Lipschitz map $L \colon X \to X$
of a bounded injective metric space $X$ has a non-empty fixed point set
which is itself injective and thus contractible;
compare~\cite[Theorem~6.1]{EspK} and the references there.
Contrary to what one might expect, the boundedness condition cannot be
relaxed to the assumption that (the semigroup generated by) the $1$-Lipschitz
map $L$ has bounded orbits. Indeed Prus gave an example
of an isometric embedding $L$ of the Banach space $l_\infty$ into itself
such that $L$ has bounded orbits but no fixed point;
see~\cite[Remark~6.3]{EspK}. However, this map $L$ is not surjective and
thus the example still leaves room for the following proposition.
In the process of finishing this paper I became aware of the
reference~\cite{Dre3}, where the result is shown for continuous isometric
actions of compact groups, using the Haar integral.
\begin{Prop} \label{Prop:intro-fix}
Let $X$ be an injective metric space. If $\Lambda$ is a subgroup
of the isometry group of $X$ with bounded orbits,
then the fixed point set of $\Lambda$ is non-empty and furthermore
injective, hence contractible.
\end{Prop}
This should be compared with the analogous result for
$\text{\rm CAT}(0)$ spaces (see~\cite[Corollary~II.2.8]{BriH}).
A metric space $X$ is called {\em $\delta$-hyperbolic},
for some constant $\delta \ge 0$, if
\begin{equation} \label{eq:hyp}
d(w,x) + d(y,z) \le \max\{d(w,y) + d(x,z), d(x,y) + d(w,z)\} + \delta
\end{equation}
for all quadruples of points $w,x,y,z \in X$.
It is easily seen that every discretely
geodesic $\delta$-hyperbolic metric space has $(\delta+1)$-stable intervals.
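For a finite metric space, the smallest constant $\delta$ in \eqref{eq:hyp} can be computed by scanning all quadruples; the short Python sketch below does this for a toy distance matrix (an assumption made only for illustration).
\begin{verbatim}
# Illustrative sketch: smallest delta for which a finite metric space
# satisfies the four-point condition (eq:hyp), by brute force over quadruples.
from itertools import product

D = [[0, 3, 4, 5],
     [3, 0, 5, 4],
     [4, 5, 0, 3],
     [5, 4, 3, 0]]                  # toy distance matrix (a metric on 4 points)
n = len(D)

delta = 0.0
for w, x, y, z in product(range(n), repeat=4):
    lhs = D[w][x] + D[y][z]
    rhs = max(D[w][y] + D[x][z], D[x][y] + D[w][z])
    delta = max(delta, lhs - rhs)
print("four-point condition holds with delta =", delta)
\end{verbatim}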
The following proposition provides a most efficient way to embed a general
$\delta$-hyperbolic metric space isometrically into a geodesic
(and contractible) $\delta$-hyperbolic metric space
(see~\cite[Proposition~6.4.D]{Gro}, \cite[Theorem~4.1]{BonS} for some
results of similar type).
\begin{Prop} \label{Prop:intro-hyp}
Let $X$ be a $\delta$-hyperbolic metric space. Then $\E{X}$ is
$\delta$-hyperbolic as well. If, in addition, $X$ is geodesic or discretely
geodesic, then $\E{X}$ is within distance $\delta$ or $\delta + \frac12$,
respectively, of the image of the embedding ${\rm e} \colon X \to \E{X}$.
\end{Prop}
The first part of this result is mentioned without proof
in~\cite[Section~4.4]{DreMT}, and the argument is given
in~\cite[(4.1)]{Dre} for the case $\delta = 0$.
The second part (with a primarily different proof and a worse bound)
served as the starting point for the present investigation and for the
thesis~\cite{Moe}, where also a weaker version of
Theorem~\ref{Thm:intro-groups} below was shown.
Now let $\Gamma$ be a group with a finite generating system $S$, equipped with
the word metric $d_S$ with respect to the alphabet $S \cup S^{-1}$.
The isometric action of $\Gamma$ by left multiplication on $\Gamma_S = (\Gamma,d_S)$
extends canonically to an isometric action on the injective hull $\E{\Gamma_S}$.
If $\Gamma_S$ has $\beta$-stable intervals
(see Remark~\ref{Rem:fft-property} for some sufficient conditions),
Theorem~\ref{Thm:intro-ex} yields that $\E{\Gamma_S}$ is a locally
finite polyhedral complex with finitely many isometry types of $n$-cells
for every $n$. By virtue of the two propositions above, we obtain the
following further information.
\begin{Thm} \label{Thm:intro-groups}
Let $\Gamma_S = (\Gamma,d_S)$ be a finitely generated group.
If $\Gamma_S$ has $\beta$-stable intervals, then $\Gamma$ acts properly by
cellular isometries on the complex $\E{\Gamma_S}$, and the first barycentric
subdivision $\E{\Gamma_S}^1$ of $\E{\Gamma_S}$ is a model for the classifying
space $\underbar{\rm E}\Gamma$ for proper actions.
If $\Gamma_S$ is $\delta$-hyperbolic, then $\E{\Gamma_S}$ is finite dimensional
and the action is cocompact in addition.
\end{Thm}
If $\Gamma_S$ is $\delta$-hyperbolic, $\E{\Gamma_S}^1$ has only finitely many
distinct $\Gamma$-orbits of cells and thus constitutes a (so-called) finite
model for $\underbar{\rm E}\Gamma$.
A corresponding result holds for the Rips complex $\mathscr{P}_D(\Gamma_S)$,
provided the maximal simplex diameter $D$ is chosen sufficiently large
(see~\cite{MeiS}). It should be noted, by contrast, that the injective hull
construction requires no such extra parameter and that the entire structure
of $\E{\Gamma_S}^1$ is canonically determined once the finite generating system
$S$ is fixed. In some simple examples, the dimension of $\E{\Gamma_S}$ is
approximately one half the dimension of the smallest contractible Rips
complex. It should further be noted that $\E{\Gamma_S}$ comes with some
features of non-positive curvature.
In fact, for every injective metric space $X$, there is a map
$\gamma \colon X \times X \times [0,1] \to X$ such that
$\gamma_{xy} := \gamma(x,y,\cdot)$ is a constant speed geodesic
from $x$ to $y$ and
\begin{equation} \label{eq:gamxy}
d(\gamma_{xy}(t),\gamma_{x'y'}(t)) \le (1-t)d(x,x') + td(y,y'),
\end{equation}
for all $x,y,x',y' \in X$ and $t \in [0,1]$. Thus $X$ satisfies Busemann's
convexity condition for a suitable geodesic bicombing. In addition,
$\gamma$ can be chosen to be equivariant with respect to the full isometry
group of $X$ (see~Proposition~\ref{Prop:bicombing}).
Furthermore, as a consequence of~\eqref{eq:gamxy}
and a result from~\cite{Wen}, injective metric spaces
satisfy isoperimetric filling inequalities of Euclidean type
for integral cycles in any dimension, like $\text{\rm CAT}(0)$ spaces.
These features relate Theorem~\ref{Thm:intro-groups} to the
long-standing question in geometric group theory whether every word
hyperbolic group acts properly and cocompactly by isometries on a
$\text{\rm CAT}(0)$ or even $\text{\rm CAT}(-1)$ space. In the first instance, the result
shows that the answer is positive if the $\text{\rm CAT}(0)$ condition is relaxed
to the weaker convexity property of~\eqref{eq:gamxy}. The metric of
$\E{\Gamma_S}$ is piecewise of the type of $l^n_\infty$ and is therefore
not $\text{\rm CAT}(0)$, unless $\E{\Gamma_S}$ is one-dimensional (a tree). It is
natural to ask whether $\E{\Gamma_S}$ can be equipped with an honest
equivariant $\text{\rm CAT}(0)$ metric. The answer turns out to be
positive, for instance, if $\E{\Gamma_S}$ has dimension two.
This and further results on the structure of injective
hulls of finitely generated groups will be discussed in a subsequent
article.
\section{Injective metric spaces} \label{Sect:inj}
We start by discussing some basic examples, properties, and
characterizations of injective metric spaces. This section is largely
expository.
The set of all $1$-Lipschitz maps from a metric space
$B$ into another metric space $X$ will be denoted by $\operatorname{Lip}_1(B,X)$.
Recall that $X$ is \emph{injective} if for every metric space $B$,
every $A \subset B$, and every $f \in \operatorname{Lip}_1(A,X)$ there exists
$\overline f \in \operatorname{Lip}_1(B,X)$ such that $\overline f|_A = f$.
(Note that for $A = \emptyset \ne B$ this says that $X \ne \emptyset$.)
The most basic examples of injective metric spaces are the real line $\mathbb{R}$
and all non-empty closed subintervals, with the usual metric.
For instance, if $f \in \operatorname{Lip}_1(A,\mathbb{R})$, where $A \ne \emptyset$ is a subset
of a metric space $B$, then
\begin{equation} \label{eq:least-ext}
\overline f(b) := \sup_{a \in A}(f(a) - d(a,b))
\end{equation}
defines the least possible extension $\overline f \in \operatorname{Lip}_1(B,\mathbb{R})$ of $f$.
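As a toy illustration of \eqref{eq:least-ext} (the point set, the subset $A$ and the function values below are assumptions chosen for the example), the following Python sketch evaluates the least $1$-Lipschitz extension on a few points of the real line.
\begin{verbatim}
# Minimal sketch of the least 1-Lipschitz extension (eq:least-ext) on a toy
# finite configuration; the points and function values are assumptions made
# for illustration only.
B = [0.0, 1.0, 2.5, 4.0]                 # ambient space: points on the real line
A = [0.0, 4.0]                           # subset on which f is prescribed
d = lambda a, b: abs(a - b)              # metric of B
f = {0.0: 1.0, 4.0: 2.0}                 # a 1-Lipschitz function on A

def fbar(b):                             # least 1-Lipschitz extension of f to B
    return max(f[a] - d(a, b) for a in A)

for b in B:
    print(b, fbar(b))
\end{verbatim}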
It follows easily from the definition that every injective metric space
$X$ is complete and geodesic. Indeed, if $\bar X$ denotes the completion,
then the identity map on $X$ extends to a $1$-Lipschitz retraction
$\pi \colon \bar X \to X$ which turns out to be an isometry
as $X$ is dense in $\bar X$. Furthermore, given $x,y \in X$, the map
that sends $0 \in \mathbb{R}$ to $x$ and $l := d(x,y)$ to $y$ extends to a
$1$-Lipschitz map $\gamma \colon [0,l] \to X$ which, due to the triangle
inequality, is in fact an isometric embedding.
Another basic property is that for every triple
of points $x,y,z$ in an injective metric space $X$ there is a
(not necessarily unique) median point $v \in X$, that is, a point in
$\bw(x,y) \cap \bw(y,z) \cap \bw(z,x)$.
This is shown by extending the isometric inclusion $\{x,y,z\} \to X$
to a $1$-Lipschitz map from $Q := (\{x,y,z,u\},\bar d)$ to $X$, where
the metric $\bar d$ is determined by the requirement that it agrees
with~$d$ on $\{x,y,z\}$ and that the additional point $u$ is a median point
of $x,y,z$ in $Q$ (thus $\bar d(u,z) = (x \mid y)_z$, and so on,
see~\eqref{eq:Gromov-prd}). As above, this $1$-Lipschitz extension
is in fact an isometric embedding, and the image of $u$ is the desired
median point $v \in X$.
One may choose geodesic segments $[v,x],[v,y],[v,z]$ to produce a
geodesic tripod spanned by $x,y,z$ (thus $[v,x] \cup [v,y]$ is a geodesic
segment from $x$ to $y$, and so on).
Checking the existence of median points is a simple and useful first test
for injectivity.
Furthermore, it follows that if $X$ is an injective metric space with
the property that every pair of points $x,y$ is connected by a unique
geodesic segment $[x,y]$, then every geodesic triangle in $X$ is a tripod
and so $X$ is an $\mathbb{R}$-tree.
The converse is a well-known fact:
\begin{Prop} \label{Prop:tree-inj}
Every complete $\mathbb{R}$-tree $X$ is injective.
\end{Prop}
Most proofs in the literature proceed via pointwise extensions and transfinite
induction (compare Proposition~\ref{Prop:hyperconvex} below).
The following direct argument, extracted from a more
general construction in~\cite{Lan}, adapts~\eqref{eq:least-ext} to trees.
\begin{proof}
Fix a base point $z \in X$.
Let $f \in \operatorname{Lip}_1(A,X)$, where $\emptyset \ne A \subset B$.
For every pair $(a,b) \in A \times B$, define
\[
\rho(a,b) := \max\{ 0, d_z(f(a)) - d(a,b) \}
\]
and let $x(a,b)$ be the point on the segment $[z,f(a)]$ at
distance $\rho(a,b)$ from $z$. For two such pairs $(a,b),(a',b')$,
consider the tripod spanned by $z,f(a),f(a')$.
Depending on the positions of $x(a,b)$ and $x(a',b')$ on the tripod,
$d(x(a,b),x(a',b'))$ equals either $|\rho(a,b) - \rho(a',b')|$ or
$d(f(a),f(a')) - d(a,b) - d(a',b')$.
Since $d(f(a),f(a')) \le d(a,a')$, it follows that
\begin{equation} \label{eq:dvv}
d(x(a,b),x(a',b')) \le \max\{ |\rho(a,b) - \rho(a',b')|, d(b,b') \}.
\end{equation}
To define $\overline f \colon B \to X$ at the point $b \in B$,
choose a sequence $(a_i)$ in $A$ such that
\[
\lim_{i \to \infty} \rho(a_i,b) = \bar\rho(b) := \sup_{a \in A} \rho(a,b).
\]
The corresponding sequence $(x(a_i,b))$ in $X$ is Cauchy by~\eqref{eq:dvv},
and $\overline f(b)$ is defined as its limit, which is independent
of the choice of $(a_i)$. Note that $\bar\rho \colon B \to \mathbb{R}$
is the least non-negative $1$-Lipschitz extension of $d_z(f(\cdot))$
(compare~\eqref{eq:least-ext}).
It follows from~\eqref{eq:dvv} that $\overline f \in \operatorname{Lip}_1(B,X)$.
To check that $\overline f$ extends $f$, let $b \in A$, and let $(a_i)$ be a
sequence in $A$ such that $\rho(a_i,b) \to \bar\rho(b)$.
We have $\rho(b,b) = d_z(f(b)) = \bar\rho(b)$ and $x(b,b) = f(b)$,
so $d(x(a_i,b),f(b)) \le |\rho(a_i,b) - \bar\rho(b)|$ by~\eqref{eq:dvv}.
Since $x(a_i,b) \to \overline f(b)$, this gives $\overline f(b) = f(b)$.
\end{proof}
The $l_\infty$ product of a non-empty family $\{(X_i,d_i,z_i)\}_{i\in I}$
of pointed metric spaces is defined as the set
of all $x = (x_i)_{i \in I}$ with $x_i \in X_i$ and
$\sup_{i \in I} d_i(x_i,z_i) < \infty$,
endowed with the metric $(x,x') \mapsto \sup_{i\in I} d_i(x_i,x'_i)$.
Here $I \ne \emptyset$ is an arbitrary index set; if $I$ is finite
or the diameters of the $X_i$ are uniformly bounded, base points may be
disregarded. It is easy to see that if each $(X_i,d_i)$ is injective,
then so is the $l_\infty$ product.
In case $(X_i,z_i) = (\mathbb{R}, 0)$ for all $i \in I$, the corresponding
$l_\infty$ product is the Banach space $l_\infty(I)$, which is thus an
injective metric space. Similarly, $L_\infty(Y,\mu)$ is injective for
every measure space $(Y,\mu)$.
Next we recall some well-known characterizations of injective
metric spaces. A metric space $X$ is called an
\emph{absolute $1$-Lipschitz retract}
if, whenever $i \colon X \to Y$ is an isometric embedding into
another metric space $Y$, there exists a $1$-Lipschitz retraction
of $Y$ onto $i(X)$. If $X$ is injective and $i \colon X \to Y$
is an isometric embedding, then $i(X)$ is injective
and thus the identity map on $i(X)$ extends to
a $1$-Lipschitz retraction $\pi \colon Y \to i(X)$.
On the other hand, every metric space $X$ embeds isometrically
into $l_\infty(X)$ via the map $k_z \colon x \mapsto d_x - d_z$,
for any base point $z \in X$. Hence, if $k_z(X)$ is a $1$-Lipschitz retract
in $l_\infty(X)$, then $X$ is injective since $l_\infty(X)$ is.
This shows:
\begin{Prop} \label{Prop:alr}
A metric space $X$ is injective if and only if it is an absolute
$1$-Lipschitz retract.
\end{Prop}
At this point, we note that every injective metric space $X$
is contractible. If $\pi \colon l_\infty(X) \to X' := k_z(X)$ is a
$1$-Lipschitz retraction onto the image of the Kuratowski embedding, then
$h(x',t) := \pi(tx')$ defines a homotopy
$h \colon X' \times [0,1] \to X'$ from the constant map with value
$0 = k_z(z)$ to the identity map. A map $\gamma$ as in~\eqref{eq:gamxy}
can be obtained in a similar way. See also
Proposition~\ref{Prop:bicombing} and~\cite[Theorem~1.1]{Isb}.
Another characterization of injective metric spaces relies on pointwise
extensions of $1$-Lipschitz maps. A metric space $X$ is said to be
\emph{hyperconvex} if every family $((x_i,r_i))_{i \in I}$ in $X \times \mathbb{R}$
with the property that $r_i + r_j \ge d(x_i,x_j)$ for all pairs of
indices $i,j \in I$ satisfies $\bigcap_{i \in I}B(x_i,r_i) \ne \emptyset$.
(We adopt the convention that the intersection equals $X$ if $I = \emptyset$,
so that hyperconvex spaces are non-empty by definition.)
This terminology was introduced
by Aronszajn and Panitchpakdi in~\cite{AroP}, who also observed
Proposition~\ref{Prop:alr} and the next result.
\begin{Prop} \label{Prop:hyperconvex}
A metric space $X$ is injective if and only if it is hyperconvex.
\end{Prop}
For the proof, one notes first that if $f \in \operatorname{Lip}_1(A,X)$, $\emptyset \ne A \subset B$,
and $b \in B \setminus A$, then $d(a,b) + d(a',b) \ge d(a,a') \ge d(f(a),f(a'))$
for all $a,a' \in A$. Hence, if $X$ is hyperconvex, then
$\bigcap_{a \in A}B(f(a),d(a,b))$ is non-empty,
and one obtains an extension $f_b \in \operatorname{Lip}_1(A \cup \{b\},X)$ of $f$
by declaring $f_b(b)$ to be any point in this intersection.
By iterating this process transfinitely, in general, one
infers that $X$ is injective. Conversely, if $X$ is injective,
a similar argument as for the existence of median points shows that
$X$ is hyperconvex. A useful direct consequence of this characterization
is that the intersection of a family of closed balls in an injective
metric space is injective, whenever the intersection is non-empty.
Some key results on hyperconvex metric spaces were shown by Baillon~\cite{Bai}.
Proposition~\ref{Prop:hyperconvex} will only be used in
Section~\ref{Sect:proofs} for the proof of Proposition~\ref{Prop:intro-fix}.
A concept very close to hyperconvexity is the
{\em binary intersection property}, obtained by replacing
the inequality $r_i + r_j \ge d(x_i,x_j)$ in the above definition by
the condition $B(x_i,r_i) \cap B(x_j,r_j) \ne \emptyset$. The two concepts
agree for geodesic metric spaces. Nachbin~\cite[Theorem~1]{Nac} showed
that a normed real vector space $X$ has the binary intersection property if
and only if $X$ is {\em linearly injective}, that is, for every real
normed space $B$, every linear subspace $A \subset B$,
and every bounded linear operator $f \colon A \to X$ there exists a
linear extension $\overline f \colon B \to X$ with norm $\|\overline f\| = \|f\|$.
(The Hahn--Banach Theorem thus asserts that $\mathbb{R}$ is linearly injective.)
Hence, a real normed space $X$ is injective as a metric space if and only
if $X$ is injective in the linear category, and no ambiguity arises.
By~\cite[Theorem~3]{Nac}, an $n$-dimensional normed space $X$ is injective
if and only if $X$ is linearly isometric to
$l_\infty^n$ or, in other words, balls in $X$ are parallelotopes.
The final classification result, usually attributed to
Nachbin--Goodner--Kelley, asserts that a real normed space is injective
if and only if it is isometrically isomorphic to the Banach space $C(K)$ of
continuous real valued functions on some extremally disconnected compact
Hausdorff space $K$, endowed with the supremum norm. See~\cite{Kel}.
It is clear that linear subspaces of injective normed spaces need
not be injective. A familiar example is the plane
$H = \{x_1 + x_2 + x_3 = 0\}$ in $l_\infty^3$, whose norm ball is hexagonal.
One may also check directly that the triple of points
$(1,1,-2),(1,-2,1),(-2,1,1)$ has no median point in $H$.
We conclude this section by showing that certain subsets of
$l_\infty^n$ (or $l_\infty(I)$) defined by linear inequalities involving
at most two variables are injective. This will be employed
to prove that the polyhedral cells of $\E{X}$ are themselves injective
(compare Theorem~\ref{Thm:intro-ex}); however, this fact will not be used
further in this paper.
\begin{Prop} \label{Prop:p-inj}
Let $I \ne \emptyset$ be any index set. Suppose that $Q$ is a non-empty subset
of $l_\infty(I)$ given by an arbitrary system of inequalities
of the form $\sigma x_i \le C$ or $\sigma x_i + \tau x_j \le C$
with $|\sigma|,|\tau| = 1$ and $C \in \mathbb{R}$. Then $Q$ is injective.
\end{Prop}
We use a similar explicit construction as for $\mathbb{R}$-trees.
Some further results on injective polyhedral sets in $l_\infty^n$ can be found
in~\cite[Section~1.8.2]{Moe}.
A good characterization of such sets seems to be missing.
\begin{proof}
Assume that $0 \in Q$, so that all constants on the right sides
of the inequalities describing $Q$ are non-negative.
For $i \in I$, denote by $R_i$ the reflection of $l_\infty(I)$ that
interchanges $x_i$ with $-x_i$.
Let $B$ be a metric space and $\emptyset \ne A \subset B$.
We show that there exists an extension operator
$\phi \colon \operatorname{Lip}_1(A,l_\infty(I)) \to \operatorname{Lip}_1(B,l_\infty(I))$ such that
\begin{equation} \label{eq:l1}
\phi(R_i \circ f) = R_i \circ \phi(f)
\end{equation}
for every $i$, and such that the components of $\phi(f)$ satisfy
\begin{equation} \label{eq:l2}
\phi(f)_i + \phi(f)_j \le C
\end{equation}
whenever $f_i + f_j \le C$ for some pair of possibly equal indices $i,j$
and some constant $C \ge 0$. This clearly gives the result.
First, for a real valued function
$f \in \operatorname{Lip}_1(A,\mathbb{R})$, we combine the smallest and largest $1$-Lipschitz
extensions and define $\overline f \colon B \to \mathbb{R}$ by
\[
\overline f(b) := \sup \Bigl\{ 0,\,\sup_{a \in A}(f(a) - d(a,b)) \Bigr\} +
\inf \Bigl\{ 0,\,\inf_{a' \in A}(f(a') + d(a',b)) \Bigr\}.
\]
Note that at most one of the two summands is nonzero since
$f(a) - d(a,b) \le f(a') + d(a,a') - d(a,b) \le f(a') + d(a',b)$.
It is not difficult to check that $\overline f$ is a $1$-Lipschitz extension
of $f$ and that $\overline{R \circ f} = R \circ \overline f$ for the reflection
$R \colon x \mapsto -x$ of $\mathbb{R}$.
(The proof of Proposition~\ref{Prop:tree-inj} yields precisely
this extension $\overline f$ in the case $(X,z) = (\mathbb{R},0)$.)
Now, for $f \in \operatorname{Lip}_1(A,l_\infty(I))$,
define $\phi(f)$ such that $\phi(f)_i = \overline{f_i}$ for every~$i$.
Clearly $\phi(f) \in \operatorname{Lip}_1(B,l_\infty(I))$, and~\eqref{eq:l1} holds.
As for~\eqref{eq:l2}, suppose that $f_i + f_j \le C$ for some indices $i,j$
and some constant $C \ge 0$. Let $b \in B$, and assume that
$\phi(f)_i(b) \ge \phi(f)_j(b)$. If $\phi(f)_j(b) > 0$, then
\begin{align*}
\phi(f)_i(b) + \phi(f)_j(b)
&= \sup_{a,a' \in A} (f_i(a) + f_j(a') - d(a,b) - d(a',b)) \\
&\le \sup_{a,a' \in A} (f_i(a) + f_j(a') - d(a,a')) \\
&\le \sup_{a \in A} (f_i(a) + f_j(a)) \le C.
\end{align*}
If $\phi(f)_i(b) > 0 \ge \phi(f)_j(b)$, then
\begin{align*}
\phi(f)_i(b) + \phi(f)_j(b)
&\le \sup_{a \in A} (f_i(a) - d(a,b)) + \inf_{a' \in A} (f_j(a') + d(a',b)) \\
&\le \sup_{a \in A} (f_i(a) + f_j(a)) \le C.
\end{align*}
Finally, if $\phi(f)_i(b) \le 0$, then
$\phi(f)_i(b) + \phi(f)_j(b) \le 0 \le C$.
\end{proof}
\section{Injective hulls} \label{Sect:hull}
We now review Isbell's~\cite{Isb} construction $X \mapsto \E{X}$ in detail.
Our proof of the injectivity of $\E{X}$ differs from Isbell's in that
it does not appeal to Zorn's Lemma or the like. Instead we employ an
observation by Dress, restated in Proposition~\ref{Prop:retr} below,
which will be of further use.
Let $X$ be a (non-empty) metric space. Denote by $\mathbb{R}^X$ the
vector space of all real valued functions on $X$,
and define
\[
\D{X} := \{ f \in \mathbb{R}^X :
\text{$f(x) + f(y) \ge d(x,y)$ for all $x,y \in X$}\}
\]
(compare~\cite[p.~35]{Nac}). By the triangle inequality, the distance
function $d_z$ belongs to $\D{X}$ for each $z \in X$, and clearly
all elements of $\D{X}$ are non-negative.
Isbell called a function $f \in \mathbb{R}^X$ \emph{extremal}
if it is a minimal element of the partially ordered set $(\D{X},\le)$,
where $g \le f$ means $g(x) \le f(x)$ for all $x \in X$ as usual. Thus
\[
\E{X} := \{ f \in \D{X} :
\text{if $g \in \D{X}$ and $g \le f$, then $g = f$}\}
\]
is the set of extremal functions on $X$.
In case $X$ is compact, $f \in \D{X}$ is extremal if and only if
for every $x \in X$ there exists $y \in X$ such that
$f(x) + f(y) = d(x,y)$. In general, $f \in \mathbb{R}^X$ is extremal if
and only if
\begin{equation} \label{eq:extr-sup}
f(x) = \sup_{y \in X}(d(x,y) - f(y))
\end{equation}
for all $x \in X$. Each $d_z$ is extremal.
Applying~\eqref{eq:extr-sup} twice one obtains
\[
f(x) \le \sup_{y \in X}(d(x,x') + d(x',y) - f(y)) = d(x,x') + f(x')
\]
for all $x,x' \in X$, so every $f \in \E{X}$ is $1$-Lipschitz.
Now consider the set
\[
\Dl{X} := \D{X} \cap \operatorname{Lip}_1(X,\mathbb{R}),
\]
equipped with the metric
\[
(f,g) \mapsto \|f - g\|_\infty = \sup_{x \in X} |f(x) - g(x)|.
\]
To see that the supremum is finite, note that
a function $f \in \mathbb{R}^X$ belongs to $\Dl{X}$ if and only if
$|f(x) - d(x,y)| \le f(y)$ for all $x,y \in X$ or, equivalently,
\begin{equation} \label{eq:fdf}
\|f - d_y\|_\infty = f(y)
\end{equation}
for all $y \in X$. Hence, $\|f - g\|_\infty \le \inf(f+g)$.
The set $\E{X}$ is contained in $\Dl{X}$ and
is equipped with the induced metric. The map
\[
{\rm e} \colon X \to \E{X}, \quad {\rm e}(y) = d_y,
\]
is a canonical isometric embedding of $X$ into $\E{X}$, as
$\|d_y - d_z\|_\infty = d(y,z)$. Equation~\eqref{eq:fdf} will be used
frequently. It shows that a function $f \in \Dl{X}$ corresponds, after
identification of $X$ with ${\rm e}(X)$, to the restriction of a distance
function to a point in $\Dl{X}$, namely $f$ itself.
To prove that $({\rm e},\E{X})$ is an injective hull of $X$
we shall make use of the following important fact.
\begin{Prop} \label{Prop:retr}
For every metric space $X$ there exists a map
$p \colon \D{X} \to \E{X}$ such that
\begin{enumerate}
\item[\rm (1)]
$p(f) \le f$ for all $f \in \D{X}$, hence $p(f) = f$ for all $f \in \E{X}$;
\item[\rm (2)]
$\|p(f) - p(g)\|_\infty \le \|f - g\|_\infty$ for all $f,g \in \D{X}$.
\end{enumerate}
\end{Prop}
In~(2) the right side is possibly infinite, but it is finite
if $f,g \in \Dl{X}$, thus the restriction of $p$ to $\Dl{X}$ is a
$1$-Lipschitz retraction onto $\E{X}$. The existence of such a map $p$
could be shown by means of Zorn's Lemma, however
Dress~\cite[Section~(1.9)]{Dre} (compare~\cite[Lemma~5.3]{ChrL})
also found the following construction,
which is canonical in the sense that no choices need to be made.
\begin{proof}
For every $f \in \D{X}$, define $f^* \in \mathbb{R}^X$ such that
\[
f^*(x) = \sup_{z \in X}(d(x,z) - f(z))
\]
for all $x \in X$. Clearly $f^* \le f$, and equality holds
if and only if $f \in \E{X}$, by~\eqref{eq:extr-sup}.
For every pair of points $x,y \in X$, the definition of $f^*$ gives
$f^*(x) + f(y) \ge d(x,y)$ and $f(x) + f^*(y) \ge d(x,y)$.
It follows that the function
\[
q(f) := \frac12(f+f^*)
\]
belongs to $\D{X}$, and $q(f) \le f$. For all $f,g \in \D{X}$ and $x \in X$,
\[
g^*(x) = \sup_{z \in X}(d(x,z) - f(z) + f(z) - g(z))
\le f^*(x) + \|f - g\|_\infty,
\]
hence $\|f^* - g^*\|_\infty \le \|f - g\|_\infty$ and thus
\[
\|q(f) - q(g)\|_\infty
\le \frac12 \|f - g\|_\infty + \frac12 \|f^* - g^*\|_\infty
\le \|f - g\|_\infty.
\]
Iterating the map $q$, we obtain for every $f \in \D{X}$ a sequence
of functions $q(f) \ge q^2(f) \ge q^3(f) \ge \ldots$
in $\D{X}$, and we define $p(f)$ as the pointwise limit.
Clearly $p(f) \in \D{X}$, and~(1) and~(2) hold. For all $n \ge 1$,
$p(f) \le q^n(f)$ and hence $p(f)^* \ge q^n(f)^*$, so
\[
0 \le p(f) - p(f)^* \le q^n(f) - q^n(f)^* = 2(q^n(f) - q^{n+1}(f)).
\]
As $n \to \infty$, the last term converges pointwise to $0$,
thus $p(f)^* = p(f)$ and therefore $p(f) \in \E{X}$.
\end{proof}
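For a finite metric space the construction of $p$ can be carried out numerically. The following Python sketch (the $3$-point metric and the stopping tolerance, which replaces the pointwise limit, are assumptions made for illustration) iterates Dress's map $q$ on a function $f \in \D{X}$ until an extremal function is reached.
\begin{verbatim}
# Illustrative sketch (finite toy metric space): Dress's construction of the
# retraction p onto E(X), iterating q(f) = (f + f^*)/2 until the fixed point
# f = f^* is (numerically) reached.  The distance matrix is an assumption.
import numpy as np

D = np.array([[0, 3, 4],                 # a metric on X = {0, 1, 2}
              [3, 0, 5],
              [4, 5, 0]], dtype=float)

def star(f):                             # f^*(x) = max_z (d(x,z) - f(z))
    return (D - f[None, :]).max(axis=1)

def p(f, tol=1e-12):                     # iterate q(f) = (f + f^*)/2
    while np.max(f - star(f)) > tol:
        f = 0.5 * (f + star(f))
    return f

f = np.array([4.0, 4.0, 5.0])            # some f in Delta(X): f(x)+f(y) >= d(x,y)
g = p(f)
print(g, np.allclose(g, star(g)))        # g is extremal: g = g^*
\end{verbatim}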
Now, since $\E{X}$ is a $1$-Lipschitz retract of $\Dl{X}$, to prove
the injectivity of $\E{X}$ it remains to show that $\Dl{X}$ is injective.
A simple component-wise extension procedure applies, like for $l_\infty(I)$.
\begin{Prop} \label{Prop:dl-inj}
For every metric space $X$ the metric spaces $\Dl{X}$ and $\E{X}$
are injective.
\end{Prop}
\begin{proof}
As just mentioned, in view of Proposition~\ref{Prop:retr} it suffices
to prove the result for $\Dl{X}$. We could embed $\Dl{X}$
into $l_\infty(X)$ via $f \mapsto f-h$ for some fixed $h \in \Dl{X}$
and then refer to Proposition~\ref{Prop:p-inj},
but the following argument is slightly more direct.
Let $B$ be a metric space, $\emptyset \ne A \subset B$, and let
$F \colon A \to \Dl{X}$ be a $1$-Lipschitz map, $F \colon a \mapsto f_a$.
For $b \in B$, put
\[
\bar f_b(x) := \inf_{a \in A}(f_a(x) + d(a,b))
\]
for all $x \in X$. Clearly $\bar f_b$ is a non-negative $1$-Lipschitz
function on $X$, as the infimum of a family of such.
For $a,a' \in A$ and $y \in X$, we have
$f_a(y) - f_{a'}(y) \le \|f_a - f_{a'}\|_\infty
= \|F(a) - F(a')\|_\infty \le d(a,a')$ and so
\begin{align*}
\bar f_b(x) + \bar f_b(y)
&\ge \inf_{a,a' \in A} (f_a(x) + f_{a'}(y) + d(a,a')) \\
&\ge \inf_{a \in A}(f_a(x) + f_a(y)) \\
&\ge d(x,y).
\end{align*}
This shows that $\bar f_b \in \Dl{X}$. For $b,b' \in B$ and $x \in X$,
\[
\bar f_b(x) - d(b,b')
= \inf_{a \in A}(f_a(x) + d(a,b) - d(b,b'))
\le \bar f_{b'}(x),
\]
hence $\|\bar f_b - \bar f_{b'}\|_\infty \le d(b,b')$.
If $b \in A$, then $\bar f_b(x) \le f_b(x)$
and $f_b(x) \le f_a(x) + \|f_a - f_b\|_\infty \le f_a(x) + d(a,b)$ for
all $x \in X$ and $a \in A$, so that $\bar f_b = f_b$.
Thus $\overline F \colon b \mapsto \bar f_b$ is a $1$-Lipschitz extension
of $F$.
\end{proof}
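For instance, if $A = \{a\}$ consists of a single point, the formula reduces to
$\bar f_b = f_a + d(a,b)$; these functions obviously lie in $\Dl{X}$, satisfy
$\bar f_a = f_a$, and
$\|\bar f_b - \bar f_{b'}\|_\infty = |d(a,b) - d(a,b')| \le d(b,b')$,
which is the pattern behind the estimates above.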
If $X$ is finite, so that the supremum norm gives a metric on $\D{X}$,
the same argument also shows that $\D{X}$ is injective.
We now state Isbell's result about $\E{X}$. For brevity,
isometric embeddings will just be called {\em embeddings}.
An embedding $i$ of $X$ into some metric space $Y$ is called
{\em essential} if for every metric space $Z$ and every $1$-Lipschitz map
$h \colon Y \to Z$ with the property that $h \circ i \colon X \to Z$ is an
embedding, $h$ is an embedding as well.
If $i \colon X \to Y$ is essential and $Y$ is injective,
then $(i,Y)$ is an {\em injective hull} of $X$;
see~\cite[Section~9]{AdaHS}. In the terminology of~\cite{Dre},
an essential extension $(i,Y)$ of $X$ is called a {\em tight extension}
(and $\D{X}$ and $\E{X}$ are denoted $P_X$ and $T_X$, respectively).
\begin{Thm} \label{Thm:isbell}
For every metric space $X$, the following hold:
\begin{enumerate}
\item[\rm (1)]
If $L \colon \E{X} \to \E{X}$ is a $1$-Lipschitz map that fixes ${\rm e}(X)$
pointwise, then $L$ is the identity on $\E{X}$;
\item[\rm (2)]
$({\rm e},\E{X})$ is an injective hull of $X$;
\item[\rm (3)]
if $(i,Y)$ is another injective hull of $X$, then there exists
a unique isometry $I \colon \E{X} \to Y$ with the property that
$I \circ {\rm e} = i$.
\end{enumerate}
\end{Thm}
\begin{proof}
For~(1) we use~\eqref{eq:fdf}. The map $L$ takes
$f \in \E{X}$ to some $g \in \E{X}$ such that
\[
g(x) = \|g - d_x\|_\infty
= \|L(f) - L(d_x)\|_\infty \le \|f - d_x\|_\infty = f(x)
\]
for all $x \in X$, so $g = f$ by the minimality of $f$.
By Proposition~\ref{Prop:dl-inj}, $\E{X}$ is injective, so for~(2)
it remains to show that ${\rm e}$ is essential.
Suppose $h \colon \E{X} \to Z$ is $1$-Lipschitz and
$h \circ {\rm e} \colon X \to Z$ is an embedding.
Since $\E{X}$ is injective, ${\rm e} \colon X \to \E{X}$ extends to a $1$-Lipschitz
map $\overline{\rm e} \colon Z \to \E{X}$, thus $\overline{\rm e} \circ h \circ {\rm e} = {\rm e}$.
The map $L := \overline{\rm e} \circ h$ is $1$-Lipschitz and fixes ${\rm e}(X)$ pointwise,
so $L$ is the identity on $\E{X}$ by~(1).
Since both $h$ and $\overline{\rm e}$ are $1$-Lipschitz and
$\overline{\rm e} \circ h = \operatorname{id}_{\E{X}}$, the map $h$ preserves distances, so $h$ is in fact an
embedding.
As for~(3), if $(i,Y)$ is another injective hull of $X$, then $i$ extends to
a $1$-Lipschitz map $I \colon \E{X} \to Y$, so $I \circ {\rm e} = i$.
Likewise, there is a $1$-Lipschitz map $\overline{\rm e} \colon Y \to \E{X}$ with
$\overline{\rm e} \circ i = {\rm e}$. Since $i$ is essential, $\overline{\rm e}$ is an embedding;
furthermore, $\overline{\rm e} \circ I \circ {\rm e} = {\rm e}$, thus $\overline{\rm e} \circ I = \operatorname{id}_{\E{X}}$
by~(1). Hence $\overline{\rm e}$ is an isometry onto $\E{X}$, and $I$ is its inverse.
Finally, if $I' \colon \E{X} \to Y$ is any isometry with $I' \circ {\rm e} = i$, then
$\overline{\rm e} \circ I'$ is $1$-Lipschitz and fixes ${\rm e}(X)$ pointwise, so
$\overline{\rm e} \circ I' = \operatorname{id}_{\E{X}}$ by~(1) and thus $I' = I$; this gives the uniqueness of $I$.
\end{proof}
Injective hulls can be characterized in a number of
different ways. We just state the following proposition,
which is independent of the construction described above, except
that the proof relies on the existence of {\em some} injective hull
of $X$. For the details we refer again to the general discussion
in~\cite[Section~9]{AdaHS}.
\begin{Prop}
Let $X$ and $Y$ be metric spaces, and let $i \colon X \to Y$ be an embedding.
Then the following are equivalent:
\begin{enumerate}
\item[\rm (1)]
$(i,Y)$ is an injective hull of $X$, that is, $i$ is essential and $Y$
is injective;
\item[\rm (2)]
$(i,Y)$ is a maximal essential extension of $X$, that is,
$i$ is essential and $Y$ has no proper essential extension;
\item[\rm (3)]
$(i,Y)$ is a minimal injective extension of $X$, that is,
$Y$ is injective and no proper subspace of $Y$
containing $i(X)$ is injective;
\item[\rm (4)]
$(i,Y)$ is a smallest injective extension of $X$, that is,
$Y$ is injective and whenever $j \colon X \to Z$ is an embedding
into some injective metric space $Z$, there is an embedding
$h \colon Y \to Z$ such that $h \circ i = j$.
\end{enumerate}
\end{Prop}
In fact, (3)~is the definition of injective hulls adopted
by Isbell in~\cite{Isb}, and~(2) corresponds to the notion
of {\em tight span} introduced by Dress~\cite{Dre}.
In the introduction we used property~(4),
a concrete instance of which is given in the next result
(compare \cite[Section~(1.11)]{Dre}).
\begin{Prop} \label{Prop:x-subspace}
Let $X$ be a subspace of the metric space $X'$. Then:
\begin{enumerate}
\item[\rm (1)]
There exists an isometric embedding $h \colon \E{X} \to \E{X'}$
such that $h(f)|_X = f$ for every $f \in \E{X}$.
\item[\rm (2)]
For every pair of functions $g \in \E{X}$ and $f' \in \E{X'}$ there
exists $g' \in \E{X'}$ such that $g'|_X = g$ and
$\|g' - f'\|_\infty = \|g - f'|_X\|_\infty$.
\end{enumerate}
\end{Prop}
\begin{proof}
For $f \in \E{X}$, let first $\overline f \colon X' \to \mathbb{R}$
be the $1$-Lipschitz extension defined by
\[
\overline f(y) := \inf_{x \in X}(f(x) + d(x,y)).
\]
Clearly $\overline f \in \D{X'}$. Now put
$h(f) := p(\overline f) \in \E{X'}$,
where $p$ is as in Proposition~\ref{Prop:retr}. We have $h(f)|_X
= p(\overline f)|_X \le \overline f|_X = f$; since $h(f)|_X \in \D{X}$,
this gives $h(f)|_X = f$ by the minimality of $f$.
For $f,g \in \E{X}$ and $y \in X'$,
\[
\overline f(y) - \|f - g\|_\infty
= \inf_{x \in X}(f(x) - \|f - g\|_\infty + d(x,y))
\le \overline g(y),
\]
hence $\|h(f) - h(g)\|_\infty
= \|p(\overline f) - p(\overline g)\|_\infty
\le \|\overline f - \overline g\|_\infty = \|f - g\|_\infty$.
This yields~(1).
As for~(2), we may assume that $\nu := \|g - f'|_X\|_\infty < \infty$; otherwise any
function $g' \in \E{X'}$ with $g'|_X = g$, as provided by~(1), satisfies the assertion.
Define $\tilde g \colon X' \to \mathbb{R}$ such that $\tilde g|_X = g$ and
$\tilde g(y) = f'(y) + \nu$ for all $y \in X' \setminus X$.
Since $\tilde g(x) = g(x) \ge f'(x) - \nu$ for $x \in X$, it follows that
$\tilde g \in \D{X'}$. Now let $g' \in \E{X'}$ be any extremal function
with $g' \le \tilde g$. Similarly as above, $g'|_X \le \tilde g|_X = g$
and thus $g'|_X = g$ by the minimality of $g$.
Furthermore, $g' \le f' + \nu$ and hence
\[
g'(y) \ge \sup_{y' \in X'}(d(y,y') - f'(y') - \nu) = f'(y) - \nu
\]
for all $y \in X'$ by~\eqref{eq:extr-sup}. This gives the result.
\end{proof}
A number of properties of $\E{X}$ are more or less obvious from
the construction. If $X$ is bounded, then $0 \le f \le \operatorname{diam}(X) :=
\sup_{x,y \in X}d(x,y)$ for all $f \in \E{X}$ by~\eqref{eq:extr-sup}, thus
\[
\operatorname{diam}(\E{X}) \le \operatorname{diam}(X).
\]
If $X$ is compact, then so is $\E{X}$, as a consequence of
the Arzel\`a-Ascoli Theorem. If $X$ is finite, $\E{X}$ is a polyhedral
subcomplex of the boundary of the polyhedral set $\D{X} \subset \mathbb{R}^X$.
The faces of $\D{X}$ that belong to $\E{X}$ are exactly those whose
affine hull $H \subset \mathbb{R}^X$ is determined by a system of equations
of the form $f(x_i) + f(x_j) = d(x_i,x_j)$ involving each point
$x_i \in X$ at least once. (Note that these are precisely the bounded faces
of $\D{X}$; compare \cite[Lemma~1]{Dre2}.)
It follows that $\E{X}$ has dimension at most $\frac12 |X|$.
The possible combinatorial types of the injective hulls of metric spaces
up to cardinality~$5$ are depicted in~\cite[Section~(1.16)]{Dre},
and a classification for $6$-point metrics is given in~\cite{StuY}.
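For instance, if $|X| = 3$, say $X = \{a,b,c\}$, then $\E{X}$ is a tripod: the function
$m$ defined by $m(a) = \frac12(d(a,b) + d(a,c) - d(b,c))$ and the analogous formulas at
$b$ and $c$ is easily checked to be extremal, and $\E{X}$ is the union of the three
segments $[d_a,m]$, $[d_b,m]$, $[d_c,m]$, the first of which has length
$\|d_a - m\|_\infty = m(a)$.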
\begin{Rem} \label{Rem:banach}
As mentioned in Section~\ref{Sect:inj}, a normed real vector
space is injective as a metric space if and only if it is linearly injective,
and the only $n$-dimensional example is $l^n_\infty$, up to isometric
isomorphism.
Cohen~\cite{Coh} showed that every real or complex normed space
has an essentially unique injective hull in the respective
linear category. Isbell~\cite{Isb2} and Rao~\cite{Rao} then proved that
for a real normed space~$X$ the linearly injective hull is isometric
to~$\E{X}$; an explicit description of the Banach space structure on~$\E{X}$
can be found in~\cite{CiaD}.
\end{Rem}
We conclude this section with some results involving isometries of $X$.
The isometry group of $X$ will be denoted by $\operatorname{Isom}(X)$.
\begin{Prop} \label{Prop:isometries}
Let $X$ be a metric space. Then:
\begin{enumerate}
\item[\rm (1)]
For every $L \in \operatorname{Isom}(X)$ there is a unique isometry
$\bar{L} \colon \E{X} \to \E{X}$
with the property that $\bar{L} \circ {\rm e} = {\rm e} \circ L$.
One has $\bar L(f) = f \circ L^{-1}$ for all $f \in \E{X}$,
and $(L,f) \mapsto \bar L(f)$ is an action of $\operatorname{Isom}(X)$ on $\E{X}$.
\item[\rm (2)]
For every $L \in \operatorname{Isom}(X)$, the linear isomorphism $f \mapsto f \circ L^{-1}$
of $\mathbb{R}^X$ maps $\D{X}$ onto itself, and the map $p \colon \D{X} \to \E{X}$
constructed in the proof of Proposition~\ref{Prop:retr} has the additional
property that $\bar{L}(p(f)) = p(f \circ L^{-1})$ for all $f \in \D{X}$.
\end{enumerate}
\end{Prop}
\begin{proof}
For every $L \in \operatorname{Isom}(X)$, ${\rm e} \circ L$ is essential and so
$({\rm e} \circ L,\E{X})$ is an injective hull of $X$.
Hence, by part~(3) of Theorem~\ref{Thm:isbell}, there is a unique isometry
$\bar L \colon \E{X} \to \E{X}$ such that $\bar L \circ {\rm e} = {\rm e} \circ L$.
If $f \in \E{X}$ and $x \in X$, then
\begin{align*}
(\bar{L}(f))(x)
&= \|\bar{L}(f) - d_x\|_\infty
= \|f - \bar{L}^{-1}(d_x)\|_\infty
= \|f - d_{L^{-1}(x)}\|_\infty\\
&= f(L^{-1}(x)).
\end{align*}
Obviously $(L,f) \mapsto \bar L(f) = f \circ L^{-1}$
is an action of $\operatorname{Isom}(X)$ on $\E{X}$.
It is straightforward to check that the linear isomorphism
$f \mapsto f \circ L^{-1}$ of $\mathbb{R}^X$
maps $\D{X}$ onto $\D{X}$ and that it commutes with the operators defined
in the proof of Proposition~\ref{Prop:retr}, thus
$f^* \circ L^{-1} = (f \circ L^{-1})^*$,
$q(f) \circ L^{-1} = q(f \circ L^{-1})$, and
\[
\bar L(p(f)) = p(f) \circ L^{-1} = p(f \circ L^{-1})
\]
(compare~\cite[pp.~83--84]{Dre3}).
\end{proof}
As an application of the above result
we show that the weak convexity property of injective
metric spaces stated in~\eqref{eq:gamxy} holds in an equivariant form.
By a \emph{geodesic bicombing} $\gamma$ on a metric space $X$
we mean a map $\gamma \colon X \times X \times [0,1] \to X$
such that, for every pair $(x,y) \in X \times X$,
$\gamma_{xy} := \gamma(x,y,\cdot)$ is a geodesic from $x$ to $y$ with
constant speed, that is, $\gamma_{xy}(0) = x$, $\gamma_{xy}(1) = y$, and
$d(\gamma_{xy}(s),\gamma_{xy}(t)) = (t - s)d(x,y)$ for $0 \le s \le t \le 1$.
\begin{Prop} \label{Prop:bicombing}
Every injective metric space $X$ admits a geodesic bicombing $\gamma$
such that, for all $x,y,x',y' \in X$ and $t \in [0,1]$,
\begin{enumerate}
\item[\rm (1)]
$d(\gamma_{xy}(t),\gamma_{x'y'}(t)) \le (1-t)d(x,x') + td(y,y')$;
\item[\rm (2)]
$\gamma_{xy}(t) = \gamma_{yx}(1-t)$;
\item[\rm (3)]
$L \circ \gamma_{xy} = \gamma_{L(x)L(y)}$ for every isometry $L$ of $X$.
\end{enumerate}
\end{Prop}
\begin{proof}
Since $X$ is injective, the canonical map ${\rm e} \colon x \mapsto d_x$
is an isometry of $X$ onto $\E{X}$. Let $p \colon \D{X} \to \E{X}$ be the map
from the proof of Proposition~\ref{Prop:retr}.
For all $x,y \in X$ and $t \in [0,1]$, we have $(1-t)d_x + td_y \in \Dl{X}$,
and we set
\[
\gamma_{xy}(t) := ({\rm e}^{-1} \circ p)((1-t)d_x + td_y).
\]
Since $p|_{\Dl{X}}$ is $1$-Lipschitz, it follows that
\begin{align*}
d(\gamma_{xy}(t),\gamma_{x'y'}(t))
&\le \|((1-t)d_x + td_y) - ((1-t)d_{x'} + td_{y'})\|_\infty \\
&\le (1-t) \|d_x - d_{x'}\|_\infty + t \|d_y - d_{y'}\|_\infty \\
&= (1-t)d(x,x') + t d(y,y')
\end{align*}
for $x,y,x',y' \in X$ and $t \in [0,1]$. Similarly,
\begin{align*}
d(\gamma_{xy}(s),\gamma_{xy}(t))
&\le \|((1-s)d_x + sd_y) - ((1-t)d_x + td_y)\|_\infty \\
&= (t-s) \|d_x - d_y\|_\infty \\
&= (t-s) d(x,y)
\end{align*}
for $x,y \in X$ and $0 \le s \le t \le 1$, and it is easy to see that
equality must hold since $\gamma_{xy}(0) = x$ and $\gamma_{xy}(1) = y$.
Thus $\gamma$ is a geodesic bicombing on $X$ that satisfies~(1) and~(2).
Now let $L \in \operatorname{Isom}(X)$, and recall that
$\bar{L}(p(f)) = p(f \circ L^{-1})$ for
all $f \in \D{X}$, by Proposition~\ref{Prop:isometries}.
Since $d_v \circ L^{-1} = d_{L(v)}$ for all $v \in X$, we have
\[
\bar{L}(p((1-t)d_x + td_y)) = p((1-t)d_{L(x)} + t d_{L(y)})
\]
for $x,y \in X$ and $t \in [0,1]$. As ${\rm e}^{-1} \circ \bar{L} = L \circ {\rm e}^{-1}$,
this gives~(3).
\end{proof}
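For instance, if $X = \mathbb{R}$ with the usual metric, then for $x,y \in \mathbb{R}$ and $t \in [0,1]$
the only point $m$ with $d_m \le (1-t)d_x + td_y$ is $m = (1-t)x + ty$ (compare the values
of both sides at points $w$ far to the left and far to the right of $\{x,y\}$). Since
$p((1-t)d_x + td_y)$ is an extremal function below $(1-t)d_x + td_y$ and every extremal
function on an injective space is a distance function, the bicombing produced by the
proof is in this case the linear one, $\gamma_{xy}(t) = (1-t)x + ty$.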
\section{Polyhedral structure} \label{Sect:poly}
The main purpose of this section is to show that under suitable discreteness
and local finite dimensionality assumptions on the metric space $X$ and its
injective hull $\E{X}$, respectively,
the latter has the structure of a polyhedral complex, like for finite $X$.
Some results of this type are developed in~\cite[Sections~5 and~6]{Dre}.
Here we give an independent treatment, with some emphasis on the analysis of
the isometry types of cells.
For simplicity, we shall focus on integer valued metrics.
At first, let $X$ be an arbitrary metric space.
For $f \in \mathbb{R}^X$, we denote by $A(f)$ the set of all unordered
pairs $\{x,y\}$ of points in $X$ with the property that
\[
f(x) + f(y) = d(x,y).
\]
We consider the undirected graph $(X,A(f))$ with vertex set $X$,
edge set $A(f)$, and with loops $\{x,x\} \in A(f)$ marking the zeros of $f$.
If $f \in \D{X}$ and $X$ is finite (or compact),
then $f$ is extremal if and only if $(X,A(f))$ has no isolated vertices,
that is, $\bigcup A(f) = X$. For an infinite $X$, this need no longer
be true. We therefore introduce the subset
\[
\Ex{X} := \bigl\{ f \in \D{X} : \textstyle\bigcup A(f) = X \bigr\}
\]
of $\E{X}$, whose structure can be analyzed more directly,
but which is not injective unless it coincides with $\E{X}$.
(In~\cite{Dre}, $\Ex{X}$ is denoted $T_X^0$.)
Proposition~\ref{Prop:e-dense} below will show that $\Ex{X}$ is
dense in $\E{X}$ in case the metric of $X$ is integer valued.
A set $A$ of unordered pairs of points in $X$ is called
an {\em admissible} edge set if there exists a function $f \in \Ex{X}$
with $A(f) = A$, and $\mathscr{A}(X)$ denotes the set of all such admissible
sets. Let $A \in \mathscr{A}(X)$. Note that the graph $(X,A)$ has no isolated
vertices but need not be connected.
We associate with $A$ the affine subspace
\begin{align*}
H(A) &:= \bigl\{ g \in \mathbb{R}^X : A \subset A(g) \bigr\} \\
&\phantom{:}= \bigl\{ g \in \mathbb{R}^X : \text{$g(x) + g(y) = d(x,y)$
for all $\{x,y\} \in A$} \bigr\}
\end{align*}
of $\mathbb{R}^X$, and we define the {\em rank} of $A$
as the dimension of $H(A)$,
\[
\operatorname{rk}(A) := \dim(H(A)) \in \{0,1,2,\dots\} \cup \{\infty\}.
\]
An \emph{$A$-path} in $X$ of length $l \ge 0$ is an $(l+1)$-tuple
$(v_0,\dots,v_l) \in X^{l+1}$ with $\{v_{i-1}, v_i\} \in A$
for $i = 1,\dots,l$. An \emph{$A$-cycle} is an $A$-path $(v_0,\dots,v_l)$
with $v_l = v_0$. Note that $(x,x)$ is an $A$-cycle of length~$1$
if $\{x,x\} \in A$. The {\em $A$-component}
$\vc{x}$ of a point $x \in X$ is the set
\[
\vc{x} := \{y \in X: \text{there exists an $A$-path from $x$ to $y$}\}.
\]
Whenever $g,h \in H(A)$ and
$\{v,v'\}\in A$, we have $g(v) + g(v') = d(v,v') = h(v) + h(v')$
and so $g(v') - h(v') = -(g(v) - h(v))$.
It follows that
\begin{equation} \label{eq:g-h}
g(y) - h(y) = (-1)^{l}(g(x) - h(x))
\end{equation}
whenever there is an $A$-path of length $l$ from $x$ to $y$.
As a consequence, if there exists an $A$-cycle of odd length in $\vc{x}$,
then $g|_{\vc{x}} = h|_{\vc{x}}$ for all $g,h \in H(A)$.
We call $\vc{x}$ an {\em odd $A$-component} in this case.
In the opposite case, if $\vc{x}$ contains no $A$-cycle of odd length,
$\vc{x}$ is called an {\em even $A$-component}.
Then the set $\{ g|_{\vc{x}} : g \in H(A) \}$ forms a one-parameter family.
In fact, every even $A$-component admits a unique partition
\begin{equation} \label{eq:par}
\vc{x} = \vc{x}_1 \cup \vc{x}_{-1}
\end{equation}
such that $x \in \vc{x}_1$ and every edge $\{v,v'\} \in A$ with
$\{v,v'\} \subset \vc{x}$ connects $\vc{x}_1$ and $\vc{x}_{-1}$; that is,
the subgraph of $(X,A)$ induced by $\vc{x}$ is bipartite.
Then, by~\eqref{eq:g-h}, $g(y) - h(y) = \sigma(g(x) - h(x))$ whenever
$g,h \in H(A)$, $\sigma \in \{1,-1\}$, and $y \in \vc{x}_\sigma$.
It is now clear that $\operatorname{rk}(A)$ is exactly the number of even
$A$-components of $X$.
If $\operatorname{rk}(A) = 0$, $H(A)$ consists of a single function.
This occurs in particular if $A = A(d_x)$ for some $x\in X$;
then $\{x,y\} \in A$ for every $y \in X$, so $X$
is $A$-connected, and $(x,x)$ is an $A$-cycle of length $1$.
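For instance, let $X = \{v_1,v_2,v_3,v_4\}$ be the vertex set of a $4$-cycle, so that
$d(v_i,v_{i+1}) = 1$ (indices mod $4$) and $d(v_1,v_3) = d(v_2,v_4) = 2$, and let
$A = A(f) = \{\{v_1,v_3\},\{v_2,v_4\}\}$ for the constant function $f = 1$, which belongs
to $\Ex{X}$. The two antipodal pairs $\{v_1,v_3\}$ and $\{v_2,v_4\}$ are even
$A$-components, so $\operatorname{rk}(A) = 2$ and
$H(A) = \{g \in \mathbb{R}^X : g(v_1) + g(v_3) = g(v_2) + g(v_4) = 2\}$.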
\begin{Lem} \label{Lem:ha}
Suppose that $X$ is a metric space, $A \in \mathscr{A}(X)$, and
$1 \le n := \operatorname{rk}(A) < \infty$.
Then the difference of any two elements of $H(A)$ is uniformly
bounded on $X$, so the supremum norm gives a metric on $H(A)$,
and there exists an affine isometry from $H(A)$ onto $l^n_\infty$.
In particular $H(A)$ is injective.
\end{Lem}
\begin{proof}
Choose reference points $x_1,\dots,x_n \in X$ such that
$\vc{x_1},\dots,\vc{x_n}$ are precisely the $n$ even $A$-components
of~$X$.
Let $I \colon H(A) \to l^n_\infty$ be the affine map defined by
\[
I(g) := (g(x_1),\dots,g(x_n)).
\]
It follows from~\eqref{eq:g-h} that
$\|g - h\|_\infty = \max_{1 \le k \le n} |g(x_k) - h(x_k)|
= \|I(g) - I(h)\|_\infty$ for all $g,h \in H(A)$.
\end{proof}
For every $A \in \mathscr{A}(X)$ we consider the set
\[
P(A) := \Ex{X} \cap H(A) = \{g \in \Ex{X} : A \subset A(g)\}.
\]
First we note that
\begin{equation} \label{eq:pg-char}
P(A) = \E{X} \cap H(A)
= \D{X} \cap H(A).
\end{equation}
To see this, let $f \in \Ex{X}$ be such that $A(f) = A$, and
let $g \in \D{X} \cap H(A)$. Every $x \in X$ is part of an edge
$\{x,y\} \in A(f) = A$; then $\{x,y\} \in A(g)$ because $g \in H(A)$.
Since $g \in \D{X}$, this shows that $g \in \Ex{X}$. In view of the inclusions
$\Ex{X} \subset \E{X}
\subset \D{X}$ we get~\eqref{eq:pg-char}. As $\D{X}$ is convex,
so is $P(A)$. For every $f \in \Ex{X}$ we have $f \in P(A(f))$, thus
\[
\mathscr{P} := \{P(A)\}_{A \in \mathscr{A}(X)}
\]
is a family of convex subsets of $\mathbb{R}^X$ whose union equals $\Ex{X}$. Note that
$P(A') \subset P(A)$ if and only if $A \subset A'$.
The next result lists some basic properties of $\mathscr{A}(X)$ and $\mathscr{P}$.
\begin{Prop} \label{Prop:p-properties}
Let $X$ be a metric space. Then:
\begin{enumerate}
\item[\rm (1)]
If $f_0,f_1 \in \Ex{X}$ and $\lambda \in (0,1)$, then
$f := (1-\lambda)f_0 + \lambda f_1 \in \D{X}$ and $A(f) = A(f_0) \cap A(f_1)$,
so $f \in \Ex{X}$ if and only if $\bigcup(A(f_0) \cap A(f_1)) = X$.
\item[\rm (2)]
For $A_0,A_1 \in \mathscr{A}(X)$, the following are equivalent:
\begin{enumerate}
\item[\rm (i)]
$P(A_0) \cup P(A_1) \subset P(A)$ for some $A \in \mathscr{A}(X)$;
\item[\rm (ii)]
$\bigcup(A_0 \cap A_1) = X$;
\item[\rm (iii)]
$A_0 \cap A_1 \in \mathscr{A}(X)$.
\end{enumerate}
If conditions {\rm (i)--(iii)} hold,
then $P(A_0) \cup P(A_1) \subset P(A_0 \cap A_1) \subset
P(A)$.
\end{enumerate}
\end{Prop}
\begin{proof}
Let $f$ be given as in~(1).
Since $f_0,f_1 \in \D{X}$ and $\lambda \in (0,1)$, we have
\begin{align*}
f(x) + f(y)
&= (1-\lambda)(f_0(x) + f_0(y)) + \lambda(f_1(x) + f_1(y)) \\
&\ge (1-\lambda)d(x,y) + \lambda d(x,y) = d(x,y)
\end{align*}
for every pair of points $x,y \in X$, and
equality holds if and only if $\{x,y\} \in A(f_0) \cap A(f_1)$.
Regarding~(2), choose $f_0,f_1 \in \Ex{X}$ such that $A(f_i) = A_i$, and put
$f := \frac12(f_0 + f_1)$.
If $P(A_0) \cup P(A_1) \subset P(A)$ for some $A \in \mathscr{A}(X)$,
then $f \in P(A)$ by convexity and hence $f \in \Ex{X}$,
so $\bigcup(A_0 \cap A_1) = X$ by~(1).
If~(ii) holds, then, again by~(1), $f \in \Ex{X}$ and so
$A_0 \cap A_1 = A(f) \in \mathscr{A}(X)$.
Finally, assuming~(iii), we obtain
$P(A_0) \cup P(A_1) \subset P(A_0 \cap A_1)$ and thus~(i), and
for every $A \in \mathscr{A}(X)$ with $P(A_0) \cup P(A_1) \subset P(A)$
we have $A \subset A_0 \cap A_1$ and hence $P(A_0 \cap A_1) \subset P(A)$.
\end{proof}
We now pass to integer valued metrics. Then the sets $P(A)$ with
$\operatorname{rk}(A) = n < \infty$ turn out to be $n$-dimensional polytopes:
\begin{Thm} \label{Thm:pa}
Suppose that $X$ is a metric space with integer valued metric.
Let $A \in \mathscr{A}(X)$, and assume that $1 \le n := \operatorname{rk}(A) < \infty$. Then:
\begin{enumerate}
\item[\rm (1)]
The set $P(A) \subset H(A) \subset \mathbb{R}^X$ is an injective $n$-dimensional
polytope.
\item[\rm (2)]
The interior of $P(A)$ relative to $H(A)$ is the set
$\{g \in \Ex{X} : A(g) = A\}$.
\item[\rm (3)]
The faces of $P(A)$ are precisely the
sets $P(A')$ with $A' \in \mathscr{A}(X)$ and $A \subset A'$.
\end{enumerate}
\end{Thm}
The proof will also give precise information on the
possible isometry types of~$P(A)$.
\begin{proof}
We fix reference points $x_1,\dots,x_n \in X$ representing the
$n$ even $A$-components. For each $k$ we consider the partition
$\vc{x_k} = \vc{x_k}_1 \cup \vc{x_k}_{-1}$ as in~\eqref{eq:par}.
We also fix an element $f \in \Ex{X}$ with $A(f) = A$.
For $y \in \vc{x_k}_\sigma$, $\sigma \in \{1,-1\}$, we have
\begin{equation} \label{eq:f-even}
f(y) \in \mathbb{Z} + \sigma f(x_k).
\end{equation}
By contrast, if $y \in X_0 := X \setminus \bigcup_{k=1}^n \vc{x_k}$,
then there is an $A$-path from $y$ to itself of odd length, so
$f(y) \in \mathbb{Z} - f(y)$ and thus
\begin{equation} \label{eq:f-odd}
f(y) \in \frac12 \mathbb{Z}.
\end{equation}
Now let $I_f \colon H(A) \to l^n_\infty$ be the affine isometry defined by
\[
I_f(g) := (g(x_1) - f(x_1),\dots,g(x_n) - f(x_n));
\]
compare the proof of Lemma~\ref{Lem:ha}.
To show that $I_f(P(A))$ is a polytope,
we introduce constants as follows.
First, for $1 \le k \le n$ and $\sigma \in \{1,-1\}$, put
\[
C_{k \sigma} := \sup \biggl\{ \frac{d(x,y) - f(x) - f(y)}{2} :
x,y \in \vc{x_k}_\sigma \biggr\}.
\]
For $x,y \in \vc{x_k}_\sigma$ we have $\{x,y\} \not\in A = A(f)$,
hence $0 > d(x,y) - f(x) - f(y) \in \mathbb{Z} - 2 \sigma f(x_k)$ by~\eqref{eq:f-even}.
Thus the supremum is attained, and
$C_{k \sigma} < 0$.
Next, if $X_0 \ne \emptyset$, then for $1 \le k \le n$ and $\sigma \in \{1,-1\}$,
define
\[
C_{k \sigma 0} := \sup \bigl\{ d(x,y) - f(x) - f(y) :
(x,y) \in \vc{x_k}_\sigma \times X_0 \bigr\}.
\]
For every such pair $(x,y)$, we have
$0 > d(x,y) - f(x) - f(y) \in \frac12\mathbb{Z} - \sigma f(x_k)$, hence
$C_{k \sigma 0} < 0$.
Set $\bar C_{k \sigma} := C_{k \sigma}$ if $X_0 = \emptyset$ and
\[
\bar C_{k \sigma} := \max \bigl\{ C_{k \sigma},C_{k \sigma 0} \bigr\}
\]
if $X_0 \ne \emptyset$.
Finally, if $n \ge 2$, then for $1 \le k < l \le n$ and
$\sigma,\tau \in \{1,-1\}$, define
\[
C_{k \sigma l \tau} := \sup \bigl\{ d(x,y) - f(x) - f(y) :
(x,y) \in \vc{x_k}_\sigma \times \vc{x_l}_\tau \bigr\}.
\]
For every such pair $(x,y)$, we have $0 > d(x,y) - f(x) - f(y)
\in \mathbb{Z} - \sigma f(x_k) - \tau f(x_l)$ and so $C_{k \sigma l \tau} < 0$.
Now let $Q$ denote the set of all
$t = (t_1,\dots,t_n) \in l^n_\infty$ satisfying the system of
$2n + 4\binom{n}{2} = 2n^2$ relations
\begin{align}
\sigma t_k &\ge \bar C_{k \sigma \phantom{l \tau}}
\quad \text{($1 \le k \le n$, $\,\sigma \in \{1,-1\}$),} \label{eq:def-q-1} \\
\sigma t_k + \tau t_l &\ge C_{k \sigma l \tau}
\quad \text{($1 \le k < l \le n$, $\,\sigma,\tau \in \{1,-1\}$).}
\label{eq:def-q-2}
\end{align}
By the first $2n$ inequalities $Q$ is bounded.
Since all constants on the right side are strictly negative, $Q$ is a polytope
containing $I_f(f) = 0$ in its interior, so $Q$ has dimension $n$.
It follows readily from Proposition~\ref{Prop:p-inj}
that $Q$ is itself injective.
We claim that $I_f(P(A)) = Q$. Let $g \in H(A)$ and $t := I_f(g)$.
In view of~\eqref{eq:pg-char}, we need to check that $t \in Q$ if and only if
$g(x) + g(y) \ge d(x,y)$ for all pairs $\{x,y\} \not\in A$.
First, consider pairs of points $x,y \in \vc{x_k}$, for some
$k$. If $\sigma \in \{1,-1\}$ and $x,y \in \vc{x_k}_\sigma$, then
\[
2\sigma t_k = 2\sigma(g(x_k) - f(x_k)) = g(x) - f(x) + g(y) - f(y)
\]
by~\eqref{eq:g-h}. Hence, we have $\sigma t_k \ge C_{k \sigma}$
if and only if the inequality $g(x) + g(y) \ge d(x,y)$
holds for all pairs of points $x,y \in \vc{x_k}_\sigma$.
If $x \in \vc{x_k}_1$ and $y \in \vc{x_k}_{-1}$, then $g(x) + g(y) = f(x) + f(y)
> d(x,y)$ by~\eqref{eq:g-h} and since $\{x,y\} \not\in A$ by assumption.
Next, in case $X_0 \ne \emptyset$, consider pairs
$(x,y) \in \vc{x_k}_\sigma \times X_0$, for some $k$ and
$\sigma$. Then $g(y) = f(y)$ and so
\[
\sigma t_k = \sigma(g(x_k) - f(x_k)) = g(x) - f(x) + g(y) - f(y),
\]
hence $\sigma t_k \ge C_{k \sigma 0}$
if and only if $g(x) + g(y) \ge d(x,y)$
for all such $(x,y)$.
If $x,y \in X_0$ and $\{x,y\} \not\in A$,
then $g(x) + g(y) = f(x) + f(y) > d(x,y)$.
Finally, in case $n \ge 2$, consider pairs
$(x,y) \in \vc{x_k}_\sigma \times \vc{x_l}_{\tau}$ for some
$k < l$ and $\sigma,\tau$. Then
\[
\sigma t_k + \tau t_l = \sigma(g(x_k) - f(x_k)) + \tau(g(x_l) - f(x_l))
= g(x) - f(x) + g(y) - f(y),
\]
therefore $\sigma t_k + \tau t_l \ge C_{k \sigma l \tau}$
if and only if $g(x) + g(y) \ge d(x,y)$
for all such $(x,y)$.
This yields $I_f(P(A)) = Q$ and completes the proof of~(1).
Since $f$ was an arbitrary element of $\Ex{X}$ with $A(f) = A$
and $I_f(f)$ is an inner point of $Q$, it follows
that the set $\{g \in \Ex{X} : A(g) = A\}$ is contained in the
relative interior of $P(A)$.
Furthermore, if $g \in P(A)$ is such that the inclusion $A \subset A(g)$
is strict, then $g(x) + g(y) = d(x,y)$ for some pair $\{x,y\} \not\in A$
and we see from the above argument that equality holds in at least one
of the $2n^2$ inequalities~\eqref{eq:def-q-1}, \eqref{eq:def-q-2};
thus $I_f(g)$ is a boundary point of $Q$. This shows~(2).
Now suppose that $F$ is a face of $P(A)$ of dimension $n-1$.
Choose a point $g$ in the relative interior of $F$.
Since $g \in H(A)$, we have $H(A(g)) \subset H(A)$.
For $t := I_f(g)$, exactly one of the $2n^2$ inequalities~\eqref{eq:def-q-1},
\eqref{eq:def-q-2} is an equality and the others are strict.
Reviewing the above argument again we see that then
the inclusion $A \subset A(g)$ is strict and exactly one of
the following two cases occurs:
\begin{enumerate}
\item[(i)]
there exist $k$ and $\sigma$ such that every
edge in $A(g) \setminus A$ relates two (possibly equal) points
of $\vc{x_k}_\sigma$ or connects $\vc{x_k}_\sigma$ with $X_0$;
\item[(ii)]
there exist $k < l$ and $\sigma,\tau$ such that every edge in
$A(g) \setminus A$ connects $\vc{x_k}_\sigma$ with $\vc{x_l}_\tau$.
\end{enumerate}
In either case, $X$ has $n-1$ even $A(g)$-components, thus $H(A(g))$
is an $(n-1)$-dimensional affine subspace of $H(A)$.
For all $h \in H(A(g))$, $A \subset A(g) \subset A(h)$ and the first
inclusion is strict, so $H(A(g))$ contains no inner points of $P(A)$
by~(2). As $g \in H(A(g))$ is in the relative interior of $F$,
we have $F = P(A) \cap H(A(g))$ and hence $F = P(A(g))$.
Now it follows easily by downward induction on $k$
that every face $F$ of $P(A)$ of dimension $k \in \{0,\dots,n\}$
satisfies $F = P(A_F)$ for some $A_F \in \mathscr{A}(X)$ with $A \subset A_F$
and $\operatorname{rk}(A_F) = k$. Conversely, let $A' \in \mathscr{A}(X)$ with
$A \subset A'$ be given. Then $P(A') \subset P(A)$, so the relative interior
of $P(A')$ meets the relative interior of some face $F = P(A_F)$ of
$P(A)$. Applying~(2) to both $P(A')$ and $P(A_F)$ we obtain
$A' = A_F$, thus $P(A') = P(A_F) = F$.
This concludes the proof of~(3).
\end{proof}
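To illustrate the proof, consider again the vertex set of a $4$-cycle,
$X = \{v_1,v_2,v_3,v_4\}$ with $d(v_i,v_{i+1}) = 1$ and $d(v_1,v_3) = d(v_2,v_4) = 2$,
the constant function $f = 1$, and $A = A(f) = \{\{v_1,v_3\},\{v_2,v_4\}\}$, so that
$n = 2$ and $X_0 = \emptyset$. With reference points $x_1 = v_1$ and $x_2 = v_2$, all
parts of the even $A$-components are singletons and all constants $\bar C_{k \sigma}$
and $C_{k \sigma l \tau}$ equal $-1$; thus
\[
Q = \bigl\{ t \in l^2_\infty : |t_1| + |t_2| \le 1 \bigr\},
\]
a square of $l_\infty$-diameter $2$ whose four vertices $(\pm1,0)$, $(0,\pm1)$ are the
images under $I_f$ of the four distance functions $d_{v_i}$ (compare the example of
cyclic groups in Section~\ref{Sect:cones}).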
Next we show that $\Ex{X}$ is dense in $\E{X}$, provided the metric of $X$
is integer valued. A different criterion is given in~\cite[(5.17)]{Dre}.
\begin{Prop} \label{Prop:e-dense}
Let $X$ be a metric space with integer valued metric.
Then for every $f \in \E{X}$ and every integer $m \ge 1$ there
exists a function $f' \in \Ex{X}$ with values in $\frac1m\mathbb{Z}$ such that
$\|f - f'\|_\infty \le \frac{1}{2m}$.
\end{Prop}
\begin{proof}
Let $f \in \E{X}$, $m \ge 1$, and put $\varepsilon := \frac{1}{2m}$.
Denote by $\mathscr{F}$ the set of all functions $g \in \D{X}$ with values
in $2\varepsilon\mathbb{Z}$ and with $\|f - g\|_\infty \le \varepsilon$. To see that
$\mathscr{F}$ is non-empty, let $g_0 \in \mathbb{R}^X$ be the largest function less
than or equal to $f + \varepsilon$ with values in $2\varepsilon\mathbb{Z}$.
Then $g_0 > f - \varepsilon$, in particular $\|f - g_0\|_\infty \le \varepsilon$.
For $x,y \in X$, we have $g_0(x) + g_0(y) > f(x) + f(y) - 2\varepsilon
\ge d(x,y) - 2\varepsilon$, and since both the first and the last term
are in $2\varepsilon\mathbb{Z}$, this gives $g_0(x) + g_0(y) \ge d(x,y)$. So $g_0 \in \mathscr{F}$.
Now let $g \in \mathscr{F}$ be arbitrary, and suppose that $x \in X \setminus \bigcup A(g)$.
Then, for every $y \in X$, we have the strict inequality
$g(x) > d(x,y) - g(y)$ in $2\varepsilon\mathbb{Z}$, so that
\[
g(x) \ge \sup_{y \in X}(d(x,y) - g(y) + 2\varepsilon)
\ge \sup_{y \in X}(d(x,y) - f(y) + \varepsilon) = f(x) + \varepsilon
\]
by~\eqref{eq:extr-sup}. Hence $g(x) = f(x) + \varepsilon$.
Let the function $g'$ be defined by
\[
g'(x) := g(x) - 2\varepsilon = f(x) - \varepsilon
\]
and $g'(y) := g(y)$ for all $y \in X \setminus \{x\}$. Note that $g(x) \ge \varepsilon$,
thus in fact $g(x) \ge 2\varepsilon$ and $g'(x) \ge 0$.
Since $g'(x) + g'(y) = g(x) + g(y) - 2\varepsilon \ge d(x,y)$ for all
$y \in X \setminus \{x\}$, it follows that $g' \in \mathscr{F}$. This shows that
every minimal element $f'$ of $\mathscr{F}$ satisfies $\bigcup A(f') = X$,
that is, $f' \in \Ex{X}$. The existence of some minimal element is
obvious if $X$ is countable and a consequence of Zorn's Lemma in the
general case.
\end{proof}
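For instance, let $X = \{a,b\}$ with $d(a,b) = 1$, let $f$ be the extremal function with
$f(a) = f(b) = \frac12$, and let $m = 1$, so that $\varepsilon = \frac12$. Then $g_0$ is
the constant function with value $1$, no point of $X$ is covered by $A(g_0)$, and
lowering the value at $a$ by $2\varepsilon$ produces the minimal element $f' = d_a$ of
$\mathscr{F}$, which lies in $\Ex{X}$, takes integer values, and satisfies
$\|f - f'\|_\infty = \frac12$.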
We now state the concluding result of this section.
A metric space $X$ with integer valued metric will be called
{\em discretely path-connected} if for every pair of points
$x,y \in X$ there exists a {\em discrete path}
$\gamma \colon \{0,1,\dots,l\} \to X$ from $x$ to $y$, that is,
$\gamma(0) = x$, $\gamma(l) = y$, and $d(\gamma(k-1),\gamma(k)) = 1$
for $k = 1,\dots,l$.
\begin{Thm} \label{Thm:poly}
Let $X$ be a metric space with integer valued metric.
Suppose that for every $f \in \E{X}$ there exist
$\varepsilon,N > 0$ such that $\operatorname{rk}(A(g)) \le N$ for
all $g \in \Ex{X}$ with $\|f - g\|_\infty < \varepsilon$.
Then:
\begin{enumerate}
\item[\rm (1)]
$\Ex{X} = \E{X}$.
\item[\rm (2)]
$\mathscr{P} = \{P(A)\}_{A\in \mathscr{A}(X)}$ is a polyhedral structure on $\E{X}$ with
locally finite dimension,
where $P(A')$ is a face of $P(A)$ if and only if $A \subset A'$.
\item[\rm (3)]
For every $n \ge 1$ and $D > 0$, $\mathscr{P}$ has only finitely many isometry
types of $n$-cells with diameter at most $D$.
If, in addition, $X$ is discretely path-connected,
then for every $n$ there are only finitely many
isometry types of $n$-cells.
\end{enumerate}
\end{Thm}
\begin{proof}
For~(1), let $f \in \E{X}$.
By Proposition~\ref{Prop:e-dense} there exists a sequence
$(f_i)$ in $\Ex{X}$ that converges to $f$, and by the assumption of the
theorem there is no loss of generality in assuming that $\operatorname{rk}(A(f_i)) = n$
for all $i$ and for some $n \ge 0$. It follows that for every $i$ there
exists a set $R_i \subset [0,1)$ with $|R_i| \le 2n + 2$ such that $f_i$ takes
values in $\mathbb{Z} + R_i$; see~\eqref{eq:f-even} and~\eqref{eq:f-odd}.
Since $f_i \to f$, there also exists $R \subset [0,1)$ with $|R| \le 2n + 2$
such that $f(X) \subset \mathbb{Z} + R$.
But then the supremum in~\eqref{eq:extr-sup} is attained for every $x \in X$,
and so $f \in \Ex{X}$. (Compare~\cite[(5.19)]{Dre}.)
The union of the family $\mathscr{P} = \{P(A)\}_{A \in \mathscr{A}(X)}$ equals $\Ex{X} = \E{X}$.
In view of Theorem~\ref{Thm:pa}, for~(2) it remains
to show that if $A_1,A_2 \in \mathscr{A}(X)$ and
$C := P(A_1) \cap P(A_2) \ne \emptyset$, then
$C \in \mathscr{P}$. For $i = 1,2$, let $P(A'_i)$ be the minimal face of
$P(A_i)$ that contains $C$. By convexity, $C$ has non-empty interior
relative to its affine hull in $\mathbb{R}^X$, hence the relative interiors of
$P(A'_1)$ and $P(A'_2)$ have a common point. It follows that
$A'_1 = A'_2$ and thus $P(A'_1) = P(A'_2) = C$.
As for~(3), we first observe that if $f \in \Ex{X}$ is a vertex of $\mathscr{P}$,
then $\operatorname{rk}(A(f)) = 0$ and so $f(X) \subset \frac12\mathbb{Z}$ by~\eqref{eq:f-odd}.
In particular, all edges of $\mathscr{P}$ have length in $\frac12\mathbb{Z}$.
Now we show that if $X$ is discretely path-connected,
then all edges have length at most $2$.
Suppose that $A \in \mathscr{A}(X)$, $\operatorname{rk}(A) = 1$,
and $x_1$ is a point in the only even $A$-component of $X$.
Then clearly there exists a pair $(x,y)$ with $d(x,y) = 1$ such that
$x \in \vc{x_1}_1$ and either $y \in \vc{x_1}_{-1}$
or $y \in X \setminus \vc{x_1}$. Let $g,h \in P(A)$. By~\eqref{eq:g-h},
\[
\|g - h\|_\infty = |g(x) - h(x)|.
\]
In case~$y \in \vc{x_1}_{-1}$, we have furthermore
$g(x) - h(x) = -(g(y) - h(y))$;
since $g,h$ are $1$-Lipschitz and $d(x,y) = 1$, it follows that
\begin{align*}
2|g(x) - h(x)|
&= |g(x) - h(x) - (g(y) - h(y))| \\
&\le |g(x) - g(y)| + |h(y) - h(x)| \le 2.
\end{align*}
In case~$y \in X \setminus \vc{x_1}$, we have $g(y) = h(y)$ and so
\[
|g(x) - h(x)| \le |g(x) - g(y)| + |h(y) - h(x)| \le 2.
\]
In either case, $\|g - h\|_\infty \le 2$.
Hence the edge $P(A)$ has length at most $2$.
Finally, for $n \ge 2$, we see from~\eqref{eq:def-q-1}, \eqref{eq:def-q-2}
that there are only finitely many isometry types of
$n$-cells with diameter at most $D > 0$ and edge lengths in $\frac12\mathbb{Z}$,
and only finitely many isometry types of $n$-cells with edge lengths in
$\{\frac12,1,\frac32,2\}$.
\end{proof}
For illustration, we give two simple examples of (finite) discretely
geodesic metric spaces with two-dimensional injective hulls.
Together they show in particular that the four possible lengths
$\frac12,1,\frac32,2$ for edges in $\mathscr{P}$, as just discussed, do indeed occur.
For points $v_1,\dots,v_n$ in a vector space $V$ we will denote by
$[v_1,\dots,v_n]$ the convex hull $\{\sum_{i=1}^n \lambda_iv_i : \lambda_i \ge 0,\,
\sum_{i=1}^n \lambda_i = 1\}$.
\begin{Expl}
Let $X = \{x_1,x_2,y_1,y_2,y_3\}$ be the
metric space with $d(x_1,x_2) = 2$, $d(x_i,y_j) = 1$,
and $d(y_j,y_k) = 2$ ($j < k$).
Then $P' := [d_{x_1},d_{x_2}]$ is an edge of $\mathscr{P}$ of length $2$.
The maximal cells of the injective hull $\E{X}$ (that is, of the partially
ordered set $\mathscr{P}$) are the three triangles
$P_j := [d_{x_1},d_{x_2},d_{y_j}]$, isometric to $[(1,0),(-1,0),(0,1)]$ in
$l^2_\infty$. They are glued along $P'$. For instance,
$A_1 := \{\{x_1,x_2\},\{y_1,y_2\},\{y_1,y_3\}\}$
and $A' := A_1 \cup \{\{y_2,y_3\}\}$ are the respective admissible
edge sets with $P(A_1) = P_1$ and $P(A') = P'$.
\end{Expl}
\begin{Expl}
Consider the six-point metric space $X = \{x_1,x_2,y_1,y_2,y_3,z\}$,
where all distances between distinct points are $1$ except that
$d(x_i,z) = 2$ and $d(y_j,y_k) = 2$ ($j < k$).
Besides the distance functions to the elements
of $X$, the injective hull $\E{X}$ has four additional vertices
$f_1,f_2,f_3,g$, where $f_j = \frac12$ on $\{x_1,x_2,y_j\}$,
$f_j = \frac32$ otherwise, $g = \frac12$ on $\{x_1,x_2\}$, $g = 1$ on
$\{y_1,y_2,y_3\}$,
and $g(z) = \frac32$. The graph $(X,A(g))$
consists of two $3$-cycles, with vertices $x_1,x_2,z$ and $y_1,y_2,y_3$,
respectively. By deleting one of the six edges of this graph one obtains
the graph corresponding to an edge of $\E{X}$ that connects $g$ with
one of $d_{x_1},d_{x_2},d_z,f_1,f_2,f_3$. All of these edges have length
$\frac12$, except for $[g,d_z]$, which has length $\frac32$.
For instance, deletion of $\{y_1,y_2\}$ gives the graph for $[g,f_3]$.
The maximal cells of $\E{X}$ are the six triangles $[d_{x_i},f_j,g]$,
isometric to $[(\frac12,0),(0,\frac12),(0,0)]$ in $l^2_\infty$,
and the three quadrilaterals $[g,f_j,d_{y_j},d_z]$, isometric to
$[(0,0),(0,\frac12),(-\frac12,1),(-\frac32,0)]$ in $l^2_\infty$.
\end{Expl}
\section{Cones} \label{Sect:cones}
We now discuss geometric conditions that allow to verify the assumption on
the rank in Theorem~\ref{Thm:poly}.
Cones, as defined in~\eqref{eq:c}, will be instrumental.
We start with a basic fact.
\begin{Lem} \label{Lem:co}
Suppose that $X$ is a metric space, $f \in \Dl{X}$, and $\{x,y\} \in A(f)$.
Then $\{x,z\} \in A(f)$ and $f(z) = f(y) + d(y,z)$ for all $z \in \co(x,y)$.
\end{Lem}
\begin{proof}
For $f \in \D{X}$ and $z \in \co(x,y)$, we have
\[
f(x) + f(z) \ge d(x,z) = d(x,y) + d(y,z).
\]
Furthermore, if $f \in \operatorname{Lip}_1(X,\mathbb{R})$ and $\{x,y\} \in A(f)$, then
\[
f(x) + f(z) \le f(x) + f(y) + d(y,z) = d(x,y) + d(y,z).
\]
This gives the result.
\end{proof}
The next lemma, in particular criterion~(4),
will play a key role in the proof of Theorem~\ref{Thm:intro-ex}.
(For~(2), compare~\cite[Theorem~3.12]{GooM}.)
\begin{Lem} \label{Lem:xyxy}
Let $X$ be a metric space, and suppose
that $f \in \D{X}$, $x,y,\bar x,\bar y \in X$, and
$\{x,y\},\{\bar x,\bar y\} \in A(f)$. Then each of the following
conditions implies that also $\{x,\bar y\},\{\bar x,y\} \in A(f)$:
\begin{enumerate}
\item[\rm (1)]
$d(x,y) + d(\bar x,\bar y) \le d(x,\bar y) + d(\bar x,y)$;
\item[\rm (2)]
$\co(x,y) \cap \co(\bar x,\bar y) \ne \emptyset$;
\item[\rm (3)]
$\bw(x,\bar y) \cap \bw(\bar x,y) \ne \emptyset$;
\item[\rm (4)]
there exists $v \in \bw(x,y) \cap \bw(\bar x,\bar y)$
such that $\co(x,v) = \co(\bar x,v)$.
\end{enumerate}
\end{Lem}
\begin{proof}
Because $\{x,y\},\{\bar x,\bar y\} \in A(f)$, (1) gives
\[
f(x) + f(y) + f(\bar x) + f(\bar y)
= d(x,y) + d(\bar x,\bar y)
\le d(x,\bar y) + d(\bar x,y).
\]
It follows that each of the inequalities $f(x) + f(\bar y) \ge d(x,\bar y)$
and $f(\bar x) + f(y) \ge d(\bar x,y)$ must in fact be an equality, that is,
$\{x,\bar y\},\{\bar x,y\} \in A(f)$.
Now assume that~(2) holds, and let $z \in \co(x,y) \cap \co(\bar x,\bar y)$.
Then
\[
d(x,y) + d(y,z) = d(x,z) \le d(x,\bar y) + d(\bar y,z)
\]
and, likewise, $d(\bar x,\bar y) + d(\bar y,z) \le d(\bar x,y) + d(y,z)$.
Adding these two inequalities one obtains~(1).
If $v \in \bw(x,\bar y) \cap \bw(\bar x,y)$, then
\[
d(x,y) + d(\bar x,\bar y)
\le d(x,v) + d(v,y) + d(\bar x,v) + d(v,\bar y)
= d(x,\bar y) + d(\bar x,y),
\]
so~(3) implies~(1) as well. Finally, if (4) holds, then
$\bar y \in \co(\bar x,v) = \co(x,v)$ and $y \in \co(x,v) = \co(\bar x,v)$,
thus $v \in \bw(x,\bar y) \cap \bw(\bar x,y)$.
\end{proof}
As a first simple application of these lemmas we note the following
result.
\begin{Prop} \label{Prop:disj-cones}
Suppose that $X$ is a metric space containing at most $k$ pairwise disjoint
cones, that is, $|I| \le k$ for every disjoint family
$(\co(x_i,y_i))_{i \in I}$ of cones in $X$.
Then $\operatorname{rk}(A) \le \frac12 k$ for all $A \in \mathscr{A}(X)$.
\end{Prop}
\begin{proof}
Let $A \in \mathscr{A}(X)$, and suppose that the two edges
$\{x,y\},\{\bar x,\bar y\} \in A$ belong to different even
$A$-components of $X$. It follows from either Lemma~\ref{Lem:co} or
Lemma~\ref{Lem:xyxy} that the four cones
$\co(x,y),\co(y,x),\co(\bar x,\bar y),\co(\bar y,\bar x)$
are pairwise disjoint.
For instance, if there were a point $z$ in $\co(x,y) \cap \co(\bar x,\bar y)$,
then $\{x,z\},\{\bar x,z\} \in A$ by the first and
$\{x,\bar y\},\{\bar x,y\} \in A$ by the second lemma,
so $x$ and $\bar x$ would be connected by $A$-paths of length~2.
This clearly gives the result.
\end{proof}
An example will be given below. First we record
another useful fact related to cones.
\begin{Prop} \label{Prop:y-x}
Let $Y$ be a metric space, and let $X$ be a non-empty subset.
If for every pair of points $x,y \in Y$ there exists
a point $z \in \co(x,y) \cap X$, then $\E{Y}$ is isometric to $\E{X}$
via the restriction map $f \mapsto f|_X$.
\end{Prop}
\begin{proof}
Let $f \in \E{Y}$, and let $x \in Y$. For every
$\varepsilon > 0$ there exist $y \in Y$ and $z \in X$ such that
$f(x) + f(y) \le d(x,y) + \varepsilon$ and $d(x,y) + d(y,z) = d(x,z)$;
since $f(z) \le f(y) + d(y,z)$, this gives
$f(x) + f(z) \le d(x,z) + \varepsilon$. Hence
\begin{equation} \label{eq:sup-z-x}
f(x) = \sup_{z \in X}(d(x,z) - f(z)).
\end{equation}
For $x \in X$, this shows that $f|_X \in \E{X}$.
Furthermore, for another function $g \in \E{Y}$,
combining~\eqref{eq:sup-z-x} with the inequality $d(x,z) \le g(x) + g(z)$
we conclude that $f(x) - g(x) \le \sup_{z \in X}(g(z) - f(z))$
for every $x \in Y$. So $\|f-g\|_\infty = \|f|_X - g|_X\|_\infty$,
and by the second part of Proposition~\ref{Prop:x-subspace},
the restriction operator $f \mapsto f|_X$ maps $\E{Y}$ onto $\E{X}$.
\end{proof}
We now illustrate Proposition~\ref{Prop:disj-cones},
which turns out to be optimal in some instances.
\begin{Expl} \label{Expl:zn}
Consider the discretely geodesic metric space $X = \mathbb{Z}^n$ with the
$l_1$ distance (the standard word metric of the group $\mathbb{Z}^n$).
It is not difficult to see that $X$ contains at most $2^n$ pairwise
disjoint cones.
By Theorem~\ref{Thm:poly} and Proposition~\ref{Prop:disj-cones},
$\E{X}$ is a polyhedral complex of dimension at most $2^{n-1}$.
For the subspace $W_n := \{0,1\}^n \subset X$ of diameter $n$,
the constant function $g$ on $W_n$ with value $\frac12 n$ satisfies
$g \in \E{W_n}$ and $\operatorname{rk}(A(g)) = 2^{n-1}$ (each pair of antipodal points is an
$A(g)$-component of $W_n$); thus $\dim(\E{W_n}) = \dim(\E{X}) = 2^{n-1}$.
Furthermore, it follows easily from Proposition~\ref{Prop:y-x} that
$\E{X}$ is isometric to $\E{l^n_1}$. So $\E{X}$ is also a Banach space,
isometric to $l^{2^{n-1}}_\infty$ (see Remark~\ref{Rem:banach}).
Unless $n = 1,2$, the dimension of $\E{X}$
is strictly larger than $n$ and hence the canonical action of $\mathbb{Z}^n$ on
$\E{X}$ is not cocompact. This can be remedied by taking
the $l_\infty$ distance on $\mathbb{Z}^n$ instead (which is again a word metric);
then clearly the injective hull is isometric to $l^n_\infty$.
\end{Expl}
Given a metric space $X$ and a point $v \in X$,
we denote by $\mathscr{C}(v)$ the set of all cones $\co(x,v)$ for $x \in X$.
The following result shows that if $\mathscr{C}(v)$ happens to be finite,
then one obtains some control on the complexity of $\Ex{X}$ near $d_v$.
Note that here $X$ is not assumed to be discrete.
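For instance, if $X = \mathbb{Z}$ with the usual metric, then for every $v$ the set $\mathscr{C}(v)$
consists of the three cones $\co(v,v) = X$, $\co(x,v) = \{z : z \ge v\}$ for $x < v$, and
$\co(x,v) = \{z : z \le v\}$ for $x > v$, so $|\mathscr{C}(v)| = 3$, and part~(1) of the
proposition below yields $\operatorname{rk}(A) \le 1$ for every admissible $A \subset A(d_v)$.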
\begin{Prop} \label{Prop:cv-finite}
Suppose that $X$ is a metric space and
$v \in X$ is a point with $|\mathscr{C}(v)| < \infty$.
Consider the set $A_v := A(d_v) \in \mathscr{A}(X)$. Then:
\begin{enumerate}
\item[\rm (1)]
Every admissible set $A \in \mathscr{A}(X)$ with $A \subset A_v$
satisfies $\operatorname{rk}(A) \le \frac12|\mathscr{C}(v)|$.
\item[\rm (2)]
There are at most $2^{|\mathscr{C}(v)|-1} - 1$ sets $A \in \mathscr{A}(X)$ with $A \subset A_v$
and $\operatorname{rk}(A) = 1$.
\end{enumerate}
\end{Prop}
Note that $\{x,y\} \in A_v = A(d_v)$ if and only if
$d_v(x) + d_v(y) = d(x,y)$, that is, $v \in \bw(x,y)$.
\begin{proof}
Let $A \subset A_v$ be admissible. There exists a partition
\begin{equation} \label{eq:x-partition}
X = X_0 \cup \bigcup_{j \in J}(X_{j,1} \cup X_{j,-1}),
\end{equation}
where $X_0$ is the union of all odd $A$-components
of $X$, $\{X_j\}_{j \in J}$ is the family of all even $A$-components,
and the partition $X_j = X_{j,1} \cup X_{j,-1}$ is such that
no edge in $A$ relates points in the same subset.
Let $x,\bar x \in X$. There exist $y,\bar y$ such that
$\{x,y\},\{\bar x,\bar y\} \in A \subset A_v$ and thus
$v \in \bw(x,y) \cap \bw(\bar x,\bar y)$. In case $\co(x,v) = \co(\bar x,v)$
it follows from Lemma~\ref{Lem:xyxy} that
$\{x,\bar y\},\{\bar x,y\} \in A$, in particular $x$ and $\bar x$ are
connected by an $A$-path of length~$2$. Hence, if $x$ and $\bar x$
lie in different sets of the above partition, then
$\co(x,v) \ne \co(\bar x,v)$. Thus the number of even $A$-components
is in fact finite and less than or equal to $\frac12|\mathscr{C}(v)|$.
For the proof of~(2), let $\mathscr{A}_v'$ denote the set of all $A \in \mathscr{A}(X)$
with $A \subset A_v$ and $\operatorname{rk}(A) = 1$. We show that there is an injective map
$S$ from $\mathscr{A}_v'$ into the set of all non-empty subsets of
$\mathscr{C}(v)$ that do not contain $\co(v,v) = X$.
For each $A \in \mathscr{A}_v'$ there is a unique
partition $X = X_0 \cup X_1 \cup X_{-1}$ such that every $g \in \Ex{X}$ with
$A(g) = A$ satisfies
\begin{equation} \label{eq:g-dv}
g(x) = d_v(x) + \sigma \|g-d_v\|_\infty
\end{equation}
for $\sigma \in \{0,1,-1\}$ and $x \in X_\sigma$.
Note that every such $g$ is strictly positive since $\operatorname{rk}(A) > 0$, therefore
$v \in X_1$ and thus $\co(x,v) \ne X = \co(v,v)$ for all $x \in X_{-1}$.
The desired map $S$ is defined by
\[
S(A) := \{\co(x,v) : x \in X_{-1}\}.
\]
To show that $S$ is injective, suppose that $S(A) = S(A')$ for
some $A,A' \in \mathscr{A}_v'$, and let $X_0 \cup X_1 \cup X_{-1}$ and
$X'_0 \cup X'_1 \cup X'_{-1}$ be the respective partitions of $X$.
Now note, first, that
\[
X_{-1} = \{x \in X : \co(x,v) \in S(A)\}.
\]
This holds since $\co(\bar x,v) \ne \co(x,v)$ for
$\bar x \in X_0 \cup X_1$ and $x \in X_{-1}$,
by the same argument as in the proof of~(1). Similarly,
$X'_{-1} = \{x \in X : \co(x,v) \in S(A')\}$ and so $X_{-1} = X'_{-1}$.
Second,
\[
X_{1} = \{y \in X : \text{there exists $x \in X_{-1}$ with
$\{x,y\} \in A_v$}\}.
\]
The inclusion $\subset$ is clear since $A \subset A_v$.
For the other, if $x \in X_{-1}$ and $\{x,y\} \in A_v$,
then every $g \in \Ex{X}$ with $A(g) = A$ satisfies
$g(y) \ge d(x,y) - g(x) = d_v(x) + d_v(y) - g(x) = d_v(y) + \|g-d_v\|_\infty$,
so $y \in X_1$. Together with the corresponding characterization of
$X'_{1}$ and the fact that $X_{-1} = X'_{-1}$, this shows that $X_1 = X'_1$
and $X_0 = X'_0$ as well. Finally, using~\eqref{eq:g-dv} again, we conclude
that $A = A'$.
\end{proof}
The bound in the first part of Proposition~\ref{Prop:cv-finite} is sharp:
\begin{Expl}
The cyclic group of order $2n$, with the usual word metric of diameter $n$,
satisfies $|\mathscr{C}(v)| = 2n$ for every element $v$.
The constant function $f$ with value $\frac12 n$ has $\operatorname{rk}(A(f)) = n$,
and $A(f) \subset A(d_v)$ for all $v$. In fact, the injective
hull is a combinatorial $n$-cube, as is shown in~\cite[Section~9]{GooM}.
\end{Expl}
We now turn to discretely geodesic metric spaces $X$ with
$\beta$-stable intervals, as defined in~\eqref{eq:stable}.
The following observation goes back to Cannon~\cite[Section~7]{Can}.
For $x,v \in X$, define
$F_{xv} \colon B(v,\beta) \to \mathbb{Z}$ by $F_{xv}(u) := d_x(u) - d_x(v)$.
\begin{Lem} \label{Lem:cone-type}
Let $X$ be a discretely geodesic metric space with $\beta$-stable intervals,
and let $x,x',v \in X$.
If $F_{xv} \le F_{x'v}$, then $\co(x,v) \subset \co(x',v)$.
Hence, $F_{xv} = F_{x'v}$ implies that $\co(x,v) = \co(x',v)$.
\end{Lem}
In particular, if for a fixed vertex $v$ the closed
ball $B(v,\beta)$ is finite, then there are only finitely many distinct
such functions $F_{xv}$ as $x$ ranges over $X$ and so $|\mathscr{C}(v)| < \infty$.
\begin{proof}
Suppose that $F_{xv} \le F_{x'v}$. We show by induction on $l \ge 0$
that every $y \in \co(x,v)$ with $d(v,y) = l$ is an element of $\co(x',v)$.
The case $l = 0$ is trivial, so let $y \in \co(x,v)$ with
$d(v,y) = l \ge 1$. Choose a point $y' \in \bw(v,y)$ such that
$d(v,y') = l - 1$; note that $y' \in \co(x,v)$.
By the induction hypothesis, $y' \in \co(x',v)$ and thus $v \in \bw(x',y')$.
Since $d(y',y) = 1$ and $X$ has $\beta$-stable intervals, there exists
a point $u \in \bw(x',y) \cap B(v,\beta)$. We have
\[
d_{x'}(u) - d_{x'}(v) = F_{x'v}(u) \ge F_{xv}(u) = d_x(u) - d_x(v).
\]
Adding the term $d(u,y) - d(v,y)$ and using the identities
$d_{x'}(u) + d(u,y) = d_{x'}(y)$ and $d_x(v) + d(v,y) = d_x(y)$ we obtain
\[
d_{x'}(y) - d_{x'}(v) - d(v,y) \ge d_x(u) + d(u,y) - d_x(y) \ge 0.
\]
Thus $d_{x'}(y) = d_{x'}(v) + d(v,y)$ and so $y \in \co(x',v)$.
\end{proof}
Lemma~\ref{Lem:cone-type} shows that if a finitely generated group
$\Gamma_S = (\Gamma,d_S)$ with the word metric has $\beta$-stable intervals,
then $|\mathscr{C}(v)|$ is finite for every $v \in \Gamma_S$, and this number is
of course independent of $v$. The cones $\co(x,1)$ based at the identity
element of $\Gamma$ will be called {\em cone types}. For groups with finitely
many cone types the language of all geodesic words is regular and
the growth series is a rational function (see~\cite{Can, Eps+}).
\begin{Rem} \label{Rem:fft-property}
Neumann--Shapiro~\cite{NeuS} introduced a similar criterion,
the {\em falsification by fellow traveller} (FFT) property,
which is easily seen to imply uniform stability of intervals.
In particular it follows from Proposition~4.4 and Theorem~4.3 in their paper
that all finitely generated abelian groups have $\beta$-stable intervals and
that finitely generated virtually abelian groups as well as
geometrically finite hyperbolic groups have $\beta$-stable intervals
for {\em some} word metrics.
The FFT~property has been verified for further classes of groups, with
respect to suitably chosen finite generating sets, in~\cite{Nos,Nos2,Hol}.
\end{Rem}
We also remark that the uniform stability of intervals is not
a necessary condition for a finitely generated group $\Gamma_S$
to have finitely many cone types:
\begin{Expl}
The finitely presented group $\Gamma = \langle a,t \mid t^2 = 1,\,atat = tata \rangle$
with generating set $S = \{a,t\}$ has finitely many
cone types, but intervals are not uniformly stable.
This example is discussed in~\cite{Eld}.
\end{Expl}
The next result states another consequence of the stability
assumption. For a pair of points $x,y$ in a metric space $X$,
\begin{equation} \label{eq:Gromov-prd}
(x \mid y)_z := \frac12 \bigl( d_z(x) + d_z(y) - d(x,y) \bigr)
\end{equation}
denotes their Gromov product with respect to $z \in X$.
Note that $0 \le (x\mid y)_z \le \min\{d_z(x), d_z(y)\}$ and
$(x\mid y)_z + (x \mid z)_y = d(y,z)$.
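Note also that $(x \mid y)_z \le d_z(v)$ for every $v \in \bw(x,y)$, with equality for
some $v$ whenever $X$ is a metric tree (or $X = \mathbb{Z}$). The next lemma provides a reverse
estimate, up to the factor $2\beta$, in every discretely geodesic space with
$\beta$-stable intervals.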
\begin{Lem} \label{Lem:z-beta}
Let $X$ be a discretely geodesic metric space with $\beta$-stable intervals.
Whenever $x,y,z \in X$, there exists a point $v \in \bw(x,y)$
with $d_z(v) \le \beta \cdot 2(x \mid y)_z$.
\end{Lem}
\begin{proof}
We proceed by induction on the integer $2(x \mid y)_z$.
If $(x \mid y)_z = 0$, then $z \in \bw(x,y)$ and we can take $v = z$.
Now suppose that $(x \mid y)_z > 0$.
Choose a discrete geodesic $\gamma \colon \{0,1,\dots,d_z(y)\} \to X$ from
$z$ to $y$, and let $k$ be the largest parameter value such that
$z \in \bw(x,\gamma(k))$. Note that $k < d_z(y)$ because $(x \mid y)_z > 0$.
Let $y' := \gamma(k+1)$. Since $X$ has $\beta$-stable intervals,
there exists a point $z' \in \bw(x,y')$ with $d_z(z') \le \beta$.
We have
\[
d_{z'}(x) + d_{z'}(y') = d(x,y') \le d_z(x) + d_z(y') - 1
\]
by the choice of $k$. Adding the term $d(y',y) - d(x,y)$
we obtain $2(x\mid y)_{z'} \le 2(x \mid y)_z - 1$.
Hence, by the induction hypothesis, there exists $v \in \bw(x,y)$
such that $d_{z'}(v) \le \beta \cdot 2(x \mid y)_{z'}$. So
\[
d_z(v) \le d_z(z') + d_{z'}(v) \le \beta \bigl( 1 + 2(x \mid y)_{z'} \bigr)
\le \beta \cdot 2(x \mid y)_z,
\]
as desired.
\end{proof}
We conclude this section with a partial generalization
of Proposition~\ref{Prop:cv-finite}.
The above lemma will be used in combination with the following
simple fact: if $f \in \D{X}$ and $\{x,y\} \in A(f)$, that is,
$f(x) + f(y) = d(x,y)$, then
\begin{equation} \label{eq:xy-fz}
(x \mid y)_z
\le \frac12 \bigl( (f(x) + f(z)) + (f(y) + f(z)) - d(x,y) \bigr) = f(z)
\end{equation}
for every $z \in X$. For a subset $B$ of a metric space $X$
we denote by $\mathscr{C}(B)$ the set of all pointed cones $(v,\co(x,v))$ with
$v \in B$ and $x \in X$.
\begin{Prop} \label{Prop:loc-finite}
Let $X$ be a discretely geodesic metric space with $\beta$-stable
intervals, and assume that all bounded subsets of $X$ are finite.
Fix $z \in X$ and $\alpha > 0$, and let $B$ be the closed ball
$B(z,2\alpha\beta)$. Then $|\mathscr{C}(B)| < \infty$, and
\begin{enumerate}
\item[\rm (1)]
every $f \in \Ex{X}$ with $f(z) \le \alpha$ satisfies
$\operatorname{rk}(A(f)) \le \frac12|\mathscr{C}(B)|$;
\item[\rm (2)]
for every $f \in \Ex{X}$ with $f(z) \le \alpha$ and
$\operatorname{rk}(A(f)) = 0$ there are no more than $2^{|\mathscr{C}(B)|}$ sets $A \in \mathscr{A}(X)$
such that $A \subset A(f)$ and $\operatorname{rk}(A) = 1$.
\end{enumerate}
\end{Prop}
\begin{proof}
By the assumptions on~$X$ and Lemma~\ref{Lem:cone-type}, $\mathscr{C}(B)$ is finite.
Now let $f \in \Ex{X}$, and suppose that $f(z) \le \alpha$.
For every ordered pair $(x,y)$ with $\{x,y\} \in A(f)$,
we choose a point $v_{xy} \in \bw(x,y) \cap B$
by means of Lemma~\ref{Lem:z-beta} and~\eqref{eq:xy-fz}, then we put
\[
\widehat C(x,y) := \bigl(v_{xy},\co(x,v_{xy})\bigr) \in \mathscr{C}(B).
\]
Consider the partition of $X$ induced by $A(f)$,
as in~\eqref{eq:x-partition}.
Let $x,\bar x \in X$, and choose $y,\bar y$ such that
$\{x,y\},\{\bar x,\bar y\} \in A(f)$.
If $\widehat C(x,y) = \widehat C(\bar x,\bar y)$,
Lemma~\ref{Lem:xyxy} shows that $x$ and $\bar x$ are connected by an
$A(f)$-path of length~$2$. Hence, if $x$ and $\bar x$
lie in different sets of the partition, then
$\widehat C(x,y) \ne \widehat C(\bar x,\bar y)$.
It follows that $\operatorname{rk}(A(f)) \le \frac12|\mathscr{C}(B)|$.
Now suppose in addition that $\operatorname{rk}(A(f)) = 0$, and
define $\widehat C(x,y) \in \mathscr{C}(B)$ as above, for every pair $(x,y)$
with $\{x,y\} \in A(f)$.
Let $\mathscr{A}_f'$ denote the set of all $A \in \mathscr{A}(X)$
such that $A \subset A(f)$ and $\operatorname{rk}(A) = 1$.
We show that there is an injective map $S$ from $\mathscr{A}_f'$ into the set
of all (non-empty) subsets of $\mathscr{C}(B)$.
For every $A \in \mathscr{A}_f'$ there is a unique partition
$X = X_0 \cup X_1 \cup X_{-1}$ such that every $g \in \Ex{X}$ with $A(g) = A$
satisfies
\[
g(x) = f(x) + \sigma \|g - f\|_\infty
\]
for $\sigma \in \{0,1,-1\}$ and $x \in X_\sigma$. Define
\[
S(A) := \bigl\{ \widehat C(x,y) :
(x,y) \in X_{-1} \times X_1,\,\{x,y\} \in A \bigr\};
\]
since $A \subset A(f)$, $\widehat C(x,y)$ is defined for all $(x,y)$
with $\{x,y\} \in A$. We claim that
\[
X_{-1} = \bigl\{ x \in X :
\text{$\widehat C(x,y) \in S(A)$ for all $y \in X$ with $\{x,y\} \in A(f)$}
\bigr\}.
\]
Let $x \in X_{-1}$. If $y \in X$ is such that $\{x,y\} \in A(f)$,
then every $g \in \Ex{X}$ with $A(g) = A$ satisfies
$g(y) \ge d(x,y) - g(x) = f(x) + f(y) - g(x) =
f(y) + \|g - f\|_\infty$, so $y \in X_1$ and $\{x,y\} \in A(g) = A$;
thus $\widehat C(x,y) \in S(A)$.
Conversely, suppose that $\bar x \in X$, and
$\widehat C(\bar x,\bar y) \in S(A)$ for all $\bar y \in X$
with $\{\bar x,\bar y\} \in A(f)$. Among all such points $\bar y$,
fix one with $\{\bar x,\bar y\} \in A$.
Since $\widehat C(\bar x,\bar y) \in S(A)$, there is a pair
$(x,y) \in X_{-1} \times X_1$ such that
$\{x,y\} \in A$ and $\widehat C(x,y) = \widehat C(\bar x,\bar y)$.
By Lemma~\ref{Lem:xyxy}, $x$ and $\bar x$
are connected by an $A$-path of length~$2$,
thus $\bar x \in X_{-1}$.
Hence, $X_{-1}$ is characterized in terms of $S(A)$
as claimed. Now one can proceed as in the proof of
Proposition~\ref{Prop:cv-finite}, with $f$ in place of
$d_v$, to show that $S$ is injective. This gives~(2).
\end{proof}
\section{Proofs of the main results} \label{Sect:proofs}
We now prove the results stated in the introduction
and discuss some examples.
\begin{proof}[Proof of Theorem~\ref{Thm:intro-ex}]
Let $X$ be a discretely geodesic metric space with $\beta$-stable intervals,
and suppose that all bounded subsets of $X$ are finite.
Given $f \in \E{X}$, there exists a point $z \in X$ where
$f$ attains its minimum. Fix any $\varepsilon > 0$.
By the first part of Proposition~\ref{Prop:loc-finite} there exists a number
$N$ such that $\operatorname{rk}(A(g)) \le N$ for all $g \in \Ex{X}$ with
$g(z) \le f(z) + \varepsilon$, in particular for all $g \in \Ex{X}$ with
$\|f - g\|_\infty < \varepsilon$. Now Theorem~\ref{Thm:poly} shows that
$\Ex{X} = \E{X}$ and that $\mathscr{P} = \{P(A)\}_{A \in \mathscr{A}(X)}$ is a polyhedral
structure on $\E{X}$ with locally finite dimension and with only finitely
many isometry types of $n$-cells for every $n$.
By Theorem~\ref{Thm:pa} every $n$-cell $P(A)$ is isometric to an
injective polytope in $l_\infty^n$. To show that $\mathscr{P}$ is in fact locally
finite, let $f \in \E{X} = \Ex{X}$ be a vertex of $\mathscr{P}$; that is,
$\operatorname{rk}(A(f)) = 0$. Let again $z$ be a point where $f$ attains the minimum,
and put $\alpha := f(z)$. By the second part of
Proposition~\ref{Prop:loc-finite} there is a number $M$ such that
there are at most $M$ admissible sets $A \subset A(f)$ with $\operatorname{rk}(A) = 1$;
in other words, there are at most $M$ edges in $\mathscr{P}$ issuing from the vertex
$f$. Thus $\mathscr{P}$ is locally finite, and $\E{X}$ is locally compact.
Consequently, as a complete geodesic metric space, $\E{X}$ is proper.
\end{proof}
The following simple example shows that, with the assumptions of
Theorem~\ref{Thm:intro-ex}, the injective hull may be
infinite dimensional.
\begin{Expl} \label{Expl:infinite}
For every integer $n \ge 1$, $W_n := \{0,1\}^n$ with the
$l_1$ distance (the vertex set of the $n$-cube graph) has
$1$-stable intervals, and the injective hull $\E{W_n}$ has dimension
$2^{n-1}$ (compare Example~\ref{Expl:zn}; see also~\cite[Section~5]{GooM}
for more precise information in the case $n = 3$). Now let $X$ be the
space obtained from the disjoint union $\bigcup_{n = 1}^\infty W_n$
by identifying $(1,1,\dots,1) \in W_n$ with
$(0,0,\dots,0) \in W_{n+1}$ for $n = 1,2,\dots$, equipped with the obvious
discretely geodesic metric, so that $X$ contains an isometric copy of each
$W_n$. Clearly $X$ has $1$-stable intervals, bounded subsets of $X$ are
finite, and $\E{X}$ is infinite dimensional.
\end{Expl}
Next we establish Proposition~\ref{Prop:intro-fix}.
Given a metric space $X$ and a group $\Lambda$ of isometries of $X$,
we write $\Lambda x := \{L(x) : L \in \Lambda\}$ for the orbit of $x$
and $\Lambda\!\setminus\!X$ for the set of orbits; furthermore
$\operatorname{Fix}(\Lambda) := \{x \in X : \Lambda x = \{x\}\}$ denotes the fixed
point set.
\begin{proof}[Proof of Proposition~\ref{Prop:intro-fix}]
First we show that if $X$ is a metric space and $\Lambda$ is a subgroup
of the isometry group of $X$ with bounded orbits,
there exists an extremal function $f \in \E{X}$ that is constant on each
orbit. For $\Lambda x,\Lambda y \in \Lambda\!\setminus\!X$, define
\[
D(\Lambda x,\Lambda y) := \sup\{d(x',y') : x' \in \Lambda x,\,y' \in \Lambda y \}.
\]
Note that this is finite since the orbits are bounded, and $D$ has all the
properties of a metric except that $D(\Lambda x,\Lambda x) = \operatorname{diam}(\Lambda x) > 0$
if $x \not\in \operatorname{Fix}(\Lambda)$.
Denote by $\D{\Lambda\!\setminus\!X,D}$ the set of all functions
$G \colon \Lambda\!\setminus\!X \to \mathbb{R}$ such that
\[
G(\Lambda x) + G(\Lambda y) \ge D(\Lambda x,\Lambda y)
\]
for all $\Lambda x,\Lambda y \in \Lambda\!\setminus\!X$. For $z \in X$,
the function defined by $G_z(\Lambda x) := D(\Lambda x,\Lambda z)$
belongs to $\D{\Lambda\!\setminus\!X,D}$, due to the triangle inequality for $D$.
By Zorn's Lemma, the partially ordered set $(\D{\Lambda\!\setminus\!X,D},\le)$
has a minimal element $F$.
Consider the corresponding function $f \colon X \to \mathbb{R}$,
$f(x) := F(\Lambda x)$. For all $x,y \in X$,
\[
f(x) + f(y) = F(\Lambda x) + F(\Lambda y) \ge D(\Lambda x,\Lambda y) \ge d(x,y),
\]
so $f \in \D{X}$. Furthermore, by the minimality of $F$, for every $x \in X$
and $\varepsilon > 0$ there is a point $y \in X$ such that
$F(\Lambda x) + F(\Lambda y) \le D(\Lambda x,\Lambda y) + \varepsilon$ and
$D(\Lambda x,\Lambda y) \le d(x,y) + \varepsilon$,
hence $f(x) + f(y) \le d(x,y) + 2\varepsilon$.
This shows that in fact $f \in \E{X}$.
Now suppose that $X$ is injective. Then the only extremal functions
on $X$ are distance functions, so by the above result
there exists a point $z \in X$ such that $d_z$ is constant on each orbit
of $\Lambda$. In particular, $d_z$ vanishes on the orbit $\Lambda z$, so $\Lambda z = \{z\}$ and $z \in \operatorname{Fix}(\Lambda)$.
We prove that $\operatorname{Fix}(\Lambda)$ is hyperconvex
(recall Proposition~\ref{Prop:hyperconvex}).
Since $\operatorname{Fix}(\Lambda) \ne \emptyset$, it suffices to show that
if $((x_i,r_i))_{i \in I}$ is a non-empty family in $X \times \mathbb{R}$
such that $x_i \in \operatorname{Fix}(\Lambda)$ and $r_i + r_j \ge d(x_i,x_j)$ for all pairs of
indices $i,j \in I$, then $Y := \bigcap_{i \in I}B(x_i,r_i)$ has non-empty
intersection with $\operatorname{Fix}(\Lambda)$.
Note that $Y$ is bounded and hyperconvex, in particular $Y \ne \emptyset$.
For all $i \in I$, $L \in \Lambda$, and $y \in Y$, we have
\[
d(x_i,L(y)) = d(L(x_i),L(y)) = d(x_i,y) \le r_i,
\]
thus $L(Y) \subset Y$.
In other words, for every $L \in \Lambda$, the restriction $L|_Y$ is an
isometric embedding of $Y$ into itself. In fact, since also $L^{-1}(Y) \subset Y$,
$L|_Y$ is an isometry of $Y$. Since $Y$ is bounded and injective,
the group $\{L|_Y : L \in \Lambda\}$ must have a fixed point,
as we already know, so $Y \cap \operatorname{Fix}(\Lambda) \ne \emptyset$.
\end{proof}
We proceed to $\delta$-hyperbolic metric spaces, as defined in~\eqref{eq:hyp}.
\begin{proof}[Proof of Proposition~\ref{Prop:intro-hyp}]
To show that $\E{X}$ is $\delta$-hyperbolic, let $e,f,g,h \in \E{X}$,
and let $\varepsilon > 0$. There exist $w,x \in X$ such
that either $\|e - f\|_\infty \le e(x) - f(x) + \varepsilon$ and
$e(x) \le d(w,x) - e(w) + \varepsilon$, or $\|e - f\|_\infty \le f(w) - e(w) + \varepsilon$
and $f(w) \le d(w,x) - f(x) + \varepsilon$. Thus, in either case,
\[
\|e - f\|_\infty \le d(w,x) - e(w) - f(x) + 2\varepsilon.
\]
Likewise, $\|g - h\|_\infty \le d(y,z) - g(y) - h(z) + 2\varepsilon$ for some
$y,z \in X$. Put $\Sigma := e(w) + f(x) + g(y) + h(z)$.
Using the $\delta$-hyperbolicity of $X$ we obtain
\begin{align*}
&\|e - f\|_\infty + \|g - h\|_\infty \le d(w,x) + d(y,z) - \Sigma + 4\varepsilon \\
&\qquad\le \max\{d(w,y) + d(x,z),d(x,y) + d(w,z)\} - \Sigma + \delta + 4\varepsilon.
\end{align*}
Now $d(w,y) + d(x,z) - \Sigma \le e(y) + f(z) - g(y) - h(z) \le \|e-g\|_\infty +
\|f-h\|_\infty$ and
$d(x,y) + d(w,z) - \Sigma \le -e(w) - f(x) + g(x) + h(w) \le \|f-g\|_\infty +
\|e-h\|_\infty$. Since $\varepsilon > 0$ was arbitrary, this gives the desired
inequality for $e,f,g,h$.
Suppose, in addition, that $X$ is geodesic or discretely geodesic.
Put $\nu := 0$ in the former and $\nu := \frac12$ in the latter case.
Let $f \in \E{X}$. For $\varepsilon > 0$, choose $x,y \in X$ such that
$f(x) + f(y) \le d(x,y) + \varepsilon$. Since $f(x) + f(y) \ge d(x,y)$,
there is a point $v \in \bw(x,y)$ such that $d(v,x) \le f(x) + \nu$
and $d(v,y) \le f(y) + \nu$.
Using the $\delta$-hyperbolicity of the quadruple
$\{f,d_v,d_x,d_y\} \subset \E{X}$, together with~\eqref{eq:fdf}, we get
\begin{align*}
f(v) + d(x,y)
&\le \max\{ f(x) + d(v,y), f(y) + d(v,x) \} + \delta \\
&\le f(x) + f(y) + \delta + \nu \\
&\le d(x,y) + \varepsilon + \delta + \nu,
\end{align*}
thus $f(v) \le \varepsilon + \delta + \nu$. Hence, for every $\varepsilon > 0$ there exists
$v \in X$ such that $\|f - d_v\|_\infty = f(v) \le \varepsilon + \delta + \nu$.
\end{proof}
Now let $\Gamma_S = (\Gamma,d_S)$ be a finitely generated group with the word
metric. By Proposition~\ref{Prop:isometries}, the isometric action
$(x,y) \mapsto L_x(y) := xy$ of $\Gamma$ on $\Gamma_S$ induces an
isometric action
\[
(x,f) \mapsto \bar{L}_x(f) = f \circ L_x^{-1}
\]
of $\Gamma$ on the injective hull $\E{\Gamma_S}$. Suppose that $\Gamma_S$ has
$\beta$-stable intervals. Then we already know
that $\E{\Gamma_S}$ is proper and that $\mathscr{P} = \{P(A)\}_{A \in \mathscr{A}(\Gamma_S)}$
is a locally finite polyhedral structure on $\E{\Gamma_S}$ with finitely
many isometry types of $n$-cells for every $n$.
Before we proceed with the proof of Theorem~\ref{Thm:intro-groups},
we recall the construction of the first barycentric subdivision
of $\E{\Gamma_S}$.
The barycenter $b$ of a finite family $(v_i)_{i=1}^k$
of points in a vector space $Z$ is defined as $b := \frac1k\sum_{i=1}^k v_i$
or, equivalently, as the unique point $b$ such that $\sum_{i=1}^k(v_i-b) = 0$.
If $L \colon Z \to Z$ is an affine map, the barycenter of
$(L(v_i))_{i=1}^k$ equals $L(b)$. If, in addition, $L(v_i) = v_{\sigma(i)}$
for some permutation $\sigma$ of $\{1,\dots,k\}$, then $L(b) = b$.
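Explicitly, writing $L(z) = Mz + c$ with $M$ linear and $c \in Z$, one computes
\[
\frac1k\sum_{i=1}^k L(v_i) = M\Bigl(\frac1k\sum_{i=1}^k v_i\Bigr) + c = L(b),
\]
and if in addition $L(v_i) = v_{\sigma(i)}$ for a permutation $\sigma$, then the left-hand side equals $b$, whence $L(b) = b$.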
Now, for every cell $P(A)$ of $\mathscr{P}$,
the barycenter $b(A)$ of $P(A)$ is defined as the barycenter in $\mathbb{R}^\Gamma$ of
the vertex set of $P(A)$. Clearly $b(A)$ is a point in the interior of
$P(A)$ relative to the affine hull $H(A)$ of $P(A)$. Every isometry $L$ of
$P(A)$ is the restriction of an affine transformation of $H(A)$ that permutes
the vertices of $P(A)$, so $L(b(A)) = b(A)$. The first barycentric
subdivision $\mathscr{P}^1$ of $\mathscr{P}$ is the collection of all simplices
$[b(A_0),b(A_1),\dots,b(A_j)]$ corresponding to strictly ascending
sequences $P(A_0) \subset P(A_1) \subset \ldots \subset P(A_j)$ of cells in $\mathscr{P}$.
We write $\E{\Gamma_S}^1$ for the metric space $\E{\Gamma_S}$
equipped with the simplicial structure $\mathscr{P}^1$.
\begin{proof}[Proof of Theorem~\ref{Thm:intro-groups}]
First we show that for every bounded set $B \subset \E{\Gamma_S}$
there are only finitely many $x \in \Gamma$ such that
$\bar{L}_x(B) \cap B \ne \emptyset$. Let $R > 0$ be such that
$\|f - d_1\|_\infty \le R$ for all $f \in B$,
where $d_1$ is the distance function to $1 \in \Gamma$.
We have $\bar{L}_x(d_1) = d_x$. Hence, if $f \in \bar{L}_x(B) \cap B$,
then also $\|f - d_x\|_\infty \le R$ and so
$d_S(1,x) = \|d_1 - d_x\|_\infty \le 2R$.
This gives the result. Since compact sets are bounded, it holds in particular for compact $B$,
so the action of $\Gamma$ on $\E{\Gamma_S}$ is proper (recall also that
$\E{\Gamma_S}$ is itself proper).
For $x \in \Gamma$ and $f,g \in \E{\Gamma_S}$,
it follows from the left-invariance of $d_S$ that $A(f) = A(g)$ if and only if
$A(f \circ L_x^{-1}) = A(g \circ L_x^{-1})$, which is in turn equivalent to
$A(\bar{L}_x(f)) = A(\bar{L}_x(g))$. So $\bar L_x$ maps cells in $\mathscr{P}$
onto cells.
If we pass to the first barycentric subdivision of $\E{\Gamma_S}$, the
group $\Gamma$ still acts by cellular---now simplicial---isometries
on $\E{\Gamma_S}^1$ via $x \mapsto \bar L_x$. In addition, if $\bar L_x$ maps
a simplex in $\mathscr{P}^1$ to itself, then $\bar L_x$ fixes the simplex pointwise.
Thus $\E{\Gamma_S}^1$ is a $\Gamma$-CW-complex. For every finite
subgroup $\Lambda$ of $\Gamma$, the fixed point subcomplex $\operatorname{Fix}(\Lambda)$
is contractible by Proposition~\ref{Prop:intro-fix}. Since $\Gamma$ acts
properly, this shows that $\E{\Gamma_S}^1$ is a model for the classifying space
$\underbar{\rm E}\Gamma$ for proper $\Gamma$-actions (see~\cite[Section~1]{Lue}).
Now suppose that $\Gamma_S$ is $\delta$-hyperbolic.
It follows from Proposition~\ref{Prop:intro-hyp} that for every
$f \in \E{\Gamma_S}$ there is a point $z \in \Gamma_S$ with
$\|f - d_z\| \le \delta + \frac12$. Hence
the closed ball in $\E{\Gamma_S}$ with center $d_1$ and radius
$\delta + \frac12$ is a compact set whose $\Gamma$-orbit covers $\E{\Gamma_S}$.
As in the second half of the proof of Proposition~\ref{Prop:intro-hyp} we see
that whenever $f \in \E{\Gamma_S}$ and $\{x,y\} \in A(f)$,
there exists a point $v_{xy} \in \bw(x,y)$ with
$f(v_{xy}) \le \delta + \frac12$. If $\{x',y'\}$ is another element of $A(f)$,
then $d(v_{xy},v_{x'y'}) \le f(v_{xy}) + f(v_{x'y'}) \le 2\delta + 1$.
The argument of the first part of Proposition~\ref{Prop:loc-finite}
then shows that
\begin{equation} \label{eq:dim-bound}
\operatorname{rk}(A(f)) \le \frac12 \cdot
\max\{|B| : B \subset \Gamma_S,\,\operatorname{diam}(B) \le 2\delta + 1\}
\cdot |\mathscr{C}(1)|
\end{equation}
for all $f \in \E{\Gamma_S}$,
where $|\mathscr{C}(1)|$ is the number of cone types of $(\Gamma,S)$.
So the dimension of $\E{\Gamma_S}$ is bounded by the right side
of~\eqref{eq:dim-bound} too.
\end{proof}
In order for the injective hull $\E{\Gamma_S}$
to lie within finite distance of ${\rm e}(\Gamma_S)$, $\Gamma_S$ need not be
word hyperbolic, as is shown by $\mathbb{Z}^2$ (compare Example~\ref{Expl:zn}).
A necessary condition is given next.
\begin{Rem}
Let $\Gamma_S = (\Gamma,d_S)$ be a finitely generated group with the word
metric, and suppose that there is a constant $D$ such that for every
$f \in \E{\Gamma_S}$ there is an element $z \in \Gamma_S$ with
$\|f - d_z\|_\infty = f(z) \le D$. Note that $\Gamma$ acts coboundedly
on $\E{\Gamma_S}$. It follows from Proposition~\ref{Prop:bicombing} that
there is a map $\sigma \colon \Gamma_S \times \Gamma_S \times [0,1] \to \Gamma_S$
with the following properties: for every pair $(x,y) \in \Gamma_S \times \Gamma_S$,
the map $\sigma_{xy} := \sigma(x,y,\cdot)$ satisfies $\sigma_{xy}(0) = x$,
$\sigma_{xy}(1) = y$, and $|d_S(\sigma_{xy}(s),\sigma_{xy}(t)) - (t-s)d_S(x,y)| \le 2D$
for $0 \le s \le t \le 1$; furthermore,
\[
d_S(\sigma_{xy}(t),\sigma_{x'y'}(t)) \le (1-t)d_S(x,x') + td_S(y,y') + 2D
\]
and $z \cdot \sigma_{xy}(t) = \sigma_{zx,zy}(t)$
for $x,y,x',y' \in \Gamma_S$, $t \in [0,1]$, and $z \in \Gamma$.
In particular, $\Gamma_S$ is semihyperbolic in the sense of
Alonso--Bridson~\cite{AloB}. On the other hand, $\mathbb{Z}^n$ for $n \ge 3$
is an example of a semihyperbolic group that does not act coboundedly
on its injective hull.
\end{Rem}
For a finitely generated group $\Gamma_S$ with $\beta$-stable intervals that
is not word hyperbolic, Theorem~\ref{Thm:intro-groups} leaves open
the possibility of $\E{\Gamma_S}$ being infinite dimensional.
At present, no example of this type is known.
However, there are simple instances of finitely presented groups
(without uniformly stable intervals) whose injective hull
fails to be finite dimensional or locally compact.
We also note that groups with $\beta$-stable intervals are easily
seen to be almost convex in the sense of Cannon~\cite{Can2}
and hence finitely presented.
\begin{Expl}
Let $\Gamma_S$ be the Baumslag--Solitar group
$\langle x,y \mid yx = x^2y \rangle$ with
generating set $S = \{x,y\}$.
Fix an integer $n \ge 1$. The word
$w_n := u_nxu_n^{-1}x^{-1}$, where $u_n := y^nx^2y^{-n}$,
represents the identity. Let $\gamma \colon \{0,1,\dots,l\}
\to \Gamma_S$ be the corresponding discrete loop of length $l := 4n + 6$.
This is similar to the loop depicted
in~\cite[Figure~7.8]{Eps+} (where $u_n$ is chosen to be $y^nxy^{-n}$).
By inspecting this picture, one sees that for $k = 0,\dots,n$,
the two points $\gamma(k) = y^k$ and $\gamma(\frac12 l + k) = u_nxy^k =
x^{2^{n+1} + 1}y^k$ are at distance $\frac12 l$ from each other.
It follows that the constant function $f = \frac14 l$
on $Y := \bigcup_{k=0}^n\{\gamma(k),\gamma(\frac12 l + k)\}$
is an element of $\E{Y}$ with $\operatorname{rk}(A(f)) = n + 1$.
As $n \ge 1$ was arbitrary, $\E{\Gamma_S}$ cannot be finite dimensional.
\end{Expl}
The following example shows that for a finitely presented group $\Gamma_S$
with infinitely many cone types, $\E{\Gamma_S}$ need not be locally finite
near points in ${\rm e}(\Gamma_S)$. This contrasts with the second assertion of
Proposition~\ref{Prop:cv-finite}.
\begin{Expl} \label{Expl:not-loc-cpt}
Consider the group $\Gamma = \langle a,b,t \mid ab = ba,\, t^2=1,\, tab = abt \rangle$
with generating set $S = \{a,b,t\}$.
For every integer $m \ge 1$, put $x_m := a^{-m}t$ and $y_m := b^mt$.
Note that $d_S(1,x_m) = d_S(1,y_m) = m + 1$,
\[
d_S(x_m,y_m) = d_S(1,ta^mb^mt) = d_S(1,t(ab)^mt) = 2m
\]
for all $m \ge 1$, and $d_S(x_m,y_n) = m + n + 2$ if $n \ne m$.
Hence, $\co(x_m,1)$ contains $\{y_n : n \ne m\}$ but
not $y_m$, so $\Gamma_S$ has infinitely many cone types.
Now let $f_m \in \E{\Gamma_S}$ be a median point of the triple
of distance functions $d_1,d_{x_m},d_{y_m}$. Then $\|f_m - d_1\|_\infty = 1$
and $\|f_m - d_{x_m}\|_\infty = \|f_m - d_{y_m}\|_\infty = m$,
and it follows from the triangle inequality that $\|f_n - f_m\|_\infty = 2$
whenever $n \ne m$. Hence, there is an isometrically embedded simplicial
tree with infinite valence at the vertex $d_1$, which therefore has
no compact neighborhood in $\E{\Gamma_S}$.
Nevertheless, I suspect $\E{\Gamma_S}$ to be a polyhedral complex of
finite dimension (equal to~3).
\end{Expl}
\bigskip
{\bf Acknowledgements.}
I thank Mario Bonk, Arvin Moezzi, Pierre Pansu, and Viktor Schroeder
for inspiring discussions and valuable comments.
Parts of this paper were written
during visits to the Institut Henri Poincar\'{e} in Paris, the Max Planck
Institute for Mathematics in Bonn, and the University of Seville.
I gratefully acknowledge support from these institutions and from the Swiss
National Science Foundation.
\addcontentsline{toc}{section}{References}
| {
"timestamp": "2012-06-29T02:07:52",
"yymm": "1107",
"arxiv_id": "1107.5971",
"language": "en",
"url": "https://arxiv.org/abs/1107.5971",
"abstract": "Injective metric spaces, or absolute 1-Lipschitz retracts, share a number of properties with CAT(0) spaces. In the 1960es, J. R. Isbell showed that every metric space X has an injective hull E(X). Here it is proved that if X is the vertex set of a connected locally finite graph with a uniform stability property of intervals, then E(X) is a locally finite polyhedral complex with finitely many isometry types of n-cells, isometric to polytopes in l^n_\\infty, for each n. This applies to a class of finitely generated groups G, including all word hyperbolic groups and abelian groups, among others. Then G acts properly on E(G) by cellular isometries, and the first barycentric subdivision of E(G) is a model for the classifying space \\underbar{E}G for proper actions. If G is hyperbolic, E(G) is finite dimensional and the action is cocompact. In particular, every hyperbolic group acts properly and cocompactly on a space of non-positive curvature in a weak (but non-coarse) sense.",
"subjects": "Group Theory (math.GR); Metric Geometry (math.MG)",
"title": "Injective hulls of certain discrete metric spaces and groups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126531738088,
"lm_q2_score": 0.724870282120402,
"lm_q1q2_score": 0.709439717020906
} |
https://arxiv.org/abs/1912.00060 | Uniformly vertex-transitive graphs | We introduce uniformly vertex-transitive graphs as vertex-transitive graphs satisfying a stronger condition on their automorphism groups, motivated by a problem which arises from a Sinkhorn-type algorithm. We use the derangement graph $D(\Gamma)$ of a given graph $\Gamma$ to show that the uniform vertex-transitivity of $\Gamma$ is equivalent to the existence of cliques of sufficient size in $D(\Gamma)$. Using this method, we find examples of graphs that are vertex-transitive but not uniformly vertex-transitive, settling a previously open question. Furthermore, we develop sufficient criteria for uniform vertex-transitivity in the situation of a graph with an imprimitive automorphism group. We classify the non-Cayley uniformly vertex-transitive graphs on less than 30 vertices outside of two complementary pairs of graphs. | \section{Introduction and main results}\label{sec:int}
The motivation for our article arose from \cite{NSW19}, where a Sinkhorn-type algorithm was presented for determining whether or not a given finite graph has quantum symmetries. This algorithm can only be applied to graphs which are vertex-transitive in a stronger sense we want to specify in the present article. We introduce it independently of the framework of \cite{NSW19}.
Specifically, a graph $\Gamma$ on $n$ vertices is uniformly vertex-transitive if there exist $n$ graph automorphisms $\sigma_1,\ldots,\sigma_n\in\Aut(\Gamma)$ such that, when viewed as matrices, we have $\sum\sigma_i=J_n$, the $n\times n$ matrix with all entries 1. In \cref{prop:implications} we show that we have the following inclusions of classes of graphs:
\[ \small\{\text{Cayley graphs}\} \subset \{\text{uniformly vertex-transitive graphs}\} \subset \{\text{vertex-transitive graphs}\} \]
We show that both of these inclusions are strict, with the Petersen graph and its line graph as witnesses. For the former, we explicitly prove that the Petersen graph is uniformly vertex-transitive in \cref{prop:petersen}, by constructing a set of automorphisms in the above sense, while it is well-known that the Petersen graph is non-Cayley (see \cite{MP94} for instance). For the latter, it is well-known that the Petersen graph is edge-transitive (see \cite{HS93}, for instance), which implies that its line graph is vertex-transitive, but showing that its line graph is not uniformly vertex-transitive is more subtle.
We do so by developing some criteria for a graph to be uniformly vertex-transitive. In \cref{sec:comp}, we see that the derangement graph $D(\Gamma)$ of a graph $\Gamma$ encodes information about the Schur product of automorphisms of $\Gamma$. In the main theorem of this article (\cref{thm:clique_nhd}), we prove that $\Gamma$ is uniformly vertex-transitive if and only if $D(\Gamma)$ contains a clique of size $|V(\Gamma)|$.
Additionally, in \cref{sec:imp} we develop sufficient criteria for a graph to be uniformly vertex-transitive in the case where its automorphism group is imprimitive. In particular, we show in \cref{thm:factor} that the existence of an $\Aut(\Gamma)$-invariant partition of the vertex set satisfying some extra conditions implies that $\Gamma$ is uniformly vertex-transitive.
\begin{table}[ht]\begin{tabular}{|c|cc|} \hline
Vertices & UVT & non-UVT \\ \hline\hline
10 & 2 & \\
15 & & 4 \\
16 & 8 & \\
18 & 4 & \\
20 & 70 & 12 \\
24 & 112 & \\
26 & 132 & \\
28 & $\geq 24$ & $\geq 38$ \\
30 & $\geq 324$ & $\geq 730$ \\\hline
\end{tabular}
\\[3mm]\caption{The counts of uniformly vertex-transitive (UVT) and non-uniformly vertex-transitive (non-UVT) graphs on numbers of vertices for which there exist vertex-transitive non-Cayley graphs.}\label{tab:simple_counts}\end{table}
We additionally present some experimental results, based on a search for uniformly vertex-transitive graphs in a database \cite{R97} of vertex-transitive non-Cayley graphs on up to 30 vertices. We present the counts of graphs in the database that were identified to be uniformly vertex-transitive or non-uniformly vertex-transitive in \cref{tab:simple_counts}. In particular, we fully classify the non-Cayley uniformly vertex-transitive graphs on less than 30 vertices outside of two complementary pairs of graphs on 28 vertices with large automorphism groups. More detailed information about experimental results can be found in \cref{sec:exp}.
In \cref{sec:further}, we introduce the property of $k$-uniform vertex-transitivity, one possible generalization of uniform vertex-transitivity, and record some basic facts about this property.
\section{The notion of uniform vertex-transitivity}\label{sec:uvt}
In this section we define the notion of uniformly vertex-transitive graphs. Before doing so, let us recall the notions of Cayley graphs and vertex-transitive graphs. Throughout this article, we restrict to finite simple graphs. For a graph $\Gamma$, we denote its vertex set and edge set by $V(\Gamma)$ and $E(\Gamma)$, respectively.
\begin{defn}
Let $G$ be a finite group and $S\subset G$ be a subset of $G$. The (uncolored, undirected) \emph{Cayley graph} $C(G,S)$ is a graph with vertex set $G$ in which two group elements $a,b\in G$ are adjacent if there exists $s\in S$ such that either $a=bs$ or $b=as$. We also say that a finite graph $\Gamma$ is \emph{Cayley} if $\Gamma$ is isomorphic to $C(G,S)$ for a group $G$ and generating set $S$.
\end{defn}
Note that it is also common to define a Cayley graph as directed to indicate which equality holds and colored to indicate the element $s\in S$. For our purposes however, we will work with this uncolored, undirected version of the definition. We have the following well-known characterization of Cayley graphs by Sabidussi. We give a proof for the convenience of the reader. Recall that a transitive group action is \emph{regular} if all of the point stabilizers are trivial.
\begin{prop}[{\cite[Lemma 4]{S58}}]
\label{prop:cay-reg}
A graph $\Gamma$ is a Cayley graph if and only if a subgroup of $\Aut(\Gamma)$ acts regularly on $V(\Gamma)$.
\end{prop}
\begin{proof}
Suppose that $\Gamma$ is Cayley, so $\Gamma$ is isomorphic to $C(G,S)$ for some group $G$ and subset $S\subset G$. Observe that for adjacent vertices $a,b\in G$ with $b=as$ for some $s\in S$, and for any $g\in G$, we have $gb=(ga)s$ as well, so $ga$ and $gb$ are again adjacent. Hence each $g\in G$ induces an automorphism of $\Gamma$ via left multiplication. Thus, $G$ is a subgroup of $\Aut(\Gamma)$. Because the vertices of $\Gamma$ are identified with elements of $G$, this action is equivalent to the left regular action of $G$ on itself. Thus, $G$ is a regular subgroup of $\Aut(\Gamma)$.
In the opposite direction, suppose that $\Aut(\Gamma)$ has a subgroup $G$ with a regular action on $V(\Gamma)$. We construct a graph isomorphism $\varphi:C(G,S)\to\Gamma$ for a suitable subset $S\subset G$. Pick an arbitrary vertex $v_0\in V(\Gamma)$, which we will `label' as the identity in $G$. For every vertex $v \in V(\Gamma)$, there exists a unique $g_v\in G$ which sends $v_0\mapsto v$, by the regularity of the action of $G$. The regularity of $G$ also gives that $|G|=|V(\Gamma)|$. Let $S=\{g_w : w \in V(\Gamma), w \sim v_0\}$ and define $\varphi(g_v) := g_v(v_0) = v$. For vertices $u,v\in V(\Gamma)$ we have $u\sim v$ if and only if $v_0 = g_u^{-1}(u) \sim g_u^{-1}(v)$, which holds if and only if $g_v = g_u g_w$ for some $g_w\in S$. Hence $\varphi$ is a graph isomorphism $C(G,S)\simeq\Gamma$, so $\Gamma$ is Cayley.
\end{proof}
It is clear from the above proposition that the automorphism group of a Cayley graph is transitive. Indeed, every Cayley graph also has the weaker property of being vertex-transitive, which we define below.
\begin{defn}
A graph $\Gamma$ is \emph{vertex-transitive} if for any vertices $u,v\in V(\Gamma)$, there exists an automorphism $\sigma\in\Aut(\Gamma)$ such that $\sigma(u)=v$. In other words, $\Aut(\Gamma)$ acts transitively on the vertices of $\Gamma$.
\end{defn}
Having seen the definition of a vertex-transitive graph, we are ready to define what it means for a graph to be uniformly vertex-transitive.
\begin{defn}
Let $\Gamma$ be a graph on $n$ vertices. The graph $\Gamma$ is \emph{uniformly vertex-transitive} if there exists a size $n$ subset $\{\sigma_1,\ldots,\sigma_n\} \subset \Aut(\Gamma)$ such that, when viewed as matrices, $\sum \sigma_i = J_n$, where $J_n$ is the $n\times n$ matrix with all entries 1. Such a subset $\{\sigma_1,\ldots,\sigma_n\}$ is called a \emph{maximal Schur set}\footnote{Maximal Schur sets are also known elsewhere in the literature as \emph{sharply transitive sets}.}.
\end{defn}
We motivate the naming of this property with the following observation. If $\Gamma$ is vertex-transitive and $j\in V(\Gamma)$, we find $n$ automorphisms $\{ \sigma_1^j,\ldots, \sigma_n^j \}$ for which $\sum_i \sigma_i^j$ has all entries 1 in row $j$. Then, $\Gamma$ is uniformly vertex-transitive if it is possible to make a \emph{uniform} choice of $\{\sigma_1,\ldots,\sigma_n\} \subset \Aut(\Gamma)$ such that $\sum \sigma_i$ has all entries 1 in \emph{each} row. A maximal Schur set is named as such to reflect the fact that the matrix $J_n$ is the identity under the Schur (i.e. entry-wise) product of $n\times n$ matrices. We make the relationship between these properties of such sets of automorphisms rigorous with the following result.
\begin{lem} \label{lem:eq-conditions}
Let $\Gamma$ be a finite graph on $n$ vertices and let $S = \{\sigma_1,\ldots,\sigma_n\}\subset\Aut(\Gamma)$. The following are equivalent:
\begin{enumerate}[(a)]
\item $\sum \sigma_i = J_n$.
\item For all $u,v\in V(\Gamma)$, there exists an $i$ for which $\sigma_i(u)=v$.
\item For all $u,v\in V(\Gamma)$, there exists a unique $i$ for which $\sigma_i(u)=v$.
\end{enumerate}
\end{lem}
\begin{proof}
Suppose that $\sum \sigma_i = J_n$, and fix $u,v\in V(\Gamma)$. The entry of $J_n$ corresponding to $(u,v)$ will clearly be 1, so there is a term $\sigma_i$ in the sum $\sum\sigma_i=J_n$ for which the $(u,v)$ entry is 1, thus $\sigma_i(u)=v$. Since the entries of the $\sigma_i$ are nonnegative and the $(u,v)$ entry of the sum is exactly 1, this $\sigma_i$ is unique. This establishes (a)$\implies$(c).
The implication (c)$\implies$(b) is clear.
To show (b)$\implies$(a), fix a vertex $u\in V(\Gamma)$. Since for every $v\in V(\Gamma)$, there is an $i$ for which $\sigma_i(u)=v$ and there are as many vertices in $V(\Gamma)$ as there are elements in $S$, the row corresponding to $u$ in the sum $\sum\sigma_i$ must have all 1s. Therefore $\sum\sigma_i=J_n$.
\end{proof}
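Condition (a) is easy to test mechanically once automorphisms are represented as permutation matrices. The following minimal Python sketch (the computations reported later use SageMath, which is Python-based; the helper names here are ours and purely illustrative, not code from our actual implementation) checks whether a given family of permutations of $\{0,\dots,n-1\}$ is a maximal Schur set, and runs the check on the rotations of a $4$-cycle.
\begin{verbatim}
import numpy as np

def perm_matrix(sigma):
    # matrix with a 1 in position (i, sigma[i]) for every vertex i
    n = len(sigma)
    M = np.zeros((n, n), dtype=int)
    for i, j in enumerate(sigma):
        M[i, j] = 1
    return M

def is_maximal_schur_set(perms, n):
    # condition (a): the permutation matrices sum to the all-ones matrix J_n
    if len(perms) != n:
        return False
    total = sum(perm_matrix(p) for p in perms)
    return np.array_equal(total, np.ones((n, n), dtype=int))

# the four rotations of a 4-cycle (a Cayley graph of Z/4Z) form a maximal Schur set
rotations = [tuple((i + k) % 4 for i in range(4)) for k in range(4)]
print(is_maximal_schur_set(rotations, 4))  # True
\end{verbatim}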
As mentioned above, every Cayley graph is vertex-transitive. We observe that the property of being uniformly vertex-transitive sits between these two other graph properties.
\begin{prop} \label{prop:implications}
Let $\Gamma$ be a finite graph on $n$ vertices. \begin{enumerate}[(a)]
\item If $\Gamma$ is a Cayley graph, then $\Gamma$ is uniformly vertex-transitive.
\item If $\Gamma$ is a uniformly vertex-transitive graph, then $\Gamma$ is vertex-transitive.
\end{enumerate}
\end{prop}
\begin{proof}
Let $\Gamma$ be a Cayley graph. From \cref{prop:cay-reg}, we have that $\Aut(\Gamma)$ contains a regular subgroup which we will denote by $G$. By the definition of a regular action, $|G|=n$ and $G$ satisfies (c) of \cref{lem:eq-conditions}, so $G$ is a maximal Schur set. Thus $\Gamma$ is uniformly vertex-transitive, proving (a).
Suppose $\Gamma$ is uniformly vertex-transitive, so it follows there exists a maximal Schur set $S\subset \Aut(\Gamma)$. Thus by \cref{lem:eq-conditions}, $S$ also satisfies property (b) of the lemma, so $\Gamma$ is vertex-transitive. This proves part (b).
\end{proof}
Hence we have the following chain of implications of graph properties:
\[ \text{Cayley} \Longrightarrow \text{uniformly vertex-transitive} \Longrightarrow \text{vertex-transitive} \]
It is a natural question to ask whether the converse of any of these implications holds. In \cite{NSW19}, the authors were particularly interested in the latter implication. Comparing Cayley graphs and vertex-transitive graphs only, it is well known that the Petersen graph is the smallest vertex-transitive graph which is not Cayley. Such graphs have been extensively studied (see \cite{MP94}, for instance). In fact, the Petersen graph is even uniformly vertex-transitive, which shows that the first of the above implications is not an equivalence.
\begin{prop} \label{prop:petersen}
The Petersen graph is a uniformly vertex-transitive graph that is not a Cayley graph.
\end{prop}
\begin{proof}
Recall that the Petersen graph $P$ is realized with $V(P)$ consisting of the two-element subsets of $\{1,2,3,4,5\}$ in which two subsets are adjacent if they are disjoint. It is well known that $\Aut(P)\simeq S_5$ and that the action of $\Aut(P)$ on $V(P)$ is induced by the standard action of $S_5$ on $\{1,2,3,4,5\}$. Let $\alpha, \beta \in S_5$ be given by $\alpha = (1\,5\,3\,4)$ and $\beta = (1\,2\,3\,4\,5)$. When viewed as permutations of the 10 vertices of the Petersen graph, $\alpha$ and $\beta$ are given by the permutation matrices
\[ \Tiny \alpha = \begin{blockarray}{ccccccccccc}
& \scriptstyle{12}
& \scriptstyle{23}
& \scriptstyle{34}
& \scriptstyle{45}
& \scriptstyle{15}
& \scriptstyle{13}
& \scriptstyle{24}
& \scriptstyle{35}
& \scriptstyle{14}
& \scriptstyle{25} \\
\begin{block}{c[cccccccccc]}
\scriptstyle{12}&&&&&&&1&&& \\
\scriptstyle{23}&&&&&&&&&&1 \\
\scriptstyle{34}&&&&&&&&1&& \\
\scriptstyle{45}&&&&&&1&&&& \\
\scriptstyle{15}&&&&&&&&&1& \\
\scriptstyle{13}&&&&1&&&&&& \\
\scriptstyle{24}&&1&&&&&&&& \\
\scriptstyle{35}&&&&&1&&&&& \\
\scriptstyle{14}&&&1&&&&&&& \\
\scriptstyle{25}&1&&&&&&&&& \\
\end{block}
\end{blockarray} \qquad \beta = \begin{blockarray}{ccccccccccc}
& \scriptstyle{12}
& \scriptstyle{23}
& \scriptstyle{34}
& \scriptstyle{45}
& \scriptstyle{15}
& \scriptstyle{13}
& \scriptstyle{24}
& \scriptstyle{35}
& \scriptstyle{14}
& \scriptstyle{25} \\
\begin{block}{c[cccccccccc]}
\scriptstyle{12}&&&&&1&&&&& \\
\scriptstyle{23}&1&&&&&&&&& \\
\scriptstyle{34}&&1&&&&&&&& \\
\scriptstyle{45}&&&1&&&&&&& \\
\scriptstyle{15}&&&&1&&&&&& \\
\scriptstyle{13}&&&&&&&&&&1 \\
\scriptstyle{24}&&&&&&1&&&& \\
\scriptstyle{35}&&&&&&&1&&& \\
\scriptstyle{14}&&&&&&&&1&& \\
\scriptstyle{25}&&&&&&&&&1& \\
\end{block}
\end{blockarray}
\]
in which we write 12 for the vertex $\{1,2\}$ of the Petersen graph, for instance. From these matrix descriptions, it is clear that
\begin{equation*}
\sum_{i=0}^4 \beta^i = \begin{bmatrix} J_5 & \\ & J_5 \end{bmatrix} \qquad \sum_{i=0}^4 \alpha\beta^i = \begin{bmatrix} & J_5 \\ J_5 & \end{bmatrix}
\end{equation*}
where $J_k$ is the $k\times k$ matrix with all entries 1. Therefore, we have
\begin{equation*}
\sum_{i=0}^1\sum_{j=0}^4 \alpha^i\beta^j = J_{10},
\end{equation*}
which shows that $\{\alpha^i \beta^j : 0 \leq i \leq 1 \text{ and } 0 \leq j \leq 4\} \subset \Aut(P)$ is a maximal Schur set. It follows that $P$ is uniformly vertex-transitive.
That $P$ is not a Cayley graph is well known, see for instance \cite{MP94}.
\end{proof}
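The above verification can be reproduced in a few lines of Python by letting $\alpha$ and $\beta$ act directly on the ten $2$-element subsets of $\{1,\dots,5\}$; this is only an illustrative sketch with helper names of our own choosing, not the code used for our computations.
\begin{verbatim}
from itertools import product

verts = [frozenset(p) for p in
         [(1,2),(2,3),(3,4),(4,5),(1,5),(1,3),(2,4),(3,5),(1,4),(2,5)]]
alpha = {1: 5, 5: 3, 3: 4, 4: 1, 2: 2}   # the 4-cycle (1 5 3 4)
beta  = {1: 2, 2: 3, 3: 4, 4: 5, 5: 1}   # the 5-cycle (1 2 3 4 5)

def compose(p, q):
    # permutation p applied after q
    return {x: p[q[x]] for x in q}

def power(p, k):
    r = {x: x for x in range(1, 6)}
    for _ in range(k):
        r = compose(p, r)
    return r

# a maximal Schur set sends every vertex to every vertex exactly once
counts = {(u, v): 0 for u in verts for v in verts}
for i, j in product(range(2), range(5)):
    g = compose(power(alpha, i), power(beta, j))
    for u in verts:
        counts[(u, frozenset(g[x] for x in u))] += 1

print(all(c == 1 for c in counts.values()))  # True
\end{verbatim}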
The remaining question is whether there exist vertex-transitive graphs which are not uniformly vertex-transitive. We give an affirmative answer to this question in the following section, as well as techniques used to show this result and to determine whether or not general graphs are uniformly vertex-transitive.
\begin{rmk}
If $\Gamma$ is a vertex-transitive graph for which $|V(\Gamma)|=|\!\Aut(\Gamma)|$, \cref{prop:cay-reg} implies that $\Gamma$ is a Cayley graph for $\Aut(\Gamma)$. In this case, $\Gamma$ is furthermore uniformly vertex-transitive.
\end{rmk}
\section{Computational methods for detecting uniform vertex-transitivity}\label{sec:comp}
We consider an arbitrary finite graph $\Gamma$ on $n$ vertices. Our present motivation is to develop a computational strategy for determining whether $\Gamma$ is uniformly vertex-transitive. This problem is apparently harder than determining whether or not it is Cayley. Indeed, unlike the latter---which in light of \cref{prop:cay-reg} can easily be decided from knowledge of the subgroups of $\Aut(\Gamma)$---the former depends on the existence of maximal Schur sets, which are not subgroups of $\Aut(\Gamma)$ in general. A naive approach of checking all subsets of $\Aut(\Gamma)$ of a given size quickly becomes untenable for $\Gamma$ with sufficiently many vertices or automorphisms, so a more informed approach is necessary.
In this section, we consider the derangement graph $D(\Gamma)$ of $\Gamma$, which we will see encodes how elements of $\Aut(\Gamma)$ relate under the Schur product of permutation matrices. Culminating in \cref{thm:clique_nhd}, we will see that the uniform vertex-transitivity of $\Gamma$ is equivalent to the existence of cliques of a certain size in $D(\Gamma)$. Because the problem of finding cliques in graphs is relatively well-studied and implemented, we propose this as an effective method of determining whether an arbitrary graph $\Gamma$ is uniformly vertex-transitive.
For a graph $\Gamma$, we denote by $\Der_{\Gamma}$ the subset of $\Aut(\Gamma)$ which consists of all permutations without fixed points. Such permutations are often called \emph{derangements}.
\begin{defn}
Let $\Gamma$ be a graph. The \emph{derangement graph} $D(\Gamma)$ is a graph with vertex set $\Aut(\Gamma)$ in which automorphisms $\sigma$ and $\tau$ are adjacent if $\sigma^{-1}\tau \in\Der_\Gamma$. Thus, $D(\Gamma)$ coincides with the Cayley graph $C(\Aut(\Gamma),\Der_\Gamma)$.
\end{defn}
We observe that the derangement graph encodes the orthogonality of automorphisms with respect to the Schur product.
\begin{lem} \label{lem:der-schur}
For automorphisms $\sigma, \tau \in\Aut(\Gamma)$, we have $\sigma^{-1}\tau\in\Der_\Gamma$ if and only if the Schur product of $\sigma$ and $\tau$ is zero.
\end{lem}
\begin{proof}
Suppose $\sigma^{-1}\tau$ is not a derangement, so there is a fixed point $i\in V(\Gamma)$ of $\sigma^{-1}\tau$, for which $\sigma^{-1}\tau(i)=i$. For such a vertex $i$, we have $\sigma(i) = \tau(i)$, which means the Schur product of $\sigma$ and $\tau$ is nonzero. Conversely, if the Schur product of $\sigma$ and $\tau$ is nonzero, then $\sigma(i)=\tau(i)$ for some vertex $i$, so $i$ is a fixed point of $\sigma^{-1}\tau$ and $\sigma^{-1}\tau$ is not a derangement. The claim follows.
\end{proof}
In other words, the edges of $D(\Gamma)$ connect automorphisms with Schur product zero. Recall that a \emph{clique} in a graph $\Gamma$ is a subset $S\subset V(\Gamma)$ which induces a complete subgraph, i.e. any two vertices in $S$ are adjacent.
\begin{lem} \label{lem:clique}
Let $\Gamma$ be a graph on $n$ vertices. A size $n$ subset $S \subset \Aut(\Gamma)$ is a maximal Schur set if and only if $S$ is a clique in $D(\Gamma)$.
\end{lem}
\begin{proof}
Suppose that $S=\{\sigma_1,\ldots,\sigma_n\} \subset V(D(\Gamma))$ is a clique; by \cref{lem:der-schur}, this means that $\sigma_i$ and $\sigma_j$ have Schur product 0 for $i\neq j$. In other words, for each row $k$, the matrices $\sigma_i$ and $\sigma_j$ have a 1 in different positions, so row $k$ of $\sigma_i+\sigma_j$ will consist of 1s in two entries and 0s otherwise. Continuing inductively, we see that each row of $\sum\sigma_i$ consists of all entries 1, so it follows that $\sum\sigma_i=J_n$, and thus $S$ is a maximal Schur set.
Conversely suppose that $S=\{\sigma_1,\ldots,\sigma_n\}\subset \Aut(\Gamma)$ is a maximal Schur set, so $\sum\sigma_i=J_n$. Take $\sigma_i$ and $\sigma_j$ for $i\neq j$. If $\sigma_i$ and $\sigma_j$ have nonzero Schur product, then $\sigma_i$ and $\sigma_j$ have a 1 in the same entry. However, if this was the case, $\sum\sigma_i$ would have an entry other than 1, which is a contradiction, as we assumed $\sum\sigma_i=J_n$. Thus $S$ is a clique in $D(\Gamma)$, as all $\sigma_i$ are adjacent.
\end{proof}
\begin{lem} \label{lem:clique_mult}
Suppose $S = \{\sigma_1,\ldots,\sigma_n\} \subset \Aut(\Gamma)$ is a maximal Schur set. For any $\alpha\in \Aut(\Gamma)$, the set $\alpha S=\{\alpha\sigma_1,\ldots,\alpha\sigma_n\}$ is also a maximal Schur set.
\end{lem}
\begin{proof}
View $\alpha,\sigma_1,\ldots,\sigma_n\in \Aut(\Gamma)$ all as their corresponding permutation matrices. That $S$ is a maximal Schur set gives that $\sum\sigma_i=J_n$. Observe that
\begin{equation*}
\sum_{i=1}^n \alpha\sigma_i = \alpha \sum_{i=1}^n \sigma_i = \alpha J_n = J_n
\end{equation*}
which shows that $\alpha S$ is also a maximal Schur set.
\end{proof}
\begin{cor} \label{lem:id_clique}
If $\Gamma$ is uniformly vertex-transitive, then $\Aut(\Gamma)$ contains a maximal Schur set which contains the identity automorphism.
\end{cor}
\begin{proof}
Let $S=\{\sigma_1,\ldots,\sigma_n\}$ be a maximal Schur set in $\Aut(\Gamma)$. Observe that the set $\sigma_1^{-1}S = \{\text{id}, \sigma_1^{-1}\sigma_2, \ldots, \sigma_1^{-1}\sigma_n\}$ is a maximal Schur set, by \cref{lem:clique_mult}, which contains the identity.
\end{proof}
Before proceeding, we recall some more terminology from graph theory. The \emph{clique number} $\omega(\Gamma)$ of a graph $\Gamma$ is the maximum size of a clique in $\Gamma$. The \emph{neighborhood} of a vertex $v\in V(\Gamma)$ is the induced subgraph on the set of vertices adjacent to $v$ in $\Gamma$, and is denoted by $\Gamma_v$.
\begin{thm} \label{thm:clique_nhd}
Let $\Gamma$ be a graph on $n$ vertices. $\Gamma$ is uniformly vertex-transitive if and only if $\omega(D(\Gamma)_{\text{id}}) = n-1$.
\end{thm}
\begin{proof}
Suppose that $\Gamma$ is uniformly vertex-transitive. In light of the previous corollary, let $S\subset \Aut(\Gamma)$ be a maximal Schur set which contains $\text{id}\in\Aut(\Gamma)$. By \cref{lem:clique}, $S$ is a clique in the graph $D(\Gamma)$. It follows that $S-\{\text{id}\}$ is a clique in $D(\Gamma)_{\text{id}}$ of size $n-1$, so $\omega(D(\Gamma)_{\text{id}}) \geq n-1$. On the other hand, a clique in $D(\Gamma)$ consists of pairwise Schur-orthogonal $n\times n$ permutation matrices (their sum is a $0$--$1$ matrix whose row sums equal the number of elements), so it has at most $n$ elements; since adjoining $\text{id}$ to a clique in $D(\Gamma)_{\text{id}}$ again gives a clique in $D(\Gamma)$, we get $\omega(D(\Gamma)_{\text{id}}) \leq n-1$. Thus, $\omega(D(\Gamma)_{\text{id}}) = n-1$.
In the opposite direction, suppose that $\omega(D(\Gamma)_{\text{id}}) = n-1$, so there exists a size $n-1$ clique $C\subset V(D(\Gamma))$ consisting of neighbors of $\text{id}\in D(\Gamma)$. The set $C\cup \{\text{id}\}$ is then a clique of size $n$ in $D(\Gamma)$. By \cref{lem:clique}, $C\cup \{\text{id}\}$ is a maximal Schur set in $\Aut(\Gamma)$, so $\Gamma$ is thus uniformly vertex-transitive.
\end{proof}
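For very small graphs, the criterion of \cref{thm:clique_nhd} can be carried out end to end with generic tools. The sketch below uses plain Python together with the networkx package for clique enumeration (an assumption of this illustration; our actual computations use SageMath and GAP, and all function names here are our own). It brute-forces the automorphism group, forms the neighborhood of the identity in $D(\Gamma)$, and searches for an $(n-1)$-clique.
\begin{verbatim}
import itertools
import networkx as nx

def automorphisms(n, edges):
    # brute force over all permutations; feasible only for very small n
    E = {frozenset(e) for e in edges}
    return [p for p in itertools.permutations(range(n))
            if all(frozenset((p[u], p[v])) in E for u, v in edges)]

def is_uniformly_vertex_transitive(n, edges):
    auts = automorphisms(n, edges)
    # neighbors of the identity in D(Gamma) are exactly the derangements
    der = [s for s in auts if all(s[i] != i for i in range(n))]
    if not der:
        return n == 1
    H = nx.Graph()
    H.add_nodes_from(der)
    for s, t in itertools.combinations(der, 2):
        # s, t adjacent in D(Gamma) iff s^{-1} t is a derangement,
        # i.e. s and t disagree at every vertex
        if all(s[i] != t[i] for i in range(n)):
            H.add_edge(s, t)
    return max(len(c) for c in nx.find_cliques(H)) >= n - 1

# the 5-cycle, a Cayley graph of Z/5Z, is uniformly vertex-transitive
c5 = [(i, (i + 1) % 5) for i in range(5)]
print(is_uniformly_vertex_transitive(5, c5))  # True
\end{verbatim}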
We now apply the above methods to the line graph of the Petersen graph. Recall that for a finite graph $\Gamma$, its \emph{line graph} $L(\Gamma)$ is a graph with vertices $E(\Gamma)$ in which edges $e_1,e_2\in E(\Gamma)$ are adjacent in $L(\Gamma)$ if $e_1$ and $e_2$ share a common vertex in $\Gamma$.
\begin{cor} \label{thm:line_petersen}
The line graph of the Petersen graph is vertex-transitive, but not uniformly vertex-transitive.
\end{cor}
\begin{proof}
Let $P$ be the Petersen graph, and let $L(P)$ denote its line graph. Using Sage \cite{sage}, we determined that $\omega(D(L(P))_{\textrm{id}}) = 12$. Now, $L(P)$ has 15 vertices and $12 \neq 14$, thus by \cref{thm:clique_nhd}, we have that $L(P)$ is not uniformly vertex-transitive. That $L(P)$ is vertex-transitive follows from the fact that $P$ is edge-transitive, as can be seen in \cite[Theorem 4.7]{HS93}.
\end{proof}
In light of \cref{thm:line_petersen}, we may conclude that the converses of the aforementioned chain of implications of graph properties do not hold, that is:
\[ \text{Cayley} \centernot\Longleftarrow \text{uniformly vertex-transitive}
\centernot\Longleftarrow \text{vertex-transitive} \]
Thus, the property of uniform vertex-transitivity sits strictly between vertex-transitivity and being a Cayley graph, settling our motivating question from \cite{NSW19}.
\section{Uniform vertex-transitivity for imprimitive graphs}\label{sec:imp}
In this section, we study uniform vertex-transitivity for a special class of graphs, those graphs whose automorphism group is imprimitive. As in the previous section, we consider a finite graph $\Gamma$ on $n$ vertices. We begin by recalling the notion of primitivity from permutation group theory.
\begin{defn}
Let $G$ be a group which acts transitively on a set $X$. A \emph{block} of $G$ is a subset $B\subset X$ for which $gB=B$ or $gB\cap B=\emptyset$ for all $g\in G$. A block $B$ is \emph{trivial} if $B$ is a singleton or if $B=X$, and is \emph{nontrivial} otherwise. If $G$ has a nontrivial block, then $G$ acts \emph{imprimitively}, and \emph{primitively} otherwise.
\end{defn}
As mentioned earlier, we are interested in the case in which a graph has an imprimitive automorphism group. We call a graph $\Gamma$ an \emph{imprimitive graph} if the action of $\Aut(\Gamma)$ on $V(\Gamma)$ is imprimitive.
\begin{defn}
For $G$ a group acting transitively on a set $X$ with a block $B$ of $G$, the set $\mathcal{B}=\{ gB : g\in G\}$ is called a \emph{block system} of $B$.
\end{defn}
It is well known that a block system $\mathcal{B}$ forms a partition of $X$ and that each block has the same cardinality. Consider an imprimitive group $G$ with nontrivial block system $\mathcal{B}$ consisting of $m$ blocks of size $k$. Then, the action of $G$ on $\mathcal{B}$ induces a group homomorphism $G \to \Sym(\mathcal{B}) \simeq S_m$. The kernel of this homomorphism is the \emph{fixer} $\fix_G(\mathcal{B})$ of $\mathcal{B}$ in $G$, which is the subgroup of $G$ consisting of automorphisms which leave each block in place. In the situation of an automorphism group of a graph $\Gamma$, we will often write $\fix_{\Gamma}(\mathcal{B})$ to mean $\fix_{\Aut(\Gamma)}(\mathcal{B})$. The quotient $G/\!\fix_G(\mathcal{B})$ then has a faithful and transitive action on the blocks $\mathcal{B}$.
We wish to study uniform vertex-transitivity in the context of such imprimitive graphs. First, we abstract the notion of uniform vertex-transitivity to general permutation groups.
\begin{defn}
Let $G$ be a group acting on a set $X$ with $|X|=n$. The group $G$ is \emph{uniformly transitive} if there exists a size $n$ subset $\{\sigma_1,\ldots,\sigma_n\} \subset G$ such that, when viewed as matrices, $\sum \sigma_i = J_n$. Such a subset $\{\sigma_1,\ldots,\sigma_n\}$ is called a \emph{maximal Schur set}.
\end{defn}
\begin{defn}
Let $\Gamma$ be an imprimitive graph with nontrivial block system $\mathcal{B}$ of $\Aut(\Gamma)$, which has $m$ blocks of size $k$. The block system $\mathcal{B}$ is \emph{factorizing} if
\begin{enumerate}[(i)]
\item the group $\fix_\Gamma(\mathcal{B})$ contains a size $k$ subset of elements which are mutually orthogonal with respect to the Schur product of matrices, and
\item the group $\Aut(\Gamma)/\!\fix_\Gamma(\mathcal{B})$ acts uniformly transitively on $\mathcal{B}$.
\end{enumerate}
\end{defn}
Intuitively, a block system $\mathcal{B}$ for $\Aut(\Gamma)$ is factorizing if both the action on the blocks and the action within the blocks admit maximal Schur sets. We now turn to the main result of this section.
\begin{thm}\label{thm:factor}
Let $\Gamma$ be an imprimitive graph on $n$ vertices. If $\Aut(\Gamma)$ has a factorizing block system, then $\Gamma$ is uniformly vertex-transitive.
\end{thm}
\begin{proof}
Suppose $\mathcal{B}=\{B_1,\ldots,B_m\}$ consists of $m$ blocks of size $k$. Assume without loss of generality that the vertices of $\Gamma$ are ordered in such a way that the first $k$ vertices are in $B_1$, the next $k$ in $B_2$ and so forth. Let $A=\{\alpha_1,\ldots,\alpha_k\}$ be a mutually orthogonal subset of size $k$ in $\fix_\Gamma(\mathcal{B})$ and let $B'=\{\beta_1',\ldots,\beta_m'\}$ be a maximal Schur set of the action of $\Aut(\Gamma)/\!\fix_\Gamma(\mathcal{B})$ on $\mathcal{B}$. Let $B=\{\beta_1,\ldots,\beta_m\}$ be a lift of $B'$ to $\Aut(\Gamma)$, i.e. the image of $B\subset\Aut(\Gamma)$ under the canonical surjection $\Aut(\Gamma)\to \Aut(\Gamma)/\!\fix_\Gamma(\mathcal{B})$ is $B'$.
We have that $\sum\alpha_i$ has the block diagonal matrix form
\[ \sum_{i=1}^k \alpha_i = \begin{bmatrix} J_k & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & J_k \end{bmatrix} \]
consisting of a matrix of $m\times m$ blocks of size $k\times k$. Furthermore, $\sum\beta_j$ has the block form
\[ \sum_{j=1}^m \beta_j = \begin{bmatrix} P_{1,1} & \cdots & P_{1,m} \\ \vdots & \ddots & \vdots \\ P_{m,1} & \cdots & P_{m,m} \end{bmatrix} \]
in which each $P_{i,j}$ is some $k\times k$ permutation matrix; indeed, since $B'$ is a maximal Schur set for the action on $\mathcal{B}$, for every pair of blocks there is exactly one $\beta_j$ whose matrix has its nonzero entries in the corresponding block position, and these entries form a $k\times k$ permutation matrix. Finally, we have that
\begin{align*} \sum_{i=1}^k \sum_{j=1}^m \alpha_i\beta_j &= \left( \sum_{i=1}^k \alpha_i \right) \left( \sum_{j=1}^m \beta_j \right) = \begin{bmatrix} J_k & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & J_k \end{bmatrix} \begin{bmatrix} P_{1,1} & \cdots & P_{1,m} \\ \vdots & \ddots & \vdots \\ P_{m,1} & \cdots & P_{m,m} \end{bmatrix} \\&= \begin{bmatrix} J_k & \cdots & J_k \\ \vdots & \ddots & \vdots \\ J_k & \cdots & J_k \end{bmatrix} = J_{mk} = J_n
\end{align*}
which shows that the set $AB=\{ \alpha_i\beta_j : 1 \leq i \leq k, 1 \leq j \leq m\}$ is a maximal Schur set of $\Aut(\Gamma)$. Thus $\Gamma$ is uniformly vertex-transitive.
\end{proof}
This proof motivates the name of a factorizing block system. Indeed, the maximal Schur set obtained in the end ``factors'' into the product of the sets $A$ and $B$, which are given in terms of the block system $\mathcal{B}$.
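The matrix identity underlying this proof is easy to see on a toy example. The following Python sketch (NumPy-based; the graph, block system, and helper names are our own illustrative choices) takes the $6$-cycle with the block system of antipodal pairs $\{i, i+3\}$: the fixer contains the two Schur-orthogonal rotations by $0$ and $3$, and the rotations by $0,1,2$ lift a maximal Schur set for the induced action on the three blocks, so the six products sum to $J_6$.
\begin{verbatim}
import numpy as np

def pmat(sigma):
    # matrix with a 1 in position (i, sigma[i])
    M = np.zeros((len(sigma), len(sigma)), dtype=int)
    for i, j in enumerate(sigma):
        M[i, j] = 1
    return M

n = 6
rot = lambda k: tuple((i + k) % n for i in range(n))  # rotation of the 6-cycle

A = [rot(0), rot(3)]            # Schur-orthogonal elements of the fixer
B = [rot(0), rot(1), rot(2)]    # lifts of a maximal Schur set on the blocks

total = sum(pmat(a) @ pmat(b) for a in A for b in B)
print(np.array_equal(total, np.ones((n, n), dtype=int)))  # True
\end{verbatim}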
In practice, \cref{thm:factor} is already useful in determining the uniform vertex-transitivity of graphs with relatively large automorphism groups, for which the computational methods of the last section alone are not sufficient. With a factorizing block system for a graph $\Gamma$, one can hope to find a maximal Schur set in $\Aut(\Gamma)$ by applying the methods from the previous section to the action of $\fix_\Gamma(\mathcal{B})$ and the action of $\Aut(\Gamma)/\!\fix_\Gamma(\mathcal{B})$ on $\mathcal{B}$. This amounts to determining the clique number of two smaller graphs as opposed to the clique number of a larger graph.
Computational evidence suggests that the converse of \cref{thm:factor} holds. Indeed, for every imprimitive graph which is known to be uniformly vertex-transitive (these will be discussed in the following section and in the appendix), there exists a factorizing block system for its automorphism group. Unfortunately, we are unable to prove the equivalence of uniform vertex-transitivity and the existence of factorizing block systems.
We finish this section with two more results on imprimitive graphs.
\begin{prop} \label{prop:simple}
If $\Gamma$ is an imprimitive graph such that $\Aut(\Gamma)$ is simple, then $\Aut(\Gamma)$ does not admit a factorizing block system.
\end{prop}
\begin{proof}
Let $\Gamma$ be an imprimitive graph and let $\mathcal{B}$ be a nontrivial block system for the action of $\Aut(\Gamma)$ on $V(\Gamma)$. As above, let each block of $\mathcal{B}$ have size $k$; note that $k>1$ since the blocks are nontrivial. It is clear from the above construction that $\fix_\Gamma(\mathcal{B})$ is a normal subgroup of $\Aut(\Gamma)$, realized as the kernel of a homomorphism. Since $\Aut(\Gamma)$ is simple by assumption, we have that $\fix_\Gamma(\mathcal{B})$ is either trivial or all of $\Aut(\Gamma)$. If $\fix_\Gamma(\mathcal{B})$ is trivial, then it cannot contain a mutually orthogonal subset of size $k>1$. Similarly, if $\fix_\Gamma(\mathcal{B})=\Aut(\Gamma)$, then $\Aut(\Gamma)/\fix_\Gamma(\mathcal{B})$ is trivial, and hence does not act uniformly transitively on $\mathcal{B}$.
\end{proof}
\begin{prop} \label{prop:a5}
If $\Gamma$ is a vertex-transitive graph with $\Aut(\Gamma)\simeq A_5$, then $\Gamma$ is imprimitive.
\end{prop}
\begin{proof}
We show that $A_5$ does not occur as the automorphism group of a primitive vertex-transitive graph. Recall from \cite[Corollary 1.5A]{DM96} that the primitive actions of a group $G$ are characterized by their point stabilizers being maximal subgroups of $G$. A transitive group action is determined up to conjugacy by its point stabilizer. Thus, the primitive actions of a group $G$ correspond to the maximal subgroups of $G$. In the case of $A_5$, there are three conjugacy classes of maximal subgroups, as seen in \cite{ATLAS}, for instance. These correspond to three primitive actions of $A_5$ on 5 points, 6 points, and 10 points, respectively. By consulting a database \cite{R97} of all possible vertex-transitive graphs on 5, 6, and 10 vertices, we verify that none of these primitive actions of $A_5$ occur as the automorphism group of a vertex-transitive graph.
\end{proof}
Note that if the converse of \cref{thm:factor} were true, \cref{prop:simple} and \cref{prop:a5} would imply that there is no uniformly vertex-transitive graph with automorphism group $A_5$.
\section{Experimental results}\label{sec:exp}
The basis for our experimental results is a database \cite{R97} of vertex-transitive non-Cayley graphs on up to 30 vertices\footnote{The website for this database remarks that it is ``guaranteed correct only up to 26 vertices.'' However, the completeness of this dataset is supported by later work by Royle \cite[Table 3]{HR19}, which lists the same counts of vertex-transitive non-Cayley graphs as appear in the dataset.}. Indeed, the motivation for much of our results was to develop methods to classify these graphs into the uniformly vertex-transitive and non-uniformly vertex-transitive ones. We carried out these methods using the SageMath environment \cite{sage} and GAP \cite{GAP4}. We first present a summary of our findings in \cref{tab:counts}.
\begin{table}[ht]
\begin{tabular}{|c|cc|ccc|} \hline
Vertices & VT & non-Cayley & UVT & non-UVT & Unknown \\ \hline\hline
10 & 22 & 2 & 2 & & \\
15 & 48 & 4 & & 4 & \\
16 & 286 & 8 & 8 & & \\
18 & 380 & 4 & 4 & & \\
20 & 1214 & 82 & 70 & 12 & \\
24 & 15506 & 112 & 112 & & \\
26 & 4236 & 132 & 132 & & \\
28 & 25850 & 66 & 24 & 38 & 4\\
30 & 46308 & 1124 & 324 & 730 & 70 \\\hline
\end{tabular}
\\[3mm]\caption{The counts of graphs with various properties on numbers of vertices for which there exist vertex-transitive non-Cayley graphs.}\label{tab:counts}\end{table}
Outside of two complementary pairs of graphs on 28 vertices, the classification of uniformly vertex-transitive graphs on less than 30 vertices is complete. A complete description of these graphs by their automorphism groups can be found in the appendix. One of the two remaining complementary pairs of graphs has automorphism group $2\times\PSL(2,13)$, of order 2184. This is, in practice, too large to handle using only the clique number of the derangement graph. However, it has been computed that none of the block systems for this automorphism group are factorizing, so the converse of \cref{thm:factor}, if true, would imply that this pair of graphs is not uniformly vertex-transitive.
The other complementary pair of graphs on 28 vertices consists of the Johnson graph $J(8,2)$ and its complement. This pair of graphs has automorphism group $S_8$ of order 40320 acting primitively, so determining the uniform vertex-transitivity of these graphs is outside the means of our current methods. The smaller Johnson graphs $J(n,2)$ exhibit curious behavior. The graph $J(5,2)$ is the complement of the Petersen graph, which is uniformly vertex-transitive but not Cayley. The graph $J(6,2)$ is the line graph of $K_6$, which is not uniformly vertex-transitive (indeed, it is the $S_6$ entry in \cref{tab:nuvt}). Most curiously, the graph $J(7,2)$ is a Cayley graph for the group $7:3$. Further methods to determine the uniform vertex-transitivity of graphs which have a large primitive automorphism group would be desirable, in order to settle the case of $J(8,2)$ and other graphs with large primitive automorphism groups.
\begin{table}[ht]
\begin{tabular}{lcccc} \hline
Group $G$ & Degree & $\omega(D(G))-\deg(G)$ & Graphs & ID\# \\\hline
$2^3 : 7$ & 28 & $-21$ &18& $(28,11)$ \\
$A_5$ & 20 & $-10$ &4& $(20,15)$ \\
& 30 & $-17$ &382& $(30,9)$ \\
$2 \times A_5$ & 20 & $-8$ &8& $(20,36)$ \\
& 30 & $-4$ &88& $(30,29)$ \\
& 30 & $-4$ &32& $(30,30)$ \\
$S_5$ & 15 & $-2$ &2& $(15,10)$ \\
& 30 & $-17$ &22& $(30,22)$ \\
& 30 & $-4$ &32& $(30,25)$ \\
& 30 & $-14$ &90& $(30,27)$ \\
$\PSL(3,2)$ & 28 & $-18$ &4& $(28,32)$ \\
$2 \times S_5$ & 30 & $-4$ &40& $(30,58)$ \\
& 30 & $-4$ &44& $(30,60)$ \\
$\PSL(3,2) : 2$ & 28 & $-18$ &12& $(28,46)$ \\
$\PSL(2,8)$ & 28 & $-18$ &2& $(28,70)$ \\
$S_6$ & 15 & $-2$ &2& $(15,28)$ \\
$2^3 : \PSL(3,2)$ & 28 & $-16$ &2& $(28,159)$ \\
\hline \end{tabular}
\\[3mm]\caption{The vertex-transitive graphs which are not uniformly vertex-transitive, listed by automorphism group.}\label{tab:nuvt}\end{table}
We additionally list by automorphism group the vertex-transitive graphs which are known to not be uniformly vertex-transitive in \cref{tab:nuvt}. The first column lists a description of the permutation group and the second column lists the degree of the permutation group. The group descriptions are based on output from the GAP \texttt{StructureDescription()} function. Note that an abstract group may have several entries corresponding to its different actions. In the third column we record the quantity $\omega(D(G))-\deg(G)$, the difference between the maximum size of a clique in $D(G)$ and the degree of the permutation group $G$. If this value is 0, then $G$ is uniformly transitive. The fourth column lists the number of graphs which have this automorphism group. Lastly, we record the identification number of $G$ in the GAP Transitive Groups Library \cite{TransGrp}. For example, to create the first group in the table in GAP, one would simply call \texttt{TransitiveGroup(28,11)}.
One noteworthy observation from \cref{tab:nuvt} is that the automorphism groups of these graphs tend to contain a non-abelian simple group as a large subgroup. In some sense, this may provide further evidence that the converse of \cref{thm:factor} is true. Indeed, an automorphism group failing to have a factorizing block system because of a large simple subgroup is in line with \cref{prop:simple}.
We provide this information for all groups appearing as the automorphism group of a vertex-transitive non-Cayley graph (including the uniformly vertex-transitive graphs) in the appendix.
\section{Further directions}\label{sec:further}
One possible generalization of the notion of uniform vertex-transitivity is the following, in which we replace the matrix $J_n$ by one of its integer multiples.
\begin{defn}
Let $\Gamma$ be a graph on $n$ vertices and let $k\geq 1$. The graph $\Gamma$ is \emph{$k$-uniformly vertex-transitive} if there exists a size $kn$ subset $\{\sigma_1,\ldots,\sigma_{kn}\}\subset\Aut(\Gamma)$ such that, when viewed as permutation matrices, $\sum \sigma_i = kJ_n$. Such a subset $\{\sigma_1,\ldots,\sigma_{kn}\}$ is called a \emph{$k$-maximal Schur set}.
\end{defn}
Setting $k=1$ recovers the above notion of uniform vertex-transitivity. It turns out that every vertex-transitive graph is $k$-uniformly vertex-transitive for some $k$, and this $k$ admits an explicit description in terms of the action of the group $\Aut(\Gamma)$.
\begin{prop}\label{prop:uvtk}
Let $\Gamma$ be a vertex-transitive graph on $n$ vertices, and let $s$ be the order of the stabilizer of a vertex in $\Aut(\Gamma)$. Then,
\begin{enumerate}[(a)]
\item the graph $\Gamma$ is $s$-uniformly vertex-transitive.
\item for $1\leq k\leq s$, $\Gamma$ is $k$-uniformly vertex-transitive if and only if $\Gamma$ is $(s-k)$-uniformly vertex-transitive.
\end{enumerate}
\end{prop}
\begin{proof}
We claim that $\Aut(\Gamma)$ is an $s$-maximal Schur set for itself. Indeed, let $M=\sum_{\sigma\in\Aut(\Gamma)}\sigma$. Consider the $i$th row of the matrix $M$, which has an $s$ in the $i$th position, as $s$ is the number of elements in $\Aut(\Gamma)$ stabilizing vertex $i$. The $j$th entry of the $i$th row corresponds to all automorphisms moving vertex $j$ to vertex $i$, which is a coset of the stabilizer of vertex $j$, which also has size $s$. Thus, each row of $M$ consists of $s$ in each position. This shows that $M=sJ_n$, which proves (a).
Suppose that $\Gamma$ is $k$-uniformly vertex-transitive for some $k\leq s$, and let $S$ be a $k$-maximal Schur set. Then, $\Aut(\Gamma)-S$ is an $(s-k)$-maximal Schur set. Indeed, we see that
\[ \sum_{\sigma\in \Aut(\Gamma) - S} \sigma = \sum_{\sigma \in \Aut(\Gamma)} \sigma - \sum_{\sigma \in S} \sigma = sJ_n - kJ_n = (s-k)J_n \]
which proves one implication of (b); the reverse implication follows by applying the same argument with $s-k$ in place of $k$.
\end{proof}
The following result improves \cref{prop:implications} for Cayley graphs.
\begin{prop}\label{prop:cay-uvtk}
Let $\Gamma$ be a Cayley graph on $n$ vertices, with vertex stabilizer of size $s$. Then $\Gamma$ is $k$-uniformly vertex-transitive for all $1\leq k\leq s$.
\end{prop}
\begin{proof}
Let $G$ be a subgroup of $\Aut(\Gamma)$ which acts regularly on $V(\Gamma)$, as guaranteed to exist by \cref{prop:cay-reg}. In particular, $|G|=n$, and $\Aut(\Gamma)$ is partitioned into $s$ many subsets of size $n$ by the cosets of $G$ in $\Aut(\Gamma)$. Each of these cosets is a maximal Schur set for $\Aut(\Gamma)$ by \cref{lem:clique_mult}. As such, taking the union of any $k$ of these cosets yields a $k$-maximal Schur set of $\Aut(\Gamma)$.
\end{proof}
One fundamental difference in $k$-uniform vertex-transitivity for $k=1$ versus $k>1$ is that for $k>1$, computation is more difficult. As seen, the $k=1$ case can be understood in terms of a pairwise property of automorphisms---that their entrywise product is 0. The case $k>1$ does not seem to admit such a clean understanding, and therefore more sophisticated methods are needed to determine whether or not an arbitrary graph is $k$-uniformly vertex-transitive for $k>1$.
In particular, it is unclear whether non-uniformly vertex-transitive graphs may be $k$-uniformly vertex-transitive for some $1<k<s$, with $s$ the order of a vertex stabilizer in the automorphism group. Even for the smallest example of such a graph, the line graph of the Petersen graph, we were unsuccessful in determining whether it is 2-uniformly vertex-transitive.
\section*{Acknowledgments}
The authors would like to thank David Roberson for useful comments on an earlier version of this article and for bringing the notions of the derangement graph and sharply transitive sets to our attention. This work was supported by the DAAD RISE program and the collaborative research center SFB-TRR 195 \emph{Symbolic Tools in Mathematics and their Application}. Simon Schmidt and Moritz Weber were also supported by the DFG project \emph{Quantenautomorphismen von Graphen}.
| {
"timestamp": "2019-12-03T02:01:33",
"yymm": "1912",
"arxiv_id": "1912.00060",
"language": "en",
"url": "https://arxiv.org/abs/1912.00060",
"abstract": "We introduce uniformly vertex-transitive graphs as vertex-transitive graphs satisfying a stronger condition on their automorphism groups, motivated by a problem which arises from a Sinkhorn-type algorithm. We use the derangement graph $D(\\Gamma)$ of a given graph $\\Gamma$ to show that the uniform vertex-transitivity of $\\Gamma$ is equivalent to the existence of cliques of sufficient size in $D(\\Gamma)$. Using this method, we find examples of graphs that are vertex-transitive but not uniformly vertex-transitive, settling a previously open question. Furthermore, we develop sufficient criteria for uniform vertex-transitivity in the situation of a graph with an imprimitive automorphism group. We classify the non-Cayley uniformly vertex-transitive graphs on less than 30 vertices outside of two complementary pairs of graphs.",
"subjects": "Combinatorics (math.CO)",
"title": "Uniformly vertex-transitive graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126525529014,
"lm_q2_score": 0.724870282120402,
"lm_q1q2_score": 0.7094397165708287
} |
https://arxiv.org/abs/1507.04484 | Correspondences and singular varieties | What is generally known as the "Bloch--Srinivas method" consists of decomposing the diagonal of a smooth projective variety, and then considering the action of correspondences in cohomology. In this note, we observe that this same method can also be extended to singular and quasi--projective varieties. We give two applications of this observation: the first is a version of Mumford's theorem, the second is concerned with the Hodge conjecture for singular varieties. | \section{Introduction}
\label{intro}
Let $X$ be a smooth complex projective variety. The cycle class maps
\[ cl^i\colon A^iX_{\mathbb{Q}}\to H^{2i}(X,\mathbb{Q})\]
from Chow groups to singular cohomology have given rise to some of the most profound and fascinating conjectures in algebraic geometry: the Hodge conjecture (concerning the image of $cl^i$), and the Bloch--Beilinson conjectures (concerning the structure of the kernel of $cl^i$).
Since Mumford's work \cite{M}, it is well--known that if the Chow groups $A^iX_{\mathbb{Q}}$ are ``small'' (in the sense of being supported on some subvariety), then also the singular cohomology groups are small (in the sense that they are supported on some subvariety).
There is for instance the following result:
\begin{theorem}[\cite{J2}] Let $X$ be a smooth projective variety, and suppose $A_0X_{\mathbb{Q}}$ is supported on a subvariety of dimension $r$. Then
the Hodge numbers $h^{p,0}(X)$ are $0$ for $p>r$.
\end{theorem}
In proving Mumford--type theorems such as this one, the approach of Bloch--Srinivas \cite{BS} has become hugely influential. (The curious reader is invited to look at \cite{Vo} for a fairly comprehensive overview of this circle of ideas, including many exciting subsequent developments it has spawned.)
In brief, the Bloch--Srinivas method consists of decomposing the diagonal, given some input on the level of Chow groups. Then, the action of this decomposition seen as a correspondence turns out to have many consequences on the level of cohomology.
Because of the formalism of correspondences being used, the Bloch--Srinivas method is usually restricted to smooth projective varieties. In this note, on the other hand, we show this method can also be made to work for singular and quasi--projective varieties. The idea is very elementary: if $X$ is a (possibly singular) projective variety of dimension $n$, a correspondence is defined as a cycle $C\in A_n(X\times X)_{\mathbb{Q}}$. A correspondence defines an action
\[C_\ast\colon H^j(X,\mathbb{Q})\to H_{2n-j}(X,\mathbb{Q})\]
in a natural way. If $C$ is the diagonal, this action is just the natural map (capping with the fundamental class of $X$). It follows that, once we have a decomposition of the diagonal, this will have consequences for
\[ \hbox{Im}\bigl( H^j(X,\mathbb{Q})\to H_{2n-j}(X,\mathbb{Q})\bigr)\ .\]
It turns out that in certain degrees (depending on the dimension of the singular locus), this image is well--understood: it is exactly the subgroup $W_{j-2n}H_{2n-j}(X,\mathbb{Q})$ where $W_\ast$ is Deligne's weight filtration (this is proven using intersection homology, cf. lemma \ref{durf}).
We give two applications of this elementary observation. The first is a new version of Mumford's theorem:
\begin{proposition} Let $X$ be a quasi--projective variety of dimension $n$. Suppose
\[\hbox{Niveau}(A_iX_{\mathbb{Q}})\le r\ \ \hbox{for\ all\ }i\ ,\]
and suppose there exists a compactification of $X$ with singular locus of dimension $\le {n+r+1\over 3}$.
Then
\[ \hbox{Gr}^k_F W_{-j}H_j(X,\mathbb{C})=0\ \ \hbox{provided\ }\vert 2k+j\vert >r \ .\]
\end{proposition}
Here, the hypothesis ``Niveau $(A_iX_{\mathbb{Q}})\le r$'' means that the Chow group $A_iX_{\mathbb{Q}}$ is supported on an $(i+r)$--dimensional subvariety.
It should be noted that Lewis has obtained several Mumford--type theorems for singular varieties \cite{L2}; his statements and method are somewhat different from those of the present note.\footnote{The statements in \cite{L2} are considerably sharper than the one obtained in the present note. On the other hand, Lewis does so by supposing that the generalized Hodge conjecture or the Lefschetz standard conjecture holds universally. The aim of the present note is (1) to see how far one can get unconditionally, and (2) to extend the Bloch--Srinivas argument to singular varieties.}
The second application concerns the Hodge conjecture (as extended to singular and quasi--projective varieties in \cite{J}):
\begin{proposition}\label{hodge} Let $X$ be a quasi--projective variety of dimension $n$, and suppose there exists a compactification with singular locus of dimension $\le {n+4\over 3}$. Suppose
\[\hbox{Niveau}(A_iX_{\mathbb{Q}})\le 3\ \ \hbox{for\ all\ }i\le \ell\ .\]
Then the cycle class map
\[ A_jX_{\mathbb{Q}}\ \to\ W_{-2j}H_{2j}(X,\mathbb{Q})\cap F^{-j}H_{2j}(X,\mathbb{C})\]
is surjective for $j\le \ell+2$.
\end{proposition}
We present some examples where this can be applied (corollaries \ref{A0} and \ref{A1}).
\section{The Bloch--Srinivas argument}
\label{sec:1}
\begin{definition} Let $X$ be a quasi--projective variety, and let $A_iX$ denote the Chow group of $i$--dimensional algebraic cycles. We say that
\[ \hbox{Niveau}\bigl( A_iX_{\mathbb{Q}}\bigr)\le r\]
if there exists a closed $(i+r)$--dimensional subvariety $Y\subset X$ such that $A_i(X\setminus Y)_{\mathbb{Q}}=0$.
\end{definition}
The key to what follows is the following decomposition lemma. This is the Bloch--Srinivas argument \cite{BS}; in his book, Bloch attributes this argument to Colliot--Th\'el\`ene \cite[appendix to lecture 1]{B}.
\begin{lemma}\label{diag} Let $\bar{X}$ be a projective variety of dimension $n$, and $X\subset\bar{X}$ the complement of a closed subvariety $D$. Suppose
\[\hbox{Niveau}\bigl( A_iX_{\mathbb{Q}}\bigr)\le r \ \ \ \ \hbox{for\ all\ $i\le \ell$\ .}\]
Then there is a decomposition of the diagonal
\[ \Delta=\Delta_0+\Delta_1+\cdots+\Delta_\ell+\Delta^{\ell+1}+\Gamma\ \ \in A_n(\bar{X}\times\bar{X})_{\mathbb{Q}}\ ,\]
where $\Delta_j$ is supported on $V_j\times W_j$, $\Delta^{\ell+1}$ is supported on $X\times W_{\ell+1}$, and $V_j\subset\bar{X}$ is of dimension $j+r$, $W_j\subset\bar{X}$ is of dimension $n-j$, and $\Gamma$ is supported on $D\times\bar{X}$.
\end{lemma}
\begin{proof} This is an application of the Bloch--Srinivas method \cite{BS}.
We use the following two well--known lemmas:
\begin{lemma}\label{lim} Let $X$ and $Z$ be quasi--projective varieties, and suppose $Z$ is irreducible of dimension $n$. Then for any $i$
\[ A_i(X_{k(Z)})\cong \varinjlim A_{i+n}(X\times U)\ ,\]
where the limit is taken over opens $U\subset Z$.
\end{lemma}
\begin{proof} This is usually stated for smooth projective varieties \cite[appendix to Lecture 1]{B}. If one is brave, one can check in Quillen's work that the proof given in loc. cit. for the smooth case still goes through for singular varieties. Alternatively, take a resolution of singularities and reduce to the smooth case using the ``descent'' exact sequences, and the fact that $\varinjlim$ is an exact functor.
\end{proof}
\begin{lemma}\label{inj} Let $X$ be a quasi--projective variety defined over a field $k$, and let $k\subset K$ be a field extension. Then
\[A_i(X_k)_{\mathbb{Q}}\to A_i(X_K)_{\mathbb{Q}}\]
is injective.
\end{lemma}
\begin{proof} This is usually stated for smooth varieties \cite[appendix to Lecture 1]{B}, but the same argument works in general: use lemma \ref{lim} to reduce to the case of a finite extension. For a finite extension, take a resolution of singularities; for smooth varieties, the existence of the norm implies the extension map is a split injection; by descent, the same is true for singular varieties.
\end{proof}
Now we proceed with the proof of lemma \ref{diag}.
We can reduce to some subfield $k\subset\mathbb{C}$ which is finitely generated over its prime subfield (that is, we may suppose $X$ and $\bar{X}$ and the various subvarieties supporting the $A_iX_{\mathbb{Q}}$ are defined over $k$). Consider the restriction
\[ \Delta\in A_n(\bar{X}\times\bar{X})_{\mathbb{Q}}\to A_n(X\times\bar{X})_{\mathbb{Q}}\to \varinjlim A_n(X\times U)_{\mathbb{Q}}=A_0(X_{k(\bar{X})})_{\mathbb{Q}}\ \]
(where the $U$ run over opens of $\bar{X}$).
But
\[A_0(X_{k(\bar{X})})_{\mathbb{Q}}\to A_0(X_{\mathbb{C}})_{\mathbb{Q}}\]
is injective (lemma \ref{inj}), so $A_0(X_{k(\bar{X})})_{\mathbb{Q}}$ is supported in dimension $r$. It follows that we get a rational equivalence
\[ \Delta=\Delta_0+\Delta^1+\Gamma^1\ \ \in A_n(\bar{X}\times\bar{X})_{\mathbb{Q}}\ ,\]
where $\Delta_0$ is supported on $V_0\times\bar{X}$, and $\Delta^1$ is supported on $\bar{X}\times W_1$ for some divisor $W_1$, and $\Gamma^1$ is supported on $D\times\bar{X}$.
If $\ell=0$ we are done. If not, we consider the restriction of the element $\Delta^1$
\[ \Delta^1\in A_n(\bar{X}\times W_1)_{\mathbb{Q}}\to A_n(X\times W_1)_{\mathbb{Q}}\to A_1(X_{k(W_1)})_{\mathbb{Q}} \ ,\]
and we use the hypothesis on $A_1(X_{\mathbb{C}})_{\mathbb{Q}}$.
Continuing the same process, after $\ell+1$ steps we end up with a decomposition
as desired.
\end{proof}
Next, we consider correspondences for possibly singular projective varieties:
\begin{definition}\label{corr} Let $X$ be a projective variety of dimension $n$, and $C\in A_n(X\times X)_{\mathbb{Q}}$. Then $C$ induces an action
\[ C_\ast\colon\ H^j(X,\mathbb{Q})\ \to\ H_{2n-j}(X,\mathbb{Q})\ ,\]
defined as follows: for $b\in H^j(X,\mathbb{Q})$, let
\[ C_\ast(b):= (p_2)_\ast \bigl( (p_1)^\ast(b)\cap [C]\bigr)\ \ \in H_{2n-j}(X,\mathbb{Q})\ ,\]
where $p_1$ and $p_2$ denote projections on the first resp. second factor.
\end{definition}
This ``correspondence action'' has the following properties (which are well--known, and oft exploited, in the smooth case):
\begin{lemma}\label{cap} Let $X$ be a projective variety of dimension $n$, and let $\Delta\in A_n(X\times X)$ be the diagonal. Then
\[ \Delta_\ast b=b\cap[X]\ \ \in H_{2n-j}(X,\mathbb{Q})\]
for any $b\in H^j(X,\mathbb{Q})$.
\end{lemma}
\begin{proof} Let $f\colon\widetilde{X}\to {X}$ be a resolution of singularities, and let $\widetilde{\Delta}$ denote the diagonal of $\widetilde{X}$. Then
\[ \begin{split} \Delta_\ast(b)&:=(p_2)_\ast\bigl((p_1)^\ast b\cap\Delta\bigr)\\
&=(p_2)_\ast f_\ast \bigl( f^\ast(p_1)^\ast b\cap\widetilde{\Delta}\bigr)\\
&=f_\ast \widetilde{\Delta}_\ast(f^\ast b)\\
&=f_\ast(f^\ast b\cap[\widetilde{X}])=b\cap [{X}].\end{split}\]
\end{proof}
\begin{lemma}\label{factor} Let $X$ be a projective variety of dimension $n$, and suppose $C\in A_n(X\times X)_{\mathbb{Q}}$ is the image of a cycle $c\in A_n(V\times W)_{\mathbb{Q}}$, for some closed subvarieties $V$ and $W$ in $X$.
Then there exists a factorization
\[
\begin{array}[c]{ccc}
H^j(\widetilde{V}\times\widetilde{W},\mathbb{Q})&\stackrel{\cdot[\widetilde{c}]}{\to}&H_{2n-j}(\widetilde{V}\times\widetilde{W},\mathbb{Q}) \\
\uparrow&&\downarrow\\
H^{j}(\widetilde{V},\mathbb{Q})&& H_{2n-j}(\widetilde{W},\mathbb{Q}) \\
\uparrow&&\downarrow\\
H^{j}({X},\mathbb{Q})&\ \ \stackrel{C_\ast}{\to} \ \ &H_{2n-j}({X},\mathbb{Q})\ \\
\end{array}\]
(where $\widetilde{V}$ and $\widetilde{W}$ denote resolutions of singularities, and $\widetilde{c}\in A_n(\widetilde{V}\times\widetilde{W})_{\mathbb{Q}}$ is any cycle mapping to $c$).
\end{lemma}
\begin{proof}
This is a formality. Let
\[\begin{split}
&\psi\colon \ \widetilde{V}\to X ,\\
& \phi\colon \ \widetilde{W}\to X\\ \end{split}\]
denote the compositions of the resolution morphism with the inclusion morphism. Let $q_1$ and $q_2$ denote the projection from $\widetilde{V}\times\widetilde{W}$ to the first resp. second factor.
Then for any $b\in H^j(X,\mathbb{Q})$,
\[\begin{split}
C_\ast(b):=&(p_2)_\ast \Bigl( (p_1)^\ast(b)\cap [C]\Bigr)\\
=&(p_2)_\ast \Bigl( (p_1)^\ast(b)\cap (\psi\times\phi)_\ast [\widetilde{c}]\Bigr)\\
=&(p_2)_\ast (\psi\times\phi)_\ast\Bigl( (\psi\times\phi)^\ast(p_1)^\ast(b)\cap [\widetilde{c}]\Bigr)\\
=&\phi_\ast(q_2)_\ast\Bigl( (q_1)^\ast\psi^\ast(b)\cap[\widetilde{c}]\Bigr)\ .\\
\end{split}\]
\end{proof}
\begin{remark}\label{alsoA} Naturally, definition \ref{corr} extends to other cohomology theories. For instance, if $A^\ast$ denotes the operational Chow cohomology of Fulton--MacPherson \cite{F}, any correspondence $C\in A_n(X\times X)_{\mathbb{Q}}$ defines an action
\[ C_\ast\colon\ A^iX_{\mathbb{Q}}\to A_{n-i}(X)_{\mathbb{Q}}\ .\]
The lemmas \ref{cap} and \ref{factor} still hold in this context (indeed, the proofs are the same; they only use formal properties of cohomology/homology).
\end{remark}
\section{Mumford theorem}
\begin{definition} Let $X$ be a quasi--projective variety. We let $W_\ast$ and $F^\ast$ denote the weight filtration, resp. the Hodge filtration, on cohomology and on homology of $X$ \cite{PS}.
\end{definition}
\begin{proposition} Let $X$ be a quasi--projective variety of dimension $n$. Suppose
\[\hbox{Niveau}(A_iX_{\mathbb{Q}})\le r\ \ \hbox{for\ all\ }i\ ,\]
and suppose there exists a compactification of $X$ with singular locus of dimension $\le {n+r+1\over 3}$.
Then
\[ \hbox{Gr}^k_F W_{-j}H_j(X,\mathbb{C})=0\ \ \hbox{provided\ }\vert 2k+j\vert >r \ .\]
\end{proposition}
This follows from a more precise version:
\begin{proposition}\label{precisemum} Let $X$ be a quasi--projective variety of dimension $n$, and suppose there exists a compactification of $X$ with singular locus of dimension $\le s$. Suppose
\[\hbox{Niveau}(A_iX_{\mathbb{Q}})\le r\ \ \hbox{for\ all\ }i\le \ell\ .\]
Let $j\in[0,n-s]\cup[2s-r,2n]$. Then
\[ \hbox{Gr}^k_F W_{-j}H_j(X,\mathbb{C})=0\ \ \hbox{provided\ }\vert 2k+j\vert >r \ .\]
\end{proposition}
\begin{proof} Let $\tau\colon X\to\bar{X}$ denote the given compactification, with boundary $D=\bar{X}\setminus X$.
Taking the transpose of the decomposition of lemma \ref{diag}, we obtain a decomposition of the diagonal
\[ \Delta=\Delta_0+\Delta_1+\cdots+\Delta_{n-r}+\Gamma\ \ \in A_n(\bar{X}\times\bar{X})_{\mathbb{Q}}\ ,\]
where $\Delta_i$ is supported on $ V_i\times W_i$, and $V_i$ (resp. $W_i$) is of dimension $i+r$ (resp. $n-i$), and $\Gamma$ is supported on $\bar{X}\times D$.
\item{\underline{Step 1: $j\le n-s$. }} Let
\[ a\in \hbox{Gr}^k_F W_{-j} H_j(X,\mathbb{C})\ ,\]
with $k$ and $j$ as indicated in the proposition.
Using strict compatibility of the Hodge filtration, one can find
\[\bar{a}\in \hbox{Gr}^k_F W_{-j} H_j(\bar{X},\mathbb{C}) \]
restricting to $a$ (i.e. $\tau^\ast(\bar{a})=a\in H_{j}(X,\mathbb{C})$). Applying lemma \ref{durf} below, there exists
\[ b\in \hbox{Gr}_F^{k+n} H^{2n-j}(\bar{X},\mathbb{C})\]
such that
\[ \bar{a}=b\cap[\bar{X}]\ \ \in \hbox{Gr}^k_F W_{-j} H_j(\bar{X},\mathbb{C})\ .\]
In other words, we have
\[ \bar{a}=\Delta_\ast(b)=( \Delta_0+\cdots+\Delta_{n-r}+\Gamma)_\ast(b)\ \]
(here we have used lemma \ref{cap}), and hence
\[ a=\tau^\ast(\bar{a})=\tau^\ast\Bigl( ( \Delta_0+\cdots+\Delta_{n-r}+\Gamma)_\ast(b)\Bigr)\ \ \in \hbox{Gr}^k_F W_{-j}H_j(X,\mathbb{C})\ .\]
Now, we analyze the actions of these correspondences piece by piece:
First,
\[\tau^\ast \Gamma_\ast(b)=0\ .\]
Indeed, using lemma \ref{factor}, we find that $\Gamma_\ast(b)$ is supported on $D$.
Next, we consider the action of $\Delta_i$. There is a factorization (guaranteed by lemma \ref{factor})
\[\begin{array}[c]{ccc}
\cdots&&\\
\uparrow&&\downarrow\\
\hbox{Gr}_F^{k+n} H^{2n-j}(\widetilde{V_{i}},\mathbb{C})&& \hbox{Gr}_F^{k+n-i}H^{2n-2i-j}(\widetilde{W_{i}},\mathbb{C}) \\
\uparrow&&\downarrow\\
\hbox{Gr}_F^{k+n}\hbox{Gr}^W_{2n-j}H^{2n-j}(\bar{X},\mathbb{C})&\ \ \stackrel{(\Delta_{i})_\ast}{\to} \ \ &\hbox{Gr}_F^{k}W_{-j}H_{j}(\bar{X},\mathbb{C})\ .\\
\end{array}\]
The upper left group (which is just $H^{k+n,n-k-j}(\widetilde{V_i})$) vanishes for $k+n>i+r$ and for $n-k-j>i+r$. The upper right group vanishes for $k+n-i<0$ and for $n-i-j-k<0$. It follows that $(\Delta_i)_\ast(b)$ vanishes unless
\[ \hbox{both\ } k+n\hbox{\ and\ }n-k-j\ \in [i,i+r]\ ;\]
in particular, since in that case $\vert 2k+j\vert=\vert (k+n)-(n-k-j)\vert\le r$, the class $(\Delta_i)_\ast(b)$ vanishes under the hypothesis $\vert 2k+j\vert>r$.
\item{\underline{Step 2 : $j\ge 2s-r$}} Let $S$ denote the singular locus of $X$, and $U=X\setminus S$ the non--singular locus.
We have the exact sequence
\[ \hbox{Gr}^k_F W_{-j}H_j(S,\mathbb{C})\to \hbox{Gr}^k_F W_{-j}H_j(X,\mathbb{C})\to \hbox{Gr}^k W_{-j}H_j(U,\mathbb{C})\ .\]
Suppose now $\vert 2k+j\vert>r$. Then the group on the left vanishes for dimension reasons (indeed, suppose for simplicity $S$ is equidimensional of dimension $s$, and let $\widetilde{S}\to S$ be a resolution; then $W_{-j}H_j(S,\mathbb{C})$ comes from $H^{2s-j}(\widetilde{S},\mathbb{C})$, which has Hodge level $\le 2s-j\le r$, since $j\ge 2s-r$). The vanishing of the group on the right follows from lemma \ref{smoothmum} below.
\begin{lemma}\label{smoothmum} Let $X$ be a smooth quasi--projective variety, and suppose
\[\hbox{Niveau}(A_iX_{\mathbb{Q}})\le r\ \ \hbox{for\ all\ }i\ .\]
Then
\[ \hbox{Gr}^k_F W_{-j}H_j(X,\mathbb{C})=0\ \ \hbox{provided\ }\vert 2k+j\vert >r \ .\]
\end{lemma}
\begin{proof} Let $\tau\colon X\to\bar{X}$ be a smooth compactification, with boundary $D=\bar{X}\setminus X$. Taking the transpose of the decomposition of lemma \ref{diag}, as above, we obtain a decomposition
\[ \Delta=\Delta_0+\Delta_1+\cdots+\Delta_{n-r}+\Gamma\ \ \in A_n(\bar{X}\times\bar{X})_{\mathbb{Q}}\ ,\]
where $\Delta_i$ is supported on $ V_i\times W_i$, and $V_i$ (resp. $W_i$) is of dimension $i+r$ (resp. $n-i$), and $\Gamma$ is supported on $\bar{X}\times D$.
Given $a\in \hbox{Gr}^k_F W_{-j}H_j(X,\mathbb{C})$, we can find
\[\bar{a}\in \hbox{Gr}^k_F W_{-j}H_j(\bar{X},\mathbb{C})\]
restricting to $a$. Then we have
\[a=\tau^\ast(\bar{a}) =\tau^\ast\bigl( (\Delta_0+\cdots+\Delta_{n-r}+\Gamma)_\ast (\bar{a})\bigr)\ \ \in H_j(X,\mathbb{C})\ .\]
Just as above, we check that $\tau^\ast\Gamma_\ast(\bar{a})=0$, and that
\[ (\Delta_i)_\ast(\bar{a})=0\ \ \hbox{provided\ }\vert 2k+j\vert >r\ .\]
\end{proof}
\begin{lemma}\label{durf} Let $X$ be a projective variety of dimension $n$, and with singular locus of dimension $\le s$.
\item{(\rom1)}
The natural map
\[ \hbox{Gr}^W_{j} H^j(X,\mathbb{Q})\to W_{j-2n} H_{2n-j}(X,\mathbb{Q})\]
is injective for $j\le n-s$, and surjective for $j\ge n+s$.
\item{(\rom2)} For any $k$, the natural map
\[ F^k H^j(X,\mathbb{C})\to F^{k-n} W_{j-2n} H_{2n-j}(X,\mathbb{C})\]
is surjective for $j\ge n+s$.
\item{(\rom3)} The natural map
\[ H^{2j}(X,\mathbb{Q})\cap F^jH^{2j}(X,\mathbb{C})\ \to\ W_{2j-2n}H_{2n-2j}(X,\mathbb{Q})\cap F^{j-n}H_{2n-2j}(X,\mathbb{C})\]
is surjective for $j\ge n+s$.
\end{lemma}
\begin{proof}
\item{(\rom1)}
Let $IH^jX$ denote middle--perversity intersection homology with rational coefficients. It follows from work of Durfee \cite{D} that
\[ IH^jX=\begin{cases} \hbox{Gr}^W_j H^j(X,\mathbb{Q}), & j\ge n+s;\\
W_{j-2n} H_{2n-j}(X,\mathbb{Q}), & j\le n-s\ .
\end{cases}\]
It is well--known \cite{GM}, \cite{GM2} that the ``Poincar\'e duality'' map factors
\[ \hbox{Gr}^W_j H^j(X,\mathbb{Q}) \to IH^jX \to W_{j-2n} H_{2n-j}(X,\mathbb{Q})\ .\]
Moreover, it is known \cite{HS} that the first arrow is injective, and the second arrow surjective.
\item{(\rom2)} The natural map (given by the cap product) is a map of Hodge structures; as such, it is strictly compatible with the Hodge filtration.
\item{(\rom3)} Consider again the factorization
\[ \hbox{Gr}^W_{2j} H^{2j}(X,\mathbb{Q}) \stackrel{\cong}{\to} IH^{2j}X \to W_{2j-2n} H_{2n-2j}(X,\mathbb{Q})\ .\]
The group $IH^{2j}X$ admits a polarized Hodge structure, given by the Hodge--Riemann relations proven in \cite[Theorem 2.2.3]{CM}. This implies (\cite[Corollary 2.24]{Vo}) that a Hodge class in the image comes from a Hodge class in $IH^{2j}X$. But since the left arrow is an isomorphism (and a map of Hodge structures), this Hodge class comes from a Hodge class in $\hbox{Gr}^W_{2j} H^{2j}(X,\mathbb{Q})$.
\end{proof}
\end{proof}
\begin{remark} The proof of proposition \ref{precisemum} actually yields a slightly more general statement, which is as follows:
Let $X$ be a quasi--projective variety of dimension $n$ (no condition on the singular locus), with
\[\hbox{Niveau}(A_iX_{\mathbb{Q}})\le r\ \ \hbox{for\ all\ }i\ .\]
Then
\[ \hbox{Im}\Bigl( H^{2n-j}(X,\mathbb{C})\to H_j(X,\mathbb{C})\Bigr)\cap \hbox{Gr}^k_F=0\ \ \hbox{provided\ }\vert 2k+j\vert >r \ .\]
\end{remark}
\begin{remark} In the smooth case, one can easily obtain Mumford type theorems involving the coniveau filtration rather than the Hodge filtration. Unfortunately, in the singular case I have not been able to obtain such a statement. The problem lies in the use of lemma \ref{durf}: it is not clear to me whether the surjection
\[ H^j(X,\mathbb{Q})\to W_{j-2n}H_{2n-j}(X,\mathbb{Q})\]
respects the coniveau filtration.
\end{remark}
\section{The Hodge conjecture}
We recall the formulation of the Hodge conjecture that is adapted to singular varieties \cite{J}, \cite{LH}.
\begin{definition}[Hodge conjecture] Let $X$ be a quasi--projective variety, and $j\in\mathbb{N}$. We say that $HC(X,2j)$ holds if the cycle class map
\[cl_j\colon A_jX_{\mathbb{Q}}\ \to\ W_{-2j} H_{2j}(X,\mathbb{Q})\cap \hbox{Gr}^{-j}_F W_{-2j}H_{2j}(X,\mathbb{C})\]
is surjective.
\end{definition}
\begin{remark} It is known that the Hodge conjecture in degree $2j$ for all smooth projective varieties implies $HC(X,2j)$ for all quasi--projective varieties $X$; this is proven by descent \cite{J}. In particular, for $X$ of dimension $n$ we know that $HC(X,2j)$ is true for $j=0,1,n-1,n$.
\end{remark}
\begin{proposition} Let $X$ be a quasi--projective variety of dimension $n$, and suppose there exists a compactification with singular locus of dimension $\le {n+4\over 3}$. Suppose
\[\hbox{Niveau}(A_iX_{\mathbb{Q}})\le 3\ \ \hbox{for\ all\ }i\le \ell\ .\]
Then $HC(X,2j)$ is true for $j\le \ell+2$.
\end{proposition}
This follows from a more precise version:
\begin{proposition}\label{precise} Let $X$ be a quasi--projective variety of dimension $n$, and suppose there exists a compactification with singular locus of dimension $\le s$. Suppose
\[\hbox{Niveau}(A_iX_{\mathbb{Q}})\le 3\ \ \hbox{for\ all\ }i\le \ell\ .\]
Then $HC(X,2j)$ is true for $2j\in \bigl([0,n-s]\cup [2s-2,2n]\bigr)\cap[0,2\ell+4]$.
\end{proposition}
\begin{proof} Let $\tau\colon X\to\bar{X}$ denote the given compactification, with boundary $D=\bar{X}\setminus X$. Taking the transpose of the decomposition of lemma \ref{diag}, we obtain a decomposition of the diagonal
\[ \Delta=\Delta_0+\Delta_1+\cdots+\Delta_\ell+\Delta^{\ell+1}+\Gamma\ \ \in A_n(\bar{X}\times\bar{X})_{\mathbb{Q}}\ ,\]
where $\Delta_i$ is supported on $W_i\times V_i$, $\Delta^{\ell+1}$ is supported on $W_{\ell+1}\times\bar{X}$, and $V_i$ (resp. $W_i$) is of dimension $i+3$ (resp. $n-i$), and $\Gamma$ is supported on $\bar{X}\times D$.
\item{\underline{Step 1: $2j\le \min(n-s,2\ell+4)$. }} Let
\[a\in W_{-2j} H_{2j}(X,\mathbb{Q})\cap \hbox{Gr}^{-j}_F W_{-2j}H_{2j}(X,\mathbb{C})\]
be a Hodge class. Let $\bar{a}\in W_{-2j} H_{2j}(\bar{X},\mathbb{Q})$ be a Hodge class restricting to $a$, i.e. $\tau^\ast(\bar{a})=a$ (to see this exists, one needs to use a resolution of singularities of $\bar{X}$ and the existence of a polarisation on this resolution).
According to lemma \ref{durf}, there exists a Hodge class
\[b\in \hbox{Gr}^W_{2n-2j} H^{2n-2j}(\bar{X},\mathbb{Q})\cap \hbox{Gr}^{n-j}_F \hbox{Gr}^W_{2n-2j} H^{2n-2j}(\bar{X},\mathbb{C})\ \]
such that
\[ b\cap[\bar{X}]=\bar{a} \ \ \in H_{2j}(\bar{X},\mathbb{Q})\ .\]
It follows that
\[a=\tau^\ast(\bar{a})=\tau^\ast(\Delta_\ast b)=\tau^\ast\Bigl((\Delta_0+\cdots+\Delta^{\ell+1}+\Gamma)_\ast b\Bigr)\ \ \in H_{2j}({X},\mathbb{Q})\ ,\]
and it remains to analyze the action of each piece in the decomposition:
As for the last piece, obviously
\[\tau^\ast\Gamma_\ast(b)=0\ ,\]
as $\Gamma_\ast(b)$ is supported on $D$.
Next, the action of $\Delta^{\ell+1}$. This factors
\[\begin{array}[c]{ccc}
\cdots&&\\
\uparrow&&\downarrow\\
H^{2n-2j}(\widetilde{W_{\ell+1}},\mathbb{Q})\cap F^{n-j}&&\downarrow\\
\uparrow&&\\
\hbox{Gr}^W_{2n-2j}H^{2n-2j}(\bar{X},\mathbb{Q})\cap F^{n-j}&\ \ \stackrel{(\Delta^{\ell+1})_\ast}{\to} \ \ &H_{2j}(\bar{X},\mathbb{Q})\ .\\
\end{array}\]
But the group on the left is generated by cycles for $j\le \ell+2$ (this is $HC(\widetilde{W_{\ell+1}},2)$); it follows that
\[ (\Delta^{\ell+1})_\ast(b)\ \ \in H_{2j}(\bar{X},\mathbb{Q})\]
is a cycle class.
As for the action of $\Delta_i$, this is similar. We have a factorization
\[\begin{array}[c]{ccc}
\cdots&&\\
\uparrow&&\downarrow\\
H^{2n-2j}(\widetilde{W_{i}},\mathbb{Q})\cap F^{n-j}&& H_{2j}(\widetilde{V_{i}},\mathbb{Q})\cap F^{j-n} \\
\uparrow&&\downarrow\\
\hbox{Gr}^W_{2n-2j}H^{2n-2j}(\bar{X},\mathbb{Q})\cap F^{n-j}&\ \ \stackrel{(\Delta_{i})_\ast}{\to} \ \ &H_{2j}(\bar{X},\mathbb{Q})\ .\\
\end{array}\]
The upper left group is generated by cycles provided $2n-2j\ge 2\dim\widetilde{W_i}-2=2n-2i-2$, i.e. provided $j\le i+1$. The upper right group is generated by cycles provided $2j\ge 2\dim\widetilde{V_i}-2=2i+4$, i.e. provided $j\ge i+2$. It follows that for any $j$,
\[ (\Delta_i)_\ast(b)\ \ \in H_{2j}(\bar{X},\mathbb{Q})\]
is a cycle class.
\item{\underline{Step 2: $j\in[ s-1,\ell+2]$. }} Let $U\subset X$ be the complement of the singular locus $S$ of $X$. We have a commutative diagram with exact rows
\[
\begin{array}[c]{cccccc}
A_jS_{\mathbb{Q}}&\to& A_jX_{\mathbb{Q}}&\to& A_jU_{\mathbb{Q}}&\to 0\\
\downarrow{cl_j}&&\downarrow{cl_j}&&\downarrow{cl_j}&\\
W_{-2j} H_{2j}(S,\mathbb{Q})&\to&W_{-2j} H_{2j}(X,\mathbb{Q})&\to& W_{-2j} H_{2j}(U,\mathbb{Q})&\to 0\ .
\end{array}\]
It follows from lemma \ref{smoothcase} below that for any $j\le \ell+2$ the right vertical map is surjective on Hodge classes. Any Hodge class in $H_{2j}X$ that is supported on $S$ comes from a Hodge class on $S$ (this can be seen by going to a resolution of singularities of $S$). But the left vertical arrow is surjective on Hodge classes provided $j\ge s-1$.
\begin{lemma}\label{smoothcase} Let $U$ be a smooth quasi--projective variety of dimension $n$, and suppose
\[\hbox{Niveau}(A_iU_{\mathbb{Q}})\le 3\ \ \hbox{for\ all\ }i\le \ell\ .\]
Then $HC(U,2j)$ is true for all $j\le \ell+2$.
\end{lemma}
\begin{proof} Let $\tau\colon U\to\bar{U}$ denote a smooth compactification, with boundary $D=\bar{U}\setminus U$. As above (taking the transpose of the decomposition of lemma \ref{diag}), we obtain a decomposition
\[ \Delta=\Delta_0+\Delta_1+\cdots+\Delta_\ell+\Delta^{\ell+1}+\Gamma\ \ \in A_n(\bar{U}\times\bar{U})_{\mathbb{Q}}\ ,\]
where $\Delta_i$ is supported on $V_i\times W_i$, $\Delta^{\ell+1}$ is supported on $ W_{\ell+1}\times\bar{U}$, and $V_i$ (resp. $W_i$) is of dimension $i+3$ (resp. $n-i$), and $\Gamma$ is supported on $\bar{U}\times D$.
Let $a\in W_{-2j}H_{2j}(U,\mathbb{Q})$ be a Hodge class, where $j\le \ell+2$. Let $\bar{a}\in H_{2j}(\bar{U},\mathbb{Q})$ be a Hodge class restricting to $a$. Then
\[ a=\tau^\ast(\bar{a})=\tau^\ast\bigl( (\Delta_0)_\ast(\bar{a})+\cdots+(\Delta_\ell)_\ast(\bar{a})+(\Delta^{\ell+1})_\ast(\bar{a})\bigr)\]
(since obviously $\tau^\ast\Gamma_\ast(\bar{a})=0$).
But
\[(\Delta^{\ell+1})_\ast(\bar{a})\ \ \in H_{2j}(\bar{U},\mathbb{Q})\]
is a cycle class, just as above (the action of $\Delta^{\ell+1}$ factors over $H^{2n-2j}(\widetilde{W_{\ell+1}},\mathbb{Q})\cap F^{n-j}$, which is generated by cycles for $j\le \ell+2$).
Likewise, each
\[(\Delta_{i})_\ast(\bar{a})\ \ \in H_{2j}(\bar{U},\mathbb{Q})\]
is a cycle class (this is the same argument as above).
\end{proof}
\end{proof}
\begin{remark} The argument of proposition \ref{precise} actually shows the following weak version of $HC(X,2j)$: let $X$ be projective of dimension $n$, and suppose
\[\hbox{Niveau}(A_iX_{\mathbb{Q}})\le 3\ \ \hbox{for\ all\ }i\le \ell\ .\]
Then the group
\[ \hbox{Im}\bigl( H^{2n-2j}(X,\mathbb{Q})\to H_{2j}(X,\mathbb{Q})\bigr)\cap F^{-j}H_{2j}(X,\mathbb{C})\]
is generated by algebraic cycles for $j\le \ell+2$.
\end{remark}
\begin{remark} It seems likely one could likewise prove the generalized Hodge conjecture for quasi--projective varieties (as formulated in \cite[Conjecture 2.4]{L}), in the case where Chow groups have niveau $\le 2$ (extending the smooth projective case \cite{moi}). I have not looked into this.
\end{remark}
\begin{remark} In \cite{A} and \cite{A2}, Arapura studies the Hodge conjecture for (possibly singular) varieties that have a small Hodge diamond; his approach is somewhat different from the present note.
\end{remark}
\begin{corollary}\label{A0} Let $X$ be quasi--projective of dimension $5$, with singular locus of dimension $\le 3$. Suppose
\[\hbox{Niveau}(A_0X_{\mathbb{Q}})\le 3\ .\]
Then $HC(X,4)$ is true.
\end{corollary}
In particular, corollary \ref{A0} applies to log $\mathbb{Q}$--Fano varieties; by a result of Zhang \cite{Z} such varieties are rationally connected, hence $\hbox{Niveau}(A_0X_{\mathbb{Q}})\le 0$.
\begin{corollary}\label{A1} The Hodge conjecture $HC(X,\ast)$ is completely verified in the following cases:
\item{(\rom1)} $X$ is a cubic of dimension $6$, and with singular locus of dimension $\le 3$;
\item{(\rom2)} $X\subset\mathbb{P}^8$ is the intersection of a quadric and a cubic, and $X$ has singular locus of dimension $\le 3$.
\end{corollary}
\begin{proof} Since $X$ is in both cases a complete intersection, it suffices to consider $HC(X,j)$ for $j\ge\dim X=6$. The result now follows from proposition
\ref{hodge}, plus the fact that
\[ \hbox{Niveau}(A_iX_{\mathbb{Q}})\le 0\ \ \hbox{for\ }i\le 1\ .\]
In case (\rom1), this statement was proven by Esnault--Levine--Viehweg \cite{ELV}; in case (\rom2) this is proven by Hirschowitz--Iyer \cite{HI}.
\end{proof}
\begin{acknowledgements}
This note was written while preparing for the Strasbourg ``groupe de travail'' based on the monograph \cite{Vo}. I wish to thank all the participants of this groupe de travail for the very pleasant and stimulating atmosphere.
\end{acknowledgements}
| {
"timestamp": "2015-07-17T02:05:44",
"yymm": "1507",
"arxiv_id": "1507.04484",
"language": "en",
"url": "https://arxiv.org/abs/1507.04484",
"abstract": "What is generally known as the \"Bloch--Srinivas method\" consists of decomposing the diagonal of a smooth projective variety, and then considering the action of correspondences in cohomology. In this note, we observe that this same method can also be extended to singular and quasi--projective varieties. We give two applications of this observation: the first is a version of Mumford's theorem, the second is concerned with the Hodge conjecture for singular varieties.",
"subjects": "Algebraic Geometry (math.AG)",
"title": "Correspondences and singular varieties",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.978712651931994,
"lm_q2_score": 0.724870282120402,
"lm_q1q2_score": 0.7094397161207513
} |
https://arxiv.org/abs/1509.02533 | Absorbing random-walk centrality: Theory and algorithms | We study a new notion of graph centrality based on absorbing random walks. Given a graph $G=(V,E)$ and a set of query nodes $Q\subseteq V$, we aim to identify the $k$ most central nodes in $G$ with respect to $Q$. Specifically, we consider central nodes to be absorbing for random walks that start at the query nodes $Q$. The goal is to find the set of $k$ central nodes that minimizes the expected length of a random walk until absorption. The proposed measure, which we call $k$ absorbing random-walk centrality, favors diverse sets, as it is beneficial to place the $k$ absorbing nodes in different parts of the graph so as to "intercept" random walks that start from different query nodes.Although similar problem definitions have been considered in the literature, e.g., in information-retrieval settings where the goal is to diversify web-search results, in this paper we study the problem formally and prove some of its properties. We show that the problem is NP-hard, while the objective function is monotone and supermodular, implying that a greedy algorithm provides solutions with an approximation guarantee. On the other hand, the greedy algorithm involves expensive matrix operations that make it prohibitive to employ on large datasets. To confront this challenge, we develop more efficient algorithms based on spectral clustering and on personalized PageRank. | \section{Introduction}
A fundamental problem in graph mining is to
identify the most central nodes in a graph.
Numerous centrality measures have been proposed,
including
degree centrality,
closeness centrality~\cite{closenesscentrality},
betweenness centrality~\cite{betweennesscentrality},
random-walk centrality~\cite{randomwalkcentrality},
Katz centrality~\cite{katzcentrality}, and
PageRank~\cite{PageRank}.
In the interest of robustness
many centrality measures use random walks:
while the shortest-path distance between two nodes can
change dramatically by inserting or deleting a single edge,
distances based on random walks account for multiple paths and offer a
more global view of the connectivity between two nodes.
In this spirit, the random-walk centrality of {\it one} node with
respect to {\it all nodes} of the graph is defined as the
expected time needed to come across this node in a random walk that
starts in any other node of the graph~\cite{randomwalkcentrality}.
In this paper, we consider a measure that generalizes random-walk
centrality for a {\it set} of nodes $C$
with respect to a set of \emph{query nodes} $Q$.
Our centrality measure is defined as the expected length
of a random walk that starts from any node in $Q$ until
it reaches any node in $C$ --- at which point the random walk
is {\it ``absorbed''} by $C$. Moreover, to allow for adjustable
importance of query nodes in the centrality measure,
we consider random walks {\it with restarts}, which occur with a fixed
probability $\alpha$ at each step of the random walk.
The resulting computational problem
is to find a set of $k$ nodes $C$ that optimizes this measure with respect to nodes $Q$,
which are provided as input.
We call this measure $k$ {\em absorbing random-walk centrality}
and the corresponding optimization problem {{{$k$-{\sc arw}-{\sc Central}\-{\sc ity}}}}.
To motivate the {{$k$-{\sc arw}-{\sc Central}\-{\sc ity}}}\ problem, let us consider the scenario of
searching the Web graph and summarizing the search results.
In this scenario, nodes of the graph correspond to webpages,
edges between nodes correspond to links between pages, and the set of
query nodes $Q$ consists of all nodes that match a user query,
i.e., all webpages that satisfy a keyword search.
Assuming that the size of $Q$ is large, the goal is to find the $k$
most central nodes with respect to $Q$,
and present those to the user.
It is clear that ordering the nodes of the graph by their individual
random-walk centrality scores and taking the top-$k$ set does not solve the
{{$k$-{\sc arw}-{\sc Central}\-{\sc ity}}}\ problem,
as these nodes may all be located in the
same ``neighborhood'' of the graph, and thus,
may not provide a good absorbing set for the query.
On the other hand,
as the goal is to minimize the expected absorption time for walks starting at $Q$,
the optimal solution to the {{$k$-{\sc arw}-{\sc Central}\-{\sc ity}}}\ problem will be a set of $k$, both
{\em centrally-placed} and {\em diverse}, nodes.
This observation has motivated researchers in the informa\-tion-retrieval
field to consider random walks with absorbing states in order
to diversify web-search results~\cite{Zhu:2007tj}.
However, despite the fact that similar problem definitions and algorithms have
been considered earlier, the {{$k$-{\sc arw}-{\sc Central}\-{\sc ity}}}\ problem has not been formally
studied and there has not been a theoretical analysis of its properties.
Our key results in this paper are the following:
we show that the {{$k$-{\sc arw}-{\sc Central}\-{\sc ity}}}\ problem is {\ensuremath{\mathbf{NP}}}-hard, and
we show that the $k$ absorbing random-walk centrality measure is
{\em monotone} and {\em supermodular}.
The latter property allows us to quantify the approximation guarantee
obtained by a natural greedy algorithm,
which has also been considered by previous work~\cite{Zhu:2007tj}.
Furthermore, a na\"{i}ve implementation of the greedy algorithm requires
many expensive matrix inversions, which make the algorithm particularly slow.
Part of our contribution is to show how to make use of the
Sherman-Morrison inversion formula to implement the greedy algorithm
with only one matrix inversion and more efficient
matrix$\,\times\,$vector multiplications.
Moreover, we explore the performance of faster, heuristic algorithms,
aiming to identify methods that are faster than the greedy approach
without significant loss in the quality of results. The heuristic
algorithms we consider include the personalized PageRank
algorithm~\cite{PageRank,langville2005survey}
as well as algorithms based on spectral clustering~\cite{von2007tutorial}. We find that, in
practice, the personalized PageRank algorithm offers a very good trade-off
between speed and quality.
The rest of the paper is organized as follows.
In Section~\ref{sec:related},
we overview previous work and discuss how it compares to this paper.
We define our problem in Section~\ref{section:problem-definition}
and provide basic background results on absorbing random walks in
Section~\ref{section:absorbing}.
Our main technical contributions are given in Sections~\ref{section:absorbing} and~\ref{sec:properties},
where we characterize the complexity of the problem, and provide the details of the
greedy algorithm and the heuristics we explore.
We evaluate the performance of algorithms in Section~\ref{sec:experiments},
over a range of real-world graphs,
and Section~\ref{sec:conclusions} is a short conclusion.
Proofs for some of the theorems shown in the paper are
provided in the Appendix.
\section{Related work}
\label{sec:related}
Many works in the literature explore ways to quantify
the notion of node centrality on graphs~\cite{boldi2014axioms}.
Some of the most commonly-used measures include the following:
($i$) {\em degree centrality}, where the centrality of a node is
simply quantified by its degree;
($ii$) {\em closeness centrality} \cite{leavitt1951some,closenesscentrality}, defined
as the average distance of a node from all other nodes on the
graph;
($iii$) {\em betweenness centrality}~\cite{betweennesscentrality},
defined as the number of shortest paths between pairs of
nodes in the graph that pass through a given node;
($iv$) {\em eigenvector centrality}, defined as the stationary probability
that a Markov chain on the graph visits a given node, with
Katz centrality~\cite{katzcentrality} and PageRank~\cite{PageRank}
being two well-studied variants; and
($v$) {\em random-walk centrality}~\cite{randomwalkcentrality}, defined as
the expected first passage time of a random walk from a given node,
when it starts from a random node of the graph.
The measure we study in this paper
generalizes the notion of {\em random-walk centrality} to a set
of absorbing nodes.
Absorbing random walks have been used in previous work
to select a {\it diverse} set of nodes from a graph.
For example,
an algorithm proposed by Zhu et~al.~\cite{Zhu:2007tj} selects nodes in the following manner:
($i$) the first node is selected based on its PageRank value and is set as absorbing;
($ii$) the next node to be selected is the node that maximizes the expected first-passage time from the already selected absorbing nodes.
Our problem definition differs considerably from the one considered in that work,
as in our work the expected first-passage times are always computed from the set of
query nodes that are provided in the input, and not from the nodes that participate in the solution so far.
In this respect, the greedy method proposed by Zhu et~al.\ is not associated with a crisp problem definition.
Another conceptually related line of work aims to
select a diverse subset of query results, mainly
within the context of document retrieval~\cite{Agrawal:2009bc,Angel:2011be,Vieira:2011bx}.
The goal, there, is to select $k$ query results to
optimize a function that quantifies the trade-off between relevance
and diversity.
Our work is also remotely related to the problem studied by Leskovec et al.\ on
{\it cost-effective outbreak detection} \cite{leskovec2007cost}.
One of the problems discussed there is to select
nodes in the network so that the detection time for a set of cascades is minimized.
However, their work differs from ours in that they consider as input a set of {\it cascades},
each one of finite size,
while in our case the input consists of a set of query {\it nodes}
and we consider a probabilistic model that generates random walk paths,
of possibly infinite size.
\section{Problem definition}
\label{section:problem-definition}
We are given a graph $G=(V,E)$ over a set of nodes $V$ and set of
undirected edges $E$. The number of nodes $|V|$ is denoted by $n$ and the
number of edges $|E|$ by $m$. The input also includes a subset of
nodes $Q\subseteq V$, to which we refer as the {\em query nodes}.
As a special case, the set of query nodes $Q$ may be equal to the
whole set of nodes, i.e., $Q=V$.
Our goal is to find a set $C$ of $k$
nodes that are {\em central} with respect to the query nodes $Q$.
For some applications it makes sense to restrict the central nodes to
be only among the query nodes, while in other cases,
the central nodes may include any node in $V$.
To model those different scenarios, we consider a set of candidate nodes $D$,
and require that the $k$ central nodes should belong in this candidate set,
i.e., $C\subseteq D$.
Some of the cases include $D=Q$, $D=V$, or $D=V\setminus Q$, but
it could also be that $D$ is defined in some other way
that does not involve $Q$.
In general, we assume that $D$ is given as input.
The {\em centrality} of a set of nodes $C$ with respect to query nodes $Q$
is based on the notion of absorbing random-walks and their expected
length.
More specifically, let us consider a random walk on the nodes $V$ of
the graph, that proceeds at discrete steps: the walk starts from a
node $q\in Q$ and, at each step
moves to a different node, following edges in $G$,
until it arrives at some node in $C$.
The {\em starting} node $q$ of the walk is chosen according to a probability distribution ${\ensuremath{\mathbf{s}}}$.
When the walk arrives at a node $c\in C$ for the first time,
it terminates, and we say that the random walk is {\em absorbed} by that node~$c$.
In the interest of generality, and to allow for adjustable
importance of query nodes in the centrality measure,
we also allow the random walk to restart. Restarts occur with a
probability $\alpha$ at each step of the random walk, where $\alpha$ is
a parameter that is specified as input to the problem.
When restarting, the walk proceeds to a query node selected randomly
according to ${\ensuremath{\mathbf{s}}}$.
Intuitively, larger values of $\alpha$ favor nodes that are closer to nodes $Q$.
We are interested in the expected length (i.e., number of steps)
of the walk that starts from a query node $q\in Q$ until it gets
absorbed by some node in~$C$, and we denote this expected length
by ${\ensuremath{\mathrm{ac}}}^q_{_Q}(C)$.
We then define the {\em absorbing random-walk centrality} of a set of
nodes $C$ with respect to query nodes~$Q$,~by
\[
{\ensuremath{\mathrm{ac}}}_Q(C) = \sum_{q\in Q} {\ensuremath{\mathbf{s}}}(q) \, {\ensuremath{\mathrm{ac}}}^q_{_Q}(C).
\]
The problem we consider in this paper is the following.
\begin{problem}
{\em (}{{$k$-{\sc arw}-{\sc Central}\-{\sc ity}}}{\em )}
\label{problem:k-ac}
We are given a graph $G=(V,E)$,
a set of query nodes $Q \subseteq V$,
a set of candidate nodes $D\subseteq V$,
a starting probability distribution ${\ensuremath{\mathbf{s}}}$ over $V$ such that ${\ensuremath{\mathbf{s}}}(v)=0$ if $v \in V \setminus Q$,
a restart probability~$\alpha$,
and an integer $k$.
We ask to
find a set of $k$ nodes $C\subseteq D$
that {\em minimizes} ${\ensuremath{\mathrm{ac}}}_Q(C)$, i.e.,
the expected length of a random walk that starts from $Q$ and proceeds
until it gets absorbed in some node in $C$.
\end{problem}
In cases where we have no reason to distinguish among the query nodes,
we consider the uniform starting probability distribution
${\ensuremath{\mathbf{s}}}(q) = 1/|Q|$.
In fact, for simplicity of exposition, hereinafter we focus on the
case of uniform distribution. However, we note that all our definitions
and techniques generalize naturally, not only to general starting
probability distributions ${\ensuremath{\mathbf{s}}}(q)$, but also to
{\em directed} and {\em weighted} graphs.
\section{Absorbing random walks}
\label{section:absorbing}
In this section we review some relevant background on absorbing random
walks. Specifically, we discuss how to calculate the objective function
${\ensuremath{\mathrm{ac}}}_Q(C)$ for Problem~\ref{problem:k-ac}.
Let $\mathbf{P}$ be the transition matrix for a random walk,
with $\mathbf{P}(i,j)$ expressing the probability that the random
walk will move to node~$j$ given that it is currently at node $i$.
Since random walks can only move to absorbing nodes $C$, but not away
from them, we set
$\mathbf{P}(c,c)=1$ and
$\mathbf{P}(c,j)=0$, if $j\neq c$,
for all absorbing nodes $c\in C$.
The set $T = V \setminus C$ of non-absorbing nodes is called {\em transient}.
If $N(i)$ are the neighbors
of a node $i \in T$ and $d_i = |N(i)|$ its degree, the transition
probabilities from node $i$ to other nodes are
\begin{equation}
\mathbf{P}(i,j) = \left\{
\begin{array}{ll}
\alpha \, {\ensuremath{\mathbf{s}}}(j) & \text{ if } j \in Q \setminus N(i), \\
(1-\alpha) / d_i+ \alpha \, {\ensuremath{\mathbf{s}}}(j) & \text{ if } j \in N(i), \\
0 & \text{ otherwise}.
\end{array}
\right.
\end{equation}
Here, ${\ensuremath{\mathbf{s}}}$ represents the starting probability vector.
For example, for the uniform distribution over query nodes we have
${\ensuremath{\mathbf{s}}}(i) = 1/|Q|$ if $i\in Q$ and $0$ otherwise.
The transition matrix of
the random walk can be
written as follows
\begin{equation}
\mathbf{P} = \left(
\begin{array}{cc}
\mathbf{P}_{TT} & \mathbf{P}_{TC} \\
\mathbf{0} & \mathbf{I}
\end{array}
\right).
\end{equation}
In the equation above, $\mathbf{I}$ is an $(n - |T|)\times(n - |T|)$
identity matrix and $\mathbf{0}$ a matrix with all its entries equal
to $0$;
$\mathbf{P}_{TT}$ is the $|T| \times |T|$ sub-matrix of $\mathbf{P}$
that contains the transition probabilities between transient nodes; and
$\mathbf{P}_{TC}$ is the $|T|\times|C|$ sub-matrix of $\mathbf{P}$
that contains the transition probabilities from transient to absorbing nodes.
The probability of the walk being on node $j$ at exactly
$\ell$ steps having started at node~$i$,
is given by the $(i,j)$-entry of the matrix $\mathbf{P}_{TT}^\ell$.
Therefore, the expected total number of times that the random walk visits
node $j$ having started from node~$i$ is given by the $(i,j)$-entry of
the $|T|\times |T|$ matrix
\begin{equation}
\mathbf{F} = \sum_{\ell=0}^{\infty} \mathbf{P}_{TT}^\ell
= \left( \mathbf{I} - \mathbf{P}_{TT}\right)^{-1},
\end{equation}
which is known as the {\em fundamental matrix} of the
absorbing random walk.
Allowing the possibility to start the random walk at an absorbing node
(and being absorbed immediately), we see that the expected length of a
random walk that starts from node $i$ and gets absorbed by the set $C$
is given by the $i$-th element of the following $n\times 1$ vector
\begin{equation}
\mathbf{L} = \mathbf{L}_C = \left(
\begin{array}{c}
\mathbf{F} \\
\mathbf{0}
\end{array}
\right) \mathbf{1},
\end{equation}
where $\mathbf{1}$ is a $|T|\times 1$ vector of all 1s.
We write $\mathbf{L}=\mathbf{L}_C$ to emphasize the dependence on the
set of absorbing nodes $C$.
The expected number of steps when starting from a node in $Q$ and
until being absorbed by some node in $C$ is then obtained by summing
over all query nodes, i.e.,
\begin{equation}
\label{eq:inversion}
{\ensuremath{\mathrm{ac}}}_Q(C) = {\ensuremath{\mathbf{s}}}^T\, \mathbf{L}_C.
\end{equation}
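To make Equation~(\ref{eq:inversion}) concrete, the following Python sketch (illustrative only; the function name, the \texttt{numpy} dependency, and the toy graph are our own choices) computes ${\ensuremath{\mathrm{ac}}}_Q(C)$ exactly by forming $\mathbf{P}_{TT}$, inverting $\mathbf{I}-\mathbf{P}_{TT}$, and taking the weighted sum of expected absorption times.
\begin{verbatim}
import numpy as np

def absorbing_centrality(A, Q, C, alpha, s=None):
    # Exact ac_Q(C) via the fundamental matrix F = (I - P_TT)^{-1}.
    #   A     : n x n adjacency matrix of an undirected graph
    #   Q     : list of query-node indices
    #   C     : list of absorbing-node indices (candidate solution)
    #   alpha : restart probability
    #   s     : starting distribution over nodes (defaults to uniform on Q)
    n = A.shape[0]
    if s is None:
        s = np.zeros(n)
        s[Q] = 1.0 / len(Q)
    deg = A.sum(axis=1)
    # Random walk with restarts: P(i,j) = (1 - alpha) A(i,j)/d_i + alpha s(j).
    P = (1 - alpha) * A / deg[:, None] + alpha * s[None, :]
    Cset = set(C)
    T = [v for v in range(n) if v not in Cset]        # transient nodes
    P_TT = P[np.ix_(T, T)]
    F = np.linalg.inv(np.eye(len(T)) - P_TT)          # fundamental matrix
    L = F @ np.ones(len(T))                           # expected absorption times
    return s[T] @ L                                   # ac_Q(C) = s^T L_C

# Toy usage: a 4-cycle, all nodes as queries, C = {0}, no restarts.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(absorbing_centrality(A, Q=[0, 1, 2, 3], C=[0], alpha=0.0))  # -> 2.5
\end{verbatim}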
\subsection{Efficient computation of absorbing centrality}
\label{section:fasteval}
Equation~(\ref{eq:inversion}) pinpoints the difficulty of the problem we consider:
even computing the objective function ${\ensuremath{\mathrm{ac}}}_Q(C)$ for a
candidate solution $C$ requires an expensive matrix inversion;
$\mathbf{F} = \left( \mathbf{I} - \mathbf{P}_{TT}\right)^{-1}$.
Furthermore,
searching for the optimal set $C$
involves an exponential number of candidate sets,
while evaluating each one of them requires a matrix inversion.
In practice, we find that we can approximate ${\ensuremath{\mathrm{ac}}}_Q(C)$ much faster, as shown in Algorithm~\ref{algo:appox_ac}.
The algorithm follows from the infinite-sum expansion of Equation~(\ref{eq:inversion}).
\begin{eqnarray*}
{\ensuremath{\mathrm{ac}}}_Q(C) = {\ensuremath{\mathbf{s}}}^T\, \mathbf{L}_C = {\ensuremath{\mathbf{s}}}^T\,
\left(
\begin{array}{c}
\mathbf{F} \\
\mathbf{0}
\end{array}
\right) \mathbf{1}
= {\ensuremath{\mathbf{s}}}^T\,
\left(
\begin{array}{c}
\sum_{\ell=0}^{\infty}\mathbf{P}_{TT}^\ell \\
\mathbf{0}
\end{array}
\right) \mathbf{1} \\
= {\ensuremath{\mathbf{s}}}^T\,
\sum_{\ell=0}^{\infty}
\left(
\begin{array}{c}
\mathbf{P}_{TT}^\ell \\
\mathbf{0}
\end{array}
\right) \mathbf{1}
= \left(\sum_{\ell=0}^{\infty} {\ensuremath{\mathbf{s}}}^T\,
\left(
\begin{array}{c}
\mathbf{P}_{TT}^\ell \\
\mathbf{0}
\end{array}
\right)\right) \mathbf{1} \\
= \left(\sum_{\ell=0}^{\infty} \mathbf{x}_\ell \right) \mathbf{1}
= \sum_{\ell=0}^{\infty} \mathbf{x}_\ell \mathbf{1},
\end{eqnarray*}
with
\begin{equation}
\mathbf{x}_0 = {\ensuremath{\mathbf{s}}}_T^{^T} \;\text{ and }\; \mathbf{x}_{\ell+1} = \mathbf{x}_\ell \, \mathbf{P}_{TT},
\end{equation}
where ${\ensuremath{\mathbf{s}}}_T$ denotes the restriction of the starting-probability vector ${\ensuremath{\mathbf{s}}}$ to the transient nodes $T$, so that $\mathbf{x}_\ell = {\ensuremath{\mathbf{s}}}_T^{^T}\,\mathbf{P}_{TT}^\ell$.
Note that computing each vector $\mathbf{x}_\ell$ requires time~$\mathcal{O}(n^2)$.
Algorithm~\ref{algo:appox_ac} terminates
when the increase of the sum
due to the latest term
falls below a pre-defined threshold $\epsilon$.
\begin{algorithm}[t]
\caption{{{\sf\small Approximate\-AC}}}
\label{algo:appox_ac}
\begin{algorithmic}
\STATE {\bf Input}: Transition matrix $\mathbf{P}_{TT}$, threshold $\epsilon$,
\\ starting probabilities ${\ensuremath{\mathbf{s}}}$
\STATE {\bf Output}: Absorbing centrality ${\ensuremath{\mathrm{ac}}}_Q$
\STATE $\mathbf{x_0} \leftarrow {\ensuremath{\mathbf{s}}}_T^{^T}$
\STATE $\delta \leftarrow \mathbf{x_0}\cdot\mathbf{1}$
\STATE ${\ensuremath{\mathrm{ac}}} \leftarrow \delta$
\STATE $\ell \leftarrow 0$
\WHILE {$\delta \geq \epsilon$}
\STATE $\mathbf{x_{\ell + 1}} \leftarrow \mathbf{x_\ell}\, \mathbf{P}_{TT}$
\STATE $\delta \leftarrow \mathbf{x_{\ell + 1}}\cdot \mathbf{1}$
\STATE ${\ensuremath{\mathrm{ac}}} \leftarrow {\ensuremath{\mathrm{ac}}} + \delta$
\STATE $\ell \leftarrow \ell + 1$
\ENDWHILE
\STATE \textbf{return} {\ensuremath{\mathrm{ac}}}
\end{algorithmic}
\end{algorithm}
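For concreteness, here is a direct Python transcription of Algorithm~\ref{algo:appox_ac} (a sketch under the conventions above; the function name and the \texttt{numpy} dependency are our own, and an explicit iteration cap is added as a safeguard).
\begin{verbatim}
import numpy as np

def approximate_ac(P_TT, s_T, eps=1e-6, max_iter=100000):
    # Truncate the series sum_{l >= 0} s_T^T P_TT^l 1, stopping once the
    # contribution of the latest term drops below eps.
    x = np.asarray(s_T, dtype=float).copy()  # x_0: s restricted to transient nodes
    delta = x.sum()                          # x_0 . 1
    ac = delta
    for _ in range(max_iter):                # safeguard against slow convergence
        if delta < eps:
            break
        x = x @ P_TT                         # x_{l+1} = x_l P_TT
        delta = x.sum()
        ac += delta
    return ac
\end{verbatim}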
\section{Problem characterization}
\label{sec:properties}
We now study the {{$k$-{\sc arw}-{\sc Central}\-{\sc ity}}}\ problem in more detail.
In particular, we show that the function ${\ensuremath{\mathrm{ac}}}_Q$
is monotone and supermodular, a property that is used later to provide
an approximation guarantee for the greedy algorithm.
We also show that {{$k$-{\sc arw}-{\sc Central}\-{\sc ity}}}\ is {\ensuremath{\mathbf{NP}}}-hard.
Recall that a function $f:2^V\rightarrow\mathbb{R}$ over subsets of
a ground set $V$ is {\em submodular} if it has the
{\em diminishing returns} property
\begin{equation}
f(Y \cup \{u\}) - f(Y) \le f(X \cup \{u\}) - f(X),
\end{equation}
for all $X\subseteq Y\subseteq V$ and $u\not\in Y$.
The function $f$ is {\em supermodular} if $-f$ is submodular.
Submodularity (and supermodularity) is a very useful property for
designing algorithms.
For instance, minimizing a submodular function is a polynomial-time solvable problem,
while the maximization problem is typically amenable to approximation algorithms,
the exact guarantee of which depends on other properties of the function and requirements of the problem,
e.g., monotonicity, matroid constraints, etc.
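As a further illustration (our own sketch, not the implementation studied in this paper, which additionally uses the Sherman-Morrison formula to avoid repeated inversions), the natural greedy heuristic mentioned in the introduction can be written as follows, re-evaluating the objective with an exact routine such as the one sketched in Section~\ref{section:absorbing}.
\begin{verbatim}
def greedy_central_nodes(A, Q, D, k, alpha):
    # Naive greedy sketch for k-arw-Centrality: at each step add the candidate
    # node whose inclusion minimizes ac_Q(C), using absorbing_centrality above.
    # This version performs O(k |D|) matrix inversions; illustration only.
    C = []
    for _ in range(k):
        best_u, best_val = None, float("inf")
        for u in D:
            if u in C:
                continue
            val = absorbing_centrality(A, Q, C + [u], alpha)
            if val < best_val:
                best_u, best_val = u, val
        C.append(best_u)
    return C
\end{verbatim}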
Even though the objective function ${\ensuremath{\mathrm{ac}}}_Q(C)$ is
given in closed-form by Equation~(\ref{eq:inversion}),
to prove its properties we find it more convenient to work with its
descriptive definition, namely,
${\ensuremath{\mathrm{ac}}}_Q(C)$ being the expected length
for a random walk starting at nodes of $Q$
before being absorbed at nodes of~$C$.
For the rest of this section we consider that the set of query nodes
$Q$ is fixed, and for simplicity we write
${\ensuremath{\mathrm{ac}}}={\ensuremath{\mathrm{ac}}}_Q$.
\begin{proposition}[Monotonicity]
\label{proposition:monotonicity}
For all $X\subseteq Y\subseteq V$ it is
${\ensuremath{\mathrm{ac}}}(Y)\le{\ensuremath{\mathrm{ac}}}(X)$.
\end{proposition}
The proposition states that absorption time decreases with more absorbing
nodes. The proof is given in the Appendix.
Next we show that the absorbing random-walk centrality measure
${\ensuremath{\mathrm{ac}}}(\cdot)$ is supermodular.
\begin{proposition}[Supermodularity]
\label{proposition:supermodularity}
For all sets $X\subseteq Y\subseteq V$ and $u \not\in Y$ it is
\begin{equation}
\label{eq:supermodularity}
{\ensuremath{\mathrm{ac}}}(X) - {\ensuremath{\mathrm{ac}}}(X \cup \{u\}) \ge
{\ensuremath{\mathrm{ac}}}(Y) - {\ensuremath{\mathrm{ac}}}(Y \cup \{u\}) .
\end{equation}
\end{proposition}
\begin{IEEEproof}
Given an instantiation of a random walk, we define the following propositions
for any pair of nodes $i, j\in V$, non-negative integer $\ell$,
and set of nodes $Z$:
\begin{description}
\item[$A_{i,j}^\ell(Z)$:] The random walk started at node $i$ and
visited node~$j$ after exactly $\ell$ steps, without visiting any
node in set~$Z$.
\item[$B_{i,j}^\ell(Z, u)$:] The random walk started at node $i$ and
visited node~$j$ after exactly $\ell$ steps, having previously visited
node~$u$ but without visiting any node in the set~$Z$.
\end{description}
It is easy to see that the set of random walks for which
$A_{i,j}^\ell(Z)$ is {{\tt true}}\ can be partitioned into
those that visited $u$ within the first $\ell$ steps and those
that did not.
Therefore, the probability that proposition $A_{i,j}^\ell(Z)$
is {{\tt true}}\ for any instantiation of a random walk generated by our model is equal to
\begin{equation}
\label{eq:split_paths_probability}
\Pr \left[ A_{i,j}^\ell(Z) \right] =
\Pr \left[A_{i,j}^\ell(Z \cup \{u\})\right] + \Pr \left[B_{i,j}^\ell(Z,u)\right].
\end{equation}
Now, let ${\ensuremath{\mathbf{\Lambda}}}(Z)$ be the number of steps it takes a random walk to reach a node in $Z$.
${\ensuremath{\mathbf{\Lambda}}}(Z)$ is a random variable and its expected value
over all random walks generated by our model is equal
to ${\ensuremath{\mathrm{ac}}}(Z)$.
Note that the proposition ${\ensuremath{\mathbf{\Lambda}}}(Z) \geq \ell + 1$
is {{\tt true}}\ for a given instantiation of a random walk only
if there is a pair of nodes $q\in Q$ and $j\in V\setminus Z$,
for which the proposition $A^\ell_{q,j}(Z)$ is {{\tt true}}.
Therefore,
\begin{equation}
\Pr \left[ {\ensuremath{\mathbf{\Lambda}}}(Z) \geq \ell + 1 \right]
= \sum_{q \in Q} \sum_{j \in V \setminus Z} \Pr \left[ A_{q,j}^\ell(Z)\right].
\end{equation}
From the above, it is easy to calculate ${\ensuremath{\mathrm{ac}}}(Z)$ as
\begin{eqnarray}
{\ensuremath{\mathrm{ac}}}(Z) & = & E[{\ensuremath{\mathbf{\Lambda}}}(Z)] \nonumber \\
& = & \sum_{\ell = 0}^{\infty} \ell \, \Pr \left[ {\ensuremath{\mathbf{\Lambda}}}(Z) = \ell \right] \nonumber \\
& = & \sum_{\ell = 1}^{\infty} \Pr \left[ {\ensuremath{\mathbf{\Lambda}}}(Z) \geq \ell \right] \nonumber \\
& = & \sum_{\ell = 0}^{\infty} \Pr \left[ {\ensuremath{\mathbf{\Lambda}}}(Z) \geq \ell + 1 \right] \nonumber \\
& = & \sum_{\ell = 0}^{\infty} \sum_{q \in Q} \sum_{j \in V \setminus Z} \Pr \left[ A_{q,j}^\ell(Z)\right] .
\label{equation:expectation}
\end{eqnarray}
The final property we will need is the observation that,
for $X \subseteq Y$,
$B_{i,j}^\ell (Y,u)$ implies $B_{i,j}^\ell (X,u)$ and thus
\begin{equation}
\label{eq:Beta_event_probability}
\Pr \left[ B_{i,j}^\ell (X,u) \right] \ge \Pr \left[ B_{i,j}^\ell (Y,u) \right].
\end{equation}
Using Equation~(\ref{equation:expectation}),
Inequality~(\ref{eq:supermodularity}) can be rewritten as
\begin{align}
\nonumber
\sum_{\ell=0}^{\infty} \sum_{q \in Q} & \sum_{j \in V \setminus X} \Pr
\left[ A_{q,j}^\ell(X)\right] - \\
\nonumber
& \sum_{\ell=0}^{\infty} \sum_{q \in Q} \sum_{j \in V \setminus (X \cup \{u\})} \Pr \left[ A_{q,j}^\ell(X \cup \{u\})\right] \\
\nonumber
\ge \sum_{\ell=0}^{\infty} \sum_{q \in Q} & \sum_{j \in V \setminus Y}
\Pr \left[ A_{q,j}^\ell(Y)\right] - \\
& \sum_{\ell=0}^{\infty} \sum_{q \in Q} \sum_{j \in V \setminus (Y \cup \{u\})} \Pr \left[ A_{q,j}^\ell(Y \cup \{u\})\right].
\end{align}
It suffices to show that the inequality holds for each fixed $\ell$ and $q \in Q$,
that is,
\begin{align}
\nonumber
&\sum_{j \in V \setminus X} \Pr \left[ A_{q,j}^\ell(X)\right]
- \sum_{j \in V \setminus (X \cup \{u\})} \Pr \left[ A_{q,j}^\ell(X \cup \{u\})\right] \ge \\
& \sum_{j \in V \setminus Y} \Pr \left[ A_{q,j}^\ell(Y)\right]
- \sum_{j \in V \setminus (Y \cup \{u\})} \Pr \left[ A_{q,j}^\ell(Y \cup \{u\})\right].
\end{align}
Notice that $\Pr \left[ A_{q,u}^\ell(Z \cup \{u\}) \right] = 0$ for any set $Z$,
since a walk cannot visit node~$u$ while avoiding it,
so we can rewrite the above inequality as
\begin{align}
\nonumber
&\sum_{j \in V \setminus X} \Pr \left[ A_{q,j}^\ell(X)\right]
- \sum_{j \in V \setminus X } \Pr \left[ A_{q,j}^\ell(X \cup \{u\})\right] \ge \\
&\sum_{j \in V \setminus Y} \Pr \left[ A_{q,j}^\ell(Y)\right]
- \sum_{j \in V \setminus Y } \Pr \left[ A_{q,j}^\ell(Y \cup \{u\})\right].
\end{align}
To show the latter inequality we start from the left hand side and use
Inequality (\ref{eq:Beta_event_probability}).
We have
\begin{align*}
\nonumber
\sum_{j \in V \setminus X} \Pr & \left[ A_{q,j}^\ell(X)\right]
- \sum_{j \in V \setminus X } \Pr \left[ A_{q,j}^\ell(X \cup \{u\})\right] \\
= & \sum_{j \in V \setminus X} \Pr \left[ B_{q,j}^\ell(X, u)\right] \\
\ge & \sum_{j \in V \setminus Y} \Pr \left[ B_{q,j}^\ell(Y, u)\right] \\
= & \sum_{j \in V \setminus Y} \Pr \left[ A_{q,j}^\ell(Y)\right]
- \sum_{j \in V \setminus Y } \Pr \left[ A_{q,j}^\ell(Y \cup \{u\})\right],
\end{align*}
which completes the proof.
\end{IEEEproof}
\smallskip
Finally, we establish the hardness of $k$ absorbing centrality,
defined in Problem~\ref{problem:k-ac}.
\begin{theorem}
The {{$k$-{\sc arw}-{\sc Central}\-{\sc ity}}}\ problem is {\ensuremath{\mathbf{NP}}}-hard.
\end{theorem}
\begin{IEEEproof}
We give a reduction from the {{\sc VertexCover}} problem~\cite{GJ}.
An instance of the {\sc VertexCover}\ problem is specified by a graph
$G = (V, E)$ and an integer $k$, and asks whether there exists a
set of nodes $C\subseteq V$ such that $|C|\le k$ and $C$ is a
vertex cover
(i.e., for every $(i,j) \in E$ we have $\{i,j\} \cap C\ne\emptyset$).
Let $|V|=n$.
Given an instance of the {{\sc VertexCover}} problem, we construct an instance of
the decision version of {{{$k$-{\sc arw}-{\sc Central}\-{\sc ity}}}} by taking the same graph $G=(V,E)$ with
query nodes $Q=V$ and asking whether there is a set of absorbing nodes
$C$ such that $|C| \le k$ and
${\ensuremath{\mathrm{ac}}}_{_Q}(C) \le 1 - \frac{k}{n}$.
We will show that $C$ is a solution for {{\sc VertexCover}}
if and only if ${\ensuremath{\mathrm{ac}}}_{_Q}(C) \le 1 - \frac{k}{n}$.
Assume first that $C$ is a vertex cover.
Consider a random walk starting uniformly at random from a node $v\in Q=V$.
If $v\in C$ then the length of the walk will be 0,
as the walk will be absorbed immediately.
This happens with probability $|C|/|V|=k/n$.
Otherwise, if $v\not\in C$ the length of the walk will be 1,
as the walk will be absorbed in the next step
(since $C$ is a vertex cover, all the neighbors of $v$ must
belong to $C$).
This happens with the remaining probability $1-k/n$.
Thus, the expected length of the random walk is
\begin{equation}
{\ensuremath{\mathrm{ac}}}_{_Q}(C)
= 0\cdot \frac{k}{n} + 1\cdot\left(1 - \frac{k}{n}\right)
= 1 - \frac{k}{n}.
\end{equation}
Conversely, assume that $C$ is not a vertex cover for $G$.
Then there must be an uncovered edge $(u,v)$.
A random walk that starts at $u$ and then moves to $v$
(or starts at $v$ and then moves to $u$)
has length at least 2,
and this happens with probability at least
$\frac{2}{n}\frac{1}{d_{\max}} \ge \frac{2}{n^2}$.
Then, following reasoning similar to the previous case
(and writing $\ell$ for the number of steps, to avoid confusion with the budget $k$), we have
\begin{eqnarray}
\nonumber
{\ensuremath{\mathrm{ac}}}_{_Q}(C) &= &
\sum_{\ell=0}^\infty \ell\, \Pr \left( \text{absorbed in exactly } \ell \text{ steps}\right) \\
\nonumber
& = & \sum_{\ell=1}^\infty \Pr \left( \text{absorbed after at least } \ell \text{ steps}\right) \\
& \ge & \left(1 - \frac{k}{n}\right) + \frac{2}{n^2} > 1 - \frac{k}{n}.
\end{eqnarray}
\end{IEEEproof}
\section{Algorithms}
\label{section:algorithms}
This section presents algorithms to solve the {{$k$-{\sc arw}-{\sc Central}\-{\sc ity}}}\ problem.
In all cases, the set of query nodes $Q\subseteq V$ is given as input,
along with a set of candidate nodes $D\subseteq V$ and the restart probability
$\alpha$.
\subsection{Greedy approach}
\label{section:k-ac-algorithms}
The first algorithm is a standard greedy algorithm, denoted {{\sf\small Greedy}},
which exploits the
supermodularity of the absorbing random-walk centrality
measure. It starts with the result set $C$ equal to the empty set,
and iteratively adds a node from the set of candidate nodes $D$,
until $k$ nodes are added.
In each iteration the node added in the set $C$ is the one that brings
the largest improvement to ${\ensuremath{\mathrm{ac}}}_Q$.
As shown before, the objective function to be
minimized, i.e., ${\ensuremath{\mathrm{ac}}}_Q$, is supermodular and monotonically decreasing.
The {{\sf\small Greedy}}\ algorithm is not an approximation
algorithm for this minimization problem. However, it can be shown to
provide an approximation guarantee for maximizing the
{\em absorbing centrality gain} measure, defined below.
\begin{definition}[Absorbing centrality gain]
Given a graph $G$,
a set of query nodes $Q$,
and a set of candidate nodes $D$,
the absorbing centrality gain of a set of nodes
$C\subseteq D$ is defined as
\[
{\ensuremath{\mathrm{acg}}}_Q(C) = {\ensuremath{m}}_Q - {\ensuremath{\mathrm{ac}}}_Q(C),
\]
where
${\ensuremath{m}}_Q = {\min}_{v\in D}\{{\ensuremath{\mathrm{ac}}}_Q(\{v\})\}$.
\end{definition}
\para{Justification of the gain function.}
The reason to define the absorbing centrality gain is
to turn our problem into a submodular-maxi\-mization problem
so that we can apply standard approximation-theory results
and show that the greedy algorithm provides a constant-factor approximation guarantee.
The \emph{shift} ${\ensuremath{m}}_Q$ quantifies the absorbing centrality of the best
single node in the candidate set.
Thus, the value of ${\ensuremath{\mathrm{acg}}}_Q(C)$
expresses how much we gain in expected random-walk length
when we use the set $C$ as absorbing nodes
compared to when we use the best single node.
Our goal is to maximize this gain.
Observe that the gain function ${\ensuremath{\mathrm{acg}}}_Q$ is not non-negative everywhere.
Take for example any node $u$
such that ${\ensuremath{\mathrm{ac}}}_Q(\{u\}) > {\ensuremath{m}}_Q$. Then,
${\ensuremath{\mathrm{acg}}}_Q(\{u\}) < 0$. Note also that we could have obtained a
non-negative gain function
by defining gain with respect to the \emph{worst}
single node, instead of the best.
In other words, the gain function
${\ensuremath{\mathrm{acg}}}'_Q(C) = {\ensuremath{M}}_Q - {\ensuremath{\mathrm{ac}}}_Q(C)$,
with
${\ensuremath{M}}_Q = {\max}_{v\in D}\{{\ensuremath{\mathrm{ac}}}_Q(\{v\})\}$,
is non-negative everywhere.
Nevertheless, the reason we use the gain function ${\ensuremath{\mathrm{acg}}}_Q$
instead of ${\ensuremath{\mathrm{acg}}}'_Q$ is that ${\ensuremath{\mathrm{acg}}}'_Q$ takes much larger
values than ${\ensuremath{\mathrm{acg}}}_Q$, and thus, a multiplicative approximation
guarantee on ${\ensuremath{\mathrm{acg}}}'_Q$ is a
weaker result than a multiplicative approximation guarantee on ${\ensuremath{\mathrm{acg}}}_Q$.
On the other hand,
our definition of ${\ensuremath{\mathrm{acg}}}_Q$ creates a technical difficulty with
the approximation guarantee,
that is defined for non-negative functions.
Luckily, this difficulty can be overcome easily
by noting that, due to the monotonicity of ${\ensuremath{\mathrm{acg}}}_Q$,
for any $k>1$
both the value of the optimal solution of ${\ensuremath{\mathrm{acg}}}_Q$
and the value of the solution returned by {{\sf\small Greedy}}\
are non-negative.
\para{Approximation guarantee.}
The fact that the {{\sf\small Greedy}}\ algorithm gives an approximation guarantee
to the problem of maximizing absorbing centrality gain is a standard
result from the theory of submodular functions.
\begin{proposition}
The function ${\ensuremath{\mathrm{acg}}}_Q$ is monotonically increasing, and
submodular.
\end{proposition}
\begin{proposition}
Let $k>1$.
For the problem of finding a set $C\subseteq D$ with $|C|\le k$,
such that ${\ensuremath{\mathrm{acg}}}_Q(C)$ is maximized, the {{\sf\small Greedy}}\ algorithm gives a
$\left(1-\frac{1}{e}\right)$-approximation guarantee.
\end{proposition}
We now discuss the complexity of the {{\sf\small Greedy}}\ algorithm.
A na\"ive implementation
requires computing the absorbing
centrality ${\ensuremath{\mathrm{ac}}}_Q(C)$ using Equation~(\ref{eq:inversion})
for each set $C$ that needs to be evaluated during the execution of
the algorithm.
However, applying Equation~(\ref{eq:inversion}) involves a matrix
inversion, which is a very expensive operation.
Furthermore, the number of times that we need to evaluate
${\ensuremath{\mathrm{ac}}}_Q(C)$ is ${\cal O}(k|D|)$, as in each iteration of
the greedy algorithm we need to evaluate the improvement of
each of the ${\cal O}(|D|)$ candidates over the current set.
The number of candidates can be very large, e.g., $|D|=n$, yielding an
${\cal O}(kn^4)$ algorithm, which is prohibitively expensive.
We can show, however, that we can execute {{\sf\small Greedy}}\ significantly more
efficiently. Specifically, we can prove the following two propositions.
\begin{proposition}
Let $C_{i-1}$ be a set of $i-1$ absorbing nodes,
$\mathbf{P}_{i-1}$ the corresponding
transition matrix, and let $\mathbf{F}_{i-1} = (\mathbf{I}-\mathbf{P}_{i-1})^{-1}$.
Let $C_i = C_{i-1} \cup \{u\}$.
Given $\mathbf{F}_{i-1}$, the value ${\ensuremath{\mathrm{ac}}}_Q(C_i)$ can be computed in time $\mathcal{O}(n^2)$.
\label{prop:fast-updates-next-step}
\end{proposition}
\begin{proposition}
Let $C$ be a set of absorbing nodes, $\mathbf{P}$ the corresponding
transition matrix, and $\mathbf{F} = (\mathbf{I}-\mathbf{P})^{-1}$.
Let $C' = (C \setminus \{v\}) \cup \{u\}$, where $v \in C$ and $u \not\in C$.
Given $\mathbf{F}$ the value ${\ensuremath{\mathrm{ac}}}_Q(C')$ can be computed in time $\mathcal{O}(n^2)$.
\label{prop:fast-updates-same-step}
\end{proposition}
The proofs of these two propositions can be found in the Appendix.
Proposition~\ref{prop:fast-updates-next-step} implies that in order to compute~${\ensuremath{\mathrm{ac}}}_Q(C_i)$
for absorbing nodes $C_i$ in $\mathcal{O}(n^2)$, it is enough to maintain the matrix $\mathbf{F}_{i-1}$,
computed in the previous step of the greedy algorithm for absorbing nodes $C_{i-1}$.
Proposition~\ref{prop:fast-updates-same-step}, on the other hand, implies that we can compute
the {{absorbing centrality}}\ of each set of absorbing nodes of a fixed size~$i$ in $\mathcal{O}(n^2)$,
given the matrix $\mathbf{F}$, which is computed for one arbitrary set of absorbing nodes $C$
of size $i$. Combined, the two propositions above yield a greedy algorithm that runs in
${\cal O}(kn^3)$ and offers the approximation guarantee discussed above. We outline it as
Algorithm~\ref{algo:greedy}.
\begin{algorithm}[t]
\caption{{{\sf\small Greedy}}}
\label{algo:greedy}
\begin{algorithmic}
\STATE {\bf Input}: graph $G$, query nodes $Q$, candidates $D$, $k\geq 1$
\STATE {\bf Output}: a set of $k$ nodes $C$
\STATE Compute ${\ensuremath{\mathrm{ac}}}_Q(\{v\})$ for an arbitrary $v\in D$
\STATE For each $u\in (D \setminus \{v\})$, use Prop.~\ref{prop:fast-updates-same-step} to compute ${\ensuremath{\mathrm{ac}}}_Q(\{u\})$
\STATE Select $u_1 \leftarrow \arg\min_{u\in D} {\ensuremath{\mathrm{ac}}}_Q(\{u\})$
\STATE Initialize solution $C \leftarrow \{u_1\}$
\FOR{$i = 2 .. k$}
\STATE For each $u\in (D \setminus C)$, use Prop.~\ref{prop:fast-updates-next-step} to compute ${\ensuremath{\mathrm{ac}}}_Q(C\cup\{u\})$
\STATE Select $u_i \leftarrow \arg\min_{u\in (D \setminus C)} {\ensuremath{\mathrm{ac}}}_Q(C\cup\{u\})$
\STATE Update solution $C \leftarrow C \cup \{u_i\}$
\ENDFOR
\STATE \textbf{return} $C$
\end{algorithmic}
\end{algorithm}
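The rank-one structure that makes such fast updates possible can be sketched as follows. The Python fragment below models making a node absorbing by zeroing its outgoing row of the transition matrix and then refreshes $\mathbf{F}$ with a Sherman--Morrison update; this is one possible way to realize the $\mathcal{O}(n^2)$ updates of Propositions~\ref{prop:fast-updates-next-step} and~\ref{prop:fast-updates-same-step}, and the proofs in the Appendix may proceed differently. All names are illustrative.
\begin{verbatim}
import numpy as np

def ac_value(F, s, absorbing):
    """ac_Q = expected number of steps before absorption:
    s^T F 1, summing only the columns of still-transient nodes."""
    transient = [j for j in range(F.shape[0]) if j not in absorbing]
    return float(s @ F[:, transient].sum(axis=1))

def absorb_one_more(F, P, u):
    """Sherman-Morrison update of F = (I - P)^{-1} when node u is
    made absorbing, modelled by zeroing its outgoing row p_u of P,
    i.e. computing (I - P + e_u p_u^T)^{-1} from F in O(n^2)."""
    p_u = P[u, :]
    Fe = F[:, u]                # F e_u
    pF = p_u @ F                # p_u^T F
    return F - np.outer(Fe, pF) / (1.0 + pF[u])

# usage sketch, starting from a single absorbing node v0:
#   P0 = P.copy(); P0[v0, :] = 0.0
#   F  = np.linalg.inv(np.eye(n) - P0)     # O(n^3), done once
#   F2 = absorb_one_more(F, P0, u)         # O(n^2) per candidate u
#   ac_value(F2, s, {v0, u})
\end{verbatim}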
\para{Practical speed-up.}
We found that the following heuristic lets us speed up {{\sf\small Greedy}}\ even further,
with no significant loss in the quality of results.
To select the first node for the solution set $C$ (see Algorithm~\ref{algo:greedy}),
we calculate the {\it PageRank} values of all nodes in $D$ and evaluate ${\ensuremath{\mathrm{ac}}}_Q$
only for the $t \ll k$ nodes with highest PageRank score,
where $t$ is a fixed parameter.
In what follows, we will be using this heuristic version of {{\sf\small Greedy}}, unless explicitly stated otherwise.
\subsection{Efficient heuristics}
Even though {{\sf\small Greedy}}\ runs in polynomial time, it can
be quite inefficient when employed on moderately sized
datasets (more than some tens of thousands of nodes).
We thus turn to a number of algorithms that we study as
efficient heuristics for the problem.
These algorithms offer no guarantee on the quality of their solutions.
\spara{Spectral}
methods have been used extensively for the problem of
graph partitioning.
Motivated by the wide applicability of this family of algorithms,
here we explore three spectral algorithms:
{{\sf\small SpectralQ}}, {{\sf\small SpectralC}}, and
{{\sf\small SpectralD}}.
We start by a brief overview of the spectral method;
a comprehensive presentation can be found in the
tutorial by von Luxburg~\cite{von2007tutorial}.
The main idea of spectral approaches is to project the original graph
into a low-dimensional Euclidean space so that distances between
nodes in the graph correspond to Euclidean distances between the
corresponding projected points.
A standard spectral embedding method, proposed by Shi and
Malik~\cite{Shi:2000gf},
uses the ``random-walk'' {\em Laplacian} matrix
$\mathbf{L}_G=\mathbf{I}-\mathbf{D}^{-1}\mathbf{A}$
of a graph $G$, where $\mathbf{A}$ is the adjacency matrix of the graph
and $\mathbf{D}$ is the diagonal matrix whose $i$-th diagonal entry equals the degree of the $i$-th node,
and forms the matrix
$\mathbf{U} =[u_2,\ldots,u_{d+1}]$
whose columns are the
eigenvectors of $\mathbf{L}_G$
that correspond to the smallest non-trivial eigenvalues $\lambda_2\le\ldots\le\lambda_{d+1}$,
with $d$ being the target dimension of the projection.
The spectral embedding is then defined by mapping
the $i$-th node of the graph to a point in~$\mathbb{R}^d$,
namely the $i$-th row of the matrix~$\mathbf{U}$.
The algorithms we explore are adaptations of the spectral method.
They all start by computing the spectral embedding $\phi:V\rightarrow\mathbb{R}^d$,
as described above, and then, proceed as follows:
\sparanbf{{{\sf\small SpectralQ}}}
performs $k$-means clustering on the embeddings
of the {\it query nodes}, where $k$ is the desired size of the result set.
Subsequently, it selects {\it candidate nodes} that are close to the
computed centroids.
Specifically, if $s_i$ is the size of the $i$-th cluster,
then $k_i$ candidate nodes are selected whose embedding is the nearest
to the $i$-th centroid. The number $k_i$ is selected so that
$k_i \propto s_i$ and $\sum k_i = k$.
\sparanbf{{{\sf\small SpectralC}}}
is similar to {{\sf\small SpectralQ}},
but it performs the $k$-means clustering on the embeddings
of the {\it candidate nodes},
instead of the query nodes.
\sparanbf{{{\sf\small SpectralD}}}
performs $k$-means clustering on the embeddings
of the {\it query nodes}, where $k$ is the desired result-set size.
Then, it selects the $k$ candidate nodes whose embeddings minimize
the sum of squared $\ell_2$-distances from the centroids,
with no consideration of the relative sizes of the clusters.
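The following Python sketch illustrates the embedding step together with a simplified variant of {{\sf\small SpectralQ}}. It uses the symmetric normalized Laplacian, whose eigenvectors are related to those of $\mathbf{L}_G$ by a diagonal rescaling, relies on dense linear algebra (so it applies to small graphs only), and omits the proportional allocation of nodes to clusters described above; it should be read as a sketch, not as the implementation we evaluate.
\begin{verbatim}
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

def rw_spectral_embedding(G, d):
    """Embed the nodes of G into R^d using the eigenvectors of the
    random-walk Laplacian L = I - D^{-1} A for the d smallest
    non-trivial eigenvalues.  Dense linear algebra; assumes no
    isolated nodes."""
    A = nx.to_numpy_array(G)
    deg = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    # symmetric normalized Laplacian, same spectrum as L_rw
    L_sym = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt
    _, vecs = np.linalg.eigh(L_sym)          # ascending eigenvalues
    # eigenvectors of L_rw are D^{-1/2} times those of L_sym;
    # skip the trivial first eigenvector
    return D_inv_sqrt @ vecs[:, 1:d + 1]

def spectral_q(G, query_nodes, k, d=5):
    """Simplified SpectralQ-style heuristic: k-means on the
    embeddings of the query nodes, then one nearest node per
    centroid (assumes len(query_nodes) >= k)."""
    nodes = list(G.nodes())
    index = {v: i for i, v in enumerate(nodes)}
    U = rw_spectral_embedding(G, d)
    Q_emb = U[[index[q] for q in query_nodes]]
    centers = KMeans(n_clusters=k, n_init=10).fit(Q_emb).cluster_centers_
    chosen = []
    for c in centers:
        order = np.argsort(np.linalg.norm(U - c, axis=1))
        chosen.append(next(nodes[i] for i in order if nodes[i] not in chosen))
    return chosen
\end{verbatim}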
\spara{Personalized Pagerank} ({{\sf\small PPR}}).
This is the standard Pagerank~\cite{PageRank} algorithm
with a damping factor equal to the restart probability $\alpha$ of the random walk
and personalization probabilities equal to the start probabilities ${\ensuremath{\mathbf{s}}}(q)$.
Algorithm {{\sf\small PPR}}\ returns the $k$ nodes with highest PageRank values.
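A possible realization of this baseline with \texttt{networkx} is sketched below; note that the mapping between the library's damping parameter and the restart probability depends on the convention adopted, so the value passed as \texttt{damping} is only indicative.
\begin{verbatim}
import networkx as nx

def ppr_baseline(G, query_nodes, k, damping=0.85):
    """PPR baseline (sketch): personalized PageRank with the
    personalization mass spread uniformly over the query nodes,
    returning the k highest-scoring nodes.  networkx's `alpha` is
    the probability of following an edge, so its relation to the
    restart probability depends on the convention used."""
    Q = set(query_nodes)
    pers = {v: (1.0 if v in Q else 0.0) for v in G}
    pr = nx.pagerank(G, alpha=damping, personalization=pers)
    return sorted(pr, key=pr.get, reverse=True)[:k]
\end{verbatim}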
\spara{Degree and distance centrality.}
Finally, we consider the standard degree and distance centrality measures.
\sparanbf{{{\sf\small Degree}}} returns the $k$ highest-degree nodes.
Note that this baseline is oblivious to the query nodes.
\sparanbf{{{\sf\small Distance}}} returns the $k$ nodes with highest distance
centrality with respect to $Q$.
The distance centrality
of a node $u$
is defined as
${\ensuremath{\mathrm{dc}}}(u) = \left( \sum_{v\in Q} d(u,v)\right)^{-1}$.
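A straightforward (unoptimized) computation of this baseline is sketched below; nodes that are unreachable from some query node are simply skipped in this sketch.
\begin{verbatim}
import networkx as nx

def distance_baseline(G, query_nodes, k):
    """Distance-centrality baseline (sketch): rank nodes by
    dc(u) = 1 / sum_{v in Q} d(u, v) and return the k highest.
    Only nodes reachable from every query node are ranked."""
    totals, counts = {}, {}
    for q in query_nodes:
        for u, d in nx.single_source_shortest_path_length(G, q).items():
            totals[u] = totals.get(u, 0) + d
            counts[u] = counts.get(u, 0) + 1
    dc = {u: 1.0 / totals[u] for u in totals
          if counts[u] == len(query_nodes) and totals[u] > 0}
    return sorted(dc, key=dc.get, reverse=True)[:k]
\end{verbatim}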
\section{Experimental evaluation}
\label{sec:experiments}
\subsection{Datasets}
\label{section:datasets}
We evaluate the algorithms described in Section~\ref{section:algorithms}
on two sets of real graphs:
one set of small graphs that allows us to compare the performance of
the fast heuristics against the greedy approach;
and one set of larger graphs,
to compare the performance of the heuristics against
each other on datasets of larger scale.
Note that the bottleneck of the computation lies in the evaluation
of centrality.
Even though the technique we describe in
Section~\ref{section:fasteval} allows this evaluation to scale
to datasets of tens of thousands of nodes on a single processor,
it is still prohibitively expensive for massive graphs.
Still, our experimentation allows us to discover the traits of the different
algorithms and understand what performance to anticipate when they are
employed on graphs of massive size.
The datasets are listed in Table~\ref{table:datasets}. Small graphs are obtained from
Mark Newman's repository\footnote{{http://www-personal.umich.edu/\%7Emejn/netdata/}},
larger graphs from SNAP.\footnote{\url{http://snap.stanford.edu/data/index.html}}
For {{\tt kddCoauthors}}, {{\tt livejournal}}, and {{\tt roadnet}}\
we use samples of the original datasets.
In the interest of repeatability, our code and datasets are
made publicly available.\footnote{{https://github.com/harrymvr/absorbing-centrality}}
\begin{table}[t]
\centering
\caption{Dataset statistics}
\begin{tabular}{lrr}
\toprule
Dataset & $|V|$ & $|E|$ \\
\midrule
{{\tt karate}} & $34$ & $78$ \\
{{\tt dolphins}} & $62$ & $159$ \\
{{\tt lesmis}} & $77$ & $254$ \\
{{\tt adjnoun}} & $112$ & $425$ \\
{{\tt football}} & $115$ & $613$ \\ \hline
{{\tt kddCoauthors}} & $2\,891$ & $2\,891$\\
{{\tt livejournal}} & $3\,645$ & $4\,141$ \\
{{\tt ca-GrQc}} & $5\,242$ & $14\,496$\\
{{\tt ca-HepTh}} & $9\,877$ & $25\,998$\\
{{\tt roadnet}} & $10\,199$ & $13\,932$\\
{{\tt oregon-1}} & $11\,174$ & $23\,409$\\
\bottomrule
\end{tabular}
\label{table:datasets}
\end{table}
\subsection{Evaluation Methodology}
Each experiment in our
evaluation framework is defined by
a graph~$G$,
a set of query nodes~$Q$,
a set of candidate nodes~$D$, and
an algorithm to solve the problem.
We evaluate all algorithms presented in Section~\ref{section:algorithms}.
For the set of candidate nodes~$D$, we consider two cases:
it is equal to either the set of query nodes, i.e., $D = Q$,
or the set of all nodes, i.e., $D = V$.
Query nodes $Q$ are selected randomly,
using the following process:
First, we select a set $S$ of $\ensuremath{s}$ seed nodes, uniformly at random
among all nodes.
Then, we select a ball $B(v,r)$
of predetermined radius $r = 2$,
around each seed $v\in S$.\footnote{For the planar
{{\tt roadnet}}\ dataset we use $r = 3$.}
Finally, from all balls,
we select a set of query nodes $Q$ of predetermined size $q$,
with $q = 10$ and $q = 20$,
respectively, for the small and larger datasets.
Selection is done uniformly at random.
Finally,
the restart probability $\alpha$ is set to $\alpha=0.15$
and the starting probabilities ${\ensuremath{\mathbf{s}}}$ are uniform over $Q$.
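The query-selection protocol above can be summarized by the following short Python sketch; it is illustrative only, and the parameter names are placeholders.
\begin{verbatim}
import random
import networkx as nx

def sample_query_nodes(G, num_seeds, radius, q, rng=random):
    """Pick seed nodes uniformly at random, form balls of the given
    radius around them, and sample q query nodes uniformly from the
    union of the balls."""
    seeds = rng.sample(list(G.nodes()), num_seeds)
    ball = set()
    for s in seeds:
        ball.update(nx.single_source_shortest_path_length(G, s, cutoff=radius))
    return rng.sample(list(ball), min(q, len(ball)))
\end{verbatim}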
\subsection{Implementation}
All algorithms are implemented in Python using the
NetworkX package~\cite{hagberg-2008-exploring},
and were run on an Intel Xeon 2.83GHz with 32GB RAM.
\subsection{Results}
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{karate_quality.png}
\caption{{{\tt karate}}}
\end{subfigure}%
~
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{dolphins_quality.png}
\caption{{{\tt dolphins}}}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{lesmis_quality.png}
\caption{{{\tt lesmis}}}
\end{subfigure}
~
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{adjnoun_quality.png}
\caption{{{\tt adjnoun}}}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{football_quality.png}
\caption{{{\tt football}}}
\end{subfigure}
\caption{Results on small datasets for varying $k$ and $\ensuremath{s} = 2$.}
\label{fig:small}
\end{figure*}
\begin{figure*}[h]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{ca-GrQc_quality.png}
\caption{{{\tt ca-GrQc}}}
\end{subfigure}%
~
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{ca-HepTh_quality.png}
\caption{{{\tt ca-HepTh}}}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{livejournal_1000_5_quality.png}
\caption{{{\tt livejournal}}}
\end{subfigure}
~
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{oregon1_010526_quality.png}
\caption{{{\tt oregon-1}}}
\end{subfigure}
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{roadnet_ca_10000_0_quality.png}
\caption{{{\tt roadnet}}}
\end{subfigure}
~
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{author-author-wpapers_KDD_all_quality.png}
\caption{{{\tt kddCoauthors}}}
\end{subfigure}
\caption{Results on large datasets for varying $k$ and $\ensuremath{s} = 5$.}
\label{fig:large_quality}
\end{figure*}
Figure~\ref{fig:small} shows the centrality scores achieved by different algorithms
on the small graphs for varying $k$ (note: lower is better).
We present two settings: on the left, the candidates are all nodes ($D = V$),
and on the right, the candidates are only the query nodes ($D = Q$).
We observe that {{\sf\small PPR}}\ tracks well
the quality of solutions returned by {{\sf\small Greedy}},
while {{\sf\small Degree}}\ and {{\sf\small Distance}}\ often come close to that.
The spectral algorithms do not perform as well.
Figure~\ref{fig:large_quality} is the analogue of Figure~\ref{fig:small}
for the larger datasets; {{\sf\small Greedy}}\ is not included, as it does not scale to these sizes.
When all nodes are candidates,
{{\sf\small PPR}}\ typically has the best performance,
followed by {{\sf\small Distance}},
while {{\sf\small Degree}}\ is unreliable.
The spectral algorithms typically perform worse than {{\sf\small PPR}}.
When only query nodes are candidates,
all algorithms demonstrate similar performance,
which is most typically worse than the performance of
{{\sf\small PPR}}\ (the best performing algorithm) in the previous setting.
Both observations can be explained by the fact that
the selection is very restricted by the requirement $D = Q$,
and there is not much flexibility for the best
performing algorithms
to produce a better solution.
In terms of running time on the larger graphs, {{\sf\small Distance}}\ returns
within a few minutes
(with observed times between 15 seconds and 5 minutes)
while {{\sf\small Degree}}\ returns within seconds
(all observed times were less than 1 minute).
Finally, even though {{\sf\small Greedy}}\ returns within 1-2 seconds for the small datasets,
it does not scale well for the larger datasets
(running time is orders of magnitude worse than the heuristics and not included in the experiments).
Based on the above, we conclude that {{\sf\small PPR}}\ offers the best trade-off of quality
versus running time for datasets of at least moderate size (more than $10\,\text{k}$ nodes).
\section{Conclusions}
\label{sec:conclusions}
In this paper, we have addressed the problem of
finding central nodes in a graph with respect to a set of query nodes~$Q$.
Our measure is based on absorbing random walks:
we seek to compute $k$ nodes that minimize
the expected number of steps that a random walk starting from the query nodes
needs before it reaches (and is ``absorbed'' by) one of them.
We have shown that the problem is {\ensuremath{\mathbf{NP}}}-hard and
described an $\mathcal{O}(kn^3)$ greedy algorithm to solve it approximately.
Moreover, we experimented with heuristic algorithms to solve the problem on
large graphs.
Our results show that, in practice, personalized PageRank
offers a good combination of quality and speed.
\bibliographystyle{abbrv}
| {
"timestamp": "2015-09-10T02:00:32",
"yymm": "1509",
"arxiv_id": "1509.02533",
"language": "en",
"url": "https://arxiv.org/abs/1509.02533",
"abstract": "We study a new notion of graph centrality based on absorbing random walks. Given a graph $G=(V,E)$ and a set of query nodes $Q\\subseteq V$, we aim to identify the $k$ most central nodes in $G$ with respect to $Q$. Specifically, we consider central nodes to be absorbing for random walks that start at the query nodes $Q$. The goal is to find the set of $k$ central nodes that minimizes the expected length of a random walk until absorption. The proposed measure, which we call $k$ absorbing random-walk centrality, favors diverse sets, as it is beneficial to place the $k$ absorbing nodes in different parts of the graph so as to \"intercept\" random walks that start from different query nodes.Although similar problem definitions have been considered in the literature, e.g., in information-retrieval settings where the goal is to diversify web-search results, in this paper we study the problem formally and prove some of its properties. We show that the problem is NP-hard, while the objective function is monotone and supermodular, implying that a greedy algorithm provides solutions with an approximation guarantee. On the other hand, the greedy algorithm involves expensive matrix operations that make it prohibitive to employ on large datasets. To confront this challenge, we develop more efficient algorithms based on spectral clustering and on personalized PageRank.",
"subjects": "Social and Information Networks (cs.SI); Data Structures and Algorithms (cs.DS)",
"title": "Absorbing random-walk centrality: Theory and algorithms",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.978712643239288,
"lm_q2_score": 0.7248702880639791,
"lm_q1q2_score": 0.7094397156367211
} |
https://arxiv.org/abs/1604.01794 | Optimal initial condition of passive tracers for their maximal mixing in finite time | The efficiency of a fluid mixing device is often limited by fundamental laws and/or design constraints, such that a perfectly homogeneous mixture cannot be obtained in finite time. Here, we address the natural corollary question: Given the best available mixer, what is the optimal initial tracer pattern that leads to the most homogeneous mixture after a prescribed finite time? For ideal passive tracers, we show that this optimal initial condition coincides with the right singular vector (corresponding to the smallest singular value) of a suitably truncated Perron-Frobenius (PF) operator. The truncation of the PF operator is made under the assumption that there is a small length-scale threshold $\ell_\nu$ under which the tracer blobs are considered, for all practical purposes, completely mixed. We demonstrate our results on two examples: a prototypical model known as the sine flow and a direct numerical simulation of two-dimensional turbulence. Evaluating the optimal initial condition through this framework only requires the position of a dense grid of fluid particles at the final instance and their preimages at the initial instance of the prescribed time interval. As such, our framework can be readily applied to flows where such data is available through numerical simulations or experimental measurements. | \section{Introduction}
Given a fluid velocity field $\mathbf u(\mathbf x,t)$, a passive tracer satisfies the linear advection equation
\begin{equation}
\partial_{t}\rho +\mathbf u\cdot \pmb\nabla \rho=0,\quad \rho(\mathbf x,t_0)=f(\mathbf x)
\label{eq:adveq}
\end{equation}
where the scalar field $\rho(\mathbf x,t)$ denotes the concentration of the tracer at time $t$
and $f$ is its initial concentration at time $t_0$. \citet{aref} pointed out that
laminar unsteady velocity fields can, over time, develop complex tracer patterns consisting of ever smaller scales.
This observation has inspired the successful development of many stirring protocols to enhance mixing
in engineered devices (see, e.g.,~\cite{stroock02,gouillart06,mathew07,singh08,thiffeault08,gubanov10,foures14}).
Systematic classification of mixing efficiency of fluid flow, however, is relatively recent.
This classification was initiated by~\cite{lin11} who derived rigorous bounds on the mixing
efficiency of velocity fields with a prescribed stirring energy or stirring power budget.
A notable outcome of their program is the rather remarkable discovery of a
finite-energy velocity field ($\|\mathbf u\|_{L^2}=const.<\infty$) that achieves
perfect mixing in finite time~\citep{lunasin12}. It was shown later, however, that any such velocity
field must have infinite viscous dissipation, i.e. $\|\pmb\nabla\mathbf u\|_{L^2}=\infty$~\citep{seis13,Iyer14}.
Besides this fundamental limitation, the implementation of mathematically obtained
optimal stirring strategies in a mixing device is not always feasible due to, for instance,
geometric constraints.
The problem is more acute in natural fluid flow (such as geophysical flows or the blood stream)
over which we have virtually no control.
In light of the above discussion, the natural question is:
\begin{displayquote}
(Q) \emph{
Given an unsteady velocity field, what is the optimal initial tracer pattern
that leads to the most homogeneous mixture after a prescribed finite time?
}
\end{displayquote}
In spite of its importance, this question has received relatively little attention. \cite{hobbs97}
carried out a case study where the effect of the tracer injection location in a Kenics mixer is
examined. They find that, at least for short time horizons, the mixing efficiency depends
significantly on the injection location. A similar case study
is carried out by \cite{gubanov09} who studied the mixing efficiency of
five different initial tracer patterns in a two-dimensional nonlinear model, known as the sine flow.
\cite{thiffeault08b} addressed an analogous question:
the asymptotic mixing of passive tracers advected under a steady velocity field
where the tracer is injected continuously into the flow via source terms. Through a variational approach, they
determined the optimal distribution of the sources (also see~\cite{okabe08}, for related numerical results).
Here, we address the finite-time mixing of passive tracers advected by fully unsteady
velocity fields, as formulated in~\eqref{eq:adveq}.
Specifically, we seek the optimal initial condition $f$ that leads to the
most homogeneous mixture after a given finite time.
To the best of our knowledge, a rigorous method for determining this optimal initial condition
is missing.
Problem (Q) can, in principle, be formulated and solved as an infinite-dimensional
optimization problem, where the optimal initial condition coincides with the minimizer
of an appropriate cost functional. Such minimizers are typically
obtained by iterative methods of adjoint-based
optimization~\citep{protas08,faraz_cont}. This is, however, computationally prohibitive since it requires the backward-time
integration of an adjoint partial differential equation (PDE) at each iteration.
Here, we show that under reasonable assumptions, the problem
reduces to a finite-dimensional one that can be readily solved at a relatively low computational cost.
To obtain this finite-dimensional reduction, we assume that tracer blobs
smaller than a small, prescribed length-scale $\ell_\nu$ are considered completely mixed for all practical
purposes. This assumption, that is made precise in Section~\ref{sec:theory},
results in a natural Galerkin truncation of the Perron--Frobenius (PF) operator
associated with the advection equation~\eqref{eq:adveq}. We show that the optimal initial condition $f$
then coincides with a singular vector of the truncated PF operator.
Our results complement the transfer operator-based methods for detecting finite-time
coherent sets in unsteady fluid flows (\cite{froyland}; also see~\cite{dellnitz99,froyland07,froyland10,williams15}).
Coherent sets refer to subsets of the fluid which exhibit minimal deformation under advection and therefore inhibit efficient mixing of tracers
with the surrounding fluid. Our aim here is the opposite, namely, initially large-scale structures that
under advection deform mostly into small-scale filaments.
The remainder of the paper is organized as follows. In section~\ref{sec:prelim}, we introduce some basic
notation and definitions. Section~\ref{sec:theory} contains our main results and
section~\ref{sec:numerics} details their numerical implementation. In section~\ref{sec:results}, the
results are demonstrated on two examples.
\section{Preliminaries}\label{sec:prelim}
Consider an unsteady, incompressible velocity field $\mathbf u(\mathbf x,t)$ defined over a
bounded open subset $\mathcal D\subset \mathbb R^d$ where
$d=2$ or $d=3$ for two- and three-dimensional flows, respectively.
The trajectories $\mathbf x(t;t_0,\mathbf x_0)$ of the fluid particles satisfy the ordinary differential equation
\begin{equation}
\dot{\mathbf x}=\mathbf u(\mathbf x,t),\quad t\in\mathbb R,
\label{eq:ode}
\end{equation}
where $\mathbf x(t;t_0,\mathbf x_0)$ denotes the time--$t$ position of the particle starting from the initial position $\mathbf x_0$ at time $t_0$.
If the velocity field is sufficiently smooth, there exists a two-parameter family of homeomorphisms
$\varphi_{s}^t$ (the flow map) such that $\mathbf x(t;s,\mathbf x_0)=\varphi_{s}^t(\mathbf x_0)$ for all times $t$ and $s$.
As our interest here is in finite-time mixing, we restrict our attention to a prescribed finite time interval
$[t_0,t_0+T]$ of interest. The flow map $\varphi_{t_0}^{t_0+T}$ takes the initial position $\mathbf x_0$
of a fluid particle at time $t_0$ to its final position at time $t_0+T$.
Since the finite time interval is fixed, we drop the dependence of the flow map on $t_0$ and $t_0+T$,
and write $\varphi$ for notational simplicity.
Let $\rho(\mathbf x,t)$ denote the concentration of a passive tracer,
i.e. $\rho$ satisfies equation~\eqref{eq:adveq}.
Since the passive tracer is conserved along
fluid trajectories, we have
\begin{equation}
\rho(\mathbf x,t_0+T)=\rho(\varphi^{-1}(\mathbf x),t_0)=f\circ \varphi^{-1}(\mathbf x),
\label{eq:rhoIsPassive}
\end{equation}
for all $\mathbf x\in \mathcal D$. Note that since the flow map is a homeomorphism, the inverse $\varphi^{-1}$ is
well-defined. Equation~\eqref{eq:rhoIsPassive} motivates the definition of the Perron-Frobenius (PF) operator.
\begin{defn}[Perron--Frobenius operator]
The Perron--Frobenius operator associated with the flow map $\varphi:\mathcal D\to \mathcal D$ is the linear transformation
$\mathcal P:L^2(\mathcal D)\to L^2(\mathcal D)$
such that, for all $f\in L^2(\mathcal D)$,
\begin{equation}
(\mathcal P f)(\mathbf x)=f\circ \varphi^{-1}(\mathbf x),\quad \forall\mathbf x\in\mathcal D.
\label{eq:PF}
\end{equation}
\label{def:PF}
\end{defn}
The evolution of passive tracers can be described by the action of the PF operator on their initial
conditions. More specifically, for the passive tracer $\rho$ described above, we have
\begin{equation}
\rho(\mathbf x,t_0+T)=(\mathcal Pf)(\mathbf x),
\end{equation}
for all $\mathbf x\in\mathcal D$ (cf. equation~\eqref{eq:rhoIsPassive}).
We point out that there is a more general definition of the PF operator applicable to
non-invertible dynamics (see Definition 3.2.3 of~\cite{mackey1994chaos}).
In the special case where the flow map $\varphi$ is invertible and volume-preserving, the general definition
is equivalent to Definition~\ref{def:PF} above (Corollary 3.2.1 in~\cite{mackey1994chaos};~\cite{froyland2009}).
For incompressible flow, the
PF operator is a unitary transformation with respect to the $L^2(\mathcal D)$ inner product
$\langle\cdot,\cdot\rangle_{L^2}$. As a consequence, the $L^2$-norm $\|\rho(\cdot,t)\|_{L^2}$ of the tracer remains
invariant under advection. Furthermore, the spatial average of the tracer is an invariant.
Without loss of generality, one can assume that this spatial average vanishes,
$\int_{\mathcal D}\rho(\mathbf x,t)\mathrm d \mathbf x=0$ \citep{lin11}.
There have been several attempts to detect coherent structures in unsteady fluid flows using approximations of
the PF operator~\citep{froyland07,santitissadeekorn10,froyland10}. \cite{froyland} puts these approaches on a
mathematically rigorous basis by composing the PF operator with diffusion operators. The resulting diffusive PF operator
is compact and has a well-defined singular value decomposition (SVD). \cite{froyland} shows that
a singular vector, corresponding to the largest non-unit singular value of the diffusive PF operator,
can reveal minimally dispersive subsets of the fluid that remain coherent and thereby inhibit
mixing (also see~\cite{froyland14}).
Our goal here, however, is the opposite as we seek passive tracer initial conditions that mix
most efficiently with their surrounding fluid.
\section{Optimal initial conditions}\label{sec:theory}
\subsection{Physical considerations}
Given an initial tracer distribution, a reasonable mixer will generically
deform the tracer through stretching and folding of material elements
such that, over time, it develops ever smaller length scales.
It is, therefore, desirable to release the tracer initially into the smallest possible scales.
In practice, the initially available range of scales into which the tracer may be released
is limited to relatively large scales. We denote this
large length-scale limit by $\ell_I$ (see figure~\ref{fig:schem_scales}, for an illustration).
It is left then to the fluid flow to transform the initially large-scale
blobs of tracer to small filaments through a stretch-and-fold mechanism.
\begin{figure}
\centering
\includegraphics[width=.95\textwidth]{schem_scales}
\caption{An illustration of the initially available scales (larger than $\ell_I$)
and the mixed scales (smaller than $\ell_\nu$).}
\label{fig:schem_scales}
\end{figure}
On the other hand, we assume that there is a small length scale threshold $\ell_\nu\ll \ell_I$,
under which the tracer is considered, for all practical purposes, completely mixed. An efficient mixer,
therefore, transfers the tracer distribution from large, initially available scales $\ell\geq\ell_I$
to the mixed scales $\ell<\ell_\nu$. In the next section, we make these statements precise.
\subsection{Mathematical formulation}
We consider a set of functions $\{\phi_j\}_{j\geq 1}$ forming a complete, orthonormal
basis for the space of square integrable functions $L^2(\mathcal D)$.
That is $\langle\phi_i,\phi_j\rangle_{L^2}=\delta_{ij}$
and for any $f\in L^{2}(\mathcal D)$ there are constants $\alpha_j\in\mathbb R$ such that
$\lim_{k\to \infty}\|f-\sum_{j=1}^{k}\alpha_j\phi_j\|_{L^2}=0$.
We also assume that there is a length scale $\ell_j$ associated to each function $\phi_j$, and that
they are ordered such that the sequence $(\ell_1,\ell_2,\cdots)$ is decreasing. In other words, the length scale
associated with the function $\phi_j$ decreases as $j$ increases. Such a basis can be taken, for instance,
to be Fourier modes or wavelets~\citep{walnut13}.
With this basis, we can mathematically model the subspace of initial conditions $V_{I}$.
The subspace $V_{I}$ consists of all scalar functions $f$ whose smallest length scale is larger than or equal to $\ell_I$.
Since the basis $\{\phi_j\}_{j\geq 1}$ is ordered, there is a positive integer $n$ such that
\begin{equation}
V_{I}= \mbox{span}\{\phi_1,\phi_2,\cdots,\phi_n\}= \{\mbox{Initially available length scales}\ \ell\geq\ell_I \}.
\end{equation}
The subspace of unmixed length scales $V_{\nu}$ can be modeled similarly using
a basis $\{\psi_i\}_{i\geq 1}$ for $L^2(\mathcal D)$.
We assume that this basis is also orthonormal, complete and associated with a
decreasing sequence of length scales.
The subspace $V_{\nu}$ consists of all scalar functions whose smallest length scale is larger than or equal to
the unmixed length scale $\ell_\nu$. Therefore, there is $N\gg n$ such that
\begin{equation}
V_{\nu}= \mbox{span}\{\psi_1,\psi_2,\cdots,\psi_N\}=\{\mbox{Unmixed length scales}\ \ell\geq\ell_\nu \}.
\end{equation}
Note that the bases $\{\psi_i\}_{i\geq 1}$
and $\{\phi_i\}_{i\geq 1}$ can be taken to be identical, but this is not necessary here.
We denote the orthogonal complement of $V_{\nu}$ by $V_{\nu}^\perp$. In terms of the basis
functions, we have
\begin{equation}
V_{\nu}^\perp=\overline{\mbox{span}\{\psi_{N+1},\psi_{N+2},\cdots\}}=\{\mbox{Mixed length scales}\ \ell<\ell_\nu\},
\end{equation}
where the overline denotes
closure in the $L^2$ topology. The space $V_{\nu}^\perp$ consists of functions that
only contain the mixed scales, that is scales smaller than $\ell_\nu$ (see figure~\ref{fig:schem_scales}).
\subsection{Main result}\label{sec:MainResult}
Given an initial condition $f\in V_{I}$ for the tracer, its advected
image $\mathcal Pf\in L^2$ at the final time
can potentially contain all length scales $\ell_j$.
The flow redistributes the `energy' budget of the tracer among various scales in such a way
that the $L^2$-norm is conserved, i.e.,
\begin{equation}
\|f\|_{L^2}^2=\|\mathcal P f\|_{L^2}^2
=\underbrace{\sum_{i=1}^{N}|\langle \mathcal Pf,\psi_i\rangle_{L^2}|^2}_\text{unmixed}
+ \underbrace{\sum_{i=N+1}^{\infty}|\langle \mathcal Pf,\psi_i\rangle_{L^2}|^2}_\text{mixed}.
\label{eq:parseval}
\end{equation}
A tracer is better mixed if more of its energy budget is transferred to the mixed scales $\ell <\ell_\nu$.
Therefore, we seek optimal initial conditions $f\in V_{I}$ such that the energy budget of its image $\mathcal P f$
is mostly stored in the mixed scales, maximizing
$\sum_{i=N+1}^{\infty}|\langle \mathcal Pf,\psi_i\rangle|^2$. To make
these statements more precise we introduce the following truncation of the PF operator.
\begin{defn}[Truncated Perron--Frobenius operator]
We define the truncated PF operator $\mathcal P_p:V_{I}\to V_{\nu}$ as
the linear map $\mathcal P_p=\Pi_N\circ\mathcal P$, where $\Pi_N$ is the orthogonal projection
onto the $N$-dimensional subspace $V_{\nu}$. We also define the remainder operator
$\mathcal P^\perp_p:V_{I}\to V_{\nu}^\perp$
as $\mathcal P^\perp_p=\mathcal P-\mathcal P_p$.
\label{def:PFtrunc}
\end{defn}
It follows from Parseval's identity that
$\|\mathcal Pf\|_{L^2}^2=\|\mathcal P_pf\|_{L^2}^2+\|\mathcal P_p^\perp f\|_{L^2}^2$
(see equation~\eqref{eq:parseval}).
The quantity $\|\mathcal P_pf\|_{L^2}^2$ represents the portion of the energy budget of the tracer
that remains unmixed after advection to the final time $t_0+T$.
The quantity $\|\mathcal P_p^\perp f\|_{L^2}^2$, on the other hand,
represents the portion of the tracer that is completely mixed.
We, therefore, seek initial conditions $f\in V_{I}$ that maximize the
mixed energy budget $\|\mathcal P_p^\perp f\|_{L^2}^2$.
Since the truncated PF operator $\mathcal P_p$ is a linear transformation between finite-dimensional vector spaces
$V_{I}$ and $V_{\nu}$, it can be represented by a matrix $P_p\in\mathbb R^{N\times n}$.
More specifically, for any $f\in V_{I}$, there are scalars $\{\alpha_1,\cdots,\alpha_n\}$ and $\{\beta_1,\cdots,\beta_N\}$ such that
\begin{equation}
f=\sum_{j=1}^{n}\alpha_j\phi_j\quad \mbox{and}\quad \mathcal P_pf=\sum_{i=1}^{N}\beta_i\psi_i.
\label{eq:expansions}
\end{equation}
The matrix $P_p$ maps $\pmb\alpha=(\alpha_1,\cdots,\alpha_n)^\top$ into
$\pmb{\beta}=(\beta_1,\cdots,\beta_N)^\top$, that is $\pmb\beta =P_p\,\pmb\alpha$.
It follows from elementary linear algebra that
the entries $[P_p]_{ij}$ of the matrix $P_p$ are given by
\begin{equation}
[P_p]_{ij}=\langle \mathcal P\phi_j,\psi_i\rangle_{L^2}, \quad i\in\{1,2,\cdots,N\},\quad j\in\{1,2,\cdots,n\}.
\label{eq:Kmatrix}
\end{equation}
With this prelude, we can now state our main result.
\begin{thm}
Consider the function spaces $V_{I}$ and $V_{\nu}$ and their associated truncated
PF operator defined above.
The solution of
$$\arg\max \|\mathcal P_p^\perp f\|_{L^2},$$
with the maximum taken over all $f\in V_{I}$ with $\|f\|_{L^2}=1$, is given by
$f_{\mbox{\tiny opt}}=\sum_{j=1}^{n}\alpha_{j}\phi_j$,
where $\pmb{\alpha}=(\alpha_1,\alpha_2,\cdots,\alpha_n)^\top$
is a right singular vector of the truncated PF matrix~\eqref{eq:Kmatrix} corresponding to its
smallest singular value.
\label{thm:main}
\end{thm}
\begin{proof}
Since $\| \mathcal P_p^\perp f\|_{L^2}^2 = \| \mathcal P f\|_{L^2}^2-\| \mathcal P_p f\|_{L^2}^2=1-\| \mathcal P_p f\|_{L^2}^2$,
maximizing $\| \mathcal P_p^\perp f\|_{L^2}^2$ is equivalent to minimizing $\| \mathcal P_p f\|^2$.
Since $f$ belongs to the subspace $V_{I}$,
the initial condition $f$ and its image $\mathcal P_pf$ can be expressed
by the series~\eqref{eq:expansions} with $\pmb{\beta}=P_p\,\pmb{\alpha}$.
Denoting the standard Euclidean norm by $|\cdot|$,
we have $|\pmb{\alpha}|^2=\|f\|_{L^2}^2=1$ and $|\pmb{\beta}|^2=|P_p\,\pmb{\alpha}|^2=\| \mathcal P_p f\|_{L^2}^2$.
Therefore,
\begin{equation}
\min_{f\in V_{I}, \|f\|=1}\| \mathcal P_p f\|_{L^2}=\min_{|\pmb{\alpha}|=1}|P_p\,\pmb{\alpha}|.
\end{equation}
The minimum on the right hand side is well-known to
coincide with the smallest singular value of the matrix $P_p$~\citep{stewart98}.
The minimum is attained at the corresponding right singular vector of the matrix $P_p$.
This completes the proof.
\end{proof}
Once the PF matrix $P_p$ is formed, the evaluation of the optimal initial
condition $f_{\mbox{\tiny opt}}$, from the above theorem, is straightforward. We outline the
computation of the truncated PF matrix $P_p$ in section~\ref{sec:numerics}.
\begin{rem}
Note that if the matrix $P_p$ is not full-rank, there are initial conditions
$f$ of the form~\eqref{eq:expansions} with $|\pmb\alpha|= 1$,
such that $|P_p\,\pmb\alpha|=0$. Such initial conditions result in `perfect mixing'
since their advected image $\mathcal Pf$ belongs entirely to the mixed scales $\ell<\ell_\nu$, i.e.,
$\mathcal Pf\in V_\nu^\perp$.
In the examples studied in Section~\ref{sec:results}, such perfect finite-time mixing was not
observed. In other words, the matrices $P_p$ are full-rank in these examples.
\end{rem}
\begin{rem}
We emphasize that the truncated PF operator $\mathcal P_p$
is \emph{not} used as an approximation of the full PF operator $\mathcal P$.
Instead, the truncation $\mathcal P_p$ followed naturally from the physical assumption that length scales
$\ell<\ell_\nu$ are completely mixed.
As is clear from equation~\eqref{eq:Kmatrix}, to evaluate the
truncation $\mathcal P_p$, one still needs to utilize the full PF operator to evaluate the terms $\mathcal P\phi_j$.
\end{rem}
\section{Numerical implementation}\label{sec:numerics}
Numerical computation of the optimal initial condition $f_{\mbox{\tiny opt}}$ relies on the scale-dependent
bases $\{\phi_i \}_{i\geq 1}$ and $\{\psi_i \}_{i\geq 1}$.
For completeness, we discuss two such bases: the Fourier basis and
the Haar wavelet basis. Since the examples considered in Section~\ref{sec:results} below
are defined on equilateral two-dimensional domains, $\mathcal D=[0,L]\times [0,L]$, we focus
on this special case. The generalization to the rectangular domain and
to the three-dimensional case is straightforward.
\subsection{Fourier basis}
For periodic boundary conditions, it is natural to use the Fourier basis to
define the spaces $V_{I}$ and $V_{\nu}$. The orthonormal Fourier basis associated with the two-dimensional domain
$\mathcal D=[0,L]\times [0,L]$ consists of the functions $(1/L)\exp\left[\i(2\pi/L)(\mathbf k\cdot \mathbf x)\right]$ where
$\mathbf k\in \mathbb Z^2$ denotes the wave vector.
The length scale associated to each Fourier mode is inversely proportional to
the wave number, $|\mathbf k|\sim\ell^{-1}$.
We take the space of available initial scalar fields to consist of the functions
whose Fourier modes have wave-number components of magnitude at most a prescribed value $k_I\sim\ell_I^{-1}$,
i.e.,
\begin{equation}
V_{I}=\mbox{span}\left\{\frac{1}{L}\exp\left[\i\frac{2\pi}{L}(\mathbf k\cdot \mathbf x)\right] :
\mathbf k=(k_x,k_y)\in \mathbb Z^2, |k_x|\leq k_I, |k_y|\leq k_I\right\}.
\label{eq:VI_kI}
\end{equation}
Similarly, the space of unmixed scales $V_\nu$ consists of the functions
whose Fourier modes have wave-number components of magnitude at most a prescribed value $k_\nu\sim\ell_\nu^{-1}\gg k_I$, i.e.,
\begin{equation}
V_{\nu}=\mbox{span}\left\{\frac{1}{L}\exp\left[\i\frac{2\pi}{L}(\mathbf k\cdot \mathbf x)\right] :
\mathbf k=(k_x,k_y)\in \mathbb Z^2, |k_x|\leq k_\nu, |k_y|\leq k_\nu\right\}.
\label{eq:Vnu_knu}
\end{equation}
More generally, one could define the space $V_{I}$ (and similarly $V_{\nu}$) with independent upper
bounds $k_{I_x}$ and $k_{I_y}$ on the wave-numbers $|k_x|$ and $|k_y|$, respectively.
Since the domain is equilateral,
and for simplicity, we choose the same upper bounds in both directions, $k_I=k_{I_x}=k_{I_y}$.
Since the tracer concentration is real-valued, the complex conjugate basis functions in $V_{I}$ and $V_{\nu}$ are redundant.
Also, the basis with $\mathbf k=\mathbf 0$ (corresponding to constant functions) is unnecessary since we assumed
that the tracer has zero mean. Excluding these redundant functions, the effective dimensions
of the vector spaces $V_{I}$ and $V_{\nu}$ are $n=\dim V_{I}=2k_I(k_I+1)$ and $N=\dim V_{\nu}=2k_\nu(k_\nu+1)$, respectively.
\subsection{Wavelet basis}
While the above Fourier basis is a convenient choice, it is only applicable to periodic boundary conditions.
More general boundary conditions can be handled with an alternative basis, such as
Haar wavelets. Such wavelet bases have the
added advantage that they can be localized in space in addition to scale. This property renders wavelets particularly
attractive in applications where the tracer can only be released into a subset of the fluid domain $\mathcal D$
due to geometric or design constraints. Contrast this with the global nature of the Fourier basis.
\begin{figure}
\centering
\includegraphics[width=.85\textwidth]{MRA_functions}
\caption{Three examples of the wavelet functions~\eqref{eq:wavelet_2d_02} with
$j=2$, $i_x=i_y=1$. The domain is $\mathcal D=[0,1]\times[0,1]$.}
\label{fig:wavelets}
\end{figure}
Here, we consider the Haar wavelet basis. For completeness, we briefly review the construction of this basis in
two dimensions. Denote the one-dimensional Haar scaling function
with $s(x)=\mathds 1_{[0,1)}(x)$
and the corresponding wavelet with $h(x)=\mathds 1_{[0,1/2)}(x)-\mathds 1_{[1/2,1)}(x)$ where $\mathds 1_A$ is the
indicator function of the set $A$. By dilations and translations, we obtain
\begin{subequations}
\begin{equation}
s_{j,i}(x)=2^{j/2}s\left(2^j\frac{x}{L}-i\right),\quad j\geq 0,\quad i\in\{0,1,\cdots, 2^j-1\},
\label{scaling_1d}
\end{equation}
\begin{equation}
h_{j,i}(x)=2^{j/2}h\left(2^j\frac{x}{L}-i\right),\quad j\geq 0,\quad i\in\{0,1,\cdots, 2^j-1\},
\label{wavelet_1d}
\end{equation}
\end{subequations}
where $0\leq x\leq L$ for a domain of size $L$.
The collection of the wavelets $h_{j,i}$ forms an orthogonal basis for mean-zero functions in
$L^2([0,L])$~\citep{daubechies92}.
The integer $j$ determines the size of the support of $h_{j,i}$ (or $s_{j,i}$) which is $L\times 2^{-j}$. Since
the wavelets with larger $j$ resolve finer structures (or smaller length scales), the
integer $j$ is referred to as the scale of the wavelet. The integer $i$,
on the other hand, introduces a translation in the support of each wavelet, introducing a space dependence
at each scale $j$.
The functions $s_{j,i}$ and $h_{j,i}$ serve as the building blocks of
multidimensional wavelet bases~\citep{daubechies92,farge1999}. For instance, a
complete orthonormal basis for mean-zero functions in $L^2(\mathcal D)$, with $\mathcal D=[0,L]\times [0,L]$,
is formed by the set of functions
\begin{align}
\big\{w^{(\mu)}_{j,i_x,i_y}:\ & 1\leq \mu \leq 3,\ 0\leq j,\ 0\leq i_x\leq 2^{j}-1,\ 0\leq i_y\leq 2^{j}-1 \big\},
\label{eq:wavelet_2d}
\end{align}
where
\begin{subequations}
\begin{equation}
w_{j,i_x,i_y}^{(1)}(x,y)= \frac{1}{L}h_{j,i_x}(x)s_{j,i_y}(y),
\end{equation}
\begin{equation}
w_{j,i_x,i_y}^{(2)}(x,y)= \frac{1}{L}s_{j,i_x}(x)h_{j,i_y}(y),
\end{equation}
\begin{equation}
w^{(3)}_{j,i_x,i_y}(x,y)= \frac{1}{L}h_{j,i_x}(x)h_{j,i_y}(y).
\end{equation}
\label{eq:wavelet_2d_02}
\end{subequations}
The prefactor $1/L$ ensures that each basis function is of unit norm. The integer $j$ determines the
scale in both the $x$ and $y$ directions, while the integers $i_x$ and $i_y$ introduce the corresponding translations.
Figure~\ref{fig:wavelets} shows three examples of the two-dimensional wavelet functions~\eqref{eq:wavelet_2d_02}
with $j=2$.
The construction of two-dimensional wavelet bases from one-dimensional wavelets is not unique.
For an alternative wavelet basis see, e.g., Chapter 10 of~\cite{daubechies92}.
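For concreteness, a minimal Python sketch of how the basis functions~\eqref{eq:wavelet_2d_02} can be evaluated numerically is given below; the function names, the domain size, and the example parameters are ours and purely illustrative:
\begin{verbatim}
import numpy as np

def s(x):   # Haar scaling function on [0, 1)
    return np.where((x >= 0) & (x < 1), 1.0, 0.0)

def h(x):   # Haar mother wavelet on [0, 1)
    return (np.where((x >= 0) & (x < 0.5), 1.0, 0.0)
            - np.where((x >= 0.5) & (x < 1), 1.0, 0.0))

def s_ji(x, j, i, L=1.0):   # dilated/translated scaling function (scaling_1d)
    return 2 ** (j / 2) * s(2 ** j * x / L - i)

def h_ji(x, j, i, L=1.0):   # dilated/translated wavelet (wavelet_1d)
    return 2 ** (j / 2) * h(2 ** j * x / L - i)

def w2d(mu, j, ix, iy, x, y, L=1.0):
    # The three families w^(1), w^(2), w^(3) of eq. (eq:wavelet_2d_02)
    if mu == 1:
        return h_ji(x, j, ix, L) * s_ji(y, j, iy, L) / L
    if mu == 2:
        return s_ji(x, j, ix, L) * h_ji(y, j, iy, L) / L
    return h_ji(x, j, ix, L) * h_ji(y, j, iy, L) / L

# dimension of V_I with J_I levels: 3 * sum_{j=0}^{J_I-1} 4^j = 4^{J_I} - 1
J_I = 3
assert 3 * sum(4 ** j for j in range(J_I)) == 4 ** J_I - 1
\end{verbatim}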
Using the wavelet basis~\eqref{eq:wavelet_2d}, we define the subspace of initial conditions $V_{I}$ as
\begin{align}
V_{I}=\mbox{span}\big\{w^{(\mu)}_{j,i_x,i_y}:\ & 1\leq \mu\leq 3,\ 0\leq j\leq J_I-1 \nonumber\\
& 0\leq i_x\leq 2^{j}-1,\ 0\leq i_y\leq 2^{j}-1 \big\},
\label{eq:VI_wlet}
\end{align}
where the integer $J_I$ sets the initially available length scales.
Roughly speaking, the wavelet subspace $V_I$
contains tracer blobs of size $\ell_I=L\times 2^{-J_I}$ or larger.
Similarly, we define the subspace of unmixed length scales by
\begin{align}
V_{\nu}=\mbox{span}\big\{w^{(\mu)}_{j,i_x,i_y}:\ & 1\leq \mu\leq 3,\ 0\leq j\leq J_\nu-1 \nonumber\\
& 0\leq i_x\leq 2^{j}-1,\ 0\leq i_y\leq 2^{j}-1 \big\},
\label{eq:Vnu_wlet}
\end{align}
containing the unmixed tracer blobs of size $\ell_\nu=L\times 2^{-J_\nu}$ or larger.
For given positive integers $J_I$ and $J_\nu$, we have $n=\dim V_{I}=4^{J_I}-1$ and $N=\dim V_{\nu} =4^{J_\nu}-1$.
Recall that the basis functions $\phi_i$ spanning the domain $V_{I}$ of the truncated PF operator $\mathcal P_p$
need not be identical to the basis functions $\psi_i$ spanning its range $V_{\nu}$. As a result,
the Fourier-based subspaces~\eqref{eq:VI_kI} and~\eqref{eq:Vnu_knu} can be used
in conjunction with the wavelet-based subspaces~\eqref{eq:VI_wlet} and~\eqref{eq:Vnu_wlet}.
In the following, we consider examples with both Fourier-based and
wavelet-based subspaces~\eqref{eq:VI_kI} and~\eqref{eq:VI_wlet} for defining the domain $V_{I}$.
For the range $V_{\nu}$, however, we only consider the Fourier-based subspace~\eqref{eq:Vnu_knu}
in order to achieve a speedup in the computations by taking advantage of the fast Fourier transform (via the \texttt{FFTW} library).
\\
Once the choice of bases is made, the truncated PF matrix~\eqref{eq:Kmatrix} can be computed
by evaluating the integral,
\begin{equation}
[P_p]_{ij}=\langle \mathcal P\phi_j,\psi_i \rangle_{L^2}:=\int_{\mathcal D}(\mathcal P\phi_j)(\mathbf x)[\psi_i(\mathbf x)]^\ast\mathrm d \mathbf x,
\label{eq:Kmatrix_2}
\end{equation}
where $\ast$ denotes the complex conjugation. We approximate this integral using
the standard trapezoidal rule~\citep{recipe}. To ensure the accuracy of the approximation,
the results reported in section~\ref{sec:results} are computed
using a dense uniform grid $\mathcal G$ of $2048\times 2048$ collocation points over
the domain $\mathcal D=[0,L]\times [0,L]$. The terms $\mathcal P\phi_j$ are computed from
the definition of the PF operator (Definition~\ref{def:PF}), i.e.,
$(\mathcal P\phi_j)(\mathbf x_0)=\phi_j(\varphi^{-1}(\mathbf x_0))$ for any
$\mathbf x_0\in\mathcal G$ (see~\cite{dellnitz01} for more accurate numerical methods).
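The assembly of the matrix~\eqref{eq:Kmatrix_2} can be sketched in Python as follows. The quadrature here is a simple Riemann sum on a cell-centered grid rather than the trapezoidal rule used for the reported results, and \texttt{inverse\_map} is a placeholder for the backward flow map $\varphi^{-1}$ evaluated on arrays of grid points:
\begin{verbatim}
import numpy as np

def pf_matrix(basis_dom, basis_ran, inverse_map, L=1.0, M=256):
    """Assemble [P_p]_{ij} = <P phi_j, psi_i> on an M x M collocation grid.

    basis_dom : list of vectorized callables phi_j(x, y) spanning V_I
    basis_ran : list of vectorized callables psi_i(x, y) spanning V_nu
    inverse_map : callable (X, Y) -> preimages under the flow map
    """
    xs = (np.arange(M) + 0.5) * L / M              # cell-centred grid
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    Xp, Yp = inverse_map(X, Y)                     # preimages of all grid points
    dA = (L / M) ** 2
    P = np.zeros((len(basis_ran), len(basis_dom)), dtype=complex)
    for j, phi in enumerate(basis_dom):
        Pphi = phi(Xp, Yp)                         # (P phi_j)(x0) = phi_j(varphi^{-1}(x0))
        for i, psi in enumerate(basis_ran):
            P[i, j] = np.sum(Pphi * np.conj(psi(X, Y))) * dA
    return P
\end{verbatim}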
\section{Examples and discussion}\label{sec:results}
\subsection{A time-periodic model}\label{sec:sineMap}
As the first example, we consider the time-periodic sine flow~\citep{liu94,pierrehumbert94}.
This model is simple enough to unambiguously demonstrate our results, yet it can exhibit complex
dynamics with simultaneous presence of chaotic mixing and coherent vortices.
The sine flow has a spatially sinusoidal velocity field on the domain
$(x,y)\in[0,1]\times [0,1]$ with periodic boundary conditions.
The temporal period of the flow is $2\tau$ for some $\tau>0$.
During the first $\tau$ time units, the velocity field is $\mathbf u=(0,\sin(2\pi x))^\top$
and switches instantly to $\mathbf u=(\sin(2\pi y),0)^\top$ for the second $\tau$
time units. This process repeats iteratively.
\begin{figure}
\centering
\includegraphics[width=.7\textwidth]{map_pmap_small}
\caption{Two hundred iterations of the sine map~\eqref{eq:map} with $\tau=0.25$ starting from $16\times 16$
initial conditions distributed uniformly over the domain $[0,1]\times [0,1]$.
The domain is periodic in both directions.}
\label{fig:pmap}
\end{figure}
The sine flow generates a reversible map $T$ that, over one period, maps
points $(x,y)$ to $T(x,y)$. The inverse of the map $T$ is given explicitly by~\citep{gubanov10}
\begin{equation}
T^{-1}:
\begin{pmatrix}
x\\ y
\end{pmatrix}
\mapsto
\begin{pmatrix}
x-\tau \sin(2\pi y)\\
y-\tau \sin\big(2\pi (x-\tau \sin(2\pi y))\big)
\end{pmatrix}
\mod{1}.
\label{eq:map}
\end{equation}
Figure~\ref{fig:pmap} shows $200$ iterations of this map
with $\tau=0.25$ launched from a uniform grid of initial conditions. The map
$T^{-1}$ has two hyperbolic fixed points located at $(0,0)$ and $(0.5,0.5)$ whose
tangle of stable and unstable manifolds creates a chaotic mixing region.
In addition, the map has two elliptic fixed points located at $(0.5,0)$ and
$(0,0.5)$. These elliptic fixed points are surrounded by invariant
Kolmogorov--Arnold--Moser (KAM) tori with quasi-periodic motion that inhibit mixing~\citep{topolHydro_arnold}.
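A minimal Python sketch of iterating the inverse map~\eqref{eq:map}, e.g.\ to reproduce a picture like figure~\ref{fig:pmap}, reads as follows (the grid of initial conditions is illustrative):
\begin{verbatim}
import numpy as np

def T_inv(x, y, tau=0.25):
    """One application of the inverse sine map (eq:map) on the unit torus."""
    x_new = (x - tau * np.sin(2 * np.pi * y)) % 1.0
    y_new = (y - tau * np.sin(2 * np.pi * x_new)) % 1.0
    return x_new, y_new

# 200 iterations from a 16 x 16 grid of initial conditions (cf. figure fig:pmap)
g = (np.arange(16) + 0.5) / 16
x, y = np.meshgrid(g, g)
for _ in range(200):
    x, y = T_inv(x, y)
\end{verbatim}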
\begin{figure}
\centering
\includegraphics[width=\textwidth]{map_200it_tracers}
\caption{The optimal initial conditions $f_{\mbox{\tiny opt}}\in V_{I}$ for the sine map, with
$V_{I}$ being the Fourier-based subspace defined in~\eqref{eq:VI_kI}.
Four optimal initial conditions with $k_I=1$, $2$, $3$, and $4$
are shown in the top panel. The range of the truncated PF operator
$\mathcal P_p$ is the Fourier-based subspace $V_{\nu}$ with $k_\nu=256$.
The bottom panel shows their corresponding
advected image $\mathcal Pf_{\mbox{\tiny opt}}$ under $200$ iterations of the sine map.
All figures show the entire domain $\mathcal D=[0,1]\times [0,1]$.}
\label{fig:optIC}
\end{figure}
It is known that mixing is more efficient around the hyperbolic fixed points due to their
tangle of stable and unstable manifolds~\citep{aref}. The KAM regions, in contrast, form islands of coherent motion
that inhibit efficient mixing of passive tracers. Therefore, it is desirable to release the tracer blobs around the hyperbolic fixed points, avoiding
the KAM region. Here, we examine whether the optimal initial condition given by Theorem~\ref{thm:main}
agrees with this intuitive assessment.
\begin{figure}
\centering
\includegraphics[width=.75\textwidth]{map_200it_tracers_wavelets}
\caption{The optimal initial conditions $f_{\mbox{\tiny opt}}\in V_{I}$ for the sine map.
Three optimal initial conditions with $J_I=1$, $2$, and $3$
are shown in the top panel. The bottom panel shows their corresponding
advected images $\mathcal Pf_{\mbox{\tiny opt}}$ under $200$ iterations of the sine map.
The range of the truncated PF operator
$\mathcal P_p$ is the Fourier-based subspace $V_{\nu}$ with $k_\nu=256$.
All figures show the entire domain $\mathcal D=[0,1]\times [0,1]$.}
\label{fig:optIC_wavelet}
\end{figure}
For the finite-time analysis, we consider the flow under $200$ iterations
of the sine map, i.e. we set the flow map $\varphi = T^{200}$.
First, we consider the Fourier-based initial subspace $V_{I}$ defined in~\eqref{eq:VI_kI}.
Figure~\ref{fig:optIC} shows the optimal initial conditions obtained from Theorem~\ref{thm:main}
with $k_I=1,2,3$ and $4$. For all parameter values $k_I$, the optimal initial condition consists of two
prominent blobs centered at the hyperbolic fixed points $(0,0)$ and $(0.5,0.5)$.
For $k_I=1$, only very large scales are available for the distribution of the tracer blob
and therefore some intersection with the KAM region is inevitable.
As the number of available wave numbers $k_I$ (or equivalently, available initial length scales)
increases, the blobs become more concentrated at the hyperbolic fixed points.
Even for $k_I=4$, the optimal initial condition has very small but non-zero concentration
in the KAM regions. This is due to the global nature of the Fourier modes which
inhibits the perfect localization around the hyperbolic fixed points.
The wavelet-bases subspace~\eqref{eq:VI_wlet} does not suffer from this drawback.
Figure~\ref{fig:optIC_wavelet}, for instance, shows three optimal initial conditions
in this wavelet-based subspace.
For $J_I=1$, where only the largest scales are available, intersection
with the KAM region is inevitable (similar to the case of $k_I=1$ in figure~\ref{fig:optIC}).
As the smaller scales become available, the optimal initial condition $f_{\mbox{\tiny opt}}$ concentrates
around the hyperbolic fixed points with no concentration at the KAM regions.
The results in figures~\ref{fig:optIC} and \ref{fig:optIC_wavelet} are computed using the
Fourier-based subspace $V_{\nu}$ with $k_\nu=256$. To ensure the insensitivity
of the results to perturbations, we recomputed them
by varying the cut-off wavenumber in the interval $250\leq k_\nu\leq 260$ and obtained
almost identical optimal initial conditions.
\subsection{Two-dimensional turbulence}\label{sec:2Dturb}
As the second example, we consider a fully unsteady flow obtained from a direct numerical
simulation of the two-dimensional Navier--Stokes equation,
\begin{equation}
\partial_t \mathbf u +\mathbf u\cdot \pmb\nabla\mathbf u = -\pmb\nabla p +\nu \Delta \mathbf u +\mathbf F,\quad \pmb\nabla\cdot\mathbf u=0,
\end{equation}
with the dimensionless viscosity $\nu=10^{-5}$ and a band-limited stochastic forcing $\mathbf F$.
The flow domain is the box $\mathcal D=[0,2\pi]\times[0,2\pi]$
with periodic boundary conditions. A standard pseudo-spectral code
with 2/3 dealiasing was used to numerically solve the Navier--Stokes equations
(see Section 6.2 of~\citet{pra} for further computational details).
\begin{figure}
\centering
\includegraphics[width=.95\textwidth]{turb_T100_vort_ftle}
\caption{(a) The vorticity field at the initial time $t_0$. (b) The vorticity field at the final time $t_0+T$.
(c) The forward-time FTLE field corresponding to the time interval $[t_0,t_0+T]$. Here, $T$
is $100$ time units.}
\label{fig:2Dturb_vort}
\end{figure}
Starting from a random-phase initial
condition, we numerically integrate the Navier--Stokes equation. After
$1000$ time units the flow has reached a statistically steady turbulent state with Reynolds number $4.1\times 10^3$.
We set this time as the initial time $t_0$ for the
mixing analysis. The final time instance is set to $t_0+T$ with $T=100$.
Figures~\ref{fig:2Dturb_vort}(a,b) show the vorticity fields at these initial and final times.
As is typical of two-dimensional turbulence, the flow contains several coherent vortices that
exhibit minimal material deformation over the time interval $[t_0,t_0+T]$~\citep{mcwilliams1984vortices}. These coherent
vortices are signaled by the islands of small finite-time Lyapunov exponent (FTLE)
shown in figure~\ref{fig:2Dturb_vort}(c). The FTLE field is computed as
$\log [\lambda(\mathbf x)]/(2T)$ for all $\mathbf x\in\mathcal D$, with $\lambda(\mathbf x)$ being the largest
eigenvalue of the Cauchy--Green strain tensor $[\mathrm d \varphi(\mathbf x)]^\top\mathrm d\varphi (\mathbf x)$, and $\mathrm d\varphi$
denoting the Jacobian of the flow map~\citep{voth02}.
Outside the coherent vortices the flow is mostly chaotic, dominated by the stretching
and folding of material lines.
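For reference, the FTLE field can be estimated from the flow map sampled on a regular grid as in the Python sketch below. The finite-difference approximation of the Jacobian $\mathrm d\varphi$ is our own simplification and may differ in detail from the computation used for figure~\ref{fig:2Dturb_vort}(c):
\begin{verbatim}
import numpy as np

def ftle(flow_map_x, flow_map_y, dx, dy, T):
    """FTLE = log(largest eigenvalue of the Cauchy-Green tensor) / (2T).

    flow_map_x, flow_map_y: final positions of a regular grid of tracers
    with spacings dx (axis 0) and dy (axis 1); T: advection time.
    """
    dphi11 = np.gradient(flow_map_x, dx, axis=0)
    dphi12 = np.gradient(flow_map_x, dy, axis=1)
    dphi21 = np.gradient(flow_map_y, dx, axis=0)
    dphi22 = np.gradient(flow_map_y, dy, axis=1)
    # Cauchy-Green tensor C = (d phi)^T (d phi), largest eigenvalue per point
    c11 = dphi11**2 + dphi21**2
    c12 = dphi11*dphi12 + dphi21*dphi22
    c22 = dphi12**2 + dphi22**2
    tr, det = c11 + c22, c11*c22 - c12**2
    lam_max = 0.5 * (tr + np.sqrt(np.maximum(tr**2 - 4*det, 0.0)))
    return np.log(np.maximum(lam_max, 1e-300)) / (2 * T)
\end{verbatim}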
\begin{figure}
\centering
\includegraphics[width=\textwidth]{turb_T100_tracers}
\caption{Optimal initial tracers $f_{\mbox{\tiny opt}}\in V_{I}$ (upper panel) in the turbulent flow with $k_I=1,2,3$
and $4$. Their corresponding advected images $\mathcal Pf_{\mbox{\tiny opt}}$ at the final time $t_0+T$ are
shown in the lower panel. In all four cases shown here, the mixed wavenumber is $k_\nu=256$ (see equation~\eqref{eq:Vnu_knu}).
All panels show the entire domain $\mathcal D=[0,2\pi]\times [0,2\pi]$.}
\label{fig:2Dturb}
\end{figure}
Next we compute the optimal initial conditions $f_{\mbox{\tiny opt}}$.
Unlike the sine map, the preimages $\varphi^{-1}(\mathbf x_0)$ are not explicitly known
here. We numerically evaluate the preimages by integrating
the ODE~\eqref{eq:ode} backwards in time from the final time $t_0+T$ to the initial time $t_0$,
for each initial condition $\mathbf x_0\in\mathcal G$.
This numerical integration is carried out with the fifth-order Runge--Kutta scheme
of~\cite{RK45}. Since the velocity field $\mathbf u$ is stored on a discrete spatiotemporal
grid, it needs to be interpolated for the particle advection.
Here, we use cubic splines for the spatial interpolation of the velocity field
together with a linear interpolation in time.
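A simplified Python sketch of this backward advection, using SciPy's adaptive Runge--Kutta solver and a gridded velocity interpolant, is given below. The interpolation here is linear in both space and time (whereas cubic splines are used in space for the reported results), and the $[0,2\pi]$-periodic wrapping of positions is our own assumption:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import RegularGridInterpolator

def make_velocity_rhs(xg, yg, tg, U, V):
    """U, V have shape (len(tg), len(xg), len(yg)); linear interpolation
    in time and space (a simplification of the scheme in the text)."""
    iu = RegularGridInterpolator((tg, xg, yg), U,
                                 bounds_error=False, fill_value=None)
    iv = RegularGridInterpolator((tg, xg, yg), V,
                                 bounds_error=False, fill_value=None)
    def rhs(t, p):
        x, y = p
        q = np.array([[t, x % (2*np.pi), y % (2*np.pi)]])
        return [iu(q)[0], iv(q)[0]]
    return rhs

def preimage(rhs, x0, y0, t0, T):
    """Integrate backwards from t0+T to t0 to obtain varphi^{-1}(x0, y0)."""
    sol = solve_ivp(rhs, (t0 + T, t0), [x0, y0], method="RK45",
                    rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]
\end{verbatim}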
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{compare_mixNorms_turb}
\caption{Red line (squares): The mix-norm $\|\mathcal Pf_{\mbox{\tiny opt}}\|_{H^{-1}}$
for the optimal initial concentrations $f_{\mbox{\tiny opt}}\in V_{I}$. The optimal initial conditions for $k_I=1$, $2$, $3$ and $4$
are shown in figure~\ref{fig:2Dturb}.
Blue line (circles): The mix-norm $\|\mathcal Pf\|_{H^{-1}}$ for the initial concentrations
$f(x,y)=\cos(k_Ix)\cos(k_Iy)/\pi$.
\label{fig:mixNorm}
}
\end{figure}
Figure~\ref{fig:2Dturb} shows the optimal initial tracer patterns $f_{\mbox{\tiny opt}}$ for
$k_I=1,2,3$ and $4$, which belong to the corresponding Fourier-based subspaces $V_{I}$
as defined in equation~\eqref{eq:VI_kI}.
As opposed to the simple model considered in Section~\ref{sec:sineMap},
the optimal tracer patterns here have fairly complicated structures. This is to be expected
as the turbulent flow itself has a complex spatiotemporal structure.
Ideally, the tracer should concentrate outside the coherent vortices to achieve
better mixing. Similar to the sine flow, for $k_I=1$, where only the very large scales are available
for the release of the tracer, there is some inevitable overlap between the coherent vortices and
the tracer. This results in the visibly unmixed blobs in the advected tracers $\mathcal Pf_{\mbox{\tiny opt}}$
shown in the lower panel of figure~\ref{fig:2Dturb}. Theorem~\ref{thm:main}, however, guarantees
that the optimal initial condition $f_{\mbox{\tiny opt}}$ is such that
the unmixed blobs are minimal.
As smaller scales become available ($k_I>1$), the intersection of the high initial tracer concentration
and the coherent vortices becomes smaller, leading to a more homogeneous mixture after
advection to the final time $t_0+T$.
\begin{figure}
\centering
\includegraphics[width=.75\textwidth]{turb_T100_tracers_wavelets}
\caption{Optimal initial tracers $f_{\mbox{\tiny opt}}\in V_{I}$ (upper panel) in the turbulent flow with $J_I=1$, $2$, and $3$.
Their corresponding advected images $\mathcal Pf_{\mbox{\tiny opt}}$ at the final time $t_0+T$ are
shown in the lower panel. In all three cases shown here, the mixed wavenumber is $k_\nu=256$ (see equation~\eqref{eq:Vnu_knu}).
All panels show the entire domain $\mathcal D=[0,2\pi]\times [0,2\pi]$.}
\label{fig:2Dturb_wavelet}
\end{figure}
To quantify the mixture qualities, we compute the mix-norm
of the advected tracers proposed by~\cite{shaw07}. This mix-norm is the Sobolev $H^{-1}$ norm,
\begin{equation}
\|\rho\|_{H^{-1}}=\sqrt{\sum_{\mathbf k\neq \mathbf 0}|\widehat{\rho}(\mathbf k)|^2/|\mathbf k|^2},
\label{eq:mix-norm}
\end{equation}
where the hat sign denotes the Fourier transform.
\cite{mathew07} proposed the alternative Sobolev norm $H^{-1/2}$ for quantifying the mixture quality.
The motivation for using such Sobolev norms is that the density of a homogeneous mixture
is concentrated at ever smaller scales or, equivalently, larger wave numbers $|\mathbf k|$.
As a result, more homogeneous mixtures have smaller mix-norms.
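A short Python sketch of evaluating~\eqref{eq:mix-norm} for a mean-zero field sampled on a periodic grid is given below; the normalization of the discrete Fourier coefficients and the use of integer wavenumbers are our own conventions:
\begin{verbatim}
import numpy as np

def mix_norm(rho):
    """H^{-1} mix-norm (eq:mix-norm) of a mean-zero field on an M x M
    periodic grid; rho_hat are taken as discrete Fourier-series
    coefficients and the wavenumbers as integers."""
    M = rho.shape[0]
    rho_hat = np.fft.fft2(rho) / M**2          # Fourier-series coefficients
    k = np.fft.fftfreq(M) * M                  # integer wavenumbers
    KX, KY = np.meshgrid(k, k, indexing="ij")
    K2 = KX**2 + KY**2
    K2[0, 0] = np.inf                          # drop the mean (k = 0) mode
    return np.sqrt(np.sum(np.abs(rho_hat)**2 / K2))
\end{verbatim}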
The mix-norm $\|\mathcal Pf_{\mbox{\tiny opt}}\|_{H^{-1}}$ is shown
in figure~\ref{fig:mixNorm} for the optimal initial conditions $f_{\mbox{\tiny opt}}\in V_{I}$ with $1\leq k_I\leq 6$.
As $k_I$ increases, more homogeneous mixtures are obtained, as is also visible in figure~\ref{fig:2Dturb}.
For comparison, we also show the mix-norm $\|\mathcal Pf\|_{H^{-1}}$ for
the non-optimal initial conditions $f(x,y)=\cos(k_Ix)\cos(k_Iy)/\pi\in V_{I}$. The
non-optimal initial conditions result in a larger mix-norm, showing that they
do not mix as well as the optimal initial conditions $f_{\mbox{\tiny opt}}$ do.
Figure~\ref{fig:2Dturb_wavelet} shows the optimal initial conditions found
in the wavelet-based subspace~\eqref{eq:VI_wlet}. Their mix-norms exhibit a
behavior similar to that shown in figure~\ref{fig:mixNorm}.
\section{Concluding remarks}
The design of mixing devices has primarily been concerned with the
stirring protocols that enhance mixing. The optimality of these protocols is limited by
design constraints and fundamental physics~\citep{lin11}. On the other hand, for a given stirring protocol,
the final mixture quality also depends on the initial configuration of the
tracer. The optimal initial condition for the release of the tracer has received far less attention.
Here, we proposed a rigorous framework for determining the
optimal initial tracer configuration to achieve maximal mixing under finite-time passive advection.
We showed that, under reasonable assumptions, the problem reduces to a finite-dimensional
optimization problem. The optimal initial condition then coincides with
a singular vector of a truncated Perron--Frobenius (PF) operator.
This truncation is not an approximation of the infinite-dimensional PF operator; it rather
follows naturally from our simplifying assumption that the tracer blobs smaller than
a prescribed critical length scale $\ell_\nu$ are completely mixed.
We discussed two numerical implementations of the
optimization problem using Fourier modes and Haar wavelets.
While the Fourier modes are convenient for the spatially periodic flows considered
here, the wavelets are more suitable for handling more complicated geometries
and boundary conditions. Wavelets also allow for optimal initial conditions
that are local in both space and scale. The space localization is crucial in many
applications where the tracer can only be released into a subset of the flow domain
due to geometric constraints.
We restricted our attention here to ideal passive tracers.
Future work will expand the framework
to account for diffusion and the presence of sinks and sources.
Diffusion, in particular, dictates a dissipative length scale $\ell_\nu$ for mixed blobs which,
in the absence of diffusion, was prescribed here in an ad-hoc manner.
\ \\
\textbf{Acknowledgments:}
I would like to thank Daniel Karrasch for pointing out the correct terminology in Definition~\ref{def:PF}
and bringing a number of relevant references to my attention.
I am also grateful to Charles Doering, Gary Froyland, George Haller
and Christopher Miles for their comments on this manuscript.
| {
"timestamp": "2016-11-22T02:00:47",
"yymm": "1604",
"arxiv_id": "1604.01794",
"language": "en",
"url": "https://arxiv.org/abs/1604.01794",
"abstract": "The efficiency of a fluid mixing device is often limited by fundamental laws and/or design constraints, such that a perfectly homogeneous mixture cannot be obtained in finite time. Here, we address the natural corollary question: Given the best available mixer, what is the optimal initial tracer pattern that leads to the most homogeneous mixture after a prescribed finite time? For ideal passive tracers, we show that this optimal initial condition coincides with the right singular vector (corresponding to the smallest singular value) of a suitably truncated Perron-Frobenius (PF) operator. The truncation of the PF operator is made under the assumption that there is a small length-scale threshold $\\ell_\\nu$ under which the tracer blobs are considered, for all practical purposes, completely mixed. We demonstrate our results on two examples: a prototypical model known as the sine flow and a direct numerical simulation of two-dimensional turbulence. Evaluating the optimal initial condition through this framework only requires the position of a dense grid of fluid particles at the final instance and their preimages at the initial instance of the prescribed time interval. As such, our framework can be readily applied to flows where such data is available through numerical simulations or experimental measurements.",
"subjects": "Fluid Dynamics (physics.flu-dyn); Dynamical Systems (math.DS); Optimization and Control (math.OC); Computational Physics (physics.comp-ph)",
"title": "Optimal initial condition of passive tracers for their maximal mixing in finite time",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126506901791,
"lm_q2_score": 0.724870282120402,
"lm_q1q2_score": 0.7094397152205967
} |
https://arxiv.org/abs/1201.1948 | Rigorous Enclosures of a Slow Manifold | Slow-fast dynamical systems have two time scales and an explicit parameter representing the ratio of these time scales. Locally invariant slow manifolds along which motion occurs on the slow time scale are a prominent feature of slow-fast systems. This paper introduces a rigorous numerical method to compute enclosures of the slow manifold of a slow-fast system with one fast and two slow variables. A triangulated first order approximation to the two dimensional invariant manifold is computed "algebraically". Two translations of the computed manifold in the fast direction that are transverse to the vector field are computed as the boundaries of an initial enclosure. The enclosures are refined to bring them closer to each other by moving vertices of the enclosure boundaries one at a time. As an application we use it to prove the existence of tangencies of invariant manifolds in the problem of singular Hopf bifurcation and to give bounds on the location of one such tangency. | \section{Introduction}\label{S_Intro}
Invariant manifolds and their intersections are important features that organize qualitative properties of dynamical systems. Three types of manifolds have been prominent in the subject: (1) compact invariant tori~\cite{Dl01}, (2) stable and unstable manifolds of equilibria and periodic orbits~\cite{Kea05,EKO07}, and (3) slow manifolds of multiple time scale systems~\cite{J95}. Interval arithmetic and verified computing have been used extensively to give rigorous estimates and existence proofs for invariant tori and occasionally to locate stable and unstable manifolds, but this paper is the first to employ these methods to locate slow manifolds. Each of these three cases poses numerical challenges for locating the manifolds.
Many methods that locate invariant tori assume that the flow on the tori is smoothly conjugate to a constant flow with dense orbits. Existence of this conjugacy confronts well known small divisor problems and the winding vector of the flow must satisfy diophantine conditions in order for this problem to be solvable. Typically, the numerical methods produce a Fourier expansion of the conjugacy which is determined up to a translation. The manifolds are located by projection onto a discrete set of Fourier modes and solving a fixed point equation for the coefficients of the conjugacy.
The computation of stable and unstable manifolds of equilibria and periodic orbits is a ``one-sided'' boundary value problem. The manifolds consist of trajectories that are asymptotic to the equilibrium or periodic orbit. In the case of an equilibrium point of an analytic vector field, the local stable and unstable manifolds are analytic graphs that have convergent asymptotic expansions whose coefficients can be determined iteratively. The most challenging aspect of computations of two dimensional manifolds arises from the way that trajectories do or do not spread out in the manifold as one departs from the equilibrium or periodic orbit. As illustrated by the Lorenz manifold~\cite{Kea05}, the manifolds can twist and fold in ways that present additional geometrical complications for numerical methods. The development of rigorous bounds for these invariant manifolds follows similar principles to the verified computation of individual trajectories.
Multiple time scale vector fields, also known as singularly perturbed differential equations, occur in many settings: systems of chemical reactions, lasers, fluid dynamics and models of the electrical activity of neurons are a few examples. Borrowing terminology from fluid dynamics, the solutions of these systems can have (boundary) layers in which the fast time scale determines the rate at which the solution varies as well as long periods of time during which the solution evolves on the slow time scale. The slow motion typically occurs along \textit{slow manifolds} that are locally invariant. The slow manifolds play a prominent role in qualitative analysis of the dynamics and bifurcations of multiple time scale systems. Indeed, model reduction procedures are frequently employed that replace a model by a lower dimensional model that approximates the motion along a slow manifold and ignores the fast dynamics of the original model. The ideal for this type of model reduction is an algorithm that computes the slow manifold exactly. That ideal seems very difficult to achieve and is not addressed in this paper. Instead, we seek rigorous bounds for the location of the slow manifold that are tight enough to give information that can be used in the analysis of bifurcations of the system.
To explain the methods we introduce in the simplest terms, we focus upon \textit{slow-fast} systems that contain an explicit parameter $\varepsilon$ that represents the ratio of time scales. Moreover, we restrict attention to systems that have two slow variables and one fast variable and use a single example as a test case. In principle, the methods generalize to the case of codimension one slow manifolds, and the definitions and existence proofs in Sections \ref{S_OverMethod} and \ref{S_Existence} have obvious higher dimensional analogues. In practice, however, due to the scarcity of tools for computational geometry in higher dimensions, implementing a higher dimensional version would be a significant extension of the work described in this paper.
We comment on generalizations from the setting of systems with two slow and one fast variable in the discussion at the end of the paper, but leave consideration of further details to future work.
Slow manifolds of multiple time scale systems present unique theoretical and numerical challenges compared to the computation of invariant tori and (un)stable manifolds. The first of these challenges is that theory is developed primarily in terms of ``small enough'' values of the parameter $\varepsilon$ measuring the time scale ratio of a slow-fast system. Numerically, one always works with specific values of $\varepsilon$. The convergence of trajectories as $\varepsilon \to 0$ is singular, making it difficult to develop methods framed in terms of asymptotic expansions in $\varepsilon$. Divergent series are the rule rather than the exception in this context. The rich history of numerical integration methods for stiff systems and the large literature on reduction methods for kinetic equations of chemical systems reflect the difficulty of computing attracting slow manifolds, the simplest case for this problem. Computing slow manifolds of saddle-type presents the additional challenge that most nearby trajectories diverge from the slow manifold on the fast time scale in both forward and backward time. The second theoretical difficulty in finding slow manifolds is that they are only locally invariant in most problems of interest. The local invariance is accompanied by a lack of uniqueness: possible manifolds intersect fast subspaces in open sets whose diameter is exponentially small in $\varepsilon$; i.e., bounded by $\exp(-c/\varepsilon)$ for a suitable $c > 0$. Methods based upon root finding of a discretized set of equations must choose a specific solution of the discretized equations.
We compute enclosures of slow manifolds by exploiting transversality properties that improve as $\varepsilon \to 0$ while being suitable for fixed values of $\varepsilon$. The methods do not identify a unique object and are well suited to locating locally invariant slow manifolds. If $H$ is a hypersurface and $F$ is a vector field, then transversality of $F$ to $H$ is a \emph{local} property: verification does not rely upon computation of trajectories of $F$. For a slow-fast vector field with one fast variable, translation of a normally hyperbolic critical manifold along the fast direction produces a transverse hypersurface when the translation distance is large enough. Translation distances proportional to $\varepsilon$ suffice. In this paper, we use piecewise linear surfaces $H$ as enclosing manifolds. For the example we consider, transversality at vertices of a face of $H$ implies transversality of the entire face. This reduces the computational complexity of checking transversality sufficiently that iterative refinement of the enclosures was feasible.
Since slow manifolds are objects that are defined asymptotically in terms of the parameter $\varepsilon$, they are not directly computable using finite information. One part of this paper is devoted to the development of a mathematical framework within which slow manifolds are defined for fixed values of $\varepsilon > 0$. We define \emph{computable slow manifolds} and relate this concept to the slow manifolds studied in geometric singular perturbation theory. All computations and statements in this paper are for computable slow manifolds. This is similar in spirit to the finite resolution dynamics approach of Luzzatto and Pilarczyk \cite{LP11}.
Our work is motivated by the study of tangencies of invariant manifolds. Significant global changes in the dynamics of a system have been observed to occur at bifurcations involving tangencies. Proving the existence of tangencies is intrinsically complicated because the manifolds themselves must be tracked over a range of parameters. Computer-aided proofs of tangencies of invariant manifolds have previously been studied by Arai and Mischaikow in \cite{AM06}, and Wilczak and Zgliczy\'nski in \cite{WZ09}. In Section \ref{S_Tang}, we prove that a tangency bifurcation involving a computable slow manifold occurs in the singular Hopf normal form introduced in \cite{G08}.
\subsection{Slow-fast systems}\label{SS_SlowFast}
Slow-fast differential equations have the form:
\begin{eqnarray}\label{eq_slowFast}
\varepsilon \dot x&=&f(x,y,\varepsilon) \\
\nonumber \dot y & = & g(x,y,\varepsilon),
\end{eqnarray}
where $x\in\mathbb{R}^n$, $y\in\mathbb{R}^m$, $f:\mathbb{R}^{n+m+1} \rightarrow \mathbb{R}^n$, and $g:\mathbb{R}^{n+m+1} \rightarrow \mathbb{R}^m$. We assume that the vector field $(f,g)$ is smooth ($C^\infty$), although most of this paper can easily be adapted to the finitely differentiable setting. Here $x$ and $y$ are the fast and slow variables, respectively. Throughout the paper we consider the case $m=2$ and $n=1$ of two slow variables and one fast variable.
We define the \textit{critical manifold} as the set
\begin{equation}\label{eq_critMfd}
S_0:=\{(x,y)\in \mathbb{R}^{n+m} : f(x,y,0)=0\}.
\end{equation}
The critical manifold is normally hyperbolic at points where $D_xf$ is hyperbolic; i.e., has no eigenvalue whose real part is zero. Points where $S_0$ is singular are referred to as folds. On the normally hyperbolic pieces of the critical manifold, $x$ is given as a function of $y$, $x=h_0(y)$. The corresponding differential equation
\begin{equation}\label{eq_slowSystem}
\dot y = g(h_0(y),y,0)
\end{equation}
is called the slow flow. If one instead rescales time with $\varepsilon$ and puts $\varepsilon=0$ in \eqref{eq_slowFast}, one gets the \textit{layer equation}:
\begin{eqnarray}\label{eq_fastSystem}
x'&=&f(x,y,0) \\
\nonumber y' & = & 0.
\end{eqnarray}
Note that the manifold $S_0$ is exactly the set of critical points for the layer equation. Singular perturbation theory studies how the solutions to (\ref{eq_slowFast}) for $\varepsilon$ small, but positive, can be understood by studying solutions to (\ref{eq_slowSystem}) and (\ref{eq_fastSystem}).
When $S_0$ is normally hyperbolic and $\varepsilon>0$ is sufficiently small, geometric singular perturbation theory \cite{J95} ensures that the critical manifold perturbs to a \textit{slow manifold}. Slow manifolds are \textit{locally invariant} and $O(\varepsilon)$ close to the critical manifold. However, slow manifolds are not unique, although different choices are within $O(e^{-c/\varepsilon})$ distance from each other. We denote slow manifolds by $S_\varepsilon$. The purpose of this work is to compute approximations of $S_\varepsilon$ that are guaranteed to be of a certain accuracy. This is achieved by computing two approximations that enclose the slow manifold. The two approximations of the slow manifold are triangulated surfaces transverse to the vector field. To prove the transversality, we use interval analysis, to be explained in Subsection \ref{SS_ValNum}. Interval analysis is a general technique that enables mathematically rigorous proofs of inequalities on a digital computer.
To simplify notation we denote the two slow variables by $y$ and $z$, i.e., from now on $y\in\mathbb{R}$, and the vector field in the slow variables is denoted by $g=(g_y,g_z)$. We also assume that $f,g_y,$ and $g_z$ are independent of $\varepsilon$. To summarize, the systems we study are of the following form:
\begin{eqnarray}\label{eq_slowFast_12}
\varepsilon \dot x&=&f(x,y,z) \\
\nonumber \dot y & = & g_y(x,y,z) \\
\nonumber \dot z & = & g_z(x,y,z),
\end{eqnarray}
where $x,y,z\in\mathbb{R}$, and $f,g_y,g_z:\mathbb{R}^{3} \rightarrow \mathbb{R}$. We will sometimes use the notation $F=(f,g_y,g_z)$.
\subsection{Validated numerics}\label{SS_ValNum}
\textit{Interval analysis} was introduced by Moore in \cite{Mo66} as a method to use a digital computer to produce mathematically rigorous results with only approximate arithmetic. Tucker \cite{T11} is a modern introduction to the subject, and more advanced topics are discussed by Neumaier \cite{Ne90}. The main idea is to replace floating point arithmetic with a set arithmetic; the basic objects are intervals of points rather than individual points. Together with directed rounding, this method yields an enclosure arithmetic that allows for the rigorous verification of inequalities. To use interval analysis to produce a mathematical proof, often called \textit{(auto-)validated numerical methods}, one has to prove that the statement at hand can be reduced to a finite number of inequalities, and then verify that these inequalities are satisfied. Interval arithmetic is used for the verification. The objects used to describe sets in validated numerics are typically convex sets in some coordinate system, e.g., intervals, parallelograms, or ellipsoids. In this paper we will employ triangular meshes of surfaces, an approach that, in this setting, has previously been used only in \cite{JT11c}.
\subsection{Computation of invariant manifolds}\label{SS_InvMfd}
The study of invariant manifolds \cite{HPS77} is central to the theory of dynamical systems. The behavior of a system can often be understood by understanding its invariant structures. Numerical computations of invariant manifolds \cite{CFL05,C09,CZ11,EKO07,GK09,JT11a,Kea05,O95,S90,Z09} are important in many applications. There are no universally applicable methods to compute invariant manifolds; to be efficient, they have to be tailored for the specific class of problems one is studying. Computing invariant manifolds of slow-fast systems is particularly challenging. Two existing methods are \cite{EKO07,GK09}, and no rigorous methods exist. The main idea of our method is to refine a first order approximation of the manifold by local modifications that maintain transversality of the enclosing manifolds. Interval arithmetic is used to make the local computation of transversality rigorous. This is similar in spirit to the methods developed in \cite{G95} to study the phase portraits of planar polynomial vector fields. Even in the planar case the verified computation of phase portraits is a challenging task, and the few methods that exist include \cite{G95,JT11b}.
\section{Overview of the method}\label{S_OverMethod}
This section describes our method to compute enclosures of the slow manifold of a slow-fast system of the form (\ref{eq_slowFast_12}). We start by giving an overview of the main ideas of the method. There are five main steps in the algorithm:
\begin{enumerate}
\item
triangulation of the critical manifold,
\item
computing the $O(\varepsilon)$ correction term for the slow manifold,
\item
constructing left and right perturbations of the slow manifold,
\item
proving that the left and right perturbations enclose the manifold, and
\item
tightening the enclosure by contracting the left and right perturbations towards each other.
\end{enumerate}
The first step is to compute a triangulation of the critical manifold, which is adapted to its geometry. The manifold is defined implicitly by the condition $f(x,y,z)=0$. In the example we consider in Section \ref{S_Method}, we solve this equation to obtain explicit expressions for the functions of the form $x = h_0(y,z)$ whose graphs lie in the critical manifold. Alternatively, one computes approximations to $h_0$ using, e.g., automatic differentiation and continuation procedures. There are many software packages to compute triangulations of surfaces; we use CGAL \cite{CGAL} via its matlab interface. When a part of the critical manifold is represented as the graph of a function $h_0$, its domain in the plane of the slow variables can be triangulated, and then this triangulation can be lifted to the graph, as illustrated in Figure \ref{F_triangulation}. So that the triangles in the lifted triangulation have similar diameters, we choose triangles in the plane of the slow variables to have diameters that depend upon the gradient of $h_0$. We stress that the rest of the algorithm is independent from how the triangulation of the critical manifold is constructed. Rather than using axis parallel patches, one could, e.g., use approximate trajectory segments of the reduced system to determine the piece of the domain of the slow variables, where the slow manifold is computed.
\begin{figure}[h]
\begin{center}
(a)\includegraphics[width=0.45 \textwidth]{triangulation2D.eps}
(b)\includegraphics[width=0.45 \textwidth]{triangulation3D.eps}
\caption{The mesh generated for the example in Section \ref{S_Method}. There is a fold at $\{y=0\}$. (a) The Delaunay triangulation of the $(y,z)$ plane that is generated by the geometry adapted mesh points. (b) The lift of the triangulation to the critical manifold.}\label{F_triangulation}
\end{center}
\end{figure}
We compute an approximation to the slow manifold using a procedure similar to that employed in stiff integrators that use Rosenbrock methods \cite{HW}. The tangent space to the critical manifold is orthogonal to the vector $df$. According to the Fenichel theory, the slow manifold is $O(\varepsilon)$ close to the critical manifold in the $C^1$ topology, so its tangent space is approximately normal to $df$. At a point $(x,y,z)$ in the (lifted) triangulation of $S_0$, we look for a nearby point $(x+\delta,y,z)$ at which the vector field is orthogonal to $df(x,y,z)$. Since $f(x,y,z) = 0$ and the normal hyperbolicity implies that $\partial_x f \ne 0$,
$$\delta = -\varepsilon \frac{(\partial_y f g_y + \partial_z f g_z)}{(\partial_x f)^2}$$
is an approximate solution to this equation.
Setting $\delta$ to this value, we take $(x+\delta,y,z)$ as a point of the triangulation of the approximate slow manifold.
The critical manifold and the approximation to the slow manifold are illustrated in Figure \ref{F_critSlowMfd} (a) and (b), respectively. We next perturb this triangulation of the approximate slow manifold in both directions parallel to the $x$-axis, as in Figure \ref{F_critSlowMfd} (c), by an amount $2^{j-6}\delta$, where $j$ is a natural number that will be specified later. In case $\delta$ is very small, we replace it by an $O(\varepsilon^2)$ term. This procedure yields two surfaces that are candidates for the enclosing surfaces that we seek.
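A schematic Python implementation of this correction and of the two candidate enclosing surfaces is sketched below. The partial derivatives of $f$ are supplied as vectorized callables, the sign convention for the shift is illustrative, and the special handling of very small $\delta$ is omitted:
\begin{verbatim}
import numpy as np

def shifted_vertices(verts, f_x, f_y, f_z, g_y, g_z, eps, j=6):
    """Given vertices (x, y, z) on the critical manifold, return the
    approximate slow-manifold vertices and the two perturbed surfaces.

    f_x, f_y, f_z : partial derivatives of f; g_y, g_z : slow vector field.
    """
    x, y, z = verts.T
    # first-order correction: delta = -eps (f_y g_y + f_z g_z) / f_x^2
    delta = -eps * (f_y(x, y, z) * g_y(x, y, z)
                    + f_z(x, y, z) * g_z(x, y, z)) / f_x(x, y, z) ** 2
    slow = np.column_stack([x + delta, y, z])
    shift = np.abs(2.0 ** (j - 6) * delta)       # perturbation along the fast axis
    left = np.column_stack([x + delta - shift, y, z])
    right = np.column_stack([x + delta + shift, y, z])
    return slow, left, right
\end{verbatim}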
\begin{figure}[h]
\begin{center}
(a)\includegraphics[width=0.28 \textwidth]{critMfd.eps}
(b)\includegraphics[width=0.28 \textwidth]{critSlowMfd.eps}
(c)\includegraphics[width=0.28 \textwidth]{enclosureMfds.eps}
\caption{Construction of the enclosing triangulations. This figure shows the projection on $(x,y)$ coordinates of: (a) the critical manifold (solid); (b) the critical manifold (solid) and the slow manifold (dotted); (c) the critical manifold (solid), the slow manifold (dotted), and the two enclosing surfaces (dashed).}\label{F_critSlowMfd}
\end{center}
\end{figure}
To verify that the surfaces enclose the slow manifold, we check whether the flow of the full system (\ref{eq_slowFast_12}) is transversal to the candidate surfaces. As the candidate surfaces are piecewise linear, we have to define what we mean by transversality at the edges and vertices of the triangulation.
\begin{definition}\label{D_Cone}
Let $\mathcal{T}\subset \mathbb{R}^3$ be a triangulated, piecewise linear two dimensional manifold $\mathcal{T}=\bigcup T_i$. Since $\mathcal{T}$ is a manifold, it locally separates $\mathbb{R}^3$ into two sides. We say that a vector $v$ is transverse to $\mathcal{T}$ if $v$ and $-v$ point to opposite sides of $\mathcal{T}$. A smooth vector field is transverse to $\mathcal{T}$ if it is transverse to $\mathcal{T}$ at every point of $\mathcal{T}$.
\end{definition}
Figure \ref{F_cone}(a) illustrates this definition. Trajectories of the flow generated by $F$ will all cross $\mathcal{T}$ from one side to another if $F$ is transverse to $\mathcal{T}$. If $\mathcal{T}_1$ and $\mathcal{T}_2$ are triangulated surfaces transverse to the flow with opposite crossing directions, then they form enclosing surfaces for the slow manifold we seek.
\begin{figure}[h]
\begin{center}
(a)\includegraphics[width=0.35 \textwidth]{cone.eps}
\hspace{1.0cm}
(b)\includegraphics[width=0.2 \textwidth]{triangle.eps}
\caption{(a) Transversality check on an edge of the triangulation. $F$ is transversal if it does not belong to the cone $\mathcal{C}$. (b) Transversality check on one face of the triangulation. To verify that the flow intersects the surface transversally, it suffices to prove that the vector field is never orthogonal to the normal of the surface, which is a constant vector. So, we compute the range of $F\cdot n$, and prove that it does not contain $0$.}\label{F_cone}\label{F_triangle}
\end{center}
\end{figure}
Transversality is a condition that is local to each face of the triangulation, so we can check it on each face of the triangulation separately. To check the transversality condition on one face, we estimate the range of the inner product of the vector field with the normal of the face, as illustrated in Figure \ref{F_triangle}(b). Details about the existence of locally invariant, normally hyperbolic manifolds inside the enclosure are addressed in Section \ref{S_Existence} below.
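The following Python sketch illustrates the structure of this check for a single face. It only samples the face and is therefore \emph{not} rigorous; in the actual computation, the sampling is replaced by an interval-arithmetic enclosure of the range of $F\cdot n$:
\begin{verbatim}
import numpy as np

def face_transversal(F, v0, v1, v2, m=20):
    """Heuristic check that the vector field F is transverse to the triangle
    (v0, v1, v2): the sign of F . n must not change over the face.
    The rigorous version bounds F . n with interval arithmetic instead
    of sampling."""
    n = np.cross(v1 - v0, v2 - v0)               # constant face normal
    signs = []
    for a in np.linspace(0, 1, m):
        for b in np.linspace(0, 1 - a, m):
            p = v0 + a * (v1 - v0) + b * (v2 - v0)
            signs.append(np.dot(F(p), n))
    signs = np.array(signs)
    return np.all(signs > 0) or np.all(signs < 0)
\end{verbatim}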
The final part of the algorithm is to iteratively update the location of the vertices by moving them towards each other in small steps along the fast direction. After each move we check that the transversality properties still hold; see Figure \ref{F_updateMfd}. This tightening step is stopped when no more vertices can be moved. Note that the vertices of all four triangulations (the critical manifold, the approximate slow manifold, and the two perturbed manifolds) have the same $(y,z)$ components.
\begin{figure}[h]
\begin{center}
(a)\includegraphics[width=0.28 \textwidth]{updateMfd1.eps}
(b)\includegraphics[width=0.28 \textwidth]{updateMfd2.eps}
(c)\includegraphics[width=0.28 \textwidth]{updateMfd3.eps}
\caption{Updating the enclosures of the slow manifold. This figure shows the projection on $(x,y)$ coordinates of: (a) the initial enclosures; (b) the enclosures updated vertex-wise, here with the first half of the vertices updated; (c) the new enclosure.}\label{F_updateMfd}
\end{center}
\end{figure}
\section{Existence of locally invariant manifolds}\label{S_Existence}
The method outlined in the previous section constructs two triangulated surfaces, in the phase space of a slow-fast system, that are transversal to the flow for the given $\varepsilon$. In this section we discuss the existence of locally invariant manifolds enclosed between these two triangulations. We denote the two enclosing surfaces by $\mathcal{L}$ and $\mathcal{R}$, and the region enclosed between them by $\mathcal{C}$. Note that $\mathcal{L}$ and $\mathcal{R}$ are graphs over the same compact region, so $\mathcal{C}$ is well defined. Specifically, if for some compact set of slow variables $D$, $\mathcal{L}=\{(x,y,z) \,: x=h_l(y,z), (y,z)\in D\}$, $\mathcal{R}=\{(x,y,z) \,: x=h_r(y,z), (y,z)\in D\}$, then $\mathcal{C}=\{(x,y,z) \,: x\in[h_l(y,z),h_r(y,z)], (y,z)\in D\}$. We would like to claim that there is a locally invariant manifold inside of $\mathcal{C}$ which is a graph over the slow variables. We must thus verify that it is possible to choose a subset of $\mathcal{C}$ which is a $C^1$ manifold, locally invariant, and whose projection onto the domain in the slow variables is bijective. We start this section by defining \textit{computable slow manifolds} as objects associated to a fixed $\varepsilon$. Similar to a slow manifold, a computable slow manifold is not unique. Informally, a computable slow manifold is a manifold close to the critical manifold where the flow is slow. We measure slowness by comparing the slopes of trajectories within our enclosure with the slope of the critical manifold. The relative slope is defined as a bound on the slope of trajectories divided by the slope of the critical manifold:
\begin{equation}\label{eq_relSlopeDef}
s(\varepsilon)=\max_{\mathcal{C}} \dfrac{\dfrac{|\dot x|}{|\dot y|+|\dot z|}}{\left(\left|\dfrac{\partial h_0(y,z)}{\partial y}\right|+\left|\dfrac{\partial h_0(y,z)}{\partial z}\right|\right)}.
\end{equation}
\begin{definition}
A computable slow manifold is a $C^1$ locally invariant, normally hyperbolic manifold, of the same dimension as the critical manifold, which projects injectively along the fast variable to the critical manifold, such that its relative slope satisfies $s(\varepsilon)\leq \frac{1}{\sqrt{\varepsilon}}.$
\end{definition}
Note that the order of $s(\varepsilon)$ is $O\left(\frac{1}{\varepsilon}\right)$ away from the slow manifold, so the definition is consistent with the standard perturbative definition of slow manifolds \cite{J95}. Slow manifolds are widely used in studies of slow-fast systems arising from biological or chemical models. However, computable slow manifolds are often the objects that are identified in applications: a locally invariant manifold at a \emph{fixed} value of epsilon that follows the critical manifold closely, and on which the flow is slow \cite{DKO08,GK09,I00}. This concept is captured by the definition of the computable slow manifold. Thus, our enclosures method gives a general and robust method to compute where candidates for such manifolds might lie in the phase space.
We will explain why computable slow manifolds exist within $\mathcal{C}$ in the following special case, which is sufficient for the purpose of this paper.
\begin{assumption}\label{A_Exist}
Assume that:
\begin{itemize}
\item[i] All trajectories of $\mathcal{C}$ reach its boundary in forward and backward time.
\item[ii] The boundary of $\mathcal{C}$, $\partial \mathcal{C}$ is piecewise smooth. Tangencies of the vector field with $\partial \mathcal{C}$ are quadratic (i.e., folds in the sense of singularity theory), and these tangencies occur along smooth curves that connect $\mathcal{L}$ and $\mathcal{R}$.
\item[iii] There are invariant horizontal and vertical cone fields on $\mathcal{C}$, and the vertical invariant cone field contains the fast direction of the vector field on $\mathcal{C}$.
\end{itemize}
\end{assumption}
Assume that the vector field is inward on $\mathcal{L}$ and $\mathcal{R}$ and denote by $\mathcal{C}_{in}$ and $\mathcal{C}_{out}$ the sets in $\partial \mathcal{C} - \mathcal{L} - \mathcal{R}$ where the vector field points inward and outward, respectively.
Choose a smooth curve, $x=r_0(y,z)$, in $\partial \mathcal{C} - \mathcal{L} - \mathcal{R}$ such that the projection of the curve to the slow variables contains the projection of $C_{in}$ to the slow variables, and points on the curve on $\mathcal{C}_{out}$ are images of the flow of points on the curve on $\mathcal{C}_{in}$. Flow this graph forward until each trajectory leaves $\mathcal{C}$. The set swept out by these trajectory segments is:
\begin{equation}\label{eq_hatSeps}
S_\varepsilon:=\{\phi_t(x,y,z) \,: x=r_0(y,z), \phi_t(x,y,z)\in \mathcal{C}\}.
\end{equation}
The set $S_\varepsilon$ is well-defined, as smooth as $r_0$ and the vector field, and diffeomorphic to its projection onto the critical manifold $S_0$. Inflowing trajectories of $\mathcal{C}$ must exit through $\mathcal{C}_{out}$. The exit time is uniformly bounded, since $\mathcal{C}$ is compact. Hence, $S_\varepsilon$ is well-defined. The existence of invariant cone fields, with a normal vertical cone field containing the fast direction, ensures that $S_\varepsilon$ is a graph over the slow variables, and thus diffeomorphic to the corresponding part of the critical manifold. The final requirement of the definition, that the relative slope is small, yields a quantitative requirement on the tightness of $\mathcal{L}$ and $\mathcal{R}$.
\section{Singular Hopf normal form}\label{S_SingHopfBif}
In a slow-fast system, an equilibrium point may cross a fold of the critical manifold. If it undergoes a Hopf bifurcation at $O(\varepsilon)$ distance from the fold in both parameter and phase space, we follow \cite{G08} and refer to this as a \textit{singular Hopf bifurcation}. Singular Hopf bifurcation occurs in generic one-parameter families of slow-fast systems.
We use a normal form for singular Hopf bifurcation in systems with one fast and two slow variables proposed by Guckenheimer \cite{G08} as an example system for the computations of slow manifolds presented in this paper. The normal form is given by
\begin{equation}\label{eq_singularHopfNormalForm}
\begin{split}
\varepsilon\,\dot{x}
& =
(y-x^2) \\
\dot{y} & =
z-x \\
\dot{z}& =
-\mu-a x -b y -c z,\\
\end{split}
\end{equation}
which depends upon the four parameters $\mu, a, b, c$ as well as $\varepsilon$. An $\varepsilon$-dependent scaling transformation eliminates $\varepsilon$ as a parameter: set
\begin{equation}\label{eq_rescaling}
(X,Y,Z,T) = (\varepsilon^{-1/2}x,\varepsilon^{-1} y,\varepsilon^{-1/2}z,\varepsilon^{-1/2}t){\textrm{ and }}(A,B,C)=(\varepsilon^{1/2}a,\varepsilon b,\varepsilon^{1/2}c)
\end{equation}
to obtain
\begin{equation}\label{eq_resc_shnf}
\begin{split}
X'
& =
Y-X^2 \\
Y' & =
Z-X \\
Z'& =
-\mu-A X -B Y -C Z\\
\end{split}
\end{equation}
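For later experimentation, the rescaled vector field~\eqref{eq_resc_shnf} and the parameter scaling~\eqref{eq_rescaling} can be coded directly; the following Python sketch is our own and purely illustrative:
\begin{verbatim}
import numpy as np

def singular_hopf(XYZ, mu, A, B, C):
    """Right-hand side of the rescaled singular Hopf normal form (eq_resc_shnf)."""
    X, Y, Z = XYZ
    return np.array([Y - X**2, Z - X, -mu - A*X - B*Y - C*Z])

def rescale_params(a, b, c, eps):
    """Parameter scaling (eq_rescaling): (A, B, C) = (eps^{1/2} a, eps b, eps^{1/2} c)."""
    return np.sqrt(eps) * a, eps * b, np.sqrt(eps) * c
\end{verbatim}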
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.8 \textwidth]{shb_phase_portrait.eps}
\caption{Phase space of system ~\eqref{eq_resc_shnf} when $(\mu,A,B,C)=(0.0015709, -0.05, 0.001, 0.1)$. A repelling sheet of the slow manifold is plotted in dark blue, an attracting sheet in cyan. A selection of trajectories in the unstable manifold of the equilibrium near the origin are plotted in red, a strip of unstable manifold that escapes from the fold region is shaded in magenta.}\label{F_shb_phase_space}
\end{center}
\end{figure}
Guckenheimer \cite{G08} studies invariant manifolds in the phase space of system~\eqref{eq_resc_shnf} at different system parameters:
The branch of the critical manifold $y=x^2$ where $x<0$ perturbs into a repelling slow manifold, while the branch where $x>0$ perturbs into an attracting slow manifold.
In large regions of parameter space, an equilibrium that has undergone singular Hopf bifurcation is a saddle-focus with a two-dimensional unstable manifold that is initially bounded by a periodic orbit. As the parameter $\mu$ is varied, the unstable manifold grows and eventually intersects the repelling slow manifold, first tangentially and then transversally.
In the following, we refer to such a tangential intersection of the equilibrium's unstable manifold with the repelling slow manifold as a \textit{tangency} or \textit{tangency of invariant manifolds}. Figure ~\ref{F_shb_phase_space} shows selected objects in the phase space of system ~\eqref{eq_resc_shnf} at a parameter just after the tangency. The tangency is a codimension 1 phenomenon. Note that since slow manifolds are not unique, there is some ambiguity in what it means for a slow manifold to intersect another manifold. We will introduce a definition to deal with this in section \ref{S_Tang}.
The tangency bifurcation is evident in the organization of phase space by invariant manifolds, as it separates regions in parameter space where trajectories in the unstable manifold of the singular Hopf equilibrium have different possible limit sets: after the tangency, trajectories can escape from the fold region, whereas, before the tangency, the unstable manifold is confined to the fold region. This change is significant in many other slow-fast systems: suppose that a system with one fast and two slow variables has an S-shaped critical manifold as well as a singular Hopf bifurcation followed by a tangency. The system introduced by Koper \cite{Koper, KrupaPopovicKopell, Kuehn, DGKKO} is an example. Figure ~\ref{F_Koper_phasespace_timeseries} (left pane) shows a trajectory in Koper's system just after the tangency bifurcation, together with the position of the critical manifold: the trajectory starts in the vicinity of the singular Hopf equilibrium and goes through a spiraling motion until it leaves the fold region to the left. It lands close to the critical manifold on an attracting slow manifold, which it follows to a fold before ``jumping'' to another attracting slow manifold and returning along this slow manifold back to the vicinity of the singular Hopf equilibrium point. The process now repeats, leading to a time series like the one shown on the right pane of Figure ~\ref{F_Koper_phasespace_timeseries}. Such patterns with alternating large-amplitude and small-amplitude oscillations are known as mixed mode oscillations in the literature ~\cite{DGKKO}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=1.0 \textwidth]{Koper_phasespace_timeseries.eps}
\caption{Mixed mode oscillations in Koper's system with system parameters $\varepsilon_1=0.1,\varepsilon_2 = 1, k=-10, \lambda = -7.50$: the left panel shows a trajectory in the $xy$-plane, where $x$ is a fast variable while $y$ is slow. The singular Hopf equilibrium is marked with a black dot, the critical manifold is drawn with a dashed black line. The right panel shows the time-series of the $x$-coordinate of the same trajectory. }\label{F_Koper_phasespace_timeseries}
\end{center}
\end{figure}
Guckenheimer and Meerkamp \cite{GM11} present a detailed analysis of the bifurcation structure of the singular Hopf normal form. This includes extensive numerical results on the position of the tangency bifurcation in the five-dimensional parameter space of the normal form. The position of the tangency curve was computed in two ways: (1) using the numerical continuation software AUTO \cite{AUTO} and (2) using custom MATLAB code.
In method (1), a boundary value problem is set up to track a trajectory segment that starts on a fixed ray in the linear approximation of the unstable manifold of the singular Hopf equilibrium point and follows the repelling slow manifold for a substantial period of time. The latter is achieved by requiring it to have a sufficiently long time-length and ending on the parabola $y=x^2+5$, thus forcing the trajectory to remain very close to the repelling slow manifold nearly to the end of the trajectory segment.
A tangency of invariant manifolds corresponds to a fold of a solution to this boundary value problem, i.e., a point where two solutions approach each other and vanish together as the parameter is varied. Such folds can be detected by AUTO in a one-parameter continuation and continued in two parameters, thus enabling detection and continuation of the tangency bifurcation. Method (2) is based on the heuristic that the tangency separates regions in parameter space where trajectories in the unstable manifold escape the fold region from regions where trajectories in the unstable manifold remain in the fold region. A grid of initial conditions in an approximation of a fundamental domain of the unstable manifold is integrated numerically for a sufficiently long time. If the sample trajectories in the unstable manifold limit to an attracting periodic orbit or another bounded attractor, the parameter is before the tangency. If at least one trajectory leaves the fold region, the parameter is after the tangency. Repeated applications of the above steps in an interval bisection method determine an approximate position of the tangency in parameter space. Note that neither method (1) nor (2) is rigorous. In particular, neither of the two methods establishes bounds for the position of the repelling slow manifold or of the unstable manifold of the equilibrium. Section ~\ref{S_Tang} presents a rigorous method to compute a tangency.
\section{Detailed description of the method for the singular Hopf normal form}\label{S_Method}
In this section we give a detailed description of our method for computing enclosures of slow manifolds, applying it to the system from Section \ref{S_SingHopfBif} as an example. Most of the details generalize to any system of the form (\ref{eq_slowFast_12}). In the description, we comment on nontrivial differences between the general case and the example at hand.
\subsection{Constructing the triangulation}\label{SS_Triangulation}
The first step of our algorithm is to triangulate a portion of the critical manifold $S_0$. On a normally hyperbolic piece of the critical manifold, $\partial_x f \neq 0$. The implicit function theorem implies there is locally a function $h_0(y,z)$, such that $f(h_0(y,z),y,z)=0$. In the singular Hopf normal form, $h_0$ is given explicitly as $h_0^\pm(y,z)=\pm \sqrt{y}$ with domain $D = [y_m,y_M]\times [z_m,z_M]\subset\mathbb{R}^2$. For other systems, any
suitable method for finding a sufficiently accurate approximation to $h_0(y,z)$ can be used.
To construct the vertices of a Delaunay triangulation of $S_0$, as shown in Figure \ref{F_triangulation}(a), we start with a
triangulation of the domain of $h_0$, but want the diameter of the triangles on $S_0$ to be almost uniform. Setting $\kappa(y,z)=\|\nabla h_0(y,z)\|$, $\tilde k=\sqrt{(y_M-y_m)^2+(z_M-z_m)^2}/d$, and
$$k(y,z):=\frac{\tilde k}{1+\kappa(y,z)},$$
with $d\in\mathbb{Z}_+$ to be chosen later, we select the following points in the $(y,z)$ plane as vertices of a triangulation:
\begin{eqnarray}\label{eq_vertexPts}
\nonumber(y_0,z_0) :=& (y_m,z_m) \\
\nonumber(y_i,z_0) :=& (y_{i-1}+k(y_{i-1},z_0),z_0) ,\quad & \textrm{if} \quad y_{i-1}<y_{i-1}+k(y_{i-1},z_0)<y_M, \\
(y_i,z_0) :=& (y_M,z_m) ,\quad & \textrm{if} \quad y_{i-1}<y_M\leq y_{i-1}+k(y_{i-1},z_0), \\
\nonumber(y_i,z_j) :=& (y_i,z_{j-1}+k(y_i,z_{j-1})) ,\quad & \textrm{if} \quad z_{j-1}<z_{j-1}+k(y_i,z_{j-1})<z_M, \\
\nonumber(y_i,z_j) :=& (y_i,z_M) ,\quad & \textrm{if} \quad z_{j-1}<z_M\leq z_{j-1}+k(y_i,z_{j-1}), \\
& 0\le i \le I, \, 0\le j(i)\le J_i
\end{eqnarray}
Note that these points are aligned along lines parallel to the fold curve $x=y=0$ where $\partial_x f=0$.
Let $\mathcal{T}$ denote the Delaunay triangulation generated by the set
$$
\{(y_i,z_j) : 0\le i \le I, 0\le j(i)\le J_i \},
$$
and $\mathcal{K}_0$ its lift to $S_0$, using the map $\pi_0^{-1}: (y,z) \mapsto (h_0(y,z),y,z)$.
Clearly $\pi_0^{-1}$ is a homeomorphism; the sets of vertices, edges, and faces of $\mathcal{K}_0$, denoted by $V(\mathcal{K}_0)$, $E(\mathcal{K}_0)$, and $F(\mathcal{K}_0)$, are thus defined by $\pi_0^{-1}(V(\mathcal{T}))$, $\pi_0^{-1}(E(\mathcal{T}))$, and $\pi_0^{-1}(F(\mathcal{T}))$, respectively. $\mathcal{T}$ and $\mathcal{K}_0$ are shown in Figures \ref{F_triangulation}(a) and \ref{F_triangulation}(b), respectively.
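As a concrete illustration, the following Python sketch (the implementation \cite{Progs} uses IntLab instead) generates the vertex set \eqref{eq_vertexPts} for the branch $h_0^+(y,z)=\sqrt{y}$, computes a Delaunay triangulation $\mathcal{T}$ in the $(y,z)$ plane, and lifts it to $\mathcal{K}_0$; the domain bounds and the value of $d$ are illustrative choices.
\begin{verbatim}
# Sketch: adapted vertex set (eq_vertexPts) for h_0^+(y,z) = sqrt(y), Delaunay
# triangulation in the (y,z) plane, and lift to the critical manifold S_0.
# The step size k(y,z) shrinks where the graph is steep, so that triangle
# diameters on S_0 stay roughly uniform.  Domain and d are illustrative.
import numpy as np
from scipy.spatial import Delaunay

y_m, y_M, z_m, z_M, d = 0.01, 0.2, -0.01, 0.01, 40

def h0(y, z):                 # critical manifold branch x = h_0^+(y,z)
    return np.sqrt(y)

def kappa(y, z):              # ||grad h_0|| = |dh_0/dy| here, since h_0 is independent of z
    return abs(0.5 / np.sqrt(y))

k_tilde = np.hypot(y_M - y_m, z_M - z_m) / d
def step(y, z):
    return k_tilde / (1.0 + kappa(y, z))

# march in y along the bottom edge, then in z along each column, as in (eq_vertexPts)
ys = [y_m]
while ys[-1] + step(ys[-1], z_m) < y_M:
    ys.append(ys[-1] + step(ys[-1], z_m))
ys.append(y_M)

vertices = []
for y in ys:
    zs = [z_m]
    while zs[-1] + step(y, zs[-1]) < z_M:
        zs.append(zs[-1] + step(y, zs[-1]))
    zs.append(z_M)
    vertices += [(y, z) for z in zs]

vertices = np.array(vertices)
T = Delaunay(vertices)                                                 # triangulation T
K0 = np.column_stack([h0(vertices[:, 0], vertices[:, 1]), vertices])   # lift to S_0
print(len(vertices), "vertices,", len(T.simplices), "faces")
\end{verbatim}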
\subsection{Constructing perturbed triangulations}
Our next step is to perturb $\mathcal{K}_0$, as illustrated in Figure \ref{F_critSlowMfd}, so that it lies closer to the slow manifold $S_\varepsilon$
we are trying to enclose.
Fenichel theory \cite{J95} guarantees that for $\varepsilon>0$ sufficiently small, $S_\varepsilon$ is the graph of a function $h_\varepsilon(y,z)$ with domain $D$ and $h_\varepsilon(y,z) - h_0(y,z) = O(\varepsilon)$. To compute triangulations $\mathcal{K}_\varepsilon$ that approximate $S_\varepsilon$, we write $h_\varepsilon$ in the form
$$
h_\varepsilon(y,z)=h_0(y,z)+\varepsilon h_1(y,z).
$$
Substituting into the equation $\varepsilon \dot x_\varepsilon=f(h_\varepsilon(y,z),y,z)$, we get that:
\begin{eqnarray}\label{eq_h1AssFor}
\nonumber f(h_0(y,z)+\varepsilon h_1(y,z)+O(\varepsilon^2),y,z)/\varepsilon & = & \partial_y (h_0(y,z)+\varepsilon h_1(y,z))\dot y+\partial_z(h_0(y,z)+\varepsilon h_1(y,z))\dot z \\
\nonumber & = & \partial_y h_0(y,z)\dot y+\partial_z h_0(y,z)\dot z+O(\varepsilon) \\
\nonumber & = & \partial_y h_0(y,z)g_y(h_0(y,z),y,z) \\
& &+\partial_z h_0(y,z)g_z(h_0(y,z),y,z)+O(\varepsilon)
\end{eqnarray}
To compute $\partial_y h_0$ and $\partial_z h_0$, we use that $f(h_0(y,z),y,z)=0$, and hence
$$
\partial_y h_0(y,z) = -\frac{\partial_y f(h_0(y,z),y,z)}{\partial_x f(h_0(y,z),y,z)},
$$
and
$$
\partial_z h_0(y,z) = -\frac{\partial_z f(h_0(y,z),y,z)}{\partial_x f(h_0(y,z),y,z)}.
$$
In addition, since $f(h_0(y,z),y,z)=0$,
\begin{equation}\label{eq_fAssFor}
f(h_\varepsilon(y,z),y,z)=\varepsilon\partial_x f(h_0(y,z),y,z) h_1(y,z)+O(\varepsilon^2).
\end{equation}
Thus, we can solve equation \eqref{eq_fAssFor} for $h_1(y,z)$, up to $O(\varepsilon)$, and substitute for $f(h_\varepsilon(y,z),y,z)$ using \eqref{eq_h1AssFor}, obtaining
$$
h_1(y,z) = -\frac{\partial_y f(h_0(y,z),y,z)g_y(h_0(y,z),y,z)+\partial_z f(h_0(y,z),y,z)g_z(h_0(y,z),y,z)}{\left(\partial_x f(h_0(y,z),y,z)\right)^2}+O(\varepsilon),
$$
which in our case, considering the branch $h_0^+(y,z)$, reads:
\begin{equation}\label{eq_h1Def}
h_1^+(y,z) = \frac{\sqrt{y}-z}{4y}.
\end{equation}
For the branch $h_0^-(y,z)$, which we will use in Section \ref{S_Tang}, we get:
\begin{equation}\label{eq_h1DefM}
h_1^-(y,z) = \frac{-\sqrt{y}-z}{4y}.
\end{equation}
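These expressions are easy to check symbolically. The following sympy sketch recovers \eqref{eq_h1Def} and \eqref{eq_h1DefM} from the general formula for $h_1$ above, assuming the components $f=y-x^2$, $g_y=z-x$, and $g_z=-\mu-ax-by-cz$ of the singular Hopf normal form (only $f$ and $g_y$ actually enter, since $\partial_z f=0$).
\begin{verbatim}
# Sketch (sympy): first-order correction h_1 on both branches of the critical manifold,
# assuming f = y - x^2, g_y = z - x, g_z = -mu - a*x - b*y - c*z.
import sympy as sp

x, z, mu, a, b, c = sp.symbols('x z mu a b c', real=True)
y = sp.symbols('y', positive=True)

f   = y - x**2
g_y = z - x
g_z = -mu - a*x - b*y - c*z

h1 = -(sp.diff(f, y)*g_y + sp.diff(f, z)*g_z) / sp.diff(f, x)**2

# evaluate on the two branches h_0^pm(y,z) = pm sqrt(y) of the critical manifold
print(sp.simplify(h1.subs(x,  sp.sqrt(y))))   # expected: (sqrt(y) - z)/(4*y)
print(sp.simplify(h1.subs(x, -sp.sqrt(y))))   # expected: (-sqrt(y) - z)/(4*y)
\end{verbatim}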
We put $\pi_\varepsilon^{-1}: (y,z) \mapsto (h_0(y,z)+\varepsilon h_1(y,z),y,z)$, and define:
\begin{equation*}
\mathcal{K}_\varepsilon := \pi^{-1}_\varepsilon \circ \pi_0(\mathcal{K}_0).
\end{equation*}
$\mathcal{K}_\varepsilon$ is our approximation to the slow manifold, shown together with $S_0$ in Figure \ref{F_critSlowMfd}(b). Heuristically, it lies within $O(\varepsilon^2)$ of $S_\varepsilon$ at the vertex points.
Let $\sigma_c$ denote the following map that moves points parallel to the $x$-axis:
\begin{equation}\label{eq_sigmaDef}
\sigma_c: (x,y,z) \mapsto \left(x+c\max\left(|h_1(y,z)|,\frac{\varepsilon^2}{|c|}\right),y,z\right).
\end{equation}
We define our candidate enclosing surfaces as:
\begin{eqnarray}
\mathcal{L}_{\varepsilon, N} & := & \sigma_{-\varepsilon/N}(\mathcal{K}_\varepsilon) \\
\mathcal{R}_{\varepsilon, N} & := & \sigma_{\varepsilon/N}(\mathcal{K}_\varepsilon),
\end{eqnarray}
where $N\in\mathbb{R}_+$. The initial choice for $N$ in our implementation was $N=64$, but we would have chosen a smaller $N$ if that had failed. The verification step of the algorithm includes a loop that divides $N$ by a factor $2$ upon failure and repeats the transversality test. Note that the region that is enclosed by $\mathcal{L}_{\varepsilon, N}$ and $\mathcal{R}_{\varepsilon, N}$ is disjoint from the critical manifold so long as $N>1$. The construction of $S_0$, $S_\varepsilon$, $\mathcal{L}_{\varepsilon, N}$ and $\mathcal{R}_{\varepsilon, N}$ is shown in Figure \ref{F_critSlowMfd}.
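In code, the construction of the candidate surfaces amounts to shifting the $x$-coordinates of the vertices of $\mathcal{K}_\varepsilon$ in opposite directions. A minimal sketch, using the explicit $h_0^+$ and $h_1^+$ from \eqref{eq_h1Def} and illustrative values of $\varepsilon$ and $N$:
\begin{verbatim}
# Sketch: x-coordinates of L_{eps,N} and R_{eps,N} at a few sample vertices, following
# the map sigma_c of (eq_sigmaDef) with c = -eps/N and c = +eps/N, respectively.
import numpy as np

yz = np.array([[0.05, 0.0], [0.10, 0.005], [0.20, -0.01]])      # sample (y,z) vertices
h0 = np.sqrt(yz[:, 0])                                          # h_0^+(y,z)
h1 = (np.sqrt(yz[:, 0]) - yz[:, 1]) / (4.0 * yz[:, 0])          # h_1^+(y,z), see (eq_h1Def)

eps, N = 1e-3, 64.0
offset  = (eps / N) * np.maximum(np.abs(h1), N * eps)   # |c| * max(|h_1|, eps^2/|c|)
x_K     = h0 + eps * h1                                 # x-coordinates of K_eps
x_left  = x_K - offset                                  # L_{eps,N}
x_right = x_K + offset                                  # R_{eps,N}
\end{verbatim}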
\subsection{Verifying the enclosure property}
To prove that a slow manifold is located between $\mathcal{L}_{\varepsilon,N}$ and $\mathcal{R}_{\varepsilon,N}$, it suffices to prove that the vector field $(\ref{eq_slowFast_12})$ is transversal to each face of the triangulations, with opposite crossing directions for $\mathcal{L}_{\varepsilon,N}$ and $\mathcal{R}_{\varepsilon,N}$. For the remainder of this subsection, we restrict our attention to a single triangle. Local transversality, i.e., the verification of transversality on each face in the triangulation, implies global transversality of $\mathcal{L}_{\varepsilon,N}$ and $\mathcal{R}_{\varepsilon,N}$.
Let $T$ be one face in $\mathcal{L}_{\varepsilon,N}$ or $\mathcal{R}_{\varepsilon,N}$. We denote its vertices by $v_1$, $v_2$, and $v_3$ and its edges by $e_{12}$, $e_{13}$, and $e_{23}$ with the edge $e_{ij}$ between the vertices $v_i$ and $v_j$. To verify that the vector field is transverse, it suffices to prove that the inner product between the normal of the face and the vector field is non-zero. Note that in contrast to most work on slow-fast systems, this condition, which is the main condition checked by our algorithm, becomes \textit{easier} to verify as $\varepsilon\rightarrow 0$. The reason is that as $\varepsilon\rightarrow 0$, the condition becomes essentially one-dimensional. We denote the normal to the face, normalized so that the first component is positive, by $n(T)$. This is possible because the first component is zero exactly at the folds, where the critical manifold fails to be normally hyperbolic. With this notation, the condition that we have to verify is
\begin{equation}\label{eq_transCond}
F(x,y,z)\cdot n(T) \neq 0, \quad \textrm{for all } (x,y,z)\in T.
\end{equation}
Condition (\ref{eq_transCond}) is equivalent to a verification that
\begin{equation}\label{eq_transCondFace}
F(\lambda_1 v_1+\lambda_2 v_2+\lambda_3 v_3)\cdot n(T) \neq 0 \quad \textrm{for all } \lambda_i\in[0,1], \lambda_1+\lambda_2+\lambda_3=1,
\end{equation}
which is an enclosure of the range of a function on a compact domain. This problem is the one we solve with interval analysis. Directly enclosing (\ref{eq_transCondFace}) using interval analysis in order to verify that the function is non-zero is, however, not optimal. The reason is that the problem is sufficiently sensitive that we would have to split the $\lambda_i$ domains into a very fine subdivision, and since this has to be done on each face, such a procedure would be prohibitively slow.
Our actual approach is based on monotonicity; first we prove that $F\cdot n$ is monotone on the face and on its restriction to the edges. Then we compute $F(v_i)\cdot n$ for the three vertices and verify that the interval hull of the results, i.e., the smallest representable interval containing the results, does not contain $0$. Note that this amounts to showing that the dot-product does not change sign on the face. We introduce
\begin{equation}\label{eq_SDef}
G := \nabla (F\cdot n).
\end{equation}
If $G\neq (0,0,0)$ on all of $T$ then $F\cdot n$ has no critical points inside of $T$ and we can restrict our attention to the edges, i.e. the boundary of $T$. Consider an edge $e_{ij}=\{(1-\lambda)v_i+\lambda v_j \,:\,\lambda\in[0,1] \}$, and denote its parametrization by $r(\lambda)$. The scalar product $F\cdot n$ is monotone on the edge if
$$
0 \neq \frac{\partial}{\partial \lambda} (F(r(\lambda))\cdot n) =
G\cdot (v_j-v_i).
$$
Hence, we arrive at the monotonicity requirements, which for the case at hand are much easier to verify than \eqref{eq_transCondFace}:
\begin{eqnarray}
(0,0,0) &\notin& G(T) \label{eq_transEq1}\\
0 &\notin& G(e_{12}) \cdot (v_2-v_1) \label{eq_transEq2}\\
0 &\notin& G(e_{13}) \cdot (v_3-v_1) \label{eq_transEq3}\\
0 &\notin& G(e_{23}) \cdot (v_3-v_2) \label{eq_transEq4}
\end{eqnarray}
If the conditions (\ref{eq_transEq1}-\ref{eq_transEq4}) are satisfied we compute
\begin{equation}\label{eq_vertexTrans}
F(v_1)\cdot n \sqcup F(v_2)\cdot n \sqcup F(v_3)\cdot n,
\end{equation}
where $\sqcup$ denotes the interval hull. If (\ref{eq_vertexTrans}) does not contain zero, then the vector field is transversal to the face $T$. If (\ref{eq_transEq1}) holds but one or more of (\ref{eq_transEq2}-\ref{eq_transEq4}) do not hold, then we add the appropriate $F(e_{ij})\cdot n$ terms to (\ref{eq_vertexTrans}).
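The sketch below illustrates this test on a single face in Python with a naive interval class (ordinary floating point without outward rounding, unlike the IntLab arithmetic used in our implementation \cite{Progs}); the vector field is written as $F=((y-x^2)/\varepsilon,\,g_y,\,g_z)$ with the parameter values of the experiments in Section~\ref{S_NumRes}. One simplification: when a monotonicity condition cannot be verified, the sketch conservatively reports failure instead of falling back to enclosing $F\cdot n$ on the face or on the offending edges as described above.
\begin{verbatim}
# Illustrative (non-rigorous) transversality test on one triangular face.
import numpy as np

class Iv:                                    # naive interval arithmetic, no outward rounding
    def __init__(self, lo, hi=None):
        self.lo, self.hi = float(lo), float(lo if hi is None else hi)
    def __add__(s, o):
        o = iv(o); return Iv(s.lo + o.lo, s.hi + o.hi)
    def __neg__(s):
        return Iv(-s.hi, -s.lo)
    def __sub__(s, o):
        return s + (-iv(o))
    def __mul__(s, o):
        o = iv(o); p = (s.lo*o.lo, s.lo*o.hi, s.hi*o.lo, s.hi*o.hi)
        return Iv(min(p), max(p))
    __radd__, __rmul__ = __add__, __mul__
    def has_zero(s):
        return s.lo <= 0.0 <= s.hi

def iv(o): return o if isinstance(o, Iv) else Iv(o)
def hull(items):
    items = [iv(p) for p in items]
    return Iv(min(p.lo for p in items), max(p.hi for p in items))

eps = 1e-3                                   # parameter values as in the experiments below
mu, a, b, c = 1e-2, -0.05/np.sqrt(1e-3), 0.001/1e-3, 0.1/np.sqrt(1e-3)

def F(v):                                    # vector field ((y-x^2)/eps, g_y, g_z) at a vertex
    x, y, z = v
    return ((y - x*x)/eps, z - x, -mu - a*x - b*y - c*z)

def G(X, n):                                 # gradient of F.n; only d/dx depends on x here
    return (X*(-2.0/eps)*n[0] - n[1] - a*n[2],
            Iv(n[0]/eps - b*n[2]),
            Iv(n[1] - c*n[2]))

def face_transversal(v1, v2, v3):            # v1, v2, v3: numpy arrays (x, y, z)
    n = np.cross(v2 - v1, v3 - v1)
    if n[0] < 0.0: n = -n                    # normalize so the first component is positive
    X = Iv(min(v[0] for v in (v1, v2, v3)), max(v[0] for v in (v1, v2, v3)))
    if all(g.has_zero() for g in G(X, n)):   # (eq_transEq1) cannot be verified
        return False
    for vi, vj in ((v1, v2), (v1, v3), (v2, v3)):        # (eq_transEq2)-(eq_transEq4)
        Xe = Iv(min(vi[0], vj[0]), max(vi[0], vj[0]))
        Ge, d = G(Xe, n), vj - vi
        if (Ge[0]*d[0] + Ge[1]*d[1] + Ge[2]*d[2]).has_zero():
            return False                     # full algorithm: enclose F.n over this edge instead
    Fn = [sum(fi*ni for fi, ni in zip(F(v), n)) for v in (v1, v2, v3)]
    return not hull(Fn).has_zero()           # (eq_vertexTrans)
\end{verbatim}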
\subsection{Improving the bounds}\label{SS_ImpBound}
If the previous steps of the algorithm are successful, they yield two surfaces $\mathcal{L}_{\varepsilon,N}$ and $\mathcal{R}_{\varepsilon,N}$, that have been proven to enclose the part of the slow manifold that is above $[y_m,y_M]\times[z_m,z_M]$ in the $(y,z)$ plane. Since $N$ is fixed after the verification step we henceforth drop the indices on $\mathcal{L}$ and $\mathcal{R}$. Our aim is to produce enclosures that are as tight as possible, given the mesh size. We, therefore, try to improve the enclosure. The procedure is illustrated in Figure \ref{F_updateMfd}.
We do this by iteratively updating each pair of corresponding vertices of the two triangulations, moving them towards each other along the segment joining them. This segment is parallel to the $x$-axis due to our earlier constructions. The moves are done in two steps: (1) a tentative move of a vertex is made, and (2) the transversality conditions of all faces attached to this vertex are verified. When the transversality holds, the vertex is fixed at its new position and we proceed to the next vertex. The efficiency of this procedure will depend on several factors, primarily the ordering of the vertices and how much the vertices are moved. By moving a vertex only a fraction of what seems to be possible, the effect of the ordering of the vertices can be minimized. The penalty of smaller updates is that the procedure has to be run more times. Larger moves might be possible if an appropriate sorting algorithm were used, but we have not found an effective and efficient sorting criterion. Instead, we heuristically determine an update factor that balances accuracy against computational cost. Given a right vertex, $v_R$, and a left vertex, $v_L$, such that $\pi_0(v_R)=\pi_0(v_L)$, we move each of them towards each other by an amount
\begin{equation}\label{eq_update}
\frac{1}{8}\|(v_R-v_L)\|.
\end{equation}
We run the procedure to refine the enclosures of the slow manifold several times, until no further improvement is possible. The quantity we use to measure the quality of the enclosures is the root mean square distance between the two triangulations at the vertices. Let $\iota$ denote the number of vertices of the triangulations; by construction $\mathcal{L}$ and $\mathcal{R}$ have the same number of vertices, edges, and faces. The only difference between $\mathcal{L}$ and $\mathcal{R}$ is the values of the $x$-coordinates. Denoting corresponding vertices of $\mathcal{L}$ and $\mathcal{R}$ by $v_{L,i}$ and $v_{R,i}$, $1\le i\le \iota$, we put
\begin{equation}\label{D_eta}
\eta(\mathcal{L},\mathcal{R})= \frac{1}{\sqrt{\iota}} \left(\sum_{i=1}^{\iota}\|v_{R,i}-v_{L,i}\|^2\right)^{1/2}.
\end{equation}
If the triangulation is fine enough, $\eta$ will be $O(\varepsilon^2)$. This fact is investigated numerically in Section \ref{S_NumRes}.
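A schematic version of this refinement loop, in Python, is sketched below; the face-wise transversality test of the previous subsection is abstracted into a callback \texttt{ok(i, x\_left, x\_right)}, a hypothetical stand-in for checking all faces adjacent to vertex $i$ for both surfaces.
\begin{verbatim}
# Sketch of the tightening loop: tentatively move each left/right vertex pair towards
# each other by 1/8 of its gap, as in (eq_update), keep a move only if the adjacent
# faces remain transversal, and stop when eta of (D_eta) no longer improves.
import numpy as np

def tighten(x_left, x_right, ok, frac=0.125):
    # x_left, x_right: arrays of x-coordinates of corresponding vertices of L and R
    x_left = np.asarray(x_left, dtype=float).copy()
    x_right = np.asarray(x_right, dtype=float).copy()
    iota = len(x_left)
    eta = np.linalg.norm(x_right - x_left) / np.sqrt(iota)
    while True:
        for i in range(iota):
            old = x_left[i]
            x_left[i] = old + frac * (x_right[i] - old)       # tentative move of left vertex
            if not ok(i, x_left, x_right):
                x_left[i] = old                               # revert if transversality fails
            old = x_right[i]
            x_right[i] = old - frac * (old - x_left[i])       # tentative move of right vertex
            if not ok(i, x_left, x_right):
                x_right[i] = old
        eta_new = np.linalg.norm(x_right - x_left) / np.sqrt(iota)
        if eta_new >= eta:                                    # no further improvement
            return x_left, x_right, eta_new
        eta = eta_new
\end{verbatim}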
\subsection{Cone fields}
In order to ensure that there are manifolds inside of the set $\mathcal{C}$ enclosed by $\mathcal{L}$ and $\mathcal{R}$, we need to have invariant cone fields on $\mathcal{C}$, as introduced in Section \ref{S_Existence}. In this subsection we describe how such cone fields - one horizontal and one vertical - are constructed. Recall, see \cite{KH95}, that a standard horizontal or vertical cone for a phase space with variables $(x,y)$ is a set $\{\gamma\|x\|\geq \|y\|\}$ or $\{\gamma\|y\|\geq \|x\|\}$, respectively, and that a cone is the image of a standard cone under an invertible linear map. Equivalently, a cone is the set of points where a non-degenerate indefinite quadratic form is non-negative. Since horizontal and vertical cones are traditionally in the expanding and contracting directions, respectively, we will call the cone in the normal direction the vertical cone, and the cone in the direction of the slow manifold the horizontal cone. Also recall that a cone field is invariant if it is mapped into itself by the derivative of the dynamics, i.e., if the set where the quadratic form is non-negative is mapped by the derivative into the set where the quadratic form at the image point under the map is non-negative.
For the case at hand we will use $\gamma=1$ for both the horizontal and vertical cones in an appropriate coordinate system, such that the normal direction is in the vertical cone. A cone field is a map that associates a cone to each point of its domain. Given that \eqref{eq_resc_shnf} only has one nonlinear component, we will use constant cone fields. To prove that the cone fields are invariant, we solve the variational equation for the time $0.0004$ flow map, and use the eigendirections of the derivative of the flow as a basis, in which we represent the standard horizontal and vertical cones with $\gamma=1$. We verify that the vertical and horizontal cone fields are invariant, and that the vertical cone contains the fast direction, which ensures that $\hat S_\varepsilon$ defined in \eqref{eq_hatSeps} projects injectively onto the slow variables, and, thus, is a graph over them. The flow time needs to be large enough for us to be able to prove the separation of the horizontal and vertical directions, but small enough that we do not move away too far in phase space. The value $0.0004$ turned out to be a good choice.
\subsection{Algorithms}\label{SS_Algorithm}
An implementation \cite{Progs} of the method described above has been made using the IntLab package \cite{IntLab} for interval arithmetic. A detailed description of the main algorithm is given as Algorithm \ref{mainAlgorithm}. The algorithm that checks if the vector field is transversal to a face is given as Algorithm \ref{transversalityAlgorithm}. Algorithm \ref{mainAlgorithm} takes a triangulation as input. That triangulation can be computed with any method, not necessarily the one outlined in Section \ref{SS_Triangulation}. In Algorithm \ref{transversalityAlgorithm} the function $sign(x)$ returns $0$ if $0\in x$.
\begin{small}
\begin{algorithm}[ph]\label{mainAlgorithm}
\KwData{$(f,g_y,g_z)$, $h_0$, $\mathcal{T}$, $\varepsilon$}
\KwResult{$\mathcal{L}$, $\mathcal{R}$, $\eta$}
\ForAll{$(y,z)\in \mathcal{T}$}{$h_1(y,z) = -\frac{\partial_y f(h_0(y,z),y,z)g_y(h_0(y,z),y,z)+\partial_z f(h_0(y,z),y,z)g_z(h_0(y,z),y,z)}{\left(\partial_x f(h_0(y,z),y,z)\right)^2}$\;
}
$N=64$\;
transversal=false\;
$NF=\mathcal{T}.numberOfFaces$\;
\While{$\neg transversal$ \& $N>2^{-18}$}{
$x_{left}=h_0(y,z)+h_1(y,z)-\varepsilon/N |h_1(y,z)|$\;
$x_{right}=h_0(y,z)+h_1(y,z)+\varepsilon/N |h_1(y,z)|$\;
\eIf{$getTransversality(\mathcal{T},x_{left})=-getTransversality(\mathcal{T},x_{right})=NF$}
{transversal=true\;}
{$N=N/2$\;}
}
\If{$\neg transversal$}
{exit(FAIL)\;}
$\eta=1$\;
$\eta_{new}=0$\;
\While{$\eta_{new}<\eta$}{
$\eta=\frac{\|x_{left}-x_{right}\|}{\sqrt{T.\iota}}$\;
$\tilde x_{left}=x_{left}$, $\tilde x_{right}=x_{right}$\;
\ForAll{$1\leq i\leq \iota$}{
$tri=\mathcal{T}.adjacentFaces(i)$\;
$\tilde x_{left}(i)=x_{left}(i)+0.125(x_{right}(i)-x_{left}(i))$\;
\eIf{$getTransversality(tri,\tilde x_{left},T.y,T.z)=-getTransversality(tri,x_{right},T.y,T.z)=tri.numberOfFaces$}{$x_{left}(i)=\tilde x_{left}(i)$\;}{$\tilde x_{left}(i)=x_{left}(i)$\;}
$\tilde x_{right}(i)=x_{right}(i)-0.125(x_{right}(i)-x_{left}(i))$\;
\eIf{$getTransversality(tri,x_{left},T.y,T.z)=-getTransversality(tri,\tilde x_{right},T.y,T.z)=tri.numberOfFaces$}{$x_{right}(i)=\tilde x_{right}(i)$\;}{$\tilde x_{right}(i)=x_{right}(i)$\;}
}
$\eta_{new}=\frac{\|x_{left}-x_{right}\|}{\sqrt{T.\iota}}$\;
}
$\mathcal{L}=Triangulate(\mathcal{T}.Triangulation,x_{left},\mathcal{T}.y,\mathcal{T}.z)$\;
$\mathcal{R}=Triangulate(\mathcal{T}.Triangulation,x_{right},\mathcal{T}.y,\mathcal{T}.z)$\;
\caption{Implementation of the main algorithm}
\end{algorithm}
\end{small}
\begin{small}
\begin{algorithm}[h]\label{transversalityAlgorithm}
\KwData{$F=(f,g_y,g_z)$, $\mathcal{T}$(Triangulation,Vertices)}
\KwResult{$Intersections$}
$NF=\mathcal{T}.numberOfFaces$\;
$Intersections=0$\;
\ForAll{$1\leq i\leq NF$}{
$n=\mathcal{T}.Normal(i)$\;
$(v_{1},v_{2},v_{3})=\mathcal{T}.Vertices(i)$\;
$(e_{12},e_{13},e_{23})=\mathcal{T}.Edges(i)$\;
$G=\nabla(F(\mathcal{T}.Face(i))\cdot n)$\;
\eIf{$0\in G$}
{$Intersections+=sign(F(\mathcal{T}.Face(i))\cdot n)$\;}
{$G_{12}=\nabla(F(e_{12})\cdot n)\cdot e_{12}$,
$G_{13}=\nabla(F(e_{13})\cdot n)\cdot e_{13}$,
$G_{23}=\nabla(F(e_{23})\cdot n)\cdot e_{23}$\;
\eIf{$0\notin G_{12}G_{13}G_{23}$}{
$Intersections+=sign(F(v_1)\cdot n \sqcup F(v_2)\cdot n \sqcup F(v_3)\cdot n)$\;
}{
\ForAll{$a\in\{12,13,23\}$}{
\eIf{$0\in G_a$}
{$F_a=F(e_a)\cdot n$}
{$F_a=F(v_{a_1})\cdot n \sqcup F(v_{a_2})\cdot n$}
}
$Intersections+=sign(F_{12}\sqcup F_{13}\sqcup F_{23})$\;
}
}
}
\caption{getTransversality(Triangulation,Vertices)}
\end{algorithm}
\end{small}
\section{Numerical Results}\label{S_NumRes}
In this section we describe the results of several experiments illustrating the behavior of the enclosure computations. Given a system and a domain, there are two numbers that can be changed: the number $d$, which controls the mesh size, and the value of $\varepsilon$. In the experiments below, we use the normal form \eqref{eq_singularHopfNormalForm} for the singular Hopf bifurcation discussed in Section~\ref{S_SingHopfBif}. We choose the same values of the constants as in the first part of \cite{GM11}: $\mu=10^{-2}$, $A=-0.05$, $B=0.001$, and $C=0.1$. We enclose the branch of the critical manifold $\{y=x^2\}$ with $x>0$. The results of four experiments are described below; in each of them we present the results as a plot of $\eta$ vs $\varepsilon$. In the first experiment, we fix the domain as a small strip: $y\in[0.01,0.2]$, $z\in[-0.01,0.01]$, and give the results for several values of $\iota$ (defined implicitly by changing $d$). In the second, we take a square domain, $y\in[0.01,0.2]$, $z\in[-0.095,0.095]$, for comparison. Our third example analyzes the effect and usefulness of the tightening step of Section \ref{SS_ImpBound}. In our fourth example, we investigate the heuristic constant $8$ in the denominator of (\ref{eq_update}); the domain and constants are from the first example with its finest mesh. Note that our domains are such that $\dot y <0$, which means that the assumptions from Section \ref{S_Existence} are satisfied, i.e., all trajectories with initial conditions in $\mathcal{C}$ leave in both forward and backward time, and tangencies of the vector field with $\partial \mathcal{C}$ occur along a plane where they have quadratic tangency.
During the computations we use the function $G$ defined in (\ref{eq_SDef}) to prove the monotonicity properties that enable us to efficiently prove transversality. We note that for the example at hand, $G$ is
\begin{center}
$
\left(\begin{array}{c}
-\frac{2x}{\varepsilon}n_x-n_y+\frac{0.05}{\sqrt{\varepsilon}}n_z\\
\frac{1}{\varepsilon}n_x-\frac{0.001}{\varepsilon} n_z \\
n_y-\frac{0.1}{\sqrt{\varepsilon}}n_z
\end{array}\right).
$
\end{center}
A trivial calculation shows that $G=(0,0,0)$ if and only if $x=-25\sqrt{\varepsilon}$ and $n$ is a multiple of $(1,\frac{100}{\sqrt{\varepsilon}},1000)$, so monotonicity always holds on the right branch of the critical manifold.
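This calculation is easily reproduced symbolically; a short sympy sketch, with the constants of this section written as exact rationals:
\begin{verbatim}
# Sketch (sympy): G = (0,0,0) forces x = -25*sqrt(eps) and n ~ (1, 100/sqrt(eps), 1000),
# for the parameter values A = -0.05, B = 0.001, C = 0.1 used in this section.
import sympy as sp

x, nx, ny = sp.symbols('x n_x n_y', real=True)
eps = sp.symbols('eps', positive=True)
nz = 1000                                   # fix the projective scaling of the normal

G = [(-2*x/eps)*nx - ny + sp.Rational(1, 20)/sp.sqrt(eps)*nz,
     nx/eps - sp.Rational(1, 1000)/eps*nz,
     ny - sp.Rational(1, 10)/sp.sqrt(eps)*nz]

print(sp.solve(G, [x, nx, ny], dict=True))
# expected: [{n_x: 1, n_y: 100/sqrt(eps), x: -25*sqrt(eps)}]
\end{verbatim}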
\subsection{Varying $\iota$}\label{SS_Iota}
The convergence rate of the enclosures at the vertex points should ideally be $O(\varepsilon^2)$, since we have corrected for the linear term in the asymptotic expansion of $h_\varepsilon$. Our interpolating surfaces between the vertex points are, however, linear. The discretization size thus puts a curvature dependent restriction on the tightness of the enclosure.
In Figure \ref{F_varyIota}(a), we illustrate how $\eta$, for different values of $\iota$, first decreases but then reaches a plateau. Looking at $\eta$ as a function of $\varepsilon$, we see that as the mesh size decreases ($\iota$ increases), $\eta$ is approximately proportional to $\varepsilon^2$, as expected. This gives a heuristic picture of how $\eta$ depends on $\varepsilon$: at first there is a regime of quadratic convergence, where the accuracy depends on $\varepsilon$, while at the end the accuracy oscillates around some fixed value and depends on the mesh size. In the intermediate region, the accuracy depends both on the ratio of time scales and on the mesh size. In this region, the exponent will decrease from $2$ to $0$. Figure \ref{F_varyIota}(b) illustrates the quadratic convergence region for the finest mesh size from Figure \ref{F_varyIota}(a).
As the plateau is reached, $s(\varepsilon)$, defined in \eqref{eq_relSlopeDef}, starts to increase. For $\varepsilon=0.1$ the enclosure is too wide for all trajectories inside to be slow. In Table \ref{T_varyIotaSlopes} we give the slopes on the $\varepsilon$ interval $[10^{-4},10^{-1}]$ and bounds on the intervals where $\sqrt{\varepsilon}\,s(\varepsilon)\leq 1$, for the various $\iota$ values from Figure \ref{F_varyIota}(a). We are only able to prove that the cone fields are invariant for $\varepsilon\leq 10^{-1.94}$, which means that for $\varepsilon>10^{-1.94}$ the normal hyperbolicity is too weak for the algorithm to work. Thus, for the finest mesh size, we prove that the computable slow manifold exists for $10^{-6}\leq\varepsilon\leq10^{-1.94}$. Finer meshes would prove the existence for smaller values of $\varepsilon$.
\begin{figure}[h]
\begin{center}
\psfrag{E}{$-\log_{10}\varepsilon$}
\psfrag{N}{$\log_{10}\eta$}
(a)\includegraphics[width=0.45 \textwidth]{varyIota.eps}
(b)\includegraphics[width=0.45 \textwidth]{highIotaEps1_4.eps}
\caption{(a) $\log_{10}\eta$ vs $-\log_{10}\varepsilon$ for the various values of $\iota$ specified in Table \ref{T_varyIotaSlopes}. (b) Zoom in on $-\log_{10}\varepsilon\in[1,4]$ for the value $\iota=162190$. The least squares approximation of the slope in the steepest part ($-\log_{10}\varepsilon\in[2,3.5]$) is $-2.14$, on the whole interval $[1,4]$ it is $-1.89$.}\label{F_varyIota}
\end{center}
\end{figure}
\begin{table}[h]
\begin{center}
\begin{tabular}{c|ccccccc}
$\iota$ & 1200 & 4662 & 18236 & 40805 & 72239 & 112736 & 162190 \\ \hline
$Slope$ & -1.40 & -1.58 & -1.70 & -1.76 & -1.82 & -1.86 & -1.89 \\ \hline
$\max -\log_{10}\varepsilon$ & 4 & 4.5 & 5 & 5& 6 & 6 & 6
\end{tabular}
\caption{The second row gives the least squares approximations of the slopes of $\log_{10}\eta(-\log_{10}\varepsilon)$ on the domain $-\log_{10}\varepsilon\in[1,4]$, for some different values of $\iota$. The third row gives the maximum value of $-\log_{10}\varepsilon$ for which the flow is slow, i.e., $s(\varepsilon)\leq \frac{1}{\sqrt{\varepsilon}}$.}
\label{T_varyIotaSlopes}
\end{center}
\end{table}
\subsection{Larger domain}\label{SS_LargeDomain}
In this subsection, we redo the experiment above for a square domain, so that there are roughly the same number of triangles in the $y$ and $z$ directions, rather than only a couple of faces in each $\{y=const\}$ slice as in Subsection \ref{SS_Iota}. The resulting $\eta$ vs $\varepsilon$ graph is given as Figure \ref{F_bigDomain}. We see that the results correspond to the coarser meshes in Figure \ref{F_varyIota}(a), which is natural, since a larger domain would require a larger number of faces. This illustrates that the results in Subsection \ref{SS_Iota} do not depend on the specific thin slice in the $z$-direction that we chose to study. For the two discretization sizes in Figure \ref{F_bigDomain} we have $\sqrt{\varepsilon}\,s(\varepsilon)\leq 1$ for $\varepsilon\geq 10^{-4}$ and $\varepsilon\geq 10^{-5}$, respectively. We are only able to prove that the cone fields are invariant for $\varepsilon\leq 10^{-2.09}$, which means that for $\varepsilon>10^{-2.09}$ the normal hyperbolicity is too weak for the algorithm to work. Thus, for the finest mesh size, we prove that the computable slow manifold exists for $10^{-5}\leq\varepsilon\leq10^{-2.09}$.
\begin{figure}[h]
\begin{center}
\psfrag{E}{$-\log_{10}\varepsilon$}
\psfrag{N}{$\log_{10}\eta$}
\includegraphics[width=0.45 \textwidth]{bigDomain.eps}
\caption{$\log_{10}\eta$ vs $-\log_{10}\varepsilon$ for the values $\iota=21810$ and $\iota=194396$. The least squares approximations of the slopes on the interval $[2,4]$ are $-1.27$ and $-1.52$, respectively.}\label{F_bigDomain}
\end{center}
\end{figure}
\subsection{The effect of the tightening step}\label{SS_Tight}
The tightening step is the slow part of the algorithm, and our program spends the vast majority of its computing time performing this step. It is therefore interesting to compare the results of a fast version of the algorithm, without the tightening step, to those of the full algorithm. We run the example from Section \ref{SS_Iota}, with the highest precision ($\iota=162190$), and compare the results. The $\eta$ vs $\varepsilon$ graph of the results is given as Figure \ref{F_with_OTightening}. In this example, the program spends $92.7\%$ of the computing time performing the tightening step. The total computing time in this case was $1526$ seconds on a 3.2 GHz Dual-Core AMD Opteron.
For the example at hand, the extra effort of the tightening step might therefore not be worthwhile for all applications. We do need it, however, for the application in Section \ref{S_Tang}.
\begin{figure}[h]
\begin{center}
\psfrag{E}{$-\log_{10}\varepsilon$}
\psfrag{N}{$\log_{10}\eta$}
\includegraphics[width=0.45 \textwidth]{with_OTightening.eps}
\caption{$\log_{10}\eta$ vs $-\log_{10}\varepsilon$ for the value $\iota=162190$, with and without the tightening step of the algorithm.}\label{F_with_OTightening}
\end{center}
\end{figure}
\subsection{Varying the improvement rate}\label{SS_ImpRate}
Our method contains a choice of the heuristic constant in the denominator of equation (\ref{eq_update}) that regulates the aggressiveness of the tightening step. In this subsection, we present a study on how the results depend on this choice. We use the same model as above and the domain and finest mesh size ($\iota = 162190$) from Subsection \ref{SS_Iota}. For the purpose of this study, we denote the denominator of equation (\ref{eq_update}) by $l$. In Figure \ref{F_varyQuota} we display the results for $l=4,6,8$. For larger values of $l$, the results are virtually indistinguishable from the $l=8$ case. Typically, the updates mostly occur for smaller values of $\varepsilon$. The reason is that for sufficiently small values of $\varepsilon$ the vector field is almost equal to the layer equation, which makes the transversality condition almost trivial. Therefore, less smooth triangulations will still work, and the updates will not violate the transversality conditions.
\begin{figure}[h]
\begin{center}
\psfrag{E}{$-\log_{10}\varepsilon$}
\psfrag{N}{$\log_{10}\eta$}
\includegraphics[width=0.45 \textwidth]{varyQuota.eps}
\caption{$\log_{10}\eta$ vs $-\log_{10}\varepsilon$ for the value $\iota=162190$, for updates with $\|(v_R-v_L)\|$ divided by $4$, $6$, and $8$.}\label{F_varyQuota}
\end{center}
\end{figure}
\section{Tangencies}\label{S_Tang}
In this section we give a proof that the singular Hopf normal form, given by \eqref{eq_resc_shnf} and used here with $(A,B,C)=(-0.07,0.001,0.16)$, undergoes a tangency bifurcation between the unstable manifold of the saddle equilibrium and the repelling slow manifold. We will often refer to these manifolds simply as the unstable manifold and the slow manifold, denoted by $W^u_\mu$ and $S_\mu^r$, respectively. By a slow manifold for the rescaled system we mean the image of a computable slow manifold for some $\varepsilon$ under the map \eqref{eq_rescaling}.
Recall that (computable) slow manifolds are not unique. We therefore need to define what we mean by tangency, since if one choice of computable slow manifold is tangential, there will be other choices where the intersection is transversal. The natural setting is therefore to define when a one parameter family of slow manifolds is tangential to another manifold or family of manifolds.
\begin{definition}
A smooth one parameter family of manifolds, $\{M_\mu\}_{\mu\in[\mu_0,\mu_1]}$, intersects a one parameter family of families of computable slow manifolds $\{C_\mu\}_{\mu\in[\mu_0,\mu_1]}$ tangentially if for each choice of a smooth one parameter family of computable slow manifolds $\{S_\mu\}_{\mu\in[\mu_0,\mu_1]}$, $S_\mu\in C_\mu$, there is a value of $\mu\in(\mu_0,\mu_1)$ such that $S_\mu$ and $M_\mu$ intersect tangentially.
\end{definition}
In our proof, we compute one enclosing region $\mathcal{C}$ that satisfies the requirements from Section \ref{S_Existence} for all values of the parameter $\mu$ that appear in the proof. However, the computable slow manifolds might change with the parameter, since they are defined using \eqref{eq_hatSeps}. We prove that the one parameter family of unstable manifolds $W^u_\mu(p_\mu)$ moves through this fixed enclosing region, and that, as the family passes through $\mathcal{C}$, it always has to have a tangential intersection with at least one of the computable slow manifolds, regardless of how the smooth one parameter family of computable slow manifolds inside of $\mathcal{C}$ was chosen.
\begin{theorem}\label{T_tangential}
For $0<\varepsilon\leq 10^{-3}$, the singular Hopf normal form \eqref{eq_singularHopfNormalForm} undergoes a tangential bifurcation between a computable slow manifold and the unstable manifold of the equilibrium. The bifurcation occurs for $\mu$ in the interval $[\mu_0,\mu_1]=[0.00454,0.004553]$, with fixed parameters $(A,B,C)=(-0.07, 0.001,0.16)$.
\end{theorem}
The main argument in the proof of Theorem \ref{T_tangential} is illustrated in Figure \ref{F_finalBox}. We consider the intersections of $S_\mu^r$ and $W^u_\mu$ with a half-plane $\Sigma$. At $\mu_0$, the two manifolds do not intersect each other in $\Sigma$. Notice that the unstable manifold seems to translate to the left relative to the repelling slow manifold as $\mu$ increases. At $\mu_1$, the two manifolds intersect transversally in $\Sigma$. In the proof of the theorem, we formalize and prove these observations, and moreover show that the first intersection of the two manifolds is tangential. The vector field is transverse to $\Sigma$, so a tangential intersection of the manifolds in $\Sigma$ corresponds to a tangential intersection in the 3-dimensional phase space.
In the proof of Theorem \ref{T_tangential}, we will at times work with the singular Hopf normal form \eqref{eq_singularHopfNormalForm}, and at other times with the rescaled singular Hopf normal form \eqref{eq_resc_shnf}. Recall that, in the rescaled system we use upper case variables and parameters ($\mu$ is scale independent). Note that we do not assert that the tangency of the manifolds is unique. We will first prove Theorem \ref{T_tangential} for $\varepsilon=10^{-3}$. For smaller $\varepsilon$, the result follows from the rescaling \eqref{eq_rescaling} and the following property of the relative slope condition for the singular Hopf normal form (recall that the tangency will occur in different parts of phase space for different values of $\varepsilon$):
$$
\sqrt{\varepsilon}s(\varepsilon) = \sqrt{\varepsilon}\dfrac{2|X'|\sqrt{\varepsilon Y}}{\sqrt{\varepsilon}|Y'|+|Z'|} = 2|X'|\sqrt{Y}\dfrac{\varepsilon}{\sqrt{\varepsilon}|Y'|+|Z'|},
$$
since $\varepsilon>0$, $|Y'| \geq 0$, and $|Z'|\geq 0$, and since the derivative of $\varepsilon/(\sqrt{\varepsilon}|Y'|+|Z'|)$ with respect to $\varepsilon$ equals $\bigl(\tfrac{1}{2}\sqrt{\varepsilon}|Y'|+|Z'|\bigr)/\bigl(\sqrt{\varepsilon}|Y'|+|Z'|\bigr)^2\geq 0$, this is a non-decreasing function of $\varepsilon$.
Hence, if
$$
s(\varepsilon)\leq\dfrac{1}{\sqrt{\varepsilon}},
$$
then
$$
s(\varepsilon')=\dfrac{1}{\sqrt{\varepsilon'}}\sqrt{\varepsilon'}s(\varepsilon')\leq \dfrac{1}{\sqrt{\varepsilon'}}\sqrt{\varepsilon}s(\varepsilon) \leq \dfrac{1}{\sqrt{\varepsilon'}}, \quad {\rm for \, all }\,\, 0<\varepsilon'<\varepsilon.
$$
The existence of a computable slow manifold at a particular value of $\varepsilon$ thus implies the existence of computable slow manifolds at all smaller values of $\varepsilon$. Note that these computable slow manifolds will appear at different positions in the phase space for different values of $\varepsilon$.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.7 \textwidth]{sectionOverview.eps}
\caption{An example family $S_\mu^r$ (thick line) and images of fundamental domains (solid curves) of $W^u_\mu$ for a selection of $\mu$ in $[\mu_0, \mu_1]$, shown here intersected with $\Sigma$. The boundary of $\Sigma$ is drawn as a dashed line, the rectangle $R$ is drawn as a shaded region. As $\mu$ is varied in $[\mu_0,\mu_1]$, the slow manifold only moves by amounts too small to be noticeable at the scale of the diagrams.
For each $W^u_\mu$ included in the figure, we plot the first intersection of the trajectories with $\Sigma$, if the trajectory reaches $\Sigma$.}
\label{F_finalBox}
\end{center}
\end{figure}
{\bf Set-up. }
We will work with
$$
\Sigma :=\{(X,Y,Z)\in \mathbb{R}^3: Z \geq -0.1693+0.16\, (X+1.353), Y=2\}
$$
and
$$
R:=\{(X,Y,Z)\in\Sigma:-1.62\leq X \leq -1.49, -0.169\leq Z \leq -0.162\}.
$$
We next list verifiable conditions that, via Lemma \ref{L_tangential} below, will prove Theorem \ref{T_tangential}. Many of these conditions are illustrated in Figure~\ref{F_initialFinalBox}. Let
$$
Y_{min},Y_{max}: [ \mu_0,\mu_1]\rightarrow \mathbb{R}
$$
be continuous with $Y_{min}(\mu)\leq Y_{max}(\mu)$. Further define a 2-dimensional ``box'' by
$$
B_0:=\{(\mu,X,Y,Z) \in [\mu_0,\mu_1]\times W^u_\mu:\, X= \pi_X(p_\mu), Y_{min}(\mu)\leq Y\leq Y_{max}(\mu)\}.
$$
Note that the requirement $(X,Y,Z)\in W^u_\mu$ uniquely defines $Z$ as a function of $(\mu,X,Y)$. Denote the corners of $B_0$ corresponding to
$$(\mu,Y)\in\{(\mu_1,Y_{max}(\mu_1)),
(\mu_0,Y_{max}(\mu_0)),
(\mu_0,Y_{min}(\mu_0)),
(\mu_1,Y_{min}(\mu_1))\}$$
by $\{M_1, M_2, M_3, M_4\}$. Denote the flow map of system \eqref{eq_resc_shnf} from $B_0$ to $\Sigma$, wherever it is defined, by $\Psi$. The next step of our construction is to introduce a number of assumptions that are verifiable using validated numerics, i.e., they can be restated as a finite number of computable conditions. The geometry of these assumptions is illustrated in Figure \ref{F_initialFinalBox}. In Lemma \ref{L_tangential} below we show that these assumptions are sufficient to prove Theorem \ref{T_tangential}.
\begin{assumption}\label{A_Tangency}
Assume that the following conditions are satisfied:
\begin{enumerate}
\item[(I)] For $\mu\in[\mu_0,\mu_1]$, a family of repelling slow manifolds $S_\mu^r$ intersects $R$ in a single family of curves $C_\mu$ that enters $R$ at the top and exits $R$ at the bottom.
\item[(II)] The map $\Psi$ is defined on the three sides of $B_0$ corresponding to $Y=Y_{min}(\mu), Y=Y_{max}(\mu)$ and $\mu=\mu_0$, and their images under $\Psi$ lie in $R$ and strictly to the right of $S_\mu^r\cap R$.
\item[(III)] The map $\Psi$ is defined on $\{ (\mu, X,Y,Z)\in B_0 : \mu=\mu_1\}$ and its image lies in $R$. Furthermore, $\Psi(M_1)$ and $\Psi(M_4)$ are strictly to the right of $S_\mu^r\cap R$, and there exists a point $M_5\in\{ (\mu, X,Y,Z)\in B_0 : \mu=\mu_1\}$ such that
$\Psi(M_5)$ lies strictly to the left of $S_\mu^r\cap R$ in $\Sigma$.
\item[(IV)] The map $\Psi$ is well-defined on $B_0$.
\end{enumerate}
\end{assumption}
\begin{figure}[h]
\begin{center}
(a)\includegraphics[width=0.46 \textwidth]{initialBox_labeled.eps}
(b)\includegraphics[width=0.46 \textwidth]{finalBox_labeled.eps}
\caption{Illustration of the assumptions made in Assumption \ref{A_Tangency}. The box $B_0$ shown in pane (a) maps into $R\subset\Sigma$ as shown in pane (b). As $\mu$ is varied in $[\mu_0,\mu_1]$, the slow manifold (thick solid line) only moves by amounts too small to be noticeable at the scale of the diagrams.}\label{F_initialFinalBox}
\end{center}
\end{figure}
\begin{lemma} \label{L_tangential} Suppose that Assumptions \ref{A_Tangency} are satisfied. Then $S^r_{\mu}$ and $W^u_\mu$ intersect tangentially for some $\mu^* \in [\mu_0,\mu_1]$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{L_tangential}]
Fix a family of slow repelling manifolds $S^r_{\mu}$, $\mu\in[ \mu_0, \mu_1]$. Since all of $B_0$ reaches $\Sigma$ by Assumption \ref{A_Tangency}.IV, the existence and uniqueness theorem for ODEs implies that the map from $B_0$ to $\Sigma\times [ \mu_0, \mu_1]$ is continuous. We may thus define the continuous function
\begin{equation*}
\mathrm{dist}(\mu,Z):=\min \left(\pi_X (W^u_\mu|_Z\cap R) - \pi_X (S^r_{\mu} |_Z\cap R) \right)
\end{equation*}
where $Z$ is required to lie in the range of $Z$ values of $R$ and $|_Z$ denotes restriction to $Z$. Consider
$$\mu^* = \min \{\mu\in [\mu_0,\mu_1]:\, \min_Z \mathrm{dist}(\mu,Z)=0\},
$$
the existence of which follows from Assumptions \ref{A_Tangency}.II and \ref{A_Tangency}.III, and the continuity of $\mathrm{dist}(\mu,Z)$. Clearly $S_{\mu^*}^r\cap R$ and $W^u_{\mu^*}\cap R$ intersect in at least one point $(X_0,Y_0,Z_0)$. Moreover,
$$
\pi_X(W^u_{\mu^*}|_Z\cap R) - \pi_X(S_{\mu^*} |_Z\cap R)\geq 0
$$
for the range of $Z$ values that lie in $R$. Since $W^u_{\mu^*}$ and $S_{\mu^*}$ are smooth surfaces in $\mathbb{R}^3$, transverse to $R$, we can now consider the Taylor series expansion of
$$
\pi_X(W^u_{\mu^*}|_Z\cap R) - \pi_X(S_{\mu^*} |_Z\cap R)
$$
at $X_0$ with respect to $X$ and conclude that its linear term must be zero. We have thus shown that the manifolds $W^u_{\mu^*}$ and $S_{\mu^*} $ intersect tangentially in $R$.
\end{proof}
Subsection \ref{SS_SlowManifold} below gives details on the verification of Assumption \ref{A_Tangency}.I. Subsection \ref{SS_UnstableManifold} describes in detail how $\Sigma$, $Y_{min}$, and $Y_{max}$ are chosen, and provides details on the verification of Assumptions \ref{A_Tangency}.II, \ref{A_Tangency}.III, and \ref{A_Tangency}.IV.
\subsection{Slow manifold computations.} \label{SS_SlowManifold}
Showing that for $\mu\in [\mu_0,\mu_1]$, a family of repelling slow manifolds $S_\mu^r$ intersects $R$ in a single family of curves is a straightforward application of the methods developed in the earlier parts of this paper: we compute slow manifold enclosures for the rescaled singular Hopf system \eqref{eq_resc_shnf} over a domain that corresponds to
$$
1\leq Y \leq 500, \quad -0.169\leq Z \leq -0.162,
$$
and for the singular Hopf parameter values
$$
\{(\mu, A, B, C)\in \mathbb{R}^4: \mu\in [4.54,4.553]\times 10^{-3}, A=-0.07, B=0.001, C=0.16\}.
$$
The actual computations for the enclosures are performed in the original singular Hopf coordinates of \eqref{eq_singularHopfNormalForm}, as described in earlier sections of this paper. Let $\epsilon_0 = 10^{-3}$; in the original coordinates of the singular Hopf normal form \eqref{eq_singularHopfNormalForm}, the domain then corresponds to
$$
D=[y_{min}, y_{max}]\times [z_{min}, z_{max}],
$$
where
$y_{min}=1.0\, \epsilon_0,y_{max}=500.0 \, \epsilon_0,z_{min}=-0.169\, \sqrt{\epsilon_0},z_{max}=-0.162\, \sqrt{\epsilon_0}$, and the set of singular Hopf system parameters is
$$
\left\{(\epsilon,\mu,a,b, c)\in \mathbb{R}^5: \epsilon=\epsilon_0, \mu\in [4.54,4.553]\times 10^{-3}, a=-\frac{0.07}{\sqrt{\epsilon_0}}, b=\frac{0.001}{\epsilon_0}, c=\frac{0.16}{\sqrt{\epsilon_0}} \right\}.
$$
The enclosures obtained show that points $(X,Y,Z)$ in the repelling slow manifold over $D$ must satisfy
$$
-1.5726<X<-1.5539.
$$
Moreover, the methods of Section \ref{S_Existence} of this paper were used to check that at any parameter in the above-described set, $S_\mu^r$ is a graph over a domain $D \subset S_0$, and that $s(\epsilon_0)\leq1.027$. Again, note that the computation is independent of the choice of $\epsilon_0$, since a different choice of $\epsilon_0$ would imply that we should enclose a different part of the phase space. Since $z_{min}$ and $z_{max}$ were chosen so that the enclosed repelling slow manifolds enter $R$ at the top and leave $R$ at the bottom, we have shown the existence of the sought family of slow manifolds.
\noindent {\bf Remark.} Even though the slow manifold intersected with the section $\Sigma$ in our case resembles a fixed straight line, enclosing it with the precision required for the proof to work is a hard problem. To determine rigorously the location of a slow manifold is difficult even in the easiest non-trivial cases. The problem is amplified in our case since we need high accuracy in the rescaled system, where the errors are blown up by a factor $O\left(\frac{1}{\sqrt{\varepsilon}}\right)$.
\subsection{Unstable manifold computations.}\label{SS_UnstableManifold}
We now describe how $Y_{min}$ and $Y_{max}$ are chosen for Assumptions \ref{A_Tangency}.II and \ref{A_Tangency}.III to be satisfied. Recall that Figure \ref{F_finalBox} was obtained by examining trajectories in entire fundamental domains of $W^u_\mu$ for $\mu\in[\mu_0, \mu_1]$, and that
some of these trajectories did not reach $\Sigma$. We chose $Y_{min}$ first and then $Y_{max}$ in such a way that $B_0$ is on the one hand small enough for the map to $\Sigma$ to be well-defined and its image to be in $R$, and on the other hand large enough for the images of marked points $M_1, \ldots, M_5$ and of the boundaries of $B_0$ to map to the left or right of $S_\mu^r$ as required by Assumptions \ref{A_Tangency}.II and \ref{A_Tangency}.III.
\subsubsection{Computing $Y_{min}(\mu)$.}
Let $L\subset \Sigma$ be the line given by
$$
L:=\{(X,Y,Z)=(t, 3, -0.1678+0.16\,(t+1.535))\,: t\in\mathbb{R}\}.
$$
This line lies well within $\Sigma$ and is parallel to $\partial \Sigma$. It is moreover transverse to the parts $W^u_\mu$ that reach $\Sigma$, for all $\mu\in[\mu_0, \mu_1]$. The boundary value problem (BVP) for the flow of the rescaled singular Hopf normal form with the following boundary conditions and $\mu$ as the continuation parameter is thus well-defined:
\begin{itemize}
\item trajectories have to start in the unstable eigenspace of $p_\mu$
\item trajectories have to start at an $X$ coordinate equal to that of $p_\mu$
\item trajectories have to end on $L$.
\end{itemize}
Note that there are multiple solutions to this BVP, as each trajectory in the unstable manifold satisfies the initial boundary condition multiple times as it spirals away from the equilibrium point $p_\mu$, but we choose one by selecting a fundamental domain for its endpoint near $p_\mu$. The equilibrium points $p_\mu=(x_\mu,x_\mu^2,x_\mu)$ satisfy the equation $x_\mu=-45+\sqrt{45^2-1000\mu}$.
We use a trajectory that initially has a $Y$ coordinate approximately $10^{-4}$
larger than that of $p_\mu$, deferring a discussion of the suitability of this distance to a remark at the end of this subsection. Solving the BVP with a shooting method, we find that the $Y$ coordinates of the solutions to the boundary value problem are close to linear in $\mu$ on the interval $[\mu_0, \mu_1]$. We thus define
$$
Y_{min}(\mu)=x_\mu^2-9.37888799540\times 10^{-5} + 0.640307054861539(\mu-\mu_0),
$$
to be the linear function in $\mu$ that approximates the $Y$ coordinates of the solution endpoints to the BVP.
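For reference, $x_\mu$ and $Y_{min}$ are easy to evaluate numerically; the closed form for $x_\mu$ is the relevant root of the rescaled equilibrium condition $BX^2+(A+C)X+\mu=0$ with the parameter values used in this section. A small sketch:
\begin{verbatim}
# Sketch: equilibrium X-coordinate x_mu and the linear lower boundary Y_min(mu).
import numpy as np

A, B, C = -0.07, 0.001, 0.16
mu0, mu1 = 0.00454, 0.004553

def x_mu(mu):
    # root of B*X^2 + (A+C)*X + mu = 0 near the fold: -45 + sqrt(45^2 - 1000*mu)
    return -45.0 + np.sqrt(45.0**2 - 1000.0*mu)

def Y_min(mu):
    return x_mu(mu)**2 - 9.37888799540e-5 + 0.640307054861539*(mu - mu0)

print(x_mu(mu0), Y_min(mu0), Y_min(mu1))
\end{verbatim}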
\subsubsection{Computing $Y_{max}(\mu)$.}
After inspecting diagrams similar to Figure \ref{F_initialFinalBox}, we defined $Y_{max}(\mu)$ in an ad-hoc manner to be the linear function for which the box $B_0$ contains $25\%$ of a fundamental domain of $W^u_{\mu_0}$ and $40\%$ of a fundamental domain of $W^u_{\mu_1}$:
$$
Y_{max}(\mu)=x_\mu^2 -9.628167607168\times 10^{-5} + 0.549805711513847(\mu-\mu_0).
$$
\subsubsection{Computing unstable manifolds.}\label{SSS_WU}
The complexity of the singular Hopf normal form makes it infeasible to compute unstable manifolds analytically. We therefore begin by describing a method to rigorously compute the location of $W^u_\mu$. We will use the method developed in Section \ref{S_Method} together with covering relations with cone conditions \cite{Z09} and validated numerical integration \cite{L88,NJ99,NJC99,NJP01} to enclose and propagate the manifolds, respectively. In our implementation \cite{Progs} we use the software VNODE-LP \cite{Vnode} to integrate the system \eqref{eq_resc_shnf}. The computations are done using order $11$ Taylor expansions in VNODE-LP.
Since our proof relies heavily on the concept of h-sets and the method of covering relations we provide an informal introduction here. For a complete formal description of these concepts and methods we refer the reader to \cite{ZG04,Z09}. In \cite{ZG04} h-sets and covering relations are introduced, and in \cite{Z09} the concept of an h-set with cones is introduced together with the appropriate modification to the definition of a covering relation. An h-set is a compact hyperbolic like set, in the sense that it has expanding and contracting directions, in an appropriate coordinate system. An h-set is a set together with the coordinates. A map together with two h-sets, $h_1, h_2$, is said to satisfy covering relations if $h_1$ is mapped across $h_2$ under the map. Across in this setting means that the boundaries of $h_1$ transversal to the expanding directions are mapped outside $h_2$ and the image of $h_1$ does not intersect the boundaries of $h_2$ transversal to the contracting directions. Using the Brouwer degree one can show, see \cite{ZG04}, that a cycle of h-sets with covering relations must contain a periodic orbit. An h-set with cones is an h-set together with a quadratic form $Q$, that describes a uniform cone field on the h-set. The map is said to satisfy covering relations with cone conditions, if the quadratic form is increasing along orbits. Given recurrence, this yields uniqueness of periodic orbits. One can also use the cone conditions, see \cite{Z09}, to prove the existence of invariant manifolds and propagate them along orbits, which is how they are used in this section. The bounds on the location of the invariant manifolds given by covering relations with cone conditions are Lipschitz. In particular around a fixed point one gets a cone, which bounds the location of the invariant manifold. The Lipschitz constant depends on the ratio of the positive and negative eigenvalues of $Q$.
We construct an $h$-set with cones
centered at $p_\mu$ as a cylinder of size $10^{-4}$ and $10^{-5}$ in the $(X,Y)$ and $Z$ directions, respectively, with a cone with Lipschitz constant $0.1$ defined by the quadratic form
$$
Q=\left[\begin{array}{ccc}
1 &0&0\\0&1&0\\0&0&-100
\end{array}
\right].
$$
We verify that covering relations and cone conditions hold for the time $6.3$ map. This proves that the unstable manifold exists within the $h$-set, and yields an enclosure of the unstable manifold as a Lipschitz graph with Lipschitz constant $0.1$ over the disc:
$$
\left\{(X,Y) \,: \|(X-x_\mu,Y-x_\mu^2)\|\leq 10^{-4}\right\}.
$$
To further contract the enclosure for a given value of $(X,Y)$, we partition the line segment over $(X,Y)$ in the cone, and integrate backwards for $100$ time units or until the trajectory leaves the cone. Subsegments that leave the cone in backwards time are removed, and we use the interval hull of the remaining subsegments as our new bound for the point of the unstable manifold over $(X,Y)$. The covering relations with cone conditions prove that each remaining subsegment over $(X,Y)$ contains a unique $Z$ value such that $(X,Y,Z)\in W^u_\mu$.
Given an initial enclosure of a point in $W_\mu^u$ we propagate it forwards by integrating \eqref{eq_resc_shnf} until it hits $\Sigma$ using VNODE-LP. To integrate the top and bottom of $B_0$, i.e., the boundaries of $B_0$ where $\mu$ is not constant, and the interior of $B_0$, we consider a $4$ dimensional phase space by appending $\dot \mu = 0$ to \eqref{eq_resc_shnf}. This procedure stabilizes the numerical behavior of the propagation of the unstable manifold.
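For non-rigorous experimentation, the $\mu$-extended system takes the following form, shown here as a plain Python sketch; the right-hand side is the rescaled singular Hopf normal form as we use it here, and the verified computations use VNODE-LP instead of \texttt{solve\_ivp}.
\begin{verbatim}
# Sketch of the mu-extended system used to propagate B_0: the rescaled singular Hopf
# normal form X' = Y - X^2, Y' = Z - X, Z' = -mu - A*X - B*Y - C*Z (cf. (eq_resc_shnf)),
# with mu appended as a fourth variable satisfying mu' = 0.  The event stops the
# integration on the plane {Y = 2} containing the section Sigma.
import numpy as np
from scipy.integrate import solve_ivp

A, B, C = -0.07, 0.001, 0.16

def extended_rhs(t, u):
    X, Y, Z, mu = u
    return [Y - X**2, Z - X, -mu - A*X - B*Y - C*Z, 0.0]

def hit_sigma(X0, Y0, Z0, mu, t_max=2000.0):
    cross = lambda t, u: u[1] - 2.0          # event: Y = 2
    cross.terminal = True
    sol = solve_ivp(extended_rhs, (0.0, t_max), [X0, Y0, Z0, mu],
                    events=cross, rtol=1e-10, atol=1e-12)
    return sol.y[:, -1] if sol.t_events[0].size else None
\end{verbatim}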
\subsubsection{Verifying Assumptions \ref{A_Tangency}.(II-IV)}
Using the method described in Section \ref{SSS_WU} one can now subdivide $\partial B_0$ into small subsets, compute an interval enclosure of each subset, and use validated numerical integration to show that Assumptions \ref{A_Tangency}.II, \ref{A_Tangency}.III, and \ref{A_Tangency}.IV are satisfied. In practice, this requires some experimentation: if the subsets are too large, wrapping effects in the numerical integration will make the verification of Assumptions \ref{A_Tangency}.II, \ref{A_Tangency}.III, and \ref{A_Tangency}.IV impossible. On the other hand, the computing time for the entire verification of Assumption \ref{A_Tangency}.II is approximately proportional to the number of subsets to be integrated numerically. The bounds on $\Psi(\partial B_0)$ and $\Psi(M_i)$, for $i=1,4,$ and $5$, are given in Table \ref{T_PsiIm}.
\begin{table}[h]
\begin{center}
(a) \begin{tabular}{c|cccc}
& $\Psi(\partial B_0(\mu_0))$ & $\Psi(\partial B_0(\mu_1))$ & $\Psi(\partial B_0(Y_{min}))$ & $\Psi(\partial B_0(Y_{max}))$ \\ \hline
$X$ & $-1.5_{227}^{468}$ & $-1.^{6102}_{5156}$ & $-1.5_{236}^{462}$ & $-1.5_{107}^{368}$\\ \hline
$Z$ & $-0.16_{46}^{71}$ & $-0.16_{35}^{83}$ & $-0.16_{64}^{84} $ & $-0.16_{29}^{57}$
\end{tabular}
\vspace{0.3cm}
(b)\begin{tabular}{c|ccccc}
& $\Psi(M_1)$ & $\Psi(M_4)$ & $\Psi(M_5)$\\ \hline
$X$ & $-1.52_{29}^{37}$ & $-1.535_{1}^{5}$ & $-1.57_{58}^{86}$ \\ \hline
$Z$ & $-0.164_0^1$ & $-0.167_7^8$ & $-0.166_1^3$
\end{tabular}
\vspace{0.3cm}
\caption{(a) The image of $\partial B_0$ under $\Psi$. (b) The image of the marked points on the $\partial B_0(\mu_1)$ line under $\Psi$. All images are in the interior of $R$. The computations in (a) and (b) prove Assumptions \ref{A_Tangency}.II and \ref{A_Tangency}.III, respectively.}\label{T_PsiIm}
\end{center}
\end{table}
{\bf Remark.}
Note that since the position of $S_\mu^r$ as well as the map to $\Sigma$ are computed using interval arithmetic, their computed positions carry errors due to overestimation. These errors have to be taken into account when choosing the $Y$ value at which to place the half-plane $\Sigma$, the interval boundaries $\mu_0$ and $\mu_1$, and the functions $Y_{min}$ and $Y_{max}$. Generally, placing $\Sigma$ at greater values of $Y$ results in tighter bounds for the slow manifold, and the repelling nature of the slow manifold spreads trajectories that were initially close in the fundamental domain far apart, making it easier to verify Assumptions \ref{A_Tangency}.(II-IV). We found the size $2\times 10^{-4}$ of the $h$-sets constructed in Section \ref{SSS_WU} to be large enough to keep the validated numerical integration to $\Sigma$ short enough to not accumulate prohibitively large errors, while being small enough to be efficiently computable.
{\bf Remark.} To give further insight into what happens after the bifurcation, we note that, for $k>B$, the set defined by the following conditions is forward invariant (similar sets can be constructed for other values of the parameters):
$$
X<\frac{-\mu}{A+C}, \quad X^2>(1+k)Y, \quad Y>\frac{1+k}{k}, \quad |X|>|Z|.
$$
We verify that the above conditions are satisfied, with $k=2$, for the point $M_5$. Thus, $X\rightarrow-\infty$ and $Y\rightarrow\infty$ for a part of the unstable manifold past the tangential bifurcation.
\section{Summary and Discussion}\label{S_Disc}
Computation of the slow manifolds in a normal form for singular Hopf bifurcation served as a case study for this paper. A singular Hopf bifurcation in slow-fast systems with two slow and one fast variable occurs when an equilibrium point crosses between attracting and repelling slow manifolds. The dynamics associated with this crossing -- a \emph{folded saddle-node type II} in the singular limit -- is complicated. The small amplitude oscillations emanating from the equilibrium point are part of \emph{mixed mode oscillations} in some examples, notably the model originally studied by Koper. Subsidiary bifurcations occur, including tangency between the repelling slow manifold and the two dimensional unstable manifold of the equilibrium point. Tangency bifurcations form part of the boundary of the parameter space region in which mixed mode oscillations occur in the Koper model, making them essential to understanding global aspects of the dynamics in this and other systems. Since there are no analytic methods for locating the tangency bifurcations, this paper uses verified computing methods to prove the existence of tangency bifurcations between a slow manifold and an unstable manifold of an equilibrium point for the first time.
Some of our ideas generalize to the case of slow manifolds of saddle type. To compute normally hyperbolic manifolds of saddle type, see e.g. \cite{CZ11}, one usually first computes the manifold's stable and unstable manifolds, and then intersects them. To compute a saddle slow manifold in a three dimensional ambient space using our ideas, one could compute enclosures of the stable and unstable manifolds, as presented in this paper. The existence argument given in Section \ref{S_Existence} can be modified to this setting, under appropriate assumptions on the dynamics on the slow manifold. Generalization to slow manifolds of saddle type in higher dimensional ambient spaces is substantially more challenging.
We made several design decisions while constructing our algorithm for computing slow manifolds. This section discusses details of several and motivates our choices.
\begin{itemize}
\item
Our enclosures were constructed as pairs of enclosing transversal piecewise linear surfaces. There are several alternative approaches to constructing and refining the vertices of the enclosing triangulated surfaces $\mathcal{L}$ and $\mathcal{R}$. For the examples in Sections \ref{S_NumRes} and \ref{S_Tang} we used rectangular patches in the domain of the slow variables. Instead, one could triangulate a dynamically defined region, obtained by flowing a set of initial conditions on the critical manifold with the slow flow, and use a discretization of those trajectories as the vertices of the triangulation.
\item
We considered other possibilities for moving vertices in Section \ref{SS_ImpBound}; namely, to move them along trajectories of the flow of (\ref{eq_slowFast_12}), or to move them along the normal of the triangulation. Both of these methods have serious disadvantages. When moving vertices along the flow of the system, we have to carefully check whether the vertices are moved past edges, thereby destroying the integrity of the triangulation. If the triangulation remains a graph over the $(y,z)$ domain, it is possible to generate a new triangulation by a Delaunay-type algorithm, and lift it to the surface, but if two vertices flow to the same $(y,z)$ coordinate this is no longer possible. Additionally, this method of moving vertices moves the two enclosing surfaces by different amounts, so that we obtain an enclosure of a smaller part of the slow manifold. A third drawback is that the triangulations might develop very acute triangles. Finally, numerical integration of a large number of vertices is slow compared to the approach that we use. Moving vertices along the normals combines the worst of both methods: we no longer control the triangulations, and we might introduce violations of the transversality conditions.
\item
The tightening procedure described in Section \ref{SS_ImpBound} only updates one vertex at a time: we tentatively move one vertex by a large step and accept the move only if all the faces attached to it remain transversal to the flow. (A schematic sketch of this one-vertex-at-a-time update loop is given after this list.) An alternative would be to move not only the vertex itself, but at the same time all vertices attached to it by an edge. Such a procedure would work as follows: when it is a vertex's ``turn'', update it by only a fraction of its potential improvement, and simultaneously move its neighbours by a smaller amount. The smaller neighbour updates should be chosen so that the expected value of the total update of each vertex stays the same as in Section \ref{SS_ImpBound}. The benefit of such an approach is that the triangulation is not skewed as much in each step, so it should be easier to verify the transversality condition. In practice, however, the gain of this approach is negligible, compared to a slight increase of the denominator of (\ref{eq_update}). There are also disadvantages, primarily in computational complexity: each time an update is made, one has to not only locate all the neighbouring vertices and update them, but also locate all of their neighbouring faces and check the transversality condition on them. In the results presented in Section \ref{S_NumRes}, we therefore only update one vertex at a time.
\item
We construct invariant cone fields on $\mathcal{C}$ to prove that it contains normally hyperbolic locally invariant manifolds. We constructed these manifolds by flowing a ``ribbon'' around the inflowing boundaries of the enclosure. The property that our enclosures are aligned with the flow, in the sense that the vector field is non-zero in one of the slow variables, was crucial for proving the existence of computable slow manifolds. In general one could also use the invariant cone fields to show that the graph transform is well defined, by adapting the method in \cite{KH95}. Proving the convergence of such a scheme would require very careful estimates of the expansion and contraction rates, and of the norms of the nonlinear components of the vector field. An alternative is to define an extension of the vector field outside of $\mathcal{C}$ that has a slow manifold that is invariant rather than just locally invariant. Global invariance together with normal hyperbolicity would give a unique manifold for the extension using the technique from \cite{CZ11}. Given normal hyperbolicity, ensured by the existence of the cone field, either method would give the existence of a (non-unique) $C^1$ normally hyperbolic manifold, which is a graph over the slow variables. Either of these approaches, however, involves many subtle details that need to be clarified for the case at hand.
\item
If the mesh size of piecewise linear enclosing surfaces remains fixed as $\varepsilon$ decreases, then the curvature of the slow manifold becomes a limiting factor in the tightness of enclosures. With smoother enclosing manifolds, tighter enclosures are likely to be possible. We did not attempt this because the transversality calculations for piecewise linear systems were particularly simple in the singular Hopf normal form we studied.
\end{itemize}
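As noted in the discussion of the tightening procedure above, the basic update moves one vertex at a time with an accept/revert test. The following schematic sketch illustrates that loop; it is an illustration only, not the implementation used for the results in this paper, and the step proposal and the transversality test are placeholders standing in for the rigorous computations of Section \ref{SS_ImpBound}.
\begin{verbatim}
import numpy as np

def tighten_once(vertices, faces, faces_at_vertex, propose_step, is_transversal):
    """One sweep of the one-vertex-at-a-time tightening loop (schematic).

    vertices        : (n, 3) array, vertex coordinates of one enclosing surface
    faces           : (m, 3) array, vertex indices of the triangles
    faces_at_vertex : list mapping a vertex index to indices of adjacent faces
    propose_step    : callable, old position -> proposed new position
    is_transversal  : callable, (3, 3) array of face vertices -> bool
    """
    for v in range(len(vertices)):
        old = vertices[v].copy()
        vertices[v] = propose_step(old)          # tentative (large) step
        ok = all(is_transversal(vertices[faces[f]])
                 for f in faces_at_vertex[v])    # check all attached faces
        if not ok:
            vertices[v] = old                    # revert: keep a valid enclosure
    return vertices
\end{verbatim}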
\section{Acknowledgment}
T. J. was funded by a postdoctoral fellowship from \textit{Vetenskapsr\aa det} (the Swedish Research Council). J. G. and P. M. were partially supported by a grant from the National Science Foundation.
| {
"timestamp": "2012-09-20T02:02:18",
"yymm": "1201",
"arxiv_id": "1201.1948",
"language": "en",
"url": "https://arxiv.org/abs/1201.1948",
"abstract": "Slow-fast dynamical systems have two time scales and an explicit parameter representing the ratio of these time scales. Locally invariant slow manifolds along which motion occurs on the slow time scale are a prominent feature of slow-fast systems. This paper introduces a rigorous numerical method to compute enclosures of the slow manifold of a slow-fast system with one fast and two slow variables. A triangulated first order approximation to the two dimensional invariant manifold is computed \"algebraically\". Two translations of the computed manifold in the fast direction that are transverse to the vector field are computed as the boundaries of an initial enclosure. The enclosures are refined to bring them closer to each other by moving vertices of the enclosure boundaries one at a time. As an application we use it to prove the existence of tangencies of invariant manifolds in the problem of singular Hopf bifurcation and to give bounds on the location of one such tangency.",
"subjects": "Dynamical Systems (math.DS)",
"title": "Rigorous Enclosures of a Slow Manifold",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.978712650690179,
"lm_q2_score": 0.7248702821204019,
"lm_q1q2_score": 0.7094397152205965
} |
https://arxiv.org/abs/1905.06998 | Majorization bounds for Ritz values of self-adjoint matrices | A priori, a posteriori, and mixed type upper bounds for the absolute change in Ritz values of self-adjoint matrices in terms of submajorization relations are obtained. Some of our results prove recent conjectures by Knyazev, Argentati, and Zhu, which extend several known results for one dimensional subspaces to arbitrary subspaces. In addition, we improve Nakatsukasa's version of the $\tan \Theta$ theorem of Davis and Kahan. As a consequence, we obtain new quadratic a posteriori bounds for the absolute change in Ritz values. | \section{Introduction}
The study of sensitivity of Ritz values of Rayleigh quotients of self-adjoint matrices (i.e. the changes in the
eigenvalues of compressions of a self-adjoint matrix) is a well established and active research field in applied mathematics
\cite{AKPRitz,BosDr,AKFEM,AKMaj,AKProxy,LiLi,Mathias,Ovt,TeLuLi,ZAK,ZK,Z}. Explicitly, given a $d\times d$ complex self-adjoint matrix $A$
and isometries $X,\, Y$ of size $d\times k$, with ranges $\mathcal{X}$ and $\mathcal{Y}$ respectively, we are interested in computing upper and lower bounds
for
$$ |\lambda(\rho(X))-\lambda(\rho(Y))|=(\,|\lambda_i(\rho(X))-\lambda_i(\rho(Y))| \, )_{i\in\mathbb{I}_k}\in \mathbb{R}_{\geq 0}^k$$
where $\rho(X)=X^*A\,X, \, \rho(Y)=Y^*A\,Y$ are $k\times k$ complex self-adjoint matrices known as Rayleigh quotients (RQ) of $A$,
and $\lambda(\rho(X)),\,\lambda(\rho(Y)) \in\mathbb{R}^k$ are the eigenvalues (counting multiplicities and arranged in non-increasing order) also known as Ritz values.
\medskip\noi
Typically, the bounds for the absolute change in the Ritz values are obtained in terms of the
residuals $R_X=AX-X\, \rho(X)$ and $R_Y=AY-Y \rho(Y)$ or in terms of the principal angles between subspaces (PABS) denoted by $\Theta(\mathcal{X},\mathcal{Y})\in [0,\pi/2]^k$.
Upper bounds are classified according to which parameters are used to bound the change in Ritz values (see \cite{ZAK}). Indeed, the {\it a priori} bounds
are those obtained in terms of PABS; the {\it a posteriori} bounds are those obtained in terms of (singular values of) residuals while the {\it mixed type} bounds are obtained in terms
of both PABS and residuals. It is worth pointing out that PABS appearing in a priori bounds may not be readily available in practice. On the other hand, a
posteriori bounds are based on computable singular values of residual matrices.
Moreover, bounds based on residuals (i.e. both a posteriori and mixed type) are particularly convenient in case one of the spaces, say $\mathcal{X}$, is $A$-invariant
(as in this case $R_X=0$), as opposed to (autonomous) a priori bounds.
\medskip\noi
The abstract matrix analysis formulation of the sensitivity problem stated above makes it possible to apply
this theory in a variety of different research areas such as: graph matching \cite{AKMaj} in terms of spectral analysis of the graphs;
signal distinction in signal processing, where
Ritz values serve as harmonic signature to differentiate subspaces; finite element methods (FEM) \cite{AKFEM},
for approximation of subspaces corresponding to fundamental modes; of course, matrix analysis, e.g. for bounds for eigenvalues
after matrix additive perturbations. Also, bounds for changes in Ritz values
play a central role in the analysis of algorithms for simultaneous approximation of eigenvalues based on
Rayleigh-Ritz methods (see \cite{Parlett,StewSun} and the references therein). By now, the role of submajorization in obtaining bounds for the change of Ritz values
(recognized in the seminal paper \cite{AKMaj})
is well known; this partial pre-order relation is a powerful tool in this context, as bounds in terms of
submajorization imply a whole family of inequalities with respect to
unitarily invariant norms and with respect to the class of non-decreasing convex functions (\cite{MaOlAr}).
\medskip\noi
In this work we obtain a priori, a posteriori and mixed type upper bounds for the absolute change in Ritz values
of self-adjoint matrices in terms of submajorization. Some of our results prove recent conjectures from \cite{AKFEM,ZAK,ZK} which
extend several known
results for one dimensional subspaces to arbitrary subspaces. In addition, we improve Nakatsukasa's version
of the $\tan \Theta$ theorem \cite{Nakats} of Davis and Kahan \cite{DavKah}. We have included some (rather simple) examples to establish comparisons with previous work (for a detailed exposition of the context,
previous work, our results and some applications, see Section \ref{sec maj err}). We will consider further applications of the results herein elsewhere.
\medskip\noi
The paper is organized as follows. In Section \ref{sec prelis} we introduce preliminary
results in majorization theory and principal angles between subspaces.
In Section \ref{sec maj err} we develop our main results;
our approach to obtain these results is based on methods from abstract matrix analysis, so
we delay the proofs of some technical results until an appendix section.
Section \ref{sec maj err} is divided into three subsections: in Section \ref{sec 3.1} we prove a mixed type
upper bound for the change of the Ritz values that is conjectured in \cite{ZK} and show that this bound is sharp. We have also included some comments
with a comparison of our results with previous works and with future applications of the results of this subsection.
In Section \ref{subsec applic}
we establish a link between the results from Section \ref{sec 3.1} and an a priori upper bound for Ritz values conjectured from \cite{AKFEM}.
Although the results in this section are not sharp, they can be applied in quite general situations and they
capture the order of approximation conjectured in \cite{AKFEM}.
In Section \ref{sec 3.3.} we revisit Nakatsukasa's version of the $\tan \Theta$ theorem
of Davis and Kahan and obtain an improved version of this result;
we include an example that shows that this new version of the $\tan \Theta$ theorem is sharp in cases in which the classical
result is not. As an application, we obtain improved quadratic a posteriori error bounds for Ritz values.
The paper
ends with an Appendix (Section \ref{sec append})
in which we include a detailed background on majorization theory and present the
proofs of some technical results needed in Section \ref{sec maj err}.
\section{Preliminaries}\label{sec prelis}
Throughout our work we use the following
\medskip\noi
{\bf Notation and terminology}. We let $\mathcal{M}_{d,k}(\mathbb{C})$ be the space of complex $d\times k$ matrices
and write $\mathcal{M}_{d,d}(\mathbb{C})=\mathcal{M}_d(\mathbb{C})$ for the algebra of $d\times d$ complex matrices.
We denote by ${\cal H}(d)\subset \mathcal{M}_d(\mathbb{C})$ the
set of self-adjoint matrices and by $\mat^+$, the cone of
positive semi-definite matrices. Also,
$\mathcal G l(d)\subset \mathcal{M}_d(\mathbb{C})$ and $\mathcal{U}(d)$ denote the groups of invertible and unitary matrices respectively,
and $\mathcal{G} l (d)^+ =\mathcal{G} l(d)\cap \mat^+$. On the other hand, given a subspace $\mathcal{Z}\subset \mathbb{C}^d$, we let $\mathcal L(\mathcal{Z})$ denote the
space of linear operators acting on $\mathcal{Z}$.
\medskip\noi
For $d\in\mathbb{N}$, let $\mathbb{I}_d=\{1,\ldots,d\}$.
Given a vector $x\in\mathbb{C}^d$ we denote by $D_x$ the diagonal matrix in $\mathcal{M}_d(\mathbb{C})$ whose main diagonal is $x$.
Given $x=(x_i)_{i\in\mathbb{I}_d}\in\mathbb{R}^d$ we denote by $x^\downarrow=(x_i^\downarrow)_{i\in\mathbb{I}_d}$ the vector obtained by
rearranging the entries of $x$ in non-increasing order. We also use the notation
$(\mathbb{R}^d)^\downarrow=\{x\in\mathbb{R}^d\ :\ x=x^\downarrow \}$, $\mathbb{R}_{\ge 0} =
\{x\in \mathbb{R}: x\ge 0\}$ and $(\mathbb{R}_{\geq 0}^d)^\downarrow=
\{x\in\mathbb{R}_{\geq 0}^d\ :\ x=x^\downarrow \}$. For $r\in\mathbb{N}$, we let $\mathds{1}_r=(1,\ldots,1)\in\mathbb{R}^r$.
\medskip\noi
Given a matrix $A\in\mathcal{H}(d)$ we denote by $\lambda(A)=(\lambda_i(A))_{i\in\mathbb{I}_d}\in (\mathbb{R}^d)^\downarrow$
the eigenvalues of $A$ counting multiplicities and arranged in
non-increasing order.
For $B\in\mathcal{M}_d(\mathbb{C})$ we let $s(B)=\lambda(|B|)$ denote the singular values of $B$, i.e. the eigenvalues of $|B|=(B^*B)^{1/2}\in\mat^+$. We use the abbreviation ONB for ``orthonormal basis".
\medskip\noi Arithmetic operations with vectors are performed entry-wise i.e.,
in case $x=(x_i)_{i\in\mathbb{I}_k} $ and $ y=(y_i)_{i\in\mathbb{I}_k}\in \mathbb{C}^k $
then $x+y=(x_i+y_i)_i$ and, following the notational convention of the principal references on these matters,
$$
x\, y
=(x_i\,y_i)_i \peso{ and (assuming that $y_i\neq 0$, for $i\in\mathbb{I}_k$)}
\frac xy =(x_i/y_i)_i \ ,
$$
where these vectors all lie in $\mathbb{C}^k$. Moreover, if we assume further that $x,\,y\in\mathbb{R}^k$ then we write $x\leq y$ whenever
$x_i\leq y_i$, for $i\in\mathbb{I}_k$.
\hfill $\triangle$
\medskip\noi Next we recall the notion of majorization between vectors, that will play a central role throughout our work.
\begin{definition}\rm
Let $x,\, y\in\mathbb{R}^k$. We say that $x$ is
{\it submajorized} by $y$, and write $x\prec_w y$, if
$$
\sum\limits_{i=1}^j x^\downarrow _i\leq \sum\limits_{i=1}^j y^\downarrow _i \peso{for} j\in\mathbb{I}_k\,.
$$ If $x\prec_w y$ and $\tr x \ \stackrel{\mbox{\tiny{def}}}{=}\ \sum\limits_{i=1}^kx_i
= \tr y$, then we say that $x$ is
{\it majorized} by $y$, and write $x\prec y$. \hfill $\triangle$
\end{definition}
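\medskip\noi Submajorization is easy to test numerically. The following sketch (a numerical illustration only, not part of the formal development) checks $x\prec_w y$ by comparing partial sums of the decreasingly rearranged vectors:
\begin{verbatim}
import numpy as np

def submajorized(x, y, tol=1e-12):
    # True if x is submajorized by y: partial sums of the decreasing
    # rearrangement of x are dominated by those of y.
    xs = np.sort(np.asarray(x, dtype=float))[::-1]
    ys = np.sort(np.asarray(y, dtype=float))[::-1]
    return bool(np.all(np.cumsum(xs) <= np.cumsum(ys) + tol))

# Example: submajorized([1, 2, 3], [0, 2, 5]) is True (3<=5, 5<=7, 6<=7).
\end{verbatim}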
\def[a\coma b]{[a\, , \, b]}
\medskip\noi
There are many fundamental results in matrix theory that are stated in terms of submajorization relations.
In what follows, we mention some elementary properties of submajorization that we will need in Section \ref{sec maj err}
(for detailed expositions on majorization theory, including proofs of the results mentioned below, see \cite{bhatia,HJ,MaOlAr}).
We will consider some further properties and results on majorization theory
in Section \ref{sec append}.
Given $f: [a\coma b] \rightarrow \mathbb{R}$, where $[a\coma b]\subset \mathbb{R}$ is an interval, and $z=(z_i)_{i\in\mathbb{I}_k}\in [a\coma b]^k$ we denote $f(z)=(f(z_i))_{i\in\mathbb{I}_k}\in\mathbb{R}^k$.
\begin{rem}\label{convfunction}
Let $[a\coma b]\subset \mathbb{R}$ be an interval and let
$f: [a\coma b] \rightarrow \mathbb{R}$ be a convex function. Then,
\begin{enumerate}
\item if $x,\, y\in [a\coma b]^k$ satisfy $x\prec y$ then $f(x)\prec_w f(y)$.
\item If $x,\, y\in [a\coma b]^k$ only satisfy $x\prec_w y$ but $f$ is further non-decreasing in $[a\coma b]$, then $f(x)\prec_w f(y)$.
\end{enumerate}
\hfill $\triangle$
\end{rem}
\begin{definition}\rm
A norm $N$ in $\mathcal{M}_d(\mathbb{C})$ is {\bf unitarily invariant} (briefly u.i.n.) if
$\nui{UAV}=\nui{A}$, for every $A\in\mathcal{M}_d(\mathbb{C})$ and $U,\, V\in\mathcal{U}(d)$.
\hfill $\triangle$
\end{definition}
\medskip\noi
Well known examples of u.i.n. are the spectral norm $\|\cdot\|_{sp}$ and
the Schatten $p$-norms $\|\cdot\|_p$, for $p\geq 1$.
\begin{rem}\label{Domkyfan}\rm
It is well known that (sub)majorization relations between singular values of matrices are intimately related
with inequalities with respect to u.i.n's. Indeed, given $A,\, B\in\mathcal{M}_d(\mathbb{C})$ the following statements are equivalent:
\begin{enumerate}
\item For every u.i.n. $N$ in $\mathcal{M}_d(\mathbb{C})$
we have that $N(A)\leq N(B)$.
\item $s(A)\prec_w s(B).$ \hfill $\triangle$
\end{enumerate}
\end{rem}
\medskip\noi
{\bf Principal Angles Between Subspaces}. Let $\mathcal{X},\, \mathcal{Y}\subset \mathbb{C}^d$ denote subspaces, with $\dim\mathcal{X}=h$ and $\dim \mathcal{Y}=k$.
Let $X\in{\cal M}_{d,h}$ and $Y\in{\cal M}_{d,k}$ be such that their columns form orthonormal bases of $\mathcal{X}$ and $\mathcal{Y}$ respectively.
Then, the principal angles between $\mathcal{X}$ and $\mathcal{Y}$, denoted $\pi/2\geq \Theta_1(\mathcal{X},\mathcal{Y})\geq \ldots\geq \Theta_m(\mathcal{X},\mathcal{Y})\geq 0$,
where $m=\min\{h,k\}$, are determined by
$$\cos(\Theta_{m-i+1}(\mathcal{X},\mathcal{Y}))= s_i(X^*Y) \peso{for} i\in\mathbb{I}_m\,.$$
We further write $\Theta(\mathcal{X},\mathcal{Y})=(\Theta_i(\mathcal{X},\mathcal{Y}))_{i\in\mathbb{I}_m}\in(\mathbb{R}^m)^\downarrow$ for the vector of principal angles between
$\mathcal{X}$ and $\mathcal{Y}$. Principal angles are a useful tool in describing the relative position and several geometric and metric aspects related with
the subspaces $\mathcal{X}$ and $\mathcal{Y}$ in $\mathbb{C}^d$
(see \cite{DavKah,Hal} and the references therein).
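\medskip\noi As a purely numerical illustration (not needed for the arguments below), principal angles can be computed from the singular values of $X^*Y$ exactly as in their definition:
\begin{verbatim}
import numpy as np

def principal_angles(X, Y):
    # X, Y: matrices with orthonormal columns spanning the two subspaces.
    s = np.linalg.svd(X.conj().T @ Y, compute_uv=False)
    s = np.clip(s, 0.0, 1.0)            # guard against round-off slightly above 1
    return np.sort(np.arccos(s))[::-1]  # angles in non-increasing order
\end{verbatim}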
\section{Main results}\label{sec maj err}
In this section we develop our main results.
The section is divided into three parts;
first we prove \cite[Conjecture 2.1]{ZK} which
establishes a mixed type bound for the error in the (absolute) change
of the Ritz values.
In the second part, we establish connections between the mixed type bounds of the first section
and some a priori bounds for the change of Ritz values conjectured in \cite{AKFEM,AKProxy}.
Finally we take a closer look at Nakatsukasa's $\tan \Theta$ theorem under relaxed conditions from \cite{Nakats} and
obtain an improved version of this result. As a consequence we obtain quadratic a posteriori error bounds
for the change of the Ritz values that improve several known bounds.
Our approach to obtain these results is based on methods from abstract matrix analysis,
so we delay the proofs of some technical results until Section \ref{sec append},
where we have also included several classical results of this area that we will refer to in this section.
\medskip\noi
We begin by introducing the following
\begin{nota}\label{nota1} \rm
Throughout this section we consider the following notation and terminology:
\begin{enumerate}
\item $\mathcal X,\,\mathcal Y\subset \mathbb{C}^d$ denote two subspaces of dimension $k$.
We fix $X,\, Y\in \mathcal M_{d,k}(\mathbb{C})$ such that their columns form orthonormal
bases of $\mathcal{X}$ and $\mathcal{Y}$, respectively.
\item $\Theta(\mathcal{X}\, , \, \mathcal{Y})\in (\mathbb{R}_{\geq 0}^k)^\downarrow$ denotes the vector of principal angles between
the subspaces $\mathcal{X}$ and $\mathcal{Y}$; in this case,
$$\cos(\Theta^\uparrow(\mathcal{X}\, , \, \mathcal{Y}))=s(X^*Y)=(s_1(X^*Y),\ldots,s_k(X^*Y)) \in(\mathbb{R}_{\geq 0}^k)^\downarrow .$$
\item For a (fixed) self-adjoint $A\in\mathcal{H}(d)$ we set
$\rho(X)=X^*AX\in{\cal M}_k(\mathbb{C})$, $R_X=AX-X\rho(X)\in{\cal M}_{d,k}(\mathbb{C})$ and similarly $\rho(Y)$ and $R_Y$ for $Y$.
Notice that $$ R_X=AX-XX^*AX=AX-P_\mathcal{X} AX=P_{\mathcal{X}^\perp} AX\in{\cal M}_{d,k}(\mathbb{C})\,,$$
where $P_\mathcal{X}\in\mathcal{M}_d(\mathbb{C})$ denotes the orthogonal projection onto $\mathcal{X}$ and $\mathcal{X}^\perp$ denotes the orthogonal complement of $\mathcal{X}$. We consider similar
notation and identities for $\mathcal{Y}$.
\item
Let $X_\perp\in {\cal M}_{d\, , \, d-k}(\mathbb{C})$ be such that its columns form an ONB of $\mathcal{X}^\perp$.
Then, the matrix $\left(X,X_\perp\right)\in\mathcal{U}(d)$ and we get
$$\tilde A=
\left(X, X_\perp\right)^*\ A\ \left(X, X_\perp\right)
= \left(\begin{array}{cc} \rho(X)&R_X^*\, X_\perp\\X_\perp^*\, R_X &\rho(X_\perp) \end{array}\right)
\, .
$$
Note that, since $R_X = (I-P_\mathcal{X})\,R_X\,$, then
$s(R_X)= s(X_\perp^*\, R_X )$, so that we can think of
$R_X$ (up to an isometric factor) as the $(2,1)$-block of
$\tilde A$,
in the block matrix representation of
(the unitary conjugate of $A$) $\tilde A$ as above.
\hfill $\triangle$
\end{enumerate}
\end{nota}
\subsection{Rayleigh-Ritz majorization error bounds of the mixed type}\label{sec 3.1}
We adopt Setting \ref{nota1}; moreover, in this subsection
we further assume that $\mathcal{X}$ and $\mathcal{Y}$ are such that $\Theta_1(\mathcal{X}\, , \, \mathcal{Y})<\frac{\pi}{2}$ that is, that $X^*Y\in\mathcal G l(k)$ is invertible.
\medskip\noi
Our first result concerns a submajorization error bound for the distance of eigenvalue lists of self-adjoint matrices:
\begin{theorem}\label{lemmaa3'} Let $C,\, D\in\mathcal{H}(k)$ and let $T\in\mathcal G l(k)$. Then,
\begin{equation}\label{eqn7}
|\lambda(C)-\lambda(D)|\prec_w s(T^{-1})\ s(CT-TD).
\end{equation}
\end{theorem}
\begin{proof} See the Appendix (Section \ref{sec append}).
\end{proof}
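\medskip\noi Although the proof is deferred to the Appendix, the bound \eqref{eqn7} can be sanity-checked numerically; the following sketch (an illustration only, with randomly generated data) is one such check:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
k = 5
C = rng.standard_normal((k, k)); C = (C + C.T) / 2   # self-adjoint
D = rng.standard_normal((k, k)); D = (D + D.T) / 2   # self-adjoint
T = rng.standard_normal((k, k)) + np.eye(k)          # generically invertible

lam = lambda M: np.sort(np.linalg.eigvalsh(M))[::-1] # eigenvalues, decreasing
lhs = np.abs(lam(C) - lam(D))
rhs = (np.linalg.svd(np.linalg.inv(T), compute_uv=False)
       * np.linalg.svd(C @ T - T @ D, compute_uv=False))

# submajorization check: compare partial sums of decreasing rearrangements
print(np.all(np.cumsum(np.sort(lhs)[::-1]) <= np.cumsum(np.sort(rhs)[::-1]) + 1e-10))
\end{verbatim}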
\medskip\noi The following result is \cite[Conjecture 2.1]{ZK} (see also Corollary \ref{cor caso invariante} below).
\begin{theorem}\label{theoremrC1} Under Setting \ref{nota1}, if
$\Theta_1(\mathcal{X}\, , \, \mathcal{Y})<\frac{\pi}{2}\,$ then
\begin{equation}\label{theorem1} |\lambda(\rho(X)) - \lambda(\rho(Y))|\prec_w \frac{s(P_\mathcal{Y}\ R_X) + s(P_\mathcal{X}\ R_Y)}{\cos(\Theta(\mathcal{X}\, , \, \mathcal{Y}))}
\peso{and}\end{equation}
\begin{equation}\label{theorem2} |\lambda(\rho(X)) - \lambda(\rho(Y))|\prec_w [s(P_{\mathcal{X}+\mathcal{Y}}\ R_X) + s(P_{\mathcal{X}+\mathcal{Y}}\ R_Y)] \,\tan(\Theta(\mathcal{X}\, , \, \mathcal{Y}))\,.\end{equation}
\end{theorem}
\begin{proof}
Set $T=X^{\ast}Y$ and notice that, since $\Theta_1(\mathcal{X},\mathcal{Y})<\frac{\pi}{2}$, $T\in\mathcal{M}_k(\mathbb{C})$ is invertible. Using Theorem \ref{lemmaa3'} we get that
\begin{align}\label{eqn9}
|\lambda(\rho(X))-\lambda(\rho(Y))| \prec_w s(T^{-1})\ s(\rho(X)T-T\rho(Y))\, ,
\end{align} where $\rho(X)=X^*AX,\, \rho(Y)=Y^*AY\in \mathcal{H}(k)$. By construction we have that
\begin{equation}\label{eqn10}
s(T^{-1})=\frac{1}{\cos(\Theta(\mathcal{X},\mathcal{Y}))}\in (\mathbb{R}^k_{>0})^{\downarrow}\,.
\end{equation}
Arguing as in \cite[Thm 4.1]{ZK} we notice that
\begin{align*}
\rho(X)T-T\rho(Y)&=X^*A\,XX^*Y-X^*YY^*A\,Y=X^*A\,P_\mathcal{X} Y-X^*P_\mathcal{Y} A\,Y
\\ &=X^*A\, (I-P_{\mathcal{X}^{\perp}})Y-X^*(I-P_{\mathcal{Y}^{\perp}})A\, Y
\\&=X^*A\,Y-X^*A\,P_{\mathcal{X}^{\perp}}Y-X^*A\,Y+X^*P_{\mathcal{Y}^{\perp}}A\,Y=-X^*A\,P_{\mathcal{X}^{\perp}}Y+X^*P_{\mathcal{Y}^{\perp}}A\,Y\,.
\end{align*}
Using that $s(C)=s(C^*)$ for $C\in\mathcal M_k(\mathbb{C})$, we see that
$$
s(X^*A\,P_{\mathcal{X}^{\perp}}Y)=s(Y^*P_{\mathcal{X}^{\perp}}A\,X)=s(P_\mathcal{Y} P_{\mathcal{X}^{\perp}}A\,X)=s(P_\mathcal{Y} R_X)\in(\mathbb{R}_{\geq 0}^k)^\downarrow\,.
$$
Analogously
$
s(X^*P_{\mathcal{Y}^{\perp}}A\,Y)=s(P_\mathcal{X} R_Y)
$. The previous facts together with the sub-additivity property of taking singular values
(item 1 in Theorem \ref{theorem ag}) imply that
\begin{equation}\label{eqn11}
s(\rho(X)T-T\rho(Y))= s(-X^*A\,P_{\mathcal{X}^{\perp}}Y+X^*P_{\mathcal{Y}^{\perp}}A\,Y)\prec_w s(P_\mathcal{X} R_Y)+s(P_\mathcal{Y} R_X)\,.
\end{equation}
Now, if we apply \eqref{eqn10} and \eqref{eqn11} to \eqref{eqn9},
together with item 4 in Lemma \ref{lemma submaj props1},
we get
\eqref{theorem1}.
\medskip\noi
In order to show \eqref{theorem2} we point out that by \cite[Lemma 4.1]{ZK} we get that
\begin{equation} \label{eq para C1y medio}
s(P_\mathcal{X} R_Y ) \prec_w s (P_{\mathcal{X}+\mathcal{Y}}\, R_Y) \sin (\Theta(\mathcal{X}, \mathcal{Y})) \,.
\end{equation}
Since the entries of these vectors are ordered downwards, by Lemma \ref{lemma submaj props1} we deduce that
\begin{equation} \label{eq para C2}
s(P_\mathcal{X} R_Y ) + s(P_\mathcal{Y} R_X)
\prec_w \big(\, s (P_{\mathcal{X}+\mathcal{Y}}\,R_Y )
+ s (P_{\mathcal{X}+\mathcal{Y}}\, R_X) \, \big) \, \sin (\Theta(\mathcal{X}, \mathcal{Y})) \ .
\end{equation}
Hence, using \eqref{theorem1} and \eqref{eq para C2} together with Lemma \ref{lemma submaj props1} we see that
\eqref{theorem2} holds.
\end{proof}
\medskip\noi The fact that \eqref{theorem1} implies \eqref{theorem2} was already observed in \cite{ZK}; we have included
the proof of this fact for the benefit of the reader.
\begin{corollary}\label{cor caso invariante}
Consider Setting \ref{nota1} and assume that $\Theta_1(\mathcal{X}\, , \, \mathcal{Y})<\frac{\pi}{2}$. If we further assume that
$\mathcal{X}$ is $A$-invariant then
\begin{equation}\label{eq coro1} |\lambda(\rho(X)) - \lambda(\rho(Y))|\prec_w \frac{s(P_\mathcal{X}\ R_Y)}{\cos(\Theta(\mathcal{X}\, , \, \mathcal{Y}))}
\quad \text{ and } \end{equation}
\begin{equation}\label{eq coro2} |\lambda(\rho(X)) - \lambda(\rho(Y))|\prec_w s(P_{\mathcal{X}+\mathcal{Y}}\ R_Y) \, \tan(\Theta(\mathcal{X}\, , \, \mathcal{Y}))\,.\end{equation}
\end{corollary}
\begin{proof}
In case $\mathcal{X}$ is $A$-invariant notice that $R_X=0$. The result now follows from Theorem \ref{theoremrC1}.
\end{proof}
\medskip\noi
It is natural to wonder whether we can improve the bounds in the previous results.
As shown in the following example, the submajorization bounds in Theorem \ref{theoremrC1} and Corollary \ref{cor caso invariante} are {\it sharp}.
\begin{exa}\label{exa1}\rm
Let $\lambda=(a,b,c,d)\in\mathbb{R}^4$, where $a<b<c<d$, and consider
$A\in\mathcal{H}(4)$ given by $A=D_\lambda$, i.e. $A$ is the diagonal matrix with main diagonal $\lambda$.
\medskip\noi
Let $\mathcal{X}$ be the $A$-invariant subspace
$\mathcal{X}=\text{span}\{e_1,\,e_2\}$ spanned by the first two elements of the canonical basis of $\mathbb{C}^4$. For $\theta\in (0,\pi/2)$ let
$f_\theta=\cos\theta \, e_2 + \sin \theta \, e_3$ and set $\mathcal{Y}_\theta=\text{span}\{e_1,\,f_\theta\}$. Then,
the principal angles are given by $\Theta(\mathcal{X},\mathcal{Y}_{\theta})=(\theta,0)$.
Let
$$
X=\begin{pmatrix}
1 & 0 \\
0 & 1 \\
0 & 0 \\
0 & 0
\end{pmatrix}
\peso{,}
X_{\perp}=\begin{pmatrix}
0 & 0 \\
0 & 0 \\
1 & 0 \\
0 & 1
\end{pmatrix}
\peso{and}
Y_\theta=\begin{pmatrix}
1 & 0 \\
0 & \cos \theta \\
0 & \sin \theta \\
0 & 0
\end{pmatrix} \,.
$$
It is straightforward to check that $\lambda(X^*AX)=(b,a)$ and that $\lambda(Y_\theta^*AY_\theta)=(b\,\cos^2\theta + c\, \sin^2(\theta) , a)$.
Again, simple computations show that
$$R_{Y_\theta}=
\begin{pmatrix}
0 & 0 \\
0 & (b-c)\,\cos\theta\, \sin^2\theta \\
0 & (c-b)\,\cos^2\theta\, \sin\theta \\
0 & 0
\end{pmatrix} \peso{,} P_\mathcal{X}\, R_{Y_\theta}=
\begin{pmatrix}
0 & 0 \\
0 & (b-c)\,\cos\theta\, \sin^2\theta \\
0 & 0\\
0 & 0
\end{pmatrix} \,.
$$ Hence, $s(P_\mathcal{X}\, R_{Y_\theta})=( (c-b) \cos \theta \, \sin^2\theta, 0)$. Now,
\begin{equation} \label{eq conc theoremC11}
|\lambda(X^*A\,X)-\lambda((Y_\theta)^*A\,Y_\theta)|=((c-b)\,\sin^2\theta,0) \ ,
\end{equation}
\begin{equation} \label{eq conc theoremC12}
\frac{s(P_\mathcal{X}\ R_{Y_\theta})}{\cos(\Theta(\mathcal{X}\, , \, \mathcal{Y}_\theta))} =
((c-b)\,\sin^2\theta,0)\,.
\end{equation}
That is, \eqref{eq coro1} in Corollary \ref{cor caso invariante} becomes an equality in this case.
This also shows that \eqref{theorem1} is sharp, since \eqref{eq coro1} above is a particular case (when $\mathcal{X}$ is $A$-invariant).
Notice that $\mathcal{X}+\mathcal{Y}_\theta=\text{span}\{e_1,e_2,e_3\}$. Therefore,
since $P_{\mathcal{X}+\mathcal{Y}_\theta}\,R_{Y_\theta}=R_{Y_\theta}$ and $s(R_{Y_\theta})
=( (c-b) \cos \theta \, \sin\theta, 0)$,
\begin{equation} \label{eq conc theoremC2}
s(P_{\mathcal{X}+\mathcal{Y}_\theta}\ R_{Y_\theta}) \, \tan(\Theta(\mathcal{X}\, , \, \mathcal{Y}_\theta)) = ( (c-b) \, \sin^2 \theta, 0)\,.
\end{equation} By \eqref{eq conc theoremC11} and \eqref{eq conc theoremC2} we now see that \eqref{eq coro2}
in Corollary \ref{cor caso invariante} becomes an equality in this case.
This also shows that \eqref{theorem2} is sharp, since \eqref{eq coro2} above
is a particular case (when $\mathcal{X}$ is $A$-invariant).
\hfill $\triangle$
\end{exa}
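\medskip\noi The equalities obtained in Example \ref{exa1} can also be verified numerically; the following sketch (an illustration only, for one arbitrary choice of parameters) reproduces \eqref{eq conc theoremC11} and \eqref{eq conc theoremC12}:
\begin{verbatim}
import numpy as np

a, b, c, d, theta = -1.0, 0.5, 2.0, 3.0, 0.3
A = np.diag([a, b, c, d])
X = np.zeros((4, 2)); X[0, 0] = 1.0; X[1, 1] = 1.0
Y = np.zeros((4, 2)); Y[0, 0] = 1.0
Y[1, 1] = np.cos(theta); Y[2, 1] = np.sin(theta)

lam = lambda M: np.sort(np.linalg.eigvalsh(M))[::-1]
lhs = np.abs(lam(X.T @ A @ X) - lam(Y.T @ A @ Y))
print(lhs)                          # approximately ((c-b)*sin(theta)**2, 0)
print((c - b) * np.sin(theta)**2)   # the predicted first entry
\end{verbatim}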
\begin{rem}[Relations between our work and previous results]\rm
In the vector case, that is when $\mathcal{X}$ and $\mathcal{Y}$ are one dimensional spaces, Theorem \ref{theoremrC1}
implies the upper bounds in \cite[Theorem 3.7]{ZAK}, which is one of
the main results of that work (see also Corollary \ref{cor ZAK extend}
and Remark \ref{rem aplic conj tan2}).
\medskip\noi
In \cite{ZK} Knyazev and Zhu obtained several bounds for the absolute change of the Ritz values. Using Setting \ref{nota1}, the authors show (see \cite[Theorem 4.2 and Corollary 4.4]{ZK}) that
\begin{equation}\label{eq ZK1}
|\lambda(\rho(X)) - \lambda(\rho(Y))|^2 \prec_w \frac{\{s(P_\mathcal{Y}\ R_X) + s(P_\mathcal{X}\ R_Y)\}^2 }{\cos^2(\Theta(\mathcal{X}\, , \, \mathcal{Y}))}
\peso{and}
\end{equation}
\begin{equation}\label{eq ZK2} |\lambda(\rho(X)) - \lambda(\rho(Y))|^2 \prec_w \{s(P_{\mathcal{X}+\mathcal{Y}}\ R_X) + s(P_{\mathcal{X}+\mathcal{Y}}\ R_Y)\}^2 \,\tan^2(\Theta(\mathcal{X}\, , \, \mathcal{Y}))\,.
\end{equation}
Using the fact that $f:\mathbb{R}_{\geq 0}\rightarrow \mathbb{R}_{\geq 0}$ given by $f(x)=x^2$
is an increasing and convex function, Remark \ref{convfunction} shows
that \eqref{eq ZK1} and \eqref{eq ZK2} follow from \eqref{theorem1} and
\eqref{theorem2}
from Theorem \ref{theoremrC1}. Similarly, using that
$\cos\Theta_1(\mathcal{X},\mathcal{Y})=\cos\Theta_{\max}(\mathcal{X},\mathcal{Y})\leq \cos\Theta_i(\mathcal{X},\mathcal{Y})$, for $i\in\mathbb{I}_k$, we get that
Theorem \ref{theoremrC1} implies \cite[Theorems 4.1, 4.3]{ZK}.
\medskip\noi
In \cite{ZK} the authors show that their results can be applied in several situations such as: first order and
quadratic a posteriori majorization bounds; bounds for eigenvalues after matrix additive perturbations.
The previous remarks show that our bounds can also be applied in these
settings. Moreover, Theorem \ref{theoremrC1} makes it possible to formalize the arguments related with bounds for eigenvalues
after matrix additive perturbations, and in particular with bounds for eigenvalues
after discarding off-diagonal blocks from \cite[Section 5]{ZK} (see the detailed discussion there).
\hfill $\triangle$
\end{rem}
\medskip\noi
The bounds in Theorem \ref{theoremrC1} can be used to perform a detailed analysis and obtain
better convergence rates for iterative algorithms related with the Rayleigh-Ritz method (see \cite{Parlett,StewSun,Z}).
We will consider such applications elsewhere.
\subsection{Applications: a priori majorization error bounds for Ritz values}\label{subsec applic}
In this section we establish a link between the majorization error bounds of the mixed type obtained in the previous section
and some a priori majorization error bounds considered in \cite{AKFEM,AKProxy}.
\begin{definition}\label{spread}
Let $A\in \mathcal{H}(d)$ and let $\mathcal{Z}\subset \mathbb{C}^d$ be a subspace with $\dim \mathcal{Z}=p$.
We define the (spectral) spread of $A$ relative to $\mathcal{Z}$, denoted
${\rm Spr}(A\, , \, \mathcal{Z})$, by
$$
{\rm Spr}(A,\mathcal{Z})=\lambda(A_\mathcal{Z})-\lambda^\uparrow(A_\mathcal{Z})
=(\lambda_i(A_\mathcal{Z})-\lambda_{p-i+1}(A_\mathcal{Z}))_{i\in\mathbb{I}_p} \in(\mathbb{R}^p)^\downarrow\, ,
$$
where $A_\mathcal{Z}=P_\mathcal{Z}\, A|_\mathcal{Z}\in \mathcal L(\mathcal{Z})$ is a self-adjoint operator (defined in the obvious way).
In case $\mathcal{Z}=\mathbb{C}^d$, we write
${\rm Spr}(A,\mathbb{C}^d)={\rm Spr}(A)$.
\hfill $\triangle$
\end{definition}
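\medskip\noi For concreteness (an illustration only), the spectral spread relative to $\mathcal{Z}={\rm ran}(S)$, with $S$ an isometry onto $\mathcal{Z}$, can be computed as follows:
\begin{verbatim}
import numpy as np

def spectral_spread(A, S):
    # S: matrix with orthonormal columns spanning Z; returns Spr(A, Z).
    lam = np.sort(np.linalg.eigvalsh(S.conj().T @ A @ S))[::-1]
    return lam - lam[::-1]   # (lambda_i - lambda_{p-i+1})_i
\end{verbatim}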
\begin{rem}\rm
Let $A\in {\cal H}(d)$ and let $\mathcal{X},\mathcal{Y}\subset \mathbb{C}^d$ with $\dim (\mathcal{X})=\dim(\mathcal{Y})=k$, and set $p = \dim (\mathcal{X} + \mathcal{Y})$. In what follows we consider the vector
$$\text{Spr}(A,\mathcal{X}+\mathcal{Y}) \, \sin(\Theta(\mathcal{X},\mathcal{Y}))= (\,(\lambda_i(A_{\mathcal{X}+\mathcal{Y}})-\lambda_{p-i+1}(A_{\mathcal{X}+\mathcal{Y}}) )\, \sin(\Theta_i(\mathcal{X},\mathcal{Y}) \,)\,)_{i\in\mathbb{I}_k}\,.$$
We point out that this vector has non-negative entries, which are arranged in non-increasing order (in particular,
$\sin(\Theta_i(\mathcal{X},\mathcal{Y}))=0$ whenever $\lambda_i(A_{\mathcal{X}+\mathcal{Y}})-\lambda_{p-i+1}(A_{\mathcal{X}+\mathcal{Y}})<0$, for $i\in\mathbb{I}_k$); hence,
$ \text{Spr}(A,\mathcal{X}+\mathcal{Y}) \, \sin(\Theta(\mathcal{X},\mathcal{Y}))\in (\mathbb{R}_{\geq 0}^k)^\downarrow$ (see \cite{ZK}). This fact becomes relevant for the conjectures posed
in \eqref{eq conj apr1} and \eqref{eq conj apr2} below.
\hfill $\triangle$
\end{rem}
\begin{rem}[A priori error bounds for changes of Ritz values: conjectures and previous work]\label{rem prepa1}\rm
Let $A\in {\cal H}(d)$ and let $\mathcal{X},\mathcal{Y}\subset \mathbb{C}^d$ with $\dim (\mathcal{X})=\dim(\mathcal{Y})=k$.
In \cite{AKFEM} the authors conjectured that, in general, the following submajorization bound for the Ritz values holds:
\begin{equation}\label{eq conj apr1}
|\lambda(\rho(X))-\lambda(\rho(Y))|\prec_w \text{Spr}(A,\mathcal{X}+\mathcal{Y}) \, \sin(\Theta(\mathcal{X},\mathcal{Y}))\,.
\end{equation} Moreover, in case $\mathcal{X}$ is $A$-invariant, the authors conjectured that
\begin{equation}\label{eq conj apr2}
|\lambda(\rho(X))-\lambda(\rho(Y))|\prec_w \text{Spr}(A,\mathcal{X}+\mathcal{Y}) \, \sin(\Theta(\mathcal{X},\mathcal{Y}))^2\,.
\end{equation}
These conjectures are natural extensions of results from \cite{AKProxy} (that were obtained for $k=1$).
Although \cite[Conjecture 2.1.]{AKFEM} claims the validity of \eqref{eq conj apr1} and \eqref{eq conj apr2} for arbitrary
subspaces $\mathcal{X}$ and $\mathcal{Y}$ such that $\dim \mathcal{X}=\dim \mathcal{Y}$,
such bounds would become relevant in the particular case when the subspace $\mathcal{Y}$ is a (small) perturbation of the
subspace $\mathcal{X}$. In this case, the validity of \eqref{eq conj apr1} and \eqref{eq conj apr2}
would reveal the different orders of approximation of $\rho(X)$ by $\rho(Y)$ in terms of
PABS as well as in terms of the spectral spread of $A$ (i.e. when considering $A$ as well as $\mathcal{X}$ and $\mathcal{Y}$ as variables). Notice that these results would have immediate applications
in the study of numerical stability and convergence of iterative methods related with the Rayleigh-Ritz type algorithms.
\medskip\noi
In \cite[Theorem 2.1.]{AKFEM} the authors showed that, in general,
\begin{equation}\label{eq theorem apr1}
|\lambda(\rho(X))-\lambda(\rho(Y))|\prec_w (\lambda_{\max}(A_{\mathcal{X}+\mathcal{Y}})- \lambda_{\min}(A_{\mathcal{X}+\mathcal{Y}})) \, \sin(\Theta(\mathcal{X},\mathcal{Y}))\,,
\end{equation} while, in case $\mathcal{X}$ is $A$-invariant,
\begin{equation}\label{eq theorem apr2}
|\lambda(\rho(X))-\lambda(\rho(Y))|\prec_w (\lambda_{\max}(A_{\mathcal{X}+\mathcal{Y}})- \lambda_{\min}(A_{\mathcal{X}+\mathcal{Y}})) \, \sin(\Theta(\mathcal{X},\mathcal{Y}))^2\,,
\end{equation} where $A_{\mathcal{X}+\mathcal{Y}}=P_{{\mathcal{X}+\mathcal{Y}}}\ A|_{{\mathcal{X}+\mathcal{Y}}}\in \mathcal L({\mathcal{X}+\mathcal{Y}})$; moreover, in \cite[Theorem 2.2.]{AKFEM} they showed that in the particular
case in which $\mathcal{X}$ is the $A$-invariant subspace corresponding to the $k$ largest eigenvalues of $A$, then
\begin{equation}\label{eq theorem apr3}
0\leq \lambda(\rho(X))-\lambda(\rho(Y))\prec_w (\lambda_i(A_{\mathcal{X}+\mathcal{Y}})- \lambda_{\min}(A_{\mathcal{X}+\mathcal{Y}}))_{i\in\mathbb{I}_k}\, \sin(\Theta(\mathcal{X},\mathcal{Y}))^2\,.
\end{equation} Notice that, \eqref{eq theorem apr3} is a stronger bound than that in \eqref{eq theorem apr2}; yet, it is weaker
than the bound conjectured in \eqref{eq conj apr2}, since
$\text{Spr}_i(A,\mathcal{X}+\mathcal{Y})\leq \lambda_i(A_{\mathcal{X}+\mathcal{Y}})- \lambda_{\min}(A_{\mathcal{X}+\mathcal{Y}})$, for $i\in\mathbb{I}_k$.
\hfill $\triangle$
\end{rem}
\medskip\noi
In what follows we apply Theorem \ref{theoremrC1} and obtain some results related with the conjectures
from \cite{AKFEM} described in \eqref{eq conj apr1} and \eqref{eq conj apr2}. In order to obtain these results, we take a closer look at the quantity $s(P_{\mathcal{X}}\, R_Y)$ for arbitrary $\mathcal{X}$ and $\mathcal{Y}$, as well as in the case where $\mathcal{X}$ is $A$-invariant.
\begin{proposition}\label{prospread}
Let $A\in {\cal H}(d)$ and let $\mathcal{X},\mathcal{Y}\subset \mathbb{C}^d$ with $\dim (\mathcal{X})=\dim(\mathcal{Y})=k$. Then
\begin{equation}\label{eqn12}
s(P_{\mathcal{X}}\, R_Y)\prec_w {\rm Spr}(A,\mathcal{X}+\mathcal{Y}) \, \sin(\Theta(\mathcal{X},\mathcal{Y}))\,.
\end{equation}
\end{proposition}
\begin{proof} See the Appendix (Section \ref{sec append}). \end{proof}
\begin{theorem}\label{theorem apriori nostro}
Let $A\in {\cal H}(d)$, $\mathcal{X},\mathcal{Y}\subset\mathbb{C}^d$ subspaces, $\dim(\mathcal{X})=\dim(\mathcal{Y})=k$.
If $\Theta_1(\mathcal{X},\mathcal{Y})<\frac{\pi}{2}$, then
\begin{eqnarray}
\label{T1spread} |\lambda(\rho(X)) - \lambda(\rho(Y))|&\prec_w &
\frac{2\,{\rm Spr}(A,\mathcal{X}+\mathcal{Y})\, \sin(\Theta(\mathcal{X},\mathcal{Y}))}{\cos(\Theta(\mathcal{X}\, , \, \mathcal{Y}))}
\,.\end{eqnarray}
\end{theorem}
\begin{proof}
Theorem \ref{theoremrC1} establishes that $$|\lambda(\rho(X))-\lambda(\rho(Y))|\prec_w \frac{s(P_{\mathcal{X}}R_{Y})+s(P_{\mathcal{Y}}R_{X})}{\cos(\Theta(\mathcal{X},\mathcal{Y}))}\,.$$
Proposition \ref{prospread} together with Lemma \ref{lemma submaj props1} imply that
$$
\frac{s(P_{\mathcal{X}}R_{Y})+s(P_{\mathcal{Y}}R_{X})}{\cos(\Theta(\mathcal{X},\mathcal{Y}))}
\prec_w \frac{2\,\text{Spr}(A,\mathcal{X}+\mathcal{Y})\, \sin(\Theta(\mathcal{X},\mathcal{Y}))}{\cos(\Theta(\mathcal{X}\, , \, \mathcal{Y}))} \,.$$
The result follows from combining these last two inequalities.
\end{proof}
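\medskip\noi The bound \eqref{T1spread} can be explored numerically; the following sketch (an illustration with random data, not a proof, pairing the $i$-th entries of ${\rm Spr}(A,\mathcal{X}+\mathcal{Y})$ and $\Theta(\mathcal{X},\mathcal{Y})$ as above) checks the submajorization relation for one random instance:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d, k = 8, 3
A = rng.standard_normal((d, d)); A = (A + A.T) / 2
X, _ = np.linalg.qr(rng.standard_normal((d, k)))     # ONB of a random X
Y, _ = np.linalg.qr(rng.standard_normal((d, k)))     # ONB of a random Y

lam = lambda M: np.sort(np.linalg.eigvalsh(M))[::-1]
theta = np.sort(np.arccos(np.clip(
    np.linalg.svd(X.T @ Y, compute_uv=False), 0.0, 1.0)))[::-1]

S, _ = np.linalg.qr(np.hstack([X, Y]))                # ONB of X + Y (generic case)
spr = lam(S.T @ A @ S) - lam(S.T @ A @ S)[::-1]       # Spr(A, X + Y)

lhs = np.abs(lam(X.T @ A @ X) - lam(Y.T @ A @ Y))
rhs = 2 * spr[:k] * np.sin(theta) / np.cos(theta)
print(np.all(np.cumsum(np.sort(lhs)[::-1]) <= np.cumsum(np.sort(rhs)[::-1]) + 1e-10))
\end{verbatim}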
\medskip\noi
The next result illustrates the quadratic dependence of $s(P_{\mathcal{X}}R_{Y})$ on $\sin(\Theta(\mathcal{X},\mathcal{Y}))$ in case $\mathcal{X}$ is $A$-invariant.
\begin{proposition}\label{proSin}
Let $A\in{\cal H}(d)$ and let $\mathcal{X},\mathcal{Y}\subset \mathbb{C}^d$ be subspaces with $\dim(\mathcal{X})=\dim(\mathcal{Y})=k$. Assume that $\mathcal{X}$ is $A$-invariant.
Then,
\begin{equation}\label{eqn21}
s(P_{\mathcal{X}}R_{Y})\prec_w 2\ (\lambda_i(A_{\mathcal{X}+\mathcal{Y}})- \lambda_{\min}(A_{\mathcal{X}+\mathcal{Y}}))_{i\in\mathbb{I}_k} \ \sin^2(\Theta(\mathcal{X},\mathcal{Y}))\,.
\end{equation}
\end{proposition}
\begin{proof}
See the Appendix (Section \ref{sec append}).
\end{proof}
\begin{theorem}\label{theorem bound inv nostro}
Let $A\in {\cal H}(d)$, $\mathcal{X},\mathcal{Y}\subset\mathbb{C}^d$ subspaces, $\dim(\mathcal{X})=\dim(\mathcal{Y})=k$, and assume that $\mathcal{X}$ is $A$-invariant. If $\Theta_1(\mathcal{X},\mathcal{Y})<\frac{\pi}{2}$, then
\begin{equation}\label{invariantsen1} |\lambda(\rho(X)) - \lambda(\rho(Y))|\prec_w
\frac{2\ (\lambda_i(A_{\mathcal{X}+\mathcal{Y}})- \lambda_{\min}(A_{\mathcal{X}+\mathcal{Y}}))_{i\in\mathbb{I}_k}\ \sin^2(\Theta(\mathcal{X},\mathcal{Y})) }{\cos(\Theta(\mathcal{X}\, , \, \mathcal{Y}))} \,.
\end{equation}
\end{theorem}
\begin{proof}
The result follows from Corollary \ref{cor caso invariante} and Proposition \ref{proSin} with an argument similar to that in the proof of Theorem \ref{theorem apriori nostro} above.
\end{proof}
\begin{corollary}\label{cor bounds nostro}
Let $A\in {\cal H}(d)$, $\mathcal{X},\mathcal{Y}\subset\mathbb{C}^d$ subspaces, $\dim(\mathcal{X})=\dim(\mathcal{Y})=k$. If $\Theta_1(\mathcal{X},\mathcal{Y})<\frac{\pi}{2}$, then
$$
|\lambda(\rho(X)) - \lambda(\rho(Y))|\prec_w \frac{2}{\cos(\Theta_1(\mathcal{X},\mathcal{Y}))}\
{\rm Spr}(A,\mathcal{X}+\mathcal{Y})\,\sin(\Theta(\mathcal{X},\mathcal{Y})) \,.
$$
If we assume further that $\mathcal{X}$ is $A$-invariant, then
$$|\lambda(\rho(X)) - \lambda(\rho(Y))|\prec_w
\frac{2}{\cos(\Theta_1(\mathcal{X},\mathcal{Y}))} \ (\lambda_i(A_{\mathcal{X}+\mathcal{Y}})- \lambda_{\min}(A_{\mathcal{X}+\mathcal{Y}}))_{i\in\mathbb{I}_k} \
\sin^2(\Theta(\mathcal{X},\mathcal{Y})) \,. $$
\qed
\end{corollary}
\medskip\noi
We end this section with some remarks concerning the relations among
Theorems \ref{theorem apriori nostro}
and \ref{theorem bound inv nostro}, Corollary \ref{cor bounds nostro}
and the conjectured bounds in \eqref{eq conj apr1} and \eqref{eq conj apr2}. As already mentioned in
Remark \ref{rem prepa1}, the bounds in \eqref{eq conj apr1} and \eqref{eq conj apr2}
would be particularly relevant in case $\mathcal{Y}$ is a (small) perturbation of $\mathcal{X}$ or, in other terms, in case that
$\mathcal{X}$ and $\mathcal{Y}$ are close subspaces (e.g. $\Theta_1(\mathcal{X},\mathcal{Y})$ is small). In order to simplify the discussion, let us assume that
$\Theta_1(\mathcal{X},\mathcal{Y})\leq \pi/4$. We point out that this assumption holds in a number of significant situations
(see for example \cite[Section 5.2.]{ZK}). In this case, if $A\in\mathcal{H}(d)$ then Corollary \ref{cor bounds nostro}
implies that
\begin{equation}\label{eq cons cor conj1}
|\lambda(\rho(X)) - \lambda(\rho(Y))|\prec_w \,(2\,\sqrt 2)\ {\rm Spr}(A,\mathcal{X}+\mathcal{Y})\, \sin(\Theta(\mathcal{X},\mathcal{Y})) \,.
\end{equation} Hence, under the present assumptions ($\Theta_1(\mathcal{X},\mathcal{Y})\leq \pi/4$),
the upper bound in \eqref{eq cons cor conj1} has the conjectured order of approximation (when considering
$A$ as well as the subspaces $\mathcal{X}$ and $\mathcal{Y}$ as variables), up to the constant factor $2\,\sqrt 2$.
\medskip\noi
If we further assume that $\mathcal{X}$ is $A$-invariant then by the same result we get that
\begin{equation}\label{eq cons cor conj2}
|\lambda(\rho(X)) - \lambda(\rho(Y))|\prec_w
\,(2\,\sqrt 2) \ (\lambda_i(A_{\mathcal{X}+\mathcal{Y}})- \lambda_{\min}(A_{\mathcal{X}+\mathcal{Y}}))_{i\in\mathbb{I}_k} \
\sin^2(\Theta(\mathcal{X},\mathcal{Y})) \,.
\end{equation} Again, the upper bound in \eqref{eq cons cor conj2} has the conjectured order of approximation (when considering
$A$ as well as the subspaces $\mathcal{X}$ and $\mathcal{Y}$ as variables), up to the constant factor $2\,\sqrt 2$. Moreover, notice that
this bound holds for an arbitrary $A$-invariant subspace $\mathcal{X}$ (as opposed to the bound in \eqref{eq theorem apr3} from \cite{AKFEM}, which is
shown to hold for special choices of $A$-invariant subspaces $\mathcal{X}$).
\subsection{The tan$\,\Theta$ theorem revisited: improved quadratic a posteriori error bounds}\label{sec 3.3.}
In this section we revisit Nakatsukasa's extension of Davis-Kahan's $\tan(\theta)$ theorem. Our motivation is
the study of an improved version of this result conjectured in \cite{ZK}
(see Corollary \ref{coro conjKZ tan} below).
We first recall the separation hypothesis
for Nakatsukasa's result. As before, in this section we adopt Setting \ref{nota1}.
\begin{definition}\label{sepa DK}\rm
Let $A\in\mathcal{H}(d)$ and let
$\mathcal X,\,\mathcal Y\subset \mathbb{C}^d$ be subspaces
with $\dim \mathcal X=\dim \mathcal Y=k$, such that $\mathcal X$ is $A$-invariant. Let
$[X,X_\perp],\, [Y,Y_\perp]\in\mathcal{U}(d)$ be unitary matrices such that the columns of (the $d\times k$ matrices)
$X$ and $Y$ form ONB's of $\mathcal X$ and $\mathcal Y$ respectively. Given $\delta>0$ we say that $(A\, , \, \mathcal X\, , \, \mathcal Y\, , \, \delta)$
satisfies the Davis-Kahan-Nakatsukasa (DKN) separation property if there exist $a\leq b$ such that
\begin{enumerate}
\item $ \lambda_i(X_\perp^*AX_\perp)=\lambda_i(P_{\mathcal{X}^\perp}\, A\, P_{\mathcal{X}^\perp}) \in[a,b]$, for $i\in\mathbb{I}_{d-k}$;
\item $\lambda_i(Y^*AY)= \lambda_i(P_{\mathcal{Y}}\, A\, P_{\mathcal{Y}}) \in (-\infty, a-\delta]\cup [b+\delta,\infty)$, for $i\in\mathbb{I}_{k}$.
\hfill $\triangle$
\end{enumerate}
\end{definition}
\medskip\noi
Next we state Nakatsukasa's $\tan\Theta$ theorem under relaxed conditions.
\begin{theorem}[\cite{Nakats}]\label{theorem tan Nakats}
Let $A\in\mathcal{H}(d)$, \ $\mathcal X,\,\mathcal Y\subset \mathbb{C}^d$ and let
$\delta>0$ be such that $(A\, , \, \mathcal X\, , \, \mathcal Y\, , \, \delta)$
satisfies the DKN separation property. Then, $\Theta_1(\mathcal{X},\mathcal{Y})<\pi/2$ and
$$ \delta\, \|\tan(\Theta(\mathcal X\, , \, \mathcal Y))\|\leq \| R_Y\| \,,$$
for every unitarily invariant norm $\|\cdot\|$. Equivalently, $\delta\,\tan(\Theta(\mathcal X\, , \, \mathcal Y))\prec_w s(R_Y)$.
\end{theorem}
\begin{rem}\rm
Theorem \ref{theorem tan Nakats} requires the knowledge of the full matrix $A$ in order to bound the (norm of the) vector
$\tan(\Theta(\mathcal{X},\mathcal{Y}))$ from above. Instead, it would be interesting to bound the vector
$\tan(\Theta(\mathcal{X},\mathcal{Y}))$ from above (only) in terms of the self-adjoint operator
$A_{\mathcal{X}+\mathcal{Y}}=P_{\mathcal{X}+\mathcal{Y}} A |_{\mathcal{X}+\mathcal{Y}}
\in \mathcal L(\mathcal{X}+\mathcal{Y})$ (defined in the obvious way).
In the next result we show that the $\tan\Theta$ theorem
mentioned above allows us to obtain such a result. Moreover, we will also see that it is possible to describe separation
hypotheses for $(A_{\mathcal{X}+\mathcal{Y}},\,\mathcal{X},\, \mathcal{Y})$ that are more general than
the DKN separation hypothesis for $(A,\,\mathcal{X},\, \mathcal{Y})$, for which the $\tan\Theta$ theorem holds;
arguing in terms of interlacing inequalities, we can show that these separation hypotheses on $A_{\mathcal{X}+\mathcal{Y}}$ provide better separation constants
than the DKN separation hypotheses on the matrix $A$.
\hfill $\triangle$
\end{rem}
\medskip\noi
We formalize the content of the previous remark, with a small variation in notation, in the following result.
First, we recall some facts related with the relative position of two subspaces.
\begin{rem}\label{rem two subspaces}\rm
Let $\mathcal{X},\, \mathcal{Y}\subset \mathbb{C}^d$ be two subspaces with $\dim\mathcal{X}=\dim\mathcal{Y}=k$. Consider the mutually orthogonal subspaces
$$\mathcal{H}_{00}=\mathcal{X}^\perp\cap\mathcal{Y}^\perp \ , \ \mathcal{H}_{10}=\mathcal{X}\cap\mathcal{Y}^\perp \ , \ \mathcal{H}_{01}=\mathcal{X}^\perp\cap\mathcal{Y} \ , \
\mathcal{H}_{11}=\mathcal{X}\cap\mathcal{Y} \ , $$and $\mathcal{H}_g=\mathbb{C}^d\ominus (\mathcal{H}_{00}\oplus \mathcal{H}_{10}\oplus \mathcal{H}_{01}\oplus \mathcal{H}_{11})$ which is called the {\it generic
part} of the pair $(\mathcal{X},\mathcal{Y})$. Each of these five (possibly zero)
subspaces reduces each projection $P_\mathcal{X}$ and $P_\mathcal{Y}$. Moreover, the subspaces $\mathcal{X}_g=\mathcal{X}\cap \mathcal{H}_g$ and $\mathcal{Y}_g=\mathcal{Y}\cap \mathcal{H}_g$
are in {\it generic position} so that ${\cal H}_g=\mathcal{X}_g+\mathcal{Y}_g$. For details of this well known construction and several fundamental results
see \cite{Hal}.
\hfill $\triangle$
\end{rem}
\begin{theorem}\label{theorem tantan mejorado1}
Let $A\in\mathcal{H}(d)$, and let
$\mathcal X,\,\mathcal Y\subset \mathbb{C}^d$ be such that $\dim \mathcal{X}=\dim\mathcal{Y}=k$. Let $A_{\mathcal{X}+\mathcal{Y}}=S^* A S\in\mathcal{H}(p)$,
where $p=\dim(\mathcal{X}+\mathcal{Y})$ and $S\in{\cal M}_{d,p}(\mathbb{C})$ is such that its
columns form an ONB for $\mathcal{X}+\mathcal{Y}$. Then,
\begin{enumerate}
\item If $\delta>0$ is such that $(A\, , \, \mathcal X\, , \, \mathcal Y\, , \, \delta)$
satisfies the DKN separation property then there exists $\delta\,'\geq \delta$ such that
$(A_{\mathcal{X}+\mathcal{Y}}\, , \, S^*\mathcal X\, , \, S^*\mathcal Y\, , \, \delta\,')$ satisfies the DKN separation property.
\item If $\delta\,'>0$ is such that
$(A_{\mathcal{X}+\mathcal{Y}}\, , \, S^*\mathcal X\, , \, S^*\mathcal Y\, , \, \delta')$ satisfies the DKN separation property, then
\begin{equation}\label{eq theorem tan tan mejorado}
\delta\,'\, \|\tan(\Theta(\mathcal X\, , \, \mathcal Y))\|\leq \| A_{\mathcal{X}+\mathcal{Y}}\,Y_S-Y_S\,(Y_S^*A_{\mathcal{X}+\mathcal{Y}}\,Y_S)\|
=\|P_{\mathcal X+\mathcal Y} \ R_Y\| \end{equation}
for every unitarily invariant norm $\|\cdot\|$, where $Y_S=S^*Y\in{\cal M}_{p,k}(\mathbb{C})$.
\end{enumerate}
\end{theorem}
\proof We first show item 1. Let $X,\, Y\in \mathcal M_{d,k}(\mathbb{C})$ be such that their columns form orthonormal
bases of $\mathcal{X}$ and $\mathcal{Y}$, respectively. By hypothesis, there exist $a\leq b$ such that: for $i\in\mathbb{I}_{d-k}$ and
$j\in\mathbb{I}_{k}$ we have that
$$\lambda_i(X_\perp^*AX_\perp)\in[a,b] \peso{and}
\lambda_j(Y^*AY) \in (-\infty, a-\delta]\cup [b+\delta,\infty)\,,
where $X_\perp\in{\cal M}_{d,d-k}(\mathbb{C})$ is such that its columns form an ONB of $\mathcal{X}^\perp$.
Let $\mathcal{Z}=\mathcal{X}+\mathcal{Y}$ and notice that $S\in{\cal M}_{d,p}(\mathbb{C})$ is an isometry from $\mathbb{C}^p$ onto $\mathcal{Z}$. Moreover, the matrix
$S^*A\,S\in\mathcal{H}(p)$. Similarly, $X_S=S^*X,\, Y_S=S^*Y\in {\cal M}_{p,k}$
are isometries from $\mathbb{C}^k$ onto
$S^*\mathcal{X},\, S^*\mathcal{Y}\subseteq \mathbb{C}^p$, respectively.
Consider the mutually orthogonal subspaces
$$\mathcal{H}_{11}=\mathcal{X}\cap \mathcal{Y} \peso{,} \mathcal{X}_g= \mathcal{H}_g\cap \mathcal{X} \peso{and} \mathcal{X}_{g^\perp}= \mathcal{H}_g\ominus \mathcal{X}_g \,,$$
where $\mathcal{H}_g$ is the subspace of $\mathbb{C}^d$ corresponding to the generic part of the pair $(\mathcal{X}\, , \, \mathcal{Y})$ (see Remark \ref{rem two subspaces}).
By Theorem \ref{theorem tan Nakats} we have that $\Theta_1(\mathcal{X},\mathcal{Y})<\pi/2$ so then, $\mathcal{X}^\perp\cap \mathcal{Y}=\{0\}=\mathcal{X}\cap\mathcal{Y}^\perp$.
Thus, $$\mathcal{X}=\mathcal{H}_{11}\oplus \mathcal{X}_g \peso{,} \mathcal{Z}=\mathcal{H}_{11}\oplus \mathcal{X}_g\oplus \mathcal{X}_{g^\perp}\peso{and} \mathcal{X}_{g^\perp}=\mathcal{Z}\ominus \mathcal{X}\,.$$
Let $X'\in M_{d,(p-k)}(\mathbb{C})$ be such that its columns form an orthonormal
basis of $\mathcal{X}_{g^\perp}\subset \mathcal{X}^\perp$. Then, $X'_S=S^*\,X'\in{\cal M}_{p,(p-k)}(\mathbb{C})$ is an isometry
from $\mathbb{C}^{p-k}$ onto $S^*\mathcal{X}_{g^\perp}=(S^*\mathcal{X})^\perp\subseteq \mathbb{C}^p$.
To check the DKN separation property for
$(A_{\mathcal{X}+\mathcal{Y}}\, , \, S^*\mathcal X\, , \, S^*\mathcal Y)$ we consider the eigenvalues of
$$ (X'_S)^* (S^*A\,S)\, X'_S= (X')^* \, S\,S^*\, A\, S\, S^*\, X'= (X')^* \, A\, X'\in \mathcal{H}(p-k)\, ,$$ since
$SS^*=P_\mathcal{Z}\in \mathcal{M}_d(\mathbb{C})$, $P_\mathcal{Z} \, X'=X'$ and $ (X')^*\, P_\mathcal{Z} =(X')^*$. Hence, we now see that
$$ \lambda_i((X'_S)^* (S^*A\,S)\, X'_S) = \lambda_i(P_{\mathcal{X}_{g^\perp}} A\, P_{\mathcal{X}_{g^\perp}})\peso{for} i\in \mathbb{I}_{p-k}\,.$$
Since $\mathcal{X}_{g^\perp}\subset \mathcal{X}^\perp$ we have that $P_{\mathcal{X}_{g^\perp}} A\, P_{\mathcal{X}_{g^\perp}}$
is a compression of $P_{\mathcal{X}^\perp} A\, P_{\mathcal{X}^\perp}$. Using the interlacing inequalities
for compressions of self-adjoint matrices (see \cite{bhatia}), we get that
if $\lambda_i((P_{\mathcal{X}^\perp} A\, P_{\mathcal{X}^\perp}))\in [a,b]$, for $i\in \mathbb{I}_{d-k}$, then
\begin{equation} \label{eq theorem tan pulenta1}
\lambda_i(P_{\mathcal{X}_{g^\perp}} A\, P_{\mathcal{X}_{g^\perp}})\in [a,b] \peso{for} i\in \mathbb{I}_{p-k}\,.
\end{equation} On the other hand, notice that
$$ Y_S^* \,(S^*A\, S) \, Y_S= Y^* P_\mathcal{Z} A\, P_\mathcal{Z}\, Y= Y^*A\, Y$$
since, as before, $SS^*=P_\mathcal{Z}$, $P_\mathcal{Z} Y=Y$ and $Y^* P_\mathcal{Z}= Y^*$. Therefore, we get that
\begin{equation} \label{eq theorem tan pulenta2}
\lambda_i(Y_S^* \,(S^*A\, S) \, Y_S)=\lambda_i(Y^*A\,Y)\in (-\infty, a-\delta]\cup [b+\delta,\infty)
\peso{for} i\in\mathbb{I}_{k}\, .
\end{equation}
Item 1 now follows from \eqref{eq theorem tan pulenta1} and \eqref{eq theorem tan pulenta2} and
the fact that $S^*\mathcal{X}\subseteq \mathbb{C}^p$ is, by construction,
an $A_{\mathcal{X}+\mathcal{Y}}$-invariant subspace.
\medskip\noi In order to show item 2, we fix a unitarily invariant norm $\|\cdot\|$. Using that $\mathcal{X},\,\mathcal{Y}\subset \mathcal{Z}$ and the fact that
$S^*$ is an isometry from $\mathcal{Z}$ onto $\mathbb{C}^p$, we see that $\Theta(\mathcal{X},\mathcal{Y})=\Theta(S^*\mathcal{X},S^*\mathcal{Y})$.
Then, an application of Nakatsukasa's $\tan\Theta$ theorem (Theorem \ref{theorem tan Nakats})
to the self-adjoint matrix $S^*AS\in\mathcal{H}(p)$ and subspaces $S^*\mathcal{X},\, S^*\mathcal{Y}\subseteq \mathbb{C}^p$ shows that
$$ \delta\,'\, \|\tan(\Theta(\mathcal X\, , \, \mathcal Y))\|\leq \| A_{\mathcal{X}+\mathcal{Y}}\,Y_S-Y_S\,(Y_S^*A_{\mathcal{X}+\mathcal{Y}}\,Y_S)\,\| \,,$$
where $Y_S=S^*Y\in{\cal M}_{p,k}$ is an isometry from $\mathbb{C}^k$ onto $S^*\mathcal{Y}$. We notice that
\begin{eqnarray*}
A_{\mathcal{X}+\mathcal{Y}}\,Y_S-Y_S\,(Y_S^*A_{\mathcal{X}+\mathcal{Y}}\,Y_S) &=& S^* A\,S \, S^* Y - S^* Y \,(Y^*S (S^* A\, S) S^*Y)\\
&=& S^* \,(A \,Y - Y\, (Y^* A \,Y))\, ,
\end{eqnarray*} where we have used that $SS^*=P_\mathcal{Z}$, $P_\mathcal{Z}\, Y= Y$ and $Y^*\, P_\mathcal{Z}=Y^*$.
Hence, it follows that
$$
\| A_{\mathcal{X}+\mathcal{Y}}\,Y_S-Y_S\,(Y_S^*A_{\mathcal{X}+\mathcal{Y}}\,Y_S)\|=\| P_\mathcal{Z} \,(A\, Y - Y\, (Y^* A \,Y))\|
=\| P_{\mathcal{X}+\mathcal{Y}}\, R_Y\|\ .
$$
\hfill $\square$
\begin{rem}\rm
With the notation of Theorem \ref{theorem tantan mejorado1} and using Remark \ref{Domkyfan},
\eqref{eq theorem tan tan mejorado} is equivalent
to the majorization relation
$$\delta\,'\, \tan(\Theta(\mathcal X\, , \, \mathcal Y))\prec_w s( A_{\mathcal{X}+\mathcal{Y}}\,Y_S-Y_S\,(Y_S^*A_{\mathcal{X}+\mathcal{Y}}\,Y_S))
=s(P_{\mathcal X+\mathcal Y} \ R_Y)\, $$ in terms of the separation constant $\delta'$ for $A_{\mathcal{X}+\mathcal{Y}}=S^*A\,S$,
$S^*\mathcal{X}$ and $S^*\mathcal{Y}$. \hfill $\triangle$
\end{rem}
\medskip\noi
Consider the notation in Theorem \ref{theorem tantan mejorado1}. Let $\delta>0$ be
such that $(A\, , \, \mathcal X\, , \, \mathcal Y\, , \, \delta)$ satisfies the DKN separation property. Given a unitarily invariant norm $\|\cdot\|$, Theorem \ref{theorem tan Nakats} allows us to bound $\|\tan \Theta(\mathcal{X},\mathcal{Y})\|$ from above by
\begin{equation} \label{eq rem boun tan1}
\|\tan \Theta(\mathcal{X},\mathcal{Y})\|\leq \frac{\|R_Y\|}{\delta}\,.
\end{equation} On the other hand, by item 1 in Theorem \ref{theorem tantan mejorado1}
there exists $\delta'\geq \delta>0$ such
that $(A_{\mathcal{X}+\mathcal{Y}}\, , \, S^*\mathcal X\, , \, S^*\mathcal Y\, , \, \delta')$ satisfies the DKN separation property, so that
item 2 of that result gives the upper bound
\begin{equation} \label{eq rem boun tan2}
\|\tan \Theta(\mathcal{X},\mathcal{Y})\|\leq \frac{\|P_{\mathcal{X}+\mathcal{Y}}\, R_Y\|}{\delta\,'}\,.
\end{equation} Since $\|P_{\mathcal{X}+\mathcal{Y}}\, R_Y\|\leq \|R_Y\|$ and $\delta\leq \delta\,'$, we immediately see that
the upper bound in \eqref{eq rem boun tan2} improves the classical bound in \eqref{eq rem boun tan1}.
In order to compare these two bounds in some more detail, let us consider the following
\begin{exa}\label{exa2}\rm
Let $\tilde \lambda=(a,b,d,c)\in\mathbb{R}^4$, where $a<b<c<d$, and let $\tilde A\in\mathcal{H}(4)$ be given by
$\tilde A=D_{\tilde \lambda}$. For the purposes of this
example, we consider the real parameter $c\in (b,d)$ as variable (while $a,\,b,\,d$ are fixed).
\medskip\noi
Let $\mathcal{X},\,\mathcal{Y}_\theta\subset \mathbb{C}^4$ be as in Example \ref{exa1} i.e.
$\mathcal{X}=\text{span}\{e_1,\,e_2\}$ and $\mathcal{Y}_\theta=\text{span}\{e_1,\,f_\theta\}$. Recall that
$\Theta(\mathcal{X},\mathcal{Y}_{\theta})=(\theta,0)$. In particular,
$\tan \Theta(\mathcal{X},\mathcal{Y}_{\theta})=(\tan \theta,0)$ in this case.
\medskip\noi
It is clear that
$\mathcal{X}+\mathcal{Y}_{\theta}=\text{span}\{e_1,\,e_2,\, e_3\}$.
Let
$$
X=\begin{pmatrix}
1 & 0 \\
0 & 1 \\
0 & 0 \\
0 & 0
\end{pmatrix}
\peso{,}
X_{\perp}=\begin{pmatrix}
0 & 0 \\
0 & 0 \\
1 & 0 \\
0 & 1
\end{pmatrix}
\peso{and}
Y_\theta=\begin{pmatrix}
1 & 0 \\
0 & \cos \theta \\
0 & \sin \theta \\
0 & 0
\end{pmatrix} \,.
$$
Then, we have that $\lambda(Y_\theta^* \tilde A Y_\theta)=(b\,\cos^2\theta + d\, \sin^2(\theta) , a)$, while
$\lambda(X_\perp ^* \tilde A\, X_\perp)=(d,c)$. Therefore, if we
let $\theta_0(c)=\theta_0=\arcsin\left(\sqrt{\frac{c-b}{d-b}}\right)$ and set
$$\delta_\theta= c-(b\,\cos ^2 \theta + d\, \sin^2\theta)>0 \peso{for} 0<\theta<\theta_0\, , $$ then
$(\tilde A,\mathcal{X},\mathcal{Y}_\theta,\delta_\theta)$ satisfies the DKN separation property; moreover, $\delta_\theta$ is the optimal
(largest) separation constant, and the separation property holds only for $0<\theta<\theta_0$ in this case. Again, simple computations show that
$s(R_{Y_\theta})=( (d-b) \cos \theta \, \sin\theta, 0)$.
\medskip\noi
Now, \eqref{eq rem boun tan1} obtained from Theorem \ref{theorem tan Nakats} becomes
\begin{equation} \label{eq conc tan 1}
\tan \theta\leq \frac{(d-b) \cos \theta \, \sin\theta}{c-(b\,\cos ^2 \theta + d\, \sin^2\theta)} \peso{for}
0<\theta<\theta_0\,.
\end{equation} Notice that $\lim_{c\rightarrow b^+} \theta_0=0$, i.e., the range of $\theta$ for which we can apply the bound in
\eqref{eq conc tan 1} tends to become small. In the limit case in which $b=c$ (i.e. multiple eigenvalues)
we cannot apply the bound \eqref{eq conc tan 1} (the separation constant in this case is $\delta_0=0$). Finally, if we consider the limit case in which $\theta$ becomes small, then
the upper bound is comparable with $(\frac{d-b}{c-b})\,\tan \theta \ (>\tan \theta)$.
\medskip\noi
On the other hand, $(\mathcal{X}+\mathcal{Y}_{\theta})\ominus \mathcal{X}=\mathbb{C}\,e_3$, the subspace spanned by $e_3$. In this case, if we let
$X'=(0,0,1,0)^t$, it is clear that $\lambda((X'_S)^* \tilde A \, X'_S)=d$. Therefore, if we let
$\delta'_\theta= d- (b\,\cos ^2 \theta + d\, \sin^2\theta)= (d-b)\,\cos^2\theta>0$, for $\theta\in (0,\pi/2)$, we get that
$(\tilde A_{\mathcal{X}+\mathcal{Y}_{\theta}},S^*\mathcal{X},S^*\mathcal{Y}_{\theta},\delta'_\theta)$
satisfies the DKN separation property, where $S\in{\cal M}_{4,3}(\mathbb{C})$ is the matrix whose columns are the first
three elements in the canonical basis. In this case
we have that $$\frac{s_1(P_{\mathcal{X}+\mathcal{Y}_{\theta}} \, R_{Y_{\theta}})}{\delta'_\theta}= \frac{(d-b) \cos \theta \, \sin\theta}{(d-b)\,\cos^2\theta}=\tan \theta\,,$$
and hence, the upper bound in \eqref{eq rem boun tan2} coincides with $\tan \theta$ (where $\tan \Theta(\mathcal{X},\mathcal{Y}_{\theta})=(\tan \theta, 0)$) i.e. the upper bound is sharp. Notice that the bound is applicable for every
$\theta\in (0,\pi/2)$.
\hfill $\triangle$
\end{exa}
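The following Python snippet (not part of the original text) reproduces the comparison of Example \ref{exa2} numerically; the concrete values of $a,b,c,d$ and $\theta$ are illustrative assumptions only.
\begin{verbatim}
import numpy as np

# Sketch: numerical check of Example exa2; the values of a, b, c, d and theta
# are illustrative assumptions, not taken from the text.
a, b, c, d = 0.0, 1.0, 2.0, 3.0
theta = 0.3                                    # needs 0 < theta < theta_0
A = np.diag([a, b, d, c])                      # \tilde A = D_{\tilde\lambda}

X = np.zeros((4, 2)); X[0, 0] = X[1, 1] = 1.0  # ONB of span{e1, e2}
Y = np.zeros((4, 2)); Y[0, 0] = 1.0            # ONB of span{e1, f_theta}
Y[1, 1], Y[2, 1] = np.cos(theta), np.sin(theta)

R = A @ Y - Y @ (Y.T @ A @ Y)                  # residual R_{Y_theta}
P = np.diag([1.0, 1.0, 1.0, 0.0])              # projection onto X + Y

# largest principal angle between X and Y from the singular values of X^T Y
cosines = np.linalg.svd(X.T @ Y, compute_uv=False)
tan_theta = np.tan(np.arccos(np.clip(cosines.min(), -1.0, 1.0)))

delta1 = c - (b*np.cos(theta)**2 + d*np.sin(theta)**2)   # classical constant
delta2 = (d - b)*np.cos(theta)**2                        # improved constant

bound1 = np.linalg.norm(R, 2) / delta1         # bound (eq conc tan 1)
bound2 = np.linalg.norm(P @ R, 2) / delta2     # improved bound, equals tan(theta)
print(tan_theta, bound1, bound2)
\end{verbatim}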
\medskip\noi
The following result was conjectured in \cite{ZK}.
\begin{corollary}\label{coro conjKZ tan}
Let $A\in\mathcal{H}(d)$,
$\mathcal X,\,\mathcal Y\subset \mathbb{C}^d$ and $\delta>0$ be such that $(A\,, \mathcal X\,,\mathcal Y\,,\delta)$
satisfies the DKN separation property. Then,
$$ \delta\, \|\tan(\Theta(\mathcal X\, , \, \mathcal Y))\|\leq \|P_{\mathcal X+\mathcal Y} \ R_Y\| \,,$$
for every unitarily invariant norm $\| \cdot \|$.
\end{corollary}
\begin{proof}Let
$S\in{\cal M}_{d,p}(\mathbb{C})$ be such that its columns form an ONB for $\mathcal{X}+\mathcal{Y}$.
By item 1 in Theorem \ref{theorem tantan mejorado1}, there exists $\delta\,'\geq \delta$ such that
$(S^* A\, S \, , \, S^* \mathcal X\, , \, S^* \mathcal Y\, , \, \delta\,')$ satisfies the DKN separation property.
By item 2 of the same result, we have that
$$ \delta\, \|\tan(\Theta(\mathcal X\, , \, \mathcal Y))\|\leq \delta\,'\, \|\tan(\Theta(\mathcal X\, , \, \mathcal Y))\|
\leq \|P_{\mathcal X+\mathcal Y} \ R_Y\| \,.$$
\end{proof}
\medskip\noi
Finally, we get the following quadratic a posteriori error bound for the simultaneous
approximation of eigenvalues of $A$ by the Ritz values corresponding to Rayleigh quotients for
which a DKN separation property holds.
\begin{theorem}\label{theorem aplic tantan}
Let $A\in\mathcal{H}(d)$,
$\mathcal X,\,\mathcal Y\subset \mathbb{C}^d$ and $\delta>0$ be such that $(A\, , \, \mathcal X\, , \, \mathcal Y\, , \, \delta)$
satisfies the DKN separation property. Then, for every unitarily invariant norm $\|\cdot\|$ we have that
$$
\|\lambda (\rho(X))-\lambda(\rho(Y))\|\leq \frac{\| P_{\mathcal{X}+\mathcal{Y}} \, R_Y\|^2}{\delta}\,.
$$
\end{theorem}
\begin{proof}
This is a consequence of Corollary \ref{cor caso invariante} and Theorem \ref{theorem tantan mejorado1}.
\end{proof}
\medskip\noi
Theorem \ref{theorem aplic tantan} allows us to obtain the following extension of
\cite[Theorem 5.3]{ZAK} (see Remark \ref{rem aplic conj tan2} below), which is a quadratic a posteriori majorization error bound for the simultaneous approximation of consecutive eigenvalues.
\begin{corollary}\label{cor ZAK extend}
Let $A\in\mathcal{H}(d)$ and let $\mathcal Y\subset \mathbb{C}^d$ be such that:
\begin{enumerate}
\item $\lambda_1(Y^*AY)<\lambda_j(A)$, where $j\in\mathbb{I}_{d-k}$ is the smallest such index;
\item $\lambda_i(Y^*AY)\geq \lambda_{i+j}(A)$, for $i\in\mathbb{I}_k$.
\end{enumerate}
Let ${\cal U}$ be the $A$-invariant space spanned by
the eigenvectors associated with $\lambda_i(A)$, for $1\leq i\leq j$, and set $\mathcal{X}=(I-P_U)\mathcal{Y}$.
If $\eta=\lambda_j(A)-\lambda_1(Y^*AY)>0$ then
$$
\| (\lambda_{i+j}(A))_{i\in\mathbb{I}_k}-\lambda(\rho(Y))\|\leq \frac{\| P_{\mathcal{X}+\mathcal{Y}} \, R_Y\|^2}{\eta}\,,
$$ for every unitarily invariant norm $\|\cdot\|$.
\end{corollary}
\begin{proof}
Let ${\cal V}={\cal U}+\mathcal{Y}$ and notice that ${\cal U}\cap\mathcal{Y}=\{0\}$; hence, $p=\dim{\cal V}=\dim {\cal U}+k$ i.e. $j=\dim{\cal U}=p-k$.
Moreover, ${\cal V}\ominus {\cal U}=(I-P_{\cal U})\mathcal{Y}=\mathcal{X}$; then, in particular,
$\dim\mathcal{X}=\dim\mathcal{Y}$ and ${\cal V}\ominus \mathcal{X}={\cal U}$. Also notice that $\Theta_1(\mathcal{X},\mathcal{Y})<\pi/2$, since otherwise we would have that
${\cal U}\cap \mathcal{Y}\neq \{0\}$, because ${\cal V}\ominus \mathcal{X}={\cal U}$.
\medskip\noi
Let $V\in {\cal M}_{d,p}(\mathbb{C})$ be such that its columns form an ONB of ${\cal V}$
and set $A_V=V^*AV\in\mathcal{H}(p)$. Similarly, let $X,\, Y\in{\cal M}_{d,k}(\mathbb{C}), \, U\in {\cal M}_{d,p-k}(\mathbb{C})$
be such that their columns form ONB's of $\mathcal{X}$, $\mathcal{Y}$ and ${\cal U}$ respectively; set $X_V=V^*X,\, Y_V=V^*Y\in{\cal M}_{p,k}(\mathbb{C})$ and $U_V=V^*U\in {\cal M}_{p,p-k}(\mathbb{C})$.
Then, the columns of $U_V$ span ${\cal U}_V\subset \mathbb{C}^p$, an $A_V$-invariant subspace. In particular, the columns of $X_V$ span
$\mathcal{X}_V\subset \mathbb{C}^p$, which is also an $A_V$-invariant subspace. In this case $\mathcal{X}_V^\perp={\cal U}_V$ and
$\Theta_1(\mathcal{X}_V,\mathcal{Y}_V)=\Theta_1(\mathcal{X},\mathcal{Y})<\pi/2$, where $\mathcal{Y}_V\subset\mathbb{C}^p$
is the space spanned by the columns of $Y_V$.
Notice that, by construction $\lambda_i(Y_V^*A_V\, Y_V)=\lambda_i(Y^*A\, Y)$, for $i\in\mathbb{I}_k$.
Since $\mathcal{X}\subset {\cal U}^\perp$, by the interlacing inequalities for
compressions of self-adjoint matrices and item 2 above, we get that
for $i\in\mathbb{I}_k$,
\begin{equation} \label{eq theorem ultimo momento1}
\lambda_i(X_V^*A_V\, X_V)= \lambda_i(X^*A\, X)\leq \lambda_i(A_{U_\perp})=\lambda_{j+i}(A)\leq \lambda_i(Y_V^*A_V\, Y_V)\, ,
\end{equation}
where $U_\perp\in {\cal M}_{d,d-j}(\mathbb{C})$ is such that its columns form an ONB for ${\cal U}^\perp$.
On the other hand, by hypothesis $(A_V, \mathcal{X}_V,\mathcal{Y}_V,\eta)$ satisfies the DKN separation property
(recall that $\mathcal{X}_V^\perp={\cal U}_V$). Hence, by Theorem \ref{theorem aplic tantan}
we conclude that
\begin{equation} \label{eq theorem ultimo momento2}
\| \lambda(X_V^*A_V\, X_V)-\lambda(Y_V^*A_V\, Y_V)\| \leq \frac{\| \, P_{\mathcal{X}_V+\mathcal{Y}_V} \, (A_V\, Y_V - Y_V\, (Y_V^*A_V\, Y_V)) \,\|^2 }{\eta}\,.
\end{equation}
By \eqref{eq theorem ultimo momento1} we get that
$$| (\lambda_{i+j}(A))_{i\in\mathbb{I}_k}-\lambda(Y_V^*A_V\, Y_V) |\prec_w | \lambda(X_V^*A_V\, X_V)-\lambda(Y_V^*A_V\, Y_V)| \,.$$
On the other hand, arguing as in the proof of Theorem \ref{theorem tantan mejorado1} we see that
$$ \| \, P_{\mathcal{X}_V+\mathcal{Y}_V} \, (A_V\, Y_V - Y_V\, (Y_V^*A_V\, Y_V)) \,\|= \| \, P_{\mathcal{X}+\mathcal{Y}} \, R_Y\,\|\,.$$
The result follows from these last facts together with \eqref{eq theorem ultimo momento2} and Remark \ref{Domkyfan}.
\end{proof}
\begin{rem}\label{rem aplic conj tan2}\rm
We mention that the hypothesis in item 1 in Corollary \ref{cor ZAK extend} is that
there exists an eigenvalue $\beta$ of $A$ such that $\lambda_1(Y^*AY)<\beta$.
Indeed, in this case we can apply the interlacing inequalities and get that
$\lambda_i(Y^*AY)\geq \lambda_{d-k+i}(A)$, for $i\in\mathbb{I}_k$. Therefore, $\beta=\lambda_j(A)$ for some
$1\leq j\leq d-k$.
\medskip\noi
The hypothesis in item 2 is rather restrictive and difficult to check in general.
Nevertheless, we mention two cases in which the hypotheses in Corollary \ref{cor ZAK extend} can be easily checked:
\begin{enumerate}
\item In case the hypothesis in item 1 holds for $j=d-k$, by the interlacing inequalities we have
$$ \lambda_i(Y^*AY)\geq \lambda_{i+d-k}(A) \peso{for} i\in\mathbb{I}_k \, ,$$ so the hypothesis in item
2 automatically holds.
\item
In case $k=1$, that is, if $\mathcal{Y}=\mathbb{C}\,y$ for a unit norm
vector $y\in\mathbb{C}^d$, the hypotheses become the existence of $j\in\mathbb{I}_{d-1}$ such that
$ \lambda_{j+1}(A)\leq \langle A\, y,\, y\rangle<\lambda_j(A)$; then, Corollary \ref{cor ZAK extend} implies that
$$ 0\leq \langle A y,\, y\rangle - \lambda_{j+1}(A)\leq \frac{\| P_{\mathcal{X}+\mathcal{Y}}( Ay-\langle A y,\, y\rangle\, y)\|^2}{\lambda_{j}(A)-\langle A y,\, y\rangle}\, ,$$
where $\mathcal{X}=\mathbb{C}\,x$, for $x=(I-P_U)y\in\mathbb{C}^d$; this is \cite[Theorem 5.3]{ZAK}. As explained in \cite{ZAK}, Corollary
\ref{cor ZAK extend} encodes several known bounds related with eigenvalue estimation even when $k=1$.
\hfill $\triangle$
\end{enumerate}
\end{rem}
\section{Appendix}\label{sec append}
Here we collect several well-known results about majorization, used throughout our work.
The first result deals with submajorization relations between singular values of arbitrary matrices in $\mathcal{M}_d(\mathbb{C})$.
For detailed proofs of these results and general references in majorization theory see \cite{bhatia,HJ,MaOlAr}.
For $A\in \mathcal{M}_d(\mathbb{C})$ we denote by $\text{re}(A) = \frac{A+A^*}{2}\in \mathcal{H}(d)$.
\begin{theorem}\label{theorem ag}\rm Let $C,\,D\in \mathcal{M}_d(\mathbb{C})$. Then,
\begin{enumerate}
\item $s(C+D)\prec_w s(C)+s(D)$; \hfill(Lidskii's additive property)
\item $s(\text{re}(C))\prec_w s(C)$;
\item $s(CD)\prec_w s(C)\, s(D)$; \hfill(Lidskii's multiplicative property)
\item If we assume that $CD\in\mathcal{H}(d)$ then $s(CD)\prec_w s(\text{re}(DC))$.
\end{enumerate} \qed
\end{theorem}
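As a quick numerical illustration (not contained in the paper), the following Python sketch checks items 1 and 3 above on random matrices; weak majorization is verified through partial sums of decreasing rearrangements.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def s(M):                                     # singular values, decreasing
    return np.linalg.svd(M, compute_uv=False)

def weakly_majorized(x, y, tol=1e-10):
    # x prec_w y iff all partial sums of the decreasing rearrangements satisfy <=
    return np.all(np.cumsum(np.sort(x)[::-1]) <= np.cumsum(np.sort(y)[::-1]) + tol)

n = 6
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
D = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

print(weakly_majorized(s(C + D), s(C) + s(D)))   # item 1: True
print(weakly_majorized(s(C @ D), s(C) * s(D)))   # item 3: True
\end{verbatim}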
\medskip\noi
For hermitian matrices we have the following majorization relations
\begin{theorem}\label{theorem ah} \rm Let $C,\, D\in \mathcal{H}(d)$. Then,
\begin{enumerate}
\item $\lambda(C)-\lambda(D)\prec \lambda(C-D)\prec\lambda(C)-\lambda^{\uparrow}(D)$;
\item $|\lambda(C)-\lambda(D)|\prec_w s(C-D)$;
\item Let $\mathcal{P}=\{P_j\}_{j=1}^r$ be a system of projections (i.e. they are mutually orthogonal projections on $\mathbb{C}^d$
such that $\sum_{i=1}^r P_i=I$). If $C_{\mathcal{P}}(C)=\sum_{i=1}^r P_iC P_i$, then $\lambda(C_{\mathcal{P}}(C))\prec \lambda(C)$.
\end{enumerate} \qed
\end{theorem}
\medskip\noi
In the next result we describe elementary but useful properties of (sub)majorization between real vectors.
\begin{lemma}\label{lemma submaj props1}\rm Let $x,\, y,\,z\in \mathbb{R}^k$. Then,
\begin{enumerate}
\item $x^\downarrow + y^\uparrow\prec x+y\prec x^\downarrow+y^\downarrow$;
\item If $x\prec_w y$ and $y,\, z\in(\mathbb{R}^k)^\downarrow$ then $x+z\prec_w y+z$;
\end{enumerate}
If we assume further that $x,\,y,\, z\in \mathbb{R}_{\geq 0}^k$ then,
\begin{enumerate}
\item[3.] $x^\downarrow\, y^\uparrow\prec_w x\, y\prec_w x^\downarrow\, y^\downarrow$;
\item[4.] If $x\prec_w y$ and $y,\, z\in (\mathbb{R}_{\geq 0}^k)^\downarrow$ then $x\, z\prec_w y\, z$.\qed
\end{enumerate}
\end{lemma}
\begin{proposition}\label{hat trick como en el futbol}\rm
Let $1\leq k<d$ and let $E\in {\cal M}_{k,d-k}(\mathbb{C})$. Then $$ \hat{E}=\begin{pmatrix}
0&E\\ E^*&0
\end{pmatrix}\in {\cal H}(d) \peso{and} \lambda(\hat E)= (s(E),-s(E^*)^\downarrow)\in(\mathbb{R}^{d})^\downarrow\,.$$\qed
\end{proposition}
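A short numerical check of the proposition (dimensions chosen arbitrarily for illustration): the nonzero eigenvalues of $\hat E$ come in pairs $\pm$ the singular values of $E$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
k, d = 3, 7
E = rng.standard_normal((k, d - k))

hatE = np.block([[np.zeros((k, k)), E], [E.T, np.zeros((d - k, d - k))]])
eigs = np.sort(np.linalg.eigvalsh(hatE))[::-1]     # eigenvalues, decreasing
sv = np.linalg.svd(E, compute_uv=False)            # singular values of E

expected = np.concatenate([sv, np.zeros(d - 2 * k), -sv[::-1]])
print(np.allclose(eigs, expected))                 # True
\end{verbatim}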
\begin{theorem}[\cite{AKFEM}]\label{theoremr Fem4.6}\rm
Let $\mathcal{X},\, \mathcal{Y}\subset \mathbb{C}^d$ be such that $\dim(\mathcal{X})=\dim(\mathcal{Y})=k$. Then
$$
\lambda(P_{\mathcal{X}}P_{\mathcal{Y}^{\perp}}P_{\mathcal{X}})= s(P_{\mathcal{X}}P_{\mathcal{Y}^{\perp}}P_{\mathcal{X}}) =
s^2(P_{\mathcal{Y}}P_{\mathcal{X}^{\perp}})=s^2(P_{\mathcal{X}^{\perp}}P_{\mathcal{Y}})=(\sin^2(\Theta(\mathcal{X},\mathcal{Y})),0_{d-k}).$$\qed
\end{theorem}
\medskip\noi
Notice that item 2 below is Theorem \ref{lemmaa3'} from Section \ref{sec maj err}.
\begin {theorem}\label{theorem mas pulenta que 31}
Let $C,\,D\in \mathcal{H}(k)$. Then,
\begin{enumerate}
\item if $T\in \mathcal G l(k)^+$, then
$s(C-D)\prec_w s(T^{-1})\, s(CT-TD)\,.$
\item if $T\in \mathcal G l(k)$, then
$|\lambda(C)-\lambda(D)|\prec_w s(T^{-1})\, s(CT-TD)$.
\end{enumerate}
\end{theorem}
\begin{proof}
We first show item 1. Since $T$ is positive and invertible,
using Theorem \ref{theorem ag} (item 3) we get that
\begin{eqnarray*}
s(C-D)&=&s(CT^{\frac{1}{2}}T^{-\frac{1}{2}}-T^{-\frac{1}{2}}T^{\frac{1}{2}}D)=
s(T^{-\frac{1}{2}}(T^{\frac{1}{2}}CT^{\frac{1}{2}}-T^{\frac{1}{2}}DT^{\frac{1}{2}})T^{-\frac{1}{2}})\\
&\prec_w& s(T^{-\frac{1}{2}})^2 \, s(T^{\frac{1}{2}}CT^{\frac{1}{2}}-T^{\frac{1}{2}}DT^{\frac{1}{2}}) =
s(T^{-1}) \, s(T^{\frac{1}{2}}( C- D)\,T^{\frac{1}{2}})\,.
\end{eqnarray*}
By Theorem \ref{theorem ag} (items 2 and 4) and the fact that
$\text{re}(DT)=\text{re}(TD)$ we obtain that
\begin{equation}\label{eqn6 majorization}
s(T^{\frac{1}{2}}(C-D)T^{\frac{1}{2}})\prec_w s(\text{re}[(C-D)T])=s(\text{re}[CT-T D])\prec_w s(CT-TD).
\end{equation}
By the previous inequalities and Lemma \ref{lemma submaj props1} we see that
\begin{equation}\label{proof lemma3'}
s(C-D)\prec_w s(T^{-1})\, s(CT-T D)\,.
\end{equation}
\medskip\noi In order to show item 2, consider a representation of $T$ given by
$T=U\Sigma V^*$, where $U,\, V\in {\cal U}(k)$ are unitary matrices and $\Sigma\in{\cal M}_k(\mathbb{C})$ is the diagonal
matrix with main diagonal $s(T)\in \mathbb{R}_{\geq 0}^k$ (such a representation is the singular value decomposition of $T$); note that $\Sigma$ is
positive definite and invertible.
Using item 2 in Theorem \ref{theorem ah} and (the already proved) item 1 of the statement we get
\begin{align}\label{eqn8}
|\lambda(C)-\lambda(D)|&= |\lambda(U^*CU)-\lambda(V^*DV)|\prec_w s(U^*CU-V^*DV)\nonumber
\\&\prec_w s(\Sigma^{-1})\, s(U^*CU\Sigma-\Sigma V^*DV)=s(T^{-1})\, s(U^*(CT-TD)V)\nonumber \\ \nonumber &=s(T^{-1})\, s(CT-TD)\,.
\end{align}
\end{proof}
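For completeness, here is an illustrative numerical check (not from the paper) of item 2 of the theorem just proved, with random Hermitian $C,D$ and a random (almost surely invertible) $T$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
k = 5

def herm(M):
    return (M + M.conj().T) / 2

def weakly_majorized(x, y, tol=1e-10):
    return np.all(np.cumsum(np.sort(x)[::-1]) <= np.cumsum(np.sort(y)[::-1]) + tol)

C = herm(rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k)))
D = herm(rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k)))
T = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))

lam = lambda M: np.sort(np.linalg.eigvalsh(M))[::-1]     # eigenvalues, decreasing
lhs = np.abs(lam(C) - lam(D))
rhs = np.linalg.svd(np.linalg.inv(T), compute_uv=False) * \
      np.linalg.svd(C @ T - T @ D, compute_uv=False)
print(weakly_majorized(lhs, rhs))                        # True
\end{verbatim}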
\medskip\noi
In what follows we re-state and prove two propositions of Section \ref{subsec applic}.
\medskip
\noindent
{\scshape Proposition \ref{prospread}.}
Let $A\in {\cal H}(d)$ and let $\mathcal{X},\mathcal{Y}\subset \mathbb{C}^d$ with $\dim (\mathcal{X})=\dim(\mathcal{Y})=k$. Then
\begin{equation}\label{eqn12 bis}
s(P_{\mathcal{X}}\, R_Y)\prec_w {\rm Spr}(A,\mathcal{X}+\mathcal{Y}) \, \sin(\Theta(\mathcal{X},\mathcal{Y}))\,.
\end{equation}
\begin{proof}
We begin with a simple reduction argument. In order to describe this reduction it will be convenient to consider
matrices in terms of the linear operators that they induce. Hence, given $A\in\mathcal{M}_d(\mathbb{C})$, we consider $A\in\mathcal L(\mathbb{C}^d)$
(defined in the obvious way). The advantage in considering $A\in\mathcal L(\mathbb{C}^d)$ is that we can get different block matrix
representations of $A$ (considered as an operator) with respect to orthogonal decompositions $\mathbb{C}^d={\cal V}\oplus{\cal V}^\perp$ for
a (proper) subspace ${\cal V}\subset \mathbb{C}^d$, in the usual manner. We now proceed as follows:
Let $\mathcal{Z}=\mathcal{X}+\mathcal{Y}$ with $\dim \mathcal{Z}=p$, and consider the matrix representations with respect to the decomposition $\mathbb{C}^d=\mathcal{Z}\oplus \mathcal{Z}^\perp$:
$$P_\mathcal{X}=\begin{pmatrix} P^\mathcal{X} & 0 \\ 0 & 0\end{pmatrix} \ \, , \, \
P_\mathcal{Y}=\begin{pmatrix} P^\mathcal{Y} & 0 \\ 0 & 0\end{pmatrix} \peso{and} A=\begin{pmatrix} A_{\mathcal{Z}} & * \\ * & *\end{pmatrix}\,,
$$ where $P^\mathcal{X},\, P^\mathcal{Y},\, A_\mathcal{Z}=P_{\mathcal{Z}} A|_\mathcal{Z}\in \mathcal L(\mathcal{Z})$ are self-adjoint operators. In this case we have
$$ P_{\mathcal{X}} \, (A\, P_{\mathcal{Y}} -P_{\mathcal{Y}}\, A\, P_{\mathcal{Y}}) = \begin{pmatrix} P^{\mathcal{X}} \, (A_\mathcal{Z} \, P^{\mathcal{Y}} -P^{\mathcal{Y}}\, A_\mathcal{Z}\, P^{\mathcal{Y}}) & 0 \\ 0 & 0\end{pmatrix}\,.$$
On the other hand, a simple calculation shows that
$$(s(P_{\mathcal{X}}R_{Y}),0_{d-k})=s(P_{\mathcal{X}} \, (A\, P_{\mathcal{Y}} -P_{\mathcal{Y}}\, A\, P_{\mathcal{Y}}) )\in(\mathbb{R}^d_{\geq 0})^\downarrow\,.$$
Hence, $(s(P_{\mathcal{X}}R_{Y}),0_{p-k})= s(P^{\mathcal{X}} \, (A_\mathcal{Z} \, P^{\mathcal{Y}} -P^{\mathcal{Y}}\, A_\mathcal{Z}\, P^{\mathcal{Y}}))=s(P^{\mathcal{X}} (I_\mathcal{Z}- P^ \mathcal{Y}) \, A_\mathcal{Z}\, P^{\mathcal{Y}})$. Thus, we can assume further that $\mathbb{C}^d=\mathcal{Z}=\mathcal{X}+\mathcal{Y}$ and show that
\begin{equation}\label{eqn a probar pro1}
(s(P_{\mathcal{X}}\, R_Y),0_{d-k})=s(P_{\mathcal{X}}\, (P_{\mathcal{Y}^\perp} A\, P_\mathcal{Y}))\prec_w (\text{Spr}(A) \, \sin(\Theta(\mathcal{X},\mathcal{Y})),0_{d-k})\, .
\end{equation}
Now using item 3 of Theorem \ref{theorem ag} (Lidskii's multiplicative property),
\begin{equation}\label{eqn19}
s(P_{\mathcal{X}}P_{\mathcal{Y}^\perp}AP_{\mathcal{Y}})=s(P_{\mathcal{X}}P_{\mathcal{Y}^\perp} P_{\mathcal{Y}^\perp} A P_{\mathcal{Y}})\prec_w s(P_{\mathcal{X}}P_{\mathcal{Y}^\perp})\, s(P_{\mathcal{Y}^\perp}AP_{\mathcal{Y}}).
\end{equation}
First, notice that, by Theorem \ref{theoremr Fem4.6}, we have that $s(P_{\mathcal{X}}P_{\mathcal{Y}^\perp})=(\sin(\Theta(\mathcal{X},\mathcal{Y})),0_{d-k})$.
On the other hand, consider the matrix representation induced by the decomposition $\mathbb{C}^d=\mathcal{Y}\oplus \mathcal{Y}^\perp$:
\begin{equation}\label{eqn13}
A=\begin{pmatrix}
A_{11}&A_{21}^*\\
A_{21}&A_{22}
\end{pmatrix} \ \text{ and set } \ A_1:=\begin{pmatrix}
A_{11}&0\\
0&A_{22}
\end{pmatrix} \ \text{ , } \ A_2:=\begin{pmatrix}
0&A_{21}^*\\
A_{21}&0
\end{pmatrix}\,.
\end{equation}
Then, we have that $A=A_1+A_2$. Now, $A_1$ is a pinching of $A$ (associated with the system of projections $\{P_{\mathcal{Y}}\, , \, P_{\mathcal{Y}^\perp}\}$),
so that $\lambda(A_1)\prec \lambda(A)$ and hence
\begin{equation}\label{eqn14}-\lambda^{\uparrow}(A_1)\prec -\lambda^{\uparrow}(A)\,.
\end{equation}
Using Lidskii's additive property for $A_2=A-A_1$ (see item 1 in Theorem \ref{theorem ah})
\begin{equation}\label{eqn15}
\lambda(A_2)\prec\lambda(A)-\lambda^{\uparrow}(A_1)\,.
\end{equation}
Combining \eqref{eqn14} and \eqref{eqn15}, we obtain
\begin{equation}\label{eqn16}
\lambda(A_2)\prec \lambda(A)-\lambda^{\uparrow}(A)=\text{Spr}(A)\in \mathbb{R}^{d}\,.
\end{equation}
By Proposition \ref{hat trick como en el futbol},
we get that $\lambda(A_2)=(s(A_{21}),-s(A_{21}^*))^\downarrow$;
in particular, $s(A_{21})=(\lambda_i(A_2))_{i\in \mathbb{I}_k}$.
Now, $s(P_{\mathcal{Y}^\perp}AP_{\mathcal{Y}})=(s(A_{21}),0_{d-k})$; thus, we see that
\begin{equation}\label{eqn17}
s(P_{\mathcal{Y}^\perp}AP_{\mathcal{Y}})=(s(A_{21}),0_{d-k})=((\lambda_i(A_2))_{i\in \mathbb{I}_k},0_{d-k})\prec_w ((\text{Spr}_i(A))_{i\in\mathbb{I}_k},0_{d-k})\, ,
\end{equation} where $\text{Spr}(A)=(\text{Spr}_i(A))_{i\in\mathbb{I}_d}$. Using \eqref{eqn19} and \eqref{eqn17} together with Lemma \ref{lemma submaj props1} we finally get that
$$ s(P_{\mathcal{X}}P_{\mathcal{Y}^\perp}AP_{\mathcal{Y}})\prec_w (\text{Spr}(A) \, \sin(\Theta(\mathcal{X},\mathcal{Y})),0_{d-k})\in (\mathbb{R}_{\geq 0}^d)^\downarrow\,.$$
Now the result follows from the last submajorization relation, by considering the first $k$ entries of both vectors.
\end{proof}
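The following sketch (not part of the paper) checks the statement numerically in the reduced situation $\mathbb{C}^d=\mathcal{X}+\mathcal{Y}$ used in the proof, with $\text{Spr}(A)=\lambda(A)-\lambda^{\uparrow}(A)$ as above; the dimensions and matrices are arbitrary test data.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
d, k = 4, 2                    # generically X + Y = C^d for these dimensions

def onb(M):                    # orthonormal basis of the column space of M
    return np.linalg.qr(M)[0]

def weakly_majorized(x, y, tol=1e-10):
    return np.all(np.cumsum(np.sort(x)[::-1]) <= np.cumsum(np.sort(y)[::-1]) + tol)

A = rng.standard_normal((d, d)); A = (A + A.T) / 2
X = onb(rng.standard_normal((d, k)))
Y = onb(rng.standard_normal((d, k)))

R = A @ Y - Y @ (Y.T @ A @ Y)                            # residual R_Y
lhs = np.linalg.svd(X @ (X.T @ R), compute_uv=False)     # s(P_X R_Y)

cosines = np.clip(np.linalg.svd(X.T @ Y, compute_uv=False), 0.0, 1.0)
sines = np.sqrt(1.0 - cosines**2)[::-1]                  # sin Theta, decreasing
lam = np.sort(np.linalg.eigvalsh(A))[::-1]
spr = (lam - lam[::-1])[:k]                              # first k entries of Spr(A)

print(weakly_majorized(lhs, spr * sines))                # True
\end{verbatim}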
\medskip
\noindent
{\scshape Proposition \ref{proSin}.}
Let $A\in{\cal H}(d)$, $\mathcal{X},\mathcal{Y}\subset \mathbb{C}^d$ subspaces with $\dim(\mathcal{X})=\dim(\mathcal{Y})=k$. Assume that $\mathcal{X}$ is $A$-invariant.
Then,
\begin{equation}\label{eqn21 bis}
s(P_{\mathcal{X}}R_{Y})\prec_w 2\ (\lambda_i(A_{\mathcal{X}+\mathcal{Y}})- \lambda_{\min}(A_{\mathcal{X}+\mathcal{Y}}))_{i\in\mathbb{I}_k} \ \sin^2(\Theta(\mathcal{X},\mathcal{Y})).
\end{equation}
\begin{proof}
Arguing as in the proof of Proposition \ref{prospread}, we can assume further that $\mathbb{C}^d=\mathcal{X}+\mathcal{Y}$. With this assumption, we consider first
the case where $A\in\mat^+$ and show that
\begin{equation}\label{eqn21ap}
s(P_{\mathcal{X}}R_{Y})\prec_w 2\,(\lambda_i(A))_{i\in\mathbb{I}_k}\,\sin^2(\Theta(\mathcal{X},\mathcal{Y}))\, .
\end{equation}
Indeed, the $A$-invariance of $\mathcal{X}$ allows us to write $A=P_{\mathcal{X}}AP_{\mathcal{X}}+P_{\mathcal{X}^\perp}AP_{\mathcal{X}^\perp}$. With
this decomposition in mind, using the fact that $(s(P_{\mathcal{X}}R_{Y}),0_{d-k})=s(P_{\mathcal{X}}P_{\mathcal{Y}^\perp}AP_{\mathcal{Y}})$, we have that
\begin{align}\label{eqn22}
\nonumber s(P_{\mathcal{X}}P_{\mathcal{Y}^\perp}A\,P_{\mathcal{Y}})&=s(P_{\mathcal{X}}P_{\cY\orto} P_{\mathcal{X}}A \,P_{\mathcal{X}} P_{\mathcal{Y}}+P_{\mathcal{X}}P_{\cY\orto} P_{\cX\orto} A \,P_{\cX\orto} P_{\mathcal{Y}})
\\& \nonumber\prec_w
s(P_{\mathcal{X}}P_{\cY\orto} P_{\mathcal{X}}A\,P_{\mathcal{X}}P_{\mathcal{Y}})+s(P_{\mathcal{X}}P_{\cY\orto} A \,P_{\cX\orto} P_{\mathcal{Y}})
\ \stackrel{\mbox{\tiny{def}}}{=}\ M \ .
\end{align}
Using item 3 of Theorem \ref{theorem ag} (Lidskii's multiplicative property),
the fact that $0_d\leq s(P_{\mathcal{X}}\, P_{\mathcal{Y}})\leq \mathds{1}_d$ and Theorem \ref{theoremr Fem4.6}, we get
\begin{align}
\nonumber
M &
\prec_w\nonumber
s(P_{\mathcal{X}}P_{\cY\orto} P_{\mathcal{X}})\, s(A)+
s(P_{\mathcal{X}}P_{\cY\orto})\, s(A) \, s(P_{\cX\orto} P_{\mathcal{Y}})
\\& \nonumber \prec_w
2\,\lambda(A)\,(\sin^2(\Theta(\mathcal{X},\mathcal{Y})),0_{d-k})\in(\mathbb{R}_{\geq 0}^d)^\downarrow\,,
\end{align}
since $A\in\mat^+$ is positive semi-definite. The result now follows from the previous facts.
\medskip\noi In general, for $A\in{\cal H}(d)$ consider the auxiliary matrix $\tilde A=A-\lambda_{\min}(A)\, I\in\mat^+$.
Notice that
$$
R_Y(\tilde A)= \tilde A\, Y- Y(Y^*\tilde A\, Y)= A\, Y- Y(Y^* A\, Y) =R_Y
\,,
$$ and $\lambda(\tilde A)=\lambda(A)-\lambda_{\min}(A)\, \mathds{1}_d$. The result now follows from these facts and from
\eqref{eqn21ap} applied to
$\tilde A$.
\end{proof}
\medskip\noi
{\bf Acknowledgment.} We would like to thank the reviewers of the manuscript for providing
several useful comments that helped us improve the presentation of the results herein.
| {
"timestamp": "2020-07-10T02:00:57",
"yymm": "1905",
"arxiv_id": "1905.06998",
"language": "en",
"url": "https://arxiv.org/abs/1905.06998",
"abstract": "A priori, a posteriori, and mixed type upper bounds for the absolute change in Ritz values of self-adjoint matrices in terms of submajorization relations are obtained. Some of our results prove recent conjectures by Knyazev, Argentati, and Zhu, which extend several known results for one dimensional subspaces to arbitrary subspaces. In addition, we improve Nakatsukasa's version of the $\\tan \\Theta$ theorem of Davis and Kahan. As a consequence, we obtain new quadratic a posteriori bounds for the absolute change in Ritz values.",
"subjects": "Functional Analysis (math.FA); Numerical Analysis (math.NA)",
"title": "Majorization bounds for Ritz values of self-adjoint matrices",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.978712649448364,
"lm_q2_score": 0.724870282120402,
"lm_q1q2_score": 0.7094397143204417
} |
https://arxiv.org/abs/2202.00568 | Stochastic 2D Signal Generative Model with Wavelet Packets Basis Regarded as a Random Variable and Bayes Optimal Processing | This study deals with two-dimensional (2D) signal processing using the wavelet packet transform. When the basis is unknown, the number of candidate bases increases in exponential order with respect to the signal size. Previous studies do not consider the basis as a random variable. Therefore, a cost function needs to be used to select a basis. However, this method is often a heuristic and a greedy search because it is impossible to search all the candidates for a huge number of bases. Therefore, it is difficult to evaluate the entire signal processing under a criterion and also it does not always guarantee the optimality of the entire signal processing. In this study, we propose a stochastic generative model in which the basis is regarded as a random variable. This makes it possible to evaluate the entire signal processing under a unified criterion, i.e., the Bayes criterion. Moreover, we can derive an optimal signal processing scheme that achieves the theoretical limit. This derived scheme shows that all the bases should be combined according to the posterior instead of selecting a single basis. Although exponential order calculations are required for this scheme, we have derived a recursive algorithm for this scheme, which successfully reduces the computational complexity from the exponential order to the polynomial order. | \section{Introduction}
This study deals with two-dimensional (2D) signal processing using the wavelet packet transform. Specifically, we perform 2D signal processing based on statistical decision theory (see e.g. \cite{berger}) with the Bayes risk function as the evaluation criterion.
The wavelet packet transform has been applied in various fields, and a lot of research has been done on it in recent years, for example, on denoising (see e.g. \cite{denoising_study1,denoising_study2}), compression (see e.g. \cite{compression_study1}), classification (see e.g. \cite{classification_study1}), and so on. In any case, these studies have difficulty in finding an optimal basis because the number of candidate bases increases in exponential order with respect to the size of the signal.
Most previous studies do not consider the basis as a random variable. Therefore, some cost function, such as the Shannon entropy \cite{denoising_study1}, needs to be used to search for and select a basis. However, this approach is often a heuristic and greedy search because it is impossible to examine all the candidates among a huge number of bases. Therefore, it is difficult to evaluate the entire signal processing under a single criterion, and it does not always guarantee the optimality of the entire signal processing.
In this study, we propose a stochastic generative model in which the basis is regarded as a random variable. In other words, we consider Bayesian modeling of the basis. This makes it possible to evaluate the entire signal processing under a criterion called the Bayes risk function in the framework of statistical decision theory \cite{berger}. Moreover, we can derive an optimal signal processing scheme that achieves the theoretical limit by using well-known theorems in the framework of statistical decision theory. In the derived scheme, all the bases are weighted by their posterior probabilities. This shows that no single basis should be chosen under the Bayes criterion.
In this study, we use this stochastic generative model for denoising. However, the scheme that achieves the theoretical limit has a problem: the amount of computation increases exponentially with respect to the size of the signal.
To solve this problem, we have derived a recursive algorithm that utilizes the properties of the prior distribution assumed for the basis. This algorithm successfully reduces the computational complexity from the exponential order to the polynomial order without loss of optimality.
In Section \ref{experiment} we conduct two numerical experiments on the derived algorithm. The first is an experiment on the posterior distribution of the basis. In this experiment, we confirm that the posterior probability of the true basis is high. The second is an experiment to examine the value of the Bayes risk function. In this experiment, we compare the value of the Bayes risk function of the proposed method with that of denoising methods under some fixed wavelet packet bases to confirm the effectiveness of the proposed method.
\section{Proposed model}
In this section, we define the Walsh wavelet packet basis (see e.g. \cite{text}) and propose a stochastic model. In the following, we assume the 2D signal size is $L\times L=2^{d_{\max}}\times 2^{d_{\max}}$ $(d_{\max}\in \mathbb{N}\cup \{0\})$.
\subsection{2D wavelet packets \cite{text}}
\begin{definition}
($w_{i,j}(n)$)\\
The function $w_{i,j}:\mathbb{Z}\to \mathbb{R}$ is defined as follows.
\begin{align}
w_{0,0}(n)&\coloneqq
\begin{cases}
1&(n=0)\\
0&(\text{otherwise}),
\end{cases}
\\
w_{i+1,2j}(n)&\coloneqq2^{-1/2}w_{i,j}(n)+2^{-1/2}w_{i,j}(n-2^i),\\
w_{i+1,2j+1}(n)&\coloneqq2^{-1/2}w_{i,j}(n)-2^{-1/2}w_{i,j}(n-2^i),
\end{align}
where $i\in \{0,1,\cdots,d_{\max}\}$ and $j\in \{0,1,\cdots,2^i-1\}$.
\end{definition}
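A direct recursive implementation of the sequences $w_{i,j}$ may clarify the definition. The following Python sketch (an illustration, not part of the paper) stores $w_{i,j}$ on its support $\{0,\dots,2^i-1\}$.
\begin{verbatim}
import numpy as np

def walsh_packet(i, j):
    """Values of w_{i,j} on its support {0, 1, ..., 2**i - 1}."""
    if i == 0:                                 # w_{0,0} is the unit impulse
        return np.array([1.0])
    w = walsh_packet(i - 1, j // 2)            # parent sequence
    z = np.zeros(2 ** (i - 1))
    sign = 1.0 if j % 2 == 0 else -1.0         # even child: sum, odd child: difference
    return (np.concatenate([w, z]) + sign * np.concatenate([z, w])) / np.sqrt(2.0)

# e.g. the four sequences at level i = 2 form an orthonormal family
W = np.stack([walsh_packet(2, j) for j in range(4)])
print(np.allclose(W @ W.T, np.eye(4)))         # True
\end{verbatim}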
\begin{definition}
($w_{i,j_0,j_1}(n_0,n_1)$)\\
The function $w_{i,j_0,j_1}:\mathbb{Z} \times \mathbb{Z}\to \mathbb{R}$ is defined as follows.
\begin{align}
w_{i,j_0,j_1}(n_0,n_1)&\coloneqq w_{i,j_0}(n_0)w_{i,j_1}(n_1),
\end{align}
where $i\in \{0,1,\cdots,d_{\max}\}$ and $j_0,j_1\in \{0,1,\cdots,2^i-1\}.$
\end{definition}
\begin{definition}
($W_{i,j_0,j_1,k_0,k_1}$)\\
The matrix $W_{i,j_0,j_1,k_0,k_1}\in \mathbb{R}^{L\times L}$ is defined as follows.
\begin{align}
&W_{i,j_0,j_1,k_0,k_1} \nonumber\\
&\coloneqq(w_{i,j_0,j_1}(n_0-2^ik_0,n_1-2^ik_1))_{n_0,n_1\in \{0,1,\cdots,L-1\}},
\end{align}
where $k_0,k_1\in \{0,1,\cdots,L/2^i-1\}.$
\end{definition}
\begin{definition}
($\mathcal{W}_{i,j_0,j_1}\subseteq \mathbb{R}^{L\times L}$)\\
Let $\mathcal{W}_{i,j_0,j_1}\subseteq \mathbb{R}^{L\times L}$ be the space of 2D signals that consist of a linear combination of $\{W_{i,j_0,j_1,k_0,k_1}\}_{k_0,k_1\in \{0,1,\cdots,L/2^i-1\}}$.
\end{definition}
\begin{definition}
($\bm{w}_{i,j_0,j_1,k_0,k_1}$)\\
Let $\bm{w}_{i,j_0,j_1,k_0,k_1}$ be the vertical vector whose components are those of $W_{i,j_0,j_1,k_0,k_1}$ rearranged in raster scan order.
\end{definition}
Under the above definition, the following property holds.
\begin{prop}
($\mathcal{W}_{0,0,0}$)\\
The following relationship holds
\begin{align}
\mathcal{W}_{0,0,0}&=\mathbb{R}^{L\times L}.
\end{align}
\end{prop}
\begin{prop}
(Orthonormal basis of $\mathcal{W}_{i,j_0,j_1}$)\\
$\{W_{i,j_0,j_1,k_0,k_1}\}_{k_0,k_1\in\{0,1,\cdots,L/2^i-1\}}$ forms an orthonormal basis for $\mathcal{W}_{i,j_0,j_1}$.
\end{prop}
\begin{prop}
(Decomposition of space)\\
The following relationship holds.
\begin{align}
\mathcal{W}_{i,j_0,j_1}=&\hspace{0.1cm} \mathcal{W}_{i+1,2j_0,2j_1}\oplus \mathcal{W}_{i+1,2j_0,2j_1+1}
\nonumber \\
&\oplus \mathcal{W}_{i+1,2j_0+1,2j_1}\oplus \mathcal{W}_{i+1,2j_0+1,2j_1+1}.
\end{align}
\end{prop}
Note that $\oplus$ denotes the orthogonal direct sum.
From the aforementioned properties, the entire space of 2D signals can be represented as an orthogonal direct sum of the subspaces corresponding to the leaf nodes of a full quadtree\footnote{All nodes have 4 child nodes or no child node.} with $\mathcal{W}_{0,0,0}$ as the root node. An orthonormal basis of the entire space is obtained by collecting the bases of these subspaces.
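This decomposition can be checked numerically. The sketch below (illustrative, not from the paper) builds the shifted 2D functions $W_{i,j_0,j_1,k_0,k_1}$ for $L=4$ and $i=1$ and verifies that each shift family is orthonormal and that the four subspaces together span $\mathcal{W}_{0,0,0}$.
\begin{verbatim}
import numpy as np
from itertools import product

def walsh_packet(i, j):                        # w_{i,j} on {0, ..., 2**i - 1}
    if i == 0:
        return np.array([1.0])
    w = walsh_packet(i - 1, j // 2)
    z = np.zeros(2 ** (i - 1))
    sign = 1.0 if j % 2 == 0 else -1.0
    return (np.concatenate([w, z]) + sign * np.concatenate([z, w])) / np.sqrt(2.0)

def basis_vector(i, j0, j1, k0, k1, L):
    """W_{i,j0,j1,k0,k1} flattened in raster-scan (row-major) order."""
    M = np.zeros((L, L))
    block = np.outer(walsh_packet(i, j0), walsh_packet(i, j1))
    M[2**i * k0: 2**i * (k0 + 1), 2**i * k1: 2**i * (k1 + 1)] = block
    return M.reshape(-1)

L, i = 4, 1
families = []
for j0, j1 in product(range(2 ** i), repeat=2):
    V = np.stack([basis_vector(i, j0, j1, k0, k1, L)
                  for k0, k1 in product(range(L // 2 ** i), repeat=2)])
    assert np.allclose(V @ V.T, np.eye((L // 2 ** i) ** 2))   # ONB of W_{i,j0,j1}
    families.append(V)

full = np.vstack(families)                     # basis of the orthogonal direct sum
print(np.allclose(full @ full.T, np.eye(L * L)))              # True
\end{verbatim}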
\begin{example}
(Full quadtree)\\
When the relation $\mathcal{W}_{0,0,0}=(\mathcal{W}_{2,0,0}\oplus \mathcal{W}_{2,0,1}\oplus \mathcal{W}_{2,1,0}\oplus \mathcal{W}_{2,1,1})\oplus \mathcal{W}_{1,0,1}\oplus\mathcal{W}_{1,1,0}\oplus\mathcal{W}_{1,1,1}$ holds, the diagram of the full quadtree is shown in Fig. \ref{ex_tree}. The gray area represents leaf nodes. Notably, the basis of the 2D-DWT corresponds to the full quadtree created by extending branches only in the direction $j_0=j_1=0$.
\begin{figure}[tbp]
\centering
\includegraphics[width=6cm,height=3cm]{image/ex_tree.png}
\caption{Example of a full quadtree for $d_{\max}=2$}
\label{ex_tree}
\end{figure}
\end{example}
\subsection{2D wavelet packets basis matrix}
\begin{definition}
($\mathcal{S},\mathcal{L},\mathcal{I},s_{\text{r}}$)\\
Let $\mathcal{S}$ be the set of nodes in a complete quadtree\footnote{All leaf nodes have the same depth.} of depth $d_{\max}$ and $\mathcal{L}\subset \mathcal{S}$ be the set of its leaf nodes and $\mathcal{I}\subset \mathcal{S}$ be the set of its inner nodes. Let $s_{\text{r}}$ be the root node.
\end{definition}
\begin{definition}
($\mathcal{M},\mathcal{L}^m,\mathcal{I}^m$)\\
Let $\mathcal{M}$ be the set of all full quadtrees on $\mathcal{S}$, which contain $s_{\text{r}}$ and whose depth is less than or equal to $d_{\max}$. Let $\mathcal{L}^m \subset \mathcal{S}$ be the set of leaf nodes in $m \in \mathcal{M}$ and $\mathcal{I}^m \subset \mathcal{S}$ be the set of inner nodes in $m \in \mathcal{M}$.
\end{definition}
\begin{definition}
($W^m$)\\
Let $W^m\in \mathbb{R}^{L^2 \times L^2}$ be the matrix created by collecting the bases of the subspaces corresponding to the leaf nodes of $m \in \mathcal{M}$. The matrix $W^m\in \mathbb{R}^{L^2 \times L^2}$ is called the basis matrix in this paper.
\end{definition}
\begin{definition}
($W_s$)\\
Let $W_s \in \mathbb{R}^{L^2 \times L^2}$ be the matrix containing the basis vectors of the subspace at node $s\in \mathcal{S}$, appropriately complemented by zero rows so that the following relation holds.
\begin{align}
W^m&=\sum_{s\in \mathcal{L}^m}W_s.
\end{align}
\end{definition}
\begin{example}
(The basis matrix corresponding to the quadtree $m\in \mathcal{M}$ in Figure \ref{ex_tree})
\begin{align}
W^m
&=
\begin{pmatrix}
\bm{w}_{1,0,0,0,0}^{\top}\\
\bm{w}_{1,0,0,0,1}^{\top}\\
\bm{w}_{1,0,0,1,0}^{\top}\\
\bm{w}_{1,0,0,1,1}^{\top}\\
\bm{0}_{12\times 16}
\end{pmatrix}
+
\begin{pmatrix}
\bm{0}_{4\times 16}\\
\bm{w}_{1,0,1,0,0}^{\top}\\
\bm{w}_{1,0,1,0,1}^{\top}\\
\bm{w}_{1,0,1,1,0}^{\top}\\
\bm{w}_{1,0,1,1,1}^{\top}\\
\bm{0}_{8\times 16}
\end{pmatrix}
+
\begin{pmatrix}
\bm{0}_{8\times 16}\\
\bm{w}_{1,1,0,0,0}^{\top}\\
\bm{w}_{1,1,0,0,1}^{\top}\\
\bm{w}_{1,1,0,1,0}^{\top}\\
\bm{w}_{1,1,0,1,1}^{\top}\\
\bm{0}_{4\times 16}\\
\end{pmatrix}
\nonumber\\
&
+
\begin{pmatrix}
\bm{0}_{12\times 16}\\
\bm{w}_{2,2,2,0,0}^{\top}\\
\bm{0}_{3\times 16}\\
\end{pmatrix}
+
\begin{pmatrix}
\bm{0}_{13\times 16}\\
\bm{w}_{2,2,3,0,0}^{\top}\\
\bm{0}_{2\times 16}\\
\end{pmatrix}
+
\begin{pmatrix}
\bm{0}_{14\times 16}\\
\bm{w}_{2,3,2,0,0}^{\top}\\
\bm{0}_{1\times 16}\\
\end{pmatrix}
\nonumber\\
&+
\begin{pmatrix}
\bm{0}_{15\times 16}\\
\bm{w}_{2,3,3,0,0}^{\top}\\
\end{pmatrix}
.
\end{align}
where $\bm{0}_{k\times l}$ is a $k\times l$ zero matrix.
\end{example}
The basis matrix $W^m$ is an orthogonal matrix because its rows form an orthonormal basis.
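As an illustration (not part of the paper), the basis matrix for the quadtree whose leaf nodes match the rows written out in the example above can be assembled and checked as follows; the helper walsh_packet realizes the recursive definition of $w_{i,j}$.
\begin{verbatim}
import numpy as np
from itertools import product

def walsh_packet(i, j):
    if i == 0:
        return np.array([1.0])
    w = walsh_packet(i - 1, j // 2)
    z = np.zeros(2 ** (i - 1))
    return (np.concatenate([w, z]) +
            (1.0 if j % 2 == 0 else -1.0) * np.concatenate([z, w])) / np.sqrt(2.0)

def leaf_rows(i, j0, j1, L):
    """All vectors w_{i,j0,j1,k0,k1} of leaf (i, j0, j1), stacked as rows."""
    rows = []
    for k0, k1 in product(range(L // 2 ** i), repeat=2):
        M = np.zeros((L, L))
        M[2**i * k0: 2**i * (k0 + 1), 2**i * k1: 2**i * (k1 + 1)] = \
            np.outer(walsh_packet(i, j0), walsh_packet(i, j1))
        rows.append(M.reshape(-1))
    return np.array(rows)

L = 4    # d_max = 2
# leaf nodes (i, j0, j1) of the full quadtree used in the example above
leaves = [(1, 0, 0), (1, 0, 1), (1, 1, 0),
          (2, 2, 2), (2, 2, 3), (2, 3, 2), (2, 3, 3)]
W_m = np.vstack([leaf_rows(i, j0, j1, L) for (i, j0, j1) in leaves])
print(W_m.shape, np.allclose(W_m @ W_m.T, np.eye(L * L)))   # (16, 16) True
\end{verbatim}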
\subsection{Stochastic generative model}
Herein, we describe the stochastic 2D signal model.
\begin{definition}
($\bm{X},\bm{x}$)\\
Let $\bm{X}$ be a random vector on $\mathbb{R}^{L^2\times 1}$ representing the 2D signal and $\bm{x}\in \mathbb{R}^{L^2\times 1}$ be its realization. The vector $\bm{x}\in\mathbb{R}^{L^2\times 1}$ is the column vector obtained by arranging the $L\times L$ 2D signal in raster scan order. The 2D-DWPT coefficient vector $\bm{\theta}$ defined below has a similar structure.
\end{definition}
\begin{definition}
($\bm{\theta}$)\\
Let $\bm{\theta}$ be a random vector on $\mathbb{R}^{L^2\times 1}$ representing the 2D-DWPT coefficients. Its realization is likewise denoted by $\bm{\theta}\in\mathbb{R}^{L^2\times 1}$.
\end{definition}
The following relationship holds between $\bm{x}$ and $\bm{\theta}$.
\begin{align}
\bm{\theta}&=W^m\bm{x},\\
\bm{x}&=(W^m)^{\top}\bm{\theta}.
\end{align}
In our model, the quadtree $m \in \mathcal{M}$ and 2D-DWPT $\bm{\theta}$ are generated by assumed prior distributions, and the 2D signal $\bm{x}$ is generated by a linear transformation of $\bm{\theta}$.
\section{Application to Bayesian signal processing}\label{problem}
Under the generative probability model discussed in the previous section, various signal processing problems can be considered depending on how the input signal is observed and the nature of the desired output signal. In Bayesian decision theory \cite{berger}, these are represented by designing the domain and the range of the decision function as well as the loss function.
In this section, we formulate the simplest denoising problem based on Bayesian decision theory and derive the optimal denoising scheme under the Bayes criterion.
\subsection{Problem}
The following additive noise model is considered in this paper.
\begin{assumption}\label{noise_assumption}
(additive noise)
\begin{align}
\bm{Y}\coloneqq\bm{X}+\bm{\epsilon}.
\end{align}
We assume $\bm{Y}$ is a random vector on $\mathbb{R}^{L^2\times 1}$ representing the noisy 2D signal, $\bm{X}$ is the random vector representing the original 2D signal, and $\bm{\epsilon}\sim \mathcal{N}(\bm{\epsilon }|\bm{0}_{L^2\times 1},\sigma^2_{\bm{\epsilon}}I)$ ($\sigma^2_{\bm{\epsilon}}>0$ is a known hyperparameter). In the following, let $\bm{y}\in \mathbb{R}^{L^2\times 1}$ be the realization of $\bm{Y}$. Our goal in Sections \ref{problem} and \ref{experiment} is to estimate $\bm{x}$ from $\bm{y}$.
\end{assumption}
We now define the decision function $\delta$.
\begin{definition}
(Decision function $\delta$)\\
The function $\delta:\mathbb{R}^{L^2\times 1}\to \mathbb{R}^{L^2\times 1}$ is called the decision function. The input of the decision function $\delta$ is the noisy 2D signal $\bm{y}$, and the output is an estimate $\hat{\bm{x}}$ of the original 2D signal.
\end{definition}
We define the Bayes optimal decision function $\delta^{\ast}$ as the function that minimizes the Bayes risk function $BR(\delta)$ based on the mean squared error loss in the 2D signal domain.
\begin{definition}
(Bayes risk function $BR(\delta)$)\\
The Bayes risk function $BR:\Delta \to \mathbb{R}$ ($\Delta$ is the space of decision functions $\delta$) is defined as below.
\begin{align}
BR(\delta)\coloneqq\sum_{m\in \mathcal{M}} \int \int &\frac{1}{L^2}\|\bm{x}-\delta(\bm{y})\|^2\nonumber \\
&\times p(\bm{y}|\bm{x},m)p(\bm{x}|m)p(m) \mathrm{d}\bm{y} \mathrm{d}\bm{x}.
\end{align}
\end{definition}
\subsection{Solution to the problem}
According to the Bayesian decision theory (see e.g. \cite{berger}), the following holds.
\begin{theorem}
(Bayes optimal decision function $\delta^{\ast}$)\\
Bayes optimal decision function $\delta^{\ast}$ is given by
\begin{align}\label{Bayes_optimal}
\delta^{\ast}(\bm{y})&=\sum_{m\in \mathcal{M}}p(m|\bm{y})\int \bm{x} p(\bm{x}|m,\bm{y})\mathrm{d}\bm{x}.
\end{align}
\end{theorem}
There are two problems in the calculation of the right-hand side of equation (\ref{Bayes_optimal}).
First, the integral over $\bm{x}$ does not generally have a closed-form expression. Second, the computational complexity of the summation with respect to $m \in \mathcal{M}$ increases exponentially with respect to the size of the 2D signal. However, using an appropriate prior distribution of $\bm{\theta}$ and an appropriate prior distribution of the quadtree model $m\in \mathcal{M}$, the computational complexity can be reduced to polynomial order while maintaining optimality.
\subsection{Efficient algorithm for the solution}
Let $\bm{\mu}^m\in \mathbb{R}^{L^2\times 1}$ and $\sigma^2>0$ be known hyperparameters. $\bm{\mu}^m$ is constructed in the same way as $W^m$, i.e., $\bm{\mu}^m=\sum_{s\in \mathcal{L}^m}\bm{\mu}_s$, where each $\bm{\mu}_s\in \mathbb{R}^{L^2\times 1}$ is also a known vector.
\begin{assumption}\label{prior_theta}
(2D-DWPT prior distribution)
\begin{align}
p(\bm{\theta}|m)&\coloneqq\mathcal{N}(\bm{\theta}|\bm{\mu}^m, \sigma^2 I),
\end{align}
where $I$ is the $L^2\times L^2$ identity matrix.
\end{assumption}
By the linear transformation above, we can show that the 2D signal $\bm{X}$ follows the normal distribution given below.
\begin{prop}\label{signal_distribution}
(Distribution of the 2D signal)\\
The distribution of the 2D signal $\bm{X}$ is given by
\begin{align}
p(\bm{x}|m)&=\mathcal{N}(\bm{x}|(W^m)^{\top}\bm{\mu}^m, \sigma^2I).
\end{align}
\end{prop}
\begin{assumption}\label{tree}
(Prior distribution of the quadtree model)
\begin{align}
p(m)&=\prod_{s\in \mathcal{L}^m}(1-g_{s}) \prod_{s'\in \mathcal{I}^m}g_{s'}\label{priorm}
\end{align}
where $g_{s}\in [0,1]$ is a known hyperparameter for any $s\in \mathcal{S}$, which satisfies $g_{s}=0$ for $s \in \mathcal{L}$. The hyperparameter $g_s$ represents the probability that node $s$ extends branches to its children.
\end{assumption}
The aforementioned prior distribution (\ref{priorm}) was proposed in \cite{matsushima} to represent context trees and applied in \cite{110003313864} to represent the prior distribution of 1D wavelet packet trees. A mathematically rigorous proof of the following equation (\ref{prob}), of the corresponding expected values, and of the posterior probability calculations is given in \cite{nakahara}.
\begin{align}
\sum_{m\in \mathcal{M}}p(m)=1\label{prob}
\end{align}
To the best of our knowledge, there has been no previous study that applied this prior distribution to 2D-DWPT trees; we apply it to 2D-DWPT trees here for the first time.
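The following sketch (illustrative; the common value $g$ for the non-terminal nodes is an arbitrary choice) verifies (\ref{prob}) numerically by a recursion over node depths, and also counts $|\mathcal{M}|$.
\begin{verbatim}
import numpy as np

d_max, g = 2, 0.3     # g_s = g for non-terminal nodes, g_s = 0 at depth d_max

def total_mass(depth):
    """Sum of prod_{leaves}(1 - g_s) * prod_{inner} g_s over all full quadtrees
    rooted at a node of the given depth."""
    if depth == d_max:
        return 1.0                                  # forced leaf (g_s = 0)
    return (1.0 - g) + g * total_mass(depth + 1) ** 4

def num_trees(depth):
    return 1 if depth == d_max else 1 + num_trees(depth + 1) ** 4

print(total_mass(0))  # 1.0, i.e. the prior sums to one
print(num_trees(0))   # |M| = 17 for d_max = 2
\end{verbatim}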
Under the aforementioned assumptions, the following theorem can be derived.
\begin{theorem}\label{theorem_posterior}
(Posterior distribution)\\
The following equation holds.
\begin{align}
p(\bm{x}|m,\bm{y})&=\mathcal{N}(\bm{x}|\tilde{\bm{\mu}}^m,\tilde{\sigma}^2 I),\label{p(x|m,y)}\\
p(m|\bm{y})&=\prod_{s\in \mathcal{L}^m}(1-\tilde{g}_s)\prod_{s' \in \mathcal{I}^m}\tilde{g}_{s'}\label{p(m|y)},
\end{align}
where
\begin{align}
\tilde{\bm{\mu}}^m&\coloneqq \frac{\sigma^2}{\sigma^2+\sigma^2_{\bm{\epsilon}}} \bm{y}+\frac{\sigma^2_{\bm{\epsilon}}}{\sigma^2+\sigma^2_{\bm{\epsilon}}}(W^m)^{\top}\bm{\mu}^m,\\
\tilde{\sigma}^2&\coloneqq \frac{\sigma^2\sigma^2_{\bm{\epsilon}}}{\sigma^2+\sigma^2_{\bm{\epsilon}}},\\
\tilde{g}_s&\coloneqq
\begin{cases}
g_s&(s\in \mathcal{L})\\
\cfrac{g_s\prod_{s'\in \text{Ch}(s)}\tilde{\psi}_{s'}}{\tilde{\psi}_s}&(\text{otherwise}),
\end{cases}
\\
\tilde{\psi}_s&\coloneqq
\begin{cases}
\psi_s &(s\in \mathcal{L})\\
(1-g_s)\psi_s+g_s\prod_{s'\in \text{Ch}(s)}\tilde{\psi}_{s'}&(\text{otherwise}),
\end{cases}
\\
\ln \psi_s&\coloneqq\frac{1}{\sigma^2+\sigma^2_{\bm{\epsilon}}}\bigg(W_s\bm{y}-\frac{\bm{\mu}_s}{2}\bigg)^{\top}\bm{\mu}_s.
\end{align}
Note that $\text{Ch}(s)\subset \mathcal{S}$ is the set of child nodes of $s\in \mathcal{S}$.
\end{theorem}
The proof of this theorem is in Appendix \ref{proof_posterior}.\\
Using Theorem \ref{theorem_posterior}, the Bayes optimal decision function $\delta^{\ast}$ can be calculated as follows.
\begin{theorem}
(Bayes optimal decision function $\delta^{\ast}$)\\
The following equation holds
\begin{align}
\delta^{\ast}(\bm{y})&=\frac{\sigma^2}{\sigma^2+\sigma^2_{\bm{\epsilon}}} \bm{y}+\frac{\sigma^2_{\bm{\epsilon}}}{\sigma^2+\sigma^2_{\bm{\epsilon}}} \sum_{m\in \mathcal{M}}p(m|\bm{y})(W^m)^{\top}\bm{\mu}^m.\label{Bayes}
\end{align}
\end{theorem}
The integral of $\bm{x}$ in (\ref{Bayes_optimal}) is solved in (\ref{Bayes}) but the computational complexity of the second term increases exponentially with the size of the 2D signal. However, using the recursive computation derived from the following theorem, the computational complexity can be reduced to polynomial order while maintaining Bayesian optimality.
\begin{theorem}\label{theorem_recursive}
(Recursive algorithm)\\
The following equation holds.
\begin{align}
r_{s_{\text{r}}}=\sum_{m\in \mathcal{M}}p(m|\bm{y})(W^m)^{\top}\bm{\mu}^m.
\end{align}
where
\begin{align}
r_s&\coloneqq
\begin{cases}
(1-\tilde{g}_s)W^{\top}_s\bm{\mu}_s&(s\in \mathcal{L})\\
(1-\tilde{g}_s)W^{\top}_s\bm{\mu}_s+\tilde{g}_s\sum_{s'\in \text{Ch}(s)}r_{s'}&(\text{otherwise}).
\end{cases}
\label{r_s}
\end{align}
\end{theorem}
The proof of this theorem is in Appendix \ref{proof_recursive}.
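To illustrate how Theorems \ref{theorem_posterior} and \ref{theorem_recursive} are used, the following Python sketch treats the smallest nontrivial case $d_{\max}=1$ ($L=2$, so $\mathcal{M}$ contains only two trees) and compares the recursive formulas with brute-force enumeration. All hyperparameter values and leaf means below are arbitrary illustrative choices, not the experimental settings of Section \ref{experiment}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
sigma2, sigma2_eps, g_root = 10.0, 4.0, 0.5
v = sigma2 + sigma2_eps

# per-leaf orthonormal bases (rows), in raster-scan coordinates of a 2x2 signal
B_root = np.eye(4)                                        # w_{0,0,0,k0,k1}
h = np.array([1.0, 1.0]) / np.sqrt(2.0)                   # w_{1,0}
w1 = np.array([1.0, -1.0]) / np.sqrt(2.0)                 # w_{1,1}
B_child = {(j0, j1): np.outer((h, w1)[j0], (h, w1)[j1]).reshape(1, -1)
           for j0 in (0, 1) for j1 in (0, 1)}             # w_{1,j0,j1,0,0}
children = sorted(B_child)

mu_root = rng.standard_normal(4)                          # assumed known means
mu_child = {c: rng.standard_normal(1) for c in children}

# generate y from the "split" model plus observation noise
W_m1 = np.vstack([B_child[c] for c in children])
mu_m1 = np.concatenate([mu_child[c] for c in children])
theta = mu_m1 + np.sqrt(sigma2) * rng.standard_normal(4)
y = W_m1.T @ theta + np.sqrt(sigma2_eps) * rng.standard_normal(4)

def log_psi(B, mu):
    return float((B @ y - mu / 2.0) @ mu) / v

# recursion of Theorem 2; children sit at depth d_max, so g~ = 0 there
psi_children = np.exp(sum(log_psi(B_child[c], mu_child[c]) for c in children))
psi_root = np.exp(log_psi(B_root, mu_root))
psi_t_root = (1 - g_root) * psi_root + g_root * psi_children
g_t_root = g_root * psi_children / psi_t_root
post_rec = np.array([1 - g_t_root, g_t_root])             # (p(m0|y), p(m1|y))

# brute force over the two models m0 (root leaf) and m1 (root split)
def log_gauss(x, mean, var):
    d = x - mean
    return -0.5 * (len(x) * np.log(2 * np.pi * var) + d @ d / var)

means = [B_root.T @ mu_root, W_m1.T @ mu_m1]              # (W^m)^T mu^m
log_post = np.log([1 - g_root, g_root]) + \
           np.array([log_gauss(y, mm, v) for mm in means])
post_bf = np.exp(log_post - log_post.max()); post_bf /= post_bf.sum()
print(np.allclose(post_rec, post_bf))                     # True

# Bayes optimal denoiser via the recursion r_s of the last theorem
r_children = sum(B_child[c].T @ mu_child[c] for c in children)
r_root = (1 - g_t_root) * (B_root.T @ mu_root) + g_t_root * r_children
delta_rec = sigma2 / v * y + sigma2_eps / v * r_root
delta_bf = sigma2 / v * y + sigma2_eps / v * (
    post_bf[0] * means[0] + post_bf[1] * means[1])
print(np.allclose(delta_rec, delta_bf))                   # True
\end{verbatim}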
\section{Experiments}\label{experiment}
\subsection{Experiment 1 : Posterior $p(m|\bm{y})$}
The purpose of Experiment 1 is to quantitatively confirm that the posterior distribution $p(m|\bm{y})$ is properly computed.
The setting of Experiment 1 is as follows.
\begin{itemize}
\item $d_{\max}=2$.
\item $g_s = 0.5$.
\item $\sigma^2 = 10$.
\item $\sigma^2_{\bm{\epsilon}}=4$.
\item $\bm{\mu}_s$ is calculated from resized images in \cite{data}.\footnote{Let $\bm{x}_1,\cdots,\bm{x}_{50000}$ denote the training images in \cite{data}. The nonzero entries of $\bm{\mu}_s$ are calculated as $ \frac{1}{50000\times 4^{d_{\max}-i(s)}}\sum_{n=1}^{50000}f(W_s\bm{x}_n)$, where $f:\mathbb{R}^N \to \mathbb{R}$ is defined as $f((a_1,a_2,\cdots,a_N))\coloneqq \sum_{n=1}^{N}a_n $ and $i(s)$ is the depth of node $s$.}
\end{itemize}
This experiment is conducted by using the following procedure.
\begin{enumerate}
\item Set the true tree $m_0$ (tree 6 in Fig. \ref{experiment1}).\label{ex2_genem}
\item Generate $\bm{x} \sim p(\bm{x}|m_0)$.\label{ex2_genex}
\item Generate $\bm{y} = \bm{x} + \bm{\epsilon}$.\label{ex2_geney}
\item Calculate $\tilde{g}_s$.\label{ex2_calp}
\item Repeat from step \ref{ex2_geney} to step \ref{ex2_calp} 50 times. \label{ex2_re1}
\item Repeat from step \ref{ex2_genex} to step \ref{ex2_re1} 50 times.\label{ex2_re2}
\end{enumerate}
The result of this experiment is in Fig. \ref{experiment1}.
Since $d_{\max}=2$, we have $|\mathcal{M}|=17$. In other words, there are 17 candidate trees in total. We give them indices from 0 to 16 in an arbitrary order. In this experiment, the sixth tree in this order is fixed as the true tree for data generation.
\begin{figure}[tbp]
\centering
\includegraphics[height=5cm,width=8cm]{image/ex_1_posterior.png}
\caption{$p(m|\bm{y})$ in Experiment 1}
\label{experiment1}
\end{figure}
According to this result, we can confirm that the posterior probability of the sixth tree, which is the true tree, attains the maximum value. Moreover, the tree obtained by expanding only one node of the true tree has the second-largest posterior probability, and the tree obtained by shrinking only one node of the true tree has the third-largest posterior probability. Therefore, we confirmed that our algorithm properly computes the posterior distribution, not only by the theoretical proof but also by this quantitative experiment.
\subsection{Experiment 2 : Value of Bayes risk function $BR(\delta)$}
The purpose of Experiment 2 is to confirm the effectiveness of the proposed method by calculating the value of the Bayes risk function.
We compare the proposed method with the estimated signals from five models $m_{i} \in \mathcal{M} \hspace{0.1cm}(i=1,2,3,4,5)$, where $m_i$ is the complete quadtree of depth $i$.
The estimated signals from these models $m_{i} \in \mathcal{M}$ are obtained by the following equations.
\begin{align}
\delta^{i}(\bm{y})&=\frac{\sigma^2}{\sigma^2+\sigma^2_{\bm{\epsilon}}}\bm{y}+\frac{\sigma^2_{\bm{\epsilon}}}{\sigma^2+\sigma^2_{\bm{\epsilon}}}(W^{m_{i}})^{\top}\bm{\mu}^{m_{i}}.
\end{align}
The setting of Experiment 2 is as follows.
\begin{itemize}
\item $d_{\max}=5$.
\item $\sigma^2 = 10$.
\item $\sigma^2_{\bm{\epsilon}}=4$.
\item $\bm{\mu}_s$ is calculated from images in \cite{data}.
\end{itemize}
This experiment is conducted by using the following procedure.
\begin{enumerate}
\item Set $g_s \in \{0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9\}$.
\item Generate quadtree $m \sim p(m)$.\label{ex3_genem}
\item Generate $\bm{x}\sim p(\bm{x}|m)$.\label{ex3_genex}
\item Generate $\bm{y} = \bm{x}+\bm{\epsilon}$.\label{ex3_geney}
\item Calculate $\delta^{\ast}(\bm{y}), \delta^{i}(\bm{y})$.\label{ex3_caldelta}
\item Calculate mean-square error loss.\label{ex3_calloss}
\item Repeat from step \ref{ex3_geney} to step \ref{ex3_calloss} 10 times.\label{ex3_rep1}
\item Repeat from step \ref{ex3_genex} to step \ref{ex3_rep1} 10 times.\label{ex3_rep2}
\item Repeat from step \ref{ex3_genem} to step \ref{ex3_rep2} 30 times.
\end{enumerate}
\begin{figure}[tbp]
\centering
\includegraphics[height=5cm,width=8cm]{image/ex2.png}
\caption{Value of Bayes risk function $BR(\delta)$ for each method}
\label{experiment2}
\end{figure}
\begin{table}[tbp]
\centering
\caption{Average depth of $m \in \mathcal{M}$ for each $g_s$}
\begin{tabular}{l|c}
$g_s$&Average depth of $m \in \mathcal{M}$\\ \hline\hline
$0.1$&0.000\\
$0.2$&0.290\\
$0.3$&0.957\\
$0.4$&0.897\\
$0.5$&2.318\\
$0.6$&3.243\\
$0.7$&3.974\\
$0.8$&3.732\\
$0.9$&4.452\\ \hline
\end{tabular}
\label{experiment2_depth}
\end{table}
The result of this experiment is shown in Fig. \ref{experiment2}, and TABLE \ref{experiment2_depth} shows the average depth\footnote{The average depth is calculated as $\frac{1}{|\mathcal{L}^{m}|}\sum_{s\in \mathcal{L}^m}i(s)$.} of $m\in \mathcal{M}$ for each $g_s$.
According to this result, we can confirm that the proposed method achieves the minimum value of the Bayes risk function for every value of $g_s$. Moreover, the performance of each $\delta^i$ depends on the depth of the generated trees $m \in \mathcal{M}$. For example, the deeper the average depth of the generated trees is, the higher the value of the Bayes risk function of $\delta^1, \delta^2, \delta^3, \delta^4$ is, and vice versa. Therefore, we can confirm the effectiveness of the proposed method.
\section*{Acknowledgment}
I would like to thank my family for supporting me in my daily life and the members of the Matsushima laboratory for their meaningful discussions.
\appendices
\section{Proof of Theorem \ref{theorem_posterior}}\label{proof_posterior}
In the following, let $C \in \mathbb{R}$ be a constant. First, we show the equation (\ref{p(x|m,y)}).
\begin{align}
&\ln p(\bm{x}|m,\bm{y})\\
=&\ln p(\bm{y}|\bm{x},m)+\ln p(\bm{x}|m) +C \label{lnp(x|my)}\\
=&\ln \mathcal{N}(\bm{y}|\bm{x}, \sigma^2_{\bm{\epsilon}}I)+\ln \mathcal{N}(\bm{x}|(W^m)^{\top}\bm{\mu}^m, \sigma^2I)+C \\
=&-\frac{1}{2\sigma^2_{\bm{\epsilon}}}(\bm{y}-\bm{x})^{\top}(\bm{y}-\bm{x})\nonumber\\
&-\frac{1}{2\sigma^2}(\bm{x}-(W^m)^{\top}\bm{\mu}^m)^{\top}(\bm{x}-(W^m)^{\top}\bm{\mu}^m)+C \\
=&-\frac{1}{2}\bigg(\frac{1}{\sigma^2}+\frac{1}{\sigma^2_{\bm{\epsilon}}}\bigg)\bm{x}^{\top}\bm{x}\nonumber\\
&+\bigg(\frac{1}{\sigma^2_{\bm{\epsilon}}}\bm{y}+\frac{1}{\sigma^2}(W^m)^{\top}\bm{\mu}^m\bigg)^{\top}\bm{x} +C.
\end{align}
Let us define $\tilde{\sigma}^2,\tilde{\bm{\mu}}^m$ as follows.
\begin{align}
\tilde{\sigma}^2&\coloneqq \frac{\sigma^2\sigma^2_{\bm{\epsilon}}}{\sigma^2+\sigma^2_{\bm{\epsilon}}},\\
\tilde{\bm{\mu}}^m &\coloneqq \tilde{\sigma}^2\bigg(\frac{1}{\sigma^2_{\bm{\epsilon}}}\bm{y}+\frac{1}{\sigma^2}(W^m)^{\top}\bm{\mu}^m\bigg).
\end{align}
Therefore,
\begin{align}
&\ln p(\bm{x}|m,\bm{y})\\
=&-\frac{1}{2\tilde{\sigma}^2}(\bm{x}-\tilde{\bm{\mu}}^m)^{\top}(\bm{x}-\tilde{\bm{\mu}}^m)+C \\
=&\ln \mathcal{N}(\bm{x}|\tilde{\bm{\mu}}^m, \tilde{\sigma}^2 I).
\end{align}
Next, we show the equation (\ref{p(m|y)}).
\begin{align}
&\ln p(m|\bm{y})\\
=&\ln p(\bm{y}|m)+\ln p(m)+C\\
=&\ln \int p(\bm{y}|\bm{x},m)p(\bm{x}|m) \mathrm{d}\bm{x}+\ln p(m)+C\\
=&\ln \int \mathcal{N}(\bm{y}|\bm{x},\sigma^2_{\bm{\epsilon}}I)\mathcal{N}(\bm{x}|(W^m)^{\top}\bm{\mu}^m,\sigma^2I) \mathrm{d}\bm{x}\nonumber\\
&+\ln p(m)+C\\
=&\ln \mathcal{N}(\bm{y}|(W^m)^{\top}\bm{\mu}^m,(\sigma^2+\sigma^2_{\bm{\epsilon}})I)+\ln p(m)+C\\
=&-\frac{1}{2(\sigma^2+\sigma^2_{\bm{\epsilon}})}(\bm{y}-(W^m)^{\top}\bm{\mu}^m)^{\top}(\bm{y}-(W^m)^{\top}\bm{\mu}^m) \nonumber\\
&+\ln p(m)+C\\
=&-\frac{1}{2(\sigma^2+\sigma^2_{\bm{\epsilon}})}(\bm{y}^{\top}\bm{y}-2\bm{y}^{\top}(W^m)^{\top}\bm{\mu}^m+(\bm{\mu}^m)^{\top}\bm{\mu}^m) \nonumber\\
&+\ln p(m)+C\\
=&\frac{1}{(\sigma^2+\sigma^2_{\bm{\epsilon}})}\bigg(W^m\bm{y}-\frac{\bm{\mu}^m}{2}\bigg)^{\top}\bm{\mu}^m +\ln p(m)+C\\
=&\frac{1}{(\sigma^2+\sigma^2_{\bm{\epsilon}})}\bigg\{\sum_{s\in \mathcal{L}^m}\bigg(W_s\bm{y}-\frac{\bm{\mu}_s}{2}\bigg)\bigg\}^{\top}\bigg(\sum_{s'\in \mathcal{L}^m}\bm{\mu}_{s'}\bigg)\nonumber\\
&+\ln p(m)+C\\
=&\frac{1}{(\sigma^2+\sigma^2_{\bm{\epsilon}})}\sum_{s\in \mathcal{L}^m}\bigg(W_s\bm{y}-\frac{\bm{\mu}_s}{2}\bigg)^{\top}\bm{\mu}_s+\ln p(m)+C \\
&\bigg(\because s\neq s'\Rightarrow \bigg(W_s\bm{y}-\frac{\bm{\mu}_s}{2}\bigg)^{\top}\bm{\mu}_{s'}=0\bigg) \nonumber\\
=&\ln \prod_{s\in \mathcal{L}^m}\psi_s(1-g_{s})\prod_{s'\in \mathcal{I}^m} g_{s'}+C.
\end{align}
Let us denote $\ln \psi_s$ as follows.
\begin{align}
\ln \psi_s &\coloneqq \frac{1}{\sigma^2+\sigma^2_{\bm{\epsilon}}}\bigg(W_s\bm{y}-\frac{\bm{\mu}_s}{2}\bigg)^{\top}\bm{\mu}_s.
\end{align}
We assume the following lemma, which will be proved later.
\begin{lemma}\label{lemma_posterior}
Let us denote $\tilde{\psi}_s,\tilde{g}_s$ as follows.
\begin{align}
\tilde{\psi}_s &\coloneqq
\begin{cases}
\psi_s &(s\in \mathcal{L})\\
(1-g_s)\psi_s+g_s\prod_{s'\in \text{Ch}(s)}\tilde{\psi}_{s'} &(\text{otherwise}),
\end{cases}\\
\tilde{g}_s &\coloneqq
\begin{cases}
g_s &(s\in \mathcal{L})\\
\frac{g_s\prod_{s'\in \text{Ch}(s)}\tilde{\psi}_{s'}}{\tilde{\psi}_s}&(\text{otherwise}).
\end{cases}\label{tilde_g}
\end{align}
In this case, the following equation holds.
\begin{align}
\prod_{s\in \mathcal{L}^m}(1-\tilde{g}_s)\prod_{s'\in \mathcal{I}^m}\tilde{g}_{s'}=\frac{1}{\tilde{\psi}_{s_{\text{r}}}}\prod_{s\in \mathcal{L}^m}\psi_s(1-g_{s})\prod_{s'\in \mathcal{I}^m} g_{s'}.
\end{align}
Because $\tilde{g}_s$ is in the range $[0,1]$, we can show the following equation in the same way as for the equation $\sum_{m\in \mathcal{M}}p(m)=1$.
\begin{align}
\sum_{m\in \mathcal{M}}\prod_{s\in \mathcal{L}^m}(1-\tilde{g}_{s})\prod_{s'\in \mathcal{I}^m}\tilde{g}_{s'}=1.
\end{align}
\end{lemma}
Using the above lemma, the following equation can be derived by setting $C=-\ln \tilde{\psi}_{s_{\text{r}}}$:
\begin{align}
&\ln p(m|\bm{y})=\ln \prod_{s\in \mathcal{L}^m}(1-\tilde{g}_s)\prod_{s'\in \mathcal{I}^m}\tilde{g}_{s'}.
\end{align}
\section{Proof of Lemma \ref{lemma_posterior}}
We show the proof of Lemma \ref{lemma_posterior}.
\begin{align}
&\prod_{s\in \mathcal{L}^m}(1-\tilde{g}_s)\prod_{s'\in \mathcal{I}^m}\tilde{g}_{s'}\\
=&\prod_{s\in \mathcal{L}^m\cap \mathcal{L}}(1-\tilde{g}_s)\prod_{s'\in \mathcal{L}^m\backslash \mathcal{L}}(1-\tilde{g}_{s'})\prod_{s''\in \mathcal{I}^m}\tilde{g}_{s''}\\
=&\prod_{s\in \mathcal{L}^m\cap \mathcal{L}}(1-g_{s})\prod_{s'\in \mathcal{L}^m\backslash \mathcal{L}}\frac{\psi_{s'}(1-g_{s'})}{\tilde{\psi}_{s'}}\nonumber\\
&\times \prod_{s''\in \mathcal{I}^m}\frac{g_{s''}\prod_{s'''\in \text{Ch}(s'')}\tilde{\psi}_{s'''}}{\tilde{\psi}_{s''}}\quad \because (\ref{tilde_g})\label{tmp_lemma1}\\
=&\frac{1}{\tilde{\psi}_{s_{\text{r}}}}\prod_{s\in \mathcal{L}^m\cap \mathcal{L}}(1-g_{s})\prod_{s'\in \mathcal{L}^m\backslash \mathcal{L}}\psi_{s'}(1-g_{s'})\nonumber\\
&\times \prod_{s''\in \mathcal{I}^m}g_{s''}\prod_{s'''\in \mathcal{L}^m\cap \mathcal{L} }\tilde{\psi}_{s'''}.\label{tmp}
\end{align}
In (\ref{tmp_lemma1}), each factor $\tilde{\psi}_{s}$ with $s\in (\mathcal{L}^m\cup\mathcal{I}^m)\backslash(\{s_{\text{r}}\}\cup (\mathcal{L}\cap \mathcal{L}^m))$ appears exactly once in a numerator and once in a denominator, so these factors cancel; only $(\tilde{\psi}_{s_{\text{r}}})^{-1}$ and the factors $\tilde{\psi}_{s'''}$ with $s'''\in \mathcal{L}\cap \mathcal{L}^m$ remain, which justifies (\ref{tmp}).
Therefore,
\begin{align}
(\ref{tmp})=&\frac{1}{\tilde{\psi}_{s_{\text{r}}}}\prod_{s\in \mathcal{L}^m\cap \mathcal{L}}(1-g_{s})\prod_{s'\in \mathcal{L}^m\backslash \mathcal{L}}\psi_{s'}(1-g_{s'})\nonumber\\
&\times\prod_{s''\in \mathcal{I}^m}g_{s''}\prod_{s'''\in \mathcal{L}^m\cap \mathcal{L} }\psi_{s'''}\\
=&\frac{1}{\tilde{\psi}_{s_{\text{r}}}}\prod_{s\in \mathcal{L}^m}\psi_{s}(1-g_{s})\prod_{s'\in \mathcal{I}^m}g_{s'}.
\end{align}
\section{Proof of Theorem \ref{theorem_recursive}}\label{proof_recursive}
\begin{align}
&\sum_{m\in \mathcal{M}}p(m|\bm{y})(W^m)^{\top}\bm{\mu}^m\label{ex_wmu}\\
=&\sum_{m\in \mathcal{M}}\sum_{s\in \mathcal{L}^m}p(m|\bm{y})W^{\top}_s\bm{\mu}_s\\
=&\sum_{s\in \mathcal{S}}\bigg\{\sum_{m\in \{m'\in \mathcal{M}\,|\,s\in \mathcal{L}^{m'}\}}p(m|\bm{y})\bigg\}W^{\top}_s\bm{\mu}_s\\
=&\sum_{s\in \mathcal{S}}\bigg\{(1-\tilde{g}_s)\prod_{s'\in \text{An}(s)}\tilde{g}_{s'}\bigg\}W^{\top}_s\bm{\mu}_s\\
&(\because \text{\cite{nakahara} Theorem 2} )\nonumber \\
=&(1-\tilde{g}_{s_{\text{r}}})W^{\top}_{s_{\text{r}}}\bm{\mu}_{s_{\text{r}}}+\tilde{g}_{s_{\text{r}}}\nonumber \\&\times\sum_{s\in \mathcal{S}\backslash\{s_{\text{r}}\}}(1-\tilde{g}_s)W^{\top}_s\bm{\mu}_s\prod_{s'\in \text{An}(s)\backslash\{s_{\text{r}}\}}\tilde{g}_{s'}.\label{tmp1_theorem2}
\end{align}
Note that $\text{An}(s)$ is the set of ancestor nodes of $s\in \mathcal{S}$.
\begin{align}
&\sum_{s\in \mathcal{S}\backslash\{s_{\text{r}}\}}(1-\tilde{g}_s)W^{\top}_s\bm{\mu}_s\prod_{s'\in \text{An}(s)\backslash\{s_{\text{r}}\}}\tilde{g}_{s'} \nonumber\\
&=\sum_{s\in \text{Ch}(s_{\text{r}})}(1-\tilde{g}_s)W^{\top}_s\bm{\mu}_s\prod_{s'\in \text{An}(s)\backslash\{s_{\text{r}}\}}\tilde{g}_{s'}+\\
&\sum_{s''\in \mathcal{S}\backslash(\{s_{\text{r}}\}\cup \text{Ch}(s_{\text{r}}))}(1-\tilde{g}_{s''})W^{\top}_{s''}\bm{\mu}_{s''} \prod_{s'''\in \text{An}(s'')\backslash\{s_{\text{r}}\}}\tilde{g}_{s'''}\nonumber\\
&=\sum_{s\in \text{Ch}(s_{\text{r}})}(1-\tilde{g}_s)W^{\top}_s\bm{\mu}_s+ \\
&\sum_{s'\in \mathcal{S}\backslash(\{s_{\text{r}}\}\cup \text{Ch}(s_{\text{r}}))}(1-\tilde{g}_{s'})W^{\top}_{s'}\bm{\mu}_{s'}\prod_{s''\in \text{An}(s')\backslash\{s_{\text{r}}\}}\tilde{g}_{s''} \nonumber\\
&=\sum_{s\in \text{Ch}(s_{\text{r}})}\bigg\{(1-\tilde{g}_s)W^{\top}_s\bm{\mu}_s+ \tilde{g}_{s}\sum_{s'\in \text{De}(s)}\nonumber\\
&(1-\tilde{g}_{s'})W^{\top}_{s'}\bm{\mu}_{s'}\prod_{s''\in \text{An}(s')\backslash(\{s_{\text{r}}\}\cup \{s\})}\tilde{g}_{s''}\bigg\}.\label{tmp2_theorem2}
\end{align}
Here $\text{De}(s)\subset \mathcal{S}$ denotes the set of (proper) descendants of $s\in\mathcal{S}$; in the last equality, each $s'\in \mathcal{S}\backslash(\{s_{\text{r}}\}\cup \text{Ch}(s_{\text{r}}))$ belongs to $\text{De}(s)$ for exactly one $s\in\text{Ch}(s_{\text{r}})$, and for such $s'$ the factor $\tilde{g}_s$ has been pulled out of the product.
Therefore, from (\ref{tmp1_theorem2}) and (\ref{tmp2_theorem2})
\begin{align}
(\ref{ex_wmu})
=&\bigg\{(1-\tilde{g}_{s_{\text{r}}})W^{\top}_{s_{\text{r}}}\bm{\mu}_{s_{\text{r}}}+\tilde{g}_{s_{\text{r}}}\nonumber \\
&\sum_{s\in \mathcal{S}\backslash\{s_{\text{r}}\}}(1-\tilde{g}_s)W^{\top}_s\bm{\mu}_s\prod_{s'\in \text{An}(s)\backslash\{s_{\text{r}}\}}\tilde{g}_{s'}\bigg\}\label{before}\\
=&(1-\tilde{g}_{s_{\text{r}}})W^{\top}_{s_{\text{r}}}\bm{\mu}_{s_{\text{r}}}+\tilde{g}_{s_{\text{r}}}\nonumber \\
&\times \sum_{s\in \text{Ch}(s_{\text{r}})}\bigg\{(1-\tilde{g}_s)W^{\top}_s\bm{\mu}_s+ \tilde{g}_{s}\sum_{s'\in \text{De}(s)}\nonumber\\
&(1-\tilde{g}_{s'})W^{\top}_{s'}\bm{\mu}_{s'}\prod_{s''\in \text{An}(s')\backslash(\{s_{\text{r}}\}\cup \{s\})}\tilde{g}_{s''}\bigg\}.\label{after}
\end{align}
The braces in (\ref{before}) and in (\ref{after}) have the same structure. Hence the same operation can be carried out recursively by setting $r_s$ as in equation (\ref{r_s}).
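As an illustration of this recursion (not part of the original derivation; the exact form of $r_s$ is assumed here from the common structure of the braces above), the expanded sum can be evaluated bottom-up over the tree:
\begin{verbatim}
# Illustrative sketch (not the authors' code).  It assumes, from the common
# structure of the braces above, the recursion
#     r_s = (1 - g~_s) * W_s^T mu_s + g~_s * sum_{s' in Ch(s)} r_{s'},
# whose value at the root equals the expanded sum.  All numbers are toy values.
def r(s, children, g_tilde, wmu):
    val = (1.0 - g_tilde[s]) * wmu[s]
    if children.get(s):
        val += g_tilde[s] * sum(r(c, children, g_tilde, wmu) for c in children[s])
    return val

children = {0: [1, 2], 1: [], 2: []}   # toy tree: root 0 with two leaf children
g_tilde  = {0: 0.6, 1: 0.3, 2: 0.5}    # hypothetical node weights g~_s
wmu      = {0: 1.0, 1: 2.0, 2: -1.0}   # hypothetical scalar stand-ins for W_s^T mu_s
print(r(0, children, g_tilde, wmu))
\end{verbatim}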
\bibliographystyle{IEEEtran}
| {
"timestamp": "2022-05-03T02:30:07",
"yymm": "2202",
"arxiv_id": "2202.00568",
"language": "en",
"url": "https://arxiv.org/abs/2202.00568",
"abstract": "This study deals with two-dimensional (2D) signal processing using the wavelet packet transform. When the basis is unknown the candidate of basis increases in exponential order with respect to the signal size. Previous studies do not consider the basis as a random vaiables. Therefore, the cost function needs to be used to select a basis. However, this method is often a heuristic and a greedy search because it is impossible to search all the candidates for a huge number of bases. Therefore, it is difficult to evaluate the entire signal processing under a criterion and also it does not always gurantee the optimality of the entire signal processing. In this study, we propose a stochastic generative model in which the basis is regarded as a random variable. This makes it possible to evaluate entire signal processing under a unified criterion i.e. Bayes criterion. Moreover we can derive an optimal signal processing scheme that achieves the theoretical limit. This derived scheme shows that all the bases should be combined according to the posterior in stead of selecting a single basis. Although exponential order calculations is required for this scheme, we have derived a recursive algorithm for this scheme, which successfully reduces the computational complexity from the exponential order to the polynomial order.",
"subjects": "Signal Processing (eess.SP); Machine Learning (cs.LG)",
"title": "Stochastic 2D Signal Generative Model with Wavelet Packets Basis Regarded as a Random Variable and Bayes Optimal Processing",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126475856414,
"lm_q2_score": 0.724870282120402,
"lm_q1q2_score": 0.7094397129702095
} |
https://arxiv.org/abs/1501.03643 | On robust width property for Lasso and Dantzig selector | Recently, Cahill and Mixon completely characterized the sensing operators in many compressed sensing instances with a robust width property. The proposed property allows uniformly stable and robust reconstruction of certain solutions from an underdetermined linear system via convex optimization. However, their theory does not cover the Lasso and Dantzig selector models, both of which are popular alternatives in the statistics community. In this letter, we show that the robust width property can be perfectly applied to these two models as well. Our results solve an open problem left by Cahill and Mixon. | \section{Introduction}
One of the main assignments of compressed sensing is to understand when it is possible to recover structured solutions to underdetermined systems of linear equations \cite{candes2014math}. During the past decade, many reconstruction guarantees have been developed; well-known concepts include the restricted isometry property, the null space property, the coherence property, and dual certificates (the interested reader may refer to \cite{foucart2014math,zhang2012necessary,zhang2014one}). However, none of them has been proved necessary for uniformly stable and robust reconstruction. Recently, Cahill and Mixon in \cite{jameson2014robust} introduced a new notion -- the robust width property -- which completely characterizes the sensing operators in many compressed sensing instances. They restricted their attention to the following constrained optimization problem:
\begin{equation}\label{BP}
\min \|x\|_\sharp,~~\textrm{subject to} ~~ \|\Phi x-y\|_2\leq \epsilon \tag{$Q_{\epsilon}$}
\end{equation}
so that their theory does not cover the Lasso and Dantzig selector models, both of which are popular alternatives in the statistics community. Here, $\|\cdot\|_\sharp$ is some norm used to promote certain structured solutions, the operator $\Phi$ and the data $y$ are given, and $\epsilon$ measures the error. In this letter, we extend their results to two other, arguably more popular, optimization problems of the Lasso/Basis Pursuit and Dantzig selector types. Our results completely solve an open problem left by Cahill and Mixon and hence show that the notion of robust width is indeed a ubiquitous property. In the following, we recall some notation from \cite{jameson2014robust}.
Let $x^\natural$ be some unknown member of a finite-dimensional Hilbert space $\mathcal{H}$, and let $\Phi :\mathcal{H}\rightarrow \mathbb{F}^M$ denote some known linear operator, where $\mathbb{F}$ is either $\mathbb{R}$ or $\mathbb{C}$. The subset $\mathcal{A}\subseteq \mathcal{H}$ collects the structured members of interest. $B_\sharp$ denotes the unit $\sharp$-ball.
\section{Robust width}
The robust width property was formally proposed in \cite{jameson2014robust}. We write down the definition and its equivalent form as follows.
\begin{definition}(\cite{jameson2014robust})
We say a linear operator $\Phi :\mathcal{H}\rightarrow \mathbb{F}^M$ satisfies the $(\rho,\alpha)$-robust width property over $B_\sharp$ if
$$ \|x\|_2\leq \rho \|x\|_\sharp$$
for every $x\in \mathcal{H}$ such that $\|\Phi x\|_2<\alpha \|x\|_2$; or equivalently if
$$\|\Phi x\|_2\geq \alpha \|x\|_2 $$
for every $x\in \mathcal{H}$ such that $\|x\|_2>\rho \|x\|_\sharp$.
\end{definition}
Here, we would like to point out that the definition above is not completely new. In fact, when restricted to the case of $\ell_1$-minimization, it reduces to the $\ell_1$-constrained minimal singular value property, which was originally defined in \cite{tang2011performance}.
\begin{definition}
For any $k \in \{1,2,\cdots, N\}$ and matrix $\Phi\in \mathbb{R}^{M\times N}$, define
the $\ell_1$-constrained minimal singular value of $\Phi$ by
\begin{equation*}r_k(\Phi)=\min_{x\neq 0, x\in S_k}\frac{\|\Phi x\|_2}{\|x\|_2}\end{equation*}
where $S_k=\{x\in \mathbb{R}^N: \|x\|_1\leq \sqrt{k}\|x\|_2\}$. If $r_k(\Phi)>0$, then we say $\Phi$ satisfies the $\ell_1$-constrained minimal singular value property with $r_k(\Phi)$.
\end{definition}
Work \cite{zhang2012constrained} exploited the geometrical aspect of the $\ell_1$-constrained minimal singular value property.
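As a purely numerical illustration (not part of \cite{tang2011performance} or \cite{jameson2014robust}), one can probe $r_k(\Phi)$ by sampling $k$-sparse directions, which always lie in the cone $S_k$; since only finitely many points of $S_k$ are visited, this only yields an upper estimate of the minimum. The matrix dimensions and sample sizes below are arbitrary.
\begin{verbatim}
# Illustrative sketch: a Monte Carlo *upper* estimate of r_k(Phi).
# Exact computation is a nonconvex problem; here we only sample k-sparse
# vectors, which always lie in S_k = {x : ||x||_1 <= sqrt(k)*||x||_2}.
import numpy as np

def r_k_upper_estimate(Phi, k, trials=20000, seed=0):
    rng = np.random.default_rng(seed)
    M, N = Phi.shape
    best = np.inf
    for _ in range(trials):
        supp = rng.choice(N, size=k, replace=False)
        x = np.zeros(N)
        x[supp] = rng.standard_normal(k)
        best = min(best, np.linalg.norm(Phi @ x) / np.linalg.norm(x))
    return best

rng = np.random.default_rng(1)
Phi = rng.standard_normal((50, 200)) / np.sqrt(50)   # Gaussian sensing matrix
print(r_k_upper_estimate(Phi, k=5))
\end{verbatim}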
\section{Main results}
We first introduce the definition of compressed sensing space.
\begin{definition}(\cite{jameson2014robust})\label{cs}
A compressed sensing space $(\mathcal{H},\mathcal{A}, \|\cdot\|_\sharp )$ with bound $L$ consists of a finite-dimensional Hilbert space $\mathcal{H}$, a subset $\mathcal{A}\subseteq \mathcal{H}$, and a norm $\|\cdot\|_\sharp$ on $\mathcal{H}$ with following properties:
(i) $0\in \mathcal{A}$.
(ii) For every $a\in \mathcal{A}$ and $v\in \mathcal{H}$, there exists a decomposition $v= z_1+z_2$ such that
$$\|a+z_1\|_\sharp =\|a\|_\sharp+\|z_1\|_\sharp, ~~~~\|z_2\|_\sharp\leq L\|v\|_2.$$
\end{definition}
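For instance (this example is not spelled out above but follows directly from the definition), take $\mathcal{H}=\mathbb{R}^N$, $\|\cdot\|_\sharp=\|\cdot\|_1$ and $\mathcal{A}=\Sigma_k$, the set of $k$-sparse vectors. For $a\in\Sigma_k$ and $v\in\mathbb{R}^N$, splitting $v=z_1+z_2$ with $z_2$ supported on $\mathrm{supp}(a)$ and $z_1$ on its complement gives $\|a+z_1\|_1=\|a\|_1+\|z_1\|_1$ and $\|z_2\|_1\leq \sqrt{k}\|z_2\|_2\leq \sqrt{k}\|v\|_2$, so $(\mathbb{R}^N,\Sigma_k,\|\cdot\|_1)$ is a compressed sensing space with bound $L=\sqrt{k}$.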
The subdifferential $\partial f(x)$ of a convex function $f$ at $x$ is the set-valued operator \cite{bauschke2011convex} given by
$$\partial f(x)=\{u\in\mathcal{H}: f(y)\geq f(x)+\langle u, y-x\rangle, \forall y\in \mathcal{H}\}.$$
The following lemma will be useful to establish our main results.
\begin{lemma}\label{lem1}
Let $\|\cdot\|_\diamond$ be the dual norm of $\|\cdot\|_\sharp$ on $\mathcal{H}$. If $u\in \partial \|x\|_\sharp$, then $\|u\|_\diamond\leq 1$. If $x\neq 0$ and $u\in \partial \|x\|_\sharp$, then $\|u\|_\diamond= 1$.
\end{lemma}
\begin{proof}
From the convexity of $\|\cdot\|_\sharp$ and the subdifferential definition, for any $u\in \partial \|x\|_\sharp$ and $v\in \mathcal{H}$ it holds
$$ \|v\|_\sharp \geq \|x\|_\sharp +\langle u, v-x\rangle.$$
Set $v=0$ and $v=2x$ to get $\langle u, x\rangle \geq \|x\|_\sharp $ and $\langle u, x\rangle \leq \|x\|_\sharp $ respectively. This implies $\langle u, x\rangle = \|x\|_\sharp $ and hence $\langle u, v\rangle \leq \|v\|_\sharp $. Similarly, by taking $-v\in \mathcal{H}$, we can get $-\langle u, v\rangle \leq \|v\|_\sharp $. Thus, $|\langle u, v\rangle| \leq \|v\|_\sharp $. Therefore, $$\|u\|_\diamond=\sup_{\|v\|_\sharp\leq 1}|\langle u, v\rangle|\leq \sup_{\|v\|_\sharp\leq 1 }\|v\|_\sharp\leq 1.$$
When $x\neq 0$, the definition of the dual norm gives $\|x\|_\sharp=\langle u, x\rangle\leq \|x\|_\sharp \|u\|_\diamond$ and hence $\|u\|_\diamond\geq 1$. So we must have $\|u\|_\diamond= 1$.
\end{proof}
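For example, with $\|\cdot\|_\sharp=\|\cdot\|_1$ on $\mathbb{R}^N$ the dual norm is $\|\cdot\|_\diamond=\|\cdot\|_\infty$, and any $u\in\partial\|x\|_1$ satisfies $u_i=\mathrm{sgn}(x_i)$ on the support of $x$ and $|u_i|\leq 1$ elsewhere; for $x\neq 0$ this gives $\|u\|_\infty=1$, in accordance with Lemma \ref{lem1}.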
Now, we state the characterization of uniformly stable and robust reconstruction via the Lasso/Basis Pursuit type model by utilizing the $(\rho,\alpha)$-robust width property.
\begin{theorem}
For any CS space $(\mathcal{H},\mathcal{A}, \|\cdot\|_\sharp )$ with bound $L$ and any linear operator $\Phi :\mathcal{H}\rightarrow \mathbb{F}^M$, the following are equivalent up to constants:
(a) $\Phi$ satisfies the $(\rho,\alpha)$-robust width property over $B_\sharp$.
(b) For every $x^\natural \in\mathcal{H}, \kappa\in (0,1), \lambda>0$ and $\omega\in\mathbb{F}^M$ satisfying $\|\Phi^T\omega\|_\diamond\leq \kappa\lambda$, any solution $x^*$ to the unconstrained optimization model
\begin{equation}\label{Lasso}
\min \frac{1}{2}\|\Phi x-(\Phi x^\natural +\omega)\|_2^2+\lambda \|x\|_\sharp \tag{$P_{\lambda}$}
\end{equation}
satisfies $\|x^*-x^\natural \|_2 \leq C_0 \|x^\natural -a \|_\sharp +C_1 \cdot \lambda$ for every $a\in \mathcal{A}$.
In particular, (a) implies (b) with
$$C_0=\left( \frac{1-\kappa}{2\rho}-L\right)^{-1},~~~~ C_1=\frac{1+\kappa}{\alpha^2\rho}$$
provided $\rho<\frac{1-\kappa}{2L}$. Also, (b) implies (a) with
$$ \rho =2C_0, ~~~~\alpha=\frac{\kappa}{2\tau C_1},$$
where $\tau=\sup_{\|x\|_\sharp\leq 1}\|\Phi x\|_2$.
\end{theorem}
\begin{proof}
Let $z=x^*-x^\natural$. We divide the proof of $(a)\Rightarrow (b)$ into four steps. They are partially inspired by \cite{candes2011tight} and \cite{jameson2014robust}.
\textbf{Step 1:} Prove the first relationship:
\begin{equation}\label{step1eq}
\|x^*\|_\sharp-\kappa \|z\|_\sharp \leq \|x^\natural\|_\sharp.
\end{equation}
Since $x^*$ is a minimizer to \eqref{Lasso}, we have
$$\frac{1}{2}\|\Phi x^*-(\Phi x^\natural +w)\|_2^2+\lambda \|x^*\|_\sharp\leq \frac{1}{2}\|\Phi x^\natural -(\Phi x^\natural +w)\|_2^2+\lambda \|x^\natural \|_\sharp.$$
Hence,
$$\frac{1}{2}\|(\Phi x^*-\Phi x^\natural)-w\|_2^2+\lambda \|x^*\|_\sharp\leq \frac{1}{2}\|w\|_2^2+\lambda \|x^\natural \|_\sharp.$$
Rearrange terms to give
$$\lambda \|x^*\|_\sharp\leq -\frac{1}{2}\|\Phi (x^* -x^\natural)\|_2^2+ \langle\Phi(x^*-x^\natural), w\rangle +\lambda \|x^\natural \|_\sharp \leq \langle x^*-x^\natural, \Phi^T w\rangle +\lambda \|x^\natural \|_\sharp.$$ By the definition of the dual norm and the condition $\|\Phi^Tw\|_\diamond\leq \kappa\lambda$, we obtain that
$$\langle x^*-x^\natural, \Phi^T w\rangle\leq \|x^*-x^\natural\|_\sharp \|\Phi^Tw\|_\diamond\leq \kappa\lambda\|x^*-x^\natural\|_\sharp.$$
Thus, $\lambda \|x^*\|_\sharp\leq \kappa\lambda\|x^*-x^\natural\|_\sharp+ \lambda \|x^\natural \|_\sharp$ from which the first relationship follows.
\textbf{Step 2:} Prove the second relationship:
\begin{equation}\label{step2eq}
\|z\|_\sharp\leq \frac{2}{1-\kappa}\|x^\natural -a\|_\sharp +\frac{2L}{1-\kappa}\|z\|_2.
\end{equation}
Pick $a\in \mathcal{A}$, and decompose $z=x^*-x^\natural=z_1+z_2$ according to the property (ii) in Definition \ref{cs} so that $\|a+z_1\|_\sharp =\|a\|_\sharp+\|z_1\|_\sharp$ and $\|z_2\|_\sharp\leq L\|z\|_2.$ In light of \eqref{step1eq}, we derive that
\begin{align*}
\|a\|_\sharp +\|x^\natural -a\|_\sharp &\geq \|x^\natural\|_\sharp \\
&\geq \|x^*\|_\sharp-\kappa \|z\|_\sharp \\
& =\|x^\natural+(x^*-x^\natural)\|_\sharp-\kappa \|x^*-x^\natural\|_\sharp\\
& =\|a + (x^\natural-a)+z_1+z_2\|_\sharp-\kappa \|z_1+z_2\|_\sharp\\
&\geq \|a+z_1\|_\sharp-\| x^\natural-a \|_\sharp-(1+\kappa)\|z_2\|_\sharp-\kappa \|z_1\|_\sharp\\
&=\|a\|_\sharp + \|z_1\|_\sharp-\| x^\natural-a \|_\sharp-(1+\kappa)\|z_2\|_\sharp-\kappa \|z_1\|_\sharp\\
&=\|a\|_\sharp + (1-\kappa)\|z_1\|_\sharp-\| x^\natural-a \|_\sharp-(1+\kappa)\|z_2\|_\sharp.
\end{align*}
Rearrange terms to give $$\|z_1\|_\sharp\leq \frac{2}{1-\kappa}\|x^\natural -a\|_\sharp+\frac{1+\kappa}{1-\kappa}\|z_2\|_\sharp$$ which implies
$$\|z\|_\sharp\leq \|z_1\|_\sharp+ \|z_2\|_\sharp\leq \frac{2}{1-\kappa}\|x^\natural -a\|_\sharp+\frac{2}{1-\kappa}\|z_2\|_\sharp.$$
Thus, the second relationship follows by invoking $\|z_2\|_\sharp\leq L\|z\|_2.$
\textbf{Step 3:} Derive the upper bound:
\begin{equation}
\|\Phi z\|_2^2\leq (1+\kappa)\lambda \|z\|_\sharp.
\end{equation}
The optimality condition of \eqref{Lasso} reads
$$\Phi^T(\Phi x^\natural +w-\Phi x^*)\in \lambda\cdot\partial \|x^*\|_\sharp.$$
By using Lemma \ref{lem1}, we get $\|\Phi^T(\Phi x^\natural +w-\Phi x^*)\|_\diamond \leq \lambda$. Thus,
\begin{subequations}
\begin{align*}
\|\Phi^T\Phi z\|_\diamond &= \|\Phi^T\Phi (x^*-x^\natural)\|_\diamond \\
&\leq \|\Phi^T(\Phi x^*-\Phi x^\natural-w)\|_\diamond +\|\Phi^Tw\|_\diamond \\
& \leq \lambda+\kappa \lambda=(1+\kappa)\lambda.
\end{align*}
\end{subequations}
Therefore,
$$\|\Phi z\|_2^2 = \langle z, \Phi^T\Phi z\rangle
\leq \|z\|_\sharp\cdot \|\Phi^T\Phi z\|_\diamond\leq (1+\kappa)\lambda \|z\|_\sharp,$$
where the first inequality follows from the definition of the dual norm $\|\cdot\|_\diamond$.
\textbf{Step 4:} Finish the proof. Assume
$ \|z\|_2> C_0\cdot \|x^\natural -a \|_\sharp $, since otherwise we are done. In light of \eqref{step2eq}, we obtain
$$\|z\|_\sharp<\left[\frac{2}{C_0(1-\kappa)}+\frac{2L}{1-\kappa}\right]\|z\|_2=\rho^{-1}\|z\|_2,$$
i.e., $\|z\|_2>\rho\|z\|_\sharp$. By the $(\rho,\alpha)$-robust width property of $\Phi$, we have
$\|\Phi z\|_2\geq \alpha \|z\|_2$. Utilizing the upper bound of $\|\Phi z\|_2^2$ in Step 3, we derive that
$$\alpha^2\|z\|_2^2\leq \|\Phi z\|_2^2\leq (1+\kappa)\lambda \|z\|_\sharp<\frac{(1+\kappa)\lambda}{\rho}\|z\|_2.$$
Thus, $$\|z\|_2\leq \frac{(1+\kappa)\lambda}{\alpha^2\rho}=C_1\cdot \lambda\leq C_0 \|x^\natural -a \|_\sharp +C_1\cdot \lambda.$$
This completes the proof of $(a)\Rightarrow (b)$.
The proof of $(b)\Rightarrow (a)$. Pick $x^\natural$ such that $\|\Phi x^\natural\|_2<\alpha\|x^\natural\|_2$. By the definition of $\tau=\sup_{\|x\|_\sharp\leq 1}\|\Phi x\|_2$ and the Cauchy--Schwarz inequality, we derive that
\begin{subequations}
\begin{align*}
\tau\cdot \alpha\|x^\natural\|_2 & >\tau\cdot \|\Phi x^\natural\|_2 = \sup_{\|x\|_\sharp\leq 1}\|\Phi x\|_2 \cdot \|\Phi x^\natural\|_2 \\
& \geq \sup_{\|x\|_\sharp\leq 1}\langle \Phi x, \Phi x^\natural\rangle = \sup_{\|x\|_\sharp\leq 1}\langle x, \Phi ^T \Phi x^\natural\rangle\\
&= \|\Phi ^T \Phi x^\natural\|_\diamond.
\end{align*}
\end{subequations}
Let $\lambda=\kappa^{-1}\tau\alpha \|x^\natural\|_2$ and $\omega=-\Phi x^\natural$. Then, we have
$$\kappa\lambda = \tau\cdot \alpha \|x^\natural\|_2\geq \|\Phi ^T \Phi x^\natural\|_\diamond=\|\Phi^T w\|_\diamond,$$
which shows that this choice of $\lambda$ and $\omega$ satisfies the required condition $\|\Phi^T \omega\|_\diamond\leq \kappa\lambda$. Moreover, since $\Phi x^\natural+\omega=0$, the objective of \eqref{Lasso} reduces to $\frac{1}{2}\|\Phi x\|_2^2+\lambda\|x\|_\sharp$, and hence $x^*=0$ is a minimizer of \eqref{Lasso}. Thus,
$$\|x^\natural\|_2=\|x^*-x^\natural\|_2\leq C_0\|x^\natural\|_\sharp+C_1\lambda = C_0\|x^\natural\|_\sharp+C_1\kappa^{-1}\tau\alpha \|x^\natural\|_2.$$
Take $\alpha =\frac{\kappa}{2\tau C_1}$ and $\rho=2C_0$ and rearrange terms to give $$\|x^\natural\|_2\leq \frac{C_0}{1-C_1\kappa^{-1}\tau\alpha}\|x^\natural\|_\sharp=\rho \|x^\natural\|_\sharp.$$
So the $(\rho,\alpha)$-robust width property of $\Phi$ holds.
\end{proof}
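To make the unconstrained model \eqref{Lasso} concrete, consider the prototypical case $\mathcal{H}=\mathbb{R}^N$ with $\|\cdot\|_\sharp=\|\cdot\|_1$ (so $\|\cdot\|_\diamond=\|\cdot\|_\infty$), i.e. the standard Lasso. The following sketch is illustrative only and not part of this letter; it solves \eqref{Lasso} by proximal gradient descent (ISTA), where the choice $\lambda=2\|\Phi^T\omega\|_\infty$ enforces $\|\Phi^T\omega\|_\diamond\leq\kappa\lambda$ with $\kappa=1/2$, and the problem sizes and iteration count are arbitrary.
\begin{verbatim}
# Illustrative sketch (not the letter's method): ISTA for
#   min_x 0.5*||Phi x - y||_2^2 + lam*||x||_1 .
# All problem sizes and the iteration count are arbitrary choices.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(Phi, y, lam, iters=500):
    L = np.linalg.norm(Phi, 2) ** 2        # Lipschitz constant of the smooth part
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        grad = Phi.T @ (Phi @ x - y)       # gradient of 0.5*||Phi x - y||^2
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
M, N, s = 60, 200, 5
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
omega = 0.01 * rng.standard_normal(M)
lam = 2 * np.linalg.norm(Phi.T @ omega, np.inf)  # so ||Phi^T omega||_inf <= lam/2
x_hat = ista(Phi, Phi @ x_true + omega, lam)
print(np.linalg.norm(x_hat - x_true))
\end{verbatim}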
\begin{remark}
In the paper \cite{jameson2014robust}, to obtain a corresponding result for \eqref{BP}, it suffices for $\|\cdot\|_\sharp$ to satisfy:
(i) $\|x\|_\sharp \geq \|0\|_\sharp$ for every $x\in \mathcal{H}$, and
(ii) $\|x+y\|_\sharp \leq \|x\|_\sharp +\|y\|_\sharp$ for every $x, y\in\mathcal{H}$.
In contrast, Theorem 1 not only requires (i) and (ii) above, but also utilizes the convexity of $\|\cdot\|_\sharp$ and its dual norm. The additional requirement of convexity excludes the cases of nonconvex $\|\cdot\|_\sharp$. For example, the case of
$$\|x\|_\sharp=\|x\|_p^p:=\sum_{i=1}^N|x_i|^p, ~~0<p<1$$
is not covered by Theorem 1.
\end{remark}
With very similar arguments, we can show the following theorem which characterizes the uniformly stable and robust reconstruction via the Dantzig type model by utilizing the $(\rho,\alpha)$-robust width property.
\begin{theorem}
For any CS space $(\mathcal{H},\mathcal{A}, \|\cdot\|_\sharp )$ with bound $L$ and any linear operator $\Phi :\mathcal{H}\rightarrow \mathbb{F}^M$, the following are equivalent up to constants:
(a) $\Phi$ satisfies the $(\rho,\alpha)$-robust width property over $B_\sharp$.
(b) For every $x^\natural \in\mathcal{H}, \lambda>0$ and $\omega \in\mathbb{F}^M$ satisfying $\|\Phi^T\omega\|_\diamond\leq \lambda$, any solution $x^*$ to the following optimization model
\begin{equation}\label{Dant}
\min \|x\|_\sharp, ~~~\textrm{subject to}~~ \|\Phi^T(\Phi x-(\Phi x^\natural +\omega))\|_\diamond\leq \lambda \tag{$R_{\lambda}$}
\end{equation}
satisfies $\|x^*-x^\natural \|_2 \leq C_0 \|x^\natural -a \|_\sharp +C_1 \cdot \lambda$ for every $a\in \mathcal{A}$.
In particular, (a) implies (b) with
$$C_0=\left( \frac{1}{2\rho}-L\right)^{-1},~~~~ C_1=\frac{2}{\alpha^2\rho}$$
provided $\rho<\frac{1}{2L}$. Also, (b) implies (a) with
$$ \rho =2C_0, ~~~~\alpha=\frac{1}{2\tau C_1},$$
where $\tau=\sup_{\|x\|_\sharp\leq 1}\|\Phi x\|_2$.
\end{theorem}
\begin{proof} The proof below follows the pattern of the proof of Theorem 1. Let $z=x^*-x^\natural$.
\textbf{Step 1: }Since $x^*$ is a minimizer of \eqref{Dant}, it holds that $\|x^*\|_\sharp\leq \|x^\natural\|_\sharp$. Now, repeat the argument for Step 2 in the proof of Theorem 1 to give
\begin{equation*}
\|z\|_\sharp\leq 2\|x^\natural -a\|_\sharp +2L\|z\|_2.
\end{equation*}
\textbf{Step 2:} Prove the upper bound:
\begin{equation*}
\|\Phi z\|_2^2\leq 2\lambda \|z\|_\sharp.
\end{equation*}
This follows from
$$\|\Phi^T\Phi z\|_\diamond \leq \|\Phi^T(\Phi x^*-(\Phi x^\natural +w))\|_\diamond + \|\Phi^Tw\|_\diamond\leq 2 \lambda$$
and
$$\|\Phi z\|_2^2 = \langle z, \Phi^T\Phi z\rangle\leq \|z\|_\sharp\cdot \|\Phi^T\Phi z\|_\diamond.$$
The remaining part of the proof of $(a)\Rightarrow (b)$ follows by repeating the argument for Step 4 in the proof of Theorem 1.
The proof of $(b)\Rightarrow (a)$. Pick $x^\natural$ such that $\|\Phi x^\natural\|_2<\alpha\|x^\natural\|_2$. Let $\lambda=\tau\alpha \|x^\natural\|_2$ and $\omega=-\Phi x^\natural$. As shown in the proof of Theorem 1, this choice of $\lambda$ and $\omega$ satisfies the condition $\|\Phi^T\omega\|_\diamond\leq \lambda$, and hence $x^*=0$ is the unique minimizer of \eqref{Dant}. The remaining part of the proof of $(b)\Rightarrow (a)$ follows by repeating the corresponding argument in the proof of Theorem 1.
\end{proof}
Note that the convexity of $\|\cdot\|_\sharp$ is not involved in the proof of Theorem 2.
\section*{Acknowledgements}
The author would like to thank Dr. Jameson Cahill for his communication and the anonymous reviewers for their valuable comments, which have led to great improvements in this manuscript. The work is supported by the National Natural Science Foundation of China (Nos. 11501569 and 61571008).
| {
"timestamp": "2016-04-05T02:09:35",
"yymm": "1501",
"arxiv_id": "1501.03643",
"language": "en",
"url": "https://arxiv.org/abs/1501.03643",
"abstract": "Recently, Cahill and Mixon completely characterized the sensing operators in many compressed sensing instances with a robust width property. The proposed property allows uniformly stable and robust reconstruction of certain solutions from an underdetermined linear system via convex optimization. However, their theory does not cover the Lasso and Dantzig selector models, both of which are popular alternatives in the statistics community. In this letter, we show that the robust width property can be perfectly applied to these two models as well. Our results solve an open problem left by Cahill and Mixon.",
"subjects": "Information Theory (cs.IT); Optimization and Control (math.OC)",
"title": "On robust width property for Lasso and Dantzig selector",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126469647338,
"lm_q2_score": 0.724870282120402,
"lm_q1q2_score": 0.709439712520132
} |
https://arxiv.org/abs/math/0703134 | On the spectral norm of a random Toeplitz matrix | Suppose that $T_n$ is a Toeplitz matrix whose entries come from a sequence of independent but not necessarily identically distributed random variables with mean zero. Under some additional tail conditions, we show that the spectral norm of $T_n$ is of the order $\sqrt{n \log n}$. The same result holds for random Hankel matrices as well as other variants of random Toeplitz matrices which have been studied in the literature. | \section{Introduction and results}\label{S:intro}
Let $X_0,X_1,X_2,\dotsc$ be a family of independent random
variables. For $n\ge 2$, $T_n$ denotes the $n\times n$ random
symmetric Toeplitz matrix $T_n = \big[ X_{|j-k|}\big]_{1\le j, k\le
n}$,
\[
T_n = \begin{bmatrix}
X_0 & X_1 & X_2 & \cdots & X_{n-2} & X_{n-1} \\
X_1 & X_0 & X_1 & & & X_{n-2} \\
X_2 & X_1 & X_0 & & & \vdots \\
\vdots & & & \ddots & & \vdots\\
X_{n-2} & & & & X_0 & X_1 \\
X_{n-1} & X_{n-2} & \hdotsfor{2} & X_1 & X_0
\end{bmatrix}.
\]
In \cite{Bai}, Bai asked whether the spectral measure of
$n^{-1/2} T_n$ approaches a deterministic limit measure $\mu$
as $n\to \infty$. Bryc, Dembo, and Jiang \cite{BDJ} and Hammond and
Miller \cite{HM} independently proved that this is so when the $X_j$
are identically distributed with variance $1$, and that with these
assumptions $\mu$ does not depend on the distribution of the
$X_j$. The measure $\mu$ does not appear to be a previously studied
probability measure, and is described via rather complicated
expressions for its moments.
This limiting spectral measure $\mu$ has unbounded support, which
raises the question of the asymptotic behavior of the spectral norm
$\| T_n \|$, i.e., the maximum absolute value of an eigenvalue of
$T_n$. (This problem is explicitly raised in \cite[Remark 1.3]{BDJ}.)
This paper shows, under slightly different assumptions from
\cite{BDJ,HM}, that $\| T_n \|$ is of the order $\sqrt{n\log n}$. Here
the $X_j$ need not be identically distributed, but satisfy stronger
moment or tail conditions than in \cite{BDJ,HM}. The spectral norm is
also of the same order for other related random matrix ensembles,
including random Hankel matrices. In the case of Hankel matrices,
Theorems \ref{T:upper} and \ref{T:lower} below generalize in a
different direction a special case of a result of Masri and Tonge
\cite{MT} on multilinear Hankel forms with $\pm 1$ Bernoulli entries.
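As a quick numerical illustration of this growth rate (not taken from the paper; matrix sizes and sample counts are arbitrary), one can generate random symmetric Toeplitz matrices with i.i.d.\ standard Gaussian entries and track the ratio $\|T_n\|/\sqrt{n\log n}$:
\begin{verbatim}
# Illustrative experiment: spectral norm of a random symmetric Toeplitz
# matrix versus sqrt(n log n), with i.i.d. standard Gaussian entries.
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
for n in (200, 400, 800, 1600):
    ratios = []
    for _ in range(10):
        col = rng.standard_normal(n)    # X_0, ..., X_{n-1}
        Tn = toeplitz(col)              # symmetric Toeplitz [X_{|j-k|}]
        ratios.append(np.linalg.norm(Tn, 2) / np.sqrt(n * np.log(n)))
    print(n, np.mean(ratios))
\end{verbatim}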
\medskip
A random variable $X$ will be called \emph{subgaussian} if
\begin{equation}\label{E:subgaussian}
\mathbb{P} \big[|X| \ge t\big] \le 2 e^{-at^2} \quad \forall t>0
\end{equation}
for some constant $a>0$. A family of random variables is
\emph{uniformly subgaussian} if each satisfies \eqref{E:subgaussian}
for the same constant $a$.
\begin{thm}\label{T:upper}
Suppose $X_0,X_1,X_2,\dotsc$ are independent, uniformly subgaussian
random variables with $\mathbb{E} X_j = 0$ for all $j$. Then
\[
\mathbb{E} \| T_n \| \le c_1 \sqrt{n\log n},
\]
where $c_1 > 0$ depends only on the constant $a$ in the subgaussian estimate
\eqref{E:subgaussian} for the $X_j$.
\end{thm}
Simple scaling considerations show that one can take $c_1 = C
a^{-1/2}$ for some absolute constant $C>0$. In principle an explicit
value for $C$ can be extracted from the proof of Theorem
\ref{T:upper}. No attempt has been made to do so, since the
techniques used in this paper are suited for determining rough orders
of growth, and not precise constants. Similar remarks apply to the
constants which appear in the statements of Theorems \ref{T:limsup}
and \ref{T:lower} below.
\medskip
By strengthening the subgaussian assumption, the statement of Theorem
\ref{T:upper} can be improved from a bound on expectations to an
almost sure asymptotic bound. Recall that a real-valued random
variable $X$ (or more properly, its distribution) is said to satisfy a
\emph{logarithmic Sobolev inequality} with constant $A$ if
\[
\mathbb{E} \big[f^2(X) \log f^2(X)\big] \le 2A \ \mathbb{E} \big[ f'(X)^2\big]
\]
for every smooth $f:\mathbb{R}\to \mathbb{R}$ such that $\mathbb{E} f^2(X)=1$. Standard normal
random variables satisfy a logarithmic Sobolev inequality with
constant $1$. Furthermore, it is well known that independent random
variables with bounded logarithmic Sobolev constants are uniformly
subgaussian and possess the same concentration properties as
independent normal random variables (see \cite{Ledoux1} or
\cite[Chapter 5]{Ledoux2}).
\begin{thm}\label{T:limsup}
Suppose $X_0,X_1,X_2,\dotsc$ are independent, $\mathbb{E} X_j=0$ for all $j$,
and for some constant $A$, either:
\begin{enumerate}
\item \label{I:bounded} for all $j$, $|X_j|\le A$ almost surely; or
\item \label{I:LSI} for all $j$, $X_j$ satisfies a logarithmic Sobolev
inequality with constant $A$.
\end{enumerate}
Then
\[
\limsup_{n\to\infty} \frac{\| T_n \|}{\sqrt{n\log n}} \le c_2
\]
almost surely, where $c_2 > 0$ depends only on $A$.
\end{thm}
We remark that according to the definition used here, $T_n$ is a
submatrix of $T_{n+1}$, but this is only a matter of convenience in
notation. Theorem \ref{T:limsup} remains true regardless of the
dependence among the random matrices $T_n$ for different values of
$n$.
It seems unlikely that the stronger hypotheses of Theorem
\ref{T:limsup} are necessary. In fact a weaker version can be proved
under the hypotheses of Theorem \ref{T:upper} alone; see the remarks
following the proof of Theorem \ref{T:limsup} in Section
\ref{S:proofs}.
\medskip
When the $X_j$ have variance $1$, the upper bound $\sqrt{n\log n}$ of
Theorems \ref{T:upper} and \ref{T:limsup} is of the correct order. In
fact the matching lower bound holds under less restrictive tail
assumptions, as the next result shows.
\begin{thm}\label{T:lower}
Suppose $X_0,X_1,X_2,\dotsc$ are independent and for some constant $B$,
each $X_j$ satisfies
\[
\mathbb{E} X_j = 0, \quad \mathbb{E} X_j^2 = 1, \quad \mathbb{E} |X_j| \ge B.
\]
Then
\[
\mathbb{E} \| T_n \| \ge c_3 \sqrt{n\log n},
\]
where $c_3 > 0$ depends only on $B$.
\end{thm}
In the case that $\mathbb{E} X_j^2 = 1$ and $\mathbb{E} |X_j|^3 < \infty$, it is a
consequence of H\"older's inequality that $\mathbb{E} |X_j| \ge (\mathbb{E}
|X_j|^3)^{-1}$. Thus the lower bound on first absolute moments
assumed in Theorem \ref{T:lower} is weaker than an upper bound on
absolute third moments, and is in particular satisfied for uniformly
subgaussian random variables.
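To spell out the H\"older step: writing $X_j^2=|X_j|^{1/2}\cdot|X_j|^{3/2}$ and applying the Cauchy--Schwarz inequality gives $1=\mathbb{E} X_j^2\le \big(\mathbb{E}|X_j|\big)^{1/2}\big(\mathbb{E}|X_j|^3\big)^{1/2}$, which rearranges to $\mathbb{E}|X_j|\ge \big(\mathbb{E}|X_j|^3\big)^{-1}$.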
\medskip
Section \ref{S:proofs} below contains the proofs of Theorems
\ref{T:upper}--\ref{T:lower}. As mentioned above, Theorems
\ref{T:upper}--\ref{T:lower} also hold for other ensembles of random
Toeplitz matrices, as well as for random Hankel matrices. Section
\ref{S:remarks} discusses these extensions of the theorems and makes
some additional remarks.
\medskip
{\em Acknowledgement.} The author thanks A.\ Dembo for pointing out
the problem considered in this paper.
\bigskip
\section{Proofs}\label{S:proofs}
The proof of Theorem \ref{T:upper} is based on Dudley's entropy bound
\cite{Dudley} for the supremum of a subgaussian random process. Given
a random process $\{Y_x : x\in M\}$, a pseudometric on $M$ may be
defined by
\[
d(x,y) = \sqrt{\mathbb{E} |Y_x - Y_y|^2}.
\]
The process $\{Y_x : x\in M\}$ is called \emph{subgaussian} if
\begin{equation}\label{E:process}
\forall x,y\in M,\ \forall t>0, \quad
\mathbb{P} \big[|Y_x - Y_y| \ge t \big] \le 2 \exp\left[ -
\frac{b\ t^2}{d(x,y)^2} \right]
\end{equation}
for some constant $b>0$. For $\varepsilon > 0$, the $\varepsilon$-covering
number of $(M,d)$, $N(M,d,\varepsilon)$, is the smallest cardinality of a
subset $\mathcal{N}\subset M$ such that
\[
\forall x\in M \ \exists y\in \mathcal{N} : \ d(x,y)\le \varepsilon.
\]
Dudley's entropy bound is the following (see \cite[Proposition
2.1]{Talagrand2} for the version given here).
\begin{prop}\label{T:Dudley}
Let $\{Y_x : x\in M\}$ be a subgaussian random process with $\mathbb{E} Y_x =
0$ for every $x\in M$. Then
\[
\mathbb{E} \sup_{x\in M} |Y_x| \le K \int_0^\infty \sqrt{\log N(M,d,\varepsilon)}
\ d \varepsilon,
\]
where $K>0$ depends only on the constant $b$ in the subgaussian estimate
\eqref{E:process} for the process.
\end{prop}
\medskip
We will also need the following version of the classical Azuma-Hoeffding
inequality. This can be proved by a standard Laplace transform argument;
see e.g.\ \cite[Fact 2.1]{LPRT}.
\begin{prop}\label{T:AH}
Let $X_1,\dotsc,X_n$ be independent, symmetric, uniformly subgaussian
random variables. Then for any $a_1,\dotsc,a_n \in \mathbb{R}$ and $t>0$,
\[
\mathbb{P} \Biggl[\biggl|\sum_{j=1}^n a_j X_j \biggr| \ge t \Biggr]
\le 2 \exp \left[-\frac{b\ t^2}{\sum_{j=1}^n a_j^2}\right],
\]
where $b>0$ depends only on the constant $a$ in the subgaussian estimate
\eqref{E:subgaussian} for the $X_j$.
\end{prop}
\medskip
\begin{proof}[Proof of Theorem \ref{T:upper}]
We first reduce to the case in which each $X_j$ is symmetric. Let
$T_n'$ be an independent copy of $T_n$. Since $\mathbb{E} T_n = 0$, by
Jensen's inequality,
\[
\mathbb{E} \| T_n \| \le \mathbb{E} \big[ \mathbb{E} \big[ \|T_n-T_n'\| \big| T_n \big]\big]
= \mathbb{E} \|T_n-T_n'\|.
\]
The random Toeplitz matrix $(T_n-T_n')$ has entries $(X_j-X_j')$ which
are independent, symmetric, uniformly subgaussian random variables
(with a possibly smaller constant $a$ in the subgaussian estimate).
Thus we may assume without loss of generality that the $X_j$ are
symmetric random variables.
\medskip
We next bound $\|T_n\|$ by the supremum of a subgaussian random
process. A basic feature of the theory of Toeplitz matrices is
their relationship to multiplication operators (cf.\ \cite[Chapter
1]{BS}). Specifically, the finite Toeplitz matrix $T_n$ is an $n\times
n$ submatrix of the infinite Laurent matrix
\[
L_n = \big[ X_{|j-k|} {\bf 1}_{|j-k| \le n-1} \big]_{j,k \in \mathbb{Z}}.
\]
Consider $L_n$ as an operator on $\ell^2(\mathbb{Z})$ in the canonical way,
and let $\psi:\ell^2(\mathbb{Z}) \to L^2[0,1]$ denote the usual trigonometric
isometry $\psi(e_j)(x) = e^{2\pi ijx}$. Then $\psi L_n
\psi^{-1}:L^2\to L^2$ is the multiplication operator corresponding to
the $L^\infty$ function
\[
f(x) = \sum_{j=-(n-1)}^{n-1} X_{|j|} e^{2\pi i jx}
= X_0 + 2 \sum_{j=1}^{n-1} \cos(2\pi j x) X_j.
\]
Therefore
\begin{equation}\label{E:norms}
\|T_n \| \le \|L_n\| = \| f \|_\infty = \sup_{0\le x \le 1} |Y_x|,
\end{equation}
where
\[
Y_x = X_0 + 2 \sum_{j=1}^{n-1} \cos(2\pi j x) X_j.
\]
By Proposition \ref{T:AH}, the random process $\{Y_x : x\in[0,1]\}$
becomes subgaussian if $M=[0,1]$ is equipped with the pseudometric
\[
d(x,y) = \sqrt{\sum_{j=1}^{n-1} \big[ \cos(2\pi j x)
- \cos(2\pi jy)\big]^2}.
\]
\medskip
Finally, we bound $N([0,1],d,\varepsilon)$ in order to apply Proposition
\ref{T:Dudley}. Since $|\cos t| \le 1$ always, it follows that $d(x,y)
< 2\sqrt{n}$ and therefore $N([0,1],d,\varepsilon) = 1$ if $\varepsilon
> 2\sqrt{n}$. Next, since $|\cos s - \cos t| \le |s-t|$,
\[
d(x,y) \le 2\pi |x-y| \sqrt{\sum_{j=1}^{n-1} j^2} < 4 n^{3/2} |x-y|,
\]
which implies that
\[
N\bigl([0,1],d,\varepsilon\bigr) \le N\left([0,1],|\cdot |,
\frac{\varepsilon}{4n^{3/2}}\right) \le \frac{4n^{3/2}}{\varepsilon}.
\]
By \eqref{E:norms}, Proposition \ref{T:Dudley}, and the substitution
$\varepsilon = 4n^{3/2}e^{-t^2}$,
\begin{equation}\label{E:upper-Dudley}
\mathbb{E} \|T_n\| \le K \int_0^{2\sqrt{n}} \sqrt{\log\left(
\frac{4n^{3/2}}{\varepsilon}\right)}\ d\varepsilon
= 2\sqrt{2} n^{3/2} K \int_{\sqrt{2\log 2n}}^\infty t^2 e^{-t^2/2} \ dt.
\end{equation}
Integration by parts and the classical estimate
$\frac{1}{\sqrt{2\pi}}\int_s^\infty e^{-t^2/2}\ dt \le e^{-s^2/2}$ for
$s>0$ yield
\[
\int_s^\infty t^2 e^{-t^2/2}\ dt \le \big(s+\sqrt{2\pi}\big) e^{-s^2/2}.
\]
Combining the case $s=\sqrt{2\log 2n}$ of this estimate with
\eqref{E:upper-Dudley} completes the proof.
\end{proof}
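The reduction \eqref{E:norms} is also easy to check numerically. The following sketch (not from the paper) compares $\|T_n\|$ with the maximum of $|f|$ over a fine grid; the grid maximum slightly underestimates the true supremum, and the matrix size and grid resolution are arbitrary.
\begin{verbatim}
# Illustrative check of ||T_n|| <= sup_x |X_0 + 2 sum_j X_j cos(2 pi j x)|;
# the supremum is approximated on a grid, hence slightly underestimated.
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
n = 300
X = rng.standard_normal(n)
Tn = toeplitz(X)
xs = np.linspace(0.0, 1.0, 20001)
j = np.arange(1, n)
f = X[0] + 2.0 * np.cos(2.0 * np.pi * np.outer(xs, j)) @ X[1:]
print(np.linalg.norm(Tn, 2), np.abs(f).max())
\end{verbatim}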
\medskip
The proof of Theorem \ref{T:limsup} is based on rather classical
measure concentration arguments commonly applied to probability in
Banach spaces.
\begin{proof}[Proof of Theorem \ref{T:limsup}]
Denote by $M_0$ the $n\times n$ identity matrix, and for
$m=1,\dotsc,n-1$ let $M_m = \bigl[{\bf 1}_{|j-k|=m}\bigr]_{1\le j,k\le
n}$. Then $T_n$ can be written as the sum
\[
T_n = \sum_{j=0}^{n-1} X_j M_j
\]
of independent random vectors in the finite-dimensional Banach space
$\mathcal{M}_n$ equipped with the spectral norm. Observe that $\|M_j\|
\le 2$ for every $j$.
Under the assumption (\ref{I:bounded}), up to the precise values of
constants the estimate
\[
\mathbb{P} \bigl[ \| T_n \| \ge \mathbb{E} \| T_n \| + t \bigr]
\le e^{-t^2 / 32A^2n} \quad \forall t>0
\]
follows from any of several standard approaches to concentration of
measure (cf.\ Corollary 1.17, Corollary 4.5, or Theorem 7.3 of
\cite{Ledoux2}; the precise statement can be proved from Corollary
1.17). Combining this with Theorem \ref{T:upper} yields
\[
\mathbb{P}\Bigl[\|T_n \| \ge (c_1 + 8A) \sqrt{n\log n}\Bigr] \le \frac{1}{n^2},
\]
which completes the proof via the Borel-Cantelli lemma.
The proof under the assumption (\ref{I:LSI}) is similar. By the triangle
inequality and the Cauchy-Schwarz inequality,
\[
\|T_n\| \le 2 \sqrt{n \sum_{j=0}^{n-1} X_j^2},
\]
so that the map $(X_0,\dotsc,X_{n-1}) \mapsto \|T_n\|$ has Lipschitz
constant bounded by $2\sqrt{n}$. By the well-known tensorization and
measure concentration properties of logarithmic Sobolev inequalities
(cf.\ \cite[Sections 2.1--2.3]{Ledoux1} or \cite[Sections
5.1--5.2]{Ledoux2}),
\[
\mathbb{P} \bigl[ \| T_n \| \ge \mathbb{E} \| T_n \| + t \big]
\le e^{-t^2 / 4An} \quad \forall t>0.
\]
The proof is completed in the same way as before (with a different
dependence of $c_2$ on $A$).
\end{proof}
As remarked above, a weaker version of Theorem \ref{T:limsup} may be
proved under the assumptions of Theorem \ref{T:upper} alone. From the
proof of Proposition \ref{T:Dudley} in \cite{Talagrand2} one can
extract the following tail inequality under the assumptions of
Proposition \ref{T:Dudley}:
\begin{equation}\label{E:Dudley-tail}
\mathbb{P} \biggl[\sup_{x\in M} |Y_x| \ge t\biggr] \le 2 e^{-c t^2/\alpha^2}
\quad \forall t>0, \quad \mbox{where} \quad
\alpha=\int_0^\infty \sqrt{\log N(M,d,\varepsilon)} \ d \varepsilon.
\end{equation}
The explicit statement here is adapted from lecture notes of Rudelson
\cite{Rudelson}. Using the estimates derived in the proof of Theorem
\ref{T:upper} and applying the Borel-Cantelli lemma as above, one
directly obtains
\begin{equation}\label{E:weak-limsup}
\limsup_{n\to\infty} \frac{\| T_n \|}{\sqrt{n}\log n} \le c_4
\quad \mbox{almost surely}
\end{equation}
under the assumptions that the $X_j$ are symmetric and uniformly
subgaussian. The general (nonsymmetric but mean $0$) case can be
deduced from the argument for the symmetric case. Let $T_n'$ be an
independent copy of $T_n$. By independence, the triangle inequality,
and the tail estimate which follows from \eqref{E:Dudley-tail},
\[
\mathbb{P}\big[ \|T_n'\| \le s \big] \mathbb{P}\big[\|T_n\| \ge s+t\big]
\le \mathbb{P} \big[ \|T_n - T_n'\| \ge t\big]
\le 2 e^{-ct^2/n\log n}
\]
for some constant $c$ which depends on the subgaussian estimate for
the $X_j$. By Theorem \ref{T:upper} and Chebyshev's
inequality,
\[
\mathbb{P}\big[\|T_n'\| \le s \big] \ge 1-\frac{1}{s}c_1 \sqrt{n\log n}.
\]
Picking $s=2 c_1 \sqrt{n\log n}$ and $t=\sqrt{\frac{2n}{c}}\log n$
yields
\[
\mathbb{P} \big[\| T_n \| \ge c_4 \sqrt{n}\log n \big] \le \frac{4}{n^2}
\]
for some constant $c_4$, and \eqref{E:weak-limsup} then follows from
the Borel-Cantelli lemma.
\medskip
The proof of Theorem \ref{T:lower} amounts to an adaptation of the
proof of the lower bound in \cite{MT}, with much of the proof
abstracted into a general lower bound for the suprema of certain
random processes due to Kashin and Tzafriri \cite{KT1,KT2}. The
following is a special case of the result of \cite{KT2}.
\begin{prop}\label{T:KT}
Let $\varphi_j:[0,1]\to \mathbb{R}$, $j=0,\dotsc,n-1$ be a family of functions
which are orthonormal in $L^2[0,1]$ and satisfy $\| \varphi_j \|_{L^3[0,1]}
\le A$ for every $j$, and let $X_0,\dotsc,X_{n-1}$ be independent random
variables such that for every $j$,
\[
\mathbb{E} X_j = 0, \quad \mathbb{E} X_j^2 = 1, \quad \mathbb{E} |X_j| \ge B.
\]
Then for any $a_0,\dotsc,a_{n-1}\in \mathbb{R}$,
\[
\mathbb{E} \left[ \sup_{0\le x \le 1}
\Biggl|\sum_{j=0}^{n-1} a_j X_j \varphi_j(x) \Biggr| \right]
\ge K\ \|a\|_2 \sqrt{\log \frac{\|a\|_2}{\|a\|_4}},
\]
where $\|a\|_p = \big(\sum_{j=0}^{n-1} |a_j|^p\big)^{1/p}$ and $K>0$
depends only on $A$ and $B$.
\end{prop}
\medskip
\begin{proof}[Proof of Theorem \ref{T:lower}]
First make the estimate
\[
\|T_n\| = \sup_{v\in\mathbb{C}^n\setminus \{0\} }
\frac{|\inprod{T_n v}{v}|}{\inprod{v}{v}}
\ge \sup_{0\le x \le 1}\frac{1}{n} \big\vert\inprod{T_n v_x}{v_x}\big\vert,
\]
where $v_x\in \mathbb{C}^n$ is defined by $(v_x)_j = e^{2\pi i j x}$ for
$j=1,\dotsc,n$ and $\inprod{\cdot}{\cdot}$ is the standard inner
product on $\mathbb{C}^n$. Therefore
\begin{align*}
\|T_n\| & \ge \frac{1}{n} \sup_{0\le x\le 1} \Biggl|
\sum_{j,k=1}^n X_{|j-k|} e^{2\pi i (j-k)x} \Biggr|\\
&= \frac{1}{n} \sup_{0\le x\le 1}\Biggl| \sum_{j=-(n-1)}^{n-1} (n-|j|) X_{|j|}
e^{2\pi i j x} \Biggr| \\
&= \sup_{0\le x\le 1}
\Biggl| X_0 + 2\sum_{j=1}^{n-1} \left(1-\frac{j}{n}\right)
X_j \cos(2\pi j x) \Biggr| \\
&= \sup_{0\le x\le 1}\Biggl| \sum_{j=0}^{n-1} a_j X_j \varphi_j(x)
\Biggr|,
\end{align*}
where we have defined $a_0=1$, $a_j = \sqrt{2}(1-j/n)$ for $j\ge 1$,
$\varphi_0 \equiv 1$, and $\varphi_j(x) = \sqrt{2}\cos(2\pi j x)$ for
$j\ge 1$. It is easy to verify that $\|a\|_2 > \sqrt{n}/2$ and $\| a
\|_4 < 2 n^{1/4}$. The theorem now follows from Proposition
\ref{T:KT}.
\end{proof}
We remark that by combining Theorem \ref{T:lower} with the proof of
Theorem \ref{T:limsup}, one obtains a nontrivial bound on the left
tail of $\| T_n \|$ under the assumptions of Theorem \ref{T:limsup} and
the additional assumption that $\mathbb{E} X_j^2=1$ for every
$j$. Unfortunately, one cannot deduce an almost sure lower bound of
the form
\[
\liminf_{n\to\infty} \frac{\|T_n\|}{\sqrt{n\log n}} \ge c
\quad \mbox{almost surely}
\]
without more precise control over the constants in Proposition
\ref{T:KT} and the concentration inequalities used in the proof of
Theorem \ref{T:limsup}.
\bigskip
\section{Extensions and additional remarks}\label{S:remarks}
\subsection{Other random matrix ensembles}
For simplicity Theorems \ref{T:upper}--\ref{T:lower} were stated and
proved only for the case of real symmetric Toeplitz matrices. However,
straightforward adaptations of the proofs show that the theorems
hold for other related ensembles of random matrices. These include
nonsymmetric real Toeplitz matrices $\big[X_{j-k}\big]_{j,k\in\mathbb{Z}}$ for
independent random variables $X_j,$ $j\in \mathbb{Z}$, as well as complex
Hermitian or general complex Toeplitz variants. In the complex cases
one should consider matrix entries of the form $X_j = Y_j + i Z_j$,
where $Y_j$ and $Z_j$ are independent and each satisfy the tail or
moment conditions imposed on $X_j$ in the theorems as stated.
Closely related to the case of nonsymmetric random Toeplitz matrices
are random Hankel matrices $H_n = \big[X_{j+k-1}\big]_{1\le j,k\le
n}$, which are constant along skew diagonals. This ensemble was also
mentioned by Bai \cite{Bai}, and was shown to have a universal
limiting spectral distribution in \cite{BDJ}. Independently, Masri and
Tonge \cite{MT} considered a random $r$-linear Hankel form
\[(v_1,\dotsc,v_r)\mapsto \sum_{j_1,\dotsc,j_r=0}^{n} X_{j_1+\dotsb+j_r}
(v_1)_{j_1} \dotsb (v_r)_{j_r}
\]
in the case $\mathbb{P} [X_j = 1]=\mathbb{P}[X_j=-1]=1/2$, and showed that the
expected norm of this form is of the order $\sqrt{n^{r-1} \log n}$.
As observed in \cite[Remark 1.2]{BDJ}, $H_n$ has the same singular
values, and so in particular the same spectral norm, as the
(nonsymmetric) Toeplitz matrix obtained by reversing the order of the
rows of $H_n$. Therefore Theorems \ref{T:upper}--\ref{T:lower} apply
to $H_n$ as well. As mentioned in the introduction, the versions of
Theorems \ref{T:upper} and \ref{T:lower} for $H_n$ generalize the
$r=2$ case of the result of \cite{MT} to subgaussian matrix entries
$X_j$.
The methods of this paper can also be used to treat random Toeplitz
matrices with additional restrictions. For example, the theorems apply
to the ensemble of symmetric circulant matrices considered in
\cite[Remark 2]{BM} which is defined as $T_n$ here except for the
restriction that $X_{n-j}=X_j$ for $j=1,\dotsc,n-1$, and the closely
related symmetric palindromic Toeplitz matrices considered in
\cite{MMS}, in which $X_{n-j-1} = X_j$ for $j=0,\dotsc,n-1$. We remark
that \cite{BM,MMS} show that each of these ensembles, properly scaled
and with some additional assumptions, have a limiting spectral
distribution which is normal.
\medskip
\subsection{Weaker hypotheses}
It is unclear how necessary the tail or moment conditions on the $X_j$
are to the conclusions of the theorems. It appears likely (cf.\
\cite{YBK,BoseSen}) that versions of Theorems \ref{T:upper} and
\ref{T:limsup} remain true assuming only the existence of fourth
moments, at least when the $X_j$ are identically distributed. In
particular it is very likely that the assumptions of Theorem
\ref{T:limsup} can be relaxed considerably. Even within the present
proof, the assumption of a logarithmic Sobolev inequality can be
weakened slightly to that of a quadratic transportation cost
inequality; cf.\ \cite[Chapter 6]{Ledoux2}.
If the $X_j$ have nonzero means then the behavior of $\|T_n\|$ may
change. Suppose first that the $X_j$ are uniformly subgaussian and
$\mathbb{E} X_j = m\neq 0$ for every $j$. If $J_n$ denotes the $n\times n$
matrix whose entries are all $1$, then \eqref{E:weak-limsup} implies
that
\begin{equation}\label{E:limsup-m}
\limsup_{n\to\infty} \frac{\|T_n - mJ_n\|}{\sqrt{n}\log n} \le c
\quad \mbox{almost surely,}
\end{equation}
where $c$ depends on $m$ and the subgaussian estimate for the
$X_j$. Since $\|J_n\|=n$, \eqref{E:limsup-m} and the triangle
inequality imply a strong law of large numbers:
\begin{equation}\label{E:BoseSen}
\lim_{n\to \infty} \frac{\|T_n\|}{n} = |m| \quad \mbox{almost surely.}
\end{equation}
In \cite{BoseSen}, \eqref{E:BoseSen} was proved using estimates from
\cite{BDJ} under the assumption that the $X_j$ are identically
distributed and have finite variance. We emphasize again
that while the methods of this paper require stronger tail
conditions, we never assume the $X_j$ to be identically distributed.
More generally, the behavior of $\|T_n\|$ depends on the rate of
growth of the spectral norms of the deterministic Toeplitz matrices
$\mathbb{E} T_n$. The same argument as above shows that
\[
\lim_{n\to\infty} \frac{\|T_n\|}{\| \mathbb{E} T_n \|} = 1 \quad
\mbox{almost surely}
\]
if the random variables $(X_j-\mathbb{E} X_j)$ are uniformly subgaussian and
$\lim_{n\to\infty} \frac{\sqrt{n}\log n}{\| \mathbb{E} T_n \|} = 0$.
On the other hand, if $\| \mathbb{E} T_n\| = o(\sqrt{n\log n})$ then the
conclusion of Theorem \ref{T:upper} holds.
\subsection{Random trigonometric polynomials}
The supremum of the random trigonometric polynomial
\[
Z_x = \sum_{j=1}^n X_j \cos(2\pi j x),
\]
has been well-studied in the special case $\mathbb{P}[X_j=1] =
\mathbb{P}[X_j=-1] = 1/2$, in work dating back to Salem and Zygmund
\cite{SZ}. Observe that $Z_x$ is essentially equivalent to the process
$Y_x$ defined in the proof of Theorem \ref{T:upper}, and is also
closely related to the random process considered in the proof of
Theorem \ref{T:lower}. Hal\'asz \cite{Halasz} proved in particular that
\[
\lim_{n\to \infty}
\frac{\sup_{0\le x\le 1} |Z_x|}{\sqrt{n\log n}}
= 1 \quad \mbox{almost surely}.
\]
From this it follows that when $\mathbb{P}[X_j=1] = \mathbb{P}[X_j=-1] = 1/2$
for every $j$, the conclusion of Theorem \ref{T:limsup} holds with
$c_2 = 2$. Numerical experiments suggest, however, that the optimal
value of $c_2$ is $1$ in this case, and more generally when the $X_j$
are i.i.d.\ with mean $0$ and variance $1$.
Conversely, adaptations of the proofs in this paper yield less
numerically precise bounds for the supremum of $Z_x$ under the same
weaker assumptions on the $X_j$ in the statements of the theorems. We
remark that the techniques used to prove the results of
\cite{KT1,KT2,MT} cited above (and hence indirectly also Theorem
\ref{T:lower}) were adapted from the work of Salem and Zygmund in
\cite{SZ}.
\bibliographystyle{plain} | {
"timestamp": "2007-03-12T18:55:38",
"yymm": "0703",
"arxiv_id": "math/0703134",
"language": "en",
"url": "https://arxiv.org/abs/math/0703134",
"abstract": "Suppose that $T_n$ is a Toeplitz matrix whose entries come from a sequence of independent but not necessarily identically distributed random variables with mean zero. Under some additional tail conditions, we show that the spectral norm of $T_n$ is of the order $\\sqrt{n \\log n}$. The same result holds for random Hankel matrices as well as other variants of random Toeplitz matrices which have been studied in the literature.",
"subjects": "Probability (math.PR)",
"title": "On the spectral norm of a random Toeplitz matrix",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126463438262,
"lm_q2_score": 0.7248702821204019,
"lm_q1q2_score": 0.7094397120700544
} |
https://arxiv.org/abs/0805.4120 | Mixed Volume Techniques for Embeddings of Laman Graphs | Determining the number of embeddings of Laman graph frameworks is an open problem which corresponds to understanding the solutions of the resulting systems of equations. In this paper we investigate the bounds which can be obtained from the viewpoint of Bernstein's Theorem. The focus of the paper is to provide the methods to study the mixed volume of suitable systems of polynomial equations obtained from the edge length constraints. While in most cases the resulting bounds are weaker than the best known bounds on the number of embeddings, for some classes of graphs the bounds are tight. | \section{Introduction}
Let $G=(V,E)$ be a graph on $n$ vertices with $2n-3$ edges. If each
subset of $k$ vertices spans at most $2k-3$ edges, we say that $G$ has the
{\it Laman property} and call it a {\it Laman graph} (see \cite{Laman}).
A {\it framework} is a tuple $(G,L)$ where $G=(V,E)$ is a graph and $L=\{l_{i,j}\, :\, [v_i,v_j]\in E\}$ is a set of $|E|$ positive numbers interpreted as edge lengths.
For generic edge lengths, Laman graph frameworks are minimally rigid (see \cite{Connelly}),
i.e. they are rigid and they become flexible if any edge is removed.
A {\it Henneberg sequence} for a graph $G$ is a sequence
$(G_i)_{3\leq i \leq r}$ of Laman graphs such
that $G_3$ is a triangle, $G_r=G$ and each $G_{i+1}$ is obtained from
$G_{i}$ via one of the following two types of steps: A {\it
Henneberg I step} adds one new vertex $v_{i+1}$ and two new edges,
connecting $v_{i+1}$ to two arbitrary vertices of $G_i$. A {\it
Henneberg II step} adds one new vertex $v_{i+1}$ and three new edges,
connecting $v_{i+1}$ to three vertices of $G_i$ such that at least two
of these vertices are connected via an edge $e$ of $G_i$ and this certain
edge $e$ is removed (see Figure \ref{FigHenneberg}).
\begin{figure}[h]
\setlength{\unitlength}{0.18pt}
\begin{center}
\begin{picture}(1200,550)(0,0)
\put(10,40){\includegraphics[scale=0.18]{HI_ex_1}}
\put(650,40){\includegraphics[scale=0.18]{HII_ex_1}}
\put(0,0){\mbox{$v_1$}}
\put(450,0){\mbox{$v_2$}}
\put(640,0){\mbox{$v_1$}}
\put(1090,0){\mbox{$v_2$}}
\put(290,165){\mbox{$v_3$}}
\put(940,165){\mbox{$v_3$}}
\put(0,540){\mbox{$v_4$}}
\put(650,545){\mbox{$v_4$}}
\put(520,510){\mbox{$v_5$}}
\put(1170,510){\mbox{$v_5$}}
\put(980,340){\mbox{$v_6$}}
\end{picture}
\caption{A Henneberg I and a Henneberg II step. New edges are dashed and the deleted edge is pointed.}
\label{FigHenneberg}
\end{center}
\end{figure}
\setlength{\unitlength}{1pt}
Any Laman graph $G$ can be constructed via a Henneberg sequence and
any graph constructed via a Henneberg sequence has the Laman property (see
\cite{StreinuTheran,TayWhiteley}). We call $G$ a {\it Henneberg I graph} if it
is constructible using only Henneberg I steps. Otherwise we call it
{\it Henneberg II}.
Given a Laman graph framework we want to know how many embeddings, i.e. maps $\alpha: V \rightarrow {\mathbb R}^2$, exist such that the Euclidean distance between the images of $v_i$ and $v_j$ is exactly $l_{i,j}$ for all $[v_i,v_j]\in E$. Since every rotation or translation of an embedding gives another one, we ask how many embeddings exist {\it modulo rigid motions}.
Due to the minimal rigidity property, questions about embeddings of Laman graphs arise naturally in
rigidity and linkage problems (see \cite{Haas,Thorpe}).
Graphs with fewer edges will have zero or infinitely many embeddings modulo rigid motions,
and graphs with more edges do not have any embeddings for a generic choice of edge lengths.
Determining the maximal number of embeddings (modulo rigid motions)
for a given Laman graph is an open problem.
The best upper bounds are due to Borcea and Streinu (see \cite{Borcea,BorceaStreinu})
who show that the number of embeddings is bounded by $\binom{2n-4}{n-2} \approx \frac{4^{n-2}}{\sqrt{n-2}}$. Their bounds are based on
degree results of determinantal varieties.
A general method to study the number of (complex) solutions of systems of
polynomial equations is to use Bernstein's Theorem \cite{Bernstein} for sparse
polynomial systems. This theorem provides bounds on the number of solutions
in terms of the mixed volume of the underlying Newton polytopes.
Since the systems of polynomial
equations describing the Laman embeddings are sparse, the question arose
how good these Bernstein bounds are for the Laman embedding problem.
While for concrete systems of equations, the mixed volume can be
computed algorithmically, studying the mixed volume for \emph{classes
of polytopes} is connected with a variety of issues in convex geometry
(such as understanding the Minkowski sum of the polytopes).
In this paper, we study the quality of the Bernstein bound on the Laman embedding problem and provide methods to handle the resulting convex geometric problems.
In most cases, our bounds are worse than the bounds in \cite{BorceaStreinu}.
However, we think that the general methodology of studying Bernstein
bounds nevertheless provides an interesting technique, and we see
the main contribution of this paper in providing the technical tools
(such as methods to determine the mixed volume)
to compute these bounds for whole classes of graphs.
It is particularly
interesting that for some classes of graphs, the mixed volume
bound is tight.
To use these algebraic tools for the embedding problem
we formulate that problem as a
system of polynomial equations in the $2n$ unknowns $(x_1,y_1,\dots,x_n,y_n)$ where $(x_i,y_i)$ denote the coordinates of the embedding of the vertex $v_i$.
Each prescribed edge length translates into a polynomial equation. That is, if $e_k:=[v_i,v_j]\in E$ has length $l_{i,j}$, we require $h_k(x):=(x_i-x_j)^2+(y_i-y_j)^2-l_{i,j}^2=0$. Thus we obtain a system of $|E|$ quadratic equations whose solutions represent the embeddings of our framework. To get rid of translations and rotations we fix one point $(x_1,y_1)=(c_1,c_2)$ and the direction of the embedding of the edge $[v_1,v_2]$ by setting $y_2=c_3$. (Here we assume without loss of generality that there is an edge between $v_1$ and $v_2$.) For practical reasons we choose $c_i \neq 0$ as well as $c_1\neq l_{1,2}$.
Hence we want to study the solutions to the following system of $2n$ equations.
\begin{equation}\label{SoE}
\left. \begin{cases} h_1(x):=x_1-c_1=0 \\ h_2(x):=y_1-c_2=0 \\ h_3(x):=x_2-(l_{1,2}-c_1)=0 \\ h_4(x):=y_2-c_3=0 \\ h_k(x):=(x_i-x_j)^2+(y_i-y_j)^2-l_{i,j}^2 =0 \quad \forall e_k=[v_i,v_j]\in E-\{[v_1,v_2]\} \end{cases} \right\}
\end{equation}
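To make \eqref{SoE} concrete, consider the smallest Laman graph, the triangle on $v_1,v_2,v_3$. The following sketch is purely illustrative (all numerical values are arbitrary choices); it sets up the six equations and confirms that generically there are two solutions, corresponding to reflecting $v_3$ across the line through the pinned vertices $v_1$ and $v_2$.
\begin{verbatim}
# Illustrative sketch: system (SoE) for the triangle v1 v2 v3 with
# edge lengths l12 = 3, l13 = l23 = 2 and pinning constants c1 = c2 = c3 = 1.
# All specific numbers are arbitrary choices for this toy example.
import sympy as sp

x1, y1, x2, y2, x3, y3 = sp.symbols('x1 y1 x2 y2 x3 y3')
c1, c2, c3 = 1, 1, 1
l12, l13, l23 = 3, 2, 2
eqs = [
    x1 - c1,
    y1 - c2,
    x2 - (l12 - c1),
    y2 - c3,
    (x1 - x3)**2 + (y1 - y3)**2 - l13**2,
    (x2 - x3)**2 + (y2 - y3)**2 - l23**2,
]
sols = sp.solve(eqs, [x1, y1, x2, y2, x3, y3], dict=True)
print(len(sols))   # generically 2 embeddings (here real, in general possibly complex)
\end{verbatim}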
The paper is structured as follows. In Section~\ref{sec:Definitions} we review the concepts of mixed volumes and Bernstein's Theorem. In Section~\ref{sec:Tools} we present some technical tools to simplify mixed volume calculation. Then, in Section~\ref{sec:BKK} we discuss the quality of the Bernstein bounds on the Laman embedding problem.
\section{Preliminaries}\label{sec:Definitions}
\subsection{Mixed volumes and mixed subdivisions}\label{sec:MixedVolumes}
The \emph{Minkowski sum} of two sets $A_1, A_2 \subset {\mathbb R}^k$ is defined as
\[
A_1+A_2 = \left\{a_1+a_2 \, |\, a_1\in A_1, a_2\in A_2 \right\} \ .
\]
Let $P_1,\dots,P_k$ be $k$ polytopes in ${\mathbb R}^k$. For non-negative parameters
$\lambda_1,\dots,\lambda_k$ the function
$\text{vol}_k(\lambda_1P_1+\dots+\lambda_kP_k)$ is a homogeneous
polynomial of degree $k$ in $\lambda_1,\dots,\lambda_k$ with
non-negative coefficients (see e.g. \cite{Schneider, Webster}). The coefficient of the mixed
monomial $\lambda_1\cdots\lambda_k$ is called the {\it mixed volume of $P_1,\dots,P_k$} and is
denoted by $\MV_k(P_1,\dots,P_k)$.
We denote by $\textup{MV}_{k}(P_1,d_1; \dots; P_r,d_r)$ the mixed volume where $P_i$ is taken $d_i$ times and $\sum_{i=1}^r d_i =k$.
The mixed volume is invariant under permutation of its arguments, it is linear in each argument, i.e.
\begin{equation}
\MV_k(\dots, \alpha P_i + \beta P'_i,\dots)=
\alpha\, \MV_k(\dots, P_i ,\dots)+ \beta \, \MV_k(\dots, P'_i ,\dots) \label{linearity}
\end{equation}
and it generalizes the usual volume in the sense that
\begin{equation} \label{MV_Vol}
\MV_k(P,\dots,P)= k! \,\text{vol}_k (P)
\end{equation}
holds (see \cite{Schneider}).
Let $P=P_1+\dots+P_r\subset{\mathbb R}^k$ be a Minkowski sum of polytopes that affinely spans ${\mathbb R}^k$. A sum $C=F_1+\dots+F_r$ of faces $F_i\subset P_i$ is called {\it cell} of $P$. A {\it subdivision} of $P$ is a collection $\Gamma=\{C_1,\dots,C_m\}$ of cells such that each cell is of full dimension, the intersection of two cells is a face of both and the union of all cells covers $P$. Each cell is given a type $type(C)= (\text{dim}(F_1),\dots, \text{dim}(F_r))$. Clearly the entries in the type vector sum up to at least the dimension of the cell $C$. A subdivision is called {\it mixed} if for each cell $C\in \Gamma$ we have that $\sum d_i =k$ where $type(C)=(d_1,\dots,d_r)$.
Cells of type $(d_1,\dots,d_r)$ with $d_i\geq 1$ for each $i$ will be called {\it mixed cells}.
With this terminology the mixed volume can be calculated by
\begin{equation}\label{explicitMV_2}
\MV_k(P_1,d_1;\dots;P_r,d_r) =\sum_{C} d_1!\, \cdots d_r!\ \text{vol}_k\,(C)
\end{equation}
where the sum is over all cells $C$ of type $(d_1,\dots,d_r)$ in an arbitrary mixed subdivision of $P_1+\dots+P_r$ (see \cite{Huber}).
To construct mixed subdivisions we proceed as in \cite{Huber}. Not every subdivision can be constructed in this way but since we only need one arbitrary mixed subdivision this simple construction can be used. For each polytope $P_i$ choose a linear lifting function $\mu_i:{\mathbb R}^k \rightarrow {\mathbb R}$ identified by an element of ${\mathbb R}^k$. By $\hat{P_i}$ we denote the lifted polytopes $\conv \{(q,\langle \mu_i,q \rangle)\, : \, q\in P_i \}\subset {\mathbb R}^{k+1}$, where $\langle \cdot, \cdot \rangle$ denotes the Euclidean scalar product.
The set of those facets of $\hat{P}:=\hat{P}_1+\dots+\hat{P}_r$ which have an inward pointing normal with a positive last coordinate is called \emph{the lower hull} of $\hat{P}$. Projecting down this lower hull back to ${\mathbb R}^k$ by forgetting the last coordinate yields a subdivision of $P_1+\dots+P_r$. Such a subdivision is called {\it coherent} and is said to be {\it induced by} $\mu =(\mu_1,\dots,\mu_r)$.
\begin{example}\label{Example:1}
Let
\[
P=\conv\left\{\begin{pmatrix}0\\ 0 \end{pmatrix}, \begin{pmatrix}3\\ 0 \end{pmatrix},\begin{pmatrix}0\\ 2 \end{pmatrix},\begin{pmatrix}3\\ 2 \end{pmatrix} \right\}\, , \quad Q=\conv\left\{\begin{pmatrix}1\\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ \frac{3}{2}\end{pmatrix},\begin{pmatrix}3\\ 3 \end{pmatrix} \right\}\ .
\]
The Minkowski sum of $P$ and $Q$ is depicted in Figure~\ref{Figure:ExMinkSum} together with one of the possible coherent mixed subdivisions.
\begin{figure}[ht]
\setlength{\unitlength}{0.38pt}
\begin{center}
\begin{picture}(650,300)(0,0)
\put(0,0){\includegraphics[scale=0.38]{BasePoly} }
\put(390,30){\includegraphics[scale=0.38]{PolyWithSubdiv1}}
\put(100,120){\mbox{$P+Q$}}
\put(440,125){\mbox{$P$}}
\put(555,165){\mbox{$Q$}}
\put(500,200){\mbox{$C_1$}}
\put(524,100){\mbox{$C_2$}}
\put(595,140){\mbox{$C_3$}}
\put(460,55){\mbox{$C_4$}}
\end{picture}
\caption{Left: The Minkowski sum of $P$ and $Q$. Right: A mixed subdivision $\Gamma$ of $P+Q$.}
\label{Figure:ExMinkSum}
\end{center}
\end{figure}
\setlength{\unitlength}{1pt}
\end{example}
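From the definition, $\text{vol}_2(\lambda_1P+\lambda_2Q)=\lambda_1^2\,\text{vol}_2(P)+\lambda_1\lambda_2\,\MV_2(P,Q)+\lambda_2^2\,\text{vol}_2(Q)$, so setting $\lambda_1=\lambda_2=1$ gives $\MV_2(P,Q)=\text{vol}_2(P+Q)-\text{vol}_2(P)-\text{vol}_2(Q)$. The following small Python sketch (our addition, not part of the original construction; it assumes NumPy and SciPy are available) evaluates this formula for Example~\ref{Example:1}, using that the Minkowski sum is the convex hull of all pairwise vertex sums.
\begin{verbatim}
import numpy as np
from scipy.spatial import ConvexHull

def area(pts):
    # for two-dimensional input, ConvexHull.volume is the enclosed area
    return ConvexHull(np.asarray(pts, float)).volume

def mixed_area(P, Q):
    # MV_2(P,Q) = vol_2(P+Q) - vol_2(P) - vol_2(Q)
    PQ = [np.asarray(p, float) + np.asarray(q, float) for p in P for q in Q]
    return area(PQ) - area(P) - area(Q)

P = [(0, 0), (3, 0), (0, 2), (3, 2)]
Q = [(1, 0), (0, 1.5), (3, 3)]
print(mixed_area(P, Q))   # prints 15.0 for the polytopes of Example 1
\end{verbatim}
So $\MV_2(P,Q)=15$ for this example.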
\subsection{BKK theory} \label{sec:Bernstein}
The main tool in this work is the following theorem that provides a connection between solutions to systems of polynomial equations and discrete geometry. For a polynomial $f=\sum_{\alpha\in A} c_{\alpha}x^{\alpha}\in {\mathbb C}[x_1,\dots,x_k]$ the Newton polytope $\NP(f)\subset{\mathbb R}^k$ is the convex hull of the monomial exponent vectors, i.e. $\NP(f)=\conv A$. Let ${\mathbb C}^* := {\mathbb C} \setminus \{0\}$.
\begin{theorem}[Bernstein \cite{Bernstein}] \label{The:Bernstein}
Given polynomials $f_1,\dots,f_k\in{\mathbb C}[x_1,\dots,x_k]$ with finitely
many common zeroes in $({\mathbb C}^*)^k$ and let $\NP(f_i)$ denote the Newton polytope of
$f_i$. Then the number of common zeroes of the $f_i$ in
$({\mathbb C}^*)^k$ is bounded above by the mixed volume
$\textup{MV}_k(\NP(f_1),\dots,\NP(f_k))$. Moreover for generic choices of the
coefficients in the $f_i$, the number of common solutions is exactly
$\textup{MV}_k(\NP(f_1),\dots,\NP(f_k))$.
\end{theorem}
Various attempts have been made to generalize these results to count all common roots in ${\mathbb C}^k$ (see for example \cite{EmirisVerschelde, HuberSturmfels2, LiWang}). The easiest, but sometimes not the best bound is $\MV_k(\text{conv}(\NP(f_1)\cup \{0\}),\dots,\text{conv}(\NP(f_k)\cup\{0\}))$, which is shown in \cite{LiWang}. Since the Newton polytopes of system (\ref{SoE}) all contain the point $0$ as a vertex, the mixed volume of (\ref{SoE}) yields a bound on the number of solutions in ${\mathbb C}^k$ rather than only on those in $({\mathbb C}^*)^k$.
The bound on the number of solutions of a polynomial system arising from Bernstein's Theorem is also often referred to as the {\it BKK bound} due to the work of Bernstein, Khovanskii and Kushnirenko. The BKK bound generalizes the B\'ezout bound (see \cite[Chapter 7]{CLO2}) and for sparse polynomial systems it is often significantly better.
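For instance, for two generic quadratic equations in two variables both Newton polytopes equal $\conv\{(0,0),(2,0),(0,2)\}$ and the BKK bound $2!\,\text{vol}_2(\conv\{(0,0),(2,0),(0,2)\})=4$ agrees with the B\'ezout bound, whereas for two generic bilinear equations $a_i+b_ix+c_iy+d_ixy=0$ the Newton polytopes are unit squares and, by \eqref{MV_Vol}, the BKK bound is $\MV_2([0,1]^2,[0,1]^2)=2!\,\text{vol}_2([0,1]^2)=2$, half of the B\'ezout bound.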
Bernstein also gives an explicit condition when a choice of coefficients is generic.
Let $w$ be a non-zero vector and let $\partial_w P$ denote the face of a polytope $P$ which is minimal with respect to the direction $w$. Also we set $\partial_w f = \sum_{\alpha \in \partial_w \NP(f)}c_{\alpha} x^{\alpha}$ to be the face equation with respect to $w$.
\begin{theorem}[Bernstein's Second Theorem \cite{Bernstein}] \label{The:Bernstein2}
If for all $w\neq 0$, the face system $\partial_w f_1 =0, \dots ,\partial_w f_k =0 $ has no solution in $(\mathbb{C}^*)^k$, then the mixed volume of the Newton polytopes of the $f_i$ gives the exact number of common zeros in $(\mathbb{C}^*)^k$ and all solutions are isolated. Otherwise it is a strict upper bound.
\end{theorem}
Note that it is necessary for a direction $w$ to be a witness of the degeneracy that it lies on the tropical prevariety (see \cite{FirstStepsTropical}) of the polynomials $f_1,\dots,f_k$.
\section{New technical tools to simplify mixed volume calculation} \label{sec:Tools}
In the special case of Henneberg I graphs, system (\ref{SoE}) has a shape that allows one to separate the mixed volume calculation into smaller pieces.
The main tool to do this is the following Lemma.
An equivalent decomposition result was already mentioned in \cite{Burago} in which the authors refer to \cite{Fedotov} (in Russian) for the proof. For the convenience of the reader we provide here a proof based
on the properties of symmetric multilinear functions.
\begin{lemma}
\label{SeparationLemma}
Let $P_1,\dots,P_k$ be polytopes in ${\mathbb R}^{m+k}$ and
$Q_1,\dots,Q_m$ be polytopes in ${\mathbb R}^m\subset{\mathbb R}^{m+k}$ .
Then
\begin{equation} \label{eq:SepLemma}
\textup{MV}_{m+k}(Q_1,\dots,Q_m,P_1,\dots,P_k)=
\textup{MV}_{m}(Q_1,\dots,Q_m)\cdot\textup{MV}_{k}(\pi(P_1),\dots,\pi(P_k))
\end{equation}
where $\pi: {\mathbb R}^{m+k} \rightarrow {\mathbb R}^k$ denotes the projection on the
last $k$ coordinates.
\end{lemma}
\begin{proof}
First we show the Lemma in the \emph{semimixed case} where $Q_1=\dots=Q_m=:Q$ and $P_1=\dots=P_k=:P$, then
we use properties of symmetric multilinear functions to reduce the general situation to the
semimixed case.
By (\ref{MV_Vol}) we have to show first that
\begin{equation}\label{Case1}
\textup{MV}_{m+k}(Q,\dots,Q,P,\dots,P)=m!\, k!\ \textup{vol}_m(Q)\cdot\textup{vol}_k(\pi(P))
\end{equation}
where $Q$ is taken $m$ times and $P$ is taken $k$ times. But this formula for semimixed systems is a special case of Lemma 4.9 in \cite{Ewald} or also of Theorem 1 in \cite{Betke}.
Let $\mathcal{P}^m$ (resp. $\mathcal{P}^{m+k}$) be the set of all $m$-dimensional (resp. $(m+k)$-dimensional) polytopes and define two functions $g_1$ and $g_2$ on $(\mathcal{P}^m)^m\times (\mathcal{P}^{m+k})^k$ via
\begin{eqnarray*}
g_1(Q_1,\dots,Q_m,P_1,\dots,P_k) &:=& \textup{MV}_{m+k}(Q_1,\dots,Q_m,P_1,\dots,P_k)\\
g_2(Q_1,\dots,Q_m,P_1,\dots,P_k) &:=& \textup{MV}_{m}(Q_1,\dots,Q_m)\cdot\textup{MV}_{k}(\pi(P_1),\dots,\pi(P_k))\ .
\end{eqnarray*}
Due to the properties of mixed volumes (see Paragraph~\ref{sec:MixedVolumes}) it is easy to see that $g_1$ and $g_2$ are invariant under changing the order of the $Q_i$ and under changing the order of the $P_j$. Furthermore it follows from (\ref{linearity}) that both functions are linear in each argument.
Hence, for fixed $P_1, \dots, P_k$ the induced mappings
\[
\tilde{g}_i^{(P_1,\dots,P_k)}(Q_1,\dots,Q_m) := g_i(Q_1,\dots,Q_m,P_1,\dots,P_k) \qquad (i=1,2)
\]
are symmetric and multilinear, and analogously, for fixed $Q$, the mappings
\[
\bar{g}^{(Q)}_i(P_1,\dots,P_k) := g_i(Q,\dots,Q,P_1,\dots,P_k) \qquad (i=1,2)
\]
are symmetric and multilinear.
For any semigroups $A,B$ and any symmetric multilinear function
$f: A^n \rightarrow B$, it follows from an inclusion-exclusion argument (see \cite[Theorem 3.7]{Ewald}) that
\begin{equation} \label{multilinear}
f(a_1,\dots,a_n)= \frac{1}{n!}\sum_{1\leq i_1< \cdots <i_q\leq n}(-1)^{n-q} f(a_{i_1}+ \cdots+a_{i_q},\dots,a_{i_1}+\cdots+a_{i_q})\ .
\end{equation}
Hence we have for $i=1,2$ that
\begin{align*}
& g_i(Q_1,\dots,Q_m,P_1,\dots,P_k) \\
=\ &\tilde{g}_i^{(P_1,\dots,P_k)}(Q_1,\dots,Q_m) \\
=\ &\frac{1}{m!}\sum_{1\leq i_1<\cdots<i_q\leq m} (-1)^{m-q} \
\tilde{g}_i^{(P_1,\dots,P_k)} (Q_{i_1}+\cdots+Q_{i_q},\dots,Q_{i_1}+\cdots+Q_{i_q}) \\
=\ &\frac{1}{m!}\sum_{1\leq i_1<\cdots<i_q\leq m}(-1)^{m-q}\
\bar{g}^{(Q_{i_1}+\cdots+Q_{i_q})}_i(P_1,\dots,P_k) \ .
\end{align*}
Since we can expand $\bar{g}^{(Q_{i_1}+\cdots+Q_{i_q})}_i(P_1,\dots,P_k)$ by using (\ref{multilinear}) as well, we see that both functions $g_1$ and $g_2$ are fully determined by their images of tuples of polytopes where $Q_1=\cdots=Q_m=Q$ and $P_1=\cdots=P_k=P$. This proves the Lemma.
\end{proof}
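Lemma~\ref{SeparationLemma} is easy to check numerically in small instances. The sketch below (our addition, assuming NumPy and SciPy) computes mixed volumes of vertex-presented polytopes from the inclusion-exclusion identity obtained by specializing \eqref{multilinear} to $f=\MV_k$, namely $\MV_k(P_1,\dots,P_k)=\sum_{\emptyset\neq S\subseteq\{1,\dots,k\}}(-1)^{k-|S|}\,\text{vol}_k\bigl(\sum_{i\in S}P_i\bigr)$, and verifies \eqref{eq:SepLemma} in the case $m=k=1$, where both factors on the right-hand side are just segment lengths.
\begin{verbatim}
import numpy as np
from itertools import combinations
from scipy.spatial import ConvexHull

def vol(pts):
    pts = np.asarray(pts, float)
    if np.linalg.matrix_rank(pts - pts[0]) < pts.shape[1]:
        return 0.0                        # lower-dimensional bodies have zero full volume
    return ConvexHull(pts).volume

def msum(polys):
    # vertex set whose convex hull is the Minkowski sum of the given polytopes
    pts = [np.zeros(len(polys[0][0]))]
    for V in polys:
        pts = [p + np.asarray(v, float) for p in pts for v in V]
    return pts

def mixed_volume(polys):
    # MV_k(P_1,...,P_k) = sum over nonempty S of (-1)^(k-|S|) vol( sum_{i in S} P_i )
    k = len(polys)
    return sum((-1) ** (k - r) * vol(msum([polys[i] for i in S]))
               for r in range(1, k + 1) for S in combinations(range(k), r))

# m = k = 1: Q is a segment on the first axis of R^2, P a triangle in R^2
Q = [(0.0, 0.0), (2.0, 0.0)]
P = [(0.0, 0.0), (1.0, 0.0), (0.0, 3.0)]
len_Q   = max(q[0] for q in Q) - min(q[0] for q in Q)    # MV_1(Q)     = 2
len_piP = max(p[1] for p in P) - min(p[1] for p in P)    # MV_1(pi(P)) = 3
print(mixed_volume([Q, P]), len_Q * len_piP)             # both sides equal 6.0
\end{verbatim}
Both printed values should equal $6.0$: the segment $Q$ has length $2$, the projection $\pi(P)$ has length $3$, and $\MV_2(Q,P)=\text{vol}_2(Q+P)-\text{vol}_2(P)=7.5-1.5=6$.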
Another technical tool which is employed in a subsequent proof is the following Lemma. This goes back to an idea of Emiris and Canny \cite{EmirisCanny} to use linear programming and formula (\ref{explicitMV_2}) to compute the mixed volume.
\begin{lemma} \label{LiftingLemma}
Given polytopes $P_1,\dots,P_k$ $\subset {\mathbb R}^k$ and lifting vectors $\mu_1,\dots,\mu_k\in{\mathbb R}^k_{\geq 0}$. Denote the vertices of $P_i$ by $v^{(i)}_1,\dots,v^{(i)}_{r_i}$ and choose one edge $e_i=[v^{(i)}_{t_i}, v^{(i)}_{l_i}]$ from each $P_i$. Then
$C:=e_1+\dots+e_k$ is a mixed cell of the mixed subdivision induced by the liftings $\mu_i$ if and only if
\begin{enumerate}
\item[i)] The edge matrix $E:=V_a - V_b$ is non-singular (where $V_a:=(v^{(1)}_{t_1},\dots,v^{(k)}_{t_k})$ and $V_b:=(v^{(1)}_{l_1},\dots,v^{(k)}_{l_k})$) \ and
\item[ii)] For all polytopes $P_i$ and all vertices $v^{(i)}_s$ of $P_i$ which are not in $e_i$ we have:
\begin{equation}
\left( \langle \mu_1-\mu_i,\vec{e_1}\rangle,\dots,\langle\mu_k-\mu_i,\vec{e_k}\rangle \right)\cdot E^{-1} \cdot \left(v^{(i)}_{l_i}-v^{(i)}_s \right) \geq 0 \label{LiftingCondition}
\end{equation}
where $\vec{e_i}=v^{(i)}_{t_i}-v^{(i)}_{l_i}$.
\end{enumerate}
\end{lemma}
Before beginning with the proof we make some auxiliary observations on how to apply linear programming (\emph{LP}) here. In \cite{EmirisCanny} it is shown that the test whether a cell lies on the lower envelope of the lifted Minkowski sum can be formulated as a linear program. Let $\hat{m}_i\in {\mathbb R}^{k+1}$ denote the midpoint of the lifted edge $\hat{e}_i$ of $\hat{P}_i$, so that $\hat{m}=\hat{m}_1+\dots + \hat{m}_k$ is an interior point of the Minkowski sum $\hat{e}_1+\dots+\hat{e}_k$. Consider the linear program
\begin{align}
\text{maximize } s &\in {\mathbb R}_{\geq 0} \label{LP} \\
\text{s.t. } \hat{m} &- (0,\dots,0,s) \in \hat{P}_1+\dots+\hat{P}_k\nonumber \ .
\end{align}
If we denote the vertices of $P_i$ by $v^{(i)}_{1},\dots,v^{(i)}_{r_i}$ this can be written as
\begin{align*}
\text{maximize } s & \in {\mathbb R}_{\geq 0} \\
\text{s.t. } \hat{m}-(0,\dots,0,s) &= \displaystyle\sum_{i=1}^{k} \sum_{j=1}^{r_i} \lambda^{(i)}_{j} \hat{v}^{(i)}_{j} \\
\sum_{j=1}^{r_i} \lambda^{(i)}_{j} &=1 \quad \forall\, i=1,\dots,n \\
\lambda^{(i)}_{j} &\geq 0 \quad \forall\, i,j \ .\\
\end{align*}
$s$ measures the distance of $\hat{m}$ to the lower envelope of the Minkowski sum. Hence $\hat{m}$ lies on the lower envelope of $\hat{P}_1+\dots+\hat{P}_k$ if and only if the optimal value of (\ref{LP}) is zero.
In standard matrix form,
the linear program (\ref{LP}) can be written as $\max \{c^T x\, : \, Ax=b, x \ge 0 \}$ with
\begin{eqnarray*}
A&=& \begin{pmatrix}
v^{(1)}_{1} & \dots & v^{(1)}_{r_1} & \dots & \dots & v^{(k)}_{1} & \dots & v^{(k)}_{r_{k}} & {\bf 0 }_{k} \\[0.1cm]
\langle \mu_1,v^{(1)}_{1} \rangle & \dots &\langle \mu_1,v^{(1)}_{r_1} \rangle&\dots& \dots &\langle \mu_k,v^{(k)}_{1} \rangle& \dots &\langle \mu_k,v^{(k)}_{r_k} \rangle& 1 \\[0.1cm]
& {\bf 1 }^T_{r_1} & & {\bf 0 }^T_{r_2} & \dots & & {\bf 0 }^T_{r_{k}} & &0 \\[0.1cm]
& {\bf 0 }^T_{r_1} & & {\bf 1 }^T_{r_2} & \dots & & {\bf 0 }^T_{r_{k}} & &0 \\[0.1cm]
& \vdots & & & \ddots & & \vdots & &\vdots \\[0.1cm]
& {\bf 0 }^T_{r_1} & & {\bf 0 }^T_{r_2} & \dots & & {\bf 1 }^T_{r_{k}} & &0 \\[0.1cm]
\end{pmatrix} \, , \\
b^T&=&(\hat{m}, {\bf 1 }^T_{k})\ \in {\mathbb R}^{2k+1} \, , \\
c^T&=&({\bf 0 }^T_{r_1+\dots +r_k},1) \ \in {\mathbb R}^{r_1+\dots+r_k+1}
\end{eqnarray*}
and variables
$x^T = (\lambda^{(1)}_{1},\dots,\lambda^{(1)}_{r_1}, \dots \dots ,\lambda^{(k)}_{1},\dots,\lambda^{(k)}_{r_{k}},s)\ \in {\mathbb R}^{r_1+\dots+r_k+1}$.
Here ${\bf 0 }_{k}$ and ${\bf 1 }_k$ denote the all-0-vector and the all-1-vector in ${\mathbb R}^k$, respectively.
In this notation the point $\hat{m}$ from \eqref{LP} corresponds to $\bar{x}=(\lambda^{(1)}_{1},\dots ,\lambda^{(k)}_{r_{k}},s)$ where $s=0$ and $\lambda^{(i)}_{j}=\frac{1}{2}$ if the edge $\hat{e}_i$ contains the vertex $\hat{v}^{(i)}_j$ and $\lambda^{(i)}_{j}=0$ otherwise.
Given a feasible vertex $\bar{x}\geq 0$ of the LP, let $B$ be a
(not necessarily unique) choice of columns of $A$ such that the submatrix $A_B$ consisting of these columns satisfies $A^{-1}_B \cdot b=\bar{x}$. Let $A_N$ be the submatrix of $A$ consisting of the remaining columns and define $c_B$ and $c_N$ in the same way. By LP duality (see, e.g. \cite{Groetschel})
$\bar{x}$ is optimal if and only if
\begin{equation}\label{SimplexCriterion}
c_N^T-c_B^T\cdot A^{-1}_B\cdot A_N \leq 0 \quad \text{(componentwise)} \ .
\end{equation}
To prove Lemma~\ref{LiftingLemma} we assume that $\bar{x}$ is optimal and deduce conditions on the lifting vectors $\mu_i$ by using the inequality \eqref{SimplexCriterion}.
\medskip\noindent
{\it Proof of Lemma \ref{LiftingLemma}.}
Note that $C$ is full-dimensional and hence has a non-zero volume if and only if $E$ is non-singular. In the following only this case will be considered. To simplify the notation write $\mu(V)$ to denote $(\langle \mu_1,v_1\rangle,\dots,\langle\mu_k,v_k\rangle)$.
We know that $C$ is a mixed cell if and only if the following $\bar{x}$ is the optimal solution to the linear program defined above:
\[
\bar{x}=(\lambda^{(1)}_{1},\dots,\lambda^{(k)}_{r_{k}},0) \text{ where } \lambda^{(i)}_{j}=\begin{cases} \frac{1}{2}, & j\in \{t_i, l_i\} \\ 0, & \text{else } \end{cases} \ .
\]
The submatrices of $A$ corresponding to $\bar{x}$ are
\[
A_B=\begin{pmatrix}V_a & V_b & {\bf 0 }_k \\
\mu(V_a) & \mu(V_b) & 1 \\
\textup{Id}_{k} & \textup{Id}_{k} & {\bf 0 }_k
\end{pmatrix}
\quad\text{and}\quad
A_N=\begin{pmatrix}v^{(i)}_s\\ \langle \mu_i, v^{(i)}_s \rangle \\ \xi_i \end{pmatrix}_{\begin{smallmatrix}1\leq i\leq k \\ 1\leq s\leq r_i \\ s\neq t_i,l_i\end{smallmatrix}}
\]
where $\xi_i$ denotes the $i^{\text{th}}$ unit vector. Since
\[
A_B^{-1}=\begin{pmatrix} E^{-1} & {\bf 0 }_k & -E^{-1}\cdot V_b \\
-E^{-1} & {\bf 0 }_k & E^{-1}\cdot V_a \\
-\mu(E)\cdot E^{-1} & 1 & \mu(E)\cdot E^{-1}\cdot V_b-\mu(V_b)
\end{pmatrix}
\]
and $c_N = (0,\dots,0) $ the criterion (\ref{SimplexCriterion}) implies that $\bar{x}$ is optimal if
and only if
\[
(0,\dots,0,1)\cdot A_B^{-1}\cdot A_N \geq 0 \ .
\]
But a single entry of the vector on the left can be explicitly computed as
\[
-\left(\mu(E)\cdot E^{-1}\right)\cdot v^{(i)}_s+\langle \mu_i, v^{(i)}_s \rangle+\left(\mu(E)\cdot E^{-1}\cdot V_b-\mu(V_b)\right)\cdot \xi_i
\]
which equals the left hand side of (\ref{LiftingCondition}).
\hfill $\Box$
\medskip
Note that (\ref{LiftingCondition}) is linear in the $\mu_j$. Hence, for a given choice of edges
this condition defines a cone of lifting vectors which induce
a mixed subdivision that contains our chosen cell as a mixed cell.
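To make the preceding discussion concrete, the following Python sketch (our addition, assuming NumPy and SciPy) sets up the linear program (\ref{LP}) for the two polytopes of Example~\ref{Example:1} and tests, for every pair of edges, whether the corresponding cell lies on the lower hull of the lifted Minkowski sum. The lifting vectors $\mu_1=(1,0)$ and $\mu_2=(0,1)$ are an arbitrary choice of ours.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# Vertices of P and Q from Example 1 and one (arbitrarily chosen) lifting per polytope.
P = np.array([[0, 0], [3, 0], [0, 2], [3, 2]], float)
Q = np.array([[1, 0], [0, 1.5], [3, 3]], float)
mus = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

def lift(V, mu):
    # append the lifting value <mu, v> as a last coordinate to every vertex v
    return np.hstack([V, (V @ mu)[:, None]])

def is_lower_cell(edge_P, edge_Q):
    # decide via the LP whether the cell e_P + e_Q lies on the lower hull,
    # i.e. whether it is a cell of the induced coherent subdivision
    hatP, hatQ = lift(P, mus[0]), lift(Q, mus[1])
    m_hat = 0.5 * (hatP[edge_P[0]] + hatP[edge_P[1]]) \
          + 0.5 * (hatQ[edge_Q[0]] + hatQ[edge_Q[1]])
    cols = np.vstack([hatP, hatQ])          # all lifted vertices as rows
    n = cols.shape[0]
    A_eq = np.zeros((5, n + 1))
    A_eq[:3, :n] = cols.T                   # sum_j lambda_j * lifted vertex_j ...
    A_eq[2, n] = 1.0                        # ... plus s in the last coordinate equals m_hat
    A_eq[3, :4] = 1.0                       # lambdas of P sum to 1
    A_eq[4, 4:n] = 1.0                      # lambdas of Q sum to 1
    b_eq = np.concatenate([m_hat, [1.0, 1.0]])
    c = np.zeros(n + 1); c[n] = -1.0        # maximize s  <=>  minimize -s
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n + 1))
    return res.success and res.fun > -1e-9  # optimal s = 0: m_hat lies on the lower hull

P_edges = [(0, 1), (0, 2), (1, 3), (2, 3)]  # bottom, left, right, top edge of P
Q_edges = [(0, 1), (0, 2), (1, 2)]          # edges of Q by vertex indices
for eP in P_edges:
    for eQ in Q_edges:
        if is_lower_cell(eP, eQ):
            print("mixed cell:", eP, "+", eQ)
\end{verbatim}
With this lifting the sketch should report three mixed cells -- the left edge of $P$ with the edge from $(1,0)$ to $(0,\tfrac{3}{2})$ of $Q$, and the right and top edges of $P$ with the edge from $(1,0)$ to $(3,3)$ of $Q$ -- whose areas $2$, $4$ and $9$ sum to $\MV_2(P,Q)=15$, as predicted by \eqref{explicitMV_2}.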
\section{Application of the BKK theory on the graph embedding problem}\label{sec:BKK}
Our goal is to apply Bernstein's results to give bounds on the number of embeddings of
Laman graphs.
A first observation shows that for the formulation \eqref{SoE} the Bernstein bound is not
tight. Namely,
the system (\ref{SoE}) allows one to choose a direction $w$ that satisfies the conditions of Bernstein's Second Theorem~\ref{The:Bernstein2}. The choice $w=(0,0,0,0,-1,-1,\dots,-1)$ yields the face system
\[
\left. \begin{cases} x_1-c_1=0 \\ y_1-c_2=0 \\ x_2-(l_{1,2}-c_1)=0 \\ y_2-c_3=0 \\ x_i^2+y_i^2=0 \quad \forall [v_1,v_i], [v_2,v_i] \in E \\ (x_i-x_j)^2+(y_i-y_j)^2=0\quad \forall [v_i,v_j]\in E \text{ with } i,j \neq 1,2 \end{cases} \right\}
\]
which has $(x_1,y_1,\dots,x_{n},y_{n})=(c_1,c_2,l_{1,2}-c_1,c_3,1,i,1,i,\dots,1,i)$ as a solution with non-zero complex entries. So the mixed volume of the system in (\ref{SoE}) is a strict upper bound on the number of graph embeddings.
To decrease this degeneracy we apply an idea of Ioannis Emiris\footnote{Personal communication at EuroCG 2008, Nancy} (see \cite{EmirisThesis}). Surprisingly the introduction of new variables for common subexpressions, which increases the B\'ezout bound, can decrease the BKK bound. Here we introduce for every $i=1,\dots,n$ the variable
$s_i$ together with the new equation $s_i=x_i^2+y_i^2$. This leads to the following system of equations.
\begin{equation}\label{SubSoE}
\left. \begin{cases} x_1-c_1=0 \\ y_1-c_2=0 \\ x_2-(l_{1,2}-c_1)=0 \\ y_2-c_3=0 \\ s_i+s_j-2x_ix_j-2y_iy_j-l_{i,j}^2 =0 \quad \forall [v_i,v_j]\in E-\{[v_1,v_2]\} \\
s_i -x_i^2 -y_i^2=0 \quad \forall i=1,\dots,n \end{cases} \right\}
\end{equation}
Experiments show that the system \eqref{SubSoE} is still not generic in the sense of Theorem~\ref{The:Bernstein2} for every underlying minimally rigid graph.
Hence the upper bound on the number of embeddings given by the mixed volume
might not be tight in every case.
\subsection{Henneberg I graphs}
For this simple class of Laman graphs the mixed volume bound is tight as we will demonstrate below. Our proof exploits the inductive structure of Henneberg I graphs which is why it cannot be used for Henneberg II graphs.
\begin{lemma}\label{HennebergILemma}
For a Henneberg I graph on $n$ vertices, the mixed volume of system $\eqref{SubSoE}$ equals $2^{n-2}$.
\end{lemma}
\begin{proof}
Each Henneberg sequence starts with a triangle for which system \eqref{SubSoE} has mixed volume $2$. Starting from the triangle we consider a sequence of Henneberg I steps and show that the mixed volume doubles in each of these steps.
In a Henneberg I step we add one vertex $v_{n+1}$ and two edges $[v_r,v_{n+1}]$, $[v_q,v_{n+1}]$ with lengths $l_{r,n+1}$ and $l_{q,n+1}$.
So our system of equations (\ref{SubSoE}) gets three new equations, namely
\begin{eqnarray}
s_{n+1} - x_{n+1}^2 - y_{n+1}^2 &=& 0 \label{eq_HI_1} \\
s_r + s_{n+1} - 2x_r x_{n+1} - 2y_r y_{n+1} - l_{r,n+1}^2 &=& 0 \label{eq_HI_2} \\
s_q + s_{n+1} - 2x_q x_{n+1} - 2y_q y_{n+1} - l_{q,n+1}^2 &=& 0. \label{eq_HI_3}
\end{eqnarray}
In the new system of equations these three are the only polynomials involving $x_{n+1}$, $y_{n+1}$ and $s_{n+1}$, so Lemma~\ref{SeparationLemma} can be used to calculate the mixed volume separately. The projections of the Newton polytopes of equations \eqref{eq_HI_1}, \eqref{eq_HI_2} and \eqref{eq_HI_3} to the coordinates $x_{n+1}$, $y_{n+1}$ and $s_{n+1}$ are
\[
\text{conv}\left\{\begin{pmatrix}2&0&0\end{pmatrix}^T, \begin{pmatrix}0&2&0\end{pmatrix}^T, \begin{pmatrix}0&0&1\end{pmatrix}^T \right\}
\]
and twice
\[
\text{conv}\left\{\begin{pmatrix}1&0&0\end{pmatrix}^T, \begin{pmatrix}0&1&0\end{pmatrix}^T, \begin{pmatrix}0&0&1\end{pmatrix}^T, \begin{pmatrix}0&0&0\end{pmatrix}^T \right\}\ .
\]
The mixed volume of these equals $2$. So by Lemma \ref{SeparationLemma} the mixed volume of the new system is twice the mixed volume of the system before the Henneberg I step.
\end{proof}
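The value $2$ appearing in this proof can be double-checked numerically. The following self-contained Python sketch (our addition, assuming NumPy and SciPy) evaluates the mixed volume of the three projected Newton polytopes by the same inclusion-exclusion identity used after Lemma~\ref{SeparationLemma}.
\begin{verbatim}
import numpy as np
from itertools import combinations
from scipy.spatial import ConvexHull

def vol(pts):
    pts = np.asarray(pts, float)
    if np.linalg.matrix_rank(pts - pts[0]) < pts.shape[1]:
        return 0.0                 # degenerate (lower-dimensional) summands
    return ConvexHull(pts).volume

def mixed_volume(polys):
    # inclusion-exclusion over Minkowski sums of subsets of the polytopes
    k = len(polys)
    def msum(S):
        pts = [np.zeros(3)]        # the polytopes live in R^3 here
        for i in S:
            pts = [p + np.asarray(v, float) for p in pts for v in polys[i]]
        return pts
    return sum((-1) ** (k - len(S)) * vol(msum(S))
               for r in range(1, k + 1) for S in combinations(range(k), r))

T = [(2, 0, 0), (0, 2, 0), (0, 0, 1)]             # projected NP of the quadratic equation
S = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, 0)]  # projected NP of the two new edge equations
print(mixed_volume([T, S, S]))                    # should print 2.0 (up to rounding)
\end{verbatim}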
To get two new embeddings in every Henneberg I step we choose the new edge lengths to be almost equal to each other and much larger than all previous edge lengths (larger than the sum of all previous edge lengths is certainly enough).
\begin{cor}[Borcea and Streinu \cite{BorceaStreinu}] \label{TheoremHI}
The number of embeddings of Henneberg I graph frameworks is less than or equal to $2^{n-2}$ and this bound is sharp.
\end{cor}
Of course the elementary proof described in \cite{BorceaStreinu} of this statement does not need such heavy machinery as Bernstein's Theorem. The purpose of Lemma~\ref{HennebergILemma} is to show that the techniques described in this work apply here and that the BKK bound is tight in this case.
\subsection{Laman graphs on 6 vertices}\label{sec:6Vertices}
The first Laman graphs which are not constructable using only Henneberg I steps arise on $6$ vertices. A simple case analysis shows that up to isomorphisms there are only two such graphs, the Desargues graph and $K_{3,3}$ (see Figure~\ref{DesarguesAndK33}).
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.18]{Desargues} \qquad \includegraphics[scale=0.18]{K_33}
\end{center}
\caption{Left: Desargues graph. Right: $K_{3,3}$.}
\label{DesarguesAndK33}
\end{figure}
The number of embeddings of both graphs has been studied in detail. The Desargues graph is studied in \cite{BorceaStreinu} where the authors show that there can only be $24$ embeddings and that there exists a choice of edge lengths giving $24$ different embeddings. This is obtained by investigating the curve that is traced out by one of the vertices after one incident edge is removed.
Husty and Walter \cite{HustyWalter} apply resultants to show that $K_{3,3}$ can have up to $16$ embeddings and give as well specific edge lengths leading to $16$ different embeddings.\footnote{This corrects an earlier version of this paper.}
Both approaches rely on the special combinatorial structure of the specific graphs. The general bound in \cite{BorceaStreinu} for the number of embeddings of a graph with $6$ vertices yields $\binom{2\cdot(6-2)}{6-2}=70$. In this case the BKK bound gives a closer estimate. Namely the mixed volume of the system \eqref{SubSoE} (which uses the substitution trick to remove degeneracies) can be shown to be
$32$ for both graphs.\footnote{We used the PHCpack by Jan Verschelde for our mixed volume calculations, see \cite{phc}.}
\subsection{General case}
For the classes discussed above (Henneberg I, graphs on six vertices) as well as some other special cases,
the BKK bound on the number of embeddings is comparable to or even improves on the general bound of $\binom{2n-4}{n-2} $.
For the general case, the mixed volume approach for the system (\ref{SoE}) without the substitutions suggested by Emiris provides a simple,
but very weak bound. However, it may be of independent interest that the mixed volume can be exactly determined as a function of $n$ and that in particular the value is independent of the structure of the Laman graph.
\begin{theorem} \label{GeneralBound}
For any Laman graph on $n$ vertices, the mixed volume of the initial system \eqref{SoE} is exactly $4^{n-2}$.
\end{theorem}
\begin{proof}
The mixed volume of (\ref{SoE}) is at most the product of the degrees of the polynomial equations because it is less than or equal to the B\'{e}zout bound (see \cite{SturmfelsSolving}). To show that the mixed volume is at least this number we will use Lemma~\ref{LiftingLemma} to give a lifting that induces a mixed cell of volume $4^{n-2}$.
For $i \in \{1, \dots, 4\}$ the Newton polytope $\NP(h_i)$ is a segment.
We claim that the polynomials $h_i$ can be ordered in a way such that
for $i\geq 5$, $\NP(h_i)$ contains the edge $[0,2 \xi_i]$ where $\xi_i$ denotes the $i^{th}$ unit vector.
To see this, note first that every polynomial $h_j$ ($1 \le j \le 2n$)
has a non-vanishing constant term and therefore $0 \in \NP(h_j)$.
For $i \in \{1, \dots, n\}$, each of the monomials $x^2_i$ and $y^2_i$ occurs in $h_j$ if and only if
the edge which is modeled by $h_j$ is incident to $v_i$.
Let $E' := E \setminus \{[v_1,v_2]\}$. The Henneberg construction of a Laman graph
allows to orient the edges such that in the graph
$(V,E')$ each vertex in $V \setminus \{v_1,v_2\}$ has exactly two incoming edges
(see \cite{JordanBerg,LeeStreinu}).
Namely,
in a Henneberg I step the two new edges point to the new vertex. For a Henneberg II step we remember the direction of the deleted edge $\stackrel{\longrightarrow}{[v_r,v_s]}$ and let the new edge, which connects the new vertex to $v_s$, point to $v_s$. The other two new edges point to the new vertex.
(Figure~\ref{FigHennebergDirect} depicts this in an example.)
This orientation shows how to order the polynomials $h_5, \dots, h_{2n}$ in such a way
that the polynomials $h_{2i-1}$ and $h_{2i}$ model edges which are incoming edges of
the vertex $v_i$ within the directed graph. Remembering that the order of the variables was $(x_1,y_1,\dots,x_n,y_n)$ this implies that $2\xi_{2i-1}\in \NP(h_{2i-1})$ and $2\xi_{2i}\in \NP(h_{2i})$.
\begin{figure}[ht]
\setlength{\unitlength}{0.18pt}
\begin{center}
\begin{picture}(1200,550)(0,0)
\put(10,40){\includegraphics[scale=0.18]{HI_dir}}
\put(650,40){\includegraphics[scale=0.18]{HII_dir}}
\put(0,0){\mbox{$v_1$}}
\put(450,0){\mbox{$v_2$}}
\put(640,0){\mbox{$v_1$}}
\put(1090,0){\mbox{$v_2$}}
\put(290,165){\mbox{$v_3$}}
\put(940,165){\mbox{$v_3$}}
\put(0,540){\mbox{$v_4$}}
\put(650,545){\mbox{$v_4$}}
\put(520,510){\mbox{$v_5$}}
\put(1170,510){\mbox{$v_5$}}
\put(980,340){\mbox{$v_6$}}
\end{picture}
\caption{A Henneberg I and a Henneberg II step with directed edges.}
\label{FigHennebergDirect}
\end{center}
\end{figure}
\setlength{\unitlength}{1pt}
Now Lemma~\ref{LiftingLemma} can be used to describe a lifting that induces a subdivision that has
$[\xi_1,0]+\dots+[\xi_4,0]+[2 \xi_5,0]+\dots+ [2 \xi_{2n},0] $ as a mixed cell. In the notation of Lemma~\ref{LiftingLemma} the chosen edges give rise to the edge matrix $E =\begin{pmatrix} \textup{Id}_4 & \mathbf{0}\\ \mathbf{0}& 2\textup{Id}_{2n-4} \end{pmatrix}$, where $\textup{Id}_k$ denotes the $k\times k$ identity matrix. Substituting this into the second condition (\ref{LiftingCondition}) of Lemma~\ref{LiftingLemma} we get that for each Newton polytope $\NP(h_i)$ all vertices $v^{(i)}_s$ of $\NP(h_i)$ which are not $0$ or $2\xi_i$ have to satisfy
\[
\left( \mu_{1_1} -\mu_{i_1},\dots,\mu_ {{2n}_{2n}}-\mu_{i_{2n}} \right)\cdot v^{(i)}_s \leq 0 \, ,
\]
where we denote by $\mu_j=(\mu_{j_1},\dots,\mu_{j_{2n}})\in {\mathbb Q}^{2n}$ the lifting vector for $\NP(h_j)$. Since all the entries of each $v^{(i)}_s$ are non-negative this can easily be done by choosing the vectors $\mu_j$ such that their $j^{th}$ entry is sufficiently small and all other entries are sufficiently large.
\end{proof}
The preliminary remarks at the beginning of this section further imply:
\begin{cor}
The number of embeddings of a Laman graph framework with generic edge lengths is strictly less than $4^{n-2}$.
\end{cor}
\subsection{Open problems and future prospects}
Examples like the case study of Laman graph frameworks on $6$ vertices in Paragraph~\ref{sec:6Vertices} suggest that the mixed volume of the system \eqref{SubSoE} gives a significantly better bound on the number of embeddings than the one analyzed in Theorem~\ref{GeneralBound}. However, it remains open to compute the mixed volume of the system \eqref{SubSoE} as a function of $n$, as was done for the system \eqref{SoE} in Theorem~\ref{GeneralBound}.
The focus of our paper was on embeddings in the plane. See \cite{BorceaStreinu,EmirisVarvitsiotis} for embeddings into higher-dimensional spaces. With regard to the Bernstein bounds
there are straightforward analogs of Lemma~\ref{HennebergILemma} and Theorem~\ref{GeneralBound} to higher dimensions.
| {
"timestamp": "2009-03-13T14:48:56",
"yymm": "0805",
"arxiv_id": "0805.4120",
"language": "en",
"url": "https://arxiv.org/abs/0805.4120",
"abstract": "Determining the number of embeddings of Laman graph frameworks is an open problem which corresponds to understanding the solutions of the resulting systems of equations. In this paper we investigate the bounds which can be obtained from the viewpoint of Bernstein's Theorem. The focus of the paper is to provide the methods to study the mixed volume of suitable systems of polynomial equations obtained from the edge length constraints. While in most cases the resulting bounds are weaker than the best known bounds on the number of embeddings, for some classes of graphs the bounds are tight.",
"subjects": "Combinatorics (math.CO)",
"title": "Mixed Volume Techniques for Embeddings of Laman Graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126457229185,
"lm_q2_score": 0.724870282120402,
"lm_q1q2_score": 0.709439711619977
} |
https://arxiv.org/abs/2212.06429 | Extensions and automorphisms of Rota-Baxter groups | The notion of Rota-Baxter groups was recently introduced by Guo, Lang and Sheng [{\em Adv. Math.} 387 (2021), 107834, 34 pp.] in the geometric study of Rota-Baxter Lie algebras. They are closely related to skew braces as observed by Bardakov and Gubarev. In this paper, we study extensions of Rota-Baxter groups by constructing suitable cohomology theories. Among others, we find relations with the extensions of skew braces. Given an extension of Rota-Baxter groups, we also construct a short exact sequence connecting various automorphism groups, which generalizes the Wells short exact sequence. | \section{Introduction}
Rota-Baxter operators were first introduced in 1960 by G. Baxter \cite{GB60} in the fluctuation theory of probability. Such operators can be viewed as a generalization of the integral operator on the algebra of continuous functions. In the past two decades, these operators have gained great importance due to their connections with combinatorics, splitting of operads, solutions of the classical Yang-Baxter equation, Poisson geometry and mathematical physics \cite{GCR1969}.
In \cite{LHY2021}, L. Guo, H. Lang and Y. Sheng first introduced Rota-Baxter operators of weight $1$ on abstract groups. A group equipped with a Rota-Baxter operator of weight $1$ is called a Rota-Baxter group. They also considered smooth Rota-Baxter operators on Lie groups and showed that such operators can be differentiated to Rota-Baxter operators of weight $1$ on the corresponding Lie algebra. In \cite{VV2022}, V. G. Bardakov and V. Gubarev find a connection between Rota-Baxter groups and left skew braces.
V. G. Bardakov and V. Gubarev showed that every Rota-Baxter operator on a group induces a skew brace structure and that every skew brace can be embedded inside a Rota-Baxter group. The concept of a skew brace has been generalized to weak braces, semi-braces and inverse semi-braces; see \cite{CMMS22}, \cite{CCS21}, \cite{CCS2021}. A weak (left) brace is an algebraic structure $(S, +, \circ)$ where $(S, +)$ and $(S, \circ)$ are both inverse semigroups and for all $a, b, c \in S$ the following compatibility conditions hold:
$$a \circ (b + c ) = (a \circ b) - a + (a \circ c) \hspace{10mm} \mbox{ and } \hspace{10mm} a \circ a^\prime =-a+a,$$
where $-a, a^\prime$ denote the inverses of $a$ with respect to $+$ and $\circ$, respectively. Rota-Baxter operators on groups have further been generalized to Clifford semigroups, and it has been shown that every Rota-Baxter operator on a Clifford semigroup induces a weak brace; see \cite{CMP}. It is well known that skew braces produce non-degenerate solutions of the Yang-Baxter equation, and constructing Rota-Baxter operators on groups is equivalent to constructing non-degenerate solutions of the set-theoretical Yang-Baxter equation; these solutions need not be non-degenerate in the case of Rota-Baxter operators on Clifford semigroups. Cohomology and extension theory for weighted Rota-Baxter algebras and Lie algebras is investigated in \cite{AD1}, \cite{WZ}. A cohomology theory for relative Rota-Baxter operators is developed in \cite{JSZ}. A cohomology theory for extensions of braces by an abelian group with trivial actions was investigated by V. Lebed and L. Vendramin, see \cite{LV16}. A unified extension theory for non-abelian Rota-Baxter algebras and dendriform algebras, obtained by introducing non-abelian cohomology, is studied in \cite{AN}. Cohomology theory for extensions of linear cycle sets has been developed by J. A. Guccione, J. J. Guccione and C. Valqui in \cite{GGV21}. Extensions with non-trivial actions, the second cohomology group and Wells-type exact sequences for abelian and non-abelian extensions of skew braces are developed in \cite{DB18}, \cite{NMY1}, \cite{N1}. Extensions and the Wells exact sequence for groups are investigated in fine detail; see \cite{PSY18}, \cite{JL10}. The question about extensions of Rota-Baxter operators is asked in \cite{VV2022}. In this article we answer the same.
We develop an extension theory for Rota-Baxter groups, define the second cohomology group of a Rota-Baxter group acting on an abelian group and establish connections between these. We define a faithful action of Rota-Baxter extensions of $(H, R_H)$ by $(\operatorname{Z} (I), R_I)$ on Rota-Baxter extensions of $(H, R_H)$ by $(I, R_I)$. We give a necessary and sufficient condition for inducing a Rota-Baxter operator on the semidirect product of two Rota-Baxter groups. Further, we present an exact sequence connecting automorphism groups of Rota-Baxter groups with the second cohomology group.
\section{Rota-Baxter groups}
In this section, we recall Rota-Baxter groups and their relation with left skew braces. Our main references are \cite{LHY2021,VV2022}.
\begin{defn}
Let $(G, \cdot)$ be a group. A set map $R_G: G \rightarrow G$ is said to be a Rota-Baxter operator of weight $1$ on the group $G$ if
\begin{align*}
R_G(x) \cdot R_G(y)= R_G(x \cdot R_G(x) \cdot y \cdot R_G(x)^{-1}), \text{for all } x, y \in G.
\end{align*}
A group $(G, \cdot)$ equipped with a Rota-Baxter operator of weight $1$ is called a Rota-Baxter group. We denote a Rota-Baxter group simply by the pair $(G,R_G)$.
\end{defn}
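To get a feeling for the defining identity, it can be checked exhaustively on a small non-abelian group. The Python sketch below (our illustration, not taken from \cite{LHY2021}) verifies it on $S_3$ for the two easily checked operators $R_G(g)=g^{-1}$ and $R_G(g)=e$, both of which are Rota-Baxter operators of weight $1$ on any group.
\begin{verbatim}
from itertools import permutations

S3 = list(permutations(range(3)))                # elements of S_3 as tuples

def mul(p, q):                                   # composition (p * q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inv(p):                                      # inverse permutation
    return tuple(sorted(range(3), key=lambda i: p[i]))

def is_rota_baxter(R):
    # check R(x) R(y) == R( x R(x) y R(x)^{-1} ) for all x, y in S_3
    return all(mul(R[x], R[y]) ==
               R[mul(mul(x, R[x]), mul(y, inv(R[x])))]
               for x in S3 for y in S3)

R_inverse = {g: inv(g) for g in S3}              # R(g) = g^{-1}
R_trivial = {g: (0, 1, 2) for g in S3}           # R(g) = e
print(is_rota_baxter(R_inverse), is_rota_baxter(R_trivial))   # True True
\end{verbatim}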
\begin{defn}
Let $(G_1, R_{G_1})$ and $(G_2, R_{G_2})$ be two Rota-Baxter groups. A morphism of Rota-Baxter groups from $(G_1, R_{G_1})$ to $(G_2, R_{G_2})$ is a group homomorphism $\phi : G_1 \rightarrow G_2$ which satisfies $\phi \circ R_{G_1}= R_{G_2} \circ \phi $.
\end{defn}
Let $(G, R_G)$ be a Rota-Baxter group. A Rota-Baxter subgroup of $(G,R_G)$ is a subgroup $H \subseteq G$ that satisfies $R_G (H) \subseteq H$. It follows that, if $H$ is a Rota-Baxter subgroup then $(H, R_{G}|_H)$ is itself a Rota-Baxter group, and the inclusion map $H \hookrightarrow G$ is a morphism of Rota-Baxter groups.
\begin{defn}
A triple $(E,+,\circ)$, where $(E,+)$ and $(E, \circ)$ are groups is said to be a skew left brace if
$$a \circ (b+c)=a\circ b-a+a \circ c,$$
holds for all $a,b,c \in E$, where $-a$ denotes the inverse of $a$ in $(E, +)$.
\end{defn}
Next, we state an important result established by V. Bardakov and V. Gubarev in \cite{VV2022}, which links Rota-Baxter groups and skew left braces.
\begin{thm}
Let $(G, R_G)$ be a Rota-Baxter group. For $x, y \in G$ the operation
\begin{align}\label{gpop}
x \circ_{R_G} y:=x \cdot R_G(x)\cdot y\cdot R_G(x)^{-1}
\end{align}
defines a group structure on $G$ and $(G, \cdot, \circ_{R_G} )$ is a skew left brace.
\end{thm}
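For instance, for $R_G(g)=g^{-1}$ formula \eqref{gpop} gives $x \circ_{R_G} y = x\cdot x^{-1}\cdot y\cdot x=y \cdot x$, i.e. the opposite multiplication, and the skew left brace condition is immediate: $x \circ_{R_G} (y\cdot z)=y\cdot z\cdot x=(x \circ_{R_G} y)\cdot x^{-1}\cdot (x \circ_{R_G} z)$.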
\begin{remark}
Every morphism of Rota-Baxter groups is a morphism of the induced skew braces, but the converse need not be true.
\end{remark}
\begin{defn}
Let $(I, R_I)$ and $(H, R_H)$ be two Rota-Baxter groups. By a \emph{Rota-Baxter extension} of $(H, R_H)$ by $(I, R_I)$, we mean a Rota-Baxter group $(E,R_E)$ with an exact sequence
$$\mathcal{E} := 0 \to I \stackrel{i}{\to} E \stackrel{\pi}{\to} H \to 0,$$ such that $i$ and $\pi$ are morphisms of Rota-Baxter groups. We identify $i(y)$ with $y$, which implies that $R_{E}$ restricted to $I$ is $R_I$.
By a set-theoretical section of $\mathcal{E}$ $($st-section in short$)$, we mean a map $s : H \rightarrow E$ such that $\pi s= Id_{H}$ and $s(0)=0$.
\end{defn}
\begin{remark}
Every extension $\mathcal{E}$ of Rota-Baxter groups is an extension of the skew brace $H$ by $I$, but the converse need not be true.
\end{remark}
\section{Cohomology of Rota-Baxter groups}
In this section, we investigate the relationship between modules with respect to usual group operation and group operation induced by Rota-Baxter operator. Additionally, we observe the connection between corresponding cohomologies.
Let $(H, R_H)$ be a Rota-Baxter group with group operation denoted by `$\cdot$'. Let $I$ be a right $(H, \cdot)$-module. For each $y \in I$ and $h \in H$, the action $ y*h =y R_H(h)$ defines an $(H, \circ_{R_H})$-module structure on $I$. Let $C^n(H, I)$ be the set of all functions from $H^n$ to $I$ which vanish on all degenerate tuples.
For every right $(H, \cdot)$-module $I$, we have a cochain complex $( C^n(H, I),\delta^n )$, where coboundary map $\delta^n : C^n(H, I) \rightarrow C^{n+1}(H, I)$ defined by
\begin{align*}
(\delta^n f)(h_1, h_2,\ldots, h_{n+1}) = & f(h_2, \ldots,h_{n+1})+ \sum^{n}_{k=1} (-1)^k f(h_1, \ldots, h_k \cdot h_{k+1}, \ldots, h_{n+1})\\
& +f(h_1, \ldots,h_{n})h_{n+1}.
\end{align*}
Now, using the action of $(H, \circ_{R_H})$ on $I$, we can define a new cochain complex $( C^n(H, I),\partial^n )$ with the coboundary map $\partial^n : C^n(H, I) \rightarrow C^{n+1}(H, I)$ defined by
\begin{align*}
(\partial^n f)(h_1, h_2,\ldots, h_{n+1})= & f(h_2, \ldots,h_{n+1})+ \sum^{n}_{k=1} (-1)^k f(h_1, \ldots, h_k \circ_{R_H} h_{k+1}, \ldots, h_{n+1})\\
& +f(h_1, \ldots,h_{n})R_H(h_{n+1}).
\end{align*}
Now, consider an action of a Rota-Baxter group on an abelian Rota-Baxter group. Let $(I, R_I)$ be an abelian Rota-Baxter group and $(H, R_H)$ be an arbitrary Rota-Baxter group such that $I$ is a right $H$-module. Since a group acts on its modules by automorphisms, there exists an anti-homomorphism $\mu: H \rightarrow \operatorname{Aut} (I)$. We denote $\mu(h)$ by $\mu_h$ for all $h \in H$. We assume that $\mu_h$ is a Rota-Baxter automorphism of $(I, R_I)$ for all $h \in H$.
Let $C^{n}_{RB}(H,I):=C^n(H, I) \bigoplus C^{n-1}(H, I) \bigoplus C^{n}(H, I)$ and in particular $C^{1}_{RB}(H,I):=C^1(H, I) \bigoplus C^{1}(H, I)$.
Define $\delta_{RB}^{n} : C^{n}_{RB}(H,I) \rightarrow C^{n+1}_{RB}(H,I)$ by
$$\delta_{RB}^{1}(f, g)=(\delta^1(f), \bar{f}- R_{I}(g), \partial^1(g)),$$
$$\delta_{RB}^{n} (f, g, h)=(\delta^n(f), \partial^{n-1}(g)+ (-1)^{n+1}(\bar{f}-R_I(h)), \partial^n(h)) \hspace{10mm} \textrm{ for } n\geq 2,
$$
where $\bar{f}(h_1, \ldots, h_n)=f(R_H(h_1), \ldots, R_H(h_{n}))$ and $R_I(f)(h_1, \ldots, h_{n})=R_I(f(h_1, \ldots, h_{n})).$
Using the commutativity of $R_I$ and $ \mu_{R_H(h)}$, we have $\overline{\delta^n(f)}=\partial^n(\bar{f})$ and $R_I(\partial^n(f))=\partial^n(R_I(f))$.
Now
\begin{align*}
\delta^2_{RB}( \delta^1_{RB}(f, g)) & = \delta^2_{RB}(\delta^1(f), \bar{f}- R_{I}(g), \partial^1(g))\\
&=(\delta^2(\delta^1(f)), \partial^1(\bar{f}-R_I(g))-\overline{\delta^1(f)}+R_I(\partial^1(g)), \partial^2(\partial^1(g)))\\
&=0,
\end{align*}
and for $n \geq 2$,
\begin{align*}
\delta^{n+1}_{RB}( \delta^{n}_{RB}(f, g, h)) & = \delta^{n+1}_{RB}(\delta^n(f), \partial^{n-1}(g)+ (-1)^{n+1}(\bar{f}-R_I(h)), \partial^n(h))\\
&=(0, \partial^{n}(\partial^{n-1}(g)+ (-1)^{n+1}(\bar{f}-R_I(h)))+(-1)^{n+2}(\overline{\delta^n(f)}-R_I(\partial^n(h))), 0)\\
&= 0.
\end{align*}
The above observations show that $(C^n_{RB}(H, I), \delta_{RB}^n)$ is a cochain complex.
Let $\sigma : (H, \circ_{R_H} ) \rightarrow \operatorname{Aut} (I) $ be an anti-homomorphism such that $R_I \sigma_h=\mu_{R_H(h)} R_I$. This assumption may seem artificial, but such an action arises naturally in extensions of Rota-Baxter groups. Let $\partial_{\circ}^n : C^n(H, I) \rightarrow C^{n+1}(H, I)$ be the coboundary map with respect to the action $\sigma$ of $(H, \circ_{R_H})$ on $I$. It is easy to see that $R_I (\partial^n_{\circ}(f)) =\partial^n (R_I(f))$.
Define $\partial_{RB}^{n} : C^{n}_{RB}(H,I) \rightarrow C^{n+1}_{RB}(H,I)$
$$\partial_{RB}^{1}(f, g)=(\delta^1(f), \bar{f}- R_{I}(g), \partial_{\circ}^1(g))
,$$
and
$$\partial_{RB}^{n} (f, g, h)=(\delta^n(f), \partial^{n-1}(g)+ (-1)^{n+1}(\bar{f}-R_I(h)), \partial_{\circ}^n(h)) \hspace{10mm} \textrm{ for } n\geq 2.
$$
We have
\begin{align*}
\partial_{RB}^{2} \partial_{RB}^1(f,g) &= \partial_{RB}^{2}(\delta^1(f), \bar{f}- R_{I}(g), \partial_{\circ}^1(g))\\
&=(0, \partial^1(\bar{f}- R_{I}(g))-(\overline{\delta^1(f)}-R_I(\partial^1_{\circ}(g))), 0)\\
&=0.
\end{align*}
Similarly one can check that $\partial_{RB}^{n+1} \partial_{RB}^{n}(f,g, h)=0$ for $n \geq 2$. This shows that $(C^{n}_{RB}(H,I), \partial^n_{RB})$ is a cochain complex.
Next, we define module over a Rota-Baxter group.
\begin{defn}
Let $I$ be an abelian group and $R_I : I \rightarrow I$ be a homomorphism. We say that $(I, R_I)$ is a right $(H, R_H)$-module if $I$ is a right $H$-module by an action $\mu$ and the following condition holds
\begin{align}
\mu_{R_H(h)}(R_I(z))= R_I\big(\mu_{ h R_H( h)}(z +R_I(z)) -\mu_{R_H(h)}(R_I(z))\big)
\end{align}
for all $h \in H$ and $z \in I$.
\end{defn}
\begin{example}
Let $(H, R_H)$ be any Rota-Baxter group and $I$ be a trivial $H$-module that is $yh=y$ for all $y \in I$ and $h \in H$. Then $(I, R_I)$ is a right $(H, R_H)$-module for all homomorphisms $R_I : I \rightarrow I$ .
\end{example}
\begin{example}
Let $(H, R_H)$ be a Rota-Baxter group and $I$ be a right $H$-module. Then $(I ,R_I)$ is a $(H, R_H)$-module, where $R_I : I \rightarrow I$ is the trivial homomorphism.
\end{example}
\begin{example}
Let $I$ be a right $H$-module and $R_I(x)=-x$ then $(I, R_I)$ is a $(H, R_H)$-module for all Rota-Baxter operators on $H$.
\end{example}
\begin{example}
Let $R_I^2=-R_I$ and $R_I$ commutes with action of $H$ on $I$. Then $(I, R_I)$ is a $(H, R_H)$-module.
\end{example}
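As a concrete instance of the example with $R_I(x)=-x$, the compatibility condition can be verified exhaustively by computer. In the Python sketch below (our own illustration) we take $H=S_3$ with $R_H(h)=h^{-1}$, $I=\mathbb{Z}/7$ with the sign action $\mu_h(z)=\operatorname{sgn}(h)\,z$, and $R_I(z)=-z$; the choice of $S_3$ and $\mathbb{Z}/7$ is arbitrary.
\begin{verbatim}
from itertools import permutations

S3 = list(permutations(range(3)))

def mul(p, q): return tuple(p[q[i]] for i in range(3))
def inv(p):    return tuple(sorted(range(3), key=lambda i: p[i]))
def sgn(p):    return -1 if sum(p[i] > p[j] for i in range(3)
                                for j in range(i+1, 3)) % 2 else 1

R_H = inv                                 # Rota-Baxter operator R_H(h) = h^{-1} on H = S_3
R_I = lambda z: (-z) % 7                  # R_I(z) = -z on I = Z/7
mu  = lambda h, z: (sgn(h) * z) % 7       # sign action of S_3 on Z/7

# check the compatibility condition of the definition for all h in H and z in I
ok = all(mu(R_H(h), R_I(z)) ==
         R_I((mu(mul(h, R_H(h)), (z + R_I(z)) % 7) - mu(R_H(h), R_I(z))) % 7)
         for h in S3 for z in range(7))
print(ok)                                 # True
\end{verbatim}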
Let $(I, R_I)$ be a $(H, R_H)$-module. Define $TC^n_{RBE}:= C^n(H, I)$ $\oplus$ $C^{n-1}(H, I)$ for $n \leq 3$. More precisely $TC^1_{RBE}:= C^1(H, I)$, $TC^2_{RBE}:= C^2(H, I)$ $\oplus$ $C^{1}(H, I)$ and $TC^3_{RBE}:= C^3(H, I)$ $\oplus$ $C^{2}(H, I)$.
Define $\partial^1_{RBE} : TC^1_{RBE}(H, I) \rightarrow TC^2_{RBE}(H, I) $ by
$$
\partial^1_{RBE}(\theta):= (\delta^1 (\theta), -\Phi^1(\theta)),
$$
where $\Phi^1 (\theta(h))=R_I( \mu_{R_H(h)}(\theta(h))) -\theta(R_H(h))$.
Define $\partial^2_{RBE}: TC^2_{RBE} \rightarrow TC^3_{RBE}$ by
\begin{align*}
\partial^2_{RBE}(f, g):=(\delta^2f ,\beta),
\end{align*}
and $\beta$ is given by
\begin{align}\label{valofb}
\beta(h_1, h_2) = & \partial^1(g)(h_1, h_2) -R_I \big(\mu_{ R_H(h_2)}(\mu_{h_2}(g(h_1))- g(h_1))\big)- \Phi^2(f)(h_1, h_2),
\end{align}
where
\begin{align*}
\Phi^2(f)(h_1, h_2) =& f(R_H(h_1), R_H(h_2))-R_I \big(\mu_{R_H(h_1 \circ_{R_H} h_2)}\big(f(h_1R_H(h_1), h_2R_H(h_1)^{-1})\\
& + \mu_{h_2 R_H(h_1)^{-1}}(f(h_1, R_H(h_1))) + f(h_2, R_H(h_1)^{-1})-f(R_H(h_1), R_H(h_1)^{-1})\big)\big).
\end{align*}
\begin{lemma}
$\operatorname{Im} (\partial^1_{RBE}) \subseteq \operatorname{Ker} (\partial^2_{RBE})$.
\end{lemma}
\begin{proof}In order to prove the above lemma, it is enough to show that $\partial^2_{RBE}\partial^1_{RBE}=0$. For $\theta \in TC^1_{RBE}(H, I)$, we have $$\partial^1_{RBE}(\theta)= (\delta^1 (\theta), -\Phi^1(\theta)).
$$
Now on applying $\partial^2_{RBE}$ in the above identity, we have
$$
\partial^2_{RBE} \big(\partial^1_{RBE}(\theta)\big)= \partial^2_{RBE}\big (\delta^1 (\theta), -\Phi^1(\theta)\big).$$
The first coordinate of the above tuple is zero by the definition of $\partial^2_{RBE} $. Writing the second coordinate using \eqref{valofb}, we get
$$
\partial^1(-\Phi^1(\theta))(h_1, h_2)+R_I\big(\mu_{R_H(h_2)}(\mu_{h_2}(-\Phi^1(\theta)(h_1))+\Phi^1(\theta)(h_1))\big)
-\Phi^2(\delta^1(\theta))(h_1, h_2).
$$
To see that the second co-ordinate is also zero, we first expand the second co-ordinate term by term.
The first term
\begin{align}\label{first term}
\partial^1(-\Phi^1(\theta))(h_1, h_2)= & -\big(\Phi^1(\theta)(h_2)-\Phi^1(\theta)(h_1 \circ_{R_H} h_2)+\mu_{R_H(h_2)}(\Phi^1(\theta)(h_1)\notag\\
=&-\big(R_I( \mu_{R_H(h_2)}(\theta(h_2))) -\theta(R_H(h_2))-R_I( \mu_{R_H(h_1 \circ_{R_H} h_2)}(\theta(h_1 \circ_{R_H} h_2))) \notag\\
& +\theta(R_H(h_1 \circ_{R_H} h_2)) +\mu_{R_H(h_2)}(R_I( \mu_{R_H(h_1)}(\theta(h_1))) -\theta(R_H(h_1))) \big).
\end{align}
The second term
\begin{align}\label{second term}
R_I\big(\mu_{R_H(h_2)}(\mu_{h_2}(-\Phi^1(\theta)(h_1))+\Phi^1(\theta)(h_1))\big)=&R_I\big(\mu_{R_H(h_2)}\big(\mu_{h_2}(-R_I( \mu_{R_H(h_1)}(\theta(h_1))) +\theta(R_H(h_1)))\notag\\
&+R_I( \mu_{R_H(h_1)}(\theta(h_1))) -\theta(R_H(h_1))\big)\big),
\end{align}
and the third term
\begin{align}\label{third term}
\Phi^2(\delta^1(\theta))(h_1, h_2)=& \delta^1(\theta)(R_H(h_1), R_H(h_2))-R_I \big(\mu_{R_H(h_1 \circ_{R_H} h_2)}(\delta^1(\theta)(h_1R_H(h_1), h_2R_H(h_1)^{-1})\notag \\
& +\mu_{h_2 R_H(h_1)^{-1}}(\delta^1(\theta)(h_1, R_H(h_1))) + \delta^1(\theta)(h_2, R_H(h_1)^{-1})-\delta^1(\theta)(R_H(h_1), R_H(h_1)^{-1})\big) \notag\\
= & \theta(R_H(h_2))-\theta(R_H(h_1 \circ_{R_H} h_2))+\mu_{R_H(h_2)}(\theta(R_H(h_1))) -R_I \big(\mu_{R_H(h_1 \circ_{R_H} h_2)}(\theta(h_2R_H(h_1)^{-1})\notag \\
&-\theta(h_1 \circ_{R_H} h_2)+\mu_{h_2R_H(h_1)^{-1}}(\theta(h_1 R_H(h_1))) +\mu_{h_2 R_H(h_1)^{-1}}(\theta(R_H(h_1))-\theta(h_1 R_H(h_1)) \notag \\
& + \theta( R_H(h_1)^{-1})-\theta(h_2 R_H(h_1)^{-1})+ \mu_{R_H(h_1)^{-1}}(\theta(h_2 )) -\theta( R_H(h_1)^{-1})\notag \\
& +\mu_{R_H(h_1)^{-1}}(\theta( R_H(h_1)).
\end{align}
Adding \eqref{first term}, \eqref{second term}, \eqref{third term} and using $I$ is a $(H, R_H)$-module, it follows that $\partial^2_{RBE} \partial^1_{RBE}=0$.
\end{proof}
We denote by $\operatorname{Z} _{RBE}^1(H,I):=\operatorname{Ker} (\partial^1_{RBE})$ the set of derivations and by $\operatorname{Z} _{RBE}^2(H,I):=\operatorname{Ker} (\partial^2_{RBE})$ the set of $2$-cocycles from $(H, R_H)$ to $(I, R_I)$. The set $ \operatorname{Im} (\partial^1_{RBE})$ will be called the set of $2$-coboundaries from $H$ to $I$ and denoted by $\operatorname{B} ^2_{RBE}(H, I)$. More precisely,
\begin{align*}
\operatorname{Z} _{RBE}^1(H, I)=\big\{\substack{ \lambda \hspace{1mm} \in \hspace{1mm} TC^1_{RBE} \hspace{2mm}\big| \hspace{2mm} \lambda(h_1 h_2)= \lambda(h_2)+\mu_{h_2}(\lambda(h_1)), \lambda(R_H(h))=R_I(\mu_{R_H(h)}(\lambda(h)))\\
\forall \hspace{1mm} h_1,\hspace{1mm} h_2, \hspace{1mm}h \in \hspace{1mm}H.} \big\}
\end{align*}
and
\begin{align*}
\operatorname{Z} _{RBE}^2(H, I)=\Bigg\{\substack{ (\tau, g) \hspace{1mm} \in \hspace{1mm} TC^2_{RBE} \hspace{2mm} \big| \hspace{2mm}
\tau(h_1, h_2 h_3)+ \tau(h_2, h_3) -\tau(h_1 h_2, h_3) -\mu_{h_3}(\tau(h_1, h_2))=0,\\
\partial^1(g)(h_1, h_2) -R_I \big(\mu_{ R_H(h_2)}(\mu_{h_2}(g(h_1))- g(h_1))\big)- \Phi^2(\tau)(h_1, h_2)=0} \Bigg\}.
\end{align*}
We define the second cohomology group by $$\operatorname{H} ^2_{RBE}(H, I):=\operatorname{Z} _{RBE}^2(H, I)/ \operatorname{B} ^2_{RBE}(H, I)$$
\section{Abelian Extensions of Rota-Baxter groups}
In this section, we study abelian extensions of Rota-Baxter groups and find its connection with the second cohomology group.
Let $\mathcal{E} := 0 \to I \stackrel{i}{\to} E \stackrel{\pi}{\to} H \to 0$ be a Rota-Baxter extension of $(H, R_H)$ by $(I, R_I)$, where the Rota-Baxter operator on $E$ is denoted by $R_E$ and $I$ is an abelian group. Let $s : H \rightarrow E$ be any st-section of $\mathcal{E}$. Let $\nu:E \rightarrow \operatorname{Aut} (I)$, $\mu :H \rightarrow \operatorname{Aut} (I)$ and $\sigma :(H, \circ_{R_H}) \rightarrow \operatorname{Aut} (I, \circ_{R_I})$ be the maps defined by
\begin{align}\label{actions}
\nu_g(y)=& g^{-1} y g,\notag\\
\mu_h(y)= & s(h)^{-1}y s(h),\\
\sigma_h (y)= & s(h)^\prime \circ_{R_E} y \circ_{R_E} s(h),\notag
\end{align}
where $a^{\prime}$ denotes the inverse of $a$ with respect to `$\circ_{R_E}$'. Note that $\nu$, $\mu$ and $\sigma$ are anti-homomorphisms, and that $\mu$ and $\sigma$ are independent of the choice of st-section. For any $a \in E$, there exist unique $h \in H $ and $y \in I$ such that $a=s(h)y$. Note that
\begin{align}\label{RB oper ab}
R_E(s(h)y)=&R_E(s(h)\nu^{-1}_{R_E(s(h))} (\nu_{R_E(s(h))}(y)))\notag\\
=& R_E(s(h)) R_E(\nu_{R_E(s(h))}(y))\notag\\
=& R_E(s(h)) R_I(\nu_{R_E(s(h))}(y)).
\end{align}
We have $R_E(s(h))=s(\tilde{h})y_h$
for some unique $\tilde{h} \in H$ and $y_h \in I$. By applying $\pi$ on the both sides of $R_E(s(h))=s(\tilde{h})y_h$, and using $\pi R_E=R_H \pi$, we have
\begin{align}
R_H(\pi(s(h)))=& \tilde{h}.
\end{align}
Hence $ R_H(h)=\tilde{h}$. Now using the value of $R_E(s(h))$ in \eqref{RB oper ab}, we get
\begin{equation}\label{RB reduced}
R_E(s(h)y)= s(R_H(h))y_h R_I(\mu_{R_H(h)}(y)).
\end{equation}
Next consider the maps $\tau, \tilde{\tau}: H \times H \rightarrow I$ given by
\begin{align} \label{cocycle 1}
\tau(h_1, h_2)=&s(h_1 h_2)^{-1}s(h_1)s(h_2) \notag,\\
\tilde{\tau}(h_1, h_2)=&s(h_1 \circ_{R_H} h_2)^{\prime} \circ_{R_E} s(h_1) \circ_{R_E} s(h_2).
\end{align}
We have
\vspace{-.2cm}
\begin{align}\label{big}
R_E& (s(h_1)y_1) R_E(s(h_2)y_2)\notag\\
=& R_E(s(h_1)y_1 R_E(s(h_1)y_1)s(h_2)y_2 R_E(s(h_1)y_1)^{-1}) \notag \\
=&R_E(s(h_1)y_1 s(R_H(h_1))y_{h_1}R_I(\mu_{R_H(h_1)}(y_1))s(h_2)y_2R_I(\mu_{R_H(h_1)}(y_1))^{-1}y^{-1}_{h_1} s(R_H(h_1))^{-1}) \notag \\
=&R_E(s(h_1)s(R_H(h_1)) \mu_{R_H(h_1)}(y_1)y_{h_1}R_I(\mu_{R_H(h_1)}(y_1)) s(h_2)y_2 R_I(\mu_{R_H(h_1)}(y_1))^{-1}\notag \\
& y^{-1}_{h_1} s(R_H(h_1)^{-1}) \tau(R_H(h_1), R_H(h_1)^{-1})^{-1}) \notag \\
=& R_E(s(h_1)s(R_H(h_1)) \mu_{R_H(h_1)}(y_1)y_{h_1}R_I(\mu_{R_H(h_1)}(y_1)) s(h_2)s(R_H(h_1)^{-1})\notag \\
& \mu_{R_H(h_1)^{-1}}( y_2 R_I(\mu_{R_H(h_1)}(y_1))^{-1}y^{-1}_{h_1}) \tau(R_H(h_1), R_H(h_1)^{-1})^{-1}) \notag \\
= & R_E(s(h_1 R_H(h_1)) \tau(h_1, R_H(h_1)) \mu_{R_H(h_1)}(y_1)y_{h_1}R_I(\mu_{R_H(h_1)}(y_1)) s(h_2 R_H(h_1)^{-1})\notag \\ & \tau (h_2, R_H(h_1)^{-1}) \mu_{R_H(h_1)^{-1}}( y_2 R_I(\mu_{R_H(h_1)}(y_1))^{-1}y^{-1}_{h_1}) \tau(R_H(h_1), R_H(h_1)^{-1})^{-1}) \notag \\
=& R_E(s(h_1 R_H(h_1)) s(h_2 R_H(h_1)^{-1}) \mu_{ h_2 R_H(h_1)^{-1}}(\tau(h_1, R_H(h_1))\mu_{R_H(h_1)}(y_1)y_{h_1}\notag\\
& R_I(\mu_{R_H(h_1)}(y_1))) \tau (h_2, R_H(h_1)^{-1}) \mu_{R_H(h_1)^{-1}}( y_2 R_I(\mu_{R_H(h_1)}(y_1))^{-1}y^{-1}_{h_1}) \tau(R_H(h_1), R_H(h_1)^{-1})^{-1}) \notag \\
=& R_E(s(h_1 R_H(h_1)h_2 R_H(h_1)^{-1}) \tau(h_1 R_H(h_1), h_2 R_H(h_1)^{-1}) \mu_{ h_2 R_H(h_1)^{-1}}(\tau(h_1, R_H(h_1)) \notag \\
& \mu_{R_H(h_1)}(y_1)y_{h_1}R_I(\mu_{R_H(h_1)}(y_1))) \tau (h_2, R_H(h_1)^{-1}) \mu_{R_H(h_1)^{-1}}( y_2 R_I(\mu_{R_H(h_1)}(y_1))^{-1}y^{-1}_{h_1}) \tau(R_H(h_1), R_H(h_1)^{-1})^{-1}) \notag \\
=& s(R_H(h_1)R_H(h_2))y_{h_1 \circ_{R_H} h_2}R_I(\mu_{R_H(h_1 \circ_{R_H} h_2)}(\tau(h_1 R_H(h_1), h_2 R_H(h_1)^{-1}) \notag \\
& \mu_{ h_2 R_H(h_1)^{-1}}(\tau(h_1, R_H(h_1)) \mu_{R_H(h_1)}(y_1)y_{h_1}R_I(\mu_{R_H(h_1)}(y_1))) \tau (h_2, R_H(h_1)^{-1})\notag\\
& \mu_{R_H(h_1)^{-1}}( y_2 R_I(\mu_{R_H(h_1)}(y_1))^{-1}y^{-1}_{h_1})\tau(R_H(h_1), R_H(h_1)^{-1})^{-1}).
\end{align}
On the other hand, we have
\begin{align}\label{small}
R_E& (s(h_1)y_1) R_E(s(h_2)y_2)\notag \\
=& s(R_H(h_1)) y_{h_1} R_I(\mu_{R_H(h_1)}(y_1)) s(R_H(h_2)) y_{h_2} R_I(\mu_{R_H(h_2)}(y_2)) \notag \\
=&s(R_H(h_1)) s(R_H(h_2)) \mu_{R_H(h_2)}(y_{h_1} R_I(\mu_{R_H(h_1)}(y_1))) y_{h_2} R_I(\mu_{R_H(h_2)}(y_2)) \notag \\
=& s(R_H(h_1) R_H(h_2)) \tau(R_H(h_1), R_H(h_2)) \mu_{R_H(h_2)}(y_{h_1} R_I(\mu_{R_H(h_1)}(y_1))) y_{h_2} R_I(\mu_{R_H(h_2)}(y_2)).
\end{align}
Now by comparing \eqref{big} and \eqref{small}, we get
\begin{align}\label{2-cocycle condn}
\tau& (R_H(h_1), R_H(h_2)) \mu_{R_H(h_2)}(y_{h_1} R_I(\mu_{R_H(h_1)}(y_1))) y_{h_2}\notag\\
= & \hspace{1mm} y_{h_1 \circ_{R_H} h_2}R_I(\mu_{R_H(h_1 \circ_{R_H} h_2)}(\tau(h_1 R_H(h_1), h_2 R_H(h_1)^{-1}) \mu_{ h_2 R_H(h_1)^{-1}}(\tau(h_1, R_H(h_1)) \notag \\
& \mu_{R_H(h_1)}(y_1)y_{h_1}R_I(\mu_{R_H(h_1)}(y_1))) \tau (h_2, R_H(h_1)^{-1})\notag \\
& \mu_{R_H(h_1)^{-1}}( R_I(\mu_{R_H(h_1)}(y_1))^{-1}y^{-1}_{h_1})\tau(R_H(h_1), R_H(h_1)^{-1})^{-1}).
\end{align}
From now on, we write elements of $I$ additively. Putting $y_1=0$ in \eqref{2-cocycle condn}, we get
\begin{align}\label{cocycle condn}
\mu& _{R_H(h_1)}(y_{h_1})-y_{h_1 \circ_{R_H} h_2} +y_{h_2} -R_I \big(\mu_{ h_1 \circ_{R_H} h_2 R_H(h)^{-1}}(\mu_{h_2}(y_{h_1})- y_{h_1})\big)\notag \\
=& \tau(R_H(h_1), R_H(h_2))-R_I \big( \mu_{h_1 \circ_{R_H} h_2}(\tau(h_1R_H(h_1), h_2R_H(h_1)^{-1})\notag \\
&+\mu_{h_2 R_H(h_1)^{-1}}(\tau(h_1, R_H(h_1))
+ \tau(h_2, R_H(h_1)^{-1})-\tau(R_H(h_1), R_H(h_1)^{-1})\big).
\end{align}
By \eqref{2-cocycle condn} and \eqref{cocycle condn}, we observe that $\mu$ and $R_I$ together satisfy the following compatibility condition :
\begin{align}\label{comp condn}
\mu_{R_H(h_2)}(R_I(\mu_{R_H(h_1)}(y)))=& R_I\big(\mu_{ h_2 R_H( h_2)}(\mu_{R_H(h_1)}(y)+ R_I(\mu_{R_H(h_1)}(y)))\notag \\
&-\mu_{R_H(h_2)}(R_I(\mu_{R_H(h_1)}(y)))\big),
\end{align}
for all $h_1, h_2 \in H$ and $y \in I$. This shows that $(I, R_I)$ is a $(H, R_H)$-module.
In view of the relation $R_E(s(h))=s(R_H(h))y_h$, we define a map $g$ from $H$ to $I$ by
\begin{align}\label{RB map}
g(h)=y_h.
\end{align}
Clearly $g$ is well-defined and \eqref{cocycle condn} can be rewritten as
\begin{align}\label{new cocycle condn}
\partial& ^1(g)(h_1, h_2) -R_I \big(\mu_{ h_1 \circ_{R_H} h_2 R_H(h)^{-1}}(\mu_{h_2}(g(h_1))- g(h_1))\big)\notag \\
=& \tau(R_H(h_1), R_H(h_2))-R_I \big(\mu_{h_1 \circ_{R_H} h_2}(\tau(h_1R_H(h_1), h_2R_H(h_1)^{-1})
+\mu_{h_2 R_H(h_1)^{-1}}(\tau(h_1, R_H(h_1)))\notag \\
& + \tau(h_2, R_H(h_1)^{-1})-\tau(R_H(h_1), R_H(h_1)^{-1})\big).
\end{align}
Note that
\begin{align}\label{circ cocycle}
R_E(s(h_1))R_E(s(h_2)) =& R_E(s(h_1) \circ_{R_E} s(h_2))\notag\\
=&R_E\big(s(h_1 \circ_{R_H} h_2) \circ_{R_E} \tilde{\tau}(h_1, h_2)\big)\notag \\
=&s(R_H(h_1 \circ_{R_H} h_2)) g(h_1 \circ_{R_H} h_2) R_I(\tilde{\tau}(h_1, h_2)).
\end{align}
Comparing \eqref{small} and \eqref{circ cocycle} with $y_1=y_2=0$, we have the following relation among $\tilde{\tau}$, $\tau$ and $g$,
\begin{align}\label{mixed}
\partial^1(g)(h_1, h_2)=&R_I( \tilde{\tau}(h_1, h_2))- \tau(R_H(h_1), R_H(h_2)).
\end{align}
This shows that $(\tau, g, \tilde{\tau}) \in \operatorname{Ker} (\partial^2_{RB})$.
Let $s_1$ and $s_2$ be two st-sections of $\mathcal{E}$. We have $s_2(h)=s_1(h)z_h$ for some unique $z_h \in I$. Define $\theta: H \rightarrow I$ by $\theta(h):=z_h$. Clearly, $\theta$ is a well-defined map. Let $\tau_1, \tau_2$ be the 2-cocycles corresponding to $s_1$ and $s_2$, respectively. The following relation is well-known
\begin{align} \label{cocycle relation}
-\theta(h_1 h_2) + \tau_1(h_1, h_2)+ {\mu}_{h_2}(\theta(h_1)) + \theta(h_2)=\tau_2(h_1, h_2).
\end{align}
Let $R_{E}(s_1(h))=s_1(R_H(h)) \prescript{}{1}{y}_{h}$ and $R_{E}(s_2(h))=s_2(R_H(h)) \prescript{}{2}{y}_{h}$. Then we have
\begin{align*}
s_1(R_H(h)) \theta(R_H(h)) \prescript{}{2}{y}_{h}= R_E(s_2(h))= R_E(s_1(h)\theta(h))= s_1(R_H(h))\prescript{}{1}{y}_{h} R_I(\mu_{R_H(h)}(\theta(h))).
\end{align*}
Thus,
\begin{equation}\label{RB cocycle diff}
\prescript{}{2}{y}_{h} - \prescript{}{1}{y_{h}}= R_I(\mu_{R_H(h)}(\theta(h))) -\theta(R_H(h)).
\end{equation}
If we define the maps $g_1, g_2 : H \rightarrow I$ corresponding to $s_1$ and $s_2$ by $g_i(h)=\prescript{}{i}{y}_{h}$, then \eqref{RB cocycle diff} can be rewritten as
\begin{align} \label{g condn}
g_2(h)-g_1(h)=R_I(\mu_{R_H(h)}(\theta(h))) -\theta(R_H(h)).
\end{align}
\begin{prop}\label{kernel}
Let $I$ be an abelian group and $$\mathcal{E} := 0 \to I \stackrel{i}{\to} E \stackrel{\pi}{\to} H \to 0$$ be a Rota-Baxter extension of $(H, R_H)$ by $(I, R_I)$. For any st-section $s$, let $\tau$ and $g$ be the maps as defined in \eqref{cocycle 1} and \eqref{RB map}, respectively. Then $(\tau, g) \in \operatorname{Ker} (\partial^2_{RBE})$. Moreover, the cohomology class of $( \tau, g)$ is independent of the choice of any st-section of $\mathcal{E}$.
\end{prop}
\begin{proof}
It follows from \eqref{new cocycle condn} that the pair $(\tau, g) \in \operatorname{Ker} (\partial^2_{RBE})$. Let $s_1$ and $s_2$ be two st-sections of $\mathcal{E}$, and let $(\tau_1, g_1)$ and $(\tau_2, g_2)$ be the pairs corresponding to $s_1$ and $s_2$, respectively. Then \eqref{cocycle relation} and \eqref{g condn} together imply that $(\tau_1, g_1)$ and $(\tau_2, g_2)$ are cohomologous. This completes the proof.
\end{proof}
\begin{prop}\label{correspondence}
Let $(I, R_I)$ be an $(H, R_H)$-module via the action $\mu$, and let $(\tau, g) \in \operatorname{Ker} (\partial^2_{RBE})$. Then
the map $R: H \times I \rightarrow H \times I$ given by
\begin{align}\label{RB extn}
R(h, y):=\big( R_H(h), g(h)+R_I\big( \mu_{R_H(h)}(y )\big)\big)
\end{align}
defines a Rota-Baxter operator on the group extension $H \times I$ of $H$ by $I$ and $(H \times I, R)$ defines a Rota-Baxter extension of $(H, R_H)$ by $(I, R_I)$, where group operation on $H \times I$ is given by
\begin{align}\label{group extension}
(h_1, y_1)+(h_2, y_2):=\big(h_1h_2, y_1+\mu_{h_2}(y_2)+\tau(h_1, h_2)\big).
\end{align}
Moreover, cohomologous $2$-cocycles define equivalent extensions. We denote such extension by $\mathcal{E}(\tau, g)$.
\end{prop}
\begin{proof} From the theory of group extensions it follows that the operation in \eqref{group extension} defines a group extension of $H$ by $I$; for details see \cite[Chapter 2]{PSY18}. Now we verify that the operator $R$ defined in \eqref{RB extn} is a Rota-Baxter operator on $H \times I$.
\begin{align}\label{RB proof}
R(h_1, y_1) R(h_2, y_2)=& \big( R_H(h_1), g(h_1)+R_I\big( \mu_{R_H(h_1)}(y_1 )\big)\big)\big( R_H(h_2), g(h_2)+R_I\big( \mu_{R_H(h_2)}(y_2 )\big)\big)\notag\\
=& \big( R_H(h_1) R_H(h_2), g(h_1)+R_I\big( \mu_{R_H(h_1)}(y_1 )\big)+\mu_{R_H(h_2)}\big(g(h_2)\notag \\
&+R_I(\mu_{R_H(h_2)}(y_2 ))\big)+ \tau(R_H(h_1),R_H(h_2))\big).
\end{align}
Using the fact that $(\tau, g) \in \operatorname{Ker} (\partial^2_{RBE})$, the expression of \eqref{RB proof} becomes
\begin{align}\label{RB proof 2}
&\big(R_H(h_1 \circ_{R_H} h_2) , R_I\big( \mu_{R_H(h_1)}(y_1 )\big)+g(h_1 \circ_{R_H} h_2) +R_I \big(\mu_{ R_H(h)^{-1} R_H (h_1 \circ_{R_H} h_2)}(\mu_{h_2}g(h_1)- g(h_1)\big)\notag \\
&-R_I(\mu_{R_H (h_1 \circ_{R_H} h_2)}\big(\tau(h_1R_H(h_1), h_2R_H(h_1)^{-1}) + \mu_{h_2 R_H(h_1)^{-1}}(\tau(h_1, R_H(h_1))\notag \\
& + \tau(h_2, R_H(h_1)^{-1})- \tau(R_H(h_1)^{-1}, R_H(h_1))\big) +R_I\big( \mu_{R_H(h_2)}(y_2 )) \big).
\end{align}
Using \eqref{comp condn}, the expression \eqref{RB proof 2} becomes
\begin{align*}
&=R\Big((h_1, y_1) R(h_1, y_1) (h_2, y_2) R(h_1, y_1)^{-1} \Big).
\end{align*}
This shows that $R$ defined in \eqref{RB extn} is a Rota-Baxter operator on $H \times I$. It is easy to verify that $R i= i R_I $ and $\pi R= R_H \pi$, which shows that $$ \mathcal{E}(\tau, g):=0 \to I \stackrel{i}{\to} H \times I \stackrel{\pi}{\to} H \to 0$$ is a Rota-Baxter extension, where $i$ and $\pi$ are the natural injection and projection, respectively.
Next, we show that extensions corresponding to cohomologous $2$-cocycles are equivalent. Let $\mathcal{E}(\tau_1, g_1)$ and $\mathcal{E}(\tau_2, g_2)$ be two extensions corresponding to cohomologous $2$-cocycles $(\tau_1, g_1)$ and $(\tau_2, g_2)$, respectively. By the definition of $\operatorname{H} _{RBE}^2(H, I)$ there exists a map $\theta: H \rightarrow I$ such that
\begin{align*}
\tau_2(h_1, h_2)-\tau_1(h_1, h_2)=\delta^1(\theta)(h_1, h_2)
\end{align*}
and
\begin{align*}
g_2(h)-g_1(h)=\theta(R_H(h))-R_I(\mu_{R_H(h)}(\theta(h))),
\end{align*}
for all $h_1, h_2, h \in H$.
Define $ \varsigma: \mathcal{E}(\tau_1, g_1) \rightarrow \mathcal{E}(\tau_2, g_2)$ by $ \varsigma(h, y)=(h, y+\theta(h)).$ It follows that the map $ \varsigma$ is an isomorphism of groups. We will now show that $ \varsigma$ is a morphism of Rota-Baxter groups. Let $R_1$ and $R_2$ be the Rota-Baxter operators on $\mathcal{E}(\tau_1, g_1)$ and $\mathcal{E}(\tau_2, g_2)$ defined via \eqref{RB extn}. We have
\begin{align*}
\varsigma\big( R_1(h, y)\big) &= \varsigma\big( R_H(h), g_1(h)+R_I\big( \mu_{R_H(h)}(y )\big)\big)\\
&=\big( R_H(h), g_1(h)+R_I\big( \mu_{R_H(h)}(y )\big)+\theta(R_H(h))\big)\\
&=\big( R_H(h), g_2(h)+R_I\big( \mu_{R_H(h)}(y+\theta(h) )\big) \big)\\
&=R_2 \big( \varsigma (h, y) \big).
\end{align*}
This shows that $\varsigma$ is a morphism of Rota-Baxter groups.
\end{proof}
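As a simple illustration of Proposition \ref{correspondence} (a special case only, under the assumption that the zero pair is indeed a cocycle): if $(\tau, g)=(0,0) \in \operatorname{Ker} (\partial^2_{RBE})$, then the operation \eqref{group extension} reduces to $(h_1, y_1)+(h_2, y_2)=(h_1h_2,\, y_1+\mu_{h_2}(y_2))$, a semidirect product of $H$ and $I$, and \eqref{RB extn} specialises to
$$R(h, y)=\big( R_H(h),\, R_I(\mu_{R_H(h)}(y))\big),$$
so the proposition equips this semidirect product with a Rota-Baxter operator built from $R_H$, $R_I$ and the action $\mu$ alone.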
\begin{thm}\label{bij}
Let $(I, R_I)$ be an $(H, R_H)$-module and let $\operatorname{Ext} _{\mu}(H, I)$ be the set of equivalence classes of all extensions of $(H, R_H)$ by $(I, R_I)$ with action $\mu$. Then $\operatorname{Ext} _{\mu}(H, I)$ is in bijection with $\operatorname{H} ^{2}_{RBE}(H, I)$.
\end{thm}
\begin{proof}Let $$\mathcal{E} := 0 \to I \stackrel{i}{\to} E \stackrel{\pi}{\to} H \to 0$$ be a Rota-Baxter extension of $(H, R_H)$ by $(I, R_I)$. Let $s$ be a st-section of $\mathcal{E}$. Let $\tau$ and $g$ be maps as defined in \eqref{cocycle 1} and \eqref{RB map}, respectively. Define
\begin{align}
&\psi : \operatorname{Ext} _{\mu}(H, I) \rightarrow \operatorname{H} ^{2}_{RBE}(H, I) \hspace{.2cm} \mbox{by} \notag\\
&\psi(\mathcal{E}):=[(\tau, g)]
\end{align}
where $[(\tau, g)]$ denotes the cohomology class of $(\tau, g)$. That the map $\psi$ is well defined follows from Proposition \ref{kernel}. Conversely, define
\begin{align*}
&\omega : \operatorname{H} ^{2}_{RBE}(H, I) \rightarrow \operatorname{Ext} _{\mu}(H, I) \hspace{.2cm}\mbox{by} \\
&\omega([\tau, g]):= [\mathcal{E}(\tau, g)]
\end{align*}
where $[\mathcal{E}(\tau, g)]$ denotes the equivalence class of $\mathcal{E}(\tau, g)$ in $\operatorname{Ext} _{\mu}(H, I)$. To show that $\omega$ is well-defined, it is enough to show that $2$-cocycles corresponding to equivalent extensions are cohomologous. Let $E_1$ and $E_2$ be two equivalent Rota-Baxter extensions of $(H,R_H)$ by $(I, R_I)$; then we have
the following commutative diagram
$$\begin{CD}
0 @>>> I @>i>> E_1 @>{{\pi} }>> H @>>> 0\\
&& @V{\text{Id}} VV@V{\phi} VV @V{\text{Id} }VV \\
0 @>>> I @>i^\prime>> E_2 @>{{\pi^\prime} }>> H @>>> 0
\end{CD}$$
where $\phi: E_1 \rightarrow E_2$ is an isomorphism of Rota-Baxter groups. Let $s_1$ and $s_2$ be st-sections corresponding to the extensions $E_1$ and $E_2$, respectively. Let $(\tau_1, g_1)$ and $(\tau_2, g_2)$ be the 2-cocycles corresponding to $s_1$ and $s_2$, respectively. Let $s^{\prime}= \phi s_1$; then $s^{\prime}$ is a section of $E_2$. Let $(\tau^\prime, g^\prime)$ be the $2$-cocycle corresponding to $s^\prime$; then $(\tau^\prime, g^\prime)=(\tau_1, g_1)$ and, by Proposition \ref{kernel}, $(\tau^\prime, g^\prime)$ and $(\tau_2, g_2)$ are cohomologous. Hence $(\tau_1, g_1)$ and $(\tau_2, g_2)$ are cohomologous, which shows that $\omega$ is well-defined.
Next we prove that $ \psi$ and $ \omega$ are inverses of each other. In order to show that $\psi \omega= \operatorname{Id}_{ \operatorname{H} ^{2}_{RBE}(H, I)}$, we will prove that $(\tau, g)$ is cohomologous to a 2-cocycle $(\tau^\prime, g^\prime)$ corresponding to some st-section of $\mathcal{E}(\tau, g).$ Let $s: H \rightarrow \mathcal{E}(\tau, g)$ be given by $s(h)=(h, 0)$. Clearly, $s$ is a st-section of $\mathcal{E}(\tau, g)$. It can be easily seen that the $2$-cocycle corresponding to the st-section $s$ is cohomologous to $(\tau, g)$. On the other hand, $\omega \psi= \operatorname{Id}_{\operatorname{Ext} _{\mu}(H, I)}$ follows from Proposition \ref{correspondence}. This completes the proof.
\hfill $\Box$
\end{proof}
\begin{remark}
Let $(I, R_I)$ be an $(H, R_H)$-module. We know that every element of $ \operatorname{H} ^{2}_{RBE}(H, I)$ defines a Rota-Baxter extension, and every Rota-Baxter extension is an extension of the induced skew left braces. Let $ \operatorname{H} ^{2}_{N}(H, I)$ be the second cohomology group of the skew left brace $H$ with coefficients in $I$ (defined in \cite{NMY1}). There is a natural embedding of $ \operatorname{H} ^{2}_{RBE}(H, I)$ into $ \operatorname{H} ^{2}_{N}(H, I)$. More precisely, for $[(\tau, g)] \in \operatorname{H} ^{2}_{RBE}(H, I) $, construct an extension and, corresponding to that extension, define $\tau$ and $\tilde{\tau}$ as in \eqref{cocycle 1}; then $[(\tau, g)] \mapsto [(\tau, \tilde{\tau})]$ is the required embedding.
\end{remark}
\begin{remark}
If we consider $\mu_h=\operatorname{Id}_I$ for all $h \in H$ $($the case of central extensions$)$, then the maps $\partial^1_{RBE}$ and $\partial^2_{RBE}$ can be simplified as $
\partial^1_{RBE}(\theta):= (\delta^1 (\theta), -\Phi^1(\theta)),$
where $\Phi^1 (\theta)(h)=R_I(\theta(h)) -\theta(R_H(h))$,
and $\partial^2_{RBE}(f, g):=(\delta^2f ,\beta).$ In this case $\beta$ and $\Phi^2(f)$ are given by
\begin{align*}
\beta(h_1, h_2) =& \partial^1(g)(h_1, h_2)- \Phi^2(f)(h_1, h_2),\notag\\
\Phi^2(f)(h_1, h_2)=& f(R_H(h_1), R_H(h_2))-R_I \big(f(h_1R_H(h_1), h_2R_H(h_1)^{-1})
+f(h_1, R_H(h_1))\\
& + f(h_2, R_H(h_1)^{-1})-f(R_H(h_1), R_H(h_1)^{-1})\big).
\end{align*}
We also have the following commutative diagram
\[ \begin{tikzcd}
C^1 \arrow{r}{\Phi^1} \arrow[swap]{d}{\delta^1} & C^1 \arrow{d}{\partial^1} \\%
C^2 \arrow{r}{\Phi^2}& C^2
\end{tikzcd}
\]
that is $\partial^1 \Phi^1=\Phi^2 \delta^1$.
\end{remark}
\noindent \textbf{Problem 1.} Is it possible to generalize the maps $\Phi^1$ and $\Phi^2$ to maps $\Phi^n$, $n \geq 3$, such that $\partial^n \Phi^n=\Phi^{n+1} \delta^n$?
\section{General Extensions of Rota-Baxter groups}
In this section, we examine general extensions of Rota-Baxter groups. We also define an action of abelian extensions on non-abelian extensions.
Let $\mathcal{E} := 0 \to I \stackrel{}{\to} E \stackrel{\pi}{\to} H \to 0$ be a Rota-Baxter extension of $(H, R_H)$ by $(I, R_I)$. Let $s : H \rightarrow E$ be a st-section of $\mathcal{E}$. Let $\nu, \mu, \sigma$ be as in \eqref{actions}.
Note that $\nu$ is an anti-homomorphism, but $\mu$ and $\sigma$ in general need not be anti-homomorphisms. However, they satisfy the following identities:
\begin{align}
\mu_{h_1 h_2} =&i_{\tau(h_1, h_2)^{-1}} \mu_{h_2} \mu_{h_1},\notag\\
\sigma _{h_1 \circ_{R_H} h_2} =&i^{\circ}_{\tilde{\tau}(h_1, h_2)^{-1}} \sigma_{h_2} \sigma_{h_1},
\end{align}
where $i_x$ and $i^{\circ}_x$ are inner automorphisms of $I$ and $(I, \circ_{R_I})$, respectively, for $x \in I$.
We observe that $R_E(s(h))=s(R_H(h))y_h$ for some unique $y_h \in I$, and that for any $a \in E$ there exist unique $h \in H $ and $y \in I$ such that $a=s(h)y$. Note that
\begin{align}\label{RB oper}
R_E(s(h)y)=&R_E(s(h)\nu_{R_E(s(h))^{-1}} (\nu_{R_E(s(h))}(y)))\notag\\
=& R_E(s(h)) R_E(\nu_{R_E(s(h))}(y))\notag\\
=& R_E(s(h)) R_I(\nu_{(s(R_H(h))y_h)}(y))\notag\\
=&s( R_H(h)) y_h R_I(i_{y^{-1}_h}(\mu_{R_H(h)}(y))).
\end{align}
Using that $R_E$ is a Rota-Baxter operator on $E$, we have
\begin{align*}
& R_E (s(h_1)y_1) R_E(s(h_2)y_2)\notag\\
=& R_E(s(h_1)y_1 R_E(s(h_1)y_1)s(h_2)y_2 R_E(s(h_1)y_1)^{-1}) \notag \\
=&R_E\big(s(h_1)y_1 s(R_H(h_1))y_{h_1}R_I(i_{y^{-1}_{h_1}}(\mu_{R_H(h_1)}(y_1)))s(h_2)y_2 R_I(i_{y^{-1}_{h_1}}(\mu_{R_H(h_1)}(y_1)))^{-1}\notag \\
& y^{-1}_{h_1}s(R_H(h_1)^{-1}) \tau(R_H(h_1), R_H(h_1)^{-1})^{-1} \big) \notag \\
=&R_E\big(s(h_1)s(R_H(h_1)) \mu_{R_H(h_1)}(y_1) y_{h_1}R_I(i_{y^{-1}_{h_1}}(\mu_{R_H(h_1)}(y_1)))s(h_2)s(R_H(h_1)^{-1})\notag \\
& \mu_{R_H(h_1)^{-1}} ( y_2 R_I(i_{y^{-1}_{h_1}}(\mu_{R_H(h_1)}(y_1)))^{-1} y^{-1}_{h_1}) \tau(R_H(h_1), R_H(h_1)^{-1})^{-1} \big)\notag \\
=& R_E\big(s(h_1 R_H(h_1)) \tau(h_1, R_H(h_1)) \mu_{R_H(h_1)}(y_1) y_{h_1}R_I(i_{y^{-1}_{h_1}}(\mu_{R_H(h_1)}(y_1)))s(h_2 R_H(h_1)^{-1})\notag \\
& \tau(h_2, R_H(h_1)^{-1}) \mu_{R_H(h_1)^{-1}} ( y_2 R_I(i_{y^{-1}_{h_1}}(\mu_{R_H(h_1)}(y_1)))^{-1} y^{-1}_{h_1}) \tau(R_H(h_1), R_H(h_1)^{-1})^{-1} \big)\notag \\
=& R_E\big(s(h_1 R_H(h_1)) s(h_2 R_H(h_1)^{-1}) \mu_{h_2 R_H(h_1)^{-1}} \big(\tau(h_1, R_H(h_1)) \mu_{R_H(h_1)}(y_1) y_{h_1}\notag \\
& R_I(i_{y^{-1}_{h_1}}(\mu_{R_H(h_1)}(y_1)))\big)\tau(h_2, R_H(h_1)^{-1}) \mu_{R_H(h_1)^{-1}} ( y_2 R_I(i_{y^{-1}_{h_1}}(\mu_{R_H(h_1)}(y_1)))^{-1} y^{-1}_{h_1})\notag \\
& \tau(R_H(h_1), R_H(h_1)^{-1})^{-1} \big)\notag \\
\end{align*}
\begin{align}\label{ NA big}
=& R_E\Big(s(h_1 R_H(h_1) h_2 R_H(h_1)^{-1}) \tau(h_1 R_H(h_1), h_2 R_H(h_1)^{-1}) \mu_{h_2 R_H(h_1)^{-1}} \big(\tau(h_1, R_H(h_1))\notag \\
& \mu_{R_H(h_1)}(y_1) y_{h_1} R_I(i_{y^{-1}_{h_1}}(\mu_{R_H(h_1)}(y_1)))\big) \tau(h_2, R_H(h_1)^{-1}) \mu_{R_H(h_1)^{-1}}\big ( y_2 R_I(i_{y^{-1}_{h_1}}\notag \\
& (\mu_{R_H(h_1)}(y_1)))^{-1} y^{-1}_{h_1} \big) \tau(R_H(h_1), R_H(h_1)^{-1})^{-1} \Big)\notag \\
=& s(R_H(h_1) R_H(h_2)) y_{h_1 \circ_{R_H} h_2} R_I \Big( i^{-1}_{y_{h_1 \circ h_2}} \big( \mu_{R_H(h_1) R_H(h_2)}(\tau(h_1 R_H(h_1), h_2 R_H(h_1)^{-1})\notag \\
& \mu_{h_2 R_H(h_1)^{-1}} \big(\tau(h_1, R_H(h_1)) \mu_{R_H(h_1)}(y_1) y_{h_1} R_I(i_{y^{-1}_{h_1}}(\mu_{R_H(h_1)}(y_1)))\big)\tau(h_2, R_H(h_1)^{-1})\notag \\
& \mu_{R_H(h_1)^{-1}}\big ( y_2 R_I(i_{y^{-1}_{h_1}}(\mu_{R_H(h_1)}(y_1)))^{-1} y^{-1}_{h_1} \big) \tau(R_H(h_1), R_H(h_1)^{-1})^{-1} \big) \Big).
\end{align}
We can also expand this expression in another way:
\begin{align}\label{NA small}
& R_E (s(h_1)y_1) R_E(s(h_2)y_2)\notag \\
=& s(R_H(h_1)) y_{h_1} R_I(i_{y^{-1}_{h_1}}(\mu_{R_H(h_1)}(y_1))) s(R_H(h_2)) y_{h_2} R_I(i_{y^{-1}_{h_2}}(\mu_{R_H(h_2)}(y_2))) \notag \\
=&s(R_H(h_1)) s(R_H(h_2)) \mu_{R_H(h_2)}\big(y_{h_1} R_I(i_{y^{-1}_{h_1}}(\mu_{R_H(h_1)}(y_1)))\big) y_{h_2} R_I(i_{y^{-1}_{h_2}}(\mu_{R_H(h_2)}(y_2))) \notag \\
=& s(R_H(h_1) R_H(h_2)) \tau(R_H(h_1), R_H(h_2)) \mu_{R_H(h_2)}\big(y_{h_1} R_I(i_{y^{-1}_{h_1}}(\mu_{R_H(h_1)}(y_1)))\big) y_{h_2} \notag \\
& R_I(i_{y^{-1}_{h_2}}(\mu_{R_H(h_2)}(y_2))).
\end{align}
By comparing \eqref{ NA big} and \eqref{NA small}, we get
\begin{align}\label{NA parent}
& y_{h_1 \circ_{R_H} h_2}R_I \Big( i^{-1}_{y_{h_1 \circ_{R_H} h_2}} \big( \mu_{R_H(h_1) R_H(h_2)}(\tau(h_1 R_H(h_1), h_2 R_H(h_1)^{-1}) \mu_{h_2 R_H(h_1)^{-1}} \big(\tau(h_1, R_H(h_1))\notag \\
& \mu_{R_H(h_1)}(y_1) y_{h_1} R_I(i_{y^{-1}_{h_1}}(\mu_{R_H(h_1)}(y_1)))\big)\tau(h_2, R_H(h_1)^{-1})\notag \\
& \mu_{R_H(h_1)^{-1}}\big ( y_2 R_I(i_{y^{-1}_{h_1}}(\mu_{R_H(h_1)}(y_1)))^{-1} y^{-1}_{h_1} \big) \tau(R_H(h_1), R_H(h_1)^{-1})^{-1} \big) \Big)=A.
\end{align}
where $A$ is given by
\begin{align*}
A=& \tau(R_H(h_1), R_H(h_2)) \mu_{R_H(h_2)}\big(y_{h_1} R_I(i_{y^{-1}_{h_1}}(\mu_{R_H(h_1)}(y_1)))\big) y_{h_2} R_I(i_{y^{-1}_{h_2}}(\mu_{R_H(h_2)}(y_2))).
\end{align*}
Also, we have
\begin{align}\label{circ condn}
R_E(s(h_1))R_E(s(h_2))= & R_E(s(h_1) \circ_{R_E} s(h_2))\notag \\
= & R_E(s(h_1 \circ_{R_H} h_2) \circ_{R_I} \tilde{\tau}(h_1, h_2))\notag\\
=& R_E(s(h_1 \circ_{R_H} h_2)) R_E(\tilde{\tau}(h_1, h_2))\notag\\
=& s(R_H(h_1) R_H(h_2)) y_{h_1 \circ_{R_H} h_2} R_I(\tilde{\tau}(h_1, h_2)).
\end{align}
By putting $y_1=y_2=0$ in \eqref{ NA big} and comparing with \eqref{circ condn}, we get
\begin{align}\label{circ to old}
R_I(\tilde{\tau}(h_1, h_2)) =& R_I \Big( i^{-1}_{y_{h_1 \circ_{R_H} h_2}} \big( \mu_{R_H(h_1) R_H(h_2)}(\tau(h_1 R_H(h_1), h_2 R_H(h_1)^{-1})\notag \\
& \mu_{h_2 R_H(h_1)^{-1}} \big(\tau(h_1, R_H(h_1)) y_{h_1} \big)\tau(h_2, R_H(h_1)^{-1})\notag \\
& \mu_{R_H(h_1)^{-1}}\big ( y^{-1}_{h_1} \big) \tau(R_H(h_1), R_H(h_1)^{-1})^{-1} \big) \Big).
\end{align}
We have a well-defined map $g: H \rightarrow I$ given by
\begin{align}\label{g map}
g(h)=y_h
\end{align}
and \eqref{NA parent} and \eqref{circ to old} can be rewritten as
\begin{align}\label{NA parent 2}
& g(h_1 \circ_{R_H} h_2)R_I \Big( i^{-1}_{g(h_1 \circ_{R_H} h_2)} \big( \mu_{R_H(h_1) R_H(h_2)}(\tau(h_1 R_H(h_1), h_2 R_H(h_1)^{-1}) \mu_{h_2 R_H(h_1)^{-1}} \big(\tau(h_1, R_H(h_1))\notag \\
& \mu_{R_H(h_1)}(y_1) g(h_1) R_I(i_{g(h_1)^{-1}}(\mu_{R_H(h_1)}(y_1)))\big)\tau(h_2, R_H(h_1)^{-1})\notag \\
& \mu_{R_H(h_1)^{-1}}\big ( y_2 R_I(i_{g(h_1)^{-1}}(\mu_{R_H(h_1)}(y_1)))^{-1} g(h_1)^{-1} \big) \tau(R_H(h_1), R_H(h_1)^{-1})^{-1} \big) \Big)=A,
\end{align}
where $A$ is given by
\begin{align*}
A=& \tau(R_H(h_1), R_H(h_2)) \mu_{R_H(h_2)}\big(g(h_1) R_I(i_{g(h_1)^{-1}}(\mu_{R_H(h_1)}(y_1)))\big) g(h_2) R_I(i_{g(h_2)^{-1}}(\mu_{R_H(h_2)}(y_2))),
\end{align*}
and
\begin{align}\label{coho re}
R_I(\tilde{\tau}(h_1, h_2))=& R_I \Big( i^{-1}_{g(h_1 \circ_{R_H} h_2)} \big( \mu_{R_H(h_1) R_H(h_2)}(\tau(h_1 R_H(h_1), h_2 R_H(h_1)^{-1})\notag \\
& \mu_{h_2 R_H(h_1)^{-1}} \big(\tau(h_1, R_H(h_1)) g(h_1) \big)\tau(h_2, R_H(h_1)^{-1})\notag \\
& \mu_{R_H(h_1)^{-1}}\big (g(h_1)^{-1} \big) \tau(R_H(h_1), R_H(h_1)^{-1})^{-1} \big) \Big),
\end{align}
respectively.
A direct calculation shows that $\tau$ satisfies
\begin{equation}\label{cocycle 2}
\tau(h_1, h_2 h_3) \tau(h_2, h_3) \tau(h_1 h_2, h_3)^{-1} (\mu_{h_3}(\tau(h_1, h_2)))^{-1}=0.
\end{equation}
If we have a Rota-Baxter extension $\mathcal{E} := 0 \to I \stackrel{}{\to} E \stackrel{\pi}{\to} H \to 0$ and a st-section $s$, then we can construct a triplet $(\mu, \tau, g)$ corresponding to $s$ as described in \eqref{actions}, \eqref{cocycle 1}
and \eqref{g map}.
Next we examine how the triplets corresponding to different st-sections are related. Let $s_1$ and $s_2$ be two different st-sections of $\mathcal{E}$. We have $s_2(h)=s_1(h)z_h$ and $R_E(s_i(h)) =s_i(R_H(h))\prescript{}{i}{y}_{h}$ for some unique $z_h, \prescript{}{1}{y}_{h} $ and $\prescript{}{2}{y}_{h} \in I$. Define the maps $g_i, \theta: H \rightarrow I$ by $g_i(h):=\prescript{}{i}{y}_{h}$ and $\theta(h):=z_h$. Let $\prescript{}{i}{\mu}$ be the action of $H$ on $I$ corresponding to $s_i$ as defined in \eqref{actions}. Then we have
\begin{align}\label{st action }
\prescript{}{2}{\mu}_h &= i_{\theta(h)^{-1}} \prescript{}{1}{\mu}_h.
\end{align}
Let $\tau_1, \tau_2$ be the 2-cocycles corresponding to $s_1$ and $s_2$, respectively. The following relation is well known from the theory of group extensions:
\begin{align} \label{NA cocycle relation}
\theta(h_1 h_2)^{-1} \tau_1(h_1, h_2) \prescript{}{1}{\mu}_{h_2}
(\theta(h_1)) \theta(h_2)=\tau_2(h_1, h_2).
\end{align}
Next we see how $g_1$ and $g_2$ are related. We have
\begin{align}\label{first g}
R_E(s_2(h)) = s_2(R_H(h)) g_2(h) =s_1(R_H(h)) \theta(R_H(h)) g_2(h).
\end{align}
We also have,
\begin{align}\label{second g}
R_E(s_2(h)) = R_E(s_1(h) \theta(h))=s_1(R_H(h)) g_1(h) R_I\big(i_{g_{1}(h)^{-1}}(\prescript{}{1}{\mu}_{R_H(h)}(\theta(h)))\big).
\end{align}
By comparing \eqref{first g} and \eqref{second g}, we have
\begin{align}\label{NA g condn}
\theta(R_H(h)) g_2(h) &=g_1(h) R_I\big(i_{g_{1}(h)^{-1}}(\prescript{}{1}{\mu}_{R_H(h)}(\theta(h)))\big).
\end{align}
As mentioned above, the map $\mu$ defined in \eqref{actions} is, in general, not a homomorphism. However, $\mu$ composed with the natural projection
$$\operatorname{Aut} (I) \rightarrow \operatorname{Out} (I):=\operatorname{Aut} (I)/ \operatorname{Inn} (I)$$
is a homomorphism, denoted by
$\bar{\mu} : H \rightarrow \operatorname{Out} (I)$.
Now we state a well-known theorem on group extensions, which remains true for Rota-Baxter extensions of groups.
\begin{thm}
The homomorphism $\bar{\mu}: H \rightarrow \operatorname{Out} (I)$ is independent of the choice of st-section $s : H \rightarrow E$. Furthermore, if $\prescript{}{1}{\bar{\mu}}$, $\prescript{}{2}{\bar{\mu}}$ are the homomorphisms associated with two equivalent Rota-Baxter extensions of $(H, R_H)$ by $(I, R_I)$, then $\prescript{}{1}{\bar{\mu}}=\prescript{}{2}{\bar{\mu}}$.
\end{thm}
In view of the preceding theorem, for every equivalence class of extensions $[\mathcal{E}] \in \operatorname{Ext} (H,I)$ there is a unique homomorphism $\bar{\mu}: H \rightarrow \operatorname{Out} (I),$ called the coupling associated to $[\mathcal{E}]$.
Let $(H, R_H)$ and $(I, R_I)$ be two Rota-Baxter groups and $\bar{\mu}: H \rightarrow \operatorname{Out} (I)$ a coupling. We set
$$\operatorname{Ext} _{\bar{\mu}}(H, I)=\{[\mathcal{E}]\hspace{.1cm} \vert \hspace{.1cm}\bar{\mu} \mbox{ is the coupling associated to } \mathcal{E} \}.$$
Thus, we can write
\vspace{-.01cm}
$$ \operatorname{Ext} (H, I)=\bigsqcup_{\bar{\mu}}\operatorname{Ext} _{\bar{\mu}}(H, I).$$
\begin{defn}
Let $(H, R_H)$ and $(I, R_I)$ be two Rota-Baxter groups. Let $\mu : H \rightarrow \operatorname{Aut} (I),$ $ \tau: H \times H \rightarrow I,$
and $g: H \rightarrow I $ be maps satisfying \eqref{st action }, \eqref{NA parent 2} and \eqref{cocycle 2}. We call the triplet $(\mu, \tau, g)$ an associated triplet corresponding to the action of $H$ on $I$.
\end{defn}
\begin{thm}
Every associated triplet $(\mu, \tau,g)$ corresponding to an action of $H$ on $I$ defines a Rota-Baxter extension of $(H, R_H)$ by $(I, R_I)$, denoted by $\mathcal{E}(\mu, \tau,g)$. Furthermore, if $\mathcal{E}(\mu, \tau,g)$ is a Rota-Baxter extension of $(H, R_H)$ by $(I, R_I)$ defined by $( \mu, \tau, g)$, then there exists a st-section of $\mathcal{E}(\mu, \tau,g)$ for which the associated triplet is $( \mu, \tau, g)$.
\end{thm}
\begin{proof}
Let $( \mu,\tau, g)$ be an associated triplet. Define $R: H \times_{\mu, \tau} I \rightarrow H \times_{\mu, \tau} I$ by
$$R(h, y)=\big( R_H(h), g(h)R_I (i_{g(h)^{-1}}(\mu_{R_H(h)}(y))) \big).$$
Here $H \times_{\mu, \tau} I$ is the group extension of $H$ by $I$ defined by $\mu$ and $\tau$. More precisely, the group structure on $H \times_{\mu, \tau} I$ is given by
$$(h_1, y_1)(h_2, y_2)=(h_1 h_2, \tau(h_1, h_2) \mu_{h_2}(y_1)y_2).$$
That the map $R$ defined above is a Rota-Baxter operator follows from \eqref{NA parent 2}. It is easy to see that
$$\mathcal{E}(\mu, \tau,g) := 0 \to I \stackrel{i}{\to} H \times I\stackrel{\pi}{\to} H \to 0,$$
is a Rota-Baxter extension, where $i$ and $\pi$ are the natural injection and projection, respectively. Let $ s : H \rightarrow H \times I $ be given by $s(h):=(h, 0)$, where $0$ denotes the identity of $I$. Then $s$ is a st-section of $\mathcal{E}(\mu, \tau,g)$ and the triplet corresponding to $s$ is $(\mu, \tau, g).$
\hfill $\Box$
\end{proof}
Let $\alpha$ be a homomorphism from $H$ to $\operatorname{Out} (I)$. Define $$\mathcal{Z}^2_{\alpha}(H, I):=\{(\mu, \tau, g) \hspace{1mm} \vert \hspace{1mm} \bar{\mu}=\alpha \mbox{ and } (\mu, \tau, g) \mbox{ is an associated triplet}\}.$$
Next we describe the relationship between associated triplets that define equivalent extensions. Let $(\prescript{}{1}{\mu}, \tau_1, g_1)$ and
$(\prescript{}{2}{\mu}, \tau_2, g_2)$ be two elements of $\mathcal{Z}^2_{\alpha}(H, I)$. We say that $(\prescript{}{1}{\mu}, \tau_1, g_1) \sim (\prescript{}{2}{\mu}, \tau_2, g_2) $ if the extensions $\mathcal{E}(\prescript{}{1}{\mu}, \tau_1, g_1)$ and $\mathcal{E}(\prescript{}{2}{\mu}, \tau_2, g_2)$ are equivalent Rota-Baxter extensions.
\begin{prop}
The relation `$\sim$' on $\mathcal{Z}^2_{\alpha}(H, I)$ is an equivalence relation.
\end{prop}
\begin{thm}
Two associated triplets $(\prescript{}{1}{\mu}, \tau_1, g_1)$ and $(\prescript{}{2}{\mu}, \tau_2, g_2)$ define equivalent extensions if and only if there exists a map $\theta: H \rightarrow I$ such that $(\prescript{}{1}{\mu}, \tau_1, g_1)$ and $(\prescript{}{2}{\mu}, \tau_2, g_2)$, together with $\theta$, satisfy \eqref{st action }, \eqref{NA cocycle relation} and \eqref{NA g condn}.
\end{thm}
\begin{proof}
Let $\theta: H \rightarrow I$ be a map that satisfies \eqref{st action },\eqref{NA cocycle relation} and \eqref{NA g condn}. Define
$$ \psi : \mathcal{E}(\prescript{}{2}{\mu}, \tau_2, g_2) \rightarrow \mathcal{E}(\prescript{}{1}{\mu}, \tau_1, g_1) \mbox{ by }$$
$$\psi(h,y):=(h, \theta(h) y ).$$
We claim that $\psi$ is an isomorphism of Rota-Baxter groups and that the following diagram commutes:
$$\begin{CD}
0 @>>> I @>i_2>> \mathcal{E}(\prescript{}{2}{\mu}, \tau_2, g_2) @>{{\pi_2} }>> H @>>> 0\\
&& @V{\text{Id}} VV@V{\psi} VV @V{\text{Id} }VV \\
0 @>>> I @>i_1>> \mathcal{E}(\prescript{}{1}{\mu}, \tau_1, g_1) @>{{\pi_1} }>> H @>>> 0.
\end{CD}$$
\textit{Proof of the claim }: Let $R_1$ and $R_2$ be the Rota-Baxter operators corresponding to $\mathcal{E}(\prescript{}{1}{\mu}, \tau_1, g_1)$ and $\mathcal{E}(\prescript{}{2}{\mu}, \tau_2, g_2)$, respectively. For $h_1, h_2 \in H$ and $y_1, y_2 \in I $, we have
\begin{align*}
\psi\big( (h_1, y_1)(h_2, y_2) \big)&=\psi\big(h_1 h_2, \tau_2(h_1, h_2) \prescript{}{2}{\mu}_{h_2}(y_1) y_2 \big)\notag \\
&=\big( h_1h_2, \theta(h_1 h_2)\tau_2(h_1, h_2) \prescript{}{2}{\mu}_{h_2}(y_1) y_2 \big)\notag \\
&=\big (h_1 h_2, \tau_1(h_1, h_2) \prescript{}{1}{\mu}_{h_2}(\theta(h_1)) \theta(h_2)\prescript{}{2}{\mu}_{h_2}(y_1) y_2 \big)\notag\\
&=\big (h_1 h_2, \tau_1(h_1, h_2) \prescript{}{1}{\mu}_{h_2}(\theta(h_1)) \prescript{}{1}{\mu}_{h_2}(y_1) \theta(h_2) y_2 \big)\notag \\
&=\psi(h_1, y_1) \psi(h_2, y_2).
\end{align*}
Next we have
\begin{align}\label{RBmor}
\psi R_2(h, y)&=\psi\big( R_H(h), g_2(h)R_I (i_{g_2(h)^{-1}}(\mu_{R_H(h)}(y))) \big)\notag \\
&= \big( R_H(h), \theta(R_H(h)) g_2(h)R_I (i_{g_2(h)^{-1}}(\mu_{R_H(h)}(y))) \big).
\end{align}
Using \eqref{NA g condn} in \eqref{RBmor}, we get that $\psi$ is a morphism of Rota-Baxter groups. That $\psi$ is a bijection and that the above diagram commutes is a simple verification.
\noindent \textit{Conversely}: Let $(\prescript{}{1}{\mu}, \tau_1, g_1)$ and $(\prescript{}{2}{\mu}, \tau_2, g_2)$ be two associated triplets such that there exists an isomorphism $\psi:\mathcal{E}(\prescript{}{2}{\mu}, \tau_2, g_2) \rightarrow \mathcal{E}(\prescript{}{1}{\mu}, \tau_1, g_1) $ for which the following diagram commutes:
$$\begin{CD}
0 @>>> I @>i_2>> \mathcal{E}(\prescript{}{2}{\mu}, \tau_2, g_2) @>{{\pi_2} }>> H @>>> 0\\
&& @V{\text{Id}} VV@V{\psi} VV @V{\text{Id} }VV \\
0 @>>> I @>i_1>> \mathcal{E}(\prescript{}{1}{\mu}, \tau_1, g_1) @>{{\pi_1} }>> H @>>> 0.
\end{CD}$$
Let $h \in H $ and $y \in I$. Using the commutativity of the above diagram, we have
$\psi i_2 = i_1 $ and $\pi_1 \psi= \pi_2$, which implies that $\psi(e,y)=(e, y)$ and $\psi(h, 0)=(h, g_h)$ for some unique $g_h \in I$.
Define $\theta: H \rightarrow I$ by $\theta(h):=g_h$. It is easy to see that $\theta$ is the required map.
\hfill $\Box$
\end{proof}
\begin{cor}
Let $(\prescript{}{1}{\mu}, \tau_1, g_1)$ and $(\prescript{}{2}{\mu}, \tau_2, g_2)$ be two elements of
$\mathcal{Z}^2_{\alpha}(H, I)$. Then $(\prescript{}{1}{\mu}, \tau_1, g_1)$ $\sim$ $(\prescript{}{2}{\mu}, \tau_2, g_2)$ if there exists a map $\theta: H \rightarrow I $ such that $(\prescript{}{1}{\mu}, \tau_1, g_1)$ and $(\prescript{}{2}{\mu}, \tau_2, g_2)$, together with $\theta$, satisfy \eqref{st action }, \eqref{NA cocycle relation} and \eqref{NA g condn}.
\end{cor}
\begin{defn}
Define $$\mathcal{H}^2_{\alpha}(H, I):=\mathcal{Z}^2_{\alpha}(H, I)/ \sim$$ and denote by $[(\mu,\tau, g)] \in \mathcal{H}^2_{\alpha}(H, I)$ the equivalence class of $(\mu,\tau, g)$.
\end{defn}
\begin{thm}
There exists a bijection between $\mathcal{H}^2_{\alpha}(H, I)$ and $\operatorname{Ext} _{\alpha}(H, I).$
\end{thm}
\begin{proof}
That the map $\phi: \mathcal{H}^2_{\alpha}(H, I) \rightarrow \operatorname{Ext} _{\alpha}(H, I) $ defined by $\phi([(\mu, \tau, g)]):=[\mathcal{E}(\mu, \tau,g)]$ is well defined and injective follows from the definition of `$\sim$'. That the map is surjective follows from the construction of $\mu, \tau$ and $g$ as defined in \eqref{actions}, \eqref{cocycle 1} and \eqref{g map}, respectively.
\end{proof}
\begin{prop}
Let $(H, R_H)$ and $(I, R_I)$ be Rota-Baxter groups such that $\operatorname{Z} (I)$ is a Rota-Baxter subgroup of $(I, R_I)$. Let $(\mu, \tau, g)$ be an associated triplet corresponding to an action of $H$ on $I$; then $(\operatorname{Z} (I), R_I)$ is an $(H, R_H)$-module with action $\tilde{\mu}: H \rightarrow \operatorname{Aut} (\operatorname{Z} (I))$, where $\tilde{\mu}_h:=\mu_h \vert_{Z(I)}$. Moreover, every element of $\mathcal{Z}^2_{\alpha}(H, I)$ induces the same action on $\operatorname{Z} (I)$.
\end{prop}
\begin{proof}
By putting $h_1=0$ in \eqref{NA parent 2}, we get an expression independent of $\tau$. Now, using that $y_1, y_2 \in \operatorname{Z} (I)$ and that $\operatorname{Z} (I)$ is invariant under $R_I$ and under $\mu_h$ for $h \in H$, we can easily establish that $(\operatorname{Z} (I), R_I)$ is an $(H, R_H)$-module via the action $\tilde{\mu}$.
\end{proof}
\begin{thm}\label{cocycle change}
Let $(\mu,\tau, g)$ be an associated triplet and let $\mu^\prime $ be an action of $H$ on $I$ with $\overline{\mu}=\overline{\mu^\prime}$. Then there exist $\tau^\prime : H\times H \rightarrow I$ and $g^\prime : H \rightarrow I $ such that $(\mu^\prime,\tau^\prime, g^\prime)$ is an associated triplet and $[(\mu,\tau, g)]=[(\mu^\prime,\tau^\prime, g^\prime)]$.
\end{thm}
\begin{proof}
The condition $\overline{\mu}=\overline{\mu^\prime}$ implies that there exists a map $\theta : H \rightarrow I $ such that $\mu_h=i_{\theta(h)} \mu^\prime_h$. Let $s_1$ be a st-section of $\mathcal{E}(\mu, \tau, g)$ inducing $(\mu, \tau, g)$, and define $s_2(h)=s_1(h) \theta(h)$ for $ h \in H$. Then the action induced by $s_2$ is $\mu^\prime$, and the pair $(\tau^\prime, g^\prime)$ corresponding to $s_2$ gives the required maps. As $s_1$ and $s_2$ are different st-sections of the same extension, they induce equivalent extensions, which implies that $[(\mu,\tau, g)]=[(\mu^\prime,\tau^\prime, g^\prime)]$.
\hfill $\Box$
\end{proof}
\begin{thm}
Let $(I, R_I)$ be a Rota-Baxter group such that $\operatorname{Z} (I)$ is a Rota-Baxter subgroup of $I$. Then there exists a free action of $\operatorname{H} ^2(H, \operatorname{Z} (I))$ on $\mathcal{H}^2_{\alpha}(H, I)$, where $(\operatorname{Z} (I), R_I)$ is an $(H, R_H)$-module with respect to the action induced by elements of $\mathcal{Z}^2_{\alpha}(H, I)$.
\end{thm}
\begin{proof}
Let $[(\tau^\prime, g^\prime)] \in \operatorname{H} ^2(H, \operatorname{Z} (I))$ and $[(\mu, \tau, g)] \in \mathcal{H}^2_{\alpha}(H, I)$. Define
\begin{align}\label{freeact}
[(\tau^\prime , g^\prime)] [(\mu, \tau, g)]:=[(\mu, \tau \tau^\prime , g g^\prime)],
\end{align}
where
$\tau \tau^\prime (h_1, h_2):=\tau(h_1, h_2) \tau^\prime (h_1, h_2)$ and $g g^\prime (h):=g(h) g^\prime(h)$, for all $h_1, h_2 , h \in H$. It is easy to see that $(\mu, \tau \tau^\prime , g g^\prime) \in \mathcal{Z}^2_{\alpha}(H, I) $. Let $(\tau_1^\prime, g_1^\prime), (\tau_2^\prime, g_2^\prime) \in \operatorname{Ker}(\partial^2_{RBE})$ and $( \prescript{}{1}{\mu} , \tau_1, g_1), ( \prescript{}{2}{\mu} , \tau_2, g_2) \in \mathcal{Z}^2_{\alpha}(H, I)$ be such that $[(\tau_1^\prime, g_1^\prime)]=[(\tau_2^\prime, g_2^\prime)]$ and $[( \prescript{}{1}{\mu} , \tau_1, g_1)]=[( \prescript{}{2}{\mu} , \tau_2, g_2)]$. We claim that $[( \prescript{}{1}{\mu} , \tau_1 \tau^\prime_1, g_1 g^\prime_1)] =[( \prescript{}{2}{\mu} , \tau_2 \tau^\prime_2, g_2 g^\prime_2)]$. As $[(\tau_1^\prime, g_1^\prime)]=[(\tau_2^\prime, g_2^\prime)]$, there exists a map $\theta$ such that $(\tau_1^\prime, g_1^\prime)$ and $(\tau_2^\prime, g_2^\prime)$ are cohomologous via $\theta$. Also, there exists a map $\theta^\prime$ such that $( \prescript{}{1}{\mu} , \tau_1, g_1) \sim ( \prescript{}{2}{\mu} , \tau_2, g_2)$ via $\theta^\prime$. A straightforward calculation shows that
$( \prescript{}{1}{\mu} , \tau_1 \tau^\prime_1, g_1 g^\prime_1) \sim ( \prescript{}{2}{\mu} , \tau_2 \tau^\prime_2, g_2 g^\prime_2)$ via $\theta \theta^\prime$, which proves the claim. This shows that the action is well-defined.
Next we show that the defined action is free. Let $[(\tau^\prime, g^\prime)] [(\mu, \tau, g)]=[(\mu, \tau \tau^\prime , g g^\prime)]=[(\mu, \tau , g )]$. Then we have a map $\theta : H \rightarrow I$ such that $(\mu, \tau \tau^\prime , g g^\prime) \sim (\mu, \tau , g )$ via $\theta$. It follows from \eqref{st action } that $\theta(h) \in \operatorname{Z} (I)$ for all $h \in H$, and \eqref{NA cocycle relation} and \eqref{NA g condn} prove that $[(\tau^\prime, g^\prime)]$ is the identity of $\operatorname{H} ^2(H, \operatorname{Z} (I))$, which shows that the action is free.
\hfill $\Box$
\end{proof}
\noindent \textbf{Problem 2.} Under what conditions does the action \eqref{freeact} become transitive?
\section{Split extensions of Rota-Baxter groups}
In this section, we study the special case where the short exact sequence of Rota-Baxter groups splits as a sequence of groups, and we provide some examples.
Let $(H, R_H)$ and $(I, R_I)$ be Rota-Baxter groups and let $\mathcal{E} := 0 \to I \stackrel{i}{\to} E \stackrel{\pi}{\to} H \to 0$ be an extension of Rota-Baxter groups in which $\mathcal{E}$ splits as an extension of groups. That is, there exists a st-section $s: H \rightarrow E$ which is a group homomorphism.
By putting $\tau(h_1, h_2)=0$ in \eqref{NA parent 2}, we get
\begin{align}\label{split condn final}
& \mu_{R_H(h_2)}\Big(g(h_1)R_I\big(i_{g(h_1)^{-1}}(\mu_{R_H(h_1)}(y_1))\big)\Big) g(h_2)R_I\big(i_{g(h_2)^{-1}}(\mu_{R_H(h_2)}(y_2))\big)\notag \\
&= g(h_1 \circ_{R_H} h_2) R_I\Big( i_{g(h_1 \circ_{R_H} h_2)^{-1}} \big( \mu_{R_H(h_1 \circ_{R_H} h_2)}(z)\big) \Big),
\end{align}
where $z$ is given by
\begin{align}\label{final z condn}
z=& \mu_{R_H(h_1) h_2 R_H(h_1)^{-1} }\Big(R_I\big(i_{g(h_1)^{-1}}(\mu_{R_H(h_1)}(y_1))\big)\Big) \mu_{h_2 R_H(h_1)^{-1}}\bigg(y_2 \Big(R_I\big(i_{g(h_1)^{-1}}(\mu_{R_H(h_1)}(y_1))\big) \Big)^{-1} g(h_1)^{-1} \bigg).
\end{align}
\begin{thm}\label{splitgrp}
Let $(H, R_H)$ and $(I, R_I)$ be two Rota-Baxter groups. Let $\mu$ be an anti-homomorphism from $H$ to $\operatorname{Aut} (I)$ and let $g$ be a map from $H$ to $I$ such that $g(0)=0$ and such that $\mu$ and $g$ together satisfy \eqref{split condn final} for all $h_1, h_2 \in H$ and $y_1, y_2 \in I$. Then the map $R: H \times I \rightarrow H \times I$ given by
\begin{align}\label{split RB operator}
R(h, y)=\Big(R_H(h), g(h) R_I\big(i_{g(h)^{-1}}(\mu_{R_H(h)}(y))\big)\Big)
\end{align}
defines a Rota-Baxter operator on the semi-direct product $H \ltimes_{\mu} I$, where the group operation in $H \ltimes_{\mu} I$ is given by
\begin{align*}
(h_1, y_1) (h_2, y_2) =&(h_1 h_2, \mu_{h_2}(y_1) y_2).
\end{align*}
\end{thm}
\begin{proof}
To show that $R$ defined in \eqref{split RB operator} is a Rota-Baxter operator, we expand both sides of the defining identity. Consider
\begin{align}\label{first split condn}
& R(h_1, y_1)R(h_2, y_2)\notag \\
=& \Big(R_H(h_1), g(h_1) R_I\big(i_{g(h_1)^{-1}}(\mu_{R_H(h_1)}(y_1))\big)\Big)\Big(R_H(h_2), g(h_2) R_I\big(i_{g(h_2)^{-1}}(\mu_{R_H(h_2)}(y_2))\big)\Big)\notag \\
=& \Big(R_H(h_1)R_H(h_2), \mu_{R_H(h_2)}\big(g(h_1) R_I(i_{g(h_1)^{-1}}(\mu_{R_H(h_1)}(y_1)))\big) g(h_2) R_I\big(i_{g(h_2)^{-1}}(\mu_{R_H(h_2)}(y_2))\big)\Big).
\end{align}
On the other side, we have
\begin{align}\label{second condn split}
&R\big((h_1, y_1)R(h_1, y_1)(h_2, y_2)R(h_1, y_1)^{-1}\big)\notag \\
=&R\Big((h_1, y_1)\big(R_H(h_1), g(h_1) R_I(i_{g(h_1)^{-1}}(\mu_{R_H(h_1)}(y_1))\big)(h_2, y_2)\notag \\
&(R_H(h_1)^{-1}, \mu_{R_H(h_1)^{-1}}\big(g(h_1) R_I(i_{g(h_1)^{-1}}(\mu_{R_H(h_1)}(y_1)))\big)^{-1} \Big)\notag\\
=&R\Big(h_1 R_H(h_1), \mu_{R_H(h_1)}\big(g(h_1) R_I(i_{g(h_1)^{-1}}(\mu_{R_H(h_1)}(y_1)))\big)\notag\\
&(h_2 R_H(h_1)^{-1}, \mu_{R_H(h_1)^{-1}}(y_2)\mu_{R_H(h_1)^{-1}}\big(g(h_1) R_I(i_{g(h_1)^{-1}}(\mu_{R_H(h_1)}(y_1)))\big)^{-1}\Big)\notag\\
=& R\Big(h_1R_H(h_1)h_2 R_H(h_1)^{-1},\mu_{h_2R_H(h_1)^{-1}}\big(\mu_{R_H(h_1)}(g(h_1) R_I(i_{g(h_1)^{-1}}(\mu_{R_H(h_1)}(y_1)))))\big)\notag \\
& \mu_{R_H(h_1)^{-1}}(y_2)\mu_{R_H(h_1)^{-1}}\big(g(h_1) R_I(i_{g(h_1)^{-1}}(\mu_{R_H(h_1)}(y_1)))\big)^{-1}\Big)\notag\\
=&\Big(R_H(h_1 \circ_{R_H} h_2), g(h_1 \circ_{R_H} h_2)R_I(i_{g(h_1 \circ_{R_H} h_2)^{-1}}\big(\mu_{R_H(h_1 \circ_{R_H} h_2)}(\mu_{h_2R_H(h_1)^{-1}}\big(\mu_{R_H(h_1)}(g(h_1) \notag \\
& R_I(i_{g(h_1)^{-1}}(\mu_{R_H(h_1)}(y_1)))))))\Big).
\end{align}
Using \eqref{split condn final} in \eqref{second condn split}, it follows that \eqref{first split condn} and \eqref{second condn split} are equal. This proves that the map defined in \eqref{split RB operator} is a Rota-Baxter operator.
We denote this Rota-Baxter group by $(H, I, \mu, g, R)$.
\hfill $\Box$
\end{proof}
\begin{thm}\label{splthm}
Let $(H, R_H)$ and $(I, R_I)$ be two Rota-Baxter groups, and let $\mu$, $g$ and $R$ be as in Theorem \ref{splitgrp}. Then the Rota-Baxter group $(H, I, \mu, g, R)$ defines a Rota-Baxter extension of $(H, R_H)$ by $(I, R_I)$ that splits as an extension of groups.
\end{thm}
\begin{proof}
Let $$\mathcal{E}(\mu, g):= 0 \to I \stackrel{i}{\to} (H, I, \mu, g, R) \stackrel{\pi}{\to} H \to 0$$
be a sequence of groups, where $i$ and $\pi$ are the natural injection and projection, respectively.
We have $$R(i(y))=R(0,y)= (0,R_I(y)) =i(R_I(y)),$$ and
$$\pi(R(h,y))=\pi \Big(R_H(h), g(h) R_I\big(i_{g(h)^{-1}}(\mu_{R_H(h)}(y))\big)\Big)=R_H(\pi(h, y)),$$
for all $h \in H$ and $y \in I$. This shows that $\mathcal{E}(\mu, g)$ is a Rota-Baxter extension. Let $s: H \rightarrow (H, I, \mu, g, R)$ be given by $s(h)=(h, 0)$. It is easy to verify that $s$ is a st-section of $\mathcal{E}(\mu, g)$ and that $s$ is also a group homomorphism. This shows that $\mathcal{E}(\mu, g)$ splits as an extension of groups.
\hfill $\Box$
\end{proof}
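A minimal illustration of such a split extension, independent of any particular choice of $\mu$ and $g$ above, is the direct product: for Rota-Baxter groups $(H, R_H)$ and $(I, R_I)$, the operator $R(h, y):=(R_H(h), R_I(y))$ on the direct product $H \times I$ satisfies
$$R(h_1, y_1)R(h_2, y_2)=\big(R_H(h_1)R_H(h_2),\, R_I(y_1)R_I(y_2)\big)=R\big((h_1,y_1)\,R(h_1,y_1)\,(h_2,y_2)\,R(h_1,y_1)^{-1}\big),$$
since the defining identity holds in each coordinate, and the section $s(h)=(h, e)$ is a group homomorphism; hence the resulting Rota-Baxter extension splits as an extension of groups.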
\begin{thm}
Let $\mathcal{E}: 0 \to I \stackrel{i}{\to} E \stackrel{\pi}{\to} H \rightarrow 0$ be an extension of Rota-Baxter groups such that it splits as an extension of groups. Then $\mathcal{E}$ is equivalent to $\mathcal{E}(\mu, g)$ for some $g: H \rightarrow I$ and anti-homomorphism $\mu : H \rightarrow \operatorname{Aut} (I)$.
\end{thm}
\begin{proof}
Let $\mathcal{E}: 0 \to I \stackrel{i}{\to} E \stackrel{\pi}{\to} H \rightarrow 0$ be an extension of Rota-Baxter groups that splits as an extension of groups, and let $s : H \rightarrow E$ be a st-section which is a group homomorphism. Define $\mu$ and $g$ corresponding to $s$ as in \eqref{actions} and \eqref{g map}. Then we can define a Rota-Baxter extension $\mathcal{E}(\mu, g)$ of $(H, R_H)$ by $(I, R_I)$ following Theorem \ref{splthm}. It is easy to verify that the map $$\phi : (E, R_E) \rightarrow \mathcal{E}(\mu, g)$$ given by $\phi(s(h) y)=(h, y)$ is an isomorphism of Rota-Baxter groups and that the following diagram commutes:
$$\begin{CD}
0 @>>> I @>i>> E @>{{\pi} }>> H @>>> 0\\
&& @V{\text{Id}} VV@V{\phi} VV @V{\text{Id} }VV \\
0 @>>> I @>i^\prime>> \mathcal{E}(\mu, g) @>{{\pi^\prime} }>> H @>>> 0.
\end{CD}$$
\hfill $\Box$
\end{proof}
\begin{thm}
Let $\mathcal{E}_1 := 0 \to I \stackrel{}{\to} E \stackrel{\pi_1}{\to} H \to 0$ be an extension of skew left braces corresponding to some fixed skew left brace structures on $I$, $E$ and $H$ such that the additive groups of $I$, $E$ and $H$ are complete groups. Then there exist Rota-Baxter operators $R_I, R_E$ and $R_H$ on $I$, $E$ and $H$, respectively, such that the sequence $\mathcal{E}_2 := 0 \to(I, R_I) \stackrel{}{\to} (E, R_E) \stackrel{\pi_1}{\to} (H, R_H) \to 0$ defines an extension of Rota-Baxter groups and the skew brace extension defined by $\mathcal{E}_2$ is the same as $\mathcal{E}_1$.
\end{thm}
\begin{proof}
Let $R_E$ be a Rota-Baxter operator on $E$ which induces the skew left brace structure on $E$ under consideration in $\mathcal{E}_1$. Since $I$ is an ideal of the skew left brace $E$, it is itself a skew left brace. For $ x, y \in I$, we have
\begin{align*}
x \circ_{R_E} y =& x R_E(x) y R_E(x)^{-1}.
\end{align*}
Since $I$ is a complete group and every skew brace structure on a complete group is induced by a Rota-Baxter operator, we have $R_I : I \rightarrow I$ which induces the same skew brace structure on $ I.$ We have
\begin{align*}
x \circ_{R_I} y =& x \circ_{R_E} y,\\
R_I(x) y R_I(x)^{-1}=& R_E(x) y R_E (x)^{-1}.
\end{align*}
Since the above equation holds for all $x, y \in I$, we have $R_I(x)R_E( x)^{-1} \in \operatorname{Z} ( I)$, which is trivial. Hence $R_I(x)=R_E(x)$ for all $x \in I$, which shows that $ I$ is invariant under $R_E$. Now, from $\mathcal{E}_1$ we have $E/I \cong H$ as skew left braces, and the skew left brace structure on $E/I$ is induced by the Rota-Baxter operator $\overline{R}_E: E/I \rightarrow E/I$ defined by
$ \overline{R}_E(\overline{x})=\overline{R_E(x) }$, where $\overline{x}$ denotes the image of $x \in E$ in $E/I$ under the natural projection. Let the skew left brace structure on $H$ be induced by some Rota-Baxter operator $R_H$. We have
\begin{align*}
\overline{\pi}_1 (\overline{x_1 \circ_{\overline{R}_E} x_2})= & \overline{\pi}_1 (\overline{x}_1 \circ_{\overline{R}_E} \overline{x}_2),\\
\overline{\pi}_1(\overline{x_1 R_E(x_1) x_2 R_E(x_1)^{-1} }) = & \overline{\pi}_1 (\overline{x}_1) \circ_{R_H} \overline{\pi}_1( \overline{x}_2),\\
\overline{\pi}_1(\overline{x}_1 )\overline{\pi}_1(\overline{R_E(x_1)}) \overline{\pi}_1(\overline{x_2}) \overline{\pi}_1(\overline{R_E(x_1)})^{-1}=& \overline{\pi}_1 (\overline{x}_1) R_H(\overline{\pi}_1 (\overline{x}_1)) \overline{\pi}_1 (\overline{x}_2) R_H(\overline{\pi}_1 (\overline{x}_1))^{-1}.
\end{align*}
Using that the additive group of the skew left brace $H$ is a complete group, we have $\pi_1 R_E=R_H \pi_1$. This shows that $\mathcal{E}_2 := 0 \to(I, R_I) \stackrel{}{\to} (E, R_E) \stackrel{\pi_1}{\to} (H, R_H) \to 0$ is an extension of Rota-Baxter groups.
\hfill $\Box$
\end{proof}
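For instance, the symmetric groups $S_n$ with $n \neq 2, 6$ are complete, so the preceding theorem applies, in particular, whenever the additive groups of $I$, $E$ and $H$ are symmetric groups of this kind.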
Next, we give some examples of Rota-Baxter operators on groups which arise through extensions. We have used GAP \cite{GAP} to compute these examples.
\begin{example}
The symmetric group $S_3$ on $\{1, 2, 3 \}$ has a total of $8$ Rota-Baxter operators, and all of them are extensions of homomorphisms $($a Rota-Baxter operator on an abelian group is just a group homomorphism$)$ on $I=\langle(1,2,3) \rangle $ and $H=S_3/I $, which is isomorphic to the cyclic group of order $2$. Here we list all of them except the trivial operator.\\
$1)$ $R_1(x)=e$ for all $x \in I $ and $R_1(y)=(2,3)$ for all $y \notin I$.\\
$2)$ $R_2(x)=e$ for all $x \in I $ and $R_2(y)=(1,3)$ for all $y \notin I$.\\
$3)$ $R_3(x)=e$ for all $x \in I $ and $R_3(y)=(1,2)$ for all $y \notin I$.\\
$4)$ $R_4(x)=x^2$ for all $x \in I $ and $R_4(2,3)=e, R_4(1,3)=(1,2,3), R_4(1,2)=(1,3,2)$.\\
$5)$ $R_5(x)=x^2$ for all $x \in I $ and $R_5(2,3)=(1,3,2), R_5(1,3)=e, R_5(1,2)=(1,2,3)$.\\
$6)$ $R_6(x)=x^2$ for all $x \in I $ and $R_6(2,3)=(1,2,3), R_6(1,3)=(1,3,2), R_6(1,2)=e$.\\
$7)$ $R_7(x)=x^2$ for all $x \in I $ and $R_7(y)=y$ for all $y \notin I$.
\end{example}
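The counts above were obtained with GAP \cite{GAP}. As a purely illustrative cross-check (a minimal sketch in Python rather than GAP, assuming only the defining identity $R(x)R(y)=R\big(xR(x)\,y\,R(x)^{-1}\big)$ used throughout this paper), one can enumerate all $6^6$ maps $R : S_3 \rightarrow S_3$ by brute force; the script below is expected to reproduce the total of $8$ operators stated in the example.
\begin{verbatim}
from itertools import permutations, product

S3 = list(permutations(range(3)))   # elements of S3 as permutation tuples

def mul(p, q):
    # group product p*q: apply q first, then p
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    q = [0, 0, 0]
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

def is_rota_baxter(R):
    # defining identity: R(x) R(y) = R( x R(x) y R(x)^{-1} ) for all x, y
    for x in S3:
        for y in S3:
            lhs = mul(R[x], R[y])
            rhs = R[mul(mul(mul(x, R[x]), y), inv(R[x]))]
            if lhs != rhs:
                return False
    return True

# brute force over all 6^6 maps R : S3 -> S3
count = sum(1 for images in product(S3, repeat=len(S3))
            if is_rota_baxter(dict(zip(S3, images))))
print(count)   # expected: 8, the total stated in the example above
\end{verbatim}
The same brute-force approach applies in principle to $D_4$ and $Q_8$, although the search space of $8^8$ maps is considerably larger there.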
\begin{example}
Let $D_4$ be the dihedral group of order $8$, viewed as a subgroup of $S_4$, that is, $D_4=\langle(1,2,3,4), (1,4)(2,3) \rangle$. We take $I=\langle(1,2,3,4) \rangle$ and $H=D_4/I$, which is isomorphic to the cyclic group of order $2$. There are in total $52$ Rota-Baxter operators on $D_4$. We list some of them which are extensions of homomorphisms on $I$.\\
$1)$ $R_1(x)=e$ for all $x \in I $ and $R_1(y)=(1,3)(2,4)$ for all $y \notin I$.\\
$2)$ $R_2(x)=e$ for all $x \in I $ and $R_2(y)=(2,4)$ for all $y \notin I$.\\
$3)$ $R_3(x)=e$ for all $x \in I $ and $R_3(y)=(1,3)$ for all $y \notin I$.
\end{example}
\begin{example}
Let $Q_8=\langle (1,2,3,4)(5,6,7,8), (1,5,3,7)(2,8,4,6) \rangle $ be the quaternion group of order $8$, viewed as a subgroup of $S_8$. We have computed that $Q_8$ has a total of $8$ Rota-Baxter operators. Let $I=\langle (1,2,3,4)(5,6,7,8) \rangle$; we list a few which are extensions of homomorphisms on $I$.\\
$ 1)$ $R_1(x)=e $ for all $x \in I$ and $R_1(y)=(1,3)(2,4)(5,7)(6,8)$ for all $y \notin I$.\\
$ 2)$ $R_2(x)=x $ for all $x \in Q_8$ except $x=(1,6,3,8)(2,5,4,7), (1,8,3,6)(2,7,4,5)$, and
$R_2((1,6,3,8)(2,5,4,7))= (1,8,3,6)(2,7,4,5), R_2((1,8,3,6)(2,7,4,5))=(1,6,3,8)(2,5,4,7)$.
\end{example}
\section{Fundamental sequence of Wells}
Given a Rota-Baxter extension $\mathcal{E} : 0 \rightarrow I \stackrel{i}{\rightarrow} E \stackrel{\pi}{\rightarrow} H \rightarrow 0,$ here we establish various group actions on the set $\operatorname{Ext} _{\mu}(H, I)$ and construct an exact sequence connecting certain automorphism groups of $(E, R_E)$ with $\operatorname{H} _{RBE}^2(H, I)$.
Let $\operatorname{Aut} (H, R_H)$ denote the automorphism group of the Rota-Baxter group $(H, R_H)$. We define an action of $\operatorname{Aut} (H, R_H) \times \operatorname{Aut} (I, R_I)$ on $\operatorname{Ext} (H, I)$ as follows. For a pair $(\phi, \theta) \in \operatorname{Aut} (H, R_H) \times \operatorname{Aut} (I, R_I)$ of Rota-Baxter automorphisms, we define a new extension
$$\mathcal{E}^{(\phi, \theta)} : 0 \rightarrow I \stackrel{i\theta}{\longrightarrow} E \stackrel{\phi^{-1} \pi}{\longrightarrow} H \rightarrow 0.$$
Let
$\mathcal{E}_1: 0 \rightarrow I \stackrel{i}{\rightarrow} E_1 \stackrel{\pi}{\rightarrow} H \rightarrow 0$ and $\mathcal{E}_2: 0 \rightarrow I \stackrel{i'}{\rightarrow} E_2 \stackrel{\pi'}{\rightarrow} H \rightarrow 0$
be two equivalent extensions of $(H, R_H)$ by $(I, R_I)$. Then it is not difficult to show that the extensions $\mathcal{E}_1^{(\phi, \theta)}$ and $\mathcal{E}_2^{(\phi, \theta)}$ are also equivalent for any $(\phi, \theta) \in \operatorname{Aut} (H, R_H) \times \operatorname{Aut} (I, R_I)$. Thus, for a given $(\phi, \theta) \in \operatorname{Aut} (H, R_H) \times \operatorname{Aut} (I, R_I)$, we can define a map from $\operatorname{Ext} (H, I)$ to itself given by
\begin{equation}\label{act1 sb}
[\mathcal{E}] \mapsto [ \mathcal{E}^{(\phi, \theta)}].
\end{equation}
If $\phi$ and $\theta$ are identity automorphisms, then obviously $\mathcal{E}^{(\phi, \theta)} = \mathcal{E}$. It is also easy to see that
$$[\mathcal{E}] ^{(\phi_1, \theta_1) (\phi_2, \theta_2)}= \big([\mathcal{E}]^{(\phi_1, \theta_1)}\big)^{(\phi_2, \theta_2)}.$$
We conclude that the association \eqref{act1 sb} gives an action of the group $\operatorname{Aut} (H, R_H) \times \operatorname{Aut} (I, R_I)$ on the set $\operatorname{Ext} (H, I)$.
Recall that $$\operatorname{Ext} (H, I) = \bigsqcup_{\bar{\mu}} \operatorname{Ext} _{\bar{\mu}}(H, I).$$
\emph{Let $(I, R_I)$ be an $(H, R_H)$-module via a fixed action $\mu : H \rightarrow \operatorname{Aut} (I) $.} Let $\operatorname{C} _{\mu}$ denote the stabilizer of $\operatorname{Ext} _{\mu}(H, I)$ in $\operatorname{Aut} (H, R_H) \times \operatorname{Aut} (I, R_I)$. More explicitly,
$$\operatorname{C} _{\mu} = \{ (\phi, \psi) \in \operatorname{Aut} (H, R_H) \times \operatorname{Aut} (I, R_I) \mid \mu_h=\psi^{-1} \mu_{\phi (h)} \psi \ \mbox{ for all } h \in H \}.$$
Notice that $\operatorname{C} _\mu$ is a subgroup of $\operatorname{Aut} (H, R_H) \times \operatorname{Aut} (I, R_I)$, and it acts on $\operatorname{Ext} _{\mu}(H, I)$ by the same rule as given in \eqref{act1 sb}. \
Next we consider an action of $ \operatorname{C} _{\mu}$ on $\operatorname{H} ^2_{RBE}(H, I)$.
Let $(\phi, \psi) \in \operatorname{Aut} (H, R_H) \times \operatorname{Aut} (I, R_I)$ and $f \in \operatorname{Fun} (H^n, I)$, where $n \ge 1$ is an integer. Define $f^{(\phi, \psi)} : H^n \to I$ by setting
$$f^{(\phi, \psi)}(h_1, h_2, \ldots, h_n) := \psi^{-1}\big(f(\phi(h_1), \phi(h_2), \ldots, \phi(h_n))\big).$$
It is not difficult to see that the group $\operatorname{Aut} (H) \times \operatorname{Aut} (I)$ acts on the group $\operatorname{Fun} (H^n, I)$ as well as on the group $C^{n}(H, I)$, by automorphisms, given by the association
\begin{equation}\label{act2 sb}
f \mapsto f^{(\phi, \psi)}.
\end{equation}
It is also obvious that $\operatorname{C} _{\mu}$ acts on both of these sets. We are interested in the action of $\operatorname{C} _{ \mu}$ on $\operatorname{H} _{RBE}^2(H, I)$. The association \eqref{act2 sb} induces an action of $\operatorname{C} _{\mu}$ on $TC^2_{RBE} = C^2(H, I) \oplus C^1(H, I)$ by setting
\begin{equation}\label{act3 sb}
(\tau, g) \mapsto \big(\tau^{(\phi, \psi)}, g^{(\phi, \psi)}\big).
\end{equation}
\begin{lemma}\label{lemma-act3 sb}
For $(\phi, \psi) \in \operatorname{C} _{\mu}$, the following hold:
$(i)$ If $(\tau, g) \in \operatorname{Z} ^2_{RBE}(H,I)$, then $\big(\tau^{(\phi, \psi)}, g^{(\phi, \psi)}\big) \in \operatorname{Z} ^2_{RBE}(H,I)$.
$(ii)$ If $(\tau, g) \in \operatorname{B} ^2_{RBE}(H,I)$, then $\big(\tau^{(\phi, \psi)}, g^{(\phi, \psi)}\big) \in \operatorname{B} ^2_{RBE}(H,I)$.
Hence, the association \eqref{act3 sb} gives an action of $\operatorname{C} _{\mu}$ on $\operatorname{H} _{RBE}^2(H, I)$ by automorphisms, if we define
$$[(\tau, g)]^{(\phi, \psi)} =[\big(\tau^{(\phi, \psi)}, g^{(\phi, \psi)}\big)].$$
\end{lemma}
\begin{proof}
$(i)$ It is easy to see that $\tau^{(\phi, \psi)}$ satisfies \eqref{cocycle 2}. Replacing $h_1, h_2$ in \eqref{2-cocycle condn} by $\phi(h_1), \phi(h_2)$ and applying $\psi^{-1}$ to both sides, we have
\begin{align*}\label{abelianparent2}
& \psi^{-1}\Big(g(\phi(h_2))-g(\phi(h_1 \circ_{R_H} h_2)) + \mu_{R_H(\phi(h_2))}(g(\phi(h_1))) -R_I \big(\mu_{\phi( h_1 \circ h_2 R_H(h)^{-1})}(\mu_{h_2}(g(h_1))- g(h_1)\big)\Big) \notag \\
=&\psi^{-1} \Big( \tau(R_H(\phi(h_1)), R_H(\phi(h_2)))-R_I \big(\mu_{\phi(h_1 \circ h_2)}(\tau(\phi(h_1R_H(h_1)), \phi(h_2R_H(h_1))^{-1})\notag \\
&+\mu_{\phi(h_2 R_H(h_1)^{-1})}(\tau(\phi(h_1), R_H(\phi(h_1)))
+ \tau(\phi(h_2), R_H(\phi(h_1))^{-1})-\tau(R_H(\phi(h_1)), R_H(\phi(h_1))^{-1})\big)\Big).
\end{align*}
Now, using that $\phi$ and $\psi$ commute with the operators $R_H$ and $R_I$, respectively, together with the definition of $\operatorname{C} _{\mu}$, we easily see that $\big(\tau^{(\phi, \psi)}, g^{(\phi, \psi)}\big)$ satisfies \eqref{2-cocycle condn}.
$(ii)$ Let $\tau= \delta^1(\theta)$ and $g=-\Phi(\theta)$ for some $\theta \in TC^1_{RBE}$.
We have
\begin{align*}
\tau^{(\phi, \psi)}(h_1, h_2) &=\psi^{-1} \big(\theta(\phi(h_2))-\theta(\phi(h_1 \circ_{R_H} h_2))+\mu_{\phi(R_H(h_2))}(\theta(\phi(h_1))) \big)= \delta^1(\theta^{(\phi, \psi)})(h_1, h_2).
\end{align*}
Similarly, we have
\begin{align*}
g^{(\phi, \psi)}= -\Phi(\theta^{(\phi, \psi)}).
\end{align*}
\hfill $\Box$
\end{proof}
\begin{remark}
The action of $\operatorname{C} _{\mu}$ on $\operatorname{H} ^2_{RBE}(H,I)$, as defined in the preceding lemma, can be transferred to
$\operatorname{Ext} _{\mu}(H,I)$ through the bijection given in Theorem \ref{bij}. Notice that the resulting action of $\operatorname{C} _{\mu}$ on $\operatorname{Ext} _{\mu}(H,I)$ agrees with the action defined in \eqref{act1 sb}.
\end{remark}
We now consider the action of $\operatorname{H} ^2_{RBE}(H,I)$ on itself by right translation, which is faithful and transitive. Again using Theorem \ref{bij}, we can transfer this action to $\operatorname{Ext} _{\mu}(H,I) = \{[\mathcal{E}(\tau, g)] \mid [(\tau, g)] \in \operatorname{H} ^2_{RBE}(H,I)\}$. More precisely, for $[(\tau_1, g_1)] \in \operatorname{H} ^2_{RBE}(H,I)$, the action is given by
$$[\mathcal{E}(\tau, g)]^{[(\tau_1, g_1)]} =[ \mathcal{E}(\tau + \tau_1, g + g_1)],$$
for all $[\mathcal{E}(\tau, g)] \in \operatorname{Ext} _{\mu}(H,I)$. Notice that this action is again faithful and transitive.
Consider the semi-direct product $\Gamma :=\operatorname{C} _{\mu} \ltimes \operatorname{H} ^2_{RBE}(H,I)$ under the action defined in Lemma \ref{lemma-act3 sb}. We wish to define an action of $\Gamma$ on $\operatorname{Ext} _{\mu}(H,I)$. For $(c, h) \in \Gamma$ and $[\mathcal{E}] \in \operatorname{Ext} _{\mu}(H,I)$, define
\begin{equation}\label{act4 sb}
[\mathcal{E}]^{(c, h)} = ([\mathcal{E}]^c)^h.
\end{equation}
\begin{lemma}\label{wells2 sb}
The rule in \eqref{act4 sb} defines an action of $\Gamma$ on $\operatorname{Ext} _{ \mu}(H,I)$.
\end{lemma}
\begin{proof}
Notice that for $(c_1,h_1), (c_2, h_2) \in \Gamma$, $(c_1,h_1)(c_2,h_2)=(c_1c_2, h_1^{c_2} \, h_2)$. So, it is enough to show that $\big([\mathcal{E}]^h\big)^c = \big([\mathcal{E}]^c\big)^{h^c}$ for each $c \in \operatorname{C} _{\mu}$, $h \in \operatorname{H} ^2_{RBE}(H,I)$ and $[\mathcal{E}] \in \operatorname{Ext} _{\mu}(H,I)$.
We know that $[\mathcal{E}]=[\mathcal{E}(\tau, g)]$ for some $[(\tau, g)] \in \operatorname{H} ^2_{RBE}(H,I)$. Then, for $h=[(\tau_h, g_h)] \in \operatorname{H} ^2_{RBE}(H, I)$, we have
\begin{eqnarray*}
\big([\mathcal{E}]^{h}\big)^c &=&[\mathcal{E}\big((\tau+\tau_h)^c, (g+g_h)^c\big)]\\
&=& [\mathcal{E}(\tau^c+ \tau_h^c, g^c +g_{h}^c)]\\
&=& ([\mathcal{E}(\tau^c, g^c)])^{h^c}\\
&=& \big([\mathcal{E}]^c\big)^{h^c}.
\end{eqnarray*}
The proof is now complete. \hfill $\Box$
\end{proof}
\vspace{-.4cm}
Let $[\mathcal{E}] \in \operatorname{Ext} _{\mu}(H,I)$ be a fixed extension. Since the action of $\operatorname{H} ^2_{RBE}(H,I)$ on $\operatorname{Ext} _{\mu}(H,I)$ is transitive and faithful, for each $c \in \operatorname{C} _{\mu}$, there exists a unique element (say) $h_c$ in $\operatorname{H} ^2_{RBE}(H,I)$ such that
$$[\mathcal{E}]^{c} = [\mathcal{E}]^{h_c}.$$
Thus, we have a well-defined map $ \omega(\mathcal{E}): \operatorname{C} _{\mu} \rightarrow \operatorname{H} ^2_{RBE}(H,I)$ given by
\begin{equation}\label{wells-map sb}
\omega(\mathcal{E})(c)=h_c, \hspace{.2cm}\mbox{ for} \hspace{.2cm} c \in \operatorname{C} _{\mu}.
\end{equation}
\begin{lemma}\label{wells3 sb}
The map $ \omega(\mathcal{E}): \operatorname{C} _{ \mu} \rightarrow \operatorname{H} ^2_{RBE}(H,I)$ given in \eqref{wells-map sb} is a derivation with respect to the action of $\operatorname{C} _{\mu}$ on $H^2_{RBE}(H,I)$ given in \eqref{act3 sb}.
\end{lemma}
\begin{proof}
Let $c_1, c_2 \in \operatorname{C} _{\mu}$ and $\omega(\mathcal{E})(c_1c_2) = h_{c_1c_2}$. Thus, by the definition of $\omega(\mathcal{E})$, $[\mathcal{E}]^{c_1c_2} = [\mathcal{E}]^{h_{c_1c_2}}$. Using the fact that $\big([\mathcal{E}]^{h}\big)^{c} = \big([\mathcal{E}]^{c}\big)^{h^c}$ for each $c \in \operatorname{C} _{\mu}$ and $h \in \operatorname{H} ^2_{RBE}(H,I)$, we have
\begin{eqnarray*}
[\mathcal{E}]^{h_{c_1c_2}} & = &[\mathcal{E}]^{(c_1c_2)}\\
&=& \big([\mathcal{E}]^{c_1}\big)^{c_2}\\
&=& \big([\mathcal{E}]^{h_{c_1}}\big)^{c_2}\\
&= & \big([\mathcal{E}]^{c_2}\big)^{(h_{c_1})^{c_2}}\\
&=& \big([\mathcal{E}]^{h_{c_2}}\big)^{(h_{c_1})^{c_2}}\\
&=& [\mathcal{E}]^{\big(h_{c_2} + (h_{c_1})^{c_2} \big)}.
\end{eqnarray*}
Since the action of $\operatorname{H} ^2_{RBE}(H,I)$ on $\operatorname{Ext} _{\mu}(H,I)$ is faithful, it follows that $h_{c_1c_2} = (h_{c_1})^{c_2} + h_{c_2}$. This implies that $\omega(\mathcal{E})(c_1c_2)=\big(\omega(\mathcal{E})(c_1)\big)^{c_2}+\omega(\mathcal{E})(c_2)$ which shows that $\omega(\mathcal{E})$ is a derivation. \hfill $\Box$
\end{proof}
Although $\omega(\mathcal{E})$ is not a homomorphism in general, we can still talk about its set-theoretic kernel, that is,
$$\operatorname{Ker} (\omega(\mathcal{E})) = \{c \in C_{\mu} \mid [\mathcal{E}]^c=[\mathcal{E}]\}.$$
Let $\mathcal{E}: 0 \rightarrow I \rightarrow E \overset{\pi}\rightarrow H \rightarrow 0$
be an extension of a Rota-Baxter group $(H, R_H)$ by an abelian Rota-Baxter group $(I, R_I)$ such that $[\mathcal{E}] \in \operatorname{Ext} _{\mu}(H,I)$.
Let $\operatorname{Aut} _I(E, R_E)$ denote the subgroup of $\operatorname{Aut} (E, R_E)$ consisting of all automorphisms of $E$ which normalize $I$, that is,
$$\operatorname{Aut} _I(E, R_E) := \{ \gamma \in \operatorname{Aut} (E, R_E) \mid \gamma(y) \in I \mbox{ for all } y \in I\}.$$
For $\gamma \in \operatorname{Aut} _I(E, R_E)$, set $\gamma_I := \gamma |_I$, the restriction of $\gamma$ to $I$, and $\gamma_H$ to be the automorphism of $(H, R_H)$ induced by $\gamma$. More precisely, $\gamma_H(h) = \pi(\gamma(s(h)))$ for all $h \in H$, where $s$ is a st-section of $\pi$. Notice that the definition of $\gamma_H$ is independent of the choice of a st-section. Define a group homomorphism $\rho(\mathcal{E}) : \operatorname{Aut} _I(E, R_E) \rightarrow \operatorname{Aut} (H, R_H) \times \operatorname{Aut} (I, R_I)$ by
$$\rho(\mathcal{E})(\gamma)=(\gamma_H, \gamma_I).$$
\begin{prop}\label{wells4 sb}
For the extension $\mathcal{E}$, we have $\operatorname{Im} (\rho(\mathcal{E})) = \operatorname{Ker} (\omega(\mathcal{E})) \subset \operatorname{C} _{\mu}$.
\end{prop}
\begin{proof}
We first show that $\operatorname{Im} (\rho(\mathcal{E})) \subset \operatorname{C} _{\mu}$. To show this, we need to prove that $\mu_{h} = \gamma_I^{-1} \mu_{\gamma_H(h)} \gamma_I$ for all $h \in H$. Let $s$ be a st-section of $\pi$. Notice that $\gamma_I^{-1}$ is the restriction of $\gamma^{-1}$ to $I$, and that for a given $x \in E$ we have $s(\pi(x)) = x\, y_x$ for some $y_x \in I$. Now for $h \in H$ and $ y \in I$, we have
\begin{eqnarray*}
\gamma_I^{-1} \mu_{\gamma_H(h)} \gamma_I(y) &=& \gamma^{-1} \big(\mu_{\pi(\gamma(s(h)))}(\gamma(y))\big)\\
&=& \gamma^{-1}\big(s(\pi(\gamma(s(h))))^{-1} \gamma (y) s(\pi(\gamma(s(h))))\big)\\
&=& \gamma^{-1}\big( (\gamma(s(h)) y_{\gamma(s(h))})^{-1} \gamma(y) \gamma(s(h)) y_{\gamma(s(h))} \big)\\
&=& (s(h) \gamma^{-1}( y_{\gamma(s(h))}))^{-1} y s(h) \gamma^{-1}( y_{\gamma(s(h))}) \\
&=& \gamma^{-1}( y_{\gamma(s(h))})^{-1} s(h)^{-1} y s(h) \gamma^{-1}( y_{\gamma(s(h))}) \\
&=& \gamma^{-1}( y_{\gamma(s(h))})^{-1} \mu_h(y) \gamma^{-1}( y_{\gamma(s(h))}) \\
&=& \mu_h(y).
\end{eqnarray*}
Hence, $\mu_{h} = \gamma_I^{-1} \mu_{\gamma_H(h)} \gamma_I$.
Now, we prove that $\operatorname{Im} (\rho(\mathcal{E})) = \operatorname{Ker} (\omega(\mathcal{E}))$.
Let $\rho(\mathcal{E})(\gamma) = (\gamma_H, \gamma_I)$ for $\gamma \in \operatorname{Aut} _I(E, R_E)$. We know that $s(\pi(x))=x y_x $ for some $y_x \in I$. Thus, we have
\begin{eqnarray*}
\gamma_H^{-1}(\pi(\gamma(s(h)))) &=& \pi\big(\gamma^{-1}(s(\pi(\gamma(s(h)))))\big)\\
&=& \pi\big(\gamma^{-1}\big(\gamma(s(h)) y_{\gamma(s(h))}\big)\big)\\
&= & h,
\end{eqnarray*}
which implies that the diagram commutes
$$\begin{CD}
0 @>>> I @>>> E@>{{\pi} }>> H @>>> 0\\
&& @V{\text{Id}} VV @V{\gamma} VV @V{\text{Id} }VV \\
0 @>>> I @>{\gamma_I}>> E @>{\gamma_H^{-1} \pi}>> H @>>> 0.
\end{CD}$$
Hence $[\mathcal{E}]^{(\gamma_H, \gamma_I)} =[\mathcal{E}]$, which shows that $\operatorname{Im} (\rho(\mathcal{E})) \subseteq \operatorname{Ker} (\omega(\mathcal{E}))$.
Conversely, if $(\phi, \psi) \in \operatorname{Ker} (\omega(\mathcal{E}))$, then there exists a Rota-Baxter group homomorphism $\gamma: (E, R_E) \rightarrow (E, R_E)$ such that the diagram commutes
$$\begin{CD}
0 @>>> I @>>> E@>{{\pi} }>> H @>>> 0\\
&& @V{\text{Id}} VV@V{\gamma} VV @V{\text{Id} }VV \\
0 @>>> I @>{\psi}>> E @>{ \phi^{-1} \pi}>> H @>>> 0.
\end{CD}$$
It is now obvious that $\gamma \in \operatorname{Aut} _I(E, R_E)$, $\psi = \gamma_I$ and $\phi = \gamma_H$. Hence
$\rho(\mathcal{E})(\gamma)=(\phi, \psi)$, which completes the proof. \hfill $\Box$
\end{proof}
\vspace{-.5cm}
Continuing with the above setting, set $\operatorname{Aut} ^{H, I}(E, R_E) := \{\gamma \in \operatorname{Aut} (E, R_E) \mid \gamma_I = \operatorname{Id}, \gamma_H = \operatorname{Id}\}$. Notice that $\operatorname{Aut} ^{H,I}(E, R_E)$ is precisely the kernel of $\rho(\mathcal{E})$. Hence, by using Proposition \ref{wells4 sb}, we get the following.
\begin{thm}\label{wells5 sb}
Let $\mathcal{E}: 0 \rightarrow I \rightarrow E \overset{\pi}\rightarrow H\rightarrow 0$ be an extension of a Rota-Baxter group $(H, R_H)$ by an abelian Rota-Baxter group $(I, R_I)$ such that $[\mathcal{E}] \in \operatorname{Ext} _{ \mu}(H,I)$. Then we have the following exact sequence of groups
$$0 \rightarrow \operatorname{Aut} ^{H,I}(E, R_E) \rightarrow \operatorname{Aut} _I(E, R_E) \stackrel{\rho(\mathcal{E})}{\longrightarrow} \operatorname{C} _{\mu} \stackrel{\omega(\mathcal{E})}{\longrightarrow} \operatorname{H} ^2_{RBE}(H,I),$$
where $\omega(\mathcal{E})$ is, in general, only a derivation.
\end{thm}
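In particular, if $\operatorname{H} ^2_{RBE}(H,I)=0$, then $\omega(\mathcal{E})$ is trivial, so $\operatorname{Ker} (\omega(\mathcal{E}))=\operatorname{C} _{\mu}$ and, by Proposition \ref{wells4 sb}, every pair $(\phi, \psi) \in \operatorname{C} _{\mu}$ is induced by some $\gamma \in \operatorname{Aut} _I(E, R_E)$; that is, $\rho(\mathcal{E})$ maps onto $\operatorname{C} _{\mu}$.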
In the following result we give a new interpretation of the group $\operatorname{Aut} ^{H, I}(E, R_E)$.
\begin{prop}\label{wells6 sb}
Let $\mathcal{E} : 0 \rightarrow I \rightarrow E \overset{\pi}\rightarrow H \rightarrow 0$ be a Rota-Baxter extension of $(H, R_H)$ by $(I, R_I)$ such that $[\mathcal{E}] \in \operatorname{Ext} _{\mu}(H, I)$. Then $\operatorname{Aut} ^{H,I}(E, R_E) \cong \operatorname{Z} ^1_{RBE}(H,I)$.
\end{prop}
\begin{proof}
We know that every element $x \in E$ has a unique expression of the form $x=s(h)y$ for some $h \in H$ and $y \in I$. Let us define a map $\eta: \operatorname{Z} ^1_{RBE}(H,I) \rightarrow \operatorname{Aut} ^{H,I}(E, R_E)$ by
$$\eta(\lambda)(s(h)y) = s(h) \lambda(h) y,$$
where $\lambda \in \operatorname{Z} ^1_{RBE}(H,I)$.
Notice that the image of $\eta(\lambda)$ is independent of the choice of a st-section.
We claim that $\eta(\lambda) \in \operatorname{Aut} ^{H,I}(E, R_E)$. It follows from the theory of group extensions that $\eta(\lambda)$ is an automorphism of $E$ which induces the identity on $I$ and on $H$. We just check that $\eta(\lambda)$ commutes with the operator $R_E$. For $h \in H$ and $y \in I$, we have
\begin{align*}
\eta(\lambda) (R_E(s(h)y))&=\eta(\lambda)\big(s(R_H(h)) y_h R_I(\mu_{R_H(h)}(y))\big)\\
&=s(R_H(h)) \lambda(R_H(h)) y_h R_I(\mu_{R_H(h)}(y))\\
&=s(R_H(h)) R_I(\mu_{R_H(h)}(\lambda(h))) y_h R_I(\mu_{R_H(h)}(y))\\
&=s(R_H(h))y_h R_I(\mu_{R_H(h)}(\lambda(h)y)) \\
&=R_E\big(s(h)\lambda(h)y\big)\\
&=R_E( \eta(\lambda)(s(h)y))
\end{align*}
which shows that $\eta(\lambda) \in \operatorname{Aut} ^{H,I}(E, R_E)$.
We will now define a map $\zeta : \operatorname{Aut} ^{H,I}(E, R_E) \rightarrow \operatorname{Z} ^1_{RBE}(H,I)$ as follows. For $\gamma \in \operatorname{Aut} ^{H,I}(E, R_E)$ and $h \in H$, there exists a unique element (say) $y^{\gamma}_h \in I$ such that $\gamma(s(h)) = s(h) y^{\gamma}_h$. Thus, for $h \in H$, define $\zeta$ by
$$\zeta(\gamma)(h) = y^{\gamma}_h.$$
Notice that $\zeta(\gamma)$ is independent of the choice of a st-section. Further, the fact that $\zeta(\gamma)$ is a group-theoretic derivation follows from the theory of group extensions. Since $\gamma$ commutes with the Rota-Baxter operators, we have
\begin{eqnarray*}
\gamma(R_E(s(h))) &=& R_E(\gamma(s(h))),\\
\gamma(s(R_H(h)) y_h)&=& R_E(s(h)y^{\gamma}_h),\\
s(R_H(h))y^{\gamma}_{R_H(h)} y_h&=&s(R_H(h))y_h R_I(\mu_{R_H(h)}(y^{\gamma}_h)).
\end{eqnarray*}
Hence, we have $\zeta(\gamma)(R_H(h))=R_I(\mu_{R_H(h)}(\zeta(\gamma)(h)))$. We have now proved that $\zeta(\gamma) \in \operatorname{Z} ^1_{RBE}(H,I)$. Both $\eta$ and $\zeta$ are homomorphisms, and it is obvious that $\eta \zeta$ and $\zeta \eta$ are the identity on $\operatorname{Aut} ^{H,I}(E, R_E)$ and $\operatorname{Z} ^1_{RBE}(H,I)$, respectively. Hence $\operatorname{Aut} ^{H,I}(E, R_E) \cong \operatorname{Z} ^1_{RBE}(H,I)$, and the proof is complete. \hfill $\Box$
\end{proof}
\vspace{-.5cm}
We finally get the following Wells-like exact sequence for Rota-Baxter groups.
\begin{thm}\label{wells7 sb}
Let $\mathcal{E}: 0 \rightarrow I \rightarrow E \overset{\pi}\rightarrow H \rightarrow 0$ be an extension of a Rota-Baxter group $(H, R_H)$ by an abelian Rota-Baxter group $(I, R_I)$ such that $[\mathcal{E}] \in \operatorname{Ext} _{\mu}(H,I)$. Then we have the following exact sequence of groups
$$0 \rightarrow \operatorname{Z} ^1_{RBE}(H,I) \rightarrow \operatorname{Aut} _I(E, R_E) \stackrel{\rho(\mathcal{E})}{\longrightarrow} \operatorname{C} _{\mu} \stackrel{\omega(\mathcal{E})}{\longrightarrow} \operatorname{H} ^2_{RBE}(H,I),$$
where $\omega(\mathcal{E})$ is, in general, only a derivation.
\end{thm}
| {
"timestamp": "2022-12-14T02:09:20",
"yymm": "2212",
"arxiv_id": "2212.06429",
"language": "en",
"url": "https://arxiv.org/abs/2212.06429",
"abstract": "The notion of Rota-Baxter groups was recently introduced by Guo, Lang and Sheng [{\\em Adv. Math.} 387 (2021), 107834, 34 pp.] in the geometric study of Rota-Baxter Lie algebras. They are closely related to skew braces as observed by Bardakov and Gubarev. In this paper, we study extensions of Rota-Baxter groups by constructing suitable cohomology theories. Among others, we find relations with the extensions of skew braces. Given an extension of Rota-Baxter groups, we also construct a short exact sequence connecting various automorphism groups, which generalizes the Wells short exact sequence.",
"subjects": "Group Theory (math.GR)",
"title": "Extensions and automorphisms of Rota-Baxter groups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126451020108,
"lm_q2_score": 0.7248702821204019,
"lm_q1q2_score": 0.7094397111698995
} |
https://arxiv.org/abs/2203.14534 | A Combinatorial Proof of a generalization of a Theorem of Frobenius | In this article, we shall generalize a theorem due to Frobenius in group theory, which asserts that if $p$ is a prime and $p^{r}$ divides the order of a finite group, then the number of subgroups of order $p^{r}$ is $\equiv$ 1(mod $p$). Interestingly, our proof is purely combinatorial and does not use much group theory. | \section*{1.Introduction}
Although Sylow's theorems are taught in almost all undergraduate
courses in abstract algebra, a generalization due to Frobenius does not seem to be as
well known as it ought to be. Frobenius' generalization states that
if $p$ is a prime and $p^{r}$ divides the order $N$ of a finite group
$G$, the number of subgroups of $G$ of order $p^{r}$ is $\equiv 1$
(mod $p$). The special case when $p^{r}$ is the largest power of $p$
dividing $N$ is part of Sylow's third theorem. Many of the standard
texts do not mention this theorem. One source is Ian Macdonald's
`Theory of Groups' \cite{M}. In fact, a further generalization due
to Snapper \cite{S} asserts that for any subgroup $K$ of order
$p^r$ and for any $s \geq r$ where $p^s$ divides the order of $G$,
the number of subgroups of order $p^s$ containing $K$ is also
$\equiv 1$ (mod $p$). In this article, we give a new proof of a
further extension of Snapper's result that is purely combinatorial
and does not use much group theory. Thus, we have a new
combinatorial proof of Frobenius's theorem as well. \vskip 5mm
\section*{2. Main results}
\vskip 5mm
\noindent We initially started by giving a combinatorial proof of
Frobenius's result and, interestingly, our method of proof yields as
a corollary an extension of Snapper's Theorem. Our proof builds on
the famous combinatorial proof of Cauchy's theorem which asserts
that if a prime divides the order of a group, there is an element of
that prime order. \vskip 5mm
\begin{theorem}
Let $G$ be a finite group of order $N$, and let $p$ be a prime. Let
$b_{0}<b_{1}<\cdots<b_{r}$ be nonnegative integers such that
$p^{b_{r}}$ divides $N$ and $P_{b_{0}}$ be a subgroup of $G$ of
order $p^{b_{0}}$. Then the number of ordered tuples
$(P_{b_{1}},P_{b_{2}},\cdots,P_{b_{r}})$ such that each $P_{b_{i}}$ is
a subgroup of $G$ of order $p^{b_{i}}$ and
$$P_{b_{0}}\subset
P_{b_{1}}\subset \cdots \subset P_{b_{r}}$$ is $\equiv 1$ (mod $p$).
\end{theorem}
\vskip 5mm
\noindent The case $r=1$ is a Theorem due to Snapper \cite{S}, which is itself an extension of Frobenius's Theorem; the latter corresponds to the case $r=1, b_0=0$ in our Theorem.
\noindent Let us recall here the simple results in finite group
theory that we will need.
\begin{enumerate}
\item If $H$ is a subgroup of a finite group $G$ of order $N$, and the index $[G:H]$ is the smallest prime divisor of $N$, then $H$ is normal in $G$.
\item (Sylow's first theorem) If $G$ is a finite group of order $N$, $p$ a prime, $i\geq 0$ is an integer, $p^{i+1}|N$ and $P$ is a subgroup of $G$ of order $p^{i}$, then there is a subgroup $Q$ of $G$ containing $P$ of order $p^{i+1}$.
\end{enumerate}
We shall also use the following notations throughout.
\begin{enumerate}
\item For a finite set $S$, $|S|$ denotes the number of elements (cardinality) of $S$.
\item If $G$, $H$ are finite groups, $H\leq G$ means $H$ is a subgroup of $G$.
\item If $G$, $H$ are finite groups, $H\leq G$, $[G:H]$ denotes the index of $H$ in $G$.
\item If $G$ is a finite group, the order of $G$ is the number of elements of $G$.
\item If $H$ is a subgroup of a group $G$, $N_{G}(H)$ denotes the normalizer of $H$ in $G$.
\item For positive integers $a,b$, we write $a|b$ to mean $a$ divides $b$.
\end{enumerate}
\vskip 3mm
\noindent \textbf{Proof of Theorem.}\\
For ease of understanding, we divide the proof into three steps.
\vskip 3mm
\noindent \textbf{Step 1}: We tackle the case $r=1, b_0=0, b_{1}=1$
first, which just says that if $p$ divides the order of $G$, then
the number of subgroups of $G$ of order $p$ is $\equiv$ 1 (mod $p$).
\noindent Let $T$ = $\{(a_{1},a_{2},...,a_{p})\mid a_{i} \in G$
$\forall i$,
$a_{1}a_{2}...a_{p}=1\}$.\\
Observe that $|T|= N^{p-1}\equiv 0$ (mod $p$), as any choice of
$a_{1},...,a_{p-1}$ uniquely determines $a_{p}.$ Also, if not all
$a_{i}$'s are equal, then $(a_{1},a_{2},...,a_{p})\in T$ implies that the cyclic shifts $(a_{i},a_{i+1},...,a_{i+p-1})$, $i=1,2,...,p$ (indices taken modulo $p$), are $p$ distinct elements of $T$. The reason is as
follows:\\
If $(a_{i},a_{i+1},...,a_{i+p-1})= (a_{j},a_{j+1},...,a_{j+p-1})$
for some $i\neq j$, then $a_{k}=a_{k+j-i}$ for all $k$. By induction,
$a_{k}=a_{k+\alpha(j-i)}$ for any integer $\alpha$. But $i\neq j$
implies $\gcd(j-i,p)=1$, as $0<|i-j|<p$ and $p$ is a prime. So, $j-i$
is invertible modulo $p$. So any $1\leq l \leq p$ satisfies $l \equiv
1+\alpha(j-i)$ (mod $p$) for some integer $\alpha$. So,
$a_{l}=a_{1+\alpha(j-i)}=a_{1}$ for any $1\leq l \leq p.$ So,
$a_{l}$'s are all equal, which leads to a contradiction.\\
\noindent So, if $d$ is the number of elements of $G$ of order $p$, then
$0 \equiv |T| \equiv (1+d)$ (mod $p$), as there are exactly $1+d$ elements of $T$ with all $a_{i}$'s equal. So, $d \equiv -1$ (mod $p$). In
each subgroup of order $p$ there are $p-1$ elements of order $p$,
and different subgroups of order $p$ intersect only at the identity. So,
$-1\equiv d =(p-1)\cdot$(number of subgroups of order $p$) $\equiv -$(number of subgroups of order $p$) (mod $p$).
So, the number of subgroups of
order $p$ is $\equiv$ 1 (mod $p$), which finishes the proof for the case $r=1, b_{0}=0, b_{1}=1.$ \vskip 3mm
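\noindent To illustrate the counting in Step 1, take, for example, $G=S_{3}$ and $p=3$. Then $|T|=6^{2}=36\equiv 0$ (mod $3$), and the tuples in $T$ with all entries equal are $(e,e,e)$, $((123),(123),(123))$ and $((132),(132),(132))$, so $1+d=3$ and $d=2\equiv -1$ (mod $3$). Accordingly, $S_{3}$ has $d/(p-1)=1\equiv 1$ (mod $3$) subgroup of order $3$, namely $A_{3}$. \vskip 3mm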
\noindent \textbf{Step 2:} We now come to a more general setting. First, we fix some notation. Let $H$ be any group of order $M$, $p^{n}\vert M,
p^{n+1} \nmid M$, $0\leq r \leq n.$ Let $P_{r}$ be a subgroup of
order $p^{r}$ in $H$. Define
$$S(P_{r},H)=\{(P_{r+1},P_{r+2},\cdots,P_{n}) | P_{i} \leq H, |P_{i}|=p^{i}~\forall~ i,
P_{r}\leq P_{r+1}\leq \cdots \leq P_{n}\leq H\}.$$ So, $S(P_{n},H)$ is a
singleton set, by convention.\\
For $r\leq i<n$ and a subgroup $P_{i}$ of $H$ of order $p^{i}$,
there is a subgroup $P_{i+1}^{\prime}$ of $H$ of order $p^{i+1}$
containing $P_{i}$, by Sylow's theorems.
$[P_{i+1}^{\prime}:P_{i}]=p,$ which is the smallest prime divisor of
$|P_{i+1}^{\prime}|$, so $P_{i}$ is normal in $P_{i+1}^{\prime}$. Hence $P_{i+1}^{\prime}\leq N_{H}(P_{i})$. So, $\frac{P_{i+1}^{\prime}}{P_{i}}$ is a subgroup of order $p$ in $\frac{N_{H}(P_{i})}{P_{i}}$.
So, $p \vert [N_{H}(P_{i}):P_{i}].$ By the same reasoning, any
subgroup $P_{i+1}$ of $H$ of order $p^{i+1}$ containing $P_{i}$ must
be a subgroup of $N_{H}(P_{i})$, and so $\frac{P_{i+1}}{P_{i}}$ is a subgroup of order $p$ in $\frac{N_{H}(P_{i})}{P_{i}}$. Conversely, any subgroup of order $p$ in $\frac{N_{H}(P_{i})}{P_{i}}$ gives rise via pullback to a subgroup $P_{i+1}$ of $N_{H}(P_{i})$ (hence of $H$) of order $p^{i+1}$ containing $P_{i}$. So, there is a one-to-one
correspondence between such $P_{i+1}$ (subgroups of $H$ of order $p^{i+1}$ containing $P_{i}$) and the subgroups of order $p$
of the quotient group $\frac{N_{H}(P_{i})}{P_{i}}$.\\
\noindent So, the number of such $P_{i+1}$ is the number of
subgroups of order $p$ in $\frac{N_{H}(P_{i})}{P_{i}}$, which is
$\equiv$ 1 (mod $p$), in view of Step 1. So, modulo $p$, we can choose $P_{r+1}$ in 1 way; after each such choice, we can choose $P_{r+2}$ in 1 way, and so on. So, $|S(P_{r},H)| \equiv 1$ (mod $p$).\vskip 3mm
\noindent \textbf{Step 3:} We now come to the setting of our theorem. We
have $|S(P_{b_{0}},G)|\equiv 1$(mod $p$), by Step 2. Let us count
$|S(P_{b_{0}},G)|$ in another way. Let $x$ be the number of ordered
tuples as in the statement of our theorem. After choosing any of
such $x$ ordered tuples, we can choose
$(P_{b_{i}+1},...,P_{b_{i+1}-1})$ in $|S(P_{b_{i}},P_{b_{i+1}})|
\equiv$ 1(mod $p$) ways, for each $0\leq i\leq r-1$, and we can choose
$(P_{b_{r}+1},...,P_{n})$ in $|S(P_{b_{r}},G)| \equiv$ 1(mod $p$)
ways.\\
Now, $p^{n}$ is the largest power of $p$ dividing $N$, each $P_{i}$ is a subgroup of $G$ of order $p^{i}$ and
$P_{i} \leq P_{i+1}$ for all $b_{0}\leq i<n$. So, we obtain
$|S(P_{b_{0}},G)| \equiv$ $x$ (mod $p$). Hence finally we get $x
\equiv 1$ (mod $p$) which completes the proof. \vskip 3mm
\noindent \textbf{Remarks} \\The case $r=1$ is Snapper's result and
the further special case $r=1, b_{0}=0$ corresponds to Frobenius'
theorem.
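\vskip 3mm
\noindent For a concrete illustration of Frobenius' theorem, take $G=S_{4}$, so $N=24=2^{3}\cdot3$. The numbers of subgroups of $S_{4}$ of orders $2$, $4$ and $8$ are $9$, $7$ and $3$ respectively, each $\equiv 1$ (mod $2$), and the number of subgroups of order $3$ is $4\equiv 1$ (mod $3$).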
| {
"timestamp": "2022-03-29T02:43:59",
"yymm": "2203",
"arxiv_id": "2203.14534",
"language": "en",
"url": "https://arxiv.org/abs/2203.14534",
"abstract": "In this article, we shall generalize a theorem due to Frobenius in group theory, which asserts that if $p$ is a prime and $p^{r}$ divides the order of a finite group, then the number of subgroups of order $p^{r}$ is $\\equiv$ 1(mod $p$). Interestingly, our proof is purely combinatorial and does not use much group theory.",
"subjects": "Group Theory (math.GR)",
"title": "A Combinatorial Proof of a generalization of a Theorem of Frobenius",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126525529015,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.7094397107537744
} |
https://arxiv.org/abs/2002.10139 | Some results on pure ideals and trace ideals of projective modules | Let $R$ be a commutative ring with the unit element. It is shown that an ideal $I$ in $R$ is pure if and only if Ann$(f)+I=R$ for all $f\in I$. If $J$ is the trace of a projective $R$-module $M$, we prove that $J$ is generated by the ``coordinates" of $M$ and $JM = M$. These lead to a few new results and alternative proofs for some known results. | \section{Introduction and Preliminaries}
The concept of the trace ideal of a module was the subject of research by a number of mathematicians from the late 1950s until the late 1970s and has again become active in recent years (see, e.g., \cite{Beckwith}, \cite{Dao et al.}, \cite{Herbera}, \cite{Herzog}, \cite{Jondrup}, \cite{lindo}, \cite{Vasconcelos 2} and \cite{Whitehead}). This paper
deals with some results on the trace ideals of projective modules. We begin with a few results on pure ideals which are used in their comparison with trace ideals in the sequel. After a few preliminaries in the present section, in section 2 a new characterization of pure ideals is given (Theorem \ref{Remark 030}) which
is followed by some corollaries. Section 3 is devoted to the trace ideal of
projective modules. Theorem \ref{Remark 0501} gives a characterization of the trace ideal of a projective module in terms of the ideal generated by the ``coordinates" of the elements of the module. This characterization enables us to deduce some new results on the trace ideal of projective modules like the statement on the trace ideal of the tensor product of two modules for which one of them is projective (Corollary \ref{Corollary VI89}), and some alternative proofs for a few known results such as Corollary \ref{Corollary IV} which shows that the trace ideal of a projective module is a pure ideal. \\
In this paper all rings are assumed to be commutative with the unit element. \\
The following theorem will be used in a number of our results.
\begin{theorem}\label{lemma 8} Every finitely generated flat module over a local ring is free.
\end{theorem}
{\bf Proof.} See \cite[Tag 00NZ]{Johan} or \cite[Theorem 7.10]{Matsumura}. $\Box$ \\
\begin{remark}\label{Remark I} Let $F$ be a free $R-$module with a basis $(e_{k})_{k\in K}$ and $M$ an $R-$module. If $x\in F\otimes_{R}M$ then there exists a unique sequence $(m_{k})\in\bigoplus\limits_{k\in K}M$ such that $x=\sum\limits_{k}e_{k}\otimes m_{k}$. In fact, the map $\bigoplus\limits_{k\in K}M\rightarrow F\otimes_{R}M$ given by $(m_{k})\rightsquigarrow\sum\limits_{k}e_{k}\otimes m_{k}$ is an isomorphism of $R-$modules. This, in particular, implies that if $\phi:R\rightarrow S$ is a morphism of rings then $(e_{k}\otimes1)_{k\in K}$ is a basis for the free $S-$module $F\otimes_{R}S$.
\end{remark}
A projective $R-$module is also called $R-$projective. The same terminology is used for free and flat modules as well. \\
\section{Pure ideals}
An ideal $I$ of a ring $R$ is called a pure ideal if the canonical ring map
$R\rightarrow R/I$ is a flat ring map. Pure ideals are studied in commutative and non-commutative algebra. In this section we give some results on pure ideals. \\
Theorem \ref{Remark 030} translates the $R$-flatness of $R/I$ into the coprimality of the ideal $I$ with the annihilator of each of its elements.
The main motivation comes from \cite[Theorem 7.13]{Matsumura} and from basic properties of absolutely flat rings, i.e., rings $R$ over which every module is flat (see Corollary \ref{Corollary II}(iii)).
\begin{theorem}\label{Remark 030} An ideal $I$ of a ring $R$ is a pure ideal if and only if $\Ann(f)+I=R$ for all $f\in I$.
\end{theorem}
{\bf Proof.} First assume $R/I$ is $R-$flat. Suppose there is some $f\in I$ such that $\Ann(f)+I\neq R$. Thus there exists a prime ideal $\mathfrak{p}$ of $R$ such that $\Ann(f)+I\subseteq\mathfrak{p}$. Since $I\subseteq\mathfrak{p}$, the module $(R/I)_{\mathfrak{p}}$ is a nonzero finitely generated flat $R_{\mathfrak{p}}-$module, hence free by Theorem \ref{lemma 8}; therefore $I_{\mathfrak{p}}=0$. So there exists some $s\in R\setminus\mathfrak{p}$ such that $sf=0$. But then $s\in\Ann(f)\subseteq\mathfrak{p}$, which is a contradiction. \\
Conversely, let $\phi:M\rightarrow N$ be an injective morphism of $R-$modules. To prove the assertion it suffices to show that the induced map $M/IM\rightarrow N/IN$ given by $m+IM\rightsquigarrow \phi(m)+IN$ is injective. If $\phi(m)\in IN$ then we may write $\phi(m)=\sum\limits_{i=1}^{n}f_{i}x_{i}$ where $f_{i}\in I$ and $x_{i}\in N$ for all $i$. By the hypothesis, there are elements $b_{i}\in\Ann(f_{i})$ and $c_{i}\in I$ such that $1=b_{i}+c_{i}$. It follows that $1=(b_{1}+c_{1})(b_{2}+c_{2})\cdots(b_{n}+c_{n})=b+c$ where $b=b_{1}b_{2}\cdots b_{n}$ and $c\in I$. Thus $\phi(m)=b\phi(m)+c\phi(m)=\phi(cm)$. Therefore $m=cm\in IM$. $\Box$
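For a simple example illustrating the criterion of Theorem \ref{Remark 030}, let $R=\mathbb{Z}/6\mathbb{Z}$ and $I=(3)=\{0,3\}$. Then $\Ann(3)=(2)$ and $\Ann(3)+I=(2)+(3)=R$, so $I$ is a pure ideal; note that $3$ is an idempotent of $R$, in accordance with the fact that regular ideals are pure (see below). On the other hand, in $R=\mathbb{Z}/8\mathbb{Z}$ the ideal $(2)$ is not pure, since $\Ann(2)+(2)=(4)+(2)=(2)\neq R$.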
\begin{remark} Here we give an elementary proof (without using of Theorem \ref{lemma 8}) for the implication ``$\Rightarrow$" of Theorem \ref{Remark 030}. If $R/I$ is $R-$flat, then the exact sequence of $R-$modules $\xymatrix{0\ar[r]&I\ar[r]&R}$ gives rise to the exact sequence: $$\xymatrix{0\ar[r]&I/I^{2}\simeq I\otimes_{R}R/I\ar[r]&R\otimes_{R}R/I\simeq R/I.}$$
Note that the above morphism sends each pure tensor $a\otimes(r+I)\in I\otimes_{R}R/I$ into $ra+I=0$. Hence, its image is zero and so $I=I^{2}$. Now if $f\in I$ then the exact sequence $\xymatrix{0\ar[r]&Rf\ar[r]&I}$ gives us the exact sequence: $$\xymatrix{0\ar[r]&Rf\otimes_{R}R/I\ar[r]&I\otimes_{R}R/I\simeq I/I^{2}=0.}$$
Thus $R/(\Ann(f)+I)\simeq R/\Ann(f)\otimes_{R}R/I\simeq Rf\otimes_{R}R/I=0$ and so $\Ann(f)+I=R$. \\
\end{remark}
\begin{corollary}\label{Corollary I} The annihilator of a finitely generated flat module is a pure ideal.
\end{corollary}
{\bf Proof.} Let $M$ be a finitely generated flat module over a ring $R$ with the annihilator $I=\Ann_{R}(M)$. Suppose there is some $f\in I$ such that $\Ann(f)+I\neq R$. Thus there exists a prime ideal $\mathfrak{p}$ of $R$ such that $\Ann(f)+I\subseteq\mathfrak{p}$. Clearly $M_{\mathfrak{p}}$ is a nonzero finitely generated flat module over the local ring $R_{\mathfrak{p}}$. Then, by Theorem \ref{lemma 8}, $M_{\mathfrak{p}}$ is a free $R_{\mathfrak{p}}-$module. So $I_{\mathfrak{p}}=\Ann_{R_{\mathfrak{p}}}(M_{\mathfrak{p}})=0$. Hence there is some $s\in R\setminus\mathfrak{p}$ such that $sf=0$. Thus $s\in\Ann(f)\subseteq\mathfrak{p}$ which is a contradiction. Therefore by Theorem \ref{Remark 030}, $I$ is a pure ideal. $\Box$ \\
In view of Theorem \ref{Remark 030} and Corollary \ref{Corollary I}, the well-known characterization of the pure ideals could be further extended (see \cite[Tag 04PS]{Johan}).
\begin{corollary}\label{Corollary II} Let $I$ be an ideal of a ring $R$. Then the following statements are equivalent. \\
$\mathbf{(i)}$ $I$ is a pure ideal. \\
$\mathbf{(ii)}$ $I=\{f\in R: \Ann(f)+I=R\}$. \\
$\mathbf{(iii)}$ If $f\in I$ then there exists some $g\in I$ such that $f(1-g)=0$. \\
$\mathbf{(iv)}$ $\Supp(I)=\Spec(R)\setminus V(I)$. \\
$\mathbf{(v)}$ If $\mathfrak{p}$ is a prime ideal of $R$, then either $I_{\mathfrak{p}}=0$ or $I_{\mathfrak{p}}=R_{\mathfrak{p}}$.
\end{corollary}
{\bf Proof.} $\mathbf{(i)}\Rightarrow\mathbf{(iv)}:$ If $I_{\mathfrak{p}}\neq0$ then there exists some $f\in I$ such that $f/1\neq0$. This yields that $\Ann(f)\cap(R\setminus\mathfrak{p})=\emptyset$. It follows that $\Ann(f)\subseteq\mathfrak{p}$. By Theorem \ref{Remark 030}, $\Ann(f)+I=R$. Therefore $\mathfrak{p}\in\Spec(R)\setminus V(I)$. \\
$\mathbf{(iv)}\Rightarrow\mathbf{(v)}:$ Easy. \\
$\mathbf{(v)}\Rightarrow\mathbf{(i)}:$ If $\Ann(f)+I\neq R$ for some $f\in I$ then there exists a prime ideal $\mathfrak{p}$ of $R$ such that $\Ann(f)+I\subseteq\mathfrak{p}$. This yields that $f/1\neq0$ and so $I_{\mathfrak{p}}=R_{\mathfrak{p}}$. This means that $I\nsubseteq\mathfrak{p}$, a contradiction. \\
The remaining implications follow easily by applying Theorem \ref{Remark 030}. $\Box$ \\
Motivated by Theorem \ref{Remark 030}, then we call an ideal $I$ of a ring $R$ \emph{strongly pure} if $\Ann(f)+Rf=R$ for all $f\in I$. \\
Every regular ideal (i.e., generated by a set of idempotents) is a pure ideal, see \cite[Lemma 8.4]{A. Tarizadeh}. But the converse does not hold, see \cite[Example 4.7]{A. Tarizadeh 2}. However, regarding strongly pure ideals we have the following result.
\begin{proposition} Every strongly pure ideal is a regular ideal.
\end{proposition}
{\bf Proof.} Let $I$ be a strongly pure ideal of a ring $R$. If $f\in I$ then by hypothesis, there exist $g\in Rf$ and $h\in\Ann(f)$ such that $g+h=1$. Then $g(1-g)=gh=0$ and so $g$ is an idempotent. We also have $f=fg$. Thus $Rf=Rg$ is a regular ideal. Therefore $I$ is a regular ideal, because
$I=\sum\limits_{f\in I}Rf$. $\Box$ \\
The converse of the above result does not hold. In fact, if $e$ is an idempotent of a ring $R$, then $Re$ is not necessarily a strongly pure ideal. As a specific example, the regular ideal $\mathbb{Z}$ (the unit ideal of the ring $\mathbb{Z}$) is not strongly pure, since $\Ann(2)+2\mathbb{Z}=2\mathbb{Z}\neq\mathbb{Z}$. \\
In summary, in a given ring we have the following inclusions of sets: \\
Strongly pure ideals $\subseteq$ Regular ideals $\subseteq$ Pure ideals. \\
By \cite[Theorem 2.2(v)]{A. Tarizadeh 2}, a ring
$R$ is zero-dimensional if and only if for each $f\in R$ there exists a natural number $n\geqslant1$ such that $\Ann(f^{n})+Rf=R$.
Also note that if $f$ is a member of a ring $R$, then $\Ann(f)\subseteq\Ann(f^{2})\subseteq\cdots$.
Motivated by these observations, we call an ideal $I$ of a ring $R$ a \emph{quasi-pure} ideal if for each $f\in I$ there exists a natural number $n\geqslant1$ such that $\Ann(f^{n})+I=R$. Clearly an ideal $I$ is quasi-pure if and only if for each $f\in I$ there exists some $g\in I$ such that $f(1-g)$ is nilpotent. Similarly, we call an ideal $I$ of a ring $R$ \emph{strongly quasi-pure} if for each $f\in I$ there exists a natural number $n\geqslant1$ such that $\Ann(f^{n})+Rf=R$. In a reduced ring $R$, these new notions respectively coincide with the ``pure ideal'' and ``strongly pure ideal'' notions, because $\Ann(f)=\Ann(f^{n})$ for all $f\in R$ and $n\geqslant2$. If $R$ is a zero-dimensional ring, then each ideal of $R$ is strongly quasi-pure.
\begin{lemma}\label{Lemma iv quasi-pure} If each maximal ideal of a ring $R$ is quasi-pure, then $R$ is zero-dimensional.
\end{lemma}
{\bf Proof.} If $\mathfrak{p}$ is a prime ideal of $R$, then $\mathfrak{p}\subseteq\mathfrak{m}$ for some
maximal ideal $\mathfrak{m}$ of $R$. If the inclusion is strict, then we may choose some $f\in\mathfrak{m}$ such that $f\notin\mathfrak{p}$. By hypothesis, there exists some $g\in\mathfrak{m}$ such that $f(1-g)$ is nilpotent. It follows that $1-g\in\mathfrak{p}\subseteq\mathfrak{m}$, and so $1=g+(1-g)\in\mathfrak{m}$, which is a contradiction. Hence, every prime ideal of $R$ is maximal. $\Box$ \\
If $R$ is an absolutely flat (i.e., reduced zero-dimensional) ring, then each ideal of $R$ is strongly pure, because it is well known that $R$ is absolutely flat if and only if it is von-Neumann regular ring (i.e., each $f\in R$ can be written as $f=f^{2}g$ for some $g\in R$). Here ``reducedness'' is crucial. For example, in the zero-dimensional ring $R=\mathbb{Z}/8\mathbb{Z}$, the ideal $(2)=\{0,2,4,6\}$ is not pure and so it is not strongly pure. The following result shows that the reverse also holds, even under the much weaker condition.
\begin{proposition} If each maximal ideal of a ring $R$ is a pure ideal, then $R$ is absolutely flat.
\end{proposition}
{\bf Proof.} By Lemma \ref{Lemma iv quasi-pure}, $R$ is zero-dimensional. It remains to show that $R$ is reduced. If $f\in R$ is nilpotent, then $f\in\mathfrak{m}$ for all $\mathfrak{m}\in\Max(R)$. By hypothesis, $\mathfrak{m}=\Ker\pi_{\mathfrak{m}}$ where $\pi_{\mathfrak{m}}:R\rightarrow R_{\mathfrak{m}}$ is the canonical ring map. So there exists some $g_{\mathfrak{m}}\in R\setminus\mathfrak{m}$ such that $fg_{\mathfrak{m}}=0$. Clearly the ideal $\big(g_{\mathfrak{m}}: \mathfrak{m}\in\Max(R)\big)$ is the whole ring $R$. Thus we may write $1=\sum\limits_{i=1}^{n}r_{i}g_{i}$ where $g_{i}:=g_{\mathfrak{m}_{i}}$ for all $i$. It follows that $f=\sum\limits_{i=1}^{n}r_{i}fg_{i}=0$. $\Box$ \\
For further information and applications of the pure ideals we refer the interested reader to the literature, especially to the recent works \cite{A. Tarizadeh} and \cite{A. Tarizadeh 2}-\cite{A. Tarizadeh 4}. \\
\section{The trace ideal of a projective module}
In this section we present our contributions on the trace ideal of a projective module. \\
Recall that the \emph{trace ideal} of an $R-$module $M$ is the ideal $$\tr_{R}(M):=\sum\limits_{f\in M^{\ast}}f(M)$$
where $M^{\ast}=\Hom_{R}(M,R)$. \\
It is easy to see that the trace ideal of a free module $F$ is either the zero ideal or the whole ring, according to whether $F=0$ or $F\neq0$. \\
Let $M$ be a projective $R-$module and let $F$ be a free $R-$module which admits $M$ as a direct summand. Then there exists an $R-$submodule $N$ of $F$ such that $F=M+N$ and $M\cap N=0$. Let $(e_{k})_{k\in K}$ be a basis of $F$. Then each $m\in M$ can be written uniquely as $m=\sum\limits_{k}r_{m,k}e_{k}$ where $r_{m,k}=0$ for all but a finite number of indices $k$. The scalars $r_{m,k}$ are called the coordinates of $m$ with respect to the above basis. Note that the coordinates of each element of $M$ may be changed by passing from the basis $(e_{k})$ to another basis for $F$. But the following result (motivated by \cite[p. 132, Proposition 3.1]{Cartan-Eilenberg}) shows that the ideal generated by the coordinates of the members of $M$ is independent of the choice of a free module $F$ and a basis for it. In fact, it is the trace ideal of $M$.
\begin{theorem}\label{Remark 0501} Let $M$ be a projective $R-$module. Then $J=\tr_{R}(M)$ is generated by the coordinates of the elements of $M$. In particular, $JM=M$.
\end{theorem}
{\bf Proof.} Let $J'$ be the ideal of $R$ generated by the coordinates of the elements of $M$. Using the above setup, for each $i\in K$, by the universal property of free modules, there exists a (unique) morphism of $R-$modules $g_{i}:F\rightarrow R$ such that $g_{i}(e_{k})=\delta_{i,k}$, where $\delta_{i,k}$ is the Kronecker delta. Set $f_{i}:=g_{i}\circ j$, where $j:M\rightarrow F$ is the canonical injection. Then $r_{m,i}=f_{i}(m)\in J$ for all $m\in M$ and all $i\in K$. So $J'\subseteq J$. Also, each $m\in M$ satisfies $m=\sum\limits_{k}r_{m,k}x_{k}$, where $e_{k}=x_{k}+y_{k}$ with $x_{k}\in M$ and $y_{k}\in N$; indeed, $m-\sum\limits_{k}r_{m,k}x_{k}=\sum\limits_{k}r_{m,k}y_{k}\in M\cap N=0$. In particular, we have $JM=M$. If $\phi:M\rightarrow R$ is a morphism of $R-$modules then $\phi(m)=\sum\limits_{k}r_{m,k}\phi(x_{k})\in J'$, since $r_{m,k}\in J'$ for all $k\in K$.
Therefore $J=J'$. $\Box$ \\
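As a simple illustration of Theorem \ref{Remark 0501}, let $e$ be an idempotent of $R$ and $M=Re$, viewed as a direct summand of the free module $F=R$ with basis $\{1\}$. The coordinate of an element $re\in M$ with respect to this basis is $re$ itself, so the ideal generated by the coordinates of the elements of $M$ is $Re$; accordingly, $\tr_{R}(Re)=Re$ and $\tr_{R}(Re)M=M$. \\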
In the sequel we will observe that the above theorem has some interesting consequences. \\
The following result is well known, see \cite[Propositions A.1. and A. 3. and Theorem A.2. ]{Auslander-Goldman} and \cite[after Proposition 1.3]{Vasconcelos}. We give an alternative proof which seems shorter.
\begin{corollary}\label{lemma 1} The annihilator of a finitely generated projective module is generated by an idempotent element.
\end{corollary}
{\bf Proof.} If $M$ is a projective $R-$module then by Theorem \ref{Remark 0501}, $JM=M$ where $J=\tr_{R}(M)$. If, moreover, $M$ is a finitely generated $R-$module, then by the so-called ``determinant trick'', there exists some $f\in J$ such that $1-f\in I:=\Ann(M)$. Using the expression of the generators of $M$ as linear combinations of
the basis elements $e_{k}$ considered in the proof of Theorem \ref{Remark 0501}, it follows that $IJ=0$. Set $g:=1-f$. Then we have $g^{2}=g(1-f)=g$. Thus $g$ is an idempotent. If $h\in I$ then $hg=h(1-f)=h$ and so $h\in Rg$. Therefore $I=Rg$. $\Box$
\begin{corollary}\label{Corollary VII980} Let $M$ be an $R-$module and $x\in M$. Then $Rx$ is $R-$projective if and only if $\Ann(x)$ is generated by an idempotent of $R$.
\end{corollary}
{\bf Proof.} The implication ``$\Rightarrow$'' is deduced from Corollary \ref{lemma 1}. Conversely, by hypothesis there exists an idempotent $e\in R$ such that $\Ann(x)=Re$. We have the canonical isomorphisms of $R-$modules $Rx\simeq R/Re\simeq R(1-e)$ and $R(1-e)$ is a direct summand of $R$. Hence, $Rx$ is $R-$projective. $\Box$
\begin{theorem}\label{Corollary III} If $\phi:R\rightarrow S$ is a morphism of rings and $M$ is a projective $R-$module, then $\tr_{S}(M\otimes_{R}S)=\tr_{R}(M)S$.
\end{theorem}
{\bf Proof.} Clearly $\tr_{R}(M)S\subseteq\tr_{S}(M\otimes_{R}S)$. To see the reverse inclusion, using the notations of Theorem \ref{Remark 0501}, then $M\otimes_{R}S$ is a direct summand of the free $S-$module $F\otimes_{R}S$. But it is well known that the family $e_{k}\otimes1$ is a basis for this free module, see Remark \ref{Remark I} or \cite[Corollary in page 26]{Northcott}. By Theorem \ref{Remark 0501}, $\tr_{S}(M\otimes_{R}S)$ is generated by the coordinates of the pure tensors $m\otimes s$ of $M\otimes_{R}S$. We have $m\otimes s=\sum\limits_{k}\phi(r_{m,k})s(e_{k}\otimes 1)$ and each $\phi(r_{m,k})s\in\tr_{R}(M)S$. Therefore $\tr_{S}(M\otimes_{R}S)=\tr_{R}(M)S$. $\Box$ \\
The following result is well known, a sketch of its proof can be found in \cite[p. 16]{Jondrup} also see \cite[p. 269]{Vasconcelos 2}. We provide an alternative proof.
\begin{corollary}\label{Corollary IV} The trace ideal of a projective module over a commutative ring is a pure ideal.
\end{corollary}
{\bf Proof.} If $M$ is a projective $R-$module then for each prime ideal $\mathfrak{p}$ of $R$, by Theorem \ref{Corollary III}, $\tr_{R_{\mathfrak{p}}}(M_{\mathfrak{p}})=J_{\mathfrak{p}}$ where $J=\tr_{R}(M)$. But it is well known that every projective module over a local ring is free, see \cite[Theorem 2]{Kaplansky}. This yields that either $J_{\mathfrak{p}}=0$ or $J_{\mathfrak{p}}=R_{\mathfrak{p}}$. Hence, by Corollary \ref{Corollary II}, $J$ is a pure ideal. $\Box$ \\
It is important to notice that the ``commutativity'' assumption of Corollary \ref{Corollary IV} is crucial. If we drop this assumption then the assertion does not hold anymore, see \cite[Example 1.2]{Jondrup}. \\
In \cite[Proposition 1.1]{Jondrup} the converse of Corollary \ref{Corollary IV} is also proved which states that if $I$ is a pure ideal of a ring $R$ then there exists a projective $R-$module whose trace ideal is equal to $I$. \\
\begin{corollary}\label{Corollary VI89} If $M$ is a projective $R-$module then for each $R-$module $N$, $\tr_{R}(M\otimes_{R}N)=JJ'=J\cap J'$ where $J=\tr_{R}(M)$ and $J'=\tr_{R}(N)$.
\end{corollary}
{\bf Proof.} It is easy to see that $JJ'\subseteq\tr_{R}(M\otimes_{R}N)\subseteq J\cap J'$ for every two $R-$modules $M$ and $N$. If $M$ is a projective $R-$module then by Corollary \ref{Corollary IV},
$J$ is a pure ideal and so $JJ'=J\cap J'$. $\Box$
\begin{lemma}\label{Lemma I0100} Let $I$ be an ideal of a ring $R$ such that $\tr_{R}(I)=R$. Then $I$ is a finitely generated ideal.
\end{lemma}
{\bf Proof.} We may write $1=\sum\limits_{i=1}^{n}f_{i}(x_{i})$ where $f_{i}\in\Hom_{R}(I,R)$ and $x_{i}\in I$ for all $i$. If $y\in I$ then $y=\sum\limits_{i=1}^{n}f_{i}(y)x_{i}$. Hence, $I=(x_{1},\ldots,x_{n})$ is a finitely generated ideal. $\Box$ \\
Finally, we provide an alternative proof to the following result.
\begin{corollary}\cite[Proposition 1.1]{Vasconcelos 2}\label{Corollary V} Let $I$ be an ideal of a ring $R$ which is not contained in any minimal prime ideal of $R$. If $I$ is a projective $R-$module, then it is a finitely generated ideal.
\end{corollary}
{\bf Proof.} By Lemma \ref{Lemma I0100}, it suffices to show that $\tr_{R}(I)=R$. If not, then there exists a maximal ideal $\mathfrak{m}$ of $R$ such that $J:=\tr_{R}(I)\subseteq\mathfrak{m}$. There exists a minimal prime ideal $\mathfrak{p}$ of $R$ such that $\mathfrak{p}\subseteq\mathfrak{m}$. By the hypotheses, there exists some $x\in I$ such that $x\notin\mathfrak{p}$. We have $I=IJ\subseteq J$, see Theorem \ref{Remark 0501}. By Corollary \ref{Corollary IV}, $J$ is a pure ideal and so by Theorem \ref{Remark 030}, $\Ann(x)+J=R$. But $\Ann(x)\subseteq\mathfrak{p}$ and so $\Ann(x)+J\subseteq\mathfrak{m}$.
This is a contradiction, which completes the proof. $\Box$ \\
\textbf{Acknowledgements.} The author would like to give heartfelt thanks to the referee for very careful reading of the paper and for his/her excellent comments and modifications which significantly improved the paper.
| {
"timestamp": "2021-07-14T02:19:37",
"yymm": "2002",
"arxiv_id": "2002.10139",
"language": "en",
"url": "https://arxiv.org/abs/2002.10139",
"abstract": "Let $R$ be a commutative ring with the unit element. It is shown that an ideal $I$ in $R$ is pure if and only if Ann$(f)+I=R$ for all $f\\in I$. If $J$ is the trace of a projective $R$-module $M$, we prove that $J$ is generated by the ``coordinates\" of $M$ and $JM = M$. These lead to a few new results and alternative proofs for some known results.",
"subjects": "Commutative Algebra (math.AC)",
"title": "Some results on pure ideals and trace ideals of projective modules",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.978712651931994,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.709439710303697
} |
https://arxiv.org/abs/1606.03773 | Remez-type inequalities for the hyperbolic cross polynomials | In this paper we study the Remez-type inequalities for trigonometric polynomials with harmonics from hyperbolic crosses. The interrelation between the Remez and Nikolskii inequalities for individual functions and its applications are discussed. | \section{Introduction}
In many questions in analysis one deals with a problem of finding the best possible way to estimate the global norm $\|f\|_{X(\Omega)}$ in terms of local norms $\|f\|_{X(\Omega\setminus B )}$.
In some cases, this problem can be reduced to the problem for certain approximation methods, in particular, polynomials. An important result in this topic is the Remez inequality.
For algebraic polynomials $P_n$,
the Remez inequality establishes a sharp upper bound for
$\|P_n\|_{L_\infty[-1,1]}$
if the measure of the subset of $[- 1,1]$, where the modulus of the polynomial is at most $1$, is known \cite{re}.
A sharp multidimensional inequality for algebraic polynomials was obtained by Brudnyi and
Ganzburg in \cite{br-ga}.
In the case of trigonometric polynomials $
T_n(x)=\sum_{|k|\le n} c_ke^{ikx}$,\;
$c_k\in \mathbb{C},$
the Remez inequality reads as follows: for any
Lebesgue measurable set $B\subset \mathbb{T}$ we have
\begin{equation}\label{rem-or}
\|T_n\|_{L_\infty([0,2\pi))}\le C(n,|B|)
\|T_n\|_{L_\infty([0,2\pi) \setminus B)}.
\end{equation}
In \cite{er}, (\ref{rem-or}) was proved with $C(n,|B|)=\exp({4n|B|})$ for
$|B|< \pi/ 2$;
the history of the question can be found in, e.g., \cite[Ch. 5]{bor}, \cite[Sec. 3]{ga}, and \cite{lor-g}.
The constant can be sharpened as $C(n,|B|)=\exp({2n|B|})$, see \cite[Th. 3.1]{ga}.
In the case when the measure $|B|$ is large, that is, when $\pi/2<|B|<2\pi$, one has
$$ C(n,|B|) =\Big(\frac{17}{2\pi-|B|}\Big)^{2n},$$ see \cite{ga, nazarov} and references therein.
Asymptotics of the sharp constant in the Remez inequality was recently obtained in \cite{nurs} and \cite{misha} for $|B|\to 0$ and $|B|\to 2\pi$,
respectively.
Multidimensional variants of Remez' inequality for trigonometric polynomials
$$T_{\mathbf{n}}(x)=
\sum_{|k_1|\le {n_1}}
\cdots
\sum_{|k_d|\le {n_d}}
c_{\mathbf{k}} e^{i({\mathbf{k}},x)},
\quad
c_{\mathbf{k}}\in \mathbb{C}, \quad x\in \mathbb{T}^d, \quad d\ge 1,
$$
were obtained in \cite{nurs}:
$$
\|T_{\mathbf{n}}\|_{L_\infty(\mathbb{T}^d)}\le
\exp\Big( 2d \big( |B|\prod_{j=1}^d{ n_j}\big)^{1/d} \Big)
\,
\|T_{\mathbf{n}}\|_{L_\infty(\mathbb{T}^d \setminus B)}
$$
for
$$
|B|<
\Big(\frac{\pi}2\Big)^d \frac{\Big(\min\limits_{1\leq j\leq d}{n_j}\Big)^d}{\prod_{j=1}^dn_j}.
$$
This improves the previous results for
the case of $n_1=\cdots=n_d$
from \cite{prymak} and \cite{kroo}.
\vskip 0.2cm
It is worth mentioning that
Remez inequalities for exponential polynomials $$p(t)=\sum^n_{k=1}c_ke^{\lambda_kt}, \qquad c_k,\lambda_k\in\mathbb C,$$
are
sometimes called the Tur\'{a}n inequality after Paul Tur\'{a}n \cite{turan} who studied related inequalities for algebraic complex-valued polynomials.
In \cite{nazarov}, Nazarov proved that
for an interval $I\subset\mathbb R$ and a measurable set $E\subset I$ of positive Lebesgue measure one has $$\sup_{t\in I}|p(t)|\leq e^{\mu(I)
\max|{\textnormal{ Re}}\,\lambda_k|}\bigg(\frac {A\mu(I)}{\mu(E)}\bigg)^{n-1}\sup_{t\in E}|p(t)|.$$
Here,
$A >0$ is an absolute constant, independent of
$n$.
Many different applications of Remez type inequalities include extension theorems (see, e.g., \cite{Brudny, yomdin}) and polynomial inequalities
(see, e.g., \cite{ga, prymak, kroo}).
Moreover, Remez inequalities were used to obtain
the uncertainty principle relations of the type $$\|f\|^2_{L^2(\mathbb R)}\leq Ae^{A\mu(E)\mu(\Sigma)}\Big(\int_{\mathbb R\backslash E}|f|^2+\int_{\mathbb R\backslash\Sigma}|\widehat f|^2\Big)$$ for any function $f\in L^2(\mathbb R)$ (see \cite{nazarov})
and Logvinenko--Sereda type theorems (see \cite{kov, nazarov}).
In \cite{Naz1}, the authors used the Remez inequalities
to derive sharp dimension-free estimates for the distribution of values of polynomials on convex subsets of $\mathbb{R}^n$, which allows one to obtain interesting results about the distribution of zeros of random analytic functions. This topic is closely related to the well-known Kannan--Lov\'{a}sz--Simonovits lemma.
In addition, the Remez inequality turns out to be useful for dealing with
the Rademacher Fourier series $$ f(\theta)=\sum_{k\in\Bbb{Z}} \xi_k a_k e^{2\pi i k\theta}, $$ where $\xi_k$ are independent Rademacher random variables taking the values of $\pm 1$ with probability $1/2$ and the coefficient sequence $\{a_k\}\in \ell^2$. In particular, in \cite{Naz2}
the authors obtain $L^p$ bounds for the logarithm of a Rademacher Fourier series.
The Remez inequality is closely related to the so-called Bernstein-type inequalities \cite{fefferman, yo}, which have many applications in differential equations, potential theory, and dynamical systems; see \cite{yo}.
\vskip 0.3cm
The main goal of this paper is to prove the Remez-type inequalities for the hyperbolic cross trigonometric polynomials. We also establish connections between the Remez-type inequalities and the Nikol'skii-type inequalities in a general setting. We use the following
definitions of these inequalities.
\begin{Definition}\label{D1.1} We say that $f$ satisfies the Remez-type inequality with parameters $p$, $b$, $R$ (in other words, $RI(p,b,R)$ holds) if for any measurable $B\subset \Omega$ with measure $|B|\le b$
\begin{equation}\label{1.1}
\|f\|_{L_p(\Omega)} \le R\|f\|_{L_p(\Omega\setminus B)}.
\end{equation}
\end{Definition}
\begin{Definition}\label{D1.2} For $p>q$ we say that $f$ satisfies the Nikol'skii-type inequality with parameters $p$, $q$, $C$, $m$ (in other words, $NI(p,q,C,m)$ holds) if
\begin{equation}\label{1.2}
\|f\|_p \le Cm^\beta \|f\|_q,\quad \|f\|_p :=\|f\|_{L_p(\Omega)},\quad \beta:=1/q-1/p.
\end{equation}
\end{Definition}
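For example, the classical Nikol'skii inequality for univariate trigonometric polynomials $T_n$ of degree at most $n$, namely $\|T_n\|_p \le C(p,q)\,n^{\beta}\|T_n\|_q$ with $\beta=1/q-1/p$, $0<q<p\le\infty$, says precisely that every such $T_n$ satisfies $NI(p,q,C(p,q),n)$.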
In Section 2 we establish that the Remez-type inequalities and the Nikol'skii-type inequalities are closely connected. A typical result showing that $RI$ implies $NI$ is Proposition \ref{P2.1}, which gives, for all $0<q<p\le\infty$,
$$
RI(\infty,b,R) \Rightarrow NI(p,q,R^{q\beta},1/b).
$$
A typical result in the opposite direction, showing that $NI$ implies $RI$, is given by Proposition \ref{P2.2} and Remark \ref{R2.1}: we have, for $0<q<p\le\infty$,
$$
NI(p,q,C,m) \Rightarrow RI(q,(C'm)^{-1},2^{\max(1,1/q)}).
$$
It is well known and easy to derive from the interpolation inequality
$$
\|f\|_q \le \|f\|_v^\theta\|f\|_p^{1-\theta}, \quad 0<v<q<p\le\infty,\quad \theta := (1/q-1/p)(1/v-1/p)^{-1}
$$
that for $0<v<q<p\le \infty$
$$
NI(p,q,C,m) \Rightarrow NI(q,v,C',m).
$$
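Indeed, substituting the bound $\|f\|_p\le Cm^{1/q-1/p}\|f\|_q$ into the interpolation inequality gives
$$
\|f\|_q \le \|f\|_v^\theta\big(Cm^{1/q-1/p}\|f\|_q\big)^{1-\theta},
\quad\mbox{and hence}\quad
\|f\|_q \le C^{(1-\theta)/\theta} m^{1/v-1/q}\|f\|_v,
$$
since $(1/q-1/p)(1-\theta)/\theta = 1/v-1/q$; that is, $NI(q,v,C',m)$ holds with $C'=C^{(1-\theta)/\theta}$.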
This indicates that the Nikol'skii-type inequalities "propagate" from bigger values of $p$, $q$ to
smaller values of $q$, $v$. In Section 2 we note that a similar effect holds for the Remez-type inequalities. For instance, we prove that (see Lemma \ref{L2.2}) for $0<q<p<\infty$
$$
RI(p,b,R) \Rightarrow RI(q,b,R^{p/q}).
$$
The main results of the paper are in Section 3, where we study the Remez-type inequalities for the hyperbolic cross trigonometric polynomials. The above discussion shows that the strongest $RI$ are for $p=\infty$. In Section 3 we prove that (see Theorem \ref{T3.1})
the $RI(\infty,b(N),R(N))$ holds for all polynomials from $\mathcal T(N)$ (see the definition in Section 3) for $b(N) \asymp (N(\log N)^{d-1})^{-1}$ and $R(N)\asymp (\log N)^{d-1}$. We also prove that the extra factor $R(N)$ cannot be substantially improved. Namely, Proposition \ref{P3.1}
shows that even if we make a stronger assumption on $b(N) \asymp (N(\log N)^{A})^{-1}$ with arbitrarily large fixed $A$, we still cannot replace $R(N)\asymp (\log N)^{d-1}$ by
$R(N)\asymp (\log N)^{(d-1)(1-\delta)}$ with some $\delta>0$. This indicates that the Remez-type inequalities for $p=\infty$ for the hyperbolic cross polynomials differ from their univariate counterparts. It is not surprising, because it is known (see \cite{Tmon}) that the same phenomenon holds for the Bernstein and Nikol'skii inequalities. In Section 3 we establish that contrary to the case $p=\infty$ in the case
$p<\infty$ the $RI$ has the form similar to the univariate case (see Theorem \ref{T3.1p}). In particular, this implies that Theorem \ref{T3.1p} is sharp. The problem of sharpness of the $RI$ in the case $p=\infty$ is open. For instance, we do not know what is the best rate of decay
of $b(N)$, which guarantees the $RI(\infty,b(N),R(N))$ with the above $R(N)\asymp (\log N)^{d-1}$. Theorem \ref{T3.1} shows that it is sufficient to take $b(N) \asymp (N(\log N)^{d-1})^{-1}$. However, Theorem \ref{T3.2} shows that we can expect some improvements
on the rate of $b(N)$.
Here is another very interesting open problem.
{\bf Open problem.} What is the best rate of $\{b(N)\}$ to guarantee that $RI(\infty,b(N),C(d))$
holds for $\mathcal T(N)$?
This problem might be related to the discretization problem discussed in Subsections 2.4 and 3.4.
As usual, $f\ll g$ for $f,g\ge 0$ means that $f\le C g$ with $C$
independent of essential quantities, and $f\asymp g$ means that $f\ll g\ll f$.
\section{Some general inequalities}
In this section we show how the Remez-type inequality for an individual function $f$
can be used to derive the Nikol'skii-type inequalities for $f$. Also, we show how the Nikol'skii-type inequalities imply the Remez-type inequalities. These results show that the Remez-type and the Nikol'skii-type inequalities are closely related. In addition, we show that the discretization inequality (see below for the definition) implies the Remez-type inequalities.
\subsection{Remez-type inequalities}
Suppose $f$ is a continuous function on a compact set $\Omega$. Let $\mu$ be a normalized measure on $\Omega$. Assume that the following Remez-type inequality holds: for any measurable $B\subset \Omega$ with measure $|B|\le b$
\begin{equation}\label{2.1}
\|f\|_{L_\infty(\Omega)} \le R\|f\|_{L_\infty(\Omega\setminus B)}.
\end{equation}
We now show that inequality (\ref{2.1}) implies the Remez-type inequality for $f$ in $L_p(\Omega)$, $0<p<\infty$.
\begin{Lemma}\label{L2.1} We have for $0<p<\infty$
\begin{equation}\label{2.1+}
RI(\infty,b,R) \Rightarrow RI(p,b/2,2^{1/p}R).
\end{equation}
\end{Lemma}
\begin{proof}
We prove inequality (\ref{1.1}) for $B$, satisfying $|B|\le b/2$.
Take any set $B\subset \Omega$ with $|B|\le b/2$ and estimate
\begin{equation}\label{2.2}
\int_B |f|^pd\mu \le |B|\|f\|_\infty^p,\quad \|f\|_\infty := \|f\|_{L_\infty(\Omega)}.
\end{equation}
Define $B'\subset \Omega\setminus B$, $|B'|=b/2$, to be such that for all $x\in B'$ we have
\begin{equation}\label{2.3}
|f(x)| \ge \sup_{u\in (\Omega\setminus B)\setminus B'} |f(u)|.
\end{equation}
Denote $B'' := B\cup B'$. Then $|B''|\le b$. By (\ref{2.1}) and (\ref{2.3}) for all $x\in B'$
$$
|f(x)| \ge \sup_{u\in \Omega\setminus B''} |f(u)| \ge R^{-1}\|f\|_\infty.
$$
Therefore,
\begin{equation}\label{2.4}
\|f\|_\infty^p \le \frac{2}{b} \int_{B'} (R|f|)^pd\mu.
\end{equation}
Inequalities (\ref{2.2}) and (\ref{2.4}) imply
$$
\int_{\Omega} |f|^pd\mu = \int_{B} |f|^pd\mu +\int_{\Omega\setminus B} |f|^pd\mu \le
\int_{B'} (R|f|)^pd\mu + \int_{\Omega\setminus B} |f|^pd\mu
$$
\begin{equation}\label{2.5}
\le (R^p+1)\int_{\Omega\setminus B} |f|^pd\mu.
\end{equation}
In other words
\begin{equation}\label{2.6}
\|f\|_{L_p(\Omega)} \le (R^p+1)^{1/p}\|f\|_{L_p(\Omega\setminus B)} \le 2^{1/p}R \|f\|_{L_p(\Omega\setminus B)}
\end{equation}
for any $B$ with $|B|\le b/2$.
\end{proof}
\begin{Remark}\label{R2.0}
Note that the reverse implication of relation (\ref{2.1+}) is not valid. More precisely, there are no positive constants $C_1$ and $C_2$ such that
for $0<p<\infty$
$$
RI(p,b,R)
\Rightarrow RI(\infty,C_1 b,C_2 R).
$$
This follows immediately from Proposition \ref{P3.1}
and Theorem \ref{T3.1p}
below.
\end{Remark}
\begin{Lemma}\label{L2.2} We have for $0<q<p<\infty$
$$
RI(p,b,R) \Rightarrow RI(q,b,R^{p/q}).
$$
\end{Lemma}
\begin{proof} Let $p<\infty$ and let $f$ satisfy the $RI(p,b,R)$: for any $B$, $|B|\le b$ we have
\begin{equation}\label{0.1}
\int_{\Omega}|f|^pd\mu \le R^p\int_{\Omega\setminus B} |f|^pd\mu.
\end{equation}
Let $B^*$ be a set of measure $b$ such that for all $x\in B^*$ we have
$$
|f(x)| \ge \sup_{u\in \Omega\setminus B^*}|f(u)| =: T.
$$
Then, by (\ref{0.1}) for $f\neq 0$ we have $T>0$. It is clear that for any $B$, $|B|=b$ we have
$$
\int_{\Omega\setminus B} |f|^qd\mu \ge \int_{\Omega\setminus B^*} |f|^qd\mu.
$$
We estimate from below $\int_{\Omega\setminus B^*} |f|^qd\mu$. By (\ref{0.1}) we get
$$
R^p\int_{\Omega\setminus B^*} |f|^pd\mu \ge \int_{\Omega } |f|^pd\mu = \int_{\Omega\setminus B^*} |f|^pd\mu + \int_{ B^*} |f|^pd\mu
$$
and
\begin{equation}\label{0.2}
(R^p-1)\int_{\Omega\setminus B^*} |f|^pd\mu \ge \int_{ B^*} |f|^pd\mu.
\end{equation}
Using the inequalities $|f|/T \le 1$ on $\Omega\setminus B^*$ and $|f|/T \ge 1$ on $ B^*$ we write
$$
(R^p-1)\int_{\Omega\setminus B^*} (|f|/T)^qd\mu \ge (R^p-1)\int_{\Omega\setminus B^*} (|f|/T)^pd\mu
$$
and continue by (\ref{0.2})
\begin{equation}\label{0.3}
\ge \int_{ B^*} (|f|/T)^pd\mu \ge \int_{ B^*} (|f|/T)^qd\mu.
\end{equation}
This implies
$$
R^p\int_{\Omega\setminus B^*} |f|^qd\mu \ge \int_{\Omega } |f|^qd\mu,
$$
which completes the proof.
\end{proof}
Note that, as in Remark \ref{R2.0}, the Remez inequality for $p<\infty$ possesses only a ``strong monotonicity'' property with respect to the parameters.
\begin{Remark}\label{R2.2}
There are no positive constants $C_1$ and $C_2$ such that
for
$0<v<p<\infty$
\begin{equation}\label{1-5}
RI(v,b,R) \Rightarrow RI(p,C_1 b, C_2R^{v/p}).
\end{equation}
\end{Remark}
First we prove that the implication
$$
NI(p,q,C,m)
\Rightarrow
NI(q,v,C',m),\qquad 0<v<q<p\le \infty,
$$
mentioned in the introduction,
is not invertible in the following sense.
\begin{Remark}\label{R2.3}
The implication
$$
NI(q,v,C',m)
\Rightarrow NI(p,q,C,c m), \qquad c>0,
$$
does not hold in general.
\end{Remark}
Note that the case $p=\infty$ follows easily from Theorems \ref{T3.5} and \ref{T3.6} below.
In the case $p<\infty$ we will use the following sharp Nikol'skii inequalities for spherical harmonics recently obtained in \cite{dd}.
Let
$\mathcal{H}_n^d$ be the space of all spherical harmonics of degree $n$ on $\mathbb{S}^{d-1}=\{ x\in \mathbb{R}^{d}: \|x\|=1\}$,
where $\|\cdot\|$ denotes the Euclidean norm of ${\mathbb R}^d$.
In particular,
it is proved in \cite{dd} that for $d\ge 3$
\begin{description}
\item{({\em i})}
if
$1\leq v\leq 2$ and $ v<q \leq
\frac{dv'}{d-2}$,
then
\begin{equation}\label{1-6}
\sup_{\substack{Y_n\in\mathcal{H}_n^d}}\frac{\|Y_n\|_q}{\|Y_n\|_v} \asymp n^{\frac{d-2}2(\frac 1v-\frac 1q)},
\end{equation}
\item{({\em ii})} if $ \frac{2d-2}{d-2}<q <p \leq \infty$, then
\begin{equation}\label{1-7}\sup_{\substack{Y_n\in\mathcal{H}_n^d}}\frac{\|Y_n\|_p}{\|Y_n\|_q}\asymp n^{(d-1) (\frac 1q-\frac 1p)}.
\end{equation}
\end{description}
Since $1\leq v\leq 2$ always implies $ \frac{2d-2}{d} <v'$, inequalities
(\ref{1-6}) and (\ref{1-7})
give
$$
NI(q,v,C_1(q,v),m)
\nRightarrow NI(p,q,C_2(p,q),C_3 m)
$$
for
$$ \frac{2d-2}{d-2}<q< \frac{dv'}{d-2}$$
and
$$
1\leq v\le 2<q<p\le \infty.
$$
This completes the proof of Remark \ref{R2.3}.
To prove Remark \ref{R2.2}, we first note that
Proposition \ref{P2.2} implies that
$$
NI(q,v,C_1(q,v),m) \Rightarrow
RI(v,\frac{C_1'(q,v)}{m},2^{\max({1,1/v})}),
\qquad
0< v<q\le \infty.
$$
On the other hand, Proposition \ref{P2.1}
yields that
$$
RI(p,\frac{C_1''(p,v,q)}{m},C_2(p,v)) \Rightarrow NI(p,q,C_2(p,v),\frac{m}{C_1''(p,v,q)})
$$
for $0<q<p<\infty$.
Combining these estimates with
implication (\ref{1-5}) for $0<v<p<\infty$, we finally get
$$
NI(q,v,C_1(q,v),m) \Rightarrow
NI(p,q,C_2(p,v),\frac{m}{C_1''(p,v,q)})
$$
for $0<v<q<p<\infty$. This contradicts Remark \ref{R2.3}.
Thus, the proof of Remark \ref{R2.2} is now complete.
\subsection{Remez-type inequality implies Nikol'skii-type inequalities}
First, we derive from (\ref{2.1}) Nikol'skii-type inequalities for $f$. We begin with estimating
$\|f\|_\infty$ in terms of $\|f\|_q$, $0<q<\infty$. Let, as above, $B^*$ be a set of measure $b$ such that for all $x\in B^*$ we have
$$
|f(x)| \ge \sup_{u\in \Omega\setminus B^*}|f(u)|.
$$
Then by (\ref{2.1}) we have for all $x\in B^*$
$$
|f(x)| \ge R^{-1}\|f\|_\infty.
$$
Therefore,
$$
\|f\|_q^q \ge \int_{B^*}|f|^qd\mu \ge |B^*|(R^{-1}\|f\|_\infty)^q.
$$
We obtain from here
\begin{equation}\label{2.7}
\|f\|_\infty \le Rb^{-1/q}\|f\|_q.
\end{equation}
Let now $0<q<p<\infty$. We have
$$
\|f\|_p = \||f|^{1-q/p}|f|^{q/p}\|_p \le \|f\|_\infty^{1-q/p} \|f\|_q^{q/p}.
$$
Using (\ref{2.7}) we continue
$$
\le (Rb^{-1/q})^{1-q/p}\|f\|_q^{1-q/p}\|f\|_q^{q/p} = R^{q\beta}b^{-\beta}\|f\|_q,\quad \beta:=1/q-1/p.
$$
Thus we have proved the following statement.
\begin{Proposition}\label{P2.1} Remez-type inequality (\ref{2.1}) implies Nikol'skii-type
inequality
\begin{equation}\label{2.8}
\|f\|_p \le R^{q\beta}b^{-\beta}\|f\|_q,\quad \beta:=1/q-1/p
\end{equation}
for all $0<q<p\le\infty$. In other words,
$$
RI(\infty,b,R) \Rightarrow NI(p,q,R^{q\beta},1/b).
$$
\end{Proposition}
Second, we consider the case of $RI(p,b,R)$ with $p<\infty$.
\begin{Proposition}\label{P2.1p} For $0<q<p< \infty$ we have
$$
RI(p,b,R) \Rightarrow NI(p,q,R,1/b).
$$
\end{Proposition}
\begin{proof} We use the same notations as in the above proof of Lemma \ref{L2.2}.
First, we bound from above the thresholding parameter $T$. We have
$$
\|f\|_q^q \ge \int_{B^*}|f|^qd\mu \ge T^qb,
$$
which implies
\begin{equation}\label{0.4}
T \le \|f\|_q b^{-1/q}.
\end{equation}
Second, we estimate
\begin{equation}\label{0.5}
\int_{\Omega\setminus B^*} (|f|/T)^qd\mu \ge \int_{\Omega\setminus B^*} (|f|/T)^pd\mu \ge T^{-p}R^{-p}\|f\|_p^p.
\end{equation}
Relations (\ref{0.5}) and (\ref{0.4}) imply
\begin{equation}\label{0.6}
\|f\|_p^p \le R^p T^{p-q} \|f\|_q^q \le R^pb^{-(p-q)/q}\|f\|_q^p
\end{equation}
and
$$
\|f\|_p \le Rb^{-\beta}\|f\|_q.
$$
\end{proof}
Note that the statements of Propositions \ref{P2.1} and \ref{P2.1p}
are sharp in the following sense.
\begin{Remark}\label{R3.1}
For $0<q<p\le \infty$
the implication
$$
NI(p,q,R,1/b)
\Rightarrow
RI(p,C_1 b,C_2 R({q,\beta})),\qquad C_1, C_2>0,
$$
does not hold in general.
\end{Remark}
In particular, this follows from
Proposition \ref{P3.1} and
Theorem \ref{T3.5} taking $p=\infty$ and $0<q \le 1$.
\begin{Remark}\label{R3.2}
In light of Lemmas \ref{L2.1} and \ref{L2.2}, one can ask if
for $0<q<p\le \infty$
the following implication
$$
RI(q,b,R) \Rightarrow NI(p,q, C_1 R(p,q) ,C_2/b),\qquad C_1, C_2>0,
$$
holds, which is stronger than the one stated in Propositions \ref{P2.1} and \ref{P2.1p}.
Again, Proposition \ref{P3.1} and
Theorem \ref{T3.5} with $p=\infty$ and $0<q \le 1$
show that this is not the case.
\end{Remark}
\subsection{Nikol'skii inequality implies Remez inequality}
We prove here the following statement.
\begin{Proposition}\label{P2.2} Suppose that a function $f$ satisfies the Nikol'skii inequality
\begin{equation}\label{2.9}
\|f\|_p \le C(p,q)m^\beta \|f\|_q,\quad 1\le q<p\le \infty,\quad \beta:=1/q-1/p.
\end{equation}
Then there exists a constant $C'(p,q)$ such that for any set $B\in \Omega$, $|B|\le (C'(p,q)m)^{-1}$ we have
\begin{equation}\label{2.9'}
\|f\|_{L_q(\Omega)} \le 2\|f\|_{L_q(\Omega\setminus B)}.
\end{equation}
\end{Proposition}
\begin{proof} Denote $B^c := \Omega\setminus B$ and $\chi_A$ the characteristic function of a set $A$. Then
\begin{equation}\label{2.10}
\|f\|_q \le \|f\chi_{B^c}\|_q +\|f\chi_B\|_q.
\end{equation}
Further, by H{\"o}lder inequality with parameter $p/q$ and our assumption (\ref{2.9}) we obtain
\begin{equation}\label{2.11}
\|f\chi_B\|_q \le \|f\|_p |B|^\beta \le C(p,q)m^\beta |B|^\beta \|f\|_q.
\end{equation}
Making measure $|B|$ small enough to satisfy $C(p,q)m^\beta |B|^\beta\le 1/2$ we derive from
(\ref{2.11}) and (\ref{2.10}) the required inequality.
\end{proof}
\begin{Remark}\label{R2.1} Proposition \ref{P2.2} holds for all $0<q<p\le\infty$ with $2$ replaced by $2^{1/q}$ in (\ref{2.9'}) in case $q<1$.
\end{Remark}
\begin{Remark}\label{R2.2'}
Remark \ref{R3.2} shows that the reverse statement to Proposition \ref{P2.2} does not hold in general.
\end{Remark}
Propositions \ref{P2.1p} and \ref{P2.2} yield the following result.
\begin{Remark}\label{R2.3}
Let $W_m$, $m\in {\mathbb{N}}$ or $m>0$, be a collection of subclasses of $L_r(\Omega)$, $A<r<B$.
The following two conditions are equivalent:
\begin{description}
\item{\textnormal{(i)}} for any $A<q<p<B$ we have
$$
\sup_{f\in W_m} \frac{\|f\|_p}{\|f\|_q}\asymp \lambda(m)^{ \frac 1q-\frac 1p};
$$
\item{\textnormal{(ii)}} for any $A<r<B$ we have
$$
\sup_{f\in W_m} \sup \Big\{|B|: \|f\|_{L_r(\Omega)} \le R\|f\|_{L_r(\Omega\setminus B)}
\Big\}\asymp \frac{1}{\lambda(m)}.
$$
\end{description}
\end{Remark}
In many cases the Nikol'skii type inequalities are known.
Proposition \ref{P2.2} and Remark \ref{R2.1} allow us to derive the Remez type inequalities from these known results.
We illustrate this on some examples.
\begin{Example}\label{E1.1}
\textnormal{(i).}
Taking into account the results from \cite{nessel},
for each trigonometric polynomial
$$T(x)=\sum_{k\in \operatorname{supp} \widehat{T}} c_k \exp(ikx), \qquad \operatorname{supp} \widehat{T}=\{k\in \mathbb Z^d: \widehat{T}(k)=c_k\ne 0 \}
$$
we have
$$
\|T\|_{L_p(\mathbb T^d)} \le 2^{\max(1,1/p)}\|T\|_{L_p(\mathbb T^d\setminus B)},
$$
where
$$|B|\le \frac{1}{C}\left\{
\begin{array}{ll}
{1}/{N( \operatorname{supp}(\widehat{T}))}, & \hbox{$0<p< 2$;} \\
{1}/{N( p_0 \textnormal{Conv}(\operatorname{supp}(\widehat{T})))}, & \hbox{$2\le p< \infty$,}
\end{array}
\right.
$$
$N(X)$ is the number of lattice points in $X\subset{\mathbb R}^d$,
$p_0$ is the smallest integer not less than $p/2$, and $\textnormal{Conv}(\operatorname{supp}(\widehat{T}))$ denotes the convex hull of $\operatorname{supp}(\widehat{T}).$
\textnormal{(ii).}
For each trigonometric polynomial
$$T_n(x)=\sum_{k=1}^n c_k \exp(i{n_k} x), \qquad n_k\in \mathbb Z,
$$
we have
$$
\|T_n\|_{L_p(\mathbb T)} \le 2^{\max(1,1/p)}\|T_n\|_{L_p(\mathbb T\setminus B)},
$$
where
$$|B|\le \frac{1}{C} \left\{
\begin{array}{ll}
{1}/{n }, & \hbox{$0<p\le 2$;} \\
{1}/{n^{p/2}}, & \hbox{$2<p< \infty$.}
\end{array}
\right.
$$
This follows from results of Belinskii \cite{belin}.
\textnormal{(iii).} Sharp Nikol'skii inequalities for spherical harmonics given by inequalities
(\ref{1-6}) and (\ref{1-7}) imply that
for any ${Y_n\in\mathcal{H}_n^d}$,
$d\ge 3$,
we have
$$\|Y_n\|_{L_p(\mathbb{S}^{d-1})} \le 2^{\max(1,1/p)}\|Y_n\|_{L_p(\mathbb{S}^{d-1}\setminus B)},$$
where
$$|B|\le \frac{1}{C} \left\{
\begin{array}{ll}
{1}/{n^{\frac{d-2}2} }, & \hbox{$0<p< 2$;} \\
{1}/{n^{{d-1}} }, & \hbox{$\frac{2d-2}{d-2}<p< \infty$.}
\end{array}
\right.
$$
\textnormal{(iv).}
Let $\Lambda_n=\{\lambda_0<\lambda_1<\cdots<\lambda_n \}$ be a set of real numbers. Let us denote by $E(\Lambda_n)$ the collection of all linear combinations of $e^{\lambda_0 t}, e^{\lambda_1 t}, \cdots, e^{\lambda_n t}$ over ${\mathbb R}$. Then the sharp Nikol'skii inequality from \cite{erd2007}
implies that
$$
\|P\|_{L_p([a,b])} \le 2^{\max(1,1/p)}\|P\|_{L_p([a,b]\setminus B)},\qquad P\in E(\Lambda_n),\qquad 0<p<\infty,
$$
where
$$|B|\le \frac{1}{C\left(n^2+\sum_{j=1}^n|\lambda_j| \right)}.
$$
\textnormal{(v).} For functions $f$ such that $\operatorname{supp} \widehat{f}$ is compact, using \cite{nessel}, we have that
$$
\|f\|_{L_p({\mathbb R}^d)} \le 2^{\max(1,1/p)}\|f\|_{L_p({\mathbb R}^d\setminus B)},
$$
where
$$|B|\le \frac{1}{C}\left\{
\begin{array}{ll}
{1}/{\mu ( \operatorname{supp}(\widehat{f}))}, & \hbox{$0<p< 2$;} \\
{1}/{ p_0^d \mu (\textnormal{Conv}(\operatorname{supp}(\widehat{f})))}, & \hbox{$2\le p< \infty$,}
\end{array}
\right.
$$
$\mu ( X)$ is the Lebesgue measure of $X$ and $p_0$ is the smallest integer not less than $p/2$.
\end{Example}
Note that Remark \ref{R2.3} shows that if we have sharp Nikol'skii inequalities, which is the case in (ii), (iii), and (iv), then
the corresponding Remez inequalities obtained above are also sharp.
\subsection{Discretization inequality implies Remez inequality}
We prove the following theorem here.
\begin{Theorem}\label{T2.1} Let $f$ be a continuous periodic function on $\mathbb T^d$. Assume that there exists a set $X_m=\{\mathbf x^j\}_{j=1}^m \subset \mathbb T^d$ such that for all functions $f_\mathbf y(\mathbf x):= f(\mathbf x-\mathbf y)$ we have the discretization inequality
\begin{equation}\label{2.12}
\|f_\mathbf y\|_\infty \le D\max_{1\le j\le m}|f_\mathbf y(\mathbf x^j)|.
\end{equation}
Then for any $B$ with $|B|<1/m$ we have (\ref{2.1}) with $R=D$.
\end{Theorem}
\begin{proof} Consider the function
$$
g(\mathbf y) := \sum_{j=1}^m \chi_B(\mathbf x^j-\mathbf y).
$$
At each point $\mathbf y$ either $g(\mathbf y)=0$ or $g(\mathbf y)\ge 1$. We prove that for any $B$ with $|B|<1/m$ there is a point $\mathbf y^*$ such that $g(\mathbf y^*)=0$. We argue by contradiction. If such a point $\mathbf y^*$ does not exist, then $g(\mathbf y)\ge 1$ for all $\mathbf y\in\mathbb T^d$ and
$$
\int_{\mathbb T^d} gd\mu \ge 1.
$$
On the other hand
$$
\int_{\mathbb T^d} gd\mu = m|B|<1.
$$
The obtained contradiction proves the existence of $\mathbf y^*$ such that $g(\mathbf y^*)=0$. This implies in turn that for all $j$ we have $\chi_B(\mathbf x^j -\mathbf y^*)=0$ or, in other words, $\mathbf x^j -\mathbf y^* \in B^c:=\mathbb T^d\setminus B$. Next, by (\ref{2.12})
$$
\|f\|_\infty =\|f_{\mathbf y^*}\|_\infty \le D\max_{1\le j\le m}|f_{\mathbf y^*}(\mathbf x^j)| = D\max_{1\le j\le m}|f(\mathbf x^j-\mathbf y^*)| \le D \sup_{\mathbf x\in B^c}|f(\mathbf x)|.
$$
This completes the proof.
\end{proof}
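The counting argument in the proof above is easy to test numerically. A minimal Python sketch for the case $d=1$ (the point set, the set $B$, and the search grid are ad hoc choices): it takes $m$ points on the circle $[0,1)$ and a union $B$ of intervals with $|B|<1/m$, and locates a shift $\mathbf y^*$ for which all shifted points avoid $B$.
\begin{verbatim}
# Toy check of the shift argument for d = 1 (circle identified with [0,1)).
import random

random.seed(0)
m = 7
points = [random.random() for _ in range(m)]
width = 0.9 / (m * m)                              # m intervals, |B| = 0.9/m < 1/m
B = [(random.random(), width) for _ in range(m)]   # (left endpoint, length), mod 1

def in_B(t):
    return any((t - a) % 1.0 < w for a, w in B)

grid = 100000
y_star = next(y / grid for y in range(grid)
              if all(not in_B(x - y / grid) for x in points))
print("shift y* avoiding B for all points:", y_star)
# Bad shifts have measure at most m*|B| = 0.9 < 1, so a good shift always exists.
\end{verbatim}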
\section{Hyperbolic cross polynomials}
\subsection{Remez inequality}
Let
$$
\Gamma(N) :=\{\mathbf k=(k_1,\dots,k_d)\in \mathbb Z^d: \prod_{j=1}^d \bar k_j \le N, \quad \bar k_j:=\max(1,|k_j|)\}
$$
be the hyperbolic cross and
$$
\mathcal T(N):=\{f(\mathbf x), \mathbf x=(x_1,\dots,x_d): f(\mathbf x) = \sum_{\mathbf k\in \Gamma(N)}c_\mathbf k e^{i(\mathbf k,\mathbf x)}\}.
$$
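As a side illustration, the number of lattice points of $\Gamma(N)$ can be counted by brute force and compared with the growth rate $N(\log N)^{d-1}$; this cardinality is behind the bounds below and behind $\dim\mathcal T(Q_n)\asymp 2^n n^{d-1}$ used later. A minimal Python sketch (parameters are ad hoc, and the brute-force count is feasible only for small $N$ and $d$):
\begin{verbatim}
# Brute-force count of |Gamma(N)| versus N (log N)^(d-1).
import itertools, math

def hyperbolic_cross_size(N, d):
    count = 0
    for k in itertools.product(range(-N, N + 1), repeat=d):
        prod = 1
        for kj in k:
            prod *= max(1, abs(kj))
        if prod <= N:
            count += 1
    return count

for d in (1, 2, 3):
    for N in (8, 16, 32):
        size = hyperbolic_cross_size(N, d)
        ref = N * math.log(N) ** (d - 1)
        print(d, N, size, round(size / ref, 2))
# The counts grow at the rate N (log N)^(d-1), with constants depending on d.
\end{verbatim}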
\begin{Theorem}\label{T3.1} There exist two positive constants $C_1(d)$ and $C_2(d)$ such that for any set $B\subset \mathbb T^d$ of normalized measure $|B|\le (C_2(d)N(\log N)^{d-1})^{-1}$ and for any
$f\in \mathcal T(N)$ we have
\begin{equation}\label{3.1}
\|f\|_\infty \le C_1(d)(\log N)^{d-1} \sup_{\mathbf u\in \mathbb T^d \setminus B} |f(\mathbf u)|.
\end{equation}
\end{Theorem}
\begin{proof}
Denote by $\mathcal V_N$ the de la Vall{\' e}e Poussin kernel for the hyperbolic cross $\Gamma(N)$:
$\widehat{\mathcal V}_N(\mathbf k)=1$ for $\mathbf k\in \Gamma(N)$ and $\widehat{\mathcal V}_N(\mathbf k)=0$ for $\mathbf k\notin \Gamma(2^dN)$. It is known (see, for instance, \cite{Tmon}, Chapter 1) that there exists a kernel $\mathcal V_N$ with the following properties:
\begin{equation}\label{3.2}
\|\mathcal V_N\|_1 \le C'(d)(\log N)^{d-1},\qquad \|\mathcal V_N\|_\infty \le C''(d) N(\log N)^{d-1} .
\end{equation}
Then for any $f\in \mathcal T(N)$ we have $f=f\ast \mathcal V_N$, where $\ast$ means convolution.
Let $B$ be a set of small measure. We have for $f\in \mathcal T(N)$
$$
\|f\|_\infty = \|(2\pi)^{-d}\int_{\mathbb T^d} f(\mathbf u)\mathcal V_N(\mathbf x-\mathbf u)d\mathbf u\|_\infty
$$
$$
=\|(2\pi)^{-d}\int_{\mathbb T^d\setminus B} f(\mathbf u)\mathcal V_N(\mathbf x-\mathbf u)d\mathbf u + (2\pi)^{-d}\int_{B} f(\mathbf u)\mathcal V_N(\mathbf x-\mathbf u)d\mathbf u\|_\infty
$$
$$
\le \max_{\mathbf u\in \mathbb T^d\setminus B} |f(\mathbf u)|\|\mathcal V_N\|_1 + |B| \|f\|_\infty \|\mathcal V_N\|_\infty
$$
$$
\le C'(d)(\log N)^{d-1} \max_{\mathbf u\in \mathbb T^d\setminus B} |f(\mathbf u)| + C''(d) N(\log N)^{d-1} |B| \|f\|_\infty.
$$
If $C''(d) N(\log N)^{d-1} |B|\le1/2$ then
$$
\|f\|_\infty \le 2C'(d)(\log N)^{d-1} \max_{\mathbf u\in \mathbb T^d\setminus B} |f(\mathbf u)|.
$$
This completes the proof with $C_1(d) := 2C'(d)$ and $C_2(d) := 2C''(d) $.
\end{proof}
Theorem \ref{T3.1} cannot be improved in a certain sense. The following statement holds for $d\ge 2$.
\begin{Proposition}\label{P3.1} The following statement is false: There exist $\delta>0$, $A$, $c$, and $C$ such that for any $f\in \mathcal T(N)$ and any set $B\subset \mathbb T^d$ of measure
$|B| \le (cN(\log N)^A)^{-1}$ the Remez-type inequality holds
\begin{equation}\label{3.3}
\|f\|_\infty \le C(\log N)^{(d-1)(1-\delta)} \sup_{\mathbf u\in \mathbb T^d\setminus B} |f(\mathbf u)|.
\end{equation}
\end{Proposition}
\begin{proof} We use Proposition \ref{P2.1} with $p=\infty$. Our assumption (\ref{3.3}) gives (\ref{2.1}) with $b=(cN(\log N)^A)^{-1}$ and $R=C(\log N)^{(d-1)(1-\delta)}$. Therefore, by Proposition \ref{P2.1} with $p=\infty$ (see also (\ref{2.7})) we get for all $f\in\mathcal T(N)$
\begin{equation}\label{3.4}
\|f\|_\infty \le Rb^{-1/q}\|f\|_q,\quad 1\le q<\infty.
\end{equation}
It is known (see \cite{Tmon} and Theorem \ref{T3.3} below) that one must have
\begin{equation}\label{3.5}
Rb^{-1/q} \ge C(d,q) N^{1/q} (\log N)^{(d-1)(1-1/q)}.
\end{equation}
Substituting our $b$ and $R$ expressed in terms of $N$ and choosing large enough $q$ and $N$, we obtain a contradiction in (\ref{3.5}).
\end{proof}
By (\ref{2.6}) Theorem \ref{T3.1} implies the following Remez-type inequality for all $0<p<\infty$.
\begin{Corollary}\label{C3.1} There exist two positive constants $C_1(d,p)$ and $C_2(d)$ (this constant is from Theorem \ref{T3.1}) such that for any set $B\subset \mathbb T^d$ of normalized measure $|B|\le (2C_2(d)N(\log N)^{d-1})^{-1}$ and for any
$f\in \mathcal T(N)$ we have
\begin{equation}\label{3.6p}
\|f\|_p \le C_1(d,p)(\log N)^{d-1} \|f\|_{L_p( \mathbb T^d \setminus B)}.
\end{equation}
\end{Corollary}
Proposition \ref{P2.2}, Remark \ref{R2.1}, and the Nikol'skii inequalities in Theorem \ref{T3.6}, allow us to improve the above Corollary \ref{C3.1}.
\begin{Theorem}\label{T3.1p} For $0<q<\infty$ there exist two positive constants $C_1(d,q)$ and $C_2(d,q)$ such that for any set $B\subset \mathbb T^d$ of normalized measure $|B|\le (C_2(d,q)N)^{-1}$ and for any
$f\in \mathcal T(N)$ we have
\begin{equation}\label{3.7p}
\|f\|_q \le C_1(d,q) \|f\|_{L_q( \mathbb T^d \setminus B)}.
\end{equation}
\end{Theorem}
Theorems \ref{T3.1p} and \ref{T3.6} imply the following combination of Nikol'skii-type and Remez-type inequalities.
\begin{Theorem}\label{TNR1} For $0<q\le p<\infty$ there exist two positive constants $C_1=C_1(d,p,q)$ and $C_2=C_2(d,p,q)$ such that for any set $B\subset \mathbb T^d$ of normalized measure $|B|\le (C_2N)^{-1}$ and for any
$f\in \mathcal T(N)$ we have
\begin{equation}\label{3.7pq}
\|f\|_p \le C_1 N^\beta \|f\|_{L_q( \mathbb T^d \setminus B)},\quad \beta:=1/q-1/p.
\end{equation}
\end{Theorem}
\subsection{Improved Remez inequality in case $d=2$}
Proposition \ref{P3.1} shows that we cannot substantially improve on the additional factor $(\log N)^{d-1}$ in (\ref{3.1}) of Theorem \ref{T3.1}. In this subsection we will improve the bound $b$ on the measure of a set $B$. Our technique is based on Riesz products and works in the case $d=2$. We introduce some notation. Let $\mathbf s=(s_1,\dots,s_d)$ be a vector with nonnegative integer coordinates ($\mathbf s \in \mathbb Z^d_+$) and
$$
\rho(\mathbf s):= \{\mathbf k=(k_1,\dots,k_d)\in \mathbb Z^d : [2^{s_j-1}]\le |k_j|<2^{s_j},\quad j=1,\dots,d\}
$$
where $[a]$ denotes the integer part of a number $a$. Denote for a natural number $n$
$$
Q_n := \cup_{\|\mathbf s\|_1\le n}\rho(\mathbf s); \qquad \Delta Q_n := Q_n\setminus Q_{n-1} = \cup_{\|\mathbf s\|_1=n}\rho(\mathbf s)
$$
with $\|\mathbf s\|_1 = s_1+\dots+s_d$ for $\mathbf s\in \mathbb Z^d_+$. We call the set $\Delta Q_n$ a {\it hyperbolic layer}. For a set $\Lambda \subset \mathbb Z^d$ denote
$$
\mathcal T(\Lambda) := \{f\in L_1: \hat f(\mathbf k)=0, \mathbf k\in \mathbb Z^d\setminus \Lambda\} .
$$
For any two integers $a\ge 1$ and $0\le b<a$, we shall denote by $AP(a,b)$ the arithmetical progression of the form $al+b$, $l=0,1,\dots$. Set
$$
H_n(a,b) := \{\mathbf s=(s_1,s_2): \mathbf s\in \mathbb Z_+^2,\quad \|\mathbf s\|_1=n, \quad s_1,s_2\ge a,\quad s_1\in AP(a,b)\}.
$$
Define
$$
\rho'(\mathbf s) := \{\mathbf m=(m_1,m_2): [2^{s_i-2}]\le |m_i|< 2^{s_i}, i=1,2\}.
$$
Let us define the polynomials $\mathcal A_{\mathbf s} (\mathbf x)$
for $\mathbf s = (s_1 ,\dots,s_d)\in{\mathbb{N}}^d_0$
$$
\mathcal A_{\mathbf s} (\mathbf x) :=\prod_{j=1}^d\mathcal A_{s_j}(x_j),
$$
with $\mathcal A_{s_j}(x_j)$ defined as follows:
$$
\mathcal A_0 (x) := 1, \quad \mathcal A_1 (x) := \mathcal V_1 (x) - 1, \quad
\mathcal A_s (x) := \mathcal V_{2^{s-1}} (x) -\mathcal V_{2^{s-2}} (x),
\quad s\ge 2,
$$
where $\mathcal V_m$ are the de la Vall\'ee Poussin kernels.
Then for $d=2$
$$
{\mathcal A}_\mathbf s \in \mathcal T(\rho'(\mathbf s)).
$$
For a subspace $Y$ in $L_2(\mathbb T^d)$ we denote by $Y^\perp$ its orthogonal complement.
We need the following lemma on the Riesz product, which is Lemma 2.1 from \cite{TE3}.
\begin{Lemma}\label{L2.1v} Take any trigonometric polynomials $t_\mathbf s\in \mathcal T(\rho'(\mathbf s))$ and form the function
$$
\Phi(\mathbf x) := \prod_{\mathbf s\in H_n(a,b)}(1+t_\mathbf s).
$$
Then for any $a\ge 6$ and any $0\le b<a$ this function admits the representation
$$
\Phi(\mathbf x) = 1+ \sum_{\mathbf s\in H_n(a,b)} t_\mathbf s(\mathbf x) +g(\mathbf x)
$$
with $g\in \mathcal T(Q_{n+a-6})^\perp$.
\end{Lemma}
We recall that we restrict ourselves to the case $d=2$. Denote
$$
t_\mathbf s := {\mathcal A}_\mathbf s/M,\qquad M:= \max_{\mathbf s\in H_n(a,b)} \|{\mathcal A}_{\mathbf s}\|_\infty\asymp 2^n.
$$
Consider the Riesz product
$$
\Phi := \prod_{\mathbf s\in H_n(a,b)} \left(1+\frac{it_\mathbf s}{\sqrt{N}}\right),\quad N:= |H_n(a,b)|.
$$
Then it is easy to derive from the inequality $\left|1+\frac{it_\mathbf s}{\sqrt{N}}\right| \le \left(1+\frac{1}{N}\right)^{1/2}$ that (see Remark 2.1 from \cite{TE4})
$$
|\Phi| \le C.
$$
Moreover, by Lemma \ref{L2.1v} we have
$$
\Phi = 1+\frac{i}{\sqrt{N}}\sum_{\mathbf s\in H_n(a,b)} t_\mathbf s + w,\quad w \in \mathcal T(Q_{n+a-6})^\perp.
$$
Thus,
\begin{equation}\label{3.6}
\| \sum_{\mathbf s \in H_n(a,b)} t_\mathbf s + N^{1/2}\text{Im}(w)\|_\infty \le CN^{1/2}.
\end{equation}
We now bound $\|w\|_1$. We introduce some more notations. Denote
$$
H_n^k := \{(\mathbf s^1,\dots,\mathbf s^k): \mathbf s^j\in H_n(a,b), j=1,\dots,k, \quad \text{are distinct} \}
$$
$$
h_n := AP(a,b)\cap [a,n-a].
$$
We have
$$
w = \sum_{k=2}^N \left(\frac{i}{N^{1/2}M}\right)^k\sum_{(\mathbf s^1,\dots,\mathbf s^k)\in H_n^k} \prod_{j=1}^k {\mathcal A}_{\mathbf s^j}
$$
$$
= \sum_{k=2}^N \left(\frac{i}{N^{1/2}M}\right)^k\sum_{s^1_1 \in h_n} \sum_{s^2_1\in h_n: s^2_1 <s^1_1} \dots \sum_{s^k_1\in h_n: s^k_1 <s^{k-1}_1} \prod_{j=1}^k {\mathcal A}_{\mathbf s^j}.
$$
Therefore,
\begin{equation}\label{3.7}
\|w\|_1 \le \sum_{k=2}^N \left(\frac{1}{N^{1/2}M}\right)^k\sum_{s^1_1 \in h_n} \sum_{s^2_1\in h_n: s^2_1 <s^1_1} \dots \sum_{s^k_1\in h_n: s^k_1 <s^{k-1}_1} \|\prod_{j=1}^k {\mathcal A}_{\mathbf s^j}\|_1.
\end{equation}
Next,
$$
\|\prod_{j=1}^k {\mathcal A}_{\mathbf s^j}\|_1 \le \|{\mathcal A}_{s^1_1}(x_1)\|_1 \prod_{j=2}^k \|{\mathcal A}_{s^j_1}(x_1)\|_\infty
\|{\mathcal A}_{s^k_2}(x_2)\|_1 \prod_{j=1}^{k-1} \|{\mathcal A}_{s^j_2}(x_2)\|_\infty
$$
\begin{equation}\label{3.8}
\le C2^{s^2_1+\cdots+s^k_1+n-s^1_1 +\cdots+ n-s^{k-1}_1}.
\end{equation}
Inequalities (\ref{3.7}) and (\ref{3.8}) imply
$$
\|w\|_1 \le C\sum_{k=2}^N \left(\frac{1}{N^{1/2}M}\right)^k N2^{n(k-1)} \ll 2^{-n}.
$$
Therefore,
\begin{equation}\label{3.9}
\| M N^{1/2} \text{Im}(w)\|_1 \ll N^{1/2}.
\end{equation}
Bounds (\ref{3.9}) and (\ref{3.6}) with $a=6$ imply that there exists a function $t\in \mathcal T(Q_n)^\perp$ such that
\begin{equation}\label{3.10}
\| \sum_{\mathbf s\in H_n(a,b)} {\mathcal A}_\mathbf s -t\|_1 \ll n,
\end{equation}
and
\begin{equation}\label{3.11}
\| \sum_{\mathbf s\in H_n(a,b)} {\mathcal A}_\mathbf s -t\|_\infty \ll n^{1/2}2^n.
\end{equation}
Consider
$$
\Delta \mathcal V_n := \sum_{\mathbf s: n\le \|\mathbf s\|_1\le n+2} {\mathcal A}_\mathbf s.
$$
Note that for any $f\in \mathcal T(\Delta Q_n)$ we have $f\ast \Delta \mathcal V_n =f$.
The above inequalities (\ref{3.10}) and (\ref{3.11}) imply the following assertion.
\begin{Lemma}\label{L3.1} There exists $T\in \mathcal T(Q_n)^\perp$ such that
$$
\|\Delta \mathcal V_n - T\|_1 \ll n,\quad \|\Delta \mathcal V_n - T\|_\infty \ll n^{1/2} 2^n.
$$
\end{Lemma}
In the same way as Theorem \ref{T3.1} was derived from inequalities (\ref{3.2}), the following theorem can be derived from Lemma \ref{L3.1}.
\begin{Theorem}\label{T3.2} Let $d=2$. There exist two positive constants $C_1$ and $C_2$ such that for any set $B\subset \mathbb T^2$ of normalized measure $|B|\le (C_22^nn^{1/2})^{-1}$ and for any
$f\in \mathcal T(\Delta Q_n)$ we have
\begin{equation}\label{3.12}
\|f\|_\infty \le C_1n \sup_{\mathbf u\in \mathbb T^2 \setminus B} |f(\mathbf u)|.
\end{equation}
\end{Theorem}
\subsection{The Nikol'skii inequalities}
The following two theorems are from \cite{Tmon}, Ch.1, Section 2.
\begin{Theorem}\label{T3.3} Suppose that $1 \le q < \infty $. Then
$$
\sup_{f\in \mathcal T(N)}\|f\|_{\infty}
/ \|f\|_q\asymp N^{1/q}(\log N)^{(d-1)(1-1/q)}.
$$
\end{Theorem}
\begin{Theorem}\label{T3.4} Suppose that $1 \le q \le p < \infty $. Then
$$
\sup_{f\in \mathcal T(N)}\|f \|_p /
\|f\|_q\asymp N^{1/q-1/p}.
$$
\end{Theorem}
In this subsection we extend the above two theorems to the range of parameters $0<q<p\le \infty$. We begin with the case $p=\infty$.
\begin{Theorem}\label{T3.5} Suppose that $0< q < \infty $. Then
$$
\sup_{f\in \mathcal T(N)}\|f\|_{\infty}
/ \|f\|_q\asymp N^{1/q}(\log N)^{(d-1)(1-1/q)_+}.
$$
\end{Theorem}
\begin{proof} We prove the upper bound in the case $0<q<1$. The corresponding lower bounds in this case follow from the univariate case. We derive the required inequality from Theorem \ref{T3.3} with $q=1$. Let $f\in \mathcal T(N)$. Then
$$
\|f\|_1 = \| |f|^{1-q} |f|^q\|_1 \le \||f|^{1-q}\|_\infty \||f|^q\|_1 = \|f\|_\infty^{1-q} \|f\|_q^q.
$$
Applying Theorem \ref{T3.3} with $q=1$ we continue
$$
\le C(d)N \|f\|_1 \|f\|_\infty^{-q} \|f\|_q^q.
$$
This implies
\begin{equation}\label{3.13}
\|f\|_\infty^q \le C(d)N\|f\|_q^q \quad \text{and}\quad \|f\|_\infty \le (C(d)N)^{1/q}\|f\|_q,
\end{equation}
which completes the proof.
\end{proof}
\begin{Theorem}\label{T3.6} Suppose that $0 < q < p < \infty $. Then
$$
\sup_{f\in \mathcal T(N)}\|f \|_p /
\|f\|_q\asymp N^{1/q-1/p}.
$$
\end{Theorem}
\begin{proof} We prove the upper bound in the case $0<q<1$.
We have
$$
\|f\|_p = \||f|^{1-q/p}|f|^{q/p}\|_p \le \|f\|_\infty^{1-q/p} \|f\|_q^{q/p}.
$$
Using Theorem \ref{T3.5} we continue
$$
\le (C(d)N)^{(1-q/p)/q}\|f\|_q^{1-q/p}\|f\|_q^{q/p} = (C(d)N)^{\beta}\|f\|_q,\quad \beta:=1/q-1/p.
$$
The sharpness of the Nikol'skii inequality, i.e., the part ``$\gg$'', follows from
the one-dimensional Jackson kernel example:
$$T(x)=\left(\frac{\sin \frac{nx}2}{n \sin\frac{x}2}\right)^{2r}, \qquad r\in {\mathbb{N}}.$$
See \cite[\S 4.9]{timan} for $1\le p \le \infty$; in the case $0<p<1$ it is enough to take $r$ large enough ($r>\frac1{2p}$).
\end{proof}
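The growth rate $N^{1/q-1/p}$ and the role of the Jackson kernel can also be checked numerically in the univariate case. A minimal Python sketch (Riemann-sum norms; grid size and parameters are ad hoc) computes $\|T\|_p/\|T\|_q$ for the Jackson kernel and compares it with $n^{1/q-1/p}$:
\begin{verbatim}
# Ratio ||T||_p / ||T||_q for the univariate Jackson kernel versus n^(1/q-1/p).
import math

def lp_norm(vals, p, weight):
    return (sum(abs(v) ** p for v in vals) * weight) ** (1.0 / p)

def jackson_values(n, r, m=20000):
    xs = [(j + 0.5) * 2 * math.pi / m - math.pi for j in range(m)]
    vals = []
    for x in xs:
        s = math.sin(x / 2)
        vals.append((math.sin(n * x / 2) / (n * s)) ** (2 * r)
                    if abs(s) > 1e-12 else 1.0)
    return vals, 1.0 / m                           # normalized measure dx/(2 pi)

p, q, r = 4.0, 1.0, 2
for n in (8, 16, 32, 64):
    vals, w = jackson_values(n, r)
    ratio = lp_norm(vals, p, w) / lp_norm(vals, q, w)
    print(n, round(ratio, 3), round(n ** (1.0 / q - 1.0 / p), 3))
# The computed ratios grow at the rate n^(1/q-1/p), up to a bounded factor.
\end{verbatim}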
\subsection{Discretization}
An operator $T_n$ with the following properties was constructed in \cite{T93}. The operator $T_n$ has the form
$$
T_n(f) = \sum_{j=1}^m f(\mathbf x^j) \psi_j(\mathbf x),\quad m\le c(d)2^n n^{d-1},\quad \psi_j \in \mathcal T(Q_{n+d})
$$
and
\begin{equation}\label{3.21}
T_n(f) =f,\quad f\in \mathcal T(Q_n),
\end{equation}
\begin{equation}\label{3.22}
\|T_n\|_{L_\infty\to L_\infty} \asymp n^{d-1}.
\end{equation}
Properties (\ref{3.21}) and (\ref{3.22}) imply that all $f\in\mathcal T(Q_n)$ satisfy the discretization inequality (see \cite{KT} and \cite{DTU}, subsection 2.5)
\begin{equation}\label{3.23}
\|f\|_\infty \le C(d)n^{d-1} \max_{1\le j\le m} |f(\mathbf x^j)|.
\end{equation}
Note that Theorem \ref{T2.1} and the discretization inequality (\ref{3.23}) give another proof of Theorem \ref{T3.1}.
Theorem \ref{T2.1} and Proposition \ref{P3.1} imply the following assertion.
\begin{Proposition}\label{P3.2} The following statement is false: There exist $\delta>0$, $A$, $c$, and $C$ such that there exists a set $X_m=\{\mathbf x^j\}_{j=1}^m$ with $m\le c2^nn^A$, which provides the discretization inequality for $\mathcal T(Q_n)$:
$$
\|f\|_\infty \le Cn^{(d-1)(1-\delta)} \max_{1\le j\le m} |f(\mathbf x^j)|,\quad f\in\mathcal T(Q_n).
$$
\end{Proposition}
Thus, the extra factor $n^{d-1}$ in the discretization inequality for $\mathcal T(Q_n)$ cannot be substantially improved if we limit ourselves to $m\ll 2^nn^A$ points. It is proved in \cite{KT} (see \cite{DTU}, subsection 2.5, for a discussion) that in the case $d=2$, in order to drop the extra factor $n$ in (\ref{3.23}), we need to use at least $2^{n(1+c_0)}$, $c_0>0$, points. It is clear that a necessary condition for the discretization inequality (\ref{3.23}) to hold with some extra factor is $m\ge |Q_n| =\dim\mathcal T(Q_n) \asymp 2^n n^{d-1}$.
Therefore, the way from the discretization inequality to the Remez inequality, provided by Theorem \ref{T2.1}, cannot give a better bound than $b(Q_n)\asymp (2^n n^{d-1})^{-1}$.
However, the direct proof of the Remez inequality in Theorem \ref{T3.2} gives for $d=2$ a better bound $b(\Delta Q_n) \asymp (2^n n^{1/2})^{-1}$.
{\bf Acknowledgements.}
This research was carried out when the authors visited the Centre de Recerca Matem\`{a}tica in Barcelona and the Hausdorff Research Institute for Mathematics in Bonn.
The first named author was partially supported by the Clay Mathematics Institute grant.
The second named author was partially supported by
MTM 2014-59174-P, 2014 SGR 289, and the Hausdorff Research Institute.
| {
"timestamp": "2016-06-14T02:13:23",
"yymm": "1606",
"arxiv_id": "1606.03773",
"language": "en",
"url": "https://arxiv.org/abs/1606.03773",
"abstract": "In this paper we study the Remez-type inequalities for trigonometric polynomials with harmonics from hyperbolic crosses. The interrelation between the Remez and Nikolskii inequalities for individual functions and its applications are discussed.",
"subjects": "Classical Analysis and ODEs (math.CA); Numerical Analysis (math.NA)",
"title": "Remez-type inequalities for the hyperbolic cross polynomials",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126500692714,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.7094397089534649
} |
https://arxiv.org/abs/1102.4302 | Non-Archimedean Unitary Operators | We describe a subclass of the class of normal operators on Banach spaces over non-Archimedean fields (A. N. Kochubei, J. Math. Phys. 51 (2010), article 023526) consisting of operators whose properties resemble those of unitary operators. In particular, an analog of Stone's theorem about one-parameter groups of unitary operators is proved. | \section{INTRODUCTION}
{\bf 1.1}. In a previous paper \cite{K1}, we found a class of
non-Archimedean normal operators, bounded linear operators on Banach
spaces over non-Archimedean fields possessing orthogonal, in the
non-Archimedean sense, spectral decompositions. It is a natural problem
now to find out what operators in the non-Archimedean setting should be
seen as unitary ones. Classically, the correspondence between
selfadjoint and unitary operators extends, via the well-known
functional calculus, the correspondence $\lambda \mapsto e^{i\lambda }$
between real numbers and complex numbers from the unit circle.
There is no direct analog of the function $e^{i\lambda }$ in the
non-Archimedean case. However we will see that its natural counterpart
in the context of non-Archimedean operator theory is the function
$\lambda \mapsto z^{\lambda }$ where $\lambda$ runs over the ring $\Zp$ of
$p$-adic integers, $z$ belongs to the group of principal units of a
non-Archimedean field $K$ (in another language, $z$ is a positive
element of $K$ \cite{Sch}). The image of this function is also contained in
the group of principal units. This prompts us to define a non-Archimedean
unitary operator as an operator of the form $I+V$ where $I$ is the unit
operator, $V$ is a normal operator in the sense of \cite{K1}, $\|V\| <
1$. The normality assumption is essential -- otherwise $I+V$ can be
non-diagonalizable together with $V$. This shows (see also \cite{Shil})
that isometricity is not a substitute for unitarity in the
non-Archimedean case. In fact we will use a refined version of the
above definition; see Section 2.
In classical operator theory, the main result about unitary operators
is Stone's theorem about the representation of a one-parameter unitary
group in the form $t\mapsto e^{itA}$ where $t\in \mathbb R$, $A$ is a
selfadjoint operator. We find its non-Archimedean analog -- a
one-parameter group parametrized by the group of principal units of
$\Qp$ has the form $s^A$ where $A$ belongs to the class of normal
operators in the sense of \cite{K1}, $\|A\| \le 1$. This result can also be
restated for a one-parameter group with the parameter taken from $\Zp$
instead of the group of principal units.
{\bf 1.2.} Let us recall principal notions and results from \cite{K1}.
We will not explain the basic notions of non-Archimedean analysis; see
\cite{PS,R,vR,Sch}.
Let $A$ be a bounded linear operator on a Banach space $\B$ over a
complete non-Archimedean valued field $K$ with a nontrivial valuation;
$|\cdot |$ will denote the absolute value in $K$. Denote by $\LA$ the
commutative Banach algebra generated by the operators $A$ and $I$.
$\LA$ is a closure of the algebra $K[A]$ of polynomials in $A$, with
respect to the norm of operators; thus $\LA$ is a Banach subalgebra of
the algebra $L(\B )$ of all bounded linear operators. Elements $\lambda
\in K$ are identified with the operators $\lambda I$.
The spectrum $\MA$ is defined (see \cite{Berk}) as the set of all
bounded multiplicative seminorms on $\LA$. In a natural topology, it is
a nonempty Hausdorff compact topological space. If the algebra $\LA$ is
uniform, that is $\|T^2\|=\|T\|^2$ for any $T\in \LA$, and all its
characters take their values in $K$, then \cite{Berk} the space $\MA$
is totally disconnected and $\LA$ is isomorphic to the algebra $C(\MA
,K)$ of continuous functions on $\MA$ with values from $K$. In this
case, under the above isomorphism, the characteristic functions
$\eta_\Lambda$ of nonempty open-closed subsets $\Lambda \subset \MA$
correspond to idempotent operators $E(\Lambda )\in \LA$, $\|E(\Lambda
)\|=1$. These operators form a finitely additive norm-bounded
projection-valued measure on the algebra of open-closed sets, with the
non-Archimedean orthogonality property
$$
\|f\| =\sup\limits_\Lambda \|E(\Lambda )f\|,\quad f \in \B.
$$
An operator $A$ for which the above picture holds is called
{\it normal}. We will call a normal operator {\it strongly normal}, if
its spectrum $\sigma (A)$ is a nonempty totally disconnected compact
subset of $K$, and $\MA =\sigma (A)$. A strongly normal operator admits
the spectral decomposition
$$
A=\int\limits_{\sigma (A)}\lambda \,E(d\lambda ).
$$
More generally, we get a functional calculus assigning to any
$K$-valued continuous function $\varphi$ the operator
$$
\varphi (A)=\int\limits_{\sigma (A)}\varphi (\lambda )\,E(d\lambda ),
$$
such that
$$
\| \varphi (A)\| \le \sup\limits_{\lambda \in \sigma (A)}|\varphi
(\lambda )|.
$$
Some sufficient conditions for strong normality were found in \cite{K1}.
Let $\dim \B <\infty$, and $\B =K^n$, with the norm $\|(x_1,\ldots
,x_n)\|=\max\limits_{1\le i\le n}|x_i|$. An operator $A$ is
represented, with respect to the standard basis in $K^n$, by a matrix
$(a_{ij})_{i,j=1}^n$. Its operator norm coincides with $\|A\|
=\max\limits_{i,j}|a_{ij}|$ (see \cite{Se}). It is sufficient to
consider the case where $\|A\|=1$.
Let $\widehat{K}$ be the residue field of the field $K$. Together with
the operator $A$, we consider its reduction, the operator $\AAA$ on the
$\widehat{K}$-vector space $\widehat{\B }=\widehat{K}^n$ corresponding
to the matrix $(\widehat{a_{ij}})$ where $\widehat{a_{ij}}$ is the
image of $a_{ij}$ under the canonical mapping $O\to \widehat{K}$ ($O$
is the ring of integers of the field $K$). An operator $A$ is called
nondegenerate, if $\AAA \ne \nu I$ for any $\nu \in \widehat{K}$.
It was proved in \cite{K1} that $A$ is strongly normal, if it is
nondegenerate, all its eigenvalues belong to $K$, and its reduction
$\AAA$ is diagonalizable. These conditions are satisfied, for example,
if $\AAA$ has $n$ different eigenvalues from $\widehat{K}$.
In the infinite-dimensional situation, a similar result holds \cite{K1}
(with the representation of operators by infinite matrices), if we
assume in addition that $K$ is algebraically closed, $\B$ is the space
of sequences tending to zero, $A$ is a bounded operator with a compact
spectrum, and the resolvent of $A$ belongs, in a weak sense, to the
space of Krasner analytic functions outside the spectrum. For example,
if a compact operator (that is a norm limit of a sequence of finite
rank operators) is such that its reduction is diagonalizable, then it
is strongly normal.
Note that for a strongly normal operator $A$ and any continuous
$K$-valued function $\varphi$ on $\sigma (A)$, the operator $B=\varphi
(A)$ is strongly normal. Indeed, considering the functional model of
the algebra $\LA$ we see that the spectrum of the operator $B$
coincides with the set $\varphi(\sigma (A))$. The Banach algebra
$\mathcal L_B$ is a subalgebra of $\LA$, and its functional model coincides with
the closure in $C(\sigma (A),K)$ of the set of functions $\pi \circ \varphi$
where $\pi$ is an arbitrary polynomial. The convergence of the sequence
$\pi_n\circ \varphi$ in $C(\sigma (A),K)$ is equivalent to the convergence of
the sequence of polynomials $\pi_n$ in the space $C(\sigma (B),K)$. By
Kaplansky's theorem (see Theorem 43.3 in \cite{Sch}), $\mathcal L_B$ is
isometrically isomorphic to $C(\sigma (B),K)$.
\section{Unitary Operators}
{\bf 2.1.} An operator $U$ on a Banach space $\B$ over a complete
non-Archimedean valued field $K$ with a nontrivial valuation will be
called {\it unitary}, if $U=I+V$ where $\|V\|<1$ and $V$ is strongly
normal. A unitary operator admits a spectral decomposition
$$
U=\int\limits_{\sigma (V)}(1+\lambda )E_V(d\lambda
)=\int\limits_{\sigma (U)}\mu E_U(d\mu ).
$$
Here $E_V$ is the spectral measure of the operator $V$, the mapping
$\varphi (\lambda )=1+\lambda$ transforms the spectrum of $V$ into that
of $U$, and $E_U(M)=E_V(\varphi^{-1}(M))$ for any open-closed subset $M$ of
$\sigma (U)$.
Below we assume that the field $K$ is an extension of the field $\Qp$
of $p$-adic numbers, and the absolute value $|\cdot |$ is an extension
of the $p$-adic absolute value.
Denote by $\UU$ the group of principal units of the field $K$, that is
$\UU=\{ 1+\lambda :\ \lambda \in K,|\lambda |<1\}$. We will consider
{\it one-parameter groups} $U(s)$, $s\in \UU$, of unitary operators,
that is families of unitary operators, continuous with respect to the
norm of operators, such that
\begin{equation}
U(s_1s_2)=U(s_1)U(s_2),\quad s_1,s_2\in \UU .
\end{equation}
A one-parameter group of unitary operators can be constructed as
follows. Let $A$ be a strongly normal operator, $\|A\|\le 1$, $\sigma
(A)\subseteq \Zp$. Consider the function $f_s(\lambda )=(1+z)^\lambda$
where $s=1+z$, $z\in K$, $|z|<1$, $\lambda \in \Zp$. This function can
be defined by its Mahler expansion \cite{R,Sch}:
$$
f_s(\lambda )=\sum\limits_{n=0}^\infty z^nP_n(\lambda )
$$
where
$$
P_n(\lambda )=\frac{\lambda (\lambda -1)\cdots (\lambda -n+1)}{n!},\
n\ge 1;\quad P_0(\lambda )\equiv 1.
$$
An equivalent definition \cite{R,Sch} can be made in terms of the
approximation of a $p$-adic integer $\lambda$ by a sequence of
nonnegative integers, for which the function is defined in the
straightforward way.
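For integer exponents the Mahler expansion can be checked directly with exact rational arithmetic. A minimal Python sketch (all names and parameters are ad hoc) verifies that the truncated series $\sum_{n\le N}z^nP_n(\lambda)$ with $z=p$ recovers $(1+p)^\lambda$ as soon as $N\ge\lambda$, while for smaller $N$ the error is divisible by $p^{N+1}$, reflecting the $p$-adic convergence of the series.
\begin{verbatim}
# Truncated Mahler series for (1+p)^lambda with an integer exponent lambda.
from fractions import Fraction

def mahler_coeff(lam, n):
    # P_n(lam) = lam (lam-1) ... (lam-n+1) / n!
    num, den = 1, 1
    for k in range(n):
        num *= (lam - k)
        den *= (k + 1)
    return Fraction(num, den)

def truncated_mahler(p, lam, N):
    # Partial sum of (1+p)^lam = sum_{n>=0} p^n P_n(lam).
    return sum(Fraction(p) ** n * mahler_coeff(lam, n) for n in range(N + 1))

p, lam = 5, 7
exact = Fraction(1 + p) ** lam
for N in (3, 5, 7, 10):
    approx = truncated_mahler(p, lam, N)
    print(N, approx == exact, exact - approx)
# For N >= lam the partial sum is exact; for smaller N the error is divisible
# by p^(N+1), which is the p-adic smallness of the tail.
\end{verbatim}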
Set
\begin{equation}
U(s)=(1+z)^A=\int\limits_{\sigma (A)}(1+z)^\lambda E_A(d\lambda ),\quad
s=1+z\in \UU.
\end{equation}
Due to the non-Archimedean orthonormality of the Mahler basis, we have
$$
(1+z)^\lambda =1+v_z(\lambda ),\quad v_z(\lambda
)=\sum\limits_{n=1}^\infty z^nP_n(\lambda ),
$$
$$
\sup\limits_{\lambda \in \Zp}\left| \sum\limits_{n=1}^\infty
z^nP_n(\lambda )\right| =|z|<1,
$$
so that $U(s)$ is a unitary operator, that is $U(s)=I+V(s)$ where
$$
V(s)=\int\limits_{\sigma (A)}v_z(\lambda )E_A(d\lambda
)=\int\limits_{\sigma (V(s))}\mu E_{V(s)}(d\mu ),\quad
E_{V(s)}(M)=E_A(v_z^{-1}(M)).
$$
It follows from the approximative description of the function $f_s$
that $U(s)$ possesses the required group property. Next, let
$s_1=1+z_1$, $s_2=1+z_2$, $|z_1|<1$, $|z_2|<1$. Using (2) we find that
$$
U(s_1)-U(s_2)=\int\limits_{\sigma (A)}\left[ \sum\limits_{n=1}^\infty
\left( z_1^n-z_2^n\right) P_n(\lambda )\right] E_A(d\lambda ),
$$
so that
$$
\|U(s_1)-U(s_2)\| \le \sup\limits_{\lambda \in \Zp}\left|
\sum\limits_{n=1}^\infty \left( z_1^n-z_2^n\right) P_n(\lambda )\right|
\le \sup\limits_{n\ge 1}\left| z_1^n-z_2^n\right| =|s_1-s_2|.
$$
Therefore the function $s\mapsto U(s)$ is continuous with respect to
the operator norm.
\medskip
{\bf 2.2.} In particular, the above construction makes sense for $s\in
\UUQ$, and the formula (2) defines a norm-continuous one-parameter
group of unitary operators $\UUQ\ni s\mapsto U(s)$. The next result is
a converse statement, an analog of Stone's theorem.
\medskip
\begin{teo}
Let $U(s)$, $s\in \UUQ$, $p\ne 2$, be a norm continuous one-parameter
group of unitary operators, such that the spectrum of the strongly
normal operator $U(1+p)-I$ is contained in $p\Zp$. Then there exists a
strongly normal operator $A$ with $\sigma (A)\subseteq \Zp$, such that
$U(s)=s^A$, $s\in \UUQ$.
\end{teo}
\medskip
{\it Proof}. Each element $s\in \UUQ$ can be represented, in a unique
way, as
\begin{equation}
s=(1+p)^\zeta ,\quad \zeta \in \Zp .
\end{equation}
Indeed, set $\zeta =\dfrac{\log s}{\log (1+p)}$ (see \cite{R,Sch}
regarding properties of the $p$-adic logarithm). We have $\zeta \in
\Qp$, $|\log s|\le p^{-1}$, $|\log (1+p)|=|p|=p^{-1}$, so that $\zeta
\in \Zp$. On the other hand, $\exp (\zeta \log (1+p))=(1+p)^\zeta$
(\cite{Sch}, Theorem 47.10), which implies (3).
Let us write the canonical representation
$$
\zeta =\zeta_0+\zeta_1p+\zeta_2p^2+\cdots ,\quad \zeta_j\in
\{0,1,\ldots ,p-1\}.
$$
The series $(1+p)^\zeta =\sum\limits_{n=0}^\infty p^nP_n(\zeta )$
converges uniformly with respect to $\zeta \in \Zp$, so that the
function $\zeta \mapsto (1+p)^\zeta$ is continuous. Due to the norm
continuity of $U$,
\begin{equation}
U(s)=\lim\limits_{n\to \infty}[U(1+p)]^{\zeta_0+\zeta_1p+\cdots
+\zeta_np^n}.
\end{equation}
Denote
$$
a_n(\lambda )=(1+p)^{(\zeta_0+\zeta_1p+\cdots +\zeta_np^n)\lambda
},\quad \lambda \in \Zp .
$$
Let us prove that
\begin{equation}
a_n(\lambda )\longrightarrow (1+p)^{\zeta \lambda },\quad \text{as
$n\to \infty$},
\end{equation}
uniformly with respect to $\lambda$.
Indeed, we have the estimate
\begin{multline*}
\left| a_n(\lambda )-(1+p)^{\zeta \lambda }\right| =\left|
(1+p)^{(\zeta_{n+1}p^{n+1}+\zeta_{n+2}p^{n+2}+\cdots )\lambda
}-1\right|
\\
=\left| \sum\limits_{k=1}^\infty
p^kP_k((\zeta_{n+1}p^{n+1}+\zeta_{n+2}p^{n+2}+\cdots )\lambda )\right|
\le \sup\limits_{k\ge 1}p^{-k}\left|
P_k((\zeta_{n+1}p^{n+1}+\zeta_{n+2}p^{n+2}+\cdots )\lambda )\right|
\end{multline*}
where
$$
\left| P_k((\zeta_{n+1}p^{n+1}+\zeta_{n+2}p^{n+2}+\cdots )\lambda
)\right| \le p^{-n-1}|k!|^{-1}\le p^{-n-1+\frac{k-1}{p-1}},
$$
so that, uniformly with respect to $\lambda \in \Zp$,
$$
\left| a_n(\lambda )-(1+p)^{\zeta \lambda }\right| \le
p^{-n-1}\sup\limits_{k\ge 1}p^{-k+\frac{k-1}{p-1}}\longrightarrow 0,
$$
as $n\to \infty$.
Now we return to the expression (4). By our assumption, $U(1+p)=I+V$
where $V$ is a strongly normal operator, $\sigma (V)\subseteq p\Zp$. We
have
$$
U(1+p)=\int\limits_{\sigma (V)}(1+\lambda )E_V(d\lambda )
$$
The Banach algebra $\LV$ generated by the operator $V$ contains the
strongly normal operator
$$
A=\frac1{\log (1+p)}\log (I+V)=\frac1{\log
(1+p)}\sum\limits_{k=1}^\infty \frac{(-1)^{k-1}}{k}V^k
=\int\limits_{\sigma (V)}\frac{\log (1+\lambda )}{\log
(1+p)}E_V(d\lambda ).
$$
Obviously, $\sigma (A)\subseteq \Zp$ and
$$
(1+p)^{\frac{\log (1+\lambda )}{\log (1+p)}}=1+\lambda ,
$$
so that $U(1+p)=(1+p)^A$, and it follows from (4) that
$$
U(s)=\lim\limits_{n\to \infty}\left[
(1+p)^A\right]^{\zeta_0+\zeta_1p+\cdots +\zeta_np^n}.
$$
Switching to the functional model and using (5) and (3) we obtain the
required formula for the operators $U(s)$. $\qquad \blacksquare$
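The representation (3) used in the proof can be illustrated numerically with truncated series and exact rational arithmetic. A minimal Python sketch (the truncation length and all names are ad hoc): for $s=(1+p)^m$ with an integer $m$, the ratio of truncated logarithms agrees with $m$ up to a high power of $p$.
\begin{verbatim}
# zeta = log s / log(1+p) via truncated series, checked p-adically for s=(1+p)^m.
from fractions import Fraction

def vp(q, p):
    # p-adic valuation of a rational number q
    if q == 0:
        return float("inf")
    num, den, v = q.numerator, q.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def plog_trunc(x, K):
    # truncation of log(1+x) = sum_{k>=1} (-1)^(k-1) x^k / k
    return sum(Fraction((-1) ** (k - 1)) * x ** k / k for k in range(1, K + 1))

p, m, K = 5, 3, 25
s = Fraction(1 + p) ** m                       # s = (1+p)^m, a principal unit
zeta = plog_trunc(s - 1, K) / plog_trunc(Fraction(p), K)
print(vp(zeta - m, p))                         # large: zeta agrees with m to high p-adic accuracy
\end{verbatim}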
\medskip
Note that the condition regarding the operator $U(1+p)-I$ is satisfied
automatically, if $U(1+p)=I+V$, $\|V\|<1$, and $K=\Qp$.
\medskip
{\bf 2.3.} Let $W(z)$, $z\in \Zp$, $p\ne 2$, be a norm continuous
unitary representation of the additive group $\Zp$, and the spectrum of
the operator $W(p^{-1}\log (1+p))-I$ lies in $p\Zp$. Denote $s=e^{pz}$,
$U(s)=W(z)$. Then $s\in \UUQ$, and $s\mapsto U(s)$ is a one-parameter
group satisfying the conditions of the above Theorem. We obtain the
expression
$$
W(z)=e^{pzA},\quad z\in \Zp ,
$$
where $A$ is a strongly normal operator, $\sigma (A)\subseteq \Zp$.
\section{Example. Galois Representations}
In this section we follow \cite{Berg,Fon,Sen}.
Let $K$ be a finite extension of $\Qp$, and let $\varepsilon^{(n)}\in
\bar{K}$ ($\bar{K}$ is an algebraic closure of $K$) be a sequence of
primitive roots of unity of order $p^n$, such that
$$
\varepsilon^{(0)}=1,\quad \varepsilon^{(1)}\ne 1,\quad
(\varepsilon^{(n+1)})^p=\varepsilon^{(n)},\ n=0,1,\ldots.
$$
Denote $K_n=K(\varepsilon^{(n)})$, $K_\infty
=\bigcup\limits_{n=0}^\infty K_n$, $G_K=\Gal (\bar{K}/K)$. Let
$\mu_{p^n}$ be the set of roots of unity of order $p^n$; thus
$\varepsilon^{(n)}\in \mu_{p^n}$, $n\ge 0$. The cyclotomic character
$\chi:\ G_K\to \Zp^*=\{ x\in \Zp :\ |x|=1\}$ is defined via the
equality
$$
\sigma (\zeta )=\zeta^{\chi (\sigma )},\quad \text{for all $\sigma \in
G_K$, $\zeta \in \mu_{p^\infty}=\bigcup\limits_{n=0}^\infty
\mu_{p^n}$.}
$$
$\chi$ is continuous with respect to the standard topology of $G_K$ as
a profinite group.
The kernel of the cyclotomic character coincides with $H_K=\Gal
(\bar{K}/K_\infty )$. Therefore $\chi$ identifies $\Gamma_K=\Gal
(K_\infty /K)=G_K/H_K$ with an open subgroup of the multiplicative
group $\Zp^*$.
By definition, {\it a $p$-adic representation} $V$ of the group $G_K$
is a finite-dimensional vector space over $\Qp$ with a continuous
linear action of $G_K$.
Let $\widehat{K}_\infty$ be the $p$-adic completion of $K_\infty$. Let
us consider the action of $\Gamma_K$ on the $\widehat{K}_\infty$-vector
space $\left( \Cp \otimes_{\Qp}V\right)^{H_K}$ of elements from $\Cp
\otimes_{\Qp}V$ fixed under the action of $H_K$. If $d=\dim_{\Qp}V$,
then $\left( \Cp \otimes_{\Qp}V\right)^{H_K}$ is a
$\widehat{K}_\infty$-vector space of dimension $d$. The group
$\Gamma_K$ acts on the union $\mathbb D_{\text{Sen}}(V)$ of
finite-dimensional subspaces of $\left( \Cp
\otimes_{\Qp}V\right)^{H_K}$ invariant with respect to $\Gamma_K$, and
$\dim_{K_\infty}\mathbb D_{\text{Sen}}(V)=d$.
By Sen's theorem \cite{Sen}, there is a unique $K_\infty$-linear
operator $\Theta_V$ on $\mathbb D_{\text{Sen}}(V)$, such that for any
$\omega \in \mathbb D_{\text{Sen}}(V)$, there exists an open
subgroup $\Gamma_\omega \subset \Gamma_K$ such that
\begin{equation}
\sigma (\omega )=\left[ \exp \left( \Theta_V\log \chi (\sigma )\right)
\right] \omega
\end{equation}
for all $\sigma \in \Gamma_\omega$.
A representation $V$ is called a {\it Hodge-Tate representation} if,
for a certain basis $e_1,\ldots ,e_d\in \mathbb D_{\text{Sen}}(V)$, the
operator $\Theta_V$ is diagonal, with eigenvalues from $\mathbb Z$. In
this case, we can introduce a norm in $\mathbb D_{\text{Sen}}(V)$
setting
$$
\|x_1e_1+\cdots +x_de_d\|=\max (|x_1|,\ldots ,|x_d|),\quad x_1,\ldots
,x_d\in K_\infty .
$$
Then $\Theta_V$ is obviously strongly normal, $\|\Theta_V\| \le 1$.
Taking into account the continuity of $\chi$, we can choose an open
subgroup $\Lambda \subset \Gamma_K$ so small that $|\chi (\sigma )-1| \le
p^{-1}$ for all $\sigma \in \Lambda$. For every $\sigma \in \Lambda$,
the right-hand side of (6) defines a unitary operator on $\mathbb
D_{\text{Sen}}(V)$.
\section*{ACKNOWLEDGEMENT}
This work was supported in part by the Ukrainian Foundation
for Fundamental Research, Grant 29.1/003.
\medskip
| {
"timestamp": "2011-02-22T02:04:12",
"yymm": "1102",
"arxiv_id": "1102.4302",
"language": "en",
"url": "https://arxiv.org/abs/1102.4302",
"abstract": "We describe a subclass of the class of normal operators on Banach spaces over non-Archimedean fields (A. N. Kochubei, J. Math. Phys. 51 (2010), article 023526) consisting of operators whose properties resemble those of unitary operators. In particular, an analog of Stone's theorem about one-parameter groups of unitary operators is proved.",
"subjects": "Functional Analysis (math.FA); Mathematical Physics (math-ph); Number Theory (math.NT)",
"title": "Non-Archimedean Unitary Operators",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126457229186,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.7094397058029228
} |
https://arxiv.org/abs/2106.09103 | Approximately invertible elements in non-unital normed algebras | We introduce a concept of approximately invertible elements in non-unital normed algebras which is, on one side, a natural generalization of invertibility when having approximate identities at hand, and, on the other side, it is a direct extension of topological invertibility to non-unital algebras. Basic observations relate approximate invertibility with concepts of topological divisors of zero and density of (modular) ideals. We exemplify approximate invertibility in the group algebra, Wiener algebras, and operator ideals. For Wiener algebras with approximate identities (in particular, for the Fourier image of the convolution algebra), the approximate invertibility of an algebra element is equivalent to the property that it does not vanish. We also study approximate invertibility and its deeper connection with the Gelfand and representation theory in non-unital abelian Banach algebras as well as abelian and non-abelian C*-algebras. | \section{Introduction}
Invertibility is one of the central concepts in the study of unital Banach (or, more generally, topological) algebras. However, this concept is closely related to the existence of an identity in the algebra. Every non-unital Banach algebra $\mathcal{A}$ may be embedded into a unital algebra $\mathcal{A}_1$; such unitization is very useful for some goals,
but it is not very appropriate for others. In particular, no element of the original algebra $\mathcal{A}$
is invertible in the unital algebra $\mathcal{A}_1$.
An important tool in non-unital Banach algebras is the concept of \emph{approximate identity}, which serves as a good substitute for an identity. This concept goes back to the earliest studies of the group algebra $L^1(G)$ and in this case the approximate identities have been systematically investigated by Weil in his book~\cite{Weil}. However, certain concrete approximate identities were considered long before the abstract definition was introduced. Approximate identities were apparently first considered explicitly by Segal in~\cite{Segal}, who constructed an approximate identity bounded by one for any norm-closed self adjoint subalgebra of the algebra of bounded linear operators on a Hilbert space. Then Dixmier used the approximate identity as the main tool in his book~\cite{Dixmier} to carry through all the basic theory of C*-algebras. Various results on left, right and two-sided approximate identities are described in Dixon's papers~\cite{Dixon1} and~\cite{Dixon2}. From that time many results known for C*-algebras, or for unital algebras, have been extended to algebras with approximate identity, see many books on the topic, e.g.~\cite{Doran-Wichmann}, \cite{Kaniuth}, \cite{Larsen}, and \cite{Palmer}.
The topological divisors of zero as well as approximate (or, topological) identities are two instances suggesting the topologization of algebraic concepts. In this paper we propose a concept of \emph{approximate invertible elements} as an attempt to provide a ``topologization of invertibility'' in non-unital algebras. The main object of study of this paper reads as follows:
\begin{definition}\label{def:ApproInv}
An element $x$ of a topological algebra $\mathcal{A}$ is said to be \emph{approximately right invertible}
if there is a net $(r_{j})_{j\in J}$ in $\mathcal{A}$ such that
$(xr_{j})_{j\in J}$ is an approximate identity in $\mathcal{A}$. Similarly, an element $x\in\mathcal{A}$ is \emph{approximately left invertible}
if there is a net $(l_{j})_{j\in J}$ in $\mathcal{A}$ such that
$(l_{j}x)_{j\in J}$ is an approximate identity in $\mathcal{A}$.
\end{definition}
To our best knowledge, the suggested concept of approximate invertibility is not included in any available literature we have seen although it seems very natural in the context of algebras with approximate identities. A closely related notion in unital topological algebras is given by Thatte and Bhatt~\cite{ThatteBhatt} as a topological invertibility. This concept was further developed by Akkar et al.~\cite{Akkar-Beddaa-Oudadess}. In fact, both concepts coincide in unital topological algebras and, moreover, they collapse to invertibility in unital Banach algebras. However, classical invertibility and topological invertibility make no sense in non-unital algebras, thus approximate invertibility serves as an extension of these concepts to non-unital algebras. In the literature one can find yet another related concept -- a topological quasi-invertibility, see~\cite{Najmi, Zohri-Jabbari-2010}.
Already in 2001 M.~Abel~\cite{Abel} provided a characterization of topological algebras in which the set of topologically quasi-invertible elements coincides with the set of quasi-invertible elements.
A similar terminology (approximately invertible maps, approximate inverses, or even approximate invertibility) is used in other contexts, and in a different sense, see for example~\cite{Boxer,Harte,Kromer,Schuster,Zames}. Moreover, in operator-theoretic community there is a notion of approximately invertible operators (by a sequence of approximate operators) studied e.g. in the book~\cite{HagenRochSilbermann}. Note that few authors use sometimes the term ``approximate invertible operators'' instead of ``Fredholm operators''.
Regarding operator theory, our motivation to introduce the approximate invertibility concept is related to projects dealing with the density of the range of certain convolution operators arising in the study of Toeplitz and Toeplitz-type operators acting on various function spaces (usually, the weighted Bergman spaces over the upper half-plane, or the unit disk in the complex plane, \cite{EM,EV,EMV,HHM,HMV},
as well as wavelet function spaces on the affine group \cite{H}). In these cases approximate inverses for some particular convolutions have been constructed. In particular, the main step in recent papers \cite{EM,HHM,HMV,HMM}
was to construct an appropriate Dirac net and, using this net, to show that the function algebra under consideration is dense in a certain C*-algebra (e.g., the algebra SO($\mathbb{N}$)
or the algebra VSO$(\mathbb{R}_+)$).
An idea of Wiener deconvolution technique on the real line has already been elaborated in~\cite{HMM}.
Immediately we have observed that the used techniques may be generalized to non-unital normed algebras leading to the concept of approximate invertibility as introduced in the paper.
We generalize this idea in Section~\ref{sec:applications}.
Although we will work mainly with normed algebras, the notion of approximate invertibility and many results of this paper can easily be generalized to topological algebras as well.
The paper is organized as follows.
In Section \ref{Sec:intro-app-inv} the notion of approximately invertible elements is introduced and exemplified, and its relations with convergence, topological divisors of zero and ideals are studied in an elementary way. A detailed study of approximate invertibility in some classes of algebras such as
Banach algebras, C*-algebras and involutive algebras is given in Section \ref{sec:AppInv in algebras}. In each case we aim to provide a characterization of approximately invertible elements by means of Gelfand transform, modular ideals, or non-degenerate representations, see Theorem~\ref{Appinvl-modi-C*} and Proposition~\ref{prop:criterion_ainv_in_csa}.
Section~\ref{Sec:Examp} brings several interesting examples of algebras with or without approximately invertible elements. Particular examples have served us as a motivation for investigating approximate invertibility in some classes of algebras studied in Section \ref{sec:AppInv in algebras}. The most important (from the viewpoint of applications in the study of Toeplitz operator algebras and other parts of time-frequency analysis) is a Wiener algebra possessing an approximate identity where an element of this algebra is approximately invertible
if and only if it does not vanish, see Theorem~\ref{thm:Wiener_ainv-Wie-alg}.
Finally, we provide a necessary and sufficient condition for the left and right approximate invertibility in operator ideals (including the C*-algebra of compact operators acting on an infinite-dimensional separable Hilbert space), see Theorem~\ref{prop:ApprInvCompact-opi}. An application to the density in Banach modules is given in Section~\ref{sec:applications}. As a by-product we get a result about density of the image of the convolution operator. In the last Section \ref{Sec:rem-open-pro} we present some questions, remarks and open problems related to approximately invertible elements for further investigation.
\section{Invertibility in non-unital algebras}\label{Sec:intro-app-inv}
If invertibility is not available but we still wish to exploit the good properties of approximate identities, the new concept of \emph{approximately invertible} elements in non-unital normed algebras can be used. In this section we exemplify this concept, relate it to existing ones in the literature and study several elementary properties related to convergence, topological divisors of zero and ideals.
\subsection{Approximate identities in action}
First we summarize some basic properties of approximate identities in non-unital normed algebras. For details see \cite{Conway,Dixon1,Dixon2,Dixon3,Doran-Wichmann,Feichtinger,Larsen,Murphy,Palmer,Rudin,Zelazko}.
\begin{definition}\label{DefinitionAppId}
An \emph{approximate identity}
in a normed algebra $\mathcal{A}$ is a net $(\aid_{j})_{j\in J}$ in $\mathcal{A}$
such that for every $x$ in $\mathcal{A}$ it holds
$\displaystyle
\lim_{j\in J}\aid_{j}x=\lim_{j\in J}x\aid_{j}=x.
$
\end{definition}
In fact, in this definition it is sufficient to consider only non-zero elements $x\in \mathcal{A}$. In a similar way left and right approximate identities are defined. The existence of an approximate identity in a dense subset of the algebra is immediate. Namely, if $\mathcal{A}$ is a normed algebra with an approximate identity
and $S$ is a dense subset of $\mathcal{A}$, then $\mathcal{A}$ has an approximate identity with values in $S$, see \cite[Lemma 1.4]{Doran-Wichmann}.
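A standard non-unital example (recalled here only for illustration) is the Banach algebra $c_0$ of sequences vanishing at infinity, with pointwise multiplication and the supremum norm: the truncations $e_n=(1,\dots,1,0,0,\dots)$ form a norm bounded approximate identity which does not converge in $c_0$. A minimal Python sketch:
\begin{verbatim}
# Truncations acting on an element of c_0 (pointwise product, sup norm).
def sup_norm(x):
    return max(abs(t) for t in x)

def truncate(x, n):
    return [t if i < n else 0.0 for i, t in enumerate(x)]

# a sample element of c_0; finitely many nonzero entries suffice numerically
x = [1.0 / (k + 1) ** 2 for k in range(1000)]
for n in (5, 50, 500):
    e_n_x = truncate(x, n)                     # e_n * x (pointwise product)
    err = sup_norm([a - b for a, b in zip(e_n_x, x)])
    print(n, err)    # decreases like 1/(n+1)^2, while ||e_n||_sup = 1 for all n
\end{verbatim}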
An approximately unital algebra shares some of the properties of a unital algebra.
Obviously, if $\mathcal{A}$ is a unital algebra with unit $e$ and $J$ is an arbitrary directed set, then we can define an approximate identity $(\aid_{j})_{j\in J}$ in $\mathcal{A}$ easily by the rule $\aid_j=e$ for all $j\in J$. Also, from Definition~\ref{DefinitionAppId} it follows that if an approximate identity is a divergent net, then the normed algebra is non-unital. The following (in some sense reverse) observation is immediate.
\begin{proposition}\label{prop:convergentAI}
Let $(\aid_j)_{j\in J}$ be an approximate identity in $\mathcal{A}$ having a limit $u\in\mathcal{A}$. Then $u$ is the unit in $\mathcal{A}$.
\end{proposition}
On the one hand, unbounded approximate identities may look useless, see e.g.~\cite{Dixon3} for some pathological examples in incomplete normed algebras. On the other hand, they may be particularly useful in other contexts, e.g. in so-called Segal algebras.
For more details we refer the interested reader to \cite{ReiterStegeman}. Therefore we distinguish between \emph{norm bounded} and \emph{operator norm bounded} approximate identities in normed algebras.
\begin{definition}\label{Defin:normb-opb-app-i}
Let $(\mathcal{A},\|\cdot\|)$ be a normed algebra and $(\aid_{j})_{j\in J}$ be a left approximate identity in $\mathcal{A}$. Then
\begin{itemize}
\item [\rm{(a)}] $(\aid_{j})_{j\in J}$ is said to be \emph{norm bounded} if there is a finite constant $M>0$ such that
$\|\aid_{j}\|\leq M$ for every $j\in J$.
\item [\rm{(b)}] $(\aid_{j})_{j\in J}$ is said to be \emph{operator norm bounded} if the net $(T_{j})_{j\in J}$ of left multiplication operators $T_{j}x=\aid_{j}x$ is such that
$$\sup_{j\in J}\|T_{j}\|_{\textrm{op}}<+\infty,\,\,\, \text{where}\, \|\cdot\|_{\textrm{op}}\,\,\text{is the operator norm}.$$
\end{itemize}
\end{definition}
The assertion of the following proposition permits us to extend operator norm bounded approximate identities in separable normed algebras to their completions. In addition, it is used later in the proof of Theorem~\ref{thm:dense_ideals_implies_ainv} and in the section about convolution algebras.
\begin{proposition}\label{prop:compl-opnb-algopnb}
Let $(\mathcal{A},\|\cdot\|_{\mathcal{A}})$ be a separable normed algebra and $(\tilde{\mathcal{A}},\|\cdot\|_{\tilde{\mathcal{A}}})$ be its completion. Then $(\tilde{\mathcal{A}},\|\cdot\|_{\tilde{\mathcal{A}}})$ has an (operator) norm bounded approximate identity $(\tilde{e}_{n})_{n\in \mathbb{N}}$ if and only if $(\mathcal{A},\|\cdot\|_{\mathcal{A}})$ has an (operator) norm bounded approximate identity.
\end{proposition}
\begin{proof}
The proof for bounded approximate identities may be found in~\cite[\textsection 6]{Reiter71}. Next, we give a proof for operator norm bounded approximate identities.
Suppose that $(\tilde{\mathcal{A}},\|\cdot\|_{\tilde{\mathcal{A}}})$ has an operator norm bounded approximate identity $(\tilde{e}_{n})_{n\in \mathbb{N}}$ and let us denote by $\tilde{T}_{n}$ the linear operators given by $\tilde{T}_{n}x=\tilde{e}_{n}x$, $x\in\tilde{\mathcal{A}}$, where $M=\sup_{n\in\mathbb{N}}\|\tilde{T}_{n}\|_{\textrm{op}}<\infty$ by Definition \ref{Defin:normb-opb-app-i}. Then by \cite[Lemma 1.4]{Doran-Wichmann} there exists a sequence $(e_{n})_{n\in\mathbb{N}}$ of elements in $\mathcal{A}$ such that $(e_{n})_{n\in\mathbb{N}}$ is an approximate identity of $\tilde{\mathcal{A}}$ (and of course in $\mathcal{A}$) with $\|e_{n}-\tilde{e}_{n}\|_{\tilde{\mathcal{A}}}<\frac{1}{n}$. Denote by $T_{n}$ the linear operator acting on $\tilde{\mathcal{A}}$ given by $T_{n}x=e_{n}x$. Therefore, for every $x\in\mathcal{A}$
\begin{align*}
\|T_{n}x\|_{\mathcal{A}}&\leq \|T_{n}x-\tilde{T}_{n}x\|_{\tilde{\mathcal{A}}}+\|\tilde{T}_{n}x\|_{\tilde{\mathcal{A}}}\leq \|e_{n}-\tilde{e}_{n}\|_{\tilde{\mathcal{A}}}\,\|x\|_{\mathcal{A}}+M\,\|x\|_{\mathcal{A}}\\
&\leq \dfrac{\|x\|_{\mathcal{A}}}{n}+M\, \|x\|_{\mathcal{A}}\leq \left(1+\sup_{n\in\mathbb{N}}\|\tilde{T}_{n}\|_{\textrm{op}}\right)\,\|x\|_{\mathcal{A}}.
\end{align*}
Consequently, $\|T_{n}\|_{\textrm{op}}\leq M+1$ for every $n\in\mathbb{N}$ and hence we have $\sup_{n\in\mathbb{N}}\|T_{n}\|_{\textrm{op}}<+\infty$, i.e., $(e_{n})_{n\in\mathbb{N}}$ is an operator norm bounded approximate identity in $\mathcal{A}$.
Conversely, it is an easy exercise left to the reader to extend an operator norm bounded approximate identity for $(\mathcal{A},\|\cdot\|_{\mathcal{A}})$ to all of $(\tilde{\mathcal{A}},\|\cdot\|_{\tilde{\mathcal{A}}})$, observing that it is still an approximate identity there, and of course still bounded in the operator norm.
\end{proof}
\begin{remark}
The proof of necessity of $(\mathcal{A},\|\cdot\|_{\mathcal{A}})$ having an operator norm bounded approximate identity in Proposition \ref{prop:compl-opnb-algopnb} may be done using nets instead of sequences.
\end{remark}
\subsection{Approximate invertibility and related concepts}
In a unital algebra $\mathcal{A}$ (with $e$ being the unit in $\mathcal{A}$) the invertibility of an element $x\in\mathcal{A}$ may be described using the constant net $y_j=x^{-1}$, $j\in J$, over a directed set $J$, such that for each $z\in\mathcal{A}$ it holds $$\lim_{j\in\,J}xy_jz = \lim_{j\in\,J}zxy_j = \lim_{j\in\,J}y_jxz = \lim_{j\in\,J}zy_jx = z.$$ This means that the nets $(xy_j)_{j\in\,J}$ and $(y_jx)_{j\in\,J}$ are approximate identities in $\mathcal{A}$. A natural generalization of this observation to the case of non-unital normed algebras is given in Definition~\ref{def:ApproInv}, which is the main object of our study.
If the corresponding approximate identity $(xr_{j})_{j\in J}$, resp. $(l_{j}x)_{j\in J}$, is norm bounded (operator norm bounded), then we speak about \textit{boundedly} (\textit{op-boundedly}) approximately right, resp. left invertible element $x\in\mathcal{A}$. Clearly, zero element in $\mathcal{A}\ne\{0\}$ cannot be approximately invertible in $\mathcal{A}$.
As far as we know the concept of approximate invertibility is not included in any available literature we have seen, although it seems very natural in the context of algebras with approximate identities. A closely related notion is given by Thatte and Bhatt~\cite{ThatteBhatt} who defined the concept of \textit{topological invertibility in unital topological algebras}. This concept was further developed by Akkar et al.~\cite{Akkar-Beddaa-Oudadess} with improving some proofs. For the sake of consistency with our considerations we recall it here in the context of normed algebras only.
\begin{definition}
An element $x$ in a unital normed algebra $\mathcal{A}$ (with unit $e$) is called \textit{topologically right invertible} in $\mathcal{A}$, if there is a net $(r_j)_{j\in J}$ in $\mathcal{A}$ such that $xr_j \to e$. Similarly, $x\in\mathcal{A}$ is called \textit{topologically left invertible} in $\mathcal{A}$, if there is a net $(l_j)_{j\in J}$ in $\mathcal{A}$ such that $l_jx \to e$.
\end{definition}
\begin{example}
Arens' algebra $$L^w([0,1]):= \bigcap_{1\leq p< +\infty} L_p([0,1])$$ is a unital complete metrizable algebra with pointwise operations and the topology of $L_p$-convergence for each $1\leq p<+\infty$. The function $f(x)=x$ is not invertible in $L^w([0,1])$, but it is topologically invertible, because there is a sequence $g_n(x) = \chi_{[1/n,1]}(x)\frac{1}{x}\in L^w([0,1]), \, n\in\mathbb{N},$ such that $\|fg_n-1\|_p \to 0$ for each $p$. Also, this sequence serves to show that $f$ is approximately invertible as well. In fact, in unital algebras both concepts coincide.
\end{example}
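A minimal numerical check of the previous example (Riemann-sum norms; the grid sizes are ad hoc): since $fg_n-1=-\chi_{[0,1/n)}$ almost everywhere, one has $\|fg_n-1\|_p=n^{-1/p}\to 0$ for every fixed $p<\infty$, and the Python sketch below reproduces this.
\begin{verbatim}
# ||f g_n - 1||_p for f(x)=x and g_n = chi_{[1/n,1]} / x on [0,1].
def lp_norm(h, p, m=100000):
    dx = 1.0 / m
    return (sum(abs(h((i + 0.5) * dx)) ** p for i in range(m)) * dx) ** (1.0 / p)

f = lambda x: x
for p in (1, 2, 4):
    for n in (10, 100, 1000):
        g_n = lambda x, n=n: (1.0 / x) if x >= 1.0 / n else 0.0
        err = lp_norm(lambda x: f(x) * g_n(x) - 1.0, p)
        print(p, n, round(err, 5), round(n ** (-1.0 / p), 5))
# The computed norms match n^(-1/p), so (g_n) is a topological (and approximate)
# inverse of f in Arens' algebra.
\end{verbatim}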
\begin{proposition}\label{prop:TopInv<->AppInv}
An element $x$ in a unital normed algebra $\mathcal{A}$ is approximately right invertible if and only if it is topologically right invertible.
\end{proposition}
\begin{proof} Let $x$ be an approximately right invertible element in $\mathcal{A}$ (with unit $e$). Thus, there is a net $(r_j)_{j\in J}$ in $\mathcal{A}$ such that for each $z\in \mathcal{A}$ we have $xr_jz\to z$. Then for $z=e$ we get topological right invertibility of $x$.
If $x$ is topologically right invertible in $\mathcal{A}$, then there is a net $(r_j)_{j\in J}$ in $\mathcal{A}$ such that $\|xr_j-e\|\to 0$. Then for each $z\in\mathcal{A}$, we have $\|xr_jz-z\| = \|(xr_j-e)z\|\leq \|xr_j-e\|\cdot\|z\|.$ Thus, $\lim_{j}\|xr_jz-z\|=0$. Similarly, $\|zxr_j-z\|\to 0$. Therefore, $x$ is approximately right invertible in $\mathcal{A}$.
\end{proof}
For a normed algebra $\mathcal{A}$ we denote by $\Inv_r(\mathcal{A})$ the set of all right invertible elements in $\mathcal{A}$, by $\TopInv_r(\mathcal{A})$ the set of all topologically right invertible elements in $\mathcal{A}$, and by $\AppInv_r(\mathcal{A})$ the set of all approximately right invertible elements in $\mathcal{A}$, respectively. Their left versions are denoted analogously.
It is well-known from~\cite{ThatteBhatt} that in a unital Banach algebra topological invertibility coincides with invertibility. This implies that in a unital Banach algebra $\mathcal{A}$ we have
$ \Inv_r(\mathcal{A})=\TopInv_r(\mathcal{A})=\AppInv_r(\mathcal{A}).$
Finally, the concept of approximate right (left) invertibility generalizes topological right (left) invertibility to the case of non-unital algebras, where the concepts of invertible and topologically invertible elements are not defined. Indeed,
\begin{itemize}
\item if $\mathcal{A}$ is a non-unital normed (or topological) algebra,
then $\Inv_r(\mathcal{A})=\TopInv_r(\mathcal{A})=\emptyset\subseteq\AppInv_r(\mathcal{A})$;
\item if $\mathcal{A}$ is a unital normed (or topological) algebra,
then $\Inv_r(\mathcal{A})\subseteq\TopInv_r(\mathcal{A})=\AppInv_r(\mathcal{A})$;
\item if $\mathcal{A}$ is a unital Banach algebra, then
$\Inv_r(\mathcal{A})=\TopInv_r(\mathcal{A})=\AppInv_r(\mathcal{A})$.
\end{itemize}
From this point of view we are mainly interested in non-unital algebras throughout this paper to investigate properties of approximate invertibility in detail.
In the literature one can find yet another related concept -- topological quasi-invertibility, see~\cite{Najmi, Zohri-Jabbari-2010}. Since we are working with normed algebras $\mathcal{A}$, we give the definition in that context, although it can be stated for topological algebras in general. Given $a,b\in\mathcal{A}$, we define the ``circle operation'' $a\circ b:=ab-a-b$.
\begin{definition}\rm
An element $a$ of a
normed algebra $\mathcal{A}$ is called \emph{right quasi-invertible}
if there exists an element $b\in\mathcal{A}$ such that $a\circ b=0$. An element $a\in\mathcal{A}$
is \emph{topologically right quasi-invertible}
if there exists a net $(b_j)_{j\in J}$
in $\mathcal{A}$ such that the net $(a\circ b_j)_{j\in J}$ converges to the zero element of $\mathcal{A}$. We denote the set of all topologically right quasi-invertible elements of $\mathcal{A}$ by $\operatorname{TopQInv}_r(\mathcal{A})$.
\end{definition}
Note that the zero element of an algebra $\mathcal{A}$ always belongs to $\operatorname{TopQInv}_r(\mathcal{A})$. It should be noted that the set of topologically quasi-invertible elements had already been considered by M. Abel~\cite{Abel} in 2001, although from another point of view. Indeed, Abel provided a characterization of topological algebras in which the set of topologically quasi-invertible elements coincides with the set of quasi-invertible elements. Recently, Abel and Z\'{a}rate-Rodr\'{i}guez~\cite{A-ZR} studied the properties of left, right, and two-sided topologically quasi-invertible elements, showing that all these sets are $G_\delta$-sets in F-algebras $\mathcal{A}$.
A connection between $\operatorname{TopQInv}_r(\mathcal{A})$ and $\TopInv_r(\mathcal{A})$ for a unital normed algebra $\mathcal{A}$ is well-known. Note that in a unital algebra $\mathcal{A}$ with unit $e$ we have $(e-a)(e-b)=e-a-b+ab=e+a\circ b$ for all $a,b\in\mathcal{A}$, so the equivalence $a\circ b=0\Leftrightarrow (e-a)(e-b)=e$ holds. This implies that $x$ is right quasi-invertible if and only if $e-x$ is right invertible. A topological version then reads as follows.
\begin{proposition}{\cite[Proposition 2.2(i)]{Zohri-Jabbari-2010}}\label{prop-TopQInvr=e-TopInvr}
Let $\mathcal{A}$ be a
normed algebra with unit $e$. Then
$a\in\operatorname{TopQInv}_r(\mathcal{A})$ if and only if $(e-a)\in\TopInv_r(\mathcal{A})$.
\end{proposition}
The previous result in a unital algebra $\mathcal{A}$ can be written as $\operatorname{TopQInv}_r(\mathcal{A})=\{e\}-\TopInv_r(\mathcal{A})$. In the case of a non-unital algebra $\mathcal{A}$ we can use \cite[Proposition 2.2(iii)]{Zohri-Jabbari-2010} to conclude that $\operatorname{TopQInv}_r(\mathcal{A})=\operatorname{TopQInv}_r(\mathcal{A}_{1})$, where $\mathcal{A}_{1}$ is the unitization of the normed algebra $\mathcal{A}$. Thus, by Proposition \ref{prop-TopQInvr=e-TopInvr} and remarks above
we have
$\operatorname{TopQInv}_r(\mathcal{A})=\{(0,1)\}-\AppInv_r(\mathcal{A}_{1}).$
\subsection{Relation with
ideals}\label{sec:ideals}
In the literature one can also find a slightly different definition of topological invertibility, see e.g.~\cite{Peimbert-Hoyo}. Indeed, the equalities $\overline{x\mathcal{A}} = \overline{\mathcal{A} x} = \mathcal{A}$, with $x$ being an element of a unital normed algebra $\mathcal{A}$, are taken therein as the definition of topological invertibility of $x\in\mathcal{A}$. Now we extend these equalities to the case of approximate invertibility. Note that the image (range) of the left multiplication operator $L_x$ is the right principal ideal
generated by $x$, i.e., $\operatorname{Ran}(L_x) = x\mathcal{A}$.
Now, we show that the approximate right invertibility of $x$
is closely related to the density of $x\mathcal{A}$ in $\mathcal{A}$.
\begin{theorem}\label{thm:dense_ideals_implies_ainv}
Let $\mathcal{A}$ be a normed algebra with an approximate identity and $x\in\mathcal{A}$.
Then $x\mathcal{A}$ is dense in $\mathcal{A}$, if and only if $x\in\AppInv_r(\mathcal{A})$.
\end{theorem}
\begin{proof}
Suppose that $x\mathcal{A}$ is dense in $\mathcal{A}$. Then by \cite[Lemma 1.4]{Doran-Wichmann}
we can find an approximate identity $(\aid_j)_{j\in J}$ with values in $x\mathcal{A}$.
This means that each $\aid_j$ may be written as $xr_{j}$
with some $r_{j}\in\mathcal{A}$,
and, by the definition of approximate invertibility,
$x\in\AppInv_r(\mathcal{A})$.
Conversely, if $x\in\AppInv_{r}(\mathcal{A})$ then there is a net $(r_{j})_{j\in J}$ in $\mathcal{A}$
such that $(xr_{j})_{j\in J}$ is an approximate identity in $\mathcal{A}$.
Hence, given $z\in\mathcal{A}$, the net $(xr_{j}z)_{j\in J}$
takes values in $x\mathcal{A}$ and converges to $z$.
Therefore, $x\mathcal{A}$ is dense in $\mathcal{A}$.
\end{proof}
Note that if in a normed algebra $\mathcal{A}$ with an approximate identity there exists a dense principal right ideal, say $x_{0}\mathcal{A}$, then by Theorem \ref{thm:dense_ideals_implies_ainv} we have $x_{0}\in\AppInv_{r}(\mathcal{A})$, i.e., $\AppInv_{r}(\mathcal{A})\neq\emptyset.$
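The following elementary example illustrates Theorem \ref{thm:dense_ideals_implies_ainv}.
\begin{example}
Let $c_0$ be the Banach algebra of complex null sequences with pointwise operations and the sup-norm; it is non-unital, and the sequences $e_N=(1,\dots,1,0,0,\dots)$ (with $N$ ones) form an approximate identity. For $x=(1/n)_{n\geq1}\in c_0$ put $r_N=(1,2,\dots,N,0,0,\dots)\in c_0$. Then $xr_N=e_N$, so $(xr_N)_{N\in\mathbb{N}}$ is an approximate identity and $x\in\AppInv(c_0)$; equivalently, $xc_0$ contains all finitely supported sequences and is therefore dense in $c_0$.
\end{example}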
\begin{proposition}\label{prop:x-oveAx}
Let $\mathcal{A}$ be a
normed algebra with an approximate identity.
Then $x\in\overline{x\mathcal{A} }$.
\end{proposition}
\begin{proof}
Let $(e_{j})_{j\in\,J}$ be an approximate identity in $\mathcal{A}$. Then, the net $(xe_{j})_{j\in\,J}$ takes values in $x\mathcal{A} $ and hence $\displaystyle x = \lim_{j\in J}xe_{j}\in\overline{x\mathcal{A} }$.
\end{proof}
\begin{proposition}
Let $\mathcal{A}$ be a non-unital normed algebra and
$x\in\AppInv_r(\mathcal{A})$.
Then $x\notin\mathcal{A} x$.
\end{proposition}
\begin{proof}
Suppose that $x\in\mathcal{A} x$.
Then there is an element $y\in\mathcal{A}$ such that $x=yx$.
On the other hand, since $x$ is approximately right invertible,
there is a net $(r_{j})_{j\in\,J}$ in $\mathcal{A}$
such that $(xr_{j})_{j\in\,J}$ is an approximate identity in $\mathcal{A}$.
Now, $y=\lim\limits_{j\in\,J} yxr_{j}=\lim\limits_{j\in\,J} xr_{j}$.
By Proposition~\ref{prop:convergentAI}, $y$ is an identity in $\mathcal{A}$,
which contradicts the hypothesis.
\end{proof}
\begin{theorem}\label{conj-app=A/I}
Let $\mathcal{A}$ be a non-unital normed algebra with an approximate identity
and $\mathcal{I}_{\hspace{0pt} r}(\mathcal{A})$ be the set of all closed proper right ideals in $\mathcal{A}$. Then \begin{equation}\label{app=au}
\AppInv_{r}(\mathcal{A})= \mathcal{A}\setminus \bigcup_{I\in\mathcal{I}_{\hspace{0pt} r}(\mathcal{A})}\hspace{-5pt} I.
\end{equation}
\end{theorem}
\begin{proof}
Suppose that $x\in\AppInv_{r}(\mathcal{A})$ and assume there exists $I_{x}\in \mathcal{I}_r(\mathcal{A})$ such that $x\in I_{x}$. Thus, $x\mathcal{A}\subset I_{x}$ and, in consequence, $\mathcal{A}=I_{x}$ because $x\mathcal{A}$ is dense in $\mathcal{A}$ by Theorem \ref{thm:dense_ideals_implies_ainv}, which yields a contradiction. Then $\displaystyle x\in \mathcal{A}\setminus \bigcup_{I\in\mathcal{I}_r(\mathcal{A})}\hspace{-5pt}I$.
On the other hand, let $\displaystyle x\in\mathcal{A}\setminus \bigcup_{I\in\mathcal{I}_r(\mathcal{A})}\hspace{-5pt} I$ and assume that $x\notin\AppInv_{r}(\mathcal{A})$. Therefore, $\overline{x\mathcal{A}}\in\mathcal{I}_{r}(\mathcal{A})$ since $x\mathcal{A}$ is not dense in $\mathcal{A}$ by Theorem \ref{thm:dense_ideals_implies_ainv}. Moreover,
$x\in\overline{x\mathcal{A}}$ by Proposition \ref{prop:x-oveAx}. This contradicts our assumption, and hence $x\in\AppInv_{r}(\mathcal{A})$, i.e.,
\eqref{app=au} holds.
\end{proof}
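To illustrate \eqref{app=au} in a concrete situation, anticipating Section~\ref{Section:AbelianC*-algebras}: for $\mathcal{A}=C_0(X)$ with $X$ a non-compact locally compact Hausdorff space, the closed proper ideals are exactly the sets $I_E=\{f\in C_0(X)\colon f|_E=0\}$ with $\emptyset\neq E\subseteq X$ closed, so the right-hand side of \eqref{app=au} becomes
\[
\mathcal{A}\setminus \bigcup_{I\in\mathcal{I}_{r}(\mathcal{A})} I=\{f\in C_0(X)\colon f(t)\neq0\ \text{for every}\ t\in X\},
\]
in accordance with Proposition~\ref{prop:criterion_ainv_in_csa} below.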
In a unital normed algebra $\mathcal{A}$ every proper left [right, two-sided] ideal is contained in a maximal left [right, two-sided] ideal. This is a crucial tool in studying normed algebras with identity, but it is no longer valid for algebras without identity. Next we study the regular, or modular, ideals in normed algebras without identity.
For more details see \cite[Section 1.1]{Larsen}.
\begin{definition}
Let $\mathcal{A}$ be an algebra. A left [right; two-sided]
ideal $I$ is said to be \emph{modular}, if there exists some $v\in\mathcal{A}$ such that $xv-x\in\,I$ [$vx-x\in\,I$; $xv-x\in\,I$ and $vx-x\in\,I$]
for all $x\in\mathcal{A}$. The element $v$ is called a left [right; two-sided] \emph{identity modulo} $I$.
\end{definition}
\begin{proposition}\label{prop:ainv_modid}
Let $\mathcal{A}$ be a non-unital normed algebra.
If $x\in\AppInv_r(\mathcal{A})$,
then there is no maximal modular right ideal $J$ such that $x\in\,J$.
\end{proposition}
\begin{proof}
If there is a maximal modular right ideal $J\subsetneq\mathcal{A}$
such that $x\in\,J$, then $x\mathcal{A}\subset\,J$.
However, every maximal modular right ideal is closed,
see \cite{Larsen}; hence, by Theorem \ref{thm:dense_ideals_implies_ainv}, we have $\mathcal{A}=\overline{x\mathcal{A}}\subset\,J$.
Thus, $J=\mathcal{A}$, which is a contradiction with the maximality of $J$.
Hence, such a maximal modular right ideal does not exist.
\end{proof}
Given a proper closed right ideal $I$ in $\mathcal{A}$,
we denote by $\mathcal{H}_r(I,\mathcal{A})$ the set of all
maximal right modular ideals containing $I$. The converse implication of Proposition \ref{prop:ainv_modid} follows easily in normed algebras with ``rich'' families of maximal right modular ideals, i.e., algebras in which $\mathcal{H}_{r}(I,\mathcal{A})\neq\emptyset$ for every proper closed right ideal $I$. In that case, approximate right invertibility may be described as follows.
\begin{proposition}\label{prop:rich_modular_ideals}
Let $\mathcal{A}$ be a non-unital normed algebra with an approximate identity such that $\mathcal{H}_r(I,\mathcal{A})\ne\emptyset$
for every proper closed right ideal $I$ in $\mathcal{A}$.
Then the following conditions are equivalent:
\begin{enumerate}
\item $x\in\AppInv_r(\mathcal{A})$,
\item $x\mathcal{A}$ is dense in $\mathcal{A}$,
\item for every maximal right modular ideal $J$ in $\mathcal{A}$ we have
$x\notin J$.
\end{enumerate}
\end{proposition}
\begin{proof}The equivalence (i)$\Leftrightarrow$(ii) and the implication (ii)$\Rightarrow$(iii) were proven before in Theorem \ref{thm:dense_ideals_implies_ainv} and
Proposition \ref{prop:ainv_modid}, respectively.
It remains to prove (iii)$\Rightarrow$(ii). Suppose that (ii) is not true. Then $\overline{x\mathcal{A}}$ is a proper closed right ideal of $\mathcal{A}$. Let $J\in \mathcal{H}_r(\overline{x\mathcal{A}},\mathcal{A})$; then $x\in\overline{x\mathcal{A}}\subset J$ by Proposition \ref{prop:x-oveAx}, so (iii) fails.
\end{proof}
If $\mathcal{A}$ is a radical algebra,
then the set $\mathcal{H}_{r}(I,\mathcal{A})$ is empty. For the group algebra $L^1(G)$ with a non-discrete locally compact abelian topological group $G$ and $C_{0}(X)$ with a locally compact Hausdorff topological space $X$, the set $\mathcal{H}_{r}(I,\mathcal{A})$ is nonempty, see \cite[\textsection 8.7]{Larsen}.
More general examples include the regular semisimple tauberian abelian Banach algebras, see~\cite[Lemma 5.1.9]{Kaniuth}, \cite[\textsection 8.7]{Larsen}.
\subsection{Elementary properties}
This section summarizes some elementary properties of approximate invertibility. The first results connect approximate invertibility with convergence of the corresponding net and construction of approximate identities.
\begin{proposition}\label{prop:AppInv->divergent_net}
Let $\mathcal{A}$ be a non-unital normed algebra. If $x\in\AppInv_r(\mathcal{A})$, then the corresponding net $(r_j)_{j\in J}$ in $\mathcal{A}$ is not convergent.
\end{proposition}
\begin{proof}
Suppose that $(r_j)_{j\in J}$ is convergent in $\mathcal{A}$, i.e., there exists an element $r\in\mathcal{A}$ such that $r_j\to r$ in $\mathcal{A}$. By the continuity of multiplication, $xr_j\to xr$. Moreover, $x\in\mathcal{A}$ is approximately right invertible, which means that the net $(xr_j)_{j\in J}$ is an approximate identity in $\mathcal{A}$ with the limit $xr$. Thus, Proposition~\ref{prop:convergentAI} states that $\mathcal{A}$ is unital, which is a contradiction.
\end{proof}
\begin{proposition}\label{Proposition:net}
Let $\mathcal{A}$ be a normed algebra and $x\in\mathcal{A}$ be boundedly approximately right and left invertible. Then there exists a net $(w_k)_{k\in K}$ such that the net $(xw_{k}x)_{k\in\,K}$ is a bounded approximate identity in $\mathcal{A}$.
\end{proposition}
\begin{proof}
Let $(l_i)_{i\in I}$ and $(r_j)_{j\in J}$ be the nets
such that $(l_i x)_{i\in I}$ and $(x r_j)_{j\in J}$
are bounded approximate identities in $\mathcal{A}$. By~\cite[Corollary 7, Chapter 6]{Feichtinger}
the net $(xr_{j}l_{i}x)_{(i,j)\in I\times J}$ is a bounded approximate identity in $\mathcal{A}$. Now it is enough to put $w_{(i,j)}=r_jl_i$ for $(i,j)\in I\times J$.
\end{proof}
\begin{remark}
Moreover, if the assumption of the latter proposition is fulfilled, then
for each $z\in\mathcal{A}$ it holds \begin{equation}\label{limu_pv_q}\lim_{(i,j)\in I\times J} l_ixzxr_j = z,
\end{equation} where $(l_i)_{i\in\,I}$ and $(r_j)_{j\in\,J}$ are the corresponding nets from the proof of Proposition~\ref{Proposition:net}. The proof of~(\ref{limu_pv_q}) is based on the inequality $\|l_ixzxr_j-z\|\leq \|l_ixz-z\|\cdot\|xr_j\|+\|zxr_j-z\|.$
By \cite[Proposition 2.6]{Doran-Wichmann} the net $(l_{i}x+xr_{j}-xr_{j}l_{i}x)_{(i,j)\in I\times J}$ is a bounded approximate identity in $\mathcal{A}$.
\end{remark}
The following result describes a connection between approximate invertibility and the left multiplication operator.
\begin{lemma}\label{left-multi}
Let $\mathcal{A}$ be a normed algebra. If $x\in\mathcal{A}$ is boundedly or op-boundedly approximately left invertible in $\mathcal{A}$, then the left
multiplication operator $L_x\colon \mathcal{A}\to\mathcal{A}$ given by $L_{x}(y)=xy$ for $y\in\mathcal{A}$, is bounded.
\end{lemma}
\begin{proof}
Assume first that $x$ is boundedly approximately left invertible in $\mathcal{A}$. Then there exists a net $(l_{j})_{j\in J}$ such that $(l_{j}x)_{j\in J}$ is an approximate identity in $\mathcal{A}$ with $\|l_{j}x\|\leq M$ for all $j$ and some $M>0$. Thus, $\|L_{x}(l_{j}xy)\|\leq M\,\| x\|\,\|y\|$ for every $j$ and every $y\in\mathcal{A}$. Now, by continuity of the product and the norm $\|\cdot\|$ in $\mathcal{A}$ one has $\displaystyle\lim_{j\in J}\|L_{x}(l_{j}xy)\|=\|L_{x}(y)\|$. Hence, $\|L_{x}(y)\|\leq K\|y\|$ for every $y\in\mathcal{A}$ with $K=M\|x\|$.
Now, assume that $x$ is op-boundedly approximately left invertible in $\mathcal{A}$. Then there exists a net $(l_{j})_{j\in J}$ such that $(l_{j}x)_{j\in J}$ is an approximate identity in $\mathcal{A}$ with $M=\sup_{j\in J}\|T_{j}\|_{\rm{op}}<\infty$, where $T_{j}=L_{l_{j}x}$, i.e., $\|T_{j}\|_{\rm{op}}=\sup_{\|y\|\leq 1}\|l_{j}xy\|$ for all $j$. Thus, for every $j$ and every $y\in\mathcal{A}$ we have, $$\|L_{x}(l_{j}xy)\|=\|xT_{j}y\|\leq \| x\|\,\|T_{j}y\|\leq \|x\|\,\|T_{j}\|_{\rm{op}}\,\|y\|\leq M\,\|x\|\,\|y\|.$$ Now, by continuity of the product and the norm $\|\cdot\|$ in $\mathcal{A}$ one has $\displaystyle\lim_{j\in J}\|L_{x}(l_{j}xy)\|=\|L_{x}(y)\|$. Hence, $\|L_{x}(y)\|\leq K\|y\|$ for every $y\in\mathcal{A}$ with $K=M\|x\|$.
\end{proof}
Now, we prove that the set of all boundedly approximately invertible elements of an abelian algebra is closed under multiplication of finitely many elements.
\begin{proposition}
\label{prop:bounded_approx_inv_of_product}
Let $\mathcal{A}$ be an abelian normed algebra, and $x_1, \dots, x_n\in\mathcal{A}$, $n\in\mathbb{N}$. Then $x_1\cdots x_n$ is boundedly approximately invertible in $\mathcal{A}$ if and only if for each $k$ in $\{1,\ldots,n\}$ the element $x_k$ is boundedly approximately invertible in $\mathcal{A}$.
\end{proposition}
\begin{proof}
Let $x_1\cdots x_n$ be boundedly approximately invertible in $\mathcal{A}$. Then there exists a net $(u_i)_{i\in I}$ in $\mathcal{A}$ such that $(x_1\cdots x_nu_i)_{i\in I}$ is a bounded approximate identity in $\mathcal{A}$, i.e., there is a constant $K>0$ such that $\|x_1\cdots x_n u_i\|\leq K$ for each $i\in I$, and for each $z\in\mathcal{A}$ it holds $x_1\cdots x_n u_i z \to z$. By the commutativity, it is sufficient to show that $x_1$ is boundedly approximately invertible in $\mathcal{A}$. For each $i$ in $I$ we put $w_i = x_2\cdots x_n u_i$. Clearly, $\|x_1w_i\|\leq K$ for each $i\in I$, and for each $z\in\mathcal{A}$ the net $(x_1w_iz)_{i\in I}$ converges to $z$, i.e., $(x_1w_i)_{i\in I}$ is a bounded approximate identity in $\mathcal{A}$. Thus, $x_1$ is boundedly approximately invertible in $\mathcal{A}$.
In the converse direction, it is sufficient to treat the case $n=2$ only; the general case then follows by induction. Let $x_{1}$ and $x_{2}$ be boundedly approximately invertible in $\mathcal{A}$, i.e., there exist nets $(u_i)_{i\in I}$ and $(v_j)_{j\in J}$ in $\mathcal{A}$ such that $(x_1u_i)_{i\in I}$ and $(x_2v_j)_{j\in J}$ are approximate identities in $\mathcal{A}$, and constants $K,M>0$ such that $\|x_1u_i\|\leq K$ for each $i\in I$ and $\|x_2v_j\|\leq M$ for each $j\in J$. The mapping $B_{x_{1}}(x,y)=x_{1}yx$ is a bounded bilinear mapping with $\|B_{x_{1}}\|\leq K \|x_{1}\|$ by Lemma \ref{left-multi}. Now, by \cite[Proposition 10]{Feichtinger} for any $z\in\mathcal{A}$ we have
\[\lim_{(i,j)\in I\times J}x_{1}x_{2}u_{i}v_{j}z=\lim_{i\in I}\lim_{j\in J}B_{x_{1}}(u_{i}z,v_{j}x_{2})=\lim_{i\in I}\lim_{j\in J}(x_{1}u_{i}z)(x_{2}v_{j})=z,\]
i.e., $x_{1}x_{2}$ is approximately invertible in $\mathcal{A}$; moreover, $\|x_{1}x_{2}u_{i}v_{j}\|=\|(x_{1}u_{i})(x_{2}v_{j})\|\leq KM$, so $x_{1}x_{2}$ is boundedly approximately invertible in $\mathcal{A}$.
\end{proof}
An analogue to Proposition~\ref{prop:bounded_approx_inv_of_product} also holds for op-boundedly approximately invertible elements.
\begin{proposition}
\label{prop:opbounded_approx_inv_of_product}
Let $\mathcal{A}$ be an abelian normed algebra, and $x_1,\dots,x_n\in\mathcal{A}$, $n\in\mathbb{N}$.
Then $x_1\cdots x_n$ is op-boundedly approximately invertible in $\mathcal{A}$ if and only if
for each $k$ in $\{1,\ldots,n\}$ the element $x_k$ is op-boundedly approximately invertible in $\mathcal{A}$.
\end{proposition}
\begin{proof}
The proof is almost the same as the proof of Proposition~\ref{prop:bounded_approx_inv_of_product}.
In the second part,
we suppose that $\|L_{x_1 u_i}\|_{\text{op}}\le K$
for each $i$ in $I$
and $\|L_{x_2 v_j}\|_{\text{op}}\le M$ for each $j$ in $J$.
Next, we apply the upper bound
$\|L_{x_1 x_2 u_i v_j}\|_{\text{op}}
=\|L_{x_1 u_i} L_{x_2 v_j}\|_{\text{op}}
\le K M$.
\end{proof}
\begin{proposition}\label{prop:right_zero_divisor_is_not_appinvr}
Let $\mathcal{A}$ be a non-unital normed algebra.
If $x\in\AppInv_r(\mathcal{A})$,
then $x$ is not a right zero divisor in $\mathcal{A}$.
\end{proposition}
\begin{proof}
Suppose that $x\ne0$ and there exists $y\in\mathcal{A}$ such that $y\ne0$ and $yx=0$.
If $(r_j)_{j\in J}$ is a net in $\mathcal{A}$, then $yxr_j=0$ for $j\in J$.
So, the net $(yxr_j)_{j\in J}$ cannot converge to $y$, and $x\notin\AppInv_r(\mathcal{A})$.
\end{proof}
Finally, we mention a relation with topological divisors of zero. Recall that an element $x$ of an algebra $\mathcal{A}$ is called a \textit{left} (resp. \textit{right}) \textit{topological divisor of zero} if there is a net $(l_j)_{j\in J}$
(resp. $(r_j)_{j\in J}$) in $\mathcal{A}$
such that the net $(xl_j)_{j\in J}$ (resp. $(r_j x)_{j\in J}$) converges to $0$, whereas the net $(l_j)_{j\in J}$ (resp. $(r_j)_{j\in J}$) does not converge to $0$.
Different characterizations of topological divisors of zero may be found in~\cite[Theorem~1.6.2]{Larsen}.
As is well-known, in unital Banach algebras topological divisors of zero are ``fundamentally'' non-invertible (i.e., there is no larger normed algebra in which they might become invertible). Recently, Schulz, Brits, and Hasse have shown that in non-unital Banach algebras admitting a two-sided (left, resp. right), not necessarily bounded, approximate identity every element is a two-sided (right, resp. left) topological divisor of zero, cf. \cite[Theorem 1.2]{SchulzBritsHasse2017}. Thus, the relationship between approximately invertible elements and topological divisors of zero in non-unital Banach algebras is an easy consequence. However, we provide an alternative (and independent) proof of this result.
\begin{proposition}\label{AppInvR->TdzL}
Let $\mathcal{A}$ be a non-unital Banach algebra. Then every approximately right invertible element in $\mathcal{A}$ is a left topological divisor of zero.
\end{proposition}
\begin{proof}
If $x\in\AppInv_r(\mathcal{A})$, then there is a net $(r_j)_{j\in J}$ in $\mathcal{A}$ such that $(xr_j)_{j\in J}$ is an approximate identity in $\mathcal{A}$.
Suppose that $x$ is not a left topological divisor of zero. Then $\zeta_l(x) = \inf\limits_{y\in\mathbb{S}(\mathcal{A})} \|xy\| > 0$, see \cite[Theorem 1.6.2, page 46]{Larsen}. Without loss of generality we may assume that $\zeta_l(x)\geq 1$ (replacing $x$ by $x/\zeta_l(x)$), which is equivalent to the assertion that \begin{equation}\label{eq:inequality}
\|xy\|\geq \|y\|\,\,\, \textrm{for each}\,\, y\in\mathcal{A}.
\end{equation} Since $(xr_j)_{j\in J}$ is an approximate identity in $\mathcal{A}$ and
$$
\|(r_i-r_j)x\| \leq \|x(r_i-r_j)x\|=\|(xr_ix-x) - (xr_jx-x)\| \leq \|(xr_i)x-x\| + \|(xr_j)x-x\|,
$$
the net $(r_jx)_{j\in J}$ is Cauchy in $\mathcal{A}$.
From the completeness of $\mathcal{A}$ it follows that $(r_jx)_{j\in J}$ converges in $\mathcal{A}$, say to $u\in\mathcal{A}$.
Immediately, $\displaystyle \lim_{j\in J}x(xr_{j})=x=\lim_{j\in J} (xr_{j})x
=\lim_{j\in J} x(r_{j}x)=xu,$
but $\|xr_j-u\|\le\|x(xr_{j})-xu\|$ by \eqref{eq:inequality},
therefore $(xr_j)_{j\in J}$ converges to $u$.
By Proposition~\ref{prop:convergentAI}, $u$ is the unit in $\mathcal{A}$,
which contradicts the assumption that $\mathcal{A}$ is not unital. Therefore, we conclude that $x$ is a left topological divisor of zero.
\end{proof}
\begin{remark}
Clearly, in non-unital abelian Banach algebras the above result implies that every approximately invertible element is a topological divisor of zero. In the non-commutative case we need to assume that $x$ is approximately right and left invertible in order for $x$ to be a two-sided topological divisor of zero.
\end{remark}
\section{Approximate invertibility in some classes of algebras}\label{sec:AppInv in algebras}
Here we provide a detailed study of approximate invertibility in some classes of algebras such as Banach algebras, C*-algebras and involutive algebras.
\subsection{Non-unital abelian Banach algebras}
Invertibility of elements of an abelian Banach algebra is a matter of Gelfand's spectral theory. Here we study approximate invertibility and its connection with Gelfand's transform for the case of non-unital abelian Banach algebras.
The Gelfand theory of non-unital abelian Banach algebras is explained, for example, in~\cite[Section 1.3]{Murphy} and \cite[Section~3.1]{Larsen}.
For an algebra $\mathcal{A}$ denote by $\Characters{\mathcal{A}}$ the set of its \emph{characters}, i.e., non-zero algebra homomorphisms $\mathcal{A}\to\mathbb{C}$. The set $\Characters{\mathcal{A}}$ is a subset of the closed unit ball of the dual space of $\mathcal{A}$.
This set is equipped with the relative weak-* topology and may be identified one-to-one with the set $\{M\colon M\,\text{is a maximal modular ideal of $\mathcal{A}$}\}$ by means of the mapping $\varphi\mapsto\ker\varphi$.
Given $x\in\mathcal{A}$, we denote by $\hat x$ the function $\Characters{\mathcal{A}}\to\mathbb{C}$
defined by $\hat{x}(\phi)=\phi(x)$ for every $\phi\in\Characters{\mathcal{A}}$.
The correspondence $x\mapsto\hat{x}$ is called the \emph{Gelfand transform}.
It is well known that the Gelfand transform is a homomorphism of $\mathcal{A}$
into $C_0(\Characters{\mathcal{A}})$. Moreover, if $\|\cdot\|_{\infty}$ denotes the sup-norm on $C_{0}(\Characters{\mathcal{A}})$, then $\|\hat{x}\|_{\infty}\leq\|x\|$ for every $x\in\mathcal{A}$.
\begin{proposition}
\label{prop:aid_converges_pointwisely_to_one}
Let $\mathcal{A}$ be a non-unital abelian Banach algebra
and $(\aid_{j})_{j\in J}$ be an approximate identity in $\mathcal{A}$.
Then $ \lim_{j\in J} \hat{\aid}_{j}(\phi)=1$ for every $\phi\in\mathcal{M}_{\mathcal{A}}$.
\end{proposition}
\begin{proof}
Let $\phi\in\mathcal{M}_{\mathcal{A}}$.
Select $x\in\mathcal{A}$ such that $\phi(x)\ne0$, i.e. $\hat{x}(\phi)\ne0$.
Since $\phi$ is a multiplicative functional on $\mathcal{A}$, we have
$
|\hat{x}(\phi)|\,|\hat{\aid}_j(\phi)-1|
=|\phi(x)\phi(\aid_j)-\phi(x)|
=|\phi(x\aid_j-x)|
\le\|x\aid_j-x\|.
$
Since the net $(\|x\aid_j-x\|)_{j\in J}$ tends to $0$, the net $(\hat{\aid}_j(\phi)-1)_{j\in J}$ tends to zero as well.
\end{proof}
Since every maximal modular ideal $I$ of a non-unital abelian Banach algebra $\mathcal{A}$ is of the form $I=\ker(\varphi)$ for some $\varphi\in\Characters{\mathcal{A}}$, Theorem \ref{thm:dense_ideals_implies_ainv} and Proposition \ref{prop:ainv_modid} may be summarized as follows.
\begin{proposition}\label{prop:ainv_situation_in_cba}
Let $\mathcal{A}$ be a non-unital abelian Banach algebra with an approximate identity
and $x\in\mathcal{A}$. The following statements hold:
\begin{enumerate}
\item[\rm{(a)}] $x\in\AppInv(\mathcal{A})$ if and only if
$x\mathcal{A}$ is dense in $\mathcal{A}$.
\item[\rm{(b)}] If $x\in\AppInv(\mathcal{A})$, then there is no maximal modular ideal $J$ such that $x\in J$.
\item[\rm{(c)}] If $x\in\AppInv(\mathcal{A})$,
then $\hat{x}(\varphi)\ne0$ for every
$\varphi$ in $\Characters{\mathcal{A}}$.
\end{enumerate}
\end{proposition}
\begin{remark}
The reverse implication in (b) need not hold in general (compare Proposition \ref{prop:rich_modular_ideals}); see the small disk algebra in Section~\ref{Section:SmallDiskAlgebra}. However, there are some classes of non-unital abelian Banach algebras where the converse holds true, e.g., non-unital abelian C*-algebras, see Section~\ref{Section:AbelianC*-algebras}.
More generally, the reverse implication in (b) holds for regular semisimple tauberian abelian Banach algebras, see~\cite[Lemma 5.1.9]{Kaniuth}. A general characterization of all such algebras is unknown to us.
\end{remark}
The next result is a consequence of Proposition \ref{prop:ainv_situation_in_cba} saying that the set $\AppInv(\mathcal{A})$ is a proper subset of $\mathcal{A}\setminus\{0\}$ provided $\Characters{\mathcal{A}}\neq\emptyset$.
\begin{corollary}\label{cor-app=A/0}
Let $\mathcal{A}\neq\{0\}$ be a non-unital abelian Banach algebra with an approximate identity
such that $\mathcal{A}\setminus\{0\}=\AppInv(\mathcal{A})$. Then $\Characters{\mathcal{A}}=\emptyset$, i.e., there are no proper modular ideals in $\mathcal{A}$.
\end{corollary}
\subsection{Non-unital, non-abelian C*-algebras and maximal modular
ideals}\label{Section:nonAbelianC*-algebras}
Next, we study approximately invertible elements in C*-algebras by means of maximal modular right ideals and pure states.
It is well-known that in a C*-algebra $\mathcal{A}$ the following identity holds:
\begin{equation}\label{norminvolution}
\|x^{*}\|=\|x\|,\quad\, x\in\mathcal{A}.
\end{equation}
However, since in any normed algebra $\mathcal{A}$ with a continuous involution $x\mapsto x^{*}$ there is an equivalent submultiplicative norm satisfying the condition \eqref{norminvolution}, see \cite{Zelazko}, we may consider the mentioned relationship in algebras $\mathcal{A}$ with involutions and norms satisfying \eqref{norminvolution}. The relationship between an approximately right (left) invertible element and its adjoint is the following.
\begin{proposition}\label{AppInvr-AppInvl}
Let $\mathcal{A}$ be a normed $*$-algebra. Then $x\in\AppInv_{r}(\mathcal{A})$ if and only if $x^{*}\in\AppInv_{\ell}(\mathcal{A})$.
\end{proposition}
\begin{proof}
If $x\in\AppInv_{r}(\mathcal{A})$, then there is a net $(r_{j})_{j\in J}$ in $\mathcal{A}$ such that for all $y\in\mathcal{A}$ we have $\lim_{j\in J}\|xr_{j}y-y\|=\lim_{j\in J}\|yxr_{j}-y\|=0$. Hence, by \eqref{norminvolution}, the net $(l_{j})_{j\in J}$ with $l_{j}=r_{j}^{*}$ satisfies
\begin{align*}
\lim_{j\in J}\|l_{j}x^{*}y-y\|&=\lim_{j\in J}\left\|\left(l_{j}x^{*}y-y\right)^{*}\right\|
=\lim_{j\in J}\|y^{*}xr_{j}-y^{*}\|=0\\
&=\lim_{j\in J}\|yl_{j}x^{*}-y\|=\lim_{j\in J}\|xr_{j}y^{*}-y^{*}\|, \quad y\in\mathcal{A}.
\end{align*}
That is, $(l_{j}x^{*})_{j\in J}$ is an approximate identity in $\mathcal{A}$, proving that $x^{*}\in\AppInv_{\ell}(\mathcal{A})$. The same argument provides the converse implication.
\end{proof}
It is well-known that every C*-algebra possesses an approximate identity (see, for instance \cite[Theorem 3.1.1]{Murphy}).
We denote by $\operatorname{PS}(\mathcal{A})$ the set of all pure states on $\mathcal{A}$.
For any $\tau\in\operatorname{PS}(\mathcal{A})$, the set
\begin{equation}\label{Nt}
N_{\tau}=\{a\in\mathcal{A}\colon \tau(a^{*}a)=0\}
\end{equation}
is a maximal modular left ideal of $\mathcal{A}$,
and $\tau\mapsto N_\tau$ is a bijective correspondence
between the pure states and the modular maximal left ideals of a C*-algebra, see for instance \cite[Theorem 5.3.5]{Murphy}. Then the criterion for approximate invertibility in a non-unital C*-algebra reads as follows.
\begin{theorem}\label{Appinvl-modi-C*}
Let $\mathcal{A}$ be a non-unital non-abelian C*-algebra, and $x\in\mathcal{A}$. Then the following statements are equivalent:
\begin{enumerate}
\item $x\in\AppInv_{r}(\mathcal{A})$.
\item $x$ does not belong to any maximal modular right ideal.
\item $\tau(xx^{*})\neq 0$ for all $\tau\in \operatorname{PS}(\mathcal{A})$.
\end{enumerate}
\end{theorem}
\begin{proof}
By Proposition~\ref{prop:ainv_modid},
(i) implies (ii) and the equivalence (ii)$\Leftrightarrow$(iii) follows from \cite[Theorem 5.3.5]{Murphy}, so
we only have to prove that (iii) implies (i).
Suppose that (i) is not true. Then, by Theorem \ref{thm:dense_ideals_implies_ainv}, $x\mathcal{A}$ is not dense in $\mathcal{A}$, and hence $\overline{\mathcal{A} x^{*}}$ is a proper closed left ideal of $\mathcal{A}$ by Proposition \ref{AppInvr-AppInvl}, and
$x^{*}\in\overline{\mathcal{A} x^{*}}$ by Proposition \ref{prop:x-oveAx}.
However,
by~\cite[Theorem 5.3.3]{Murphy}, there exists $\tau\in\operatorname{PS}(\mathcal{A})$ such that $\overline{\mathcal{A} x^{*}}\subseteq N_\tau$.
So, $x^{*}\in N_\tau$, meaning that (iii) is not true.
\end{proof}
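To see the criterion at work, take $\mathcal{A}=\mathcal{K}(\mathcal{H})$ for an infinite-dimensional Hilbert space $\mathcal{H}$ and recall that the pure states of $\mathcal{K}(\mathcal{H})$ are exactly the vector states $\tau_{\xi}(a)=\langle a\xi,\xi\rangle$ with $\|\xi\|=1$. Condition (iii) then reads
\[
\tau_{\xi}(xx^{*})=\|x^{*}\xi\|^{2}\neq0\quad\text{for every unit vector}\ \xi\in\mathcal{H},
\]
i.e., $x^{*}$ is injective; equivalently, $x\in\AppInv_{r}(\mathcal{K}(\mathcal{H}))$ if and only if $x$ has dense range.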
\subsection{Abelian C*-algebras}\label{Section:AbelianC*-algebras}
In abelian C*-algebras $\mathcal{A}$, any pure state is a character, see for instance \cite[Theorem 5.1.6]{Murphy}; thus Theorem \ref{Appinvl-modi-C*} may be reformulated accordingly, showing that the condition $\hat{x}(\phi)\ne0$ for every $\phi\in\Characters{\mathcal{A}}$
is not only necessary, but also sufficient
for the approximate invertibility of $x$. By the famous Gelfand--Naimark theorem, every abelian C*-algebra $\mathcal{A}$
is isometrically isomorphic to $C_0(\Characters{\mathcal{A}})$.
Therefore, without loss of generality, we may consider only the case $\mathcal{A}=C_0(X)$,
where $X$ is a non-empty locally compact Hausdorff space. Now, we are going to prove Theorem \ref{Appinvl-modi-C*} for abelian C*-algebras using only elementary tools; as a by-product we obtain that the interior of the set $\AppInv(C_{0}(X))$ is empty for every non-compact locally compact Hausdorff space $X$.
Denote by $\mathcal{C}$ the set of all non-empty compact subsets of $X$
equipped with the partial order $\subseteq$.
Note that $\mathcal{C}$ is \emph{directed} by $\subseteq$:
if $K_1,K_2\in\mathcal{C}$, then $K_1\cup K_2\in\mathcal{C}$,
$K_1\subseteq K_1\cup K_2$ and $K_2\subseteq K_1\cup K_2$.
For every $K\in\mathcal{C}$ denote by $1_K$
the function $K\to\mathbb{C}$
defined by $1_K(t)=1$ for every $t\in K$.
\begin{proposition}
\label{prop:crit_aid_in_csa}
Let $(\aid_j)_{j\in J}$ be a bounded net in $C_0(X)$.
Then the following conditions are equivalent:
\begin{enumerate}
\item $(\aid_j)_{j\in J}$ is an approximate identity in $C_0(X)$;
\item for every $K\in\mathcal{C}$,
the net of restrictions $(\aid_j|_K)_{j\in J}$ converges uniformly to $1_K$.
\end{enumerate}
\end{proposition}
\begin{proof}
(i)$\Rightarrow$(ii).
Let $(\aid_j)_{j\in J}$ be an approximate unit in $C_0(X)$
and $K\in\mathcal{C}$. By Urysohn's Lemma there exists a function $f\in C_0(X)$
such that $f|_K=1_K$. Since $(\aid_j)_{j\in J}$ is an approximate unit in $C_0(X)$,
then $\sup_{t\in X}|f(t) \aid_j(t)-f(t)|\to0$
and, in particular, $\sup_{t\in K}|f(t)\aid_j(t)-f(t)|\to0$.
The latter means that $\sup_{t\in K}|\aid_j(t)-1|\to0$.
(ii)$\Rightarrow$(i). Put $\displaystyle M=\sup_{j\in J}\|\aid_j\|_{\infty}.$
Let $f\in C_0(X)$ and $\varepsilon>0$.
The case $f\equiv0$ is trivial, so we suppose that $f\not\equiv0$.
Using the definition of $C_0(X)$ choose $K\in\mathcal{C}$
such that $
|f(t)|\le\dfrac{\varepsilon}{M+1}
$ for every $t\in X\setminus K$.
Since $(\aid_j|_K)_{j\in J}$ converges uniformly to $1_{\hspace{0pt} K}$,
find $j_1\in J$ such that $\displaystyle\sup_{t\in K}|\aid_{j}(t)-1|<\frac{\varepsilon}{\|f\|_{\infty}}$ for every~$j\succ j_1$.
Let $j\succ j_1$.
Then, for every $t\in K$, $|f(t)\aid_j(t)-f(t)|
\le\|f\|_{\infty}\,|\aid_j(t)-1|<\varepsilon,$ while for every $t\in X\setminus K$
\begin{align*}
|f(t)\aid_j(t)-f(t)|
\le |f(t)|\, |\aid_j(t)-1|
\le |f(t)| (M+1) \le \varepsilon.
\end{align*} This completes the proof.
\end{proof}
\begin{corollary} \label{cor:aid_in_csa}
Let $(\aid_K)_{K\in\mathcal{C}}$ be a net in $C_0(X)$ such that
$\|\aid_K\|_\infty=1$ for every $K\in\mathcal{C}$
and $\aid_K(t)=1$ for every $K\in\mathcal{C}$ and every $t\in K$.
Then $(\aid_K)_{K\in\mathcal{C}}$ is an approximate identity in $C_0(X)$.
\end{corollary}
\begin{proof}
Given a compact $K_1\in\mathcal{C}$,
for every $K\in\mathcal{C}$ with $K_1\subseteq K$
we have $\aid_K|_{K_1}=1_{K_1}$.
It means that the net $(\aid_K|_{K_1})_{K\in\mathcal{C}}$
converges uniformly to $1_{K_1}$.
By Proposition~\ref{prop:crit_aid_in_csa},
the net $(\aid_K)_{K\in\mathcal{C}}$
is an approximate identity in $C_0(X)$.
\end{proof}
\begin{proposition}[criterion
for approximate invertibility in abelian C*-algebras]
\label{prop:criterion_ainv_in_csa}
Let $X$ be a non-empty locally compact Hausdorff space and $f\in C_0(X)$.
Then the following conditions are equivalent:
\begin{enumerate}
\item $f$ is approximately invertible;
\item $f C_0(X)$ is dense in $C_0(X)$;
\item $f(t)\ne0$ for every $t\in X$.
\end{enumerate}
\end{proposition}
\begin{proof}
This is a particular case of Theorem~\ref{Appinvl-modi-C*},
but we will provide a more elementary proof.
By Theorem \ref{thm:dense_ideals_implies_ainv},
(i) is equivalent to (ii). By Proposition \ref{prop:ainv_modid}, if (ii) holds, then there is no maximal modular ideal $J$ of $C_{0}(X)$ such that $f\in J$. However, by \cite[Theorem 3.1.2 and Theorem 4.1.2]{Larsen} all the maximal modular ideals in $C_{0}(X)$ are of the form $J=\left\{g \in C_{0}(X)\colon g(t)=0\right\}$ for some $t\in X$. Thus $f(t)\neq0$ for all $t\in X$.
On the other hand, let us suppose that (iii) holds and prove (i).
For every $K\in\mathcal{C}$, applying Urysohn's Lemma for locally compact Hausdorff spaces,
we select a compactly supported function $\aid_K\in C_0(X)$ with values in $[0,1]$
such that $\aid_K(t)=1$ for every $t\in K$.
After that we construct $g_K\colon X\to\mathbb{C}$ by the rule
$g_{K}=\dfrac{\aid_{K}}{f}.
$
Since $f$ vanishes nowhere and $\aid_K$ has compact support, we have $g_K\in C_0(X)$ and $f g_K=\aid_K$.
By Corollary~\ref{cor:aid_in_csa},
$(\aid_K)_{K\in\mathcal{C}}=(fg_K)_{K\in\mathcal{C}}$ is an approximate identity, and hence $f$ is approximately invertible.
\end{proof}
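The criterion can be seen at work in a simple concrete situation.
\begin{example}
Let $X=\mathbb{R}$ and $f(t)=e^{-t^{2}}\in C_0(\mathbb{R})$. The function $f$ vanishes nowhere, so it is approximately invertible by Proposition~\ref{prop:criterion_ainv_in_csa}: choosing $\aid_n\in C_0(\mathbb{R})$ with $0\le\aid_n\le1$, $\aid_n=1$ on $[-n,n]$ and $\aid_n=0$ outside $[-n-1,n+1]$, the functions $g_n(t)=\aid_n(t)\,e^{t^{2}}$ belong to $C_0(\mathbb{R})$ and satisfy $fg_n=\aid_n$. On the other hand, $1/f=e^{t^{2}}$ is unbounded, so $f$ is not invertible even in the unitization of $C_0(\mathbb{R})$.
\end{example}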
\begin{proposition}\label{app-inv-C0T}
Let $X$ be a non-compact locally compact Hausdorff space.
Then the interior of $\AppInv(C_0(X))$ is empty.
\end{proposition}
\begin{proof}
Let $f\in\AppInv(C_0(X))$ and $\varepsilon>0$.
Choose $K\in\mathcal{C}$ such that
$|f(t)|<\varepsilon/2$ for every $t\in X\setminus K$.
Choose an open set $U$ such that $K\subset U$ and $U\ne X$.
Using Urysohn's Lemma for locally compact Hausdorff spaces,
construct a function $g\in C_0(X)$ such that
$g(t)=1$ for every $t\in K$, $0\le g(t)\le 1$ for every $t\in X$
and $\operatorname{supp}(g)\subset U$.
Then $fg\in C_0(X)$,
$fg$ vanishes on $X\setminus U$ and therefore
by Proposition~\ref{prop:criterion_ainv_in_csa} $fg\notin\AppInv(C_0(X))$.
Moreover, $fg$ coincides with $f$ on $K$, and
\begin{displaymath}
\|fg-f\|_\infty
= \sup_{t\in X\setminus K}|f(t)g(t)-f(t)|
= \sup_{t\in X\setminus K}|1-g(t)|\, |f(t)| \le \frac{\varepsilon}{2},
\end{displaymath}
which proves the result.
\end{proof}
From Proposition~\ref{prop:ainv_situation_in_cba} and Proposition~\ref{prop:criterion_ainv_in_csa} we get the following consequence.
\begin{corollary}\label{HatAppinv-AppinvCo}
Let $\mathcal{A}$ be a non-unital abelian Banach algebra with an approximate identity such that $\Characters{\mathcal{A}}\neq\emptyset$. Then $\hat{x}\in\AppInv(C_{0}(\Characters{\mathcal{A}}))$ for every $x\in\AppInv(\mathcal{A}).$
\end{corollary}
Next, we give a sufficient condition under which the set of approximately invertible elements in a non-unital abelian Banach algebra with an approximate identity has empty interior.
\begin{corollary}\label{coro-open-int-emp}
Let $\mathcal{A}$ be a non-unital abelian Banach algebra with an approximate identity. If the Gelfand transform from $\mathcal{A}$ into $C_{0}(\Characters{\mathcal{A}})$ is an open mapping, then the interior of $\AppInv(\mathcal{A})$ is empty.
\end{corollary}
\begin{proof} Let us denote by $\operatorname{Int}B$ and $\Gamma(x)=\hat{x}$ the interior of the set $B$ and
the Gelfand transform from $\mathcal{A}$ into $C_{0}(\Characters{\mathcal{A}})$, respectively. If $\Gamma$ is an open mapping, then $\Gamma\left(\operatorname{Int}(\AppInv(\mathcal{A}))\right)\subset \operatorname{Int}\left(\Gamma(\AppInv(\mathcal{A}))\right)$. Therefore, by Corollary \ref{HatAppinv-AppinvCo} we have $\Gamma(\AppInv(\mathcal{A}))\subset \AppInv(C_{0}(\Characters{\mathcal{A}}))$ and hence $\operatorname{Int}\left(\Gamma(\AppInv(\mathcal{A}))\right)\subset \operatorname{Int}(\AppInv(C_{0}(\Characters{\mathcal{A}})))$. Now, by Proposition \ref{app-inv-C0T} the proof is completed.
\end{proof}
\begin{remark}\label{rem-conse-int-empty}
Let $\mathcal{A}$ be a non-unital abelian Banach algebra with an approximate identity. If the set $\widehat{\mathcal{A}}=\{\hat{x}\colon x\in\mathcal{A}\}$ is a closed subspace of $C_{0}(\Characters{\mathcal{A}})$, where $x\mapsto\hat{x}$ denotes the Gelfand transform, then by the Open Mapping Theorem this transform is open, and hence by Corollary \ref{coro-open-int-emp} the set $\AppInv(\mathcal{A})$ has empty interior.
\end{remark}
\subsection{Involutive algebras, homomorphisms and representations}
It is very natural to study abstract algebras by means of homomorphisms into other well-known algebras, or via their representations on $\mathcal{B}(\mathcal{H})$.
From now on we consider involutions and norms in $\mathcal{A}$ satisfying \eqref{norminvolution}.
First we shall prove that the morphisms of normed algebras
having dense images
preserve the bounded approximate right (or left) invertibility.
\begin{proposition}\label{proposition-homomorphism}
Let $\mathcal{A}, \mathcal{B}$ be normed algebras, and $\pi:\mathcal{A}\rightarrow\mathcal{B}$ be a continuous homomorphism with $\overline{\pi(\mathcal{A})}=\mathcal{B}$. If $x$ is boundedly approximately right invertible in $\mathcal{A}$,
then $\pi(x)$ is boundedly approximately right invertible in $\mathcal{B}$.
\end{proposition}
\begin{proof}
The case $\pi=0$ is trivial; we assume that $\pi\ne0$.
Since $x$ is boundedly approximately right invertible, there is a net $(r_{j})_{j\in J}$ in $\mathcal{A}$ such that $(xr_{j})_{j\in J}$ is an approximate identity in $\mathcal{A}$ which is bounded, say by $M>0$.
We are going to prove that the net $(\pi(x)\pi(r_j))_{j\in J}$
is a bounded approximate identity in $\mathcal{B}$.
The boundedness is obvious: $\|\pi(x)\pi(r_j)\|\le M\|\pi\|$
for every $j\in J$.
Let $v\in\mathcal{B}$ and $\varepsilon>0$.
By the assumption $\overline{\pi(\mathcal{A})}=\mathcal{B}$,
we can find $u\in\mathcal{A}$ such that
\begin{equation}\label{e1}
\|\pi(u)-v\|<\frac{\varepsilon}{2(M\|\pi\|+1)}.
\end{equation}
Since $(xr_j)_{j\in J}$ is an approximate identity in $\mathcal{A}$,
there exists $j_{0}$ such that for every $j\succeq j_{0}$
\begin{equation}\label{e2}
\|xr_{j}u-u\|<\frac{\varepsilon}{2\|\pi\|}.
\end{equation}
Thus, by \eqref{e1} and \eqref{e2}, for every $j\succeq j_0$ we get
\begin{align*}
\|\pi(x)\pi(r_{j})v-v\|
&\leq\|\pi(xr_{j})\|\,\|v-\pi(u)\|+\|\pi(xr_{j}u-u)\|+\|\pi(u)-v\|\\
&\leq\,(M\|\pi\|+1)\|v-\pi(u)\|+\|\pi\|\,\|(xr_{j}u-u)\|<\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon.
\end{align*}
It can be proved in a similar manner that the net
$(v\pi(x)\pi(r_j))_{j\in J}$ converges to $v$.
\end{proof}
\begin{definition}\cite{Arveson}
Let $\mathcal{H}$ be a complex Hilbert space. Every homomorphism $\pi$ of a $*$-algebra $\mathcal{A}$ into the $C^{*}$-algebra $\mathcal{B}(\mathcal{H})$ such that $\pi(x^{*})=\pi(x)^{*}$ will be called a \emph{representation} of the algebra $\mathcal{A}$ in the space $\mathcal{H}$. If $\overline{\pi(\mathcal{A})\mathcal{H}}=\mathcal{H}$, the representation $\pi$ is said to be \emph{non-degenerate}.
\end{definition}
It is known \cite[Theorem 25.10]{Zelazko} that every representation $x\mapsto\pi(x)$ of a $*$-algebra $\mathcal{A}$ is continuous and satisfies $\|\pi(x)\|\leq\|x\|$. Therefore, we can deduce a result analogous to Proposition~\ref{proposition-homomorphism}.
\begin{proposition}\label{representationandAppInvr}
Let $\mathcal{H}$ be a complex Hilbert space, and $\pi$ be a non-degenerate representation of the $*$-algebra $\mathcal{A}$ in $\mathcal{H}$. If $x\in\mathcal{A}$ is boundedly approximately right invertible, then
\begin{enumerate}
\item [1.] there is a net $(r_{j})_{j\in J}$ in $\mathcal{A}$ such that $(\pi(xr_{j}))_{j\in J}$ converges in the strong operator topology to the identity operator;
\item [2.] $\overline{\pi(x)\mathcal{H}}=\mathcal{H}$.
\end{enumerate}
\end{proposition}
\begin{proof}
1. Let $x$ be boundedly approximately right invertible. Then there is a net $(r_{j})_{j\in J}$ in $\mathcal{A}$ such that $\sup_{j}\|xr_{j}\|\leq M$ for some $M>0$ and $(xr_{j})_{j\in J}$ is an approximate identity in $\mathcal{A}$.
Let $v\in\mathcal{H}$ and $\varepsilon>0$. Then we can find $u\in\mathcal{H}$ and $a_{v}\in\mathcal{A}$ such that \begin{equation}\label{1piav}
\|\pi(a_{v})u-v\|<\dfrac{\varepsilon}{2(M+1)}
\end{equation}
because the representation is non-degenerate.
Now, for $a_{v}\in\mathcal{A}$ and $\varepsilon>0$, there exists $j_{0}=j_{0}(a_{v},\varepsilon)$ such that
\begin{equation}\label{2piav}
\|xr_{j}a_{v}-a_{v}\|<\dfrac{\varepsilon}{2\|u\|}\quad\text{for all}\ j\succeq j_{0}.
\end{equation}
Therefore, by \eqref{1piav} and \eqref{2piav} for all $j\succeq j_{0}$ we have
\begin{align*}
\|\pi(xr_{j})v-v\|
&\leq\left(\|\pi(xr_{j})\|+1\right) \|v-\pi(a_{v})u\|+\|\pi(xr_{j}a_{v}-a_{v})u\|\\
&\leq(M+1)\|v-\pi(a_{v})u\|+\|xr_{j}a_{v}-a_{v}\|\,\|u\|<\varepsilon.
\end{align*}
Thus, $(\pi(xr_{j}))_{j\in J}$ converges to $\mathrm{Id}_{\mathcal{H}}$ in the strong operator topology.
2. Let $(r_{j})_{j\in J}$ be the net from part 1.
Given $u\in\mathcal{H}$, we define a net $(y_j)_{j\in J}$
in $\mathcal{H}$ by $y_j=\pi(r_j)u$; then the net $(\pi(x)y_{j})_{j\in J}$ takes values in $\pi(x)\mathcal{H}$
and
$\lim_{j\in J}\pi(x)y_{j}=\lim_{j\in J}\pi(x)\pi(r_{j})u=\lim_{j\in J}\pi(xr_{j})u=u.$
This shows that $\overline{\pi(x)\mathcal{H}}=\mathcal{H}$.
\end{proof}
\begin{corollary}
Let $\mathcal{A}$ be a $*$-algebra and $\pi$ be a non-degenerate
representation of $\mathcal{A}$ in a complex Hilbert space $\mathcal{H}$.
If $x\in\mathcal{A}$ is boundedly approximately left invertible, then $\pi(x)$ is injective.
\end{corollary}
\begin{proof}
If $x\in\mathcal{A}$ is boundedly approximately left invertible, then by \eqref{norminvolution} and Propositions \ref{AppInvr-AppInvl}, \ref{representationandAppInvr} we get that $x^{*}$ is boundedly approximately right invertible in $\mathcal{A}$, and $\overline{\pi(x^{*})\mathcal{H}}=\mathcal{H}$. Moreover,
$\left[\ker\,\pi(x)\right]^{\perp}=\overline{\operatorname{Rang}\,\pi(x)^{*}}=\overline{\pi(x^{*})\mathcal{H}}=\mathcal{H},$
so $\ker\,\pi(x)=\{0\}$. Thus, $\pi(x)$ is injective.
\end{proof}
\section{Approximate invertibility in examples}\label{Sec:Examp}
In this section we provide several examples of algebras with, as well as without, approximately invertible elements. These algebras were a motivation for investigating approximate invertibility in the classes of algebras studied in Section~\ref{sec:AppInv in algebras}.
\subsection{Small disk algebra}\label{Section:SmallDiskAlgebra}
Denote by $\mathcal{A}(\mathbb{D})$ the usual disk algebra
consisting of all continuous functions
$f\colon\overline{\mathbb{D}}\to\mathbb{C}$
which are analytic in $\mathbb{D}$.
Consider the ``small disk algebra'' defined as
\[
\mathcal{A}_0(\mathbb{D}):= \{f\in\mathcal{A}(\mathbb{D})\colon\ f(0)=0\}.
\]
Since the character space of $\mathcal{A}(\mathbb{D})$
can be naturally identified with $\overline{\mathbb{D}}$
and $\mathcal{A}_0(\mathbb{D})$ is the maximal ideal of $\mathcal{A}(\mathbb{D})$
associated to the point $0$,
we can conclude by~\cite[Theorem 7.3.1]{Larsen}
that $\mathcal{M}_{\mathcal{A}_0(\mathbb{D})}$ can be naturally identified with
$\overline{\mathbb{D}}\setminus\{0\}$.
Notice that $\mathcal{A}_0(\mathbb{D})$ is generated by the monomial function
$\chi_1\colon\overline{\mathbb{D}}\to\mathbb{C}$ such that
$\chi_1(z)=z$.
By the Schwarz lemma, for every $f\in\mathcal{A}_0$ and every $z\in\mathbb{D}$ it holds
$|f(z)|\le |z|\,\|f\|_\infty.
$ The following lemma shows that the elements of $\mathcal{A}_0$
cannot be uniformly close
to the constant $1$ in the annulus $1/2\le|z|\le 1$.
\begin{lemma}\label{lem:small_disk_algebra_far_from_unity}
If $f\in\mathcal{A}_0$, then
\begin{equation}\label{eq:small_disk_algebra_far_from_unity}
\sup_{1/2\le|z|\le1}|f(z)-1|\ge\frac{1}{3}.
\end{equation}
\end{lemma}
\begin{proof}
Denote $\|f\|_\infty$ by $M$. Note that
$M=\sup_{|z|=1}|f(z)|.$ If $M\ge 4/3$, then $\displaystyle
\sup_{1/2\le|z|\le1}|f(z)-1|
\ge M-1 \ge\frac{1}{3}.$
If $M<4/3$, then by the Schwarz lemma
$|f(1/2)|\le M/2 < 2/3$,
and $\displaystyle
\sup_{1/2\le|z|\le1}|f(z)-1|
\ge 1-|f(1/2)|\ge \frac{1}{3}.$
In both cases the inequality \eqref{eq:small_disk_algebra_far_from_unity} holds.
\end{proof}
In the next lemma we show that $\chi_{1}$
cannot be approximated by products
of two (or more) elements of $\mathcal{A}_0(\mathbb{D})$.
\begin{lemma}\label{lem:monomial_can_not_be_approximated}
For every $f_1,f_2\in\mathcal{A}_0(\mathbb{D})$ it holds
$\|f_1 f_2 - \chi_1\|_\infty \ge \frac{1}{3}.
$
\end{lemma}
\begin{proof}
Since $f_1(0)=f_2(0)=0$,
we can write $f_1(z)f_2(z)$ as $z^2 h(z)$
with some function $h$ analytic on $\mathbb{D}$
and continuous on $\overline{\mathbb{D}}$.
Therefore, by Lemma \ref{lem:small_disk_algebra_far_from_unity} we get
\begin{align*}
\|f_1 f_2 - \chi_1\|_\infty&=\sup_{|z|=1}\left|z^{2}h(z)-z\right|=\sup_{|z|=1}\left|zh(z)-1\right|=\sup_{z\in \overline{\mathbb{D}}}\left|zh(z)-1\right|\geq \sup_{1/2\leq|z|\leq1}\left|zh(z)-1\right|\geq\frac{1}{3},
\end{align*}
which proves the result.
\end{proof}
As a corollary
of Lemma~\ref{lem:monomial_can_not_be_approximated} we immediately obtain the main properties of the small disk algebra. Indeed, an approximate identity $(\aid_j)_{j\in J}$ would give $\aid_j\chi_1\to\chi_1$, a dense principal ideal $f\mathcal{A}_0(\mathbb{D})$ would give $fg_j\to\chi_1$ for suitable $g_j$, and an approximately invertible element would produce an approximate identity; in each case $\chi_1$ would be a limit of products of two elements of $\mathcal{A}_0(\mathbb{D})$, contradicting the lemma.
\begin{proposition}\label{prop:A0_is_poor}
The algebra $\mathcal{A}_0(\mathbb{D})$ has no approximate identities,
no approximately invertible elements, and no dense principal ideals.
\end{proposition}
The function $\chi_1$ does not vanish at any point of $\overline{\mathbb{D}}\setminus\{0\}$,
but the principal ideal $\chi_1 \mathcal{A}_0(\mathbb{D})$ is not dense.
Thus, $\mathcal{A}_0(\mathbb{D})$ provides a counterexample to the converse implication in Proposition~\ref{prop:ainv_modid}. Moreover, $\chi_1$ is not a topological divisor of zero in $\mathcal{A}_0(\mathbb{D})$.
In fact, by the maximum modulus principle, $\|f\chi_1\|_\infty=\sup_{|z|=1}|zf(z)|=\sup_{|z|=1}|f(z)|=\|f\|_\infty$ for every $f\in \mathcal{A}_0(\mathbb{D})$. So, in the algebra $\mathcal{A}_0(\mathbb{D})$
there are no approximately invertible elements, but not every element is a topological divisor of zero.
\subsection{Wiener algebras}
For a locally compact Hausdorff space $X$, a normed algebra $(W,\|\cdot\|_{W})$ of complex-valued continuous functions on $X$ with pointwise operations is said to be a \textit{Wiener algebra}
(see Reiter and Stegeman \cite[Chapter 2]{ReiterStegeman}), if
\begin{enumerate}[label=(WA\arabic*),
labelsep=*,
leftmargin=*,
widest=(WA1)]
\item for each $t\in X$, the evaluation functional $f\mapsto f(t)$ on $W$ is continuous;
\label{WA1}
\item for any closed set $E\subsetneq X$ and any point $t\notin E$, there is a function $f\in\,W$ vanishing on $E$ and such that $f(t)\neq0$;
\label{WA2}
\item if $f\in W$ with $f(t)\neq0$ at a point $t\in X$, then there is a function $g\in\,W$ such that $g(x)=\dfrac{1}{f(x)}$ for all $x$ in some neighborhood of $t$;
\label{WA3}
\item the compactly supported elements in $W$ form a dense subspace.
\label{WA4}
\end{enumerate}
In this section, we suppose that $X$ is a locally compact non-compact Hausdorff space,
$(W,\|\cdot\|_{W})$ is a Wiener algebra on $X$, and, additionally,
that $W$ has an approximate identity. Due to condition~\ref{WA2}, for each point $t\in X$ there exists a function $f\in W$ such that $f(t)=1$; thus, the kernel of every evaluation functional is a maximal modular ideal. On the other hand, it is known~\cite[Proposition 2.6.1]{ReiterStegeman}
that maximal closed ideals in $W$ correspond to the evaluation functionals. Therefore, $\Characters{W}$ can be naturally identified with $X$.
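A basic example to keep in mind: it is straightforward to check that $C_0(X)$ itself, equipped with the sup-norm, satisfies \ref{WA1}--\ref{WA4}, and for $W=C_0(X)$ the results below recover Proposition~\ref{prop:criterion_ainv_in_csa}.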
%
\begin{theorem}
\label{thm:Wiener_ainv-Wie-alg}
Let $f\in W$. Then the following conditions are equivalent:
\begin{enumerate}
\item There exists a net $(h_j)_{j\in J}$ in $W$
such that $(f h_j)_{j\in J}$ is an approximate identity in $W$,
and the functions $fh_j$ have compact supports.
\item $f$ is approximately invertible in $W$.
\item $fW$ is dense in $W$.
\item The Gelfand transform $\widehat{f}$ of $f$
does not vanish on $\Characters{W}$.
\item $f$ does not vanish on $X$.
\end{enumerate}
\end{theorem}
\begin{proof}
We only prove the implication (v)$\Rightarrow$(i),
since the remaining implications are simple.
By assumption, $W$ has an approximate identity, hence by~\ref{WA4} and by~\cite[Proposition 2.4.2]{ReiterStegeman} we have that $W$ has an approximate identity $(\aid_{j})_{j\in J}$ where $\aid_{j}$ has compact support for every $j$ in $J$. Suppose that $f(t)\neq0$ for every $t\in X$. Then for every $j$ in $J$,
by the analogue of Wiener's division lemma for Wiener algebras~\cite[Corollary 2.1.11]{ReiterStegeman},
there exists $g_{j}$ in $W$ such that $f g_{j}=\aid_{j}$.
Hence $(fg_{j})_{j\in J}=(\aid_{j})_{j\in J}$ is an approximate identity consisting of compactly supported functions, which gives (i).
\end{proof}
The following consequence is similar to Proposition~\ref{app-inv-C0T} and Corollary~\ref{coro-open-int-emp}.
\begin{corollary}
Let $(W,\|\cdot\|_{W})$ be a Wiener algebra with an approximate identity. Then the interior of $ \AppInv(W)$ is empty.
\end{corollary}
\begin{proof}
By Theorem~\ref{thm:Wiener_ainv-Wie-alg}, the functions with compact support in $X$ are not approximately invertible, since each of them vanishes on the non-empty complement of its support. By \ref{WA4} these functions are dense in $W$, so any ball $B_{\delta}(f)$ centered at $f\in W$ with radius $\delta>0$ contains functions that are not approximately invertible, and hence the interior of $ \AppInv(W)$ is empty.
\end{proof}
\begin{example}[the Fourier algebra]
\label{example:convolution_algebra}
Let $G$ be a non-discrete locally compact abelian group
equipped with a Haar measure $\mu$.
We shall use the additive notation for $G$.
The space $L^1(G)$ with the convolution operation $\ast$
is a non-unital commutative algebra.
We identify the space $\Characters{L^1(G)}$
with the dual group $\widehat{G}$.
Then the Gelfand transform for the algebra $L^1(G)$
coincides with the Fourier transform
$\mathcal{F}\colon L^1(G)\to C_0(\widehat{G})$,
and the image of this transform, i.e., $\mathcal{F}(L^1(G))$,
is known as the Fourier algebra
(some authors call it the Wiener algebra).
Of course, this is a typical example of a Wiener algebra, and the previous results of this section can be applied to this example.
It is well known that $\mathcal{F}(L^1(G))$
has norm bounded approximate identities
(with norms bounded by $1$)
and that the functions in
$\mathcal{F}(L^1(G))$ with compact
support are dense there
(see \cite[Theorem 8.1.2, pp. 184--187]{Larsen}
or \cite[Proposition 5.4.1]{ReiterStegeman}).
Therefore, by \cite[Lemma 1.4]{Doran-Wichmann},
$\mathcal{F}(L^1(G))$
contains an approximate identity $(\aid_{j})_{j\in J}$
such that the functions $\aid_{j}$ have compact supports for each $j$ in $J$.
For this example,
the implication (v)$\Rightarrow$(i)
from Theorem~\ref{thm:Wiener_ainv-Wie-alg}
can be proved
by the classical Wiener's Division Lemma~\cite[Lemma 1.4.2]{ReiterStegeman},
and the implication (v)$\Rightarrow$(iii)
can also be proved in a different way,
by~\cite[Theorem 6.1.4]{ReiterStegeman}
and Proposition~\ref{prop:rich_modular_ideals}.
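For a concrete instance with $G=\mathbb{R}$ (using the normalization $\hat f(\xi)=\int_{\mathbb{R}}f(t)e^{-2\pi i t\xi}\,\mathrm{d}t$): the Gaussian $f(t)=e^{-\pi t^{2}}$ belongs to $L^1(\mathbb{R})$ and $\hat f(\xi)=e^{-\pi\xi^{2}}$ vanishes nowhere on $\widehat{\mathbb{R}}\cong\mathbb{R}$, so $\hat f$ is approximately invertible in $\mathcal{F}(L^1(\mathbb{R}))$ by Theorem~\ref{thm:Wiener_ainv-Wie-alg}, although it is clearly not invertible there.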
\end{example}
\begin{example}[Segal algebras]
\label{example:Fourier-Segal-algebra}
Let $G$ be a non-discrete locally compact abelian group
equipped with a Haar measure $\mu$. As in Example \ref{example:convolution_algebra}, we shall use the additive notation for $G$.
A subalgebra $S$ of $L^1(G)$ is said to be a \emph{Segal algebra} if
\begin{enumerate}[label=(SA\arabic*),
labelsep=*,
leftmargin=*,
widest=(SA1)]
\item
\label{SA1}
the space $S$ is dense in $L^1(G)$ with respect to the norm $\|\cdot\|_{1}$;
\item
\label{SA2}
$S$ is invariant under translations, i.e., $L_y f\in S$ for every $f$ in $S$ and $y$ in $G$,
where $L_{y}f(x)=f(x-y)$;
\item
\label{SA3}
$S$ is a Banach algebra under some norm $\|\cdot\|_{S}$ which is invariant under translations;
\item
\label{SA4}
for every $f\in S$ and every $\varepsilon>0$,
there is a neighborhood $U$ of $0$ such that
$\|L_{y}f-f\|_{S}<\varepsilon$ for all $y$ in $U$.
\end{enumerate}
Let $W$ be the Fourier image of $S$
(i.e., $\mathcal{F} S$),
considered with the norm carried over from $S$.
Then $W$ is a subalgebra of $\mathcal{F}(L^1(G))$,
and $W$ is isomorphic to $S$.
Moreover, $W$ is a Wiener algebra and possesses approximate identities,
see~\cite[Proposition 6.2.8]{ReiterStegeman}.
If $S$ is a non-trivial Segal algebra, i.e., $S\subsetneq L^1(G)$, then the approximate identities in $S$ are bounded with respect to the $L^1$-norm but they are never bounded in the norm $\|\cdot\|_{S}$, see e.g. \cite[Theorem 1.2]{Burnham}. Now, \cite[Proposition 6.2.6]{ReiterStegeman} implies that elements of $W$ have bounded approximate units, and by
\cite[Proposition 9.5]{Doran-Wichmann}
this implies that $W$ has an approximate identity (possibly unbounded).
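A classical concrete example is $S=L^1(G)\cap L^2(G)$ equipped with the norm $\|f\|_{S}=\|f\|_{1}+\|f\|_{2}$: conditions \ref{SA1}--\ref{SA4} are easily verified, and the submultiplicativity of $\|\cdot\|_{S}$ follows from
\[
\|f*g\|_{S}=\|f*g\|_{1}+\|f*g\|_{2}\le\|f\|_{1}\|g\|_{1}+\|f\|_{1}\|g\|_{2}=\|f\|_{1}\,\|g\|_{S}\le\|f\|_{S}\,\|g\|_{S}.
\]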
\end{example}
\begin{example}[Beurling algebras]
This generalization of Example~\ref{example:convolution_algebra}
was studied by Domar~\cite{Domar1956}.
Let $G$ be a non-discrete locally compact abelian group equipped with a Haar measure $\mu$,
and let $w\colon G\to[1,+\infty)$
be a measurable function with respect to the Haar measure,
such that $w$ is bounded on every compact set and submultiplicative in the following sense:
\[
w(x+y)\le w(x)w(y)\qquad(x,y\in G).
\]
Moreover, we suppose that $w$ satisfies the Beurling--Domar condition:
\[
\sum_{n\geq1}\dfrac{\log w(nx)}{n^{2}}<+\infty\qquad (x\in G).
\]
Then $L^1(G, w\,\mathrm{d}\mu)$,
considered with the norm $\|\cdot\|_{1,w}$
and with the convolution operation,
is a Banach algebra,
and its Fourier image
$W:=\mathcal{F} L^1(G,w\,\mathrm{d}\mu)$
is a Wiener algebra, see \cite[Proposition 6.3.2 and Lemma~A.1.4]{ReiterStegeman} based on various results from~\cite{Domar1956}.
Moreover, \cite[Proposition 3.7.6]{ReiterStegeman} implies that elements of $W$ have bounded approximate units, and by
\cite[Proposition 9.5]{Doran-Wichmann}
this implies that $W$ has an approximate identity (possibly unbounded).
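A standard family of admissible weights on $G=\mathbb{R}$ is $w_{\alpha}(x)=(1+|x|)^{\alpha}$ with $\alpha\ge0$: submultiplicativity follows from $1+|x+y|\le(1+|x|)(1+|y|)$, and the Beurling--Domar condition holds since
\[
\sum_{n\geq1}\frac{\log w_{\alpha}(nx)}{n^{2}}=\alpha\sum_{n\geq1}\frac{\log(1+n|x|)}{n^{2}}<+\infty .
\]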
\end{example}
\subsection{Operator ideals}
In this section we exemplify approximate invertibility in certain operator algebras. First we recall a few known facts about operators, see for instance \cite[\textsection2.4]{Brezis}, providing a motivation for the detailed study. A linear operator $T\in\mathcal{B}(\mathcal{H})$ is right invertible if there exists $S\in\mathcal{B}(\mathcal{H})$ such that $TS=\operatorname{Id}$. The existence of a right inverse of $T\in\mathcal{B}(\mathcal{H})$ is guaranteed if and only if $T$ is surjective. On the other hand, a linear operator $T\in\mathcal{B}(\mathcal{H})$ is left invertible if there exists $S\in\mathcal{B}(\mathcal{H})$ such that $ST=\operatorname{Id}$, and the existence of a left inverse of $T\in\mathcal{B}(\mathcal{H})$ is guaranteed if and only if $T$ is injective and $\mathrm{Rang}(T)$ is closed (equivalently, $T$ is bounded below). These results can be extended to linear operators acting on Banach spaces, but the concept of complemented subspaces is needed, see for instance \cite[Theorem 2.12, Theorem 2.13]{Brezis}.
In what follows, $\mathcal{H}$ is an infinite-dimensional separable Hilbert space.
Then the identity operator in $\mathcal{H}$ is not compact,
and the compact operators acting on $\mathcal{H}$ can be neither \textit{bounded below} nor \textit{surjective},
see~\cite[Theorem 4.18]{Rudin}.
Thus, compact operators acting on $\mathcal{H}$
are neither right, nor left invertible.
Hence, finding conditions for a weaker form of invertibility of compact operators is an important task. We will address this task in a slightly more general setting of operator ideals.
We denote by $\mathfrak{F}(\mathcal{H})$
the collection of all operators $F$ in $\mathcal{B}(\mathcal{H})$ such that $\operatorname{Rank}(F):= \dim\mathrm{Rang}(F)$ is finite. The elements of $\mathfrak{F}(\mathcal{H})$ are finite linear combinations of $f\otimes g$, where $(f\otimes g)(h):= \langle h,g\rangle f$.
For $n$ in $\mathbb{N}$, we refer to
$
a_{n}(S)\hspace{0pt}=\hspace{0pt}\inf\left\{\,\|S-F\|\colon \hspace{0pt}\operatorname{Rank}(F)\hspace{0pt}<n\right\}
$
as the $n$-th approximation number of $S\in\mathcal{B}(\mathcal{H})$.
The C*-algebra $\mathcal{K}(\mathcal{H})$ of compact operators
(also known as \emph{completely continuous} operators), acting on $\mathcal{H}$, may be characterized as
$\displaystyle
\mathcal{K}(\mathcal{H})=\left\{S\in\mathcal{B}(\mathcal{H})\colon\lim_{n\rightarrow\infty}a_{n}(S)=0\right\}.$
The study of operator ideals was started by Calkin~\cite{Calkin-1941}.
Following~\cite{Pietsch-2017},
we say that a subspace $\mathfrak{U}$ of $\mathcal{B}(\mathcal{H})$
is an \emph{operator ideal}
if for any $S\in\mathcal{B}(\mathcal{H})$ and $T\in\mathfrak{U}$ such that $a_{n}(S)\leq a_{n}(T)$ for every $n\in\mathbb{N}$, we have $S\in\mathfrak{U}$.
Obviously, this definition implies that $\mathfrak{U}$ is a two-sided ideal.
In this section, we fix a proper operator ideal $\mathfrak{U}$ on $\mathcal{H}$, $\mathfrak{F}(\mathcal{H})\subseteq \mathfrak{U}\subseteq\mathcal{K}(\mathcal{H})$.
We assume that:
\begin{enumerate}[label=(OI\arabic*),
labelsep=*,
leftmargin=*,
widest=(OI1)]
\item
\label{OI1}
$\mathfrak{U}$ is provided with a norm $\|\cdot\|_{\mathfrak{U}}$,
\item
\label{OI2}
$\mathfrak{U}$ is complete with respect to the induced uniformity,
\item
\label{OI3}
$\mathfrak{F}(\mathcal{H})$ is dense in $\mathfrak{U}$
with respect to $\|\cdot\|_{\mathfrak{U}}$, and
\item
\label{OI4}
the normalization condition
$\|f\otimes g\|_{\mathfrak{U}}=\|f\|\,\|g\|$
holds for every $f,g$ in $\mathcal{H}$.
\end{enumerate}
By \cite[Proposition 6.1.4]{Pietsch-1980},
it follows that
$\|A\|\le \|A\|_{\mathfrak{U}}$ for every $A$ in $\mathfrak{U}$. Notice that $\|\cdot\|_{\mathfrak{U}}$ is equivalent to the operator norm $\|\cdot\|$ only if $\mathfrak{U}=\mathcal{K}(\mathcal{H})$.
The most important particular cases are the Schatten classes, including the ideal of all compact operators. More generally, Pietsch~\cite[\textsection 6.1]{Pietsch-1980} considered operator ideals provided with quasi-norms, and the results of this section can be extended to that context (treating $\mathfrak{U}$ as a topological algebra).
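For the Schatten classes, both the inequality $\|A\|\le\|A\|_{\mathfrak{U}}$ and the normalization condition \ref{OI4} can be verified directly from the singular values. The following short Python sketch (using finite matrices as stand-ins for operators on $\mathcal{H}$, with random data chosen only for illustration) checks both numerically.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def schatten_norm(A, p):
    # Schatten p-norm: l^p norm of the vector of singular values
    return np.linalg.norm(np.linalg.svd(A, compute_uv=False), p)

A = rng.standard_normal((5, 5))
f, g = rng.standard_normal(5), rng.standard_normal(5)

for p in (1, 2, 4):
    # the operator norm is the largest singular value, hence <= the Schatten p-norm
    print(p, np.linalg.norm(A, 2) <= schatten_norm(A, p) + 1e-12)
    # the rank-one operator f (x) g has the single singular value ||f|| ||g||,
    # so its Schatten p-norm equals ||f|| ||g|| (the normalization (OI4))
    print(p, np.isclose(schatten_norm(np.outer(f, g), p),
                        np.linalg.norm(f) * np.linalg.norm(g)))
\end{verbatim}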
A fundamental tool for studying the operator ideals in $\mathcal{B}(\mathcal{H})$ is the singular value decomposition theorem, also known as Schmidt's Theorem (see, for example, \cite[Theorem D.3.2]{Pietsch-1980}).
According to this theorem, each operator $S\in\mathcal{K}(\mathcal{H})$ admits a representation
\begin{equation}\label{eq:HSch-opi}
S\hspace{-2pt}=\sum_{n\in\mathbb{N}}\lambda_{n} u_{n}\otimes e_{n},
\end{equation}
where $\lambda_{n}=a_{n}(S)$, $n\in\mathbb{N}$,
is a non-increasing sequence converging to zero,
and $(u_{n})_{n\in\mathbb{N}}$ and $(e_{n})_{n\in\mathbb{N}}$ are orthonormal sequences in $\mathcal{H}$.
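In particular, on a Hilbert space the approximation numbers coincide with the singular values: the infimum defining $a_{n}(S)$ is attained by the rank-$(n-1)$ truncation of the representation \eqref{eq:HSch-opi} (Eckart--Young). The following Python sketch illustrates this on a finite matrix standing in for a compact operator (random data, for illustration only).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((6, 6))          # finite stand-in for a compact operator
U, sigma, Vh = np.linalg.svd(S)

# a_n(S) = inf{ ||S - F|| : rank F < n }; the infimum is attained by the
# truncated SVD, and equals the n-th singular value.
for n in range(1, 7):
    F = U[:, :n-1] @ np.diag(sigma[:n-1]) @ Vh[:n-1, :]   # rank < n
    print(n, np.linalg.norm(S - F, 2), sigma[n-1])
\end{verbatim}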
First, we characterize the approximate identities in $\mathfrak{U}$ that are bounded in the uniform norm.
\begin{proposition}\label{prop-sec-appr-id-comp-opi2}
Let $(S_{\hspace{0pt} j})_{j\in J}$ be a net in $\mathfrak{U}$ such that $\sup_{j\in J}\|S_{\hspace{0pt} j}\|<\infty$. Then the following assertions are equivalent: \begin{enumerate}
\item $(S_{\hspace{0pt} j})_{j\in J}$ is an approximate identity in $\mathfrak{U}$;
\item $S_{\hspace{0pt} j} v\to v$ and $S_{\hspace{0pt} j}^{*} v\to v$ for every $v\in \mathcal{H}$.
\end{enumerate}
\end{proposition}
\begin{proof}
(i)$\Rightarrow$(ii).
Suppose $(S_{\hspace{0pt} j})_{j\in J}$ is an approximate identity in $\mathfrak{U}$.
Let $v\in\mathcal{H}$.
Denote by $P_v$ the orthogonal projection in $\mathcal{H}$ such that $P_v(\mathcal{H})=\operatorname{span}\{v\}$.
Then $P_v\in\mathfrak{F}(\mathcal{H})\subseteq\mathfrak{U}$,
hence the nets $(S_{\hspace{0pt} j} P_v)_{j\in J}$ and $( P_vS_{\hspace{0pt} j})_{j\in J}$ converge to $P_v$ in $\mathfrak{U}$.
Since
\begin{align*}
\|S_{\hspace{0pt} j}v-v\|
&=\|S_{\hspace{0pt} j}P_{v}v-P_{v}v\|
\le \|S_{\hspace{0pt} j}P_{v}-P_{v}\|\,\|v\|
\le \|S_{\hspace{0pt} j}P_{v}-P_{v}\|_{\mathfrak{U}}\,\|v\|,\\
\|S_{\hspace{0pt} j}^{*}v-v\|
&=\|S_{\hspace{0pt} j}^{*}P_{v}v-P_{v}v\|
\le \|S_{\hspace{0pt} j}^{*}P_{v}-P_{v}\|\,\|v\|=\|P_{v}S_{\hspace{0pt} j}-P_{v}\|\,\|v\|
\le \|P_{v}S_{\hspace{0pt} j}-P_{v}\|_{\mathfrak{U}}\,\|v\|,
\end{align*}
we conclude that $S_{\hspace{0pt} j} v\to v$ and $S_{\hspace{0pt} j}^{*} v\to v$.
(ii)$\Rightarrow$(i).
Suppose that $\displaystyle\sup_{j\in J}\|S_{\hspace{0pt} j}\|<\infty$ and that $S_{\hspace{0pt} j}v\rightarrow v$ and $S_{\hspace{0pt} j}^{*}v\rightarrow v$ for every $v\in \mathcal{H}$. Let us first show that $\lim_{j\in J}\|S_{\hspace{0pt} j} T - T\|_{\mathfrak{U}}=0$ when $T$ is of the form $T:= u\otimes e$, with $u,e\in\mathcal{H}$. In this case
$S_{\hspace{0pt} j} T =
(S_{\hspace{0pt} j} u)\otimes e$ and
\[
\|S_{\hspace{0pt} j} T - T\|_{\mathfrak{U}}
=\|(S_{\hspace{0pt} j} u) \otimes e - u \otimes e\|_{\mathfrak{U}}
=\|(S_{\hspace{0pt} j} u - u)\otimes e\|_{\mathfrak{U}}
=\|S_{\hspace{0pt} j} u - u\|\,\|e\|,
\]
and the last expression tends to zero.
If $T\in\mathfrak{F}(\mathcal{H})$, then $T$ is a finite linear combination of operators $ u\otimes e$, and $\displaystyle\lim_{j}\|S_{\hspace{0pt} j} T - T\|_{\mathfrak{U}}=0$.
Let $C\in\mathfrak{U}$ and $\varepsilon>0$.
Then, there exists a finite-rank linear operator $T_{\hspace{0pt} N}$ such that $\|T_{\hspace{0pt} N}-C\|_{\mathfrak{U}}\hspace{0pt}< \frac{\varepsilon}{2(M+1)} $, where $M=\sup_{j\in J}\|S_{\hspace{0pt} j}\|$, because $\mathfrak{F}(\mathcal{H})$ is dense in $\mathfrak{U}$ with respect to $\|\cdot\|_{\mathfrak{U}}$. By the above argument there exists $j_{0}\in J$ such that $\|S_{\hspace{0pt} j} T_{\hspace{0pt} N}-T_{\hspace{0pt} N}\|_{\mathfrak{U}}< \dfrac{\varepsilon}{2}$ for all $j\succeq j_{0}$. In this way, we have for all $j\succeq j_{0}$
\begin{align*}\label{aux-n-3-c-opi2}
\|S_{\hspace{0pt} j}C-C\|_{\mathfrak{U}}
&\leq \|T_{\hspace{0pt} N}-C\|_{\mathfrak{U}}
+\|S_{\hspace{0pt} j}C-S_{\hspace{0pt} j}T_{\hspace{0pt} N}\|_{\mathfrak{U}}
+\|S_{\hspace{0pt} j}T_{\hspace{0pt} N}-T_{\hspace{0pt} N}\|_{\mathfrak{U}}\nonumber\\
&\leq
\left(\sup_{j\in J} \|S_{\hspace{0pt} j}\|+1\right) \|C-T_{\hspace{0pt} N}\|_{\mathfrak{U}}
+\|S_{\hspace{0pt} j}T_{\hspace{0pt} N}-T_{\hspace{0pt} N}\|_{\mathfrak{U}}
< \varepsilon.
\end{align*}
The proof that $(TS_{\hspace{0pt} j})_{j\in J}$ converges to $T\in\mathfrak{U}$ follows almost literally as above, using the identity $(u\otimes e)S_{\hspace{0pt} j}=u\otimes (S_{\hspace{0pt} j}^{*}e)$. Consequently, $(S_{\hspace{0pt} j})_{j\in J}$ is an approximate identity in $\mathfrak{U}$.
\end{proof}
\begin{proposition}\label{prop:projections_sequence-opi2}
Let $(b_j)_{j\in \mathbb{N}}$
be an orthonormal basis for $\mathcal{H}$.
For every $m$ denote by $S_{\hspace{0pt} m}$ the orthogonal projection
onto the subspace generated by $b_1,\ldots,b_m$:
\[
S_{\hspace{0pt} m} = \sum_{j=1}^m b_{j}\otimes b_{j}.
\]
Then $(S_{\hspace{0pt} m})_{m\in\mathbb{N}}$
is an approximate identity for $\mathfrak{U}$.
\end{proposition}
\begin{proof}
Since $S_{\hspace{0pt} m}\in\mathfrak{U}$, $S_{\hspace{0pt} m}=S_{\hspace{0pt} m}^{*}$, $\|S_{\hspace{0pt} m}\|=1$, for every $m\in\mathbb{N}$, and $S_{\hspace{0pt} m}v\rightarrow v$ for every $v\in \mathcal{H}$, by Proposition \ref{prop-sec-appr-id-comp-opi2} we conclude that $(S_{\hspace{0pt} m})_{m\in\mathbb{N}}$ is an approximate identity in $\mathfrak{U}$.
\end{proof}
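As a numerical illustration of Proposition~\ref{prop:projections_sequence-opi2} in the trace class, the following Python sketch (with a diagonal operator chosen only for illustration) shows $\|S_{m}T-T\|_{1}\to0$, even though the trace norms $\|S_{m}\|_{1}=m$ grow without bound while $\|S_{m}\|=1$.
\begin{verbatim}
import numpy as np

# trace-class-like stand-in: diagonal operator with summable singular values
d = 1.0 / np.arange(1, 101) ** 2
T = np.diag(d)

def trace_norm(A):
    # Schatten-1 norm: sum of singular values
    return np.linalg.svd(A, compute_uv=False).sum()

for m in (5, 20, 80):
    S_m = np.diag((np.arange(100) < m).astype(float))  # projection onto first m basis vectors
    # ||S_m T - T||_1 = sum_{k > m} 1/k^2 -> 0, while ||S_m||_1 = m
    print(m, trace_norm(S_m @ T - T), trace_norm(S_m))
\end{verbatim}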
Now, we are ready to provide necessary and sufficient conditions for the left and right invertibility in operator ideals.
\begin{theorem}\label{prop:ApprInvCompact-opi}
Let $T\in\mathfrak{U}$. Then the following statements hold:
\begin{enumerate}
\item[1.] $T\in\AppInv_{r}(\mathfrak{U})$
if and only if $T(\mathcal{H})$ is dense in $\mathcal{H}$.
\item[2.] $T\in\AppInv_{l}(\mathfrak{U})$
if and only if $\ker(T)=\{0\}$.
\end{enumerate}
\end{theorem}
\begin{proof}
1. Suppose that $T(\mathcal{H})$ is not dense in $\mathcal{H}$. Choose $a\in\mathcal{H}$ such that $\|a\|=1$ and $a\perp T(\mathcal{H})$. Then the orthogonal projection $P_{a}=a\otimes a$ from $\mathcal{H}$ onto $\operatorname{span}\{a\}$ belongs to $\mathfrak{U}$ and for every operator $U\in\mathfrak{U}$ we have
$\|TU-P_a\|_{\mathfrak{U}}\ge\|TU-P_a\|\ge \|TUa-P_a a\|\ge\operatorname{dist}(a,T(\mathcal{H}))=1.$
Since the right ideal $T\mathfrak{U}$
is not dense in $\mathfrak{U}$, the operator $T$ is not approximately right invertible in $\mathfrak{U}$ by Theorem \ref{thm:dense_ideals_implies_ainv}.
Let $T\in\mathfrak{U}$ be such that $T(\mathcal{H})$ is dense in $\mathcal{H}$. Write $T$ in the form \eqref{eq:HSch-opi} with $(u_{n})_{n\in\mathbb{N}}$ being an orthonormal basis of $\mathcal{H}$
and $\lambda_j=a_{j}(T)>0$ for every $j$.
Let
$\displaystyle U_m=\sum_{k=1}^m \frac{1}{\lambda_k}\,e_{k}\otimes u_{k}$.
Then, for every $v\in\mathcal{H}$,
\[
T U_m v
=\sum_{j\in\mathbb{N}} \sum_{k=1}^m \lambda_j\,\frac{1}{\lambda_k}
\langle v,u_k\rangle\, \langle e_k,e_j\rangle\,u_j
=\sum_{j=1}^m (u_{j}\otimes u_{j})(v).
\]
Thus,
the sequence $(T U_m)_{m\in\mathbb{N}}$ is an approximate identity
in $\mathfrak{U}$ by Proposition~\ref{prop:projections_sequence-opi2}.
2. The proof follows from Proposition \ref{AppInvr-AppInvl} and the identity $\ker T=(\mathrm{Rang}(T^{*}))^{\perp}$.
\end{proof}
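The construction in the proof can also be tested numerically. In the following Python sketch, a finite matrix with positive singular values stands in for an operator with dense range, and the partial inverses $U_{m}$ are built from the singular value decomposition exactly as above (random data, for illustration only); the products $TU_{m}$ are the orthogonal projections onto $\operatorname{span}\{u_{1},\ldots,u_{m}\}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((5, 5))          # dense-range stand-in (a.s. invertible)
U, sigma, Vh = np.linalg.svd(T)          # T = sum_k sigma_k u_k (x) e_k, with e_k = Vh[k]

m = 3
# U_m = sum_{k <= m} (1/sigma_k) e_k (x) u_k, the partial "inverse" from the proof
Um = sum((1.0 / sigma[k]) * np.outer(Vh[k], U[:, k]) for k in range(m))

P = T @ Um
# T U_m is the orthogonal projection onto span{u_1, ..., u_m}:
print(np.allclose(P, P @ P), np.allclose(P, P.T))
print(np.allclose(P @ U[:, :m], U[:, :m]))
\end{verbatim}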
By \cite[Example 5.1.1]{Murphy},
the pure states of the algebra $\mathcal{K}(\mathcal{H})$
are of the form $\tau_a$ for some $a\in\mathcal{H}$ with $\|a\|=1$, where
$\tau_a(T):=\langle T a,a\rangle$.
This fact, together with the correspondence between pure states and maximal modular left or right ideals~\cite[Example 5.1.1]{Murphy},
yields the following simple description of such ideals in the algebra $\mathcal{K}(\mathcal{H})$.
Recall that $N_\tau$ is defined
by~\eqref{Nt} for every pure state $\tau$.
In our situation,
\[
N_{\tau_a}
=\{T\in\mathcal{K}(\mathcal{H})\colon\ \tau_a(T^\ast T)=0\}
=\{T\in\mathcal{K}(\mathcal{H})\colon\
\langle T^\ast T a,a\rangle = 0\}
=\{T\in\mathcal{K}(\mathcal{H})\colon\ Ta=0\}.
\]
\begin{proposition}\label{prop:maximal_modular_ideals_in_CompactOperators}
1. The maximal right modular ideals of $\mathcal{K}(\mathcal{H})$ are of the form $N_{\tau_a}^{\ast}$
for some $a\in\mathcal{H}$ with $\|a\|=1$.
Moreover,
\[
N_{\tau_a}^{\ast}
=\{T\in\mathcal{K}(\mathcal{H})\colon\
\mathrm{Rang}(T)\perp a\}
=(I-P_a)\mathcal{K}(\mathcal{H}).
\]
2. The maximal left modular ideals of $\mathcal{K}(\mathcal{H})$ are of the form $N_{\tau_a}$
for some $a\in\mathcal{H}$ with $\|a\|=1$.
Moreover,
\[
N_{\tau_a}
=\{T\in\mathcal{K}(\mathcal{H})\colon\
a\in\ker(T)\}
=\mathcal{K}(\mathcal{H})(I-P_a).
\]
\end{proposition}
In the C*-algebra $\mathcal{K}(\mathcal{H})$ of compact operators,
we may apply Theorem~\ref{Appinvl-modi-C*}
and Proposition~\ref{prop:maximal_modular_ideals_in_CompactOperators}
to prove
Theorem~\ref{prop:ApprInvCompact-opi}.
\section{An application to the density in Banach modules}\label{sec:applications}
In this section we work with products $ab$, where $a$ and $b$ belong to two different spaces. We suppose that $(\mathcal{A},\|\cdot\|)$
is a non-unital Banach algebra and $(B,\|\cdot\|_{B})$ is a left Banach $\mathcal{A}$-module with respect to an operation $\bullet\colon\mathcal{A}\times B\to B$,
see~\cite[Definition~32.14]{Hewitt-Ross-II}.
Throughout this section we will require a strong additional assumption:
$B=\mathcal{A}\bullet B$.
The following result and its proof may be found in~\cite[Remark 32.33 (a)]{Hewitt-Ross-II}; we reproduce it here for the sake of completeness.
\begin{proposition}
\label{prop-module-1}
Let $(\aid_{j})_{j\in J}$ be a left approximate identity in $\mathcal{A}$.
Then $(\aid_{j}\bullet b)_{j\in J}$ converges to $b$ for any $b\in B$.
\end{proposition}
\begin{proof}
Let $b\in B$. By the assumption $B=\mathcal{A}\bullet B$, there exist $a\in\mathcal{A}$ and $c\in B$ such that $b=a\bullet c$.
Thus, by~\cite[Definition 32.14]{Hewitt-Ross-II},
\begin{align*}
\|\aid_{j}\bullet b-b\|_{B}
&=\|\aid_{j}\bullet (a\bullet c)-a\bullet c\|_{B}
=\|(\aid_{j}a)\bullet c-a\bullet c\|_{B}
\\
&=\|(\aid_{j}a-a)\bullet c\|_{B}
\leq \text{const}\,
\|c\|_{B}\,\|\aid_{j}a-a\|.
\end{align*}
The last factor tends to zero,
because $(\aid_{j})_{j\in J}$
is a left approximate identity in $\mathcal{A}$.
\end{proof}
\begin{proposition}
\label{prop-module-2}
Let $x\in\AppInv_{r}(\mathcal{A})$. Then $x\bullet B$ is dense in $B$.
\end{proposition}
\begin{proof}
Since $x\in\AppInv_{r}(\mathcal{A})$, there exists a net $(r_{j})_{j\in J}$ in $\mathcal{A}$ such that $(xr_{j})_{j\in J}$ is an approximate identity in $\mathcal{A}$. Now, given any $b\in B$,
by Proposition~\ref{prop-module-1} we have that $\bigl((xr_{j})\bullet b\bigr)_{j\in J}$ converges to $b$.
On the other hand, for every $j$ in $J$ we have
$(xr_{j})\bullet b
=x\bullet(r_{j}\bullet b)
\in x\bullet B$,
i.e., the net $\bigl((xr_{j})\bullet b\bigr)_{j\in J}$
takes values in $x\bullet B$.
Consequently, $\overline{x\bullet B}=B$.
\end{proof}
In particular, the results can be applied to $\mathcal{A}$ being the convolution algebra $L^1(G)$ and $B$ being a homogeneous Banach space.
Recall that a homogeneous Banach space on a non-discrete locally compact abelian group $G$ is a Banach space $(B,\|\cdot\|_{B})$ of measurable functions on $G$ which is closed under translations, i.e., given any $f\in B$ we have $L_{x}f\in B$ for all $x\in G$,
the norm $\|\cdot\|_{B}$ is invariant under $L_{x}$, and the map $x\mapsto L_{x}f$ of $G$ into $B$ is continuous.
Following~\cite{Wa77}, we define
$\circledast\colon L^1(G)\times B\to B$ by
\[
f\circledast g=\int_{G}f(x)L_{x}g\,\mathrm{d}\mu(x),
\]
where the integral is understood in the weak sense. It is known~\cite[Theorem 2.11]{Wa77}
that $B$ is an $L^1(G)$-module
and $L^1(G)\circledast B=B$.
The following results are particular cases of Propositions~\ref{prop-module-1} and~\ref{prop-module-2}, so we omit the proofs. Special cases of these results are well known when $(\aid_j)_{j\in J}$ is a Dirac net or a summability kernel in the sense of~\cite{ka76}.
\begin{corollary} \label{cor:net+dense} Let $(B,\|\cdot\|_{B})$ be a homogeneous Banach space on $G$.
\begin{itemize}
\item[(i)] If $g\in B$ and $(\aid_j)_{j\in J}$ is an approximate identity in $L^1(G)$, then the net $(\aid_j\circledast g)_{j\in J}$ converges to $g$ in $B$. \item[(ii)] If $f\in L^1(G)$ is such that $\widehat{f}(t)\ne0$ for every $t\in\widehat{G}$, then the set $f\circledast B$ is dense in $B$.
\end{itemize}
\end{corollary}
\begin{example}
Let $B=L^p(G)$ or $B=C_{bu}(G)$. Then $f\circledast g$ is just $f\ast g$, see \cite[Remark 2.14]{Wa77}.
For these spaces, Corollary \ref{cor:net+dense} provides a sufficient condition for density of the image of the convolution operator. By Theorem~\ref{thm:Wiener_ainv-Wie-alg}, this condition is necessary in the case $B=L^1(G)$.
A natural problem is to find necessary and sufficient conditions for other spaces $B$. \end{example}
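A classical instance of Corollary~\ref{cor:net+dense}(i) on the circle group is the Fej\'er (summability) kernel. The following Python sketch approximates the convolution with the Fej\'er kernel through its Fourier multipliers on a grid and shows the $L^{1}$ error decreasing; the test function and the grid size are chosen only for illustration.
\begin{verbatim}
import numpy as np

M = 4096
t = np.linspace(0.0, 2 * np.pi, M, endpoint=False)
f = np.sign(np.sin(t))                       # a step function on the circle

freqs = np.fft.fftfreq(M, d=1.0 / M)         # integer frequencies
fhat = np.fft.fft(f)

for N in (4, 16, 64, 256):
    # Fejer multiplier: coefficient k is damped by (1 - |k|/(N+1))_+
    weights = np.clip(1.0 - np.abs(freqs) / (N + 1), 0.0, None)
    fN = np.real(np.fft.ifft(fhat * weights))        # Fejer mean of order N (on the grid)
    print(N, np.mean(np.abs(f - fN)) * 2 * np.pi)    # discrete L^1([0, 2*pi]) error
\end{verbatim}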
The above results are closely connected with recent papers of the second author~\cite{fegu20,fegu21} addressing the question of completeness of sets of shifts in minimal tempered standard spaces. In that situation, a non-vanishing Fourier transform implies that the set of translates is total in the Banach algebra.
\begin{example}
Let $(S,\|\cdot\|_{S})$ be a Segal algebra
in $L^1(G)$, see Example \ref{example:Fourier-Segal-algebra}. This means that $S$ is a homogeneous Banach space and a dense subspace of $L^1(G)$.
By \cite{du74} or by \cite[Theorem 2.16]{Wa77}, we have $S=L^1(G)\ast S$. So, Corollary \ref{cor:net+dense}
can be applied to $B=S$ with $f\circledast g=f\ast g$.
\end{example}
\section{Several questions and open problems}\label{Sec:rem-open-pro}
With the development of this project we have seen that approximate invertibility is a generalization of invertibility and is related to the density of principal ideals.
Now we present some remarks, questions, and open problems related to the new concept, its properties, characterizations or extensions.
An immediate observation is that if a non-unital normed algebra $\mathcal{A}$ has an approximate identity and contains a non-trivial idempotent or nilpotent element, then the set of all approximately right invertible elements is a proper subset of $\mathcal{A}$. Indeed, suppose that $x\in\mathcal{A}$ with $x^{2}=x\neq0$ belongs to $\AppInv_{r}(\mathcal{A})$, i.e., there exists a net $(r_{j})_{j}$ in $\mathcal{A}$ such that $(xr_{j})_{j}$ is an approximate identity for $\mathcal{A}$. Then
$\lim_{j}xr_{j}=\lim_{j}x^{2}r_{j}=\lim_{j}x(xr_{j})=x,$
so the approximate identity $(xr_{j})_{j}$ converges, and hence $\mathcal{A}$ is unital by Proposition~\ref{prop:convergentAI}, which is a contradiction. Similarly, if $x\neq0$ is nilpotent, say $x^{n}=0$ with $n\geq2$ minimal, and $(xr_{j})_{j}$ is an approximate identity, then $0=\lim_{j}x^{n-1}(xr_{j})=x^{n-1}$, contradicting the minimality of $n$. Hence $x\notin\AppInv_{r}(\mathcal{A})$ in both cases.
\begin{question}
Are there non-unital normed algebras with approximate identities and non-trivial idempotent elements such that this idempotent element does not belong to any maximal modular right ideal?
\end{question}
Despite several characterizations of approximately invertible elements in some classes of algebras presented in the paper, we were not able to find concrete algebras violating certain properties. One such case is described in the following problem.
\begin{question}
Are there non-unital Banach algebras with approximate identities such that the conditions (i) and (iii) from Proposition \ref{prop:rich_modular_ideals} are not equivalent?
\end{question}
In this paper we have mainly dealt with normed algebras, but some results may be immediately extended to topological algebras as well. In~\cite{ThatteBhatt} a wide class of topological algebras for which topological invertibility collapses to invertibility is provided. Therefore, we may ask
\begin{question}
Is there a wider class of unital topological algebras where approximate invertibility coincides with invertibility? Provide a characterization of such algebras.
\end{question}
Recall that $x\in\mathcal{A}$ is approximately left and right invertible in $\mathcal{A}$, if there exist nets $(l_i)_{i\in I}$ and $(r_j)_{j\in J}$ such that the nets $(l_ix)_{i\in I}$ and $(xr_j)_{j\in J}$ are approximate identities in $\mathcal{A}$. Denote by $\AppInv_0(\mathcal{A})$ the set of all true bilateral approximately invertible elements from $\mathcal{A}$, i.e., $x\in\AppInv_0(\mathcal{A})$ if and only if there exists a net $(t_k)_{k\in K}$ such that the nets $(t_k x)_{k\in K}$ and $(x t_k)_{k\in K}$ are approximate identities in $\mathcal{A}$. Clearly, for each abelian topological algebra $\mathcal{A}$ it holds
$\AppInv(\mathcal{A}) = \AppInv_0(\mathcal{A})$.
\begin{question}
Does there exist a non-abelian non-unital topological algebra $\mathcal{A}$ such that $\AppInv(\mathcal{A}) = \AppInv_0(\mathcal{A})$? \end{question}
Recent results of Abel on sets of topologically quasi-invertible elements, see~\cite{A-ZR}, motivate further research on the structure of the sets $\AppInv_{\ell}(\mathcal{A})$ and $\AppInv_r(\mathcal{A})$ in $\mathcal{A}$. For instance, for which algebras $\mathcal{A}$ are these sets $G_\delta$-sets? This stimulates many questions regarding an extension of the topological spectrum of elements, the spectral mapping property, the spectral radius, etc.
It would be interesting to generalize the concept of the spectrum to algebras with approximate identities. This generalization leads to several questions.
\begin{question}
Given an element $a\in\mathcal{A}$, a complex number $\lambda$ and an approximate identity $(e_j)_{j\in J}$ in $\mathcal{A}$ we could consider the following condition:
there exists a net $(r_j)_{j\in J}$ such that $((\lambda e_j - a) r_j)_{j\in J}$ is an approximate identity in $\mathcal{A}$.
Does this condition depend on the selection of $(e_j)_{j\in J}$?
\end{question}
Since approximate invertibility turns out to be related to the density of principal ideals, there may be some connections with Tauberian theorems that are worth further investigation
(see recent papers \cite{fe88,fe15,fegu20,pivi19,vipira11}).
\textbf{Acknowledgements.}
The first named author wishes to thank the Universidad de Caldas for financial support and hospitality.
This is a part of the first author's project ``Elementos aproximadamente invertibles en C*-\'algebras y sus aplicaciones en teor\'ia de operadores''. The third author acknowledges the support of the Slovak Research and Development Agency under the contracts No. DS-2016-0028 and APVV-16-0337. The fourth author is grateful to the CONACYT (Mexico) project ``Ciencia de Frontera'' FORDECYT-PRONACES/61517/2020.
| {
"timestamp": "2021-06-18T02:02:47",
"yymm": "2106",
"arxiv_id": "2106.09103",
"language": "en",
"url": "https://arxiv.org/abs/2106.09103",
"abstract": "We introduce a concept of approximately invertible elements in non-unital normed algebras which is, on one side, a natural generalization of invertibility when having approximate identities at hand, and, on the other side, it is a direct extension of topological invertibility to non-unital algebras. Basic observations relate approximate invertibility with concepts of topological divisors of zero and density of (modular) ideals. We exemplify approximate invertibility in the group algebra, Wiener algebras, and operator ideals. For Wiener algebras with approximate identities (in particular, for the Fourier image of the convolution algebra), the approximate invertibility of an algebra element is equivalent to the property that it does not vanish. We also study approximate invertibility and its deeper connection with the Gelfand and representation theory in non-unital abelian Banach algebras as well as abelian and non-abelian C*-algebras.",
"subjects": "Functional Analysis (math.FA)",
"title": "Approximately invertible elements in non-unital normed algebras",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.978712645102011,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.7094397053528454
} |
https://arxiv.org/abs/2302.11475 | Degrees and Network Design: New Problems and Approximations | While much of network design focuses mostly on cost (number or weight of edges), node degrees have also played an important role. They have traditionally either appeared as an objective, to minimize the maximum degree (e.g., the Minimum Degree Spanning Tree problem), or as constraints which might be violated to give bicriteria approximations (e.g., the Minimum Cost Degree Bounded Spanning Tree problem). We extend the study of degrees in network design in two ways. First, we introduce and study a new variant of the Survivable Network Design Problem where in addition to the traditional objective of minimizing the cost of the chosen edges, we add a constraint that the $\ell_p$-norm of the node degree vector is bounded by an input parameter. This interpolates between the classical settings of maximum degree (the $\ell_{\infty}$-norm) and the number of edges (the $\ell_1$-degree), and has natural applications in distributed systems and VLSI design. We give a constant bicriteria approximation in both measures using convex programming. Second, we provide a polylogrithmic bicriteria approximation for the Degree Bounded Group Steiner problem on bounded treewidth graphs, solving an open problem from [Kortsarz and Nutov, Discret. Appl. Math. 2022] and [Guo et al., Algorithmica 2022]. | \section{Introduction}
The overarching theme of network design problems is to find ``inexpensive'' subgraphs that satisfy some type of connectivity constraints. The notion of ``inexpensive'' is often either the number of edges (unweighted cost) or the sum of edge costs (weighted cost). However, it has long been recognized that in many applications \emph{vertex degrees} matter as much (or more) than cost. This is particularly true in the context of networking and distributed systems, where the degree of a node often corresponds to the ``load'' on that node, as well as in VLSI design. So there has been a significant amount of work on handling degrees, either instead of or in addition to cost, which has led to many seminal papers and results. With degrees as an objective, these include the well known local search approach of F\"urer and Raghavachari~\cite{FR94} for the Minimum Degree Spanning Tree problem and the Minimum Degree Steiner Tree problem. With degrees as a constraint, these include the iterative rounding \cite{jain1} approach of Singh and Lau~\cite{SL15} for the Minimum-Cost Bounded-Degree Spanning Tree problem, as well as many extensions (most notably to Survivable Network Design with degree bounds~\cite{LNS09}, but see~\cite{ravi1} for many other examples).
In this paper we extend the study of degrees in network design in two ways. First, we introduce what is (to the best of our knowledge) a new class of problems. Instead of bounding the cost and individual degrees as in \cite{SL15,LNS09},
our objective is to obtain minimum cost while satisfying a bound on the $\ell_p$-norm of the node degree vector. This interpolates between the maximum degree (the $\ell_{\infty}$-norm) and the total number of edges or unweighted cost (the $\ell_1$-norm). Second, we solve a well known open problem: We give a poly-logarithmic bicriteria approximation for the Group Steiner Tree problem with degree bounds on bounded treewidth graphs.
\subparagraph{$\ell_p$-Objective.}
While the maximum degree is often a reasonable objective, as is minimizing the total cost (either with or without degree bounds), there are many natural situations where none of these approaches is fully satisfactory. If we simply ignore the degrees and focus on cost (weighted or unweighted), then we might end up with a solution with highly imbalanced degrees, leading to large load at particular nodes. If we ignore costs and simply optimize the maximum degree, then we might return a solution with far more edges than are needed: if the structure of the graph forces some node to have large degree, then minimizing the maximum degree alone gives us no incentive to keep the degrees of the other nodes small. Finally, optimizing under individual degree bounds implicitly assumes that nodes ``really have'' these degree bounds, i.e., they come from some external constraint. But this is of course not always the case: often we do not have real bounds on individual nodes, but rather a more vague desire to ``keep degrees small''.
Hence we want some way of making sure that the maximum degree is small while also encouraging few edges. A natural function that simultaneously accomplishes both of these goals is the $\ell_p$-norm of the degree vector, i.e., the function $\left( \sum_{v \in V} (\deg_v)^p \right)^{1/p}$ for $p \geq 1$ (and in particular for $p =2$), where $\deg_v$ is the degree of $v$ in the output subgraph. When $p=1$ this is simply (twice) the number of edges (i.e., the unweighted cost), and when $p = \infty$ this is the maximum degree. But for intermediate values of $p$, it discourages very large degrees (in particular the maximum degree) since $p>1$ implies that large degrees have a larger effect on the norm than smaller degrees, while still being affected in a non-trivial way by the smaller degrees. So we can either use the $\ell_p$-norm as an objective function, or we can use it as a constraint that is far more flexible than having simple degree constraints at every node.
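As a toy illustration of this interpolation, compare a star and a Hamiltonian path on the same number of vertices: both have the same number of edges (the same $\ell_1$-norm of the degree vector), but the star is heavily penalized by every $\ell_p$-norm with $p>1$. A short Python sketch (the graphs and the value of $n$ are only illustrative):
\begin{verbatim}
import numpy as np

def lp_degree_norm(degrees, p):
    d = np.asarray(degrees, dtype=float)
    return np.max(d) if np.isinf(p) else (d ** p).sum() ** (1.0 / p)

n = 101
star = [n - 1] + [1] * (n - 1)     # star on n vertices: n-1 edges, max degree n-1
path = [1, 1] + [2] * (n - 2)      # Hamiltonian path: n-1 edges, max degree 2

for p in (1, 2, 4, np.inf):
    print(p, round(lp_degree_norm(star, p), 1), round(lp_degree_norm(path, p), 1))
# p = 1 counts edges (both graphs give 2(n-1)); p = infinity is the max degree;
# intermediate p penalizes the single high-degree center of the star.
\end{verbatim}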
This intuition, that the $\ell_p$-norm takes into account both the maximum and the distribution simultaneously, is one reason why the $\ell_p$-norm has been an important objective function
for combinatorial problems. For example, the Set Cover problem was studied under the $\ell_p$ norm of the vector of the numbers of elements assigned to each set \cite{anu1}. It was also extensively studied in scheduling problems (see for example \cite{azar1, azar2, arvind1, IL23}). To the best of our knowledge, the $\ell_p$ norm has not been studied in the context of network design, with the notable recent exception of \emph{graph spanners}~\cite{CDR19,CDR19-approx}, where the direct applications of spanners to distributed systems led to exactly this motivation. Similarly, the MST problem under the $\ell_p$ norm is very important for VLSI design, since in many such settings we are forced to use spanning trees and hence the number of edges is fixed. So minimizing the $\ell_p$ norm will likely yield a balanced degree vector, which is of key importance for these VLSI applications (see, for example, \cite{china,vlsi1}).
Motivated by the above discussion, we introduce and give the first approximation for the Survivable Network design problem with low cost under a bound on the $\ell_p$ norm of the degree vector.
\subparagraph{Group Steiner Tree with Degree Bounds.}
In addition to the study of $\ell_p$-norm problems, we also make significant progress on a known open problem: approximating Group Steiner Tree with degree bounds on bounded treewidth graphs. The Group Steiner Tree problem (without degree bounds) is a classical optimization problem \cite{GKR1} which has played a central role in network design. In this problem there is a designated root node $r$, and a collection of (not necessarily disjoint) \emph{groups} of vertices. The goal is to find a subtree which connects at least one vertex from each group to $r$, while minimizing the total cost of all edges in the subtree. The Degree Bounded Group Steiner problem was
first raised by
Hajiaghayi in \cite{MH} (in the 8th Workshop on Flexible Network Design), motivated by the online version of the problem and applications to VLSI design. In particular, while low cost is highly desirable, this cost is paid only once, whereas afterwards the VLSI circuit is applied (evaluated) constantly. Low degrees
imply that the computation of the value of the circuit can be done faster. See a discussion of why low degrees are important for Group Steiner in \cite{guy1}.
Unfortunately, despite significant recent interest in this problem~\cite{GKL22,guy1}, progress has been elusive. In particular, polylogarithmic bicriteria approximations were not even known for simple classes such as \emph{series-parallel} graphs, i.e., for graphs with treewidth $2$. We go far beyond series-parallel graphs, and give results for bounded treewidth graphs.
\subsection{Our Results and Techniques}
We begin in Section~\ref{sec:lp} with a study of the $\ell_p$-Survivable Network Design problem. We are given the input graph $G = (V, E)$, with edge costs $c \in \mathbb{R}_{\geq 0}^E$.
There is a connection requirement vector $r \in \mathbb{Z}_{\geq 0}^{\binom{V}{2}}$, a number $p \geq 1$ and a bound $A$ on the $\ell_p$ norm of the degree vector of the output graph. The goal of the problem is to find the minimum-cost subgraph $H$ of $G$ satisfying the following:
\begin{itemize}
\item {\bf (connection requirements)} for every $u, v \in V$ with $u \neq v$, there are at least $r_{u, v}$ edge disjoint paths between $u$ and $v$ in $H$, and
\item {\bf (degree constraint)} $\left(\sum_{v \in V}d_H^p(v)\right)^{1/p} \leq A$, where $d_H(v)$ is the degree of $v$ in $H$.
\end{itemize}
We assume the input instance is feasible; that is, there is a valid subgraph $H$ satisfying both requirements. Let ${\mathrm{opt}}$ be the minimum cost of a valid subgraph $H$.
The main theorem we prove for the problem is the following:
\begin{theorem}
\label{thm:network-design}
There is a (randomized) algorithm which, given an instance of \textsc{$\ell_p$-Survivable Network Design}, outputs a subgraph $H$ satisfying the connection requirements and which has the following properties.
\begin{itemize}
\item The expected cost of $H$ is at most $2 \cdot {\mathrm{opt}}$.
\item The expectation of the $\ell_p$-norm of the degree vector is at most $2^{1/p} 5^{1-1/p} \cdot A$.
\end{itemize}
\end{theorem}
For the special case of the $\ell_p$-Spanning Tree problem, where $r_{uv}=1$ for all $u,v \in V$, we improve the expected cost to at most ${\mathrm{opt}}$ (rather than $2\cdot {\mathrm{opt}}$) and the expectation of the $\ell_p$-norm of the degree vector to at most $2^{1-1/p} \cdot A$.
Our main approach is to leverage the fact that the $\ell_p$-norm is \emph{convex}. This allows us to write a convex relaxation for the problem, which can then be solved efficiently using standard convex programming techniques. We then round this solution using an iterative rounding approach. Making this work requires overcoming a number of issues, possibly the trickiest of which is handling fractional degrees that are less than $1$. Note that a fractional solution could have many nodes with very small fractional degree (e.g., $1/n$). Due to the structure of the $\ell_p$-norm, such small values contribute far less to the $\ell_p$-norm than they ``should'' (in an integral solution). To get around this, we actually \emph{change} the $\ell_p$-constraint in a way that acts differently for values less than $1$, while still maintaining convexity. With this change in place, we can solve the relaxation, interpret the fractional degrees ``as if'' they are true degree bounds, and then round using existing results on iterative rounding for degree-bounded network design.
\medskip
We then move to our second problem, Group Steiner Tree with Degree Bounds on bounded treewidth graphs. In the problem, we are given a graph $G = (V, E)$ with treewidth ${\mathrm{tw}}$, a cost vector $c \in \mathbb{R}_{\geq 0}^E$, a root $r$, and $k$ sets $S_1, S_2, \cdots, S_k$. We are additionally given a degree bound ${\mathrm{db}}_v \in \mathbb{Z}_{>0}$ for every $v \in V$. The goal of the problem is to choose a minimum-cost subgraph $H$ of $G$ such that for every $t \in [k]$, $H$ contains a path from $r$ to some vertex in $S_t$, and $d_H(v) \leq {\mathrm{db}}_v$ for every $v \in V$. By minimality, the optimum $H$ is always a tree. We solve an open problem from \cite{GKL22} and \cite{guy1} by giving a polylogarithmic bicriteria algorithm as long as the treewidth is bounded. In particular, we prove the following theorem.
\begin{theorem} \label{thm:GST-bd-treewidth}
There is an $n^{O({\mathrm{tw}} \log {\mathrm{tw}})}$-time randomized algorithm for the Group Steiner Tree with Degree Bounds problem on bounded treewidth graphs which has $O(\log^2 n)$ approximation ratio and $O(\log^2 n)$-degree violation.
\end{theorem}
In order to achieve this result, we introduce and study a ``tree labeling'' problem in Section~\ref{sec:tree-labeling}. There is a rooted full binary tree, and we need to give a label $\ell_u$ for each node $u$ in the tree from a subset $L_u$ of potential labels. For every internal node $u$ with two children $v$ and $v'$ there are some consistency constraints on the labels, which say that the triple $(\ell_u, \ell_v, \ell_{v'})$ must be from some given subset $\Gamma_u \subseteq L_u \times L_v \times L_{v'}$. Then we have some covering constraints, each specified by a set $S$ of labels: the constraint requires that at least one node has its label in $S$. Finally, we have many cost constraints. For each such constraint, a label is given a cost, and we require that the total cost of all labels used is at most 1. For this problem we give a randomized algorithm that outputs a labeling that satisfies all consistency constraints, and approximately satisfies the covering and cost constraints with reasonable probability, assuming the given instance is feasible. It runs in polynomial time when the depth of the tree is $O(\log n)$ and each $L_u$ has $O(1)$ size. The main techniques of the algorithm are adaptations of the LP-rounding algorithm in \cite{GKL22} for their degree-bounded network design problem. We introduce the tree labeling problem as a host for these techniques, and adapt them for the problem.
We then show in Section~\ref{sec:reduction} that we can reduce Group Steiner Tree with Degree Bounds on bounded treewidth graphs to this tree labeling problem. Let ${\mathrm{tw}}$ be the treewidth of the graph; it is known from \cite{Bod88} that we can assume the decomposition tree of $G$ is an $O(\log n)$-depth binary tree, with bag size $O({\mathrm{tw}})$. This decomposition tree will be the tree in the tree-labeling instance. For each bag in the tree, a label will contain the set of edges we take from the bag, and some connectivity information on the vertices in the bag. We define the consistency constraints so that if they are satisfied, then the connectivity information is correct. A group being connected can be captured by a covering constraint in the tree labeling instance, and the edge cost constraint and degree constraints can be formulated as cost constraints in the instance. Using the algorithm for the tree labeling instance, we obtain a tree with small cost that satisfies degree bounds approximately, and connects a group with reasonable probability. The final output then is obtained by running the procedure many times and taking the union.
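To fix ideas about the tree labeling problem, note that the consistency constraints alone are easy: they can be checked (and a consistent labeling extracted) by a simple bottom-up dynamic program over the binary tree; it is their interaction with the global covering and cost constraints that requires the rounding techniques above. The following Python sketch of the consistency check uses a tiny hand-made instance and an ad hoc data layout chosen only for illustration.
\begin{verbatim}
# A node is a tuple (labels, gamma, left, right); for an internal node,
# gamma is the set of allowed (label_u, label_left, label_right) triples.

def consistent_labels(node):
    labels, gamma, left, right = node
    if left is None:                       # leaf: every allowed label works
        return set(labels)
    L, R = consistent_labels(left), consistent_labels(right)
    return {a for (a, b, c) in gamma if a in labels and b in L and c in R}

leaf1 = ({0, 1}, None, None, None)
leaf2 = ({1}, None, None, None)
root = ({0, 1}, {(0, 0, 1), (1, 1, 1)}, leaf1, leaf2)
print(consistent_labels(root))   # root labels extendable to a consistent labeling
\end{verbatim}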
\subsection{Other Related Work}
For the survivable network design problem without any degree constraints, the classic result of Jain \cite{jain1} gives a $2$-approximation algorithm using the iterative rounding method.
In \cite{GKR1} an $O(\log^2 n)$ approximation is given for the Group Steiner problem on tree inputs,
and an $O(\log^3 n)$ for the Group Steiner problem (without degree constraints)
for general graphs. The approximation for trees
is almost the best possible, unless NP problems can be solved
in quasi-polynomial time \cite{eran1}. \cite{GKL22} gave a bicriteria
approximation for the Group Steiner Tree Problem with degree bounds on tree inputs, with approximation ratio $O(\log^2 n)$ and degree violation $O(\log n)$. Both bounds are nearly optimal \cite{eran1, irit7}.
In \cite{DV} the authors gave
an $O(\log^2 n)$-approximation
ratio for Group Steiner problem on bounded treewidth graphs
(without degree bounds).
In \cite{guy1} an $O(\log^2 n)$ approximation is given for the Group Steiner problem with minimum maximal degree, but without costs.
\subsection{Notation}
Given a graph $H$ and a vertex $v$ in $H$, we shall use $\delta_H(v)$ to denote the set of edges in $H$ incident to $v$, and $d_H(v) = |\delta_H(v)|$ to denote its degree. Given a rooted tree $T$ and a vertex $v$ in $T$, we use $\Lambda_T(v)$ to denote the set of children of $v$ in $T$, and $\Lambda^*_T(v)$ to denote the set of descendants of $v$ in $T$ (including $v$ itself). When $H$ and $T$ are clear from the context, we shall omit them in the subscript. For example, this happens when $H = G$ is the input graph.
For a real vector $z$ over some domain, and a subset $S$ of elements in the domain, we define $z(S):=\sum_{i \in S}z_i$ to denote the sum of $z$ values of elements in $S$.
\iffalse
\section{Definitions}
We want to consider classical network design problems (spanning tree, Steiner tree, Steiner forest, group Steiner tree, directed Steiner tree, etc.) with a new objective function: the $\ell_p$-norm of the degree vector. That is, given any graph $G$, we can define the degree vector $d_G$ to be the $n$-dimensional vector in which the entry for vertex $v$ is precisely $d_G(v)$, the degree of $v$ in $G$. Then we will use as the objective function the $\ell_p$-norm of this vector, i.e., we will try to find a feasible subgraph $H$ which minimizes optimize $\|d_H\|_p$, and will let $\|H\|_p$ denote $\|d_H\|_p$. Note that the $\|H\|_1$ is exactly twice the number of edges in $H$, so is equivalent to the unweighted (or uniform weighted) version of these classical problems. On the other hand, $\|H\|_{\infty}$ is equal to the max degree, so we get the classical min-max degree objective. We will consider general $\ell_p$-norms, but will be particularly concerned with the $\ell_2$-norm, as it is a natural place in between these two extremes.
\section{Spanning Tree}
Let's consider spanning trees. The $\ell_1$-objective is trivial (it is always exactly $2n-2$), and the $\ell_{\infty}$-objective has a $+1$-approximation from F\"urer and Raghavachari~\cite{FR94}. Let's think about the $\ell_2$-objective.
\subsection{Algorithm}
We first write the obvious convex relaxation:
\begin{align*}
\min\quad &\sum_{v \in V} y_v^2& \\
\text{s.t.}\quad & \sum_{e \sim v} x_e = y_v & \forall v\in V \\
&\vec{x} \in \text{spanning tree polytope} & \\
\end{align*}
This can be solved in polynomial time with the Ellipsoid algorithm, getting an optimal solution $(x^*, y^*)$. Clearly this is a relaxation, so $\left(\sum_{v \in V} (y^*_v)^2\right)^{1/2} \leq OPT$. We now treat the $y^*$ values as ``degree bounds"\mdnote{Does this work when they're fractional? Presumably not -- could change initial bounds by $-\epsilon$ then do rounding to get lossless algorithm}, and write a new LP:
\begin{align*}
\min\quad &\sum_{v \in V} x_e & \\
\text{s.t.}\quad & \sum_{e \sim v} x_e = y_v & \forall v\in V \\
& y_v \leq y^*_v & \forall v \in V \\
&\vec{x} \in \text{spanning tree polytope} & \\
\end{align*}
We actually don't really care about the objective function: the point is that this LP defines a polytope which is nonempty (since it contains $(x^*, y^*)$) and for which we can find a basic feasible solution / extreme point $(x,y)$ in which $y_v \leq y^*_v$ for all $v \in V$. Once we have such a solution, we apply the Singh and Lau~\cite{SL15} $+1$-rounding to it to get an integral solution in which every vertex has degree at most $y_v + 1$ . Note that this crucially uses the fact that every $y_v \geq 1$. We should also check that fractional $y_v$ don't hurt us, e.g., if $y_v = 2.1$ then when we round we'll get something where $v$'s degree is at most $3$, not at most $4$.
\subsection{Analysis}
Let $T$ be the tree which is returned.
It's trivial to see that this is a $2$-approximation: $d_T(v) \leq y_v+1 \leq 2y_v$ for all $v \in V$, and thus
\begin{align*}
\|T\|_2 &= \sqrt{\sum_{v \in V} d_T(v)^2} \leq \sqrt{\sum_{v \in V} (y_v+1)^2} \leq \sqrt{\sum_{v \in V} (2y_v)^2} \leq 2\sqrt{\sum_{v \in V} (y^*_v)^2} \leq 2 \cdot OPT.
\end{align*}
We claim that it is actually a $\sqrt{5}/2$-approximation. Intuitively, this is because we have the property that the rounding still gives a tree, so the total number of edges is unchanged. So if in the rounding the degree of some node goes up, then a vertex with smaller degree must have their degree go down (it can't be a node of bigger degree or else we would have a solution better than the fractional solution). So the ``worst case" is intuitively a Hamiltonian path, which gets ``rounded" to about $n/2$ nodes with degree $3$ and about $n/2$ nodes of degree $1$ (rather than about $n$ nodes of degree $2$). This would give approximation of about
\begin{align*}
\frac{\sqrt{9n/2 + n/2}}{\sqrt{4n}} = \sqrt{5}/2
\end{align*}
\subsection{To Do}
\begin{itemize}
\item Nail down the $\sqrt{5}/2$ bound more formally
\item APX-hardness? Current hardness from Hamiltonian path, so only NP-hard.
\end{itemize}
\section{Steiner Tree}
Let's start out fixing notation: we're given a graph $G = (V, E)$ and a set of terminals $S \subseteq V$. Our goal is to find a subgraph $T$ (WLOG a tree) which connects all the terminals which minimizes $\|T\|_2$.
Currently for Steiner Tree there is a small constant approximation for the $\ell_1$-objective (number of edges) and a $+1$-approximation using iterative rounding for the $\ell_{\infty}$ (max degree) objective. So the obvious question is whether the $\ell_2$ objective also allows for small constants, or whether something is fundamentally different.
We thought we had a constant in two different way, but both approaches turned out to be flawed. So currently we have only relatively weak bounds. We give two algorithms: the first works well when $OPT$ is large while the second works well when $OPT$ is small, so we can trade them off.
\subsection{Algorithms}
\subsubsection{Algorithm 1} Our first algorithm is based off of trying to use the same approach that we used for Spanning Tree: solve the convex relaxation (obviously with steiner tree constraints rather than spanning tree constraints), and then round using Singh and Lau (since they also have a $+1$-approximation for Steiner Tree). The issue here is that when we solve the convex relaxation, the fractional degrees $y^*$ that we get back might be less than $1$. Let $A = \{v \in V : y^*_v < 1\}$. We create a new degree vector $\hat y$ by setting $\hat y_v = 1$ if $v \in A$ and $\hat y_v = y^*_v$ if $v \not\in A$. We then solve the LP with the $\hat y$ degree bounds to get an extreme point, and then apply the $+1$-rounding to get a tree $T$. This clearly gives a feasible solution. The total cost of the algorithm is:
\begin{align*}
\|T\|_2 &= \left( \sum_{v \in V} d_T(v)^2\right)^{1/2}
\leq \left( \sum_{v \in V} (\hat y_v + 1)^2 \right)^{1/2}
\leq \left( \sum_{v \in A} 2^2 + \sum_{v \not\in A} (y^*_v+1)^2\right)^{1/2} \\
&\leq \left( \sum_{v \in A} 4 + \sum_{v \not\in A} (2y^*_v)^2 \right)^{1/2} \leq \left(4n + 4\cdot OPT^2\right)^{1/2} \leq O(OPT + \sqrt{n})
\end{align*}
\subsubsection{Algorithm 2} Our second algorithm is based on a very different approach: we set all terminals to have weight $0$ and all non-terminals to have weight $1$, and then use as a black box the $O(\log |S|)$-approximation algorithm of Klein and Ravi for minimum node-weighted steiner tree, getting back a tree $T$. Because of how we set the weights, this is an $O(\log |S|)$-approximation for the problem of minimizing the number of Steiner nodes.
Clearly this gives a feasible solution. To analyze its cost, we first need some observations.
\begin{lemma}
$|S| \leq OPT^2$.
\end{lemma}
\begin{proof}
In any solution every terminal must have degree at least $1$, and thus $OPT \geq \left( \sum_{v \in S} 1^2\right)^{1/2} = |S|^{1/2}$.
\end{proof}
\begin{lemma}
$\sum_{v : d_T(v) \geq 3} d_T(v)^2 \leq O(OPT^4)$
\end{lemma}
\begin{proof}
We first claim that $\sum_{v : d_T(v) \geq 3} d_T(v) \leq O(|S|)$. To see this, note that if we contract all degree $2$ nodes we are left with a tree in which all leaves are terminals and there are no degree $2$ nodes. Thus the number of nodes in this tree is $O(|S|)$, so there are $O(|S|)$ edges in the tree.
Now it's just a simple calculation:
\begin{align*}
\sum_{v : d_T(v) \geq 3} d_T(v)^2 \leq \left(\sum_{v : d_T(v) \geq 3} d_T(v)\right)^2 \leq O(|S|^2) \leq O(OPT^4)
\end{align*}
as claimed.
\end{proof}
Let $A \subseteq V \setminus S$ be the Steiner nodes of $T$.
\begin{lemma}
$|A| \leq 2OPT^2 \log OPT$
\end{lemma}
\begin{proof}
The optimal solution must use $\Omega(|A| / \log |S|)$ steiner nodes by the approximation ratio of Klein-Ravi. Each of these nodes will have degree at least $1$. Hence $OPT \geq \left(\frac{|A|}{\log |S|}\right)^{1/2}$. Since $|S| \leq OPT^2$, this implies that $|S| \leq OPT^2 \log (OPT^2) = 2 OPT^2 \log OPT$.
\end{proof}
\begin{theorem}
$\|T\|_2 \leq O(OPT^2)$
\end{theorem}
\begin{proof}
Putting the lemmas together implies that
\begin{align*}
\|T\|_2 &= \left(\sum_{v \in T} d_T(v)^2\right)^{1/2} \\
&= \left( \sum_{v : d_T(v) \leq 2} 4 + \sum_{v : d_T(v) \geq 3} d_T(v)^2\right)^{1/2} \\
& \leq \left( 4(|S| + |A|) + O(OPT^4) \right)^{1/2} \\
& \leq \left( OPT^2 + 2 OPT^2 \log OPT + O(OPT^4)\right)^{1/2} \\
& = O(OPT^2)
\end{align*}
as claimed.
\end{proof}
\subsubsection{Combining the algorithms}
Now we have an algorithm which returns a solution of cost $O(OPT + \sqrt{n})$ and an algorithm which returns a solution of cost $O(OPT^2)$. So by running both and taking whichever is better, we get a solution with approximation ratio at most
\begin{align*}
\frac{\min(O(OPT + \sqrt{n}), O(OPT^2))}{OPT} \leq O(n^{1/4})
\end{align*}
\subsection{To do}
\begin{itemize}
\item Fundamental question: is $\ell_2$-norm asymptotically harder than $\ell_1$ and $\ell_{\infty}$?
\item In other words: can we prove some super constant hardness?
\begin{itemize}
\item Approach: first prove APX-hardness, then try to use some kind of graph product or layering to boost this.
\end{itemize}
\end{itemize}
\fi
\section{\texorpdfstring{$\ell_p$}{Lp}-Survivable Network Design} \label{sec:lp}
In this section, we give our iterative rounding algorithm for $\ell_p$-survivable network design problem. Recall that we are given a graph $G = (V, E)$ with cost vector $c \in \mathbb{R}_{\geq 0}^E$, a connection requirement vector $r \in \mathbb{Z}_{\geq 0}^{{\binom{V}{2}}}$, and a bound $A$ on the $\ell_p$ norm of the degree vector.
\begin{definition}
We say a polytope ${\mathcal{P}} \subseteq [0, 1]^E$ is good if it is upward-closed \footnote{This means for every $x \in {\mathcal{P}}$ and $x' \in [0, 1]^E$ with $x' \geq x$, we have $x' \in {\mathcal{P}}$.} and the following holds: For every vector $x \in \{0, 1\}^E$, we have that $x \in {\mathcal{P}}$ if and only if the graph $(V, \{e \in E: x_e = 1\})$ satisfies the connection requirements.
\end{definition}
Notice that the above definition does not capture the degree constraints. This is done using the following definition. For a real vector $B \in [1, \infty]^V$, we define ${\mathcal{Q}}_B := \{x \in [0, 1]^E: \forall v \in V, x(\delta(v)) \leq B_v\}$ to be the set of all vectors satisfying the degree bounds defined by $B$.
\begin{definition}
\label{defn:integral}
Let $\alpha \geq 1$ and $\beta \geq 0$ be two real numbers and ${\mathcal{P}}$ be a good polytope. We say ${\mathcal{P}}$ is $(\alpha,\beta)$-integral if for every $B \in [1, \infty]^V$, every non-integral extreme point $x$ of ${\mathcal{P}} \cap {\mathcal{Q}}_B$ satisfies at least one of the following two properties:
\begin{enumerate}[label=(\ref{defn:integral}\alph*), leftmargin=*]
\item there exists an edge $e \in E$ with $1/\alpha \leq x_e < 1$, \label{event:round}
\item there exists a vertex $v \in V$ such that $x(\delta(v)) = B_v$ and $|\{e \in \delta(v): x_e > 0\}| \leq B_v + \beta$. \label{event:relax}
\end{enumerate}
\end{definition}
It is well known that for Survivable Network Design there is a $(2, 3)$-integral polytope ${\mathcal{P}}$ \cite{LNS09}. For the special case of the spanning tree problem, i.e., $r \equiv 1$, there is a $(1, 1)$-integral polytope \cite{SL15}.
We will use these polytopes in our algorithm, and will show that their existence implies good approximation algorithms. More formally, we prove the following theorem.
\begin{theorem} \label{thm:polytope-alg}
Assuming the existence of an $(\alpha, \beta)$-integral polytope, there is a randomized algorithm which outputs a subgraph $H$ of $G$ satisfying the connection requirements. The expected cost of $H$ is at most $\alpha \cdot {\mathrm{opt}}$ and the expectation of the $\ell_p$-norm of the degree vector is at most $\alpha^{1/p}(\alpha+\beta)^{1-1/p}\cdot A$; recall that ${\mathrm{opt}}$ is the optimum value of the instance.
\end{theorem}
Note that this theorem, together with the existence of a $(2,3)$-integral polytope for the general case and a $(1,1)$-integral polytope for the spanning tree case, imply Theorem~\ref{thm:network-design}. So we focus on proving Theorem~\ref{thm:polytope-alg}.
\subsection{The Convex Program}
Define a function $f:\mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ as follows: $f(x) = \begin{cases}
x & \text{if } x \in [0, 1]\\
x^p & \text{if } x > 1
\end{cases}$.
Figure~\ref{fig:f} shows this function for $p = 2$. This is a convex function for $p \geq 1$.
Let ${\mathcal{P}}$ be an $(\alpha, \beta)$-integral polytope. The following is our convex programming relaxation for the problem:
\begin{equation}
\min \sum_{e \in E}c_ex_e \qquad \text{s.t.} \qquad x \in {\mathcal{P}}, \qquad
\sum_{v \in V} f(x(\delta(v))) \leq A^p.
\label{LP}
\end{equation}
Recall that using our notation, $x(\delta(v))$ is the sum of $x$ values of edges incident to $v$ in $G$.
\eqref{LP} is a convex program and can be solved efficiently. Since the indicator vector of the optimum subgraph $H$ satisfies all the constraints, the value of the convex program is at most $\text{opt}$.
We note that if we instead used the function $f(x) = x^p$ (i.e., without handling the $0 \leq x \leq 1$ case separately), we would still have a convex relaxation of our problem. However, it is not hard to show that this relaxation has an extremely large integrality gap (even if we are allowed to violate the $\ell_p$-norm constraint by a polylogarithmic factor). Treating $0 \leq x \leq 1$ differently from $x > 1$ is one of the key ideas in our approximation algorithm.
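Two quick observations about $f$ are illustrated by the Python sketch below (the numbers are only illustrative). First, for $x\ge0$ and $p\ge1$ we have $f(x)=\max(x,x^{p})$, which makes the convexity of $f$, and hence of the constraint in \eqref{LP}, transparent. Second, replacing $f$ by $x\mapsto x^{p}$ lets a fractional solution spread tiny degrees over many vertices at almost no charge, whereas $f$ charges them linearly: $n$ vertices of fractional degree $1/n$ together contribute $1$ rather than $n^{1-p}$.
\begin{verbatim}
import numpy as np

p = 2.0
f = lambda x: np.maximum(x, x ** p)     # equals x on [0,1] and x^p on (1, infinity)

x = np.linspace(0.0, 3.0, 301)
assert np.allclose(f(x), np.where(x <= 1.0, x, x ** p))

# n vertices, each with fractional degree 1/n (a spread-out fractional solution):
n = 10 ** 4
frac_degrees = np.full(n, 1.0 / n)
print((frac_degrees ** p).sum())        # 1/n: the naive x^p constraint barely notices them
print(f(frac_degrees).sum())            # 1.0: the modified constraint charges them linearly
\end{verbatim}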
\begin{figure}
\centering
\begin{subfigure}[b]{0.28\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{f.pdf}
\caption{The function $f$ for $p = 2$.}
\label{fig:f}
\end{subfigure}\hspace{0.1\textwidth}
\begin{subfigure}[b]{0.6\textwidth}
\centering
\begin{algorithmic}[1]
\State Solve LP\eqref{LP} to obtain a solution $x$ \label{step:solve-LP}
\State let $B_v \gets \max\{x(\delta(v)), 1\}$ for every $v \in V$ \label{step:init-B}
\While{true} \label{step:while}
\State randomly choose an extreme point $x'$ of ${\mathcal{P}} \cap {\mathcal{Q}}_B$ such that $\E[x'] = x$ \label{step:x'}
\State $x \gets x'$ \label{step:x-gets-x'}
\If{$x$ is integral} \Return $x$ \EndIf \label{step:return}
\If {case \ref{event:round} happens for some $e = (u, v) \in E$}
\State $x_e \gets 1,
B_v \gets x(\delta(v)), B_u \gets x(\delta(u))$ \label{step:round-e}
\Else \Comment{case \ref{event:relax} happens for some $v$}
\State $B_v \gets \infty$ \label{step:relax}
\EndIf
\EndWhile
\end{algorithmic}
\caption{Iterative Rounding Algorithm for Network Design.}
\label{alg:iterative-rounding}
\end{subfigure}
\caption{The function $f$ and the iterative rounding algorithm.}
\end{figure}
\subsection{The Iterative Rounding Algorithm}
Our iterative rounding algorithm is described in Figure~\eqref{alg:iterative-rounding}. In Step~\ref{step:solve-LP}, we solve the convex relaxation~\eqref{LP} to obtain an extreme solution $x$, which can be done in polynomial time using standard techniques. Then in Step~\ref{step:init-B} we define $B_v = \max\{x(\delta(v)), 1\}$ for every $v$ to be the upper bound on the degree of $v$. So, before Loop~\ref{step:while}, we have $x \in {\mathcal{P}} \cap {\mathcal{Q}}_B$. We shall maintain this property before and after each iteration of the loop.
In each iteration of Loop~\ref{step:while}, we randomly choose an extreme point $x'$ of ${\mathcal{P}} \cap {\mathcal{Q}}_B$ such that $\E[x'] = x$ (Step~\ref{step:x'}) and then update $x$ to $x'$ (Step~\ref{step:x-gets-x'}). This is possible since at the beginning of the iteration we have $x \in {\mathcal{P}} \cap {\mathcal{Q}}_B$. If $x$ is integral, we then return $x$ in Step \ref{step:return}. If we did not return, then since ${\mathcal{P}}$ is $(\alpha, \beta)$-integral, either \ref{event:round} or \ref{event:relax} happens. In the former case, we update $x_e$ to $1$, and change $B_v$ and $B_u$ for the two end vertices $u, v$ of $e$ so that we still have $B_{v'} = \max\{x(\delta(v')), 1\}$ for every $v' \in V$ (Step~\ref{step:round-e}). In the latter case, we change $B_v$ to $\infty$ so that there will be no degree constraint for $v$ from now on. Notice that in either case, we maintain the invariant that $x \in {\mathcal{P}} \cap {\mathcal{Q}}_B$ as ${\mathcal{P}}$ is upward-closed.
Notice that once $x_e$ becomes $0$ or $1$ in some iteration, it will remain unchanged. This holds since for $\E[x'_e] = x_e \in \{0, 1\}$ to hold, we must always have $x'_e = x_e$. When the algorithm terminates, it returns an integral $x$ which satisfies the connectivity requirements. This holds since we have $x \in {\mathcal{P}}$ and ${\mathcal{P}}$ is good. The algorithm will terminate in $O(|E|)$ iterations since in every iteration, we either fix the value of some $x_e$ to $1$, or change some $B_v$ from a finite number to $\infty$.
\subsection{Analysis of the Algorithm}
We now begin to analyze the algorithm. As discussed, the algorithm will terminate with a subgraph which satisfies the connectivity requirements.
To prove Theorem~\ref{thm:polytope-alg}, we need to analyze the total cost and the $\ell_p$-norm of the degrees.
In Step~\ref{step:round-e}, we say that we \emph{round} the edge $e$. In Step~\ref{step:relax}, we say we \emph{relax} the vertex $v$.
At any time during the algorithm, we define a vector $\bar x \in [0, 1]^E$ as follows. If $e$ has not been rounded yet, then let $\bar x_e = x_e$. Otherwise, let $\bar x_e$ be the value of $x_e$ right before the Step~\ref{step:round-e} in which we round $e$. Thus, from that moment on, $\bar x_e$ remains unchanged.
Let $T$ be the number of iterations we run Loop~\ref{step:while}; notice that this is a random variable. For every integer $t \in [0, T]$, we let $x^t, \bar x^t, B^t$ be the values of $x, \bar x, B$ at the end of the $t$-th iteration of Loop~\ref{step:while} (with $t = 0$ referring to the values right before the loop). So $x^T$ is the output of the algorithm.
\begin{obs}
\label{obs:barx}
The following statements are true.
\begin{enumerate}[leftmargin=*, label=(\ref{obs:barx}\alph*), itemsep=0pt]
\item During Loop~\ref{step:while}, we always have $\bar x_e \leq x_e \leq \alpha \bar x_e$ for every $e \in E$. \label{property:barx-to-x}
\item Assume $x^0(\delta(v)) < 1$ for a vertex $v \in V$. Then at the first moment when $x(\delta(v)) \geq 1$ holds, we have $\bar x(\delta(v)) \leq 1$. \label{property:delta-v-becomes-1}
\item $\bar x(\delta(v))$ does not change from the first moment $x(\delta(v)) \geq 1$ holds, until the moment $v$ is relaxed, or the end of the algorithm if this does not happen. \label{property:barx-no-change}
\end{enumerate}
\end{obs}
\begin{proof}
$\bar x_e \leq x_e$ by the definition of $\bar x_e$. Moreover, $x_e \leq \alpha \bar x_e$: if $x_e > \bar x_e$, then $e$ has been rounded, so $x_e = 1$ and $\bar x_e \geq \frac1\alpha$. So \ref{property:barx-to-x} holds.
To prove \ref{property:delta-v-becomes-1}, we consider two scenarios. In the first scenario, the moment is after Step~\ref{step:x-gets-x'} in some iteration. In this scenario, $\bar x(\delta(v)) = x(\delta(v)) = 1$ since $B_v = 1$ at the moment. In the second scenario, the moment is after we round some edge $e \in \delta(v)$ in Step~\ref{step:round-e}. In this case $\bar x(\delta(v))$ is the same as $x(\delta(v))$ before the step, which is strictly less than 1.
\ref{property:barx-no-change} holds since we maintained $B_v = x(\delta(v))$ from the moment $x(\delta(v))$ becomes at least 1. If $\bar x_e \neq x_e$ at some time, it must be the case that $x_e = 1$. In this case, both $x_e$ and $\bar x_e$ will not change in the future.
\end{proof}
We can now analyze the expected cost of the algorithm. First, though, we will need a structural result.
\begin{lemma}
\label{lemma:martingale}
For every edge $e \in E$, the sequence ${\bar x}^0_e, {\bar x}^1_e, \cdots, {\bar x}^T_e$ is a martingale.
\end{lemma}
\begin{proof}
Focus on an iteration $t \geq 1$ and an edge $e \in E$, and fix the sequence ${\bar x}^0_e, {\bar x}^1_e, \cdots, {\bar x}^{t-1}_e$. For simplicity we use $\E'[\cdot]$ to denote $\E[\cdot | {\bar x}^0_e, {\bar x}^1_e, \cdots, {\bar x}^{t-1}_e]$. We need to prove $\E'[\bar x^t_e] = \bar x^{t-1}_e$.
If we rounded $e$ before iteration $t$, then $\bar x^t_e = \bar x^{t-1}_e$ happens with probability 1. So, we can assume that $e$ has not been rounded before iteration $t$. In this case, $\bar x^{t-1}_e = x^{t-1}_e$.
Moreover, whether or not $e$ gets rounded during iteration $t$, $\bar x^t_e$ equals the value $x'_e$ chosen in Step~\ref{step:x'} of that iteration. Hence we have $\E'[{\bar x}^t_e] = \E'[x'_e] = x^{t-1}_e = {\bar x}^{t-1}_e$ by the way we define the distribution for $x'$ in Step~\ref{step:x'}. Therefore, ${\bar x}^0_e, {\bar x}^1_e, \cdots, \bar x^T_e$ is a martingale.
\end{proof}
\begin{corollary} \label{cor:cost}
$\E\left[\sum_{e \in E}c_ex^T_e\right] \leq \alpha \sum_{e \in E} c_e x^0_e$.
\end{corollary}
\begin{proof}
\begin{align*}
\E\left[\sum_{e \in E}c_ex^T_e\right] \leq \alpha \E\left[\sum_{e \in E}c_e\bar x^T_e\right] = \alpha \sum_{e \in E}c_e \bar x^0_e = \alpha \sum_{e \in E} c_e x^0_e.
\end{align*}
The inequality is by \ref{property:barx-to-x} and the first equality used Lemma~\ref{lemma:martingale}.
\end{proof}
Now that we understand the expected cost, it only remains to analyze the degree constraint. From now on we fix a vertex $v \in V$. We upper bound $x^T(\delta(v))$, which will in turn give an upper bound on $\E\left[(x^T(\delta(v)))^p\right]$. The main lemma we prove is
\begin{lemma}
\label{lemma:bound-degree}
For every $v \in V$, we have $\E\left[(x^T)^p(\delta(v))\right] \leq \alpha (\alpha + \beta)^{p-1} \cdot f(x^0(\delta(v)))$.
\end{lemma}
\begin{proof}
We first consider the case $x^0(\delta(v)) \geq 1$. Let $t$ be the iteration in which $v$ is relaxed, or let $t = T$ if $v$ is not relaxed during the algorithm. By Property~\ref{property:barx-no-change}, $\bar x(\delta(v))$ does not change until the end of iteration $t$. Then, we have $x^T(\delta(v)) \leq x^t(\delta(v)) + \beta \leq \alpha \bar x^t(\delta(v)) + \beta = \alpha \bar x^0(\delta(v)) + \beta = \alpha x^0(\delta(v)) + \beta$. Notice that this happens with probability 1.
Notice that $\E[x^T(\delta(v))] \leq \alpha \E [\bar x^T(\delta(v))] = \alpha \bar x^0(\delta(v)) = \alpha x^0(\delta(v))$ by Lemma~\ref{lemma:martingale}. Since $x^T(\delta(v)) \leq \alpha x^0(\delta(v)) + \beta$ with probability 1, we have $(x^T(\delta(v)))^p \leq x^T(\delta(v)) \cdot (\alpha x^0(\delta(v)) + \beta)^{p-1}$; taking expectations gives:
\begin{align*}
\E\left[(x^T(\delta(v)))^p\right] \leq \frac{\alpha x^0(\delta(v))}{\alpha x^0(\delta(v)) + \beta} (\alpha x^0(\delta(v)) + \beta)^p = \alpha x^0(\delta(v))(\alpha x^0(\delta(v)) + \beta)^{p-1}.
\end{align*}
This implies
\begin{align*}
\frac{\E\left[(x^T)^p(\delta(v))\right]}{f(x^0(\delta(v)))} \leq \frac{\alpha x^0(\delta(v))(\alpha x^0(\delta(v)) + \beta)^{p-1}}{(x^0(\delta(v)))^p} = \alpha \left(\alpha + \frac{\beta}{x^0(\delta(v))}\right)^{p-1}
\leq \alpha(\alpha+\beta)^{p-1}.
\end{align*}
Now we consider the second case: $x^0(\delta(v)) < 1$. Assume $x(\delta(v)) \geq 1$ happens at some time of the algorithm. By \ref{property:delta-v-becomes-1}, at the first moment when $x(\delta(v)) \geq 1$, we have $\bar x(\delta(v)) \leq 1$. By \ref{property:barx-no-change}, from the moment until the moment $v$ becomes relaxed (or until the end of the algorithm if $v$ is never relaxed), $\bar x(\delta(v))$ does not change. Therefore, immediately after $v$ becomes relaxed, we have $\bar x(\delta(v)) \leq 1$. Thus $x^T(\delta(v))$ is at most the value of $x(\delta(v)) + \beta$ at this moment, which is at most $\alpha\bar x(\delta(v)) + \beta \leq \alpha + \beta$. Again, we have $\E[x^T(\delta(v))] \leq \alpha x^0(\delta(v))$. So
\begin{align*}
\E\left[(x^T(\delta(v)))^p\right] \leq \frac{\alpha x^0(\delta(v))}{\alpha + \beta} (\alpha + \beta)^p = \alpha x^0(\delta(v))(\alpha + \beta)^{p-1}.
\end{align*}
Then,
\begin{align*}
\frac{\E\left[(x^T)^p(\delta(v))\right]}{f(x^0(\delta(v)))} \leq \frac{\alpha x^0(\delta(v))(\alpha + \beta)^{p-1}}{x^0(\delta(v))} = \alpha(\alpha+\beta)^{p-1}.
\end{align*}
Finally, consider the case where $x(\delta(v)) \geq 1$ never happens during the algorithm; that is, $x^T(\delta(v)) < 1$. In this case no edge in $\delta(v)$ is ever rounded, so $x^T(\delta(v)) = \bar x^T(\delta(v))$ and $\E\left[x^T(\delta(v))\right] = x^0(\delta(v))$ by Lemma~\ref{lemma:martingale}. As $x^T(\delta(v)) < 1$, we have $(x^T(\delta(v)))^p \leq x^T(\delta(v))$, and thus $\E\left[(x^T(\delta(v)))^p\right] \leq x^0(\delta(v)) = f(x^0(\delta(v)))$.
So, in all cases we have $\E\left[(x^T)^p(\delta(v))\right] \leq \alpha(\alpha+\beta)^{p-1} f(x^0(\delta(v)))$. This implies $\E[\sum_{v}(x^T(\delta(v)))^p] \leq \alpha(\alpha+\beta)^{p-1} A^p$.
\end{proof}
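Both displayed bounds in the proof use the elementary fact that a random variable $X$ with $0 \le X \le a$ almost surely satisfies $\E[X^p] \le \E[X]\, a^{p-1}$ for $p \geq 1$, since $X^p = X \cdot X^{p-1} \leq X \cdot a^{p-1}$. A quick numerical sanity check of this fact (not part of the argument) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
a, p = 3.0, 2.5
X = a * rng.beta(2.0, 5.0, size=200_000)           # any distribution supported on [0, a]
print(np.mean(X ** p), np.mean(X) * a ** (p - 1))  # E[X^p]  <=  E[X] * a^(p-1)
\end{verbatim}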
Corollary~\ref{cor:cost} and Lemma~\ref{lemma:bound-degree} imply Theorem~\ref{thm:polytope-alg}, which in turn implies Theorem~\ref{thm:network-design}.
\section{A Tree Labeling Problem}
\label{sec:tree-labeling}
In this section, we introduce a tree labeling problem to which we reduce the Group Steiner Tree problem with degree bounds on bounded-treewidth graphs. We are given a full binary tree ${\mathbf{T}} = ({\mathbf{V}}, {\mathbf{E}})$ rooted at ${\mathbf{r}} \in {\mathbf{V}}$.\footnote{It is not important to require the binary tree to be full; our algorithm works when some internal node has only one child. Assuming that every internal node has two children is only for notational convenience.} For every vertex $u \in {\mathbf{V}}$, we are given a finite set $L_u$ of \emph{labels} for $u$; we assume the sets $L_u$ are disjoint and let $L:= \union_{u \in {\mathbf{V}}}L_u$. The output is a labeling $\vec\ell = (\ell_u \in L_u)_{u \in {\mathbf{V}}}$ of the vertices ${\mathbf{V}}$ that satisfies the constraints described below.
\begin{itemize}
\item {\bf (consistency constraints)}\ \ For every internal node $u$ of ${\mathbf{T}}$ with two children $v$ and $v'$, we are given a set $\Gamma_u \subseteq L_u \times L_{v} \times L_{v'}$. A valid labeling $\vec \ell$ must satisfy $(\ell_u, \ell_v, \ell_{v'}) \in \Gamma_u$.
\item {\bf (covering constraints)}\ \ We are given $k$ subsets $S_1, S_2, \cdots, S_k \subseteq L$. A valid labeling $\vec \ell$ needs to satisfy that for every $t \in [k]$, $\ell({\mathbf{V}}) \cap S_t \neq \emptyset$, where $\ell({\mathbf{V}})$ is defined as $\{\ell_u: u \in {\mathbf{V}}\}$. In words, $\ell({\mathbf{V}})$ needs to intersect every $S_t$.
\item {\bf (cost constraints)}\ \ We are given $m \geq 0$ linear constraints defined by the costs $(c^i_\ell \in [0, 1])_{i \in [m], \ell \in L}$. For every $i \in [m]$, a valid labeling $\vec \ell$ needs to satisfy $\sum_{u \in {\mathbf{V}}}c^i_{\ell_u} \leq 1$. In words, there are $m$ types of resource, and we have 1 unit of each type. Setting the label of $u$ to $\ell$ will use $c^i_\ell$ units of type $i$-resource.
\end{itemize}
We say a labeling $\vec\ell = (\ell_u \in L_u)_{u \in {\mathbf{V}}}$ is \emph{consistent} if it satisfies the consistency constraints. Given a consistent labeling $\vec\ell$, we say it covers group $S_t$ if $\ell({\mathbf{V}}) \cap S_t \neq \emptyset$. We define its type-$i$ cost to be ${\mathrm{cost}}^i(\vec \ell):= \sum_{u \in {\mathbf{V}}}c^i_{\ell_u}$.
So a valid labeling $\vec \ell$ for the instance is a consistent one that covers all groups, and has ${\mathrm{cost}}^i(\vec\ell) \leq 1$ for every $i \in [m]$.
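To fix ideas, the following Python sketch checks whether a given labeling of a small instance is valid; the data layout (dictionaries keyed by the nodes of ${\mathbf{T}}$) is our own choice and is not part of the problem definition.
\begin{verbatim}
def is_valid_labeling(labeling, children, Gamma, groups, costs):
    """labeling: dict node -> chosen label (an element of L_u).
    children:  dict node -> (left child, right child), for internal nodes of T.
    Gamma:     dict node -> set of allowed triples (label_u, label_left, label_right).
    groups:    list of label sets S_1, ..., S_k.
    costs:     list of dicts; costs[i][label] = c^i_label in [0, 1]."""
    # consistency constraints
    for u, (v, w) in children.items():
        if (labeling[u], labeling[v], labeling[w]) not in Gamma[u]:
            return False
    # covering constraints: every group must contain some chosen label
    chosen = set(labeling.values())
    if any(not (S & chosen) for S in groups):
        return False
    # cost constraints: at most one unit of each resource type
    return all(sum(c.get(l, 0.0) for l in labeling.values()) <= 1.0 for c in costs)
\end{verbatim}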
Given a label tree instance, we let $n = |{\mathbf{V}}|$, $D$ be the height of ${\mathbf{T}}$ (the maximum number of edges in a root-to-leaf path in ${\mathbf{T}}$)
and $\Delta = \max_{u \in {\mathbf{V}}}|L_u|$ be the maximum size of any $L_u$. The main theorem we prove is the following:
\begin{theorem}
\label{thm:label-tree-main}
Assume we are given a feasible label tree instance $({\mathbf{T}} = ({\mathbf{V}}, {\mathbf{E}}), {\mathbf{r}}, (L_u)_u, \break (\Gamma_u)_u, (S_t)_{t \in [k]}, A \in [0, 1]^{m \times L})$, i.e., there is a valid labeling. There is a randomized algorithm that in time $\mathrm{poly}(n) \cdot \Delta^{O(D)}$ outputs a consistent labeling $\vec \ell$ such that the following holds.
\begin{enumerate}[label=(\ref{thm:label-tree-main}\alph*), leftmargin=*]
\item For every $t \in [k]$, we have $\Pr[\vec \ell \text{ covers group }S_t] \geq \frac{1}D$. \label{property:label-tree-covering}
\item For every $i \in [m]$, we have $\E\big[\exp\big(\ln(1 + \frac{1}{2D}) \cdot {\mathrm{cost}}^i(\vec \ell)\big)\big] \leq 1+\frac1D$.
\label{property:label-tree-cost}
\end{enumerate}
\end{theorem}
Property~\ref{property:label-tree-cost} gives a tail concentration bound on ${\mathrm{cost}}^i(\vec \ell)$. The remaining part of this section is dedicated to the proof of Theorem~\ref{thm:label-tree-main}.
\subsection{Construction of a super-tree $T^\circ$}
In this section, we construct a rooted tree $T^\circ = (V^\circ, E^\circ)$ of size $O(n)\Delta^{O(D)}$ such that a consistent labeling of ${\mathbf{T}}$ corresponds to what we call \emph{a consistent sub-tree}.
So we can reduce the problem to finding the latter object. The root of $T^\circ$ is $r$. Each internal node of $T^\circ$ is either a \emph{selector node} or a \emph{copier node}; their meanings will be clear soon. Each node $p \in V^\circ$ is \emph{associated with} a node $u$ in ${\mathbf{T}}$. Each non-root selector node or leaf node is \emph{associated with} a label $\ell \in L_u$. We shall use $p$ and $q$ and their variants to denote nodes in $T^\circ$, and $u$ and $v$ and their variants to denote nodes in ${\mathbf{T}}$.
The algorithm for constructing $T^\circ$ is described in Algorithm~\ref{alg:construct-T-circ}, which calls the procedure $\mathsf{construct\mhyphen tree}$ described in Algorithm~\ref{alg:construct-tree}. See Figure~\ref{fig:tree} for an illustration of the construction of $T^\circ$ from ${\mathbf{T}}$. For a node $p \in V^\circ$, we use $\Lambda(p)$ to denote the set of children of $p$ in $T^\circ$, and $\Lambda^*(p)$ to denote the set of descendants of $p$ in $T^\circ$, including $p$ itself.
\begin{algorithm}[H]
\caption{Main algorithm for the construction of $T^\circ$}
\label{alg:construct-T-circ}
\begin{algorithmic}[1]
\State create a node $r$ associated with ${\mathbf{r}}$ as the root of $T^\circ$, and let $r$ be a \emph{selector node}
\For{every $\ell \in L_{{\mathbf{r}}}$}:
\State create a child $p$ of $r$, associated with node ${\mathbf{r}}$ and label $\ell$
\State call $\mathsf{construct\mhyphen tree}(p, {\mathbf{r}}, \ell)$
\EndFor
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[H]
\caption{$\mathsf{construct\mhyphen tree}(p, u, \ell)$ \Comment{$p \in V^\circ, u \in {\mathbf{V}}, \ell \in L_u$}}
\label{alg:construct-tree}
\begin{algorithmic}[1]
\If{$u$ has no children} \Return \Comment{$p$ is a leaf node.}\EndIf
\State let $p$ be a \emph{selector node},
let $v$ and $v'$ be the two children of $u$ in ${\mathbf{T}}$
\For{every $\ell' \in L_v, \ell'' \in L_{v'}$ such that $(\ell, \ell', \ell'') \in \Gamma_u$}
\State create a child $p'$ of $p$, associated with $u$, let $p'$ be a \emph{copier node},
\State create two children $q$ and $q'$ of $p'$, associate $q$ with node $v$ and label $\ell'$, associate $q'$ with node $v'$ and label $\ell''$
\State call $\mathsf{construct\mhyphen tree}(q, v, \ell')$ and $\mathsf{construct\mhyphen tree}(q', v', \ell'')$
\EndFor
\end{algorithmic}
\end{algorithm}
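For illustration, the construction can also be written directly in Python; the sketch below mirrors Algorithms~\ref{alg:construct-T-circ} and~\ref{alg:construct-tree}, storing the nodes of $T^\circ$ as a flat list of records (the field names are our own).
\begin{verbatim}
def build_super_tree(root, children, labels, Gamma):
    """children: dict u -> (v, v') for internal nodes of T; labels: dict u -> list of labels;
    Gamma: dict u -> set of allowed triples.  Returns the node records of T^circ."""
    nodes = []                                   # records: {"kind", "assoc", "label", "parent"}

    def new_node(kind, assoc, label, parent):
        nodes.append({"kind": kind, "assoc": assoc, "label": label, "parent": parent})
        return len(nodes) - 1                    # index of the new node in T^circ

    def construct_tree(p, u, ell):
        if u not in children:                    # u is a leaf of T, so p stays a leaf of T^circ
            return
        nodes[p]["kind"] = "selector"
        v, vp = children[u]
        for ell1 in labels[v]:
            for ell2 in labels[vp]:
                if (ell, ell1, ell2) in Gamma[u]:
                    cp = new_node("copier", u, None, p)
                    q = new_node("leaf", v, ell1, cp)
                    qp = new_node("leaf", vp, ell2, cp)
                    construct_tree(q, v, ell1)
                    construct_tree(qp, vp, ell2)

    r = new_node("selector", root, None, None)   # the root of T^circ
    for ell in labels[root]:
        p = new_node("leaf", root, ell, r)
        construct_tree(p, root, ell)
    return nodes
\end{verbatim}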
\begin{figure}
\centering
\iflipics
\includegraphics[scale=0.81]{tree.pdf}
\else
\includegraphics[scale=1]{tree.pdf}
\fi
\caption{An example of the construction of $T^\circ$. The tree on the left side is ${\mathbf{T}}$, and the tree on the right side is $T^\circ$. The labels of the nodes in ${\mathbf{T}}$ are shown beside them. In $T^\circ$, selectors, copiers and leaves are denoted by empty circles, solid circles and empty squares, respectively. The nodes in the two yellow polygons are associated with ${\mathbf{r}}$ and $\mathrm{a}$, respectively. The numbers in the circles and squares indicate the labels associated with the nodes. In the example, the triples in $\Gamma_{\mathbf{r}}$ with the first coordinate being $1$ are $(1,4, 6), (1, 5, 6)$ and $(1, 5, 7)$. The triples in $\Gamma_{\mathrm{a}}$ with the first coordinate being $5$ are $(5, 9, 11), (5, 9, 12)$ and $(5, 10, 11)$.}
\label{fig:tree}
\end{figure}
Now we can define consistent sub-trees of $T^\circ$:
\begin{definition}[Consistent sub-trees]
Given a sub-tree $T$ of $T^\circ$ that contains $r$, we say $T$ is \emph{consistent} if the following conditions hold.
\begin{itemize}[itemsep=0pt]
\item Every selector node $p$ in $T$ has exactly one child in $T$.
\item If $p$ is a copier node in $T$, then both of its children in $T^\circ$ are in $T$.
\end{itemize}
\end{definition}
The definition explains the names ``selector'' and ``copier'': a selector node $p$ in $T$ needs to select one of its children in $T^\circ$ and add it to $T$, and the children of a copier node $p$ will follow the node $p$ to enter $T$.
It is easy to see a one-to-one correspondence between consistent labelings $\vec \ell = (\ell_u \in L_u)_{u \in {\mathbf{V}}}$ of ${\mathbf{T}}$, and consistent sub-trees $T$ of $T^\circ$. Given a consistent labeling $\vec \ell$, the corresponding sub-tree $T$ of $T^\circ$ can be constructed as follows. First, we add $r$ and its child $p$ associated with label $\ell_{\mathbf{r}}$ to $T$. Then we grow the tree from $p$ using a recursive procedure. Assume $p$ is associated with node $u$ in ${\mathbf{T}}$ and label $\ell \in L_u$. If $u$ is a leaf, we stop the procedure. Otherwise let $v$ and $v'$ be the two children of $u$, then we add the copier child $p'$ of $p$ that corresponds to the tuple $(\ell_u, \ell_v, \ell_{v'})$ to $T$. We also add its two children $q$ and $q'$ to $T$. Then we run the procedure recursively over $q$ and $q'$. Conversely, given a consistent sub-tree $T$ of $T^\circ$, we can recover a consistent labeling $\vec \ell$ of ${\mathbf{T}}$.
For convenience, we extend the costs $(c^i_\ell)_{i \in [m], \ell \in L}$ to vertices in $V^\circ$: For every non-root selector node or leaf node $p \in V^\circ$ associated with a label $\ell$, we define $c^i_p = c^i_\ell$ for every $i \in [m]$. For the root or a copier node $p$, we define $c^i_p = 0$. For a consistent sub-tree $T = (V, E)$ of $T^\circ$, and $i \in [m]$, we define its type-$i$ cost to be ${\mathrm{cost}}^i(T) = \sum_{p \in V}c^i_p$.
This will be the same as ${\mathrm{cost}}^i(\vec \ell)$, for the labeling $\vec \ell$ corresponding to $T$.
We also extend the groups $S_1, S_2, \cdots, S_k$ to node sets in $T^\circ$: for every $t \in [k]$, $S'_t$ is the set of nodes $p \in V^\circ$ whose associated label is in $S_t$. Then, a consistent labeling $\vec \ell$ covers a group $S_t$ if and only if the corresponding sub-tree $T = (V, E)$ covers $S'_t$, namely, $V \cap S'_t \neq \emptyset$.
Therefore, we are guaranteed that there is a consistent sub-tree $T^*$ of $T^\circ$ that covers all groups $S'_1, S'_2, \cdots, S'_k$, and has ${\mathrm{cost}}^i(T^*) \leq 1$ for every $i \in [m]$. Our goal is to output a random consistent sub-tree $T$ satisfying the conditions corresponding to \ref{property:label-tree-covering} and \ref{property:label-tree-cost}. This is done using an LP-based algorithm.
\subsection{The LP relaxation for finding $T = (V, E)$}
Now we describe the LP relaxation that we use to find $T = (V, E)$. For every vertex $p \in V^\circ$, we use $x_p$ to indicate if $p$ is in $T$, i.e., $p \in V$. For every $t \in [k]$ and $q \in S'_t$, we use $y^t_q$ to indicate if $q$ is the node in $T$ we choose to cover $S'_t$. There might be multiple nodes in $V \cap S'_t$, and in this case, we only choose one node in the set to cover $S'_t$; the choice can be arbitrary. The LP is as follows.
\noindent\begin{minipage}[t]{0.54\textwidth}
\begin{alignat}{2}
x_r &= 1\label{LPC:root}\\
\sum_{q \in \Lambda(p)} x_q &= x_p &\qquad &\forall \text{ selector } p \in {V^\circ} \label{LPC:selector}\\
x_q &= x_p &\qquad &\forall \text{ copier } p \in {V^\circ}, q \in \Lambda(p) \label{LPC:copier}\\
x_p &\geq 0 &\qquad &\forall p \in V^\circ \label{LPC:non-negative}
\end{alignat}
\end{minipage}\hfill
\begin{minipage}[t]{0.44\textwidth}
\begin{alignat}{2}
\sum_{q \in \Lambda^*(p) \cap S'_t}y^t_q &\leq x_p &\qquad &\forall p \in V^\circ, t \in [k]\label{LPC:group-packing}\\
\sum_{q \in S'_t}y^t_q &= 1 &\qquad &\forall t \in [k] \label{LPC:group-covering}\\
\sum_{q \in \Lambda^*(p)} c^i_q x_q &\leq x_p &\qquad &\forall p \in V^\circ, i \in [m] \label{LPC:cost}
\end{alignat}
\end{minipage}
Constraints \eqref{LPC:root}-\eqref{LPC:non-negative} in the LP are for the consistency requirements. \eqref{LPC:root} says the root is always in $T$. \eqref{LPC:selector} says if a selector node $p$ is in $T$, then exactly one of its children is in $T$. \eqref{LPC:copier} says if a copier node $p$ is in $T$, and $q$ is a child of $p$, then $q$ is also in $T$. \eqref{LPC:non-negative} is the non-negativity condition. \eqref{LPC:group-packing} and \eqref{LPC:group-covering} deal with the covering requirements. \eqref{LPC:group-packing} says if $p$ is in $T$, then we choose at most one descendant of $p$ to cover the group $S'_t$; notice that the constraint implies $y^t_p \leq x_p$ if $p \in S'_t$. \eqref{LPC:group-covering} says we choose exactly one node in $T$ to cover $S'_t$. \eqref{LPC:cost} handles the cost requirement: If $p$ is included in $T$, then the type-$i$ cost of the descendants of $p$ in $T$ is at most $1$.
\subsection{The rounding algorithm}
We solve the LP given by constraints \eqref{LPC:root}--\eqref{LPC:cost} to obtain a solution $x \in [0, 1]^{V^\circ}$. We add $r$ to $T$ and call $\mathsf{recursive\mhyphen rounding}(r)$ to obtain a sub-tree $T = (V, E)$. The procedure is defined in Algorithm~\ref{alg:recursive-rounding}.
\begin{algorithm}
\caption{$\mathsf{recursive\mhyphen rounding}(p)$}
\label{alg:recursive-rounding}
\begin{algorithmic}[1]
\If{$p$ is a selector node}
\State choose one vertex $q \in \Lambda(p)$ randomly, so that $q$ is chosen with probability $\frac{x_q}{x_p}$
\State add $q$ to $T$, and call $\mathsf{recursive\mhyphen rounding}(q)$
\Else \Comment{$p$ is a copier or leaf node}
\For{every $q \in \Lambda(p)$}
\State with probability $\frac{x_q}{x_p} = 1$: add $q$ to $T$, and call $\mathsf{recursive\mhyphen rounding}(q)$
\EndFor
\EndIf
\end{algorithmic}
\end{algorithm}
\begin{obs}
$T$ is always consistent. For every $p \in V^\circ$, we have $\Pr[p \in V] = x_p$.
\end{obs}
\begin{proof}
For a selector node $p$ in $T$, we always choose exactly one child of $p$ and add it to $T$. For a copier node $p$ added to $T$ and any of its children $q$, the node $q$ is added to $T$ with probability 1. By the probabilities with which we add nodes to $T$, we can see that $\Pr[p \in V] = x_p$ for every $p \in V^\circ$.
\end{proof}
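For completeness, a direct Python transcription of Algorithm~\ref{alg:recursive-rounding} is given below; it assumes the nodes of $T^\circ$ carry their child lists, kinds and LP values in dictionaries, which is an assumption of this sketch rather than of the algorithm.
\begin{verbatim}
import random

def recursive_rounding(p, x, children, kind, selected):
    """Round the LP solution into a consistent sub-tree T, collected in `selected`.
    children[p]: children of p in T^circ; x[p]: LP value of p;
    kind[p] in {"selector", "copier", "leaf"}."""
    selected.add(p)
    kids = children.get(p, [])
    if not kids:
        return                                            # leaf of T^circ
    if kind[p] == "selector":
        # pick exactly one child q, with probability x_q / x_p
        q = random.choices(kids, weights=[x[q] / x[p] for q in kids], k=1)[0]
        recursive_rounding(q, x, children, kind, selected)
    else:                                                 # copier: x_q = x_p, keep all children
        for q in kids:
            recursive_rounding(q, x, children, kind, selected)
\end{verbatim}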
\subsection{Analysis of probabilities of group coverage}
In this section, we fix $t \in [k]$ and analyze the probability that $T$ covers the group $S'_t$; or equivalently, the corresponding labeling covers the group $S_t$. This will prove Property~\ref{property:label-tree-covering}. For every vertex $p \in V^\circ$, we define $z_p := \sum_{q \in \Lambda^*(p) \cap S'_t}y^t_q$, which indicates whether $S'_t$ is covered by vertices in the sub-tree rooted at $p$. By \eqref{LPC:group-packing}, we have $z_p \leq x_p$. By \eqref{LPC:group-covering}, we have $z_r = 1 = x_r$.
We define the height of a node $p \in V^\circ$ to be the maximum number of copier nodes in a path from $p$ to one of its descendant leaves. We bound the probability that the sub-tree rooted at $p$ covers $S'_t$ by induction:
\begin{lemma}
Assume $p \in V^\circ$ has height $h$. Then we have $\Pr\Big[\Lambda^*(p) \cap V \cap S'_t \neq \emptyset \big| p \in V\Big] \geq \frac1{h+1}\frac{z_p}{x_p}$.
\end{lemma}
\begin{proof}
If $p \in S'_t$ then $\Pr\Big[\Lambda^*(p) \cap V \cap S'_t \neq \emptyset \big|{p \in V}\Big] = 1 \geq \frac{z_p}{x_p} \geq \frac1{h+1}\cdot\frac{z_p}{x_p}$, and the inequality holds trivially. Hence, we can assume $p \notin S'_t$, and we prove the lemma for nodes $p$ from bottom to top in the tree $T^\circ$. Suppose $p$ is a leaf; then $h = 0$, and $z_p = 0$ as we assumed $p \notin S'_t$. The inequality trivially holds.
So we can assume $p$ is a non-leaf node of height $h$, and assume the lemma holds for every $q \in \Lambda(p)$. First assume $p$ is a selector node. Then all children of $p$ have height at most $h$, and by the induction hypothesis,
\begin{align*}
\Pr\Big[\Lambda^*(p) \cap V \cap S'_t \neq \emptyset \big| p \in V\Big] &\geq \sum_{q \in \Lambda(p)} \frac{x_q}{x_p}\cdot \frac{1}{h+1}\cdot\frac{z_q}{x_q} = \sum_{q \in \Lambda(p)} \frac{1}{h+1}\cdot\frac{z_q}{x_p} = \frac1{h+1} \cdot \frac{z_p}{x_p}.
\end{align*}
Then consider the case that $p$ is a copier node. All children of $p$ have height at most $h-1$. Even though $p$ has exactly two children, our analysis works if it has any number of children.
\begin{align*}
\Pr&\Big[\Lambda^*(p) \cap V \cap S'_t \neq \emptyset \big| p \in V\Big] \geq 1-\prod_{q \in \Lambda(p)}\left(1 - \frac{1}{h} \cdot \frac{z_q}{x_q}\right) \geq 1 - \prod_{q \in \Lambda(p)}\exp\left(-\frac{1}{h} \cdot \frac{z_q}{x_p}\right)\\
&= 1 - \exp\left(-\frac{1}{h}\cdot \frac{z_p}{x_p}\right) \geq \frac{1}{h} \cdot \frac{z_p}{x_p} - \frac12\left(\frac{1}{h} \cdot \frac{z_p}{x_p}\right)^2 \geq \frac{1}{h}\cdot \frac{z_p}{x_p} - \frac12\left(\frac{1}{h} \right)^2\frac{z_p}{x_p} \\
&= \left(\frac{2h-1}{2h^2}\right)\frac{z_p}{x_p} \geq \frac{1}{h+1}\cdot \frac{z_p}{x_p}.
\end{align*}
The second inequality in the first line used that $1-\theta \leq e^{-\theta}$ for every $\theta$, together with $x_q = x_p$ for every $q \in \Lambda(p)$. The equality in the second line used that $z_p = \sum_{q \in \Lambda(p)}z_q$ as $p \notin S'_t$. The first inequality in the second line used that $e^{-\theta} \leq 1-\theta + \frac {\theta^2}2$ for every $\theta \geq 0$, the second inequality used that $\frac{z_p}{x_p} \leq 1$, and the last inequality holds as $h \geq 1$ for a copier node $p$.
\end{proof}
Notice that the height of the root $r$ of $T^\circ$ is $D-1$. Applying the above lemma with $p = r$, we have that $T$ covers group $S'_t$ with probability at least $\frac{1}{D} \cdot \frac{z_r}{x_r} = \frac{1}{D}$. So, the corresponding $\vec \ell$ covers $S_t$ with probability at least $\frac1D$, proving Property~\ref{property:label-tree-covering}.
\subsection{Concentration bound on costs}
\label{appendix:concentration}
In this section, we prove Property~\ref{property:label-tree-cost}. To this end, we fix an index $i \in [m]$ and analyze the type-$i$ cost of $T = (V, E)$. For notation convenience, we use $c_p$ to denote $c^i_p$, and cost for type-$i$ cost.
For every vertex $p \in V^\circ$, let $w_p = \sum_{q \in \Lambda^*(p)} c_q x_q$ be the fractional cost incurred by the sub-tree of $T^\circ$ rooted at $p$. By \eqref{LPC:cost}, we have $w_p \leq x_p$. Let $W_p = \sum_{q \in \Lambda^*(p) \cap V} c_q$ be the cost of $T$ incurred by descendants of $p$. So, we have $\E[W_p] = w_p$.
As is typical, we shall introduce a parameter $s > 0$ and consider the expectations of the random variables ${\mathbf{e}}^{s W_p}$. Later we shall set $s = \ln(1+\frac1{2D})$, but the main lemma holds for any $s > 0$. We define a number $\alpha_h$ for every integer $h \geq 0$ as
$\alpha_0 = {\mathbf{e}}^s$ and $\alpha_h = {\mathbf{e}}^{\alpha_{h-1}-1}, \forall h \geq 1$.
Notice that $\alpha_0, \alpha_1, \ldots$ is an increasing sequence.
In this section, we count selector nodes in the definition of heights: the height of a node $p \in V^\circ$ is the maximum number of selector nodes in a path from $p$ to its descendant leaf. The main lemma we prove in this section is:
\begin{lemma}
\label{lemma:bound-exp-mp}
For any node $p$ in $T^\circ$ of height $h$, we have
$
\E\Big[{\mathbf{e}}^{s W_p} \big| p \in V \Big] \leq \alpha_h^{w_p/x_p}.
$
\end{lemma}
\begin{proof}
Again, we prove the lemma for nodes $p$ from bottom to top of the tree $T^\circ$. Focus on a node $p$ of height $h$. Consider the case where $p$ is a copier or leaf node. Then all children of $p$ have height at most $h$.
\begin{align*}
\E\Big[{\mathbf{e}}^{sW_p}\big|p \in V\Big] &= {\mathbf{e}}^{sc_p} \prod_{q \in \Lambda(p)}\E \Big[{\mathbf{e}}^{sW_q} \big| q \in V \Big] \leq \alpha_0^{c_px_p/x_p} \prod_{q \in \Lambda(p)}\alpha_h^{w_q/x_p} \leq \alpha_h^{c_px_p/x_p} \prod_{q \in \Lambda(p)}\alpha_h^{w_q/x_p}=\alpha_h^{w_p/x_p}.
\end{align*}
The first inequality used the induction hypothesis together with $x_q = x_p$ for every $q \in \Lambda(p)$, the second inequality used that $\alpha_0 \leq \alpha_h$, and the last equality used that $w_p = c_p x_p + \sum_{q \in \Lambda(p)} w_q$.
Now suppose $p$ is a selector. Then all children of $p$ have height at most $h-1$. Conditioned on $p \in V$, the rounding procedure adds exactly one child $q$ of $p$ to $V$. Then, we have
\begin{align*}
\E\Big[{\mathbf{e}}^{sW_p}\big|p \in V\Big] &= {\mathbf{e}}^{sc_p}\cdot \sum_{q \in \Lambda(p)}\frac{x_q}{x_p} \E\Big[{\mathbf{e}}^{sW_q}\big|q \in V\Big]
\leq {\mathbf{e}}^{sc_p} \cdot \sum_{q \in \Lambda(p)}\frac{x_q}{x_p}\alpha_{h-1}^{w_q/x_q}\\
&\leq {\mathbf{e}}^{sc_p} \left(\left(\frac{w_p}{x_p} - c_p\right)\cdot \alpha_{h-1} + \left(1-\frac{w_p}{x_p} + c_p\right)\right) = {\mathbf{e}}^{sc_p}\left(1 + \left(\frac{w_p}{x_p} - c_p\right)(\alpha_{h-1} - 1)\right)\\
&\leq {\mathbf{e}}^{sc_p}\cdot \exp\left(\left(\frac{w_p}{x_p} - c_p\right)(\alpha_{h-1} - 1)\right) = {\mathbf{e}}^{sc_p} \cdot \alpha_h^{w_p/x_p - c_p} \leq \alpha_h^{w_p/x_p}.
\end{align*}
The inequality in the first line is by the induction hypothesis. To see the inequality in the second line, we notice the following four facts: (i) $\alpha_{h-1}^\theta$ is a convex function of $\theta$, (ii) $w_q/x_q \in [0, 1]$ for every $q \in \Lambda(p)$, (iii) $\sum_{q \in \Lambda(p)}\frac{x_q}{x_p} = 1$ and (iv) $\sum_{q\in \Lambda(p)}\frac{x_q}{x_p}\cdot\frac{w_q}{x_q} = \sum_{q\in \Lambda(p)}\frac{w_q}{x_p} = \frac{w_p}{x_p} - c_p$. In the last line, the first inequality used that $1+\theta \leq {\mathbf{e}}^\theta$, the equality is by the definition of $\alpha_h$, and the last inequality used that ${\mathbf{e}}^s = \alpha_0 \leq \alpha_h$ and $c_p \geq 0$.
\end{proof}
The height of the root $r$ is $D$.\footnote{The height of $r$ is $D+1$ by definition, but Lemma~\ref{lemma:bound-exp-mp} holds when we define its height to be $D$, as one can collapse the first two levels of $T^\circ$ into one level.} Now, we set $s = \ln(1+\frac{1}{2D})$. We prove inductively the following lemma:
\begin{lemma}
\label{lemma:bound-alpha}
For every $h \in[0, D]$, we have $\alpha_h \leq 1 + \frac{1}{2D - h}$.
\end{lemma}
\begin{proof}
By definition, $\alpha_0 = {\mathbf{e}}^s = 1+ \frac{1}{2D}$ and thus the statement holds for $h = 0$. Let $h \in [1, D]$ and assume the statement holds for $h-1$. Then, we have
\begin{align*}
\alpha_h &= {\mathbf{e}}^{\alpha_{h-1}-1} \leq {\mathbf{e}}^{\frac{1}{2D-h+1}} \leq 1 + \frac{1}{2D-h+1} + \left(\frac{1}{2D-h+1}\right)^2\\
&= 1 + \frac{2D-h+2}{(2D-h+1)^2} \leq 1 + \frac{1}{2D - h}.
\end{align*}
The first inequality used the induction hypothesis and the second one used that for every $\theta \in [0, 1]$, we have $e^\theta \leq 1 + \theta + \theta^2$.
\end{proof}
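As a quick numerical illustration (not part of the proof), one can iterate the recursion for $\alpha_h$ and check the bound of Lemma~\ref{lemma:bound-alpha}, e.g.\ for $D = 20$:
\begin{verbatim}
import math

D = 20
alpha = 1.0 + 1.0 / (2 * D)                  # alpha_0 = e^s with s = ln(1 + 1/(2D))
for h in range(D + 1):
    assert alpha <= 1.0 + 1.0 / (2 * D - h)  # alpha_h <= 1 + 1/(2D - h)
    if h < D:
        alpha = math.exp(alpha - 1.0)        # alpha_{h+1} = e^{alpha_h - 1}
print("alpha_D =", alpha, " bound 1 + 1/D =", 1.0 + 1.0 / D)
\end{verbatim}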
To wrap up, we apply Lemma~\ref{lemma:bound-exp-mp} on $p = r$. Notice that $r \in V$ always happens, and $W_r = {\mathrm{cost}}^i(T)$. We have $\E\big[\exp\big(\ln\big(1+\frac1{2D}\big) \cdot {\mathrm{cost}}^i(T)\big)\big] \leq \alpha_D^{w_r/x_r} \leq 1 + \frac1D$ by Lemma~\ref{lemma:bound-alpha} and that $w_r \leq x_r = 1$. Using the correspondence between sub-trees of $T^\circ$ and labelings of ${\mathbf{T}}$ proves Property~\ref{property:label-tree-cost}.
\section{Reduction of Degree-Bounded Group Steiner Tree on Bounded-Treewidth Graphs to Tree-Labeling Problem} \label{sec:reduction}
In this section we prove Theorem~\ref{thm:GST-bd-treewidth}, by reducing Group Steiner Tree with degree bounds on bounded-treewidth graphs to the tree labeling problem studied in Section~\ref{sec:tree-labeling}. Recall that the input of the problem contains a graph $G = (V, E)$ with edge costs $c \in \mathbb{R}_{\geq 0}^E$, a root $r$, $k$ groups $S_1, S_2, \cdots, S_k$ of vertices, and a degree bound ${\mathrm{db}}_v \in \mathbb{Z}_{>0}$ for every $v \in V$.
Without loss of generality, we assume $\{r\}, S_1, S_2, \cdots, S_k$ are mutually disjoint. Again, we use ${\mathrm{opt}}$ to denote the minimum-cost of a valid subgraph $H$.
Let ${\mathbf{T}} = (B, {\mathbf{E}})$ be a tree decomposition of the graph $G = (V, E)$. Every $b \in B$ is called a bag, and we let $X_b \subseteq V$ be the set of vertices contained in the bag $b$. We can add the root $r$ to all the bags, which increases the maximum size of a bag by at most 1. It was shown in \cite{Bod88} that we can assume ${\mathbf{T}}$ is a rooted binary tree of depth $O(\log n)$, at the cost of increasing the bag size by an $O(1)$ factor. We summarize the properties as follows:
\begin{itemize}
\item ${\mathbf{T}}$ is a full binary tree rooted at ${\mathbf{r}}$, with depth $O(\log n)$.
\item $|X_b| \leq O(1)\cdot {\mathrm{tw}}$ for every $b \in B$.
\item For every edge $(u, v) \in E$, there is some $b \in B$ with $\{u, v\} \subseteq X_ b$.
\item For every $v \in V$, the set of bags $b$ with $v \in X_b$ is connected in ${\mathbf{T}}$.
\end{itemize}
For every $e \in E$, let $b_e$ be the highest node $b$ such that $X_b$ contains both end vertices of $e$. This is well-defined due to the last two properties in the above list. For every $b \in B$, we let $E_b = \{e \in E: b_e = b\}$. So, $(E_b)_{b \in B}$ forms a partition of $E$.
\subparagraph{Notations on Partitions.}
Given two partitions $\Pi$ and $\Pi'$ of a common set $X$, we say $\Pi'$ refines $\Pi$ if any two elements in $X$ that are in the same set in $\Pi'$ are also in the same set in $\Pi$. We use $\Pi' \leq \Pi$ to denote that $\Pi'$ refines $\Pi$. Given two partitions $\Pi$ and $\Pi'$ of $X$, we use $\Pi \vee \Pi'$ to denote the join of $\Pi$ and $\Pi'$ w.r.t.\ the relation $\leq$. That is, we define a graph where there is an edge between $u$ and $v$ if they are in the same set in $\Pi$ or $\Pi'$. Then two vertices $u$ and $v$ are in the same set in the partition $\Pi \vee \Pi'$ if and only if they are in the same connected component in the graph.
Abusing notations slightly, if an element $v$ is not included in a partition $\Pi$, we treat $\{v\}$ as a singleton set in $\Pi$. This allows us to extend the operators $\leq$ and $\vee$ to two partitions $\Pi$ and $\Pi'$ with different ground sets. Given a partition $\Pi$ and a set $X$, we let $\Pi[X]$ be the partition $\Pi$ restricted to the ground set $X$: two elements $u, v \in X$ are in the same set in $\Pi[X]$ if and only if they are in the same set in $\Pi$.
For any set $F \subseteq E$ of edges, we define ${\mathrm{CC}}(F)$ to be the partition of the vertices incident to $F$, such that $u$ and $v$ are in the same set in ${\mathrm{CC}}(F)$ if and only if they are in the same connected component in $(V, F)$.
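Since the reduction manipulates partitions throughout, the following Python sketch shows one way to realize the operations $\vee$, restriction and ${\mathrm{CC}}(\cdot)$ with a union--find structure; it is only meant to fix ideas and is not part of the reduction.
\begin{verbatim}
class Partition:
    """A partition stored as a union-find forest; elements that are never
    mentioned are treated as singletons, as in the text."""
    def __init__(self, blocks=()):
        self.parent = {}
        for block in blocks:
            block = list(block)
            for e in block[1:]:
                self.union(block[0], e)

    def find(self, e):
        self.parent.setdefault(e, e)
        while self.parent[e] != e:
            self.parent[e] = self.parent[self.parent[e]]   # path halving
            e = self.parent[e]
        return e

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def join(P1, P2):
    """P1 v P2: connect elements that are together in either partition."""
    Q = Partition()
    for P in (P1, P2):
        for e in list(P.parent):
            Q.union(e, P.find(e))
    return Q

def restrict(P, X):
    """P[X]: the partition P restricted to the ground set X, as a list of blocks."""
    blocks = {}
    for e in X:
        blocks.setdefault(P.find(e), set()).add(e)
    return [frozenset(b) for b in blocks.values()]

def CC(F):
    """CC(F): connectivity partition of the vertices incident to the edge set F."""
    P = Partition()
    for (u, v) in F:
        P.union(u, v)
    return P
\end{verbatim}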
\subparagraph{Construction of Labels and Consistency Triples.}
The tree ${\mathbf{T}}$ for the tree-labeling instance is the same as the decomposition tree ${\mathbf{T}}$. (This is the reason we use the same notation ${\mathbf{T}}$.) So we have ${\mathbf{V}} = B$. Now we fix a bag $b \in B$ and define the set $L_b$ of labels for $b$.
To define the labels, we let $H = (V_H, E_H)$ be any sub-graph of $G$, which we should think of as the output of the GST problem. Fix a bag $b \in B$ and let $\Lambda^*(b)$ be the set of descendants of $b$ in ${\mathbf{T}}$, including $b$ itself. We then make the following definitions:
\begin{itemize}
\item $F_b(H) := E_H \cap E_b$ is the set of edges from $E_b$ that are included in $H$.
\item $\Pi^\downarrow_b(H)$ is the partition of $X_b$ so that two vertices $u, v \in X_b$ are in the same set in $\Pi^\downarrow_b(H)$ if and only if they are connected in the graph $(V_H, E_H \cap \bigcup_{b' \in \Lambda^*(b)} E_{b'})$.
\item $\Pi^\uparrow_b(H)$ is the partition of $X_b$ so that two vertices $u, v \in X_b$ are in the same set in $\Pi^\uparrow_b(H)$ if and only if they are connected in the graph $(V_H, E_H \cap \bigcup_{b' \in (B \setminus \Lambda^*(b)) \cup \{b\}} E_{b'})$.
\end{itemize}
In words, $\Pi^\downarrow_b(H)$ and $\Pi^\uparrow_b(H)$ indicate the partitions of $X_b$ corresponding to the edges of $H$ in the bags below and above $b$, respectively.
Without knowing $H$, we can define the label set $L_b$ for $b$ to be all tuples $(F_b, \Pi^\downarrow_b, \Pi^\uparrow_b)$ such that $(F_b, \Pi^\downarrow_b, \Pi^\uparrow_b) = (F_b(H), \Pi^\downarrow_b(H), \Pi^\uparrow_b(H))$ for some valid output graph $H$. We then define the consistency triples $\Gamma_b$ so that a consistent labeling gives a valid output sub-graph $H$. \medskip
Formally, let $L_b$ be the set of all tuples $(F_b, \Pi^\downarrow_b, \Pi^\uparrow_b)$ such that
\begin{itemize}
\item $F_b \subseteq E_b$ is a forest over $X_b$, ${\mathrm{CC}}(F_b) \leq \Pi^\downarrow_b$ and ${\mathrm{CC}}(F_b) \leq \Pi^\uparrow_b$,
\item if $b = {\mathbf{r}}$, then $\Pi^\uparrow_b = {\mathrm{CC}}(F_b)$, and
\item if $b$ is a leaf, then $\Pi^\downarrow_b = {\mathrm{CC}}(F_b)$.
\end{itemize}
Then we define the set $\Gamma_b$ of triples, for an inner vertex $b$ in ${\mathbf{T}}$ with two children $b'$ and $b''$. We have $\big((F_b, \Pi^\downarrow_b, \Pi^\uparrow_b), (F_{b'}, \Pi^\downarrow_{b'}, \Pi^\uparrow_{b'}), (F_{b''}, \Pi^\downarrow_{b''}, \Pi^\uparrow_{b''})\big) \in \Gamma_b$ if and only if
\begin{itemize}
\item $\Pi^\downarrow_b = \Big(\Pi^\downarrow_{b'} \vee \Pi^\downarrow_{b''} \vee {\mathrm{CC}}(F_b)\Big)[X_b]$,
\item $\Pi^\uparrow_{b'} = \Big(\Pi^\uparrow_{b} \vee \Pi^\downarrow_{b''} \vee {\mathrm{CC}}(F_{b'})\Big)[X_{b'}]$, and
\item $\Pi^\uparrow_{b''} = \Big(\Pi^\uparrow_{b} \vee \Pi^\downarrow_{b'} \vee {\mathrm{CC}}(F_{b''})\Big)[X_{b''}]$.
\end{itemize}
\begin{claim}\label{claim:connectivity-truthful}
Let $\{(F_b, \Pi^\downarrow_b, \Pi^\uparrow_b)\}_{b \in B}$ be a consistent labeling of the tree ${\mathbf{T}}$. Let $H = (V, \union_{b \in B} F_b)$. Then we have $\Pi^\downarrow_b(H) = \Pi^\downarrow_b$ and $\Pi^\uparrow_b(H) = \Pi^\uparrow_b$ for every $b \in B$.
\end{claim}
The claim says that if the labels are consistent, then $\Pi^\downarrow_b$ and $\Pi^\uparrow_b$ represent their true values.
\subparagraph{Construction of Covering and Cost Constraints.}
The requirement that all groups are connected to $r$ can be captured by the covering constraints in the tree-labeling problem. For every $t \in [k]$, a label $(F_b, \Pi^\downarrow_b, \Pi^\uparrow_b) \in L_b$ for some $b \in B$ can cover the group $S_t$ if for some $s \in S_t$, the vertices $s$ and $r$ are in the same set in the partition $\Pi^\downarrow_b \vee \Pi^\uparrow_b$.
The edge costs and degree constraints can be captured by the cost constraints in the tree-labeling instance. Consider the costs first. Using binary search, we assume we know the optimum cost $C^*$ for the instance. For every bag $b \in B$ and every label $(F_b, \Pi^\downarrow_b, \Pi^\uparrow_b)$, the cost of the label is $c(F_b) := \sum_{e \in F_b} c_e$. We disallow this label by removing it if $c(F_b) > C^*$. We then scale all costs by $C^*$ so that they lie in $[0, 1]$. So, requiring that the cost is at most $C^*$ in the Group Steiner Tree instance is equivalent to requiring that the total cost of the chosen labels is at most $1$.
Finally we consider the degree constraints $d_H(v) \leq {\mathrm{db}}_v$ for every $v \in V$. For every $v \in V$, we define a cost constraint in the tree-labeling instance. For every bag $b \in B$ with $v \in X_b$, and every label $(F_b, \Pi^\downarrow_b, \Pi^\uparrow_b)$, the cost of the label is $|\delta(v) \cap F_b|$, where $\delta(v)$ is the set of edges incident to $v$ in $G$. Again, we disallow the label if $|\delta(v) \cap F_b| > {\mathrm{db}}_v$, and we scale these costs by ${\mathrm{db}}_v$ so that all costs are in $[0, 1]$. Then the degree constraint on $v$ is reduced to this cost requirement in the tree-labeling instance.
\subparagraph{Wrapping Up.} We then run the algorithm in Theorem~\ref{thm:label-tree-main} on the constructed tree-labeling instance. Let $(F_b, \Pi^\downarrow_b, \Pi^\uparrow_b)$ be the label of a bag $b$, and let $H = (V, \union_{b \in B}F_b)$. By Claim~\ref{claim:connectivity-truthful}, the consistency constraints guarantee that the $\Pi^\downarrow_b$ and $\Pi^\uparrow_b$ truthfully represent the connectivity of the graph $H$. So, if the covering constraint for a group $S_t$ is satisfied, then $H$ indeed connects $r$ and $S_t$. Recall that $D = O(\log n)$ is the depth of the tree ${\mathbf{T}}$. By Properties~\ref{property:label-tree-covering} and \ref{property:label-tree-cost}, we have
\begin{itemize}
\item For every $t \in [k]$, $H$ connects $r$ and $S_t$ with probability at least $\frac{1}{D}$.
\item $\E\big[\exp(\ln (1+\frac1{2D}) \cdot \frac{c(H)}{C^*})\big] \leq 1 + \frac1D$.
\item $\E\big[\exp(\ln (1+\frac1{2D}) \cdot \frac{d_H(v)}{{\mathrm{db}}_v})\big] \leq 1 + \frac1D$ for every $v \in V$.
\end{itemize}
We run the algorithm $M = \Theta(D\log n) = \Theta(\log^2 n)$ times, with a large hidden constant in the $\Theta(\cdot)$ notation, and output the union $H$ of all sub-graphs constructed in the $M$ runs. With high probability, all groups are connected to $r$ in $H$. We have $\E[\exp(\ln (1+\frac1{2D}) \cdot \frac{d_H(v)}{{\mathrm{db}}_v})] \leq (1 + \frac1D)^M = n^{O(1)}$. Using Markov's inequality, we have $\exp(\ln (1 + \frac1{2D})\cdot \frac{d_H(v)}{{\mathrm{db}}_v}) \leq n^{O(1)}$ for every $v \in V$ with high probability. That is, $d_H(v) \leq O({\mathrm{db}}_v \log n \cdot D) = O(\log^2n ){\mathrm{db}}_v$ with high probability. Similarly, with high probability, we have $c(H) \leq O(\log ^2n)C^*$.
We then analyze the running time of the algorithm. The key parameter deciding the running time is $\Delta$, the maximum size of a label set $L_b$. As we assumed $F_b$ is a forest over $X_b$ and $|X_b| \leq O({\mathrm{tw}})$, there are ${\mathrm{tw}}^{O({\mathrm{tw}})}$ different possibilities for $F_b$. There are also ${\mathrm{tw}}^{O({\mathrm{tw}})}$ possibilities for each of $\Pi^\downarrow_b$ and $\Pi^\uparrow_b$. So, $|L_b| \leq {\mathrm{tw}}^{O({\mathrm{tw}})}$ for every $b \in B$. Therefore, the running time of the algorithm is $\mathrm{poly}(n) \cdot \Delta^{O(D)} = \mathrm{poly}(n) \cdot ({\mathrm{tw}}^{O({\mathrm{tw}})})^{O(\log n)} = n^{O({\mathrm{tw}} \log {\mathrm{tw}})}$. This finishes the proof of Theorem~\ref{thm:GST-bd-treewidth}.
\bibliographystyle{plainurl}
| {
"timestamp": "2023-02-23T02:17:26",
"yymm": "2302",
"arxiv_id": "2302.11475",
"language": "en",
"url": "https://arxiv.org/abs/2302.11475",
"abstract": "While much of network design focuses mostly on cost (number or weight of edges), node degrees have also played an important role. They have traditionally either appeared as an objective, to minimize the maximum degree (e.g., the Minimum Degree Spanning Tree problem), or as constraints which might be violated to give bicriteria approximations (e.g., the Minimum Cost Degree Bounded Spanning Tree problem). We extend the study of degrees in network design in two ways. First, we introduce and study a new variant of the Survivable Network Design Problem where in addition to the traditional objective of minimizing the cost of the chosen edges, we add a constraint that the $\\ell_p$-norm of the node degree vector is bounded by an input parameter. This interpolates between the classical settings of maximum degree (the $\\ell_{\\infty}$-norm) and the number of edges (the $\\ell_1$-degree), and has natural applications in distributed systems and VLSI design. We give a constant bicriteria approximation in both measures using convex programming. Second, we provide a polylogrithmic bicriteria approximation for the Degree Bounded Group Steiner problem on bounded treewidth graphs, solving an open problem from [Kortsarz and Nutov, Discret. Appl. Math. 2022] and [Guo et al., Algorithmica 2022].",
"subjects": "Data Structures and Algorithms (cs.DS)",
"title": "Degrees and Network Design: New Problems and Approximations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.978712645102011,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.7094397053528454
} |
https://arxiv.org/abs/1907.09507 | Robust and optimal sparse regression for nonlinear PDE models | This paper investigates how models of spatiotemporal dynamics in the form of nonlinear partial differential equations can be identified directly from noisy data using a combination of sparse regression and weak formulation. Using the 4th-order Kuramoto-Sivashinsky equation for illustration, we show how this approach can be optimized in the limits of low and high noise, achieving accuracy that is orders of magnitude better than what existing techniques allow. In particular, we derive the scaling relation between the accuracy of the model, the parameters of the weak formulation, and the properties of the data, such as its spatial and temporal resolution and the level of noise. | \section{Introduction}
Partial differential equations (PDEs) provide a natural description for the temporal evolution of spatially extended systems in various fields of science and engineering.
Historically and practically important examples include wave equations arising in many areas of physics, the Schr\"{o}dinger equation in quantum mechanics, the Navier-Stokes equations in fluid dynamics, and reaction-diffusion equations used to model physical, chemical, or biological systems.
In the past, models of such systems were almost always constructed from first principles or using a suitable empirical approach.
However, in recent years, a data-driven paradigm for learning the dynamics has emerged, which leverages the modern prevalence of data and computational power to create models when the underlying governing laws have eluded first-principles derivation.
Many indirect methods for learning the dynamics that do not require a PDE have been proposed.
Notable examples include equation-free modeling \cite{kevrekidis2003}, artificial neural networks \cite{hsu1997,raissi2018deep,pathak2018}, dynamic mode decomposition \cite{tu2013} and Koopman operator approaches \cite{mezic2013}, balanced truncation \cite{rowley2005}, and resolvent-based analysis \cite{mckeon2010}.
While these techniques can provide an economical approximate description of the dynamics, this is done at the cost of losing the mathematical structure that affords physical intuition or interpretability.
Symbolic regression, which was originally used to derive nonlinear ordinary differential equations describing low-dimensional systems \cite{bomgard2007, schmidt2009}, offers an enticing alternative by allowing construction of exact models and discovery of conservation laws.
The genetic algorithms used in these earlier studies are however computationally expensive, preventing application of this approach to high-dimensional systems.
Thus, the recent emergence of a sparse regression approach for model discovery \cite{xu_2008,khanmohamadi_2009,brunton2016,rudy2017,reinbold_2019} has made a significant impact.
Applied to spatially extended systems, this approach allows data-driven discovery of governing equations in the form of PDEs by evaluating a library of candidate terms containing partial derivatives at a large number of points and using a regularized regression procedure to compute the coefficients of each term and select a parsimonious model.
Sparse regression has proven computationally efficient and capable of reconstructing numerous canonical PDEs \cite{rudy2017,li_2019}, but it faces serious difficulties when used for analysis of experimental data.
One complication is that the proper choice of parsimonious model is often unclear.
In many implementations, it relies on a manual Pareto analysis to balance model accuracy and complexity \cite{brunton2016} or on an automatic but complex thresholding procedure (e.g. sequential threshold ridge regression \cite{rudy2017}) that tends to be sensitive to the choice of parameters.
More importantly, existing sparse regression methods often suffer from low accuracy even in the absence of noise and completely break down at noise levels characteristic of realistic applications.
This is because they inherently require explicit numerical evaluation of partial derivatives of the data, which is a notably ill-conditioned problem.
In this paper, we present a weak formulation of the sparse regression problem that eliminates this fundamental issue.
We also suggest a simple thresholding procedure that can always identify the correct form of the governing PDE even in the presence of extremely high noise.
Finally, we explore how this extremely flexible and robust approach can be optimized and tuned to the properties of the underlying data set to maximize accuracy.
This paper has the following structure.
Section \ref{sec:methods} describes our approach and the system used to test it.
Results are presented and interpreted in Section \ref{sec:results}, and conclusions are discussed in Section \ref{sec:conclusions}.
\section{Methods}
\label{sec:methods}
We consider the problem of using the data ${\bf u}({\bf x},t)$ to identify a parsimonious mathematical model in the form of a PDE
\begin{align}
\sum_{n=1}^N c_n \mathbf{f}_n(\mathbf{x},t,\mathbf{u}, \partial_t\mathbf{u},\partial^2_t\mathbf{u}, \nabla\mathbf{u},\nabla^2\mathbf{u}, \cdots) = 0
\label{eq:pde}
\end{align}
where each term in the sum is a function of ${\bf u}$ and its partial derivatives in space and time with constant coefficients $c_n$.
In most applications, the form of the basis functions $\mathbf{f}_n$ can be restricted based on physical considerations, such as symmetries, conservation laws, etc. \cite{bar_1999,reinbold_2019}.
Typically, $\mathbf{f}_n$ are taken to be products of powers of independent variables (${\bf x}$, $t$) and dependent variables (${\bf u}$ and its various derivatives), although the form can be arbitrary in theory.
Our goal is to determine the constants $c_n$ for the terms that should be present in the model while eliminating the dynamically insignificant and thus likely spurious terms.
Sparse regression aims to convert the PDE (\ref{eq:pde}) to a tractable (and ideally, robust) linear algebra problem.
Conventionally this is done by evaluating all of the terms in the PDE at a random collection of points $({\bf x}_k,t_k)$ using finite differences \cite{vallette_1997,bar_1999}, spectral methods \cite{xu_2008,khanmohamadi_2009}, or polynomial approximation \cite{rudy2017,reinbold_2019}.
All of these approaches are extremely sensitive to noise, especially when high-order derivatives are present.
We will instead pursue a weak formulation of the problem that can be obtained by multiplying (\ref{eq:pde}) by a weight $\mathbf{w}_j(\mathbf{x},t)$ and then integrating the result over a domain $\Omega_k$.
Repeating the process for $K$ distinct combinations of weight functions and integration domains yields the linear system
\begin{align}
Q\mathbf{c} = 0
\label{eq:lin}
\end{align}
where $\mathbf{c} = [c_1, \dots, c_N]^T$ and $Q = [\mathbf{q}_1, \dots, \mathbf{q}_N]$ is a ``library'' matrix, with each column $\mathbf{q}_n\in\mathbb{R}^K$ consisting of the integrals of the function $\mathbf{f}_n$ with all $K$ combinations of weights $\mathbf{w}_j$ and domains $\Omega_k$.
Note that there is an extra degree of freedom in \eqref{eq:lin} corresponding to the normalization of $\mathbf{c}$.
Conventionally this is dealt with by assuming that ${\bf f}_1=\partial_t{\bf u}$, setting $c_1=1$, and solving the overdetermined system that corresponds to the choice $K\gg N$ using least squares or some regularized version of it \cite{rudy2017}.
This is however not always a valid assumption: it is usually unknown {\it a priori} whether any given temporal derivative should be included in the PDE at all, whereas in this case a particular term is forced into the model.
Moreover, even if this term should be present in the model, the regression effectively assumes that the time derivative was computed without error, which reduces the practical accuracy of the procedure.
We will therefore not make the assumption that the model has the form of an evolution equation and consider the linear problem \eqref{eq:lin} in its most general form.
The normalization of $\mathbf{c}$ can be fixed by adding an extra row with arbitrary nonzero elements to $Q$, after which the resulting equation (\ref{eq:lin}) can be solved by
ordinary least squares.
A more elegant solution pursued in the present study is to instead compute $\mathbf{c}$
as the right singular vector of $Q$ corresponding to the smallest singular value.
Note that this corresponds to the solution of a constrained least squares problem for $Q^TQ\mathbf{c}=0$:
\begin{align}
\mathbf{c}=\argmin_{\|\mathbf{c}\|=1}\|Q^TQ\mathbf{c}\|.
\label{eq:linn}
\end{align}
Once a suitable solution has been obtained by further constraining the problem, the resulting parsimonious model can be rewritten in the form of an evolution equation by solving for a term such as $\partial_t\mathbf{u}$ (or $\partial_t^2\mathbf{u}$ for a wave equation).
To obtain a parsimonious model, we employ an iterative procedure to eliminate unnecessary terms from \eqref{eq:pde}.
At each step $i$, singular value decomposition is used to obtain the solution ${\bf c}^i$ given the matrix $Q^i$, and the residual $\eta^i = \|Q^i {\bf c}^i\|$ is computed.
We then find the term with the smallest $\|c^i_n\mathbf{q}_n\|/\|\mathbf{q}_n\|$
and construct $Q^{i+1}$ by eliminating the column ${\bf q}_n$ from $Q^i$.
The corresponding term is eliminated from the model if $\eta^{i+1}<\gamma\eta^i$, where $\gamma>1$ is some fixed constant (we use $\gamma=1.4$ in the present study).
The iteration terminates at step $i$ if $\eta^{i+1}>\gamma\eta^i$, yielding a parsimonious model.
We find that this method compares favorably to alternatives such as sequential threshold ridge regression \cite{rudy2017} as it robustly eliminates spurious terms without requiring extremely careful choice of parameters.
Moreover, the sparsification parameter has a simple interpretation: $\gamma-1$ is the maximum acceptable relative increase in the residual resulting from discarding a single library term.
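For concreteness, a NumPy sketch of this regression and thresholding procedure is shown below; it assumes that the library matrix $Q$ has already been assembled, with more rows (integration domains and weights) than columns (library terms), and follows the notation of the text.
\begin{verbatim}
import numpy as np

def solve_c(Q):
    """Right singular vector of Q corresponding to its smallest singular value."""
    _, _, Vt = np.linalg.svd(Q, full_matrices=False)
    return Vt[-1]

def sparse_regression(Q, gamma=1.4):
    """Iteratively drop library terms; stop when dropping the weakest term would
    increase the residual ||Q c|| by more than a factor gamma.
    Returns the indices of the surviving terms and their coefficients."""
    keep = list(range(Q.shape[1]))
    c = solve_c(Q[:, keep])
    eta = np.linalg.norm(Q[:, keep] @ c)
    while len(keep) > 1:
        cols = Q[:, keep]
        # candidate for elimination: term with the smallest ||c_n q_n|| / ||q_n||
        scores = [np.linalg.norm(c[j] * cols[:, j]) / np.linalg.norm(cols[:, j])
                  for j in range(len(keep))]
        j = int(np.argmin(scores))
        trial = keep[:j] + keep[j + 1:]
        c_trial = solve_c(Q[:, trial])
        eta_trial = np.linalg.norm(Q[:, trial] @ c_trial)
        if eta_trial > gamma * eta:       # residual grows too much: keep the term and stop
            break
        keep, c, eta = trial, c_trial, eta_trial
    return keep, c
\end{verbatim}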
We illustrate the advantages of our approach by applying it to the Kuramoto-Sivashinsky equation \cite{kuramoto_1978, sivashinsky_1985}
\begin{align}\label{eq:kse}
c_1\partial_t u+c_2u\partial_x u+c_3\partial_x^2u+c_4\partial_x^4u = 0
\end{align}
which has posed a significant challenge in past studies of sparse regression \cite{xu_2008,rudy2017} because it contains a fourth-order partial derivative that is difficult to evaluate numerically with adequate accuracy.
Here $c_1=\cdots=c_4=1$ are all constants, although our approach can easily be extended even to the case when these coefficients are functions of time and/or space, as discussed below.
Since this is a scalar equation in one spatial and one temporal dimension, we use scalar weight functions $w_j(x,t)$.
If we denote the terms in the model \eqref{eq:kse} by $f_1,\cdots,f_4$, then
\begin{align}
&q_1^{jk} = \int_{\Omega_k} w_j\partial_t u\,d\Omega,\quad
q_2^{jk} = \int_{\Omega_k} w_ju\partial_x u\,d\Omega, \nonumber\\
&q_3^{jk} = \int_{\Omega_k} w_j\partial^2_x u\,d\Omega,\quad
q_4^{jk} = \int_{\Omega_k} w_j\partial^4_x u\, d\Omega,
\end{align}
where $d\Omega=dx\,dt$.
The key feature of the weak formulation is that it can almost always be used to completely eliminate, or at least reduce the order of, the derivatives acting on the noisy data by integrating by parts.
In our particular case,
\begin{align}
&q_1^{jk} = -\int_{\Omega_k} u\partial_tw_j\, d\Omega,\quad
q_2^{jk} = -\frac{1}{2}\int_{\Omega_k} u^2\partial_xw_j\, d\Omega,\nonumber\\
&q_3^{jk} = \int_{\Omega_k} u\partial^2_x w_j\, d\Omega,\quad
q_4^{jk} = \int_{\Omega_k} u\partial^4_xw_j\, d\Omega
\end{align}
under the assumption that $w_j$ and its first three partial derivatives with respect to $x$ vanish on the boundary $\partial\Omega_k$.
In our implementation, we use the composite trapezoidal rule to evaluate the integrals numerically.
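As an illustration, the following NumPy/SymPy sketch assembles the four integrals for a single rectangular domain centered at $(x_k, t_k)$, using the simple polynomial weight $w = (\underline{x}^2-1)^{4}(\underline{t}^2-1)$, which satisfies the boundary conditions above; the function name and data layout are our own choices and are only meant to illustrate the computation.
\begin{verbatim}
import numpy as np
import sympy as sp

def weak_library_row(u, x, t, xk, tk, Hx, Ht, alpha=4, beta=1):
    """Integrals q_1..q_4 over |x - xk| <= Hx, |t - tk| <= Ht for the weight
    w = (xb^2 - 1)^alpha * (tb^2 - 1)^beta, with xb = (x - xk)/Hx, tb = (t - tk)/Ht.
    u is a 2-D array u[ix, it] sampled on the uniform grids x and t."""
    ix = np.where(np.abs(x - xk) <= Hx)[0]
    it = np.where(np.abs(t - tk) <= Ht)[0]
    xs, ts = x[ix], t[it]
    U = u[np.ix_(ix, it)]

    # weight derivatives needed after integration by parts (chain rule: d/dx = (1/Hx) d/dxb)
    xb, tb = sp.symbols("xb tb")
    w = (xb**2 - 1)**alpha * (tb**2 - 1)**beta
    derivs = {"wt": sp.diff(w, tb) / Ht,
              "wx": sp.diff(w, xb) / Hx,
              "wxx": sp.diff(w, xb, 2) / Hx**2,
              "wxxxx": sp.diff(w, xb, 4) / Hx**4}
    Xb, Tb = np.meshgrid((xs - xk) / Hx, (ts - tk) / Ht, indexing="ij")
    W = {k: sp.lambdify((xb, tb), expr, "numpy")(Xb, Tb) for k, expr in derivs.items()}

    def integrate(F):                        # composite trapezoidal rule over the rectangle
        return np.trapz(np.trapz(F, ts, axis=1), xs, axis=0)

    q1 = -integrate(U * W["wt"])             # - int u w_t
    q2 = -0.5 * integrate(U**2 * W["wx"])    # -(1/2) int u^2 w_x
    q3 = integrate(U * W["wxx"])             #   int u w_xx
    q4 = integrate(U * W["wxxxx"])           #   int u w_xxxx
    return np.array([q1, q2, q3, q4])
\end{verbatim}
Stacking such rows for many random domain centers (and, optionally, several weight functions per domain) yields the library matrix $Q$ used above.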
Note that although this particular PDE features constant coefficients, terms with variable coefficients can be treated in a similar manner.
For instance, suppose that the coefficient of the term $\partial_x^4u$ is a function of ${\bf x}$ and $t$ that can be expanded in some (finite) basis as
\begin{align}
c_4({\bf x},t) = \sum_p c'_pg_p({\bf x},t)
\end{align}
with some constants $c'_p$. Then
\begin{align}
\int_{\Omega_k} w_jc_4\partial^4_x u\, d\Omega = \sum_p c'_p q_p^{jk},
\end{align}
where
\begin{align}
q_p^{jk}=\int_{\Omega_k} u\partial^4_x(g_pw_j)\, d\Omega.
\end{align}
Sparse regression for a model including such a term would then simply require expanding the library $Q$ to include additional columns ${\bf q}_p$ with entries $q_p^{jk}$.
In this case as well, no derivatives of the noisy $u$ are used in finding the elements of $Q$.
Although in principle integration domains of any shape can be used, here we will only consider rectangular domains of a fixed size
\begin{align}
\Omega_k = \{(x,t) \ : \ |x-x_k| \leq H_x, |t-t_k| \leq H_t\}
\end{align}
where the centers $(x_k, t_k)$ of the rectangles $\Omega_k$ are chosen randomly.
Similarly, there are many possible choices for the weight functions satisfying the boundary conditions on $\partial\Omega_k$; we focus on functions of the form
\begin{align}\label{eq:wf}
w_j = (\underline{x}^2-1)^\alpha(\underline{t}^2-1)^\beta e^{\pm il\pi\underline{x}}e^{\pm im\pi\underline{t}},
\end{align}
where $\underline{x} = (x-x_k)/H_x$, $\underline{t} = (t-t_k)/H_t$ are nondimensionalized independent variables and $\alpha\ge 4$, $\beta\ge 1$, $l\ge 0$, and $m \ge 0$ are integers.
Note that there are four weight functions (corresponding to the four different choices of the signs in the exponentials) for each pair of nonzero $l$ and $m$. The integrals $q_n^{jk}$ are all of the form
\begin{align}
F_n^{lm}=\int_{-1}^1d\underline{t}\int_{-1}^1d\underline{x} f_n^{\alpha\beta}(\underline{x},\underline{t}) e^{\pm il\pi\underline{x}}e^{\pm im\pi\underline{t}},
\end{align}
where
\begin{align}
f_n^{\alpha\beta}(\underline{x},\underline{t})=f_n (u,\underline{x},\underline{t}) (\underline{x}^2-1)^\alpha(\underline{t}^2-1)^\beta,
\end{align}
so $F_n^{lm}$ are the coefficients of the two-dimensional Fourier series for $f_n^{\alpha\beta}(\underline{x},\underline{t})$.
Although $f_n(u,\underline{x},\underline{t})$ is not periodic on $\Omega_k$, the functions $f_n^{\alpha\beta}(\underline{x},\underline{t})$ are.
Moreover, $f_n^{\alpha\beta}(\underline{x},\underline{t})$ has at least $\alpha-1$ continuous derivatives in $\underline{x}$ and ${\beta-1}$ continuous derivatives in $\underline{t}$, so the Fourier coefficients decay according to $F_n^{lm}\sim l^{-\alpha}m^{-\beta}$.
The powers $\alpha$ and $\beta$ therefore control the width of the Fourier spectrum of the entries $q_n^{jk}$ in the library $Q$, while the choice of $l$ and $m$ allows us to tune the frequencies of the weights to the spectral properties of the data.
The convergence rate of Fourier series turns out to control the accuracy with which the integrals are evaluated using data that are available only on a discrete grid.
For simplicity, we will assume that the same weight functions are integrated on every domain.
It is possible to use either weight functions involving only a single pair of frequencies (e.g., $l$ and $m$) or a range of frequencies in space and/or time.
To test our sparse regression approach, we computed a solution of the Kuramoto-Sivashinsky equation, using the integrator described in Ref. \onlinecite{rudy2017} to generate data on a physical domain with dimensions $L_x = 32\pi$ and $L_t = 500$.
The numerical integration generated data with spatial resolution $\Delta x=0.0491$ using a time step $\Delta t=0.005$, which was then downsampled to a lower spatial resolution $\delta_x$ and temporal resolution $\delta_t$.
Unless noted otherwise, the results presented below are for $\delta_x = 0.1964$ and $\delta_t = 1$.
For reference, the solution has a correlation length $\ell_x \sim1.67 \approx 8.5 \delta_x$ and correlation time $\ell_t\sim 8=8\delta_t$.
To test the effects of noise, Gaussian noise with standard deviation $\sigma s_u$ was added to the data for various choices of $\sigma$, where $s_u \approx 1.3$ is the sample standard deviation of $u$ on the whole domain.
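The corresponding preprocessing step is simple; a schematic sketch (with a placeholder array standing in for the simulated solution) is:
\begin{verbatim}
# Sketch of the preprocessing described above: downsample the simulated
# field and add Gaussian noise with standard deviation sigma * s_u.
import numpy as np

rng = np.random.default_rng(0)
u_fine = rng.standard_normal((2048, 1000))  # placeholder for the fine-grid solution
u = u_fine[::4, ::200]                      # coarser spatial/temporal resolution
s_u = u.std()                               # sample standard deviation of u
sigma = 0.03                                # e.g., a 3% noise level
u_noisy = u + sigma * s_u * rng.standard_normal(u.shape)
\end{verbatim}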
\begin{figure*}[t]
\subfigure[]{\includegraphics[width=\columnwidth]{FourierX.pdf}} \hspace{4mm}
\subfigure[]{\includegraphics[width=\columnwidth]{FourierT.pdf}}
\caption{The power spectrum over (a) space and (b) time, normalized so that the maximum is 1. The black dots show the spectrum of the original data. The symbols correspond to the spectra of the windowed data multiplied by envelopes $E^{\alpha\beta}(\underline{x}, \underline{t})$ (with different choices of $\alpha$ or $\beta$) on a ``typical'' integration domain $\Omega_k$ (i.e., averaged over 1000 uniformly distributed choices of $(x_k, t_k)$). The spatial and temporal frequencies correspond to $\kappa_l=2\pi l/F_x$ and $\omega_m=2\pi m/F_t$ for the windowed data.
}
\label{fig:Fourier}
\end{figure*}
To test the ability of the algorithm to eliminate spurious terms, in addition to the terms present in the Kuramoto-Sivashinsky equation \eqref{eq:kse}, we also included terms $\partial_x u$, $\partial_x^3 u$, $u$, $u^2$, $u^3$, and $1$ (which represents a hypothetical forcing) in our library.
The corresponding integrals were rewritten using integration by parts to remove derivatives acting on $u$, as described previously.
In the next section, we quantify the performance of our sparse regression approach using two key metrics: how well the algorithm can discriminate between the essential and spurious terms and how accurately it can determine the coefficients of the essential terms.
Since the data were generated using a known model, we know which terms are essential (those contained in the PDE \eqref{eq:kse}).
If the reference model is unavailable, ensemble regression \cite{reinbold_2019} may be used instead to help distinguish essential terms from spurious ones.
\section{Results}
\label{sec:results}
As discussed previously, the elements of the library matrix $Q$ are given by the Fourier coefficients of the different terms included in the generalized model (windowed by the envelope $E^{\alpha\beta}(\underline{x}, \underline{t}) = (\underline{x}^2-1)^\alpha(\underline{t}^2-1)^\beta$ on each domain $\Omega_k$);
hence knowledge of the Fourier spectrum of the data is crucial for an optimal choice of the size of the integration domains $\Omega_k$ and the weight functions $w_j$.
The power spectrum (or, more precisely, the absolute value of the Fourier coefficients) of the noiseless data on the entire physical domain
is shown in Figure \ref{fig:Fourier}.
In space, the spectrum is sharply peaked around a wave number $\kappa\approx 0.625$.
At high wave numbers, the spectrum decays exponentially, $P\propto e^{-\kappa/\bar{\kappa}}$ where $\bar{\kappa}\approx 0.3$.
In time, the spectrum is peaked at zero frequency $\omega$ and decays as a power law, $P\propto \omega^{-\chi}$ with $\chi\approx 2.5$.
Having characterized the data, we turn to the investigation of how the performance of our algorithm depends on the choice of various parameters.
Since the number of parameters is quite large, instead of exploring the entire parameter space, we focus on the dependence on one or two parameters at a time, with the remaining parameters staying fixed.
Specifically, the noise level $\sigma$ is fixed to 3\% and we use the following near-optimal parameters in the sparse regression.
The dimensions of the integration domain are $F_x =2H_x=14.73$ and $F_t =2H_t=75$. This choice corresponds to an equal number of grid points in both directions, $F_x/\delta_x=F_t/\delta_t=75$.
Unless noted otherwise, we use a single set of weights with $\alpha=\beta=8$, $l=1$, $m=2$, and the sparsification parameter is $\gamma = 1.4$.
We generally use every combination of 4 weight functions over 50 integration domains, so that the total number of library rows is $K=200$.
To characterize the stochastic effects, for each set of parameters, we used an ensemble of $M=100$ trials featuring different random distributions of the integration domains and realizations of noise.
First, we tested the ability of the method to reconstruct the correct form of the PDE \eqref{eq:kse} for various values of $\gamma$ with all other parameters fixed at their near-optimal values.
Our iterative regression procedure proved very robust for a fairly wide range of values of $\gamma$.
In particular, at a noise level of 30\%, it performed perfectly for $1.1\leq\gamma\leq2$, with the reconstructed model containing no missing or spurious terms in all of the trials.
For the highest noise level considered here (100\%), we found perfect performance for $1.2\leq\gamma\leq1.5$. In some fraction of the trials, spurious terms appeared at lower $\gamma$ and missing terms at higher $\gamma$, as shown in Figure \ref{fig:Spur}.
For reference, without the benefit of the weak formulation, sparse regression failed \cite{rudy2017} to correctly reconstruct the lambda-omega reaction-diffusion system, which is only second-order, for noise levels as low as 1\%.
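For readers who want to experiment, the following sketch shows one plausible realization of such a greedy, residual-based sparsification of $Q\mathbf{c}=0$ (our reading of the procedure, with the first library column taken to be the one whose coefficient is later normalized to unity); it is not necessarily identical to the algorithm used to produce the results reported here.
\begin{verbatim}
# Greedy sparsification sketch: fix c_1 = 1, fit the rest by least squares,
# and drop terms while the residual grows by less than a factor gamma.
import numpy as np

def fit(Q, keep):
    b = -Q[:, keep[0]]                      # column normalized to c_1 = 1
    A = Q[:, keep[1:]]
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.linalg.norm(A @ c - b), np.concatenate(([1.0], c))

def sparsify(Q, gamma):
    keep = list(range(Q.shape[1]))
    r, c = fit(Q, keep)
    while len(keep) > 2:
        trials = [(fit(Q, [k for k in keep if k != j])[0], j) for j in keep[1:]]
        r_new, j_drop = min(trials)         # cheapest term to eliminate
        if r_new > gamma * r:               # fit degrades too much: stop
            break
        keep.remove(j_drop)
        r, c = fit(Q, keep)
    return keep, c
\end{verbatim}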
The accuracy of regression (i.e., model identification) was quantified by computing the relative error in each parameter of the Kuramoto-Sivashinsky equation
\begin{align}
\Delta c_n = \left|\frac{c_n-\bar{c}_n}{\bar{c}_n}\right|,
\end{align}
where $\bar{c}_n$ and $c_n$ are the true and estimated values of the model parameters, respectively, for $n=2,3,4$.
We normalize the estimated parameters so that $c_1 = 1$.
In all of the following figures, we plot the estimated mean value of $\Delta c_n$ with 95\% error bars, where all of the parameters in the regression procedure are held at their near-optimal values stated previously unless noted otherwise.
In particular, Fig. \ref{fig:Noise} shows the accuracy of regression as a function of $\sigma$ for two different choices of data resolution ($\delta_x$ and $\delta_t$).
It is worth noting that the average relative error is $\Delta c_n\sim 10^{-10}$ for all of the parameters for noiseless data with the higher of the two resolutions.
However, even for $1\%$ noise, $\Delta c_n\sim 2\times10^{-4}$, which is more than three orders of magnitude smaller than what had been achieved in previous studies \cite{rudy2017}.
The results are very similar for all three parameters; since this is generally the case, in subsequent figures we show only $\Delta c_4$, which is typically the largest of the three errors and corresponds to the term $\partial_x^4 u$ involving the highest-order derivative.
We find two distinct regimes.
At higher noise levels, the error in evaluating the library matrix entries is due primarily to the averaged effect of noise. Applying the central limit theorem, we find that the relative error scales as
\begin{align}\label{eq:epsn}
\varepsilon_n\sim\sigma s_u\sqrt{\frac{\delta_x\delta_t}{F_xF_t}}.
\end{align}
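A rough way to see this scaling (a back-of-the-envelope argument, keeping track of the window only up to $O(1)$ factors) is to write the noise contribution to an entry of $Q$ as
\begin{align}
\delta_x\delta_t\sum_{i} w_j(\underline{x}_i,\underline{t}_i)\,\eta_i,\qquad \eta_i\sim N(0,\sigma^2 s_u^2),
\end{align}
where the sum runs over the $F_xF_t/(\delta_x\delta_t)$ grid points in $\Omega_k$. By the central limit theorem, its standard deviation is of order $\sigma s_u\,\delta_x\delta_t\sqrt{F_xF_t/(\delta_x\delta_t)}=\sigma s_u\sqrt{\delta_x\delta_tF_xF_t}$; dividing by the $O(F_xF_t)$ magnitude of a noiseless entry yields \eqref{eq:epsn}.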
\begin{figure}[t]
\centerline{\includegraphics[width=\columnwidth]{Spur.pdf}}
\caption{Fraction $p$ of identified models with spurious (circles) or missing (squares) terms at maximum noise level ($\sigma=1$) as a function of the sparsification parameter $\gamma$.
}
\label{fig:Spur}
\end{figure}
At low noise levels, the parameter accuracy is controlled by numerical error, which has two different sources.
The first source is a numerical error in the data itself, which is due to the finite accuracy of the integrator that ``solves'' the Kuramoto-Sivashinsky equation.
This source dominates for smaller $\delta_x$ and $\delta_t$.
For experimental data, this source would correspond to systematic error.
For larger $\delta_x$ and $\delta_t$, the parameter inaccuracy is mainly due to the error in computing the library matrix entries based on data that are available on a discrete grid.
Suppose we want to use numerical quadratures to evaluate an integral
\begin{align}
I=\int_0^L g(x)dx,
\end{align}
where $g(x)\in C^{m}$ (i.e., has $m$ continuous derivatives) and $g^{(i)}(0)=g^{(i)}(L)$ for all $0\leq i<m$.
Then, for the composite trapezoidal rule on a grid with spacing $h$, the relative error associated with the discretization can be estimated using exact Euler-Maclaurin formulas\cite{trefethen2014} and is found to scale as $h^{m+2}|g^{(m+2)}|$ for $m$ even (or $h^{m+1}|g^{(m+1)}|$ for $m$ odd), where a characteristic value of the derivative on the interval $[0,L]$ is used.
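This behavior is easy to check numerically; the short script below (an illustration we include for completeness, not part of the analysis) prints the empirically observed convergence order for a windowed integrand of the type used here.
\begin{verbatim}
# Empirical convergence order of the composite trapezoidal rule for a
# windowed integrand g(x) = cos(3*pi*x + 0.7) * (x^2 - 1)^4 on [-1, 1].
import numpy as np

def trap(g, n):
    x = np.linspace(-1.0, 1.0, n + 1)
    c = np.ones(n + 1); c[[0, -1]] = 0.5
    return 2.0 / n * np.sum(c * g(x))

g = lambda x: np.cos(3*np.pi*x + 0.7) * (x**2 - 1)**4
ref = trap(g, 1 << 15)                                   # fine-grid reference
errs = [abs(trap(g, n) - ref) for n in (16, 32, 64, 128)]
print([np.log2(errs[i] / errs[i+1]) for i in range(3)])  # observed orders
\end{verbatim}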
\begin{figure}[t]
\centerline{\includegraphics[width=\columnwidth]{Noise.pdf}}
\caption{Parameter errors $\Delta c_n$ as a function of noise level. The circles, triangles, and squares correspond to $n=2, 3, 4$ respectively, and the empty and filled symbols indicate results for data with double resolution ($\delta_x = 0.0982, \delta_t = 0.5$) and half resolution ($\delta_x = 0.393, \delta_t = 2$), respectively. The dashed lines show the predicted scaling.
}
\label{fig:Noise}
\end{figure}
Generalizing this result to two dimensions (and assuming $\delta_x\ll \min(\ell_x,F_x)$, $\delta_t\ll \min(\ell_t,F_t)$), we find an estimate of the relative discretization error for an element of the library matrix $Q$ that involves a temporal derivative of order $\nu_t$ and/or spatial derivative of order $\nu_x$:
\begin{align}\label{eq:mu}
\varepsilon_d\sim
\begin{cases}
h^{\mu+2},& \mu\ \mathrm{even}\\
h^{\mu+1},& \mu\ \mathrm{odd}
\end{cases}
\end{align}
where $h=\delta_t/\ell_t\approx\delta_x/\ell_x$ and
$\mu=\min(\alpha-\nu_x,\beta-\nu_t)$.
It is easy to check that, due to the conditions on $\alpha$ and $\beta$, we always have $\mu\ge 0$, as it should be for the trapezoidal rule.
The Kuramoto-Sivashinsky equation features terms that all involve derivatives, with the lowest order being one and the highest being four;
hence, for even $\alpha=\beta\ge 4$, the exponent of $h$ in \eqref{eq:mu} ranges between $\alpha-2$ and $\alpha$.
Therefore the scaling
\begin{align} \label{eq:low_h}
\varepsilon_d\sim h^{\alpha-2}
\end{align}
dominates for lower $h$, while the scaling
\begin{align} \label{eq:high_h}
\varepsilon_d\sim h^{\alpha}
\end{align}
dominates for higher $h$.
\begin{figure*}[t]
\subfigure[]{\includegraphics[width=\columnwidth]{normX.pdf}} \hspace{4mm}
\subfigure[]{\includegraphics[width=\columnwidth]{normT.pdf}}
\caption{Scaling of the first four columns of the library $Q$ with the size of the integration domain in the (a) spatial and (b) temporal directions. The columns correspond to $\partial_t u$ (squares), $u\partial_x u$ (circles), $\partial_x^2 u$ (triangles), and $\partial_x^4 u$ (diamonds).
}
\label{fig:norm}
\end{figure*}
The error $\Delta c_n$ can be found using perturbation theory.
Let $\bar{Q}$ be the library matrix evaluated using a continuous noiseless solution so that $\bar{Q}\bar{\mathbf{c}}=0$ exactly (we assume that $\bar{Q}$ corresponds to the parsimonious model).
In the presence of measurement noise and/or discretization error, the error in evaluating each entry $q_n^{jk}$ of the library matrix is proportional to $\varepsilon=\max(\varepsilon_d,\varepsilon_n)$, so
\begin{align}
Q=\bar{Q}+\varepsilon\hat{Q}
\end{align}
for some matrix $\hat{Q}$ whose entries are distributed as white Gaussian noise.
Note that the entries of $\hat{Q}$ are $O(F_xF_t)$.
The entries of $\bar{Q}$ have a more complicated scaling that is determined by the Fourier spectrum of the data (i.e., exponential in space, power law in time).
Specifically, we find (cf. Fig. \ref{fig:norm})
\begin{align}\label{eq:psin}
\|\bar{\bf q}_n\|\propto
\begin{cases}
F_xF_t, & F_x\ll\ell_x,F_t\ll\ell_t\\
e^{-\lambda_nF_x}\left(\frac{\ell_t}{F_t}\right)^{\xi_n}\ell_x\ell_t,& F_x\gg\ell_x,F_t\gg\ell_t
\end{cases}
\end{align}
where $\lambda_n=O(\ell_x^{-1})$ and $\xi_n=O(1)$ are some positive constants.
To leading order in $\varepsilon$, the least squares solution to \eqref{eq:lin} is given by
\begin{align}
\mathbf{c} = \bar{\mathbf{c}}-\varepsilon \bar{Q}^+\hat{Q}\bar{\mathbf{c}},
\end{align}
where $\bar{Q}^+$ is the Moore-Penrose pseudoinverse of $\bar{Q}$.
Since the elements of $\hat{Q}$ can be considered uncorrelated, we have for $F_x\gg\ell_x$ and $F_t\gg\ell_t$
\begin{align}\label{eq:pow0}
\Delta c_n \propto \varepsilon \frac{F_xF_tK^{-1/2}}{\psi(F_x,F_t)},
\end{align}
where the numerator and denominator describe the scaling of the entries of $\hat{Q}$ and $\bar{Q}$, respectively. Following from \eqref{eq:psin},
\begin{align}\label{eq:psi}
\psi(F_x,F_t)= e^{-\lambda F_x}\left(\frac{\ell_t}{F_t}\right)^{\xi}\ell_x\ell_t
\end{align}
with some positive constants $\lambda=O(\ell_x^{-1})$ and $\xi=O(1)$.
For low $\sigma$, we have $\varepsilon=\varepsilon_d$ and therefore $\Delta c_n$ is independent of $\sigma$.
For high $\sigma$, we have $\varepsilon=\varepsilon_n$, so combining \eqref{eq:pow0} and \eqref{eq:epsn} we find
\begin{align}\label{eq:pow}
\Delta c_n \propto \sigma \sqrt{\frac{\delta_x\delta_t}{KF_xF_t}}\frac{F_xF_t}{\psi(F_x,F_t)}.
\end{align}
The predicted scaling of $\Delta c_n$ with $\sigma$ in both regimes is consistent with the results shown in Fig. \ref{fig:Noise}.
In particular, we find that the effect of changing the resolution of the data is quite minor at high $\sigma$, where $\Delta c_n\propto h$ according to \eqref{eq:pow}. At low $\sigma$, the effect is much stronger: for $\alpha=\beta=8$, we have $\Delta c_n\propto h^6$ according to \eqref{eq:mu}.
The dependence of the scaling in \eqref{eq:mu} on $\alpha$ and $\beta$ is further confirmed by Fig. \ref{fig:res}, which shows results for noiseless data.
In the $\alpha=\beta=4$ case, we observe the scaling law $\Delta c_n\propto h^2$ corresponding to \eqref{eq:low_h} in the entire range of $h$ we examined.
When $\alpha=\beta=6$, the parameter error scales according to $\Delta c_n\propto h^4$ for small $h$ and $\Delta c_n\propto h^6$ for large $h$, which correspond to the limiting cases \eqref{eq:low_h} and \eqref{eq:high_h}, respectively.
We should also note that for $h$ as large as $1/4$, the accuracy remains very good.
Thus, the method is suitable for fairly sparse data.
\begin{figure}[t]
\subfigure[]{\includegraphics[width=\columnwidth]{res.pdf}}
\caption{Parameter error $\Delta c_4$ as a function of the resolution of noiseless data for $\alpha=\beta=4$ (circles) and $\alpha=\beta=6$ (squares). The dashed lines show the predicted scaling.
}
\label{fig:res}
\end{figure}
As illustrated in Fig. \ref{fig:numDom}, we also observe the scaling for $\Delta c_n$ with $K$ predicted by \eqref{eq:pow}.
This scaling is expected to break down when the total area of the integration domains exceeds the area of the physical domain due to the loss of statistical independence between the data on different integration domains, leading to an increased linear dependence of the rows of the library matrix $Q$.
We can expect the error to asymptote to
\begin{align}\label{eq:scaleNd}
\Delta c_n\propto\varepsilon N_d^{-1/2}
\end{align}
for $K\gg N_d$, where
\begin{align}
N_d=\frac{L_xL_t}{F_xF_t}
\end{align}
is the area ratio.
For the reference set of parameters, saturation did not occur over the range of $K$ we tested.
To more easily observe the saturation effect, we set $l=m=0$, so that only one weight function is used and the number of integration domains equals $K$ (rather than $K/4$ for nonzero $l$ and $m$).
Furthermore, we reduce the size of the physical domain to $L_x=16\pi$ and $L_t = 250$, so that $N_d \approx 11$ is relatively small.
As Fig. \ref{fig:numDom} illustrates, for large $K$, the parameter accuracy indeed asymptotes to a constant.
\begin{figure}[t]
\centerline{\includegraphics[width=\columnwidth]{numDom.pdf}}
\caption{Parameter error $\Delta c_4$ as a function of the number of library rows $K$. Only the $l=m=0$ weight function is used and the physical domain size is reduced to $L_x=16\pi$, $L_t = 250$. The dashed lines show the predicted scaling.
}
\label{fig:numDom}
\end{figure}
The scaling described by \eqref{eq:scaleNd} can also be observed in the dependence of $\Delta c_n$ on the size of the physical domain (and hence $N_d$) with all other parameters fixed.
This dependence is quite important, since it determines how much data needs to be collected to identify the model with meaningful precision.
As Fig. \ref{fig:L} illustrates, choosing the physical domain to be just double the size of the (optimal) integration domain in both directions (which corresponds to $N_d=4$) already yields a rather acceptable accuracy when only one weight function is used.
When $l$ and $m$ are nonzero, accurate reconstruction is possible even if $N_d$ is only slightly greater than $1$.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{L.pdf}
\caption{Parameter error $\Delta c_4$ as a function of $N_d$ for $l=m=0$ and $K=500$. Squares correspond to fixing $L_t = 100$ and varying $L_x$ from $15.7$ to $98.2$. Circles correspond to fixing $L_x=19.6$ and varying $L_t$ from $80$ to $500$. The dashed line shows the predicted scaling.
}
\label{fig:L}
\end{figure}
\begin{figure*}[t]
\subfigure[]{\includegraphics[width=\columnwidth]{sizeX.pdf}}
\subfigure[]{\includegraphics[width=\columnwidth]{sizeT.pdf}}
\caption{Parameter error $\Delta c_4$ as a function of the (a) spatial and (b) temporal dimensions of integration domains when only the $l=m=0$ weight function is used (squares) and for the optimal choice of $l$ and $m$ (circles).
}
\label{fig:size}
\end{figure*}
Next, we consider how the error in the estimated coefficients depends on the choice of the integration domain size.
Figure \ref{fig:size} shows the dependence of the error $\Delta c_4$ on the size of the integration domains for two different choices of the weight functions.
In panel (a), $F_t$ is fixed to 75 and $F_x$ is taken to vary, and in panel (b), $F_x$ is fixed to 14.73 with $F_t$ varying.
In both cases, we find that there is an optimal domain size with $F_x \approx 14.73$ and $F_t \approx 75$; moreover, the optimal values remain approximately the same even if we vary the size of the other dimension or the choice of weight functions.
For small $F_x$ and/or $F_t$, the error is large because (a) the integration domain is too small to effectively average out the influence of noise and (b) the numerical quadrature error is large (both $\varepsilon_n$ and $\varepsilon_d$ increase as $F_x$ and/or $F_t$ decrease).
For large $F_x$ and $F_t$, we enter the regime described by \eqref{eq:pow0}, which predicts that the error should grow exponentially in $F_x$ and as a power of $F_t$.
Indeed, this is exactly what we observe in Fig. \ref{fig:size}.
Based on \eqref{eq:psin}, it appears that the optimal choice of $F_x$ and $F_t$ corresponds to the crossover between these two regimes, i.e., $F_x\propto\ell_x$ and $F_t\propto\ell_t$.
Our numerical results suggest that the optimal choice corresponds to $F_x/\ell_x\approx F_t/\ell_t\approx 8$.
\begin{figure*}[t]
\subfigure[]{\includegraphics[width=\columnwidth]{freqX.pdf}}
\subfigure[]{\includegraphics[width=\columnwidth]{freqT.pdf}}
\caption{Parameter error $\Delta c_4$ as a function of (a) the wave number $\kappa_l= 2\pi l/F_x$ and (b) the frequency $\omega_m= 2\pi m/F_t$.
}
\label{fig:freq}
\end{figure*}
Finally, let us address the optimal choice of frequencies appearing in the weight functions \eqref{eq:wf}.
Figure \ref{fig:freq} shows the effect of varying either $l$ or $m$ with all other parameters fixed at their reference values.
Specifically, we plot $\Delta c_4$ versus $\kappa_l = 2\pi l/F_x$ and $\omega_m = 2\pi m/F_t$.
(Note that when $l$ or $m$ is 0, the number of distinct weight functions is halved, so we correspondingly double the number of integration domains to keep the number of rows in the library constant.)
One could assume that the optimal values would be given by the dominant frequencies of the original data (as we discussed previously, the dominant wave number is $\kappa\approx 0.625$ and the dominant temporal frequency is $\omega=0$).
According to Fig. \ref{fig:Fourier}, windowing the data broadens the peaks but leaves both dominant frequencies roughly the same: $\kappa \approx 0.8$ ($l=2$) and $\omega = 0$ ($m=0$).
Unfortunately, it turns out that we cannot use the spectra to exactly predict the optimal frequencies, which are $\kappa \approx 0.4$ ($l=1$) and $\omega \approx 0.2$ ($m = 2$ or $3$).
However, choosing the frequencies based on the spectra still produces reasonably good accuracy (within a factor of $4$ or so of the optimal result).
These results suggest that using weight functions with a combination of different frequencies may be more robust and/or accurate.
To test this hypothesis, we considered the case in which weight functions with a range of frequencies in space or time were included, with the total number of library rows fixed at $200$.
However, this approach yielded a decrease in the accuracy, as the broader choice of weight functions did not compensate for a decrease in the number of integration domains.
This suggests that the optimal strategy is to use a large number of integration domains while keeping the frequencies of the weight functions fixed.
\section{Conclusions}
\label{sec:conclusions}
We have introduced a robust and flexible approach to data-driven discovery of models in the form of nonlinear PDEs.
The approach uses a weak formulation, coupled with a novel sparse regression procedure, to obtain a parsimonious description.
We have demonstrated its capability to identify PDEs, even with high-order derivatives, from extremely noisy data with unprecedented accuracy.
For instance, with 1\% noise, we were able to reduce the error in estimating the parameters of the 4th-order Kuramoto-Sivashinsky equation from 50\% \cite{rudy2017} to just $2 \times 10^{-4}$.
Furthermore, whereas correct identification of the functional form of the underlying PDE has been far from guaranteed at any noise level using past approaches, our algorithm reconstructed the Kuramoto-Sivashinsky equation correctly in $100\%$ of trials even at a noise level of $100\%$, i.e., with the noise standard deviation equal to that of the signal.
This impressive performance is achieved by shifting the partial derivatives from the data onto a known smooth weight function using integration by parts, thus avoiding the large errors incurred by repeated numerical differentiation.
Our method also proved to be well-adapted to sparse data, maintaining errors of less than $0.1\%$ for a grid resolution only $4$ times finer than the correlation length/time.
Such reliability and high accuracy in the presence of noisy or sparse data is indispensable for analysis of experimental data.
Notably, even in the absence of noise, our results compare very favorably with those of previous studies \cite{xu_2008,rudy2017} because the discretization error of the algorithm can be made extremely small: for the Kuramoto-Sivashinsky equation, the relative error in all parameters can easily be reduced to $10^{-10}$.
It is also important to mention that the computational cost of our algorithm is comparable to that of existing sparse regression methods.
We also derived the scaling laws that describe the accuracy of the regression as a function of the parameters used in the algorithm and the properties of the data.
These scaling laws can be used to fully exploit the flexibility of the weak formulation approach by tuning its various parameters.
In particular, the size of the input used by the regression can be controlled by choosing both the number of different integration domains and the number of different weight functions.
We have shown that the number of integration domains plays a much more important role than the number of weight functions: the best results can be obtained by using a set of weight functions with a fixed shape (frequency and envelope) and a large number of integration domains.
Furthermore, we have determined the optimal shape of the weights and the optimal size of the integration domains.
The latter turned out to be determined by the correlation length and time describing the data (with the size roughly an order of magnitude larger than these characteristic scales).
We have also shown that, although the error can be reduced further by using data on ever-larger physical domains, satisfactory results can be obtained for physical domains that are just a factor of two larger than the optimal integration domain in each dimension.
\begin{acknowledgments}
This material is based upon work supported by the National Science Foundation under Grant No. CMMI-1725587. DG gratefully acknowledges the support of the Letson Undergraduate Research Scholarship.
\end{acknowledgments}
\section{References}
\input{Chaos2019.bbl}
\end{document}
| {
"timestamp": "2019-07-24T02:01:15",
"yymm": "1907",
"arxiv_id": "1907.09507",
"language": "en",
"url": "https://arxiv.org/abs/1907.09507",
"abstract": "This paper investigates how models of spatiotemporal dynamics in the form of nonlinear partial differential equations can be identified directly from noisy data using a combination of sparse regression and weak formulation. Using the 4th-order Kuramoto-Sivashinsky equation for illustration, we show how this approach can be optimized in the limits of low and high noise, achieving accuracy that is orders of magnitude better than what existing techniques allow. In particular, we derive the scaling relation between the accuracy of the model, the parameters of the weak formulation, and the properties of the data, such as its spatial and temporal resolution and the level of noise.",
"subjects": "Dynamical Systems (math.DS)",
"title": "Robust and optimal sparse regression for nonlinear PDE models",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126444811033,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.7094397049027679
} |
https://arxiv.org/abs/2203.12725 | Robust Coordinate Ascent Variational Inference with Markov chain Monte Carlo simulations | Variational Inference (VI) is a method that approximates a difficult-to-compute posterior density using better behaved distributional families. VI is an alternative to the already well-studied Markov chain Monte Carlo (MCMC) method of approximating densities. With each algorithm, there are of course benefits and drawbacks; does there exist a combination of the two that mitigates the flaws of both? We propose a method to combine Coordinate Ascent Variational Inference (CAVI) with MCMC. This new methodology, termed Hybrid CAVI, seeks to improve the sensitivity to initialization and convergence problems of CAVI by proposing an initialization using method of moments estimates obtained from a short MCMC burn-in period. Unlike CAVI, Hybrid CAVI proves to also be effective when the posterior is not from a conditionally conjugate exponential family. | \section{Introduction}
In Bayesian statistics, computing a posterior distribution is of primary interest; however, finding a closed form expression for the posterior proves to be a difficult task \citep{blei2017}. The main hurdle comes with computing the normalizing constant, since integration (especially in more than one dimension) can be computationally expensive. One workaround to this problem is to use MCMC sampling. Although MCMC will eventually converge to the true posterior distribution, the rate of convergence may be too slow for some practical uses \citep{casella2004}.
In hopes of finding a more computationally efficient approximation method than MCMC, VI was developed. The key distinction is that VI uses optimization methods instead of sampling methods to learn the underlying posterior distribution \citep{blei2017}. Specifically, CAVI is a form of VI that implements the coordinate ascent algorithm for its optimization routine. Although CAVI converges quickly, CAVI is highly sensitive to initialization in general; thus, CAVI is rarely applied outside of posteriors lying in the conditionally conjugate exponential family, where this flaw is mitigated by the existence of closed-form optimal updates \citep{blei2017}.
We thus see that CAVI is generally limited in its application and MCMC cannot be efficiently used on large datasets. In this paper, we propose a new method that combines CAVI with MCMC, addressing the drawbacks inherent to both of these techniques. This new methodology is computationally efficient, applicable to any family of posteriors, and robust to initialization, allowing better scaling and wider application than either of the two existing methods.
The remainder of this paper is structured as follows. In Section \ref{sec:back}, we give further background for MCMC and CAVI and examine other approaches to combining them. In Section \ref{sec:hybrid}, we introduce the new method for combining MCMC and CAVI. In Section \ref{sec:simStudy}, we study the performance of our new method in comparison with existing posterior approximation techniques by conducting a simulation study\footnote{Code for the simulation study can be found at \url{https://github.com/neil-dey/robust-vi-mcmc}}. Specifically, Section \ref{subsec:exp} focuses on applying these methods to a posterior that is analytically known, whereas Section \ref{subsec:nonexp} applies the methods to an intractable posterior distribution.
\section{Background}
\label{sec:back}
The primary goal of both MCMC and VI is to estimate a posterior distribution of latent variables, $\vb*{z}$, given observed data, $\vb*{x}$. Rather than directly estimating the posterior $p(\vb*{z}\,|\,\vb*{x})$, VI considers a family of simpler \textit{variational distributions}, $q(\vb*{z}\,|\,\boldsymbol \lambda)$, that are shaped by a set of \textit{variational parameters}, $\boldsymbol \lambda$, and aims to find the variational parameter $\boldsymbol \lambda^*$ that allows $q(\vb*{z}\,|\,\boldsymbol \lambda^*)$ to be ``close to'' $p(\vb*{z}\,|\,\vb*{x})$ \citep{zhang2019}. VI measures this ``closeness'' with the Kullback-Leibler (KL) divergence:
\begin{equation*}
\KL(q\parallel p) = \int_\mathbb{R} \log(\frac{q(x)}{p(x)}) q(x) \dd{x}.
\end{equation*}
In this paper, we work under the popular framework of minimizing the KL divergence over a \textit{mean-field} variational family, which assumes that the latent variables are mutually independent. That is,
\begin{equation*}
q(\vb*{z}; \vb*{\lambda}) = \prod_{i=1}^m q_i(z_i; \vb*{\lambda}_i).
\end{equation*}
Hence, we approximate the posterior $p(\vb*{z}\,|\, \vb*{x})$ with $q^*(\vb*{z};\vb*{\lambda}^*)$ given by
\begin{equation*}
q^*(\vb*{z};\vb*{\lambda}^*
) = \argmin_{q\in\mathscr{D}} \KL(q(\vb*{z}; \vb*{\lambda}) \parallel p(\vb*{z}\,|\,\vb*{x})),
\end{equation*}
where $\mathscr{D}$ is the mean-field variational family. However, calculating the KL divergence requires calculating the unknown marginal with respect to the data, $p(\vb*{x})$. Thus, VI instead maximizes the tractable-to-compute Evidence Lower Bound (ELBO) \citep{blei2017}, denoted $\elbo$, given by
\begin{equation*}
\elbo(q) = \E_q[\log p(\vb*{z}, \vb*{x})] - \E_{q}[\log q(\vb*{z})] .
\end{equation*}
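The equivalence between maximizing the ELBO and minimizing the KL divergence follows from the standard decomposition (restated here for completeness)
\begin{equation*}
\log p(\vb*{x}) = \elbo(q) + \KL(q(\vb*{z})\parallel p(\vb*{z}\,|\,\vb*{x})),
\end{equation*}
so that, since $\log p(\vb*{x})$ does not depend on $q$, maximizing $\elbo(q)$ over the variational family is equivalent to minimizing the KL divergence to the posterior.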
\begin{algorithm}[!ht]
\DontPrintSemicolon
\KwInput{A model $p(\vb*{x}, \vb*{z})$, a data set $\vb*{x}$}
\KwOutput{A variational density $q(\vb*{z}) = \prod_{j=1}^m q_j(z_j)$}
\Initialize{Variational factors $q_j(z_j)$}{}
\While{ELBO has not converged}
{
\For{$j \in \{1, \ldots, m\}$}{
Set $q_j(z_j) \propto \exp(\E_{-j}[\log p(z_j \,|\, \vb*{z}_{-j}, \vb*{x})])$
}
Compute $\elbo(q) = \E[\log p(\vb*{z}, \vb*{x})] - \E[\log q(\vb*{z})]$
}
\Return $q(\vb*{z})$
\caption{CAVI algorithm given by \cite{blei2017}}
\label{fig:caviAlg}
\end{algorithm}
A common method of maximizing the ELBO is CAVI, shown in Algorithm \ref{fig:caviAlg}. CAVI is the usual coordinate ascent algorithm, treating the variational factors of $q$ as the coordinates of the input to $\elbo$. To see this more explicitly, we can rewrite the ELBO as a function of the variational parameters:
\begin{equation*}
\elbo(\vb*{\lambda}_1, \ldots, \vb*{\lambda}_m) = \int_{\mathbb{R}^m} \log(\frac{p(\vb*{z}, \vb*{x})}{\prod_{i=1}^m q_i(z_i;\,\vb*{\lambda}_i)}) \cdot \prod_{i=1}^m q_i(z_i ;\, \vb*{\lambda}_i) \dd{\vb*{z}}.
\end{equation*}
The behavior of coordinate ascent is well studied; a proof of convergence under weak assumptions is presented in \cite{luenberger1973}, and it can be shown that coordinate ascent converges linearly for (at least locally) strictly convex objective functions with compact sublevel sets \citep{abatzoglou1982}. CAVI is also well-studied, with the coordinate ascent update for $q_i$ available in closed form for conditionally conjugate exponential families \citep{blei2017}; this closed form update is shown in the for-loop of Algorithm \ref{fig:caviAlg}.
Some computational and inferential advantages of CAVI are shown in \cite{braun2010}; however, the results rely on the convexity of the objective function, which is rarely the case for the ELBO. Thus, \cite{blei2017} points out that in the case of non-convex optimization, CAVI is highly sensitive to the initialization of the $q_j(z_j; \vb*{\lambda}_j)$ factors---a significant drawback to CAVI. Furthermore, the literature does not typically apply CAVI to non-conditionally conjugate exponential family posteriors. Thus, alternatives to CAVI have been suggested. For example, \cite{zhang2019}, \cite{hoffman2013}, and \cite{titsias194} all focus on stochastic gradient descent (SGD) approaches to the optimization rather than coordinate ascent. Regardless of the techniques used to optimize the ELBO, there has also been work in exploiting the structures of certain models to increase the speed of convergence, such as in \cite{zhang2019}.
Many approaches to combining VI and MCMC exist in the literature. For example, \cite{mimno2012sparse} implements Gibbs sampling to approximate the expectations that cannot be computed analytically but are necessary for the SGD approach to VI. Similarly, \cite{salimans2015markov} focuses on the SGD approach to VI, with the insight of adding auxiliary variables and integrating MCMC steps into the posterior approximations. Additionally, \cite{de2013variational} does impressive work in combining MCMC and VI by using VI to improve the proposals generated by the Metropolis-Hastings algorithm.
The work in \cite{ruiz2019contrastive} is notable in its approach to combine MCMC and VI by applying MCMC to the initial variational distribution, thus proposing a new variational distribution (possibly of a different family). Furthermore, they develop a new divergence criterion since exactly computing the KL divergence with respect to an MCMC posterior estimate is not feasible.
\section{Hybrid CAVI}
\label{sec:hybrid}
As mentioned in the introduction, a point of contention for CAVI is its sensitivity to initial values, as it is possible for the algorithm to only find local extrema. Hence, we propose a new algorithm, termed Hybrid CAVI, that is less sensitive to initialization than CAVI. The idea of Hybrid CAVI is to use the burn-in process from MCMC to inform the initialization to a CAVI routine. This will improve the inferential robustness of CAVI while maintaining its computational speed.
Hybrid CAVI begins by performing MCMC as usual. However, the burn-in period is very short---much less than what is needed to approach the stationary distribution. We then continue for another few MCMC iterations and use this as an approximation, $\widehat{\pi}$, to the true posterior distribution. We then calculate the moments of $\widehat{\pi}$ and initialize the variational parameters using the method of moments---so that the moments of the initial variational distribution match those of $\widehat{\pi}$.
By performing these few initial MCMC steps, the inputs to the subsequent CAVI routine are often close enough to the truth that CAVI can finish quickly and accurately. Therefore, the power of our method is that it directly addresses the main pitfall of CAVI (sensitivity to initialization), while still maintaining a fast runtime.
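A schematic sketch of this initialization stage (our illustration; \texttt{log\_post} is a user-supplied unnormalized log-posterior, and the subsequent CAVI routine is not shown) is given below for Gaussian mean-field factors.
\begin{verbatim}
# Hybrid CAVI initialization sketch: a short random-walk Metropolis burn-in
# followed by method-of-moments matching of Gaussian mean-field factors.
import numpy as np

def short_mcmc(log_post, z0, n_steps=1000, n_burn=900, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    z = np.array(z0, dtype=float)
    lp = log_post(z)
    kept = []
    for i in range(n_steps):
        prop = z + step * rng.standard_normal(z.shape)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis accept/reject
            z, lp = prop, lp_prop
        if i >= n_burn:
            kept.append(z.copy())
    return np.array(kept)

def moment_match_init(samples):
    # Gaussian factors q_i(z_i) = N(lambda_{i,1}, lambda_{i,2})
    return samples.mean(axis=0), samples.var(axis=0)

# Toy illustration with a known 2-D Gaussian log-posterior
log_post = lambda z: -0.5 * np.sum((z - np.array([27.0, 13.0]))**2)
samples = short_mcmc(log_post, z0=[10.0, 10.0])
init_means, init_vars = moment_match_init(samples)    # seeds the CAVI routine
\end{verbatim}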
\section{Simulation Study}
\label{sec:simStudy}
\subsection{Application to Conditionally Conjugate Exponential Families}
\label{subsec:exp}
We now examine the efficacy of the Hybrid CAVI approach to estimating a posterior distribution. In order to compare CAVI, MCMC, and Hybrid CAVI, we use the following simple setup. We generate data
$$\vb*{x}_1, \ldots, \vb*{x}_{100}\overset{iid}{\sim} N_2\qty(\mqty[27 \\ 13], \mqty[38 & 0.8 \\ 0.8 & 4])$$
with a prior distribution for the mean $\vb*{\mu}$ given as $\vb*{\mu} \sim N_2(\vb*{0}, 50\vb*{I})$. The true posterior is then given by
\begin{equation}\label{eq:truepost}
\vb*{\mu}\,|\,\vb*{x} \sim N_2\qty(\mqty[27.230 \\ 12.991], \mqty[0.377 & 0.00793 \\ 0.00793 & 0.0400]).
\end{equation}
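The posterior in Equation \ref{eq:truepost} follows from the standard Gaussian--Gaussian conjugate update; a short numerical check (ours, with an arbitrary random seed, so the posterior mean varies slightly with the simulated data) is:
\begin{verbatim}
# Conjugate posterior for a Gaussian mean with known covariance:
#   Sigma_n = (Sigma_0^{-1} + n Sigma^{-1})^{-1},  mu_n = Sigma_n (n Sigma^{-1} xbar)
import numpy as np

rng = np.random.default_rng(1)
Sigma  = np.array([[38.0, 0.8], [0.8, 4.0]])
Sigma0 = 50.0 * np.eye(2)
n = 100
x = rng.multivariate_normal([27.0, 13.0], Sigma, size=n)

Sigma_n = np.linalg.inv(np.linalg.inv(Sigma0) + n * np.linalg.inv(Sigma))
mu_n = Sigma_n @ (n * np.linalg.inv(Sigma) @ x.mean(axis=0))
print(mu_n)      # close to [27, 13] up to sampling variability
print(Sigma_n)   # approx. [[0.377, 0.0079], [0.0079, 0.0400]], independent of the data
\end{verbatim}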
For the CAVI and Hybrid CAVI algorithms, let the variational factors $q_i(\mu_i;\, \vb*{\lambda}_i)$ be normal:
\begin{equation*}
\mu_i \,|\, \vb*{\lambda}_i \sim N(\lambda_{i, 1}, \lambda_{i, 2}).
\end{equation*}
\subsection*{CAVI with Analytic Expected Value}
Given the above setup, it is straightforward to directly calculate the expectation needed in the CAVI algorithm shown in Algorithm \ref{fig:caviAlg}; this drastically improves computation time. Table \ref{fig:results1} displays the computation times and convergence results for the CAVI algorithm using closed form expected values. We mentioned that a drawback of CAVI is sensitivity to initialization; however, with closed form updates available in this simulation, initializations for the mean and variance of the initial variational distribution proved not to be an issue in this particular example. The motivation for the three initial conditions, $\{\bb{\lambda^1, \lambda^2, \lambda^3}\}$, will become clearer in the section that follows. Needless to say, this demonstration in which CAVI converges close to the true posterior (as can be confirmed by comparison to the true posterior from Equation \ref{eq:truepost} or by the small KL divergence) in a short amount of time is a strong argument in favor of CAVI. It is clear that in this case, where closed-form updates for CAVI are available, there is no need for Hybrid CAVI.
\begin{table}[!htb]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Initialization & Time (s) & Estimated Posterior & KL Div.\\
\hline
$\mqty[\vb*{\lambda^1}_1\\\vb*{\lambda^1}_2]= \mqty[(10 , 1) \\ (10 , 1)]$ & 0.696 & $N_2\qty(\mqty[27.233 \\ 12.991], \mqty[0.375 & 0 \\ 0 & 0.0398])$ $\vphantom{\mqty[0\\0\\0]}$ & 70.359\\
$\mqty[\vb*{\lambda^2}_1\\\vb*{\lambda^2}_2]= \mqty[(25 , 1) \\ (10 , 1)]$ & 0.692 & $N_2\qty(\mqty[27.233 \\ 12.991], \mqty[0.375 & 0 \\ 0 & 0.0398])$ $\vphantom{\mqty[0\\0\\0]}$ & 70.359 \\
$\mqty[\vb*{\lambda^3}_1\\\vb*{\lambda^3}_2]= \mqty[(10 , 1) \\ (20 , 1)]$ & 0.670 & $N_2\qty(\mqty[27.233 \\ 12.991], \mqty[0.375 & 0 \\ 0 & 0.0398])$ $\vphantom{\mqty[0\\0\\0]}$ & 70.359\\
\hline
\end{tabular}
\caption{Results from using CAVI with a closed form expression for the coordinate updates written into the program.}
\label{fig:results1}
\end{center}
\end{table}
\subsection*{CAVI with Numerical Optimization of ELBO}
The previous subsection used a closed form expression for the CAVI update of each variational parameter. However, it is rare in practice to work with such convenient closed form densities, let alone to have conjugate priors as in our example; closed-form CAVI updates are only available for conditionally conjugate exponential families. Hence, it is often the case that CAVI must be run as plain block-coordinate ascent on the ELBO (i.e., each variational parameter update is obtained through numerical optimization). However, using numerical optimization rather than the optimal closed-form update at each step makes the algorithm sensitive to the initialization of the variational parameters and also increases its computational burden. We thus repeat the previous experiment with the ELBO calculated using numerical integration and with coordinate updates obtained from a bounded Newton-Conjugate Gradient algorithm.
\begin{table}[!htb]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Initialization & Time (s) & Estimated Posterior & KL Div. \\
\hline
$\mqty[\vb*{\lambda^1}_1\\\vb*{\lambda^1}_2]= \mqty[(10 , 1) \\ (10 , 1)]$ & 557.2 & $N_2\qty(\mqty[10.765 \\ 10.709], \mqty[0.230 & 0 \\ 0 & 0.980])$ $\vphantom{\mqty[0\\0\\0]}$ & 36456 \\
$\mqty[\vb*{\lambda^2}_1\\\vb*{\lambda^2}_2]= \mqty[(25 , 1) \\ (10 , 1)]$ & 572.2 & $N_2\qty(\mqty[25.524 \\ 10.774], \mqty[0.568 & 0 \\ 0 & 0.984 ])$ $\vphantom{\mqty[0\\0\\0]}$ & 1014\\
$\mqty[\vb*{\lambda^3}_1\\\vb*{\lambda^3}_2]= \mqty[(10 , 1) \\ (20 , 1)]$ & 712.4 & $N_2\qty(\mqty[10.392 \\ 19.002], \mqty[0.897 & 0 \\ 0 & 0.326])$ $\vphantom{\mqty[0\\0\\0]}$ & $\infty$\\
\hline
\end{tabular}
\caption{Results from using CAVI with a numerical optimization method to maximize the ELBO.}
\label{fig:results2}
\end{center}
\end{table}
From Table \ref{fig:results2}, we confirm our hypothesis that CAVI can be sensitive to initialization. We chose the three initial values above because we viewed them as reasonable guesses for a practitioner to have:
\begin{itemize}
\item The practitioner has no intuition for the covariance structure, and so initializes the covariance to be the identity matrix
\item The practitioner uses several different forms for the mean term: one in which the components are equal ($\vb*{\lambda^1}$), and two in which they are unequal ($\vb*{\lambda^2}$ and $\vb*{\lambda^3}$)
\end{itemize}
Evidently, the different initializations display wildly different behaviors. We see that $\bb{\lambda^1}$ and $\bb{\lambda^3}$ provide convergence results far from the truth\footnote{The KL divergence of $\infty$ is due to a floating-point underflow causing a $\log(0)$ term to appear in the calculation.} (determined by the large KL divergences), and all three take roughly 10 minutes to compute. Initializing with $\bb{\lambda^2}$ yields the relative best posterior estimate because the initial values were nearly equal to the true parameter values. Despite this, it still estimates many of the posterior parameters quite poorly.
\subsection*{Hybrid CAVI with Numerical Optimization of ELBO}
To test the efficacy of Hybrid CAVI, we will use the mean components of the same three $\bb{\lambda}$ initial values as before, and see how Hybrid CAVI performs relative to the existing methods. In addition to comparing the results in Table \ref{fig:results3} to the results in Tables \ref{fig:results1} and \ref{fig:results2}, we can compare Hybrid CAVI's performance to the results from running a complete MCMC procedure. Note that both MCMC and Hybrid CAVI are using the Metropolis-Hastings MCMC algorithm (see \cite{bayesChoice} for details).
\begin{table}[!htb]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Algorithm & Initialization & Time (s) & Estimated Posterior & KL Div. \\
\hline
MCMC & $\bb{\mu^1} = \mqty[10\\10]$ & 654.0 & $N_2\qty(\mqty[27.230 \\ 12.993], \mqty[0.418 & 0.0121 \\ 0.0121 & 0.0397])$ $\vphantom{\mqty[0\\0\\0]}$ & 71.128 \\
& $\bb{\mu^2} = \mqty[25\\10]$ & 444.1 & $N_2\qty(\mqty[27.231 \\ 12.995], \mqty[0.352 & 0.00793 \\ 0.00793 & 0.0404])$ $\vphantom{\mqty[0\\0\\0]}$ & 69.979\\
& $\bb{\mu^3} = \mqty[10\\20]$ & 541.7 & $N_2\qty(\mqty[27.222 \\ 12.983], \mqty[0.409 & 0.0135 \\ 0.0135 & 0.0427])$ $\vphantom{\mqty[0\\0\\0]}$ & 67.845\\
\hline
Hybrid & $\bb{\mu^1} = \mqty[10\\10]$ & 61.1 & $N_2\qty(\mqty[27.409 \\ 12.934], \mqty[0.288 & 0 \\ 0 & 0.0263])$ $\vphantom{\mqty[0\\0\\0]}$ & 95.750\\
CAVI & $\bb{\mu^2} = \mqty[25\\10]$ & 56.2 & $N_2\qty(\mqty[27.153 \\ 13.027], \mqty[0.235 & 0 \\ 0 & 0.0263])$ $\vphantom{\mqty[0\\0\\0]}$ & 95.139\\
& $\bb{\mu^3} = \mqty[10\\20]$ & 261.7 & $N_2\qty(\mqty[27.208 \\ 13.031], \mqty[0.446 & 0 \\ 0 & 0.0263])$ $\vphantom{\mqty[0\\0\\0]}$ & 90.404\\
\hline
\end{tabular}
\caption{Results for MCMC and Hybrid CAVI. Note that MCMC was run using 20,000 total steps with a 15,000 step burn-in. Hybrid CAVI used 1,000 total steps with a 900 step burn-in.}
\label{fig:results3}
\end{center}
\end{table}
The results presented in Table \ref{fig:results3} illustrate that Hybrid CAVI is much better than the ordinary CAVI algorithm when one is limited to numerical optimization of the ELBO. The number of MCMC iterations needed by Hybrid CAVI will be application-dependent, but there is clear merit in performing this short MCMC burn-in. Not only are the parameter estimates closer to the truth, but Hybrid CAVI also decreases the computation time by nearly a factor of 10 compared to both CAVI and MCMC, while yielding only a mild increase in KL divergence relative to a full MCMC procedure.
\subsection{Application to Non-Conditionally Conjugate Exponential Families}
\label{subsec:nonexp}
The above simulation demonstrates the advantages of Hybrid CAVI over MCMC and CAVI when closed-form updates are unavailable; conversely, there is clearly no reason to prefer Hybrid CAVI over CAVI when an analytic update exists. Thus, we next demonstrate Hybrid CAVI on a multivariate $t$-distribution, which does not lie in a conditionally conjugate exponential family, so that no closed-form updates exist.
In our new setup, we generate 100 data points $\vb*{x}_1, \ldots, \vb*{x}_{100} \sim F_{X, Y}$, where $F_{X, Y}$ is a distribution function generated using a Gaussian copula with covariance matrix
\begin{equation*}
\mqty[1 & 0.5 \\ 0.5 & 1]
\end{equation*}
and has marginals $F_X \sim t_8$ and $F_Y \sim t_{50}$.
We then wish to estimate the degrees of freedom parameters $\nu_1$ and $\nu_2$ for the marginal distributions. To do so, following the advice of \cite{gelman2006}, we have an inverse uniform prior on $\nu_i$ given by $1/\nu_i \sim \operatorname{Uniform}(0, 1/2)$. For the CAVI algorithm, we then choose our variational factors $q_i(\nu_i; \vb*{\lambda}_i)$ to be shifted Gamma densities:
\begin{equation*}
\nu_i - 2 \,|\, \vb*{\lambda}_i \sim \operatorname{Gamma}(\lambda_{i, 1}, \lambda_{i, 2})
\end{equation*}
Because there is no closed form expression for the posterior, we take the posterior estimated by MCMC as the ground truth, and compare CAVI and Hybrid CAVI to the MCMC posterior. The KL divergence is then approximated using a discrete summation rather than the usual integral formulation (due to MCMC returning discrete sample points). Given marginal sample means $m_1$, $m_2$ and marginal sample variances $s_1^2$, $s_2^2$ of an MCMC chain, the method of moments estimators for $\vb*{\lambda}_i$ are given by $\lambda_{i, 1} = (m_i-2)^2/s_i^2$ and $\lambda_{i, 2} = s_i^2/(m_i-2)$.
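In code, this moment matching is a two-line computation; for instance (an illustrative check with synthetic draws rather than an actual MCMC chain):
\begin{verbatim}
# Method-of-moments initialization of the shifted-Gamma variational factors.
import numpy as np

def mom_shifted_gamma(nu_samples):
    mu, s2 = nu_samples.mean(), nu_samples.var()
    return (mu - 2.0)**2 / s2, s2 / (mu - 2.0)   # (lambda_{i,1}, lambda_{i,2})

draws = 2.0 + np.random.default_rng(0).gamma(shape=3.2, scale=0.6, size=500)
print(mom_shifted_gamma(draws))   # approximately (3.2, 0.6)
\end{verbatim}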
The initializations for MCMC and Hybrid CAVI are chosen to be the median vector on the prior ($\vb*{\nu^1} = (4, 4)$) and another arbitrary large starting point ($\vb*{\nu^2} = (30, 30)$). For CAVI, we choose a completely uninformed starting value ($\vb*{\lambda^1}$), an initialization that is very close to the optimal ($\vb*{\lambda^2}$), and an initialization that swaps the true optimal parameter values ($\vb*{\lambda^3}$).
\begin{table}[!htb]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Algorithm & Initialization & Time (s) & Approx. KL Div. \\
\hline
MCMC & $\vb*{\nu^1} = \mqty[4 \\ 4]$ $ \vphantom{\mqty[0\\0\\0]}$ & 33806 & 0 \\
\hline
Hybrid CAVI & $\vb*{\nu^1} = \mqty[4 \\ 4]$ $ \vphantom{\mqty[0\\0\\0]}$ & 1671 & $2.471$ \\
& $\vb*{\nu^2} = \mqty[30 \\ 30]$ $ \vphantom{\mqty[0\\0\\0]}$ & 1557 & $2.915$ \\
\hline
CAVI & $\mqty[\vb*{\lambda^1}_1\\\vb*{\lambda^1}_2] = \mqty[(10, 10) \\ (10, 10)]$ $ \vphantom{\mqty[0\\0\\0]}$& 743& $\infty$ \\
&$\mqty[\vb*{\lambda^2}_1\\\vb*{\lambda^2}_2] = \mqty[(3.2, 0.6) \\ (1.7, 6.9)]$ $ \vphantom{\mqty[0\\0\\0]}$& 553 & 0.788\\
&$\mqty[\vb*{\lambda^3}_1\\\vb*{\lambda^3}_2] = \mqty[(1.7, 6.9) \\ (3.2, 0.6)]$ $ \vphantom{\mqty[0\\0\\0]}$& 577 & 9.081\\
\hline
\end{tabular}
\caption{Results for all three posterior estimation techniques. Note that MCMC was run using 100,000 total steps with a 10,000 step burn-in. Hybrid CAVI used 500 total steps with a 400 step burn-in.}
\label{fig:results4}
\end{center}
\end{table}
We once again see the advantages of Hybrid CAVI: the method achieves a small KL divergence in a fairly short amount of time, improving on CAVI in accuracy and on MCMC in speed. Indeed, Hybrid CAVI is again much more stable in its accuracy than CAVI (falling short of CAVI only when CAVI is initialized close to the optimal values) and takes a fraction of the time of a full MCMC procedure. It is thus evident that Hybrid CAVI reaps the benefits exhibited by both CAVI and MCMC while mitigating their respective flaws.
\bibliographystyle{authordate1}
| {
"timestamp": "2022-03-25T01:06:04",
"yymm": "2203",
"arxiv_id": "2203.12725",
"language": "en",
"url": "https://arxiv.org/abs/2203.12725",
"abstract": "Variational Inference (VI) is a method that approximates a difficult-to-compute posterior density using better behaved distributional families. VI is an alternative to the already well-studied Markov chain Monte Carlo (MCMC) method of approximating densities. With each algorithm, there are of course benefits and drawbacks; does there exist a combination of the two that mitigates the flaws of both? We propose a method to combine Coordinate Ascent Variational Inference (CAVI) with MCMC. This new methodology, termed Hybrid CAVI, seeks to improve the sensitivity to initialization and convergence problems of CAVI by proposing an initialization using method of moments estimates obtained from a short MCMC burn-in period. Unlike CAVI, Hybrid CAVI proves to also be effective when the posterior is not from a conditionally conjugate exponential family.",
"subjects": "Computation (stat.CO)",
"title": "Robust Coordinate Ascent Variational Inference with Markov chain Monte Carlo simulations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126438601955,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.7094397044526902
} |
https://arxiv.org/abs/1907.00892 | Sampling And Reconstruction Of Diffusive Fields On Graphs | In this paper, the focus is on the reconstruction of a diffusive field and the localization of the underlying driving sources on arbitrary graphs by observing a significantly smaller subset of vertices of the graph uniformly in time. Specifically, we focus on the heat diffusion equation driven by an initial field and an external time-invariant input. When the underlying driving sources are modeled as an initial field or external input, the sources (hence the diffusive field) can be recovered from the subsampled observations without imposing any band-limiting or sparsity constraints. When the diffusion is induced by both the initial field and external input, then the field and sources can be recovered from the subsampled observations, however, by imposing band-limiting constraints on either the initial field or external input. For heat diffusion on graphs, we can compensate for the unobserved vertices with the temporal samples at the observed vertices. If the observations are noiseless, then the recovery is exact. Nonetheless, the developed least squares estimators perform reasonably well with noisy observations. We apply the developed theory for localizing and recovering hot spots on a rectangular metal plate with a cavity. | \section{Introduction}
Graph signal processing extends tools from classical signal processing to deal with data defined on networks and other irregular domains~\cite{ortega2018graph,shuman2013Emerging,sandryhaila2014big}.
We often come across such datasets in many diverse applications such as environmental sensing, traffic monitoring, mapping the human brain~\cite{huang2016graph}, cybersecurity~\cite{shah2010detecting}, and social networks, to list a few.
Similar to how we understand many physical phenomena by a partial differential equation that explains the evolution of a spatiotemporal field and relates it to the inducing sources, we can also understand the temporal evolution of data over a network or an irregular domain using a partial differential equation. For example, the heat equation is often used to model the traffic movement, infection or virus spread, or rumor propagation~\cite{chen2016detecting,shah2010detecting}.
In this work, we focus on heat diffusion over networks. Specifically, we are interested in recovering diffusive signals on a graph by sampling a significantly smaller subset of vertices of the graph. This essentially amounts to localizing the underlying sources that drive the diffusion process from the observations that are collected at a few nodes. Oftentimes, the sources (e.g., traffic bottlenecks or rumor sources) that induce the diffusion process are highly localized in the network or sparse in the vertex domain and hence are not usually bandlimited. Therefore, we require new sampling and recovery methods for graph signals that do not impose any structural or band-limiting constraints, unlike some of the existing graph sampling methods~\cite{chen2015discrete, chepuri2018graph,marques2015sampling}. Although band-limiting constraints are not needed for recovering the second-order statistics of a signal defined on a graph from the subsampled observations, the framework developed in~\cite{chepuri2017graph} cannot be used for localizing diffusive sources. Spatio-temporal sampling and reconstruction of diffusive fields on a regular domain under the assumption that the inducing sources are sparse are studied in~\cite{lu2009distributed,ranieri2011sampling}. Assuming that the underlying sources are known,~\cite{teke2017time} focuses on estimating the time instant when the sources appear. In contrast, we will assume that the start time of the sources is known.
In this work, we develop a graph sampling method to recover diffusive fields induced by an initial field and/or an external input that does not vary with time. The main results of this paper are as follows. When the underlying driving sources are modeled as an initial field or external input, we can localize and recover the sources by sampling a significantly smaller subset of vertices of the graph uniformly in time and by using a simple least squares estimator. To do so, we do not impose any constraints on the sources such as sparsity or bandlimitedness. Since we can compensate for the unobserved vertices with the temporal samples at the observed vertices, we can recover the sources without imposing any constraints. However, when the diffusion field is due to both the initial field and external input, to reconstruct the diffusive fields from the subsampled observations, we require either the initial field or external input to be bandlimited. If the observations are noiseless, then the recovery is exact. Nonetheless, the developed estimators perform reasonably well with noisy observations.
Throughout this paper, we will use upper (lower) case boldface letters to denote matrices (column vectors), and we will denote sets
using calligraphic letters.
\section{Graph signals}
Consider an undirected graph ${\cal G} = \{{\cal V},{\cal E}\}$ with $N$ vertices (or nodes), where ${\cal V} = \{v_1,v_2,\ldots,v_N\}$ and ${\cal E}$ represent the vertex set and edge set, respectively. Let us denote the graph Laplacian matrix associated with ${\cal G}$ as ${\boldsymbol L} \in \mathbb{R}^{N \times N}$. A graph signal is a function $x : \mathcal{V} \rightarrow \mathbb{C}$ with $x(v)$ being the value of the function at vertex $v \in {\cal V}$. Let us collect the function values $\{x(v_n)\}_{n=1}^N$ in a length-$N$ vector ${\boldsymbol x} = [x_1,x_2,\ldots,x_N]^T$.
For undirected graphs ${\boldsymbol L}$ is real symmetric, and hence admits an eigendecomposition ${\boldsymbol L}= {\boldsymbol U} {\boldsymbol \Lambda}{\boldsymbol U}^T$ with ${\boldsymbol U} = [{\boldsymbol u}_1, \cdots, {\boldsymbol u}_N]$ being the eigenvector matrix collecting the eigenvectors $\{{\boldsymbol u}_n\}_{n=1}^N$ and ${\boldsymbol \Lambda} = {\rm diag}[\lambda_1,\cdots,\lambda_N] $ being the diagonal matrix containing the corresponding eigenvalues $\{\lambda_n\}_{n=1}^N$. Here, $\diag[\cdot]$ refers
to a diagonal matrix with its argument on the main diagonal. The eigenvectors and eigenvalues of ${\boldsymbol L}$ provide the notion of frequency in the graph setting~\cite{shuman2013Emerging,sandryhaila2014big}. Specifically, $\{{\boldsymbol u}_n\}_{n=1}^N$ forms an orthonormal Fourier-like basis for graph signals with the graph frequencies denoted by $\{\lambda_n\}_{n=1}^N$.
The {\it graph Fourier transform} of ${\boldsymbol x}$, denoted by ${\boldsymbol x}_f$, is given by
\begin{equation}
\label{eq:GFT}
{\boldsymbol x}_f = {\boldsymbol U}^T {\boldsymbol x} \Leftrightarrow {\boldsymbol x} = {\boldsymbol U} {\boldsymbol x}_f.
\end{equation}
We say that a graph signal ${\boldsymbol x}$ is bandlimited if its graph Fourier transform ${\boldsymbol x}_f$ is sparse (i.e., contains only a few nonzero entries). Due to the uncertainty principle~\cite{bruckstein2002generalized}, a sparse graph signal ${\boldsymbol x}$ is not bandlimited in general.
The frequency content of graph signals may be modified using {\it linear shift-invariant graph filters}~\cite{sandryhaila2013discrete} of the form
\begin{equation}
\label{eq:graph_fitler}
{\boldsymbol H}= {\boldsymbol U} \diag[{{\boldsymbol h}_f}] {\boldsymbol U}^T \in \mathbb{R}^{N \times N},
\end{equation}
where ${\boldsymbol h}_f$ is the frequency response of the graph filter.
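As a quick illustration of \eqref{eq:GFT} and \eqref{eq:graph_fitler}, the following Python sketch computes the graph Fourier transform of a signal and applies a linear shift-invariant graph filter; the path graph, the random signal, and the heat-kernel response used here are arbitrary choices made for illustration only.
\begin{verbatim}
import numpy as np

# Laplacian of an unweighted path graph with N = 5 vertices (illustrative).
N = 5
Adj = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
L = np.diag(Adj.sum(axis=1)) - Adj

# Eigendecomposition L = U diag(lam) U^T; U is orthonormal and lam >= 0.
lam, U = np.linalg.eigh(L)

# Graph Fourier transform of a signal x, and its inverse.
x = np.random.randn(N)
x_f = U.T @ x
assert np.allclose(U @ x_f, x)

# Linear shift-invariant graph filter H = U diag(h_f) U^T; here h_f is an
# illustrative low-pass (heat-kernel) frequency response exp(-t*lam).
t = 0.5
h_f = np.exp(-t * lam)
H = U @ np.diag(h_f) @ U.T
y = H @ x
\end{verbatim}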
\section{Data model} \label{sec:prob}
Let us consider a signal $x(\mathbb{D},t)$ in a physical domain $\mathbb{D}$ and temporal domain $t$. We will assume that $x(\mathbb{D},t)$ obeys the \emph{heat equation}
\begin{equation}
\frac{\partial x(\mathbb{D},t)}{\partial t} = \alpha \nabla^{2}x(\mathbb{D},t)+ q(\mathbb{D}),
\label{eq:heatequation}
\end{equation}
where $ \nabla^{2}$ is the Laplace operator, $\alpha$ is the diffusion constant, and $q(\mathbb{D})$ is the external time-invariant input. When $t=0$, $x(0) = x(\mathbb{D},0)$ represents the {\it initial field distribution}. Without loss of generality, from now on we will assume $\alpha = -1$.
To solve such a differential equation on a surface or manifold, the manifold is discretized (e.g., using a Delaunay mesh), and the Laplace operator is replaced with a discrete Laplacian matrix (more specifically, a \emph{cotan-Laplacian} matrix) denoted by ${\boldsymbol L}$. Thus, \eqref{eq:heatequation} is approximated by
\begin{equation}
\frac{\partial {\boldsymbol x}(t)}{\partial t} = - {\boldsymbol L} {\boldsymbol x}(t) + {\boldsymbol q},
\label{eq:heateqGraph}
\end{equation}
where ${\boldsymbol x}(t) = [x_1(t),\ldots, x_N(t)]^T \in \mathbb{R}^N$ and ${\boldsymbol q} = [q_1,\ldots, q_N]^T \in \mathbb{R}^N$ are signals defined on the graph represented by the Laplacian matrix ${\boldsymbol L}$. The differential equation \eqref{eq:heateqGraph} models heat diffusion on graphs, where the diffusion field is induced by ${\boldsymbol x}(0)$ and ${\boldsymbol q}$.
The solution to the non-homogenous differential equation \eqref{eq:heateqGraph} is given by~\cite{strang2015differentialeq}
\begin{equation}
\begin{aligned}
{\boldsymbol x}(t) &= e^{-t {\boldsymbol L}} {\boldsymbol x}(0)+ \int_{0}^{t} e^{-s{\boldsymbol L}}{\boldsymbol q} \,\,ds \\
&= {\boldsymbol U} e^{-t {\boldsymbol \Lambda}} {\boldsymbol U}^T {\boldsymbol x}(0) + {\boldsymbol U} \left(\int_{0}^{t} e^{-s{\boldsymbol \Lambda}} \,\,ds \right){\boldsymbol U}^T {\boldsymbol q} \\
&= {\boldsymbol U} e^{-t {\boldsymbol \Lambda}} {\boldsymbol x}_{f}(0) + {\boldsymbol U} \left(\int_{0}^{t} e^{-s{\boldsymbol \Lambda}} \,\,ds \right){\boldsymbol q}_f
\label{eq:heateqSol}
\end{aligned}
\end{equation}
where $e^{{\boldsymbol L}} = {\boldsymbol U} e^{{\boldsymbol \Lambda}} {\boldsymbol U}^T \in \mathbb{R}^{N \times N}$ denotes the matrix exponential of ${\boldsymbol L} \in \mathbb{R}^{N \times N}$, and ${\boldsymbol x}(0)$ is the initial field distribution at $t=0$. Here, ${\boldsymbol x}_{f}(0) = {\boldsymbol U}^T{\boldsymbol x}(0)$ and ${\boldsymbol q}_f = {\boldsymbol U}^T{\boldsymbol q}$ are, respectively, the graph Fourier transforms of ${\boldsymbol x}(0)$ and ${\boldsymbol q}$. From \eqref{eq:graph_fitler}, we can see that the diffusive field ${\boldsymbol x}(t)$ is obtained by filtering ${\boldsymbol x}(0)$ and ${\boldsymbol q}$ with graph filters having frequency responses $e^{-t {\boldsymbol \Lambda}}$ and $\int_{0}^{t} e^{-s{\boldsymbol \Lambda}} \,\,ds$, respectively.
Let us introduce the vectors ${\boldsymbol a}(t) =[e^{-\lambda_1 t}, \ldots,e^{-\lambda_N t}]^T$ and ${\boldsymbol b}(t) = [f_t(\lambda_1), \ldots, f_t(\lambda_N)]^T$, where
\[
f_t(\lambda) = \int_{0}^{t} e^{-\lambda s}\,\,ds = \frac{1-e^{-t\lambda}}{\lambda}
\]
with $f_t(0) = t$, and $f_0(\lambda) = 0$. We can now express \eqref{eq:heateqSol} compactly as
\begin{equation}
{\boldsymbol x}(t) = {\boldsymbol U} \diag[{\boldsymbol x}_{f}(0)] {\boldsymbol a}(t) + {\boldsymbol U} \diag[{\boldsymbol q}_f] {\boldsymbol b}(t).
\end{equation}
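As a sanity check on this closed-form solution, the short sketch below evaluates ${\boldsymbol x}(t)$ through the spectral expression above and compares it with a direct computation based on the matrix exponential; the graph, ${\boldsymbol x}(0)$, ${\boldsymbol q}$, and the evaluation time are illustrative choices only.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Illustrative graph: path graph with N = 6 vertices.
N = 6
Adj = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
L = np.diag(Adj.sum(axis=1)) - Adj
lam, U = np.linalg.eigh(L)

x0 = np.zeros(N); x0[2] = 1.0          # localized initial field
q = 0.1 * np.ones(N)                   # time-invariant external input
x0_f, q_f = U.T @ x0, U.T @ q

t = 0.8
a_t = np.exp(-t * lam)                                      # e^{-lambda*t}
b_t = np.where(lam > 1e-12,
               (1.0 - a_t) / np.maximum(lam, 1e-12), t)     # f_t(lambda)
x_t = U @ (a_t * x0_f) + U @ (b_t * q_f)                    # spectral form

# Cross-check: x(t) = expm(-tL) x(0) + (int_0^t expm(-sL) ds) q, where the
# integral is evaluated in the eigenbasis (it equals U diag(b_t) U^T).
x_t_ref = expm(-t * L) @ x0 + U @ np.diag(b_t) @ U.T @ q
assert np.allclose(x_t, x_t_ref)
\end{verbatim}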
Next, let us sample ${\boldsymbol x}(t)$ uniformly in time at instances $\{t_k = \Delta k, k=1,2,\cdots,T\}$ with step size $\Delta $ to obtain the data matrix ${\boldsymbol X} = [{\boldsymbol x}(t_1), {\boldsymbol x}(t_2),\cdots, {\boldsymbol x}(t_T)] \in \mathbb{R}^{N \times T}$, which is given by
\begin{equation}
{\boldsymbol X}= {\boldsymbol U} \diag[{\boldsymbol x}_{f}(0)] {\boldsymbol A}^T + {\boldsymbol U} \diag[{\boldsymbol q}_f] {\boldsymbol B}^T,
\label{eq:heateqSol_vec}
\end{equation}
where ${\boldsymbol A} = [{\boldsymbol a}(t_1), {\boldsymbol a}(t_2),\ldots,{\boldsymbol a}(t_T)]^T \in \mathbb{R}^{T \times N}$ and ${\boldsymbol B} = [{\boldsymbol b}(t_1),{\boldsymbol b}(t_2), \ldots,{\boldsymbol b}(t_T)]^T \in \mathbb{R}^{T \times N}$. Also, let us observe a subset of $K$ out of $N$ mesh points and denote this subset with ${\cal K} \subseteq {\cal V}$, where $|{\cal K}| = K$. By introducing a selection matrix ${\boldsymbol \Phi} \in \{0,1\}^{K \times N}$ that selects the field values at vertices indicated by ${\cal K}$, we can mathematically relate the subsampled observations to ${\boldsymbol X}$ as
\[
{\boldsymbol Y} = [{\boldsymbol y}(t_1), {\boldsymbol y}(t_2),\cdots, {\boldsymbol y}(t_T)] = {\boldsymbol \Phi} {\boldsymbol X}.
\]
In what follows, we will develop estimators to recover ${\boldsymbol x}(0)$ and/or ${\boldsymbol q}$ from ${\boldsymbol Y}$.
\section{Diffusion field induced by ${\boldsymbol x}(0)$ or ${\boldsymbol q}$} \label{sec:init}
In this section, we will develop a simple least squares estimator for reconstructing the diffusion field induced by ${\boldsymbol x}(0)$ or ${\boldsymbol q}$ from the subsampled data matrix ${\boldsymbol Y}$. More importantly, we do not impose any band-limiting constraints on the sources. This means that the sources may be sparse in the vertex domain and can model localized events such as rumor or infection sources in a complex network, traffic accidents in a road network, or diffusion of hot spots on a surface, to list a few.
Consider the case in which the diffusion field \eqref{eq:heateqSol} is induced only by the initial field ${\boldsymbol x}(0)$, i.e., the external input is ${\boldsymbol q} = {\bf 0}$.
From \eqref{eq:heateqSol} and \eqref{eq:heateqSol_vec}, we have
\[
{\boldsymbol Y} = {\boldsymbol \Phi}{\boldsymbol X}= {\boldsymbol \Phi} {\boldsymbol U} \diag[{\boldsymbol x}_{f}(0)] {\boldsymbol A}^T.
\]
Vectorizing ${\boldsymbol Y}$, we get a system of $KT$ equations in $N$ unknowns given by
\begin{equation}
{\boldsymbol y} = {\rm vec}({\boldsymbol Y}) = \left({\boldsymbol A} \circ {\boldsymbol \Phi} {\boldsymbol U}\right) {\boldsymbol x}_{f}(0),
\label{eq:obsx0}
\end{equation}
where $\circ$ denotes the Khatri-Rao (i.e., columnwise Kronecker) product, ${\rm vec}(\cdot)$ refers to the matrix vectorization
operator. Here, we have used the property ${\rm vec}({\boldsymbol A}\diag[{\boldsymbol b}]{\boldsymbol C})= ({\boldsymbol C}^T \circ {\boldsymbol A}){\boldsymbol b}$.
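This vectorization identity is easy to verify numerically; the following sketch checks ${\rm vec}({\boldsymbol M}\diag[{\boldsymbol b}]{\boldsymbol C}) = ({\boldsymbol C}^T \circ {\boldsymbol M}){\boldsymbol b}$ on random matrices of arbitrary (illustrative) sizes, and the \texttt{khatri\_rao} helper defined here is reused in the later sketches.
\begin{verbatim}
import numpy as np

def khatri_rao(P, Q):
    # Columnwise Kronecker product: column i equals kron(P[:, i], Q[:, i]).
    return np.vstack([np.kron(P[:, i], Q[:, i]) for i in range(P.shape[1])]).T

m, n, p = 4, 3, 5
M = np.random.randn(m, n)
b = np.random.randn(n)
C = np.random.randn(n, p)

lhs = (M @ np.diag(b) @ C).flatten(order='F')   # vec() stacks columns
rhs = khatri_rao(C.T, M) @ b
assert np.allclose(lhs, rhs)
\end{verbatim}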
If we choose $K$ and $T$ such that $KT \geq N$ and the matrix ${\boldsymbol A} \circ {\boldsymbol \Phi} {\boldsymbol U}$ has full column rank, then we can estimate ${\boldsymbol x}_{f}(0)$ using least squares as
\[
\widehat{{\boldsymbol x}}_{f}(0) = \left({\boldsymbol A} \circ {\boldsymbol \Phi} {\boldsymbol U}\right)^\dag {\boldsymbol y},
\]
and localize the sources as
\[
\widehat{{\boldsymbol x}}(0) = {\boldsymbol U} \widehat{{\boldsymbol x}}_{f}(0).
\]
Using this in \eqref{eq:heateqSol} allows us to compute the diffusive field at any time $t$ and at all the vertices. When the diffusion field is induced by ${\boldsymbol q}$ with ${\boldsymbol x}(0)={\bf 0}$, the least squares estimator for ${\boldsymbol q}$ may be developed along similar lines.
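A minimal end-to-end sketch of this recovery procedure is given below. The graph, the observed vertex set, the time grid, and the sparse initial field are all illustrative choices, \texttt{khatri\_rao} is the columnwise Kronecker product defined earlier (repeated for self-containment), and no sparsity or band-limiting prior is used in the estimator.
\begin{verbatim}
import numpy as np

def khatri_rao(P, Q):
    return np.vstack([np.kron(P[:, i], Q[:, i]) for i in range(P.shape[1])]).T

rng = np.random.default_rng(0)
N, K, T, Delta = 20, 6, 8, 0.1

# Random weighted graph (illustrative); its Laplacian generically has
# distinct eigenvalues, so A has no repeated columns.
W = np.triu(rng.random((N, N)), 1); W = W + W.T
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)

# Sparse, non-bandlimited initial field; no external input (q = 0).
x0 = np.zeros(N); x0[[3, 11]] = 1.0, 0.5

# Uniform temporal samples t_k = k*Delta and K observed vertices.
t = Delta * np.arange(1, T + 1)
A = np.exp(-np.outer(t, lam))                 # T x N
obs = rng.choice(N, size=K, replace=False)
PhiU = U[obs, :]                              # Phi U, K x N

# Noiseless data matrix X, subsampled observations, and vectorization.
X = U @ np.diag(U.T @ x0) @ A.T
y = X[obs, :].flatten(order='F')              # vec(Phi X)

# Least-squares recovery of x_f(0) and x(0).
M = khatri_rao(A, PhiU)                       # KT x N
x0f_hat, *_ = np.linalg.lstsq(M, y, rcond=None)
x0_hat = U @ x0f_hat
print("rank of A o Phi U:", np.linalg.matrix_rank(M),
      " relative error:", np.linalg.norm(x0_hat - x0) / np.linalg.norm(x0))
\end{verbatim}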
The rank of the Khatri-Rao product of two matrices ${\boldsymbol A}$ and ${\boldsymbol B}$ (of appropriate dimensions) with no all-zero column satisfies~\cite{sidiropoulos2000uniqueness}
\[
{\rm rank}({\boldsymbol A} \circ {\boldsymbol B}) \geq \max\{{\rm rank}({\boldsymbol A}),{\rm rank}({\boldsymbol B})\}.
\]
When the sampling time instances $\{t_1,\cdots,t_T\}$ and the eigenvalues of ${\boldsymbol L}$ are distinct, the $T \times N$ Vandermonde matrix ${\boldsymbol A}$ has full column rank $N$ for $T \geq N$ and, by construction, has no all-zero column. Therefore, selecting rows of ${\boldsymbol U}$ such that ${\boldsymbol \Phi}{\boldsymbol U}$ has no all-zero columns ensures that the rank of the matrix ${\boldsymbol A} \circ {\boldsymbol \Phi} {\boldsymbol U}$ is $N$. In fact, observing even a single node uniformly in time might already yield a matrix ${\boldsymbol A} \circ {\boldsymbol \Phi} {\boldsymbol U}$ with full column rank. However, in practice, depending on the observation time window, the diffusion constant, and the spectrum of ${\boldsymbol L}$, the matrix ${\boldsymbol A}$ might be ill-conditioned. In such cases, ${\boldsymbol \Phi}$ may be designed using sparse sensing (or sensor selection) techniques (e.g., see~\cite{chepuri2016sparse,ortiz2019sparse}) to obtain a well-conditioned, full column rank matrix ${\boldsymbol A} \circ {\boldsymbol \Phi} {\boldsymbol U}$.
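If conditioning is a concern, one simple way to choose the observed vertices is a greedy heuristic that adds, one vertex at a time, the row of ${\boldsymbol U}$ that keeps the condition number of ${\boldsymbol A} \circ {\boldsymbol \Phi}{\boldsymbol U}$ smallest. The sketch below is such an illustrative stand-in and is not the sparse sensing design of~\cite{chepuri2016sparse,ortiz2019sparse}.
\begin{verbatim}
import numpy as np

def khatri_rao(P, Q):
    return np.vstack([np.kron(P[:, i], Q[:, i]) for i in range(P.shape[1])]).T

def greedy_vertex_selection(U, A, K):
    """Greedily pick K vertices (rows of U), each time adding the vertex
    that keeps cond(A o Phi U) smallest.  Simple illustrative heuristic."""
    N = U.shape[0]
    chosen = []
    for _ in range(K):
        best, best_cond = None, np.inf
        for v in range(N):
            if v in chosen:
                continue
            c = np.linalg.cond(khatri_rao(A, U[chosen + [v], :]))
            if best is None or c < best_cond:
                best, best_cond = v, c
        chosen.append(best)
    return chosen
\end{verbatim}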
\begin{figure}
\centering
\includegraphics[width=0.5\columnwidth]{./figures/mesh.eps}
\caption{Discretized metal plate with a cavity. The red (black) dots represent the observed (unobserved) vertices.}
\label{fig:mesh}
\end{figure}
\begin{figure*}[!h]
\centering
\psfrag{Celcius}{\scriptsize Celsius}
\psfrag{true}{\scriptsize True}
\psfrag{Estimated}{\scriptsize Estimated}
\psfrag{Mesh #89}{\scriptsize $x(v_{89},t)$}
\psfrag{Mesh #88}{\scriptsize $x(v_{88},t)$}
\psfrag{Mesh #90}{\scriptsize $x(v_{90},t)$}
\psfrag{Mesh #73}{\scriptsize $x(v_{73},t)$}
\psfrag{Field diffusion [Celcius]}{\hskip-3mm\scriptsize Field diffusion [Celsius]}
\psfrag{time}{\scriptsize time [s]}
\psfrag{Source intensity}{\scriptsize Source intensity}
\psfrag{Node index}{\scriptsize Node index}
\psfrag{T=8}{\scriptsize $T=8$ }
\psfrag{T=16}{\scriptsize $T=16$ }
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=0.8\columnwidth]{./figures/x0.eps}
\caption{}
\label{fig:x0}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=0.9\columnwidth]{./figures/xt1.eps}
\caption{}
\label{fig:xt1}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=0.9\columnwidth]{./figures/x0hat.eps}
\caption{}
\label{fig:x0hat}
\end{subfigure}
\\[1.5em]
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=0.8\columnwidth]{./figures/q.eps}
\caption{}
\label{fig:q}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=0.9\columnwidth]{./figures/xt2.eps}
\caption{}
\label{fig:xt2}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=0.9\columnwidth]{./figures/qhat.eps}
\caption{}
\label{fig:qhat}
\end{subfigure}
\caption{Recovery of diffusive fields. (a) Initial field distribution. (b) Evolution of the field with time when ${\boldsymbol q} = {\bf 0}$. (c) Least squares based localization and reconstruction of ${\boldsymbol x}(0)$. (d) Time-invariant and smooth external input ${\boldsymbol q}$. (e) Evolution of field with time due to ${\boldsymbol x}(0)$ and ${\boldsymbol q}$. (f) Least squares based reconstruction of ${\boldsymbol q}$.}
\end{figure*}
\section{Diffusion field induced by ${\boldsymbol x}(0)$ and ${\boldsymbol q}$}
In this section, we consider the case in which the diffusion field is induced by ${\boldsymbol x}(0)$ and a bandlimited time-invariant input ${\boldsymbol q}$, and provide a simple least squares estimator to recover the underlying sources from the subsampled data matrix ${\boldsymbol Y}$. Although we restrict ${\boldsymbol q}$ to be bandlimited, we do not impose any band-limiting or other structural constraints on ${\boldsymbol x}(0)$. The heat diffusion equation may be used to understand the movement of traffic in cities. Although the usual traffic movement may be assumed to be a smooth signal on a road network, there could exist a localized traffic bottleneck (e.g., due to an accident), which is a sparse non-bandlimited graph signal. Such diffusive fields may be modeled using \eqref{eq:heateqGraph} with a sparse ${\boldsymbol x}(0)$ representing the localized events and a bandlimited
${\boldsymbol q}$ representing the usual activity.
Recall that if ${\boldsymbol q}$ is bandlimited, then ${\boldsymbol q}_f$ will be sparse. Without loss of generality, let us assume that the first $P$ entries of ${\boldsymbol q}_f = [q_{f,1},q_{f,2},\ldots, q_{f,N}]^T$ are nonzero. Then, the bandlimited (or smooth) signal ${\boldsymbol q}$ may be expressed as a linear combination of the first few eigenvectors as
\begin{equation}
\label{eq:bl_q}
{\boldsymbol q} = \sum_{i=1}^P {\boldsymbol u}_i q_{f,i} = {\boldsymbol U}_P {\boldsymbol q}_{f,P},
\end{equation}
where ${\boldsymbol U}_P = [{\boldsymbol u}_1,\ldots,{\boldsymbol u}_P] \in \mathbb{R}^{N \times P}$ and ${\boldsymbol q}_{f,P} = [q_{f,1},\ldots,q_{f,P}]^T \in \mathbb{R}^{P}$.
Vectorizing ${\boldsymbol Y}$ in \eqref{eq:heateqSol_vec}, we have
\begin{equation}
{\boldsymbol y} ={\rm vec}({\boldsymbol Y}) =\left[\begin{array}{cc}{\boldsymbol A} \circ {\boldsymbol \Phi}{\boldsymbol U} & {\boldsymbol B} \circ {\boldsymbol \Phi} {\boldsymbol U} \end{array}\right] \left[\begin{array}{c}{\boldsymbol x}_{f}(0) \\{\boldsymbol q}_{f}\end{array}\right].
\label{eq:vec_y_q}
\end{equation}
Substituting \eqref{eq:bl_q}, we get a linear system of $KT$ equations in $N + P$ unknowns
\begin{equation}
\begin{aligned}
{\boldsymbol y} & = \left({\boldsymbol A} \circ {\boldsymbol \Phi} {\boldsymbol U}\right) {\boldsymbol x}_{f}(0) + \left({\boldsymbol B} \circ {\boldsymbol \Phi} {\boldsymbol U}\right) {\boldsymbol U}^T {\boldsymbol U}_P {\boldsymbol q}_{f,P}\\
&= \left[\begin{array}{cc}{\boldsymbol A} \circ {\boldsymbol \Phi}{\boldsymbol U} & ({\boldsymbol B} \circ {\boldsymbol \Phi} {\boldsymbol U}){\boldsymbol U}^T{\boldsymbol U}_P \end{array}\right] \left[\begin{array}{c}{\boldsymbol x}_{f}(0) \\{\boldsymbol q}_{f,P}\end{array}\right].
\end{aligned}
\label{eq:vec_y_blq}
\end{equation}
If the matrix ${\boldsymbol \Psi} = \left[{\boldsymbol A} \circ {\boldsymbol \Phi}{\boldsymbol U} \quad ({\boldsymbol B} \circ {\boldsymbol \Phi} {\boldsymbol U}){\boldsymbol U}^T{\boldsymbol U}_P \right]$ has full column rank, which requires $KT \geq N+P$, we can use least squares to obtain
\[
\left[\begin{array}{c}\widehat{{\boldsymbol x}}_{f}(0) \\\widehat{{\boldsymbol q}}_{f,P}\end{array}\right] = {\boldsymbol \Psi}^\dag {\boldsymbol y},
\]
and subsequently localize the underlying sources as
\[
\widehat{{\boldsymbol x}}(0) = {\boldsymbol U} \widehat{{\boldsymbol x}}_{f}(0); \,\, \widehat{{\boldsymbol q}} = {\boldsymbol U}_P \widehat{{\boldsymbol q}}_{f,P}.
\]
In Section~\ref{sec:init}, we have seen that by appropriately selecting $K < N$ rows of ${\bf U}$ we may obtain a full column rank matrix ${\boldsymbol A} \circ {\boldsymbol \Phi} {\boldsymbol U}$, as ${\boldsymbol A}$ has full column rank. Appending ${\boldsymbol B} \circ {\boldsymbol \Phi} {\boldsymbol U}$, however, doubles the number of unknowns to $2N$, and with only $K < N$ observed vertices the matrix $\left[{\boldsymbol A} \circ {\boldsymbol \Phi}{\boldsymbol U} \quad {\boldsymbol B} \circ {\boldsymbol \Phi} {\boldsymbol U}\right]$ is, in general, rank deficient. This means that we have to impose some structural constraint on ${\boldsymbol q}$, such as the band-limiting constraint above, to recover it uniquely from the subsampled data when the diffusive field is induced by both ${\boldsymbol x}(0)$ and ${\boldsymbol q}$. In other words, only by sampling in time and observing all the $N$ nodes can we recover the $2N$ unknowns ${\boldsymbol x}(0)$ and ${\boldsymbol q}$ without any band-limiting constraints.
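The joint estimator may be sketched as follows; the graph, the bandwidth $P$, the observed vertex set, and the time grid are illustrative, \texttt{khatri\_rao} is as before, and the sparse ${\boldsymbol x}(0)$ is left unconstrained while ${\boldsymbol q}$ is restricted to the span of the first $P$ eigenvectors.
\begin{verbatim}
import numpy as np

def khatri_rao(P, Q):
    return np.vstack([np.kron(P[:, i], Q[:, i]) for i in range(P.shape[1])]).T

rng = np.random.default_rng(1)
N, K, T, P_bw = 16, 8, 10, 3                  # P_bw = bandwidth of q

W = np.triu(rng.random((N, N)), 1); W = W + W.T
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)
U_P = U[:, :P_bw]

# Sparse (non-bandlimited) x(0) and a bandlimited q in the span of U_P.
x0 = np.zeros(N); x0[5] = 1.0
q = U_P @ rng.standard_normal(P_bw)

# Matrices A and B for the sampling times t_k = k*Delta.
t = 0.1 * np.arange(1, T + 1)
A = np.exp(-np.outer(t, lam))
B = np.where(lam[None, :] > 1e-12,
             (1.0 - A) / np.maximum(lam[None, :], 1e-12), t[:, None])

# Subsampled, vectorized observations and the joint system matrix Psi.
obs = rng.choice(N, size=K, replace=False)
PhiU = U[obs, :]
y = khatri_rao(A, PhiU) @ (U.T @ x0) + khatri_rao(B, PhiU) @ (U.T @ q)
Psi = np.hstack([khatri_rao(A, PhiU),
                 khatri_rao(B, PhiU) @ U.T @ U_P])   # (B o Phi U) U^T U_P

theta, *_ = np.linalg.lstsq(Psi, y, rcond=None)      # needs K*T >= N + P
x0_hat, q_hat = U @ theta[:N], U_P @ theta[N:]
print("errors:", np.linalg.norm(x0_hat - x0), np.linalg.norm(q_hat - q))
\end{verbatim}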
\section{Numerical experiments}
In this section, we apply the developed theory of graph sampling for reconstructing diffusive fields induced by hot spots on a metal block with a cavity. We use the \emph{partial differential equation} toolbox from MATLAB to mesh the surface. The generated mesh with $N=134$ vertices is shown in Fig.~\ref{fig:mesh}. We observe $K=32$ vertices uniformly in time in the interval $[0, 1.44] $s with step size $\Delta = 0.16$s and $T = 10$. The sampled vertices are also indicated in Fig.~\ref{fig:mesh}. We present results for the following two cases: (i) diffusive field induced by ${\boldsymbol x}(0)$, and (ii) diffusive field induced by ${\boldsymbol x}(0)$ and ${\boldsymbol q}$.
As discussed in Section~\ref{sec:init}, to recover diffusive fields induced by ${\boldsymbol x}(0)$ with ${\boldsymbol q} = {\bf 0}$, we do not require any band-limiting constraints.
To demonstrate this, for ${\boldsymbol x}(0)$, we use a very sparse vector with only two non-zero entries at vertices $v_{88}$ and $v_{89}$. Since this initial field distribution is highly localized in the vertex domain, it is not bandlimited. Fig.~\ref{fig:x0} shows the initial field distribution at $t=0$, and Fig.~\ref{fig:xt1} shows the evolution of the diffusive field at vertices $v_{73}$, $v_{88}$, $v_{89}$, and $v_{90}$ for different time instances.
In Fig.~\ref{fig:x0hat}, we can see the exact localization of the hot spots in the noiseless setting using a simple linear least squares estimator, and more importantly, without using any sparsity constraints. In Fig.~\ref{fig:rmse}, we consider a noisy setting in which the observations in \eqref{eq:obsx0} are corrupted with Gaussian noise having zero mean and variance $10^{-5}$. We show the normalized root mean squared error (RMSE), averaged over 1000 independent Monte-Carlo experiments, for different values of $K$. Although the error decreases as $K$ increases, we can see that increasing $T$ beyond a certain value does not lead to better performance. This is because ${\boldsymbol A}$ becomes ill-conditioned as $T$ increases.
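For completeness, a normalized RMSE of the kind reported in Fig.~\ref{fig:rmse} can be estimated as sketched below; the routine assumes a system matrix ${\boldsymbol M} = {\boldsymbol A}\circ{\boldsymbol \Phi}{\boldsymbol U}$ built as in the earlier sketch, the noise variance and number of Monte-Carlo runs follow the values quoted above, and the normalization by $\|{\boldsymbol x}(0)\|$ is one plausible choice.
\begin{verbatim}
import numpy as np

def normalized_rmse(M, U, x0, sigma2=1e-5, runs=1000, seed=0):
    """Monte-Carlo estimate of the normalized RMSE of the least-squares
    recovery when y = M x_f(0) is corrupted by zero-mean Gaussian noise
    of variance sigma2."""
    rng = np.random.default_rng(seed)
    y_clean = M @ (U.T @ x0)
    sq_err = 0.0
    for _ in range(runs):
        y = y_clean + np.sqrt(sigma2) * rng.standard_normal(y_clean.shape)
        x0f_hat, *_ = np.linalg.lstsq(M, y, rcond=None)
        sq_err += np.sum((U @ x0f_hat - x0) ** 2)
    return np.sqrt(sq_err / runs) / np.linalg.norm(x0)
\end{verbatim}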
For the case in which the diffusion field is induced by both ${\boldsymbol x}(0)$ and ${\boldsymbol q}$, we use a sparse ${\boldsymbol x}(0)$ as before, and a bandlimited ${\boldsymbol q}$ with $P=5$. Fig.~\ref{fig:q} shows the external time-invariant input, which is smooth on the surface. When ${\boldsymbol q} \neq {\bf 0}$, we can see in Fig.~\ref{fig:xt2} that the field values do not decay with time, unlike in the previous case. Fig.~\ref{fig:qhat} shows the exact recovery of ${\boldsymbol q}$ using a simple linear least squares estimator (the reconstruction of ${\boldsymbol x}(0)$ is similar to Fig.~\ref{fig:x0hat}, hence not shown), where we do not impose any sparsity constraints for recovering ${\boldsymbol x}(0)$. As before, gathering more samples in time does not lead to better performance as both ${\boldsymbol A}$ and ${\boldsymbol B}$ become ill-conditioned, and as a consequence we need to sample more vertices.
\begin{figure}
\psfrag{mse}{\hskip-6mm \scriptsize Normalized RMSE}
\psfrag{sensors}{\scriptsize $K$ }
\centering
\includegraphics[width=0.9\columnwidth]{./figures/rmse.eps}
\caption{Normalized root mean squared error for the diffusion field induced by ${\boldsymbol x}(0)$ with ${\boldsymbol q} = {\bf 0}$.}
\label{fig:rmse}
\end{figure}
\section{Concluding remarks}
In this paper, we discussed the sampling and recovery of diffusive fields on graphs induced by possibly non-bandlimited sources. When the diffusion field is induced by an initial field or a time-invariant external input, we can localize and recover the sources by sampling a significantly smaller subset of nodes uniformly in time, without imposing any band-limiting constraints, and by using a simple least squares estimator. For diffusive fields induced by both an initial field and an external input, we can recover the sources from the subsampled data by constraining the external input to be bandlimited. When the observations are noiseless, the recovery is exact. In essence, for diffusion models on graphs, we can compensate for the unobserved vertices with the temporal samples at the observed vertices.
\bibliographystyle{IEEEtran}
| {
"timestamp": "2019-07-02T02:34:26",
"yymm": "1907",
"arxiv_id": "1907.00892",
"language": "en",
"url": "https://arxiv.org/abs/1907.00892",
"abstract": "In this paper, the focus is on the reconstruction of a diffusive field and the localization of the underlying driving sources on arbitrary graphs by observing a significantly smaller subset of vertices of the graph uniformly in time. Specifically, we focus on the heat diffusion equation driven by an initial field and an external time-invariant input. When the underlying driving sources are modeled as an initial field or external input, the sources (hence the diffusive field) can be recovered from the subsampled observations without imposing any band-limiting or sparsity constraints. When the diffusion is induced by both the initial field and external input, then the field and sources can be recovered from the subsampled observations, however, by imposing band-limiting constraints on either the initial field or external input. For heat diffusion on graphs, we can compensate for the unobserved vertices with the temporal samples at the observed vertices. If the observations are noiseless, then the recovery is exact. Nonetheless, the developed least squares estimators perform reasonably well with noisy observations. We apply the developed theory for localizing and recovering hot spots on a rectangular metal plate with a cavity.",
"subjects": "Signal Processing (eess.SP)",
"title": "Sampling And Reconstruction Of Diffusive Fields On Graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9787126506901791,
"lm_q2_score": 0.7248702702332476,
"lm_q1q2_score": 0.7094397035864881
} |
https://arxiv.org/abs/1408.3887 | Completion of continuity spaces with uniformly vanishing asymmetry | The classical Cauchy completion of a metric space (by means of Cauchy sequences) as well as the completion of a uniform space (by means of Cauchy filters) are well-known to rely on the symmetry of the metric space or uniform space in question. For quasi-metric spaces and quasi-uniform spaces various non-equivalent completions exist, often defined on a certain subcategory of spaces that satisfy a key property required for the particular completion to exist. The classical filter completion of a uniform space can be adapted to yield a filter completion of a metric space. We show that this completion by filters generalizes to continuity spaces that satisfy a form of symmetry which we call uniformly vanishing asymmetry. | \section{Introduction}
The theories of the completion of metric spaces and the completion
of uniform spaces are well-known and understood. There is little to
no doubt as to what completion should mean in these cases and there
are several (equivalent of course) constructions of the completions.
The situation is different when considering quasi-metric spaces and
quasi-uniform spaces. The lack of symmetry (see \cite{SH} for a detailed
account of symmetry and completions in the context of quantaloid enriched
categories) sabotages the standard completion constructions that work
in the symmetric case and the theory bifurcates with several different
notions of complete objects and different completion processes existing
in the literature (see e.g., \cite{Alemany96,Carlson71,Doitchinov88,Doitchinov91,Flagg99,Kivuvu08,Kivuvu09,Kunzi02,Lowen99,Render95,Romaguera02,Sherwood66}).
Consider the category ${\bf QMet}$ of quasi-metric spaces and uniformly
continuous functions, and the category ${\bf QUnif}$ of quasi-uniform
spaces and their morphisms. With a given quasi-metric space $(X,d)$
one can associate two quasi-uniform structures, one generated by the
entourages $U_{\varepsilon}=\{(x,y)\in X\times X\mid d(x,y)<\varepsilon\}$,
the other generated by the entourages $U^{\varepsilon}=\{(x,y)\in X\times X\mid d(y,x)<\varepsilon\}$,
giving rise to the two parallel functors in the diagram
\[
\xymatrix{{\bf Met^{uva}}\ar[r] & {\bf QMet}\ar@<2pt>[r]^{U(-)}\ar@<-2pt>[r]_{U(-^{op})} & {\bf QUnif}}
\]
and it is natural to consider their equalizer. From the universal
property of the equalizer it follows that ${\bf Met^{uva}}$ extends
${\bf Met}$, the full subcategory of ${\bf QMet}$ spanned by the
ordinary metric spaces, but it is strictly larger.
A quasi-metric space is the same thing as a $V$-\emph{continuity space
}or a $V$-\emph{space, }a concept introduced by Flagg in \cite{Flagg97a},
where $V$ is the value quantale $[0,\infty]$, viewed as a complete
lattice, with ordinary addition. Everything above can be repeated
with ${\bf Met}$ and ${\bf QMet}$ replaced, respectively, by ${\bf Met_{V}}$
(the category of symmetric $V$-spaces and uniformly continuous functions)
and ${\bf QMet_{V}}$ (the category of $V$-spaces and uniformly continuous
functions) for any value quantale $V$. In more detail, the aim of
this work is the construction of the dotted functors in the commutative
diagram
\[
\xymatrix{ & {\bf QMet_{V}\ar@<2pt>[r]\ar@<-2pt>[r]} & {\bf QUnif}\\
{\bf Met_{V}}\ar@{^{(}->}[r]\ar@{..>}[d] & {\bf Met_{V}^{uva}}\ar[u]^{e}\ar[r]\ar@{..>}[d] & {\bf Unif}\ar[u]\ar[d]\\
{\bf cMet_{V}}\ar@{^{(}->}[r] & {\bf cMet_{V}^{uva}}\ar[r] & {\bf cUnif}
}
\]
where the three lower vertical arrows are completion functors, thus
showing that the classical completion extends to the equalizer. In
more detail, in the diagram, the lower right vertical functor is the
standard construction of the completion of a uniform space via minimal
Cauchy filters. We show that this construction extends to separated
$V$-spaces in the equalizer. The construction is a metric re-incarnation
of that giving rise to the lower right functor. Most of the existing
notions of completions of $V$-spaces, when restricted to symmetric
spaces, yield (essentially) the same completion but these constructions
bifurcate to non-isometric completions for general $V$-spaces. It
is quite straightforward to manually check for most completions in
the literature that when restricted to $\mathbf{Met_{V}^{uva}}$,
the results are isometric. We thus expand the domain of the definition
of the standard completion to what appears to be the maximum possible.
In the context of the completion of metric spaces, one of the striking differences between the completion by means of Cauchy sequences and Cauchy filters is that the former requires a quotient construction, identifying sequences of distance $0$, while the latter enjoys a canonical choice of representative, namely the round filter $\mathcal F_\succ $ generated by fattening the elements in $\mathcal F$. In the construction of the completion we give below we also treat round filters in the full generality of $V$-spaces and we exhibit the 'roundification' process as a left adjoint on appropriately constructed categories.
The plan of the article is as follows. Section $2$ recounts some
basic facts on value quantales and $V$-spaces following \cite{Flagg97a}.
Section $3$ introduces the concept of uniformly vanishing asymmetry,
the notion of symmetry required for the completion, which is then
presented in Section $4$.
\section{Value quantales and $V$-spaces}
Recall that a \emph{complete lattice} $L$ is a poset which possesses,
for all $S\subseteq L$, a meet $\bigwedge S$ and a join $\bigvee S$.
The top and bottom elements are denoted, respectively, by $\infty$
and $0$. The \emph{well-above }relation $\succ$ is derived from
the poset structure in $L$ (or any poset) as follows. For $a,b\in L$,
$a$ is said to be well-above $b$, denoted by $a\succ b$, or $b\prec a$,
if, given any $S\subseteq L$ such that $b\ge\bigwedge S$, there
exists $s\in S$ with $a\ge s$.
A \emph{value quantale}, as introduced in \cite{Flagg97a} by Flagg,
is a pair $(V,+)$ where $V$ is a complete lattice and $+$ is an
associative and commutative binary operation on $V$ such that
\begin{itemize}
\item $x+0=x$
\item $x=\bigwedge\{y\in V\mid y\succ x\}$
\item $x+\bigwedge S=\bigwedge(x+S)$
\item $a\wedge b\succ0$
\end{itemize}
for all $x\in V$, $S\subseteq V$, and $a,b\in V$ with $a\succ0$
and $b\succ0$ ($x+S$ means $\{x+s\mid s\in S\}$). When the ambient
value quantale $V$ is clear from the context, we will write $\varepsilon\succ0$
as shorthand for the claim that $\varepsilon\in V$ and that $\varepsilon\succ0$
holds in $V$.
\begin{rem}
More commonly, a quantale is defined by the duals of the axioms above,
but in the context of this work we adhere to Flagg's original notation.
\end{rem}
\subsection{Value quantale fundamentals}
We list those properties of value quantales that are needed for the
proofs that follow. We provide no arguments for the claims we make
in this section since the proofs are either immediate or are found
in \cite{Flagg97a}. Let $V$ be a value quantale.
\begin{itemize}
\item For all $x,y,z\in V$,
\begin{enumerate}
\item if $x\succ y$, then $x\ge y$
\item if $x\succ y$ and $y\ge z$, then $x\succ z$
\item if $x\ge y$ and $y\succ z$, then $x\succ z$.
\end{enumerate}
\item For all $x,y,a,b\in V$, if $x\le y$ and $a\le b$, then $x+a\le y+b$.
\item For all $\varepsilon\succ0$, there exists $\delta\succ0$ such that
$\delta+\delta\le\varepsilon$. More generally, for all $\varepsilon\succ0$ and
$n\ge1$ there exists $\delta\succ0$ such that $n\cdot\delta\le\varepsilon$,
where $n\cdot\delta$ denotes the $n$-fold addition of $\delta$
with itself.
\item For all $x,z\in V$, if $x\prec z$, then there exists $y\in V$ with
$x\prec y\prec z$ (this result is known as the \emph{interpolation
property}).
\item Fix $b\in V$. Since $b+\square:V\to V$ preserves meets, it has a
left adjoint denoted by $\square-b:V\to V$, characterized by the
property that $a-b\le c\iff a\le b+c$, for all $a,c\in V$. Among
the numerous properties of this notion of subtraction, the one we
will use is $(a-b)-c=a-(b+c)$.
\item For all $a\in V$, we have $a=\bigwedge\{a+\varepsilon\mid\varepsilon\succ0\}$.
Consequently, for all $a,b\in V$, $a\le b$ if, and only if, $a\le b+\varepsilon$
for all $\varepsilon\succ0$.
\end{itemize}
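To keep a concrete model in mind, it may help to specialize these notions to the motivating value quantale $V=[0,\infty]$ (with the usual order and ordinary addition, as in the introduction), where one readily checks that
\[
\varepsilon\succ0\iff\varepsilon>0,\qquad a\succ b\iff a>b\quad(\text{for }b<\infty),
\]
and the subtraction $a-b$ of the last item is truncated subtraction, namely $0$ when $a\le b$ and the ordinary difference otherwise; in particular, the identity $(a-b)-c=a-(b+c)$ becomes the familiar $\max\{a-b-c,0\}=\max\{a-(b+c),0\}$.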
Value quantales are the objects of a $2$-category $\mathbb{V}$ as
follows. A morphism $\alpha:V\to W$ in $\mathbb{V}$ is a monotone
function of the underlying lattices such that
\begin{itemize}
\item $\alpha(0)=0$, and
\item $\alpha(a+b)\le\alpha(a)+\alpha(b)$
\end{itemize}
for all $a,b\in V$. Each hom set $\mathbb{V}(V,W)$ is given a poset
structure by declaring, for $\alpha,\beta:V\to W$, that $\alpha\le\beta$
precisely when $\beta(a)\le\alpha(a)$ for all $a\in V$. Interpreting
the poset $\mathbb{V}(V,W)$ as a category thus describes the $2$-cells
in the $2$-category $\mathbb{V}$.
\subsection{$V$-spaces\label{sub:catPers}}
Flagg introduced value quantales to replace the traditional non-negative
extended real numbers and act as generalized codomains for distance
functions $d:X\times X\to V$. Thus, a \emph{$V$-space} (called a
$V$-\emph{continuity space} in \cite{Flagg97a}), is a triple $(X,d,V)$
where $V$ is a value quantale, $X$ is a set, and $d:X\times X\to V$
is a function satisfying
\begin{itemize}
\item $d(x,x)=0$; and
\item $d(x,z)\le d(x,y)+d(y,z)$
\end{itemize}
for all $x,y,z\in X$. A $V$-space $(X,d,V)$ is \emph{symmetric}
if $d(x,y)=d(y,x)$ for all $x,y\in X$ and it is called \emph{separated
}if, for all $x,y\in X$, the equalities $d(x,y)=0=d(y,x)$ imply
$x=y$.
For a given value quantale $V$, the category ${\bf Met_{V}}$ consists
of all symmetric $V$-spaces as objects and all \emph{uniformly continuous
}mappings as morphisms $f:X\to Y$ (i.e., those functions satisfying
that for all $\varepsilon\succ0$ there exists $\delta\succ0$ such that
$d(f(x_{1}),f(x_{2}))\le\varepsilon$ whenever $d(x_{1},x_{2})\le\delta$).
Ignoring size issues, the assignment $V\mapsto{\bf Met_{V}}$ extends
to a $2$-functor ${\bf Met_{-}}:\mathbb{V}\to{\bf Cat}$ into the
$2$-category of categories. In more detail, if $\alpha:V\to W$
is a morphism of value quantales, then ${\bf Met_{\alpha}}:{\bf Met_{V}}\to{\bf Met_{W}}$
sends a $V$-space $(X,d)$ to $(X,\alpha_{*}d)$ where $\alpha_{*}d(x,y)=\alpha(d(x,y))$,
which is easily seen to be a $W$-space. If now $\beta:V\to W$ is
another morphism such that $\alpha\le\beta$, then it is immediately
verified that there is a natural transformation ${\bf Met_{\alpha}\to{\bf Met}_{\beta}}$,
where the component at $X$ is the identity function on the underlying
sets.
Similarly, one defines the categories ${\bf QMet_{V}}$ of $V$-spaces
and again one obtains a similar $2$-functor ${\bf QMet}_{-}:\mathbb{V}\to{\bf Cat}$.
The full subcategory ${\bf QMet_{V}^{0}}$ of ${\bf QMet_{V}}$ spanned
by the separated $V$-spaces is reflective, with the reflector $-_{0}:{\bf QMet_{V}}\to{\bf QMet_{V}^{0}}$
mapping $X$ to $X_{0}=X/\sim$, where $x\sim y$ precisely when $d(x,y)=d(y,x)=0$.
Similarly, the full subcategory ${\bf Met_{V}^{0}}$ of ${\bf Met_{V}}$
spanned by the separated symmetric $V$-spaces is reflective. It is
immediate that if $(X,d)$ is a $V$-space, then so is the \emph{dual
space} $X^{op}=(X,d^{op})$, where $d^{op}(x,y)=d(y,x)$.
\begin{rem}
$V$-spaces are in fact enriched $V$-categories. However, notice
that enriched functors then correspond to non-expanding functions
rather than the uniformly continuous ones we consider.
\end{rem}
$V$-spaces are general enough to capture all topological spaces in
the sense that for every topological space $X$, there is a value
quantale $V$ such that $X$ is $V$-metrizable (Theorem $4.15$ in
\cite{Flagg97a}). Further, the category ${\bf Top}$ is equivalent
to the category ${\bf QMet}_{T}$ whose objects are all pairs $(V,X)$
where $V$ is a value quantale and $X$ is a $V$-space, and a morphism
$(V,X)\to(W,Y)$ is a continuous function $f:X\to Y$ (see \cite{Weiss13}
for more details).
\section{Uniformly vanishing asymmetry}
We introduce now a class of spaces with a sufficient amount of symmetry
to allow for the classical completion via Cauchy filters to carry
through.
For a $V$-space $X$, a point $x\in X$, and $\varepsilon\succ0$
let ${\bf B}_{\varepsilon}(x)=\{y\in X\mid d(x,y)\le\varepsilon\}$ and
similarly let ${\bf B}^{\varepsilon}(x)=\{y\in X\mid d(y,x)\le\varepsilon\}$.
We extend the notation ${\bf B}_{\varepsilon}(x)$ to subsets $S\subseteq X$
by defining ${\bf B}_{\varepsilon}(S)=\bigcup_{s\in S}{\bf B}_{\varepsilon}(s)$,
with ${\bf B}^{\varepsilon}(S)$ defined similarly. Notice that ${\bf B}_{\varepsilon}(x)$
in $X^{op}$ is precisely ${\bf B}^{\varepsilon}(x)$ in $X$. The set
${\bf B}_{\varepsilon}(x)$ is a closed set in the topology generated
by the sets of the form $\{y\in X\mid d(x,y)\prec\varepsilon\}$,
where $x$ varies over $X$ and $\varepsilon\succ0$ varies in $V$
(see Theorem $4.4$ in \cite{Flagg97a}). That topology is denoted
by $\mathcal{O}(X)$ and one obtains the functor $\mathcal{O}:{\bf QMet_{V}}\to{\bf Top}$.
A straightforward verification shows that a $V$-space $X$ gives
rise to a quasi-uniform space $U(X)$, where the entourages are generated
by $\{(x,y)\in X\times X\mid d(x,y)\le\varepsilon\}$, where $\varepsilon\succ0$
varies in $V$, giving rise to a fully faithful functor $U:{\bf QMet_{V}}\to{\bf QUnif}$.
For a $V$-space $X$, the conditions
\begin{itemize}
\item for all $x\in X$ and for all $\varepsilon\succ0$ there exists $\delta\succ0$
such that ${\bf B}^{\delta}(x)\subseteq{\bf B}_{\varepsilon}(x)$ and
such that ${\bf B}_{\delta}(x)\subseteq{\bf B}^{\varepsilon}(x)$ (any
such $\delta$ will be called a \emph{modulus of symmetry }for $\varepsilon$);
\item the identity function $X^{op}\to X$ is a homeomorphism;
\item $\mathcal{O}(X)=\mathcal{O}(X^{op})$
\end{itemize}
are equivalent. If $X$ satisfies these conditions, then $X$ is said
to have\emph{ vanishing asymmetry.} Similarly, the conditions
\begin{itemize}
\item for all $\varepsilon\succ0$ there exists $\delta\succ0$ such that
if $d(y,x)\le\delta$, then $d(x,y)\le\varepsilon$ (any such $\delta$
will be called a \emph{uniform modulus of symmetry }for $\varepsilon$);
\item the identity functions $X^{op}\to X$ and $X\to X^{op}$ are uniformly
continuous;
\item $U(X)=U(X^{op})$
\end{itemize}
are equivalent. If $X$ satisfies these conditions, then $X$ is said
to have \emph{uniformly vanishing asymmetry}. Clearly, if $X$ has
uniformly vanishing asymmetry, then $X$ has vanishing asymmetry.
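For instance, with $V=[0,\infty]$, the quasi-metric
\[
d(x,y)=\max\{y-x,\,2(x-y)\}
\]
on $\mathbb{R}$ is not symmetric but has uniformly vanishing asymmetry: if $d(y,x)\le\delta$, then $x-y\le\delta$ and $y-x\le\delta/2$, so $d(x,y)\le2\delta$, and thus $\delta=\varepsilon/2$ is a uniform modulus of symmetry for $\varepsilon$. In contrast, the quasi-metric $d(x,y)=\max\{x-y,0\}$ on $\mathbb{R}$ does not even have vanishing asymmetry, since ${\bf B}^{\delta}(x)=(-\infty,x+\delta]$ is never contained in ${\bf B}_{\varepsilon}(x)=[x-\varepsilon,\infty)$.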
Let ${\bf Met_{V}^{va}}$ and ${\bf Met_{V}^{uva}}$ be the full subcategories
of ${\bf QMet_{V}}$ spanned by the spaces with vanishing asymmetry
and the spaces with uniformly vanishing asymmetry, respectively. Consider
the diagram
\[
\xymatrix{{\bf Met_{V}^{uva}}\ar@{^{(}->}[r]\ar@{^{(}->}[d] & {\bf QMet_{V}}\ar@<2pt>[r]^{U(-)}\ar@<-2pt>[r]_{U(-^{op})}\ar[d]_{id} & {\bf QUnif}\ar@<-2pt>[d]\ar@<2pt>[d]\\
{\bf Met_{V}^{va}}\ar@{^{(}->}[r] & {\bf QMet_{V}}\ar@<2pt>[r]^{\mathcal{O}(-)}\ar@<-2pt>[r]_{\mathcal{O}(-^{op})} & {\bf Top}
}
\]
where the square on the left consists of inclusions, and the right
vertical arrows are the standard constructions of the topology associated
to a quasi-uniform space. The diagram commutes as long as one does
not incorrectly mix different functors in the square on the right,
and we note that from the definition of (uniformly) vanishing asymmetry,
the top and bottom parts of the diagram are equalizers.
\section{Completion}
From this point onwards, we fix a value quantale $V$ and a $V$-space
$X$. We develop the relevant ingredients for constructing a completion
of $X$ as the set of all minimal Cauchy filters on $X$.
Recall that a \emph{filter }on a set $X$ is a non-empty collection
$\mathcal{F}\subseteq\mathcal{P}(X)$ such that $A\in\mathcal{F}\implies B\in\mathcal{F}$
for all $A\subseteq B\subseteq X$, and $A\cap B\in\mathcal{F}$ for
all $A,B\in\mathcal{F}$ (we do not require that $\emptyset\notin\mathcal{F}$,
so in particular the power-set $\mathcal{P}(X)$ is a filter, the
unique filter containing the empty set, referred to as an \emph{improper
filter}). A \emph{filter base} is a collection $\mathcal{B}\subseteq\mathcal{P}(X)$
such that for all $A,B\in\mathcal{B}$ there exists $C\in\mathcal{B}$
with $C\subseteq A\cap B$. It follows immediately that a filter base
$\mathcal{B}$ gives rise to a filter $\mathcal{F}$, the least filter
containing $\mathcal{B}$, given explicitly by $\mathcal{F}=\{D\subseteq X\mid\exists C\in\mathcal{B},C\subseteq D\}$.
By a filter (resp. filter base) on a $V$-space is meant a filter
(resp. filter base) on the underlying set.
A filter $\mathcal{F}$ is said to \emph{converge }to $x\in X$, written
$\mathcal{F}\to x$, if ${\bf B}_{\varepsilon}(x)\in\mathcal{F}$ for
all $\varepsilon\succ0$. Convergence interpreted in $X^{op}$ is
referred to as op-convergence, thus $\mathcal{F}$ \emph{op-converges}
to $x$, denoted by $\mathcal{F}\to^{op}x$, when ${\bf B}^{\varepsilon}(x)\in\mathcal{F}$
for all $\varepsilon\succ0$.
\begin{defn}
A filter $\mathcal{F}$ on $X$ is said to be a \emph{Cauchy filter
}if for all $\varepsilon\succ0$ there exists $x\in X$ such that
${\bf B}_{\varepsilon}(x)\in\mathcal{F}$. If, moreover, $\mathcal{F}$
does not contain any proper Cauchy subfilter, then $\mathcal{F}$
is called a \emph{minimal Cauchy filter}.
\end{defn}
$X$ is said to be \emph{Cauchy complete }if every proper Cauchy filter
on $X$ converges. The dual notion of a Cauchy filter is that of an
\emph{op-Cauchy filter}, namely when for all $\varepsilon\succ0$
there exists an $x\in X$ with ${\bf B}^{\varepsilon}(x)\in\mathcal{F}$,
that is $\mathcal{F}$ is op-Cauchy in $X$ precisely when $\mathcal{F}$
is Cauchy in $X^{op}$. The $V$-space $X$ is \emph{op-Cauchy complete
}if every proper op-Cauchy filter on $X$ op-converges. For spaces
with uniformly vanishing asymmetry introduced above, the dual concepts
of Cauchy completeness and op-Cauchy completeness coincide. A \emph{completion
}of $X$ is a Cauchy complete $V$-space $\hat{X}$ together with
an isometry $X\to\hat{X}$ with dense image in $\hat{X}$.
\begin{defn}
A filter $\mathcal{F}$ in $X$ is said to be a \emph{round filter
}if for all $F\in\mathcal{F}$ there exists $\varepsilon\succ0$ such
that ${\bf B}_{\varepsilon}(x)\in\mathcal{F}$ implies ${\bf B}_{\varepsilon}(x)\subseteq F$,
for all $x\in X$.
\end{defn}
The omitted proof of the following result is completely formal.
\begin{prop}
\label{prop:Cauchy plus round implies minimal cauchy}If $\mathcal{F}$
is Cauchy and round, then $\mathcal{F}$ is minimal Cauchy.
\end{prop}
Let $\mathcal{F}$ be a filter. Consider the collection $\{{\bf B}_{\varepsilon}(F)\mid F\in\mathcal{F},\,\,\varepsilon\succ0\}$,
which is a filter base since ${\bf B}_{\varepsilon\wedge\delta}(F\cap F')\subseteq{\bf B}_{\varepsilon}(F)\cap{\bf B}_{\delta}(F')$
(recalling that $\varepsilon\wedge\delta\succ0$). The generated filter
is denoted by $\mathcal{F}_{\succ}$.
\begin{prop}
\label{prop:Cauchy implies cauchy}If $\mathcal{F}$ is Cauchy, then
$\mathcal{F}_{\succ}$ is Cauchy.\end{prop}
\begin{proof}
Let $\varepsilon\succ0$ and $\delta\succ0$ with $2\cdot\delta\le\varepsilon$.
Let $x\in X$ with ${\bf B}_{\delta}(x)\in\mathcal{F}$, and so ${\bf B}_{\delta}({\bf B}_{\delta}(x))\in\mathcal{F}_{\succ}$.
Then ${\bf B}_{\varepsilon}(x)\in\mathcal{F}_{\succ}$ follows by ${\bf B}_{\delta}({\bf B}_{\delta}(x))\subseteq{\bf B}_{2\cdot\delta}(x)\subseteq{\bf B}_{\varepsilon}(x)$. \end{proof}
\begin{lem}
\label{lem:UBA implies round}If $X$ has uniformly vanishing asymmetry
and $\mathcal{F}\ne\mathcal{P}(X)$ is a filter on $X$, then $\mathcal{F}_{\succ}$
is round. \end{lem}
\begin{proof}
It suffices to show that for a given basis element ${\bf B}_{\varepsilon}(F)\in\mathcal{F}_{\succ}$
there exists a $\delta\succ0$ such that if ${\bf B}_{\delta}(y)\in\mathcal{F}_{\succ}$,
then ${\bf B}_{\delta}(y)\subseteq{\bf B}_{\varepsilon}(F)$. Let $\delta_{1}\succ0$
with $2\cdot\delta_{1}\le\varepsilon$ and let $\delta_{2}\succ0$
be a uniform modulus of symmetry for $\delta_{1}$. Set $\delta=\delta_{1}\wedge\delta_{2}$.
Suppose that ${\bf B}_{\delta}(y)\in\mathcal{F}_{\succ}$ for some
$y\in X$. Then ${\bf B}_{\delta}(y)\supseteq{\bf B}_{\varepsilon'}(F')\supseteq F'$
for some $F'\in\mathcal{F}$ and $\varepsilon'\succ0$. Choose some $x\in F\cap F'$.
Then $x\in{\bf B}_{\delta}(y)$, so $d(y,x)\le\delta\le\delta_{2}$, and since $\delta_{2}$ is a uniform modulus of symmetry for $\delta_{1}$, also $d(x,y)\le\delta_{1}$. To show
now that ${\bf B}_{\delta}(y)\subseteq{\bf B}_{\varepsilon}(F)$, notice
that if $z\in{\bf B}_{\delta}(y)$, then $d(y,z)\le\delta\le\delta_{1}$, and so
$d(x,z)\le d(x,y)+d(y,z)\le\delta_{1}+\delta_{1}\le\varepsilon$,
and thus $z\in{\bf B}_{\varepsilon}(F)$. \end{proof}
\begin{cor}
\label{cor:problematicSolved}If $X$ has uniformly vanishing asymmetry
and $\mathcal{F}\ne\mathcal{P}(X)$ is a Cauchy filter on $X$, then
$\mathcal{F}_{\succ}$ is a minimal Cauchy filter.
\end{cor}
\begin{cor}
If $X$ has uniformly vanishing asymmetry, then a filter $\mathcal{F}$
is minimal Cauchy if, and only if, $\mathcal{F}$ is Cauchy and round. \end{cor}
\begin{proof}
One direction is Proposition~\ref{prop:Cauchy plus round implies minimal cauchy}.
For the other direction, if $\mathcal{F}$ is minimal Cauchy, then
$\mathcal{F}=\mathcal{F}_{\succ}$, which is round.
\end{proof}
The results above translate to interesting categorical relations between
Cauchy and round filters, as we now show. Let ${\bf Fil_{V}}$ be
the category whose objects are all pairs $(X,\mathcal{F})$ where
$X$ is a $V$-space with uniformly vanishing asymmetry and $\mathcal{F}$
is a filter on $X$. The morphisms $f:(X,\mathcal{F})\to(Y,\mathcal{G})$
are uniformly continuous functions $f:X\to Y$ with the property that
$f(\mathcal{F})\supseteq\mathcal{G}$, where $f(\mathcal{F})=\{S\subseteq Y\mid f^{-1}(S)\in\mathcal{F}\}$
(which is easily seen to be a filter). Let ${\bf RFil_{V}}$ and ${\bf CFil_{V}}$
be the full subcategories of ${\bf Fil_{V}}$ spanned by round filters
and by Cauchy filters, respectively. Let ${\bf CRFil_{V}}={\bf CFil_{V}}\cap{\bf RFil_{V}}$.
Finally, let ${\bf rFil_{V}}$ be the full subcategory of ${\bf Fil_{V}}$
spanned by the \emph{restricted} objects, i.e., objects $(X,\mathcal{F})$
where $\mathcal{F}\ne\mathcal{P}(X)$ if $X\ne\emptyset$. Similarly
one defines the other restricted full subcategories. Consider the
diagram
\[
\xymatrix{ & & {\bf rFil_{V}}\ar[d]\ar@<-4pt>[ddll]_{(-)_{\succ}}\\
& & {\bf Fil_{V}}\ar[d]\\
{\bf rRFil_{V}}\ar[uurr]\ar[r]^{=} & {\bf RFil_{V}}\ar@<-2pt>@{..>}[ur]\ar[r] & {\bf Met_{V}^{uva}}\ar@{-->}@<-4pt>[u]\ar@{..>}@<4pt>[u]\ar@{..>}@<4pt>[l]\ar@{-->}@<4pt>[r] & {\bf CFil_{V}\ar@{-->}@<-2pt>[ul]\ar[l]} & {\bf rCFil_{V}\ar[uull]\ar[l]}\ar@<-2pt>[ddll]_{\quad(-)_{\succ}}\\
& & {\bf CRFil_{V}}\ar@<-2pt>[ur]\ar[ul]\ar[u]\\
& & {\bf rCRFil_{V}}\ar[u]^{=}\ar@<-2pt>[uurr]\ar[uull]
}
\]
where the upwards directed arrows in both diamonds are inclusion functors
and all of the arrows in the smaller diamond pointing towards the
centre are the obvious forgetful functors. The other arrows (which
are detailed below), with the exception of the pair on the upper left
side of the outer diamond, are all adjunctions, with the left adjoint
depicted on top or to the left of its right adjoint. The left adjoint
${\bf Met_{V}^{uva}\to{\bf Fil_{V}}}$ sends $X$ to $(X,\mathcal{P}(X))$
while the right adjoint sends $X$ to $(X,\{X\})$. For these functors
the dotted and the dashed triangles commute. We note that in the degenerate
case $V=\{0=\infty\}$, the inner diamond reduces to identity functors,
${\bf Met_{V}^{uva}\cong{\bf Set}}$, and ${\bf Fil_{V}}$ is the
category of filters introduced in \cite{Blass77}.
\begin{rem}
Regarding the forgetful functor $p:{\bf CRFil_{V}\to Met_{V}^{uva}},$
recall that the fiber over an object $X$ is the category consisting
of all of the objects in ${\bf CRFil_{V}}$ that project to $X$ and
all morphisms that project to the identity on $X$. This category
is essentially a set and is precisely the completion of $X$ we construct
below. \end{rem}
\begin{prop}
\label{prop:Problemaitc}The construction $(X,\mathcal{F})\mapsto(X,\mathcal{F}_{\succ})$
is the object part of a functor $(-)_{\succ}:{\bf Fil_{V}\to{\bf Fil_{V}}}$
which further sends $f:\mathcal{F}\to\mathcal{G}$ to $f:\mathcal{F}_{\succ}\to\mathcal{G}_{\succ}$.
The restriction of this functor to ${\bf rFil_{V}}$ gives rise to
the functor at the top left of the diagram above. \end{prop}
\begin{proof}
Note that uniform continuity of $f:X\to Y$ implies that for all $\varepsilon\succ0$
there exists $\delta\succ0$ such that $f^{-1}({\bf B}_{\varepsilon}(S))\supseteq{\bf B}_{\delta}(f^{-1}(S))$,
for all $S\subseteq Y$. Now, to show that $(-)_{\succ}$ is functorial,
suppose that $f:(X,\mathcal{F})\to(Y,\mathcal{G})$ is a morphism,
i.e., that $f(\mathcal{F})\supseteq\mathcal{G}$, and we need to show
that $f:(X,\mathcal{F}_{\succ})\to(Y,\mathcal{G}_{\succ})$ is a morphism,
i.e., that $f(\mathcal{F}_{\succ})\supseteq\mathcal{G}_{\succ}$.
Indeed, if $G\in\mathcal{G_{\succ}}$, then $G\supseteq{\bf B}_{\varepsilon}(G')$
for some $G'\in\mathcal{G}$ and $\varepsilon\succ0$. It thus follows
that $f^{-1}(G)\supseteq f^{-1}({\bf B}_{\varepsilon}(G'))\supseteq{\bf B}_{\delta}(f^{-1}(G'))$
for a suitable $\delta\succ0$. Since $f^{-1}(G')\in\mathcal{F}$
we conclude that $f^{-1}(G)\in\mathcal{F}_{\succ}$. The claim about
the image of the functor is Corollary~\ref{cor:problematicSolved}.
\end{proof}
Note that generally speaking $\mathcal{G}\supseteq\mathcal{G}_{\succ}$
but strict inclusion may hold even if $\mathcal{G}$ is already round.
The fact that $\mathcal{G}_{\succ}=\mathcal{G}$ when $\mathcal{G}$ is both Cauchy and round
(by minimality) is crucial in the following proof.
\begin{prop}
The functor $(-)_{\succ}:{\bf rFil_{V}}\to{\bf rRFil_{V}}$ restricts
to a functor $(-)_{\succ}:{\bf rCFil_{V}\to rCRFil_{V}}$. This functor
is left adjoint to the inclusion functor ${\bf rCRFil_{V}\to{\bf rCFil_{V}}}.$\end{prop}
\begin{proof}
The claim about the restriction landing in Cauchy filters is Proposition~\ref{prop:Cauchy implies cauchy}.
To establish that $(-)_{\succ}$ is left adjoint to the inclusion,
we need to show for a Cauchy and round filter $\mathcal{G}$ on $Y$
and an arbitrary Cauchy filter $\mathcal{F}$ on $X$, that $f:(X,\mathcal{F}_{\succ})\to(Y,\mathcal{G})$
is a morphism, i.e., that $f(\mathcal{F}_{\succ})\supseteq\mathcal{G}$,
if, and only if, $f:(X,\mathcal{F})\to(Y,\mathcal{G})$ is a morphism,
i.e., $f(\mathcal{F})\supseteq\mathcal{G}$. Since $\mathcal{F}\supseteq\mathcal{F}_{\succ}$,
it follows that $f(\mathcal{F})\supseteq f(\mathcal{F}_{\succ})$,
and thus one of the implications is trivial. Assume now that $f(\mathcal{F})\supseteq\mathcal{G}$,
and we need to show that $f(\mathcal{F}_{\succ})\supseteq\mathcal{G}$.
Let $G\in\mathcal{G}$. As $\mathcal{G}$ is Cauchy and round, thus
minimal Cauchy, we have that $\mathcal{G}_{\succ}=\mathcal{G}$, and
so there exists some $G'\in\mathcal{G}$ and $\varepsilon\succ0$
with $G\supseteq{\bf B}_{\varepsilon}(G')$. Then $f^{-1}(G)\supseteq f^{-1}({\bf B}_{\varepsilon}(G'))\supseteq{\bf B}_{\delta}(f^{-1}(G'))$
for some $\delta\succ0$, and since $f^{-1}(G')\in\mathcal{F}$, we
conclude that $f^{-1}(G)\in\mathcal{F}_{\succ}$.
\end{proof}
This concludes the description of the functors in the diagram above.
We now turn to the details of the completion construction. For $x\in X$
let $\mathcal{F}_{x}$ be the filter generated by the filter base
$\mathcal{B}_{x}=\{{\bf B}_{\varepsilon}(x)\mid\varepsilon\succ0\}$,
which is clearly Cauchy. The dual construction is the filter $\mathcal{F}^{x}$
generated by the filter base $\mathcal{B}^{x}=\{{\bf B}^{\varepsilon}(x)\mid\varepsilon\succ0\}$.
For general $V$-spaces, a filter may be Cauchy without being op-Cauchy
and $\mathcal{F}_{x}=\mathcal{F}^{x}$ need not hold.
\begin{prop}
\label{prop:compiffopcomp}If $X$ has vanishing asymmetry, then $\mathcal{F}_{x}=\mathcal{F}^{x}$
for all $x\in X$. If $X$ has uniformly vanishing asymmetry, then
a filter $\mathcal{F}$ is Cauchy if, and only if, it is op-Cauchy.
Consequently, $X$ is Cauchy complete if, and only if, it is op-Cauchy
complete.\end{prop}
\begin{proof}
To show that $\mathcal{F}_{x}=\mathcal{F}^{x}$ it suffices to argue
on basis elements. If $X$ has vanishing asymmetry, then given ${\bf B}_{\varepsilon}(x)\in\mathcal{B}_{x}$
let $\delta\succ0$ be such that ${\bf B}^{\delta}(x)\subseteq{\bf B}_{\varepsilon}(x)$,
which thus shows that ${\bf B}_{\varepsilon}(x)\in\mathcal{F}^{x}$ ,
and so $\mathcal{F}_{x}\subseteq\mathcal{F}^{x}$. The reverse inequality
follows similarly. Suppose now that $X$ has uniformly vanishing asymmetry
and that $\mathcal{F}$ is Cauchy. Given $\varepsilon\succ0$, let
$\delta\succ0$ be a corresponding modulus of uniform symmetry. There
is then $x\in X$ with ${\bf B}_{\delta}(x)\in\mathcal{F}$, and since
${\bf B}_{\delta}(x)\subseteq{\bf B}^{\varepsilon}(x)$, it follows that
${\bf B}^{\varepsilon}(x)\in\mathcal{F}$, and so $\mathcal{F}$ is op-Cauchy.
The reverse implication is similar. The last assertion in the proposition
follows since $\mathcal{F}\to x$ is equivalent to $\mathcal{F}_{x}\subseteq\mathcal{F}$,
and $\mathcal{F}\to^{op}x$ is equivalent to $\mathcal{F}^{x}\subseteq\mathcal{F}$. \end{proof}
\begin{prop}
If $X$ has uniformly vanishing asymmetry, then $\mathcal{F}_{x}$
is round. \end{prop}
\begin{proof}
Given ${\bf B}_{\varepsilon}(x)\in\mathcal{F}_{x}$, let $\delta_{1}\succ0$
with $2\cdot\delta_{1}\le\varepsilon$ and let $\delta_{2}\succ0$
be a uniform modulus of symmetry for $\delta_{1}$. Set $\delta=\delta_{1}\wedge\delta_{2}$
and suppose ${\bf B}_{\delta}(y)\in\mathcal{F}_{x}$. Then clearly
$x\in{\bf B}_{\delta}(y)$, thus $d(y,x)\le\delta$, implying that
$d(x,y)\le\delta_{1}$. Now, to show that ${\bf B}_{\delta}(y)\subseteq{\bf B}_{\varepsilon}(x)$,
notice that if $z\in{\bf B}_{\delta}(y)$, then $d(x,z)\le d(x,y)+d(y,z)\le\delta_{1}+\delta_{1}\le\varepsilon$,
and so $z\in{\bf B}_{\varepsilon}(x)$.
\end{proof}
For subsets $S,T\subseteq X$, let $d(S,T)=\bigwedge_{s\in S,t\in T}d(s,t)$,
and for collections $\mathcal{E}_{1},\mathcal{E}_{2}\subseteq\mathcal{P}(X)$,
let $d(\mathcal{E}_{1},\mathcal{E}_{2})=\bigvee_{S\in\mathcal{E}_{1},T\in\mathcal{E}_{2}}d(S,T)$,
giving rise to a function $d:\mathcal{P}(\mathcal{P}(X))\times\mathcal{P}(\mathcal{P}(X))\to V$.
Let $\tilde{X}\subseteq\mathcal{P}(\mathcal{P}(X))$ be the set of
all proper (i.e., $\mathcal{P}(X)$ is excluded) Cauchy filters on
$X$ and let $\hat{X}\subseteq\tilde{X}$ be the set of all minimal
Cauchy filters. It is easy to see that if $\mathcal{B}_{\mathcal{F}}$
and $\mathcal{B}_{\mathcal{G}}$ are filter bases for $\mathcal{F}$
and $\mathcal{G}$ respectively, then $d(\mathcal{F},\mathcal{G})=d(\mathcal{B}_{\mathcal{F}},\mathcal{B}_{\mathcal{G}})$.
(Alternatively, notice that $\mathcal{F}_{x}$ is $(\mathcal{P}_{x})_{\succ}$,
where $\mathcal{P}_{x}$ is the principal filter on $x$.)
The following computation is convenient to record for the proofs below.
\begin{prop}
\label{prop:conv}Suppose that $S\subseteq{\bf B}^{\delta}(x)$ and
$T\subseteq{\bf B}_{\varepsilon}(y)$ and $S, T \ne\emptyset$.
Then $d(S,T)\le d(x,y)+\delta+\varepsilon$ and $d(y,x)\le d(T,S)+\delta+\varepsilon$.\end{prop}
\begin{proof}
Let $s\in S$ and $t\in T$ be arbitrary. Then $d(S,T)\le d(s,t)\le d(s,x)+d(x,y)+d(y,t)\le\delta+d(x,y)+\varepsilon$,
which is the first inequality. By the distributivity law in $V$,
the second inequality will follow by showing that $d(y,x)\le d(t,s)+\delta+\varepsilon$
for all $s\in S$ and $t\in T$. Indeed, $d(y,x)\le d(y,t)+d(t,s)+d(s,x)\le\varepsilon+d(t,s)+\delta$. \end{proof}
\begin{lem}
If $X$ has uniformly vanishing asymmetry, then $(\tilde{X},d)$ is
a $V$-space, which itself has uniformly vanishing asymmetry. \end{lem}
\begin{proof}
$d(\mathcal{F},\mathcal{F})=0$ since for all $F,F'\in\mathcal{F}$,
$F\cap F'\ne\emptyset$. To establish that $d(\mathcal{F},\mathcal{H})\le d(\mathcal{F},\mathcal{G})+d(\mathcal{G},\mathcal{H})$
it suffices to show, for fixed $\varepsilon\succ0$, $F\in\mathcal{F}$,
and $H\in\mathcal{H}$, that there exists $G\in\mathcal{G}$ such
that $d(F,H)\le d(F,G)+d(G,H)+\varepsilon$. Let $\delta\succ0$ be
such that $2\cdot\delta\le\varepsilon$ and let $\delta'\succ0$ be
a uniform modulus of symmetry for $\delta$, and set $\eta=\delta\wedge\delta'$.
As $\mathcal{G}$ is Cauchy, there is $x\in X$ such that $G={\bf B}_{\eta}(x)\in\mathcal{G}$.
And then
\begin{eqnarray*}
d(F,G)+d(G,H)+\varepsilon & \ge & \bigwedge_{f\in F,y,z\in G,h\in H}d(f,y)+d(z,h)+2\cdot\delta\\
& \ge & \bigwedge_{f\in F,y,z\in G,h\in H}d(f,y)+d(y,x)+d(x,z)+d(z,h)\\
& \ge & \bigwedge_{f\in F,h\in H}d(f,h)=d(F,H)
\end{eqnarray*}
as required for showing that $\tilde{X}$ is a $V$-space.
To show that $\tilde{X}$ has uniformly vanishing asymmetry, let $\varepsilon\succ0$
be given and let $\eta\succ0$ with $2\cdot\eta\le\varepsilon$. Let
$\delta_{1}\succ0$ be a uniform modulus of symmetry for $\eta$, and
let $\delta\succ0$ with $2\cdot\delta\le\delta_{1}$ and $\delta\le\eta$. Suppose that $d(\mathcal{G},\mathcal{F})\le\delta$,
which means that $d(G,F)\le\delta$ for all $F\in\mathcal{F}$ and
$G\in\mathcal{G}$. To show that $d(\mathcal{F},\mathcal{G})\le\varepsilon$
it suffices to show that $d(F_{0},G_{0})\le\varepsilon$ for fixed
$F_{0}\in\mathcal{F}$ and $G_{0}\in\mathcal{G}$. Since $\mathcal{F}$
is Cauchy (and thus op-Cauchy) and since $\mathcal{G}$ is Cauchy,
there exist $x,y\in X$ with ${\bf B}^{\delta'}(x)\in\mathcal{F}$
and ${\bf B}_{\delta'}(y)\in\mathcal{G}$, where $\delta'\succ0$
satisfies $2\cdot\delta'\le\delta$. Let $S=F_{0}\cap{\bf B}^{\delta'}(x)$
and $T=G_{0}\cap{\bf B}_{\delta'}(y)$. Then, using Proposition~\ref{prop:conv}
(here and in the following computation), $d(y,x)\le d(T,S)+\delta\le2\cdot\delta\le\delta_{1}$
and thus $d(x,y)\le\eta$. Finally, $d(F_{0},G_{0})\le d(S,T)\le d(x,y)+\delta\le d(x,y)+\eta\le2\cdot\eta\le\varepsilon$,
as required.
\end{proof}
For any $V$-space, setting $x\sim y$ whenever $d(x,y)=d(y,x)=0$
is an equivalence relation, and $X_{0}$, the set of equivalence classes,
becomes a separated $V$-space where the distance function is given
by $d([x],[y])=d(x,y)$. We note that if $X$ has vanishing asymmetry,
then $d(x,y)=0$ implies $d(y,x)=0$ and if $X$ is also separated,
then $\mathcal{O}(X)$ is Hausdorff. In particular, the following
result (whose proof is immediate and thus omitted), implies that if
$X$ has vanishing asymmetry, then $X_{0}$ is Hausdorff.
\begin{prop}
If $X$ has (uniformly) vanishing asymmetry, then so does $X_{0}$.\end{prop}
\begin{thm}
If $X$ has uniformly vanishing asymmetry, then $\hat{X}$ is isometric
to $\tilde{X}_{0}$.\end{thm}
\begin{proof}
For any two Cauchy filters $\mathcal{F},\mathcal{G}$ on $X$, their
intersection is again a filter but it need not be Cauchy. However,
if $d(\mathcal{F},\mathcal{G})=0$, then $\mathcal{F}\cap\mathcal{G}$
is Cauchy. Indeed, let $\varepsilon\succ0$ and let $\delta_{1}\succ0$
with $2\cdot\delta_{1}\le\varepsilon$. Let $\delta_{2}\succ0$ be
a uniform modulus of symmetry for $\delta_{1}$ with $\delta_{2}\le\delta_{1}$ (replacing $\delta_{2}$ by $\delta_{1}\wedge\delta_{2}$ if needed), and let $\delta_{3}\succ0$
satisfy $2\cdot\delta_{3}\le\delta_{2}$. Set $\delta=\delta_{2}\wedge\delta_{3}$.
There exists $x\in X$ with ${\bf B}_{\delta}(x)\in\mathcal{F}$ and
$y\in X$ with ${\bf B}^{\delta}(y)\in\mathcal{G}$, and since $d(\mathcal{F},\mathcal{G})=0$
it follows that $d({\bf B}_{\delta}(x),{\bf B}^{\delta}(y))=0$, and
thus that $d(x,y)\le2\cdot\delta\le\delta_{1}$. Now, $d(y,s)\le\delta_{1}$
for all $s\in{\bf B}^{\delta}(y)$ and so $d(x,s)\le d(x,y)+d(y,s)\le2\cdot\delta_{1}\le\varepsilon$,
leading to ${\bf B}^{\delta}(y)\subseteq{\bf B}_{\varepsilon}(x)$. It
thus follows that ${\bf B}_{\varepsilon}(x)\in\mathcal{G}$, which establishes
that $\mathcal{F}\cap\mathcal{G}$ is Cauchy.
It now follows that each equivalence class $[\mathcal{F}]$ contains
a unique minimal Cauchy representative. Indeed, it is easily seen
that $d(\mathcal{F},\mathcal{F}_{\succ})=0$ so that $\mathcal{F}_{\succ}\in[\mathcal{F}]$.
If $\mathcal{F}_{1},\mathcal{F}_{2}$ are two minimal Cauchy filters
with $\mathcal{F}_{1}\sim\mathcal{F}_{2}$, then $\mathcal{F}_{1}\cap\mathcal{F}_{2}$
is Cauchy so that minimality forces $\mathcal{F}_{1}=\mathcal{F}_{2}$.
The bijective isometry $\tilde{X}_{0}\to\hat{X}$ is thus given by
$[\mathcal{F}]\mapsto\mathcal{F}_{\succ}$. \end{proof}
\begin{cor}
If $X$ has uniformly vanishing asymmetry, then $\hat{X}$ is a separated
$V$-space with uniformly vanishing asymmetry.
\end{cor}
Recall that when $X$ has uniformly vanishing asymmetry all the
filters $\mathcal{F}_{x}$ are round (and clearly Cauchy). We then
obtain the function $\iota:X\to\hat{X}$, given by $\iota(x)=\mathcal{F}_{x}$,
called the \emph{canonical embedding} (even though it is injective
if, and only if, $X$ is separated).
\begin{lem}
\label{lem:canonicalEmbedIsIsom}If $X$ has uniformly vanishing asymmetry,
then the canonical embedding $\iota:X\to\hat{X}$ is an isometry. \end{lem}
\begin{proof}
Clearly, $d({\bf B}_{\varepsilon}(x),{\bf B}_{\delta}(y))\le d(x,y)$,
thus $d(\mathcal{B}_{x},\mathcal{B}_{y})\le d(x,y)$, and therefore
$d(\mathcal{F}_{x},\mathcal{F}_{y})\le d(x,y)$. For the other direction,
we will use the fact that $\mathcal{F}_{y}=\mathcal{F}^{y}$ (cf.
Proposition~\ref{prop:compiffopcomp}), so it suffices to show that
$d(x,y)\le d(\mathcal{B}_{x},\mathcal{B}^{y})$. To that end, let
$\rho\succ0$, and $\eta\succ0$ with $2\cdot\eta\le\rho$. Since in general $d(x,y)-\varepsilon-\delta\le d({\bf B}_{\varepsilon}(x),{\bf B}^{\delta}(y))$ we have $d(\mathcal{B}_{x},\mathcal{B}^{y})\ge d({\bf B}_{\eta}(x),{\bf B}^{\eta}(y))\ge(d(x,y)-\eta)-\eta=d(x,y)-(\eta+\eta)\ge d(x,y)-\rho$.
Thus, $d(x,y)\le d(\mathcal{B}_{x},\mathcal{B}^{y})+\rho$, and as
$\rho\succ0$ is arbitrary, the desired inequality follows. \end{proof}
\begin{cor}
If $X$ is separated and has uniformly vanishing asymmetry, then the
canonical embedding $\iota:X\to\hat{X}$ is injective. \end{cor}
\begin{lem}
\label{lem:CanonEmbIsDense}If $X$ has uniformly vanishing asymmetry,
then the image $\iota(X)$ is dense in $\hat{X}$. \end{lem}
\begin{proof}
Fix $\mathcal{G}\in\hat{X}$ and $\varepsilon\succ0$. Let $\delta\succ0$
be a uniform modulus of symmetry for $\varepsilon$, and since $\mathcal{G}$
is Cauchy we may find $x\in X$ with ${\bf B}_{\delta}(x)\in\mathcal{G}$.
To show that $d(\mathcal{G},\mathcal{F}_{x})\le\varepsilon$ it suffices
to show that $d(G,{\bf B}_{\eta}(x))\le\varepsilon$ for all $\eta\succ0$
and $G\in\mathcal{G}$. Let $y\in G\cap{\bf B}_{\delta}(x)$. Then
$d(x,y)\le\delta$ implies $d(G,{\bf B}_{\eta}(x))\le d(G\cap{\bf B}_{\delta}(x),x)\le d(y,x)\le\varepsilon$.\end{proof}
\begin{thm}
\label{thm:IsComplete}If $X$ has uniformly vanishing asymmetry,
then $\hat{X}$ is Cauchy complete. \end{thm}
\begin{proof}
It suffices to show that every proper Cauchy filter on $\hat{X}$ converges
to a minimal Cauchy filter on $X$. Let $\mathbb{A}$ be a Cauchy
filter on $\hat{X}$ and $\varepsilon\succ0$. Then there is a minimal
Cauchy filter $\mathcal{L}\in\hat{X}$ such that $\mathbf{B}_{\varepsilon}(\mathcal{L})=\{\mathcal{G}\in\hat{X}\mid d(\mathcal{L},\mathcal{G})\le\varepsilon\}$
is in $\mathbb{A}$. It is straightforward to verify that $\mathcal{F}=\{F\subseteq X\mid\exists A\in\mathbb{A},F\in\bigcap A\}$
is a filter. Next, to show that $\mathcal{F}$ is Cauchy, let $\varepsilon\succ0$
and $\delta_{1}\succ0$ with $4\cdot\delta_{1}\le\varepsilon$.
Further, let $\delta_{2}\succ0$ be a uniform modulus of symmetry for $\delta_{1}$,
and note that $\delta_{3}=\delta_{1}\wedge\delta_{2}\succ0$. Since $\mathbb{A}$ is Cauchy,
there is $\mathcal{L}\in\hat{X}$ (a minimal Cauchy filter on $X$) such that $\mathbf{B}_{\delta_{3}}(\mathcal{L})\in\mathbb{A}$,
and since $\mathcal{L}$ is Cauchy, there is $x\in X$ such that $\mathbf{B}_{\delta_{1}}(x)\in\mathcal{L}$.
Fix $\mathcal{G}\in\mathbf{B}_{\delta_{3}}(\mathcal{L})$.
Since $\mathcal{G}$ is Cauchy, there is $y\in X$ such that $\mathbf{B}_{\delta_{3}}(y)\in\mathcal{G}$.
For $a\in\mathbf{B}_{\delta_{3}}(y)$, we have $d(x,a)\le d(x,y)+d(y,a)\le d(\mathbf{B}_{\delta_{1}}(x),\mathbf{B}_{\delta_{3}}(y))+2\cdot\delta_{1}+\delta_{3}\le4\cdot\delta_{1}\le\varepsilon$, thus $\mathbf{B}_{\delta_{3}}(y)\subseteq\mathbf{B}_{\varepsilon}(x)$,
which implies $\mathbf{B}_{\varepsilon}(x)\in\mathcal{G}$. Since $\mathcal{G}\in\mathbf{B}_{\delta_{3}}(\mathcal{L})$
is arbitrary, $\mathbf{B}_{\varepsilon}(x)\in\bigcap\mathbf{B}_{\delta_{3}}(\mathcal{L})$,
thus $\mathbf{B}_{\varepsilon}(x)\in\mathcal{F}$.
Finally, we show that $\mathbb{A}$ converges to the minimal Cauchy
filter $\mathcal{F}_{\succ}$. Let $\varepsilon\succ0$ and $\delta_{1}\succ0$
with $2\cdot\delta_{1}\le\varepsilon$, and further let $\delta_{2}\succ0$ be a
uniform modulus of symmetry for $\delta_{1}$.
Note that $\delta = \delta_{1}\wedge\delta_{2}\succ 0$.
There is $\mathcal{L}\in\hat{X}$
such that $\mathbf{B}_{\delta}(\mathcal{L})\in\mathbb{A}$. Then it
suffices to show that $\mathbf{B}_{\delta}(\mathcal{L})\subseteq\mathbf{B}_{\varepsilon}(\mathcal{F}_{\succ})$.
Let $\mathcal{M}\in\mathbf{B}_{\delta}(\mathcal{L})$, $L_{0}\in\mathcal{L}$
and $F_{0}\in\mathcal{F}$. This means that there is $A\in\mathbb{A}$
such that $F_{0}\in\bigcap A$. Since $\mathbb{A}$ is a proper filter, $\mathbf{B}_{\delta}(\mathcal{L})\cap A\neq\emptyset$
and $d(\mathcal{L},\mathcal{G})\le\delta\le\delta_{2}$
implies $d(\mathcal{G},\mathcal{L})\le\delta_{1}$ for every $\mathcal{G}\in\mathbf{B}_{\delta}(\mathcal{L})\cap A$.
Since $F_{0}$ is also in $\mathcal{G}$, $d(F_{0},L_{0})\le\bigvee_{G\in\mathcal{G},L\in\mathcal{L}}d(G,L)=d(\mathcal{G},\mathcal{L})\le\delta_{1}$,
thus $d(F_{0},L_{0})\le\delta_{1}$. Since $F_{0}\in\mathcal{F}$
and $L_{0}\in\mathcal{L}$ are arbitrary, we obtain $d(\mathcal{F},\mathcal{L})\le\delta_{1}$.
Then $d(\mathcal{F}_{\succ},\mathcal{L})\le d(\mathcal{F}_{\succ},\mathcal{F})+d(\mathcal{F},\mathcal{L})\le0+\delta_{1}$
which implies that $d(\mathcal{F}_{\succ},\mathcal{M})\le d(\mathcal{F}_{\succ},\mathcal{L})+d(\mathcal{L},\mathcal{M})\le\delta_{1}+\delta\le\delta_{1}+\delta_{1}\le\varepsilon$,
thus $\mathcal{M}\in\mathbf{B}_{\varepsilon}(\mathcal{F}_{\succ})$.
Since $\mathcal{M}\in\mathbf{B}_{\delta}(\mathcal{L})$ is arbitrary,
$\mathbf{B}_{\delta}(\mathcal{L})\subseteq\mathbf{B}_{\varepsilon}(\mathcal{F}_{\succ})$
and since $\varepsilon\succ0$ is arbitrary, it follows that $\mathbb{A}$
converges to $\mathcal{F}_{\succ}$.
\end{proof}
Obviously, the construction $X\mapsto\hat{X}$ is functorial. The
following two corollaries follow by standard arguments from Lemma~\ref{lem:canonicalEmbedIsIsom},
Lemma~\ref{lem:CanonEmbIsDense}, and Theorem~\ref{thm:IsComplete}:
\begin{cor}
The universal property
\[
\xymatrix{ & X\ar[dr]^{f}\ar[dl]_{\iota}\\
\hat{X}\ar[rr]_{F} & & Y
}
\]
stating that for any Cauchy complete $V$-space $Y$ with uniformly
vanishing asymmetry and any uniformly continuous function $f$ there
exists a unique uniformly continuous extension $F$, holds for all
separated $V$-spaces $X$ with uniformly vanishing asymmetry.
\end{cor}
\begin{cor}
Every separated $V$-space $X$ with uniformly vanishing asymmetry
has a completion, unique up to a unique isomorphism.
\end{cor}
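A simple special case may help to put these results in context (only the notions already introduced above are used): if $d(x,y)=d(y,x)$ for all $x,y\in X$, then for every $\varepsilon\succ0$ the element $\delta=\varepsilon$ serves as a uniform modulus of symmetry, since
\[
d(y,x)\le\varepsilon\quad\mbox{implies}\quad d(x,y)=d(y,x)\le\varepsilon.
\]
In particular, every symmetric $V$-space has uniformly vanishing asymmetry, so the completion constructed above applies to it; for an ordinary (symmetric) metric space it reduces to the familiar completion by minimal Cauchy filters.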
Relating back to the categorical point-of-view, i.e., to the $2$-functor
${\bf QMet_{-}}:\mathbb{V}\to{\bf Cat}$ from Section~\ref{sub:catPers},
the constructions above may be summarized as follows. Consider the
obvious 2-functors ${\bf sMet_{-}^{uva}},{\bf scMet_{-}^{uva}}:\mathbb{V}\to{\bf Cat}$
mapping $V$ to ${\bf sMet_{V}^{uva}}$ and to ${\bf scMet_{V}^{uva}}$
(the categories of separated $V$-spaces with uniformly vanishing
asymmetry and of complete separated $V$-spaces with uniformly vanishing
asymmetry), respectively. The completion functor ${\bf sMet_{V}^{uva}}\to{\bf scMet_{V}^{uva}}$
defines a $2$-natural transformation $\hat{-}:{\bf sMet_{-}^{uva}}\to{\bf scMet_{-}^{uva}}$.
| {
"timestamp": "2014-08-19T02:13:44",
"yymm": "1408",
"arxiv_id": "1408.3887",
"language": "en",
"url": "https://arxiv.org/abs/1408.3887",
"abstract": "The classical Cauchy completion of a metric space (by means of Cauchy sequences) as well as the completion of a uniform space (by means of Cauchy filters) are well-known to rely on the symmetry of the metric space or uniform space in question. For qausi-metric spaces and quasi-uniform spaces various non-equivalent completions exist, often defined on a certain subcategory of spaces that satisfy a key property required for the particular completion to exist. The classical filter completion of a uniform space can be adapted to yield a filter completion of a metric space. We show that this completion by filters generalizes to continuity spaces that satisfy a form of symmetry which we call uniformly vanishing asymmetry.",
"subjects": "General Topology (math.GN)",
"title": "Completion of continuity spaces with uniformly vanishing asymmetry",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126500692717,
"lm_q2_score": 0.7248702702332475,
"lm_q1q2_score": 0.7094397031364107
} |
https://arxiv.org/abs/1110.5672 | Kazhdan--Lusztig cells and the Frobenius--Schur indicator | Let $W$ be a finite Coxeter group. It is well-known that the number of involutions in $W$ is equal to the sum of the degrees of the irreducible characters of $W$. Following a suggestion of Lusztig, we show that this equality is compatible with the decomposition of $W$ into Kazhdan--Lusztig cells. The proof uses a generalisation of the Frobenius--Schur indicator to symmetric algebras, which may be of independent interest. | \section{Introduction} \label{sec0}
Let $G$ be a finite group and assume that all complex irreducible
characters of $G$ can be realised over the real numbers. Then, by
a well-known result due to Frobenius and Schur, the number of
involutions in $G$ (that is, elements $g \in G$ such that $g^2=1$) is
equal to the sum of the degrees of the irreducible characters of $G$.
In this note, we consider the case where $G=W$ is a finite Coxeter group.
Following a suggestion of Lusztig, we show that the above equality is
compatible with the decomposition of $W$ into cells, as defined by
Kazhdan and Lusztig \cite{KaLu} (in the equal parameter case) and by
Lusztig \cite{Lusztig83} (in general). The proof relies on two basic
ingredients. The first consists of establishing a suitable generalisation
of the ``Frobenius--Schur indicator'' to symmetric algebras. This will be
done in Section~\ref{sec1}, and may be of independent interest. The
second ingredient is the theory around Lusztig's ring $J$ (originally
introduced in \cite{Lu2}) or, rather, its more elementary version
constructed in \cite{my02}; see Section~\ref{sec2}.
To state the main result, let us fix some notation. Let $S$ be a set
of simple reflections in $W$. Let $\{c_s \mid s \in S\} \subseteq
{\mathbb{Z}}_{\geq 0}$ be a set of ``weights'' where $c_s=c_{s'}$ whenever $s,
s' \in S$ are conjugate in $W$. This gives rise to a weight function
$L \colon W \rightarrow {\mathbb{Z}}$ in the sense of Lusztig \cite{Lusztig03};
for $w \in W$, we have $L(w)=c_{s_1}+\ldots + c_{s_k}$ where $w=
s_1\cdots s_k$ ($s_i \in S$) is a reduced expression for~$w$. (The
original setup in \cite{KaLu} corresponds to the case where $c_s=1$
for all $s \in S$.) Using the Kazhdan--Lusztig basis of the generic
Iwahori--Hecke algebra associated with $W,L$, one can define partitions
of $W$ into left, right and two-sided cells. For any such left cell
$\Gamma$ of $W$, we have a corresponding left $W$-module $[\Gamma]_1$
with a standard basis indexed by the elements of $\Gamma$; see \cite{KaLu}
(equal parameter case) or \cite{Lusztig83} (in general).
\begin{thm} \label{propleft} The number of involutions in a left cell
$\Gamma$ is equal to the number of terms in a decomposition of $[\Gamma]_1$
as a direct sum of simple $W$-modules.
\end{thm}
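To fix ideas, here is the smallest non-trivial example (equal parameter case, $W$ of type $A_2$, that is, the symmetric group of degree $3$ with $S=\{s_1,s_2\}$); the description of the cells recalled here is standard and is included only for illustration. The left cells are $\{1\}$, $\{w_0\}$ and two cells with two elements each, where each two-element cell contains exactly one of the involutions $s_1,s_2$ together with one of the two elements of order~$3$. The corresponding modules $[\Gamma]_1$ are, in some order, the two one-dimensional representations of $W$ and, twice, the two-dimensional irreducible representation; in particular, each $[\Gamma]_1$ is simple. Thus both sides of the equality in the theorem are equal to $1$ for every left cell, in accordance with the fact that $W$ has exactly $4=1+1+2$ involutions (including the identity).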
For $W$ of classical type and the equal parameter case, the above result
(in a somewhat more precise form, see Example~\ref{exp1} below) was first
obtained by Lusztig \cite[12.17]{LuBook}, using the representation theory
of a finite reductive group with Weyl group $W$. Our proof works uniformly
for all $W,L$ (including $W$ of non-crystallographic type). In
Corollary~\ref{proptwo}, we also obtain a similar result for two-sided
cells. Along the way, we establish some properties of left cell modules
which previously were only known to hold in the equal parameter case;
see Corollaries~\ref{cor32} and \ref{cor30}.
\section{Symmetric algebras and the Frobenius--Schur indicator} \label{sec1}
Let $K$ be a field of characteristic $0$ and ${\mathcal{H}}$ be a finite-dimensional
associative $K$-algebra (with $1$). We assume that ${\mathcal{H}}$ is split semisimple
and symmetric, with trace form $\tau \colon {\mathcal{H}} \rightarrow K$.
Let ${\operatorname{Irr}}({\mathcal{H}})$ be the set of simple ${\mathcal{H}}$-modules (up to isomorphism).
For $E \in {\operatorname{Irr}}({\mathcal{H}})$, let $\chi_E \colon {\mathcal{H}} \rightarrow K$ be the
corresponding character, $\chi_E(h)=\mbox{trace}(h,E)$ for all $h \in {\mathcal{H}}$.
We have
\[ \tau=\sum_{E \in {\operatorname{Irr}}({\mathcal{H}})} c_E^{-1} \, \chi_E\]
where each $c_E$ is a certain non-zero element of $K$, called the
{\em Schur element} associated with $E$. (We refer to \cite[Chap.~7]{gepf}
for basic facts about symmetric algebras.)
We shall further assume that there is a $K$-linear anti-involution
\[\dagger\colon {\mathcal{H}} \rightarrow {\mathcal{H}}, \qquad h \mapsto h^\dagger.\]
This allows us to define, for any finite-dimensional (left) ${\mathcal{H}}$-module $M$,
a corresponding {\em contragredient} module $\hat{M}$. As a $K$-vector
space, we have $\hat{M}= {\operatorname{Hom}}_K(M,K)$; the action of $h\in{\mathcal{H}}$ on
$f\in\hat{M}$ is determined by $(h.f)(m)=f(h^\dagger.m)$ for all $m \in M$.
\begin{defn} \label{rem2} Let $M$ be a finite-dimensional (left) ${\mathcal{H}}$-module.
We shall say that a bilinear map $(\;,\;) \colon M \times M \rightarrow K$
is ${\mathcal{H}}$-invariant if
\[(h.m,m')=(m,h^\dagger.m')\qquad\mbox{for all $h\in{\mathcal{H}}$ and $m,m'\in M$}.\]
Via the isomorphism ${\operatorname{Hom}}_K(M,K) \otimes_K M \cong {\operatorname{Hom}}_K(M,M)$
(and an identification of $M$ with ${\operatorname{Hom}}_K(M,K)$ using dual bases),
an ${\mathcal{H}}$-invariant bilinear form on $M$ can also be interpreted as an
${\mathcal{H}}$-module homomorphism $\hat{M} \rightarrow M$, and vice versa.
In particular, for $E \in {\operatorname{Irr}}({\mathcal{H}})$, we have $E \cong \hat{E}$ if and
only if there exists a non-degenerate ${\mathcal{H}}$-invariant bilinear form on $E$;
also note that a non-zero ${\mathcal{H}}$-invariant bilinear form on $E$ is
automatically non-degenerate (by Schur's Lemma).
\end{defn}
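As an illustration of this definition in the group algebra case considered below (where $\dagger$ is determined by $g^\dagger=g^{-1}$ for $g\in G$), an ${\mathcal{H}}$-invariant bilinear map on $M$ is exactly a $G$-invariant one, since the condition $(g.m,m')=(m,g^{-1}.m')$ for all $g\in G$ can be rewritten as
\[ (g.m,g.m')=(m,m')\qquad\mbox{for all $g\in G$ and $m,m'\in M$}.\]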
Given any basis $B$ of ${\mathcal{H}}$, we denote by
$B^\vee=\{b^\vee \mid b \in B\}$ the corresponding dual basis, that is, we
have
\[ \tau(b'b^\vee)=\left\{\begin{array}{cl} 1 & \quad \mbox{if $b=b'$},\\
0 & \quad \mbox{otherwise}.\end{array}\right.\]
\begin{defn} \label{def1} Let $B_0$ be a basis of ${\mathcal{H}}$. We say that
$B_0$ is {\em $\dagger$-symmetric} if $b^\dagger=b^\vee$ for all $b \in B_0$.
\end{defn}
The standard example is the case where ${\mathcal{H}}=K[G]$ is the group algebra
of a finite group $G$ over $K={\mathbb{C}}$ and $\tau$ is the trace form defined
by $\tau(1)=1$ and $\tau(g)=0$ for $g \in G$ such that $g \neq 1$. We
have an anti-involution $\dagger\colon {\mathcal{H}} \rightarrow {\mathcal{H}}$ given by
$g^\dagger=g^{-1}$; then $B_0=G$ is a $\dagger$-symmetric basis of
${\mathcal{H}}$. Further examples are provided by the algebra $\tilde{J}$ in
Section~\ref{sec2} and by the ``based rings'' considered by Lusztig
\cite{Lu4}.
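To spell out why $B_0=G$ is $\dagger$-symmetric in the standard example above: for $g,g'\in G$ we have $\tau(g'g^{-1})=1$ if $g'=g$ and $\tau(g'g^{-1})=0$ otherwise, so that
\[ g^\vee=g^{-1}=g^\dagger\qquad\mbox{for all $g\in G$}.\]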
\begin{rem} \label{rem1} Assume that there exists a $\dagger$-symmetric
basis $B_0$. This implies that
\begin{equation*}
\tau(h)=\tau(h^\dagger) \qquad \mbox{for all $h \in {\mathcal{H}}$}.\tag{a}
\end{equation*}
Indeed, write the identity element of ${\mathcal{H}}$ as $1_{{\mathcal{H}}}=\sum_{b\in
B_0} \alpha_b\, b$ where $\alpha_b \in K$ for all $b \in B_0$. Then
a straightforward computation shows that $\tau(b^\dagger)=\tau(1_{{\mathcal{H}}}
b^\vee)=\alpha_b$ for all $b \in B_0$. Now, we certainly have $1_{{\mathcal{H}}}=
1_{{\mathcal{H}}}^\dagger=\sum_{b \in B_0} \alpha_b\, b^\vee$. Hence, similarly,
we also obtain $\tau(b)=\tau(1_{{\mathcal{H}}}^\dagger b)=\alpha_b$ for all $b\in
B_0$. Thus, (a) holds. Now let $E \in {\operatorname{Irr}}({\mathcal{H}})$. Then, clearly, we have
\begin{equation*}
\chi_{\hat{E}}(b)=\chi_E(b^\dagger)=\chi_E(b^\vee) \qquad \mbox{for
all $b \in B_0$}.\tag{b}
\end{equation*}
This also implies that $c_E=c_{\hat{E}}$ since
\[ \tau(b)=\tau(b^\dagger)=\sum_{E \in {\operatorname{Irr}}({\mathcal{H}})} c_E^{-1}\,
\chi_E(b^\dagger)= \sum_{E \in {\operatorname{Irr}}({\mathcal{H}})} c_E^{-1}\, \chi_{\hat{E}}(b)
\qquad \mbox{for all $b \in B_0$}.\]
\end{rem}
At first sight, the condition in Definition~\ref{def1} looks rather
strong. But the following remark shows that $\dagger$-symmetric bases
of ${\mathcal{H}}$ always exist under some quite natural assumptions.
\begin{rem} \label{rem0} There exists a $\dagger$-symmetric basis of ${\mathcal{H}}$
if the following two conditions are satisfied:
\begin{itemize}
\item[(a)] $\tau(h)=\tau(h^\dagger)$ for all $h \in {\mathcal{H}}$.
\item[(b)] $K$ is sufficiently large (which means
here: $K$ contains sufficiently many square roots).
\end{itemize}
Indeed, consider the bilinear form ${\mathcal{H}} \times {\mathcal{H}} \rightarrow K$,
$(h,h') \mapsto \tau(h'h^\dagger)$. By (a), this bilinear form is
symmetric; furthermore, one easily sees that it is non-degenerate. Hence,
since $\mbox{char}(K)=0$, there exists an orthogonal basis of ${\mathcal{H}}$
with respect to that form. If now $K$ contains sufficiently many square
roots, then we can rescale the basis elements and obtain an orthonormal
basis of ${\mathcal{H}}$; any such basis is $\dagger$-symmetric.
\end{rem}
We can now state the following two propositions which generalise classical
results concerning the Frobenius--Schur indicator for characters of finite
groups (see, for example, Etingof et al. \cite[\S 5.1]{eti11}) to
symmetric algebras as above.
\begin{prop} \label{prop21} Assume that $B_0$ is a $\dagger$-symmetric
basis of ${\mathcal{H}}$. Let $E \in {\operatorname{Irr}}({\mathcal{H}})$ and define
\[\nu_E:=\frac{1}{c_E\dim E}\sum_{b\in B_0}\chi_E(b^2).\]
Then we have $\nu_E \in \{0,\pm 1\}$; furthermore, the following hold:
\begin{itemize}
\item[(a)] $\nu_E =0$ if and only if $E \not\cong \hat{E}$.
\item[(b)] $\nu_E =1$ if and only if $E \cong \hat{E}$ and there exists a
non-degenerate, symmetric ${\mathcal{H}}$-invariant bilinear form on $E$.
\item[(c)] $\nu_E =-1$ if and only if $E \cong \hat{E}$ and there exists a
non-degenerate, alternating ${\mathcal{H}}$-invariant bilinear form on $E$.
\end{itemize}
(In particular, $\nu_E$ does not depend on the choice of $B_0$.)
\end{prop}
\begin{proof} This very closely follows the original proof of Frobenius
and Schur, as presented by Curtis \cite[Chap. IV, \S 3]{cur99}. We choose
a basis of $E$ and obtain a corresponding matrix representation $\rho
\colon {\mathcal{H}} \rightarrow M_d(K)$ where $d=\dim E$. For $h \in {\mathcal{H}}$ and
$i,j \in \{1,\ldots,d\}$, we denote by $\rho_{ij}(h)$ the
$(i,j)$-coefficient of $\rho(h)$. Taking the dual basis in $\hat{E}$, a
matrix representation afforded by $\hat{E}$ is then given by
$\hat{\rho}(b)=\rho(b^\vee)^\prime$ for all $b \in B_0$, where the prime
denotes the transpose matrix.
Assume first that $E \not\cong \hat{E}$. Then the Schur relations in
\cite[7.2.2]{gepf} yield:
\[ \sum_{b \in B_0} \rho_{ij}(b)\,\hat{\rho}_{kl}(b^\vee)=0 \qquad
\mbox{ for all $i,j,k,l \in \{1,\ldots,d\}$}.\]
Using the above description of $\hat{\rho}$, we conclude that
\[ \sum_{b \in B_0} \rho_{ij}(b)\,\rho_{lk}(b)=0 \qquad
\mbox{ for all $i,j,k,l \in \{1,\ldots,d\}$}.\]
Now let $l=j$ and $k=i$. Then summing over all $i,j$ yields
\[ 0=\sum_{1\leq i,j \leq d} \sum_{b \in B_0} \rho_{ij}(b)\,\rho_{ji}(b)
=\sum_{b \in B_0} \sum_{1 \leq i \leq d} \rho_{ii}(b^2)=\sum_{b \in B_0}
\chi_E(b^2).\]
Thus, we have $\nu_E=0$ in this case, as required.
Now assume that $E\cong \hat{E}$. This means that there exists an
invertible matrix $P \in M_d(K)$ such that
\[P\,\rho(b)=\rho(b^\vee)^\prime\,P\qquad \mbox{for all $b \in B_0$}.\]
A standard argument using Schur's Lemma (see \cite[p.~153]{cur99}) then
shows that $P^\prime=\eta P$ where $\eta=\pm 1$. Note that a similar
statement is true for any matrix $Q \in M_d(K)$ such that $Q\rho(b)=
\rho(b^\vee)^\prime Q$ for all $b \in B_0$. Indeed, by Schur's Lemma, $Q$
will be a scalar multiple of $P$ and so $Q^\prime=\eta Q$, with the same
$\eta$ as before. Now our given $P$ defines a bilinear form $(\;,\;) \colon
E \times E \rightarrow K$; the fact that $P\rho(b)=\rho(b^\vee)^\prime P$
for all $b \in B_0$ means that $(\;,\;)$ is ${\mathcal{H}}$-invariant. Thus, we have
already shown that if $E \cong \hat{E}$, then there exists a non-degenerate
${\mathcal{H}}$-invariant bilinear form on $E$ which is either symmetric or
alternating. (Conversely, if such a bilinear form exists, then $E \cong
\hat{E}$; see Remark~\ref{rem1}.) It remains to see how $\eta$ is determined.
For this purpose, let $U \in M_d(K)$ be any matrix and define
\[ Q_U:=\sum_{b \in B_0} \rho(b)^\prime\, U\,\rho(b)=\sum_{b\in B_0}
\hat{\rho}(b^\vee)\,U \,\rho(b).\]
The second equality shows that $Q_U \rho(b)=\rho(b^\vee)^\prime Q_U$ for
all $b \in B_0$; see \cite[7.1.10]{gepf}. Hence, as we just remarked, we must
have $Q_U^\prime=\eta Q_U$ and so
\[ \sum_{1\leq i,j\leq d}\sum_{b\in B_0} \rho_{il}(b)\,u_{ij}\,\rho_{jk}(b)=
\eta\sum_{1\leq i,j\leq d}\sum_{b\in B_0}\rho_{ik}(b)\,u_{ij}\,\rho_{jl}(b)\]
for all $k,l\in \{1,\ldots,d\}$, where we write $U=(u_{ij})$. Now take
$U$ to be the matrix with coefficient $1$ at position $(k,l)$ and
coefficient $0$, otherwise. Then we obtain
\[ \sum_{b \in B_0} \rho_{kl}(b)\,\rho_{lk}(b)=\eta\sum_{b \in B_0}
\rho_{kk}(b)\,\rho_{ll}(b) \qquad\mbox{for fixed $k,l\in\{1,\ldots,d\}$}.\]
Summing over all $k,l$ yields
\[ \sum_{b \in B_0} \chi_E(b^2)=\eta\sum_{b \in B_0} \chi_E(b)^2.\]
Finally, since $E\cong\hat{E}$, we have $\chi_E(b)=\chi_E(b^\vee)$. Hence,
the right hand side of the above identity equals $\eta\sum_{b \in B_0}
\chi_E(b)\,\chi_E(b^\vee)$ which, by the orthogonality relations for the
irreducible characters of ${\mathcal{H}}$ (see \cite[7.2.4]{gepf}), equals $\eta\,
c_E \dim E$. Thus, $\nu_E= \eta=\pm 1$, as required.
Once the above statements are proved, it follows that for any $E \in
{\operatorname{Irr}}({\mathcal{H}})$ we have $\nu_E\in \{0, \pm 1\}$ and the equivalences in
(a), (b), (c) hold.
\end{proof}
In the standard example where ${\mathcal{H}}={\mathbb{C}}[G]$ for a finite group $G$, we have
$c_E=|G|/\dim E$ for all $E \in {\operatorname{Irr}}({\mathcal{H}})$ (see \cite[7.2.5]{gepf}). Hence,
in this case, the formula for $\nu_E$ in Proposition~\ref{prop21} indeed
is the classical formula for the Frobenius--Schur indicator.
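Explicitly, substituting $c_E=|G|/\dim E$ into the definition of $\nu_E$ gives
\[ \nu_E=\frac{\dim E}{|G|\dim E}\sum_{g\in G}\chi_E(g^2)=\frac{1}{|G|}\sum_{g\in G}\chi_E(g^2),\]
the usual expression for the Frobenius--Schur indicator of $\chi_E$.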
\begin{prop} \label{prop22} Assume that there exists a $\dagger$-symmetric
basis $B_0$ of ${\mathcal{H}}$. Then
\[ \operatorname{trace}(\dagger \colon {\mathcal{H}} \rightarrow {\mathcal{H}})
=\sum_{E \in {\operatorname{Irr}}({\mathcal{H}})} \nu_E\, \dim E.\]
In particular, if $B_0=B_0^\vee$, then
\[ |\{b \in B_0 \mid b^\vee=b\}|=\sum_{E \in {\operatorname{Irr}}({\mathcal{H}})} \nu_E\, \dim E.\]
\end{prop}
\begin{proof} The second equality certainly follows from the first:
under the given assumption on $B_0$, we have $\mbox{trace}(\dagger)=
|\{b \in B_0 \mid b^\vee=b\}|$.
In order to prove the first equality, we compute the trace of $\dagger$
with respect to a basis of ${\mathcal{H}}$ arising from the Wedderburn decomposition.
Let $E \in {\operatorname{Irr}}({\mathcal{H}})$. Choosing a basis of $E$, we obtain a corresponding
matrix representation $\rho \colon {\mathcal{H}} \rightarrow M_d(K)$ where
$d=\dim E$. We set
\[ e_{ij}^E=\frac{1}{c_E}\sum_{b \in B_0} \rho_{ji}(b^\vee)\,b
\qquad \mbox{for $i,j \in \{1,\ldots,d\}$}.\]
Then, by \cite[7.2.7]{gepf}, the matrix $\rho(e_{ij}^E)$ has
coefficient $1$ at position $(i,j)$ and coefficient $0$, otherwise;
furthermore, $e_{ij}^E$ acts as zero on any simple ${\mathcal{H}}$-module which
is not isomorphic to $E$. The elements
\[ \{ e_{ij}^E \mid E \in {\operatorname{Irr}}({\mathcal{H}}), \; 1 \leq i,j \leq \dim E\}\]
form a $K$-basis of ${\mathcal{H}}$. We shall now compute the trace of $\dagger$
with respect to this basis. First note that, since the dual basis of
$B_0^\vee$ is $B_0$ and since $e_{ij}^E$ is independent of the choice of
the basis of ${\mathcal{H}}$ (see \cite[\S 7.2]{gepf}), we have
\[ e_{ij}^E=\frac{1}{c_E}\sum_{b \in B_0}\rho_{ji}(b^\vee) b=
\frac{1}{c_E} \sum_{b \in B_0}\rho_{ji}(b)b^\vee\]
and so
\[ (e_{ij}^E)^\dagger=\frac{1}{c_E} \sum_{b \in B_0} \rho_{ji}(b)\,
b=\frac{1}{c_E}\sum_{b \in B_0} \hat{\rho}_{ij}(b^\vee)b=
e_{ji}^{\hat{E}},\]
where we use the fact that $c_E=c_{\hat{E}}$; see Remark~\ref{rem1}.
This already shows that those $e_{ij}^E$ where $E \not\cong\hat{E}$
will not contribute to the trace of $\dagger$. So let us now assume that
$E \cong \hat{E}$. Let $d=\dim E$. Then there exists an invertible
matrix $P\in M_d(K)$ such that $P\rho(b)=\rho(b^\vee)^\prime P$ for all
$b \in B_0$. Write $P=(p_{ij})$ and $P^{-1}=(\tilde{p}_{ij})$. Then we have
\[ (e_{ij}^E)^\dagger=\frac{1}{c_E}\sum_{b \in B_0}\rho_{ji}(b)\,b=
\frac{1}{c_E}\sum_{b \in B_0}\sum_{1\leq k,l\leq d} \tilde{p}_{jk}\,
p_{li}\, \rho_{lk}(b^\vee)\,b=\sum_{1\leq k,l\leq d}
\tilde{p}_{jk}\,p_{li}\, e_{kl}^E.\]
The coefficient of $e_{ij}^E$ in the expression on the right hand side
is $\tilde{p}_{ji}p_{ji}$. The contribution to the trace of $\dagger$
from basis vectors corresponding to $E$ will be the sum of all these
terms. Now, we have $P^\prime=\nu_E P$; see the proof of
Proposition~\ref{prop21}. Hence, the contribution from $E$ is
\[ \sum_{1\leq i,j\leq d} \tilde{p}_{ji}p_{ji}=\nu_E \sum_{1\leq i,j
\leq d} \tilde{p}_{ij}p_{ji}=\nu_E\,\mbox{trace}(P^{-1}P)=\nu_E \dim E.\]
Consequently, we have $\mbox{trace}(\dagger)=\sum_E \nu_E \dim E$ where
the sum runs over all $E \in {\operatorname{Irr}}({\mathcal{H}})$ such that $E \cong \hat{E}$.
Since $\nu_E=0$ for all $E\in {\operatorname{Irr}}({\mathcal{H}})$ such that $E \not\cong\hat{E}$,
this yields the desired formula.
\end{proof}
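In the standard example ${\mathcal{H}}={\mathbb{C}}[G]$ with $B_0=G$ (so that $B_0=B_0^\vee$), we have $b^\vee=b$ if and only if $g^{-1}=g$, that is, $g^2=1$. Hence the second formula in Proposition~\ref{prop22} specialises to the classical count due to Frobenius and Schur:
\[ |\{g\in G\mid g^2=1\}|=\sum_{E\in{\operatorname{Irr}}({\mathcal{H}})}\nu_E\,\dim E.\]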
\begin{exmp} \label{expreal} Let $B_0$ be a $\dagger$-symmetric basis
of ${\mathcal{H}}$ and assume that $K \subseteq{\mathbb{R}}$. We claim that then $\nu_E=1$
for all $E \in {\operatorname{Irr}}({\mathcal{H}})$. To see this, we adapt the classical argument
for finite groups. Let $E \in {\operatorname{Irr}}({\mathcal{H}})$. Choosing a basis of $E$, we
obtain a corresponding matrix representation $\rho \colon {\mathcal{H}}
\rightarrow M_d(K)$ where $d=\dim E$. We set
\[ Q:=\sum_{b \in B_0} \rho(b)^\prime\, \rho(b)=\sum_{b\in B_0}
\hat{\rho}(b^\vee)\,\rho(b).\]
Clearly, $Q$ is symmetric. As in the proof of Proposition~\ref{prop21},
the second equality shows that $Q\rho(b)=\hat{\rho}(b)Q$ for all
$b \in B_0$, so $Q$ defines a symmetric, ${\mathcal{H}}$-invariant bilinear
form on $E$. Now, the diagonal coefficients of $Q$ are sums of squares
of elements of $K$, at least some of which are non-zero (since $\rho(b)
\neq 0$ for at least some $b \in B_0$). Hence, since $K \subseteq {\mathbb{R}}$,
these diagonal coefficients are non-zero and so $Q \neq 0$. By Schur's Lemma,
$Q$ is invertible. Thus, we are in case (b) of Proposition~\ref{prop21}.
\end{exmp}
Finally, we remark that there is an extensive literature on further
generalisations of the Frobenius--Schur indicator, but usually this is done
in the framework of Hopf algebras; see, for example, Guralnick--Montgomery
\cite{gumo} and the references there.
\section{The ring $\tilde{J}$} \label{sec2}
We shall now apply the results of the previous section to cells in finite
Coxeter groups. Let $W$ be a finite Coxeter group and $S$ be a set of simple
reflections in $W$. We fix a weight function $L \colon W \rightarrow {\mathbb{Z}}$
in the sense of Lusztig \cite{Lusztig03}, where we assume that $L(s) \geq 0$
for all $s \in S$. Using the Kazhdan--Lusztig basis in the generic
Iwahori--Hecke algebra associated with $W,L$, we can define partitions of
$W$ into left, right and two-sided cells. (Note that these notions depend
on $L$).
The key tool to study these cells will be the theory around Lusztig's
ring $J$, originally introduced in \cite{Lu2} in the equal parameter case.
Subsequently, Lusztig \cite{Lusztig03} extended the theory to the general
case, assuming that certain conjectural properties hold; see
{\bf P1}--{\bf P15} in \cite[14.2]{Lusztig03}. In order to avoid the
dependence on these conjectural properties, we shall work with a version
of Lusztig's ring introduced in \cite{myedin}. Let $\tilde{J}$ denote this
new version of $J$. The principal advantage of $\tilde{J}$ is that it can be
constructed without any assumption on $W,L$. On the other hand, the results
that are known about $\tilde{J}$ are not as strong as those for $J$ but, as
we shall see, they are sufficient to deduce Theorem~\ref{propleft}. (See
Remark~\ref{finrem} below for some comments on the relation between $J$
and $\tilde{J}$.)
We now recall the basic facts about the construction of $\tilde{J}$;
we use \cite[\S 1.5]{geja} as a reference. Let $K \subseteq {\mathbb{C}}$ be any
field which is a splitting field for $W$. Let ${\operatorname{Irr}}_K(W)$ denote the set
of simple $K[W]$-modules (up to isomorphism) and write
\[ {\operatorname{Irr}}_K(W)=\{E^\lambda \mid \lambda \in \Lambda\} \qquad \mbox{(for
some finite indexing set $\Lambda$)}.\]
For each $\lambda \in \Lambda$ let $M(\lambda)$ be a basis of $E^\lambda$.
Then, by the construction in \cite[\S 1.4]{geja}, we obtain
corresponding {\em leading matrix coefficients}
\[ c_{w,\lambda}^{{\mathfrak{s}}{\mathfrak{t}}} \in K \qquad \mbox{where $w \in W$, $\lambda
\in \Lambda$ and ${\mathfrak{s}},{\mathfrak{t}} \in M(\lambda)$}.\]
(The construction uses the generic Iwahori--Hecke algebra associated with
$W,L$ and, hence, the above numbers depend on $L$.) For $x,y,z\in W$, we set
\[ \tilde{\gamma}_{x,y,z}=\sum_{\lambda\in \Lambda} \sum_{{\mathfrak{s}},{\mathfrak{t}},{\mathfrak{u}}
\in M(\lambda)} f_\lambda^{-1} \,c_{x,\lambda}^{{\mathfrak{s}}{\mathfrak{t}}}\, c_{y,\lambda}^{{\mathfrak{t}}
{\mathfrak{u}}}\,c_{z,\lambda}^{{\mathfrak{u}}{\mathfrak{s}}},\]
where each $f_\lambda \in K$ is a non-zero element obtained from the
corresponding Schur element of the generic Iwahori--Hecke algebra
associated with $W,L$ (see \cite[1.3.1]{geja}). Now $\tilde{J}$ is an
associative algebra over $K$, with a basis $\{t_w \mid w \in W\}$. The
multiplication is given by
\[ t_xt_y=\sum_{z \in W} \tilde{\gamma}_{x,y,z} t_{z^{-1}} \qquad \mbox{for
$x,y \in W$}.\]
There is an identity element given by $1_{\tilde{J}}=\sum_{w \in W}
\tilde{n}_wt_w$ where
\[\tilde{n}_w=\sum_{\lambda \in \Lambda} \sum_{{\mathfrak{s}} \in M(\lambda)}
f_\lambda^{-1}\, c_{w,\lambda}^{{\mathfrak{s}}\fs}\qquad\mbox{for all $w\in W$}.\]
The algebra $\tilde{J}$ is symmetric with trace form $\tau \colon\tilde{J}
\rightarrow K$, where $\tau(t_w)=\tilde{n}_w$ for all $w \in W$. We also
note that $\tilde{n}_w= \tilde{n}_{w^{-1}}$ and, hence, $\tau(t_w)=
\tau(t_{w^{-1}})$ for all $w \in W$ (see \cite[1.5.3(c)]{geja}).
Furthermore, the map
\[ \dagger \colon \tilde{J} \rightarrow \tilde{J}, \qquad t_w \mapsto
t_{w^{-1}},\]
is an anti-involution of $\tilde{J}$ and $B_0=\{t_w \mid w \in W\}$ is a
$\dagger$-symmetric basis of $\tilde{J}$. Finally, $\tilde{J}$
is split semisimple and we have a corresponding labelling
\[ {\operatorname{Irr}}(\tilde{J})=\{\tilde{E}^\lambda \mid \lambda \in \Lambda\} \quad
\mbox{such that} \quad \dim E^\lambda=\dim \tilde{E}^\lambda \quad
\mbox{for all $\lambda\in \Lambda$}.\]
We have $\tau=\sum_{\lambda \in \Lambda} f_\lambda^{-1} \chi_{\tilde{E}^\lambda}$,
hence the numbers $f_\lambda$ ($\lambda \in \Lambda$) are the Schur
elements of $\tilde{J}$. (For all these facts, see \cite[\S 1.5]{geja}.)
Now, by imitating the original definitions of Kazhdan and Lusztig
\cite{KaLu}, one can define partitions of $W$ into left, right and two-sided
``$\tilde{J}$-cells''; see \cite[\S 1.6]{geja}. (If we just say ``left
cell'', ``right cell'' or ``two-sided cell'', then this is always meant to
be a cell in the sense of Kazhdan and Lusztig, with respect to the given
weight function~$L$.) Here are some of the essential properties that we
shall need:
\begin{itemize}
\item[{\bf (J1)}] If $\tilde{\gamma}_{x,y,z}\neq 0$, then $x,y^{-1}$
belong to the same left $\tilde{J}$-cell, $y,z^{-1}$ belong to the same left
$\tilde{J}$-cell and $z,x^{-1}$ belong to the same left $\tilde{J}$-cell.
(See \cite[1.6.4]{geja}.)
\item[{\bf (J2)}] For $\lambda \in \Lambda$, the set of all $w \in W$
such that $c_{w,\lambda}^{{\mathfrak{s}}{\mathfrak{t}}}\neq 0$ for some ${\mathfrak{s}},{\mathfrak{t}} \in M(\lambda)$
is contained in a two-sided $\tilde{J}$-cell. (See \cite[1.6.11]{geja}.)
\item[{\bf (J3)}] If $\Gamma$ is a left cell of $W$, then $\Gamma$ is a
union of left $\tilde{J}$-cells. A similar statement holds for right cells
and two-sided cells. (See \cite[2.1.20]{geja}.)
\end{itemize}
Now let ${\mathfrak{C}}$ be a left $\tilde{J}$-cell or, slightly more generally, a
union of left $\tilde{J}$-cells. Then {\bf (J1)} implies that
\[ [{\mathfrak{C}}]_{\tilde{J}}=\langle t_x\mid x\in{\mathfrak{C}}\rangle_K\subseteq\tilde{J}\]
is a left ideal in $\tilde{J}$ and, thus, can be viewed as a left
$\tilde{J}$-module. For $\lambda \in \Lambda$, denote by $\tilde{m}({\mathfrak{C}},
\lambda)$ the multiplicity of $\tilde{E}^\lambda \in {\operatorname{Irr}}(\tilde{J})$ as
an irreducible constituent of $[{\mathfrak{C}}]_{\tilde{J}}$. Clearly, if ${\mathfrak{C}}_1,
\ldots,{\mathfrak{C}}_r$ are the left $\tilde{J}$-cells contained in ${\mathfrak{C}}$, then
\[ [{\mathfrak{C}}]_{\tilde{J}}=\bigoplus_{1\leq i \leq r} [{\mathfrak{C}}_i]_{\tilde{J}} \quad
\mbox{and}\quad \tilde{m}({\mathfrak{C}},\lambda)=\sum_{1\leq i \leq r}
\tilde{m}({\mathfrak{C}}_i, \lambda) \quad \mbox{for all $\lambda \in \Lambda$}.\]
\begin{lem} \label{copylus} Let ${\mathfrak{C}},{\mathfrak{C}}'$ be left
$\tilde{J}$-cells of $W$.
\begin{itemize}
\item[(a)] We have ${\operatorname{Hom}}_{\tilde{J}}([{\mathfrak{C}}]_{\tilde{J}},[{\mathfrak{C}}']_{\tilde{J}})
=\{0\}$ unless ${\mathfrak{C}},{\mathfrak{C}}'$ are contained in the same two-sided
$\tilde{J}$-cell.
\item[(b)] In general, we have $\dim {\operatorname{Hom}}_{\tilde{J}}([{\mathfrak{C}}]_{\tilde{J}},
[{\mathfrak{C}}']_{\tilde{J}})=|{\mathfrak{C}} \cap ({\mathfrak{C}}')^{-1}|$; in particular, ${\mathfrak{C}}\cap
({\mathfrak{C}}')^{-1}=\varnothing$ unless ${\mathfrak{C}},{\mathfrak{C}}'$ are contained in the same
two-sided $\tilde{J}$-cell.
\item[(c)] If ${\mathfrak{C}}={\mathfrak{C}}'$, then the subspace $\tilde{J}_{\mathfrak{C}}:=\langle t_w
\mid w \in {\mathfrak{C}} \cap {\mathfrak{C}}^{-1} \rangle_K \subseteq \tilde{J}$ is a
subalgebra isomorphic to ${\operatorname{End}}_{\tilde{J}}([{\mathfrak{C}}]_{\tilde{J}})$. Furthermore,
$\tilde{J}_{{\mathfrak{C}}}$ is split semisimple and has identity element
$1_{{\mathfrak{C}}}:=\sum_{w \in {\mathfrak{C}} \cap {\mathfrak{C}}^{-1}} \tilde{n}_w\,t_w$.
\end{itemize}
\end{lem}
\begin{proof} (a) Assume that ${\operatorname{Hom}}_{\tilde{J}}([{\mathfrak{C}}]_{\tilde{J}},
[{\mathfrak{C}}']_{\tilde{J}})\neq\{0\}$. This means that there is some $\lambda \in
\Lambda$ such that $\tilde{m}({\mathfrak{C}},\lambda)>0$ and $\tilde{m}({\mathfrak{C}}',\lambda)
>0$. By \cite[1.8.1]{geja}, there exist $w \in {\mathfrak{C}}$ and $w' \in {\mathfrak{C}}'$ such
that $c_{w,\lambda}^{{\mathfrak{s}}\fs}\neq 0$ and $c_{w',\lambda}^{{\mathfrak{t}}\ft}\neq 0$
for some ${\mathfrak{s}},{\mathfrak{t}} \in M(\lambda)$. By {\bf (J2)}, $w$ and $w'$ are
contained in the same two-sided $\tilde{J}$-cell. Consequently, ${\mathfrak{C}}$
and ${\mathfrak{C}}'$ must be contained in the same two-sided $\tilde{J}$-cell.
(b) This is modelled on the argument of Lusztig \cite[12.15]{LuBook}.
First we show that
\begin{equation*}
\dim {\operatorname{Hom}}_{\tilde{J}}([{\mathfrak{C}}]_{\tilde{J}}, [{\mathfrak{C}}']_{\tilde{J}})\geq
|{\mathfrak{C}} \cap ({\mathfrak{C}}')^{-1}|.\tag{$*$}
\end{equation*}
If ${\mathfrak{C}}\cap ({\mathfrak{C}}')^{-1}=\varnothing$, this is clear. Now assume that
${\mathfrak{C}}\cap ({\mathfrak{C}}')^{-1}\neq \varnothing$. Let $y \in {\mathfrak{C}}\cap ({\mathfrak{C}}')^{-1}$ and
$x \in {\mathfrak{C}}$. Then $t_x t_{y^{-1}}=\sum_{z \in W} \tilde{\gamma}_{x,y^{-1},z}
t_{z^{-1}}$. If $\tilde{\gamma}_{x,y^{-1},z} \neq 0$, then $y^{-1},z^{-1}$
belong to the same left $\tilde{J}$-cell and so $z^{-1} \in {\mathfrak{C}}'$; see
{\bf (J1)}. It follows that we have a well-defined left $\tilde{J}$-module
homomorphism
\[ \varphi_y \colon [{\mathfrak{C}}]_{\tilde{J}} \rightarrow [{\mathfrak{C}}']_{\tilde{J}},
\qquad t_x \mapsto t_xt_{y^{-1}} \;(x \in {\mathfrak{C}}).\]
We claim that the collection of maps $\{\varphi_y \mid y \in {\mathfrak{C}}\cap
({\mathfrak{C}}')^{-1}\}$ is linearly independent in ${\operatorname{Hom}}_K([{\mathfrak{C}}]_{\tilde{J}},
[{\mathfrak{C}}']_{\tilde{J}})$. Indeed, assume that
\[ \sum_{y \in {\mathfrak{C}}\cap ({\mathfrak{C}}')^{-1}} \alpha_y \,\varphi_y=0 \qquad
\mbox{where $\alpha_y \in K$ for all $y \in {\mathfrak{C}}\cap ({\mathfrak{C}}')^{-1}$}.\]
Let $x \in {\mathfrak{C}} \cap ({\mathfrak{C}}')^{-1}$. Applying the above linear
combination to $t_x$ and then evaluating the trace form $\tau$ on the
resulting expression, we obtain
\[ 0=\sum_{y \in {\mathfrak{C}}\cap ({\mathfrak{C}}')^{-1}} \alpha_y \tau(\varphi_y(t_x))=
\sum_{y \in {\mathfrak{C}}\cap ({\mathfrak{C}}')^{-1}} \alpha_y \tau(t_xt_{y^{-1}})=
\alpha_x,\]
using the fact that $B_0=\{t_w \mid w \in W\}$ is a $\dagger$-symmetric
basis of $\tilde{J}$. Thus, we have $\alpha_x=0$ for all $x\in {\mathfrak{C}}\cap
({\mathfrak{C}}')^{-1}$, as required. This certainly implies that ($*$) holds.
We can then complete the proof by a counting argument, exactly as in
\cite[12.15]{LuBook}. In particular, this shows that
\[ \{\varphi_y \mid y \in {\mathfrak{C}} \cap ({\mathfrak{C}}')^{-1}\} \mbox{ is a vector
space basis of ${\operatorname{Hom}}_{\tilde{J}}([{\mathfrak{C}}]_{\tilde{J}},[{\mathfrak{C}}']_{\tilde{J}})$}.\]
(c) Let ${\mathfrak{C}}={\mathfrak{C}}'$. By (b), we have ${\mathfrak{C}}\cap {\mathfrak{C}}^{-1}\neq \varnothing$.
Let $x,y \in {\mathfrak{C}}\cap {\mathfrak{C}}^{-1}$ and write $t_xt_y=\sum_{z \in W}
\tilde{\gamma}_{x,y,z}t_{z^{-1}}$. If $\tilde{\gamma}_{x,y,z}\neq 0$
then $y,z^{-1}$ belong to the same left $\tilde{J}$-cell and $z,x^{-1}$
belong to the same left $\tilde{J}$-cell; see again {\bf (J1)}. Thus, we
must have $z \in {\mathfrak{C}} \cap {\mathfrak{C}}^{-1}$. This shows that $\tilde{J}_{\mathfrak{C}}$ is a
subalgebra of $\tilde{J}$. Using now the construction in the proof of
(b), we obtain an isomorphism of vector spaces
\[\varphi\colon\tilde{J}_{\mathfrak{C}}\rightarrow{\operatorname{End}}_{\tilde{J}}([{\mathfrak{C}}]_{\tilde{J}}),
\qquad t_y \mapsto \varphi_y \quad (y \in {\mathfrak{C}} \cap {\mathfrak{C}}^{-1}).\]
We note that, for any $h \in \tilde{J}_{{\mathfrak{C}}}$, the map $\varphi(h)$ is
given by right multiplication with $h^\dagger$. This certainly implies that
$\varphi$ is an algebra homomorphism.
Finally, being isomorphic to the endomorphism algebra of a module of
a split semisimple algebra, $\tilde{J}_{\mathfrak{C}}$ itself has an identity
element and is split semisimple. Let $1_{{\mathfrak{C}}}$ be the identity
element and write $1_{{\mathfrak{C}}}= \sum_{w \in {\mathfrak{C}}\cap {\mathfrak{C}}^{-1}} \alpha_w\,
t_w$ where $\alpha_w \in K$. If $x \in {\mathfrak{C}}\cap {\mathfrak{C}}^{-1}$, then $t_x
\in \tilde{J}_{{\mathfrak{C}}}$ and so
\[\tilde{n}_x=\tau(t_{x^{-1}})=\tau(t_{x^{-1}}1_{{\mathfrak{C}}})=\sum_{w \in
{\mathfrak{C}}\cap {\mathfrak{C}}^{-1}} \alpha_w\, \tau(t_{x^{-1}}t_w)=\alpha_x.\]
Thus, $1_{{\mathfrak{C}}}$ has the required expression.
\end{proof}
\begin{rem} \label{copylus1} Let ${\mathfrak{C}}$ be a left $\tilde{J}$-cell. Recall
that, by definition, the left $\tilde{J}$-module $[{\mathfrak{C}}]_{\tilde{J}}=
\langle t_w \mid w \in {\mathfrak{C}}\rangle_K$ is a left ideal in $\tilde{J}$. By
Lemma~\ref{copylus}(c), the element $1_{{\mathfrak{C}}}$ is an idempotent in
$\tilde{J}$, and it is contained in $[{\mathfrak{C}}]_{\tilde{J}}$.
In fact, we claim that
\[ [{\mathfrak{C}}]_{\tilde{J}}=\tilde{J}1_{{\mathfrak{C}}}.\]
Indeed, since $1_{{\mathfrak{C}}} \in [{\mathfrak{C}}]_{\tilde{J}}$, it is clear that
$\tilde{J}1_{{\mathfrak{C}}} \subseteq [{\mathfrak{C}}]_{\tilde{J}}$. Conversely, we note that
right multiplication by $1_{{\mathfrak{C}}}$ is the identity element of
${\operatorname{End}}_{\tilde{J}}([{\mathfrak{C}}]_{\tilde{J}})$; see the proof of
Lemma~\ref{copylus}(c). Thus, for any $w \in {\mathfrak{C}}$, we have $t_w=t_w
1_{{\mathfrak{C}}} \in \tilde{J}1_{{\mathfrak{C}}}$, as required.
\end{rem}
For any subset $X \subseteq W$, we denote by $X_{(2)}$ the set of
involutions in $X$.
\begin{lem} \label{lem31} Let ${\mathfrak{C}}$ be a union of left $\tilde{J}$-cells of
$W$. Then we have
\[ |{\mathfrak{C}}_{(2)}|=\sum_{\lambda \in \Lambda} \tilde{m}({\mathfrak{C}},\lambda).\]
\end{lem}
\begin{proof} Since ${\mathbb{R}}$ is known to be a splitting field for $W$ (see
\cite[6.3.8]{gepf}), we will assume in this proof that $K \subseteq {\mathbb{R}}$.
Let ${\mathfrak{C}}_1,\ldots,{\mathfrak{C}}_r$ be the left $\tilde{J}$-cells which are contained
in ${\mathfrak{C}}$. Then, clearly, ${\mathfrak{C}}_{(2)}$ is the union of the sets of involutions
in ${\mathfrak{C}}_1,\ldots,{\mathfrak{C}}_r$; furthermore, as already noted above, we have
$\tilde{m}({\mathfrak{C}}, \lambda)=\tilde{m}({\mathfrak{C}}_1,\lambda)+ \ldots+\tilde{m}({\mathfrak{C}}_r,
\lambda)$ for all $\lambda \in \Lambda$. Thus, it will be sufficient to
deal with the case where $r=1$ and ${\mathfrak{C}}={\mathfrak{C}}_1$ is just one left
$\tilde{J}$-cell. In this case, consider the split semisimple algebra
${\mathcal{H}}:=\tilde{J}_{{\mathfrak{C}}}$; see Lemma~\ref{copylus}(c). We note that $\dagger$
restricts to an anti-involution of ${\mathcal{H}}$ which we denote by the same symbol.
Furthermore, $\tau$ restricts to a trace form on ${\mathcal{H}}$ where $B_{0,{\mathfrak{C}}}=
\{t_w \mid w \in {\mathfrak{C}} \cap {\mathfrak{C}}^{-1}\}$ is a $\dagger$-symmetric basis of
$\tilde{J}_{{\mathfrak{C}}}$ such that $B_{0,{\mathfrak{C}}}=B_{0,{\mathfrak{C}}}^\vee$. Thus, we can apply
the results in Section~\ref{sec1} to ${\mathcal{H}}$. Since $K \subseteq {\mathbb{R}}$ and
since ${\mathfrak{C}}_{(2)} \subseteq {\mathfrak{C}} \cap {\mathfrak{C}}^{-1}$, we conclude that
\[ |{\mathfrak{C}}_{(2)}|=\sum_{M\in {\operatorname{Irr}}({\mathcal{H}})} \dim M \qquad \mbox{(see
Proposition~\ref{prop22} and Example~\ref{expreal})}.\]
It remains to note that, since $\tilde{J}$ is split semisimple and since
we have an isomorphism ${\mathcal{H}}\cong {\operatorname{End}}_{\tilde{J}}([{\mathfrak{C}}]_{\tilde{J}})$, there
is a bijection between ${\operatorname{Irr}}({\mathcal{H}})$ and the set of simple $\tilde{J}$-modules
which appear as constituents of $[{\mathfrak{C}}]_{\tilde{J}}$; furthermore, if
$\tilde{E}^\lambda \in {\operatorname{Irr}}(\tilde{J})$ is such a constituent, then the
corresponding simple ${\mathcal{H}}$-module has dimension $\tilde{m}({\mathfrak{C}},\lambda)$.
(This follows from simple facts about Hom functors; see, e.g.,
\cite[4.1.3]{geja}.)
\end{proof}
\begin{cor} \label{cormult} Let ${\mathfrak{C}}$ be a left $\tilde{J}$-cell.
Then $[{\mathfrak{C}}]_{\tilde{J}}$ is multiplicity-free if and only if ${\mathfrak{C}}_{(2)}=
{\mathfrak{C}} \cap {\mathfrak{C}}^{-1}$.
\end{cor}
\begin{proof} Recall that ${\mathfrak{C}}_{(2)}\subseteq {\mathfrak{C}}\cap{\mathfrak{C}}^{-1}$. By
Lemma~\ref{lem31}, we have $|{\mathfrak{C}}_{(2)}|= \sum_{\lambda \in \Lambda}
\tilde{m}({\mathfrak{C}},\lambda)$. On the other hand, by Lemma~\ref{copylus}(b),
we have $|{\mathfrak{C}}\cap {\mathfrak{C}}^{-1}|=\sum_{\lambda \in \Lambda} \tilde{m}({\mathfrak{C}},
\lambda)^2$. Hence, if $[{\mathfrak{C}}]_{\tilde{J}}$ is multiplicity-free, then
$\tilde{m}({\mathfrak{C}},\lambda)=\tilde{m}({\mathfrak{C}},\lambda)^2$ for all $\lambda \in
\Lambda$ and so ${\mathfrak{C}}_{(2)}={\mathfrak{C}}\cap {\mathfrak{C}}^{-1}$. On the other hand, if
${\mathfrak{C}}_{(2)}={\mathfrak{C}}\cap {\mathfrak{C}}^{-1}$, then $\sum_{\lambda \in \Lambda} \tilde{m}
({\mathfrak{C}},\lambda)=\sum_{\lambda \in \Lambda} \tilde{m}({\mathfrak{C}},\lambda)^2$ and
so $\tilde{m}({\mathfrak{C}},\lambda) \in \{0,1\}$ for all $\lambda\in \Lambda$.
\end{proof}
\begin{rem} \label{ll4} Let $\lambda \in \Lambda$ and $w \in W$. By
\cite[1.5.7]{geja}, we have
\begin{equation*}
c_{w,\lambda}=\mbox{trace}(t_w,\tilde{E}^\lambda) \qquad \mbox{where} \qquad
c_{w,\lambda}:=\sum_{{\mathfrak{s}} \in M(\lambda)} c_{w,\lambda}^{{\mathfrak{s}}\fs}. \tag{a}
\end{equation*}
Thus, up to signs, the numbers $c_{w,\lambda}$ are the leading coefficients
of character values as defined by Lusztig \cite{Lu4}. We claim that
\begin{equation*}
c_{w,\lambda}=0 \quad \mbox{unless $w,w^{-1}$ belong to the same left
$\tilde{J}$-cell}. \tag{b}
\end{equation*}
Indeed, assume that $c_{w,\lambda} \neq 0$. Using (a), we conclude that
$t_w$ can not be nilpotent. Consequently, $t_w^2\neq 0$ and so
$\tilde{\gamma}_{w,w,x} \neq 0$ for some $x \in W$. Hence, by {\bf (J1)},
the elements $w,w^{-1}$ must belong to the same left $\tilde{J}$-cell, as
claimed. (In the equal parameter case, this argument is due to Lusztig
\cite[3.5]{Lu4}.)
\end{rem}
\begin{exmp} \label{remc} Recall that $1_{\tilde{J}}=\sum_{w \in W}
\tilde{n}_w t_w$. Let ${\mathcal{D}}=\{w \in W \mid \tilde{n}_w\neq 0\}$. If
{\bf P1}--{\bf P15} in \cite[14.2]{Lusztig03} were known to hold for
$W,L$, then we could deduce that every element of ${\mathcal{D}}$ is an involution.
In the present context, we can at least show that $w,w^{-1}$ belong to
the same left $\tilde{J}$-cell. Indeed, if $\tilde{n}_w\neq 0$, then the
defining equation shows that $c_{w,\lambda} \neq 0$ for some $\lambda \in
\Lambda$. So Remark~\ref{ll4}(b) implies that $w,w^{-1}$ belong to the
same left $\tilde{J}$-cell, as claimed. In particular, if we are in a case
where all $\tilde{J}$-modules $[{\mathfrak{C}}]_{\tilde{J}}$ are multiplicity-free (for
any left $\tilde{J}$-cell ${\mathfrak{C}}$ of $W$), then $w^2=1$ for all $w \in {\mathcal{D}}$
(see Corollary~\ref{cormult}).
\end{exmp}
Now let $\Gamma$ be a left cell of $W$. Recall that we have a corresponding
left $K[W]$-module $[\Gamma]_1$. For any $\lambda \in \Lambda$, let
$m(\Gamma,\lambda)$ denote the multiplicity of $E^\lambda \in {\operatorname{Irr}}_K(W)$ as
an irreducible constituent of $[\Gamma]_1$. Now, $\Gamma$ is a union of left
$\tilde{J}$-cells; see {\bf (J3)}. Thus, in order to complete the proof of
Theorem~\ref{propleft}, we need to compare the multiplicities $m(\Gamma,
\lambda)$ and $\tilde{m}(\Gamma,\lambda)$.
\begin{lem} \label{lem32} With the above notation, we have $\tilde{m}
(\Gamma,\lambda)=m(\Gamma,\lambda)$ for any $\lambda \in \Lambda$.
Consequently, Theorem~\ref{propleft} holds.
\end{lem}
\begin{proof} Let $\lambda \in \Lambda$. Using \cite[Prop.~4.7]{my02} (see
also the proof of \cite[2.2.4]{geja}), we have
\[ \sum_{{\mathfrak{s}},{\mathfrak{t}} \in M(\lambda)} \sum_{w \in \Gamma} c_{w,\lambda}^{{\mathfrak{s}}{\mathfrak{t}}}
\, c_{w^{-1},\lambda}^{{\mathfrak{t}}{\mathfrak{s}}}=m(\Gamma,\lambda)\,f_\lambda\,|M(\lambda)|.\]
On the other hand, let ${\mathfrak{C}}_1,\ldots,{\mathfrak{C}}_r$ be the left $\tilde{J}$-cells
which are contained in $\Gamma$. Then, using \cite[1.8.1]{geja}, we have
\[ \sum_{{\mathfrak{s}},{\mathfrak{t}} \in M(\lambda)} \sum_{w \in {\mathfrak{C}}_i} c_{w,\lambda}^{{\mathfrak{s}}{\mathfrak{t}}}
\, c_{w^{-1},\lambda}^{{\mathfrak{t}}{\mathfrak{s}}}=\tilde{m}({\mathfrak{C}}_i,\lambda)\,f_\lambda \,
|M(\lambda)| \qquad \mbox{for $i=1,\ldots,r$}.\]
Summing these identities over $i=1,\ldots,r$ and using the fact that
$\tilde{m}(\Gamma,\lambda)=\tilde{m}({\mathfrak{C}}_1,\lambda)+\ldots+\tilde{m}
({\mathfrak{C}}_r,\lambda)$, we obtain
\[ \sum_{{\mathfrak{s}},{\mathfrak{t}} \in M(\lambda)} \sum_{w \in \Gamma} c_{w,\lambda}^{{\mathfrak{s}}{\mathfrak{t}}}
\, c_{w^{-1},\lambda}^{{\mathfrak{t}}{\mathfrak{s}}}=\tilde{m}(\Gamma,\lambda)\,f_\lambda\,
|M(\lambda)|.\]
We conclude that $m(\Gamma,\lambda)=\tilde{m}(\Gamma,\lambda)$, as required.
In combination with Lemma~\ref{lem31}, this yields that $|\Gamma_{(2)}|=
\sum_{\lambda \in \Lambda} m(\Gamma,\lambda)$. Note that the right hand
side is just the number of terms in a decomposition of $[\Gamma]_1$ as a
direct sum of simple $K[W]$-modules. Thus, Theorem~\ref{propleft} is proved.
\end{proof}
\begin{cor}[See Lusztig \protect{\cite[5.8]{LuBook}} in the equal
parameter case] \label{cor32} Let $\Gamma$ be a left cell of $W$ and
$\lambda,\mu \in \Lambda$. Then
\[ \sum_{w \in \Gamma} c_{w,\lambda}\, c_{w^{-1},\mu}=\left\{
\begin{array}{cl} m(\Gamma,\lambda)\,f_\lambda & \qquad
\mbox{if $\lambda=\mu$},\\ 0 & \qquad \mbox{otherwise}.\end{array}\right.\]
\end{cor}
\begin{proof} Let ${\mathfrak{C}}_1,\ldots, {\mathfrak{C}}_r$ be the left $\tilde{J}$-cells which
are contained in $\Gamma$. By \cite[1.8.1]{geja}, it is already known that,
for any $i \in \{1,\ldots,r\}$, we have
\[ \sum_{w \in {\mathfrak{C}}_i} c_{w,\lambda}\, c_{w^{-1},\mu}=\left\{
\begin{array}{cl} \tilde{m}({\mathfrak{C}}_i,\lambda)\,f_\lambda & \qquad
\mbox{if $\lambda=\mu$},\\ 0 & \qquad \mbox{otherwise}.\end{array}\right.\]
We sum these identities over all $i=1,\ldots,r$. Then it remains to use
Lemma~\ref{lem32} and the fact that $\tilde{m}(\Gamma,\lambda)=
\tilde{m}({\mathfrak{C}}_1,\lambda)+ \ldots+\tilde{m}({\mathfrak{C}}_r,\lambda)$.
\end{proof}
\begin{cor}[See Lusztig \protect{\cite[12.15]{LuBook}} in the equal
parameter case] \label{cor30} Let $\Gamma,\Gamma'$ be left cells of $W$.
Then
\[ \dim {\operatorname{Hom}}_{K[W]}([\Gamma]_1,[\Gamma']_1)=|\Gamma\cap (\Gamma')^{-1}|.\]
Furthermore, we have ${\operatorname{Hom}}_{K[W]}([\Gamma]_1,[\Gamma']_1)=\{0\}$ unless
$\Gamma, \Gamma'$ are contained in the same two-sided cell of $W$.
\end{cor}
\begin{proof} We have
\begin{align*}
\dim {\operatorname{Hom}}_{K[W]}([\Gamma]_1,[\Gamma']_1)&= \sum_{\lambda \in \Lambda}
m(\Gamma,\lambda)\, m(\Gamma',\lambda)\\
&=\sum_{\lambda \in \Lambda} \tilde{m}(\Gamma,\lambda)\,
\tilde{m}(\Gamma',\lambda) \qquad \mbox{(see Lemma~\ref{lem32})}\\
&=\dim {\operatorname{Hom}}_{\tilde{J}}([\Gamma]_{\tilde{J}},[\Gamma']_{\tilde{J}}).
\end{align*}
Thus, it is sufficient to show that $\dim {\operatorname{Hom}}_{\tilde{J}}
([\Gamma]_{\tilde{J}},[\Gamma']_{\tilde{J}})=|\Gamma\cap (\Gamma')^{-1}|$.
To see this, let ${\mathfrak{C}}_1,\ldots, {\mathfrak{C}}_r$ be the left $\tilde{J}$-cells which
are contained in $\Gamma$ and let ${\mathfrak{C}}_1',\ldots,{\mathfrak{C}}_s'$ be the left
$\tilde{J}$-cells which are contained in $\Gamma'$. Then we have
\[ \dim{\operatorname{Hom}}_{\tilde{J}}([\Gamma]_{\tilde{J}}, [\Gamma']_{\tilde{J}})=
\sum_{1\leq i\leq r} \sum_{1\leq j \leq s} \dim{\operatorname{Hom}}_{\tilde{J}}
([{\mathfrak{C}}_i]_{\tilde{J}}, [{\mathfrak{C}}_j']_{\tilde{J}}),\]
and so the desired equality immediately follows from Lemma~\ref{copylus}(b).
Finally, if $\Gamma,\Gamma'$ are not contained in the same two-sided cell,
then ${\mathfrak{C}}_i,{\mathfrak{C}}_j$ (for any $i,j$) are not contained in the same
two-sided $\tilde{J}$-cell; see {\bf (J3)}. Thus, Lemma~\ref{copylus}(a)
and the above formula show that ${\operatorname{Hom}}_{K[W]}([\Gamma]_1,[\Gamma']_1)=
\{0\}$ in this case.
\end{proof}
\begin{exmp} \label{expl} Let $\Gamma,\Gamma'$ be left cells of $W$ which
are contained in the same two-sided cell. If $L(s)=1$ for all $s \in S$, it
is known that we always have $\Gamma \cap (\Gamma')^{-1}\neq \varnothing$;
see Lusztig \cite[12.16]{LuBook}.~--~For general $L$, it can happen that
$\Gamma\cap (\Gamma')^{-1}=\varnothing$; see \cite[Cor.~4.8]{mykl} (case
``$b=2a$'' in Table~2) for an example in type $F_4$.
\end{exmp}
\begin{cor} \label{cor31} Let $\Gamma$ be a left cell of $W$. Then
$[\Gamma]_1$ is multiplicity-free if and only if $\Gamma_{(2)}=\Gamma
\cap \Gamma^{-1}$.
\end{cor}
\begin{proof} Once Theorem~\ref{propleft} and Corollary~\ref{cor30} are
established, this follows by an argument entirely analogous to that in
Corollary~\ref{cormult}.
\end{proof}
Now let ${\mathfrak{c}}$ be a two-sided cell of $W$. Given $E^\lambda\in {\operatorname{Irr}}_K(W)$,
we write $E^\lambda\sim_L {\mathfrak{c}}$ and say that $E^\lambda$ belongs to ${\mathfrak{c}}$
if $E^\lambda$ is a constituent of some $[\Gamma]_1$ where $\Gamma$ is a
left cell which is contained in ${\mathfrak{c}}$.
\begin{cor} \label{proptwo} The number of involutions in a two-sided cell
${\mathfrak{c}}$ of $W$ is equal to $\sum_\lambda \dim E^\lambda$ where the sum runs
over all $\lambda \in \Lambda$ such that $E^\lambda \sim_L {\mathfrak{c}}$.
\end{cor}
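For instance, in the equal parameter case for $W$ of type $A_2$ (as in the example after Theorem~\ref{propleft}), the two-sided cell consisting of the four elements of length $1$ or $2$ contains exactly the two involutions $s_1,s_2$, and the only irreducible representation belonging to it is the two-dimensional one, so both sides of the equality are equal to $2$.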
\begin{proof} Let $\Gamma^1, \ldots,\Gamma^m$ be all the left cells of $W$
(with respect to the given $L$). Then the direct sum $\bigoplus_{1 \leq i
\leq m} [\Gamma^i]_1$ is isomorphic to the regular representation of $W$
and so each $E^\lambda\in {\operatorname{Irr}}_K(W)$ appears with multiplicity equal to
$\dim E^\lambda$ in that direct sum. Now, by Corollary~\ref{cor30}, we
have ${\operatorname{Hom}}_{K[W]} ([\Gamma^i]_1, [\Gamma^j]_1)=\{0\}$ whenever $\Gamma^i$,
$\Gamma^j$ are not contained in the same two-sided cell of $W$. Consequently,
if $I$ denotes the set of all $i \in \{1,\ldots,m\}$ such that $\Gamma^i
\subseteq {\mathfrak{c}}$, then we have
\[\sum_{i \in I} [\Gamma^i]_1=\sum_{\lambda \in \Lambda \,:\, E^\lambda
\sim_L {\mathfrak{c}}} (\dim E^\lambda) \,E^\lambda\]
(in the appropriate Grothendieck group of representations). Thus, the
number of terms in a decomposition of $\bigoplus_{i \in I} [\Gamma^i]_1$
as a direct sum of irreducible representations is equal to $\sum_\lambda
\dim E^\lambda$ where the sum runs over all $\lambda \in \Lambda$ such
that $E^\lambda \sim_L {\mathfrak{c}}$. On the other hand, by Theorem~\ref{propleft},
this number is also equal to the sum $\sum_{i \in I} |\Gamma_{(2)}^i|$
which is just the number of involutions in~${\mathfrak{c}}$.
\end{proof}
\begin{exmp}[Lusztig] \label{exp1} Assume that we are in the equal parameter
case where $L(s)=1$ for all $s \in S$. Let ${\mathfrak{c}}$ be a two-sided cell of $W$.
By \cite[Chap.~4]{LuBook}, one can attach a certain finite group ${\mathcal{G}}=
{\mathcal{G}}_{{\mathfrak{c}}}$ to ${\mathfrak{c}}$ (or the corresponding family of ${\operatorname{Irr}}_K(W)$). Assume
now that $W$ is of classical type. Then $|{\mathcal{G}}_{{\mathfrak{c}}}|=2^d$ for some $d
\geq 0$ and, by \cite[12.17]{LuBook}, it is known that $[\Gamma]_1$ is
multiplicity-free with exactly $2^d$ irreducible constituents, for every
left cell $\Gamma \subseteq {\mathfrak{c}}$. Hence, $\Gamma_{(2)}=\Gamma\cap
\Gamma^{-1}$ also has cardinality $2^d$ for any left cell $\Gamma\subseteq
{\mathfrak{c}}$. Now let $E_{\mathfrak{c}}$ be the unique special representation which belongs
to ${\mathfrak{c}}$ (see \cite[4.14, 5.25]{LuBook}). Since $E_{\mathfrak{c}}$ occurs with
multiplicity $1$ in $[\Gamma]_1$ for every left cell $\Gamma\subseteq{\mathfrak{c}}$,
we conclude that $|{\mathfrak{c}}_{(2)}|= 2^d \dim E_{\mathfrak{c}}$. Combining this with
Corollary~\ref{proptwo}, we obtain the following identity:
\[ 2^d \dim E_{\mathfrak{c}}=\sum_{\lambda \in \Lambda\,:\, E^\lambda \sim_L {\mathfrak{c}}}
\dim E^\lambda,\]
which shows that the order of the group ${\mathcal{G}}_{{\mathfrak{c}}}$ is
determined by the set of all $E^\lambda\in {\operatorname{Irr}}_K(W)$ which belong to
${\mathfrak{c}}$. (If $W$ is of exceptional type, then such an identity
will not hold in general.)
\end{exmp}
\begin{rem} \label{finrem} Assume that the conjectural properties
{\bf P1}--{\bf P15} in \cite[14.2]{Lusztig03} hold for $W,L$. Then we do
have $J=\tilde{J}$; see \cite[2.3.16]{geja}. In particular, this implies
that:
\begin{itemize}
\item $\tilde{\gamma}_{x,y,z} \in {\mathbb{Z}}$ and $\tilde{n}_w=\pm 1$ for all
$x,y,z,w\in W$;
\item every left $\tilde{J}$-cell contains a unique element of ${\mathcal{D}}=
\{w \in W \mid \tilde{n}_w\neq 0\}$;
\item the left cells of $W$ are precisely the left $\tilde{J}$-cells.
\end{itemize}
(See \cite[2.5.3]{geja}.) It would be highly interesting to prove these
statements directly, without reference to {\bf P1}--{\bf P15}; at present,
we do not see any way of doing this. In \cite{mypamq}, we have formulated
a somewhat different set of conjectural properties which, in some cases, are
easier to verify than {\bf P1}--{\bf P15}. However, the case where $W$ is
of type $B_n$ and $L$ is a general weight function remains completely open.
\end{rem}
\medskip
\noindent {\bf Acknowledgements.} I wish to thank George Lusztig for
a discussion at the Isaac Newton Institute (Cambridge, September 2011)
where he suggested that Corollary~\ref{proptwo} might be true in general.
| {
"timestamp": "2011-12-20T02:01:16",
"yymm": "1110",
"arxiv_id": "1110.5672",
"language": "en",
"url": "https://arxiv.org/abs/1110.5672",
"abstract": "Let $W$ be a finite Coxeter group. It is well-known that the number of involutions in $W$ is equal to the sum of the degrees of the irreducible characters of $W$. Following a suggestion of Lusztig, we show that this equality is compatible with the decomposition of $W$ into Kazhdan--Lusztig cells. The proof uses a generalisation of the Frobenius--Schur indicator to symmetric algebras, which may be of independent interest.",
"subjects": "Representation Theory (math.RT)",
"title": "Kazhdan--Lusztig cells and the Frobenius--Schur indicator",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126482065489,
"lm_q2_score": 0.7248702702332475,
"lm_q1q2_score": 0.7094397017861783
} |
https://arxiv.org/abs/1707.02078 | Computational Krylov-based methods for large-scale differential Sylvester matrix problems | In the present paper, we propose Krylov-based methods for solving large-scale differential Sylvester matrix equations having a low rank constant term. We present two new approaches for solving such differential matrix equations. The first approach is based on the integral expression of the exact solution and a Krylov method for the computation of the exponential of a matrix times a block of vectors. In the second approach, we first project the initial problem onto a block (or extended block) Krylov subspace and get a low-dimensional differential Sylvester matrix equation. The latter problem is then solved by some integration numerical methods such as BDF or Rosenbrock method and the obtained solution is used to build the low rank approximate solution of the original problem. We give some new theoretical results such as a simple expression of the residual norm and upper bounds for the norm of the error. Some numerical experiments are given in order to compare the two approaches. | \section{Introduction}
In the present paper, we consider the differential Sylvester
matrix equation (DSE in short) of the form
\begin{equation}\label{sylv1}
\left\{
\begin{array}{l}
\dot X(t)=A(t)\,X(t)+X(t)\,B(t)+E(t)F(t)^T;\; (DSE) \\
\;X(t_0)=X_0,\; \; t \in [t_0, \, T_f],
\end{array}
\right.
\end{equation}
\noindent where $ A(t) \in \mathbb{R}^{n \times n} $, $ B(t) \in \mathbb{R}^{p \times p}$,
$E(t) \in \mathbb{R}^{n \times s} $ and $F(t) \in \mathbb{R}^{p \times s} $ are full-rank matrices, with $ s\ll n,\,p $. The initial condition is given in factored form as $X_0=Z_0 \widetilde{Z} _0^T$ and the matrices $A$ and $B$ are assumed to be large and sparse.\\
Differential Sylvester equations play a fundamental role in many areas such as control,
filter design theory, model reduction problems, differential
equations and robust control problems \cite{abou03,corless}.
For such differential matrix equations, only a few attempts have been
made for large problems. \\
Let us first recall the following theoretical result which gives an expression of the exact solution of \eqref{sylv1}.
\begin{theorem}\label{theo1}
\cite{abou03}
The unique solution of the general differential Sylvester equation
\begin{equation}
\displaystyle {\dot X}(t)=A(t)\,X(t)+X(t)\,B(t)+M(t);\;\; X(t_0)=X_0
\end{equation}
is defined by
\begin{equation}\label{solexacte1}
X(t) = \Phi_A(t,t_0)X_0{\Phi_{B^T}^T(t,t_0)}+\int_{t_0}^t \Phi_A(t,\tau)M(\tau){\Phi_{B^T}^T(t,\tau)}d\tau.
\end{equation}
where the transition matrix $\Phi_A(t,t_0)$ is the unique solution to the problem
$$\displaystyle {\dot \Phi}_A(t,t_0)=A(t) \Phi_A(t,t_0),\;\; \Phi_A(t_0,t_0)=I.$$
Furthermore, if $A$ and $B$ are constant matrices, then we have
\begin{equation}\label{solexacte2}
X(t)=e^{(t-t_0)A}X_0e^{(t-t_0)B}+\int_{t_0}^t e^{(t-\tau)A}M(\tau)e^{(t-\tau)B}d\tau.
\end{equation}
\end{theorem}
We notice that the problem \eqref{sylv1} is equivalent to the linear ordinary differential equation
\begin{equation}\label{kron}
\left\{
\begin{array}{c c l}
\dot{x}(t)& =& \mathcal{A}(t)x(t)+b(t) \\
x_0 & = & vec(X_0)
\end{array}
\right.
\end{equation}
where $\mathcal{A}(t)= I_p \otimes A(t) + B^T(t) \otimes I_n$, $x(t)=vec(X(t))$ and $b(t) = vec(E(t)F(t)^T)$, where $vec(Z)$ is the long vector obtained by stacking the columns of the matrix $Z$ into a single column. For moderate size problems, it is then possible to directly apply an integration method to solve \eqref{kron}. However, this approach is not suitable for large problems. From now on, we assume that the matrices $A$ and $B$ are time independent.\\
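\noindent For moderate sizes, the vectorized formulation \eqref{kron} can indeed be handled by a standard ODE solver. The following sketch (in Python/NumPy, not taken from the original work; all names and data are illustrative, and $A$, $B$ are assumed constant) shows one possible implementation:
\begin{verbatim}
# Minimal sketch: solve the vectorized form of the DSE for moderate sizes,
# with constant A, B.  All matrices and sizes below are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

n, p, s = 30, 20, 2
rng = np.random.default_rng(0)
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = -np.eye(p) + 0.1 * rng.standard_normal((p, p))
E = rng.standard_normal((n, s))
F = rng.standard_normal((p, s))
X0 = np.zeros((n, p))

calA = np.kron(np.eye(p), A) + np.kron(B.T, np.eye(n))  # I (x) A + B^T (x) I
b = (E @ F.T).reshape(-1, order="F")                     # vec(E F^T), column-major

def rhs(t, x):
    return calA @ x + b

sol = solve_ivp(rhs, (0.0, 2.0), X0.reshape(-1, order="F"), rtol=1e-8, atol=1e-10)
X_Tf = sol.y[:, -1].reshape((n, p), order="F")           # X(T_f) as an n x p matrix
\end{verbatim}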
In the present paper, we will consider projection methods onto extended block Krylov (or block Krylov) subspaces associated to the pairs $(A,E)$ and $(B^T,F)$ defined as follows
$$\mathbb{K}_m(A,E)={\rm range}(E,AE,\ldots,A^{m-1}E)$$
for block Krylov subspaces, or
$${\cal K}_m(A,E)={\rm range}(A^{-m}E,\ldots,A^{-1}E,E,AE,\ldots,A^{m-1}E)$$
for extended block Krylov subspaces when the matrix $A$ is nonsingular. Notice that the extended block Krylov subspace ${\cal K}_m(A,E)$ is a sum of two block Krylov subspaces associated to the pairs $(A,E)$ and $(A^{-1},A^{-1}E)$:
$$ {\cal K}_m(A,E)=\mathbb{K}_{m}(A,E)\, + \, \mathbb{K}_m(A^{-1},A^{-1}E). $$
To compute an orthonormal basis $\{V_1,\ldots,V_m\}$, where each $V_i$ is of dimension $n\times d$ with $d=s$ in the block Krylov case and $d=2s$ in the extended block Krylov case, two algorithms are available: the first one is the well-known block Arnoldi algorithm and the second one is the extended block Arnoldi algorithm \cite{druskin98,simoncini1}; see Appendix A for the description of both algorithms. \\These algorithms generate the blocks $ V_1,V_2,\ldots,V_m $, $ V_ i \in \mathbb{R}^{n \times d} $, such that their columns form an orthonormal basis of the block Krylov subspace $\mathbb{K}_{m}(A,E)$ (with $d=s$) or of the extended block Krylov subspace ${\cal K}_m(A,E)$ (with $d=2s$). \\ Both algorithms also compute block upper Hessenberg matrices $ {\mathcal T}_{m,A} = {\mathcal V}_m^T\,A\,{\mathcal V}_m $ with $d \times d$ blocks. The following algebraic relations are satisfied
\begin{eqnarray}
A\,{\cal V}_m & = & {\cal V}_{m+1}\,{\widehat {\cal T}}_{m,A}, \\
& = & {\cal V}_m\,{\cal T}_{m,A} + V_{m+1}\,T_{m+1,m}\,\widetilde{E}_m^T,
\end{eqnarray}
where ${{\widehat {\cal T}}_{m,A}} = {\cal V}_{m+1}^T\,A\,{\cal V}_m ; $ $ T_{i,j} $ is the $ (i,j) $ block of $ {\widehat {\cal T}_{m,A}} $ of size $ d\times d $, and $ \widetilde{E}_m = [ O_{d \times (m-1)d}, I_{d} ]^T $ is the matrix formed with the last $ d$ columns of the $ md \times md$ identity matrix $ I_{md} $ where $d=s$ for the block Arnoldi and $d=2s$ for the extended block Arnoldi.\\ When the matrix $A$ is nonsingular and when the computation of the products $W=A^{-1}V$ is not difficult (which is the case for sparse and structured matrices), the use of the extended block Arnoldi is to be preferred. \\
The paper is organized as follows: In Section 2, we present a first approach based on the approximation of the exponential of a matrix times a block using a Krylov projection method. We give some theoretical results such as a simple expression of the norm of the residual and upper bounds for the norm of the error and perturbation results. In Section 3, the initial differential Sylvester matrix equation is projected onto a block (or extended block) Krylov subspace. The obtained low dimensional differential Sylvester equation is solved by using the well known Backward Differentiation Formula (BDF) and Rosenbrock methods. The last section is devoted to some numerical experiments.
\\
Throughout the paper, $\Vert . \Vert$ and $\Vert \,.\, \Vert_F$ will denote the 2-norm and the Frobenius norm, respectively.
\section{Solutions via the matrix exponential approximation}
In this section, we give a new approach for computing approximate solutions to large differential Sylvester equations \eqref{sylv1}. \\
We recall that the exact solution to \eqref{sylv1} can be expressed as follows
\begin{equation}\label{solexacte3}
X(t)=e^{(t-t_0)A}X_0e^{(t-t_0)B}+\int_{t_0}^t e^{(t-\tau)A}\, EF^Te^{(t-\tau)B}\, d\tau.
\end{equation}
For our first approach, we use this expression of $X(t)$ to obtain low rank approximate solutions. We first approximate the factors $ e^{(t-\tau)A}E$ and $ e^{(t-\tau)B^T}F$ and then use a quadrature method to compute the desired approximate solution. As the matrices $e^{(t-\tau)A}$ and $ e^{(t-\tau)B^T}$ are large and could be dense even though $A$ and $B$ are sparse, computing these exponentials explicitly is not recommended. However, in our problem, the matrices $e^{(t-\tau)A}$ and $ e^{(t-\tau)B^T}$ are not needed explicitly, as we will rather consider the products $e^{(t-\tau)A}\, E$ and $ e^{(t-\tau)B^T}F$, for which approximations via projection methods onto block or extended block Krylov subspaces are well suited.
\\
In what follows, we consider projections onto extended block Krylov (or just block Krylov) subspaces.
Let $\mathcal{V}_m=[V_1,\ldots,V_m]$ and $\mathcal{W}_m=[W_1,\ldots,W_m]$ be the orthogonal matrices whose columns form an orthonormal basis of the subspace ${\cal K}_m(A,E)$ and ${\cal K}_m(B^T,F)$, respectively.
Following \cite{saad1,saad2,vorst1}, an approximation to $Z_A=e^{(t-\tau)A}\, E$ can be obtained as
\begin{equation}
\label{expA}
Z_{m,A}(\tau) = \mathcal{V}_m e^{(t-\tau)\mathcal{T}_{m,A}}\, \mathcal{V}_m^T E
\end{equation}
where $\mathcal{T}_{m,A}=\mathcal{V}^T_{m} A \mathcal{V}_m$. In the same way, an approximation to $ e^{(t-\tau)B^T}F$, is given by
\begin{equation}
\label{expB}
Z_{m,B}(\tau) = \mathcal{W}_m e^{(t-\tau)\mathcal{T}_{m,B}}\, \mathcal{W}_m^T F,
\end{equation}
where $\mathcal{T}_{m,B}=\mathcal{W}^T_{m} B^T \mathcal{W}_m$.
Therefore, the integrand in the expression \eqref{solexacte3} can be approximated as
\begin{equation}
\label{exp2}
e^{(t-\tau)A}EF^Te^{(t-\tau)B} \approx Z_{m,A}(\tau) Z_{m,B}(\tau)^T.
\end{equation}
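\noindent As an illustration, a possible implementation of the approximations \eqref{expA}, \eqref{expB} and \eqref{exp2} is sketched below in Python; the orthonormal bases $\mathcal{V}_m$ and $\mathcal{W}_m$ are assumed to have been produced beforehand by a (block or extended block) Arnoldi process, and all names are illustrative rather than the authors' code:
\begin{verbatim}
# Illustrative sketch: given orthonormal bases Vm (n x md) and Wm (p x md),
# approximate the two exponential factors and the integrand of the solution.
import numpy as np
from scipy.linalg import expm

def integrand_approx(A, B, E, F, Vm, Wm, t, tau):
    Tm_A = Vm.T @ A @ Vm        # projected matrix  T_{m,A}
    Tm_B = Wm.T @ B.T @ Wm      # projected matrix  T_{m,B}
    Zm_A = Vm @ expm((t - tau) * Tm_A) @ (Vm.T @ E)  # ~ e^{(t-tau)A} E
    Zm_B = Wm @ expm((t - tau) * Tm_B) @ (Wm.T @ F)  # ~ e^{(t-tau)B^T} F
    return Zm_A @ Zm_B.T        # approximates e^{(t-tau)A} E F^T e^{(t-tau)B}
\end{verbatim}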
If for simplicity, we assume that $X(0)=0$, an approximation to the solution of the differential Sylvester equation \eqref{sylv1} can be expressed as the product
\begin{equation}
\label{exp3}
X_m(t) = \mathcal{V}_m G_m(t) {\mathcal{W}_m}^T,\; t \in [t_0,\,T_f],
\end{equation}
where
\begin{equation}
\label{gm}
G_m(t) = \displaystyle \int_{t_0}^t Z_{m,A}(\tau) Z_{m,B}^T(\tau) d\tau,
\end{equation}
with $E_m=\mathcal{V}^T_{m} E$ and $F_m=\mathcal{W}^T_{m} F$. \\
\noindent The next result shows that the $md \times md$ matrix function $G_{m}$ is the solution of a low-order differential Sylvester matrix equation.
\begin{proposition}
Let $G_m(t)$ be the matrix function defined by \eqref{gm}, then it satisfies the following low-order differential Sylvester matrix equation
\begin{equation}\label{low2}
{\dot G}_m(t) = \mathcal{T}_{m,A} G_m(t) + G_m(t){\mathcal{T}_{m,B}}^T +E_mF_m^T,\;\; t \in [t_0,\,T_f].
\end{equation}
\end{proposition}
\medskip
{\bf Proof.}
The proof can be easily derived from the expression \eqref{gm} and the result of Theorem \ref{theo1}.
\medskip
\noindent As a consequence, introducing the residual $ R_m(t) = \displaystyle {\dot X}_m(t)-A\,X_m(t)-X_m(t)\,B- EF^T $ associated to the approximation $X_m(t)$, we have the following relation
\begin{eqnarray*}
{\cal V}^T_{m} R_m(t) {\cal W}_m &= & {\cal V}^T_m ({\dot X}_m(t) -AX_m(t)-X_m(t)B-EF^T) {\cal W}_m\\
& = & {\dot G}_m(t) -\mathcal{T}_{m,A} G_m(t) - G_m(t){\mathcal{T}_{m,B}}^T -E_mF_m^T\\
& = & 0,
\end{eqnarray*}
which shows that the residual satisfies a Petrov-Galerkin condition.\\
\noindent As mentioned earlier and for our first exponential-based approach, once $ Z_{m,A}(\tau)$ and $ Z_{m,B}(\tau)$ are computed, we use a quadrature method to approximate the integral \eqref{gm} in order to get an approximation of $G_m(t)$ and hence to compute $X_m(t)$ from \eqref{exp3}.
\noindent The computation of $ X_m(t) $
(and of $ R_m(t) $) becomes expensive as $ m $ increases. So, in
order to stop the iterations, one has to test if $ \parallel R_m(t)
\parallel < \epsilon $ without having to compute extra products
involving the matrices $ A $ and $B$. The next result shows how to compute
the norm of $ R_m(t) $ without forming the approximation $
X_m(t) $ which is computed in a factored form only when convergence
is achieved.
\medskip
\begin{proposition} \label{t2}
Let $ X_m(t) = {\cal V}_mG_m(t){\cal W}_m^T $ be the approximation obtained at step $ m $ by the block (or extended block) Arnoldi method. Then the residual $ R_m(t) $ satisfies the relation
\begin{equation}
\label{result2}
\parallel R_m(t) \parallel_F^2 = \parallel T_{m+1,m}^A {\bar G}_{m}(t) \parallel_F^2 + \parallel T_{m+1,m}^B {\bar G}_{m}(t) \parallel_F^2,
\end{equation}
and for the 2-norm, we have
\begin{equation}
\label{resultnorm2}
\parallel R_m(t) \parallel = \displaystyle \max \{ \parallel T_{m+1,m}^A {\bar G}_{m}(t) \parallel, \parallel T_{m+1,m}^B {\bar G}_{m}(t) \parallel\},
\end{equation}
where $ {\bar G}_m $ is the $ d \times md $ matrix corresponding to the last $ d$ rows of $ G_m $ where $d=2s$ when using the extended block Arnoldi algorithm and $d=s$ with the block Arnoldi algorithm.
\end{proposition}
\medskip
{\bf Proof.}
The proof comes from the fact that the residual $R_m(t)$ can be expressed as
\begin{equation}
\label{rm}
R_m(t) = {\cal V}_{m+1} \left (
\begin{array}{cc}
{\dot G}_m(t) -\mathcal{T}_{m,A} G_m(t) - G_m(t){\mathcal{T}_{m,B}}^T -E_mF_m^T & -T_{m+1,m}^B {\bar G}_{m}(t)\\
T_{m+1,m}^A {\bar G}_{m}(t) & 0
\end{array}
\right )\, {\cal W}_{m+1}^T,
\end{equation}
where $G_m(t)$ solves the low dimensional problem \eqref{low2}. Therefore, we get
\begin{eqnarray*}
\Vert R_m(t) \Vert_F^2 & = & \left \Vert \left (
\begin{array}{cc}
0 & -T_{m+1,m}^B {\bar G}_{m}(t)\\
T_{m+1,m}^A {\bar G}_{m}(t) & 0
\end{array}
\right ) \right \Vert_F^2\\
& = & \parallel T_{m+1,m}^A {\bar G}_{m}(t) \parallel_F^2 + \parallel T_{m+1,m}^B {\bar G}_{m}(t) \parallel_F^2.
\end{eqnarray*}
\noindent To prove the expression \eqref{resultnorm2} with the 2-norm, let us first remark that if $$M=\left ( \begin{array}{cc}
0 & M_1\\
M_2 & 0
\end{array}\right),\;\; {\rm then}\; \; M^TM=\left ( \begin{array}{cc}
M_1^T M_1 & 0\\
0 & M_2 ^T M_2
\end{array}\right),$$
which shows that the set of singular values of $M$ is the union of the singular values of $M_1$ and those of $M_2$, which implies that $$ \Vert M \Vert= \sigma_{max}(M)= \displaystyle \max\{ \sigma_{max}(M_1), \sigma_{max}(M_2)\}= \max\{\Vert M_1 \Vert, \Vert M_2 \Vert\}.$$
Therefore, using this remark and the fact that
$$\Vert R_m(t) \Vert = \left \Vert \left (
\begin{array}{cc}
0 & -T_{m+1,m}^B {\bar G}_{m}(t)\\
T_{m+1,m}^A {\bar G}_{m}(t) & 0
\end{array}
\right ) \right \Vert,$$
the result follows.\\
\medskip
\noindent
The approximate solution
$X_m(t)$ is computed only when convergence is achieved and in a factored form which is very important for storage requirements in large-scale problems. This procedure is described as follows.\\
Consider the singular value decomposition of the
matrix $G_m(t)=U\, \Sigma\, V^T$ where $\Sigma$ is the
diagonal matrix of the singular values of $G_m(t)$ sorted in
decreasing order. Let $U_l$ be the $md \times l$ matrix of the first $l$ columns of $U$
corresponding to the $l$ singular values of magnitude greater than
some tolerance $dtol$. We obtain the
truncated singular value decomposition $G_m(t) \approx U_l\, \Sigma_l\,
V_l^T$ where $\Sigma_l = {\rm diag}[\sigma_1, \ldots, \sigma_l]$.
Setting ${\widetilde Z}_{m,A}(t)={\cal V}_m \, U_l\, \Sigma_l^{1/2}$ and ${\widetilde Z}_{m,B}(t)={\cal W}_m \, V_l\, \Sigma_l^{1/2}$ , it
follows that
\begin{equation}
\label{approx}
X_m(t) \approx {\widetilde Z}_{m,A}(t) {\widetilde Z}_{m,B}(t)^T.
\end{equation}
Therefore, only the matrices ${\widetilde Z}_{m,A}(t)$ and ${\widetilde Z}_{m,B}(t)$ are needed.\\
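\noindent A minimal sketch of this truncation step, assuming $G_m(t)$, $\mathcal{V}_m$ and $\mathcal{W}_m$ are available (illustrative Python, not the authors' code), could read as follows:
\begin{verbatim}
# Truncated SVD of the small matrix G_m(t) and lifting of the factors
# with the Krylov bases, so that X_m(t) ~ ZA @ ZB.T.
import numpy as np

def low_rank_factors(Gm, Vm, Wm, dtol=1e-12):
    U, sig, Vh = np.linalg.svd(Gm)       # Gm = U diag(sig) Vh
    l = int(np.sum(sig > dtol))          # keep singular values above dtol
    Ul, Vl = U[:, :l], Vh[:l, :].T
    S_half = np.sqrt(sig[:l])
    ZA = Vm @ (Ul * S_half)              # ~ \tilde Z_{m,A}(t)
    ZB = Wm @ (Vl * S_half)              # ~ \tilde Z_{m,B}(t)
    return ZA, ZB
\end{verbatim}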
\noindent The following result shows that the approximation $X_m $ is an exact solution of a perturbed differential Sylvester equation.\\
\begin{proposition}\label{ppertu}
Let $X_m(t)$ be the approximate solution given by \eqref{exp3}. Then we have
\begin{equation}
\label{pertu}
\displaystyle {\dot X}_m(t)=(A-F_{m,A})\,X_m(t)+X_m(t)\,(B-{F}_{m,B})+EF^T.
\end{equation}
where $ F_{m,A} = V_{m+1}\,T_{m+1,m}^A\,V_{m}^T $ and $F_{m,B}=W_m(T_{m+1,m}^B)^T W_{m+1}^T$
\end{proposition}
\medskip
{\bf Proof.}
As $X_m(t)={\cal V}_m G_m(t) {\cal W}_m^T$, we have
\begin{equation}
\label{eq1}
{\dot X}_m(t)-(AX_m(t)+X_m(t)B+EF^T)= {\cal V}_m {\dot G}_m(t) {\cal W}_m^T-(A{\cal V}_m G_m(t) {\cal W}_m^T +{\cal V}_m G_m(t) {\cal W}_m^T B +EF^T).
\end{equation}
Now, using the fact that $$A\,{\cal V}_m = {\cal V}_{m}\, {\cal T}_{m,A} + V_{m+1}T_{m+1,m}^A {\widetilde E}_m^T, \; {\rm and} \; B^T\,{\cal W}_m = {\cal W}_{m}\,{\cal T}_{m,B}+W_{m+1}T_{m+1,m}^B {\widetilde E}_m^T,$$ equation \eqref{eq1} becomes
\begin{eqnarray*}
\label{eq2}
{\dot X}_m(t)-(AX_m(t)+X_m(t) B+EF^T) & = & {\cal V}_m \dot G_m(t) {\cal W}_m^T-( [{\cal V}_{m}\, {\cal T}_{m,A} + V_{m+1}T_{m+1,m}^A {\widetilde E}_m^T]G_m(t) {\cal W}_m^T\\ & + & {\cal V}_mG_m(t) [ {\cal W}_{m}\,{\cal T}_{m,B}+ W_{m+1}T_{m+1,m}^B {{\widetilde E}_m}^T]^T +EF^T ).
\end{eqnarray*}
Therefore
\begin{eqnarray*}
\label{eq3}
{\dot X}_m(t)-(AX_m(t)+X_m(t) B+EF^T) & = & {\cal V}_m [ {\dot G}_m(t) - {\cal T}_{m,A} G_m(t) -G_m(t) {\cal T}_{m,B}^T -E_mF_m^T] {\cal W}_m^T\\
& - & ( V_{m+1}T_{m+1,m}^A {\widetilde E}_m^T G_m(t) {\cal W}_m^T + {\cal V}_mG_m(t) {\widetilde E}_m(T_{m+1,m}^B)^T W_{m+1}^T ).
\end{eqnarray*}
On the other hand we have ${\cal V}_mG_m(t)=X_m(t) {\cal W}_m$, $G_m(t) {\cal W}_m^T= {\cal V}_m^TX_m(t)$, ${\cal V}_m{\widetilde E}_m=V_m$, ${\cal W}_m{\widetilde E}_m=W_m$ and $EF^T={\cal V}_m E_mF_m^T {\cal W}_m^T $. So using these relations and the fact that $G_m$ solves the low dimensional differential Sylvester equation \eqref{low2}, we obtain the desired result.
\medskip
\noindent The next result states that the error ${\cal E}_m(t)=X(t)-X_m(t)$ satisfies also a differential Sylvester matrix equation.
\medskip
\begin{proposition}\label{err1}
Let $X(t)$ be the exact solution of \eqref{sylv1} and let $X_m(t)$ be the approximate solution obtained at step $m$. The error ${\cal E}_m(t)=X(t)-X_m(t)$ satisfies the following equation
\begin{equation}\label{pertu2}
\displaystyle \dot {\cal E}_m(t)-A{\cal E}_m(t)-{\cal E}_m(t)B=F_{m,A}X_m(t)+X_m(t) F_{m,B}=-R_m(t),
\end{equation}
where $F_{m,A}$ and $F_{m,B}$ are defined in Proposition \ref{ppertu} and $R_m(t)= \displaystyle \dot{X}_m(t)-A\,X_m(t)-X_m(t)\,B- EF^T$.
\end{proposition}
\medskip
{\bf Proof.}
The result is easily obtained by subtracting the equation \eqref{pertu} from the initial differential Sylvester equation \eqref{sylv1}.\\
\noindent Notice that from Proposition \ref{err1}, the error ${\cal E}_m(t)$ can be expressed in the integral form as follows
\begin{equation}
\label{error3}
{\cal E}_m(t)=e^{(t-t_0)A}{ \cal E}_{m,0}e^{(t-t_0)B}-\int_{t_0}^t e^{(t-\tau)A}R_m(\tau)e^{(t-\tau)B}d\tau,\; t \in [t_0,\, T_f].
\end{equation}
where $\mathcal{E}_{m,0}=\mathcal{E}_m(t_0)$.
\medskip
\noindent Next, we give an upper bound for the norm of the error by using the 2-logarithmic norm defined by $\mu_2(A)=\displaystyle \lim_{h \rightarrow 0^+} \frac{\Vert I + hA \Vert_2-1}{h}=\displaystyle \frac{1}{2} \lambda_{max}(A+A^T)$. \\
\begin{proposition}
\label{t3}
Assume that the matrices $A$ and $B$ are such that $\mu_2(A)+\mu_2(B) \ne 0$. Then at step $m$ of the extended block Arnoldi (or block Arnoldi) process, we have the following upper bound for the norm of the error $\mathcal{E}_m(t) = X(t)-X_m(t)$,
\begin{equation}
\label{upperbound}
\parallel \mathcal{E}_m(t) \parallel \le \Vert \mathcal{E}_{m,0} \Vert e^{(t-t_0)(\mu_2(A)+\mu_2(B))}+ \alpha_m \frac{e^{(t-t_0)(\mu_2(A)+\mu_2(B))}-1}{ \mu_2(A)+\mu_2(B)},\\
\end{equation}
where $\alpha_m$ is given by $\alpha_m=\displaystyle
\max_{\tau \in [t_0,\, t]} \left( \displaystyle \max \{ \parallel T_{m+1,m}^A {\bar G}_{m}(\tau) \parallel_2, \parallel T_{m+1,m}^B {\bar G}_{m}(\tau) \parallel_2\} \right)$. The matrix $ {\bar G}_m $ is the $ d \times md $ matrix corresponding to the last $ d $ rows of $ G_m $.
\end{proposition}
\medskip
{\bf Proof.}
We first point out that $\parallel e^{tA} \parallel \le e^{\mu_2(A)t}$. Using the expression \eqref{error3} of $\mathcal{E}_m(t) $, we obtain the following relation
\begin{equation*}
\parallel \mathcal{E}_m(t) \parallel \le \Vert \displaystyle e^{(t-t_0)A}\mathcal{E}_{m,0}e^{(t-t_0)B} \Vert+ \displaystyle \int_{t_0}^t \Vert e^{(t-\tau)A}R_m(\tau)e^{(t-\tau)B} \Vert d \tau.
\end{equation*}
Therefore, using \eqref{error3} and the fact that $\parallel e^{(t-\tau)A} \parallel \le e^{(t-\tau) \mu_2(A)}$, we get
\begin{eqnarray*}
\parallel \mathcal{E}_m(t) \parallel & \le & \Vert {\cal E}_{m,0} \Vert e^{(t-t_0)(\mu_2(A)+\mu_2(B))} + \displaystyle \max_{\tau \in [t_0,t]} \parallel R_m(\tau ) \parallel \displaystyle \int_{t_0}^t e^{(t-\tau) \mu_2(A)} e^{(t-\tau) \mu_2(B)} d\tau\\
& = &\Vert \mathcal{E}_{m,0} \Vert e^{(t-t_0)(\mu_2(A)+\mu_2(B))} + \displaystyle \max_{\tau \in [t_0,t]} \parallel R_m(\tau ) \parallel e^{t(\mu_2(A)+\mu_2(B))} \displaystyle \int_{t_0}^t e^{-\tau (\mu_2(A)+\mu_2(B))} d\tau. \\
\end{eqnarray*}
Hence
\begin{equation}
\label{equp1}
\parallel \mathcal{E}_m(t) \parallel \le \Vert \mathcal{E}_{m,0} \Vert \displaystyle e^{(t-t_0)(\mu_2(A)+\mu_2(B))} + \displaystyle \max_{\tau \in [t_0,t]} \parallel R_m(\tau ) \parallel \displaystyle \frac{e^{(t-t_0)(\mu_2(A)+\mu_2(B))}-1}{ \mu_2(A)+\mu_2(B)}.
\end{equation}
Using the result of Proposition \ref{t2}, we obtain $\displaystyle \max_{\tau \in [t_0,t]} \parallel R_m(\tau ) \parallel=\alpha_m$ and then
$$\parallel \mathcal{E}_m(t) \parallel \le \Vert \mathcal{E}_{m,0} \Vert e^{(t-t_0)(\mu_2(A)+\mu_2(B))} + \alpha_m \, \displaystyle \frac{e^{(t-t_0)(\mu_2(A)+\mu_2(B))}-1}{ \mu_2(A)+\mu_2(B)}.$$
\medskip
\noindent Notice that if the matrices $A$ and $B$ are dissipative (\textit{i.e.}, the symmetric parts $A+A^T$ and $B+B^T$ are negative definite), then $\mu_2(A) <0$ and $\mu_2(B)<0$, which ensures that the condition of Proposition \ref{t3} is satisfied.
Notice also that since $R_m(\tau)=-F_{m,A}X_m(\tau)-X_m(\tau) F_{m,B}$, where $ F_{m,A} = V_{m+1}\,T_{m+1,m}^A\,V_{m}^T $ and $F_{m,B}=W_m(T_{m+1,m}^B)^T W_{m+1}^T$, we get
$$\Vert R_m(\tau) \Vert \le \displaystyle \max_{\tau \in [t_0,t]} \Vert {\bar G}_m(\tau)\Vert \, \left (\Vert T_{m+1,m}^A \Vert + \Vert T_{m+1,m}^B \Vert \right ).$$
Hence, replacing in \eqref{equp1}, we get the new upper bound
\begin{equation}
\label{equp2}
\parallel \mathcal{E}_m(t) \parallel \le \Vert \mathcal{E}_{m,0} \Vert e^{(t-t_0)(\mu_2(A)+\mu_2(B))} + \beta_m \displaystyle \frac{e^{(t-t_0)(\mu_2(A)+\mu_2(B))}-1}{ \mu_2(A)+\mu_2(B)},
\end{equation}
where $$\beta_m= \displaystyle \max_{\tau \in [t_0,t]} \Vert {\bar G}_m(\tau)\Vert \, \left(\Vert T_{m+1,m}^A \Vert + \Vert T_{m+1,m}^B \Vert \right).$$
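\noindent The quantities entering the bound \eqref{equp2} are inexpensive to evaluate. A small illustrative sketch is given below (assuming $\beta_m$ and $\Vert \mathcal{E}_{m,0}\Vert$ are known from the run and that $\mu_2(A)+\mu_2(B)\neq 0$; this is not the authors' code):
\begin{verbatim}
# Evaluate the 2-logarithmic norms and the bound of the form
# ||E_{m,0}|| e^{(t-t0) mu} + beta_m (e^{(t-t0) mu} - 1)/mu,  mu = mu2(A)+mu2(B).
import numpy as np

def mu2(M):
    """2-logarithmic norm: largest eigenvalue of the symmetric part."""
    return np.linalg.eigvalsh(0.5 * (M + M.T)).max()

def error_bound(A, B, E0_norm, beta_m, t, t0=0.0):
    mu = mu2(A) + mu2(B)                  # assumed nonzero
    growth = np.exp((t - t0) * mu)
    return E0_norm * growth + beta_m * (growth - 1.0) / mu
\end{verbatim}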
\noindent In Figure \ref{Figure1}, we compare the computed error to the two error upper bounds given by Formulae (\ref{upperbound}) and (\ref{equp2}) for $A$ and $B$ being two $100 \times 100$ matrices obtained by the finite differences discretization of linear differential operators on the unit square $[0,1]\times [0,1]$ with homogeneous Dirichlet boundary conditions. The matrices $E$ and $F$ were chosen as rank-2 matrices whose entries were randomly generated over the interval $[0,1]$. In order to compute the error, we took the approximate solution given by the integral form of the solution as a reference.
\begin{figure}[H]
\begin{center}
\includegraphics[width=12cm,height=7cm]{fig1.eps}
\caption{ Norm of the error \textit{vs} number of Arnoldi iterations $m$}\label{Figure1}
\end{center}
\end{figure}
\noindent We observe that the bound (\ref{upperbound}) stated in Proposition \ref{t3} is slightly better in this example.
\noindent Next, we give another upper bound for the norm of the error ${\cal E}_m(t) $ .
\medskip
\begin{proposition}
\label{tr4}
Let $X(t)$ be the exact solution to \eqref{sylv1} and let $X_m(t)$ be the approximate solution obtained at step $m$. Then we have
\begin{equation}
\label{err3}
\Vert {\cal E}_m(t) \Vert \le \Vert F\, \Vert e^{t\mu_2(B)} \, \Gamma_{1,m}(t) + \Vert E_m\, \Vert e^{t\mu_2(A)} \Gamma_{2,m}(t),
\end{equation}
where $$\Gamma_{1,m}(t) =\displaystyle \int_{t_0}^t e^{-\tau\mu_2(B)} \, \Vert Z_A(\tau)-Z_{m,A}(\tau) \Vert d \tau, \; \Gamma_{2,m}(t)=
\displaystyle \int_{t_0}^t e^{-\tau\mu_2(A)} \Vert Z_B(\tau)-Z_{m,B}(\tau) \Vert d \tau.$$
\end{proposition}
\medskip
{\bf Proof.}
From the expressions of $X(t)$ and $X_m(t)$, we have
\begin{equation}\label{errr3}
\Vert {\cal E}_m(t)\Vert = \left \Vert \int_{t_{0}}^t \left (Z_A(\tau)Z_B(\tau)^T -Z_{m,A}(\tau)Z_{m,B}(\tau)^T \right ) d \tau \right \Vert,
\end{equation}
where $Z_{m,A}(\tau)= {\cal V}_m e^{(t-\tau)\mathcal{T}_{m,A}}E_m$, $ Z_{m,B}(\tau)= {\cal W}_m e^{(t-\tau)\mathcal{T}_{m,B}}F_m$, $Z_A(\tau)=e^{(t-\tau)A}E$ and $Z_B(\tau)=e^{(t-\tau)B^T}F$. Then, using the relation
$$ Z_A(\tau)Z_B(\tau)^T -Z_{m,A}(\tau)Z_{m,B}(\tau)^T= (Z_A(\tau)-Z_{m,A}(\tau))Z_B^T+ Z_{m,A}(\tau) (Z_B(\tau)-Z_{m,B}(\tau))^T,$$ we obtain
\begin{eqnarray*}
\Vert Z_A(\tau)Z_B(\tau)^T -Z_{m,A}(\tau)Z_{m,B}(\tau)^T \Vert & \le & \Vert Z_B(\tau) \Vert\, \Vert (Z_A(\tau)-Z_{m,A}(\tau)) \Vert \\ & + & \Vert Z_{m,A}(\tau) \Vert \, \Vert (Z_B(\tau)-Z_{m,B}(\tau)) \Vert.
\end{eqnarray*}
Now as $\Vert Z_B(\tau) \Vert \le e^{(t-\tau)\mu_2(B)} \Vert F \Vert$ and since $\mu_2({\cal T}_{m,A}) \le \mu_2(A)$, we also have $\Vert Z_{m,A}(\tau) \Vert \le e^{(t-\tau)\mu_2({\cal T}_{m,A})} \Vert E_m \Vert \le e^{(t-\tau)\mu_2(A)} \Vert E_m \Vert$.
Using all these relations in \eqref{errr3}, we get
\begin{eqnarray*}
\label{err4}
\Vert {\cal E}_m(t) \Vert & \le & \int_{t_0}^t \left [e^{(t-\tau)\mu_2(B)} \Vert F\, \Vert Z_A(\tau)-Z_{m,A}(\tau) \Vert + e^{(t-\tau)\mu_2(A)} \Vert E_m \Vert\, \Vert Z_B(\tau)-Z_{m,B}(\tau)\Vert \right] d\tau\\
& \le & \Vert F\, \Vert e^{t\mu_2(B)} \, \int_{t_0}^t e^{-\tau\mu_2(B)} \, \Vert Z_A(\tau)-Z_{m,A}(\tau) \Vert d\tau \\
&+ & \Vert E_m\, \Vert e^{t\mu_2(A)} \, \int_{t_0}^t e^{-\tau\mu_2(A)}\Vert (Z_B(\tau)-Z_{m,B}(\tau)) \Vert d \tau,
\end{eqnarray*}
which ends the proof.\\
\noindent One can use some known results \cite{hochbruck,saad2} to derive upper bounds for $\Vert Z_A(\tau)-Z_{m,A}(\tau) \Vert$ and
$\Vert Z_B(\tau)-Z_{m,B}(\tau) \Vert $, when using Krylov or block Krylov subspaces. For general matrices $A$ and $B$, we can use the following result to get upper bounds for $\Vert Z_A(\tau)-Z_{m,A}(\tau) \Vert$ and
$\Vert Z_B(\tau)-Z_{m,B}(\tau) \Vert $.\\
\begin{proposition}
When using the extended block Arnoldi (or the block Arnoldi), we get the following upper bound for the exponential approximation error $ e_{m,A}(\tau)=Z_A(\tau)-Z_{m,A}(\tau) $:
\begin{equation}\label{erm}
\Vert e_{m,A}(\tau) \Vert \le \Vert T_{m+1,m}^A \Vert \int_0^{\tau} e ^{(u-\tau)\nu(A)} \Vert L_{m,A}(u) \Vert d u,
\end{equation}
where $L_{m,A}(u)={\widetilde E}_m^Te^{(t-u) {\cal T}_{m,A}}E_m$ and $\nu(A)=\lambda_{min} \left ( \displaystyle \frac{A+A^T}{2}\right)$.
\end{proposition}
\medskip
{\bf Proof.}
We have $$Z_A(\tau) = e^{(t-\tau) A} E,\;\; {\rm and} \;\; Z_{m,A}(\tau)={\cal V}_m e^{(t-\tau) {\cal T}_{m,A}} E_m.$$
Then ${Z^{\prime}_A}(\tau) = -Ae^{(t-\tau) A} E=-AZ_A(\tau)$,
and
$${Z^{\prime}_{m,A}}(\tau)=-{\cal V}_m{\cal T}_{m,A} e^{(t-\tau) {\cal T}_{m,A}} E_m=-[A{\cal V}_m -V_{m+1} T_{m+1,m}^A {\widetilde E}_m^T]e^{(t-\tau) {\cal T}_{m,A}}E_m.$$
Hence,
\begin{equation}
{Z^{\prime}_{m,A}}(\tau)= -AZ_{m,A}(\tau)+V_{m+1} T_{m+1,m}^A L_{m,A}(\tau),
\end{equation}
where $L_{m,A}(\tau)={\widetilde E}_m^Te^{(t-\tau) {\cal T}_{m,A}}E_m$.\\
Therefore, the error $e_{m,A}(\tau)=Z_A(\tau)-Z_{m,A}(\tau)$ is such that
$$e_{m,A}'(\tau)=-Ae_{m,A}(\tau)-V_{m+1} T_{m+1,m}^A L_{m,A}(\tau),$$
which gives the following expression of $e_{m,A}$:
\begin{equation}
e_{m,A}(\tau)= -\displaystyle \int_0^{\tau} e^{(u-\tau)A}V_{m+1} T_{m+1,m}^A L_{m,A}(u) d u.
\end{equation}
On the other hand, since $\tau-u >0$, it follows that
$$\Vert e^{(u-\tau)A} \Vert \le e^{(\tau-u)\mu_2(-A)}= e ^{(u-\tau)\nu(A)}.$$
Then, we get
$$\Vert e_{m,A}(\tau) \Vert \le \Vert T_{m+1,m}^A \Vert \int_0^{\tau} e ^{(u-\tau)\nu(A)} \Vert L_{m,A}(u) \Vert d u.$$
\noindent Notice that if $\nu(A)$ is not known but $\nu(A)\ge 0$ (which is the case when the symmetric part of $A$ is positive semidefinite), then we get the upper bound
\begin{equation}\label{erm2}
\Vert e_{m,A}(\tau) \Vert \le \Vert T_{m+1,m}^A \Vert \int_0^{\tau} \Vert L_{m,A}(u) \Vert d u.
\end{equation}
To define a new upper bound for the norm of the global error ${\cal E}_m(t)$, we can use the upper bounds for the errors $e_{m,A}$ and $e_{m,B}$ in the expression \eqref{err3} stated in Proposition \ref{tr4} to get
\begin{eqnarray*}
\Vert {\cal E}_m(t) \Vert & \le & \Vert F\, \Vert e^{t\mu_2(B)} \, \displaystyle \int_{t_0}^t e^{-\tau\mu_2(B)} \, \Vert e_{m,A}(\tau)\Vert d \tau \\ & + & \Vert E_m\, \Vert e^{t\mu_2(A)} \displaystyle \int_{t_0}^t e^{-\tau\mu_2(A)} \Vert e_{m,B}(\tau)\Vert d \tau,
\end{eqnarray*}
and then we obtain
\begin{eqnarray}
\label{globerr}
\Vert {\cal E}_m(t) \Vert & \le & \Vert F\, \Vert e^{t\mu_2(B)} \,\Vert T_{m+1,m}^A \Vert \displaystyle \int_{t_0}^t e^{-\tau\mu_2(B)} \, S_{m,A}(\tau) d \tau \\ & + & \Vert E_m\, \Vert e^{t\mu_2(A)} \Vert T_{m+1,m}^B \Vert\displaystyle \int_{t_0}^t e^{-\tau\mu_2(A)} S_{m,B} (\tau) d \tau,
\end{eqnarray}
where $S_{m,A}(\tau)=\displaystyle \int_0^{\tau} e ^{(u-\tau)\nu(A)} \Vert L_{m,A}(u) \Vert d u$ and $S_{m,B}(\tau)= \displaystyle \int_0^{\tau} e ^{(u-\tau)\nu(B)} \Vert L_{m,B}(u) \Vert d u$.\\
As $m$ is generally very small compared to $n$ and $p$, the factors $L_{m,A}$ and $L_{m,B}$ can be computed using Matlab functions such as {\tt expm}, and the integrals appearing in the right-hand sides of \eqref{erm} and \eqref{globerr} can be approximated via quadrature formulas.\\
We summarize the steps of our proposed first approach (using the extended block Arnoldi) in the following algorithm
\begin{algorithm}[h!]
\caption{The extended block Arnoldi (EBA-exp) method for DSE's}\label{algo_EBA_exp}
\begin{itemize}
\item Input $X_0=X(t_0)$, a tolerance $tol>0$, an integer $m_{max}$.
\item For $ m = 1,\ldots,m_{max} $
\begin{itemize}
\item Apply the extended block Arnoldi algorithm to $(A,E)$ and $(B^T,F)$ to get the orthonormal matrices ${\mathcal V}_m=[V_1,...,V_m]$ and ${\mathcal W}_m=[W_1,...,W_m]$ and the upper block Hessenberg matrices ${\mathcal T}_{m,A}$ and ${\mathcal T}_{m,B}$.
\item Set ${E}_m={\mathcal V}_m^TE$, ${F}_m={\mathcal W}_m^TF$ and compute $ Z_{m,A}(\tau)= e^{(t-\tau)\mathcal{T}_{m,A}}E_m$ and $ Z_{m,B}(\tau)= e^{(t-\tau)\mathcal{T}_{m,B}}F_m$ using the matlab function {\tt expm}.
\item Use a quadrature method to compute the integral \eqref{gm} and get an approximation of $G_m(t)$ for each $t \in [t_0,\, T_f]$.
\item If $\parallel R_m(t) \parallel = \displaystyle \max \{ \parallel T_{m+1,m}^A {\bar G}_{m}(t) \parallel, \parallel T_{m+1,m}^B {\bar G}_{m}(t) \parallel\} < tol$ stop and compute the approximate solution $X_m(t)$ in the factored form given by the relation \eqref{approx}.
\end{itemize}
\item End
\end{itemize}
\end{algorithm}
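\noindent A compact sketch of the inner step of Algorithm \ref{algo_EBA_exp}, using a composite trapezoidal rule for the integral \eqref{gm} and the residual test \eqref{resultnorm2}, could be written as follows (illustrative Python; the projected matrices ${\cal T}_{m,A}$, ${\cal T}_{m,B}$, $E_m$, $F_m$ and the blocks $T_{m+1,m}^A$, $T_{m+1,m}^B$ are assumed to be available, and all names are illustrative):
\begin{verbatim}
# Approximate G_m(t) by the trapezoidal rule applied to Z_{m,A} Z_{m,B}^T
# and evaluate the residual norm from the last d rows of G_m.
import numpy as np
from scipy.linalg import expm

def Gm_and_residual(Tm_A, Tm_B, Em, Fm, TA_next, TB_next, t, t0=0.0, nquad=50):
    taus = np.linspace(t0, t, nquad + 1)
    vals = [expm((t - tau) * Tm_A) @ Em @ (expm((t - tau) * Tm_B) @ Fm).T
            for tau in taus]
    Gm = np.trapz(np.array(vals), taus, axis=0)   # ~ int Z_{m,A} Z_{m,B}^T dtau
    d = TA_next.shape[1]                          # block size
    Gbar = Gm[-d:, :]                             # last d rows of G_m
    res = max(np.linalg.norm(TA_next @ Gbar, 2),
              np.linalg.norm(TB_next @ Gbar, 2))
    return Gm, res
\end{verbatim}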
\section{Projecting and solving the low dimensional problem}
\subsection{Low-rank approximate solutions}\label{ss3.1}
In this section, we show how to obtain low rank approximate solutions to the differential Sylvester equation \eqref{sylv1} by first projecting the initial problem directly onto block (or extended block) Krylov subspaces and then solving the obtained low dimensional differential problem.
We first apply the block Arnoldi algorithm (or the extended block Arnoldi) to the pairs $(A,E)$ and $(B^T,F)$ to get the orthonormal matrices ${\cal V}_m$ and ${\cal W}_m$, whose columns form orthonormal bases of the extended block Krylov subspaces ${\cal K}_m(A,E)$ and ${\cal K}_m(B^T,F)$, respectively. We also get the upper block Hessenberg matrices $ {\cal T}_{m,A}={\cal V}^T_m A {\cal V}_m $ and $ {\cal T}_{m,B}={\cal W}^T_mB^T {\cal W}_m $. \\ Let $X_m(t)$ be the desired low rank approximate solution given as
\begin{equation}\label{approx1}
X_m(t) = {\cal V}_m Y_m(t) {\cal W}_m^T,
\end{equation}
satisfying the Petrov-Galerkin orthogonality condition
\begin{equation}
\label{galerkin}
{\cal V}_m^T R_m(t) {\cal W}_m =0,\; t \in [t_0,\; T_f],
\end{equation}
where $R_m(t)$ is the residual $ R_m(t) = \displaystyle {\dot X}_m(t)-A\,X_m(t)-X_m(t)\,B- EF^T $. Then, from \eqref{approx1} and \eqref{galerkin}, we obtain the low dimensional differential Sylvester equation
\begin{equation}\label{lowsylv}
\displaystyle {\dot Y}_m(t)- {\cal T}_{m,A}\,Y_m(t)-Y_m(t)\,{\cal T}_{m,B}^T - E_mF_m^T=0,
\end{equation}
where $ { E}_m= {\cal V}_m^T\,E$ and $ { F}_m= {\cal W}_m^T\,F$. The obtained low dimensional differential Sylvester equation \eqref{lowsylv} is the same as the one given by \eqref{low2}. We now have to solve the latter differential equation by some integration method such as the well-known Backward Differentiation Formula (BDF) method \cite{butcher} or the Rosenbrock method \cite{butcher,rosenbrock}. \\ Notice that all the properties and results, such as the expressions of the residual norms or the upper bounds for the norm of the error given in the previous section, are still valid with this second approach. The two approaches only differ in the way the projected low dimensional differential Sylvester matrix equations are numerically solved.
\subsection{BDF for solving the low order differential Sylvester equation \eqref{lowsylv}}\label{projbdf}
We use the Backward Differentiation Formula (BDF) method for solving, at each step $m$ of the extended block Arnoldi (or block Arnoldi) process, the low dimensional differential Sylvester matrix equation \eqref{lowsylv}. We notice that BDF is especially used for the solution of stiff differential equations.\\ At each time $t_k$, let $Y_{m,k}$ denote the approximation of $Y_m(t_k)$, where $Y_m$ is a solution of (\ref{lowsylv}). Then, the new approximation $Y_{m,k+1}$ of $Y_m(t_{k+1})$ obtained at step $k+1$ by BDF is defined by the implicit relation
\begin{equation}
\label{bdf}
Y_{m,k+1} = \displaystyle \sum_{i=0}^{p-1} \alpha_i Y_{m,k-i} +h_k \beta {\mathcal F}(Y_{m,k+1}),
\end{equation}
where $h_k=t_{k+1}-t_k$ is the step size, $\alpha_i$ and $\beta$ are the coefficients of the BDF method as listed in Table \ref{tab1} and ${\mathcal F}(Y)$ is given by
$${\mathcal F}(Y)= {\cal T}_{m,A}\,Y+Y\,{\cal T}_{m,B}^T+\,E_m\,F_m^T.$$
\begin{table}[h!!]
\begin{center}
\begin{tabular}{c|cccc}
\hline
$p$ & $\beta$ &$\alpha_0$ & $\alpha_1$ & $\alpha_2$ \\
\hline
1 & 1 & 1 & &\\
2 & 2/3 & 4/3& -1/3 &\\
3 & 6/11 & 18/11 & -9/11 & 2/11\\
\hline
\end{tabular}
\caption{Coefficients of the $p$-step BDF method with $p \le 3$.}\label{tab1}
\end{center}
\end{table}
\noindent The approximation $Y_{m,k+1}$ solves the following matrix equation
\begin{equation*}
-Y_{m,k+1} +h_k\beta ({\cal T}_{m,A} Y_{m,k+1} + Y_{m,k+1} {\cal T}_{m,B}^T+ E_mF_m^T) + \displaystyle \sum_{i=0}^{p-1} \alpha_i Y_{m,k-i} = 0,
\end{equation*}
which can be written as the following Sylvester matrix equation
\begin{equation}
\label{sylvbdf}
\mathbb{T}_{m,A}\, Y_{m,k+1} + \,Y_{m,k+1} \mathbb{T}_{m,B}^T+ \mathbb{E}_{m,k+1}\, \mathbb{F}_{m,k+1}^T =0.
\end{equation}
We assume that at each time $t_k$, the approximation $Y_{m,k}$ is factorized as a low rank product
$Y_{m,k}\approx {\widetilde U}_{m,k} {{\widetilde V}_{m,k}}^T$, where ${\widetilde U }_{m,k}$ and ${\widetilde V }_{m,k}$ are low rank factors with $m_k$ columns, $m_k$ being small. In that case, the coefficient matrices appearing in \eqref{sylvbdf} are given by
$$\mathbb{T}_{m,A}= h_k\beta {\cal T}_{m,A} -\displaystyle \frac{1}{2}I;\;\; \mathbb{T}_{m,B}= h_k\beta {\cal T}_{m,B} -\displaystyle \frac{1}{2}I,$$
$$ \mathbb{E}_{m,k+1}=[\sqrt{h_k\beta} E_m^T, \sqrt{\alpha_0}{\widetilde U }_{m,k}^T,\ldots,\sqrt{\alpha_{p-1}} {\widetilde U }_{m,k+1-p}^T]^T\, $$
and
$$\mathbb{F}_{m,k+1}=[\sqrt{h_k\beta} F_m^T, \sqrt{\alpha_0}{\widetilde V }_{m,k}^T,\ldots,\sqrt{\alpha_{p-1}} {\widetilde V }_{m,k+1-p}^T]^T.$$
The Sylvester matrix equation \eqref{sylvbdf} can be solved by applying direct methods based on Schur decomposition such as the Bartels-Stewart algorithm \cite{bartels,gnv}. \\
Notice that we can also use the BDF method applied directly to the original problem \eqref{sylv1} and then at each iteration, one has to solve large Sylvester matrix equations which can be done by using Krylov-based methods as developed in \cite{Elguen,jbilou}.
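\noindent For illustration, one implicit-Euler (BDF(1)) step applied to the projected equation \eqref{lowsylv}, with the resulting Sylvester equation solved by a Bartels--Stewart-type solver, might be sketched as follows (Python; all inputs and names are assumptions, not the authors' code):
\begin{verbatim}
# One BDF(1) step:  (h T_A - I/2) Y + Y (h T_B^T - I/2) = -(Y_k + h E_m F_m^T).
import numpy as np
from scipy.linalg import solve_sylvester   # solves a X + X b = q

def bdf1_step(Tm_A, Tm_B, Em, Fm, Yk, h):
    TA = h * Tm_A - 0.5 * np.eye(Tm_A.shape[0])
    TB = h * Tm_B.T - 0.5 * np.eye(Tm_B.shape[0])
    Q = -(Yk + h * Em @ Fm.T)
    return solve_sylvester(TA, TB, Q)
\end{verbatim}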
\subsection{Solving the low dimensional problem with the Rosenbrock method}\label{projros1}
Applying the Rosenbrock method \cite{butcher,rosenbrock} to the low dimensional differential Sylvester matrix equation \eqref{lowsylv}, the new approximation $Y_{m,k+1}$ of $Y_m(t_{k+1})$ obtained at step $k+1$ is defined, in the particular case of ROS(2), by the relations\\
\begin{equation}\label{ros1}
Y_{m,k+1} =Y_{m,k}+ \displaystyle \frac{3}{2}K_1+ \frac{1}{2}K_2,
\end{equation}
where $K_1$ and $K_2$ solve the following Sylvester equations
\begin{equation}\label{ros2}
\widetilde {\mathbb{T}}_{m,A}K_1+K_1\widetilde {\mathbb{T}}_{m,B}= -\mathcal{F}(t_k,Y_{m,k}),
\end{equation}
and
\begin{equation}\label{ros3}
\widetilde {\mathbb{T}}_{m,A}K_2+K_2\widetilde {\mathbb{T}}_{m,B}= -\mathcal{F}(t_{k+1},Y_{m,k}+K_1)+ \displaystyle \frac{2}{h}K_1,
\end{equation}
where
$$\widetilde {\mathbb{T}}_{m,A}= \gamma {\mathcal{T}}_{m,A}-\displaystyle \frac{1}{2h}I\;\;{\rm and} \;\;\widetilde {\mathbb{T}}_{m,B} = \gamma {\mathcal{T}}_{m,B}^T-\displaystyle \frac{1}{2h}I,$$
and
$$\mathcal{F}(Y)= {\mathcal{T}}_{m,A}Y+Y {\mathcal{T}}_{m,B}^T+E_mF_m^T.$$
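\noindent A corresponding sketch of one ROS(2) step is given below; the parameter $\gamma$ is the usual Rosenbrock constant (taken here as $1+1/\sqrt{2}$, which is an assumption since it is not fixed above), and all inputs and names are illustrative:
\begin{verbatim}
# One ROS(2) step for the projected equation, solving the two Sylvester
# equations for K1 and K2 with scipy.linalg.solve_sylvester (a X + X b = q).
import numpy as np
from scipy.linalg import solve_sylvester

def ros2_step(Tm_A, Tm_B, Em, Fm, Yk, h, gamma=1.0 + 1.0 / np.sqrt(2.0)):
    TA = gamma * Tm_A - np.eye(Tm_A.shape[0]) / (2.0 * h)
    TB = gamma * Tm_B.T - np.eye(Tm_B.shape[0]) / (2.0 * h)
    F_of = lambda Y: Tm_A @ Y + Y @ Tm_B.T + Em @ Fm.T
    K1 = solve_sylvester(TA, TB, -F_of(Yk))
    K2 = solve_sylvester(TA, TB, -F_of(Yk + K1) + (2.0 / h) * K1)
    return Yk + 1.5 * K1 + 0.5 * K2
\end{verbatim}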
\noindent We summarize the steps of the second approach (using the extended block Arnoldi) in the following algorithm
\begin{algorithm}[h!]
\caption{The extended block Arnoldi (EBA) method for DSE's}\label{algo_EBA}
\begin{itemize}
\item Input $X_0=X(t_0)$, a tolerance $tol>0$, an integer $m_{max}$.
\item For $ m = 1,\ldots,m_{max} $
\begin{itemize}
\item Apply the extended block Arnoldi algorithm to the pairs $(A,E)$ and $(B^T,F)$ to compute the orthonormal bases ${\mathcal V}_m=[V_1,...,V_m]$ and ${\mathcal W}_m=[W_1,...,W_m]$ and also the upper block Hessenberg matrices ${\mathcal T}_{m,A}$ and ${\mathcal T}_{m,B}$.
\item Use the BDF or the Rosenbrock method to solve the low dimensional differential Sylvester equation
$$\displaystyle {\dot Y}_m(t)- {\cal T}_{m,A}\,Y_m(t)-Y_m(t)\,{\cal T}_{m,B}^T - E_mF_m^T=0,\; t \in [t_0,\,T_f]$$
\item If $\parallel R_m(t) \parallel < tol$ stop and compute the approximate solution $X_m(t)$ in the factored form given by the relation \eqref{approx}.
\end{itemize}
\item End
\end{itemize}
\end{algorithm}
\newpage
\section{Numerical examples}
In this section, we compare the approaches presented in this paper. The first one, the exponential approach (EBA-exp) summarized in Algorithm \ref{algo_EBA_exp}, is based on the approximation of the solution to \eqref{sylv1}, applying a quadrature method to compute the projected exponential form \eqref{gm}. We used a scaling and squaring strategy, implemented in the MATLAB \textbf{expm} function; see \cite{higham05,moler03} for more details. The second method (Algorithm 2) is based on the BDF integration method applied to the projected Sylvester equation as described in Section \ref{projbdf}. Finally, we considered the EBA-ROS(2) method as described in Section \ref{projros1}. The bases of the projection subspaces were generated by the extended block Arnoldi algorithm for all methods.
All the experiments were performed on a laptop with an Intel Core i7 processor and 8GB of RAM. The algorithms were coded in Matlab R2014b. \\
\noindent {\bf Example 1}.
For this example, the matrices $A \in \mathbb{R}^{n \times n}$ and $B\in \mathbb{R}^{p \times p}$ were obtained from the 5-point discretization of the operators
\begin{equation*}
\displaystyle{L_A=\Delta u-f_1(x,y)\frac{\partial u}{\partial x}+ f_2(x,y)\frac{\partial u}{\partial y}+g_1(x,y)},
\end{equation*}
and
\begin{equation*}
\displaystyle{L_B=\Delta u-f_3(x,y)\frac{\partial u}{\partial x}+ f_4(x,y)\frac{\partial u}{\partial y}+g_2(x,y)},
\end{equation*}
on the unit square $[0,1]\times [0,1]$ with homogeneous Dirichlet boundary conditions. The numbers of inner grid points in each direction are $n_0$ for $A$ and $p_0$ for $B$, and the dimensions of the matrices $A$ and $B$ are $n = n_0^2$ and $p=p_0^2$, respectively. Here we set $f_1(x,y) = x+10y^2$, $\displaystyle {f_2(x,y)= \sqrt{2x^2+y^2}}$, $f_3(x,y) = x+2y$, $f_4(x,y)= e^{y-x}$, $g_1(x,y) = x^2-y^2$ and $g_2(x,y)=y^2-x^2$. The time interval considered was $[0,\,2]$ and the initial condition $X_0=X(0)$ was $X_0=Z_0Z_0^T$, where $Z_0=0_{n \times 2}$.\\
\noindent For all projection-based methods, we used projections onto the extended block Krylov subspaces ${\mathcal K}_m(A,E) = {\rm Range}(E,A\,E,\ldots,A^{m-1}\,E,A^{-1}\,E,\ldots,(A^{-1})^m\,E)$ and ${\mathcal K}_m(B^T,F)$, and the tolerance was set to $10^{-10}$ for the stopping test on the residual. For the EBA-BDF and Rosenbrock methods, we used a constant timestep $h$. The entries of the matrices $E$ and $F$ were random values uniformly distributed on the interval $[0, \, 1]$ and their rank was set to $s=2$. \\
To the authors' knowledge, there are no available exact solutions of large scale matrix Sylvester differential equations in the
literature. In order to check if our approaches produce reliable results, we first compared our results to the one given by Matlab's ode23s solver which is designed for stiff differential equations. This was done by vectorizing our DSE, stacking the columns of $X$ one on top of each other. This method is not suited to large-scale problems. Due to the memory limitation of our computer when running the ode23s routine, we chose a size of $100\times 100$ for the matrices $A$ and $B$.\\
\noindent In Figure \ref{Figure2}, we compared the component $X_{11}$ of the solution obtained by the methods tested in this section to the solution provided by the ode23s method from Matlab, on the time interval $[0,\,2]$, for matrices $A$ and $B$ of size $100\times 100$ and a constant timestep $h=10^{-2}$. We observe that all the considered methods give good results in terms of accuracy. The relative error norms at final time $T_f=2$ were of order $\mathcal{O} (10^{-10})$ for the EBA-exp method and $\mathcal{O} (10^{-12})$ for the others. The runtimes were respectively 0.6s for EBA-exp, 7.3s for EBA-BDF(1), 20.8s for EBA-BDF(2) and 29.2s for EBA-ROS(2). The ode23s routine required 978s.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=16cm,height=11cm]{fig2.eps}
\caption{ Values of $X_{11}(t)$ for $t \in [0,\, 2]$}\label{Figure2}
\end{center}
\end{figure}
\noindent In Table \ref{tab2}, we give the obtained runtimes in seconds, the number of Arnoldi iterations and the Frobenius residual norm at final time, for the resolution of Equation \eqref{sylv1} for $t \in [0,\, 2]$, with a timestep $h=0.01$.
\begin{table}[h!]
\begin{center}
\begin{tabular}{c | c c c c }
&EBA-exp&EBA-BDF(1)&EBA-BDF(2)&EBA-ROS(2)\\
\hline
$n,p=2500,2500$&$3.8$s $(m=16)$&$6.1$s $(m=18)$&$13.6$s $(m=18)$&$28.8$s $(m=23)$\\
$\|R_m(T_f)\|_F$&$1.04\times 10^{-8}$ &$2.45\times 10^{-10}$&$2.45\times 10^{-10}$&$3.05\times 10^{-10}$\\
\hline
$n,p=10000,10000$&$35.2$s $(m=22)$&$38.4$s $(m=25)$&$80.3$s $(m=25)$&$104.7$s $(m=33)$\\
$\|R_m(T_f)\|_F$&$4.4\times 10^{-9}$ &$4.1\times 10^{-11}$&$4.2\times 10^{-11}$&$5.8\times 10^{-11}$\\
\hline
$n,p=22500,10000$&$137.3$s $(m=22)$&$166.5$s $(m=30)$&$342.3$s $(m=30)$&$246$s $(m=35)$\\
$\|R_m(T_f)\|_F$&$4.2\times 10^{-8}$ &$3.7\times 10^{-11}$&$3.6\times 10^{-11}$&$1.78\times 10^{-9}$\\
\hline
\end{tabular}
\caption{Runtimes in seconds and the residual norms}\label{tab2}
\end{center}
\end{table}
\noindent The results in Table \ref{tab2} show that the EBA-exp method is outperformed by the other approaches in terms of accuracy, although it obtains an acceptable approximation more quickly. The EBA-BDF(1) method appears to be the best option in terms of both time and accuracy.\\
\noindent {\bf Example 2}
In this second example, we considered the particular case \begin{equation}\label{sylvrail}
\left\{
\begin{array}{l}
\dot X(t)=A(t)\,X(t)+X(t)\,A(t)-E(t)F(t)^T;\; (DSE) \\
\;X(t_0)=X_0,\; \; t \in [t_0, \, T_f],
\end{array}
\right.
\end{equation}
where the matrix $A=Rail1357$ was extracted from the IMTEK collection Optimal Cooling of Steel Profiles \footnote{https://portal.uni-freiburg.de/imteksimulation/downloads/benchmark}. We compared the EBA-BDF(1) method to the EBA-exp and EBA-ROS(2) methods for the problem size $n=1357$ on the time interval $[0\,,2]$. The initial value $X_0$ was chosen as $X_0=0$ and the timestep was set to $h=0.001$. The tolerance for the EBA stopping test was set to $10^{-7}$ for all methods and the projected low dimensional Sylvester equations were numerically solved by the solver {\tt lyap} from Matlab at each iteration of the extended block Arnoldi algorithm for the EBA-BDF(1), EBA-BDF(2) and EBA-ROS(2) methods. As the size of the coefficient matrices allowed it, we also computed an approximate solution of \eqref{sylvrail} by applying a quadrature method to the integral form of the exact solution given by Formula \eqref{solexacte2} and took it as a reference solution.
In Table \ref{tab3}, we report the runtimes in seconds, the number $m$ of Arnoldi iterations and the Frobenius norm $\|\mathcal{E}_m(T_f)\|_F$ of the error at final time.
\begin{table}[h!]
\begin{center}
\begin{tabular}{c | c c c c }
&EBA-exp&EBA-BDF(1)&EBA-BDF(2)&EBA-ROS(2)\\
\hline
Runtime (s)&$48.4$ s ($m=18$)&$471.9$ s ($m=18)$&$1549.2$s ($m=23$) & $1827$s ($m=21$)\\
$\|\mathcal{E}_m(T_f)\|_F$&$1.28 \times 10^{-10}$& $5 \times 10^{-5}$ & $1.48 \times 10^{-4}$&$4.9 \times 10^{-5}$\\
\hline
\end{tabular}
\caption{Optimal Cooling of Steel Profiles: runtimes, number of Arnoldi iterations and error norms }\label{tab3}
\end{center}
\end{table}
\noindent As can be seen from the reported results in Table \ref{tab3}, the EBA-exp method clearly outperforms all the other listed options.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=6cm,height=4cm]{fig3.eps}
\caption{ Residual norm \textit{vs} number $m$ of Arnoldi iterations}\label{Figure3}
\end{center}
\end{figure}
\noindent In Figure \ref{Figure3}, we plotted the Frobenius residual norm $\|R_m(T_f)\|_F$ at final time $T_f$ as a function of the number $m$ of Arnoldi iterations for the EBA-exp method.
\section{Appendix A}
Here we recall the extended block Arnoldi (EBA) and block Arnoldi (BA) algorithms, when applied to the pair $(A,E)$. EBA is described in Algorithm \ref{eba} as follows
\begin{algorithm}[h!]
\caption{The extended block Arnoldi algorithm (EBA)}\label{eba}
\begin{itemize}
\item Inputs: $ A $ an $ n \times n $ matrix, $ E $ an $ n \times s $ matrix and $ m $ an integer.
\item Compute the QR decomposition of $ [E,A^{-1}E] $,\textit{ i.e}., $ [E,A^{-1}E] = V_1\Lambda $; \\
\hspace*{1.2cm} Set $ {\mathcal V}_0 = \left [ ~\right] $;
\item For $ j = 1,\ldots,m $
\item \hspace*{0.4cm} Set $ V_j^{(1)} $: first $ s $ columns of $ V_j $ and $ V_j^{(2)} $: second $ s $ columns of $ V_j $
\item \hspace*{0.4cm} $ {\mathcal V}_j = \left [ {\mathcal V}_{j-1}, V_j \right ] $; $ \hat V_{j+1} = \left [ A\,V_j^{(1)},A^{-1}\,V_j^{(2)} \right ] $.
\item Orthogonalize $ \hat V_{j+1} $ w.r.t $ {\mathcal V}_j $ to get $ V_{j+1} $, \textit{i.e.},\\
\hspace*{1.8cm} For $ i=1,2,\ldots,j $ \\
\hspace*{2.4cm} $ H_{i,j} = V_i^T\,\hat V_{j+1} $; \\
\hspace*{2.4cm} $ \hat V_{j+1} = \hat V_{j+1} - V_i\,H_{i,j} $; \\
\hspace*{1.8cm} Endfor $i$
\item Compute the QR decomposition of $ \hat V_{j+1} $, \textit{i.e.}, $ \hat V_{j+1} = V_{j+1}\,H_{j+1,j} $.\\
\item Endfor $j$.
\end{itemize} ${}$
\end{algorithm}
\vskip0.3cm
The block Arnoldi algorithm is summarized in Algorithm \ref{ba} as follows
\begin{algorithm}[h!]
\caption{The block Arnoldi algorithm (BA)}\label{ba}
\begin{itemize}
\item Inputs: $ A $ an $ n \times n $ matrix, $ E $ an $ n \times s $ matrix and $ m $ an integer.
\item Compute the QR decomposition of $ E $,\textit{ i.e}., $ E= V_1 R_1 $.
\item For $j=1,\ldots,m$
\begin{enumerate}
\item $W_j=AV_j$,
\item for $i=1,2,\ldots,j$
\begin{itemize}
\item $H_{i,j}=V_i^T\,W_j,$
\item $W_j=W_j-V_i\,H_{i,j},$
\end{itemize}
\item endfor
\item $Q_j R_j=W_j$ ($QR$ decomposition)
\item $V_{j+1}=Q_j$, and $H_{j+1,j}=R_j$.
\end{enumerate}
\item EndFor $j$
\end{itemize}
\end{algorithm}
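\noindent For illustration, a direct NumPy transcription of Algorithm \ref{ba} could read as follows (a sketch only, without deflation of rank-deficient blocks; names are illustrative):
\begin{verbatim}
# Block Arnoldi: orthonormal basis of K_m(A,E) by block Gram-Schmidt.
import numpy as np

def block_arnoldi(A, E, m):
    n, s = E.shape
    V1, _ = np.linalg.qr(E)                      # E = V_1 R_1
    blocks, H = [V1], np.zeros(((m + 1) * s, m * s))
    for j in range(m):
        W = A @ blocks[j]
        for i in range(j + 1):                   # orthogonalize against V_1..V_j
            Hij = blocks[i].T @ W
            H[i * s:(i + 1) * s, j * s:(j + 1) * s] = Hij
            W = W - blocks[i] @ Hij
        Q, R = np.linalg.qr(W)                   # W = V_{j+1} H_{j+1,j}
        H[(j + 1) * s:(j + 2) * s, j * s:(j + 1) * s] = R
        blocks.append(Q)
    Vm = np.hstack(blocks[:m])                   # orthonormal basis of K_m(A,E)
    return Vm, H[:m * s, :m * s]                 # H_m (= V_m^T A V_m here)
\end{verbatim}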
\medskip
\noindent Since the above algorithms implicitly involve a Gram-Schmidt process, the obtained blocks $ {\cal V}_m = \left [
V_1,V_2,\ldots,V_m \right ] $ ($ V_i \in \mathbb{R}^{n \times d} $), where $d=s$ for the block Arnoldi and $d=2s$ for the extended block Arnoldi, have their columns mutually orthogonal provided none of the upper triangular matrices $ H_{j+1,j} $ are rank deficient.
Hence, after $ m $ steps, Algorithm \ref{eba} and Algorithm \ref{ba} build orthonormal bases $ {\cal V}_m $ of the Krylov subspaces ${\cal K}_m(A,E)={\rm Range}(E,A\,E,\ldots,A^{m-1}\,E,A^{-1}\,E,\ldots,(A^{-1})^m\,E) $ or $ \mathbb{K}_{m}(A,E) ={\rm Range}(E,A\,E,\ldots,A^{m-1}\,E) $, respectively and a block upper Hessenberg matrix $ {\cal H}_m $ whose nonzero sub-blocks are the $ H_{i,j} $. Note that each submatrix $ H_{i,j} $ ($1 \le i \le j \le m $) is of order $ d$. \\
\noindent Let $ {\cal T}_m \in \mathbb{R}^{md \times md} $ be the
restriction of the matrix $ A $ to the extended Krylov subspace $
{\cal K}_m(A,E) $ (or to the block Krylov subspace $\mathbb{K}_{m}(A,E)$), i.e., $ {\cal T}_m = {\cal V}_m^T\,A\,{\cal
V}_m $. Then it can be shown that the matrix $ {\cal T}_m $ is
also block upper Hessenberg with $ d \times d $ blocks; see \cite{heyouni09,simoncini1}.
For the block Arnoldi algorithm, ${\cal T}_m={\cal H}_m$ while for the extended block Arnoldi algorithm, a recursion can be derived to compute $ {\cal T}_m $ from $
{\cal H}_m $ without requiring matrix-vector products with $A $, see \cite{simoncini1}. We notice that for large and unstructured problems, the inverse of the matrix $ A $ is not computed explicitly and in this
case we can use iterative solvers with preconditioners to solve
linear systems with $ A $.
\section{Conclusion}
We presented in this paper two new approaches for computing approximate solutions to large-scale differential Sylvester matrix equations. The first one comes naturally from the exponential expression of the exact solution and the use of techniques for approximating the exponential of a matrix times a block of vectors. The second approach first projects the initial problem onto a block Krylov (or extended block Krylov) subspace to obtain a low-dimensional differential Sylvester equation, which is then solved by the well-known BDF or Rosenbrock integration methods. We gave some theoretical results such as the exact expression of the residual norm and also upper bounds for the norm of the error. Numerical experiments show that both approaches are promising for large-scale problems, with a clear advantage for the EBA-exp method in terms of computation time, although the EBA-BDF(1) method offers a good balance between execution time and accuracy in some cases.
\bibliographystyle{plain}
| {
"timestamp": "2017-07-10T02:04:35",
"yymm": "1707",
"arxiv_id": "1707.02078",
"language": "en",
"url": "https://arxiv.org/abs/1707.02078",
"abstract": "In the present paper, we propose Krylov-based methods for solving large-scale differential Sylvester matrix equations having a low rank constant term. We present two new approaches for solving such differential matrix equations. The first approach is based on the integral expression of the exact solution and a Krylov method for the computation of the exponential of a matrix times a block of vectors. In the second approach, we first project the initial problem onto a block (or extended block) Krylov subspace and get a low-dimensional differential Sylvester matrix equation. The latter problem is then solved by some integration numerical methods such as BDF or Rosenbrock method and the obtained solution is used to build the low rank approximate solution of the original problem. We give some new theoretical results such as a simple expression of the residual norm and upper bounds for the norm of the error. Some numerical experiments are given in order to compare the two approaches.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Computational Krylov-based methods for large-scale differential Sylvester matrix problems",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126457229185,
"lm_q2_score": 0.7248702702332475,
"lm_q1q2_score": 0.7094396999858685
} |
https://arxiv.org/abs/math/9802108 | Partial norms and the convergence of general products of matrices | Motivated by the theory of inhomogeneous Markov chains, we determine a sufficient condition for the convergence to 0 of a general product formed from a sequence of real or complex matrices. When the matrices have a common invariant subspace $H$, we give a sufficient condition for the convergence to 0 on $H$ of a general product. Our result is applied to obtain a condition for the weak ergodicity of an inhomogeneous Markov chain. We compare various types of contractions which may be defined for a single matrix, such as paracontraction, $l$--contraction, and $H$--contraction, where $H$ is an invariant subspace of the matrix. | \section{Introduction}
\label{sec1}
\setcounter{equation}{0}
Recently there has been much interest in conditions for the
convergence
of infinite products of real or complex matrices. Several
investigations have concentrated
on products taken in one direction -- left or right,
see for example the recent papers by Beyn and Elsner
\cite{BeEl97} and Hartfiel and Rothblum \cite{HaRo97}.
However, in this paper,
we are concerned with {\it general products} formed from a
given infinite sequence
of matrices. These are defined further on in the paper
and they have previously been considered for nonnegative and for
stochastic matrices by Seneta in \cite[Chapters 3.1 and
4.6]{Sene81}.\\
Our principal result is a sufficient condition for the
convergence to $0$ of infinite general products of matrices.
We pay particular attention to the case where
there is a common invariant subspace for all the matrices
in the product. As a special case, we obtain a result on the
weak ergodicity of an inhomogeneous Markov chain.\\
The investigations described above are preceded
by a study of the
interrelations of several types of contractions which may be
defined for a single matrix.\\
The motivation for our study comes from the theory of
inhomogeneous
Markov chains, see \cite {Sene73} and \cite[Chapter 4]{Sene81}
for
much further
background material. This theory naturally leads to the study
of the convergence, ergodicity and weak ergodicity of products of
stochastic matrices.\\
We now describe our paper in more detail.
First, in Section 2, we sharpen several
observations
concerning paracontraction. This we shall achieve
by introducing a partial norm on the matrix restricted to an
invariant
subspace $H$ and the consequent notion
of $H$--{\it contraction}. We relate this concept to
the notion of paracontraction as introduced by Nelson and
Neumann in
\cite{NeNe87} and to the notion of $l$--contraction
recently introduced by Beyn and Elsner in \cite{BeEl97}.\\
Second, in Section 3,
which is essentially independent of Section 2,
we turn to our main results.
We formulate our sufficient conditions depending on norms
for the convergence to $0$ of infinite general products of
matrices
which are formed from a given
infinite sequence of matrices.
We then apply the results to examine the convergence to $0$
of infinite general
products on a common invariant subspace of the matrices.\\
In Section 4 we specialize
our results and we investigate
products of stochastic matrices to obtain a suffcient condition
for weak
ergodicity. The $\ell_1$ coefficient
of ergodicity due to Bauer, Deutsch, Stoer \cite{BaDeSt69} plays
a
special role here.\\
In Section 5
we give bounds on the algebraic and geometric multiplicities of
the eigenvalues of the restriction of a matrix to an invariant
subspace in terms of these quantities for the entire matrix.
These are related to familiar results on stochastic matrices.
\section{Partial norms and Paracontractions}
\label{sec2}
In this paper ${\rm I\kern-.25em F}$ will stand for the real field ${\rm I\kern-.25em R}$ or the
complex
field $\mathbb{C}$.
We begin by recalling the following definition:\\
\begin{defin}
\label{parcont} {\rm (Nelson and Neumann \cite{NeNe87}) Let $A\in
{\rm I\kern-.25em F}^{n,n}$
and let $\nu$
be a vector norm on ${\rm I\kern-.25em F}^{n}$. Then $A$ is {\it paracontracting
with
respect to} $\nu$ if
\begin{equation}
\label{para.cond}
\nu(Ax) \ < \nu(x) \ \ {\rm whenever } \ Ax \neq x.
\end{equation}
}
\end{defin}
In particular we see that one immediate consequence of the
definition is that
$\nu^0(A)\leq 1$,
where $\nu^0(A)$ denotes the operator norm corresponding to the
norm
$\nu$.
For a given vector norm $\nu$ on ${\rm I\kern-.25em F}^{n}$, let ${\cal N}_{\nu}$ denote the set of
all matrices in ${\rm I\kern-.25em F}^{n,n}$ that are paracontracting with respect to $\nu$.
Nelson and Neumann show in \cite{NeNe87} that if $A\in {\cal
N}_{\nu}$, then
$\lim_{i\rightarrow
\infty}A^i$ exists and that
$AB\in {\cal
N}_{\nu}$
if $A,B\in {\cal N}_{\nu}$.
\\
We now introduce the concept of a partial norm with respect to a
subspace
and the concept of H--contraction:\\
\begin{defin}
\label{Hcont}
{\rm
Let $H$ be a subspace of ${\rm I\kern-.25em F}^n$ and let $A \in {\rm I\kern-.25em F}^{nn}$ be
a matrix which leaves $H$ invariant.
Let $\nu$ be a norm on ${\rm I\kern-.25em F}^n$ and
define $\nu^0$ to be the operator norm induced by $\nu$.
Then
\begin{equation}
\label{Hcond}
\nu^{0}_{H}(A) \ = \ \sup_{0 \neq x\in
H}\frac{\nu(Ax)}{\nu(x)}.
\end{equation}
We call $\nu^{0}_{H}(A)$ a {\em partial norm of} $A$ {\em with
respect
to the invariant subspace} $H$ of $A$. }
\end{defin}
\begin{defin}
\label{nonexpan}
{\rm
The matrix $A\in {\rm I\kern-.25em F}^{n,n}$ is {\em nonexpansive} with respect
to the
vector norm
$\nu$ on ${\rm I\kern-.25em F}^{n}$ if
$\nu^{0}(A)\leq 1$. The matrix $A$
is an $H$--{\em
contractor} if
it is nonexpansive, $H$ is an invariant subspace of $A$, and
$\nu_{H}^{0}(A)<1$. }
\end{defin}
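To illustrate Definitions \ref{Hcont} and \ref{nonexpan} numerically, the
following Python sketch (an illustration only; the function name and the
choice of the Euclidean norm are ours) computes the partial norm
$\nu^{0}_{H}(A)$ for the Euclidean norm from an orthonormal basis of the
invariant subspace $H$:
\begin{verbatim}
import numpy as np

def partial_norm_l2(A, H_basis):
    """Partial norm nu^0_H(A) for the Euclidean norm; the columns of
    H_basis are assumed to span an A-invariant subspace H."""
    Q, _ = np.linalg.qr(H_basis)          # orthonormal basis of H
    # sup_{x in H, ||x||_2 = 1} ||A x||_2 = largest singular value of A Q
    return np.linalg.norm(A @ Q, ord=2)

# Example: A is nonexpansive (nu^0(A) = 1) and leaves H = span{(1,-1)}
# invariant, contracting it by the factor 1/2, so A is an H-contractor.
A = np.array([[0.75, 0.25],
              [0.25, 0.75]])
H = np.array([[1.0], [-1.0]])
print(partial_norm_l2(A, H))              # 0.5
\end{verbatim}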
We shall
denote the range of $A \in {\rm I\kern-.25em F}^{nn}$ by ${\cal R}(A)$ and the
nullspace
of $A$ by ${\cal N}(A)$.
\begin{remark}
\label{complement}
{\rm If $A\in {\rm I\kern-.25em F}^{n,n}$ is nonexpansive with respect to $\nu$
then
the spectral radius $\rho(A)$ of $A$ satisfies
$\rho(A)\leq \nu^0(A) \leq 1$. If $1$ is an
eigenvalue of $A$, then we must have that $\nu^0(A)=\rho(A)$ and
it follows that all Jordan blocks of $A$
corresponding to $1$ are
$1\times 1$, (viz. index$_0(A) = 1$), e.g., Mott and Schneider
\cite{MoSc59}. Hence $K={\cal{N}}(I-A)$ and
$H={\cal R}(I-A)$ are
complementary invariant subspaces.}
\end{remark}
We next show the connection between paracontraction and
$H$--contraction.
\begin{lemma}
\label{lemma1}
Let $A\in {\rm I\kern-.25em F}^{n,n}$ be paracontracting with respect to
$\nu$ and let $K$ be the subspace of all its fixed points.
Then $H={\cal R}(I-A)$ is
invariant under $A$, complementary to $K$, and such that
$\nu^{0}_{H}(A)<1$
so that
$A$ is an
$H$--contractor.\\
\end{lemma}
\underline{\bf Proof}:\
It follows by Remark \ref{complement} that $H$ is invariant under
$A$
and complementary to $K$.
Since $A$ is paracontracting with respect
to $\nu$, we have that $\nu^0(A) \leq 1$.
In view
of \reff{para.cond}, it now follows that
$\nu(Ax)<\nu(x)$, for all
$ x \in H$, and so
since the
unit ball
${\cal U}$ of $\nu$ in $H$ is compact,
\begin{displaymath}
\max_{x\in {\cal U}}\nu(Ax) \ < \ 1.
\end{displaymath}
Hence $\nu_{H}^{0}(A)<1$.
{\hfill $\Box$}\\
Recently Beyn and Elsner \cite{BeEl97} have introduced the notion
of
$l$--paracontraction:\\
\begin{defin}
\label{elsner}
{\rm (\cite{BeEl97}) A matrix $A \in {\rm I\kern-.25em F}^{n,n}$ is
$l$--paracontracting
with respect to the vector norm $\nu$ on ${\rm I\kern-.25em F}^{n}$ if there exists
a
$\gamma>0$ such that
\begin{equation}
\label{beyn.elsner}
\nu(Ax) \ \leq \ \nu(x)-\gamma \nu(Ax-x), \ \ \forall x\in
{\rm I\kern-.25em F}^{n}.
\end{equation}
}
\end{defin}
In their paper Beyn and Elsner establish
several conditions which are equivalent to $l$--paracontraction.
Clearly $l$--paracontraction implies paracontraction.
We next
show that
H--contraction implies $l$--paracontraction
for a suitably chosen norm and a suitably chosen subspace $H$.\\
\begin{theorem}
\label{equiv}
Let $A\in {\rm I\kern-.25em F}^{n,n}$ and suppose that $A$ is nonexpansive
with respect to the norm
$\nu$
on ${\rm I\kern-.25em F}^n$.
Let $H={\cal R}(I-A)$ and let $K = {\cal N}(I-A)$.
If the norm $\nu$ satisfies
\begin{equation}
\label{additive}
\nu(y+z) = \nu(y) + \nu(z), \ {\rm for} \ y \in H, \ z \in K,
\end{equation}
then the following are equivalent:\\
{\rm (i)} $A$ is $l$--paracontracting with respect to $\nu$.\\
{\rm (ii)} $A$ is paracontracting with respect to $\nu$.\\
{\rm (iii)} $A$ is an $H$--contraction with respect to $\nu$.
\end{theorem}
\underline{\bf Proof}:\
(i) $\Longrightarrow$ (ii): Obvious.\\
(ii) $\Longrightarrow$ (iii): By Lemma \ref{lemma1}.\\
(iii) $\Longrightarrow$ (i): Since $A$ is nonexpansive, we have
by
Remark \ref{complement} that $K \oplus H = {\rm I\kern-.25em F}^n$.
For $x \in {\rm I\kern-.25em F}^n$, write
\begin{displaymath}
x \ = \ y + z, \ \ \ \ y \in H, \ z \in K.
\end{displaymath}
Let $k = \nu^0_H(A)$ and note that $k < 1$ by (iii).
Since $x - Ax$ = $y - Ay \in H$ it follows that
\begin{equation}
\label{inequ1}
\nu(x-Ax) = \nu((I-A)y) \ \leq \ \nu^0_H(I-A)\nu(y) \
\leq \ (1+k)\nu(y).
\end{equation}
Moreover, using \reff{additive}, we obtain
\begin{equation}
\label{inequ2}
\nu(x) -
\nu(Ax) \ = \
\nu(y+z)-
\nu(Ay+z) \ = \
\nu(y)-\nu(Ay) \ \geq \ (1-k)\nu(y).
\end{equation}
We combine \reff{inequ1} and \reff{inequ2} and we deduce that
\begin{displaymath}
\nu(x-Ax) \ \leq \ (1+k)\nu(y) \ \leq \ \frac{1+k}{1-k}
(\nu(x)-\nu(Ax)).
\end{displaymath}
and this is equivalent to \reff{beyn.elsner} with
$ \gamma = (1-k)/(1+k)$.
{\hfill $\Box$}\\
\begin{remark} {\rm
If $\nu'$ is a norm on ${\rm I\kern-.25em F}^n$, then we may define a norm $\nu$
that satisfies \reff{additive} and agrees with $\nu'$ on $H$ and
$K$
by setting
$\nu(y+z) = \nu'(y) + \nu'(z)$ for $y \in H$ and $z \in K$.}
\end{remark}
\begin{remark} {\rm
The implication (iii) $\Longrightarrow$ (ii) of Theorem
\ref{equiv}
holds under weaker assumptions on the norm $\nu$. Specifically,
if
$A$ is an $H$--contraction with respect to $\nu$ and the
condition
\begin{displaymath}
\nu(y) < \nu(y') \Rightarrow \nu(y+z) < \nu(y'+z), \ y \in H,\ z
\in K
\end{displaymath}
is satisfied, then $A$ can be shown to be paracontracting
with respect to $\nu$.}
\end{remark}
\section{Convergence of Infinite Products}
\label{sec3}
In this section we develop our main results concerning the
convergence of products of complex
matrices taken in an arbitrary order from an infinite sequence of
matrices.
Such products were considered (in a slightly less general form)
in Seneta \cite[Section 4.6]{Sene81}
in the case of stochastic matrices, see also \cite{Leiz92} and
\cite{Rhod97}.\\
Let $A_1,A_2, \ldots$
be a sequence of complex matrices. We shall consider
products of matrices obtained from the sequence in the following
manner: First choose
some permutation of the given infinite sequence to obtain a
sequence
$B_1,B_2,\ldots$. Then form the products $C_{p,r}$
of the matrices $B_{p+1},
\ldots,
B_{p+r}$ in some order. We shall call $C_{p,r}$ a {\em general
product} from
the sequence $A_1,A_2,\ldots$
and we shall consider the existence of
$\lim_{r\rightarrow\infty} C_{p,r}$.
If this limit is $0$,
for all permutations of $A_1, A_2, \ldots $ and all $p, \ p \geq
0$,
then we shall say that all general products from the
the sequence $A_1,A_2,\ldots$ converge to $0$.\\
As an example of a sequence of general products, suppose the chosen
order is
$A_9, A_7, A_5$, $A_{14}, A_2, \ldots$. Then
the sequence $(C_{2,1}, C_{2,2}, \ldots)$ may
begin thus:
\begin{displaymath}
\begin{array}{l}
C_{2,1} \ = \ A_7, \ \\
\ \\
C_{2,2} \ = \ A_7 A_5, \ \\
\ \\
C_{2,3} \ = \ A_5 A_2 A_7 ,\ \\
\ \\
C_{2,4} \ = \ A_2 A_7 A_{14} A_5.
\end{array}
\end{displaymath}
Note that, for a given sequence $C_{p,1}, C_{p,2}, \ldots$
of general products each factor of $C_{p,r}$ occurs in
$C_{p,r+1}$,
but the order in which the factors occur in $C_{p,r}$ is
arbitrary.\\
Let $\mu$ be a matrix norm (viz. a submultiplicative norm
on ${\rm I\kern-.25em F}^{nn}$)
and denote
\begin{displaymath}
\mu^+(P) \ = \ \max (\mu(P), 1) \;\;\;\;\;\;
\mbox{and}\;\;\;\;\;\;
\mu^-(P) \ = \ \min (\mu(P), 1).
\end{displaymath}
Now let $A_1,A_2,\ldots$ be a sequence of
matrices in ${\rm I\kern-.25em F}^{nn}$ and let $\mu$ be a matrix norm.
We now define two conditions:\\
\begin{description}
\item[{\fbox{Condition (C)}}]: We say that
the sequence $A_1,A_2,\ldots,$
satisfies Condition\\
{\bf (C)} for the norm $\mu$ if
\begin{equation}
\label{condC}
\sum_{i=1}^{\infty} (\mu^+(A_i)-1) \ {\rm converges}.
\end{equation}
\item[{\fbox{Condition (D)}}]: We say that the sequence
$A_1,A_2,\ldots$
satisfies
Condition\\
{\bf (D)} for the norm $\mu$ if
\begin{equation}
\label{condD}
\sum_{i=1}^{\infty} (1-\mu^-(A_i)) \
{\rm diverges. }
\end{equation}
\end{description}
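As a numerical illustration (not needed for the results below), the following
Python sketch evaluates truncated partial sums of the series appearing in
Conditions {\bf (C)} and {\bf (D)} for an explicit sequence, taking $\mu$ to
be the spectral norm; a finite truncation can only suggest, and not prove,
convergence or divergence:
\begin{verbatim}
import numpy as np

def mu(A):
    # spectral (operator 2-) norm, a submultiplicative matrix norm
    return np.linalg.norm(A, ord=2)

def partial_sums(matrices):
    """Truncated sums of the series in Conditions (C) and (D)."""
    mus = np.array([mu(A) for A in matrices])
    sum_C = np.sum(np.maximum(mus, 1.0) - 1.0)   # should stay bounded
    sum_D = np.sum(1.0 - np.minimum(mus, 1.0))   # should grow without bound
    return sum_C, sum_D

# A_i = (1 - 1/(2i)) times a rotation, so mu(A_i) = 1 - 1/(2i) < 1:
# Condition (C) holds trivially and the sums in (D) grow like a harmonic
# series, so all general products from this sequence converge to 0.
def A(i):
    c, s = np.cos(1.0 / i), np.sin(1.0 / i)
    return (1.0 - 0.5 / i) * np.array([[c, -s], [s, c]])

print(partial_sums([A(i) for i in range(1, 2001)]))
\end{verbatim}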
We are now ready to prove the following result:\\
\begin{prop}
\label{prop.bdd}
Let $A_1,A_2,\ldots$ be a sequence of matrices in ${\rm I\kern-.25em F}^{nn}$.
Let $\mu$ be a matrix norm on ${\rm I\kern-.25em F}^{nn}$. Suppose
that the sequence $A_1,A_2,\ldots$
satisfies Condition {\bf (C)} \ for the norm $\mu$.
Then all general products from $A_1,A_2,\ldots$ are bounded.
\end{prop}
\underline{\bf Proof}:\
Let $B_1,B_2, \ldots$ be a permutation of $A_{1}, A_2, \ldots $
and
let $C_{p,r}$ be a product of $B_{p+1}, \ldots, B_{p+r}$ in some
order.
By Condition \ {\bf (C)} and \cite[Theorem 14]{Hysl45},
$\sum_{i=1}^{\infty}(\mu^+(B_i)-1)$
converges and hence
$\sum_{i=1}^{\infty}(\mu^+(B_{p+i})-1)$ also converges.
Thus, by
\cite[Theorem 51]{Hysl45}, the product
$\prod_{i=1}^{\infty} \mu^+(B_{p+i})$
converges and so there exists a positive constant
$M$ such that $\prod_{i=1}^r \mu^+(B_{p+i})\leq M$, for each $r
\in
\{1,2,\ldots\}$.
It follows that
\begin{equation}
\label{eqn1}
\begin{array}{lll}
\mu\left(C_{p,r}\right ) & \leq & \mu\left( B_{p+1} \right)
\cdots
\mu\left(B_{p+r}\right)
\\
& \ & \ \\
& = & \left[\prod_{i=1}^{r}
\mu^-\left(B_{p+i}\right)\right]
\left[\prod_{i=1}^{r}
\mu^+\left(B_{p+i}\right)\right]
\\
\ & \ & \ \\
& \leq & M \left[\prod_{i=1}^{r}
\mu^-\left(B_{p+i}\right)\right]
\\
\ & \ & \ \\
& \ \leq & M.
\end{array}
\end{equation}
{\hfill $\Box$}\\
The above proposition allows us to prove a stronger result under
an additional condition. Note that in the theory of infinite
products
of nonnegative numbers it is customary to speak of {\it
divergence} to
$0$, see e.g. \cite[p. 93]{Hysl45}. \\
\begin{theorem}
\label{theorem.conv}
Let $A_1,A_2,\ldots$ be a sequence of matrices in ${\rm I\kern-.25em F}^{nn}$.
Let $\mu$ be a matrix norm on ${\rm I\kern-.25em F}^{nn}$. Suppose
that the sequence $A_1,A_2,\ldots$
satisfies Conditions {\bf (C)} \ and {\bf (D)} \
for the norm $\mu$.
Then all general products from $A_1,A_2,\ldots$ converge to $0$.
\end{theorem}
\underline{\bf Proof}:\
Let $B_1,B_2, \ldots$ be a permutation of $A_{1}, A_2, \ldots $
and
let $C_{p,r}$ be a product of $B_{p+1}, \ldots, B_{p+r}$ in some
order.
As in the proof of Proposition \ref{prop.bdd}, we have that
\begin{equation}
\label{eqn2}
\mu\left(C_{p,r}\right )
\ \leq \
M \left[\prod_{i=1}^{r}
\mu^-\left(B_{p+i}\right)\right].
\end{equation}
By Condition {\bf (D)},
the sum $\sum_{i=1}^{\infty} (1-\mu^-(A_i))$
diverges and so, by \cite[Theorem 14]{Hysl45},
$\sum_{i=1}^{\infty} (1-\mu^-(B_i))$ diverges.
Thus
$\sum_{i=1}^{\infty} (1-\mu^-(B_{p+i}))$ also diverges.
We again apply \cite[Theorem 51]{Hysl45}, and we obtain that
$\prod_{i=1}^{\infty} \mu^-(B_{p+i})$ diverges.
But since $\mu^-(B_i) \leq 1$, the last product must diverge to
$0$
and the proof is done.
{\hfill $\Box$}\\
If $H$ is a subspace of ${\rm I\kern-.25em F}^n$ which is invariant under $A \in
{\rm I\kern-.25em F}^{nn}$,
we denote the restriction of $A$ to $H$ by $A_{|_H}$.
As an immediate application of Theorem \ref{theorem.conv} we
obtain the
following result:\\
\begin{cor}
\label{ergod.prod}
Let $A_1,A_2,\ldots$ be a sequence of matrices
and let $C_{p,1},
C_{p,2},\ldots$ be
a sequence of general products from $A_1,A_2,\ldots$.
Let $H$ be a subspace of ${\rm I\kern-.25em F}^n$ which is invariant
under each $A_i, \ i = 1,\ldots$. Let $\nu$ be a norm on ${\rm I\kern-.25em F}^n$.
Suppose that the sequence $((A_i)_{|H}) $ satisfies conditions
{\bf (C)} \
and {\bf (D)}\ for the norm $\nu^0_H$.
If $x \in H$, then
\begin{displaymath}
\lim_{r \rightarrow \infty} C_{p,r}x \ = \ 0 .
\end{displaymath}
\end{cor}
\underline{\bf Proof}:\
Immediate by Theorem \ref{theorem.conv}.
{\hfill $\Box$}\\
Since $\nu^0_H(A) \leq \nu^0(A)$ when $H$ is an invariant
subspace of
$A$, it follows easily that under the conditions of
Corollary \ref{ergod.prod},
Condition {\bf (C)} (resp. Condition {\bf (D)})
on the sequence of $(A_i)$ implies
Condition {\bf (C)} (resp. Condition {\bf (D)})
on the sequence of
$((A_i)_{|H})$.
\section{Applications to Stochastic Matrices}
\label{sec4}
In this section we apply the foregoing results
to stochastic matrices. In
order to be consistent with our previous section
we consider {\em column} stochastic matrices. Thus ``stochastic
matrix''
will mean ``column stochastic matrix''.\\
Let $e = (1,\ldots,1)^T \in {\rm I\kern-.25em R}^n$. Accordingly,
in this section we shall assume
that
\begin{equation}
\label{partic.H}
H \ = \ \{x \in {\rm I\kern-.25em R}^n \ : \ e^Tx= 0 \}.
\end{equation}
If $A$ is a stochastic matrix in ${\rm I\kern-.25em R}^{n,n}$, then $H$
is invariant under $A$, but note that $H$ need not be a
complement
to the space of fixed points of $A$.
If $\nu$ is a norm on ${\rm I\kern-.25em R}^n$ and $H$ is given by \reff{partic.H},
then
the corresponding partial norm $\nu_H^0$ on
${\rm I\kern-.25em R}^{nn}$ will be called a {\em coefficient of ergodicity} as is
usual in the
literature on Markov chains.\\
The $\ell_1$ norm on ${\rm I\kern-.25em R}^n$ plays a special
role in this theory and we shall denote it henceforth by $ \omega $.
The corresponding coefficient of ergodicity was apparently first
computed
in Bauer, Deutsch, and Stoer \cite{BaDeSt69}, see also
\cite{Zeng72},
and equals
\begin{displaymath}
\omega _{H}^{0}(A) \ = \ (1/2) \max_{i,k \in \{1,\dots,n\}} \sum_{j=1}^n
|a_{j,i}-a_{j,k}|.
\end{displaymath}
It is known that $ \omega _{H}^{0}(A) \leq 1$ for all stochastic
matrices $A$
and $ \omega _{H}^{0}$
is the only coefficient of ergodicity that satisfies this
inequality,
\cite{KuRh90} and \cite{Lesa90},
but see also \cite{Rhod97}.\\
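For column stochastic matrices this coefficient can be computed as one half of
the largest $\ell_1$ distance between two columns; the following Python sketch
(illustrative only, with names of our choosing) does exactly this:
\begin{verbatim}
import numpy as np

def ergodicity_coefficient(A):
    """l1 coefficient of ergodicity omega^0_H of a column stochastic A:
    half the largest l1 distance between two of its columns."""
    n = A.shape[1]
    return 0.5 * max(np.abs(A[:, i] - A[:, k]).sum()
                     for i in range(n) for k in range(n))

# Column stochastic example with coefficient strictly less than 1.
A = np.array([[0.6, 0.3, 0.5],
              [0.3, 0.5, 0.2],
              [0.1, 0.2, 0.3]])
print(A.sum(axis=0))               # each column sums to 1
print(ergodicity_coefficient(A))   # 0.3
\end{verbatim}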
\begin{remark} {\rm
We comment that the only
stochastic matrix $A$ which is paracontracting with respect to
the $\ell_1$
norm $ \omega $
is the
identity matrix. This follows easily from the fact
that for any nonnegative vector $u$ with
$ \omega (u) = 1$, we also have that $ \omega (Au) = 1$.
On the other hand, every stochastic matrix that has a
row with two or more
positive elements
is H--contracting
for the subspace $H$ given
in \reff{partic.H} and the norm $ \omega $.
}\end{remark}
\begin{defin}
\label{weak}
{\rm
Let $P_1,P_2,\ldots$ be a sequence of $n \times n$ stochastic
matrices.
We shall say
that all general products
formed from
this sequence are {\em weakly ergodic} if for all general
products
$B_{p,1}, B_{p,2}, \ldots$, we have that
\begin{equation}
\label{weak.con}
\lim_{r \rightarrow \infty}B_{p,r}x \ = \ 0, \ \ {\rm for \ all} \ \
x \in H.
\end{equation}
}
\end{defin}
Since every $x \in H$ can be written $x = c(u - v)$, where $u,v
\in {\rm I\kern-.25em R}^n$
are nonnegative and $e^Tu = e^Tv = 1$ and $c \in {\rm I\kern-.25em R}$, it is
easily seen
that, for each product considered, our definition is equivalent
to that in
\cite{Hajn58}, \cite{MoSc57}, or \cite[Defn. 3.3]{Sene81}.\\
By Theorem \ref{theorem.conv} we now immediately obtain:\\
\begin{theorem}
\label{wkerg}
Let $\nu$ be a norm on ${\rm I\kern-.25em R}^n$ and let
$\nu^0_H$ be the corresponding coefficient of ergodicity.
Let $P_1,P_2,\ldots$ be a sequence of $n \times n$ stochastic
matrices.
Then all general products formed from
this sequence are weakly ergodic if
\begin{equation}
\label{condCW}
\sum_{i=1}^{\infty} ((\nu^0_H)^+(P_i)-1) \ {\rm converges}
\end{equation}
and
\begin{equation}
\label{condDW}
\sum_{j=1}^{\infty} (1-(\nu^0_H)^-(P_{i_j})) \
{\rm diverges \ for\ some \ subsequence} \ P_{i_1},P_{i_2},\ldots \ {\rm of} \
P_1,P_2,\ldots.
\end{equation}
\end{theorem}
Note that (\ref{condCW}) is automatically satisfied if $\nu =
\omega $,
the $\ell_1$-norm. This special case of Theorem \ref{wkerg} is
observed in \cite[Exercise 4.36]{Sene81}.
Results related to this case (and which therefore
involve only (\ref{condDW}) explicitly) are to be found in
\cite[Theorem 1]{Sene73} and in \cite{MoSc57}. The theorem in
the
latter paper is
there illustrated by an
example of a sequence of stochastic matrices
that satisfies (\ref{condDW}) for
the norm $ \omega $, see \cite[p. 333]{MoSc57}.\\
The following corollary is due to Rhodius
\cite[Thm. 3, Part I]{Rhod97} in the case of $\nu = \omega $,
see Leizarowitz \cite[Thm A (i)]{Leiz92} for a
related result.
\begin{cor}
\label{accum}
Let $P_1, P_2, \ldots$ be a sequence of stochastic matrices and let $\nu$
be a norm on ${\rm I\kern-.25em R}^n$. If $\nu_H^0(P_i) \leq 1$ for all $i = 1,2, \ldots$, and
there exists a point of accumulation $c$ of the sequence
$\nu_H^0(P_1),\nu_H^0(P_2), \ldots$ such that $c < 1$, then all
general
products of the sequence are weakly ergodic.
\end{cor}
\underline{\bf Proof}:\
Clearly condition \reff{condCW} holds and
there is an infinite
subsequence $i_1 < i_2 < \cdots$ such that $\nu_H^0(P_{i_j}) <
(1+c)/2 < 1$,
$j = 1,2, \ldots$. Then condition \reff{condDW} holds for this
subsequence.
The result follows from Theorem \ref{wkerg}.
{\hfill $\Box$}\\
Another corollary of Theorem \ref{wkerg} is \cite[Thm A
(ii)]{Leiz92}:
\begin{cor}
\label{accum2}
Let $P_1, P_2, \ldots$ be a sequence of stochastic matrices and
let
$\nu$ be a norm on ${\rm I\kern-.25em R}^n$.
If all points of accumulation $c$ of the sequence
$\nu_H^0(P_1)$, \ $\nu_H^0(P_2), \ldots$ satisfy $c < 1$, then
all general
products of the sequence are weakly ergodic.
\end{cor}
\underline{\bf Proof}:\
Since the set of accumulation points of a bounded sequence is
compact,
there exists $d < 1$ such that only a finite number of terms of
the
sequence $\nu_H^0(P_1),\nu_H^0(P_2), \ldots$ exceed $d$. Hence
\reff{condCW}
and \reff{condDW} hold for the sequence of ergodicity
coefficients
and the corollary follows from Theorem \ref{wkerg}.
{\hfill $\Box$}\\
\section{Bounds for eigenvalues}
\label{sec5}
\begin{theorem}
\label{thm.bound}
Let $H$ be an invariant subspace for $A \in \mathbb{C}^{n,n}$. Let
$\alpha_{\lambda}(A)$
and $\gamma_{\lambda}(A)$ be the algebraic and geometric
multiplicities of $\lambda \in \mathbb{C}$ as an eigenvalue of $A$,
respectively. Then
\begin{displaymath}
\alpha_{\lambda}(A) \ \leq \ \alpha_{\lambda}(A_{|_H}) + n - \dim(H)
\end{displaymath}
and
\begin{displaymath}
\gamma_{\lambda}(A) \ \leq \ \gamma_{\lambda}(A_{|_H}) + n - \dim(H).
\end{displaymath}
Further, if $\nu$ is a norm on $\mathbb{C}^n$ and
$|\lambda| > \nu_H^0(A)$, then $\alpha_{\lambda}(A) \leq n - \dim(H)$ and
$\gamma_{\lambda}(A) \leq n - \dim(H)$.
\end{theorem}
\underline{\bf Proof}:\
By a proper choice of basis for $\mathbb{C}^n$ we may put $A$ in the form
\begin{equation}
\label{eqn3}
A \ = \ \pmatrix{A_{1,1} & 0 \cr A_{2,1} & A_{2,2} \cr}
\end{equation}
where $A_{2,2}$ represents the restriction $A_{|_H}$ (the last $\dim(H)$
basis vectors spanning $H$), and the first inequality follows immediately.\\
The second inequality is an immediate consequence of
\cite[Theorem
2.1]{MeRo77}, see also \cite[Cor. 5.8]{HeRoSc89}.\\
Finally, since all eigenvalues $\lambda$ of $A_{2,2}$ satisfy
$|\lambda| \leq \nu_H^0(A)$, the last part of the theorem
follows.
{\hfill $\Box$}\\
\begin{remark}
{\rm
Theorem \ref{thm.bound} implies several known results.
We now give an example below which could also be derived from
\cite[Theorem 2]{HaRo97}. Let $A$ be a stochastic matrix, let $H$
be
defined by
\reff{partic.H},
and suppose that a coefficient of ergodicity
satisfies $\nu^{0}_{H}(A) < 1$. Since, in this case, the matrix
$A_{1,1}$ in (\ref{eqn3}) is $1 \times 1$, it follows that
$1$ is an
algebraically simple eigenvalue of $A$ and all other eigenvalues
$\lambda$ of $A$ satisfy $\nu^{0}_{H}(A) \geq |\lambda|$. }
\end{remark}
\centerline{{\bf Acknowledgment}}
\vspace{.15in}
We would like to thank Wenchao Huang for some helpful remarks.
We thank Olga Holtz for her careful reading of the manuscript.\\
\bibliographystyle{plainxe}
| {
"timestamp": "1998-02-23T04:42:44",
"yymm": "9802",
"arxiv_id": "math/9802108",
"language": "en",
"url": "https://arxiv.org/abs/math/9802108",
"abstract": "Motivated by the theory of inhomogeneous Markov chains, we determine a sufficient condition for the convergence to 0 of a general product formed from a sequence of real or complex matrices. When the matrices have a common invariant subspace $H$, we give a sufficient condition for the convergence to 0 on $H$ of a general product. Our result is applied to obtain a condition for the weak ergodicity of an inhomogeneous Markov chain. We compare various types of contractions which may be defined for a single matrix, such as paracontraction, $l$--contraction, and $H$--contraction, where $H$ is an invariant subspace of the matrix.",
"subjects": "Rings and Algebras (math.RA); Probability (math.PR)",
"title": "Partial norms and the convergence of general products of matrices",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126525529014,
"lm_q2_score": 0.7248702642896702,
"lm_q1q2_score": 0.7094396991196658
} |
https://arxiv.org/abs/2205.13068 | Tight Lower Bounds on Worst-Case Guarantees for Zero-Shot Learning with Attributes | We develop a rigorous mathematical analysis of zero-shot learning with attributes. In this setting, the goal is to label novel classes with no training data, only detectors for attributes and a description of how those attributes are correlated with the target classes, called the class-attribute matrix. We develop the first non-trivial lower bound on the worst-case error of the best map from attributes to classes for this setting, even with perfect attribute detectors. The lower bound characterizes the theoretical intrinsic difficulty of the zero-shot problem based on the available information -- the class-attribute matrix -- and the bound is practically computable from it. Our lower bound is tight, as we show that we can always find a randomized map from attributes to classes whose expected error is upper bounded by the value of the lower bound. We show that our analysis can be predictive of how standard zero-shot methods behave in practice, including which classes will likely be confused with others. |
\section{Lower Bounds for Zero-Shot Learning with Attributes}\label{sec:adv}
In this section, we formally define our lower bound. Consider a PMF $p$ with support over $\{0,1\}^n \times [k]$. We say that $p$ satisfies the class-feature matrix $\bm{A}$ if (as in constraints \eqref{matrix-class-attribute-relation}) for each $i \in [n]$ and $j \in [k]$,
\begin{align}
\label{matrix-constraint}
\sum_{\substack{\bm{v} \in \{0,1\}^n : \\v_i = 1} }p(\bm{v},j) = A_{j,i} \sum_{\bm{v} \in \{0,1\}^n} p(\bm{v},j) \enspace .
\end{align}
Recall that $p$ is balanced if for each $j \in [k]$, it holds that $\sum_{\bm{v}} p(\bm{v},j) = 1/k$.
Let $\mathcal{P}({\bm{A}})$ be the set of all possible PMFs $p$ with support over $\{0,1\}^n \times [k]$ that satisfy \eqref{matrix-constraint} and are balanced. Clearly, the unknown true distribution, $p^* \in \mathcal{P}(\bm{A})$.
The set $\mathcal{P}({\bm{A}})$ can be interpreted as the collection of all the PMFs of the random vector $(\psi_1, \ldots, \psi_n, y)$ that satisfy the constraints imposed by the information available on the prediction task and on the attribute functions. While the matrix $\bm{A}$ provides precise information on the correlation between any pair of attribute function and class, it fails to provide information on the correlation between attribute functions, i.e., it does not fully specify the distribution $p^*$. Without additional information, any PMF in $\mathcal{P}(\bm{A})$ could be equal to $p^*$.
Similarly to $\eqref{error-of-a-distribution}$, given a PMF $p \in \mathcal{P}(\bm{A})$ and an attribute-class classifier $g \in \mathcal{G}$, we can define the error of $g$ with respect to the distribution $p$ as
\begin{align}
\label{error-distribution-general}
\varepsilon(g,p) \doteq 1 - \sum_{\bm{v}\in \{0,1\}^n} p(\bm{v},g(\bm{v})) \enspace .
\end{align}
Following the computation in $\eqref{optimal-minimum}$, the error of the best map from attributes to classes with respect to $p \in \mathcal{P}(\bm{A})$ is computed as
\begin{align}
\label{Q-computation}
Q(p) \doteq \min_{g \in \mathcal{G}}\varepsilon(g,p) = 1 - \sum_{\bm{v}\in \{0,1\}^n} \max_{j \in [k]} p(\bm{v},j) \enspace .
\end{align}
We are interested in the quantity
\begin{align}
\label{adversarial-quantity}
Q \doteq \max_{p \in \mathcal{P}(\bm{A})}Q(p)
\end{align}
i.e., $Q$ is the maximum over all distributions $p \in \mathcal{P}(\bm{A})$ of the error of the best algorithm for distribution $p$.
In other words, $Q$ is the worst Bayes error with respect to all the distributions that satisfy the constraints imposed by the class-attribute matrix and on the class probabilities.
Since $p^*$ can be any distribution in $\mathcal{P}(\bm{A})$, the value $Q$ is a lower bound on the best error rate that an algorithm can guarantee.
In fact, without further information on the attribute functions or the prediction task, it is possible that $p^*$ attains the maximum of $\eqref{adversarial-quantity}$, that is in the worst-case we have that
\begin{align*}
\varepsilon(g,p^*) = \varepsilon(g) \geq Q \hspace{10pt} \forall g \in \mathcal{G} \enspace .
\end{align*}
In other words, the quantity $Q$ reflects a worst-case scenario where the attribute functions are correlated in such a way that it is
hard to distinguish between the classes, even if the attribute functions still satisfy the constraints $\eqref{matrix-class-attribute-relation}$ given by the class-attribute matrix $\bm{A}$.
In \cref{subsection:tight}, we show that this lower bound is tight. In particular, we prove that there exists a randomized classifier from the attribute space $\{0,1\}^n$ to the classes $[k]$ whose expected error is at most $Q$ with respect to any distribution $p \in \mathcal{P}(\bm{A})$.
\textbf{Example.} Consider a balanced binary classification task with two attributes. The class-attribute matrix $\bm{A} \in \mathbb{R}^{2\times 2}$ is such that $A_{i,j}=1/2$ for $i,j \in \{1,2\}$. Based on this class-attribute matrix, we consider two different scenarios. In the first scenario (\textit{best-case}), items from the first class have either both attributes or none with probability $1/2$, and items from the second class have only either the first attribute or the second attribute with probability $1/2$. In this case, we can simply count the number of attributes that an item has to assign it to the correct class. In the second scenario (\textit{worst-case}), each item has either both attributes or none with probability $1/2$ independently from the item class. In this case, any mapping from the attributes to the classes incurs an error of $1/2$.
\subsection{Computing the Lower Bound}\label{sec:compute_bound}
In this subsection, we show how to compute $Q$ as in $\eqref{adversarial-quantity}$ through a Linear Program (LP). To describe a generic PMF $p$, we introduce $2^n \times k$ variables $q_{\bm{v},j}$ with $\bm{v} \in \{ 0, 1\}^n$ and $j \in [k]$. We use additional $2^n$ auxiliary variables $\lambda_{\bm{v}}$, for $\bm{v} \in \{0,1\}^n$, to denote the maximums of \eqref{Q-computation}, i.e. $\lambda_{\bm{v}} = \max_{j \in [k]} q_{\bm{v},j}$. The LP is formulated as follows.
\begin{align}
\label{lp-maximum}
&1-Q = \min \sum_{\bm{v}} \lambda_{\bm{v}}& \\
&(a) \sum_{\substack{ \bm{v} \in \{0,1\}^n :\\ v_i = 1}}q_{\bm{v},j} = A_{j,i}\sum_{\bm{v} \in \{0,1\}^n}q_{\bm{v},j} & \forall j \in [k], i \in [n] \nonumber \\
&(b) \sum_{\substack{ \bm{v} \in \{0,1\}^n }}q_{\bm{v},j} = \frac{1}{k} & \forall j \in [k] \nonumber \\
&(c) \hspace{5pt} \lambda_{\bm{v}} \geq q_{\bm{v},j} \geq 0 & \forall \bm{v} \in \{0,1\}^n, j \in [k] \nonumber
\end{align}
\begin{theorem}
\label{lp-theorem}
The optimal value of the LP \eqref{lp-maximum} is equal to $1-Q$, with $Q$ is as in \eqref{Q-computation}.
\end{theorem}
By removing or modifying constraint $(b)$ of the LP, it is possible to remove the assumption that the classes are balanced or provide different class weights. All the previous results still hold by changing the definition of $\mathcal{P}(\bm{A})$ accordingly. It is important to point out that since we are computing a worst-case lower bound, the class weights provide significant information. Without constraints on the class weights, the worst-case distribution could concentrate all the probability mass on few classes that are hard to differentiate using the available class-attribute matrix $\bm{A}$.
The LP has $O(k \cdot 2^n)$ variables and constraints, and therefore it is computationally expensive for a large number of attributes. The dependency on $2^n$ is required to describe all the possible correlations between the outputs of the $n$ attribute functions. Nevertheless, we present an efficient computation for the binary case and an efficient approximation for the general case.
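As an illustration, the following Python sketch (the helper name, the variable
layout, and the use of \texttt{scipy.optimize.linprog} are our own; it is a
sketch for small $n$, not an efficient implementation) solves the LP
\eqref{lp-maximum} and reproduces the value $Q = 1/2$ for the all-$1/2$
class-attribute matrix of the example above:
\begin{verbatim}
import itertools
import numpy as np
from scipy.optimize import linprog

def zero_shot_lower_bound(A):
    """Worst-case error Q from the LP above, for a class-attribute
    matrix A of shape (k, n) and balanced classes.  Exponential in n."""
    k, n = A.shape
    vs = list(itertools.product([0, 1], repeat=n))   # attribute vectors
    nv = len(vs)
    nq = nv * k               # q variables, index v*k + j, then lambda_v
    c = np.concatenate([np.zeros(nq), np.ones(nv)])  # minimise sum lambda_v
    A_eq, b_eq = [], []
    for j in range(k):
        for i in range(n):                           # constraints (a)
            row = np.zeros(nq + nv)
            for vi, v in enumerate(vs):
                row[vi * k + j] = (v[i] == 1) - A[j, i]
            A_eq.append(row); b_eq.append(0.0)
        row = np.zeros(nq + nv)                      # constraints (b)
        for vi in range(nv):
            row[vi * k + j] = 1.0
        A_eq.append(row); b_eq.append(1.0 / k)
    A_ub, b_ub = [], []
    for vi in range(nv):                             # constraints (c)
        for j in range(k):
            row = np.zeros(nq + nv)
            row[vi * k + j] = 1.0
            row[nq + vi] = -1.0
            A_ub.append(row); b_ub.append(0.0)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return 1.0 - res.fun                             # Q = 1 - optimum

# Two balanced classes, two attributes, all entries equal to 1/2:
print(zero_shot_lower_bound(np.array([[0.5, 0.5], [0.5, 0.5]])))   # 0.5
\end{verbatim}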
\subsection{Lower Bound for Binary Classification}
\label{binary-exact-computation}
In this subsection, we show how to efficiently compute $Q$ as in \eqref{adversarial-quantity} in the case of a binary classification task, i.e. $k=2$ and $\bm{A} \in [0,1]^{2 \times n}$. For ease of notation, let
\begin{align}
\label{binary-matrix}
\bm{A} = \begin{bmatrix}
\alpha_1 & \ldots & \alpha_n \\
\beta_1 & \ldots & \beta_n
\end{bmatrix} \enspace .
\end{align}
\begin{theorem}
\label{binary-adversarial-computation}
Consider a balanced binary classification task and let $\bm{A}$ be as in \eqref{binary-matrix}. Let $Q$ be as in \eqref{adversarial-quantity}. It holds that
$Q = \frac{1}{2}\left(1 - \max_{i \in [n]}| \beta_i - \alpha_i|\right)$. Moreover, let $g_a$ be the attribute-class classifier
\begin{align*}
g_a(\bm{v}) = \begin{cases}
v_{i^*} &\mbox{ if } \alpha_{i^*} > \beta_{i^*} \\
1 - v_{i^*} &\mbox{ if } \alpha_{i^*} \leq \beta_{i^*}
\end{cases}
\end{align*}
for each $\bm{v} \in \{0,1\}^n$, where $i^* = \operatorname*{\mathrm{arg\,max}}_{i}|\beta_i - \alpha_i|$ and $v_{i^*}$ is the $i^*$-th component of the vector $\bm{v}$. Then
$\varepsilon(g_a,p) = Q$ for all ${p} \in \mathcal{P}(\bm{A})$,
i.e. the lower bound $Q$ is tight.
\end{theorem}
The theorem shows that in the worst-case, the attributes could be correlated in such a way that it is not possible to do better than deciding solely based on the attribute with the largest gap between its probabilities in the two classes. This result also formally proves that for binary classification, the worst-case is determined by a single attribute, and there is no compounded benefit in having multiple attributes. This result is in line with other worst-case analyses in the context of weak supervision. In \citet{mazzetto2021semi}, it is noted that while combining the output of different weak supervision sources to obtain a noisy label of a given input item, in the worst-case one cannot do better than just using the most accurate weak supervision source without additional information. In \Cref{app:approximation-subsection}, we show how to approximate the lower bound in the multiclass setting by using \Cref{binary-adversarial-computation}.
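The closed form above is immediate to evaluate; the following Python sketch
(illustrative only, with our own names and with classes indexed by $0$ and $1$
instead of $[2]$) computes $Q$ and the single-attribute classifier $g_a$ of
\cref{binary-adversarial-computation} for a given $2 \times n$ class-attribute
matrix:
\begin{verbatim}
import numpy as np

def binary_lower_bound(A):
    """Q = (1/2)(1 - max_i |A[0,i] - A[1,i]|) and the classifier g_a
    for two balanced classes (indexed 0 and 1 here)."""
    gaps = np.abs(A[0] - A[1])
    i_star = int(np.argmax(gaps))
    Q = 0.5 * (1.0 - gaps[i_star])

    def g_a(v):
        # decide using only the attribute with the largest gap
        if A[0, i_star] > A[1, i_star]:
            return 0 if v[i_star] == 1 else 1
        return 1 if v[i_star] == 1 else 0

    return Q, g_a

A = np.array([[0.9, 0.6, 0.5],
              [0.2, 0.5, 0.5]])
Q, g_a = binary_lower_bound(A)
print(Q)              # 0.5 * (1 - 0.7) = 0.15
print(g_a((1, 0, 1))) # first attribute present -> class 0
\end{verbatim}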
\subsection{Lower Bound is Tight}\label{sec:tight}
In this subsection, we prove that the worst-case lower bound $\eqref{adversarial-quantity}$ is tight. We show a randomized attribute-class classifier whose expected error is upper bounded by $Q$ with respect to any distribution $p \in \mathcal{P}(\bm{A})$. This classifier can be computed only based on the class-attribute matrix, and it provides an upper bound to the error of the best map from attributes to classes that matches the lower bound.
We consider the family $\mathcal{G}_R$ of all randomized attribute-class classifiers, where each $g \in \mathcal{G}_R$ is a random map from $\{0,1\}^n$ to $[k]$. An attribute-class classifier in $\mathcal{G}_R$ is described with a right-stochastic matrix $\bm{W} \in [0,1]^{2^n \times k}$, where the rows are indexed by binary vectors $\bm{v} \in \{0,1\}^n$, and the columns are indexed by the classes $j \in [k]$. The entry $W_{\bm{v},j}$ represents the probability that the randomized classifier outputs $j$ given that the input is $\bm{v}$. We will use $g_{\bm{W}}$ to denote the randomized classifier in $\mathcal{G}_R$ that is described with the right-stochastic matrix $\bm{W}$. Given a PMF $p$ over $\{0,1\}^n \times [k]$, we define the expected error of $g_{\bm{W}}$ as
\begin{align}
\label{expected-loss-randomized}
\varepsilon(g_{\bm{W}}, p) &\doteq 1 - \Pr_{(\bm{v},j) \sim p}[ g_{\bm{W}}(\bm{v}) = j]
= 1-\sum_{\bm{v} \in \{0,1\}^n} \sum_{j \in [k]} W_{\bm{v},j} \cdot p(\bm{v},j) \enspace .
\end{align}
We can observe that the definition above extends definition $\eqref{error-distribution-general}$, in fact $\eqref{expected-loss-randomized}$ coincides with $\eqref{error-distribution-general}$ if $g_{\bm{W}}$ is a deterministic classifier, i.e. each row of $\bm{W}$ contains a $1$.
\label{subsection:tight}
\begin{theorem}\label{th:adv}
\label{minimax-invertible-theorem}
There exists a randomized attribute-class classifier $g_a \in \mathcal{G}_R$ such that its worst-case expected error is upper bounded by $Q$, i.e.
$\max_{p \in \mathcal{P}(\bm{A})} \varepsilon(g_a, p) \leq Q$ , where $Q$ is computed as in $\eqref{adversarial-quantity}$. Also,
it holds
$\max_{p \in \mathcal{P}(\bm{A})} \min_{g \in \mathcal{G}_R} \varepsilon(g, p) = Q$,
i.e. the lower bound $Q$ also applies to the family of randomized functions $\mathcal{G}_R$.
\end{theorem}
It is possible to compute the randomized attribute-class classifier $g_a$ that satisfies \cref{minimax-invertible-theorem} solely based on the matrix $\bm{A}$ through Linear Programming using $O(k \cdot 2^n)$ variables and constraints. Due to space constraints, we defer this computation to Appendix~\ref{adversarial-classifier-computation}.
\section{Deferred Proofs.}
\label{deferred-proofs}
\textbf{Proof of \cref{lp-theorem}}.
Let $Q'$ be the optimal value of the LP.
Let $p^* \in \mathcal{P}(\bm{A})$ be a solution of the maximization $\eqref{adversarial-quantity}$. Consider the following assignment of the variables $q_{\bm{v},j} = p^*(\bm{v},j)$ for all $\bm{v} \in \{0,1\}^n$ and $j \in [k]$. Since $p^* \in \mathcal{P}(\bm{A})$, it is straight-forward to verify that the variables $q_{\bm{v},j}$ satisfy constraints $(a)$ and $(b)$ of the LP. Moreover, the objective function is minimized whenever the values $\lambda_{\bm{v}}$ are chosen as small as possible. Due to constraint $(c)$ of the LP, we have that $\lambda_{\bm{v}} = \max_{j \in [k]} q_{\bm{v},j}$ for each $\bm{v} \in \{0,1\}^n$.
We have that
\begin{align}
\label{step-thm-4.2}
1-Q = \sum_{\bm{v} \in \{0,1\}^n} \max_{j \in [k]} p^*(\bm{v},j)=\sum_{\bm{v} \in \{0,1\}^n} \lambda_{\bm{v}} \geq 1-Q'
\end{align}
By contradiction, assume the optimal solution $q^*_{\bm{v},j}$, $\lambda^*_{\bm{v}}$ is such that $1-Q'= \sum_{\bm{v} \in \{0,1\}^n} \lambda_{\bm{v}}^* < 1-Q$. Since $q^*_{\bm{v},j}$, $\lambda^*_{\bm{v}}$ is an optimal solution, due to constraint $(c)$ we have that $\lambda^*_{\bm{v}} = \max_{j \in [k]} q^*_{\bm{v},j}$. Consider a PMF $\tilde{p}$ over $\{0,1\}^n \times [k]$ such that $\tilde{p}(\bm{v},j) = q^*_{\bm{v},j}$. It is easy to verify that $\tilde{p} \in \mathcal{P}(\bm{A})$ due to constraints $(a)$ and $(b)$. Moreover, we have that $Q(\tilde{p}) = 1 - \sum_{\bm{v}}\max_{j \in [k]} \tilde{p}(\bm{v},j) = 1 - \sum_{\bm{v}} \lambda^*_{\bm{v}} > Q$. This is a contradiction as $\max_{p \in \mathcal{P}(\bm{A})}Q(p) = Q$. Therefore, we have that $1-Q' \geq 1-Q$. Combining the latter inequality with inequality \eqref{step-thm-4.2}, we can conclude that $Q = Q'$.
\qed
\medskip
\textbf{Proof of \cref{binary-adversarial-computation}}.
Without loss of generality, we assume that $\alpha_i \geq \beta_i$ for each $i \in [n]$. In fact, if $\alpha_i < \beta_i$, then we can consider the attribute function $\psi'_i = 1 - \psi_i$, and the $i$-th column of the matrix $\bm{A}$ would become $(1-\alpha_i, 1-\beta_i)^T$, with $1 - \alpha_i \geq 1-\beta_i$. Also, assume that the attributes are ordered such that $\alpha_1 - \beta_1 \geq \alpha_i - \beta_i$ for each $i \in [n]$.
We first prove the second part of the Theorem. Let $g_a$ be defined as in the problem statement. It is easy to see that for any $p \in \mathcal{P}(\bm{A})$, we have that
\begin{align}
\varepsilon(g_a,p) &= \Pr_{x \sim \mathcal{D}}( g_a \circ \bm{\psi}(x) \neq y(x)) = \label{greedycomputation}\\
&=\Pr( \psi_1(x) = 0 | y(x) = 1) \Pr(y(x)=1)+ \Pr( \psi_1(x) = 1 | y(x) = 2) \Pr(y(x)=2) \nonumber \\
&= \frac{1}{2}(1 - \alpha_1) + \frac{1}{2}\beta_1 \nonumber \\
&= \frac{1}{2}( 1 - |\beta_1 - \alpha_1|) = Q \nonumber
\end{align}
Since this holds for any $p \in \mathcal{P}(\bm{A})$, we have that
\begin{align*}
\max_{p \in \mathcal{P}(\bm{A})}\varepsilon(g_a,p) = \frac{1}{2}( 1 - |\beta_1 - \alpha_1|) \enspace .
\end{align*}
Now, we will prove the first part of the Theorem. The proof is by induction. For $i \in [n]$, let $\bm{A}_i$ be the matrix that consists of the first $i$ columns of $\bm{A}$. For $i \in [n]$, let $\mathcal{G}_i$ be the set of all the functions $\{0,1\}^i \rightarrow [2]$.
For $i \in [n]$, let $p^{(i)}$ be a PMF with support over $\{0,1\}^i \times [2]$ such that
\begin{align}
\label{p-i-definition}
p^{(i)} = \operatorname*{\mathrm{arg\,max}}_{p \in \mathcal{P}(\bm{A}_i)} \min_{g \in \mathcal{G}_i} \underbrace{\left( 1 - \sum_{\bm{v} \in \{0,1\}^i}p(\bm{v}, g(\bm{v})) \right)}_{\substack{\doteq \\ \varepsilon^{(i)}(g,p) }}
\end{align}
For ease of notation, for $i \in [n]$, we will denote $\p{i}{\bm{v},j} = p^{(i)}(\bm{v},j)$ for each $\bm{v} \in \{0,1\}^i$ and $j \in [2]$.
The following auxiliary proposition is crucial to prove the theorem.
\begin{proposition}
\label{property-induction}
Let $i \in [n]$. We have that $\min_{g\in \mathcal{G}_i}\varepsilon^{(i)}(g, p^{(i)}) = Q$ if and only if for each $\bm{v} \in \{0,1\}^{i-1}$, it holds both $\p{i}{1\bm{v},1} \geq \p{i}{1\bm{v},2}$ and $\p{i}{0\bm{v},1} \leq \p{i}{0\bm{v},2}$.
\begin{proof}
Assume that for each $\bm{v} \in \{0,1\}^{i-1}$, it holds both $\p{i}{1\bm{v},1} \geq \p{i}{1\bm{v},2}$ and $\p{i}{0\bm{v},1} \leq \p{i}{0\bm{v},2}$. Then, we have that
\begin{align}
\min_{g \in \mathcal{G}_i}\varepsilon^{(i)}(g,p^{(i)}) &= 1 - \sum_{\bm{v} \in \{0,1\}^i}\max( \p{i}{\bm{v},1}, \p{i}{\bm{v},2}) = \nonumber \\
&=1 - \sum_{\bm{v} \in \{0,1\}^{i-1}}\max( \p{i}{0\bm{v},1}, \p{i}{0\bm{v},2}) - \sum_{\bm{v} \in \{0,1\}^{i-1}}\max( \p{i}{1\bm{v},1}, \p{i}{1\bm{v},2}) \nonumber \\
&= 1 - \sum_{\bm{v} \in \{0,1\}^{i-1}} \p{i}{0\bm{v},2} - \sum_{\bm{v} \in \{0,1\}^{i-1}} \p{i}{1\bm{v},1} \nonumber \\
&= 1 - \frac{1}{2}(1 - \beta_1) - \frac{1}{2}\alpha_1 \label{equality-marginalization} \\
&= \frac{1}{2}\left( 1 - |\alpha_1 - \beta_1| \right) = Q \nonumber
\end{align}
Equality \eqref{equality-marginalization} is simply obtained by marginalization, since $p^{(i)} \in \mathcal{P}(\bm{A}_i)$, thus $\Pr_{x \sim \mathcal{D}}( \psi_1(x) = 0 \land y(x) = 2) = (1-\beta_1)/2$ and $\Pr_{x \sim \mathcal{D}}( \psi_1(x) = 1 \land y(x) = 1) = \alpha_1/2$.
Assume that there exists $\bm{v'} \in \{0,1\}^{i-1}$ such that $\p{i}{1\bm{v}',1} < \p{i}{1\bm{v}',2}$ (the case $\p{i}{0\bm{v}',1} > \p{i}{0\bm{v}',2}$ is proven with the same argument). Let $g^{(i)}_a$ be defined similarly to $g_a$, i.e. $g^{(i)}_a = 1$ if $\psi_1(x) = 1$, and $g^{(i)}_a=2$ otherwise. Following the same computation of $\eqref{greedycomputation}$, we can show that $\varepsilon^{(i)}(g^{(i)}_a,p^{(i)}) = Q$.
Consider the classifier $\tilde{g}$ such that $\tilde{g}(\bm{v}) = g^{(i)}_a(\bm{v})$ for all $\bm{v} \in \{0,1\}^i$ such that $\bm{v} \neq 1\bm{v}'$, and $\tilde{g}(1\bm{v}')=2$.
We have that
\begin{align*}
\varepsilon^{(i)}(g^{(i)}_a,p^{(i)}) - \varepsilon^{(i)}(\tilde{g},p^{(i)}) = p^{(i)}(1\bm{v}', \tilde{g}(1\bm{v}')) - p^{(i)}(1\bm{v}', g^{(i)}_a(1\bm{v}')) = p^{(i)}_{1\bm{v}',2} - p^{(i)}_{1\bm{v}',1} > 0
\end{align*}
Therefore, $\varepsilon^{(i)}(\tilde{g},p^{(i)}) < \varepsilon^{(i)}(g^{(i)}_a,p^{(i)}) = Q$, which directly implies that $\min_{g\in \mathcal{G}_i}\varepsilon^{(i)}(g, p^{(i)}) < Q$.
\end{proof}
\end{proposition}
By induction, we will prove that for each $i \in [n]$, it is true that $\min_{g \in \mathcal{G}_i} \varepsilon^{(i)}(g,p^{(i)}) = Q$.
\textbf{Base case}. Let $i=1$. We have that
\begin{align*}
&\p{1}{1,1} = \frac{\alpha_1}{2} & \p{1}{0,1} = \frac{1}{2}(1-\alpha_1)\\
&\p{1}{1,2} =\frac{\beta_1}{2} & \p{1}{0,2} = \frac{1}{2}(1-\beta_1)\\
\end{align*}
Observe that $p^{(1)} \in \mathcal{P}(\bm{A}_1)$ as the classes are balanced, and we satisfy the constraints of the matrix $\bm{A}$ for the first attribute. It is easy to observe that
\begin{align*}
\min_{g \in \mathcal{G}_1} \varepsilon^{(1)}(g,p^{(1)}) = 1 - \frac{\alpha_1}{2} - \frac{1}{2}(1-\beta_1) = Q
\end{align*}
\textbf{Inductive step}. For $i \in 2, \ldots, n$, assume that $\min_{g \in \mathcal{G}_{i-1}} \varepsilon^{(i-1)}(g,p^{(i-1)}) = Q$, where $p^{(i-1)}$ is solution of $\eqref{p-i-definition}$.
We will show how to construct $p^{(i)}$ from $p^{(i-1)}$ guaranteeing $\min_{g \in \mathcal{G}_i} \varepsilon^{(i)}(g,p^{(i)}) = Q$ and that $p^{(i)} \in \mathcal{P}(\bm{A}_i)$. Observe that in that case, $p^{(i)}$ is also a solution of $\eqref{p-i-definition}$, since the classifier $g_a^{(i)}$ (defined as in the proof of \cref{property-induction}) has error exactly $Q$ with respect to any PMF $p \in \mathcal{P}(\bm{A}_i)$.
Our construction will be divided in three different cases, based on the ordering of the values $\alpha_1, \beta_1$, $\alpha_i$ and $\beta_i$. We will exhibit a different construction of $p^{(i)}$ for each of the case, but they all share the same proof line. In particular, we will guarantee that for each $\bm{v} \in \{0,1\}^{i-1}$ and $j \in [2]$, it holds
\begin{align}
\label{sum-to-previous}
p^{(i)}_{\bm{v}1,j} + p^{(i)}_{\bm{v}0,j} = p^{(i-1)}_{\bm{v},j}
\end{align}
This immediately implies that the classes are balanced, in fact, for any $j \in [2]$, we have that
\begin{align*}
\sum_{\bm{v} \in \{0,1\}^i} p^{(i)}_{\bm{v},j} = \sum_{\bm{v} \in \{0,1\}^{i-1}} p^{(i)}_{\bm{v}0,j}+p^{(i)}_{\bm{v}1,j} = \sum_{\bm{v} \in \{0,1\}^{i-1}}p^{(i-1)}_{\bm{v},j} = \frac{1}{2} \enspace,
\end{align*}
where the last equality is due to the assumption that $p^{(i-1)} \in \mathcal{P}(\bm{A}_{i-1})$. Moreover, $\eqref{sum-to-previous}$ also implies that $p^{(i)}$ satisfies the constraints imposed by the matrix $\bm{A}$ for the first $i-1$ attributes. In fact, for any $a \in [i-1]$, and $j\in[2]$, we have that
\begin{align*}
&\sum_{\bm{v} \in \{0,1\}^i : v_{a}=1} \p{i}{\bm{v},j}= A_{j,a} \sum_{\bm{v} \in \{0,1\}^i} \p{i}{\bm{v},j} \\
\iff &\sum_{\bm{v} \in \{0,1\}^{i-1} : v_{a}=1}\left(\p{i}{\bv0,j} + \p{i}{\bv1,j}\right) = A_{j,a}\sum_{\bm{v} \in \{0,1\}^{i-1}}\left(\p{i}{\bv0,j} + \p{i}{\bv1,j}\right) \\
\iff & \sum_{\bm{v} \in \{0,1\}^{i-1} : v_{a}=1} \p{i-1}{\bm{v},j}= A_{j,a} \sum_{\bm{v} \in \{0,1\}^{i-1}} \p{i-1}{\bm{v},j} \enspace .
\end{align*}
The latter equality is true as $p^{(i-1)} \in \mathcal{P}(\bm{A}_{i-1})$.
For each different case, we will show that our construction also satisfies the constraints imposed by matrix $\bm{A}$ for attribute $i$. This, together with $\eqref{sum-to-previous}$, implies that our construction guarantees that $p^{(i)} \in \mathcal{P}(\bm{A})$.
Moreover, we will show that with our construction, we also guarantee that for each $\bm{v} \in \{0,1\}^{i-1}$, it holds that
\begin{align}
\label{inequality-to-previous}
\p{i}{1\bm{v},1} \geq \p{i}{1\bm{v},2}\hspace{5pt} \land \hspace{5pt} \p{i}{0\bm{v},1} \leq \p{i}{0\bm{v},2} \enspace .
\end{align}
Using \cref{property-induction}, \eqref{inequality-to-previous} immediately implies that $\min_{g\in \mathcal{G}_i}\varepsilon^{(i)}(g, p^{(i)}) = Q$. In order to show that \eqref{inequality-to-previous} holds in our construction, we will use the fact that for each $\bm{v} \in \{0,1\}^{i-2}$, it holds that
\begin{align}
\label{inequality-to-previous-i-1}
\p{i-1}{1\bm{v},1} \geq \p{i-1}{1\bm{v},2}\hspace{5pt} \land \hspace{5pt} \p{i-1}{0\bm{v},1} \leq \p{i-1}{0\bm{v},2} \enspace .
\end{align}
This is indeed the case, as by assumption $\min_{g\in \mathcal{G}_{i-1}}\varepsilon^{(i-1)}(g, p^{(i-1)}) = Q$, hence we can apply the other direction of \cref{property-induction}.
We will now show our construction for the three different cases. For each case, it is straightforward to check that in our construction \eqref{sum-to-previous} holds, and that \eqref{inequality-to-previous-i-1} immediately implies \eqref{inequality-to-previous}. Therefore, we omit those computations.
\textbf{First Case}. [$\beta_1 \leq \beta_i \land \alpha_i \leq \alpha_1$].
We construct $p^{(i)}$ as follows. For each $\bm{v} \in \{0,1\}^{i-2}$,
we let
\begin{align*}
&\p{i}{1\bv1,2} = \p{i-1}{1\bm{v},2} \hspace{20pt} & \p{i}{1\bv0,2} = 0 \\
&\p{i}{1\bv1,1} = \p{i-1}{1\bm{v},2} + \frac{\alpha_i - \beta_1}{\alpha_1 - \beta_1}\left( \p{i-1}{1\bm{v},1} - \p{i-1}{1\bm{v},2} \right) & \\
& \p{i}{1\bv0,1} = \frac{\alpha_1 - \alpha_i}{\alpha_1 - \beta_1}\left( \p{i-1}{1\bm{v},1} - \p{i-1}{1\bm{v},2}\right) \enspace .
\end{align*}
These probabilities are well defined, as $0 \leq \frac{\alpha_i - \beta_1}{\alpha_1 - \beta_1} \leq 1$ and $\p{i-1}{1\bm{v},1} \geq \p{i-1}{1\bm{v},2}$. By construction, we have that $\p{i}{1\bv1,2}+\p{i}{1\bv0,2}=\p{i-1}{1\bm{v},2}$ and $\p{i}{1\bv1,1}+\p{i}{1\bv0,1} = \p{i-1}{1\bm{v},1}$, and it is easy to see that $\p{i}{1\bv1,1} \geq \p{i}{1\bv1,2}$ and $\p{i}{1\bv0,1} \geq \p{i}{1\bv0,2}$.
For each $\bm{v} \in \{0,1\}^{i-2}$, we let
\begin{align*}
&\p{i}{0\bv0,1} = \p{i-1}{0\bm{v},1} \hspace{20pt} & \p{i}{0\bv1,1} = 0 \\
&\p{i}{0\bv0,2} = \p{i-1}{0\bm{v},1} + \frac{\alpha_1 - \beta_i}{\alpha_1 - \beta_1}\left( \p{i-1}{0\bm{v},2} - \p{i-1}{0\bm{v},1} \right) & \\
& \p{i}{0\bv1,2} = \frac{\beta_i - \beta_1}{\alpha_1 - \beta_1}\left( \p{i-1}{0\bm{v},2} - \p{i-1}{0\bm{v},1}\right)
\end{align*}
Again, by construction we have that $\p{i}{0\bv0,2}+\p{i}{0\bv1,2}=\p{i-1}{0\bm{v},2}$ and $\p{i}{0\bv0,1}+\p{i}{0\bv1,1} = \p{i-1}{0\bm{v},1}$, and it is easy to see that $\p{i}{0\bv0,2} \geq \p{i}{0\bv0,1}$ and $\p{i}{0\bv1,2} \geq \p{i}{0\bv1,1}$.
The PMF $p^{(i)}$ satisfies the constraints imposed by the class-attribute matrix $\bm{A}$ for the attribute $i$, in fact
\begin{align*}
\sum_{\bm{v} \in \{0,1\}^{i-1}} \p{i}{\bv1,1} = \frac{\beta_1}{2} + \frac{\alpha_i-\beta_1}{\alpha_1-\beta_1}\cdot \frac{1}{2}(\alpha_1 - \beta_1) = \frac{\alpha_i}{2} \\
\sum_{\bm{v} \in \{0,1\}^{i-1}} \p{i}{\bv1,2} = \frac{\beta_1}{2} + \frac{\beta_i-\beta_1}{\alpha_1-\beta_1}\cdot \frac{1}{2}(\alpha_1 - \beta_1) = \frac{\beta_i}{2}
\end{align*}
\textbf{Second case}. [$\beta_1 \leq \beta_i$ $\land$ $\alpha_1 \leq \alpha_i$].
We construct $p^{(i)}$ as follows. For each $\bm{v} \in \{0,1\}^{i-2}$, let
\begin{align*}
&\p{i}{1\bv1,1} = \p{i-1}{1\bm{v},1} \hspace{20pt} & \p{i}{1\bv0,1} = 0 \\
&\p{i}{1\bv1,2} = \p{i-1}{1\bm{v},2} \hspace{20pt} & \p{i}{1\bv0,2} = 0
\end{align*}
and let
\begin{align*}
&\p{i}{0\bv1,1} = \frac{\alpha_i-\alpha_1}{1-\alpha_1}\p{i-1}{0\bm{v},1} \hspace{20pt} \\
&\p{i}{0\bv0,1} = \frac{1-\alpha_i}{1-\alpha_1}\p{i-1}{0\bm{v},1} \\
&\p{i}{0\bv1,2} = \frac{\alpha_i-\alpha_1}{1-\alpha_1}\p{i-1}{0\bm{v},1} + \frac{(\alpha_1 - \beta_1) - (\alpha_i - \beta_i)}{\alpha_1 - \beta_1}
\left(\p{i-1}{0\bm{v},2}-\p{i-1}{0\bm{v},1}\right) \\
& \p{i}{0\bv0,2} = \frac{1-\alpha_i}{1-\alpha_1}\p{i-1}{0\bm{v},1} +\frac{\alpha_i - \beta_i}{\alpha_1 - \beta_1}
\left(\p{i-1}{0\bm{v},2}-\p{i-1}{0\bm{v},1}\right)
\end{align*}
By construction, we can observe that for each $\bm{v} \in \{0,1\}^{i-1}$, it holds that $\p{i}{1\bm{v},1} \geq \p{i}{1\bm{v},2}$ and that $\p{i}{0\bm{v},2} \geq \p{i}{0\bm{v},1}$. Moreover, for each $\bm{v} \in \{0,1\}^{i}$ and $j \in [2]$, it holds that $\p{i}{\bv1,j} + \p{i}{\bv0,j} = \p{i-1}{\bm{v},j}$.
The PMF $p^{(i)}$ satisfies the constraints imposed by the class-attribute matrix $\bm{A}$ for the attribute $i$, in fact
\begin{align*}
&\sum_{\bm{v} \in \{0,1\}^{i-1}} \p{i}{\bv1,1} = \frac{\alpha_1}{2} + \frac{\alpha_i-\alpha_1}{1-\alpha_1}\cdot \frac{1}{2}(1 - \alpha_1) = \frac{\alpha_i}{2} \\
&\sum_{\bm{v} \in \{0,1\}^{i-1}} \p{i}{\bv1,2} = \frac{\beta_1}{2} + \frac{\alpha_i-\alpha_1}{1-\alpha_1}\cdot \frac{1}{2}(1 - \alpha_1) + \\
&+\frac{(\alpha_1 - \beta_1) - (\alpha_i - \beta_i)}{\alpha_1 - \beta_1} \cdot \frac{1}{2}(\alpha_1 - \beta_1) = \frac{\beta_i}{2}
\end{align*}
\textbf{Third case}. [$\beta_i \leq \beta_1$ $\land$ $\alpha_i \leq \alpha_1$]. We construct $p^{(i)}$ as follows. For each $\bm{v} \in \{0,1\}^{i-2}$, let
\begin{align*}
&\p{i}{0\bv0,1} = \p{i-1}{0\bm{v},1} \hspace{20pt} & \p{i}{0\bv1,1} = 0 \\
&\p{i}{0\bv0,2} = \p{i-1}{0\bm{v},2} \hspace{20pt} & \p{i}{0\bv1,2} = 0
\end{align*}
and let
\begin{align*}
&\p{i}{1\bv1,2} = \frac{\beta_i}{\beta_1}\p{i-1}{1\bm{v},2} \hspace{20pt} \\
&\p{i}{1\bv0,2} = \frac{\beta_1-\beta_i}{\beta_1}\p{i-1}{1\bm{v},2} \\
&\p{i}{1\bv1,1} = \frac{\beta_i}{\beta_1}\p{i-1}{1\bm{v},2} + \frac{\alpha_i - \beta_i}{\alpha_1 - \beta_1}
\left(\p{i-1}{1\bm{v},1}-\p{i-1}{1\bm{v},2}\right) \\
& \p{i}{1\bv0,1} = \frac{\beta_1-\beta_i}{\beta_1}\p{i-1}{1\bm{v},2} +\frac{(\alpha_1 - \beta_1) - (\alpha_i - \beta_i)}{\alpha_1 - \beta_1}
\left(\p{i-1}{1\bm{v},1}-\p{i-1}{1\bm{v},2}\right)
\end{align*}
Again, by construction, we can observe that for each $\bm{v} \in \{0,1\}^{i-1}$, it holds that $\p{i}{1\bm{v},1} \geq \p{i}{1\bm{v},2}$ and that $\p{i}{0\bm{v},2} \geq \p{i}{0\bm{v},1}$. Moreover, for each $\bm{v} \in \{0,1\}^{i}$ and $j \in [2]$, it holds that $\p{i}{\bv1,j} + \p{i}{\bv0,j} = \p{i-1}{\bm{v},j}$.
The PMF $p^{(i)}$ satisfies the constraints imposed by the class-attribute matrix $\bm{A}$ for the attribute $i$, in fact
\begin{align*}
&\sum_{\bm{v} \in \{0,1\}^{i-1}} \p{i}{\bv1,1} = \frac{\beta_i}{\beta_1}\frac{\beta_1}{2} + \frac{\alpha_i-\beta_i}{\alpha_1-\beta_1} \cdot \frac{1}{2}(\alpha_1-\beta_1) = \frac{\alpha_i}{2} \\
&\sum_{\bm{v} \in \{0,1\}^{i-1}} \p{i}{\bv1,2} = \frac{\beta_i}{\beta_1} \cdot \frac{1}{2} \beta_1 = \frac{\beta_i}{2}
\end{align*}
We conclude the proof by observing that since $\alpha_1 - \beta_1 \geq \alpha_i - \beta_i$, the case $[\beta_i < \beta_1 \land \alpha_1 < \alpha_i]$ is impossible. \qed
\textbf{Proof of \cref{approximate-Q-matching}}. \\
For each edge $\{j,j'\}= e \in M$, consider the matrix $\bm{A}^e$ obtained by selecting the two rows of the classes $j$ and $j'$. Let $p^e$ be the PMF over $\{0,1\}^n \times \{j,j'\}$ that achieves the maximum of \cref{binary-adversarial-computation}. That is, $p^e$ is the adversarial distribution if we were to only distinguish between the two balanced classes $j$ and $j'$ assuming that we need to also satisfy the constraints imposed by $\bm{A}^e$.
Let $C = [k] \setminus \left( \cup_{e \in M} e \right)$ be the set of classes that do not belong to any edge of the matching $M$. For any $c \in C$, we let $p^c$ be an arbitrary PMF over $\{0,1\}^n \times \{c\}$ that satisfies the constraints imposed by the row $c$ of the matrix $\bm{A}$. We give a simple example of such a PMF $p^c$, assuming independence between the attributes. For each $\bm{v} \in \{0,1\}^n$, we let
\begin{align*}
p^c(\bm{v},c) = \prod_{i \in [n]} A_{c,i}^{v_i} \prod_{i \in [n]} (1-A_{c,i})^{1-v_i} \enspace ,
\end{align*}
and it is easy to verify that this PMF satisfies the constraints imposed by the row $c$ of matrix $\bm{A}$.
Based on the previous PMFs, we define a PMF $\tilde{p} \in \mathcal{P}(\bm{A})$ over $\{0,1\}^n \times [k]$. For each $\bm{v} \in \{0,1\}^n$ and $j \in [k]$, we let
\begin{align*}
\tilde{p}(\bm{v},j) = \begin{cases}
\frac{1}{k}p^j(\bm{v},j) \hspace{10pt} \mbox{if } j \in C \\
\frac{2}{k}p^e(\bm{v},j) \hspace{10pt} \mbox{for } e \in M : j \in e
\end{cases}
\end{align*}
Observe that this PMF is well defined, as each class is either in $C$ or it belongs to a unique edge in the matching $M$. Moreover, by construction of $\tilde{p}$, the classes are balanced and they satisfy the constraints imposed by matrix $\bm{A}$.
For each $\{j,j'\} = e \in M$, by construction we have that $1 - \sum_{\bm{v}}\max(p^e(\bm{v}, j),p^e(\bm{v}, j') ) = \sum_{\bm{v}}\min(p^e(\bm{v}, j),p^e(\bm{v}, j') ) = w_e$ (as $p^e$ achieves the maximum of \cref{binary-adversarial-computation}).
We have that:
\begin{align*}
Q \geq \min_{g \in \mathcal{G}} \varepsilon(g, \tilde{p}) &= 1 - \sum_{\bm{v}} \max_{j \in [k]} \tilde{p}(\bm{v},j) \\
&\geq \sum_{\{j,j'\} = e \in M}\sum_{\bm{v}} \min\left( \tilde{p}(\bm{v},j), \tilde{p}(\bm{v},j')\right) \\
& = \frac{2}{k} \sum_{\{j,j'\} = e \in M}\sum_{\bm{v}} \min\left( p^e(\bm{v},j), p^e(\bm{v},j')\right) \\
& = \frac{2}{k} \sum_{e \in M} w_e
\end{align*}
The second inequality is true because $M$ is a matching, so no two edges of $M$ share an endpoint, and the second equality is due to the definition of $\tilde{p}$. \qed
\textbf{Proof of \cref{minimax-invertible-theorem}}. \\
By combining $\eqref{Q-computation}$ and $\eqref{adversarial-quantity}$, we can rewrite $Q$ as
\begin{align*}
Q = \max_{p \in \mathcal{P}(\bm{A})} \min_{g \in \mathcal{G}} \varepsilon(g,p) \enspace .
\end{align*}
Consider the maximin
\begin{align}
\label{maximin-randomized}
Q' = \max_{p \in \mathcal{P}(\bm{A})} \min_{g_{\bm{W}} \in \mathcal{G}_R} \varepsilon(g_{\bm{W}},p) \enspace .
\end{align}
We show that $Q = Q'$. In fact, given $p \in \mathcal{P}(\bm{A})$, it is clear that the expected error \eqref{expected-loss-randomized} of a randomized attribute-class classifier $g_{\bm{W}} \in \mathcal{G}_R$
\begin{align*}
\varepsilon(g_{\bm{W}}, p) = 1-\sum_{\bm{v} \in \{0,1\}^n} \sum_{j \in [k]} W_{\bm{v},j} \cdot p(\bm{v},j)
\end{align*}
is minimized when $\bm{W}_{\bm{v},j'}=1$ if $j' = \operatorname*{\mathrm{arg\,max}}_{j \in [k]}p(\bm{v},j)$ for all $\bm{v} \in \{0,1\}^n$, and such an attribute-class classifier is deterministic, i.e. it is equal with probability $1$ to a properly chosen classifier in $\mathcal{G}$. This proves the second part of the Theorem.
Given $\alpha \in [0,1]$ and $p_1,p_2 \in \mathcal{P}(\bm{A})$, we define $p_\alpha = \alpha p_1 + (1-\alpha)p_2$ as a convex combination of $p_1$ and $p_2$, where for each $\bm{v} \in \{0,1\}^n$ and $j \in [k]$, we have that $p_\alpha(\bm{v},j) = \alpha p_1(\bm{v},j) + (1-\alpha)p_2(\bm{v},j)$. It is easy to verify that $p_{\alpha} \in \mathcal{P}(\bm{A})$. Moreover, for two randomized attribute-class classifiers $g_{\bm{W}}, g_{\bm{W}'}$, and $\alpha \in [0,1]$ we define $g_\alpha = g_{\alpha \bm{W} + (1-\alpha) \bm{W}'} $ as the convex combination of $g_{\bm{W}}$ and $g_{\bm{W}'}$, and observe that $g_{\alpha} \in \mathcal{G}_R$.
The set $\mathcal{P}(\bm{A})$ and the set $\mathcal{W}$ of right-stochastic matrices $\bm{W} \in [0,1]^{2^n \times k}$ are closed and bounded, therefore compact, and we have shown they are also convex. Moreover, the function $\varepsilon(\cdot,\cdot)$ is bilinear with respect to $p$ and $\bm{W}$. Therefore, by von Neumann's Minimax Theorem \citep{neumann1928theorie},
the value of the minimax is equal to the value of the maximin, i.e.
\begin{align*}
\min_{g \in \mathcal{G}_R} \max_{p \in \mathcal{P}(\bm{A})} \varepsilon(g_{\bm{W}},p) = \max_{p \in \mathcal{P}(\bm{A})} \min_{g \in \mathcal{G}_R} \varepsilon(g_{\bm{W}},p) = Q
\end{align*}
\qed
\medskip
\section{Adversarial Attribute-Class Classifier Computation}
\label{adversarial-classifier-computation}
In this section of the Appendix, we show how to compute a randomized attribute-class classifier that satisfies \cref{minimax-invertible-theorem}. First, we show that the randomized attribute-class classifier
\begin{align}
\label{minimax-w}
g_{\bm{W}^*} = \operatorname*{\mathrm{arg\,min}}_{g_{\bm{W}} \in \mathcal{G}_R}\max_{p \in \mathcal{P}(\bm{A})} \varepsilon(g_{\bm{W}},p)
\end{align}
satisfies the condition of the Theorem. In fact, as noted in the proof of \cref{minimax-invertible-theorem}, we have that
\begin{align*}
\min_{g_{\bm{W}} \in \mathcal{G}_R} \max_{p \in \mathcal{P}(\bm{A})} \varepsilon(g_{\bm{W}},p) = \max_{p \in \mathcal{P}(\bm{A})} \min_{g_{\bm{W}} \in \mathcal{G}_R} \varepsilon(g_{\bm{W}},p) = \max_{p \in \mathcal{P}(\bm{A})} \min_{g \in \mathcal{G}} \varepsilon(g,p) = Q
\end{align*}
Now, we show how to compute $\bm{W}^*$. The minimax \eqref{minimax-w} can be written as a bilinear problem. Let $w_{\bm{v},j}$ and $q_{\bm{v},j}$ be variables that denote respectively $W_{\bm{v},j}$ and $p(\bm{v},j)$ for $\bm{v} \in \{0,1\}^n$ and $j \in [k]$. Inspecting \eqref{expected-loss-randomized}, and using the fact that the minimax is equal to the maximin, we can compute \eqref{minimax-w} as
\begin{align}
\label{maxmin-primal}
&1 + \max_{\bm{q} \geq 0}\min_{\bm{w} \geq 0}\sum_{\bm{v} \in \{0,1\}^n} \sum_{j \in [k]} (-w_{\bm{v},j}) \cdot q_{\bm{v},j}& s.t. \\
&(a) \sum_{\substack{ \bm{v} \in \{0,1\}^n :\\ v_i = 1}}q_{\bm{v},j} = A_{j,i}P_j & \forall j \in [k], i \in [n] \nonumber \\
&(b) \sum_{\substack{ \bm{v} \in \{0,1\}^n }}q_{\bm{v},j} = P_j & \forall j \in [k] \nonumber \\
&(c) \sum_{j \in [k]} w_{\bm{v},j} = 1 & \forall \bm{v} \in \{0,1\}^n \nonumber
\end{align}
We use $P_j$ to denote $P_j = \Pr_{x \sim \mathcal{D}}( y(x) = j)$ for $j \in [k]$, which is equal to $1/k$ when the classes are balanced. We transform the maximin \eqref{maxmin-primal} into a minimization problem that can be solved as a Linear Program.
For a given $\bm{q}$, let $\bm{w}^{\bm{q}}$ be an assignment of the variables $\bm{w}$ that achieves the minimum. We can write the dual of the maximization problem over the variables $\bm{q}$ with respect to the fixed $\bm{w}^{\bm{q}}$ as
\begin{align*}
&1 + \min_{\substack{\bm{a} \in \mathbb{R}^{k} \\ \bm{b} \in \mathbb{R}^{k \times n} }} \left( \sum_{j \in [k]} P_j \cdot a_j + \sum_{j \in [k]}\sum_{i \in [n]} P_j \cdot A_{j,i} \cdot b_{j,i}\right)& s.t. \\
&(a) \hspace{5pt} a_{j} + \sum_{\substack{i \in [n] \\ v_i = 1}}b_{j,i} \geq -w^{\bm{q}}_{\bm{v},j} & \forall \bm{v} \in \{0,1\}^n , j \in [k] \nonumber
\end{align*}
Due to strong duality, the optimal value of the dual problem is the same as that of the primal with respect to the fixed assignment $\bm{w}^{\bm{q}}$. By choosing $\bm{w}^{\bm{q}}$ as the minimizer over all feasible $\bm{w}$, we finally obtain the following minimization problem whose optimal value is equal to \eqref{maxmin-primal}.
\begin{align}
\label{minimini}
&1 + \min_{\substack{\bm{a} \in \mathbb{R}^{k} \\ \bm{b} \in \mathbb{R}^{k \times n} \\ \bm{w} \geq 0}} \left(\sum_{j \in [k]} P_j \cdot a_j + \sum_{j \in [k]}\sum_{i \in [n]} P_j \cdot A_{j,i} \cdot b_{j,i}\right)& s.t. \\
&(a) \hspace{5pt}a_{j} + \sum_{\substack{i \in [n] \\ v_i = 1}}b_{j,i} \geq -w_{\bm{v},j} & \forall \bm{v} \in \{0,1\}^n , j \in [k] \nonumber \\
&(b) \sum_{j \in [k]} w_{\bm{v},j} = 1 & \forall \bm{v} \in \{0,1\}^n \nonumber
\end{align}
This minimization problem can be solved as a Linear Program with $O(k \cdot 2^n)$ variables and constraints. We choose $\bm{W}^*$ as the assignment $\bm{w}^*$ of the variables $\bm{w}$ in an optimal solution of \eqref{minimini}.
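As an illustration, the following Python sketch (our own, not part of any released code; the use of SciPy's \texttt{linprog} and all function names are our assumptions) solves \eqref{minimini} for a small class-attribute matrix $\bm{A} \in [0,1]^{k \times n}$ with balanced classes, enumerating all $2^n$ attribute vectors. It returns the value $Q$ and the matrix $\bm{W}^*$.
\begin{verbatim}
import itertools
import numpy as np
from scipy.optimize import linprog

def adversarial_classifier(A):
    # A: (k, n) class-attribute matrix with entries in [0, 1].
    k, n = A.shape
    P = np.full(k, 1.0 / k)                      # balanced class priors P_j
    V = np.array(list(itertools.product([0, 1], repeat=n)))
    m = V.shape[0]                               # m = 2^n attribute vectors

    # Variable layout: x = [a (k), b (k*n, row-major), w (m*k, row-major)].
    n_vars = k + k * n + m * k
    c = np.zeros(n_vars)
    c[:k] = P                                    # sum_j P_j * a_j
    c[k:k + k * n] = (P[:, None] * A).ravel()    # sum_{j,i} P_j * A_{j,i} * b_{j,i}

    # Constraints (a): -a_j - sum_{i: v_i = 1} b_{j,i} - w_{v,j} <= 0.
    A_ub = np.zeros((m * k, n_vars))
    row = 0
    for vi, v in enumerate(V):
        for j in range(k):
            A_ub[row, j] = -1.0
            A_ub[row, k + j * n: k + (j + 1) * n] = -v
            A_ub[row, k + k * n + vi * k + j] = -1.0
            row += 1
    b_ub = np.zeros(m * k)

    # Constraints (b): sum_j w_{v,j} = 1 for every attribute vector v.
    A_eq = np.zeros((m, n_vars))
    for vi in range(m):
        A_eq[vi, k + k * n + vi * k: k + k * n + (vi + 1) * k] = 1.0
    b_eq = np.ones(m)

    bounds = [(None, None)] * (k + k * n) + [(0, None)] * (m * k)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

    Q = 1.0 + res.fun                            # optimal value of the minimax
    W = res.x[k + k * n:].reshape(m, k)          # W*, rows indexed like V
    return Q, W, V
\end{verbatim}
The sketch is only practical for small $n$, since the number of variables grows as $O(k \cdot 2^n)$.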
\section{Approximation of the Lower Bound}
\label{app:approximation-subsection}
In this section of the Appendix, we show a computationally efficient method to compute a lower bound to the value $Q$ in a multiclass classification setting, i.e. $k \geq 2$. We build on the results of Section~\ref{binary-exact-computation}, and we will approximate $Q$ by using~\Cref{binary-adversarial-computation} between properly chosen pairs of the $k$ classes.
Consider a weighted, undirected complete graph $G$. Each vertex of the graph represents a class $j \in [k]$, and the edge $\{j,j'\}$ between classes $j,j' \in [k]$, $j \neq j'$, has weight $w_{\{j,j'\}} = \frac{1}{2}\left( 1 - \max_{i \in [n]}|A_{ji} - A_{j'i}|\right)$ computed as in~\Cref{binary-adversarial-computation}. A matching $M$ is a subset of edges such that no two edges of $M$ share an endpoint, i.e. for each $e,e' \in M$, $e \neq e'$, we have that $e \cap e' = \emptyset$. The weight of a matching $M$ is defined as the sum $\sum_{e \in M}w_e$ of the weights of the edges of $M$. The following theorem relates the weight of a matching to the value $Q$.
\begin{theorem}
\label{approximate-Q-matching}
Let $M$ be a matching of $G$, and $Q$ be computed as in \eqref{adversarial-quantity}. Then,
$Q \geq \frac{2}{k} \sum_{e \in M} w_e$ .
\end{theorem}
In order to maximize the lower bound provided by \cref{approximate-Q-matching}, we want to find a matching of $G$ with maximum weight. This optimization problem can be solved in $O(k^3)$ time by using an optimized version of the blossom algorithm~\citep{edmonds1965paths,lawler2001combinatorial}.
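As a concrete illustration of \cref{approximate-Q-matching}, the following Python sketch (an assumed helper, not from the paper's implementation; it relies on \texttt{networkx}'s \texttt{max\_weight\_matching}) computes the matching-based lower bound $\frac{2}{k} \sum_{e \in M} w_e$ directly from the class-attribute matrix.
\begin{verbatim}
import numpy as np
import networkx as nx

def matching_lower_bound(A):
    # A: (k, n) class-attribute matrix with entries in [0, 1].
    k, _ = A.shape
    G = nx.Graph()
    for j in range(k):
        for jp in range(j + 1, k):
            # Pairwise (two-class) lower bound used as the edge weight.
            w = 0.5 * (1.0 - np.max(np.abs(A[j] - A[jp])))
            G.add_edge(j, jp, weight=w)
    M = nx.max_weight_matching(G)                 # maximum weight matching
    total = sum(G[u][v]["weight"] for u, v in M)
    return (2.0 / k) * total                      # lower bound on Q
\end{verbatim}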
\section{Experimental details}\label{app:exp}
In this section, we provide additional details for the experiments in~\Cref{sec:experiments}.
\subsection{Data}\label{app:data}
We choose the following four datasets with attributes that are widely used benchmarks in ZSL.
\textbf{Animals with Attributes 2} (AwA2) consists of 37,322 images of 50 animal classes that are split into 40 seen and 10 unseen classes~\citep{xian2018zero}.
The dataset contains 85 attributes.
We normalize the provided continuous-valued class-attribute matrix, whose entries indicate the strength of each class-attribute association, and we interpret the normalized entries as probabilities. We use this matrix as the class-attribute matrix.
We use the provided binary class-attribute matrix to infer an image-level attribute representation for each image, which we use to learn the attribute detectors (\Cref{app:attr_detectors}).
\textbf{aPascal-aYahoo} (aPY) consists of 15,339 images of 32 classes of animals and means of transportation, which are split into 20 seen and 12 unseen classes~\citep{farhadi2009describing}.
Each image is annotated with 64 attributes.
\textbf{Caltech-UCSD Birds-200-2011} (CUB) consists of 11,788 images of 200 fine-grained birds classes that are split into 150 seen and 50 unseen classes~\citep{WahCUB_200_2011}.
Each image is annotated with 312 attributes.
\textbf{SUN attribute database} (SUN) consists of 14,340 images of 717 scenes, e.g., ballroom and auditorium, that are split into 645 seen and 72 unseen classes~\citep{patterson2014sun}.
Each image is annotated with 102 attributes.
For each image, both the SUN and CUB datasets provide multiple crowdsourced attribute annotations.
We average such annotations to obtain a continuous attribute representation of each image.
For our purposes, i.e., the training of the attribute detectors (\Cref{app:attr_detectors}), we round the value of each attribute.
For all these datasets, we use the split between seen and unseen classes suggested by~\citet{xian2018zero}.
Except for AwA2, we obtain the class-attribute matrices by averaging the attribute representations of the images of each class. This is the same strategy used by~\citet{RomeraParedes2015AnES} in their experiments.
For each dataset, we use a pre-trained ResNet-101~\citep{He2016DeepRL} as an encoder to extract features from the images.
The features are $2048$-dimensional, and they are used as input for the ZSL models.
\textbf{Synthetic Data Generation}. In Section~\ref{sec:exp:adversarial}, we generate adversarial synthetic data based on an input class-attribute matrix $\bm{A} \in [0,1]^{k \times n}$. To this end, we compute the solution of the Linear Program presented in \Cref{sec:compute_bound}. The values of the variables $q_{\bm{v},j}$ that achieve the optimal value of the Linear Program denote an adversarial distribution over the attributes and (balanced) classes that satisfies the constraints imposed by the class-attribute matrix.
Recall that $q_{\bm{v},j}$ denotes the probability that an image has attribute representation $\bm{v}$ and belongs to class $j$.
We sample classification items from this distribution as follows. We sample a class $j$ uniformly at random among the $k$ classes, and then we sample a feature vector $\bm{x} \in \{0,1\}^n$ with probability $k \cdot q_{\bm{x},j}$ (that is, the feature representation is equal to the attribute representation). It is clear that data sampled in this way satisfy the constraints imposed by the class-attribute matrix with attribute functions $\psi_i(\bm{x}) = x_i$ for $i \in [n]$.
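For concreteness, a minimal sketch of this sampling procedure is given below (assumptions: \texttt{q} is a dictionary mapping pairs (attribute vector as a tuple, class) to the adversarial probabilities returned by the linear program, classes are balanced, and the function name is ours).
\begin{verbatim}
import numpy as np

def sample_synthetic(q, k, size, rng=None):
    # q: dict mapping (attribute vector as a tuple, class j) -> probability.
    rng = np.random.default_rng() if rng is None else rng
    xs, ys = [], []
    for _ in range(size):
        j = int(rng.integers(k))                  # class uniform over [k]
        vs = [v for (v, jj) in q if jj == j]
        probs = np.array([q[(v, j)] for v in vs]) * k   # conditional p(v | j)
        probs = probs / probs.sum()               # guard against rounding error
        v = vs[rng.choice(len(vs), p=probs)]
        xs.append(np.array(v))                    # feature vector = attribute vector
        ys.append(j)
    return np.stack(xs), np.array(ys)
\end{verbatim}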
\subsection{Learning Attribute Detectors}\label{app:attr_detectors}
For DAP, we need to learn an attribute detector for each attribute.
The attribute detectors are classifiers that, given an image, output $1$ if the attribute appears in the image and $0$ otherwise.
We learn the attribute detectors in a supervised fashion on the seen classes, by using the attribute annotations of the images.
In AwA2, the attributes are not explicitly annotated for each image, and we use the discrete attribute description of the image's class.
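A minimal sketch of this training procedure is shown below (assumptions: one logistic regression per attribute, trained on the $2048$-dimensional ResNet-101 features via \texttt{scikit-learn}; the detectors actually used in the experiments may differ).
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_attribute_detectors(X, attrs):
    # X: (m, 2048) image features; attrs: (m, n) binary image-level attributes.
    # Assumes every attribute column contains both values in the training set.
    detectors = []
    for i in range(attrs.shape[1]):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X, attrs[:, i])
        detectors.append(clf)
    return detectors

def predict_attributes(detectors, X):
    # Returns the predicted binary attribute vector for each row of X.
    return np.stack([clf.predict(X) for clf in detectors], axis=1)
\end{verbatim}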
\subsection{ZSL models and training details}\label{appendix:zsl}
In our experiments, we compare the lower bound on the error with multiple ZSL methods.
Here, we provide details about the methods and how we train them.
\textbf{DAP}~\citep{Lampert2014AttributeBasedCF}. This method is the first attribute-based method to solve the ZSL problem in the visual domain.
It uses attribute detectors trained on the seen classes, and then uses the class-attribute matrix to infer the a posteriori most-probable unseen class.
DAP unrealistically assumes attribute independence.
We train attribute detectors as explained in the previous section. As suggested by~\citet{Lampert2014AttributeBasedCF}, we use a uniform prior on the unseen classes.
The implementation of DAP is based on the code released by~\citet{Lampert2014AttributeBasedCF}\footnote{\url{https://github.com/zhanxyz/Animals_with_Attributes}} under the MIT License\footnote{\url{https://opensource.org/licenses/MIT}}.
ESZSL~\citep{RomeraParedes2015AnES}, SAE~\citep{Kodirov2017cvpr}, ALE~\citep{Akata2016}, and SJE~\citep{Akata2015cvpr} learn bilinear maps from image features to the rows of the class-attribute matrix. At training time, they use the class-attribute matrix of the seen classes, while for predictions they use the one of the unseen classes. These methods differ in the definition of the learning objective and the optimization method.
In particular, ESZSL and SAE have closed form solutions.
\textbf{ESZSL.} The hyperparameters of the model are $\alpha$ and $\gamma$, which are the regularization parameters for the feature space and the attribute space, respectively. The parameters $\alpha$ and $\gamma$ for each dataset are set as follows: aPY, $\alpha=3$ and $\gamma=-1$; AwA2, $\alpha=3$ and $\gamma=0$; CUB, $\alpha=3$ and $\gamma=-1$; SUN, $\alpha=3$ and $\gamma=2$.
\textbf{SAE.} The hyperparameter of the model is $\lambda$ which is a coefficient that controls the trade-off between the decoder and encoder losses. The values of $\lambda$ are set as follows for each dataset: aPY, $\lambda=4$; AwA2, $\lambda=0.2$; CUB, $\lambda=0.2$; SUN, $\lambda=0.16$.
\textbf{ALE.} The hyperparameters of the model are the normalization strategy applied to the class-attribute matrix, and the SGD learning rate $\gamma$. For each dataset, the normalization strategy and the learning rate are: aPY, $\ell_2$ and $\gamma=0.04$; AwA2, $\ell_2$ and $\gamma=0.01$; CUB, $\ell_2$ and $\gamma=0.3$; SUN, $\ell_2$ and $\gamma=0.1$.
\textbf{SJE.} The hyperparameters of the model are the normalization strategy applied to the class-attribute matrix, the SGD learning rate $\gamma$, and the margin $m$ for the optimization of the objective. For each dataset we report these parameters, in order: aPY, no normalization, $\gamma=0.01$, and $m=1.5$; AwA2, $\ell_2$, $\gamma=1$, and $m=2.5$; CUB, mean-centering, $\gamma=0.1$, and $m=4$; SUN, mean-centering, $\gamma=1$, and $m=2$.
The chosen hyperparameters maximize the balanced accuracy on the validation classes, and lead to test errors on the unseen classes comparable with the benchmarks by~\citet{xian2018zero}.
The implementations of ESZSL, SAE, ALE, SJE are based on a public code repository\footnote{\url{https://github.com/mvp18/Popular-ZSL-Algorithms}} released under the MIT License.
In~\Cref{tab:acc:zsl_acc}, we report the balanced accuracy of each method when trained on the whole set of attributes and seen classes.
Interestingly, we note that the accuracy of the methods on aPY is comparable to the accuracy that the methods achieve by only using $15$ attributes (\Cref{fig:err_attr}).
The last ZSL method we consider is DAZLE~\citep{Huynh-DAZLE:CVPR20} which (1) uses dense attribute-based attention to find local discriminative regions, and (2) embeds each attribute-based feature with the attribute semantic description. The implementation of DAZLE is based on the code released by~\citet{Huynh-DAZLE:CVPR20}\footnote{\url{https://github.com/hbdat/cvpr20_DAZLE}} under the MIT License.
\textbf{DAZLE.} The hyperparameters of the model are the weight of the self-calibration loss $\lambda$, the learning rate $\gamma$, the weight decay $w$, and the momentum $m$. For each dataset we report these parameters, in order: aPY, $\lambda=0.1$, $\gamma=0.0001$, $w=0.0001$, and $m=0$; AwA2, $\lambda=0.1$, $\gamma=0.0001$, $w=0.0001$, and $m=0$; CUB, $\lambda=0.1$, $\gamma=0.0001$, $w=0.0001$, and $m=0.9$; SUN, $\lambda=0.1$, $\gamma=0.0001$, $w=0.0001$, and $m=0.9$. We used the same settings as in the released implementation of the model.
\textbf{Resources.} We run the experiments on an internal cluster. Most of the methods are executed on CPUs, while for DAZLE we used an NVIDIA GeForce RTX 3090 GPU.
\input{tables/zsl}
\newpage
\section{Additional Experimental Results}
\label{app:exp:res}
\subsection{Additional Results for \Cref{sec:exp:adversarial}}
In \Cref{fig:err_attr_SUN}, we report the results of the experiments of \Cref{sec:exp:adversarial} for the SUN dataset. The results are consistent with our findings: the empirical error of the ZSL models roughly follows the trend of the lower bound. This suggests that the lower bound is able to capture how the additional information provided by an attribute leads to improvements of the ZSL models.
Moreover, for the adversarially generated synthetic data, we observe that no method is able to achieve errors lower than the lower bound, consistently with our theory.
In this subsection, we also report the empirical results for a method that we call \textbf{APA} (Adversarial Predicted Attributes) across all four datasets.
APA is an adversarial algorithm that uses a map from attributes to classes that satisfies~\cref{minimax-invertible-theorem}, computed as in~\cref{adversarial-classifier-computation}. The method uses attribute detectors trained on the seen classes (\cref{app:attr_detectors}), and predicts the target classes according to the output of those detectors, as specified in \cref{subsection:tight}.
APA is similar to DAP, except in how the attribute detectors are used to predict the target classes. In~\Cref{fig:apa}, we report the results of the experiments for the method APA. We point out that the performance of APA is competitive with the other ZSL approaches, at least when using a reduced number of attributes. We remark that this comparison is out of the scope of this paper, but we believe these results open a promising direction for further development of adversarial attribute-based zero-shot learning models.
\begin{figure*}[t]
\centering
\includegraphics[width=.4\columnwidth]{imgs/SUN_Unseen.pdf}
\includegraphics[width=.4\columnwidth]{imgs/synt_SUN_Unseen.pdf}
\caption{\textbf{Comparison of the lower bound with the empirical error.} We plot the lower bound on the error (\textbf{Q}), and the error of ZSL methods with attributes (\textbf{DAP, ESZSL, SAE, ALE}, and \textbf{DAZLE}).
The first column reports these values computed on the unseen classes of the SUN dataset, varying the number of available attributes.
The second column reports the values for the adversarially generated synthetic data.
The bands indicate the standard errors on five runs with different seeds for randomized methods.
}
\label{fig:err_attr_SUN}
\end{figure*}
\subsection{Additional Results for \Cref{sec:exp:misclassification}}
In this subsection, we extend the experiments of \Cref{sec:exp:misclassification} and we propose a way to analytically quantify the similarity between the pairwise lower bound error matrix $L$ and the misclassification matrices.
Suppose that a model makes $m$ errors $E = \{ (j_1,j'_1), \ldots, (j_m, j'_m) \}$ on the data, where for each $i \in [m]$, the pair $(j_i,j'_i)$ represents an instance where the ZSL model outputs $j_i$ but the true class is $j'_i \neq j_i$. We compute the ratio between the empirical expected weight of the errors $E$ according to the graph $G$ and the expected weight of errors made uniformly at random between the classes, i.e.
$\left(\frac{1}{m}\sum_{(j,j') \in E} w_{j,j'}\right)/\left(\frac{1}{k(k-1)} \sum_{j \neq j'} w_{j,j'}\right)$.
We name this quantity \emph{skewness} (Sk), and we observe that if the ratio is greater than $1$, then the misclassification errors $E$ of the ZSL models are skewed towards pairs of classes that have larger values in $\bm{L}$, i.e., they are hard to distinguish.
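A minimal sketch of the skewness computation is given below (assumed inputs, not the authors' code: the matrix \texttt{W} collects the pairwise weights $w_{j,j'}$ of the graph $G$, and \texttt{errors} is the list $E$ of (predicted, true) pairs).
\begin{verbatim}
import numpy as np

def skewness(W, errors):
    # W: symmetric (k, k) matrix of pairwise lower bounds (diagonal ignored);
    # errors: list of (predicted, true) pairs with predicted != true.
    k = W.shape[0]
    empirical = np.mean([W[j, jp] for j, jp in errors])
    uniform = W[~np.eye(k, dtype=bool)].mean()    # average over all j != j'
    return empirical / uniform
\end{verbatim}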
In~\cref{tab:metrics}, we report the skewness scores computed for all the combinations of ZSL models and datasets. We observe that all these quantities are greater than $1$. As noted before, this shows that the errors are skewed towards those indicated by our theoretical analysis. We can observe that the skewness is approximately $1$ for SAE on the aPY dataset. This is not surprising, as the model has very low performance (16.49\% accuracy, see~\cref{tab:acc:zsl_acc} in \cref{app:exp}) on this ZSL task.
We point out that it is very challenging to define a pairwise metric between the entries of $\bm{L}$ and the misclassification matrix $\bm{M}$ to describe their similarity. A pairwise metric would fail to capture more complex relations between classes. For instance, consider the scenario where three classes are very hard to distinguish according to the values of their lower bounds in $\bm{L}$. A ZSL model could fail to distinguish between them, and it could always output the same class given an image of any of those three classes. This would imply that we observe zero misclassifications between some pairs of these three classes in the matrix $\bm{M}$, which differs from the corresponding entries in $\bm{L}$. In contrast, the skewness metric is not affected by this problem.
\input{tables/tp_fp_metrics}
\textbf{Additional Details.} As our lower bound is computed assuming balanced classes, we ensure this assumption holds by sampling the test data uniformly among the unseen classes.
In~\cref{tab:metrics}, we report the skewness averaged over 10 different randomly selected subsets of test data, and the respective standard deviation.
Specifically, for each class we sample a number of images equal to the minimum class-size among the unseen classes.
\begin{figure*}[t]
\centering
\includegraphics[width=.45\linewidth]{imgs/apa_APY_Unseen.pdf}
\includegraphics[width=.45\linewidth]{imgs/apa_AWA2_Unseen.pdf}
\includegraphics[width=.45\linewidth]{imgs/apa_CUB_Unseen.pdf}
\includegraphics[width=.45\linewidth]{imgs/apa_SUN_Unseen.pdf}
\caption{\textbf{Comparison of the lower bound on the error with the ZSL models' error on the validation classes.} We plot the lower bound on the error (\textbf{Q}) and compare it to the error rates of the ZSL adversarial algorithm (\textbf{APA}) and the other ZSL models with attributes (\textbf{DAP, ESZSL, SAE, ALE}, and \textbf{DAZLE}).
The bands indicate the standard error on five runs with different seeds.}
\label{fig:apa}
\end{figure*}
\section{Conclusions, Limitations, and Future Work}
We present the first non-trivial lower bound on the best error that an attribute-based ZSL method can guarantee given the information provided---the class-attribute matrix.
While our method is limited to class-attribute matrices, it constitutes a first theoretical building block to quantify the auxiliary information provided in ZSL. In general, theoretical evaluation of the error of ZSL models remains a hard problem due to the arbitrary domain shift between seen and unseen classes, and the wide range of possible auxiliary information used. As a future direction, it remains an open problem to quantify this information for other families of ZSL methods.
However, our analysis readily extends to other variants of ZSL, such as generalized ZSL, where we simply use the class-attribute matrix of the union of both seen and unseen classes while computing our lower bound. \\
\subsection*{Broader Societal Impacts}
Zero-shot learning is now a popular scenario in research, with potential application to real-world language and vision tasks.
Worst-case guarantees have long been desired in ZSL.
Any improvement in the rigor of claims about model performance has impact because it demonstrates both what performance can be achieved and that some solutions are invalid.
However, such bounds do not cover many kinds of error, such as a generalization gap from domain shift or label errors.
Further, it is important that bounds are correctly interpreted such that no false claims or confidences are drawn from our findings.
An educated interpretation of the effect of these bounds upon any particular machine learning application is still required.
\subsection*{Acknowledgements}
We thank Michael Littman and James Tompkin for many helpful and insightful discussions. This material is based on research sponsored by Defense Advanced Research Projects Agency (DARPA) and Air Force Research Laboratory (AFRL) under agreement number FA8750-19-2-1006. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Defense Advanced Research Projects Agency (DARPA) and Air Force Research Laboratory (AFRL) or the U.S. Government. We gratefully acknowledge support from Google and Cisco. Disclosure: Stephen Bach is an advisor to Snorkel AI, a company that provides software and services for weakly supervised machine learning.
\section{Empirical Applications}\label{sec:experiments}
In this section we compare our novel theory with the performance of popular attribute-based ZSL methods.
Our lower bounds quantify the lowest error rate that any ZSL algorithm can guarantee based on the information provided by the class-attribute matrix.
In practice, we show that the lower bound is still predictive of the performance and the behaviour of attribute-based ZSL algorithms.
We run two sets of experiments.
\begin{compactenum}
\item \textbf{Comparing the lower bound and the empirical error}
(\cref{sec:exp:adversarial}).
We
compare the error rates of ZSL models with attributes
with the lower bound on the error from \cref{sec:compute_bound}.
\item \textbf{Pairwise misclassification prediction} (\cref{sec:exp:misclassification}).
We measure the predictive power of our lower bounds to
identify pairs of classes that ZSL models are likely to misclassify. This hardness is measured using the lower bound on the error for a pair of classes (\cref{binary-exact-computation}).
\end{compactenum}
\subsection{Experimental Setup}\label{sec:data}
In this section, we briefly describe the experimental setup. Further details about the datasets and the methods can be found in \cref{app:exp}.
We choose the following four datasets with attributes that are widely used benchmarks in ZSL: Animals with Attributes 2 (\textbf{AwA2})~\citep{xian2018zero}, aPascal-aYahoo (\textbf{aPY})~\citep{farhadi2009describing}, Caltech-UCSD Birds-200-2011 (\textbf{CUB})~\citep{WahCUB_200_2011}, and SUN attribute database (\textbf{SUN})~\citep{patterson2014sun}.
We focus on classic ZSL algorithms with attributes that
use at most
the information in
the class-attribute matrix for the unseen classes: \textbf{DAP}~\citep{Lampert2014AttributeBasedCF}, \textbf{ESZSL}~\citep{RomeraParedes2015AnES}, \textbf{SAE}~\citep{Kodirov2017cvpr}, \textbf{ALE}~\citep{Akata2016}, \textbf{SJE}~\citep{Akata2015cvpr}.
We choose these methods because they use the class-attribute matrix that is the focus of our theoretical analysis.
Many other ZSL methods have been proposed in recent years (see \Cref{sec:related-work}), but their comparison with our lower bound would be vacuous as they often use other sources of auxiliary information on the unseen classes, and thus do not fit within our novel theoretical framework.
They are beyond the scope of this first analysis of ZSL.
However, we also run experiments on a more recent attribute-based method \textbf{DAZLE} ~\citep{Huynh-DAZLE:CVPR20} which takes advantage of additional information, i.e., attribute semantic vectors.
\subsection{Comparing Lower Bound and Empirical Error}
\label{sec:exp:adversarial}
In this section, we compare the lower bound presented in~\cref{sec:adv} with the actual error of the ZSL models.
To this end, we run two sets of experiments: a first set using the ZSL datasets mentioned in the previous subsection, and a second set using adversarially generated synthetic data that conform with the class-attribute matrices of those same ZSL datasets.
In the first set of experiments, we follow the standard way to evaluate ZSL models. We train the models on the seen classes, and then compare our lower bound with the empirical error of the ZSL models on the unseen classes.
Since the computation of the lower bound is very expensive for a large number of attributes (\cref{sec:compute_bound}), we focus on a subset of them.
We propose the following greedy strategy to ensure a selection of attributes that are informative with respect to the target classes. Starting with no attributes, we iteratively add the attribute that most decreases the value of the lower bound, up to $15$ attributes. Due to the large number of seen classes of SUN and CUB, we restrict them to a smaller random subset (see \Cref{app:exp}).
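A minimal sketch of this greedy selection is given below (assuming a function \texttt{lower\_bound} that returns the value $Q$ for the class-attribute matrix restricted to a subset of attribute columns, e.g., via the linear program of \Cref{sec:compute_bound}; the function names are ours).
\begin{verbatim}
def greedy_attributes(A, lower_bound, budget=15):
    # A: (k, n) class-attribute matrix; lower_bound(A_sub) returns the value Q
    # for the matrix restricted to a subset of the attribute columns.
    n = A.shape[1]
    chosen = []
    for _ in range(min(budget, n)):
        remaining = [i for i in range(n) if i not in chosen]
        # Add the attribute whose inclusion yields the smallest lower bound.
        best = min(remaining, key=lambda i: lower_bound(A[:, chosen + [i]]))
        chosen.append(best)
    return chosen
\end{verbatim}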
Due to space constraints, in the first row of~\cref{fig:err_attr} we report results for aPY, AwA2, and CUB. The results for SUN are similar and are reported in~\Cref{fig:err_attr_SUN} in~\cref{app:exp:res}.
We observe that the value of the lower bound can be significantly lower than the error rate of the ZSL models.
This gap is most probably due to the fact that the learned map from images to attributes does not generalize perfectly to the unseen classes.
In fact, in this setting we can identify two main sources of error for the ZSL models: (1) the arbitrary error due to the domain shift, and (2) the error due to how discriminative the attribute space is in differentiating between the classes. Our lower bound only addresses the latter, as no method that maps attributes to classes can guarantee a smaller error than the lower bound given only the information of the class-attribute matrix.
Nonetheless, for CUB and SUN we observe that the empirical error of ZSL models roughly follows the trend of the lower bound.
This suggests that the lower bound can still be used as a tool to capture how the additional information provided by an attribute leads to improvements of the ZSL models.
In the second set of experiments, we empirically demonstrate our theory by showing that even if we minimize the error due to domain shift, there exist data for which no method can do better than our lower bound. To this end, for each dataset we adversarially generate synthetic data with attribute values satisfying the dataset's class-attribute matrix. Specifically, we use the same class-attribute matrix with $15$ attributes as in the previous set of experiments in order to compute the adversarial distribution $p$ over attributes and classes according to the linear program introduced in \Cref{sec:compute_bound}. The data are generated by sampling attribute-class pairs from this distribution, and using the attribute vector as the feature vector. In order to minimize the error due to domain shift, this distribution is used to generate data for both training and testing of the ZSL methods, and the same class-attribute matrix is used for both seen and unseen classes. We report additional details on this experimental setup and synthetic data generation in \Cref{app:exp}.
We report the results of the experiments in the second row of~\cref{fig:err_attr}, iterating over the same attributes greedily selected in the first set of experiments for each dataset.
(For this set of experiments, we do not report results for DAZLE as this method relies on the input items being images, so it does not apply to our synthetic data.)
In this case, the methods are able to achieve errors that are comparable with the lower bound, as we minimized the error due to domain shift. This experiment empirically validates that even in the absence of domain shift, there exists a distribution of the data that satisfies the constraints imposed by the class-attribute matrix and for which no method can do better than the lower bound. This adversarial distribution represents an intrinsic error gap due to the quality of the information provided by the class-attribute matrix.
This is the first work to quantify such information in ZSL.
\begin{figure*}[t]
\centering
\includegraphics[width=.326\columnwidth]{imgs/APY_Unseen.pdf}
\includegraphics[width=.326\columnwidth]{imgs/AWA2_Unseen.pdf}
\includegraphics[width=.326\columnwidth]{imgs/CUB_Unseen.pdf}
\includegraphics[width=.326\columnwidth]{imgs/synt_APY_Unseen.pdf}
\includegraphics[width=.326\columnwidth]{imgs/synt_AWA2_Unseen.pdf}
\includegraphics[width=.326\columnwidth]{imgs/synt_CUB_Unseen.pdf}
\caption{\textbf{Comparison of the lower bound with the empirical error.} We plot the lower bound on the error (\textbf{Q}), and the error of ZSL methods with attributes (\textbf{DAP, ESZSL, SAE, ALE}, and \textbf{DAZLE}).
The first row reports these values computed on the unseen classes of the aPY, AwA2, and CUB, varying the number of available attributes.
The second row reports the values for the adversarially generated synthetic data.
The bands indicate the standard errors on five runs with different seeds for randomized methods.
These results validate that even in the absence of domain shift, there exists a distribution of the data that satisfies the constraints imposed by the class-attribute matrix and for which no method can do better than the lower bound.
}
\label{fig:err_attr}
\vspace{-1pt}
\end{figure*}
\subsection{Pairwise Misclassification Prediction}
\label{sec:exp:misclassification}
\Cref{binary-adversarial-computation} shows how to efficiently compute the lower bound on the error to distinguish between a pair of classes given the class-attribute matrix.
In addition to the overall bound on error, it also gives us fine-grained information about which classes are harder to distinguish among.
We define the \emph{pairwise lower bound error matrix} $\bm{L}$, whose entry $L_{j,j'}$ is the lower bound on the error computed as in~\cref{binary-exact-computation}, for all classes $j,j' \in [k]$, $j \neq j'$.
A large entry $L_{j,j'}$ between two classes $j\neq j'$ indicates that it is hard (in the worst-case) to distinguish between them.
In this section, we compare the matrix $\bm{L}$ with the classification errors made by the ZSL models discussed in~\Cref{sec:data}.
In particular, we want to determine whether the pairwise lower bounds on the errors are predictive of the misclassification errors made by the ZSL models. Specifically, a large lower bound on the error for a pair of classes indicates that a ZSL model would likely confuse one class with the other.
For a given dataset and a ZSL method, we build a \emph{misclassification error matrix} $\bm{M}$. The entry $M_{j,j'}$ is computed as
\begin{align*}
\Pr_{x \sim \mathcal{D}}( h(x) = j \land y(x) = j' | y(x) \in \{j ,j'\})
+ \Pr_{x \sim \mathcal{D}}( h(x) = j' \land y(x) = j | y(x) \in \{j ,j'\})
\end{align*}
for all $j,j' \in [k]$, $j \neq j'$, where $h(\cdot)$ is the ZSL model. The entry $M_{j,j'}$ represents the probability that the model misclassifies an item of class $j$ as class $j'$ or vice versa. We estimate $\bm{M}$ using test data of the unseen classes.
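A minimal sketch of how $\bm{M}$ can be estimated from test predictions is given below (assumed inputs \texttt{y\_pred} and \texttt{y\_true}; not the authors' code).
\begin{verbatim}
import numpy as np

def misclassification_matrix(y_pred, y_true, k):
    # y_pred, y_true: integer arrays over the test items, labels in {0,...,k-1}.
    M = np.zeros((k, k))
    for j in range(k):
        for jp in range(k):
            if j == jp:
                continue
            mask = (y_true == j) | (y_true == jp)        # condition y in {j, j'}
            if mask.sum() == 0:
                continue
            confused = ((y_pred == j) & (y_true == jp)) | \
                       ((y_pred == jp) & (y_true == j))
            M[j, jp] = confused[mask].mean()
    return M
\end{verbatim}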
In~\Cref{fig:error_matrices}, we plot $\bm{L}$ together with the misclassification matrices $\bm{M}$ of two ZSL methods: DAP and ESZSL, computed on the unseen classes of aPY.
The pairwise lower bound matrix $\bm{L}$ has large values within multiple groups of semantically similar classes, e.g., animals and transportation means. This is in line with human intuition, as we expect visually similar classes to exhibit similar attributes. Correspondingly, the misclassification matrices of DAP and ESZSL highlight the presence of many
errors for classes belonging to these groups. We also note that ZSL models could misclassify other pairs of classes due to other sources of error, such as an inaccurate map from images to attributes.
We report additional experimental analysis on all datasets in \Cref{app:exp:res}.
\begin{figure*}[t]
\centering
\includegraphics[width=.314\columnwidth]{imgs/apy/complete_unseen.png}
\includegraphics[width=.322\columnwidth]{imgs/apy/sym_DAP_error_distribution_unseen.png}
\includegraphics[width=.328\columnwidth]{imgs/apy/sym_ESZSL_error_distribution_unseen.png}
\caption{\textbf{Pairwise misclassification matrices.} For the unseen classes of aPY, we plot the pairwise lower bound between pairs of classes $\bm{L}$ (\cref{binary-exact-computation}), and the misclassification error matrix $\bm{M}$ of two ZSL models: DAP and ESZSL.
Darker squares indicate higher values, and light blue on the diagonal is 0.
High values of the lower bound indicate classes that are harder (in the worst-case) to distinguish in theory, and high values in $\bm{M}$ indicate pairs of classes that are often confused by the ZSL model.
}
\label{fig:error_matrices}
\vspace{-1em}
\end{figure*}
\section{Introduction}
\label{introduction}
Labeled training data is often scarce or unavailable, and it can be very costly to obtain.
For this reason, there is a growing interest in developing methods that can exploit sources of information other than labeled data, such as zero-shot learning (ZSL).
In ZSL, we want to recognize items of \textit{unseen classes}, for which labeled data is not available.
A ZSL model is trained on a disjoint set of similar classes, called \textit{seen classes}, for which labeled data is available instead.
The model is trained to map examples to auxiliary information describing the seen classes.
Then, at test time, predictions can be made using only descriptions of the unseen classes.
While ZSL is increasingly common in practice, from a theoretical perspective ZSL is a hard problem that defies analysis, because in the worst case there can be an arbitrary shift between the distributions of the seen and unseen classes.
In this work, we take a step towards a better theoretical understanding of ZSL.
We investigate the question: \emph{Given only auxiliary information in the form of attributes describing unseen classes, what is the smallest worst-case error that {\bf any} method can guarantee?}
We provide the first non-trivial answer to this question by developing a framework based on adversarial optimization.
We also show that this framework has practical application as a method for identifying when the predictions of ZSL methods on certain unseen classes are more likely to be incorrect.
ZSL models have obtained impressive accuracy in practice, both for vision~\citep{xian2018zero} and language domains~\citep{sanh:iclr22,wei:iclr22}, but they come with no theoretical characterization of their accuracy.
To address this gap, we analyze the attribute-based Zero-Shot Learning setting that includes a large portion of the classic methods proposed in the literature \citep{RomeraParedes2015AnES,Lampert2014AttributeBasedCF,Akata2015cvpr,Akata2016}, as well as more recent end-to-end deep learning approaches~\citep{Kodirov2017cvpr,xian2018zero,Huynh-DAZLE:CVPR20}.
While this setting does not include all varieties of ZSL (discussed further in Section~\ref{sec:related-work}), we view this work as a critical first step towards building up a broader theory of ZSL. In attribute-based ZSL, an attribute is a property of an item to be classified.
Each item can either exhibit a given attribute or not.
For example, an image of a lion would often exhibit the attribute tail, while the image of a sheep would not.
Attribute-based ZSL models are trained using \textit{attribute representations} of the items of the seen classes, and a \textit{class-attribute} matrix that describes pairwise relations between the seen classes and each attribute.
At test time, predictions are made for the unseen classes given the items' attribute representation and a new class-attribute matrix.
\citet{RomeraParedes2015AnES} is one of the few works to address theoretical questions related to ZSL.
Studying attribute-based ZSL, they show a pair of basic bounds that characterize sufficient conditions for either learning or impossibility:
(1) if there is no shift from the seen to the unseen classes, then learning is trivial, and (2) if the vectors of attributes of the seen classes are mutually orthogonal with those of the unseen classes, then the error can be arbitrarily large in the worst case.
In this paper, we provide the first non-trivial lower bound for ZSL with attributes, addressing the open problem posed by~\citet{RomeraParedes2015AnES}.
We analyze ZSL with attributes by first observing that it is a two stage process consisting of a training phase and an inference phase.
In the training phase, we learn a map from the items to the attribute space using the seen classes, while in the inference phase we use the class-attribute matrix to infer the correct class given the item-attribute representation.
Based on this two-stage decomposition, we can identify two kinds of errors.
The first kind, related to the training phase, is due to domain shift.
The map from items to the attribute space that is trained on the seen classes might not generalize accurately to the unseen classes.
This contribution to the error can be arbitrarily large without introducing strong assumptions, as no labeled data is available for the unseen classes.
Thus it is impossible to characterize the domain shift between seen and unseen classes.
The second kind, related to the inference phase, is due to the fact that the class-attribute matrix might not fully differentiate among the unseen classes.
In particular, there can be an item of the unseen classes with a set of attributes that, according to the class-attribute matrix, conforms with the description of two different classes.
The first kind of error, domain shift, has been extensively studied both in the theory and the experimental literature \citep{mansour2009domain,ben2010theory,Sener2016LearningTR,Pinheiro2018UnsupervisedDA,Luo2019TakingAC}.
In this work, for the first time, we theoretically characterize the contribution of the second kind of error. It is important to understand and characterize this error for specific ZSL tasks because it corresponds to an inherent information gap in the problem setting that cannot be circumvented with a smarter algorithm.
We provide tight lower and upper bounds on the worst-case error of the best map from attribute representations to classes based on the class-attribute matrix.
Our analysis gives a lower bound in the sense that it bounds from below the minimum error that any method can guarantee given only the information of the class-attribute matrix.
The class-attribute matrix specifies the fraction of items in each class that exhibit each attribute. There is a range of class-attribute distributions that satisfy the constraints defined by a given matrix. We give a lower bound on the error of the best possible method for the worst-case distribution in that range.
This distribution represents a worst-case correlation between attributes that satisfies the class-attribute matrix while maximizing the difficulty of distinguishing between attribute representations of items belonging to distinct classes.
Our analysis also gives an upper bound in the sense that we show a randomized classifier that achieves at most the error of the lower bound, assuming perfect item-to-attribute mapping.
This also shows that the lower bound is tight.
Interestingly, the value of the lower bound can also be interpreted as the quality of the information provided by the class-attribute matrix.
To the best of our knowledge, this is the first work to quantify such information.
\textbf{Contributions.}
Our main contributions are the following:
\begin{compactenum}
\item We show the first non-trivial lower bound for attribute-based ZSL
(\Cref{sec:adv}).
\item We formulate the lower bound given a class-attribute matrix as a linear program (\Cref{sec:compute_bound}).
\item We show a closed form expression for the lower bound for binary classification (\Cref{binary-exact-computation}).
\item We show that the lower bound is tight: we exhibit a randomized classifier
whose expected error is upper bounded by the value of the lower bound (\Cref{subsection:tight}).
\item We run extensive experiments comparing the theoretical results with the error of popular attribute-based ZSL methods, on benchmark datasets. We show that information given by the bound can be predictive of how standard methods behave, including which classes will likely be confused with others (\Cref{sec:experiments}).
\end{compactenum}
\section{Preliminaries}
\label{preliminaries}
We denote scalar and generic items using lowercase letters, vectors using lowercase bold letters, and matrices using bold uppercase letters. Given two vectors $\bm{v}$ and $\bm{v}'$, we denote with $\bm{v}\bm{v}'$ the concatenation of the two vectors. For any $n \in \mathbb{N}$, we denote with $[n]$ the set $\{ 1, \ldots, n\}$. Due to space constraints, all proofs are deferred to the appendix.
Let $\mathcal{D}$ be a distribution defined over the \textit{classification domain} $\mathcal{X}$. A \textit{multiclass classification task} is defined by a \textit{labeling function} $y: \mathcal{X} \rightarrow \mathcal{Y} = [k]$ that maps each \textit{item} $x \in \mathcal{X}$ to a class $j$ in the label space $\mathcal{Y}$, where $k \geq 2$. We say that a multiclass classification task is \textit{balanced} if for each $j \in [k]$, it holds that $\Pr_{x \sim \mathcal{D}}[ y(x) = j] = 1/k$. Unless otherwise stated, we assume that the classification task is balanced.
This assumption is not restrictive, and as we will observe later, it can be changed if a different prior is known on the class probabilities. We will show that our lower bound holds even if we do not assume balanced classes.
We also assume to have access to $n$ \textit{attribute functions} $\fatt{1},\ldots,\fatt{n}$, where $\fatt{i}: \mathcal{X} \rightarrow \{0,1\}$ for $i \in [n]$. We say that a classification item $x \in \mathcal{X}$ has attribute $i \in [n]$ if $\fatt{i}(x) = 1$. For ease of notation, we define $\bm{\psi}(x) \doteq ( \fatt{1}(x), \ldots, \fatt{n}(x))^T$. The codomain of $\bm{\psi}$ is $\{0,1\}^n$, and it is referred to as \textit{attribute space}.
All the information about the target unseen classes available to the algorithm is encoded in a \textit{class-attribute matrix} $\bm{\mat{}{}} \in [0,1]^{k \times n}$. The matrix provides information on the relations between classes and attributes. In particular for a class $j \in [k]$, and an attribute $i \in [n]$, $A_{j,i}$ is the probability that $\fatt{i}(x) = 1$ given that $y(x) = j$, i.e.,
\begin{align}
\label{matrix-class-attribute-relation}
A_{j,i} = \Pr_{x \sim \mathcal{D}}[ \psi_i(x) = 1 | y(x) = j] \enspace .
\end{align}
An \textit{attribute-class classifier} $g$ is a map from vectors in the attribute space to classes, i.e., $g : \{0,1\}^n \rightarrow [k]$. The error of $g$ is $
\varepsilon( g ) \doteq \Pr_{x \sim \mathcal{D}}[g \circ \bm{\psi}(x) \neq y(x)]$.
Let $\mathcal{G}$ be a collection of all the possible deterministic maps $\{0,1\}^n \rightarrow [k]$ from the attribute space to the $k$ classes. We are interested in evaluating $\min_{g \in \mathcal{G}} \varepsilon(g)$.
As we focus on the contribution of the information provided by the class-attribute matrix, we assume access to the attribute functions $\psi_1,\ldots,\psi_n$. In practice, the map to the attribute space is learned on the available labeled data for the seen classes \citep{Lampert2014AttributeBasedCF,RomeraParedes2015AnES};
it is likely noisy, and can be a cause of additional error.
Let $p^*$ be the (unknown) probability mass function (PMF) of the random vector $(\psi_1(x), \ldots, \psi_n(x), y(x))$ where $x \sim \mathcal{D}$. The support of $p^*$ is $\{0,1\}^n \times [k]$. For $\bm{v} \in \{0,1\}^n$, and $j \in [k]$, let
$p^*(\bm{v},j) \doteq \Pr_{x \sim \mathcal{D}}[ \bm{\psi}(x) = \bm{v} \land y(x) = j]$.
The error of $g$ is a function of $p^*$:
\begin{align}
\label{error-of-a-distribution}
\varepsilon(g) = \varepsilon(g,p^*) \doteq 1 - \sum_{\bm{v}\in \{0,1\}^n} p^*(\bm{v},g(\bm{v}))
\end{align}
A function $g^* \in \mathcal{G}$ that attains minimum error $\varepsilon(g^*) = \min_{g \in \mathcal{G}}\varepsilon(g)$ is a Bayes optimal classifier with respect to $p^*$, i.e. for each $\bm{v} \in \{0,1\}^n$, we have that $g^*(\bm{v}) = \operatorname*{\mathrm{arg\,max}}_{j \in [k]} p^*(\bm{v},j)$. Thus,
\begin{align}
\label{optimal-minimum}
\min_{g \in \mathcal{G}} \varepsilon(g) = 1 - \sum_{\bm{v}\in \{0,1\}^n} \max_{j \in [k]} p^*(\bm{v},j) \enspace .
\end{align}
We do not have access to labeled data for the unseen classes, so we cannot estimate $p^*$.
Instead, we construct a lower bound with respect to the set of all distributions that fit the available information.
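As a small illustration of \eqref{optimal-minimum}, the following sketch (our own; it assumes the pmf is given as a $2^n \times k$ array) computes the Bayes optimal attribute-class classifier and its error when $p^*$ is known.
\begin{verbatim}
import numpy as np

def bayes_optimal(p):
    # p: (2**n, k) array with p[v, j] = Pr[psi(x) = v and y(x) = j]; sums to 1.
    g_star = p.argmax(axis=1)                     # g*(v) = argmax_j p(v, j)
    error = 1.0 - p.max(axis=1).sum()             # min_g eps(g)
    return g_star, error
\end{verbatim}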
\section{Background and Related Work}
\label{sec:related-work}
Much early work on ZSL focused on using logical descriptions of the classes as auxiliary information, including attributes~\citep{chang:aaai08, lampert2009learning}.
Since then, an increasing number of ZSL methods have been proposed, which differ in methodology and the auxiliary information they use.
Examples of auxiliary information are symbolic descriptions of classes~\citep{chang:aaai08, lampert2009learning}, pre-trained embedding of the classes~\citep{frome:neurips13}, natural language descriptions~\citep{obeidat:naacl19,brown:neurips20}, and knowledge graphs \citep{wang:cvpr18,kampffmeyer:cvpr19,nayak:arxiv20}.
Recent ZSL methods can be grouped into two main categories: embedding-based and generation-based~\citep{pourpanah:survey2020}.
Seminal embedding-based works used two-layer neural networks to link the image feature space to the semantic one~\citep{Socher2013ZeroShotLT}.
Later, they evolved into deep neural networks that either map semantic features into the visual space~\citep{Ba2015PredictingDZ,Zhang2017LearningAD,Changpinyo2017PredictingVE} or project both the image and semantic features into the same space~\citep{zhang:cvpr15,radford:icml21}.
Generation-based approaches employ various kinds of Generative Adversarial Networks (GANs)~\citep{Mirza2014ConditionalGA} to synthesize the features of the unseen classes, and use them to train a ZSL classifier in a supervised fashion~\citep{Felix2018MultimodalCG,Li2019LeveragingTI,Xian2018FeatureGN,Xian2019FVAEGAND2AF,Narayan2020LatentEF}.
ZSL with attributes generally consists of a first stage that learns a linear map from items to the attribute space.
Then, the class-attribute matrix is used to infer the correct class given the item-attribute representation~\citep{xian2018zero}.
ZSL with attributes can be seen as a special case of embedding-based ZSL, in which the class embeddings are the rows of the class-attribute matrix.
Analyzing more general embedding-based or generation-based ZSL methods is challenging because they rely on deep neural networks for which relatively little theory is available.
Inspired by previous work to describe classes using error-correcting output codes~\citep{dietterich1994solving},~\citet{palatucci2009zero} were the first to propose a ZSL algorithm for which they can provide a theoretical analysis.
The algorithm learns linear classifiers individually for each binary attribute, and the attributes are mapped to the closest class-attribute representation.
While they are able to provide a PAC bound, their analysis relies on several strong assumptions that limit the problem setting.
First, they assume that they can learn each attribute independently, but attribute dependency is a widely recognized problem for attribute detection \citep{Jakulin2003AnalyzingAD}.
Second, they assume that each class has a unique attribute representation, i.e. each attribute must be either present or not in all the items of a given class.
Finally, they also assume that they are able to sample classes from a given distribution, and they are able to generalize to the non-sampled classes. That is, they do not separate beforehand between seen and unseen classes, which is the common scenario observed in ZSL settings.
Conversely, our lower bound does not assume a unique binary representation for each class, as we are given a class-attribute matrix that provides the probabilities to observe an attribute given an item of a class.
Also, our lower bound takes into account the possible correlation between attributes, and it is computed based on the information provided on the given unseen classes.
In more recent work, \citet{RomeraParedes2015AnES} draw a connection between transfer learning \citep{ben2010theory} and ZSL to provide a novel theoretical result.
In particular, they show that their model is not able to generalize if the attribute representations of the seen classes are orthogonal to those of the unseen classes.
Intuitively, if those representations are orthogonal, the attribute map learned for the seen classes would fail to provide information for the unseen classes.
This is an impossibility result, and it does not quantify the information provided for the unseen classes.
Unfortunately, transfer-learning- or domain-adaptation-like bounds are challenging to estimate in a ZSL setting.
In fact, one term of those bounds requires access to labeled data for the unseen classes, which is unavailable in ZSL.
Another term, the discrepancy, depends on the difference between the attribute representations of the classes and the distribution of the items between seen and unseen classes.
While it would be theoretically possible to compute the discrepancy based on the information available, its computation is very challenging and it has been possible only in very specific cases~\citep{mansour2009domain}.
Our novel lower bound is developed using adversarial techniques that describe the worst-case scenario with respect to the information available. It is inspired by recent work on semi-supervised learning, where the goal is to use the information provided by weak supervision sources~\citep{balsubramani2015optimally, arachie2021general, mazzetto2021semi, mazzetto2021adversarial}. The adversarial approach allows us to handle the possible dependencies between the attributes.
| {
"timestamp": "2022-05-27T02:04:37",
"yymm": "2205",
"arxiv_id": "2205.13068",
"language": "en",
"url": "https://arxiv.org/abs/2205.13068",
"abstract": "We develop a rigorous mathematical analysis of zero-shot learning with attributes. In this setting, the goal is to label novel classes with no training data, only detectors for attributes and a description of how those attributes are correlated with the target classes, called the class-attribute matrix. We develop the first non-trivial lower bound on the worst-case error of the best map from attributes to classes for this setting, even with perfect attribute detectors. The lower bound characterizes the theoretical intrinsic difficulty of the zero-shot problem based on the available information -- the class-attribute matrix -- and the bound is practically computable from it. Our lower bound is tight, as we show that we can always find a randomized map from attributes to classes whose expected error is upper bounded by the value of the lower bound. We show that our analysis can be predictive of how standard zero-shot methods behave in practice, including which classes will likely be confused with others.",
"subjects": "Machine Learning (cs.LG)",
"title": "Tight Lower Bounds on Worst-Case Guarantees for Zero-Shot Learning with Attributes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126519319939,
"lm_q2_score": 0.72487026428967,
"lm_q1q2_score": 0.7094396986695882
} |
https://arxiv.org/abs/1502.06332 | A sharp subelliptic Sobolev embedding theorem with weights | The purpose of this short article is to prove some potential estimates that naturally arise in the study of subelliptic Sobolev inequalities for functions. This will allow us to prove a local subelliptic Sobolev inequality with the optimal amount of smoothing, as well as a variant of that which describes quantitatively an improvement of the inequality as one gets away from certain characteristic varieties. | \section{Statement of results}
Subelliptic Sobolev-type estimates in general have received a lot of attention over the years. We list some results that share a similar theme as ours: Capogna-Danielli-Garofalo \cite{MR1312686}, Cohn-Lu-Wang \cite{MR2345338}, Franchi-Gallot-Wheeden \cite{MR1314734}, Franchi-Lu-Wheeden \cite{MR1343563}, \cite{MR1354890}, \cite{MR1383947}, Franchi-P{\'e}rez-Wheeden \cite{MR1744780}, Franchi-Wheeden \cite{MR1455453}, Jerison \cite{MR850547}, Lu \cite{MR1202416}, \cite{MR1286482}, Lu-Wheeden \cite{MR1642822}, \cite{MR1792599}, Muckenhoupt - Wheeden \cite{MR0340523}, P{\'e}rez-Wheeden \cite{MR1818113}, \cite{MR1962949} and Sawyer-Wheeden \cite{MR1175693}.
In \cite{PL}, the author has proved a Sobolev inequality for the $\overline{\partial}_b$-complex on $(0,q)$ forms on a certain class of CR manifolds of finite type. In this current work, the focus will be on functions (rather than forms), and the result is real-variable in nature.
To describe our results, we need to introduce some notations. Following Nagel, Stein and Wainger \cite{MR793239} and \cite{MR1882665}, let $\Omega \subseteq \mathbb{R}^N$ be a connected open set, and let $Y_1, \dots, Y_q$ be a list, possibly with repetitions, of smooth real vector fields on $\Omega$. Assume that to each $Y_j$ we associate an integer $d_j \geq 1$, called the formal degree of $Y_j$. The collection $\{Y_j\}_{j=1}^q$ is said to be of finite homogeneous type on $\Omega$ if they span $\mathbb{R}^N$ at every point in $\Omega$, and that for each $1 \leq j,k \leq q$, $$[Y_j,Y_k] = \sum_{d_l \leq d_j + d_k} c_{j,k}^l (x) Y_l$$ for some $c_{j,k}^l \in C^{\infty}(\Omega)$. For example, if $X_1, \dots, X_n$ are smooth real vector fields on $\Omega$ that satisfy Hormander's condition, meaning that successive commutators of $X_1,\dots,X_n$ of some finite length $r$ already span the tangent space at every point of $\Omega$, then if $\{Y_j\}$ is the collection of successive commutators of $X_1,\dots,X_n$ up to length $r$, it is of finite homogeneous type.
With such a collection $\{Y_j\}$, one can then define a control metric $\rho$ as follows. For each $\delta > 0$, let $C(\delta)$ be the set of absolutely continuous curves $\phi \colon [0,1] \to \Omega$ such that $\phi'(t) = \sum_{j=1}^q a_j(t) Y_j(\phi(t))$ with $|a_j(t)| \leq \delta^{d_j}$ for all $j$ and almost all $t \in [0,1]$. For $x,y \in \Omega$, let $\rho(x,y) = \inf \{ \delta > 0 \colon \text{there is a curve } \phi \in C(\delta) \text{ such that } \phi(0)=x \text{ and } \phi(1)=y \}.$ We shall write $B(x,\delta)$ for the metric ball centered at $x$ and of radius $\delta$, namely $\{y \in \Omega \colon \rho(x,y) < \delta\}$, and $V(x,y)$ for the Lebesgue measure of the ball $B(x,\rho(x,y))$.
If now $I$ is an $N$-tuple $(i_1,\dots,i_N)$, $1 \leq i_j \leq q$, we write $$Q_I = \sum_{j=1}^N d_{i_j},$$ and $$\lambda_I(x) = \det(Y_{i_1}, \dots, Y_{i_N})(x)$$ for $x \in \Omega$. Here we are taking the determinant of the $N \times N$ matrix whose $j$-th column consists of the components of $Y_{i_j}$ in the coordinate basis $\frac{\partial}{\partial x_1}, \dots, \frac{\partial}{\partial x_N}$. These numbers are important in computing the volumes of the metric balls (see Theorem~\ref{thm:HT} below). It is also in terms of these numbers that we state our main results.
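As a quick illustration of these quantities (an illustrative sketch for the simplest Grushin-type choice of vector fields, not tied to any particular result below), take $N = 2$, $X_1 = \frac{\partial}{\partial x_1}$ and $X_2 = x_1 \frac{\partial}{\partial x_2}$ on $\mathbb{R}^2$, so that the commutators up to length $2$ give the list $Y_1 = X_1$, $Y_2 = X_2$ (formal degrees $1$) and $Y_3 = [X_1,X_2] = \frac{\partial}{\partial x_2}$ (formal degree $2$). Then
$$
\lambda_{(1,2)}(x) = \det\begin{pmatrix} 1 & 0 \\ 0 & x_1 \end{pmatrix} = x_1, \quad Q_{(1,2)} = 2, \qquad \lambda_{(1,3)}(x) = 1, \quad Q_{(1,3)} = 3, \qquad \lambda_{(2,3)}(x) = 0,
$$
and Theorem~\ref{thm:HT} below then gives $|B(x,\delta)| \simeq \max\left( |x_1|\, \delta^2, \delta^3 \right)$ on compact subsets of $\mathbb{R}^2$.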
\begin{thm}\label{thm:WPE}
For each $N$-tuple $I$ and each compact subset $E$ of $\Omega$, the map $$T_If(x) = |\lambda_I(x)|^{\frac{1}{Q_I}} \int_E \frac{\rho(x,y)}{V(x,y)} f(y) dy$$ maps $L^p(E)$ boundedly into $L^{p^*}(E)$, where $$\frac{1}{p^*} = \frac{1}{p} - \frac{1}{Q_I}, \quad 1 < p < Q_I.$$ It also maps $L^1(E)$ into weak-$L^{\frac{Q_I}{Q_I-1}}(E)$. Here $dy$ is the Lebesgue measure on $E$, and all the $L^p$ spaces are taken with respect to the Lebesgue measure on $E$.
\end{thm}
We also have:
\begin{thm}\label{thm:mPE}
For each $N$-tuple $I$ and each compact subset $E$ of $\Omega$, the map $$Tf(x) = \int_E \frac{\rho(x,y)}{V(x,y)} f(y) dy$$ maps $L^1(E,dy)$ boundedly into weak-$L^{\frac{Q_I}{Q_I-1}}(E,d\mu_I)$, where $$d\mu_I(x) := |\lambda_I(x)|^{\frac{1}{Q_I-1}} dx,$$ and $dx$ is the Lebesgue measure on $E$.
\end{thm}
These allow us to prove the following subelliptic Sobolev inequality for Hormander's vector fields:
\begin{thm}\label{thm:WSI}
Let $X_1, \dots, X_n$ be smooth real vector fields on a connected open set $\Omega \subseteq \mathbb{R}^N$, whose commutators of length $\leq r$ span at every point of $\Omega$. List the commutators of $X_1, \dots, X_n$ of length $\leq r$ as $Y_1, \dots, Y_q$, and define $\lambda_I(x)$ for each $N$-tuple $I$ and $x \in \Omega$ as above. Let $\Omega'$ be a relatively compact open subset of $\Omega$ with smooth boundary and $I$ be an $N$-tuple. Then for each $f \in C^{\infty}(\overline{\Omega'})$, we have
$$
\left(\int_{\Omega'} |f(x)|^{p^*} |\lambda_I(x)|^{\frac{p}{Q_I-p}} dx\right)^{\frac{1}{p^*}} \leq C \left( \int_{\Omega'} |\nabla_b f(x)|^p + |f(x)|^p dx \right)^{\frac{1}{p}},
$$
where
\begin{equation} \label{eq:pp*}
\frac{1}{p^*} = \frac{1}{p}-\frac{1}{Q_I} \quad \text{and} \quad 1 \leq p < Q_I.
\end{equation}
\end{thm}
Here the length of the subelliptic gradient $|\nabla_b f|$ is defined by
$$
|\nabla_b f|^2 := |X_1 f|^2 + \dots + |X_n f|^2.
$$
By picking $I$ to be the $N$-tuple with minimal $Q_I$ such that $|\lambda_I| \simeq 1$ around each point in $\overline{\Omega'}$, and patching the estimates together, we obtain the following corollary:
\begin{cor} \label{cor:SIQ}
Let $X_1, \dots, X_n$ be as in Theorem~\ref{thm:WSI}. For each $x \in \Omega$, let $Q(x)$ be the non-isotropic dimension at $x$, defined by
$$
Q(x) := \sum_{j=1}^r j n_j(x), \qquad n_j(x) := \dim V_j(x) - \dim V_{j-1}(x).
$$
where $V_j(x)$ is the span of the commutators of $X_1, \dots, X_n$ of length $\leq j$ at $x$. Let $\Omega'$ be a relatively compact open subset of $\Omega$ with smooth boundary, and define the non-isotropic dimension $Q$ of $\overline{\Omega'}$ by setting $$Q := \sup_{x \in \overline{\Omega'}} Q(x).$$ Then for any $f \in C^{\infty}(\overline{\Omega'})$,
\begin{equation}\label{eq:SSI}
\|f\|_{L^{p^*}(\Omega')} \leq C (\|\nabla_b f\|_{L^p(\Omega')} + \|f\|_{L^p(\Omega')}),
\end{equation}
where $$\frac{1}{p^*} = \frac{1}{p} - \frac{1}{Q}, \quad 1 \leq p < Q.$$
\end{cor}
This is a subelliptic Sobolev inequality with a maximal degree of smoothing. It implies Proposition 1 of \cite{PL}, which was stated there without proof. See also the work of Capogna, Danielli and Garofalo \cite{MR1266765}, Varopoulos \cite{MR1070036} and Gromov \cite[Section 2.3.D'']{MR1421823}. We shall also prove that the exponent $p^*$ given in the corollary is \emph{always} sharp; in other words, this inequality cannot hold for any larger value of $p^*$. See Section~\ref{sect:cor} below.
The theorem for general $I$, on the other hand, says that one gets more smoothing as soon as one looks at regions where $\lambda_I(x)$ does not degenerate to 0, and it tells us how such an improved inequality degenerates as $\lambda_I(x)$ degenerates to 0.
For instance, if we have the vector fields $\frac{\partial}{\partial x_1}$ and $x_1^{r-1}\frac{\partial}{\partial x_2}$ on $\mathbb{R}^2$, then Theorem~\ref{thm:WSI} (and a rescaling argument) implies that
$$
\left(\int_{\mathbb{R}^2} |f(x_1,x_2)|^{p^*} |x_1|^{(r-1)\frac{p}{2-p}} dx_1dx_2\right)^{\frac{1}{p^*}} \leq C \left( \left\|\frac{\partial f}{\partial x_1} \right\|_{L^p(\mathbb{R}^2)} + \left\|x_1^{r-1} \frac{\partial f}{\partial x_2} \right\|_{L^p(\mathbb{R}^2)}\right),
$$
where
$$\frac{1}{p^*} = \frac{1}{p} - \frac{1}{2}, \quad 1 \leq p < 2.$$ Note that the factor $|x_1|^{(r-1)\frac{p}{2-p}}$ on the left hand side tends to 0 as the vector field $x_1^{r-1} \frac{\partial}{\partial x_2}$ degenerates, i.e. as one moves towards the axis $\{x_1 = 0\}$.
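For the reader's convenience, here is a brief sketch of where these exponents come from, using only the definitions above (assume $r \geq 2$). Listing the generators first, say $Y_1 = \frac{\partial}{\partial x_1}$ and $Y_2 = x_1^{r-1}\frac{\partial}{\partial x_2}$ (each of formal degree $1$), and taking $I = (1,2)$, one finds
$$
\lambda_I(x) = \det\begin{pmatrix} 1 & 0 \\ 0 & x_1^{r-1} \end{pmatrix} = x_1^{r-1}, \qquad Q_I = 2,
$$
so the weight in Theorem~\ref{thm:WSI} is $|\lambda_I(x)|^{\frac{p}{Q_I - p}} = |x_1|^{(r-1)\frac{p}{2-p}}$ and $\frac{1}{p^*} = \frac{1}{p} - \frac{1}{Q_I} = \frac{1}{p} - \frac{1}{2}$, as displayed. By contrast, on a domain meeting the axis $\{x_1 = 0\}$ the non-isotropic dimension of Corollary~\ref{cor:SIQ} is $Q = r+1$ (at a point with $x_1 = 0$ one has $n_1 = 1$, $n_r = 1$ and $n_j = 0$ otherwise), so the unweighted inequality (\ref{eq:SSI}) only yields $\frac{1}{p^*} = \frac{1}{p} - \frac{1}{r+1}$.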
In the case where the underlying space is a homogeneous group, however, Theorem~\ref{thm:WSI} does not improve upon the known results, because $\lambda_I \equiv 0$ unless $Q_I$ is bigger than or equal to the homogeneous dimension of the group.
The observation (as depicted in Theorem~\ref{thm:WSI} above) that one can use different weights at different points in a Sobolev or isoperimetric inequality is certainly not new; see for example Franchi-Gallot-Wheeden \cite{MR1314734}, Franchi-Lu-Wheeden \cite{MR1343563}, \cite{MR1354890}, \cite{MR1383947}. Typically, when one uses different weights, one attaches a `dimension' to every point, which may vary not just with the weights being used, but also from point to point. In Franchi-Wheeden \cite{MR1455453}, they introduced the concept of a \emph{compensation couple}, in an attempt to `stabilize' the dimensions used at various points (when such a couple exists).
In light of this, it would be natural to ask what the `best' weight is in any given situation. Unfortunately our results have little to say in this direction.
One can also prove the following variant of Theorem~\ref{thm:WSI}, where instead of a zeroth order term in $f$ on the right hand side, we have $f$ minus the average of $f$ on the left-hand side. Moreover, one can replace the smoothness condition on $\Omega'$, by a weaker Boman chain condition: an open set $\Omega' \subset \Omega$ will be said to satisfy the Boman chain condition $\mathcal{F}(\tau,M)$ for some $\tau \geq 1$, $M \geq 1$, if there exists a covering $W$ of $\Omega'$ by (Carnot-Caratheodory) balls $B$, such that
$$
\sum_{B \in W} \chi_{\tau B}(x) \leq M \chi_{\Omega'}(x) \quad \text{for all $x \in \Omega$},
$$
and there exists a ``central'' ball $B_1 \in W$, which can be connected to every ball $B \in W$ by a finite chain of balls $B_1, \dots, B_{\ell(B)} = B$ of $W$ so that $B \subset M B_j$ for $j=1,\dots,\ell(B)$, with the additional property that $B_j \cap B_{j-1}$ contains a ball $R_j$ such that $B_j \cup B_{j-1} \subset M R_j$ for $j = 2, \dots, \ell(B)$. (Here all balls are Carnot-Caratheodory balls. Also, $\tau B$ denotes a ball that has the same center as $B$, but $\tau$ times the radius, and $\chi_{S}$ denotes the characteristic function of a set $S$.)
\begin{thm} \label{thm:WSI2}
Let $\Omega'$ be a relatively compact open subset of $\Omega$ that satisfies the Boman chain condition $\mathcal{F}(\tau,M)$ for some $\tau \geq 1$, $M \geq 1$. For any $N$-tuple $I$ and any $1 \leq p < Q_I$, let $w_{I,p}(x)$ be the weight defined by
$$
w_{I,p}(x) := |\lambda_I(x)|^{\frac{p}{Q_I-p}}.
$$
Assume that $w_{I,p}(x) dx$ is a doubling measure. Then for any Lipschitz function $f$ on $\overline{\Omega'}$, we have
\begin{equation} \label{eq:fwPIfinal}
\left( \int_{\Omega'} |f(x) - f_{\Omega'}|^{p^*} w_{I,p}(x) dx \right)^{\frac{1}{p^*}}
\leq C \left( \int_{\Omega'} |\nabla_b f(x)|^p dx \right)^{\frac{1}{p}},
\end{equation}
where
\begin{equation} \label{eq:fwav}
f_{\Omega'} := \frac{\int_{\Omega'} f(x) w_{I,p}(x) dx}{\int_{\Omega'} w_{I,p}(x) dx},
\end{equation}
and $p^*$ is as in (\ref{eq:pp*}).
\end{thm}
Note that if $\Omega'$ is a relatively compact subset of $\Omega$ with smooth boundary, then it satisfies a Boman chain condition for some $\tau \geq 1$, $M \geq 1$. More generally, the same is true for all John domains \cite{MR1427074}, \cite{MR1343563}, so the above theorem applies for such $\Omega'$'s as well.
On the other hand, we only managed to establish such a theorem under the additional doubling condition on our weighted measure $w_{I,p}(x) dx$. Such a doubling condition is satisfied by a number of important examples (e.g. the Grushin type example given by the vector fields $\frac{\partial}{\partial x_1}$ and $x_1^{r-1} \frac{\partial}{\partial x_2}$ on $\mathbb{R}^2$), but could fail when, say, $\lambda_I(x)$ vanishes on some open set (e.g. if $Y_1 = \frac{\partial}{\partial x_1}$, $Y_2 = \frac{\partial}{\partial x_2}$, $Y_3 = (1-a(x)) \frac{\partial}{\partial x_1} + a(x) \frac{\partial}{\partial x_2}$ on $\mathbb{R}^2$, where $a(x)$ vanishes on some non-trivial open set, then when $I = \{1,3\}$, $\lambda_I(x) = a(x)$ vanishes on some non-trivial open set). It is not clear whether such doubling conditions are really necessary.
It is an interesting question whether the pair of weights $(w_{I,p}(x),1)$ satisfies the local balance condition in the work of Chanillo-Wheeden \cite{MR805809} (i.e. condition (1.5) of \cite{MR1343563}). If it is, then Theorem~\ref{thm:WSI2} would follow from the work of Franchi-Lu-Wheeden in \cite{MR1343563}.
The author thanks the referee for suggesting the possibility of Theorem~\ref{thm:WSI2}, and for raising the above question about the pair of weights $(w_{I,p}(x),1)$. The author is also grateful to the referee for numerous very helpful comments.
\section{Proof of Theorem~\ref{thm:WPE}}
Let $Y_1,\dots,Y_q$ be of finite homogeneous type in $\Omega$ as in the previous section. We recall the following Theorem of Nagel, Stein and Wainger, from \cite{MR793239} and \cite{MR1882665}:
\begin{thm}[Nagel-Stein-Wainger]\label{thm:HT}
Let $E$ be a compact subset of $\Omega$. Then for all $x \in E$ and all $\delta < \text{diam}_{\rho}(E)$, where $\text{diam}_{\rho}(E)$ is the diameter of $E$ with respect to the metric $\rho$, we have $$|B(x,\delta)| \simeq \max_J |\lambda_J(x)| \delta^{Q_J},$$ where the maximum is over all $N$-tuples $J$. (Hereafter we write $\simeq$ or $\lesssim$ when the implicit constants depend only on $E$.)
\end{thm}
In particular, the Lebesgue measure is doubling on $E$ with respect to the metric balls defined by $\rho$, and $V(x,y) \simeq V(y,x)$ for all $x,y \in E$.
Now to prove Theorem~\ref{thm:WPE}, fix any $N$-tuple $I$ and a compact subset $E$ of $\Omega$. We observe the following pointwise estimate for the kernel of $T_I$:
$$|\lambda_I(x)|^{\frac{1}{Q_I}} \frac{\rho(x,y)}{V(x,y)} = \frac{(|\lambda_I(x)| \rho(x,y)^{Q_I})^{\frac{1}{Q_I}}}{V(x,y)} \lesssim V(x,y)^{\frac{1}{Q_I}-1} \simeq V(y,x)^{\frac{1}{Q_I}-1}.$$ This is just a simple consequence of Theorem~\ref{thm:HT}. Hence for any $x \in E$ and any $\alpha > 0$, the set $$\left\{y \in E \colon |\lambda_I(x)|^{\frac{1}{Q_I}} \frac{\rho(x,y)}{V(x,y)} > \alpha \right\} \subseteq \left\{y \in E \colon V(x,y) \lesssim \alpha^{-\frac{Q_I}{Q_I-1}} \right\},$$ the latter of which is a metric ball centered at $x$, whose Lebesgue measure is $\simeq \alpha^{-\frac{Q_I}{Q_I-1}}$ uniformly in $x$. Similarly, for any $y \in E$, $$\left\{x \in E \colon |\lambda_I(x)|^{\frac{1}{Q_I}} \frac{\rho(x,y)}{V(x,y)} > \alpha \right\} \subseteq \left\{x \in E \colon V(y,x) \lesssim \alpha^{-\frac{Q_I}{Q_I-1}} \right\},$$ which is a metric ball centered at $y$, and has Lebesgue measure $\simeq \alpha^{-\frac{Q_I}{Q_I-1}}$ uniformly in $y$. Hence $T_I$ maps $L^p(E)$ to weak-$L^{p^*}(E)$ whenever $\frac{1}{p^*} = \frac{1}{p} - \frac{1}{Q_I}$, where $1 \leq p < Q_I$. We now invoke the following version of the Marcinkiewicz interpolation theorem (which can be found, e.g. in Lemma 15.3 of Folland-Stein~\cite{MR0367477}):
\begin{lemma}
Let $k$ be a measurable function on $E \times E$ such that for some $r > 1$, $k(x,\cdot)$ is weak-$L^r$ uniformly in $x$, and $k(\cdot,y)$ is weak-$L^r$ uniformly in $y$. Then the operator $f(x) \mapsto \int k(x,y) f(y) dy$ is bounded from $L^p(E)$ to $L^q(E)$ whenever $$\frac{1}{q} + 1 = \frac{1}{p} + \frac{1}{r}, \quad 1 < p < q < \infty.$$
\end{lemma}
From the above estimates for the kernel of $T_I$, applying the lemma with $r = \frac{Q_I}{Q_I-1}$, we see that $T_I$ maps $L^p(E)$ boundedly into $L^{p^*}(E)$ whenever $1 < p < Q_I$.
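For completeness, the exponent arithmetic in this application of the lemma is as follows: with $r = \frac{Q_I}{Q_I-1}$,
$$
\frac{1}{p^*} = \frac{1}{p} + \frac{1}{r} - 1 = \frac{1}{p} + \frac{Q_I - 1}{Q_I} - 1 = \frac{1}{p} - \frac{1}{Q_I},
$$
which is exactly the relation between $p$ and $p^*$ in Theorem~\ref{thm:WPE}.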
\section{Proof of Theorem~\ref{thm:mPE}}
We now turn to the proof of Theorem~\ref{thm:mPE}. Let $r := \frac{Q_I}{Q_I-1}$. Then $r > 1$, and weak-$L^r$ is a normed space. So by Minkowski's inequality, if $f \in L^1(E,dy)$, then
$$
\|Tf\|_{\text{weak-}L^r(E,d\mu_I)} \leq \|f\|_{L^1(E,dy)} \sup_{y \in E} \left\| \frac{\rho(x,y)}{V(x,y)} \right\|_{\text{weak-}L^r(E,d\mu_I(x))}.
$$
Since $d\mu_I(x) = |\lambda_I(x)|^{\frac{1}{Q_I-1}} dx$, it suffices to show that for any $y \in E$ and $\alpha > 0$,
$$
\int_{\{x \in E \colon \frac{\rho(x,y)}{V(x,y)} > \alpha\}} |\lambda_I(x)|^{\frac{1}{Q_I-1}} dx \lesssim \alpha^{-r} \quad \text{uniformly in $y$.}
$$
Now $\{x \in E \colon \frac{\rho(x,y)}{V(x,y)} > \alpha\} \subseteq \{x \in E \colon \frac{\rho(y,x)}{V(y,x)} \gtrsim \alpha\}$, and the latter is a metric ball centered at $y$. Let $\delta_{\alpha}$ be its radius, so that it is equal to $B(y,\delta_{\alpha})$; then
\begin{equation}\label{eq:deltaalpha}
\frac{\delta_{\alpha}}{|B(y,\delta_{\alpha})|} \simeq \alpha.
\end{equation}
Recall that by Theorem~\ref{thm:HT}, $|\lambda_I(x)| \delta_{\alpha}^{Q_I} \lesssim |B(x,\delta_{\alpha})|$. Hence for any $x \in B(y,\delta_{\alpha})$, we have
$$
|\lambda_I(x)| \lesssim |B(x,\delta_{\alpha})| \delta_{\alpha}^{-Q_I} \lesssim |B(y,\delta_{\alpha})| \delta_{\alpha}^{-Q_I}.
$$
(The last inequality follows from the doubling property of the Lebesgue measure with respect to the metric balls.) Hence
\begin{align*}
\int_{\{x \in E \colon \frac{\rho(x,y)}{V(x,y)} > \alpha\}} |\lambda_I(x)|^{\frac{1}{Q_I-1}} dx
&\lesssim \int_{B(y,\delta_{\alpha})} |B(y,\delta_{\alpha})|^{\frac{1}{Q_I-1}} \delta_{\alpha}^{-\frac{Q_I}{Q_I-1}} dx \\
&= |B(y,\delta_{\alpha})|^{\frac{Q_I}{Q_I-1}} \delta_{\alpha}^{-\frac{Q_I}{Q_I-1}} \simeq \alpha^{-\frac{Q_I}{Q_I-1}} = \alpha^{-r},
\end{align*}
the second-to-last equality following from (\ref{eq:deltaalpha}). This completes our proof.
\section{Proof of Theorem~\ref{thm:WSI}}
We can now prove Theorem~\ref{thm:WSI}. First recall the following pointwise potential estimate, versions of which are well-known: Let $\Omega'$ be a relatively compact open subset of $\Omega$ with smooth boundary, and $E = \overline{\Omega'}$ be its closure. For any $f \in C^{\infty}(E)$ and any $x \in E$, we have
$$
|f(x)| \lesssim \int_{E} \frac{\rho(x,y)}{V(x,y)} (|\nabla_b f(y)| + |f(y)|) dy.
$$
In fact, this estimate follows from an analysis of the fundamental solution of the sum of squares operator $-\sum_{j=1}^n X_j^* X_j$, as analyzed in Nagel-Stein-Wainger~\cite{MR793239} and S\'anchez-Calle~\cite{MR762360}. (See also the discussion following formula (1.2) of Franchi-Lu-Wheeden~\cite{MR1343563}.) Theorem~\ref{thm:WSI} then follows readily from Theorem~\ref{thm:WPE} in the case when $p > 1$. If $p = 1$, we need a well-known truncation argument to show that the strong type bound follows from the weak type bound we proved in Theorem~\ref{thm:mPE}; cf. Long-Rui \cite{MR1187073}, and the exposition in Haj{\l}asz \cite{MR1886617} or Chapter 3 of Heinonen \cite{MR1800917}. The crucial reason why this truncation argument works is that we are not letting the potential operator $T$ act on arbitrary functions; instead, it acts only on gradients of truncations of a single function. We also need the fact that the gradients here are taken using real (rather than complex) vector fields.
First, according to Theorem~\ref{thm:mPE}, for all $N$-tuples $I$ and all $\alpha > 0$,
$$
\int_{E \cap \{|f|> \alpha\}} |\lambda_I(x)|^{\frac{1}{Q_I-1}} dx \lesssim \alpha^{-\frac{Q_I}{Q_I-1}} (\|\nabla_b f\|_{L^1(E)} + \|f\|_{L^1(E)})^{\frac{Q_I}{Q_I-1}}.
$$
This holds a priori for all $f \in C^{\infty}(E)$, but it extends to all $f \in W^{1,1}(E)$, because one can approximate such functions both in $W^{1,1}(E)$ and almost everywhere by smooth functions on $E$.
Now let $f \in C^{\infty}(E)$, and for any integer $j$ let
$$
f_j =
\begin{cases}
0 &\quad \text{if $|f| \leq 2^{j-1}$} \\
|f|-2^{j-1} &\quad \text{if $2^{j-1} \leq |f| \leq 2^j$}\\
2^{j-1} &\quad \text{if $|f| \geq 2^j$}
\end{cases}.
$$
Then $f_j \in W^{1,1}(E)$ (this is a qualitative statement; we will not need bounds on the $W^{1,1}(E)$ norms of the $f_j$'s),
and
$$
\nabla_b f_j =
\begin{cases}
\nabla_b |f| &\quad \text{if $2^{j-1} < |f| < 2^j$}\\
0 &\quad \text{otherwise}
\end{cases}
$$
in distribution.
Hence by the weak-type $(L^1(dy),L^{\frac{Q_I}{Q_I-1}}(d\mu_I))$ result above,
\begin{align*}
\int_{E \cap \{f_j > \alpha\}} |\lambda_I(x)|^{\frac{1}{Q_I-1}} dx
&\lesssim \alpha^{-\frac{Q_I}{Q_I-1}} \left( \int_{2^{j-1} < |f| < 2^j} |\nabla_b f(x)| dx + \int_E |f_j(x)|dx \right)^{\frac{Q_I}{Q_I-1}}
\end{align*}
because wherever $f \ne 0$, $$|\nabla_b |f|| \leq |\nabla_b f|$$ (here we need $X_1, \dots, X_n$ to be real vector fields, because $f$ may be complex-valued). It then follows that
\begin{align*}
&\int_E |f(x)|^{\frac{Q_I}{Q_I-1}} |\lambda_I(x)|^{\frac{1}{Q_I-1}} dx\\
\leq& \sum_{j=-\infty}^{\infty} (2^{j+1})^{\frac{Q_I}{Q_I-1}} \int_{E \cap \{ 2^j < |f| \leq 2^{j+1}\}} |\lambda_I(x)|^{\frac{1}{Q_I-1}} dx \\
\leq& \sum_{j=-\infty}^{\infty} (2^{j+1})^{\frac{Q_I}{Q_I-1}} \int_{E \cap \{ f_j > 2^{j-1}\}} |\lambda_I(x)|^{\frac{1}{Q_I-1}} dx \\
\lesssim & \sum_{j=-\infty}^{\infty} (2^{j+1})^{\frac{Q_I}{Q_I-1}} (2^{j-1})^{-\frac{Q_I}{Q_I-1}} \left( \int_{2^{j-1} < |f| < 2^j} |\nabla_b f(x)| dx + \int_E |f_j(x)| dx \right)^{\frac{Q_I}{Q_I-1}} \\
\lesssim & \left(\|\nabla_b f\|_{L^1(E)} + \|f\|_{L^1(E)}\right)^{\frac{Q_I}{Q_I-1}}
\end{align*}
as desired.
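For completeness, here is a sketch of the last step above, which is left implicit. Write $s = \frac{Q_I}{Q_I-1} \geq 1$ and $c_j := \int_{2^{j-1} < |f| < 2^j} |\nabla_b f(x)|\, dx + \int_E |f_j(x)|\, dx \geq 0$. Since $\sum_{j} c_j^{s} \leq \left( \sum_{j} c_j \right)^{s}$ for nonnegative $c_j$, it suffices to note that the sets $\{2^{j-1} < |f| < 2^j\}$ are pairwise disjoint, so that
$$
\sum_{j=-\infty}^{\infty} \int_{2^{j-1} < |f| < 2^j} |\nabla_b f(x)|\, dx \leq \|\nabla_b f\|_{L^1(E)},
$$
and that $\sum_{j=-\infty}^{\infty} f_j = |f|$ pointwise, so that $\sum_{j=-\infty}^{\infty} \int_E |f_j(x)|\, dx = \|f\|_{L^1(E)}$.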
\section{Proof of Corollary~\ref{cor:SIQ} and its sharpness} \label{sect:cor}
Let $\Omega'$ be as in the previous section, and $E = \overline{\Omega'}$. Recall that at every point $x \in E$, we defined a local non-isotropic dimension $Q(x)$, and from its definition, it is clear that there exists a neighborhood $U_x$ of $x$ and an $N$-tuple $I_x$ such that the degree of $I_x$ is $Q(x)$, and such that $|\lambda_{I_x}| \simeq 1$ on $U_x \cap E$. From Theorem~\ref{thm:WSI}, it follows that for all $f \in C^{\infty}(U_x \cap E)$ and all $1 \leq p < Q(x)$, we have $$\|f\|_{L^{\frac{Q(x)p}{Q(x)-p}}(U_x \cap E)} \lesssim \|\nabla_b f\|_{L^p(U_x \cap E)} + \|f\|_{L^p(U_x \cap E)}.$$ Since $Q = \sup_{x \in E} Q(x)$, by taking a partition of unity and gluing the estimates, we see that Corollary~\ref{cor:SIQ} follows.
We now prove that the exponent $p^*$ in Corollary~\ref{cor:SIQ} is always the best possible. This will follow from a consideration of an approximate dilation invariance. For this we need to introduce a suitable coordinate system and a non-isotropic dilation near a point $x_0 \in E$ where $Q(x_0) = \sup_{x \in E} Q(x)$.
Let $x_0$ be such a point, and let $\{X_{jk} \colon 1 \leq j \leq r, 1 \leq k \leq n_j\}$ be a collection of vector fields that satisfies the following:
\begin{enumerate}[(a)]
\item Each $X_{jk}$ is a commutator of $X_1, \dots, X_n$ of length $j$;
\item For each $1 \leq j_0 \leq r$, $\{X_{jk} \colon 1 \leq j \leq j_0, 1 \leq k \leq n_j\}$ restricts at $x_0$ to a basis of $V_{j_0}(x_0)$.
\end{enumerate}
In particular $$\sum_{j=1}^r jn_j = Q(x_0) = Q.$$
Then for some small $\varepsilon > 0$,
\begin{align}
[-\varepsilon,\varepsilon]^N &\to \mathbb{R}^N \notag \\
u &\mapsto \exp(u \cdot X') x_0 \label{eq:coord}
\end{align}
defines a normal coordinate system in a neighborhood $U_0$ of $x_0$ in $\mathbb{R}^N$; here $\exp(X)x_0$ is the time-1-flow along the integral curve of the vector field $X$ beginning at $x_0$, and
$$
u \cdot X' = \sum_{j = 1}^r \sum_{k=1}^{n_j} u_{jk} X_{jk}
$$
where $u = (u_{jk})_{1 \leq j \leq r, 1 \leq k \leq n_j}$. For simplicity we shall consistently write $u$ for $\exp(u \cdot X')x_0 \in U_0$. This coordinate system allows us to define the associated non-isotropic dilation: for $u = (u_{jk}) \in U_0$ and $\lambda > 0$, write $$\lambda \cdot u := (\lambda^{j} u_{jk})_{1 \leq j \leq r, 1 \leq k \leq n_j}$$ as long as the latter is in $U_0$ (and we leave this undefined if it is not in $U_0$).
Now if $\alpha = (j_1k_1, \dots, j_sk_s)$ is a multiindex, we shall let $u^{\alpha}$ be the monomial $u_{j_1k_1}u_{j_2k_2} \dots u_{j_sk_s}.$ It is said to have non-isotropic degree $|\alpha|=j_1+\dots+j_s$ because $$(\lambda \cdot u)^{\alpha} = \lambda^{|\alpha|} u^{\alpha}.$$ A function $f$ of $u$ is said to vanish to non-isotropic order $l$ at $0$ if its Taylor series expansion consists of terms whose non-isotropic degrees are all $\geq l$. Note that if $$X_l = \sum_{j=1}^{r} \sum_{k=1}^{n_j} a^{l}_{jk}(u) \frac{\partial}{\partial u_{jk}}$$ on $U_0$ for $1 \leq l \leq n$, then by the Campbell-Hausdorff formula, each $a^l_{jk}(u)$ vanishes to non-isotropic order $j-1$ at $u=0$. (c.f. Section 10 of \cite{MR0436223}). In what follows we Taylor expand $a^l_{jk}$ at $0$ such that $$a^l_{jk}(u) = p^l_{jk}(u) + e^l_{jk}(u)$$ where $p^l_{jk}(u)$ are homogeneous polynomials of non-isotropic degree $j-1$ and $e^l_{jk}(u)$ vanish to non-isotropic order $j$ at $0$. Define $$W_l = \sum_{j=1}^{r} \sum_{k=1}^{n_j} p^{l}_{jk}(u) \frac{\partial}{\partial u_{jk}}, \quad E_l = \sum_{j=1}^{r} \sum_{k=1}^{n_j} e^{l}_{jk}(u) \frac{\partial}{\partial u_{jk}}$$ on $U_0$, for $1 \leq l \leq n$.
Given $p \geq 1$, let $q$ be an exponent for which (\ref{eq:SSI}) holds for all $f \in C^{\infty}(E)$. We shall show that $q \leq p^*$. Indeed, the inequality then holds for all $f \in C^{\infty}_c(U_0 \cap E)$ (just extend $f$ by zero to all of $E$):
$$
\left( \int_{U_0 \cap E} |f(x)|^q dx \right)^{\frac{1}{q}} \lesssim \left( \int_{U_0 \cap E} |\nabla_b f(x)|^p + |f(x)|^p dx \right)^{\frac{1}{p}}.
$$
But we can also parameterize $U_0 \cap E$ by the $u$ coordinates we introduced above, and use the Lebesgue measure $du$ with respect to these $u$ coordinates in place of $dx$ in the above inequality. This is because $du$ is a smooth density times $dx$, and vice versa. Hence for all $f \in C^{\infty}_c(U_0 \cap E)$,
\begin{equation}\label{eq:NLS2}
\left( \int_{U_0 \cap E} |f(u)|^q du \right)^{\frac{1}{q}} \lesssim \left( \int_{U_0 \cap E} |\nabla_b f(u)|^p + |f(u)|^p du \right)^{\frac{1}{p}}.
\end{equation}
Now pick an open set $U_1 \subseteq U_0 \cap E$ such that $\lambda \cdot u \in U_0 \cap E$ for all $u \in U_1$ and all $\lambda \in [0,1]$. This is possible because $E$ is the closure of an open set with smooth boundary. Then take $f \in C^{\infty}_c(U_1)$ that is not identically zero. For each $\delta \in (0,1)$, let
$$
f_{\delta}(u) :=
\begin{cases}
f(\delta^{-1} \cdot u) &\quad \text{if $\delta^{-1} \cdot u \in U_1$}\\
0 &\quad \text{otherwise}
\end{cases}.
$$
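For the scaling computation below, note (a direct change of variables, using that the dilation multiplies the coordinate $u_{jk}$ by $\delta^{j}$) that the Jacobian of $v \mapsto \delta \cdot v$ is $\prod_{j=1}^{r} \delta^{j n_j} = \delta^{Q}$; hence
$$
\|f_{\delta}\|_{L^q}^q = \int |f(\delta^{-1} \cdot u)|^q \, du = \delta^{Q} \|f\|_{L^q}^q,
$$
and, since each $p^l_{jk}$ is homogeneous of non-isotropic degree $j-1$, one has $W_l(f_{\delta})(u) = \delta^{-1} (W_l f)(\delta^{-1} \cdot u)$, so that $\|W_l(f_{\delta})\|_{L^p} = \delta^{-1 + \frac{Q}{p}} \|W_l f\|_{L^p}$.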
Applying (\ref{eq:NLS2}) to $f_{\delta}$ in place of $f$, we get
\begin{align*}
\delta^{\frac{Q}{q}} \|f\|_{L^{q}(U_1)}
&\leq C \left(\sum_{j=1}^n \|W_j(f_{\delta})\|_{L^p(U_1)} + \sum_{j=1}^n \|E_j(f_{\delta})\|_{L^p(U_1)} + \|f_{\delta}\|_{L^p(U_1)} \right) \\
&= C \sum_{j=1}^n \delta^{-1+\frac{Q}{p}} \|W_j f\|_{L^p(U_1)} + O(\delta^{\frac{Q}{p}})
\end{align*}
by the homogeneity of the vector fields $W_j$ and $E_j$. Letting $\delta \to 0$, we get $\frac{Q}{q} \geq -1 + \frac{Q}{p}$, i.e. $$\frac{1}{q} \geq \frac{1}{p}-\frac{1}{Q}.$$ Hence $q \leq p^*$ as desired.
We remark that a similar argument shows that Theorem~1 of \cite{PL} cannot hold for any value of $Q$ smaller than the one stated there.
\section{Proof of Theorem~\ref{thm:WSI2}}
To prove Theorem~\ref{thm:WSI2}, an important starting point is a representation formula, as derived in Lu-Wheeden \cite{MR1642822}. It was proved, in Theorem 1 there, that if $B$ is any Carnot-Caratheodory ball in $\Omega'$, then
\begin{equation} \label{eq:oprepform}
|f(x)-L(f,B)| \leq C \int_{B} \frac{\rho(x,y)}{V(x,y)} |\nabla_b f(y)| dy, \quad \text{for all $x \in B$},
\end{equation}
where $L(f,B) := |B|^{-1} \int_B f(y) dy$ is the Lebesgue average of $f$ over $B$, and $C$ is an appropriate constant. If $1 < p < Q_I$, it then follows from Theorem~\ref{thm:WPE} that
\begin{equation} \label{eq:wPIBall}
\left( \int_{B} |f(x) - L(f,B)|^{p^*} w_{I,p}(x) dx \right)^{\frac{1}{p^*}} \leq C \left( \int_{B} |\nabla_b f(x)|^p dx \right)^{\frac{1}{p}}.
\end{equation}
The corresponding weak-type $(1,1^*)$ bound and the truncation argument used in the proof of Theorem~\ref{thm:WSI} show that the above inequality remains true when $p = 1$. Now we patch these estimates together,
using the Boman chain condition satisfied by $\Omega'$. In fact, since we assumed that $w_{I,p}(x) dx$ is a doubling measure, using Theorem~3.7 of Franchi-Lu-Wheeden \cite{MR1343563}, one then concludes the existence of some constant $A(f,\Omega')$, such that
\begin{equation} \label{eq:fwPIpre}
\left( \int_{\Omega'} |f(x) - A(f,\Omega')|^{p^*} w_{I,p}(x) dx \right)^{\frac{1}{p^*}} \leq C \left( \int_{\Omega'} |\nabla_b f(x)|^p dx \right)^{\frac{1}{p}}.
\end{equation}
Then it is a standard argument that one can replace $A(f,\Omega')$ by the average $f_{\Omega'}$, where $f_{\Omega'}$ is defined as in (\ref{eq:fwav}). In fact,
$$
f_{\Omega'} - A(f,\Omega') = \frac{\int_{\Omega'} [f(x) - A(f,\Omega')] w_{I,p}(x) dx}{\int_{\Omega'} w_{I,p}(x) dx}.
$$
Hence by Jensen's inequality,
$$
|f_{\Omega'} - A(f,\Omega')| \leq \left( \frac{\int_{\Omega'} |f(x) - A(f,\Omega')|^{p^*} w_{I,p}(x) dx}{\int_{\Omega'} w_{I,p}(x) dx} \right)^{\frac{1}{p^*}}.
$$
Moving the denominator to the left hand side, and applying (\ref{eq:fwPIpre}), we then see that
\begin{equation} \label{eq:fwPIpre2}
\left( \int_{\Omega'} \left| f_{\Omega'} - A(f,\Omega') \right|^{p^*} w_{I,p}(x) dx \right)^{\frac{1}{p^*}} \leq C \left( \int_{\Omega'} |\nabla_b f(x)|^p dx \right)^{\frac{1}{p}}.
\end{equation}
Combining (\ref{eq:fwPIpre}) and (\ref{eq:fwPIpre2}) finally gives the desired estimate (\ref{eq:fwPIfinal}).
We remark that in place of (\ref{eq:oprepform}), we could also have used formula (1.2) of Franchi-Lu-Wheeden \cite{MR1343563} instead, which states the same inequality as in (\ref{eq:oprepform}), except that the integral on the right hand side was over a ball $cB$ for some $c > 1$ instead of just over $B$. In that case, we would have to replace, in the right hand side of (\ref{eq:wPIBall}), $B$ by $cB$, but Theorem 3.7 of Franchi-Lu-Wheeden \cite{MR1343563} still applies, and we get the same inequality (\ref{eq:fwPIpre}) as desired.
| {
"timestamp": "2015-07-14T02:14:38",
"yymm": "1502",
"arxiv_id": "1502.06332",
"language": "en",
"url": "https://arxiv.org/abs/1502.06332",
"abstract": "The purpose of this short article is to prove some potential estimates that naturally arise in the study of subelliptic Sobolev inequalites for functions. This will allow us to prove a local subelliptic Sobolev inequality with the optimal amount of smoothing, as well as a variant of that which describes quantitatively an improvement of the inequality as one gets away from certain characteristic varieties.",
"subjects": "Classical Analysis and ODEs (math.CA)",
"title": "A sharp subelliptic Sobolev embedding theorem with weights",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.978712643239288,
"lm_q2_score": 0.7248702702332475,
"lm_q1q2_score": 0.7094396981855586
} |
https://arxiv.org/abs/0902.1290 | Boolean Inner product Spaces and Boolean Matrices | This article discusses the concept of Boolean spaces endowed with a Boolean valued inner product and their matrices. A natural inner product structure for the space of Boolean n-tuples is introduced. Stochastic boolean vectors and stochastic and unitary Boolean matrices are studied. A dimension theorem for orthonormal bases of a Boolean space is proven. We characterize the invariant stochastic Boolean vectors for a Boolean stochastic matrix and show that they can be used to reduce a unitary matrix. Finally, we obtain a result on powers of stochastic and unitary matrices. | \section{Introduction}
\qquad A Boolean space $\mathcal{L}_{n}\left( \mathcal{B}\right) $ is the
set of all $n$-tuples of elements of a fixed Boolean algebra $\mathcal{B}$.
The elements of $\mathcal{L}_{n}\left( \mathcal{B}\right) $ are called
Boolean vectors and they possess a natural linear space-like structure.
Moreover, we can define on $\mathcal{L}_{n}\left( \mathcal{B}\right) $ an
operation which is analogous to an inner product. By using this
\textquotedblleft inner product\textquotedblright\ we can also define a $%
\mathcal{B}$-valued norm and orthogonality relations for Boolean vectors.
\qquad A Boolean matrix is a matrix whose entries are elements of a Boolean
algebra $\mathcal{B}$. With the natural choice of matrix multiplication
defined in terms of the lattice operations of $\mathcal{B}$, such matrices
become the linear mappings between Boolean linear spaces. The study of
Boolean matrices is a fascinating blend of linear algebra and boolean
algebra which finds many applications, and was undertaken in \cite%
{Blyth67,Cechlarova03,Giveon64,Jagannadham66,Luce52,Rutherford63,Rutherford63b,Rutherford64,Sindak75,Skornyakov86,Subrahmanyam64,Subrahmanyam65,Subrahmanyam67,Wedderburn34,Tan98,Tan01,Yoeli61,Gregory94}
.
\qquad An important concept in our work is that of a stochastic vector.
These are Boolean vectors of norm one whose components are mutually
disjoint. In particular, a finite partition of the universe of a Boolean
algebra would correspond to a stochastic Boolean vector. We define an
orthonormal basis of $\mathcal{L}_{n}\left( \mathcal{B}\right) $ in the usual
way, and it turns out that it must be made of stochastic vectors. Our first main
result is that all orthonormal bases for $\mathcal{L}_{n}\left( \mathcal{B}
\right) $ have cardinality $n$ and conversely, any orthonormal set of
stochastic vectors with cardinality $n$ is a basis for $\mathcal{L}
_{n}\left( \mathcal{B}\right) $. Our next main result states that any
orthonormal set of stochastic vectors in $\mathcal{L}_{n}\left( \mathcal{B}
\right) $ can be extended to an orthonormal basis for $\mathcal{L}_{n}\left(
\mathcal{B}\right) $. In order to prove this result, we introduce a notion
of linear subspace of $\mathcal{L}_{n}\left( \mathcal{B}\right) $.
\qquad We define stochastic and unitary Boolean matrices in terms of
properties of their product with their adjoint matrices. We then show that
stochastic Boolean matrices are precisely those whose columns are stochastic
vectors and unitary matrices are precisely those whose rows and columns are
stochastic.
\qquad We next characterize the invariant stochastic Boolean vectors for
stochastic Boolean matrices and show that they can be employed to reduce
unitary Boolean matrices. As mentioned in Section 2, stochastic Boolean
matrices may be used to describe a dynamics analogous to a Markov chain. It
is thus of interest to consider powers of stochastic Boolean matrices
because they correspond to iterations in the dynamics. Our last result
concerns such powers. The paper includes examples that illustrate various
points which we wish to emphasize.
\qquad As a matter of notation, we shall write $\mathbb{N}$ for the set of
nonzero natural numbers.
\section{Definitions and Motivation}
\qquad Throughout this article, $\mathcal{B}$ will denote a Boolean algebra.
We denote the smallest and largest element of $\mathcal{B}$ respectively by $%
0$ and $1$. For any $a\in \mathcal{B}$, we denote by $a^{c}$ its complement.
For $a,b\in \mathcal{B}$, we denote the infimum of $a$ and $b$ by $ab$
(instead of $a\wedge b$). We write $a\backslash b$ for $a\left( b^{c}\right) $.
The supremum of $a,b$ is denoted by $a\vee b$.
\qquad For all $n\in \mathbb{N}$ we denote by $\mathcal{L}_{n}\left(
\mathcal{B}\right) $ the set of all $n$-tuples of elements in $\mathcal{B}$.
We endow $\mathcal{L}_{n}\left( \mathcal{B}\right) $ with the following
operations: if $\underline{a}=\left( a_{1},\ldots ,a_{n}\right) $ and $%
\underline{b}=\left( b_{1},\ldots ,b_{n}\right) $ are in $\mathcal{L}
_{n}\left( \mathcal{B}\right) ,$ and $c\in \mathcal{B}$ then
\begin{equation*}
\underline{a}+\underline{b}=\left( a_{1}\vee b_{1},\ldots ,a_{n}\vee
b_{n}\right)
\end{equation*}
and
\begin{equation*}
c\underline{a}=\left( ca_{1},\ldots ,ca_{n}\right) \text{.}
\end{equation*}
Then $\mathcal{L}_{n}\left( \mathcal{B}\right) $ has the usual properties of
a linear space except for the lack of additive inverses. In particular, our
structure differs from the notion of Boolean vector space introduced in \cite%
{Subrahmanyam64,Subrahmanyam65,Subrahmanyam67} which assumes an underlying
additive group and is best modelled by the action of a Boolean space on a
regular vector space by means of a (finitely additive) measure.
\qquad We call the elements of $\mathcal{L}_{n}\left( \mathcal{B}\right) $
\emph{Boolean vectors} and call $\mathcal{L}_{n}\left( \mathcal{B}\right) $
a \emph{Boolean (linear) space}. We will use the following definitions
throughout this paper:
\begin{defn}
A Boolean vector $\underline{a}=\left( a_{1},\ldots ,a_{n}\right) $ is an
orthovector when $a_{i}a_{j}=0$ for $i,j\in \left\{ 1,\ldots ,n\right\} $
and $i\not=j$.
\end{defn}
\begin{defn}
An orthovector $\underline{a}=\left( a_{1},\ldots ,a_{n}\right) $ is a
stochastic vector when $\dbigvee\limits_{i=1}^{n}a_{i}=1$.
\end{defn}
\qquad The Boolean space $\mathcal{L}_{n}\left( \mathcal{B}\right) $ is
endowed with a natural inner product.
\begin{defn}
Let $\underline{a}=\left( a_{1},\ldots ,a_{n}\right) $ and $\underline{b}
=\left( b_{1},\ldots ,b_{n}\right) $ in $\mathcal{L}_{n}\left( \mathcal{B}
\right) $. Then we define the $\mathcal{B}$-valued inner product of these
two vectors by
\begin{equation*}
\left\langle \underline{a},\underline{b}\right\rangle
=\dbigvee\limits_{i=1}^{n}a_{i}b_{i}\text{.}
\end{equation*}
The norm of \underline{$a$} is defined by $\left\Vert \underline{a}
\right\Vert =\left\langle \underline{a},\underline{a}\right\rangle $.
\end{defn}
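As a quick illustration (an illustrative choice of $\mathcal{B}$, namely the Boolean algebra of all subsets of $\left\{ 1,2,3\right\} $), let $\underline{a}=\left( \left\{ 1\right\} ,\left\{ 2,3\right\} \right) $ and $\underline{b}=\left( \left\{ 2\right\} ,\left\{ 1,3\right\} \right) $ in $\mathcal{L}_{2}\left( \mathcal{B}\right) $. Then
\begin{equation*}
\left\langle \underline{a},\underline{b}\right\rangle =\left( \left\{ 1\right\} \cap \left\{ 2\right\} \right) \vee \left( \left\{ 2,3\right\} \cap \left\{ 1,3\right\} \right) =\left\{ 3\right\} \quad \text{and}\quad \left\Vert \underline{a}\right\Vert =\left\Vert \underline{b}\right\Vert =1\text{,}
\end{equation*}
and both $\underline{a}$ and $\underline{b}$ are in fact stochastic vectors.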
\bigskip \qquad The Boolean inner product shares most of the usual
properties of the Euclidean inner product, if we replace scalar sums and
products by the supremum and infimum in $\mathcal{B}$. Thus given $%
\underline{a},\underline{b},\underline{c}\in \mathcal{L}_{n}\left( \mathcal{%
B }\right) $ and $\alpha \in \mathcal{B}$ then
\begin{itemize}
\item $\left\langle \alpha \underline{a}+\underline{b},\underline{c}
\right\rangle =\alpha \left\langle \underline{a},\underline{c}\right\rangle
\vee \left\langle \underline{b},\underline{c}\right\rangle $,
\item $\left\langle \underline{a},\underline{b}\right\rangle =\left\langle
\underline{b},\underline{a}\right\rangle $,
\item $\left\langle \alpha \underline{a},\underline{c}\right\rangle
=\left\langle \underline{a},\alpha \underline{c}\right\rangle $,
\item $\left\langle \underline{a},\underline{a}\right\rangle =0$ if and only
if $\underline{a}=\left( 0,\ldots ,0\right) =\underline{0}$.
\end{itemize}
We now give some properties of the norm.
\begin{thm}
\label{Norm}Let $\underline{a},$ $\underline{b}\in \mathcal{L}_{n}\left(
\mathcal{B}\right) $ and $c\in \mathcal{B}$. Then
\begin{enumerate}
\item $\left\Vert c\underline{a}\right\Vert =c\left\Vert \underline{a}
\right\Vert $,
\item $\left\Vert \underline{a}+\underline{b}\right\Vert =\left\Vert
\underline{a}\right\Vert \vee \left\Vert \underline{b}\right\Vert $,
\item $\left\langle \underline{a},\underline{b}\right\rangle \leq \left\Vert
\underline{a}\right\Vert \left\Vert \underline{b}\right\Vert $,
\item If $\underline{a}$ and $\underline{b}$ are orthovectors and $%
\left\Vert \underline{a}\right\Vert =\left\Vert \underline{b}\right\Vert $
then $\left\langle \underline{a},\underline{b}\right\rangle =\left\Vert
\underline{a}\right\Vert \left\Vert \underline{b}\right\Vert $ if and only
if $\underline{a}=\underline{b}$.
\end{enumerate}
\end{thm}
\begin{pf}
We have
\begin{equation*}
\left\Vert c\underline{a}\right\Vert =\left\langle c\underline{a},c
\underline{a}\right\rangle =c\left\langle \underline{a},c\underline{a}
\right\rangle =c\left\langle \underline{a},\underline{a}\right\rangle
=c\left\Vert \underline{a}\right\Vert
\end{equation*}
and, denoting $\underline{a}=\left( a_{1},\ldots ,a_{n}\right) $ and $%
\underline{b}=\left( b_{1},\ldots ,b_{n}\right) $, we have
\begin{equation*}
\left\Vert \underline{a}+\underline{b}\right\Vert
=\dbigvee\limits_{i=1}^{n}\left( a_{i}\vee b_{i}\right) =\left(
\dbigvee\limits_{i=1}^{n}a_{i}\right) \vee \left(
\dbigvee\limits_{i=1}^{n}b_{i}\right) =\left\Vert \underline{a}\right\Vert
\vee \left\Vert \underline{b}\right\Vert
\end{equation*}
while
\begin{equation*}
\left\langle \underline{a},\underline{b}\right\rangle
=\dbigvee\limits_{i=1}^{n}a_{i}b_{i}\leq
\dbigvee\limits_{i,j=1}^{n}a_{i}b_{j}=\left(
\dbigvee\limits_{i=1}^{n}a_{i}\right) \left(
\dbigvee\limits_{j=1}^{n}b_{j}\right) =\left\Vert \underline{a}\right\Vert
\left\Vert \underline{b}\right\Vert \text{.}
\end{equation*}
Now let us assume that $\underline{a}$ and $\underline{b}$ are orthovectors
and $\left\Vert \underline{a}\right\Vert =\left\Vert \underline{b}
\right\Vert $ and that $\left\langle \underline{a},\underline{b}
\right\rangle =\left\Vert \underline{a}\right\Vert \left\Vert \underline{b}
\right\Vert $. Hence, $\left\langle \underline{a},\underline{b}\right\rangle
=\left\Vert \underline{a}\right\Vert $ so $\dbigvee
\limits_{i=1}^{n}a_{i}b_{i}=\dbigvee\limits_{i=1}^{n}a_{i}$. Hence, for all $%
j\in \left\{ 1,\ldots ,n\right\} $
\begin{equation*}
a_{j}b_{j}=\left( \dbigvee\limits_{i=1}^{n}a_{i}b_{i}\right)
b_{j}=a_{j}b_{j}\vee \left( \dbigvee\limits_{i\not=j}a_{i}b_{j}\right) \text{
.}
\end{equation*}
Hence $\dbigvee\limits_{i\not=j}a_{i}b_{j}\leq a_{j}b_{j}$ yet $%
\dbigvee\limits_{i\not=j}a_{i}b_{j}\leq a_{j}^{c}b_{j}$ since $\underline{a}$
is an orthovector, so $\dbigvee\limits_{i\not=j}a_{i}b_{j}=0$ and thus $%
a_{i}b_{j}=0$. Therefore
\begin{equation*}
a_{j}\left( \left\Vert \underline{a}\right\Vert \backslash b_{j}\right)
=a_{j}\left( \left\Vert \underline{b}\right\Vert \backslash b_{j}\right)
=a_{j}\left( \dbigvee\limits_{i\not=j}b_{i}\right)
=\dbigvee\limits_{i\not=j}a_{j}b_{i}=0\text{.}
\end{equation*}
Hence, using again that $a$ is an orthovector, $a_{j}=a_{j}\left\Vert
\underline{a}\right\Vert =a_{j}b_{j}\leq b_{j}$. Symmetrically, $b_{j}\leq
a_{j}$ so $a_{j}=b_{j}$ for all $j\in \left\{ 1,\ldots ,n\right\} $. Hence $%
\underline{a}=\underline{b}$.\qed
\end{pf}
\bigskip \qquad Note that the condition $\left\Vert \underline{a}\right\Vert
=\left\Vert \underline{b}\right\Vert $ in the last statement of Theorem (\ref%
{Norm}) is necessary. If we let $\underline{a}=\left( a,0,\ldots ,0\right) $
and $\underline{b}=\left( b,0,\ldots ,0\right) $ with $a,b\in \mathcal{B}$
and $a\not=b$ then $\underline{a},\underline{b}$ are orthovectors of
different norms, and yet trivially $\left\langle \underline{a},\underline{b}
\right\rangle =\left\Vert \underline{a}\right\Vert \left\Vert \underline{b}
\right\Vert $. Also, the condition that $\underline{a}$ and $\underline{b}$
are orthovectors is necessary since if $\underline{a}=\left( 1,a\right) $
and $\underline{b}=\left( 1,b\right) $ for $a,b\in \mathcal{B}$ with $%
a\not=b $ then $\left\Vert \underline{a}\right\Vert =\left\Vert \underline{b}
\right\Vert =1$ and $\left\langle \underline{a},\underline{b}\right\rangle
=1 $.
\begin{cor}
If $\underline{a}$ and $\underline{b}$ are stochastic Boolean vectors then $%
\left\langle \underline{a},\underline{b}\right\rangle =1$ if and only if $%
\underline{a}=\underline{b}$.
\end{cor}
\begin{pf}
By assumption, $\underline{a}$ and $\underline{b}$ are orthovectors with $%
\left\Vert \underline{a}\right\Vert =\left\Vert \underline{b}\right\Vert =1$
so the result follows from Theorem (\ref{Norm}).\qed
\end{pf}
We now introduce the following standard notions:
\begin{defn}
Two vectors $\underline{a}$ and $\underline{b}$ in $\mathcal{L}_{n}\left(
\mathcal{B}\right) $ are orthogonal when $\left\langle \underline{a},
\underline{b}\right\rangle =0$, in which case we shall write $\underline{a}
\perp \underline{b}$. The vector $\underline{a}$ is a unit vector when $%
\left\Vert \underline{a}\right\Vert =1$.
\end{defn}
\begin{defn}
An orthogonal set in $\mathcal{L}_{n}\left( \mathcal{B}\right) $ is a subset
$E$ of $\mathcal{L}_{n}\left( \mathcal{B}\right) $ such that for all $%
\underline{e},\underline{f}\in E$ we have $\underline{e}\not=\underline{f}
\implies \left\langle \underline{e},\underline{f}\right\rangle =0$. An
orthonormal subset of $\mathcal{L}_{n}\left( \mathcal{B}\right) $ is an
orthogonal set whose elements all have norm $1$.
\end{defn}
\qquad The next section of this paper will address the concept of dimension
for a Boolean vector space. It will be based on the notion of basis. We now
introduce:
\begin{defn}
Let $\mathcal{A}$ be a subset of $\mathcal{L}_{n}\left( \mathcal{B}\right) $
. A vector $\underline{b}\in \mathcal{L}_{n}\left( \mathcal{B}\right) $ is a
linear combination of elements in $\mathcal{A}$ when there exists a finite
subset $\left\{ \underline{a_{1}},\ldots ,\underline{a_{m}}\right\} $ of $%
\mathcal{A}$ and $b_{1},\ldots ,b_{m}\in \mathcal{B}$ such that $\underline{%
b }=\sum_{i=1}^{m}b_{i}\underline{a_{i}}$.
A subset $\mathcal{A}$ of $\mathcal{L}_{n}\left( \mathcal{B}\right) $ is a
generating subset of $\mathcal{L}_{n}\left( \mathcal{B}\right) $ when all
vectors in $\mathcal{L}_{n}\left( \mathcal{B}\right) $ are linear
combinations of elements in $\mathcal{A}$.
A subset $\mathcal{A}$ is free when for any $b_{i},d_{j}\in \mathcal{B}
\backslash \left\{ 0\right\} $ and $\underline{a_{i}},\underline{c_{j}}\in
\mathcal{A}$ with $i=1,\ldots ,m$ and $j=1,\ldots k$ such that $%
\sum_{i=1}^{m}b_{i}\underline{a_{i}}=\sum_{j=1}^{k}d_{j}\underline{c_{j}}$
we have:
\begin{equation*}
m=k\text{, }\left\{ b_{1},\ldots ,b_{m}\right\} =\left\{ d_{1},\ldots
,d_{m}\right\} \text{ and }\left\{ \underline{a_{1}},\ldots ,\underline{%
a_{m} }\right\} =\left\{ \underline{c_{1}},\ldots ,\underline{c_{m}}\right\}
\text{ .}
\end{equation*}
\end{defn}
\qquad Thus a set $\mathcal{A}$ is free when every linear combination of
elements of $\mathcal{A}$ with nonzero coefficients determines those
coefficients, and the vectors of $\mathcal{A}$ involved, uniquely. We naturally introduce:
\begin{defn}
\label{Basis}A subset $\mathcal{A}$ of $\mathcal{L}_{n}\left( \mathcal{B}
\right) $ is a basis of $\mathcal{L}_{n}\left( \mathcal{B}\right) $ when
every element of $\mathcal{L}_{n}\left( \mathcal{B}\right) $ can be written
as a unique linear combination of elements of $\mathcal{A}$ with nonzero
coefficients, i.e. when $\mathcal{A}$ is generating and free.
\end{defn}
\qquad A first easy observation is that a basis must be made of unit vectors.
\begin{lem}
Let $\mathcal{A}$ be a basis of $\mathcal{L}_{n}\left( \mathcal{B}\right) $.
If $\underline{a}\in \mathcal{A}$ then $\left\Vert \underline{a}\right\Vert
=1$.
\end{lem}
\begin{pf}
Note first that $\underline{0}=(0,\ldots ,0)\in \mathcal{L}_{n}\left(
\mathcal{B}\right) $ cannot belong to $\mathcal{A}$: writing $\underline{1}
=(1,\ldots ,1)$ as a linear combination of elements of $\mathcal{A}$ with
nonzero coefficients and then adding the extra term $1\,\underline{0}$ would
produce two distinct such linear combinations. This is a contradiction so $\underline{0}\not\in
\mathcal{A}$. Let $\underline{a}\in \mathcal{A}$. Then $\underline{a}=1
\underline{a}=\left\Vert \underline{a}\right\Vert \underline{a}$. Hence if $%
\left\Vert \underline{a}\right\Vert \not=1$ then $\underline{a}$ can be
written as two distinct linear combinations of elements in $\mathcal{A}$
with nonzero coefficients (since $\underline{a}\not=\underline{0}$ so $%
\left\Vert \underline{a}\right\Vert \not=0$) which contradicts the
definition of a basis.\qed
\end{pf}
\bigskip A second easy observation is:
\begin{lem}
\label{OrthoFree}Let $\mathcal{A}$ be an orthonormal set in $\mathcal{L}
_{n}\left( \mathcal{B}\right) $. Then $\mathcal{A}$ is free.
\end{lem}
\begin{pf}
Let $\underline{e}=\sum_{i=1}^{m}b_{i}\underline{a_{i}}=\sum_{i=1}^{k}d_{i}
\underline{c_{i}}$ with $\underline{a_{1}},\ldots ,\underline{a_{m}},
\underline{c_{1}},\ldots ,\underline{c_{k}}\in \mathcal{A}$ and $%
b_{1},\ldots ,b_{m},d_{1},\ldots ,d_{k}\in \mathcal{B}\backslash \left\{
0\right\} $. Note that $d_{i}=\left\langle \underline{e},\underline{c_{i}}
\right\rangle $ for $i=1,\ldots ,k$. Now if $\underline{c_{j}}\not\in
\left\{ \underline{a_{1}},\ldots ,\underline{a_{m}}\right\} $ for some $j\in
\left\{ 1,\ldots ,k\right\} $ then $d_{j}=\left\langle \underline{c_{j}},
\underline{e}\right\rangle =\left\langle \underline{c_{j}}
,\sum_{i=1}^{m}b_{i}\underline{a_{i}}\right\rangle =0$ which is a
contradiction. Hence $\left\{ \underline{c_{1}},\ldots ,\underline{c_{k}}
\right\} \subseteq \left\{ \underline{a_{1}},\ldots ,\underline{a_{m}}
\right\} $. The reverse inclusion is obtained by symmetry. Then for all $%
i=1,\ldots ,m$ there exists $j\in \left\{ 1,\ldots ,m\right\} $ such that $%
b_{i}=\left\langle \underline{e},\underline{a_{i}}\right\rangle
=\left\langle \underline{e},\underline{c_{j}}\right\rangle =d_{j}$,
concluding this proof.\qed
\end{pf}
\qquad We thus can set:
\begin{defn}
A subset $\mathcal{A}$ of $\mathcal{L}_{n}\left( \mathcal{B}\right) $ is an
orthonormal basis of $\mathcal{L}_{n}\left( \mathcal{B}\right) $ when it is
an orthonormal generating subset of $\mathcal{L}_{n}\left( \mathcal{B}
\right) $.
\end{defn}
\qquad An orthonormal basis is thus a generating set which, by Lemma (\ref%
{OrthoFree}), is also free, so it is a basis; our vocabulary is thus
consistent.
\qquad There always exist orthonormal bases of $\mathcal{L}_{n}\left(
\mathcal{B}\right) $ and we now give some examples. First, the \emph{\
canonical basis }or \emph{standard basis }of $\mathcal{L}_{n}\left( \mathcal{%
\ B}\right) $ is defined as the basis $\left( \underline{\delta _{i}}\right)
_{i=1,\ldots ,n}$ with $\underline{\delta _{1}}=\left( 1,0,\ldots ,0\right) $
, $\underline{\delta _{2}}=\left( 0,1,0,\ldots ,0\right) $, \ldots , $%
\underline{\delta _{n}}=\left( 0,\ldots ,0,1\right) $. More generally, we
have:
\begin{exmp}
\label{CyclicBasis}Let $\underline{a}=\left( a_{1},\ldots ,a_{n}\right) $ be
a stochastic vector. Let
\begin{equation*}
\underline{e_{i}}=\left( a_{i},a_{i+1},\ldots ,a_{n},a_{1},\ldots
,a_{i-1}\right)
\end{equation*}
for all $i\in \left\{ 1,\ldots ,n\right\} $. Then by construction, $\left(
\underline{e_{i}}\right) _{i=1,\ldots ,n}$ is an orthonormal subset of $%
\mathcal{L}_{n}\left( \mathcal{B}\right) $. Moreover
\begin{eqnarray*}
\underline{\delta _{1}} &=&a_{1}\underline{e_{1}}+a_{2}\underline{e_{2}}
+\ldots +a_{n}\underline{e_{n}} \\
\underline{\delta _{2}} &=&a_{2}\underline{e_{1}}+a_{3}\underline{e_{2}}
+\ldots +a_{n}\underline{e_{n-1}}+a_{1}\underline{e_{n}} \\
&&\vdots \\
\underline{\delta _{n}} &=&a_{n}\underline{e_{1}}+a_{1}\underline{e_{2}}
+\ldots +a_{n-1}\underline{e_{n}}
\end{eqnarray*}
so $\left( \underline{e_{i}}\right) _{i=1,\ldots ,n}$ is a generating set
and thus an orthonormal basis for $\mathcal{L}_{n}\left( \mathcal{B}\right) $
.
\end{exmp}
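For a concrete instance of this construction (an illustrative choice, with $\mathcal{B}$ the algebra of subsets of $\left\{ 1,2\right\} $ and $n=2$), take the stochastic vector $\underline{a}=\left( \left\{ 1\right\} ,\left\{ 2\right\} \right) $, so that $\underline{e_{1}}=\left( \left\{ 1\right\} ,\left\{ 2\right\} \right) $ and $\underline{e_{2}}=\left( \left\{ 2\right\} ,\left\{ 1\right\} \right) $. Then
\begin{equation*}
\left\{ 1\right\} \underline{e_{1}}+\left\{ 2\right\} \underline{e_{2}}=\left( \left\{ 1\right\} ,0\right) +\left( \left\{ 2\right\} ,0\right) =\left( 1,0\right) =\underline{\delta _{1}}\text{,}\qquad \left\{ 2\right\} \underline{e_{1}}+\left\{ 1\right\} \underline{e_{2}}=\left( 0,1\right) =\underline{\delta _{2}}\text{,}
\end{equation*}
in agreement with the formulas above.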
\bigskip \qquad Let us observe that in general, linear independence in $%
\mathcal{L}_{n}\left( \mathcal{B}\right) $ is not an easy concept. We
propose in this paper to use orthogonality as a substitute. Indeed, if $%
\left\{ v_{1},\ldots ,v_{k}\right\} $ is a generating subset of $\mathcal{L}%
_{n}\left( \mathcal{B}\right) $ made of pairwise orthogonal, nonzero
vectors, then it is a minimal generating set, in the sense that any strict
subset is not generating (since, say, $v_{i}$ is not a linear combination of
the vectors in $\left\{ v_{1},\ldots ,v_{k}\right\} \backslash \left\{
v_{i}\right\} $ as all such combinations are orthogonal to $v_{i}$, the
inner product is definite yet $v_{i}\not=0$). However, orthogonality still
allows for some pathologies. For instance, assume there exists $a\in
\mathcal{B}$ such that $a$ is neither $0$ or $1$. Then $\left( a,0\right) $,
$\left( a^{c},0\right) $ and $\left( 0,1\right) $ are three nonzero
orthogonal vectors generating $\mathcal{L}_{2}\left( \mathcal{B}\right) $.
It is a minimal generating set, yet its cardinality is not minimal among all
generating families (since the canonical basis of $\mathcal{L}_{2}\left(
\mathcal{B}\right) $ has cardinality $2$). If $\mathcal{B}$ is large enough, we
can even build on the same model infinite orthogonal generating families of
nonzero vectors, which are therefore minimal! We shall prove in the next
section that these pathologies are avoided when one restricts one's
attention to orthonormal bases. We shall also see that the concept of a
basis, i.e. a free generating subset, is in fact identical to the concept of
an orthonormal basis.
\bigskip \qquad The natural maps for our structure are:
\begin{defn}
A map $T:\mathcal{L}_{n}\left( \mathcal{B}\right) \longrightarrow \mathcal{L}
_{m}\left( \mathcal{B}\right) $ is linear when for all $a\in \mathcal{B},
\underline{b},\underline{c}\in \mathcal{L}_{n}\left( \mathcal{B}\right) $ we
have $T(a\underline{b}+\underline{c})=aT(\underline{b})+T(\underline{c})$.
\end{defn}
\bigskip \qquad As usual, $T(0)=0$ when $T$ is linear. When $T$ is linear
from $\mathcal{L}_{n}\left( \mathcal{B}\right) $ into $\mathcal{L}_{n}\left(
\mathcal{B}\right) $, we call $T$ an operator on $\mathcal{L}_{n}\left(
\mathcal{B}\right) $. An operator $T$ on $\mathcal{L}_{n}\left( \mathcal{B}
\right) $ is invertible when there exists an operator $S$ such that $S\circ
T=T\circ S=I$ where $I:x\in \mathcal{L}_{n}\left( \mathcal{B}\right) \mapsto
x$ is the identity operator. In the usual way, one can check that $T$ is an
invertible operator if and only if $T$ is a linear bijection, and the
inverse operator is unique and is denoted by $T^{-1}$.
\bigskip \qquad We shall denote by $\mathcal{B}^{n}$ the Boolean algebra
product of $\mathcal{B}$ with itself $n$ times. Of course, the elements of $%
\mathcal{B}^{n}$ are the same as the elements of $\mathcal{L}_{n}\left(
\mathcal{B}\right) $, but the algebraic structures are different.
\begin{lem}
\label{AutoAuto}If $T$ is an invertible operator on $\mathcal{L}_{n}\left(
\mathcal{B}\right) $ then $T$ is a Boolean algebra automorphism on $\mathcal{%
\ B}^{n}$.
\end{lem}
\begin{pf}
Note that the supremum operation $\vee $ on $\mathcal{B}^{n}$ agrees with
the addition on $\mathcal{L}_{n}\left( \mathcal{B}\right) $ by definition.
So for any operator $L$ on $\mathcal{L}_{n}\left( \mathcal{B}\right) $ we
have $L\left( \underline{a}\vee \underline{b}\right) =L(\underline{a})\vee
L( \underline{b})$ and $L$ preserves the order $\leq $ on $\mathcal{B}^{n}$.
Hence, $T$ and $T^{-1}$ both preserve the order. Consequently, $\underline{a}
\leq \underline{b}$ if and only if $T(\underline{a})\leq T(\underline{b})$.
Hence $T$ is a lattice morphism, i.e. it also preserves the infimum. Also
note that, since $\left( 1,\ldots ,1\right) $ is the largest element of
$\mathcal{B}^{n}$ and $T$ is an order isomorphism, we have $T(1,\ldots
,1)=\left( 1,\ldots ,1\right) $; we deduce that $T$ preserves the complement
operation as well. This
concludes the proof.\qed
\end{pf}
\bigskip \qquad The converse of Lemma (\ref{AutoAuto}) does not hold,
namely: if $T:\mathcal{B}^{n}\longrightarrow \mathcal{B}^{n}$ is a Boolean
algebra automorphism then $T:\mathcal{L}_{n}\left( \mathcal{B}\right)
\longrightarrow \mathcal{L}_{n}\left( \mathcal{B}\right) $ need not be
linear. For example, let $\mathcal{B}=\left\{ 0,1,\omega ,\omega
^{c}\right\} $ and consider the Boolean algebra $\mathcal{B}^{2}$. Define
the automorphism $S$ on $\mathcal{B}$ by $S(\omega )=\omega ^{c}$ (so that $%
S(0)=0$, $S(1)=1$ and $S(\omega ^{c})=\omega $). Then $T=S\times S$ is an
automorphism of $\mathcal{B}^{2}$. Yet, seen as a map on $\mathcal{L}
_{2}\left( \mathcal{B}\right) $ we have
\begin{equation*}
T\left( \omega \left( 1,0\right) \right) =T(\omega ,0)=\left( \omega
^{c},0\right)
\end{equation*}
and yet
\begin{equation*}
\omega T\left( 1,0\right) =\left( \omega ,0\right)
\end{equation*}
and thus $T$ is not linear.
\bigskip \qquad We now show that if $\mathcal{B}$ is a finite Boolean
algebra, then any orthonormal basis for $\mathcal{L}_{n}\left( \mathcal{B}
\right) $ has cardinality $n$. Indeed, let $\left\{ \underline{e_{1}},\ldots
,\underline{e_{m}}\right\} $ be an orthonormal basis for $\mathcal{L}
_{n}\left( \mathcal{B}\right) $. Define $T:\mathcal{L}_{n}\left( \mathcal{B}
\right) \longrightarrow \mathcal{L}_{m}\left( \mathcal{B}\right) $ by
\begin{equation*}
T\left( \underline{a}\right) =\left( \left\langle \underline{a},\underline{
e_{1}}\right\rangle ,\ldots ,\left\langle \underline{a},\underline{e_{m}}
\right\rangle \right) \text{.}
\end{equation*}
Then $T$ is a bijection from $\mathcal{B}^{n}$ onto $\mathcal{B}^{m}$ by
definition of orthonormal basis. Hence $n=m$ since $\mathcal{B}$ is finite.
As previously mentioned, we shall show in the next section that this result
holds for any Boolean algebra $\mathcal{B}$. Also, notice that $T$ thus
defined is an invertible operator on $\mathcal{L}_{n}\left( \mathcal{B}
\right) $, hence a Boolean algebra automorphism of $\mathcal{B}^{n}$ by
Lemma\ (\ref{AutoAuto}).
\bigskip \qquad As in traditional linear algebra, the study of linear maps
is facilitated by introducing matrices. A \emph{Boolean matrix} $A$ is an $%
n\times m$ matrix with entries in $\mathcal{B}$. We then write $A=\left[
a_{ij}\right] $ with $a_{ij}\in \mathcal{B}$ for $i\in \left\{ 1,\ldots
,n\right\} $ and $j\in \left\{ 1,\ldots ,m\right\} $. If $A$ is an $n\times
m $ Boolean matrix and if $B$ is an $m\times k$ Boolean matrix, then we
define the product $AB$ as the $n\times k$ matrix whose $\left( i,j\right) $
entry is given by $\vee _{p=1}^{m}a_{ip}b_{pj}$. In particular, we see
elements of $\mathcal{L}_{n}\left( \mathcal{B}\right) $ as $n\times 1$
matrices (i.e. column vectors). Boolean matrices, and their generalization to
distributive lattices, have been studied in a considerable literature \cite%
{Blyth67,Cechlarova03,Giveon64,Jagannadham66,Luce52,Rutherford63,Rutherford63b,Rutherford64,Sindak75,Skornyakov86,Wedderburn34,Tan98,Tan01,Yoeli61}
. These matrices provide useful tools in various fields such as switching
nets, automata theory and finite graph theory. Notice that permutation
matrices are a special case of (invertible) Boolean matrices.
\bigskip \qquad Our main motivation for studying Boolean matrices comes from
an analogy with Markov chains \cite{Gudder08,Gudder08b,Stirzaker05}. Let $G$
be a finite directed graph whose vertices are labelled $1,2,\ldots ,n$ and
let $\mathcal{B}$ be a fixed Boolean algebra. We think of the vertices of $G$
as sites that a physical system can occupy. The edges of $G$ designate the
allowable transitions between sites. If there is an edge from vertex $i$ to
vertex $j$, we label it by an element $a_{ji}$ of $\mathcal{B}$. We think of
$a_{ji}$ as the event, or proposition that the system evolves from site $i$
to site $j$ in one time-step. If there is no edge between $i$ and $j$ then
we set $a_{ji}=0$. The Boolean matrix $A=\left[ a_{ij}\right] $ is the
transition matrix in one-time-step for the physical system. The transition
matrix for $m$-time-steps is then naturally given by $A^{m}$.
\qquad Assuming that the system evolves from a site $i$ to some specific
site $j$ in one-time-step, we postulate that $a_{ji}a_{ki}=0$ for $j\not=k$
and $\vee _{j=1}^{n}a_{ji}=1$ for all $i=1,\ldots ,n$. Thus each column of $%
A $ is a stochastic vector. In the next section, we will refer to such
matrices as stochastic matrices. Suppose that $b_{i}$ is the event that the
system is in the site $i$ initially. We would then have that the vector $%
\underline{b}=\left( b_{1},\ldots ,b_{n}\right) $ is a stochastic vector and
$A\underline{b}$ describes the system location after one-time-step. As we
shall see, $A\underline{b}$ is again a stochastic vector and in a natural
way, $\left( A\underline{b}\right) _{i}=\dbigvee\limits_{j=1}^{n}a_{ij}b_{j}$
is the event that the system is at site $i$ after one time-step. Thus, $m\in
\mathbb{N}\mapsto A^{m}$ describes the dynamics of the system and this is
analogous to a traditional Markov chain. If in addition, we impose the
condition that for every site $i$ there is a specific site $j$ from which
the system evolved in one time-step, then we would have $a_{ij}a_{ik}=0$ for $j\not=k$ and
$\dbigvee\limits_{j=1}^{n}a_{ij}=1$. Such matrices are called unitary and
will be studied from Section 4 onward.
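\qquad For a concrete illustration, take $n=2$ and $a,b\in \mathcal{B}$,
where $a$ is the event that the system stays at site $1$ and $b$ the event
that it moves from site $2$ to site $1$. The matrix
\begin{equation*}
A=\left[
\begin{array}{cc}
a & b \\
a^{c} & b^{c}%
\end{array}
\right]
\end{equation*}
has stochastic columns, and for any stochastic vector $\underline{x}=\left(
x_{1},x_{2}\right) $ we get $A\underline{x}=\left( ax_{1}\vee
bx_{2},a^{c}x_{1}\vee b^{c}x_{2}\right) $, which is again stochastic since $%
\left( ax_{1}\vee bx_{2}\right) \left( a^{c}x_{1}\vee b^{c}x_{2}\right) =0$
and $ax_{1}\vee bx_{2}\vee a^{c}x_{1}\vee b^{c}x_{2}=x_{1}\vee x_{2}=1$.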
\qquad In general, if $G$ is a directed graph with $n$ vertices and $A$ is
an $n\times n$ stochastic matrix corresponding to the edges of $G$, we call $%
\left( G,A\right) $ a Boolean Markov chain. In Section 6, we study the
powers of $A$, which are important for the description of the dynamics of $%
\left( G,A\right) $.
\section{The Dimension Theorem}
\qquad An orthonormal set is said to be \emph{stochastic} if all of its
elements are stochastic. In this section, we show that all orthonormal bases
of $\mathcal{L}_{n}\left( \mathcal{B}\right) $ have cardinality $n$.
Conversely, we show that any stochastic orthonormal set with cardinality $n$
is a basis for $\mathcal{L}_{n}\left( \mathcal{B}\right) $.
\qquad We shall use the following notations. Given a set $\mathcal{A=}
\left\{ \underline{a}_{1},\ldots ,\underline{a}_{m}\right\} $ of $m$
vectors, we use the notation $\underline{a}_{j}=\left( a_{1j},\ldots
,a_{nj}\right) $ with $a_{ij}\in \mathcal{B}$ ($i=1,\ldots ,n$ and $%
j=1,\ldots ,m$). Thus, we often think about a set $\left\{ \underline{a}
_{1},\ldots ,\underline{a}_{m}\right\} $ as a matrix $\left[ a_{ij}\right]
_{n\times m}$ whose columns are the elements of the set. By abuse of
notation, we denote this matrix by $\mathcal{A}$ again.
\bigskip \qquad We first establish that orthonormal bases possess a duality
property.
\begin{thm}
\label{Duality}Let $\mathcal{A=}\left\{ \underline{a}_{1},\ldots ,\underline{%
a}_{m}\right\} $ be an orthonormal subset of $\mathcal{L}_{n}\left( \mathcal{%
B}\right) $. Then $\mathcal{A}$ is an orthonormal basis for $\mathcal{L}%
_{n}\left( \mathcal{B}\right) $ if and only if the set $\mathcal{A}^{\ast }$
of columns of $\left[ a_{ji}\right] _{m\times n}$ is an orthonormal subset
of $\mathcal{L}_{m}\left( \mathcal{B}\right) $.
\end{thm}
\begin{pf}
For all $j\in \left\{ 1,\ldots ,m\right\} $ we denote $\underline{a_{j}}
=\left( a_{1j},\ldots ,a_{nj}\right) $. Assume that $\mathcal{A}$ is an
orthonormal basis for $\mathcal{L}_{n}\left( \mathcal{B}\right) $. Then
there exist $b_{1},\ldots ,b_{m}\in \mathcal{B}$ such that $\underline{
\delta _{1}}=\sum_{j=1}^{m}b_{j}\underline{a_{j}}$. In particular, $%
0=\dbigvee\limits_{j=1}^{m}b_{j}a_{ij}$ for $i\not=1$ so $b_{j}a_{ij}=0$ for
all $i\in \left\{ 2,\ldots ,n\right\} $ and all $j\in \left\{ 1,\ldots
,m\right\} $. Hence, $b_{j}a_{1j}=b_{j}\left(
\dbigvee\limits_{i=1}^{n}a_{ij}\right) =b_{j}$ since $\vee
_{i=1}^{n}a_{ij}=1 $. Hence $b_{j}\leq a_{1j}$ for all $j\in \left\{
1,\ldots ,m\right\} $. On the other hand, $1=\dbigvee
\limits_{j=1}^{m}b_{j}a_{1j}$ and $a_{1j}$ and $a_{1k}$ are disjoint for $%
j\not=k$, so we must have $b_{j}a_{1j}=a_{1j}$ for all $j\in \left\{
1,\ldots ,m\right\} $. Consequently, $\dbigvee\limits_{j=1}^{m}a_{1j}=1$.
Moreover since $b_{j}a_{ij}=0$ for $i\not=1$, we conclude that $%
a_{1j}a_{ij}=0$ for $i\not=1$.
Replacing $\underline{\delta _{1}}$ by $\underline{\delta _{k}}$ for $k\in
\left\{ 1,\ldots ,n\right\} $ we see similarly that $\dbigvee
\limits_{j=1}^{m}a_{kj}=1$ and $a_{kj}a_{ij}=0$ for $i\not=k$ and for all $%
j\in \left\{ 1,\ldots ,m\right\} $. Hence, the set of columns of $\left[
a_{ji}\right] _{m\times n}$ is indeed an orthonormal subset of $\mathcal{L}
_{m}\left( \mathcal{B}\right) $.
Conversely, assume that $\mathcal{A}^{\ast }$ is an orthonormal subset of $%
\mathcal{L}_{m}\left( \mathcal{B}\right) $. This means by definition, and
using the same notations as before, that $\dbigvee\limits_{j=1}^{m}a_{ij}=1$
for all $i=1,\ldots ,n$ and $a_{kj}a_{ij}=0$ for all $i\not=k$ between $1$
and $n$ and $j=1,\ldots ,m$. It follows that
\begin{equation}
\dbigvee\limits_{j=1}^{m}a_{kj}a_{ij}=\delta _{ik}\text{\ \ \ \ }%
(k,i=1,\ldots ,n) \label{DualityEq1}
\end{equation}%
where $\delta _{ij}$ is $1\in \mathcal{B}$ if $i=j$ and $0\in \mathcal{B}$
otherwise. Now (\ref{DualityEq1}) is equivalent to
\begin{equation*}
\underline{\delta _{k}}=\dbigvee\limits_{j=1}^{m}a_{kj}\underline{a_{j}}
\end{equation*}%
for $k=1,\ldots ,n$ and thus $\left\{ \underline{a_{1}},\ldots ,\underline{%
a_{m}}\right\} $ generates $\mathcal{L}_{n}\left( \mathcal{B}\right) $ and,
since it is an orthonormal set by assumption, it is an orthonormal basis of $%
\mathcal{L}_{n}\left( \mathcal{B}\right) $.\qed
\end{pf}
\begin{cor}
\label{StochasticBasis}An orthonormal basis is stochastic.
\end{cor}
\begin{cor}
\label{NSize}If $\left\{ \underline{a_{1}},\ldots ,\underline{ a_{n}}
\right\} $ is a stochastic orthonormal subset of $\mathcal{L} _{n}\left(
\mathcal{B}\right) $ then it is a basis.
\end{cor}
\begin{pf}
Let $a=\left( \dbigvee\limits_{j=1}^{n}a_{1j}\right) ^{c}$ and assume $%
a\not=0$. By Stone's Theorem, there exists a set $\Omega ,$ a Boolean
algebra of subsets of $\Omega $ and a Boolean algebra isomorphism $\mathcal{%
B }\longrightarrow \mathcal{B}_{\Omega }$. We identify $\mathcal{B}$ and $%
\mathcal{B}_{\Omega }$ in this proof and thus regard the elements of $%
\mathcal{B}$ as subsets of $\Omega $, with $0$ identified with $\emptyset $
and $1$ with $\Omega $.
Let $\omega \in a$. Then $\omega \not\in a_{1j}$ for $j=1,\ldots ,n$.\ Since
$\mathcal{A}$ is stochastic and orthonormal, we must have that $\omega \in
a_{i_{1}1}$, $\omega \in a_{i_{2}2}$, \ldots , $\omega \in a_{i_{n-1},n-1}$
for some $i_{1},\ldots ,i_{n-1}$ with $i_{r}\not=1$ and $i_{r}\not=i_{s}$
for $r\not=s$, $r,s=1,\ldots ,n-1$. Now, suppose $\omega \in a_{kn}$ for some $k\in
\left\{ 1,\ldots ,n\right\} $. Then $k\not=1$ (since $\omega \in a$) and $%
k\not=i_{r}$ for $r=1,\ldots ,n-1$ (orthogonality). But this is a
contradiction since this precludes $n$ values for $k$ which can only take $n$
values. Hence $\omega \not\in a_{kn}$ for all $k\in \left\{ 1,\ldots
,n\right\} $. This contradicts, in turn, that $\underline{a_{n}}$ is a unit
vector, i.e. that its entries cover $\Omega $. Hence, $a=0$.
The same reasoning applies to show that $\dbigvee\limits_{j=1}^{n}a_{kj}=1$
for all $k\in \left\{ 1,\ldots ,n\right\} $. Hence $\mathcal{A}^{\ast }$ is
an orthonormal subset of $\mathcal{L}_{n}\left( \mathcal{B}\right) $ and
thus by Theorem (\ref{Duality}), $\mathcal{A}$ is an orthonormal basis for $%
\mathcal{L}_{n}\left( \mathcal{B}\right) $.\qed
\end{pf}
\bigskip \qquad By symmetry, we can restate Theorem (\ref{Duality}) by
stating that $\mathcal{A}$ is an orthonormal basis for $\mathcal{L}
_{n}\left( \mathcal{B}\right) $ if and only if $\mathcal{A}^{\ast }$ is an
orthonormal basis for $\mathcal{L}_{m}\left( \mathcal{B}\right) $. We call $%
\mathcal{A}^{\ast }$ the dual basis for $\mathcal{A}$. For example, if $%
a_{1},a_{2},a_{3}\in \mathcal{B}$ with $a_{1}\vee a_{2}\vee a_{3}=1$ and $%
a_{i}a_{j}=0$ for $i\not=j$ in $\left\{ 1,2,3\right\} $, then the columns of
the following matrix:
\begin{equation*}
\mathcal{A=}\left[
\begin{array}{ccc}
a_{1} & a_{3} & a_{2} \\
a_{2} & 0 & a_{2}^{c} \\
a_{3} & a_{3}^{c} & 0%
\end{array}
\right]
\end{equation*}
form an orthonormal basis for $\mathcal{L}_{3}\left( \mathcal{B}\right) $.
The rows form the corresponding dual basis. Notice that $\mathcal{A}$ need
not be symmetric. Such a matrix $\mathcal{A}$ is what we shall call a
unitary matrix in section 4.
\bigskip \qquad We now establish a core result concerning the construction
of stochastic vectors.
\begin{thm}
\label{Descent}Let $n>1$. Let $\underline{a}=\left( a_{1},\ldots
,a_{n}\right) $ and $\underline{b}=\left( b_{1},\ldots ,b_{n}\right) $ be
two stochastic vectors in $\mathcal{L}_{n}\left( \mathcal{B}\right) $. Then $%
\underline{a}\perp \underline{b}$ if and only if there exists a stochastic
vector $\underline{c}=\left( c_{1},\ldots ,c_{n-1}\right) $ in $\mathcal{L}
_{n-1}\left( \mathcal{B}\right) $ such that $b_{i}=c_{i}a_{i}^{c}$ for $%
i=1,\ldots ,n-1$. If $\underline{a}\perp \underline{b}$ then we can always
choose $\underline{c}$ with $c_{i}=b_{n}a_{i}\vee b_{i}$ for $i=1,\ldots
,n-1 $.
\end{thm}
\begin{pf}
Suppose that $\underline{a}\perp \underline{b}$. Let $i\in \left\{ 1,\ldots
,n-1\right\} $. We set $c_{i}=b_{n}a_{i}\vee b_{i}$. Since $\underline{a}
\perp \underline{b}$, we have $b_{i}\leq a_{i}^{c}.$ Hence
\begin{equation*}
c_{i}a_{i}^{c}=\left( b_{n}a_{i}\vee b_{i}\right)
a_{i}^{c}=b_{i}a_{i}^{c}=b_{i}\text{.}
\end{equation*}
Now, since $\underline{a}$ and $\underline{b}$ are stochastic vectors, we
conclude that for all $j\in \left\{ 1,\ldots ,n-1\right\} $ with $j\not=i$ we
have
\begin{eqnarray*}
c_{i}c_{j} &=&\left( b_{n}a_{i}\vee b_{i}\right) \left( b_{n}a_{j}\vee
b_{j}\right) \\
&=&b_{n}a_{i}a_{j}\vee b_{n}b_{j}a_{i}\vee b_{i}b_{n}a_{j}\vee b_{i}b_{j}=0
\text{.}
\end{eqnarray*}
Finally, we have
\begin{eqnarray*}
\dbigvee\limits_{i=1}^{n-1}c_{i} &=&\dbigvee\limits_{i=1}^{n-1}\left(
b_{n}a_{i}\vee b_{i}\right) =\left(
b_{n}\dbigvee\limits_{i=1}^{n-1}a_{i}\right) \vee
\dbigvee\limits_{i=1}^{n-1}b_{i} \\
&=&b_{n}a_{n}^{c}\vee b_{n}^{c}=b_{n}\vee b_{n}^{c}=1\text{.}
\end{eqnarray*}
We conclude that $\underline{c}=\left( c_{1},\ldots ,c_{n-1}\right) $ is a
stochastic vector, and it obviously has the desired property.
Conversely, suppose that there exists a stochastic vector $\underline{c}$ in
$\mathcal{L}_{n-1}\left( \mathcal{B}\right) $ such that $%
b_{i}=c_{i}a_{i}^{c} $ for $i=1,\ldots ,n-1$. Then by construction $%
a_{i}b_{i}=0$ for $i=1,\ldots ,n-1$. Moreover
\begin{eqnarray*}
a_{n}b_{n} &=&a_{n}\left( \dbigvee\limits_{i=1}^{n-1}b_{i}\right)
^{c}=a_{n}\left( \dbigvee\limits_{i=1}^{n-1}c_{i}a_{i}^{c}\right) ^{c} \\
&=&a_{n}\dbigwedge\limits_{i=1}^{n-1}\left( a_{i}\vee c_{i}^{c}\right)
=a_{n}\dbigwedge\limits_{i=1}^{n-1}c_{i}^{c}=a_{n}\left(
\dbigvee\limits_{i=1}^{n-1}c_{i}\right) ^{c}=0\text{.}
\end{eqnarray*}
It follows that $\underline{a}\perp \underline{b}$.\qed
\end{pf}
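\qquad To illustrate the construction, let $\mathcal{B}$ be the power set of
$\Omega =\left\{ 1,2,3\right\} $, $\underline{a}=\left( \left\{ 1\right\}
,\left\{ 2\right\} ,\left\{ 3\right\} \right) $ and $\underline{b}=\left(
\left\{ 2\right\} ,\left\{ 3\right\} ,\left\{ 1\right\} \right) $. Then $%
\underline{a}\perp \underline{b}$ and the recipe $c_{i}=b_{3}a_{i}\vee b_{i}$
yields $\underline{c}=\left( \left\{ 1,2\right\} ,\left\{ 3\right\} \right) $%
, a stochastic vector of $\mathcal{L}_{2}\left( \mathcal{B}\right) $
satisfying $c_{1}a_{1}^{c}=\left\{ 2\right\} =b_{1}$ and $c_{2}a_{2}^{c}=%
\left\{ 3\right\} =b_{2}$.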
\bigskip \qquad We can now show:
\begin{lem}
\label{BasisUpBound}If $\mathcal{A}=\left\{ \underline{a_{1}},\ldots ,
\underline{a_{m}}\right\} $ is a stochastic orthonormal set in $\mathcal{L}
_{n}\left( \mathcal{B}\right) $ then $m\leq n$.
\end{lem}
\begin{pf}
We proceed by induction on $n\in \mathbb{N}$. For $n=1$ the only orthonormal
set is $\left\{ 1\right\} $ so the result holds trivially. Now we assume the
result holds for some $n\in \mathbb{N}$. Let $\mathcal{A}=\left[ a_{ij}%
\right] _{(n+1)\times m}$ be a stochastic orthonormal set in $\mathcal{L}%
_{n+1}\left( \mathcal{B}\right) $. By Theorem (\ref{Descent}), for each $%
j=2,\ldots ,m$ there exists a stochastic vector $\underline{c_{j}}=\left(
c_{1j},\ldots ,c_{nj}\right) $ in $\mathcal{L}_{n}\left( \mathcal{B}\right) $
such that $a_{ij}=c_{ij}a_{i1}^{c}$ for all $i=1,\ldots ,n$ and $j=2,\ldots
,m$. Let $j,k\in \left\{ 2,\ldots ,m\right\} $ with $j\not=k$ and $i\in
\left\{ 1,\ldots ,n\right\} $. Recall from Theorem (\ref{Descent}) that $%
c_{ij}=a_{i1}a_{n+1,j}\vee a_{ij}$, and since $\mathcal{A}$ is orthonormal
\begin{eqnarray*}
c_{ij}c_{ik} &=&\left( a_{i1}a_{n+1,j}\vee a_{ij}\right) \left(
a_{i1}a_{n+1,k}\vee a_{ik}\right) \\
&=&a_{i1}a_{n+1,j}a_{n+1,k}\vee a_{i1}a_{n+1,j}a_{ik}\vee
a_{i1}a_{ij}a_{n+1,k}\vee a_{ij}a_{ik} \\
&=&0\text{.}
\end{eqnarray*}
Hence $\left\{ \underline{c_{2}},\ldots ,\underline{c_{m}}\right\} $ is a
stochastic orthonormal set in $\mathcal{L}_{n}\left( \mathcal{B}\right) $.
By our induction hypothesis, $m-1\leq n$ and thus $m\leq n+1$, which
completes our proof by induction.\qed
\end{pf}
\qquad The main result of this section is:
\begin{thm}
\label{Dimension}If $\mathcal{A}$ is an orthonormal basis for $\mathcal{L}
_{n}\left( \mathcal{B}\right) $ then the cardinality of $\mathcal{A}$ is $n$.
\end{thm}
\begin{pf}
We proceed by induction on $n$. The result is trivial for $n=1$. Assume that
for some $n\in \mathbb{N}$, if $\mathcal{A}_{0}$ is an orthonormal basis for
$\mathcal{L}_{k}\left( \mathcal{B}\right) $ with $k\leq n$ then $\mathcal{A}%
_{0}$ contains exactly $k$ vectors. Let $\mathcal{A}$ be an orthonormal
basis of $\mathcal{L}_{n+1}\left( \mathcal{B}\right) $. By Corollary (\ref%
{StochasticBasis}), $\mathcal{A}$ is stochastic. Applying Lemma (\ref%
{BasisUpBound}), we deduce that the cardinality $m$ of $\mathcal{A}$
satisfies $m\leq n+1$. Assume that $m<n+1$. By Theorem (\ref{Duality}), $%
\mathcal{A}^{\ast }$ is an orthonormal basis for $\mathcal{L}_{m}\left(
\mathcal{B}\right) $ since $\mathcal{A}=\left( \mathcal{A}^{\ast }\right)
^{\ast }$ is an orthonormal subset of $\mathcal{L}_{n+1}\left( \mathcal{B}%
\right) $. Since $m\leq n$, we conclude by our induction hypothesis that the
cardinality of $\mathcal{A}^{\ast }$ is $m$. But by construction, the
cardinality of $\mathcal{A}^{\ast }$ is $n+1$, which is a contradiction.
Hence $m=n+1$ which completes our proof by induction.\qed
\end{pf}
Combining Theorem (\ref{Dimension}) and Corollary (\ref{NSize}) we obtain
the following result:
\begin{cor}
\label{DimCor}A stochastic orthonormal set $\mathcal{A}$ is a basis for $%
\mathcal{L}_{n}\left( \mathcal{B}\right) $ if and only if the cardinality of
$\mathcal{A}$ is $n$.
\end{cor}
\qquad For completeness, we shall now check that the orthonormal
families of $\mathcal{L}_{n}\left( \mathcal{B}\right) $ of cardinality $n$
are in fact bases. We shall use the following:
\begin{lem}
\label{Redux1}If $\underline{a}=\left( a_{1},\ldots ,a_{n}\right) \in
\mathcal{L}_{n}\left( \mathcal{B}\right) $ is a unit vector, then there
exists a stochastic vector $\underline{b}=\left( b_{1},\ldots ,b_{n}\right) $
with $b_{i}\leq a_{i}$ for all $i=1,\ldots ,n$.
\end{lem}
\begin{pf}
For $i=1,\ldots ,n$ we set $b_{i}=a_{i}\left( a_{1}^{c}a_{2}^{c}\ldots
a_{i-1}^{c}\right) \leq a_{i}$. Then $b_{i}b_{j}=0$ for $i,j=1,\ldots ,n$
and $i\not=j$, and $\dbigvee\limits_{i=1}^{n}b_{i}=\dbigvee
\limits_{i=1}^{n}a_{i}=1$ so $\underline{b}$ is a stochastic vector.\qed
\end{pf}
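\qquad For instance, if $\mathcal{B}$ is the power set of $\left\{
1,2,3\right\} $ and $\underline{a}=\left( \left\{ 1,2\right\} ,\left\{
2,3\right\} ,\left\{ 1,3\right\} \right) $, a unit vector, the construction
above gives $\underline{b}=\left( \left\{ 1,2\right\} ,\left\{ 3\right\}
,\emptyset \right) $, which is indeed a stochastic vector with $b_{i}\leq
a_{i}$ for $i=1,2,3$.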
\qquad Now, we can state:
\begin{cor}
\label{DimCor2}An orthonormal set of $\mathcal{L}_{n}\left( \mathcal{B}
\right) $ is a basis for $\mathcal{L}_{n}\left( \mathcal{B}\right) $ if and
only if it has cardinality $n$.
\end{cor}
\begin{pf}
Let $\mathcal{A=}\left\{ \underline{a_{1}},\ldots ,\underline{a_{n}}\right\}
$ be an orthonormal set. Using Lemma (\ref{Redux1}), there exist
stochastic vectors $\underline{b_{1}},\ldots ,\underline{b_{n}}$ such that $%
b_{ij}\leq a_{ij}$. Therefore, $\left\{ \underline{b_{1}},\ldots ,\underline{
b_{n}}\right\} $ is a stochastic orthogonal set of size $n$ and thus it is a
basis for $\mathcal{L}_{n}\left( \mathcal{B}\right) $ by Corollary (\ref%
{DimCor}). Now, let $i,j,k,l=1,\ldots ,n$ with $i>j$. Let $\underline{v}=%
a_{ik}a_{jk}\underline{\delta _{i}}$. Then, using the construction of Lemma
(\ref{Redux1}), we have
\begin{equation*}
a_{ik}a_{jk}b_{il}=a_{ik}a_{jk}a_{il}a_{1l}^{c}\ldots a_{i-1,l}^{c}=0
\end{equation*}
since either $l=k$ and then $a_{ik}a_{jk}b_{il}\leq a_{jk}a_{jk}^{c}=0$
since $i>j$, or $l\not=k$ and $a_{ik}a_{il}=0$ since $\mathcal{A}$ is
orthogonal. Hence the vector $\underline{v}$ is orthogonal to $\underline{
b_{1}},\ldots ,\underline{b_{n}}$, thus $\underline{v}=0$. Hence, $\mathcal{%
A }$ is stochastic.\ By Corollary (\ref{DimCor}), it is an orthonormal basis
of $\mathcal{L}_{n}\left( \mathcal{B}\right) $.
The converse is Theorem (\ref{Dimension}).\qed
\end{pf}
\qquad In view of Corollary (\ref{DimCor2}), we call $n$ the \emph{dimension}
of the Boolean linear space $\mathcal{L}_{n}\left( \mathcal{B}\right) $. We
now consider the following question: can any stochastic orthonormal subset $%
\mathcal{A}$ of $\mathcal{L}_{n}\left( \mathcal{B}\right) $ be extended to
an orthonormal basis for $\mathcal{L}_{n}\left( \mathcal{B}\right) $? By
Lemma (\ref{BasisUpBound}), $\mathcal{A}$ can not have more than $n$
vectors. Of course, if the cardinality of $\mathcal{A}$ is $n$ then it is
already a basis by Corollary (\ref{NSize}). Moreover, Example (\ref%
{CyclicBasis}) shows that if $\mathcal{A}$ is reduced to a unique stochastic
vector, then there is an orthonormal basis of $\mathcal{L}_{n}\left(
\mathcal{B}\right) $ containing $\mathcal{A}$ so the answer is affirmative.
We shall now prove that the answer is affirmative in general.
\bigskip \qquad We shall use the following concept:
\begin{defn}
A subset $\mathcal{M}\subseteq \mathcal{L}_{n}\left( \mathcal{B}\right) $ is
a subspace if it is generated by an orthonormal set $\mathcal{A}=\left\{
\underline{a_{1}},\ldots ,\underline{a_{m}}\right\} $, i.e.
\begin{equation*}
\mathcal{M}=\left\{ \sum_{i=1}^{m}b_{i}\underline{a_{i}}:b_{1},\ldots
,b_{m}\in \mathcal{B}\right\} \text{.}
\end{equation*}
Any orthonormal set $\mathcal{A}$ generating $\mathcal{M}$ is called an
orthonormal basis for $\mathcal{M}$.
\end{defn}
\qquad We emphasize that we do not require orthonormal bases of subspaces to
be stochastic. In fact, a subspace may not contain any stochastic
orthonormal basis: for example, if there exists $a\in \mathcal{B}$ such that
$a\not\in \left\{ 0,1\right\} $ then the subset $E=\left\{ b\left(
1,a\right) :b\in \mathcal{B}\right\} $ is a subspace with basis $\left(
1,a\right) $. Since any orthonormal set of two vectors generates $\mathcal{L}
_{2}\left( \mathcal{B}\right) \not=E$, any orthonormal basis for $E$ is
necessarily reduced to one vector. If this vector is stochastic, then it is
of the form $\left( b,b^{c}\right) $ for some $b\in \mathcal{B}$. It is then
easy to check that $(1,a)$ can not be of the form $\left( cb,cb^{c}\right) $
and thus $E$ has no stochastic vector basis. Thus, we will sometimes use:
\begin{defn}
A subspace with a stochastic orthonormal basis is called a stochastic
subspace.
\end{defn}
\qquad Linear maps generalize trivially to linear maps between two
subspaces. Of special interest to us will be:
\begin{defn}
A linear map $T:\mathcal{M}\longrightarrow \mathcal{N}$ between two
subspaces $\mathcal{M}$ and $\mathcal{N}$ of, respectively, $\mathcal{L}
_{n}\left( \mathcal{B}\right) $ and $\mathcal{L}_{m}\left( \mathcal{B}
\right) $, is called an isometry when for all $\underline{a},\underline{b}
\in \mathcal{M}$ we have $\left\langle T(\underline{a}),T(\underline{b}
)\right\rangle =\left\langle \underline{a},\underline{b}\right\rangle $.
\end{defn}
\begin{lem}
\label{IsometryChar}Let $\mathcal{M}\subseteq \mathcal{L}_{n}\left( \mathcal{%
\ B}\right) $ and $\mathcal{N}\subseteq \mathcal{L}_{m}\left( \mathcal{B}
\right) $ be two subspaces. Let $T:\mathcal{M}\longrightarrow \mathcal{N}$
be a linear map. The following are equivalent:
\begin{enumerate}
\item $T$ is an isometry,
\item There exists an orthonormal basis $\mathcal{A}=\left\{ \underline{%
e_{1} },\ldots ,\underline{e_{k}}\right\} $ of $\mathcal{M}$ such that $%
\left\{ T \underline{e_{i}}:i=1,\ldots ,k\right\} $ is an orthonormal set of
$\mathcal{N}$,
\item For every orthonormal set $\mathcal{A=}\left\{ \underline{e_{1}}
,\ldots ,\underline{e_{k}}\right\} $ of $\mathcal{M}$, the set $\left\{ T
\underline{e_{1}},\ldots T\underline{e_{k}}\right\} $ is an orthonormal set
of $\mathcal{N}$.
\end{enumerate}
Moreover, if $T$ is an isometry, then it is injective.
\end{lem}
\begin{pf}
We start by proving that (2) implies (1). Let $\mathcal{A=}\left\{
\underline{e_{1}},\ldots ,\underline{e_{k}}\right\} $ be an orthonormal
basis of $\mathcal{M}$ such that $\left\{ T\underline{e_{1}},\ldots ,T%
\underline{e_{k}}\right\} $ is orthonormal. Let $\underline{a},\underline{b}%
\in \mathcal{M}$. We can write $\underline{a}=\sum_{i=1}^{k}a_{i}\underline{%
e_{i}}$ and $\underline{b}=\sum_{i=1}^{k}b_{i}\underline{e_{i}}$ with $%
a_{i},b_{i}\in \mathcal{B}$ ($i=1,\ldots ,k$). Then
\begin{eqnarray*}
\left\langle T\underline{a},T\underline{b}\right\rangle
&=&\dbigvee\limits_{i,j=1}^{k}\left\langle a_{i}T\underline{e_{i}},b_{j}T%
\underline{e_{j}}\right\rangle
=\dbigvee\limits_{i,j=1}^{k}a_{i}b_{j}\left\langle T\underline{e_{i}},T%
\underline{e_{j}}\right\rangle \\
&=&\dbigvee\limits_{i=1}^{k}a_{i}b_{i}=\left\langle \underline{a},\underline{%
b}\right\rangle \text{.}
\end{eqnarray*}%
Hence $T$ is an isometry.
Now, it is trivial that (1) implies (3) and that (3) implies (2).
Assume now that $T$ is an isometry. Assume $T\underline{a}=T\underline{b}$.
Then, using the same notations as above, we have
\begin{equation*}
a_{i}=\left\langle \underline{a},\underline{e_{i}}\right\rangle =\left\langle T
\underline{a},T\underline{e_{i}}\right\rangle =\left\langle T\underline{b},T
\underline{e_{i}}\right\rangle =\left\langle \underline{b},\underline{e_{i}}
\right\rangle =b_{i}
\end{equation*}
for all $i=1,\ldots ,k$. Hence $\underline{a}=\underline{b}$.\qed
\end{pf}
\begin{defn}
Let $\mathcal{M}$ and $\mathcal{N}$ be two subspaces of respectively $%
\mathcal{L}_{m}\left( \mathcal{B}\right) $ and $\mathcal{L}_{n}\left(
\mathcal{B}\right) $. A surjective isometry $T:\mathcal{M}\longrightarrow
\mathcal{N}$ is called an isomorphism, and then $\mathcal{M}$ and $\mathcal{%
N }$ are called isomorphic subspaces.
\end{defn}
\bigskip \qquad It is clear that the inverse of an isomorphism is an
isomorphism, and that the composition of two isomorphisms is again an
isomorphism. It follows that isomorphic is an equivalence relation. It is
also an important observation that isomorphisms map orthonormal bases to
orthonormal bases: if $\left\{ \underline{a_{1}},\ldots ,\underline{a_{n}}%
\right\} $ is an orthonormal basis for a subspace $\mathcal{M}$ and $T:%
\mathcal{M}\longrightarrow \mathcal{N}$ is an isomorphism then $\left\{ T%
\underline{a_{1}},\ldots ,T\underline{a_{n}}\right\} $ is an orthonormal set
since $T$ is an isometry (Lemma (\ref{IsometryChar})). Moreover, if $%
\underline{b}\in \mathcal{N}$ then there exists $\underline{c}\in \mathcal{M}
$ such that $T(\underline{c})=\underline{b}$. Since $\underline{c}%
=\sum_{i=1}^{n}c_{i}\underline{a_{i}}$ for some $c_{1},\ldots ,c_{n}\in
\mathcal{B}$ we conclude that $\underline{b}=\sum_{i=1}^{n}\underline{c_{i}}%
T(\underline{a_{i}})$. Hence $\left\{ T\underline{a_{1}},\ldots ,T\underline{%
a_{n}}\right\} $ is an orthonormal generating subset of $\mathcal{N}$, hence
a basis of $\mathcal{N}$.
\begin{thm}
\label{SubIso}If $\mathcal{M}$ is a subspace then there exists an $m\in
\mathbb{N}$ and an isomorphism $T:\mathcal{M}\longrightarrow \mathcal{L}
_{m}\left( \mathcal{B}\right) $. Moreover $T$ can be chosen to take
stochastic vectors to stochastic vectors, and if $\mathcal{M}$ is a
stochastic subspace then $T$ can be chosen so that $T$ and $T^{-1}$ map
stochastic vectors to stochastic vectors.
\end{thm}
\begin{pf}
Let $\left\{ \underline{e_{1}},\ldots ,\underline{e_{m}}\right\} $ be an
orthonormal basis for $\mathcal{M}$ and let us denote the canonical basis of
$\mathcal{L}_{m}\left( \mathcal{B}\right) $ by $\left\{ \underline{\delta
_{1}} ,\ldots ,\underline{\delta _{m}}\right\} $. We define $T:\mathcal{M}
\longrightarrow \mathcal{L}_{m}\left( \mathcal{B}\right) $ by setting for
all $\underline{a}\in \mathcal{M}$:
\begin{equation*}
T\underline{a}=\left( \left\langle \underline{a},\underline{e_{1}}\right\rangle ,\ldots
,\left\langle \underline{a},\underline{e_{m}}\right\rangle \right) \text{.}
\end{equation*}
Then $T$ is linear and $T\underline{e_{i}}=\underline{\delta _{i}}$ for $%
i\in \left\{ 1,\ldots ,m\right\} $. By Lemma (\ref{IsometryChar}), $T$ is an
isometry and $T$ is surjective by construction (if $\underline{b}=\left(
b_{1},\ldots ,b_{m}\right) \in \mathcal{L}_{m}\left( \mathcal{B}\right) $
then $T\left( \sum_{i=1}^{m}b_{i}\underline{e_{i}}\right) =\underline{b}$).
So $T$ is an isomorphism.
Moreover, $T$ preserves stochastic vectors. Indeed, let $\underline{a}$ be a
stochastic vector in $\mathcal{M}$. Let $n\in \mathbb{N}$ such that $%
\mathcal{M}$ is a subspace of $\mathcal{L}_{n}\left( \mathcal{B}\right) $.
Denote by $\left\{ \underline{\delta _{1}^{\prime }},\ldots ,\underline{
\delta _{n}^{^{\prime }}}\right\} $ the canonical orthonormal basis of $%
\mathcal{L}_{n}\left( \mathcal{B}\right) $. For $i=1,\ldots ,m$ then
\begin{equation*}
\left\langle \underline{a},\underline{e_{i}}\right\rangle =\left\langle
\underline{a},\sum_{r=1}^{n}\left\langle \underline{e_{i}},\underline{\delta
_{r}^{\prime }}\right\rangle \underline{\delta _{r}^{\prime }}\right\rangle
=\dbigvee\limits_{r=1}^{n}\left\langle \underline{e_{i}},\underline{\delta
_{r}^{\prime }}\right\rangle \left\langle \underline{a},\underline{\delta
_{r}^{\prime }}\right\rangle \text{.}
\end{equation*}
Hence, for $i\not=j$ and $i,j=1,\ldots ,m$ we have
\begin{eqnarray*}
\left\langle \underline{a},\underline{e_{i}}\right\rangle \left\langle
\underline{a},\underline{e_{j}}\right\rangle
&=&\dbigvee\limits_{r,s=1}^{n}\left\langle \underline{e_{i}},\underline{
\delta _{r}^{\prime }}\right\rangle \left\langle \underline{a},\underline{
\delta _{r}^{\prime }}\right\rangle \left\langle \underline{e_{j}},
\underline{\delta _{s}^{\prime }}\right\rangle \left\langle \underline{a},
\underline{\delta _{s}^{\prime }}\right\rangle \\
&=&\dbigvee\limits_{r=1}^{n}\left\langle \underline{e_{i}},\underline{\delta
_{r}^{\prime }}\right\rangle \left\langle \underline{a},\underline{\delta
_{r}^{\prime }}\right\rangle \left\langle \underline{e_{j}},\underline{
\delta _{r}^{\prime }}\right\rangle \text{ since }\underline{a}\text{ is
stochastic} \\
&\leq &\dbigvee\limits_{r=1}^{n}\left\langle \underline{e_{i}},\underline{
\delta _{r}^{\prime }}\right\rangle \left\langle \underline{e_{j}},
\underline{\delta _{r}^{\prime }}\right\rangle =\left\langle \underline{%
e_{i} },\underline{e_{j}}\right\rangle =0\text{.}
\end{eqnarray*}
Hence, by definition, $T\underline{a}$ is stochastic.
Now, it is easy to check that $T^{-1}\left( a_{1},\ldots ,a_{m}\right)
=\sum_{k=1}^{m}a_{k}\underline{e_{k}}$. Assume that $\mathcal{M}$ is
stochastic and that the basis $\left\{ \underline{e_{1}},\ldots ,\underline{%
e_{m}}\right\} $ is stochastic. If $\left( a_{1},\ldots ,a_{m}\right) \in
\mathcal{L}_{m}\left( \mathcal{B}\right) $ is stochastic, then for $%
r,s=1,\ldots ,m$:
\begin{eqnarray*}
\left\langle \sum_{k=1}^{m}a_{k}\underline{e_{k}},\underline{\delta
_{r}^{\prime }}\right\rangle \left\langle \sum_{k=1}^{m}a_{k}\underline{e_{k}%
},\underline{\delta _{s}^{\prime }}\right\rangle
&=&\dbigvee\limits_{k,l=1}^{m}a_{k}a_{l}\left\langle e_{k},\underline{\delta
_{r}^{\prime }}\right\rangle \left\langle e_{l},\underline{\delta
_{s}^{\prime }}\right\rangle \\
&=&\dbigvee\limits_{k=1}^{m}a_{k}\left\langle e_{k},\underline{\delta
_{r}^{\prime }}\right\rangle \left\langle e_{k},\underline{\delta
_{s}^{\prime }}\right\rangle \text{ as }\underline{a}\text{ is stochastic} \\
&=&\delta _{r}^{s}\dbigvee\limits_{k=1}^{m}a_{k}\left\langle e_{k},\underline{\delta
_{r}^{\prime }}\right\rangle
\end{eqnarray*}%
where $\delta _{r}^{s}$ is the Kronecker symbol; here we used that the basis
$\left\{ \underline{e_{1}},\ldots ,\underline{e_{m}}\right\} $ is stochastic.
Thus the coordinates of $T^{-1}\left( a_{1},\ldots ,a_{m}\right) $ are
pairwise disjoint; since each $\underline{e_{k}}$ is a unit vector, their
join is $\dbigvee\limits_{k=1}^{m}a_{k}=1$, so $T^{-1}\left( a_{1},\ldots
,a_{m}\right) $ is a stochastic vector. Hence $T^{-1}$ maps stochastic
vectors to stochastic vectors.\qed
\end{pf}
\begin{cor}
Any two orthonormal bases of a subspace $\mathcal{M}$ have the same
cardinality.
\end{cor}
\begin{pf}
Let $\mathcal{A=}\left\{ \underline{a_{1}},\ldots ,\underline{a_{m}}\right\}
$ and $\mathcal{B=}\left\{ \underline{b_{1}},\ldots ,\underline{b_{n}}
\right\} $ be two orthonormal bases of $\mathcal{M}$. By Theorem (\ref%
{SubIso}), there exists isomorphisms $T:\mathcal{M}\longrightarrow \mathcal{%
L }_{m}\left( \mathcal{B}\right) $ and $S:\mathcal{M}\longrightarrow
\mathcal{L }_{n}\left( \mathcal{B}\right) $. Hence $T\circ S^{-1}:\mathcal{L}%
_{n}\left( \mathcal{B}\right) \longrightarrow \mathcal{L}_{m}\left( \mathcal{%
B}\right) $ is an isomorphism. In particular, it maps orthonormal bases to
orthonormal bases. Hence $n=m$ by Theorem (\ref{Dimension}).\qed
\end{pf}
\bigskip \qquad We call the common cardinality of all orthonormal bases for
a subspace $\mathcal{M}$ the \emph{dimension of }$\mathcal{M}$. It follows
from Theorem (\ref{SubIso}) that if $\mathcal{M}$ has dimension $m$, then $%
\mathcal{M}$ is isomorphic to $\mathcal{L}_{m}\left( \mathcal{B}\right) $. A
source of examples of subspaces is given by:
\begin{prop}
\label{OrthoSub}For any $\underline{a_{1}}\in \mathcal{L}_{n}\left( \mathcal{%
\ B}\right) $ we denote by $\underline{a_{1}}^{\perp }$ the set
\begin{equation*}
\left\{ \underline{b}\in \mathcal{L}_{n}\left( \mathcal{B}\right)
:\left\langle \underline{a_{1}},\underline{b}\right\rangle =0\right\} \text{%
. }
\end{equation*}
If $\underline{a_{1}}$ is stochastic then $\underline{a_{1}}^{\perp }$ is a
stochastic subspace of $\mathcal{L}_{n}\left( \mathcal{B}\right) $ of
dimension $n-1$.
\end{prop}
\begin{pf}
Using Example (\ref{CyclicBasis}), we extend the stochastic vector $%
\underline{a_{1}}$ to an orthonormal basis $\left\{ \underline{a_{1}},\ldots
,\underline{a_{n}}\right\} $ of $\mathcal{L}_{n}\left( \mathcal{B}\right) $.
If $\underline{b}\perp \underline{a_{1}}$ then, writing $\underline{b}
=\sum_{i=1}^{n}b_{i}\underline{a_{i}}$ we see that $\left\langle \underline{%
b },\underline{a_{1}}\right\rangle =0$ if and only if $b_{1}=0$. Hence
\begin{equation*}
\underline{a_{1}}^{\perp }=\left\{ \sum_{i=2}^{n}b_{i}\underline{a_{i}}
:b_{2},\ldots ,b_{n}\in \mathcal{B}\right\}
\end{equation*}
is the subspace generated by the stochastic orthonormal set $\left\{
\underline{a_{2}},\ldots ,\underline{a_{n}}\right\} $ of cardinality $n-1$.
\qed
\end{pf}
\bigskip \qquad We are now ready to show:
\begin{thm}
\label{Incomplete}If $\mathcal{A}=\left\{ \underline{a_{1}},\ldots ,
\underline{a_{m}}\right\} $ is a stochastic orthonormal set in $\mathcal{L}
_{n}\left( \mathcal{B}\right) $ with $m<n$ then $\mathcal{A}$ can be
extended to an orthonormal basis for $\mathcal{L}_{n}\left( \mathcal{B}
\right) $.
\end{thm}
\begin{pf}
We proceed by induction on $n$. The result is trivial for $n=1$. Assume that
for some $n\in \mathbb{N}$, any stochastic orthonormal set of cardinality $%
m<n$ in $\mathcal{L}_{n}\left( \mathcal{B}\right) $ can be extended to a
basis for $\mathcal{L}_{n}\left( \mathcal{B}\right) $. Let $\mathcal{A=}%
\left\{ \underline{a_{1}},\ldots ,\underline{a_{m}}\right\} $ be a
stochastic orthonormal subset of $\mathcal{L}_{n+1}\left( \mathcal{B}\right)
$ with $m<n+1$. By Proposition (\ref{OrthoSub}) and Theorem (\ref{SubIso})
there exist an isomorphism $T:\underline{a_{1}}^{\perp }\longrightarrow
\mathcal{L}_{n}\left( \mathcal{B}\right) $ such that $T$ and $T^{-1}$
preserve stochastic vectors. Moreover, $\left\{ \underline{a_{2}},\ldots ,%
\underline{a_{m}}\right\} \subseteq \underline{a_{1}}^{\perp }$. Let $%
\underline{b_{i}}\in \mathcal{L}_{n}\left( \mathcal{B}\right) $ be given by $%
T\underline{a_{i}}=\underline{b_{i}}$ for $i=2,\ldots ,m$. It follows from
Lemma (\ref{IsometryChar}) that $\left\{ \underline{b_{2}},\ldots ,%
\underline{b_{m}}\right\} $ is an orthonormal set in $\mathcal{L}_{n}\left(
\mathcal{B}\right) $ of cardinal $m-1<n$. By our induction hypothesis, there
exist stochastic vectors $\underline{b_{m+1}},\ldots ,\underline{b_{n+1}}$
such that $\left\{ \underline{b_{2}},\ldots ,\underline{b_{n+1}}\right\} $
is an orthonormal basis for $\mathcal{L}_{n}\left( \mathcal{B}\right) $. By
Theorem (\ref{SubIso}), $\left\{ T^{-1}\underline{b_{2}},\ldots ,T^{-1}%
\underline{b_{n+1}}\right\} $ is a stochastic orthonormal set in $\mathcal{L}%
_{n+1}\left( \mathcal{B}\right) $ which is a basis for $\underline{a_{1}}%
^{\perp }$. Since $\underline{a_{i}}=T^{-1}\underline{b_{i}}$ for $%
i=2,\ldots ,m$, we conclude by Corollary (\ref{NSize}) that $\left\{
\underline{a_{1}},T^{-1}\underline{b_{2}},\ldots ,T^{-1}\underline{b_{n+1}}%
\right\} $ is an orthonormal basis of $\mathcal{L}_{n+1}\left( \mathcal{B}%
\right) $ which extends $\mathcal{A}$.\qed
\end{pf}
\bigskip \qquad It follows from Theorem (\ref{Incomplete}) that if $\mathcal{%
\ M}$ is a stochastic subspace of $\mathcal{L}_{n}\left( \mathcal{B}\right) $
then
\begin{equation*}
\mathcal{M}^{\perp }=\left\{ \underline{b}\in \mathcal{L}_{n}\left( \mathcal{%
\ B}\right) :\forall \underline{a}\in \mathcal{M}\ \ \ \ \underline{b}\perp
\underline{a}\right\}
\end{equation*}
is also a stochastic subspace and $\mathcal{L}_{n}\left( \mathcal{B}\right)
= \mathcal{M}+\mathcal{M}^{\perp }$. One can now study projection operators
and the order structure of subspaces but we leave this for later work.
\section{Stochastic and Unitary Matrices}
\bigskip \qquad In the sequel, a matrix on $\mathcal{L}_{n}\left( \mathcal{B}
\right) $ will mean an $n\times n$ Boolean matrix, and a vector in $\mathcal{%
L}_{n}\left( \mathcal{B}\right) $ will mean a Boolean vector and will be
identified with an $n\times 1$ column vector. Moreover, if $A$ is a matrix
then we denote the $(i,j)^{\text{th}}$ entry by $(A)_{ij}$, or simply $%
\left( A\right) _{i}$ if $A$ is a column vector.
\qquad Let $A$ be a matrix on $\mathcal{L}_{n}\left( \mathcal{B}\right) $.
Then the map $\underline{x}\in \mathcal{L}_{n}\left( \mathcal{B}\right)
\mapsto A\underline{x}$ is linear and will be identified with $A$. Indeed,
for all $\underline{b},\underline{c}\in \mathcal{L}_{n}\left( \mathcal{B}
\right) $, $c\in \mathcal{B}$, and $i=1,\ldots ,n$ we have
\begin{equation*}
\left( A\left( c\underline{b}\right) \right)
_{i}=\dbigvee\limits_{j=1}^{n}a_{ij}\left( c\underline{b}\right)
_{j}=\dbigvee\limits_{j=1}^{n}a_{ij}cb_{j}=c\dbigvee
\limits_{j=1}^{n}a_{ij}b_{j}=c\left( A\underline{b}\right) _{i}
\end{equation*}
and
\begin{eqnarray*}
\left( A\left( \underline{b}+\underline{c}\right) \right) _{i}
&=&\dbigvee\limits_{j=1}^{n}a_{ij}\left( \underline{b}+\underline{c}\right)
_{j}=\dbigvee\limits_{j=1}^{n}a_{ij}\left( b_{j}\vee c_{j}\right) \\
&=&\left( \dbigvee\limits_{j=1}^{n}a_{ij}b_{j}\right) \vee \left(
\dbigvee\limits_{j=1}^{n}a_{ij}c_{j}\right) \\
&=&\left( A\underline{b}\right) _{i}\vee \left( A\underline{c}\right)
_{i}=\left( A\underline{b}+A\underline{c}\right) _{i}\text{.}
\end{eqnarray*}
\qquad Conversely, any operator $T$ on $\mathcal{L}_{n}\left( \mathcal{B}
\right) $ can be represented by a matrix on $\mathcal{L}_{n}\left( \mathcal{%
B }\right) $ with respect to the canonical basis. Indeed, define $%
a_{ij}=\left\langle T\underline{\delta _{j}},\underline{\delta _{i}}
\right\rangle $ for all $i,j=1,\ldots ,n$. Then $T\underline{\delta _{j}}
=\sum_{i=1}^{n}a_{ij}\underline{\delta _{i}}$. Defining the matrix $A_{T}= %
\left[ a_{ij}\right] _{n\times n}$ we have
\begin{equation*}
\left( A_{T}\underline{\delta _{i}}\right)
_{k}=\dbigvee\limits_{j=1}^{n}a_{kj}\left( \underline{\delta _{i}}\right)
_{j}=\dbigvee\limits_{j=1}^{n}a_{kj}\delta _{ji}=a_{ki}=\left( T\underline{
\delta _{i}}\right) _{k}
\end{equation*}
for all $i,k=1,\ldots ,n$ and it follows that the action of $A_{T}$ is given
by $T$. The matrix $A_{T}$ is called the \emph{matrix corresponding to }$T$
in the canonical basis of $\mathcal{L}_{n}\left( \mathcal{B}\right) $. If $%
A= \left[ a_{ij}\right] _{n\times n}$ is a matrix on $\mathcal{L}_{n}\left(
\mathcal{B}\right) $ then its transpose $\left[ a_{ji}\right] _{n\times n}$
is denoted by $A^{\ast }$.
\qquad It is straightforward to check that if $T:\mathcal{L}_{n}\left(
\mathcal{B}\right) \longrightarrow \mathcal{L}_{m}\left( \mathcal{B}\right) $
and $S:\mathcal{L}_{m}\left( \mathcal{B}\right) \longrightarrow \mathcal{L}
_{k}\left( \mathcal{B}\right) $ then the matrix of $S\circ T$ is given by
the product $A_{S}A_{T}$, the matrix of $\lambda T$ for $\lambda \in
\mathcal{B}$ is given by $\lambda A_{T}$ and if $S:\mathcal{L}_{n}\left(
\mathcal{B}\right) \longrightarrow \mathcal{L}_{m}\left( \mathcal{B}\right) $
then the matrix of $S+T$ is $A_{S}+A_{T}$. Moreover, for all $\underline{a}
\in \mathcal{L}_{n}\left( \mathcal{B}\right) $ and $\underline{b}\in
\mathcal{L}_{m}\left( \mathcal{B}\right) $ we check that $\left\langle T
\underline{a},\underline{b}\right\rangle =\left\langle \underline{a},T^{\ast
}\underline{b}\right\rangle $ where $T^{\ast }:\mathcal{L}_{m}\left(
\mathcal{B}\right) \longrightarrow \mathcal{L}_{n}\left( \mathcal{B}\right) $
is the linear map of matrix $A_{T}^{\ast }$ (and where we use the same
notation for the inner products on $\mathcal{L}_{n}\left( \mathcal{B}\right)
$ and $\mathcal{L}_{m}\left( \mathcal{B}\right) $). Thus, linear maps always
have an adjoint. It is routine to check that the adjoint is unique. We thus
have, as with standard linear algebra, a natural isomorphism between the
*-algebra of linear maps and the *-algebra of Boolean matrices.
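\qquad The defining property of the adjoint can be verified directly:
writing $A_{T}=\left[ a_{ij}\right] _{m\times n}$ for the matrix of $T:%
\mathcal{L}_{n}\left( \mathcal{B}\right) \longrightarrow \mathcal{L}
_{m}\left( \mathcal{B}\right) $, and taking $\underline{x}\in \mathcal{L}
_{n}\left( \mathcal{B}\right) $ and $\underline{y}\in \mathcal{L}_{m}\left(
\mathcal{B}\right) $, we have
\begin{equation*}
\left\langle T\underline{x},\underline{y}\right\rangle
=\dbigvee\limits_{i=1}^{m}\left( \dbigvee\limits_{j=1}^{n}a_{ij}x_{j}\right)
y_{i}=\dbigvee\limits_{j=1}^{n}x_{j}\left(
\dbigvee\limits_{i=1}^{m}a_{ij}y_{i}\right) =\left\langle \underline{x}
,T^{\ast }\underline{y}\right\rangle \text{.}
\end{equation*}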
\qquad Invertibility of Boolean matrices was studied in \cite%
{Luce52,Rutherford63,Skornyakov86,Yoeli61} and the following result is
well-known. We present here a short proof which relies upon our previous
work with orthonormal bases, and we generalize the invertibility result to show
that invertible rectangular matrices have to be square. Note that if a
matrix $A$ is invertible, then its columns and its rows both form generating
families. We now show that these families are actually orthonormal bases of $%
\mathcal{L}_{n}\left( \mathcal{B}\right) $ and therefore are stochastic.
\begin{thm}
\label{Inverse}Let $A$ be an $n\times m$ Boolean matrix. The following are
equivalent:
\begin{enumerate}
\item $A$ is invertible, i.e. there exists a (necessarily unique) $m\times n$
Boolean matrix $A^{-1}$ such that $A^{-1}A=I_{m}$ and $AA^{-1}=I_{n}$,
\item $A$ is unitary, i.e. $n=m$ and $AA^{\ast }=A^{\ast }A=I_{n}$,
\item The columns of $A$ form an orthonormal basis for $\mathcal{L}
_{n}\left( \mathcal{B}\right) $,
\item The rows of $A$ form an orthonormal basis of $\mathcal{L}_{m}\left(
\mathcal{B}\right) $.
\end{enumerate}
In particular, if any of 1-4 holds, then $n=m$.
\end{thm}
\begin{pf}
Assume (3) holds. Then by Theorem (\ref{Dimension}), there are $n$ columns
of $A$ and thus $n=m$. By Theorem (\ref{Duality}), the rows of $A$ are a
basis for $\mathcal{L}_{n}(\mathcal{B)}$ as well, so (4) holds. The same
reasoning shows that (4) implies (3) and in particular $n=m$ again.
Moreover, let us denote the columns of $A$ by $\underline{a_{1}},\ldots ,
\underline{a_{m}}$ and the rows of $A$ by $\underline{r_{1}},\ldots ,
\underline{r_{n}}$. By construction $A^{\ast }A=\left[ \left\langle
\underline{a_{i}},\underline{a_{j}}\right\rangle \right] _{m\times m}$ and $%
AA^{\ast }=\left[ \left\langle \underline{r_{i}},\underline{r_{j}}
\right\rangle \right] _{n\times n}$ so $A$ is unitary if and only if both
(3) and (4) hold. Since (3) and (4) are equivalent and imply $n=m$, either
implies (2).
Assume now that $A$ is invertible and write $A=\left[ a_{ij}\right]
_{n\times m}$ and $A^{-1}=\left[ b_{ij}\right] _{m\times n}$. Then $%
A^{-1}A=I_{m}$ and $AA^{-1}=I_{n}$ implies that $\dbigvee
\limits_{j=1}^{m}a_{ij}=\dbigvee\limits_{j=1}^{n}b_{ij}=1$ and $%
b_{ki}a_{ij}=0$ ($k\not=j\in \left\{ 1,\ldots ,m\right\} $ and $i\in \left\{
1,\ldots ,n\right\} $) and $a_{ik}b_{kj}=0$ ($i\not=j\in \left\{ 1,\ldots
,n\right\} $ and $k\in \left\{ 1,\ldots ,m\right\} $). Moreover if $i\in
\left\{ 1,\ldots ,n\right\} $ and $j\not=k\in \left\{ 1,\ldots ,m\right\} $
then:
\begin{equation*}
a_{ij}a_{ik}=\left( \dbigvee\limits_{s=1}^{n}b_{ks}a_{ij}\right)
a_{ik}=\left( \dbigvee\limits_{\substack{ s=1 \\ s\not=i}}
^{n}a_{ij}b_{ks}\right) a_{ik}\leq \left(
\dbigvee\limits_{s=1,s\not=i}^{n}a_{ik}b_{ks}\right) =0\text{.}
\end{equation*}
Hence the columns of $A$ are pairwise orthogonal; they are also unit
vectors since $\dbigvee\limits_{i=1}^{n}a_{ij}\geq \dbigvee
\limits_{i=1}^{n}b_{ji}a_{ij}=\left( A^{-1}A\right) _{jj}=1$. A symmetric
computation, using $\dbigvee\limits_{s=1}^{n}b_{js}=1$ together with $%
AA^{-1}=I_{n}$, gives $a_{ij}a_{kj}=0$ for $i\not=k$, so the columns of $%
A^{\ast }$ form an orthonormal subset of $\mathcal{L}_{m}\left( \mathcal{B}%
\right) $. By Theorem (\ref{Duality}), the columns of $A$ then form an
orthonormal basis of $\mathcal{L}_{n}\left( \mathcal{B}\right) $. So (1)
implies (3) and the proof is complete.\qed
\end{pf}
\qquad As a consequence of Theorem (\ref{Inverse}), we see that invertible
operators are always isomorphisms by Lemma (\ref{IsometryChar}), since they
map the canonical basis to the orthonormal basis of their column vectors.
\bigskip Theorem (\ref{Inverse}) allows us to establish the following
remarkable fact: bases, as per Definition (\ref{Basis}), are necessarily
orthonormal, hence of cardinality the dimension of the Boolean vector space.
Thus, for Boolean vector spaces, being a basis in a traditional sense is the
same as being an orthonormal basis.
\begin{thm}
If $\mathcal{A=}\left\{ \underline{a_{1}},\ldots ,\underline{a_{m}}\right\} $
is a basis for $\mathcal{L}_{n}\left( \mathcal{B}\right) $ then $n=m$ and $%
\mathcal{A}$ is an orthonormal basis.
\end{thm}
\begin{pf}
Define
\begin{equation*}
T:\left\vert
\begin{array}{ccc}
\mathcal{L}_{n}\left( \mathcal{B}\right) & \longrightarrow & \mathcal{L}
_{m}\left( \mathcal{B}\right) \\
\underline{b} & \longmapsto & \left( b_{1},\ldots ,b_{m}\right)%
\end{array}
\right.
\end{equation*}
where $\underline{b}=\sum b_{i}\underline{a_{i}}$. Now $T$ is a linear
bijection. Denote the inverse of $T$ by $S$. It is easily checked that $S$
is a linear bijection and $ST=I_{n}$ and $TS=I_{m}$. We conclude from
Theorem (\ref{Inverse}) that $n=m$ and that the matrix $A_{S}$ of $S$ is
unitary. It is easily checked that $S\underline{\delta _{i}}=\underline{%
a_{i} }$ for $i=1,\ldots ,n$, i.e. the columns of $A_{S}$ are the vectors $%
\underline{a_{1}},\ldots ,\underline{a_{n}}$ which by Theorem (\ref{Inverse}
) form an orthonormal basis.\qed
\end{pf}
\qquad We record the following observation as well:
\begin{cor}
Let $T:\mathcal{L}_{n}\left( \mathcal{B}\right) \longrightarrow \mathcal{L}
_{m}\left( \mathcal{B}\right) $ be a linear bijection. Then $n=m$ and $T$ is
an isomorphism.
\end{cor}
\bigskip \qquad In view of Theorem (\ref{Inverse}), we introduce a type of
matrix which will be of great interest to us in the next section. First,
given $A,B$ two $n\times n$ matrices, we shall say that $A\leq B$ when $%
\left\langle A\underline{a},\underline{b}\right\rangle \leq \left\langle B
\underline{a},\underline{b}\right\rangle $ for all $\underline{a},\underline{
b}\in \mathcal{L}_{n}\left( \mathcal{B}\right) $. The relation $\leq $ is
easily seen to be an order on the set of $n\times n$ matrices. It is shown
in \cite{Luce52} that $\left[ a_{ij}\right] _{n\times n}\leq \left[ b_{ij} %
\right] _{n\times n}$ if and only $a_{ij}\leq b_{ij}$ for all $i,j\in
\left\{ 1,\ldots ,n\right\} $. Now we set:
\begin{defn}
A matrix $A$ is stochastic when $A^{\ast }A\geq I$ and $AA^{\ast }\leq I$.
\end{defn}
\qquad It is shown in \cite{Luce52} that products of stochastic matrices are
stochastic matrices, and that a matrix is stochastic if and only if it maps
stochastic vectors to stochastic vectors, or equivalently when its columns
are stochastic vectors.
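\qquad One direction can be checked directly: if the columns of $A$ are
stochastic and $\underline{x}$ is a stochastic vector, then for $i\not=k$ we
have
\begin{equation*}
\left( A\underline{x}\right) _{i}\left( A\underline{x}\right)
_{k}=\dbigvee\limits_{j,l=1}^{n}a_{ij}a_{kl}x_{j}x_{l}=\dbigvee
\limits_{j=1}^{n}a_{ij}a_{kj}x_{j}=0\text{,}
\end{equation*}
while $\dbigvee\limits_{i=1}^{n}\left( A\underline{x}\right)
_{i}=\dbigvee\limits_{j=1}^{n}x_{j}\dbigvee\limits_{i=1}^{n}a_{ij}=1$, so $A%
\underline{x}$ is again stochastic.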
\qquad Note that $A$ is unitary, or equivalently invertible, if and only if $%
A$ and $A^{\ast }$ are both stochastic. So unitarity is the same as
bi-stochasticity. As an interesting observation, if we call a matrix $A$
symmetric when $A^{\ast }=A$, then a symmetric stochastic matrix is always a
unitary of order 2, namely $A^{2}=I$. Conversely, if $A^{2}=I$ then $A$ is
invertible with $A^{-1}=A$, hence $A=A^{\ast }$ by Theorem (\ref{Inverse}),
so symmetric stochastic matrices are exactly the unitaries of order 2, i.e.
reflections.
\qquad We have encountered such matrices before. Example (\ref{CyclicBasis})
shows how to obtain such reflections. Let $\underline{a}=\left( a_{1},\ldots
,a_{n}\right) $ be a stochastic vector. Then the matrix
\begin{equation*}
A=\left[
\begin{array}{cccc}
a_{1} & a_{2} & \cdots & a_{n} \\
a_{2} & a_{3} & \cdots & a_{1} \\
\vdots & \vdots & & \vdots \\
a_{n} & a_{1} & \cdots & a_{n-1}%
\end{array}
\right]
\end{equation*}
is symmetric and stochastic.
\qquad Note however that the product of reflections need not be a
reflection, as the product of the reflections $\left[
\begin{array}{ccc}
0 & 1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 1%
\end{array}
\right] $ and $\left[
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & 1 \\
0 & 1 & 0%
\end{array}
\right] $ is given by $\left[
\begin{array}{ccc}
0 & 0 & 1 \\
1 & 0 & 0 \\
0 & 1 & 0%
\end{array}
\right] $ which is not a reflection.
\section{Invariant Vectors}
\qquad Eigenvalues and eigenvectors of Boolean matrices have been previously
studied \cite{Blyth67,Rutherford63b,Sindak75,Tan98}. Though invariant
vectors are a special case of eigenvectors, as far as we know the results in
this section are new.
\qquad The following consequence of Lemma (\ref{Redux1}) will be used.
\begin{lem}
\label{Redux2}If $\underline{a}\in \mathcal{L}_{n}\left( \mathcal{B}\right) $
then there exists an orthovector $\underline{b}\in \mathcal{L}_{n}\left(
\mathcal{B}\right) $ such that $\left\Vert \underline{b}\right\Vert
=\left\Vert \underline{a}\right\Vert $ and $\underline{b}\leq \underline{a}$.
\end{lem}
\begin{pf}
Apply Lemma (\ref{Redux1}) with the interval $\left[ 0,\left\Vert
\underline{a}\right\Vert \right] $ of $\mathcal{B}$ in lieu of $\mathcal{B}$.\qed
\end{pf}
\bigskip Let $A_{1},\ldots ,A_{m}$ be matrices on $\mathcal{L}_{n}\left(
\mathcal{B}\right) $ with $A_{k}=\left[ a_{ij}^{k}\right] _{n\times n}$ ($%
k=1,\ldots ,m$). The \emph{joint trace} of $A_{1},\ldots ,A_{m}$ is
\begin{equation*}
\limfunc{tr}\left( A_{1},\ldots ,A_{m}\right)
=\dbigvee\limits_{i=1}^{n}a_{ii}^{1}a_{ii}^{2}\ldots a_{ii}^{m}\text{.}
\end{equation*}
In particular, the \emph{trace} of $A=\left[ a_{ij}\right] _{n\times n}$ is
given by $\limfunc{tr}\left( A\right) =\dbigvee\limits_{i=1}^{n}a_{ii}$. A
vector $\underline{b}$ is an \emph{invariant vector} for $A$ if $A\underline{
b}=\underline{b}$, and more generally a \emph{common invariant vector }of $%
A_{1},\ldots ,A_{m}$ if $A_{i}\underline{b}=\underline{b}$ for $i=1,\ldots
,m $.
\begin{lem}
Let $A,B$ be two matrices on $\mathcal{L}_{n}\left( \mathcal{B}\right) $.
Then
\begin{enumerate}
\item $\limfunc{tr}\left( AB\right) =\limfunc{tr}(BA)$,
\item If $B$ is invertible then $\limfunc{tr}\left( BAB^{\ast }\right) =
\limfunc{tr}(A)$.
\end{enumerate}
\end{lem}
\begin{pf}
We compute
\begin{equation*}
\limfunc{tr}(AB)=\dbigvee\limits_{i=1}^{n}\left( AB\right)
_{ii}=\dbigvee\limits_{i=1}^{n}\dbigvee\limits_{k=1}^{n}a_{ik}b_{ki}=
\dbigvee\limits_{k=1}^{n}\dbigvee\limits_{i=1}^{n}b_{ki}a_{ik}=\dbigvee
\limits_{k=1}^{n}\left( BA\right) _{kk}=\limfunc{tr}\left( BA\right) \text{.}
\end{equation*}
If $B$ is invertible then $B^{-1}=B^{\ast }$ by Theorem (\ref{Inverse}) and
thus (1) implies (2).\qed
\end{pf}
\begin{thm}
\label{StoInv}Stochastic matrices $A_{1},\ldots ,A_{m}$ on $\mathcal{L}
_{n}\left( \mathcal{B}\right) $ have a common invariant stochastic vector if
and only if $\limfunc{tr}\left( A_{1},\ldots ,A_{m}\right) =1$.
\end{thm}
\begin{pf}
Suppose $\underline{b}$ is a stochastic vector and $A_{i}\underline{b}=%
\underline{b}$ for $i=1,\ldots ,m$. Then $\dbigvee%
\limits_{j=1}^{n}a_{ij}^{k}b_{j}=b_{i}$ for $k=1,\ldots ,m$ and $i=1,\ldots
,n$. Multiplying both sides by $b_{i}$ and since $\underline{b}$ is
stochastic, we obtain $a_{ii}^{k}b_{i}=b_{i}$. Hence, $b_{i}\leq a_{ii}^{k}$%
, $k=1,\ldots ,m$, so $b_{i}\leq a_{ii}^{1}a_{ii}^{2}\ldots a_{ii}^{m}$.
Therefore
\begin{equation*}
\limfunc{tr}\left( A_{1},\ldots ,A_{m}\right)
=\dbigvee\limits_{i=1}^{n}a_{ii}^{1}a_{ii}^{2}\ldots a_{ii}^{m}\geq
\dbigvee\limits_{i=1}^{n}b_{i}=1\text{.}
\end{equation*}%
Conversely, suppose $\limfunc{tr}\left( A_{1},\ldots ,A_{m}\right)
=\dbigvee\limits_{i=1}^{n}a_{ii}^{1}a_{ii}^{2}\ldots a_{ii}^{m}=1$. By
Lemma\ (\ref{Redux1}), there exists a stochastic vector $\underline{b}%
=\left( b_{1},\ldots ,b_{n}\right) $ such that $b_{j}\leq
a_{jj}^{1}a_{jj}^{2}\ldots a_{jj}^{m}$. Since $b_{j}\leq a_{jj}^{k}$ ($%
k=1,\ldots ,m$) and $A_{k}$ is stochastic, we have that $a_{ij}^{k}b_{j}=0$
for $i\not=j$, $i,j=1,\ldots ,n$ and $k=1,\ldots ,m$. Hence
\begin{equation*}
\left( A_{k}\underline{b}\right)
_{i}=\dbigvee\limits_{j=1}^{n}a_{ij}^{k}b_{j}=a_{ii}^{k}b_{i}=b_{i}\text{.}
\end{equation*}%
Therefore, $A_{k}\underline{b}=\underline{b}$ ($k=1,\ldots ,m$) so $%
\underline{b}$ is a common invariant stochastic vector for $A_{1},\ldots
,A_{m}$.\qed
\end{pf}
\begin{cor}
\label{StoInv1}A stochastic matrix $A$ has an invariant stochastic vector if
and only if $\limfunc{tr}(A)=1$.
\end{cor}
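\qquad For instance, let $\mathcal{B}$ be the power set of $\left\{
1,2,3\right\} $ and consider the stochastic matrix
\begin{equation*}
A=\left[
\begin{array}{cc}
\left\{ 1,2\right\} & \left\{ 1\right\} \\
\left\{ 3\right\} & \left\{ 2,3\right\}%
\end{array}
\right] \text{.}
\end{equation*}
Then $\limfunc{tr}\left( A\right) =\left\{ 1,2\right\} \vee \left\{
2,3\right\} =1$, and the construction of Lemma (\ref{Redux1}) applied to the
diagonal produces the stochastic vector $\underline{b}=\left( \left\{
1,2\right\} ,\left\{ 3\right\} \right) $; a direct computation confirms that
$A\underline{b}=\underline{b}$.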
\begin{cor}
If $A$ is a stochastic matrix and $B$ is invertible on $\mathcal{L}
_{n}\left( \mathcal{B}\right) $ then $A$ has an invariant stochastic vector
if and only if $BAB^{\ast }$ does.
\end{cor}
\begin{cor}
\label{StoInv3}A stochastic vector $\underline{b}=\left( b_{1},\ldots
,b_{n}\right) $ is a common invariant vector for stochastic matrices $%
A_{1},\ldots ,A_{m}$ if and only if $b_{i}\leq a_{ii}^{1}a_{ii}^{2}\ldots
a_{ii}^{m}$ for all $i=1,\ldots ,n$.
\end{cor}
\qquad Stochastic matrices $A_{1},\ldots ,A_{m}$ on $\mathcal{L}_{n}\left(
\mathcal{B}\right) $ are \emph{simultaneously reducible} if there exists an
invertible matrix $B$ on $\mathcal{L}_{n}\left( \mathcal{B}\right) $ and
matrices $C_{1},\ldots ,C_{m}$ on $\mathcal{L}_{n-1}\left( \mathcal{B}
\right) $ such that for $i=1,\ldots ,m$ we have
\begin{equation*}
A_{i}=B\left[
\begin{array}{cc}
1 & 0 \\
0 & C_{i}%
\end{array}
\right] B^{\ast }\text{.}
\end{equation*}
Notice that the matrices $C_{1},\ldots ,C_{m}$ are stochastic since $B^{\ast
}A_{i}B=\left[
\begin{array}{cc}
1 & 0 \\
0 & C_{i}%
\end{array}
\right] $. In particular, if there is only one matrix $A$ in the above
definition, we say that $A$ is \emph{reducible.}
\begin{thm}
\label{UnitReduce}Unitary matrices $A_{1},\ldots ,A_{m}$ on $\mathcal{\ L}
_{n}\left( \mathcal{B}\right) $ are simultaneously reducible if and only if $%
\limfunc{tr}\left( A_{1},\ldots ,A_{m}\right) =1$.
\end{thm}
\begin{pf}
If $A_{1},\ldots ,A_{m}$ are simultaneously reducible then $A_{i}=B\left[
\begin{array}{cc}
1 & 0 \\
0 & C_{i}%
\end{array}
\right] B^{\ast }$ for some invertible matrix $B$ and some matrix $C_{i}$, $%
i=1,\ldots ,m$. Since $B$ is unitary, $B\underline{\delta _{1}}$ is
stochastic and
\begin{equation*}
A_{i}\left( B\underline{\delta _{1}}\right) =B\left[
\begin{array}{cc}
1 & 0 \\
0 & C_{i}%
\end{array}
\right] \underline{\delta _{1}}=B\underline{\delta _{1}}
\end{equation*}
for $i=1,\ldots ,m$. Hence, $A_{1},\ldots ,A_{m}$ have a common invariant
vector, and thus by Theorem (\ref{StoInv}) we have $\limfunc{tr}
(A_{1},\ldots ,A_{m})=1$.
Conversely, assume that $\limfunc{tr}\left( A_{1},\ldots ,A_{m}\right) =1$.
Then $A_{1},\ldots ,A_{m}$ have a common stochastic invariant vector $%
\underline{b}=\left( b_{1},\ldots ,b_{n}\right) $ by Theorem (\ref{StoInv}).
We define the symmetric stochastic matrix $B$ by
\begin{equation*}
B=\left[
\begin{array}{ccccc}
b_{1} & b_{2} & b_{3} & \cdots & b_{n} \\
b_{2} & b_{2}^{c} & 0 & \cdots & 0 \\
b_{3} & 0 & b_{3}^{c} & \cdots & 0 \\
\vdots & & & & \\
b_{n} & 0 & 0 & \cdots & b_{n}^{c}%
\end{array}
\right] \text{.}
\end{equation*}
Let $D_{i}=BA_{i}B$ for $i=1,\ldots ,m$. With the notation $A_{k}=\left[
a_{ij}^{k}\right] _{n\times n}$, we compute the $(1,1)$ entry of $D_{i}$ as
\begin{equation*}
\dbigvee\limits_{j=1}^{n}b_{1j}\left(
\dbigvee\limits_{r=1}^{n}a_{jr}^{i}b_{r1}\right)
=\dbigvee\limits_{j=1}^{n}b_{j}\left(
\dbigvee\limits_{r=1}^{n}a_{jr}^{i}b_{r}\right)
=\dbigvee\limits_{j=1}^{n}b_{j}b_{j}=1\text{.}
\end{equation*}
Since a product of unitary matrices is unitary, $D_{i}$ is a unitary matrix
and thus must have the form
\begin{equation*}
D_{i}=\left[
\begin{array}{cc}
1 & 0 \\
0 & C_{i}%
\end{array}
\right]
\end{equation*}
for some matrix $C_{i}$ ($i=1,\ldots ,m$). Since $A_{i}=BD_{i}B$ for $%
i=1,\ldots ,m$, we are finished.\qed
\end{pf}
\begin{cor}
\label{UnitReduce1}A unitary matrix $A$ is reducible if and only if $%
\limfunc{tr}(A)=1$.
\end{cor}
\qquad We now give an example to show that Theorem (\ref{UnitReduce}) does
not hold for stochastic matrices. Consider the stochastic matrix $A=\left[
\begin{array}{cc}
1 & 1 \\
0 & 0%
\end{array}
\right] $. It is of trace $1$, yet if it were reducible then there would exist a
unitary $B$ such that $A=B\left[
\begin{array}{cc}
1 & 0 \\
0 & 1%
\end{array}
\right] B^{\ast }=I$ which is a contradiction.
\qquad Notice that if $A$ is unitary and $\underline{b}$ is an invariant vector
for $A$, then $\underline{b}$ is also an invariant vector for $A^{\ast }$.
Indeed, $A\underline{b}=\underline{b}$ implies that $A^{\ast }\underline{b}
=A^{\ast }A\underline{b}=\underline{b}$.
\bigskip \qquad We now give an example that motivates the next result. Let $%
A=\left[ a_{ij}\right] _{3\times 3}$ be a $3\times 3$ symmetric stochastic
matrix. We shall show that $A$ has an invariant stochastic vector and hence $%
A$ is reducible. Indeed, we have that
\begin{eqnarray*}
a_{11}^{c}a_{22}^{c}a_{33}^{c} &=&\left( a_{12}\vee a_{13}\right) \left(
a_{12}\vee a_{32}\right) \left( a_{13}\vee a_{23}\right) \\
&=&\left( a_{12}\vee a_{13}\right) \left( a_{12}\vee a_{23}\right) \left(
a_{13}\vee a_{23}\right) \\
&=&\left( a_{12}a_{12}\vee a_{12}a_{23}\vee a_{13}a_{12}\vee
a_{13}a_{23}\right) \left( a_{13}\vee a_{23}\right) \\
&=&a_{12}\left( a_{13}\vee a_{23}\right) =0\text{.}
\end{eqnarray*}
Thus $\limfunc{tr}\left( A\right) =\left(
a_{11}^{c}a_{22}^{c}a_{33}^{c}\right) ^{c}=0^{c}=1$ so the result follows
from Corollaries (\ref{StoInv1}) and (\ref{UnitReduce1}). The next theorem
generalizes this calculation.
\begin{thm}
\label{OddInv}If $A$ is an $n\times n$ symmetric stochastic matrix with $n$
odd, then $A$ has an invariant stochastic vector.
\end{thm}
\begin{pf}
Since $A=\left[ a_{ij}\right] _{n\times n}$ is symmetric, we have that
\begin{eqnarray*}
a_{11}^{c}a_{22}^{c}\ldots a_{nn}^{c} &=&\left( a_{12}\vee a_{13}\vee \ldots
\vee a_{1n}\right) \left( a_{12}\vee a_{23}\vee \ldots \vee a_{2n}\right) \\
&&\ldots \left( a_{1n}\vee a_{2n}\vee \ldots \vee a_{n-1,n}\right) \text{.}
\end{eqnarray*}
Since $A$ is stochastic, we conclude that if we expand the right-hand side,
the only nonzero terms are of the form $a_{ij}a_{ij}a_{rs}a_{rs}\ldots
a_{uv}a_{uv}$ with $i\not=r$, $r\not=u$ and so on. By construction, there
are $n$ factors in this product. This would imply that $n$ must be even.
This is a contradiction, so all terms in the expansion are zero and thus
\begin{equation*}
\limfunc{tr}\left( A\right) =\left( a_{11}^{c}a_{22}^{c}\ldots
a_{nn}^{c}\right) ^{c}=1\text{.}
\end{equation*}
The result follows from Corollary (\ref{StoInv1}).\qed
\end{pf}
\bigskip \qquad We now show that Theorem\ (\ref{OddInv}) does not hold if $n$
is even. Consider the stochastic symmetric matrix $A=\left[
\begin{array}{cc}
0 & 1 \\
1 & 0%
\end{array}
\right] $. Then $\limfunc{tr}(A)=0$ so $A$ has no stochastic invariant
vector. Now, generalizing, we see that if $B$ is a $k\times k$ stochastic
symmetric matrix, then $\left[
\begin{array}{cc}
0 & B \\
B & 0%
\end{array}
\right] $ has trace $0$ and thus has no invariant stochastic vector. Thus,
for all even $n$ there exists a stochastic symmetric $n\times n$ matrix with
no invariant stochastic vector.
\bigskip \qquad We can find more invariant stochastic vectors in a natural
way. An \emph{invariant orthogonal set} for matrices $A_{1},\ldots ,A_{m}$
on $\mathcal{L}_{n}\left( \mathcal{B}\right) $ is a set of mutually
orthogonal invariant vectors for $A_{1},\ldots ,A_{m}$. For example, if $%
\underline{b},\underline{c}$ are stochastic vectors, then $\left\{
\underline{b},\underline{c}\right\} $ is an invariant orthogonal set for the
unitary matrix $A$ if and only if $c_{i}\leq a_{ii}b_{i}^{c}$ and $b_{i}\leq
a_{ii}c_{i}^{c}$ for $i=1,\ldots ,n$.
\begin{thm}
\label{UnitaryReduxMultiple}A unitary matrix $A$ possesses an invariant
orthogonal set of $m$ stochastic vectors if and only if there exists an
invertible matrix $B$ such that
\begin{equation*}
A=B\left[
\begin{array}{cc}
I_{m} & 0 \\
0 & C%
\end{array}
\right] B^{\ast }
\end{equation*}
where $I_{m}$ is the identity operator on $\mathcal{L}_{m}\left( \mathcal{B}
\right) $.
\end{thm}
\begin{pf}
Suppose $A$ is an $n\times n$ matrix with the given form. Then $m\leq n$ and
we can define $\underline{b_{j}}=B\underline{\delta _{j}}$, $j=1,\ldots ,m$.
We conclude from Theorem (\ref{Inverse}) that $\underline{b_{1}},\ldots ,
\underline{b_{m}}$ are stochastic vectors and we have $A\underline{b_{j}}=
\underline{b_{j}}$ for $j=1,\ldots ,m$ by construction. Moreover, for $%
i\not=j$ we have
\begin{equation*}
\left\langle \underline{b_{i}},\underline{b_{j}}\right\rangle =\left\langle
B \underline{\delta _{i}},B\underline{\delta _{j}}\right\rangle
=\left\langle B^{\ast }B\underline{\delta _{i}},\underline{\delta _{j}}%
\right\rangle =\left\langle \underline{\delta _{i}},\underline{\delta _{j}}%
\right\rangle =0 \text{.}
\end{equation*}
Hence $\left\{ \underline{b_{1}},\ldots ,\underline{b_{m}}\right\} $ is an
invariant orthogonal set of stochastic vectors.
\qquad Conversely, suppose that $A$ possesses an invariant orthogonal set of
stochastic vectors $\left\{ \underline{b_{1}},\ldots ,\underline{b_{m}}%
\right\} $ and write $\underline{b_{j}}=\left( b_{1j},\ldots ,b_{nj}\right) $
for $j=1,\ldots ,m$. Letting
\begin{equation*}
B_{1}=\left[
\begin{array}{ccccc}
b_{11} & b_{21} & b_{31} & \cdots & b_{n1} \\
b_{21} & b_{21}^{c} & 0 & \cdots & 0 \\
b_{31} & 0 & b_{31}^{c} & \cdots & 0 \\
\vdots & & & & \vdots \\
b_{n1} & 0 & \cdots & 0 & b_{n1}^{c}%
\end{array}%
\right]
\end{equation*}%
and $D_{1}=B_{1}AB_{1}$ as in the proof of Theorem (\ref{UnitReduce}), we
have that
\begin{equation*}
D_{1}=\left[
\begin{array}{cc}
1 & 0 \\
0 & C_{1}%
\end{array}%
\right]
\end{equation*}%
where $C_{1}$ is a stochastic matrix and $A=B_{1}D_{1}B_{1}$. Letting $C_{1}=%
\left[ c_{ij}\right] _{(n-1)\times (n-1)}$ and $D_{1}=\left[ d_{ij}\right]
_{n\times n}$ we have
\begin{eqnarray*}
c_{11} &=&d_{22}=\dbigvee\limits_{j=1}^{2}b_{2j}\left(
\dbigvee\limits_{k=1}^{2}a_{jk}b_{k2}\right) \\
&=&b_{21}\left( a_{11}b_{21}\vee a_{12}b_{21}^{c}\right) \vee
b_{21}^{c}\left( a_{21}b_{21}\vee a_{22}b_{21}^{c}\right) \\
&=&a_{11}b_{21}\vee a_{22}b_{21}^{c}\text{.}
\end{eqnarray*}%
More generally
\begin{equation*}
c_{ii}=d_{i+1,i+1}=a_{11}b_{i+1,1}\vee a_{i+1,i+1}b_{i+1,1}^{c}
\end{equation*}%
for $i=1,\ldots ,n-1$. Hence, since $\dbigvee\limits_{i=1}^{n-1}b_{i+1,1}=b_{11}^{c}$
($\underline{b_{1}}$ being stochastic),
\begin{equation*}
\limfunc{tr}\left( C_{1}\right) =\dbigvee\limits_{i=1}^{n-1}\left(
a_{11}b_{i+1,1}\vee a_{i+1,i+1}b_{i+1,1}^{c}\right)
=\dbigvee_{i=1}^{n}a_{ii}b_{i,1}^{c}\text{.}
\end{equation*}%
Since $b_{i2}\leq a_{ii}b_{i1}^{c}$ ($i=1,\ldots ,n$), we conclude that $%
\underline{b_{2}}$ is an invariant stochastic vector of $C_{1}$ by Corollary
(\ref{StoInv3}). Hence, there exists a symmetric stochastic matrix $B_{2}$
such that
\begin{equation*}
C_{1}=B_{2}\left[
\begin{array}{cc}
1 & 0 \\
0 & C_{2}%
\end{array}%
\right] B_{2}\text{.}
\end{equation*}%
It follows that
\begin{eqnarray*}
A &=&B_{1}\left[
\begin{array}{cc}
1 & 0 \\
0 & B_{2}\left[
\begin{array}{cc}
1 & 0 \\
0 & C_{2}%
\end{array}%
\right] B_{2}%
\end{array}%
\right] B_{1} \\
&=&B_{1}\left[
\begin{array}{cc}
1 & 0 \\
0 & B_{2}%
\end{array}%
\right] \left[
\begin{array}{ccc}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & C_{2}%
\end{array}%
\right] \left[
\begin{array}{cc}
1 & 0 \\
0 & B_{2}%
\end{array}%
\right] B_{1} \\
&=&B_{3}\left[
\begin{array}{cc}
I_{2} & 0 \\
0 & C_{2}%
\end{array}%
\right] B_{3}^{\ast }
\end{eqnarray*}%
with $B_{3}=B_{1}\left[
\begin{array}{cc}
1 & 0 \\
0 & B_{2}%
\end{array}%
\right] $. The proof is then completed by a simple induction.\qed
\end{pf}
\qquad Theorem (\ref{UnitaryReduxMultiple}) can be easily generalized to the
following:
\begin{cor}
Unitary matrices $A_{1},\ldots ,A_{m}$ possess an invariant orthogonal set
of $k$ stochastic vectors if and only if there exist an invertible matrix $B$
and matrices $C_{1},\ldots ,C_{m}$ such that
\begin{equation*}
A_{i}=B\left[
\begin{array}{cc}
I_{k} & 0 \\
0 & C_{i}%
\end{array}
\right] B^{\ast }
\end{equation*}
for $i=1,\ldots ,m$, where $I_{k}$ is the identity operator on $\mathcal{L}
_{k}\left( \mathcal{B}\right) $.
\end{cor}
\qquad We now illustrate Theorem (\ref{UnitaryReduxMultiple}) with an
example. Let $\mathcal{B}$ be the power set of $\left\{ 1,2,3,4,5\right\}
=\Omega $ endowed with its natural Boolean algebra structure. Consider the
stochastic symmetric matrix $A$ over $\mathcal{L}_{5}\left( \mathcal{B}
\right) $ defined by
\begin{equation*}
A=\left[
\begin{array}{ccccc}
\left\{ 1\right\} & \left\{ 2\right\} & \left\{ 3\right\} & \left\{ 4\right\}
& \left\{ 5\right\} \\
\left\{ 2\right\} & \left\{ 4,5\right\} & \emptyset & \emptyset & \left\{
1,3\right\} \\
\left\{ 3\right\} & \emptyset & \left\{ 4,5\right\} & \left\{ 1\right\} &
\{2\} \\
\left\{ 4\right\} & \emptyset & \left\{ 1\right\} & \left\{ 2,3,5\right\} &
\emptyset \\
\left\{ 5\right\} & \left\{ 1,3\right\} & \left\{ 2\right\} & \emptyset &
\left\{ 4\right\}%
\end{array}
\right] \text{.}
\end{equation*}
There are many stochastic invariant vectors for $A$ and we choose
\begin{equation*}
\underline{b}=\left( \left\{ 1\right\} ,\emptyset ,\emptyset ,\left\{
2,3,5\right\} ,\left\{ 4\right\} \right) \text{.}
\end{equation*}
We now form the stochastic symmetric matrix
\begin{equation*}
B=\left[
\begin{array}{ccccc}
\left\{ 1\right\} & \emptyset & \emptyset & \left\{ 2,3,5\right\} & \left\{
4\right\} \\
\emptyset & \Omega & \emptyset & \emptyset & \emptyset \\
\emptyset & \emptyset & \Omega & \emptyset & \emptyset \\
\left\{ 2,3,5\right\} & \emptyset & \emptyset & \left\{ 1,4\right\} &
\emptyset \\
\left\{ 4\right\} & \emptyset & \emptyset & \emptyset & \left\{
1,2,3,5\right\}%
\end{array}
\right]
\end{equation*}
We can then reduce $A$ by
\begin{equation*}
BAB=\left[
\begin{array}{ccccc}
\Omega & \emptyset & \emptyset & \emptyset & \emptyset \\
\emptyset & \left\{ 4,5\right\} & \emptyset & \left\{ 2\right\} & \left\{
1,3\right\} \\
\emptyset & \emptyset & \left\{ 4,5\right\} & \left\{ 1,3\right\} & \left\{
2\right\} \\
\emptyset & \left\{ 2\right\} & \left\{ 1,3\right\} & \emptyset & \left\{
4,5\right\} \\
\emptyset & \left\{ 1,3\right\} & \left\{ 2\right\} & \left\{ 4,5\right\} &
\emptyset%
\end{array}
\right] \text{.}
\end{equation*}
Thus
\begin{equation*}
A=B\left[
\begin{array}{cc}
1 & 0 \\
0 & C%
\end{array}
\right] B
\end{equation*}
yet $\limfunc{tr}(C)=\left\{ 4,5\right\} \not=\Omega $ so no further
reduction is possible.
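\qquad The computations in this example can be checked mechanically. The
following Python sketch (an illustration only; the helper names are ad hoc)
represents elements of $\mathcal{B}$ as subsets of $\Omega $, with meet given
by intersection and join by union, and verifies that $\underline{b}$ is
invariant for $A$ and that $BAB$ has the stated block form.
\begin{verbatim}
OMEGA = frozenset({1, 2, 3, 4, 5})
E = frozenset()                      # the zero element of B (the empty set)
S = lambda *xs: frozenset(xs)        # shorthand for a subset of Omega

def mat_vec(A, v):                   # (Av)_i = \/_j a_{ij} v_j
    return [frozenset().union(*[A[i][j] & v[j] for j in range(len(v))])
            for i in range(len(A))]

def mat_mul(A, B):                   # (AB)_{ij} = \/_k a_{ik} b_{kj}
    n = len(A)
    return [[frozenset().union(*[A[i][k] & B[k][j] for k in range(n)])
             for j in range(n)] for i in range(n)]

A = [[S(1), S(2),   S(3),   S(4),     S(5)],
     [S(2), S(4,5), E,      E,        S(1,3)],
     [S(3), E,      S(4,5), S(1),     S(2)],
     [S(4), E,      S(1),   S(2,3,5), E],
     [S(5), S(1,3), S(2),   E,        S(4)]]
b = [S(1), E, E, S(2,3,5), S(4)]
B = [[S(1),     E,     E,     S(2,3,5), S(4)],
     [E,        OMEGA, E,     E,        E],
     [E,        E,     OMEGA, E,        E],
     [S(2,3,5), E,     E,     S(1,4),   E],
     [S(4),     E,     E,     E,        S(1,2,3,5)]]

assert mat_vec(A, b) == b                      # b is invariant for A
D = mat_mul(mat_mul(B, A), B)                  # D = BAB
assert D[0][0] == OMEGA
assert all(D[0][j] == E == D[j][0] for j in range(1, 5))
print([D[i][i] for i in range(1, 5)])          # diagonal of C; cf. tr(C) = {4,5}
\end{verbatim}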
\section{Powers of Stochastic Matrices}
\qquad As mentioned in section 2, powers of stochastic matrices may be
important for the study of Boolean Markov chains. Various applications of
powers of lattice matrices are discussed in \cite{Cechlarova03,Tan01}. If $A$
is a Boolean matrix, the smallest natural number $p$ such that there exists
a natural number $e$ with $A^{e+p}=A^{e}$ is called the \emph{period} of $A$
and is denoted by $p(A)$. The smallest natural number $e$ such that $%
A^{e+p(A)}=A^{e}$ is called the \emph{exponent} or \emph{index} of $A$ and
is denoted by $e(A)$. It is known that for any $n\times n$ Boolean matrix $A$
, both $p(A)$ and $e(A)$ exist and $e(A)\leq \left( n-1\right) ^{2}+1$ \cite%
{Cechlarova03,Tan01}. We shall use:
\begin{defn}
Let $n\in \mathbb{N}$. The least common multiple of $\left\{ 1,2,\ldots
,n\right\} $ is denoted by $[n]$.
\end{defn}
\qquad It is also known that $p(A)$ divides $[n]$.
\qquad In this section, we show that for a stochastic matrix, we can improve
the upper bound for $e(A)$ to $e(A)\leq n-1$. Although we do not improve on $%
p(A)|[n]$, we give an alternative proof of this result for stochastic
matrices because it is embedded in our proof that $e(A)\leq n-1$.
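\qquad Before doing so, the bounds can at least be illustrated by brute force
over the two-element Boolean algebra $\left\{ 0,1\right\} $, where a
stochastic matrix has exactly one nonzero entry per column and is therefore
the matrix of a map on $\left\{ 1,\ldots ,n\right\} $. The following Python
sketch (a special case only, not a proof) checks that $e(A)\leq n-1$ and that
$p(A)$ divides $[n]$ for every such matrix with $2\leq n\leq 5$.
\begin{verbatim}
# Brute-force check over {0,1}: stochastic = exactly one 1 per column.
from itertools import product
from math import lcm

def exponent_and_period(A):
    n = len(A)
    def mul(X, Y):                        # Boolean matrix product over {0,1}
        return tuple(tuple(int(any(X[i][k] and Y[k][j] for k in range(n)))
                           for j in range(n)) for i in range(n))
    seen, P, k = {}, A, 1                 # P runs through A, A^2, A^3, ...
    while P not in seen:
        seen[P] = k
        P, k = mul(P, A), k + 1
    return seen[P], k - seen[P]           # (exponent e, period p)

for n in range(2, 6):
    L = lcm(*range(1, n + 1))             # [n]
    for f in product(range(n), repeat=n): # column j has its 1 in row f[j]
        A = tuple(tuple(int(f[j] == i) for j in range(n)) for i in range(n))
        e, p = exponent_and_period(A)
        assert e <= n - 1 and L % p == 0
print("e(A) <= n-1 and p(A) | [n] verified over {0,1} for n = 2,...,5")
\end{verbatim}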
\qquad If $A$ is a $2\times 2$ matrix, then it follows from the previous
known results that $A^{4}=A^{2}$. Moreover, it is easy to check that if $A$
is a $2\times 2$ stochastic matrix then $A^{3}=A$. In the same way, for $%
3\times 3$ matrix $A$ we have $A^{11}=A^{5}$. However, one can check that if
$A$ is a $3\times 3$ stochastic matrix then $A^{8}=A^{2}$. Displaying the
first eight powers of $A$ would be cumbersome, so we refrain from doing so.
However, we can easily prove the special case that $A^{6}=I$ for any unitary
$3\times 3$ matrix $A$. In this case, we have
\begin{equation*}
A=\left[
\begin{array}{ccc}
a_{1} & b_{1} & c_{1} \\
a_{2} & b_{2} & c_{2} \\
a_{3} & b_{3} & c_{3}%
\end{array}
\right]
\end{equation*}
where each row and column is a stochastic vector. We then have
\begin{eqnarray*}
A^{2} &=&\left[
\begin{array}{ccc}
a_{1}\vee a_{2}b_{1}\vee a_{3}c_{1} & b_{3}c_{1} & b_{1}c_{2} \\
a_{3}c_{2} & a_{2}b_{1}\vee b_{2}\vee b_{3}c_{2} & a_{2}c_{1} \\
b_{3}a_{2} & a_{3}b_{1} & a_{3}c_{1}\vee b_{3}c_{2}\vee c_{3}%
\end{array}
\right] , \\
A^{3} &=&\left[
\begin{array}{ccc}
a_{1}\vee a_{3}b_{1}c_{2}\vee a_{2}b_{3}c_{1} & a_{2}b_{1} & a_{3}c_{1} \\
a_{2}b_{1} & a_{2}b_{3}c_{1}\vee b_{2}\vee a_{3}b_{1}c_{2} & b_{3}c_{2} \\
a_{3}c_{1} & b_{3}c_{2} & a_{3}b_{1}c_{2}\vee a_{2}b_{3}c_{1}\vee c_{3}%
\end{array}
\right] \text{.}
\end{eqnarray*}
Since $A^{3}$ is symmetric and unitary (being a product of unitary matrices,
or by inspection), we conclude that $A^{6}=A^{3}A^{3}=A^{3}\left( A^{3}\right)
^{\ast }=I$.
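\qquad As a check on this computation, the following Python sketch enumerates
every $3\times 3$ unitary matrix over the Boolean algebra of subsets of a
two-point set (a finite special case chosen only for convenience) and confirms
that $A^{3}$ is symmetric and $A^{6}=I$ in each case.
\begin{verbatim}
from itertools import product

OMEGA = frozenset({1, 2})
SUBSETS = [frozenset(), frozenset({1}), frozenset({2}), OMEGA]
E = frozenset()

def stochastic(v):        # entries pairwise disjoint with join OMEGA
    return (frozenset().union(*v) == OMEGA and
            all(v[i] & v[j] == E for i in range(3) for j in range(i + 1, 3)))

def mul(X, Y):
    return tuple(tuple(frozenset().union(*[X[i][k] & Y[k][j] for k in range(3)])
                       for j in range(3)) for i in range(3))

I3 = tuple(tuple(OMEGA if i == j else E for j in range(3)) for i in range(3))
cols = [c for c in product(SUBSETS, repeat=3) if stochastic(c)]

count = 0
for c1, c2, c3 in product(cols, repeat=3):
    A = tuple(tuple(col[i] for col in (c1, c2, c3)) for i in range(3))
    if all(stochastic(row) for row in A):   # rows and columns stochastic: unitary
        A3 = mul(mul(A, A), A)
        assert all(A3[i][j] == A3[j][i] for i in range(3) for j in range(3))
        assert mul(A3, A3) == I3
        count += 1
print(count, "unitary 3x3 matrices over subsets of a 2-point set checked")
\end{verbatim}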
\qquad From these observations and our work in Section 5, we can already
draw some interesting conclusions. For example, let $A$ be a $3\times 3$
unitary matrix with $\limfunc{tr}(A)=1$. Applying Corollary (\ref%
{UnitReduce1}), there exists an invertible matrix $B$ and a $2\times 2$
unitary matrix $C$ such that
\begin{equation}
A=B\left[
\begin{array}{cc}
1 & 0 \\
0 & C%
\end{array}
\right] B^{\ast }\text{.} \label{Eq61}
\end{equation}
Since $C$ is symmetric (all $2\times 2$ unitaries are), we have $C^{2}=I$
and thus
\begin{equation*}
A^{2}=B\left[
\begin{array}{cc}
1 & 0 \\
0 & C^{2}%
\end{array}
\right] B^{\ast }=I\text{.}
\end{equation*}
We conclude that any $3\times 3$ unitary matrix $A$ with $\limfunc{tr}(A)=1$
is symmetric: indeed, $A^{2}=I=AA^{\ast }$ gives $A^{\ast }=A^{\ast }AA=A$.
\qquad As another example, let $A$ be a $4\times 4$ unitary matrix with $%
\limfunc{tr}(A)=1$. As before, there exists an invertible matrix $B$ such
that (\ref{Eq61}) holds where $C$ is now a $3\times 3$ unitary matrix. Since
$C^{6}=I$, we conclude that $A^{6}=I$ and thus $A^{3}$ is symmetric.
\bigskip \qquad We now begin the proof of the main result of this section.
Let $A=\left[ a_{ij}\right] _{n\times n}$ be a stochastic matrix on $%
\mathcal{L}_{n}\left( \mathcal{B}\right) $. We shall use:
\begin{defn}
A nonzero element of $\mathcal{B}$ of the form
\begin{equation*}
a_{i_{1}1}a_{i_{2}2}\ldots a_{i_{n}n}
\end{equation*}
for $i_{1},\ldots ,i_{n}\in \left\{ 1,\ldots ,n\right\} $ is called an atom
of $A$.
\end{defn}
\qquad Of course there are only finitely many atoms of $A$.
\begin{lem}
\label{Atoms}Let $A=\left[ a_{ij}\right] _{n\times n}$ be a stochastic
matrix on $\mathcal{L}_{n}\left( \mathcal{B}\right) $. Let $\omega
_{1},\ldots ,\omega _{m}$ be the distinct atoms of $A$.
\begin{enumerate}
\item If $i,j\in \left\{ 1,\ldots ,m\right\} $ and $i\not=j$ then $\omega
_{i}\omega _{j}=0$,
\item $\dbigvee\limits_{i=1}^{m}\omega _{i}=1$,
\item For all $i,j\in \left\{ 1,\ldots ,n\right\} $ we have $a_{ij}=\dbigvee
\left\{ \omega _{k}:\omega _{k}\leq a_{ij}\right\} $,
\item If $\omega _{i}\leq a_{kj}$ then $A\omega _{i}\underline{\delta _{j}}
=\omega _{i}\underline{\delta _{k}}$.
\end{enumerate}
\end{lem}
\begin{pf}
For (1),\ letting $\omega _{i}=a_{i_{1}1}a_{i_{2}2}\ldots a_{i_{n}n}$ and $%
\omega _{j}=a_{j_{1}1}a_{j_{2}2}\ldots a_{j_{n}n}$, if $i\not=j$ then $%
i_{k}\not=j_{k}$ for some $k\in \left\{ 1,\ldots ,n\right\} $ and thus $%
\omega _{j}\omega _{i}=0$ since $a_{i_{k}k}a_{j_{k}k}=0$.
(2) will follow from (3). For (3), since
\begin{equation*}
a_{11}=\dbigvee \left\{ a_{11}\left( a_{i_{2}2}\ldots a_{i_{n}n}\right)
:i_{2},\ldots ,i_{n}=1,\ldots ,n\right\}
\end{equation*}
as $A$ is stochastic, the result holds for $a_{11}$. It holds similarly for
$a_{ij}$ with $i,j\in \left\{ 1,\ldots ,n\right\} $. Last, for (4), if $%
\omega _{i}\leq a_{kj}$ then
\begin{eqnarray*}
A\omega _{i}\underline{\delta _{j}} &=&\omega _{i}A\underline{\delta _{j}}
=\omega _{i}\left( a_{1j},a_{2j},\ldots ,a_{nj}\right) =\left( \omega
_{i}a_{1j},\ldots ,\omega _{i}a_{nj}\right) \\
&=&\omega _{i}a_{kj}\underline{\delta _{k}}=\omega _{i}\underline{\delta
_{k}}\text{,}
\end{eqnarray*}
where the last line holds because $\omega _{i}\leq a_{kj}$ while the entries
of column $j$ are pairwise disjoint, so that $\omega _{i}a_{lj}=0$ whenever
$l\not=k$. This concludes our proof.\qed
\end{pf}
\bigskip \qquad The main result for this section is:
\begin{thm}
If $A$ is a stochastic $n\times n$ matrix then $A^{[n]+n-1}=A^{n-1}$.
\end{thm}
\begin{pf}
Let $\omega _{1},\ldots ,\omega _{m}$ be the distinct atoms of $A$. By Lemma
(\ref{Atoms},2), we have $\underline{\delta _{i}}=\sum_{j=1}^{m}\omega _{j}
\underline{\delta _{i}}$ for all $i\in \left\{ 1,\ldots n\right\} $. Since $%
\left\{ \underline{\delta _{1}},\ldots ,\underline{\delta _{n}}\right\} $ is
a basis for $\mathcal{L}_{n}\left( \mathcal{B}\right) $, the set $\left\{
\omega _{j}\underline{\delta _{i}}:i=1,\ldots n;j=1,\ldots m\right\} $ is a
generating set of $\mathcal{L}_{n}\left( \mathcal{B}\right) $. Set $%
r=[n]+n-1 $. If we can show that $A^{r}\omega _{j}\underline{\delta _{i}}
=A^{n-1}\omega _{j}\underline{\delta _{i}}$ for $i=1,\ldots n$ and $%
j=1,\ldots m$ then we are done.
Consider first $\omega _{1}\underline{\delta _{1}}$ and call the vectors $%
A^{0}\omega _{1}\underline{\delta _{1}}$, $A\omega _{1}\underline{\delta
_{1} }$, $A^{2}\omega _{1}\underline{\delta _{1}},\ldots ,A^{n-1}\omega _{1}
\underline{\delta _{1}}$ the \emph{iterates} of $A$ \emph{at} $\omega _{1}
\underline{\delta _{1}}$. By Lemma (\ref{Atoms},4), the iterates of $\omega
_{1}\underline{\delta _{1}}$ have the form: $\omega _{1}\underline{\delta
_{1}},$ $\omega _{1}\underline{\delta _{i_{1}}},\omega _{1}\underline{\delta
_{i_{2}}},\ldots ,\omega _{1}\underline{\delta _{i_{n-1}}}$ for $%
i_{1},\ldots ,i_{n-1}\in \left\{ 1,\ldots ,n\right\} $.
Suppose there is only one distinct iterate of $A$ at $\omega _{1}\underline{
\delta _{1}}$. Then
\begin{equation*}
A\omega _{1}\underline{\delta _{1}}=\omega _{1}\underline{\delta _{i_{1}}}
=\omega _{1}\underline{\delta _{1}}\text{.}
\end{equation*}
Then we have
\begin{equation}
A^{n-1}\omega _{1}\underline{\delta _{1}}=A^{n}\omega _{1}\underline{\delta
_{1}}=\ldots =A^{r}\omega _{1}\underline{\delta _{1}}\text{.} \label{Eq62}
\end{equation}
Suppose now there are two distinct iterates of $A$ at $\omega _{1}\underline{%
\delta _{1}}$. Then $\omega _{1}\underline{\delta _{1}}\not=\omega _{1}%
\underline{\delta _{i_{1}}}$. If $\omega _{1}\underline{\delta _{i_{2}}}%
=\omega _{1}\underline{\delta _{i_{1}}}$ then
\begin{equation*}
\omega _{1}\underline{\delta _{i_{3}}}=A\omega _{1}\underline{\delta _{i_{2}}%
}=A\omega _{1}\underline{\delta _{i_{1}}}=\omega _{1}\underline{\delta
_{i_{2}}}=\omega _{1}\underline{\delta _{i_{1}}}
\end{equation*}%
and we can conclude again that (\ref{Eq62}) holds. Otherwise, $A^{2}\omega
_{1}\underline{\delta _{1}}=\omega _{1}\underline{\delta _{1}}$ and thus $%
A^{n-1}\omega _{1}\underline{\delta _{1}}=\omega _{1}\underline{\delta _{1}}$
or $A^{n-1}\omega _{1}\underline{\delta _{1}}=\omega _{1}\underline{\delta
_{i_{1}}}$. Either way, we have
\begin{equation}
A^{2+(n-1)}\omega _{1}\underline{\delta _{1}}=A^{n-1}\omega _{1}\underline{%
\delta _{1}}\text{.} \label{Eq63}
\end{equation}
Suppose instead that there are three distinct iterates of $A$ at $\omega
_{1} \underline{\delta _{1}}$. Thus $\omega _{1}\underline{\delta _{1}},$ $%
\omega _{1}\underline{\delta _{i_{1}}}$ and $\omega _{1}\underline{\delta
_{i_{2}}}$ are distinct. If $\omega _{1}\underline{\delta _{i_{3}}}=\omega
_{1} \underline{\delta _{i_{2}}}$ then $A^{r}\omega _{1}\underline{\delta
_{1}} =A^{n-1}\omega _{1}\underline{\delta _{1}}=\omega _{1}\underline{%
\delta _{i_{3}}}$ so (\ref{Eq62}) holds again. If $\omega _{1}\underline{%
\delta _{i_{1}}}=\omega _{1}\underline{\delta _{i_{3}}}$ then $A\omega _{1}
\underline{\delta _{1}}\in \left\{ \omega _{1}\underline{\delta _{1}},\omega
_{1}\underline{\delta _{i_{2}}}\right\} $ and (\ref{Eq63}) holds. If $\omega
_{1}\underline{\delta _{1}}=\omega _{1}\underline{\delta _{i_{3}}}$ then $%
A^{n-1}\omega _{1}\underline{\delta _{1}}\in \left\{ \omega _{1}\underline{
\delta _{1}},\omega _{1}\underline{\delta _{i_{1}}},\omega _{1}\underline{
\delta _{i_{2}}}\right\} $ and we have
\begin{equation}
A^{3+n-1}\omega _{1}\underline{\delta _{1}}=A^{n-1}\omega _{1}\underline{
\delta _{1}}\text{.} \label{Eq64}
\end{equation}
Generalizing this observation, suppose that all the iterates $\omega _{1}
\underline{\delta _{1}},\omega _{1}\underline{\delta _{i_{1}}},\ldots ,$ $%
\omega _{1}\underline{\delta _{i_{n-1}}}$ are distinct. Since there are only
$n$ possibilities for $A^{n}\omega _{1}\underline{\delta _{1}}$, we conclude
that $A^{n}\omega _{1}\underline{\delta _{1}}=\omega _{1}\underline{\delta
_{1}}$ or $\omega _{1}\underline{\delta _{i_{j}}}$ for some $j\in \left\{
1,\ldots ,n-1\right\} $. But then
\begin{equation}
A^{t+\left( n-1\right) }\omega _{1}\underline{\delta _{1}}=A^{n-1}\omega
_{1} \underline{\delta _{1}} \label{Eq65}
\end{equation}
for some $t\in \left\{ 1,2,\ldots ,n\right\} $. Notice (\ref{Eq63}) and (\ref%
{Eq64}) are special cases of (\ref{Eq65}).
Let us now suppose (\ref{Eq65}) holds for some $t\in \left\{ 1,\ldots
,n\right\} $. Since $t\leq n$ divides $[n]$, we have $r=[n]+n-1=kt+(n-1)$ for
some $k\in \mathbb{N}$, and therefore
\begin{eqnarray*}
A^{r}\omega _{1}\underline{\delta _{1}} &=&A^{kt+n-1}\omega _{1}\underline{
\delta _{1}}=\left( A^{t}\right) ^{k}A^{n-1}\omega _{1}\underline{\delta
_{1} } \\
&=&\left( A^{t}\right) ^{k-1}A^{t}A^{n-1}\omega _{1}\underline{\delta _{1}}
=\left( A^{t}\right) ^{k-1}A^{n-1}\omega _{1}\underline{\delta _{1}} \\
&=&\left( A^{t}\right) ^{k-2}A^{t}A^{n-1}\omega _{1}\underline{\delta _{1}}
=\left( A^{t}\right) ^{k-2}A^{n-1}\omega _{1}\underline{\delta _{1}} \\
&=&\ldots =A^{n-1}\omega _{1}\underline{\delta _{1}}\text{.}
\end{eqnarray*}
In a similar way, we can prove that $A^{r}\omega _{j}\underline{\delta _{i}}
=A^{n-1}\omega _{j}\underline{\delta _{i}}$ for $j=1,\ldots ,m$ and $%
i=1,\ldots ,n$, so the proof is complete.\qed
\end{pf}
\begin{cor}
If $A$ is an $n\times n$ unitary matrix then $A^{\left[ n\right] }=I$.
\end{cor}
\qquad As examples, $A^{15}=A^{3}$ for any $4\times 4$ stochastic matrix and
$A^{64}=A^{4}$ for any $5\times 5$ stochastic matrix. We now give a final
example. Let $\left( a,b,c\right) $ be a stochastic vector and form the
stochastic matrix
\begin{equation*}
A=\left[
\begin{array}{ccc}
b\vee c & a & 0 \\
a & b & a \\
0 & c & b\vee c%
\end{array}
\right] \text{.}
\end{equation*}
We then have
\begin{equation*}
A^{2}=\left[
\begin{array}{ccc}
1 & 0 & a \\
0 & a\vee b & 0 \\
0 & c & c\vee b%
\end{array}
\right]
\end{equation*}
and $A^{2n+1}=A$, $A^{2n}=A^{2}$ for $n\in \mathbb{N}$. This example
illustrates an important difference between Boolean Markov chains and
traditional Markov chains given by real stochastic matrices. An important
property of traditional Markov chains is that the sites (called states in
the traditional case) can be decomposed into equivalence classes. This is
important because sites in the same equivalence class share a similar
behavior \cite{Giveon64}.
\qquad To be precise, let $M=\left[ p_{ij}\right] _{n\times n}$ be a real
stochastic matrix, i.e. $p_{ij}\geq 0$ and $\sum_{i=1}^{n}p_{ij}=1$ for
every $j=1,\ldots ,n$. The real $p_{ij}$ represents the transition
probability from site $j$ to site $i$. A site $i$ is \emph{accessible} from
a site $j$ if there exists $n\in \mathbb{N}$ such that $\left( M^{n}\right)
_{ij}>0$, and we then denote $j\rightarrow i$. It is easy to check that $%
\rightarrow $ is transitive and that the relation $\longleftrightarrow $
defined by $i\longleftrightarrow j\iff \left( i\rightarrow j\wedge
j\rightarrow i\right) $ is an equivalence relation on the sites of the
Markov chain.
\qquad Let us now extend this concept to Boolean Markov chains whose
transition matrix is a Boolean stochastic matrix $A$. Thus, $j\rightarrow i$
whenever $\left( A^{n}\right) _{ij}>0$ for some $n\in \mathbb{N}$. For the
example above (assuming $a\not=0$ and $c\not=0$), we note that $1\rightarrow
2$ and $2\rightarrow 3$, yet $1\not\rightarrow 3$ because every power of $A$
equals $A$ or $A^{2}$ and both have $(3,1)$ entry $0$. Thus $\rightarrow $ is
not transitive. If we define $%
\longleftrightarrow $ by $i\longleftrightarrow j\iff \left( i\rightarrow
j\wedge j\rightarrow i\right) $ then we have, in the above example, that in
fact $1\longleftrightarrow 2$ and $2\longleftrightarrow 3$ yet $%
1\nleftrightarrow 3$. Hence $\longleftrightarrow $ is no longer an
equivalence relation.
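\qquad For concreteness, take $\mathcal{B}$ to be the power set of $\left\{
1,2,3\right\} $ and $\left( a,b,c\right) =\left( \left\{ 1\right\} ,\left\{
2\right\} ,\left\{ 3\right\} \right) $, a choice with $a\not=0$ and
$c\not=0$. The following Python sketch verifies the displayed formula for
$A^{2}$, the relation $A^{3}=A$, and the failure of transitivity just
described.
\begin{verbatim}
a, b, c = frozenset({1}), frozenset({2}), frozenset({3})   # a stochastic vector
E, O = frozenset(), a | b | c                              # zero and unit of B

def mul(X, Y):
    return [[frozenset().union(*[X[i][k] & Y[k][j] for k in range(3)])
             for j in range(3)] for i in range(3)]

A = [[b | c, a, E], [a, b, a], [E, c, b | c]]
A2, A3 = mul(A, A), mul(mul(A, A), A)
assert A2 == [[O, E, a], [E, a | b, E], [E, c, c | b]]     # the displayed A^2
assert A3 == A                                             # so A^{2n+1}=A, A^{2n}=A^2
# j -> i means (A^n)_{ij} != 0 for some n; every power of A is A or A^2, so it
# suffices to inspect those two: 1 -> 2 and 2 -> 3, yet 1 -/-> 3.
assert A[1][0] != E and A[2][1] != E
assert A[2][0] == E and A2[2][0] == E
\end{verbatim}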
\bibliographystyle{amsplain}
| {
"timestamp": "2009-02-08T05:54:24",
"yymm": "0902",
"arxiv_id": "0902.1290",
"language": "en",
"url": "https://arxiv.org/abs/0902.1290",
"abstract": "This article discusses the concept of Boolean spaces endowed with a Boolean valued inner product and their matrices. A natural inner product structure for the space of Boolean n-tuples is introduced. Stochastic boolean vectors and stochastic and unitary Boolean matrices are studied. A dimension theorem for orthonormal bases of a Boolean space is proven. We characterize the invariant stochastic Boolean vectors for a Boolean stochastic matrix and show that they can be used to reduce a unitary matrix. Finally, we obtain a result on powers of stochastic and unitary matrices.",
"subjects": "Rings and Algebras (math.RA)",
"title": "Boolean Inner product Spaces and Boolean Matrices",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126451020111,
"lm_q2_score": 0.7248702642896702,
"lm_q1q2_score": 0.7094396937187369
} |
https://arxiv.org/abs/1109.4454 | Parrondo's paradox via redistribution of wealth | Toral (2002) considered an ensemble of N\geq2 players. In game B a player is randomly selected to play Parrondo's original capital-dependent game. In game A' two players are randomly selected without replacement, and the first transfers one unit of capital to the second. Game A' is fair (with respect to total capital), game B is losing (or fair), and the random mixture {\gamma}A'+(1-{\gamma})B is winning, as was demonstrated by Toral for {\gamma}=1/2 using computer simulation. We prove this, establishing a strong law of large numbers and a central limit theorem for the sequence of profits of the ensemble of players for each {\gamma}\in(0,1). We do the same for the nonrandom pattern of games (A')^r B^s for all integers r,s\geq1. An unexpected relationship between the random-mixture case and the nonrandom-pattern case occurs in the limit as N\rightarrow\infty. | \section{Introduction}
The original Parrondo (1996) games can be described in terms of probabilities $p:=1/2-\eps$ and
\begin{equation}\label{old-param}
p_0:={1\over10}-\eps,\qquad p_1=p_2:={3\over4}-\eps,
\end{equation}
where $\eps>0$ is a small bias parameter (less than 1/10, of course). In game $A$, the player tosses a $p$-coin (i.e., $p$ is the probability of heads). In game $B$, if the player's current capital is congruent to $j$ (mod 3), he tosses a $p_j$-coin. (Assume initial capital 0 for simplicity.) In both games, the player wins one unit with heads and loses one unit with tails.
It can be shown that games $A$ and $B$ are both losing games (asymptotically), regardless of $\eps$, whereas the random mixture $(1/2)(A+B)$ (toss a fair coin to determine which game to play) is a winning game for $\eps$ sufficiently small. Furthermore, certain nonrandom patterns, including $AAB$, $ABB$, and $AABB$ but excluding $AB$, are winning as well, again for $\eps$ sufficiently small. These are the original examples of \textit{Parrondo's paradox}.
It has been suggested that game $A$ acts as ``noise'' to break up the losing cycles of game $B$ played alone (Kay and Johnson 2003). Toral (2002) proposed a stochastic model in which a different type of noise appears to have a similar effect. The model assumes an ensemble of $N\ge2$ players and replaces the noise effect of Parrondo's game $A$ by a redistribution of capital among the players. A player $i$ is selected at random to play. With probability 1/2 he plays Parrondo's game $B$, and with probability 1/2 he plays game $A'$, in which he gives one unit of his capital to another player $j$ selected at random (without replacement). Notice that this new game $A'$ is fair since it does not modify the total amount of capital; it simply redistributes it randomly among the players.
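A minimal Monte Carlo sketch of the model just described (with the original
parameters (\ref{old-param}), $\eps=0$, $\gamma=1/2$, and ad hoc choices of
$N$, the number of turns, and the seed) already suggests that the average
profit per turn to the ensemble is positive:
\begin{verbatim}
import random

def simulate(N=50, steps=200_000, gamma=0.5, eps=0.0, seed=1):
    p = [0.1 - eps, 0.75 - eps, 0.75 - eps]   # game B win probabilities p_0, p_1, p_2
    capital = [0] * N
    total = 0                                 # profit to the ensemble of players
    rng = random.Random(seed)
    for _ in range(steps):
        i = rng.randrange(N)                  # the player selected to play
        if rng.random() < gamma:              # game A': i gives one unit to some j != i
            j = rng.randrange(N - 1)
            j += (j >= i)
            capital[i] -= 1
            capital[j] += 1
        else:                                 # game B, driven by capital[i] mod 3
            win = rng.random() < p[capital[i] % 3]
            capital[i] += 1 if win else -1
            total += 1 if win else -1
    return total / steps

print(simulate())   # positive (about 0.03); games A' and B alone are fair
\end{verbatim}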
Toral showed by computer simulation that the Parrondo effect is present in his games. Our aim here is to prove this, establishing a strong law of large numbers and a central limit theorem for the sequence of profits of the ensemble of $N$ players. For this we apply results of Ethier and Lee (2009), but the application is not straightforward. For example, the formulas for the mean and variance parameters in the central limit theorem depend on the unique stationary distribution of the underlying Markov chain as well as on its fundamental matrix, both of which are too complicated to derive explicitly except for small $N$. Nevertheless, we can evaluate the mean and variance parameters for all $N$.
We generalize (\ref{old-param}) to the parameterization of Ethier and Lee (2009):
\begin{equation}\label{param}
p_0:={\rho^2\over1+\rho^2}-\eps,\qquad p_1=p_2:={1\over1+\rho}-\eps,
\end{equation}
where $\rho>0$ (eq.\ (\ref{old-param}) is the special case $\rho=1/3$). The bias parameter is not important, so we take $\eps=0$ in most of what follows, which makes game $B$ fair (asymptotically).
Let us summarize our results. Just as it is conventional in the literature to denote the nonrandom pattern $(A')^r B^s$ by $[r,s]$, we will introduce the (slightly redundant) notation $(\gamma,1-\gamma)$ for the random mixture $\gamma A'+(1-\gamma)B$. We establish a strong law of large numbers (SLLN) and a central limit theorem (CLT) for the sequence of profits of the ensemble of $N$ players in both settings (random mixture and nonrandom pattern). We provide a formula for the random-mixture mean $\mu_{(\gamma,1-\gamma)}^{(N)}$, which does not depend on $N$, as a function of $\gamma\in(0,1)$ and $\rho>0$. The nonrandom-pattern mean $\mu_{[r,s]}^{(N)}$ does depend on $N$ and is rather more complicated; we provide a formula, as a function of $N\ge2$ and $\rho>0$, only for small $r,s\ge1$ but we determine its sign for all $r,s\ge1$, $N\ge2$, and $\rho>0$, thereby establishing necessary and sufficient conditions for the Parrondo effect to be present. Finally we show that the random-mixture case and the nonrandom-pattern case are connected by the unexpected relationship
\begin{equation}\label{relation}
\mu_{(r/(r+s),s/(r+s))}^{(N)}=\lim_{M\to\infty}\mu_{[r,s]}^{(M)},\qquad r,s\ge1,\;N\ge2,\;\rho>0,
\end{equation}
and a simple formula for this common value is provided. To put this in perspective, the corresponding identity for one-player Parrondo games fails in all but one case ($r=2$, $s=1$).
The variance parameter is considerably more complicated, so we assume that $\rho=1/3$ (i.e., (\ref{old-param}) holds with $\eps=0$) and $\gamma=1/2$, obtaining a formula for $(\sigma_{(1/2,1/2)}^{(N)})^2$ as a function of $N\ge2$. We do the same for $(\sigma_{[r,s]}^{(N)})^2$ for $\rho=1/3$ and small $r,s\ge1$. It turns out that the analogue of (\ref{relation}) fails for the variances. However, a different notion of variance, the expected sample variance of the individual players' capitals, which was considered by Toral (2002), does apparently satisfy a relationship nearly analogous to (\ref{relation}). We can confirm this only in special cases, so it remains a conjecture.
Toral (2002) also studied a model in which the capital-dependent game is replaced by a history-dependent game. It seems likely that most of the results of this paper can be extended to that setting, with the probable exception of Theorem \ref{positivity} below. Finally, Toral considered a model with redistribution of wealth from richer to poorer neighbors. That model is considerably more difficult to analyze than either of the others, and no rigorous results for it are known.
\section{Mean profit for random mixtures}\label{model}
There are two natural ways to define the model. The simplest is to describe the state of the system by an $N$-dimensional vector $\bm x=(x_1,x_2,\ldots,x_N)$ in which $x_i$ denotes the capital (mod 3) of player $i$. An alternative approach (adopted by Ethier 2007), which makes the state space smaller but the one-step transition probabilities more complicated, is to describe the state of the system, when it is in state $\bm x$ according to the previous description, by $(n_0,n_1,n_2)$, where $n_0$ (resp., $n_1$, $n_2$) is the number of 0s (resp., 1s, 2s) among $x_1,x_2,\ldots,x_N$. Using the first approach, the state space is
$$
\Sigma_N:=\{\bm x=(x_1,x_2,\ldots,x_N): x_i\in\{0,1,2\}{\rm\ for\ }i=1,\ldots,N\}=\{0,1,2\}^N,
$$
while using the second approach, the state space is
$$
\bar\Sigma_N:=\{(n_0,n_1,n_2)\in{\bf Z}_+^3: n_0+n_1+n_2=N\}.
$$
We note that $|\Sigma_N|=3^N$ and $|\bar\Sigma_N|={N+2\choose2}$.
The one-step transition probabilities using the first approach depend on three probabilities $p_0,p_1,p_2$. If only game $B$ is played, then the one-step transition probabilities have the simple form
$$
\bm P_B^{(N)}(\bm x,\bm y):=\begin{cases}N^{-1}p_{x_i}&\text{if $y_i=x_i+1$ (mod 3) and $y_j=x_j$ for all $j\ne i$}\\
N^{-1}q_{x_i}&\text{if $y_i=x_i-1$ (mod 3) and $y_j=x_j$ for all $j\ne i$}\end{cases}
$$
for $i=1,2,\ldots,N$, where $q_x:=1-p_x$ for $x=0,1,2$, and $\bm P_B^{(N)}(\bm x,\bm y)=0$ otherwise. We adopt the parameterization (\ref{param}) with $\eps=0$.
If only game $A'$ is played, then the one-step transition matrix is symmetric and of the form
$$
\bm P_{A'}^{(N)}(\bm x,\bm y):=[N(N-1)]^{-1}
$$
if, for some $i,j\in\{1,2,\ldots,N\}$ with $i\ne j$, we have $y_i=x_i-1$ (mod 3), $y_j=x_j+1$ (mod 3), and $y_k=x_k$ for all $k\ne i,j$. Finally, if the two games are mixed, that is, game $A'$ is played with probability $\gamma\in(0,1)$ and game $B$ is played with probability $1-\gamma$, then our one-step transition matrix has the form $\bm P_{(\gamma,1-\gamma)}^{(N)}:=\gamma\bm P_{A'}^{(N)}+(1-\gamma)\bm P_B^{(N)}$.
The one-step transition probabilities using the second approach also depend on the three probabilities $p_0,p_1,p_2$ and are best summarized in the form of a table. See Table \ref{Table1}, which is essentially from Ethier (2007).
\begin{table}[ht]
\caption{\label{Table1}One-step transitions using the second approach, for both game $A'$ and game $B$. From state $(n_0,n_1,n_2)$, a transition is made to state $(n_0',n_1',n_2')$.\medskip}
\tabcolsep=.16cm
\begin{center}
\begin{tabular}{ccccc}
\hline
\noalign{\smallskip}
&&& type of &\\
$(n_0',n_1',n_2')$ & type of & game & winner & probability \\
& player & played & / result & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
$(n_0-2,n_1+1,n_2+1)$ & 0 & $A'$ & 0 & $[N(N-1)]^{-1}n_0(n_0-1)$ \\
$(n_0-1,n_1-1,n_2+2)$ & 0 & $A'$ & 1 & $[N(N-1)]^{-1}n_0 n_1$ \\
$(n_0,n_1,n_2)$ & 0 & $A'$ & 2 & $[N(N-1)]^{-1}n_0 n_2$ \\
$(n_0,n_1,n_2)$ & 1 & $A'$ & 0 & $[N(N-1)]^{-1}n_1 n_0$ \\
$(n_0+1,n_1-2,n_2+1)$ & 1 & $A'$ & 1 & $[N(N-1)]^{-1}n_1(n_1-1)$ \\
$(n_0+2,n_1-1,n_2-1)$ & 1 & $A'$ & 2 & $[N(N-1)]^{-1}n_1 n_2$ \\
$(n_0-1,n_1+2,n_2-1)$ & 2 & $A'$ & 0 & $[N(N-1)]^{-1}n_2 n_0$ \\
$(n_0,n_1,n_2)$ & 2 & $A'$ & 1 & $[N(N-1)]^{-1}n_2 n_1$ \\
$(n_0+1,n_1+1,n_2-2)$ & 2 & $A'$ & 2 & $[N(N-1)]^{-1}n_2(n_2-1)$ \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
$(n_0-1,n_1+1,n_2)$ & 0 & $B$ & win & $N^{-1}n_0 p_0$ \\
$(n_0-1,n_1,n_2+1)$ & 0 & $B$ & lose & $N^{-1}n_0 q_0$ \\
$(n_0,n_1-1,n_2+1)$ & 1 & $B$ & win & $N^{-1}n_1 p_1$ \\
$(n_0+1,n_1-1,n_2)$ & 1 & $B$ & lose & $N^{-1}n_1 q_1$ \\
$(n_0+1,n_1,n_2-1)$ & 2 & $B$ & win & $N^{-1}n_2 p_2$ \\
$(n_0,n_1+1,n_2-1)$ & 2 & $B$ & lose & $N^{-1}n_2 q_2$ \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\end{table}
That the two approaches to the model are equivalent, at least in the stationary setting, is a consequence of the following simple lemma, which is easily seen to be applicable to $\bm P_B^{(N)}$ and $\bm P_{(\gamma,1-\gamma)}^{(N)}$.
We first need some notation. Given a finite set $E$ and an integer $N\ge2$, put $E^N:=E\times\cdots\times E$. Given a permutation $\sigma$ of $\{1,2,\ldots,N\}$ and $\bm x=(x_1,\ldots,x_N)\in E^N$, write $\bm x_\sigma:=(x_{\sigma(1)},\ldots,x_{\sigma(N)})$.
\begin{lemma}
Let $E$ be a finite set, fix $N\ge2$, let $\bm P$ be the one-step transition matrix for an irreducible Markov chain in the product space $E^N$, and let $\bm\pi$ be its unique stationary distribution. If, for every permutation $\sigma$ of $\{1,2,\ldots,N\}$,
$$
\bm P(\bm x_\sigma,\bm y_\sigma)=\bm P(\bm x,\bm y)
$$
for all $\bm x,\bm y\in E^N$, then $\bm\pi$ is exchangeable, that is, for every permutation $\sigma$ of $\{1,2,\ldots,N\}$, we have $\bm\pi(\bm x_\sigma)=\bm\pi(\bm x)$ for all $\bm x\in E^N$.
\end{lemma}
\begin{proof} Given a permutation $\sigma$ of $\{1,2,\ldots,N\}$, define the distribution ${\bm\pi}_\sigma$ on $E^N$ by ${\bm\pi}_\sigma(\bm x):=\bm\pi(\bm x_\sigma)$. Then
\begin{eqnarray*}
\bm\pi_\sigma(\bm y)
=\sum_{\bm x\in E^N}\bm\pi(\bm x)\bm P(\bm x,\bm y_\sigma)=\sum_{\bm x\in E^N}\bm\pi(\bm x_\sigma)\bm P(\bm x_\sigma,\bm y_\sigma)
=\sum_{\bm x\in E^N}\bm\pi_\sigma(\bm x)\bm P(\bm x,\bm y)
\end{eqnarray*}
for all $\bm y\in E^N$, hence by the uniqueness of stationary distributions, $\bm\pi_\sigma=\bm\pi$.
\end{proof}
We would like to apply results of Ethier and Lee (2009) to game $B$ and to the mixed game. (They do not apply to game $A'$ because the one-step transition matrix $\bm P_{A'}^{(N)}$ is not irreducible, but the behavior of the system is clear in this case.) We restate those results here for convenience.
Consider an irreducible aperiodic Markov chain $\{X_n\}_{n\ge0}$ with finite state space $\Sigma$. It evolves according to the one-step transition matrix ${\bm P}=(P_{ij})_{i,j\in\Sigma}$. Let us denote its unique stationary distribution by ${\bm \pi}=(\pi_i)_{i\in \Sigma}$. Let $w:\Sigma\times\Sigma\mapsto {\bf R}$ be an arbitrary function, which we write as a matrix ${\bm W}=(w(i,j))_{i,j\in\Sigma}$ and refer to as the \textit{payoff matrix}. Finally, define the sequences $\{\xi_n\}_{n\ge1}$ and $\{S_n\}_{n\ge1}$ by
\begin{equation}\label{xi_n}
\xi_n:=w(X_{n-1},X_n),\qquad n\ge1,
\end{equation}
and
\begin{equation}\label{S_n}
S_n:=\xi_1+\cdots+\xi_n,\qquad n\ge1.
\end{equation}
Let ${\bm \Pi}$ denote the square matrix each of whose rows is ${\bm \pi}$, and let ${\bm Z}:=({\bm I}-({\bm P}-{\bm \Pi}))^{-1}$ denote the \textit{fundamental matrix}. Denote by $\dot{\bm P}$ (resp., $\ddot{\bm P}$) the Hadamard (entrywise) product $\bm P\circ\bm W$ (resp., $\bm P\circ\bm W\circ\bm W$), and let $\bm 1:=(1,1,\ldots,1)^\T$. Then define
\begin{equation}\label{mu,sigma2}
\mu:=\bm\pi\dot{\bm P}\bm 1\quad{\rm and}\quad\sigma^2:=\bm\pi\ddot{\bm P}\bm 1
-(\bm\pi\dot{\bm P}\bm 1)^2+2\bm\pi\dot{\bm P}(\bm Z-\bm\Pi)\dot{\bm P}\bm 1.
\end{equation}
\begin{theorem}[Ethier and Lee 2009]\label{EL}
Under the above assumptions, and with the distribution of $X_0$ arbitrary, $\lim_{n\to\infty}n^{-1}\E[S_n]=\mu$,
$$
{S_n\over n}\to \mu\;\;{\rm a.s.},
$$
$\lim_{n\to\infty}n^{-1}\Var(S_n)=\sigma^2$, and, if $\sigma^2>0$,
$$
{S_n-n\mu\over\sqrt{n\sigma^2}}\to_d N(0,1).
$$
If $\mu=0$ and $\sigma^2>0$, then $-\infty=\liminf_{n\to\infty}S_n<\limsup_{n\to\infty}S_n=\infty$ \emph{a.s.}
\end{theorem}
We apply this result first with $\Sigma:=\Sigma_N$ and $\bm P:=\bm P_B^{(N)}$, which is clearly irreducible and aperiodic. We claim that the stationary distribution $\bm\pi_B^{(N)}$ is the $N$-fold product measure $\bm\pi\times\bm\pi\times\cdots\times\bm\pi$, where $\bm\pi=(\pi_0,\pi_1,\pi_2)$ denotes the stationary distribution of the three-state chain in $\Sigma_1$ with one-step transition matrix
$$
\bm P_B^{(1)}=\left(\begin{array}{ccc}
0&p_0&q_0\\
q_1&0&p_1\\
p_2&q_2&0
\end{array}\right).
$$
Indeed,
\begin{eqnarray*}
&&\sum_{\bm x}\pi_{x_1}\cdots\pi_{x_N}\bm P_B^{(N)}(\bm x,\bm y)\\
&&\quad{}=\sum_{i=1}^N\pi_{y_1}\cdots\pi_{y_{i-1}}\pi_{y_{i+1}}\cdots\pi_{y_N}\\
\noalign{\vglue-3mm}
&&\qquad\qquad\qquad\qquad\qquad{}\cdot\sum_{x_i:x_i\ne y_i}\pi_{x_i}\bm P_B^{(N)}((y_1,\ldots,y_{i-1},x_i,y_{i+1},\ldots,y_N),\bm y)\\
&&\quad{}=N^{-1}\sum_{i=1}^N\pi_{y_1}\cdots\pi_{y_N}\\
&&\quad{}=\pi_{y_1}\cdots\pi_{y_N},
\end{eqnarray*}
where the first equality holds because state $\bm y$ can be reached in one step only from states $\bm x$ that differ from $\bm y$ at exactly one coordinate.
Alternatively, we could take $\Sigma:=\bar\Sigma_N$ and $\bm P:=\bar{\bm P}_B^{(N)}$ from Table \ref{Table1}. In this case the unique stationary distribution is multinomial$(N,\bm\pi)$.
Next, let us determine the value of $\mu$ in the theorem. We have
\begin{eqnarray*}
\mu_B^{(N)}&=&\bm\pi_B^{(N)}\dot{\bm P}_B^{(N)}\bm 1=\sum_{\bm x}\pi_{x_1}\cdots\pi_{x_N}\sum_{i=1}^N N^{-1}(p_{x_i}-q_{x_i})\\
&=&N^{-1}\sum_{(n_0,n_1,n_2)}{N\choose n_0,n_1,n_2}\pi_0^{n_0}\pi_1^{n_1}\pi_2^{n_2}[n_0(p_0-q_0)+n_1(p_1-q_1)\\
\noalign{\vglue-4mm}
&&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad{}+n_2(p_2-q_2)]\\
&=&\pi_0(p_0-q_0)+\pi_1(p_1-q_1)+\pi_2(p_2-q_2)=\mu_B^{(1)}=0
\end{eqnarray*}
because the parameterization (\ref{param}) with $\eps=0$ was chosen to ensure the last equality.
Now we apply the theorem with $\Sigma:=\Sigma_N$ and $\bm P:=\bm P_{(\gamma,1-\gamma)}^{(N)}=\gamma\bm P_{A'}^{(N)}+(1-\gamma)\bm P_B^{(N)}$, where $0<\gamma<1$, which is also irreducible and aperiodic (because $\bm P_B^{(N)}$ is). Here the unique stationary distribution $\bm\pi_{(\gamma,1-\gamma)}^{(N)}$ is complicated. For example, in the simplest case, $\gamma=1/2$ and $N=2$,
\begin{eqnarray*}
\bm\pi_{(1/2,1/2)}^{(2)}(0,0)&=&(1 + \rho^2) (31 + 47 \rho + 60 \rho^2 + 47 \rho^3 + 31 \rho^4)/d,\\
\bm\pi_{(1/2,1/2)}^{(2)}(0,1)&=&\bm\pi_{(1/2,1/2)}^{(2)}(1,0)=2(1 + \rho) (1 + \rho^2) (11 + 15 \rho + 9 \rho^2 + 19 \rho^3)/d,\\
\bm\pi_{(1/2,1/2)}^{(2)}(0,2)&=&\bm\pi_{(1/2,1/2)}^{(2)}(2,0)=2(1 + \rho) (1 + \rho^2) (19 + 9 \rho + 15 \rho^2 + 11 \rho^3)/d,\\
\bm\pi_{(1/2,1/2)}^{(2)}(1,1)&=&(1 + \rho) (19 + 21 \rho + 48 \rho^2 + 59 \rho^3 + 27 \rho^4 + 42 \rho^5)/d,\\
\bm\pi_{(1/2,1/2)}^{(2)}(1,2)&=&\bm\pi_{(1/2,1/2)}^{(2)}(2,1)=6 (1 + \rho)^2 (1 + \rho^2) (4 + \rho + 4 \rho^2)/d,\\
\bm\pi_{(1/2,1/2)}^{(2)}(2,2)&=&(1 + \rho) (42 + 27 \rho + 59 \rho^2 + 48 \rho^3 + 21 \rho^4 + 19 \rho^5)/d,
\end{eqnarray*}
where $d:=2 (13 - 2 \rho + 13 \rho^2) (10 + 20 \rho + 21 \rho^2 + 20 \rho^3 + 10 \rho^4)$. In particular, each entry of $\bm\pi_{(1/2,1/2)}^{(2)}$ is the ratio of two degree-6 polynomials in $\rho$. In another simple case, $\gamma=1/2$ and $N=3$, each entry of $\bm\pi_{(1/2,1/2)}^{(3)}$ is the ratio of two degree-14 polynomials in $\rho$. Fortunately, explicit formulas such as these are unnecessary to evaluate $\mu_{(\gamma,1-\gamma)}^{(N)}$.
Let $\bar{\bm\pi}_{(\gamma,1-\gamma)}^{(N)}$ denote the corresponding stationary distribution on $\bar\Sigma_N$. Then the mean profit per turn to the ensemble of players is
\begin{eqnarray}\label{mu_C-prelim}
\mu_{(\gamma,1-\gamma)}^{(N)}&=&\bm\pi_{(\gamma,1-\gamma)}^{(N)}\dot{\bm P}_{(\gamma,1-\gamma)}^{(N)}\bm 1\nonumber\\
&=&(1-\gamma)\sum_{\bm x}\bm\pi_{(\gamma,1-\gamma)}^{(N)}(x_1,\ldots,x_N)\sum_{i=1}^N N^{-1}(p_{x_i}-q_{x_i})\\
&=&N^{-1}(1-\gamma)\sum_{(n_0,n_1,n_2)}\bar{\bm\pi}_{(\gamma,1-\gamma)}^{(N)}(n_0,n_1,n_2) [n_0(p_0-q_0)+n_1(p_1-q_1)\nonumber\\
\noalign{\vglue-4mm}
&&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\;{}+n_2(p_2-q_2)]\nonumber\\
&=&N^{-1}(1-\gamma)\{\E_{\bar{\bm\pi}_{(\gamma,1-\gamma)}^{(N)}}[n_0](p_0-q_0)+\E_{\bar{\bm\pi}_{(\gamma,1-\gamma)}^{(N)}}[n_1](p_1-q_1)\nonumber\\
&&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad{}+\E_{\bar{\bm\pi}_{(\gamma,1-\gamma)}^{(N)}}[n_2](p_2-q_2)\}.\nonumber
\end{eqnarray}
Now by Table \ref{Table1}, we can compute
\begin{eqnarray*}
\E[n_0'-n_0]&=&\gamma{-2n_0(n_0-1)-n_0n_1+n_1(n_1-1)+2n_1n_2-n_2n_0+n_2(n_2-1)\over N(N-1)}\\
&&\quad{}+(1-\gamma){-n_0p_0-n_0q_0+n_1q_1+n_2p_2\over N}\\
&=&{\gamma(N-3n_0)+(1-\gamma)[n_0(-1)+n_1q_1+n_2p_2]\over N}.
\end{eqnarray*}
Similarly,
\begin{eqnarray*}
\E[n_1'-n_1]&=&{\gamma(N-3n_1)+(1-\gamma)[n_0p_0+n_1(-1)+n_2q_2]\over N},\\
\E[n_2'-n_2]&=&{\gamma(N-3n_2)+(1-\gamma)[n_0q_0+n_1p_1+n_2(-1)]\over N}.
\end{eqnarray*}
In each of these equations, we have used $n_0+n_1+n_2=N$ to simplify, with the result that all the quadratic terms cancel and the right sides are linear in $(n_0,n_1,n_2)$, at least if we replace the $N$ in the numerators by $n_0+n_1+n_2$.
Next we take expectations with respect to $\bar{\bm\pi}_{(\gamma,1-\gamma)}^{(N)}$ to obtain
\begin{eqnarray*}
(0,0,0)&=&(\E_{\bar{\bm\pi}_{(\gamma,1-\gamma)}^{(N)}}[n_0],\E_{\bar{\bm\pi}_{(\gamma,1-\gamma)}^{(N)}}[n_1],\E_{\bar{\bm\pi}_{(\gamma,1-\gamma)}^{(N)}}[n_2])\left[\gamma\left(\begin{array}{rrr}
-2&1&1\\
1&-2&1\\
1&1&-2
\end{array}\right)\right.\\
&&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad{}\left.{}+(1-\gamma)\left(\begin{array}{rrr}
-1&p_0&q_0\\
q_1&-1&p_1\\
p_2&q_2&-1
\end{array}\right)\right],
\end{eqnarray*}
which with $\E_{\bar{\bm\pi}_{(\gamma,1-\gamma)}^{(N)}}[n_0]+\E_{\bar{\bm\pi}_{(\gamma,1-\gamma)}^{(N)}}[n_1]+\E_{\bar{\bm\pi}_{(\gamma,1-\gamma)}^{(N)}}[n_2]=N$ uniquely determines the vector $(\E_{\bar{\bm\pi}_{(\gamma,1-\gamma)}^{(N)}}[n_0], \E_{\bar{\bm\pi}_{(\gamma,1-\gamma)}^{(N)}}[n_1],\E_{\bar{\bm\pi}_{(\gamma,1-\gamma)}^{(N)}}[n_2])$ because the matrix within brackets is an irreducible infinitesimal matrix. Substituting into (\ref{mu_C-prelim}) and using our parametrization (\ref{param}) with $\eps=0$, we obtain
\begin{equation}\label{mu_C}
\mu_{(\gamma,1-\gamma)}^{(N)}={3 \gamma(1-\gamma) (1 - \rho)^3 (1 + \rho)\over
2 (1 + \rho + \rho^2)^2 + \gamma (5 + 10 \rho + 6 \rho^2 + 10 \rho^3 + 5 \rho^4) + 2 \gamma^2 (1 + \rho + \rho^2)^2},
\end{equation}
which does not depend on $N$ and is positive if $0<\rho<1$, zero if $\rho=1$, and negative if $\rho>1$, indicating that the Parrondo effect is present, regardless of $\gamma\in(0,1)$, if $\rho\ne1$. (In the case $\rho>1$, the effect is sometimes referred to as a \textit{reverse} Parrondo effect. We will not make this distinction.) Temporarily denoting $\mu_{(\gamma,1-\gamma)}^{(N)}$ by $\mu_{(\gamma,1-\gamma)}^{(N)}(\rho)$ to emphasize its dependence on $\rho$, we note that
$$
\mu_{(\gamma,1-\gamma)}^{(N)}(1/\rho)=-\mu_{(\gamma,1-\gamma)}^{(N)}(\rho),
$$
a fact that can also be proved probabilistically (Ethier and Lee 2009).
When $\gamma=1/2$, this reduces to
$$
\mu_{(1/2,1/2)}^{(N)}={3 (1 - \rho)^3 (1 + \rho)\over 2(10 + 20 \rho + 21 \rho^2 + 20 \rho^3 + 10 \rho^4)}.
$$
As we will see in Section \ref{coincidence}, this formula appears elsewhere in the literature of Parrondo's paradox.
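The closed form (\ref{mu_C}) can be checked numerically for small $N$: build
$\bm P_{(\gamma,1-\gamma)}^{(N)}$ on $\Sigma_N$ directly, compute its
stationary distribution, and evaluate $\bm\pi\dot{\bm P}\bm 1$. The following
Python sketch (the function names are ad hoc) carries this out for $N=2,3$ and
a few values of $(\gamma,\rho)$ and agrees with (\ref{mu_C}) to within
rounding error.
\begin{verbatim}
# State space {0,1,2}^N (first approach), parameterization (param) with eps = 0.
import numpy as np
from itertools import product

def mu_numeric(N, gamma, rho):
    p = [rho**2 / (1 + rho**2), 1 / (1 + rho), 1 / (1 + rho)]
    states = list(product(range(3), repeat=N))
    idx = {s: k for k, s in enumerate(states)}
    PB = np.zeros((len(states), len(states)))
    PA = np.zeros_like(PB)
    for s in states:
        for i in range(N):
            up = list(s); up[i] = (s[i] + 1) % 3      # player i wins at game B
            dn = list(s); dn[i] = (s[i] - 1) % 3      # player i loses at game B
            PB[idx[s], idx[tuple(up)]] += p[s[i]] / N
            PB[idx[s], idx[tuple(dn)]] += (1 - p[s[i]]) / N
            for j in range(N):
                if j != i:                            # game A': i gives one unit to j
                    t = list(s); t[i] = (s[i] - 1) % 3; t[j] = (s[j] + 1) % 3
                    PA[idx[s], idx[tuple(t)]] += 1.0 / (N * (N - 1))
    P = gamma * PA + (1 - gamma) * PB
    w, v = np.linalg.eig(P.T)                         # stationary distribution of P
    pi = np.real(v[:, np.argmin(np.abs(w - 1))]); pi /= pi.sum()
    payoff = np.array([(1 - gamma) * sum(2 * p[x] - 1 for x in s) / N
                       for s in states])
    return float(pi @ payoff)                         # this is pi P-dot 1

def mu_closed(gamma, rho):                            # formula (mu_C)
    num = 3 * gamma * (1 - gamma) * (1 - rho) ** 3 * (1 + rho)
    den = (2 * (1 + rho + rho**2) ** 2
           + gamma * (5 + 10 * rho + 6 * rho**2 + 10 * rho**3 + 5 * rho**4)
           + 2 * gamma**2 * (1 + rho + rho**2) ** 2)
    return num / den

for N in (2, 3):
    for gamma, rho in [(0.5, 1/3), (0.25, 0.7), (0.8, 2.0)]:
        assert abs(mu_numeric(N, gamma, rho) - mu_closed(gamma, rho)) < 1e-8
print("(mu_C) confirmed numerically for N = 2, 3")
\end{verbatim}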
\section{An alternative approach}\label{alternative}
The method used in Section \ref{model} to find $\mu_{(\gamma,1-\gamma)}^{(N)}$ does not extend to finding the variance $(\sigma_{(\gamma,1-\gamma)}^{(N)})^2$. However, a method that does extend is based on the observation that the components of the $N$-dimensional Markov chain controlling the mixed game are themselves Markovian.
For example, when game $B$ is played, the Markov chain for player $i$ (one of the $N$ players) has one-step transition matrix
\begin{equation}\label{pB1N}
\bm P_B^{(1,N)}:=N^{-1}[\bm P_B^{(1)}+(N-1)\bm I_3].
\end{equation}
On the other hand, the redistribution game $A'$ affects player $i$ only if $i$ is chosen as the donor or as the beneficiary (probability $(N-1)/[N(N-1)]=1/N$ for each). This leads to
\begin{equation}\label{pA1N}
\bm P_{A'}^{(1,N)}:=N^{-1}[2\bm P_A^{(1)}+(N-2)\bm I_3],
\end{equation}
where $\bm P_A^{(1)}$ denotes the one-step transition matrix for the original one-player Parrondo game $A$ (not $A'$). In both displayed matrices, the superscript $(1,N)$ is intended to indicate that the underlying Markov chain controls one of the $N$ players.
From these one-step transition matrices we calculate
$$
\dot{\bm P}_B^{(1,N)}:=N^{-1}\dot{\bm P}_B^{(1)},
\qquad
\dot{\bm P}_{A'}^{(1,N)}:=2N^{-1}\dot{\bm P}_A^{(1)},
$$
and
$$
\ddot{\bm P}_B^{(1,N)}:=N^{-1}\ddot{\bm P}_B^{(1)},
\qquad
\ddot{\bm P}_{A'}^{(1,N)}:=2N^{-1}\ddot{\bm P}_A^{(1)}.
$$
With
\begin{eqnarray*}
\bm P&:=&\gamma\bm P_{A'}^{(1,N)}+(1-\gamma)\bm P_B^{(1,N)},\\
\dot{\bm P}&:=&\gamma\dot{\bm P}_{A'}^{(1,N)}+(1-\gamma)\dot{\bm P}_B^{(1,N)},\\
\ddot{\bm P}&:=&\gamma\ddot{\bm P}_{A'}^{(1,N)}+(1-\gamma)\ddot{\bm P}_B^{(1,N)},
\end{eqnarray*}
and with $\bm\pi$, $\bm\Pi$, and $\bm Z$ chosen accordingly and $\bm 1:=(1,1,1)^\T$, we have
$$
\mu_{(\gamma,1-\gamma)}^{(1,N)}=\bm\pi\dot{\bm P}\bm1,\qquad (\sigma_{(\gamma,1-\gamma)}^{(1,N)})^2=\bm\pi\ddot{\bm P}\bm1-(\bm\pi\dot{\bm P}\bm1)^2+2\bm\pi\dot{\bm P}(\bm Z-\bm\Pi)\dot{\bm P}\bm1.
$$
The mean is readily evaluated to give
\begin{eqnarray}\label{mu_C-2}
\mu_{(\gamma,1-\gamma)}^{(N)}&=&N\mu_{(\gamma,1-\gamma)}^{(1,N)}\\
&=&{3 \gamma(1 - \gamma) (1 - \rho)^3 (1 + \rho)\over 2 (1 + \rho + \rho^2)^2 +
\gamma (5 + 10 \rho + 6 \rho^2 + 10 \rho^3 + 5 \rho^4)+ 2 \gamma^2 (1 + \rho + \rho^2)^2},\nonumber
\end{eqnarray}
which is consistent with (\ref{mu_C}) and does not depend on $N$. The variance $(\sigma_{(\gamma,1-\gamma)}^{(1,N)})^2$ is also easily evaluated but is complicated; instead we provide its asymptotic value as $N\to\infty$ ($a_N\sim b_N$ if $\lim_{N\to\infty}a_N/b_N=1$):
\begin{eqnarray}\label{sigma_C-asymp}
&&\!\!\!\!\!(\sigma_{(\gamma,1-\gamma)}^{(1,N)})^2\nonumber\\
&&\!\!\!\!\!{}\sim 9 [8 (1 + \gamma^7) \rho^2 (1 + \rho + \rho^2)^4 \nonumber\\
&&\;\;{} + 4 (\gamma + \gamma^6) (1 + \rho + \rho^2)^2 (1 + 2 \rho + \rho^2 + 2 \rho^3 + \rho^4) (1 + 2 \rho + 12 \rho^2 + 2 \rho^3 + \rho^4)\nonumber\\
&&\;\;{} + 6 (\gamma^2 + \gamma^5) (1 + \rho + \rho^2)^2 (3 + 20 \rho + 30 \rho^2 + 40 \rho^3 + 66 \rho^4 + 40 \rho^5 + 30 \rho^6 \nonumber\\
&&\qquad\qquad\qquad\qquad\qquad\qquad\;\;{}+ 20 \rho^7 + 3 \rho^8) \nonumber\\
&&\;\;{} + (\gamma^3 + \gamma^4) (59 + 306 \rho + 864 \rho^2 + 1738 \rho^3 + 2781 \rho^4 + 3636 \rho^5 + 3912 \rho^6 \nonumber\\
&&\qquad\qquad\qquad\;{} + 3636 \rho^7 + 2781 \rho^8 + 1738 \rho^9 + 864 \rho^{10} + 306 \rho^{11} + 59 \rho^{12})]\nonumber\\
&&\;{} /\{N [2 (1 + \gamma^2 ) (1 + \rho + \rho^2)^2 + \gamma (5 + 10 \rho + 6 \rho^2 + 10 \rho^3 + 5 \rho^4)]^3\}.
\end{eqnarray}
\section{Variance parameter for game $B$}\label{varB}
Let $\bm P$ be the one-step transition matrix for an irreducible aperiodic Markov chain, let $\bm\pi$ be its unique stationary distribution, and let $\bm\Pi$ be the square matrix each of whose rows is $\bm\pi$. Denote by $\bm Z_{\bm P}:=(\bm I-(\bm P-\bm\Pi))^{-1}$ the fundamental matrix of $\bm P$.
\begin{lemma}\label{fundamental}For each positive integer $N$,
$\bm Z_{(1/N)\bm P+(1-1/N)\bm I}-\bm\Pi=N(\bm Z_{\bm P}-\bm\Pi)$.
\end{lemma}
\begin{proof}
The one-step transition matrix $(1/N)\bm P+(1-1/N)\bm I$ has the same stationary distribution $\bm\pi$, hence the same $\bm\Pi$, so
$$
\bm Z_{(1/N)\bm P+(1-1/N)\bm I}=(\bm I-[(1/N)\bm P+(1-1/N)\bm I-\bm\Pi])^{-1}=N(\bm I-(\bm P-N\bm\Pi))^{-1},
$$
hence it suffices to prove that
$$
(\bm I-(\bm P-N\bm\Pi))^{-1}-(1/N)\bm\Pi=(\bm I-(\bm P-\bm\Pi))^{-1}-\bm\Pi.
$$
For this it is enough that
\begin{eqnarray*}
&&(\bm I-(\bm P-N\bm\Pi))[(\bm I-(\bm P-N\bm\Pi))^{-1}-(1/N)\bm\Pi]\\
&&\qquad{}=(\bm I-(\bm P-\bm\Pi)+(N-1)\bm\Pi)[(\bm I-(\bm P-\bm\Pi))^{-1}-\bm\Pi]
\end{eqnarray*}
or
\begin{eqnarray}\label{eq}
&&\bm I-(1/N)(\bm I-(\bm P-N\bm\Pi))\bm\Pi\nonumber\\
&&\qquad{}=\bm I-(\bm I-(\bm P-\bm\Pi))\bm\Pi+(N-1)\bm\Pi[(\bm I-(\bm P-\bm\Pi))^{-1}-\bm\Pi].
\end{eqnarray}
Now $\bm\Pi\bm P=\bm P\bm\Pi=\bm\Pi$, $\bm\Pi^2=\bm\Pi$, and so $\bm\Pi=\bm\Pi(\bm I-(\bm P-\bm\Pi))$ and $\bm\Pi(\bm I-(\bm P-\bm\Pi))^{-1}=\bm\Pi$. So (\ref{eq}) is equivalent to
$$
\bm I-(1/N)(\bm\Pi-(\bm\Pi-N\bm\Pi))=\bm I-(\bm\Pi-(\bm\Pi-\bm\Pi))+(N-1)(\bm\Pi-\bm\Pi)
$$
or $\bm I-\bm\Pi=\bm I-\bm\Pi$, hence (\ref{eq}), and therefore the lemma, is established.
\end{proof}
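Lemma \ref{fundamental} is also easily confirmed numerically; the following
sketch does so for a randomly generated $4$-state chain and a few values of
$N$ (all choices here are arbitrary and serve only as a sanity check).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
P = rng.random((4, 4)); P /= P.sum(axis=1, keepdims=True)   # random stochastic matrix
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))]); pi /= pi.sum()
Pi = np.tile(pi, (4, 1))                                    # each row equals pi
Z = lambda Q: np.linalg.inv(np.eye(4) - (Q - Pi))           # fundamental matrix
for N in (2, 5, 10):
    QN = P / N + (1 - 1 / N) * np.eye(4)
    assert np.allclose(Z(QN) - Pi, N * (Z(P) - Pi))
print("Z_{(1/N)P+(1-1/N)I} - Pi = N (Z_P - Pi) verified numerically")
\end{verbatim}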
We want to use this to evaluate the variance parameter for Toral's $N$-player game $B$, in which there is no redistribution of wealth. The state space is $\Sigma_N$ and the one-step transition probabilities are as previously specified. We assume the parameterization (\ref{param}) with $\eps=0$.
We have seen that the stationary distribution $\bm\pi_B^{(N)}$ is the $N$-fold product measure $\bm\pi\times\bm\pi\times\cdots\times\bm\pi$, where $\bm\pi=(\pi_0,\pi_1,\pi_2)$ denotes the stationary distribution of the three-state chain with one-step transition matrix $\bm P_B^{(1)}$.
Specifically,
$$
\pi_0={1+\rho^2\over2(1+\rho+\rho^2)},\quad \pi_1={\rho(1+\rho)\over2(1+\rho+\rho^2)},\quad \pi_2={1+\rho\over2(1+\rho+\rho^2)}.
$$
In principle, we could use the formula
$\sigma^2:=\bm\pi\ddot{\bm P}\bm 1-(\bm\pi\dot{\bm P}\bm 1)^2+2\bm\pi\dot{\bm P}(\bm Z-\bm\Pi)\dot{\bm P}\bm 1$,
but the evaluation of the $3^N\times 3^N$ fundamental matrix $\bm Z$ is difficult, so we take a different approach.
The key observation is that each coordinate of the $N$-dimensional Markov chain is a one-dimensional Markov chain with one-step transition matrix (\ref{pB1N}) or
$$
\bm P_B^{(1,N)}:=(1/N)\bm P_B^{(1)}+(1-1/N)\bm I_3.
$$
Further, the coordinate processes are independent if their initial states are, and they are if the initial state of the $N$-dimensional process has the stationary distribution $\bm\pi_B^{(N)}$ on $\Sigma_N$.
As already noted in Section \ref{alternative}, $\dot{\bm P}_B^{(1,N)}=(1/N)\dot{\bm P}_B^{(1)}$ and $\ddot{\bm P}_B^{(1,N)}=(1/N)\ddot{\bm P}_B^{(1)}$. By Lemma \ref{fundamental}, $\bm Z_B^{(1,N)}-\bm\Pi=N(\bm Z_B^{(1)}-\bm\Pi)$,
so (since $\mu_B^{(1,N)}=N^{-1}\mu_B^{(1)}=0$)
\begin{eqnarray*}
(\sigma_B^{(1,N)})^2&:=&\bm\pi\ddot{\bm P}_B^{(1,N)}\bm 1+2\bm\pi\dot{\bm P}_B^{(1,N)}(\bm Z_B^{(1,N)}-\bm\Pi)\dot{\bm P}_B^{(1,N)}\bm 1\\
&\;=&N^{-1}[\bm\pi\ddot{\bm P}_B^{(1)}\bm 1+2\bm\pi\dot{\bm P}_B^{(1)}(\bm Z_B^{(1)}-\bm\Pi)\dot{\bm P}_B^{(1)}\bm 1]\\
&\;=&N^{-1}(\sigma_B^{(1)})^2=N^{-1}\bigg({3\rho\over1+\rho+\rho^2}\bigg)^2.
\end{eqnarray*}
Finally, let $S_n$ denote the profit to the ensemble of $N$ players after $n$ plays of game $B$, with $S_n^{[i]}$ denoting the profit to player $i$. Then $S_n=S_n^{[1]}+\cdots+S_n^{[N]}$ and the summands are independent (assuming the stationary initial distribution mentioned above), hence
\begin{eqnarray}\label{sigmaB2}
(\sigma_B^{(N)})^2&=&\lim_{n\to\infty}n^{-1}\Var(S_n)=N\lim_{n\to\infty}n^{-1}\Var(S_n^{[1]})\nonumber\\
&=&N(\sigma_B^{(1,N)})^2=\bigg({3\rho\over1+\rho+\rho^2}\bigg)^2,
\end{eqnarray}
yielding a simple and explicit formula for $(\sigma_B^{(N)})^2$, which does not depend on $N$.
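The formula (\ref{sigmaB2}) can be confirmed numerically from the one-player
chain: the sketch below evaluates $(\sigma_B^{(1)})^2$ directly from
(\ref{mu,sigma2}) and compares it with $(3\rho/(1+\rho+\rho^2))^2$ for a few
values of $\rho$.
\begin{verbatim}
import numpy as np

def var_B_one_player(rho):
    p0, p1 = rho**2 / (1 + rho**2), 1 / (1 + rho)            # (param) with eps = 0
    P = np.array([[0, p0, 1 - p0], [1 - p1, 0, p1], [p1, 1 - p1, 0]])
    W = np.array([[0, 1, -1], [-1, 0, 1], [1, -1, 0]])       # payoff: +1 win, -1 loss
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))]); pi /= pi.sum()
    Pi = np.tile(pi, (3, 1))
    Z = np.linalg.inv(np.eye(3) - (P - Pi))
    Pd, Pdd = P * W, P * W * W
    one = np.ones(3)
    mu = pi @ Pd @ one                                       # equals 0 here
    return pi @ Pdd @ one - mu**2 + 2 * pi @ Pd @ (Z - Pi) @ Pd @ one

for rho in (1/3, 0.5, 2.0):
    assert abs(var_B_one_player(rho) - (3 * rho / (1 + rho + rho**2)) ** 2) < 1e-10
print("(sigma_B^(1))^2 = (3 rho/(1+rho+rho^2))^2 confirmed numerically")
\end{verbatim}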
\section{Variance parameter for random mixtures}\label{variance_C}
With $S_n$ denoting the profit to the ensemble of $N$ players after $n$ plays of the mixed game, let $S_n^{[i]}$ denote the profit to player $i$ (one of the $N$ players) after $n$ plays of the mixed game. Then
$$
S_n=\sum_{i=1}^N S_n^{[i]},
$$
so
\begin{eqnarray*}
\Var(S_n)&=&\sum_{i=1}^N\Var(S_n^{[i]})+2\sum_{1\le i<j\le N}\Cov(S_n^{[i]},S_n^{[j]})\\
&=&N\Var(S_n^{[1]})+N(N-1)\Cov(S_n^{[1]},S_n^{[2]}).
\end{eqnarray*}
Dividing by $n$ and letting $n\to\infty$, we find that
\begin{equation}\label{sigma_C^N}
(\sigma_{(\gamma,1-\gamma)}^{(N)})^2=N(\sigma_{(\gamma,1-\gamma)}^{(1,N)})^2+N(N-1)\sigma_{(\gamma,1-\gamma)}^{([1,2],N)},
\end{equation}
where the last superscript is intended to indicate that the underlying Markov chain controls players 1 and 2 of the $N$ players.
We know how to evaluate $(\sigma_{(\gamma,1-\gamma)}^{(1,N)})^2$, so it remains to find $\sigma_{(\gamma,1-\gamma)}^{([1,2],N)}$.
For this we will need an extension of (\ref{xi_n})--(\ref{mu,sigma2}). With the same assumptions on $\{X_n\}_{n\ge0}$ (an irreducible, aperiodic, finite Markov chain in $\Sigma$ with one-step transition matrix $\bm P$ and unique stationary distribution $\bm\pi$), we let $w^{[1]},w^{[2]}:\Sigma\times\Sigma\mapsto{\bf R}$ be two functions with $\bm W^{[1]}$ and $\bm W^{[2]}$ denoting the corresponding matrices, and define
$$
\xi^{[1]}_n:=w^{[1]}(X_{n-1},X_n),\quad \xi^{[2]}_n:=w^{[2]}(X_{n-1},X_n),\qquad n\ge1,
$$
and
$$
S_n^{[1]}:=\xi^{[1]}_1+\cdots+\xi^{[1]}_n,\quad S_n^{[2]}:=\xi^{[2]}_1+\cdots+\xi^{[2]}_n,\qquad n\ge1.
$$
Let ${\bm\Pi}$ and ${\bm Z}$ be associated with $\bm P$ in the usual way. Denote by $\bm P^{[1]}$, $\bm P^{[2]}$, and $\bm P^{[1,2]}$ the Hadamard products $\bm P\circ\bm W^{[1]}$, $\bm P\circ\bm W^{[2]}$, and $\bm P\circ\bm W^{[1]}\circ\bm W^{[2]}$, resp., and let $\bm 1:=(1,1,\ldots,1)^\T$. Then define the covariance parameter
\begin{eqnarray*}
\sigma^{[1,2]}&:=&\bm\pi\bm P^{[1,2]}\bm 1-(\bm\pi\bm P^{[1]}\bm 1)(\bm\pi\bm P^{[2]}\bm 1)\nonumber\\
&&\quad{}+\bm\pi\bm P^{[1]}(\bm Z-\bm\Pi)\bm P^{[2]}\bm 1+\bm\pi\bm P^{[2]}(\bm Z-\bm\Pi)\bm P^{[1]}\bm 1.
\end{eqnarray*}
The interpretation of this parameter is as follows.
\begin{theorem}
Under the above assumptions, and with the distribution of $X_0$ arbitrary,
$$
\lim_{n\to\infty}n^{-1}\Cov(S^{[1]}_n,S^{[2]}_n)=\sigma^{[1,2]}.
$$
\end{theorem}
\begin{proof}
The proof is similar to the proof that $\lim_{n\to\infty}n^{-1}\Var(S_n)=\sigma^2$ in Theorem \ref{EL}, which is just the special case $w^{[1]}=w^{[2]}=w$.
\end{proof}
We now want to apply this to find $\sigma_{(\gamma,1-\gamma)}^{([1,2],N)}$. This involves only players 1 and 2, for which we need only a (9-state) Markov chain in $\Sigma_2$. The reduced model, which does not distinguish between the players but only counts how many players there are of each type, is insufficient here.
Thinking of $(i,j)\in\Sigma_2$ as the base-3 representation of the integer $3i+j$, we order the elements of $\Sigma_2$ by their values (0--8). The one-step transition matrix for the profit to players 1 and 2 when $N$ players are playing game $B$ is
$$
\bm P_B^{(2,N)}:=N^{-1}[2\bm P_B^{(2)}+(N-2)\bm I_9],
$$
where $\bm P_B^{(2)}$ is as in Section \ref{model} with $N=2$. The superscript $(2,N)$ is intended to indicate that the underlying Markov chain controls two of the $N$ players. The one-step transition matrix for the profit to players 1 and 2 when $N$ players are playing game $A'$ is
$$
\bm P_{A'}^{(2,N)}:=[N(N-1)]^{-1}[2\bm P_{A_0}+4(N-2)\bm P_{A_1}+(N-2)(N-3)\bm I_9],
$$
where $\bm P_{A_0}$ is a $9\times9$ matrix with two entries (each equal to 1/2) in each row, corresponding to one-unit transfers $1\to2$ and $2\to1$; similarly, $\bm P_{A_1}$ is a $9\times9$ matrix with four entries (each equal to 1/4) in each row, corresponding to one-unit transfers $1\to\cdot$, $\cdot\to1$, $2\to\cdot$, and $\cdot\to2$, where $\cdot$ represents the players other than 1 and 2. The functions $w^{[1]}$ and $w^{[2]}$ can be specified as follows. Corresponding to matrices $\bm P_B^{(2)}$ and $\bm P_{A_1}$, the function $w^{[1]}$ is 1 at (1 wins) and at $\cdot\to1$; it is $-1$ at (1 loses) and at $1\to\cdot$; and it is 0 at (2 wins) or (2 loses) and at $\cdot\to2$ and $2\to\cdot$. Corresponding to matrix $\bm P_{A_0}$, the function $w^{[1]}$ is 1 at $2\to1$; it is $-1$ at $1\to2$. The function $w^{[2]}$ is defined exactly in the same way but with the roles of 1 and 2 reversed.
From these one-step transition matrices we calculate
$$
({\bm P}_B^{(2,N)})^{[1]}:=2N^{-1}({\bm P}_B^{(2)})^{[1]},\qquad({\bm P}_B^{(2,N)})^{[2]}:=2N^{-1}({\bm P}_B^{(2)})^{[2]},
$$
\begin{eqnarray*}
({\bm P}_{A'}^{(2,N)})^{[1]}&:=&[N(N-1)]^{-1}[2({\bm P}_{A_0})^{[1]}+4(N-2)({\bm P}_{A_1})^{[1]}],\\
({\bm P}_{A'}^{(2,N)})^{[2]}&:=&[N(N-1)]^{-1}[2({\bm P}_{A_0})^{[2]}+4(N-2)({\bm P}_{A_1})^{[2]}],
\end{eqnarray*}
$({\bm P}_B^{(2,N)})^{[1,2]}:=\bm0$, and
$$
({\bm P}_{A'}^{(2,N)})^{[1,2]}:=2[N(N-1)]^{-1}({\bm P}_{A_0})^{[1,2]}.
$$
With
\begin{eqnarray*}
\bm P&:=&\gamma\bm P_{A'}^{(2,N)}+(1-\gamma)\bm P_B^{(2,N)},\\
{\bm P}^{[1]}&:=&\gamma({\bm P}_{A'}^{(2,N)})^{[1]}+(1-\gamma)({\bm P}_B^{(2,N)})^{[1]},\\
{\bm P}^{[2]}&:=&\gamma({\bm P}_{A'}^{(2,N)})^{[2]}+(1-\gamma)({\bm P}_B^{(2,N)})^{[2]},\\
{\bm P}^{[1,2]}&:=&\gamma({\bm P}_{A'}^{(2,N)})^{[1,2]}+(1-\gamma)({\bm P}_B^{(2,N)})^{[1,2]},
\end{eqnarray*}
and with $\bm\pi$, $\bm\Pi$, and $\bm Z$ chosen accordingly and $\bm 1:=(1,1,\ldots,1)^\T$ (of length 9), we can evaluate
\begin{eqnarray*}
\sigma_{(\gamma,1-\gamma)}^{([1,2],N)}&:=&\bm\pi\bm P^{[1,2]}\bm1-(\bm\pi\bm P^{[1]}\bm1)(\bm\pi\bm P^{[2]}\bm1)\\
&&\quad{}+\bm\pi\bm P^{[1]}(\bm Z-\bm\Pi)\bm P^{[2]}\bm 1
+\bm\pi\bm P^{[2]}(\bm Z-\bm\Pi)\bm P^{[1]}\bm 1
\end{eqnarray*}
as a function of $N$, at least if we fix $\rho$ and $\gamma$.
With $\rho=1/3$ and $\gamma=1/2$, we conclude that
\begin{eqnarray}\label{sigmaC2}
(\sigma_{(1/2,1/2)}^{(N)})^2&=&27 (-36821493886409 + 71724260647553 N - 46282959184439 N^2 \nonumber\\
&&\qquad{}+ 9902542819695 N^3)\\
&&\quad/[8331019058 (-269171 + 524347 N - 338381 N^2 + 72405 N^3)],\nonumber
\end{eqnarray}
which is monotonically increasing in $N\ge2$, ranging from
$$
(\sigma_{(1/2,1/2)}^{(2)})^2={114315959583\over258261590798}\approx0.442636
$$
to
$$
\lim_{N\to\infty}(\sigma_{(1/2,1/2)}^{(N)})^2={5941525691817\over13404609664322}\approx0.443245.
$$
Let us summarize our results for random mixtures.
Let $S_n$ be the cumulative profit after $n$ turns to the ensemble of $N\ge2$ players playing the mixed game $\gamma A'+(1-\gamma)B$, where $0\le\gamma\le1$. We assume the parameterization (\ref{param}) with $\eps=0$.
\begin{theorem}\label{limit-thm}
If $\gamma=1$ so that game $A'$ is always played, then $\P(S_n=0$ for all $n\ge1)=1$.
If $\gamma=0$ so that game $B$ is always played, then $\{S_n-S_{n-1}\}_{n\ge1}$ satisfies the SLLN and the CLT with mean and variance parameters $\mu_B^{(N)}=0$ and $(\sigma_B^{(N)})^2$ as in (\ref{sigmaB2}).
If $0<\gamma<1$ so that both games are played, then $\{S_n-S_{n-1}\}_{n\ge1}$ satisfies the SLLN and the CLT with mean and variance parameters $\mu_{(\gamma,1-\gamma)}^{(N)}$ as in (\ref{mu_C}) (or (\ref{mu_C-2})) and $(\sigma_{(\gamma,1-\gamma)}^{(N)})^2$, at least when $\rho=1/3$ and $\gamma=1/2$, as in (\ref{sigmaC2}). When $\rho\ne1/3$ or $\gamma\ne1/2$, we implicitly assume that $(\sigma_{(\gamma,1-\gamma)}^{(N)})^2>0$.
\end{theorem}
\begin{proof}
The first conclusion is obvious. The second and third conclusions follow from Theorem 1, though the mean and variance parameters are obtained not from the theorem but by using the methods described in the text.
\end{proof}
To compare our results with those of Toral (2002), we must restore the bias parameter $\eps>0$. For simplicity, let us take $\gamma=1/2$, as he did. Then
\begin{eqnarray}\label{mu_C:eps}
\mu_{(1/2,1/2)}^{(N)}&=&\{3 [2 (1 - \rho)^3 (1 + \rho) - \eps (13 + 26 \rho + 30 \rho^2 + 26 \rho^3 + 13 \rho^4)\\
&&\;{}+ \eps^2 (1 - \rho)^3 (1 + \rho) - 2 \eps^3 (1 + \rho)^2 (1 + \rho^2) ]\}/\{2 [2(10 + 20 \rho \nonumber\\
&&\;{}+ 21 \rho^2 + 20 \rho^3 + 10 \rho^4) - \eps (1 - \rho)^3 (1 + \rho) + 3 \eps^2 (1 + \rho)^2 (1 + \rho^2)]\}.\nonumber
\end{eqnarray}
Toral reported a simulation with $\rho=1/3$, $\gamma=1/2$, $\eps=1/100$, and $N=200$. Actually, $\eps=1/1000$ was intended (personal communication 2011). With $\rho=1/3$ and $\eps=1/1000$, (\ref{mu_C:eps}) reduces to $193387599/6704101000\approx0.028846$, with which Toral's estimate, 0.029, is consistent.
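This value is easy to confirm in exact arithmetic. A short Python sketch (the function name is ours) evaluating (\ref{mu_C:eps}) is:
\begin{verbatim}
from fractions import Fraction

def mu_half_half(rho, eps):
    # evaluate the displayed formula for mu_{(1/2,1/2)}^{(N)}; it does not involve N
    num = 3 * (2*(1 - rho)**3*(1 + rho)
               - eps*(13 + 26*rho + 30*rho**2 + 26*rho**3 + 13*rho**4)
               + eps**2*(1 - rho)**3*(1 + rho)
               - 2*eps**3*(1 + rho)**2*(1 + rho**2))
    den = 2 * (2*(10 + 20*rho + 21*rho**2 + 20*rho**3 + 10*rho**4)
               - eps*(1 - rho)**3*(1 + rho)
               + 3*eps**2*(1 + rho)**2*(1 + rho**2))
    return num / den

print(mu_half_half(Fraction(1, 3), Fraction(1, 1000)))
# 193387599/6704101000, approximately 0.028846
\end{verbatim}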
\section{Mean profit for nonrandom patterns}\label{patterns}
Toral (2002) omitted discussion of the case in which his games $A'$ and $B$ are played in a nonrandom periodic pattern such as $A'BBA'BBA'BB\cdots$. Let us denote by $[r,s]$ the pattern $(A')^rB^s$ repeated ad infinitum. We would like to apply the results of Ethier and Lee (2009) to the pattern $[r,s]$, showing that the Parrondo effect is present for all $r,s\ge1$. (Unlike in the original one-player Parrondo games, the case $r=s=1$ is included.) We do this by showing that the mean profit per turn for the ensemble of players, $\mu_{[r,s]}^{(N)}$, is positive if $0<\rho<1$, zero if $\rho=1$, and negative if $\rho>1$, for all $r,s\ge1$ and $N\ge2$. As we will see, here the mean parameter depends on $N$ and it takes a particularly simple form in the limit as $N\to\infty$.
First, Theorem 6 of Ethier and Lee (2009) is applicable. (The assumption there that $\bm P_A$ is irreducible and aperiodic is unnecessary.) But again it is simplest to apply the results to one or two players at a time, as we did in Sections \ref{alternative} and \ref{variance_C}. Let us begin by finding the mean parameter $\mu_{[r,s]}^{(N)}$.
We first recall the original one-player Parrondo games, in which
$$
\setlength{\arraycolsep}{1mm}
{\bm P}_A:={1\over2}\left(\begin{array}{ccc}
0&1&1\\
1&0&1\\
1&1&0
\end{array}\right),\quad
\setlength{\arraycolsep}{1mm}
{\bm P}_B:=\left(\begin{array}{ccc}
0&p_0&q_0\\
q_1&0&p_1\\
p_2&q_2&0
\end{array}\right),\quad
\setlength{\arraycolsep}{1mm}
{\bm W}:=\left(\begin{array}{rrr}
0&1&-1\\
-1&0&1\\
1&-1&0
\end{array}\right).
$$
Ethier and Lee (2009) showed that
$$
\mu_{[r,s]}={1\over r+s}\,{\bm \pi}_{s,r}{\bm R}\,\,{\rm diag}\!\left(s,\,{1-e_1^s\over1-e_1},\,{1-e_2^s\over1-e_2}\right){\bm L}{\bm\zeta},
$$
where $\bm\pi_{s,r}$ is the unique stationary distribution of $\bm P_B^s\bm P_A^r$, $\bm R$ is the matrix of right eigenvectors of $\bm P_B$, $e_1$ and $e_2$ are the nonunit eigenvalues of $\bm P_B$, $\bm L:=\bm R^{-1}$, and $\bm\zeta:=(\bm P_B\circ\bm W)\bm1$. They further showed that this formula reduces algebraically to
$$
\mu_{[r,s]}=E_{r,s}/D_{r,s},
$$
where
\begin{eqnarray}\label{Ers}
E_{r,s}&:=&3a_r\{[2+(3a_r-1)(e_1^s+e_2^s-2e_1^s e_2^s)-(e_1^s+e_2^s)](1-\rho)(1+\rho)S\nonumber\\
&&\qquad\qquad{}+a_r(e_2^s-e_1^s)[5(1+\rho)^2(1+\rho^2)-4\rho^2]\}(1-\rho)^2
\end{eqnarray}
and
\begin{equation}\label{Drs}
D_{r,s}:=4(r+s)[1+(3a_r-1)e_1^s][1+(3a_r-1)e_2^s](1+\rho+\rho^2)^2S
\end{equation}
with $a_r:=(1-(-1/2)^r)/3$ and $S:=\sqrt{(1+\rho^2)(1+4\rho+\rho^2)}$.
We apply these results but with $\bm P_A$ and $\bm P_B$ replaced by
$$
\bm P_{A'}^{(1,N)}:=N^{-1}[2\bm P_A^{(1)}+(N-2)\bm I_3]\quad{\rm and}\quad
\bm P_B^{(1,N)}:=N^{-1}[\bm P_B^{(1)}+(N-1)\bm I_3].
$$
Now $(\bm P_{A'}^{(1,N)})^r$ is given by the same formula as $\bm P_A^r$ but with $a_r$ redefined as
\begin{equation}\label{a_r}
a_r:=[1-(1-3/N)^r]/3,
\end{equation}
and $(\bm P_B^{(1,N)})^s$ has the same spectral representation as $\bm P_B^s$ but with the nonunit eigenvalues replaced by
\begin{equation}\label{e1,e2}
e_1:=1-{1-e_1^\circ\over N},\qquad e_2:=1-{1-e_2^\circ\over N},
\end{equation}
where $e_1^\circ$ and $e_2^\circ$ are the nonunit eigenvalues of $\bm P_B$, namely
$$
e_1^\circ:=-{1\over2}+{(1-\rho)S\over2(1+\rho)(1+\rho^2)},\qquad
e_2^\circ:=-{1\over2}-{(1-\rho)S\over2(1+\rho)(1+\rho^2)}.
$$
The matrices $\bm R$ and $\bm L$ are unchanged.
We conclude that
\begin{equation}\label{mu[r,s]-eigenvalues}
\mu_{[r,s]}^{(N)}=NE_{r,s}/D_{r,s},
\end{equation}
where $E_{r,s}$ and $D_{r,s}$ are as in (\ref{Ers}) and (\ref{Drs}) with only the changes (\ref{a_r}) and (\ref{e1,e2}). For example, this leads to
\begin{eqnarray*}
\mu_{[1,1]}^{(N)}&=&3N (2N-3) (1 - \rho)^3 (1 + \rho)/
\{2 [18 (1 + \rho + \rho^2)^2 -
3 N (13 + 26 \rho \\
&&\quad{}+ 30 \rho^2 + 26 \rho^3 + 13 \rho^4) +
2N^2 (10 + 20 \rho + 21 \rho^2 + 20 \rho^3 + 10 \rho^4)]\}
\end{eqnarray*}
and
\begin{eqnarray*}
&&\!\!\!\!\!\mu_{[1,2]}^{(N)}\\
&&{}=2 N (1 - \rho)^3 (1 + \rho) [- 3 (1 + \rho + \rho^2)^2 + N (10 + 20 \rho + 21 \rho^2 + 20 \rho^3 + 10 \rho^4)\\
&&\quad{} -9 N^2 (1 + \rho)^2 (1 + \rho^2) + 3 N^3 (1 + \rho)^2 (1 +\rho^2)]
/[36 (1 + \rho + \rho^2)^4\\
&&\quad{} - 12 N (1 + \rho + \rho^2)^2 (11 + 22 \rho + 24 \rho^2 + 22 \rho^3 + 11 \rho^4) + N^2 (193 + 772 \rho\\
&&\quad{} + 1660 \rho^2 + 2548 \rho^3 + 2938 \rho^4 + 2548 \rho^5 + 1660 \rho^6 + 772 \rho^7 + 193 \rho^8) \\
&&\quad{} - 3 N^3 (1 + \rho)^2 (43 + 86 \rho + 145 \rho^2 + 172 \rho^3 + 145 \rho^4 + 86 \rho^5 + 43 \rho^6) \\
&&\quad{} + N^4 (1 + \rho)^2 (35 + 70 \rho + 113 \rho^2 + 140 \rho^3 + 113 \rho^4 + 70 \rho^5 + 35 \rho^6)].
\end{eqnarray*}
Both of these functions are positive for all $N\ge2$ when $0<\rho<1$, as can be seen by factoring $(1 - \rho)^3 (1 + \rho)$ out of the numerators and then expanding the remaining numerators and the denominators in powers of $N-2$, noticing that all coefficients are polynomials in $\rho$ with only positive coefficients.
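For specified $r$, $s$, $N$, and $\rho$, the quantity (\ref{mu[r,s]-eigenvalues}) can also be evaluated numerically. The following Python sketch (the function name is ours) implements (\ref{Ers}), (\ref{Drs}), (\ref{a_r}), and (\ref{e1,e2}) directly.
\begin{verbatim}
from math import sqrt

def mu_rs_N(r, s, N, rho):
    # mu_{[r,s]}^{(N)} = N E_{r,s} / D_{r,s} with a_r and e_1, e_2 as redefined above
    S = sqrt((1 + rho**2) * (1 + 4*rho + rho**2))
    c = (1 - rho)*S/(2*(1 + rho)*(1 + rho**2))
    e1 = 1 - (1.5 - c) / N          # 1 - (1 - e_1^o)/N
    e2 = 1 - (1.5 + c) / N          # 1 - (1 - e_2^o)/N
    a = (1 - (1 - 3/N)**r) / 3
    E = 3*a*((2 + (3*a - 1)*(e1**s + e2**s - 2*e1**s*e2**s) - (e1**s + e2**s))
             * (1 - rho)*(1 + rho)*S
             + a*(e2**s - e1**s)*(5*(1 + rho)**2*(1 + rho**2) - 4*rho**2)) * (1 - rho)**2
    D = 4*(r + s)*(1 + (3*a - 1)*e1**s)*(1 + (3*a - 1)*e2**s)*(1 + rho + rho**2)**2*S
    return N * E / D

# mu_rs_N(1, 1, 2, 1/3) is approximately 0.02983, matching mu_{[1,1]}^{(N)} at N = 2
\end{verbatim}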
Although these formulas become increasingly complicated as $r$ and $s$ increase, their limits as $N\to\infty$ have a very simple form. To see this, it suffices to note that
$$
a_r={r\over N}+O\bigg({1\over N^2}\bigg),\qquad e_1^s=1-{(1-e_1^\circ)s\over N}+O\bigg({1\over N^2}\bigg),
$$
and similarly for $e_2^s$, so (\ref{mu[r,s]-eigenvalues}) converges as $N\to\infty$ to
$$
{3 r s (1 - \rho)^3 (1 + \rho) \over9 r^2 (1 + \rho)^2 (1 + \rho^2) + 9 r s (1 + \rho)^2 (1 + \rho^2) + 2 s^2 (1 + \rho + \rho^2)^2},
$$
which coincides with (\ref{mu_C}) (or (\ref{mu_C-2})) when $\gamma=r/(r+s)$. This limit is positive if $0<\rho<1$, zero if $\rho=1$, and negative if $\rho>1$, so we conclude that the Parrondo effect is present for all $r,s\ge1$, as long as $N$ is large enough and $\rho\ne1$. This relationship between the random-mixture case and the nonrandom-pattern case is not present in the original one-player Parrondo games except in a single case ($r=2$, $s=1$). (We have confirmed this for $r,s\ge1$ and $r+s\le75$ and expect that it is true generally.)
We now verify that the Parrondo effect is always present. We begin with a lemma.
\begin{lemma}\label{lemma1}
If $0<a<b<c$, then $(c^n-b^n)/(b^n-a^n)$ is increasing in $n\ge1$.
\end{lemma}
\begin{proof}
Divide both numerator and denominator by $b^n$ to see that we can, without loss of generality, assume that $b=1$. So the aim is to show that
$$
{c^n-1\over1-a^n}<{c^{n+1}-1\over1-a^{n+1}}, \qquad n\ge1,
$$
or that
$$
{c^n-1\over c^{n+1}-1}<{a^n-1\over a^{n+1}-1}, \qquad n\ge1.
$$
For this it is enough to fix $n\ge1$ and show that the function
$$
f(x):={x^n-1\over x^{n+1}-1},
$$
defined by continuity at $x=1$, is decreasing on $(0,\infty)$. Its derivative has the same sign as
$$
-[x^{n+1}-(n+1)x+n],
$$
so it is enough that the quantity within brackets is positive for $x>1$ and $0<x<1$. First suppose that $x>1$. Then
\begin{eqnarray*}
x^{n+1}-(n+1)x+n&=&(x-1+1)^{n+1}-(n+1)(x-1)-1\\
&=&(x-1)^{n+1}+{n+1\choose1}(x-1)^n+\cdots+{n+1\choose n-1}(x-1)^2\\
&>&0.
\end{eqnarray*}
Next suppose that $0<x<1$. Then
\begin{eqnarray*}
x^{n+1}-(n+1)x+n&=&x^{n+1}-1-(n+1)(x-1)\\
&=&(x-1)(x^n+x^{n-1}+\cdots+x+1)-(n+1)(x-1)\\
&=&(x-1)[x^n+x^{n-1}+\cdots+x+1-(n+1)]\\
&>&0.
\end{eqnarray*}
This completes the proof.
\end{proof}
\begin{theorem}\label{positivity}
$\mu_{[r,s]}^{(N)}$ is positive if $0<\rho<1$, zero if $\rho=1$, and negative if $\rho>1$, for all $r,s\ge1$ and $N\ge2$.
\end{theorem}
\begin{proof}
Denoting $\mu_{[r,s]}^{(N)}$ temporarily by $\mu_{[r,s]}^{(N)}(\rho)$ to emphasize its dependence on $\rho$, it can be shown algebraically or probabilistically that
$$
\mu_{[r,s]}^{(N)}(1/\rho)=-\mu_{[r,s]}^{(N)}(\rho),
$$
so it will suffice to treat the case $0<\rho<1$.
First, $|3a_r-1|<1$ and $e_1,e_2\in(0,1)$, so $D_{r,s}>0$. Since $a_r>0$, it suffices to show that
\begin{eqnarray*}
&&[2+(3a_r-1)(e_1^s+e_2^s-2e_1^s e_2^s)-(e_1^s+e_2^s)](1-\rho)(1+\rho)S\\
&&\qquad\qquad{}+a_r(e_2^s-e_1^s)[5(1+\rho)^2(1+\rho^2)-4\rho^2]>0.
\end{eqnarray*}
Discarding the $-4\rho^2$ term (since $e_2^s-e_1^s<0$), it is enough to show that
\begin{eqnarray}\label{suff}
&&(1-e_1^s)[1+(3a_r-1)e_2^s]+(1-e_2^s)[1+(3a_r-1)e_1^s]\nonumber\\
&&\qquad\qquad{}-a_r(e_1^s-e_2^s){5(1+\rho)(1+\rho^2)\over(1-\rho)S}>0.
\end{eqnarray}
Now $e_1^\circ=(-1+x)/2$ and $e_2^\circ=(-1-x)/2$, where $x:=(1-\rho)S/[(1+\rho)(1+\rho^2)]\in(0,1)$, so
$e_1=(2N-3+x)/(2N)$ and $e_2=(2N-3-x)/(2N)$.
Let us first assume that $N\ge3$. Then $3a_r-1\le0$, so, replacing $e_1^s$ and $e_2^s$ within brackets in (\ref{suff}) by 1, we need only show that
$$
3(1-e_1^s)+3(1-e_2^s)>(e_1^s-e_2^s){5(1+\rho)(1+\rho^2)\over(1-\rho)S},
$$
or that
\begin{equation}\label{x-ineq}
{2(2N)^s-[(2N-3+x)^s+(2N-3-x)^s]\over[(2N-3+x)^s-(2N-3-x)^s]/x}>{5\over3}.
\end{equation}
The denominator is a polynomial of degree $s-1$ in $x$ with positive coefficients, while the term within brackets in the numerator is a polynomial of degree $s$ in $x$ with positive coefficients. So the left side of (\ref{x-ineq}) is decreasing in $x$, and it suffices to verify it at $x=1$. For this we notice that
\begin{eqnarray*}
{2(2N)^s-[(2N-2)^s+(2N-4)^s]\over(2N-2)^s-(2N-4)^s}=2{N^s-(N-1)^s\over(N-1)^s-(N-2)^s}+1,
\end{eqnarray*}
and the fraction on the right is increasing in $s\ge1$ by Lemma \ref{lemma1}. At $s=1$ the value is 3, so the desired inequality holds.
It remains only to consider the case $N=2$. The same argument works if $r$ is even because then $3a_r-1\le0$ still holds. If $r$ is odd, we can replace the quantities within brackets in (\ref{suff}) by 1 and can replace $a_r$ in the second line of (\ref{suff}) by $a_1=1/2$. Thus, we need only verify (\ref{x-ineq}) with 5/3 replaced by 5/2, and of course it still holds.
\end{proof}
\section{Remark on a ``coincidence''}\label{coincidence}
We can prove algebraically that
\begin{eqnarray*}
\lim_{M\to\infty}\mu_{[r,r]}^{(M)}&=&\mu_{(1/2,1/2)}^{(N)}=(3/2)\mu_{(2/3,1/3)}^{(1)}=(3/2)\mu_{[2,1]}^{(1)}=\mu_{[1,1]}^{(2)}\\
&=&{3 (1 - \rho)^3 (1 + \rho)\over 2(10 + 20 \rho + 21 \rho^2 + 20 \rho^3 + 10 \rho^4)}
\end{eqnarray*}
for all $r\ge1$ and $N\ge2$, where superscripts refer to the number of players. (For superscripts equal to 1, the games are $A$ and $B$, the original one-player Parrondo games. For superscripts 2 or larger, the games are $A'$ and $B$.) The first equality is from Section \ref{patterns}. Can the others be explained probabilistically? We can elucidate at least the second equality, while the third and fourth remain partially unexplained.
Since $\mu_{(\gamma,1-\gamma)}^{(N)}$ does not depend on $N$, it is enough to verify the identity with $N=2$. Let us consider the profit of one player when two players are playing. Recalling (\ref{pB1N}) and (\ref{pA1N}) with $N=2$, we have
$$
\bm P_{A'}^{(1,2)}:=\bm P_A^{(1)}\qquad{\rm and}\qquad\bm P_B^{(1,2)}:=(1/2)(\bm P_B^{(1)}+\bm I_3).
$$
The former is just the one-step transition matrix for the original one-player game $A$, and we have
$$
(1/2)\bm P_{A'}^{(1,2)}+(1/2)\bm P_B^{(1,2)}=(1/2)\bm P_A^{(1)}+(1/4)\bm P_B^{(1)}+(1/4)\bm I_3.
$$
The left side describes the $(1/2,1/2)$ mixture of games $A'$ and $B$, as viewed by one of two players. Its mean is $(1/2)\mu_{(1/2,1/2)}^{(2)}$. The right side describes the $(2/3,1/3)$-mixture of games $A$ and $B$ if we ignore the $(1/4)\bm I_3$ term and normalize to ensure a stochastic matrix. That term just slows down the process, making one-fourth of its transitions null. So its mean is $(3/4)\mu_{(2/3,1/3)}^{(1)}$. These are equal, so $\mu_{(1/2,1/2)}^{(2)}=(3/2)\mu_{(2/3,1/3)}^{(1)}$, as claimed.
This can be regarded as a more correct version of the argument sketched in the third paragraph of page L307 of Toral (2002) and attributed to an anonymous referee of that paper.
\section{Variance parameter for nonrandom patterns}
We can evaluate the variance parameter $(\sigma_{[r,s]}^{(N)})^2$ for the nonrandom pattern $[r,s]$ in the $N$-player games directly for small $N$, using the state space $\bar\Sigma_N$ with its ${N+2\choose2}$ states. We apply (25)--(27) of Ethier and Lee (2009), obtaining, for example,
\begin{eqnarray*}
&&(\sigma_{[1,1]}^{(2)})^2\\
&&\quad{}=[9 (466 + 2680 \rho + 7621 \rho^2 + 16310 \rho^3 + 29018 \rho^4 + 41582 \rho^5 + 51471 \rho^6\\
&&\qquad\;{} + 55998 \rho^7 + 51471 \rho^8 + 41582 \rho^9 + 29018 \rho^{10} + 16310 \rho^{11} + 7621 \rho^{12}\\
&&\qquad\;{} + 2680 \rho^{13} + 466 \rho^{14})]/[4 (2 - \rho + 2 \rho^2) (10 + 20 \rho + 21 \rho^2 + 20 \rho^3 + 10 \rho^4)^3],
\end{eqnarray*}
which reduces when $\rho=1/3$ to $74176355601/141627323986\approx0.523743$. Since $N=2$, this is a computation involving $6\times6$ matrices.
To get results for larger $N$, we apply the method of considering one or two players at a time. By analogy with (\ref{sigma_C^N}), we have
\begin{equation*}
(\sigma_{[r,s]}^{(N)})^2=N(\sigma_{[r,s]}^{(1,N)})^2+N(N-1)\sigma_{[r,s]}^{([1,2],N)},
\end{equation*}
where
\begin{eqnarray*}
(\sigma_{[r,s]}^{(1,N)})^2&=&{1\over r+s}\bigg\{\sum_{u=0}^{r-1}[\bm\pi\bm P_A^u\ddot{\bm P}_A\bm1-(\bm\pi\bm P_A^u\dot{\bm P}_A\bm1)^2]\\
&&\qquad\quad{} +\sum_{v=0}^{s-1}[\bm\pi\bm P_A^r\bm P_B^v\ddot{\bm P}_B\bm1-(\bm\pi\bm P_A^r\bm P_B^v\dot{\bm P}_B\bm1)^2] \nonumber\\
&&\qquad\quad{}+2\sum_{0\le u<v\le r-1}\bm\pi\bm P_A^u\dot{\bm P}_A(\bm P_A^{v-u-1}-\bm\Pi\bm P_A^v)\dot{\bm P}_A\bm1 \nonumber\\
&&\qquad\quad{}+2\sum_{u=0}^{r-1}\sum_{v=0}^{s-1}\bm\pi\bm P_A^u\dot{\bm P}_A (\bm P_A^{r-u-1}-\bm\Pi\bm P_A^r)\bm P_B^v\dot{\bm P}_B\bm1\nonumber\\
&&\qquad\quad{}+2\sum_{0\le u<v\le s-1}\bm\pi\bm P_A^r\bm P_B^u\dot{\bm P}_B(\bm P_B^{v-u-1}-\bm\Pi\bm P_A^r\bm P_B^v)\dot{\bm P}_B\bm1\\
&&\qquad\quad{}+2\bigg[\sum_{u=0}^{r-1}\sum_{v=0}^{r-1}\bm\pi\bm P_A^u\dot{\bm P}_A\bm P_A^{r-u-1}\bm P_B^s(\bm Z-\bm\Pi)\bm P_A^v\dot{\bm P}_A\bm1 \nonumber\\
&&\qquad\qquad\quad{}+\sum_{u=0}^{r-1}\sum_{v=0}^{s-1}\bm\pi\bm P_A^u\dot{\bm P}_A\bm P_A^{r-u-1}\bm P_B^s(\bm Z-\bm\Pi)\bm P_A^r\bm P_B^v\dot{\bm P}_B\bm1 \nonumber\\
&&\qquad\qquad\quad{}+\sum_{u=0}^{s-1}\sum_{v=0}^{r-1}\bm\pi\bm P_A^r\bm P_B^u\dot{\bm P}_B\bm P_B^{s-u-1}(\bm Z-\bm\Pi)\bm P_A^v\dot{\bm P}_A\bm1 \nonumber\\
&&\qquad\qquad\quad{}+\sum_{u=0}^{s-1}\sum_{v=0}^{s-1}\bm\pi\bm P_A^r\bm P_B^u\dot{\bm P}_B\bm P_B^{s-u-1}(\bm Z-\bm\Pi)\bm P_A^r\bm P_B^v\dot{\bm P}_B\bm1\bigg]\bigg\}
\end{eqnarray*}
(from Ethier and Lee 2009) with $\bm P_A$ and $\bm P_B$ replaced by $\bm P_{A'}^{(1,N)}$ and $\bm P_B^{(1,N)}$ as defined in Section \ref{alternative}.
The covariance term, $\sigma_{[r,s]}^{([1,2],N)}$, requires an extension of the preceding formula to covariances. We omit the details of the derivation and just give the result:
\begin{eqnarray*}
\sigma_{[r,s]}^{([1,2],N)}&=&{1\over r+s}\bigg\{\sum_{u=0}^{r-1}[\bm\pi\bm P_A^u\bm P_A^{[1,2]}\bm1-(\bm\pi\bm P_A^u\bm P_A^{[1]}\bm1)(\bm\pi\bm P_A^u\bm P_A^{[2]}\bm1)]\\
&&\quad\qquad{}+\sum_{v=0}^{s-1}[\bm\pi\bm P_A^r\bm P_B^v\bm P_B^{[1,2]}\bm1-(\bm\pi\bm P_A^r\bm P_B^v\bm P_B^{[1]}\bm1)(\bm\pi\bm P_A^r\bm P_B^v\bm P_B^{[2]}\bm1)]\\
&&\quad\qquad{}+\sum_{0\le u<v\le r-1}[\bm\pi\bm P_A^u\bm P_A^{[1]}(\bm P_A^{v-u-1}-\bm\Pi\bm P_A^v)\bm P_A^{[2]}\bm1\\
\noalign{\vglue-3mm}
&&\qquad\qquad\qquad\qquad\qquad{}+\bm\pi\bm P_A^u\bm P_A^{[2]}(\bm P_A^{v-u-1}-\bm\Pi\bm P_A^v)\bm P_A^{[1]}\bm1]\\
&&\quad\qquad{}+\sum_{u=0}^{r-1}\sum_{v=0}^{s-1}[\bm\pi\bm P_A^u\bm P_A^{[1]}(\bm P_A^{r-u-1}-\bm\Pi\bm P_A^r)\bm P_B^v\bm P_B^{[2]}\bm1\\
\noalign{\vglue-2mm}
&&\qquad\qquad\qquad\qquad\qquad{}+\bm\pi\bm P_A^u\bm P_A^{[2]}(\bm P_A^{r-u-1}-\bm\Pi\bm P_A^r)\bm P_B^v\bm P_B^{[1]}\bm1]\\
&&\quad\qquad{}+\sum_{0\le u<v\le s-1}[\bm\pi\bm P_A^r\bm P_B^u\bm P_B^{[1]}(\bm P_B^{v-u-1}-\bm\Pi\bm P_A^r\bm P_B^v)\bm P_B^{[2]}\bm1\\
\noalign{\vglue-3mm}
&&\qquad\qquad\qquad\qquad\qquad{}+\bm\pi\bm P_A^r\bm P_B^u\bm P_B^{[2]}(\bm P_B^{v-u-1}-\bm\Pi\bm P_A^r\bm P_B^v)\bm P_B^{[1]}\bm1]\\
&&\quad\qquad{}+\sum_{u=0}^{r-1}\sum_{v=0}^{r-1}[\bm\pi\bm P_A^u\bm P_A^{[1]}\bm P_A^{r-u-1}\bm P_B^s(\bm Z-\bm\Pi)\bm P_A^v\bm P_A^{[2]}\bm1\\
\noalign{\vglue-3mm}
&&\qquad\qquad\qquad\qquad{}+\bm\pi\bm P_A^u\bm P_A^{[2]}\bm P_A^{r-u-1}\bm P_B^s(\bm Z-\bm\Pi)\bm P_A^v\bm P_A^{[1]}\bm1]\\
&&\quad\qquad{}+\sum_{u=0}^{r-1}\sum_{v=0}^{s-1}[\bm\pi\bm P_A^u\bm P_A^{[1]}\bm P_A^{r-u-1}\bm P_B^s(\bm Z-\bm\Pi)\bm P_A^r\bm P_B^v\bm P_B^{[2]}\bm1\\
\noalign{\vglue-3mm}
&&\qquad\qquad\qquad\qquad{}+\bm\pi\bm P_A^u\bm P_A^{[2]}\bm P_A^{r-u-1}\bm P_B^s(\bm Z-\bm\Pi)\bm P_A^r\bm P_B^v\bm P_B^{[1]}\bm1]\\
&&\quad\qquad{}+\sum_{u=0}^{s-1}\sum_{v=0}^{r-1}[\bm\pi\bm P_A^r\bm P_B^u\bm P_B^{[1]}\bm P_B^{s-u-1}(\bm Z-\bm\Pi)\bm P_A^v\bm P_A^{[2]}\bm1\\
\noalign{\vglue-3mm}
&&\qquad\qquad\qquad\qquad{}+\bm\pi\bm P_A^r\bm P_B^u\bm P_B^{[2]}\bm P_B^{s-u-1}(\bm Z-\bm\Pi)\bm P_A^v\bm P_A^{[1]}\bm1]\\
&&\quad\qquad{}+\sum_{u=0}^{s-1}\sum_{v=0}^{s-1}[\bm\pi\bm P_A^r\bm P_B^u\bm P_B^{[1]}\bm P_B^{s-u-1}(\bm Z-\bm\Pi)\bm P_A^r\bm P_B^v\bm P_B^{[2]}\bm1\\
\noalign{\vglue-3mm}
&&\qquad\qquad\qquad\qquad{}+\bm\pi\bm P_A^r\bm P_B^u\bm P_B^{[2]}\bm P_B^{s-u-1}(\bm Z-\bm\Pi)\bm P_A^r\bm P_B^v\bm P_B^{[1]}\bm1]\bigg\}
\end{eqnarray*}
with $\bm P_A$ and $\bm P_B$ replaced by $\bm P_{A'}^{(2,N)}$ and $\bm P_B^{(2,N)}$ as defined in Section \ref{variance_C}.
By analogy to (\ref{sigmaC2}), with $\rho=1/3$, we conclude that
\begin{eqnarray}\label{sigma[1,1]2}
(\sigma_{[1,1]}^{(N)})^2&=&9 (615639408424560 - 6408926620214040 N + 29541545957894139 N^2\nonumber \\
&&\quad{} - 80214814200037491 N^3 + 143582273075781927 N^4\nonumber\\
&&\quad{} - 179192557802543130 N^5 + 160434481099881996 N^6 \nonumber\\
&&\quad{}- 104152159483211664 N^7 + 48799091685478468 N^8\nonumber\\
&&\quad{} - 16137521956595246 N^9 + 3584898779152593 N^{10} \nonumber\\
&&\quad{} - 481633399018397 N^{11} + 29679648590925 N^{12})\nonumber\\
&&\;\;/[2 (1521 - 3174 N + 1609 N^2)^3 (3285360 -
9816120 N + 12525387 N^2\nonumber\\
&&\quad{} - 8725589 N^3 + 3501928 N^4 - 768851 N^5 +
72405 N^6)],
\end{eqnarray}
which is monotonically decreasing in $N$, ranging from
$$
(\sigma_{[1,1]}^{(2)})^2={74176355601\over141627323986}\approx0.523743
$$
to
$$
\lim_{N\to\infty}(\sigma_{[1,1]}^{(N)})^2={5935929718185\over13404609664322}\approx0.442827.
$$
This last number differs slightly from the corresponding limit in the random-mixture case, showing that the variances lack the nice property that the means enjoy.
Similar formulas can be found for other $[r,s]$, assuming $\rho=1/3$. With $[r,s]=[1,2]$ (resp., $[2,1]$, $[2,2]$, $[2,4]$, $[3,3]$, $[4,2]$), we get in place of (\ref{sigma[1,1]2}) a ratio of degree-21 (resp., 24, 33, 51, 54, 57) polynomials in $N$, and
$$
\lim_{N\to\infty}(\sigma_{[r,s]}^{(N)})^2=\begin{cases}
{1891312136577\over6060711605323}\approx0.312061&\text{if $[r,s]=[2,1]$ or $[4,2]$,}\\
\noalign{\medskip}
{5935929718185\over13404609664322}\approx0.442827&\text{if $[r,s]=[1,1]$ or $[2,2]$ or $[3,3]$,}\\
\noalign{\medskip}
{136286243910\over252688187761}\approx0.539346&\text{if $[r,s]=[1,2]$ or $[2,4]$.}
\end{cases}
$$
In particular, it appears that the result for $[r,s]$ depends on $r$ and $s$ only through $r/(r+s)$. We have confirmed this only in the several cases shown above; a proof for all integers $r,s\ge1$ seems difficult.
Let us summarize our results for nonrandom patterns.
Let $S_n$ be the cumulative profit after $n$ turns to the ensemble of $N\ge2$ players playing the nonrandom pattern $(A')^r B^s$ (denoted by $[r,s]$) repeatedly, with $r$ and $s$ being positive integers. We assume the parameterization (\ref{param}) with $\eps=0$.
\begin{theorem}
$\{S_n-S_{n-1}\}_{n\ge1}$ satisfies the SLLN and the CLT with mean and variance parameters
$\mu_{[r,s]}^{(N)}$ computable for specified $r,s\ge1$ as a function of $N\ge2$ and $\rho>0$ and $(\sigma_{[r,s]}^{(N)})^2$ computable for specified $r,s\ge1$, $N\ge2$, and $\rho>0$. We implicitly assume that $(\sigma_{[r,s]}^{(N)})^2>0$.
\end{theorem}
\begin{proof}
The proof is as for Theorem \ref{limit-thm}, except that, instead of citing Theorem 1, we cite Theorem 6 of Ethier and Lee (2009).
\end{proof}
\section{Sample variance of players' capitals}
Recall our notation in which $S_n^{[i]}$ is the capital of player $i$ (one of the $N$ players) after $n$ trials, so that
$$
S_n:=\sum_{i=1}^N S_n^{[i]}
$$
is the total capital of the $N$ players after $n$ trials.
Toral (2002) simulated the sample variance of $S_n^{[1]},\ldots,S_n^{[N]}$, namely
$$
{1\over N}\bigg[\sum_{i=1}^N (S_n^{[i]})^2-N\bigg({1\over N}\sum_{i=1}^N S_n^{[i]}\bigg)^2\bigg],
$$
finding that it grows approximately linearly in $n$. Let us replace this sample variance by its unbiased (at least in the case of a random sample) version,
$$
(\hat{\sigma}^{(N)})_n^2:={1\over N-1}\bigg[\sum_{i=1}^N (S_n^{[i]})^2-N\bigg({1\over N}\sum_{i=1}^N S_n^{[i]}\bigg)^2\bigg],
$$
and let us consider its expectation $\E[(\hat{\sigma}^{(N)})_n^2]$ instead of the random variable itself. We can evaluate this using the results already obtained. Indeed,
\begin{eqnarray*}
\E[(\hat{\sigma}^{(N)})_n^2]&=&{1\over N-1}\bigg[\sum_{i=1}^N \E[(S_n^{[i]})^2]-{1\over N}\E[(S_n)^2]\bigg]\\
&\;=&{1\over N-1}\bigg[\sum_{i=1}^N \{\Var(S_n^{[i]})+(\E[S_n^{[i]}])^2\}-{1\over N}\{\Var(S_n)+(\E[S_n])^2\}\bigg]\\
&\;=&{1\over N-1}\bigg[N\Var(S_n^{[1]})-{1\over N}\Var(S_n)\bigg]\\
&\;\sim&n{1\over N-1}\bigg[N(\sigma^{(1,N)})^2-{1\over N}[N(\sigma^{(1,N)})^2+N(N-1)\sigma^{([1,2],N)}]\bigg]\\
&\;=&n[(\sigma^{(1,N)})^2-\sigma^{([1,2],N)}]
\end{eqnarray*}
as $n\to\infty$, hence
\begin{equation*}
\lim_{n\to\infty}n^{-1}\E[(\hat{\sigma}^{(N)})_n^2]=(\sigma^{(1,N)})^2-\sigma^{([1,2],N)}.
\end{equation*}
We have omitted subscripts $A'$, $B$, $(\gamma,1-\gamma)$, and $[r,s]$ because we want to apply this formula in all cases.
Let us first consider the random-mixture case with $\rho$ arbitrary. With $\gamma=1/2$ we have
\begin{eqnarray}\label{asymp-sample-var}
&&\!\!\!\!\!\!\!\!\!\!(\sigma_{(1/2,1/2)}^{(1,N)})^2-\sigma_{(1/2,1/2)}^{([1,2],N)}\nonumber\\
&&{}\sim27 (97 + 606 \rho + 1926 \rho^2 + 4262 \rho^3 + 7284 \rho^4 + 9894 \rho^5 + 10911 \rho^6\nonumber \\
&&\qquad\quad{}+ 9894 \rho^7 + 7284 \rho^8 + 4262 \rho^9 + 1926 \rho^{10} + 606 \rho^{11} + 97 \rho^{12})\nonumber\\
&&\qquad{}/[2 N (10 + 20 \rho + 21 \rho^2 + 20 \rho^3 + 10 \rho^4)^3]
\end{eqnarray}
as $N\to\infty$, which is (\ref{sigma_C-asymp}) with $\gamma=1/2$. With $\gamma=0$ (only game $B$ is played) we have
$$
(\sigma_B^{(1,N)})^2-\sigma_B^{([1,2],N)}=\bigg({3\rho\over1+\rho+\rho^2}\bigg)^2{1\over N}.
$$
Finally, with $\gamma=1$ (only game $A'$ is played) we have
$$
(\sigma_{A'}^{(1,N)})^2-\sigma_{A'}^{([1,2],N)}={2\over N-1}\sim{2\over N}.
$$
As Toral found, the result for the mixed game lies between those for games $A'$ and $B$, and this is true regardless of $\rho>0$. (Our results are not directly comparable to his because we have taken the bias parameter $\eps$ to be 0.)
Finally, let us consider the nonrandom pattern $[1,1]$. We find that
$(\sigma_{[1,1]}^{(1,N)})^2-\sigma_{[1,1]}^{([1,2],N)}$ has the same asymptotic value as in (\ref{asymp-sample-var}),
so we have another coincidence. It appears that these expected sample variances have essentially the same property that the means enjoy, namely that their asymptotic value for the case of the nonrandom pattern $[r,s]$ depends only on $r/(r+s)$ and $\rho$ and is equal to the asymptotic value for the case of the random mixture with $\gamma=r/(r+s)$. We have confirmed this in the six cases $r,s\ge1$ and $r+s\le4$, but a proof for all integers $r,s\ge1$ seems difficult.
| {
"timestamp": "2011-09-22T02:00:55",
"yymm": "1109",
"arxiv_id": "1109.4454",
"language": "en",
"url": "https://arxiv.org/abs/1109.4454",
"abstract": "Toral (2002) considered an ensemble of N\\geq2 players. In game B a player is randomly selected to play Parrondo's original capital-dependent game. In game A' two players are randomly selected without replacement, and the first transfers one unit of capital to the second. Game A' is fair (with respect to total capital), game B is losing (or fair), and the random mixture {\\gamma}A'+(1-{\\gamma})B is winning, as was demonstrated by Toral for {\\gamma}=1/2 using computer simulation. We prove this, establishing a strong law of large numbers and a central limit theorem for the sequence of profits of the ensemble of players for each {\\gamma}\\in(0,1). We do the same for the nonrandom pattern of games (A')^r B^s for all integers r,s\\geq1. An unexpected relationship between the random-mixture case and the nonrandom-pattern case occurs in the limit as N\\rightarrow\\infty.",
"subjects": "Probability (math.PR)",
"title": "Parrondo's paradox via redistribution of wealth",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9787126432392879,
"lm_q2_score": 0.7248702642896702,
"lm_q1q2_score": 0.7094396923685042
} |
https://arxiv.org/abs/1705.08365 | A Short Proof for a Lower Bound on the Zero Forcing Number | We provide a short proof of a conjecture of Davila and Kenter concerning a lower bound on the zero forcing number $Z(G)$ of a graph $G$. More specifically, we show that $Z(G)\geq (g-2)(\delta-2)+2$ for every graph $G$ of girth $g$ at least $3$ and minimum degree $\delta$ at least $2$. | \section{Introduction}
We consider finite, simple, and undirected graphs and use standard terminology.
For an integer $n$, let $[n]$ denote the set of positive integers at most $n$.
For a graph $G$,
a set $Z$ of vertices of $G$ is a {\it zero forcing set} of $G$
if the elements of $V(G)\setminus Z$ have a linear order $u_1,\ldots,u_k$
such that, for every $i$ in $[k]$,
there is some vertex $v_i$ in $Z\cup \{ u_j:j\in [i-1]\}$ such that
$u_i$ is the only neighbor of $v_i$ outside of $Z\cup \{ u_j:j\in [i-1]\}$;
in particular,
$N_G[v_i]\setminus (Z\cup N_G[v_1]\cup \cdots \cup N_G[v_{i-1}])=\{ u_i\}$ for $i\in [k]$.
The {\it zero forcing number $Z(G)$} of $G$,
defined as the minimum order of a zero forcing set of $G$,
was proposed by the AIM Minimum Rank - Special Graphs Work Group \cite{aim,hv}
as an upper bound on the corank of matrices associated with a given graph.
The same parameter was also considered in connection with
quantum physics \cite{bg,bm,s} and logic circuits \cite{bghsy}.
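To make the color-change process behind this definition concrete, it can be simulated directly; the following brute-force Python sketch (the names are ours, and it is practical only for very small graphs) computes $Z(G)$ from adjacency sets.
\begin{verbatim}
from itertools import combinations

def forces(adj, Z):
    # run the forcing process from Z; return the set of vertices eventually colored
    colored = set(Z)
    changed = True
    while changed:
        changed = False
        for v in list(colored):
            uncolored = [u for u in adj[v] if u not in colored]
            if len(uncolored) == 1:
                colored.add(uncolored[0])
                changed = True
    return colored

def zero_forcing_number(adj):
    # smallest size of a zero forcing set, by exhaustive search
    V = list(adj)
    for k in range(len(V) + 1):
        for Z in combinations(V, k):
            if forces(adj, Z) == set(V):
                return k

# The 4-cycle: Z(C_4) = 2, which equals (g-2)(delta-2)+2 with g = 4 and delta = 2.
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(zero_forcing_number(C4))  # 2
\end{verbatim}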
In \cite{dk} Davila and Kenter conjectured that
\begin{eqnarray}
Z(G)&\geq &(g-2)(\delta-2)+2\label{eb3}
\end{eqnarray}
for every graph $G$ of girth $g$ at least $3$ and minimum degree $\delta$ at least $2$.
They observe that, for $g>6$ and sufficiently large $\delta$ in terms of $g$,
the conjectured bound follows by combining results from \cite{aim2} and \cite{cs}.
The conjectured bound was shown for $g\leq 6$ in \cite{gprs,gr},
Davila and Henning \cite{dh} showed it for $7\leq g\leq 10$,
and, eventually,
Davila, Kalinowski, and Stephen \cite{dks} completed the proof.
The proof in \cite{dks} is rather short itself but relies on \cite{gprs,gr,dh}.
While the cases $g\leq 6$ have rather short proofs,
the proof in \cite{dh} for $7\leq g\leq 10$ extends over more than eleven pages and requires a detailed case analysis.
Therefore, the complete proof of (\ref{eb3}) obtained by combining \cite{gprs,gr,dh,dks} is rather long.
In the present note we propose a considerably shorter and simpler proof.
Our approach only requires a special treatment for the triangle-free case $g=4$ \cite{gprs};
it involves a new lower bound on the zero forcing number
and an application of the Moore bound \cite{ahl}.
\section{Proof of (\ref{eb3})}
Our first result is a natural generalization of the well known fact $Z(G)\geq \delta(G)$ \cite{aim},
where $\delta(G)$ is the minimum degree of a graph $G$.
For a set $X$ of vertices of a graph $G$ of order $n$, let
$N_G(X)=\left(\bigcup\limits_{u\in X}N_G(u)\right)\setminus X$,
$N_G[X]=X\cup N_G(X)$, and
$\delta_p(G)=\min\left\{ |N_G(X)|:X\subseteq V(G)\mbox{ and }|X|=p\right\}$
for $p\in [n]$.
Note that $\delta_1(G)$ equals $\delta(G)$.
\begin{lemma}\label{lemma1}
If $G$ is a graph of order $n$, then $Z(G)\geq \delta_p(G)$ for every $p\in [n]$.
\end{lemma}
{\it Proof:} Let $Z$ be a zero forcing set of minimum order.
Let $u_1,\ldots,u_k$ and $v_1,\ldots,v_k$ be as in the introduction.
Since, by definition, $\delta_p(G)\leq n-p$, the result is trivial for $p\geq k=n-|Z|$,
and we may assume that $p<k$.
As noted above, we have $N_G[v_i]\setminus (Z\cup N_G[v_1]\cup \cdots \cup N_G[v_{i-1}])=\{ u_i\}$ for $i\in [k]$,
which implies that $X=\{ v_1,\ldots,v_p\}$ is a set of $p$ distinct vertices of $G$.
Furthermore, it implies that $|N_G[X]|\leq |Z|+p$,
and, hence, $\delta_p(G)\leq |N_G(X)|=|N_G[X]|-p\leq |Z|$ as required. $\Box$
\medskip
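\noindent For small graphs, $\delta_p(G)$ can be computed by exhaustive search, which makes Lemma \ref{lemma1} easy to check on examples; a minimal Python sketch (the function name is ours) follows.
\begin{verbatim}
from itertools import combinations

def delta_p(adj, p):
    # delta_p(G): the minimum of |N_G(X)| over all p-element vertex sets X
    V = list(adj)
    return min(len(set().union(*(adj[v] for v in X)) - set(X))
               for X in combinations(V, p))

# The 5-cycle: delta_1 = delta_2 = 2, and Z(C_5) = 2, consistent with Lemma 1.
C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(delta_p(C5, 1), delta_p(C5, 2))  # 2 2
\end{verbatim}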
\noindent For later reference, we recall the Moore bound for irregular graphs.
\begin{theorem}[Alon, Hoory, and Linial \cite{ahl}]\label{theoremmoore}
If $G$ is a graph of order $n$, girth at least $2r$ for some integer $r$, and average degree $d$ at least $2$, then
$n\geq 2\sum\limits_{i=0}^{r-1}(d-1)^i$.
\end{theorem}
We also need the following numerical fact.
\begin{lemma}\label{lemma2}
For positive integers $p$ and $f$ with $p\geq 5$ and $2p-1\leq f\leq {p\choose 2}$,
$$\left(1+\frac{2(f-p)}{f+p}\right)^{\lceil\frac{p}{2}\rceil+1}>f-p+1.$$
\end{lemma}
{\it Proof:} For $p\geq 17$,
it follows from $f\geq 2p-1$
that $1+\frac{2(f-p)}{f+p}\geq 1.64$,
and, since $1.64^{\lceil\frac{p}{2}\rceil+1}>{p\choose 2}-p+1$,
the desired inequality follows for these values of $p$.
For the finitely many pairs $(p,f)$ with $5\leq p \leq 16$ and $2p-1\leq f\leq {p\choose 2}$, we verified it using a computer.
$\Box$
\medskip
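\noindent The finite verification mentioned in the proof of Lemma \ref{lemma2} amounts to the following exact check, sketched here in Python (the function name is ours).
\begin{verbatim}
from math import comb
from fractions import Fraction

def lemma2_holds(p, f):
    # check (1 + 2(f-p)/(f+p))^(ceil(p/2)+1) > f - p + 1 in exact arithmetic
    base = 1 + Fraction(2*(f - p), f + p)
    return base**(-(-p // 2) + 1) > f - p + 1

print(all(lemma2_holds(p, f)
          for p in range(5, 17)
          for f in range(2*p - 1, comb(p, 2) + 1)))  # True
\end{verbatim}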
\noindent We proceed to the proof of (\ref{eb3}).
\begin{theorem}
If $G$ is a graph of girth $g$ at least $3$ and minimum degree $\delta$ at least $2$,
then $Z(G)\geq (g-2)(\delta-2)+2$.
\end{theorem}
{\it Proof:} For $g=3$, the inequality simplifies to the known fact $Z(G)\geq \delta(G)$,
and, for $g=4$, it has been shown in \cite{gprs}. Now, let $g\geq 5$.
Let $X$ be a set of $g-2$ vertices of $G$ with $|N_G(X)|=\delta_{g-2}(G)$,
and, let $N=N_G(X)$.
By the girth condition, the components of $G[X]$ are trees,
and no vertex in $N$ has more than one neighbor in any component of $G[X]$.
Let $K_1,\ldots,K_p$ be the vertex sets of the components of $G[X]$.
If $p\geq 3$, and there are two vertices in $N$
that both have neighbors in two distinct components of $G[X]$,
then $G$ contains a cycle of order at most $2+|K_i|+|K_j|\leq 2+(g-2)-(p-2)<g$,
which is a contradiction.
Similarly, if $p=2$, and there are three vertices $u$, $v$, and $w$ in $N$
that each have neighbors in both $K_1$ and $K_2$,
then let $u_i$, $v_i$, and $w_i$ denote the corresponding neighbors in $K_i$ for $i\in [2]$, respectively.
If $G[K_1]$ contains a path between two of the vertices $u_1$, $v_1$, and $w_1$
avoiding the third, then $G$ contains a cycle of order at most $2+(|K_1|-1)+|K_2|=g-1$, which is a contradiction.
By symmetry, this implies $u_1=v_1=w_1$ and $u_2=v_2=w_2$,
and $G$ contains the cycle $u_1uu_2vu_1$ of order $4$, which is a contradiction.
Combining these observations, we obtain
\begin{eqnarray}\label{e1}
\sum\limits_{\{ i, j\}\in {[p]\choose 2}}|N_G(K_i)\cap N_G(K_j)|
\leq
\begin{cases}
{p\choose 2} & \mbox{, for $p\geq 3$, and }\\
2p-2 & \mbox{, for $p\in \{ 1,2\}$}.
\end{cases}
\end{eqnarray}
Let the bipartite graph $H$ arise from $G[X\cup N]$
by contracting each component of $G[X]$ to a single vertex,
and removing all edges of $G[N]$.
Lemma \ref{lemma1} and a simple counting argument imply
\begin{eqnarray*}
Z(G) & \geq & \delta_{g-2}(G)\\
& = & |N|\\
& = & \sum_{u\in V(H)\setminus N}d_H(u)-\sum_{v\in N}(d_H(v)-1)\\
& \geq & \sum_{i=1}^p\Big(\delta|K_i|-2(|K_i|-1)\Big)-\sum_{v\in N}(d_H(v)-1)\\
& = & (g-2)(\delta-2)+2+\left((2p-2)-\sum_{v\in N}(d_H(v)-1)\right).
\end{eqnarray*}
In view of (\ref{eb3}), we may assume $f\geq 2p-1$ for $f=\sum\limits_{v\in N}(d_H(v)-1)$.
Since
$$2p-1\leq f=\sum\limits_{v\in N}(d_H(v)-1)
\leq \sum\limits_{v\in N}{d_H(v)\choose 2}
=\sum\limits_{\{ i, j\}\in {[p]\choose 2}}|N_G(K_i)\cap N_G(K_j)|,$$
(\ref{e1}) implies $p\geq 5$.
Let $H'$ arise by removing all vertices of degree $1$ from $H$.
Since every vertex $u$ in $V(H)\setminus N$ satisfies
$d_H(u)\geq \delta |K_i|-2(|K_i|-1)\geq 2$ for some $i\in [p]$,
the graph $H'$ contains all $p$ vertices of $V(H)\setminus N$.
Let $H'$ contain $q$ vertices of $N$.
Since $H'$ has order $p+q$ and size
$$\sum\limits_{v\in N\cap V(H')}d_H(v)
=q+\sum\limits_{v\in N}(d_H(v)-1)=q+f,$$
its average degree is at least $\frac{2(f+q)}{p+q}$,
which is at least $2$, because $f\geq 2p-1\geq p$.
If $H'$ contains a cycle of order $2\ell$,
then $G$ contains a cycle of order at most $(g-2)-(p-\ell)+\ell$.
By the girth condition,
this implies that the bipartite graph $H'$ has girth at least
$p+2$, if $p$ is even, and $p+3$, if $p$ is odd.
Using Theorem \ref{theoremmoore} and $f\geq q$, we obtain
\begin{eqnarray*}
p+q & \geq & 2\sum\limits_{i=0}^{\lceil\frac{p}{2}\rceil}\left(\frac{2(f+q)}{p+q}-1\right)^i\\
&=& 2\frac{p+q}{2f-2p}\left(\left(1+\frac{2(f-p)}{p+q}\right)^{\lceil\frac{p}{2}\rceil+1}-1\right)\\
&\geq &2\frac{p+q}{2f-2p}\left(\left(1+\frac{2(f-p)}{p+f}\right)^{\lceil\frac{p}{2}\rceil+1}-1\right),
\end{eqnarray*}
which implies
$\left(1+\frac{2(f-p)}{f+p}\right)^{\lceil\frac{p}{2}\rceil+1}\leq f-p+1$.
Since $f\geq 2p-1$, and, by (\ref{e1}), $f\leq {p\choose 2}$,
this contradicts Lemma \ref{lemma2},
which completes the proof.
$\Box$
| {
"timestamp": "2017-05-24T02:10:10",
"yymm": "1705",
"arxiv_id": "1705.08365",
"language": "en",
"url": "https://arxiv.org/abs/1705.08365",
"abstract": "We provide a short proof of a conjecture of Davila and Kenter concerning a lower bound on the zero forcing number $Z(G)$ of a graph $G$. More specifically, we show that $Z(G)\\geq (g-2)(\\delta-2)+2$ for every graph $G$ of girth $g$ at least $3$ and minimum degree $\\delta$ at least $2$.",
"subjects": "Combinatorics (math.CO)",
"title": "A Short Proof for a Lower Bound on the Zero Forcing Number",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.991554374556315,
"lm_q2_score": 0.7154240079185319,
"lm_q1q2_score": 0.7093818047142321
} |
https://arxiv.org/abs/2203.11416 | Statistics on Almost-Fibonacci Pattern-Avoiding Permutations | We prove that $|Av_n(231,312,1432)|$, $|Av_n(312,321,1342)|$ $|Av_n(231,312,4321,21543)|$, and $ |Av_n(321,231,4123,21534)|$, are all equal to $F_{n+1} - 1$ where $F_n$ is the $n$-th Fibonacci number using the convention $F_0 = F_1 = 1$ and $Av_n(S)$ is the set of all permutations of length $n$ that avoid all of the patterns in the set $S$. To do this, we characterize the structures of the permutations in these sets in terms of Fibonacci permutations. Then, we further quantify the structures using statistics such as inversion number and a statistic that measures the length of Fibonacci subsequences. Finally, we encode these statistics in generating functions written in terms of the generating function for Fibonacci permutations. We use these generating functions to find analogs about recurrence relation and addition formulae of Fibonacci identities. | \section{Introduction}
We say two sequences $a_1a_2\ldots a_k$ and $b_1b_2\ldots b_k$ of positive integers are \emph{order isomorphic} whenever $a_i < a_j$ if and only if $b_i < b_j$ for all $1\leq i,j \leq k$. A sequence $\pi$ \emph{contains} a sequence (or pattern) $\sigma$ whenever $\pi$ has a subsequence that is order isomorphic to $\sigma$. A permutation \emph{avoids} a pattern whenever it does not contain that pattern. For any set $S$ of permutations and any non-negative integer $n$, we write $\Av(S)$ to denote the set of all permutations which avoid all of the permutations in $S$ and we write $\Av_n(S)$ to denote the set of permutations of length $n$ in $\Av(S)$. In this context, we call the elements of $S$ \emph{forbidden patterns}.
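These definitions can be checked by brute force for small lengths; the following Python sketch (the function names are ours) tests containment of a pattern and lists the avoiders of a set of patterns.
\begin{verbatim}
from itertools import combinations, permutations

def contains(perm, pattern):
    # True if perm has a subsequence order isomorphic to pattern
    k = len(pattern)
    rel = [pattern[i] < pattern[j] for i in range(k) for j in range(k)]
    for idx in combinations(range(len(perm)), k):
        sub = [perm[i] for i in idx]
        if [sub[i] < sub[j] for i in range(k) for j in range(k)] == rel:
            return True
    return False

def avoiders(n, patterns):
    # all permutations of 1..n avoiding every pattern in the list
    return [p for p in permutations(range(1, n + 1))
            if not any(contains(p, pat) for pat in patterns)]

# len(avoiders(5, [(2,3,1), (3,1,2), (3,2,1)])) == 8 == F_5 with F_0 = F_1 = 1
\end{verbatim}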
Simion and Schmidt \cite{Simion} showed that $|\Av_n(231,312,321)| = F_n$. We will call permutations of this form \emph{Fibonacci permutations}. This Fibonacci structure will show up in the structures of permutations in all of our sets of pattern-avoiding permutations. It is implicit in Simion and Schmidt's proof that every permutation in $\Av(231, 312, 321)$ is made up of consecutive decreasing subsequences of length less than or equal to two where every entry in each subsequence is greater than all entries in prior subsequences. We will soon mention notation to make this description less clunky.
Permutations of length $n$ are bijections from $\{1,2,\ldots,n\}$ to $\{1,2,\ldots,n\}$, so we can plot them on coordinate axes with the horizontal axis representing the position of the entry in a permutation and the vertical axis representing its value. Since permutations are functions, we can use their plots to describe our sets of pattern-avoiding permutations. Given any set of points in the plane, the associated \emph{geometric grid class} is the set of permutations whose graphs can be drawn on the set of points. This definition comes from \cite{grid}, though in our case, we do not need to add the stipulation that no two points on a horizontal or vertical line can be chosen because this is impossible given the sets of points we picked. In this paper, we use grid classes mostly for descriptive and visualization purposes. For example, the set of Fibonacci permutations is the grid class associated with the infinite version of Figure \ref{fig:Fibonacci}, in which there are infinitely many pairs of dots.
\begin{figure}
\centering
\begin{tikzpicture}
\draw (-.5,-.5) -- (-.5,5);
\draw (-.5,-.5) -- (5,-.5);
\draw (-.5,5) -- (5,5);
\draw (5,-.5) -- (5,5);
\filldraw[black] (0,0.5) circle (2pt);
\filldraw[black] (0.5,0) circle (2pt);
\filldraw[black] (1,1.5) circle (2pt);
\filldraw[black] (1.5,1) circle (2pt);
\filldraw[black] (2,2.5) circle (2pt);
\filldraw[black] (2.5,2) circle (2pt);
\filldraw[black] (3,3.5) circle (2pt);
\filldraw[black] (3.5,3) circle (2pt);
\filldraw[black] (4,4.5) circle (2pt);
\filldraw[black] (4.5,4) circle (2pt);
\end{tikzpicture}
\caption{Fibonacci Grid Class}
\label{fig:Fibonacci}
\end{figure}
Our paper focuses on four sets of patterns, which we will label as
$$A_1 = \{231,312,4321,21543\}, A_2 = \{231,321,4123,21534\},$$ $$B_1 = \{231,312,1432\}, \text{ and } B_2 = \{312,321,1342\}.$$
It will soon become clear why they are grouped in this way.
In this paper, we will first use Fibonacci identities and tiling bijections to prove that for $n \geq 1$,
$$|\Av_n(A_1)| = |\Av_n(A_2)| = |\Av_n(B_1)| = |\Av_n(B_2)| = F_{n+1}-1.$$
This equality is somewhat remarkable, as it is an example of \textit{unbalanced Wilf-equivalence}, which is to say that $A_1$ and $A_2$ have different numbers and lengths of patterns than $B_1$ and $B_2$, but the number of permutations of a given length that avoid the patterns in each set is equal. Until recently, there were no known examples of unbalanced Wilf-equivalences between finite sets of patterns, though now many have been discovered, as described in \cite{Bloom} and \cite{Burstein}.
Previous work has demonstrated the value in using statistics to describe the structure of permutations. In \cite{Pudwell}, statistics concerned with relative order of consecutive entries including double ascents, double descents, peaks, and valleys were used to compare structures of Wilf-equivalent sets of permutations such as $\Av(\sigma)$ where $\sigma$ is a pattern of length 3. In \cite{Goyt}, the statistic inversion number is used to examine the structure of Fibonacci permutations. This statistic is then encoded into generating functions to find analogs of Fibonacci identities from which we get the $q$-Fibonacci numbers in \cite{Goyt} and \cite{GoytSagan}.
In this paper, we follow \cite{Goyt} and use inversion number to describe the structure of almost-Fibonacci sets of pattern avoiding permutations. We also consider a statistic that gives the length of the ending Fibonacci subsequence. We encode these two statistics in generating functions which have the form
$$G_n^{A_1}(v,q) = \sum_{\pi \in \Av_n(A_1)}v^{\Fib(\pi)}q^{\inv(\pi)},$$
which we will write in terms of those in \cite{Goyt}. Finally, we use these generating functions to derive analogs of Fibonacci identities similar to those in \cite{Goyt}.
The computational tools involving generating trees to demonstrate this enumeration come from \cite{Vatter} and are well-known. The novel parts of our paper are the structural descriptions of these sets of permutations and the proofs of their enumeration using them.
\section{Enumeration}
Before we prove any results about our sets of pattern-avoiding permutations, we describe their structure. We employ the following definitions for simplicity.
\begin{definition}
If a permutation $\pi$ has length $n$ and a permutation $\sigma$ has length $k$, then we define $\pi \oplus \sigma$ as $\pi$ followed by $\sigma$, where all of the entries of $\sigma$ have been increased by $n$. Similarly, we define $\pi \ominus \sigma$ as $\pi$ followed by $\sigma$, where all of the entries of $\pi$ have been increased by $k$.
\end{definition}
For example, if $\pi = 132$ and $\sigma = 312$, then $\pi \oplus \sigma = 132645$ and $\pi \ominus \sigma = 465312$.
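These two operations can be implemented directly; a minimal Python sketch (the function names are ours) is:
\begin{verbatim}
def oplus(pi, sigma):
    # direct sum: pi followed by sigma with sigma's entries shifted up by len(pi)
    return tuple(pi) + tuple(x + len(pi) for x in sigma)

def ominus(pi, sigma):
    # skew sum: pi shifted up by len(sigma), followed by sigma
    return tuple(x + len(sigma) for x in pi) + tuple(sigma)

# oplus((1,3,2), (3,1,2)) == (1,3,2,6,4,5); ominus((1,3,2), (3,1,2)) == (4,6,5,3,1,2)
\end{verbatim}
Using this definition, we describe permutations in the set $\Av(A_1)$.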
\begin{theorem}
The permutations in the set $\Av(A_1)$ are exactly the permutations of the form $\pi \oplus \sigma \oplus \tau$, where $\pi$ is an increasing permutation that can be empty, $\sigma$ is either empty or is $321$, and $\tau$ is Fibonacci.
\end{theorem}
In Figure \ref{fig:AvA1} we have a set of points for which $\Av(A_1)$ is the grid class, with the understanding that, although the line segment is drawn with finite length, we may choose arbitrarily many points on it, and that the pairs of points in the upper right square continue infinitely. Note that if only one of the points in the middle block is chosen, the middle block becomes part of the first block, and if two of them are chosen, the middle block becomes part of the last block. No matter which collection of points in the image is chosen, the corresponding permutation will still be in $\Av(A_1)$.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw (0,0) -- (0,6);
\draw (0,0) -- (6,0);
\draw (0,6) -- (6,6);
\draw (6,0) -- (6,6);
\draw (2,0) -- (2,6);
\draw (4,0) -- (4,6);
\draw (0,2) -- (6,2);
\draw (0,4) -- (6,4);
\draw[black, ultra thick] (0.3,0.3) -- (1.7,1.7);
\filldraw[black] (2.5,3.5) circle (2pt);
\filldraw[black] (3,3) circle (2pt);
\filldraw[black] (3.5,2.5) circle (2pt);
\filldraw[black] (4.4,4.2) circle (1pt);
\filldraw[black] (4.2,4.4) circle (1pt);
\filldraw[black] (4.8,4.6) circle (1pt);
\filldraw[black] (4.6,4.8) circle (1pt);
\filldraw[black] (5,5.2) circle (1pt);
\filldraw[black] (5.2,5) circle (1pt);
\filldraw[black] (5.4,5.6) circle (1pt);
\filldraw[black] (5.6,5.4) circle (1pt);
\end{tikzpicture}
\caption{ $\Av(A_1)$ is the grid class for this set}
\label{fig:AvA1}
\end{figure}
\begin{proof}
We consider two types of permutations in $\Av(A_1)$: permutations with a 321 pattern and permutations without a 321 pattern.
\noindent Case 1: $\pi$ contains 321.
In this case, we place the 321 in an arbitrary position and consider what can happen around the 321. We first claim the entries in the 321 must be consecutive in value. If they were not consecutive then there would exist some entry of $\pi$ with a value between that of two decreasing entries on either the left or the right, giving either a 231 or a 312. Additionally, the entries in the 321 must be consecutive in placement. If an entry less than all of the entries in the 321 were in the middle of the 321 then we would get a 312, and if an entry greater than all of the entries in the 321 were in the middle of the 321, then we would get a 231. Thus, the 321 subsequence is consecutive in placement and value. \par
The subsequence of $\pi$ to the left of the 321 must be less than the entries in the 321 in order to avoid 4321. Additionally, it must be increasing in order for $\pi$ to avoid 21543. \par
The entries in the subsequence to the right of the 321 (which we called $\tau$) must be greater than the entries in the 321 in order for $\pi$ to avoid 4321. Additionally, $\tau$ must avoid 321 in order for $\pi$ to avoid 21543. Thus, $\tau$ must avoid 321, 312 and 231, so it is Fibonacci. This shows that if a permutation has a 321 then it must have the stated structure in order to be in the set.
\noindent Case 2: $\pi$ avoids 321.
In this case, $\pi$ avoids 321 in addition to 231 and 312. Note $\pi$ also avoids 21543 and 4321 since they both contain 321. Thus, any permutation in this set is a Fibonacci permutation, which means it has the form we want with $\pi$ and $\sigma$ empty.
Now, we must show that permutations with the structure we described avoid the patterns in $A_1$. Again, $\tau$ avoids 312, 321 and 231, so it avoids all of the necessary patterns. Additionally, the entries in $\tau$ are greater than all of the other entries in the permutation, so there is no way that part of it can be in one of the forbidden patterns. Furthermore, $\pi$ is an increasing permutation and $\sigma$ is 321. Both of these subsequences avoid all of the forbidden patterns separately, and any subsequence of $\pi \oplus \sigma$ will also avoid all of the forbidden patterns. Thus, the structure avoids all of the forbidden patterns.
\end{proof}
As we show next, permutations in $\Av(A_2)$ have a very similar structure with 321 replaced with 312.
\begin{theorem}
\label{A2Structure}
The permutations in the set $\Av(A_2)$ are exactly the permutations of the form $\pi \oplus \sigma \oplus \tau$, where $\pi$ is an increasing permutation that can be empty, $\sigma$ is either empty or is $312$, and $\tau$ is Fibonacci.
\end{theorem}
Theorem \ref{A2Structure} tells us $\Av(A_2)$ is the grid class associated with the set of points in Figure \ref{fig:AvA2}, again with the understanding that, although the line segment is drawn with finite length, we may choose arbitrarily many points on it, and that the pairs of points in the upper right box continue infinitely.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
\draw (0,0) -- (0,6);
\draw (0,0) -- (6,0);
\draw (0,6) -- (6,6);
\draw (6,0) -- (6,6);
\draw (2,0) -- (2,6);
\draw (4,0) -- (4,6);
\draw (0,2) -- (6,2);
\draw (0,4) -- (6,4);
\draw[black, ultra thick] (0.3,0.3) -- (1.7,1.7);
\filldraw[black] (2.5,3.5) circle (2pt);
\filldraw[black] (3,2.5) circle (2pt);
\filldraw[black] (3.5,3) circle (2pt);
\filldraw[black] (4.4,4.2) circle (1pt);
\filldraw[black] (4.2,4.4) circle (1pt);
\filldraw[black] (4.8,4.6) circle (1pt);
\filldraw[black] (4.6,4.8) circle (1pt);
\filldraw[black] (5,5.2) circle (1pt);
\filldraw[black] (5.2,5) circle (1pt);
\filldraw[black] (5.4,5.6) circle (1pt);
\filldraw[black] (5.6,5.4) circle (1pt);
\end{tikzpicture}
\caption{ $\Av(A_2)$ is the grid class for this set.}
\label{fig:AvA2}
\end{figure}
\begin{proof}
We first show that if $\pi \in \Av(A_2)$, then $\pi$ has the given form. To do this, we consider all possible permutations in $\Av(A_2)$ in two cases: permutations with a 312 pattern and permutations without a 312 pattern. \par
\noindent Case 1: $\pi$ contains 312.
In this case, we place a 312 in an arbitrary position in $\pi$ and consider what can happen around it. We first show that the entries in the 312 subsequence must be consecutive in value. If they were not consecutive then there would exist either a 321, 231 or a 4123. Additionally, the 312 must be consecutive in placement. If an entry less than all of the entries in the 312 were in the middle of the 312 then we would get a 321 or a 4123, and if an entry greater than all of the entries in the 312 were in the middle of the 312 then we would get a 231. Thus, the 312 subsequence is consecutive in placement and value. \par
The subsequence to the left of the 312 (the block called $\pi$ in the statement of the theorem) must consist of entries less than those in the 312 in order to avoid 321. Additionally, this block must be monotone increasing in order to avoid 21534. The entries in the subsequence to the right of the 312 (the block called $\tau$) must be greater than the entries in the 312 in order to avoid 321. Additionally, $\tau$ must avoid 312 in order to avoid 21534. Thus, $\tau$ avoids 321, 312 and 231, so it is Fibonacci. \par
\noindent Case 2: $\pi$ avoids 312.
In this case, $\pi$ avoids 312 in addition to 231 and 321. Note that the other two patterns will automatically be avoided since they both have a 312 in them. Thus, $\pi$ is a Fibonacci permutation, which means it has the form we want with $\pi$ and $\sigma$ empty.
Next, we must show that permutations with the structure we described avoid the necessary patterns. Again, $\tau$ avoids 312, 321 and 231, so it avoids all of the necessary patterns. Additionally, the entries of $\tau$ are greater than all of the other entries in the permutation, so there is no way that part of it can be in one of the forbidden patterns. Furthermore, $\pi$ is an increasing permutation and $\sigma$ is a 312. Both of these subsequences avoid all of the forbidden patterns separately, and any subsequence of $\pi \oplus \sigma$ will also avoid all of the forbidden patterns. Thus, the structure avoids all of the forbidden patterns.
\end{proof}
Now that we have described the structure of permutations in $\Av(A_1)$ and $\Av(A_2)$, we can enumerate them.
\begin{theorem}\label{2.4}
For all $n \geq 1$,
$$|\Av_n(A_1)| = |\Av_n(A_2)| = F_{n+1} - 1.$$
\end{theorem}
We will give two proofs of this theorem. The first reduces permutations to Fibonacci permutations and relies on a well-known identity of the Fibonacci numbers given by
\begin{equation}
\label{Fibonacci Identity}
\sum_{k=0}^n F_k = F_{n+2}-1.
\end{equation}
The second uses bijections to domino and monomino tilings of a $1 \times (n+1)$ board.
\begin{proof}
We divide permutations of length $n$ in $\Av(A_1)$ into two types. The first type are the permutations that do not have a 321. These permutations are the Fibonacci permutations of length $n$ so there are $F_n$ of them. The second type are the permutations that have a 321. Since $\pi$ and $\sigma$ are fixed for a given length of $\pi$, we sum the possible subsequences $\tau$ over every possible length of $\tau$, which can be anywhere from 0 to $n-3$. Summing over both cases and using \eqref{Fibonacci Identity} gives
\begin{align*}
|\Av_n(A_1)| &= F_n + \sum_{j=0}^{n-3}F_j\\
&= F_n + F_{n-1} - 1\\
&= F_{n+1} - 1.
\end{align*}
The proof for $\Av(A_2)$ is identical except it is based on the presence and position of the 312 instead of the 321.
\end{proof}
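The counting in this proof can also be checked mechanically; a small Python sketch (the function names are ours, using the convention $F_0=F_1=1$) is:
\begin{verbatim}
def fib(n):
    # Fibonacci numbers with F_0 = F_1 = 1
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def count_A1(n):
    # F_n Fibonacci permutations, plus F_j permutations containing a 321
    # for each length j = 0, ..., n-3 of tau
    return fib(n) + sum(fib(j) for j in range(0, n - 2))

print(all(count_A1(n) == fib(n + 1) - 1 for n in range(1, 25)))  # True
\end{verbatim}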
Next, recall that the tilings of a $1\times n$ board with dominoes and monominoes are counted by the Fibonacci numbers (see \cite{Benjamin}). Thus, we can also prove the identity with a bijection to tilings of a $1\times (n+1)$ board with dominoes and monominoes that excludes a single tiling. Before we do this, we must briefly explain the bijection between Fibonacci permutations of length $n$ and tilings of a $1 \times n$ board with dominoes and monominoes.
Using Goyt and Mathisen's \cite{Goyt} description of the structure of a Fibonacci permutation $\pi = \pi_1\pi_2\cdots\pi_k$ where $\pi_i < \pi_j$ whenever $i < j$ and each $\pi_i$ is a decreasing sequence of at most two entries (see Figure \ref{fig:Fibonacci}), we map every $\pi_i$ of length one to a monomino and every $\pi_i$ of length two to a domino. We will call this mapping $\Phi$. Some examples of this mapping are $\Phi(1324576) = mdmmd$ and $\Phi^{-1}(dmdmm) = 2135467$.
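A minimal Python sketch of $\Phi$ and $\Phi^{-1}$ (the function names are ours; the input of the first function is assumed to be a Fibonacci permutation) reproduces these examples.
\begin{verbatim}
def perm_to_tiling(perm):
    # Phi: read a Fibonacci permutation as blocks of size 1 (monomino) or 2 (domino)
    tiling, i = [], 0
    while i < len(perm):
        if i + 1 < len(perm) and perm[i] > perm[i + 1]:  # a descent gives a domino
            tiling.append('d'); i += 2
        else:
            tiling.append('m'); i += 1
    return ''.join(tiling)

def tiling_to_perm(tiling):
    # Phi^{-1}: a monomino contributes one entry, a domino a descent of two entries
    perm, next_val = [], 1
    for tile in tiling:
        if tile == 'm':
            perm.append(next_val); next_val += 1
        else:
            perm.extend([next_val + 1, next_val]); next_val += 2
    return tuple(perm)

# perm_to_tiling((1,3,2,4,5,7,6)) == 'mdmmd'; tiling_to_perm('dmdmm') == (2,1,3,5,4,6,7)
\end{verbatim}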
For our bijection, we will map every permutation in $\Av(A_1)$ and $\Av(A_2)$ to a unique tiling of a $1\times(n+1)$ board with dominoes and monominoes excluding the tiling that starts with a domino and is followed only by monominoes.
\begin{proof}[Alternate proof of \textbf{Theorem \ref{2.4}}]
We define a function $\phi$ from permutations in $\Av(A_1)$ to the tilings described above. As in the proof of Theorem 2.2, we define $\phi$ in two cases based on the presence or absence of a 321. If there is not a 321 in a permutation, then $\phi$ will place a monomino at the beginning of the tiling and tile the remaining $1\times n$ board using $\Phi$. Note that since the permutation avoids 321, it is entirely Fibonacci, so this makes sense. If there is a 321, then $\phi$ will place a domino at the beginning, remove the 1 in the 321, and tile the remaining $1\times (n-1)$ board using $\Phi$ and the remaining $n-1$ entries in the permutation. We can describe $\phi$ recursively as
$$\phi(\pi \oplus \sigma \oplus \tau) = \begin{cases}
m \oplus \Phi(\pi \oplus \tau) & \sigma = \emptyset, \\
d \oplus \Phi(\pi \oplus 21 \oplus \tau) & \sigma \neq \emptyset,\\
\end{cases}$$
where $\Phi$ is the bijection to Fibonacci tilings, $m$ is a monomino, $d$ is a domino, and $\oplus$ is concatenation of tilings. Note that this function will not produce the tiling given by a domino followed by only monominoes because the domino at the beginning means there is a 321 and thus, when we do not consider the 1, we still have a descent that will be mapped to a domino, giving us a domino in the middle in addition to the initial domino. Some examples of $\phi$ are $\phi(214356) = mddmm$ and $\phi(143265) = dmdd$. \par
The inverse of $\phi$ can be described in a similar piecewise manner: for a tiling $x\oplus y$, where $x$ is a single tile, we have
$$\phi^{-1}(x\oplus y) = \begin{cases}
\Phi^{-1}(y) & \text{if }x = m \\
\psi(y) & \text{if }x = d.\\
\end{cases}$$
Here, $\psi$ is the function that is identical to $\Phi^{-1}$ except it sends the first domino to a consecutive 321 instead of a consecutive 21. Since $\phi$ and $\phi^{-1}$ are well defined and inverse to each other, $\phi$ is a bijection. Thus, the number of permutations of length $n$ in the set $\Av(A_1)$ is $F_{n+1}-1$.
\end{proof}
The tiling bijection for $\Av(A_2)$ is nearly identical, using a piecewise function $\phi$ based on the presence of a 312 instead of a 321, and a $\psi$ that maps the first domino to a 312 instead of a 321.
Now, we discuss the other two sets of pattern avoiding permutations.
\begin{theorem}
The permutations in the set $\Av(B_1)$ are exactly the permutations of the form $(\pi \ominus 1) \oplus \sigma$, where $\pi$ is a decreasing permutation and $\sigma$ is Fibonacci.
\end{theorem}
Before we prove this, we note that the structure of the permutations in this set could be written more simply as $\alpha \oplus \sigma$, where $\alpha$ is a decreasing permutation and $\sigma$ is Fibonacci. This simplification is visible in Figure \ref{fig:AvB1}. Since $\alpha$ is decreasing and every entry in $\alpha$ is less than every entry in $\sigma$ it must be the case that the last entry in $\alpha$ is 1, showing that this is identical to $\pi \ominus 1$. We have chosen to write it in a more complicated way to emphasize the similarity to $\Av(B_2)$.
\begin{figure}[ht]
\centering
\resizebox{0.36\textwidth}{!}{\begin{tikzpicture}
\draw (0,0) -- (0,4);
\draw (0,0) -- (4,0);
\draw (0,4) -- (4,4);
\draw (4,0) -- (4,4);
\draw (2,0) -- (2,4);
\draw (0,2) -- (4,2);
\draw[black, thick] (0.3,1.7) -- (1.7,0.3);
\filldraw[black] (2.4,2.2) circle (1pt);
\filldraw[black] (2.2,2.4) circle (1pt);
\filldraw[black] (2.8,2.6) circle (1pt);
\filldraw[black] (2.6,2.8) circle (1pt);
\filldraw[black] (3,3.2) circle (1pt);
\filldraw[black] (3.2,3) circle (1pt);
\filldraw[black] (3.4,3.6) circle (1pt);
\filldraw[black] (3.6,3.4) circle (1pt);
\end{tikzpicture}}
\caption{Grid Class $\Av(B_1)$}
\label{fig:AvB1}
\end{figure}
\begin{proof}
To show all permutations in $\Av(B_1)$ have this structure, suppose $\pi\in\Av(B_1)$. Because $\pi$ avoids 312, every entry to the left of 1 is less than every entry to the right of 1. Because $\pi$ avoids 231, the entries to the left of 1 are in decreasing order. Because $\pi$ also avoids 1432, the entries to the right of 1 avoid 321, in addition to 231 and 312. Hence, the entries to the right of 1 form a Fibonacci permutation. As a result, $\pi$ has the claimed form.\par
Now, we must show that permutations that have the structure we described avoid the necessary patterns. Note that $\sigma$ is Fibonacci so it avoids 231, 312, and 321 and thus it must avoid the necessary patterns. Additionally, in our structure, the entries of $\sigma$ must be greater than the rest of the entries in the permutation. Thus, there is no way to have a 231 or a 312 that is partially contained in $\sigma$, and since $\sigma$ avoids 321 there is no way to get a 1432 using this part either. Thus, one of the forbidden patterns would have to occur in $\pi \ominus 1$. However, this subsequence is strictly decreasing so it must avoid 231, 312 and 1432. This covers all cases and shows that permutations with the structure we described avoid the necessary patterns.
\end{proof}
Permutations in $\Av(B_2)$ have a similar structure, except that they start with an increasing subsequence instead of a decreasing subsequence.
\begin{theorem}
The permutations in the set $\Av(B_2)$ are exactly the permutations of the form $(\pi \ominus 1) \oplus \sigma$, where $\pi$ is an increasing permutation and $\sigma$ is Fibonacci.
\end{theorem}
\begin{figure}
\centering
\begin{tikzpicture}
\draw (0,0) -- (0,6);
\draw (0,0) -- (6,0);
\draw (0,6) -- (6,6);
\draw (6,0) -- (6,6);
\draw (2,0) -- (2,6);
\draw (4,0) -- (4,6);
\draw (0,2) -- (6,2);
\draw (0,4) -- (6,4);
\draw[black, ultra thick] (0.3,2.3) -- (1.7,3.7);
\filldraw[black] (3,1) circle (2pt);
\filldraw[black] (4.4,4.2) circle (1pt);
\filldraw[black] (4.2,4.4) circle (1pt);
\filldraw[black] (4.8,4.6) circle (1pt);
\filldraw[black] (4.6,4.8) circle (1pt);
\filldraw[black] (5,5.2) circle (1pt);
\filldraw[black] (5.2,5) circle (1pt);
\filldraw[black] (5.4,5.6) circle (1pt);
\filldraw[black] (5.6,5.4) circle (1pt);
\end{tikzpicture}
\caption{Grid Class $\Av(B_2)$}
\label{fig:AvB2}
\end{figure}
In Figure \ref{fig:AvB2} we have a set of points for which $\Av(B_2)$ is the grid class. Note that the only difference between this grid class and the grid class $\Av(B_1)$ is that the leftmost block has an increasing subsequence instead of a decreasing subsequence.
\begin{proof}
Again, we place 1 in an arbitrary position and consider the structure of the permutation around it. Since the permutation avoids 312, everything to the left of 1 is less than everything to the right of 1. Since the permutation avoids 321, everything to the left of 1 is increasing. Since the permutation avoids 1342, everything to the right of 1 avoids 231. Since the subsequence to the right of 1 must also avoid 321 and 312, it is Fibonacci. Thus, the only possible structure for permutations in this set is the one we have described. \par
Now, we must show that permutations with the structure we described avoid the necessary patterns. Again, $\sigma$ avoids 312, 321, and 231, so it avoids all of the necessary patterns. Additionally, since the entries of $\sigma$ are greater than all other entries in the permutation, no forbidden pattern can use part of $\sigma$. Finally, $\pi \ominus 1$ avoids 312, 321, and 1342, since it is an increasing sequence followed by its smallest entry. This covers all cases and shows that permutations with the structure we described avoid the necessary patterns.
\end{proof}
Again, we can use the structure of permutations in these sets to give proofs about the enumeration of $\Av(B_1)$ and $\Av(B_2)$.
\begin{theorem}\label{2.7}
For all $n\geq 1$,
$$|\Av_n(B_1)| = |\Av_n(B_2)| = F_{n+1}-1.$$
\end{theorem}
Again, we give a proof reducing permutations in this set to Fibonacci permutations and a proof using tilings.
\begin{proof}
For $\Av(B_1)$, we divide the permutations of length $n$ in this set into two cases. In the first case, $\pi$ has length at most 1. In this case, the permutations are Fibonacci and thus there are $F_n$ of them. In the second case, $\pi$ has length at least 2. In this case, we sum over all possible lengths $k$ of $\pi$, from 2 to $n-1$, noting that once $k$ is fixed $\pi$ is determined and $\sigma$ is Fibonacci of length $n-k-1$. Thus, we get
$$\sum_{k=2}^{n-1} F_{n-k-1}$$
permutations. Again we can rearrange this sum and use $\eqref{Fibonacci Identity}$ to find that it equals $F_{n-1}-1$.
Summing both cases, we find again that there are
$$F_n + F_{n-1}-1 = F_{n+1}-1$$
permutations of length $n$ in the set.
The proof is nearly identical for $\Av(B_2)$.
\end{proof}
We can also give a bijection to domino and monomino tilings of a $1\times(n+1)$ board for each of the two sets, this time excluding the all monomino tiling.
\begin{proof}[Alternative proof of \textbf{Theorem \ref{2.7}}]
For $\Av(B_1)$, we construct the following function,
$$\rho: \Av(B_1) \rightarrow \{\text{Domino and monomino tilings of a } 1\times (n+1) \text{ board with at least 1 domino}\}$$
such that $\rho$ maps the entry 1 to a domino, every decreasing subsequence of length two to the right of the entry 1 to a domino, and all other entries to monominoes. Since we are mapping one entry to a tile of length two, this will increase the size of the board from $n$ to $n+1$. Additionally, it excludes the all-monomino tiling, since the entry 1 must be in the permutation, and it will always be mapped to a domino, so there must be at least one domino in all tilings in the range of our map. Some examples of $\rho$ are $\rho(3215467) = mmddmm$ and $\rho(1235476) = dmmdd$.
To show $\rho$ is a bijection, we describe $\rho^{-1}$. We note that $\rho^{-1}$ will construct $\pi$ by mapping the first domino to 1 and the sequence of $k$ monominoes to the left of the first domino to the decreasing subsequence of length $k$ from $k+1$ down to 2. Then, $\rho^{-1}$ will construct $\sigma$ using $\Phi^{-1}$. Since this function is invertible, it is bijective. This proves that there are as many permutations in $\Av(B_1)$ as there are tilings of a $1\times (n+1)$ board with dominoes and monominoes and at least one domino. Since we know the latter is counted by $F_{n+1}-1$, the former must be as well.
The proof for $\Av(B_2)$ is nearly identical. In this case, there is an increasing subsequence to the left of the entry 1 instead of a decreasing subsequence, but in either case every entry in that initial sequence is mapped to a monomino, so there is no difference in the construction of $\rho$. The inverse of $\rho$ will send the $k$ monominoes before the first domino to the consecutive increasing subsequence starting at 2 and ending at $k+1$.
\end{proof}
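The enumeration can also be verified by brute force for small $n$. The following short Python script is only an illustrative sanity check and not part of the proof; it uses the pattern sets appearing in the structure theorems above, namely $\{231,312,1432\}$ for $\Av(B_1)$ and $\{312,321,1342\}$ for $\Av(B_2)$, together with the convention $F_0 = F_1 = 1$.
\begin{verbatim}
from itertools import combinations, permutations

def contains(perm, patt):
    # True if perm contains an occurrence of the classical pattern patt
    k = len(patt)
    for idx in combinations(range(len(perm)), k):
        vals = [perm[i] for i in idx]
        if all((vals[i] < vals[j]) == (patt[i] < patt[j])
               for i in range(k) for j in range(i + 1, k)):
            return True
    return False

def av(n, patterns):
    # all permutations of length n avoiding every pattern in the given set
    return [p for p in permutations(range(1, n + 1))
            if not any(contains(p, q) for q in patterns)]

def fib(n):  # Fibonacci numbers with F_0 = F_1 = 1
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

B1 = [(2, 3, 1), (3, 1, 2), (1, 4, 3, 2)]
B2 = [(3, 1, 2), (3, 2, 1), (1, 3, 4, 2)]
for n in range(1, 8):
    assert len(av(n, B1)) == len(av(n, B2)) == fib(n + 1) - 1
\end{verbatim}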
\section{Enumeration By Statistics}
Next, we will count permutations in our four sets according to a couple of statistics. The first statistic we consider is the inversion number, written $\inv(\pi)$. The inversion number of a permutation $\pi$ is the number of 21 patterns in $\pi$. For example, if $\pi = 15324$, then $\inv(\pi) = 4$.
\begin{proposition}
\label{FibInv}
For $n,k \in \mathbb N$ with $n \geq k$, the number of Fibonacci permutations of length $n$ with $k$ inversions is given by $\binom{n-k}k$.
\end{proposition}
\begin{proof}
Use the mapping from $\Av_n(231,312,321)$ to domino and monomino tilings of a $1\times n$ board. The number of inversions of a permutation equals the number of dominoes in its tiling, and a tiling with $k$ dominoes consists of $n-k$ tiles in total, so there are $\binom{n-k}{k}$ such tilings.
\end{proof}
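For instance, for $n = 4$ and $k = 1$ the formula gives $\binom{3}{1} = 3$, corresponding to the Fibonacci permutations $2134$, $1324$, and $1243$, whose tilings each contain exactly one domino.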
For the permutations in our sets, the distribution of the inversion number is slightly more complicated.
\begin{theorem}
For $k \geq 3$, the number of permutations of length $n$ with $k$ inversions in $\Av(A_1)$ is given by
$$\binom{n-k}{k}+\sum_{\ell=k-3}^{n-3}\binom{\ell-(k-3)}{k-3} = \binom{n-k}{k}+\binom{n-k+1}{k-2}$$
\end{theorem}
\begin{proof}
Again, we divide into cases based on the presence of a 321. The permutations that do not contain a 321 are the Fibonacci permutations, so by Proposition $\ref{FibInv}$ there are $\binom{n-k}{k}$ permutations of length $n$ of this form with $k$ inversions. When there is a 321, that subsequence already contains 3 inversions, so we are looking for $k-3$ inversions in the rest of the permutation. There can be no inversions in the increasing subsequence before the 321, nor can there be any inversions between the 321 and the entries before or after it. Thus, the only place for additional inversions is the part after the 321, which is Fibonacci. This part can have any length $\ell$ from $k-3$ to $n-3$: it must accommodate $k-3$ inversions, and it has length at most $n-3$ because the 321 occupies three entries. Summing the number of ways to place $k-3$ inversions over the possible lengths $\ell$ of the Fibonacci part gives $$\binom{n-k}{k}+\sum_{\ell=k-3}^{n-3}\binom{\ell-(k-3)}{k-3}.$$
Using the hockey-stick identity,
\begin{equation}
\label{Hockey-Stick}
\sum_{i = r}^n\binom{i}{r} = \binom{n+1}{r+1}, \text{ for } n,r\in\mathbb{N}, n\geq r,
\end{equation} we simplify the summation to a single binomial coefficient,
$$\sum_{\ell=k-3}^{n-3}\binom{\ell-(k-3)}{k-3} = \binom{n-k+1}{k-2}.$$
\end{proof}
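For example, for $n = 6$ and $k = 3$ the formula gives $\binom{3}{3} + \binom{4}{1} = 5$: the Fibonacci permutation $214365$, together with the four permutations $321456$, $143256$, $125436$, and $123654$, each consisting of an increasing run, a 321 on three consecutive values, and an increasing run.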
The distribution of inversion number is very similar for $\Av(A_2)$.
\begin{theorem}
For $k \geq 2$, the number of permutations of length $n$ with $k$ inversions in the set $\Av(A_2)$ is given by
$$\binom{n-k}{k}+\sum_{\ell=k-2}^{n-3}\binom{\ell-(k-2)}{k-2} = \binom{n-k}{k}+\binom{n-k}{k-1} = \binom{n-k+1}{k}.$$
\end{theorem}
\begin{proof}
The proof of this is identical to the proof of the last identity except we note that the 312 pattern only has two inversions, so when it is included in the permutation, the remaining part must have $k-2$ inversions, instead of $k-3$ inversions as before. Similarly, using \eqref{Hockey-Stick} allows us to simplify the summation. This time, Pascal's rule allows us to simplify the expression even further.
\end{proof}
Note that for $0\leq k\leq 2$ in $\Av(A_1)$, and for $0 \leq k \leq 1$ in $\Av(A_2)$, there are $\binom{n-k}{k}$ permutations of length $n$ with $k$ inversions: with so few inversions the permutation cannot contain a 321 or a 312, respectively, so it must be Fibonacci.
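For example, for $\Av(A_2)$ with $n = 5$ and $k = 2$ the formula gives $\binom{4}{2} = 6$: the three Fibonacci permutations of length 5 with two inversions, together with $31245$, $14235$, and $12534$, each obtained from the identity by replacing three consecutive values with a 312.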
Next, we consider the distribution of inversion number for $\Av(B_2)$.
\begin{theorem}
There are
$$\sum_{\ell=1}^n \binom{n-k-1}{k-\ell+1}$$
permutations of length $n$ in $\Av(B_2)$ with $k$ inversions.
\end{theorem}
\begin{proof}
We divide this proof into cases based on the length $\ell$ of $\pi \ominus 1$, and then sum over all of them. Note that in this case, it is simpler to sum over the length of the non-Fibonacci part instead of the length of the Fibonacci part. For a given length $\ell$, there are $\ell-1$ inversions in $\pi \ominus 1$, since $\pi$ is increasing and every entry in $\pi$ forms an inversion with the 1. Thus, there must be $k-(\ell-1)$ inversions in the Fibonacci part. Additionally, we know that the Fibonacci part has length $n-\ell$. Thus, the number of permutations that satisfy this is given by $$\binom{(n-\ell)-(k-(\ell-1))}{k-(\ell-1)} = \binom{n-k-1}{k-\ell+1}.$$
Summing over all possible lengths $\ell$ gives us the desired result.
\end{proof}
Note that in this proof we could split $\Av(B_2)$ into permutations that are Fibonacci and permutations that are not. Permutations that are Fibonacci have either $\ell=1$ or $\ell=2$. Thus, we use Pascal's rule to sum the first two terms to get
$$\binom{n-k-1}{k-1} + \binom{n-k-1}{k} = \binom{n-k}{k}.$$
As expected, this is the number of Fibonacci permutations of length $n$ with $k$ inversions.
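As a small check, for $n = 4$ and $k = 2$ the sum gives $\binom{1}{1} + \binom{1}{0} = 2$, coming from the terms $\ell = 2$ and $\ell = 3$, and corresponding to the permutations $2143$ and $2314$.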
The distribution of inversion number for permutations of length $n$ in $\Av(B_1)$ follows the same logic.
\begin{theorem}
There are
$$\sum_{\ell=1}^n \binom{n-\ell-(k-\binom{\ell}{2})}{k-\binom{\ell}{2}}$$
permutations of length $n$ in $\Av(B_1)$ with inversion number $k$.
\end{theorem}
\begin{proof}
The proof of this is nearly identical to the previous proof. The only difference is that $\pi \ominus 1$ of length $\ell$ contains $\binom{\ell}{2}$ inversions, since it is strictly decreasing, so every pair of its entries forms an inversion.
\end{proof}
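For instance, for $n = 4$ and $k = 3$ only the term with $\ell = 3$ contributes, giving $\binom{1}{0} = 1$, which corresponds to the permutation $3214$.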
We also define the following statistic to keep track of the length of the ending Fibonacci subsequence common among permutations in all four of our sets.
\begin{definition}
The statistic $\Fib(\pi)$ gives the length of the ending Fibonacci subsequence containing $n$ in a permutation of length $n$.
\end{definition}
For example, $\Fib(321) = 0$, $\Fib(132) = 2$, $\Fib(231) = 1$, $\Fib(2341657) = 3$ and $\Fib(1253467) = 2$. In this last case, it is important to note that in order to be Fibonacci as we define it, the subsequence must contain every consecutive integer from its lowest integer to $n$. This means the 3467 subsequence is not Fibonacci so 67 is the longest Fibonacci subsequence ending at the last entry of the sequence. We can now consider the distribution of this statistic over permutations in each set.
\begin{theorem}
There are $F_k$ permutations $\pi$ of length $n$ in $\Av(A_1)$, $\Av(A_2), \Av(B_1)$, and $\Av(B_2)$ with $\Fib(\pi) = k$.
\end{theorem}
\begin{proof}
For each set, given the length $k$ of the ending Fibonacci subsequence containing $n$, everything before it is fixed. Thus, the only variation in the permutations occurs in the Fibonacci part, resulting in the expected $F_k$ permutations.
\end{proof}
We can combine these enumerations to get a general formula for the number of permutations $\pi$ of length $n$ in our set such that $\inv(\pi) = j$ and $\Fib(\pi) = k$.
\begin{theorem}
For $k \le n-3$ and $j \in \mathbb N$, there are
$$\binom{k-j+3}{j-3}, \binom{k-j+2}{j-2}, \binom{k-j+\binom{n-k}{2}}{j-\binom{n-k}{2}}, \text{ and } \binom{n-j-1}{j+k+1-n}$$
permutations $\pi$ of length $n$ such that $\inv(\pi)=j$ and $\Fib(\pi) = k$ in $\Av(A_1)$, $\Av(A_2)$, $\Av(B_1)$, and $\Av(B_2)$, respectively, as long as the binomial coefficients are defined. If they are not defined, then there are no permutations with those values of $k$ and $j$. If $k = n$, then there are $\binom{k-j}{j}$ permutations $\pi$ such that $\inv(\pi) = j$ and $\Fib(\pi) = k$.
\end{theorem}
\begin{proof}
The results follow from determining how many inversions must lie in the Fibonacci subsequence of each permutation. If $\pi \in \Av(A_1)$ has length $n$ with $\inv(\pi) = j$ and $\Fib(\pi) = k < n$, then $\pi$ is an increasing sequence of length $n-k-3$, followed by a 321 pattern, followed by a Fibonacci permutation of length $k$. The first two parts contain a total of 3 inversions, so the Fibonacci part must contain $j-3$ inversions. By Proposition \ref{FibInv} there are $\binom{k-j+3}{j-3}$ such permutations. The remaining cases are similar, but the inversion number of the pre-Fibonacci part is 2, $\binom{n-k}{2}$, and $n-k-1$ for $\Av(A_2)$, $\Av(B_1)$, and $\Av(B_2)$, respectively. In each case, if $k = n$, then $\pi$ is Fibonacci, so there are $\binom{k-j}{j}$ permutations with $\inv(\pi) = j$ and $\Fib(\pi) = k$ by Proposition \ref{FibInv}.
\end{proof}
\section{Generating Functions}
In this section, we encode the statistics $\inv$ and $\Fib$ in generating functions similar to those in \cite{Goyt}. We define the generating function $G_n^{A_1}$ for the set $\Av(A_1)$ as
$$G_n^{A_1}(v,q) = \sum_{\pi \in \Av_n(A_1)}v^{\Fib(\pi)}q^{\inv(\pi)}.$$
We define generating functions for the sets $\Av(A_2)$, $\Av(B_1)$, and $\Av(B_2)$ in the same way such that
$$G_n^{A_2}(v,q) = \sum_{\pi \in \Av_n(A_2)}v^{\Fib(\pi)}q^{\inv(\pi)},$$
$$G_n^{B_1}(v,q) = \sum_{\pi \in \Av_n(B_1)}v^{\Fib(\pi)}q^{\inv(\pi)},$$
and
$$G_n^{B_2}(v,q) = \sum_{\pi \in \Av_n(B_2)}v^{\Fib(\pi)}q^{\inv(\pi)}.$$
Now that we have defined the generating functions, we can describe their relationship with the Fibonacci generating function defined as
$$F_n(q) = \sum_{\pi \in \Av_n(231,312,321)}q^{\inv(\pi)}.$$
\begin{theorem}
For all $n \geq 3$,
$$G_n^{A_1}(v,q) = F_n(q)v^n + \sum_{j=0}^{n-3} q^3v^j F_{j}(q),$$
$$G_n^{A_2}(v,q) = F_n(q)v^n + \sum_{j=0}^{n-3} q^2v^j F_j(q),$$
$$G_n^{B_1}(v,q) = F_n(q)v^n + \sum_{j=0}^{n-3} q^{\binom{n-j}{2}}v^j F_j(q),$$
and
$$G_n^{B_2}(v,q) = F_n(q)v^n + \sum_{j=0}^{n-3} q^{n-j-1}v^j F_j(q).$$
\end{theorem}
\begin{proof}
We start with $G_n^{A_1}(v,q)$, summing over all possible values of $\Fib$. When $\Fib(\pi) = n$, the permutation is Fibonacci; this accounts for the $F_n(q)v^n$ term. When $\Fib(\pi) = j < n$, the permutation is not Fibonacci, so it contains a 321 after an increasing subsequence. The 321 contributes three inversions, and the inversions of the Fibonacci subsequence of length $j$ are recorded by $F_{j}(q)$. Note that $j$ is at most $n-3$ since the 321 has length three and is not part of $\sigma$. Summing over all possible values of $j$ and adding the Fibonacci permutations gives a generating function equal to $G_n^{A_1}$.
The proofs of the other three equations are almost identical except for the differences in counting the inversions of the pre-Fibonacci part. For $G_n^{A_2}$ there are exactly two inversions in the pre-Fibonacci part, coming from the 312. For $G_n^{B_1}$, there are $\binom{n-j}{2}$ inversions, coming from the decreasing subsequence of length $n-j$. For $G_n^{B_2}$, there are $n-j-1$ inversions, coming from the increasing sequence of length $n-j-1$ followed by the entry 1.
\end{proof}
We can also describe recurrence relations for these generating functions. Before we do this, we describe the recurrence relation for the terms in our sequence.
\begin{theorem}
If $a_n = F_{n+1}-1$, then for $n\geq 2$, we have
$$a_n = a_{n-1}+a_{n-2}+1.$$
\end{theorem}
\begin{proof}
To show this, we work in terms of Fibonacci numbers and use their recurrence relation:
\begin{align*}
a_n &= F_{n+1}-1 \\
&= F_n + F_{n-1} -1 \\
&= a_{n-1} + F_{n-1} \\
&= a_{n-1}+a_{n-2}+1.
\end{align*}
\end{proof}
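For example, starting from $a_1 = 1$ and $a_2 = 2$, the recurrence gives $a_3 = 2+1+1 = 4$, $a_4 = 4+2+1 = 7$, and $a_5 = 7+4+1 = 12$, matching $F_4-1$, $F_5-1$, and $F_6-1$.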
We can see combinatorially why this recurrence relation makes sense in a similar way to the combinatorial proof for the Fibonacci recurrence relation. We will use $\Av(A_1)$ as an example, but the others work similarly. Permutations in $\Av(A_1)$ end in a decreasing subsequence of length one, two, or three. If the decreasing subsequence has length one or two, it is in the Fibonacci part $\sigma$. Thus, like with Fibonacci permutations, we can remove it. This will leave a permutation in $\Av(A_1)$ of length $n-1$ or $n-2$. If the decreasing subsequence has length three, then it must be the 321. There is only one possible permutation in this case, given by $\tau \oplus 321$ where $\tau$ is strictly increasing. Thus, there are $a_{n-1}$ permutations that end in a decreasing subsequence of length one, $a_{n-2}$ permutations that end in a decreasing subsequence of length two, and one permutation that ends in a decreasing subsequence of length three, so $a_n = a_{n-1} + a_{n-2} + 1$.
In a similar way, we can find recurrence relations for our generating functions.
\begin{theorem}
For $n \geq 3$,
$$G_n^{A_1}(v,q) = q^3 + vG_{n-1}^{A_1}(v,q) + qv^2G_{n-2}^{A_1}(v,q),$$
$$G_n^{A_2}(v,q) = q^2 + vG_{n-1}^{A_2}(v,q) + qv^2G_{n-2}^{A_2}(v,q),$$
$$G_n^{B_1}(v,q) = q^{\binom{n}{2}} + vG_{n-1}^{B_1}(v,q) + qv^2G_{n-2}^{B_1}(v,q),$$
and
$$G_n^{B_2}(v,q) = q^{n-1} + vG_{n-1}^{B_2}(v,q) + qv^2G_{n-2}^{B_2}(v,q).$$
\end{theorem}
\begin{proof}
We start with $G_n^{A_1}(v,q)$. Let $\pi \in \Av(A_1)$ have length $n$. Given the structure of permutations in $\Av(A_1)$, $\pi$ can end in a decreasing subsequence of length one, two, or three. If it ends in a decreasing subsequence of length 1, it corresponds to a permutation of length $n-1$ with $n$ added to the end. Thus, it has the same number of inversions and the value of Fib increases by 1 compared to the corresponding permutation of length $n-1$. Similarly, if it ends in a decreasing subsequence of length two, it corresponds to a permutation of length $n-2$ with $n, n-1$ added to the end. Thus, it has one more inversion and Fib increases by two compared to the corresponding permutation of length $n-2$ in $\Av(A_1)$. If it ends in a decreasing subsequence of length 3, then it must be the permutation given by $1,2,\ldots,n-3,n,n-1,n-2$. Thus, $\Fib(\pi) = 0$ and $\inv(\pi) = 3$. Summing together these three possibilities results in the equality
$$G_n^{A_1}(v,q) = q^3 + vG_{n-1}^{A_1}(v,q) + qv^2G_{n-2}^{A_1}(v,q).$$
The other three proofs are similar. In each case, we get the same terms for the permutations that end in a consecutive decreasing subsequence of length one or two. The difference lies in the permutations where $\sigma$ is empty. Note that there is only one of these in each set with a given length $n$. For $\Av(A_2)$, this permutation is given by $\pi =1,2,\ldots,n-3,n,n-2,n-1$. In this case $\inv(\pi) = 2$ and $\Fib(\pi) = 0$. For $\Av(B_1)$, $\pi$ is the decreasing permutation. In this case $\inv(\pi) = \binom{n}{2}$ and $\Fib(\pi) = 0$. For $\Av(B_2)$, this permutation is given by $\pi =2,3,\ldots,n,1$. In this case $\inv(\pi) = n-1$ and $\Fib(\pi) = 0$. This gives us the first terms for each of the recurrence relations.
\end{proof}
Beyond the recurrence relation, we can prove other analogs of Fibonacci identities using our generating functions. For example, we consider the identity
$$F_{n+m} = F_{n-1}F_m + F_nF_{m-1}.$$
\begin{theorem}
For $m,n \geq 2$,
\begin{align*}G_{m+n}^{A_1}(v,q) = v^nG_m^{A_1}(v,q)F_n(q) &+ v^{n+2}qG_{m-1}^{A_1}(v,q)F_{n-1}(q) + v^{n-1}q^3F_{n-1}(q) \\ &+ v^{n-2}q^3F_{n-2}(q) + G_n^{A_1}(v,q)-v^nF_n(q),
\end{align*}
\begin{align*}
G_{m+n}^{A_2}(v,q) = v^nG_m^{A_2}(v,q)F_n(q) &+ v^{n+2}qG_{m-1}^{A_2}(v,q)F_{n-1}(q) + v^{n-1}q^2F_{n-1}(q) \\ &+ v^{n-2}q^2F_{n-2}(q) + G_n^{A_2}(v,q)-v^nF_n(q),
\end{align*}
$$G_{m+n}^{B_1}(v,q) = v^nG_m^{B_1}(v,q)F_n(q) + v^{n+2}qG_{m-1}^{B_1}(v,q)F_{n-1}(q) + v^{\binom{m}{2}}\sum_{i=0}^nv^iF_i(q)q^{\binom{i}{2}+im},
$$
\begin{align*}
G_{m+n}^{B_2}(v,q) = v^nG_m^{B_2}(v,q)F_n(q) &+ qv^{n+2}G_{m-1}^{B_2}(v,q)F_{n-1}(q) + q^mv^{n-1}F_{n-1}(q) \\ &+ q^m(G_n^{B_2}(v,q)-v^nF_n(q)).
\end{align*}
\end{theorem}
This analog is somewhat messier because there are more cases, depending on where the ``cut'' of a permutation of length $m+n$ into a permutation of length $m$ and a permutation of length $n$ falls. This reflects the additional structural complexity of permutations in our sets compared to Fibonacci permutations. The case of $B_1$ is especially messy because of the contrast between the decreasing initial subsequence and the Fibonacci part.
\begin{proof}
For $A_1$, there are five cases that we will sum over, depending on where a permutation $\pi$ of length $m+n$ is divided into $\pi_m$ and $\pi_n$ such that $\pi = \pi_m\pi_n$. In the first two cases, $\pi_n$ is Fibonacci. In the first case, the last entry in $\pi_m$ is less than the first entry in $\pi_n$. Thus, we get the term $v^nG_m^{A_1}(v,q)F_n(q)$. Note that this includes the Fibonacci permutations. In the second case, the last entry in $\pi_m$ is greater than the first entry in $\pi_n$, which is to say that the division was in the middle of a descent. In this case, we pull the descent out, noting that it will add 1 inversion and 2 to Fib, to get the term $v^{n+2}qG_{m-1}^{A_1}(v,q)F_{n-1}(q).$
In the third case, $\pi_m = 1,2,3,\ldots,m-2,m+1,m$ and $\pi_n$ starts with $m-1$ and the rest of it is Fibonacci. Thus, $\Fib(\pi) = n-1$ and the inversions are counted by the three in the 321 added to those in the ending Fibonacci subsequence of length $n-1$ resulting in the term $v^{n-1}q^3F_{n-1}(q)$.
In the fourth case, $\pi_m = 1,2,3,\ldots,m-1,m+2$ and $\pi_n$ starts with $m+1,m$ and the rest of it is Fibonacci. In this case, we get $\Fib(\pi) = n-2$ and three inversions before the ending Fibonacci subsequence of length $n-2$, resulting in the term $v^{n-2}q^3F_{n-2}(q)$.
In the fifth case, $\pi_m$ is an increasing subsequence. Thus, we get the term $G_n^{A_1}(v,q)$. However, this would double count the Fibonacci permutations, so we have to subtract $v^nF_n(q)$. Note that since we are only considering permutations $\pi_n$ that are not Fibonacci, we do not have to worry about $\pi_m$ in our calculation of Fib.
Summing all of the possible cases together gives us the desired result,
\begin{align*}
G_{m+n}^{A_1}(v,q) = v^nG_m^{A_1}(v,q)F_n(q) &+ v^{n+2}qG_{m-1}^{A_1}(v,q)F_{n-1}(q) + v^{n-1}q^3F_{n-1}(q) \\ &+ v^{n-2}q^3F_{n-2}(q) + G_n^{A_1}(v,q)-v^nF_n(q).
\end{align*}
A similar process will yield similar results for the other sets.
\end{proof}
\section{Acknowledgements}
We would like to thank Professor Jay Pantone for his computational work that hypothesized the Wilf-equivalence of $A_1$, $A_2$, $B_1$, and $B_2$ as well as our research advisor, Professor Eric Egge, for his guidance and support.
| {
"timestamp": "2022-03-23T01:11:13",
"yymm": "2203",
"arxiv_id": "2203.11416",
"language": "en",
"url": "https://arxiv.org/abs/2203.11416",
"abstract": "We prove that $|Av_n(231,312,1432)|$, $|Av_n(312,321,1342)|$ $|Av_n(231,312,4321,21543)|$, and $ |Av_n(321,231,4123,21534)|$, are all equal to $F_{n+1} - 1$ where $F_n$ is the $n$-th Fibonacci number using the convention $F_0 = F_1 = 1$ and $Av_n(S)$ is the set of all permutations of length $n$ that avoid all of the patterns in the set $S$. To do this, we characterize the structures of the permutations in these sets in terms of Fibonacci permutations. Then, we further quantify the structures using statistics such as inversion number and a statistic that measures the length of Fibonacci subsequences. Finally, we encode these statistics in generating functions written in terms of the generating function for Fibonacci permutations. We use these generating functions to find analogs about recurrence relation and addition formulae of Fibonacci identities.",
"subjects": "Combinatorics (math.CO)",
"title": "Statistics on Almost-Fibonacci Pattern-Avoiding Permutations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9915543740571681,
"lm_q2_score": 0.7154240079185319,
"lm_q1q2_score": 0.7093818043571304
} |
https://arxiv.org/abs/2002.02359 | Nonconforming discretizations of convex minimization problems and precise relations to mixed methods | This article discusses nonconforming finite element methods for convex minimization problems and systematically derives dual mixed formulations. Duality relations lead to simple error estimates that avoid an explicit treatment of nonconformity errors. A reconstruction formula provides the discrete solution of the dual problem via a simple postprocessing procedure which implies a strong duality relation and is of interest in a posteriori error estimation. The framework applies to differentiable and nonsmooth problems, examples include $p$-Laplace, total-variation regularized, and obstacle problems. Numerical experiments illustrate advantages of nonconforming over standard conforming methods. |
\section{Introduction}
Mixed finite element methods as introduced in~\cite{RavTho77,BoBrFo13}
provide an attractive framework to
approximate partial differential equations in divergence form
since they lead to accurate approximations of fluxes.
For the Poisson problem it is well understood
that a close connection of mixed methods to nonconforming methods
exists, cf.~\cite{Mari85,ArnBre85}. This is of practical interest since mixed
finite element methods require the solution of saddle-point problems while
nonconforming methods lead to positive definite linear systems.
Moreover, the nonconforming Crouzeix--Raviart element of~\cite{CroRav73}
has proved to be particularly robust and flexible to provide accurate
approximations for Stokes equations \cite{GirRav86-book}, for nearly incompressible
Navier--Lam\'e equations \cite{HanLar03},
and for singular minimizers related to the Lavrentiev phenomenon
in the calculus of variations \cite{Ortn11}. Another useful feature is
that the element
is suitable to compute reliable lower bounds for eigenvalue
problems \cite{ArmDur04,CarGed14}. Further aspects of the Crouzeix--Raviart
element are addressed in~\cite{Bren15}.
In this article we show that the relation to mixed methods applies to a large class
of convex minimization problems provided an appropriate discretization
is used. From a discrete duality relation we derive quasi-optimal error
estimates for the
modified discretizations, show that they apply to various nonlinear partial
differential equations and variational inequalities, and illustrate
the theoretical findings via simulations for certain singular limit settings.
The results of this article are inspired by recent work on quasi-optimal
convergence rates for nonconforming approximations of total-variation
regularized problems in~\cite{ChaPoc19-pre}.
\subsection{Convex minimization}
To explain the main ideas we consider a convex variational problem
defined via a minimization of the energy functional
\[
I(u) = \int_\Omega \phi(\nabla u) \dv{x} - \int_\Omega f u \dv{x},
\]
in a Sobolev space $W^{1,p}_D(\Omega)$, i.e., subject to homogeneous
Dirichlet boundary conditions on a
boundary part ${\Gamma_{\rm D}}\subset \partial\Omega$; we set ${\Gamma_{\rm N}}=\partial\Omega\setminus {\Gamma_{\rm D}}$.
The dual problem is obtained by using the relation $\phi^{**}=\phi$
with the convex conjugate
\[
\phi^*(t) = \sup_{s\in \mathbb{R}^d} s\cdot t - \phi(s).
\]
It consists in maximizing the functional
\[
D(z) = - \int_\Omega \phi^*(z) \dv{x}
\]
in the space of vector fields $z\in L^{p'}(\Omega;\mathbb{R}^d)$ whose
distributional divergence $\diver z$ belongs to $L^{p'}(\Omega)$
with vanishing normal component on ${\Gamma_{\rm N}}$ and which
satisfy the constraint
\[
-\diver z = f.
\]
It turns out that solutions are related via
\[
z = D \phi(\nabla u) \quad \Longleftrightarrow \quad \nabla u = D\phi^*(z),
\]
and satisfy the Euler--Lagrange equation
\[
-\diver D \phi(\nabla u) = f
\]
and the saddle-point system
\[
D\phi^* (z) - \nabla \lambda = 0, \quad -\diver z = f,
\]
where $\lambda$ is the Lagrange multiplier related to the divergence constraint.
One directly verifies that $\lambda= u$.
\subsection{Mixed and nonconforming methods}
A low order finite element discretization of the dual problem uses
the Raviart--Thomas finite element space ${\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$ that contains
certain piecewise linear vector fields whose distributional divergence
is given by a piecewise constant function and which have vanishing
normal component on ${\Gamma_{\rm N}}$. In the quadratic case with
$\phi(s)=|s|^2/2$ and $\phi^*(t) =|t|^2/2$, corresponding to the Poisson
problem, the numerical method determines a uniquely defined vector field
$z_h \in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$ and an elementwise constant function
$\overline{u}_h\in \mathcal{L}^0(\mathcal{T}_h)$ that solve
\begin{equation}\label{eq:poisson_mixed_discr}
(z_h,y_h) + (\overline{u}_h,\diver y_h) = 0, \quad (\diver z_h, \overline{v}_h) = -(f,\overline{v}_h)
\end{equation}
for all $(y_h,\overline{v}_h) \in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)\times \mathcal{L}^0(\mathcal{T}_h)$, where
$(\cdot,\cdot)$ denotes the $L^2$ inner product of functions or
vector fields with associated norm $\|\cdot\|$.
The low order nonconforming approximation of the primal problem uses
the Crouzeix--Raviart finite element space $\mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$ of
piecewise linear functions that are continuous at midpoints of
sides of elements and vanish at the midpoints of sides belonging to ${\Gamma_{\rm D}}$.
It provides a nonconforming approximation of the
Sobolev space $W^{1,2}_D(\Omega)$. With the piecewise application of the
gradient operator denoted by $\nabla_{\! h}$ we have that the discrete
solution $u_h \in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$ satisfies
\[
(\nabla_{\! h} u_h,\nabla_{\! h} v_h) = (f_h,v_h)
\]
for all $v_h\in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$. It has been shown in~\cite{Mari85}
that the solutions $z_h$ and $u_h$ are related via
\[
z_h|_T(x) = \nabla_{\! h} u_h|_T - \frac{f_h|_T}{d} (x-x_T)
\]
on every element $T\in \mathcal{T}_h$ with midpoint $x_T\in T$,
provided that $f_h=\Pi_{h,0}f $ is the $L^2$ projection of $f$
onto $\mathcal{L}^0(\mathcal{T}_h)$. Moreover, it follows that
\[
\overline{u}_h|_T = u_h(x_T) + \frac{f_h|_T}{d^2 |T|} \|x-x_T\|_{L^2(T)}^2.
\]
Hence, the solution of the mixed finite element method can entirely
be determined by the solution of the nonconforming discretization
and vice versa. We show that
the relations can be generalized and that a modification of the dual
problem simplifies the second equation.
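As an elementary illustration, the following symbolic computation (a sketch only, with generic placeholder values $gx$, $gy$, $f$ standing for $\nabla_{\! h} u_h|_T$ and $f_h|_T$ on a single element) verifies that the reconstructed field above satisfies $-\diver z_h = f_h$ on $T$ and has the value $\nabla_{\! h} u_h|_T$ at the barycenter, so that $\Pi_{h,0} z_h = \nabla_{\! h} u_h$.
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
xT, yT = sp.symbols('xT yT')          # barycenter of the element T
gx, gy, f = sp.symbols('gx gy f')     # placeholders for grad u_h|_T and f_h|_T
d = 2

# reconstructed field z_h|_T(x) = grad u_h|_T - (f_h|_T / d) (x - x_T)
z = sp.Matrix([gx, gy]) - (f / d) * sp.Matrix([x - xT, y - yT])

div_z = sp.diff(z[0], x) + sp.diff(z[1], y)
assert sp.simplify(div_z + f) == 0                      # -div z_h = f_h on T
assert z.subs({x: xT, y: yT}) == sp.Matrix([gx, gy])    # value at barycenter
\end{verbatim}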
\subsection{Generalized reconstruction}
We consider the nonconforming discretization of the primal problem
given by the minimization of
\[
I_h(u_h) = \int_\Omega \phi(\nabla_{\! h} u_h) \dv{x}
- \int_\Omega f_h u_h \dv{x}
\]
in the set of all $u_h \in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$. Solutions
satisfy
\[
\big(D\phi(\nabla_{\! h} u_h),\nabla_{\! h} v_h\big) = (f_h,v_h)
\]
for all $v_h\in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$. The systematically obtained
discretization
of the dual problem consists in maximizing the discrete functional
\[
D_h(z_h) = - \int_\Omega \phi^*(\Pi_{h,0} z_h) \dv{x}
\]
for $z_h \in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$ subject to the constraint
\[
- \diver z_h = f_h.
\]
The existence of a solution $z_h$ follows from surjectivity properties
of the divergence operator restricted to ${\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$.
In contrast to consistent discretizations
of the dual problem, here the operator $\Pi_{h,0}$ is included
in defining $D_h$ leading to discrete duality relations.
It does not limit the coercivity properties
of the problem since for divergence-free vector fields in ${\mathcal{R}T}^0(\mathcal{T}_h)$
we have that $\Pi_{h,0} y_h = y_h$. In fact, including the operator
$\Pi_{h,0}$ has the interpretation of using quadrature which makes
the numerical realization substantially easier.
By imposing the divergence constraint via a Lagrange multiplier $\overline{u}_h$ one
finds that optimal pairs $(z_h,\overline{u}_h)\in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)\times \mathcal{L}^0(\mathcal{T}_h)$
satisfy the mixed formulation of the dual problem
\[\begin{split}
\big(D\phi^*(\Pi_{h,0} z_h), \Pi_{h,0} y_h\big) + (\overline{u}_h,\diver y_h) &= 0, \\
(\diver z_h,\overline{v}_h) \hspace*{4.1cm} &= -(f_h,\overline{v}_h),
\end{split}\]
for all $(y_h,\overline{v}_h) \in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)\times \mathcal{L}^0(\mathcal{T}_h)$. We claim that
we have
\[
z_h|_T(x) = D\phi(\nabla_{\! h} u_h|_T) - \frac{f_h|_T}{d} (x-x_T)
\]
and
\[
\overline{u}_h|_T = u_h(x_T)
\]
for all $T\in \mathcal{T}_h$. To see this, let ${\widetilde{z}}_h$ and $\widetilde{u}_h$ denote the right-hand sides
of the asserted identities for $z_h$ and $\overline{u}_h$. We have that $-\diver {\widetilde{z}}_h|_T = f_h|_T$
for all $T\in \mathcal{T}_h$, and
\begin{equation}\label{eq:dual_var}
\Pi_{h,0} {\widetilde{z}}_h = D\phi(\nabla_{\! h} u_h).
\end{equation}
Hence, for all $v_h\in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$ we have
\[
({\widetilde{z}}_h,\nabla_{\! h} v_h) = \big(D\phi(\nabla_{\! h} u_h),\nabla_{\! h} v_h\big)
= (f_h,v_h) = (z_h ,\nabla_{\! h} v_h),
\]
where we used an integration-by-parts formula for products of Raviart--Thomas
vector fields and gradients of Crouzeix--Raviart functions. Since
$\diver ({\widetilde{z}}_h-z_h)|_T = 0$ for every $T\in \mathcal{T}_h$, this identity
implies that ${\widetilde{z}}_h-z_h\in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$ and in particular that
${\widetilde{z}}_h \in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$. Using that $[D\phi^*]^{-1} = D\phi$ we find that
\[
D\phi^*(\Pi_{h,0} {\widetilde{z}}_h) = \nabla_{\! h} u_h.
\]
Since $\widetilde{u}_h$ coincides with the elementwise average of $u_h$ this implies
that
\[
\big(D\phi^*(\Pi_{h,0} {\widetilde{z}}_h),\Pi_{h,0} y_h\big) + ( \widetilde{u}_h, \diver y_h) = 0
\]
for all $y_h\in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$. Hence, we see that $({\widetilde{z}}_h,\widetilde{u}_h)$ solves the
mixed finite element formulation and in case of uniqueness coincides with the
pair $(z_h,\overline{u}_h)$. The crucial identity~\eqref{eq:dual_var} also implies the
important duality relation $I_h(u_h) = D_h(z_h)$.
It is also possible to construct the solution $u_h$ of the
nonconforming discretization from the pair $(z_h,\overline{u}_h)$ solving the mixed
formulation of the dual problem. One directly verifies that this is given by
\[
u_h(x) = \overline{u}_h|_T + D\phi^*(\Pi_{h,0} z_h|_T) \cdot (x-x_T)
\]
for every $T\in \mathcal{T}_h$ and all $x\in T$. The reconstruction formulas are
related to discrete Lagrange functionals, e.g.,
\[
L_h(u_h,z_h)
= \int_\Omega \nabla_{\! h} u_h \cdot z_h - \phi^*(\Pi_{h,0} z_h) - f_h \Pi_{h,0} u_h \dv{x},
\]
and imply weak and strong discrete duality principles. We note that
related reconstructions in the case of the $p$-Laplace problem have been
identified in~\cite{LiLiCh18}.
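The inversion formula $[D\phi^*]^{-1} = D\phi$ used above can be illustrated numerically. The following sketch checks it for the $p$-Laplace integrand $\phi(s) = |s|^p/p$, for which $D\phi(s) = |s|^{p-2}s$ and $D\phi^*(t) = |t|^{p'-2}t$; the exponent and the random test vectors are arbitrary choices for illustration.
\begin{verbatim}
import numpy as np

p = 3.0
pc = p / (p - 1.0)                                 # conjugate exponent p'

def Dphi(s):                                       # D phi(s) = |s|^(p-2) s
    ns = np.linalg.norm(s)
    return ns ** (p - 2) * s if ns > 0 else 0 * s

def Dphi_star(t):                                  # D phi^*(t) = |t|^(p'-2) t
    nt = np.linalg.norm(t)
    return nt ** (pc - 2) * t if nt > 0 else 0 * t

rng = np.random.default_rng(0)
for _ in range(100):
    s = rng.standard_normal(2)
    assert np.allclose(Dphi_star(Dphi(s)), s)      # [D phi^*]^{-1} = D phi
\end{verbatim}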
\subsection{Error estimates}
The discrete duality relation $I_h(u_h) \ge D_h(z_h)$ provides a
natural way to derive error estimates. With a coercivity functional
$\sigma_{I_h}$ that measures strong convexity properties of $I_h$,
we have for a minimizing $u_h$ that
\[
\sigma_{I_h}^2(u_h,v_h) \le I_h(v_h) - I_h(u_h)
\]
for every $v_h\in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$. Choosing $v_h = \mathcal{I}_{cr} u$
and using $I_h(u_h) \ge D_h(z_h) \ge D_h(\mathcal{I}_{\mathcal{R}T} z)$ leads to
\[
\delta_h^2 = \sigma_{I_h}^2(u_h,\mathcal{I}_{cr} u ) \le\int_\Omega \phi(\nabla_{\! h} \mathcal{I}_{cr} u) - f_h \mathcal{I}_{cr} u
+ \phi^*(\Pi_{h,0} \mathcal{I}_{\mathcal{R}T} z) \dv{x}.
\]
Noting that $-\diver \mathcal{I}_{\mathcal{R}T} z = f_h$ and using an integration-by-parts
formula show that
\[
\delta_h^2 \le \int_\Omega \phi(\nabla_{\! h} \mathcal{I}_{cr} u) -
\Pi_{h,0} \mathcal{I}_{\mathcal{R}T} z \cdot \nabla_{\! h} \mathcal{I}_{cr} u + \phi^*(\Pi_{h,0} \mathcal{I}_{\mathcal{R}T} z) \dv{x}.
\]
Fenchel's inequality implies that the integrand is
nonnegative and vanishes if $\nabla_{\! h} \mathcal{I}_{cr} u = D \phi^*(\Pi_{h,0} \mathcal{I}_{\mathcal{R}T} z)$.
The identity $\nabla_{\! h} \mathcal{I}_{cr} u = \Pi_{h,0} \nabla u$ in combination
with Jensen's inequality, the duality relation $I(u) = D(z)$, and an
integration by parts using $-\diver z= f$ lead to
\[\begin{split}
\delta_h^2 &\le \int_\Omega \phi(\nabla u) - \Pi_{h,0} \mathcal{I}_{\mathcal{R}T} z \cdot \nabla u
+ \phi^*(\Pi_{h,0}\mathcal{I}_{\mathcal{R}T} z) \dv{x} \\
&= \int_\Omega -\phi^*(z) + (z- \Pi_{h,0} \mathcal{I}_{\mathcal{R}T} z) \cdot \nabla u + \phi^*(\Pi_{h,0}\mathcal{I}_{\mathcal{R}T} z) \dv{x}.
\end{split}\]
Finally, using convexity of $\phi^*$, i.e.,
\[
\phi^*(\Pi_{h,0} \mathcal{I}_{\mathcal{R}T} z) \le
\phi^*(z) - D\phi^*(\Pi_{h,0} \mathcal{I}_{\mathcal{R}T} z) \cdot (z-\Pi_{h,0} \mathcal{I}_{\mathcal{R}T} z),
\]
and the relation $\nabla u = D\phi^*(z)$ lead to the general error estimate
\[
\delta_h^2 \le
\int_\Omega \big( D\phi^*(z) - D\phi^*(\Pi_{h,0} \mathcal{I}_{\mathcal{R}T} z)\big)
\cdot (z-\Pi_{h,0} \mathcal{I}_{\mathcal{R}T} z)\dv{x}.
\]
In case of a Lipschitz continuous mapping $D\phi^*$ and a regularity
property $z\in W^{1,2}(\Omega;\mathbb{R}^d)$ we directly deduce a linear convergence
rate for $\delta_h$. The estimate and conceptual approach apply however to a
significantly larger class of
variational problems including nonsmooth problems. We remark that the same
upper bound is obtained for the error in approximating
the dual variable, i.e., for $\sigma_{D_h}^2(z_h,\mathcal{I}_{\mathcal{R}T} z)$.
The error estimate can be improved by incorporating
strong convexity properties of $\phi^*$. For the Poisson problem the derivation
then corresponds to the estimates
\[
\|\nabla_{\! h} (u_h - \Pi_{h,0}\mathcal{I}_{cr} u) \|
\le \|\nabla_{\! h} \mathcal{I}_{cr} u - \Pi_{h,0}\mathcal{I}_{\mathcal{R}T} z \|
\le \|z-\Pi_{h,0} \mathcal{I}_{\mathcal{R}T} z\|,
\]
i.e., the discretization error related to the nonconforming discretization with
the Crouzeix--Raviart element is controlled by the interpolation error
for approximating the flux variable in the Raviart--Thomas finite element space.
By making use of interpolation estimates and the triangle inequality this
estimate implies the well known error estimate
\[
\|\nabla_{\! h} u_h - \nabla u \| \le c h \|D^2 u\|.
\]
The derivation given here circumvents the use of a Strang lemma, cf.~\cite{BreSco08-book},
or the decomposition of functions as in~\cite{Gudi10}, to control nonconformity errors.
Another application of duality relations arises in a~posteriori error estimates
for conforming discretizations \cite{Repi00,Brae09}.
If $u_h^c \in W^{1,p}_D(\Omega)$ is a conforming approximation of the exact solution~$u$ and if we assume
for simplicity that $f = f_h$, so that $I_h=I$ and $D_h = D$
on the discrete spaces, then for all $z_h\in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$ with
$-\diver z_h = f_h$ we have
\[\begin{split}
\sigma_I^2(u,u_h^c)
& \le I(u_h^c) - I(u) \le I(u_h^c) - D(z_h) \\
&= \int_\Omega \phi(\nabla u_h^c) - z_h \cdot \nabla u_h^c
+ \phi^*(z_h) \dv{x} =: \frac12 \eta^2(u_h^c,z_h).
\end{split}\]
By Fenchel's inequality the integrand on the right-hand side is nonnegative and
vanishes if the
optimality condition $\nabla u_h^c = D\phi^*(z_h)$ holds which can in
general not be satisfied on the discrete level. The optimal choice
of $z_h$ solves the discrete dual problem which by the arguments given
above is obtained via solving the nonconforming discretization and using the
reconstructed flux
\[
z_h = D\phi(\nabla_{\! h} u_h) - (f_h/d) (\cdot-x_T).
\]
For the Poisson problem we deduce the estimate
\[
\|\nabla (u_h^c -u) \| \le \eta(u_h^c,z_h) = \|\nabla u_h^c - z_h \|,
\]
and with the reconstruction relation $z_h = \nabla_{\! h} u_h - (f_h/d) (\cdot-x_T)$ in
case that $\nabla u_h^c$ is elementwise constant,
\[
\|\nabla (u_h^c -u) \| \le \|\nabla u_h^c - \nabla_{\! h} u_h\|
+ \|(f_h/d)(\cdot -x_T)\| = \widetilde{\eta}(u_h^c,u_h).
\]
The error estimator $\widetilde{\eta}(u_h^c,u_h)$ is
also efficient, as follows from an application of the triangle inequality and the
equivalence of the conforming and nonconforming methods in the case of the
Poisson problem, cf.~\cite{Bren15}.
\subsection{Outline}
The article is organized as follows. We collect various relevant facts
about Crouzeix--Raviart and Raviart--Thomas finite element spaces in
Section~\ref{sec:fem_prelim}. In Section~\ref{sec:general} we present
a general theory leading to an
error estimate for differentiable convex minimization problems and
a general flux reconstruction formula. Nonsmooth problems including
a quadratic obstacle problem, a total-variation regularized problem, and
an infinity Laplace problem require certain modifications and
are discussed in Section~\ref{sec:nonsmooth}. In preparation of
numerical experiments we devise iterative algorithms for the
practical realization in Section~\ref{sec:iterative}. The results
of various numerical experiments that reveal certain advantages of
nonconforming methods are presented in Section~\ref{sec:num_ex}.
\section{Finite element spaces}\label{sec:fem_prelim}
Throughout what follows we let $(\mathcal{T}_h)_{h>0}$ be a sequence of
regular triangulations of
the bounded polyhedral Lipschitz domain $\Omega\subset \mathbb{R}^d$ into triangles
or tetrahedra for $d=2$ and $d=3$, respectively. We let $P_k(T)$ denote
the set of polynomials of maximal total degree $k$ on $T\in \mathcal{T}_h$ and
define the set of discontinuous, elementwise polynomial functions or
vector fields
\[
\mathcal{L}^k(\mathcal{T}_h)^\ell = \{ w_h \in L^\infty(\Omega;\mathbb{R}^\ell): w_h|_T \in P_k(T)
\text{ for all }T\in \mathcal{T}_h\}.
\]
The parameter $h>0$ refers to the maximal mesh-size of the triangulation
$\mathcal{T}_h$. The set of sides of elements is denoted by $\mathcal{S}_h$. We let
$x_S$ and $x_T$ denote the midpoints (barycenters) of sides and elements,
respectively. The $L^2$ projection onto piecewise constant functions or
vector fields is denoted by
\[
\Pi_{h,0} : L^1(\Omega;\mathbb{R}^\ell) \to \mathcal{L}^0(\mathcal{T}_h)^\ell.
\]
For an elementwise affine function it corresponds to the evaluation
at element midpoints. Standard notation is used for Sobolev spaces, in
particular
\[\begin{split}
W^{1,p}_D(\Omega) &= \{v\in W^{1,p}(\Omega): v|_{\Gamma_{\rm D}} = 0 \}, \\
W^q_{\!N}(\diver;\Omega) &= \{ y \in L^q(\Omega;\mathbb{R}^d): \diver y \in L^q(\Omega), \,
y\cdot n = 0 \text{ on }{\Gamma_{\rm N}}\}.
\end{split}\]
We let $BV(\Omega)$ denote the space of functions in $L^1(\Omega)$ with finite
total variation denoted $|\DD u|(\Omega)$. Most estimates derived below follow
from the boundedness of the trace operator
\[
\trace: W^{1,p}(\Omega;\mathbb{R}^\ell) \to L^p(\partial\Omega;\mathbb{R}^\ell), \quad v \mapsto v|_{\partial\Omega},
\]
and the Poincar\'e inequality
\[
\|v-\overline{v} \|_{L^p(\omega)} \le c_{p,\omega} \diam(\omega) \|\nabla v \|_{L^p(\omega)}, \quad
\overline{v} = |\omega|^{-1} \int_\omega v \dv{x},
\]
for Lipschitz domains $\omega\subset \Omega$, functions $v\in W^{1,p}(\Omega;\mathbb{R}^\ell)$ with
mean integral $\overline{v}$ on $\omega$, and $1\le p \le \infty$. We occasionally
make use of indicator functionals, which are, for sets $K\subset X$, defined
by
\[
I_K(s) =
\begin{cases}
+\infty & \mbox{for } s\not \in K, \\ 0 & \mbox{for } s\in K,
\end{cases}
\]
for every $s\in X$. For details on the properties of finite element methods
listed below we refer the reader to~\cite{Ciar78-book,BoBrFo13,BreSco08-book,ErnGue04-book,Bart16-book}.
\subsection{Crouzeix--Raviart finite elements}
The Crouzeix--Raviart finite element space of lowest order consists
of piecewise affine functions that are continuous at the midpoints of
sides of elements, i.e.,
\[
\mathcal{S}^{1,{cr}}(\mathcal{T}_h) = \{v_h \in \mathcal{L}^1(\mathcal{T}_h): v_h \text{ continuous in
$x_S$ for all $S\in \mathcal{S}_h$} \}.
\]
The space provides nonconforming approximations of Sobolev spaces
$W^{1,p}(\Omega)$. The elementwise application of the gradient operator
to a function $v_h\in \mathcal{S}^{1,{cr}}(\mathcal{T}_h)$ defines an elementwise
constant vector field $\nabla_{\! h} v_h$ via
\[
\nabla_{\! h} v_h|_T = \nabla (v_h|_T)
\]
for all $T\in \mathcal{T}_h$. For weakly differentiable functions
$v\in W^{1,p}(\Omega)$ we have $\nabla_{\! h} v = \nabla v$.
The subset of functions vanishing at midpoints
of boundary sides on ${\Gamma_{\rm D}}$ is denoted by
\[
\mathcal{S}^{1,{cr}}_D(\mathcal{T}_h) = \{v_h\in \mathcal{S}^{1,{cr}}(\mathcal{T}_h): v_h(x_S)=0
\text{ for all $S\in \mathcal{S}_h$ with $S\subset {\Gamma_{\rm D}}$}\}.
\]
We note that the jump of a function $v_h\in\mathcal{S}^{1,{cr}}(\mathcal{T}_h)$ over
an inner element side $S\in \mathcal{S}_h$ with neighboring elements $T_-,T_+\in \mathcal{T}_h$,
defined by
\[
[v_h](x) = v_h|_{T_+}(x) - v_h|_{T_-}(x),
\]
has vanishing integral mean over $S$. Similarly, if
$v_h \in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$ then the integral of $v_h|_S$ vanishes
on every boundary side $S\in \mathcal{S}_h\cap {\Gamma_{\rm D}}$. A basis of the
space $\mathcal{S}^{1,{cr}}(\mathcal{T}_h)$ is given by the functions
$\varphi_S \in \mathcal{S}^{1,{cr}}(\mathcal{T}_h)$, $S\in \mathcal{S}_h$, satisfying
\[
\varphi_S(x_{S'}) = \delta_{S,S'}
\]
for all $S,S'\in \mathcal{S}_h$. The function $\varphi_S$ vanishes on elements that
do not contain the side $S$ and is continuous with value~1 on $S$. A
quasi-interpolation operator is for $v\in W^{1,p}(\Omega)$ defined via
\[
\mathcal{I}_{cr} v = \sum_{S\in \mathcal{S}_h} v_S \varphi_S, \quad v_S = |S|^{-1} \int_S v \dv{s}.
\]
Since $\mathcal{I}_{cr}$ is bounded
and preserves affine functions and averages of gradients, i.e., $\nabla_{\! h} \mathcal{I}_{cr} v =
\Pi_{h,0} \nabla v$, we have the estimates
\[
\|v-\mathcal{I}_{cr} v \|_{L^p(\Omega)} \le c_{cr} h \|\nabla v\|_{L^p(\Omega)},
\quad \|\nabla_{\! h} \mathcal{I}_{cr} v \|_{L^p(\Omega)} \le \|\nabla v \|_{L^p(\Omega)}
\]
for all $v\in W^{1,p}(\Omega)$, $1\le p\le \infty$. Moreover, we have
$\|\mathcal{I}_{cr} v \|_{L^\infty(\Omega)} \le c_d \|v\|_{L^\infty(\Omega)}$
with $c_d = (d-1)(d+1)$. For $v\in W^{2,p}(\Omega)$ with $1\le p \le \infty$ we
also have the interpolation estimates
\[
\|v-\mathcal{I}_{cr} v\|_{L^p(\Omega)} + h \|\nabla_{\! h} \mathcal{I}_{cr} v - \nabla v\|_{L^p(\Omega)}
\le c_{cr}' h^2 \|D^2 v\|_{L^p(\Omega)}.
\]
Finally, we note that there exists a linear
enriching operator
\[
E_h^{cr}: \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h) \to W^{1,p}_D(\Omega)
\]
such that
\[
\|\nabla E_h^{cr} v_h \|_{L^p(\Omega)} + h^{-1}
\|E_h^{cr} v_h - v_h\|_{L^p(\Omega)} \le c_E \|\nabla_{\! h} v_h\|_{L^p(\Omega)}
\]
for $1\le p< \infty$, cf.~\cite{Bren15} in case $p=2$ and
Appendix~\ref{app:enrich} for $p\neq 2$.
\subsection{Raviart--Thomas finite elements}
The lowest order Raviart--Thomas finite element space is defined as
\[\begin{split}
{\mathcal{R}T}^0(\mathcal{T}_h) = \{y_h\in & W^1(\diver;\Omega): y_h|_T(x) = a_T + b_T (x-x_T), \\
& a_T\in \mathbb{R}^d, \, b_T\in \mathbb{R} \text{ for all $T\in\mathcal{T}_h$} \}.
\end{split}\]
Vector fields in ${\mathcal{R}T}^0(\mathcal{T}_h)$ have continuous constant normal components
on element sides. The subset of vector fields with
vanishing normal component on the Neumann boundary ${\Gamma_{\rm N}}$ is defined as
\[
{\mathcal{R}T}^0_{\!N}(\mathcal{T}_h) = \{ y_h\in {\mathcal{R}T}^0(\mathcal{T}_h): y_h \cdot n = 0 \text{ on ${\Gamma_{\rm N}}$}\},
\]
where $n$ denotes the outer unit normal on $\partial\Omega$. A basis of the
space ${\mathcal{R}T}^0(\mathcal{T}_h)$ is given by vector fields $\psi_S$, $S\in \mathcal{S}_h$,
supported on adjacent elements with
\begin{equation}\label{eq:def_rt_basis}
\psi_S(x) = \pm \frac{|S|}{d |T_\pm|} (z_{S,T_\pm} - x)
\end{equation}
for $x\in T_\pm$ with opposite vertex $z_{S,T_\pm}$ to $S\subset \partial T_\pm$.
We have that $\psi_S|_{S'} \cdot n_{S'}=0$
for all sides $S'\neq S$ with unit normal vector $n_{S'}$. If $n_S$ is the
unit normal vector on $S$ and points from $T_-$ into $T_+$ then we have
$\psi_S|_S \cdot n_S =1$. A quasi-interpolation operator is for vector fields
$z\in W^{1,1}(\Omega;\mathbb{R}^d)$ given by
\[
\mathcal{I}_{\mathcal{R}T} z = \sum_{S\in\mathcal{S}_h} z_S \psi_S, \quad
z_S = |S|^{-1} \int_S z \cdot n_S \dv{s}.
\]
The operator $\mathcal{I}_{\mathcal{R}T}$ is bounded on $C^0(\overline{\Omega};\mathbb{R}^d)$ and we have
\[
\|z-\mathcal{I}_{\mathcal{R}T} z \|_{L^p(\Omega)} \le c_{\mathcal{R}T} h \|\nabla z\|_{L^p(\Omega)}
\]
and $\diver \mathcal{I}_{\mathcal{R}T} z = \Pi_{h,0} \diver z$ for all $z\in W^{1,p}(\Omega;\mathbb{R}^d)$.
The latter property implies that the divergence operator defines a surjection
from ${\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$ into $\mathcal{L}^0(\mathcal{T}_h)$, provided that constants
are eliminated from $\mathcal{L}^0(\mathcal{T}_h)$ if ${\Gamma_{\rm D}} = \emptyset$.
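As a quick consistency check of the normalization in~\eqref{eq:def_rt_basis}, the following symbolic computation evaluates, on the reference triangle, the single-element field $\frac{|S|}{d|T|}(x - z_{S,T})$ (which agrees with $\psi_S$ up to the sign convention for $n_S$) and confirms that its normal component equals one on $S$ and vanishes on the remaining sides; the element and side are chosen only for illustration.
\begin{verbatim}
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
# reference triangle with vertices (0,0), (1,0), (0,1);
# S is the hypotenuse and z_{S,T} = (0,0) the opposite vertex
d, area_T, len_S = 2, sp.Rational(1, 2), sp.sqrt(2)
psi = len_S / (d * area_T) * sp.Matrix([x1, x2])   # |S| / (d |T|) (x - z_{S,T})

n_S = sp.Matrix([1, 1]) / sp.sqrt(2)               # unit normal of the hypotenuse
assert sp.simplify(psi.dot(n_S).subs(x2, 1 - x1)) == 1   # unit normal flux on S
assert psi.dot(sp.Matrix([0, -1])).subs(x2, 0) == 0      # side x2 = 0
assert psi.dot(sp.Matrix([-1, 0])).subs(x1, 0) == 0      # side x1 = 0
\end{verbatim}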
\subsection{Orthogonality relations}
An elementwise integration by parts implies that for $v_h\in \mathcal{S}^{1,{cr}}(\mathcal{T}_h)$
and $y_h\in {\mathcal{R}T}^0(\mathcal{T}_h)$ we have the integration-by-parts formula
\begin{equation}\label{eq:int_parts_rt_cr}
\int_\Omega \nabla_h v_h \cdot y_h \dv{x} + \int_\Omega v_h \diver y_h \dv{x}
= \int_{\partial\Omega} v_h \, y_h \cdot n \dv{s}.
\end{equation}
Here we used that $y_h$ has continuous constant normal components on inner element
sides and that jumps of $v_h$ have vanishing integral mean. If
an elementwise constant vector field $w_h\in \mathcal{L}^0(\mathcal{T}_h)^d$ satisfies
\[
\int_\Omega w_h \cdot \nabla_{\! h} v_h \dv{x} = 0
\]
for all $v_h\in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$ then its normal components are
continuous on inner element sides and vanish on ${\Gamma_{\rm N}}$, so that
it belongs to ${\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$. The following elementary identity
is used repeatedly.
\begin{lemma}[Exchange of projections]\label{la:exchange_projections}
For $z\in W^p_N(\diver;\Omega) \cap W^{1,1}(\Omega;\mathbb{R}^d)$ and $u\in W^{1,p}_D(\Omega)$
and their interpolants ${\widetilde{z}}_h= \mathcal{I}_{\mathcal{R}T} z \in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$ and
$\widetilde{u}_h =\mathcal{I}_{cr} u\in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$ we have
\[
\int_\Omega \diver z (u - \Pi_{h,0} \widetilde{u}_h) \dv{x}
+ \int_\Omega \nabla u \cdot (z - \Pi_{h,0} {\widetilde{z}}_h) \dv{x} = 0.
\]
\end{lemma}
\begin{proof}
Since $\diver {\widetilde{z}}_h = \Pi_{h,0} \diver z$ and $\nabla_{\! h} \widetilde{u}_h = \Pi_{h,0} \nabla u$,
we verify that
\[\begin{split}
\int_\Omega \diver z (u - \Pi_{h,0} \widetilde{u}_h) \dv{x}
&= - \int_\Omega z \cdot \nabla u + \diver {\widetilde{z}}_h \widetilde{u}_h \dv{x} \\
&= - \int_\Omega \nabla u \cdot (z - \Pi_{h,0} {\widetilde{z}}_h) \dv{x},
\end{split}\]
which proves the asserted equality.
\end{proof}
\subsection{Convex conjugates}
Given a proper, convex, and lower semicontinuous functional $\phi:\mathbb{R}^d \to \R\cup \{+\infty\}$
the convex conjugate $\phi^*:\mathbb{R}^d \to \R\cup \{+\infty\}$ is defined via
\[
\phi^*(t) = \sup_{s\in\mathbb{R}^d} t\cdot s - \phi(s).
\]
The function $\phi^*$ is proper, convex, and lower semicontinuous and we have
the relations
\[
\phi^{**} = \phi, \quad s = D \phi^*\big(D \phi(s)\big),
\]
where the second identity can be generalized to subdifferentials. We refer
the reader to~\cite{Rock70-book} for details and note the Fenchel--Young
inequality which states that for $s,t\in \mathbb{R}^d$ we have
\[
t\cdot s \le \phi(s) + \phi^*(t)
\]
with equality if and only if $t= D\phi(s)$.
Certain duality relations can be transferred to discretizations of variational
problems. We provide a modified version and a different proof of an
important formula identified in~\cite{ChaPoc19-pre}.
\begin{proposition}[Discrete duality]\label{prop:discrete_duality}
Given $\overline{u}_h\in \Pi_{h,0} \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h) \subset \mathcal{L}^0(\mathcal{T}_h)$ we have
\[\begin{split}
\inf & \Big\{ \int_\Omega \phi\big(\nabla_{\! h} u_h\big) \dv{x} :
u_h \in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h), \, \Pi_{h,0} u_h = \overline{u}_h \Big\} \\
& \ge
\sup \Big\{ - \int_\Omega \phi^* \big(\Pi_{h,0} z_h\big) + \overline{u}_h \diver z_h \dv{x}:
z_h \in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h) \Big\}.
\end{split}\]
If $\phi \in C^1(\mathbb{R}^d)$ then equality holds.
\end{proposition}
\begin{proof}
We let $L(\overline{u}_h)$ and $R(\overline{u}_h)$ denote the terms on the left- and right-hand side of the
asserted inequality and show that $R(\overline{u}_h)\le L(\overline{u}_h)$. For this, let
$u_h\in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$ with $\Pi_{h,0} u_h = \overline{u}_h$. Given any
$z_h \in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$ we then have that
\[
- \int_\Omega \phi^* \big(\Pi_{h,0} z_h\big) + \overline{u}_h \diver z_h \dv{x}
= - \int_\Omega \phi^* \big(\Pi_{h,0} z_h\big) - \nabla_{\! h} u_h \cdot \Pi_{h,0} z_h \dv{x}.
\]
Hence, only the midpoint values of $z_h$ matter and the supremum is larger
if it is taken over elementwise constant vector fields $p_h \in \mathcal{L}^0(\mathcal{T}_h)^d$.
This corresponds to computing elementwise the values
$\phi(\nabla_{\! h} u_h) = \phi^{**}(\nabla_{\! h} u_h)$. Since $u_h$ is arbitrary
with $\Pi_{h,0} u_h = \overline{u}_h$ we deduce that $R(\overline{u}_h) \le L(\overline{u}_h)$. If
$\phi \in C^1(\mathbb{R}^d)$ then an optimal $u_h\in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$ for
$L(\overline{u}_h)$ satisfies
\[
\int_\Omega D\phi(\nabla_{\! h} u_h) \cdot \nabla_{\! h} v_h \dv{x}
+ \int_\Omega \mu_h \Pi_{h,0} v_h \dv{x} = 0,
\]
for all $v_h\in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$, where
$\mu_h\in \Pi_{h,0} \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)\subset \mathcal{L}^0(\mathcal{T}_h)$ is
a Lagrange multiplier related to the constraint $\Pi_{h,0} u_h = \overline{u}_h$.
For $T\in \mathcal{T}_h$ and $x\in T$ we define
\[
z_h(x) = D\phi(\nabla_{\! h} u_h|_T) + \frac{\mu_h|_T}{d} (x-x_T)
\]
and note that $\diver z_h|_T = \mu_h|_T$.
We choose an arbitrary element ${\widetilde{z}}_h \in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$ with $\diver {\widetilde{z}}_h = \mu_h$
and verify that the elementwise constant vector field $z_h -{\widetilde{z}}_h$ satisfies
\[
\int_\Omega (z_h -{\widetilde{z}}_h) \cdot \nabla_{\! h} v_h \dv{x}
= \int_\Omega \big(D\phi(\nabla_{\! h} u_h) - {\widetilde{z}}_h\big) \cdot \nabla_{\! h} v_h \dv{x}
= 0
\]
for all $v_h\in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$, i.e., $z_h -{\widetilde{z}}_h\in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$ and in particular
$z_h \in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$. The identity $\Pi_{h,0} z_h = D\phi(\nabla_{\! h} u_h)$
implies that
\[
\phi^*(\Pi_{h,0} z_h) = \Pi_{h,0} z_h \cdot \nabla_{\! h} u_h - \phi(\nabla_{\! h} u_h).
\]
An integration over $\Omega$ and the integration-by-parts formula~\eqref{eq:int_parts_rt_cr}
lead to
\[
\int_\Omega \phi(\nabla_{\! h} u_h)\dv{x} = - \int_\Omega \phi^*(\Pi_{h,0} z_h) + \overline{u}_h \diver z_h \dv{x},
\]
which implies that $L(\overline{u}_h) = R(\overline{u}_h)$.
\end{proof}
\begin{remark}
The condition $\phi \in C^1(\mathbb{R}^d)$ can be avoided provided there exists
a sequence of regularizations $\phi_\varepsilon$ of $\phi$ such that $\phi_\varepsilon$
and $\phi_\varepsilon^*$ converge uniformly to $\phi$ and $\phi^*$ on their
domains. This applies, e.g., to the truncated regularization
$\phi_\varepsilon(s) = \min\{|s|-\varepsilon/2,|s|^2/(2\varepsilon)\}$ of the modulus
for which we have $\phi^*_\varepsilon(t) = I_{K_1(0)}(t) + t^2/(2\varepsilon)$, where
$K_1(0)= \{t\in \mathbb{R}^d: |t|\le 1\}$.
\end{remark}
\section{General results}\label{sec:general}
We consider the minimization of the abstract functional
\[
I(u) = \int_\Omega \phi(\nabla u) \dv{x} + \int_\Omega \psi(x,u) \dv{x}
\]
in a Sobolev space $W^{1,p}_D(\Omega)$ for $1< p<\infty$ and $f\in L^{p'}(\Omega)$.
We assume that the convex and measurable integrands
\[
\phi:\mathbb{R}^d \to \mathbb{R}, \quad \psi :\Omega\times \mathbb{R} \to \R\cup \{+\infty\}
\]
are such that $I$ is bounded from below, coercive, not identically $+\infty$, and
weakly lower semicontinuous so that the direct method in the calculus of
variations implies the existence of a solution $u\in W^{1,p}_D(\Omega)$.
The dual problem consists in maximizing the functional
\[
D(z) = -\int_\Omega \phi^*(z) \dv{x} - \int_\Omega \psi^*(x,\diver z) \dv{x}
\]
in the space $W^{p'}_{\!N}(\diver;\Omega)$ with $p'=p/(p-1)$ and we assume that
a solution exists. We also assume the strong duality relation
\[
\inf_{u\in W^{1,p}_D(\Omega)} I(u) = \sup_{z\in W^{p'}_{\!N}(\diver;\Omega)} D(z)
\]
to hold and refer the reader to, e.g.,~\cite{AtBuMi06-book,Rock70-book}, for conditions
leading to this equality. We recall that in this case we have the relations
\[
z = D\phi(\nabla u), \quad \diver z = D \psi(u)
\]
for solutions $u$ and $z$, where $D\psi$ stands for the derivative of
$\psi$ with respect to the second argument. The derivatives can be replaced
by subdifferentials.
\subsection{Discrete duality}
The discrete primal problem is defined by minimizing the functional
\[
I_h(u_h) = \int_\Omega \phi(\nabla_{\! h} u_h) \dv{x} + \int_\Omega \psi_h(x,\Pi_{h,0} u_h) \dv{x}
\]
in the nonconforming finite element space $\mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$ with
suitable convex approximations $\psi_h$ of $\psi$ that are elementwise constant
with respect to the first argument.
The corresponding discrete dual problem consists in maximizing the
functional
\[
D_h(z_h) = -\int_\Omega \phi^*(\Pi_{h,0} z_h) \dv{x}
- \int_\Omega \psi_h^* (x,\diver z_h) \dv{x}
\]
in the set ${\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$.
\begin{proposition}[Duality relations]\label{prop:discrete_dual_strong}
The discrete primal and dual problems satisfy the
duality relation
\[
\inf_{u_h\in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)} I_h(u_h)
\ge \sup_{z_h\in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)} D_h(z_h).
\]
If $\phi$ and $\psi$ are differentiable
then solutions $u_h$ and $z_h$ are related via
\[
z_h(x) = D \phi\big(\nabla_h u_h|_T\big) + d^{-1} D \psi_h \big(x,u_h(x_T)\big) (x-x_T)
\]
for every $T\in\mathcal{T}_h$ and $x\in T$. The pair
$(z_h,\overline{u}_h) \in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)\times \mathcal{L}^0(\mathcal{T}_h)$
with $\overline{u}_h|_T = u_h(x_T)$ for all $T\in \mathcal{T}_h$
solves the corresponding saddle-point problem
\[\begin{split}
\big(D\phi^*(\Pi_{h,0} z_h),\Pi_{h,0} y_h\big) + (\overline{u}_h, \diver y_h) &= 0, \\
(\diver z_h, \overline{v}_h) - \big(D \psi_h(\overline{u}_h),\overline{v}_h\big)&= 0,
\end{split}\]
for all $(y_h,v_h) \in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)\times \mathcal{L}^0(\mathcal{T}_h)$. Moreover,
in this case strong duality applies, i.e., $I_h(u_h) = D_h(z_h)$.
\end{proposition}
\begin{proof}
We use the duality formula of Proposition~\ref{prop:discrete_duality} and
exchange extrema to verify that, with $u_h$ and $z_h$ denoting arbitrary
functions from the spaces $\mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$ and ${\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$, respectively,
and abbreviating $\mathcal{P}_h = \Pi_{h,0} \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h) \subset \mathcal{L}^0(\mathcal{T}_h)$,
\[\begin{split}
&\inf_{u_h} I_h(u_h)
= \inf_{\overline{u}_h\in \mathcal{P}_h} \inf_{u_h:\, \Pi_{h,0} u_h = \overline{u}_h} \int_\Omega \phi(\nabla_{\! h} u_h) \dv{x} + \int_\Omega \psi_h(\Pi_{h,0} u_h) \dv{x} \\
&\quad \ge \inf_{\overline{u}_h \in \mathcal{P}_h} \sup_{z_h} - \int_\Omega \phi^* \big(\Pi_{h,0} z_h\big) + \overline{u}_h \diver z_h \dv{x} + \int_\Omega \psi_h(\overline{u}_h) \dv{x} \\
&\quad\ge \inf_{\overline{u}_h \in \mathcal{L}^0(\mathcal{T}_h)} \sup_{z_h} - \int_\Omega \phi^* \big(\Pi_{h,0} z_h\big) + \overline{u}_h \diver z_h \dv{x} + \int_\Omega \psi_h(\overline{u}_h) \dv{x} \\
&\quad\ge \sup_{z_h} \inf_{\overline{u}_h \in \mathcal{L}^0(\mathcal{T}_h)} - \int_\Omega \phi^* \big(\Pi_{h,0} z_h\big) + \overline{u}_h \diver z_h \dv{x} + \int_\Omega \psi_h(\overline{u}_h) \dv{x}.
\end{split}\]
The infimum is eliminated by using the convex conjugate of $\psi_h$, i.e.,
by noting that
\[
\psi_h^*(x,t) = \sup_{s\in \mathbb{R}} t \, s - \psi_h(x,s) = - \inf_{s\in \mathbb{R}} -t \, s + \psi_h(x,s),
\]
we find that
\[
\inf_{\overline{u}_h\in \mathcal{L}^0(\mathcal{T}_h)} - \int_\Omega \overline{u}_h \diver z_h \dv{x} + \int_\Omega \psi_h(\overline{u}_h) \dv{x}
= - \int_\Omega \psi_h^*(\diver z_h) \dv{x}.
\]
This implies that we have
\[
\inf_{u_h\in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)} I_h(u_h) \ge \sup_{z_h \in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)} D_h(z_h).
\]
Assume that $\phi$ and $\psi$ are differentiable, let
$u_h$ be a solution of the primal problem, and let $z_h$ be defined
as in the proposition. Furthermore, let ${\widetilde{z}}_h\in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$ be such that
$\diver {\widetilde{z}}_h = D \psi_h (\Pi_{h,0} u_h)$. Since
$\diver z_h|_T = D \psi_h (u_h(x_T))$ for all $T\in \mathcal{T}_h$ it follows
that $z_h - {\widetilde{z}}_h \in \mathcal{L}^0(\mathcal{T}_h)^d$. Using the discrete Euler--Lagrange equations
\[
\int_\Omega D \phi(\nabla_{\! h} u_h) \cdot \nabla_{\! h} v_h \dv{x} + \int_\Omega D\psi_h(\Pi_{h,0} u_h) v_h \dv{x} = 0
\]
for all $v_h\in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$, we find that
\[
\int_\Omega (z_h-{\widetilde{z}}_h) \cdot \nabla_{\! h} v_h \dv{x}
= \int_\Omega D \phi(\nabla_{\! h} u_h) \cdot \nabla_{\! h} v_h \dv{x} + \int_\Omega \diver {\widetilde{z}}_h v_h \dv{x} = 0.
\]
Hence $z_h-{\widetilde{z}}_h \in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$ and in particular $z_h\in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$.
Since $\Pi_{h,0} z_h = D\phi(\nabla_{\! h} u_h)$ and $\diver z_h = D\psi_h(\Pi_{h,0}u_h)$ we verify that
\[\begin{split}
\phi^*(\Pi_{h,0} z_h ) &= \Pi_{h,0} z_h \cdot \nabla_{\! h} u_h - \phi(\nabla_{\! h} u_h), \\
\psi_h^*(\diver z_h) & = \diver z_h \Pi_{h,0} u_h - \psi_h(\Pi_{h,0} u_h).
\end{split} \]
Adding these identities and incorporating the integration-by-parts
formula~\eqref{eq:int_parts_rt_cr} implies that
\[
- \int_\Omega \phi^*(\Pi_{h,0} z_h) \dv{x} - \int_\Omega \psi_h^*(\diver z_h) \dv{x}
= \int_\Omega \phi(\nabla_{\! h} u_h) \dv{x} + \int_\Omega \psi_h ( \Pi_{h,0} u_h) \dv{x},
\]
i.e., that $I_h(u_h) = D_h(z_h)$. Noting that $D\phi^*(\Pi_{h,0} z_h) = \nabla_{\! h} u_h$
we find that the pair $(z_h,\overline{u}_h)$ solves the saddle-point problem.
\end{proof}
\begin{remark}
In general, the inclusion $\Pi_{h,0} \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h) \subset \mathcal{L}^0(\mathcal{T}_h)$
is strict, e.g., for $\mathcal{T}_h = \{T\}$, $\overline{\Omega} = T$, and ${\Gamma_{\rm D}} = \partial\Omega$. The implicit
treatment of Dirichlet boundary conditions in the dual formulation implies that
strong duality still applies.
\end{remark}
\subsection{$\Gamma$-convergence of $I_h$}
A general justification of the discrete problems $I_h$ as correct
discretizations of the functional $I$ is established via a $\Gamma$-convergence
result. For this, we extend the functionals $I$ and $I_h$ to functionals
$\widetilde{I}$ and $\widetilde{I}_h$ on $L^p(\Omega)$ by formally assigning the value $+\infty$
to arguments not belonging to $W^{1,p}_D(\Omega)$ and $\mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$,
respectively.
\begin{proposition}[$\Gamma$-convergence]
Assume that $\phi:\mathbb{R}^d\to \mathbb{R}$ satisfies
\[
\big|\phi(s)-\phi(r)\big|\le c_5 \big( 1+ |r| + |s|\big)^{p-1} |r-s|
\]
for all $s,r\in \mathbb{R}^d$ and that $\psi_h:\Omega\times \mathbb{R} \to \R\cup \{+\infty\}$ and
$\psi: \Omega\times \mathbb{R} \to \R\cup \{+\infty\}$ are related via
\[
\psi_h(\cdot,v_h) \to \psi(\cdot,v)
\]
in $L^1(\Omega)$ as $h\to 0$ whenever $\psi_h(\cdot,v_h) \in L^1(\Omega)$ for all $h>0$ and
$v_h \to v$ in $L^p(\Omega)$.
Then, the extended functionals $\widetilde{I}_h:L^p(\Omega)\to \R\cup \{+\infty\}$ are $\Gamma$-convergent
to $\widetilde{I}:L^p(\Omega)\to \R\cup \{+\infty\}$ as $h\to 0$ with respect to strong convergence
in $L^p(\Omega)$.
\end{proposition}
\begin{proof}
Let $(u_h)_{h>0}$ be such that $u_h\in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$
and $\liminf_{h\to 0} \widetilde{I}_h(u_h) < \infty$. The assumed coercivity property
implies that for a subsequence we have $\|\nabla_{\! h} u_h\|_{L^p(\Omega)} \le c$. Incorporating the
enriching operator $E_h^{cr}$ shows that there exists $u\in W^{1,p}_D(\Omega)$ such
that $u_h \to u$ in $L^p(\Omega)$ and $\nabla_{\! h} u_h \rightharpoonup \nabla u$ for an
appropriate subsequence as $h\to 0$.
Convexity of $\phi$ and the assumption on $\psi$ then imply that
\[
I(u) \le \liminf_{h\to 0} I_h(u_h).
\]
Given $u\in W^{1,p}_D(\Omega)$ we use regularizations of $u$ and the interpolation
operator $\mathcal{I}_{cr}$ to construct a sequence $(u_h)_{h>0}$ with
$u_h \in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$ such that
\[
\|u - u_h\|_{L^p(\Omega)} + \|\nabla_{\! h} u_h - \nabla u \|_{L^p(\Omega)} \to 0,
\]
as $h\to 0$. Noting that $\Pi_{h,0} u_h \to u$ in $L^p(\Omega)$ and using
the assumed local Lipschitz continuity of $\phi$
and the approximability condition on $\psi_h$ and $\psi$ we deduce that
\[\begin{split}
|I_h(u_h) - I(u)|
& \le c_5 \int_\Omega \big(1+|\nabla u| + |\nabla_{\! h} u_h|\big)^{p-1} |\nabla u -\nabla_{\! h} u_h| \dv{x} \\
&\qquad + \|\psi(u)-\psi_h(\Pi_{h,0} u_h) \|_{L^1(\Omega)}.
\end{split}\]
This implies that $I_h(u_h) \to I(u)$ as $h\to 0$.
\end{proof}
\subsection{Error estimate}
We next derive an abstract error estimate for the approximation of $I$
with the nonconforming discretization $I_h$. We assume that the functionals
$I_h$ provide a uniform strong coercivity property, i.e., with the variational
derivative $\delta I_h$, that
\[
I_h(v_h) + \delta I_h (v_h)[w_h-v_h] + \sigma_{I_h}^2(v_h,w_h) \le I_h(w_h)
\]
for all $v_h,w_h \in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$ with a definite functional
$\sigma_{I_h}$. This implies that minimizers for $I_h$ are unique.
\begin{theorem}[Discretization error]\label{thm:error_abstract}
For minimizers $u\in W^{1,p}_D(\Omega)$ and $u_h \in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$
of the functionals $I$ and $I_h$ and a dual solution
$z\in W^1_N(\diver;\Omega) \cap W^{1,1}(\Omega;\mathbb{R}^d)$ we have that
\[\begin{split}
\sigma_{I_h}^2(u_h,\mathcal{I}_{cr} u)
&\le \int_\Omega \big(D\phi^*(z)- D\phi^*(\Pi_{h,0} \mathcal{I}_{\mathcal{R}T} z )\big) \cdot (z- \Pi_{h,0} \mathcal{I}_{\mathcal{R}T} z) \dv{x} \\
& \quad + \int_\Omega \big(D\psi(u)- D\psi_h (\Pi_{h,0} \mathcal{I}_{cr} u)\big) \cdot (u-\Pi_{h,0} \mathcal{I}_{cr} u) \dv{x} \\
& \quad + \int_\Omega \psi_h(u)- \psi(u) \dv{x} + \int_\Omega \psi_h^*(\Pi_{h,0} \diver z) - \psi^*(\diver z) \dv{x}.
\end{split}\]
\end{theorem}
\begin{proof}
The interpolants $\mathcal{I}_{\mathcal{R}T} z$ and $\mathcal{I}_{cr} u$ are well defined and we abbreviate
\[
{\widetilde{z}}_h = \mathcal{I}_{\mathcal{R}T} z, \quad \widetilde{u}_h = \mathcal{I}_{cr} u.
\]
By minimality of $u_h$ and the duality relation $I_h(u_h) \ge D_h(\mathcal{I}_{\mathcal{R}T} z)$,
we have
\[\begin{split}
\sigma_{I_h}^2(u_h,\widetilde{u}_h)
& \le I_h(\widetilde{u}_h) -I_h(u_h) \le I_h(\widetilde{u}_h) - D_h({\widetilde{z}}_h) \\
&= \int_\Omega \phi(\nabla_{\! h} \widetilde{u}_h) + \psi_h( \Pi_{h,0} \widetilde{u}_h) + \phi^*(\Pi_{h,0} {\widetilde{z}}_h) + \psi_h^*(\diver {\widetilde{z}}_h) \dv{x}.
\end{split}\]
The identity $\Pi_{h,0} \nabla u = \nabla_{\! h} \widetilde{u}_h$
in combination with convexity of $\phi$ and Jensen's inequality on
every element leads to
\[\begin{split}
\sigma_{I_h}^2(u_h,\widetilde{u}_h)
& \le \int_\Omega \phi(\nabla u) + \psi_h( \Pi_{h,0} \widetilde{u}_h)
+ \phi^*(\Pi_{h,0} {\widetilde{z}}_h) + \psi_h^*(\diver {\widetilde{z}}_h) \dv{x} \\
& = \int_\Omega \phi(\nabla u) + \psi_h ( \Pi_{h,0} \widetilde{u}_h)
+ \phi^*(\Pi_{h,0} {\widetilde{z}}_h) + \psi^*(\diver z) \dv{x} \\
& \qquad + \int_\Omega \psi_h^*(\diver {\widetilde{z}}_h) - \psi^*(\diver z) \dv{x}.
\end{split}\]
The duality relation $I(u)=D(z)$ allows us to replace the sum of the
first and the last term in the first integral and implies that we have
\[\begin{split}
\sigma_{I_h}^2(u_h,\widetilde{u}_h)
& \le \int_\Omega - \psi(u) + \psi_h(\Pi_{h,0} \widetilde{u}_h) + \phi^*(\Pi_{h,0} {\widetilde{z}}_h) - \phi^*(z) \dv{x}\\
& \qquad + \int_\Omega \psi_h^*(\diver {\widetilde{z}}_h) - \psi^*(\diver z) \dv{x}.
\end{split}\]
We next use convexity of $\phi^*$ and $\psi_h$ at $\Pi_{h,0} {\widetilde{z}}_h$ and
$\Pi_{h,0} \widetilde{u}_h$, respectively, to deduce that
\[\begin{split}
\sigma_{I_h}^2 & (u_h,\widetilde{u}_h)
\le - \int_\Omega D \psi_h (\Pi_{h,0} \widetilde{u}_h) \cdot (u-\Pi_{h,0} \widetilde{u}_h) \dv{x} -\int_\Omega \psi(u)-\psi_h(u) \dv{x} \\
& - \int_\Omega D\phi^*(\Pi_{h,0} {\widetilde{z}}_h) \cdot (z- \Pi_{h,0} {\widetilde{z}}_h) \dv{x}
+ \int_\Omega \psi_h^*(\diver {\widetilde{z}}_h) - \psi^*(\diver z) \dv{x}.
\end{split}\]
Using the relations $D\phi^*(z) = \nabla u$ and $\diver z = D\psi(u)$
in Lemma~\ref{la:exchange_projections} implies that
\[
\int_\Omega D \phi^*(z) \cdot (z- \Pi_{h,0} {\widetilde{z}}_h) \dv{x}
= -\int_\Omega (u- \Pi_{h,0} \widetilde{u}_h) D\psi(u) \dv{x}.
\]
In combination with the previous estimate we deduce the error bound.
\end{proof}
\begin{remark}
If $\psi$ is independent of $x\in \Omega$ and $\psi_h = \psi$ then the last
two integrals on the right-hand side of the error estimate are nonpositive
by Jensen's inequality.
\end{remark}
\subsection{Examples}
The abstract error estimate applies to various partial
differential equations. For linear and quadratic low
order terms $\psi$ the corresponding error terms simplify, provided the
approximations $\psi_h$ are suitably chosen.
\begin{proposition}[Low order terms]\label{prop:low_order}
(i) Assume that for $f\in L^q(\Omega)$ and $f_h = \Pi_{h,0} f$ we have
\[
\psi(x,s) = -f(x) s, \quad \psi_h(x,s) = -f_h(x) s.
\]
Then the error estimate of Theorem~\ref{thm:error_abstract} reduces to
\[
\sigma_{I_h}^2(\widetilde{u}_h, u_h)
\le \int_\Omega \big(D\phi^*(z)- D\phi^*(\Pi_{h,0} \mathcal{I}_{\mathcal{R}T} z )\big) \cdot (z- \Pi_{h,0} \mathcal{I}_{\mathcal{R}T} z) \dv{x}.
\]
(ii) Assume that for $g\in L^2(\Omega)$ and $g_h = \Pi_{h,0} g$ we have
\[
\psi(x,s) = (g(x)-s)^2/2, \quad \psi_h(x,s) = (g_h(x)-s)^2/2.
\]
Then the error estimate of Theorem~\ref{thm:error_abstract} reduces to
\[\begin{split}
\sigma_{I_h}^2(\widetilde{u}_h, u_h)
&\le \int_\Omega \big(D\phi^*(z)- D\phi^*(\Pi_{h,0} \mathcal{I}_{\mathcal{R}T} z )\big) \cdot (z- \Pi_{h,0} \mathcal{I}_{\mathcal{R}T} z) \dv{x} \\
& \quad + \|u-\Pi_{h,0} \mathcal{I}_{cr} u\|^2 .
\end{split}\]
\end{proposition}
\begin{proof}
As above we abbreviate $\widetilde{u}_h = \mathcal{I}_{cr} u$ and ${\widetilde{z}}_h = \mathcal{I}_{\mathcal{R}T} z$.
In the first case we have
\[
\psi^*(x,t) = I_{\{-f(x)\}}(t), \quad \psi_h^*(x,t) = I_{\{-f_h(x)\}}(t).
\]
Hence, the last three integrals in the error estimate of Theorem~\ref{thm:error_abstract}
become
\[\begin{split}
E_\psi & = -\int_\Omega (f-f_h) (u-\Pi_{h,0}\widetilde{u}_h) \dv{x} - \int_\Omega f_h u - f u \dv{x} \\
& \qquad + \int_\Omega I_{\{-f_h\}}(\diver {\widetilde{z}}_h) - I_{\{-f\}}(\diver z) \dv{x},
\end{split}\]
so that $E_\psi = 0$ since $f-f_h$ is orthogonal to $\Pi_{h,0} \widetilde{u}_h$
and $\diver z = -f$ and $\diver {\widetilde{z}}_h = -f_h$. In the second case we have
\[
\psi^*(x,t) = \frac12 (t+g(x))^2 - \frac12 g(x)^2, \quad
\psi_h^*(x,t) = \frac12 (t+g_h(x))^2 - \frac12 g_h(x)^2.
\]
The corresponding error terms are given by
\[\begin{split}
E_\psi& = \int_\Omega \big((u-g)-(\Pi_{h,0} \widetilde{u}_h -g_h)\big) (u- \Pi_{h,0}\widetilde{u}_h) \dv{x}
+ \frac12 \int_\Omega (g_h-u)^2 - (g-u)^2 \dv{x} \\
& \quad
+ \frac12 \int_\Omega (\diver {\widetilde{z}}_h + g_h)^2 - g_h^2 - (\diver z + g)^2 + g^2\dv{x}.
\end{split}\]
The relation $\diver {\widetilde{z}}_h + g_h = \Pi_{h,0}(\diver z + g)$ in combination with
Jensen's inequality and elementary calculations imply that
\[
E_\psi \le \int_\Omega (u- \Pi_{h,0} \widetilde{u}_h)^2 \dv{x}.
\]
This proves the simplified error estimate.
\end{proof}
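The conjugate formulas used above can also be checked numerically. The following Python snippet (an illustration only, with an arbitrary value for $g$) approximates the Legendre transform of $\psi(s)=(g-s)^2/2$ on a grid and compares it with $\frac12(t+g)^2-\frac12 g^2$.
\begin{verbatim}
import numpy as np

# Pointwise check of psi*(t) = (t+g)^2/2 - g^2/2 for psi(s) = (g-s)^2/2.
g = 0.7
s = np.linspace(-50.0, 50.0, 200001)
for t in np.linspace(-3.0, 3.0, 7):
    conj_num = np.max(t*s - (g - s)**2 / 2)   # discrete Legendre transform
    assert abs(conj_num - ((t + g)**2/2 - g**2/2)) < 1e-6
\end{verbatim}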
\begin{remark}
Using the strong convexity of $\psi$ in case~(ii) of
Proposition~\ref{prop:low_order}, the factor~$1$ in front of the term
$\|u-\Pi_{h,0} \mathcal{I}_{cr} u\|^2$ can be replaced by~$1/2$.
\end{remark}
Typical choices for the function $\phi$ correspond to $p$-Laplace equations.
\begin{example}[$p$-Dirichlet problems]\label{ex:p_laplace}
For $1<p<\infty$ let $\phi(s) = |s|^p/p$ and $\psi(x,s) = -f(x)s$ for
$f\in L^q(\Omega)$ with $q = p' =p/(p-1)$. Noting that $\phi^*(t) = |t|^q/q$ we define
\[
F(a) = |a|^{(p-2)/2} a, \quad
\widetilde{S}(v) = D\phi^*(v) = |v|^{q-2} v, \quad
\widetilde{F}(v) = |v|^{(q-2)/2} v.
\]
We abbreviate ${\widetilde{z}}_h = \mathcal{I}_{\mathcal{R}T} z$ and use inequalities from~\cite{DiEbRu07} which are
explained in Appendix~\ref{app:p_laplace} to verify that the error estimate
of Theorem~\ref{thm:error_abstract} becomes
\[\begin{split}
c_p \big\|F(\nabla_{\! h} \mathcal{I}_{cr} u) - F(\nabla_{\! h} u_h)\big\|^2
& \le \int_\Omega \big(\widetilde{S}(z)- \widetilde{S}(\Pi_{h,0} {\widetilde{z}}_h)\big) \cdot (z- \Pi_{h,0} {\widetilde{z}}_h)\dv{x} \\
& \le c_p' \|\widetilde{F}(z) - \widetilde{F}(\Pi_{h,0} {\widetilde{z}}_h)\|^2.
\end{split}\]
The right-hand side can be bounded using techniques from~\cite{DieRuz07}
provided $\widetilde{F}(z)\in W^{1,2}(\Omega;\mathbb{R}^d)$. The results provided there also imply that
$\|F(\nabla_{\! h} \mathcal{I}_{cr} u) - F(\nabla u)\| \le c h \|\nabla F(\nabla u)\|$.
The estimate confirms error estimates from~\cite{BarLiu93,LiuYan01,DieRuz07}.
\end{example}
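As an elementary illustration of the duality mapping used in this example, the following Python snippet verifies numerically that $\widetilde{S}=D\phi^*$ inverts $D\phi(s)=|s|^{p-2}s$; the exponent $p=3$ and the random samples are illustrative choices.
\begin{verbatim}
import numpy as np

# Check that D phi*(D phi(s)) = s for phi(s) = |s|^p/p, phi*(t) = |t|^q/q,
# with q = p/(p-1).
rng = np.random.default_rng(1)
p = 3.0
q = p / (p - 1.0)

def Dphi(s):       # D phi(s) = |s|^{p-2} s
    return np.linalg.norm(s)**(p - 2) * s

def Dphi_star(t):  # D phi*(t) = |t|^{q-2} t
    return np.linalg.norm(t)**(q - 2) * t

for _ in range(5):
    s = rng.standard_normal(2)
    assert np.allclose(Dphi_star(Dphi(s)), s)
\end{verbatim}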
\section{Nonsmooth problems}\label{sec:nonsmooth}
We discuss in this section necessary adjustments of the general theory
to apply it to nondifferentiable problems, where, e.g., well-posedness
and the admissibility of modified interpolants have to be ensured.
\subsection{Obstacle problem}
We consider a prototypical obstacle problem defined by minimizing
\[
I(u) = \frac12 \int_\Omega |\nabla u|^2 \dv{x} - \int_\Omega f u \dv{x} + I_+(u)
\]
in the set $W^{1,2}_D(\Omega)$, where $I_+$ is the indicator functional of
functions that are nonnegative almost everywhere. With the choice
\[
\psi(x,s) = -f(x) s + I_+(s)
\]
we have
\[
\psi^*(x,t) = I_-(t+f(x)).
\]
The dual problem thus determines a maximizing $z\in L^2(\Omega;\mathbb{R}^d)$ for
\[
D(z) = - \frac12 \int_\Omega |z|^2 \dv{x} - I_-(\diver z + f),
\]
where the indicator functional $I_-$ is finite if $\diver z + f$ is nonpositive
as a functional on $W^{1,2}_D(\Omega)$. We have $z=\nabla u$ and
a complementarity principle implies that $\diver z + f =0$ whenever $u>0$.
We remark that general obstacles $\chi \in H^1_D(\Omega)$ can be treated
via a substitution $u = \widetilde{u}+ \chi$ which leads to a modified function
$f$ provided that $\Delta \chi \in L^2(\Omega)$.
\subsubsection*{Discretization}
The discrete primal problem imposes the obstacle constraint at midpoints
of elements, i.e., we consider
\[
I_h(u_h) = \frac12 \int_\Omega |\nabla_{\! h} u_h|^2 \dv{x}
- \int_\Omega f_h u_h \dv{x}
+ I_+(\Pi_{h,0} u_h),
\]
where $f_h = \Pi_{h,0} f$. Proposition~\ref{prop:discrete_dual_strong}
shows that the discrete dual problem consists in determining a
maximizing vector field $z_h\in {\mathcal{R}T}^0_N(\mathcal{T}_h)$ for
\[
D_h(z_h) = -\frac12 \int_\Omega |\Pi_{h,0} z_h|^2 \dv{x} - I_- (\diver z_h + f_h).
\]
Adopting the ideas of the general error analysis leads to a quasi-optimal
error estimate. We note that imposing the obstacle condition at midpoints
of elements instead of midpoints of element sides as in the two-dimensional
setting considered in~\cite{CarKoh17} simplifies the error analysis.
\begin{proposition}[Error estimate]
Let $u\in H^1_D(\Omega)$ and $u_h\in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$ be the solutions
of the primal and discrete primal problem, respectively. If
the solution $z\in L^2(\Omega;\mathbb{R}^d)$ of the dual problem satisfies $z \in H^1(\Omega;\mathbb{R}^d)$
then we have
\[
\|\nabla_{\! h} (u_h - u)\| \le c h \big(\|D^2 u\| + \|f+\diver z\|\big).
\]
\end{proposition}
\begin{proof}
Throughout this proof we abbreviate $\widetilde{u}_h = \mathcal{I}_{cr} u$ and ${\widetilde{z}}_h= \mathcal{I}_{\mathcal{R}T} z$.
Minimality of $u_h$, strong convexity of $I_h$, and discrete duality imply that
\[
\delta_h^2 = \frac12 \|\nabla_{\! h} (u_h - \widetilde{u}_h)\|^2 \le I_h(\widetilde{u}_h) - D_h({\widetilde{z}}_h).
\]
The relation $\nabla \widetilde{u}_h = \Pi_{h,0} \nabla u$ in combination with Jensen's
inequality and the identity $I(u) = D(z)$ show that
\[
\delta_h^2 \le -\frac12 \int_\Omega |z|^2 \dv{x} + \int_\Omega f u - f_h \Pi_{h,0}\widetilde{u}_h \dv{x}
+ \frac12 \int_\Omega |\Pi_{h,0} {\widetilde{z}}_h|^2 \dv{x}.
\]
Using Lemma~\ref{la:exchange_projections} with $\nabla u = z$
and noting $f_h =\Pi_{h,0} f$ leads to
\[
\delta_h^2 \le \int_\Omega (f+\diver z) (u - \Pi_{h,0}\widetilde{u}_h) \dv{x}
+ \frac12 \int_\Omega |z-\Pi_{h,0} {\widetilde{z}}_h|^2 \dv{x}.
\]
We abbreviate $\mu = f + \diver z \in L^2(\Omega)$ and insert $\widetilde{u}_h = \mathcal{I}_{cr} u$
to rewrite the first term on the right-hand side as
\[
\int_\Omega \mu (u-\Pi_{h,0}\widetilde{u}_h) \dv{x} =
\int_\Omega \mu (u- \widetilde{u}_h) \dv{x} + \int_\Omega \mu (\widetilde{u}_h - \Pi_{h,0}\widetilde{u}_h) \dv{x}.
\]
To deduce the error estimate it remains to bound the second term on the
right-hand side. For $T\in \mathcal{T}_h$ let $\mathcal{C}_T = \{x\in T: u(x)=0\}$
and note that $\mu|_{T\setminus \mathcal{C}_T} = 0$. Since $\nabla u = 0$
almost everywhere on $\mathcal{C}_T$ and since $\Pi_{h,0} \widetilde{u}_h|_T = \widetilde{u}_h(x_T)$
it follows from $\widetilde{u}_h(x) = \widetilde{u}_h(x_T) + \nabla_{\! h} \widetilde{u}_h|_T \cdot (x-x_T)$
that
\[\begin{split}
\int_T \mu (\widetilde{u}_h - \Pi_{h,0}\widetilde{u}_h) \dv{x}
&= \int_{\mathcal{C}_T} \mu \, \nabla_{\! h} (\widetilde{u}_h-u) \cdot (x-x_T) \dv{x} \\
&\le h_T \|\mu\|_{L^2(T)} \|\nabla_{\! h} (\widetilde{u}_h-u)\|_{L^2(T)}.
\end{split}\]
We thus deduce that
\[
\delta_h^2 \le \|\mu\|\big(\|u - \widetilde{u}_h\| + h \|\nabla_{\! h} (u-\widetilde{u}_h)\|\big)
+ \frac12 \|z-\Pi_{h,0} {\widetilde{z}}_h\|^2,
\]
which implies the error estimate.
\end{proof}
\subsubsection*{Flux reconstruction}
The discrete flux $z_h$ can be constructed if a discrete Lagrange
multiplier $\mu_h\in \Pi_{h,0} \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$ is given, i.e., $\mu_h \le 0$
is such that
\[
(\mu_h,v_h) = (f_h,v_h) - (\nabla_{\! h} u_h,\nabla_{\! h} v_h)
\]
for all $v_h\in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$. We then have that
\[
z_h(x) = \nabla_{\! h} u_h|_T - \frac{(f_h - \mu_h)|_T}{d} (x-x_T)
\]
for all $T\in \mathcal{T}_h$ and $x\in T$.
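The reconstruction is a purely local postprocessing step. A minimal Python sketch of the elementwise formula is given below; the per-element data (discrete gradient, load, multiplier, midpoint) are hypothetical inputs that would be extracted from a computed Crouzeix--Raviart solution.
\begin{verbatim}
import numpy as np

# Elementwise flux reconstruction for the obstacle problem:
# on an element T,  z_h(x) = grad_uh_T - (f_h_T - mu_h_T)/d * (x - x_T).
def reconstruct_flux(grad_uh_T, f_h_T, mu_h_T, x_T, d):
    """Return a callable x -> z_h(x) valid on the element with midpoint x_T."""
    def z_h(x):
        return grad_uh_T - (f_h_T - mu_h_T) / d * (np.asarray(x, dtype=float) - x_T)
    return z_h

# hypothetical data on a single triangle (d = 2)
z_T = reconstruct_flux(grad_uh_T=np.array([0.3, -0.1]),
                       f_h_T=1.0, mu_h_T=-0.2,
                       x_T=np.array([0.25, 0.25]), d=2)
print(z_T([0.3, 0.2]))
\end{verbatim}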
\subsection{Total variation minimization}\label{sub_sec:tv_min}
Given a function $g\in L^2(\Omega)$ we consider the primal problem that
consists in determining a function $u\in BV(\Omega)\cap L^2(\Omega)$ which is
minimal for the functional
\[
I(u) = |\DD u|(\Omega) + \frac12 \|u-g\|^2.
\]
The corresponding dual problem determines a maximizing
vector field $z\in W^2_{\!N}(\diver;\Omega)$ with ${\Gamma_{\rm N}}=\partial\Omega$ for the functional
\[
D(z) = - \frac12 \|\diver z + g\|^2 + \frac12 \|g\|^2
\]
subject to the pointwise constraint $|z|\le 1$ in~$\Omega$. From the characterization
\[
|\DD u|(\Omega) = \sup \Big\{ - \int_\Omega u \diver z \dv{x} : z\in W^2_{\!N}(\diver;\Omega), \,
|z|\le 1 \text{ in $\Omega$}\Big\},
\]
we obtain the strong duality relation
\[
I(u) = D(z)
\]
for solutions $u$ and $z$ of the primal and dual problems, where $u$ and
$z$ are related via $\diver z = u-g$ and the subdifferential inclusion
$z\in \partial |\nabla u|$, cf., e.g.,~\cite{HinKun04}.
\subsubsection*{Discretization}
With $g_h = \Pi_{h,0}g$ the discrete minimization problem is defined as
the minimization of
\[
I_h(u_h) = \int_\Omega |\nabla_{\! h} u_h| \dv{x} + \frac12 \|\Pi_{h,0}u_h-g_h\|^2
\]
in the set of all $u_h \in \mathcal{S}^{1,{cr}}(\mathcal{T}_h)$. The discrete dual formulation
consists in a maximization of
\[
D_h(z_h) = - \frac12 \|\diver z_h + g_h\|^2 + \frac12 \|g_h\|^2
\]
in the set ${\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$ subject to the constraints
$|z_h(x_T)|\le 1$ for all $T\in \mathcal{T}_h$. Related discretizations have
been used in~\cite{HHSVW19}. The discretization used here is obtained from
Proposition~\ref{prop:discrete_duality} which shows that for every
$\overline{u}_h \in \Pi_{h,0} \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$ we have
\[\begin{split}
&\inf \Big\{\int_\Omega |\nabla_{\! h} u_h|\dv{x}: \, u_h \in \mathcal{S}^{1,{cr}}(\mathcal{T}_h), \Pi_{h,0} u_h = \overline{u}_h \Big\}
\\ &\, \ge \sup\Big\{ -\int_\Omega \overline{u}_h \diver z_h \dv{x}: \,
z_h \in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h), \, |z_h(x_T)| \le 1 \text{ for all $T\in \mathcal{T}_h$} \Big\},
\end{split}\]
and by using the relation $\diver z_h = \Pi_{h,0} u_h - g_h$
and arguing as in Proposition~\ref{prop:discrete_dual_strong}
we obtain the discrete duality relation
\[
I_h(u_h) \ge D_h(z_h)
\]
for optimal elements $u_h$ and $z_h$, respectively.
The following quasi-optimal error estimate
is obtained via constructing appropriate comparison functions.
It confirms an estimate from~\cite{ChaPoc19-pre}
in which a discretization using piecewise constant functions and implicitly
incorporating Crouzeix--Raviart elements has been considered. We closely
follow the arguments used therein.
It is remarkable that the data approximation error $g-g_h$ does not occur explicitly,
which avoids imposing restrictive conditions on $g$.
\begin{proposition}[Error estimate]
Let $u\in BV(\Omega)\cap L^2(\Omega)$ and $u_h\in \mathcal{S}^{1,{cr}}(\mathcal{T}_h)$ be optimal
for $I$ and $I_h$, respectively. Assume that $g\in L^\infty(\Omega)$ and there
exists an optimal $z\in W^2_{\!N}(\diver;\Omega)$ for
$D$ with $z\in W^{1,\infty}(\Omega;\mathbb{R}^d)$. We then have that
\[
\|u-u_h\| \le c h^{1/2} \big(\|u\|_{L^\infty(\Omega)} |\DD u|(\Omega) + \|g\|\|\nabla z\|_{L^\infty(\Omega)} \|\diver z\|\big)^{1/2},
\]
where $\|u\|_{L^\infty(\Omega)} \le \|g \|_{L^\infty(\Omega)}$.
\end{proposition}
\begin{proof}
The strong convexity properties of $I_h$ and the discrete duality relation
yield that
\[
\frac12 \|\Pi_{h,0}(u_h - \widetilde{u}_h) \|^2 \le I_h(\widetilde{u}_h) - I_h(u_h) \le I_h(\widetilde{u}_h)- D_h({\widetilde{z}}_h)
\]
for every $\widetilde{u}_h\in \mathcal{S}^{1,{cr}}(\mathcal{T}_h)$ and ${\widetilde{z}}_h\in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$ with
$|{\widetilde{z}}_h(x_T)|\le 1$. Since $g\in L^\infty(\Omega)$ we have that
$u\in L^\infty(\Omega)$ with $\|u\|_{L^\infty(\Omega)} \le \|g\|_{L^\infty(\Omega)}$
and Lemma~\ref{la:interpol_primal} below yields the existence
of $\widetilde{u}_h\in \mathcal{S}^{1,{cr}}(\mathcal{T}_h)$ with
\[
I_h(\widetilde{u}_h) \le I(u) + c h - \frac12 \|g-g_h\|^2
\]
and
\[
\|u-\widetilde{u}_h\|_{L^1(\Omega)} \le c h, \quad \|\widetilde{u}_h\|_{L^\infty(\Omega)} \le c.
\]
Letting ${\widetilde{z}}_h\in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$ be the function constructed in
Lemma~\ref{la:interpol_dual} below we find that
\[
D_h ({\widetilde{z}}_h) \ge D(z) - c h - \frac12 \|g-g_h\|^2.
\]
On combining the previous estimates,
and noting that $I(u) = D(z)$, we deduce that
\[
\frac12 \|\Pi_{h,0} (u_h - \widetilde{u}_h)\|^2 \le c h.
\]
We incorporate the estimates
\[
\|u-\widetilde{u}_h\| \le \|u-\widetilde{u}_h\|_{L^1(\Omega)}^{1/2} \|u-\widetilde{u}_h\|_{L^\infty(\Omega)}^{1/2} \le c h^{1/2}
\]
and
\[
\|v_h- \Pi_{h,0} v_h\|_{L^2(\Omega)}^2 \le c h \|\nabla_{\! h} v_h\|_{L^1(\Omega)} \|v_h\|_{L^\infty(\Omega)}
\]
for every $v_h\in \mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$ to deduce the error bound.
\end{proof}
\subsubsection*{Modified interpolants}
The following lemma provides the primal comparison function with explicit constants.
\begin{lemma}[Primal quasi-interpolant]\label{la:interpol_primal}
Given any $u\in BV(\Omega)\cap L^\infty(\Omega)$ there exists $\widetilde{u}_h\in \mathcal{S}^{1,{cr}}(\mathcal{T}_h)$
such that
\[
I_h(\widetilde{u}_h) \le I(u) + 2 c_d c_{cr} h \|u\|_{L^\infty(\Omega)} |\DD u|(\Omega) - \frac12 \|g-g_h\|^2.
\]
\end{lemma}
\begin{proof}
We choose a sequence $(u_\varepsilon)_{\varepsilon>0} \subset C^\infty(\overline{\Omega})\cap BV(\Omega)$ such that
\[
\|u-u_\varepsilon \|_{L^1(\Omega)} \to 0, \quad \|\nabla u_\varepsilon \|_{L^1(\Omega)} \to |Du|(\Omega), \quad
\|u_\varepsilon \|_{L^\infty(\Omega)} \to \|u\|_{L^\infty(\Omega)},
\]
cf.~\cite{AmFuPa00-book,BaNoSa14}.
We then define $\widetilde{u}_h^\varepsilon = \mathcal{I}_{cr} u_\varepsilon$ and note that
\[
\|\nabla_h \widetilde{u}_h^\varepsilon\|_{L^1(\Omega)} \le \| \nabla u_\varepsilon \|_{L^1(\Omega)}.
\]
We pass to an accumulation point $\widetilde{u}_h \in \mathcal{S}^{1,{cr}}(\mathcal{T}_h)$ as $\varepsilon \to 0$
for which we have that
\[\begin{split}
\|\nabla_h \widetilde{u}_h\|_{L^1(\Omega)} &\le |\DD u|(\Omega), \\
\|\widetilde{u}_h \|_{L^\infty(\Omega)} &\le 2c_d \|u\|_{L^\infty(\Omega)}, \\
\|\widetilde{u}_h - u\|_{L^1(\Omega)} &\le c_{{cr}} h |\DD u|(\Omega).
\end{split}\]
For ease of notation we abbreviate $\overline{u}_h = \Pi_{h,0} \widetilde{u}_h$ and $g_h = \Pi_{h,0}g$.
We have that
\[
\|\overline{u}_h-g_h\|^2 = \|\overline{u}_h-g\|^2 - \|g-g_h\|^2
\]
and
\[
\|\overline{u}_h -g \|^2 = \|u- g\|^2 + \int_\Omega (\overline{u}_h-u) (\overline{u}_h + u -2g)\dv{x}.
\]
These identities imply that we have
\[\begin{split}
I_h(\widetilde{u}_h) &= \|\nabla_{\! h} \widetilde{u}_h\|_{L^1(\Omega)} + \frac12 \|\overline{u}_h-g_h\|^2 \\
&\le I(u) + \frac12 \|\overline{u}_h-u\|_{L^1(\Omega)} \|\overline{u}_h+u -2g \|_{L^\infty(\Omega)} - \frac12 \|g-g_h\|^2.
\end{split}\]
This implies the assertion.
\end{proof}
A comparison function for the discrete dual problem is constructed in
the following lemma.
\begin{lemma}[Dual quasi-interpolant]\label{la:interpol_dual}
Given any $z\in W^2_{\!N}(\diver;\Omega)$ with $z\in W^{1,\infty}(\Omega;\mathbb{R}^d)$ there
exists ${\widetilde{z}}_h \in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$ with $|{\widetilde{z}}_h(x_T)|\le 1$ for all
$T\in \mathcal{T}_h$ and
\[
D_h({\widetilde{z}}_h) \ge D(z) -c_{\mathcal{R}T} h L \| g\| \|\diver z \| - \frac12 \|g-g_h\|^2,
\]
with the Lipschitz constant $L$ of $z$.
\end{lemma}
\begin{proof}
The interpolant $\mathcal{I}_{\mathcal{R}T} z$ satisfies $\diver \mathcal{I}_{\mathcal{R}T} z = \Pi_{h,0} \diver z$ and
we have with the constant function $\overline{z}|_T = z(x_T)$ that
\[
|\mathcal{I}_{\mathcal{R}T} z(x_T)| \le \|\mathcal{I}_{\mathcal{R}T} (z-\overline{z})\|_{L^\infty(T)}
+ |\overline{z}| \le c_{\mathcal{R}T} h L + 1 = \gamma_h.
\]
Hence, for ${\widetilde{z}}_h = \gamma_h^{-1} \mathcal{I}_{\mathcal{R}T} z = \mathcal{I}_{\mathcal{R}T} {\widetilde{z}}$ with
${\widetilde{z}} = \gamma_h^{-1} z$ we have ${\widetilde{z}}_h \in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$ and
$|{\widetilde{z}}_h(x_T)|\le 1$ for all $T\in \mathcal{T}_h$. Noting that
\[
\diver {\widetilde{z}}_h + g_h = \Pi_{h,0} (\diver {\widetilde{z}} + g)
\]
and $\|g\|^2 - \|g_h\|^2 = \|g-g_h\|^2$ we deduce that
\[\begin{split}
D_h({\widetilde{z}}_h) &= - \frac12 \int_\Omega (\diver {\widetilde{z}}_h+ g_h)^2 - g_h^2 \dv{x} \\
&\ge - \frac12 \int_\Omega (\diver {\widetilde{z}} + g)^2 - g^2 \dv{x} - \frac12 \|g-g_h\|^2.
\end{split}\]
Hence, we have that
\[\begin{split}
D_h({\widetilde{z}}_h) &\ge - \frac12 \int_\Omega (\diver {\widetilde{z}})^2 + 2 g \diver {\widetilde{z}} \dv{x} - \frac12 \|g-g_h\|^2 \\
&= - \frac12 \gamma_h^{-2} \int_\Omega (\diver z)^2 \dv{x} - \gamma_h^{-1} \int_\Omega g \diver z \dv{x} - \frac12 \|g-g_h\|^2 \\
&\ge - \frac12 \int_\Omega (\diver z)^2 + 2 g \diver z \dv{x} - (1-\gamma_h^{-1}) \|g\| \|\diver z \|
- \frac12 \|g-g_h\|^2,
\end{split}\]
where we also used that $\gamma_h^{-2} \le 1$. The estimate $1-\gamma_h^{-1} \le c_{\mathcal{R}T} h L$
implies the assertion.
\end{proof}
\begin{remark}
In the absence of the regularity condition $z\in W^{1,\infty}(\Omega;\mathbb{R}^d)$
one can establish $\Gamma$-convergence $I_h \to I$ in $L^1(\Omega)$.
Alternatively, one may choose a regularization $z_\varepsilon$ of $z$
so that Lemma~\ref{la:interpol_dual} holds with $L_\varepsilon = c \varepsilon^{-1}$.
An approximability condition on $g$ then implies
$\|\diver z-\diver z_\varepsilon\| \le \varepsilon$ and leads to the
convergence rate $\mathcal{O}(h^{1/4})$, cf.~\cite{ChaPoc19-pre}. This rate
has also been obtained in~\cite{Bart12,Bart15-book} for conforming
approximations and was improved in~\cite{BaNoSa15} in the case
of certain anisotropic functionals.
\end{remark}
\subsubsection*{Flux reconstruction}
The ideas that lead to the reconstruction of the solution of the
dual problem can be transferred to the nonsmooth situation
if a regularization of the modulus
function is used to approximate the discrete primal functional $I_h$,
i.e., if $|\cdot|_\varepsilon :\mathbb{R}^d \to \mathbb{R}$ is a differentiable approximation
of euclidean length, then the discrete primal and dual
problems correspond to the Lagrange functional
\[
L_{h,\varepsilon}(u_h,z_h) = -\int_\Omega u_h \diver z_h + |\Pi_{h,0} z_h|_\varepsilon^* \dv{x}
+ \frac12 \|\Pi_{h,0} u_h - g_h\|^2,
\]
and the relations
\[
\diver z_h = \Pi_{h,0} u_h - g_h, \quad \nabla_{\! h} u_h \in D|\Pi_{h,0} z_h|_\varepsilon^*,
\]
where the second identity is equivalent to
\[
\Pi_{h,0} z_h = D|\nabla_{\! h} u_h|_\varepsilon.
\]
If, e.g., $|s|_\varepsilon = (|s|^2+\varepsilon^2)^{1/2}$ then we obtain on every $T\in \mathcal{T}_h$
\[
z_h = \frac{\nabla_{\! h} u_h}{|\nabla_{\! h} u_h|_\varepsilon} + \frac{\Pi_{h,0} u_h -g_h}{d} (\cdot -x_T).
\]
\subsection{Infinity Laplacian}
A variant of the $p$-Laplace problem with $p\to \infty$ arises in problems
of optimal transportation and leads to a minimization of
\[
I(u) = I_{K_1(0)} (\nabla u) -\int_\Omega f u \dv{x}
\]
in the space $W^{1,\infty}_D(\Omega)$ for a given function $f\in L^1(\Omega)$.
The dual problem consists in maximizing the functional
\[
D(z) = -\int_\Omega |z| \dv{x} - I_{\{-f\}} (\diver z)
\]
in the space $W^1_{\!N}(\diver;\Omega)$. We refer
the reader to~\cite{Evan99} for existence and strong duality results.
\subsubsection*{Discretization}
We define a discrete approximation of $I$ via
\[
I_h(u_h) = I_{K_1(0)} (\nabla_{\! h} u_h) - \int_\Omega f_h u_h \dv{x}
\]
on the set $\mathcal{S}^{1,{cr}}_D(\mathcal{T}_h)$ using $f_h = \Pi_{h,0} f$.
Proposition~\ref{prop:discrete_dual_strong}
implies that the discrete dual problem consists in maximizing the functional
\[
D_h(z_h) = -\int_\Omega |\Pi_{h,0} z_h| \dv{x} - I_{\{-f_h\}} (\diver z_h)
\]
in the set of all discrete vector fields $z_h\in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)$.
Other discretizations are addressed in~\cite{BarPri07,Ober13,BarPri15,BarSch17,Prye18}.
We have the following approximation result.
\begin{proposition}[Approximation]\label{prop:approx_inf_laplace}
If a solution $z\in W^1_{\!N}(\diver;\Omega)$ of the dual problem with $z\in W^{1,1}(\Omega;\mathbb{R}^d)$
exists and if $u$ and $u_h$ solve the primal and discrete primal problem, respectively,
then we have
\[
\big| I_h(u_h) - I(u)\big| \le c h \big(\|f\|_{L^1(\Omega)} + \|\nabla z\|_{L^1(\Omega)} \big).
\]
\end{proposition}
\begin{proof}
Establishing the existence of a discrete solution
$u_h\in \mathcal{S}^{1,{cr}}(\mathcal{T}_h)$ is
straightforward by continuity of the discrete
problem and boundedness of the admissible set. Abbreviating
$\widetilde{u}_h = \mathcal{I}_{cr} u$ and ${\widetilde{z}}_h = \mathcal{I}_{\mathcal{R}T} z$ we note that
$|\nabla_{\! h} \widetilde{u}_h|\le 1$ and hence
\[\begin{split}
0 & \le I_h(\widetilde{u}_h) - I_h(u_h) \le I_h(\widetilde{u}_h) - D_h({\widetilde{z}}_h) \\
&= -\int_\Omega f_h \widetilde{u}_h \dv{x} + \int_\Omega |\Pi_{h,0} {\widetilde{z}}_h | \dv{x} \\
&= -\int_\Omega \Pi_{h,0} {\widetilde{z}}_h \cdot \nabla_{\! h} \widetilde{u}_h \dv{x} + \int_\Omega |\Pi_{h,0} {\widetilde{z}}_h | \dv{x} \\
&= -\int_\Omega \Pi_{h,0} {\widetilde{z}}_h \cdot \nabla u \dv{x} + \int_\Omega |\Pi_{h,0} {\widetilde{z}}_h | \dv{x}.
\end{split}\]
The duality relation $I(u)=D(z)$ shows that
\[
- \int_\Omega |z| \dv{x} = - \int_\Omega f u\dv{x} = -\int_\Omega z\cdot \nabla u\dv{x}.
\]
This leads to
\[\begin{split}
0 & \le I_h(\widetilde{u}_h) - I_h(u_h) \\
&\le \int_\Omega (z-\Pi_{h,0} {\widetilde{z}}_h) \cdot \nabla u \dv{x} + \int_\Omega |\Pi_{h,0} {\widetilde{z}}_h | -|z| \dv{x} \\
&\le 2 \|z-\Pi_{h,0} {\widetilde{z}}_h\|_{L^1(\Omega)} \le c h \|\nabla z\|_{L^1(\Omega)}.
\end{split}\]
Finally, we verify that
\[
I_h(\widetilde{u}_h) - I(u) = \int_\Omega f u - f_h \Pi_{h,0} \widetilde{u}_h \dv{x}
= \int_\Omega f (u-\Pi_{h,0} \widetilde{u}_h) \dv{x},
\]
and deduce the asserted estimate.
\end{proof}
\begin{remark}\label{rem:inf_laplace_p1}
On right-angled triangulations the conforming $P1$ finite element
method leads to a similar estimate since we have that
\[
\|\nabla \mathcal{I}_{p1} u \|_{L^\infty(\Omega)} \le \|\nabla u\|_{L^\infty(\Omega)}
\]
for every $u\in W^{1,\infty}(\Omega)$ and hence if
$u_h^c \in \mathcal{S}^{1,0}_D(\mathcal{T}_h)\subset W^{1,\infty}_D(\Omega)$ is minimal
for $I_h$ in this set then
\[\begin{split}
0 & \le I(u_h^c) - I(u) = I(u_h^c)
- I_h(u_h^c) + I_h(u_h^c)
- I_h(\mathcal{I}_{p1} u) + I_h(\mathcal{I}_{p1} u) \\
&\qquad- I(\mathcal{I}_{p1} u) + I(\mathcal{I}_{p1}u)
- I(u) \\
&\le \|f-f_h\|_{L^1(\Omega)} \big(\|u_h^c- \Pi_{h,0} u_h^c\|_{L^\infty(\Omega)}
+ \|\mathcal{I}_{p1} u- \Pi_{h,0}\mathcal{I}_{p1} u \|_{L^\infty(\Omega)}\big) \\
& \qquad + \|f\|_{L^1(\Omega)} \|u-\mathcal{I}_{p1} u\|_{L^\infty(\Omega)},
\end{split}\]
where we used that $f_h = \Pi_{h,0} f$ and $I_h(u_h^c) \le I_h(\mathcal{I}_{p1}u )$.
Hence, without additional regularity assumptions we have that
$|I(u_h^c)-I(u)| \le c h$; if $f\in W^{1,1}(\Omega)$ and $u\in W^{2,\infty}(\Omega)$
then this can be improved to $\mathcal{O}(h^2)$. A realistic regularity property
is $u\in W^{4/3,\infty}(\Omega)$, cf.~\cite{Aron68}.
\end{remark}
\subsubsection*{Flux reconstruction}
To construct the discrete flux $z_h$ from the solution $u_h$ of the
nonconforming method for the primal problem we consider
a regularization $|\cdot|_\varepsilon$ of the euclidean length which defines
regularizations $|\cdot|_\varepsilon^*$ of $I_{K_1(0)}$. We then find that
on every $T\in \mathcal{T}_h$ we have
\[
z_h = D |\nabla_{\! h} u_h|_\varepsilon^* - (f_h/d)(\cdot -x_T),
\]
where $z_h$ and $u_h$ are the solutions of the regularized problems.
\section{Iterative solution}\label{sec:iterative}
To solve the discrete problems we devise iterative algorithms for problems with
sub- and superquadratic growth properties that result from semi-implicit discretizations
of appropriate gradient flows for the primal and dual problem, respectively.
A gradient flow for the primal minimization problem determines a family
$(u(t))_{t\ge 0} \subset W^{1,p}_D(\Omega)$ of functions for an initial
$u^0 \in W^{1,p}_D(\Omega)$ via $u(0) = u^0$ and
\[
(\partial_t u,v)_* = - \int_\Omega D\phi(\nabla u) \cdot \nabla v \dv{x} - \int_\Omega D\psi(u) v\dv{x}
\]
for all $v\in W^{1,p}_D(\Omega)$ and all $t>0$. To avoid solving nonlinear systems
of equations a semi-implicit discretization in time is used. We consider
the case that $\phi$ only depends on the length of its argument, i.e.,
$\phi(s) = \varphi(|s|)$
with a convex function $\varphi \in C^1(\mathbb{R}_{\ge 0})$. In this case we have
\[
D\phi(s) = \frac{\varphi'(|s|)s}{|s|},
\]
which naturally leads to a semi-implicit treatment. To discretize
the time derivative we use the backward difference quotient operator
\[
d_t u^k = \tau^{-1} (u^k-u^{k-1})
\]
for a sequence $(u^k)_{k=0,1,\dots}$ and a step-size $\tau>0$.
\begin{algorithm}[Subquadratic case, primal iteration]\label{alg:primal}
Let $u^0\in W^{1,p}_D(\Omega)$ and choose $\tau,\varepsilon_{\rm stop} >0$, set $k=1$. \\
(1) Compute $u^k\in W^{1,p}_D(\Omega)$ such that
\[
(d_t u^k,v)_* +
\int_\Omega \frac{\varphi'(|\nabla u^{k-1}|)}{|\nabla u^{k-1}|} \nabla u^k \cdot \nabla v \dv{x}
+ \int_\Omega D\psi(u^k) v \dv{x}
= 0
\]
for all $v\in W^{1,p}_D(\Omega)$. \\
(2) Stop if $\|d_t u^k\|_* \le \varepsilon_{\rm stop}$; otherwise increase
$k\to k+1$ and continue with~(1).
\end{algorithm}
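To indicate how the iteration can be realized in practice, the following Python sketch applies the semi-implicit scheme to a one-dimensional finite-difference model with $\varphi(r)=(r^2+\varepsilon^2)^{1/2}$ and $\psi(s)=(\alpha/2)(s-g)^2$; the mesh, data, parameters, and the use of the (unweighted) Euclidean norm in the stopping criterion are illustrative simplifications.
\begin{verbatim}
import numpy as np

# Semi-implicit (lagged-coefficient) primal iteration, 1D finite differences:
# phi(s) = sqrt(s^2 + eps^2), psi(s) = (alpha/2)*(s - g)^2.
rng = np.random.default_rng(0)
n, hx = 200, 1.0 / 200
xs = np.linspace(0.0, 1.0, n)
g = (np.abs(xs - 0.5) < 0.2).astype(float) + 0.05 * rng.standard_normal(n)
alpha, eps, tau, eps_stop = 50.0, 1e-3, 1.0, 1e-6

D = (np.eye(n, k=1) - np.eye(n))[:-1] / hx        # (Du)_j = (u_{j+1} - u_j)/hx

u = g.copy()
for k in range(1, 501):
    a = 1.0 / np.sqrt((D @ u)**2 + eps**2)        # lagged weight phi'(r)/r
    # (1/tau)(u^k - u^{k-1}) + D^T diag(a) D u^k + alpha*(u^k - g) = 0
    A = (1.0/tau + alpha) * np.eye(n) + D.T @ (a[:, None] * D)
    u_new = np.linalg.solve(A, u/tau + alpha*g)
    if np.linalg.norm(u_new - u) / tau <= eps_stop:   # stop if ||d_t u^k|| small
        u = u_new
        break
    u = u_new
\end{verbatim}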
It is shown below that the iteration is unconditionally energy decreasing and
convergent if $\varphi$ has subquadratic growth. If this is not the
case then we expect the dual problem to have this property and consider
a gradient descent for $-D$, i.e., we determine a family $(z(t))_{t\ge 0}$
satisfying $z(0)=z^0$ and the constrained evolution equation
\[
(\partial_t z,y)_\dagger = - \int_\Omega D\phi^*(z) \cdot y \dv{x}
- \int_\Omega D\psi^*(\diver z) \diver y \dv{x}
\]
for all $y\in W^{p'}_N(\diver;\Omega)$. In case of a linear functional $\psi$,
the differential $D\psi^*$ becomes a subdifferential
and the equation a variational inequality or constrained equation.
Similarly to the gradient flow for the primal problem we assume
that the integrand is isotropic, i.e.,
$\phi^*(r) = \varphi(|r|)$ with a convex function $\varphi \in C^1(\mathbb{R}_{\ge 0})$.
In this case we have
\[
D\phi^*(r) = \frac{\varphi'(|r|)r}{|r|}
\]
and the semi-implicit iteration is similar to that of
Algorithm~\ref{alg:primal}.
\begin{algorithm}[Superquadratic case, dual iteration]\label{alg:dual}
Let $z^0\in W^{p'}_N(\diver;\Omega)$ and choose $\tau,\varepsilon_{\rm stop} >0$, set $k=1$. \\
(1) Compute $z^k\in W^{p'}_N(\diver;\Omega)$ such that
\[
(d_t z^k,y)_\dagger +
\int_\Omega \frac{\varphi'(|z^{k-1}|)}{|z^{k-1}|} z^k \cdot y \dv{x}
+ \int_\Omega D\psi^*(\diver z^k) \diver y \dv{x} = 0,
\]
for all $y\in W^{p'}_N(\diver;\Omega)$. \\
(2) Stop if $\|d_t z^k\|_\dagger \le \varepsilon_{\rm stop}$; otherwise increase
$k\to k+1$ and continue with~(1).
\end{algorithm}
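A corresponding one-dimensional Python sketch of the dual iteration with $\varphi(r)=(r^2+\varepsilon^2)^{q/2}/q$, $q=p'\le 2$, and $\psi(s)=(g-s)^2/2$, so that $D\psi^*(t)=t+g$, reads as follows; the discretization (nodal values of $z$ vanishing at the boundary, cellwise divergence) and all parameters are illustrative choices.
\begin{verbatim}
import numpy as np

# Semi-implicit dual iteration, 1D: z lives on interior nodes (z = 0 at the
# boundary), div z is computed cellwise, and D psi*(div z) = div z + g.
n, hx = 100, 1.0 / 100
xc = (np.arange(n) + 0.5) * hx                    # cell midpoints
g = np.sin(2*np.pi*xc)
q, eps, tau, eps_stop = 1.5, 1e-3, 1.0, 1e-8      # q = p' <= 2, i.e. p >= 2

# cellwise divergence: difference of the nodal values of z across each cell
G = (np.eye(n, n-1) - np.eye(n, n-1, k=-1)) / hx

z = np.zeros(n - 1)
for k in range(1, 2001):
    a = (z**2 + eps**2)**((q - 2)/2.0)            # lagged weight varphi'(r)/r
    A = np.diag(1.0/tau + a) + G.T @ G
    z_new = np.linalg.solve(A, z/tau - G.T @ g)
    if np.linalg.norm(z_new - z) / tau <= eps_stop:
        z = z_new
        break
    z = z_new
u = G @ z + g        # primal variable recovered from div z = u - g
\end{verbatim}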
If $\psi(x,s) = -f(x)s$ then the system in Step~(1) includes the constraints
$-\diver z^k = f$ and $\diver y =0$ instead of the integral involving $D\psi^*$.
The algorithms converge for subquadratic growth of $\phi$ and $\phi^*$, respectively.
We adopt arguments from~\cite{BaDiNo18}.
\begin{proposition}[Unconditional convergence]\label{prop:uncond_conv}
Assume that $r\mapsto \varphi'(r)/r$ is positive, non-increasing,
and continuous on $\mathbb{R}_{\ge 0}$. If $\phi(s) = \varphi(|s|)$ for all $s\in \mathbb{R}^d$
then the iteration of Algorithm~\ref{alg:primal}
is well-posed, convergent, and monotone with
\[
I(u^\ell) + \tau \sum_{k=1}^\ell \|d_t u^k\|_*^2 \le I(u^0).
\]
If $\phi^*(t) = \varphi(|t|)$ then the iteration of Algorithm~\ref{alg:dual}
is well-posed, convergent, and monotone with
\[
-D(z^\ell) + \tau \sum_{k=1}^\ell \|d_t z^k\|_\dagger^2 \le -D(z^0).
\]
\end{proposition}
\begin{proof}
(i) The conditions on $\varphi$ imply that the iteration is well posed and
that we have
\begin{equation}\label{eq:mon_phi}
\frac{\varphi'(|a|)}{|a|} b\cdot (b-a) \ge \varphi(|b|) - \varphi(|a|)
+ \frac12 \frac{\varphi'(|a|)}{|a|} |b-a|^2
\end{equation}
for all $a,b\in \mathbb{R}^d$, cf.~Appendix~\ref{app:monotone} for a proof of~\eqref{eq:mon_phi}.
Hence, by choosing $v= d_t u^k$ in Algorithm~\ref{alg:primal} we find
that
\[
\|d_t u^k\|_*^2
+ \int_\Omega \frac{\varphi'(|\nabla u^{k-1}|)}{|\nabla u^{k-1}|} \nabla u^k \cdot \nabla d_t u^k \dv{x}
+\int_\Omega D\psi(u^k) d_t u^k \dv{x} = 0.
\]
Using $a=\nabla u^{k-1}$ and $b= \nabla u^k$ in~\eqref{eq:mon_phi}
shows that
\[
\frac{\varphi'(|\nabla u^{k-1}|)}{|\nabla u^{k-1}|} \nabla u^k\cdot \nabla (u^k-u^{k-1})
\ge \varphi(|\nabla u^k|) - \varphi(|\nabla u^{k-1}|).
\]
By combining the last two equations, using convexity of $\psi$,
and summing over $k=1,2,\dots,\ell$ we deduce the asserted estimate. \\
(ii) If the conditions on $\phi^*$ are satisfied then the arguments used to show~(i)
apply to Algorithm~\ref{alg:dual} and we deduce the estimate.
\end{proof}
\begin{example}
The conditions of the proposition apply to typical regularized $p$-Dirichlet energies
$\phi(s) = |s|_\varepsilon^p$ for $\varepsilon>0$, cf.~\cite{BaDiNo18}. Algorithm~\ref{alg:primal}
converges if $1\le p \le 2$ while Algorithm~\ref{alg:dual} converges if $2\le p <\infty$.
\end{example}
\begin{remark}
Note that owing to the semi-implicit discretization the functions $d_t u^k$
and $d_t z^k$ are not residuals. If, e.g., $\widetilde{u}=u^k$ for some $k\ge 0$
and the residual $r$ is defined via
\[
(D \phi_\varepsilon(\nabla \widetilde{u}),\nabla v) + (D\psi(\widetilde{u}),v) = (r,v)_*
\]
for all $v\in W^{1,p}_D(\Omega)$, then by convexity of $I_\varepsilon$ we have
\[
I_\varepsilon(\widetilde{u}) + (r,v-\widetilde{u})_* + \sigma_I^2(\widetilde{u},v) \le I_\varepsilon(v)
\]
for all $v\in W^{1,p}_D(\Omega)$, where we assume that coercivity holds uniformly
with respect to $\varepsilon\ge 0$. In case of the $L^2$ scalar product
$(\cdot,\cdot)_*=(\cdot,\cdot)$, and if, e.g., $\sigma_I^2(\widetilde{u},v) \ge (\alpha_I/2) \|\widetilde{u}-v\|^2$,
we deduce that
\[
I_\varepsilon(\widetilde{u}) + \frac{\alpha_I}{4} \|v-\widetilde{u}\|^2 \le I_\varepsilon(v) + \frac{1}{\alpha_I} \|r\|^2.
\]
With the minimizing $u_\varepsilon$ for $I_\varepsilon$ we deduce
$\|u_\varepsilon-\widetilde{u} \| \le (2/\alpha_I) \|r\|$.
\end{remark}
Two alternative approaches to the iterative solution of the discrete problems
are described in the following remarks.
\begin{remarks}\label{rem:admm_primal_dual}
(i) The ADMM iteration (alternating direction of multiplier method) as
in~\cite{ForGlo83-book} decouples the gradient operator
from $\phi$ by introducing
$q=\nabla u$ via a Lagrange multiplier $\lambda$. With the augmented Lagrange functional
\[
L_\tau(u,q,\lambda) = \int_\Omega \phi(q) \dv{x} + \int_\Omega \psi(u) \dv{x} + (\lambda,\nabla u - q)_H
+ \frac{\tau}{2} \| \nabla u -q \|_H^2,
\]
with the norm $\|\cdot\|_H$ of a suitable Hilbert space $H$ and a stabilization parameter
$\tau>0$, the algorithm successively minimizes $L_\tau$ with respect to $u$ and $q$,
and then performs an ascent step with respect to $\lambda$. \\
(ii) Primal-dual methods as investigated in~\cite{ChaPoc11}
alternatingly update the variables $u$ and $z$ in the Lagrange functional
\[
L(u,z) = \int_\Omega - u \diver z - \phi^*(z) + \psi(u) \dv{x}
\]
via discretizations of $\partial_t z = \delta_z L(u,z)$ and $\partial_t u = -\delta_u L(u,z)$ using an
extra\-polated quantity to decouple the equations. The application to Raviart--Thomas
methods is not straightforward due to their nonlocal character.
\end{remarks}
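For orientation, a minimal Python sketch of the primal-dual iteration in~(ii) for a fully discrete one-dimensional total-variation model is given below; it operates directly on difference matrices (no Raviart--Thomas structure), and the data and step sizes are illustrative choices.
\begin{verbatim}
import numpy as np

# Primal-dual iteration for  min_u ||D u||_1 + (alpha/2) ||u - g||^2  with the
# plain difference matrix D; step sizes satisfy tau*sigma*||D||^2 <= 1.
rng = np.random.default_rng(0)
n, alpha = 200, 8.0
xs = np.linspace(0.0, 1.0, n)
g = (np.abs(xs - 0.5) < 0.2).astype(float) + 0.05 * rng.standard_normal(n)

D = (np.eye(n, k=1) - np.eye(n))[:-1]
tau = sigma = 0.45                                # 0.45^2 * 4 < 1

u, u_bar, z = g.copy(), g.copy(), np.zeros(n - 1)
for k in range(500):
    z = np.clip(z + sigma * (D @ u_bar), -1.0, 1.0)   # resolvent of sigma*phi^*
    u_new = (u - tau * (D.T @ z) + tau*alpha*g) / (1.0 + tau*alpha)
    u_bar = 2.0 * u_new - u                            # extrapolation
    u = u_new
\end{verbatim}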
\section{Numerical experiments}\label{sec:num_ex}
In this section we verify the theoretical findings via numerical experiments
and illustrate advantages of nonconforming and mixed methods over standard
conforming methods.
\subsection{Total variation minimization}
We consider the numerical approximation of the functional
\[
I(u) = |\DD u|(\Omega) + \frac{\alpha}{2} \|u-g\|^2.
\]
To compare approximations to an exact solution we impose Dirichlet boundary
conditions on ${\Gamma_{\rm D}}=\partial\Omega$. Although it is difficult to establish a general
existence theory, the error estimates of Section~\ref{sub_sec:tv_min}
carry over verbatimly with ${\Gamma_{\rm N}}=\emptyset$ provided a minimizer exists. This
is the case in the setting of the following example.
\begin{example}\label{ex:tv_exact}
For $\Omega\subset \mathbb{R}^d$, $r>0$ with $B_r(0)\subset \Omega$, and $g=\chi_{B_r(0)}$,
the unique minimizer for $I$ subject to Dirichlet boundary conditions is given
by
\[
u = \max\{0,1-d/(\alpha r)\} \chi_{B_r(0)}.
\]
If $d\le \alpha r$ then the Lipschitz continuous vector field
\[
z(x) =
\begin{cases}
-r^{-1} x & \text{for } |x|\le r, \\
-rx/|x|^2 & \text{for } |x|\ge r,
\end{cases}
\]
solves the dual problem, cf., e.g.,~\cite{Bart15-book}.
We use $d=2$, $\Omega = (-1,1)^2$, $r=1/2$, and $\alpha=10$.
\end{example}
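The exact solution and dual vector field of this example are easily evaluated; the following Python snippet does so for the stated parameters and checks the constraint $|z|\le 1$ on a random sample of points (an illustration only).
\begin{verbatim}
import numpy as np

# Example ex:tv_exact with d = 2, r = 1/2, alpha = 10.
d, r, alpha = 2, 0.5, 10.0

def u_exact(x):
    x = np.asarray(x, dtype=float)
    return max(0.0, 1.0 - d/(alpha*r)) * float(np.linalg.norm(x) <= r)

def z_exact(x):
    x = np.asarray(x, dtype=float)
    nx = np.linalg.norm(x)
    return -x/r if nx <= r else -r*x/nx**2

pts = np.random.default_rng(0).uniform(-1.0, 1.0, size=(1000, 2))
assert all(np.linalg.norm(z_exact(p)) <= 1.0 + 1e-12 for p in pts)
\end{verbatim}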
\subsubsection*{Iterative solution}
For the practical solution of the minimization problem we use a regularization
defined with the regularized euclidean length
\[
|s|_\varepsilon = (|s|^2+\varepsilon^2)^{1/2}
\]
for $\varepsilon>0$ and $s\in \mathbb{R}^d$. The uniform approximation property
$0 \le |s|_\varepsilon - |s| \le \varepsilon$ for all $s\in \mathbb{R}^d$
implies that with the regularized functional
\[
I_\varepsilon(u) = \int_\Omega |\nabla u|_\varepsilon \dv{x} + \frac{\alpha}{2}\|u-g\|^2,
\]
we have for minimizers $u$ of $I$ and $u_\varepsilon$ of $I_\varepsilon$ that
\[
\frac{\alpha}{2} \|u-u_\varepsilon \|^2 \le I(u_\varepsilon) - I(u) \le \varepsilon.
\]
This justifies using the regularized functional with $\varepsilon=h$ to
compute approximations for minimizers of $I$. We use Algorithm~\ref{alg:primal}
to decrease the energy and stop the iteration when
$\|d_t u^k\| \le \varepsilon_{\rm stop} = h/20$. We always use the $L^2$ inner
product and the step size $\tau=1$.
\subsubsection*{Experimental results}
For triangulations $\mathcal{T}_\ell$ of $\Omega=(-1,1)^2$ resulting from $\ell\ge 0$
uniform refinements of a coarse triangulation of $\Omega$ into two triangles
we have that the maximal mesh-size of $\mathcal{T}_\ell$ is proportional to
$h_\ell =2^{-\ell}$. For a simple implementation we use the
function $\widetilde{g}_h\in \mathcal{L}^0(\mathcal{T}_h)$ defined via
\[
\widetilde{g}_h(x_T) = g(x_T) =
\begin{cases}
1 & |x_T| < 1/2, \\ 0 & |x_T| \ge 1/2,
\end{cases}
\]
instead of the $L^2$ projection $g_h = \Pi_{h,0} g$. Since for $g=\chi_{B_r(0)}$
we have $\|g-\widetilde{g}_h\|_{L^1(\Omega)} \le c h |\partial B_r(0)|$, the error estimate
remains valid. The top row in Figure~\ref{fig:comp_p1_cr_tv} shows the
numerical solutions obtained for the discretizations using a standard $P1$
method and the Crouzeix--Raviart method on the triangulation $\mathcal{T}_5$.
At first glance the $P1$ approximation appears superior as, e.g., the
Crouzeix--Raviart approximation does not satisfy a discrete maximum
principle. The projections of the approximations onto piecewise constant
functions are shown in the bottom row of Figure~\ref{fig:comp_p1_cr_tv}
and lead to a different interpretation. The circular discontinuity
set is better resolved by the discontinuous method and we observe a more
localized approximation of the jump set.
Figure~\ref{fig:conv_rates_cr_vs_p1_tv} supports the latter interpretation
via logarithmic plots for the experimental convergence rates of the error quantity
\[
\|e_h\|^2 = \|\Pi_{h,0} u_h - u(x_\mathcal{T}) \|^2,
\]
where $x_\mathcal{T}|_T = x_T$ for every $T\in \mathcal{T}_\ell$,
versus the number of vertices $N_\ell \sim h_\ell^{-2}$. We observe that the $L^2$
error for the Crouzeix--Raviart method
converges at the quasi-optimal rate $\mathcal{O}(h^{1/2})$ while the $P1$ error
is larger and decays at a lower rate. The approximations were
computed on the triangulations $\mathcal{T}_\ell$ for $\ell=3,4,\dots,9$ with
$N_\ell = (2^\ell+1)^2 = 81, 289, \dots ,66049, 263169$ vertices.
\begin{figure}[p]
\includegraphics[width=.49\linewidth]{p1_sol_red_5_lin.jpg} \hspace*{0.1mm}
\includegraphics[width=.49\linewidth]{cr_sol_red_5_lin.jpg} \\
\includegraphics[width=.49\linewidth]{p1_sol_red_5_mp.jpg} \hspace*{0.1mm}
\includegraphics[width=.49\linewidth]{cr_sol_red_5_mp.jpg}
\caption{\label{fig:comp_p1_cr_tv} Continuous $P1$ (left) and Crouzeix--Raviart
approximations (right) in Example~\ref{ex:tv_exact} displayed
as piecewise affine functions (top) and via their projections onto
piecewise constant functions (bottom).}
\end{figure}
\begin{figure}[p]
\includegraphics[width=.65\linewidth]{conv_rates_cr_vs_p1_tv.pdf}
\caption{\label{fig:conv_rates_cr_vs_p1_tv}
Squared $L^2$ errors in Example~\ref{ex:tv_exact} for $P1$ and Crouzeix--Raviart
approximations. The predicted rate $\mathcal{O}(h^{1/2})$ is observed
for the Crouzeix--Raviart method while the
$P1$ method leads to larger errors and a reduced rate.}
\end{figure}
\subsection{Effect of modification}
The operator $\Pi_{h,0}$ that occurs in the discrete dual problem via the term
$\phi^*(\Pi_{h,0} z_h)$ is crucial for the discrete duality theory and
in fact simplifies the realization of the method as quadrature becomes
trivial. This does not affect the discrete flux variable $z_h$ but leads to
a modified discrete Lagrange multiplier $\overline{u}_h$. To illustrate this
effect we consider the standard dual mixed formulation~\eqref{eq:poisson_mixed_discr}
of the Poisson problem and the modified version which seeks
$(z_h,\overline{u}_h)\in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)\times \mathcal{L}^0(\mathcal{T}_h)$ satisfying
\[
(\Pi_{h,0} z_h, y_h) + (\overline{u}_h,\diver y_h) = 0, \quad (\diver z_h, \overline{v}_h) = -(f_h,\overline{v}_h),
\]
for all $(y_h,\overline{v}_h)\in {\mathcal{R}T}^0_{\!N}(\mathcal{T}_h)\times \mathcal{L}^0(\mathcal{T}_h)$. We specify
the problem as follows.
\begin{example}[Poisson problem]\label{ex:poisson}
Let $d=2$, $\Omega= (-1,1)^2$, ${\Gamma_{\rm D}} = \partial\Omega$, and $f(x,y) = 2(1-x^2)+2(1-y^2)$.
The solution of the dual mixed formulation of the Poisson problem is
given by $u(x,y) = (1-x^2)(1-y^2)$ and $z= \nabla u$.
\end{example}
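The data of this example are consistent, as the following short symbolic check (using Python/SymPy, for illustration only) confirms.
\begin{verbatim}
import sympy as sp

# With u = (1-x^2)(1-y^2) one has -Laplace(u) = 2(1-x^2) + 2(1-y^2) = f.
x, y = sp.symbols('x y')
u = (1 - x**2) * (1 - y**2)
f = 2*(1 - x**2) + 2*(1 - y**2)
assert sp.simplify(-sp.diff(u, x, 2) - sp.diff(u, y, 2) - f) == 0
\end{verbatim}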
Figure~\ref{fig:conv_rates_rt_lump_vs_class} shows the $L^2$ errors
\[
\|e_h\| = \|\overline{u}_h - u(x_\mathcal{T}) \|,
\]
versus numbers of vertices in $\mathcal{T}_\ell$ with a logarithmic scaling
on both axes. The $L^2$ error for the modified treatment is larger
than that for the exact treatment but converges at the same quadratic
rate. This rate is higher than the expected linear convergence rate for
the difference $\|\overline{u}_h - u\|$. An explanation is provided by the
relation $\overline{u}_h = u_h(x_T)$ to solutions $u_h$ of the
Crouzeix--Raviart discretization, for which we have $\|u_h-u\|_{L^\infty(\Omega)}
= \mathcal{O}(h^2|\log h|)$, cf.~\cite{GasNoc87}.
\begin{figure}[htb]
\includegraphics[width=.65\linewidth]{conv_rates_rt_lump_vs_class.pdf}
\caption{\label{fig:conv_rates_rt_lump_vs_class} $L^2$ errors for the
standard and modified Raviart--Thomas approximations of the Poisson problem
of Example~\ref{ex:poisson}.
An increased $L^2$ error is observed for the modified
treatment but both approximations converge at a nearly quadratic rate.}
\end{figure}
\subsection{Infinity Laplacian}
We define an infinity Laplace problem via the primal functional
\[
I(u) = -\int_\Omega f u \dv{x}, \quad |\nabla u|\le 1,
\]
on the set $W^{1,\infty}_D(\Omega)$ for a given function $f\in L^1(\Omega)$.
We approximate solutions by determining nearly maximizing discrete
vector fields for the regularized dual functional
\[
D_\varepsilon(z) = -\int_\Omega |z|_\varepsilon\dv{x}, \quad -\diver z= f,
\]
with $|s|_\varepsilon = (|s|^2 +\varepsilon^2)^{1/2}$.
We consider the following specification that leads to a Lipschitz
continuous solution.
\begin{example}[Infinity Laplacian]\label{ex:inf_laplace}
Let $d=2$, $\Omega =(-1,1)^2$, ${\Gamma_{\rm D}} = \partial\Omega$, and $f(x,y) =1$. Then
the solution of the primal problem is given by
$u(x,y) = 1-\max\{|x|,|y|\}$.
\end{example}
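A quick numerical check (illustrative Python) confirms that the stated solution satisfies $|\nabla u|=1$ away from the kink set $\{|x|=|y|\}$.
\begin{verbatim}
import numpy as np

# u(x,y) = 1 - max(|x|,|y|) on (-1,1)^2; check |grad u| = 1 away from the kinks.
def u_exact(x, y):
    return 1.0 - np.maximum(np.abs(x), np.abs(y))

h = 1e-6
rng = np.random.default_rng(2)
for _ in range(100):
    x, y = rng.uniform(-0.9, 0.9, size=2)
    if abs(abs(x) - abs(y)) < 1e-3 or min(abs(x), abs(y)) < 1e-3:
        continue                          # skip points near the nonsmooth set
    gx = (u_exact(x + h, y) - u_exact(x - h, y)) / (2*h)
    gy = (u_exact(x, y + h) - u_exact(x, y - h)) / (2*h)
    assert abs(np.hypot(gx, gy) - 1.0) < 1e-6
\end{verbatim}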
We use Algorithm~\ref{alg:dual} with the $L^2$ scalar product and $\tau =1$
to iteratively determine discrete
minimizers for $D_\varepsilon$ using $\varepsilon =h$. We also compute conforming
approximations $u_h^c$ for the primal problem using a conforming $P1$
finite element method
and the ADMM iteration described in Remarks~\ref{rem:admm_primal_dual}.
Figure~\ref{fig:inf_laplace_ener} displays the resulting
approximation errors
\[
|D(z)-D_{\varepsilon,h}(z_h)|, \quad |I(u)-I(u_h^c)|,
\]
obtained using the Raviart--Thomas method for the dual problem and
a standard conforming $P1$ method for the primal problem. We observe
that on right-angled triangulations the $P1$ method leads to an almost quadratic
convergence rate which is slightly better than the experimental convergence
rate $\mathcal{O}(h^{5/3})$ observed for the Raviart--Thomas method. Surprisingly,
the nearly quadratic convergence behavior is also observed for
$P1$ finite element approximations on perturbed triangulations. We note,
however, that in this case the nodal interpolant is in general not
admissible, cf.~Remark~\ref{rem:inf_laplace_p1}.
\begin{figure}[h!]
\includegraphics[width=.49\linewidth]{inf_laplace_pw_constant_red_4.jpg} \hspace*{0.1mm}
\includegraphics[width=.49\linewidth]{inf_laplace_cr_reconstruct_red_4.jpg} \\
\caption{\label{fig:sols_inf_laplace} Piecewise constant approximation of the
solution of the infinity Laplacian defined in Example~\ref{ex:inf_laplace}
obtained with the Raviart--Thomas discretization
of the regularized dual problem (left) and the corresponding reconstructed Crouzeix--Raviart
approximation (right).}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=.65\linewidth]{inf_laplace_ener.pdf}
\caption{\label{fig:inf_laplace_ener} Experimental convergence
rate for the approximation of the value $D(z)$ using the Raviart--Thomas
discretization $D_{h,\varepsilon}$ and of $I(u)$ using a conforming $P1$ method
in the case of the infinity Laplace problem of Example~\ref{ex:inf_laplace}.}
\end{figure}
\clearpage
| {
"timestamp": "2020-02-07T02:13:07",
"yymm": "2002",
"arxiv_id": "2002.02359",
"language": "en",
"url": "https://arxiv.org/abs/2002.02359",
"abstract": "This article discusses nonconforming finite element methods for convex minimization problems and systematically derives dual mixed formulations. Duality relations lead to simple error estimates that avoid an explicit treatment of nonconformity errors. A reconstruction formula provides the discrete solution of the dual problem via a simple postprocessing procedure which implies a strong duality relation and is of interest in a posteriori error estimation. The framework applies to differentiable and nonsmooth problems, examples include $p$-Laplace, total-variation regularized, and obstacle problems. Numerical experiments illustrate advantages of nonconforming over standard conforming methods.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Nonconforming discretizations of convex minimization problems and precise relations to mixed methods",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9915543728093005,
"lm_q2_score": 0.7154240079185319,
"lm_q1q2_score": 0.709381803464376
} |
https://arxiv.org/abs/2203.01419 | Electrostatic partners and zeros of orthogonal and multiple orthogonal polynomials | For a given polynomial $P$ with simple zeros, and a given semiclassical weight $w$, we present a construction that yields a linear second-order differential equation (ODE), and in consequence, an electrostatic model for zeros of $P$. The coefficients of this ODE are written in terms of a dual polynomial that we call the electrostatic partner of $P$. This construction is absolutely general and can be carried out for any polynomial with simple zeros and any semiclassical weight on the complex plane. An additional assumption of quasi-orthogonality of $P$ with respect to $w$ allows us to give more precise bounds on the degree of the electrostatic partner. In the case of orthogonal and quasi-orthogonal polynomials, we recover some of the known results and generalize others. Additionally, for the Hermite--Padé or multiple orthogonal polynomials of type II, this approach yields a system of linear second-order differential equations, from which we derive an electrostatic interpretation of their zeros in terms of a vector equilibrium. More detailed results are obtained in the special cases of Angelesco, Nikishin, and generalized Nikishin systems. We also discuss the discrete-to-continuous transition of these models in the asymptotic regime, as the number of zeros tends to infinity, into the known vector equilibrium problems. Finally, we discuss how the system of obtained second-order ODEs yields a third-order differential equation for these polynomials, well described in the literature. We finish the paper by presenting several illustrative examples. | \section{Introduction}
Hermite polynomials
\begin{equation} \label{hermiteH}
H_N(x)=N!\sum_{\ell=0}^{\left\lfloor N/2\right\rfloor}\frac{(-1)^{\ell}(2x)^{N-2\ell}}{\ell!\;(N-2\ell)!}=2^Nx^N+\dots
\end{equation}
are probably the simplest representatives of the family of \textit{classical} orthogonal polynomials.
They satisfy the linear differential equation
\begin{equation} \label{odeHermite}
y''(x) -2 x y'(x) +2N y(x)=0
\end{equation}
and the orthogonality conditions
\begin{equation*}
\begin{split}
&\int_{-\infty}^{+\infty} x^j H_N(x) e^{-x^2}\, dx=0, \qquad j=0,1,\dots, N-1,\\
&\int_{-\infty}^{+\infty} x^N H_N(x) e^{-x^2}\, dx\neq 0.
\end{split}
\end{equation*}
As a consequence, their zeros are all real and simple. A well-known calculation that goes back to Stieltjes \cite{Stieltjes1885} (see also \cite[Theorem 6.8]{Szego75} or \cite{MR1379147}) shows that there are two equivalent physical interpretations of these zeros:
\begin{itemize}
\item as equilibrium positions of equally charged points on the plane in the presence of an external field (background potential); or
\item as an appropriately rescaled configuration of vortices on the real line, under the assumption that they all have the same circulation and rotate as a rigid body.
\end{itemize}
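Before discussing these models, here is a quick numerical sanity check of \eqref{hermiteH}, \eqref{odeHermite} and the orthogonality relations (a hedged Python sketch of ours; the degree $N=6$ and the quadrature order are arbitrary illustrative choices):
\begin{verbatim}
# Check the explicit formula, the ODE and the orthogonality relations for H_N.
# Illustrative sketch only; N and the quadrature order are arbitrary choices.
import numpy as np
from math import factorial
from numpy.polynomial import hermite as H   # "physicists'" Hermite polynomials

N = 6
coef = np.zeros(N + 1)                      # polynomial coefficients, lowest power first
for l in range(N // 2 + 1):
    coef[N - 2*l] = factorial(N) * (-1)**l * 2**(N - 2*l) \
                    / (factorial(l) * factorial(N - 2*l))
assert np.allclose(coef, H.herm2poly([0]*N + [1]))        # formula reproduces H_N

# ODE: y'' - 2 x y' + 2 N y = 0 at a few sample points
hser = [0]*N + [1]
xs = np.linspace(-3.0, 3.0, 7)
res = H.hermval(xs, H.hermder(hser, 2)) - 2*xs*H.hermval(xs, H.hermder(hser)) \
      + 2*N*H.hermval(xs, hser)
print(np.abs(res).max())                                  # zero up to rounding

# Orthogonality against e^{-x^2}: zero for j < N, nonzero for j = N
t, w = H.hermgauss(60)
print([float(np.sum(w * t**j * H.hermval(t, hser))) for j in range(N + 1)])
\end{verbatim}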
We explain these notions in more detail in Section \ref{sec:vortices}. Both models are rooted in the linear second order differential equation \eqref{odeHermite} satisfied by these polynomials. There are several ways of obtaining this equation, all of them relying on a specific feature of the orthogonality weight, namely the fact that its logarithmic derivative is a rational function. This idea makes it possible to extend the classical theory to the so-called \textit{semiclassical} orthogonal polynomials, a construction that probably goes back to J.~Shohat \cite{Shohat}, see also \cite{MR1340939, MR1637827}. This generalization preserves several convenient features of classical orthogonal polynomials, such as a Rodrigues-type formula or the existence of raising and lowering operators, see e.g.~\cite{Ismail2000} or \cite{Ismail05}; each one of these properties leads to \eqref{odeHermite}.
The elegance of the above-mentioned model attracted the attention of generations of researchers and led to several generalizations. For instance, we can choose as a starting point a second-order linear differential equation with polynomial coefficients (\textit{generalized Lam\'e equations} in algebraic form), whose particular cases are the hypergeometric and the Heun differential equations \cite{Ronveaux95}, and develop an electrostatic/vortex dynamics interpretation for the zeros of its polynomial solutions (\textit{Heine--Stieltjes polynomials}). This was carried out in the classical works of B\^ocher \cite{Bocher97}, Heine \cite{Heine1878} and Van Vleck \cite{Vleck1898}; for a more modern treatment, see e.g.~\cite{MR4091604, Dimitrov:00, Grunbaum98, MR2345246}, as well as \cite{MR2647571, MR2770010, MR2003j:33031} for the asymptotic results.
Another approach starts from the orthogonality conditions with respect to a semiclassical (or even a more general) weight, as was done in the pioneering work of Ismail \cite{Ismail2000, Ismail2000c, Ismail2001}, which has been extended in many directions, see e.g.~ \cite{zbMATH07222111, MR826706, MR2873079, zbMATH06596279, zbMATH06963441, zbMATH07122712, zbMATH07326239}, to cite a few. One such generalization is the case of \textit{quasi-orthogonal polynomials}, which satisfy ``incomplete'' orthogonality conditions. As was shown in \cite{MR3926161} in the simplest case of one condition short of full orthogonality, such polynomials also satisfy a linear second order differential equation that can be interpreted in electrostatic terms.
\textit{Hermite-Pad\'e} or \textit{multiple} orthogonal polynomials (MOP) of type II are defined by distributing the orthogonality conditions among different weights or measures. In the simplest case of two weights $w_1$, $w_2$, supported on $\Delta_1\subset \R$ and $\Delta_2\subset \R$, respectively, and for a given multi-index $\bm n= (n_1,n_2)\in \Z_{\ge 0}^2$, it is a polynomial $P_{\bm n}$ of total degree at most $N=|\bm n|:=n_1+n_2$, such that
\begin{equation} \label{typeIIintro}
\int_{\Delta_i}x^jP_{\bm n}(x)w_i(x)dx \begin{cases}
=0,&\quad j\leq n_i-1,\\
\neq 0,&\quad j=n_i,
\end{cases}
\qquad i=1, 2.
\end{equation}
Polynomial $P_{\bm n}$ appears as a common denominator of a pair of rational approximants to Markov functions related to the weights $w_i$'s, see Section~\ref{sec:generalHermitePade}. For a more detailed account of the corresponding theory we recommend the monograph \cite{niksor}, as well as the works of Aptekarev, Gonchar, Kuijlaars, Rakhmanov, Stahl, Suetin, Van Assche, Yattselev, and others, see e.g.~\cite{MR1702555, MR2475084, AptBraVA, Aptekarev:97, MR1240781, MR3602528, MR2187942, GRS97, VanAssche06, MR1808581, MR3304586, MR3907776, MR2796829, MR3058747, MR3137137} (the list is far from complete). MOP find applications in number theory, numerical analysis, integrable systems, interacting particle systems and random matrix models \cite{MR2963452, ALT2011, MR2470930, MR2327035}. Although the general analytic theory of multiple orthogonal polynomials is in its infancy, their zeros (especially, their asymptotic behavior) have been studied in several particular situations, known as the Angelesco and Nikishin cases, described in Section~\ref{sec:special}. However, there is no known electrostatic interpretation of these zeros.
Linear differential equations satisfied by multiple orthogonal polynomials have been found for many families of polynomials, see e.g.~\cite{AptBraVA, Aptekarev:97, MR2214212, MR3055365}, but in all cases these are equations of order 3 or higher. The problem is that an electrostatic interpretation of the solutions of these ODEs is not straightforward.
Our main goal is to present a unified construction that yields an electrostatic model for polynomials related to a (system of) semiclassical weights. As mentioned above, the known constructions of the differential equations for such polynomials use either a Rodrigues formula or the so-called raising and lowering operators, which can be combined into a single ODE \cite{MR2214212, MR3055365, Ismail05}. Instead, we start from a construction that associates with an arbitrary polynomial $P$ and a semiclassical weight $w$ supported on a set $\Delta \subset \C$ another polynomial $S$, which we call its \textit{electrostatic partner}, see Definition~\ref{defCompanionP} and the schematic representation below.
\setlength{\unitlength}{1mm}
\begin{center}
\begin{tikzpicture}
\draw[thick] (2,1) rectangle (5,2.8);
\draw[->,thick] (1,1.4) -- (2,1.4);
\put(6,15){$w$};
\put(6,11){$\Delta$};
\draw[->,thick] (1,2.4) -- (2,2.4);
\put(6,23){$P$};
\draw[->,thick] (5,1.9) -- (6,1.9);
\put(61,18){$ S$};
\end{tikzpicture}
\end{center}
Using $S$ and $w$ we can write a linear second order differential equation with polynomial coefficients whose solutions are $P$ and the corresponding function of the second kind $q$, defined in Section~\ref{sec:quasiorth}. This shows that the zeros of $P$ (assumed simple) are in an electrostatic equilibrium in an external field created by $w$ and by the attracting zeros of $S$ (understanding by this a stationary point of their energy, and not necessarily its local or global minimum). This construction uses only the semiclassical character of $w$; no orthogonality conditions on $P$ are required. An additional assumption that $P$ is quasi-orthogonal with respect to $w$ (in the complex case, we mean by that a non-hermitian orthogonality, see \eqref{orthog1}) allows us to make more precise statements about the electrostatic partner of $P$. Moreover, two alternative representations for $S$ in this case yield a generalization of an identity involving Wronskian and Casorati determinants of $P$ and $q$, known in the case of classical orthogonal polynomials, see Section~\ref{sec:quasi}.
Since the definition of type II Hermite-Pad\'e orthogonal polynomials \eqref{typeIIintro} boils down to two simultaneous quasi-orthogonality conditions, we can associate with the corresponding MOP $P_{\bm n}$ \textit{two} electrostatic partners, $S_{\bm n, 1}$ and $S_{\bm n, 2}$, and a system of two linear differential equations of order 2, whose solution is $P_{\bm n}$. This apparent redundancy can be used to find an electrostatic model for the zeros of $P_{\bm n}$. Namely, by a procedure similar to the definition of an electrostatic partner, we associate with $P_{\bm n}$, $w_1$ and $w_2$ a polynomial $R_{\bm n}$:
\begin{center}
\begin{tikzpicture}
\draw[thick] (2,1) rectangle (5,2.8);
\draw[->,thick] (1,1.2) -- (2,1.2);
\draw[->,thick] (1,1.9) -- (2,1.9);
\put(4,13){$w_2$};
\put(4,9){$\Delta_2$};
\put(4,21){$w_1$};
\put(4,17){$\Delta_1$};
\draw[->,thick] (1,2.6) -- (2,2.6);
\put(4,25){$P_{\bm n}$};
\draw[->,thick] (5,1.9) -- (6,1.9);
\put(61,18){$ R_{\bm n}$};
\end{tikzpicture}
\end{center}
With this construction, the zeros of $P_{\bm n}$ and the zeros of $S_{\bm n, 1}$ (or $S_{\bm n, 2}$) are in a vector equilibrium given by their mutual interaction and by the vector external field created by $w_1$, $w_2$ and the zeros of $R_{\bm n}$, see Section~\ref{sec:multiple_orth} for details. This model is especially convenient because it is known that the asymptotic distribution of the zeros of $P_{\bm n}$ is usually described by vector equilibria. We discuss this connection and provide some heuristic arguments for this discrete-to-continuous transition in Section~\ref{sec:asymptotics}, where several particular configurations are analyzed in detail.
So far, both two- and three-component vector critical measures have been used to describe asymptotics in several cases. Our construction suggests that there is one universal two-component vector equilibrium valid for all known configurations corresponding to perfect systems, and that all descriptions constitute just its particular manifestations.
In order to establish another connection with previous literature, we describe in Section~\ref{sec:ODE3} how to combine the system of ODEs from Section~\ref{sec:multiple_orth} into a third order linear differential equation whose solutions are $P_{\bm n}$ and the corresponding functions of the second kind. An additional advantage of this construction is that it is easily generalized to the case of more than 2 weights and explains the appearance of higher order ODEs.
In the last section we discuss several examples of multiple orthogonal polynomials well known in the literature.
We hope that this approach can be applied in some other contexts; in particular, it would be interesting to explore a possible electrostatic interpretation of the zeros of the Type I multiple orthogonal polynomials, see e.g.~\cite{niksor}.
Since this paper unfortunately contains a large number of technical details and auxiliary results (some of them relegated to Appendices~\ref{appendixA} and \ref{sec:realcase}), we finish this introduction with a short navigation guide for the reader interested in the main highlights:
\begin{itemize}
\item An electrostatic partner $S$ of a given polynomial $P$ (in a sense, the starting fundamental construction) is introduced in Definition~\ref{defCompanionP}, whose consistency is justified by Theorem~\ref{lem1aux}.
\item The second order linear differential equation satisfied by $P$ is introduced in Theorem~\ref{CorolarioEDO}, which leads to an electrostatic model (Proposition~\ref{cor:criticalGeneral}) for the zeros of $P$, which are shown to be in equilibrium in a field created in part by the attracting zeros of the electrostatic partner $S$.
\item This construction is extended to a type II Hermite-Pad\'e orthogonal polynomial with respect to two weights, giving us now two second order linear differential equations (Theorem~\ref{TeoClaveHP}). Additionally, we get another set of differential equations, now for the electrostatic partners (Theorem~\ref{thm:WronskianS/w}, which is based on a construction from Proposition~\ref{prop:WronskianS/w}).
\item As a consequence, we derive a vector equilibrium model for the two sets of point charges: the zeros of the Hermite-Pad\'e orthogonal polynomial and the zeros of its electrostatic partner(s), Theorem~\ref{thm:vectorelectrostatics}.
\item More precise results about the location of the zeros of the electrostatic partners in some widely studied cases of Hermite-Pad\'e orthogonal polynomials are the subject of Section~\ref{sec:special}. They allow us to discuss in Section~\ref{sec:asymptotics} the discrete-to-continuous transition in the equilibrium model as the degrees tend to $\infty$, and to compare the resulting models with the description of the asymptotic distribution of zeros in terms of the vector equilibrium, both with 2 and 3 components. In particular, Corollary~\ref{cor:asymptoticsGeneral} suggests that the universal description can be achieved using a 2-component vector equilibrium with the interaction matrix
$$
\begin{pmatrix}1&-1/2\\-1/2&1\end{pmatrix}.
$$
\item Since third order linear differential equations associated to the multiple orthogonal polynomials are well known in the literature, we have included in Section~\ref{sec:ODE3} their derivation from the system of second order ODE described in Theorem~\ref{TeoClaveHP}.
\item Last, but definitely not least, we have a set of curious examples in Section~\ref{sec:examples}, whose examination poses several interesting questions and suggests possible lines of further research.
\end{itemize}
\section{Electrostatics of point charges and vortex dynamics} \label{sec:vortices}
\subsection{Identical point charges and vortices}
\
We can associate with $N$ pairwise distinct points $\zeta_i$ on the plane ($\zeta_i \neq \zeta_j $ for $i\neq j$) their discrete ``counting'' measure
\begin{equation}\label{defMucritDiscrete}
\mu=\sum_{k=1}^N \delta_{\zeta_k} ,
\end{equation}
where $\delta_x$ is a unit mass (Dirac delta) at $x$, and define its (discrete) logarithmic energy\footnote{Actually, the magnitude in \eqref{EnergyDiscrt} is twice the logarithmic energy, which is not relevant, but explains the factor $2$ in \eqref{defWeightedEnergy} introduced for consistency.}
\begin{equation} \label{EnergyDiscrt}
\mathcal E(\mu) :=
\sum_{i\neq j} \log \frac{1}{|\zeta _i-\zeta _j|}
\end{equation}
(we can extend the notion of the energy to the case when two or more $\zeta_j$'s coincide by assuming that then $\mathcal E(\mu)=+\infty$).
Additionally, given a real-valued function (\emph{external field} or \textit{background potential}) $\varphi $, finite at $\supp(\mu)$, we consider the weighted energy
\begin{equation}\label{defWeightedEnergy}
\mathcal E_\varphi(\mu) := \mathcal E (\mu)+ 2 \sum_{k=1}^N \varphi(\zeta _k) \,.
\end{equation}
For our purposes, it will be sufficient to assume that $\varphi=\Re \Phi$, where $\Phi$ is an analytic (in general, multivalued) function in $\C$, excluding its finitely many isolated singularities and branch points, with a single-valued derivative $\Phi'$.
\begin{definition}[\cite{MR2770010}] \label{defCriticalScalar}
We say that $\mu$ in \eqref{defMucritDiscrete} is \emph{$\varphi$-critical}, or just a \emph{critical measure}, if $\supp(\mu)$ is disjoint from the set of singularities of $\varphi$ and is a stationary point of the weighted discrete energy $\EE_\varphi(\mu)=\EE_\varphi(\zeta _1, \dots, \zeta _N)$ defined in \eqref{defWeightedEnergy}:
\begin{equation}\label{gradient}
\nabla \, \EE_\varphi(\zeta_1, \dots, \zeta_N)=0\,,
\end{equation}
or equivalently,
$$
\frac{\partial}{\partial z} \, \EE_\varphi(\zeta_1, \dots, z, \dots \zeta_N)\big|_{z=\zeta_k} =0, \quad k = 1, \dots, N, \qquad \frac{\partial}{\partial z} = \frac{1}{2}\, \left(\frac{\partial}{\partial x} - i \frac{\partial}{\partial y}\right).
$$
\end{definition}
We also say that the configuration of points (or charges) is in \textit{electrostatic equilibrium} in the external field $\varphi$.
Notice that with $\varphi=\Re \Phi$, we can write explicitly the equilibrium conditions for $\EE_\varphi(\zeta_1, \dots, \zeta_N)$ as the system of equations
\begin{equation} \label{eq:condEq}
\sum_{\substack{i=1 \\
i\neq j }}^N
\frac{1}{\zeta_{j}-\zeta_{i}}- \Phi^{\prime}\left(\zeta_{j}\right) =0, \quad j=1, \dots, N.
\end{equation}
Let
\begin{equation} \label{eq:defPOly}
y(z):=\prod_{j=1}^N (z-\zeta_j);
\end{equation}
a common terminology is that $\mu$ in \eqref{defMucritDiscrete} is the \emph{zero counting measure} of the polynomial $y$, for which we will use the notation
\begin{equation} \label{defCountingMeaure}
\nu(y):=\sum_{j=1}^N \delta_{\zeta_j} .
\end{equation}
It is easy to check that
\begin{equation} \label{eq:polyidentities}
y'(z)=y(z) \sum_{j=1}^N \frac{1}{z-\zeta_j}, \qquad y''(z)= y(z) \sum_{\substack{i, j=1 \\
i\neq j }}^N \frac{1}{(z-\zeta_i)(z-\zeta_j)},
\end{equation}
from where
$$
\sum_{\substack{i=1 \\
i\neq j }}^N
\frac{1}{\zeta_{j}-\zeta_{i}}=\frac{y''}{2y'}(\zeta_j), \quad j=1, \dots, N.
$$
In particular, \eqref{eq:condEq} is equivalent to
\begin{equation} \label{eq:condEqBis}
\left( y''- 2\Phi^{\prime} y'\right) \left(\zeta_{j}\right) =0, \quad j=1, \dots, N.
\end{equation}
In the case when all zeros $\zeta_j$'s are on the real line and the external field is given by $\varphi(x)=x^2/2$, we get from \eqref{eq:condEqBis} that $y''-2 x y'$ matches $y$ up to a multiplicative constant. Comparing the leading coefficients we conclude that $y$ solves the differential equation \eqref{odeHermite}; in other words, the zeros of Hermite polynomials are in electrostatic equilibrium on $\mathbb R$ in the external field $\varphi(x)=x^2/2$, as observed by Stieltjes in \cite{Stieltjes1885}
\footnote{As is pointed out in \cite{MR1379147}, Stieltjes mentions without proving that the equilibrium configuration is actually the minimum of the energy. The proof can be found for instance in \cite[Section 6.7]{Szego75}.}. He also realized that this kind of electrostatic model is easily generalized to all classical families of polynomials (Jacobi, Laguerre and Bessel), see \cite{Stieltjes1885b, MR1554668}, or \cite{Szego75} and \cite{Ismail05} for a more modern account.
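For instance, the resulting equilibrium condition for the zeros of $H_N$, namely $\sum_{i\neq j}(\zeta_j-\zeta_i)^{-1}=\zeta_j$ for every $j$, is easy to verify numerically (a hedged Python sketch of ours; the degree is an arbitrary choice):
\begin{verbatim}
# Zeros of H_N as an equilibrium configuration: check (eq:condEq) with Phi(z) = z^2/2,
# i.e. sum_{i != j} 1/(z_j - z_i) = z_j for every zero z_j of H_N.
import numpy as np
from numpy.polynomial import hermite as H

N = 12
z = H.hermroots([0]*N + [1])                 # the N zeros of H_N
res = [sum(1.0/(z[j] - z[i]) for i in range(N) if i != j) - z[j] for j in range(N)]
print(np.abs(res).max())                     # close to machine precision
\end{verbatim}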
Another approach to the zeros of these polynomials is via vortex dynamics. The notion of a point vortex is a classical approximation in ideal hydrodynamics of planar flow, introduced almost 150 years ago in Helmholtz's classical paper on vortex dynamics \cite{Helmholtz1858}. Considering the flow plane to be the complex plane, the equations of motion for $N$ point vortices with circulations $\gamma_{i}$ at positions $\zeta_i$, $i=1, \dots, N$, in a background flow $\Psi $, are
\begin{equation} \label{vortices1}
\overline{ \left(\frac{d \zeta_{j}}{d t} \right)}=\frac{1}{2 \pi i} \sum_{\substack{i=1 \\
i\neq j }}^{N} \frac{\gamma_{i}}{\zeta_{j}-\zeta_{i}} + \frac{1}{2\pi i}\, \overline{ \Psi(\zeta_j) }, \quad j=1,2, \ldots, N.
\end{equation}
In this paper, the overline indicates complex conjugation.
By \eqref{eq:condEq}, stationary vortices ($d\zeta_j/dt=0$) correspond to electrostatic equilibrium in the external field $\varphi=\Re \Phi$ if the background flow $\Psi$ satisfies $\overline{\Psi (\zeta_j)}=-\Phi'(\zeta_j)$, $j=1, \dots, N$.
Alternatively, if a vortex configuration rotates as a rigid body with angular velocity $\Omega$, then $\overline{d\zeta_j/dt}$ is equal to $\overline{\zeta_j}$ times a purely imaginary constant proportional to the angular velocity.
If, in this rigidly rotating regime, we assume additionally that all $\zeta_j$'s are real, that all vortices are identical (all $\gamma_i$'s are equal), and that there is no external flow field, then after rescaling, \eqref{vortices1} boils down to
\begin{equation} \label{vortices1bis}
\zeta_{j} = \sum_{\substack{i=1 \\
i\neq j }}^{N} \frac{1}{\zeta_{j}-\zeta_{i}} , \quad j=1,2, \ldots, N.
\end{equation}
Let us use again the polynomial $y$ defined in \eqref{eq:defPOly}, known in this field as the \textit{generating polynomial} for the vortex configuration (see \cite{MR1831715}). We can rewrite the second identity in \eqref{eq:polyidentities} equivalently as
\begin{equation} \label{eq:polyidentitiesBis}
y''(z)= -2 y(z) \sum_{j=1}^N \sum_{\substack{i=1 \\
i\neq j }}^N \frac{1}{(\zeta_i-\zeta_j)(z-\zeta_j)},
\end{equation}
which together with \eqref{vortices1bis} yields again the differential equation \eqref{odeHermite}. Thus, the zeros of the Hermite polynomials give us the positions of vortices on $\R$ such that the configuration rotates like a rigid body. Clearly, these considerations can be extended to more general families of polynomials.
A reader interested in vortex dynamics may consult the nice surveys \cite{MR2305271} and \cite{MR2538285}.
\subsection{Groups of point charges and vortices }
\
We can extend Definition~\ref{defCriticalScalar} to a vector setting (for our purpose, it will be sufficient to consider two-component vectors) that allows us to handle groups of differently charged particles. Given a vector of discrete measures $\vec \mu=(\mu_1, \mu_2)$, with $\supp (\mu_1) \cap \supp(\mu_2)=\emptyset$,
\begin{equation} \label{defMuJ}
\mu_1=\sum_{k=1}^{n_1}\delta_{\zeta_k} ,\qquad \mu_2=\sum_{j=1}^{n_2}\delta_{\xi_j},
\end{equation}
a real \textit{interaction parameter} $-1<a<1$,
and a vector external field $\vec \varphi=(\varphi_1, \varphi_2)$, both $\varphi_i$ real-valued and finite at $\supp(\mu_1)\cup \supp(\mu_2)$,
the corresponding \textit{weighted vector energy} is
\begin{equation}\label{def:EnergiaVect}
\mathcal{E}_{\vec \varphi, a}(\vec \mu ):=
\mathcal{E}(\mu_1)+2a\sum_{k=1}^{n_1}\sum_{j=1}^{n_2}\log\frac{1}{|\zeta_k-\xi_j|}
+\mathcal{E}(\mu_2)+2\sum_{k=1}^{n_1}\varphi_1(\zeta_k)+2\sum_{j=1}^{n_2}\varphi_2(\xi_j)
\end{equation}
(see the notation in \eqref{EnergyDiscrt}).
We can restate this definition using vector notation and the symmetric positive-definite matrix
$$M:=\begin{pmatrix}1&a\\a&1\end{pmatrix} $$
and say that the weighted vector energy in \eqref{def:EnergiaVect} corresponds to the \textit{interaction matrix} $M$.
Moreover, for measures \eqref{defMuJ} we can write alternatively
$$
\mathcal{E}_{\vec \varphi, a}(\zeta_1,\dots,\zeta_{n_1},\xi_1,\dots,\xi_{n_2} ):=\mathcal{E}_{\vec \varphi, a}(\vec \mu ).
$$
\begin{definition} \label{defCriticalVector}
We say that $\vec \mu$ is a \textit{critical vector measure} for $\mathcal{E}_{\vec \varphi, a }$ if $\supp(\mu_1)\cup \supp(\mu_2)$ is a stationary configuration for $\mathcal{E}_{\vec \varphi, a }$:
$$
\nabla \, \mathcal{E}_{\vec \varphi, a }(\zeta_1,\dots,\zeta_{n_1},\xi_1,\dots,\xi_{n_2})=0.
$$
\end{definition}
For any Borel measure $\mu $ on $\C$ we can define its \textit{logarithmic potential},
\begin{equation} \label{defLogPot}
U^\mu(z) := \int \log\frac{1}{|z-t|}\, d\mu(t).
\end{equation}
From the expression for $\mathcal{E}_{\vec \varphi, a}$ it follows that Definition \ref{defCriticalVector} is equivalent to simultaneous equilibrium conditions
\begin{equation} \label{VectorequilConditions}
\begin{split}
\mu_1 \text{ is $F_1$-critical, with } F_1 & := a\, U^{\mu_2} + \varphi_1, \\
\mu_2 \text{ is $F_2$-critical, with } F_2 & := a\, U^{\mu_1} + \varphi_2.
\end{split}
\end{equation}
Alternatively, consider the situation when in the absence of the background flow, the circulations $\gamma_i$'s in \eqref{vortices1} take only two possible values,
$$
\gamma_{k}= \begin{cases}\gamma>0, & \text { for } \quad k=1,2, \ldots, n_1,
\\ -\gamma, & \text { for } \quad k=n_1+1, n_1+2, \ldots, n_1+n_2=N
\end{cases}
$$
We can rename $\xi_k:=\zeta_{n_1+k}$, $k=1,\dots, n_2$; in this way, for stationary vortices we have the equations
\begin{equation} \label{groupvortices1}
\begin{split}
\sum_{\substack{i=1 \\
i\neq j }}^{n_1} \frac{1}{\zeta_{j}-\zeta_{i}} & = \sum_{k=1 }^{n_2} \frac{1}{\zeta_{j}-\xi_{k}} , \quad j=1,2, \ldots, n_1, \\
\sum_{\substack{i=1 \\
i\neq j }}^{n_2} \frac{1}{\xi_{j}-\xi_{i}} & = \sum_{k=1 }^{n_1} \frac{1}{\xi_{j}-\zeta_{k}} , \quad j=1,2, \ldots, n_2.
\end{split}
\end{equation}
To study these vortex patterns, we define again the generating polynomials
$$
y(z)=\prod_{j=1}^{n_1}\left(z-\zeta_{j}\right), \quad v(z)=\prod_{k=1}^{n_2}\left(z-\xi_{k}\right).
$$
Formulas \eqref{eq:polyidentities} and \eqref{eq:polyidentitiesBis} show that \eqref{groupvortices1} yields the bilinear identity
\begin{equation}\label{eq:Tkach}
y'' v - 2 y' v' + y v'' =0.
\end{equation}
This is currently known as Tkachenko's equation, since it was first derived by Tkachenko in his dissertation in 1964. Polynomial solutions of this equation were studied by Burchnall and Chaundy \cite{MR1576413}. Adler and Moser \cite{MR501106} showed that \eqref{eq:Tkach} is solved by two consecutive polynomials that are nowadays known as Adler-Moser polynomials, see also \cite{MR2924501}. Moreover, comparing \eqref{groupvortices1} with \eqref{eq:condEq} and \eqref{VectorequilConditions} we conclude that the zeros of consecutive Adler-Moser polynomials are stationary configurations (or equivalently, $\vec \mu=(\mu_1, \mu_2)$ defined in \eqref{defMuJ} is a critical vector measure) for the vector energy $\mathcal{E}_{\vec \varphi, a}(\vec \mu )$ defined in \eqref{def:EnergiaVect}, with
$$
\vec \varphi\equiv (0,0) \quad \text{and} \quad a=-\frac{1}{2},
$$
a fact that was already observed in \cite{MR824780}.
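A minimal symbolic check of \eqref{eq:Tkach} (a hedged sketch of ours): the pair $y(z)=z$, $v(z)=z^3+c$, with $c$ an arbitrary constant (for $c\neq 0$ its zeros encode one vortex and three vortices of opposite circulation), solves Tkachenko's equation identically.
\begin{verbatim}
# Symbolic check of Tkachenko's equation y''v - 2y'v' + yv'' = 0
# for the pair (y, v) = (z, z^3 + c); c is an arbitrary constant.
import sympy as sp

z, c = sp.symbols('z c')
y, v = z, z**3 + c
expr = sp.diff(y, z, 2)*v - 2*sp.diff(y, z)*sp.diff(v, z) + y*sp.diff(v, z, 2)
print(sp.simplify(expr))   # 0
\end{verbatim}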
Further generalizations of these ideas, and in particular, their relation to rational solutions of Painlev\'e equations, can be found in \cite{MR2305271, MR3335711, MR2538285, MR2402236, MR2989511,MR3508236, MR2530570, MR2336164, Loutsenko:2004, MR1831715}, to cite a few.
\section{Electrostatics for semi-classical weight}\label{sec:quasiorth}
In the rest of the paper we will try to adhere to the following notational convention whenever possible: we will use capital letters to denote polynomials, and lowercase letters to indicate general, usually multivalued, functions. A few exceptions to these rules will be clearly indicated.
\subsection{The semiclassical weight} \label{sec:generalcase}
\
We start from a monic polynomial
$$
A(z)=z^{\deg(A)} + \text{lower degree terms;}
$$
we use the notation $\mathbb A$ for the set of zeros of $A$ on $\C$ and admit $\mathbb A=\emptyset$. Let
$$
\Delta := \Gamma_1 \cup \dots \cup \Gamma_k,
$$
where each $\Gamma_j$, $j=1,\dots, k$, is an oriented Jordan piece-wise analytic arc joining pairs of points from $\mathbb A\cup \{\infty\}$ and not containing any other point from $\mathbb A$ in its interior $\os \Gamma_j$. For simplicity, we assume that the interiors of $\Gamma_j$'s are pairwise disjoint ($\os \Gamma_i\cap \os \Gamma_j=\emptyset$, $i\neq j$), but as it will be clear from what follows, this is not an actual restriction.
Given another polynomial, $B$,
we define, up to a normalization constant, (multivalued) analytic functions in $\C$,
\begin{equation} \label{defW}
w(z):=\exp \left( \int ^z \frac{B(t)}{ A(t)}\, dt \right), \quad v(z):= A(z) w(z),
\end{equation}
with only possible singularities (either isolated or branch points) at $\mathbb A\cup \{\infty\}$. Notice that a priori we do not assume that $A$ and $B$ are relatively prime.
The orientation of the arcs $\Gamma_j$ in $\Delta$ defines the left- and right-side boundary values of the function $w$ on $\Gamma_j$ that we denote by $w_+$ and $w_-$, respectively; two different values of $w_+$, as well as $w_+$ and $w_-$, differ by a multiplicative constant. We fix on $\Delta$ the \textit{weight} by assuming that on each component $\Gamma_j$ it coincides, up to a non-zero multiplicative constant, with $w_+$; for the sake of simplicity of notation, we will be denoting this weight by the same letter $w$. A fundamental assumption is that such a weight
has finite moments:
$$
\int_\Delta |z|^m |w(z)| |dz|<+\infty, \quad m=0, 1, 2, \dots
$$
As a consequence, for $m=0, 1, 2, \dots$,
\begin{equation}\label{endpoints}
z^m v(z)=o(1), \quad z\to z_0, \quad z_0 \text{ an endpoint of an arc from $\Delta$. }
\end{equation}
Clearly, $w$ is piece-wise differentiable on $\Delta$ and
\begin{equation} \label{ratW}
\frac{w'(z)}{w(z)}= \frac{B(z)}{ A(z)}, \quad z\in \Delta;
\end{equation}
this equality is actually valid in $\C\setminus \mathbb A$.
The weight $w$ is known as \textit{semiclassical}, and the value
\begin{equation} \label{def:class}
\sigma := \max \{\deg(A)-2, \deg(B)-1 \}
\end{equation}
is often referred to as its \textit{class}, see e.g.~\cite{MR1340939,MR1637827}. This is ambiguous in the case when $A$ and $B$ have a common factor, so we prefer to say that $\sigma$ is the \textit{class of the pair $(A,B)$} and assume $\sigma\geq 0$. Relation \eqref{ratW} can be written in the form
$$
\left( A w\right)'-\left( A'+B\right)w=0,
$$
known as the \textit{Pearson differential equation}, see e.g.~\cite{chihara:1978, Szego75}.
\begin{example} \label{exampleJacobi1}
The simplest and well known example is the case of the Jacobi weight, when
\begin{equation} \label{ABJacobi}
A(x)=x^2-1, \quad B(x)=(\alpha+\beta) x + \alpha-\beta.
\end{equation}
If $\Re \alpha, \Re \beta >-1$ then condition \eqref{endpoints} is satisfied for $\Delta=[-1,1]$.
This is an example of a \textit{classical} weight ($ \sigma=0$); here
$$
w(z)=(z-1)^{\alpha } (z+1)^{\beta }, \quad v(z)=(z-1)^{\alpha+1} (z+1)^{\beta+1 }.
$$
\end{example}
\begin{example} \label{exampleAngelescoJJ}
With
$$
A(x)=x(x-a)(x-b), \quad b<0<a, \quad B(x)=\alpha x(x-b)+ \beta x (x-a)+ \gamma (x-a)(x-b),
$$
we have
$$
w(x)=(x-a)^{\alpha } (x-b)^{\beta } x^{\gamma }, \quad v(x)=(x-a)^{\alpha+1} (x-b)^{\beta+1} x^{\gamma+1}.
$$
If $\alpha, \beta, \gamma>-1$, we may take
$$
\Gamma_1=[b,0], \quad \Gamma_2=[0,a], \quad \Delta=\Gamma_1\cup \Gamma_2,
$$
so that condition \eqref{endpoints} holds.
This is a semiclassical weight of class $\sigma=1$.
\end{example}
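A quick symbolic verification of the relation \eqref{ratW} for the two examples above (a hedged sketch of ours; the symbols are generic):
\begin{verbatim}
# Verify w'/w = B/A for the Jacobi and the Angelesco-type weights above.
import sympy as sp

x, a, b, al, be, ga = sp.symbols('x a b alpha beta gamma')

# Jacobi weight: w = (x-1)^alpha (x+1)^beta
w = (x - 1)**al * (x + 1)**be
A = x**2 - 1
B = (al + be)*x + al - be
print(sp.simplify(sp.diff(w, x)/w - B/A))     # 0

# Second example: w = (x-a)^alpha (x-b)^beta x^gamma
w = (x - a)**al * (x - b)**be * x**ga
A = x*(x - a)*(x - b)
B = al*x*(x - b) + be*x*(x - a) + ga*(x - a)*(x - b)
print(sp.simplify(sp.diff(w, x)/w - B/A))     # 0
\end{verbatim}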
\subsection{The electrostatic partner} \label{sec:companion}
\
In this section we carry out a purely formal construction that will gain content with additional assumptions in Sections \ref{sec:quasi} and \ref{sec:multiple_orth}.
For any $w$-integrable function $f$ on $\Delta$ we can define its \textit{weighted Cauchy transform}
\begin{equation} \label{defCauchy}
\mathfrak C_w[f](z):=\int_\Delta\frac{f(t)w(t)}{t-z}dt,
\end{equation}
holomorphic in $\C\setminus \Delta$. The case of $f\equiv 1$ is particularly important, so we will use a brief notation
\begin{equation} \label{defwhat}
\widehat w(z):= \mathfrak C_w[1](z)=\int_\Delta\frac{ w(t)}{t-z}dt;
\end{equation}
$\widehat w$ is also known as a \textit{Markov function} related to the weight $w$.
Given a polynomial $P\not\equiv 0$, its \textit{polynomial of the second kind} $Q$ is defined as
\begin{equation} \label{defQnew}
Q(z):= \int_\Delta\frac{P(t)-P(z)}{t-z} w(t)dt.
\end{equation}
Additionally, we call
\begin{equation} \label{defqN}
q(z):= \frac{ \mathfrak C_w[P](z)}{ w(z)}
\end{equation}
the corresponding \textit{function of the second kind} of $P$. They are related by the evident identity
\begin{equation} \label{identityQ}
P(z) \widehat w(z) + Q(z ) = \mathfrak C_w[P](z) = q(z) w(z).
\end{equation}
Notice that due to analyticity, more than the actual path of integration $\Delta$ in the definition of $q$ what matters is the connectivity of the points from $\mathbb A$ by the arcs $\Gamma_j$'s, comprising $\Delta$: monodromy-preserving deformations of $\Delta$ do not change $q$. Moreover, although $q$ is not necessarily analytic or even single-valued in $\C\setminus \Delta$,
\begin{equation} \label{Wq}
w q' = (\mathfrak C_w[P])' - \mathfrak C_w[P] \, \frac{w' }{w} = \left( \widehat P \right)' - \mathfrak C_w[P] \, \frac{B }{ A }
\end{equation}
is meromorphic in $\C\setminus \Delta$, with only possible poles at the poles of $B/A$.
We denote by $ \mathfrak{Wrons}[f_1, \dots, f_k] $ the Wronskian determinant of the functions $f_1, \dots, f_k$, namely
\begin{equation} \label{eq:Wronskian}
\mathfrak{Wrons}[f_1, \dots, f_k] := \det \begin{pmatrix}
f_1 & \dots & f_k \\
f_1' & \dots & f_k' \\
\vdots & \dots & \vdots \\
f_1^{(k-1)} & \dots & f_k^{(k-1)}
\end{pmatrix}.
\end{equation}
Finally, for a polynomial $P$ we define the transform
\begin{equation} \label{defTransform2}
\mathfrak D_w[P] : = \det
\begin{pmatrix} P & \mathfrak C_w[P] \\A P' & A \left( \mathfrak C_w[P] \right)' -B \mathfrak C_w[P] \end{pmatrix} = v \times \mathfrak{Wrons}[P,q] ,
\end{equation}
a priori holomorphic in $\C\setminus \Delta$.
\begin{thm} \label{lem1aux}
If $P$ is a polynomial of degree $N\in \N$ then there exist a polynomial $U$ of degree $\le N-1$ and a polynomial $H$ of degree $\le \sigma$ such that
\begin{equation} \label{polynD}
\mathfrak D_w[P] = \det \begin{pmatrix}
P & \mathfrak C_w[P] \\
U & \mathfrak C_w[U] + H
\end{pmatrix} .
\end{equation}
Moreover, $\mathfrak D_w[P]$ is a polynomial of degree $\le N+\sigma$.
\end{thm}
\begin{proof}
We start with an identity for $ \mathfrak C_w[P] $: for $z\in \C \setminus \Delta$,
\begin{equation} \label{identity1bis}
\mathfrak C_w[A P ' + B P ](z)= A(z) \left( \mathfrak C_w[P]\right) '(z) -D(z),
\end{equation}
where
\begin{equation} \label{identity1bis1}
D(z):= \int_\Delta \frac{A(z)-A(x) +A'(x)(x-z)}{(x-z)^2}P (x) w(x)\, dx
\end{equation}
is a polynomial of degree $\le \deg(A)-2$. In particular, $D\equiv 0$ if $\deg(A) \le 1$.
The identity can be established by direct calculation: integrating by parts and with account of \eqref{endpoints}--\eqref{ratW},
\begin{align*}
\mathfrak C_w[A P ' + B P ](z) & = \int_\Delta \frac{A (x)}{x-z}\frac{d\left(P w \right)}{dx} \, dx = - \int_\Delta \frac{d }{dx}\left(\frac{A (x)}{x-z}\right) P (x)w (x) dx,
\end{align*}
so that the left-hand side in \eqref{identity1bis} is equal to
$$
- \int_\Delta \left(\frac{A(z)}{(x-z)^2} +\frac{d }{dx}\left(\frac{A (x)}{x-z}\right) \right) P (x)w (x) dx,
$$
and \eqref{identity1bis} follows.
Denote by $E $ the polynomial part of the expansion of $ A P ' / P $ at $\infty$, i.e.
$$
A(z)\frac{P '}{P }(z) = E (z)+ \mathcal O\left(\frac{1}{z} \right), \quad z\to \infty.
$$
It is easy to see that $E $ is a polynomial of degree $\leq \deg(A)-1$. In this way,
\begin{equation}\label{defU}
U :=AP ' - E P
\end{equation}
is a polynomial of degree $\leq N-1$.
By \eqref{identity1bis},
\begin{align*}
\mathfrak C_w[U ] & = \mathfrak C_w[AP' - E P ]= \mathfrak C_w[A P' + B P] + \mathfrak C_w[(-E-B) P] \\
& = A \left(\mathfrak C_w[P]\right)' - (B+E)(z) \mathfrak C_w[P ] - H ,
\end{align*}
where, with the definition \eqref{identity1bis1},
\begin{equation}\label{DefH}
H(z):= D(z)+ \int_\Delta \frac{(E+B) (x)-(E+B) (z) }{x-z} \, P(x) w(x)\, dx
\end{equation}
is a polynomial of degree $\le \sigma$. In particular (see \eqref{Wq}),
\begin{equation} \label{identityUhat}
v q' = Aw q' = A \left(\mathfrak C_w[P]\right)' - B \mathfrak C_w[P ] = \mathfrak C_w[U ] +E \mathfrak C_w[P ] + H.
\end{equation}
Using \eqref{defU} and \eqref{identityUhat} in \eqref{defTransform2} we conclude that
$$
\mathfrak D_w[P]
= \det
\begin{pmatrix} P & \mathfrak C_w[P ] \\AP '& \mathfrak C_w[U ] +E \mathfrak C_w[P ] + H \end{pmatrix}
= \det
\begin{pmatrix} P & \mathfrak C_w[P ] \\ U & \mathfrak C_w[U ] + H \end{pmatrix} ,
$$
establishing \eqref{polynD}.
In order to show that $\mathfrak D_w[P] $ is a polynomial (of degree at most $N +\sigma $) we use the standard arguments from the Riemann--Hilbert characterization of orthogonal polynomials (see e.g. \cite{MR2000g:47048}). Consider the matrix
$$
\bm Y(z):=\left(Y_{ij} (z)\right)_{i,j=1,2}= \begin{pmatrix} P & \mathfrak C_w[P ] \\
U & \mathfrak C_w[U ] + H\end{pmatrix} ,
$$
holomorphic in $\C\setminus \Delta$, which satisfies that
$$
\bm Y_+(x)=\bm Y_-(x) \, \begin{pmatrix}
1 & w(x) \\
0 & 1
\end{pmatrix}, \quad x \in \Delta,
$$
where $ \bm Y_\pm$ denote the boundary values of $ \bm Y$ on $\Delta$ from the left/right sides, respectively. Additionally, the first column of $\bm Y$ is bounded at each finite point of $\C$, while the local behavior of the second column at the end points of $\Delta$ essentially matches that of $\widehat w$.
It follows that $\det(\bm Y)$ is an entire function. Moreover, since $H$ is a polynomial of degree $\le \sigma$, there exist a constant $c\in \C\setminus\{0\} $ and $s\in \N \cup \{0\}$, $s\le \sigma$, such that
$$
\bm Y(z)= \begin{pmatrix} 1 & 0 \\ 0 & c \end{pmatrix}\left(\bm I + \mathcal O\left(\frac{1}{z}\right) \right)\, \diag\left(z^{N}, z^{ s} \right), \quad z \to \infty.
$$
This shows that $\det(\bm Y)= \mathfrak D_w[P] $ is a polynomial of degree $N+ s \le N +\sigma $, which concludes the proof.
\end{proof}
\begin{remark} \label{rem1}
A relation of the form \eqref{identityUhat} was used in \cite{Magnus} as a definition of the semi-classical character of the weight, and in fact, it characterizes \eqref{ratW}.
\end{remark}
\begin{definition} \label{defCompanionP}
Let $P\neq 0$ be a polynomial. Then we call the polynomial
\begin{equation} \label{defCompanion}
S := \mathfrak D_w[P],
\end{equation}
defined by \eqref{defTransform2}, the \textit{electrostatic partner} of $P$ induced by the weight $w$.
\end{definition}
A priori, we cannot rule out the case that $S\equiv 0$; obviously, we will be interested only in the situations where $S$ is non-trivial. Additionally, as we will see below, the value of the leading coefficient of $S$ is irrelevant for what follows, which means that we can always use the most convenient normalization of $S$.
Some properties of the electrostatic partner $S$ and of the function of the second kind $q$ of $P$ are established in the Appendix \ref{appendixA}.
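To make the definition concrete, the following hedged numerical sketch (ours, not part of the construction; the weight parameters, the test polynomial and the evaluation points are arbitrary illustrative choices) computes $\mathfrak D_w[P]$ directly from \eqref{defTransform2} by Gauss--Jacobi quadrature for a Jacobi-type weight ($\sigma=0$) and an arbitrary cubic $P$, and confirms that the result behaves as a polynomial of degree at most $N+\sigma=3$.
\begin{verbatim}
# Numerical illustration: D_w[P] is a polynomial of degree <= N + sigma.
# Weight w(x) = (1-x)^a (1+x)^b on [-1,1]  (A = x^2-1, B = (a+b)x + a-b, sigma = 0);
# P an arbitrary cubic.  All concrete choices are made only for this sketch.
import numpy as np
from scipy.special import roots_jacobi

a, b = 0.5, 0.25
t, wq = roots_jacobi(120, a, b)                # Gauss-Jacobi quadrature for this weight

P  = np.poly1d([1.0, 0.0, -2.0, 1.0])          # P(x) = x^3 - 2x + 1, N = 3
Pd = P.deriv()
A  = lambda z: z**2 - 1.0
B  = lambda z: (a + b)*z + (a - b)

def C(z):   return np.sum(wq * P(t) / (t - z))        # C_w[P](z)
def Cp(z):  return np.sum(wq * P(t) / (t - z)**2)     # (C_w[P])'(z)
def S(z):   return P(z)*(A(z)*Cp(z) - B(z)*C(z)) - A(z)*Pd(z)*C(z)   # D_w[P](z)

zs = np.array([1.5, 2.0, 2.5, 3.0, 3.5, 4.0])  # points off the support
vals = np.array([S(z) for z in zs])
fit = np.polyfit(zs[:4], vals[:4], 3)          # cubic through the first four values
print(np.polyval(fit, zs[4:]) - vals[4:])      # ~ 0: the remaining values are reproduced
\end{verbatim}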
\subsection{The differential equation and electrostatic model} \label{sec:theODE}
\
\begin{thm}\label{CorolarioEDO}
Let $P$ be a polynomial, $q$ its function of the second kind defined in \eqref{defqN}, and $S$ the electrostatic partner defined in \eqref{defCompanion}. Then $P$ and $q$ are two solutions of the same second order linear differential equation with polynomial coefficients
\begin{equation} \label{odegeneral}
AS y'' + (A'S -AS' + BS) y' + C y = 0 .
\end{equation}
If $\deg(A)\leq 1$ then $C= \mathfrak D_{v}[P']$, with $v=Aw$.
\end{thm}
\begin{proof}
Away from the zeros of $A$, the formal identity
\begin{equation} \label{2.1}
A (z) v(z) \mathfrak{Wrons}[y, P, q] (z)= A^2(z) w(z) \det \begin{pmatrix}
y &P &q \\
y'&P '&q '\\
y'' &P ''&q ''
\end{pmatrix}(z)=0
\end{equation}
is clearly satisfied by $y=P$ and $y=q $, and thus, by any linear combination of these two functions. Expanding the determinant along the first column yields the following second order differential equation with respect to $y$:
\begin{equation}
f(z) = f_2(z) y''(z) + f_1(z) y'(z) + f_0(z) y (z) = 0 ,
\label{ode2.2}
\end{equation}
where (see \eqref{defTransform2}),
\begin{align}
f_2 &=
A v \det \begin{pmatrix}
P & q \\
P' & q '
\end{pmatrix} ,
\label{2.2}
\\
f_1 & = - A v \det \begin{pmatrix}
P & q \\
P '' & q ''
\end{pmatrix},
\label{2.3}
\\
f_0 & =
A v \det \begin{pmatrix}
P' & q' \\
P'' & q''
\end{pmatrix}(z)= \det
\begin{pmatrix}
P' & v q' \\
A P'' & A v q''
\end{pmatrix}.
\label{2.4}
\end{align}
By \eqref{defTransform2}, $f_2=AS$, and thus it is a polynomial. Furthermore, differentiating $f_2$ and using \eqref{defW}, \eqref{mainIdentitySN}, it is straightforward to deduce that
$$
f_1 =A'S -AS' + BS
$$
is also a polynomial.
By \eqref{Wq} and \eqref{identity1bis},
$$
v q' = A \left(\mathfrak C_w[P] \right)' - B \mathfrak C_w[P] = \mathfrak C_w[AP'] + D = \mathfrak C_{v}[ P ' ] + D .
$$
Differentiating this identity and using \eqref{ratW} we obtain that
$$
A v q'' = A \left( \mathfrak C_{v}[ P ' ] \right)' - (A'+B) \mathfrak C_{v}[ P ' ] + \left[ AD'-A'D -BD \right].
$$
Thus,
\begin{align*}
f_0 & = \det
\begin{pmatrix}
P' & v q' \\
A P'' & A v q''
\end{pmatrix} \\
& = \det
\begin{pmatrix}
P' & \mathfrak C_{v}[ P ' ] \\
A P'' & A \left( \mathfrak C_{v}[ P ' ] \right)' - (A'+B) \mathfrak C_{v}[ P ' ]
\end{pmatrix} + \det
\begin{pmatrix}
P' & D \\
A P'' & AD'-A'D -BD
\end{pmatrix} .
\end{align*}
Observe that the weight $v = Aw$ is also semiclassical, with
$$
\frac{v'(z)}{v(z)}= \frac{B_1(z)}{ A(z)}, \quad B_1(z)= A'(z)+B(z), \quad z \in \Delta,
$$
so that by \eqref{defTransform2},
$$
\det \begin{pmatrix}
P' & \mathfrak C_{v}[ P ' ] \\
A P'' & A \left( \mathfrak C_{v}[ P ' ] \right)' - (A'+B) \mathfrak C_{v}[ P ' ]
\end{pmatrix} = \mathfrak D_{v}[P'],
$$
and we get that
\begin{equation} \label{expressionforC}
f_0 = \mathfrak D_{v}[P'] + \det
\begin{pmatrix}
P' & D \\
A P'' & AD'-A'D -BD
\end{pmatrix} .
\end{equation}
The first term on the right-hand side is the electrostatic partner of $P'$ induced by the semiclassical weight $v$, while the determinant is clearly a polynomial. Moreover, if $\deg(A)\le 1$, it follows from \eqref{identity1bis1} that $D\equiv 0$, which concludes the proof.
\end{proof}
\begin{remark}\label{remark:operator}
Incidentally, we have established the following differential identity:
\begin{equation} \label{diffOp1}
\mathcal L[y]:= (Av)\, \mathfrak{Wrons}[y, P, q] = AS y'' + (A'S -AS' + BS) y' + C y ,
\end{equation}
for some polynomial $C$. We will use this later, in the proof of Theorem~\ref{thmODER}.
\end{remark}
Assume $S\not \equiv 0$. Using the notions of Section \ref{sec:vortices}, and in particular the characterization \eqref{eq:condEqBis}, we see that \eqref{odegeneral} yields
that the zeros of $P$ are in electrostatic equilibrium in the external field $\varphi(z)=\Re \Phi(z)$, if
$$
\Phi'(z) := -\frac{1}{2} \, \frac{A'S -AS' + BS}{AS}(z) = - \frac{1}{2} \left(\frac{A'(z)}{A(z)}+\frac{B(z)}{A(z)}-\frac{S'(z)}{S(z)}\right).
$$
Taking into account the definition \eqref{defW}, we conclude:
\begin{prop}\label{cor:criticalGeneral}
Assume that the polynomial $P$ of degree $N$ does not vanish at the zeros of $AS$, where $S= \mathfrak D_w[P]$ is its electrostatic partner. Then the discrete zero-counting measure $\nu(P)$ of $P$ is $\varphi$-critical for the external field
\begin{equation} \label{fieldGeneral}
\varphi(z)=\frac{1}{2}\log \left| \frac{S}{v } \right|(z) .
\end{equation}
\end{prop}
We can write alternatively that
$$
\varphi(z)= \frac{1}{2} \left( U^{\nu(A)}(z) - U^{\nu(S)}(z)+ \log \left| \frac{1}{w } \right|(z) \right).
$$
In other words, the zeros of $P$ (which under assumptions of Proposition~\ref{cor:criticalGeneral} are necessarily simple) are in equilibrium in the external field $\varphi$ induced by the orthogonality weight $w$, with an additional contribution from point charges of size $1/2$: a \textit{repulsion} from positive charges at $\mathbb A$ and an \textit{attraction} from negative charges at the zeros of the electrostatic partner $S$ (whose location is a priori unknown). The presence of attracting ``ghost charges'' was observed already by M.H. Ismail \cite{Ismail2000, Ismail2000c} in his electrostatic interpretation for zeros of orthogonal polynomials with respect to generalized Jacobi weights.
\section{Quasi-orthogonality} \label{sec:quasi}
We revisit the facts established in Section~\ref{sec:quasiorth} under an additional assumption on our polynomial $P$.
A (monic) polynomial $P_N$ of degree $N$ is called \textit{quasi-orthogonal}\footnote{A more restrictive notion of quasi-orthogonality was introduced by Chihara \cite{MR86898}, where he assumed a condition equivalent to $n=N-1$; see also \cite{MR3926161}.} with respect to the weight $w$ on $\Delta$ if it satisfies the following (in general, non-Hermitian) orthogonality relations: for $n\in \N$, $n\le N$,
\begin{equation} \label{orthog1}
\begin{split}
&\int_\Delta x^j P_N(x) w(x)dx=0\,,\qquad j=0,1,\dots, n-1,\\
m_n:=&\int_\Delta x^n P_N(x) w(x)dx\neq 0.
\end{split}
\end{equation}
In particular, if $n=N$, the polynomial $P_N$ is the $N$-th monic \textit{orthogonal polynomial}. Clearly, the last condition in \eqref{orthog1}, that is, $m_n\neq 0$, is a constraint on the weight $w$ and the orthogonality contour $\Delta$.
Using the notation introduced in \eqref{defCauchy}--\eqref{defqN}, we denote
\begin{equation} \label{defQuasi}
Q_N(z):= \int_\Delta\frac{P_N(t)-P_N(z)}{t-z} w(t)dt
\end{equation}
and
\begin{equation} \label{defQ}
\mathfrak C_w[P_N](z)=\int_\Delta\frac{P_N(t)w(t)}{t-z}dt, \quad z \in \C \setminus \Delta.
\end{equation}
It is useful to observe that \eqref{orthog1} yields that for any polynomial $T$ of degree $\leq n$,
\begin{equation} \label{eq:CauchTrEq}
T \mathfrak C_{w}[P_N] = \mathfrak C_{w}[T P_N], \quad \text{that is,} \quad T(z)\, \int_{\Delta} \frac{P_N(t)w(t)}{t-z}dt\,= \int_{\Delta}\frac{T(t) P_N(t) w(t)}{t-z}dt,
\end{equation}
which by \eqref{identityQ} in particular shows that, as $z\to\infty$,
\begin{equation} \label{asymptPN}
\mathfrak C_w[P_N](z) = P_N(z) \widehat w(z) + Q_N(z ) =-\frac{m_n}{z^{n+1}}+\dots;
\end{equation}
if $\Delta$ is unbounded, we understand the equality above in the asymptotic sense. Notice also that if $P_N$ satisfies a full set of orthogonality conditions ($N=n$), then \eqref{asymptPN} shows that the rational function
$$
\pi_N:=-\frac{Q_N}{P_N}
$$
is the $N$-th diagonal Pad\'e approximant to $\widehat w$ at infinity, a fact that is well known.
We denote by $q_N$, $U_N$ and $H_N$ the functions $q$, $U$ and $H$ introduced in \eqref{defqN}, \eqref{defU} and \eqref{DefH} for $P=P_N$, and let
\begin{equation} \label{main2}
S_N := \mathfrak D_w[P_N] = \det
\begin{pmatrix} P_N & \mathfrak C_w[P_N] \\A P_N' & A \left( \mathfrak C_w[P_N]\right)' -B \mathfrak C_w[P_N] \end{pmatrix}
\end{equation}
be the electrostatic partner to $P_N$, induced by the weight $w$.
By Theorem~\ref{lem1aux},
\begin{equation} \label{mainIdentitySN}
S_N=v\, \mathfrak{Wrons}[P_N, q_N]
= \det \begin{pmatrix}
P_N & \mathfrak C_w[P_N] \\
U_N & \mathfrak C_w[U_N] + H_N
\end{pmatrix} ,
\end{equation}
and $S_N$ is a polynomial of degree $\le N+\sigma$. In fact, due to quasi-orthogonality relations we can say more: the upper bound on the degree of $S_N$ is lessened by the number of orthogonality conditions:
\begin{cor}\label{lemMain}
If $P_N$ satisfies \eqref{orthog1} then for the electrostatic partner $S_N$,
\begin{equation} \label{leadingC}
S_N(z)= m_n \, z^{N-n} \left[ (N+n+1) \, z^{ \deg(A)-2} \left(1 + o(1) \right)+ \kappa \, z^{ \deg(B)-1} \left(1+ o(1) \right)\right], \quad z\to \infty,
\end{equation}
where $\kappa$ is the leading coefficient of the polynomial $B$. In particular, under the assumptions \eqref{orthog1}, $\deg(S_N)\le N-n+\sigma$, and the inequality can be strict only if $\deg(A)=\deg(B)+1$ and $\kappa =-(N+n+1)$.
Moreover,
if $n\ge \sigma +1$ then $H_N\equiv 0$ and $U_N $ (of degree $\le N-1$) is quasi-orthogonal:
\begin{equation} \label{quasiorthSnBis}
\int_\Delta x^j U_{N}(x)w(x)dx=0, \quad j=0,\dots, n-\sigma-2.
\end{equation}
\end{cor}
\begin{proof}
Using \eqref{asymptPN} in the definition \eqref{main2} we obtain \eqref{leadingC}. Furthermore, if $n\ge \sigma+1$, an immediate consequence of the quasi-orthogonality conditions \eqref{orthog1} for $P=P_N$ is that the polynomial $H$ in \eqref{DefH} vanishes identically. Also from \eqref{asymptPN}, \eqref{identityUhat} and since $\deg(B+E)\leq \sigma +1$, we see that
\begin{equation} \label{asymptU}
\mathfrak C_w[U_N] (z) +H(z) = A(z) (\mathfrak C_w[P_N] )'(z) - (B+E)(z) \mathfrak C_w[P_N] (z) = \mathcal O\left( \frac{1}{z^{n-\sigma} }\right) , \quad z \to \infty,
\end{equation}
which implies the quasi-orthogonality conditions \eqref{quasiorthSnBis}.
\end{proof}
\begin{remark} \label{rem:OP}
It is interesting to examine the conclusions of Corollary \ref{lemMain} in the particular case of polynomials $P_N$ orthogonal on $\Delta$ with respect to the weight $w$ (so that $n=N$). In this case, the degree of $S_N$ is uniformly bounded: $\deg S_N\leq \sigma$. Furthermore, $S_N\not \equiv 0$ if either
$$
\deg(A)\neq \deg(B)+1 \quad \text{or} \quad \kappa \neq -(N+n+1),
$$
see \eqref{leadingC}.
If additionally $\sigma=0$ (i. e. when $P_N$ is basically a classical orthogonal polynomial), \eqref{quasiorthSnBis} asserts that $U_{N}$ is the $(N-1)$-th orthogonal polynomial, and up to normalization, $U_{N}=P_{N-1}$, in which case \eqref{mainIdentitySN} boils down to
$$
S_N= v\, \det \begin{pmatrix}
P_N & q_N \\
P'_{N } & q_N'
\end{pmatrix} = \const \times \det \begin{pmatrix}
P_N & \mathfrak C_w[P_N] \\
P_{N-1} & \mathfrak C_w[P_{N-1}]
\end{pmatrix},
$$
that is, polynomial $S_N$ is the product of a factor related with the weight and the Wronskian of $P_N$ and $q_N$, and also is the Casorati determinant associated with $P_N$; such an identity appears for instance in \cite[formula (3.6.13)]{Ismail05}. Thus, \eqref{mainIdentitySN} is a generalization of such kind of relations to quasi-orthogonal polynomials with respect to semi-classical weights, a fact that has an independent interest.
\end{remark}
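A hedged numerical illustration of this remark (ours; the degree, the parameters and the evaluation points are arbitrary): computing $S_N=\mathfrak D_w[P_N]$ from \eqref{main2} by Gauss--Jacobi quadrature for a classical Jacobi weight indeed produces a constant.
\begin{verbatim}
# For a classical Jacobi weight (sigma = 0) and the orthogonal polynomial P_N,
# the electrostatic partner S_N should be constant; a quadrature-based check.
import numpy as np
from scipy.special import roots_jacobi, jacobi

a, b, N = 0.5, -0.25, 6
t, wq = roots_jacobi(150, a, b)              # weight (1-x)^a (1+x)^b on [-1,1]
P, Pd = jacobi(N, a, b), jacobi(N, a, b).deriv()

def S(z):                                     # D_w[P_N](z) evaluated by quadrature
    C  = np.sum(wq * P(t) / (t - z))          # C_w[P_N](z)
    Cp = np.sum(wq * P(t) / (t - z)**2)       # its derivative
    A, B = z**2 - 1.0, (a + b)*z + (a - b)
    return P(z)*(A*Cp - B*C) - A*Pd(z)*C

print([S(z) for z in (1.5, 2.0, 5.0)])        # three coinciding values
\end{verbatim}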
With the additional assumption of quasi-orthogonality of $P_N$, Theorem \ref{CorolarioEDO} and Proposition \ref{cor:criticalGeneral} are still valid, with $S$ replaced by $S_N$. For instance, $P_N$ satisfying \eqref{orthog1} and its function of the second kind $q_N$ are solutions of the linear differential equation with polynomial coefficients
\begin{equation} \label{ode1}
AS_N y''+(A'S_N -AS'_N+BS_N )y'+C_N y=0,
\end{equation}
where $S_N$ is the electrostatic partner defined in \eqref{main2}, and $C_N$ is a polynomial. Its degree can be easily estimated by using in \eqref{ode1} that $\deg P_N=N$ and $\deg(S_N) \le N-n+\sigma$, which yields that
\begin{equation} \label{degSN}
\deg(C_N) \leq N-n+2\sigma.
\end{equation}
In some cases, we can be more precise. For instance, if $ \deg (A) \leq \deg(B) $ and $\deg(P_N)=N$ then using \eqref{leadingC} and \eqref{ode1} we conclude that
$$
\deg (S_N)=N-n+\deg(B)-1, \quad \deg(C_N) = N-n+2\deg(B)-2.
$$
Equation \eqref{ode1} is known in the literature on semi-classical orthogonal polynomials, see for instance \cite[Eq.~(20)]{Shohat} or \cite[Eq.~(18)]{Magnus}. The equation in \cite{Magnus} was obtained for orthogonal polynomials only, when the polynomial $S_N$ (denoted by $\Theta_n$ there) has degree $\le \sigma$. In this sense, \eqref{ode1} is an extension of these results. Recall that Magnus uses in \cite{Magnus} an alternative definition for the semi-classical weights, in terms of an identity for the Cauchy transform \eqref{defCauchy}, which is equivalent to \eqref{ratW}, see Remark~\ref{rem1}.
\begin{example}\label{ex:Jacobi1}
Let us return to Jacobi polynomials
\begin{equation} \label{JacPolExplicit}
P_N(x)=P_N^{(\alpha, \beta)}(x)=\frac{1}{2^{N}\, N ! }(x-1)^{-\alpha}(x+1)^{-\beta}\left[(x-1)^{\alpha+N}(x+1)^{\beta+N}\right]^{(N)},
\end{equation}
corresponding to the weight considered in Example~\ref{exampleJacobi1}, for which
$$
A(x)=x^2-1, \quad B(x)=(\alpha+\beta) x + \alpha-\beta,
$$
and $\sigma=0$.
It is known that $P_{N}^{\left( \alpha ,\beta \right) }$ may have a multiple zero at
$x=1$ if $\alpha \in \{-1,\ldots,-N\}$, at $x=-1$ if $\beta \in
\{ -1,\ldots,-N\} $ or, even, at $x=\infty $ (which means a degree
reduction) if $N+\alpha +\beta \in \{-1, \ldots,-N\}$, see e.g.~\cite{ETNA05, Szego75}. Otherwise, all zeros are simple.
If $ \alpha, \beta >-1$, we can take $\Delta=[-1,1]$.
By Corollary~\ref{lemMain}, the electrostatic partner $S_N$ is a constant ($\not \equiv 0$ if $\alpha + \beta \neq -2N-1$). Since
$$
v(z)=(z-1)^{\alpha+1} (z+1)^{\beta+1 },
$$
the zeros of $P_N$ are in equilibrium in the external field
\begin{equation} \label{extFieldJacobi}
\varphi(z)=\frac{1}{2}\log \left| \frac{1}{(z-1)^{\alpha+1} (z+1)^{\beta+1 } } \right|=\frac{\alpha+1}{2} U^{\delta_1}(z)+\frac{\beta+1}{2} U^{\delta_{-1}}(z).
\end{equation}
In other cases, considered non-standard, when either $\alpha \leq -1$ or $\beta\le -1$, Jacobi polynomials satisfy non-hermitian quasi-orthogonality conditions, see \cite[Theorem 4.1]{ETNA05}.
For instance, if $\alpha, \beta, \alpha + \beta \notin \mathbb Z$, and $ -N<\alpha < -1$, then all zeros of $P_N=P_{N}^{\left( \alpha ,\beta \right) }$ are simple, and $P_N(-1)\neq 0$, see e.g.~\cite[Ch.~IV]{Szego75}. The polynomial $P_N$ also satisfies a quasi-orthogonality relation \eqref{orthog1} with $n=N-[-\alpha]$, but with a modified weight, $w(z)=(z-1)^{\alpha+[-\alpha]}(z+1)^\beta$, so that now
$$
v(z)=(z-1)^{\alpha+[-\alpha]+1} (z+1)^{\beta+1 }.
$$
Moreover, as $\Delta$ we can take an arbitrary curve oriented clockwise, connecting $1 - i 0$ with $1 + i 0$ and lying entirely in $\C \setminus [-1, +\infty)$, except for its
endpoints; if $\beta>-1$, then $\Delta$ can be deformed into $[-1,1]$.
By Corollary~\ref{lemMain}, the electrostatic partner $S_N$ is of degree exactly $ [-\alpha]$. We will show next that, up to a normalizing constant,
\begin{equation} \label{expressionS_NJac}
S_N(x)=(x-1)^{[-\alpha]}.
\end{equation}
However, notice that by Proposition~\ref{cor:criticalGeneral}, the discrete zero-counting measure $\nu(P_N)$ of $P_N=P_{N}^{\left( \alpha ,\beta \right) }$ is $\varphi$-critical in the external field
$$
\varphi(z)=\frac{1}{2}\log \left| \frac{S_N}{v } \right|(z) ,
$$
which coincides with the one given in \eqref{extFieldJacobi}. In other words, even with the non-standard values of the parameters $\alpha, \beta$, we still get equilibrium (probably, on a different set of the plane) in the field \eqref{extFieldJacobi}.
Returning to the expression for $S_N$ in the case under consideration, observe that for all values of the parameters, $P_N$ is a solution of the differential equation
$$
Ay''+(B+A')y'-\lambda_N y=0,
$$
with $A$ and $B$ given in \eqref{ABJacobi}. At the same time, from our discussion it follows that $P_N$ is a solution of the differential equation \eqref{ode1}, namely
$$
AS_N y''+(A'S_N -AS'_N+B_1S_N )y'+C_N y=0, \quad B_1(x)=(\alpha+[-\alpha]+\beta) x + \alpha+[-\alpha]-\beta.
$$
Combining these two equations we obtain the identity
$$
\left((B-B_1)S_N + AS_N'\right)\,P_N' = (\lambda_N S_N + C_N)\,P_N,
$$
which using the explicit expressions for $A$, $B$ and $B_1$, can be rewritten as
\begin{equation}\label{id1S1}
(x+1) \left((x-1) S'_N(x) - [-\alpha] S_N(x)\right) P'_N(x) = (\lambda_N S_N(x) + C_N(x)) P_N(x).
\end{equation}
Recall that $\deg S_N=[-\alpha] \le N$; a simple argument shows that
$$
\deg \left((x-1) S'_N(x) - [-\alpha] S_N(x)\right)<N.
$$
Indeed, the assertion is obvious for $\deg S_N< N$; if $\deg S_N=[-\alpha] = N$ then the leading coefficients of $(x-1) S'_N(x)$ and $[-\alpha] S_N(x)$ match, so the assertion follows also in this case.
Since in the situation we are analyzing $P_N(x)$ and $(x+1)P'_N(x)$ are relatively prime (up to a multiplicative constant), by \eqref{id1S1} we conclude that $(x-1) S'_N(x) - [-\alpha] S_N(x)$ must vanish at all $N$ distinct zeros of $P_N$, which is possible only if $(x-1) S'_N(x) - [-\alpha] S_N(x)\equiv 0$, which implies \eqref{expressionS_NJac}. Incidentally, we also obtain that in this case, $C_N=-\lambda_N S_N $.
We will return to the example of Jacobi polynomials with non-standard values of the parameters in the Section~\ref{sec:complex}, when we will address multiple orthogonality.
\end{example}
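As a hedged numerical check of the equilibrium discussed in this example (ours; the parameters are arbitrary): for $\alpha,\beta>-1$ the field \eqref{extFieldJacobi} corresponds to $\Phi'(z)=-\frac{1}{2}\bigl(\frac{\alpha+1}{z-1}+\frac{\beta+1}{z+1}\bigr)$, so by \eqref{eq:condEq} the zeros $x_1,\dots,x_N$ of $P_N^{(\alpha,\beta)}$ should satisfy $\sum_{i\neq j}(x_j-x_i)^{-1}+\frac{\alpha+1}{2(x_j-1)}+\frac{\beta+1}{2(x_j+1)}=0$ for every $j$.
\begin{verbatim}
# Equilibrium of the zeros of the Jacobi polynomial in the field (extFieldJacobi).
import numpy as np
from scipy.special import roots_jacobi

a, b, N = 0.3, 1.7, 10
x, _ = roots_jacobi(N, a, b)                 # zeros of P_N^{(a,b)}
res = [sum(1.0/(x[j] - x[i]) for i in range(N) if i != j)
       + (a + 1)/(2*(x[j] - 1)) + (b + 1)/(2*(x[j] + 1)) for j in range(N)]
print(np.abs(res).max())                     # close to machine precision
\end{verbatim}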
\begin{example}\label{ex:JacobiQuasi}
It is instructive to compare our construction with the results of \cite{MR3926161} in the case of quasi-orthogonal Jacobi polynomials. These are polynomials
$$
P_N(x)=\widehat P_N(x) + c \, \widehat P_{N-1}(x), \quad c \in \R,
$$
where $\widehat P_N$ is the $N$-th orthonormal Jacobi polynomial
$$
\widehat P_N(x) =\sqrt{\frac{(\alpha+\beta+2 N+1) \Gamma(\alpha+\beta+N+1) N ! }{2^{\alpha+\beta+1} \Gamma(\alpha+N+1) \Gamma(\beta+N+1)}}\, P_N^{(\alpha, \beta)}(x)
$$
and $P_N^{(\alpha, \beta)}$ defined in \eqref{JacPolExplicit}. Obviously, for $\alpha, \beta>-1$, $P_N$ satisfies \eqref{orthog1} with $n=N- 1$ and the weight $w(x)=(x-1)^{\alpha } (x+1)^{\beta }$. According to Corollary~\ref{lemMain}, $S_N(x)=x- \sigma_N$ (up to a multiplicative constant), and by Proposition~\ref{cor:criticalGeneral}, the discrete zero-counting measure $\nu(P_N)$ of $P_N$ is $\varphi$-critical in the external field
\begin{align*}
\varphi(x) & =\frac{\alpha+1}{2}\, U^{\delta_1}(x)+\frac{\beta+1}{2}\, U^{\delta_{-1}}(x)- \frac{ 1}{2} U^{\delta_{\sigma_N}}(x) \\
& =\frac{\alpha+1}{2}\, \log \frac{1}{|x-1|} +\frac{\beta+1}{2}\, \log \frac{1}{|x+1|} - \frac{ 1}{2} \log \frac{1}{|x-\sigma_N|} .
\end{align*}
The explicit expressions allow us to calculate $\sigma_N$ from definition \eqref{mainIdentitySN}:
$$
\sigma_{N}=-\frac{(\alpha+\beta+1+2 N)+c ^{2}(\alpha+\beta-1+2 N)}{(2 N+\alpha+\beta) c } a_{N} + \frac{\beta^{2}-\alpha^{2}}{(2 N+\alpha+\beta)^{2}},
$$
where
$$
a_{N}=\frac{2}{\alpha+\beta+2 N} \sqrt{\frac{N (\alpha+N)(\beta+N)(\alpha+\beta+N)}{(\alpha+\beta-1+2 N)(\alpha+\beta+1+2 N)}}\, .
$$
This expression coincides with the one obtained in \cite[Section 5.2]{MR3926161}.
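As a small numerical sanity check (a sketch, under illustrative parameter choices), the quasi-orthogonality relation \eqref{orthog1} with $n=N-1$ used above can be verified with a Gauss--Jacobi quadrature rule; the normalization of the polynomials is irrelevant for the vanishing of the moments, and the standard weight $(1-x)^{\alpha}(1+x)^{\beta}$ differs from $(x-1)^{\alpha}(x+1)^{\beta}$ only by a constant factor on $(-1,1)$.
\begin{verbatim}
# Numerical check (sketch): P_N = hat P_N + c hat P_{N-1} has vanishing moments
# against x^j for j <= N-2 and a nonzero moment for j = N-1.
# alpha, beta, c, N below are illustrative choices.
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi

alpha, beta, c, N = 0.3, 0.7, 1.25, 6
nodes, weights = roots_jacobi(N + 2, alpha, beta)     # Gauss-Jacobi rule
P = eval_jacobi(N, alpha, beta, nodes) + c*eval_jacobi(N - 1, alpha, beta, nodes)

moments = [weights @ (nodes**j * P) for j in range(N)]
print(max(abs(m) for m in moments[:N - 1]))   # ~ 1e-16: moments j <= N-2 vanish
print(moments[N - 1])                         # nonzero: quasi-orthogonality of order N-1
\end{verbatim}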
\end{example}
\section{Multiple orthogonality of Type II}\label{sec:multiple_orth}
\subsection{The general case} \label{sec:generalHermitePade}
\
We are interested in type II \textit{multiple} or \textit{Hermite-Pad\'e} orthogonal polynomials with respect to semi-classical weights. For the sake of simplicity, we restrict ourselves to two positive weights $w_1$, $w_2$, supported on $\Delta_1\subset \R$ and $\Delta_2\subset \R$, respectively. Given an ordered pair $\bm n= (n_1,n_2)\in \Z_{\ge 0}^2$, where $\Z_{\ge 0}=\N\cup \{0\}$, we look for a (monic) polynomial $P_{\bm n}$ of total degree at most $N=|\bm n|:=n_1+n_2$, such that
\begin{equation} \label{defHP}
\int_{\Delta_i}x^jP_{\bm n}(x)w_i(x)dx \begin{cases}
=0,&\quad j\leq n_i-1,\\
\neq 0,&\quad j=n_i,
\end{cases}
\qquad i=1, 2.
\end{equation}
In this way, the definition of type II Hermite-Pad\'e orthogonal polynomials \eqref{defHP} boils down to two simultaneous quasi-orthogonality conditions.
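In practice, for a monic $P_{\bm n}$ of degree $N$ the conditions \eqref{defHP} amount to a square linear system for its $N$ lower-order coefficients. The following minimal sketch illustrates this; the input (two constant weights on disjoint intervals, a toy version of the Angelesco configuration of Section~\ref{sec:angelesco}) is an illustrative choice, not an assumption made in the text.
\begin{verbatim}
# Minimal sketch: the conditions defining the type II Hermite-Pade polynomial
# form a square linear system for the lower coefficients of the monic P_n.
# Toy input: constant weights on two disjoint intervals (illustrative choice).
import numpy as np

def moment(a, b, m):
    # m-th moment of the constant weight 1 on [a, b]
    return (b**(m + 1) - a**(m + 1)) / (m + 1)

def hermite_pade(n1, n2, interval1, interval2):
    N = n1 + n2
    rows, rhs = [], []
    for (a, b), ni in ((interval1, n1), (interval2, n2)):
        for j in range(ni):                      # int x^j P_n(x) w_i(x) dx = 0
            rows.append([moment(a, b, j + k) for k in range(N)])
            rhs.append(-moment(a, b, j + N))
    c = np.linalg.solve(np.array(rows), np.array(rhs))
    return np.concatenate(([1.0], c[::-1]))      # coefficients, highest degree first

print(hermite_pade(3, 2, (-1.0, -0.2), (0.2, 1.0)))  # monic P_(3,2) of degree 5
\end{verbatim}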
Additionally, we assume that both weights are semiclassical and belong to the framework discussed in Section~\ref{sec:quasiorth}. More precisely, we assume that
for each $i=1, 2$, $\Delta_i\subset \R$ is a finite union of non-overlapping intervals joining real zeros of a real polynomial $A_i$ and possibly $\infty$, and that each weight $w_i$ is defined on $\Delta_i$ in such a way that on each component $\Gamma_j$ it coincides, up to a non-zero multiplicative constant, with a boundary value $(w_i)_+$ of the function defined below:
\begin{equation} \label{defWHP}
w_i(z):=\exp \left( \int ^z \frac{B_i(t)}{ A_i(t)}\, dt \right) , \quad \quad i=1, 2,
\end{equation}
for some real polynomials $B_1, B_2$. As before, we use the notation $\mathbb A_i$ for the set of zeros of $A_i$ on $\C$, $i=1,2$.
We also assume that the $w_i$ have finite moments:
$$
\int_{\Delta_i} |x|^m |w_i(x)| dx <+\infty, \quad m=0, 1, 2, \dots, \quad i=1, 2.
$$
As a consequence, for $m=0, 1, 2, \dots$,
\begin{equation}\label{endpoints2}
z^m v_i(z)=0 \text{ at endpoints of every subinterval of $\Delta_i$, $i=1, 2$,}
\end{equation}
with
\begin{equation} \label{defWHPv}
v_i(z):= A_i(z) w_i (z), \quad i=1, 2.
\end{equation}
We also have that
\begin{equation} \label{ratWHP}
\frac{w_i'(z)}{w_i (z)}=\frac{B_i(z)}{ A_i(z)}, \quad i=1, 2,
\end{equation}
with the identity taking place away from the singularities of $w_i$.
As in \eqref{def:class},
\begin{equation} \label{def:classHP}
\sigma_i := \max \{\deg(A_i)-2, \deg(B_i)-1 \},\quad i=1, 2.
\end{equation}
\begin{remark} \label{remark:A1=A2}
In the special case when $A_1=A_2=A$ and $B_1=B_2=B$, we will always assume, without loss of generality, that both $w_i$ are normalized in such a way that for every selection of the branch,
$$
w_1(z)=w_2(z)=w(z)=\exp \left( \int ^z \frac{B(t)}{ A(t)}\, dt \right) .
$$
This situation was considered for instance in \cite{Aptekarev:97} (and previously, in \cite{kaliaguine:1981} and \cite{kaliaguine:1996}). Several examples of the case when $\Delta_1=\Delta_2$, $A_1=A_2$, but $B_1\neq B_2$ for classical weights ($\sigma_i=0$) appear in \cite{AptBraVA}.
\end{remark}
For $P_{\bm n}$ we define the corresponding polynomials
\begin{equation} \label{defQHP}
Q_{{\bm n},i}(z):= \int_{\Delta_i}\frac{P_{\bm n}(t)-P_{\bm n}(z)}{t-z} w_i(t)dt, \quad i=1, 2,
\end{equation}
and functions of the second kind,
\begin{equation}\label{2ndkindfunt}
q_{\bm n,i}(z):=\frac{\mathfrak C_{w_i}[P_{\bm n}](z) }{w_i (z)}= \frac{1}{ w_i (z)}\int_{\Delta_i}\frac{P_{\bm n}(t)w_i(t)}{t-z}dt , \quad i=1, 2.
\end{equation}
By \eqref{eq:CauchTrEq}--\eqref{asymptPN}, for $i=1, 2$,
\begin{equation} \label{eq:asymptCauchy}
\mathfrak C_{w_i}[P_{\bm n}](z) = P_{\bm n} (z) \widehat w_i(z) + Q_{{\bm n},i}(z ) = \mathcal O(z^{-n_i-1}), \quad z\to \infty.
\end{equation}
This shows that the pair of rational functions
\begin{equation} \label{def_approx}
\left( \pi_{{\bm n},1}, \pi_{{\bm n},2}\right):=\left( -\frac{Q_{{\bm n},1}}{P_{\bm n}}, - \frac{Q_{{\bm n},2}}{P_{\bm n}}\right)
\end{equation}
are the \textit{simultaneous} or \textit{Hermite--Pad\'e approximants of type II} to the pair of functions $\left( \widehat w_1 , \widehat w_2 \right) $, and that the functions $\mathfrak C_{w_i}[P_{\bm n}]$ are the corresponding \textit{residues}, see e.g.~\cite{Aptekarev:97, MR4238535}.
We will impose an additional condition enforcing the independence of the two weights, namely we will assume that
\begin{equation} \label{mainconditionwronks}
\mathfrak{Wrons}[P_{\bm n}, q_{\bm n,1}, q_{\bm n,2}] \not\equiv 0.
\end{equation}
It is equivalent to assuming that $\mathfrak{Wrons}[P_{\bm n}, q_{\bm n,1}, q_{\bm n,2}]\neq 0$ at some point different from the zeros of $A_1A_2$. Moreover, we conjecture that condition \eqref{mainconditionwronks} is equivalent to $\bm n$ being a normal index.
By the analysis from Section \ref{sec:quasiorth}, $P_{\bm n}$ has now two electrostatic partners,
\begin{equation} \label{defS12}
S_{\bm n, i}(z) := \mathfrak D_{w_i}[P_{\bm n}]= v_i(z) f_{\bm n,i} (z), \quad i=1, 2,
\end{equation}
where
\begin{equation} \label{wronskF}
f_{\bm n,i}:= \mathfrak{Wrons}[P_{\bm n}, q_{\bm n, i}], \quad i=1, 2,
\end{equation}
and $\mathfrak{Wrons}[\cdot, \cdot]$ stands for the Wronskian as defined in \eqref{eq:Wronskian}. An equivalent formula that might shed some light on the behavior of these polynomials is
\begin{equation} \label{S_inew}
S_{\bm n, i}(z) = v_i(z) P_{\bm n}^2(z) \, \left(\frac{\widehat w_i(z)-\pi_{\bm n,i}(z)}{w_i(z)} \right)' = v_i(z) P_{\bm n}^2(z) \, \left(\frac{\mathfrak C_{w_i}[P_{\bm n}](z)}{w_i(z)P_{\bm n}(z)} \right)' , \quad i=1, 2,
\end{equation}
where $\pi_{\bm n,i}$ are defined in \eqref{def_approx}; it can be checked using \eqref{eq:asymptCauchy} by direct computation.
Theorem~\ref{CorolarioEDO} yields the following result:
\begin{thm}\label{TeoClaveHP}
Let $\bm n= (n_1,n_2)\in \Z_{\ge 0}^2$, and let $P_{\bm n}$ be a monic polynomial of degree $N=n_1+n_2$ (assuming it exists) that satisfies the multiple orthogonality conditions \eqref{defHP}.
Then $S_{\bm n, i}$ are polynomials of degree at most $N-n_i+\sigma_i $, $i=1, 2$, and there exist polynomials $C_{\bm n, i} $,
$$
\deg(C_{\bm n, i}) \leq N-n_i+2\sigma_i , \quad i=1, 2 ,
$$
such that $P_{\bm n}$ is a solution of the system of linear differential equations
\begin{equation} \label{odeHP}
\begin{split}
& A_i S_{\bm n, i}\, y''+(A'_i S_{\bm n, i}-A_i S_{\bm n, i}'+B_iS_{\bm n, i})\, y'+C_{\bm n, i} \, y=0, \quad i=1, 2.
\end{split}
\end{equation}
\end{thm}
\begin{remark}\label{rem:linearcomb}
In the case when $A_1=A_2$ and $B_1=B_2$, we can replace the coefficient $S_{\bm n, i}$ in the equations \eqref{odeHP} by any linear combination
$$
a \, S_{\bm n, 1} + b \, S_{\bm n, 2}, \quad a, b \in \R;
$$
see the results of the numerical experiments for Appell's polynomials at the end of Section~\ref{sec:AngelescoJacobi}, especially the plots in Figure~\ref{Fig2Appell}.
\end{remark}
An application of Proposition \ref{cor:criticalGeneral} yields:
\begin{cor}\label{cor:HPinterp}
Assume that the polynomial $P_{\bm n}$ has no common zeros with either $A_1 S_{\bm n,1}$ or $A_2 S_{\bm n,2}$.
Then the discrete zero-counting measure $\nu(P_{\bm n})$ of $P_{\bm n}$ is $\varphi_i$-critical, in the sense of Definition~\ref{defCriticalScalar}, for the external field
\begin{equation} \label{fieldHP}
\varphi_i(z)=\frac{1}{2}\log \left| \frac{S_{\bm n,i}}{v_i } \right|(z) ,
\end{equation}
for both $i=1, 2$.
\end{cor}
Thus, the zeros of the Hermite--Pad\'e polynomial $P_{\bm n}$ are in equilibrium (i.e., their counting measure is critical) in two different external fields, each one induced by the corresponding orthogonality weight $w_i$, with the addition of attracting charges placed at the zeros of the electrostatic partner $S_{\bm n,i}$. This ``redundancy'' allows us to provide also an electrostatic model for these additional charges forming the external fields $\varphi_i$. Formally, it will be a consequence of differential equations for the electrostatic partners $S_{\bm n, i}$ that we establish next, but first we need to introduce another auxiliary polynomial, this time generated by both weights simultaneously.
We define the following differential operators:
\begin{equation} \label{defDiffOperators}
\mathcal L_i[y]:= (A_i v_i ) \, \mathfrak{Wrons}[y, P_{\bm n}, q_{\bm n,i} ], \quad i=1, 2.
\end{equation}
Notice that we omit indicating the dependence of $\mathcal L_i$ on $\bm n$ for brevity of notation.
\begin{prop}\label{prop:WronskianS/w}
The function
\begin{equation} \label{defRpolyn}
R_{\bm n}:= A_2 v_2 \, \mathcal L_1[q_{\bm n,2}] = - A_1 v_1 \, \mathcal L_2[q_{\bm n,1}]
=(A_1 A_2 v_1 v_2) \, \mathfrak{Wrons}[P_{\bm n}, q_{\bm n,1}, q_{\bm n,2}]
\end{equation}
is a polynomial of degree $\leq 2\sigma_1+2\sigma_2+3$.
If $A_1=A_2=:A$, then $A$ is a factor of $R_{\bm n}$. If in addition $B_1=B_2$, then $A^2$ is a factor of $R_{\bm n}$, i.e.
\begin{equation} \label{defR*}
R_{\bm n}^*:=\frac{R_{\bm n}}{A^2}= v^2 \, \mathfrak{Wrons}[P_{\bm n}, q_{\bm n,1}, q_{\bm n,2}], \quad v:= A w,
\end{equation} is a polynomial.
\end{prop}
Notice that by our assumption \eqref{mainconditionwronks}, $R_{\bm n}\not \equiv 0$.
\begin{proof}
Let us define
\begin{equation}\label{IdR0}
R_{\bm n} :=\frac{A_1S_{\bm n, 1} C_{\bm n, 2}-A_2S_{\bm n, 2}C_{\bm n, 1}}{P_{\bm n}'}.
\end{equation}
Multiplying the equations in \eqref{odeHP}, evaluated at $y=P_{\bm n}$, by $A_2S_{\bm n, 2}/P_{\bm n}$ and $A_1S_{\bm n, 1}/P_{\bm n}$, respectively, and subtracting, we obtain
\begin{equation}\label{IdR1}
R_{\bm n}\, P_{\bm n} = -\mathfrak{Wrons} [A_1, A_2] S_{\bm n, 1}S_{\bm n, 2} +A_1A_2 \, \mathfrak{Wrons}[S_{\bm n, 1}, S_{\bm n, 2}] +(A_2B_1-A_1B_2)S_{\bm n, 1}S_{\bm n, 2} .
\end{equation}
Notice that $R_{\bm n}\, P_{\bm n}$ is a polynomial. An immediate consequence of the statement a) of Proposition \ref{prop:fromTHM2.1} is that $ S_{\bm n, 1} / P_{\bm n}'$ and $S_{\bm n, 2}/P_{\bm n}'$ are analytic at the zeros of $P_{\bm n}$, which together with the bounds on the degree of $C_{\bm n, i}$ from Theorem \ref{TeoClaveHP} yields that $R_{\bm n}$ is a polynomial of degree at most $2\sigma_1+2\sigma_2+3$.
On the other hand, a straightforward calculation using \eqref{ratWHP} and \eqref{wronskF} also shows that
$$
-\mathfrak{Wrons} [A_1, A_2] S_{\bm n, 1}S_{\bm n, 2} +A_1A_2 \, \mathfrak{Wrons}[S_{\bm n, 1}, S_{\bm n, 2}] +(A_2B_1-A_1B_2)S_{\bm n, 1}S_{\bm n, 2} = A_1 A_2 v_1 v_2 \, \mathfrak{Wrons}[f_{\bm n,1}, f_{\bm n, 2}],
$$
so that \eqref{IdR1} reduces to
\begin{equation} \label{wronksHP}
A_1 A_2 v_1 v_2 \, \mathfrak{Wrons}[f_{\bm n,1}, f_{\bm n, 2}] =P_{\bm n} R_{\bm n}.
\end{equation}
Let
$$
d_{\bm n}:=\mathfrak{Wrons}[f_{\bm n,1}, f_{\bm n, 2}]. $$
Since
$$
f_{\bm n,i} '= \mathfrak{Wrons}[P_{\bm n}, q_{\bm n, i}]' = \det \begin{pmatrix}
P_{\bm n} & q_{\bm n, i} \\
P_{\bm n}'' & q_{\bm n, i}''
\end{pmatrix},
$$
this gives us the following identity for $d_{\bm n}$:
\begin{equation} \label{identitiesW}
\begin{split}
d_{\bm n} =& \det \begin{pmatrix}
P_{\bm n} & q_{\bm n, 1} \\
P_{\bm n}' & q_{\bm n, 1}'
\end{pmatrix} \det \begin{pmatrix}
P_{\bm n} & q_{\bm n, 2} \\
P_{\bm n}'' & q_{\bm n, 2}''
\end{pmatrix} - \det \begin{pmatrix}
P_{\bm n} & q_{\bm n, 2} \\
P_{\bm n}' & q_{\bm n, 2}'
\end{pmatrix} \det \begin{pmatrix}
P_{\bm n} & q_{\bm n, 1} \\
P_{\bm n}'' & q_{\bm n, 1}''
\end{pmatrix} \\
= & P_{\bm n} \, \mathfrak{Wrons}[P_{\bm n}, q_{\bm n,1}, q_{\bm n,2}] ,
\end{split}
\end{equation}
which together with \eqref{wronksHP} proves the first part of the assertion.
If $A_1=A_2=:A$ then \eqref{IdR1} becomes
\begin{equation*}
A^2(S_{\bm n, 1}S_{\bm n, 2}'-S_{\bm n, 1}'S_{\bm n, 2})+A(B_1-B_2)S_{\bm n, 1}S_{\bm n, 2}=A \frac{R_{\bm n}}{A} P_{\bm n},
\end{equation*}
with
\begin{equation} \label{expreforR}
\frac{R_{\bm n}}{A} = \frac{S_{\bm n, 1}C_{\bm n, 2}-S_{\bm n, 2}C_{\bm n, 1}}{P_{\bm n}'}.
\end{equation}
Assume that $A(z_0)=0$. If also $P_{\bm n}(z_0)= 0$ then by the same argument as before, $ S_{\bm n, 1} / P_{\bm n}'$ and $S_{\bm n, 2}/P_{\bm n}'$ are analytic at $z_0$, as well as the right hand side of \eqref{expreforR}. If $P_{\bm n}(z_0)\neq 0$ but $P_{\bm n}'(z_0)= 0$, then by \eqref{odeHP},
$$
C_{\bm n, i} (z_0)\, P_{\bm n}(z_0)=- A(z_0) S_{\bm n, i}(z_0)\, P_{\bm n}''(z_0) , \quad i=1,2.
$$
In this case, $A P_{\bm n}''/P_{\bm n}'$ is analytic at $z_0$, which implies again that the expression in the right hand side of \eqref{expreforR} is analytic at $z_0$. This proves that $R_{\bm n}/A$ is a polynomial.
If in addition $B_1=B_2$, then \eqref{IdR1} reduces to
\begin{equation} \label{identityRpartCase}
A^2(S_{\bm n, 1} S_{\bm n, 2}'-S_{\bm n, 1}'S_{\bm n, 2})=
R_{\bm n} P_{\bm n}
\qquad \Rightarrow\qquad \frac{R_{\bm n}}{A^2}=\frac{S_{\bm n, 1} S_{\bm n, 2}'-S_{\bm n, 1}'S_{\bm n, 2}}{P_{\bm n}}.
\end{equation}
Hence, $R_{\bm n}/A^2$ could have poles only at the common roots of $A$ and $P_{\bm n}$. But in this case, by Proposition~\ref{prop:fromTHM2.1}, $S_{\bm n, 1}/P_{\bm n}$ and $S_{\bm n, 2}/P_{\bm n}$ are analytic at the zeros of $ P_{\bm n}$. The proof is complete.
\end{proof}
\begin{remark} \label{remark:equal}
In the case when
$$
A_1=A_2=A, \qquad \deg(A)< \deg(B_i)+1, \quad i =1, 2,
$$
formula \eqref{IdR1} shows that
\begin{equation} \label{degRoverAparticularcase}
\deg\left( \frac{R_{\bm n}}{A}\right)= \deg(B_1)+ \deg(B_2)+ \deg(B_1-B_2)-2.
\end{equation}
If, on the other hand, $A_1=A_2=A$, $B_1=B_2=B$,
$$
\sigma = \max \{\deg(A)-2, \deg(B)-1 \}=1 \quad \text{and} \quad n_1=n_2,
$$
then $R_{\bm n}^*$ is a constant; in other words,
$$
R_{\bm n}(x) = \const \times A^2.
$$
Indeed, by \eqref{identityRpartCase},
$$
R_{\bm n}^*=\frac{S_{\bm n, 1} S_{\bm n, 2}'-S_{\bm n, 1}'S_{\bm n, 2}}{P_{\bm n}}
$$
is a polynomial. Since $\deg(S_{\bm n,1})=n_2+1$ and $\deg(S_{\bm n,2})=n_1+1$, we have $\deg(R_{\bm n}^*)\le 1$. The assumption $n_1=n_2$ implies that the leading coefficients of $S_{\bm n,1}S_{\bm n,2}'$ and $S_{\bm n,1}'S_{\bm n,2}$ match, which proves that $R_{\bm n}^*$ is a constant.
\end{remark}
Now we are ready to produce the promised differential equations satisfied by the electrostatic partners:
\begin{thm}\label{thm:WronskianS/w}
There exist polynomials $D_1$ and $D_2$ (in general, dependent on ${\bm n}$) such that $S_{\bm n, 1}$ is a solution of the linear differential equation
\begin{equation} \label{ODEsForSis}
A_1A_2P_{\bm n}R_{\bm n}\, y''+((2A_1A_2'+A_1B_2-A_2B_1)P_{\bm n}R_{\bm n}-A_1A_2(P_{\bm n}R_{\bm n}'+P_{\bm n}'R_{\bm n}))y'+D_1y =0\,,
\end{equation}
and $S_{\bm n, 2}$ satisfies
\begin{equation} \label{ODEsForSis2}
A_1A_2P_{\bm n}R_{\bm n}\, y''+((2A_1'A_2-A_1B_2+A_2B_1)P_{\bm n}R_{\bm n}-A_1A_2(P_{\bm n}R_{\bm n}'+P_{\bm n}'R_{\bm n}))y'+D_2y =0.
\end{equation}
If $A_1=A_2$ and $B_1=B_2$, then the two differential equations coincide, i.e., $S_{\bm n, 1}$ and $S_{\bm n, 2}$ are solutions of the same differential equation
$$P_{\bm n}R_{\bm n}^*y''-(P_{\bm n}'R_{\bm n}^{*}+P_{\bm n}(R_{\bm n}^{*})')y'+D^*y=0\,,$$
where $R_{\bm n}^*$ was defined in \eqref{defR*}, and $D^*$ is a certain polynomial, dependent on ${\bm n}$.
\end{thm}
\begin{proof}
Notice that, away from the zeros of $A_1$ and $A_2$, the formal identity
\begin{equation} \label{eq:det1}
\mathfrak{Wrons}[y, f_{\bm n, 1}, f_{\bm n,2} ] = \det \begin{pmatrix}
y &f_{\bm n,1} &f_{\bm n,2} \\
y'&f_{\bm n,1} '&f_{\bm n,2} '\\
y'' &f_{\bm n,1}''&f_{\bm n,2} ''
\end{pmatrix}(z)=0
\end{equation}
is satisfied by $y=f_{\bm n,i}$, $i=1,2 $. Expanding the determinant along the first column yields
$$
u_2(z) y''(z) - u_1(z) y'(z) + u_0(z) y (z) = 0 ,
$$
with
$$
u_2 = \mathfrak{Wrons}[f_{\bm n,1}, f_{\bm n, 2}]= d_{\bm n},
\quad
u_1 = \det \begin{pmatrix}
f_{\bm n,1} &f_{\bm n,2} \\
f_{\bm n,1}''&f_{\bm n,2} ''
\end{pmatrix}(z)= \mathfrak{Wrons}[f_{\bm n,1}, f_{\bm n, 2}]'= d_{\bm n}',
$$
and
\begin{equation}
u_0 = \det \begin{pmatrix}
f_{\bm n,1} '&f_{\bm n,2} '\\
f_{\bm n,1}''&f_{\bm n,2} ''
\end{pmatrix}(z)= \mathfrak{Wrons}[f'_{\bm n,1}, f'_{\bm n, 2}](z).
\label{f21}
\end{equation}
Differentiating \eqref{defS12} we get that for $ i=1, 2$,
\begin{align*}
A_i v_i \, f_{\bm n,i}' & = A_i S_{\bm n, i}' - S_{\bm n, i}\left( A_i'+ B_i \right), \\
A_i^2 v_i\, f_{\bm n,i}'' & = A_i \left(-S_{\bm n, i} \left(A_i''+B_i'\right)+A_i S_{\bm n, i}'' -B_i S_{\bm n, i}'\right)+\left(2
A_i'+B_i\right) \left(S_{\bm n, i} \left(A_i' +B_i\right)-A_i S_{\bm n, i}'\right).
\end{align*}
This shows that
$$
D:= A_1^2A_2^2 v_1 v_2 u_0 = \det \begin{pmatrix}
A_1^2 v_1 \, f_{\bm n,1} '& A_2^2 v_2\, f_{\bm n,2} '\\
A_1^2 v_1 \, f_{\bm n,1}''& A_2^2 v_2\, f_{\bm n,2} ''
\end{pmatrix}
$$
is a polynomial.
Thus, we conclude that with this polynomial $D$, functions
$f_{\bm n,i}$ are two independent solutions of the linear differential equation
$$
d_{\bm n} y''
-d_{\bm n} 'y'
+\frac{D}{A_1^2A_2^2 v_1 v_2 }y=0.
$$
With the change of variable $y\mapsto y/ v_1 $ and using the definition \eqref{defS12} we see that $S_{\bm n,1}$ is a solution of the equation
$$
g_2(z) y''(z) + g_1(z) y'(z) + g_0(z) y (z) = 0 ,
$$
with
\begin{align*}
g_2&=\frac{d_{\bm n}}{v_1 }=\frac{R_{\bm n}P_{\bm n}}{A_1 A_2 v_1^2 v_2 },\\
g_1&=\frac{-d_{\bm n}'}{v_1 }-\frac{2d_{\bm n} v_1'}{v_1^2}
=-\frac{R_{\bm n}P_{\bm n}}{A_1 A_2 v_1^2 v_2 }\left(\frac{P_{\bm n}'}{P_{\bm n}}+\frac{R_{\bm n}'}{R_{\bm n}}-\frac{2A_1'+B_1}{A_1}-\frac{2A_2'+B_2}{A_2}+2\frac{A_1'+B_1}{A_1}\right),\\
g_0&=\frac{d_{\bm n}' v_1'}{v_1^2 }+\frac{2d_{\bm n} (v_1' )^2}{v_1^3 }
-\frac{d_{\bm n} v_1''}{v_1^2 }+\frac{D}{A_1^2 A_2^2v_1^2 v_2} \\
&=\frac{R_{\bm n}P_{\bm n}}{A_1^2 A_2 v_1^2 v_2}\left(\left(\frac{P_{\bm n}'}{P_{\bm n}}+\frac{R_{\bm n}'}{R_{\bm n}}
-\frac{2A_2'+B_2}{A_2}\right)(A_1'+B_1)
-A_1''-B_1'+\frac{D}{A_2}\right),
\end{align*}
which shows that each $A_1^2 A_2^2 v_1^2v_2 g_j$ is a polynomial. The differential equation for $S_{\bm n,2}$ is obtained in an analogous way.
Finally, in the case $A_1=A_2$ and $B_1=B_2$ (and as explained in Remark~\ref{remark:A1=A2}, $w_1 =w_2 =w$), both changes of variable coincide, so we have the same linear differential equation of order $2$ with polynomial coefficients for $S_{\bm n,1}$ and $S_{\bm n,2}$. Indeed
\begin{equation} \label{eq:det2}
\mathfrak{Wrons}[y, S_{\bm n, 1}, S_{\bm n,2} ] = \det \begin{pmatrix}
y & S_{\bm n,1} &S_{\bm n,2} \\
y'&S_{\bm n,1} '&S_{\bm n,2} '\\
y'' &S_{\bm n,1}''&S_{\bm n,2} ''
\end{pmatrix} =0
\end{equation}
is satisfied by $y=S_{\bm n,i}$, $i=1,2$, and, by \eqref{identityRpartCase}, the coefficients of $y''$, $y'$ and $y$ are
$$\mathfrak{Wrons}[S_{\bm n,1}, S_{\bm n, 2}]=P_{\bm n}R_{\bm n}^*\,,\qquad
-\mathfrak{Wrons}[S_{\bm n,1}, S_{\bm n, 2}]'=-(P_{\bm n}R_{\bm n}^*)'\,,\qquad
\mathfrak{Wrons}[S_{\bm n,1}', S_{\bm n, 2}']=D^*\,,$$
respectively. The statement is proved.
\end{proof}
The differential equations \eqref{ODEsForSis}--\eqref{ODEsForSis2} imply that, respectively,
\begin{equation*}
\begin{split}
y''+\left( \frac{2A_2' }{A_2} - \frac{ B_1 }{A_1} + \frac{ B_2 }{A_2} - \frac{ R_{\bm n}' }{ R_{\bm n}} - \frac{ P_{\bm n}'}{ P_{\bm n}} \right) y' & =0 \quad \text{at the zeros of }
S_{\bm n, 1}, \\
y''+\left( \frac{ 2A_1' }{A_1 } + \frac{ B_1 }{A_1 }- \frac{ B_2 }{A_2 } - \frac{ R_{\bm n}' }{ R_{\bm n}} - \frac{ P_{\bm n}' }{ P_{\bm n} } \right) y' &=0 \quad \text{at the zeros of }
S_{\bm n, 2},
\end{split}
\end{equation*}
and the characterization \eqref{eq:condEqBis}, along with definitions \eqref{ratWHP}, yields an electrostatic interpretation of the zeros of its solutions. Recall that $R_{\bm n} \not \equiv 0$ is the polynomial defined in \eqref{defRpolyn} (see alternative expressions in \eqref{IdR0}, \eqref{IdR1}). Then
\begin{cor}\label{cor:Scritical}
Let
$$
\phi_1(z) := \frac{1}{2}\log\left|\frac{P_{\bm n}R_{\bm n}}{A_1 A_2 }\right| + \frac{1}{2}\log\left|\frac{ v_1}{ v_2}\right|, \quad \phi_2(z):= \frac{1}{2}\log\left|\frac{P_{\bm n}R_{\bm n}}{A_1 A_2 }\right| + \frac{1}{2}\log\left|\frac{ v_2}{ v_1}\right|.
$$
If for $i\in \{1, 2\}$, the roots of $S_{\bm n,i}$ are simple, then the discrete zero-counting measure $\nu\left( S_{\bm n,i} \right)$ of $S_{\bm n,i}$ is $\phi_i$-critical in the sense of Definition~\ref{defCriticalScalar}.
\end{cor}
\begin{remark}
As it follows from \eqref{ODEsForSis}--\eqref{ODEsForSis2}, zeros of $S_{\bm n,i}$ can be multiple only at zeros of $A_1A_2P_{\bm n}R_{\bm n}$. By the definition of $S_{\bm n,i}$ via $\mathfrak D_{w_i}[P_{\bm n}]$ as in \eqref{defTransform2}, if $A_i$ and $B_i$ share a common root (a case not excluded by our assumptions), then this root is also a zero of $S_{\bm n,i}$.
\end{remark}
Notice that the roots of $S_{\bm n,i}$ are in equilibrium in the external field $\phi_i$ created not only by charges fixed at the zeros of $A_2$ or $A_1$, but also by additional masses, each carrying charge $-1/2$, placed at the roots of $P_{\bm n}R_{\bm n}$. As $N=n_1 + n_2$ grows large, the dominant interaction is the one between the zeros of $S_{\bm n,i}$ and those of $P_{\bm n}$. This motivates combining the statements of Corollaries \ref{cor:HPinterp} and \ref{cor:Scritical} into a single electrostatic model.
\begin{thm} \label{thm:vectorelectrostatics}
Let $R_{\bm n}\not \equiv 0$ be the auxiliary polynomial of degree $\leq 2\sigma_1+2\sigma_2+3$ defined in \eqref{defRpolyn}.
If the roots of $P_{\bm n}$ and of $S_{\bm n,1}$ are simple, then
the vector discrete measure $\vec \nu_1:=(\nu(P_{\bm n}),\nu(S_{\bm n,1}))$ is a vector critical measure for the energy functional $\mathcal{E}_{\vec \varphi, a}$, with $a=-1/2$ and
\begin{equation} \label{defVectorExternalField1}
\vec \varphi = \left( \frac{1}{2}\log\left|\frac{1}{v_1 }\right| , \frac{1}{2}\log\left|\frac{R_{\bm n} }{A_1 A_2 }\right| + \frac{1}{2}\log\left|\frac{ v_1}{ v_2}\right| \right).
\end{equation}
Analogously, if the roots of $S_{\bm n,2}$ are simple, then
the vector discrete measure $\vec \nu_2:=(\nu(P_{\bm n}),\nu(S_{\bm n,2}))$ is a vector critical measure for the energy functional $\mathcal{E}_{\vec \varphi, a}$, with $a=-1/2$ and
\begin{equation} \label{defVectorExternalField2}
\vec \varphi = \left( \frac{1}{2}\log\left|\frac{1}{v_2 }\right| , \frac{1}{2}\log\left|\frac{R_{\bm n} }{A_1 A_2 }\right| + \frac{1}{2}\log\left|\frac{ v_2}{ v_1}\right| \right).
\end{equation}
\end{thm}
Some observations are in order. First, the negative value of the interaction parameter $a=-1/2$ in the electrostatic model above shows that zeros of $P_{\bm n}$ and zeros of $S_{\bm n,i}$ have opposite charges, and thus are mutually attracting. This indicates that in general we should not expect the equilibrium configurations to provide minima for the energy functionals, at least not without additional constraints.
In the case when $S_{\bm n,i} \equiv \const$, the assertion of the theorem is still valid taking $\nu(S_{\bm n,i})=0$.
If $A_1=A_2=A$ and $B_1=B_2$, we have that $v_1=v_2=v$, so the same external field,
\begin{equation} \label{defVectorExternalField3}
\vec \varphi = \left( \frac{1}{2}\log\left|\frac{1}{v }\right| , \frac{1}{2}\log\left| R^*_{\bm n} \right| \right)
\end{equation}
acts both on the zeros of $S_{\bm n,1}$ and on those of $S_{\bm n,2}$. Moreover, as observed in Remark \ref{remark:equal} ii), if additionally
$$
\sigma = \max \{\deg(A)-2, \deg(B)-1 \}=1 \quad \text{and} \quad n_1=n_2,
$$
then $ R_{\bm n}^* $ is a constant. In other words, the second component of $\vec \varphi $ is constant, so effectively the external field acts only on the zeros of $P_{\bm n}$.
The electrostatic model defined above is still formal, as long as we are unable to prove existence of $P_{\bm n}$, its exact degree and the localization of the zeros of the participating polynomials (or at least, of the bulk of them). This is impossible in the general case considered so far. Hence, we need to impose additional assumptions on the weights $w_i$'s. In the next sections we discuss the most common cases well studied in the literature.
\subsection{Some special cases of multiple orthogonal polynomials}\label{sec:special}
\
In this section we will try to make our construction more meaningful by clarifying the location of the zeros of the electrostatic partners $S_{\bm n, i}$ of $P_{\bm n}$. Many of these results can be predicted by observing that the roots of
$S_{\bm n, i}$ are de facto critical points of the error function of the Hermite-Pad\'e approximants, see \eqref{S_inew}.
\subsubsection{Angelesco systems}\label{sec:angelesco}
\
The best understood situation in which the existence and uniqueness of the Hermite--Pad\'e orthogonal polynomial $P_{\bm n}$ satisfying relations \eqref{defHP} is assured is the so-called Angelesco case, introduced by Angelesco \cite{angelesco} in 1919, and later studied in \cite{MR412503} and in the works of Aptekarev, Gonchar, Kaliaguin, Nikishin, Rakhmanov and others, see e.g.~\cite{MR870267, GRS97, MR1391763, niksor}. We assume now that $\Delta_1$ and $ \Delta_2$ are real intervals, and
\begin{equation} \label{caseAngelesco}
\os \Delta_1 \cap \os \Delta_2=\emptyset.
\end{equation}
Under these conditions, for every multi-index $\bm n=(n_1, n_2)$, the polynomial $P_{\bm n}$ is of maximal degree,
$$
\deg P_{\bm n} = n_1+n_2 =N,
$$
(in other words, $\bm n$ is a \textit{normal} index); since this is valid for every $\bm n = (n_1,n_2)$, the system is known as \textit{perfect}, see \cite{Mahler}.
Moreover, in the Angelesco case, $P_{\bm n}$ has exactly $n_1$ and $n_2$ simple zeros in the interiors of $\Delta_1$ and $\Delta_2$, respectively (see \cite[Sect. 5.6]{niksor}). Additionally, the localization of the majority of the zeros of the polynomials $S_{\bm n, i}$ is given in the following result:
\begin{prop}\label{cor:zerosS12ang}
The polynomial $S_{\bm n, 1}$ (resp.\ $S_{\bm n, 2}$) has $n_2-1$ (resp.\ $n_1-1$) zeros interlacing with those of $P_{\bm n}$ on $\Delta_2$ (resp.\ $\Delta_1$).
\end{prop}
\begin{proof}
Let us write
\begin{equation} \label{defFactorization}
P_{\bm n}(x) = P_{\bm n,1}(x)\,P_{\bm n,2}(x),
\end{equation}
where $P_{\bm n,i}$ is the monic polynomial whose zeros agree with those of $P_{\bm n}$ on $\Delta_i$, $i=1,2$.
Taking $T = P_{\bm n,i}$ in \eqref{eq:CauchTrEq}, we conclude that
\begin{equation*}
\mathfrak C_{w_i}[P_{\bm n}] = \frac{1}{P_{\bm n,i}} \, \mathfrak C_{w_i}[P_{\bm n,i} \, P_{\bm n}] .
\end{equation*}
Since the integrand in $\mathfrak C_{w_i}[P_{\bm n,i} \, P_{\bm n}] $ does not change sign on $\Delta_i$, this implies that for $i=1, 2$, the Cauchy transform $\mathfrak C_{w_i}[P_{\bm n}]$ has no zeros in $\R \setminus \Delta_i$.
It remains to apply Lemma~\ref{propInterlacing1} with $S=S_{\bm n, i}$.
\end{proof}
Recall that by Theorem~\ref{TeoClaveHP}, $S_{\bm n, i}$ are polynomials of degree at most $N-n_i+\sigma_i $, $i=1, 2$. Proposition~\ref{cor:zerosS12ang} shows that all zeros of $S_{\bm n, i}$, $i=1,2$, except for at most $\sigma_i+1$ of them (a number depending only on the classes of the weights), are well localized by this interlacing property.
Thus, the electrostatic model for the zeros of $P_{\bm n}$, stated in Corollary~\ref{cor:HPinterp}, is as follows. If we consider each one of the $n_1$ zeros of $P_{\bm n}$ on $\Delta_1$ as a positive unit charge, then they are in equilibrium (or more exactly, in a critical configuration) in the field generated by:
\begin{itemize}
\item the repulsion of the unit positive charges placed at the remaining zeros of $P_{\bm n}$, on $\Delta_2$,
\item the attraction of the zeros of $S_{\bm n, 1}$, this time each with charge $-1/2$, all except at most $\sigma_1+1$ of them interlacing with the zeros of $P_{\bm n}$ on $\Delta_2$, and
\item the background potential from the orthogonality weight $w_1$ on $\Delta_1$.
\end{itemize}
A symmetric picture is obviously valid on $\Delta_2$.
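As a quick numerical illustration of this picture (a sketch, reusing the toy pair of constant weights on disjoint intervals from the sketch following \eqref{defHP}), one can solve the moment system for $P_{\bm n}$ and count its roots in each interval:
\begin{verbatim}
# Numerical illustration (sketch) of the splitting of the zeros in the Angelesco
# case: with constant weights on two disjoint intervals, the monic P_(n1,n2)
# has n1 simple roots in the first interval and n2 in the second.
import numpy as np

def moment(a, b, m):
    return (b**(m + 1) - a**(m + 1)) / (m + 1)

n1, n2 = 4, 3
I1, I2 = (-1.0, -0.2), (0.2, 1.0)
N = n1 + n2
rows = [[moment(a, b, j + k) for k in range(N)]
        for (a, b), ni in ((I1, n1), (I2, n2)) for j in range(ni)]
rhs  = [-moment(a, b, j + N)
        for (a, b), ni in ((I1, n1), (I2, n2)) for j in range(ni)]
c = np.linalg.solve(np.array(rows), np.array(rhs))
roots = np.roots(np.concatenate(([1.0], c[::-1]))).real
print(sum(I1[0] < r < I1[1] for r in roots),   # expected: n1 = 4
      sum(I2[0] < r < I2[1] for r in roots))   # expected: n2 = 3
\end{verbatim}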
Angelesco--Jacobi polynomials constitute an example of an Angelesco system. They are considered in detail in Section \ref{sec:AngelescoJacobi}.
\subsubsection{AT systems and generalized Nikishin systems}\label{sec:nikishin}
\
We assume now that, unlike the Angelesco case, both weights $w_i$, $i=1, 2$, are supported on the same interval:
\begin{equation} \label{assumptionNikishin1}
\Delta_1 = \Delta_2 = \Delta = [a,b] \subset \R.
\end{equation}
These two weights form an algebraic Chebyshev system (or an \textit{AT system}) for the multi-index $\bm n =(n_1,n_2)$ if
\begin{equation} \label{system}
\left\{w_{1}(x), x w_{1}(x), \dots, x^{n_{1}-1} w_{1}(x), w_{2}(x), x w_{2}(x), \dots, x^{n_{2}-1} w_{2}(x) \right\}
\end{equation}
is a Chebyshev system on $\Delta$, that is, if every non-trivial linear combination of functions from \eqref{system} with real coefficients has at most $N-1=n_1+n_2-1$ zeros on $\Delta$. For further details, see e.g. \cite[Chapter 4, \S 4]{niksor} or \cite[Section 23.1.2]{Ismail05}.
It is known (see e.g. \cite[Theorem 23.1.4]{Ismail05}) that if the multi-index $\bm n =(n_1,n_2)$ is such that $\left(w_{1}, w_{2}\right)$ is an AT system on $[a, b]$ for every index $\bm m =(m_1,m_2) $ for which $m_{j} \leq n_{j}$, $j=1, 2$, then
$\bm n$ is normal, and the multiple orthogonal polynomial $P_{\bm n}$ has all its $N$ zeros, all simple, on $(a, b)$.
A construction of an AT system that is now known to be perfect (see \cite{FiLo}) was put forward by E. M. Nikishin \cite{Nik82}; it is called an \textit{$MT$-system} in \cite{niksor}, but is nowadays known as a \textit{Nikishin system}. Namely, we assume additionally that the ratio $w_2/w_1$ is the Cauchy transform (also known as a \textit{Markov function}) of a non-negative function on an interval $[c,d]\subset \R$ whose interior is disjoint from $\Delta$. Besides normality for all multi-indices $\bm n$ and the fact that all zeros of $P_{\bm n}$ are simple and belong to the open interval $(a,b)$, this allows localization of the zeros of the electrostatic partner, as we show below.
\begin{remark}
Nikishin \cite{Nik82} proved the normality for all indices of the form $(n,n)$ and $(n+1,n)$, asserting without proof that it holds for any index $(n_1,n_2)$ such that $n_1 \geq n_2$. He called it a \textit{weakly perfect} system, but a result for Markov functions (see e.g. \cite[Lemma 6.3.5]{StTo}) implies that weak perfectness is equivalent to perfectness of the system. Later, K. Driver and H. Stahl established the normality for any index in the case of Nikishin systems of two functions \cite{DrSt} (see also \cite{Bulo}), and more recently, U. Fidalgo and G. L\'{o}pez proved perfectness of a Nikishin system of any order \cite{FiLo}.
\end{remark}
As before, we restrict our attention to semiclassical weights, but slightly weaken Nikishin's assumptions. Namely, in the situation \eqref{assumptionNikishin1} we suppose that $w_1$, $w_2$ are non-negative weights on $[a,b]$ such that:
\begin{itemize}
\item $w_1$ is a semiclassical weight on $[a,b]$;
\item weight $w_2$ is of the form
\begin{equation} \label{defW2}
w_2(x)= | \Pi(x) u(x)| w_1(x), \quad x\in [a, b],
\end{equation}
where
\begin{equation}\label{Cauchytrans}
u(x) = \int_c^d\,\frac{v(t)\, }{x-t}\,dt
\end{equation}
is semiclassical, with $(a,b) \cap (c,d) = \emptyset$, $v$ continuous and non-negative on $(c,d)$, and $\Pi$ is an arbitrary polynomial with real coefficients, non-vanishing on $(a,b) \cup (c,d)$.
\end{itemize}
Under these assumptions the weight $w_2$ is also semiclassical. The fact that $(w_1, w_2)$ forms an AT-system for $n_1 \geq n_2 + m$ can be deduced from the fact that the linear form $p + q \Pi u$, for arbitrary polynomials $p, q$ of respective degrees $\leq n_1-1, n_2-1$, and $n_1 \geq n_2 + m$, has at most $n_1+n_2-1$ zeros in $[a,b]$ (see \cite{Nik82} and \cite[p. 1022]{RHPNik}).
\begin{example}
It is easy to see that if $c<d$, then for $\gamma, \delta\notin \mathbb Z$, $\gamma+\delta<0$, $\gamma+\delta\in \mathbb Z$, the function
$$
u(x)=(x-c)^\gamma (d-x)^\delta
$$
can be represented as the Cauchy integral \eqref{Cauchytrans}. As a consequence, a pair of weights
$$
w_1(x)= |x-a|^{\alpha }\,|x-b|^{\beta }, \quad w_2(x)= |x-a|^{\alpha }\,|x-b|^{\beta } |x-c|^{\gamma }\,|x-d|^{\delta }, \quad x \in (a, b),
$$
where $(a,b) \cap (c,d) = \emptyset$, and $\alpha, \beta, \gamma, \delta>-1$, $\gamma, \delta\notin \mathbb Z$, $\gamma+\delta\in \mathbb Z$, constitutes an example
of a system of the type defined above.
Since we do not assume that the intervals $(a,b)$ and $(c,d)$ are bounded, another example is the pair of weights of the form
$$
w_1(x) = \exp (- x^{r}), \quad w_2(x)= |x-a|^{\gamma }\,|x-b|^{\delta}\exp (- x^{r}), \quad x\in [0,+\infty),
$$
with $r\in \N$, $-\infty<a<b<0$, $ \gamma, \delta>-1$, $\gamma, \delta\notin \mathbb Z$, and $\gamma+\delta\in \mathbb Z$.
\end{example}
With the introduction of the polynomial factor $\Pi$ in the weight $w_2$ we can no longer guarantee that all the zeros of $P_{\bm n}$ are in $(a,b)$; however, the following result still holds:
\begin{prop}\label{thm:zerosNik}
Under the conditions on the weights $w_1$ and $w_2$ stated above, the Hermite--Pad\'e polynomial $P_{\bm n}$, satisfying \eqref{defHP}, has at least $n_1+\ell+1$ sign changes on $(a,b)$, where
$$\ell = \min (n_2-1,n_1-m), \quad m=\deg(\Pi),
$$
while its Cauchy transform $\mathfrak C_{w_1}[P_{\bm n}]$ has at least $\ell+1$ sign changes in $(c,d)$.
\end{prop}
\begin{proof}
We basically follow the arguments in \cite[Sec. 2.5]{VanAssche06}. Suppose that $n_1\geq m$. Taking $T = P \Pi $ in \eqref{eq:CauchTrEq}, where $P$ is an arbitrary polynomial of degree $k\leq n_1-m$, we get that
$$
P(x)\, \Pi(x) \,\int_a^b\,\frac{P_{\bm n}(t)}{t-x}\,w_1(t) dt = \int_a^b\,\frac{P(t)\,P_{\bm n}(t)\Pi(t)}{t-x}\,w_1(t) dt.
$$
Integrating this identity with respect to $v(x)\,dx$ over $[c,d]$, applying Fubini's theorem and using \eqref{Cauchytrans} yields
$$
\int_c^d\,P(x)\,\mathfrak C_{w_1}[P_{\bm n}](x)\,\Pi(x) v(x) \,d x = \,\int_a^b\,P(t)\,P_{\bm n}(t)\,w_2(t) dt\,= 0\,,
$$
as long as the degree $k$ of $P$ is $\leq n_2-1$. Since the polynomial $\Pi$ does not vanish on $(c,d)$, this proves that $\mathfrak C_{w_1}[P_{\bm n}]$ changes sign at least $\ell + 1$ times in $(c,d)$.
To prove the first part of the proposition, let $P$ be a polynomial non-vanishing on $(a,b)$ and such that $\mathfrak C_{w_1}[P_{\bm n}]/P$ is analytic in $\C \setminus [a,b]$. By the assertion we just proved, we can take $P$ of degree at least $\ell+1$, so that by \eqref{eq:asymptCauchy},
$$
\frac{\mathfrak C_{w_1}[P_{\bm n}]}{P}(z)=\mathcal O \left(\frac{1}{z^{n_1+\ell+2}}\right), \quad z\rightarrow \infty.
$$
Let $\Gamma$ be a positively oriented Jordan contour encircling $[a,b]$ and leaving $[c,d]$ in its exterior. Then for $k=0, 1, \dots, n_1+\ell$,
\begin{equation}\label{fubini}
\begin{split}
0= \frac{1}{2\pi i}\,\oint_{\Gamma}\,z^k\,\frac{\mathfrak C_{w_1}[P_{\bm n}](z)}{P(z)}\,dz & = \frac{1}{2\pi i}\,\oint_{\Gamma}\,\frac{z^k}{P(z)}\,\left( \int_a^b\,\frac{P_{\bm n}(t) }{t-z}\,w_1(t) dt\right) \,dz \\
& = \int_{a}^b \, P_{\bm n}(t) \left( \frac{1}{2\pi i}\, \oint_\Gamma \, \frac{z^k}{P(z)}\, \frac{1}{t-z}\,dz \right) w_1(t) \,dt \\
& = \int_a^b\,t^k\,P_{\bm n}(t)\,\frac{w_1(t) dt}{P(t)},
\end{split}
\end{equation}
where we have used Fubini's and Cauchy's theorems. Since $w_1/P$ has a constant sign on $(a,b)$, $P_{\bm n}$ satisfies quasi-orthogonality conditions (of order at least $n_1+\ell$) there. Standard arguments yield that $P_{\bm n}$ has at least $n_1+\ell+1$ sign changes on $(a,b)$.
\end{proof}
Consider the case when $n_2\leq n_1-m+1$, so that $\ell =n_2-1$. According to Proposition~\ref{thm:zerosNik}, $P_{\bm n}$ has exactly $N=n_1+n_2$ zeros, all simple, in $(a,b)$, while $\mathfrak C_{w_1}[P_{\bm n}]$ has $\ge n_2$ zeros in $(c,d)$, exactly as in the classical Nikishin setting ($m=0$).
Moreover, if these zeros of $\mathfrak C_{w_1}[P_{\bm n}]$ are disjoint from the zeros of $A_1 S_{\bm n, 1}$ then, by the second assertion of Lemma~\ref{propInterlacing1}, at least $\ell=n_2-1$ zeros of the polynomial $S_{\bm n, 1}$ (out of a total of $\le n_2+\sigma_1$ of its zeros) interlace with those of $\mathfrak C_{w_1}[P_{\bm n}]$.
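The following rough numerical sketch illustrates these statements in the classical Nikishin setting ($m=0$, $\Pi\equiv 1$); all concrete data in it ($\Delta=[0,1]$, $w_1\equiv 1$, $[c,d]=[-2,-1]$, $v\equiv 1$, so that $u(x)=\log((x+2)/(x+1))$ and $w_2=u\,w_1$) are illustrative choices, not taken from the text.
\begin{verbatim}
# Rough numerical sketch, classical Nikishin setting (m = 0, Pi = 1).
# Toy data: Delta = [0,1], w_1 = 1, [c,d] = [-2,-1], v = 1, so that
# u(x) = log((x+2)/(x+1)) and w_2 = u w_1.  All choices are illustrative.
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(60)
t, wq = 0.5*(nodes + 1.0), 0.5*weights           # Gauss-Legendre rule on [0, 1]
w1 = np.ones_like(t)
w2 = np.log((t + 2.0)/(t + 1.0))

n1, n2 = 3, 3
N = n1 + n2
rows, rhs = [], []
for w, ni in ((w1, n1), (w2, n2)):
    for j in range(ni):                          # int_0^1 t^j P_n(t) w_i(t) dt = 0
        rows.append([wq @ (t**(j + k)*w) for k in range(N)])
        rhs.append(-(wq @ (t**(j + N)*w)))
P = np.concatenate(([1.0], np.linalg.solve(np.array(rows), np.array(rhs))[::-1]))

print(sum(0 < r < 1 for r in np.roots(P).real)) # expected: 6 zeros in (0, 1)

xs = np.linspace(-1.99, -1.01, 400)             # Cauchy transform on (c, d)
cauchy = np.array([wq @ (np.polyval(P, t)/(t - x)) for x in xs])
print(np.sum(np.sign(cauchy[1:]) != np.sign(cauchy[:-1])))  # expected: >= n2 = 3
\end{verbatim}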
\subsubsection{Generalized Nikishin systems: case of overlapping supports}\label{sec:rakhmanov}
\
Generalized Nikishin systems (GN systems) were introduced in \cite{GRS97} using a rooted tree graph. A particular example of such a system, whose asymptotics was studied in \cite{MR2475084}, \cite{MR2656323} and \cite{MR2796829}, shares characteristics of both cases described in Sections~\ref{sec:angelesco} and \ref{sec:nikishin}. Namely, we assume that
\begin{equation} \label{mixedR1}
\Delta_1 \subseteq \Delta_2,
\end{equation}
in addition to the Nikishin relation between the semiclassical weights $w_1$ and $w_2$, given by conditions \eqref{defW2}--\eqref{Cauchytrans}, with $\Pi\equiv 1$, and the assumption that the interior of
$$
\Delta_3:=[c,d]
$$
is disjoint from $\Delta_2$. On one hand, when $\Delta_1\to \Delta_2$, we obtain the classical Nikishin configuration of Section~\ref{sec:nikishin}. On the other hand, the redefinition of the supports
$$
\Delta_1 \mapsto \Delta_1, \quad \Delta_2\setminus\Delta_1 \mapsto \Delta_2
$$
yields the Angelesco setting of Section~\ref{sec:angelesco}.
Let us study the diagonal case of $n_1=n_2=n$, so that $N=2n$. As before, the key ingredient is the location of the zeros of the Hermite--Pad\'e polynomial $P_{\bm n}$ and its electrostatic partners $S_{\bm n,1}$ and $S_{\bm n,2}$. It was proved in \cite{MR2796829} that \textit{for any $n$, the zeros of the Hermite--Pad\'e polynomial $P_{\bm n}$, with the possible exception of five of them, are in $\Delta_2$.} Recall that by orthogonality assumptions, at least $n$ of them belong to the subinterval $\Delta_1$. Additionally, we have:
\begin{prop}\label{prop:Rakhzeros}
If $P_{\bm n}$ has $n+k_1\ge n$ sign changes in $\Delta_1$ and $k_2\ge 0$ sign changes in $\Delta_2 \setminus \Delta_1$, then $S_{\bm n,1}$ has at least $\max \{k_2-2,0\}$ zeros in $\Delta_2 \setminus \Delta_1$, which interlace with the zeros of $P_{\bm n}$, and at least $\max\{k_1-3,0\}$ zeros in $\Delta_3$, interlacing with the zeros of $\mathfrak C_{w_1}[P_{\bm n}]$.
\end{prop}
\begin{proof}
Let us denote by $x_i$, $i=1,\ldots,n+k_1$ the points of sign change of $P_{\bm n}$ in $\Delta_1$, and by $y_j$, $j=1,\dots,k_2$ the corresponding points of sign change in $\Delta_2 \setminus \Delta_1$. Using \eqref{eq:CauchTrEq} with
$$
\Pi(x) =\prod_{j=1}^{k_2} (x-y_j),
$$
we conclude again that $\mathfrak C_{w_1}[P_{\bm n}]$ does not change sign in each component of $\Delta_2 \setminus \Delta_1$. Then, the first part of Lemma \ref{propInterlacing1} asserts that $S_{\bm n,1}$ has at least $k_2-2$ zeros in $\Delta_2 \setminus \Delta_1$, interlacing with those of $P_{\bm n}$ (observe that $\Delta_2 \setminus \Delta_1$ can have up to two disjoint components).
Notice that $k_1 + k_2 \le n$, so that if $k_2=n$, our proof is finished. Suppose that $k_2<n$, and let us see that $\mathfrak C_{w_1}[P_{\bm n}]$ has at least $k_1-2$ sign changes in $\Delta_3$.
From \eqref{defHP} and \eqref{defW2}--\eqref{Cauchytrans}, we have that for any polynomial $P \in \mathbb P_{n-1}$,
\begin{equation}\label{Rakhorth}
\begin{split}
0 & = \int_{\Delta_1}\,P(x) P_{\bm n} w_1(x) u(x) dx + \int_{\Delta_2 \setminus \Delta_1}\,P(x) P_{\bm n} w_2(x) dx \\
& = - \int_{\Delta_3}\,P(t)\, \mathfrak C_{w_1}[P_{\bm n}](t)\, v (t)\, dt + \int_{\Delta_2 \setminus \Delta_1}\,P(x) P_{\bm n}(x) w_2(x) dx\,,
\end{split}
\end{equation}
where we have used Fubini's theorem for the last identity. Now, denote by $z_i\,,\,i=1,\dots,k_3$ the points where $\mathfrak C_{w_1}[P_{\bm n}]$ changes sign in $\Delta_3$. Let us suppose that $k_3<k_1-2$ and define
$$
P(x) := (x - \zeta_1)^{\epsilon_1} (x - \zeta_2)^{\epsilon_2} \prod_{i=1}^{k_2} (x-y_i) \prod_{j=1}^{k_3} (x-z_j)\,,
$$
where $\epsilon_i \in \{0,1\}\,,\,i=1,2\,,$ $\zeta_1 \in \Delta_1$ and $\zeta_2$ is located in the ``gap'' between $\Delta_2$ and $\Delta_3$ (which may consist of a single point, since we only require the interiors to be disjoint). We can use the parameters $\zeta_1, \zeta_2$ and $\epsilon_1, \epsilon_2$ to guarantee that
$$P(t)\, \mathfrak C_{w_1}[P_{\bm n}](t)\, v (t) \geq 0,\quad t\in \Delta_3$$ and
$$P(x) P_{\bm n}(x) w_2(x) \leq 0,\quad x\in \Delta_2.$$
By assumption, $\deg(P) \le k_2+k_3+2 < k_1+k_2 \le n$, so \eqref{Rakhorth} must hold for this particular choice of $P$. This is possible only if both integrands on the right hand side of \eqref{Rakhorth} vanish identically, which is a contradiction. Hence, $k_3\geq k_1-2$, and it remains to use again the second assertion in Lemma \ref{propInterlacing1} to conclude the proof.
\end{proof}
Thus, as expected from an intermediate case between the Angelesco and Nikishin settings, now a part of the zeros of the electrostatic partner $S_{\bm n,1}$ lies on $\Delta_3$ (as in the Nikishin case) while the rest are located in $\Delta_2 \setminus \Delta_1$ (as in the Angelesco case). Therefore, in this situation, part of the ``ghost'' attractive charges interlace with part of the zeros of $P_{\bm n}$ in $\Delta_2 \setminus \Delta_1$, while the other part is placed in $\Delta_3$. Of course, depending on the specific case we are dealing with, some of these sets of attractive charges may be empty.
\section{Asymptotic zero distribution}\label{sec:asymptotics}
It is instructive to observe the discrete-to-continuous transition of the electrostatic model described in the previous section, assuming that the total degree $N=n_1+n_2\to \infty$.
\subsection{Vector critical measures}
If $\mu_1$, $\mu_2$ are two finite positive Borel measures with compact support then their (continuous) mutual logarithmic energy is
\begin{equation}\label{defMutual}
\langle \mu_1, \mu_2 \rangle := \iint \log \frac{1}{|x-y|}\, d\mu_1(x) d\mu_2(y);
\end{equation}
and the logarithmic energy of $\mu_i$ is
\begin{equation}\label{defEnergyContinuous}
E(\mu_i) := \langle \mu_i, \mu_i \rangle, \quad i=1, 2.
\end{equation}
Given a vector of measures $\vec \mu=(\mu_1, \dots, \mu_k)$, and a symmetric positive-definite \textit{interaction matrix}
$$
M=(m_{ij})_{i,j=1}^k,
$$
the vector energy of $\vec \mu $ is
\begin{equation}\label{defWeightedEnergyCont}
E_{ M}(\vec \mu ):=
\sum_{i,j=1}^k m_{ij} \langle \mu_i, \mu_j \rangle.
\end{equation}
In the particular case of $k=2$, when
$$M =\begin{pmatrix}1&a\\a&1\end{pmatrix}, \quad -1<a<1, $$
we call $a$ the \textit{interaction parameter}.
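In this case, the vector energy \eqref{defWeightedEnergyCont} takes the explicit form
\begin{equation*}
E_{M}(\vec \mu) = E(\mu_1) + E(\mu_2) + 2a\, \langle \mu_1, \mu_2\rangle,
\end{equation*}
so that the value $a=-1/2$, relevant for Theorem~\ref{thm:vectorelectrostatics}, makes the mutual term enter with a negative sign, in agreement with the attraction between the two families of zeros discussed in Section~\ref{sec:multiple_orth}.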
We define the critical vector measures following \cite{MR2770010} and \cite{MR3545949}. Recall that any smooth complex-valued function $h$ in the closure $\overline \Omega $ of a domain $\Omega$ generates a local variation of $\Omega$ by $ z
\mapsto z^t=z+ t \, h(z)$, $t\in \C$. It is easy to see that $ z \mapsto z^t $ is injective for small
values of the parameter $t$. The transformation above induces a variation of sets $ e \mapsto
e^t := \{z^t:\, z \in e\}$, and measures: $ \mu \mapsto
\mu^t$, defined by $\mu^t(e^t)=\mu(e) $; in the differential form, the pullback measure $\mu^t$
can be written as $d\mu^t (x^t)=d\mu(x)$. Recall also the notation, introduced in Section~\ref{sec:generalHermitePade}, of $\mathbb A_i$ for the set of zeros of $A_i$ on $\C$, $i=1,2$.
\begin{definition}[see \cite{MR3545949}] \label{def:AcriticalCont}
A vector measure $\vec \mu = (\mu_1, \dots, \mu_k)$ is a (continuous) \textit{vector critical measure} if for any $h$ smooth in $\C\setminus (\mathbb A_1 \cup \mathbb A_2)$ such that $h\big|_{\mathbb A_1 \cup \mathbb A_2} \equiv 0$,
\begin{equation}\label{derivativeEnergy}
\frac{d}{dt}\, E_M (\vec \mu^t)\big|_{t=0} = \lim_{t\to 0} \frac{E_M (\vec \mu^t)- E_M (\vec \mu)}{t}=0.
\end{equation}
\end{definition}
As in the discrete case,
\begin{equation*}
\mu_i \text{ is $F_i$-critical, with } F_i :=\sum_{1\le j\le k,\, j\neq i} \frac{m_{ij}}{m_{ii}}\, U^{\mu_j} , \quad i\in \{ 1, \dots, k\},
\end{equation*}
which yields the following variational conditions on the components of $\vec \mu$: for $x\in \supp(\mu_i)$, with the possible exception of a set of logarithmic capacity $0$,
\begin{equation} \label{variationalCond}
U^{\mu_i}(x) + \sum_{1\le j\le k,\, j\neq i} \frac{m_{ij}}{m_{ii}}\, U^{\mu_j} = c_i =\const , \quad i\in \{ 1, \dots, k\}.
\end{equation}
\subsection{Asymptotic electrostatic model: general case}
Under the general assumption that for each multi-index $\bm n=(n_1, n_2)\in \N^2$ the Hermite--Pad\'e polynomial $P_{\bm n}$ exists and is of degree $N=n_1+n_2$, as well as that $\deg (S_{\bm n, i})=N-n_i+1$, we consider the zero-counting measures (see the definition in \eqref{defCountingMeaure}) for $P_{\bm n}$, $S_{\bm n,1}$ and $S_{\bm n,2}$ in the asymptotic regime
\begin{equation} \label{asymptoticRegime}
N=n_1 + n_2 \to \infty, \quad \lim_{N\rightarrow \infty}\,\frac{n_2}{N}\,= t\in [0,1].
\end{equation}
Let us assume that (perhaps, along a subsequence of multi-indices), weak limits
\begin{equation} \label{munu}
\mu := \lim_{\bm n} \frac{1}{N}\,\nu(P_{\bm n}),\quad \nu_1 := \lim_{\bm n} \frac{1}{N}\,\nu(S_{\bm n,1}), \quad \nu_2 := \lim_{\bm n} \frac{1}{N}\,\nu(S_{\bm n,2}),
\end{equation}
exist. With our assumptions,
\begin{equation} \label{weightconstraints1}
\|\mu\|:= \int d\mu =1, \quad \|\nu_1\|:= \int d\nu_1 =t, \quad \|\nu_2\|:= \int d\nu_2 =1-t.
\end{equation}
\begin{cor}\label{cor:asymptoticsGeneral}
With the assumptions and notations above, each vector measure $(\mu,\nu_1)$ and $(\mu,\nu_2)$ is critical in the sense of Definition~\ref{def:AcriticalCont}, with the interaction parameter $ a=-1/2$ and constraints \eqref{weightconstraints1}.
\end{cor}
It is also worth discussing a connection with a more traditional model involving 3-component vector measures. Define
\begin{equation} \label{lambdas}
\Omega=\overline{\{ x\in\R:\, \mu=\nu_1 \}}, \quad \lambda_1 :=\mu\big|_{\Omega}, \quad \lambda_2=\mu-\lambda_1, \quad \lambda_3= \nu_1-\lambda_1.
\end{equation}
Obviously,
\begin{equation} \label{weightconstraints2}
\|\lambda_1\| + \|\lambda_2\|= 1, \quad \|\lambda_1\| + \|\lambda_3\|= t.
\end{equation}
Variational conditions \eqref{variationalCond} for $(\mu, \nu_1)$ with the interaction matrix
$$
\begin{pmatrix}1&-1/2\\-1/2&1\end{pmatrix}
$$
imply that
\begin{equation*}
\begin{split}
U^{\mu}(x) -\frac{1}{2}\, U^{\nu_1}(x) = c_1 =\const , & \quad x\in \supp(\mu), \\
U^{\nu_1}(x) -\frac{1}{2}\, U^{\mu}(x) = c_2 =\const , & \quad x\in \supp(\nu_1),
\end{split}
\end{equation*}
or equivalently,
\begin{equation*}
\begin{split}
\frac{1}{2}\, U^{\lambda_1}(x) +U^{\lambda_2}(x) -\frac{1}{2}\, U^{\lambda_3}(x) = c_1 =\const , & \quad x\in \supp(\mu)= \supp(\lambda_1) \cup \supp(\lambda_2), \\
\frac{1}{2}\, U^{\lambda_1}(x) -\frac{1}{2}\, U^{\lambda_2}(x) + U^{\lambda_3}(x) = c_2 =\const , & \quad x\in \supp(\nu_1) = \supp(\lambda_1) \cup \supp(\lambda_3). \\
\end{split}
\end{equation*}
Additionally, on $\supp(\lambda_1)$, where both identities hold, we have that
$$
U^{\lambda_1}(x) +\frac{1}{2}\, U^{\lambda_2}(x) +\frac{1}{2}\, U^{\lambda_3}(x) = c_1 +c_2.
$$
\begin{cor}\label{cor:asymptotics3}
With the assumptions and notations above, $(\lambda_1, \lambda_2, \lambda_3)$ is a vector critical measure satisfying the constraints \eqref{weightconstraints2} and with the interaction matrix
$$
\begin{pmatrix}
1&1/2 & 1/2 \\1/2&1 & -1/2 \\
1/2 & -1/2 & 1
\end{pmatrix}.
$$
\end{cor}
This electrostatic model has been used to describe the asymptotics of the zeros of Hermite--Pad\'e polynomials in several situations, see e.g. \cite{MR2475084, ALT2011, MR3545949, MR3939592, AMFS21, MR2796829}. In particular, a spectral curve for such critical measures was derived in \cite{MR3545949}, and it was shown that $\lambda_i$'s are supported on a finite number of analytic arcs, that are trajectories of a quadratic differential globally defined on a three-sheeted Riemann surface.
\subsection{Asymptotic electrostatic model: Angelesco case} \label{sec:AsymptAngelesco}
Using the notation \eqref{munu} and Proposition~\ref{cor:zerosS12ang}, in this case
$$
\supp(\mu) \subseteq \Delta_1 \cup \Delta_2, \quad \supp(\nu_1)\subseteq \Delta_2, \quad \supp(\nu_2)\subseteq \Delta_1,
$$
and
$$
\nu_1 = \mu\big|_{\Delta_2}, \quad \nu_2 = \mu\big|_{\Delta_1},
$$
so that in notation \eqref{lambdas},
$$
\lambda_1 = \nu_1, \quad \lambda_2 = \nu_2, \quad \lambda_3=0.
$$
Thus, in this case the vector electrostatic model of Corollary~\ref{cor:asymptotics3} reduces to a $2\times 2$ equilibrium for $(\nu_1, \nu_2)$, with the interaction matrix
$$
M =\begin{pmatrix}
1 & 1/2 \\
1/2 & 1
\end{pmatrix}
$$
and constraints
$$
\supp (\nu_1) \subset \Delta_2, \quad \|\nu_1\|=t, \qquad \text{and} \qquad \supp (\nu_2) \subset \Delta_1, \quad \|\nu_2\|=1-t.
$$
This is already classical, see e.g.~\cite[Ch. 5]{niksor}; actually, a stronger result is valid: the vector measure $(\nu_1,\nu_2)$ is a \textit{global minimum} for \eqref{def:EnergiaVect} with $a=1/2$. This does not follow directly from our electrostatic model.
Notice that the measure $\nu=\nu_1 + \nu_2$ is the limit zero distribution of both $P_{\bm n}$ and $S_{\bm n,1}S_{\bm n,2}$.
\begin{remark}
Although in the Angelesco case the zeros of $P_{\bm n}$ are confined to $\Delta_1 \cup \Delta_2$, in principle up to $\sigma_i+1$ zeros of $S_{\bm n, i}$, $i=1, 2$, are out of our control. The same happens with (a bounded number of) zeros of the polynomials $R_{\bm n}$. In order to guarantee weak convergence of the zero-counting measures, it is sufficient to impose an additional assumption: that \textit{the zeros of $S_{\bm n, 1}$, $S_{\bm n, 2}$ and $R_{\bm n}$ are uniformly bounded along the double sequence $(n_1, n_2)$}.
\end{remark}
\subsection{Asymptotic electrostatic model: Nikishin case} \label{sec:AsymptNikishin}
Consider the generalized Nikishin system as described in Section~\ref{sec:nikishin}, in the asymptotic regime \eqref{asymptoticRegime} and with the additional assumption that
\begin{equation} \label{asymptoticRegimeNikishin1}
n_2\leq n_1-m+1,
\end{equation}
so that
\begin{equation} \label{asymptoticRegimeNikishin2}
t = \lim_{N\rightarrow \infty}\,\frac{n_2}{N} \in [0,1/2].
\end{equation}
As we have seen, all $N$ zeros of $P_{\bm n}$ live on $[a,b]$; according to Theorem~\ref{thm:vectorelectrostatics}, each one of them, endowed with a charge $+1$, interacts with the zeros of $S_{\bm n, 1}$, each one with charge $-1/2$. We have seen that the majority of them (at least $\ell=n_2-1$ out of $ n_2+\sigma_1$ possible) belongs to $[c,d]$. Imposing the same additional assumption as before, namely that \textit{the zeros of both $S_{\bm n, 1}$ and $R_{\bm n}$ are uniformly bounded along the double sequence $(n_1, n_2)$}, we can use again the weak-* compactness of measures.
With the notation \eqref{munu}, now
$$
\supp(\mu) \subseteq \Delta_1 =[a,b], \quad \supp(\nu_1)\subseteq \Delta_2=[c,d],
$$
so that in notation \eqref{lambdas},
$$
\lambda_1 = 0, \quad \lambda_2 = \mu, \quad \lambda_3=\nu_1.
$$
Thus, in this case the vector electrostatic model of Corollary~\ref{cor:asymptotics3} reduces to a $2\times 2$ equilibrium for $(\mu, \nu_1)$, with the interaction matrix
$$
M =\begin{pmatrix}
1 & -1/2 \\
-1/2 & 1
\end{pmatrix}
$$
and constraints
$$
\supp (\mu) \subset \Delta_1, \quad \|\mu\|=1, \qquad \text{and} \qquad \supp (\nu_1) \subset \Delta_2, \quad \|\nu_1\|=t.
$$
This model was initially put forward by Nikishin himself in \cite{Nikishin86}. Again, a stronger result is valid: the vector measure $(\mu,\nu_1)$ is a \textit{global minimum} for the vector energy, which does not follow directly from our electrostatic model.
Observe that the roles of the weights $w_1$ and $w_2$ in a Nikishin system are not symmetric, or at least the symmetry is not immediate. An argument that allows one to swap $w_1$ and $w_2$, and thus break the barrier of $n_2\leq n_1$, is based on the fact (see e.g.~\cite[Lemma 6.3.5]{StTo}) that if $u$ is a Markov function \eqref{Cauchytrans}, then
$$\frac{1}{u(x)} = r(x) - \int\,\frac{d\tau (t)}{x-t},$$
where $r$ is a polynomial of degree $\leq 1$ and $\tau$ is a positive measure on $[c,d]$. Standard arguments allow us to extend the previous result when $m=0$ (the classical Nikishin case), see e.g. \cite{RHPNik}. Unfortunately, the presence of a non-trivial polynomial factor $\Pi$ in \eqref{defW2} prevents these arguments from going through. Thus, if $m=0$, in the asymptotic regime \eqref{asymptoticRegime} we can drop the restriction \eqref{asymptoticRegimeNikishin1}, and the electrostatic model discussed above is still valid. Another interesting result that sheds light on the roles of $w_1$ and $w_2$, allowing one to connect the situations of $n_1\le n_2$ and $n_1\ge n_2$, appears in \cite{zbMATH07308461}.
\subsection{Asymptotic electrostatic model: case of the overlapping supports}
Here we consider the asymptotic electrostatics for the intermediate case studied in \cite{MR2475084, MR2796829} and partially analyzed in Section~\ref{sec:rakhmanov}. Recall that here we consider $n_1=n_2=n$, and thus, we are interested in what happens as $n\rightarrow \infty$.
With our notation in the current section, we have that $\supp \mu \subseteq \Delta_2$ and $\supp \nu_1 \subseteq (\Delta_2 \setminus \Delta_1) \cup \Delta_3$, so that now
$$\supp \lambda_1 \subseteq \Delta_2 \setminus \Delta_1\,,\quad \supp \lambda_2 \subseteq \Delta_1\,,\quad \supp \lambda_3 \subseteq \Delta_3\,.$$
The results in Proposition \ref{prop:Rakhzeros} guarantee that
$$\|\lambda_1\| + \|\lambda_2\| = 1\,,\; \|\lambda_2\| = \|\lambda_3\| + \,\frac{1}{2}\,. $$
Since in this intermediate case none of the measures $\lambda_i$ vanishes, the $3\times 3$ interaction matrix in Corollary~\ref{cor:asymptotics3} does not reduce to a $2\times 2$ one, as it did in the previous (extremal) cases. As for the constraints on the sizes of the measures, in this case we have
$$\|\lambda_1\| = \,\frac{1}{2} - \theta \,,\; \|\lambda_2\| = \,\frac{1}{2} + \theta\,,\; \|\lambda_3\| = \theta\,,$$
where $\theta \in [0, 1/2]$ is a parameter which depends on the relative sizes and mutual positions of the three intervals $\Delta_i\,,\,i=1,2,3$, but especially on the first two, as asserted by the author of \cite{MR2796829}.
Moreover, as in the previous cases, for this ``critical'' value of the parameter $\theta$, a stronger result is valid. The vector measure $(\lambda_1,\lambda_2, \lambda_3)$ described above is a global minimum of the vector energy (see \cite{MR2796829}); but, again, this result does not follow directly from our electrostatic approach.
\section{Differential equation of order 3} \label{sec:ODE3}
The system of second order linear differential equations in Theorems \ref{TeoClaveHP} and \ref{thm:WronskianS/w} allowed us to derive an electrostatic interpretation of the zeros of the Hermite--Pad\'e polynomials of type II. In this section, we show how these equations can be combined into a single third order homogeneous differential equation, satisfied simultaneously by the polynomial $P_{\bm n}$ and by the functions of the second kind $q_{\bm n,1}$ and $q_{\bm n,2}$. Notice that if we only cared about a third order ODE solved by $P_{\bm n}$, it would be sufficient to differentiate one of the equations \eqref{odeHP}; thus, it is convenient to stress here that we seek equations whose basis of solutions is precisely $( P_{\bm n} , q_{\bm n,1}, q_{\bm n,2})$.
As mentioned in the introduction, third order homogeneous linear ODEs solved by $P_{\bm n}$ have already been described in the literature. For instance, such an equation was found in \cite{kaliaguine:1996} for the Jacobi-Angelesco multiple orthogonal polynomials, see Section~\ref{sec:AngelescoJacobi}. In \cite{Aptekarev:97}, Aptekarev et al.~considered the case of a semiclassical weight $w$ of class $\sigma$ and multiple orthogonal polynomials $P_{\bm n}$ of type II with respect to $w$ and $\sigma +1$ non-homotopic paths of integration, showing for instance that in the diagonal setting $\bm n=(n,n,...,n)$, $P_{\bm n}$ satisfies a linear differential equation of order $\sigma+2$. A kind of opposite case, when distinct classical ($\sigma=0$) weights $w_j$ are supported on the same contour $\Gamma$, was analyzed in \cite{AptBraVA}, where again a linear ODE of order $r+1$, with $r$ the number of weights, was derived.
In a clear resemblance to the definition of the polynomial $R_{\bm n}$ in \eqref{defDiffOperators}--\eqref{defRpolyn}, let
\begin{equation}\label{defEFalt2}
E_{\bm n}:=A_1^2A_2^2v_1 v_2 \,
\det\begin{pmatrix}P_{\bm n}&q_{\bm n,1}&q_{\bm n ,2}\\P_{\bm n}''&q_{\bm n,1}''&q_{\bm n ,2}''\\
P_{\bm n}'''&q_{\bm n,1}'''&q_{\bm n ,2}'''\end{pmatrix}\,,\quad
F_{\bm n}:=A_1^2A_2^2v_1 v_2\,
\det\begin{pmatrix}P_{\bm n}'&q_{\bm n,1}'&q_{\bm n ,2}'\\P_{\bm n}''&q_{\bm n,1}''&q_{\bm n ,2}''\\
P_{\bm n}'''&q_{\bm n,1}'''&q_{\bm n ,2}'''\end{pmatrix}.
\end{equation}
In this section we maintain the assumption \eqref{mainconditionwronks}.
\begin{thm} \label{thmODER}
\begin{enumerate}[a)]
\item Functions $E_{\bm n}$ and $F_{\bm n}$, defined above, are polynomials, with
\begin{align*}
\deg(E_{\bm n}) & \leq \max\{\deg(A_1)+\sigma_2+\deg(R_{\bm n}),\deg(A_2)+\sigma_1+\deg(R_{\bm n}),\deg(B_1)+\deg(B_2)+\deg(R_{\bm n})\}, \\
\deg(F_{\bm n}) & \leq \sigma_1+\sigma_2+\deg(R_{\bm n})+1,
\end{align*}
where $\sigma_i$ is defined in \eqref{def:classHP}.
\item $P_{\bm n}$, $q_{\bm n,1}$ and $q_{\bm n,2}$ are solutions of the linear differential equation with polynomial coefficients
\begin{equation}\label{ode3}
A_1A_2R_{\bm n}y'''+\left[ (A_1(2A_2'+B_2)+A_2(2A_1'+B_1))R_{\bm n}-A_1A_2R_{\bm n}' \right] y''+E_{\bm n}y'+F_{\bm n}y=0.
\end{equation}
\item
In the particular case when $A_1=A_2=A$, $B_1=B_2=B$ (so that $w_1=w_2$), and $\sigma=1$, $n_1=n_2$, the differential equation \eqref{ode3} reduces to
\begin{equation} \label{ode3bis}
A^2y'''+2A(A'+B)y''+E_{\bm n}^*\, y'+F_{\bm n}^*\, y=0,
\end{equation}
where $E_{\bm n}^*$ and $F_{\bm n}^*$ are polynomials of degree at most $4$ and $3$, respectively.
\end{enumerate}
\end{thm}
\begin{proof}
Recall the second order differential operators introduced in \eqref{defDiffOperators},
$$
\mathcal L_i[y]:= A_i v_i \, \mathfrak{Wrons}[y, P_{\bm n}, q_{\bm n,i} ], \quad i=1, 2.
$$
By \eqref{diffOp1} (see Remark~\ref{remark:operator}),
$$
\mathcal{L}_i[y]= A_i S_{\bm n, i}\, y''+(A'_i S_{\bm n, i}-A_i S_{\bm n, i}'+B_iS_{\bm n, i})\, y'+C_{\bm n,i}y.
$$
Clearly, $\mathcal{L}_i[P_{\bm n}]=\mathcal{L}_i[q_{\bm n,i}]=0$, and by \eqref{defRpolyn},
\begin{equation} \label{expressionEll}
\mathcal{L}_1[q_{\bm n,2}]=\frac{R_{\bm n}}{A_2 v_2}, \quad \mathcal{L}_2[q_{\bm n,1}]=-\frac{R_{\bm n}}{A_1 v_1}.
\end{equation}
Consider the third order linear differential operator
\begin{equation} \label{constructionL}
\mathcal{M}[y]:=\frac{A_2^2v_2}{S_{\bm n,1}}\left( \mathcal{L}_1[q_{\bm n,2}] \left(\mathcal{L}_1[y]\right)'
-\left( \mathcal{L}_1[q_{\bm n,2}] \right)'\mathcal{L}_1[y]\right).
\end{equation}
By construction, the differential equation $\mathcal{M}[y]=0$ is solved by $P_{\bm n}$, $q_{\bm n,1}$ and $q_{\bm n,2}$. Using \eqref{defRpolyn} and \eqref{expressionEll} we can find the explicit expressions for the coefficients of $\mathcal M$. For instance, the coefficient at $y'''$ is
$$\frac{A_2^2v_2}{S_{\bm n,1}}\frac{R_{\bm n}}{A_2 v_2}A_1S_{\bm n,1}=A_1A_2R_{\bm n}.$$
The one at $y''$ is
\begin{align*}
&\frac{A_2^2v_2}{S_{\bm n,1}}\left(\frac{R_{\bm n}}{A_2 v_2}\Big((A_1S_{\bm n,1})'+(-A_1S_{\bm n,1}'+A_1'S_{\bm n,1}+B_1S_{\bm n,1})\Big)-\left(\frac{R_{\bm n}}{A_2 v_2}\right)'A_1S_{\bm n,1}\right)\\
=&\frac{A_2^2v_2}{S_{\bm n,1}}\left(\frac{R_{\bm n}}{A_2 v_2}\left(2A_1'S_{\bm n,1}+B_1S_{\bm n,1}\right)-\frac{R_{\bm n}}{A_2 v_2}\left(\frac{R_{\bm n}'}{R_{\bm n}}-\frac{A_2'}{A_2}-\frac{v_2'}{v_2}\right)A_1S_{\bm n,1}\right)\\
=&A_2R_{\bm n}\left(\left(2A_1'+B_1\right)-\left(\frac{R_{\bm n}'}{R_{\bm n}}-\frac{A_2'}{A_2}-\frac{v_2'}{v_2}\right)A_1\right)\\
=&A_2R_{\bm n}\left(\left(2A_1'+B_1\right)-\left(\frac{R_{\bm n}'}{R_{\bm n}}-\frac{A_2'}{A_2}-\frac{A_2'+B_2}{A_2}\right)A_1\right)\\
=&A_1(2A_2'+B_2)R_{\bm n}+A_2(2A_1'+B_1)R_{\bm n}-A_1A_2R_{\bm n}'.
\end{align*}
Similar calculations for the rest of the coefficients show that
$$
\mathcal{M}[y]=A_1A_2R_{\bm n}y'''+(A_1(2A_2'+B_2)R_{\bm n}+A_2(2A_1'+B_1)R_{\bm n}-A_1A_2R_{\bm n}')y''+E_{\bm n}y'+F_{\bm n}y,
$$
where
\begin{align}
E_{\bm n}&=\frac{-A_1A_2R_{\bm n}S_{\bm n,1}''+((-A_1(2A_2'+B_2)+A_2B_1)R_{\bm n}+A_1A_2R_{\bm n}')S_{\bm n,1}'+A_2R_{\bm n}C_{\bm n,1}}{S_{\bm n,1}}\nonumber\\
&+(A_1''A_2+A_1'(2A_2'+B_2)+2A_2'B_1+A_2B_1'+B_1B_2)R_{\bm n}-A_2(A_1'+B_1)R_{\bm n}'\,,\label{def:E}
\end{align}
and
\begin{equation}\label{def:F}
F_{\bm n}=\frac{(A_2C_{\bm n,1}'+(B_2+2A_2')C_{\bm n,1})R_{\bm n}-A_2C_{\bm n,1}R_{\bm n}'}{S_{\bm n,1}}.
\end{equation}
Construction \eqref{constructionL} can be carried out by exchanging the roles of the indices $i=1$ and $i=2$. It yields a third order linear differential equation with the same coefficient at $y'''$. Since $P_{\bm n}$, $q_{\bm n,1}$ and $q_{\bm n,2}$ are linearly independent (assumption \eqref{mainconditionwronks}), we conclude that this is the same ODE. In other words, \eqref{def:E} and \eqref{def:F} are invariant under the exchange of the indices $i=1$ and $i=2$.
Moreover, a third order linear differential equation with the same set of solutions is
$$
A_1^2A_2^2v_1v_2\, \mathfrak{Wrons}[P_{\bm n},q_{\bm n,1},q_{\bm n ,2},y]=0.
$$
Again, by \eqref{defRpolyn}, the coefficient at $y'''$ is $A_1A_2R_{\bm{n}}$, which shows that
\begin{equation} \label{identityL}
\mathcal M[y]=A_1^2A_2^2v_1v_2\, \mathfrak{Wrons}[P_{\bm n},q_{\bm n,1},q_{\bm n ,2},y].
\end{equation}
In particular, functions $E_{\bm n}$ and $F_{\bm n}$ in \eqref{def:E} and \eqref{def:F} coincide with those defined in \eqref{defEFalt2}.
From \eqref{def:E} and \eqref{def:F} it follows that $E_{\bm n}$ and $F_{\bm n}$ are rational functions. However, expression \eqref{identityL} and assertion c) of Proposition~\ref{prop:fromTHM2.1} imply that all their possible poles are in fact removable singularities, so that they are polynomials. Since by \eqref{odegeneral}, $\deg(C_{\bm n,1})-\deg(S_{\bm n,1})\leq \sigma_1$, identities \eqref{def:E} and \eqref{def:F} imply the claimed upper bounds for the degrees of $E_{\bm n}$ and $F_{\bm n}$. This proves a) and b) of the statement of the theorem.
Finally, let $A_1=A_2=A$, $B_1=B_2=B$ (so that $w_1 =w_2 $), and $\sigma=1$, $n_1=n_2$.
In this situation, $R_{\bm n}= cA^2$, $c\in \R\setminus \{0\}$ (see Remark~\ref{remark:equal}), so that by \eqref{def:E},
\begin{equation}\label{def:E2}
E_{\bm n}= c A^2 \left( A\frac{-AS_{\bm n,i}''+C_{\bm n,i}}{S_{\bm n,i}}
+ AA''+A'B+AB'+B^2\right),
\end{equation}
while by \eqref{def:F},
\begin{equation}\label{def:F2}
F_{\bm n}=c A^2\frac{BC_{\bm n,i}+AC_{\bm n,i}'}{S_{\bm n,i}}.
\end{equation}
Dividing \eqref{ode3} through by $cA^2$ we get \eqref{ode3bis}.
\end{proof}
\section{Further examples}\label{sec:examples}
In this section, we discuss several examples of multiple orthogonal polynomials.
\subsection{Jacobi polynomials}\label{sec:complex}
\
Let us return to Jacobi polynomials $P_N=P_N^{(\alpha, \beta)}$ in the non-standard situation considered already in Example \ref{ex:Jacobi1}, namely, when $\alpha, \beta, \alpha + \beta \notin \mathbb Z$, $\beta> -1$, and $-N<\alpha < -1$. As shown in \cite[Theorem 6.1]{ETNA05}, in this case $P_N$ is a type II multiple orthogonal polynomial. Indeed, on one hand it satisfies
$$
\int_{-1}^1 x^j P_N(x) w_1(x)dx=0\,,\quad j=0,1,\dots, n_1-1,
$$
where $\Delta_1= [-1,1]$, $n_1=N-[-\alpha]$, and $w_1(x)=(x-1)^{\alpha+[-\alpha]}(x+1)^\beta $ (see Example \ref{ex:Jacobi1}). On the other hand,
$$
\int_{\Delta_2} z^j P_N(z) w_2(z)dz=0\,,\quad j=0,1,\dots, n_2-1,
$$
where $\Delta_2$ is an arbitrary curve oriented clockwise, connecting $-1 - i 0$ with $-1 + i 0$ and lying
entirely in $\C \setminus (-\infty, 1]$, except for its endpoints, $n_2=[-\alpha]$, and $w_2(z)=(z-1)^{\alpha }(z+1)^\beta $.
We have established in Example \ref{ex:Jacobi1} that
$$
S_{\bm n, 1}(x)=(x-1)^{[-\alpha]},
$$
as well as that $C_{\textbf{n},1}(x) = -\lambda_{\textbf{n},1} S_{\textbf{n},1}(x)$. As for $S_{\bm{n},2}$, we lack in this case the second condition in \eqref{defHP}, that is, we cannot guarantee that
\begin{equation}\label{nonzero}
\int_{\Delta_2} z^{n_2} P_N(z) w_2(z)dz\neq 0,
\end{equation}
(and in general, this is false), which makes the formula \eqref{leadingC} of no value.
Reasoning as in Example \ref{ex:Jacobi1} and combining \eqref{ode1} and the standard differential equation for the Jacobi polynomials we arrive at the identity
\begin{equation}\label{id1S2}
(x^2-1) S'_{\textbf{n},2}(x) P'_{N}(x) = (\lambda_N S_{\bm {n},2}(x) + C_{\bm {n},2}(x))\,P_{N}(x)\,.
\end{equation}
Again, we have two options: either $S'_{\bm {n},2} \equiv 0$ and, thus,
\begin{equation}\label{S2}
S_{\bm {n},2}(x) \equiv \;\text{const.}\;,\;\;C_{\bm {n},2} = - \lambda_N S_{\bm {n},2} \equiv \;\text{const}\;,
\end{equation}
or, otherwise, the identity
\begin{equation}\label{id2S2}
\frac{P'_{N}(x)}{P_{N}(x)}\,=\,\frac{\lambda_N S_{\bm {n},2}(x) + C_{\bm {n},2}(x)}{(x+1) \left((x-1) S'_{\bm {n},2}(x) - [-\alpha] S_{\bm {n},2}(x)\right) }
\end{equation}
holds. In this case, the facts that $P_{N}$ and $P'_{N}$ are relatively prime and that the degree of $S_{\bm {n},2}$ is at most $N-[-\alpha]\,,$ with $-N <\alpha <-1\,,$ imply that this cannot take place, and thus, \eqref{S2} is the unique possible solution.
We already saw in Example \ref{ex:Jacobi1} that the zeros of $P_{\bm {n}}$ were in equilibrium (that is, their counting measure was critical) in the external field \eqref{extFieldJacobi}. From \eqref{expressionS_NJac}, \eqref{expreforR} and \eqref{S2}, and taking into account that in this case $A_1 = A_2 = A$, we have that $R_{\bm n} \equiv 0$. Moreover, the zeros of $S_{\bm {n},1}$ are obviously not simple, and in such a case we cannot say anything about electrostatics for its zeros.
Obviously, what fails in this case is \eqref{nonzero}, which implies that the index $\bm n=(N-[-\alpha], [-\alpha])$ is not normal.
\subsection{Multiple Hermite polynomials} \label{sec:multipleHermite}
Multiple Hermite polynomials $\frak H_{\bm n}$, $\bm n=(n_1, n_2)$, are type II MOP of degree $\leq N=n_1+n_2$, defined by
$$
\begin{aligned}
&\int_{-\infty}^{\infty} x^{k} \frak H_{\bm n}(x) e ^{-x^{2}+c_{1} x} \, dx=0, \quad k=0,1, \ldots, n_1-1 ,\\
&\int_{-\infty}^{\infty} x^{k} \frak H_{\bm n}(x) e ^{-x^{2}+c_{2} x} \, dx=0, \quad k=0,1, \ldots, n_2-1.
\end{aligned}
$$
If $c_{1} \neq c_{2}$ then the weights $w_i(x)=e ^{-x^{2}+c_{i} x} $ form an AT-system, see \cite{MR2187942}, \cite[section 23.5]{Ismail05} and \cite[section 3.4]{MR1808581}.
These MOP can be obtained using the Rodrigues formula
$$
e^{-x^{2}} \frak H_{\bm n}(x)=(-1)^{N} 2^{-N}\left(\prod_{j=1}^{2} e^{-c_{j} x} \frac{d^{n_{j}}}{d x^{n_{j}}} e^{c_{j} x}\right) e^{-x^{2}},
$$
which yields the explicit expression
$$
\frak H_{\bm n}(x)=(-1)^{|\vec{n}|} 2^{-|\vec{n}|} \sum_{k_{1}=0}^{n_{1}} \sum_{k_{2}=0}^{n_{2}} \binom{n_1}{ k_1} \binom{n_2}{ k_2}
c_{1}^{n_{1}-k_{1}} c_{2}^{n_{2}-k_{2}}(-1)^{k_1+k_2} H_{k_1+k_2}(x),
$$
where $H_n(x)=2^n x^n +\dots$ is the standard Hermite polynomial \eqref{hermiteH}, see \cite[Section 23.5]{Ismail05} or \cite{MR3907776}.
In our notation,
$$
A_1(x)=A_2(x)\equiv 1, \quad B_i(x)=-2x+c_i , \quad \sigma_i=0, \quad i=1, 2,
$$
with $\Delta_1=\Delta_2=\mathbb R$.
The differential equation \eqref{ode3} takes the form
$$
R_{\bm n}y'''+\left[ (B_1 + B_2)R_{\bm n}- R_{\bm n}' \right] y''+E_{\bm n}y'+F_{\bm n}y=0.
$$
Formula \eqref{degRoverAparticularcase} shows that in this case $ R_{\bm n}$ is a constant, so that the equation boils down to
$$
y'''+ (B_1 + B_2) y''+E_{\bm n}y'+F_{\bm n}y=0,
$$
where by Theorem \ref{thmODER}, $E_{\bm n}$ and $F_{\bm n}$ have degree at most $2$ and $1$, respectively. These polynomials can be obtained explicitly by taking into account the behavior of the solutions of the equation at the singular points. Indeed, if we write
$$E_{\bm n}(x)=e_0+e_1x+e_2x^2\,,\qquad F_{\bm n}(x)=f_0+f_1 x\,,$$
and substitute the asymptotic behavior of $q_{\bm n,1}(x)$ as $x\to\infty$ (for instance, along the imaginary axis),
$$q_{\bm n,1}(x)=\text{const} \times e^{x^2-c_1x}x^{-n_1-1}(1+\mathcal O(1/x)),$$
into the differential equation, we get consecutively
$$e_2=4,\qquad e_1=-2c_1-2c_2,\qquad e_0=-2+c_1c_2-\frac{f_1}{2},\qquad f_0=2c_2n_1-2c_1n_1-\frac{c_1f_1}{2}.$$
An analogous procedure for $q_{\bm n,2}$, together with the previous identities, yields
$$f_1=-4n_1-4n_2\,.$$
As a consequence, we get explicit expressions for polynomials $E_{\bm n}$ and $F_{\bm n}$, which can be expressed as follows:
$$E_{\bm n}=B_1B_2+2(n_1+n_2-1)\,,\qquad F_{\bm n}=2n_2B_1+2n_1B_2.$$
Thus, $\frak H_{\bm n}$ and the corresponding functions of the second kind $q_{\bm n,1}$ and $q_{\bm n,2}$ are independent solutions of the equation
$$
y'''+ (B_1 + B_2) y''+(B_1B_2+2(n_1+n_2-1))y'+(2n_2B_1+2n_1B_2)y=0,
$$
which coincides with the one obtained previously in \cite[section 5.1]{MR3055365}.
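This equation is straightforward to test with a computer algebra system. The following \texttt{Python}/\texttt{sympy} sketch (added here only as an illustration; the index $\bm n=(2,1)$ and the values $c_1=1$, $c_2=-1$ are arbitrary admissible choices) builds $\frak H_{\bm n}$ directly from the defining orthogonality conditions and substitutes it into the equation above:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
n1, n2 = 2, 1
c1, c2 = sp.Integer(1), sp.Integer(-1)      # any c1 != c2 works
N = n1 + n2

def moment(m, c):
    # m-th moment of the weight exp(-x^2 + c*x) on the real line
    return sp.integrate(x**m * sp.exp(-x**2 + c*x), (x, -sp.oo, sp.oo))

# monic ansatz and the defining orthogonality conditions
u = sp.symbols(f'u0:{N}')
H = x**N + sum(u[j]*x**j for j in range(N))
eqs = []
for n_i, c_i in [(n1, c1), (n2, c2)]:
    for k in range(n_i):
        pk = sp.Poly(sp.expand(x**k * H), x)
        eqs.append(sum(cf*moment(m, c_i) for (m,), cf in pk.terms()))
H = sp.expand(H.subs(sp.solve(eqs, u, dict=True)[0]))

# third-order equation stated above
B1, B2 = -2*x + c1, -2*x + c2
ode = (sp.diff(H, x, 3) + (B1 + B2)*sp.diff(H, x, 2)
       + (B1*B2 + 2*(n1 + n2 - 1))*sp.diff(H, x)
       + (2*n2*B1 + 2*n1*B2)*H)
print(H)                 # the (monic) multiple Hermite polynomial
print(sp.expand(ode))    # expected output: 0
\end{verbatim}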
By Theorem \ref{thm:vectorelectrostatics},
the vector discrete measure $\vec \nu_1:=(\nu(\frak H_{\bm n}),\nu(S_{\bm n,1}))$ is a vector critical measure for the energy functional $\mathcal{E}_{\vec \varphi, a}$, with $a=-1/2$ and
$$
\vec \varphi (z)=\frac{1}{2} \left( x^2-c_1 x , (c_1-c_2) x \right), \quad z=x+iy.
$$
Direct computation shows that
$$
m^{\pm}_k:=\int_{-\infty}^\infty x^k e ^{-x^{2}\pm x}dx =\sqrt{\pi } \sqrt[4]{e} \left(\frac{\pm 1}{2i}\right)^k H_k\left(\frac{i}{2}\right), \quad k=0, 1, \dots ,
$$
and that
$$
\int_{-\infty}^\infty x^k e ^{-x^{2}+c x}dx =e^{(c^2-1)/4} \sum_{j=0}^k \binom{k}{j} \left( \frac{c-1}{2}\right)^{k-j} m^+_j, \quad c\neq 1, \quad k=0, 1, \dots .
$$
These formulas allow us to obtain (at least, using symbolic computation) the moments of $w_i$, and in consequence, the asymptotic expansion of $\mathfrak C_{w_i}[\frak H_{\bm n}]$ at infinity. This yields $S_{\bm n, i}$ by formula \eqref{defS12}.
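As a quick sanity check of the first of these formulas (added for the reader's convenience; it is not needed in the arguments), one can let \texttt{sympy} compare both sides of the identity for $m^{\pm}_k$ for the first few values of $k$:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
for sgn in (1, -1):
    for k in range(5):
        lhs = sp.integrate(x**k * sp.exp(-x**2 + sgn*x), (x, -sp.oo, sp.oo))
        rhs = (sp.sqrt(sp.pi) * sp.exp(sp.Rational(1, 4))
               * (sp.Integer(sgn)/(2*sp.I))**k * sp.hermite(k, sp.I/2))
        print(sgn, k, sp.simplify(lhs - rhs))    # expected: 0 in every case
\end{verbatim}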
In the following examples we will consider the symmetric case $n_1=n_2$ and $c_1=-c_2=c$, for which clearly $S_{\bm n, 1}(-x)$ and $S_{\bm n, 2}(x)$ coincide up to a multiplicative constant. In this case, explicit formulas for $\frak H_{(n,n) }$ and $S_{\bm n, 1}$ are easily obtained with the help of a computer algebra system, at least for low $n$'s.
For instance, for $\bm n=(5,5)$ and $c=1$ we have that
$$
\frak H_{(5,5) }(x)=x^{10}-\frac{95 x^8}{4}+\frac{1405 x^6}{8}-\frac{14855 x^4}{32}+\frac{94325 x^2}{256}-\frac{39971}{1024},
$$
and up to normalization,
$$
S_{\bm n, 1}(x)=S_{\bm n, 2}(-x)=32 x^5+560 x^4+4240 x^3+17560 x^2+39970 x+39971 .
$$
All zeros of $\frak H_{(5,5) }$ are real and simple, while $S_{\bm n, 1}$ has one real zero (smaller than the zeros of $\frak H_{(5,5) }$) and two pairs of complex conjugate simple zeros.
Asymptotics of sequences of (rescaled) multiple Hermite polynomials
$$
p_{(n,n)}(t) = \frak H_{(n,n) }\left( \sqrt{n }\, t \right), \quad
$$
with $c_1=-c_2=c$ proportional to $\sqrt{n}$, has been studied by Aptekarev, Bleher and Kuijlaars in a series of papers \cite{MR2172687, Bleher/Kuijlaars1, MR2276453} in the context of the random matrix theory. In particular, they found that the support of the limit of the zero-counting measures $\nu(p_{(n,n)})$ is a single interval, roughly speaking, for $0\le c \ll 2\sqrt{n}$, and is comprised of two symmetric intervals for $c \gg 2\sqrt{n}$. It is interesting to compare these conclusions with results of the numerical experiments presented in Figure~\ref{FigHermite35}. There, $\bm n=(35,35)$ and $c_1=-c_2=c>0$, with the phase transition happening around $c\approx 12$. Notice that for $c=1$, the zeros of $S_{\bm n, 1}$ are visibly distributed along a curve on the complex plane. As $c$ increases, more and more zeros of $S_{\bm n, 1}$ migrate to the negative semi-axis, interlacing with the zeros of $\frak H_{(n,n) }$, until we get a two-cut situation. In this case, the configuration resembles the relative position of the zeros of $\frak H_{(n,n) }$ and $S_{\bm n, 1}$ for the Angelesco system, described in Section \ref{sec:angelesco}, which explains why the description of the asymptotic limit of $\nu(\frak H_{\bm n})$ in this case is given in \cite{Bleher/Kuijlaars1} in terms of the Angelesco vector equilibrium problem, see Section~\ref{sec:AsymptAngelesco}.
\begin{figure}[h]
\centering
\hspace{-3mm} \begin{tabular}{cc}
\begin{overpic}[scale=0.6]{Fig1a}
\end{overpic}
&
\begin{overpic}[scale=0.6]{Fig1b}
\end{overpic} \\
\begin{overpic}[scale=0.6]{Fig1c}
\end{overpic}
&
\begin{overpic}[scale=0.6]{Fig1d}
\end{overpic}
\end{tabular}
\caption{Zeros of the multiple Hermite polynomial $\frak H_{\bm n}$ (indicated by empty circles, all on the real axis) and of $S_{\bm n, 1}$ (filled circles) for $\bm n=(35,35)$, $c_1=-c_2$, for different values of $c$: $c=1$ (top left), $c=4$ (top right), $c=8$ (bottom left) and $c=16$ (bottom right). The zeros of $S_{\bm n, 1}$ that are on the left semi-axis apparently interlace with the zeros of $\frak H_{\bm n}$.} \label{FigHermite35}
\end{figure}
A generalization of multiple Hermite polynomials to the case of polynomials $P_{\bm n}$, $\bm n = (n,n)$, satisfying the varying orthogonality conditions
$$
\int_{-\infty}^{\infty} x^{k} P_{\bm n}(x) e ^{-n(V(x)\pm c x)} \, dx=0, \quad k=0,1, \dots, n -1,
$$
where $V$ is a polynomial of even degree and positive leading coefficient, has been carried out in \cite{MR2743878}, again associated to random matrix models with external source. The limit zero distribution of $P_{\bm n}$'s was described there in terms of a constrained 2-component vector equilibrium problem, with one of the components on the imaginary axis. The two-cut situation in the asymptotics of multiple Hermite polynomials, and thus the reduction to the Angelesco equilibrium, is in this case equivalent to the constraint on the imaginary axis to be not achieved (not ``saturated''). The study in \cite{MR2743878} has been extended in \cite{AMFS21} (see also \cite{ALT2011} for the quartic case) to a non-symmetric situation, showing that it can be alternatively characterized in terms of a 3-component vector critical measure with the interaction matrix from Corollary~\ref{cor:asymptotics3}. The curves outlined by the zeros of $P_{\bm n}$ and $S_{\bm n, 1}$ in Figure~\ref{FigHermite35} are consistent with the support of the three components described in \cite[Theorem C]{AMFS21}.
\subsection{Multiple Laguerre polynomials of the first kind} \label{sec:MultLaguerreIkind}
These polynomials are defined by the orthogonality conditions
$$
\begin{aligned}
&\int_{0}^{\infty} x^{k} \frak L_{\bm n}(x) x^{\alpha_{1}} e^{-x} \, d x=0, \quad k=0,1, \ldots, n_1-1 \\
&\int_{0}^{\infty} x^{k} \frak L_{\bm n}(x) x^{\alpha_{2}} e^{-x} \mathrm{~d} x=0, \quad k=0,1, \ldots, n_2-1,
\end{aligned}\quad \deg \frak L_{\bm n} \leq N=n_1+n_2,
$$
where $\alpha_{1}, \alpha_{2}>0$ and $\alpha_{1}-\alpha_{2} \notin \mathbb{Z}$, under which condition the weights form an AT-system; see \cite{AptBraVA}, \cite[section 23.4.1]{Ismail05}, and \cite[Section 3.2]{MR1808581}. Not only that, since $x^\beta$, for $\beta<0$ and $x>0$, can be written as the Cauchy integral of a positive weight on $(-\infty, 0)$, coinciding up to a multiplicative constant with $|x|^\beta$, we conclude that this pair of weights forms a Nikishin system with $\Delta_1=\Delta_2=[0,+\infty)$ and $[c,d]=[-\infty, 0]$, see Section \ref{sec:nikishin}.
Polynomials $\frak L_{\bm n}$ can be obtained by using the Rodrigues formula,
$$
(-1)^{N} e^{-x} \frak L_{\bm n}(x)=\prod_{j=1}^{2}\left(x^{-\alpha_{j}} \frac{d^{n_{j}}}{d x^{n_{j}}} x^{n_{j}+\alpha_{j}}\right) e^{-x}, \quad N=n_1+n_2,
$$
from which one can find the explicit expression
\begin{align*}
(-1)^{N} e^{-x} \frak L_{\bm n}(x) & = \left(\alpha_{1}+1\right)_{n_{1}} \left(\alpha_{2}+1\right)_{n_{2}} {}_{2} F_{2}
\left(\begin{array}{c}
\alpha_{1}+n_{1}+1, \alpha_{2}+n_{2}+1 \\
\alpha_{1}+1, \alpha_{2}+1
\end{array} \bigg| -x\right).
\end{align*}
In our notation,
$$
A_1(x)=A_2(x)=x, \quad B_i(x)= \alpha_i - x, \quad \sigma_i=0, \quad i=1, 2,
$$
with $\Delta_1=\Delta_2=[0,+\infty)$. These polynomials satisfy the differential equation \eqref{ode3}, which takes the form
\begin{equation} \label{laguerremult}
x^2 R_{\bm n}(x) y'''(x)+\left[ (B_1(x)+B_2(x)+4) xR_{\bm n}(x)- x^2 R_{\bm n}' (x)\right] y''(x)+E_{\bm n}(x)y'(x)+F_{\bm n}(x)y(x)=0.
\end{equation}
It follows from formula \eqref{degRoverAparticularcase} that $ R_{\bm n}(x)/x$ is a constant. Thus, the equation reduces to
$$
x^3 y'''+ x^2 \left[ B_1(x)+B_2(x)+3 \right] y''+E_{\bm n}y'+F_{\bm n}y=0,
$$
where $E_{\bm n}$ and $F_{\bm n}$ are of degrees at most $3$ and $2$, respectively. As in Section~\ref{sec:multipleHermite}, the asymptotics of the corresponding functions of the second kind $q_{\bm n,1}$ and $q_{\bm n,2}$ at $\infty$ yields some constraints on the coefficients of $E_{\bm n}$ and $F_{\bm n}$, which unfortunately are not sufficient to determine the polynomials in this case. We also need to make use of the predicted behavior of the solutions at the origin.
Notice that $0$ is a regular singular point (a Fuchsian singularity) of \eqref{laguerremult}. The fact that the weights constitute an AT-system on $[0,+\infty)$ implies also that $\frak L_{n_1, n_2}(0)\neq 0$. In consequence, $E_{\bm n}(0)=0$ and $F_{\bm n}(0)=0$. Expanding the solutions at the origin we conclude that the indicial polynomial must vanish at $0$, $-\alpha_1$ and $-\alpha_2$. All this additional information allows us to determine $E_{\bm n}$ and $F_{\bm n}$:
\begin{align*}
E_{\bm n}&=x(x^2+(n_1+n_2-\alpha_1-\alpha_2-3)x+(1+\alpha_1)(1+\alpha_2))\,,\\
F_{\bm n}&=x(-(n_1+n_2)x+n_1+n_2+n_1n_2+\alpha_1n_2+\alpha_2n_1)\,.
\end{align*}
Canceling the common factor $x$ in the four coefficients of \eqref{laguerremult} yields the equation that appeared already in \cite[Section 4.3]{AptBraVA},
$$
\begin{aligned}
x^{2} y''' (x) &+\left(-2 x^{2}+\left(\alpha_{1}+\alpha_{2}+3\right) x\right) y''(x)+\left(x^{2}-x\left(\alpha_{1}+\alpha_{2}-n_1-n_2+3\right)\right.\\
&\left.+\left(\alpha_{1}+1\right)\left(\alpha_{2}+1\right)\right) y'(x)-\left(x(n_1+n_2)-\left(n_1+n_2+n_1 n_2+\alpha_{1} n_2 +\alpha_{2} n_1\right)\right) y(x)=0.
\end{aligned}
$$
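As in Section~\ref{sec:multipleHermite}, this equation can be verified symbolically. The following \texttt{Python}/\texttt{sympy} sketch (an added illustration; the index $(2,1)$ and the values $\alpha_1=1/2$, $\alpha_2=1$ are arbitrary admissible choices) constructs $\frak L_{\bm n}$ from the orthogonality conditions, whose moments are Gamma values, and substitutes it into the equation:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
n1, n2 = 2, 1
a1, a2 = sp.Rational(1, 2), sp.Integer(1)    # alpha_1, alpha_2
N = n1 + n2
moment = lambda m, al: sp.gamma(m + al + 1)  # moments of x^alpha e^{-x} on (0, oo)

u = sp.symbols(f'u0:{N}')
L = x**N + sum(u[j]*x**j for j in range(N))
eqs = []
for n_i, al in [(n1, a1), (n2, a2)]:
    for k in range(n_i):
        pk = sp.Poly(sp.expand(x**k * L), x)
        eqs.append(sum(cf*moment(m, al) for (m,), cf in pk.terms()))
L = sp.expand(L.subs(sp.solve(eqs, u, dict=True)[0]))

ode = (x**2*sp.diff(L, x, 3) + (-2*x**2 + (a1 + a2 + 3)*x)*sp.diff(L, x, 2)
       + (x**2 - x*(a1 + a2 - n1 - n2 + 3) + (a1 + 1)*(a2 + 1))*sp.diff(L, x)
       - (x*(n1 + n2) - (n1 + n2 + n1*n2 + a1*n2 + a2*n1))*L)
print(sp.expand(ode))    # expected output: 0
\end{verbatim}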
From the conclusions of Section~\ref{sec:nikishin} it follows that the zeros of $\frak L_{\bm n}$ are all positive, and those of $S_{\bm n,1}$ are negative. By Theorem \ref{thm:vectorelectrostatics},
the vector discrete measure $\vec \nu_1:=(\nu(\frak L_{\bm n}),\nu(S_{\bm n,1}))$ is a vector critical measure for the energy functional $\mathcal{E}_{\vec \varphi, a}$, with $a=-1/2$ and
$$
\vec \varphi (x)=\frac{1}{2} \left( x-(\alpha_1+1) \log(x) , (\alpha_1-\alpha_2-1)\log|x| \right) .
$$
Direct computations show that, for the values $\alpha_1=1/2$ and $\alpha_2=1$ used in the example below,
$$
\int_{0}^\infty x^{k+\alpha_j} e ^{- x}dx =\begin{cases}
\sqrt{\pi} \, 2^{-k-1} (2k+1)!! , & j=1, \\
(k+1)!, & j=2,
\end{cases} \quad k=0, 1, \dots ,
$$
which allows us to find the moments of $w_i$, the asymptotic expansion of $\mathfrak C_{w_i}[\frak L_{\bm n}]$ at infinity, and in consequence, $S_{\bm n, i}$ (using formula \eqref{defS12}).
Let us consider the particular case of $n_1=n_2=n$, with $\alpha_1=1/2$, $\alpha_2=1$.
Then, for $\bm n=(5,5)$,
\begin{align*}
\frak L_{(5,5) }(x) = & x^{10}-\frac{165 x^9}{2}+\frac{5445 x^8}{2}-\frac{186615 x^7}{4} +\frac{7224525 x^6}{16}-\frac{80613225 x^5}{32}+\frac{127182825
x^4}{16} \\
& -\frac{107120475 x^3}{8}+10758825 x^2-\frac{13253625 x}{4}+\frac{467775}{2} ,
\end{align*}
and up to normalization,
\begin{align*}
S_{\bm n, 1 }(x) & = 8 x^5+2720 x^4+107500 x^3+1945020 x^2+46682295 x+1425581520, \\
S_{\bm n, 2}(x) & = 32 x^5+2960 x^4+67424 x^3+1313480 x^2+37066290 x+1173966885.
\end{align*}
All zeros of $\frak L_{(5,5) }$ are positive and simple, while each $S_{\bm n, j}$ has one negative zero and two pairs of complex conjugate zeros, all of them simple.
Asymptotics of sequences of (rescaled) multiple Laguerre polynomials of the first kind
$$
p_{(n,n)}(t) = \frak L_{(n,n) }\left( 2 n t \right)
$$
was obtained in \cite{zbMATH05356762} and \cite{MR3471160}. It was shown that the support of the weak-* limit of the zero-counting measures $\nu(p_{(n,n)})$ is the interval $\left[ 0, 27/8 \right]$, with the density presenting the usual square root vanishing at the rightmost endpoint of the support. The expression of the density was derived from the recurrence relation satisfied by polynomials $\frak L_{(n_1,n_2) }$ and no equilibrium problem associated to that distribution was given.
\begin{figure}[h]
\centering
\begin{overpic}[scale=0.7]{Fig2}
\end{overpic}
\caption{Zeros of the multiple Laguerre polynomial of the first kind $\frak L_{\bm n}$ (indicated by empty circles, all on the positive semiaxis) and of $S_{\bm n, 1}$ (filled circles, all on the negative semiaxis) for $\bm n=(35,35)$ and $\alpha_1=1/2$, $\alpha_2=1$. Nine real zeros of $S_{\bm n, 1}$ (ranging from $-74000$ to $-201.53$) are not represented. } \label{FigLaguerreI35}
\end{figure}
Again, it is interesting to compare these conclusions with results of the numerical experiments presented in Figure~\ref{FigLaguerreI35}, where we take $\bm n=(35,35)$, with $\alpha_1=1/2$, and $\alpha_2=1$.
The largest zero of $\frak L_{(35,35) }$ is $217.597$, which is consistent with the expected value of
$$
\frac{27}{8} \times 70 = 236.25.
$$
\subsection{Multiple Laguerre polynomials of the second kind} \label{sec:multipleLag2kind}
These polynomials are defined by the orthogonality conditions
$$
\begin{aligned}
&\int_{0}^{\infty} x^{k} \mathcal L_{\bm n}(x) x^{\alpha} e^{-c_{1} x} \, dx=0, \quad k=0,1, \ldots, n_1-1, \\
&\int_{0}^{\infty} x^{k} \mathcal L_{\bm n}(x) x^{\alpha} e^{-c_{2} x} \, dx=0, \quad k=0,1, \ldots, n_2-1,
\end{aligned}
$$
where we assume that $\alpha>0$ and $c_{1}, c_{2}>0$ with $c_{1} \neq c_{2}$, under which condition the weights form an AT-system; see e.g.~\cite{MR2187942}, \cite{zbMATH05343896}, or \cite[section 3.3]{MR1808581}.
An explicit expression can be found in \cite[Section 3]{AptBraVA} or \cite[Section 23.4]{Ismail05}:
$$
\mathcal L_{\bm n}(x)= \sum_{k_{1}=0}^{n_{1}} \sum_{k_{2}=0}^{n_{2}} (-1)^{k_1+k_2} \frac{(k_1+k_2) !}{c_{1}^{k_{1}} c_{2}^{k_{2}}} \binom{n_{1} }{k_{1} } \binom{n_{2} }{k_{2} } \binom{N+\alpha}{ k_1+k_2} x^{N-k_1-k_2}.
$$
In our notation,
$$
A_1(x)=A_2(x)=x, \quad B_i(x)= \alpha -c_i x, \quad \sigma_i=0, \quad i=1, 2,
$$
with $\Delta_1=\Delta_2=[0,+\infty)$. These polynomials also satisfy the differential equation \eqref{laguerremult},
that is,
$$
x^2 R_{\bm n}(x) y'''(x)+\left[ (-(c_1+c_2)x^2 + 2 (\alpha +2)x)R_{\bm n}(x)- x^2 R_{\bm n}' (x)\right] y''+E_{\bm n}(x)y'(x)+F_{\bm n}(x)y(x)=0.
$$
Formula \eqref{degRoverAparticularcase} shows that now $ R_{\bm n}(x)/x$ is a polynomial of degree $1$.
Furthermore, by \eqref{IdR1} and since $B_1-B_2$ vanishes at $0$, $R_{\bm n}\mathcal{L}_{\bm n}$ has a double root at $0$. Again, the fact that the weights constitute an AT-system on $[0,+\infty)$ implies also that $\mathcal{L}_{\bm n}(0)\neq 0$. Hence, $R_{\bm n}(x)/x^2$ is a constant, and the third order differential equation is
$$x^4y'''-x^3((c_1+c_2)x-2(\alpha+1))y''+E_{\bm n}y'+F_{\bm n}y=0.
$$
Arguments as described in Sections~\ref{sec:multipleHermite} and \ref{sec:MultLaguerreIkind}, using the asymptotics of the functions of second kind both at $0$ and at $\infty$ and the fact that the indicial polynomial vanishes at $0$ and $-\alpha$, yield the expressions for polynomials $E_{\bm n}$ and $F_{\bm n}$:
\begin{align*}
E_{\bm n}&=c_1c_2x^4-[(c_1+c_2)(\alpha+1)-n_1c_1-n_2c_2]x^3+\alpha(\alpha+1)x^2,\\
F_{\bm n}&=-c_1c_2(n_1+n_2)x^3+\alpha(c_1n_1+c_2n_2) x^2.
\end{align*}
Canceling the common factor $x^2$ in the differential equation yields
$$
\begin{gathered}
x^{2} y'''(x)-\left(x^{2}\left(c_{1}+c_{2}\right)-2 x(\alpha+1)\right) y''(x)+\left(x^{2} c_{1} c_{2}-x\left[\left(c_{1}+c_{2}\right)(\alpha+1)-n_1 c_{1}-n_2 c_{2}\right]\right. \\
+\alpha(\alpha+1)) y'(x)-\left(x c_{1} c_{2}(n_1+n_2)-\alpha\left(n_1 c_{1}+n_2 c_{2}\right)\right) y(x)=0,
\end{gathered}
$$
which matches the equation found in \cite[section 4.3]{AptBraVA} and \cite[section 5.2]{MR3055365}.
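Both the explicit expression above and this differential equation are easy to test symbolically. A small \texttt{Python}/\texttt{sympy} sketch (illustrative only; $(n_1,n_2)=(2,1)$, $\alpha=1$, $c_1=1$, $c_2=2$ are arbitrary admissible values) evaluates the explicit formula, checks the orthogonality conditions, and substitutes the result into the equation:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x', positive=True)
n1, n2, alpha = 2, 1, sp.Integer(1)
c1, c2 = sp.Integer(1), sp.Integer(2)
N = n1 + n2

# explicit expression quoted above (it is already monic)
L = sum((-1)**(k1 + k2) * sp.factorial(k1 + k2) / (c1**k1 * c2**k2)
        * sp.binomial(n1, k1) * sp.binomial(n2, k2)
        * sp.binomial(N + alpha, k1 + k2) * x**(N - k1 - k2)
        for k1 in range(n1 + 1) for k2 in range(n2 + 1))

# orthogonality conditions (the moments of x^alpha e^{-c x} are Gamma values)
for n_i, c_i in [(n1, c1), (n2, c2)]:
    for k in range(n_i):
        print(sp.integrate(sp.expand(x**(k + alpha)*L)*sp.exp(-c_i*x),
                           (x, 0, sp.oo)))       # expected: 0

# third-order differential equation displayed above
ode = (x**2*sp.diff(L, x, 3)
       - (x**2*(c1 + c2) - 2*x*(alpha + 1))*sp.diff(L, x, 2)
       + (x**2*c1*c2 - x*((c1 + c2)*(alpha + 1) - n1*c1 - n2*c2)
          + alpha*(alpha + 1))*sp.diff(L, x)
       - (x*c1*c2*(n1 + n2) - alpha*(n1*c1 + n2*c2))*L)
print(sp.expand(ode))    # expected: 0
\end{verbatim}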
By Theorem \ref{thm:vectorelectrostatics},
the vector discrete measure $\vec \nu_1:=\left(\nu(\mathcal L_{\bm n}),\nu(S_{\bm n,1})\right)$ is a vector critical measure for the energy functional $\mathcal{E}_{\vec \varphi, a}$, with $a=-1/2$ and
$$
\vec \varphi (x)=\frac{1}{2} \left( c_1 x - (\alpha +1) \log x , (c_2-c_1) x \right), \quad z=x+iy.
$$
Since for $c>0$ and $\alpha>-1$,
$$
\int_{0}^\infty x^{\alpha} e ^{- c x}dx = c^{-\alpha-1} \Gamma(\alpha+1),
$$
we can easily calculate the moments of $w_i$, the asymptotic expansion of $\mathfrak C_{w_i}[\mathcal L_{\bm n}]$ at infinity, and in consequence, $S_{\bm n, i}$ (using formula \eqref{defS12}).
Let us consider the particular case of $n_1=n_2=n$, with $\alpha=1$, $c_1=1$, and $c_2=2$.
Then, for $\bm n=(5,5)$,
\begin{align*}
\mathcal L_{(5,5) }(x) = & x^{10}-\frac{165 x^9}{2}+2750 x^8-\frac{96525 x^7}{2}+487575 x^6-\frac{5831595 x^5}{2}+10239075 x^4\\ & -20270250 x^3+20790000 x^2-9355500 x+1247400 ,
\end{align*}
and up to normalization,
\begin{align*}
S_{\bm n, 1 }(x) & = 2 x^5+25 x^4+605 x^3+16580 x^2+506065 x+16197810, \\
S_{\bm n, 2}(x) & = 4 x^5-320 x^4+9975 x^3-151645 x^2+1115560 x-2967600.
\end{align*}
All zeros of $\mathcal L_{(5,5) }$ are positive and simple, $S_{\bm n, 1}$ has one negative and two pairs of complex conjugate simple zeros, while $S_{\bm n, 2}$, has one positive and two pairs of complex conjugate roots, all of them simple.
Asymptotics of sequences of (rescaled) multiple Laguerre polynomials of the second kind,
$$
p_{(n,n)}(t) = \mathcal L_{(n,n) }\left( n t \right),
$$
with varying $0<c_1<c_2$ proportional to $n$, has been studied by Lysov and Wielonsky in \cite{zbMATH05343896} using the Riemann-Hilbert technique and the analysis of the Riemann surface derived from the differential equation.
In particular, they found that there is a critical value $\kappa \approx 12.11\dots$ such that for $0<c_2/c_1<\kappa$, the support of the limit of the zero-counting measures $\nu(p_{(n,n)})$ is a single interval of the form $[0,d]$, $d>0$, while for $c_2/c_1>\kappa$, it is comprised of two real intervals $[0,a]\cup [b,d]$, with $0<a<b<d$. The expression of the density was also derived, but no equilibrium problem associated to that distribution was given.
It is interesting to compare these conclusions with results of the numerical experiments presented in Figure~\ref{FigLaguerreII25}, where we take $\bm n=(35,35)$. We observe that for small values of $c_2/c_1$ the zeros of $S_{\bm n, 1 }$ sit on a curve on the complex plane. However, for large ratios $c_2/c_1$, zeros of $\mathcal L_{(n,n) }$ split into two groups, and the zeros of $S_{\bm n, 1 }$, all real, approximately interlace with the zeros of $\mathcal L_{(n,n) }$ on the leftmost subinterval. This allows us to conjecture that in this case, the asymptotic zero distribution can be described again in terms of the Angelesco-type vector equilibrium, see Section~\ref{sec:AsymptAngelesco}.
\begin{figure}[h]
\centering
\hspace{-3mm} \begin{tabular}{cc}
\begin{overpic}[scale=0.55]{Fig3a}
\end{overpic}
&
\begin{overpic}[scale=0.55]{Fig3b}
\end{overpic}
\\
\begin{overpic}[scale=0.55]{Fig3c}
\end{overpic}
&
\begin{overpic}[scale=0.55]{Fig3d}
\end{overpic}
\end{tabular}
\caption{Zeros of the multiple Laguerre polynomial of the second kind $\mathcal L_{\bm n}$ (indicated by empty circles, all on the positive semiaxis) and of $S_{\bm n, 1}$ (filled circles) for $\bm n=(35,35)$, and $(c_1,c_2)=(35,70)$ (top left), $(c_1,c_2)=(35,140)$ (top right), and $(c_1,c_2)=(35,525)$ (bottom left). Bottom right: zoom of the interval $(0,0.25)$ for $(c_1,c_2)=(35,525)$.} \label{FigLaguerreII25}
\end{figure}
\subsection{Jacobi-Pi\~neiro polynomials} \label{sec:JacobiPinero}
The Jacobi-Piñeiro polynomials are multiple orthogonal polynomials associated with an AT system consisting of Jacobi weights on $[0,1]$ with different powers at 0 and the same behavior at $1 $. They are defined by the orthogonality conditions
$$
\begin{aligned}
&\int_{0}^{1} x^{k} P_{\bm n}(x) x^{\beta_1} (1-x)^\alpha\, dx=0, \quad k=0,1, \ldots, n_1-1, \\
&\int_{0}^{1} x^{k} P_{\bm n}(x) x^{\beta_2} (1-x)^\alpha\, dx=0, \quad k=0,1, \ldots, n_2-1.
\end{aligned}
$$
In our notation, $\Delta_1=\Delta_2=[0,1]$ and $w_j(x)= x^{\beta_j} (1-x)^\alpha$, $j=1, 2$, with $\alpha, \beta_1, \beta_2>-1$ and $\beta_1-\beta_2\notin \mathbb Z$. We have
$$
A_1(x)=A_2(x)=x(x-1), \quad B_i(x)= (\beta_i +\alpha) x-\beta_i, \quad \sigma_i=0, \quad i=1, 2.
$$
For the same reason mentioned at the beginning of Section~\ref{sec:MultLaguerreIkind}, this pair of weights forms a Nikishin system with $\Delta_1=\Delta_2=[0,1]$ and $[c,d]=[-\infty, 0]$, see Section \ref{sec:nikishin}.
These polynomials were first studied by Piñeiro \cite{MR884516} when $\alpha=0$. The general case appears in \cite[p. 162]{niksor}.
There is a Rodrigues formula for Jacobi-Pi\~neiro polynomials $P_{\bm n}$, $\bm n=(n_1, n_2)$, see \cite[Section 23.3.2]{Ismail05}: with $N=n_1+n_2$, and up to a constant factor,
\begin{align*}
P_{\bm n}(x) &=(1-x)^{-\alpha} \prod_{j=1}^{2}\left(x^{-\beta_{j}} \frac{d^{n_{j}}}{d x^{n_{j}}} x^{n_{j}+\beta_{j}}\right)(1-x)^{\alpha+N} .
\end{align*}
There is even an explicit expression \cite[Section 3.1]{MR1808581}: again, up to a multiplicative constant,
$$
P_{\bm n}(x) = \sum_{k=0}^{n_1}\binom{ \beta_{1}+n_1 }{
k} \binom{
\alpha +N }{
n_1-k} \sum_{j=0}^{n_2} \binom{ \beta_{2}+N-k }{ j} \binom{
\alpha +k+n_2 }{
n_2-j} x^{N-k-j}(x-1)^{k+j}.
$$
Proposition \ref{prop:WronskianS/w} ensures that the polynomial $R_{\bm n}$ has degree at most $3$. By \eqref{IdR1}, $R_{\bm n}P_{\bm n}$ has a double root at $1$ (observe that $B_1-B_2$ also vanishes at $1$) and a simple one at $0$. Since the weights form an AT-system, $P_{\bm n}(0)\neq 0$ and $P_{\bm n}(1)\neq 0$, so that, up to a multiplicative constant, $R_{\bm n}(z) =z(z-1)^2$.
By Theorem \ref{thm:vectorelectrostatics}, and taking into account the expression for $R_{\bm n}$, we conclude that
the vector discrete measure $\vec \nu_1:=(\nu(P_{\bm n}),\nu(S_{\bm n,1}))$ is a vector critical measure for the energy functional $\mathcal{E}_{\vec \varphi, a}$, with $a=-1/2$ and
$$
\vec \varphi = \left( \frac{\beta_1+1}{2}\log \frac{1}{\left| x\right| } + \frac{\alpha+1}{2}\log \frac{1}{\left| x-1\right| } , \left(\beta_1-\beta_2+\frac{1}{2} \right) \log\frac{ 1}{ \left|z \right|} \right), \quad z=x+iy.
$$
The third order differential equation can also be obtained following the arguments used in Sections \ref{sec:multipleHermite}--\ref{sec:multipleLag2kind}, and making use of the asymptotics at $\infty$, $0$, and $1$, and of the known roots of the indicial polynomials at both finite points. For instance, in the case $\alpha=0$ (studied by Piñeiro in \cite{MR884516}), polynomials $E_{\bm n}$ and $F_{\bm n}$ (coefficients of $y'$ and $y$, respectively) are
\begin{align*}
E_{\bm n}&=-x(x-1)^3 ((1 + \beta_1) (1 + \beta_2) +
x (n_1^2 + n_2^2 + n_1n_2+ n_1(1 + \beta_1) +
n_2 (1 + \beta_2) - (2 + \beta_1) (2 + \beta_2))),\\
F_{\bm n}&=-x(x-1)^3 (n_1 + n_2) (1 + n_1 + \beta_1) (1 + n_2 + \beta_2).
\end{align*}
Canceling the common factor $x(x-1)^3$ we obtain the differential equation
\begin{align*}
&x^2(x-1)y'''+x ( x (5 + \beta_1 + \beta_2)-3 - \beta_1 - \beta_2 )y''\\
&-(x (n_1^2 + n_2^2 + n_1n_2+ n_1(1 + \beta_1) +
n_2 (1 + \beta_2) - (2 + \beta_1) (2 + \beta_2))+(1 + \beta_1) (1 + \beta_2)
)y'\\
&-(n_1 + n_2) (1 + n_1 + \beta_1) (1 + n_2 + \beta_2)y=0.
\end{align*}
This is the particular case (after canceling the common factor $x-1$) of the equation derived in \cite[Section 4.3]{AptBraVA}.
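The equation just displayed can also be tested directly. The following \texttt{Python}/\texttt{sympy} sketch (an added illustration; the index $(2,1)$ and the exponents $\beta_1=0$, $\beta_2=-1/2$ are arbitrary admissible choices, with $\alpha=0$ as above) builds $P_{\bm n}$ from the orthogonality conditions on $[0,1]$ and substitutes it into the equation:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
n1, n2 = 2, 1
b1, b2 = sp.Integer(0), sp.Rational(-1, 2)       # beta_1, beta_2; alpha = 0
N = n1 + n2
moment = lambda m, bet: sp.Integer(1)/(m + bet + 1)   # moments of x^beta on (0,1)

u = sp.symbols(f'u0:{N}')
P = x**N + sum(u[j]*x**j for j in range(N))
eqs = []
for n_i, bet in [(n1, b1), (n2, b2)]:
    for k in range(n_i):
        pk = sp.Poly(sp.expand(x**k * P), x)
        eqs.append(sum(cf*moment(m, bet) for (m,), cf in pk.terms()))
P = sp.expand(P.subs(sp.solve(eqs, u, dict=True)[0]))

K = (n1**2 + n2**2 + n1*n2 + n1*(1 + b1) + n2*(1 + b2)
     - (2 + b1)*(2 + b2))
ode = (x**2*(x - 1)*sp.diff(P, x, 3)
       + x*(x*(5 + b1 + b2) - 3 - b1 - b2)*sp.diff(P, x, 2)
       - (x*K + (1 + b1)*(1 + b2))*sp.diff(P, x)
       - (n1 + n2)*(1 + n1 + b1)*(1 + n2 + b2)*P)
print(sp.expand(ode))    # expected output: 0
\end{verbatim}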
As for the electrostatic partners, let us consider the example when $\alpha=\beta_1=0$ and $\beta_2=-1/2$. Direct computation shows that the moments of $w_i$ are
$$
\int_{0}^1 x^{k } w_j(x) dx = \begin{cases}
(k+1)^{-1}, & j=1, \\
2/(2k+1), & j=2,
\end{cases} \quad k=0, 1, \dots ,
$$
which allows us to find the asymptotic expansion of $\mathfrak C_{w_i}[P_{\bm n}]$ at infinity, and in consequence, $S_{\bm n, i}$ (using formula \eqref{defS12}) by means of symbolic computation. For instance, in the case $\bm n=(5,5)$ we obtain that
\begin{align*}
P_{(5,5) }(x) = & x^{10}-\frac{380 x^9}{87}+\frac{1615 x^8}{203}-\frac{20672 x^7}{2639}+\frac{9044 x^6}{2001}-\frac{5168 x^5}{3335}+\frac{204
x^4}{667} \\ & -\frac{64 x^3}{2001}+\frac{x^2}{667}-\frac{4 x}{182091}+\frac{1}{30045015},
\end{align*}
and up to normalization,
\begin{align*}
S_{\bm n, 1 }(x) = &\ 882230895 x^5+4709406975 x^4+5720142090 x^3+8795888965 x^2 \\
& +11696347475 x+11645469674, \\
S_{\bm n, 2}(x) = & \ 5192762585 x^5+313459871725 x^4+662076961780 x^3+782465377400 x^2 \\
& +1267133219685 x+1386883054197.
\end{align*}
All zeros of $P_{(5,5) }$ are positive and simple, and both $S_{\bm n, j}$ have one negative zero and two pairs of complex conjugate zeros, all of them simple.
Zero asymptotics for sequences of Jacobi-Pi\~neiro polynomials $P_{(n,n)}$ as $n\to \infty$ and $\alpha$, $\beta_j$'s fixed, was obtained in \cite{zbMATH05356762} and \cite{MR3471160}. Again, the expression of the density, this time on $[0,1]$, was derived from the recurrence relation satisfied by polynomials $P_{(n_1,n_2) }$ and no equilibrium problem associated to that distribution was given. For comparison, results of the numerical experiments are presented in Figure~\ref{FigJP25}, where we take $\bm n=(75,75)$, with $\beta_1=0$, $\beta_2=-1/2$, and $\alpha=0$. According to our discussion in Section~\ref{sec:nikishin}, the zeros of $S_{\bm n, 1}$ are real and negative, and the asymptotic zero distribution can be described in terms of the Nikishin-type vector equilibrium, see Section~\ref{sec:AsymptNikishin}.
\begin{figure}[h]
\centering
\hspace{-3mm} \begin{tabular}{cc}
\begin{overpic}[scale=0.6]{Fig4a}
\end{overpic}
&
\begin{overpic}[scale=0.6]{Fig4b}
\end{overpic}
\end{tabular}
\caption{Left: zeros of the Jacobi-Pi\~neiro polynomial $P_{\bm n}$ (all on $[0,1]$) and of $S_{\bm n, 1}$ (filled circles, all negative) for $\bm n=(75,75)$, with $\beta_1=0$, $\beta_2=-1/2$, and $\alpha=0$; approximately 19 real zeros of $S_{\bm n, 1}$ (ranging from $-771$ to $-1.74$), are not represented. Right: the histogram of the zeros of $P_{\bm n}$ and the plot of the asymptotic density, predicted in \cite{zbMATH05356762}.} \label{FigJP25}
\end{figure}
\subsection{Angelesco-Jacobi polynomials} \label{sec:AngelescoJacobi}
These polynomials, known also as Jacobi-Jacobi polynomials (see \cite{Aptekarev:97}) are Hermite--Pad\'e polynomials $P_{\bm n}$, $\bm n=(n_1, n_2)$, satisfying orthogonality relations
\begin{equation*}
\begin{split}
&\int_{a}^{0} x^{k} P_{\bm n}(x)(x-a)^{\alpha}|x|^{\beta}(1-x)^{\gamma} \, d x=0, \quad k=0,1,2, \dots, n_1-1, \\
&\int_{0}^{1} x^{k} P_{\bm n}(x)(x-a)^{\alpha} x^{\beta}(1-x)^{\gamma} \, d x=0, \quad k=0,1,2, \dots, n_2-1,
\end{split}
\end{equation*}
with $a<0$. In our notation,
$$
A(x)=x(x-a)(x-1), \quad B(x)= \alpha x (x-1)+ \beta (x-a)(x-1)+ \gamma x(x-a),
$$
and
\begin{align*}
w_1(x) & =w_2(x)=w(x)=(x-a)^{\alpha}|x|^{\beta}(1-x)^{\gamma}, \\
v_1(x) & =v_2(x)=v(x)=(x-a)^{\alpha+1}|x|^{\beta+1}(1-x)^{\gamma+1},
\end{align*}
with $\alpha, \beta, \gamma>-1$ and
$$
\Delta_1=[a,0], \quad \Delta_2=[0,1]
$$
(cf.~Example~\ref{exampleAngelescoJJ}).
This is an Angelesco system of semiclassical weights of class $\sigma=1$, see \eqref{def:class}. As it follows from \cite[Theorem 2.1]{Aptekarev:97}, for $n_1=n_2=n$, $\bm n =(n,n)$, $n\in \N$, $P_{\bm n}$ can be expressed using a Rodrigues formula,
$$
P_{\bm n}(x)=\frac{1}{w(x)} \left( \frac{d}{dx}\right)^n \left[ A^n(x)w (x) \right],
$$
or explicitly, see \cite[Section 3.5]{MR1808581}: up to a constant factor,
$$
P_{\bm n}(x)= \sum_{k=0}^{n} \sum_{j=0}^{n-k} \frac{(-n)_{k+j} (-\alpha-n)_{j} (-\gamma-n)_{k} }{(\beta+1)_{k+j} k ! j !}(x-a)^{n-k}(x-1)^{k+j} x^{n-j}.
$$
Kaliaguin \cite{kaliaguine:1981} studied the case of $a=-1$, with the particular sub-case of $B\equiv 0$ or $w(x)\equiv 1$ going back to the work of Appell \cite{Appell1901}. The case of $-1<a<0$ was addressed in \cite{kaliaguine:1996}, but see \cite{Aptekarev:97} for further historical details.
If $B\equiv 0$ (that is, $\alpha=\beta=\gamma=0$), definition \eqref{defTransform2} reduces to
$$
S_{\bm n, 1} = A \times \mathfrak{Wrons}\left[P_{\bm n} ,\mathfrak C_{w_1}[P_{\bm n}] \right] ,
\quad \mathfrak C_{w_1}[P_{\bm n}] (x)=\int_{a}^0 \frac{P_{\bm n} (t) }{t-x}dt.
$$
Notice that in particular, $x=1$ is one of the $n+1$ zeros of $S_{\bm n, 1} $, $n-1$ of which interlace with the zeros of $P_{\bm n}$ on $[0, 1)$.
\begin{figure}[h]
\centering
\hspace{-3mm} \begin{tabular}{cc}
\begin{overpic}[scale=0.55]{Fig5a}
\end{overpic}
&
\begin{overpic}[scale=0.55]{Fig5b}
\end{overpic}
\end{tabular}
\caption{Appell's polynomials ($\alpha=\beta=\gamma=0$). Left: graph of $P_{\bm n}$ (dashed line) and $S_{\bm n, 1}$ (thick line) on $[0,1]$ for $\bm n=(6,6)$. Right: zeros of $P_{\bm n}$ (empty circles, all on $[-1,1]$) and of $S_{\bm n, 1}$ (filled circles, all on $[0,1]$) for $\bm n=(15,15)$. }\label{Fig1Appel}
\end{figure}
In Appell's case, $w\equiv 1$ and $a=-1$,
$$
P_{\bm n}(x)= \left( \frac{d}{dx}\right)^n \left[ x(x^2-1) \right]^n ,
$$
and
$$
S_{\bm n, 1} (x)=(-1)^{n+1} S_{\bm n, 2} (-x), \quad \deg S_{\bm n, 1}=n+1.
$$
These explicit formulas allow us to use symbolic computation (e.g.~Mathematica) to find explicit expressions for small values of $n$. For instance, for $\bm n=(6,6)$, and up to normalization,
\begin{align*}
P_{\bm n}(x) & = x^{12}-\frac{44 x^{10}}{17}+\frac{165 x^8}{68}-\frac{220 x^6}{221}+\frac{75 x^4}{442}-\frac{2 x^2}{221}+\frac{1}{18564}, \\
S_{\bm n, 1}(x) &= (x-1) \left(12288 x^6-38763 x^5+47253 x^4-27822 x^3+8018 x^2-991 x+33\right),
\end{align*}
and their graphs are plotted in Figure~\ref{Fig1Appel}. We can clearly observe the interlacing predicted by Proposition~\ref{cor:zerosS12ang}, which in the limit $n\to\infty$ gives the description in terms of the Angelesco equilibrium problem, as described in Section~\ref{sec:AsymptAngelesco}.
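For small $n$ this computation is easy to reproduce symbolically. The following \texttt{Python}/\texttt{sympy} sketch (an added illustration, independent of the Mathematica computations mentioned above; $n=3$ is an arbitrary small value, and the real branch $\log(x/(x+1))$, valid for $x>0$, is used for the Cauchy transform, which is enough to identify the resulting polynomial) computes $P_{\bm n}$ from the Rodrigues formula and $S_{\bm n, 1}$ from the definition given earlier in this subsection, so that one can check directly that it is a polynomial of degree $n+1$ vanishing at $x=1$:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x', positive=True)
t = sp.symbols('t', real=True)
n = 3
A = x*(x + 1)*(x - 1)                     # A(x) = x(x-a)(x-1) with a = -1

# Appell's case of the Rodrigues formula (w = 1, a = -1)
P = sp.expand(sp.diff((x*(x**2 - 1))**n, x, n))

# Cauchy transform of P on [-1,0]; the logarithmic part is split off by hand,
# using the branch log(x/(x+1)), valid for x > 0
Q = sp.integrate(sp.cancel((P.subs(x, t) - P)/(t - x)), (t, -1, 0))
C = Q + P*sp.log(x/(x + 1))

# S_{n,1} = A * Wronskian[P, C]
S = sp.cancel(sp.expand(A*(P*sp.diff(C, x) - sp.diff(P, x)*C)))
print(sp.factor(S))                       # polynomial with the factor (x - 1)
print(sp.degree(S, x), S.subs(x, 1))      # expected: n + 1 = 4 and 0
\end{verbatim}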
However, according to Remark~\ref{rem:linearcomb}, the critical configuration for the zeros of $P_{\bm n}$ in this case is not unique: we can use for the second component the zeros of any linear combination of $S_{\bm n, 1}$ and $S_{\bm n, 2}$. In Figure~\ref{Fig2Appell} we illustrate the behavior of zeros of $S_{\bm n, 1} + t S_{\bm n, 2}$, for different values of $t>0$ (notice that the representation of the zeros of $S_{\bm n, 1}$ in Figure~\ref{Fig1Appel}, right, corresponds to $t=0$).
\begin{figure}[h]
\centering
\hspace{-3mm} \begin{tabular}{cc}
\begin{overpic}[scale=0.55]{Fig6a}
\end{overpic}
&
\begin{overpic}[scale=0.55]{Fig6b}
\end{overpic} \\
\begin{overpic}[scale=0.55]{Fig6c}
\end{overpic}
&
\begin{overpic}[scale=0.55]{Fig6d}
\end{overpic}
\end{tabular}
\caption{Appell's polynomials ($\alpha=\beta=\gamma=0$) and $\bm n=(35,35)$: zeros of $P_{\bm n}$ (indicated by empty circles, all on $(-1,1)$) and of $S_{\bm n, 1} + t S_{\bm n, 2}$ (filled circles) for $t=10^{-10}$ (top left), $t=10^{-5}$ (top right), $t=1$ (bottom left), and $t=10^{5}$ (bottom right). }\label{Fig2Appell}
\end{figure}
\subsection{Multiple orthogonal polynomials for the cubic weight}
Although we have not discussed the purely complex weights, we finish our presentation with the illustrative example of polynomials $P_{\bm n}$, $\bm n=(n_1, n_2)$, satisfying orthogonality relations
\begin{equation*}
\begin{split}
&\int_{\Delta_1} z^{k} P_{\bm n}(z)e^{-z^3}\, d z=0, \quad k=0,1,2, \dots, n_1-1, \\
&\int_{\Delta_2} z^{k} P_{\bm n}(z)e^{-z^3}\, d z=0, \quad k=0,1,2, \dots, n_2-1,
\end{split}
\end{equation*}
where $\Delta_1$ and $\Delta_2$ are contours on the complex plane, extending to $\infty$ on their two ends along the directions determined by the angles $-2\pi/3$ and $0$, and $-2\pi/3$ and $2\pi/3$, respectively. They were introduced in \cite{MR3304586} (where the diagonal case, $n_1=n_2$, was discussed) and studied in full generality in \cite{MR3939592}.
In our notation,
$$
A_1(x)=A_2(x)=A(x)=1, \quad B_1(x)=B_2(x)=B(x)= -3x^2.
$$
A detailed explanation of the algorithm used to compute the zeros of $P_{\bm n}$ was given in \cite[Section 9]{MR3939592}. Since the moments of the weight were given explicitly, we use the expansion of $\mathfrak D_{w }[P_{\bm n}]$ at infinity to calculate the expressions for $S_{\bm n, 1} $ and $ S_{\bm n, 2}$, see Figure~\ref{FigCubic}.
It is interesting to compare the location of the zeros of $S_{\bm n, 1} $ with those of \textit{type I} multiple orthogonal polynomials $\mathfrak A_{\bm n}$ and $\mathfrak B_{\bm n}$, defined by the following conditions:
$$
\deg \mathfrak A_{\bm n} \leq n-1, \quad \deg \mathfrak B_{\bm n}\leq m-1,
$$
and
\begin{equation}\label{mops_conditionsTypeI}
\begin{aligned}
\int_{\Delta_1}z^k \mathfrak A_{\bm n}(z) e^{- z^3}dz + \int_{\Delta_2}z^k \mathfrak B_{\bm n}(z) e^{- z^3}dz =0, & \quad k=0,\dots, N-2, \\
\int_{\Delta_1}z^k \mathfrak A_{\bm n}(z) e^{- z^3}dz + \int_{\Delta_2}z^k \mathfrak B_{\bm n}(z) e^{- z^3}dz =1, & \quad k=N-1,
\end{aligned}
\end{equation}
where $N=n+m$. According to Figure~\ref{FigCubic}, the zeros of $\mathfrak B_{\bm n}$ ``interlace'' with the zeros of $S_{\bm n, 1} $, which raises a natural question about a possible connection between these two polynomials.
\begin{figure}[h]
\centering
\begin{overpic}[scale=0.7]{Fig7}
\end{overpic}
\caption{Zeros of the multiple orthogonal polynomial $P_{\bm n}$ with respect to the cubic weight (indicated by empty circles, part of them on the positive semiaxis, forming a symmetric star) and of $S_{\bm n, 1}$ (filled circles) for $\bm n=(25,25)$. For comparison, zeros of type I MOP $\mathfrak B_{\bm n}$ (filled squares) are also represented. }\label{FigCubic}
\end{figure}
| {
"timestamp": "2022-03-04T02:04:24",
"yymm": "2203",
"arxiv_id": "2203.01419",
"language": "en",
"url": "https://arxiv.org/abs/2203.01419",
"abstract": "For a given polynomial $P$ with simple zeros, and a given semiclassical weight $w$, we present a construction that yields a linear second-order differential equation (ODE), and in consequence, an electrostatic model for zeros of $P$. The coefficients of this ODE are written in terms of a dual polynomial that we call the electrostatic partner of $P$. This construction is absolutely general and can be carried out for any polynomial with simple zeros and any semiclassical weight on the complex plane. An additional assumption of quasi-orthogonality of $P$ with respect to $w$ allows us to give more precise bounds on the degree of the electrostatic partner. In the case of orthogonal and quasi-orthogonal polynomials, we recover some of the known results and generalize others. Additionally, for the Hermite--Padé or multiple orthogonal polynomials of type II, this approach yields a system of linear second-order differential equations, from which we derive an electrostatic interpretation of their zeros in terms of a vector equilibrium. More detailed results are obtained in the special cases of Angelesco, Nikishin, and generalized Nikishin systems. We also discuss the discrete-to-continuous transition of these models in the asymptotic regime, as the number of zeros tends to infinity, into the known vector equilibrium problems. Finally, we discuss how the system of obtained second-order ODEs yields a third-order differential equation for these polynomials, well described in the literature. We finish the paper by presenting several illustrative examples.",
"subjects": "Classical Analysis and ODEs (math.CA)",
"title": "Electrostatic partners and zeros of orthogonal and multiple orthogonal polynomials",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9915543743067415,
"lm_q2_score": 0.7154240018510025,
"lm_q1q2_score": 0.7093817985193959
} |
https://arxiv.org/abs/1201.3221 | Spanning trees and even integer eigenvalues of graphs | For a graph $G$, let $L(G)$ and $Q(G)$ be the Laplacian and signless Laplacian matrices of $G$, respectively, and $\tau(G)$ be the number of spanning trees of $G$. We prove that if $G$ has an odd number of vertices and $\tau(G)$ is not divisible by $4$, then (i) $L(G)$ has no even integer eigenvalue, (ii) $Q(G)$ has no integer eigenvalue $\lambda\equiv2\pmod4$, and (iii) $Q(G)$ has at most one eigenvalue $\lambda\equiv0\pmod4$ and such an eigenvalue is simple. As a consequence, we extend previous results by Gutman and Sciriha and by Bapat on the nullity of adjacency matrices of the line graphs. We also show that if $\tau(G)=2^ts$ with $s$ odd, then the multiplicity of any even integer eigenvalue of $Q(G)$ is at most $t+1$. Among other things, we prove that if $L(G)$ or $Q(G)$ has an even integer eigenvalue of multiplicity at least $2$, then $\tau(G)$ is divisible by $4$. As a very special case of this result, a conjecture by Zhou et al. [On the nullity of connected graphs with least eigenvalue at least $-2$, Appl. Anal. Discrete Math. 7 (2013), 250--261] on the nullity of adjacency matrices of the line graphs of unicyclic graphs follows. | \section{Introduction}
The graphs we consider are simple, that is, without loops or multiple edges.
Let $G$ be a graph.
The {\em order} of $G$ is the number of vertices of $G$.
We denote by $A(G)$ the adjacency matrix,
by $\li(G)$ the line graph and by $\tau(G)$ the number of spanning trees of $G$.
The purpose of this paper is to study the interconnection between $\tau(G)$ and the multiplicities of even integer eigenvalues of
$A(\li(G))$.
Our motivation comes partly from the previous works by several authors on the connection between $\tau(G)$ and the multiplicity of zero eigenvalue, i.e. the nullity of $A(\li(G))$. A brief review of the previous results is in order.
Doob \cite{doob} proved that the binary rank (i.e. the rank over the two-element field) of $A(\li(G))$ for any connected graph $G$ of order $n$ is $n-1$ if $n$ is odd, and $n-2$
if $n$ is even. This result was stated and proved in the context of Matroid Theory.
Gutman and Sciriha \cite{gut} showed
that the nullity (over reals) of $A(\li(T))$ for any tree $T$ is at most 1 and
if $A(\li(T))$ is singular, then $T$ has an even order. Indeed, this is an immediate consequence of Doob's result.
Recently, Bapat \cite{bapat} found an interesting generalization by proving that if $\tau(G)$ is odd, then $A(\li(G))$ has nullity (over reals) at most 1. He also showed that a bipartite graph $G$ with odd $\tau(G)$ and with singular $A(\li(G))$
must have even order.
We extend these results to the following.
\begin{thm}\label{t+1} Let $G$ be a connected graph and $\tau(G)=2^ts$ with $s$ odd. Then the multiplicity of any even integer $\la\ne-2$ as an eigenvalue of $A(\li(G))$ is at most $t+1$.
\end{thm}
\begin{thm}\label{mult2line} Suppose that $G$ is a graph with $\tau(G)$ not divisible by $4$. If $\la\ne-2$ is an even integer eigenvalue of $A(\li(G))$, then $\la\equiv2\pmod4$, $\la$ is a simple eigenvalue, and $A(\li(G))$ has at most one such eigenvalue.
\end{thm}
\begin{cor} If a graph $G$ has odd order and $\tau(G)$ is not divisible by $4$, then $A(\li(G))$ is nonsingular.
\end{cor}
\begin{thm}\label{oddline} If $A(\li(G))$ has an even integer eigenvalue $\la\ne-2$ of multiplicity at least $2$, then $\tau(G)$ is divisible by $4$.
\end{thm}
Since even integer eigenvalues of $A(\li(G))$ and the signless Laplacian matrix $Q(G)$ are the same modulo a shift (see Section~2) it is enough to consider those of $Q(G)$ as we do in what follows.
The rest of the paper is organized as follows. In Section~2 we recall some necessary preliminaries. In Section~3 we give a simple proof for Doob's result which will be used later on.
In Section~4, the proofs of Theorems~\ref{t+1}, \ref{mult2line}, and \ref{oddline} in terms of $Q(G)$ are given along with some improvements and similar results for the eigenvalues of the Laplacian matrix $L(G)$.
\section{Preliminaries}
By $X=X(G)$ we denote the $0,1$ vertex-edge incidence matrix of $G$. If we orient each edge of $G$, then $D=D(G)$ will denote
the $0,\pm1$ vertex-edge incidence matrix of the resulting graph. The Laplacian
matrix of $G$ is $L=L(G)=DD^\p$ and the signless Laplacian matrix of $G$ is $Q=Q(G)=XX^\p$.
Note that the Laplacian does not depend on the orientation.
The matrices $L$ and $Q$ are positive semidefinite.
It is easily seen that the incidence matrix of $G$ and the adjacency matrix of $\li(G)$ satisfy the following
\begin{equation}\label{A+2I}
A(\li(G))+2I=X^\p X.
\end{equation}
Recall that for a matrix $M$, the matrices $MM^\p$ and $M^\p M$ have the same nonzero eigenvalues with the same multiplicities.
This together with (\ref{A+2I}) imply that
the matrices $A(\li(G))+2I$ and $Q(G)$ have the same nonzero eigenvalues with the same multiplicities.
In particular, the multiplicity of eigenvalue $2$ for $Q(G)$ is the same as the nullity of $A(\li(G))$.
Therefore, studying the even integer eigenvalues of $A(\li(G))$ is equivalent to studying those of $Q(G)$.
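For a concrete illustration of \eqref{A+2I} and of this correspondence, here is a small \texttt{Python} toy example (added for convenience; the path $P_4$ is an arbitrary choice):
\begin{verbatim}
import numpy as np

# vertex-edge incidence matrix X of the path 1-2-3-4 (rows = vertices)
X = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]])

Q = X @ X.T                        # signless Laplacian Q(G)
A_line = X.T @ X - 2*np.eye(3)     # adjacency of the line graph, from A + 2I = X^T X

print(np.round(np.linalg.eigvalsh(Q), 6))            # spectrum of Q(G)
print(np.round(np.linalg.eigvalsh(A_line) + 2, 6))   # spectrum of A(L(G)) + 2
# the two lists coincide except for the extra zero eigenvalue of Q(G)
\end{verbatim}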
We denote the vertex set and the edge set of $G$ by $V(G)$ and $E(G)$, respectively.
If $S\subseteq E(G)$, then $\langle S\rangle$ denotes the induced subgraph on $S$.
For a matrix $M$ with $R,S$ being subsets of row and column indices of $M$, respectively, we denote the submatrix with row indices from $R$ and column indices from $S$ by $M(R,S)$.
The following two lemmas describe the invertible submatrices of $D$ and $X$. For the first one we refer to pp. 32 and 47 of \cite{biggs} and for the second one to p. 30 of \cite{bapatBook}.
\begin{lem} \label{invertD} Let $G$ be a graph and $R\subseteq V(G)$, $S\subseteq E(G)$ with $|R|=|S|\ge1$.
Let $V_0$ denote the vertex set of $\langle S\rangle$. Then $D(R,S)$ is invertible if and only if the
following conditions are satisfied:
\begin{itemize}
\item[\rm(i)] $R$ is a subset of $V_0$.
\item[\rm(ii)]$\langle S\rangle$ is a forest.
\item[\rm(iii)]$V_0\setminus R$ contains precisely one vertex from each connected component
of $\langle S\rangle$.
\end{itemize}
Moreover, if $D(R,S)$ is invertible, then $\det(D(R,S))=\pm1$.
\end{lem}
\begin{lem} \label{invertX} Let $G$ be a graph and $R\subseteq V(G)$, $S\subseteq E(G)$ with $|R|=|S|\ge1$.
Let $V_0$ denote the vertex set of $\langle S\rangle$. Then $X(R,S)$ is invertible if and only if the
following conditions are satisfied:
\begin{itemize}
\item[\rm(i)] $R$ is a subset of $V_0$.
\item[\rm(ii)] each connected component of $\langle S\rangle$ is either a tree or a unicyclic graph with odd cycle.
\item[\rm(iii)]$V_0\setminus R$ contains precisely one vertex from each tree in $\langle S\rangle$.
\end{itemize}
Moreover, if $X(R,S)$ is invertible, then $\det(X(R,S))=\pm2^c$ where $c$ is the number of components of $\langle S\rangle$ which are unicyclic with odd cycle.
\end{lem}
The nullities of $L(G)$ and $Q(G)$ are equal, respectively, to the number of components and to the number of bipartite components of $G$.
Let
$$p_Q(x)=x^n+q_1x^{n-1}+\cdots+q_n,~~~p_L(x)=x^n+\ell_1x^{n-1}+\cdots+\ell_{n-1}x$$
be the characteristic polynomials of $Q$ and $L$, respectively.
A subgraph of $G$ whose components are trees or unicyclic graphs with odd cycles is called a
{\em TU-subgraph} of $G$. Suppose that a TU-subgraph $H$ of $G$ contains $c$ unicyclic graphs and trees
$T_1, T_2,\ldots, T_s$. Then the weight $W(H)$ of $H$ is defined by $$W(H) = 4^c\prod_{i=1}^s(1 + e(T_i )),$$
where $e(T_i)$ denotes the number of edges of $T_i$.
The weight of an acyclic subgraph, that is, a union of trees, is defined similarly with $c=0$.
We shall express the coefficients of $p_Q(x)$ and $p_L(x)$ in terms of the weights of TU-subgraphs and acyclic subgraphs of $G$.
By the Matrix-Tree Theorem, for any $1\le i,j\le n$,
$\tau(G)$ is equal to $(-1)^{i+j}$ times the determinant of the submatrix of $L(G)$ obtained by eliminating the $i$th row and $j$th column. It follows that $\ell_{n-1}=(-1)^{n-1}n\tau(G)$.
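Both facts are easily checked on a small example. The following \texttt{Python}/\texttt{sympy} snippet (an added illustration; the cycle $C_4$, with $\tau(C_4)=4$, is an arbitrary choice) computes a cofactor of $L$ and the coefficient of $x$ in $p_L$:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])                          # adjacency matrix of C_4
L = sp.diag(*[sum(A.row(i)) for i in range(4)]) - A    # Laplacian L = D - A

tau = L.minor_submatrix(0, 0).det()                    # a cofactor of L: tau(C_4) = 4
ell = L.charpoly(x).as_expr().coeff(x, 1)              # coefficient of x in p_L
print(tau, ell, (-1)**(4 - 1)*4*tau)                   # 4, -16, -16
\end{verbatim}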
The first part of the following theorem, which is a generalization of the Matrix-Tree Theorem, appeared in \cite{kel} (see also \cite[p. 90]{crsB}). The second part was proved in \cite{dedo} (see also \cite{crs}).
\begin{thm}\label{coef} The coefficients of $p_L(x)$ and $p_Q(x)$ are determined as follows.
\begin{itemize}
\item[\rm(i)] $\ell_j=(-1)^j\sum_{F_j}W(F_j)$, for $j=1,\ldots,n-1$,
where the summation runs over all acyclic subgraphs $F_j$ of $G$ with $j$ edges.
\item[\rm(ii)] $q_j=(-1)^j\sum_{H_j}W(H_j)$, for $j=1,\ldots,n$,
where the summation runs over all TU-subgraphs $H_j$ of $G$ with $j$ edges.
\end{itemize}
\end{thm}
We close this section by stating the following well known lemma for later use.
\begin{lem}\label{princ} Any symmetric matrix of rank $r$ (over any field) has a
principal $r\times r$ submatrix of full rank.
\end{lem}
\section{Binary rank of line graphs}
In this section we give a simple proof of Doob's result. For a matrix $M$, we use the notation $\rk_2(M)$ to denote the binary rank of $M$.
\begin{thm}\label{doob} {\rm(Doob \cite{doob})} Let $G$ be a connected graph of order $n$ and $\A=A(\li(G))$. Then
${\rm rank}_2(\A)$ is equal to $n-1$ if $n$ is odd, and $n-2$ if $n$ is even.
\end{thm}
\begin{proof}{
If $S$ is the edge set of a spanning tree and $R$ any set of $n-1$ vertices of $G$, then by Lemma~\ref{invertX},
$\det(X(R,S))=\pm1$. Hence, $\rk_2(X)\ge n-1$. In fact we have equality, since the rows of $X$ sum up to the all-$2$ vector, which vanishes over $\mathrm{GF}(2)$.
From (\ref{A+2I}), it follows that $\rk_2(\A)\le\rk_2(X)=n-1$.
Let $S\subseteq E(G)$ with $|S|=n-1$. By the Binet--Cauchy Theorem and Lemma~\ref{invertX},
\begin{align*}
\det(\A(S,S))&\equiv\det((X^\p X)(S,S))\pmod{2}\\
&=\sum_{R\subseteq V(G),\,|R|=n-1}\det(X(R,S))^2
\equiv\left\{\begin{array}{ll} n & \hbox{if $\langle S\rangle$ is a tree,} \\0 & \hbox{otherwise,}\end{array}\right.\pmod{2}
\end{align*}
where the last congruence holds because, by Lemma~\ref{invertX}, any invertible $X(R,S)$ with $\langle S\rangle$ not a tree has determinant $\pm2^c$ with $c\ge1$.
This shows that $\A$ has a principal submatrix of order $n-1$ with full binary rank if $n$ is odd and has none if $n$ is even.
This proves the theorem for odd $n$.
Assume that $n$ is even. The above argument shows that $\A$ has no principal submatrix of order $n-1$ with full binary rank; together with Lemma~\ref{princ}, this gives $\rk_2(\A)\le n-2$.
Let $T$ be a subtree of $G$ with $n-2$ edges. Then the adjacency matrix ${\cal B}$ of $\li(T)$ is a principal submatrix of $\A$ and further ${\cal B}$ has full binary rank by the above argument for odd $n$.
This shows that $\rk_2(\A)=n-2$.
}\end{proof}
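
The theorem is easy to verify computationally. The following Python sketch is our own illustration; the helper functions and example graphs are hypothetical.
\begin{verbatim}
import numpy as np
from itertools import combinations

def rank_gf2(M):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    M = (np.array(M) % 2).astype(int)
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def line_graph_adjacency(edges):
    """Edges of G are the vertices of L(G); join two edges iff they share an endpoint."""
    m = len(edges)
    A = np.zeros((m, m), dtype=int)
    for i, j in combinations(range(m), 2):
        if set(edges[i]) & set(edges[j]):
            A[i, j] = A[j, i] = 1
    return A

# Paths P_5 (n=5 odd) and P_4 (n=4 even); the theorem predicts binary ranks 4 and 2.
for n, edges in [(5, [(0,1),(1,2),(2,3),(3,4)]), (4, [(0,1),(1,2),(2,3)])]:
    print(n, rank_gf2(line_graph_adjacency(edges)), n - 1 if n % 2 else n - 2)
\end{verbatim}
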
\section{Even integer eigenvalues of Laplacian and signless Laplacian}
In this section we demonstrate the interconnection between the number of spanning trees of a graph $G$ and the even integer eigenvalues of $L(G)$ and $Q(G)$.
In view of the fact that the matrices $A(\li(G))+2I$ and $Q(G)$ have the same nonzero eigenvalues, Theorems~\ref{t+1}, \ref{mult2line}, and \ref{oddline} follow respectively from Theorems~\ref{t+1Q}, \ref{nodd}, and \ref{mult2} below.
\begin{thm}\label{t+1Q} Let $G$ be a connected graph having $2^ts$ spanning trees with $s$ odd. Then the multiplicity of any even integer $\la$ as an eigenvalue of $Q(G)$ is at most $t+1$.
\end{thm}
\begin{proof}{
It is well known that for a given integral
matrix $A$ of rank $r$, there exist unimodular matrices (that is, integral matrices with
determinant $\pm1$) $U$ and $V$
such that $$UAV={\rm diag}(s_1, \ldots ,s_r, 0, \ldots, 0)$$ where
$s_1, \ldots ,s_r$ are positive integers with $s_1s_2\cdots s_i=d_i$ where $d_i$ is the greatest common divisor of all minors
of $A$ of order $i$, $1\leq i\leq r$.
(The matrix ${\rm diag}(s_1, \ldots ,s_r, 0, \ldots, 0)$ is called the Smith form of $A$.)
Let $S={\rm diag}(s_1, \ldots ,s_{n-1},0)$ be the Smith form of $L$. Note that
${\rm rank}_2(L)={\rm rank}_2(S)$.
By the Matrix-Tree Theorem, $\tau(G)=d_{n-1}=s_1s_2\cdots s_{n-1}$. It follows that at most $t$ of the $s_i$ are even. Therefore, ${\rm rank}_2(L)\ge n-t-1$, and since $Q\equiv L\pmod2$, also ${\rm rank}_2(Q)\ge n-t-1$.
By Lemma~\ref{princ}, $Q$ has a principal
submatrix $B$ of order $n-t-1$ with full binary rank. By interlacing, if an even integer $\la$ is an eigenvalue of $Q$ with multiplicity at least $t+2$, then any principal submatrix of $Q$ of order $n-t-1$ has $\la$ as an eigenvalue.
So $\la$ is an eigenvalue of $B$. This implies that $\det(B)/\la$ is a rational algebraic integer and thus an integer. Hence $\det(B)$ is even, a contradiction. This completes the proof.
}\end{proof}
\begin{rem} The bound `$t+1$' of Theorem~\ref{t+1Q} on the multiplicity of even integer eigenvalues of $Q$ is best possible. Indeed, if we let $G$ be the complete graph of order $n\equiv2\pmod4$, then by Cayley's Formula, $\tau(G)=n^{n-2}=2^{n-2}s$ for some odd $s$, and $Q(G)$ has the even integer $n-2$ as an eigenvalue of multiplicity $n-1$.
\end{rem}
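
A quick numerical check of this remark for $n=6$ (our own, hypothetical verification script):
\begin{verbatim}
import numpy as np

n = 6                                   # n = 2 (mod 4)
A = np.ones((n, n), dtype=int) - np.eye(n, dtype=int)
Q = np.diag(A.sum(axis=1)) + A

tau = n ** (n - 2)                      # Cayley: 1296 = 2^4 * 81, so t = 4
t = 0
while tau % 2 == 0:
    tau //= 2
    t += 1

eig = np.linalg.eigvalsh(Q).round().astype(int)
mult = int((eig == n - 2).sum())
print(t, mult)                          # 4 and 5 = t + 1: the bound is attained
\end{verbatim}
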
Suppose that $G$ is a connected graph with $n$ vertices, $e(G)$ edges and $\A=A(\li(G))$.
By the same argument as the proof of Theorem~\ref{t+1Q}, we see that the multiplicity of any even integer eigenvalue $\la$ of $\A$ is at most $e(G)-\rk_2(\A)$.
Therefore, in view of Theorem~\ref{doob}, the multiplicity of $\la$ is at most $e(G)-2\lceil n/2\rceil+2$. Combining this with Theorem~\ref{t+1} we come up with the following.
\begin{thm} Let $G$ be a connected graph with $n$ vertices, $e(G)$ edges, and $2^ts$ spanning trees with $s$ odd. Then the multiplicity of any even integer $\la\ne-2$ as an eigenvalue of $A(\li(G))$ is at most $\min\{t+1,e(G)-2\lceil n/2\rceil+2\}$.
\end{thm}
In the rest of the paper, we shall need a variation of Theorem~\ref{coef} on the coefficients of the characteristic polynomials of
principal submatrices of order $n-1$ of $L(G)$ and $Q(G)$. For simplicity we denote by $L_1=L_1(G)$ and $Q_1=Q_1(G)$ the matrices obtained from
$L(G)$ and $Q(G)$ by removing the first row and the first column, respectively.
Note that $L_1(G)$ and $Q_1(G)$ are not the same as $L(G-v_1)$ and $Q(G-v_1)$, where $v_1$ is the vertex corresponding to the first row of $L(G)$ and $Q(G)$.
A notion of `restricted weight' with respect to $v_1$ is useful for
describing the coefficients of $p_{L_1}(x)$ and $p_{Q_1}(x)$.
Let $U$ be a unicyclic subgraph of $G$ with odd cycle and $T$ be a tree subgraph of $G$. We define
$$W_1(U)=\left\{\begin{array}{ll} 0 & \hbox{if $U$ contains $v_1$,} \\4 & \hbox{otherwise,}\end{array}\right.~~\hbox{and}~~~
W_1(T)=\left\{\begin{array}{ll} 1 & \hbox{if $T$ contains $v_1$,} \\1+e(T) & \hbox{otherwise.}\end{array}\right.$$
We extend the domain of $W_1$ to all TU-subgraphs $H$ of $G$ by defining $W_1(H)$ to be the product of the $W_1$'s of the connected components of $H$.
\begin{lem}\label{coefnn} Let $p_{L_1}(x)=x^{n-1}+\ell'_1x^{n-2}+\cdots+\ell'_{n-1}$ and $p_{Q_1}(x)=x^{n-1}+p'_1x^{n-2}+\cdots+p'_{n-1}$
be the characteristic polynomials of $L_1$ and $Q_1$, respectively.
Then their coefficients are determined as follows.
\begin{itemize}
\item[\rm(i)] $\ell'_j=(-1)^j\sum_{F_j}W_1(F_j)$, for $j=1,\ldots,n-1$,
where the summation runs over all spanning forests $F_j$ of $G$ with $j$ edges.
\item[\rm(ii)] $p'_j=(-1)^j\sum_{H_j}W_1(H_j)$, for $j=1,\ldots,n-1$,
where the summation runs over all TU-subgraphs $H_j$ of $G$ with $j$ edges.
\end{itemize}
\end{lem}
\begin{proof}{ Let $E=E(G)$ and $V_1=V(G)\setminus\{v_1\}$.
(i)
For $j=1,\ldots,n-1$, we have $$(-1)^j\ell'_j = \sum_{R\subseteq V_1,|R|=j}\det(L(R,R)).$$
From the Binet--Cauchy Theorem it follows that
$$\det(L(R,R)) = \sum_{S\subseteq E,|S|=j}\det(D(R, S))^2.$$
Thus,
\begin{equation}\label{l'j}
(-1)^j\ell'_j = \sum\det(D(R, S))^2,
\end{equation}
where the summation is over $R\subseteq V_1$, $S\subseteq E$ with $|R|=|S|=j$.
Now $\det(D(R,S))^2$ is either $0$ or $1$ by Lemma~\ref{invertD}. Further, it takes the value 1 if and only if the three conditions of Lemma~\ref{invertD} hold.
Hence, if $\det(D(R,S))^2=1$, then $\langle S\rangle$ must be a union of some trees $T_1,\ldots,T_r$.
For such an $S$, the contribution of $\langle S\rangle$ in (\ref{l'j}) is the number of sets $R\subseteq V_1$ with $|R|=j$ that are obtained by omitting one vertex from each $V(T_i)$.
If $v_1$ is not a vertex of $\langle S\rangle$, this number is $(1+e(T_1))\cdots(1+e(T_r))=W_1(\langle S\rangle)$.
Otherwise, assume that $v_1$ is contained in $T_1$. Since $v_1\not\in R$, the vertex omitted from $T_1$ must be $v_1$, so there is only one choice for $T_1$.
For the other components $T_i$, $i=2,\ldots,r$, we have $1+e(T_i)$ ways of omitting one vertex.
It follows that the contribution of $\langle S\rangle$ in (\ref{l'j}) is $(1+e(T_2))\cdots(1+e(T_r))$ which is equal to $W_1(\langle S\rangle)$.
(ii) The proof is similar to that of part (i). The only points different from part (i) are that here we use Lemma~\ref{invertX} instead of Lemma~\ref{invertD} and that if $\langle S\rangle$ is a TU-subgraph and some unicyclic component of $\langle S\rangle$ contains $v_1$, then for any $R\subseteq V_1$, $\det(X(R,S))=0$. Hence any TU-subgraph $\langle S\rangle$ with nonzero contribution in $p'_j$ must have all of its unicyclic components included in $V_1$.
}\end{proof}
\begin{thm}\label{nodd} Suppose that $G$ is a connected graph with an odd order and $\tau(G)$ is not divisible by $4$.
Then
\begin{itemize}
\item[\rm(i)] $L(G)$ has no even integer eigenvalue;
\item[\rm(ii)] $Q(G)$ has no integer eigenvalue $\la\equiv2\pmod4$;
\item[\rm(iii)] $Q(G)$ has at most one eigenvalue $\la\equiv0\pmod4$ and such an eigenvalue is simple.
\end{itemize}
\end{thm}
\begin{proof}{Let $G$ be of order $n$.
(i) We claim that the coefficient $\ell_{n-2}$ of the characteristic polynomial $p_L(x)=x^n+\ell_1x^{n-1}+\cdots+\ell_{n-1}x$ of $L(G)$ is even.
By Theorem~\ref{coef}, we have $\ell_{n-2}=(-1)^{n-2}\sum_{F_{n-2}}W(F_{n-2})$ where the summation runs over all spanning forests $F_{n-2}$ of $G$ with $n-2$ edges.
Any $F_{n-2}$ is necessarily a union of two trees $T_1$ and $T_2$ with $e(T_1)+e(T_2)=n-2$. As $n$ is odd, $e(T_1)+e(T_2)$ is odd, so exactly one of $1+e(T_1)$, $1+e(T_2)$ is even; hence $W(F_{n-2})=(1+e(T_1))(1+e(T_2))$ is even.
This shows that $\ell_{n-2}$ is even.
Let $k$ be an even integer where $k=2^ts$ with $s$ odd and $t\ge1$.
Then all the terms of $p_L(k)$ are divisible by $2^{t+2}$ except possibly the last term $\ell_{n-1}k=(-1)^{n-1}n\tau(G)k$ (for the term $\ell_{n-2}k^2$ this uses that $\ell_{n-2}$ is even); the last term is congruent to $2^tns\tau(G)\pmod{2^{t+2}}$.
Since $ns$ is odd, $p_L(k)\equiv2^{t+1}\pmod{2^{t+2}}$ if $\tau(G)\equiv2\pmod4$, and $p_L(k)$ is congruent to an odd multiple of $2^t$ modulo $2^{t+2}$ if $\tau(G)$ is odd. In either case $p_L(k)\ne0$, so $k$ is not an eigenvalue of $L(G)$. This proves (i).
(ii) From Theorem~\ref{coef} it follows that for some integers $s_1,\ldots,s_n$, we have $p_j=\ell_j+4s_j$, $j=1,\ldots,n-1$, and $p_n=4s_n$.
This implies that $p_Q(x)=p_L(x)+4f(x)$ where $f(x)$ is a polynomial with integer coefficients. First assume that $\tau(G)$ is odd. Hence $\ell_{n-1}=(-1)^{n-1}n\tau(G)$ is an odd integer. It follows that if $k\equiv2\pmod4$, then $p_Q(k)\equiv2\pmod4$, and we are done.
Next assume that $\tau(G)\equiv2\pmod4$.
We claim that $s_n$, the constant term of $f(x)$, is even.
Let $\cal U$ be the set of all spanning unicyclic subgraphs of $G$.
By Theorem~\ref{coef}\,(ii), $s_n$ is congruent modulo $2$ to the number of $U\in\cal U$ whose cycle has an odd length (the spanning TU-subgraphs of $G$ with $n$ edges having two or more components contribute multiples of $16$ to $p_n=4s_n$).
So it suffices to show that this number is even.
Let $\cal F$ be the set of all pairs $(T,U)$ such that $U\in\cal U$ and $T$ is a spanning tree of $U$.
For any fixed $U$, the number of pairs $(T,U)\in\cal F$ is equal to the length of the cycle of $U$.
Therefore, $|\cal F|$ is congruent to $s_n$ mod 2.
On the other hand, for any spanning tree $T$ of $G$, there are exactly $e(G)-n+1$ unicyclic graphs $U\in\cal U$ containing $T$.
Therefore, $|{\cal F}|=\tau(G)(e(G)-n+1)\equiv0\pmod2$.
It follows that $s_n$ is even which in turn implies that $f(k)$ is even.
On the other hand, from the proof of part (i) we see that $p_L(k)\equiv n\tau(G)k\equiv4\pmod8$.
Therefore, $p_Q(k)=p_L(k)+4f(k)\equiv4\pmod8$.
(iii)
Suppose that $Q(G)$
has an even integer eigenvalue $\la$.
By part (ii), $\la\equiv0\pmod4$.
If the multiplicity of $\la$ is more than $1$, then $\la$ is an eigenvalue of $Q_1$.
Note that $\det(Q_1)/\la$ is the product of the remaining eigenvalues of $Q_1$, hence a rational algebraic integer, and so an integer.
It follows that $\la$ divides $\det(Q_1)$, which means $\det(Q_1)\equiv0\pmod4$.
On the other hand, from Lemma~\ref{coefnn} it follows that $p_{Q_1}(x)=p_{L_1}(x)+4f(x)$ for some integer polynomial $f(x)$.
This implies that $\det(Q_1)\equiv\det(L_1)\equiv\tau(G)\pmod4$
which is a contradiction.
}\end{proof}
\begin{rem} Note that if $G$ is a $k$-regular graph, then $2k$ is the largest eigenvalue of $Q(G)$.
Hence if $k$ is even, $Q(G)$ has an eigenvalue divisible by $4$.
This shows that Theorem~\ref{nodd}\,(iii) cannot be improved.
\end{rem}
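
As a sanity check (ours, not part of the paper), the theorem can be observed on $K_5$, which has odd order and an odd number of spanning trees; note that $K_5$ is $4$-regular, so by the remark $Q(K_5)$ indeed has an eigenvalue divisible by $4$, and it is simple.
\begin{verbatim}
import numpy as np

n = 5                                   # K_5: odd order, tau = 5^3 = 125 is odd
A = np.ones((n, n), dtype=int) - np.eye(n, dtype=int)
L = np.diag(A.sum(axis=1)) - A
Q = np.diag(A.sum(axis=1)) + A

print(sorted(np.linalg.eigvalsh(L).round(6)))  # approx [0, 5, 5, 5, 5]: no even integer eigenvalue
print(sorted(np.linalg.eigvalsh(Q).round(6)))  # approx [3, 3, 3, 3, 8]: 8 = 0 (mod 4), but simple
\end{verbatim}
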
\begin{thm}\label{mult2} Let $G$ be a connected graph. If $L(G)$ or $Q(G)$ has an even integer eigenvalue of multiplicity at least $2$, then $\tau(G)$ is divisible by $4$.
\end{thm}
\begin{proof}{Let $G$ be of order $n$. If $n$ is odd we are done by Theorem~\ref{nodd}. So we may assume that $n$ is even.
First, let $\la$ be an even integer eigenvalue of $L(G)$ with multiplicity at least $2$.
By the interlacing property of eigenvalues of Hermitian matrices, $\la$ is also an eigenvalue of $L_1(G)$. Let $p_{L_1}(x)=x^{n-1}+\ell'_1x^{n-2}+\cdots+\ell'_{n-1}$ be the characteristic polynomial of $L_1$.
By the Matrix-Tree Theorem,
$\ell'_{n-1}=(-1)^{n-1}\tau(G)$. We show that $\ell'_{n-2}$ is even.
Any spanning forest of $G$ with $n-2$ edges is a union of two trees $T_1$ and $T_2$ where we may assume that $T_1$ contains the vertex $v_1$, and hence by Lemma~\ref{coefnn}, $\ell'_{n-2}=\sum_{T_1\cup T_2}(1+e(T_2))$.
We note that $(1+e(T_1))(1+e(T_2))\equiv(1+e(T_2))\pmod2$: if $e(T_2)$ is even, then $e(T_1)$ is also even (as $e(T_1)+e(T_2)=n-2$ is even), so both sides are odd; and if $e(T_2)$ is odd, both sides are even.
This implies that
$$\ell'_{n-2}=\sum_{T_1\cup T_2}(1+e(T_2))\equiv\sum_{T_1\cup T_2}(1+e(T_1))(1+e(T_2))=\ell_{n-2} \pmod2.$$
Note that $p_L(x)=(x-\la)^2g(x)$ for some integer polynomial $g(x)$. Since $G$ is connected, $0$ is a simple eigenvalue of $L(G)$, so $\la\ne0$; as $p_L$ has zero constant term, the constant term of $g$ is zero.
If $ax^2+bx$ are the last two terms of $g(x)$, then $\ell_{n-2}=\la^2a-2\la b$. Since $\la$ is even, it follows that $\ell_{n-2}$, and hence $\ell'_{n-2}$, is even.
Since $\la$ and $\ell'_{n-2}$ are even, all terms of $p_{L_1}(\la)$ except the constant term are divisible by $4$. Therefore, $$0=p_{L_1}(\la)\equiv\ell'_{n-1}=(-1)^{n-1}\tau(G)\pmod4,$$ and so $\tau(G)$ is divisible by $4$.
Now let $\la$ be an even integer eigenvalue of $Q(G)$ with multiplicity at least $2$.
So $\la$ is also an eigenvalue of $Q_1(G)$.
From Lemma~\ref{coefnn} it follows that $p_{Q_1}(x)=p_{L_1}(x)+4f(x)$ for some integer polynomial $f(x)$.
Therefore, $0=p_{Q_1}(\la)\equiv p_{L_1}(\la)\equiv\tau(G)\pmod4.$
}\end{proof}
As a very special case of Theorem~\ref{mult2} we come up with the following which was conjectured in \cite{zsyb}.
\begin{cor} Suppose that $G$ is a unicyclic graph and the nullity of $A(\li(G))$ is equal to $2$. Then the length of the unique cycle of $G$ is divisible by $4$.
\end{cor}
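
The corollary can be illustrated on cycles, for which the line graph is the cycle itself (our own, hypothetical check):
\begin{verbatim}
import numpy as np

def cycle_adjacency(k):
    """Adjacency matrix of the cycle C_k; note that L(C_k) = C_k."""
    A = np.zeros((k, k), dtype=int)
    for i in range(k):
        A[i, (i + 1) % k] = A[(i + 1) % k, i] = 1
    return A

for k in (4, 5, 6, 8):
    eig = np.linalg.eigvalsh(cycle_adjacency(k))
    nullity = int(np.sum(np.abs(eig) < 1e-9))
    print(k, nullity)   # nullity 2 occurs exactly for k divisible by 4
\end{verbatim}
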
More general assertions than Theorems~\ref{nodd} and \ref{mult2} hold for the Laplacian matrix. These are given below. We omit the proofs, which are essentially the same as those of
Theorems~\ref{nodd} and \ref{mult2}.
\begin{thm}
Let $G$ be a connected graph of order $n$.
\begin{itemize}
\item[\rm(i)] If $n$ is odd and $\tau(G)=2^ts$ with $s$ odd, then $L(G)$ has no eigenvalue $\la$ with $2^{\max(1,t)}|\la$.
\item[\rm(ii)] If $L(G)$ has an integer eigenvalue $\la=2^ts$ with $t\ge1$, $s$ odd and with multiplicity at least $2$,
then $2^{t+1}|\tau(G)$.
\end{itemize}
\end{thm}
\section*{Acknowledgments}
I would like to thank Ali Mohammadian for drawing my attention to the conjecture given in \cite{zsyb} which motivated me to establish Theorem~\ref{mult2}.
| {
"timestamp": "2013-09-24T02:03:42",
"yymm": "1201",
"arxiv_id": "1201.3221",
"language": "en",
"url": "https://arxiv.org/abs/1201.3221",
"abstract": "For a graph $G$, let $L(G)$ and $Q(G)$ be the Laplacian and signless Laplacian matrices of $G$, respectively, and $\\tau(G)$ be the number of spanning trees of $G$. We prove that if $G$ has an odd number of vertices and $\\tau(G)$ is not divisible by $4$, then (i) $L(G)$ has no even integer eigenvalue, (ii) $Q(G)$ has no integer eigenvalue $\\lambda\\equiv2\\pmod4$, and (iii) $Q(G)$ has at most one eigenvalue $\\lambda\\equiv0\\pmod4$ and such an eigenvalue is simple. As a consequence, we extend previous results by Gutman and Sciriha and by Bapat on the nullity of adjacency matrices of the line graphs. We also show that if $\\tau(G)=2^ts$ with $s$ odd, then the multiplicity of any even integer eigenvalue of $Q(G)$ is at most $t+1$. Among other things, we prove that if $L(G)$ or $Q(G)$ has an even integer eigenvalue of multiplicity at least $2$, then $\\tau(G)$ is divisible by $4$. As a very special case of this result, a conjecture by Zhou et al. [On the nullity of connected graphs with least eigenvalue at least $-2$, Appl. Anal. Discrete Math. 7 (2013), 250--261] on the nullity of adjacency matrices of the line graphs of unicyclic graphs follows.",
"subjects": "Combinatorics (math.CO)",
"title": "Spanning trees and even integer eigenvalues of graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9915543718110063,
"lm_q2_score": 0.7154240018510026,
"lm_q1q2_score": 0.7093817967338871
} |
https://arxiv.org/abs/2208.03450 | An Optimal "It Ain't Over Till It's Over" Theorem | We study the probability of Boolean functions with small max influence to become constant under random restrictions. Let $f$ be a Boolean function such that the variance of $f$ is $\Omega(1)$ and all its individual influences are bounded by $\tau$. We show that when restricting all but a $\rho=\tilde{\Omega}((\log(1/\tau))^{-1})$ fraction of the coordinates, the restricted function remains nonconstant with overwhelming probability. This bound is essentially optimal, as witnessed by the tribes function $\mathrm{TRIBES}=\mathrm{AND}_{n/C\log n}\circ\mathrm{OR}_{C\log n}$.We extend it to an anti-concentration result, showing that the restricted function has nontrivial variance with probability $1-o(1)$. This gives a sharp version of the "it ain't over till it's over" theorem due to Mossel, O'Donnell, and Oleszkiewicz. Our proof is discrete, and avoids the use of the invariance principle.We also show two consequences of our above result: (i) As a corollary, we prove that for a uniformly random input $x$, the block sensitivity of $f$ at $x$ is $\tilde{\Omega}(\log(1/\tau))$ with probability $1-o(1)$. This should be compared with the implication of Kahn, Kalai, and Linial's result, which implies that the average block sensitivity of $f$ is $\Omega(\log(1/\tau))$. (ii) Combining our proof with a well-known result due to O'Donnell, Saks, Schramm, and Servedio, one can also conclude that: Restricting all but a $\rho=\tilde\Omega(1/\sqrt{\log (1/\tau) })$ fraction of the coordinates of a monotone function $f$, then the restricted function has decision tree complexity $\Omega(\tau^{-\Theta(\rho)})$ with probability $\Omega(1)$. | \section{Introduction}
For any Boolean function $f:\{-1,1\}^{n}\to\{0,1\}$, the individual
\emph{influence} of the $i$th coordinate is the probability of flipping
the value of $f$ by flipping $x_{i}$ on a random input $x$. Let $x \oplus (-1)^{e_i}$
denote the string obtained by flipping the $i$th coordinate of $x$, then
\[
\INF_{i}(f):=\PP_{x\in\DC}[f(x)\not= f(x\oplus (-1)^{e_i})].
\]
In this paper, we study Boolean functions with small influences, hence
functions satisfying
\[
\INF_{\infty}(f):=\max_{i\in[n]}\INF_{i}(f)=o(1).\footnotemark
\]
\footnotetext{For the rest of the paper, we consider the function
$f$ as a family of functions. Thus here by $o(\cdot)$, we
mean ``as $n$ goes to infinity.'' The bound $o(1)$ on the influences is
weaker than what is actually needed; for illustration purposes it is good enough, as many examples
of interest in this paper have influences that are $o(1)$.}
Let $\cR_{p}$ denote a $p$-\emph{random} \emph{restriction}, namely,
a randomly-chosen subcube where for each coordinate, one flips a coin
and, with probability $p$ one fixes the value of the coordinate
to $-1$ or $1$ (with equal probabilities) and, with probability
$1-p$, the coordinate is left undetermined (alive). Then $f|_{\cR_{p}}$
is a random sub-function given by restricting $f$ to the subcube.
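
To make the definition concrete, the following Python sketch (our own illustration; the small tribes-like function and all parameters are hypothetical choices) samples $p$-random restrictions and estimates how often the restricted function remains nonconstant.
\begin{verbatim}
import itertools
import random

def tribes(x, w):
    """AND of ORs over consecutive blocks of width w; bits are +-1, with -1 read as 'true'."""
    blocks = [x[i:i + w] for i in range(0, len(x), w)]
    return 1 if all(any(b == -1 for b in blk) for blk in blocks) else 0

def restricted_is_constant(f, n, fixed):
    """fixed: dict {coordinate: +-1}; brute-force check of f over the alive coordinates."""
    alive = [i for i in range(n) if i not in fixed]
    vals = set()
    for assign in itertools.product([-1, 1], repeat=len(alive)):
        x = [0] * n
        for i, v in fixed.items():
            x[i] = v
        for i, v in zip(alive, assign):
            x[i] = v
        vals.add(f(x))
    return len(vals) == 1

n, w, p, trials = 12, 3, 0.7, 2000
random.seed(0)
nonconstant = 0
for _ in range(trials):
    fixed = {i: random.choice([-1, 1]) for i in range(n) if random.random() < p}
    if not restricted_is_constant(lambda x: tribes(x, w), n, fixed):
        nonconstant += 1
print(nonconstant / trials)  # empirical probability that f restricted by R_p is nonconstant
\end{verbatim}
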
In this paper, we study Boolean functions with small influences under
random restrictions. Our main goal is to prove a lower bound for the
probability of the function to remain nonconstant under the restriction.
We prove the following near-optimal result:
\begin{thm}[A simplified version of Theorem~\ref{thm:main}]
\label{thm:mainintro} Let $f:\{-1,1\}^{n}\to\{0,1\}$ be such that
the variance of $f$ is $\Omega(1)$ and $\tau:=\INF_{\infty}(f)=o(1).$
Let $\cR_{1-\rho}$ be a random restriction where
\[
\rho=\Omega\left(\frac{\log\log(1/\tau)}{\log(1/\tau)}\right).
\]
Then for any $p\ge \mINF(f)^{\Theta(\rho)}$,
\[
\PP[\Var[f|_{\cR_{1-\rho}}]\le p^{\tilde{\Theta}\left(\frac{1}{\rho}\right)}]\le p.
\]
\end{thm}
The bound on the variance is near-optimal, as witnessed by the majority function.
In particular, if $f$ is the majority function, then
\[
\PP[\Var[f|_{\cR_{1-\rho}}]\le p^{\Theta\left(\frac{1}{\rho}\right)}]\le p.
\]
Furthermore, our bound on $\rho$ is optimal up to a $\log\log$ factor:
randomly restricting the tribes function with $\rho=O(1/\log(1/\tau))$
yields a constant function with probability $\Omega(1)$. Previously,
Mossel et al. proved a similar result for $\rho=\Omega(1/\sqrt{\log(1/\tau)})$
using completely different techniques~\cite{mossel2010noise}. Prior to
Mossel et al.'s work, the related conjecture, under the very suggestive name of the ``it
ain't over till it's over'' conjecture, was proposed by Kalai and
Friedgut in studying social indeterminacy~\cite{kalai02arrow,kalai04social}.
It implies a quantitative version
of Arrow's Theorem.
more discussions.
Next, we discuss a corollary of this theorem to block sensitivity
of functions with small influences. The \emph{sensitivity }of an input
$x$ with respect to Boolean function $f$, denoted $\sen_{f}(x):=\sum_{i\in[n]}[f(x)\not=f(x\oplus (-1)^{e_i})]$, is the number
of the Hamming neighbors of $x$ which have a different function value.
An inequality by Kahn, Kalai and Linial \cite{KKL88} asserts that
\[
\EE_{x}[s_{f}(x)]=\Omega\left(\log\frac{\Var[f]}{\INF_{\infty}(f)}\right),
\]
which naturally leads to the question of whether it is also true that
$s_{f}(x)=\Omega\left(\log\frac{1}{\INF_{\infty}(f)}\right)$
for a \emph{typical} point $x$. This is clearly not the case, as
witnessed by the majority function. However, a corollary to our theorem
is that such an estimate does indeed hold true for most points $x$,
if sensitivity is replaced by the related notion of \emph{block sensitivity}.
The block sensitivity of an input $x$ with respect to function $f$,
denoted $\bs_{f}(x)$ is the maximum number of disjoint sets $S_{1},S_{2},\ldots,S_{m}\subseteq[n]$,
such that for $i\in[m]$, one has $f(x)\not=f(x\oplus(-1)^{\1_{S_{i}}})$,
where by $x\oplus(-1)^{\1_{S_{i}}}$ we mean flipping the sign of the variables
in $S_{i}$. Clearly,\footnote{By taking singleton sets above.} one has $\bs_{f}(x)\geq s_{f}(x)$ for all $f,x$.
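
For small $n$, block sensitivity can be computed exactly by brute force. The sketch below is our own illustration (the example functions are hypothetical) and treats the problem as a set packing over sensitive blocks.
\begin{verbatim}
from functools import lru_cache

def block_sensitivity(f, x):
    """Exact bs_f(x) for small n: maximum number of disjoint sensitive blocks."""
    n = len(x)
    fx = f(x)
    blocks = []
    for mask in range(1, 1 << n):
        y = [(-x[i] if (mask >> i) & 1 else x[i]) for i in range(n)]
        if f(y) != fx:
            blocks.append(mask)

    @lru_cache(maxsize=None)
    def best(used):
        m = 0
        for b in blocks:
            if b & used == 0:
                m = max(m, 1 + best(used | b))
        return m

    return best(0)

# OR_3 at the all-(+1) point (recall +1 encodes 'false' here):
# flipping any single coordinate changes the value, so bs = s = 3.
f_or = lambda x: 0 if all(v == 1 for v in x) else 1
print(block_sensitivity(f_or, (1, 1, 1)))

# Majority of 5 bits at a nearly balanced point.
maj = lambda x: 1 if sum(x) < 0 else 0
print(block_sensitivity(maj, (1, 1, -1, -1, -1)))
\end{verbatim}
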
Our second result shows that for functions with small influences,
the block sensitivity is large on almost all points $x$:
\begin{thm}
\label{thm:blockintro} Let $f:\{-1,1\}^{n}\to\{0,1\}$ be any function
such that its variance is $\Omega(1)$ and $\tau:=\INF_{\infty}(f)=o(1)$.
Then
\begin{align*}
& \PP_{x}[\mathbf{\bs}_{f}(x)\ge\tilde{\Omega}(\log1/\tau)]=1-o(1).
\end{align*}
\end{thm}
Finally, if the function $f$ is monotone in addition to having small influences,
our analysis of Theorem~\ref{thm:mainintro} implies an upper bound on the influences
of $f$ under random restrictions. In the work due to O'Donnell et al.~\cite{osss2005dt-maxinf},
it is proved that every shallow decision tree must have an influential
variable. Combining these facts, one can also conclude that, for
monotone function $f$, the restricted function will have large decision
tree complexity. In particular, let $\DT(f)$ denote the decision tree complexity of $f$.
Then,
\begin{theorem}
\label{thm:dt-intro}
Let $f:\DC\to\{0,1\}$ be any monotone function with $\Omega(1)$ variance
and $\tau=\mINF(f)=o(1)$. Then for any $\rho=\tilde\Omega(\sqrt{1/\log (1/\tau) })$,
\begin{equation*}
\PP[\DT(f|_{\cR_{1-\rho}})=\tau^{-\Theta(\rho)}]\ge \frac{1}{2}.
\end{equation*}
\end{theorem}
The above theorem is, in a sense, a reverse statement to the H{\aa}stad switching
lemma, which states that applying the $(1-O(1/\log n))$-random restriction
to any polynomial-size DNF/CNF (or in general any $\mathrm{AC}^0$ circuits),
one gets a shallow decision tree with high probability. Our result, on the contrary,
states that random restrictions with
alive probability $\tilde\Omega(1/\log(1/\mINF(f)))$
cannot simplify $f$ to a too shallow decision tree for monotone functions $f$ with low influences.
\subsection*{Context and related works}
The notion of influences studied in this paper was first introduced
by Ben-Or and Linial~\cite{BL85} in the context of \emph{collective
coin flipping.} It coincides with the ``Banzhaf index'' studied in
game theory. The class of Boolean functions with small influences
has been widely studied. There are several motivations to study such
functions. First, they arise naturally in social choice theory~\cite{kalai02arrow,kalai04social}.
For example, in a voting system of two candidates and $n$ voters,
each bit $x_{i}$ represents the individual preference of each voter
between the two candidates. When aggregating the social preference,
it is natural to use a function $f$ where the potential of any given
individual to determine the final outcome is limited. Second, from
an algorithmic perspective, suppose that we have access to the input
via a limited number of queries. Then, it is natural to query a variable
when its individual influence is large. In many cases, such variables
can be found iteratively and this process leads to a good approximation
of $f$ with a small number of queries. This observation has been
applied in different settings~\cite{Friedgut98,AA14}. In computational
complexity, distinguishing the dictatorship function from
functions with small individual influences is a key component of proving
optimal NP-hardness of approximation~\cite{BGS95,has96,has97,KGMO07UG}.
From an analytic perspective, it has been observed that functions
with small influences exhibit improved concentration inequalities
(e.g., \cite{Talagrand94}) and often tend to exhibit Gaussian-like
behavior~\cite{mossel2010noise}.
Applying random restrictions and studying the properties of the restricted
functions has been widely studied and has led to breakthroughs in
a variety of areas. For example, it is the key idea of the exponential
lower bounds in circuit complexity~\cite{hastad1987} and the dramatic
improvements of the sunflower lemma in combinatorics~\cite{ALWZ21sunflower}.
The problem of determining whether a function with small influences
becomes constant under random restrictions has attracted some attention
in the context of hardness amplification within NP for circuits~\cite{ODONNELL04NPHardness}.
A sub-optimal version of Theorem \ref{thm:mainintro} follows
from the ``It ain't over till it's over'' theorem proven by Mossel,
O'Donnell, and Oleszkiewicz in \cite{mossel2010noise}. Their approach
uses the \emph{invariance principle,} which at a high level asserts
that when feeding a ``smooth''\footnote{By ``smooth'' here, we mean $f$
has low degree. With additional work, the invariance principle applies
to $f$ that has its Fourier mass concentrated in low degrees.}
function $f$ with independent random
inputs $X_{1},X_{2},\ldots,X_{n}$ from a product space such that
each $X_{i}$ has zero mean, unit second moment and bounded third
moment, then the output distribution is ``invariant'' to the actual
distribution of the inputs. This approach usually studies a related
problem, then translates the result of the related problem to the
Boolean cube. This translation suffers from two drawbacks. First,
it obscures what is actually happening in the Boolean cube. Second, the
requirement of $f$ being ``smooth'' normally requires additional
technical treatment, and becomes the main obstacle to obtaining an
optimal result.
\subsection*{Our approach}
Our approach relies on a control-theory point of view to the problem combined with ideas from ``pathwise-analysis,'' using arguments which are somewhat inspired by~\cite{EG18talagrand-ineq}. We assume that the coordinates are revealed in a random order and are randomly assigned values $\pm1$ one by one. For each coordinate being revealed,
we assume that with probability $\rho$ a player gets an opportunity
to ``override'' the value that has been assigned to that coordinate.\footnotemark
If the player has the capability of deciding, with high probability,
the value of the function, this implies that restricting all but a
$\rho$-fraction of coordinates leaves the restricted function nonconstant.
\footnotetext{We note that Lichtenstein-Linial-Saks~\cite{LLS89control} also study a
control theoretic problem, which on the surface may seem similar. The main difference
is that in their model the player picks which coordinates to influence,
whereas in our model these coordinates are picked randomly, as we
will see momentarily. The two models differ drastically in their nature.}
To this end, assume that $X(t)\in\{-1,0,1\}^{n}$ is the process where
at each step another coordinate is being revealed, where coordinates
whose value is not determined are set to $0$. We view our Boolean
function $f$ as a function $f:\RR^{n}\to\RR$ by considering its
multilinear extension. If the player does not override any coordinate,
then the process $M(t):=f(X(t))$ is a martingale (where, for $x\in\{-1,0,1\}^{n}$,
the expression $f(x)$ is the value of $f$ when taking expectation
over coordinates whose value is set to $0$). The player's ability to override coordinates effectively allows the
player to add a drift to $M(t)$, where the player's goal is to end
up with $M(n)$ being equal to $1$ (or, by the same argument, to
$0$ simply by replacing $f$ with $1-f$).
At this point, let us assume for simplicity that the increments of the martingale
$M(t)$ have a fixed step size $\eta$ (in other words, assume that it is actually
a random walk up to the time when it hits $\{0,1\}$). Moreover assume that $M(0)$
is bounded away from $\{0,1\}$. Suppose that the player has probability $\rho$ to
override each step and is trying to force the process to end up at the value $1$
by overriding the increment with the value $+\eta$. In this case over $t$ steps
the process accumulates a variance of $\eta^{2}t$ and a drift of size $t\rho\eta$. Since the
process eventually moves a distance of $\Omega(1)$, and hence accumulates a variance
of constant order, we have $t\asymp\eta^{-2}$. It follows that in order for the
effect of the drift to be more significant than that of the variance, one arrives
at the condition $\rho\gg\eta$. In other words, the process
can be efficiently controlled (meaning that the player gets to determine its endpoint)
as long as the step size is at most $\rho$. In fact, we will see that this heuristic is only correct when
$M(t)$ is not close to the edges, which will create an additional
technical complication.
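
The following toy simulation (entirely our own illustration, with arbitrary parameters) mirrors this heuristic: a $\pm\eta$ random walk started at $1/2$, in which each step is overridden to $+\eta$ with probability $\rho$, is driven to $1$ before $0$ almost always once $\rho\gg\eta$.
\begin{verbatim}
import random

def controlled_walk(eta, rho, trials=1000, start=0.5, seed=1):
    random.seed(seed)
    wins = 0
    for _ in range(trials):
        m = start
        while 0.0 < m < 1.0:
            if random.random() < rho:
                step = eta                         # player overrides the increment
            else:
                step = random.choice([-eta, eta])  # uncontrolled fair step
            m += step
        wins += (m >= 1.0)
    return wins / trials

for eta, rho in [(0.01, 0.001), (0.01, 0.1)]:
    print(eta, rho, controlled_walk(eta, rho))  # the drift dominates once rho >> eta
\end{verbatim}
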
The step size of the process is, in turn, controlled by the $\ell_{\infty}$
norm of the first-order Fourier coefficients of restrictions of the
function $f$, or equivalently by the quantity $\max_{i}|\partial_{i}f(X(t))|$,
where $i$ is over the coordinates not fixed at time $t$.
We need to show that this quantity remains small along the process,
which is where the fact that the initial influences are small will be used.
The control of the first-order Fourier coefficients relies on a new
\emph{hypercontractive} \emph{inequality} for random restrictions.
We consider random restrictions $\cR{}_{p},\cR_{q}$, where $0\le p\le q\le1$
are the probabilities of a variable being fixed, and show that for
any multilinear function $f$ and any $0\le\epsilon\le q-p,$
\begin{equation}
(\EE[\mu(f|_{\cR_{p}})^{2+\epsilon}])^{\frac{1}{2+\epsilon}}\le(\EE[\mu(f|_{\cR_{q}})^{2}])^{\frac{1}{2}},
\end{equation}
where we use $\mu(f)$ to denote the expected value of $f$ over
the uniform measure on $\DC$.
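
For a small function the two sides of this inequality can be computed exactly by enumerating all restriction patterns in $\{-1,1,\ast\}^n$; the Python sketch below is our own check, with an arbitrary choice of $f$, $p$, $q$ and $\epsilon$.
\begin{verbatim}
from itertools import product

def restriction_moment(f, n, p, power):
    """E[ mu(f restricted by R_p)^power ], by exact enumeration of patterns in {-1,+1,*}^n."""
    total = 0.0
    for pattern in product((-1, 1, None), repeat=n):
        prob = 1.0
        for c in pattern:
            prob *= (1 - p) if c is None else p / 2
        alive = [i for i in range(n) if pattern[i] is None]
        vals = []
        for assign in product((-1, 1), repeat=len(alive)):
            x = list(pattern)
            for i, v in zip(alive, assign):
                x[i] = v
            vals.append(f(x))
        total += prob * (sum(vals) / len(vals)) ** power
    return total

f = lambda x: 1 if sum(x) < 0 else 0   # majority of 3 bits, with -1 read as 'true'
n, p, q = 3, 0.3, 0.6
eps = q - p
lhs = restriction_moment(f, n, p, 2 + eps) ** (1 / (2 + eps))
rhs = restriction_moment(f, n, q, 2) ** (1 / 2)
print(lhs, rhs, lhs <= rhs + 1e-12)    # checks the displayed inequality on this example
\end{verbatim}
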
The hypercontractive inequality will allow us to control the evolution
of the first-order coefficients under the original (namely, the uncontrolled)
martingale. However, we need to control those coefficients under the
``controlled'' process (where the player gets to override some coordinates).
This can be solved by assuming that the strategy taken by the player
tries to mimic yet another process obtained by \emph{conditioning} the original
martingale $M(t)$ to end up at the value $1$ ($0$, respectively).
This amounts to a change of measure over the space of paths of $X(t)$
which gives tractable formulas for the corresponding change of measure
of a single step. Equivalently, this is the strategy which ensures
ending up at the desired value under a change of measure which has
the minimal possible relative entropy to the uncontrolled process.
Finally, we explain how to strengthen the above result to give
a quantitative bound on the variance of the restricted
function. We analyze the Kullback-Leibler divergence between, roughly
speaking, the string $Y(n)$ generated by the ``controlled'' process
given the restrictions $\cR_{1-\rho}$ determined by those coordinates
that are not controlled by the player, and a uniformly random string
$X\in\mopo^{n}$. With the Fourier-analytic tool of the Level-1 inequality, one can show
that the expected KL-divergence over the random restrictions is about
$\tilde{O}(1/\rho)$. Somewhat surprisingly, the KL-divergence is,
in addition to being small in expectation, highly concentrated. Recall
that $Y(n)$ is sampled from $f^{-1}(1)$. All these imply that $\mu(f|_{\cR_{1-\rho}})\ge\exp(-\tilde{O}(1/\rho))$
with high probability. The variance bound then follows
once we combine this with the bound in the other direction, $\mu(f|_{\cR_{1-\rho}})\le1-\exp(-\tilde{O}(1/\rho))$,
obtained by replacing $f$ with $1-f$.
\subsection*{Organization}
We present the necessary preliminaries in Section~\ref{sec:Preliminaries}.
Then in Section~\ref{sec:control}, we carefully define the uncontrolled and
controlled process discussed in the introduction and we study the properties
of these random processes. With this tool at our disposal, we prove our main
result Theorem~\ref{thm:mainintro} in Section~\ref{sec:appl}. Then we explain
the applications of this result to the block sensitivity and decision tree
complexity. We leave to the final section,
Section~\ref{sec:hypercontractive}, the technical analysis showing that the first-order Fourier coefficients
remain small under random restrictions.
\section{Preliminaries\label{sec:Preliminaries}}
\subsection*{General}
We adopt the shorthand notation $[n]$ for the set $\{1,2,\ldots,n\}.$
For a string $x\in\{-1,1\}^{n}$ and a set $S\subseteq\{1,2,\ldots,n\},$
we let $x|_{S}$ denote the restriction of $x$ to the indices in
$S.$ In other words, $x|_{S}=x_{i_{1}}x_{i_{2}}\ldots x_{i_{|S|}},$
where $i_{1}<i_{2}<\cdots<i_{|S|}$ are the elements of $S.$ Analogously,
for any function $f:\Omega\to\RR$ over an arbitrary domain $\Omega$
and any $A\subseteq\Omega$, we adopt the notation $f|_{A}$ for the sub-function
of $f$ over the domain $A$. Namely, $f|_{A}(x)=f(x)$ for $x\in A.$
Given a set $S$, when the universe $U$ is clear from the context
we use $\bar{S}:=U\setminus S$ to denote the complement of $S$.
The \emph{characteristic function} of a set $S$ is given by
\[
\1_{S}(i)=\begin{cases}
1 & \text{if }i\in S,\\
0 & \text{otherwise.}
\end{cases}
\]
For a permutation $\pi:U\to U$, let $\pi S$ be the permuted set
of $S$, i.e., $\pi S=\{\pi(i):i\in S\}.$
The primary interest of this paper is Boolean functions $f:\{-1,1\}^{n}\to\{0,1\}.$
Note that we use $-1,1$ to denote ``true'' and ``false'' on the
domain of $f$, respectively. For example, the logic AND function
and the logic OR function are defined as below,
\begin{align*}
\bigwedge_{i=1}^{n}x_{i}=\begin{cases}
1 & x_{i}=-1\,\,\forall i\in[n],\\
0 & \text{otherwise},
\end{cases}\qquad & \bigvee_{i=1}^{n}x_{i}=\begin{cases}
0 & x_{i}=1\,\,\forall i\in[n],\\
1 & \text{otherwise}.
\end{cases}
\end{align*}
We abuse the notation $x\oplus y$ to denote the entrywise XOR function
for $x,y\in\DC$. Thus $(x\oplus y)_{i}=x_{i}\cdot y_{i}.$ For any
univariate function $h:\RR\to\RR$ and $x\in\RR^{n},$ the application
of $h$ to $x$ means entrywise application, i.e., $h(x)$ denotes
the vector such that $(h(x))_{i}=h(x_{i})$. For any $f:\{-1,1\}^{n}\to\RR,$
$S\subseteq\{1,2,\ldots,n\}$ and $y\in\{-1,1\}^{n}$, let $\cube{S}{y}:=\{x\in\DC:x|_{S}=y|_{S}\}$
be the subcube of $\DC.$ We abbreviate $f|_{(S,y)}=f|_{\cube{S}{y}}.$
The same definition $f|_{(S,y)}$ extends to $y\in\RR^{T}$ for any
$T\supseteq S$ such that $y|_{S}\in\{-1,1\}^{S}$. A random
$p$-restriction $\cR_{p}$ is a random tuple $(S,y)$ such that for each
$i\in[n],$ $i\in S$ with independent probability $p$, and $y$ is a uniformly
random element from $\{-1,1\}^{n}.$
For a logical condition $C,$ we use the Iverson bracket
\[
\II\{C\}=\begin{cases}
1 & \text{if \ensuremath{C} holds,}\\
0 & \text{otherwise.}
\end{cases}
\]
Denote $|x|$ the length of $x$ for any vector $x\in\RR^{n}$, i.e.,
\[
|x|=\left(\sum_{i=1}^{n}x_{i}^{2}\right)^{1/2}.
\]
For two vectors $x,y\in\RR^{n},$ we adopt the following inner product
\[
\langle x,y\rangle=\sum_{i\in[n]}x_{i}y_{i}.
\]
The set $\{e_{1},e_{2},\ldots,e_{n}\}$ forms a standard basis, where
$e_{i}$ denotes the vector whose only nonzero coordinate $i$ is
1.
Given some discrete space $\Omega$ and a probability measure $\gamma$ over $\Omega$.
If the random variable $X$ is drawn from $\gamma$, we denote it
by $X\sim\gamma$. For any function $f:\Omega\to\RR$, we often abbreviate
the expectation of $f$ over $\gamma$ as $\gamma(f),$ namely,
\[
\gamma(f):=\sum_{x\in\Omega}f(x)\gamma(x).
\]
We let $\ln x$ and $\log x$ stand for the natural logarithm of $x$
and the logarithm of $x$ to base $2,$ respectively. For any distribution
$\gamma$ over some discrete space $\Omega$, the entropy function is
\[
H(\gamma)=\sum_{x\in\Omega}\gamma(x)\log\frac{1}{\gamma(x)}.
\]
When $\Omega$ contains only two elements, we can think of the binary
entropy function $H\colon[0,1]\to[0,1]$ as given by
\[
H(x)=x\log\frac{1}{x}+(1-x)\log\frac{1}{1-x}.
\]
Basic calculus reveals that for $x\in[0,1],$
\begin{equation}
1-H(x)\leq4\left(x-\frac{1}{2}\right)^{2}.\label{eq:entropy-upper-bound}
\end{equation}
Recall that the Kullback-Leibler divergence (KL-divergence) between
two distributions $\mu_{0},\mu_{1}$ over $\Omega$ is defined by
the following formula
\[
\KL{\mu_{0}}{\mu_{1}}=\sum_{x\in\Omega}\mu_{0}(x)\log\frac{\mu_{0}(x)}{\mu_{1}(x)}.
\]
The KL-divergence is convex. In particular, let $\mu_{0},\mu_{1}$,
$\gamma_{0},\gamma_{1}$ be distributions over the same space. Then
for any $\lambda\in[0,1],$
\begin{align*}
& \KL{\lambda\mu_{0}+(1-\lambda)\mu_{1}}{\lambda\gamma_{0}+(1-\lambda)\gamma_{1}}\\
& \qquad\qquad \le\lambda\KL{\mu_{0}}{\gamma_{0}}+(1-\lambda)\KL{\mu_{1}}{\gamma_{1}}.
\end{align*}
If two random variables $X_{0},X_{1}$ obey $\mu_{0}$ and $\mu_{1}$,
respectively, we also use $\KL{X_{0}}{X_{1}}$ to denote the KL-divergence
between the two distributions. The KL-divergence satisfies the following
chain rule:
\[
\KL{X_{0}Y_{0}}{X_{1}Y_{1}}=\KL{X_{0}}{X_{1}}+\EE_{x\sim X_{0}}\left[\KLfrac{Y_{0}\mid X_{0}=x}{Y_{1}\mid X_{1}=x}\right].\footnotemark\textsuperscript{,}\footnotemark
\]
\addtocounter{footnote}{-1}
\footnotetext{
We refer the interested readers to~\cite{cover2006elements} for a complete treatment on these facts.
}
\addtocounter{footnote}{1}
\footnotetext{Here, we use the fraction-like notation to also denote the
KL-divergence for aesthetics, as we are comparing two conditional distributions.
The numerator in the fraction-like notation corresponds to the first argument
in the standard notation.}
The following simple analytical fact will be useful for us.
\begin{fact}
\label{fact:inequality-a} Given $x,p\in\RR$, the following hold.
\begin{enumerate}
\item $(1+x)^{p}\ge1+xp,$ for any $x>-1,$ and $p\ge1$.
\item $(1+x)^{p}\le1+xp,$ for any $x>-1,$ and $0\le p\le1$.
\end{enumerate}
\end{fact}
\begin{proof}
Let $g(x)=(1+x)^{p}-1-xp$, viewed as a function of $x.$ Then
\begin{equation}
g'=p(1+x)^{p-1}-p.\label{eq:partial-g-partial-a}
\end{equation}
When $p>1$, (\ref{eq:partial-g-partial-a}) is negative for $x\in(-1,0)$
and nonnegative for $x\ge0$. Thus, $g$ is decreasing on the interval
$(-1,0)$ and increasing on $(0,\infty)$. Plugging $x=0$ into $g$,
we get $0$, so $g\ge0$ on $(-1,\infty)$; therefore, $(1+x)^{p}\ge1+xp$. When $0<p<1,$ (\ref{eq:partial-g-partial-a})
is positive for $x\in(-1,0)$ and nonpositive for $x\ge0.$ Hence
$g$ attains its maximum on $(-1,\infty)$ at the point $x=0$, which gives $(1+x)^{p}\le1+xp$.
\end{proof}
\subsection*{Discrete Fourier analysis}
Let $f:\DC\to\{0,1\}$ be any Boolean function. We would often treat
$f$ as a function $f:\CC\to[0,1]$ or $f:\RR^{n}\to\RR$ by considering
its multilinear extension, i.e.,
\[
f(x)=\sum_{S\subseteq[n]}\hat{f}(S)\chi_{S},
\]
here $\chi_{S}$ is the abbreviation of $\prod_{j\in S}x_{j}.$ An
important observation is that under this notation,
\[
f(0)=\EE_{x\in\{-1,1\}^{n}}[f(x)].
\]
The set $\{\chi_{S}\}_{S\subseteq[n]}$ is a complete orthogonal basis
of the space $\RR^{\{-1,1\}^{n}}.$ Further,
\[
2^{-n}\langle\chi_{S},\chi_{T}\rangle=\begin{cases}
1 & S=T,\\
0 & S\not=T.
\end{cases}
\]
Thus, for any $f,g:\{-1,1\}^{n}\to\RR,$ we have the following by
Parseval's identity and Plancherel Theorem,
\begin{align}
& 2^{-n}\langle f,g\rangle=\sum_{S\subseteq[n]}\hat{f}(S)\hat{g}(S).\label{eq:parseval}\\
& \EE_{x\in\{-1,1\}^{n}}[f^{2}]=\sum_{S\subseteq[n]}\hat{f}(S)^{2}.\label{eq:plancherel}
\end{align}
Adopt the following notations for partial derivatives and the vector
differential operator,
\begin{align*}
& \partial_{i}f(x)=\sum_{S\ni i}\hat{f}(S)\chi_{S\setminus\{i\}},\\
& \nabla f=(\partial_{1}f,\partial_{2}f,\ldots,\partial_{n}f).
\end{align*}
For functions on Boolean cubes, by considering their multilinear extensions
it's easy to see that the above definitions work exactly as expected:
For any $\delta\in\RR,$
\[
f(x+\delta e_{i})-f(x)=\delta\cdot\partial_{i}f(x).
\]
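
The following short Python sketch (our own illustration) computes the Fourier coefficients of a function given by a truth table and checks the two identities above, namely $f(0)=\EE_x[f(x)]$ and $f(x+\delta e_i)-f(x)=\delta\,\partial_i f(x)$.
\begin{verbatim}
from itertools import product, chain, combinations
from math import prod
import random

def subsets(n):
    return chain.from_iterable(combinations(range(n), r) for r in range(n + 1))

def fourier_coefficients(f, n):
    """hat f(S) = E_x[ f(x) * prod_{j in S} x_j ] over the uniform cube."""
    cube = list(product((-1, 1), repeat=n))
    return {S: sum(f(x) * prod(x[j] for j in S) for x in cube) / len(cube)
            for S in subsets(n)}

def multilinear(coeffs, x):
    return sum(c * prod(x[j] for j in S) for S, c in coeffs.items())

def partial(coeffs, i, x):
    return sum(c * prod(x[j] for j in S if j != i)
               for S, c in coeffs.items() if i in S)

n = 3
maj = lambda x: 1 if sum(x) < 0 else 0
coeffs = fourier_coefficients(maj, n)

print(multilinear(coeffs, (0, 0, 0)))      # equals E[f] = 1/2 for majority of 3
x = [random.uniform(-1, 1) for _ in range(n)]
delta, i = 0.37, 1
lhs = multilinear(coeffs, [v + (delta if j == i else 0)
                           for j, v in enumerate(x)]) - multilinear(coeffs, x)
print(abs(lhs - delta * partial(coeffs, i, x)) < 1e-9)   # finite-difference identity
\end{verbatim}
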
An important fact about the weight of the Fourier coefficients is
the following inequality, often referred to as the Level-1 inequality.
\begin{thm}[Level-1 inequality~\cite{talagrand1996much}]
\label{thm:level-1-ineq} Let $f:\CC\to\{0,1\}$ be the multilinear
extension of a Boolean function. Then for some absolute constant $C$,
we have
\[
|\nabla f(0)|^{2}\le Cf(0)^{2}\log\frac{e}{f(0)}.
\]
\end{thm}
We adopt the following standard definitions of the individual influence
and the max influence of function $f$:
\begin{align*}
& \INF_{i}(f)=\EE_{x\in\DC}[\partial_{i}f(x){}^{2}].\\
& \mINF(f)=\max_{i\in[n]}\INF_{i}(f).
\end{align*}
By Plancherel Theorem,
\[
\INF_{i}(f)=\sum_{S\subseteq[n]:\,i\in S}\hat{f}(S)^{2}.
\]
The variance of $f$ is the following
\[
\Var[f]=\EE_{x\in\DC}[f^{2}]-\EE_{x\in\DC}[f]^{2}.
\]
It is clear that
\[
\Var[f]\le\sum_{i=1}^{n}\INF_{i}(f).
\]
Below is a straightforward corollary of the above inequality.
\begin{fact}
\label{fact:mINF-bound}If $\Var[f]=2^{-o(n)},$ then
\[
\mINF(f)=2^{-o(n)}.
\]
\end{fact}
\subsection*{Martingales}
Recall that a discrete-time martingale is a sequence of random variables
$X_{0},X_{1},X_{2},\ldots,$ that satisfies
\begin{itemize}
\item For each $n=0,1,2,\ldots,$ $\EE[|X_{i}|]<\infty$.
\item For any $m<n$, $\EE[X_{n}\mid X_{m}]=X_{m}.$
\end{itemize}
A continuous-time martingale is a stochastic process $(X_{t})_{t\ge0}$
such that
\begin{itemize}
\item For any $t$, $\EE[|X_{t}|]<\infty$.
\item For any $s<t$, $\EE[X_{t}\mid X_{s}]=X_{s}.$
\end{itemize}
A submartingale is a stochastic process with the second property from
the above definition replaced by
\[
\EE[X_{t}\mid X_{s}]\ge X_{s}.
\]
\begin{fact}
\label{fact:martingales}Let $X_{t},Y_{t}$ be martingales.
\begin{enumerate}
\item $aX_{t}+bY_{t}$ is a martingale for any constants $a,b$. Moreover, for the coordinate-revealing processes considered below, in which at each step at most one coordinate changes and its conditional mean equals its previous value, the product $X_{t}\cdot Y_{t}$ of two distinct coordinate processes is also a martingale; hence any multilinear function of the coordinates
is a martingale.
\item If $f:\RR\to\RR$ is a convex function, then the process $f(X_{t})$
is a submartingale.
\end{enumerate}
\end{fact}
The stopping time $\tau$ of a stochastic process is a random variable
such that the event $\{\tau\le t\}$ is completely determined by $X_{\le t}$.
Given two stopping times $\tau_{1},\tau_{2},$ let $\tau_{1}\land\tau_{2}$
denote the new stopping time $\min\{\tau_{1},\tau_{2}\}$. For martingales,
we have the optional stopping theorem.
\begin{thm}[Stopping Theorem]
Let $X$ be a martingale and $\tau$ a stopping time. If $\tau$ is almost surely bounded, then
\[
\EE[X_{\tau}]=\EE[X_{0}].
\]
\end{thm}
For submartingales, the equality is replaced by a greater-than inequality.
Finally, the following inequalities will be useful
for us.
\begin{thm}[Doob's martingale inequality]
\label{thm:Doob-ineq}Let $X$ be a submartingale taking real values.
Then for any constant $C>0,$
\[
\PP\left[\sup_{0\le t\le T}X_{t}\ge C\right]\le\frac{\EE[\max\{X_{T},0\}]}{C}.
\]
\end{thm}
\begin{thm}[Concentration inequality~{\cite[Theorem 2.21]{chung2006complex}}]
\label{thm:concentration}Let $X_{0},X_{1},\ldots,X_{n}$ be a martingale
with filtration $\cF$, such that for $i=1,2,\ldots,n$
\begin{align*}
& \Var[X_{i}\mid\cF_{i-1}]\le\sigma_{i}^{2},\\
& X_{i}-X_{i-1}\le M.
\end{align*}
Then
\begin{align*}
\PP[X_{n}-X_{0}\ge\lambda] & \le\exp\left(-\frac{\lambda^{2}}{2\sum\sigma_{i}^{2}+2M\lambda/3}\right).
\end{align*}
\end{thm}
Finally, we should warn the readers that in this paper, often $X$
is a vector and the subscripts are used for coordinates. In that case,
the random process $X$ is denoted $X(t),$ and $X_{i}(t)$
denotes the evolution of each individual coordinate.
\section{Controlled Process\label{sec:control}}
Fix a function $f:\{-1,1\}^{n}\to\{0,1\}$, and we view $f:\RR^{n}\to\RR$
by considering its multilinear extension. We assume that $f$ is not
a constant function. Therefore $f(0)>0$. In this section, we will
consider three different discrete random processes.
The first one is the uniform process $X(t)\in\{-1,0,1\}^n$ for $t\in\{0,1,\ldots,n\}$.
It's called the uniform process because $X(n)$ will be a uniformly random
string from $\DC$. The second process $Y(t)$ is obtained from $X(t)$ by
conditioning on $f(X(n))=1$. Therefore, we call $Y(t)$ the conditioned
process. The third process is in effect the same as the second process. They have
identical distributions. However, we will take the control theory perspective,
and give a player a small number of random coordinates to control. We show that the player will
be able to turn what would otherwise be the uniform process into
the conditioned process. Therefore we sometimes call the third process the controlled
process.
First, we consider the following uniform process $X(t)$
for $t=0,1,2,\ldots,n$, such that $X(t)\in\{-1,0,1\}^{n}$, and $X(0)=0^{n}$.
\vspace{1.5mm}
\noindent\fbox{\begin{minipage}[t]{1\textwidth - 2\fboxsep - 2\fboxrule}%
\textbf{\uline{Procedure 1}} (To generate the discrete uniform
process $X(t)$): \vspace{0.5mm}
Sample a uniformly random permutation $\pi:[n]\to[n].$
For time $t=1,2,\ldots,n$
\begin{itemize}
\item Let $i=\pi(t).$ Set $X_{i}(t)$ to be $-1$ or $1$ uniformly at
random.
\item For all $j\in[n]\setminus\{i\}$, set $X_{j}(t):=X_{j}(t-1)$.
\end{itemize}
\end{minipage}}
\vspace{1.5mm}
Clearly, the above process is just another way to sample a random
element from $\{-1,1\}^{n}$. We use the notation $P$ to denote the
probability measure over the paths of the above process. The subscript
$P$ will be used to emphasize the underlying process and the corresponding
measure. For example, $\Exp_{P}[f],\Pr_{P}[\cE]$ are the expectation
of the function $f$ and the probability of the event $\cE$, respectively,
both defined over the space of the paths of the above process $X(t)$.
A crucial component of our analysis is that all the partial derivatives
of $f(X(t))$ will be small with high probability for $t$ even very
close to $n$. We formulate it as the following lemma, whose proof
requires some technical preparations, and is therefore deferred to Section~\ref{subsec:proof-influence-small}.
\begin{lem}
\label{lem:influence-remain-small-P}Let $\epsilon>0$\footnote{Throughout this section, let's assume that $\epsilon n$ is a positive
integer.} be such that
\[
\frac{16}{\epsilon}\ln\frac{4}{\epsilon}\le\ln\frac{1}{\mINF(f)}.
\]
Then for any $\theta\in(0,1)$,
\begin{align*}
\Pr_{P}\left[\max_{0\le t\le(1-\epsilon)n}\max_{i\in[n]}|\partial_{i}f(X(t))|\ge\theta\right] & \le\theta^{-3}\mINF(f)^{\frac{\epsilon}{16}}+\exp(-\epsilon n/8).
\end{align*}
\end{lem}
Next, we modify Procedure 1 to generate what we call the conditioned process.
The goal is to guarantee that
the new process ends up being a random element sampled from $f^{-1}(1)$.
We use $Y(t)$ to distinguish this new process from $X(t)$. Let $Q$
be a new probability measure defined by the equation
\begin{equation}
\Pr_{Q}[Y_{i}(t)=\pm1\mid Y(t-1),\pi(t)]:=\frac{1}{2}\pm\frac{\partial_{i}f(Y(t-1))}{2f(Y(t-1))}.\label{eq:defQ}
\end{equation}
A calculation shows that the Radon--Nikodym derivative of the two measures
satisfies that for any realization $y(1),y(2),\ldots,y(s)\in\{-1,0,1\}^{n}$
of the process $Y(t)$ up to time $s$,
\begin{align}
\frac{dQ\bigl((y(t))_{1\leq t\leq s}\bigr)}{dP\bigl((y(t))_{1\leq t\leq s}\bigr)} & =\prod_{t=1}^{s}2\Pr_{Q}[Y_{\pi(t)}(t)=y_{\pi(t)}(t)\mid Y(t-1)=y(t-1)]\nonumber \\
& =\prod_{t=1}^{s}\left(1+y_{\pi(t)}(t)\frac{\partial_{\pi(t)}f(y(t-1))}{f(y(t-1))}\right)\nonumber \\
& =\prod_{t=1}^{s}\frac{f(y(t))}{f(y(t-1))}\nonumber \\
& =\frac{f(y(s))}{f(0)}.\label{eq:Radon-Nykodym-derivative}
\end{align}
\iffalse Note that $f(X(t))$ is a martingale for any multilinear
function $f$. Thus~(\ref{eq:Radon-Nykodym-derivative}) defines
a valid change of measure. In particular, fix some time $t\in[n]$
and let $i=\pi(t),$ then
\begin{align*}
\Pr_{Q}[X_{i}(t) & =1\mid X(t-1),\pi(t)]\\
& =\frac{1}{2}\cdot\frac{f(X(t-1)+e_{i})}{f(X(t-1)}\\
& =\frac{f(X(t-1))+\partial_{i}f(X(t-1))}{2f(X(t-1))}\\
& =\frac{1}{2}+\frac{\partial_{i}f(X(t-1)}{2f(X(t-1))}.
\end{align*}
Similarly,
\[
\Pr_{Q}[X_{i}(t)=-1\mid X(t-1),\pi(t)]=\frac{1}{2}-\frac{\partial_{i}f(X(t-1))}{2f(X(t-1))}.
\]
In words, in the new process at any time $t=1,2,\ldots,n$, the probability
of getting $Y(t)$ is proportional to the size of $f^{-1}(1)\cap C(t)$,
where $C(t)=\{x\in\{-1,1\}^{n}:x_{i}\cdot Y_{i}(t)\ge0\}$ is the
subcube fixed by $Y(t)$. \fi By taking $s=n$ above, we see that
the process $Y(t)$ according to $Q$ is equivalent to the same process
$X(t)$ according to $P$, only conditioned on the event that $f(X(n))=1$.
In particular, according to $Q$, $Y(n)$ is just a uniformly random
element from $f^{-1}(1)$. Further, if we sample $Y(t)$ and let $\cR(t)$ be
the restriction induced by $Y(t)$, then $(f|_{\cR(t)})^{-1}(1)$
is nonempty for any $t$ as long as $f$ is not the constant $0$
function. We record this simple but useful observation that $Q$ is a mild change
of measure of $P$.
\begin{claim}
\label{claim:closeness-P-Q}Let $\cE_{t}$ be some event that depends
only on the paths of the random process up to time $t$, e.g., $X(1),X(2),\ldots,X(t)$
according to $P$ or $Y(1),Y(2),\ldots,Y(t)$ according to $Q$. Then
for any $t\in[n],$
\[
\Pr_{Q}[\cE_{t}]\le\frac{\Pr_{P}[\cE_{t}]}{f(0)}.
\]
\end{claim}
\begin{proof}
This is immediate from~(\ref{eq:Radon-Nykodym-derivative}),
\[
\Pr_{Q}[\cE_{t}]=\Exp_{P}\left[\II\{\cE_{t}\}\cdot\frac{dQ}{dP}\right]\le\frac{\Pr_{P}[\cE_{t}]}{f(0)}.\qedhere
\]
\end{proof}
\noindent We summarize the distribution of the ``conditioned'' process $Y(t)$ according
to $Q$:
\vspace{1.5mm}
\noindent\fbox{\begin{minipage}[t]{1\columnwidth - 2\fboxsep - 2\fboxrule}%
\textbf{\uline{Procedure 2}} (To generate the conditioned process $Y(t)$): \vspace{0.5mm}
Sample a uniformly random permutation $\pi:[n]\to[n].$
For time $t=1,2,\ldots,n$:
\begin{itemize}
\item Let $i=\pi(t).$ Set $Y_{i}(t)$ according to the following distribution
\begin{align*}
& \PP[Y_{i}(t)=\pm1]=\frac{1}{2}\pm\frac{\partial_{i}f(Y(t-1))}{2f(Y(t-1))},
\end{align*}
\item For all $j\in[n]\setminus\{i\}$, set $Y_{j}(t):=Y_{j}(t-1)$.
\end{itemize}
\end{minipage}}
\vspace{1.5mm}
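
Procedure 2 is straightforward to implement; the Python sketch below (our own illustration, with a hypothetical test function) samples the conditioned process via a brute-force multilinear extension and confirms empirically that $Y(n)$ lands in, and is roughly uniform over, $f^{-1}(1)$.
\begin{verbatim}
import random
from itertools import product

def f_expect(f, y):
    """Multilinear extension at y in {-1,0,1}^n: average of f over coordinates set to 0."""
    alive = [i for i, v in enumerate(y) if v == 0]
    vals = []
    for assign in product((-1, 1), repeat=len(alive)):
        x = list(y)
        for i, v in zip(alive, assign):
            x[i] = v
        vals.append(f(x))
    return sum(vals) / len(vals)

def conditioned_process(f, n, rng):
    """Procedure 2: reveal coordinates in random order, biasing each choice by d_i f / f."""
    y = [0] * n
    for i in rng.sample(range(n), n):            # a uniformly random permutation pi
        base = f_expect(f, y)
        y_plus = list(y); y_plus[i] = 1
        p_plus = f_expect(f, y_plus) / (2 * base)  # = 1/2 + d_i f(y) / (2 f(y))
        y[i] = 1 if rng.random() < p_plus else -1
    return tuple(y)

n = 4
f = lambda x: 1 if sum(x) < 0 else 0             # a small test function
rng = random.Random(0)
counts = {}
for _ in range(5000):
    y = conditioned_process(f, n, rng)
    counts[y] = counts.get(y, 0) + 1
print(all(f(list(y)) == 1 for y in counts))      # every sample lands in the preimage of 1
print(min(counts.values()), max(counts.values()))  # roughly uniform over that preimage
\end{verbatim}
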
\subsection*{A control-theory point of view}
\noindent The next step will be to consider the above process as a
\emph{controlled version} of the conditioned process $Y(t)$. Fix $\epsilon>0$ and
consider the control problem where at each time $t$, with probability
$1-\epsilon$, $Y(t)$ does a uniformly random step (according to Procedure
1), and with probability $\epsilon$ a player gets to determine the sign
of $Y_{\pi(t)}$ according to her own choosing.
The key observation of this section is that as long as the player
can control a small fraction of random coordinates, she can simulate
the conditioned process exactly. The motivation to study this controlled
version of $Y(t)$ is the following:
The randomly fixed coordinates outside the player's control induce a
random restriction of the function $f$. If the player can assign values
to the coordinates under her control, that is, to the alive coordinates of the
corresponding random restriction, so that the final string ends up in $f^{-1}(1)$,
then the restricted function has a nonempty preimage of $1$.
To this end, we consider the following procedure
(see \hyperref[Procedure-Y]{Procedure $\Pi$}) that generates the conditioned
process $Y(t)$ as well as the uniform process $X(t)$.
The Procedure $\Pi$ starts with a sampling subroutine as the preparation
stage, followed by two phases that generate $Y(t)$ for time
$t$ from $0$ to $n$. The first phase corresponds to that described
in the first paragraph of this section. During this phase the player
needs to cherish her rare opportunity and play ``aggressively.''
The second phase starts at a point in time $\tau$ when the aggressive
strategy no longer works. However, we will show that $\tau$ is very
close to $n$ with high probability. As a result, it is not a problem
to give the player full control from then on and let her play ``safely'' till the end.
\vspace{1.5mm}
\noindent\fbox{\begin{minipage}[t]{1\textwidth - 2\fboxsep - 2\fboxrule}%
\textbf{\uline{Procedure \mbox{$\Pi$}\label{Procedure-Y}}} (The
controlled version of processes $Y(t)$ and $X(t)$): \vspace{1mm}
\emph{\# Sampling Subroutine}
\begin{itemize}
\item Sample a uniformly random permutation $\pi:[n]\to[n].$
\item Sample a set $T\subseteq\{1,2,\ldots,n\},$ such that independently for each $i\in[n],$
\[
\PP[i\in T]=\epsilon.
\]
$T$ will be the set of times when the player gets to determine the
value of the coordinate.
\item Sample a uniformly random $z\in\mopo^{\pi\bar{T}},$ the random assignment
to the variables not controlled by the player.
\end{itemize}
\vspace{3mm}
\emph{\# Phase 1}
Set $Y(0)=X(0)=0^{n}$.
For time $t=1,2,\ldots,n$:
\begin{itemize}
\item Let $i=\pi(t).$
\item (Coordinate picked uniformly) If $t\not\in T$, set $Y_{i}(t)=X_{i}(t)=z_{i}$.
\item (Coordinate determined by player) If $t\in T$, set $Y_{i}(t)$ and
$X_{i}(t)$ according to the following distributions
\begin{align*}
& \PP[Y_{i}(t)=\pm1]=\frac{1}{2}\pm\frac{1}{2\epsilon}\cdot\frac{\partial_{i}f(Y(t-1))}{f(Y(t-1))},\\
& \PP[X_{i}(t)=\pm1]=\frac{1}{2}.
\end{align*}
\item For all $j\in[n]\setminus\{i\}$, set $Y_{j}(t):=Y_{j}(t-1)$, $X_{j}(t):=X_{j}(t-1)$.
\item If either of the following holds, \textbf{exit} this loop
\begin{align*}
& \max_{i\in[n]}|\partial_{i}f(Y(t))|>\epsilon\delta,\quad f(Y(t))<\delta. & \text{\emph{\#\text{ the breaking condition}}}
\end{align*}
\end{itemize}
\vspace{3mm}
\emph{\# Phase 2}
While $t<n$:
\begin{itemize}
\item $t=t+1.$
\item Let $i=\pi(t).$ Set $Y_{i}(t)$ and $X_{i}(t)$ according to the
following distributions
\begin{align*}
& \PP[Y_{i}(t)=\pm1]=\frac{1}{2}\pm\frac{\partial_{i}f(Y(t-1))}{2f(Y(t-1))},\\
& \PP[X_{i}(t)=\pm1]=\frac{1}{2}.
\end{align*}
\item For all $j\in[n]\setminus\{i\}$, set $Y_{j}(t):=Y_{j}(t-1)$, $X_{j}(t):=X_{j}(t-1)$.
\end{itemize}
\vspace{3mm}
\textbf{Output} $\{Y(t)\}_{t\in\{0,1,\ldots,n\}},\{X(t)\}_{t\in\{0,1,\ldots,n\}}.$%
\end{minipage}}
\vspace{1.5mm}
The process $Y(t)$ will be the main process with which our analysis
concerns, whereas the process $X(t)$ is only defined for the sake
of entropy comparison: We will later argue that the KL-divergence
between the two processes is not too large. It is evident that in
Phase 1 the distribution of $Y(1),Y(2),...$ according to Procedure
$\Pi$ is identical to its distribution according to measure $Q$
as long as
\begin{equation}
|\partial_{\pi(t)}f(Y(t-1))|\le\epsilon f(Y(t-1)).\label{eq:stopping-condition}
\end{equation}
Indeed, if~(\ref{eq:stopping-condition}) holds, we have
\begin{align*}
\PP[Y_{\pi(t)}(t)=\pm1] & =(1-\epsilon)\frac{1}{2}+\epsilon\left(\frac{1}{2}\pm\frac{1}{2\epsilon}\cdot\frac{\partial_{\pi(t)}f(Y(t-1))}{f(Y(t-1))}\right)\\
& =\frac{1}{2}\pm\frac{\partial_{\pi(t)}f(Y(t-1))}{2f(Y(t-1))}.
\end{align*}
Let time $\tau$ be $n+1$ if the breaking condition is never hit,
otherwise let $\tau$ be the time when the breaking condition is hit.
The reader may wonder whether a more natural choice of the ``breaking''
condition would be the violation of~(\ref{eq:stopping-condition}).
Our definition ensures that (i) $f(Y(\tau-1))$ is large, and in addition
(ii) all derivatives $|\partial_{i}f(Y(\tau-1))|$ are small
compared to the magnitude of $f(Y(\tau-1))$. Both facts will be very
useful in later sections. Formally, we summarize our definition of
$\tau$ as follows,
\begin{equation}
\tau=\tau_{1}\land\tau_{2}\land(n+1),\label{eq:stopping-time}
\end{equation}
where
\begin{align*}
& \tau_{1}=\min\{t:\max_{i\in[n]}|\partial_{i}f(Y(t))|>\epsilon\delta\}.\\
& \tau_{2}=\min\{t:f(Y(t))<\delta\}.
\end{align*}
The values of the parameters $\epsilon,\delta$ will be specified
later on. By definition, the condition~(\ref{eq:stopping-condition})
holds for $t<\tau$. We should think of $\tau$ as the stopping time of
Phase 1. After the stopping time $\tau$, the player gets to control
each remaining coordinate. She simply assigns the values according to $Q$
as in Procedure 2. Since in both phases Procedure $\Pi$ has the same
law as that of $Q$, the controlled process $Y(t)$ defined in Procedure $\Pi$
is identical in distribution to the conditioned process $Y(t)$ defined in Procedure
$2$. The same is clearly true for the uniform process $X(t)$ in its two
versions (Procedure $1$ and Procedure $\Pi$).
In the preparation stage, Procedure $\Pi$ samples a random
permutation $\pi,$ a set $T$ of times controlled by the player and
$z$ the random assignment to the variables not controlled by the
player. For every $m\in[n]$, let $\cG_{m}$ be the $\sigma$-algebra generated by
\[
\pi|_{\{1,2,\ldots,m\}},\quad T\cap\{1,2,\ldots,m\},\quad\text{and }z|_{\pi\{1,2,\ldots,m\}}.
\]
Thus, $\cG_{m}$ contains all the information in a run of Procedure
$\Pi$, excluding the player's choices, up to time $m$. Also, $\cG_m$ induces
a restriction:\footnote{We can also consider any restriction
$\cR=(S,z)$ for $S\subseteq \pi(\{1,2,\ldots,m\}\setminus T)$. We will use this observation in later sections.}
\[\cR=(\pi(\{1,2,\ldots,m\}\setminus T),z).\]
A moment's thought reveals that if the controlled process $Y(t)$ in a run of Procedure $\Pi$ satisfies
$\tau>m$, then the preimage $(f|_{\cR})^{-1}(1)$ is nonempty.
\begin{claim}
If $\PP[ \tau > m \mid \cG_m] >0$, then
$(f|_{\cR})^{-1}(1) \not=\varnothing.$
\end{claim}
Therefore, if we can argue that $\tau>m$ when running Procedure $\Pi$ on both $f$ and $1-f$ with the same $\cG_m$,
then we have actually proved that $f|_{\cR}$ is nonconstant. Giving a quantitative bound on the variance
of $f|_\cR$ requires some more work. The above discussion sets two tasks for the remainder of
this section. First, to analyze the stopping time $\tau$ and second, to provide the necessary tools
to bound the variance of the restricted function.
\subsection{Stopping time \texorpdfstring{$\tau$ of the process $Y(t)$}{of the controlled process} }
Next, we prove that with high probability $\tau>(1-\epsilon)n$ for
very small $\epsilon$. Therefore, Phase 2 in Procedure $\Pi$ cannot be too long.
\begin{lem}[Stopping time $\tau$ of the process $Y(t)$]
\label{lem:stopping-time-tau} Let $f:\DC\to\{0,1\}$ be such that
$\Var[f]\ge2^{-o(n)}.$ Further, let $\epsilon>0$ and $\delta$ be
such that
\begin{align*}
& \frac{16}{\epsilon}\ln\frac{4}{\epsilon}\le\ln\frac{1}{\mINF(f)},\\
& \delta\ge\frac{\mINF(f)^{\epsilon/80}}{\epsilon}.
\end{align*}
Then for sufficiently large $n$, we have
\[
\Pr_{Q}[\tau\le(1-\epsilon)n]\le\frac{3\delta}{f(0)}.
\]
\end{lem}
\begin{proof}
The proof relies on the fact that $Q$ is a mild change of measure
with respect to $P$. Consider the following two bad events,
\begin{align*}
\cE_{1}:\quad\tau_{1}\le(1-\epsilon)n,\\
\cE_{2}:\quad\tau_{2}\le(1-\epsilon)n.
\end{align*}
We first bound $\PP_{Q}[\cE_{1}]$. Note that
\begin{align*}
\Pr_{P}[\cE_{1}] & =\Pr_{P}\left[\max_{i\in[n]}\max_{0\le s\le(1-\epsilon)n}|\partial_{i}f(X(s))|\ge\epsilon\delta\right]\\
 & \le\Pr_{P}\left[\max_{i\in[n]}\max_{0\le s\le(1-\epsilon)n}|\partial_{i}f(X(s))|\ge\mINF(f)^{\epsilon/60}\right]\\
& \le\mINF(f)^{\epsilon/80}+\exp(-\epsilon n/8),
\end{align*}
where the second step holds as $\epsilon\delta\ge\mINF(f)^{\epsilon/80}\ge\mINF(f)^{\epsilon/60}$;
the final step applies Lemma~\ref{lem:influence-remain-small-P}.
By Claim~\ref{claim:closeness-P-Q}, the above bound in turn implies
that
\begin{align}
\Pr_{Q}[\cE_{1}] & \le\frac{\Pr_{P}[\cE_{1}]}{f(0)}\nonumber \\
& \le\frac{\mINF(f)^{\epsilon/80}+\exp(-\epsilon n/8)}{f(0)}.\label{eq:controlled-bound-E1}
\end{align}
Next we move to bound $\PP_{Q}[\cE_{2}]$. It is immediate from Claim~\ref{claim:closeness-P-Q}:
\begin{equation}
\Pr_{Q}[\cE_{2}]\le\frac{\delta}{f(0)}.\label{eq:controlled-bound-E2}
\end{equation}
Applying the union bound to~(\ref{eq:controlled-bound-E1}) and~(\ref{eq:controlled-bound-E2}),
we get, for large enough $n$,
\begin{align*}
\Pr_{Q}[\tau\le(1-\epsilon)n] & =\Pr_{Q}[\cE_{1}\lor\cE_{2}]\\
& \le\frac{1}{f(0)}\cdot\left(\delta+\mINF(f)^{\epsilon/80}+\exp(-\epsilon n/8)\right)\\
& \le\frac{3\delta}{f(0)}
\end{align*}
where in the final step we have $\delta\ge\mINF(f)^{\epsilon/80}=\exp(-o(\epsilon n))$
by Fact~\ref{fact:mINF-bound}.
\end{proof}
\subsection{\label{subsec:KL-control-process}The KL-divergence between \texorpdfstring{$Y(t)$
and $X(t)$}{the controlled and the discrete uniform process}}
The purpose
of this subsection is to show that for any $m\in[n]$, $Y(n)$ given $\cG_{m}$
is close to uniform with high probability over the random choices
associated with $\cG_{m}$. In particular, we will show that the KL-divergence
between $Y(n)$ and $X(n)$ given $\cG_{m}$ is small with high probability
over the random choices associated with $\cG_{m}.$ Recall that the
coordinates of $X(n)$ not fixed by $\cG_{m}$ are uniform.
\begin{lem}
\label{lem:Q-KL-bound}For any $m\in [n]$, abbreviate
\begin{align*}
& \cG_{m}=(\pi|_{\{1,2,\ldots,m\}},T\cap\{1,2,\ldots,m\},z|_{\pi\{1,2,\ldots,m\}}).
\end{align*}
Then for some universal constant $C$, and $\epsilon$, $\delta$ in the
breaking condition in~\hyperref[Procedure-Y]{Procedure $\Pi$},
\begin{align*}
& \PP_{\cG_{m}}\left[\KLfrac{Y(n)\mid\cG_{m}}{X(n)\mid\cG_{m}}\ge\frac{C}{\epsilon}\ln\frac{e n}{n-m+1}\log\frac{e}{\delta}\right]\le\delta.
\end{align*}
\end{lem}
\begin{proof}
Let $\tau'=\tau\land(m+1)$. By definition of the stopping time $\tau$~(\ref{eq:stopping-time}),
for $t<\tau',$
\begin{align}
& f(Y(t))\ge\delta,\label{eq:KL-bound-mean}\\
& |\partial_{i}f(Y(t))|\le\epsilon f(Y(t)).\label{eq:KL-bound-derivative}
\end{align}
We calculate the KL-divergence between $Y(n)\mid\cG_{m}$ and $X(n)\mid\cG_{m}$.
By the chain rule,
\begin{align*}
\KLfrac{Y(n)\mid\cG_{m}}{X(n)\mid\cG_{m}} & =\EE_{\cG_{m}}\left[\sum_{t=1}^{\tau'-1}\II\{t\in T\}\KLfrac{Y_{\pi(t)}(t)\mid Y(t-1)}{X_{\pi(t)}}\right.\\
& \qquad\qquad\left.+\KLfrac{Y(n)|_{\pi\{\tau',\tau'+1,\ldots,n\}}\mid Y(\tau'-1)}{X(n)|_{\pi\{\tau',\tau'+1,\ldots,n\}}}\right],
\end{align*}
where the equality holds because for any $t\in T$,
\[
(Y_{\pi(t)}\mid Y(t-1))=(Y_{\pi(t)}\mid Y(t-1),\cG_{m}),
\]
namely, any variable $Y_{\pi(t)}(t)$ controlled by the player is
independent of the future variables over which she has no control;
and all coordinates of $X(n)$ are independent.
Next, using formula~\eqref{eq:Radon-Nykodym-derivative}, combined
with~(\ref{eq:KL-bound-mean}), it follows that for any $t\le\tau'$,
\iffalse $Y(n)|_{\pi\{t,t+1,\ldots,n\}}$ is uniform over an $f(Y(t-1))$-fraction
of the subcube $\mopo^{\pi\{t,t,\ldots,n\}}$ and $X(n)|_{\pi\{t,t+1,\ldots,n\}}$
is uniform over $\mopo^{\pi\{t,t,\ldots,n\}}$. By~(\ref{eq:KL-bound-mean}),
it follows that for any $t\le\tau'$, \fi
\begin{align}
& \KLfrac{\left.\left(Y(n)|_{\pi\{t,t+1,\ldots,n\}}\right)\;\right|Y(t-1)}{X(n)|_{\pi\{t,t+1,\ldots,n\}}}\nonumber \\
& \qquad\qquad\qquad=\log\frac{dQ((Y(i))_{t\leq i\leq n})}{dP((X(i))_{t\leq i\leq n})}=\log\frac{1}{f(Y(t-1))}\le\log\frac{1}{\delta}.\label{eq:KL-after-stopping-time}
\end{align}
Combining the above two displays,
\begin{align*}
& \KLfrac{Y(n)\mid\cG_{m}}{X(n)\mid\cG_{m}}-\log\frac{1}{\delta}\\
& \qquad\le\sum_{t=1}^{n}\EE_{Y(t-1)\mid\cG_{m}}\left[\II\{t\in T\}\II\{t<\tau'\}\KLfrac{Y_{\pi(t)}(t)\mid Y(t-1)}{X_{\pi(t)}(t)}\right]\\
& \qquad=\sum_{t=1}^{n}\EE\left[\II\{t\in T\}\II\{t<\tau'\}\left(1-H\!\left(\frac{1}{2}+\frac{\partial_{\pi(t)}f(Y(t-1))}{2\epsilon f(Y(t-1))}\right)\right)\right]\\
& \qquad\le\sum_{t=1}^{n}\EE\left[\II\{t\in T\}\II\{t<\tau'\}\frac{1}{\epsilon^{2}}\left(\frac{\partial_{\pi(t)}f(Y(t-1))}{f(Y(t-1))}\right)^{2}\right],
\end{align*}
where the second step is by the definition of the KL-divergence; and
the final step is due to~\eqref{eq:entropy-upper-bound}. Abbreviate
\[
Z_{t}:=\II\{t\in T\}\II\{t<\tau'\}\left(\frac{\partial_{\pi(t)}f(Y(t-1))}{f(Y(t-1))}\right)^{2}.
\]
\begin{claim}
There is some universal constant $C\ge1,$ such that for any $t<\tau',$
\begin{align}
& Z_{t}\mid Y(t-1)\in[0,\epsilon^{2}],\label{eq:Z-bounded}\\
& \EE[Z_{t}\mid Y(t-1)]\le\frac{C\epsilon}{n-t+1}\log\frac{e}{\delta},\label{eq:Z-mean}\\
& \Var[Z_{t}\mid Y(t-1)]\le\epsilon^{2}\EE[Z_{t}\mid Y(t-1)].\label{eq:Z-variance}
\end{align}
\end{claim}
\begin{proof}
(\ref{eq:Z-bounded}) follows from~(\ref{eq:KL-bound-derivative}).
Let
\[
v(t):=\frac{(\nabla f(Y(t-1)))|_{\pi\{t,t+1,\ldots,n\}}}{f(Y(t-1))}.
\]
Then,
\begin{align*}
\EE[Z_{t}\mid Y(t-1)] & =\EE_{\pi(t)}[\epsilon v(t)_{\pi(t)}^{2}\mid Y(t-1)]\\
& =\frac{\epsilon|v(t)|^{2}}{n-t+1}\\
& \le\frac{C\epsilon}{n-t+1}\log\frac{e}{f(Y(t-1))}\\
& \le\frac{C\epsilon}{n-t+1}\log\frac{e}{\delta},
\end{align*}
where the first step holds as $t\in T$ with probability $\epsilon$;
in the second step, $\pi(t)$ is uniformly random among the $n-t+1$ alive
coordinates given $Y(t-1)$; the third step follows from the Level-1 inequality
of Theorem~\ref{thm:level-1-ineq}; and the final step is due to~(\ref{eq:KL-bound-mean}).
The variance of $Z_{t}\mid Y(t-1)$ can be bounded as follows:
\begin{align*}
(\epsilon^{2}-\EE[Z_{t}])\EE[Z_{t}]-\Var[Z_{t}] & =\EE[(\epsilon^{2}-Z_{t})Z_{t}]\ge0.
\end{align*}
We comment that such a bound is sometimes referred to as the Bhatia-Davis
inequality.
\end{proof}
By~(\ref{eq:Z-mean})--(\ref{eq:Z-variance}), the fact that $\tau'\le m+1$ by definition, and the
following elementary estimate,
\[
\ln(n+1)\le\sum_{i=1}^{n}\frac{1}{i}\le\ln (en),
\]
we have
\begin{align}
& \sum_{t=1}^{n}\EE[Z_{t}\mid Y(t-1)]\le \lambda,\label{eq:sum-Z-mean}\\
& \sum_{t=1}^{n}\Var[Z_{t}\mid Y(t-1)] \le \epsilon^2\lambda, \label{eq:sum-Z-var}
\end{align}
where
\[
\lambda=C\epsilon\ln\frac{en}{n-m+1}\log\frac{e}{\delta}.
\]
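To spell out the first of these bounds (the second is analogous, using~(\ref{eq:Z-variance})): since $Z_{t}=0$ for $t\ge\tau'$ and $\tau'\le m+1$, the bound~(\ref{eq:Z-mean}) together with the harmonic estimate above (applied twice) gives
\[
\sum_{t=1}^{n}\EE[Z_{t}\mid Y(t-1)]\le\sum_{t=1}^{m}\frac{C\epsilon}{n-t+1}\log\frac{e}{\delta}=C\epsilon\log\frac{e}{\delta}\sum_{j=n-m+1}^{n}\frac{1}{j}\le C\epsilon\ln\frac{en}{n-m+1}\log\frac{e}{\delta}=\lambda.
\]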
The lemma is concluded by estimating,
\begin{align*}
& \PP\left[\KLfrac{Y(n)\mid\cG_{m}}{X(n)\mid\cG_{m}}\ge\frac{3\lambda}{\epsilon^{2}}+\log\frac{1}{\delta}\right]\\
& \qquad\qquad\le\PP\left[\sum_{t=1}^{n}Z_{t}\ge3\lambda\right]\\
& \qquad\qquad\le\exp\left(-\frac{(2\lambda)^{2}}{2\epsilon^{2}\lambda+4\epsilon^{2}\lambda/3}\right)\\
& \qquad\qquad\le\exp\left(-\frac{C}{\epsilon}\ln\frac{en}{n-m+1}\log\frac{e}{\delta}\right)\le\delta,
\end{align*}
where the second step invokes the concentration inequality of Theorem~\ref{thm:concentration},
since $Z_{t}-\EE[Z_{t}\mid Y(t-1)]$ is a martingale difference sequence with respect
to $Y(0),Y(1),\ldots,Y(t-1)$. This finishes the proof of Lemma~\ref{lem:Q-KL-bound},
after adjusting the constant $C$.
\end{proof}
\section{Proofs of the Main Results\label{sec:appl}}
In this section, we prove a sharp ``it ain't over till it's over'' theorem, i.e.,
the nonasymptotic version of Theorem~\ref{thm:mainintro}. Then we comment on its optimality,
and discuss its applications to block sensitivity and decision tree complexity.
\begin{thm}[``It ain't over till it's over'']
\label{thm:main}There is an absolute constant $C>1$ such that the following holds. Let $f:\{-1,1\}^{n}\to\{0,1\}$
be such that $\mINF(f)<1/C$ and $\Var[f]=2^{-o(n)}$. Let $\cR$ be
a random restriction that keeps exactly $\lceil\rho n\rceil$ variables
alive, where
\[
\frac{C}{\Var[f]}\cdot\frac{\ln\ln(1/\mINF(f))}{\ln(1/\mINF(f))}\le\rho\le1.
\]
Let $p$ be such that
\begin{equation}
\frac{8\mINF(f)^{\rho/C}}{\rho\Var[f]}\le p\le1.\label{eq:main-delta-bound}
\end{equation}
Then for large enough $n,$
\begin{align}
& \PP\left[\Var[f|_{\cR}]\le\exp\left(-\frac{C}{\rho}\ln\frac{e}{\rho}\cdot\log\frac{8e}{p\Var[f]}\right)\right]\le p.\label{eq:aintover-variance-bound}
\end{align}
\end{thm}
Before proving the above theorem, we record the following simple fact.
\begin{prop}
\label{fact:KL-to-mean}Let $f:\mopo^{n}\to\{0,1\}$ be a Boolean
function. Let $\mu$ be the uniform distribution and $\gamma$ be
some arbitrary distribution over $\mopo^{n}.$ If
\begin{align*}
& \gamma(f)\ge\delta,\quad\KL{\gamma}{\mu}\le K.
\end{align*}
Then
\[
\mu(f)\ge2^{-(K+H(\delta))/\delta}.
\]
In particular, if $\gamma(f)=1,$ then
\[
\mu(f)\ge2^{-K}.
\]
\end{prop}
\begin{proof}
Assume that $\gamma(f)=\delta,$ and $\KL{\gamma}{\mu}=K.$ This is
without loss of generality because $2^{-(K+H(\delta))/\delta}$ is
decreasing in $K$ and increasing in $\delta$ by elementary calculus.
Let $\gamma_{0},\gamma_{1}$ be the uniform distributions over $f^{-1}(0)$
and $f^{-1}(1)$, respectively. Note $\delta\gamma_{1}+(1-\delta)\gamma_{0}=\EE_{\pi}[\gamma\circ\pi],$
where $\pi$ is taken over the product of permutations on $f^{-1}(0)$
and $f^{-1}(1).$ Thus by convexity,
\begin{align*}
\KL{\delta\gamma_{1}+(1-\delta)\gamma_{0}}{\mu} & \le\KL{\gamma}{\mu}.
\end{align*}
Consequently, let $\eta=\mu(f)$, then
\begin{align*}
& \delta\log\frac{\delta}{\eta}+(1-\delta)\log\frac{1-\delta}{1-\eta}\le K\\
\implies\quad & \delta\log\frac{1}{\eta}+(1-\delta)\log\frac{1}{1-\eta}\le K+H(\delta)\\
\implies\quad & \delta\log\frac{1}{\eta}\le K+H(\delta)\\
\implies\quad & \eta\ge2^{-(K+H(\delta))/\delta}.\qedhere
\end{align*}
\end{proof}
Next, we set forth to prove Theorem~\ref{thm:main}. Set
\begin{align}
& \epsilon=\max\left\{ \eta:\eta\le\frac{\rho}{3},\eta n\text{ is an integer}\right\} ,\label{eq:main-eps}\\
& \delta=p\Var[f]/8.\label{eq:main-delta}
\end{align}
It's straightforward to verify that for large enough $n$, and large
enough constant $C$, we have
\begin{align}
& \frac{16}{\epsilon}\ln\frac{4}{\epsilon}\le\ln\frac{1}{\mINF(f)},\label{eq:main-eps-mINF}\\
& \delta \ge\frac{\mINF(f)^{\epsilon/80}}{\epsilon}.\label{eq:main-delta-inf}
\end{align}
We will run Procedure $\Pi$ described in Section~\ref{subsec:KL-control-process}
with the above setting of parameters $\epsilon$ and $\delta$. Recall that Procedure $\Pi$
first samples the random permutation $\pi$, the set $T$ of times
controlled by the player and $z\in\mopo^{\pi\bar{T}}$ is the random
assignment for $t\not\in T$ in Phase 1. \iffalse We run Procedure
$\Pi$ with respect to $f$ and $1-f.$ To make a distinction, we
let $Y'(t)$ denote the first process outputted by $\Pi$ with respect
to $1-f,$ and $Y(t)$ still denote the first process outputted by
$\Pi$ with respect to $f$. \fi Let $m=(1-\epsilon)n$, $U=\{1,2,\ldots,m\}\setminus T.$
Note that by a Chernoff bound, the probability that $|U|$ is less
than $\lfloor(1-\rho)n\rfloor$ is at most $\exp(-\epsilon n/2).$
Conditioned on the event that $|U|\ge\lfloor(1-\rho)n\rfloor,$ we randomly
sample a set $S$ of $\lfloor(1-\rho)n\rfloor$ elements from $U$.
\begin{figure}[h]
\includegraphics[scale=0.35]{drawing}\caption{Random process and random restrictions}
\end{figure}
Consider the following event
\begin{align*}
& \cE:=\{\tau>m\}\cap\{|U|\ge\lfloor(1-\rho)n\rfloor\}.
\end{align*}
The first event in this intersection can be bounded by Lemma~\ref{lem:stopping-time-tau}. Hence,
by union bound,
\begin{align}
\PP[\neg\cE] & \le\frac{3\delta}{f(0)}+\exp(-\epsilon n/2)\nonumber\\
& \le\frac{4\delta}{f(0)}, \label{eq:bad-event-main}
\end{align}
where
the second step applies Fact~\ref{fact:mINF-bound} with~(\ref{eq:main-delta-inf}).
Conditioned on $\cE,$ $\cR=(S,Y(n))$ is distributed as a random
restriction that keeps $\lceil\rho n\rceil$ variables alive. Furthermore,
the restricted function $f|_{\cR}$ satisfies that its mean is bounded
away from 0 with high probability.
\begin{claim}
\label{claim:restriction-mean-bound}For some universal constant $C'>1$,
\begin{align}
& \PP[\mu(f|_{\cR})<2^{-K}\mid\cE]\le\frac{2\delta}{\PP[\cE]},\label{eq:f-mean}
\end{align}
where
\begin{equation}
K=\frac{C'}{\epsilon}\ln\frac{e}{\epsilon}\log\frac{e}{\delta}.\label{eq:main-K}
\end{equation}
\end{claim}
Our theorem follows immediately from the above claim. Indeed, if $\cE'$ is
defined analogously to $\cE$ where $f$ is replaced by $1-f$, we
have
\begin{align*}
& \PP[\Var[f|_{\cR}]<2^{-K-1}]\\
& \qquad\le\PP[\neg\cE\lor\neg\cE']+\PP[\mu(f|_{\cR})<2^{-K}\mid\cE]+\PP[\mu(f|_{\cR})>1-2^{-K}\mid\cE']\\
& \qquad\le\frac{4\delta}{f(0)}+\frac{4\delta}{1-f(0)}+\frac{2\delta}{\PP[\cE]}+\frac{2\delta}{\PP[\cE']}\\
& \qquad\le\frac{8\delta}{\Var[f]},
\end{align*}
where in the first step, note that $2^{-K}<1/2$; the second step plugs in~(\ref{eq:bad-event-main})-(\ref{eq:f-mean});
in the final step, note that by~(\ref{eq:main-delta}),
\[
\PP[\cE],\PP[\cE']\ge1-\frac{4\delta}{\Var[f]}=1-\frac{p}{2}\ge\frac{1}{2}>\Var[f].
\]
In view of~(\ref{eq:aintover-variance-bound}) and~(\ref{eq:main-K}),
the proof of Theorem~\ref{thm:main} is finished. It remains to prove
Claim~\ref{claim:restriction-mean-bound}.
\begin{proof}[Proof of Claim~\emph{\ref{claim:restriction-mean-bound}}]
Recall that $m=(1-\epsilon)n$ and $U=\{1,2,\ldots,m\}\setminus T.$
Abbreviate
\[
\cG_{m}=(\pi|_{\{1,2,\ldots,m\}},T\cap\{1,2,\ldots,m\},z|_{\pi\{1,2,\ldots,m\}}),
\]
the information of the random process generated by Procedure $\Pi$
excluding the player's choices up to time $m$. Further, let
$\gamma$ be the distribution of $Y(n)|_{\pi\bar{U}}$ given $\cG_{m}$.
Then by definition of Procedure $\text{\ensuremath{\Pi}}$ running
with respect to the function $f$, $\gamma(f|_{(\pi U,z)})=1$. Hence,
\begin{align*}
& \PP\left[\mu(f|_{(\pi U,z)})<2^{-K}\mid\cE\right]\\
& \qquad\le\PP\left[\KLfrac{Y(n)\mid\cG_{m}}{X(n)\mid\cG_{m}}>K\;\middle|\;\cE\right]\\
& \qquad\le\frac{\delta}{\PP[\cE]},
\end{align*}
where the first step is due to Proposition~\ref{fact:KL-to-mean}; the
second step follows from Lemma~\ref{lem:Q-KL-bound} with a suitable constant
$C'$ in~(\ref{eq:main-K}). Now for any
\[
S\in\binom{\{1,2,\ldots,m\}}{\lfloor(1-\rho)n\rfloor},\quad\text{and }y\in\mopo^{\pi S},
\]
let $\zeta(S,y)$ be the distribution of $(U,z)\mid\{(S\subseteq U)\land(z|_{\pi S}=y)\}.$
Then
\begin{align*}
& \mu(f|_{(\pi S,y)})=\EE_{(U,z)\sim\zeta(S,y)}[\mu(f|_{(\pi U,z)})].
\end{align*}
Thus, by Markov's inequality $\mu(f|_{(\pi S,y)})<2^{-K-1}$ implies
that
\[
\PP_{(U,z)\sim\zeta(S,y)}\left[\mu(f|_{(\pi U,z)})<2^{-K}\right]>\frac{1}{2}.
\]
Consequently,
\begin{align*}
& \frac{1}{2}\PP_{\pi,S,y}[\mu(f|_{(\pi S,y)})<2^{-K-1}]\\
& \quad\le\PP_{\pi,S,y,(U,z)\sim\zeta(S,y)}\left[\mu(f|_{(\pi U,z)})<2^{-K}\right]\\
& \quad=\PP_{\pi,T,z}\left[\left.\mu(f|_{(\pi U,z)})<2^{-K}\;\right|\;\{|U|\ge\lfloor(1-\rho)n\rfloor\}\right]\\
& \quad\le\frac{\delta}{\PP[\cE]}.
\end{align*}
In view of~(\ref{eq:f-mean}), we are done.
\end{proof}
\begin{rem}
\label{rem:main}If we consider the random restriction that keeps
each variable alive independently with probability $\rho$, the same
statement holds, with a slight
modification of the proof of the corresponding version of Claim~\ref{claim:restriction-mean-bound}.
\end{rem}
\subsection*{Optimality of our result}
Our Theorem~\ref{thm:main} is essentially optimal with respect to
$p$ and $\rho$. Consider the $(1-\rho)$-random restriction $\cR_{1-\rho}$.
First, we check the optimality in the regime when $\rho=\Omega((\log(1/\mINF(f)))^{-1}).$
Consider the majority function $\MAJ_{n}:\{-1,1\}^{n}\to\{0,1\},$
\[
\MAJ_{n}(x)=\begin{cases}
0 & \sum_{i\in[n]}x_{i}>0,\\
1 & \text{otherwise.}
\end{cases}
\]
It's well-known that $\mINF(\MAJ_{n})=\Theta(1/\sqrt{n}).$ For $\rho=\Omega(1/\log n),$
let $\cR_{1-\rho}=(S,X)$ be the random restriction. Say $|S|=n-k.$
With probability at least $1-\exp(-\Theta(\rho n))$, $k\in(0.5\rho n,2\rho n).$
Then by the Berry-Esseen Theorem, for $\lambda = O(\sqrt{(n-k)\log(n-k)})$,
\begin{align*}
& \PP\left[\left|\sum_{i\in S}X_{i}\right| \ge\lambda \right]=\exp\left(-\Theta\left(\frac{\lambda^{2}}{n-k}\right)\right),\\
& \Var\left[\MAJ_{n}|_{\cR_{1-\rho}} \;\middle\vert\; \left\{ \left|\sum_{i\in S}X_{i}\right|\ge\lambda\right\}\right]\le\exp\left(-\Theta\left(\frac{\lambda^{2}}{k}\right)\right).
\end{align*}
Thus for $p=\Omega(1/\sqrt{n-k})$,
\[
\PP\left[\Var[\MAJ_{n}|_{\cR_{1-\rho}}]\le p^{\Theta\left(\frac{1}{\rho}\right)}\right]= p.
\]
Our bound on the variance is tight up to a $\log(1/\rho)$
factor in the exponent with respect to $\rho$.
Second, we check the optimality in the regime when $\rho=O((\log(1/\mINF(f)))^{-1}).$
Consider the tribes function $\TR_{n}:\{-1,1\}^{n}\to\{0,1\},$
\[
\TR{}_{n}:x\mapsto\AND_{n/w}\left(\cdots,\bigvee_{j=1}^{w}x_{ij},\cdots\right),
\]
where for any positive integer $w$, $n$ is the smallest integral
multiple of $w$ such that $\Pr[\TR_{n}(x)=1]\le1/2$; $\AND_{n/w}:\{0,1\}^{n/w}\to\{0,1\}$
is the standard logical AND function. In particular, $n\approx\ln2\cdot w2^{w},$
$w=\log n-\log\ln n+o(1).$ Then $\mINF(\TR{}_{n})=\Theta(\log n/n)$,
and $\mu(\TR_{n})=\Theta(1).$ Apply a random restriction $\cR$ that
fixes each variable independently with probability $1-1/w=1-\Theta(1/\log(1/\mINF(\TR{}_{n}))).$
Then for large enough $n$,
\[
\PP[\TR_{n}|_{\cR}\equiv1]=\left(1-\left(\frac{1}{2}+\frac{1}{2w}\right)^{w}\right)^{n/w}=\Omega(1).
\]
Therefore, in this regime the restricted tribes function becomes constant with
constant probability, so no variance is left. Our bound is tight up to a $\log\log$
factor, in the sense that it applies down to essentially the minimum $\rho$
for which some variance is still left after the random restriction.
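As a purely numerical illustration of the probability $\PP[\TR_{n}|_{\cR}\equiv1]$ computed above (the particular values of $w$ below are arbitrary and the snippet is not part of the argument), one can check that it indeed stays bounded away from $0$:
\begin{verbatim}
import math
for w in (8, 12, 16, 20):
    clauses = round(math.log(2) * 2 ** w)          # n / w, so that Pr[TRIBES = 1] ~ 1/2
    p_clause_forced_to_1 = 1 - (0.5 + 0.5 / w) ** w
    print(w, p_clause_forced_to_1 ** clauses)      # stays around 0.15, i.e., Omega(1)
\end{verbatim}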
\subsection{Block sensitivity is large almost everywhere}
We now move on to our second theorem, concerning block sensitivity.
The following is a nonasymptotic version of Theorem~\ref{thm:blockintro}.
\begin{thm}
There is an absolute constant $C>1$ such that the following holds. For any Boolean function $f:\{-1,1\}^{n}\to\{0,1\}$
with $\tau:=\mINF(f)<1/C$ and $\Var[f]=2^{-o(n)}$, and for large enough
$n$,
\begin{align*}
\PP_{x}\left[\mathbf{\bs}_{f}(x)\ge\frac{\Var[f]\ln1/\tau}{C\ln\ln1/\tau}\right]\ge1-\exp\left(-\Theta\left(\frac{1}{\Var[f]}\ln\ln\frac{1}{\tau}\right)\right).
\end{align*}
\end{thm}
\begin{proof}
Let
\[
M=\left\lceil \frac{2\Var[f]\ln1/\tau}{C\ln\ln1/\tau}\right\rceil .
\]
Let $X\in\{-1,1\}^{n}$ be random. Randomly partition $[n]$ into
$M$ sets, $S_1,S_2,\ldots,S_M$, each of size $\lfloor n/M\rfloor$, possibly with a small number
of indices left over. Note that for any $i\in[M]$, $\cR=(S_i, X)$ is a random restriction
of fixed size. Then by Theorem~\ref{thm:main}, with probability
at least $1-\exp(-\Theta(\log\log(1/\tau)/\Var[f])),$ $\Var[f|_{\cR}]>0$.
In that case, there exists $T_{i}\subseteq S_{i}$ such that $f(X\oplus(-1)^{\1_{T_{i}}})\not=f(X).$
The statement thus holds by the following double-counting principle,
\begin{align*}
& \frac{1}{2}\PP_{x}[\bs_{f}(x)<M/2]\le\PP_{\cR}[f|_{\cR}\text{ is constant}].\qedhere
\end{align*}
\end{proof}
\subsection{Decision tree complexity of random restriction to monotone functions}
We record another application of our main result regarding the decision
tree complexity of the restricted function, which is in some sense
a statement in the reverse direction to H{\aa}stad's famous switching lemma. Let $\DT(f)$
denote the deterministic decision tree complexity of $f$.\footnote{Although
the theorem is stated with respect to the deterministic decision tree complexity,
one can replace the deterministic decision tree complexity by many other
complexity measures, for example, the randomized decision tree complexity, as
they are polynomially related for total functions.}
\begin{thm}[Decision tree complexity of random restrictions]
There is an absolute constant $C>1$ such that the following holds. Let $f:\{-1,1\}^{n}\to\{0,1\}$ be a monotone function
such that
\begin{align}
& \log\left(\frac{1}{\mINF(f)}\right) \ge C\log\left(\frac{1}{\Var[f]}\right). \label{eq:bounds-V-vs-I-dt}
\end{align}
Let $\cR$ be a random restriction that keeps exactly $\lceil\rho n\rceil$
variables alive, where
\[
\rho=\Omega\left(\sqrt{\frac{\log\Var[f]}{\log\mINF(f)}\log\frac{\log\mINF(f)}{\log\Var[f]}}\right).
\]
Then for large enough $n,$
\begin{equation}
\PP\left[\DT(f|_{\cR})\ge\mINF(f)^{-\Theta(\rho)}\right]\ge \frac{1}{2}. \label{eq:prob-dt-thm}
\end{equation}
\end{thm}
\begin{proof}
For simplicity, assume that $\rho n$ is a positive integer. We need
the following well-known result due to O'Donnell et al.~\cite{osss2005dt-maxinf}:
For any Boolean function $h,$
\begin{equation}
\mINF(h)\cdot\DT(h)\ge\Var[h].\label{eq:osss-ineq}
\end{equation}
Consider the uniform process $X(t)$. By definition,
$X((1-\rho)n)$ induces a random restriction $\cR$ that keeps exactly
$\rho n$ variables alive. By Lemma~\ref{lem:influence-remain-small-P},
\begin{align*}
\Pr_{P}\left[\max_{i\in[n]}\max_{0\le t\le(1-\rho)n}|\partial_{i}f(X(t))|\ge\mINF(f)^{\frac{\rho}{30}}\right] & \le\mINF(f)^{\frac{\rho}{40}}+\exp(-\rho n/8).
\end{align*}
Since $f$ is monotone, the influence $\INF_{i}(f|_{\cR})=|\partial_{i}f(X((1-\rho)n))|$
for any alive coordinate $i$. The above formula then implies
\begin{equation}
\PP_{\cR}\left[\mINF(f|_{\cR})\ge\mINF(f)^{\frac{\rho}{30}}\right]\le\mINF(f)^{\frac{\rho}{40}}+\exp(-\rho n/8).\label{eq:maxInf-random-restriction}
\end{equation}
By Theorem~\ref{thm:main},
\begin{equation}
\PP\left[\Var[f|_{\cR}]\le\exp\left(-\frac{C}{\rho}\log\frac{e}{\rho}\cdot\log\frac{8e}{\Var[f]p}\right)\right]\le p.\label{eq:var-random-restriction}
\end{equation}
Set $p=1/3$. Then combining~(\ref{eq:osss-ineq})-(\ref{eq:var-random-restriction}),
\begin{align*}
\PP & \left[\DT(f|_{\cR})\ge\mINF(f)^{-\frac{\rho}{60}}\right]\\
 & \ge\PP\left[\DT(f|_{\cR})\ge\exp\left(-\frac{C}{\rho}\log\frac{e}{\rho}\cdot\log\frac{8e}{\Var[f]p}+\frac{\rho}{30}\ln\frac{1}{\mINF(f)}\right)\right]\\
& \ge \PP\left[\Var[f|_\cR] \ge \exp\left(-\frac{C}{\rho}\log\frac{e}{\rho}\cdot\log\frac{8e}{\Var[f]p}\right)
\land \left(\mINF(f|_{\cR})\le\mINF(f)^{\frac{\rho}{30}}\right)
\right]\\
& \ge 1-p-\mINF(f)^{\frac{\rho}{40}}-\exp(-\rho n/8)\\
& \ge 1 - p - 2\mINF(f)^{\frac{\rho}{40}},
\end{align*}
where the first step holds for our choice of $\rho$ and~(\ref{eq:bounds-V-vs-I-dt}); the final step holds by Fact~\ref{fact:mINF-bound}. Finally, by our choice of $\rho$ and the bound on $\mINF(f)$, we have $2\mINF(f)^{\frac{\rho}{40}}<1/6$. In view of~(\ref{eq:prob-dt-thm}), we have finished the proof.
\end{proof}
\begin{comment}
\subsection*{The majority function and the tribes function as extremal cases.}
Consider the family of Boolean functions with small individual influences
and $\Omega(1)$ variance. It is known that majority is the stablest
balanced function with respect to large noise \cite{mossel2010noise},
while the tribes function is known to be noise sensitive. On the other
hand, when then noise is $o(1),$ the tribes function is in fact more
stable than the majority function. In the extreme case where only
one coordinate can be altered, the tribes is the most stable function~\cite{KKL88}.
Our result re-confirms this phenomenon in the context of random restrictions.
When the alive probability is constant, the majority function is more
stable as the variance of the restricted function decreases faster.
To be more precise, let $\stab_{1-\rho}(f):=\EE_{x}\EE_{y\in\cN_{1-\rho}(x)}[f(x)f(y)],$
where $y\in\cN_{1-\rho}(x)$ is the distribution that $y_{i}=x_{i}$
with probability $1-\rho,$ and $y_{i}$ is uniformly random otherwise.
Then
\[
\stab_{1-\rho}(f)=\EE[(T_{\sqrt{1-\rho}}f(x))^{2}].
\]
\begin{fact}
For any constant $\delta<2,$ $\rho\in(0,1),$ and $f:\{-1,1\}^{n}\to\{-1,1\},$
\[
\EE[\Var(f|_{\cR_{1-\rho}})]=1-\stab_{1-\rho}(f).
\]
\end{fact}
\begin{proof}
To sample a $(1-\rho)$ correlated pair $x$ and $y$, it is same
to first sample the set of variables $S$ that $x_{i}$ is guaranteed
to equal $y_{i}$ and then for variables outside $S$, $x_{i}$ and
$y_{i}$ are independent and uniformly random.
\begin{align*}
\stab_{1-\rho}(f) & =\EE_{x}\EE_{S}\EE_{y:y|_{S}=x|_{S}}[f(x)f(y)]\\
& =\EE_{\cR_{1-\rho}=(S,x)}\EE_{x|_{S}=y|_{S}}[f(x)f(y)]\\
& =\EE_{\cR_{1-\rho}}[\mu(f|_{\cR_{1-\rho})})^{2}].\qedhere
\end{align*}
\end{proof}
Thus, the behavior of random restrictions can be understood by analyzing
the stability of the functions. The stability of both the majority
and tribes function has been analyzed (\cite[Corollary 12]{mossel2003noise},
\cite[Corollary 2.10]{mossel2010noise}). For any constant $\rho\in(0,1)$,
we have
\[
\stab_{1-\rho}(\TR_{n})=(1-o(1))\frac{\log^{2}n}{n}\rho(2-\rho)^{\log n}+O\left(\left(\frac{\log n}{n}\right)^{2}\right),
\]
and
\begin{align*}
\stab_{1-\rho}(\MAJ_{n})=1-(\sqrt{8}/\pi-o(1))\rho^{1/2}.
\end{align*}
We observe that under random restriction where a large fraction
of coordinates remain free, the tribes function has much more variance
than the majority function, but when the alive probability is as small
as $O(1/\log n),$ the restricted tribes function goes through a phase
transition from nonconstant to constant, whereas the majority function
remains nonconstant.
Our result is sharp in that it captures the entire regime in which
the tribes function is nonconstant. The It-Ain't-Over-Till-It's-Over
theorem in~\cite{mossel2010noise}, on the other hand, does not give
a meaningful bound for the entire regime, but for the majority function
it gives an optimal bound on the variance for constant noise.
\begin{proof}
By invariance principle, the Levy distance between $\TR_{n}|_{\cR}$
and $T_{\sqrt{\rho}}\TR_{n}$ is bounded by
\[
d_{L}(f|_{\cR},T_{\sqrt{\rho}}f)\le\left(\frac{\log n}{n}\right)^{\Omega(C)},
\]
where $C=\frac{1-\rho}{\log(4/\rho(1-\rho))}.$ Abbreviate $\gamma=d_{L}(f|_{\cR},T_{\sqrt{\rho}}f),$
therefore, for any constant $\delta<1/2$
\begin{align*}
\PP[|\TR_{n}|_{\cR}|>1-\delta] & \le\PP[|f|_{\cR}|>1-2\delta]\\
& \le\PP[|T_{\sqrt{\rho}}f|>1-2\delta-\gamma]\\
& \le\frac{\EE[(T_{\sqrt{\rho}}f)^{2}]}{(1-2\delta-\gamma)^{2}}+\gamma\\
& \le\frac{(1-o(1))\rho\log^{2}n}{(1-2\delta-\gamma)^{2}n^{1-\log(1+\rho)}}+\gamma\\
& =\frac{\rho}{(1-2\delta)^{2}}\frac{\log^{2}n}{n^{\Omega(1-\rho)}}.
\end{align*}
\end{proof}
\end{comment}
\section{Random Restrictions and Hypercontractivity\label{sec:hypercontractive} }
In this section, we consider the continuous random process revealing information
about the input $X\in\{-1,1\}^{n}$ gradually, in a bit-by-bit manner.
We establish a hypercontractivity theorem for this ``operator,''
and then use the new hypercontractivity theorem to show that the first-order
Fourier coefficients remain small under random restriction given that
the original function has small individual influences.
\subsection{A martingale setup for random restrictions}
Consider the following random process. Let $x\in\DC$ be a uniformly
random element. Let $(\tau_{i})_{i\in[n]}$ be independent random variables, each uniformly
distributed in the interval $[0,1]$. The vector $\tau$ induces a uniformly random permutation
on $[n]$, which is essentially the only relevant information; for
technical reasons we prefer this continuous description in this section. Define
$S(t)=\{i:\tau_{i}\le t\}$, and define process $X(t)\in\CC$ as follows
\[
X_{i}(t)=\begin{cases}
0 & \tau_{i}>t,\\
x_{i} & \tau_{i}\le t.
\end{cases}
\]
In other words, each coordinate has been revealed by time $t$ with probability
$t$. This random process induces a random restriction
$\cR(t)=(S(t),x)$ of the function $f$, in which all the variables in $S(t)$
are set according to $x$ while the other variables are left alive.
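A minimal simulation of this revealing process, given for illustration only (the function and variable names are ours), might look as follows:
\begin{verbatim}
import random

def reveal_process(x, times):
    # x in {-1,1}^n; tau_i uniform in [0,1]; X_i(t) = x_i once tau_i <= t, else 0.
    n = len(x)
    tau = [random.random() for _ in range(n)]
    return {t: [x[i] if tau[i] <= t else 0 for i in range(n)] for t in times}

x = [random.choice([-1, 1]) for _ in range(6)]
print(reveal_process(x, times=[0.25, 0.5, 0.75, 1.0]))
\end{verbatim}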
Below, we collect some properties of a function $f$ with respect
to the above process.
\begin{prop}
\label{prop:nabla-f}For any multilinear function $f:[-1,1]^{n}\to\RR$
and any $t\ge0,$
\begin{enumerate}
\item \label{enu:nabla-f-length}$\EE[|\nabla f(X(t))|^{2}]\le\|f\|_{\infty}^{2}/(1-t),$
\item \label{enu:del-vs-inf}$\EE[\partial_{i}f(X(t))^{2}]\le\INF_{i}[f],$
for $i=1,2,\ldots,n.$
\end{enumerate}
\end{prop}
\begin{proof}
\ref{enu:nabla-f-length} Note that for $i\not\in S(t),$ by definition
\[
\partial_{i}f(X(t))=\widehat{f|_{\cR(t)}}(i).
\]
Thus, by Parseval's identity,
\begin{align*}
& \sum_{i=1}^{n}\II\{\tau_{i}>t\}\partial_{i}f(X(t))^{2}\le\EE[(f|_{\cR(t)})^{2}]\le\|f\|_{\infty}^{2}.
\end{align*}
Since $\II\{\tau_{i}>t\}$ and $\partial_{i}f(X(t))^{2}$ are independent,
we have
\begin{align*}
\|f\|_{\infty}^{2}\ge\EE\left[\sum_{i=1}^{n}\II\{\tau_{i}>t\}\partial_{i}f(X(t))^{2}\right] & =(1-t)\EE[|\nabla f(X(t))|^{2}].
\end{align*}
\ref{enu:del-vs-inf} By Fourier expansion of $\partial_{i}f,$
\begin{align*}
\EE\left[\partial_{i}f(X(t))^{2}\right] & =\EE\left[\left(\sum_{S\ni i}\hat{f}(S)\chi_{S\setminus\{i\}}(X(t))\right)^{2}\right]\\
& =\EE\left[\sum_{S\ni i}\hat{f}(S)^{2}\II\{\tau_{j}\le t,\,\forall j\in S\setminus\{i\}\}\right]\\
& \le\INF_{i}(f).\qedhere
\end{align*}
\end{proof}
\subsection{A hypercontractive inequality for random restrictions}
As the time $t$ increases, the process $X(t)$ reveals more information
about the location of $X(1)$. Thus, for $0\le t\le T\le1,$ we may
view $f(X(t))$ as a ``noisy'' version of $f(X(T))$. It is therefore
expected that some hypercontractive inequality holds for those two
expressions. This intuition can be made concrete by the following
theorem.
\begin{thm}[A hypercontractive inequality]
\label{thm:HC-ineq}For any $0\le t\le T\le1,$ and any multilinear
$f:[-1,1]^{n}\to\RR,$ we have for the random process $X$ defined
in the previous section,
\begin{equation}
\left(\EE|f(X(t))|^{2+\epsilon}\right)^{\frac{1}{2+\epsilon}}\le\left(\EE|f(X(T))|^{2}\right)^{1/2},\label{eq:HC-ineq}
\end{equation}
where
\[
\epsilon=T-t.
\]
\end{thm}
\begin{proof}
The proof is by induction on $n$. Once we establish the base case,
the inductive step follows from a standard argument. We first show
the inductive step since the base case is more involved.
\uline{Inductive step.} Let $f(z,x)=zg(x)+h(x),$ where $x\in[-1,1]^{n}$,
and $z\in[-1,1]$. Then
\begin{align*}
\EE[|f & (Z(t),X(t))|^{2+\epsilon}]^{\frac{1}{2+\epsilon}}\\
& =\left(\EE_{X(t)}\left[\EE_{Z(t)}[|Z(t)g(X(t))+h(X(t))|^{2+\epsilon}]\right]\right)^{\frac{1}{2+\epsilon}}\\
& \stackrel{(\mathrm{i})}{\le}\left(\EE_{X(t)}\left[\EE_{Z(T)}[|Z(T)g(X(t))+h(X(t))|^{2}]^{\frac{2+\epsilon}{2}}\right]\right)^{\frac{1}{2+\epsilon}}\\
& \stackrel{(\mathrm{ii})}{\le}\left(\EE_{Z(T)}\left[\EE_{X(t)}[|Z(T)g(X(t))+h(X(t))|^{2+\epsilon}]^{\frac{2}{2+\epsilon}}\right]\right)^{\frac{1}{2}}\\
& \stackrel{(\mathrm{iii})}{\le}\left(\EE_{Z(T)}\left[\EE_{X(T)}[(Z(T)g(X(T))+h(X(T)))^{2}]\right]\right)^{\frac{1}{2}},
\end{align*}
where (i) holds because for any fixed $X(t),$ $f=z\cdot g(X(t))+h(X(t))$
is a multilinear function of $z$ alone, so we can apply the single-variable case;
inequality (iii) is true because for any fixed $Z(T),$ $f$
is a multilinear function of $x$ and we apply the inductive hypothesis; (ii) follows from the Minkowski inequality,
in particular,
\begin{align*}
\left(\EE_{x}\left[\EE_{z}[f(z,x)^{2}]^{\frac{2+\epsilon}{2}}\right]\right)^{\frac{2}{2+\epsilon}} & \le\EE_{z}\left[\EE_{x}[|f(z,x)|^{2+\epsilon}]^{\frac{2}{2+\epsilon}}\right].
\end{align*}
\uline{Base case}. For the base case we consider two scenarios
separately. (i) $f$ is a nonnegative (or nonpositive) function. Let
$f:[-1,1]\to[0,\infty)$, say $f=ax+b$. It suffices to consider the
special case $f=ax+1$ for some $0<a<1$ after normalization.
The reason is as follows: Since $f$ is nonnegative, $0\le|a|\le b.$
Thus, we can assume $a\ge0$; this assumption does not change $\EE[|f(X(t))|^{p}]$.
For $b=0,$ there is nothing to prove, so we can assume $b=1$ by
normalization. For $a=0,$ $f$ is a constant function and the statement
is clearly true. Finally, the case $a=1$ follows by continuity.
After the above simplification, we carry out the actual calculation.
\begin{align*}
\EE[(aX(t)+1)^{2+\epsilon}] & =(1-t)+\frac{t}{2}((1+a)^{2+\epsilon}+(1-a)^{2+\epsilon})\\
 & =1-t+t\sum_{k\ge0}a^{2k}\binom{2+\epsilon}{2k}\\
 & =1+t\sum_{k>0}a^{2k}\binom{2+\epsilon}{2k},
\end{align*}
where the second step uses Taylor expansion of $(1+x)^{p}$ for $|x|<1$.
Note that for $\epsilon\in[0,1]$ and any $k\ge2,$
\[
\binom{2+\epsilon}{2k}\le0.
\]
Hence,
\begin{align}
\EE[(aX(t)+1)^{2+\epsilon}] & \le1+t(1+\epsilon/2)(1+\epsilon)a^{2}.\label{eq:single-bit-2+eps}
\end{align}
On the other hand,
\begin{align}
\EE[(aX(T)+1)^{2}]^{\frac{2+\epsilon}{2}}%
& =(1+Ta^{2})^{\frac{2+\epsilon}{2}}\nonumber \\
& \ge1+(1+\epsilon/2)Ta^{2},\label{eq:single-bit-2}
\end{align}
where the last step follows from Fact~\ref{fact:inequality-a}. Comparing~(\ref{eq:single-bit-2+eps})
and~(\ref{eq:single-bit-2}), we get that
\[
\EE[(aX(t)+1)^{2+\epsilon}]\le\EE[(aX(T)+1)^{2}]^{1+\epsilon/2}
\]
as long as
\begin{equation}
\epsilon\le\frac{T-t}{t}\label{eq:eps-hypercontractivity-gain}
\end{equation}
and $\epsilon\in[0,1].$
(ii) $f$ takes both positive and negative values, say $f=ax+b$.
This time it suffices to consider the special case $f=x+b$
for $0<b<1$: if $a\neq1$, we can rescale and consider the function
$f/a$ instead, and changing $b$ to $|b|$ does not affect $\EE[|f|^{p}]$.
Then
\begin{align}
\EE[|X(t & )+b|^{2+\epsilon}]\nonumber \\
& =(1-t)b^{2+\epsilon}+t/2((1+b)^{2+\epsilon}+(1-b)^{2+\epsilon})\nonumber \\
& =(1-t)b^{2+\epsilon}+t\sum_{k\ge0}b^{2k}\binom{2+\epsilon}{2k}\nonumber \\
& \le(1-t)b^{2}+t(1+b^{2}(1+\epsilon/2)(1+\epsilon))\nonumber \\
& =1+(1-t)(b^{2}-1)+t(1+\epsilon/2)(1+\epsilon)b^{2},\label{eq:single-bit-2+eps-general}
\end{align}
where the third step uses the facts that $\binom{2+\epsilon}{2k}\le0$
for $k\ge2$ and $\epsilon\in[0,1],$ and that $b^{x}$ is decreasing
in $x$ for $0<b<1$. On the other hand,
\begin{align}
(\EE[(X & (T)+b)^{2}])^{\frac{2+\epsilon}{2}}\nonumber \\
& =(T+b^{2})^{1+\epsilon/2}\nonumber \\
& =(1+b^{2})^{1+\epsilon/2}\left(1-\frac{1-T}{1+b^{2}}\right)^{1+\epsilon/2}\nonumber \\
& \ge(1+(1+\epsilon/2)b^{2})\left(1-(1+\epsilon/2)\frac{1-T}{1+b^{2}}\right)\nonumber \\
& =1+(1+\epsilon/2)b^{2}-(1+\epsilon/2)(1+b^{2}+\epsilon b^{2}/2)\frac{1-T}{1+b^{2}}\nonumber \\
& =1+(1+\epsilon/2)b^{2}-(1+\epsilon/2)(1-T)-(1+\epsilon/2)b^{2}\epsilon(1-T)/(2+2b^{2})\nonumber \\
& \ge1+(1+\epsilon/2)b^{2}-(1+\epsilon/2)(1-T)-(1+\epsilon/2)b^{2}\epsilon(1-T)/2,\label{eq:single-bit-2-general}
\end{align}
where the third step invokes Fact~\ref{fact:inequality-a} twice.
Let $R,L$ denote~(\ref{eq:single-bit-2-general}) and~(\ref{eq:single-bit-2+eps-general}),
respectively. Further, let $B=b^{2}$; then $R-L$ is a linear function
of $B$. To verify that $R\ge L$, one only needs to verify the cases
$B=0$ and $B=1$. Recall that $\epsilon=T-t$; therefore
\begin{align*}
B=0:\qquad\qquad R-L & =1-(1+\epsilon/2)(1-T)-t\\
& =\epsilon-(1-T)\epsilon/2\\
& =\epsilon(1+T)/2\\
& \ge0,
\end{align*}
and
\begin{align*}
B=1:\qquad\qquad R-L & =(1+\epsilon/2)(T-(1-T)\epsilon/2-t(1+\epsilon))\\
& =(1+\epsilon/2)(\epsilon/2+T\epsilon/2-\epsilon t)\\
& =(1+\epsilon/2)\epsilon/2(1+T-2t)\\
& \ge0.
\end{align*}
This concludes our proof.
\end{proof}
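As an illustrative sanity check of the single-bit ($n=1$) case of~(\ref{eq:HC-ineq}), and not as part of the proof, one may verify the inequality numerically; the grid below is an arbitrary choice of ours.
\begin{verbatim}
import itertools

def lhs(a, b, t, eps):
    # E|aX(t)+b|^{2+eps}, where X(t) = 0 w.p. 1-t and +-1 each w.p. t/2
    return ((1 - t) * abs(b) ** (2 + eps)
            + (t / 2) * (abs(b + a) ** (2 + eps) + abs(b - a) ** (2 + eps)))

def rhs(a, b, T):
    return T * a * a + b * b          # E[(aX(T)+b)^2]

grid = [k / 20 for k in range(21)]
worst = max(
    lhs(a, b, t, T - t) ** (1 / (2 + T - t)) - rhs(a, b, T) ** 0.5
    for a, b, t, T in itertools.product(grid, repeat=4) if t <= T
)
print(worst)                          # <= 0 up to floating-point rounding
\end{verbatim}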
\begin{rem}
One can also prove a hypercontractive inequality of the $p$-norm
vs. 2-norm for $1<p<2$. The proof is analogous.
\end{rem}
\subsection{
\texorpdfstring{$\ell_{\infty}$-Fourier mass of $f|_{\cR(t)}$ of the first order}{maximum first-order Fourier mass of the restricted function}\label{sec:beta-uniform}}
A key quantity in our analysis is the $\ell_{\infty}$-Fourier mass
of $f|_{\cR(t)}$ of the first order. Namely,
\begin{equation}
\beta^{*}(t)=\max_{i\not\in S(t)}|\partial_{i}f(X(t))|.\label{eq:beta}
\end{equation}
In some sense, $\beta^{*}(t)$ represents the maximal influence of
$f|_{\cR(t)}.$ In particular, for the special case when $f$ is a
monotone Boolean function, $\beta^{*}(t)$ is exactly $\mINF(f|_{\cR(t)})$.
The importance of $\beta^{*}(t)$ will become clear in later sections.
Next, we show that with high probability $\beta^{*}(t)$ remains small
for $t$ even very close to 1. In fact, what we will show is that
\[
\beta(t)=\max_{i\in[n]}|\partial_{i}f(X(t))|
\]
remains small with high probability. In particular, we establish the
following lemma using the hypercontractive inequality from the last
section.
\begin{lem}[``influence'' remains small under random restriction]
\label{lem:influence-remains-small}Let $f:\{-1,1\}^{n}\to[-1,1]$,
and let $0\le t<1$ be such that
\begin{equation}
\frac{8}{1-t}\ln\frac{2}{1-t}\le\ln\frac{1}{\mINF(f)}.\label{eq:lem-t-bound}
\end{equation}
Then for any $\theta\in(0,1)$,
\[
\PP\left[\sup_{0\le s\le t}\beta(s)\ge\theta\right]\le\theta^{-3}\mINF(f)^{\frac{1-t}{8}}.
\]
\begin{comment}
\begin{lem}
In particular, for any $\theta\ge\mINF(f)^{\frac{1-t}{30}},$
\[
\PP\left[\sup_{0\le s\le t}\beta(s)\ge\theta\right]\le\mINF(f)^{\frac{1-t}{40}}.
\]
\end{lem}
\end{comment}
\end{lem}
\begin{proof}
Take $T=(1+t)/2$ and let
\begin{equation}
\epsilon=T-t.
\end{equation}
Then
\begin{align}
\PP\left[\sup_{0\le s\le t}\beta(s)\ge\theta\right] & \le\PP\left[\sup_{0\le s\le t}\sum_{i=1}^{n}|\partial_{i}f(X(s))|^{2+\epsilon}\ge\theta^{2+\epsilon}\right]\nonumber \\
& \stackrel{(\mathrm{i})}{\le}\theta^{-2-\epsilon}\sum_{i}\EE[|\partial_{i}f(X(t))|^{2+\epsilon}]\nonumber \\
& \stackrel{(\mathrm{ii})}{\le}\theta^{-2-\epsilon}\sum_{i}(\EE[\partial_{i}f(X(T)){}^{2}])^{1+\epsilon/2}\nonumber \\
& \stackrel{(\mathrm{iii})}{\le}\theta^{-2-\epsilon}\sum_{i}\INF_{i}(f)^{\epsilon/2}\EE[\partial_{i}f(X(T))^{2}]\nonumber \\
& \stackrel{(\mathrm{iv})}{\le}\theta^{-2-\epsilon}\frac{\mINF(f)^{\epsilon/2}}{1-T}\nonumber \\
& \stackrel{(\mathrm{v})}{\le}\theta^{-2-\epsilon}\mINF(f)^{\epsilon/4}%
\label{eq:beta-fine-bound}
\end{align}
where (i) is true due to Fact~\ref{fact:martingales} and Theorem~\ref{thm:Doob-ineq};
(ii) follows from Theorem~\ref{thm:HC-ineq}, (iii) follows from
Proposition~\ref{prop:nabla-f}~\ref{enu:del-vs-inf} , (iv) follows
from Proposition~\ref{prop:nabla-f}~\ref{enu:nabla-f-length} and
(v) follows by our choice of $T$, and~(\ref{eq:lem-t-bound}).
\end{proof}
\subsection{\label{subsec:proof-influence-small}Proof of Lemma~\ref{lem:influence-remain-small-P}}
At this point, we have almost proved Lemma~\ref{lem:influence-remain-small-P},
except that in the previous section we proved the version for the continuous
random process instead of the discrete one. Next, we show that the
continuous random process, with its probability measure
$\tilde{P}$ used in Lemma~\ref{lem:influence-remains-small}, is
close to the discrete uniform process (with measure $P$) generated by Procedure 1 in
Section~\ref{sec:control}, in the following sense.
\begin{claim}
\label{claim:closeness-P-tildeP}Let $\cE_{t}$ be some event that
depends only on $X(t)$. Then for any $\epsilon\in(0,1),$
\[
\Pr_{P}\left[\bigvee_{0\le t\le(1-\epsilon)n}\cE_{t}\right]\le\Pr_{\tilde{P}}\left[\bigvee_{0\le\tilde{t}\le(1-\epsilon/2)}\cE_{\tilde{t}}\right]+\exp(-\epsilon n/8).
\]
\end{claim}
\begin{proof}
We couple the two processes in the obvious way. Recall that $(\tau_{i})_{i\in[n]}$
are the random variables uniformly distributed in the interval $[0,1]$
in the continuous process. $\tau$ induces a permutation $\pi$ on
$[n]$. As time $\tilde{t}$ goes from 0 to 1 in $\tilde{P}$, whenever
a variable is set to value $v\in\{-1,1\},$ the corresponding variable
in the discrete process is also set to $v$. Recall that we denote
the set of fixed variables at time $\tilde{t}$ in $\tilde{P}$ by
$S(\tilde{t}).$ Then at time $\tilde{t}=(1-\epsilon/2)$, by a Chernoff bound,
\[
\Pr_{\tilde{P}}[|S(\tilde{t})|<(1-\epsilon)n]\le\exp(-\epsilon n/8).
\]
Conditioned on $|S(\tilde{t})|\ge(1-\epsilon)n$,
\[
\left\{ \bigvee_{0\le t\le(1-\epsilon)n}\cE_{t}\right\} _{P}\impliedby\left\{ \bigvee_{0\le\tilde{t}\le(1-\epsilon/2)}\cE_{\tilde{t}}\right\} _{\tilde{P}}.
\]
The claim follows.
\end{proof}
Now, Lemma~\ref{lem:influence-remain-small-P} is an immediate corollary
of Lemma~\ref{lem:influence-remains-small} and Claim~\ref{claim:closeness-P-tildeP}.
\begin{cor}[Restatement of Lemma~\ref{lem:influence-remain-small-P}]
Let $\epsilon>0$ be such that
\[
\frac{16}{\epsilon}\ln\frac{4}{\epsilon}\le\ln\frac{1}{\mINF(f)}.
\]
Then for any $\theta\in(0,1)$,
\begin{align*}
\Pr_{P}\left[\max_{i\in[n]}\max_{0\le t\le(1-\epsilon)n}|\partial_{i}f(X(t))|\ge\theta\right] & \le\theta^{-3}\mINF(f)^{\frac{\epsilon}{16}}+\exp(-\epsilon n/8).
\end{align*}
\end{cor}
\begin{proof}
Let $t=(1-\epsilon/2).$ The choice of $\epsilon$ guarantees that
we can apply Lemma~\ref{lem:influence-remains-small}. In view of
Claim~\ref{claim:closeness-P-tildeP}, we are done.
\end{proof}
\bibliographystyle{plain}
| {
"timestamp": "2022-08-09T02:04:55",
"yymm": "2208",
"arxiv_id": "2208.03450",
"language": "en",
"url": "https://arxiv.org/abs/2208.03450",
"abstract": "We study the probability of Boolean functions with small max influence to become constant under random restrictions. Let $f$ be a Boolean function such that the variance of $f$ is $\\Omega(1)$ and all its individual influences are bounded by $\\tau$. We show that when restricting all but a $\\rho=\\tilde{\\Omega}((\\log(1/\\tau))^{-1})$ fraction of the coordinates, the restricted function remains nonconstant with overwhelming probability. This bound is essentially optimal, as witnessed by the tribes function $\\mathrm{TRIBES}=\\mathrm{AND}_{n/C\\log n}\\circ\\mathrm{OR}_{C\\log n}$.We extend it to an anti-concentration result, showing that the restricted function has nontrivial variance with probability $1-o(1)$. This gives a sharp version of the \"it ain't over till it's over\" theorem due to Mossel, O'Donnell, and Oleszkiewicz. Our proof is discrete, and avoids the use of the invariance principle.We also show two consequences of our above result: (i) As a corollary, we prove that for a uniformly random input $x$, the block sensitivity of $f$ at $x$ is $\\tilde{\\Omega}(\\log(1/\\tau))$ with probability $1-o(1)$. This should be compared with the implication of Kahn, Kalai, and Linial's result, which implies that the average block sensitivity of $f$ is $\\Omega(\\log(1/\\tau))$. (ii) Combining our proof with a well-known result due to O'Donnell, Saks, Schramm, and Servedio, one can also conclude that: Restricting all but a $\\rho=\\tilde\\Omega(1/\\sqrt{\\log (1/\\tau) })$ fraction of the coordinates of a monotone function $f$, then the restricted function has decision tree complexity $\\Omega(\\tau^{-\\Theta(\\rho)})$ with probability $\\Omega(1)$.",
"subjects": "Computational Complexity (cs.CC); Discrete Mathematics (cs.DM); Probability (math.PR)",
"title": "An Optimal \"It Ain't Over Till It's Over\" Theorem",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9915543748058886,
"lm_q2_score": 0.7154239957834733,
"lm_q1q2_score": 0.7093817928602125
} |
https://arxiv.org/abs/math/0507163 | Permutohedra, associahedra, and beyond | The volume and the number of lattice points of the permutohedron P_n are given by certain multivariate polynomials that have remarkable combinatorial properties. We give several different formulas for these polynomials. We also study a more general class of polytopes that includes the permutohedron, the associahedron, the cyclohedron, the Pitman-Stanley polytope, and various generalized associahedra related to wonderful compactifications of De Concini-Procesi. These polytopes are constructed as Minkowski sums of simplices. We calculate their volumes and describe their combinatorial structure. The coefficients of monomials in Vol P_n are certain positive integer numbers, which we call the mixed Eulerian numbers. These numbers are equal to the mixed volumes of hypersimplices. Various specializations of these numbers give the usual Eulerian numbers, the Catalan numbers, the numbers (n+1)^{n-1} of trees, the binomial coefficients, etc. We calculate the mixed Eulerian numbers using certain binary trees. Many results are extended to an arbitrary Weyl group. | \section{Introduction}
The {\it permutohedron\/} $P_n(x_1,\dots,x_{n})$ is the convex hull of the $n!$
points obtained from $(x_1,\dots,x_{n})$ by permutations of the coordinates.
Permutohedra appear in representation theory as {\it weight polytopes} of
irreducible representations of $GL_n$ and in geometry as {\it moment
polytopes}.
In this paper we calculate volumes of permutohedra and numbers of their integer
lattice points. Let us give a couple of examples.
It was known before that the volume of the
regular permutohedron $P_n(n,n-1,\dots,1)$ equals the number $n^{n-2}$ of
{\it trees\/} on $n$ labeled vertices and the number of lattice points of this
polytope equals
the number of {\it forests\/} on $n$ labeled vertices. Another
example is the {\it hypersimplex\/} $\Delta_{k,n} =
P_n(1,\dots,1,0,\dots,0)$
(with $k$ ones).
It is well-known that the volume of $\Delta_{k,n}$ is
the Eulerian number, that is the number of permutations
of size $n-1$ with $k-1$ descents, divided by $(n-1)!$.
This calculation dates back to Laplace~\cite{Lap}.
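As a quick sanity check of the first example, purely for illustration, one can count the lattice points of the regular permutohedron $P_3(3,2,1)$ by brute force; the membership test below uses the standard fact (Rado's theorem) that $P_n(x_1,\dots,x_n)$ is exactly the set of points majorized by $(x_1,\dots,x_n)$.
\begin{verbatim}
from itertools import product

def in_permutohedron(t, x):
    # t lies in P_n(x) iff t is majorized by x: equal sums, dominated partial sums.
    ts, xs = sorted(t, reverse=True), sorted(x, reverse=True)
    return (sum(ts) == sum(xs)
            and all(sum(ts[:k]) <= sum(xs[:k]) for k in range(1, len(xs))))

x = (3, 2, 1)
points = [t for t in product(range(1, 4), repeat=3) if in_permutohedron(t, x)]
print(len(points))   # 7 = number of forests on 3 labeled vertices
\end{verbatim}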
These examples are just the tip of an iceberg. They point to
a rich combinatorial structure. Both the volume and the number of
lattice points of the permutohedron $P_n(x_1,\dots,x_n)$ are given
by multivariate polynomials in $x_1,\dots,x_n$ that have remarkable
properties.
We present three different combinatorial interpretations of these polynomials
using three different approaches. Our first approach is based of Brion's
formula that expresses the sum of exponents over lattice points of a
polytope as a rational function. From this we deduce a formula for the volume
of the permutohedron as a sum on $n!$ polynomials. Then we deduce a
combinatorial formula for the coefficients in terms permutations
with given descent sets. We extend the formula for the volume to
weight polytopes for any Lie type.
There are some similarities between this formula and Weyl's
character formula.
Our second approach is based on a way to represent permutohedra as a weighted
Minkowski sum $\sum y_I\Delta_I$ of the coordinate simplices. We extend
our results to a larger class of polytopes that we call {\it generalized
permutohedra}. These polytopes are obtained from usual permutohedra by
parallel translations of their faces.
We discuss combinatorial structure of generalized permutohedra.
This class includes many interesting polytopes: associahedra,
cyclohedra, various generalized associahedra related to
De~Concini-Procesi's wonderful compactifications,
graph associahedra, Pitman-Stanley polytopes, graphical zonotopes, etc.
We describe the combinatorial structure for a class
of generalized permutohedra in terms of {\it nested families.}
This description leads to a generalization of the
Catalan numbers.
We calculate volumes of generalized
permutohedra by first calculating {\it mixed volumes\/} of various
coordinate simplices using {\it Bernstein's theorem\/} on systems of
algebraic equations.
More generally, we calculate the {\it Ehrhart polynomial\/} of generalized permutohedra,
i.e., the polynomial that expresses their number of lattice points.
Interestingly, the formula for the number of lattice points is obtained
from the formula for the volume by replacing usual powers in monomials
with raising powers. We also found an interesting new {\it duality\/}
for generalized permutohedra that preserves the number of lattice points.
We introduce and study {\it root polytopes\/} and their
triangulations. These are convex hulls of the origin and end-points of
several positive roots for a type $A$ root system.
In particular, this class of polytopes includes direct products of
two simplices. We apply the {\it Cayley trick\/} to show that the volume
of a root polytope is related to the number of lattice points in a
certain associated generalized permutohedron. Each triangulation
of a root polytope leads to a bijection between lattice points of the
associated generalized permutohedron and its dual generalized permutohedron.
As an application of these techniques we solve a problem about
combinatorial description of diagonal vectors of shifted
Young tableaux of the triangular shape.
Our third approach is based on a way to represent permutohedra as a Minkowski
sum of the hypersimplices $\sum u_k\Delta_{k,n}$. We express volumes of
permutohedra in terms of mixed volumes of the hypersimplices.
We call these mixed volume the {\it mixed Eulerian numbers.}
Various specializations of these numbers lead to the usual Eulerian
numbers, the Catalan numbers, the binomial coefficients,
the factorials, the number $(n+1)^{n-1}$ of trees, and many other
combinatorial sequences. We prove several identities for the
mixed Eulerian number and give their combinatorial interpretation
in terms of weighted binary trees.
We also extend this approach and generalize mixed Eulerian numbers
to an arbitrary root system.
A brief overview of the paper follows.
In Section~\ref{sec:perm_and_zono}, we define permutohedra,
give their several known properties, and discuss their
relationship with zonotopes.
In Section~\ref{sec:descent_div_sym}, we give a formula
for volumes of permutohedra (Theorem~\ref{th:f1})
based on Brion's formula
and derive another formula for volumes
(Theorem~\ref{th:vol=descent_number})
that involves numbers of permutations with given descents sets.
In Section~\ref{sec:weight-polytopes}, we give a formula
for volumes and lattice points enumerators of weight polytopes
for any Lie type (Theorems~\ref{th:f1-W}
and~\ref{th:sum-e-W}).
In Section~\ref{sec:dragon_marriage_condition}, we give
a formula for volume of permutohedra
(Theorem~\ref{th:second-formula})
based on our second approach.
In Section~\ref{sec:generalized_permutohedra}, we discuss
generalized permutohedra and several ways to parametrize
this class of polytopes.
In Section~\ref{sec:nested}, we discuss combinatorial
structure for a class of generalized permutohedra in terms
of nested families (Theorem~\ref{th:nested_complex}).
In Section~\ref{sec:examples_of_gen_perm}, we apply this
description to several special cases of generalized permutohedra.
In Section~\ref{sec:vol_via_Bernstein},
we extend Theorem~\ref{th:second-formula} to generalized
permutohedra and calculate their
volumes (Theorem~\ref{th:second-formula-generalized})
using Bernstein's theorem.
In Section~\ref{sec:vol_via_Brion}, we give alternative
formulas for volumes
(Theorems~\ref{th:gen-perm-sum-w} and~\ref{th:gen_descents_sum_w})
based on our first approach.
In Section~\ref{sec:generalized_Ehrhart}, we state
a formula for the Ehrhart polynomial of generalized permutohedra
(Theorem~\ref{th:gen_ehrhrart})
and derive the duality theorem
(Corollary~\ref{cor:duality-lattice-points}).
In Section~\ref{sec:root_polytope}, we discuss root
polytopes and their triangulations for bipartite graphs.
In Section~\ref{sec:root_non_bipartite}, we treat the case of
non-bipartite graphs.
In Section~\ref{sec:subdivision}, we show how triangulations of root
polytopes are related to lattice points of generalized permutohedra.
We also finish the proof of Theorem~\ref{th:gen_ehrhrart}.
In Section~\ref{sec:shifted_tableaux}, we describe diagonals
of shifted Young tableaux.
In Section~\ref{sec:Mixed_Eulerian}, we discuss our third
approach based on the mixed Eulerian numbers. We prove
several properties of these numbers
(Theorems~\ref{th:mixed_eul_properties}
and~\ref{th:equivalence-classes}).
In Section~\ref{sec:weighted_binary_trees}, we give
the third combinatorial formula for volumes
of permutohedra (Theorem~\ref{th:vol_perm_binary_trees})
and give
a combinatorial interpretation for the mixed Eulerian numbers
(Theorem~\ref{th:mixed_eulerian_binary_trees}).
Finally, in Section~\ref{sec:vol_weight_Phi} we extend
our third approach to weight polytopes for an arbitrary
root system (Theorems~\ref{th:Phi_volume_trees}
and~\ref{th:Phi_mixed_eulerian_binary_trees}).
In Appendix~\ref{sec:appendix-lattice-points}, we review
and give short proofs of needed general results on enumeration of
lattice points in polytopes.
Let us give a notational remark about our use of various coordinate
systems. We use the $x$-coordinates to parametrize permutohedra
expressed in the standard form as convex hulls of $S_n$-orbits
of $(x_1,\dots,x_n)$. We use the $z$-coordinates to parametrize
(generalized) permutohedra expressed by linear inequalities as
$\{t \mid f_i(t)\geq z_i\}$, i.e., the $z$-coordinates correspond
to the facets of these polytopes.
We use the $y$-coordinates to parametrize (generalized) permutohedra
written as weighted Minkowski sums $\sum y_I \Delta_{I}$
of the coordinate simplices.
Finally, we use the $u$-coordinates to parametrize permutohedra written
as weighted Minkowski sums $\sum u_k\,\Delta_{k,n}$
of the hypersimplices.
For all other purposes we use the $t$-coordinates.
Throughout the paper, we use the notation
$[n]:=\{1,2,\dots,n\}$ and $[m,n]:=\{m,m+1,\dots,n\}$.
\medskip
\noindent
{\sc Acknowledgments:}
I thank Richard Stanley and Andrei Zelevinsky for helpful discussions.
\section{Permutohedra and zonotopes}
\label{sec:perm_and_zono}
\begin{definition}
For $x_1,\dots,x_{n}\in \mathbb{R}$,
the {\it permutohedron} $P_n(x_1,\dots,x_{n})$
is the convex polytope in $\mathbb{R}^{n}$ defined as the convex hull
of all vectors obtained from $(x_1,\dots,x_{n})$
by permutations of the coordinates:
$$
P_n(x_1,\dots,x_{n}):=\mathrm{ConvexHull}((x_{w(1)},\dots,x_{w(n)})
\mid w\in S_{n}),
$$
where $S_{n}$ is the symmetric group.
This polytope lies in the hyperplane
$H_c=\{(t_1,\dots,t_{n})\mid t_1+\cdots + t_{n} = c\}
\subset \mathbb{R}^{n}$, where $c=x_1+\cdots+x_{n}$.
Thus $P_n(x_1,\dots,x_{n})$ has dimension at most $n-1$.
\end{definition}
For example, for $n=3$ and distinct $x_1,x_2,x_3$, the permutohedron
$P_3(x_1,x_2,x_3)$
is the hexagon shown below.
If some of the numbers $x_1,x_2, x_3$ are equal to each other
then the permutohedron degenerates into a triangle, or even a single
point.
\begin{center}
\input{fig1-2.pstex_t}
\end{center}
For a polytope $P\subset H_c$, define its volume $\mathrm{Vol}\, P$ as
the usual $(n-1)$-dimensional volume of the polytope $p(P)\subset\mathbb{R}^{n-1}$,
where $p$
is the projection $p:(t_1,\dots,t_{n})\mapsto(t_1,\dots,t_{n-1})$.
If $c\in\mathbb{Z}$, then the volume of any parallelepiped formed
by generators of the integer lattice $\mathbb{Z}^{n}\cap H_c$ is 1.
In this paper, we calculate the volume
$$
V_n(x_1,\dots,x_{n}):=\mathrm{Vol}\, P_n(x_1,\dots,x_{n})
$$
of the permutohedron. Also, for integer $x_1,\dots,x_{n}$, we calculate
its number of lattice points
$$
N_n(x_1,\dots,x_{n}):=\#\bigl(P_n(x_1,\dots,x_{n})\cap \mathbb{Z}^{n}\bigr).
$$
We will see that both $V_n(x_1,\dots,x_{n})$ and $N_n(x_1,\dots,x_{n})$
are polynomials of degree $n-1$ in the variables $x_1,\dots,x_{n}$.
The polynomial $V_n$ is the top homogeneous part of $N_n$.
The {\it Ehrhart polynomial} of the permutohedron
is $E_{P_n}(t)=N_n(tx_1,\dots,tx_n)$.
We will give three totally different formulas for these polynomials.
\medskip
The special permutohedron for $(x_1,\dots,x_{n}) = (n-1,n-2,\dots,0)$,
$$
P_n(n-1,\dots,0)=\mathrm{ConvexHull}((w(1)-1,\dots,w(n)-1)\mid w\in
S_{n})
$$
is the most symmetric permutohedron.
It is invariant under the action of the symmetric group $S_{n}$.
For example, for $n=3$, it is the regular hexagon:
\begin{center}
\input{fig2-1.pstex_t}
\end{center}
We will call this special permutohedron $P_n(n-1,\dots,0)$
the {\it regular permutohedron}.
The volume of the regular permutohedron and its Ehrhart polynomial
can be easily calculated using the general result on graphical zonotopes
given below.
Recall that the {\it Minkowski sum} of several subsets $A$, \dots, $B$
in a linear space is the locus of sums of vectors that belong
to these subsets $A+\cdots + B :=\{a+\cdots + b\mid
a\in A, \dots, b\in B\}$. If $A,\dots,B$ are convex polytopes then
so is their Minkowski sum. The Newton polytope $\mathrm{Newton}(f)$
for a polynomial $f=\sum_{a\in\mathbb{Z}^n} \beta_{a}\,t_1^{a_1}\cdots t_n^{a_n}$
is the convex hull of integer points $a\in\mathbb{Z}^n$ such that $\beta_a\ne 0$.
Then $\mathrm{Newton}(f\cdot g)$ is the Minkowski sum $\mathrm{Newton}(f)+\mathrm{Newton}(g)$.
A {\it zonotope} is a Minkowski sum of several line intervals.
\begin{definition}
\label{def:graph_zonotope}
For a graph $\Gamma$ on the vertex set $[n]:=\{1,\dots,n\}$,
the {\it graphical zonotope\/} $Z_\Gamma$ is defined as the Minkowski
sum of the line intervals:
$$
Z_\Gamma :=\sum_{(i,j)\in \Gamma} [e_i,e_j] = \mathrm{Newton}\left(\prod_{(i,j)\in \Gamma}
(t_i-t_j)\right),
$$
where the Minkowski sum and the product are over edges $(i,j)$, $i<j$,
of the graph $\Gamma$, and
$e_1,\dots,e_{n}$ are the coordinate vectors in $\mathbb{R}^{n}$. The zonotope
$Z_\Gamma$ lies in the hyperplane $H_{c}$, where $c$ is the number of edges of $\Gamma$.
The polytope $Z_\Gamma$ was first introduced by Zaslavsky (unpublished).
\end{definition}
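For example, for the path graph $\Gamma$ with edges $(1,2)$ and $(2,3)$,
the zonotope $Z_\Gamma=[e_1,e_2]+[e_2,e_3]$ is the parallelogram with vertices
$e_1+e_2$, $e_1+e_3$, $2e_2$, $e_2+e_3$; equivalently, it is the Newton polytope
of $(t_1-t_2)(t_2-t_3)=t_1t_2-t_1t_3-t_2^2+t_2t_3$.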
The following two claims express well-known properties of graphical
zonotopes and permutohedra.
\begin{proposition}
The regular permutohedron $P_n(n-1,\dots,0)$ is the
graphical zonotope $Z_{K_{n}}$ for the complete graph $K_{n}$.
\end{proposition}
\begin{proof}
The permutohedron $P_n(n-1,\dots,0)$ is the Newton polytope
of the Vandermonde determinant
$\det( t_i^{j-1})_{1\leq i,j\leq n}$.
On the other hand, the Vandermonde determinant
is equal to the product $\prod_{1\leq i<j\leq n} (t_j-t_i)$,
whose Newton polytope is the zonotope $Z_{K_{n}}$.
\end{proof}
The following claim is given in Stanley~\cite[Exer.~4.32]{EC1}.
\begin{proposition}
\label{prop:vol-Z-G}
For a connected graph $\Gamma$ on $n$ vertices, the volume
$\mathrm{Vol}\, Z_\Gamma$ of the graphical zonotope $Z_\Gamma$
equals the number of spanning trees of the graph $\Gamma$.
The number of lattice points of $Z_\Gamma$ equals the number of
forests in the graph $\Gamma$.
In particular, the volume of the regular permutohedron
is $\mathrm{Vol}\, P_n(n-1,\dots,0) = n^{n-2}$ and its number of lattice points
equals the number of forests on $n$ labeled vertices.
\end{proposition}
The zonotope $Z_\Gamma$ can be subdivided into unit parallelepipeds associated
with spanning trees of $\Gamma$, which implies the first claim.
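For example, for $n=3$, the regular permutohedron $P_3(2,1,0)$ is a hexagon of
volume $3^{3-2}=3$, the number of spanning trees of $K_3$; it contains $7$ lattice
points, namely its $6$ vertices and the center $(1,1,1)$, matching the $7$ forests
on $3$ labeled vertices.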
In general, for arbitrary $x_1,\dots,x_{n}$, the permutohedron
$P_n(x_1,\dots,x_{n})$ is not a zonotope. We cannot easily calculate its
volume by subdividing it into parallelepipeds.
One can alternatively describe the permutohedron
$P_n(x_1,\dots,x_{n})$ in terms of linear inequalities.
\begin{proposition}
\label{prop:Rado}
{\rm Rado~\cite{Rad}}
Let us assume that $x_1\geq \cdots \geq x_{n}$.
Then a point $(t_1,\dots,t_{n})\in \mathbb{R}^n$ belongs to the permutohedron
$P_n(x_1,\dots,x_{n})$ if and only if
$$
t_1+\dots+t_{n} = x_1 + \dots + x_{n}
$$
and, for any nonempty subset $\{i_1,\dots,i_k\}\subset\{1,\dots,n\}$,
we have
$$
t_{i_1}+\cdots +t_{i_k} \leq x_1+\cdots + x_k.
$$
\end{proposition}
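For instance, for $n=3$ and $x_1\geq x_2\geq x_3$, the hexagon $P_3(x_1,x_2,x_3)$
is cut out of the plane $t_1+t_2+t_3=x_1+x_2+x_3$ by the six inequalities
$t_i\leq x_1$, for $i=1,2,3$, and $t_i+t_j\leq x_1+x_2$, for $i<j$.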
The combinatorial structure of the permutohedron $P_n(x_1,\dots,x_{n})$
does not depend on $x_1,\dots,x_{n}$ as long as all these numbers
are distinct. More precisely, we have the following well-known statement.
\begin{proposition}
\label{prop:comb_structute_perm}
Let us assume that $x_1 > \cdots > x_{n}$.
The $d$-dimensional faces of $P_n(x_1,\dots,x_{n})$ are
in one-to-one correspondence with subdivisions of the
set $\{1,\dots,n\}$ into disjoint nonempty ordered blocks
$B_1 \cup \dots \cup B_{n-d} = \{1,\dots,n\}$.
The face corresponding to the subdivision into blocks $B_1,\dots,B_{n-d}$
is given by the $n-d$ linear equations
$$
\sum_{i\in B_1\cup \cdots \cup B_k} t_i = x_1 + \dots +
x_{|B_1\cup\cdots \cup B_k|},\quad\textrm{for } k =1,\dots,n-d.
$$
In particular, two vertices $(x_{u(1)},\dots,x_{u(n)})$
and $(x_{w(1)},\dots,x_{w(n)})$, $u,w\in S_{n}$, are connected
by an edge if and only if $w=u\,s_i$, for some adjacent transposition
$s_i = (i,i+1)$.
\end{proposition}
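For example, for $n=3$ the hexagon $P_3(x_1,x_2,x_3)$ has $6$ vertices and $6$ edges,
corresponding to the $6$ subdivisions of $\{1,2,3\}$ into three ordered singleton blocks
and the $6$ subdivisions into two ordered blocks, respectively; the trivial subdivision
into one block corresponds to the hexagon itself.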
\section{Descents and divided symmetrization}
\label{sec:descent_div_sym}
\begin{theorem}
\label{th:f1}
Let us fix distinct numbers $\lambda_1,\dots,\lambda_{n}\in \mathbb{R}$.
The volume of the
permutohedron $P_n = P_n(x_1,\dots,x_{n})$ is equal to
$$
\mathrm{Vol}\, P_n = \frac{1}{(n-1)!} \sum_{w\in S_{n}}
\frac{(\lambda_{w(1)} x_1 + \cdots + \lambda_{w(n)} x_{n})^{n-1}}
{(\lambda_{w(1)}-\lambda_{w(2)})
(\lambda_{w(2)}-\lambda_{w(3)})\cdots
(\lambda_{w(n-1)}-\lambda_{w(n)})}.
$$
\end{theorem}
Notice that all $\lambda_i$'s on the right-hand side cancel each other after
the symmetrization. Theorem~\ref{th:f1-W} below gives a similar formula
for any Weyl group. Its proof is based on
Brion's formula~\cite{Bri};
see Appendix~\ref{sec:appendix-lattice-points}.
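For instance, for $n=2$, Theorem~\ref{th:f1} gives
$$
\mathrm{Vol}\, P_2 = \frac{1}{1!}\left(
\frac{\lambda_1 x_1+\lambda_2 x_2}{\lambda_1-\lambda_2}
+\frac{\lambda_2 x_1+\lambda_1 x_2}{\lambda_2-\lambda_1}\right) = x_1-x_2,
$$
the normalized length of the segment $P_2(x_1,x_2)$ (for $x_1\geq x_2$).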
Theorem~\ref{th:f1} gives an efficient way to calculate the polynomials
$V_n = \mathrm{Vol}\, P_n$.
However this theorem does not explain the combinatorial significance of the
coefficients in these polynomials. The next theorem gives a combinatorial
interpretation for the coefficients.
Given a sequence of nonnegative integers $(c_1,\dots,c_{n})$
such that $c_1+\cdots+c_{n} = n-1$, let us construct the sequence
$(\epsilon_1,\dots,\epsilon_{2n-2})\in \{1,-1\}^{2n-2}$ by replacing
each entry `$c_i$' with `$1,\dots,1,-1$' ($c_i$ `1's followed by one `$-1$'),
for $i=1,\dots,n$, and then removing the last `$-1$'.
For example, the sequence $(2,0,1,1,0,1)$ gives
$(1,1,-1,-1,1,-1,1,-1,-1,1)$. This map is actually a bijection
between the sets
$\{(c_1,\dots,c_{n})\in\mathbb{Z}_{\geq 0}^{n}\mid c_1+\cdots +c_{n} =n-1\}$
and $\{(\epsilon_1,\dots,\epsilon_{2n-2})\in\{1,-1\}^{2n-2}\mid
\epsilon_1+\cdots + \epsilon_{2n-2} =0\}$.
Let us define the set $I_{c_1,\dots,c_{n}}$ by
$$
I_{c_1,\dots,c_{n}}:=\{i\in \{1,\dots,n-1\}\mid
\epsilon_1 + \cdots + \epsilon_{2i-1}<0\}.
$$
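For example, for the sequence $(2,0,1,1,0,1)$ above, the partial sums
$\epsilon_1+\cdots+\epsilon_{2i-1}$, for $i=1,\dots,5$, are $1,1,1,1,-1$,
so $I_{2,0,1,1,0,1}=\{5\}$.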
The {\it descent set} of a permutation $w\in S_{n}$
is $I(w)=\{i\in\{1,\dots,n-1\} \mid w_i> w_{i+1}\}$.
Let $D_{n}(I)$ be the number of permutations in $S_{n}$
with the descent set $I(w)=I$.
\begin{theorem}
\label{th:vol=descent_number}
The volume of the permutohedron $P_n = P_n(x_1,\dots,x_{n})$
is equal to
$$
\mathrm{Vol}\, P_n =
\sum
(-1)^{|I_{c_1,\dots,c_{n}}|}
\, D_{n}(I_{c_1,\dots,c_{n}})\,
\frac{x_1^{c_1}}{c_1!}\cdots \frac{x_{n}^{c_{n}}}{c_{n}!}\,,
$$
where the sum is over sequences of nonnegative integers $c_1,\dots,c_{n}$
such that $c_1+\cdots +c_{n} =n-1$.
\end{theorem}
We can graphically describe the set $I_{c_1,\dots,c_{n}}$, as follows.
Let us construct the lattice path $P$ on $\mathbb{Z}^2$
from $(0,0)$ to $(n-1,n-1)$ with steps of the two types $(0,1)$ ``up'' and
$(1,0)$ ``right'' such that $P$ has exactly $c_i$ up steps
in the $(i-1)$-st column, for $i=1,\dots,n$.
Notice that the $(2i-1)$-th and $2i$-th steps in the path $P$ are either
both above the $x=y$ axis or both below it. The set $I_{c_1,\dots,c_{n}}$
is the set of indices $i$ such that the $(2i-1)$-th and $2i$-th steps in $P$ are
below the $x=y$ axis.
\medskip
\begin{center}
\input{fig9-1.pstex_t}
\end{center}
\begin{example}
We have
$V_2= x_1 - x_2$ and
$V_3=\frac{x_1^2}{2} + x_1 x_2 - 2 x_1 x_3 - 2 \frac {x_2^2}{2}
+x_2 x_3 + \frac{x_3^2}{2}$.
The following figure shows the paths corresponding to all terms
in $V_2$ and $V_3$.
\begin{center}
\input{fig10-1.pstex_t}
\end{center}
For example, $I_{1,0,1} = \{2\}$ and there are 2 permutations $132, 231\in S_3$
with the descent set $\{2\}$. Thus the coefficient of $x_1 x_3$ in $V_3$
is $-2$.
\end{example}
\medskip
For a polynomial $f(\lambda_1,\dots,\lambda_{n})$, define its
{\it divided symmetrization} by
$$
\left<f\right>:=
\sum_{w\in S_{n}} w\left(\frac{f(\lambda_1,\dots,\lambda_{n})}
{(\lambda_1-\lambda_2)(\lambda_2-\lambda_3)\cdots (\lambda_{n-1}-\lambda_{n})}
\right),
$$
where the symmetric group $S_{n}$ acts by permuting the variables $\lambda_i$.
\begin{proposition}
\label{prop:<f>=const}
Let $f$ be a polynomial of degree $n-1$ in the variables
$\lambda_1,\dots,\lambda_{n}$. Then its divided symmetrization
$\left<f\right>$ is a constant.
If $\deg f<n-1$, then $\left<f\right>=0$.
\end{proposition}
\begin{proof}
We can write $\left<f\right> = g/\Delta$, where
$\Delta=\prod_{i<j}(\lambda_i-\lambda_j)$ is the common denominator
of all terms in $\left<f\right>$ and $g$ is a certain polynomial of degree
at most $\deg \Delta = \binom{n}{2}$.
Since $\left<f\right>$ is a symmetric rational function, $g$ should
be an anti-symmetric polynomial and thus it is divisible by $\Delta$.
Since $\deg g\leq\deg\Delta$, their quotient is a constant.
If $\deg f<n-1$, then $\deg g< \deg \Delta$ and, thus, $g=0$.
\end{proof}
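For example, for $n=2$ we get
$\left<\lambda_1\right> = \frac{\lambda_1}{\lambda_1-\lambda_2}
+\frac{\lambda_2}{\lambda_2-\lambda_1}=1$
and, similarly, $\left<\lambda_2\right>=-1$, in agreement with
Proposition~\ref{prop:<lambda>} below, since $I_{1,0}=\emptyset$, $I_{0,1}=\{1\}$,
and $D_2(\emptyset)=D_2(\{1\})=1$.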
\begin{proposition}
\label{prop:<lambda>}
We have
$\left<\lambda_1^{c_1}\cdots \lambda_{n}^{c_{n}}\right>=
(-1)^{|I|} D_{n}(I)$,
where $c_1,\dots,c_{n}$ are nonnegative integers
with $c_1+\cdots+c_{n} = n-1$ and $I= I_{c_1,\dots,c_{n}}$.
\end{proposition}
\begin{proof}
We can expand the expression $\frac{1}{\lambda_i - \lambda_j}$, for $i<j$,
as the Laurent series that converges in the region
$\lambda_1>\cdots > \lambda_{n} > 0$:
$$
\frac{1}{\lambda_i - \lambda_j} = \lambda_i^{-1} \frac {1}{1-\lambda_j/\lambda_i}
= \sum_{k\geq 0} \lambda_i^{-k-1}\, \lambda_j^k.
$$
Let us use this formula to expand each term
$w\left(\frac{\lambda_1^{c_1}\cdots \lambda_{n}^{c_{n}}}{(\lambda_1-\lambda_2)
\cdots (\lambda_{n-1}-\lambda_{n})}\right)$
as a Laurent series $f_w$ that converges in this region.
Let $CT_w$ be the constant term of the series $f_w$.
Then, according to Proposition~\ref{prop:<f>=const}, we have
$\left<\lambda_1^{c_1}\cdots \lambda_{n}^{c_{n}}\right> =
\sum_{w\in S_{n}} CT_w$.
Equivalently, the number $CT_w$ is the constant term in
the series $w^{-1} (f_w)$, i.e.,
the Laurent series obtained by the expansion of each term
$\frac{1}{\lambda_i - \lambda_{i+1}}$ in
$\frac{\lambda_1^{c_1}\cdots \lambda_{n}^{c_{n}}}{(\lambda_1-\lambda_2)
\cdots (\lambda_{n-1}-\lambda_{n})}$ as
$$
\frac{1}{\lambda_i - \lambda_{i+1}} = \left\{
\begin{array}{cl}
\displaystyle
\sum_{k\geq 0} \lambda_i^{-k-1}\, \lambda_{i+1}^k, & \textrm{for }w(i)<w(i+1), \\[.2in]
\displaystyle
-\sum_{k\geq 0} \lambda_i^{k}\, \lambda_{i+1}^{-k-1}, & \textrm{for }w(i)>w(i+1).
\end{array}
\right.
$$
Let $I=I(w)$ be the descent set of the permutation $w$.
Then $CT_w$ equals $(-1)^{|I|}$ times the number of
nonnegative integer sequences $(k_1,\dots,k_{n-1})$
such that we have $(c_1,\dots,c_{n}) = v_1+\dots + v_{n-1}$,
where
$$
v_i = \left\{ \begin{array}{cl}
(k_i + 1)\,e_i - k_i \, e_{i+1}, & \textrm{for }i \not\in I
\ \ (w_i<w_{i+1}), \\[.1in]
-k_i\,e_i + (k_i +1) \, e_{i+1}, & \textrm{for }i \in I
\ \ (w_i>w_{i+1}),
\end{array}
\right.
$$
and the $e_i$ are the coordinate vectors.
Notice that, for a fixed permutation $w$, there is at most one sequence
$(k_1,\dots,k_{n-1})$
that produces $(c_1,\dots,c_{n})$, as above. Thus $CT_w\in\{1,-1,0\}$.
Let $P$ be the lattice path from $(0,0)$ to $(n-1,n-1)$ constructed from
the sequence $(c_1,\dots,c_{n})$ as shown after
Theorem~\ref{th:vol=descent_number}.
In other words, $P$ is the continuous piecewise-linear path obtained by joining
the points
$$
(0,0) - (0,c_1) - (1,c_1) - (1, c_1 + c_2) - (2, c_1 + c_2) -
(2,c_1+c_2+ c_3) - \cdots - (n-1,n-1)
$$
by the straight lines.
Let $r$ be the maximal index such that $w(1)< w(2)<\cdots < w(r)$.
Then we have $c_1 = k_1+1$, $c_2 = k_2 + 1 - k_1$, \dots, $c_{r-1} = k_{r-1} + 1
- k_{r-2}$, $c_r = - k_r - k_{r-1}$.
Thus $k_i = c_1 +\cdots + c_i -i\geq 0 $, for $i = 1,\dots,r-2$,
$k_{r-1} = c_1+ \cdots + c_{r-1} - (r-1) = 0$ and $k_r = c_r = 0$.
This means that the path $P$ stays weakly above
the $x=y$ axis as it goes from the point $(0,0)$ to the point $(r-1,r-1)$,
then it passes through the point $(r-1,r-1)$, and goes
strictly below the $x=y$ axis (if $r< n+1$).
For $i=1,\dots, r-1$, the number $k_i$ is exactly the distance between
the lowest point of the path $P$ on the line $x=i$ and the point $(i,i)$.
Let $r'$ be the maximal index such that $w(r)>w(r+1)>\cdots > w(r')$.
Then we have $c_r = - k_r=0$, $c_{r+1} = k_r + 1 - k_{r+1}$, \dots,
$c_{r'-1} = k_{r'-2} + 1 - k_{r'-1}$, and
$c_{r'} = (k_{r'-1} + 1) + (k_{r'} + 1)$. Thus
$k_i = i-r - c_r - \cdots - c_i = i-1 - c_1-\cdots - c_i\geq 0$,
for $i=r,\dots,r'-1$,
and $k_{r'} = c_1 + \cdots + c_{r'} - r' \geq 0$.
This means that the path $P$ stays weakly below the $x=y$ axis as it goes
from the point $(r-1,r-1)$ to the point $(r'-1,r'-1)$,
then it passes through the point $(r'-1,r'-1)$ and goes strictly above the
$x=y$ axis (if $r'<n+1$).
For $i=r,\dots, r'-1$, the number $k_i$ is the distance between
the highest point of the path $P$ on the line $x=i-1$ and the point $(i-1,i-1)$.
We can continue working with maximal monotone intervals in the permutation
$w$ in this fashion.
Let $r''$ be the maximal index such that $w(r')< \cdots < w(r'')$.
Similarly to the above argument, we obtain that the path $P$ stays
weakly above the $x=y$ axis until it crosses it at the point $(r''-1,r''-1)$,
etc.
We deduce that the indices $r,r',r'',\dots$ characterizing
the descent set of $w$ correspond to the points where the path $P$ crosses
the $x=y$ axis. Thus the descent set of $w$ is uniquely reconstructed
from the sequence $(c_1,\dots,c_n)$ as $I= I_{c_1,\dots,c_{n}}$.
Moreover, for any permutation $w$ with such descent set,
the nonnegative integer sequence $(k_1,\dots,k_{n-1})$
is uniquely reconstructed
from the sequence $(c_1,\dots,c_n)$ as
$$
k_i = \left\{
\begin{array}{ll}
\min \{y-i\mid (i,y)\in P\} & \textrm{if } i\not\in I,\\
\min \{i-1-y\mid (i-1,y)\in P\} & \textrm{if } i\in I,
\end{array}
\right.
$$
and, thus, $CT_w = (-1)^{|I|}$.
This shows that only permutations with the descent set $I=I_{c_1,\dots,c_{n}}$
make a contribution
to $\left<\lambda_1^{c_1}\cdots \lambda_{n}^{c_{n}}\right>$, and the
contribution of any such permutation is $(-1)^{|I|}$.
This finishes the proof.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{th:vol=descent_number}]
According to Theorem~\ref{th:f1}, the volume of the permutohedron
can be written as the divided symmetrization of the power of
a linear form:
$$
V_n = \frac {1}{(n-1)!}
\left<(x_1\lambda_1 + \cdots + x_{n}\lambda_{n})^{n-1}\right>
=\sum_{c_1+\cdots + c_{n} = n-1}
\left<\lambda_1^{c_1}\cdots \lambda_{n}^{c_n}\right>\,
\frac{x_1^{c_1}}{c_1!}\cdots \frac{x_{n}^{c_{n}}}{c_{n}!}\,.
$$
Now apply Proposition~\ref{prop:<lambda>}.
\end{proof}
\section{Weight polytopes}
\label{sec:weight-polytopes}
Theorem~\ref{th:f1} can be extended to any Weyl group, as follows.
Let $\Phi$ be a root system of rank $r$. Let $\Lambda$ be the
associated integer {\it weight lattice} and $\Lambda_\mathbb{R} = \Lambda\otimes\mathbb{R}$
be the weight space. The roots in $\Phi$ span the root lattice
$L\subseteq \Lambda$.
The associated {\it Weyl group} $W$
acts on the weight space $\Lambda_\mathbb{R}$.
Let $(x,y)$ be a nondegenerate $W$-invariant inner product on $\Lambda_\mathbb{R}$.
\begin{definition}
\label{def:weight_polytope}
For $x\in \Lambda_\mathbb{R}$, we can define the {\it weight polytope}
$P_W(x)$ as the convex hull of a Weyl group orbit:
$$
P_W(x) :=\mathrm{ConvexHull}(w(x)\mid w\in W)\subset \Lambda_\mathbb{R}.
$$
For the Lie type $A_r$, the weight polytope
$P_W(x)$ is the permutohedron $P_{r+1}(x)$.
\end{definition}
Let us fix a choice of {\it simple roots} $\alpha_1,\dots,\alpha_r$ in $\Phi$.
Let $\mathrm{Vol}\,$ be the volume form on $\Lambda_\mathbb{R}$ normalized so that
the volume of the parallelepiped generated by the simple roots
$\alpha_i$ is 1.
Recall that a weight $\lambda\in\Lambda_\mathbb{R}$ is called
{\it regular} if $(\lambda,\alpha)\ne 0$ for any root $\alpha\in\Phi$.
A weight $\lambda$ is called {\it dominant} if $(\lambda,\alpha_i)\geq 0$, for
$i=1,\dots,r$.
\begin{theorem}
\label{th:f1-W}
Let $\lambda\in \Lambda_\mathbb{R}$ be a regular weight and let $x\in\Lambda_\mathbb{R}$.
The volume of the weight polytope is equal to
$$
\mathrm{Vol}\, P_W(x) = \frac{1}{r!}\sum_{w\in W} \frac{(\lambda,w(x))^r}
{(\lambda,w(\alpha_1))\cdots (\lambda,w(\alpha_r))}.
$$
\end{theorem}
For type $A_r$, $W=S_{r+1}$ and
Theorem~\ref{th:f1-W} specializes to Theorem~\ref{th:f1}.
Let $G$ be a Lie group with the root system $\Phi$.
For a dominant weight $\lambda$, let $V_\lambda$ be the irreducible
representation of $G$ with the highest weight $\lambda$.
The character of $V_\lambda$ is a certain nonnegative linear combination
$ch(V_\lambda)$ of the formal exponents $e^\mu$, $\mu\in\Lambda$.
(These formal exponents are subject to the relation
$e^\mu\cdot e^\nu=e^{\mu+\nu}$.)
The weights that occur in the representation $V_\lambda$ with nonzero
multiplicities, i.e., the weights $\mu$ such that $e^\mu$ has a nonzero
coefficient in $ch(V_\lambda)$, are exactly the points of the weight polytope
$P_W(\lambda)$ in the lattice $L+\lambda$ (the root lattice shifted by
$\lambda$).
Let
$$
S(P_W(\lambda)):=\sum_{\mu\in P_W(\lambda)\cap(L+\lambda)}e^\mu
$$
be the sum of formal exponents over these lattice points.
In other words, $S(P_W(\lambda))$ is obtained from the character
$ch(V_\lambda)$ by replacing all nonzero coefficients with 1.
For example, in the type $A$, the expression $S(P_n(\lambda))$
is obtained from the Schur polynomial by replacing all nonzero coefficients
of monomials with 1.
We have the following identity in the field of rational expressions
in the formal exponents.
\begin{theorem}
\label{th:sum-e-W}
For a dominant weight $\lambda$, the sum of exponents over lattice
points of the weight polytope $P_W(\lambda)$ equals
$$
S(P_W(\lambda)) = \sum_{w\in W}
\frac{e^{w(\lambda)}}
{(1-e^{-w(\alpha_1)})\cdots (1-e^{-w(\alpha_r)})}.
$$
\end{theorem}
Notice that if we replace the product over simple roots $\alpha_i$
in the right-hand side of Theorem~\ref{th:sum-e-W} by
a similar product over {\it all} positive roots, we obtain exactly
Weyl's character formula for $ch(V_\lambda)$.
Theorems~\ref{th:f1}, \ref{th:f1-W}, and \ref{th:sum-e-W} follow from
Brion's formula~\cite{Bri} on summation over lattice points in a rational
polytope.
In Appendix~\ref{sec:appendix-lattice-points}, we give a brief overview
of this result and related results of Khovanskii-Pukhlikov~\cite{KP1, KP2}
and Brion-Vergne~\cite{BV1, BV2}. The following proof assumes the reader's
familiarity with the Appendix.
\begin{proof}[Proof of Theorems~\ref{th:f1}, \ref{th:f1-W}, \ref{th:sum-e-W}]
Let us identify the lattice $L+\lambda$ embedded into $\Lambda_\mathbb{R}$
with $\mathbb{Z}^r\subset \mathbb{R}^r$.
Then (for a regular weight $\lambda$) the polytope $P_W(\lambda)$
is a Delzant polytope, i.e., for any vertex of $P_W(\lambda)$,
the cone at this vertex is generated by an integer basis of the lattice $\mathbb{Z}^r$;
see Appendix~\ref{sec:appendix-lattice-points}.
Indeed, the generators of the cone at the vertex $\lambda$
are $-\alpha_1,\dots,-\alpha_r$. Thus the generators
of the cone at the vertex $w(\lambda)$, for $w\in W$,
are $g_{i, w(\lambda)} = -w(\alpha_i)$, $i=1,\dots,r$.
Now Theorem~\ref{th:sum-e-W} is obtained from
Brion's formula given in Theorem~\ref{th:S-any-polytope}(2).
As we mention in the proof of
Theorem~\ref{th:Todd-Euler-Maclaurin}(1), this claim remains
true for non-regular weights $\lambda$ when some of the vertices
$w(\lambda)$ may accidentally merge.
Similarly, Theorems~\ref{th:f1} and \ref{th:f1-W},
are obtained from Theorem~\ref{th:S-any-polytope}(4).
\end{proof}
In a sense, Theorems~\ref{th:f1-W} and \ref{th:f1}
are deduced from Theorem~\ref{th:sum-e-W} in the same way
as Weyl's dimension formula is deduced from Weyl's character
formula, cf.~Appendix~\ref{sec:appendix-lattice-points}.
\section{Dragon marriage condition}
\label{sec:dragon_marriage_condition}
In this section we give a different combinatorial formula
for the volume of the permutohedron.
Let us use the coordinates $y_1,\dots,y_{n}$ related to
$x_1,\dots,x_{n}$ by
$$
\left\{
\begin{array}{l}
y_1 = -x_1\\
y_2 = - x_2 + x_1\\
y_3 = - x_3 + 2 x_2 - x_1 \\
\vdots
\\
y_{n} = - \binom{n-1}{0}\, x_n + \binom{n-1}{1}\, x_{n-1} - \cdots \pm
\binom{n-1}{n-1}\,x_1
\end{array}
\right.
$$
Write $V_n = \mathrm{Vol}\, P_n(x_1,\dots,x_{n})$ as a polynomial
in the variables $y_1,\dots,y_{n}$.
\begin{theorem}
\label{th:second-formula}
We have
$$
\mathrm{Vol}\, P_n = \frac{1}{(n-1)!}\sum_{(J_1,\dots,J_{n-1})}
y_{|J_1|}\cdots y_{|J_{n-1}|},
$$
where the sum is over ordered collections of subsets
$J_1,\dots,J_{n-1}\subseteq [n]$
such that, for any distinct $i_1,\dots,i_k$, we have
$|J_{i_1}\cup \cdots \cup J_{i_k}| \geq k+1$.
\end{theorem}
We will extend and prove Theorem~\ref{th:second-formula} for a larger class of
polytopes called generalized permutohedra; see
Theorem~\ref{th:second-formula-generalized}.
Theorem~\ref{th:second-formula}
implies that $(n-1)!\,V_n$ is a polynomial in $y_2,\dots,y_{n}$
with {\it positive} integer coefficients.
\begin{example}
We have
$V_2 = \mathrm{Vol}\,([(x_1,x_2),(x_2,x_1)]) = x_1 - x_2 = y_2$
and $2V_3 =
x_1^2 + 2x_1 x_2 - 4 x_1 x_3 - 2 x_2^2 + 2x_2 x_3 + x_3^2
= 6\,y_2^2 + 6\, y_2\, y_3 + y_3^2$.
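Indeed, for $n=3$ the collections $(J_1,J_2)$ allowed in
Theorem~\ref{th:second-formula} are as follows: the pair $([3],[3])$ contributes
the term $y_3^2$; the $6$ ordered pairs in which one subset is $[3]$ and the other
is one of the three $2$-element subsets contribute $6\,y_2\,y_3$; and the $6$ ordered
pairs of two distinct $2$-element subsets contribute $6\,y_2^2$.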
\end{example}
\begin{remark} The condition on subsets $J_1,\dots,J_{n-1}$
in Theorem~\ref{th:second-formula} is similar to the condition in
Hall's marriage theorem~\cite{Hal}. One just needs to replace the inequality
$\geq k+1$ with $\geq k$ to obtain Hall's marriage condition.
\end{remark}
Let us give an analogue of the marriage problem and Hall's theorem.
\medskip
\noindent
{\large$\mathfrak{Dragon\ marriage\ problem.}$}
{\it
There are $n$ brides, $n-1$ grooms living in a medieval town,
and 1 dragon who likes to visit the town occasionally.
Suppose we know all possible pairs of brides and grooms
who do not mind marrying each other.
A dragon comes to the town and takes one of the brides.
When will it be possible to match the remaining brides and grooms
no matter what the choice of the dragon was?
}
\begin{proposition}
\label{prop:dragon-marriage}
Let $J_1,\dots,J_{n-1}\subseteq[n]$.
The following three conditions are equivalent:
\begin{enumerate}
\item
For any distinct $i_1,\dots,i_k$, we have
$|J_{i_1}\cup \cdots \cup J_{i_k}| \geq k+1$.
\item For any $j\in[n]$, there is a
system of distinct representatives
in $J_1,\dots,J_{n-1}$ that avoids $j$.
(This is a reformulation of the dragon marriage problem.)
\item There is a system of $2$-element representatives
$\{a_i,b_i\}\subseteq J_i$, $i=1,\dots,n-1$,
such that $(a_1,b_1),\dots,(a_{n-1},b_{n-1})$ are edges
of a spanning tree in $K_{n}$.
\end{enumerate}
\end{proposition}
\begin{proof} It is clear that (2) implies (1).
On the other hand, (1) implies (2) by the usual Hall's marriage theorem
applied to the sets $J_1\setminus\{j\},\dots,J_{n-1}\setminus\{j\}$.
We leave it as an exercise for the reader to check that either of these
two conditions is equivalent to (3).
\end{proof}
We will refer to the three equivalent conditions
in Proposition~\ref{prop:dragon-marriage}
as the {\it dragon marriage condition.}
\begin{example}
\label{exam:dragon_marriage}
Let $M_n$ be the number of sequences of subsets
$J_1,\dots,J_{n-1}\subseteq[n]$ satisfying
the dragon marriage condition. Equivalently, $M_n$ is the
number of bipartite subgraphs $G\subseteq K_{n-1,n}$ such that
for any vertex $j$ in the second part (of size $n$), there is a matching in $G$
that covers all $n-1$ vertices of the first part and avoids $j$.
According to Theorem~\ref{th:second-formula} with $y_1=\dots=y_n=1$,
we have
$M_n = (n-1)!\,\mathrm{Vol}\, P_n(-1,-2,-4,\dots,-2^{n-1})$.
Let us calculate a few numbers $M_n$ using
Theorem~\ref{th:f1}.
\smallskip
\begin{center}
\begin{tabular}{|c||l|l|l|l|l|l|l|}
\hline
$n$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\
\hline
$M_n$ & 1 & 13 & 1009 & 354161 & 496376001 & 2632501072321 &
52080136110870785 \\
\hline
\end{tabular}
\end{center}
\smallskip
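For instance, $M_3=13$ agrees with substituting $y_2=y_3=1$ into
$2V_3=6y_2^2+6y_2y_3+y_3^2$ computed in the example after
Theorem~\ref{th:second-formula}.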
\end{example}
\section{Generalized permutohedra}
\label{sec:generalized_permutohedra}
\begin{definition}
Let us define {\it generalized permutohedra} as deformations
of the usual permutohedron, i.e., as polytopes obtained by moving vertices
of the usual permutohedron so that directions of all edges are preserved
(and some of the edges may accidentally degenerate into a single
see Appendix~\ref{sec:appendix-lattice-points}.
In other words, a generalized permutohedron is the convex hull
of $n!$ points $v_w\in\mathbb{R}^n$ labeled by permutations $w\in S_{n}$
such that, for any $w\in S_{n}$ and any adjacent transposition $s_i=(i,i+1)$,
we have $v_{w} - v_{w\,s_i} = k_{w,i} (e_{w(i)} - e_{w(i+1)})$,
for some nonnegative number $k_{w,i}\in\mathbb{R}_{\geq 0}$, where $e_1,\dots,e_{n}$
are the coordinate vectors in $\mathbb{R}^{n}$,
cf.\ Proposition~\ref{prop:comb_structute_perm}.
\end{definition}
Each generalized permutohedron is obtained by parallel translation
of the facets of a usual permutohedron. Recall that these facets
are given by Rado's theorem (Proposition~\ref{prop:Rado}).
Thus generalized permutohedra are parametrized by collections
$\{z_I\}$ of the $2^{n}-1$ coordinates $z_I$, for nonempty subsets
$I\subseteq[n]$, that belong to a certain
deformation cone $\mathcal{D}_n$.
Each generalized permutohedron has the form
$$
P_n^z(\{z_I\}) = \left\{(t_1,\dots,t_{n})\in \mathbb{R}^{n}\mid
\sum_{i=1}^{n}t_i = z_{[n]},\
\sum_{i\in I} t_i \geq z_I,\textrm{ for subsets } I\right\},
$$
for $\{z_I\}\in\mathcal{D}_n$.
If $z_I = z_J$ whenever $|I|=|J|$, then $P_n(\{z_I\})$ is
a usual permutohedron.
The following figure shows examples of generalized permutohedra:
\begin{center}
\input{fig4-2.pstex_t}
\end{center}
\medskip
According to Theorem~\ref{th:Todd-Euler-Maclaurin}, we have
the following statement.
\begin{proposition} The volume of the generalized permutohedron
$P_n(\{z_I\})$ is a polynomial function of the $z_I$'s defined
on the deformation cone $\mathcal{D}_n$. The number of lattice points
$P_n(\{z_I\})\cap \mathbb{Z}^{n}$ in the generalized permutohedron
is a polynomial function of the $z_I$'s defined
on the lattice points $\mathcal{D}_n\cap\mathbb{Z}^{2^{n}-1}$ of the deformation
cone.
\end{proposition}
Let us call the multivariate polynomial that expresses the number of
lattice points in $P_n(\{z_I\})$ the {\it generalized Ehrhart
polynomial} of the permutohedron.
Let us give a different construction for a class of generalized permutohedra.
Let $\Delta_{[n]} = \textrm{ConvexHull}(e_1,\dots,e_{n})$ be
the standard coordinate simplex in $\mathbb{R}^{n}$.
For a subset $I\subseteq [n]$, let
$$
\Delta_I = \mathrm{ConvexHull}(e_i\mid i\in I)
$$
denote the corresponding face of the coordinate simplex $\Delta_{[n]}$.
Let $\{y_I\}$ be a collection of nonnegative parameters $y_I\geq 0$,
for all nonempty subsets $I\subset [n]$.
Let us define the polytope
$P_n^y(\{y_I\})$ as the Minkowski sum of the simplices $\Delta_I$
scaled by the factors $y_I$:
$$
P_n^y(\{y_I\}) := \sum_{I\subseteq[n]} y_I\cdot \Delta_I.
$$
\begin{proposition}
\label{prop:Py=Pz}
Let $\{y_I\}$ be a collection of
nonnegative real numbers for all nonempty subsets $I\subseteq [n]$, and let
$\{z_I\}$ be the collection of numbers given by
$$
z_I = \sum_{J\subseteq I} y_J,\quad \textrm{for all nonempty $I\subseteq[n]$}.
$$
Then $P_n^y(\{y_I\}) = P_n^z(\{z_I\})$.
\end{proposition}
\begin{proof} Let us first pick a nonempty subset $I_0\subseteq[n]$
and set $y_I = \delta(I,I_0)$ (Kronecker's delta). Then
$P_n^y(\{y_I\}) = \Delta_{I_0}$, because the Minkowski sum contains
only one nonzero term. In this case, we have $z_I = 1$, if $I\supseteq I_0$,
and $z_I=0$, otherwise. The inequalities describing the polytope
$P_n^z(\{z_I\})$ give the same coordinate simplex $\Delta_{I_0}$.
The general case follows from the fact that the Minkowski sum
of two generalized permutohedra $P_n^z(\{z_I\})$ and
$P_n^z(\{z_I'\})$, for $\{z_I\}, \{z_I'\} \in\mathcal{D}_n$,
is exactly the generalized permutohedron $P_n^z(\{z_I+z_I'\})$
parametrized by the coordinatewise sum $\{z_I+z_I'\}\in\mathcal{D}_n$.
This fact is immediate from the definition of $P_n^z(\{z_I\})$.
\end{proof}
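For example, for $n=2$, the Minkowski sum
$y_{\{1\}}\Delta_{\{1\}}+y_{\{2\}}\Delta_{\{2\}}+y_{\{1,2\}}\Delta_{\{1,2\}}$
is the segment with endpoints $(y_{\{1\}}+y_{\{1,2\}},\,y_{\{2\}})$ and
$(y_{\{1\}},\,y_{\{2\}}+y_{\{1,2\}})$, which coincides with the set of points
$(t_1,t_2)$ satisfying $t_1+t_2=z_{\{1,2\}}$, $t_1\geq z_{\{1\}}$, $t_2\geq z_{\{2\}}$,
for $z_{\{1\}}=y_{\{1\}}$, $z_{\{2\}}=y_{\{2\}}$, $z_{\{1,2\}}=y_{\{1\}}+y_{\{2\}}+y_{\{1,2\}}$.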
\begin{center}
\input{fig11-1.pstex_t}
\end{center}
\begin{remark} Not every generalized permutohedron $P_n^z(\{z_I\})$
can be written as a Minkowski sum $P_n^y(\{y_I\})$
of the coordinate simplices.
For example, for $n=3$, the polytope $P_3^y(\{y_I\})$ (usually a hexagon)
is the Minkowski sum of the coordinate triangle $\Delta_{[3]}$
and 3 line intervals $\Delta_{\{1,2\}}$, $\Delta_{\{1,3\}}$, $\Delta_{\{2,3\}}$
parallel to its edges (scaled by some factors); see the
figure above. For this hexagon we always have $|AB|\leq |DE|$.
On the other hand, any hexagon with edges parallel to the edges
of $\Delta_{[3]}$
is a certain generalized permutohedron $P_3^z(\{z_I\})$.
The points $\{z_I\}$ of the deformation cone $\mathcal{D}_n$ that can be
expressed as $z_I=\sum_{J\subseteq I} y_J$ through nonnegative
parameters $y_I$ form a certain region $\mathcal{D}_n'$ of top dimension in
the deformation cone $\mathcal{D}_n$. Since the volume and generalized
Ehrhart polynomial are polynomial functions on $\mathcal{D}_n$, it is
enough to calculate them for the class of polytopes $P_n^y(\{y_I\})$
and then extend from $\mathcal{D}_n'$ to $\mathcal{D}_n$ by the polynomiality.
\end{remark}
In what follows we will refer to the polytopes $P_n^y(\{y_I\})$
as generalized permutohedra, keeping in mind that they form
a special class of polytopes $P_n^z(\{z_I\})$.
\section{Nested complex}
\label{sec:nested}
The combinatorial structure of the generalized permutohedron
$P_n^y=P_n^y(\{y_I\})$ depends only on the set $B\subset 2^{[n]}$ of nonempty
subsets $I\subseteq[n]$ such that $y_I>0$. In this section, we describe the
combinatorial structure of $P_n^y$ when the set $B$ satisfies
some additional conditions.
\begin{definition}
\label{def:building_set}
Let us say that a set $B$ of nonempty subsets in $S$
is a {\it building set\/} on $S$ if it satisfies the
conditions:
\begin{itemize}
\item[(B1)]
If $I,J\in B$ and $I\cap J\ne\emptyset$,
then $I\cup J\in B$.
\item[(B2)]
$B$ contains all singletons $\{i\}$, for $i\in S$.
\end{itemize}
\end{definition}
Condition (B1) is a certain ``connectivity condition'' for building sets.
Note that condition (B2) does not impose any additional restrictions
on the structure of generalized permutohedra and was added only
for convenience.
Indeed, the Minkowski sum of a polytope with $\Delta_{\{i\}}$,
which is a single point, is just a parallel translation of
the polytope.
Let $B_{\max}\subset B$ be the subset of maximal by inclusion elements in $B$.
Let us say that a building set $B$ is {\it connected\/} if it has a unique
maximal by inclusion element $S$. According to (B1) all elements of $B_{\max}$
are pairwise disjoint. Thus each building set $B$ is a union
of pairwise disjoint connected building sets, called the {\it connected
components\/} of $B$,
that correspond to elements of $B_{\max}$.
For a subset $C\subset S$, define the {\it induced building set\/}
as $B|_C=\{I\in B\mid I\subseteq C\}$.
\begin{example}
\label{example:building_set_G}
Let $\Gamma$ be a graph on the set of vertices $S$.
Define the {\it graphical building\/} $B(\Gamma)$ as the set of all nonempty
subsets $C\subseteq S$ of vertices such that the induced graph $\Gamma|_C$
is connected. Clearly, it satisfies conditions (B1) and (B2).
The building set $B(\Gamma)$ is connected if and only if the graph $\Gamma$
is connected. The connected components of $B(\Gamma)$ correspond to
connected components of the graph $\Gamma$. The induced building set
is the building set for the induced graph: $B(\Gamma)|_C = B(\Gamma|_C)$.
\end{example}
\begin{definition}
\label{def:nested_family}
A subset $N$ in the building set $B$ is called a {\it nested set\/}
if it satisfies the following conditions:
\begin{enumerate}
\item[(N1)] For any $I,J\in N$, we have either $I\subseteq J$, or
$J\subseteq I$, or $I$ and $J$ are disjoint.
\item[(N2)] For any collection of $k\geq 2$ disjoint subsets
$J_1,\dots,J_k\in N$, their union $J_1\cup \cdots \cup J_k$ is not in $B$.
\item[(N3)] $N$ contains all elements of $B_{\max}$.
\end{enumerate}
The {\it nested complex\/} $\mathcal{N}(B)$ is defined as the poset of all nested
families in $B$ ordered by inclusion.
\end{definition}
Clearly, the collection of all nested sets in $B$ (with elements of $B_{\max}$ removed)
is a simplicial complex.
\begin{theorem}
\label{th:nested_complex}
Let us assume that the set $B$ associated with a generalized
permutohedron $P_n^y$ is a building set on $[n]$.
Then the poset of faces of $P_n^y$ ordered by reverse inclusion
is isomorphic to the nested complex $\mathcal{N}(B)$.
\end{theorem}
This claim was independently discovered by E.~M.~Feichtner and
B.~Sturmfels~\cite[Theorem~3.14]{FS}. They also
defined objects similar to $B$-forests discussed below;
see~\cite[Proposition~3.17]{FS}.
\begin{proof} Each face of an arbitrary polytope can be described
as the set of points of the polytope that minimize a linear function $f$.
Moreover, the face of a Minkowski sum $Q_1+\dots+Q_m$ that minimizes $f$
is exactly the Minkowski sum of the faces of $Q_i$'s that minimize $f$.
Let us pick a linear function $f(t_1,\dots,t_n)=a_1 t_1+ \cdots + a_nt_n$ on
$\mathbb{R}^n$. It gives an ordered set partition of $[n]$ into a disjoint
union of nonempty blocks $[n]=A_1\cup \cdots \cup A_s$ such that $a_i=a_j$,
whenever $i$ and $j$ are in the same block $A_k$, and $a_i<a_j$,
whenever $i\in A_k$ and $j\in A_l$, for $k<l$.
The face of a coordinate simplex $\Delta_I$ that minimizes the linear
function $f$ is the simplex $\Delta_{\widehat{I}}$,
where $\widehat{I}:= I\cap A_{j(I)}$ and $j=j(I)$ is
the minimal index such that the intersection $I\cap A_j$ is nonempty.
We deduce that the face of $P_n^y$ minimizing $f$
is the Minkowski sum $\sum_{I\in B} y_I \,\Delta_{\widehat{I}}$.
We always have $j(I)\geq j(J)$, for $I\subset J$.
Let $N\subseteq B$
be the collection of elements $I\in B$ such that
$j(I)\gneq j(J)$, for any $J\supsetneq I$, $J\in B$.
We can also recursively construct the subset $N\subseteq B$, as follows.
First, all maximal by inclusion elements of $B$ should be in $N$.
According to (B1), all other elements of $B$ should belong to one
of the maximal elements $I_m$.
For each maximal element $I_{m}\in B$, all elements $I\subsetneq I_{m}$
such that $j(I)=j(I_{m})$, i.e., the elements $I$ that have
a nonempty intersection with $\widehat{I}_m$, do not belong to $N$.
The remaining elements $I\subsetneq I_m$ are exactly the elements
of the induced building set
$B|_{I_m\setminus\widehat{I}_m}$.
Let us repeat the above procedure for each of the induced building sets.
In other words, find all maximal by inclusion elements $I_{m'}$ in
$B|_{I_m\setminus\widehat{I}_m}$.
These maximal elements should be in $N$. Then, for each maximal element
$I_{m'}$, construct the induced building set
$B|_{I_{m'}\setminus \widehat{I}_{m'}}$,
etc. Let us keep on doing this branching
procedure until we arrive to building sets that consist of singletons,
all of which should be in $N$.
It follows from this branching construction that $N$
is a nested set in $B$. It is immediate that $N$ satisfies
conditions (N1) and (N3). If $J_1,\dots,J_k\in N$ are disjoint
subsets and $J_1\cup \cdots \cup J_k\in B$, $k\geq 2$,
then the recursive construction would have included
$J_1\cup \cdots \cup J_k$ in $N$, and then the $J_i$ could not all belong to $N$,
which is impossible. This implies condition (N2).
It is also clear that, given $N$,
we can uniquely reconstruct the subset $\widehat{I} \subseteq I$,
for each $I\in B$.
Indeed, find the minimal by inclusion element $J\in N$ such that
$J\supseteq I$. Then $\widehat{J} = J\setminus
\bigcup_{K\subsetneq J, K\in N}K$ and $\widehat{I}$ is the
intersection of the last set with $I$.
Thus the nested set $N$ uniquely determines the face
$\sum_{I\in B} y_I\, \Delta_{\widehat{I}}$ of $P_n^y$ that minimizes $f$.
Let us show that, for any nested set $N\in\mathcal{N}(B)$, there exists
a face of $P_n^y$ associated with $N$. Indeed,
let $A_I= I\setminus \bigcup_{J\subsetneq I,J\in N} J$,
for any $I\in N$. Then $\bigcup_{I\in N} A_I$ is a disjoint decomposition
of $[n]$ into nonempty blocks. Let us pick any linear order
$A_1<\dots<A_s$ on the blocks $A_I$ such that $A_I<A_J$, for $I\subsetneq J$,
and any linear function $f$ on $\mathbb{R}^n$ that gives this ordered set partition,
for example, $f(t_1,\dots,t_n) = \sum_{k=1}^s \sum_{j\in A_k} k\, t_j$.
Then the function $f$ is minimized on a certain face $F_N$ of $P_n^y$, and
if we apply the above procedure to $F_N$ we will recover the nested set
$N$. We also see from this construction that the face $F_N$
contains the face $F_{N'}$ if and only if $N\subseteq N'$.
\end{proof}
We can express the generalized permutohedron
$P_n^y(\{y_I\})$ as $P_n^z(\{z_I\})$,
where $z_I = \sum_{J\subseteq I} y_J$; see
Section~\ref{sec:generalized_permutohedra}.
Let us give an explicit description of its faces.
\begin{proposition}
\label{prop:faces_F_N}
As before, let us assume that $B$ is a building set.
The face $P_N$ of $P_n^y(\{y_I\}) = P_n^z(\{z_I\})$ associated
with a nested set $N\in \mathcal{N}(B)$ is given by
$$
P_N = \{(t_1,\dots,t_n)\in\mathbb{R}^n\mid \sum_{i\in I} t_i = z_I,\textrm{ for }
I\in N; \ \sum_{i\in J} t_i \geq z_J,\textrm{ for }J\in B\}.
$$
The dimension of the face $P_N$ equals $n-|N|$.
In particular, the dimension of $P_n^y(\{y_I\})$ is $n-|B_{\max}|$.
\end{proposition}
\begin{proof}
According to the proof of Theorem~\ref{th:nested_complex},
for a nested set $N\in\mathcal{N}(B)$, we have the disjoint
decomposition $[n]=\bigcup_{I\in N} A_I$ into nonempty blocks, and
the corresponding face of $P_n^y$ is given by
$$
P_N = \sum_{I\in N,\, J\in B,\, J\cap A_I \ne \emptyset} y_J\,
\Delta_{J\cap A_I}.
$$
This Minkowski sum involves the terms $\Delta_{A_I}$, among others.
Thus $\dim P_N \geq \dim (\sum_{I\in N} \Delta_{A_I}) = n-|N|$.
It also follows from the construction that $J\cap A_I\ne\emptyset$
implies that $J\subseteq I$. Thus we have the equality
$\sum_{i\in I} t_i = z_I$, for $I\in N$ and any point
$(t_1,\dots,t_n)\in P_N$. It follows that the codimension of $P_N$
in $\mathbb{R}^n$ is at least $|N|$. Together with the inequality for
the dimension, this implies that $\dim P_N = n-|N|$ and the face
$P_N$ is described by the above $|N|$ linear equations, as needed.
\end{proof}
Theorem~\ref{th:nested_complex} implies that vertices of $P_n^y$ are
in a bijective correspondence with maximal by inclusion elements
of the nested complex $\mathcal{N}(B)$. We will call these elements
{\it maximal nested families.}
The following proposition gives their description.
\begin{proposition}
\label{prop:max_nested_bijection}
A nested set $N\in \mathcal{N}(B)$ is maximal
if and only if, for each $I\in N$, we have
$|A_I| = 1$, where $A_I = I\setminus\bigcup_{J\subsetneq I, J\in N}J$.
For a maximal nested set $N$, the map $I\mapsto i_I$, where $\{i_I\}=A_I$,
is a bijection between $N$ and $[n]$.
\end{proposition}
\begin{proof}
According to the proof of Theorem~\ref{th:nested_complex} and
Proposition~\ref{prop:faces_F_N}, a nested set $N\in \mathcal{N}(B)$ is maximal
(and $F_N$ is a point) if and only if $\dim(\sum_{I\in N}\Delta_{A_I}) =
$\sum_{I\in N} (|A_I|-1) = 0$, i.e., all $A_I$ should be one-element sets. The
map $I\mapsto i_I$ is clearly an injection. On the other hand, for
any $i\in [n]$ and the minimal by inclusion element $I$ of $N$ that
contains $i$, we have $I\mapsto i$.
\end{proof}
For a maximal nested set $N\in\mathcal{N}(B)$, let us partially order
the set $[n]$ by setting $i\geq_N j$ whenever $I\supseteq J$, where $I,J\in N$
are the elements corresponding to $i$ and $j$ under the bijection of
Proposition~\ref{prop:max_nested_bijection}.
The Hasse diagram of the order ``$\geq_N$'' is a rooted forest,
i.e., a forest with a chosen root in each connected component
and edges directed away from the roots.
The set of such forests can be described, as follows.
For two nodes $i$ and $j$ in a rooted forest, we
say that $i$ is a {\it descendant\/} of $j$
if the node $j$ belongs to the shortest chain connecting $i$ and the root of
its connected component. In particular, each node is a descendant
of itself. Let us say that two nodes $i$ and $j$ are
{\it incomparable\/} if neither $i$ is a descendant of $j$, nor
$j$ is a descendant of $i$.
\begin{definition}
\label{def:B_forests}
For a rooted forest $F$ and a node $i$, let $\mathrm{desc}(i,F)$ be the set of all
descendants of the node $i$ in $F$ (including the node $i$ itself).
Define a {\it $B$-forest\/} as a rooted forest $F$ on
the vertex set $[n]$ such that
\begin{enumerate}
\item[(F1)] For any $i\in[n]$, we have $\mathrm{desc}(i,F)\in B$.
\item[(F2)] There are no $k\geq 2$ distinct incomparable nodes $i_1,\dots,i_k$ in $F$
such that $\bigcup_{j=1}^k\mathrm{desc}(i_j,F)\in B$.
\item[(F3)] The sets $\mathrm{desc}(i,F)$, for all roots $i$ of $F$, are
exactly the maximal elements of the building set $B$.
\end{enumerate}
Condition (F3) implies that the number of connected components in a $B$-forest equals
the number of connected components of the building set $B$.
We will call such graphs {\it $B$-trees\/} in the case when $B$ is connected.
\end{definition}
\begin{proposition}
\label{prop:max_nested_trees}
The map $N\mapsto F_N$ is a bijection between maximal nested
families $N\in \mathcal{N}(B)$ and $B$-forests.
\end{proposition}
\begin{proof}
The claim is immediate from the above discussion.
Indeed, note that each maximal nested set $N\in\mathcal{N}(B)$ can be reconstructed
from the forest $F=F_N$ as $N=\{\mathrm{desc}(1,F),\dots,\mathrm{desc}(n,F)\}$.
\end{proof}
Let us describe the vertices of the generalized permutohedron
in coordinates.
\begin{proposition}
\label{prop:vertices-coord}
The vertex $v_F=(t_1,\dots,t_n)$ of the generalized permutohedron
$P_n^y$ associated with a $B$-forest $F$
is given by
$t_i = \sum_{J\in B:\,i\in J\subseteq \mathrm{desc}(i,F)} y_{J}$,
for $i=1,\dots,n$.
\end{proposition}
\begin{proof}
Let $N$ be the maximal nested set associated with the $B$-forest $F$.
By Proposition~\ref{prop:faces_F_N}, the associated vertex
$v_F= (t_1,\dots,t_n)$ is
given by the $n$ linear equations $\sum_{i\in I} t_i= z_I$,
for each $I\in N$.
Notice that, for each $J\in B$, there exists a unique $i\in J$
such that $i\in J\subseteq \mathrm{desc}(i,F)$. Indeed, $\mathrm{desc}(i,F)$
should be the minimal element of $N$ containing $J$.
Thus, for the numbers $t_i$ defined as in
Proposition~\ref{prop:vertices-coord} and any $I\in N$, we have
$$
\sum_{i\in I} t_i = \sum_{i\in I}\ \sum_{J\in B:\,i\in J\subseteq
\mathrm{desc}(i,F)}
y_J = \sum_{J\subseteq I} y_J = z_I,
$$
as needed.
\end{proof}
\begin{proposition}
Let $F$ be a $B$-forest and let $v_F$ be the associated vertex of the generalized
permutohedron $P_n^y$.
For each nonrooted node $i$ of $F$, define the $n$-vector $g_{i,F} = e_i - e_j$,
where the node $j$ is the parent of the node $i$ in $F$.
(Here $e_1,\dots,e_n$ are the coordinate vectors in $\mathbb{R}^n$.)
Then the integer vectors $g_{i,F}$ generate the local cone of the
polytope $P_n^y$ at the vertex $v_F$.
In particular, the generalized permutohedron $P_n^y$ is a simple Delzant polytope;
see Appendix~\ref{sec:appendix-lattice-points}.
\end{proposition}
\begin{proof} Let $N$ be the maximal nested set associated with the forest $F$.
Then each edge of $P_n^y$ incident to $v_F$ corresponds to a nested set
obtained from $N$ by removing an element $I\in N\setminus B_{\max}$.
There are $n-|B_{\max}|$ such edges and
Proposition~\ref{prop:faces_F_N} implies that they are generated by the vectors
$g_{i,F}$.
\end{proof}
Let $f_B(q)$ be the {\it $f$-polynomial} of the generalized permutohedron
$P_n^y$. According to
Theorem~\ref{th:nested_complex} it is given by
$$
f_B(q) = \sum_{i=0}^{n-1} f_i \,q^i = \sum_{N\in\mathcal{N}(B)} q^{n-|N|},
$$
where $f_i$ is the number of $i$-dimensional faces of $P_n^y$.
The recursive construction of nested families
implies the following recurrence relations for the $f$-polynomial.
\begin{theorem}
\label{th:f-recurrence}
The $f$-polynomial $f_B(q)$ is determined by
the following recurrence relations:
\begin{enumerate}
\item If $B$ consists of a single singleton, then $f_B(q)=1$.
\item If $B$ has connected components $B_1,\dots,B_k$, then
$$
f_B(q) = f_{B_1}(q)\cdots f_{B_k}(q).
$$
\item If $B$ is a connected building set on $S$, then
$$
f_{B}(q) = \sum_{C\subsetneq S} q^{|S|-|C|-1} f_{B|_C}(q).
$$
\end{enumerate}
\end{theorem}
\begin{definition}
\label{def:gen_Catalan}
We define the {\it generalized Catalan number},
for a building set $B$, as the number $C(B)=f_B(0)$ of vertices
of the generalized permutohedron $P_n^y$, or, equivalently,
the number of maximal nested families in $\mathcal{N}(B)$,
or, equivalently, the number of $B$-forests.
\end{definition}
The reason for this name will become apparent from examples in the next
section. The generalized Catalan numbers $C(B)$ are determined by the
recurrence relations similar to the ones in Theorem~\ref{th:f-recurrence},
where in (3) we sum only over subsets $C\subset S$ of cardinality $|S|-1$.
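For instance, for the building set of all intervals in $[3]$
(see Section~\ref{ssec:associahedron} below), this recurrence gives
$C(B)=C(B|_{\{1,2\}})+C(B|_{\{1,3\}})+C(B|_{\{2,3\}})=2+1+2=5$,
the number of vertices of the two-dimensional associahedron (a pentagon).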
In the following section we show that the associahedron is a special
case of generalized permutohedra. Thus we can also call this class
of polytopes {\it generalized associahedra}. However this name is already
reserved for a different generalization of the associahedron studied by Chapoton,
Fomin, and Zelevinsky~\cite{CFZ}.
Even though Chapoton-Fomin-Zelevinsky's generalized associahedra are different from
our ``generalized associahedra,'' there are some similarities between these two
families of polytopes. In~\cite{Zel} Zelevinsky gives an alternative
construction for generalized permutohedra associated with building sets which is
parallel to the construction from~\cite{CFZ}. He first constructs the dual fan
for the nested complex $\mathcal{N}(B)$ and then shows that it has a polytopal realization.
A natural question to ask is how to find a common generalization of
Chapoton-Fomin-Zelevinsky's generalized associahedra and generalized
permutohedra discussed in this section.
\section{Examples of generalized permutohedra}
\label{sec:examples_of_gen_perm}
\subsection{Permutohedron}
Let us assume that the building set $B=B_{all}=2^{[n]}\setminus\{\emptyset\}$
is the set of all nonempty
subsets in $[n]$. Then $P_n^y$ is combinatorially equivalent to
the usual permutohedron, say, $P_n(n,n-1,\dots,1)$. This
is the generic case of generalized permutohedra.
In this case, nested families are flags of subsets
$J_1\subsetneq J_2\subsetneq \cdots \subsetneq J_s = [n]$.
Indeed, two disjoint subsets $I$ and $J$ cannot belong to a nested
set because their union $I\cup J$ is in $B$.
The maximal nested families are complete flags consisting of $n$ subsets.
Clearly, there are $n!$ such flags, which correspond to the $n!$ vertices
of the permutohedron. In this case, $B_{all}$-trees are directed chains
of the form $(w_1,w_2),(w_2,w_3),\dots,(w_{n-1},w_n)$, where
$w_1,\dots,w_n$ is a permutation in $S_n$. The generalized
Catalan number in this case is $C(B_{all}) = n!$.
\subsection{Associahedron}
\label{ssec:associahedron}
Assume that the building set $B=B_{int}=\{[i,j]\mid 1\leq i\leq j\leq n\}$
is the set of all continuous intervals in $[n]$.
In this case, the generalized permutohedron is combinatorially equivalent
to the {\it associahedron,} also known as the {\it Stasheff polytope,}
which first appeared in the work of Stasheff~\cite{Sta}.
A nested set $N\subseteq B_{int}$ is a collection of intervals such that,
for any $I, J\in N$, we either have $I\subseteq J$,
$J\subseteq I$, or $I$ and $J$ are disjoint {\it non-adjacent\/} intervals,
i.e., $I\cup J$ is not an interval.
Let us describe $B_{int}$-trees.
Recall that a {\it plane binary tree\/} is a tree such
that each node has at most one left child and at most
one right child. (If a node has only one child, we specify
if it is the left or the right child.) It is well
known that the number of plane binary trees on $n$ unlabeled nodes
is the Catalan number $C_n = \frac{1}{n+1}\binom{2n}{n}$.
For a node $v$ in such a tree, let $L_v$ be the left branch and $R_v$ be the
right branch at this node, both of which are smaller plane binary trees.
If $v$ has no left child, then $L_v$ is the empty
graph, and similarly for $R_v$. For any plane binary tree on $n$ nodes,
there is a unique way to label the nodes by the numbers $1,\dots,n$ so that,
for any node $v$, all labels in $L_v$ are less than the label of $v$ and all
labels in $R_v$ are greater than the label of $v$. Indeed, label each node
$v$ by the number $|L_v|+1$.
We can also describe this labeling using the {\it depth-first search}.
This is the walk on the nodes of a tree that starts at the root
and is determined by the rules: (1) if we are at some node and have never
visited its left child, then go to the left child; (2) otherwise, if we have
never visited its right child, then go to the right child; (3) otherwise, if
the node has a parent, then go to the parent; (4) otherwise stop.
Let us mark the nodes by the integers $1,\dots,n$ in the order
of their appearance in this walk, as follows.
Each time we visit an unmarked node and do {\it not\/} apply rule (1),
we mark this node.
The labeling of nodes defined by any of these equivalent ways
is called the {\it binary search labeling}.
It was described by Knuth in~\cite[6.2.1]{Knu}.
Example~\ref{ex:plane_binary_trees} below shows a plane binary tree
with the binary search labeling.
\begin{proposition} The $B_{int}$-trees are exactly the plane binary
trees on $n$ nodes with the binary search labeling.
\end{proposition}
\begin{proof} Let $N$ be a maximal nested set.
Suppose that the maximal element $[n]\in N$ corresponds to
$i=i_{[n]}$ under the bijection in Proposition~\ref{prop:max_nested_bijection}.
Then $N\setminus \{[n]\}$ is the union of two maximal nested families
on $[1,i-1]$ and on $[i+1,n]$. Equivalently, each $B_{int}$-tree
is a rooted tree with root labeled $i$ and two branches which
are $B_{int}$-trees on the vertex sets $[1,i-1]$ and $[i+1,n]$.
This implies the claim.
\end{proof}
Thus in this case the generalized permutohedron has the Catalan number
$C_n$ vertices associated with plane binary trees.
Proposition~\ref{prop:vertices-coord} implies the following
description of the vertices of $P_n^y(\{y_{ij}\})$,
where $y_{ij} = y_{[i,j]}$ for each interval $[i,j]\subseteq [n]$.
For a plane binary tree $T$ with the binary search labeling,
let $\mathrm{desc}(k,T)=[l_k,r_k]$, for $k=1,\dots,n$.
Then the left branch of a vertex $k$ is $L_k = [l_k,k-1]$
and the right branch is $R_k = [k+1,r_k]$.
\begin{corollary}
\label{cor:ass_realization}
The vertex $v_T=(t_1,\dots,t_n)$ associated
with a plane binary tree $T$ is given by
$t_k = \sum_{l_k\leq i\leq k\leq j\leq r_k} y_{ij}$.
In particular, in the case when $y_{ij}=1$, for any $1\leq i\leq j\leq n$,
we have
$$
v_T=\bigl((|L_1|+1)(|R_1|+1),\dots, (|L_n|+1)(|R_n|+1)\bigr).
$$
\end{corollary}
The polytope $\mathrm{Ass}_n$ with the $C_n$ vertices given by the second part of
Corollary~\ref{cor:ass_realization} is exactly the realization of the
{\it associahedron\/} described by Loday~\cite{Lod}.
We will refer to this particular
geometric realization of the associahedron as the {\it Loday realization.}
This polytope can be equivalently defined as the Newton polytope
$\mathrm{Ass}_n := \mathrm{Newton}\left(\prod_{1\leq i\leq j\leq n}
(t_i+t_{i+1}+\cdots +t_j)\right)$.
We will calculate volumes and numbers of lattice points in
$\mathrm{Ass}_n$, for $n=1,\dots,8$, in
Examples~\ref{exam:assoc_volume} and~\ref{exam:lattice_points_ass}.
We can also describe the Loday realization, as follows.
There are $C_n$ subdivisions of the triangular
Young diagram of the shape $(n,n-1,\dots,1)$ into a disjoint
union of $n$ rectangles; see Thomas~\cite[Theorem~1.1]{Tho}
and Stanley's Catalan addendum~\cite[Problem 6.19($u^5$)]{St3}.
These subdivisions are in a simple bijective correspondence with
plane binary trees on $n$ nodes.
The $i$-th rectangle in such a subdivision is the rectangle that
contains the $i$-th corner of the triangular shape.
Then, for a vertex $v_T=(t_1,\dots,t_n)$
of the associahedron in the Loday realization, the $i$-th coordinate
$t_i$ equals the number of boxes in the $i$-th rectangle of the
associated subdivision;
see Example~\ref{ex:plane_binary_trees} below.
\begin{example}
\label{ex:plane_binary_trees}
Here is an example of a plane binary tree $T$ with the binary search labeling:
\begin{center}
\input{fig12-1.pstex_t}
\end{center}
This tree is associated with the maximal nested set
$$
N=\{\mathrm{desc}(1,T),\dots,\mathrm{desc}(8,T)\}=
\{[1,1],[1,4],[3,3],[3,4],[1,8],[6,8],[7,7],[7,8]\}.
$$
This tree corresponds to the following subdivision of the triangular
shape into rectangles. (Here we use the shifted Young diagram notation
for a future application; see Section~\ref{sec:shifted_tableaux}.)
\begin{center}
\input{fig13-1.pstex_t}
\end{center}
The corresponding vertex of the associahedron in the Loday realization
is
$$
(1\cdot 1 \,,\ 2\cdot 3\,,\ 1\cdot 1\,,\ 2\cdot 1\,,\ 5\cdot 4\,,\
1\cdot 3\,,\ 1\cdot 1\,,\ 2\cdot 1).
$$
\end{example}
\begin{example}
The next figure shows the Loday realization of the associahedron for $n=3$:
\begin{center}
\input{fig14-1.pstex_t}
\end{center}
\end{example}
\subsection{Cyclohedron}
Let $B=B_{cyc}$ be the set of all {\it cyclic intervals\/} in $[n]$,
i.e., subsets of the form $[i,j]$ and $[1,i]\cup [j,n]$, for
$1\leq i\leq j \leq n$.
In this case, the generalized permutohedron is the
{\it cyclohedron\/} that was also introduced by Stasheff~\cite{Sta}.
If we restrict the building set $B_{cyc}$ to $[n]\setminus \{i\}$,
then we obtain a building set isomorphic to the set $B_{int}$
of usual intervals in $[n-1]$. Thus we obtain the following
description of $B_{cyc}$-trees.
\begin{proposition} The set of $B_{cyc}$-trees is exactly the set
of trees that have a root at some vertex $i$ attached to a plane binary
tree on $n-1$ nodes with the binary search labeling by integers in
$[n]\setminus\{i\}$ with respect to the order
$i+1<i+2<\cdots <n < 1<\cdots<i-1$.
\end{proposition}
The generalized Catalan number in this case is
$C(B_{cyc}) = n\cdot C_{n-1} = \binom{2n-2}{n-1}$.
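For example, for $n=3$ every nonempty subset of $[3]$ is a cyclic interval,
so the cyclohedron coincides with the usual permutohedron (a hexagon),
and indeed $C(B_{cyc})=3\cdot C_2=\binom{4}{2}=6$.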
\subsection{Graph associahedra}
\label{ssec:graph_ass}
Let $\Gamma$ be a graph on the vertex set $[n]$. Let us assume that $B=B(\Gamma)$ is
the set of subsets $I\subseteq [n]$ such
that the induced graph $\Gamma|_I$ is connected; see
Example~\ref{example:building_set_G}. In this case, the generalized
permutohedron $P_n^y$ is called the {\it graph associahedron.}
The above examples are special cases of graph associahedra.
If $\Gamma=A_n$ is the chain with $n$ nodes, i.e., the type $A_n$ Dynkin
diagram, then we obtain the usual associahedron discussed above.
In the case of the complete graph $\Gamma=K_n$ we obtain the
usual permutohedron. If $\Gamma$ is the $n$-cycle, then we obtain
the cyclohedron.
Various graph associahedra, especially those graph associahedra
that correspond to Dynkin diagrams and extended Dynkin diagrams,
came up earlier in the work of De~Concini and Procesi~\cite{DP1}
on wonderful models of subspace arrangements and then in the
work of Davis-Januszkiewitz-Scott~\cite{DJS}.
The class of graph associahedra was independently
discovered by Carr and Devadoss in~\cite{CD}.
They constructed these polytopes using blow-ups, cf.~\cite{DJS}.
These polytopes also recently appeared in
the paper by Toledano-Laredo~\cite{Tol}
under the name De~Concini-Procesi associahedra.
We borrowed the term graph associahedra from~\cite{CD}.
Since they are special cases of our generalized permutohedra,
we can also call them {\it graph permutohedra.}
In the case of graph associahedra it is enough to require condition (N2) of
Definition~\ref{def:nested_family} and condition (F2) of Definition~\ref{def:B_forests}
only for $k=2$.
Indeed, if we have several disjoint subsets
$I_1,\dots,I_k\in B(\Gamma)$ such that $\Gamma|_{I_1\cup \cdots \cup I_k}$ is connected,
then $\Gamma|_{I_i\cup I_j}$ is connected for some pair $i$ and $j$.
\begin{definition}
For a graph $\Gamma$, let us define the {\it $\Gamma$-Catalan number\/} as $C(\Gamma) =
C(B(\Gamma))$, i.e., it is the number of vertices of the graph associahedron, or,
equivalently, the number of $B(\Gamma)$-trees; see
Definition~\ref{def:gen_Catalan}.
\end{definition}
For the $n$-chain $\Gamma=A_n$, i.e., the Dynkin diagram of the type $A_n$, the
$A_n$-Catalan number is the usual Catalan number $C(A_n)=C_n$. For the
complete graph, we have $C(K_n)=n!$. Let us calculate several other
$\Gamma$-Catalan numbers.
Let $T_{n_1, \dots ,n_r}$ be the star graph that has a central node with $r$
attached chains with $n_1,\dots,n_r$ nodes.
For example, $T_{1,1,1}$
is the Dynkin diagram of the type $D_4$.
\begin{proposition} For a positive integer $r$, the generating
function $\tilde C(x_1,\dots,x_r)$
for the $T_{n_1,\dots,n_r}$-Catalan numbers is given by
$$
\sum_{n_1,\dots,n_r\geq 0} C(T_{n_1,\dots,n_r})\, x_1^{n_1} \dots x_r^{n_r}=
\frac{C(x_1)\cdots\, C(x_r)}
{1-x_1\,C(x_1)-\cdots - x_r C(x_r)},
$$
where $C(x)=\sum_{n\geq 0} C_n x^n = \frac{1-\sqrt{1-4x}}{2x}$
is the generating function for the usual Catalan numbers.
\end{proposition}
\begin{proof}
According to the recurrence relation in Theorem~\ref{th:f-recurrence},
we have
\begin{equation}
\label{ex:CTnnn}
C(T_{n_1,\dots, n_r}) = C_{n_1}\cdots C_{n_r} +
\sum_{k=1}^r \sum_{i=1}^{n_k}
C(T_{n_1,\dots ,n_{k-1}, n_k-i ,n_{k+1},\dots ,n_r})
\cdot C_{i-1}.
\end{equation}
Indeed, the first term corresponds to removing the central node
and splitting the graph $T_{n_1,\dots,n_r}$ into $r$ chains.
The remaining terms correspond to removing a node in one of the chains
and splitting the graph into two connected components.
This relation can be written in terms of generating functions as
$$
\tilde C(x_1,\dots,x_r) = C(x_1)\dots C(x_r) + \sum_{k=1}^r x_k\cdot
\tilde C(x_1,\dots,x_r)\cdot C(x_k),
$$
which is equivalent to the claim.
\end{proof}
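The recurrence~(\ref{ex:CTnnn}) is easy to iterate on a computer. Here is a
minimal Python sketch (ours; the helper names \texttt{catalan} and
\texttt{star\_catalan} are only illustrative) that memoizes the recurrence and
checks, for instance, that $C(T_{1,1,1})=16$ and that for a single attached
chain it reduces to usual Catalan numbers.
\begin{verbatim}
from functools import lru_cache
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

@lru_cache(maxsize=None)
def star_catalan(ns):
    # C(T_{n_1,...,n_r}) via the recurrence (ex:CTnnn)
    total = 1
    for nk in ns:                      # removing the central node
        total *= catalan(nk)
    for k, nk in enumerate(ns):        # removing a node in the k-th chain
        for i in range(1, nk + 1):
            smaller = ns[:k] + (nk - i,) + ns[k + 1:]
            total += star_catalan(smaller) * catalan(i - 1)
    return total

assert star_catalan((1, 1, 1)) == 16   # T_{1,1,1} is the Dynkin diagram D_4
# T_m is a chain with m+1 nodes, so C(T_m) = C_{m+1}
assert all(star_catalan((m,)) == catalan(m + 1) for m in range(8))
\end{verbatim}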
Let us calculate $\Gamma$-Catalan numbers for
a class of graphs which includes all Dynkin diagrams.
Let $D_n = T_{1,1,n-3}$, $\hat A_n$ be the $(n+1)$-cycle,
$E_n= T_{1,2,n-4}$.
\begin{proposition} The $\Gamma$-Catalan numbers for these graphs are given by
$$
\begin{array}{l}
C(A_n) = C_n = \frac{1}{n+1}\binom{2n}{n},
\textrm{ for } n\geq 1,
\\[.05in]
C(\hat A_n) = (n+1)C_n = \binom{2n}{n},
\textrm{ for } n\geq 3,
\\[.05in]
C(D_n) = 2\,C_n -2\,C_{n-1}-C_{n-2}, \textrm{ for } n\geq 3,
\\[.05in]
C(E_n) = 3\,C_n -4\,C_{n-1}-3\,C_{n-2} - 2\,C_{n-3}, \textrm{ for } n\geq 4.
\end{array}
$$
\end{proposition}
\begin{proof}
We already proved that $C(A_n)=C_n$.
Using Theorem~\ref{th:f-recurrence}, we deduce that
$C(\hat A_n) = (n+1)C(A_n)$.
According to Theorem~\ref{th:f-recurrence} or~(\ref{ex:CTnnn}),
we deduce that the numbers $C(D_n)$ can be calculated
using the recurrence relations
$C(D_n) = C_{n-3} + 2C_{n-1} + \sum_{i=1}^{n-3} C(D_{n-i})\, C_{i-1}$,
for $n\geq 4$, and $C(D_3) = 5$.
In order to prove that $C(D_n) = 2C_n - 2C_{n-1} - C_{n-2}$,
it is enough to check that the right-hand side satisfies this
recurrence relation and that $2C_3 -2 C_2 - C_1 = 5$.
We can easily do this using the recurrence relation for the
Catalan numbers
$C_n = \sum_{i=1}^n C_{n-i} C_{i-1}$, for $n\geq 1$.
Similarly, the numbers $C(E_n)$ are given by the recurrence relation
$C(E_n) = C_{n-1} + C(D_{n-1}) + C_{n-2} + 2\,C_{n-4} +
\sum_{i=1}^{n-4} C(E_{n-i})\,C_{i-1} =
3\, C_{n-1} - C_{n-2} - C_{n-3} + 2\,C_{n-4} +
\sum_{i=1}^{n-4} C(E_{n-i})\,C_{i-1}$, for $n\geq 5$,
and $C(E_4)=14$.
Again, we can easily check that the right hand side
of $C(E_n) = 3\,C_n - 4\,C_{n-1} -3 \,C_{n-2} - 2 \,C_{n-3}$
satisfies this relation, and that $3\,C_4 - 4\,C_3 - 3\,C_2 - 2\,C_1 = 14$.
\end{proof}
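These recurrences are also easy to check numerically. The following Python
sketch (ours) transcribes them literally and verifies the closed formulas for
$C(D_n)$ and $C(E_n)$ for small $n$.
\begin{verbatim}
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def C_D(n):                     # recurrence from the proof, C(D_3) = 5
    if n == 3:
        return 5
    return (catalan(n - 3) + 2 * catalan(n - 1)
            + sum(C_D(n - i) * catalan(i - 1) for i in range(1, n - 2)))

def C_E(n):                     # recurrence from the proof, C(E_4) = 14
    if n == 4:
        return 14
    return (catalan(n - 1) + C_D(n - 1) + catalan(n - 2)
            + 2 * catalan(n - 4)
            + sum(C_E(n - i) * catalan(i - 1) for i in range(1, n - 3)))

assert all(C_D(n) == 2 * catalan(n) - 2 * catalan(n - 1) - catalan(n - 2)
           for n in range(3, 12))
assert all(C_E(n) == 3 * catalan(n) - 4 * catalan(n - 1)
                     - 3 * catalan(n - 2) - 2 * catalan(n - 3)
           for n in range(4, 12))
\end{verbatim}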
Similarly, for any fixed $n_1,\dots,n_{r-1}$, the number
$f_n=C(T_{n_1,n_2,\dots,n_{r-1},n})$ can be expressed as a linear
combination of several Catalan numbers.
\begin{remark}
One can define the generalized Catalan number for any Lie type.
However, this number does not depend on the multiplicities of edges in the Dynkin
diagram. The Catalan number for the Lie types $B_n$ and $C_n$ is the
usual Catalan number $C_n$.
\end{remark}
\subsection{Pitman-Stanley polytope}
\label{ssect:Pitman-Stanley}
All above examples are special cases of graph associahedra.
Let us consider an example that does not belong to this class.
Let $B=B_{\mathit{flag}} =\{[1],[2],\dots,[n]\}$
be the complete flag of subsets in $[n]$, and
let $z_i=\sum_{j=1}^i y_{[j]}$, for $i=1,\dots,n$.
According to Proposition~\ref{prop:Py=Pz},
the generalized permutohedron in this case is the polytope
given by the inequalities:
$$
\{(t_1,\dots,t_n)\mid t_i\geq 0,\ t_1+\cdots + t_i\geq z_i,
\textrm{ for }i=1,\dots,n-1,\ t_1+\dots+t_n = z_n\}
$$
This is exactly the polytope studied by Pitman and Stanley~\cite{PiSt}.
We will call it the {\it Pitman-Stanley polytope.}
Let $B_{\mathit{flag}}^+ = B_{\mathit{flag}}\cup \{\{1\},\dots,\{n\}\}$
be the set obtained from $B_{\mathit{flag}}$ by adding all singletons.
The generalized permutohedron for $B_{\mathit{flag}}^+$ is just a parallel
translation of the Pitman-Stanley polytope.
The set $B_{\mathit{flag}}^+$ is a building set. Nested families
$N\in \mathcal{N}(B_{\mathit{flag}}^+)$ are the subsets
$N\subset B_{\mathit{flag}}^+$ such that (1) if $[i]\in N$ then $\{i+1\}\not\in N$,
and (2) $[n]\in N$.
Let us encode a nested set $N$ by a word $u_1,\dots,u_{n-1}$
in the alphabet $\{0,1,*\}$ such that, for $i=1,\dots,n-1$, if
$[i]\in N$ then $u_i=0$, if $\{i+1\}\in N$ then $u_i=1$,
otherwise $u_i=*$. This gives a bijection between nested families
and the $3^{n-1}$ words of length $n-1$ in these three letters.
A nested set $N$ contains a nested set $N'$
whenever the word for $N$ is obtained from the word for $N'$
by replacing some $*$'s with $0$'s and/or $1$'s.
In particular, a nested set is maximal if its word
contains only $0$'s and $1$'s.
Thus the nested complex $\mathcal{N}(B_{\mathit{flag}}^+)$ is isomorphic to the face lattice
of the $(n-1)$-dimensional hypercube.
\begin{proposition} The Pitman-Stanley polytope has $2^{n-1}$
vertices and it is combinatorially equivalent to the $(n-1)$-dimensional
hypercube.
\end{proposition}
Thus the generalized Catalan number in this case is
$C(B_{\mathit{flag}}^+) = 2^{n-1}$.
\begin{example}
The following figure shows the combinatorial structure of the
Pitman-Stanley polytope for $n=3$ in terms of nested families.
\medskip
\begin{center}
{\tiny
\input{fig16-1.pstex_t}
}
\end{center}
Note that, as a geometric polytope, the Pitman-Stanley
polytope is a {\it non-regular\/} quadrilateral, as shown on the following figure.
\medskip
\begin{center}
\input{fig17-1.pstex_t}
\end{center}
\end{example}
\subsection{Graphical zonotope}
Let $\Gamma$ be a graph on the vertex set $[n]$, and
let $B$ be the set of all pairs $\{i,j\}\subset [n]$ such
that $(i,j)$ is an edge of $\Gamma$. The set $B$ {\it does not\/} satisfy
the axioms of a building set; see Definition~\ref{def:building_set}.
The minimal building set that contains $B$ is
the graphical building set $B(\Gamma)$; see Example~\ref{example:building_set_G}.
The generalized permutohedron for the set $B$ is the graphical zonotope
$Z_\Gamma$; see Definition~\ref{def:graph_zonotope}.
In this case, we cannot describe the combinatorial structure of $Z_\Gamma$ using
nested families.
However it is well-known that the vertices of $Z_\Gamma$ correspond to
acyclic orientations of the graph $\Gamma$. It is not hard to describe the
faces of this polytope as well. Note that the polytope $Z_\Gamma$ is dual to the
graphic arrangement for the graph $\Gamma$.
\section{Volume of generalized permutohedra via Bernstein's theorem}
\label{sec:vol_via_Bernstein}
Let $G\subseteq K_{m,n}$ be a bipartite graph with no isolated vertices.
(This graph should not be confused with graphs used in
Section~\ref{sec:examples_of_gen_perm}.)
We will label the vertices of $G$ by $1,\dots,m,\bar 1,\dots,\bar n$
and call $1,\dots,m$ the {\it left vertices\/} and $\bar 1,\dots,\bar n$
the {\it right vertices.}
Let us associate this graph with the collection $\mathcal{I}_G$ of subsets
$I_1,\dots,I_m\subseteq[n]$ such that
$j\in I_i$ if and only if $(i,\bar j)$ is an edge of $G$.
Let us define the polytope $P_G(y_1,\dots,y_m)$ as the Minkowski sum
$$
P_G(y_1,\dots,y_m) = y_1\Delta_{I_1} + \cdots + y_m \Delta_{I_m}.
$$
The polytope $P_G(y_1,\dots,y_m)$ is exactly the generalized permutohedron
$P_{n}^y(\{y_I\})$, where $y_I = \sum_{i\mid I_i = I} y_i$.
\begin{remark}
\label{rem:P_G(1,1,1)}
The class of polytopes $P_G(1,\dots,1)$
is as general as $P_G(y_1,\dots,y_m)$ for arbitrary nonnegative
integers $y_1,\dots,y_m$. Indeed, we can always replace
a term $y_i\Delta_{I_i}$ with $y_i$ terms $\Delta_{I_i}$.
We use the notation $P_G(y_1,\dots,y_m)$ in order to
emphasize dependence of this class of polytopes on
the parameters $y_1,\dots,y_m$.
\end{remark}
\begin{definition}
Let us say that a sequence of nonnegative integers $(a_1,\dots,a_m)$ is a
{\it $G$-draconian sequence\/} if $\sum a_i = n-1$ and, for any subset
$\{i_1<\cdots <i_k\}\subseteq[m]$, we have $|I_{i_1} \cup \cdots \cup I_{i_k}|
\geq a_{i_1}+\cdots + a_{i_k} + 1$. Equivalently, $(a_1,\dots,a_m)$ is a {\it
$G$-draconian\/} sequence of integers if the sequence of subsets
$I_1^{(a_1)},\dots,I_m^{(a_m)}$, where $I^{(a)}$ means $I$ repeated $a$ times,
satisfies the dragon marriage condition; see
Proposition~\ref{prop:dragon-marriage}.
\end{definition}
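The $G$-draconian condition is straightforward to test by brute force, as in
the following Python sketch (ours; the name \texttt{is\_draconian} is only
illustrative). For the three $2$-element subsets of $\{1,2,3\}$ it finds
exactly the sequences $(1,1,0)$, $(1,0,1)$, $(0,1,1)$, matching the three
labeled trees on $3$ vertices; cf.\ Example~\ref{exam:regular_permuitohedron-K_n} below.
\begin{verbatim}
from itertools import combinations, product

def is_draconian(a, I_list, n):
    # a = (a_1,...,a_m); I_list = [I_1,...,I_m], subsets of {1,...,n}
    m = len(I_list)
    if sum(a) != n - 1:
        return False
    for k in range(1, m + 1):
        for S in combinations(range(m), k):
            union = set().union(*(I_list[i] for i in S))
            if len(union) < sum(a[i] for i in S) + 1:
                return False
    return True

I_list = [{1, 2}, {1, 3}, {2, 3}]
good = [a for a in product(range(3), repeat=3) if is_draconian(a, I_list, 3)]
assert sorted(good) == [(0, 1, 1), (1, 0, 1), (1, 1, 0)]
\end{verbatim}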
Theorem~\ref{th:second-formula} can be extended to generalized
permutohedra, as follows.
\begin{theorem}
\label{th:second-formula-generalized}
The volume of the generalized permutohedron
$P_G(y_1,\dots,y_m)$ equals
$$
\mathrm{Vol}\, P_G(y_1,\dots,y_m) = \sum_{(a_1,\dots,a_m)}
\frac {y_1^{a_1}}{a_1!} \cdots
\frac {y_m^{a_m}}{a_m!},
$$
where the sum is over all $G$-draconian sequences
$(a_1,\dots,a_m)$.
\end{theorem}
We can also reformulate Theorem~\ref{th:second-formula-generalized},
as follows.
\begin{corollary}
\label{cor:vol_gen_perm}
The volume of the generalized permutohedron
$P_{n}^y(\{y_I\})$ is given by
$$
\mathrm{Vol}\, P_{n}^y(\{y_I\})= \frac{1}{(n-1)!}\sum_{(J_1,\dots,J_{n-1})}
y_{J_1}\cdots y_{J_{n-1}},
$$
where the sum is over ordered collections of nonempty subsets
$J_1,\dots,J_{n-1}\subset [n]$
such that, for any distinct $i_1,\dots,i_k$, we have
$|J_{i_1}\cup \cdots \cup J_{i_k}| \geq k+1$.
\end{corollary}
\begin{proof}
Assume in Theorem~\ref{th:second-formula-generalized}
that $G$ is the bipartite graph associated with the collection
$I_1,\dots,I_m$, $m=2^n-1$, of all nonempty subsets in $[n]$.
Then replace the summation over $G$-draconian sequences
$(a_1,\dots,a_m)$ by the summation over $\binom {n-1}{a_1,\dots,a_m}$
rearrangements $(J_1,\dots,J_{n-1})$ of the sequence
$(I_1^{(a_1)},\dots,I_m^{(a_m)})$.
\end{proof}
\begin{example}
\label{exam:regular_permuitohedron-K_n}
Suppose that $I_1,\dots,I_m$, $m=\binom n2$, is the collection of all
$2$-element
subsets in $[n]$ and $G\subset K_{m,n}$ is the associated bipartite graph.
Then $P_G(1,\dots,1)$ is the regular permutohedron $P_{n}(n-1,n-2,\dots,0)$.
In this case, there are $n^{n-2}$ $G$-draconian sequences $(a_1,\dots,a_m)$,
which are in a bijective correspondence with trees on $n$ vertices.
For a tree $T\subset K_n$, the $a_i$'s corresponding to the edges of $T$
are equal to $1$ and the remaining $a_i$'s are zero,
cf.\ Proposition~\ref{prop:dragon-marriage}.
Thus we recover the result that $\mathrm{Vol}\, P_{n}(n-1,n-2,\dots,0) = n^{n-2}$.
\end{example}
\begin{definition} A sequence of positive integers $(b_1,\dots,b_m)$
is called a {\it parking function\/} if its increasing rearrangement
$c_1\leq c_2\leq \cdots \leq c_m$ satisfies $c_i\leq i$, for $i=1,\dots,m$.
\end{definition}
Recall that there are $(m+1)^{m-1}$ parking functions of length $m$.
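This count is easy to confirm by brute force for small $m$, as in the following
short Python check (ours).
\begin{verbatim}
from itertools import product

def is_parking(b):
    c = sorted(b)
    return all(c[i] <= i + 1 for i in range(len(c)))

for m in range(1, 6):
    count = sum(is_parking(b) for b in product(range(1, m + 1), repeat=m))
    assert count == (m + 1) ** (m - 1)
\end{verbatim}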
\begin{example}
\label{exam:pitman-stanley-volume}
Suppose that $I_i=[n+1-i]$, for $i=1,\dots,m$,
where $m=n-1$.
In this case, the generalized permutohedron $P_{G}(y_1,\dots,y_{m})$
is the Pitman-Stanley polytope; see Subsection~\ref{ssect:Pitman-Stanley}.
A $G$-draconian sequence is a nonnegative integer sequence
$(a_1,\dots,a_{m})$ such that $a_1+\cdots + a_i \geq i$, for $i=1,\dots,m$,
and $a_1+\dots+a_{m}=m$. There are exactly
$C_{m} = \frac{1}{m+1}\binom {2m}{m}$ (the $m$-th Catalan number) such sequences.
Let us call them {\it Catalan sequences.}
A collection of intervals $I_{b_1},\dots,I_{b_{m}}$ satisfies
the dragon marriage condition if and only if $(b_1,\dots,b_{m})$
is a parking function.
We recover the following two formulas for the volume of the Pitman-Stanley
polytope proved in~\cite{PiSt}:
$$
\mathrm{Vol}\, P_G(y_1,\dots,y_m)=
\sum_{(a_1,\dots,a_m)} \frac{y_1^{a_1}}{a_1!}\cdots \frac{y_m^{a_m}}{a_m!}
=\frac{1}{m!} \sum_{(b_1,\dots,b_m)} y_{b_1} \cdots y_{b_m},
$$
where the first sum is over Catalan sequences
$(a_1,\dots,a_m)$ and the second sum is over parking functions
$(b_1,\dots,b_m)$.
In particular, $\mathrm{Vol}\, P_G(1,\dots,1) = \frac{ (m+1)^{m-1}}{m!} =
\frac{n^{n-2}}{(n-1)!}$.
\end{example}
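Both sums are easy to evaluate exactly with rational arithmetic. The following
Python sketch (ours; shown for $m=3$, but any small $m$ works) checks that the
two formulas agree and that $\mathrm{Vol}\, P_G(1,\dots,1)=(m+1)^{m-1}/m!$.
\begin{verbatim}
from fractions import Fraction
from itertools import product
from math import factorial

def vol_via_catalan_sequences(y):
    m, total = len(y), Fraction(0)
    for a in product(range(m + 1), repeat=m):
        if sum(a) == m and all(sum(a[:i]) >= i for i in range(1, m + 1)):
            term = Fraction(1)
            for i in range(m):
                term *= Fraction(y[i]) ** a[i] / factorial(a[i])
            total += term
    return total

def vol_via_parking_functions(y):
    m, total = len(y), Fraction(0)
    for b in product(range(1, m + 1), repeat=m):
        if all(sorted(b)[i] <= i + 1 for i in range(m)):
            term = Fraction(1)
            for j in b:
                term *= y[j - 1]
            total += term
    return total / factorial(m)

y = (2, 1, 3)   # any nonnegative integers will do
assert vol_via_catalan_sequences(y) == vol_via_parking_functions(y)
assert vol_via_catalan_sequences((1, 1, 1)) == Fraction(4 ** 2, factorial(3))
\end{verbatim}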
The proof of Theorem~\ref{th:second-formula-generalized}
relies on Bernstein's theorem
on systems of polynomial equations.
Let us first recall the definition of the {\it mixed volume}
$\mathrm{Vol}\,(Q_1,\dots,Q_n)$ of $n$ polytopes $Q_1,\dots,Q_n\subset \mathbb{R}^n$.
It is based on the following proposition.
\begin{proposition}
\label{prop:mixed_volume}
There exists a unique function
$\mathrm{Vol}\,(Q_1,\dots,Q_n)$ defined on $n$-tuples of polytopes in $\mathbb{R}^n$
such that, for any collection of $m$ polytopes $R_1,\dots,R_m\subset\mathbb{R}^n$,
the usual volume of the Minkowski sum
$y_1 R_1+ \cdots + y_m R_m$, for nonnegative factors $y_i$,
is the polynomial in $y_1,\dots,y_m$ given by
$$
\mathrm{Vol}\,(y_1 R_1+ \cdots + y_m R_m) =
\sum_{(i_1,\dots,i_n)} \mathrm{Vol}\,(R_{i_1},\dots,R_{i_n})\,
y_{i_1}\cdots y_{i_n},
$$
where the sum is over ordered sequences $(i_1,\dots,i_n)\in[m]^n$.
\end{proposition}
For a finite subset $A\subset\mathbb{Z}^n$, let $f_A(t_1,\dots,t_n)
=\sum_{a\in A} \beta_a\, t_1^{a_1}\cdots t_n^{a_n}$
be a Laurent polynomial in $t_1,\dots,t_n$ with
some complex coefficients $\beta_a$.
\begin{theorem}
\label{th:bernstein}
{\rm Bernstein~\cite{Ber}} \
Fix $n$ finite subsets $A_1,\dots,A_n\subset\mathbb{Z}^n$.
Let $Q_i$ be the convex hull of $A_i$, for $i=1,\dots,n$.
Then the system
$$
\left\{
\begin{array}{c}
f_{A_1}(t_1,\dots,t_n) = 0,\\
\vdots \\
f_{A_n}(t_1,\dots,t_n) = 0
\end{array}
\right.
$$
of $n$ polynomial equations in the $n$ variables $t_1,\dots,t_n$
has exactly $n!\,\mathrm{Vol}\,(Q_1,\dots,Q_n)$ isolated solutions
in $(\mathbb{C}\setminus\{0\})^n$ whenever the collection
of all coefficients of the polynomials $f_{A_i}$
belongs to a certain Zariski open set in $\mathbb{C}^{\sum|A_i|}$.
\end{theorem}
Bernstein's theorem is usually used for finding the number
of solutions of a system of polynomial equations by calculating
the mixed volume.
We will apply Bernstein's theorem in the opposite direction.
Namely, we will calculate the mixed volume by solving a system
of polynomial equations. Actually, in our case we need
to solve a system of linear equations.
\begin{proof}[Proof of Theorem~\ref{th:second-formula-generalized}]
According to Proposition~\ref{prop:mixed_volume}
and the definition of the polytope $P_G(y_1,\dots,y_m)$ as the
Minkowski sum of simplices, we have
$$
\mathrm{Vol}\, P_G(y_1,\dots,y_m) = \sum_{i_1,\dots,i_{n-1}}
\mathrm{Vol}\,(\Delta_{I_{i_1}},\dots,\Delta_{I_{i_{n-1}}})\,y_{i_1}\cdots y_{i_{n-1}},
$$
where the sum is over all $i_1,\dots,i_{n-1}\in[m]$.
Here we can define $(n-1)$-dimensional (mixed) volumes of polytopes
embedded into $\mathbb{R}^{n}$ as (mixed) volumes of their projections
into, say, the first $n-1$ coordinates.
It remains to show that the mixed volume of several coordinate simplices
is equal to
$$
\mathrm{Vol}\,(\Delta_{J_1},\dots,\Delta_{J_{n-1}}) =
\left\{
\begin{array}{cl}
\frac{1}{(n-1)!} &\textrm{if $J_1,\dots,J_{n-1}$ satisfy DMC}, \\[.1in]
0 &\textrm{otherwise,}
\end{array}
\right.
$$
where ``DMC'' stands for the dragon marriage condition; see
Proposition~\ref{prop:dragon-marriage}.
Consider the following system of $n-1$ linear equations in
the variables $t_1,\dots,t_{n-1}$:
$$
\left\{
\begin{array}{c}
\sum_{j\in J_1} \beta_{1,j}\, t_j = 0,\\
\vdots\\
\sum_{j\in J_{n-1}} \beta_{n-1,j}\, t_j = 0,
\end{array}
\right.
$$
where we assume that $t_{n} = 1$. According to
Bernstein's theorem, this system has exactly
$(n-1)!\, \mathrm{Vol}\,(\Delta_{J_1},\dots,\Delta_{J_{n-1}})$ isolated solutions
in $(\mathbb{C}\setminus\{0\})^{n-1}$ for generic coefficients $\beta_{i,j}\in\mathbb{C}$,
for $j\in J_i$.
Of course, we can always solve this linear system using Cramer's rule.
Let $B=(\beta_{ij})$ be the $(n-1)\times n$-matrix formed by
the coefficients of the system, where we assume that $\beta_{i,j}=0$,
for $j\not\in J_i$; and let $|B^{(i)}|$ be the $i$-th maximal minor
of this matrix. The system is nondegenerate if and only if
$|B^{(n)}|\ne 0$. In this case, its only solution is given by
$t_i = (-1)^i |B^{(i)}|/|B^{(n)}|$, for $i=1,\dots,n-1$.
Thus the system has a single isolated solution in $(\mathbb{C}\setminus\{0\})^{n-1}$
if and only if {\it all} $n$ maximal minors of $B$ are nonzero.
Otherwise, the system has no isolated solutions in $(\mathbb{C}\setminus\{0\})^{n-1}$.
The matrix $B=(\beta_{i,j})$ is
subject to the only constraint $\beta_{i,j}=0$, for $j\not\in J_i$.
For generic values of $\beta_{i,j}$, the $k$-th maximal minor of
this matrix is nonzero if and only if there is a system of
distinct representatives of $J_1,\dots,J_{n-1}$ that avoids $k$.
According to Proposition~\ref{prop:dragon-marriage}, these
conditions are equivalent to the dragon marriage condition for $J_1,\dots,J_{n-1}$.
This finishes the proof.
\end{proof}
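The key step of this proof, namely that all $n$ maximal minors of a generic
matrix supported on $J_1,\dots,J_{n-1}$ are nonzero exactly when, for every
$k$, there is a system of distinct representatives of $J_1,\dots,J_{n-1}$
avoiding $k$, can be illustrated computationally. In the Python sketch below
(ours) random integer entries stand in for generic coefficients, so the minor
test is probabilistic rather than exact.
\begin{verbatim}
import random
from itertools import permutations

def det(M):                        # Laplace expansion, fine for tiny matrices
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def minors_all_nonzero(J_list, n, trials=5):
    for _ in range(trials):        # random entries stand in for "generic"
        B = [[random.randint(1, 10 ** 6) if j + 1 in J else 0
              for j in range(n)] for J in J_list]
        minors = [det([row[:k] + row[k + 1:] for row in B]) for k in range(n)]
        if all(m != 0 for m in minors):
            return True
    return False

def sdr_avoiding_every_k(J_list, n):
    def has_sdr(sets):
        elems = sorted(set().union(*sets))
        return any(all(p[i] in sets[i] for i in range(len(sets)))
                   for p in permutations(elems, len(sets)))
    return all(has_sdr([J - {k} for J in J_list]) for k in range(1, n + 1))

J_good = [{1, 2}, {2, 3}]          # n = 3
J_bad = [{1, 2}, {1, 2}]           # no distinct representatives avoiding 1
assert minors_all_nonzero(J_good, 3) and sdr_avoiding_every_k(J_good, 3)
assert not minors_all_nonzero(J_bad, 3) and not sdr_avoiding_every_k(J_bad, 3)
\end{verbatim}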
\section{Volumes via Brion's formula}
\label{sec:vol_via_Brion}
Let us give a couple of alternative formulas for volume of generalized
permutohedra that extend results of Section~\ref{sec:descent_div_sym}.
It is more convenient to express generalized permutohedra
in the form $P_n^y(\{y_I\})$;
see Section~\ref{sec:generalized_permutohedra}.
\begin{theorem}
\label{th:gen-perm-sum-w}
For any distinct $\lambda_1,\dots,\lambda_{n}$, we have
$$
\mathrm{Vol}\, P_n^y(\{y_I\}) = \frac{1}{(n-1)!} \sum_{w\in S_{n}}
\frac{\left(\sum_{I\subseteq [n]} \lambda_{w(\min(I))} y_{w(I)}\right)^{n-1}}
{(\lambda_{w(1)}-\lambda_{w(2)})\cdots (\lambda_{w(n-1)}-\lambda_{w(n)})}.
$$
\end{theorem}
This theorem is deduced from Brion's formula
(see Appendix~\ref{sec:appendix-lattice-points})
in exactly the same way as Theorems~\ref{th:f1-W}
and~\ref{th:f1}.
For example, we have
$$
\mathrm{Vol}\, P_2^y(\{y_I\})=
\frac{\lambda_1 y_{\{1\}} + \lambda_2 y_{\{2\}} + \lambda_1 y_{\{1,2\}}}
{\lambda_1-\lambda_2}
+
\frac{\lambda_2 y_{\{2\}} + \lambda_1 y_{\{1\}} + \lambda_2 y_{\{1,2\}}}
{\lambda_2-\lambda_1} = y_{\{1,2\}}.
$$
Note that the terms $\lambda_i y_{\{i\}}$ make a zero contribution.
Thus in the summation in Theorem~\ref{th:gen-perm-sum-w} we can skip
singleton subsets $I$.
For a collection of subsets $J_1,\dots,J_{n-1}\subseteq [n]$,
construct the integer vector $(a_1,\dots,a_{n})
= e_{\min(J_1)} + \cdots + e_{\min(J_{n-1})}$.
Let $I(J_1,\dots,J_{n-1}) = I_{a_1,\dots,a_{n}}$, defined
as in Section~\ref{sec:descent_div_sym}.
Theorem~\ref{th:vol=descent_number} can be extended as follows.
\begin{theorem}
\label{th:gen_descents_sum_w}
We have
$$
\mathrm{Vol}\, P_n^y(\{y_I\}) =
\sum_{J_1,\dots,J_{n-1} \subseteq [n]} (-1)^{|I(J_1,\dots,J_{n-1})|}
\sum_w y_{w(J_1)} \cdots y_{w(J_{n-1})},
$$
where the second sum is over permutations $w\in S_{n}$ with
the descent set $I(w)=I(J_1,\dots,J_{n-1})$.
\end{theorem}
This result is deduced from Theorem~\ref{th:gen-perm-sum-w}
using the same argument as in the proof of
Theorem~\ref{th:vol=descent_number}.
Theorem~\ref{th:gen-perm-sum-w}
is convenient for explicit calculations
of volumes. Let us give a couple of examples obtained with the help
of a computer.
\begin{example}
\label{exam:assoc_volume}
Let $A_n = (n-1)!\, \mathrm{Vol}\, \mathrm{Ass}_n$, where $\mathrm{Ass}_n$ is the associahedron
in the Loday realization; see Subsection~\ref{ssec:associahedron}.
According to Theorem~\ref{th:gen-perm-sum-w} we have
$$
A_n = \sum_{w\in S_n}
\frac{\left(\sum_{1\leq i\leq j\leq n} \lambda_{m(i,j,w)}\right)^{n-1}}
{(\lambda_{w(1)}-\lambda_{w(2)})\cdots (\lambda_{w(n-1)}-\lambda_{w(n)})},
$$
where $m(i,j,w) = w(\min(w^{-1}([i,j])))$ is the element of $[i,j]$ that appears first among $w(1),\dots,w(n)$.
The numbers $A_n$, for $n=1,\dots,8$, are the following:
\smallskip
\begin{center}
\begin{tabular}{|c||l|l|l|l|l|l|l|l|}
\hline
$n$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\
\hline
$A_n$ & 1 & 1 & 7 & 142 & 5895 & 417201 & 45046558 & 6891812712 \\
\hline
\end{tabular}
\end{center}
\smallskip
\end{example}
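These numbers can be cross-checked against Corollary~\ref{cor:vol_gen_perm}:
assuming, as in Subsection~\ref{ssec:associahedron}, that the Loday realization
corresponds to $y_I=1$ for every interval $I=[i,j]$ and $y_I=0$ otherwise,
$A_n$ is the number of ordered $(n-1)$-tuples of intervals in $[n]$ such that
any $k$ distinct entries cover at least $k+1$ elements. The brute-force Python
sketch below (ours) reproduces the first entries of the table.
\begin{verbatim}
from itertools import combinations, product

def intervals(n):
    return [frozenset(range(i, j + 1))
            for i in range(1, n + 1) for j in range(i, n + 1)]

def A(n):
    count = 0
    for tup in product(intervals(n), repeat=n - 1):
        if all(len(frozenset().union(*(tup[i] for i in S))) >= k + 1
               for k in range(1, n)
               for S in combinations(range(n - 1), k)):
            count += 1
    return count

assert [A(n) for n in range(2, 5)] == [1, 7, 142]
\end{verbatim}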
\begin{example}
{\rm (cf.~Example~\ref{exam:dragon_marriage})} \
Let us call a subgraph $G\subseteq K_{n,n}$ a {\it Hall graph\/}
if it contains a perfect matching or, equivalently, satisfies
the Hall marriage condition. Let $H_n$ be the number of Hall
subgraphs in $K_{n,n}$. According to Corollary~\ref{cor:vol_gen_perm},
$\frac{1}{(n-1)!}\, H_{n-1}$ is the volume of the generalized permutohedron
$P_n^y(\{y_I\})$ with $y_I=1$, for subsets $I\subseteq[n]$ such that
$n\in I$, and $y_I=0$, otherwise.
Using Theorem~\ref{th:gen-perm-sum-w} we can calculate several
numbers $H_n$.
\smallskip
\begin{center}
\begin{tabular}{|c||l|l|l|l|l|l|l|}
\hline
$n$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\
\hline
$H_n$ & 1 & 7 & 247 & 37823 & 23191071 & 54812742655 & 494828369491583 \\
\hline
\end{tabular}
\end{center}
\smallskip
\end{example}
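The first entries of this table are small enough to verify by exhaustive
search; the Python sketch below (ours) enumerates all subgraphs of $K_{n,n}$
and tests each for a perfect matching.
\begin{verbatim}
from itertools import permutations

def hall_count(n):
    edges = [(i, j) for i in range(n) for j in range(n)]
    count = 0
    for mask in range(1 << len(edges)):
        E = {edges[k] for k in range(len(edges)) if mask >> k & 1}
        if any(all((i, p[i]) in E for i in range(n))
               for p in permutations(range(n))):
            count += 1
    return count

assert [hall_count(n) for n in range(1, 4)] == [1, 7, 247]
# hall_count(4) == 37823 as well, but takes noticeably longer
\end{verbatim}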
\section{Generalized Ehrhart polynomial}
\label{sec:generalized_Ehrhart}
In this section we give a formula for the number of lattice points
of generalized permutohedra.
Let us define the {\it Minkowski difference\/} of two polytopes
$P,Q\subset \mathbb{R}^n$ as $P-Q = \{x\in\mathbb{R}^n\mid x+ Q \subseteq P\}$.
Its main property is the following.
\begin{lemma}
\label{lem:Minkowski_difference}
For any two polytopes, we have $(P+Q)-Q = P$.
\end{lemma}
\begin{proof} We need to prove that, for a point $x$,
we have $x+Q\subseteq P+Q$
if and only if $x\in P$. The ``if'' direction is trivial.
Let us check the ``only if'' direction. It is enough to assume
that $x=0$. We need to show that $Q\subseteq P+Q$ implies that
$0\in P$. Suppose that $0\not\in P$. Because of convexity of $P$
we can find a linear form $f$ such that $f(p)>0$, for any point $p\in P$
(and, of course, $f(0)=0$). Let $q_{\min}\in Q$ be the point of $Q$ with
minimal possible value of $f(q_{\min})$. Then for any point $p+q\in P+Q$,
where $p\in P$ and $q\in Q$, we have $f(p+q) = f(p)+f(q)> f(q_{\min})$.
Thus $q_{\min}\not\in P+Q$. Contradiction.
\end{proof}
\begin{definition}
Let us define the {\it trimmed generalized permutohedron}
as the Minkowski difference of $P_G(y_1,\dots,y_m)$
and the simplex $\Delta_{[n]}$:
$$
P_G^-(y_1,\dots,y_{m}) = P_G(y_1,\dots,y_m) -
\Delta_{[n]} = \{x\in \mathbb{R}^n\mid x+\Delta_{[n]} \subseteq P_G\}.
$$
\end{definition}
This is a slightly more general class of polytopes than generalized
permutohedra $P_G$.
Suppose that $I_1=[n]$, i.e., the vertex $1$ in $G$ is connected
with all vertices in the right part. (If this is not the case,
we can always add such a vertex to $G$.)
According to Lemma~\ref{lem:Minkowski_difference}, we have
$$
P_G(y_1,\dots,y_m) = P_G^{-}(y_1+1,y_2,\dots,y_m).
$$
In other words, if one of the summands in the Minkowski sum for $P_G$ is
$\Delta_{[n]}$ then the trimmed generalized
permutohedron $P_G^-$ equals the (untrimmed) generalized permutohedron
given by a similar Minkowski sum without this summand.
Also notice that the class of polytopes $P_G^{-}(1,\dots,1)$ is as general as
$P_G^{-}(y_1,\dots,y_m)$ for arbitrary nonnegative integers $y_1,\dots,y_m$,
cf.~Remark~\ref{rem:P_G(1,1,1)}.
Let us give a formula for the generalized Ehrhart polynomial
of (trimmed) generalized permutohedra.
Define raising powers as
$(y)_a := y(y+1)\cdots (y+a-1)$, for $a\geq 1$, and $(y)_0 := 1$.
Equivalently, $\frac{(y)_a}{a!} := \binom{y+a-1}{a}$.
\begin{theorem}
\label{th:gen_ehrhrart}
For nonnegative integers $y_1,\dots,y_m$,
the number of lattice points in the trimmed generalized
permutohedron $P_G^-(y_1,\dots,y_m)$ equals
$$
\#\left(P_G^-(y_1,\dots,y_m)\cap \mathbb{Z}^{n}\right)=
\sum_{(a_1,\dots,a_m)}
\frac {(y_1)_{a_1}}{a_1!} \cdots
\frac {(y_m)_{a_m}}{a_m!},
$$
where the sum is over all $G$-draconian sequences
$(a_1,\dots,a_m)$.
In particular, the number of lattice points in
$P_G(y_1,\dots,y_m)$ equals the above expression with
$y_1$ replaced by $y_1+1$, assuming that $I_1=[n]$.
This also implies that the number of lattice points in $P_G^{-}(1,\dots,1)$
equals the number of $G$-draconian sequences.
\end{theorem}
In other words, the formula for the number of lattice points
in $P_G^{-}$ is obtained from the formula for the volume
of $P_G$ by replacing usual powers in all terms by raising powers.
We will prove this theorem in Section~\ref{sec:subdivision}.
\begin{example}
\label{exam:lattice_points_reg_permut}
Let $I_1=[n]$ and $I_2,\dots,I_m$, $m=\binom{n}{2}+1$,
be all $2$-element subsets in $[n]$, cf.\
Example~\ref{exam:regular_permuitohedron-K_n}.
Then the polytope $P_G^-(1,\dots,1)$ is the regular
permutohedron $P_n(n-1,\dots,0)$ and
$$
P_G^-(0,1,\dots,1)= P_n(n-1,\dots,0)-\Delta_{[n]} =
P_n(n-2,n-2,n-3,\dots,0).
$$
In this case, $G$-draconian sequences are in a bijection with
forests $F\subset K_n$. The $G$-draconian sequence $(a_1,\dots,a_m)$
associated with a forest $F$ with $c$ connected components
is given by $a_1 = c-1$, $a_i = 1$ if $I_i$ is an edge of $F$,
and $a_i=0$ otherwise, for $i=2,\dots,m$.
Theorem~\ref{th:gen_ehrhrart} implies that
the number of lattice points in the regular
permutohedron equals the number of labeled forests on $n$ nodes.
More generally, if we set some $y_i$'s to zero, then we deduce that
the number of lattice points in a graphical zonotope equals
the number of forests in the corresponding graph;
see Proposition~\ref{prop:vol-Z-G}.
\end{example}
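For small $n$ this equality between lattice points and labeled forests can be
confirmed directly. The Python sketch below (ours) enumerates the lattice
points of $P_n(n-1,\dots,0)$ from its defining inequalities (partial sums over
any subset bounded by the corresponding largest entries of $(n-1,\dots,0)$)
and counts forests by brute force.
\begin{verbatim}
from itertools import combinations, product

def perm_lattice_points(n):
    pts = 0
    for x in product(range(n), repeat=n):
        if sum(x) != n * (n - 1) // 2:
            continue
        if all(sum(x[i] for i in S) <= sum(range(n - k, n))
               for k in range(1, n) for S in combinations(range(n), k)):
            pts += 1
    return pts

def is_forest(edge_list, n):
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for a, b in edge_list:
        ra, rb = find(a), find(b)
        if ra == rb:
            return False
        parent[ra] = rb
    return True

def forest_count(n):
    edges = list(combinations(range(n), 2))
    return sum(is_forest([e for k, e in enumerate(edges) if mask >> k & 1], n)
               for mask in range(1 << len(edges)))

assert all(perm_lattice_points(n) == forest_count(n) for n in (3, 4))
# n = 3 gives 7 points, n = 4 gives 38 points
\end{verbatim}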
Theorem~\ref{th:gen_ehrhrart} and
Example~\ref{exam:lattice_points_reg_permut} also imply
the following statement.
\begin{corollary} Let $\Gamma$ be a connected graph on the vertex set $[n]$.
Let $Z_\Gamma$ be the graphical zonotope, i.e.,
the Minkowski sum of intervals $[e_i,e_j]$, for edges $(i,j)$ of $\Gamma$.
Also consider the Minkowski difference $Z_\Gamma^{-} = Z_\Gamma-\Delta_{[n]}$.
Then the volume of $Z_\Gamma$ equals the number of lattice points in $Z_\Gamma^{-}$:
$$
\mathrm{Vol}\, Z_\Gamma = \#(Z_\Gamma^-\cap \mathbb{Z}^n),
$$
and both these numbers are equal to the number of spanning trees in the graph $\Gamma$.
In particular, the number of lattice points
in the permutohedron $P_n(n-2,n-2,n-3,\dots,0)$ equals $n^{n-2}$.
\end{corollary}
\begin{example}
Suppose that $I_i=[n+1-i]$, for $i=1,\dots,m$, where $m=n-1$,
as in Example~\ref{exam:pitman-stanley-volume}.
Theorem~\ref{th:gen_ehrhrart} implies the following expression
for the number of lattice points in the Pitman-Stanley polytope
proved in~\cite{PiSt}:
$$
\#(P_G(y_1,\dots,y_m)\cap \mathbb{Z}^n) =
\sum_{(a_1,\dots,a_m)}
\frac{(y_1+1)_{a_1}}{a_1!}\cdots \frac{(y_m)_{a_m}}{a_m!},
$$
where the sum is over Catalan sequences
$(a_1,\dots,a_m)$
as in Example~\ref{exam:pitman-stanley-volume}.
Thus the number of lattice points in $P_G^-(1,\dots,1) = P_G(0,1,\dots,1) =
\Delta_{[2]} + \cdots + \Delta_{[n-1]}$ equals the Catalan number $C_m= C_{n-1}$.
Also the number of lattice points in $P_G(1,\dots,1) = \Delta_{[2]}+\cdots +\Delta_{[n]}$
equals $\sum_{(a_1,\dots,a_m)} (a_1+1) = C_n$, the $n$-th Catalan number,
where the sum is over Catalan sequences.
\end{example}
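Both Catalan-number counts at the end of this example are quick to verify;
here is a short Python check (ours) over Catalan sequences.
\begin{verbatim}
from itertools import product
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def catalan_sequences(m):
    for a in product(range(m + 1), repeat=m):
        if sum(a) == m and all(sum(a[:i]) >= i for i in range(1, m + 1)):
            yield a

for n in range(2, 8):
    m = n - 1
    assert sum(1 for a in catalan_sequences(m)) == catalan(m)
    assert sum(a[0] + 1 for a in catalan_sequences(m)) == catalan(n)
\end{verbatim}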
For a bipartite graph $G\subseteq K_{m,n}$, let
$G^*\subseteq K_{n,m}$ be the mirror image of $G$ obtained
by switching the left and right components. In other words,
$G^*$ is the same graph with the relabeled vertices
$1,\dots,m,\bar 1,\dots,\bar n \longrightarrow
\bar 1,\dots,\bar m,1,\dots,n$.
\begin{lemma}
\label{lem:G-draconain=G*}
The set of $G$-draconian sequences is exactly
the set of lattice points of the polytope
$P_{G^*}^-(1,\dots,1)\subset\mathbb{R}^m$.
\end{lemma}
\begin{proof} In order to prove the lemma we just need to
check all definitions. Let $I_1^*,\dots,I_n^*\subseteq[m]$ be the
collection of subsets associated with the graph $G^*$,
i.e., $j\in I_i^*$ whenever $(i,\bar j)\in G^*$, or, equivalently,
$(j,\bar i)\in G$. Then $P_{G^*}(1,\dots,1) =
\Delta_{I_1^*}+\cdots +\Delta_{I_n^*}\subseteq \mathbb{R}^m$. This is
exactly the polytope $P_m^z(\{z_I\})$,
where $z_I=\#\{i\mid I_i^*\subseteq I\}$, for nonempty
$I\subseteq[m]$; see Proposition~\ref{prop:Py=Pz}.
According to Section~\ref{sec:generalized_permutohedra},
this polytope
is given by the inequalities
$$
P_{G^*}(1,\dots,1)=\{(t_1,\dots,t_m)\in\mathbb{R}^m\mid \sum_{i\in [m]} t_i = n,\
\sum_{i\in I} t_i \geq z_I, \textrm{ for }I\subset [m]\}.
$$
Thus the polytope $P_{G^*}^-(1,\dots,1)$, which is the Minkowski
difference of the above polytope and $\Delta_{[m]}$, is given by
$$
P_{G^*}^{-}(1,\dots,1)=\{(t_1,\dots,t_m)\in\mathbb{R}^m\mid \sum_{i\in [m]} t_i = n-1,\
\sum_{i\in I} t_i \geq z_I, \textrm{ for }I\subset [m]\}.
$$
We have $z_I =
\#\{j\in [n]\mid i\in I\textrm{ for every edge }(i,\bar j)\in G\}
= n -\left|\bigcup_{j\in J} I_j\right|$,
for $I\subseteq [m]$ and $J=[m]\setminus I$. Thus we
can rewrite the inequality $\sum_{i\in I} t_i\geq z_I$
as $\sum_{j\in J}t_j \leq \left|\bigcup_{j\in J} I_j\right|-1$.
These are exactly the inequalities from
the definition of $G$-draconian sequence, which proves the claim.
\end{proof}
This shows that Theorem~\ref{th:gen_ehrhrart} gives
a formula for the number of lattice points of the polytope
$P_G^-(y_1,\dots,y_m)$ as a sum over the lattice points
of $P_{G^*}^-(1,\dots,1)$, and vice versa.
In particular, we obtain the following duality
for trimmed generalized permutohedra.
\begin{corollary}
\label{cor:duality-lattice-points}
The number of lattice points in the polytope
$P_{G}^-(1,\dots,1)$ equals the number of lattice points
in the polytope $P_{G^*}^-(1,\dots,1)$:
$$
\#(P_{G}^-(1,\dots,1)\cap \mathbb{Z}^n) = \#(P_{G^*}^-(1,\dots,1)\cap \mathbb{Z}^m ).
$$
\end{corollary}
Notice that the polytopes $P_{G}^-(1,\dots,1)$ and $P_{G^*}^-(1,\dots,1)$ have
different dimensions and they might be very different. In
Theorem~\ref{th:Left-Right-degrees} we will describe a class of bijections
between lattice points of these polytopes.
\begin{example}
\label{examp:delta_times_delta}
Let $G=K_{m,n}$ be the complete bipartite graph.
Then $P_{K_{m,n}}^-$ is the $(n-1)$-dimensional simplex inflated
$m-1$ times: $P_{K_{m,n}}^- = (m-1)\Delta_{[n]}$.
The polytope for the mirror image of the graph
is obtained by switching $m$ and $n$:
$P_{K_{m,n}^*}^- = (n-1)\Delta_{[m]}$.
Corollary~\ref{cor:duality-lattice-points} says that these two
polytopes have the same number of lattice points.
This is a fancy way to say that
$\binom{m+n-2}{m-1} = \binom{m+n-2}{n-1}$.
\end{example}
Theorem~\ref{th:Todd-Euler-Maclaurin}(2)
from Appendix~B (Euler-MacLaurin formula for polytopes)
gives the following alternative expression for the generalized Ehrhart
polynomial, i.e.,
for the number of lattice points in $P_n^z(\{z_I\})$.
Without loss of generality, we will assume that $z_{[n]} = 0$.
The volume $\mathrm{Vol}\, P_n^z(\{z_I\})$ is a homogeneous polynomial
$\tilde V_n$ in the $z_I$, for all nonempty $I\subsetneq [n]$.
\begin{proposition}
The number of lattice points in the generalized
permutohedron $P_n^z(\{z_I\})$ is given by the polynomial
obtained from the polynomial $\tilde V_n$ by applying the
Todd operator
$\mathrm{Todd}_n =
\prod_{I\subsetneq [n]} \mathrm{Todd}\left(-
\frac{\partial}{\partial z_I}\right)$,
where $\mathrm{Todd}(q) = q/(1-e^{-q}) = 1 + \frac{q}{2} + \frac{q^2}{12} -
\frac{q^4}{720}+\cdots$.
\end{proposition}
\section{Root polytopes and their triangulations}
\label{sec:root_polytope}
\begin{definition}
For a graph $G$ on the vertex set $[n]$, let
$\tilde Q_G\subset \mathbb{R}^{n}$ be the convex hull of the origin $0$ and
the points $e_i-e_j$, for all edges $(i,j)$, $i<j$, of $G$.
We will call polytopes $\tilde Q_G$ {\it root polytopes.}
In other words, a root polytope is the convex hull of the origin and
some subset of end-points of positive roots for a root system of type $A_{n-1}$.
The polytopes $\tilde Q_G$ belong to an $(n-1)$-dimensional hyperplane.
\end{definition}
In the case of the complete graph $G=K_{n}$, the polytope
$\tilde Q_{K_{n}}$ was studied in~\cite{GGP}. In particular, we
constructed a triangulation of this polytope and proved
that its $(n-1)$-dimensional volume equals $\frac{1}{(n-1)!} C_{n-1}$,
where $C_{n-1} = \frac{1}{n}\binom{2(n-1)}{n-1}$ is the $(n-1)$-st
Catalan number.
In this section we study root polytopes for bipartite graphs
$G\subseteq K_{m,n}$. It is convenient to introduce
related polytopes
$$
Q_G = \mathrm{ConvexHull}(e_i-e_{\bar j}\mid \textrm{for edges }(i,\bar j)
\textrm{ of } G) \subset \mathbb{R}^{m+n},
$$
where $e_1,\dots,e_m,e_{\bar 1},\dots,e_{\bar n}$ are the coordinate vectors
in $\mathbb{R}^{m+n}$.
Since $G$ is a bipartite graph, the polytope $Q_G$ belongs
to an $(m+n-2)$-dimensional affine subspace.
The polytope $\tilde Q_G$ is the pyramid with the base $Q_G$ and the vertex $0$.
Thus $r! \,\mathrm{Vol}_{r}\, \tilde Q_G =
(r-1)!\, \mathrm{Vol}_{r-1}\, Q_G $, where
$\mathrm{Vol}_r$ stands for the $r$-dimensional volume.
Slightly abusing notation, we will also refer to polytopes $Q_G$ as
{\it root polytopes}.
The polytope $Q_{K_{m,n}}$ for the complete bipartite graph
$K_{m,n}$ is the direct product of two
simplices $\Delta^{m-1}\times \Delta^{n-1}$ of dimensions $(m-1)$
and $(n-1)$. (Here $\Delta^{m-1}\simeq \Delta_{[m]}$.)
For other bipartite graphs, the polytope $Q_G$ is
the convex hull of some subset of vertices of
$\Delta^{m-1}\times \Delta^{n-1}$.
These polytopes are intimately related to generalized
permutohedra.
Let $I_1,\dots,I_m$ be the sequence of subsets associated
with the graph $G$, i.e., $j\in I_i$ whenever $(i,\bar j)\in G$.
Let $P_G =P_{G}(1,\dots,1) = \Delta_{I_1} + \cdots + \Delta_{I_m}$
and $P_G^- = P_G - \Delta_{[n]}$.
\begin{theorem}
\label{th:Vol_Q=N(P)}
For any connected bipartite graph $G\subseteq K_{m,n}$,
the $(m+n-2)$-dimensional volume of the root polytope $Q_{G}$ is
expressed in terms of the number of lattice points of
the trimmed generalized permutohedron $P_G^-$ as
$$
\mathrm{Vol}\, Q_{G} = \frac{\# (P_G^{-}\cap \mathbb{Z}^n)}{(m+n-2)!}.
$$
\end{theorem}
We will prove this theorem by constructing a bijection between
simplices in a triangulation of the polytope $Q_G$
and lattice points of the polytope $P_G^-$; see
Theorem~\ref{th:Left-Right-degrees}.
For a bipartite graph $G\subseteq K_{m,n}$, let $G^+\subseteq K_{m+1,n}$
be the bipartite graph obtained from $G$ by adding a new vertex $m+1$
connected by the edges $(m+1,\bar j)$, $j=1,\dots,n$, with all vertices
of the second part. Then $P_{G^+}^- = P_G$.
\begin{corollary}
For any bipartite graph $G\subseteq K_{m,n}$ without isolated vertices,
the $(m+n-1)$-dimensional volume of the polytope $Q_{G^+}$ is
related to the number of lattice points in the generalized permutohedron
as
$$
\mathrm{Vol}\, Q_{G^+} = \frac{\# (P_G\cap \mathbb{Z}^n)}{(m+n-1)!}.
$$
\end{corollary}
\begin{definition}
A {\it polyhedral subdivision} of a polytope $Q$ is a subdivision of $Q$ into a
union of cells of the same dimension as $P$ such that each cell is the convex
hull of some subset of vertices of $Q$ and any two cells intersect properly,
i.e., the intersection of any two cells is their common face. Polyhedral
subdivisions are partially ordered by refinement. Minimal elements of this
partial order, i.e., unsubdividable polyhedral subdivisions, are called {\it
triangulations}. In a triangulation each cell is a simplex.
\end{definition}
Triangulations of the product $\Delta^{m-1}\times \Delta^{n-1}$ were
first discussed by Gelfand-Kapranov-Zelevinsky~\cite[7.3.D]{GKZ}
and then studied by several authors; e.g., see Santos~\cite{San}.
We will analyze triangulations of more general root polytopes $Q_G$.
The following 3 lemmas were originally discovered circa 1992
by the author in collaboration with Zelevinsky and Kapranov
in the context of triangulations of
$\Delta^{m-1}\times \Delta^{n-1}$.
Assume that the graph $G\subseteq K_{m,n}$ is connected.
First, let us describe the simplices inside the polytope $Q_G$.
\begin{lemma}
For a subgraph $H\subseteq G$, the convex hull of the
collection $\{e_i-e_{\bar j}\mid (i,\bar j)\textrm{ is an edge of }H\}$
of vertices of $Q_G$ is a simplex if and only if
$H$ is a forest in the graph $G$.
Such a simplex has maximal dimension $m+n-2$
if and only if $H$ is a spanning tree of $G$.
All $(m+n-2)$-dimensional simplices of this form have
the same volume $\frac{1}{(m+n-2)!}$.
\end{lemma}
\begin{proof} If $H$ contains a cycle $(i_1,\bar j_1),
(\bar j_1, i_2),(i_2,\bar j_2),\dots, (\bar j_k,i_1)$,
then the vectors $e_{i_1}- e_{\bar j_1}$,
$e_{\bar j_1}- e_{i_2}$, \dots , $e_{\bar j_k}-e_{i_1}$ corresponding
to the edges in this cycle are linearly dependent. (Their sum is zero.)
Thus the end-points of these vectors cannot be vertices of a simplex.
Conversely, for a forest, i.e., a graph without cycles, all vectors
are linearly independent and, thus, form a simplex.
\end{proof}
For a forest $F\subseteq G$, we will denote the simplex from
this lemma by
$$
\Delta_F: = \mathrm{ConvexHull}
(e_i-e_{\bar j}\mid (i,\bar j)\textrm{ is an edge of }F).
$$
A triangulation of $Q_G$ is a collection
of simplices $\{\Delta_{T_1},\dots,\Delta_{T_s}\}$, for some spanning trees
$T_1,\dots,T_s$ of $G$, such that
$Q_G= \cup \Delta_{T_i}$ and
each intersection $\Delta_{T_i}\cap \Delta_{T_j}$
is a common face of these two simplices.
Let us now describe pairs of simplices that intersect properly.
For two spanning trees $T$ and $T'$ of $G$, let $U(T,T')$ be the {\it
directed} graph with the edge set $\{(i,\bar j)\mid
(i,\bar j)\in T\}\cup
\{(\bar j,i)\mid (i,\bar j)\in T'\}$, i.e.,
$U(T,T')$ is the union of edges $T$ and $T'$
with edges of $T$ oriented from left to right and edges of $T'$ oriented
from right to left. A directed {\it cycle} is a sequence
of directed edges $(i_1,i_2),(i_2,i_3),\dots,(i_{k-1},i_k),(i_k,i_1)$
such that all $i_1,\dots,i_k$ are distinct.
\begin{lemma} For two trees $T$ and $T'$, the intersection
$\Delta_T\cap \Delta_{T'}$ is a common face of the simplices
$\Delta_{T}$ and $\Delta_{T'}$ if and only if the directed
graph $U(T,T')$ has no directed cycles of length $\geq 4$.
\end{lemma}
\begin{proof}
Suppose that $U(T,T')$ has a directed cycle of length $\geq 4$.
Then the graphs $T$ and $T'$ have nonempty partial matchings (i.e.,
subgraphs with pairwise disjoint edges) $M$ and $M'$ such that
(1) $M$ and $M'$ have no common edges; and
(2) $M$ and $M'$ are matchings on the same
vertex set. Then both $M$ and $M'$ should have $k\geq 2$ edges.
Let $x=\frac {1}{k}\sum_{(i,\bar j)\in M} (e_i - e_{\bar j}) =
\frac {1}{k} \sum_{(i,\bar j)\in M'} (e_i - e_{\bar j})$. Thus
$x\in \Delta_{T}\cap \Delta_{T'}$.
However, the minimal face of the simplex $\Delta_T$ that contains $x$
is $\Delta_M$ and the minimal face of $\Delta_{T'}$ that contains $x$ is
$\Delta_{M'}$. Since $M\ne M'$, we have $\Delta_M \ne \Delta_{M'}$.
Thus the intersection of the simplices $\Delta_{T}$ and $\Delta_{T'}$
{\it is not\/} their common face.
Conversely, assume that $U(T,T')$ has no directed cycles of length $\geq 4$.
Let $F=T\cap T'$ be the forest formed by the common edges of $T$
and $T'$. Because $U(T,T')$ is acyclic, we can find a function
$h:\{1,\dots,m,\bar 1,\dots,\bar n\}\to \mathbb{R}$ such that
(1) $h$ is constant on connected components of the forest $F$;
and (2) for any directed edge $(a,b)\in U(T,T')$ that joins
two different connected components of $F$, we have $h(a)<h(b)$.
The second condition says that if $(a,b)=(i,\bar j)$ is an edge of $T$
then $h(i)<h(\bar j)$, and if $(a,b) = (\bar j, i)$ is an edge of $T'$
then $h(i)>h(\bar j)$. The function $h$ defines a linear form
$f_h$ on the space $\mathbb{R}^{m+n}$ with the coordinates $h(1),\dots,h(m),
h(\bar 1),\dots,h(\bar n)$ in the standard basis. The above conditions
imply that
(1) for any vertex $x$ in the common face $\Delta_F$ of $\Delta_T$ and
$\Delta_{T'}$, we have $f_h(x) =0$,
(2) for any vertex $x\in \Delta_{T}\setminus \Delta_F$, we have $f_h(x)<0$;
and (3) for any vertex
$x\in \Delta_{T'}\setminus \Delta_F$, we have $f_h(x)>0$.
In other words, the hyperplane $f_h(x) = 0$ intersects the simplices
$\Delta_T$ and $\Delta_{T'}$ at their common face and separates
the remaining vertices of these simplices. This implies that
$\Delta_{T}\cap \Delta_{T'} = \Delta_F$, as needed.
\end{proof}
\begin{definition}
For a spanning tree $T\in K_{m,n}$, let us define the
{\it left degree vector} $LD=(d_1,\dots,d_m)$ and the
{\it right degree vector} $RD=(d_{\bar 1},\dots,d_{\bar n})$,
where $d_i=\deg_i(T)-1$ and $d_{\bar j}=\deg_{\bar j}(T)-1$ are the
degrees of the vertices $i$ and $\bar j$ in $T$ minus 1.
Note that $LD(T)$ and $RD(T)$ are nonnegative integer vectors
because all degrees of vertices in a tree are strictly positive.
\end{definition}
\begin{lemma}
\label{lem:different_RD_LD}
Let $\{\Delta_{T_1},\dots,\Delta_{T_s}\}$
be a triangulation of $Q_G$.
Then, for $i\ne j$, $T_i$ and $T_j$ have different
left degree vectors $LD(T_i)\ne LD(T_j)$ and different right degree
vectors $RD(T_i)\ne RD(T_j)$.
\end{lemma}
\begin{proof}
It is enough to prove that it is impossible to find two different
spanning trees $T$ and $T'$ that have the
same degrees in, say, the left part, $\deg_i(T)=\deg_i(T')$, for $i=1,\dots,m$,
and such that the directed graph $U(T,T')$ has no directed cycles of length
$\geq 4$. Suppose that we found two such trees.
Let $F$ be the forest formed by the common edges of $T$ and $T'$.
The directed graph $U(T,T')$ induces an acyclic directed graph
on connected components of $F$. Because
of the acyclicity of this graph, we can find a minimal connected
component $C$ of $F$ such that no directed edge of
$U(T,T')$ enters any vertex of $C$ from outside of this component.
Since $T\ne T'$, the
component $C$ cannot include all vertices.
Thus some vertex $i$ of $C$ should be joined by an edge
$(i,\bar j)\in T\setminus F$ with a vertex in some other component.
Since we assumed that $\deg_i(T) = \deg_i(T')$, there
is an edge $(i,\bar k)\in T'\setminus F$. But this edge
should be oriented as $(\bar k, i)$ in the graph $U(T,T')$,
i.e., it enters the vertex $i$ of $C$. Contradiction.
\end{proof}
An alternative proof of Lemma~\ref{lem:different_RD_LD} follows
from Lemma~\ref{lem:shifts_right_degree} below.
For a bipartite graph $G\subseteq K_{m,n}$, let $G^*\subseteq K_{n,m}$
be the same graph with the left and right components switched, i.e., $G^*$
is the mirror image of $G$. Recall that the trimmed
generalized permutohedron $P_G^-$ is the
Minkowski difference of the generalized permutohedron $P_G$
and the simplex $\Delta_{[n]}$.
\begin{theorem}
\label{th:Left-Right-degrees}
For any triangulation
$\{\Delta_{T_1},\dots,\Delta_{T_s}\}$ of the root polytope $Q_G$,
the set of right degree vectors $\{RD(T_1),\dots,RD(T_s)\}$
is exactly the set of lattice points in the trimmed
generalized permutohedron $P_{G}^-$
(without repetitions).
Similarly, the set of left degree vectors $\{LD(T_1),\dots,LD(T_s)\}$
is exactly the set of lattice points in the polytope $P_{G^*}^-$
for the mirror image of the graph $G$.
\end{theorem}
We will prove this theorem in Section~\ref{sec:subdivision}.
This theorem says that each triangulation
$\tau = \{\Delta_{T_1},\dots,\Delta_{T_s}\}$ of the root polytope $Q_G$
gives a bijection
$$\phi_\tau: P_{G}^-\cap \mathbb{Z}^n \to P_{G^*}^-\cap \mathbb{Z}^m
$$
between lattice points of the polytope $P_{G}^-$ and
the lattice points of the polytope $P_{G^*}^-$ such that
$\phi_\tau:RD(T_i)\mapsto LD(T_i)$, for $i=1,\dots,s$.
It is interesting to investigate which properties of a triangulation
$\tau$ can be recovered from the bijection $\phi_\tau$.
Also it is interesting to intrinsically describe the class of
bijections associated with triangulations of $Q_G$.
\begin{example} Suppose that $G=K_{m,n}$.
Theorem~\ref{th:Left-Right-degrees} says that
each triangulation of the product
$\Delta^{m-1}\times\Delta^{n-1}$ of two simplices
gives a bijection between the lattice points of the two inflated
simplices $P_{K_{m,n}}^- = (m-1)\Delta^{n-1}$ and
$P_{K_{n,m}}^- = (n-1)\Delta^{m-1}$;
see Example~\ref{examp:delta_times_delta}.
\end{example}
Another instance of a similar phenomenon related to maximal minors
of matrices was investigated by Bernstein-Zelevinsky~\cite{BZ}.
\section{Root polytopes for non-bipartite graphs}
\label{sec:root_non_bipartite}
Let us show how to extend the above results to root polytopes $\tilde Q_G$
for a more general class of graphs $G$ that may not be bipartite.
Assume that $G$ is a connected graph on the vertex set $[n]$ that satisfies
the following condition:
$$
\textrm{For $i<j<k$, if $(i,j)$ and $(j,k)$ are edges of $G$,
then $(i,k)$ is also an edge of $G$.}
$$
The polytope $\tilde Q_G$ has dimension $n-1$.
Let us say that a triangulation of the polytope $\tilde Q_G$ is
{\it central} if any $(n-1)$-dimensional simplex in this triangulation
contains the origin $0$.
\begin{definition}
Let us say that a tree is {\it alternating} if there are no
$i<j<k$ such that $(i,j)$ and $(j,k)$ are edges in $T$.
Equivalently, the labels along any path in an alternating tree $T$ alternately increase and decrease.
\end{definition}
Alternating trees were first introduced in~\cite{GGP}
in order to describe triangulations of $\tilde Q_{K_n}$.
They also appeared in~\cite{Pos} and~\cite{PS1}.
For a spanning tree $T\subseteq G$, let
$\tilde \Delta_T=\mathrm{ConvexHull}(0,e_i-e_j\mid (i,j)\in T, i<j)$.
\begin{lemma}
{\rm cf.~\cite{GGP}} \
A simplex $\tilde\Delta_T$ may appear in a central
triangulation of $\tilde Q_G$ if and only if $T$ is an alternating tree.
All these simplices have the same volume $\frac{1}{(n-1)!}$.
\end{lemma}
\begin{proof} Suppose that a tree $T$ is not alternating.
Let us find a pair of edges $(i,j)$ and $(j,k)$ in $T$ with $i<j<k$.
Let $T'$ be the tree obtained from $T$ by replacing the edge $(i,j)$
with $(i,k)$ and $T''$ be the tree obtained for $T$ by replacing
the edge $(j,k)$ with $(i,k)$.
Then two simplices $\tilde \Delta_{T'}$ and $\tilde \Delta_{T''}$
intersect at their common face. Their union $\tilde \Delta_{T'}
\cup\tilde \Delta_{T''}$ properly contains the simplex $\tilde \Delta_T$.
Moreover, for a sufficiently small neighborhood $B$ of the origin,
$(\tilde \Delta_{T'} \cup\tilde \Delta_{T''})\cap B = \tilde\Delta_T \cap B$.
If the simplex $\tilde\Delta_T$ belongs to some central triangulation, then
we can replace it by the pair of simplices
$\tilde\Delta_{T'}$ and $\tilde\Delta_{T''}$ and obtain a finer
subdivision, which is impossible for a triangulation.
\end{proof}
For an alternating tree $T$, we say that a vertex $i\in[n]$ is
a {\it left vertex} if, for any edge $(i,j)$ in $T$, we have $i<j$.
Otherwise, if, for any edge $(i,j)$ in $T$, we have $i>j$, we say
that $i$ is a {\it right vertex}.
For a disjoint decomposition $[n]=L\cup R$, let $G_{L,R}$ be
the subgraph of $G$ given by
$$
G_{L,R} = \{(i,j)\in G\mid i\in L, j\in R, i<j\}.
$$
The graph $G_{L,R}$ is a bipartite graph with the parts
$L$ and $R$.
Spanning trees of the graph $G_{L,R}$ are exactly alternating trees
of $G$ with fixed sets $L$ and $R$ of left and right vertices.
Note that in general there are $2^{n-2}$ possible choices of the subsets
$L$ and $R$ because we always have $1\in L$ and $n\in R$ and for any other
vertex we have 2 options. However, some of these choices may lead
to disconnected graphs $G_{L,R}$ that contain no spanning trees.
Since each alternating tree in $G$ belongs to one of the graphs $G_{L,R}$,
we deduce that each simplex $\tilde \Delta_{T}$ in a central triangulation
of $\tilde Q_G$ belongs to one of the polytopes $\tilde Q_{G_{L,R}}$.
Thus we obtain the following claim.
\begin{proposition}
\label{prop:LR-decomposes}
The polytope $\tilde Q_G$ decomposes into the union of polytopes
$\tilde Q_G = \bigcup_{L,R} \tilde Q_{G_{L,R}}$
over disjoint decompositions $[n]=L\cup R$ such
that the graph $G_{L,R}$ is connected.
The terms of this decomposition are in bijection with
the facets of $\tilde Q_G$ that do not contain
the origin. Each such facet $F$ has the form $F=Q_{G_{L,R}}$ and
$\tilde Q_{G_{L,R}}$ is the pyramid with the base $F$.
Each central triangulation of $\tilde Q_G$ is obtained by selecting
a triangulation of each part $Q_{G_{L,R}}$.
\end{proposition}
Since each graph $G_{L,R}$ is bipartite, we can apply the
results of this section and relate the volume of $\tilde Q_{G_{L,R}}$
to the number of lattice points in a certain (trimmed) generalized
permutohedron. By Proposition~\ref{prop:LR-decomposes},
we can express the volume of the root polytope
$\tilde Q_G$ as a sum of numbers of lattice points in several
trimmed generalized permutohedra.
\begin{example} In~\cite{GGP} we constructed a triangulation of the polytope
$\tilde Q_{K_n}$, for the complete graph $G=K_n$.
This triangulation is formed by the simplices $\tilde\Delta_{T}$,
for all {\it noncrossing\/} alternating trees $T$, i.e., alternating
trees that contain no pair of crossing edges $(i,k)$ and $(j,l)$,
for $i<j<k<l$. The number of such trees equals the $(n-1)$-st
Catalan number $C_{n-1}$.
For a disjoint decomposition $[n]=L\cup R$, let $K_{L,R}$ be the bipartite
graph with the edge set $\{(i,j)\mid i\in L, j\in R, i<j\}$.
According to Proposition~\ref{prop:LR-decomposes},
we have $\tilde Q_{K_n} = \bigcup_{L,R} \tilde Q_{K_{L,R}}$, where different
terms have no common interior points. The collection of
simplices $\tilde \Delta_{T}$, for all noncrossing spanning trees
$T$ of the graph $K_{L,R}$, form a triangulation of the polytope
$\tilde Q_{K_{L,R}}$.
\end{example}
This example and Theorem~\ref{th:Vol_Q=N(P)}
imply the following statement.
\begin{corollary}
For any disjoint decomposition $[n]=L\cup R$ such that $1\in L$
and $n\in R$, the number
of noncrossing spanning trees of the graph $K_{L,R}$ equals
the number of lattice points in the trimmed
generalized permutohedron $P_{K_{L,R}}^-$.
\end{corollary}
For example, if $L=\{1,\dots,l\}$ and $R=\{l+1,\dots,n\}$,
then $K_{L,R} = K_{l,n-l}$ is the complete bipartite graph.
We deduce that the number of noncrossing trees in
the complete bipartite graph $K_{l,n-l}$ equals the number of
lattice points in the polytope $P_{K_{l,n-l}}^- = (l-1)\Delta^{n-l-1}$,
which equals $\binom {n-2} {l-1}$.
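For small $n$ this count can be verified by listing spanning trees explicitly.
The Python sketch below (ours) enumerates the spanning trees of $K_{l,n-l}$ on
the vertices $L=\{1,\dots,l\}$ and $R=\{l+1,\dots,n\}$, discards those with a
crossing pair of edges, and compares the result with $\binom{n-2}{l-1}$.
\begin{verbatim}
from itertools import combinations
from math import comb

def is_spanning_tree(T, n):
    parent = list(range(n + 1))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for a, b in T:
        ra, rb = find(a), find(b)
        if ra == rb:
            return False
        parent[ra] = rb
    return len(T) == n - 1

def noncrossing_spanning_trees(l, n):
    edges = [(i, j) for i in range(1, l + 1) for j in range(l + 1, n + 1)]
    count = 0
    for T in combinations(edges, n - 1):
        if not is_spanning_tree(T, n):
            continue
        if any(i < p < k < q for (i, k) in T for (p, q) in T):
            continue                      # edges (i,k) and (p,q) cross
        count += 1
    return count

assert all(noncrossing_spanning_trees(l, n) == comb(n - 2, l - 1)
           for n in range(3, 8) for l in range(1, n))
\end{verbatim}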
\section{Mixed subdivisions of generalized permutohedra}
\label{sec:subdivision}
In this section we study mixed subdivisions of generalized
permutohedra into parts isomorphic to direct products
of simplices. For this we use the Cayley trick that relates
mixed subdivisions of the Minkowski sum
of several polytopes $P_1+\cdots + P_m$ to all polyhedral
subdivision of a certain polytope $\mathcal{C}(P_1,\dots,P_m)$ of higher dimension.
The Cayley trick was first developed by Sturmfels~\cite{Stu}
for coherent subdivisions and by Huber-Rambau-Santos~\cite{HRS}
for arbitrary subdivisions. Santos~\cite{San} used this trick to study
triangulations of the product of two simplices.
\begin{definition}
Let $d$ be the dimension of the Minkowski sum $P_1+\cdots + P_m$.
A {\it Minkowski cell\/} in this Minkowski sum
is a polytope $B_1+\cdots +B_m$ of the top dimension $d$,
where each $B_i$ is the convex hull of
some subset of vertices of $P_i$. A {\it mixed subdivision\/} of
the Minkowski sum is its decomposition into a union of several
Minkowski cells such that the intersection of any two cells is
their common face. Mixed subdivisions form a poset with respect
to refinement.
A {\it fine mixed subdivision\/} is a minimal element in this poset.
\end{definition}
\begin{lemma}
\label{lem:fine_mixed_cells}
A mixed subdivision is fine if and only if, for each mixed
cell $B=B_1+\cdots + B_m$ in this subdivision, all
$B_i$ are simplices and $\sum\dim B_i =\dim B = d$.
\end{lemma}
\begin{proof}
We leave this claim as an exercise,
or refer to~\cite[Proposition~2.3]{San}.
\end{proof}
The mixed cells described in this lemma are called
{\it fine mixed cells.} The lemma implies that each
fine mixed cell $B_1+\cdots + B_m$ is isomorphic to
the direct product $B_1\times \cdots\times B_m$
of simplices, i.e., the simplices $B_i$ span
independent affine subspaces. In order to emphasize this fact,
we will use the direct product notation
for fine cells.
Let $e_1,\dots,e_m,e_{\bar 1},\dots,e_{\bar n}$ be the standard
basis of $\mathbb{R}^{m+n}=\mathbb{R}^m\times \mathbb{R}^n$. Embed the space $\mathbb{R}^n$, where
the polytopes $P_1,\dots,P_m$ live, into $\mathbb{R}^{m+n}$
as the subspace with the basis $e_{\bar 1}, \dots,e_{\bar n}$.
\begin{definition}
Following Sturmfels~\cite{Stu} and Huber-Rambau-Santos~\cite{HRS},
we define the
{\it Cayley embedding} of $P_1,\dots,P_m$
as the polytope $\mathcal{C}(P_1,\dots,P_m)$ given by
$$
\mathcal{C}(P_1,\dots,P_m) = \mathrm{ConvexHull}(e_i + P_i\mid i=1,\dots,m).
$$
\end{definition}
Let $(y_1,\dots,y_m)\times \mathbb{R}^n$ denote the $n$-dimensional affine
subspace in $\mathbb{R}^{m+n}$ such that the first $m$ coordinates are
equal to some fixed parameters $y_1,\dots,y_m$.
(Here we think of the $y_i$ not as coordinates but as fixed parameters.)
\begin{lemma} {\rm \cite{Stu, HRS}} \
For any choice of parameters $y_1,\dots,y_m\geq 0$ such that $\sum y_i = 1$,
the intersection of $\mathcal{C}(P_1,\dots,P_m)$ with the affine
subspace $(y_1,\dots,y_m)\times \mathbb{R}^n$
is exactly the weighted Minkowski sum $y_1P_1+\cdots + y_m P_m$
(shifted into this affine subspace).
\end{lemma}
\begin{proof}
Indeed, by the definition, the polytope $\mathcal{C}(P_1,\dots,P_m)$
is the locus of points
of the form $\sum_{i=1}^m \lambda_i(e_i + p_i)$, where $p_i\in P_i$,
$\lambda_i\geq 0$ and $\sum \lambda_i = 1$.
Requiring a point of this form to lie in $(y_1,\dots,y_m)\times \mathbb{R}^n$
means that we fix $\lambda_i = y_i$, for $i=1,\dots,m$. This gives
the needed Minkowski sum.
\end{proof}
The next proposition expresses the {\it Cayley trick}.
\begin{proposition}
\label{prop:cayley_trick}
{\rm \cite{HRS}} \
Fix strictly positive parameters $y_1,\dots,y_m>0$
such that $\sum y_i = 1$.
For a polyhedral subdivision of $\mathcal{C}(P_1,\dots,P_m)$,
intersecting its cells with $(y_1,\dots,y_m)\times \mathbb{R}^n$
we obtain a mixed subdivision of $y_1 P_1 + \cdots + y_m P_m$.
This gives a poset isomorphism between polyhedral subdivisions of
$\mathcal{C}(P_1,\dots,P_m)$ and mixed subdivisions of $y_1P_1+\cdots + y_m P_m$.
\end{proposition}
\begin{proof}
The first claim that a polyhedral subdivision of $\mathcal{C}(P_1,\dots,P_m)$
gives a mixed subdivision of $y_1 P_1 + \cdots + y_m P_m$ is immediate.
On the other hand, we can recover a polyhedral subdivision
of $\mathcal{C}(P_1,\dots,P_m)$ from a mixed subdivision
of $y_1 P_1 + \cdots + y_m P_m$.
We can always rescale cells of the mixed subdivision by changing values
of $y_1,\dots,y_m$ and obtain a mixed subdivision of
$y_1' P_1 + \cdots + y_m' P_m$, for any nonnegative $y_1',\dots,y_m'$.
As we vary $y=(y_1,\dots,y_m)$ over all points of the simplex
$y_1,\dots,y_m\geq 0$, $y_1+\dots+y_m=1$, the unions
$\cup_{y\in\Delta^{m-1}} yB$, for each mixed cell $B$,
form cells of the polyhedral subdivision of $\mathcal{C}(P_1,\dots,P_m)$;
see~\cite{HRS} for details.
\end{proof}
Let $G\subseteq K_{m,n}$ be a connected bipartite graph.
Let $I_1,\dots,I_m\subseteq [n]$ be the associated collection of
nonempty subsets: $I_i = \{j\mid (i,\bar j)\in G\}$, for $i=1,\dots,m$.
Then the Cayley embedding of the simplices $\Delta_{I_1},\dots,
\Delta_{I_m}$ is exactly
the root polytope $Q_G$ from Section~\ref{sec:root_polytope}:
$$
Q_G = \mathcal{C}(\Delta_{I_1},\dots,\Delta_{I_m}).
$$
Recall that the generalized permutohedron $P_G(y_1,\dots,y_m)$ is given by
$$
P_G(y_1,\dots,y_m) = y_1\Delta_{I_1} + \cdots + y_m\Delta_{I_m},
$$
for nonnegative $y_i$.
Proposition~\ref{prop:cayley_trick} specializes to
the following claim.
\begin{corollary}
\label{cor:cayley_trick_permutohedron}
For any strictly positive $y_1,\dots,y_m$,
mixed subdivisions of the generalized permutohedron
$P_G(y_1,\dots,y_m)$ are in one-to-one correspondence
with polyhedral subdivisions of the root polytope $Q_G$.
In particular, fine mixed subdivisions of $P_G(y_1,\dots,y_m)$
are in one-to-one correspondence with triangulations of $Q_G$.
This correspondence is given by intersecting a polyhedral
subdivision of $Q_G$ with the subspace
$(\frac{y_1}{s},\dots, \frac{y_m}{s})\times \mathbb{R}^n$, where $s=\sum y_i$,
and then inflating the intersection by the factor $s$.
\end{corollary}
In particular, this implies that the number of cells
in a fine mixed subdivision of $P_G$ equals $(m+n-2)!\,\mathrm{Vol}\, Q_G$.
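As a small illustration (our own running toy example), let $G=K_{2,2}$, so that
$I_1=I_2=\{1,2\}$ and $P_G(y_1,y_2)=(y_1+y_2)\Delta_{\{1,2\}}$ is a segment.
The root polytope $Q_G$ is a square with $(m+n-2)!\,\mathrm{Vol}\, Q_G=2$, and it has
two triangulations, each consisting of two simplices $\Delta_T$. Correspondingly,
$P_G(y_1,y_2)$ has two fine mixed subdivisions, for example
$\{\,y_1\Delta_{\{1,2\}}+y_2\Delta_{\{1\}},\ y_1\Delta_{\{2\}}+y_2\Delta_{\{1,2\}}\,\}$,
whose cells come from two of the four spanning trees of $K_{2,2}$.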
Let us describe fine mixed cells that appear in subdivisions
of $P_G(y_1,\dots,y_m)$. For a sequence of nonempty subsets
$\mathcal{J}=(J_1,\dots,J_m)$,
let $G_{\mathcal{J}}$ be the graph with the edges
$(i,\bar j)$, for $j\in J_i$.
\begin{lemma}
\label{lem:fine_mixed_cells=trees}
Each fine mixed cell in a mixed subdivision of
$P_G(y_1,\dots,y_m)$
has the form $y_1\Delta_{J_1}\times\cdots \times y_m\Delta_{J_m}$,
for some sequence of nonempty subsets $\mathcal{J}=(J_1,\dots,J_m)$ in $[n]$,
such that $G_{\mathcal{J}}$ is a spanning tree of $G$.
\end{lemma}
\begin{proof} By Lemma~\ref{lem:fine_mixed_cells}, each
fine cell has the form $y_1\Delta_{J_1}\times \dots \times y_m\Delta_{J_m}$
where $J_i\subseteq I_i$, for $i=1,\dots,m$, i.e., $G_{\mathcal{J}}$
is a subgraph of $G$, the simplices $\Delta_{J_i}$ span independent
affine subspaces, and $\sum \dim \Delta_{J_i} = \sum (|J_i|-1) = n-1$.
This is equivalent to the condition that $G_{\mathcal{J}}$ is
a tree.
\end{proof}
Let us denote the fine cell associated with a spanning tree
$T\subseteq G$, as described in the above lemma, by
$$
\Pi_{T}:= y_{1} \Delta_{J_1}\times \cdots \times y_{m} \Delta_{J_m},
$$
where $J_i = \{j\mid (i,\bar j)\in T\}$, for $i=1,\dots,m$.
These fine cells $\Pi_T$ are exactly the cells
associated with the simplices $\Delta_T\subset Q_G$
from Section~\ref{sec:root_polytope} via the Cayley trick:
$$
\Pi_T = s \left(\Delta_T\cap \left(\frac{y_1}{s},\dots,\frac{y_m}{s}\right)\times \mathbb{R}^n\right),
$$
where $s=\sum y_i$. So it is not surprising that the fine cells
$\Pi_T$ are labeled by the same objects---spanning trees of $G$.
Let us explain the meaning of the left degree vector
$LD(T)=(d_1,\dots,d_m)$ and the right degree vector
$RD(T)=(d_{\bar 1},\dots,d_{\bar n})$ of a tree $T\subseteq G$
in terms of the fine cell $\Pi_T$.
\begin{lemma}
\label{lem:left_degree}
Let $LD(T)=(d_{1},\dots,d_{m})$ be the left
degree vector of a tree $T$, then
$$
\mathrm{Vol}\, \Pi_T = \frac{y_1^{d_{1}}}{d_{1}!}\cdots
\frac{y_m^{d_{m}}}{d_{m}!}.
$$
\end{lemma}
\begin{proof}
Indeed, $d_{i} = |J_i|-1 = \dim \Delta_{J_i}$, for $i=1,\dots,m$, and the
normalized volume of the simplex $y_i\Delta_{J_i}$ equals $y_i^{d_i}/d_i!$.
\end{proof}
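For instance, for the cell $y_1\Delta_{\{1,2\}}\times y_2\Delta_{\{1\}}$ from the
$K_{2,2}$ example above, the corresponding spanning tree $T$ has the left degree
vector $LD(T)=(1,0)$, and indeed
$\mathrm{Vol}\, \Pi_T = \frac{y_1^{1}}{1!}\cdot\frac{y_2^{0}}{0!}=y_1$,
the normalized length of this cell.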
\begin{lemma}
\label{lem:shifts_right_degree}
Let us specialize $y_1=\cdots = y_m = 1$.
For a spanning tree $T\subseteq G$, the fine cell
$\Pi_T$ contains the shift
$(a_1,\dots,a_n) + \Delta_{[n]}$ of the simplex $\Delta_{[n]}$
by an integer vector $(a_1,\dots,a_n)\in\mathbb{Z}^n$ if and only if
$(a_1,\dots,a_n)$ is the right degree vector $RD(T)$ of the tree $T$.
Moreover, if $(a_1,\dots,a_n)\in\mathbb{Z}^n$ is not the right degree vector of $T$,
then the shift $(a_1,\dots,a_n) + \Delta_{[n]}$ has no common
interior points with the cell $\Pi_T$.
\end{lemma}
\begin{proof}
Notice that, for two subsets $I,J\subseteq[n]$ with a nonempty
intersection, we have the following inclusion of Minkowski sums:
$$
\Delta_I+\Delta_J\supseteq
\Delta_{I\cup J} + \Delta_{I\cap J}.
$$
Indeed, the polytope $\Delta_{I\cup J} + \Delta_{I\cap J}$ is the convex hull
of all possible sums $e_i+e_j$, where $e_i$ is a vertex of
$\Delta_{I\cup J}$ and
$e_j$ a vertex of $\Delta_{I\cap J}$, i.e., $i\in I\cup J$ and $j\in I\cap J$.
We have either ($i\in I$ and $j\in J$), or ($i\in J$ and $j\in I$), or both.
In all cases, we have $e_i+e_j\in \Delta_I+\Delta_J$.
For the fine cell $\Pi_T = \Delta_{J_1}\times \cdots \times \Delta_{J_m}
=\Delta_{J_1}+\cdots +\Delta_{J_m}$,
pick two summands $\Delta_{J_i}$ and $\Delta_{J_j}$ with a nonempty
intersection $J_i\cap J_j$ (which must consist of a single element $k$, since $T$ is a tree)
and replace them by $\Delta_{J_i\cup J_j}$ and
$\Delta_{J_i\cap J_j}$. We obtain another cell $\Pi_{T'}\subseteq \Pi_T$,
where the tree $T'$ is obtained from $T$ by replacing all edges
$(j,\bar l)\in T$, for $l\ne k$, with the edges $(i,\bar l)$.
Notice that the tree $T'$ has exactly the same right degree vector
$RD(T')=RD(T)$. Let us keep repeating this operation until we
obtain a cell of the form $\Pi_{T''} =
\Delta_{\{i_1\}} + \dots + \Delta_{\{i_{m-1}\}} + \Delta_{[n]} \subseteq \Pi_T$,
i.e., all summands are single vertices except for one simplex $\Delta_{[n]}$.
Since the tree $T''$ has the same right degree vector
$(d_{\bar 1},\dots,d_{\bar n})=RD(T'')=RD(T)$ as the tree $T$,
we deduce that $\#\{j\mid i_j=i\} = d_{\bar i}$, for $i=1,\dots,n$.
In other words, $\Pi_{T''} =
(d_{\bar 1},\dots,d_{\bar n}) + \Delta_{[n]}\subseteq \Pi_T$.
It remains to show that any other shift
$(a_1,\dots,a_n)+\Delta_{[n]}$, for an integer vector
$(a_1,\dots,a_n)\ne
(d_{\bar 1},\dots,d_{\bar n})$, has no common interior points with
the cell $\Pi_T$. Suppose that there exists such a shift
with a common interior point $b\in \Pi_T\cap ((a_1,\dots,a_n)+\Delta_{[n]})$.
Let $r=(d_{\bar 1}-a_1,\dots,d_{\bar n}-a_n)\in
\mathbb{Z}^{n}\setminus \{(0,\dots,0)\}$. Then the point $b+r$ is an interior
point of $(d_{\bar 1}, \dots, d_{\bar n}) + \Delta_{[n]}\subseteq \Pi_T$.
Thus the whole line interval $[b,b+r]$ belongs to the interior
of the fine cell $\Pi_T=\Delta_{J_1}\times\cdots \times \Delta_{J_m}$.
Here $b\in \mathbb{R}^n$ and $r$ is a nonzero integer vector.
Thus at least one projection $[b',b'+r']$ of the interval $[b,b+r]$
to some component $\Delta_{J_i}$ of the direct product has a nonzero length.
Here $r'$ is a nonzero integer vector, and $[b',b'+r']$
belongs to the interior of the simplex $\Delta_{J_i}$.
But this is impossible. No coordinate simplex can contain such an
interval strictly in its interior. Indeed, the diameter of a
coordinate simplex in the usual Euclidean metric on $\mathbb{R}^n$ is
$\sqrt{2}$. The only nonzero integer vectors
that have smaller length are the coordinate vectors $\pm e_j$.
If $b'$ belongs to a coordinate simplex then $b'\pm e_j$
does not belong to it, because the vector $\pm e_j$
does not lie in the hyperplane
where all coordinate simplices live. We obtain a contradiction.
\end{proof}
Let us now prove Theorems~\ref{th:Left-Right-degrees}
and~\ref{th:gen_ehrhrart}.
\begin{proof}[Proof of Theorem~\ref{th:Left-Right-degrees}]
It is enough to prove the statement about right degree vectors
and deduce the statement about left degree vectors by symmetry.
By Corollary~\ref{cor:cayley_trick_permutohedron},
simplices in a triangulation $\{\Delta_{T_1},\dots,\Delta_{T_s}\}$
of the root polytope $Q_G$ are in one-to-one correspondence with cells
in the corresponding fine mixed subdivision $\{\Pi_{T_1},\dots,\Pi_{T_s}\}$
of the generalized permutohedron $P_G$.
By Lemma~\ref{lem:shifts_right_degree}, each cell $\Pi_{T_i}$
contains the shifted simplex $a+\Delta_{[n]}$,
where $a=RD(T_i)$, and each integer shift
$a+\Delta_{[n]}\subseteq P_G$
belongs to one of the cells $\Pi_{T_i}$. Notice that
the set of integer vectors $a\in\mathbb{Z}^n$ such that
$a+\Delta_{[n]}\subseteq P_G$ is exactly the set of lattice points
of the trimmed generalized permutohedron $P_G^-$.
This proves that the map $\Delta_{T_i}\mapsto RD(T_i)$ is a bijection
between simplices in the triangulations and lattice points of $P_G^{-}$,
as needed.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{th:gen_ehrhrart}]
Let us fix a fine mixed subdivision
$\{\Pi_{T_1},\dots,\Pi_{T_s}\}$ of the polytope $P_G(y_1,\dots,y_m)$.
According to Lemma~\ref{lem:left_degree}, the volume of
$P_G(y_1,\dots,y_m)$ can be written as
$$
\mathrm{Vol}\, P_G(y_1,\dots,y_m) = \sum_{i=1}^s \frac {y_1^{d_1(T_i)}}{d_1(T_i)!}\cdots
\frac{y_m^{d_m(T_i)}}{d_m(T_i)!}.
$$
Let us compare this expression with the expression for
$\mathrm{Vol}\, P_G(y_1,\dots,y_m)$ given by
Theorem~\ref{th:second-formula-generalized}.
We deduce that the map $\Pi_{T_i}\mapsto LD(T_i)$ is a bijection
between fine cells $\Pi_{T_i}$ in this subdivision and $G$-draconian sequences.
According to the Cayley trick and Theorem~\ref{th:Left-Right-degrees},
the number of fine cells in this subdivision equals the number of simplices
in a triangulation of $Q_G$, which in turn equals the number of lattice points
of $P_G^{-}(1,\dots,1)$.
We deduce that the number of $G$-draconian sequences equals
the number of lattice points of $P_G^{-}(1,\dots,1)$.
This is exactly the claim of Theorem~\ref{th:gen_ehrhrart}
in the case when $y_1=\dots=y_m=1$.
The case of general $y_1,\dots,y_m$ follows from this special
case. Indeed, we can write any weighted Minkowski sum $y_1\Delta_{I_1} +
\cdots + y_m\Delta_{I_m}$, for nonnegative integers $y_1,\dots,y_m$, as the
Minkowski sum of $y_1$ copies of $\Delta_{I_1}$, $y_2$ copies of
$\Delta_{I_2}$, etc. When we do this transformation
the right-hand sides of expressions given by Theorem~\ref{th:gen_ehrhrart}
agree. For example, if we replace the term $y_1 \Delta_{I_1}$
in the Minkowski sum with the sum $z_1\Delta_{I_1} + z_2\Delta_{I_1}$,
where $y_1=z_1+z_2$,
then we can correspondingly modify the right-hand side using the identity
$\frac {(y_{1})_{a_1}}{a_1!}
= \binom{y_{1}+a_1-1}{a_1} =
\sum_{b_{1}+b_2=a_1} \frac {(z_{1})_{b_1}}{b_1!}\,
\frac {(z_2)_{b_2}}{b_2!}$.
\end{proof}
\begin{remark} We can also deduce that the number of
$G$-draconian sequences equals $(m+n-2)!\,\mathrm{Vol}\, Q_G$,
i.e., the number of simplices in a triangulation of $Q_G$,
using integration.
Let us calculate the volume $\mathrm{Vol}\, Q_G$ by integrating
the volume of its slice $P_G(y_1,\dots,y_m)$ given by
Theorem~\ref{th:second-formula-generalized} over all points
of the $(m-1)$-dimensional simplex $\Delta_{[m]}$:
$$
\mathrm{Vol}\, Q_G = \int_{(y_1,\dots,y_m)\in\Delta_{[m]}}\,\,
\mathrm{Vol}\, P_G(y_1,\dots,y_m) dy_1\cdots dy_{m-1}.
$$
Now we can use the fact that the integral
of a monomial $\frac{y_1^{a_1}}{a_1!}\cdots \frac{y_m^{a_m}}{a_m!}$
over the simplex $\Delta_{[m]}$ equals $((m-1+\sum a_i)!)^{-1}$.
Since every monomial in $\mathrm{Vol}\, P_G(y_1,\dots,y_m)$ has total degree $n-1$,
each term integrates to $((m+n-2)!)^{-1}$, so $(m+n-2)!\,\mathrm{Vol}\, Q_G$
equals the number of terms in the sum, that is, the number of $G$-draconian sequences.
\end{remark}
Note also that the first part of the above proof together
with Theorem~\ref{th:Left-Right-degrees} gives
an alternative proof of Lemma~\ref{lem:G-draconain=G*}
saying that the set of $G$-draconian sequences is
the set of lattice points in $P_{G^*}^-(1,\dots,1)$.
\begin{example}
Let us assume that $I_1,\dots,I_m$, $m=2^n-1$, are all
nonempty subsets of $[n]$ and $G$ is the associated
bipartite graph.
The $G$-draconian sequences of integers are in one-to-one
correspondence with all {\it unordered\/} collections of subsets
in $[n]$ satisfying the dragon marriage condition.
For a draconian sequence $(a_1,\dots,a_m)$ there
are $\binom{n-1}{a_1,\dots,a_m}$ associated ordered
sequences of subsets.
In this case, $P_G = P_n(2^{n-1},2^{n-2},\dots,2,1)$
and $P_G^- = P_n(2^{n-1}-1,2^{n-2},\dots,2,1)$ (both are
usual permutohedra).
The number of draconian sequences is exactly the
number of lattice points in the permutohedron
$P_n(2^{n-1}-1,2^{n-2},\dots,2,1)$.
\end{example}
Another approach to counting lattice points in generalized
permutohedra is based on constructing its fine mixed subdivision
and paying a special attention to lower dimensional cells.
Let us say that a {\it semi-polytope} is a bounded subset of points
in a real vector space given by a finite collection of affine equalities
and weak and strict affine inequalities. Define coordinate {\it semi-simplices} as
$$
\Delta_{I,j}^{semi} =\Delta_I\setminus \Delta_{I\setminus\{j\}} =
\left\{\sum_{i\in I} x_{i}\,e_{i} \mid \sum_{i\in I} x_{i} =1;\
x_{i}\geq 0,\textrm{ for $i\in I$; and } x_j>0\right\},
$$
for $j\in I\subseteq[n]$.
\begin{proof} [Alternative semiproof of Theorem~\ref{th:gen_ehrhrart}]
Let $P_G(y_1,\dots,y_m)=y_1\Delta_{I_1}+\cdots +y_m\Delta_{I_m}$.
Assume that $I_1=[n]$.
It seems feasible that there exists a {\it disjoint}
decomposition of the polytope $P_G(y_1,\dots,y_m)$ into
semipolytopes of the form
\begin{equation}
\label{eq:disjoint_decom_PG}
P_G(y_1,\dots,y_m) = \bigcup_{(J_1,\dots,J_m)} y_{1} \Delta_{J_1}\times
y_{2} \Delta_{J_2,j_2}^{semi} \times \cdots \times y_{m} \Delta_{J_m,j_m}^{semi},
\end{equation}
where the union is over sequences of subsets $(J_1,\dots,J_m)$ and indices
$j_2,\dots,j_m$ such that $j_i\in J_i\subseteq I_i$,
and bipartite graphs associated with $(J_1,\dots,J_m)$ are
spanning trees $T$ of $G$. In particular, the closure of each term
is a fine mixed cell $\Pi_T$ of top dimension.
Here is a not quite rigorous reason why this should be true.
Let us start with the top dimensional simplex $y_1\Delta_{I_1}$,
$I_1=[n]$. When we add the simplex $y_2\Delta_{I_2}$, we
create several new fine cells. Each of these cells is the direct product
$y_1\Delta_{J_1}\times y_2 \Delta_{J_2}$ of a face of $y_1\Delta_{I_1}$ and
a face of $y_2\Delta_{I_2}$ glued to $y_1\Delta_{I_1}$
by one of its facets $y_1\Delta_{J_1}\times y_2\Delta_{J_2\setminus\{j_2\}}$.
This is why we exclude elements of this facet. When we add
$y_3\Delta_{I_3}$ we again create several new fine cells.
Again each of these new
cells is a direct product of one of the faces of the polytope
created on the earlier stage and a face $y_3\Delta_{J_3}$ of $y_3\Delta_{I_3}$.
Again each of these cells should be glued by a facet of
$y_3\Delta_{J_3}$, etc.
Let us show that just an existence of a decomposition for the
form~(\ref{eq:disjoint_decom_PG}) already implies
Theorem~\ref{th:gen_ehrhrart}. Indeed, the number of lattice points
in one of the terms of this decomposition equals
$\frac{(y_1+1)_{a_1}}{a_1!} \frac{(y_2)_{a_2}}{a_2!}\cdots
\frac{(y_m)_{a_m}}{a_m!}$ and its volume is
$\frac{{(y_1+1)}^{a_1}}{a_1!} \frac{y_2^{a_2}}{a_2!}\cdots
\frac{y_m^{a_m}}{a_m!}$, where $a_i = \dim \Delta_{J_i} = |J_i|-1$.
Thus the formula for the number of lattice points in
$P_G(y_1,\dots,y_m)$ is obtained from the formula for
the volume given by
Theorem~\ref{th:second-formula-generalized} by
replacing usual powers with raising powers, as needed.
\end{proof}
In order to make this proof more rigorous, we need to
carefully analyze all possible cases. Preferably one would
like to have an explicit construction for a decomposition of the
form~(\ref{eq:disjoint_decom_PG}).
In Section~\ref{sec:shifted_tableaux}, we will need the following
statement.
\begin{proposition}
\label{prop:lattice=sum_vertices}
Any integer lattice point of the generalized
permutohedron $P_G= \Delta_{I_1}+\cdots +\Delta_{I_m}$
has the form $e_{j_1}+\cdots + e_{j_m}$, where $j_k\in I_k$,
for $k=1,\dots,m$.
\end{proposition}
\begin{remark} Proposition~\ref{prop:lattice=sum_vertices}
says that any lattice point of the generalized permutohedron is the sum
of vertices of its Minkowski summands.
Note that a similar claim is not true for an arbitrary Minkowski sum.
For example, the Minkowski sum of the two line intervals $[(0,1),(1,0)]$
and $[(0,0),(1,1)]$ contains the lattice point $(1,1)$, which cannot
be represented as a sum of their vertices.
\end{remark}
\begin{proof}[Proof of Proposition~\ref{prop:lattice=sum_vertices}]
Each lattice point of $P_G$ belongs to a fine mixed cell in
a fine mixed subdivision of $P_G$; see Section~\ref{sec:subdivision}.
According to Lemma~\ref{lem:fine_mixed_cells=trees}, each fine mixed
cell is a direct product $\Delta_{J_1}\times \cdots \times \Delta_{J_m}$
of simplices, where $J_i\subseteq I_i$, for $i=1,\dots,m$,
and the graph $T=G_{(J_1,\dots,J_m)}\subseteq K_{m,n}$ is a bipartite tree.
Any lattice point $(b_1,\dots,b_n)$ of
$\Delta_{J_1}\times \cdots \times \Delta_{J_m}$ comes from a function
$f:\{(i,\bar j)\}\to\mathbb{R}_{\geq 0}$ defined on edges of the tree $T$ such that
(1) $f(i,\bar j)\geq 0$, (2) $\sum_j f(i,\bar j) = 1$, for $i=1,\dots,m$, and
(3) $\sum_i f(i,\bar j) = b_j$, for $j=1,\dots,n$.
Since $T$ is a tree and the sum of values of $f$ over edges at any node
of $T$ is
integer, we deduce that $f$ has all nonnegative integer values.
(First, we prove this for leaves of $T$, then for leaves of the tree
obtained by removing the leaves of $T$, etc.)
Thus, for any $i=1,\dots,m$, we have $f(i,\bar j_i)=1$, for some $j_i$,
and $f(i,\bar j)=0$, for $j\ne j_i$.
Thus $(b_1,\dots,b_n) = e_{j_1}+\cdots + e_{j_m}$, as needed.
\end{proof}
\section{Application: diagonals of shifted Young tableaux}
\label{sec:shifted_tableaux}
A {\it standard shifted Young tableau\/} of the triangular shape
$(n,n-1,\dots,1)$ is a bijective map
$T:\{(i,j)\mid 1\leq i\leq j\leq n\} \to\{1,\dots, \binom{n+1}{2}\}$
increasing in the rows and the columns, i.e., $T((i,j))<T((i+1,j))$ and
$T((i,j))<T((i,j+1))$, whenever the entries are defined.
Let us say that the {\it diagonal vector\/} of such a tableau $T$
is the vector $\mathrm{diag}(T)=(d_1,\dots,d_n):=(T(1,1),T(2,2),\dots,T(n,n))$;
see Example~\ref{exam:shifted-tableau} below.
It is clear that $d_1=1$, $d_n=\binom{n+1}{2}$, and $d_{i+1}>d_i$.
In this section we describe all possible diagonal vectors.
For a nonnegative integer $(n-1)$-vector $(a_1,\dots,a_{n-1})$,
let $N(a_1,\dots,a_{n-1})$ be the number of standard shifted
Young tableaux $T$ of the triangular shape with the diagonal vector
$\mathrm{diag}(T)=(1,a_1+2,a_1+a_2+3,\dots,a_1+\cdots+a_{n-1} + n)$,
or, equivalently, $a_i = d_{i+1}-d_i-1$, for $i=1,\dots,n-1$.
\begin{theorem}
\label{th:shifted=productij}
We have the following identity:
$$
\sum_{a_1,\dots,a_{n-1}\geq 0}
N(a_1,\dots,a_{n-1})\, \frac{t_1 ^{a_1}} {a_1!}\cdots
\frac{t_{n-1} ^{a_{n-1}}} {a_{n-1}!} =
\prod_{1\leq i<j\leq n} \frac{t_i+t_{i+1}+\cdots + t_{j-1}}{j-i}.
$$
\end{theorem}
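Before giving the proof, here is a quick check for $n=3$ (a direct enumeration).
The shifted staircase shape $(3,2,1)$ has exactly two standard shifted tableaux,
with the diagonal vectors $(1,4,6)$ and $(1,3,6)$. Thus $N(2,1)=N(1,2)=1$, the
left-hand side equals $\frac{t_1^2\,t_2}{2!\,1!}+\frac{t_1\,t_2^2}{1!\,2!}$, and this
agrees with the right-hand side
$\frac{t_1}{1}\cdot\frac{t_2}{1}\cdot\frac{t_1+t_2}{2}$.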
\begin{proof}
Let $\lambda=(\lambda_1\geq \dots\geq \lambda_n)$ be a partition.
The {\it Gelfand-Tsetlin polytope\/} $GT(\lambda)$ is defined
as the set of triangular arrays
$(p_{ij})_{i,j\geq 1,\ i+j\leq n+1}\in \mathbb{R}^{\binom{n+1}{2}}$
such that the first row is $(p_{11},p_{12},\dots,p_{1n}) = \lambda$
and entries in consecutive rows are interlaced
$p_{i1}\geq p_{i+1\, 1}\geq p_{i2}\geq p_{i+1\,2}\geq \cdots$,
for $i=1,\dots,n-1$.
Let us calculate the volume of the polytope $GT(\lambda)$ in
two different ways. First, recall that lattice points of $GT(\lambda)$
correspond to elements of the Gelfand-Tsetlin basis of the irreducible
representation $V_\lambda$ of $GL(n)$ with the highest weight $\lambda$.
Thus the number of the lattice points is given by the Weyl dimension
formula:
$\#(GT(\lambda)\cap \mathbb{Z}^{\binom{n+1}{2}}) = \prod_{1\leq i<j\leq n}
\frac{\lambda_i-\lambda_j + j-i}{j-i}$.
We deduce that the volume of $GT(\lambda)$ is given by the top homogeneous
component of this polynomial in $\lambda_1,\dots,\lambda_n$:
$$
\mathrm{Vol}\, GT(\lambda) = \prod_{1\leq i<j\leq n} \frac{\lambda_i-\lambda_j}{j-i}.
$$
On the other hand, note that the shape of an array
$(p_{ij})\in GT(\lambda)$ is equivalent to the shape of
a shifted tableau.
Let us subdivide $GT(\lambda)$ into parts by the hyperplanes
$p_{ij}=p_{kl}$, for all $i,j,k,l$. A region of this
subdivision of the Gelfand-Tsetlin polytope $GT(\lambda)$
corresponds to a choice of a total ordering of the $p_{ij}$ compatible
with all inequalities. Such orderings are in one-to-one correspondence
with standard shifted Young tableaux of the triangular shape
$(n,n-1,\dots,1)$.
For a tableau $T$ with the diagonal vector $\mathrm{diag}(T)=(d_1,\dots,d_n)$,
the associated region of $GT(\lambda)$ is isomorphic to
$\{(y_1<\dots < y_{\binom{n+1}{2}})\mid y_{d_i} = \lambda_{i},\textrm{ for }
i=1,\dots,n\}$,
that is, to the direct product of simplices
$(\lambda_1-\lambda_2)\Delta^{d_{2}-d_1-1}\times \cdots \times
(\lambda_{n-1}-\lambda_n)\Delta^{d_{n}-d_{n-1}-1}$.
The volume of this direct product equals
$$
\prod_{i=1}^{n-1} \frac {(\lambda_i-\lambda_{i+1})^{d_{i+1}-d_i-1}}
{(d_{i+1}-d_i-1)!}.
$$
Thus the volume of $GT(\lambda)$ can be written as the sum of these
expressions over standard shifted tableaux.
Comparing these two expressions for $\mathrm{Vol}\, GT(\lambda)$ and
writing them in the coordinates $t_i = \lambda_i-\lambda_{i+1}$,
we obtain the needed identity.
\end{proof}
Theorem~\ref{th:shifted=productij} implies that $N(a_1,\dots,a_{n-1})$ can
be nonzero only if $(a_1,\dots,a_{n-1})$ is a lattice point
of the Newton polytope
$$
\mathrm{Ass}_{n-1}:=\mathrm{Newton}\left(\prod_{1\leq i<j\leq n}
(t_i+t_{i+1}+\cdots +t_{j-1})\right) =
\sum_{1\leq i<j\leq n}\Delta_{[i,j-1]}.
$$
This Newton polytope
is exactly the associahedron in the Loday realization,
for $n-1$; see Subsection~\ref{ssec:associahedron}.
Using Proposition~\ref{prop:lattice=sum_vertices}, we
obtain the following statement.
\begin{corollary} The number of different diagonal vectors
in standard shifted Young tableaux of the shape $(n,n-1,\dots,1)$
is exactly the number of integer lattice points in the associahedron
$\mathrm{Ass}_{n-1}$. More precisely, $N(a_1,\dots,a_{n-1})$ is nonzero
if and only if $(a_1,\dots,a_{n-1})$ is an integer lattice point
of $\mathrm{Ass}_{n-1}$.
\end{corollary}
It would be interesting to extend this claim to other
shifted shapes.
\begin{example}
\label{exam:lattice_points_ass}
Let $D_n$ be the number of different diagonal vectors, or, equivalently,
the number of integer lattice points in $\mathrm{Ass}_{n-1}$,
or, equivalently, the number of nonzero monomials
in the expansion of the product $\prod_{1\leq i< j\leq n}
\sum_{k=i}^{j-1} t_k$.
Several numbers $D_n$ are given below.
\smallskip
\begin{center}
\begin{tabular}{|c||l|l|l|l|l|l|l|l|l|}
\hline
$n$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\
\hline
$D_n$ & 1 & 1 & 2 & 8 & 55 & 567 & 7958 & 142396 & 3104160 \\
\hline
\end{tabular}
\end{center}
\end{example}
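As a quick check of the table (a direct expansion), $D_4=8$: dividing out the common
factor $t_1t_2t_3$, the remaining product $(t_1+t_2)(t_2+t_3)(t_1+t_2+t_3)$ expands
into exactly eight distinct monomials, namely
$t_1^2t_2,\ t_1t_2^2,\ t_2^3,\ t_1^2t_3,\ t_1t_2t_3,\ t_2^2t_3,\ t_1t_3^2,\ t_2t_3^2$.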
Theorem~\ref{th:shifted=productij} also implies that $N(a_1,\dots,a_{n-1})$
equals $\prod_{i=1}^{n-1} (a_i)!/(1!2!\cdots (n-1)!)$ times the number
of ways to write the point $(a_1,\dots,a_{n-1})$ as a sum of vertices
of the simplices $\Delta_{[i,j-1]}$. In particular, if
$(a_1,\dots,a_{n-1})$ is a vertex of the associahedron
$\mathrm{Ass}_{n-1}$ then the second factor is $1$.
Recall that vertices of $\mathrm{Ass}_{n-1}$ correspond to plane
binary trees on $n-1$ nodes; see Subsection~\ref{ssec:associahedron}.
For a plane binary tree on $n-1$ nodes, let $L_i, R_i$, $i=1,\dots,n-1$,
be the left and right branches of the nodes arranged in the binary
search order; see Subsection~\ref{ssec:associahedron}.
Also let $l_i = |L_i|+1$ and $r_i = |R_i|+1$.
\begin{corollary}
The numbers of standard shifted Young tableaux with diagonal vectors
corresponding to the vertices of the associahedron are given by
$$
N(l_1\cdot r_1,\dots,l_{n-1}\cdot r_{n-1}) = \frac{(l_1\cdot r_1)!\cdots
(l_{n-1}\cdot r_{n-1})!}{1!\,2!\cdots (n-1)!} =
f_{l_1\times r_1}\cdots f_{l_{n-1}\times r_{n-1}},
$$
where $f_{k\times l}$ is the number of standard Young tableaux
of the rectangular shape $k\times l$.
\end{corollary}
The second expression can be obtained from the first using
the hook-length formula for the number of standard Young tableaux.
We can also deduce it directly, as follows.
Recall that binary trees on $n-1$ nodes are associated with
subdivisions of the shifted shape $(n-1,n-2,\dots,1)$
into $n-1$ rectangles of sizes $l_1\times r_1$, \dots,
$l_{n-1}\times r_{n-1}$; see Subsection~\ref{ssec:associahedron}.
Each shifted tableau with the diagonal vector
$(d_1,\dots,d_n) =
(1,\ 2 + l_1\cdot r_1,\ 3 + l_1\cdot r_1 + l_2 \cdot r_2,\ \cdots)$
is obtained from such a subdivision by adding $n$ diagonal boxes filled
with the numbers $d_1,\dots,d_n$ and filling the $i$-th rectangle
$l_i\times r_i$ with the numbers $d_{i}+1,d_{i}+2,\dots,d_{i+1}-1$
so that they form a rectangular standard tableau,
for $i=1,\dots,n-1$.
\begin{example}
\label{exam:shifted-tableau}
The diagonal vector
$(1,3,10,12,15,36,40,43,45)$ is associated with
the plane binary tree and the subdivision into rectangles
from Example~\ref{ex:plane_binary_trees}.
Here is a shifted tableau with this diagonal vector
obtained by filling the rectangles of this subdivision:
\medskip
\begin{center}
{\tiny
\input{fig15-1.pstex_t}
}
\end{center}
\end{example}
\section{Mixed Eulerian numbers}
\label{sec:Mixed_Eulerian}
Let us return to the usual
permutohedron $P_{n+1} = P_{n+1}(x_1,\dots,x_{n+1})$.
Let us use the coordinates $u_1,\dots,u_n$ related to $x_1,\dots,x_{n+1}$ by
$$
u_1= x_1-x_2,\
u_2 = x_2 - x_3,\
\cdots,\
u_n = x_n - x_{n+1}
$$
This coordinate system is canonically defined for an arbitrary Weyl group
as the coordinate system in the weight space given by the fundamental weights.
The permutohedron $P_{n+1}$ can be written as the Minkowski sum
$$
P_{n+1} = u_1\,\Delta_{1,n+1} + u_2\,\Delta_{2,n+1}+ \cdots + u_n \,\Delta_{n,n+1}
$$
of the {\it hypersimplices} $\Delta_{k,n+1} := P_{n+1}(1,\dots,1,0,\dots,0)$
with $k$ `$1$'s.
For example, the hexagon can be expressed as the Minkowski sum
of the hypersimplices $\Delta_{1,3}$ and $\Delta_{2,3}$,
which are two triangles with opposite orientations:
\begin{center}
\input{fig5-1.pstex_t}
\end{center}
According to Proposition~\ref{prop:mixed_volume},
the volume of $P_{n+1}$ can be written as
$$
\mathrm{Vol}\, P_{n+1} =
\sum_{c_1,\dots,c_n}
{A_{c_1,\dots,c_n}}\, \frac{u_1^{c_1}}{c_1!}\cdots \frac{u_n^{c_n}}{c_n!},
$$
where the sum is over $c_1,\dots,c_n\geq 0$, $c_1+\cdots+c_n=n$, and
$$
{A_{c_1,\dots,c_n}} = n!\,V(\Delta_{1,n+1}^{c_1},\dots,
\Delta_{n,n+1}^{c_n})\in \mathbb{Z}_{> 0}
$$
is the mixed volume of hypersimplices multiplied by $n!$.
Here $P^l$ means the polytope $P$ repeated $l$ times.
\begin{definition}
Let us call the integers $A_{c_1,\dots,c_n}$
the {\it mixed Eulerian numbers}.
\end{definition}
The mixed Eulerian numbers are nonnegative integers because
hypersimplices are integer polytopes.
In particular,
$n!\, \mathrm{Vol}\, P_{n+1}$ is a polynomial in $u_1,\dots,u_n$ with
positive integer coefficients.
\begin{example}
We have
$$
\begin{array}{l}
\mathrm{Vol}\, P_2 = {\bf 1}\,u_1;\\
\mathrm{Vol}\, P_3 = {\bf 1}\, \frac{u_1^2}{2} + {\bf 2}\, u_1 u_2 +
{\bf 1} \,\frac{u_2^2}{2};\\
\mathrm{Vol}\, P_4 = {\bf 1}\,\frac{u_1^3}{3!} + {\bf 2}\,\frac{u_1^2}{2} u_2 +
{\bf 4}\,{u_1}\frac{u_2^2}{2} + {\bf 4}\, \frac{u_2^3}{3!} +
{\bf 3}\,\frac{u_1^2}{2}u_3 + {\bf 6} \,{u_1u_2u_3} \\
\qquad \qquad \qquad
\qquad \qquad \qquad
\qquad
+ {\bf 4}\, \frac{u_2^2}{2}u_3 + {\bf 3}\, u_1\frac{u_3^2}{2} +
{\bf 2}\,{u_2}\frac{u_3^2}{2} + {\bf 1}\frac{u_3^3}{3!}.
\end{array}
$$
Here the mixed Eulerian numbers are marked in bold.
\end{example}
Recall that the usual {\it Eulerian number\/} $A(n,k)$ is
defined as the number of permutations in $S_n$ with exactly $k-1$ descents.
It is well-known that $n!\,\mathrm{Vol}\, \Delta_{k,n+1}=A(n,k)$;
see Laplace~\cite[p.~257ff]{Lap}.
\begin{theorem}
\label{th:mixed_eul_properties}
The mixed Eulerian numbers have the following
properties:
\begin{enumerate}
\item The numbers $A_{c_1,\dots,c_n}$ are positive integers defined
for $c_1,\dots,c_n\geq 0$, $c_1+\cdots+c_n=n$.
\item We have $A_{c_1,\dots,c_n} = A_{c_n,\dots,c_1}$.
\item For $1\leq k\leq n$, the number $A_{0^{k-1},n,0^{n-k}}$
is the usual Eulerian number $A(n,k)$.
Here and below $0^l$ denotes the sequence of $l$ zeros.
\item We have $\sum
\frac{1}{c_1!\cdots c_n!}\,A_{c_1,\dots,c_n} = (n+1)^{n-1}$,
where the sum is over $c_1,\dots,c_n\geq 0$ with $c_1+\cdots + c_n = n$.
\item We have $\sum A_{c_1,\dots,c_n} = n! \, C_n$,
where again the sum is over all $c_1,\dots,c_n\geq 0$ with $c_1+\cdots + c_n = n$
and $C_n= \frac{1}{n+1}\,\binom{2n}{n}$ is the Catalan number.
\item
For $1\leq k\leq n$ and $i=0,\dots,n$,
the number $A_{0^{k-1},n-i,i,0^{n-k-1}}$ is equal to the number of permutations
$w\in S_{n+1}$ with $k$ descents and $w(n+1)=i+1$.
\item We have $A_{1,\dots,1} = n!$.
\item We have $A_{k,0,\dots,0,n-k} = \binom{n}{k}$.
\item We have $A_{c_1,\dots,c_n} = 1^{c_1} 2^{c_2} \cdots n^{c_n}$
if $c_1+\cdots + c_i \geq i$, for $i=1,\dots,n-1$,
and $c_1+\cdots + c_n = n$.
There are exactly $C_n$ such sequences $(c_1,\dots,c_n)$.
\end{enumerate}
\end{theorem}
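As a sanity check (using the $n=3$ data from the Example above), properties (4)
and (5) read
$$
\sum \frac{A_{c_1,c_2,c_3}}{c_1!\,c_2!\,c_3!} =
\tfrac16+1+2+\tfrac23+\tfrac32+6+2+\tfrac32+1+\tfrac16 = 16 = 4^{2},
\qquad
\sum A_{c_1,c_2,c_3} = 30 = 3!\cdot C_3.
$$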
\begin{proof}
Properties (1) and (2) follow
from the definition of the mixed Eulerian numbers.
Property (3) follows from the fact that $n!\,\mathrm{Vol}\, \Delta_{k,n+1}=A(n,k)$.
Property (4) follows from the fact that the volume of the regular permutohedron
$P_{n+1}(n,n-1,\dots,0)$, which corresponds to $u_1=\dots=u_n=1$,
equals $(n+1)^{n-1}$; see Proposition~\ref{prop:vol-Z-G}.
Property (5) follows from Theorem~\ref{th:equivalence-classes} below.
It was conjectured by R.~Stanley.
Property (6) is equivalent to the result by Ehrenborg, Readdy, and Steingr\'imsson
\cite[Theorem~1]{ERS} about mixed volumes of two adjacent hypersimplices.
Property (7) is a special case of Property~(9).
(8) According to Theorem~\ref{th:vol=descent_number},
we have
$$
\mathrm{Vol}\, P_{n+1}(x_1,0,\dots,0,x_{n+1}) =
\sum_{k=0}^n (-1)^{n-k}\,D_{n+1}([k+1,n])\,
\frac {x_1^{k}}{k!}\, \frac {x_{n+1}^{n-k}}{(n-k)!},
$$
where $D_{n+1}([k+1,n])=\binom{n}{k}$ is the number of permutations
$w\in S_{n+1}$ such that $w_1<\cdots < w_{k+1}>w_{k+2}>\cdots >w_{n+1}$.
This permutohedron corresponds to
$u_1 = x_1$, $u_2 = \cdots = u_{n-1}=0$, $u_n=-x_{n+1}$, which implies
that $A_{k,0,\dots,0,n-k} = \binom nk$.
(9) Let us use Theorem~\ref{th:second-formula}.
The $y$-variables are related to the $u$-variables as
$$
\left\{
\begin{array}{l}
y_2 = u_1,\\
y_3 = u_2 - u_1,\\
y_4= u_3-2u_2 + u_1,\\
\vdots\\
\displaystyle y_{n+1} = \sum_{i=0}^{n-1} (-1)^i \binom {n-1}{i}\, u_{n-i}
\end{array}
\right.
$$
Using these relations, we can express
any coefficient $[u_n^{c_1}\cdots u_1^{c_n}]\,V_{n+1}$
of the polynomial $V_{n+1} = \mathrm{Vol}\, P_{n+1}$ written in
the $u$-coordinates as a combination
of coefficients $[y_{n+1}^{c_1'}\cdots y_2^{c_n'}]\,V_{n+1}$
of this polynomial written in the $y$-coordinates.
Let us assume that $(c_1,\dots,c_{n})$ satisfies
$c_1+\cdots + c_{i} \geq i$, for $i=1,\dots,n-1$, and $c_1+\cdots+ c_n = n$.
Then any sequence $(c_1',\dots,c_n')$ that appears in this expression
satisfies the same conditions.
For such a sequence, we have
$$
[y_{n+1}^{c_1'}\cdots y_2^{c_n'}] \,V_{n+1}=
\frac 1{c_1'!\cdots c_n'!} \, {\binom {n+1}{n+1}}^{c_1'}
{\binom{n+1}{n}}^{c_2'} \cdots {\binom{n+1}{2}}^{c_n'}.
$$
Indeed, any collection of subsets $J_1,\dots,J_n\subseteq[n+1]$
such that $c_i'$ of them have the cardinality $n+2-i$,
for $i=1,\dots,n$, automatically satisfies the dragon
marriage condition; see Theorem~\ref{th:second-formula}.
Thus we have
$$
\begin{array}{l}
A_{c_1,\dots,c_n} =
\left(\frac {\partial}{\partial u_n}\right)^{c_1}
\cdots
\left(\frac {\partial}{\partial u_1}\right)^{c_n} \,V_{n+1}
= \left(
\left(\frac {\partial}{\partial y_{n+1}}\right)^{c_1}
\left(
\frac {\partial}{\partial y_n}
- \binom{n-1}{1} \frac {\partial}{\partial y_{n+1}}
\right)^{c_2} \right.\times
\\[.1in]
\quad\times\left.
\left(
\frac {\partial}{\partial y_{n-1}}
- \binom {n-2}{1} \frac {\partial}{\partial y_{n}}
+ \binom{n-1}{2} \frac {\partial}{\partial y_{n+1}}
\right)^{c_3}
\cdots \right)
\,V_{n+1} =\\[.1in]
\quad={\binom{n+1}{n+1}}^{c_1}
\left(\binom{n+1}{n} - \binom{n-1}{1}\binom{n+1}{n+1}\right)^{c_2}
\left(\binom{n+1}{n-1} - \binom{n-2}{1}\binom{n+1}{n} +
\binom{n-1}{2}\binom{n+1}{n+1}\right)^{c_3}\cdots = \\[.1in]
\quad = 1^{c_1} 2^{c_2} \cdots n^{c_n}.
\end{array}
$$
In the last equality we used the binomial identity
$$
\sum_{i=0}^{k-1} (-1)^i\binom{n-k+i}{i} \binom{n+1}{n+2-k+i} = k,
\quad\textrm{for } 1\leq k\leq n,
$$
which we leave as an exercise.
\end{proof}
Let ``$\sim$'' be the equivalence relation on the set of
nonnegative integer sequences $(c_1,\dots,c_n)$ with $c_1+\cdots +c_n =n$
given by $(c_1,\dots,c_n)\sim (c_1',\dots,c_n')$
whenever $(c_1,\dots,c_n,0)$ is a cyclic shift of
$(c_1',\dots,c_n',0)$.
\begin{theorem}
\label{th:equivalence-classes}
For a fixed $(c_1,\dots,c_n)$, we have
$$
\sum_{(c_1',\dots,c_n')\sim (c_1,\dots,c_n)} A_{c_1',\dots,c_n'} = n!
$$
In other words, the sum of mixed Eulerian numbers in each
equivalence class is $n!$.
There are exactly the Catalan number $C_n=\frac{1}{n+1}\,\binom{2n}{n}$
equivalence classes.
\end{theorem}
This claim was conjectured by R.~Stanley.
For example, it says that $A_{1,\dots,1} = n!$ and that
$A_{n,0,\dots,0} + A_{0,n,0,\dots,0}+A_{0,0,n,\dots,0}+\cdots
+A_{0,\dots,0,n}= n!$, i.e.,
the sum of usual Eulerian numbers $\sum_k A(n,k)$ is $n!$.
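As a further check for $n=3$: the equivalence class of $(2,1,0)$ is
$\{(2,1,0),\,(0,2,1)\}$, and indeed $A_{2,1,0}+A_{0,2,1}=2+4=6=3!$,
in agreement with the values displayed in the Example above.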
\begin{remark}
The claim that there are $C_n$ equivalence classes
is well-known.
Every equivalence class contains exactly one sequence $(c_1,\dots,c_n)$
such that $c_1+\cdots+c_i\geq i$, for $i=1,\dots,n$.
For this special sequence, the
mixed Eulerian number is given by the simple product
$A_{c_1,\dots,c_n} = 1^{c_1} \cdots n^{c_n}$; see
Theorem~\ref{th:mixed_eul_properties}.(9).
\end{remark}
Theorem~\ref{th:equivalence-classes}
follows from the following claim.
\begin{proposition}
\label{prop:ciclic_symmetrization}
Let us write $\mathrm{Vol}\, P_{n+1}$ as a polynomial
$\hat V_{n+1}(u_1,\dots,u_{n+1})$ in $u_1,\dots, u_{n+1}$.
(This polynomial does not depend on $u_{n+1}$.)
Then the sum of cyclic shifts of this polynomial
equals
$$
\begin{array}{r}
\hat V_{n+1}(u_1,\dots,u_{n+1}) + \hat V_{n+1}
(u_{n+1},u_1,\dots,u_n) + \cdots +
\hat V_{n+1}(u_2,\dots,u_{n+1},u_1) =\quad \\
= (u_1+\cdots+u_{n+1})^n
\end{array}
$$
\end{proposition}
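As a quick check for $n=2$ (a direct computation):
$\hat V_3(u_1,u_2,u_3)=\frac{u_1^2}{2}+2u_1u_2+\frac{u_2^2}{2}$, and the sum of its
three cyclic shifts equals
$(u_1^2+u_2^2+u_3^2)+2(u_1u_2+u_2u_3+u_3u_1)=(u_1+u_2+u_3)^2$.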
This claim has a simple geometric explanation
in terms of alcoves of the affine Weyl group.
Cyclic shifts come from symmetries of the type $A_n$ extended
Dynkin diagram.
\begin{proof}
Let $W=S_{n+1}$ be the type $A_n$ Weyl group.
The associated {\it affine Coxeter arrangement\/}
is the hyperplane arrangement in the vector
space $\mathbb{R}^{n+1}/(1,\dots,1)\mathbb{R} \simeq \mathbb{R}^n$ given by
$t_i -t_j =k$, for $1\leq i<j\leq n+1$ and $k\in \mathbb{Z}$.
Here and below in this proof the coordinates
$t_1,\dots,t_{n+1}$ in $\mathbb{R}^{n+1}$ are understood modulo
$(1,\dots,1)\mathbb{R}$.
These hyperplanes subdivide the vector space into simplices,
which are called the {\it alcoves.}
The reflections with respect to these hyperplanes
generate the {\it affine Weyl group} $W_{\mathrm{aff}}$ that acts
simply transitively on the alcoves.
The {\it fundamental alcove\/} $A_\circ$ is given by the inequalities
$t_1>t_2>\cdots > t_{n+1}> t_1-1$. It is the $n$-dimensional simplex
with the vertices
$v_0= (0,\dots,0)$, $v_1=(1,0,\dots,0)$, $v_2=(1,1,0,\dots,0)$, \dots,
$v_n=(1,\dots,1,0)$.
For $i=1,\dots,n$, the map
$$
\phi_i:(t_1,\dots,t_{n+1})\mapsto
(t_{i+1},\dots,t_{n+1},t_1-1,\dots,t_i-1)
$$
preserves the fundamental
alcove and sends the vertex $v_i$ to the origin $v_0$.
We have $\mathrm{Vol}\, A_\circ = \frac{1}{|W|} = \frac{1}{(n+1)!}$,
assuming that we normalize the volume as in
Section~\ref{sec:weight-polytopes}.
Let us pick a point
$x=(x_1,\dots,x_{n+1})$ in $A_\circ$.
The $W_{\mathrm{aff}}$-orbit of $x$ has a unique representative in each alcove.
For any vertex $v$ of the affine Coxeter arrangement, i.e., for a
0-dimensional intersection of its hyperplanes, the convex hull of the elements
of the orbit $W_{\mathrm{aff}} \cdot x$ contained in the alcoves adjacent to $v$ is
a parallel translate of a permutohedron.
This collection of permutohedra
associated with vertices of the arrangement forms a subdivision
of the linear space.
For the origin $v=v_0$, we obtain the permutohedron
$P_{(0)}=P_{n+1}(x_1,\dots,x_{n+1})$, and, for the vertex $v_i$,
$i=1,\dots,n$, we obtain the permutohedron
$$
P_{(i)}=\phi_i^{-1}P_{n+1}(\phi_i(x)) =
\phi_i^{-1} P_{n+1}(x_{i+1},\dots,x_{n+1}, x_1-1,\dots,x_{i}-1).
$$
Note that, for $i=0,\dots,n$, we have
$\mathrm{Vol}\,( P_{(i)} \cap A_\circ) = \frac 1{|W|}\, \mathrm{Vol}\, P_{(i)}$.
Indeed, each permutohedron
$P_{(i)}$ is composed of $|W|$ isomorphic parts obtained by reflections
of $P_{(i)} \cap A_\circ$.
Thus the volume of the fundamental alcove times $|W|$
equals the sum of the volumes of the $n+1$ adjacent permutohedra.
For example, $6$ times the area of the blue triangle in the following picture
equals the sum of the areas of the three hexagons.
\begin{center}
\input{fig6-1.pstex_t}
\end{center}
\nopagebreak
In other words, we have
$1=|W|\cdot \mathrm{Vol}\, A_\circ = \sum_{i=0}^n\mathrm{Vol}\, P_{(i)}$.
The last expression can be written in the $u$-coordinates as
$$
\hat V_{n+1}(u_1,\dots,u_{n+1})
+ \hat V_{n+1}(u_2,\dots,u_{n+1},u_1) + \cdots
+ \hat V_{n+1} (u_{n+1},u_1,\dots,u_n),
$$
assuming that $u_1+\cdots + u_{n+1} = 1$. The case of arbitrary
$u_1,\dots,u_{n+1}$ is obtained by multiplying all $u_i$'s by the same
factor $\alpha$ which corresponds to multiplying the volume by
$\alpha^n$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{th:equivalence-classes}]
We obtain the required equality when we extract the coefficient
of $u_1^{c_1}\cdots u_n^{c_n} u_{n+1}^0$ on both sides
of the identity in
Proposition~\ref{prop:ciclic_symmetrization}.
\end{proof}
Proposition~\ref{prop:ciclic_symmetrization} together with Theorem~\ref{th:f1}
implies the following identity. It would be interesting to find a direct proof
of this claim.
\begin{corollary}
The symmetrization of the expression
$$
\frac{1}{n!} \,\frac{(\lambda_1 u_1 + (\lambda_1+\lambda_2)u_2 + \cdots
+ (\lambda_1+\cdots+\lambda_{n+1})u_{n+1})^n}{
(\lambda_1-\lambda_2)\cdots (\lambda_n-\lambda_{n+1})}
$$
with respect to $(n+1)!$ permutations of $\lambda_1,\dots,\lambda_{n+1}$
and $(n+1)$ {\it cyclic permutations} of $u_1,\dots, u_{n+1}$ equals
$(u_1+\cdots+u_{n+1})^{n}$.
\end{corollary}
\section{Weighted binary trees}
\label{sec:weighted_binary_trees}
Let us give a combinatorial interpretation for
the mixed Eulerian numbers based on plane binary trees.
Let $T$ be a plane binary tree on $[n]$ with
the binary search labeling of the nodes;
see Subsection~\ref{ssec:associahedron}.
There are the Catalan number $C_n$ of such trees.
For any node $i=1,\dots,n$, the set $\mathrm{desc}(i,T)$ of descendants of $i$
(including the node $i$ itself) is a consecutive interval $\mathrm{desc}(i,T) = [l_i,r_i]$
of integers. In particular, we have $l_i\leq i\leq r_i$.
For a pair of nodes $i$ and $j$ in $T$ such that $i\in \mathrm{desc}(j,T)$, i.e.,
$l_j\leq i\leq r_j$, define the weight
\begin{equation}
\label{eq:wij}
wt(i,j) =
\min\left(
\frac{i - l_{j} +1}{j - l_{j} + 1},
\frac{r_{j} - i +1}{r_{j} - j + 1}\right) =
\left\{
\begin{array}{cl}
\frac{i - l_{j} +1}{j - l_{j} + 1} & \textrm{if } i \leq j,\\[.1in]
\frac{r_{j} - i +1}{r_{j} - j + 1} & \textrm{if } i > j.
\end{array}
\right.
\end{equation}
Let $h(j,T):=|\mathrm{desc}(j,T)|$ be the ``hook-length'' of a node $j$ in a rooted tree $T$.
\begin{theorem}
\label{th:vol_perm_binary_trees}
The volume of the permutohedron $P_{n+1}$ is given by the following
polynomial in the variables $u_1,\dots,u_n$:
$$
\mathrm{Vol}\, P_{n+1} =
\sum_{T}
\frac{n!}{\prod_{j=1}^n h(j,T)}\,
\prod_{j=1}^n \left(\sum_{i\in \mathrm{desc}(j,T)} wt(i,j) \, u_i\right),
$$
where the sum is over $C_n$ plane binary trees $T$ with $n$ nodes.
\end{theorem}
\begin{example}
\label{ex:n=3_binary_trees}
For $n=3$, we have the following five binary trees, where we indicated the binary search
labeling inside the nodes and also indicated the hook-lengths of the nodes:
\begin{center}
\input{fig8-1.pstex_t}
\end{center}
Theorem~\ref{th:vol_perm_binary_trees} says that
$$
\begin{array}{l}
\mathrm{Vol}\, P_4 = (u_1)(\frac{1}{2}\,u_1 + u_2) (\frac{1}{3}\,u_1+\frac{2}{3}\,u_2 + u_3)
+ (u_1 + \frac{1}{2}\,u_2) (u_2)
(\frac{1}{3}\,u_1+\frac{2}{3}\,u_2+u_3) \\[.1in]
\qquad\quad {}+ (u_1 + \frac{2}{3}\,u_2 + \frac{1}{3}\,u_3) (u_2)(\frac{1}{2}u_2 + u_3)
+ (u_1 + \frac{2}{3}\,u_2 + \frac{1}{3}\,u_3)
(u_2 + \frac{1}{2} u_3) (u_3) \\[.1in]
\qquad\quad {}+ 2\cdot (u_1) (\frac{1}{2}\,u_1 + u_2 + \frac{1}{2}\,u_3)(u_3).
\end{array}
$$
\end{example}
\begin{corollary}
\label{cor:hook_lengths_binary_trees}
We have
$$
(n+1)^{n-1} = \sum_T \frac{n!}{2^{n}} \, \prod_{j\in T}
\left(1+\frac{1}{h(j,T)}\right),
$$
where the sum is over $C_n$ plane binary trees $T$ with $n$ nodes.
\end{corollary}
For $n=3$, the corollary says that
$(3+1)^2 = 3 + 3 + 3 +3 + 4$; see figure in Example~\ref{ex:n=3_binary_trees}.
\begin{proof}
Let us specialize Theorem~\ref{th:vol_perm_binary_trees} for
$u_1=\dots=u_n=1$. In this case, $P_{n+1}$ is the regular permutohedron
with volume $(n+1)^{n-1}$, see Proposition~\ref{prop:vol-Z-G}.
An easy calculation shows that $\sum_{i\in\mathrm{desc}(j,T)}wt(i,j) = \frac{h(j,T) +1}{2}$.
Thus the right-hand side of Theorem~\ref{th:vol_perm_binary_trees} gives the
needed expression.
\end{proof}
Various combinatorial proofs and generalizations of
Corollary~\ref{cor:hook_lengths_binary_trees} were given by
Seo~\cite{Seo}, Du-Liu~\cite{DL}, and Chen-Yang~\cite{CY}.
An {\it increasing labeling\/} of nodes in a rooted tree $T$ on $[n]$
is a permutation $v\in S_n$ such that, whenever $i\in\mathrm{desc}(j,T)$,
i.e., the node $i$ is a descendant of the node $j$,
we have $v(i)\geq v(j)$.
It is well-known that the number of increasing labelings
is given by the following ``hook-length formula;''
see Knuth~\cite[Exer.~5.1.4.(20)]{Knu} and Stanley~\cite[Prop.~22.1]{St-OS}.
It can be easily proved by induction.
\begin{lemma}
\label{lem:num_incr_lab}
The number of increasing labelings of a tree $T$
equals $\frac{n!}{\prod_{j=1}^n h(j,T)}$.
\end{lemma}
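For example, for the plane binary tree on $3$ nodes whose root has two children,
the hook-lengths are $3,1,1$, and the lemma gives $\frac{3!}{3\cdot 1\cdot 1}=2$
increasing labelings: the root receives the label $1$, and the two children receive
the labels $2$ and $3$ in either order.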
Let us say that an {\it increasing binary tree\/}
$(T,v)$ is a plane binary tree $T$ with the binary search labeling
as above and a choice of an increasing labeling $v$ of its nodes.
It is well-known that there are $n!$ increasing binary trees.
The map $(T,v)\mapsto v$ is a bijection between increasing binary trees
and permutations $v\in S_n$; cf.~\cite[1.3.13]{EC1}.
Let $\mathbf{i}=(i_1,\dots,i_n)\in [n]^n$ be a sequence of integers.
Let us say that an increasing binary tree $(T,v)$
is {\it $\mathbf{i}$-compatible\/}
if $i_{v(j)}\in [l_{j},r_{j}]$, for $j=1,\dots,n$.
Define the {\it $\bf i$-weight} of an $\bf i$-compatible increasing binary
tree $(T,v)$ as
$$
wt(\mathbf{i},T,v) = \prod_{j=1}^n wt(i_{v(j)},j).
$$
where $wt(i_{v(j)},j)$ is given by~(\ref{eq:wij}).
The number $n!\,wt(\mathbf{i}, T, v)$ is always a positive integer. The following lemma
can be easily proved by induction, cf.\ Lemma~\ref{lem:num_incr_lab}.
We leave it as an exercise.
\begin{lemma}
The number $n!$ divided by the product of all denominators in $wt(\mathbf{i},T,v)$
equals the number of labelings
of the nodes of $T$ by permutations $w\in S_n$ such that,
for any node $j$, for which we pick the first (respectively, second) case in
the definition of $wt(i_{v(j)}, j)$, the label $w(j)$ is less than labels $w(k)$
of all nodes $k$ in the left (respectively, right) branch of the node $j$.
\end{lemma}
\begin{example}
The following figure shows an $\bf i$-compatible increasing binary tree, for
${\bf i} = (3,4,8,7,1,7,4,3)$. The labels for the binary search labeling
are shown inside the nodes. The increasing labeling is
$v = 5,2,8,7,1,3,6,4$ (shown in blue color).
The intervals $[l_j,r_j]$ are
$[1,1]$, $[1,2]$, $[3,3]$, $[3,4]$, $[1,8]$, $[6,8]$, $[7,7]$, $[7,8]$.
We also marked each node $j$ by the variable $u_{i_{v(j)}}$ (shown in red color).
The $\bf i$-weight of this tree is $wt({\bf i},T,v)=
\frac{3}{5}\cdot
\frac{1}{3}\cdot
\frac{1}{3}\cdot
\frac{1}{2}\cdot
\frac{1}{1}\cdot
\frac{1}{1}\cdot
\frac{2}{2}\cdot
\frac{1}{1}$.
\begin{center}
\input{fig7-1.pstex_t}
\end{center}
\end{example}
Let us give a combinatorial interpretation for the mixed Eulerian numbers.
\begin{theorem}
\label{th:mixed_eulerian_binary_trees}
Let $(i_1,\dots,i_n)$ be any sequence such that
$u_{i_1}\cdots u_{i_n} = u_1^{c_1}\cdots u_n^{c_n}$.
Then
$$
A_{c_1,\dots,c_n} = \sum_{(T,v)}
n! \,wt(\mathbf{i}, T,v),
$$
where the sum is over $\bf i$-compatible increasing binary
trees $(T,v)$ with $n$ nodes.
\end{theorem}
Note that all terms $n!\,wt(\mathbf{i},T,v)$
in this formula are positive integers.
Actually, this theorem gives not just one but $\binom{n}{c_1,\dots, c_n}$
different combinatorial interpretations of the mixed Eulerian number
$A_{c_1,\dots,c_n}$, one for each way to write
$u_1^{c_1}\cdots u_n^{c_n}$ as $u_{i_1}\cdots u_{i_n}$.
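For instance (a small check for $n=2$), take ${\bf i}=(1,2)$, so that
$(c_1,c_2)=(1,1)$. Of the two increasing binary trees on $2$ nodes, only the tree
with root $1$ and right child $2$ is ${\bf i}$-compatible, and its weight is
$wt({\bf i},T,v)=1$. The theorem thus gives $A_{1,1}=2!\cdot 1=2$, in agreement
with $\mathrm{Vol}\, P_3 = \frac{u_1^2}{2}+2u_1u_2+\frac{u_2^2}{2}$.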
We will extend and prove Theorem~\ref{th:vol_perm_binary_trees}
in Section~\ref{sec:vol_weight_Phi}.
Let us now derive Theorem~\ref{th:mixed_eulerian_binary_trees} from it.
\begin{proof}[Proof of Theorem~\ref{th:mixed_eulerian_binary_trees}]
By Lemma~\ref{lem:num_incr_lab}, the factor $\frac{n!}{\prod_{j=1}^n h(j,T)}$
in Theorem~\ref{th:vol_perm_binary_trees} equals the number of increasing
labelings $v$ of the tree $T$. Expanding the products in the right-hand side
of Theorem~\ref{th:vol_perm_binary_trees}, we can therefore write the volume
of the permutohedron as
$$
\mathrm{Vol}\, P_{n+1} = \sum_{{\bf i}\in[n]^n} u_{i_1}\cdots u_{i_n}
\sum_{(T,v)} wt(\mathbf{i}, T,v),
$$
where the second sum is over $\bf i$-compatible increasing binary
trees $(T,v)$ with $n$ nodes. On the other hand, by the definition of the
mixed Eulerian numbers,
$\mathrm{Vol}\, P_{n+1} = \sum_{{\bf i}\in[n]^n} \frac{1}{n!}\,A_{c_1,\dots,c_n}\,
u_{i_1}\cdots u_{i_n}$, where $u_{i_1}\cdots u_{i_n}=u_1^{c_1}\cdots u_n^{c_n}$.
Comparing these two expressions gives the needed formula.
\end{proof}
\section{Volumes of weight polytopes via $\Phi$-trees}
\label{sec:vol_weight_Phi}
In this section we extend the results of the previous section
to weight polytopes for an arbitrary root system.
\medskip
Let $\Phi$ be an irreducible root system of rank $n$ with a choice of simple
roots $\alpha_1,\dots,\alpha_n$, and let $W$ be the associated Weyl group.
Let $(x,y)$ be a $W$-invariant inner product.
Let $\omega_1,\dots,\omega_n$ be the fundamental weights.
They form the dual basis to the basis of simple coroots
$\alpha_i^\vee = \frac{2\alpha_i}{(\alpha_i,\alpha_i)}$.
Let $P_W(x)$ be the associated weight polytope,
where $x = u_1\omega_1+\dots+u_n\omega_n$;
see Definition~\ref{def:weight_polytope}.
Its volume is a homogeneous polynomial $V_\Phi$ of degree $n$
in the variables $u_1,\dots,u_n$:
$$
V_\Phi(u_1,\dots,u_n) :=\mathrm{Vol}\, P_W(u_1\omega_1+\dots+u_n\omega_n).
$$
Recall the definition of $B(\Gamma)$-trees;
cf.\ Definition~\ref{def:B_forests} and Subsection~\ref{ssec:graph_ass}.
\begin{definition}
For a connected graph $\Gamma$, a {\it $B(\Gamma)$-tree\/} is a rooted tree $T$ on the same
vertex set such that
\begin{enumerate}
\item[(T1)] For any node $i$ and the set $I=\mathrm{desc}(i,T)$ of all descendants of $i$ in $T$,
the induced graph $\Gamma|_I$ is connected.
\item[(T2)] There are no two nodes $i\ne j$ such that the sets $I=\mathrm{desc}(i,T)$
and $J=\mathrm{desc}(j,T)$ are disjoint and the induced graph $\Gamma|_{I\cup J}$ is connected.
\end{enumerate}
\end{definition}
An {\it increasing $B(\Gamma)$-tree\/} $(T,v)$ is a $B(\Gamma)$-tree $T$ together
with an increasing labeling $v$ of its nodes, defined as in
Section~\ref{sec:weighted_binary_trees}.
In the case when $\Gamma$ is the Dynkin diagram of the root system $\Phi$,
we will call these objects {\it $\Phi$-trees\/} and {\it increasing $\Phi$-trees}.
The next proposition extends the well-known claim that there are
$n!$ increasing binary trees on $n$ nodes.
\begin{proposition} For any connected graph $\Gamma$ on $n$ nodes,
the number of increasing $B(\Gamma)$-trees equals $n!$.
\end{proposition}
\begin{proof}
The map $(T,v)\mapsto v$ is a
bijection between increasing $B(\Gamma)$-trees and permutations $v\in S_n$.
Indeed, the inverse map is constructed recursively: the root of $T$ is the node
$r=v^{-1}(1)$; removing $r$ from $\Gamma$ splits it into connected components,
on each of which we build the tree recursively (using the restriction of $v$)
and attach its root as a child of $r$. Conditions (T1) and (T2) then hold because
distinct components of a graph with a vertex removed have no edges between them.
\end{proof}
For a subset $I\subseteq [n]$, let $\Phi_I$ be the root system
with simple roots $\{\alpha_i\mid i\in I\}$, and let $W_I\subset W$
be the associated parabolic subgroup.
Let $\omega_i^I$, $i\in I$ be the fundamental weights for the
root system $\Phi_I$.
For $j\in I\subseteq[n]$, let us define the linear form
$f_{I,j}(u) := \frac{1}{|I|} \sum_{i\in I} u_i\, (\omega_i^I,\omega_j^I)$
in the variables $u_i$.
\begin{theorem}
\label{th:Phi_volume_trees}
The volume of the weight polytope $P_W(x)$ is given by
$$
V_\Phi(u_1,\dots,u_n) = \frac{2^n\cdot |W|}
{\prod_{i=1}^n (\alpha_i,\alpha_i)}\,
\sum_{T} \prod_{j=1}^n f_{\mathrm{desc}(j,T),j}(u),
$$
where the sum is over all $\Phi$-trees $T$.
\end{theorem}
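As a sanity check (our own computation, with the normalization $(\alpha_i,\alpha_i)=2$),
let $\Phi$ be of type $A_2$. There are two $\Phi$-trees: the path rooted at the node $1$
and the path rooted at the node $2$. For the first tree,
$f_{\{1,2\},1}(u)\,f_{\{2\},2}(u)=\bigl(\tfrac{u_1}{3}+\tfrac{u_2}{6}\bigr)\cdot\tfrac{u_2}{2}$,
using $(\omega_1,\omega_1)=(\omega_2,\omega_2)=\tfrac23$, $(\omega_1,\omega_2)=\tfrac13$,
and $(\omega_2^{\{2\}},\omega_2^{\{2\}})=\tfrac12$. Adding the symmetric term for the
second tree and multiplying by the prefactor $\frac{2^2\cdot 6}{2\cdot 2}=6$, we recover
$\mathrm{Vol}\, P_3=\frac{u_1^2}{2}+2u_1u_2+\frac{u_2^2}{2}$, as expected.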
\begin{definition}
The {\it mixed $\Phi$-Eulerian numbers\/}
$A_{c_1,\dots,c_n}^\Phi$, for
$c_1,\dots,c_n\geq 0$, $c_1+\cdots + c_n =n$,
are defined as the coefficients of the polynomial expressing
the volume of the weight polytope:
$$
V_\Phi(u_1,\dots,u_n) = \sum_{c_1,\dots,c_n} A_{c_1,\dots,c_n}^\Phi\,
\frac{u_1^{c_1}}{c_1!}\cdots \frac{u_n^{c_n}}{c_n!}.
$$
Equivalently,
the mixed $\Phi$-Eulerian numbers are the mixed volumes of
the {\it $\Phi$-hypersimplices,} which are the weight polytopes for the
fundamental weights.
\end{definition}
For a sequence $\mathbf{i}=(i_1,\dots,i_n)\in[n]^n$,
let us say that an increasing $\Phi$-tree $(T,v)$ is
{\it $\mathbf{i}$-compatible\/}
if $i_{v(j)}\in \mathrm{desc}(j,T)$, for $j=1,\dots,n$.
\begin{theorem}
\label{th:Phi_mixed_eulerian_binary_trees}
Let $(i_1,\dots,i_n)$ be any sequence such that
$u_{i_1}\cdots u_{i_n} = u_1^{c_1}\cdots u_n^{c_n}$.
Then
$$
A_{c_1,\dots,c_n}^\Phi =
\frac{2^n\cdot |W|}
{\prod_{i=1}^n (\alpha_i,\alpha_i)}\,
\sum_{(T,v)} \prod_{j=1}^n \left(\omega_{i_{v(j)}}^{\mathrm{desc}(j,T)},
\omega_{j}^{\mathrm{desc}(j,T)}\right),
$$
where the sum is over $\bf i$-compatible increasing
$\Phi$-trees $(T,v)$.
\end{theorem}
The proof of these results is based on the following
recurrence relation for volumes of weight polytopes.
Let $\Phi_{(j)} := \Phi_{[n]\setminus \{j\}}$ be the root system
whose Dynkin diagram is obtained by removing the $j$th node,
and let $W_{(j)} := W_{[n]\setminus \{j\}}$ be the corresponding Weyl group,
for $j=1,\dots,n$.
\begin{proposition}
\label{prop:general_recur_weight_poly}
For $i=1,\dots,n$, we have
$$
\frac{\partial }{\partial u_i} V_{\Phi}(u_1,\dots,u_n) = \sum_{j=1}^n
\frac{|W|}{|W_{(j)}|}\,
\frac {(\omega_i,\omega_j)} {(\alpha_j,\omega_j)}\,
V_{\Phi_{(j)}}(u_1,\dots,u_{j-1},u_{j+1},\dots,u_n).
$$
\end{proposition}
Note that $(\alpha_j,\omega_j) = \frac{1}{2}\, (\alpha_j,\alpha_j)\,
(\alpha_j^\vee,\omega_j) = \frac{1}{2}\, (\alpha_j,\alpha_j)$.
\begin{proof}
The derivative $\partial V_{\Phi}/\partial u_i$ is the rate of change of the
volume of the weight polytope as we move its generating vertex $x$
in the direction of the $i$th fundamental weight $\omega_i$.
It can be written as the sum of $(n-1)$-dimensional volumes of facets of
$P_W(x)$ scaled by some factors, which tell how fast the facets move.
Facets of $P_W(x)$ have the form $w( P_{W_{(j)}}(x))$,
where $j\in [n]$ and $w\in W/W_{(j)}$.
In other words, $P_W(x)$ has $\frac{|W|}{|W_{(j)}|}$
facets isomorphic to $P_{W_{(j)}}(x)$.
The facet $P_{W_{(j)}}(x)$
is perpendicular to the fundamental weight $\omega_j$.
Note that this facet $P_{W_{(j)}}(x)$ is a parallel translate of
$P_{W_{(j)}}(x')$, where $x' =
u_1 \omega_1^{(j)} + \cdots + u_{j-1}\omega_{j-1}^{(j)} +
u_{j+1} \omega_{j+1}^{(j)} + \cdots + u_{n}\omega_{n}^{(j)}$
and $\omega_i^{(j)} := \omega_i^{[n]\setminus \{j\}}$.
Indeed, the fundamental weights $\omega_i^{(j)}$ for the root system
$\Phi_{(j)}$ are projections of the fundamental weights $\omega_i$, $i\ne j$,
for $\Phi$ to the hyperplane perpendicular to $\omega_j$.
Thus the $(n-1)$-dimensional volume of this facet is
$\mathrm{Vol}\, P_{W_{(j)}}(x) = V_{\Phi_{(j)}}(u_1,\dots,u_{j-1},u_{j+1},\dots,u_n)$.
If we move $x$ in the direction of a vector $v$, then the facet
$P_{W_{(j)}}(x)$
moves with the velocity proportional to $(v,\omega_j)$.
Recall that we normalize the volume so that the volume of the parallelepiped
generated by the simple roots $\alpha_1,\dots,\alpha_n$ is $1$;
see Section~\ref{sec:weight-polytopes}. Thus the scaling factor for
$v=\alpha_j$ is $1$, and, in general, the scaling factor
is $\frac{(v,\omega_j)}{(\alpha_j,\omega_j)}$.
In particular, for $v=\omega_i$, we obtain the needed factor
$\frac{(\omega_i,\omega_j)}{(\alpha_j,\omega_j)}$.
By symmetry, all facets $w( P_{W_{(j)}}(x))$ come with the same
factors.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{th:Phi_mixed_eulerian_binary_trees}]
Fix a sequence $\mathbf{i}=(i_1,\dots,i_n)$ such that
$u_{i_1}\cdots u_{i_n} = u_1^{c_1}\cdots u_n^{c_n}$.
Then, by the definition,
$$
A_{c_1,\dots,c_n}^\Phi =
\frac{\partial}{\partial\,u_{i_n}}
\cdots
\frac{\partial}{\partial\,u_{i_1}}\cdot V_{\Phi}(u_1,\dots,u_n).
$$
Applying Proposition~\ref{prop:general_recur_weight_poly}
repeatedly, we deduce
that $A_{c_1,\dots,c_n}^\Phi$ equals the weighted sum over
$\mathbf{i}$-compatible
increasing $\Phi$-trees $(T,v)$,
where each tree comes with the weight
$$
\prod_{k=1}^n \left(\frac{|W_{I_k}|}{\prod_l |W_{I_{k,l}}| }\cdot
\frac {2} {(\alpha_{j_k},\alpha_{j_k})}\,
(\omega_{i_k}^{I_k},\omega_{j_k}^{I_k})\right),
$$
where $j_1,\dots,j_n$ is the inverse permutation to $v$,
$I_k = \mathrm{desc}(j_k,T)$, and $I_{k,l}$, $l=1,2,\dots$, are the vertex sets
of the branches of the vertex $j_k$ in $T$.
Note that all factors in these quotients cancel each other,
except for the factor $|W_{I_1}| = |W|$ corresponding to the root of $T$.
Thus we obtain the expression in the right-hand side of
Theorem~\ref{th:Phi_mixed_eulerian_binary_trees}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{th:Phi_volume_trees}]
The volume $V_\Phi(u_1,\dots,u_n)$ is obtained by
multiplying the right-hand side of
Theorem~\ref{th:Phi_mixed_eulerian_binary_trees}
by $\frac{1}{n!}\, u_{i_1}\cdots u_{i_n}$ and summing
over all sequences $(i_1,\dots,i_n)\in[n]^n$.
Thus we obtain
$$
V_\Phi(u_1,\dots,u_n) =
\frac{2^n\cdot |W|}
{n!\cdot \prod_{i=1}^n (\alpha_i,\alpha_i)}\,
\sum_{T} \mathrm{incr}(T)\,\prod_{j=1}^n (|\mathrm{desc}(j,T)|
\cdot f_{\mathrm{desc}(j,T),j}(u)),
$$
where the sum is over all $\Phi$-trees $T$ and
$\mathrm{incr}(T)$ is the number of increasing labelings
of $T$. Using Lemma~\ref{lem:num_incr_lab}, which says
that $\mathrm{incr}(T)=n!/\prod|\mathrm{desc}(j,T)|$,
we derive the needed statement.
\end{proof}
For the Lie type $A_n$, Proposition~\ref{prop:general_recur_weight_poly}
specializes to the following claim.
Let us write $\mathrm{Vol}\, P_{n+1}$ as a polynomial $V_{n+1}(u_1,\dots,u_n)$ in
$u_1,\dots,u_n$.
\begin{proposition}
\label{prop:V_uuu_recurrence}
For any $i=1,\dots,n$, we have
$\frac{\partial }{\partial u_i} V_{n+1}(u_1,\dots,u_n) =$
$$\sum_{j=1}^n
\binom{n+1}{j}\,\frac{j\,(n+1-j)}{n+1}\,
wt_{i,j,n}\,V_{j}(u_1,\dots,u_{j-1})\, V_{n-j+1}(u_{j+1},\dots,u_n),
$$
where
$wt_{i,j,n} =
\min(\frac{i}{j},\frac{n+1-i}{n+1-j})$.
\end{proposition}
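As a direct check for $n=2$ and $i=1$: the left-hand side is
$\frac{\partial}{\partial u_1}\bigl(\frac{u_1^2}{2}+2u_1u_2+\frac{u_2^2}{2}\bigr)
=u_1+2u_2$, while the right-hand side equals
$\binom{3}{1}\cdot\frac{1\cdot 2}{3}\cdot 1\cdot V_1\cdot V_2(u_2)
+\binom{3}{2}\cdot\frac{2\cdot 1}{3}\cdot\frac12\cdot V_2(u_1)\cdot V_1
= 2u_2+u_1$, using $V_1=1$, $V_2(u)=u$, $wt_{1,1,2}=1$, and $wt_{1,2,2}=\frac12$.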
\begin{proof}
In this case, we have $W = S_{n+1}$, $V_W = V_{n+1}(u_1,\dots,u_n)$,
$W_{(j)} = S_{j}\times S_{n+1-j}$, $P_{W_{(j)}}=P_{j}\times P_{n+1-j}$,
and $V_{W_{(j)}} = V_{j}(u_1,\dots,u_{j-1})\,V_{n-j+1}(u_{j+1},\dots,u_n)$.
Thus $\frac{|W|}{|W_{(j)}|}= \binom{n+1}{j}$.
The root system lives in the space
$\{(t_1,\dots,t_{n+1})\in\mathbb{R}^{n+1}\mid t_1+\cdots + t_{n+1}=0\}$
with the inner product induced from $\mathbb{R}^{n+1}$.
In this space, the simple roots are $\alpha_i = e_i-e_{i+1}$
and the fundamental weights are $\omega_i =
e_1+\cdots + e_i - \frac{i}{n+1}(1,\dots,1)$,
for $i=1,\dots,n$.
We have $(\alpha_j,\alpha_j)=2$ and $(\alpha_j,\omega_j)=1$.
Thus
$\frac{(\omega_i,\omega_j)}{(\alpha_j,\omega_j)}
= (\omega_i,\omega_j) = \min(i,j) - \frac{i\cdot j}{n+1} = \frac{j\,(n+1-j)}{n+1}\,wt_{i,j,n}$.
\end{proof}
\begin{proof}[Proof of Theorems~\ref{th:vol_perm_binary_trees}
and~\ref{th:mixed_eulerian_binary_trees}]
By Theorem~\ref{th:Phi_mixed_eulerian_binary_trees}
and the proof of Proposition~\ref{prop:V_uuu_recurrence},
the mixed Eulerian number $A_{c_1,\dots,c_n}$ equals the weighted sum over
$\mathbf{i}$-compatible increasing binary trees, where each tree
$(T,v)$ comes with the weight
$$
(n+1)!\cdot \prod_{j=1}^n \frac{(j-l_{j}+1)\,(r_j+1-j)}{h_j+1}
\cdot
\min\left(\frac{i_{v(j)}-l_{j}+1}{j-l_{j}+1},
\frac{r_{j}-i_{v(j)} + 1}{r_{j} - j +1}\right),
$$
where $l_j\leq r_j$ are defined as in
Section~\ref{sec:weighted_binary_trees}
and $h_j = |\mathrm{desc}(j,T)|=r_{j}-l_{j}+1$.
The product over $j$ of the first quotients $\frac{(j-l_j+1)(r_j+1-j)}{h_j+1}$
telescopes to $\frac{1}{n+1}$. Note that the product
$\prod_{j=1}^n
\min(\frac{i_{v(j)}-l_j+1}{j-l_j+1},\frac{r_j-i_{v(j)} + 1}{r_j - j +1})$
is exactly $wt(\mathbf{i},T,v)$.
Thus the total weight of $(T,v)$ equals $(n+1)!\,\frac{1}{n+1}\,
wt(\mathbf{i},T,v)$, as needed.
\end{proof}
\section{Appendix: Lattice points and Euler-MacLaurin formula}
\label{sec:appendix-lattice-points}
In this section, we review some results of Brion~\cite{Bri},
Khovanskii-Pukhlikov~\cite{KP1, KP2}, Guillemin~\cite{Gui},
and Brion-Vergne~\cite{BV1, BV2}
related to counting lattice points and volumes of polytopes.
For the sake of completeness, we include short proofs of these results.
Instead of calculating the volume or counting the number of lattice points
in a polytope, let us sum monomials over the lattice points in the polytope.
We can work with unbounded polyhedra, as well.
Recall that a polytope in $\mathbb{R}^n$ is the convex hull of a finite set
of points. A {\it rational polyhedron} in $\mathbb{R}^n$ is an intersection
of a finite set of half-spaces with rational (equivalently, integer)
coordinates.
In particular, rational polyhedra include polytopes with rational
vertices and {\it rational cones}, i.e., cones with a rational vertex
and integer generating vectors.
Let $\chi_P:\mathbb{Z}^n\to\mathbb{Q}$ be the {\it characteristic function}
(restricted to the integer lattice) of a polyhedron $P$ given by
$\chi_P(x) = 1$, if $x\in P$, and $\chi_P(x)=0$, if $x\not\in P$.
The {\it algebra of rational polyhedra} $A$ is the linear space
of functions $\mathbb{Z}^n\to\mathbb{Q}$ spanned by the characteristic functions
$\chi_P$ of rational polyhedra. The space $A$ is closed
under multiplication of functions,
because $\chi_P\cdot \chi_Q = \chi_{P\cap Q}$.
The algebra $A$ is generated by the
{\it Heaviside functions} $H_{h,c}=\chi_{\{x\mid h(x)\geq c\}}$,
where $h$ is an integer linear form
and $c\in\mathbb{Z}$.
The group algebra of the integer lattice $\mathbb{Z}^n$ is
the algebra of Laurent polynomials $\mathbb{Q}[t_1^{\pm 1},\dots,t_n^{\pm1}]$.
Let $\mathbb{Q}(t_1,\dots,t_n)$ be the {\it field of rational functions},
which is the field of fractions of the group algebra.
For a vector $a\in\mathbb{Z}^n$, let $t^a := t_1^{a_1}\cdots t_n^{a_n}$.
\begin{theorem}
\label{th:map-S}
{\rm Khovanskii-Pukhlikov~\cite{KP1}} \
There exists a unique linear map $S:A\to \mathbb{Q}(t_1,\dots,t_n)$ such that
\begin{enumerate}
\item[(a)] $S(\delta) = 1$, where $\delta=\chi_{\{0\}}$
is the delta-function.
\item[(b)] For any $\nu \in A$ and $a\in\mathbb{Z}^n$, we have $S(\nu(x-a)) =
t^a\,S(\nu)$.
\end{enumerate}
The map $S$ has the following properties:
\begin{enumerate}
\item For a function $\nu$ on $\mathbb{Z}^n$ with a finite support, we have
$S(\nu) = \sum_{a} \nu(a)\,t^a$. In particular, for a polytope $P$, we have
$S(\chi_P) = \sum_{a\in P\cap\mathbb{Z}^n} t^a$.
\item If $\nu\in A$ is a $b$-periodic function for some
nonzero vector $b\in\mathbb{Z}^n$, i.e., $\nu(x)\equiv \nu(x-b)$, then $S(\nu)=0$.
Thus, for a rational polyhedron $P$ that contains a line,
we have $S(\chi_P)=0$.
\item For a simple rational cone
$C=v+\mathbb{R}_{\geq 0} g_1 + \cdots + \mathbb{R}_{\geq 0} g_m$, where
$v\in\mathbb{Q}^n$ and $g_1,\dots,g_m\in\mathbb{Z}^n$ are linearly independent, we have
$$
S(\chi_C) = \left(\sum_{a\in\Pi\cap\mathbb{Z}^n} t^a \right) \prod_{i=1}^m (1-t^{g_i})^{-1},
$$
where $\Pi$ is the parallelepiped
$\{v + c_1 g_1 + \cdots + c_m g_m \mid 0\leq c_i < 1\}$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let us first check that conditions (a) and (b) imply properties (1), (2), and (3).
We have $S(\nu) = S(\sum_a \nu(a) \delta(x-a))=\sum_{a} \nu(a)\,t^a$, for
a function $\nu$ with a finite support.
For a $b$-periodic function $\nu\in A$, we have $S(\nu) = t^b S(\nu)$ by (b),
and, thus, $S(\nu)=0$.
Let us write, using the inclusion-exclusion principle,
$\chi_\Pi = \chi_C - \sum_i \chi_{C + g_i} + \sum_{i<j} \chi_{C+g_i+g_j} -
\cdots$. Thus by (b),
we have $S(\chi_\Pi) = S(\chi_C) - (\sum_i t^{g_i}) S(\chi_C)
+(\sum_{i<j} t^{g_i+g_j})S(\chi_C) -\cdots = S(\chi_C) \prod_i (1-t^{g_i})$,
which is equivalent to (3).
Let us now prove the existence and uniqueness of the map $S$.
We can subdivide any rational polyhedron $P$ into rational simplices and simple
rational cones. Furthermore, we can present the characteristic function of a
simplex as an alternating sum of characteristic functions of simple rational
cones. Thus we can write $\chi_P$ as a linear combination of characteristic
functions of simple rational cones. Since conditions (a) and (b) imply
expression (3) for $S(\chi_C)$ for each simple rational cone, the
expression $S(\chi_P)$ is uniquely determined by linearity.
Let us verify that this construction for $S$ is consistent.
In other words, we need to check that, for any linear dependence
$b_1\chi_{C_1} + \cdots + b_N\chi_{C_N} = 0$
of characteristic functions of simple rational cones, we have
$b_1 S(\chi_{C_1}) + \cdots + b_N S(\chi_{C_N})=0$, where each term
$S(\chi_{C_i})=f_i\cdot \prod_j (1-t^{v_{ij}})^{-1}$ is given
by expression (3). Here $f_i$ are certain Laurent polynomials.
Let us assume that $b_1\chi_{C_1} + \cdots + b_N\chi_{C_N} = 0$
and $b_1 S(\chi_{C_1}) + \cdots + b_N S(\chi_{C_N})=f/D$,
where $f$ is a nonzero Laurent polynomial and $D=\prod_{ij}(1-t^{v_{ij}})$
is the common denominator of the terms $S(\chi_{C_i})$. Let us select a norm
on $\mathbb{Z}^n$, for example,
$|a|:=\sqrt{a_1^2+\cdots+a_n^{2}}$. Let $R$ be a sufficiently large number
such that $R> |a|$ for any monomial $t^a$ that occurs in $f$ or $D$ with
a nonzero coefficient. We can write each term as
$S(\chi_{C_i}) = \sum_{|a|\leq 3R} \chi_{C_i}(a)\,t^a +
\tilde f_i \cdot \prod_j (1-t^{v_{ij}})^{-1}$,
where, for any monomial $t^a$ that occurs in $\tilde f_i$, we have $|a|> 2R$.
Let us sum the right-hand sides of these expressions with
the coefficients $b_i$.
Then the first terms cancel and we obtain
$b_1 S(\chi_{C_1}) + \cdots + b_N S(\chi_{C_N}) = \sum_i
\tilde f_i \prod_j (1-t^{v_{ij}})^{-1} = f/D$. We deduce that $f$ is a
linear combination of monomials $t^a$ with $|a|>R$, which contradicts
our choice of $R$. This proves the existence and
uniqueness of the map $S$.
\end{proof}
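As a simple illustration of property (3) of Theorem~\ref{th:map-S} (a standard example, not needed in what follows): for the ray $C=v+\mathbb{R}_{\geq 0}\,g\subset\mathbb{R}^1$ with $v\in\mathbb{Z}$ and $g=1$, the parallelepiped is $\Pi=[v,v+1)$, so $\Pi\cap\mathbb{Z}=\{v\}$ and
$$
S(\chi_C)=\frac{t^v}{1-t}=t^v+t^{v+1}+t^{v+2}+\cdots,
$$
which is indeed $\sum_{a\in C\cap\mathbb{Z}}t^a$.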
Let $A'$ be the subspace in the algebra of rational polyhedra $A$
spanned by characteristic functions $\chi_P$ of
rational polyhedra $P$ that contain lines.
According to Theorem~\ref{th:map-S}, we have $S(f)=0$, for any $f\in A'$.
Thus we obtain a well-defined linear map $S:A/A'\to \mathbb{Q}(t_1,\dots,t_n)$.
For a rational polyhedron $P$ and a point $u\in\mathbb{Q}^n$,
let $C_{P,u}$ denote the rational cone with the vertex at $u$
such that $P\cap B = C_{P,u}\cap B$ for a sufficiently small
open neighborhood $B$ of $u$. Notice that $\chi_{C_{P,u}} \not \in A'$
if and only if $u$ is a vertex of the polyhedron $P$.
For an analytic function $f(t)$ defined in a neighborhood of 0, let $[t^n]\,f(t)$
denote the coefficient of $t^n$ in its Taylor expansion.
Notice that $\frac{t}{1-e^{-t}} = 1 + \frac{t}{2} +
\sum_{k=1}^\infty (-1)^{k-1}\,B_k\,\frac{t^{2k}}{(2k)!}$
is an analytic function at $t=0$, where $B_k$ are the Bernoulli numbers.
\begin{theorem}
\label{th:S-any-polytope}
{\rm Brion~\cite{Bri}, Khovanskii-Pukhlikov~\cite{KP1}} \
{\rm (1)} For any rational polyhedron $P$, we have
$\chi_P \equiv \sum_{v\in V} \chi_{C_{P,v}}$ modulo the subspace $A'$,
where the sum is over the vertex set $V$ of $P$.
{\rm (2)} We have $S(P) = \sum_{v\in V} S(C_{P,v})$.
In particular, for a simple rational polyhedron $P$, we
have
$$
S(P) = \sum_{v\in V} \frac {\sum_{a\in\Pi_v\cap\mathbb{Z}^n} t^a}
{\prod_{i=1}^n(1-t^{g_{i,v}})},
$$
where the sum is over vertices $v$ of $P$, $g_{1,v},\dots,g_{n,v}\in\mathbb{Z}^n$
are the integer generators of the cone $C_{P,v}$,
and $\Pi_v = \{v+c_1 g_{1,v} + \cdots + c_n g_{n,v} \mid 0\leq c_i < 1\}$.
{\rm (3)} For a simple rational polytope $P$,
the number of lattice points in $P$ equals
$$
\#\{P\cap \mathbb{Z}^n\} =
[t^n] \left\{\sum_{v\in V} \left(\sum_{a\in\Pi_v\cap\mathbb{Z}^n} e^{t\cdot h(a)}\right)
\prod_{i=1}^n \frac {t} { 1 - e^{t\cdot h(g_{i,v})}}\right\}.
$$
where $h\in (\mathbb{R}^n)^*$ is any linear form such that $h(g_{i,v})\ne 0$,
for all vectors $g_{i,v}$.
{\rm (4)} The volume of a simple rational polytope $P$ equals
$$
\mathrm{Vol}\, P = \frac{1} {n!} \sum_{v\in V} \frac{|\det(g_{1,v},\dots,g_{n,v})| \, h(v)^n}
{(-1)^n \prod_{i=1}^n h(g_{i,v})},
$$
where $\det(g_{1,v},\dots,g_{n,v})$ is the determinant of the $n\times n$-matrix
with the row vectors $g_{i,v}$ and $h\in (\mathbb{R}^n)^*$ is any linear form such that
$h(g_{i,v})\ne 0$, for all vectors $g_{i,v}$.
\end{theorem}
The formula for the sum of exponents $S(P)$ was first obtained by M.~Brion~\cite{Bri}.
The formula for $\mathrm{Vol}\, P$ was given by Khovanskii-Pukhlikov~\cite{KP2}
(in the case of Delzant polytopes) and by Brion-Vergne~\cite{BV1} in general.
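For example, for the segment $P=[0,m]\subset\mathbb{R}^1$ with $m$ a positive integer, the two vertex cones contribute
$$
S(P)=\frac{1}{1-t}+\frac{t^{m}}{1-t^{-1}}=\frac{1-t^{m+1}}{1-t}=1+t+\cdots+t^{m},
$$
as expected. (This standard example is included here only as an illustration.)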
\begin{proof}
(1) As we have mentioned in the proof of Theorem~\ref{th:map-S},
we can write the characteristic function of a rational polyhedron as
a finite linear combination of characteristic functions of rational cones:
$\chi_P = \sum_i b_i\, \chi_{C_i}$. Let $U\supseteq V$ be the set of vertices
of all cones $C_i$. For $u\in U$, let $I_u$ be the collection of indices
$i$ such that the cone $C_i$ has the vertex $u$.
Then $\sum_{i\in I_u} b_i \,\chi_{C_i}\equiv \chi_{C_{P,u}}\pmod{A'}$.
Also $\chi_{C_{P,u}}\in A'$, for $u\in U\setminus V$.
This proves the claim.
(2) This claim follows from (1) and Theorem~\ref{th:map-S}.
(3) Let us pick a linear form $h$ that does not annihilate any of the vectors
$g_{i,v}$. Let $B$ be the subalgebra of $\mathbb{Q}(t_1,\dots,t_n)$
generated by the $t^a$ and $\frac{1}{1-t^b}$, for $a,b\in\mathbb{Z}^n$ such that $h(b)\ne 0$.
Let $e_h:B\to \mathbb{R}((q))$ be the homomorphism from $B$ to the ring of formal Laurent
series in one variable $q$ given by $t^a\mapsto e^{q\cdot h(a)}$
and $\frac{1}{1-t^b}\mapsto \frac{1}{1-e^{q\cdot h(b)}}$.
Let us apply the homomorphism $e_h$ to the expression for $S(P)$ given by (2).
Since $e_h(S(P))=\sum_{a\in P\cap\mathbb{Z}^n} e^{q\cdot h(a)}$, the number of lattice points
$\#\{P\cap \mathbb{Z}^n\}$ is the constant coefficient of the resulting Laurent series.
This is exactly the needed claim.
(4) The volume of a polytope $P$ can be calculated by counting the number
of lattice points in the inflated polytope $kP$ for large $k$.
Explicitly, $\mathrm{Vol}\, P = \lim_{k\to\infty}\#\{kP\cap \mathbb{Z}^n\}/k^n$.
The vertices of the inflated polytope $kP$ are the vectors $k\,v$, for $v\in V$,
and the generators of the cone $C_{kP,kv}$ are exactly the same vectors
$g_{i,v}$ as for the original polytope $P$. We may assume that the limit
is taken over $k$'s such that all vectors $k\,v$ are integer.
Each term in the expression for $\#\{kP\cap \mathbb{Z}^n\}$ given by (3)
has the form
$[t^n] \left\{ e^{t\cdot h(kv + a')}
\prod_{i=1}^n \frac {t} { 1 - e^{t\cdot h(g_{i,v})}}\right\}
= [t^n] \left\{ e^{t\cdot h(kv + a')}
\prod_{i=1}^n ( - \frac{1}{h(g_{i,v})} + O(t))\right\}$,
where $a'\in(\Pi_v- v)\cap\mathbb{Z}^n$.
Since $k$ appears only in the first exponent, this expression
is a polynomial in $k$ of degree $n$ with the top term
$k^n\, \left(\frac{1}{n!} h(v)^n (-1)^n \prod_{i=1}^n \frac{1}{h(g_{i,v})}\right)$.
There are
$|\det(g_{1,v},\dots,g_{n,v})| = | \Pi_v|$ choices for $a'$.
Thus summing these expressions over all $v$ and $a'$ we obtain the needed
expression for $\mathrm{Vol}\, P$.
\end{proof}
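The volume formula (4) is also easy to check numerically. The following short computation is an illustrative sketch (it is not part of the results above; the unit square, its vertex data, and the auxiliary linear form $h=(1,2)$ are arbitrary choices): it evaluates formula (4) of Theorem~\ref{th:S-any-polytope} for $[0,1]^2$ and returns its area.
\begin{verbatim}
# Numerical check of formula (4) for the unit square [0,1]^2.
# Each entry: (vertex v, primitive edge generators g_1, g_2 of the cone C_{P,v}).
from math import factorial

cones = [((0, 0), ((1, 0), (0, 1))),
         ((1, 0), ((-1, 0), (0, 1))),
         ((0, 1), ((1, 0), (0, -1))),
         ((1, 1), ((-1, 0), (0, -1)))]
h = (1, 2)                       # any linear form nonzero on all generators

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def det2(g1, g2):
    return g1[0] * g2[1] - g1[1] * g2[0]

n = 2
vol = sum(abs(det2(g1, g2)) * dot(h, v) ** n /
          ((-1) ** n * dot(h, g1) * dot(h, g2))
          for v, (g1, g2) in cones) / factorial(n)
print(vol)                       # prints 1.0, the area of the unit square
\end{verbatim}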
For a polytope $P$ with the vertices $v_1,\dots,v_M$, we say
that a {\it deformation} of $P$ is a polytope of the form
$P'=\mathrm{ConvexHull}(v_1',\dots,v_M')\subset\mathbb{R}^n$ such that $v_i'-v_j' = k_{ij} (v_i-v_j)$,
for some $k_{ij}\in\mathbb{R}_{\geq 0}$, whenever $[v_i,v_j]$ is a 1-dimensional
edge of $P$.
A generic deformation of $P$ has the same combinatorial structure as $P$.
However in degenerate cases some of the vertices $v_i'$ may merge with each other.
Deformations of $P$ are obtained by parallel translations of its facets.
Suppose that the polytope $P$ has $N$ facets and is given by the linear inequalities
$P=\{x\in\mathbb{R}^n\mid h_i(x)\leq c_i, \ i=1,\dots,N\}$, for
some $h_i\in(\mathbb{R}^n)^*$ and $c_i\in\mathbb{R}$. Then any deformation $P'=
\mathrm{ConvexHull}(v_1',\dots,v_M')$ has the
form
$$
P(z_1,\dots,z_N) := \{x\in\mathbb{R}^n\mid h_i(x)\leq z_i, \ i=1,\dots,N\},
\textrm{ for some } z_1,\dots,z_N\in \mathbb{R},
$$
where $h_i(v_j')=z_i$ whenever the $i$-th facet of $P$ contains
the $j$-th vertex $v_j$.
For this polytope we will write $v_i(z_1,\dots,z_N) = v'_i$.
Let $\mathcal{D}_P\subset\mathbb{R}^N$ be the set of $N$-tuples $(z_1,\dots,z_N)$
corresponding to deformations of $P$. Then $\mathcal{D}_P$ is a certain
polyhedral cone in $\mathbb{R}^N$ that we call the {\it deformation cone}.
If $P$ is a simple polytope then $\mathcal{D}_P$ has dimension $N$,
because any sufficiently small parallel translations of the facets of $P$
give a deformation of $P$.
Deformations $P(z_1,\dots,z_N)$ for interior points $(z_1,\dots,z_N)
\in \mathcal{D}_P\setminus \partial\mathcal{D}_P$ of the cone $\mathcal{D}_P$ are exactly
the polytopes whose associated fan coincides with the fan of $P$.
A simple integer polytope $P$ is called a {\it Delzant polytope} if,
for each vertex $v$ of $P$, the cone $C_{P,v}$ is generated by
an integer basis of the lattice $\mathbb{Z}^n$.
Such polytopes are associated with smooth toric varieties.
Formulas in Theorem~\ref{th:S-any-polytope} are especially simple
for Delzant polytopes. Indeed, in this case $\Pi_v\cap\mathbb{Z}^n$ consists
of a single element $v$ and
$|\det(g_{1,v},\dots,g_{n,v})| = 1$.
For Delzant polytopes, we assume that we pick the linear forms $h_i$
corresponding to the facets of $P$ so that $h_i$ are integer and
are not divisible by a nontrivial integer factor.
Let $I_P(z_1,\dots,z_N)=\#\{P(z_1,\dots,z_N)\cap\mathbb{Z}^n\}$
be the number of lattice points
and $V_P(z_1,\dots,z_N)= \mathrm{Vol}\, P(z_1,\dots,z_N)$ be the volume
of a deformation of $P$.
Let $\mathrm{Todd}(q) = \frac{q}{1-e^{-q}}$. Since $\mathrm{Todd}(q)$ expands
as a Taylor series at $q=0$, we have the well-defined
operators $\mathrm{Todd}\left(\frac{\partial}{\partial\, z_i}\right)$
acting on polynomials in $z_1,\dots,z_N$.
\begin{theorem}
\label{th:Todd-Euler-Maclaurin}
{\rm (1)}
For an integer polytope $P$,
and $(z_1,\dots,z_N)\in \mathcal{D}_P\cap\mathbb{Z}^N$,
the number of lattice points $I_P(z_1,\dots,z_N)$
and the volume $V_P(z_1,\dots,z_N)$ are given by polynomials in $z_1,\dots,z_N$
of degree $n$.
The polynomial $V_P(z_1,\dots,z_N)$ is the top homogeneous component
of the polynomial $I_P(z_1,\dots,z_N)$.
{\rm (2)}
If $P$ is a Delzant polytope then we have
$$
I_P(z_1,\dots,z_N) = \left(\prod_{i=1}^N
\mathrm{Todd}\left(\frac{\partial}{\partial\, z_i}\right) \right)\, V_P(z_1,\dots,z_N).
$$
\end{theorem}
We will call the polynomial $I_P(z_1,\dots,z_N)$ the {\it generalized
Ehrhart polynomial} of the polytope $P$.
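As a quick illustration (not part of the original argument): for the segment $P(z_1,z_2)=\{x\in\mathbb{R}\mid -x\leq z_1,\ x\leq z_2\}=[-z_1,z_2]$, we have $V_P(z_1,z_2)=z_1+z_2$ and, since $\mathrm{Todd}(q)=1+\frac{q}{2}+\cdots$,
$$
\mathrm{Todd}\left(\frac{\partial}{\partial z_1}\right)\mathrm{Todd}\left(\frac{\partial}{\partial z_2}\right)(z_1+z_2)
= z_1+z_2+\tfrac{1}{2}+\tfrac{1}{2} = z_1+z_2+1 = I_P(z_1,z_2),
$$
in agreement with Theorem~\ref{th:Todd-Euler-Maclaurin}(2).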
\begin{proof}
(1)
Assume $P$ is a simple polytope. The vertices $v_i(z_1,\dots,z_N)$
of the deformation $P(z_1,\dots,z_N)$ linearly depend on $z_1,\dots,z_N$.
According to formulas (3) and (4) in Theorem~\ref{th:S-any-polytope},
$I_P(z_1,\dots,z_N)$ and $V_P(z_1,\dots,z_N)$ are polynomials in
$z_1,\dots,z_N$, because each term in these formulas for $P(z_1,\dots,z_N)$
depends polynomially on the corresponding vertex. This remains true for
degenerate deformations $P(z_1,\dots,z_N)$ when some of the vertices $v_i(z_1,\dots,z_N)$
merge. Indeed, all claims of Theorem~\ref{th:S-any-polytope} remain valid
(and proofs are exactly the same) if, instead of summation over actual
vertices of $P(z_1,\dots,z_N)$, we sum over $v_i(z_1,\dots,z_N)$.
If $P$ is not simple then a generic small parallel translation of its facets
results in a simple polytope. Thus $P$ can be thought of as a degenerate
deformation of a simple polytope and the above argument works.
(2) For a simple polytope $P$, we have
$$
\frac{\partial }{\partial z_i} v_j(z_1,\dots,z_N) =
\left\{
\begin{array}{cl}
-\alpha_{ij} \,g_{k,v_j} & \textrm{if $v_j$ belongs to the $i$-th facet},\\
0 & \textrm{otherwise},
\end{array}
\right.
$$
for some positive constants $\alpha_{ij}$,
where $g_{k,v_j}$ is the only generator of the cone $C_{P,v_j}$ that is not
contained in the $i$-th facet. Indeed, a small parallel translation
of the $i$-th facet moves each vertex $v_j$ in this facet in the direction opposite
to the generator $g_{k,v_j}$ and does not move the other vertices.
If $P$ is a Delzant polytope then all constants $\alpha_{ij}$ are equal to $1$.
In this case,
by Theorem~\ref{th:S-any-polytope}(4), we have
$$
V_P(z_1,\dots,z_N) = \frac{1}{n!}\sum_{j=1}^M
\frac{h(v_j(z_1,\dots,z_N))^n}{(-1)^n\prod_{i=1}^n h(g_{i,v_j})}
= [t^n]\left\{\sum_{j=1}^M \frac{e^{t\cdot h(v_j(z_1,\dots,z_N))}}
{ (-1)^n\prod_{i=1}^n h(g_{i,v_j})}
\right\}
$$
The only term in this expression that involves $z_i$'s is the exponent
$e^{t\cdot h(v_j(z_1,\dots,z_N))}$.
For an analytic function $f(q)$,
the operator $f\left(\frac{\partial }{\partial z_i}\right)$
maps this exponent to
$$
e^{t\cdot h(v_j(z_1,\dots,z_N))}
\mapsto
\left\{
\begin{array}{cl}
e^{t\cdot h(v_j(z_1,\dots,z_N))}\,f(-t\,h(g_{k,v_j}))
& \textrm{if $v_j$ lies in the $i$-th facet},\\
e^{t\cdot h(v_j(z_1,\dots,z_N))}\,f(0) & \textrm{otherwise},
\end{array}
\right.
$$
where $k$ is the same as above.
Using this for Todd operators, we obtain the expression for
$I_P(z_1,\dots,z_N)$ given by Theorem~\ref{th:S-any-polytope}(3).
\end{proof}
| {
"timestamp": "2005-07-07T23:11:28",
"yymm": "0507",
"arxiv_id": "math/0507163",
"language": "en",
"url": "https://arxiv.org/abs/math/0507163",
"abstract": "The volume and the number of lattice points of the permutohedron P_n are given by certain multivariate polynomials that have remarkable combinatorial properties. We give several different formulas for these polynomials. We also study a more general class of polytopes that includes the permutohedron, the associahedron, the cyclohedron, the Pitman-Stanley polytope, and various generalized associahedra related to wonderful compactifications of De Concini-Procesi. These polytopes are constructed as Minkowski sums of simplices. We calculate their volumes and describe their combinatorial structure. The coefficients of monomials in Vol P_n are certain positive integer numbers, which we call the mixed Eulerian numbers. These numbers are equal to the mixed volumes of hypersimplices. Various specializations of these numbers give the usual Eulerian numbers, the Catalan numbers, the numbers (n+1)^{n-1} of trees, the binomial coefficients, etc. We calculate the mixed Eulerian numbers using certain binary trees. Many results are extended to an arbitrary Weyl group.",
"subjects": "Combinatorics (math.CO)",
"title": "Permutohedra, associahedra, and beyond",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9915543733084475,
"lm_q2_score": 0.7154239897159439,
"lm_q1q2_score": 0.709381785772622
} |
https://arxiv.org/abs/1804.08483 | Erdős' Multiplication Table Problem for Function Fields and Symmetric Groups | Erdős first showed that the number of positive integers up to $x$ which can be written as a product of two number less than $\sqrt{x}$ has zero density. Ford then found the correct order of growth of the set of all these integers. We will use the tools developed by Ford to answer the analogous question in the function field setting. Finally, we will use a classical result relating factorization of polynomials to factorization of permutations to recover a result of Eberhard, Ford and Green of an analogous multiplication table problem for permutations. | \section{Introduction}\label{Intro}
Let $A(x)$ be the set of positive integers up to $x$ that can be written as a product of two numbers less than $\sqrt{x}$. Using estimates on the number of integers with a given number of prime divisors Erd\H{o}s \cite{E} was able to show that
$$|A(x)| \ll \frac{x}{(\log x)^{\delta} (\log\log x)^{1/2}},$$
where
$$\delta = 1 - \frac{1+\log\log2}{\log2} = 0.086071....$$
Then, Ford \cite{F1,F2} considered the set $H(x,y,z)$ consisting of the integers up to $x$ which have a divisor in $(y,z]$. In particular, he showed that
\begin{align}\label{Hxy2y}
|H(x,y,2y)| \asymp \frac{x}{(\log y)^\delta (\log\log y)^{3/2}} \quad \quad (3 \leq y \leq \sqrt{x}),
\end{align}
and that
$$\left|H\left(\frac{x}{4}, \frac{\sqrt{x}}{4}, \frac{\sqrt{x}}{2}\right)\right| \leq |A(x)| \leq \sum_{k\geq 0} \left|H\left(\frac{x}{2^k}, \frac{\sqrt{x}}{2^{k+1}}, \frac{\sqrt{x}}{2^k}\right)\right|$$
from which one can then conclude that
$$|A(x)| \asymp \frac{x}{(\log x)^\delta (\log\log x)^{3/2}}.$$
We are interested in the analogous statement in the function field setting.
\subsection{Function Field Analogy}
There is a dictionary of sorts that relates statements made about integers to statements about polynomials over finite fields:
\begin{center}
\begin{tabular} { c| c}
$\Z$ & $\Ff_q[t]$ \\
\hline
$p$, prime & $P$, prime polynomial \\
positive & monic \\
$|m|$ & $|F|=q^{\deg F}$ \\
$m\leq x$ & $\deg(F)=n$ \\
$\log x$ & $\deg F$
\end{tabular}
\end{center}
For example the prime number theorem states that the number of primes up to $x$ is
$$\pi(x) := |\{p \leq x : p \mbox{ is prime}\}| \sim \frac{x}{\log(x)}.$$
The analogous statement, the prime polynomial theorem, tells us how many prime polynomials of degree $n$ there are over $\Ff_q[t]$, the answer being
$$\pi_q(n) := |\{P\in \M_n : P \mbox{ is a prime polynomial}\}| = \frac{q^n}{n} + O\left(\frac{q^{n/2}}{n}\right),$$
where $\M_n$ is the set of monic polynomials of degree $n$. Note that we get a square-root saving in the error term for the prime polynomial theorem as the Riemann Hypothesis is known for function fields.
Using this dictionary we can create sets analogous to $A(x)$ and $H(x,y,2y)$ in the function field setting. The background set for $A(x)$ is all the positive integers less than $x$, so the background set in the function field setting would be $\M_n$, the monic polynomials of degree $n$. Since degree is the analogue of $\log$, the condition in $A(x)$ of being the product of two integers less than $\sqrt{x}$ is analogous to being the product of two monic polynomials of degree $n/2$. Clearly this only makes sense if $n$ is even, and so we define
\begin{align}\label{M2n}
M(2n) := \{ F\in\ \M_{2n} : F=GH, G,H\in \M_n \}.
\end{align}
Then the multiplication table problem is to find the size of the set $M(2n)$. Using the dictionary we can make a good guess as to how large this set should be, and this guess turns out to be correct.
\begin{thm}\label{MainThm1}
$$|M(2n)|\asymp \frac{q^{2n}}{n^\delta (\log n)^{3/2}}$$
as $q^n\to\infty$.
\end{thm}
Notice that since $n$ replaces $\log x$, $\log n$ replaces $\log\log x$.
The analogue of $H(x,y,2y)$ is a little subtler. We must ask ourselves what the correct analogue of $2$ is and what role $2$ plays in the proof of \eqref{Hxy2y}. In fact $2$ is important in this context because it is the smallest prime. While the concept of a smallest prime is not well defined in the function field setting, the degree of the smallest prime is well defined, and it is $1$. Therefore, the analogue of a number having a divisor in $(y,2y]$ would be for a polynomial to have a divisor with degree in $(b,b+1]$. But since the degree is always an integer, we see that this is equivalent to saying that a polynomial has a divisor of some fixed degree. Thus we define
\begin{align}\label{Hnb}
H(n,b) := & \{F \in \M_n: F \mbox{ has a divisor of degree } b\} \\
= & \{F\in \M_n : F=GH, G\in \M_b, H\in \M_{n-b}\}. \nonumber
\end{align}
Moreover, we see that $H(n,b)=H(n,n-b)$ so we may always assume that $b\leq n/2$. Again, the result predicted by the dictionary is the truth.
\begin{thm}\label{MainThm2}
For $b\leq n/2$,
$$|H(n,b)| \asymp \frac{q^{n}}{b^\delta (\log b)^{3/2}}$$
as $q^n\to\infty$.
\end{thm}
Of course $M(2n)=H(2n,n)$ so Theorem \ref{MainThm1} is a direct corollary of Theorem \ref{MainThm2}.
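For very small parameters one can of course tabulate $|H(n,b)|$ by brute force directly from the definition \eqref{Hnb}. The sketch below is purely illustrative and is not used anywhere in the proofs; the values of $q$, $n$, $b$ are arbitrary, and $q$ is assumed to be prime so that arithmetic modulo $q$ gives the field $\Ff_q$. It tests, for each monic polynomial of degree $n$, whether it has a monic divisor of degree $b$.
\begin{verbatim}
from itertools import product

q, n, b = 2, 6, 3          # arbitrary small parameters; q assumed prime

def monic(deg):
    """Monic polynomials of the given degree over F_q, as coefficient
    tuples (constant term first, leading coefficient last)."""
    for lower in product(range(q), repeat=deg):
        yield lower + (1,)

def divides(g, f):
    """Does the monic polynomial g divide f over F_q?  (Long division.)"""
    r = list(f)
    while len(r) >= len(g):
        if r[-1] == 0:
            r.pop()
            continue
        c, shift = r[-1], len(r) - len(g)
        for i, gi in enumerate(g):           # subtract c * t^shift * g
            r[shift + i] = (r[shift + i] - c * gi) % q
        r.pop()
    return not any(r)

count = sum(1 for f in monic(n) if any(divides(g, f) for g in monic(b)))
print(count, "of the", q**n, "monic polynomials of degree", n,
      "have a divisor of degree", b)
\end{verbatim}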
\subsection{Symmetric Groups}
As well as the analogy between integers and polynomials over a finite field, there is an analogy between polynomials of degree $n$ over a finite field and the symmetric group on $n$ elements. In particular, one can show that, in the large-$q$ limit, the probability that a polynomial has a given factorization type into prime polynomials is the same as the probability that a permutation has the same factorization type into disjoint cycles. Through this analogy we can state and prove a multiplication table problem for the symmetric group. Before we do this, though, we need two definitions.
\begin{defn}
We say $\sigma,\tau\in S_n$ are \textbf{disjoint} if they permute different elements. That is, if $\sigma(k)\not=k$ then $\tau(k)=k$ and, vice versa, if $\tau(k)\not=k$ then $\sigma(k)=k$.
\end{defn}
\begin{defn}
We say $\sigma\in S_n$ \textbf{embeds into $S_m$} if there is a subset $I\subset \{1,\dots,n\}$ of size $m$ such that $\sigma$ permutes $I$ and is trivial outside of $I$. That is, $\sigma(k)\in I$ for all $k\in I$ and $\sigma(k)=k$ for all $k\not \in I$.
\end{defn}
Then the multiplication table problem for $S_{2n}$ would be to deduce the order of growth of the set
\begin{align}\label{T2n}
T(2n):= \{\sigma \in S_{2n} : \sigma = \tau_1\tau_2 \mbox{ such that $\tau_1,\tau_2$ are disjoint and both embed into $S_n$}\}.
\end{align}
It is not necessarily obvious that this is the correct analogue to consider; however, as we will show in Section \ref{Section2}, this is what arises naturally from analyzing the relationship between polynomials and permutations.
Indeed we get the result
\begin{thm}\label{SymmThm1}
$$|T(2n)| \asymp \frac{(2n)!}{n^\delta(\log n)^{3/2}}$$
as $n\to\infty$.
\end{thm}
This is surprising since, at first guess, one might suspect $T(2n)$ to grow like
$$\frac{(2n)!}{(\log((2n)!))^{\delta} (\log\log((2n)!))^{3/2}} \asymp \frac{(2n)!}{(n\log n)^{\delta} (\log n)^{3/2}}.$$
So, perhaps there is a different analogue of the multiplication table problem in the symmetric group setting that will give the first guess rate of growth.
Again, we can ask about the rate of growth of a more general set:
\begin{align}\label{Tnb}
T(n,b) := \{\sigma \in S_n : \sigma = \tau_1\tau_2 \mbox{ such that $\tau_1,\tau_2$ are disjoint and $\tau_1$ embeds into $S_b$}\}.
\end{align}
Note that the condition that $\tau_1$ and $\tau_2$ are disjoint and $\tau_1$ embeds in $S_b$ implies $\tau_2$ embeds into $S_{n-b}$. Hence we have that $T(n,b)=T(n,n-b)$ and we may assume $b\leq n/2$.
\begin{thm}\label{SymmThm2}
For $b\leq n/2$,
$$|T(n,b)| \asymp \frac{n!}{b^\delta(\log b)^{3/2}}$$
as $n\to\infty$.
\end{thm}
Likewise, Theorem \ref{SymmThm2} implies Theorem \ref{SymmThm1} as $T(2n) = T(2n,n)$.
\textbf{Outline of the paper:} In Section \ref{Section2} we will deduce Theorem \ref{SymmThm2} from Theorem \ref{MainThm2}. Then Sections \ref{Section3} and \ref{Section4} will be devoted to proving the lower and upper bounds for Theorem \ref{MainThm2}, respectively. We will use the techniques developed by Ford to reduce the question down to the same estimates as for the integer case. Finally, we include an appendix with proofs of function field analogues of well known useful results in the integer setting.
We will reserve the variable $P$ (with any subscript) to denote a prime polynomial. Moreover, for brevity, if we write a sum (or product) with $P$ in the subscript, we will always have this denote the sum (or product) over prime polynomials that satisfy the other conditions imposed by the sum (or product).
The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement n$^{\text{o}}$ 320755.
\section{Symmetric Groups}\label{Section2}
Let $F\in \M_n$. Suppose it can be factored as $F = \prod_{i=1}^t P_i$ where the $P_i$ are not necessarily distinct primes. Then the tuple $(\deg P_1,\dots,\deg P_t)$ gives a partition of $n$. Denote this partition as $\lambda_F$. Further, for any partition $\lambda$ of $n$, define
$$\pi_q(n,\lambda) = |\{F\in \M_n : \lambda_F=\lambda\}|$$
to be the number of polynomials of degree $n$ with a fixed factorization type. Note that if we set $\lambda = (n)$, the partition consisting only of $n$, then we see that $\pi_q(n,(n))=\pi_q(n)$, the number of primes of degree $n$.
Likewise, all $\sigma\in S_n$ can be decomposed as $\sigma = \prod_{i=1}^t c_i$ where the $c_i$ are disjoint cycles. Then the tuple $(\ell(c_1),\dots,\ell(c_t))$ gives a partition of $n$, where $\ell(c_i)$ is the length of $c_i$. Denote this partition $\lambda_\sigma$. Note: if $\sigma(k)=k$, then we include the cycle $(k)$ in the decomposition of $\sigma$ and this contributes a $1$ to the partition of $n$. Now, for any partition $\lambda$ of $n$, define
$$P(\lambda) = \frac{|\{\sigma\in S_n : \lambda_\sigma = \lambda\}|}{n!}$$
to be the probability that a permutation has a certain cycle decomposition. Then there is a classical result that follows directly from the prime polynomial theorem:
\begin{thm}\label{BBR}
Let $n$ be a positive integer. Then there exists a $c(n)>0$, depending only on $n$, such that for every partition $\lambda$ of $n$ and every prime power $q$,
$$|\pi_q(n,\lambda) - P(\lambda)q^n| \leq c(n) q^{n-1}.$$
\end{thm}
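For concreteness, let us recall the classical formula for $P(\lambda)$ (it is not needed in what follows): if $\lambda$ has $m_i$ parts equal to $i$, then
$$P(\lambda) = \frac{1}{\prod_{i\geq 1} i^{m_i}\, m_i!}.$$
For instance, for $\lambda=(2,1)$ we get $P(\lambda)=\tfrac{1}{2}$, corresponding to the three transpositions among the $3!=6$ elements of $S_3$.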
We can now use this result, together with Theorem \ref{MainThm2} to prove Theorem \ref{SymmThm2}.
\begin{proof}[Proof of Theorem \ref{SymmThm2}]
We will say $\lambda$ has a $b$-subpartition if there exists a subset of $\lambda$ that is a partition of $b$. Therefore $F\in H(n,b)$ if and only if $\lambda_F$ has a $b$-subpartition. Indeed if $F\in H(n,b)$ then $F=GH$ with $G\in\M_b$ and $\lambda_G$ will be a $b$-subpartition of $\lambda_F$. Conversely, if $\lambda'$ is a $b$-subpartition of $\lambda_F$, then define $G$ to be the product of the primes of $F$ corresponding to $\lambda'$. Then $G|F$ and $G\in\M_b$ and hence $F\in H(n,b)$.
Let
$$\Lambda(n,b) := \{\lambda: \lambda \mbox{ is a partition of $n$ with a $b$-subpartition} \}.$$
Then we get that
$$H(n,b) = \bigcup_{\lambda\in\Lambda(n,b)} \{F\in \M_n : \lambda_F=\lambda\}.$$
Moreover, this union is disjoint as if $\lambda_F\not=\lambda_G$ then $F\not=G$. Therefore,
\begin{align*}
|H(n,b)| & = \sum_{\lambda\in\Lambda(n,b)}|\{F\in M_n: \lambda_F = \lambda\}|\\
& = \frac{q^n}{n!}\sum_{\lambda\in\Lambda(n,b)}| \{\sigma\in S_n : \lambda_\sigma=\lambda\}| + O\left(c(n)q^{n-1}e^{\pi \sqrt{2n/3}}\right)
\end{align*}
where the last equality comes from Theorem \ref{BBR} and the bound on the number of partitions of $n$ proved by Hardy and Ramanujan \cite{HR}.
Furthermore, $\sigma\in T(n,b)$ if and only if $\lambda_\sigma\in \Lambda(n,b)$. Indeed if $\sigma\in T(n,b)$ then $\sigma=\tau_1\tau_2$ with $\tau_1$ and $\tau_2$ disjoint and $\tau_1$ embeds into $S_b$ therefore, $\lambda_{\tau_1}$ will be a $b$-subpartition of $\lambda_\sigma$. Conversely, if $\lambda_\sigma$ has a $b$-subpartition then let $\tau_1$ be the product of the cycles corresponding to the subpartition and $\tau_2$ be the product of the remaining cycles. Then $\tau_1$ will embed into $S_b$, $\tau_1$ and $\tau_2$ will be disjoint and $\sigma=\tau_1\tau_2$.
Therefore, we get
$$T(n,b) = \bigcup_{\lambda\in\Lambda(n,b)} \{\sigma\in S_n: \lambda_\sigma=\lambda\}$$
and since this union is disjoint (as $\lambda_\sigma\not=\lambda_\tau$ implies $\sigma\not=\tau$) then we finally have
\begin{align*}
|T(n,b)| & = \sum_{\lambda\in\Lambda(n,b)}| \{\sigma\in S_n : \lambda_\sigma=\lambda\}| \\
& = \frac{n!}{q^n}|H(n,b)| + O\left(\frac{c(n)n!e^{\pi \sqrt{2n/3}}}{q}\right)\\
& \asymp \frac{n!}{b^\delta(\log b)^{3/2}}
\end{align*}
as we can choose $q \gg c(n)e^n$.
\end{proof}
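The characterization used in the proof above, namely that $\sigma\in T(n,b)$ if and only if $\lambda_\sigma$ has a $b$-subpartition, also gives a direct way to tabulate $|T(n,b)|$ for very small $n$. The following sketch is purely illustrative (the values of $n$ and $b$ are arbitrary) and plays no role in the arguments below.
\begin{verbatim}
from itertools import permutations

def cycle_type(p):
    """Cycle lengths of the permutation p, given as a tuple with p[i]
    the image of i (0-based)."""
    seen, parts = [False] * len(p), []
    for i in range(len(p)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j = p[j]
                length += 1
            parts.append(length)
    return parts

def has_subpartition(parts, b):
    """Does some subset of the parts sum to b?  (Subset-sum.)"""
    sums = {0}
    for part in parts:
        sums |= {s + part for s in sums}
    return b in sums

n, b = 6, 3                # arbitrary small parameters
count = sum(1 for p in permutations(range(n))
            if has_subpartition(cycle_type(p), b))
print(f"|T({n},{b})| = {count} out of {n}! permutations")
\end{verbatim}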
\section{Lower Bound}\label{Section3}
In Ford's proof for the integers, he expresses the size of $H(x,y,2y)$ in terms of ``a measure of the degree of clustering of the divisors of an integer $a$" which he defines as
$$L(a) = \mbox{meas}\Ll(a), \quad \quad \quad \Ll(a) = \bigcup_{d|a}[\log(d/2), \log d).$$
Again, here the importance of $2$ is just that it is the smallest prime integer. The analogue of $\log 2$ in the function field setting is then just the degree of the smallest prime, which is $1$. Hence, for a polynomial $A$ and a divisor $D$ of $A$, the corresponding interval we will want to consider is something of the form $[\deg(D)-1,\deg(D))$. However, since the $\deg$ function only takes integer values, we actually only care about the singleton $\deg(D)$. Hence, we will define
$$\Ll(A) = \{d : d=\deg(D) \mbox{ for some } D|A\}, \quad \quad \quad L(A)=|\Ll(A)|.$$
\begin{lem}\label{lowboundlem1}
For $b\leq n/2$,
$$|H(n,b)| \gg \frac{q^n}{b^2} \sum_{\deg(A)\leq b/8} \frac{L(A)}{|A|}$$
as $q^n\to\infty$.
\end{lem}
\begin{proof}
Consider the set of polynomials of the form $F=APB$ where $\deg(A) \leq b/8$, $P$ is a prime with $b-\deg(P)\in\Ll(A)$ and all the primes of $B$ have degree $\geq b$ or in $[b/4,3b/4]$. The condition on $P$ implies that $AP$ has a divisor of degree $b$. Moreover, we have $7b/8\leq \deg(P) \leq b$ and so every polynomial of this form has a unique representation. Fix $A,P$ and note that $\deg(B)=n-\deg(AP)\geq 7b/8$. If $\deg(B)\geq b$ then, by \eqref{smooth}, the number of such $B$ will be greater than
$$|\{B\in \M_{n-\deg(AP)} : \deg(P^-(B))\geq b\}| \asymp \frac{q^{n-\deg(AP)}}{b} = \frac{q^n}{b|AP|},$$
where $P^-(B)$ is the smallest prime divisor of $B$.
Otherwise, if $\deg(B)<b$, then $B$ will have at least two prime divisors from $[b/4,3b/4]$. Hence the number of such $B$ will be greater than
\begin{align*}
\sum_{\substack{d_1,d_2\in [b/4,3b/4] \\ d_1+d_2 = n-\deg(AP)}} \pi_q(d_1)\pi_q(d_2) & = \sum_{d = b/4}^{n-b/4-\deg(AP)} \pi_q(d)\pi_q(n-\deg(AP)-d) \\
& \gg \frac{q^{n}}{|AP|}\sum_{d = b/4}^{5b/4} \frac{1}{d(b-d)} \gg \frac{q^{n}}{b|AP|},
\end{align*}
where $\pi_q(n)$ is the number of prime polynomials of degree $n$.
Therefore,
$$|H(n,b)| \geq \sum_{\deg(A)\leq b/8} \sum_{\substack{P \\ b-\deg(P)\in \Ll(A)}} \sideset{}{^*}\sum_B 1 \gg \frac{q^n}{b} \sum_{\deg(A)\leq b/8} \frac{1}{|A|} \sum_{\substack{P \\ b-\deg(P)\in \Ll(A)}}\frac{1}{|P|},$$
where $\sideset{}{^*}\sum$ indicates we sum over all such $B$ described above.
Finally,
$$\sum_{\substack{P \\ b-\deg(P)\in \Ll(A)}}\frac{1}{|P|} = \sum_{\substack{d \\ b-d\in \Ll(A)}} \frac{\pi_q(d)}{q^d} \sim \sum_{\substack{d \\ b-d\in \Ll(A)}} \frac{1}{d} \gg \frac{L(A)}{b}$$
and this completes the proof.
\end{proof}
For any polynomial $A$, let $\tau(A)$ be the number of divisors of $A$ and $\tau_d(A)$ be the number of divisors of $A$ of degree $d$. Then we clearly have
$$\tau(A) = \sum_{d\in \Ll(A)} \tau_d(A).$$
Then, for any subset $\A$ of polynomials we have by Cauchy-Schwarz that
\begin{align*}
\left(\sum_{A\in \A} \frac{\tau(A)}{|A|} \right)^2 & = \left(\sum_{A\in\A} \sum_{d\in\Ll(A)} \frac{\tau_d(A)}{|A|}\right)^2 \leq \left(\sum_{A\in\A}\sum_{d\in \Ll(A)} \frac{1}{|A|}\right) \left( \sum_{A\in\A}\sum_{d\in\Ll(A)} \frac{\tau_d(A)^2}{|A|}\right)\\
& = \left(\sum_{A\in\A} \frac{L(A)}{|A|}\right)\left(\sum_{A\in\A} \frac{W(A)}{|A|}\right),
\end{align*}
where
$$W(A) = \sum_{d\in \Ll(A)}\tau_d^2(A) = |\{(D,D') : D,D'|A, \deg(D)=\deg(D')\}|.$$
Hence if we have any collection of disjoint sets of polynomials $\A_1,\dots,\A_t$, all of whose elements have degree at most $b/8$, then we get from Lemma \ref{lowboundlem1} that
\begin{align} \label{lowboundeq1}
|H(n,b)| \gg \frac{q^n}{b^2} \sum_{j=1}^t\sum_{A\in\A_j} \frac{L(A)}{|A|} \geq \frac{q^n}{b^2}\sum_{j=1}^t \frac{\left(\sum_{A\in\A_j} \frac{\tau(A)}{|A|}\right)^2}{\sum_{A\in \A_j} \frac{W(A)}{|A|}}
\end{align}
We will now construct appropriate sets that will give us the lower bound we desire. Towards this, partition the primes into subsets $D_1,D_2,\dots,$ such that $D_j$ consists of the primes whose degree lies in the interval $(\lambda_{j-1},\lambda_j]$, where $\lambda_j$ is chosen as large as possible so that
$$\sum_{\deg(P)\in (\lambda_{j-1},\lambda_j]} \frac{1}{|P|} \leq \log(2).$$
Such partitions exist as a consequence of \eqref{inverseprime}. In fact, \eqref{inverseprime} tells us that for any $\lambda_{j-1}<\lambda_j$, we have
$$\sum_{\deg(P)\in (\lambda_{j-1},\lambda_j]} \frac{1}{|P|} = \log(\lambda_j) - \log(\lambda_{j-1}) +O\left(\frac{1}{\lambda_{j-1}}\right).$$
Therefore, there exists some constant $K$ such that
$$2^{j-K} \leq \lambda_j \leq 2^{j+K}.$$
Finally, for any $\bf{b}=(b_1,\dots,b_J)$ let $\A(\bf{b})$ be the set of square-free polynomials with exactly $b_j$ prime factors coming from the interval $D_j$.
\begin{lem}\label{WAlem}
$$\sum_{A\in \A(\bf{b})} \frac{W(A)}{|A|} \ll \frac{(2\log(2))^{b_1+\dots+b_J}}{b_1!\cdots b_J!} \sum_{j=1}^J 2^{-j+b_1+\cdots+b_j}.$$
\end{lem}
\begin{proof}
Let $B = b_1+\cdots+b_J$ and $A=P_1\cdots P_B$ such that
\begin{align}\label{intervalcond}
P_1,\dots,P_{b_1}\in D_1, P_{b_1+1},\dots,P_{b_1+b_2}\in D_2 \mbox{ and so on.}
\end{align}
Then $W(A)$ is the number of pairs of subsets $Y,Z \subset \{1,\dots,B\}$ such that
\begin{align}\label{YZcond}
\sum_{i\in Y} \deg(P_i) = \sum_{i\in Z} \deg(P_i).
\end{align}
Hence,
\begin{align}\label{WAbound1}
\sum_{A\in \A(\bf{b})} \frac{W(A)}{|A|} \leq \frac{1}{b_1!\cdots b_J!} \sum_{Y,Z \subset \{1,\dots,B\}} \sideset{}{'}\sum_{P_1,\dots,P_B} \frac{1}{|P_1|\cdots|P_B|}.
\end{align}
where $\sideset{}{'}\sum$ indicates that we are summing over all tuples $P_1,\dots,P_B$ that satisfy \eqref{intervalcond} and \eqref{YZcond}.
Consider the diagonal terms when $Y=Z$ of \eqref{WAbound1}:
$$\sum_{Y=Z \subset\{1,\dots,B\}}\sideset{}{'}\sum_{P_1,\dots,P_B} \frac{1}{|P_1|\cdots|P_B|} \leq \sum_{Y\subset\{1,\dots,B\}} \prod_{j=1}^J \left(\sum_{P_j\in D_j} \frac{1}{|P_j|}\right)^{b_j} \leq (2\log(2))^B $$
For the off-diagonal terms when $Y\not=Z$, let $I$ be the maximum element of $(Y\cup Z) \setminus (Y\cap Z)$. If we fix all the other $P_i$, then this fixes the degree of $P_I$ by \eqref{YZcond}. Moreover, if we let $E(I)$ be such that $P_I\in D_{E(I)}$ then $\deg(P_I)\geq \lambda_{E(I)-1} \gg 2^{E(I)}$. Therefore,
$$\sum_{P_I} \frac{1}{|P_I|} = \frac{\pi_q(\deg(P_I))}{q^{\deg(P_I)}}\ll \frac{1}{\deg(P_I)} \ll 2^{-E(I)}.$$
Hence for a fixed $Y\not=Z$ we get
$$\sideset{}{'}\sum_{P_1,\dots,P_B} \frac{1}{|P_1|\cdots |P_B|} \ll 2^{-E(I)}\log(2)^B. $$
Finally, there are $2^{B+I-1}$ pairs of $Y,Z$ for each fixed $I$ and we get,
\begin{align*}
\sum_{A\in \A(b)} \frac{W(A)}{|A|} & \leq \frac{(2\log(2))^B}{b_1!\cdots b_J!} \left[ 1+\sum_{I=1}^B 2^{-E(I)}2^{I-1} \right]\\
& \ll \frac{(2\log(2))^B}{b_1!\cdots b_J!} \sum_{j=1}^J 2^{-j}\sum_{I: E(I)=j}2^{I} \\
& \ll \frac{(2\log(2))^{b_1+\dots+b_J}}{b_1!\cdots b_J!} \sum_{j=1}^J 2^{-j} 2^{b_1+\cdots+b_j},
\end{align*}
where the last inequality follows from the fact that $E(I)=j$ if and only if $b_1+\cdots+b_{j-1}< I \leq b_1+\cdots+b_j$.
\end{proof}
\begin{lem}\label{tauAlem}
If we suppose that $b_i=0$ for $i<M$ and $b_j\leq Mj$ for a sufficiently large $M$, then
$$\sum_{A\in \A(\textbf{b})} \frac{\tau(A)}{|A|} \gg \frac{(2\log(2))^{b_M+\cdots+b_J}}{b_M!\cdots b_J!}$$
\end{lem}
\begin{proof}
We have
$$\sum_{A\in \A(\textbf{b})} \frac{\tau(A)}{|A|} = 2^{b_M+\cdots+b_J} \prod_{j=M}^J \frac{1}{b_j!} \left(\sum_{\substack{P_1,\cdots,P_{b_j} \in D_j \\ P_i \mbox{ distinct}}} \frac{1}{|P_1\cdots P_{b_j}|} \right)$$
By the choice of $D_j$ and the prime polynomial theorem, we get that there is an absolute constant $C$ such that
$$\sum_{P\in D_j} \frac{1}{|P|} \geq \log(2) - \sum_{\deg(P)=\lambda_j+1} \frac{1}{|P|} \geq \log(2) - \frac{1}{\lambda_j+1} - \frac{C}{q^{\lambda_j/2}}.$$
Now, fix $P_1,\dots,P_k \in D_j$ and consider the sum
\begin{align*}
\sum_{\substack{P\in D_j \\ P\not=P_1,\dots,P_k}} \frac{1}{|P|} & = \sum_{P\in D_j} \frac{1}{|P|} - \sum_{i=1}^k \frac{1}{|P_i|} \geq \log(2) - \frac{1}{\lambda_j+1}- \frac{C}{q^{\lambda_j/2}} - \frac{k}{q^{\lambda_{j-1}}}
\end{align*}
Therefore,
\begin{align*}
\prod_{j=M}^J \left(\sum_{\substack{P_1,\cdots,P_{b_j} \in D_j \\ P_i \mbox{ distinct}}} \frac{1}{|P_1\cdots P_{b_j}|} \right) & \geq \prod_{j=M}^J \left(\log(2)-\frac{1}{\lambda_j+1}- \frac{C}{q^{\lambda_j/2}} - \frac{b_j}{q^{\lambda_{j-1}}} \right)^{b_j} \\
& \geq \log(2)^{b_M+\cdots+b_J} \prod_{j=M}^J \left(1 - \frac{1}{\log(2)}\left(\frac{1}{\lambda_j+1}+ \frac{C}{q^{\lambda_j/2}} + \frac{b_j}{q^{\lambda_{j-1}}} \right) \right)^{b_j}.
\end{align*}
So it remains to show that this remaining product is bounded above. Indeed if we denote
$$C_j := \frac{1}{\log(2)}\left(\frac{1}{\lambda_j+1}+ \frac{C}{q^{\lambda_j/2}} + \frac{b_j}{q^{\lambda_{j-1}}} \right) \ll \frac{1}{2^j}$$
then
\begin{align*}
-\log \prod_{j=M}^J \left(1 - C_j\right)^{b_j} & = -\sum_{j=M}^J b_j \log \left(1-C_j\right) \\
& = \sum_{j=M}^J b_j\sum_{n=1}^{\infty} \frac{C_j^n}{n} \\
& \ll \sum_{j=M}^J \sum_{n=1}^{\infty} \frac{j}{2^{nj}n} = O(1).
\end{align*}
This completes the proof.
\end{proof}
Finally, set $k = \lfloor \log_2(b) - 2M\rfloor$ and let $\mathcal{B}$ be the set of $\mathbf{b} = (b_1,\dots,b_J)$ with $J=M+k-1$, $b_j=0$ for $j\leq M$, $b_j \leq \min(Mj, M(J-j+1))$. Then for every $A\in \A(\mathbf{b})$, we have
\begin{align*}
\deg(A) & \leq \sum_{j=M}^J b_j \lambda_j \leq M \sum_{\ell=0}^{J-M} (\ell+1) 2^{J+K-\ell} \\
& \leq 2^{K+1}M2^{J+1} = 2^{K+1}M2^{M+k} \\
& \leq 2^{K+1} \frac{M}{2^M} b \leq \frac{b}{8}
\end{align*}
for $M$ sufficiently large.
Therefore, \eqref{lowboundeq1} gives us
$$|H(n,b)| \gg \frac{q^n}{b^2} \sum_{\mathbf{b}\in \mathcal{B}}\frac{\left(\sum_{A\in\A(\mathbf{b})} \frac{\tau(A)}{|A|}\right)^2}{\sum_{A\in \A(\mathbf{b})} \frac{W(A)}{|A|}}.$$
Now, if we let
$$f(\mathbf{b}) = \sum_{h=M}^J 2^{M-1-h+b_M+\cdots+b_h}$$
then we have by Lemma \ref{WAlem} that
\begin{align}\label{WAbound2}
\sum_{A\in \A(\mathbf{b})} \frac{W(A)}{|A|} \ll \frac{(2\log(2))^k}{b_M!\cdots b_J!} \left(1+2^{1-M}f(\mathbf{b})\right) \leq \frac{(2\log(2))^k}{b_M!\cdots b_J!}f(\mathbf{b})
\end{align}
since $f(\mathbf{b})\geq 1/2$. Hence, by Lemma \ref{tauAlem}, \eqref{lowboundeq1} and \eqref{WAbound2}, we get
$$|H(n,b)| \gg \frac{q^{n}(2\log(2))^k}{b^2} \sum_{\mathbf{b}\in\mathcal{B}} \frac{1}{b_M!\cdots b_J! f(\mathbf{b})}.$$
Finally, Ford in \cite{F2} shows that
$$\sum_{\mathbf{b}\in\mathcal{B}} \frac{1}{b_M!\cdots b_J! f(\mathbf{b})} \gg \frac{k^{k-1}}{k!} \gg \frac{e^k}{k^{3/2}},$$
where the last inequality is due to Stirling's formula.
Therefore, since $k \sim \log(b)/\log(2)$, we get
$$|H(n,b)| \gg \frac{q^{n}}{b^\delta \log(b)^{3/2}}$$
which finishes the proof of the lower bound.
\section{Upper Bound}\label{Section4}
Before we begin, we need some basic bounds for $L(A)$.
\begin{lem}\label{Upboundlem1}
\begin{enumerate}
\item $L(A) \leq \min(\tau(A), \deg(A))$
\item If $(A,B)=1$, then $L(AB)\leq \tau(B)L(A)$
\item If $P_1,\dots,P_k$ are distinct primes, then $L(P_1\cdots P_k)\leq \min_{0\leq j \leq k} (2^{k-j} \deg(P_1\cdots P_j))$
\end{enumerate}
\end{lem}
\begin{proof}
For part $(1)$, we have
$$L(A) = \sum_{d\in\Ll(A)} 1 \leq \sum_{D|A} 1 = \tau(A).$$
While on the other hand, $\Ll(A) \subset \{1,\dots,\deg(A)\}$ and so $L(A) \leq \deg(A)$.
For part $(2)$, we have
$$\Ll(AB) = \bigcup_{D|B} \{d+\deg(D) : d\in \Ll(A)\}$$
and so $L(AB) \leq \sum_{D|B} L(A) = \tau(B)L(A)$.
Part $(3)$ follows from applying parts $(1)$ and $(2)$ with $A = P_1\cdots P_j$ and $B=P_{j+1}\cdots P_k$.
\end{proof}
We shall first prove the upper bound in the case of squarefree polynomials. That is, let $H^*(n,b)$ be the set of squarefree polynomials in $M_n$ which have a divisor of degree $b$.
\begin{lem} \label{sqfreelem}
For $b\leq n/2$,
$$|H^*(n,b)| \ll q^n (S(b)+S(n-b)),$$
as $q^n\to\infty$, where
$$S(d) = \sum_{\substack{\deg(P^+(A))\leq d \\ \mu^2(A)=1}} \frac{L(A)}{|A|\left(\deg(P^+(A))+d-\deg(A)\right)^2}$$
and $P^+(A)$ denotes the largest prime divisor of $A$ and $\mu$ is the M\"obius function.
\end{lem}
\begin{proof}
Let $F\in H^*(n,b)$. Then $F=G_1G_2$ where $G_1\in M_b$ and $G_2\in M_{n-b}$. Moreover, necessarily, $G_1$ and $G_2$ are squarefree and coprime.
First, suppose that $\deg(P^+(G_1))\leq \deg(P^+(G_2))$ and choose $P|G_1$ such that $\deg(P)=\deg(P^+(G_1))$. Write $F=ABP$, where $\deg(P^+(A))\leq \deg(P)$ and all primes dividing $G_1$, except for $P$, divide $A$; and where $\deg(P^-(B))\geq \deg(P)$ and all primes dividing $G_2$ of degree greater than or equal to $\deg(P)$ divide $B$.
Then, by construction, $AP$ has a divisor of degree $b$. Therefore, $\deg(P)\geq b-\deg(A)$. Moreover, if we fix $A$ and $P$, we get that $B\in \M_{n-\deg(AP)}$ with $\deg(P^-(B))\geq \deg(P)$. Therefore, by \eqref{smooth} the number of such $B$ will be
$$\ll \frac{q^n}{|AP|\deg(P)}$$
We know that $A$ has a divisor of degree $b-\deg(P)$. So we get that
\begin{align} \label{sqfreelemeq1}
\sum_{\substack{\deg(P)\geq C \\ b-\deg(P)\in \Ll(A)}} \frac{1}{|P|\deg(P)} &\ll \frac{1}{C} \sum_{\substack{d\in\Ll(A) \\ b-d\geq C }} \sum_{P\in M_{b-d}} \frac{1}{|P|}\\
& \ll \frac{1}{C} \sum_{\substack{d\in\Ll(A) \\ b-d\geq C }} \frac{1}{b-d} \ll \frac{L(A)}{C^2} \nonumber
\end{align}
We have that $\deg(P) \geq \max(\deg(P^+(A)),b-\deg(A))$. The case where $\deg(P^+(A))\leq b-\deg(A)$ will contribute to $H^*(n,b)$ at most
\begin{align*}
& q^n \sum_{\substack{ \deg(P^+(A)) \leq b-\deg(A) \\ \mu^2(A)=1}} \frac{1}{|A|} \sum_{\substack{\deg(P)\geq b-\deg(A) \\ b-\deg(P)\in \Ll(A)}} \frac{1}{|P|\deg(P)} \\
\ll & q^n \sum_{\substack{\deg(P^+(A))\leq b \\ \mu^2(A)=1}} \frac{L(A)}{|A|(b-\deg(A))^2} \\
\ll & q^n S(b),
\end{align*}
where the last inequality comes from the fact that $b-\deg(A) \geq (\deg(P^+(A))+b-\deg(A))/2$ in this case.
In the case where $\deg(P^+(A))\geq b-\deg(A)$, we have $\deg(P) \geq \deg(P^+(A))$. Moreover, since $AP$ has a divisor of degree $b$, we must have $\deg(P^+(A))\leq b$. Hence this case contributes to $H^*(n,b)$ at most
\begin{align*}
&q^n \sum_{\substack{b-\deg(A)\leq \deg(P^+(A))\leq b \\ \mu^2(A)=1}} \frac{1}{|A|} \sum_{\substack{\deg(P) \geq \deg(P^+(A)) \\ b-\deg(P) \in \Ll(A)}} \frac{1}{|P|\deg(P)}\\
\ll & q^n \sum_{\substack{ \deg(P^+(A))\leq b \\ \mu^2(A)=1}} \frac{L(A)}{|A|\deg(P^+(A))^2}\\
\ll & q^n S(b),
\end{align*}
where again the last inequality comes from the fact that $\deg(P^+(A)) \geq (\deg(P^+(A))+b-\deg(A))/2$ in this case.
Therefore, we get a contribution of at most $q^nS(b)$ under the assumption that $\deg(P^+(G_1))\leq \deg(P^+(G_2))$. Now, suppose $F=G_1G_2$ with $G_1\in M_b$, $G_2\in M_{n-b}$ such that $\deg(P^+(G_2))\leq \deg(P^+(G_1))$ and choose $P|G_2$ such that $\deg(P)=\deg(P^+(G_2))$. Then write $F=ABP$, where $\deg(P^+(A))\leq \deg(P)$ and all primes dividing $G_2$, except for $P$, divide $A$; and where $\deg(P^-(B))\geq \deg(P)$ and all the primes dividing $G_1$ whose degree is greater than or equal to $\deg(P)$ divide $B$.
Following the same logic as above with $b$ replaced with $n-b$, we get that this contributes at most
$$\ll q^n \sum_{\substack{\deg(P^+(A))\leq n-b \\ \mu^2(A)=1}} \frac{L(A)}{|A|\left(n-b-\deg(A)+\deg(P^+(A))\right)^2} = q^nS(n-b)$$
which concludes the proof.
\end{proof}
Define
$$T(d,m) = \sum_{\substack{\deg(P^+(A))\leq d \\ \deg(A)\geq m, \mu^2(A)=1}} \frac{L(A)}{|A|}$$
If either $\deg(A)\leq d/2$ or $\deg(P^+(A))\geq \epsilon d$, then $(d-\deg(A)+\deg(P^+(A)))^2 \gg d^2$. Conversely if $\deg(P^+(A))\leq \epsilon d$ then we can find a $0\leq g \leq \log(d)+\log(\epsilon)$ such that $e^g \leq \deg(P^+(A))\leq e^{g+1}$ and we get
\begin{align*}
S(d) & = \sum_{\substack{\deg(P^+(A))\leq d \\ \mu^2(A)=1}} \frac{L(A)}{|A|\left(\deg(P^+(A))+d-\deg(A)\right)^2} \\
&\ll \frac{T(d,1)}{d^2} + \sum_{\substack{\deg(P^+(A))\leq \epsilon d \\ \deg(A)\geq d/2, \mu^2(A)=1}} \frac{L(A)}{|A|\left(\deg(P^+(A))+d-\deg(A)\right)^2}\\
&\ll \frac{T(d,1)}{d^2} + \sum_{g=0}^{\log(d)+\log(\epsilon)} \sum_{\substack{e^{g-1}\leq \deg(P^+(A))\leq e^g \\ \deg(A)\geq d/2, \mu^2(A)=1}} \frac{L(A)}{|A|\left(\deg(P^+(A))+d-\deg(A)\right)^2}\\
&\ll \frac{T(d,1)}{d^2} + \sum_{g=0}^{\log(d)+\log(\epsilon)} e^{-2g}T(e^g,d/2).
\end{align*}
Finally define
$$T_k(d,m) = \sum_{\substack{\deg(P^+(A))\leq d \\ \deg(A)\geq m, \mu^2(A)=1 \\ \omega(A)=k}} \frac{L(A)}{|A|},$$
where $\omega(A)$ is the number of prime divisors of $A$.
\begin{lem}\label{Tklem}
For $d$ large and $m\geq 1$, let $v=\lfloor \log_2(d)\rfloor$. Then for $1\leq k \leq 10v$, we have
$$T_k(d,m) \ll e^{-m/d} (2\log(d))^k \frac{1+|v-k|^2}{(k+1)!(2^{k-v}+1)}$$
\end{lem}
\begin{proof}
Firstly,
$$T_k(d,m) \leq \sum_{\substack{\deg(P^+(A))\leq d \\ \deg(A)\geq m, \omega(A)=k}} \frac{L(A)}{|A|} \leq e^{-m/d} \sum_{\substack{\deg(P^+(A))\leq d \\ \omega(A)=k}} \frac{L(A)}{|A|^{1-1/\log(q) d}}$$
Now, by \eqref{inverseprime2} we get
$$\sum_{\deg(P)\leq d} \frac{1}{|P|^{1-1/\log(q)d}} = \log(d) + O(1).$$
Therefore, we can partition the interval $[1,d]$ into subintervals $E_0, \dots, E_{v+K-1}$ (for some constant $K$) such that for all $j$, $E_j$ is the next largest interval such that
$$\sum_{\substack{P\in M_e\\e\in E_j}} \frac{1}{|P|^{1-1/\log(q) d}} \leq \log(2)$$
Consequently, $\deg(P)\in E_j$ implies that $\deg(P)\leq 2^{j+K}$.
Now, let $A=P_1\cdots P_k$ with $\deg(P_1)\leq \dots \leq \deg(P_k)\leq d$. Let $j_i$ be such that $P_i\in E_{j_i}$. Then Lemma \ref{Upboundlem1} says
$$L(A) \leq \min_{0\leq t \leq k} 2^{k-t} \deg(P_1\cdots P_t) \leq 2^{k+K} \min_{0\leq t \leq k} 2^{-t} \sum_{i=1}^t 2^{j_i}$$
Therefore, if we define
$$F(\mathbf{j}) := \min_{0\leq t \leq k} 2^{-t} \sum_{i=1}^t 2^{j_i}$$
then
$$T_k(d,m) \leq e^{-m/d} 2^{k+K} \sum_{\mathbf{j}\in J} F(\mathbf{j}) \sum_{\substack{P_1,\dots,P_k \\ P_i\in E_{j_i}}} \frac{1}{|P_1\dots P_k|^{1-1/\log(q) d}} $$
where $J$ is the set of all vectors $\mathbf{j}$ such that $j_1\leq \dots \leq j_k \leq v+K-1$.
Fix a $\mathbf{j} = (j_1,\dots, j_k)$. For each $0\leq j\leq v+K-1$, let $b_j$ be the number of $i$ such that $j_i=j$. Then the inner sum over $P_1,\dots,P_k$ will be less than
\begin{align*}
\prod_{j=0}^{v+K-1} \frac{1}{b_j!} \left(\sum_{P\in E_j} \frac{1}{|P|^{1-1/\log(q) d}}\right)^{b_j} & \leq \frac{\log(2)^k}{b_0!\cdots b_{v+K-1}!}\\
& = ((v+K)\log(2))^k \int_{R(\mathbf{j})} 1 d\mathbf{\xi}\\
& \leq e^{10K} (v\log(2))^k \int_{R(\mathbf{j})} 1 d\mathbf{\xi}
\end{align*}
where
$$R(\mathbf{j}) = \{0\leq \xi_1\leq \dots \leq \xi_k \leq 1: j_i \leq (v+K)\xi_i \leq j_i+1 \forall i\}$$
and the last inequality uses the hypothesis that $k \leq 10v$.
Finally, Ford in \cite{F2} shows that
$$\sum_{\mathbf{j}\in J} F(\mathbf{j}) \int_{R(\mathbf{j})} 1 d\mathbf{\xi} \ll \frac{1+|v-k|^2}{(k+1)!(2^{k-v}+1)}$$
and the lemma follows.
\end{proof}
\begin{lem}\label{Tlem}
$$T(d,m) \ll e^{-m/d} \frac{d^{2-\delta}}{\log(d)^{3/2}}$$
\end{lem}
\begin{proof}
We clearly have
$$T(d,m) = \sum_k T_k(d,m)$$
Then if $v=\lfloor \log_2(d) \rfloor$, Lemma \ref{Tklem} says that
$$\sum_{v \leq k \leq 10v} T_k(d,m) \ll e^{-m/d} \sum_{v \leq k \leq 10v} \frac{1+(k-v)^2}{2^{k-v}} \frac{(2\log(d))^k}{(k+1)!} \ll e^{-m/d} \frac{(2\log(d))^v}{(v+1)!}.$$
For $1\leq k \leq v$, we have
\begin{align*}
\sum_{1 \leq k \leq v} T_k(d,m) & \ll 2^ve^{-m/d} \sum_{1 \leq k \leq v} \frac{(1+(v-k)^2)(\log(d))^k}{(k+1)!} \\
& = e^{-m/d}(2\log(d))^v\sum_{0\leq k \leq v-1} \frac{1+k^2}{\log(d)^k (v-k+1)!} \\
& \ll e^{-m/d} \frac{(2\log(d))^v}{(v+1)!} \sum_{0\leq k \leq v-1} (1+k^2) \left(\frac{v+1}{\log(d)}\right) \cdots \left(\frac{v-k+1}{\log(d)}\right) \\
& \ll e^{-m/d} \frac{(2\log(d))^v}{(v+1)!} \sum_{0 \leq k \leq v-1}\frac{1+k^2}{\log(2)^k} \\
& \ll e^{-m/d} \frac{(2\log(d))^v}{(v+1)!},
\end{align*}
where the second last inequality comes from the fact that $v-j \leq \log_2(d)$ for all $j$.
For $k\geq 10v$, we use the Lemma \ref{Upboundlem1} and the definition of $T_k(d,m)$ to get
\begin{align*} \sum_{k\geq 10v} T_k(d,m) & = \sum_{k \geq 10v} \sum_{\substack{\deg(P^+(A))\leq d \\ \deg(A)\geq m, \mu^2(A)=1 \\ \omega(A)=k}} \frac{L(A)}{|A|} \ll e^{-m/d}\sum_{k \geq 10v} 2^k \sum_{\substack{\deg(P^+(A))\leq d \\ \omega(A)=k, \mu^2(A)=1}} \frac{1}{|A|^{1-1/d}}\\
& \ll e^{-m/d}\sum_{k \geq 10v} \frac{2^k}{k!} \left(\sum_{\deg(P)\leq d} \frac{1}{|P|^{1-1/d}}\right)^k \ll e^{-m/d} \sum_{k \geq 10v} \frac{2^k}{k!} \left(\log(d)+O(1)\right)^k\\
& \ll e^{-m/d} \frac{(2\log(d))^{10v}}{(10v)!} \ll e^{-m/d} \frac{(2\log(d))^{v}}{(v+1)!}
\end{align*}
Finally, using Stirling's bound, we get the desired result.
\end{proof}
Hence,
\begin{align*}
S(d) & \ll \frac{T(d,1)}{d^2} + \sum_{g=1}^{\log(\epsilon d)} e^{-2g}T(e^g,d/2) \\
& \ll \frac{e^{-1/d}}{d^{\delta}(\log(d))^{3/2}} + \sum_{g=1}^{\log(\epsilon d)} \frac{1}{e^{d/2e^g}e^{\delta g} g^{3/2} }\\
& \ll \frac{1}{d^\delta(\log(d))^{3/2}}
\end{align*}
and as long as we assume that $b\leq n/2$, then
\begin{align*}
|H^*(n,b)| & \ll q^n(S(b)+S(n-b))\\
& \ll q^n \left(\frac{1}{b^\delta(\log(b))^{3/2}} + \frac{1}{(n-b)^\delta(\log(n-b))^{3/2}}\right) \\
& \ll \frac{q^n}{b^\delta(\log(b))^{3/2}}
\end{align*}
It remains now to deduce the correct upper bound from the square-free case.
\begin{lem}
$$|H(n,b)| \ll \frac{q^n}{b^\delta(\log(b))^{3/2}}.$$
\end{lem}
\begin{proof}
Write $F=F'F''$ where $F'$ is square-free, $F''$ is square-full and $(F',F'')=1$. The number of $F$ with $\deg(F'')\geq (4+\epsilon)\log(b)$ will be less than
$$ q^n \sum_{\substack{F'' \mbox{ square-full} \\ \deg(F'')\geq (4+\epsilon)\log(b)}}\frac{1}{|F''|} \ll \frac{q^n}{b^2}$$
by \eqref{sqfull1}.
Now suppose that $\deg(F'')\leq (4+\epsilon)\log(b)$. Then, since $F$ has a divisor of degree $b$, there is a $D|F''$ such that $F'$ has a divisor of degree $b-\deg(D)$. Thus
\begin{align*}
|H(n,b)| & \leq \sum_{\substack{F'' \mbox{ square-full} \\ \deg(F'')\leq (4+\epsilon)\log(b)}} \sum_{D|F''} |H^*(n-\deg(F''), b-\deg(D))| + O\left(\frac{q^n}{b^2}\right) \\
& \ll q^n \sum_{\substack{F'' \mbox{ square-full} \\ \deg(F'')\leq (4+\epsilon)\log(b)}} \sum_{D|F''} \frac{1}{|F''| (b-\deg(D))^{\delta} (\log(b-\deg(D)))^{3/2}} + O\left(\frac{q^n}{b^2}\right)\\
& \ll \frac{q^n}{b^\delta(\log(b))^{3/2}} \sum_{\substack{F'' \mbox{ square-full} \\ \deg(F'')\leq (4+\epsilon)\log(b)}} \frac{\tau(F'')}{|F''|} + O\left(\frac{q^n}{b^2}\right)\\
& \ll\frac{q^n}{b^\delta(\log(b))^{3/2}},
\end{align*}
where the last inequality is due to \eqref{sqfull2}.
\end{proof}
| {
"timestamp": "2018-04-24T02:18:15",
"yymm": "1804",
"arxiv_id": "1804.08483",
"language": "en",
"url": "https://arxiv.org/abs/1804.08483",
"abstract": "Erdős first showed that the number of positive integers up to $x$ which can be written as a product of two number less than $\\sqrt{x}$ has zero density. Ford then found the correct order of growth of the set of all these integers. We will use the tools developed by Ford to answer the analogous question in the function field setting. Finally, we will use a classical result relating factorization of polynomials to factorization of permutations to recover a result of Eberhard, Ford and Green of an analogous multiplication table problem for permutations.",
"subjects": "Number Theory (math.NT)",
"title": "Erdős' Multiplication Table Problem for Function Fields and Symmetric Groups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9915543723101534,
"lm_q2_score": 0.7154239897159438,
"lm_q1q2_score": 0.7093817850584183
} |
https://arxiv.org/abs/2211.02319 | Evaluating a distance function | Computing the distance function to some surface or line is a problem that occurs very frequently. There are several ways of computing a relevant approximation of this function, using for example technique originating from the approximation of Hamilton Jacobi problems, or the fast sweeping method. Here we make a link with some elliptic problem and propose a very fast way to approximate the distance function. |
\section{Introduction}
In many cases, one has to evaluate the distance function to a surface $\Gamma_D$ \remi{which is part of the boundary of an open set $\Omega\subset \mathbb R^d$}. An example in fluid mechanics is that of turbulence modelling: in some models, one of the parameters in the evaluation of the turbulent viscosity is the distance to the airfoil. Other examples can be found in medical image processing, surface reconstruction, etc. There are many ways to evaluate the distance function. One technique is to numerically evaluate the viscosity solution of the Eikonal equation,
\begin{equation}
\label{eq:1}
\begin{split}
\Vert \nabla u\Vert -1=0 & \text{~}\mathbf{x} \in\remi{ \Omega}\\
u(\mathbf{x})=0 & \text{ if and only if } \mathbf{x}\in \Gamma_D\\
u(\mathbf{x})\geq 0.
\end{split}
\end{equation}
In this formulation, only one boundary condition is prescribed, the Dirichlet one on $\Gamma_D$; nothing is said on the other parts of $\partial \Omega$. Numerical techniques for this can be found in \cite{shu,abgrall}. This can be improved by using a fast marching technique in the spirit of \cite{sethian}, or techniques \remi{coming} from computer science. From time to time, one can see in the literature methods that compute an approximation of the distance function as the solution of some Laplace equation; an example can be found in \cite{nishi} and the references therein. \remi{Other references, where elliptic problems are considered, can be found in \cite{kuzmin,adams}. They solve \eqref{eq:1} by minimizing $\big (\Vert \nabla u\Vert-1\big)^2$ using a variational formulation with continuous or discontinuous finite elements that needs to be penalized to enforce the Dirichlet boundary conditions. It is interesting to notice that their formulation is relatively close to ours, although obtained through a completely different path.}
There is no obvious link between \eqref{eq:1}, which is of hyperbolic nature, and an elliptic problem. The purpose of this small note is to provide such a link, via the Hopf-Cole transform, and to provide a very fast algorithm to \Remi{evaluate} the solution of \eqref{eq:1} (compared to explicit algorithms for computing the viscosity solution of \eqref{eq:1}), at least if one accepts a small viscosity term (which will nevertheless be present in any numerical method of PDE origin) and discretisation errors.
The format of this note is the following. First, we recall the Hopf-Cole transform and show how it can be applied to the steady Eikonal equation. This leads to an elliptic problem on a function constructed from the distance. We discuss the boundary conditions for this problem, and in particular for the part of $\partial \Omega$ which is not $\Gamma_D$. Then we provide a numerical method, and show its behaviour on unstructured triangular meshes.
\section{The problem}
We want to solve the following problem:
Let $\Omega\subset \mathbb R^n$ be open, and set $\partial \Omega=\Gamma_D\cup \Gamma_S$, with $\Gamma_D\cap \Gamma_S$ of empty interior. We want to compute the distance function to $\Gamma_D$. We \remi{ consider} the problem of finding the viscosity solution of
\begin{equation}
\label{HJ}
\begin{split}
\Vert \nabla u \Vert -1=0 & \quad \mathbf{x}\in \Omega\\
u=0 & \quad\mathbf{x} \in \Gamma_D\\
u=+\infty & \quad\mathbf{x} \in \Gamma_S
\end{split}
\end{equation}
Of course $\Gamma_S$ can be empty, but $\Gamma_D$ is never empty by assumption. On $\Gamma_S$ we have set a Soner-type boundary condition, see \cite{barles} for example.
For the sake of completeness, we recall the notion of viscosity solution for \eqref{HJ}:
Let $\varphi\in C^1(\overline{\Omega})$, and let $\mathbf{x}_0$ be a point where $u-\varphi$ reaches a local minimum: \eqref{HJ} means that if $\mathbf{x}_0\in \Omega$,
$\Vert \nabla \varphi(\mathbf{x}_0) \Vert -1 \geq 0,$ if $\mathbf{x}_0\in \Gamma_D$,
$\max\big (\Vert \nabla \varphi(\mathbf{x}_0) \Vert -1, u(\mathbf{x}_0)\big )\geq 0$, while if $\mathbf{x}_0\in \Gamma_S$,
$\Vert \nabla \varphi(\mathbf{x}_0) \Vert -1\geq 0.$
If $\mathbf{x}_0$ is a local maximum of $u-\varphi$, we get:
if $\mathbf{x}_0\in \Omega$,
$\Vert \nabla \varphi(\mathbf{x}_0) \Vert -1 \leq 0,$ if $\mathbf{x}_0\in \Gamma_D$,
$\min\big (\Vert \nabla \varphi(\mathbf{x}_0) \Vert -1, u(\mathbf{x}_0)\big )\leq 0$, and
there is no condition on $\Gamma_S$.
Here we propose a method where we solve a viscous regularisation of \eqref{HJ}, or more precisely of
\begin{equation}
\label{HJ2}
\begin{split}
\Vert \nabla u \Vert^2 -1=0 & \quad \mathbf{x}\in \Omega\\
u=0 & \quad\mathbf{x} \in \Gamma_D\\
u=+\infty & \quad\mathbf{x} \in \Gamma_S
\end{split}
\end{equation}
since the two problems have the same viscosity solutions.
We then consider, for $\nu>0$, the problem (in the viscosity sense, see \cite{barles} for second order problems)
\begin{equation}
\label{viscous:HJ2}
\begin{split}
\Vert \nabla u \Vert^2 -1=\nu \Delta u & \quad \mathbf{x}\in \Omega\\
u=0 & \quad\mathbf{x} \in \Gamma_D\\
u=+\infty & \quad\mathbf{x} \in \Gamma_S
\end{split}
\end{equation}
\section{Rewriting the problem}
If, instead of looking at the steady problem \eqref{viscous:HJ2}, we consider the unsteady one,
$$\dpar{u}{t}+\Vert \nabla u\Vert^2 -1=\nu \Delta u$$
with the same initial and boundary conditions, this \remi{is} "almost" the \remi{viscous} Burgers equation,
$$\dpar{u}{t}+\frac{1}{2}\Vert \nabla u\Vert^2 =\nu \Delta u,$$
for which it is well known that it can be transformed into the heat equation by the Hopf-Cole transform,
\begin{equation}\label{eq:3:3}u(\mathbf{x}, t)=-2\nu\log\big (\varphi(\mathbf{x}, t)\big ) . \end{equation}
The proof is classical (though done in one dimension in most textbooks), but we nevertheless repeat it.
Our notation is the following: $\nabla u$ represents the first derivative of the function $u$: for any $h\in \mathbb R^d$,
$$ u(\mathbf{x}+h)=u(\mathbf{x})+ \nabla u(\mathbf{x})\cdot h+o(h), $$
and $D^2u$ represents the second derivative (the Hessian) of $u$:
$$\remi{\nabla u(\mathbf{x}+h)=\nabla u (\mathbf{x})+D^2u(\mathbf{x})\cdot h +o(h)}.$$
With this in mind, we have, from \eqref{eq:3:3}, that
$$\nabla u(\mathbf{x},t)=-2\nu\dfrac{\nabla\varphi(\mathbf{x},t)}{\varphi(\mathbf{x},t)}, \qquad \dpar{u}{t}=-2\nu\dfrac{\dpar{\varphi(\mathbf{x},t)}{t}}{\varphi(\mathbf{x},t)}$$
and
$$D^2u(\mathbf{x}, t)=-2\nu \dfrac{D^2\varphi(\mathbf{x},t)}{\varphi(\mathbf{x},t)}+2\nu \dfrac{\nabla\varphi(x,t)\otimes \nabla\varphi(x,t)}{\varphi(\mathbf{x},t)^2}, $$
so that
$$\Delta u=\text{trace}\big (D^2u(\mathbf{x}, t)\big )=-2\nu \dfrac{\Delta \varphi(\mathbf{x},t)}{\varphi(\mathbf{x},t)}+2\nu \dfrac{\Vert \nabla\varphi(x,t)\Vert^2}{\varphi(\mathbf{x},t)^2}.$$
Hence plugging this into the Burgers equation, we have
\remi{\begin{equation*}
\begin{split}
\dpar{u}{t}+\frac{1}{2}\Vert \nabla u\Vert ^2-\nu \Delta u&=-2\nu\dfrac{\dpar{\varphi(\mathbf{x},t)}{t}}{\varphi(\mathbf{x},t)}+\frac{1}{2}\bigg ( 4\nu^2\dfrac{\Vert \nabla\varphi(\mathbf{x},t)\Vert ^2}{\varphi(\mathbf{x},t)^2}\bigg )\\
&\qquad \qquad-\nu\bigg ( -2\nu \dfrac{\Delta \varphi(\mathbf{x},t)}{\varphi(\mathbf{x},t)}+2\nu \dfrac{\Vert \nabla\varphi(x,t)\Vert^2}{\varphi(\mathbf{x},t)^2}\bigg )\\
&=-\frac{2\nu}{\varphi(\mathbf{x},t)} \bigg ( \dpar{\varphi(\mathbf{x},t)}{t}-\nu \Delta \varphi(\mathbf{x},t)\bigg )
\end{split}
\end{equation*}}
so that in the end we see that $\varphi$ needs to satisfy
$$\dpar{\varphi(\mathbf{x},t)}{t}-\nu \Delta \varphi(\mathbf{x},t)=0$$
with $\varphi=1$ on the Dirichlet boundary and $\varphi\geq 0$ on $\Omega$.
Unfortunately, the time-dependent problem
$$\dpar{u}{t}+ \Vert \nabla u\Vert^2-1=\nu \Delta u$$
does not go through as well, but this is not an issue because this is not the problem we want to solve. We want to solve
$$ \Vert \nabla u\Vert^2-1=\nu \Delta u$$ for which we again use the change of variable
$$u(\mathbf{x})=\alpha \log\big ( \varphi(\mathbf{x})\big )$$ with $\alpha$ to be determined.
We get
\begin{equation}\label{calcul}
\begin{split}
\Vert \nabla u\Vert^2-1-\nu \Delta u&=\alpha^2\dfrac{\Vert \nabla\varphi(\mathbf{x})\Vert ^2}{\varphi(\mathbf{x})^2}-1-\nu\bigg ( \alpha \dfrac{\Delta \varphi(\mathbf{x})}{\varphi(\mathbf{x})}-\alpha \dfrac{\Vert \nabla\varphi(\mathbf{x})\Vert^2}{\varphi(\mathbf{x})^2}\bigg )\\
&=\frac{-1}{\varphi(\mathbf{x})}\bigg ( \alpha\nu \Delta \varphi(\mathbf{x})+\varphi(\mathbf{x})\bigg )+\dfrac{\alpha^2+\nu\alpha}{\varphi(\mathbf{x})^2} \Vert \nabla\varphi(\mathbf{x})\Vert ^2
\end{split}
\end{equation}
so we take $\alpha=-\nu$ and we need to solve in $\Omega$
\begin{equation}
\label{eq:3}
\nu^2 \Delta \varphi(\mathbf{x})=\varphi(\mathbf{x})
\end{equation}
with the boundary condition
\begin{equation}
\label{eq:3.2}
\varphi(\mathbf{x})=1, \qquad \mathbf{x}\in \Gamma_D.
\end{equation}
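For illustration only (this check is not part of the original derivation), the computation \eqref{calcul} can be verified symbolically; the following Python/sympy sketch assumes two space dimensions and a smooth positive $\varphi$.
\begin{verbatim}
# Symbolic check that u = -nu*log(phi) turns the steady viscous Eikonal residual
# ||grad u||^2 - 1 - nu*Lap(u) into (nu^2*Lap(phi) - phi)/phi (two space dimensions).
import sympy as sp

x, y, nu = sp.symbols('x y nu', positive=True)
phi = sp.Function('phi', positive=True)(x, y)
u = -nu * sp.log(phi)

grad_u_sq = sp.diff(u, x)**2 + sp.diff(u, y)**2
lap_u = sp.diff(u, x, 2) + sp.diff(u, y, 2)
lap_phi = sp.diff(phi, x, 2) + sp.diff(phi, y, 2)

residual = grad_u_sq - 1 - nu * lap_u       # steady viscous Eikonal residual in u
predicted = (nu**2 * lap_phi - phi) / phi   # what (calcul) gives with alpha = -nu

print(sp.simplify(residual - predicted))    # prints 0
\end{verbatim}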
On $\Gamma_S$, inspired by what is done for the inviscid problem, and using \eqref{calcul}, we see that a condition is
$$\nu^2 \Delta \varphi(\mathbf{x})\leq \varphi(\mathbf{x})$$ on $\Gamma_S$. This looks a bit like an obstacle problem, but it is not exactly the same (because the "obstacle" is at the boundary). In the next section, inspired by what is done for the Eikonal equation, we will propose a discretisation of this kind of condition. \remi{In the numerical section, we will also compare this boundary condition with more natural ones, such as a Neumann condition on the distance function, on $\Gamma_S$.}
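Before describing the finite element discretisation, here is a minimal finite-difference sketch, for illustration only (it is not the discretisation used in this note): on the unit square with $\Gamma_D=\partial\Omega$ and $\Gamma_S=\emptyset$, one solves $\nu^2\Delta\varphi=\varphi$ with $\varphi=1$ on the boundary and recovers the approximate distance as $-\nu\log\varphi$; the grid size and the value of $\nu$ below are arbitrary choices.
\begin{verbatim}
# Five-point finite differences for nu^2*Lap(phi) = phi on the unit square, phi = 1 on
# the boundary (Gamma_D is the whole boundary here); distance recovered as -nu*log(phi).
import numpy as np
import scipy.sparse as sps
import scipy.sparse.linalg as spla

n, nu = 201, 0.02                     # grid points per direction, viscosity
h = 1.0 / (n - 1)
D2 = sps.diags([np.ones(n - 1), -2.0 * np.ones(n), np.ones(n - 1)], [-1, 0, 1]) / h**2
I = sps.identity(n)
A = (nu**2 * (sps.kron(D2, I) + sps.kron(I, D2)) - sps.identity(n * n)).tolil()
b = np.zeros(n * n)

for i in range(n):                    # replace boundary rows by phi = 1
    for j in range(n):
        if i in (0, n - 1) or j in (0, n - 1):
            k = i * n + j
            A.rows[k] = [k]
            A.data[k] = [1.0]
            b[k] = 1.0

phi = spla.spsolve(A.tocsr(), b).reshape(n, n)
dist = -nu * np.log(np.clip(phi, 1e-300, None))
print(dist[n // 2, n // 2])   # about 0.47 here; tends to the exact value 0.5 as nu -> 0
\end{verbatim}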
\section{Numerical discretisation}
\subsection{Formulation}
We consider a triangulation of \remi{the polygonal domain} $\Omega$ that respects $\Gamma_D$ and $\Gamma_S$. It consists of triangles (or tetrahedra) that are generically denoted by $K$. The vertices of the triangulation are denoted by $\mathbf{a}_i$, \remi{$i=1, \ldots , n_s$}. The number of elements is $n_e$. For any vertex $\mathbf{a}_i$, $\mathcal{V}(i)$ is the set of vertices connected to $\mathbf{a}_i$ by an edge of the triangulation. Often, we make the identification between a vertex $\mathbf{a}_i$ and its index $i$. For the sake of simplicity we only consider the two dimensional case; the three dimensional one can be done in a similar way. The approximation space is
$$\remi{V^h=\{\psi\in H^1(\Omega), \forall K, \psi_{|K}\in \P^1(K)\}\cap \{\psi=1 \text{ on }\Gamma_D\}}.$$
The test space is
$$\remi{W^h=\{\psi\in H^1(\Omega), \forall K, \psi_{|K}\in \P^1(K)\}\cap \{\psi=0 \text{ on }\partial \Omega\}}.$$
We write the problem as:
Find $\varphi^h\in V^h$ such that for any $\psi^h\in W^h$,
\begin{subequations}\label{viscous:HJ}
\label{viscous:HJ:num}
\begin{equation}
\label{viscous:HJ:num:1}
\nu^2 \int_\Omega \nabla \varphi^h\cdot \nabla \psi^h \; d\mathbf{x} + \int_\Omega \varphi^h \psi^h \; d\mathbf{x} =0
\end{equation}
coupled to boundary conditions on $\Gamma_S$. If $\Gamma_S=\emptyset$, there is nothing more to do.
In the case when $\Gamma_S\neq\emptyset$, we define the boundary conditions according to what is done for the Eikonal equation, see for example \cite{abgrallHJ}. There, a numerical Hamiltonian $\mathcal{H}$ is defined, whose role is to translate the viscosity inequality
$$\Vert \nabla u\Vert -1-\nu \Delta u\geq 0$$ on $\Gamma_S$ in the limit $\nu\rightarrow 0$. There are several possible versions, but the best (because the gradient of the numerical solution is controlled, see \cite{abgrallHJ}) is to use a Godunov Hamiltonian, which amounts to writing, for any vertex $\mathbf{a}_i\in \Gamma_S$, that
$$\max\limits_{j\in \mathcal{V}(i)}\big ( \frac{u_i-u_j}{\Vert a_ia_j\Vert}-1\big ) =0$$
that is, the discrete distance $u$ satisfies
$$u_i=\min\limits_{j\in \mathcal{V}(i)} \big ( u_j+\Vert a_ia_j\Vert\big ).$$Keeping in mind that the solution of \eqref{viscous:HJ} is related to the solution of \eqref{viscous:HJ2} by
$d=-\nu \log \varphi$, we will consider the following \remi{implementation } for the Soner boundary condition: for $\mathbf{a}_i\in \Gamma_S$,
\begin{equation}
\label{viscous:HJ:num:BC}
\varphi_i=\exp\big (-\frac{v_i}{\nu}\big ), \quad v_i=\max\limits_{j\in \mathcal{V}(i)}\big ( -\nu\log \varphi_j+\Vert a_ia_j\Vert \big ).
\end{equation}
\end{subequations}
\subsection{Numerical procedure}
We use the following notations. A triangle $K$ has 3 vertices, denoted by $a_i$, $a_j$, $a_k$. We assume that the elements are oriented positively. The gradient of the basis function, $\theta_i$ associated to the vertex $a_i$ is
$${\nabla \theta_i}_{|K}=\dfrac{\mathbf{a}_j\mathbf{a}_k^\bot}{2|K|}$$ where, for any vector $\mathbf{x}$, $\mathbf{x}^\bot$ is orthogonal to $\mathbf{x}$ such that the basis $(\mathbf{x}, \mathbf{x}^\bot)$ is direct. As usual, the angle at $\mathbf{a}_i$ in $K$ is denoted by $\alpha_i^K$.
The variational formulation in $\Omega$ leads to
\begin{subequations}
\label{discrete}
\begin{equation}\label{discrete:eq1}
M\varphi+\nu^2 R\varphi=0
\end{equation}
with the boundary condition
\begin{equation}\label{discrete:eq2}
\varphi_i=1, \text{ for any } \mathbf{a}_i\in \Gamma_D
\end{equation}
\end{subequations}
and \eqref{viscous:HJ:num:BC} on $\Gamma_S$ when this set is not empty. Here, $M$ is the mass matrix and $R$ the stiffness matrix,
$$\remi{M_{ij}=\int_\Omega \theta_i\theta_j\;d\mathbf{x}, \qquad R_{ij}=\int_\Omega\nabla\theta_i\cdot \nabla\theta_j\; d\mathbf{x}.}$$
If $\Gamma_S=\emptyset$, this can be solved by an iterative or a direct solver. Here we have chosen the direct solver PastiX \cite{pastix}. If $\Gamma_S\neq \emptyset$ the problem becomes nonlinear. In that case we use an \remi{Uzawa-type} procedure: we construct a sequence of functions by initialising with $\varphi^0=1$ on $\Omega$, and from $\varphi^n$ we construct $\varphi^{n+1}$ as follows:
\begin{enumerate}
\item We compute $\widetilde{\varphi}^{n+1}$ solution of \eqref{discrete:eq1} with the Dirichlet boundary
\begin{equation}
\label{BC:auxiliary}\widetilde{\varphi^{n+1}}=1 \text{ on }\Gamma_D \text{ and } \widetilde{\varphi^{n+1}}=\varphi^n \text{ on } \Gamma_S.
\end{equation}
\item Then we set
\begin{equation}\label{BC:auxiliary2}
\begin{split}
\varphi^{n+1}&=\widetilde{\varphi^{n+1}} \text{ on }\Omega\backslash\Gamma_S\\
\varphi_i^{n+1}&=\exp\big (-\frac{v_i}{\nu}\big ), \quad v_i=\max\limits_{j\in \mathcal{V}(i)}\big ( -\nu\log \widetilde{\varphi^{n+1}}_j+\Vert a_ia_j\Vert \big ) \text{ on }\Gamma_S.
\end{split}
\end{equation}
\end{enumerate}
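To make the two steps concrete, here is a one-dimensional sketch of the procedure, for illustration only (the parameters are arbitrary and this is not the code used for the results below): $\Omega=(0,1)$, $\Gamma_D=\{0\}$, $\Gamma_S=\{1\}$, a three-point finite-difference version of \eqref{discrete:eq1}, and the update \eqref{BC:auxiliary2} at the single Soner node, whose only neighbour is the last interior node.
\begin{verbatim}
import numpy as np

n, nu = 101, 0.02
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
phi = np.ones(n)                                   # phi^0 = 1

def solve_dirichlet(left, right):
    """nu^2 phi'' = phi on (0,1), phi(0)=left, phi(1)=right (3-point finite differences)."""
    A = np.zeros((n, n)); b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = left, right
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = nu**2 / h**2
        A[i, i] = -2.0 * nu**2 / h**2 - 1.0
    return np.linalg.solve(A, b)

for _ in range(300):                               # Uzawa-type loop
    tilde = solve_dirichlet(1.0, phi[-1])          # step 1: auxiliary Dirichlet solve
    v = -nu * np.log(max(tilde[-2], 1e-300)) + h   # step 2: Soner update from the neighbour
    phi = tilde
    phi[-1] = np.exp(-v / nu)

print(-nu * np.log(phi[-1]))   # about 0.98 here; the exact distance at x = 1 is 1
\end{verbatim}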
It is well known that for $\P^1$ approximation on triangular elements,
the contribution of $K$ for $R$ is
$$\frac{1}{2}\begin{pmatrix}
\cot\alpha_j^K+\cot\alpha_k^K & -\cot\alpha_k^K & -\cot\alpha_j^K\\
-\cot\alpha_k^K & \cot\alpha_i^K+\cot\alpha_k^K & -\cot\alpha_i^K\\
-\cot\alpha_j^K & -\cot\alpha_i^K & \cot\alpha_i^K+\cot\alpha_j^K
\end{pmatrix}
$$
Since $\cot\alpha+\cot\beta=\frac{\sin(\alpha+\beta)}{\sin\alpha\sin \beta}$, and since $\alpha_i^K+\alpha_j^K+\alpha_k^K=\pi$, the diagonal terms are positive \footnote{a quicker way to see this is to write the contribution of $K$ to $R_{ii}$ as $\frac{\Vert \mathbf{a}_j-\mathbf{a}_k\Vert^2}{4|K|}$.}. The term $R_{ij}$ for two adjacent points is, since $\mathbf{a}_i$ and $\mathbf{a}_j$ define the common edge between the two triangles $K^+$ and $K^-$,
$$\remi{R_{ij}=\int_{K^+}\langle \nabla\theta_i, \nabla\theta_j\rangle +\int_{K^-}\langle \nabla\theta_i, \nabla\theta_j\rangle=-\frac{1}{2}\big (\cot\alpha_k^{K^+}+\cot\alpha_k^{K^-}\big )}$$
and it is known that $R_{ij}\leq 0$ for $i\neq j$ if and only if
\begin{equation}
\label{positivity}\alpha_k^{K^+}+\alpha_k^{K^-}\leq \pi,
\end{equation}see figure \ref{triangulation}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.4\textwidth]{triangulation.pdf}
\end{center}
\caption{\label{triangulation} Notations.}
\end{figure}
The mass matrix $M$ is a matrix with positive entries, and the contribution $M_K$ of element $K$ to it is
$$|K|\begin{pmatrix} 1/6 & 1/12& 1/12\\ 1/12& 1/6 &1/12\\ 1/12 & 1/12 & 1/6 \end{pmatrix}$$
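For completeness, a short sketch of these two element contributions (for illustration only); the triangle is assumed positively oriented, and the stiffness part reproduces the cotangent formula above.
\begin{verbatim}
import numpy as np

def local_matrices(ai, aj, ak):
    """P1 element matrices on the triangle (ai, aj, ak), assumed positively oriented."""
    area = 0.5 * ((aj[0] - ai[0]) * (ak[1] - ai[1]) - (aj[1] - ai[1]) * (ak[0] - ai[0]))
    edges = [ak - aj, ai - ak, aj - ai]                 # edge opposite each vertex
    grads = np.array([[-e[1], e[0]] for e in edges]) / (2.0 * area)
    R_K = area * grads @ grads.T                        # element stiffness matrix
    M_K = area / 12.0 * (np.ones((3, 3)) + np.eye(3))   # element mass matrix
    return R_K, M_K

R_K, M_K = local_matrices(np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0]))
print(R_K)  # [[1,-0.5,-0.5],[-0.5,0.5,0],[-0.5,0,0.5]]: cotangent formula, angles 90/45/45
print(M_K)  # |K|/6 on the diagonal, |K|/12 off the diagonal, with |K| = 1/2
\end{verbatim}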
If for any vertex, the condition \eqref{positivity} is met, the solution of the auxiliary problem satisfies $$\remi{\min\limits_{\mathbf{x}\in \Gamma_S}\varphi^n(\mathbf{x})\leq \widetilde{\varphi}^{n+1}\leq 1},$$
and from \eqref{BC:auxiliary2}, we see that $$0\leq \min\limits_{\mathbf{x}\in \Gamma_S}\varphi^{n+1}(\mathbf{x})\leq {\varphi}^{n+1}\leq 1.$$
Similarly, we can show that the sequence is monotone nonincreasing, and since $\varphi^n\geq 0$, it is convergent. The monotonicity comes from $\varphi^1\leq 1=\varphi^0$, and then we proceed by induction: if $\varphi^{n}\leq \varphi^{n-1}$, the discrete maximum principle (which holds thanks to the condition \eqref{positivity}) gives $\widetilde{\varphi^{n+1}}\leq \widetilde{\varphi^{n}}$, and then, using \eqref{BC:auxiliary2}, $\varphi^{n+1}\leq \varphi^{n}$ on $\Gamma_S$.
We have thus shown:
\begin{proposition}
If the variational formulation of the Laplace operator satisfies a maximum principle, the sequence $(\varphi^n)_{n\in \mathbb N}$ converges. This is in particular true if the triangulation satisfies the angle condition
\eqref{positivity}.
\end{proposition}
\section{Numerical examples}
All the calculations have been done on an iMac with 3.5GHz Quad-Core Intel Core i7 processors with version 6.0.2 of PastiX \cite{pastix}. To report the performance of the solver, for a mesh with 295\;296 vertices and 587\;520 elements generated by GMSH \cite{gmsh} using the frontal Delaunay option, the symbolic factorisation takes 0.47 s, the evaluation of the non zero entries of the matrices and the \remi{right-hand} side takes 0.11 s, and the solution takes 9.16 s. The average bandwidth of the matrix was 171\;781, and its maximal bandwidth 292\;530. The computations have been done sequentially. We only report calculations with $\Gamma_S\neq \emptyset$ because they are a priori more complicated. \remi{The symbolic factorisation is done once and for all (for a given mesh) if the Uzawa-type method is needed.}
The first test is the evaluation of the distance function on the annulus $\{\mathbf{x}, 1\leq \Vert\mathbf{x}\Vert\leq 2\}$. The viscosity is set to $\nu=0.1$. \remi{The Dirichlet condition is set on the inner circle, and the Soner condition on the outer circle.} The solution is displayed on Figure \ref{annulus}-a, while the error with respect to the true solution is on Figure \ref{annulus}-b.
\begin{figure}[h]
\begin{center}
\subfigure[]{\includegraphics[width=0.45\textwidth]{anneau.png}}
\subfigure[]{\includegraphics[width=0.45\textwidth]{erreur_anneau.png}}
\end{center}
\caption{\label{annulus} Results for the distance in an annulus. On (a) we have the solution, and on (b) the error $\vert -\nu \log \varphi_i-d_i\vert$, where $d_i$ is the exact distance.}
\end{figure}
The second case is that of the distance to a body made of two NACA airfoils. \remi{The Dirichlet condition is set on the airfoils, and the Soner one on the outer boundary.} The results are displayed on figure \ref{Naca}, as well as the mesh. Here again, $\nu=0.1$.
\begin{figure}[h]
\begin{center}
\subfigure[]{\includegraphics[width=0.45\textwidth]{naca_dist.png}}
\subfigure[]{\includegraphics[width=0.45\textwidth]{naca_mesh.png}}
\subfigure[]{\includegraphics[width=0.45\textwidth]{zoom_naca.png}}
\end{center}
\caption{\label{Naca} Results for the two-NACA problem. The mesh and a zoom of the mesh are displayed.}
\end{figure}
\remi{We also have considered a case where $\Gamma_D$ and $\Gamma_S$ are not disjoint. The example under consideration is
\begin{subequations}\label{eq:aslam}
\begin{equation}\label{eq:aslam:1}\Omega=\{\mathbf{x}=(x_1,x_2) \in \mathbb R^2, 1\leq \Vert \mathbf{x}\Vert \leq 2\Remi{,} x_1\geq 0\}.\end{equation}
We take \begin{equation}\label{eq:aslam:2}\Gamma_D=\{(x_1,0), 1\leq x_1\leq 2\}\text{ , and }\Gamma_S=\partial \Omega\backslash \Gamma_D.
\end{equation}
\end{subequations}
We take $\nu=0.01$.
We provide the result on a fine mesh ($151\;713$ points and $301\;568$ elements) on figure \ref{aslam}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.8\textwidth]{Aslam.pdf}
\end{center}
\caption{\label{aslam} Solution for the geometry and the boundary conditions defined by \eqref{eq:aslam}. The values range between $0$ and $3.907$, 30 isolines.}
\end{figure}
In figure \ref{convergenceiterative}, we compare the iterative convergence of the algorithm for several meshes ($2473$, $9657$ and $151\;713$ vertices).
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.8\textwidth]{iterativeconv.png}
\caption{\label{convergenceiterative} Convergence to the steady solution for several meshes.}
\end{center}
\end{figure}
We observe that we need 65 iterations for the coarse mesh, 130 for the medium one and 485 for the fine one. The \Remi{ratios} of mesh points w.r.t. the coarse one are 1:4:61, and for the iterations the ratios are 1:2:7.5, so we see that the cost evolves like $h^{-1}$ (the mesh is very regular). For a classical explicit hyperbolic solver, the cost scales like the number of points, so like $h^{-2}$. In this case, and in others that we have computed (such as the airfoil case), the conclusion is similar, or even better. }
\remi{To end this paragraph, let us comment a bit on the boundary condition of Soner type.
Since we expect the approximation of the distance to have a gradient of norm approximately equal to unity, one may wonder why a Neumann-type boundary condition on the distance, say $\nabla u\cdot {\mathbf {n}}=1$, would not fit. Indeed, this was our starting point for imposing boundary conditions on $\Gamma_S$.
If we do that, we first have, since $u=-\nu \log \varphi$,
$$\dpar{u}{n}=\nabla u\cdot {\mathbf {n}}=-\nu \dfrac{\nabla \varphi \cdot{\mathbf {n}}}{\varphi}.$$
Thus, to set $\dpar{u}{n}=1$, we are led to a Robin-type condition on $\varphi$, namely $-\nu\,\nabla\varphi\cdot\mathbf{n}=\varphi$. We have compared our formulation and this one, on the fine mesh of the previous test case, to the exact solution
$$
u(x,y)=\left \{\begin{array}{ll}
y & \text{ if } 1\leq x\leq 2 \text{ and } (x,y)\in \Omega\\
d_1+\theta & \text{ else,}
\end{array}\right .
$$
where, if $\mathbf{x}=(x,y)$,
$$d_1=\Vert \mathbf{x}-\mathbf{p}\Vert \text{ with }\mathbf{p}=\frac{\mathbf{x}}{\Vert\mathbf{x}\Vert^2}-\sqrt{1-\frac{1}{\Vert \mathbf{x}\Vert^2}} \frac{\mathbf{x}^\bot}{\Vert \mathbf{x}\Vert}$$ and $\theta$ is the arclength on the inner circle between $\mathbf{p}$ and $(1,0)$.
The results are displayed on figure \ref{comparison} with $\nu=0.01$.
\begin{figure}[h]
\begin{center}
\subfigure[]{\includegraphics[width=0.45\textwidth]{Exact_Cole.png}}
\subfigure[]{\includegraphics[width=0.45\textwidth]{Exact_Neuman.png}}
\end{center}
\caption{\label{comparison} (a): isolines for the exact solution (red) and the solution with the Hopf-Cole transform, (b): isolines for the exact solution (red) and the solution with Robin boundary conditions (black). The background colour is that of the exact solution; in both cases we have 30 isolines between $0$ and $u=3.826$ (the maximum value of the distance on this domain). The solution with the Neumann condition reaches values larger than 3.9, which explains why its isolines stop.}
\end{figure}
From the figure it is clear that the solution with the Neumann condition is completely off compared with our method, though ours is not perfect either (because $\nu$ is not small enough).
However, the solution with the Robin condition could be used as an initial guess; this has not been done in our code.}
\section{Conclusion}
This paper is dedicated to Roland Glowinski. He has always been very kind to the author. Roland also worked a lot on Hamilton-Jacobi equations, for example in \cite{zbMATH07161449,zbMATH07074516,zbMATH06993413}. This, plus his handling of \cite{nishi} as editor of JCP, motivated the present paper.
A final remark is that it is certainly possible to establish rigorous error bounds between the distance function and what is computed here, using approximation results for the viscous regularisation of Hamilton-Jacobi equations together with standard $L^\infty$ error estimates for the $\P^1$ approximation of elliptic equations. This \remi{has not been} done here because we were motivated by designing a working algorithm.
This algorithm has its own drawbacks. The first one is that when $\nu$ becomes very small, the problem becomes stiffer and stiffer. When the domain is large, the actual value of the solution of the elliptic problem becomes extremely small. It is interesting to note links with large deviation problems (see the last chapter of \cite{barles} where exactly the same PDE is studied, for completely different reasons).
However, if one comes back to the initial motivation of this work (finding the distance function for turbulence modelling), our experience is that the computation close to the Dirichlet boundary is very reliable, and that applying the Dirichlet condition on all boundaries is enough to get a good approximation.
When the mesh is \remi{too distorted}, so that a discrete maximum principle does not apply, the solution can be slightly above $1$ (so that the 'distance' would be negative): \remi{in that case, }the solution provided by this method can be used as a good initial condition for a 'traditional' Hamilton-Jacobi problem.
\bibliographystyle{plain}
| {
"timestamp": "2022-12-02T02:10:09",
"yymm": "2211",
"arxiv_id": "2211.02319",
"language": "en",
"url": "https://arxiv.org/abs/2211.02319",
"abstract": "Computing the distance function to some surface or line is a problem that occurs very frequently. There are several ways of computing a relevant approximation of this function, using for example technique originating from the approximation of Hamilton Jacobi problems, or the fast sweeping method. Here we make a link with some elliptic problem and propose a very fast way to approximate the distance function.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Evaluating a distance function",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9871787830929848,
"lm_q2_score": 0.7185944046238982,
"lm_q1q2_score": 0.7093811498940478
} |
https://arxiv.org/abs/2008.11150 | Hidden Positivity and a New Approach to Numerical Computation of Hausdorff Dimension: Higher Order Methods | In [14], the authors developed a new approach to the computation of the Hausdorff dimension of the invariant set of an iterated function system or IFS. In this paper, we extend this approach to incorporate high order approximation methods. We again rely on the fact that we can associate to the IFS a parametrized family of positive, linear, Perron-Frobenius operators $L_s$, an idea known in varying degrees of generality for many years. Although $L_s$ is not compact in the setting we consider, it possesses a strictly positive $C^m$ eigenfunction $v_s$ with eigenvalue $R(L_s)$ for arbitrary $m$ and all other points $z$ in the spectrum of $L_s$ satisfy $|z| \le b$ for some constant $b < R(L_s)$. Under appropriate assumptions on the IFS, the Hausdorff dimension of the invariant set of the IFS is the value $s=s_*$ for which $R(L_s) =1$. This eigenvalue problem is then approximated by a collocation method at the extended Chebyshev points of each subinterval using continuous piecewise polynomials of arbitrary degree $r$. Using an extension of the Perron theory of positive matrices to matrices that map a cone $K$ to its interior and explicit a priori bounds on the derivatives of the strictly positive eigenfunction $v_s$, we give rigorous upper and lower bounds for the Hausdorff dimension $s_*$, and these bounds converge rapidly to $s_*$ as the mesh size decreases and/or the polynomial degree increases. | \section{Introduction}
\label{sec:intro}
In this paper, we continue previous work in finding rigorous estimates
for the Hausdorff dimension of invariant sets for iterated function
systems or IFS's. To describe the framework of the problem we are
considering, we let $S \subset \mathbb{R}$ be a nonempty compact set, and for
some positive integer $m$, let $\theta_p: S \to S$ and $g_p : S \to
[0, \infty)$ be $C^m(S)$ functions for $1 \le p \le n < \infty$. If the $\theta_p$
are contraction mappings, it is known that there exists a unique,
compact, nonempty set $C \subset S$ such that $C= \cup_{p=1}^n
\theta_p(C)$. The set $C$ is called the invariant set for the IFS
$\{\theta_p: 1 \le p \le n\}$.
For $s >0$, define a bounded linear map $L_s: C(S) \to C(S)$, (often
called a {\it Perron-Frobenius operator} or {\it linear transfer operator}) by
\begin{equation}
\label{2.1}
(L_s f)(t) = \sum_{p=1}^n [g_p(t)]^s f(\theta_p(t)), \quad t \in S.
\end{equation}
Under additional appropriate hypotheses (stated in the next section),
$L_s$, considered as a map from $C^m(S) \mapsto C^m(S)$, has a
strictly positive eigenfunction $v_s \in C^m(S)$ with algebraically
simple eigenvalue $\lambda_s = \rmf(L_s)$, the spectral radius of
$L_s$. In addition, all other points $z$ in the spectrum of $L_s$
satisfy $|z| \le b$ for some constant $b < \rmf(L_s)$. A more precise
statement of this result, along with other conclusions, is given in
Theorem~\ref{thm:2.4} in the next section. Note that in the $C^m$
setting, $L_s$ is, in general, not compact, has positive essential
spectral radius and cannot be the limit in operator norm of a sequence
of finite dimensional linear operators. These difficulties do not
usually arise if $L_s$ can be studied in a Banach space of complex
analytic functions; and there is an extensive literature concerning
the spectral theory of Perron-Frobenius operators which map a Banach
space of analytic functions into itself. We prefer to work in the
more general $C^m$ setting so as to provide tools which also can be
applied to some non-analytic examples, e.g., as in Section 5 of
\cite{hdcomp1}.
The aim of this paper is to derive an approximation scheme that allows
us to estimate $\rmf(L_s)$ by the spectral radius of an associated
matrix $\bL_s$ which approximates the operator $L_s$ in a weak sense
and then to obtain rigorous bounds on the error $|\rmf(L_s) -
\rmf(\bL_s)|$. We then use this approximation scheme to estimate
$s_*$, the unique number $s \ge 0$ such that $\rmf(L_s) =1$. Under
appropriate assumptions, $s_*$ equals the Hausdorff dimension of the
invariant set associated to the IFS. This observation about Hausdorff
dimension has been made, in varying degrees of generality by many
authors. See, for example, \cite{Bumby1}, \cite{Bumby2}, \cite{Bowen},
\cite{Cusick1}, \cite{Cusick2}, \cite{Falconer}, \cite{Good},
\cite{Hensley1}, \cite{Hensley2}, \cite{Hensley3}, \cite{J},
\cite{Jenkinson}, \cite{Jenkinson-Pollicott}, \cite{MR1902887},
\cite{H}, \cite{Mauldin-Urbanski}, \cite{N-P-L}, \cite{Ruelle},
\cite{Ruelle2}, \cite{Rugh}, \cite{Schief}, and \cite{C-L-U}. There
is also a large literature on the approximation of linear transfer
operators, not necessarily related to the computation of Hausdorff
dimension, and often assuming the maps are analytic. We do not
attempt to survey that literature, other than to cite one recent
paper, \cite{B-S}, which has some connections to our work here, and
contains many references to that literature.
In previous work, \cite{hdcomp1}, the authors presented a new approach
to the problem described in the preceding paragraph. We obtained
rigorous upper and lower bounds for the Hausdorff dimension $s_*$, and
these bounds exhibited second order convergence to $s_*$ as the mesh
size decreases. The approximate matrix was obtained by a collocation
method using continuous piecewise linear functions, motivated by the
fact that if such functions are nonnegative at the mesh points, they
are nonnegative at all points of the interval in which they are
defined. This property leads to nonnegative matrix approximations of
the operator $L_s$. One would like these matrices to mimic the
properties of the continuous operator $L_s$, which means they should
satisfy the conclusions of the Perron theorem for positive matrices
(matrices with strictly positive entries), i.e., they should have an
eigenvalue of multiplicity one equal to the spectral radius of the
matrix with corresponding positive eigenvector and all other
eigenvalues of the matrix should have modulus less than the spectral
radius. This is not true for nonnegative matrices, however, unless
they have an additional property. One such property that would
guarantee this is that the matrix $\bL_s$ be {\it primitive}, i.e.,
there exists a positive integer $p$ such that $\bL_s^p$ is a positive
matrix. Note that if $\bL_s$ is {\it irreducible}, then the first two
properties hold, but there can be other eigenvalues of the same
modulus as the spectral radius. Unfortunately, the approximation
scheme used led to matrices which are neither primitive nor
irreducible. The remedy to obtain the desired properties was to note
that the cone $K$ of nonnegative vectors is not the natural cone in
which such matrices should be studied. Using a more general notion of
positivity of an operator $L$ in which $L$ maps a cone $K$ into
itself, one can still obtain the conclusions of the Perron theorem.
This is important since we use the spectral radius of the approximate
matrix $\bL_s$ to approximate the spectral radius of $L_s$ and the
fact that there is a single dominant eigenvalue enables us to
calculate it efficiently using some variant of the power method.
In this paper, we analyze a similar method obtained by approximation
using higher order piecewise polynomials. As we shall see, the
matrices resulting from the approximation scheme appear to be even
more problematic, since they are not even nonnegative. Despite this
fact, the use of an alternative cone, in place of the standard cone of
nonnegative vectors, allows us to show that the conclusions of the
classical Perron theorem also hold for the matrices of this paper.
There is a substantial abstract theory which has been developed for
finite dimensional linear operators which are positive in the sense
that they map a cone into itself. The survey paper \cite{Tam},
references in \cite{Tam}, and appendices A and B in
\cite{Lemmens-Nussbaum-2013} provide a good starting point. However,
the difficulty lies in finding such a cone that fits the application
under study. We use the term {\it hidden positivity} to call attention
to the fact that we are able to find such a cone for the approximate
operators developed in this paper.
The cone we use is easiest to describe in the case of continuous, piecewise
linear functions, and is defined as follows. On the interval $[0,1]$,
for fixed integer $N$, let $h = 1/N$ and $x_i = i h$, $i = 0,1, \ldots, N$.
The space of continuous, piecewise linear functions is just the finite
dimensional space of continuous functions that restricted to each subinterval
$[x_i, x_{i+1}]$ are linear functions. Since a function $w$ in this space
is completely determined by its values $w_i = w(x_i)$, $i = 0,1, \ldots, N$
(the {\it degrees of freedom} of $w$), we can also view $w$ as the
vector $[w_0, \ldots, w_N]$. For any integer $M >0$, we then define
the cone $K_M$ by
\begin{equation*}
K_M = \{w: w_i \le \exp(M |x_i- x_j|) w_j, \quad i,j =0, 1, \ldots N\}.
\end{equation*}
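As a concrete illustration (not taken from \cite{hdcomp1} or from the code accompanying this paper), membership in $K_M$ amounts in this piecewise linear case to a finite set of pairwise inequalities on the nodal values; a vector of the form $w_i=e^{c x_i}$ belongs to $K_M$ exactly when $|c|\le M$.
\begin{verbatim}
import numpy as np

def in_cone_KM(w, x, M):
    """Check w_i <= exp(M |x_i - x_j|) * w_j for all pairs of nodes."""
    w, x = np.asarray(w, float), np.asarray(x, float)
    gaps = np.abs(x[:, None] - x[None, :])
    return bool(np.all(w[:, None] <= np.exp(M * gaps) * w[None, :]))

x = np.linspace(0.0, 1.0, 11)
print(in_cone_KM(np.exp(2.0 * x), x, M=3.0))   # True : log-Lipschitz constant 2 <= M
print(in_cone_KM(np.exp(5.0 * x), x, M=3.0))   # False: log-Lipschitz constant 5 >  M
\end{verbatim}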
The cone for higher order piecewise polynomials is similar, but its
description is more involved because of the more complicated nature of
the degrees of freedom of such functions. The details are provided in
Section~\ref{sec:approx}.
One technical difference between piecewise linear functions and higher
order piecewise polynomials is that in order to obtain the results
described above, we must consider approximations $\bL_{s, \nu}$ of the matrix
$L_s^{\nu}$, where $\nu$ depends on the degree $r$ of the piecewise
polynomial approximation. As we observe in the next section, the operator
$L_s^{\nu}$ has the same form as $L_s$, i.e.,
\begin{equation*}
(L_s^{\nu} f)(t) = \sum_{\omega \in \Omega_{\nu}} [g_{\omega}(t)]^s f(\theta_{\omega}(t)),
\end{equation*}
where for $\nu \ge 1$,
$\Omega_{\nu} = \{\omega = (p_1, p_2, \ldots, p_{\nu}): 1 \le p_j \le n
\text{ for } 1 \le j \le \nu\}$; and for
$\omega = (p_1, p_2, \ldots, p_{\nu}) \in \Omega_{\nu}$,
\begin{equation*}
\theta_{\omega}(t) = (\theta_{p_1} \circ \theta_{p_2} \circ
\cdots \circ \theta_{p_{\nu}})(t),
\end{equation*}
and $g_{\omega}(t)$ is defined in the next section. We note that
under the weaker assumption that $\theta_{\omega}$ is a contraction
mapping for all $\omega \in \Omega_{\nu}$, there exists a unique
compact set $C$ such that $C= \cup_{\omega \in
\Omega_{\nu}}\theta_{\omega}(C)$ and that necessarily $C =
\cup_{p=1}^n \theta_p(C)$. By using the matrix $\bL_{s, \nu}$, one
reduces the domain of the operator to a finite set of subintervals
whose total length is much less than the original length of the
domain, resulting in many fewer mesh points. The downside, however,
is that this advantage is completely offset by the increase in the
number of terms in the operator $L_s$ each time the
map is iterated, i.e., from $n$ to $n^{\nu}$. We note that since
we have not found any case where the method fails if we do not iterate
the matrix, we conjecture that this extra condition is an artifact of
the method of proof, and not the method itself.
To obtain the conclusions of the Perron theorem, the key
result is to show that for some $0 < M' < M$,
\begin{equation}
\label{keycond-intro}
\bL_{s,\nu}(K_M \setminus \{0\}) \subset K_{M'} \setminus \{0\}.
\end{equation}
This enables us to apply
results from the literature on mappings of a cone to itself to obtain
the desired conclusions. Details of this connection, along with
references to the relevant literature, are described in
Section~\ref{sec:theory-c}.
A main goal of our approach, in addition to proposing a new
approximation scheme, is to provide rigorous upper and lower bounds
for the Hausdorff dimension of the underlying IFS. This will follow
directly if we are able to derive rigorous error bounds for
$|[\rmf(L_s)]^{\nu} - \rmf(\bL_{s,\nu})|$. In the case of piecewise
linear functions, we obtained the bounds by using a simple and
well-known result (c.f. Lemma 2.2 of \cite{hdcomp1}) that if $A$ is an
nonnegative matrix and $w$ a vector with strictly positive components,
then if for all components $k$, (i) if $(A w)_k \ge \lambda w_k$, then
$\rmf(A) \ge \lambda$ and (ii) if $(A w)_k \le \lambda w_k$, then
$\rmf(A) \le \lambda$. Here, we use an analogous result for a matrix
mapping a cone $K$ to itself, in which we replace $\le$ by $\le_K$,
i.e, $u \le_K v$ if and only if $v-u \in K$.
Another key tool for obtaining rigorous upper and lower bounds for the
Hausdorff dimension $s_*$, is to obtain and use explicit a priori
bounds on the quantity $D^q v_s(x)/v_s(x)$ of the strictly positive
eigenfunction $v_s$ of $L_s$, where $D^q v_s$ denotes the $q$-th derivative of
$v_s$. Such estimates are derived in Section~\ref{sec:est-constants}.
In order to improve the efficiency of our computation, we consider in
Section~\ref{sec:est-constants} the possibility of replacing the
original interval $S = [a,b]$ by a smaller interval $S_0 \subset S$
such that $\theta_{\beta}(S_0) \subset S_0$ for $\beta \in \mathcal B$. In
particular, for the maps $\theta_{\beta}(x) =1/(x+ \beta)$, setting
$\gamma= \min \{\beta: \beta \in \mathcal B\}$ and $\Gamma = \max \{\beta: \beta \in
\mathcal B\}$, we can reduce the interval $S$ to $[\amf_{\infty},
\bmf_{\infty}]$, where
\begin{equation*}
\amf_{\infty} = - \frac{\gamma}{2} + \sqrt{(\gamma/2)^2 + (\gamma/\Gamma)}
\quad \text{and} \quad
\bmf_{\infty} = - \frac{\Gamma}{2} + \sqrt{(\Gamma/2)^2 + (\Gamma/\gamma)}
= \frac{\Gamma}{\gamma} \amf_{\infty}.
\end{equation*}
For example, for the set $\{1,2\}$, we reduce the interval $[0,1]$
to $[(\sqrt{3} - 1)/2, \sqrt{3} - 1]$ of length $0.366$, while for
the set $\{10,11\}$, we reduce the interval $[0,1]$
to $[0.0901, 0.0991]$ of length $0.009$.
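These two numerical values are easy to reproduce (the following few lines are only a check, not part of the method):
\begin{verbatim}
from math import sqrt

def reduced_interval(B):
    g, G = min(B), max(B)
    a = -g / 2 + sqrt((g / 2) ** 2 + g / G)
    return a, (G / g) * a

print(reduced_interval({1, 2}))    # about (0.3660, 0.7321), i.e. [(sqrt(3)-1)/2, sqrt(3)-1]
print(reduced_interval({10, 11}))  # about (0.0901, 0.0991), an interval of length about 0.009
\end{verbatim}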
A main result of the paper (Theorem~\ref{thm:8.1}) says that under appropriate
hypotheses, there is a computable constant $H$, such that
\begin{equation*}
[\rmf([1 + H h^r]^{-1} \bL_{s,\nu})]^{1/\nu} \le \lambda_s \le
[\rmf([1 - H h^r]^{-1} \bL_{s,\nu})]^{1/\nu},
\end{equation*}
where $h$ denotes the maximum mesh size and $r$ the degree of the piecewise
polynomial approximation.
Using these inequalities, we can obtain rigorous upper and lower bounds
on the Hausdorff dimension of the invariant set associated with the transfer
operator $L_s$ as follows.
Let $s_l$ and $s_u$ denote values of $s$ satisfying
\begin{equation*}
[1-Hh^r]^{-1} \rmf(\bL_{s_u,\nu}) <1, \qquad
[1+Hh^r]^{-1} \rmf(\bL_{s_l,\nu}) >1.
\end{equation*}
It follows immediately from Theorem~\ref{thm:8.1} that
$\lambda_{s_u}^{\nu} < 1$ and $\lambda_{s_l}^{\nu} >1$. Since the
spectral radius $\lambda_s$ of $L_s$ is a strictly decreasing function
of $s$, there will be a value $s_*$ satisfying $s_l < s_* < s_u$ for
which $\lambda_{s_*}^{\nu} = 1$, or equivalently $\lambda_{s_*} = 1$.
The value $s_*$ is then the Hausdorff dimension of the invariant
set associated with the transfer operator $L_s$. Since $s_u-s_l$ is
of order $h^r$, by choosing $h$ to be sufficiently small and/or $r$ to
be sufficiently large, we obtain a highly accurate estimate for $s_*$.
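The resulting procedure for locating $s_*$ is an ordinary bisection in $s$. In the sketch below (for illustration only) the role of $\rmf(\bL_{s,\nu})$ is played by the exactly known spectral radius $2\cdot(1/3)^s$ of the transfer operator for the middle-thirds Cantor IFS, so that the answer can be checked against $\log 2/\log 3$; in the method of this paper one would instead insert the computed values $\rmf([1\pm Hh^r]^{-1}\bL_{s,\nu})$ to bracket $s_*$.
\begin{verbatim}
from math import log

def rho(s):
    """Stand-in for the spectral radius: for the middle-thirds Cantor IFS it is 2*(1/3)**s."""
    return 2.0 * (1.0 / 3.0) ** s

lo, hi = 0.0, 1.0
for _ in range(60):                       # bisection; rho is strictly decreasing in s
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if rho(mid) > 1.0 else (lo, mid)

print(0.5 * (lo + hi), log(2) / log(3))   # both approximately 0.630929...
\end{verbatim}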
As noted above, for a given $s$, $\rmf([1 \pm H h^r]^{-1}
\bL_{s,\nu})$ is easily computed by variants of the power method for
eigenvalues, since the largest eigenvalue has multiplicity one and is
the only eigenvalue of its modulus. Our theoretical results imply that
$\bL_{s,\nu}$ has an eigenvector $w$ in $K:=K_M$ with eigenvalue
$\rmf(\bL_{s,\nu})$ and that this eigenvector can be computed to high
accuracy. Still, one might be concerned about possible errors in the
computation of $\rmf(\bL_{s,\nu})$ and $w$. However, independently of
how a purported eigenvector $w \in K$ for $\bL_{s,\nu}$ is found, if
$\alpha w \le_K \bL_{s,\nu} w \le_K \beta w$,
Lemma~\ref{lem:cone-compare} in Section~\ref{sec:theory-c} implies
that $\alpha \le \rmf(\bL_{s,\nu}) \le \beta$. This provides a means
of giving rigorous bounds for $\rmf(\bL_{s,\nu})$.
In Section~\ref{sec:num-comp}, we present results of computations of
the Hausdorff dimension $s$ of invariant sets in $[0,1]$ arising from
continued fraction expansions. In this much studied case, one defines
$\theta_p = 1/(x+p)$, for $p$ a positive integer and $x \in [0,1]$;
and for a subset $\mathcal B \subset \mathbb{N}$, one considers the IFS $\{\theta_p: p
\in \mathcal B\}$ and seeks estimates on the Hausdorff dimension of the
invariant set $C =C(\mathcal B)$ for this IFS. This problem has previously
been considered by many authors. See \cite{Bourgain-Kontorovich},
\cite{Bumby1}, \cite{Bumby2}, \cite{Good}, \cite{Hensley1},
\cite{Hensley2}, \cite{Hensley3}, \cite{Jenkinson},
\cite{Jenkinson-Pollicott}, and \cite{Heinemann-Urbanski}. In this
case, \eqref{2.1} becomes
\begin{equation*}
(L_{s} f)(x) = \sum_{p \in \mathcal B} \Big(\frac{1}{x+p}\Big)^{2s}
f\Big(\frac{1}{x+p}\Big), \qquad 0 \le x \le 1,
\end{equation*}
and one seeks a value $s \ge 0$ for which $\lambda_s:= \rmf(L_{s})
=1$. Several of the papers listed above contain a large number
of computations to various degrees of accuracy of the Hausdorff
dimension of the IFS $\{\theta_p: p \in \mathcal B\}$, for various
choices of the set $\mathcal B$. An early paper, \cite{Hensley2}, gives
results for over 30 choices of $\mathcal B$, containing between two and five
terms in the set $\mathcal B$, with results reported to an accuracy between
$10^{-6}$ and $10^{-19}$, depending on the problem studied. A {\it
Mathematica} code implementing the algorithm is also provided. In
\cite{Jenkinson}, computations to four decimal places are given for
over $35$ choices of the set $\mathcal B$, ranging from two terms, to as many
as 34 (this computation is to three decimal places), and also includes
a computation of $E[1,2]$, ($\mathcal B = \{1,2\}$), accurate to 54 decimal
places. In \cite{Jenkinson-Pollicott}, eight examples of $\mathcal B$,
consisting of two terms, are computed with accuracies ranging from
$10^{-13}$ to $10^{-52}$, depending on the choice of $\mathcal B$, although
the authors note that for the sets $[10,11]$, and $[100, 10,000]$,
they were able to compute to accuracies of $10^{-61}$ and $10^{-122}$,
respectively. This depends on the fact that the speed of convergence
of their methods depends on the size of the smallest value of $p \in
\mathcal B$. In \cite{JP100}, the Hausdorff dimension of $E[1,2]$ is
rigorously computed to 100 decimal places, although more digits are
computed. It is less clear how well some of the approximation schemes
employed in these papers work when $|\mathcal B|$ is moderately large or when
different real analytic functions $\hat \theta_j: [0,1] \to [0,1]$ are
used. Here and in \cite{hdcomp1}, in the one dimensional case, we
present an alternative approach with much wider applicability that
only requires the maps in the IFS to be $C^m$, for some finite value
of $m$. As an illustration, we considered in \cite{hdcomp1},
perturbations of the IFS for the middle thirds Cantor set for which
the corresponding contraction maps are $C^3$, but not $C^4$.
The computations in Section~\ref{sec:num-comp} include choices of
various sets of continued fractions, maximum mesh size $h$, piecewise
polynomial degree $r$, and number of iterations $\nu$ (where $\nu=1$
corresponds to the original map), including choices of $\nu$ for which
the hypotheses of our theorems are satisfied, but also computations
which obtain the same results when the mappings are not
iterated. These results support our conjecture that our method also
works in the non-iterated situation. To facilitate computation of
further examples, a {\it Matlab} code is provided in {\tt
https://sites.math.rutgers.edu/\char'176falk/hausdorff/codes.html}.
An outline of the paper is as follows. In the next section, we
introduce further notation and state some preliminary results we will
use in our analysis. Section~\ref{sec:approx} contains a description
of the approximate problem and the cone we use to analyze
it. Section~\ref{sec:theory-c} contains the theoretical results we
will need to show that the matrices arising from the approximation
scheme satisfy the conclusions of the Perron theorem. In
Section~\ref{sec:theory-d}, the main result is to determine conditions
under which the matrix $\bL_{s,\nu}$ satisfies \eqref{keycond-intro}.
These conditions involve a number of constants, which we then estimate
in Section~\ref{sec:estimating-sr}, ultimately deriving bounds for
$\rmf(L_s)$ in terms of $[\rmf(\bL_{s,\nu})]^{1/\nu}$. In
Section~\ref{sec:est-constants}, we consider a method for reducing the
size of the interval $S$ on which the problem is defined, with the aim
of reducing the number of mesh points that will be needed in the
approximation scheme. In so doing, we are also able to improve the
bound on two constants which are used in the error estimate for
$\rmf(L_s)$. Recall that condition \eqref{keycond-intro} requires
determining for each constant $M$, a constant $0 < M' <M$ such that
\eqref{keycond-intro} is satisfied. In Section~\ref{sec:computation},
we provide a procedure for determining this constant. Finally, the
numerical computations described above are given in
Section~\ref{sec:num-comp}.
It would be of considerable interest to extend the methods of this
paper to the two dimensional case, e.g., to the problem of obtaining
rigorous estimates for the Hausdorff dimension of sets of complex
continued fractions. We conjecture that such an extension can be done,
but we leave it as an open problem for possible future work.
\section{Notation and Preliminaries}
\label{sec:notation}
Let $C(S)$ denote the Banach space of continuous functions $f :S \to \mathbb{R}$, where
$S$ is a compact subset of $\mathbb{R}$. Assume
(H0): For $1 \le p \le n < \infty$, $\theta_p:S \to S$ is a
Lipschitz map.
(H1): For $1 \le p \le n < \infty$, $g_p:S \to [0, \infty)$
is a nonnegative continuous function which is not identically zero.
In addition, there exists a constant $M_0 >0$ such that
\begin{equation*}
g_p(t_1) \le g_p(t_2) \exp(M_0|t_1 - t_2|), \quad \forall t_1, t_2 \in S,
\quad 1 \le p \le n.
\end{equation*}
We note that it is easy to show that (H1) is equivalent to assuming
that $g_p(t) >0$ for all $t \in S$, and
\begin{equation*}
|\ln(g_p(t_1)) - \ln(g_p(t_2))| \le M_0 |t_1-t_2|, \qquad \forall t_1,t_2
\in S, \quad 1 \le p \le n.
\end{equation*}
For $s >0$, define a bounded linear map $L_s:C(S) \to C(S)$ (often called a
{\it Perron-Frobenius operator}) by \eqref{2.1}, i.e.
\begin{equation*}
(L_sf)(t) = \sum_{p=1}^n [g_p(t)]^s f(\theta_p(t)), \quad t \in S.
\end{equation*}
We shall need to consider the $\nu$th iterate of $L_s$, $L_s^{\nu}$.
For $\nu \ge 1$,
let $\Omega_{\nu} = \{\omega = (p_1, p_2, \ldots, p_{\nu}): 1 \le p_j \le n
\text{ for } 1 \le j \le \nu\}$; and for
$\omega = (p_1, p_2, \ldots, p_{\nu}) \in \Omega_{\nu}$, define
\begin{equation*}
\theta_{\omega}(t) = (\theta_{p_1} \circ \theta_{p_2} \circ
\cdots \circ \theta_{p_{\nu}})(t)
\end{equation*}
and
\begin{equation*}
g_{\omega}(t) = g_{p_1}(\theta_{p_2} \circ \cdots \circ \theta_{p_{\nu}}(t))
g_{p_2}(\theta_{p_3} \circ \cdots \circ \theta_{p_{\nu}}(t))
\cdots g_{p_{\nu-1}}(\theta_{p_{\nu}}(t)) g_{p_{\nu}}(t).
\end{equation*}
The reader can verify (e.g., see \cite{N-P-L}) that for all $f \in C(S)$,
\begin{equation*}
(L_s^{\nu}f)(t) = \sum_{\omega \in \Omega_{\nu}} [g_{\omega}(t)]^s f(\theta_{\omega}(t)).
\end{equation*}
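For instance, for the continued-fraction maps $\theta_p(x)=1/(x+p)$ with $g_p(x)=1/(x+p)^2$, which are studied in Section~\ref{sec:num-comp}, the iterate can be evaluated directly from this formula by enumerating $\Omega_\nu$; the short sketch below (an illustration, not code from the paper) also checks that applying $L_s$ twice agrees with $L_s^2$.
\begin{verbatim}
from itertools import product

def L_power(f, t, s, B, nu):
    """(L_s^nu f)(t) for theta_p(x) = 1/(x+p), g_p(x) = 1/(x+p)^2, p in B."""
    total = 0.0
    for omega in product(B, repeat=nu):    # omega = (p_1, ..., p_nu)
        x, g = t, 1.0
        for p in reversed(omega):          # theta_{p_nu} is applied first, theta_{p_1} last
            g *= 1.0 / (x + p) ** 2        # builds up g_omega(t)
            x = 1.0 / (x + p)
        total += g ** s * f(x)             # g_omega(t)^s * f(theta_omega(t))
    return total

f = lambda x: 1.0 + x
print(L_power(lambda y: L_power(f, y, 0.5, (1, 2), 1), 0.3, 0.5, (1, 2), 1))
print(L_power(f, 0.3, 0.5, (1, 2), 2))     # the two printed numbers agree
\end{verbatim}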
Note that $L_s^{\nu}$ has the same form as $L_s$, except with index
set $\Omega_{\nu}$. To analyze the operator $L_s^{\nu}$, we shall need
stronger assumptions than (H0). We will thus assume
(H2): (H0) is satisfied and there exist constants $C_0 \ge 1$ and
$\kappa$, $0 < \kappa <1$, such that for all integers $\nu \ge 1$, all $\omega
\in \Omega_{\nu}$, and all $t_1, t_2 \in S$,
\begin{equation*}
|\theta_{\omega}(t_1) - \theta_{\omega}(t_2)| \le C_0 \kappa^{\nu} |t_1 - t_2|.
\end{equation*}
Assuming (H1) and (H2), one can prove that for all $\omega \in \Omega_{\nu}$ and
all $t_1, t_2 \in S$,
\begin{equation*}
g_{\omega}(t_1) \le \exp(M_0'|t_1-t_2|) g_{\omega}(t_2),
\end{equation*}
where $M_0' = M_0 C_0[(1- \kappa^{\nu})/(1- \kappa)]$.
The proof is left to the reader. The reader will notice that the above
framework carries over to the more general case that $S$ is a compact
metric space with metric $\rho$. (H1) and (H2) take the same form except
that $|t_1-t_2|$ is replaced by $\rho(t_1,t_2)$.
The following result provides some theoretical background which will be
essential for our later work concerning the operator $L_s$. This
theorem is a special case of Corollary 6.6 in \cite{Nussbaum-2016}. We
refer to Section 3 of \cite{Nussbaum-1970} for a brief discussion of the
essential spectrum, which is mentioned in Theorem~\ref{thm:2.4} below.
\begin{thm}
\label{thm:2.4}
Assume hypotheses (H1) and (H2) are satisfied, that $S$ is a finite
union of compact intervals, and that $L_s$ is given by \eqref{2.1},
where $s >0$. Assume also that $\theta_i \in C^m(S)$ and $g_i \in
C^m(S)$ for some positive integer $m$. Let $\Lambda_s:Y:=
C^m(S) \to Y$ be the bounded linear operator given by
\eqref{2.1}, but considered as a map from $Y$ to $Y$, so $L_s(f) =
\Lambda_s(f)$ for $f \in Y$. If $\rmf(L_s)$ (respectively,
$\rmf(\Lambda_s)$) denotes the spectral radius of $L_s$ (respectively, of
$\Lambda_s$) and $\rho(\Lambda_s)$ denotes the essential spectral
radius of $\Lambda_s$ and $\kappa$ is as in (H2), then
\begin{equation*}
\rho(\Lambda_s) \le \kappa^m \rmf(\Lambda_s), \qquad
\rmf(\Lambda_s) = \rmf(L_s):= \lambda_s >0.
\end{equation*}
Let $\hat \Lambda_s$ denote the complexification of $\Lambda_s$. If
$\sigma(\hat \Lambda_s)$ denotes the spectrum of $\hat \Lambda_s$, and we
define $\sigma(\Lambda_s) := \sigma(\hat \Lambda_s)$, then if $z \in
\sigma(\Lambda_s)$ and $\rho(\Lambda_s) < |z|$, $z$ is an isolated point
of $\sigma(\Lambda_s)$ and is an eigenvalue of $\Lambda_s$ of finite
algebraic multiplicity. Also, there exists $b_s < \lambda_s$ such that
\begin{equation*}
\sigma(\Lambda_s) \setminus \{\lambda_s\} \subset \{z \in \mathbb{C} : |z| \le |b_s|\}.
\end{equation*}
There exists a strictly positive eigenfunction $v_s \in C^m(S)$ with
eigenvalue $\lambda_s >0$, and $\lambda_s$ is an algebraically simple eigenvalue
of $\Lambda_s$. If $u \in Y$ and $u(t) >0$ for all $t \in S$,
there exists a positive real number $\alpha$ (dependent on $u$) such that
\begin{equation}
\label{2.14}
\lim_{k \rightarrow \infty} \Big(\frac{1}{\lambda_s}\Big)^k \Lambda_s^k (u) = \alpha
v_s,
\end{equation}
where the convergence is in the norm topology on $Y$.
\end{thm}
\begin{remark}
\label{rem:2.5}
In our work here, it will be important to have estimates on
\begin{equation*}
\sup \Big\{\frac{|(d^j v_s/dt^j)(t)|}{v_s(t)}: t \in S\Big\},
\end{equation*}
where $1 \le j \le m$. Note that if we take $u:=1$ in
\eqref{2.14}, we find that for $t \in S$ and $1 \le j \le m$,
\begin{equation}
\label{2.15}
\frac{|(d^jv_s/dt^j)(t)|}{v_s(t)} = \lim_{k \rightarrow \infty}
\frac{|\sum_{\omega \in \Omega_k} (d^j g_{\omega}^s/dt^j)(t)|}
{\sum_{\omega \in \Omega_k} g_{\omega}(t)^s},
\end{equation}
and the convergence in \eqref{2.15} is uniform in $t \in S$.
If $u$ is as in \eqref{2.14}, we also obtain from \eqref{2.14} that
\begin{equation*}
\lim_{k \rightarrow \infty} \frac{\Lambda_s^{k+1}u}{\Lambda_s^k u} = \lambda_s,
\end{equation*}
where the convergence to the constant function $\lambda_s$ is in the norm
topology on $Y$.
\end{remark}
\section{Approximation of the spectral radius of $L_s$}
\label{sec:approx}
Returning to the notation of \eqref{2.1}, we want to approximate $\rmf(L_s)$ by
the spectral radius of an appropriate finite dimensional linear map
$\bL_s$. To do so, we assume that $S = [a,b]$ in (H1) and (H2), with $a <b $
and let $\hat S$ denote a union of
disjoint subintervals $[a_i, b_i] \subset [a,b]$, $i=1, \ldots, I$.
We also assume throughout this section that $\theta_p(\hat S) \subset \hat S$
for $1 \le p \le n$.
Further subdivide each interval $[a_i, b_i]$ into $N_i$ equally spaced
subintervals $[t_{j-1}^i,t_j^i]$, $j =1, \ldots, N_i$ of width
\begin{equation}
\label{2.30}
h_i = (b_i-a_i)/N_i, \qquad 1 \le i \le I.
\end{equation}
Set $h = \max_{1 \le i \le I} h_i$.
Next let $\{c_{j,k}^i\}_{k=0}^r \subset [t_{j-1}^i, t_j^i]$, with $c_{j,0}^i =
t_{j-1}^i$, $c_{j,r}^i = t_j^i$, and $c_{j,k}^i < c_{j,k+1}^i$ for $0 \le k <
r$. Given values $F_{j,k}^i = F(c_{j,k}^i)$, we then define on
$\hat S$, a piecewise polynomial $\mathcal F$ as follows: For $t_{j-1}^i \le x \le
t_{j}^i$, $1 \le j \le N_i$, and $1 \le i \le I$,
\begin{equation}
\label{2.31}
\mathcal F|_{[t_{j-1}^i, t_j^i]}(x) = \mathcal F_j^i(x) = \sum_{k=0}^r l_{j,k}^i(x) F_{j,k}^i,
\end{equation}
where
\begin{equation*}
l_{j,k}^i(x) = \frac{\prod_{\substack{l=0 \\ l \neq k}}^r (x-c_{j,l}^i)}
{\prod_{\substack{l=0 \\ l \neq k}}^r(c_{j,k}^i -c_{j,l}^i)}.
\end{equation*}
Since $c_{j,r}^i = c_{j+1,0}^i$, $\mathcal F \in \bV_h^r$, the space of
continuous piecewise polynomials of degree $\le r$,
whose degrees of freedom are the $Nr+I:=Q$ values $F_{j,k}^i$,
where $N = \sum_{i=1}^I N_i$.
We note that we can simplify our expressions by choosing points
$\{\hat c_k\}_{k=0}^r \in [-1,1]$, with $\hat c_k < \hat c_{k+1}$ for
$0 \le k < r$, $\hat c_0 = -1$ and $\hat c_r =1$. If we then define
\begin{equation*}
c_{j,k}^i = t_{j-1}^i + h_i(1 + \hat c_k)/2,
\end{equation*}
and write $x \in [t_{j-1}^i,t_j^i] \subset [a_i,b_i]$ in the form $x = t_{j-1}^i +
h_i(1 + \hat x)/2$, where $\hat x \in [-1,1]$, we obtain
\begin{equation}
\label {2.32}
l_{j,k}^i(x) = \hat l_{k}(\hat x)
= \frac{\prod_{\substack{l=0 \\ l \neq k}}^r (\hat x- \hat c_l)}
{\prod_{\substack{l=0 \\ l \neq k}}^r(\hat c_{k} - \hat c_{l})}.
\end{equation}
Because we seek to make use of high order piecewise polynomials, it is
important to choose the points $\hat c_k$ to avoid the large errors
that can occur in polynomial interpolation due to Runge's phenomenon
(e.g., when equally spaced interpolation points are used). Since, for
our analysis, we shall need the function $\mathcal F(x)$ in \eqref{2.31} to be
continuous, we choose the points $\hat c_k$ to be the extended
Chebyshev points in $[-1,1]$ given by
\begin{equation}
\label{hatck}
\hat c_k = - \cos \Big(\frac{2k+1}{2r+2} \pi\Big)\Big/
\cos\Big(\frac{\pi}{2r+2}\Big), \quad k=0, \ldots, r,
\end{equation}
obtained by rescaling the usual Chebyshev nodes. Then
\begin{equation}
\label{2.33}
c_{j,k}^i = t_{j-1}^i + \frac{h_i}{2}
\Big(1 - \Big[\cos \Big(\frac{2k+1}{2r+2}\pi\Big)\Big/
\cos \Big(\frac{\pi}{2r+2}\Big)\Big]\Big), \quad k=0, \ldots, r.
\end{equation}
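A short numerical sketch (for illustration only) of the nodes \eqref{hatck} and of the rescaled Lagrange basis \eqref{2.32}:
\begin{verbatim}
import numpy as np

def extended_chebyshev(r):
    k = np.arange(r + 1)
    return -np.cos((2 * k + 1) * np.pi / (2 * r + 2)) / np.cos(np.pi / (2 * r + 2))

def lagrange_basis(c, xhat):
    """Values of hat l_k(xhat), k = 0, ..., r, for the nodes c."""
    vals = np.ones(len(c))
    for k in range(len(c)):
        for l in range(len(c)):
            if l != k:
                vals[k] *= (xhat - c[l]) / (c[k] - c[l])
    return vals

c = extended_chebyshev(4)
print(c[0], c[-1])              # -1 and 1: the subinterval endpoints are interpolation nodes
print(lagrange_basis(c, c[2]))  # (0, 0, 1, 0, 0): the basis is nodal
\end{verbatim}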
We note that another possible choice is to use the augmented Chebyshev points,
consisting of the roots of the Chebyshev polynomial of degree $r-1$ shifted
to the interval $[t_{j-1}^i, t_j^i]$ plus the endpoints $t_{j-1}^i$ and $t_j^i$.
With this notation, we can now define, for $s >0$, the linear map
$\bL_s: \mathbb{R}^Q \to \mathbb{R}^Q$. If $\bg = \{F_{j,k}^i\} \in \mathbb{R}^Q$, we define
\begin{equation*}
\bL_s(\bg)(c_{j,k}^i) = \sum_{p=1}^n g_p(c_{j,k}^i)^s \mathcal F(\theta_p(c_{j,k}^i)),
\end{equation*}
where $\mathcal F$ is defined above.
Equivalently, we can also think of the operator $\bL_s$ as a map from the space
$\bV_h^r \to \bV_h^r$, if we replace $\bg$ by $\mathcal F$, and given the values
$\{G_{j,k}^i\} = \sum_{p=1}^n g_p(c_{j,k}^i)^s \mathcal F(\theta_p(c_{j,k}^i))$,
we define $\mathcal G(x)$ as follows: For $t_{j-1}^i \le x \le t_j^i$, $1 \le j \le
N_i$, and $1 \le i \le I$,
\begin{equation*}
\mathcal G|_{[t_{j-1}^i,t_j^i]}(x) = \mathcal G_j^i(x) = \sum_{k=0}^r l_{j,k}^i(x) G_{j,k}^i.
\end{equation*}
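To illustrate the whole construction, here is a sketch with simplified choices (one interval $I=1$, $\hat S = S = [0,1]$, $\nu=1$, fixed $N$ and $r$, and no attempt at the rigorous bounds of Theorem~\ref{thm:8.1}; it is not the {\it Matlab} code mentioned in the introduction): for the continued-fraction IFS $\theta_p(x)=1/(x+p)$, $p\in\mathcal B$, the matrix $\bL_s$ is assembled directly from the definition above, and bisecting on $s$ for $\rmf(\bL_s)=1$ already reproduces the known value $\dim_H E[1,2]\approx 0.5313$.
\begin{verbatim}
import numpy as np

def extended_chebyshev(r):
    k = np.arange(r + 1)
    return -np.cos((2 * k + 1) * np.pi / (2 * r + 2)) / np.cos(np.pi / (2 * r + 2))

def build_Ls(s, B, N=32, r=4):
    """Collocation matrix bL_s on [0,1]: one interval, N equal subintervals, degree r, nu = 1."""
    chat = extended_chebyshev(r)
    h = 1.0 / N
    t = np.linspace(0.0, 1.0, N + 1)
    c = t[:-1, None] + h * (1.0 + chat[None, :]) / 2.0   # collocation points c[j, k]
    Q = N * r + 1                                        # shared endpoints: c[j, r] = c[j+1, 0]
    nodes = np.append(c[:, :-1].ravel(), 1.0)            # degree of freedom m = j*r + k
    L = np.zeros((Q, Q))
    for q, xq in enumerate(nodes):
        for p in B:
            w = (1.0 / (xq + p)) ** (2.0 * s)            # g_p(xq)^s
            y = 1.0 / (xq + p)                           # theta_p(xq), lies in (0, 1]
            j = min(int(y / h), N - 1)                   # subinterval containing y
            for k in range(r + 1):                       # Lagrange weights l_{j,k}(y)
                lk = 1.0
                for l in range(r + 1):
                    if l != k:
                        lk *= (y - c[j, l]) / (c[j, k] - c[j, l])
                L[q, j * r + k] += w * lk
    return L

def srad(A):
    # the dominant eigenvalue is simple, so a power-method variant would also work
    return max(abs(np.linalg.eigvals(A)))

lo, hi = 0.4, 0.7                                        # bisect on s for srad(bL_s) = 1
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if srad(build_Ls(mid, (1, 2))) > 1.0 else (lo, mid)
print(0.5 * (lo + hi))                                   # about 0.53128
\end{verbatim}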
Given a positive real number $M$, we next define $K_M \subset \mathbb{R}^{Q}$
as the set of $\bg = \{F_{j,k}^i\} \in \mathbb{R}^{Q}$ such that for all $\xi = c_{j,k}^i$ and
$\eta = c_{j',k'}^{i'}$, with $1 \le i,i' \le I$, $1 \le j \le N_i$,
$1 \le j' \le N_{i'}$, and $0 \le k,k' \le r$,
\begin{equation}
\label{2.35}
F(\xi) \le \exp(M|\xi- \eta|) F(\eta).
\end{equation}
Note that to verify \eqref{2.35}, it suffices to verify it whenever
$\xi$ and $\eta$ are two consecutive points in the linear ordering
inherited from $\mathbb{R}$ of the points $\{ c_{j,k}^i\}$.
One can easily verify that if $\bg \in K_M$, then either (a) $F_{j,k}^i =0$
for all $1 \le i \le I$, $1 \le j \le N_i$, and $0 \le k \le r$, or (b)
$F_{j,k}^i >0$ for all $i,j,k$ in this range. In case (b), one has for
all $1 \le i,i' \le I, \, 1 \le j \le N_i, \, 1 \le j' \le N_{i'},
\, 0 \le k,k' \le r$,
\begin{equation}
\label{2.36}
|\ln(F_{j,k}^i) - \ln(F_{j',k'}^{i'})| \le M|c_{j,k}^i- c_{j',k'}^{i'}|.
\end{equation}
Conversely, \eqref{2.36} implies that \eqref{2.35} holds.
One might hope to prove that the spectral radius $\rmf(\bL_s)$ of
$\bL_s$ closely approximates the spectral radius $\rmf(L_s)$, and we
shall see that this is true if the Lipschitz constant $C_0 \kappa$ in
(H2) (corresponding to the case $\nu=1$ and the operator $\rmf(L_s)$)
and the constant $h$ in \eqref{2.30} are sufficiently small. However,
if $C_0 \kappa$ is not sufficiently small, we can instead work with
the operator $L_s^{\nu}$, where $\nu$ is a positive integer, and the
corresponding Lipschitz constant in (H2) is then $C_0 \kappa^{\nu}$. This
in turn means that we will have to replace $\bL_s$ by $\bL_{s,\nu}:
\mathbb{R}^{Q} \to \mathbb{R}^{Q}$, where $\nu$ is a positive integer and, in our
earlier notation,
\begin{equation}
\label{3.8}
(\bL_{s,\nu}(\bg))(c_{j,k}^i) = \sum_{\omega \in \Omega_{\nu}}
g_{\omega}(c_{j,k}^i)^s \mathcal F(\theta_{\omega}(c_{j,k}^i)).
\end{equation}
\section{Cones, Positive eigenvectors, and Birkhoff's
contraction constant}
\label{sec:theory-c}
As noted in Section~\ref{sec:intro}, we would like to have the
approximating matrices defined in the previous section mimic the
properties of the infinite dimensional, bounded linear operator $L_s$,
which means they should satisfy the conclusions of the Perron theorem
for positive matrices, i.e., they should have an eigenvalue of
multiplicity one equal to the spectral radius of the matrix with
corresponding positive eigenvector and that all other eigenvalues of
the matrix should have modulus less than the spectral radius.
However, the matrix $\bL_s$ defined in the previous section is not
even a nonnegative matrix once the degree $r$ of the piecewise
polynomial exceeds $1$. The reason, which can be seen by writing out the
entries of the matrix, is that the Lagrange basis functions for polynomials
of degree $>1$ are not everywhere positive.
The remedy, which was also needed in the case $r=1$ (where the resulting
matrix was nonnegative, but not {\it primitive} or {\it irreducible}), is to
base the analysis on a cone different from the usual cone of nonnegative
functions. More precisely, by using the cone $K_M$ defined in the
previous section, we shall show that the conclusions of the classical
Perron theorem also hold for the matrices of this paper.
To outline our method of proof, it is convenient to describe, at least
in the finite dimensional case, some basic definitions and classical
theorems concerning linear maps $L: \mathbb{R}^N \to \mathbb{R}^N$ which leave a cone
$K \subset \mathbb{R}^N$ invariant. In doing so, we shall closely follow the
analogous description in \cite{hdcomp1}. Recall that a closed subset $K$ of
$\mathbb{R}^N$ is called a closed cone if (i) $ax + by \in K$ whenever $a
\ge 0$, $b \ge 0$, $x \in K$ and $y \in K$ and (ii) if $x \in
K \setminus \{0\}$, then $-x \notin K$. If $K$ is a
closed cone, $K$ induces a partial ordering on $\mathbb{R}^N$ denoted by
$\le_{K}$ (or simply $\le$, if $K$ is obvious) by $u
\le_{K} v$ if and only if $v-u \in K$. If $u,v \in K$, we
shall say that $u$ and $v$ are {\it comparable} (with respect to
$K$) and we shall write $u \sim_{K} v$ if there exist positive
scalars $a$ and $b$ such that $v \le_{K} au$ and $u \le_{K} b
v$. {\it Comparable with respect to} $K$ partitions $K$ into
equivalence classes of comparable elements. We shall henceforth
assume that $\interior(K)$, the interior of $K$, is nonempty.
Then an easy argument shows that all elements of $\interior(K)$
are comparable. Generally, if $x_0 \in K$ and $K_{x_0}: = \{x
\in K : x \sim_{K} x_0\}$, all elements of $K_{x_0}$ are
comparable.
Following standard notation, if $u,v \in K$ are comparable elements, we
define \begin{align*}
M(u/v;K) &= \inf\{\beta >0 : u \le \beta v\},
\\
m(u/v;K) &= M(v/u;K)^{-1} = \sup\{\alpha >0 : \alpha v \le u\}.
\end{align*}
If $u$ and $v$ are comparable elements of $K \setminus \{0\}$, we define
Hilbert's projective metric $d(u,v;K)$ by
\begin{equation*}
d(u,v;K) = \ln(M(u/v;K)) + \ln(M(v/u;K)).
\end{equation*}
We make the convention that $d(0,0;K) =0$. If $x_0 \in K \setminus
\{0\}$, then for all $u,v,w \in K_{x_0}$, one can prove that (i)
$d(u,v;K) \ge 0$, (ii) $d(u,v;K) = d(v,u;K)$, and (iii)
$d(u,v;K) + d(v,w;K) \ge d(u,w;K)$. Thus $d$ restricted to
$K_{x_0}$ is almost a metric, but $d(u,v;K) =0$ if and only if $v =
tu$ for some $t >0$ and generally, $d(su,tv;K) = d(u,v;K)$ for all
$u,v \in K_{x_0}$ and all $s >0$ and $t >0$. If $\|\cdot\|$ is any norm
on $\mathbb{R}^N$ and $S:= \{ u \in \interior(K): \|u\|=1\}$ (or, more generally,
if $x_0 \in K \setminus \{0\}$ and $S = \{x \in K_{x_0} : \|x\| =1\}$),
then $d(\cdot, \cdot; K)$, restricted to $S \times S$, gives a metric on
$S$; and it is known that $S$ is a complete metric space with this metric.
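As a concrete illustration, for the standard cone $K = \{x \in \mathbb{R}^N : x_i \ge
0 \text{ for all } i\}$ and strictly positive (hence comparable) vectors $u$
and $v$, one has $M(u/v;K) = \max_i u_i/v_i$, so that $d(u,v;K) =
\ln(\max_i u_i/v_i) + \ln(\max_i v_i/u_i)$. The following short Python sketch
(with hypothetical test vectors) evaluates $d$ and checks its invariance under
positive scaling.
\begin{verbatim}
import numpy as np

def hilbert_d(u, v):
    # Hilbert's projective metric on the standard cone, for u, v > 0
    return np.log((u/v).max()) + np.log((v/u).max())

u = np.array([1.0, 2.0, 3.0])
v = np.array([2.0, 2.0, 1.0])
print(hilbert_d(u, v))          # finite and symmetric in u, v
print(hilbert_d(3*u, 7*v))      # scale invariance: same value as above
\end{verbatim}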
With these preliminaries, we can describe a special case of the
Birkhoff-Hopf theorem. We refer to \cite{Birkhoff}, \cite{V}, and
\cite{Samelson} for the original papers and to \cite{E-N-2} and
\cite{E-N-1} for an exposition of a general version of this theorem
and further references to the literature. We remark that
P.~P.~Zabreiko, M.~A.~Krasnosel$'$skij, Y.~V.~Pokornyi, and
A.~V.~Sobolev independently obtained closely related theorems; and we
refer to \cite{Y} for details. If $K$ is a closed cone as above, $S =
\{x \in \interior(K): \|x\|=1 \}$, and $L: \mathbb{R}^N \to \mathbb{R}^N$ is a linear
map such that $L(\interior(K)) \subset \interior(K)$, we define
$\Delta(L;K)$, {\it the projective diameter} of $L$ by
\begin{multline*}
\Delta(L;K) = \sup\{d(Lx,Ly;K): x,y \in K \text{ and }
Lx \sim_{K} Ly\}
\\
= \sup\{d(Lx,Ly;K): x,y \in \interior(K)\}.
\end{multline*}
The Birkhoff-Hopf theorem implies that if $\Delta:= \Delta(L;K) < \infty$,
then $L$ is a contraction mapping with respect to Hilbert's projective metric.
More precisely, if we define $\lambda = \tanh(\tfrac{1}{4} \Delta) <1$, then
for all $x,y \in K \setminus \{0\}$ such that $x \sim_{K} y$, we have
\begin{equation*}
d(Lx,Ly;K) \le \lambda d(x,y;K),
\end{equation*}
and the constant $\lambda$ is optimal.
If we define $\Phi:S \to S$ by $\Phi(x) = L(x)/\|L(x)\|$, it follows that
$\Phi$ is a contraction mapping with a unique fixed point $v \in S$, and
$v$ is necessarily an eigenvector of $L$ with eigenvalue $r(L) := r=$
the spectral radius of $L$. Furthermore, given any $x \in \interior(K)$,
there are explicitly computable constants $M$ and $c <1$ (see Theorem 2.1
in \cite{E-N-2}) such that for all $k \ge 1$,
\begin{equation*}
\|L^k(x)/\|L^k(x)\| -v\| \le Mc^k;
\end{equation*}
and the latter inequality is exactly the sort of result we need. Furthermore,
it is proved in Theorem 2.3 of \cite{E-N-2} that $r=r(L)$ is an
algebraically simple eigenvalue of $L$ and that if $\sigma(L)$ denotes
the spectrum of $L$ and $q(L)$ denotes the {\it spectral clearance of } $L$,
\begin{equation*}
q(L):= \sup\{|z|/r(L): z \in \sigma(L), z \neq r(L)\},
\end{equation*}
then $q(L) <1$ and $q(L)$ can be explicitly estimated.
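For a strictly positive matrix acting on the standard cone, the projective
diameter reduces, by a classical observation, to the maximal Hilbert distance
between the columns of the matrix, so both $\Delta(L;K)$ and the contraction
factor $\tanh(\tfrac{1}{4}\Delta)$ are easy to evaluate numerically. The
following sketch (with hypothetical random data) checks the contraction
property on a few test vectors.
\begin{verbatim}
import numpy as np

def d(u, v):                       # Hilbert metric, standard cone, u, v > 0
    return np.log((u/v).max()) + np.log((v/u).max())

rng = np.random.default_rng(0)
L = rng.random((5, 5)) + 0.05      # entrywise positive matrix
# projective diameter: maximal distance between columns of L
Delta = max(d(L[:, j], L[:, k]) for j in range(5) for k in range(5))
lam = np.tanh(Delta/4)
for _ in range(3):
    x, y = rng.random(5) + 0.01, rng.random(5) + 0.01
    print(d(L @ x, L @ y) <= lam*d(x, y) + 1e-12)   # contraction holds
\end{verbatim}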
The main issue, then, is to find a suitable cone satisfying the hypotheses
outlined above. We shall show in the sections that follow that the
cone $K_M$, defined in the previous section, is such a cone. To do
so, we shall show that there exists $M^{\prime}$, $0 < M^{\prime} <
M$, such that $L(K_M\setminus\{0\}) \subset
K_{M^{\prime}}\setminus\{0\}$. After correcting the typo in the
formula for $d_2(f,g)$ on page 286 of \cite{F}, it follows from Lemma
2.12 on page 284 of \cite{F} that
\begin{equation*}
\sup\{d(f,g;K_M) : f,g \in K_{M^{\prime}}\setminus\{0\}\}
\le 2 \ln\Big(\frac{M + M^{\prime}}{M- M^{\prime}}\Big)
+ 2 \exp(M^{\prime}(b-a)) < \infty,
\end{equation*}
where now $S:=[a, b]$ in (H1) and (H2) (c.f. Section~\ref{sec:approx}).
This implies that $\Delta(L;K_M) < \infty$, which in turn implies that
$L$ has a normalized eigenvector $v \in K_{M^{\prime}}$ with positive
eigenvalue $r = r(L) =$ the spectral radius of $L$.
Furthermore, $r$ has algebraic multiplicity 1, $q(L) <1$, and
$\underset{k \to \infty}{\lim} \|L^k(x)/\|L^k(x)\| -v\| =0$ for all $x \in K_M
\setminus\{0\}$. Thus it suffices to prove, for an appropriate map $L$,
that for some $M^{\prime} < M$,
\begin{equation}
\label{key-map-prop}
L(K_M\setminus\{0\}) \subset K_{M^{\prime}}\setminus\{0\}.
\end{equation}
We note that if the map $L$ satisfies \eqref{key-map-prop}, then it is
not difficult to show that $L(K_M\setminus\{0\})\subset \interior(K_M)$.
An alternative approach is then to apply Theorem 4.4 of
\cite{Vandergraft}, which concludes that $\rmf(L)$ is a simple
eigenvalue, greater than the magnitude of any other eigenvalue, and
that an eigenvector corresponding to $\rmf(L)$ lies in $\interior(K)$. In any
case, the key step is to show for an appropriate matrix $L$ and cone
$K_M$, that \eqref{key-map-prop} is satisfied.
A key part of the paper is to obtain upper and lower bounds on
$\rmf(L_s)$ using the approximations developed in this paper. To do
so, we will use an extension to cones of a well-known result for positive
matrices.
\begin{lem}
\label{lem:cone-compare}
Suppose that $L(K_M\setminus\{0\}) \subset K_{M^{\prime}}$ and
$\mathcal V_s \in K_M\setminus\{0\}$. If there exist positive constants
$\alpha$ and $\beta$ such that
\begin{equation*}
\alpha \mathcal V_s \le_{K_M} L \mathcal V_s \le_{K_M} \beta \mathcal V_s,
\end{equation*}
then $\alpha \le \rmf(L) \le \beta$.
\end{lem}
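In the special case of the standard cone and an entrywise positive matrix,
Lemma~\ref{lem:cone-compare} reduces to the familiar Collatz--Wielandt bounds,
which the following sketch (with hypothetical random data) checks numerically.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
L = rng.random((6, 6)) + 0.1      # entrywise positive matrix
v = rng.random(6) + 0.1           # entrywise positive test vector

ratios = (L @ v)/v                # alpha*v <= L v <= beta*v entrywise
alpha, beta = ratios.min(), ratios.max()
rho = max(abs(np.linalg.eigvals(L)))
print(alpha <= rho <= beta)       # True: alpha <= r(L) <= beta
\end{verbatim}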
\section{Theory for the discrete approximation}
\label{sec:theory-d}
The main result of this section, Theorem~\ref{thm:5.6n}, shows that,
under appropriate hypotheses, the matrix operator $\bL_{s,\nu}$ defined
in Section~\ref{sec:approx} satisfies
\begin{equation*}
\bL_{s,\nu}(K_M(T) \setminus \{0\}) \subset K_{M'}(T)\setminus \{0\},
\end{equation*}
where $T$ will be as in \eqref{T-def} below.
Throughout this section, we shall assume that (H1) and (H2) are satisfied
and that $S = [a,b]$ with $a <b$, where $S$ is as in (H1) and (H2). We shall
also assume that (H3), given below, is satisfied, and we shall use the
notation of (H1), (H2), and (H3).
(H3): For a given positive integer $\nu$, there exist pairwise disjoint,
nonempty compact intervals $[a_i,b_i] \subset S$, $1 \le i \le I$,
(where $S:= [a,b]$ is as in (H0) - (H2)) with the following property:
For every $\omega \in \Omega_{\nu}$, there exists $i =i(\omega)$, $1 \le i \le
I$, such that $\theta_{\omega}(S) \subset [a_{i},b_{i}]$.
\begin{remark}
\label{rem:disjoint}
Assume that (H0)-(H2) are satisfied and that for some positive integer $\nu'$,
$\theta_{\omega_1}(S)$ and $\theta_{\omega_2}(S)$ are disjoint whenever
$\omega_1$ and $\omega_2$ are unequal elements of $\Omega_{\nu'}$. Label the
elements of $\Omega_{\nu'}$ as $\omega_i$, $1 \le i \le I$, and define
$[a_i,b_i] = \theta_{\omega_i}(S)$. Then for all positive integers $\nu \ge
\nu'$, (H3) is satisfied. More generally, one could, for $1 \le i \le I$,
take $[a_i,b_i]$ to be any interval contained in $[a,b]$ such that
$\theta_{\omega_i}(S) \subset [a_i,b_i]$ as long as $[a_i,b_i] \cap [a_j,b_j]
= \emptyset$ whenever $1 \le i,j \le I$ and $i \neq j$. Note that (H3) is also trivially
satisfied if we take $I =1$ and $[a_1,b_1] = [a,b]$.
\end{remark}
\begin{remark}
\label{rem:ordered}
In the context of (H3), it is possible by relabeling to assume that
$b_i < a_{i+1}$ for $1 \le i < I$, so the intervals are linearly ordered
as subsets of $\mathbb{R}$. Thus we shall make this assumption if convenient.
\end{remark}
We now follow the notation of Section~\ref{sec:approx}.
If we define
\begin{equation}
\label{T-def}
T:=\{c_{j,k}^i : 1 \le i \le I, 1 \le j \le N_i, 0 \le k \le r\},
\end{equation}
then $T$ is a finite subset of $\mathbb{R}$ and a compact metric space. Recall that
we consider the finite dimensional vector space $V= V(T)$ of dimension $Q = Nr
+I$ of all maps $F:T \to \mathbb{R}$ and $K_M(T)$ is then defined as in
Section~\ref{sec:theory-c} or \eqref{2.35}, i.e.,
$F \in K_M(T) \setminus \{0\}$ if and only if
\begin{equation*}
|\ln(F(\xi)) - \ln(F(\eta))| \le M|\xi - \eta|
\end{equation*}
for all $\xi, \eta \in T$. Note that $V$ is linearly isomorphic to $\mathbb{R}^Q$.
{\it A central question for our approach
is to determine under what conditions on $\bL_{s,\nu}$ (see \eqref{3.8})
we have $\bL_{s,\nu}(K_M(T)) \subset K_{M'}(T)$ for
some $M$, $M'$ with $0 < M' < M$.} To do so, we begin by recalling some
classical results.
\begin{lem} (See \cite{Markov}, \cite{Rivlin}, pages 121-123, or
\cite{Shadrin}).
\label{lem:3.1}
Let $p(t)$ be a real-valued polynomial of degree $\le r$. Then
\begin{equation*}
\max_{-1 \le t \le 1} |p^{\prime}(t)| \le r^2 \max_{-1 \le t \le 1} |p(t)|.
\end{equation*}
\end{lem}
A proof of the following estimate is given in \cite{Brutman} and refinements
for $r \ge 5$ can be found in \cite{Gunttner82}.
\begin{lem}
\label{lem:3.2}
If $\hat l_{k}(\hat x)$, $0 \le k \le r$, is defined by \eqref{2.32}, then
\begin{equation*}
\max_{-1 \le \hat x \le 1} \sum_{k=0}^r|\hat l_{k}(\hat x)|
\le \frac{2}{\pi} \ln(r +1) + 3/4 := \psi(r),
\end{equation*}
where $\ln$ denotes the natural logarithm.
\end{lem}
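A quick numerical check of this bound, using the form we assume for the
normalized extended Chebyshev points in \eqref{hatck}, namely $\hat c_k =
\cos([2k+1]\pi/[2r+2])/\cos(\pi/[2r+2])$, is sketched below.
\begin{verbatim}
import numpy as np

def lebesgue_constant(r, ngrid=20001):
    # max over [-1,1] of sum_k |hat l_k(x)| at the extended Chebyshev points
    c = np.cos((2*np.arange(r + 1) + 1)*np.pi/(2*r + 2))/np.cos(np.pi/(2*r + 2))
    x = np.linspace(-1.0, 1.0, ngrid)
    total = np.zeros_like(x)
    for k in range(r + 1):
        lk = np.ones_like(x)
        for l in range(r + 1):
            if l != k:
                lk *= (x - c[l])/(c[k] - c[l])
        total += np.abs(lk)
    return total.max()

for r in range(1, 7):
    print(r, lebesgue_constant(r), 2/np.pi*np.log(r + 1) + 0.75)
\end{verbatim}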
It will also be convenient to have some elementary estimates concerning the
numbers $c_{j,k}^i$, $1 \le j \le N_i$, $0 \le k \le r$, in \eqref{2.33}.
If $x$ is a real number, $[x]$ denotes the greatest integer $m \le x$.
If $r$ is an integer, it follows that $[r/2] = r/2$ if $r$ is even and
$[r/2] = (r-1)/2$ if $r$ is odd. The next lemma is a straightforward
exercise and is left to the reader.
\begin{lem}
\label{lem:3.3}
Let the numbers $c_{j,k}^i$ be defined by \eqref{2.33}. Then for $0 \le k \le
r$, $1 \le j \le N_i$, and $1 \le i \le I$,
\begin{align*}
|c_{j,k}^i - c_{j,[r/2]}^i| &\le h_i/2, \quad \text{if } r \text{ is even},
\\
|c_{j,k}^i - c_{j,[r/2]}^i| &\le (h_i/2)[1 + \tan(\pi/[2r+2])],
\quad \text{if } r \text{ is odd}.
\end{align*}
Furthermore, for $1 \le i \le I$,
\begin{align*}
\min_{1 \le j \le N_i, 0 \le k < r} |c_{j,k}^i -c_{j,k+1}^i| &= 2h_i
[\sin(\pi/[2r+2])]^2,
\\
\max_{1 \le j \le N_i, 0 \le k < r} |c_{j,k}^i -c_{j,k+1}^i| &= h_i \tan(\pi/[2r+2])
\quad \text{if } r \text{ is odd},
\\
\max_{1 \le j \le N_i, 0 \le k < r} |c_{j,k}^i -c_{j,k+1}^i| &= h_i \sin(\pi/[2r+2])
\quad \text{if } r \text{ is even}.
\end{align*}
\end{lem}
Since the first result in Lemma~\ref{lem:3.3} will be used later, we
define the constant $\eta(r)$ for $r$ a
positive integer by:
\begin{equation}
\label{3.1}
\eta(r) = \begin{cases} \tfrac{1}{2}, & \text{if } r \text{ even}
\\
\tfrac{1}{2}[1 + \tan(\pi/[2r+2])], & \text{if } r \text{ odd}
\end{cases}.
\end{equation}
In addition, for $\nu$ a positive integer,
we define constants $M_0(\nu)$ and $c(\nu)$
such that for all $\omega \in \Omega_{\nu}$,
\begin{equation}
\label{3.4}
g_{\omega} \in K_{M_0(\nu)}(S) \qquad \text{and} \qquad \lip(\theta_{\omega}|_S)
\le c(\nu).
\end{equation}
We already know (see Section~\ref{sec:notation}) that $M_0(\nu) \le M_0 C_0 (1-
\kappa^{\nu})/(1 - \kappa)$, where $M_0$ is as in (H1) and $C_0$ and
$\kappa$ are as in (H2); and (H2) implies that $c(\nu) \le C_0
\kappa^{\nu}$. However, in specific examples which we shall study later,
these estimates can be significantly improved.
\begin{lem}
\label{lem:3.4}
Suppose that $\tau \in \mathbb{R}$, $\epsilon >0$, and $\hat c_k$, $0 \le k \le r$, are
the normalized extended Chebyshev points as in \eqref{hatck}, and $c_k = \tau +
\tfrac{\epsilon}{2}(1 + \hat c_k)$. If $x \in [\tau, \tau + \epsilon]$,
let $\hat x \in [-1,1]$ denote the unique point such that
$x = \tau + \tfrac{\epsilon}{2}[1 + \hat x]$. Assume that $M >0$,
$\Gamma = \{c_k: 0 \le k \le r\}$ and $F \in K_M(\Gamma) \setminus \{0\}$,
so $|\ln(F(\xi)) - \ln(F(\eta))| \le M|\xi - \eta|$ for all $\xi, \eta
\in \Gamma$. Let $\mathcal F: [\tau, \tau + \epsilon] \to \mathbb{R}$ denote the unique
polynomial of degree $\le r$ such that $\mathcal F(c_k) = F(c_k)$ for $0 \le k \le
r$. Let $\eta(r)$ be as in \eqref{3.1} and $\psi(r)$ as in Lemma~\ref{lem:3.2}
and define $u = M \epsilon \eta(r)$. If
\begin{equation*}
\psi(r) u \exp(u) <1,
\end{equation*}
then $\mathcal F(x) >0$ for all $x \in [\tau, \tau + \epsilon]$. If
$\psi(r) u \exp(u) <1$ and
\begin{equation*}
C:= \frac{[2 \eta(r) r^2 \psi(r)]\exp(u) M}{1 - \psi(r) u \exp(u)},
\end{equation*}
then for all $x,y \in [\tau, \tau + \epsilon]$,
\begin{equation*}
|\ln(\mathcal F(x)) - \ln(\mathcal F(y))| \le C|x-y|.
\end{equation*}
\end{lem}
\begin{proof}
Recall that for $0 \le k \le r$,
\begin{equation*}
l_k(x) = \frac{\prod_{\substack{l=0 \\ l \neq k}}^r (x-c_l)}
{\prod_{\substack{l=0 \\ l \neq k}}^r(c_k -c_l)}, \qquad
\hat l_k(\hat x) = \frac{\prod_{\substack{l=0 \\ l \neq k}}^r (\hat x- \hat c_l)}
{\prod_{\substack{l=0 \\ l \neq k}}^r(\hat c_k -\hat c_l)},
\end{equation*}
and $l_k(x) = \hat l_k(\hat x)$ for $x = \tau + \tfrac{\epsilon}{2}[1 + \hat
x]$ and, writing $F_k = F(c_k)$,
\begin{equation*}
\mathcal F(x) = \sum_{k=0}^r l_k(x) F(c_k) = \sum_{k=0}^r l_k(x) F_k.
\end{equation*}
Recalling that $\sum_{k=0}^r l_k(x) =1$ for all
$x \in [\tau, \tau + \epsilon]$, we obtain
\begin{multline*}
\mathcal F(x) = \sum_{k=0}^r l_k(x) F_k
= F_{[r/2]}\Big( 1 + \sum_{\substack{k=0 \\ k \neq [r/2]}}^r
l_k(x) \Big[\frac{F_k}{F_{[r/2]}} - 1\Big]\Big)
= F_{[r/2]}[1 + \phi(x)]
\\
= F_{[r/2]}\Big( 1 + \sum_{\substack{k=0 \\ k \neq [r/2]}}^r
\hat l_k(\hat x) \Big[\frac{F_k}{F_{[r/2]}} - 1\Big]\Big)
:= F_{[r/2]}[1 + \hat\phi(\hat x)],
\end{multline*}
where as usual, $x = \tau + \tfrac{\epsilon}{2}[1 + \hat x]$, $\hat x \in
[-1,1]$.
Since $F \in K_M(\Gamma) \setminus \{0\}$,
we have for $0 \le k \le r$, $k \neq [r/2]$,
\begin{equation*}
\exp(- M|c_k- c_{[r/2]}|) \le \frac{F_k}{F_{[r/2]}}
\le \exp(M|c_k- c_{[r/2]}|).
\end{equation*}
Because Lemma~\ref{lem:3.3} (with $h_i = \epsilon$) implies that $|c_k
- c_{[r/2]}| \le \eta(r) \epsilon$,
\begin{equation*}
\exp(-M \eta(r) \epsilon) -1 \le \frac{F_k}{F_{[r/2]}} -1 \le \exp(M \eta(r)
\epsilon) -1.
\end{equation*}
Using the mean value theorem and writing $u = M \eta(r) \epsilon$,
it follows that
\begin{equation*}
-u \le \frac{F_k}{F_{[r/2]}} -1
\le u \exp(u),
\end{equation*}
so
\begin{equation*}
\Big|\frac{F_k}{F_{[r/2]}} -1\Big|
\le u \exp(u).
\end{equation*}
Using Lemma~\ref{lem:3.2}, it follows that
\begin{equation}
\label{3.10}
|\hat \phi(\hat x)| \le
\sum_{\substack{k=0 \\ k \neq [r/2]}}^r |\hat l_k(\hat x)|
\Big|\frac{F_k}{F_{[r/2]}} -1\Big|
\le \psi(r) u \exp(u),
\end{equation}
so if $\psi(r) u \exp(u) <1$, then $1 + \hat \phi(\hat x) > 0$ and $\mathcal F(x) >0$
for all $x \in [\tau, \tau + \epsilon]$.
For the remainder of the proof, we assume that $\psi(r) u \exp(u) <1$.
If $x,y \in [\tau, \tau + \epsilon]$, our previous calculations show that
\begin{multline*}
|\ln \mathcal F(x) - \ln \mathcal F(y)|
= \Big|\ln [1 + \phi(x)]
- \ln [1 + \phi(y)] \Big|
\\
= \Big|\int_{1 + \phi(y)}^{1 + \phi(x)} \frac{1}{s} \, ds \Big|
\le
\Big|\frac{\phi(x) - \phi(y)}{2}\Big| \Big[
\frac{1}{1 + \phi(x)} + \frac{1}{1 + \phi(y)}\Big],
\end{multline*}
where we have used the fact that $(1/s)$ is a convex function and
hence the integral is bounded by the trapezoidal rule approximation.
Now, by the mean value theorem, for some $\hat \xi$ lying between $\hat x$ and
$\hat y$ and hence $\in [-1,1]$,
\begin{multline*}
|\phi(x) - \phi(y)| = |\hat \phi(\hat x) - \hat \phi(\hat y)|
= |\hat \phi^{\prime}(\hat \xi)| |\hat x- \hat y|
\\
\le \frac{2}{\epsilon} |x-y| \max_{-1 \le \hat \xi \le 1}
|\hat \phi^{\prime}(\hat \xi)|
\le \frac{2}{\epsilon} |x-y| r^2 \max_{-1 \le \hat \xi \le 1}
|\hat \phi(\hat \xi)|,
\end{multline*}
where in the last step we have used
Markov's polynomial inequality (Lemma~\ref{lem:3.1}).
Recalling our earlier estimate for $|\hat \phi(\hat \xi)|$ in \eqref{3.10},
we obtain
\begin{equation*}
|\phi(x) - \phi(y)| \le 2 r^2 \psi(r) \eta(r)
M \exp(u) |x-y|
\end{equation*}
and
\begin{equation*}
\frac{1}{2} \Big[\frac{1}{1+ \phi(x)} + \frac{1}{1+ \phi(y)} \Big]
\le \frac{1}{1 - \psi(r) u \exp(u)},
\end{equation*}
which implies that
\begin{equation*}
|\ln( \mathcal F(x)) - \ln(\mathcal F(y))| \le C|x-y|,
\end{equation*}
with the constant $C$ defined in the statement of the lemma.
\end{proof}
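The hypothesis and the Lipschitz constant of Lemma~\ref{lem:3.4} are easy to
evaluate numerically. A minimal sketch, with hypothetical test values of $M$,
$\epsilon$, and $r$, follows.
\begin{verbatim}
import numpy as np

def lemma_3_4_constant(M, eps, r):
    # check psi(r)*u*exp(u) < 1 and return the constant C of Lemma 3.4
    psi = 2/np.pi*np.log(r + 1) + 0.75
    eta = 0.5 if r % 2 == 0 else 0.5*(1 + np.tan(np.pi/(2*r + 2)))
    u = M*eps*eta
    assert psi*u*np.exp(u) < 1, "hypothesis psi(r) u exp(u) < 1 fails"
    return 2*eta*r**2*psi*np.exp(u)*M/(1 - psi*u*np.exp(u))

print(lemma_3_4_constant(M=1.0, eps=0.25, r=3))
\end{verbatim}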
\begin{lem}
\label{lem:5.5n}
Let notation be as in Section~\ref{sec:approx} and $T$ be as defined
by \eqref{T-def}. Suppose that $F:T \to \mathbb{R}$ is an element of $K_M(T)
\setminus \{0\}$ and let $\mathcal F: \cup_{i=1}^I [a_i,b_i] \to \mathbb{R}$ be
defined by \eqref{2.31}. For $1 \le i \le I$, define $u_i = M h_i
\eta(r)$ and assume that
\begin{equation*}
\psi(r) u_i \exp(u_i) < 1.
\end{equation*}
Then $\mathcal F(x) >0$ for all $x \in [a_i,b_i]$. If we define $C_i$ by
\begin{equation}
\label{5.13n}
C_i:= \frac{[2 \eta(r) r^2 \psi(r)]\exp(u_i) M}{1 - \psi(r) u_i \exp(u_i)},
\end{equation}
then for all $x,y \in [a_i,b_i]$,
\begin{equation}
\label{5.14n}
|\ln(\mathcal F(x)) - \ln(\mathcal F(y))| \le C_i|x-y|.
\end{equation}
\end{lem}
\begin{proof}
Recall that $[a_i,b_i] = \cup_{j=1}^{N_i} [t_{j-1}^i, t_j^i]$,
where $t_j^i - t_{j-1}^i = h_i = (b_i-a_i)/N_i$. If we write $h_i =
\epsilon$, note that whenever $x,y \in [t_{j-1}^i, t_j^i]$ for some $j$,
Lemma~\ref{lem:3.4} implies that \eqref{5.14n} is satisfied. Thus we can
assume that $x,y \in [a_i,b_i]$ and that there does not exist $j$, $1 \le j
\le N_i$, such that $x$ and $y$ are both elements of $[t_{j-1}^i, t_j^i]$. We can
also assume that $x < y$ and select $j_1$, $1 \le j_1 \le N_i$, such that
$x \in [t_{j_1-1}^i, t_{j_1}^i]$ and $j_2$, $1 \le j_2 \le N_i$, such that
$y \in [t_{j_2-1}^i, t_{j_2}^i]$. By our assumptions, it must be true that
$j_1 < j_2$. If we apply Lemma~\ref{lem:3.4} to $\mathcal F(x)$ and
$\mathcal F(t_{j_1}^i)$, we obtain
\begin{equation*}
|\ln(\mathcal F(t_{j_1}^i)) - \ln(\mathcal F(x))| \le C_i(t_{j_1}^i -x).
\end{equation*}
Similarly, if we apply Lemma~\ref{lem:3.4} to $\mathcal F(t_{j_2 -1}^i)$ and
$\mathcal F(y)$, we obtain
\begin{equation*}
|\ln(\mathcal F(y)) - \ln(\mathcal F(t_{j_2 -1}^i))| \le C_i(y - t_{j_2 -1}^i).
\end{equation*}
Since $\mathcal F(t_{j_1}^i) = F(t_{j_1}^i)$ and $\mathcal F(t_{j_2 -1}^i) = F(t_{j_2
-1}^i)$ and $F \in K_M(T)$, we obtain
\begin{equation*}
|\ln(\mathcal F(t_{j_2 -1}^i)) - \ln(\mathcal F(t_{j_1}^i))| \le M(t_{j_2 -1}^i - t_{j_1}^i)
\le C_i (t_{j_2 -1}^i - t_{j_1}^i),
\end{equation*}
where we have used the fact that $C_i >M$. Combining these inequalities, we
find that
\begin{multline*}
|\ln(\mathcal F(y)) - \ln(\mathcal F(x))| \le |\ln(\mathcal F(y)) - \ln(\mathcal F(t_{j_2 -1}^i))|
\\
+ |\ln(\mathcal F(t_{j_2 -1}^i)) - \ln(\mathcal F(t_{j_1}^i))|
+ |\ln(\mathcal F(t_{j_1}^i)) - \ln(\mathcal F(x))| \le C_i (y-x),
\end{multline*}
which proves Lemma~\ref{lem:5.5n}.
\end{proof}
Up to this point, we have only used the fact that $F \in K_M(T)$,
where $T$ is defined in \eqref{T-def} and notation is as in
Section~\ref{sec:approx}. We now exploit the fact that $\lip(\theta_{\omega}|_S)
\le c(\nu)$ for all $\omega \in \Omega_{\nu}$.
\begin{thm}
\label{thm:5.6n}
Let notation be as in Section~\ref{sec:approx} and for positive reals $M' <
M$, let $K_M(T)$ and $K_{M'}(T)$ be as defined earlier. Recall that $h_i =
(b_i-a_i)/N_i$, $1 \le i \le I$, and $h = \max \{h_i: 1 \le i \le I\}$.
Assume that hypotheses (H1), (H2), and (H3) are satisfied, and that
\begin{equation}
\label{5.16n}
\psi(r) u \exp(u) <1,
\end{equation}
where we now set
\begin{equation*}
u = M h \eta(r).
\end{equation*}
If $F \in K_M(T) \setminus \{0\}$ and $\mathcal F$ is the piecewise polynomial
approximation of $F$ of degree $\le r$ on $\hat S = \cup_{i=1}^I [a_i,b_i]$,
then $\mathcal F(x) >0$ for all $x \in \hat S$.
Define $C:= \max \{C_i: 1 \le i \le I\}$, where $C_i$ is defined
by \eqref{5.13n} and let $\bL_{s, \nu}: V(T) := \mathbb{R}^Q \to V(T)$ be defined
by \eqref{3.8}. Assume the above hypotheses are satisfied and also assume
that
\begin{equation}
\label{5.17n}
c(\nu)C :=
\frac{c(\nu)[2 \eta(r) r^2 \psi(r)] \exp(u) M}
{1 - \psi(r) u \exp(u)}< M' - s M_0(\nu).
\end{equation}
Then it follows that
$\bL_{s,\nu}(K_M(T) \setminus \{0\}) \subset K_{M'}(T)\setminus \{0\}$.
\end{thm}
\begin{proof}
Suppose that $F \in K_M(T) \setminus \{0\}$, which implies that $F(\xi) >0$ for
all $\xi \in T$. Since $u \ge u_i$ for $1 \le i \le I$, \eqref{5.16n}
implies that $\psi(r) u_i \exp u_i <1$ for $1 \le i \le I$. It follows
from Lemma~\ref{lem:5.5n} that $\mathcal F(x) >0$ for all $x \in [a_i,b_i]$,
$1 \le i \le I$, so $\mathcal F(x) >0$ for all $x \in \hat S$.
(H1) implies that $g_{\omega}(x)^s >0$ for all $x \in S = [a,b]$ and
all $\omega \in \Omega_{\nu}$, and
\begin{equation*}
\bL_{s,\nu}(F)(\xi) = \sum_{\omega \in \Omega_{\nu}} [g_{\omega}(\xi)]^s
\mathcal F(\theta_{\omega}(\xi)),
\end{equation*}
where each term $g_{\omega}(\xi)^s \mathcal F(\theta_{\omega}(\xi))$ is positive for
$\xi \in T$. Since $K_{M'}(T)$ is a cone, and hence closed under sums, it
suffices to prove that the map $\xi \mapsto
g_{\omega}(\xi)^s \mathcal F(\theta_{\omega}(\xi))$ belongs to $K_{M'}(T) \setminus \{0\}$ for
every $\omega \in \Omega_{\nu}$. We know that for all $x,y \in S$,
\begin{equation*}
|\ln(g_{\omega}(x)^{s}) - \ln(g_{\omega}(y)^{s})| \le s M_0(\nu)|x-y|,
\end{equation*}
so it suffices to prove that for all $x, y \in \hat S$,
\begin{equation*}
|\ln(\mathcal F(\theta_{\omega}(x))) - \ln(\mathcal F(\theta_{\omega}(y)))| \le [M' - s
M_0(\nu)] \, |x-y|.
\end{equation*}
For each fixed $\omega \in \Omega_{\nu}$, (H3) implies that there exists
$i = i(\omega)$ such that $\theta_{\omega}(\hat S) \subset [a_i,b_i]$.
Writing $x' = \theta_{\omega}(x) \in [a_i,b_i]$ and
$y' = \theta_{\omega}(y) \in [a_i,b_i]$, Lemma~\ref{lem:5.5n} implies that
\begin{equation*}
|\ln(\mathcal F(x')) - \ln(\mathcal F(y'))| \le C|x'-y'| \le c(\nu)C|x-y|,
\end{equation*}
so \eqref{5.17n} completes the proof.
\end{proof}
\begin{remark}
\label{rem:3.2}
Assume that $\psi(r) u \exp(u) <1$.
Notice that for a given positive integer $r$, a necessary
condition that \eqref{5.17n} be satisfied is that
\begin{equation}
\label{3.21}
c(\nu) 2 r^2 \psi(r) \eta(r) < \frac{M' -s M_0(\nu)}{M}.
\end{equation}
For a given $M' < M$, if \eqref{3.21} is satisfied, then \eqref{5.17n} will be
satisfied if $h$ is sufficiently small.
\end{remark}
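In practice one checks \eqref{5.16n}, the necessary condition \eqref{3.21},
and \eqref{5.17n} numerically before assembling $\bL_{s,\nu}$. A sketch with
hypothetical values of $c(\nu)$, $M_0(\nu)$, $M$, $M'$, $s$, $h$, and $r$ is
given below.
\begin{verbatim}
import numpy as np

def conditions_ok(c_nu, M0_nu, M, Mp, s, h, r):
    # verify (5.16n), the necessary condition (3.21), and (5.17n)
    psi = 2/np.pi*np.log(r + 1) + 0.75
    eta = 0.5 if r % 2 == 0 else 0.5*(1 + np.tan(np.pi/(2*r + 2)))
    u = M*h*eta
    ok_516 = psi*u*np.exp(u) < 1
    ok_321 = c_nu*2*r**2*psi*eta < (Mp - s*M0_nu)/M
    C = 2*eta*r**2*psi*np.exp(u)*M/(1 - psi*u*np.exp(u)) if ok_516 else np.inf
    ok_517 = c_nu*C < Mp - s*M0_nu
    return ok_516, ok_321, ok_517

print(conditions_ok(c_nu=0.015, M0_nu=1.0, M=4.0, Mp=3.0, s=0.5, h=0.05, r=3))
\end{verbatim}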
\begin{remark}
\label{rem:5.3n}
The reader will note that in our definition of $\bL_{s,\nu}(F)$ for
$F \in V(T)$, we arrange that $\mathcal F|_{[a_i,b_i]}$ is a piecewise polynomial
map of degree $\le r$. In some applications, it is desirable to make
$\mathcal F|_{[a_i,b_i]}$ a piecewise polynomial map of degree $\le r_i$, which
leads to a generalization of the definition of $\bL_{s,\nu}$. An analogue
of Theorem~\ref{thm:5.6n} which handles this more general case can be
proved by an argument similar to the proof of Theorem~\ref{thm:5.6n}.
Because of considerations of length, we omit the proof.
\end{remark}
\section{Estimating $\rmf(L_s)$ by the spectral radius of
$\bL_{s,\nu}$}
\label{sec:estimating-sr}
In the previous section (c.f. Theorem~\ref{thm:5.6n}),
we determined conditions under which
\begin{equation*}
\bL_{s,\nu}(K_M(T) \setminus \{0\}) \subset K_{M'}(T)\setminus \{0\}
\end{equation*}
for some $M' < M$. The main result of this section is to show that
under this condition, $\rmf(\bL_{s, \nu})$, the spectral radius of
$\bL_{s, \nu}$, satisfies
\begin{equation*}
\lambda_s^{\nu}(1 - H h^r) \le \rmf(\bL_{s,\nu}) \le \lambda_s^{\nu}(1 + H h^r)
\end{equation*}
for some constant $H$ to be specified below. Using this result, we
obtain the following explicit bounds on the spectral radius
$\lambda_s$ of $L_s$.
\begin{equation*}
[\rmf([1 + H h^r]^{-1} \bL_{s,\nu})]^{1/\nu} \le \lambda_s \le
[\rmf([1 - H h^r]^{-1} \bL_{s,\nu})]^{1/\nu},
\end{equation*}
where the entries of the matrices $[1 + H h^r]^{-1} \bL_{s,\nu}$ and
$[1 - H h^r]^{-1} \bL_{s,\nu}$ differ by $O(h^r)$.
Throughout this section, we shall assume the hypotheses and notation of
(H1), (H2), and (H3); and $v_s(\cdot)$ will always denote the positive
eigenvector $v_s$ of $L_s$ guaranteed by Theorem~\ref{thm:2.4}. In particular,
$S$ and $[a_i,b_i]$, $1 \le i \le I$ will be as in (H3) and
(as can be guaranteed by relabeling), we shall assume that
$b_i < a_{i+1}$ for $1 \le i < I$.
We shall further denote by $E$ and $\chi$ constants for which the following
two inequalities are satisfied:
\begin{equation}
\label{6.1}
\sup_{a \le x \le b} \frac{|d^p v_s(x)/dx^p|}{v_s(x)} \le E(s,p):=E,
\end{equation}
where $p$ is a positive integer, and
\begin{equation}
\label{6.2}
v_s(x_1) \le v_s(x_2) \exp(2s|x_1-x_2|/\chi),
\end{equation}
for all $x_1,x_2 \in S$, where $\chi:= \chi(s, \{\theta_i, g_i:1
\le i \le n\})$.
Using Theorem~\ref{thm:2.4} and Remark~\ref{rem:2.5}, we shall see, in the
next section, that for some interesting examples, it is possible to
obtain sharp estimates on the constants $E$ and $\chi$ such that \eqref{6.1}
and \eqref{6.2} are satisfied. These estimates will refine
earlier results in \cite{hdcomp1}.
Using the notation of Section~\ref{sec:approx}, if $\mathcal V_s(x)$ is the piecewise
polynomial interpolant of $v_s(x)$ of degree $\le r$ at the extended Chebyshev
points in $[a_i,b_i]$, $1 \le i \le I$, then on each subinterval
$[t_{j-1}^i,t_j^i]$, $j = 1, \ldots, N_i$, we have, using standard estimates for
polynomial interpolation,
\begin{equation*}
v_s(x) - \mathcal V_s(x) = \frac{v_s^{(r+1)}(\xi_x)}{(r+1)!}
\prod_{k=0}^r (x - c_{j,k}^i),
\end{equation*}
for some $\xi_x \in [t_{j-1}^i,t_j^i]$.
If we write, as done previously,
\begin{equation*}
c_{j,k}^i = t_{j-1}^i + h_i(1+ \hat c_k)/2, \qquad
x = t_{j-1}^i + h_i(1 + \hat x)/2, \quad \hat x \in [-1,1],
\end{equation*}
then for $x \in [a_i,b_i]$,
\begin{equation*}
|v_s(x) - \mathcal V_s(x)| \le |v_s^{(r+1)}(\xi_x)| h_i^{r+1} m_{r+1},
\end{equation*}
where
\begin{equation}
\label{6.4}
m_{r+1} = \frac{1}{2^{r+1} (r+1)!}
\max_{\hat x \in [-1,1]} \Big|\prod_{k=0}^r (\hat x - \hat c_k)\Big|.
\end{equation}
Using \eqref{6.1} and \eqref{6.2}, we see that
\begin{equation*}
|v_s^{(r+1)}(\xi_x)| \le E \exp(2sh_i/\chi) v_s(x),
\end{equation*}
so
\begin{equation*}
|v_s(x) - \mathcal V_s(x)| \le E h_i^{r+1} m_{r+1} v_s(x) \exp(2s h_i/\chi).
\end{equation*}
Defining, for $1 \le i \le I$,
\begin{equation}
\label{6.6}
G_{r,i} := E m_r \exp(2s h_i/\chi),
\end{equation}
the preceding inequality implies that for $1 \le i \le I$ and $x \in [a_i,b_i]$,
\begin{equation}
\label{6.7}
(1 - G_{r+1,i} h_i^{r+1}) v_s(x) \le \mathcal V_s(x) \le (1 + G_{r+1,i} h_i^{r+1}) v_s(x).
\end{equation}
In order to make \eqref{6.4} explicit, we need a formula for
$\max_{\hat x \in [-1,1]} \Big|\prod_{k=0}^r (\hat x - \hat
c_k)\Big|$. The result and proof, which we provide below, are slight
modifications of the well-known corresponding bound and proof when
$\hat c_k$ are taken to be the zeros of the standard Chebyshev
polynomial.
\begin{lem}
\label{lem:6.1}
If $r \ge 2$ is a positive integer and $\hat c_k$ is defined by \eqref{hatck},
then
\begin{equation*}
\max_{\hat x \in [-1,1]} \Big|\prod_{k=0}^r (\hat x - \hat c_k)\Big|
= \frac{1}{2^r} \Big[\frac{1}{\cos(\pi/[2r+2])}\Big]^{r+1}.
\end{equation*}
\end{lem}
\begin{proof}
If we define $w = \hat x \cos(\pi/[2r+2])$, where $|\hat x| \le 1$, we have
$|w| \le \cos(\pi/[2r+2])$. For notational convenience, we write
$\alpha = 1/ \cos(\pi/[2r+2])$, and we obtain
\begin{multline*}
\max_{\hat x \in [-1,1]} \Big|\prod_{k=0}^r (\hat x - \hat c_k)\Big|
\\
= \alpha^{r+1} \max \Big\{ \Big|\prod_{k=0}^r(w + \cos([2k+1]\pi/[2r+2]))\Big|:
|w| \le \cos(\pi/[2r+2])\Big\}
\\
= \alpha^{r+1} \max \Big\{ \Big|\prod_{k=0}^r(w - \cos([2k+1]\pi/[2r+2]))\Big|:
|w| \le \cos(\pi/[2r+2])\Big\}.
\end{multline*}
If we define $q_{r+1}(w) = \prod_{k=0}^r(w - \cos([2k+1]\pi/[2r+2]))$,
$q_{r+1}(w)$ is a polynomial of degree $r+1$ which vanishes at the points
$\cos([2k+1]\pi/[2r+2])$, $0 \le k \le r$, and has leading term $w^{r+1}$; and
these properties uniquely determine $q_{r+1}(w)$.
Recall that for integers $r \ge 0$, $\cos([r+1]\theta) = p_{r+1}(\cos
\theta)$ for $0 \le \theta \le \pi$, where $p_{r+1}(w)$ is the {\it
Chebyshev} polynomial of degree $r+1$. These polynomials satisfy
$p_1(w) = w$, $p_2(w) = 2 w^2 -1$, and for $r \ge 2$, the recurrence
relation $p_{r+1}(w) = 2w p_r(w) - p_{r-1}(w)$.
Using the recursion relation for $p_{r+1}(w)$ and induction, it also
follows that the coefficient of $w^{r+1}$ in $p_{r+1}(w)$ is
$2^r$. Since $\cos([r+1]\theta) =0$ when $\theta =
[2k+1]\pi/[2r+2]$ for $0 \le k \le r$, we see that $p_{r+1}(w)
=0$ when $w = \cos([2k+1]\pi/[2r+2])$, for $0 \le k \le r$. It follows that
for $w = \cos(\theta)$ and $0 \le \theta \le \pi$,
\begin{equation*}
q_{r+1}(w)= \frac{1}{2^r} p_{r+1}(w) = \frac{1}{2^r} \cos([r+1] \theta),
\end{equation*}
so
\begin{equation*}
\max_{|w| \le 1} |q_{r+1}(w)| = 1/2^r.
\end{equation*}
However,
\begin{multline*}
\max \{ |q_{r+1}(w)|: |w| \le \cos(\pi/[2r+2])\}
\\
= \frac{1}{2^r} \max \{|\cos([r+1]\theta)|: \pi/[2r+2] \le \theta
\le (2r+1) \pi/(2r+2)\}.
\end{multline*}
Since $\pi/[2r+2] < \pi/(r+1) < (2r+1) \pi/(2r+2)$ and
$|\cos([r+1] \pi/(r+1))| =1$,
\begin{equation*}
\max \{ |q_{r+1}(w)|: |w| \le \cos(\pi/[2r+2])\} = \frac{1}{2^r},
\end{equation*}
which completes the proof.
\end{proof}
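The closed form in Lemma~\ref{lem:6.1} is easily checked numerically on a fine
grid (again using the form we assume for \eqref{hatck}):
\begin{verbatim}
import numpy as np

r = 4
chat = np.cos((2*np.arange(r + 1) + 1)*np.pi/(2*r + 2))/np.cos(np.pi/(2*r + 2))
x = np.linspace(-1, 1, 200001)
prod = np.ones_like(x)
for ck in chat:
    prod *= (x - ck)
lhs = np.abs(prod).max()                           # grid maximum
rhs = (1/np.cos(np.pi/(2*r + 2)))**(r + 1)/2**r    # value from Lemma 6.1
print(lhs, rhs)                                    # agree to grid accuracy
\end{verbatim}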
If $E$ and $\chi$ are defined by \eqref{6.1} and \eqref{6.2}, we can use
Lemma~\ref{lem:6.1} to estimate the constant $G_{r+1,i}$ in \eqref{6.6} more
precisely:
\begin{equation}
\label{6.9}
G_{r+1,i} = E \exp(2sh_i/\chi) \Big[\frac{1}{(r+1)!}\Big]
\Big[\frac{1}{2 \cos(\pi/[2r+2])}\Big]^{r+1} \frac{1}{2^r},
\end{equation}
and with this estimate of $G_{r+1,i}$, \eqref{6.7} is satisfied
for all $x \in [a_i,b_i]$, $1 \le i \le I$. For notational convenience,
we define $h$ and $G_{r+1}$ by $h = \max_{1 \le i \le I} h_i$ and
\begin{equation}
\label{rn:6.8}
G_{r+1} = \max_{1 \le i \le I} G_{r+1,i}
= E \exp(2sh/\chi) \Big[\frac{1}{(r+1)!}\Big]
\Big[\frac{1}{2 \cos(\pi/[2r+2])}\Big]^{r+1} \frac{1}{2^r}.
\end{equation}
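For given values of $E$, $\chi$, $s$, $h$, and $r$ (the numbers below are
hypothetical placeholders), the constant $G_{r+1}$ of \eqref{rn:6.8} can be
evaluated directly:
\begin{verbatim}
import math

def G_r1(E, chi, s, h, r):
    # the constant G_{r+1} of (6.8); E and chi are as in (6.1) and (6.2)
    return (E*math.exp(2*s*h/chi)/math.factorial(r + 1)
            *(1/(2*math.cos(math.pi/(2*r + 2))))**(r + 1)/2**r)

print(G_r1(E=50.0, chi=1.0, s=0.5, h=0.05, r=3))
\end{verbatim}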
\begin{lem}
\label{lem:6.3}
Define $\lambda_s = \rmf(\Lambda_s) = \rmf(L_s)$, where $\Lambda_s$
and $L_s$ are as in Theorem~\ref{thm:2.4}. Let $[a_i,b_i]$, $1 \le i
\le I$, be as in (H3). For $1 \le i \le I$, let $N_i$, $h_i$ and
$\mathcal V_s$ be as defined in the fourth paragraph of this section. Assume
that, for $1 \le i \le I$,
\begin{equation*}
[\sin(\pi/(2r+2))]^2 h_i \le a_{i+1} - b_i.
\end{equation*}
Define $h_{min} = \min_{1 \le i \le I} h_i$ and $\mu = h/h_{min}$.
Then we have for all $x \in [a_i,b_i]$, $1 \le i \le I$,
\begin{equation}
\label{6.10}
(1 - G_{r+1,i} h_i^{r+1}) v_s(x) \le \mathcal V_s(x) \le (1 + G_{r+1,i} h_i^{r+1}) v_s(x),
\end{equation}
and
\begin{equation}
\label{6.11}
(1 - G_{r+1} h^{r+1}) \lambda_s^{\nu} v_s(x) \le (L_s^{\nu} \mathcal V_s)(x)
\le (1 + G_{r+1} h^{r+1}) \lambda_s^{\nu} v_s(x).
\end{equation}
If we define $M_1$ by
\begin{equation}
\label{6.12}
M_1 = \Big[\mu \frac{G_{r+1} h^r}{1- G_{r+1}^2 h^{2r+2}}\Big]
\Big[\frac{1}{\sin(\pi/[2r+2])}\Big]^2 + \frac{2s}{\chi}
\end{equation}
and
\begin{equation*}
T = \{c_{j,k}^i: 1 \le i \le I, 1 \le j \le N_i, 0 \le k \le r\} \subset S,
\end{equation*}
then $\mathcal V_s|_T \in K(2s/\chi;T)$ and $L_s^{\nu} \mathcal V_s|_T
= \bL_{s,\nu}(\mathcal V_s|_T) \in K(M_1;T)$.
\end{lem}
\begin{proof}
To simplify the exposition, we shall denote $G_{r+1}$ as $G$. Equation
\eqref{6.7} gives \eqref{6.10} and \eqref{6.2} implies that $v_s \in
K(2s/\chi;S)$. Since $ \mathcal V_s|_T = v_s|_T$, it follows that $\mathcal V_s|_T \in
K(2s/\chi;T)$. If we observe that $1 - Gh^{r+1} \le 1 - G_{r+1,i}h_i^{r+1}$
and $1 + G_{r+1,i}h_i^{r+1} \le 1 + Gh^{r+1}$ for $1 \le i \le I$, we derive
from \eqref{6.7} that for $1 \le i \le I$ and $x \in [a_i,b_i]$,
\begin{equation*}
(1 - Gh^{r+1}) v_s(x) \le \mathcal V_s(x) \le (1 + Gh^{r+1}) v_s(x).
\end{equation*}
Applying $L_s^{\nu}$ to this inequality, we obtain \eqref{6.11} and in
particular, \eqref{6.11} holds for all $x \in T$. A little thought
shows that for all $x \in T$,
\begin{equation*}
[L_s^{\nu} \mathcal V_s](x) = (\bL_{s,\nu}(\mathcal V_s|_T))(x).
\end{equation*}
If $x,y \in T \cap [a_i,a_{i+1}]$, $1 \le i \le I$, and $x \neq y$, we obtain
from \eqref{6.11} that
\begin{multline*}
(L_s^{\nu}\mathcal V_s)(x) \le (1 + G h^{r+1}) \lambda_s^{\nu} v_s(x)
\le (1 + G h^{r+1}) \lambda_s^{\nu} \exp(2s|x-y|/\chi) v_s(y)
\\
\le \frac{1 + G h^{r+1}}{1 - G h^{r+1}}\exp(2s|x-y|/\chi) (L_s^{\nu}\mathcal V_s)(y).
\end{multline*}
Taking logarithms on both sides of the above inequality, and noting that
$x$ and $y$ are interchangeable in the inequality, we find that
\begin{equation*}
|\ln(L_s^{\nu}\mathcal V_s)(x) - \ln(L_s^{\nu}\mathcal V_s)(y)|
\le \frac{2s}{\chi}|x-y| + [\ln(1 + G h^{r+1}) - \ln(1 - G h^{r+1})].
\end{equation*}
Using the trapezoidal rule and the convexity of $u \mapsto 1/u$,
\begin{multline*}
\ln (1 + G h^{r+1}) - \ln(1 - G h^{r+1}) = \int_{1 - G h^{r+1}}^{1 + G h^{r+1}}
\frac{1}{u} \, du
\\
\le \frac{1}{2}\Big[\frac{1}{1 - G h^{r+1}} + \frac{1}{1 + G h^{r+1}}\Big]
\Big[2G h^{r+1}\Big] = \frac{2G h^{r+1}}{1 - G^2 h^{2r+2}}.
\end{multline*}
To prove that $L_s^{\nu}\mathcal V_s|_T \in K(M_1;T)$, it will suffice to prove that
\begin{multline}
\label{6.13n}
\frac{2s}{\chi}|x-y| + \frac{2G h^{r+1}}{1 - G^2 h^{2r+2}}
\\
\le \frac{2s}{\chi}|x-y| + \Big[\mu \frac{Gh^r}{1- G^2 h^{2r+2}}\Big]
\Big[\frac{1}{\sin(\pi/[2r+2])}\Big]^2 |x-y|,
\end{multline}
whenever $x,y \in ([a_i,b_i] \cap T) \cup \{a_{i+1}\}$ for $1 \le i \le I$
and whenever $x,y \in ([a_I,b_I] \cap T)$. (Of course, we assume, as we can,
that $x \neq y$). A calculation shows that this will be true if
\begin{equation*}
2h \le \mu \Big[\frac{1}{\sin(\pi/[2r+2])}\Big]^2 |x-y|.
\end{equation*}
If $x,y \in [a_i,b_i]$, we know that $|x-y| \ge 2 h_i [\sin(\pi/(2r+2))]^2$,
so it suffices to prove that $h \le \mu h_i$, which follows from the
definition of $\mu$. We can assume that $x <y$, so the remaining case is
$y = a_{i+1}$; then $|x-y| \ge |a_{i+1} - b_i|$,
and we assume that $|a_{i+1} - b_i| \ge 2 h_i [\sin(\pi/(2r+2))]^2$, so
again the same argument applies and gives \eqref{6.13n}.
\end{proof}
\begin{remark}
\label{rem:6.1n}
If $I=1$, the condition on $a_{i+1} - b_i$ is vacuous and $\mu =1$.
\end{remark}
Our next lemma will play a crucial role in relating $\rmf(\bL_{s,\nu})$ to
$\rmf(L_s^{\nu})$.
\begin{lem}
\label{lem:6.4}
Let notation and assumptions be as in Lemma~\ref{lem:6.3}. Let $G:=G_{r+1}$
be as in \eqref{rn:6.8} and $M_1$ as in \eqref{6.12}. Assume
that $H:= H_{r+1}$ is a constant with $H >G$ and assume that $h <1$. Define
$M_2$ by
\begin{equation*}
M_2 = M_1 + \frac{G}{H} \Big[\frac{\mu}{1 - [(G/H)h]^2}\Big]
\frac{1}{[\sin(\pi/[2r+2])]^2}.
\end{equation*}
If $K = K(M_2;T)$, we have
\begin{equation}
\label{6.13}
\lambda_s^{\nu} \mathcal V_s (1 - H h^r) \le_{K} \bL_{s, \nu} \mathcal V_s|_T \le_{K}
\lambda_s^{\nu} \mathcal V_s (1 + H h^r).
\end{equation}
\end{lem}
\begin{proof}
Our previous results show that $(L_s^{\nu} \mathcal V_s)(x)
= (\bL_{s,\nu} \mathcal V_s)(x)$ for all $x \in T$, and, for all $x \in S = [a,b]$,
\begin{equation*}
\lambda_s^{\nu} \mathcal V_s(x) \frac{1 - G h^{r+1}}{ 1 + G h^{r+1}}
\le (L_s^{\nu} \mathcal V_s)(x) \le
\lambda_s^{\nu} \mathcal V_s(x) \frac{1 + G h^{r+1}}{ 1 - G h^{r+1}}.
\end{equation*}
Recalling that $\mathcal V_s(x) = v_s(x)$ for $x \in T$, we have for $x \in T$,
\begin{multline*}
\lambda_s^{\nu} \mathcal V_s(x)(1 + H h^r) - (L_s^{\nu} \mathcal V_s)(x)
\le \lambda_s^{\nu} \mathcal V_s(x)(1 + H h^r) - \lambda_s^{\nu}(1- G h^{r+1})
\mathcal V_s(x)
\\
= \lambda_s^{\nu}(H h^r + Gh^{r+1})\mathcal V_s(x)
= \lambda_s^{\nu} h^r \mathcal V_s(x)(1 + [G/H] h) H.
\end{multline*}
If $y \in T$, a similar argument
shows that
\begin{multline*}
\lambda_s^{\nu} \mathcal V_s(y)(1 + H h^r) - (L_s^{\nu} \mathcal V_s)(y)
\ge \lambda_s^{\nu} \mathcal V_s(y)(1 + H h^r) - \lambda_s^{\nu}(1+ G h^{r+1})
\mathcal V_s(y)
\\
= \lambda_s^{\nu} h^r \mathcal V_s(y)(1 - [G/H] h) H.
\end{multline*}
Using Lemma~\ref{lem:6.3} and the above estimates, we find that
\begin{multline}
\label{6.14}
\frac{\lambda_s^{\nu} \mathcal V_s(x)(1 + H h^r) - (L_s^{\nu} \mathcal V_s)(x)}
{\lambda_s^{\nu} \mathcal V_s(y)(1 + H h^r) - (L_s^{\nu} \mathcal V_s)(y)}
\le \frac{\mathcal V_s(x)(1 + [G/H] h)}{\mathcal V_s(y)(1 - [G/H] h)}
\\
\le \exp(M_1|x-y|) \frac{1 + [G/H] h}{1 - [G/H] h}.
\end{multline}
The right half of \eqref{6.13} will follow from \eqref{6.14} if we prove
that, for all $x,y \in T$ with $x \neq y$,
\begin{equation}
\label{6.16n}
\exp(M_1|x-y|) \frac{1 + [G/H] h}{1 - [G/H] h}
\le \exp(M_2|x-y|).
\end{equation}
As in Lemma~\ref{lem:6.3}, it suffices to verify \eqref{6.16n} for all
points $x \neq y$, $x,y \in [a_i,a_{i+1}] \cap T$, $1 \le i \le I$, where
$a_{I+1} = b_I$.
Arguing as in Lemma~\ref{lem:6.3}, we see that
\begin{equation*}
\ln(1 + [G/H] h) - \ln(1 - [G/H] h) \le \frac{G}{H}
\frac{2h}{1 - ([G/H] h)^2}.
\end{equation*}
If we take the log of both sides of \eqref{6.16n}, it suffices to prove
that
\begin{multline*}
M_1 |x-y| + \frac{G}{H} \frac{2h}{1 - ([G/H] h)^2}
\\
\le M_1 |x-y| + \frac{G}{H} \Big[\frac{\mu}{1 - [(G/H)h]^2}\Big]
\frac{1}{[\sin(\pi/[2r+2])]^2} |x-y|.
\end{multline*}
As was proved in Lemma~\ref{lem:6.3}, all $x,y \in [a_i,a_{i+1}] \cap T$
with $x \neq y$ satisfy $|x-y| \ge 2 h_i [\sin(\pi/(2r+2))]^2$, so that
$|x-y|/[\sin(\pi/[2r+2])]^2 \ge 2h_i$. We see,
after simplification, that the above inequality will be satisfied
if $h \le \mu h_i$, which holds by the definition of $\mu$. This proves
the right hand side of \eqref{6.13}. The proof of the left hand side
of inequality \eqref{6.13} follows by an exactly analogous argument and
is left to the reader.
\end{proof}
Our next theorem connects $\rmf(\bL_{s,\nu})$ and $\rmf(L_s^{\nu})$. To use the
theorem, we shall need to estimate various constants, and we shall carry
this out in the next section for an important class of examples.
\begin{thm}
\label{thm:6.5}
Let notation and assumptions be as in Lemma~\ref{lem:6.3} and let
$H$ and $M_2$ be as in Lemma~\ref{lem:6.4}. Assume that $\nu, h, s$ and
$r$ have been selected so that $\bL_{s,\nu}(K(M;T)) \subset K(M';T)$, where
$0 < M' < M$ and $M \ge M_2$ (see Theorem~\ref{thm:5.6n}). Then we have
that $\rmf(\bL_{s, \nu})$, the spectral radius of $\bL_{s, \nu}$, satisfies
\begin{equation*}
\lambda_s^{\nu}(1 - H h^r) \le \rmf(\bL_{s,\nu}) \le \lambda_s^{\nu}(1 + H h^r).
\end{equation*}
\end{thm}
\begin{proof}
Our previous results show that $\bL_{s,\nu}$ has a unique, strictly
positive eigenvector $w_{s,\nu} \in K(M;T)$ with $\|w_{s,\nu}\|=1$. The
eigenvalue corresponding to $w_{s,\nu}$ is $\rmf(\bL_{s,\nu})$.
Furthermore, for every $u \in K(M;T)\setminus\{0\}$, $\lim_{m \rightarrow
\infty} (\bL_{s,\nu}^m u/\|\bL_{s,\nu}^m u\|) = w_{s,\nu}$, with
convergence in the $\sup$ norm topology on $\mathbb{R}^Q$.
If we use \eqref{6.13}, but define $K = K(M;T)$, then because $M \ge M_2$ and
$K(M;T) \supset K(M_2;T)$, we obtain
\begin{equation*}
\lambda_s^{\nu} \mathcal V_s (1 - H h^r) \le _K \bL_{s,\nu} \mathcal V_s
\le _K \lambda_s^{\nu} \mathcal V_s (1 + H h^r).
\end{equation*}
The theorem now follows directly from Lemma~\ref{lem:cone-compare}.
\end{proof}
Our ultimate goal has been to provide rigorous upper and lower bounds on
$\lambda_s = \rmf(L_s)$ in terms of the eigenvalues of computable matrices, as
was done in \cite{hdcomp1}. This follows immediately from
Theorem~\ref{thm:6.5}.
\begin{thm}
\label{thm:6.6}
Under the hypotheses of Theorem~\ref{thm:6.5}, we have
\begin{equation*}
[(1 + H h^r)^{-1} \rmf(\bL_{s,\nu})]^{1/\nu} \le \lambda_s \le
[(1 - H h^r)^{-1} \rmf(\bL_{s,\nu})]^{1/\nu},
\end{equation*}
where the entries of the matrices $[1 + H h^r]^{-1} \bL_{s,\nu}$ and
$[1 - H h^r]^{-1} \bL_{s,\nu}$ differ by $O(h^r)$.
\end{thm}
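Thus, once the matrix $\bL_{s,\nu}$ has been assembled and its spectral radius
computed, the two-sided bounds on $\lambda_s$ follow by a one-line
calculation. A minimal sketch, with hypothetical values for
$\rmf(\bL_{s,\nu})$, $H$, $h$, $r$, and $\nu$, is given below.
\begin{verbatim}
# bounds of Theorem 6.6 from the computed spectral radius rho of bL_{s,nu};
# all numerical values here are hypothetical placeholders
rho, H, h, r, nu = 1.37, 40.0, 0.01, 3, 2
lower = (rho/(1 + H*h**r))**(1/nu)
upper = (rho/(1 - H*h**r))**(1/nu)
print(lower, upper)        # lower <= lambda_s <= upper by Theorem 6.6
\end{verbatim}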
\section{Calculating the optimal interval $[a,b]$ and
estimating $E$ and $\chi$.}
\label{sec:est-constants}
Throughout this section, we shall assume at least the hypotheses of
Theorem~\ref{thm:2.4}, so $L_s$ has a strictly positive $C^m$ eigenfunction
$v_s$. We shall take $S=[a,b], a<b$ in (H2). If $S_0$ is a closed, nonempty
subset of $S$ and $\theta_i(S_0) \subset S_0$ for $1 \le i \le n$, then
$L_s:C(S) \to C(S)$ induces a bounded linear operator $L_{s,S_0}: C(S_0) \to
C(S_0)$. It is often desirable to replace the original interval $[a,b]$ by a
smaller interval (or union of intervals) $S_0 \subset [a,b]$ such that
$\theta_i(S_0) \subset S_0$ for $1 \le i \le n$, and we first describe a class
of examples for which this can be easily done. Note that $v_s|_{S_0}$ is
strictly positive; and since $L_{s,S_0}(v_s|_{S_0}) = \rmf(L_s) (v_s|_{S_0})$,
the following lemma implies that $\rmf(L_{s,S_0}) = \rmf(L_s)$. Although this
is a special case of another well-known result, a proof is provided for the
readers' convenience.
\begin{lem}
\label{lem:4.3}
Let $S_0$ be a compact metric space, $\W:= C(S_0)$ and $P=\{f \in
\W: f(t) \ge 0, \, \forall t \in S_0\}$. Assume that $L:\W \to \W$ is
a bounded linear operator such that $L(P) \subset P$. If $w \in \W$
and $w(t) >0$ for all $t \in S_0$, then
\begin{equation*}
\rmf(L)= \lim_{k \rightarrow \infty} \|L^k\|^{1/k} = \lim_{k \rightarrow \infty}
\|L^k w\|^{1/k}.
\end{equation*}
If, in addition, $L w = \lambda w$, then $\lambda = \rmf(L)$; and there exists a
constant $C \ge 1$ such that $\|L^k\| \le C \lambda^k$ for all positive
integers $k$.
\end{lem}
\begin{proof}
Since $S_0$ is compact, there exists $\alpha >0$ such that $w(t) \ge \alpha$ for
all $t \in S_0$. If $f \in \W$ and $\|f\| \le 1$, it follows that for all
$t \in S_0$,
\begin{equation*}
-(1/\alpha) w(t) \le f(t) \le (1/\alpha) w(t).
\end{equation*}
Because $L$ is order-preserving in the partial ordering from $P$,
\begin{equation*}
-(1/\alpha) (L^kw)(t) \le (L^kf)(t) \le (1/\alpha) (L^kw)(t)
\end{equation*}
for all positive integers $k$, which implies that
\begin{equation*}
\|L^k\|^{1/k} = \Big(\sup \{\|L^kf\|: f \in \W, \|f\|\le 1\}\Big)^{1/k}
\le (1/\alpha)^{1/k} \|L^k w\|^{1/k}.
\end{equation*}
We also have that
\begin{equation*}
(1/\alpha)^{1/k} \|L^k w\|^{1/k} \le (1/\alpha)^{1/k} \|w\|^{1/k} \|L^k\|^{1/k}.
\end{equation*}
Since
$\lim_{k \rightarrow \infty} \|L^k\|^{1/k} = \rmf(L)$ and
$\lim_{k \rightarrow \infty} (1/\alpha)^{1/k} = \lim_{k \rightarrow \infty}
\|w\|^{1/k} = 1$,
we conclude that $\lim_{k \rightarrow \infty}\|L^k w\|^{1/k} = \rmf(L)$.
If $L w = \lambda w$, then $\|L^k w\|^{1/k} = \lambda \|w\|^{1/k} \to \lambda$,
so $\lambda = \rmf(L)$ by the first part of the proof. Moreover, the above
argument shows that
\begin{equation*}
-(1/\alpha) \lambda^k w(t) \le (L^k f)(t) \le (1/\alpha) \lambda^k w(t),
\end{equation*}
which implies $\|L^k\| \le \tfrac{1}{\alpha} \|w\| \lambda^k$, so the lemma
is satisfied with $C = \tfrac{1}{\alpha} \|w\| \ge 1$.
\end{proof}
Let $S_0=[\amf_0,\bmf_0]$ be a compact interval of reals, $\amf_0 < \bmf_0$,
and let $\mathcal B$ be a finite set of real numbers. For each $\beta \in \mathcal B$,
$\theta_{\beta}: S_0 \to \mathbb{R}$. We make the following hypothesis:
(H4) (i) For each $\beta \in \mathcal B$, $\theta_{\beta}: S_0 \to S_0$ and
$\theta_{\beta}$ is a continuous map.
(ii) There exists $\gamma \in \mathcal B$ and $\Gamma \in \mathcal B$ such that for
all $x \in S_0$ and all $\beta \in \mathcal B$, $\theta_{\Gamma}(x) \le \theta_{\beta}(x)
\le \theta_{\gamma}(x)$.
(iii) $x \mapsto \theta_{\gamma}(x)$ and $x \mapsto \theta_{\Gamma}(x)$ are
strictly decreasing functions on $S_0$.
The example we have in mind is that $\mathcal B$ is a finite set of distinct real
numbers with $\beta \ge \gamma >0$ for all $\beta \in \mathcal B$ and $\theta_{\beta}(x)
= 1/(x + \beta)$ and $S_0 = [0,1/\gamma]$, but there seems to be no gain in
specializing at this point.
\begin{lem}
\label{lem:5.1}
Assume (H4). Define $\amf_1 = \theta_{\Gamma}(\bmf_0)$ and $\bmf_1 =
\theta_{\gamma}(\amf_0)$. Then $\amf_0 \le \amf_1 \le \bmf_1 \le
\bmf_0$, $\amf_1 < \bmf_1$, and $\theta_{\beta}(x) \in
[\amf_1,\bmf_1]$ for all $x \in S_0$ and all $\beta \in \mathcal B$.
\end{lem}
\begin{proof}
Property (i) in (H4) implies that $\amf_0 \le \amf_1 \le \bmf_0$ and
$\amf_0 \le \bmf_1 \le \bmf_0$. Property (ii) implies that $\theta_{\Gamma}(\bmf_0)
= \amf_1 \le \theta_{\gamma}(\bmf_0)$ and Property (iii) implies that
$\theta_{\gamma}(\bmf_0) < \theta_{\gamma}(\amf_0) = \bmf_1$, so $\amf_1 < \bmf_1$.
For all $x \in [\amf_0,\bmf_0]$ and all $\beta \in \mathcal B$, $\theta_{\Gamma}(x)
\le \theta_{\beta}(x)$ (Property (ii)) and $\theta_{\Gamma}(\bmf_0)
\le \theta_{\Gamma}(x)$ (Property (iii)), so $\amf_1 \le \theta_{\beta}(x)$.
Similarly, $\theta_{\beta}(x) \le \theta_{\gamma}(x)$
and $\theta_{\gamma}(x) \le \theta_{\gamma}(\amf_0) = \bmf_1$,
so $\theta_{\beta}(x) \le \bmf_1$.
\end{proof}
\begin{lem}
\label{lem:5.2}
Assume (H4). Also assume that for $1 \le j \le k$, we have found an increasing
sequence of reals $\amf_0 \le \amf_1 \le \ldots \le \amf_k$ and a decreasing
sequence of reals $\bmf_0 \ge \bmf_1 \ge \ldots \ge \bmf_k$ such that $\bmf_j -\amf_j >0$ for
$1 \le j \le k$ and $\theta_{\beta}([\amf_j,\bmf_j]) \subset [\amf_{j+1},\bmf_{j+1}]$ for
$0 \le j \le k-1$ and all $\beta \in \mathcal B$. Define $\amf_{k+1} = \theta_{\Gamma}(\bmf_k)$
and $\bmf_{k+1} = \theta_{\gamma}(\amf_k)$. Then we have
$\amf_k \le \amf_{k+1}$, $\bmf_{k+1} \le \bmf_{k}$, $\bmf_{k+1} - \amf_{k+1} >0$ and
$\theta_{\beta}([\amf_k,\bmf_k]) \subset [\amf_{k+1},\bmf_{k+1}]$ for all $\beta \in \mathcal B$.
\end{lem}
\begin{proof}
Apply Lemma~\ref{lem:5.1} with $[\amf_k,\bmf_k]$ taking the place of $[\amf_0,\bmf_0]$ and
$\theta_{\Gamma}(\bmf_k) = \amf_{k+1}$ taking the place of $\amf_1$ and
$\theta_{\gamma}(\amf_k) = \bmf_{k+1}$ taking the place of $\bmf_1$.
\end{proof}
It follows from Lemma~\ref{lem:5.2} that if (H4) holds and if we
inductively define sequences $\amf_k$ and $\bmf_k$ by
$\amf_{k+1} = \theta_{\Gamma}(\bmf_k)$ and $\bmf_{k+1} = \theta_{\gamma}(\amf_k)$ for $k \ge 0$,
then for all integers $k \ge 0$, $\amf_k < \bmf_k$, $\amf_{k+1} \ge \amf_k$, $\bmf_{k+1} \le \bmf_k$,
and $\theta_{\beta}([\amf_k,\bmf_k]) \subset [\amf_{k+1},\bmf_{k+1}]$ for all $\beta \in \mathcal B$.
It follows that $\lim_{k \rightarrow \infty} \amf_k:= \amf_{\infty}$ and
$\lim_{k \rightarrow \infty} \bmf_k:= \bmf_{\infty}$ both exist.
\begin{lem}
\label{lem:5.3}
Assume (H4) and let notation be as above. Then
$\theta_{\beta}([\amf_{\infty},\bmf_{\infty}]) \subset
[\amf_{\infty},\bmf_{\infty}]$ for all $\beta \in \mathcal B$ and
$\theta_{\gamma}(\amf_{\infty}) = \bmf_{\infty}$ and
$\theta_{\Gamma}(\bmf_{\infty}) = \amf_{\infty}$, so $\theta_{\Gamma} \circ
\theta_{\gamma}(\amf_{\infty}) = \amf_{\infty}$ and $\theta_{\gamma} \circ
\theta_{\Gamma}(\bmf_{\infty}) = \bmf_{\infty}$. If $\beta_1, \beta_2,
\ldots, \beta_k$ are elements of $\mathcal B$ and $x \in [\amf_0,\bmf_0]$, then
$(\theta_{\beta_1} \circ \theta_{\beta_2} \circ \cdots \circ
\theta_{\beta_k})(x) \in [\amf_k, \bmf_k]$. If $(\theta_{\beta_1} \circ
\theta_{\beta_2} \circ \cdots \circ \theta_{\beta_k})(x) = x$ for some
$x \in [\amf_0,\bmf_0]$ and some elements $\beta_1, \beta_2, \ldots,
\beta_k$ of $\mathcal B$, then $x \in [\amf_{\infty}, \bmf_{\infty}]$.
\end{lem}
\begin{proof}
Because $\lim_{k \rightarrow \infty} \amf_k:= \amf_{\infty}$,
$\lim_{k \rightarrow \infty} \bmf_k:= \bmf_{\infty}$, $\theta_{\gamma}(\amf_k) = \bmf_{k+1}$, and
$\theta_{\Gamma}(\bmf_k) = \amf_{k+1}$, it follows from the continuity of
$\theta_{\gamma}$ and $\theta_{\Gamma}$ that $\theta_{\gamma}(\amf_{\infty}) = \bmf_{\infty}$
and $\theta_{\Gamma}(\bmf_{\infty}) = \amf_{\infty}$.
If $x \in [\amf_0,\bmf_0]$ and $\beta_1, \beta_2, \ldots,
\beta_k$ are elements of $\mathcal B$, repeated applications of Lemma~\ref{lem:5.1}
show that $\theta_{\beta_k}(x) \in [\amf_1,\bmf_1]$, $(\theta_{\beta_{k-1}} \circ
\theta_{\beta_k})(x) \in [\amf_2,\bmf_2]$, and generally that
$(\theta_{\beta_{1}} \circ \theta_{\beta_{2}} \circ \cdots \circ
\theta_{\beta_k})(x) \in [\amf_k,\bmf_k]$. If $x \in [\amf_0,\bmf_0]$ and
$\beta_1, \beta_2, \ldots, \beta_k$ are elements of $\mathcal B$ such that
$(\theta_{\beta_{1}} \circ \theta_{\beta_{2}} \circ \cdots \circ
\theta_{\beta_k})(x) = x$, it follows that $x \in [\amf_k,\bmf_k]$. Now the same
argument can be repeated to show that $x \in [\amf_{2k},\bmf_{2k}]$ and
generally that $x \in [\amf_{mk},\bmf_{mk}]$ for every positive integer $m$. Since
$\cap_{m \ge 1}[\amf_{mk},\bmf_{mk}] = [\amf_{\infty}, \bmf_{\infty}]$, we conclude that
$x \in [\amf_{\infty},\bmf_{\infty}]$.
\end{proof}
\begin{remark}
Under the hypotheses of Lemma~\ref{lem:5.3}, if
$(\theta_{\Gamma} \circ \theta_{\gamma})(x) = x$ or
$(\theta_{\gamma} \circ \theta_{\Gamma})(x) = x$ for some $x \in [\amf_0,\bmf_0]$, then
$\amf_{\infty} \le x \le \bmf_{\infty}$, so $\amf_{\infty}$ is the least fixed point
of $\theta_{\Gamma} \circ \theta_{\gamma}$ in $[\amf_0,\bmf_0]$ and
$\bmf_{\infty}$ is the greatest fixed point
of $\theta_{\gamma} \circ \theta_{\Gamma}$ in $[\amf_0,\bmf_0]$.
\end{remark}
Lemma~\ref{lem:5.3} provides a way of obtaining invariant intervals
$J$, such that $\theta_{\beta}(J) \subset J$ for all $\beta \in \mathcal B$.
However, it is frequently the case that we have more information than
given in (H4), and then one can give more flexible methods to find
invariant intervals. The following lemma, whose proof we omit,
describes a commonly occurring class of examples where such methods
are available.
\begin{lem}
\label{lem:7.3'}
Let hypotheses and notation be as in Lemma~\ref{lem:5.3}. Suppose also
that there exist intervals $J_1 =[x_1, \amf_{\infty}]$ and $J_2 =
[\bmf_{\infty},x_2]$
with $\amf_0 \le x_1 < \amf_{\infty}$ and $\bmf_{\infty} < x_2 < \bmf_0$,
such that
$\theta_{\gamma}(J_2) \subset [\amf_{\infty}, \bmf_{\infty}]$,
$\theta_{\Gamma}(J_1) \subset [\amf_{\infty}, \bmf_{\infty}]$,
$\lip(\theta_{\gamma}|_{J_1}) \le c_1$, $\lip(\theta_{\Gamma}|_{J_2}) \le
c_2$, and $c_1 c_2 <1$. If $\xi_1 \in J_1$ is chosen so that $\xi_2 =
\theta_{\gamma}(\xi_1) \in J_2$, then
$\theta_{\beta}([\xi_1,\xi_2]) \subset [\xi_1,\xi_2]$ for all $\beta \in \mathcal B$.
Similarly, if $\eta_2 \in J_2$ is chosen so that $\eta_1 =
\theta_{\Gamma}(\eta_2) \in J_1$, then
$\theta_{\beta}([\eta_1,\eta_2]) \subset [\eta_1,\eta_2]$ for all $\beta \in \mathcal B$.
\end{lem}
We shall use $\mathcal B$ as an index set, so the operator $L_s$ can be written
\begin{equation*}
(L_s f)(x) = \sum_{\beta \in \mathcal B} g_{\beta}(x)^s f(\theta_{\beta}(x)),
\end{equation*}
where $\theta_{\beta}(S_0) \subset S_0$ for all $\beta \in \mathcal B$ and $S_0 =
[\amf_0,\bmf_0]$. If the conditions of Theorem~\ref{thm:2.4} are satisfied,
$L_s$ has a strictly positive, $C^m$ eigenfunction. Assuming (H4) and the
hypotheses of Theorem~\ref{thm:2.4}, the observation in the first paragraph of
this section implies that to compute $\rmf(L_s)$, we can, in the notation of
Lemma~\ref{lem:5.3}, replace $[\amf_0,\bmf_0]$ by
$[\amf_{\infty},\bmf_{\infty}]$ or by $[\amf_k,\bmf_k]$ for any integer $k \ge
1$. In fact, we could use any interval $J \subset [\amf_0,\bmf_0]$ with
$\theta_{\beta}(J) \subset J$ for all $\beta \in \mathcal B$ (compare
Lemma~\ref{lem:7.3'}).
For the remainder of this section, we shall assume
(H5) $\mathcal B$ is a finite set of distinct real numbers and
$\gamma= \min \{\beta: \beta \in \mathcal B\} \ge 1$. For every $\beta \in \mathcal B$, we
define $\theta_{\beta}:[0,1/\gamma]:=[\amf_0,\bmf_0] \to [0,1/\gamma]$
by $\theta_{\beta}(x) = (x+ \beta)^{-1}$.
We shall write $\Gamma = \max \{\beta: \beta \in \mathcal B\}$ and $\gamma =
\min \{\beta: \beta \in \mathcal B\}$ and always assume that $\gamma <
\Gamma$. The reader can check that $\{\theta_{\beta}: \beta \in \mathcal B\}$
satisfies the conditions of (H4) with $\theta_{\gamma}(x) = (x +
\gamma)^{-1}$ and $\theta_{\Gamma}(x) = (x + \Gamma)^{-1}$. Using the
calculations in the following paragraph, the reader can check that the
conditions of Lemma~\ref{lem:7.3'} are also satisfied.
We assume that the sequences $\{\amf_k: k \ge 1\}$ and $\{\bmf_k: k \ge 1\}$
are defined as in Lemmas~\ref{lem:5.2} and \ref{lem:5.3}, with $\amf_0 =0$ and
$\bmf_0 =1/\gamma$, and $\amf_{\infty}$ and $\bmf_{\infty}$ defined as in
Lemma~\ref{lem:5.3}. Since $\amf_{\infty}$ is a fixed point of
$\theta_{\Gamma} \circ \theta_{\gamma}$ in $[0,1/\gamma]$ and $\bmf_{\infty}$ is a
fixed point of $\theta_{\gamma} \circ \theta_{\Gamma}$ in $[0,1/\gamma]$, one can
easily solve the equations
\begin{equation*}
x= (\theta_{\Gamma} \circ \theta_{\gamma})(x) = \frac{x + \gamma}{\Gamma x +
(1+ \Gamma \gamma)} \quad \text{and}
\quad x = (\theta_{\gamma} \circ \theta_{\Gamma})(x)
= \frac{x + \Gamma}{\gamma x + (1+ \Gamma \gamma)}
\end{equation*}
to obtain
\begin{equation}
\label{5.1}
\amf_{\infty} = - \frac{\gamma}{2} + \sqrt{(\gamma/2)^2 + (\gamma/\Gamma)}
\quad \text{and} \quad
\bmf_{\infty} = - \frac{\Gamma}{2} + \sqrt{(\Gamma/2)^2 + (\Gamma/\gamma)}.
\end{equation}
One can verify that $\bmf_{\infty} = (\Gamma/\gamma) \amf_{\infty}$, so
$0 < \amf_{\infty} < \bmf_{\infty} <1/\gamma$.
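For example, if $\mathcal B = \{1,2\}$, so that $\gamma =1$ and $\Gamma =2$,
then \eqref{5.1} gives
\begin{equation*}
\amf_{\infty} = -\tfrac{1}{2} + \sqrt{\tfrac{1}{4} + \tfrac{1}{2}}
= \frac{\sqrt{3}-1}{2} \approx 0.3660,
\qquad
\bmf_{\infty} = -1 + \sqrt{1 + 2} = \sqrt{3} - 1 \approx 0.7321,
\end{equation*}
and indeed $\bmf_{\infty} = 2\amf_{\infty} = (\Gamma/\gamma)\amf_{\infty}$.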
Since our index set is $\mathcal B$, we slightly abuse previous notation and, for
a positive integer $\nu$, we define the set of ordered $\nu$-tuples of
elements of $\mathcal B$ by
\begin{equation*}
\Omega_{\nu} = \{(\beta_1, \beta_2, \cdots, \beta_{\nu}): \beta_j \in \mathcal B
\text{ for } 1 \le j \le \nu\}.
\end{equation*}
For each $\omega = (\beta_1, \beta_2, \cdots, \beta_{\nu}) \in \Omega_{\nu}$,
we define $\theta_{\omega} = \theta_{\beta_1} \circ \theta_{\beta_2} \circ
\cdots \circ \theta_{\beta_{\nu}}$. Our first task is to estimate $c(\nu)$
(see \eqref{3.4}), which gives an upper bound for $\lip(\theta_{\omega})$,
$\omega \in \Omega_{\nu}$.
If $\omega = (\beta_1, \beta_2, \cdots, \beta_{\nu}) \in \Omega_{\nu}$ and
$\beta \in \mathcal B$, define a matrix
\begin{equation*}
M_{\beta} = \begin{pmatrix} 0 & 1 \\ 1 & \beta \end{pmatrix}.
\end{equation*}
It is proved in Section 6 of \cite{hdcomp1} that
\begin{equation*}
M = M_{\beta_1} M_{\beta_2} \cdots M_{\beta_{\nu}}
= \begin{pmatrix} A_{\nu-1} & A_{\nu} \\ B_{\nu-1} & B_{\nu} \end{pmatrix},
\end{equation*}
where $A_j$ and $B_j$ are defined inductively by $A_0 =0$, $A_1 =1$,
$B_0 =1 $, $B_1 = \beta_1$, and generally, for $1 \le j \le \nu$, by
\begin{equation}
\label{5.2}
A_{j+1} = A_{j-1} + \beta_{j+1} A_j, \qquad
B_{j+1} = B_{j-1} + \beta_{j+1} B_j.
\end{equation}
Note that $\det(M_{\beta}) = -1$ so $\det(M) = (-1)^{\nu}$. Standard results
for M\"obius transforms now imply that for $x \in [\amf_k,\bmf_k]$,
$0 \le k \le \infty$,
\begin{align}
\nonumber
(\theta_{\beta_1} \circ \theta_{\beta_2} \circ \cdots \circ
\theta_{\beta_{\nu}})(x) &= \frac{A_{\nu-1} x + A_{\nu}}{B_{\nu-1} x +
B_{\nu}},
\\
\label{thetabetaest2}
\frac{d}{dx}(\theta_{\beta_1} \circ \theta_{\beta_2} \circ \cdots \circ
\theta_{\beta_{\nu}})(x) &= \frac{(-1)^{\nu}}{(B_{\nu-1} x + B_{\nu})^2}.
\end{align}
If we define $\tilde B_0 = 1$, $\tilde B_1 = \gamma$, and
$\tilde B_{j+1} = \tilde B_{j-1} + \gamma \tilde B_j$ for $j \ge 1$, then
because $\gamma \le \beta$ for all $\beta \in \mathcal B$, it is straightforward
to prove that $\tilde B_j \le B_j$ for $0 \le j \le \nu$, where $B_j$ is
defined by \eqref{5.2}. It follows that for all $\omega \in \Omega_{\nu}$
and $x \in [\amf_k, \bmf_k]$,
\begin{equation*}
|\theta_{\omega}^{\prime}(x)| \le [\tilde B_{\nu-1} \amf_k + \tilde
B_{\nu}]^{-2},
\end{equation*}
which implies that, for $\theta_{\omega}:[\amf_k, \bmf_k] \to \mathbb{R}$,
\begin{equation}
\label{5.4}
\max \{\lip(\theta_{\omega}): \omega \in \Omega_{\nu}\}
= [\tilde B_{\nu-1} \amf_k + \tilde B_{\nu}]^{-2}:= c(\nu).
\end{equation}
It remains to give an exact formula for the right hand side of \eqref{5.4}.
The linear difference equation
$\tilde B_{j+1} = \tilde B_{j-1} + \gamma \tilde B_j$ has solutions of
the form $\lambda^j$ for $j \ge 0$, which leads to the formula
$\lambda^{j+1} = \lambda^{j-1} + \gamma \lambda^j$, or for $\lambda \neq 0$,
$\lambda^2 = 1 + \lambda \gamma$. Hence
\begin{equation}
\label{5.5}
\lambda = \lambda_{+} = \frac{\gamma}{2} + \frac{1}{2}\sqrt{\gamma^2 +4},
\qquad
\lambda = \lambda_{-} = \frac{\gamma}{2} - \frac{1}{2}\sqrt{\gamma^2 +4}.
\end{equation}
The general solution of the difference equation is then
\begin{equation}
\label{5.6}
c_1 \lambda_{+}^j + c_2 \lambda_{-}^j = \tilde B_j, \quad j \ge 0,
\end{equation}
where $c_1$ and $c_2$ must be chosen so that $\tilde B_0 =1$ and
$\tilde B_1 = \gamma$. A calculation gives
\begin{equation}
\label{5.7}
c_1 = \frac{\sqrt{\gamma^2 + 4} + \gamma}{2\sqrt{\gamma^2 + 4}},
\qquad
c_2 = \frac{\sqrt{\gamma^2 + 4} - \gamma}{2\sqrt{\gamma^2 + 4}}.
\end{equation}
Summarizing the above discussion, we obtain
\begin{lem}
\label{lem:5.4}
Assume (H4) and consider $\theta_{\beta}$, $\beta \in \mathcal B$, as a map of
$[\amf_k,\bmf_k]$ to itself, for $0 \le k \le \infty$, where $\amf_0 =0$,
$\bmf_0 =1/\gamma$, $\amf_k = \theta_{\Gamma}(\bmf_{k-1})$ and $\bmf_k =
\theta_{\gamma}(\amf_{k-1})$ for $k \ge 1$ and $\amf_{\infty}$ and
$\bmf_{\infty}$ are given by \eqref{5.1}. Then for $j \ge 1$, $\tilde B_j$ is
given by \eqref{5.6}, where $\lambda_{+}$ and $\lambda_{-}$ are given by
\eqref{5.5} and $c_1$ and $c_2$ by \eqref{5.7}.
\end{lem}
\begin{remark}
Because $\lambda_{+} >1$ and
$-1 < \lambda_{-} = - 1/\lambda_{+} <0$ for all $\gamma >0$,
$c_1 \lambda_{+}^j$ is the dominant term in \eqref{5.6} as $j$ increases; and
one can check that $|c_2 \lambda_{-}^j| < 1/2$ for all $j \ge 0$. Of course,
for moderate values of $j$, one can easily compute $\tilde B_j$ from its
recurrence formula. It is clear that the constant $c(\nu)$ in
\eqref{5.4} is minimized by working on the interval $[\amf_{\infty},
\bmf_{\infty}]$.
\end{remark}
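As a sanity check (ours), the next sketch compares the recurrence for $\tilde B_j$ with the closed form \eqref{5.6}--\eqref{5.7} and evaluates $c(\nu)$ from \eqref{5.4} on $[\amf_{\infty}, \bmf_{\infty}]$ for the sample data $\gamma = 1$, $\Gamma = 7$.
\begin{verbatim}
import math

gamma, Gamma = 1.0, 7.0
a_inf = -gamma / 2 + math.sqrt((gamma / 2) ** 2 + gamma / Gamma)

# recurrence: B~_0 = 1, B~_1 = gamma, B~_{j+1} = B~_{j-1} + gamma B~_j
B = [1.0, gamma]
for j in range(1, 12):
    B.append(B[j - 1] + gamma * B[j])

# closed form (5.5)-(5.7)
root = math.sqrt(gamma ** 2 + 4)
lam_p, lam_m = (gamma + root) / 2, (gamma - root) / 2
c1, c2 = (root + gamma) / (2 * root), (root - gamma) / (2 * root)
for j, Bj in enumerate(B):
    assert abs(c1 * lam_p ** j + c2 * lam_m ** j - Bj) < 1e-9

# Lipschitz bound (5.4) on [a_inf, b_inf] for nu = 6
nu = 6
c_nu = (B[nu - 1] * a_inf + B[nu]) ** (-2)
print(c_nu)   # roughly 0.0051 for gamma = 1, Gamma = 7
\end{verbatim}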
Assuming (H5), we now define for $s >0$, $L_s:C([0,1]) \to C([0,1])$ by
\begin{equation*}
(L_sf)(x) = \sum_{\beta \in \mathcal B} |\theta_{\beta}^{\prime}(x)|^s
f(\theta_{\beta}(x)):=
\sum_{\beta \in \mathcal B} g_{\beta}(x)^s f(\theta_{\beta}(x)).
\end{equation*}
It is well-known that for $\nu$ a positive integer,
\begin{equation*}
(L_s^{\nu} f)(x) = \sum_{\omega \in \Omega_{\nu}} |\theta_{\omega}^{\prime}(x)|^s
f(\theta_{\omega}(x)):=
\sum_{\omega \in \Omega_{\nu}} g_{\omega}(x)^s f(\theta_{\omega}(x)).
\end{equation*}
Because $\theta_{\omega}([\amf_k,\bmf_k]) \subset [\amf_k,\bmf_k]$ for $0 \le k \le \infty$
and $\omega \in \Omega_{\nu}$, we can also consider $L_s^{\nu}$ as a map
of $C([\amf_k,\bmf_k]) \mapsto C([\amf_k,\bmf_k])$ and as noted earlier, this does
not change the spectral radius of $L_s^{\nu}$. Thus, we shall consider
$L_s^{\nu}$ as a map from $C([\amf_k,\bmf_k])$ into itself, with optimal results
obtained by taking $k = \infty$.
We need to find a constant $M_0(\nu)$ (compare \eqref{3.4}) such that
for all $\omega \in \Omega_{\nu}$, $g_{\omega}(x): =
|\theta_{\omega}^{\prime}(x)| \in K(M_0(\nu); [\amf_k,\bmf_k])$. In
this case, this is equivalent to proving that for all $\omega \in
\Omega_{\nu}$, $x \mapsto \ln (|\theta_{\omega}^{\prime}(x)|)$ is a
Lipschitz map on $[\amf_k,\bmf_k]$ with Lipschitz constant $\le
M_0(\nu)$. If $\omega = (\beta_1, \beta_2, \ldots, \beta_{\nu})$, we
have by \eqref{thetabetaest2}, that
\begin{equation*}
|\theta_{\omega}^{\prime}(x)| = \frac{1}{(B_{\nu-1} x + B_{\nu})^2},
\end{equation*}
so
\begin{equation*}
\ln(|\theta_{\omega}^{\prime}(x)|) = -2 \ln(B_{\nu-1} x + B_{\nu}).
\end{equation*}
Thus it suffices to choose $M_0(\nu)$ so that for all $x \in [\amf_k,\bmf_k]$ and
all $\omega \in \Omega_{\nu}$,
\begin{multline*}
\Big|\frac{d}{dx}\ln(|\theta_{\omega}^{\prime}(x)|) \Big|
= 2 \frac{B_{\nu-1}}{B_{\nu -1} x + B_{\nu}}
\\
= 2 \Big[\frac{1}{x + (B_{\nu}/B_{\nu-1})}\Big]
\le 2 \Big[\frac{1}{\amf_k + (B_{\nu}/B_{\nu-1})}\Big] \le M_0(\nu).
\end{multline*}
If we define $x_j = B_{j-1}/B_j$ for $1 \le j < \nu$, then since $B_{j+1} =
B_{j-1} + \beta_{j+1} B_j$ for $1 \le j \le \nu$, we get $B_{j+1}/B_j =
B_{j-1}/B_j + \beta_{j+1}$ or $x_{j+1} = 1/(x_j + \beta_{j+1}) =
\theta_{\beta_{j+1}}(x_j)$ for $1 \le j \le \nu$. Since $x_1 = 1/\beta_1 \in
[\amf_1,\bmf_1] = [1/(\Gamma +1/\gamma), 1/\gamma]$, it follows from
Lemma~\ref{lem:5.2} that $x_{j+1} \in [\amf_{j+1}, \bmf_{j+1}]$ for $1 \le j <
\nu$, so $1/x_{j+1} = B_{j+1}/B_j \in [\bmf_{j+1}^{-1}, \amf_{j+1}^{-1}]$ and
$B_{\nu}/B_{\nu-1} \ge \bmf_{\nu}^{-1}$. It follows that for $\omega \in
\Omega_{\nu}$ and $x \in [\amf_k,\bmf_k]$, we can take
\begin{equation}
\label{M0nuchoice}
M_0(\nu) = 2/(\amf_k + \bmf_{\nu}^{-1}).
\end{equation}
By adding the exponent $s$, one easily derives that for
all $\omega \in \Omega_{\nu}$, $\nu \ge 1$, and $0 \le k \le \infty$,
\begin{equation}
\label{5.10a}
g_{\omega}(\cdot)^s = |\theta_{\omega}^{\prime}(\cdot)|^s \in K(2s/(\amf_k +
\bmf_{\nu}^{-1}); [\amf_k,\bmf_k]).
\end{equation}
Note that we could replace $\bmf_{\nu}^{-1}$ by $\amf_{\nu-1} + \gamma$.
We summarize the above discussion in the following lemma.
\begin{lem}
\label{lem:5.5}
Assume (H5) and let $\amf_k$ and $\bmf_k$, $0 \le k \le \infty$, be as described
in Lemma~\ref{lem:5.4}. If $\omega \in \Omega_{\nu}$, $\nu \ge 1$, the map
$x \mapsto \ln(|\theta_{\omega}^{\prime}(x)|^s)$, $x \in [\amf_k,\bmf_k]$ is
Lipschitz with Lipschitz constant $\le 2s/(\amf_k + \bmf_{\nu}^{-1})$,
so \eqref{5.10a} is satisfied.
\end{lem}
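A small Python sketch (ours) of the constant in \eqref{M0nuchoice}: it builds the nested endpoints $\amf_k$, $\bmf_k$, approximates $\amf_{\infty}$, $\bmf_{\infty}$ by iterating, and evaluates $M_0(\nu) = 2/(\amf_{\infty} + \bmf_{\nu}^{-1})$ together with the Lipschitz constant $2s/(\amf_{\infty} + \bmf_{\nu}^{-1})$ of Lemma~\ref{lem:5.5}; the sample values of $\gamma$, $\Gamma$, $s$ are ours.
\begin{verbatim}
import math

gamma, Gamma, s = 1.0, 7.0, 0.518

def theta(beta, x):
    return 1.0 / (x + beta)

# a_0 = 0, b_0 = 1/gamma, a_{k+1} = theta_Gamma(b_k), b_{k+1} = theta_gamma(a_k)
a, b = [0.0], [1.0 / gamma]
for k in range(40):                    # 40 steps is plenty to approximate k = infinity
    a.append(theta(Gamma, b[k]))
    b.append(theta(gamma, a[k]))
a_inf, b_inf = a[-1], b[-1]

nu = 6
M0 = 2.0 / (a_inf + 1.0 / b[nu])       # (M0nuchoice) with a_k = a_inf
assert abs(1.0 / b[nu] - (a[nu - 1] + gamma)) < 1e-12   # 1/b_nu = a_{nu-1} + gamma
lip_log = s * M0                       # Lipschitz constant 2s/(a_inf + 1/b_nu) from (5.10a)
print(M0, lip_log)                     # M0 is about 1.595 for these sample values
\end{verbatim}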
It remains to estimate, for $0 \le k \le \infty$,
$\max_{x \in [\amf_k,\bmf_k]} |(d^j v_s/dx^j)(x)|/v_s(x)$, where $v_s$ denotes the unique
(to within normalization) strictly positive eigenfunction of $L_s$. The basic
idea is to exploit \eqref{2.15}, as was done in Section 6 of \cite{hdcomp1},
but our results will refine those in \cite{hdcomp1}.
Our previous calculations show that for $x \in [\amf_k,\bmf_k]$, $0 \le k \le
\infty$, we have
\begin{equation*}
g_{\omega}(x)^s = |\theta_{\omega}^{\prime}(x)|^s
= \frac{1}{B_{\nu -1}^{2s}( x + B_{\nu}/B_{\nu-1})^{2s}}.
\end{equation*}
It follows that for $j \ge 1$, and letting $D$ denote $d/dx$, we have
\begin{equation}
\label{5.10}
\frac{(-1)^j (D^j[(g_{\omega})^s])(x)}{g_{\omega}(x)^s}
= \frac{(2s)(2s+1) \cdots (2s+j-1)}{( x + B_{\nu}/B_{\nu-1})^j}.
\end{equation}
The same argument used in Lemma~\ref{lem:5.5} shows that
\begin{equation}
\label{5.11}
\bmf_{\nu}^{-1} \le B_{\nu}/B_{\nu-1} \le \amf_{\nu}^{-1},
\end{equation}
so if $x \in [\amf_k,\bmf_k]$, we derive from \eqref{5.10} and \eqref{5.11} that
\begin{multline}
\label{5.12}
\frac{(2s)(2s+1) \cdots (2s+j-1)}{[\bmf_k + \amf_{\nu}^{-1}]^j}
\le \frac{(-1)^j (D^j[(g_{\omega})^s])(x)}{g_{\omega}(x)^s}
\\
\le \frac{(2s)(2s+1) \cdots (2s+j-1)}{(\amf_k + \bmf_{\nu}^{-1})^j}.
\end{multline}
It follows from \eqref{5.12} that for $x \in [\amf_k,\bmf_k]$,
\begin{multline}
\label{5.13}
\frac{(2s)(2s+1) \cdots (2s+j-1)}{[\bmf_k + \amf_{\nu}^{-1}]^j}
\le \frac{(-1)^j \sum_{\omega \in \Omega_{\nu}} (D^j[(g_{\omega})^s])(x)}
{\sum_{\omega \in \Omega_{\nu}} g_{\omega}(x)^s}
\\
\le \frac{(2s)(2s+1) \cdots (2s+j-1)}{(\amf_k + \bmf_{\nu}^{-1})^j}.
\end{multline}
Taking limits as $\nu \rightarrow \infty$ in \eqref{5.13} and using
\eqref{2.15}, we find that for $x \in [\amf_k,\bmf_k]$,
\begin{multline}
\label{5.14}
\frac{(2s)(2s+1) \cdots (2s+j-1)}{[\bmf_k + \amf_{\infty}^{-1}]^j}
\le \frac{(-1)^j (d^j v_s/dx^j)(x)}{v_s(x)}
\\
\le \frac{(2s)(2s+1) \cdots (2s+j-1)}{(\amf_k + \bmf_{\infty}^{-1})^j}.
\end{multline}
Notice that we can replace $\amf_{\infty}^{-1}$ by $\bmf_{\infty} + \Gamma$ and
$\bmf_{\infty}^{-1}$ by $\amf_{\infty} + \gamma$ in \eqref{5.14}.
As one can easily see, the lower bound in \eqref{5.14} increases as $k$
increases and the upper bound decreases as $k$ increases, so the optimal
bounds are obtained when $k = \infty$ and apply to the interval
$[\amf_{\infty}, \bmf_{\infty}]$.
We summarize the above results in the following
lemma.
\begin{lem}
\label{lem:5.6}
Let $v_s$ denote the unique
(to within normalization) strictly positive eigenfunction of $L_s$. Assume (H4)
and let $\amf_k$ and $\bmf_k$, $k \ge 0$, be as in Lemma~\ref{lem:5.4}. Then $v_s$
satisfies \eqref{5.14}.
\end{lem}
\begin{remark}
\label{rem:7.3}
Since, in Lemma~\ref{lem:5.6}, we have specified the coefficient $g_{\beta}$
and the maps $\theta_{\beta}$ for $\beta \in \mathcal B$, Lemma~\ref{lem:5.6} gives
us a simple formula for the constant $E(s,p) = E$ in \eqref{6.1}:
\begin{equation*}
\max_{x \in [\amf_k,\bmf_k]} \frac{|(d^p v_s/dx^p)(x)|}{v_s(x)}
\le \frac{(2s)(2s+1) \cdots (2s+p-1)}{(\amf_k + \bmf_{\infty}^{-1})^p} := E,
\end{equation*}
where $p$ is a positive integer and $1 \le k \le \infty$. Here
we have allowed the interval to vary with $k$, but we may eventually
restrict to $k = \infty$.
\end{remark}
It remains to find a constant $\chi$ (compare \eqref{6.2}) such that
for all $x_1, x_2 \in [\amf_k,\bmf_k]$,
\begin{equation*}
v_s(x_1) \le \exp(2 s|x_1 -x_2|/\chi) v_s(x_2).
\end{equation*}
It follows from \eqref{5.14} that if $\amf_k \le x_1 \le x_2 \le \bmf_k$, then
\begin{equation*}
- \int_{x_1}^{x_2} \frac{v_s^{\prime}(x)}{v_s(x)} \, dx
= \ln(v_s(x_1)) - \ln(v_s(x_2)) \le \frac{2s}{\amf_k + \bmf_{\infty}^{-1}}
|x_2 - x_1|,
\end{equation*}
which implies that
\begin{equation}
\label{5.17}
v_s(x_1) \le \exp(2 s|x_1 -x_2|/[\amf_k + \bmf_{\infty}^{-1}]) v_s(x_2).
\end{equation}
If $x_2 \le x_1$, we know that $v_s(x_2) \ge v_s(x_1)$, so \eqref{5.17}
is satisfied for all $x_1,x_2 \in [\amf_k,\bmf_k]$. In particular, \eqref{5.17}
is satisfied if the roles of $x_1$ and $x_2$ are reversed, which implies
that $x \mapsto \ln(v_s(x))$ is Lipschitz on $[\amf_k,\bmf_k]$ with
Lipschitz constant $2s/(\amf_k + \bmf_{\infty}^{-1})$. Summarizing, we have
\begin{lem}
\label{lem:5.7}
If $\chi = \amf_k + \bmf_{\infty}^{-1}$, \eqref{6.2} is satisfied on $[\amf_k,\bmf_k]$.
\end{lem}
\section{Computation of $\rmf(L_s)$}
\label{sec:computation}
In this section we shall describe how to use the results of
Sections~\ref{sec:theory-d}-\ref{sec:est-constants} to obtain rigorous, high
order estimates for $\rmf(L_s) = \lambda_s$. As a subcase, we shall obtain
rigorous estimates for the Hausdorff dimension of certain fractal objects
described by iterated function systems.
For simplicity, we shall restrict attention to the class of maps
$\theta_{\beta}: [0,1] \to [0,1]$, where $\theta_{\beta}(x) = 1/(x + \beta)$
for $\beta \in \mathcal B$ and $\mathcal B$ as in (H5) of Section~\ref{sec:est-constants}.
For $s \ge 0$, we define $L_s: C[0,1] \to C[0,1]$ by
\begin{equation*}
(L_s f)(x) = \sum_{\beta \in \mathcal B} |\theta_{\beta}^{\prime}(x)|^s
f(\theta_{\beta}(x)).
\end{equation*}
Recall (see Lemmas~\ref{lem:5.2} and \ref{lem:5.3}) that we define $\amf_0
=0$, $\bmf_0 =1/\gamma$, $\amf_{k+1} = \theta_{\Gamma}(\bmf_k)$, $\bmf_{k+1} =
\theta_{\gamma}(\amf_k)$, $\amf_{\infty} = \lim_{k \rightarrow \infty}
\amf_k$, and $\bmf_{\infty} = \lim_{k \rightarrow \infty} \bmf_k$. Since
$\theta_{\beta}([\amf_k,\bmf_k]) \subset [\amf_k,\bmf_k]$ for $0 \le k \le
\infty$ and for all $\beta \in \mathcal B$, and since $L_s$ has a strictly positive
eigenvector on $[0,1/\gamma]$, $L_s$ induces a bounded linear operator
$L_{s,[\amf_k,\bmf_k]} : C([\amf_k,\bmf_k]) \to C([\amf_k,\bmf_k])$ and
$\rmf(L_s) = \rmf(L_{s,[\amf_k,\bmf_k]})$. Various constants are optimized by
working on $[\amf_{\infty}, \bmf_{\infty}]$, so we shall abuse notation and
also use $L_s$ to denote $L_s$ as an operator on $C([\amf_{\infty},
\bmf_{\infty}])$ (or, sometimes, $C([\amf_{k}, \bmf_{k}])$). For a given
positive integer $r$, we assume that (H3) is satisfied, but with $S:=
[\amf_{\infty}, \bmf_{\infty}]$.
Thus $[a_i, b_i] \subset S$, $1 \le i
\le I$, denote pairwise disjoint intervals that satisfy the conditions of
(H3). Given positive integers $N_i$, $1 \le i \le I$, we write $h_i = (b_i
- a_i)/N_i$. As in Section~\ref{sec:approx} (see \eqref{2.33}), we define
mesh points $c_{j,k}^i \in [a_i, b_i]$, $1 \le i \le I$, $1 \le j \le
N_i$, $0 \le k \le r$ and $T:=\{c_{j,k}^i\}$ for $i,j,k$ in the ranges given
above. As in Section~\ref{sec:approx}, if $v_s$ is the positive eigenvector of
$L_s$ on $S$, $\mathcal V_s: \hat S:= \cup_{i=1}^I [a_i, b_i] \to \mathbb{R}$ is the
polynomial interpolant of $v_s$ of degree $\le r$ on $[t_{j-1}^i, t_j^i]$ for
$1 \le i \le I$, $1 \le j \le N_i$, so $\mathcal V_s(x) = v_s(x)$ for all $x \in T$.
Our general approach will be as follows: Given $s >0$, we must find
$r, \nu, M$ and $h$ such that the conditions of Theorem~\ref{thm:5.6n} are
satisfied. First, we choose a positive integer $r \ge 2$, where $r$ is the
piecewise polynomial degree in \eqref{2.35}. Once $r$ has been chosen, we
select a positive integer $\nu$ such that (compare Remark~\ref{rem:3.2})
\begin{equation}
\label{8.2}
c(\nu)[2 \eta(r) r^2 \psi(r)]:= \kappa_1 <1.
\end{equation}
Here $c(\nu)$ is as in \eqref{3.4}; and for our case an exact formula
for $c(\nu)$ is provided by \eqref{5.4}; where we shall take $\amf_k =
\amf_{\infty}$ in \eqref{5.4}. Also, $\psi(r)$ is as in
Lemma~\ref{lem:3.2} and $\eta(r)$ as in \eqref{3.1}. As a practical
matter, we demand that $\kappa_1$ not be too close to 1, say $\kappa_1
\le 4/5$. Note that for fixed $r$, this means that $\nu$
must be sufficiently large and hence $c(\nu)$ sufficiently small, so
that \eqref{8.2} is satisfied. We next choose $\kappa_2$ with
$\kappa_1 < \kappa_2 <1$ and $\kappa_2$ not too close to 1. A
simple choice is $\kappa_2 = (1 + \kappa_1)/2$. We define (see
Theorem~\ref{thm:5.6n}), $M^{\prime} = \kappa_2 M$. If we write
$u:= M \eta(r) h$, the conditions of Theorem~\ref{thm:5.6n} take the
form
\begin{align}
\label{8.3}
\psi(r) u \exp(u) &< 1
\\
\label{8.4}
\frac{\kappa_1 \exp(u)}{1- \psi(r) u \exp(u)} &< \kappa_2 - \frac{s
M_0(\nu)}{M}.
\end{align}
Here $M_0(\nu)$ is as in \eqref{3.4}; and in our case Lemma~\ref{lem:5.5}
insures that $M_0(\nu) \le 2/(\amf_{\infty} + \amf_{\nu-1} + \gamma)$.
Since $\exp(u)/(1- \psi(r) u \exp(u)) > 1$, \eqref{8.4} implies that
\begin{equation*}
\kappa_1 < \kappa_2 - sM_0(\nu)/M.
\end{equation*}
We choose $M >0$ such that
\begin{equation}
\label{8.6}
M = \frac{4s}{\amf_{\infty} + \amf_{\nu-1} + \gamma}\frac{1}{\kappa_2- \kappa_1}
\ge 2 sM_0(\nu)/(\kappa_2- \kappa_1),
\end{equation}
which implies that
\begin{equation*}
\kappa_2 - sM_0(\nu)/M \ge \kappa_2 - (\kappa_2- \kappa_1)/2 > \kappa_1.
\end{equation*}
Also note that since $\amf_{\infty} + \amf_{\nu-1} + \gamma < \chi$, we have that
\begin{equation}
\label{8.6a}
M \ge \frac{4s}{\chi}\frac{1}{\kappa_2- \kappa_1}.
\end{equation}
Given an $M$ that satisfies \eqref{8.6}, we can choose $h = \max_i h_i$
sufficiently small, say $h \le \amh_0$, so that \eqref{8.3} and \eqref{8.4} are
satisfied. Recall, however, that we also have to insure that the constant $M$,
defined by \eqref{8.6}, also satisfies $M \ge M_2$, where $M_2$ is as in
Lemma~\ref{lem:6.4} and $M_1$ is given by \eqref{6.12}. As we shall see, this
may require a further restriction on the size of $h$.
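The parameter selection just described can be sketched in a few lines of Python (ours). The helper formulas $\eta(r) = 1/2$ and $\psi(r) = (2/\pi)\ln(r+1) + 3/4$ are the ones quoted in the worked example of Section~\ref{sec:num-comp}; all numerical values are sample data, not prescriptions.
\begin{verbatim}
import math

gamma, Gamma, s = 1.0, 7.0, 0.518      # sample data, as in the E[1,4,7] example
a_inf = -gamma / 2 + math.sqrt((gamma / 2) ** 2 + gamma / Gamma)

def c_of_nu(nu):
    # c(nu) = (B~_{nu-1} a_inf + B~_nu)^(-2), see (5.4)
    B = [1.0, gamma]
    for j in range(1, nu):
        B.append(B[j - 1] + gamma * B[j])
    return (B[nu - 1] * a_inf + B[nu]) ** (-2)

r = 6
eta = 0.5
psi = (2 / math.pi) * math.log(r + 1) + 0.75

# choose the smallest nu with kappa_1 <= 4/5, per (8.2)
nu = 1
while c_of_nu(nu) * 2 * eta * r ** 2 * psi > 0.8:
    nu += 1
kappa1 = c_of_nu(nu) * 2 * eta * r ** 2 * psi
kappa2 = (1 + kappa1) / 2

# a_{nu-1} from the interval recursion, then M_0(nu) and M as in (8.6)
a, b = [0.0], [1.0 / gamma]
for k in range(nu):
    a.append(1.0 / (b[k] + Gamma))
    b.append(1.0 / (a[k] + gamma))
M0 = 2.0 / (a_inf + a[nu - 1] + gamma)
M = (4 * s / (a_inf + a[nu - 1] + gamma)) / (kappa2 - kappa1)

# check (8.3) and (8.4) for a trial mesh size h
h = 0.001
u = M * eta * h
ok_83 = psi * u * math.exp(u) < 1
ok_84 = kappa1 * math.exp(u) / (1 - psi * u * math.exp(u)) < kappa2 - s * M0 / M
print(nu, kappa1, kappa2, M, ok_83, ok_84)
\end{verbatim}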
The constants $M_1$ and $M_2$ are defined in terms of $G_{r+1}:=G$ (compare
\eqref{rn:6.8}), $\chi:= 2 \amf_{\infty} + \gamma$, $s$, and $H$ (compare
Lemma~\ref{lem:6.4}), and it is desirable to choose $H$ to be small. The
constant $E$ in $G_{r+1}$ is given by Remark~\ref{rem:7.3} with $p =r+1$.
By using \eqref{6.9} in Section~\ref{sec:est-constants} and the estimates for
$E$ and $\chi$ in Lemmas~\ref{lem:5.6} and \ref{lem:5.7}, we find that we can
write
\begin{equation*}
G = E \exp\Big(\frac{2sh}{\amf_{\infty} + \bmf_{\infty}^{-1}}\Big)
\frac{1}{(r+1)!} \Big[\frac{1}{2 \cos(\pi/(2r+2))}\Big]^{r+1} \frac{1}{2^r},
\end{equation*}
where
\begin{equation}
\label{Erp1}
E:= \frac{(2s)(2s+1) \cdots (2s+r)}{(\amf_{\infty} + \bmf_{\infty}^{-1})^{r+1}}
= \frac{(2s)(2s+1) \cdots (2s+r)}{(2 \amf_{\infty} + \gamma)^{r+1}},
\end{equation}
and we have used the fact that $\bmf_{\infty} = 1/(\amf_{\infty} + \gamma)$.
Note that in the application of \eqref{5.14} for $E$, one must take $j = r+1$.
A calculation gives
\begin{multline}
\label{5.19}
G:= G_{r+1} = 2 \exp\Big(\frac{2sh}{2 \amf_{\infty} + \gamma}\Big)
\Big[\frac{(2s)(2s+1) \cdots (2s+r)}{(2)(4)(6) \cdots (2r+2)}\Big]
\\
\cdot
\Big[\frac{1}{2 \amf_{\infty} + \gamma}\Big]^{r+1} \Big[\frac{1}{2 \cos(\pi/(2r+2))}
\Big]^{r+1}.
\end{multline}
Finally, we define
\begin{equation}
\label{8.9}
D:= D_{r+1} = \Big[\frac{1}{\sin(\pi/(2r+2))}\Big]^2 G_{r+1}
\end{equation}
and
\begin{equation}
\label{Hr1def}
H_{r+1} = \mu D_{r+1} \frac{\chi}{2s} =
\mu \Big[\frac{1}{\sin (\pi/(2r+2))}\Big]^2 G_{r+1}
\frac{(2 \amf_{\infty} + \gamma)}{2s}.
\end{equation}
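For later reference, here is a direct Python transcription (ours) of \eqref{5.19}, \eqref{8.9}, and \eqref{Hr1def}; the sample values of $s$, $\chi$, $h$, $\mu$ are those of the E[1,4,7] example in Section~\ref{sec:num-comp}.
\begin{verbatim}
import math

def G(rp1, s, chi, h):
    # G_{r+1} from (5.19); here rp1 = r + 1
    r = rp1 - 1
    prod = 1.0
    for j in range(r + 1):
        prod *= (2 * s + j) / (2 * j + 2)
    return (2 * math.exp(2 * s * h / chi) * prod
            * (1.0 / chi) ** (r + 1)
            * (1.0 / (2 * math.cos(math.pi / (2 * r + 2)))) ** (r + 1))

def D(rp1, s, chi, h):
    # D_{r+1} from (8.9)
    r = rp1 - 1
    return (1.0 / math.sin(math.pi / (2 * r + 2))) ** 2 * G(rp1, s, chi, h)

def H(rp1, s, chi, h, mu):
    # H_{r+1} from (Hr1def)
    return mu * D(rp1, s, chi, h) * chi / (2 * s)

s, chi, h, mu = 0.518, 1.2536, 0.001, 3.76
print(G(3, s, chi, h), D(3, s, chi, h), H(7, s, chi, h, mu))
\end{verbatim}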
To estimate $M_1$ and $M_2$, we shall need estimates on $G_{r+1}$, $D_{r+1}$, and $H_{r+1}$.
\begin{lem}
\label{lem:G-decrease}
Assume that $r \ge 2$, $0 < s <2$, and $2 \amf_{\infty} + \gamma \ge 1$.
Then $G_r$ is a decreasing function of $r$.
\end{lem}
\begin{proof}
For $r \ge 2$ and $0 <s \le 2$, since $2 j+2 \ge 2s+j$ for $j \ge 2$,
it follows that
\begin{equation*}
\prod_{j=0}^r \frac{2s+j}{2j+2} \le \Big(\frac{2s}{2}\Big) \Big(\frac{2s+1}{4}
\Big) = s\frac{2s+1}{4}.
\end{equation*}
By using the Taylor series for $\cos(\theta)$, we see that
\begin{equation*}
\frac{1}{2 \cos(\pi/(2r+2))} \le \frac{1}{2 - [\pi/(2r+2)]^2}.
\end{equation*}
Using these estimates in \eqref{5.19} gives
\begin{equation*}
G_{r+1} \le 2 \exp\Big(\frac{2sh}{2 \amf_{\infty} + \gamma}\Big)
\Big[s\frac{2s+1}{4}\Big] \Big[\frac{1}{2 \amf_{\infty} + \gamma}\Big]^{r+1}
\Big[\frac{1}{2 - [\pi/(2r+2)]^2}\Big]^{r+1},
\end{equation*}
which implies that $\lim_{r \rightarrow \infty} G_{r+1} =0$.
Furthermore, for $r \ge 2$ and $2 \amf_{\infty} + \gamma \ge 1$,
another calculation shows that
\begin{equation*}
\frac{G_{r+1}}{G_r} = \Big[\frac{2s+r}{2r+2}\Big]
\Big[\frac{1}{2 \amf_{\infty} + \gamma}\Big] \Big[\frac{1}{2 \cos(\pi/[2r+2])}\Big]
\Big[\frac{\cos(\pi/(2r))}{\cos(\pi/(2r+2))}\Big]^r< 1,
\end{equation*}
so $G_r$ is a decreasing function for integer $r \ge 2$.
\end{proof}
If we set
\begin{equation*}
u_{r+1} = \frac{1}{\chi} \Big[\frac{1}{2 \cos(\pi/(2r+2))}\Big],
\end{equation*}
a calculation gives
\begin{equation*}
u_3 = \frac{1}{\chi \sqrt{3}}, \qquad
u_4 = \frac{1}{\chi \sqrt{2 + \sqrt{2}}},
\end{equation*}
which gives
\begin{align}
\label{8.10}
G_3 &= 2s \Big[\frac{1}{\chi\sqrt{3}}\Big]^3\Big[\frac{2s+1}{4} \Big]
\Big[\frac{s+1}{3}\Big]\exp(2sh/\chi),
\\
\nonumber
G_4 &= 2s \Big[\frac{1}{2 + \sqrt{2}}\Big]^2 \Big[\frac{1}{\chi}\Big]^4
\Big[\frac{2s+1}{4}\Big] \Big[\frac{s+1}{3}\Big]
\Big[\frac{2s+3}{8}\Big]\exp(2sh/\chi),
\end{align}
$D_3 = 4G_3$, and $D_4 = [4/(2 - \sqrt{2})] G_4$.
Then
\begin{lem}
\label{lem:8.1}
If $0 \le s \le 3/2$, $r \ge 2$, and $\chi:= 2 \amf_{\infty} + \gamma \ge 1$,
\begin{equation*}
\frac{G_{r+2}}{G_{r+1}} \le \Big[\frac{3}{4 \sqrt{2 + \sqrt{2}}}\Big]
\frac{1}{\chi},
\qquad \text{so} \qquad
G_{r+2} \le \Big[\frac{3}{4 \sqrt{2 + \sqrt{2}}}\Big(\frac{1}{\chi}\Big)
\Big]^{r-1} G_3.
\end{equation*}
\end{lem}
\begin{proof}
A calculation gives
\begin{equation*}
\frac{G_{r+2}}{G_{r+1}} = \Big[\frac{2s+r+1}{2r+4}\Big] u_{r+2}
\Big[\frac{u_{r+2}}{u_{r+1}}\Big]^{r+1}.
\end{equation*}
Since $0 < u_{r+2} < u_{r+1}$, it follows that
\begin{equation*}
\frac{G_{r+2}}{G_{r+1}} \le \Big[\frac{2s+r+1}{2r+4}\Big]
\frac{1}{\chi}\Big[\frac{1}{2\cos(\pi/(2r+4))}\Big],
\end{equation*}
and if $r \ge 2$ and $s \le 3/2$,
\begin{equation*}
\frac{G_{r+2}}{G_{r+1}} \le \Big[\frac{3}{4\chi}\Big]
\Big[\frac{1}{2\cos(\pi/8)}\Big]
\le \Big[\frac{3}{4[\sqrt{2 + \sqrt{2}}]}\Big] \frac{1}{\chi}
\end{equation*}
and so
\begin{equation*}
G_{r+2} \le \Big[\frac{3}{4[\sqrt{2 + \sqrt{2}}]}
\Big(\frac{1}{\chi}\Big)\Big]^{r-1}G_3.
\end{equation*}
Furthermore, since we assume that $\chi \ge 1$,
\begin{multline}
\label{G3bound}
G_3 = \Big[\frac{2s}{\chi}\Big] \Big[\frac{1}{\chi^2 3 \sqrt{3}}\Big]
\Big[\frac{2s+1}{4} \Big]
\Big[\frac{s+1}{3}\Big]\exp(2sh/\chi)
\\
\le \Big[\frac{2s}{\chi^3}\Big] \Big[\frac{5}{18 \sqrt{3}}\Big]\exp(3h)
\le \sqrt{3}/2,
\end{multline}
for $h \le \ln(9/5)/3 \le 0.2$.
\end{proof}
\begin{lem}
\label{lem:8.2}
Assume $0 \le s \le 3/2$, $r \ge 2$, $\chi \ge 1$,
and $D_{r+1}$ is as in \eqref{8.9}.
Then
\begin{equation*}
D_{r+2} \le \Big[\frac{3}{4 \chi}\Big]^{r-1}D_3.
\end{equation*}
\end{lem}
\begin{proof}
A calculation gives
\begin{multline*}
\frac{D_{r+2}}{D_{r+1}} = \Big[\frac{u_{r+2}}{u_{r+1}}\Big]^{r+1}
u_{r+2} \Big[\frac{\sin(\pi/[2r+2])}{\sin(\pi/[2r+4])}\Big]^2
\Big[\frac{2s+r+1}{2r+4}\Big]
\\
\le \frac{1}{\chi} \Big[\frac{1}{2 \cos(\pi/(2r+4))}\Big]
\Big[\frac{\sin(\pi/[2r+2])}{\sin(\pi/[2r+4])}\Big]^2
\Big[\frac{r+4}{2r+4}\Big].
\end{multline*}
Using Taylor series expansions for $\sin(u)$, $u \ge 0$, we have
$\sin(u) \le u$ and $\sin(u) \ge u - u^3/6$. Hence,
\begin{equation*}
\frac{\sin(\pi/[2r+2])}{\sin(\pi/[2r+4])}
\le \Big[\frac{2r+4}{2r+2}\Big]
\Big[1 - \frac{1}{6}\Big(\frac{\pi}{2r+4}\Big)^2\Big]^{-1}.
\end{equation*}
Noting that for $r \ge 2$, the expression on the right hand side of
the above is a decreasing function of $r$, as are the other two functions
of $r$ in the bound for $D_{r+2}/D_{r+1}$, we see that an upper bound
for $D_{r+2}/D_{r+1}$ is obtained by setting $r=2$ in each of the expressions
above. This gives
\begin{equation*}
\frac{D_{r+2}}{D_{r+1}} \le \Big[\frac{1}{2\chi \cos(\pi/8)}\Big]
\Big[\frac{4}{3}\Big]^2
\Big[1 - \frac{1}{6}\Big(\frac{\pi}{8}\Big)^2\Big]^{-2}\Big[\frac{3}{4}\Big]
\le \frac{3}{4 \chi},
\end{equation*}
and so
\begin{equation*}
D_{r+2} \le \Big[\frac{3}{4 \chi}\Big] D_{r+1} \le
\Big[\frac{3}{4 \chi}\Big]^{r-1}D_3.
\end{equation*}
\end{proof}
The following bound on $H_r$ is a direct consequence of the above estimates.
\begin{lem}
\label{lem:H-est}
Assume $0 \le s \le 3/2$, $r \ge 2$. Then
\begin{equation*}
H_{r+2} \le \mu \Big[\frac{\chi}{2s}\Big] \Big[\frac{3}{4 \chi}\Big]^{r-1} 4 G_3.
\end{equation*}
\end{lem}
\begin{lem}
\label{lem:8.4}
Suppose $0 \le s \le 3/2$, $r \ge 2$, $M_1$ is as in \eqref{6.12},
$M_2$ is as in Lemma~\ref{lem:6.4}, and $H_{r+1} = \mu D_{r+1} (\chi/2s)$. Then
\begin{equation*}
M_1 = \frac{\mu D_{r+1} h^r}{1 - G_{r+1}^2 h^{2r+2}} + \frac{2s}{\chi},
\end{equation*}
and
\begin{multline}
\label{8.16}
M_2 = M_1 + \Big[\frac{2s}{\chi}\Big]\frac{1}{1- [(G_{r+1}/H_{r+1})h]^2}
\\
=\frac{2s}{\chi}\Big[1 + \frac{H_{r+1} h^r}{1 - G_{r+1}^2 h^{2r+2}}
+ \Big(1- h^2 \frac{2s}{\mu \chi}\sin^2(\pi/[2r+2])\Big)^{-1}\Big].
\end{multline}
\end{lem}
\begin{proof}
Applying the definitions of $M_1$,
$M_2$, $D_{r+1}$, $G_{r+1}$, and $H_{r+1}$, we get
\begin{align*}
M_2 &= M_1 + \Big[\frac{\mu G_{r+1}}{H_{r+1}}\Big]
\Big[ \frac{1}{1- [(G_{r+1}/H_{r+1})h]^2}\Big]
\Big[\frac{1}{[\sin(\pi/[2r+2])]^2}\Big]
\\
&= M_1 + \Big[ \frac{2s}{\chi}\Big] \frac{1}{1- [(G_{r+1}/H_{r+1})h]^2}
\\
&=\frac{2s}{\chi} + \frac{\mu D_{r+1} h^r}{1 - G_{r+1}^2 h^{2r+2}}
+ \Big[\frac{2s}{\chi}\Big]\frac{1}{1- [(G_{r+1}/H_{r+1})h]^2}
\\
&= \frac{2s}{\chi} + \Big[\frac{2s}{\chi}\Big]
\frac{H_{r+1} h^r}{1 - G_{r+1}^2 h^{2r+2}}
+ \Big[\frac{2s}{\chi}\Big]\frac{1}{1- (2s/[\mu\chi])[\sin(\pi/[2r+2])h]^2}
\\
&= \frac{2s}{\chi}\Big[1 + \frac{H_{r+1} h^r}{1 - G_{r+1}^2 h^{2r+2}}
+ \Big(1- h^2 \frac{2s}{\mu \chi}\sin^2(\pi/[2r+2])\Big)^{-1}\Big].
\end{align*}
\end{proof}
\begin{remark}
\label{rem:8.3}
Note that $D_{r+1}$ has the factor $2s/\chi$, so the
identity \eqref{8.16} for $M_2$ does not blow up as $s \rightarrow 0$.
\end{remark}
\begin{remark}
\label{rem:8.4}
If we replace, in $G_3$ and $D_3$, the quantity $\exp(2sh/\chi)$ by
$\exp(2s\amh_0/\chi)$, where $\amh_0$ is chosen so that \eqref{8.3} and \eqref{8.4}
are satisfied for $0< h \le \amh_0$, then we easily obtain a bound for
$M_2$ in the form
\begin{equation}
\label{8.16a}
M_2 \le \frac{2s}{\chi} \Big[2 + H_{r+1} h^r(1 + \tilde c\, h^{2r+2}) + c h^2\Big],
\end{equation}
where $\tilde c$ and $c$ are easily computable from
\eqref{8.16}. Note that if $0 \le s \le 3/2$, $\chi = 2 \amf_{\infty}
+ \gamma \ge 1$, and $r \ge 2$, then
$\sin^2(\pi/[2r+2]) \le 1/4$. So, for $h \le 1/\sqrt{3}$,
\begin{equation*}
\Big(1- h^2 \frac{2s}{\mu \chi}\sin^2(\pi/[2r+2])\Big)^{-1}
\le (1 - 3h^2/4)^{-1} \le 1 + h^2,
\end{equation*}
i.e., we can take $c =1$. Similarly, for $h \le 0.2$, $G_{r+1}^2 \le 3/4$, so
we can also take $\tilde c =1$.
\end{remark}
Recall that we have to insure that $M > M_2$.
From \eqref{8.6a} we have that
\begin{equation*}
M \ge \Big[\frac{4s}{\chi}\Big]\frac{1}{\kappa_2- \kappa_1}.
\end{equation*}
Comparing this expression to the bound for $M_2$ given in \eqref{8.16a}
and noting that $\kappa_2- \kappa_1 <1$, $M$ will be $> M_2$ if we
choose $h \le \amh_1$ sufficiently small so that it also satisfies
\begin{equation}
\label{h1cond}
2 + H_{r+1} h^r(1 + h^{2r+2}) + h^2 \le 2/(\kappa_2- \kappa_1).
\end{equation}
We can now state versions of Theorem~\ref{thm:6.5} and Theorem~\ref{thm:6.6}
in the context of this section.
\begin{thm}
\label{thm:8.1}
Assume that $r\ge 2$, $\chi \ge 1$, and $0 \le s \le 3/2$, and let
$\nu, \kappa_1$, and $\kappa_2$ be as described at the beginning of
this section. Let $M_2$ and $H_{r+1}$ be as described in
Lemma~\ref{lem:8.4} and select $M$ such that \eqref{8.6} is satisfied.
Finally, assume that $h = \max_{i \in I} h_i \le \min(\amh_0,\amh_1,0.2)$. Then,
with $u = M\eta(r) h$, \eqref{8.3} and \eqref{8.4} are satisfied and
$M > M_2$. Furthermore, (compare Theorem~\ref{thm:6.5})
$\bL_{s,\nu}(K(M;T)) \subset K(M';T)$, where $M' = \kappa_2 M < M$.
In addition, we have that for $H = H_{r+1}$,
\begin{equation*}
\lambda_s^{\nu}(1 - H h^r) \le \rmf(\bL_{s,\nu}) \le \lambda_s^{\nu}(1 + H h^r).
\end{equation*}
\end{thm}
\begin{proof}
The fact that $M > M_2$ follows directly from the computations above.
Our selection of $r, \nu, h, M$ and $M'= \kappa_2 M$ shows that the inequality
in Theorem~\ref{3.1} is satisfied, so $\bL_{s,\nu}(K(M;T)) \subset
K(M';T)$. The inequality for $\rmf(\bL_{s,\nu})$ in Theorem~\ref{thm:8.1}
follows directly from Theorem~\ref{thm:6.5}.
\end{proof}
\begin{thm}
\label{thm:8.2}
Under the hypotheses of Theorem~\ref{thm:8.1}, we have
\begin{equation*}
[(1 + H h^r)^{-1} \rmf(\bL_{s,\nu})]^{1/\nu} \le \lambda_s \le
[(1 - H h^r)^{-1} \rmf(\bL_{s,\nu})]^{1/\nu},
\end{equation*}
where the entries of the matrices $[1 + H h^r]^{-1} \bL_{s,\nu}$ and
$[1 - H h^r]^{-1} \bL_{s,\nu}$ differ by $O(h^r)$.
\end{thm}
Using the inequalities of Theorem~\ref{thm:8.1}, we can obtain rigorous
upper and lower bounds on the Hausdorff dimension $s_*$ of the invariant
set associated with the transfer operator $L_s$ as follows.
Let $s_l$ and $s_u$ denote values of $s$ satisfying
\begin{equation*}
(1-Hh^r)^{-1} \rmf(\bL_{s_u,\nu}) <1, \qquad
(1+Hh^r)^{-1} \rmf(\bL_{s_l,\nu}) >1.
\end{equation*}
It follows immediately from Theorem~\ref{thm:8.1} that $\lambda_{s_u}^{\nu} <
1$ and $\lambda_{s_l}^{\nu} >1$. Since the spectral radius $\lambda_s$ of
$L_s$ is a decreasing function of $s$, there will be a value $s_*$ satisfying
$s_l < s_* < s_u$ for which $\lambda_{s_*}^{\nu} = 1$, or equivalently
$\lambda_{s_*} = 1$. This value $s_*$ is the Hausdorff dimension of
the invariant set associated with the transfer operator $L_s$.
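A minimal sketch (ours) of how these two certificates can drive a bisection for $s_*$: the routine \texttt{rho\_bound} is a placeholder for a computation of $\rmf(\bL_{s,\nu})$ from the collocation matrix (for instance by a power method) and is an assumption of the sketch, not something specified here.
\begin{verbatim}
def enclose_dimension(rho_bound, H, h, r, s_lo, s_hi, tol=1e-8):
    # Bisection for s_* using the bounds of Theorem 8.2.
    # rho_bound(s): assumed to return r(bL_{s,nu}); H, h, r as in Theorem 8.1.
    def upper_ok(s):       # certifies lambda_s^nu < 1, hence s_* < s
        return rho_bound(s) / (1 - H * h ** r) < 1

    def lower_ok(s):       # certifies lambda_s^nu > 1, hence s_* > s
        return rho_bound(s) / (1 + H * h ** r) > 1

    assert lower_ok(s_lo) and upper_ok(s_hi)
    while s_hi - s_lo > tol:
        mid = 0.5 * (s_lo + s_hi)
        if upper_ok(mid):
            s_hi = mid
        elif lower_ok(mid):
            s_lo = mid
        else:              # neither certificate holds: keep the current enclosure
            break
    return s_lo, s_hi      # rigorous enclosure s_lo < s_* < s_hi
\end{verbatim}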
\section{Numerical Computations}
\label{sec:num-comp}
In this final section, we present results of computations of the
Hausdorff dimension $s$ for various choices of sets of continued
fractions, maximum mesh size $h$, piecewise polynomial degree $r$, and
number of iterations $\nu$ of the map, where $\nu=1$ corresponds to
the original map. These computations include choices of the above
parameters (especially the number of iterations $\nu$), for which the
hypotheses of our theorems are satisfied, but also computations which
obtain the same results when the mappings are not iterated (denoted by
$\nu =1^*$ in Table~\ref{tb:t2} below). We note the obvious fact that
as the number of iterations increase, the complexity of the operator
increases, so the time to compute the maximum eigenvalue of the
resulting matrix operator increases as well. To keep the time involved
in the various computations to be reasonable, we have elected to
compute to fewer digits those sets with a large value of $\nu$,
especially if the set $E$ contains many digits. We have also done
additional computations, not reported in Table~\ref{tb:t2}, which
indicate that the method is robust with respect to the choices of $h$,
$r$, and $\nu$.
\begin{table}[htp]
\footnotesize
\caption{Computation of Hausdorff dimension $s$ for various choices of
sets of continued fractions, maximum mesh size $h$, piecewise polynomial
degree $r$, and number of iterations $\nu$.}
\label{tb:t2}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Set E & $r$ & $h$ & $\nu$ &\\
\hline
& \multicolumn{4}{l|}{$s =$}\\
\hline \hline
E[1,2] & 14 & 0.0002 & 7 & \\
\hline
& \multicolumn{4}{l|}
{$s =$ 0.531 280 506 277 205 141 624 468 647 368 471 785 493 059 109 018 398}
\\
\hline\hline
E[1,3] & 8 & 5.0e-05 & 6 & \\
\hline
& \multicolumn{4}{l|}{$s =$
0.454 489 077 661 828 743 845 777 611 651}\\
\hline\hline
E[1,4] & 8 & 5.0e-05 & 6 & \\
\hline
& \multicolumn{4}{l|}{$s =$
0.411 182 724 774 791 776 844 805 904 696}\\
\hline\hline
E[2,3] & 8 & 5.0e-05 & 3 & \\
\hline
& \multicolumn{4}{l|}
{$s =$ 0.337 436 780 806 063 636 304 494 910 387}
\\
\hline\hline
E[2,4] & 8 & 5.0e-05 & 3 & \\
\hline
& \multicolumn{4}{l|}
{$s =$ 0.306 312 768 052 784 030 277 908 307 445}
\\
\hline\hline
E[3,4] & 8 & 5.0e-05 & 3 & \\
\hline
& \multicolumn{4}{l|}
{$s =$ 0.263 737 482 897 426 558 759 863 384 275}
\\
\hline\hline
E[3,7] & 18 & .01 & 3 & \\
\hline
& \multicolumn{4}{l|}
{$s =$ 0.224 923 947 191 778 989 184 480 593 490}
\\
\hline\hline
E[10,11] & 20 & .002 & 2 & \\
\hline
& \multicolumn{4}{l|}
{$s =$ 0.146 921 235 390 783 463 311 108 628 515 904 073 067 083 129 676 755}
\\
\hline \hline
E[100, 10,000] & 20 & .002 & 1 & \\
\hline
& \multicolumn{4}{l|}
{$s =$ 0.052 246 592 638 658 878 652 588 416 300 508 181 012 676 284 431 681}
\\
\hline \hline
E[1,2,3] & 5 & .0001 & 5 & \\
\hline
& \multicolumn{4}{l|}
{$s =$ 0.705 660 908 028 738}
\\
\hline \hline
E[1,3,4] & 5 & .0001 & 5 & \\
\hline
& \multicolumn{4}{l|}
{$s =$ 0.604 242 257 756 515 }
\\
\hline \hline
E[1,3,5] & 8 & .001 & 6 & \\
\hline
& \multicolumn{4}{l|}
{$s =$ 0.581 366 821 182 975 }
\\
\hline \hline
E[1,4,7] & 6 & .001 & 6 & \\
\hline
& \multicolumn{4}{l|}
{$s =$ 0.517 883 757 006 911}
\\
\hline \hline
E[2,3,4] & 16 & .005 & 4 & \\
\hline
& \multicolumn{4}{l|}
{$s =$ 0.480 696 222 317 573 041 322 515 564 711}
\\
\hline \hline
E[1,2,3,4] & 8 & .005 & 6 & \\
\hline
& \multicolumn{4}{l|}
{$s =$ 0.788 945 557 483 153}
\\
\hline \hline
E[2,3,4,5] & 16 & .005 & 4 & \\
\hline
& \multicolumn{4}{l|}
{$s =$ 0.559 636 450 164 776 713 312 144 913 530}
\\
\hline \hline
E[1,2,3,4,5] & 5 & .0005 & 5 & \\
\hline
& \multicolumn{4}{l|}{$s =$ 0.836 829 443 681 209}\\
\hline \hline
E[2,4,6,8,10] & 7 & .005 & 3 & \\
\hline
& \multicolumn{4}{l|}
{$s =$ 0.517 357 030 937 017}
\\
\hline\hline
E[1,\ldots,10] & 10 & .01 & 1* &\\
\hline
& \multicolumn{4}{l|}{$s =$ 0.925 737 591 146 765}\\
\hline \hline
E[1,\ldots,34] & 10 & .01 & 1* &\\
\hline
& \multicolumn{4}{l|}{$s =$ 0.980 419 625 226 980 }\\
\hline \hline
E[1,3,5,\ldots,33] & 10 & .01 & 1* &\\
\hline
& \multicolumn{4}{l|}{$s =$ 0.770 516 008 717 163}\\
\hline \hline
E[2,4,6,\ldots,34] & 10 & .01 & 1* &\\
\hline
& \multicolumn{4}{l|}{$s =$ 0.633 471 970 241 089}\\
\hline
\end{tabular}
\end{center}
\end{table}
In addition to the results presented in Table~\ref{tb:t2}, we have also used
our method to compute the Hausdorff dimension of the set $E[1,2]$ with
degree $r =36$, $h = .001$, and $\nu = 1$ using multiple precision
with 108 digits. Although this choice does not satisfy the hypotheses
of our theorem, the result agrees to 100 decimal places with the
result in \cite{JP100}. While we do not have a proof that our method
also works in the non-iterated situation, we do not have any examples
where it fails. We conjecture that the limitation is the method of
proof and not the underlying method.
We next discuss how to control the size of some of the constants that appear
in the error estimates. We begin with the constant $\mu = \max_i h_i/\min_i
h_i$. Recall that to satisfy hypothesis (H3), for a given positive integer
$\nu$, we need to determine pairwise disjoint, nonempty compact intervals
$[a_i,b_i] \subset S$, $1 \le i \le I$, such that for every $\omega \in
\Omega_{\nu}$, there exists $i =i(\omega)$, $1 \le i \le I$, such that
$\theta_{\omega}(S) \subset [a_{i},b_{i}]$. By the results in the previous
section, we can take $S = [\amf_{\infty}, \bmf_{\infty}]$ and we then order
the $\omega \in \Omega_{\nu}$, such that the sets $\theta_{\omega}(S)$ are
ordered with $a_i < a_{i+1}$. Although we could use the domain consisting
of the union of the sets $\theta_{\omega}(S)$, this can lead to very
small subinterval sizes. Instead, we determine a new domain
by iterating only $\nu^{\prime}$ times, while still using
the mappings obtained by $\nu$ iterations to calculate the mapping $L$.
The constant $\nu^{\prime}$ is determined so that the length of the smallest
interval will be $\ge h_{\max}/\mu$.
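A rough Python sketch (ours) of this domain construction for a sample digit set: enumerate the words of length $\nu^{\prime}$, take the images of $S = [\amf_{\infty}, \bmf_{\infty}]$ (each $\theta_{\omega}$ is monotone, so an image interval is spanned by the images of the endpoints), and order them by left endpoint.
\begin{verbatim}
import itertools, math

gamma, Gamma = 1.0, 7.0
digits = [1.0, 4.0, 7.0]             # sample digit set B, as for E[1,4,7]
a_inf = -gamma / 2 + math.sqrt((gamma / 2) ** 2 + gamma / Gamma)
b_inf = (Gamma / gamma) * a_inf

def apply_word(word, x):
    for beta in reversed(word):
        x = 1.0 / (x + beta)
    return x

nu_prime = 2                         # iterations used only to build the domain
intervals = []
for word in itertools.product(digits, repeat=nu_prime):
    lo, hi = apply_word(word, a_inf), apply_word(word, b_inf)
    intervals.append((min(lo, hi), max(lo, hi)))
intervals.sort()                     # order so that a_i < a_{i+1}
print(len(intervals), intervals[:3])
\end{verbatim}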
Although we have not done the computations using interval arithmetic,
we have only included the number of digits in each computation that we
expect to be correct, which is always less than the number of digits
provided by {\it Matlab} for the precision we have specified. For
computations that use more digits than provided by standard {\it Matlab}
computations, we have used the {\it Advanpix} multiprecision toolbox.
Because the theory developed in the previous sections involves the
computation and estimates for many constants and parameters, we next
provide, for the benefit of the reader, details for the specific
example of $E[1,4,7]$, corresponding to the computation for $r=6$, $h=0.001$,
$\nu = 6$, and $s =0.518$, shown in Table~\ref{tb:t2}. The computational
domain is determined by only iterating $\nu^{\prime} = 2$ times and the total
number of subintervals used is $= 191$. With these choices,
$\gamma = 1$ and $\Gamma =7$. Then by \eqref{5.1},
\begin{equation*}
\amf_{\infty} = - \frac{\gamma}{2} + \sqrt{(\gamma/2)^2 + (\gamma/\Gamma)}
\approx 0.127 \quad \text{and} \quad
\bmf_{\infty} = (\Gamma/\gamma) \amf_{\infty} \approx 0.8875.
\end{equation*}
Working on the interval $[\amf_{\infty}, \bmf_{\infty}]$, we have
$\chi = \amf_{\infty} + \bmf_{\infty}^{-1} = 2 \amf_{\infty} + \gamma
\approx 1.25$. From
\eqref{5.5}--\eqref{5.7}, we have for $\nu = 6$ that
\begin{equation*}
c(\nu) = [\tilde B_{\nu-1} \amf_{\infty} + \tilde B_{\nu}]^{-2} = 0.0051.
\end{equation*}
From \eqref{3.1}, we have that $\eta(r) = 1/2$ and from Lemma~\ref{lem:3.2}
that
\begin{equation*}
\psi(r) = (2/\pi) \ln(r +1) + 3/4 \approx 2.0.
\end{equation*}
From \eqref{M0nuchoice}, since we are working on the interval
$[\amf_{\infty}, \bmf_{\infty}]$, we take
\begin{equation*}
M_0(\nu) = 2/(\amf_{\infty} + \bmf_{\nu}^{-1}) = 2/(\amf_{\infty} + \amf_{\nu-1}
+ \gamma) = 1.5954.
\end{equation*}
Setting
\begin{equation*}
\kappa_1 = c(\nu) 2 \eta(r) r^2 \psi(r) = 0.364 <1,
\end{equation*}
we choose $\kappa_2 = (1 + \kappa_1)/2 = 0.6823$, so that $\kappa_1 <
\kappa_2 <1$. Next, we choose
\begin{equation*}
M = \frac{4s}{\amf_{\infty} + \amf_{\nu-1} + \gamma}\frac{1}{\kappa_2 - \kappa_1}
= 5.2022
\end{equation*}
and $M^{\prime} = \kappa_2 M$.
One of the conditions on the mesh size $h = \max_{i \in I} h_i$ is
that it be sufficiently small so that \eqref{8.3} and \eqref{8.4} are
satisfied. For our example, setting $u = M \eta(r) h$, we have
\begin{gather*}
\psi(r) u \exp(u) = .0052 < 1,
\\
\frac{\kappa_1 \exp(u)}{1- \psi(r) u \exp(u)} = 0.3674
< 0.5234 = \kappa_2 - \frac{s M_0(\nu)}{M}.
\end{gather*}
The second condition, coming from \eqref{h1cond} is that
\begin{equation}
\label{secondcond}
2 + H_{r+1} h^r(1 + h^{2r+2}) + h^2 \le 2/(\kappa_2 - \kappa_1),
\end{equation}
where, combining \eqref{8.9}, \eqref{Hr1def}, and Lemma~\ref{lem:8.2}, we have
\begin{equation*}
H_{r+1} \le \mu \frac{\chi}{2s} D_{r+1}
\le \mu \frac{\chi}{2s} \Big[\frac{3}{4 \chi}\Big]^{r-2} D_3
\le \mu \frac{\chi}{2s} \Big[\frac{3}{4 \chi}\Big]^{r-2} 4 G_3,
\end{equation*}
where
\begin{equation*}
G_3 = 2s \Big[\frac{1}{\chi\sqrt{3}}\Big]^3\Big[\frac{2s+1}{4} \Big]
\Big[\frac{s+1}{3}\Big]\exp(2sh/\chi).
\end{equation*}
Combining these results and setting $\mu = 3.76$ (the ratio of the maximum
subinterval size to the minimum subinterval size, as calculated by the
computer code), we get
\begin{equation*}
H_{r+1} \le 0.0608.
\end{equation*}
Since $2/(\kappa_2 - \kappa_1) \ge 6$, it is clear that
\eqref{secondcond} is satisfied.
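The tail end of this verification is easy to reproduce; the short Python sketch below (ours) recomputes the bound on $H_{r+1}$ and checks \eqref{secondcond}, taking $\kappa_1$ and $\kappa_2$ from the values quoted above.
\begin{verbatim}
import math

gamma, Gamma, s, r, h, mu = 1.0, 7.0, 0.518, 6, 0.001, 3.76
kappa1, kappa2 = 0.364, 0.6823           # values from the walkthrough above

a_inf = -gamma / 2 + math.sqrt((gamma / 2) ** 2 + gamma / Gamma)
chi = 2 * a_inf + gamma                  # about 1.2536

# G_3 from (8.10) and the bound on H_{r+1} from (Hr1def) and Lemma 8.2
G3 = (2 * s * (1 / (chi * math.sqrt(3))) ** 3 * ((2 * s + 1) / 4)
      * ((s + 1) / 3) * math.exp(2 * s * h / chi))
H_bound = mu * (chi / (2 * s)) * (3 / (4 * chi)) ** (r - 2) * 4 * G3
print(H_bound)                           # about 0.061, matching the text

# condition (secondcond), with 2/(kappa2 - kappa1) a bit above 6
lhs = 2 + H_bound * h ** r * (1 + h ** (2 * r + 2)) + h ** 2
assert lhs <= 2 / (kappa2 - kappa1)
\end{verbatim}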
We have thus satisfied the conditions of Theorems~\ref{thm:8.1} and
\ref{thm:8.2}, which guarantee the bounds we need to obtain
rigorous upper and lower bounds on the Hausdorff dimension of
the set $E[1,4,7]$.
\bibliographystyle{amsplain}
| {
"timestamp": "2021-03-02T02:14:07",
"yymm": "2008",
"arxiv_id": "2008.11150",
"language": "en",
"url": "https://arxiv.org/abs/2008.11150",
"abstract": "In [14], the authors developed a new approach to the computation of the Hausdorff dimension of the invariant set of an iterated function system or IFS. In this paper, we extend this approach to incorporate high order approximation methods. We again rely on the fact that we can associate to the IFS a parametrized family of positive, linear, Perron-Frobenius operators $L_s$, an idea known in varying degrees of generality for many years. Although $L_s$ is not compact in the setting we consider, it possesses a strictly positive $C^m$ eigenfunction $v_s$ with eigenvalue $R(L_s)$ for arbitrary $m$ and all other points $z$ in the spectrum of $L_s$ satisfy $|z| \\le b$ for some constant $b < R(L_s)$. Under appropriate assumptions on the IFS, the Hausdorff dimension of the invariant set of the IFS is the value $s=s_*$ for which $R(L_s) =1$. This eigenvalue problem is then approximated by a collocation method at the extended Chebyshev points of each subinterval using continuous piecewise polynomials of arbitrary degree $r$. Using an extension of the Perron theory of positive matrices to matrices that map a cone $K$ to its interior and explicit a priori bounds on the derivatives of the strictly positive eigenfunction $v_s$, we give rigorous upper and lower bounds for the Hausdorff dimension $s_*$, and these bounds converge rapidly to $s_*$ as the mesh size decreases and/or the polynomial degree increases.",
"subjects": "Number Theory (math.NT); Numerical Analysis (math.NA)",
"title": "Hidden Positivity and a New Approach to Numerical Computation of Hausdorff Dimension: Higher Order Methods",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9871787879966232,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7093811474685211
} |
https://arxiv.org/abs/1411.4906 | On Eigenvalues of Random Complexes | We consider higher-dimensional generalizations of the normalized Laplacian and the adjacency matrix of graphs and study their eigenvalues for the Linial-Meshulam model $X^k(n,p)$ of random $k$-dimensional simplicial complexes on $n$ vertices. We show that for $p=\Omega(\log n/n)$, the eigenvalues of these matrices are a.a.s. concentrated around two values. The main tool, which goes back to the work of Garland, are arguments that relate the eigenvalues of these matrices to those of graphs that arise as links of $(k-2)$-dimensional faces. Garland's result concerns the Laplacian; we develop an analogous result for the adjacency matrix. The same arguments apply to other models of random complexes which allow for dependencies between the choices of $k$-dimensional simplices. In the second part of the paper, we apply this to the question of possible higher-dimensional analogues of the discrete Cheeger inequality, which in the classical case of graphs relates the eigenvalues of a graph and its edge expansion. It is very natural to ask whether this generalizes to higher dimensions and, in particular, whether the higher-dimensional Laplacian spectra capture the notion of coboundary expansion - a generalization of edge expansion that arose in recent work of Linial and Meshulam and of Gromov. We show that this most straightforward version of a higher-dimensional discrete Cheeger inequality fails, in quite a strong way: For every $k\geq 2$ and $n\in \mathbb{N}$, there is a $k$-dimensional complex $Y^k_n$ on $n$ vertices that has strong spectral expansion properties (all nontrivial eigenvalues of the normalised $k$-dimensional Laplacian lie in the interval $[1-O(1/\sqrt{n}),1+O(1/\sqrt{n})]$) but whose coboundary expansion is bounded from above by $O(\log n/n)$ and so tends to zero as $n\rightarrow \infty$; moreover, $Y^k_n$ can be taken to have vanishing integer homology in dimension less than $k$. | \section{Introduction}
\label{sec:introduction}
Eigenvalues of graphs are a classical and well-studied subject, which goes back to a fundamental paper of Kirchhoff \cite{Kirchhoff:1847di}, in which he used the combinatorial graph Laplacian to analyze electrical networks and formulated his celebrated \emph{Matrix-Tree Theorem} for the number of spanning trees of a graph (which includes, as the special case of the complete graph, Cayley's \cite{Cayley:Trees-1889} famous formula $n^{n-2}$ for the number of labeled trees on $n$ vertices).
The eigenvalues of a graph $G$ encode many important properties of $G$, in particular regarding connectivity and \emph{expansion} properties of $G$ (the mixing rate of a random walk on $G$) as well as other \emph{quasirandomness} properties of $G$. Because of this, eigenvalues of graphs also play a major role in the design and analysis of algorithms, including heuristic and approximation algorithms for hard graph partitioning problems (\emph{spectral partitioning}) and Markov Chain Monte Carlo approximation algorithms for hard counting problems. We cannot hope to survey the relevant literature here and refer the reader to the survey articles and monographs \cite{Chung:SpectralGraphTheory-1997,Jerrum:CountingSamplingIntegrating-2003,KrivelevichSudakov:PseudorandomGraphs-2006,
HooryLinialWigderson:ExpanderGraphs-2006,LevinPeresWilmer:MarkovChainsMixingTimes-2009,CvetkovicRowlinsonSimic:IntroductionGraphSpectra-2010,Zhang:2011un} for background and further references.
In the present paper, we consider eigenvalues of higher-dimensional simplicial complexes and, in a nutshell, prove two results: First, generalizing well-known results about random graphs $G(n,p)$, we show (Theorem~\ref{EigenvaluesRandomComplexes}) that the Linial--Meshulam $k$-dimensional random complexes are \emph{asymptotically almost surely} (\emph{a.a.s.}), i.e., with probability tending to $1$ as $n\rightarrow \infty$, strongly \emph{spectrally expanding} (their eigenvalues are strongly concentrated around two values). Second, we give a probabilistic construction (Theorem~\ref{thm:counterexample}) of $k$-dimensional complexes that are strong spectral expanders but that fail to have the property of \emph{coboundary expansion} --- a generalization of edge expansion that arose in the recent work of
Linial and Meshulam \cite{LinialMeshulam:HomologicalConnectivityRandom2Complexes-2006} and of Gromov \cite{Gromov:SingularitiesExpanders2-2010}. This shows that the most straightforward attempt of generalizing the discrete Cheeger--Buser inequalities to higher-dimensional complexes fails and answers a question raised, e.g., by Dotterrer and Kahle \cite{Dotterrer:2010fk}.
Before stating these results more precisely, we first recall the basic definitions and terminology.
\subsubsection*{Adjacency Matrix and Laplacians of Graphs}
We recall the three ($n\times n$)-matrices commonly associated with a graph\footnote{Throughout this paper, we will assume that $G$ is simple, i.e., we do not consider loops or multiple edges.} $G=(V,E)$ on $n$ vertices.
The \emph{adjacency matrix} $A=A(G)\in \{0,1\}^{V\times V}$ has entries defined by $A_{u,v}=1$ iff $\{u,v\}\in E$.
The \emph{combinatorial Laplacian} is defined as $L=L(G):=D-A$, where $D=D(G)\in \myboldmath{R}^{V\times V}$ is the diagonal matrix with entries $D_{v,v}=\deg_G(v)$, the \emph{degrees} of the vertices.
Both of these are \emph{symmetric} matrices and hence have a multiset of $n$ real eigenvalues, called the \emph{spectrum}.
The eigenvalues of $A$ and of $L$ turn out to be quite sensitive to the maximum and minimum degree of $G$. For graphs with very non-uniform degree distributions, it is often more convenient to consider the \emph{normalized Laplacian}, which is defined as $\Delta=\Delta(G):=D^{-1}L=I-D^{-1}A$, where $I\in \myboldmath{R}^{V\times V}$ is the identity matrix.\footnote{%
Strictly speaking, $D^{-1}$ is defined only if there are no isolated vertices, i.e., if $\deg_G(v)>0$ for all $v\in V$, which will be the case of primary interest to us. If there are isolated vertices, we adopt the convention that $D^{-1}_{v,v}=0$ whenever $\deg_G(v)=0$ and retain the definition $\Delta=D^{-1}L$. (The second equation $\Delta=I-D^{-1}A$ no longer holds in this case, since $\Delta$ has zero diagonal entries at isolated vertices.)
Sometimes, (e.g., in \cite{Chung:2003wn,Chung:SpectralGraphTheory-1997,CojaOghlan:2007gj}) a slightly different matrix is referred to as the normalized Laplacian, namely $\mathscr{L}:=I-D^{-1/2}AD^{-1/2}$. Assuming that there are no isolated vertices, $\Delta$ and $\mathscr{L}$ have the same spectra, since $\Delta x=\lambda x$ for some $\lambda\in \myboldmath{R}$ and $x\in \myboldmath{R}^{V}$ iff $\mathscr{L}y=\lambda y$, where $y=D^{1/2}x$.}
The normalized Laplacian is not symmetric but corresponds to a self-adjoint operator on $\myboldmath{R}^n$ with respect to a weighted inner product (see Section~\ref{sec:preliminaries}) and so also has $n$ real eigenvalues.
Both versions of the Laplacian are \emph{positive semidefinite} relative to their respective inner products and so have nonnegative eigenvalues, typically listed in increasing order $\lambda_1(L)\!\leq\!\ldots\!\leq\!\lambda_n(L)$ and $\lambda_1(\Delta)\!\leq\!\ldots\!\leq\! \lambda_n(\Delta)$. The ``all-1'' vector ${\bf 1}=(1,\ldots,1)^T$ satisfies $L\mathbf{1}=\Delta\mathbf{1}=0$, hence $\lambda_1(L)=\lambda_1(\Delta)=0$, which is called the \emph{trivial eigenvalue}.
For the adjacency matrix, the eigenvalues are typically listed in \emph{decreasing} order as $\mu_1(A)\!\ge\!\!\ldots\!\geq\! \mu_n(A)$. Define $\mu(G):=\max\{\mu_2(A),|\mu_n(A)|\}$.
The graph $G$ is connected iff $\lambda_2(L)>0$ iff $\lambda_2(\Delta)>0$. More generally, the multiplicity of $0$ as an eigenvalue of either Laplacian equals the number of connected components of $G$, and if $G$ is connected, then the second eigenvalue $\lambda_2$ of either Laplacian controls the \emph{edge expansion} of the graph (see the discussion below).
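For concreteness (this small example is ours), the three matrices and their spectra for a path on four vertices can be computed as follows.
\begin{verbatim}
import numpy as np

edges = [(0, 1), (1, 2), (2, 3)]        # path on 4 vertices
n = 4
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
D = np.diag(A.sum(axis=1))
L = D - A                               # combinatorial Laplacian
Delta = np.linalg.inv(D) @ L            # normalized Laplacian (no isolated vertices here)

print(sorted(np.linalg.eigvalsh(L)))           # smallest eigenvalue is 0
print(sorted(np.linalg.eigvals(Delta).real))   # spectrum is real; 0 is the trivial eigenvalue
\end{verbatim}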
\subsubsection*{Eigenvalues of Random Graphs} Let $G(n,p)$ be the binomial random graph on $n$ vertices, for which every edge is included independently with probability $p=p(n)$, and let $d=p(n-1)$ be the expected average degree.
We summarize known concentration results on the spectra of $G(n,p)$ as follows.
See Section~\ref{subsec:EigenvaluesAdjacencyRandomGraphs} for a more detailed account.
\begin{theorem}[\cite{CojaOghlan:2007gj,Feige:2005hp,Hoffman:2012}]
\label{thm:concentration-graph-eigenvalues}
For every $c>0$ and every $\gamma>c$ there exists a constant $C>0$ such that for $p\geq (1+\gamma)\cdot \log n/n$ and $d=p(n-1)$ the following statements hold with probability at least $1-n^{-c}$:
\begin{enumerate}
\item[\textup{(i)}] $\mu_1(A(G(n,p))) \in [d - C\cdot \sqrt{d}, d +C\cdot \sqrt{d}]$ and $\mu(G(n,p))\leq C\cdot \sqrt{d}$;
\item[\textup{(ii)}] $1-\frac{C}{\sqrt{d}}\!\leq\!\lambda_2(\Delta(G(n,p)))\!\leq\!\ldots\!\leq\!\lambda_n(\Delta(G(n,p)))\!\leq\!1+\frac{C}{\sqrt{d}}.
\end{enumerate}
For the adjacency matrix \textup{(i)} even holds for $p\geq \gamma\cdot \log n/n$.
\end{theorem}
One type of application of such results
is the analysis of spectral heuristics for algorithms that deal with random instances of NP-hard graph partitioning and related problems, see the discussions in \cite{Feige:2005hp,CojaOghlan:2007gj}.
\subsubsection*{Higher-Dimensional Laplacians}
Eckmann~\cite{Eckmann:HarmonischeFunktionenRandwertaufgabenKomplex-1945} introduced a generalization of the graph Laplacian $L$ to higher-dimensional simplicial
complexes $X$ to study discrete boundary value problems on such complexes.
More precisely, let $X$ be a finite simplicial complex and let $C^i(X;\myboldmath{R})$, $i\in \myboldmath{Z}$, be the vector space of $i$-dimensional simplicial cochains with real coefficients (we refer to Section~\ref{sec:preliminaries} for the necessary definitions). Eckmann defines three linear operators $L_i^\down(X)$, $L_i^\textup{up}(X)$ and $L_i(X)=L_i^\down(X)+L_i^\textup{up}(X)$ on the space $C^i(X;\myboldmath{R})$ and proves a discrete analogue of \emph{Hodge theory} \cite{Hodge:TheoryApplicationsHarmonicIntegrals-1989}, which implies, in particular, that the subspace $\mathcal{H}_i(X):=\ker L_i(X)$ of so-called \emph{harmonic cochains} on $X$ is isomorphic to $\widetilde{H}^i(X;\myboldmath{R})$, the $i$-th reduced cohomology.
In the case of a $1$-dimensional simplicial complex (graph) $G$, $L_0^\textup{up}(G)$ coincides with the usual graph Laplacian $L(G)$ discussed previously.
Subsequently, combinatorial Laplacians were applied in a variety of contexts. Dodziuk~\cite{Dodziuk:FiniteDifferenceApproachHodgeTheoryHarmonicForms-1976} and Dodziuk and Patodi~\cite{DodziukPatodi:RiemannianStructuresTriangulationsManifolds-1976} showed how the continuous Laplacian of a Riemannian manifold can be approximated by the combinatorial Laplacians of a suitable sequence of successively finer triangulations of the manifold.
Kalai \cite{Kalai} used combinatorial Laplacians to prove a higher-dimensional generalization of Cayley's formula for the number of labeled trees, and further results in this direction, including a generalization of the Matrix-Tree Theorem, were obtained in \cite{Adin:1992dx,Duval:2009jo}. For further combinatorial applications, see, e.g., \cite{FriedmanHanlon:BettiNumbersChessboardComplexes-1998,Friedman:ComputingBettiNumbersViaCombinatorialLaplacians-1998,MR1697094,MR1912799}. For
further background and references regarding combinatorial Laplacians, see also~\cite{HorakJost}.
We will mostly work with a normalized version of the Laplacian, $\Delta_i(X)=\Delta_i^\down(X)+\Delta_i^\textup{up}(X)$ (see Section~\ref{sec:preliminaries} for the definition) and focus on the operator $\Delta_{k-1}^\textup{up}(X)$.
Again, for graphs, $\Delta_0^\textup{up}(G)$ agrees with the normalized graph Laplacian $\Delta(G)$ discussed above.
\subsubsection*{Random Complexes}
Linial and Meshulam~\cite{LinialMeshulam:HomologicalConnectivityRandom2Complexes-2006} introduced a higher-dimensional analogue of the binomial
random graph model $G(n,p)$. By definition, the random $k$-dimensional complex $X^k(n,p)$ has $n$ vertices, a \emph{complete $(k-1)$-skeleton} (i.e., every subset of $k$ or fewer vertices forms a face of the complex), and every $(k+1)$-element set of vertices is taken as a $k$-face independently with probability $p$, which may be constant or, more generally, a function $p(n)$ depending on $n$.
This model has been studied extensively, and \emph{threshold probabilities} for several basic topological properties of $X^k(n,p)$ have been determined quite precisely, see e.g.~\cite{MeshulamWallach:HomologicalConnectivityRandomComplexes-2009,BabsonHoffmanKahle:SimpleConnectivityRandom2Complexes, Aronshtam:CollapsibilityVanishingTopHomologyRandomComplexes-2013,CohenCostaFarberKappeler:2012,Kozlov:2009p2037, Wagner:MinorsRandomExpandingHypergraphs-2011}.
Our first result is a higher-dimensional analogue of Theorem~\ref{thm:concentration-graph-eigenvalues}.
The adjacency matrix of a $k$-dimensional complex $X$ is denoted by $A_{k-1}$ (see Section~\ref{sec:matrices-complexes} for the precise definition).
Both $A_{k-1}$ and the normalized up-Laplacian $\Delta_{k-1}^\textup{up}$ have rows and columns indexed by the $(k-1)$-faces of $X$; we assume that $X$ has $n$ vertices and a complete $(k-1)$-skeleton, so the matrices have dimension $\binom{n}{k}\times\binom{n}{k}$.
$A_{k-1}$ has entries in $\{0,\pm 1\}$, and $(A_{k-1})_{F,G}=\pm 1$ (with appropriate signs) iff $F\cup G$ is a $k$-face of $X$.
\begin{theorem}\label{EigenvaluesRandomComplexes}\xdef\savedtheoremnumber{\thetheorem}
Let $k\geq2$. For every $c>0$ and every $\gamma > c$ there exists a constant $C>0$ with the following property:
Assume $p \geq (k+\gamma)\log(n)/n$ and let\footnote{Thus, $d$ is the expected \emph{degree} of any $(k-1)$-face $F$ in $X^k(n,p)$, i.e., the expected number of $k$-faces incident to $F$.} $d:= p(n-k)$.
Then for $\gamma_A=C\cdot\sqrt{d}$ and $\gamma_\Delta=C/\sqrt{d}$ the following statements hold with probability at least $1-n^{-c}$:
\begin{enumerate}
\item[\textup{(i)}] The largest $\binom{n-1}{k-1}$ eigenvalues of $A_{k-1}(X^k(n,p))$ lie in the interval $[d-\gamma_A,d+\gamma_A]$, and the remaining $\binom{n-1}{k}$ eigenvalues lie in the interval $[-\gamma_A,+\gamma_A]$.
\item[\textup{(ii)}] The smallest $\binom{n-1}{k-1}$ eigenvalues of $\Delta_{k-1}^\textup{up}(X^k(n,p))$ are \textup{(}trivially\textup{)} zero, and the remaining $\binom{n-1}{k}$ eigenvalues lie in the interval $[1-\gamma_\Delta,1+\gamma_\Delta]$. In particular, $\tilde{H}^{k-1}(X^k(n,p);\myboldmath{R})=0$.
\end{enumerate}
For the adjacency matrix \textup{(i)} even holds for $p\geq \gamma\cdot \log n/n$.
\end{theorem}
Both concentration results are achieved by reducing the higher-dimensional problem to estimates for the eigenvalues of random graphs, i.e., to Theorem~\ref{thm:concentration-graph-eigenvalues}. For the normalized Laplacian this is done by applying a fundamental estimate due to Garland~\cite{Garland}
that relates the eigenvalues of the higher-dimensional matrix to those of the graphs that arise as links of $(k-2)$-dimensional faces. For the generalized adjacency matrix we develop an analogous result (see Section~\ref{sec:Garland}).
Compared to the extended abstract \cite{GundertWagner-2012} of this paper, Theorem~\ref{EigenvaluesRandomComplexes} contains an improved concentration for the eigenvalues of $A_{k-1}$ in intervals of width $O(\sqrt{d})$ around the typical eigenvalues, as opposed to $O(\sqrt{d\log n})$.
Theorem~\ref{EigenvaluesRandomComplexes} also applies to any other random model for simplicial complexes with $n$ vertices and complete $(k-1)$-skeleton in which the links of $(k-2)$-faces are random graphs with distribution $G(n-k+1,p)$.
We use this for our second result, a probabilistic construction of a counterexample for a conjectural higher-dimensional discrete Cheeger inequality (Theorem~\ref{thm:counterexample} below).
\subsubsection*{Edge Expansion and the Cheeger Inequality for Graphs}
For a graph of arbitrary density, its \emph{edge expansion} can be defined as follows. Let $\varepsilon>0$ be a parameter. We say that $G=(V,E)$ is $\varepsilon$-edge expanding if for every $S\subseteq V$,
\begin{equation}
\label{eq:edge-expansion}
\frac{|E(S,V\setminus S)|}{|E|} \geq \varepsilon \cdot \frac{\min\{|S|,|V\setminus S|\}}{|V|},
\end{equation}
where $E(S,V\setminus S)=\{\{u,v\}\in E: u\in S,v\in V\setminus S\}$ is the set of edges across the cut $(S,V\setminus S)$. Moreover, we call the best possible constant $\varepsilon$ the \emph{edge expansion} of $G$ and denote it by $\varepsilon(G)$.\footnote{Note that (\ref{eq:edge-expansion}) is equivalent, to the more common condition that $|E(S,V\setminus S)| \geq\frac{\varepsilon}{2} \cdot d\cdot |S|$ for all $S\subseteq V$ with $|S|\leq |V|/2$, where $d=2|E|/|V|$ is the average degree. Thus, $\varepsilon(G)=2h(G)$, where $h(G):=\min\{\frac{|E(S,V\setminus S)|}{d|S|}:S\subseteq V,|S|\leq |V|/2\}$ is the (normalized) \emph{Cheeger constant}
of $G$.}
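As an illustration (ours), the edge expansion $\varepsilon(G)$ of a small graph can be computed by brute force directly from \eqref{eq:edge-expansion}; this is of course only feasible for very small vertex sets.
\begin{verbatim}
import itertools

def edge_expansion(vertices, edges):
    # best constant eps in (eq:edge-expansion), by brute force over all cuts
    V, E = list(vertices), list(edges)
    best = float("inf")
    for k in range(1, len(V)):
        for S in itertools.combinations(V, k):
            S = set(S)
            cut = sum(1 for u, v in E if (u in S) != (v in S))
            ratio = (cut / len(E)) / (min(len(S), len(V) - len(S)) / len(V))
            best = min(best, ratio)
    return best

# 4-cycle: the worst cut is a pair of adjacent vertices, giving eps = 1
print(edge_expansion(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)]))
\end{verbatim}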
For a survey of the numerous applications of graph expansion in theoretical computer science and connections to other branches of mathematics, we refer to \cite{HooryLinialWigderson:ExpanderGraphs-2006}.
As mentioned above, the edge expansion of a graph is controlled by the second-smallest eigenvalue of its Laplacian. Here, we state this fact in its simplest form, for $d$-regular graphs (due to Dodziuk \cite{Dodziuk:DifferenceEquations-1984} and to Alon and Milman \cite{Alon:1985jg,Alon:1986wi}; Cheeger~\cite{Cheeger:LowerBoundSmallestEigenvalueLaplacian-1970} proved an analogous result for Laplacians on Riemannian manifolds). A version for non-regular graphs, with a slightly different notion of edge expansion, can be found, e.g., in \cite{Chung:SpectralGraphTheory-1997}.
\begin{theorem}[Discrete Cheeger Inequality]\label{cheeger}\hspace{0.05cm}
Let $G=(V,E)$ be a $d$-regular graph, and let $\lambda_2=\lambda_2(\Delta(G))$ be the second-smallest eigenvalue of its normalized Laplacian.
Then the edge expansion $\varepsilon(G)$ satisfies
$$\lambda_2 \leq \varepsilon(G) \leq \sqrt{8\lambda_2}.$$
\end{theorem}
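As an illustration (again our own, not part of the original text), the following Python sketch checks both inequalities of Theorem~\ref{cheeger} numerically for the $3$-regular cube graph $Q_3$, where one finds $\lambda_2=\varepsilon(Q_3)=2/3$; it reuses the \texttt{edge\_expansion} helper from the previous sketch and uses \texttt{numpy} for the spectrum.
\begin{verbatim}
import numpy as np

def cheeger_check(vertices, edges, d):
    idx = {v: i for i, v in enumerate(vertices)}
    A = np.zeros((len(vertices), len(vertices)))
    for u, v in edges:
        A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0
    Delta = np.eye(len(vertices)) - A / d     # normalized Laplacian, d-regular case
    lam2 = np.linalg.eigvalsh(Delta)[1]       # eigvalsh returns ascending order
    eps = edge_expansion(vertices, edges)
    return lam2, eps, np.sqrt(8 * lam2)       # should be weakly increasing

# Q_3: vertices are 0/1-triples, edges join triples differing in one coordinate
verts = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
edges = [(u, v) for u in verts for v in verts
         if u < v and sum(x != y for x, y in zip(u, v)) == 1]
print(cheeger_check(verts, edges, d=3))       # (0.666..., 0.666..., 2.309...)
\end{verbatim}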
The inequality on the left-hand side is proved fairly easily by expressing the characteristic function $\mathbf{1}_S\in \myboldmath{R}^V$ of a subset $S\subseteq V$ as a linear combination of eigenvectors of the Laplacian $\Delta$. We will refer to this as ``\emph{the easy part of the Cheeger inequality}.''
The harder part is the inequality on the right-hand side. For a short proof see, e.g., \cite{AlonSchwartzShapira:ElementaryConstructionExpanders-2008}.
We remark that even the easy part of the Cheeger inequality is very useful. For instance, essentially all explicit constructions of constant-degree expanders \cite{Margulis:ExplicitConstructionsExpanders-1973,GabberGalil:ExplicitConstructionsSuperconcentrators-1981,LubotzkyPhillipsSarnak:RamanujanGraphs-1988,Margulis:ExplicitGroupTheoreticConstructions-1988,ReingoldVadhanWigderson:ZigZagProduct-2002}
prove a lower bound on the edge expansion of the constructed graphs by analyzing their eigenvalues.
\subsubsection*{Higher-Dimensional Expansion}
Recently, a higher-dimensional analogue of the edge expansion of graphs, \emph{coboundary expansion} (more precisely, $\myboldmath{Z}_2$-coboundary expansion), arose in the work of Gromov~\cite{Gromov:SingularitiesExpanders2-2010} and of Linial, Meshulam and Wallach~\cite{LinialMeshulam:HomologicalConnectivityRandom2Complexes-2006,MeshulamWallach:HomologicalConnectivityRandomComplexes-2009}.
The precise definition will be given in Section~\ref{sec:preliminaries}. (For further related results, see also \cite{Fox:2010uq,Karasev:Gromov-2010,Newman:2011,MatousekWagner:AlsoSprachGromov-2011,Dotterrer:2010fk}.)
It is natural to ask whether there is a higher-dimensional analogue of the discrete Cheeger inequality; this question was raised explicitly, e.g., by Dotterrer and Kahle~\cite{Dotterrer:2010fk}. As our second result we show, by a simple probabilistic construction, that the most straightforward attempt at a higher-dimensional Cheeger inequality fails, even for the ``easy part''. In higher dimensions, \emph{spectral expansion} (an eigenvalue gap for the Laplacian) does not imply $\myboldmath{Z}_2$-coboundary expansion:
\begin{theorem}\label{thm:counterexample}
For every $k>1$ there is an infinite family of $k$-dimensional complexes $(Y^k_n)_{n \in \myboldmath{N}}$, where $Y^k_n$ has $n$ vertices, that is \emph{spectrally but not coboundary expanding} in dimension $k$.
More precisely, all nontrivial eigenvalues of $\Delta_{k-1}^\textup{up}(Y^k_n)$ are $1\pm O(1/\sqrt{n})$,
but every $Y_n$ contains a cochain $a \in C^{k-1}(Y_n;\myboldmath{Z}_2)$ of normalized Hamming weight $\|[a]\| \geq \frac{1}{2}-o(1)$ with $\|\delta a\|=O(\log n/n)$. Furthermore, $Y_n$ can be chosen such that $H_i(Y_n;\myboldmath{Z}) = 0$ for all $i\leq k-1$.
\end{theorem}
For a graph $G$ and any abelian group $\myboldmath{G}$, $\tilde{H}^{0}(G;\myboldmath{G})=0$ iff $G$ is connected. In higher dimensions, however, it is well-known that the vanishing of a cohomology group may depend on the choice of coefficients. A basic example for this is the real projective plane $\myboldmath{R} P^2$ for which $\tilde{H}^1(\myboldmath{R} P^2;\myboldmath{R}) = 0$ but $\tilde{H}^1(\myboldmath{R} P^2;\myboldmath{Z}_2) = \myboldmath{Z}_2$. In general, $\tilde{H}^1(Y;\myboldmath{G}) = 0$ iff $Y$ is $\varepsilon$-expanding, with respect to a given norm on $\myboldmath{G}$-cochains, for some small $\varepsilon>0$ that may depend on $Y$. Thus, the point of Theorem~\ref{thm:counterexample} is that there is an infinite family of examples whose coboundary expansion tends to zero (as fast as $\log n/n$) while the spectral expansion is bounded away from zero (in fact, equal to $1\pm O(1/\sqrt{n})$).
Compared to the extended abstract \cite{GundertWagner-2012} of this paper, the probabilistic construction behind Theorem~\ref{thm:counterexample} has been adapted to also allow for $H_{k-1}(Y_n;\myboldmath{Z})$ to be trivial. To influence the random behaviour we choose two probabilities $p,q \geq C\cdot \log(n)/n$ for suitably large $C$ with $q=o(p)$.
The construction then covers a whole range of parameters:
\[
|f_k(Y_n)-\tfrac{p}{2} \tbinom{n}{k+1}| \leq o(1) \tfrac{p}{2}\tbinom{n}{k+1},\quad \|\delta a\|=O\Big(\frac{q}{p}\Big),
\] while all nontrivial eigenvalues of $\Delta_{k-1}^\textup{up}(Y_n)$ lie in the interval $[1-\gamma,1+\gamma]$ with $\gamma=O\big(1/\sqrt{(p/2)n}\big)$.
The concentration of eigenvalues is essentially optimal, as one can show\footnote{This can be shown analogously to the corresponding bound \eqref{BoundAdjacencySpectrumGraph} for graphs, see Preliminaries.} that $\Delta_{k-1}^\textup{up}(X)$ always has a non-trivial eigenvalue $\lambda$ with $1-\lambda \geq \sqrt{k/d_{\max}\cdot(n-d_{\max})/(n-k)}$, where $d_{\max}$ is the maximal degree of a $k$-face in $X$, and the expected degree in $Y_n$ is $O((p/2)n)$.
In the extremal case $q = C\cdot \log(n)/n$ and $p=1$, we achieve a coboundary expansion of order $O(\log(n)/n)$ and eigenvalue concentration in $[1-O(1/\sqrt{n}),1+O(1/\sqrt{n})]$.
Of course it is just as natural to ask whether the other (``non-easy'') part of the Cheeger inequality has a simple higher-dimensional generalization. Even though any simplicial complex with non-zero $\myboldmath{Z}_2$-coboundary expansion has to have non-zero spectral expansion, it has been shown that no straightforward generalization of this part of the Cheeger inequality can hold in higher dimensions either: There is an infinite family of simplicial $k$-balls $X_n$ with spectral expansion $O(1/\log(n)^{\log(k)})$ and coboundary expansion $\Omega(1/\log(n))$, see~\cite{Steenbergen:2012}.
To the best of our knowledge, it is an open question whether there are complexes
with coboundary expansion bounded away from zero and spectral expansion tending to zero.
\subsubsection*{Related Work}
A recent article by Steenbergen, Klivans and Mukherjee~\cite{Steenbergen:2012} also presents a class of counterexamples for the most straightforward attempt at a higher-dimensional Cheeger inequality -- an explicit construction for an infinite family of simplicial $k$-balls $X_n$ whose spectral expansion is bounded away from zero, while the coboundary expansion tends to zero.
Here, the non-trivial eigenvalues of $\Delta_{k-1}^\textup{up}(X_n)$ are bounded below by a constant depending on the dimension $k$, while the coboundary expansion of $X_n$ is of order $\Theta(1/\log(n))$.
In the same article, the authors present the counterexample for simple higher-dimensional generalizations of the other (``non-easy'') part of the Cheeger inequality mentioned above.
Chung~\cite{Chung:1993} studies a higher Laplacian for hypergraphs that is closely related\footnote{One difference is that Chung's Laplacian operates not just on cochains, i.e., skew-symmetric functions on oriented simplices, but on arbitrary real-valued functions.} to the combinatorial Laplacian $L_{k-1}=L_{k-1}^\textup{up}+L_{k-1}^\down$. In \cite[Section~7]{Chung:1993}, she proves a somewhat weaker concentration result
for eigenvalues of random hypergraphs, namely, essentially, that for \emph{constant} $p$ and any $\varepsilon>0$, the eigenvalues of $L_{k-1}(X^k(n,p))$ are concentrated in an interval of width $O(n^{1/2+\varepsilon})$. She also states, without proof, that the proof methods for random graphs can be extended to yield the sharp bound of $O(\sqrt{pn})$.
The probabilistic construction of the examples in Theorem~\ref{thm:counterexample} is well-known in the study of quasirandomness for hypergraphs, see, e.g., the discussion in \cite[Section~5]{Gowers:3UniformHypergraphs-2006}. In \cite[Section~8]{Chung:1993}, it is asserted, again without proof,
that the eigenvalues of the combinatorial Laplacian of these examples are concentrated in an interval of width $O(\sqrt{pn})$, but we are not aware of a proof appearing in the literature.
Hoffman, Kahle and Paquette prove closely related results in their preprint~\cite{Hoffman:2012}.
They improve previous results on eigenvalues of random graphs and achieve precise information about the constant factor in the threshold.
Using a result by \.{Z}uk \cite{Zuk:1996}, which is a strengthening of Garland's estimate, they obtain as an immediate corollary that for $p\geq (2+\varepsilon)\frac{\log n}{n}$, the fundamental group of the random $2$-complex $X^2(n,p)$ a.a.s. has Property~(T).
Using a weaker combinatorial notion of higher-dimensional expansion, but the same notion of Laplacian spectra, Parzanchevski, Rosenthal and Tessler show a version of a higher-dimensional Cheeger inequality~\cite{Parzanchevski:2012}. While $\myboldmath{Z}_2$-coboundary expanding complexes also possess this weaker notion of expansion, the converse is not true (see, e.g., \cite{GundertSzedlak-2014}, where an extension of their result is presented).
In another recent article, Lu and Peng~\cite{Lu:2011} study a rather different kind of Laplacian for random complexes. Specifically, given a $k$-dimensional complex $X$ on a vertex set $V$ and a parameter $s\leq \frac{k+1}{2}$, they consider an auxiliary weighted graph on the vertex set $\binom{V}{s}$ in which $I,J\in \binom{V}{s}$ are connected by an edge of weight $w$ if $I\cap J=\emptyset$ and $I$ and $J$ are contained in precisely $w$ common $k$-faces of $X$. Lu and Peng study the normalized Laplacian of this auxiliary weighted graph. However, this Laplacian seems to capture the topology of $X$ only in a limited way. For instance, in the case $k=2$ and $s=1$, any two $2$-dimensional complexes on $n$ vertices that have a complete $1$-skeleton and are $d$-regular (every edge is contained in $d$ triangles) yield the same auxiliary graph, even though the topologies of these complexes (as measured by real cohomology groups and the usual Laplacian, say) may be very different.
\section{Preliminaries}
\label{sec:preliminaries}
\subsection{More on Eigenvalues of Graphs}
It is known that the spectrum of the normalized Laplacian $\Delta$ is contained in the interval $[0,2]$, and that $\lambda_n(\Delta)=2$ iff $G$ has a nontrivial \emph{bipartite} connected component \cite[Lemma 1.7]{Chung:SpectralGraphTheory-1997}. Moreover, if $G$ has no isolated vertices then
$\lambda_{n}(\Delta)\geq \frac{n}{n-1}$.
If $G$ is \emph{$d$-regular}, i.e., $\deg_G(v)=d$ for all $v\in V$ (where $d$ may depend on $n$), then $L=d\cdot I-A=d\cdot \Delta$, and so the spectra of $A$, $L$, and $\Delta$ are equivalent (up to scaling and linear shifts): $\lambda_i(L)=d\cdot \lambda_i(\Delta)$ and $\mu_i(A)=d-\lambda_i(L)$, $1\leq i\leq n$. In particular, $\mu_1(A)=d$, $\mu_2(A)<d$ iff $G$ is connected, and $\mu_n(A)=-d$ iff $G$ has a nontrivial bipartite connected component.
For $\mu(G)=\max\{\mu_2(A),|\mu_n(A)|\}$, it is not hard to show that for every $d$-regular graph
\begin{equation}\label{BoundAdjacencySpectrumGraph}
\mu(G) \geq \sqrt{d\cdot(n-d)/(n-1)}
\end{equation}
(see, e.g., \cite[Claim~2.8]{HooryLinialWigderson:ExpanderGraphs-2006}). Hence $\mu(G) \geq \Omega(\sqrt{d})$ for $d\leq 0.99n$, say, which shows that the concentration results for the eigenvalues of random graphs are essentially optimal.
For constant $d$, one has the sharper \emph{Alon-Boppana bound} $\mu(G) \geq 2\sqrt{d-1}\cdot (1-O(1/\log^2 n))$, see \cite{Nilli:SecondEigenvalueGraph-1991,Friedman:GeometricAspectsGraphsEigenfunctions-1993}.
A $d$-regular graph $G$ is called a \emph{Ramanujan graph} if it meets this bound for the spectral gap, i.e., if $\mu(G)\leq 2\sqrt{d-1}$.
It is a deep result due to Lubotzky, Phillips and Sarnak~\cite{LubotzkyPhillipsSarnak:RamanujanGraphs-1988} and independently to Margulis~\cite{Margulis:ExplicitGroupTheoreticConstructions-1988} that for every fixed number $d$ with $d-1$ prime, there exist Ramanujan graphs on $n$ vertices for infinitely many $n$ (and moreover, these graphs can be explicitly constructed). Recently, the existence of bipartite Ramanujan graphs with arbitrary degree and arbitrary number of vertices has been established by Marcus, Spielman and Srivastava \cite{MarcusSpielmanSrivastava2015,MarcusSpielmanSrivastava2015-2}.
\subsection{Eigenvalues of Random Graphs}\label{subsec:EigenvaluesAdjacencyRandomGraphs}
In the introduction, Theorem~\ref{thm:concentration-graph-eigenvalues} summarizes known results on the concentration of eigenvalues for random graphs $G(n,p)$. Here we want to explain the corresponding references in more detail.
For the normalized Laplacian the situation is simple: Building on the results for the adjacency matrix and relating the spectrum of $\Delta(G(n,p))$ to that of $A(G(n,p))$, Coja-Oghlan \cite{CojaOghlan:2007gj} proved the result for the normalized Laplacian for probabilities $p\geq C \cdot \log(n)/n$ with a suitable constant $C$. For $p \gg (\log n)^2/n$ this was also shown by Chung, Lu and Vu \cite{Chung:2003wn}. A recent preprint by Hoffman, Kahle and Paquette \cite{Hoffman:2012} gives the precise result allowing all constants $C>1$ (and even $C >\frac{1}{2}$ when considering only the giant component of $G(n,p)$).
For the adjacency matrix the situation in the literature is more involved:
F\"uredi and Koml\'os~\cite{Furedi:1981ti} showed that for constant $p$ a.a.s. $\mu(G(n,p))=O(\sqrt{d})$, where $d=p(n-1)$ is the expected average degree. Their method of proof, the so-called \emph{trace method}, can be adapted to cover the range $\frac{\ln(n)^7}{n} \leq p \leq 1 - \frac{\ln(n)^7}{n}$ (see \cite{CojaOghlan:2005}).
Feige and Ofek~\cite{Feige:2005hp} extended the result to values of $p$ as small as $C\cdot \log n/n$, but their proof requires an upper bound on $p$. They used methods of Friedman, Kahn, and Szemer\'edi~\cite{Friedman:1989}, who proved that $\mu(G)=O(\sqrt{d})$ holds a.a.s.\ for \emph{random $d$-regular graphs} with constant $d$.
The most precise result is again by Hoffman, Kahle and Paquette \cite{Hoffman:2012}, who show that $\mu(G(n,p))=O(\sqrt{d})$ a.a.s. for $p\geq\gamma \log(n)/n$ for \emph{all} $\gamma>0$.
More precisely, in \cite{Hoffman:2012} it is shown that a.a.s.
\begin{equation}\label{preciseStatementAdjacencyGraphs}
|\langle Ax,y\rangle| =O(\sqrt{d}) \text{ for all unit vectors } x,y \text{ such that } x\perp\mathbf{1}.
\end{equation}
This, together with $\frac{1}{n}\langle A\mathbf{1},\mathbf{1}\rangle=\frac{2|E|}{n} \in [d - O(\sqrt{d}), d +O(\sqrt{d})]$, which follows from a straight-forward application of a Chernoff bound, gives the result as stated in Theorem~\ref{thm:concentration-graph-eigenvalues} (see e.g. \cite[Lemma~2.1]{Feige:2005hp} or Lemma~\ref{LEMMAConditions} in this paper).
We remark that both parts of Theorem~\ref{thm:concentration-graph-eigenvalues} can be extended to very sparse random graphs $G(n,p)$ with $p=\Theta(1/n)$ (for which they fail to hold as stated) by passing to a suitable large \emph{core subgraph}, see \cite{CojaOghlan:2007gj,Feige:2005hp,Hoffman:2012}. Moreover, analogous results are also known for other random graph models, including random $d$-regular graphs (see above) and
random graphs with prescribed expected degree sequences \cite{Chung:2003wn,CojaOghlan:2009ud}.
\subsection{Simplicial Complexes and Cohomology}
A (finite, abstract) \emph{simplicial complex} $X$ is a finite set system that is closed under taking subsets, i.e.\ $F \subseteq G \in X$ implies $F \in X$. The sets in $X$ are called \emph{simplices} or \emph{faces} of $X$. The \emph{dimension} of a face $F$ is $\dim(F):=|F|-1$. We denote the set of $i$-dimensional faces of $X$ by $X_i$. The dimension of $X$ is the maximum dimension of any of its faces.
The $0$\mbox{-}dimensional faces are called \emph{vertices}. Formally, these are singletons (one-element sets) but in this context we will usually identify the singleton $\{v\}$ with its unique element $v$.
A $k$-dimensional simplicial complex is \emph{pure} if all maximal simplices in $X$ have dimension $k$. We define the \emph{degree} of a face $F$ as $\deg(F)=|\{G \in X_k : F \subseteq G\}|$. The \emph{link} of $F$ in $X$ is $\lk(F,X):=\{G \in X \colon F \cup G \in X, F \cap G=\emptyset\}$.
We denote by $K_n^k$ the \emph{complete $k$-dimensional complex} on $n$ vertices, i.e.
$K_n^k = \{F \subseteq [n]: |F| \leq k+1\}.$
\subsubsection*{Orientations and Incidence Numbers} Throughout we assume that we have fixed a linear ordering on the vertex set $V:=X_0$ of $X$, and we consider the faces of $X$ with the orientations given by the order of their vertices. Formally, consider an $i$-simplex $F=\{v_0, v_1 ,\ldots, v_{i}\} \in X_i$, where $v_0<v_1<\ldots <v_i$. For an $(i-1)$-simplex $G\in X_{i-1}$, we define the \emph{oriented incidence number} $[F:G]$ by setting $[F:G]:=(-1)^j$ if $G\subseteq F$ and $F\setminus G=\{v_j\}$, $0\leq j\leq i$, and $[F:G]:=0$ if $G\not\subseteq F$. In particular, for every vertex $v\in X_0$ and the unique empty face $\emptyset\in X_{-1}$, we have $[v:\emptyset]=1$.
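The oriented incidence numbers are easily computed; the following Python helper (our own illustration, not part of the paper, with faces represented as tuples of integer vertices ordered by the usual order on the integers) is reused in the sketches below.
\begin{verbatim}
def incidence(F, G):
    """[F:G] for an i-face F and an (i-1)-face G, given as tuples of vertices."""
    if not set(G) <= set(F) or len(F) != len(G) + 1:
        return 0
    v = (set(F) - set(G)).pop()        # the unique vertex of F missing from G
    return (-1) ** sorted(F).index(v)  # (-1)^j, where v is the j-th vertex of F

print(incidence((1, 2, 3), (1, 3)))    # F \ G = {2} is the vertex v_1, so [F:G] = -1
print(incidence((5,), ()))             # [v : empty face] = +1
\end{verbatim}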
\subsubsection*{Cohomology}
Let $X$ be a finite simplicial complex and let $\myboldmath{G}$ be an Abelian group (we will mostly be concerned with the cases $\myboldmath{G}=\myboldmath{Z}_2$ and $\myboldmath{G}=\myboldmath{R}$, respectively). We denote by $C^i(X;\myboldmath{G})$ the group $\myboldmath{G}^{X_i}$ of functions from $X_i$ to $\myboldmath{G}$, which are called \emph{$i$-dimensional cochains of $X$ with coefficients in $\myboldmath{G}$}. In particular, since $\emptyset$ is the unique empty face of $X$, we have $C^{-1}(X;\myboldmath{G})\cong \myboldmath{G}$. It is convenient to define $C^i(X;\myboldmath{G}):=0$ for $i<-1$ or $i>\dim X$.
The characteristic functions $e_F$ of faces $F\in X_i$ form a basis of $C^i(X;\myboldmath{G})$. They are called \emph{elementary cochains}.
The \emph{coboundary map} $\delta_i\colon C^i(X;\myboldmath{G}) \rightarrow C^{i+1}(X,\myboldmath{G})$ is the linear map given by
\[
(\delta_i f)(F) := \sum_{G \in X_i} [F:G]\cdot f(G)
\]
for $f\in C^i(X;\myboldmath{G})$, $-1\leq i<\dim X$, and $\delta_i=0$ otherwise.
It is an easy but central observation that the composition $\delta_i\circ \delta_{i-1}=0$, which means that $B^i(X;\myboldmath{G}):=\operatorname{im} \delta_{i-1} \subseteq Z^i(X;\myboldmath{G}):=\ker \delta_i$. The elements of $B^i(X;\myboldmath{G})$ and $Z^i(X;\myboldmath{G})$ are called $i$-dimensional \emph{coboundaries} and \emph{cocycles}, respectively. Since $B^i(X;\myboldmath{G}) \subseteq Z^i(X;\myboldmath{G})$, we can form the quotient group $\tilde{H}^i(X;\myboldmath{G}):=Z^i(X;\myboldmath{G})/B^i(X;\myboldmath{G})$, the $i$-th (reduced) \emph{cohomology group} of $X$ with coefficients in $\myboldmath{G}$.
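To illustrate these definitions, the following Python sketch (an illustration of ours, not taken from the paper; it reuses the \texttt{incidence} helper above and uses \texttt{numpy}) assembles the matrices of the coboundary maps over $\myboldmath{R}$ and verifies $\delta_1\circ\delta_0=0$ for the complete $2$-dimensional complex on four vertices.
\begin{verbatim}
import numpy as np
from itertools import combinations

def coboundary_matrix(faces_i, faces_ip1):
    """Matrix of delta_i : C^i -> C^{i+1} in the bases of elementary cochains."""
    D = np.zeros((len(faces_ip1), len(faces_i)))
    for r, F in enumerate(faces_ip1):
        for c, G in enumerate(faces_i):
            D[r, c] = incidence(F, G)
    return D

# K_4^2: the complete 2-dimensional complex on the vertex set {0,1,2,3}
X = {i: sorted(combinations(range(4), i + 1)) for i in range(3)}
d0 = coboundary_matrix(X[0], X[1])
d1 = coboundary_matrix(X[1], X[2])
print(np.allclose(d1 @ d0, 0))   # True: delta_1 . delta_0 = 0, so B^1 is contained in Z^1
\end{verbatim}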
\subsection{Norms on Cochains and Expansion}
\label{sec:expansion}
We now describe a very general definition of \emph{expansion} for simplicial complexes, which was introduced in \cite{Gromov:SingularitiesExpanders2-2010} (with a slightly different normalization and under the name \emph{inverse \textup{(}co\textup{)}filling norm}).
Let $X$ be a finite simplicial complex. Assume that every cochain group $C^i(X;\myboldmath{G})$ is equipped with a \emph{pseudonorm} $\|\cdot \|$, taking real values and satisfying $\|f\|=\|-f\|$ and $\|f+g\|\leq \|f\|+\|g\|$ for all $f,g\in C^i(X;\myboldmath{G})$. We will focus on the following two cases.
\begin{enumerate}
\item \textbf{$\myboldmath{R}$-cochains with weighted $\ell_2$-norm:} Assume that we are given a \emph{weight function $w$} with nonnegative real values on the simplices of $X$. Define by $\langle f,g\rangle :=\sum_{F\in X_i} w(F)f(F)g(F)$ a weighted inner product on $C^i(X;\myboldmath{R})$. Observe that the inner products obtained in this way are characterized by the condition that the elementary cochains be pairwise orthogonal.
We then consider the corresponding weighted $\ell_2$-norm
$\|f\|=\|f\|_2:=\sqrt{\langle f,f\rangle}.$
\item \textbf{$\myboldmath{Z}_2$-cochains with weighted Hamming norm:} Let $w$ be as before and define the \emph{weighted Hamming norm} on $C^i(X;\myboldmath{Z}_2)$ by $\|f\|:=\sum_{F\in X_i:f(F)=1}w(F).$
\end{enumerate}
The idea is to define a notion of $i$-dimensional expansion that provides lower bounds for the norm of the coboundary $\delta_{i-1}(f)\in C^i(X;\myboldmath{G})$ of $(i-1)$-dimensional cochains $f\in C^{i-1}(X;\myboldmath{G})$. However, we cannot define such a lower bound in terms of the norm $\|f\|$ of $f$, since the set $B^{i-1}(X;\myboldmath{G})$ is always contained in the kernel of the coboundary operator $\delta=\delta_{i-1}$. Thus, the right comparison measure is the \emph{distance} of a cochain $f$ from this \emph{trivial part of the kernel}. That is, we define, for $f\in C^{i-1}(X;\myboldmath{G})$,
$$\|[f]\|:=\min\{\|f+\delta_{i-2}g\|\colon g\in C^{i-2}(X;\myboldmath{G})\}.$$
\subsubsection*{Coboundary Expansion for Arbitrary Coefficients}\label{def:face-expansion}
Suppose every cochain group $C^i(X;\myboldmath{G})$ is equipped with a pseudonorm $\|\cdot\|$ as above. We say that $X$ is \emph{$\varepsilon$-expanding in dimension $i$ }
(with respect to
$\myboldmath{G}$ and the given norm)
if
$$
\|\delta f \| \geq \varepsilon \cdot \|[f]\|
$$
for all $f \in C^{i-1}(X;\myboldmath{G})$. The best possible $\varepsilon$ is called the $i$-dimensional expansion of $X$. Note that, in particular, $\tilde{H}^{i-1}(X;\myboldmath{G})=0$ if $X$ has $i$-dimensional expansion $\varepsilon>0$.
For an infinite family of $k$-dimensional complexes $(X_n)_{n \in \myboldmath{N}}$ (where $k$ is fixed and independent of $n$) we say that the family $(X_n)$ is \emph{expanding in dimension $i$} (with respect to $\myboldmath{G}$ and the given norm) if the $i$-dimensional expansion of all $X_n$ is bounded away from zero.
\subsubsection*{$\myboldmath{Z}_2$-Coboundary Expansion}
Now we focus on the case of $\myboldmath{Z}_2$-coefficients. Define a weight function by $w(F):=1/|X_i|$ for $F\in X_i$ (whenever $|X_i|>0$). In this setting, the normalized Hamming weight of a $\myboldmath{Z}_2$-cochain $f\in C^{i-1}(X;\myboldmath{Z}_2)$ is just the number of faces in the support of $f$ divided by the number of all $(i-1)$-faces of $X$.
If $X$ is $\varepsilon$-expanding in dimension $i$ with respect to this norm, we also say that $X$ is \emph{$\myboldmath{Z}_2$-coboundary $\varepsilon$-expanding} in dimension $i$.
Note that in the case $i=1$ of graphs, there are just two $0$-dimensional coboundaries, namely the constant functions ${\bf 0}$ and $\mathbf{1}$ on the set $V=X_0$ of vertices. Moreover, a $0$-dimensional cochain $f\in C^0(X;\myboldmath{Z}_2)$ is in bijective correspondence with its support $S=\{v\in V\colon f(v)=1\}\subseteq V$, and $\|[f]\|=\frac{\min\{|S|,|V\setminus S|\}}{|V|}$. Thus, $1$-dimensional $\myboldmath{Z}_2$-coboundary
expansion corresponds precisely to the definition (\ref{eq:edge-expansion}) of edge expansion discussed in the introduction.
A basic observation in this context is that complete complexes are $\myboldmath{Z}_2$-coboundary expanding in all dimensions.
This was observed independently by Gromov~\cite{Gromov:SingularitiesExpanders2-2010}, Linial, Meshulam and Wallach~\cite{LinialMeshulam:HomologicalConnectivityRandom2Complexes-2006,MeshulamWallach:HomologicalConnectivityRandomComplexes-2009} and Newman and Rabinovich~\cite{Newman:2011}:
\begin{proposition}
\label{prop:gromov}
The complete complex $K^k_n$ has $i$-dimen\-sional $\myboldmath{Z}_2$-coboundary
expansion $1$ for all $i \in \{0,1,\ldots, k\}$.
\end{proposition}
From this, standard Chernoff bounds immediately imply that a.a.s., $X^k(n,p)$ is $\myboldmath{Z}_2$-coboundary expanding in dimension $k$ and $H^{k-1}(X^k(n,p);\myboldmath{Z}_2)=0$ if $p> C\log n/n$ for a suitable constant $C$.
Much of the work in \cite{LinialMeshulam:HomologicalConnectivityRandom2Complexes-2006,MeshulamWallach:HomologicalConnectivityRandomComplexes-2009} is devoted to refining this argument to obtain the optimal constant $C=k$ for the threshold.
Dotterrer and Kahle~\cite{Dotterrer:2010fk} prove results analogous to Proposition~\ref{prop:gromov} for some other complexes, specifically for skeleta of crosspolytopes and for complete multipartite complexes. They also explicitly raise the question whether there is some higher-dimensional analogue of the Cheeger inequality.
The most straightforward attempt at such an inequality would be to relate $\myboldmath{Z}_2$-coboundary expansion and eigenvalue gaps of higher-dimensional Laplacians, which we discuss next.
\subsection{Matrices and their spectra}
A symmetric real ($n\times n$)-matrix has a multiset of $n$ real eigenvalues, called its \emph{spectrum}, and $\myboldmath{R}^n$ has an orthonormal basis of corresponding eigenvectors.
We recall the variational characterization of eigenvalues:
\begin{theorem}[Courant-Fischer Theorem, see e.g.\ {\cite[Theorem 4.2.11]{HornJohnson}}]\label{CourantFischer}
Let $M \in \myboldmath{R}^{n \times n}$ be a symmetric matrix with eigenvalues $\lambda_1 \leq \lambda_2 \leq \ldots \leq \lambda_n$, and let $k$ be a given integer with $1\leq k \leq n$. Then
$$
\lambda_k = \min_{w_1,w_2,\ldots,w_{n-k} \in \myboldmath{R}^n} \max_{\substack{x \neq 0, x \in \myboldmath{R}^n\\x \perp w_1,w_2,\ldots,w_{n-k}}} \frac{\langle M x, x \rangle}{\langle x, x \rangle}
$$
and
$$
\lambda_k = \max_{w_1,w_2,\ldots,w_{k-1} \in \myboldmath{R}^n} \min_{\substack{x \neq 0, x \in \myboldmath{R}^n\\x \perp w_1,w_2,\ldots,w_{k-1}}} \frac{\langle M x, x \rangle}{\langle x, x \rangle}.
$$
\end{theorem}
For a matrix $M$ we denote its $\ell_2$-norm by $\|M\| = \max_{x\neq 0}\|Mx\|/\|x\|$, which for a symmetric matrix $M$ equals the largest absolute value of an eigenvalue of $M$.
\subsection{Higher-Dimensional Laplacians and Adjacency Matrices}
\label{sec:matrices-complexes}
We introduce generalizations of the graph Laplacians and the adjacency matrix for a $k$-di\-men\-sion\-al complex in all dimensions $0 \leq i \leq k-1$. Later on, we will only be concerned with these matrices in dimension $k-1$.
\subsubsection*{Adjacency matrices}
For a finite $k$-dimensional simplicial complex $X$ and $0 \leq i \leq k-1$ we define the \emph{adjacency matrix} $A_i=A_i(X)$ by
\[
(A_i(X))_{F,G} = \begin{cases}
-[F \cup G:F][F\cup G:G] = [F:F \cap G][G:F\cap G]& \text{if $F \sim G$},\\
0& \text{otherwise,}
\end{cases}
\]
where $F,G \in X_i$ and we write $F \sim G$ if $F$ and $G$ share a common $(i-1)$-face $F\cap G$ and $F \cup G \in X_{i+1}$.
Figure~\ref{fig:Adjacency2D} illustrates the case $i=1$. An entry $A_1(X)_{e,e'}$ is non-zero exactly if the two edges $e$ and $e'$ share a common vertex and the triangle $e \cup e'$ is contained in $X$. The sign of $A_1(X)_{e,e'}$ is then determined by the orientations of the two edges.
\begin{figure}[htbp]
\centering\includegraphics{FIGUREAdjacency2D.pdf}
\caption{Signs of non-zero entries $A_1(X)_{e,e'}$. The arrows represent the orientations of edges. \label{fig:Adjacency2D}}
\end{figure}
Note that the matrix $A_0(X)$ agrees with the adjacency matrix of the graph $(X_0,X_1)$ because
$[\{u,v\}\!:\!u][\{u,v\}\!:\!v] = -1$ for all vertices $u,v \in X_0$. The motivation for the signs in higher dimensions will hopefully become clear later on.
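The following Python sketch (again our own illustration, reusing the \texttt{incidence} helper and the complex \texttt{X} from the sketches above) builds $A_i(X)$ from this definition; for $i=0$ it reproduces the ordinary adjacency matrix of the underlying graph.
\begin{verbatim}
import numpy as np

def adjacency_matrix(faces_i, faces_ip1):
    present = set(faces_ip1)
    A = np.zeros((len(faces_i), len(faces_i)))
    for a in range(len(faces_i)):
        for b in range(a + 1, len(faces_i)):
            F, G = faces_i[a], faces_i[b]
            union = tuple(sorted(set(F) | set(G)))
            if len(set(F) & set(G)) == len(F) - 1 and union in present:  # F ~ G
                s = -incidence(union, F) * incidence(union, G)
                A[a, b] = A[b, a] = s
    return A

# i = 0 on K_4^2: recovers the usual adjacency matrix of the complete graph K_4
print(adjacency_matrix(X[0], X[1]))   # all off-diagonal entries are +1
\end{verbatim}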
\subsubsection*{Weighted Laplacians}
Following the exposition in \cite{HorakJost}, we begin by defining a general weighted Laplacian. Suppose we are given a nonnegative weight function $w$ on the faces of a finite simplicial complex $X$ and that the spaces
$C^i(X;\myboldmath{R})$ are equipped with the weighted inner product and the corresponding weighted $\ell_2$-norm as described above.
The elementary cochains $e_F$, $F\in X_i$, form an orthogonal basis of $C^i(X;\myboldmath{R})$. With respect to these bases, the coboundary map $\delta_i\colon C^i(X;\myboldmath{R})\rightarrow C^{i+1}(X;\myboldmath{R})$ is given by the following $|X_{i+1}|\times |X_i|$-matrix (for which we abuse notation and again use the symbol $\delta$):
\[
(\delta_i(X))_{F,G} = [F:G].
\]
Consider the \emph{transpose map} $\delta_i^\ast\colon C^{i+1}(X;\myboldmath{R})\rightarrow C^i(X;\myboldmath{R})$ of $\delta_i(X)$ with respect to the given inner product. This transpose is determined by the condition that $\langle \delta_i^\ast f,g\rangle=\langle f,\delta_i g\rangle$ for all $f\in C^{i+1}(X;\myboldmath{R})$ and $g\in C^{i}(X;\myboldmath{R})$. More explicitly,
$$(\delta_i^\ast f) (G)=\sum_{F\in X_{i+1}} \frac{w(F)}{w(G)}[F:G]f(F)$$
for $f\in C^{i+1}(X;\myboldmath{R})$ and $G\in X_i$.
For example, in the case of unit weights $w(F)=1$ for all $F\in X$, we get the standard inner product on $C^i(X;\myboldmath{R})$,
and $\delta_i^\ast=\partial_{i+1}$ coincides with the usual \emph{boundary map} given on elementary cochains by $\partial_{i+1}(e_F)=\sum_{G\in X_i} [F:G]e_G$, $F\in X_{i+1}$.
In general, for arbitrary weights $w$ on $X$, we define the \emph{weighted Laplacian} by
$$\mathcal{L}_i^\down:=\delta_{i-1}\delta_{i-1}^\ast,\qquad \mathcal{L}_i^\textup{up}:=\delta_i^\ast \delta_i,\qquad \mathcal{L}_i:=\mathcal{L}_i^\down+\mathcal{L}_i^\textup{up}.$$
Note that all three maps $\mathcal{L}_i^\down,\mathcal{L}_i^\textup{up},\mathcal{L}_i$ are \emph{self-adjoint} and \emph{positive semidefinite} (with respect to the given weighted inner product) linear operators on $C^i(X;\myboldmath{R})$.
In general, setting $\mathcal{H}_i=\mathcal{H}_i(X;\myboldmath{R}):=\ker \mathcal{L}_i=\ker \mathcal{L}_i^\down \cap \ker \mathcal{L}_i^\textup{up} = \ker \delta_{i-1}^\ast \cap Z^i(X;\myboldmath{R})$, one gets a \emph{Hodge decomposition} of $C^i(X;\myboldmath{R})$ into pairwise orthogonal subspaces
\begin{equation}\label{eqn:hodge-decomp}
C^i(X;\myboldmath{R})=\mathcal{H}_i\oplus B^i(X;\myboldmath{R})\oplus \operatorname{im} (\delta_{i}^\ast),
\end{equation}
(see \cite{Eckmann:HarmonischeFunktionenRandwertaufgabenKomplex-1945,HorakJost}); in particular, $\mathcal{H}_i\cong H^i(X;\myboldmath{R})$.
\subsubsection*{Spectra of $\mathcal{L}_i^\textup{up}$ and Spectral Expansion}
Observe that, trivially, $B^i(X;\myboldmath{R})\subseteq \ker \mathcal{L}_i^\textup{up}$. Thus, every $f\in B^i(X;\myboldmath{R})$ is an eigenvector of $\mathcal{L}_i^\textup{up}$ with eigenvalue zero. We call these the trivial eigenvectors of $\mathcal{L}_i^\textup{up}$ and the trivial part of its spectrum. Thus, the nontrivial eigenvalues of $\mathcal{L}_i^\textup{up}$ are, by definition, the eigenvalues of the restriction of $\mathcal{L}_i^\textup{up}$ to the orthogonal complement (with respect to the given weighted inner product) $(B^i(X;\myboldmath{R}))^\bot$.
By the variational definition of eigenvalues, the minimal nontrivial eigenvalue of $\mathcal{L}_i^\textup{up}$ is given by
$$\min_{f\bot B^i(X;\myboldmath{R})} \frac{\langle \mathcal{L}_i^\textup{up} f,f\rangle}{\langle f,f\rangle}=\min_{f\bot B^i(X;\myboldmath{R})}\frac{\|\delta_if\|^2}{\|f\|^2}.$$
Thus, we see that the minimal nontrivial eigenvalue of $\mathcal{L}_i^\textup{up}$ is at least $\varepsilon^2$ iff $X$ has $(i+1)$-dimensional expansion at least $\varepsilon$ with respect to the given weighted $\ell_2$-norms on real cochains. In this case, we will also say that $X$ is \emph{spectrally expanding} in dimension $i$.
We focus on the operator $\mathcal{L}_i^\textup{up}$; more precisely, we consider $\mathcal{L}_{k-1}^\textup{up}$ for $k$-dimensional complexes, because it corresponds to coboundary expansion with respect to real coefficients and the $\ell_2$-norm.
The spectra of the other two maps are related: By the Hodge decomposition \eqref{eqn:hodge-decomp} the spectrum of $\mathcal{L}_i$ is determined by the spectra of $\mathcal{L}_i^\down$ and $\mathcal{L}_i^\textup{up}$. For any linear map $A$, the spectra of $AA^{\ast}$ and $A^{\ast}A$ differ only in the multiplicity of $0$; in particular, this holds for the spectra of $\mathcal{L}_i^\textup{up}$ and $\mathcal{L}_{i+1}^\down$.
Nevertheless, as we cover only $\mathcal{L}_{k-1}^\textup{up}$ for $k$-dimensional complexes, our results do not yield corresponding statements on $\mathcal{L}_{k-1}$.
\subsubsection*{Combinatorial Laplacians} The combinatorial Laplacian $L_i=L_i^\down+L_i^\textup{up}$ corresponds to the special case of the standard inner product $\langle f,g\rangle = \sum_{F\in X_i}f(F)g(F)$, that is, the case of \emph{unit weights} $w(F)=1$ for all $F\in X$. Thus, $L_i^\textup{up}=L^\textup{up}_i(X) = \partial_{i+1}\delta_i.$
Recall that the matrix corresponding to the coboundary map $\delta_i$ with respect to the orthogonal basis of elementary cochains is, by abuse of notation, also denoted by $\delta_i=\delta_i(X)$, and its transpose $\delta_i^T$ corresponds to the boundary map $\partial_{i+1}$. The combinatorial Laplacian $L^\textup{up}_i$ can be expressed as the matrix $\delta_i^T\delta_i$.
We can now motivate the signs in the definition of the adjacency matrix $A_i(X)$: Recall that for a graph $G$ the combinatorial Laplacian satisfies $L(G) = D(G) - A(G)$. If we let $D_i(X)$ denote the diagonal matrix with entry ${D_i}_{F,F}=|\{H \in X_{i+1}: F \subset H\}|$ for $F \in X_i$, we also have $L^\textup{up}_i(X) = D_i(X) - A_i(X)$.
\subsubsection*{Normalized Laplacians} Suppose that $X$ is a pure $k$-dimensional simplicial complex.
The normalized Laplacian $\Delta_i=\Delta_i^\down+\Delta_i^\textup{up}$ is the special case of the weighted Laplacian obtained by taking the weight function $w(F):=\deg(F)$. That is, the corresponding weighted inner product is
$$\langle f,g\rangle=\sum_{F\in X_i}\deg(F)f(F)g(F).$$
Let $\delta_i^\ast$ be the adjoint of $\delta_i$ with respect to this weighted inner product. Thus,
$$(\delta_i^\ast f) (G)=\sum_{F\in X_{i+1}} \frac{\deg(F)}{\deg(G)}[F:G]f(F).$$
Note that we have $\deg(F) > 0$ for every $F \in X$, since we assume that $X$ is pure.
The normalized Laplacian is then $\Delta_i^\textup{up}=\Delta_i^\textup{up}(X)=\delta_i^\ast\delta_i$.
With respect to the basis of elementary cochains, the map $\Delta^\textup{up}_i$ corresponds to the matrix $W_i^{-1}\delta_i^TW_{i+1}\delta_i$, where $W_i(X)$ denotes the diagonal matrix with entry ${W_i}_{F,F}=\deg(F)$.
As $W_{k-1} = D_{k-1}$ and $W_k = I$, for $i = k-1$ we can write $\Delta^\textup{up}_{k-1}$ as the matrix $D_{k-1}^{-1}L^\textup{up}_{k-1}=I-D_{k-1}^{-1} A_{k-1}$.
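As a small sanity check (our own, reusing the helpers and the complex \texttt{X} from the sketches above), the following Python lines verify the identity $\Delta^\textup{up}_{k-1}=I-D_{k-1}^{-1}A_{k-1}=W_{k-1}^{-1}\delta_{k-1}^TW_k\delta_{k-1}$ for $X=K_4^2$ (so $k=2$ and $W_k=I$).
\begin{verbatim}
import numpy as np

edges, triangles = X[1], X[2]                   # K_4^2, so k = 2
A1 = adjacency_matrix(edges, triangles)
deg = np.array([sum(set(e) <= set(t) for t in triangles) for e in edges],
               dtype=float)                     # every edge lies in 2 triangles
Dinv = np.diag(1.0 / deg)                       # D_{k-1}^{-1}
Delta_up = np.eye(len(edges)) - Dinv @ A1       # I - D^{-1} A_{k-1}
d1 = coboundary_matrix(edges, triangles)
print(np.allclose(Delta_up, Dinv @ d1.T @ d1))  # True
\end{verbatim}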
\subsubsection*{Eigenvalues of the Complete Complex}
As an example we consider the spectra of the three matrices $L^\textup{up}_{k-1}(K_n^k)$, $\Delta^\textup{up}_{k-1}(K_n^k)$ and $A_{k-1}(K_n^k)$ for the complete complex $K_n^k$.
First recall the following well-known (and easily verifiable) lemma:
\begin{lemma}\label{BasisBoundariesCoboundaries}
For a complex $X$ with complete $(k-1)$-skeleton, the space $B^{k-1}(X) = \operatorname{im} \delta_{k-2}$ has dimension $\binom{n-1}{k-1}$. A basis is given by $\big\{\delta_{k-2} e_F \!:\! 1 \notin F \in \binom{[n]}{k-1}\big\}$.
For the complete complex $K_n^k$, the space $\operatorname{im} \delta_{k-1}^\ast(K_n^k)$ is $\binom{n-1}{k}$-dimensional and has $\big\{\delta_{k-1}^\ast e_F \!:\! 1 \in F \in \binom{[n]}{k+1}\big\}$ as a basis.
\end{lemma}
\begin{lemma}\label{EigenvaluesCompleteComplex}
The eigenvalues of the combinatorial Laplacian $L^\textup{up}_{k-1}(K_n^k)$ are $0$ with multiplicity $\binom{n-1}{k-1}$ and $n$ with multiplicity $\binom{n-1}{k}$.
The normalized Laplacian $\Delta^\textup{up}_{k-1}(K_n^k)$ has eigenvalues $0$ with multiplicity $\binom{n-1}{k-1}$ and $\frac{n}{n-k}$ with multiplicity $\binom{n-1}{k}$.
The eigenvalues of $A_{k-1}(K_n^k)$ are $n-k$ with multiplicity $\binom{n-1}{k-1}$ and $-k$ with multiplicity $\binom{n-1}{k}$.
\end{lemma}
\begin{proof}
Because $K_n^k$ is $(n-k)$-regular, it suffices to consider the spectrum of $L^\textup{up}_{k-1}(K_n^k)$.
The following equality is contained implicitly in \cite{Kalai} and follows from a straightforward calculation using the matrix representations of the Laplacians:
\[
L^\textup{up}_{k-1}(K_n^k) + L^\down_{k-1}(K_n^k) = nI.
\]
Any non-zero element of $\ker L^\down_{k-1}(K_n^k) = \ker \delta_{k-2}^\ast(K_n^k) = \operatorname{im} \delta_{k-1}^\ast(K_n^k)$ is hence an eigenvector of $L^\textup{up}_{k-1}$ with eigenvalue $n$. Naturally, any non-zero element of $\ker L^\textup{up}_{k-1}(K_n^k) = Z^{k-1}(K_n^k) = B^{k-1}(K_n^k)$ is an eigenvector of $L^\textup{up}_{k-1}$ with eigenvalue $0$.
By Lemma~\ref{BasisBoundariesCoboundaries} $\operatorname{im} \delta_{k-1}^\ast(K_n^k)$ and $B^{k-1}(K_n^k)$ have dimensions $\binom{n-1}{k}$ and $\binom{n-1}{k-1}$, respectively.
As these add up to $\binom{n}{k}$, the dimension of $C^{k-1}(K_n^k)$, we have determined the complete spectrum.
\end{proof}
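For $n=4$ and $k=2$ the lemma can be checked numerically with the sketches above (again an illustration of ours, not part of the paper, reusing the matrix \texttt{d1} of $\delta_{k-1}(K_4^2)$):
\begin{verbatim}
import numpy as np

L1_up = d1.T @ d1                      # combinatorial upper Laplacian of K_4^2
print(np.round(np.linalg.eigvalsh(L1_up), 6))
# [0. 0. 0. 4. 4. 4.]: eigenvalue 0 with multiplicity binom(3,1) = 3
#                      and eigenvalue n = 4 with multiplicity binom(3,2) = 3
\end{verbatim}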
\section{Garland's Estimate Revisited}
\label{sec:Garland}
In \cite{Garland} Garland studies the normalized Laplacian $\Delta^\textup{up}_i(X)$. His main result regards a conjecture of Serre's on the cohomology of certain groups. As a technical lemma, he proves a bound for the nontrivial eigenvalues of $\Delta^\textup{up}_i(X)$ in terms of the eigenvalues of the Laplacian on links of lower-dimensional faces (see also \cite{Borel} for a very clear exposition).
We state the result for the case of $\Delta^\textup{up}_{k-1}(X)$ and the links of $(k-2)$-dimensional faces $F \in X_{k-2}$. In this case, $\lk F = \lk(F,X)$ is a graph and the normalized Laplacian $\Delta_{0}^\textup{up}(\lk F)$ agrees with the usual normalized graph Laplacian $\Delta(\lk F)$.
Furthermore, we show an analogous result for the generalized adjacency matrix $A_{k-1}(X)$.
For a combinatorial application of Garland's ideas (to clique complexes of graphs) see \cite{AharoniBergerMeshulam:eigenvaluesHomologyFlagComplexes-2005}. Garland's estimate was subsequently further strengthened and extended. In particular, \.{Z}uk \cite{Zuk:1996}
proved that if a $2$-dimensional complex $X$ satisfies $\lambda_2(\Delta(\lk(v,X)))>1/2$ for all vertex links, then the fundamental group of $X$ has \emph{Kazhdan's} \emph{Property} (\emph{T}).
\subsection*{Normalized Laplacian}
\begin{theorem}[\cite{Garland}, see also {\cite[Theorem~1.5,1.6]{Borel}}]\label{Garland}
Let $X$ be a pure $k$-dimensional complex and let $\Delta_{k-1}^\textup{up} = \Delta_{k-1}^\textup{up}(X)$ be its normalized Laplacian. Denote by $\langle,\rangle$ the weighted inner product on $C^{k-1}(X;\myboldmath{R})$ that is defined by $\langle f,g\rangle=\sum_{F\in X_{k-1} }\deg(F)f(F)g(F)$.
Assume that for all $F \in X_{k-2}$ $$\lambda_{\min} \leq \lambda_2(\Delta(\lk F)) \leq \lambda_{n-k+1}(\Delta(\lk F)) \leq \lambda_{\max}.$$
Then for all $f \in \orcomp{k-1}{X}$ (where the orthogonal complement is taken with respect to $\langle,\rangle$)
\[
(1 + k\lambda_{\min} - k) \langle f,f \rangle \leq \langle \Delta_{k-1}^\textup{up} f,f \rangle \leq (1 + k\lambda_{\max} - k) \langle f,f \rangle.
\]
Hence, all nontrivial eigenvalues of $\Delta_{k-1}^\textup{up}$ on $\orcomp{k-1}{X}$ lie in $[1 + k\lambda_{\min} - k, 1 + k\lambda_{\max} - k].$
\end{theorem}
We remark that Garland only states the lower bound. The upper bound follows directly from the proof, which we reproduce here in our notation.
The main idea of the proof is to present the normalized Laplacian as a sum of matrices each of which has non-zero entries only on the link of some $(k-2)$-face. These matrices then correspond to the Laplacians of the links.
For a pure $k$-dimensional simplicial complex $X$, fix a face $F \in X_{k-2}$ of dimension $k-2$.
Let $\rho_F$ be the diagonal $|X_{k-1}|\times|X_{k-1}|$-matrix defined by
$$(\rho_F)_{G,H} = \begin{cases}
1 & \text{if } G = H \text{ and } F \subset G,\\
0& \text{otherwise.}
\end{cases}
$$
We set $\Delta_{k-1}^{\textup{up},F}(X):=\rho_F\Delta_{k-1}^{\textup{up}}(X)\rho_F$ and for $f \in C^{k-1}(X)$ furthermore define $f_F \in C^0(\lk F)$ by $f_F(\{u\}) = [F \cup \{u\}:F]f(F \cup \{u\}).$
\begin{lemma}\label{GarlandLemma}
Let $X$ be a pure $k$-dimensional complex.
\begin{itemize}
\item[a)] $\sum_{F \in X_{k-2}}\Delta_{k-1}^{\textup{up},F}(X) = \Delta_{k-1}^\textup{up}(X) + (k-1)I$.
\item[b)] For $u, v \in V(\lk F)$ let $F_u=F \cup \{u\}$ and $F_v=F \cup \{v\}$. Then
$(\Delta_{k-1}^{\textup{up},F}(X))_{F _u,F_v} = [F_u:F][F_v:F](\Delta(\lk F))_{u,v}.$
So, for $f \in C^{k-1}(X)$, $\langle\Delta_{k-1}^{\textup{up},F}(X)f,f\rangle= \langle\Delta(\lk F)f_F,f_F\rangle.$
\item[c)] If $f \in \orcomp{k-1}{X}$ then $f_F \in \mathbf{1}^\perp$.
\end{itemize}
\end{lemma}
\begin{proof}
\begin{itemize}
\item[a)] Observe that $\Delta_{k-1}^{\textup{up},F}(X)$ is obtained by replacing by $0$ all entries of $\Delta_{k-1}^\textup{up}(X)$ that are contained in a row or column corresponding to some $G$ with $F\nsubseteq G$. The non-zero entries of $\Delta_{k-1}^\textup{up}(X)$ lie on the diagonal or correspond to faces $G, H \in X_{k-1}$ that share a common $(k-2)$-face and for which $G \cup H \in X_k$.
Hence, every non-zero entry $(\Delta_{k-1}^\textup{up}(X))_{G,H}$ with $G \neq H$ is contained in exactly one summand and the diagonal entries, which are $1$, are each contained in exactly $k$ summands.
\item[b)]
First consider $u \neq v$ with $F \cup \{u,v\} \in X$.
Straightforward calculations show that $\deg_X(F_u)= \deg_{\lk F}(u)$ and that furthermore
$[F_{u,v}:F_u][F_{u,v}:F _v] = -[F_u:F][F_v:F]$ where $F_{u,v}$ stands for $F\cup\{u,v\}$.
Hence,
\[
(\Delta_{k-1}^{\textup{up},F}(X))_{F_u,F_v} = \frac{[F_{u,v}:F_u][F_{u,v}:F_v]}{\deg_X(F_u)}
= - \frac{[F_u:F][F_v:F]}{\deg_{\lk F}(u)}
= [F_u:F][F_v:F](\Delta(\lk F))_{u,v}.
\]
If $F \cup \{u,v\} \notin X$, the corresponding entry is $0$ in both matrices.
For the diagonal entries we get
$$
(\Delta_{k-1}^{\textup{up},F}(X))_{F_u,F_u} = 1 = [F_u:F][F_u:F] \Delta(\lk F)_{u,u}.
$$
\item[c)] Let $f \in \orcomp{k-1}{X}$. Then
$\sum_{G \in X_{k-1}}\deg(G)f(G)[G:F] = \langle f , \delta_{k-2}e_F \rangle = 0$
and therefore
\[
\langle f_F,\mathbf{1}\rangle = \sum_{v\in V(\lk F)}\deg_{\lk F}(v) f_F(\{v\}) =
\sum_{v\in V(\lk F)}\deg(F_v) [F_v:F]f(F_v) =0.
\]
\end{itemize}
\end{proof}
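Part a) of the lemma can also be verified numerically on a small example; the following Python lines (our own illustration, reusing \texttt{Delta\_up} and \texttt{edges} for $X=K_4^2$ from the sketches above, where $k=2$ and the $(k-2)$-faces are the vertices) confirm the identity.
\begin{verbatim}
import numpy as np

total = np.zeros_like(Delta_up)
for v in range(4):                                    # (k-2)-faces of K_4^2
    rho = np.diag([float(v in e) for e in edges])     # the projection rho_F
    total += rho @ Delta_up @ rho
print(np.allclose(total, Delta_up + (2 - 1) * np.eye(len(edges))))   # True
\end{verbatim}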
The statements of Lemma~\ref{GarlandLemma} can easily be combined to prove Garland's estimate:
\begin{proof}[Proof of Theorem~\ref{Garland}]
Let $f \in \orcomp{k-1}{X}$. Then
$$\langle \sum\nolimits_{F \in X_{k-2}}\Delta_{k-1}^{\textup{up},F}(X)f,f \rangle = \sum\nolimits_{F \in \mathcal{F}_f} \langle \Delta(\lk F)f_F,f_F \rangle,$$
where $\mathcal{F}_f = \{F \in X_{k-2} | F \subset G \text{ for some } G \text{ with } f(G)\neq 0\}$.
Now, since $f \in \orcomp{k-1}{X}$, we have $f_F \in \mathbf{1}^\perp$ and $f_F \neq 0$ for $F \in \mathcal{F}_f$.
As furthermore $\sum_{F \in \mathcal{F}_f}\langle f_F, f_F \rangle = k \langle f,f \rangle$,
$$k \lambda_{\min} \langle f,f \rangle \leq \langle \sum\nolimits_{F \in X_{k-2}}\Delta_{k-1}^{\textup{up},F}(X)f,f \rangle \leq k \lambda_{\max} \langle f,f \rangle.$$
By Lemma~\ref{GarlandLemma} we have furthermore
$$\langle \Delta_{k-1}^\textup{up}(X) f,f \rangle = \langle\sum\nolimits_{F \in X_{k-2}}\Delta_{k-1}^{\textup{up},F}(X) f,f \rangle - (k-1)\langle f,f \rangle,$$
which concludes the proof.
\end{proof}
\subsection*{Adjacency Matrix}
We now turn to the generalized adjacency matrix $A_{k-1}(X)$. The same methods as above can be applied to achieve a result of similar nature (Proposition~\ref{Prop:Conclusionzz}). However, this only enables us to cover vectors from $\orcomp{k-1}{X}$. Controlling the behaviour on this space sufficed for the normalized Laplacian, where $B^{k-1}(X)$ is always a subspace of the eigenspace of zero. For the generalized adjacency matrix we know much less about its eigenspaces, in particular we do not know of any trivial eigenvalues.
This is analogous to the situation for graphs, where $\mathbf{1}$, the all-ones vector, which is known to be the first eigenvector of the Laplacian (with eigenvalue $0$), is not necessarily an eigenvector of the adjacency matrix.
In \cite{Feige:2005hp} Feige and Ofek, considering the adjacency matrix of random graphs $G(n,p)$, show that for $p$ large enough the first eigenvector can in some sense be replaced by $\mathbf{1}$.
Following their strategy, we show that controlling the behaviour of the generalized adjacency matrix $A_{k-1}(X)$ on the two spaces $B^{k-1}(X)$ and $\orcomp{k-1}{X}$ suffices to give concentration results for the spectrum of $A_{k-1}(X)$.
The results of this section together will yield the following theorem which can be considered as an analogue of Garland's Theorem~\ref{Garland} for the generalized adjacency matrix $A_{k-1}(X)$.
\begin{theorem}\label{GarlandAdjacency}
\renewcommand{\labelenumi}{(\roman{enumi})}
\renewcommand{\theenumi}{(\roman{enumi})}
Let $X$ be a $k$-dimensional simplicial complex with $n$ vertices and complete $(k-1)$-skeleton and let $A_{k-1}=A_{k-1}(X)$ be its generalized adjacency matrix. Fix a positive value $d$ and let $u = (1/\sqrt{n-k+1})\mathbf{1}$. Suppose that we have for all $F \in X_{k-2}$:
\begin{enumerate}
\item $|\langle A(\lk F)u,u \rangle - d| \leq f(n)$,
\item $|\langle A(\lk F)u,w \rangle| \leq g(n)$ for all $w \bot \mathbf{1}$ with $\|w\|=1$ and
\item $|\langle A(\lk F)w,w \rangle| \leq h(n)$ for all $w \bot \mathbf{1}$ with $\|w\|=1$.
\end{enumerate}
Let $\varphi(n) = f(n) +g(n) + h(n)$. Then:
\renewcommand{\labelenumi}{(\alph{enumi})}
\renewcommand{\theenumi}{(\alph{enumi})}
\begin{enumerate}
\item $|\langle A_{k-1}b,b \rangle - d| \leq k \cdot \varphi(n)$ for all $b\in B^{k-1}(X)$ with $\|b\|=1$,\label{Conclusionbb}
\item $|\langle A_{k-1}b,z \rangle| \leq k \cdot \varphi(n)$ for all $z \in\orcomp{k-1}{X}$ and $b\in B^{k-1}(X)$ with $\|b\|=\|z\|=1$ and\label{Conclusionbz}
\item $|\langle A_{k-1}z,z \rangle| \leq k \cdot h(n)$ for all $z \in \orcomp{k-1}{X}$ with $\|z\|=1$.\label{Conclusionzz}
\end{enumerate}
Hence, the largest $\binom{n-1}{k-1}$ eigenvalues of $A_{k-1}$ lie in the interval $[d-k\varphi(n),d+2k\varphi(n)+kh(n)]$, and the remaining $\binom{n-1}{k}$ eigenvalues lie in the interval $[-k(\varphi(n)+h(n)),kh(n)]$.
\end{theorem}
The following lemma explains the connection of Conclusions \ref{Conclusionbb}, \ref{Conclusionbz} and \ref{Conclusionzz} with the spectrum of $A_{k-1}(X)$. It is a generalization of \cite[Lemma~2.1]{Feige:2005hp}, which gives a corresponding statement for graphs and deals with a single vector $u$ (there used with $u= \frac{1}{\sqrt{n}}\mathbf{1}$) instead of the subspace $\mathcal{B}$. We will use $\mathcal{B} = B^{k-1}(X)$.
Note that $B^{k-1}(X) = B^{k-1}(K_n^k)$ if $X$ has a complete $(k-1)$-skeleton.
\begin{lemma}\label{LEMMAConditions}
\renewcommand{\labelenumi}{(\roman{enumi})}
\renewcommand{\theenumi}{(\roman{enumi})}
Let $X$ be a $k$-dimensional simplicial complex with $n$ vertices and complete $(k-1)$-skeleton, let $A_{k-1}=A_{k-1}(X)$ be its generalized adjacency matrix and let $\mathcal{B}$ be an $\binom{n-1}{k-1}$-di\-men\-sion\-al subspace of $C^{k-1}(X)$.
Suppose we have:
\begin{enumerate}
\item $0 \leq f_1(n) \leq \langle A_{k-1}b,b \rangle \leq f_2(n)$ for all $b\in \mathcal{B}$ with $\|b\|=1$,\label{Assumptionbb}
\item $|\langle A_{k-1}b,z \rangle| \leq g(n)$ for all $z \in \mathcal{B}^\bot$ and $b\in \mathcal{B}$ with $\|b\|=\|z\|=1$ and\label{Assumptionbz}
\item $|\langle A_{k-1}z,z \rangle| \leq h(n)$ for all $z \in \mathcal{B}^\bot$ with $\|z\|=1$.\label{Assumptionzz}
\end{enumerate}
Then the largest $\binom{n-1}{k-1}$ eigenvalues of $A_{k-1}$ lie in the interval $[f_1(n),f_2(n)+g(n)+h(n)]$, and the remaining $\binom{n-1}{k}$ eigenvalues lie in the interval $[-(g(n)+h(n)),h(n)]$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{LEMMAConditions}]
Write $A=A_{k-1}$.
Let $v$ be an arbitrary unit vector. Then there are unit vectors $b \in \mathcal{B}$, $z \in \mathcal{B}^\bot$ and $-1\leq \alpha, \beta \leq 1$ such that $v =\alpha b + \beta z$ and $\alpha^2 + \beta^2 =1$.
Because $A$ is symmetric, we get
$$\langle Av,v \rangle = \alpha^2\langle Ab,b\rangle + 2 \alpha\beta \langle Ab,z \rangle + \beta^2\langle Az,z \rangle.$$
Using \ref{Assumptionbb}, \ref{Assumptionbz} and \ref{Assumptionzz} as well as $|\alpha\beta| \leq 1/2$ and $\alpha^2, \beta^2 \leq 1$, we can conclude that
$$-g(n)-h(n)\leq \langle Av,v \rangle \leq f_2(n)+g(n)+h(n).$$
Hence, all eigenvalues of $A$ are contained in $[-g(n)-h(n),f_2(n)+g(n)+h(n)]$.
Now, let $\mu_1 \leq \mu_2 \leq \ldots \leq \mu_{\binom{n}{k}}$ be the eigenvalues of $A$.
Applying \ref{Assumptionbb} and \ref{Assumptionzz} we get
$$\mu_{\binom{n-1}{k}} \leq \max_{z \in \mathcal{B}^\bot, \|z\|=1} \langle Az,z \rangle \leq h(n) \;\;\text{ and }\;\; \mu_{\binom{n-1}{k}+1} \geq \min_{b\in \mathcal{B}, \|b\|=1} \langle Ab,b \rangle \geq f_1(n),$$
by the variational characterization of eigenvalues (Theorem~\ref{CourantFischer}), since $\dim\mathcal{B}^\bot = \binom{n-1}{k}$.
\end{proof}
The proof of Theorem~\ref{GarlandAdjacency} makes up the remainder of this section and is divided into two parts. We first deal with Conclusion \ref{Conclusionzz} and then turn to Conclusions \ref{Conclusionbb} and \ref{Conclusionbz}.
\subsection*{Conclusion \ref{Conclusionzz} - Behaviour on $\orcomp{k-1}{X}$}
We address Conclusion \ref{Conclusionzz} with the same methods that we used to prove Garland's Theorem~\ref{Garland}.
\begin{proposition}\label{Prop:Conclusionzz}
Let $X$ be a $k$-dimensional complex and let $A_{k-1}=A_{k-1}(X)$ be its generalized adjacency matrix.
Assume that for all $F \in X_{k-2}$ and for all $w \in C^0(\lk F)$ with $w \bot \mathbf{1}$
$$|\langle A(\lk F)w ,w\rangle| \leq h(n) \langle w,w\rangle.$$
Then for all $z \in \orcomp{k-1}{X}$ (where the orthogonal complement is taken with respect to the standard, non-weighted inner product)
$$|\langle A_{k-1} z,z \rangle| \leq k \cdot h(n) \langle z,z \rangle.$$
\end{proposition}
\begin{proof}
For any face $F \in X_{k-2}$ set $A_{k-1}^F:=\rho_FA_{k-1}\rho_F$, the matrix obtained from $A_{k-1}$ by replacing all rows and columns corresponding to $(k-1)$-faces not containing $F$ by all-zero rows/columns.
As in Lemma~\ref{GarlandLemma}, straightforward calculations show:
\begin{enumerate}
\item[a)] $\sum_{F \in X_{k-2}}A_{k-1}^F = A_{k-1}$,
\item[b)] $(A_{k-1}^F)_{F \cup \{u\},F \cup \{v\}} = [F \cup \{u\}:F][F \cup \{v\}:F]A(\lk F)_{u,v}$ for $F \in X_{k-2}$ and $u, v \in V(\lk F)$
and hence $\langle A_{k-1}^F f,f \rangle = \langle A(\lk F) f_F,f_F \rangle$ for any $f \in C^{k-1}(X)$.
\end{enumerate}
As $z\in \orcomp{k-1}{X}$ implies $z_F \in \mathbf{1}^\perp$ also with respect to the non-weighted inner product, this proves the proposition:
$$
|\langle A_{k-1} z,z \rangle| = |\sum_{F \in X_{k-2}} \langle A_{k-1}^F z,z \rangle| \leq \sum_{F \in X_{k-2}} |\langle A(\lk F) z_F,z_F \rangle| \leq k \cdot h(n) \langle z,z \rangle.
$$
\end{proof}
As explained above, in contrast to the Laplacian, for the adjacency matrix we are also interested in the behaviour on $B^{k-1}(X)$. For this space, we cannot apply a proof similar to the one above because $f \in B^{k-1}(X)$ does not imply that $f_F$ is constant for every $F \in X_{k-2}$. (For a $k$-dimensional complex with complete $(k-1)$-skeleton, the basis vectors $\delta_{k-2} e_F$ are a simple counterexample.)
\subsection*{Conclusions \ref{Conclusionbb} and \ref{Conclusionbz} - Behaviour on $B^{k-1}(X)$}
For $b \in B^{k-1}(X)$ we have $A_{k-1}(X)b = D_{k-1}(X)b$. If the complex $X$ were regular, i.e., if all $(k-1)$-faces had the same degree $d$, then $B^{k-1}(X)$ would be a subspace of the eigenspace of $d$.
The random complex $X^k(n,p)$ is not regular, but with high probability the degrees of all $(k-1)$-faces lie close to the expected degree $d=p(n-k)$.
For an arbitrary complex we can fix any positive value $d$ and study the deviations of the degrees from $d$ by considering the diagonal matrix $E(X) = D_{k-1}(X)-dI$, which has entries $E(X)_{F,F} = \deg_X(F) - d$. Then $A_{k-1}(X)b = E(X) b + db$ for $b \in B^{k-1}(X)$.
It will turn out that our main task is to control the behaviour of $\|E(X) b\|$ for all $b \in B^{k-1}(X)$.
We manage to reduce this to a question on the links of $(k-2)$-faces: Proposition~\ref{Prop:ReducingToLinks} relates $\|E(X) b\|$ for every $b \in B^{k-1}(X)$ to the values $\|E(X)\delta_{k-2} e_F\|$ for $F \in X_{k-2}$, i.e., to the behaviour of $E(X)$ on the coboundaries of elementary cochains. These values in turn match the values $\|E(\lk F)\mathbf{1}\|$ on the corresponding links.
\begin{proposition}\label{Prop:ReducingToLinks}
Let $X$ be a $k$-dimensional complex with vertex set $[n]$ and complete $(k-1)$-skeleton. Fix some positive value $d$ and let $E = E(X) = D_{k-1}(X)-dI$.
Assume that for all $F \in X_{k-2}$ we have
$$\|E\delta e_F\| \leq f(n) \|\delta e_F\|.$$
Then for all $b \in B^{k-1}(X)$
$$\|E b\| \leq k \cdot f(n) \|b\|.$$
\end{proposition}
\begin{remark} Proposition~\ref{Prop:ReducingToLinks} also holds if $E$ is replaced by any diagonal $|X_{k-1}|\times|X_{k-1}|$-matrix.
\end{remark}
The proof of Proposition~\ref{Prop:ReducingToLinks} is deferred to the end of this section. Here is how we use it to address Conclusions \ref{Conclusionbb} and \ref{Conclusionbz}.
\begin{proposition}\label{Prop:ConclusionbbConclusionbz}
\renewcommand{\labelenumi}{(\roman{enumi})}
Let $X$ be a $k$-dimensional simplicial complex with $n$ vertices and complete $(k-1)$-skeleton. Fix some positive value $d$ and suppose that we have $$\sum_{v\in V(\lk F)}(\deg_{\lk(F)}(v)-d)^2 = \|E(\lk F)\mathbf{1}\|^2 \leq f(n)^2 (n-k+1)$$
for all $F \in X_{k-2}$. Then
\begin{enumerate}
\item $|\langle A_{k-1}b,b \rangle-d| \leq k \cdot f(n)$ for all $b \in B^{k-1}(X)$ with $\|b\|=1$ and
\item $|\langle A_{k-1}b,z \rangle| \leq k \cdot f(n)$ for all $b \in B^{k-1}(X)$, $z \in \orcomp{k-1}{X}$ with $\|b\| = \|z\|=1$.
\end{enumerate}
\end{proposition}
\begin{proof}
As $\deg(F \cup \{v\}) = \deg_{\lk F}(v)$ for $v \notin F$, we have
$$\|E\delta e_F\|^2 = \sum\nolimits_{H \supset F} (\deg(H) -d)^2 = \sum\nolimits_{v \notin F} (\deg_{\lk F}(v) -d)^2 \leq f(n)^2 (n-k+1).
$$
By Proposition~\ref{Prop:ReducingToLinks} we hence have $\|E b\| \leq k \cdot f(n) \|b\|$ for all $b \in B^{k-1}(X)$.
Now, let $b \in B^{k-1}(X)$ and $z \in \orcomp{k-1}{X}$.
As $A_{k-1}b = D_{k-1}b = db + E b,$
we get
$$|\langle A_{k-1}b,b \rangle - d\|b\|^2 | \leq \|b\| \cdot \|E b\| \leq k \cdot f(n)\|b\|^2$$ and $$ |\langle A_{k-1}b,z \rangle| \leq |\langle E b,z\rangle| \leq \|z\| \cdot \|E b\| \leq k \cdot f(n)\|z\|\|b\|.$$
\end{proof}
To conclude the proof of Theorem~\ref{GarlandAdjacency} we are missing a small lemma:
\begin{lemma}
\renewcommand{\labelenumi}{(\roman{enumi})}
Let $G$ be a graph with $n$ vertices with adjacency matrix $A=A(G)$ and let $u=\frac{1}{\sqrt{n}}\mathbf{1}$. Fix a positive value $d$.
Assume that
\begin{enumerate}
\item $|\langle Au,u \rangle - d| \leq f(n)$,
\item $|\langle Au,w \rangle| \leq g(n)$ for all $w \bot \mathbf{1}$ with $\|w\|=1$ and
\item $|\langle Aw,w \rangle| \leq h(n)$ for all $w \bot \mathbf{1}$ with $\|w\|=1$.
\end{enumerate}
Then $\| E(G) \mathbf{1} \|^2 = \sum_{v\in V}(\deg(v)-d)^2 \leq (f(n) +g(n) + h(n))^2 n$.
\end{lemma}
\begin{proof}
We have $\| E(G) \mathbf{1} \| = \| (\frac{d}{n}J -A)\mathbf{1}\| \leq \|\frac{d}{n}J -A\|\cdot\|\mathbf{1}\|$
and the conditions above imply $\|\frac{d}{n}J -A\| \leq f(n) +g(n) + h(n)$.
\end{proof}
\subsubsection*{Proof of Proposition~\ref{Prop:ReducingToLinks}}
The proof of Proposition~\ref{Prop:ReducingToLinks} is based on the observations in the following lemma.
Its proof will use the following simple consequence of the Cauchy-Schwarz inequality:
\begin{equation}
\left(\sum_{i\in I} a_i\right)^2 \leq |I| \sum_{i\in I} a_i^2. \label{CauchySchwarz}
\end{equation}
\begin{lemma}\label{ReducingToLinks}
Let $X$ be a $k$-complex with vertex set $[n]$ and complete $(k-1)$-skeleton and let $b \in B^{k-1}(X)$. For every $(k-2)$-face $F \in X_{k-2}$ define
$$h_b(F) := \sum_{v \notin F} [F \cup \{v\}:F] b(F \cup \{v\}).$$
Then
\begin{enumerate}
\item[a)] $b(H) = \frac{1}{n} \sum_{F \subset H, F \in X_{k-2}} [H:F] h_b(F)$ for $H \in X_{k-1}$,
\item[b)] $\langle E b,E b\rangle \leq \frac{k}{n^2} \sum_{F \in X_{k-2}} h_b(F)^2 \langle E \delta e_F,E \delta e_F\rangle$,
\item[c)] $\sum_{F \in X_{k-2}} h_b(F)^2 \leq k(n-k+1)\langle b,b \rangle$.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item[a)] As $X$ has a complete $(k-1)$-skeleton, we have $b \in B^{k-1}(X)= B^{k-1}(K_n^k)$ and $\delta_{k-1}(K_n^k) b = 0$. Thus, for any $H \in X_{k-1}$ and $v \notin H$:
$$
0 = (\delta_{k-1}(K_n^k)b) (H \cup \{v\}) = [H \cup \{v\}: H] b(H) + \sum_{F \subset H} [H \cup \{v\}: F \cup \{v\}] b(F \cup \{v\}).
$$
Note that $- [H \cup \{v\}: H] [H \cup \{v\}: F \cup \{v\}] = [H:F][F \cup \{v\}:F]$. Thus, we can rearrange:
$$
b(H) = - [H \cup \{v\}: H] \sum_{F \subset H} [H \cup \{v\}: F \cup \{v\}] b(F \cup \{v\}) = \sum_{F \subset H} [H:F][F \cup \{v\}:F] b(F \cup \{v\}).
$$
Summing over all $v \notin H$ and adding additional multiples of $b(H)$, we get
\begin{multline*}
n \cdot b(H) = \sum_{v \notin H} \sum_{F \subset H} [H:F][F \cup \{v\}:F] b(F \cup \{v\}) + k \cdot b(H)\\
= \sum_{F \subset H} [H:F] \sum_{v \notin F} [F \cup \{v\}:F] b(F \cup \{v\})
= \sum_{F \subset H} [H:F] h_b(F).
\end{multline*}
\item[b)] By a) and inequality~\eqref{CauchySchwarz} and because $\langle E \delta e_F,E \delta e_F\rangle = \sum_{H \supset F} E (H)^2$ for $F \in X_{k-2}$:
\begin{multline*}
\langle E b,E b\rangle = \sum_{H \in X_{k-1}} E (H)^2 b(H)^2
= \frac{1}{n^2} \sum_{H \in X_{k-1}} E (H)^2 \left(\sum_{F \subset H} [H:F] h_b(F)\right)^2\\
\leq \frac{k}{n^2} \sum_{H \in X_{k-1}} E (H)^2 \sum_{F \subset H} h_b(F)^2
= \frac{k}{n^2} \sum_{F \in X_{k-2}} h_b(F)^2 \langle E \delta e_F,E \delta e_F\rangle.
\end{multline*}
\item[c)] Again by inequality~\eqref{CauchySchwarz}:
\begin{multline*}
\sum_{F \in X_{k-2}} h_b(F)^2 \leq \sum_{F \in X_{k-2}} (n-k+1) \cdot \sum_{v \notin F} b(F \cup \{v\})^2\\
= (n-k+1) \cdot \sum_{H \in X_{k-1}} k \cdot b(H)^2 = k(n-k+1)\langle b,b \rangle.
\end{multline*}
\end{enumerate}
\end{proof}
The statements of Lemma~\ref{ReducingToLinks} together yield Proposition~\ref{Prop:ReducingToLinks}:
\begin{proof}[Proof of Proposition~\ref{Prop:ReducingToLinks}]
Let $b \in B^{k-1}(X)$. As $\|\delta e_F\| = \sqrt{n-k+1}$ for $F \in X_{k-2}$, by Lemma~\ref{ReducingToLinks}:
\begin{multline*}
\langle E b,E b\rangle \leq \frac{k}{n^2} \sum_{F \in X_{k-2}} h_b(F)^2 \langle E \delta e_F,E \delta e_F\rangle \leq \frac{k}{n^2} \sum_{F \in X_{k-2}} h_b(F)^2 f(n)^2 \langle\delta e_F,\delta e_F\rangle\\
\leq k^2\cdot \frac{(n-k+1)^2}{n^2} \cdot f(n)^2 \langle b,b \rangle \leq k^2 \cdot f(n)^2 \langle b,b \rangle.
\end{multline*}
\end{proof}
\section{The Spectra of Random Complexes}\label{sec:EigenvaluesRandomComplexes}
In this section, we prove Theorem~\ref{EigenvaluesRandomComplexes}, the concentration result on the spectra of the normalized Laplacian and the generalized adjacency matrix of random complexes $X^k(n,p)$.
The basic idea is to reduce the statement to a question on the links of $(k-2)$-faces by applying Theorems~\ref{Garland} and \ref{GarlandAdjacency}. Since for every $(k-2)$-face $F$, the link $\lk(F,X^k(n,p))$ is a random graph with the same distribution as $G(n-k+1,p)$, we can then apply results on the eigenvalues of random graphs. For convenience, we repeat Theorem~\ref{EigenvaluesRandomComplexes}:
{%
\renewcommand\thetheorem{\savedtheoremnumber}%
\begin{theorem}
Let $k\geq2$. For every $c>0$ and every $\gamma > c$ there exists a constant $C>0$ with the following property:
Assume $p \geq (k+\gamma)\log(n)/n$ and let $d:= p(n-k)$.
Then for $\gamma_A=C\cdot\sqrt{d}$ and $\gamma_\Delta=C/\sqrt{d}$ the following statements hold with probability at least $1-n^{-c}$:
\begin{enumerate}
\item[\textup{(i)}] The largest $\binom{n-1}{k-1}$ eigenvalues of $A_{k-1}(X^k(n,p))$ lie in the interval $[d-\gamma_A,d+\gamma_A]$, and the remaining $\binom{n-1}{k}$ eigenvalues lie in the interval $[-\gamma_A,+\gamma_A]$.
\item[\textup{(ii)}] The smallest $\binom{n-1}{k-1}$ eigenvalues of $\Delta_{k-1}^\textup{up}(X^k(n,p))$ are \textup{(}trivially\textup{)} zero, and the remaining $\binom{n-1}{k}$ eigenvalues lie in the interval $[1-\gamma_\Delta,1+\gamma_\Delta]$. In particular, $\tilde{H}^{k-1}(X^k(n,p);\myboldmath{R})=0$.
\end{enumerate}
For the adjacency matrix \textup{(i)} even holds for $p\geq \gamma\cdot \log n/n$.
\end{theorem}
\addtocounter{theorem}{-1}%
}%
Observe that $B^{k-1}(K_n^k)\subseteq \ker \Delta^\textup{up}_{k-1}(X^k(n,p))$ because $X^k(n,p)$ has a complete $(k-1)$-skeleton, so the multiplicity of $0$ as an eigenvalue of $ \Delta^\textup{up}_{k-1}(X^k(n,p))$ is at least $\binom{n-1}{k-1}$.
\begin{proof}[Proof of Theorem~\ref{EigenvaluesRandomComplexes}]
Let $c>0$ and let $\gamma>c$. For $F \in \binom{[n]}{k-1}$, the link $\lk F = \lk(F,X^k(n,p))$ is a random graph $G(n-k+1,p)$. By Theorem~\ref{thm:concentration-graph-eigenvalues} (and \eqref{preciseStatementAdjacencyGraphs} in Section~\ref{subsec:EigenvaluesAdjacencyRandomGraphs}) we can hence choose $C>0$ such that for $p \geq (k+\gamma)\log(n)/n$ the following holds with probability at least $1-n^{-c-k+1}$:
\begin{enumerate}
\item[\textup{(i)}] $|\langle Ax,y\rangle| \leq C \sqrt{d}$ for all unit vectors $x,y$ with $x\perp\mathbf{1}$ and $\frac{1}{n-k+1}\langle A\mathbf{1},\mathbf{1}\rangle\in [d - C\sqrt{d}, d +C\sqrt{d}]$.
\item[\textup{(ii)}] All nontrivial eigenvalues of $\Delta(\lk F)$ are contained in the interval $[1-C/(k\sqrt{d}),1+C/(k\sqrt{d})]$.
\end{enumerate}
We first focus on the adjacency matrix:
A union bound yields that for $p \geq (k+\gamma)\log(n)/n$:
\[
\Pr\left[\exists F \in X_{k-2}:|\tfrac{1}{n-k+1}\langle A\mathbf{1},\mathbf{1}\rangle-d| > C\sqrt{d} \;\text{ or }\;|\langle Ax,y\rangle| > C \sqrt{d}\text{ for some } x\perp\mathbf{1},y\right] \leq n^{-c}.
\]
This implies that the conditions of Theorem~\ref{GarlandAdjacency} with $f(n),g(n),h(n)=O(\sqrt{d})$, and hence the desired concentration bounds, are fulfilled with probability at least $1-n^{-c}$.
Note that, for the argument so far, Theorem~\ref{thm:concentration-graph-eigenvalues} shows that it would have sufficed to choose $p\geq \gamma\log(n)/n$.
Now consider the normalized Laplacian. Again, a union bound gives for $p \geq (k+\gamma)\log(n)/n$
$$\Pr\left[ \forall F \in X_{k-2}: 1-C/(k\sqrt{d}) \leq \lambda_2(\Delta(\lk F)) \leq \lambda_{n-k+1}(\Delta(\lk F)) \leq 1+C/(k\sqrt{d})\right] \geq 1 - n^{-c}.$$
For every $(k-1)$-face $H \in \tbinom{[n]}{k}$ of $X^k(n,p)$, the random variable $\deg(H)$ is binomially distributed with parameters $n-k$ and $p$, so $\Pr[\deg(H)=0]=(1-p)^{n-k}\leq e^{-p(n-k)}\leq n^{-(k+\gamma)(n-k)/n}$. Since $\gamma>c$, a union bound over all $\binom{n}{k}$ such faces shows that, for $n$ large enough, the complex $X^k(n,p)$ is pure with probability at least $1-n^{-c}$.
Hence, also the conditions of Theorem~\ref{Garland} are fulfilled with probability at least $1-n^{-c}$.
\end{proof}
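As an illustration of the graph-level input used in the preceding proof (the parameters below are chosen for illustration only and are not those of the theorem), the following sketch, assuming Python with NumPy, samples $G(n,p)$ and prints the largest adjacency eigenvalue against $d$, the size of the remaining eigenvalues relative to $\sqrt{d}$, and the interval containing the nontrivial eigenvalues of the normalized graph Laplacian.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, p = 400, 0.2                          # with these parameters all degrees are positive a.a.s.
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1); A = A + A.T           # adjacency matrix of a sample of G(n, p)
d = p * (n - 1)

eig_A = np.sort(np.linalg.eigvalsh(A))
print("largest adjacency eigenvalue:", eig_A[-1], " vs  d =", d)
print("remaining eigenvalues / sqrt(d):",
      max(abs(eig_A[0]), abs(eig_A[-2])) / np.sqrt(d))

# normalized Laplacian  Delta = I - D^{-1/2} A D^{-1/2}
deg = A.sum(axis=1)
Dinv = np.diag(1.0 / np.sqrt(deg))
Delta = np.eye(n) - Dinv @ A @ Dinv
eig_D = np.sort(np.linalg.eigvalsh(Delta))
print("nontrivial Laplacian eigenvalues in [", eig_D[1], ",", eig_D[-1], "]")
\end{verbatim}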
\begin{remark}\label{rem:ConditionsEigenvaluesLaplacians}
Note that the preceding proof works for any distribution $\mathcal{X}_k(n,p)$ of random $k$-dimensional simplicial complexes with $n$ vertices and complete $(k-1)$-skeleton with the property that the link $\lk(F,\mathcal{X}_k(n,p))$ of every $F \in \binom{[n]}{k-1}$ is a random graph with distribution $G(n-k+1,p)$.
\end{remark}
\section{Spectral vs. Coboundary Expansion}
\label{sec:CheegerCounterexample}
In this section, we prove Theorem~\ref{thm:counterexample}. As mentioned in the introduction, the examples are obtained by a probabilistic construction.
\subsubsection*{Basic Construction} Denote by $Y^k(n,p)$ the random $k$-dimensional simplicial complex with vertex set $V=[n]$ and complete $(k-1)$-skeleton obtained as follows: Randomly choose a map $a\colon \binom{V}{k}\rightarrow \myboldmath{Z}_2$ by setting $a(F)=1$ with probability $1/2$ and $a(F)=0$ otherwise, independently for each $F\in \binom{V}{k}$. Thus, the support of $a$ has the same distribution as the $(k-1)$-faces of the Linial-Meshulam random complex $X^{k-1}(n,1/2)$.
Call $H\in \binom{V}{k+1}$ ``\emph{good}'' iff $H$ contains an \emph{even} number of $(k-1)$-faces $F$ with $a(F)=1$. Every good $H$ is added as a $k$-face to $Y^{k}(n,p)$ independently with probability $p$.
Note that, by construction, $a$ is a $\myboldmath{Z}_2$-\emph{cocycle} in the complex $Y^k(n,p)$, i.e., $a\in Z^{k-1}(Y^k(n,p);\myboldmath{Z}_2)$.
For any fixed $b\in C^{k-1}(Y^k(n,p);\myboldmath{Z}_2)=\myboldmath{Z}_2^{\binom{V}{k}}$, the expected normalized Hamming distance between $b$ and the randomly chosen $a$ equals $1/2$. Since there are fewer than $2^{\binom{n}{k-1}}$ \emph{coboundaries} $b\in B^{k-1}(Y^k(n,p);\myboldmath{Z}_2)$ and $\binom{n}{k}$ independent random choices for the entries of $a$, a straightforward application of a Chernoff bound (see, e.g., \cite[Theorem~1]{Janson}, \cite[Theorem~2.1]{JLR}) plus a union bound implies that, a.a.s.,
$a$ has normalized Hamming distance $1/2-o(1)$ from any coboundary, i.e.,
\[
\|[a]\|\geq 1/2-o(1).
\]
In particular, a.a.s.\ $\tilde{H}^{k-1}(Y^k(n,p),\myboldmath{Z}_2)\neq 0$.
Note that for $H \in \binom{V}{k+1}$, the probability that $H$ is a $k$-face of $Y^k(n,p)$ equals $p/2$. However, in contrast to the model $X^k(n,p/2)$, the decisions for different $k$-faces that share some $(k-1)$-face are not independent. Nevertheless, we can still easily analyze the links of $(k-2)$-faces in $Y^k(n,p)$:
\begin{lemma}\label{LinkIndependence}
For every $(k-2)$-face $H \in (Y^k(n,p))_{k-2}=\binom{V}{k-1}$, the random graph $\lk(H,Y^k(n,p))$ has the distribution $G(n-k+1,p/2)$.
\end{lemma}
\begin{proof}
First note that it suffices to consider the case $p=1$, because $\lk(H,Y^k(n,p))$ has the distribution obtained by retaining every edge of $\lk(H,Y^k(n,1))$ independently with probability $p$.
For simplicity, we write $Y$ instead of $Y^k(n,1)$.
Let $U:=V\setminus H$. For $e\in \binom{U}{2}$, consider the event that $e\in \lk(H,Y)$, i.e., that $H\cup e\in Y$. We need to show that these events are mutually independent. To see this, choose and fix, for each $e\in \binom{U}{2}$, an arbitrary $(k-1)$-simplex $F_e$ with $e\subseteq F_e \subseteq H\cup e$; we call these the ``\emph{undecided}'' $(k-1)$-simplices, and let $\mathcal{D}:=\binom{V}{k}\setminus \{F_e\colon e\in \binom{U}{2}\}$ be the set of remaining, ``\emph{decided}'' $(k-1)$-simplices. Note that, by construction, each $k$-simplex of the form $H\cup e$, $e\in \binom{U}{2}$, contains exactly one undecided $(k-1)$-simplex $F_e$ and that these are pairwise distinct. Fix a map $r\colon \mathcal{D} \rightarrow \myboldmath{Z}_2$ and condition upon the event that $r$ is the restriction of $a$ to $\mathcal{D}$. For each $e\in \binom{U}{2}$, we have $e\in \lk(H,Y)$ iff $a(F_e)=\sum_{F\in \mathcal{D}, F\subset H\cup e} r(F)$. For a fixed $r$, the (conditional) probability of this happening is $1/2$, and the values $a(F_e)$ are mutually independent since the $F_e$ are pairwise distinct. Thus, for any set of edges $e_1,\ldots, e_\ell\in \binom{U}{2}$ and for any fixed $r$, we get the conditional probability
$\Pr[\forall i: e_i \in \lk(H,Y)\mid a|_\mathcal{D}=r]=(1/2)^\ell.$
Since this holds for all choices of $r$, it also holds
unconditionally, which proves the lemma.
\end{proof}
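The conclusion of Lemma~\ref{LinkIndependence} can also be observed experimentally. The following Monte Carlo sketch, assuming Python with NumPy (the parameter $n=8$, the vertex $v=0$ and the two edges are arbitrary choices made for illustration), estimates for $k=2$ and $p=1$ the probability that two fixed edges sharing a vertex both appear in the link of a vertex; mutual independence with edge probability $1/2$ predicts $1/4$.
\begin{verbatim}
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n, trials = 8, 20000
v, e1, e2 = 0, (1, 2), (1, 3)   # a vertex and two edges of its link sharing a vertex

def link_edge(a, v, u, w):
    # {u, w} is an edge of lk(v, Y^2(n, 1)) iff the triangle {v, u, w} is "good",
    # i.e. contains an even number of edges F with a(F) = 1
    return (a[frozenset((v, u))] + a[frozenset((v, w))] + a[frozenset((u, w))]) % 2 == 0

both = 0
for _ in range(trials):
    a = {frozenset(e): int(rng.integers(2)) for e in combinations(range(n), 2)}
    both += link_edge(a, v, *e1) and link_edge(a, v, *e2)
print("estimated P[e1 and e2 in lk(v)] =", both / trials, " (predicted: 0.25)")
\end{verbatim}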
For $p \geq (k+\gamma) \log(n)/n$ with $\gamma>0$ we can thus, by this lemma and Remark~\ref{rem:ConditionsEigenvaluesLaplacians}, proceed as in the proof of Theorem~\ref{EigenvaluesRandomComplexes} to show that there exists $\gamma_\Delta=O(1/\sqrt{pn})$ such that a.a.s.~the nontrivial part of the spectrum of $\Delta_{k-1}^\textup{up}(Y^k(n,p))$ lies in the interval $[1-\gamma_\Delta,1+\gamma_\Delta]$.
\subsubsection*{Modification} We have so far shown the existence of an infinite family of $k$-dimensional complexes that is spectrally but not $\myboldmath{Z}_2$-coboundary expanding.
However, the complexes constructed have non-trivial cohomology groups $\tilde{H}^{k-1}(Y,\myboldmath{Z}_2)$, and hence also $\tilde{H}_{k-1}(Y,\myboldmath{Z}) \neq 0$, because $a$ is a $\myboldmath{Z}_2$-cocycle by construction.
To change this, we add a second round to our experiment in which further $k$-simplices are added at random, as follows: After constructing $Y^k(n,p)$, we add each $H \in \binom{V}{k+1}$ independently with some probability $q$. We denote the resulting random complex by $Z^k(n,p,q)$. Thus, $Z^k(n,p,q)$ is the union of $Y^k(n,p)$ and the Linial-Meshulam random complex $X^k(n,q)$.
We assume that $p,q \geq C\cdot \log(n)/n$ for some suitably chosen $C$.
To analyze the $\myboldmath{Z}_2$-coboundary expansion of $Z=Z^k(n,p,q)$, we first argue that $Z$, a.a.s., contains at least $\frac{p}{2} (1-o(1)) \binom{n}{k+1}$ many $k$-faces:
\[
f_k(Z^k(n,p,q)) \geq \tfrac{p}{2} (1-o(1)) \tbinom{n}{k+1}.
\]
Applying the second moment method, it is not hard to see that the number of good $H \in \binom{V}{k+1}$, after choosing $a$, is at least $\frac{1}{2}(1-o(1)) \binom{n}{k+1}$ with probability tending to $1$. A Chernoff bound then tells us that a.a.s.\ $f_k(Y^k(n,p)) \geq \frac{p}{2} (1-o(1)) \binom{n}{k+1}$.
As $Y^k(n,p)$ is a subcomplex of $Z$, this yields the desired bound.
With a similar argument, also applying a Chernoff bound, we get that a.a.s.
\[
|\delta a| \leq \frac{q}{2}(1+o(1)) \binom{n}{k+1}.
\]
As we have $\|[a]\|\geq 1/2-o(1)$ with the same probability as before, we see that a.a.s.\
\[
\varepsilon(Z) \leq \frac{\|\delta a\|}{\|[a]\|}=O\Big(\frac{q}{p}\Big) = o(1),
\]
if $q=o(p)$. In the extremal case $q = C\cdot \log(n)/n$ and $p=1$, we achieve $\varepsilon(Z)=O(\log(n)/n)$.
Furthermore, since $Z$ has $X^k(n,q)$ as a subcomplex, we know that the groups $\tilde{H}^{k-1}(Z,\myboldmath{Z}_2) $ and $\tilde{H}_{k-1}(Z,\myboldmath{Z})$ are a.a.s.\ trivial if $q\geq C\cdot\log n/n$ for $C$ sufficiently large (see \cite{HoffmanKahlePaquette-2013, LinialMeshulam:HomologicalConnectivityRandom2Complexes-2006, MeshulamWallach:HomologicalConnectivityRandomComplexes-2009}).
For the analysis of the spectrum of $\Delta_{k-1}^\textup{up}(Z)$, we can again consider the links of \mbox{$(k-2)$}-faces. For $H \in \binom{V}{k-1}$, the random graph $\lk(H,Z)$ is the union of $\lk(H,Y^k(n,p))$ and $\lk(H,X^k(n,q))$. Hence, it has the distribution $G(n-k+1,r)$ with $r=p/2+q-pq/2$, the union of $G(n-k+1,p/2)$ and $G(n-k+1,q)$. As $r \geq p/2$, we see that also for this construction, a.a.s., the nontrivial part of the spectrum of the normalized Laplacian $\Delta_{k-1}^\textup{up}(Z)$ lies in the interval $[1-\gamma_\Delta,1+\gamma_\Delta]$ with $\gamma_\Delta=O(1/\sqrt{rn})$.
\subsubsection*{Acknowledgements} The second author is grateful to Roy Meshulam for helpful discussions during which, in particular, he learned about Garland's results.
We would also like to thank Matt Kahle and the referees of this paper and of the extended abstract for helpful comments.
\bibliographystyle{abbrv}
| {
"timestamp": "2015-08-26T02:10:29",
"yymm": "1411",
"arxiv_id": "1411.4906",
"language": "en",
"url": "https://arxiv.org/abs/1411.4906",
"abstract": "We consider higher-dimensional generalizations of the normalized Laplacian and the adjacency matrix of graphs and study their eigenvalues for the Linial-Meshulam model $X^k(n,p)$ of random $k$-dimensional simplicial complexes on $n$ vertices. We show that for $p=\\Omega(\\log n/n)$, the eigenvalues of these matrices are a.a.s. concentrated around two values. The main tool, which goes back to the work of Garland, are arguments that relate the eigenvalues of these matrices to those of graphs that arise as links of $(k-2)$-dimensional faces. Garland's result concerns the Laplacian; we develop an analogous result for the adjacency matrix. The same arguments apply to other models of random complexes which allow for dependencies between the choices of $k$-dimensional simplices. In the second part of the paper, we apply this to the question of possible higher-dimensional analogues of the discrete Cheeger inequality, which in the classical case of graphs relates the eigenvalues of a graph and its edge expansion. It is very natural to ask whether this generalizes to higher dimensions and, in particular, whether the higher-dimensional Laplacian spectra capture the notion of coboundary expansion - a generalization of edge expansion that arose in recent work of Linial and Meshulam and of Gromov. We show that this most straightforward version of a higher-dimensional discrete Cheeger inequality fails, in quite a strong way: For every $k\\geq 2$ and $n\\in \\mathbb{N}$, there is a $k$-dimensional complex $Y^k_n$ on $n$ vertices that has strong spectral expansion properties (all nontrivial eigenvalues of the normalised $k$-dimensional Laplacian lie in the interval $[1-O(1/\\sqrt{n}),1+O(1/\\sqrt{n})]$) but whose coboundary expansion is bounded from above by $O(\\log n/n)$ and so tends to zero as $n\\rightarrow \\infty$; moreover, $Y^k_n$ can be taken to have vanishing integer homology in dimension less than $k$.",
"subjects": "Combinatorics (math.CO); Discrete Mathematics (cs.DM)",
"title": "On Eigenvalues of Random Complexes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9871787868650146,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7093811466553536
} |
https://arxiv.org/abs/2207.09358 | Disoriented homology and double branched covers | This paper provides a convenient and practical method to compute the homology and intersection pairing of a branched double cover of the 4-ball.To projections of links in the 3-ball, and to projections of surfaces in the 4-ball into the boundary sphere, we associate a sequence of homology groups, called the disoriented homology. We show that the disoriented homology is isomorphic to the homology of the double branched cover of the link or surface. We define a pairing on the first disoriented homology group of a surface and show that this is equal to the intersection pairing of the branched cover. These results generalize work of Gordon and Litherland, for embedded surfaces in the 3-sphere, to arbitrary surfaces in the 4-ball. We also give a generalization of the signature formula of Gordon-Litherland to the general setting.Our results are underpinned by a theorem describing a handle decomposition of the branched double cover of a codimension-2 submanifold in the $n$-ball, which generalizes previous results of Akbulut-Kirby and others. | \section{Introduction}
Branched covering spaces have proved to be an extremely efficient way of encoding embedding information about submanifolds \cite{ak,greene,lisca,OSz}. The basic information about a covering space is its homology; this is often the starting point for extracting other invariants, such as various gauge theoretic invariants. In \cite{gl}, Gordon and Litherland showed that the first homology of an embedded spanning surface $F$ for a link $L$ in $S^3$ is isomorphic to the second homology of the double cover $X$ of the 4-ball branched along the properly-embedded surface obtained by pushing the interior of $F$ into the ball. Moreover, they defined a bilinear form on $H_1(F)$ and showed that it is isomorphic to the intersection form of $X$; they also derived a formula for the signature of $L$ in terms of this form.
The main goal of this paper is to generalize these results to embedded surfaces in the 4-ball. As a warm-up we consider links and tangles in the 3-ball. We use the radial distance function to induce a bridge decomposition on the radial projection $P \subset S^2$ of the link or tangle $L \subset B^3$, as in the example of the trefoil shown in
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{trefoilDH}
\caption
{{\bf A bridge decomposition of the left-handed trefoil.} Underbridges are shown in green, with overbridges in blue. Indicated is a choice of disorientations of the overbridges.}
\label{fig:trefoilDH}
\end{figure}
Figure \ref{fig:trefoilDH}. We choose a disorientation of each overbridge, again as shown in the figure: each segment of the complement of the underbridges in the overbridge is given an orientation. These are chosen so that the orientation switches at each crossing.
We use this data to define the disoriented chain complex $\mathcal{DC}_*(P)$ (see \Cref{sec:tangle}): $\mathcal{DC}_1(P)$ is the free abelian group generated by the overbridges, $\mathcal{DC}_0(P)$ is generated by the underbridges, and the boundary operator between them is given by counting with sign how many times each overbridge points into or out of each underbridge. The boundary operator from $\mathcal{DC}_0(P)$ to $\mathcal{DC}_{-1}(P)={\mathbb Z}$ is the augmentation homomorphism. We show that the homology of this complex computes that of the double branched cover of $L$ (for a precise statement see \Cref{prop:3d-homology}; we expect that this is similar to existing methods for computing homology of 3-dimensional double branched covers):
\begin{maintheorem}
\label{thm:tangleintro}
The disoriented homology of a link or tangle $L$ in $B^3$ is isomorphic to the shifted reduced homology of the double cover of $B^3$ branched along $L$, i.e.,
$$H_*(\mathcal{DC}_*(P)) \cong \widetilde H_{*+1}(\Sigma_2(B^3,L)).$$
\end{maintheorem}
The disoriented homology of a compact surface $F$ properly embedded in the 4-ball, with or without boundary, may be defined in a similar manner (see \Cref{sec:slice-disoriented}). Starting with a handle decomposition of $F \subset B^4$ induced by the radial distance function, we consider the images of these handles in the radial projection $F_s \subset S^3$ of $F$ as handles of $F_s$. Assuming the projection to be regular, $F_s$ may be decomposed as a ribbon-immersed surface and a union of disjoint disks that are 2-handles of $F_s$, as in \Cref{fig:proj-plane-intro}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1.5]{proj-plane-intro}
\caption
{{\bf The radial projection of a projective plane with a compatible handle decomposition.} The round disk is the 0-handle, the green band the 1-handle, and the red and blue disks combine to give the 2-handle. The green arc signifies the ribbon singularity. The 2-handle is split into four subdisks by its intersection with the ribbon surface.}
\label{fig:proj-plane-intro}
\end{figure}
The group $\mathcal{DC}_k(F_s)$ for $k \ge 0$ is freely generated by the $k$-handles of $F_s$. The boundary operator from $\mathcal{DC}_2$ to $\mathcal{DC}_1$ for each 2-handle essentially counts with sign how many times the intersection of the 2-handle with a 1-handle goes over that 1-handle; this is computed using \emph{disorientations} of handles. Disorientations of 1-handles are orientations of their cores, switching each time they pass through a ribbon singularity; disorientations of 2-handles are determined by chessboard coloring of the regions into which 2-handles are split by their intersections with the ribbon surface. The remaining boundary homomorphisms are defined similarly to the 3-dimensional case. We also show that taking linking numbers with double normal pushoffs gives rise to a pairing $\lambda$ on the first disoriented homology group of $F_s$, which we call the GL-pairing of $F_s$. To define the pairing we use a more geometric description of the first disoriented homology group for a ribbon-immersed surface (see \Cref{sec:ribbon-disoriented}); \Cref{fig:DHgen} shows an example. We prove the following (see \Cref{thm:4d-homology} for a more precise formulation):
\begin{maintheorem}
\label{thm:surfaceintro}
The disoriented homology of a properly-embedded compact surface $F$ in the 4-ball is isomorphic to the shifted reduced homology of the double cover $\Sigma_2(B^4,F)$ of the 4-ball branched along $F$, i.e.,
$$H_*(\mathcal{DC}_*(F_s)) \cong \widetilde H_{*+1}(\Sigma_2(B^4,F)).$$
Moreover, the intersection pairing of $\Sigma_2(B^4,F)$ under this identification agrees with the GL-pairing $\lambda$.
\end{maintheorem}
The proof of this theorem relies on a Kirby diagram for the branched double cover $\Sigma_2(B^4,F)$; in particular, we give a recipe for drawing the attaching spheres of 3-handles. This is illustrated in \Cref{ex:proj-Kirby} for the surface in \Cref{fig:proj-plane-intro}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1.2]{DHgen}
\caption
{{\bf A generating set for disoriented homology.} A ribbon-immersed annulus with a generator of its first disoriented homology group. The ribbon singularity is shown in green.}
\label{fig:DHgen}
\end{figure}
A widely-used application of the celebrated paper of Gordon and Litherland is a convenient formula to compute the signature of a link using the signature of the bilinear form associated to a spanning surface for the link in $S^3$. We generalise this to give a signature formula based on the GL-pairing of a slice surface as follows:
\begin{maintheorem}
\label{thm:sigintro}
Let a link $\L$ be the boundary of a slice surface $F \subset B^4$, and let $\lambda$ be the GL-pairing on the first disoriented homology group of $F$. Then for any choice $\vec\L$ of orientation for $\L$, its signature is given by
$$\sigma(\vec\L)=\sigma(\lambda) - \frac 12 \lk(\vec\L,\vec\L^F),$$
where $\vec\L^F$ is a parallel copy of $\vec\L$ on the radial projection $F_s$ of $F$, oriented consistently with $\vec\L$.
\end{maintheorem}
The organization of the paper is as follows: we define the disoriented homology of a properly embedded tangle $L \subset B^3$ in \Cref{sec:tangle}. For a properly embedded surface $F \subset B^4$ we define in \Cref{sec:surface} its description $F_s \subset S^3$ which in the case of a ribbon surface is its ribbon immersion; for a general surface it is its regular projection with a compatible handle decomposition. Based on this description we define the disoriented homology $DH_*(F_s)$, for ribbon surfaces in \Cref{sec:ribbon-disoriented} and for general slice surfaces in \Cref{sec:slice-disoriented}. In the case of a ribbon surface the first disoriented homology $DH_1(F_s)$ is a subgroup of the singular homology group $H_1(F_s;{\mathbb Z})$ of the ribbon-immersed surface $F_s \subset S^3$ generated by those cycles that in a neighborhood of every ribbon singularity are multiples of the chain pictured in \Cref{fig:localdis}; the structure of this chain also gives the homology its name.
We define the pairing $\lambda$ on $DH_1(F_s)$ in \Cref{sec:pairing}, generalizing the Gordon-Litherland pairing.
\Cref{sec:dbc} is the technical core of the paper, in which we relate a handle decomposition of a codimension-2 submanifold $F$ in the $n$-ball to a handle decomposition of its double branched cover $X$.
In \Cref{subsec:dbc3} we show how a bridge decomposition of $L$ gives rise to a handle decomposition of the double cover $Y$ of $B^3$ branched along $L$ and give a recipe for drawing a Heegaard diagram of the double branched cover of a link in $S^3$. We show in \Cref{prop:3d-homology} that the disoriented homology of a tangle is isomorphic to the shifted homology of $Y$, proving \Cref{thm:tangleintro}.
In \Cref{subsec:dbc4} we consider the case of a surface $F$ in the 4-ball, and show how to construct a Kirby diagram for $X$ based on a handle decomposition of $F$. We use this to prove \Cref{thm:surfaceintro}.
In \Cref{sec:signature} we prove \Cref{thm:sigintro}.
\noindent {\bf Acknowledgements:} The authors thank Josh Greene for many helpful conversations. The second author was partially supported by Slovenian Research Agency (ARRS) Research program P1-0288.
\section{Disoriented homology of tangles}\label{sec:tangle}
Let $L$ be a properly embedded compact 1-manifold in the 3-ball, i.e., a tangle or a link, to which the radial distance function $\rho$ restricts to be Morse, giving a handle decomposition of $L$. This is known as a bridge decomposition of $L$. We assume that the radial projection $P \subset S^2$ of $L$ has only ordinary double points. The bridge decomposition of $L$ induces a bridge decomposition of $P$ which then carries the same information as a diagram of $L$; we refer to double points of $P$ as crossings. In this context $0$-handles and $1$-handles are called underbridges and overbridges respectively. We further assume that
\begin{itemize}
\item all endpoints of $P$ are contained in underbridges, and
\item at each crossing, an overbridge crosses over an underbridge.
\end{itemize}
For each overbridge of $P$ choose a \emph{disorientation} as follows: split the overbridge into subarcs separated by crossings and give consecutive subarcs opposite orientations. Denote the projection with this extra information (bridge decomposition and disorientations of overbridges) by $P^\flat$. Define the \emph{disoriented chain complex} $\mathcal{DC}_*(P^\flat)$ of $L$ as follows. Let $\mathcal{DC}_0(P^\flat)$ be the free abelian group generated by the underbridges and
$\mathcal{DC}_1(P^\flat)$ be the free abelian group generated by the disoriented overbridges. The boundary homomorphism $\partial_P^\flat : \mathcal{DC}_1(P^\flat) \to \mathcal{DC}_0(P^\flat)$ associates to each overbridge a linear combination of underbridges, where if an oriented arc of the overbridge points to/from an underbridge, it contributes plus/minus that underbridge.
Note that the contribution at each crossing is $\pm 2$ times the underbridge at the crossing.
Let $\mathcal{DC}_{-1}(P^\flat) ={\mathbb Z}$ and $\varepsilon : \mathcal{DC}_0(P^\flat) \to {\mathbb Z}$ be the augmentation homomorphism mapping every underbridge to $1$.
Then
$$ 0 \to \mathcal{DC}_1(P^\flat) \stackrel{\partial_P^\flat}{\longrightarrow} \mathcal{DC}_0(P^\flat) \stackrel{\varepsilon}{\to} \mathcal{DC}_{-1}(P^\flat) \to 0,$$
is a chain complex that we refer to as the \emph{disoriented chain complex} of $L$. The homology of this complex is the \emph{disoriented homology} of $L$. We will show in \Cref{subsec:dbc3} that this is isomorphic to the shifted homology of the double branched cover of $B^3$ with branch set $L$. In particular this implies that the disoriented homology of $L$ is independent of the choices involved in its definition.
\begin{example}
\label{eg:trefoilDH}
We illustrate the above with the example of the trefoil $L$ as in Figure \ref{fig:trefoilDH}, using the bridge decomposition and the disorientations of the overbridges as in that figure.
Relative to the labellings of the underbridges and overbridges, the boundary homomorphism is given by the matrix
$$\partial_P^\flat=\left[
\begin{matrix} 1 & -1 & 2 \\ 1 & 2 & -1 \\ -2 & -1 & -1 \end{matrix}
\right].$$
Since the rank of this matrix is $2$, it follows that $H_1(\mathcal{DC}_*(P^\flat)) \cong {\mathbb Z}$.
To compute $H_0(\mathcal{DC}_*(P^\flat))$, observe that the kernel of $\varepsilon$ projects isomorphically onto the subgroup generated by any two of the underbridges.
Hence we may omit the first row of $\partial_P^\flat$. The columns of the resulting matrix generate an index 3 subgroup of ${\mathbb Z}^2$, showing that $H_0(\mathcal{DC}_*(P^\flat)) \cong {\mathbb Z}/3{\mathbb Z}$.
\end{example}
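For illustration only, the same computation can be carried out mechanically. The following sketch, assuming Python with the SymPy library (whose function \texttt{smith\_normal\_form} computes Smith normal forms over ${\mathbb Z}$), recovers $H_1(\mathcal{DC}_*(P^\flat)) \cong {\mathbb Z}$ and $H_0(\mathcal{DC}_*(P^\flat)) \cong {\mathbb Z}/3{\mathbb Z}$ from the matrix above via the rank, the kernel, and the Smith normal form.
\begin{verbatim}
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# boundary map DC_1 -> DC_0 (columns: overbridges, rows: underbridges)
d1 = Matrix([[ 1, -1,  2],
             [ 1,  2, -1],
             [-2, -1, -1]])

# H_1 = ker(d1) is free of rank (#overbridges) - rank(d1)
print("rank:", d1.rank())             # 2, hence H_1 is isomorphic to Z
print("kernel:", d1.nullspace())      # a generator of ker(d1)

# H_0 = ker(eps)/im(d1): drop the first row (projection onto two underbridges)
# and read the cokernel off the Smith normal form
print(smith_normal_form(d1[1:, :], domain=ZZ))   # diag(1, 3), hence H_0 = Z/3Z
\end{verbatim}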
\section{Surfaces in the 4-ball and their representations in the 3-sphere}\label{sec:surface}
Let $F \subset B^4$ be a properly embedded compact surface, not necessarily connected or orientable. We will refer to $F$ as a \emph{slice} surface. Denote by $\L\subset S^3$ the link consisting of the boundary components of $F$. We may assume (after an isotopy rel boundary) that the radial distance function $\rho$ in $B^4$ restricts to a Morse function $\rho_F$ on $F$. If $\rho_F$ has no critical points of index 2, then $F$ is called a \emph{ribbon} surface and it admits a ribbon immersion into $S^3$, in which case we denote the image of this immersion by $F_r$. The immersed surface $F_r$ can be described by first choosing pairwise disjoint embeddings of the 0-handles of $F$ into $S^3$ and then connecting them with pairwise disjoint 1-handles that may form ribbon singularities with the images of the 0-handles. Such a surface has a finite number of ribbon singularities as shown in \Cref{fig:ribbon}; the preimage of each consists of two arcs in $F$, one of which is contained in the interior of $F$ (called the \emph{interior arc}) and one which has its endpoints on the boundary of $F$ (called the \emph{properly embedded arc}). The ribbon immersed surface $F_r$ is embedded away from the ribbon singularities where it has two kinds of singular points: interior double points (in the interior of a ribbon singularity) and boundary double points (endpoints of a ribbon singularity).
Note that $F \subset B^4$ is obtained from $F_r \subset S^3$ by pushing its interior into the interior of $B^4$, where the interior arc of each ribbon singularity is pushed further in than the properly embedded arc. This may be done so that $\rho_F$ is a Morse function with no maxima.
\begin{figure}[htbp]
\centering
\begin{subfigure}[t]{0.6\textwidth}
\centering
\includegraphics[scale=0.8]{ribbon}
\caption{A ribbon-immersed annulus, showing the preimage of the ribbon singularity.}\label{fig:ribbon}
\end{subfigure}%
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[scale=0.8]{cut}
\caption{The associated cut surface.}\label{fig:cut}
\end{subfigure}
\caption
{\bf A ribbon surface $F$ and associated cut surface $F_c$.}
\label{fig:local}
\end{figure}
Sometimes it will be convenient to convert the immersed surface $F_r$ into an embedded surface by removing a small neighborhood of the properly embedded arc in the preimage of each ribbon singularity; we call this the \emph{cut surface} associated to $F_r$, and denote it by $F_c$; see \Cref{fig:cut}.
For a general slice surface $F \subset B^4$ we may assume that $\rho_F$ is a Morse function for which the ordering of the critical points in $F$ given by the distance from the center of $B^4$ agrees with the index --- in particular, we may assume that all minima and saddle points lie in $\rho^{-1}(0,1/3)$, and all maxima in $\rho^{-1}(1/3,1)$. After a further isotopy, supported near the non-critical level $1/2$, we may assume that $F$ is transverse to the sphere of radius $1/2$. Then the sublevel set $\hat{F}=F \cap B^4_{1/2}$ is a properly embedded ribbon surface to which we associate a ribbon-immersed surface $F_r \subset S^3$ as above. We also assume that the radial projection of $F$ to $S^3$ restricts to an embedding on the union of $2$-handles of $F$, that on the interior of each 2-handle this projection is transverse to $F_r$, and that this projection is generic. The last condition restricts possible types of singularities of the projected surface (cf.\ \cite{CKS}, and further detail in \Cref{sec:dbc}).
The boundary $\hat{\L}$ of $\hat{F}$ is the union of two sublinks, $\hat{\L}_0$ and $\hat{\L}_1$. The first of these corresponds to $\L$ (in the sense that a part of the surface $F \smallsetminus \Int \hat{F}$ defines an isotopy between $\hat{\L}_0$ and $\L$) and the second is an unlink consisting of those boundary components of $\hat{F}$ that are capped off in $F$ by the 2-handles. In particular, the components $L_i$ of $\hat{\L}_1$ bound pairwise disjoint embedded disks $d_i \subset S^3$ (images of 2-handles of $F$) that do not intersect $\hat{\L}_0$, but may intersect the interior of the immersed surface $F_r$. We call $\hat{\L}_1$ a \emph{separated sublink} of $\hat{\L}$. This yields a 3-dimensional description of the slice surface $F$ as the union, $F_s$, of the ribbon-immersed surface $F_r$ and the disks $d_i$.
Note that $F_s$ may not be smoothly embedded along the boundaries of the disks $d_i$. The double points interior to $d_i$ form a 1-manifold that may have closed components and arcs that end on the boundary of $F_r$. These endpoints may either be endpoints of ribbon singularities of the ribbon surface $F_r$ (where two sheets of the projected surface $F_s$ meet transversely) or may indeed be singular points of the projected surface, called \emph{pinch points} or Whitney umbrella singularities which occur when the framing curve from $d_i$ intersects $F_r$; note that the framings determined by $d_i$ and $F_r$ along the common boundary agree modulo 2. The standard model for the Whitney umbrella is given by the solutions of $x^2=y^2z$, with $z\ge0$. One may then consider the subset with $xy\le 0$ as a part of the ribbon surface and the subset with $xy\ge 0$ as a part of the 2-handle. The pinch point is at the origin and the double points lie on the $z$-axis.
The last type of singularities in $F_s$ are triple points, where $d_i$ passes through the interior of a ribbon singularity of $F_r$.
Conversely, a slice surface description $F_s \subset S^3$ determines a slice surface $F \subset B^4$. First its ribbon immersed subsurface $F_r$ determines a ribbon surface $\hat{F} \subset B^4_{1/2}$. If $\L_1$ is a separated sublink of the boundary of $F_r$ (or equivalently $\hat{F}$), then $\hat{F}$ may be extended to a (possibly closed) slice surface in $B^4$ obtained by capping off the boundary components in $\L_1$.
\section{Disoriented homology of a ribbon surface}\label{sec:ribbon-disoriented}
The domain of the Gordon-Litherland-type pairing for a ribbon-immersed surface $F_r \subset S^3$ is a subgroup of the first homology group of $F_r$ which we now describe.
Number the ribbon singularities from $1$ to $k$, and choose coordinates in a cylindrical ball $B_j$ centred on the $j$th ribbon singularity in $F_r$, with the surface inside the ball consisting of the disk of radius 2 in the $(x,y)$-plane and the vertical strip $\{0\}\times[-1,1]\times[-1,1]$ lying in the $(y,z)$-plane. The \emph{local disoriented $1$-chain} $\ell_j$ associated to the $j$th ribbon singularity is the sum of four oriented line segments in this coordinate patch: two vertical segments, running from $(0,0,\pm1)$ to the origin, and two horizontal segments, running from the origin to $(\pm2,0,0)$. This is sketched in \Cref{fig:localdis}.
\begin{figure}[htbp]
\centering
\includegraphics{localdis}
\caption
{{\bf Local picture near a ribbon singularity.} The local disoriented 1-chain is shown in blue.}
\label{fig:localdis}
\end{figure}
A \emph{disoriented cycle} is a 1-cycle on $F_r$ of the form
$$a=\sum_{j=1}^k n_j\ell_j + a',$$
where the $n_j$ are integers and $a'$ is a 1-chain supported in the complement of $\Int(B_1\cup\dots\cup B_k)$.
The \emph{(first) disoriented homology group} $DH_1(F_r)$ of the immersed surface $F_r \subset S^3$ is defined to be the subgroup of $H_1(F_r;{\mathbb Z})$ consisting of classes represented by disoriented cycles.
The disoriented homology group $DH_1(F_r)$ of a ribbon-immersed surface is a free abelian group (as a subgroup of $H_1(F_r)$). In simple situations (such as in the lemma below) it is abstractly isomorphic to the first homology of the underlying surface $F$ which in the presence of ribbon singularities is a smaller group than the homology group of the immersed surface $F_r$.
\begin{lemma}
\label{lem:DHsimple}
Let $F_r \subset S^3$ be a ribbon-immersed surface with associated ribbon surface $F\subset B^4$. Suppose that $F$ has a handle decomposition with a single 0-handle and no 2-handles, such that all ribbon singularities are formed by 1-handles passing through the 0-handle; or in other words, the interior arcs are contained in the 0-handle and the properly-embedded arcs are contained in the 1-handles. Then $DH_1(F_r)$ is isomorphic to $H_1(F;{\mathbb Z})$.
\end{lemma}
\begin{proof}
For each 1-handle $h$ of $F$ choose an orientation of its core; these oriented 1-chains, which can be completed to 1-cycles $\gamma_h$ with the addition of oriented arcs in the 0-handle, give a generating set for $H_1(F;{\mathbb Z})$ as a free abelian group. We now describe a corresponding set of generators for $DH_1(F_r)$. If a 1-handle $h$ contains no ribbon singularities, let $\alpha_h=\gamma_h$. Otherwise construct a representative for the class $\alpha_h$ by starting with the core of $h$, split into subarcs by the ribbon singularities; orient the first arc arbitrarily and propagate the orientation along the core by changing the orientation of the arc after every ribbon singularity. Let $a_h$ be obtained from this chain by adding appropriately oriented short pairs of arcs in the 0-handle emanating from its intersections with ribbon singularities as prescribed in \Cref{fig:localdis}. We claim that $a_h$ can be completed to a 1-cycle in $F_r$ by connecting its endpoints with oriented arcs in the 0-handle. Indeed, let $m$ be the number of ribbon singularities along $h$. If $m=2s+1$ is odd, then the endpoints of the core have the same orientation (pointing into or out of the 0-handle) as $2s$ of the other endpoints of oriented arcs comprising $a_h$, whereas the remaining $2(s+1)$ endpoints have the opposite orientation. Similarly, if $m=2s$ is even, the endpoints of the core have opposite orientation, and the other endpoints of oriented subarcs of $a_h$ may be split into two sets of size $2s$ each containing points of one orientation. Hence the endpoints of $a_h$ may be connected up by oriented arcs supported in the 0-handle; we denote the resulting disoriented class by $\alpha_h$. This homology class is well defined, since any two choices of oriented arcs in the 0-handle differ by a trivial cycle.
We now show that any class $\alpha\in DH_1(F_r)$ can be uniquely expressed as a linear combination of $\alpha_h$. By definition, $\alpha$ is represented by a 1-cycle in $F_r$ of the form
$$a=\sum_{j=1}^k n_j\ell_j + a',$$
where $\{\ell_j\}_{j=1}^k$ are the local disoriented 1-chains at ribbon singularities and $a'$ is supported in the complement of the chosen neighborhood of the ribbon singularities. Note that the coefficients $n_j$ are uniquely determined by the homology class $\alpha$ in $F_r$. Consider a 1-handle $h$ of $F$ with $m>0$ ribbon singularities, which we label $j_1,\ldots,j_m$ in the order one encounters them traveling from one end of $h$ to the other. We claim that for all $i<m$, the coefficients satisfy the relation $n_{j_i}+n_{j_{i+1}}=0$. To see this, consider the rectangular part of $h$ between the $i$th and $(i+1)$st ribbon singularity. At the $i$th singularity, the part of $a$ in the rectangle consists of an arc of multiplicity $n_{j_i}$ pointing towards it, and at the $(i+1)$st singularity of an arc of multiplicity $n_{j_{i+1}}$. Since $a$ is a cycle, the sum of the multiplicities of the endpoints of these arcs must be 0 from which the claim follows. We let $n_h=\pm n_{j_i}$ where the sign is chosen so that $n_h\alpha_h$ has the same local multiplicities as $a$ at the ribbon singularities along $h$. Then $\alpha-\sum_h n_h\alpha_h$ is represented by a cycle in the cut surface $F_c$ and hence uniquely expressible as a linear combination of the classes $\alpha_h$ for those 1-handles $h$ that do not contain ribbon singularities.
\end{proof}
\Cref{fig:DHgen} shows an example of a ribbon-immersed surface satisfying the hypotheses of the lemma. In general the conclusion of the lemma does not hold as can be seen in the example in Figure \ref{fig:virtualband}.
Suppose that $G_r \subset S^3$ is a ribbon-immersed surface and $F_r$ is a ribbon-immersed subsurface of $G_r$. If $G_r$ may be obtained from $F_r$ by adding 1-handles (possibly containing ribbon singularities), then the inclusion map of $F_r$ into $G_r$ induces a monomorphism $DH_1(F_r)\to DH_1(G_r)$.
We give another description of $DH_1(F_r)$ which is often easier to compute with and enables us to also define the 0-dimensional disoriented homology group of $F_r$.
We assume that $F_r$ is connected; if not, the same construction may be applied to each of its components.
If the cut surface $F_c$ is disconnected we attach some embedded 1-handles, which we call \emph{virtual bands}, to $F_r$ to form a new ribbon-immersed surface $G_r \subset S^3$. Denote the collection of virtual bands added to $F_r$ to form $G_r$ by $\mathcal{V}$. These handles intersect $F_r$ only along their attaching arcs and are pairwise disjoint. They must also satisfy the following conditions:
\begin{enumerate}[(i)]
\item\label{vb0} a virtual band connects two distinct components of the cut surface $F_c$;
\item\label{vb1} a virtual band is attached to each component of the cut surface $F_c$ that is not a topological disk with two cuts on the boundary;
\item\label{vb2} a virtual band is attached to each component of the cut surface $F_c$ containing the interior arc of a ribbon singularity;
\item\label{vb3} the graph $\Pi(\mathcal{V})$ with vertices corresponding to the components of $F_c$ to which virtual bands are attached, and edges corresponding to virtual bands is connected.
\end{enumerate}
For example, virtual bands may be attached to all components of $F_c$.
An orientation of a virtual band is an orientation of its core; we fix a choice of orientation for each virtual band. We call $G_r$ a \emph{virtually-banded surface} associated to $F_r$.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1.2]{virtualband}
\caption
{{\bf A ribbon surface (in grey) with a virtual band (shown in violet).} A generator for the disoriented homology of the virtually banded surface is shown in blue.}
\label{fig:virtualband}
\end{figure}
To a virtually-banded surface $G_r$ corresponding to $(F_r,\mathcal{V})$ we associate a chain complex $\mathcal{DC}_*(F_r,\mathcal{V})$ with two nontrivial groups. The group $\mathcal{DC}_1(F_r,\mathcal{V})$ is the disoriented homology group $DH_1(G_r)$ of $G_r$; this is typically easier to work with than $DH_1(F_r)$ since it is possible to choose a generating set with at most one generator intersecting each ribbon singularity. The group $\mathcal{DC}_0(F_r,\mathcal{V})$ is the free abelian group on $\mathcal{V}$. The boundary homomorphism
\begin{align}
\partial_{\mathcal{V}}:\mathcal{DC}_1(F_r,\mathcal{V})&\to \mathcal{DC}_0(F_r,\mathcal{V}),\label{eq:boundary}\\
[a]&\mapsto\sum_{V \in \mathcal{V}} \lk(a,K_V)v\notag
\end{align}
is defined as follows. For a virtual band $V\in\mathcal{V}$ let $K_V$ be the boundary of an oriented disk in $S^3$ whose intersection with $G_r$ is a cocore of $V$, where the orientation of the disk is fixed by the requirement that the intersection number between the disk and the core of $V$ is $+1$. Then the boundary map $\partial_{\mathcal{V}}$ is given by the linking numbers with the $K_V$'s, or in other words by the signed count of how many times a disoriented homology class passes over each virtual band in the chosen direction.
\begin{proposition}
\label{prop:dishomviaG}
Let $F_r \subset S^3$ be a ribbon-immersed surface and let $G_r$ be a virtually-banded surface corresponding to $(F_r,\mathcal{V})$ as above. Then $H_*(\mathcal{DC}_*(F_r,\mathcal{V}))$ is (up to isomorphism) independent of the choices in the construction of $G_r$, and the inclusion of $F_r$ in $G_r$ induces a canonical isomorphism
$$DH_1(F_r)\cong H_1(\mathcal{DC}_*(F_r,\mathcal{V})).$$
We call the homology of the chain complex $\mathcal{DC}_*(F_r,\mathcal{V})$ the \emph{disoriented homology} of $F_r$, denoted by $DH_*(F_r)$.
\end{proposition}
The relation of $DH_0(F_r)$ to the 4-dimensional description will be made apparent in \Cref{subsec:dbc4}.
\begin{proof}
We assume that $F_r$, and consequently $G_r$, is connected.
We construct a handle decomposition of $G_r$ without 2-handles and with a single 0-handle containing all the interior arcs of ribbon singularities as follows. Start with a handle decomposition without 2-handles of (the underlying surface of) $F_r$, so that any component of the cut surface $F_c$ to which a virtual band is attached contains a (single) 0-handle. We may assume that the virtual bands are attached to the 0-handles, that the interior arcs of the ribbon singularities are contained in the interiors of the 0-handles, and that the properly embedded arcs of ribbon singularities are contained in the 1-handles.
Recall the graph $\Pi(\mathcal{V})$ in condition \eqref{vb3} governing the attachment of virtual bands. Since this graph is connected, we may choose $\mathcal{V}_0 \subset \mathcal{V}$ so that the graph $\Pi(\mathcal{V}_0)$ is a maximal tree of $\Pi(\mathcal{V})$. Then the union of the 0-handles in $F_r$ and the virtual bands in $\mathcal{V}_0$ forms a single 0-handle in the decomposition of $G_r$ that contains all the interior arcs. Hence by Lemma \ref{lem:DHsimple} the group $\mathcal{DC}_1(F_r,\mathcal{V})=DH_1(G_r)$ is isomorphic to the free abelian group with one generator for each 1-handle of $G_r$; these are 1-handles of $F_r$ and virtual bands not in $\mathcal{V}_0$.
Since the ribbon-immersed surface $G_r$ is obtained from $F_r$ by adding embedded 1-handles, the inclusion of $F_r$ into $G_r$ induces an inclusion of $DH_1(F_r)$ as a subgroup of $DH_1(G_r)$. Note that disoriented 1-cycles in $F_r$ do not intersect the virtual bands, so these are in the kernel of $\partial_{\mathcal{V}}$. On the other hand, if a disoriented 1-cycle in $G_r$ is in the kernel of $\partial_{\mathcal{V}}$, then it is homologous to a disoriented 1-cycle in $F_r$ by a homology supported in the virtual bands of $G_r$. This proves that the inclusion $F_r \hookrightarrow G_r$ induces a canonical isomorphism $H_1(\mathcal{DC}_*(F_r,\mathcal{V}))\cong DH_1(F_r)$.
To prove independence of $H_0(\mathcal{DC}_*(F_r,\mathcal{V}))$ from the choices made in the construction first note that adding a virtual band $V$ to $\mathcal{V}$ and hence to $G_r$ subject to the above conditions yields a new surface $G_r'$ corresponding to $\mathcal{V}'=\mathcal{V} \cup\{V\}$ with isomorphic $H_0$. Indeed, the chain complex $\mathcal{DC}_*(F_r,\mathcal{V}')$ is obtained from $\mathcal{DC}_*(F_r,\mathcal{V})$ by adding a generator to each of its groups:
$$ \mathcal{DC}_0(F_r,\mathcal{V}')=\mathcal{DC}_0(F_r,\mathcal{V})\oplus {\mathbb Z} v, \ \ \mathcal{DC}_1(F_r,\mathcal{V}')=\mathcal{DC}_1(F_r,\mathcal{V})\oplus {\mathbb Z} \alpha, $$
where $\alpha$ is represented by a 1-cycle in $G_r'$ that passes over the virtual band $V$ geometrically once. This shows that the inclusion of $\mathcal{DC}_*(F_r,\mathcal{V})$ into $\mathcal{DC}_*(F_r,\mathcal{V}')$ is a chain equivalence. Note that by condition \eqref{vb3} above at least one of the two components of $F_c$ that $V$ connects is in $G_r$ already connected to a virtual band, and hence to the 0-handle of $G_r$. If this holds for both components, then we take $\mathcal{V}_0'=\mathcal{V}_0$ and $V$ becomes a 1-handle of $G_r'$. The core of $V$ may be completed in the 0-handle of $G_r$ to a 1-cycle in $G_r'$ proving the claim in this case. Otherwise $V$ is connected to a component $A$ that in $G_r$ does not have a virtual band attached to it, thus $A$ is a part of a 1-handle $h$ in $G_r$ between two ribbon singularities. We take $\mathcal{V}_0'=\mathcal{V}_0 \cup \{V\}$ and change the handle decomposition of $F_r$ by introducing another 0-handle in $A$; this splits $h$ into two 1-handles. Recall from the proof of \Cref{lem:DHsimple} that to $h$ corresponds a generator $\alpha_h$ in $DH_1(G_r)$. Then one half of the cycle for $\alpha_h$ (corresponding to one of the new 1-handles) along with the core of $V$ can be as in the proof of that lemma completed to a cycle in $G_r'$ defining the class $\alpha$. Since the boundary homomorphism $\partial_{\mathcal{V}'}$ maps $\alpha$ to $\pm v$, the claim follows.
It follows from the previous paragraph that we may assume virtual bands in $\mathcal{V}$ are attached to all components of the cut surface $F_c$ and that the corresponding graph $\Pi(\mathcal{V})$ is a tree. We now verify that $H_0$ agrees for such choices of collections of virtual bands. Let $\mathcal{V}_1$ and $\mathcal{V}_2$ be two such collections and denote by $G_r^1$ and $G_r^2$ the corresponding virtually banded surfaces. Then $G_r^1$ can be transformed into $G_r^2$ by a sequence of steps where in each step a virtual band $V_1 \in \mathcal{V}_1$ is replaced by a virtual band $V_2 \in \mathcal{V}_2$; we may assume by an isotopy that $V_2$ is disjoint from virtual bands in $\mathcal{V}_1$. By induction we assume there is just one such step, so that $\mathcal{V}_1 \smallsetminus \{V_1\} = \mathcal{V}_2 \smallsetminus \{V_2\}$. Adding $V_2$ to $G_r^1$ gives a new larger surface $\widehat G$ whose graph $\Pi(\mathcal{V}_1 \cup \{V_2\})$ contains a cycle that includes $V_1$ and $V_2$. This graph cycle gives rise to a cycle in the first homology group of $\widehat G$. The homology class of this cycle (oriented consistently with $V_1$) is represented by a 1-chain $b$, which may be assumed to be
disjoint from all ribbon singularities. Then for any class $\alpha=[a] \in \mathcal{DC}_1(F_r, \mathcal{V}_1)$ we let $\varphi(\alpha)=[a - \lk(a,K_{V_1})b]$; clearly the cycle on the right may be represented in $G_r^2$. On $\mathcal{DC}_0(F_r, \mathcal{V}_1)$ we let $\varphi$ act as the identity except that it sends $v_1$ to $-\sum_{V\in \mathcal{V}_2} \lk(b,K_V)v$. Clearly $\varphi$ is a well defined isomorphism of the chain complexes.
\end{proof}
\begin{example}
Consider the ribbon immersed surface $F_r \subset S^3$ shown in \Cref{fig:virtualband}. The disoriented chain complex for the indicated virtual band is
$$ \mathcal{DC}_1 \cong {\mathbb Z},\quad \mathcal{DC}_0 \cong {\mathbb Z},$$
where the boundary homomorphism is multiplication by $2$. Hence the disoriented homology is
$$DH_1(F_r)=0,\quad DH_0(F_r) \cong {\mathbb Z}/2{\mathbb Z}.$$
Let $F \subset B^4$ be a properly embedded surface obtained by pushing the interior of $F_r$ into the 4-ball and let $X$ be the branched double cover of the 4-ball with branch set $F$. Then according to \Cref{thm:4d-homology} the reduced homology of $X$ is nontrivial only in dimension $1$ and
$$H_1(X;{\mathbb Z}) \cong {\mathbb Z}/2{\mathbb Z}.$$
\end{example}
\section{Disoriented homology of a slice surface}\label{sec:slice-disoriented}
A slice surface $F \subset B^4$ can be described (as in Section \ref{sec:surface}) by $F_s \subset S^3$ which consists of a ribbon-immersed surface $F_r \subset S^3$ along with a separated sublink $\L_0=\{L_1,\ldots,L_m\}$ of its boundary. Boundary components in $\L_0$ bound pairwise disjoint disks $d_i \subset S^3$ that do not intersect the rest of the boundary. Choose disjoint small closed regular neighborhoods $N_i$ of $d_i$. We assume $N_i$ is small enough so that $N_i \cap F_r$ is a regular neighborhood of $d_i \cap F_r$ and that the boundary spheres $S_i$ of $N_i$ intersect $F_r$ transversely in its interior. Then the intersection $S_i \cap F_r$ is a 4-valent graph whose vertices are the intersections of $S_i$ with the ribbon singularities.
\begin{lemma}
With the notation as above, the intersection $S_i \cap F_r$ determines a disoriented 1-cycle $b_i$ and hence a homology class $\beta_i$ in $DH_1(F_r)$, well defined up to sign.
\end{lemma}
\begin{proof}
Color the faces of the graph $S_i \cap F_r$ on $S_i$ in a chessboard fashion. A choice of orientation of the sphere induces orientations of the faces. Orient the edges of the graph consistently with the black faces, as shown in Figure \ref{fig:discyc}. Then the orientation on the graph is consistent with it representing a disoriented 1-cycle $b_i$ in $F_r$.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{discyc.pdf}
\caption
{{\bf The disoriented cycle $b_i$.} Orienting the edges of the graph $S_i \cap F_r$ as the boundary of the black faces yields a disoriented homology class. Recall that vertices of the graph come from ribbon singularities.}
\label{fig:discyc}
\end{figure}
If $N_i'$ is another small neighborhood of $d_i$ as above, then the corresponding disoriented cycle $b_i'$ is homologous to $\pm b_i$ since there is a homotopy transforming one into the other.
\end{proof}
Note that in fact $L_i$ uniquely determines the class $\beta_i$. The sphere $S_i$ can be chosen as any separating sphere for this component of $\partial F_r$. Any two such spheres are isotopic in the complement of $\partial F_r$ and thus their intersections with $F_r$ determine the same disoriented homology class (up to sign).
Let $G_r$ be any virtually-banded surface associated to $F_r$ through a choice of virtual bands $\mathcal{V}$. Together with the link $\L_0$ it determines a disoriented chain complex $\mathcal{DC}_*(F_r,\mathcal{V},\L_0)$ as follows. We let $\mathcal{DC}_k(F_r,\mathcal{V},\L_0)=\mathcal{DC}_k(F_r,\mathcal{V})$ for $k=0,1$, and extend this complex to include another group, $\mathcal{DC}_2(F_r,\mathcal{V},\L_0)$, which is the free abelian group with basis the disks $d_i$. The boundary homomorphism $\partial_{\mathcal{V}} \colon \mathcal{DC}_2(F_r,\mathcal{V},\L_0) \to \mathcal{DC}_1(F_r,\mathcal{V})$ sends $d_i$ to $\beta_i$. Since the support of $\beta_i$ lies in $F_r$, it follows that $(\mathcal{DC}_*(F_r,\mathcal{V},\L_0),\partial_{\mathcal{V}})$ is indeed a chain complex.
\begin{definition}\label{def:DHv1}
We call the homology of the complex $(\mathcal{DC}_*(F_r,\mathcal{V},\L_0),\partial_{\mathcal{V}})$ the \emph{disoriented homology} of the slice surface description $F_s$ and denote it by $DH_*(F_s)$.
\end{definition}
It is clear from the case of ribbon surfaces that the resulting homology is independent of the choice of virtual bands.
We give yet another description of the disoriented homology of a slice surface, this time defined in terms of a handle decomposition of the surface. This is analogous to the disoriented homology of a link and provides a convenient way to identify the disoriented homology with the homology of the double branched cover of the 4-ball.
A handle decomposition of $F$ determines a ribbon subsurface of $F$. We will refer to the images of the handles of $F$ in the corresponding ribbon-immersed surface $F_r$ also as handles. We assume that all the ribbon singularities are formed by 1-handles passing through the 0-handles of $F_r$. The handle decomposition also determines a separated sublink $\L_0$ of the boundary of $F_r$ whose components bound disks $d_i$ (the 2-handles). Given this, choose for each 1-handle $h_j$ of $F_r$ a \emph{disorientation} of its core, i.e., orient the arcs into which ribbon singularities split the core in such a way that any two consecutive arcs have opposite orientations. Denote the disoriented core of $h_j$ by $c_j$. Let $\Gamma_i$ be the intersection of the disk $d_i$ with $F_r$; we assume that this intersection is transverse in the interior of $d_i$. Then $\Gamma_i$ is a graph that contains all of $\partial d_i$ and whose interior vertices are 4-valent corresponding to ribbon singularities. Its vertices on the boundary are 3-valent and correspond to pinch points or ribbon singularities. Choose a chessboard coloring of the faces of $\Gamma_i$ on $d_i$. Then orienting all the black faces consistently with one orientation of the disk $d_i$ and giving all the white faces the opposite orientation determines a \emph{disorientation} of the 2-handle $d_i$ -- we denote the disk along with the chosen disorientation by $d_i^\flat$. We denote this slice surface description of $F$ with a chosen handle decomposition of $F_r$ and chosen disorientations of its 1- and 2-handles as described above by $F_s^\flat$.
The disoriented chain complex for $F_s^\flat$ is given as follows:
\begin{itemize}
\item $\mathcal{DC}_0(F_s^\flat)$ is the free abelian group generated by the 0-handles;
\item $\mathcal{DC}_1(F_s^\flat)$ is the free abelian group generated by the disoriented cores of the 1-handles;
\item $\mathcal{DC}_2(F_s^\flat)$ is the free abelian group generated by the disoriented 2-handles.
\end{itemize}
The boundary homomorphism $\partial_1^\flat : \mathcal{DC}_1(F_s^\flat) \to \mathcal{DC}_0(F_s^\flat)$ is given by the signed count of the number of times a given disoriented core points into (positive contribution) or away from (negative contribution) a 0-handle; note that the contribution at each ribbon singularity is $\pm 2$ times the 0-handle containing the interior arc of the singularity. To define the boundary homomorphism $\partial_2^\flat : \mathcal{DC}_2(F_s^\flat) \to \mathcal{DC}_1(F_s^\flat)$, orient the edges of $\Gamma_i$ as the boundary of the black regions in $d_i^\flat$. This data determines a disoriented homology class $\beta_i^\flat=[b_i^\flat]$ in the ribbon immersed surface $F_r$ as follows: $b_i^\flat$ is the linear combination of the oriented edges of $\Gamma_i$, where the edges lying in $\partial d_i$ have multiplicity 1 and the interior edges have multiplicity 2. For each 1-handle $h_j$ of $F_r$ we count how many times $b_i^\flat$ passes over it as follows: choose an orientation of $h_j$
and orient one of its attaching arcs $a_j$ so that the intersection number of $a_j$ and $c_j$ equals 1, $a_j \cdot c_j=1$. Then the coefficient of $c_j$ in $\partial_2^\flat(d_i^\flat)$ is equal to the intersection number $a_j \cdot b_i^\flat$.
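To illustrate $\partial_1^\flat$ in the two simplest configurations (hypothetical ones, not tied to any figure in this paper): if a 1-handle $h_j$ runs from a 0-handle $m_1$ to a 0-handle $m_2$ without meeting any ribbon singularity, then its disoriented core is a single oriented arc and, up to an overall sign,
$$\partial_1^\flat(c_j)=m_2-m_1,$$
while if the core of $h_j$ passes once through a 0-handle $m_3$ at a ribbon singularity, then the two arcs of the disoriented core both point into (or both away from) $m_3$ and, up to an overall sign,
$$\partial_1^\flat(c_j)=2m_3-m_1-m_2.$$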
\begin{definition}\label{def:DHv2}
We call the augmented chain complex $(\mathcal{DC}_*(F_s^\flat),\partial_*^\flat)$, where the augmentation homomorphism $\partial_0^\flat=\varepsilon : \mathcal{DC}_0(F_s^\flat) \to \mathcal{DC}_{-1}(F_s^\flat)={\mathbb Z}$ sends each 0-handle to 1, the \emph{cellular disoriented complex} of the slice surface description $F_s^\flat$.
\end{definition}
\begin{proposition}\label{prop:DHiso}
The homology of the cellular disoriented complex $(\mathcal{DC}_*(F_s^\flat),\partial_*^\flat)$ is isomorphic to the disoriented homology of $F_s$.
\end{proposition}
\begin{proof}
We will construct a chain equivalence $f_* : \mathcal{DC}_*(F_s^\flat) \to \mathcal{DC}_*(F_r,\mathcal{V},\L_0)$ for a particular choice of a virtually banded surface $G_r$, determined by a collection of virtual bands $\mathcal{V}$ for $F_r$. Choose a 0-handle $m_0$ of $F_r$ and connect this 0-handle to every other 0-handle $m_i$, $i \ge 1$, by a virtual band $V_i$. Orient virtual bands so that they point to $m_0$. Let $f_0$ be given by
$$f_0(m_i)=v_i,\ i \ge 1, \quad f_0(m_0)=0.$$
By \Cref{lem:DHsimple}, $\mathcal{DC}_1(F_r,\mathcal{V},\L_0)=DH_1(G_r)$ is generated by elements corresponding to 1-handles of $F_r$. In fact, a generator $\alpha_j$ corresponding to a 1-handle $h_j$ may be obtained by completing the disoriented core $c_j$ of $h_j$ to a disoriented 1-cycle $\bar c_j$ in $G_r$. This defines the homomorphism $f_1$:
$$f_1(c_j)=\alpha_j=[\bar c_j].$$
Finally, $f_2$ is given by sending each disoriented 2-handle $d_i^\flat$ to the disk $d_i$.
$$
\begin{CD}
\mathcal{DC}_2(F_s^\flat) @>{\partial_2^\flat}>> \mathcal{DC}_1(F_s^\flat) @>{\partial_1^\flat}>> \mathcal{DC}_0(F_s^\flat) @>{\varepsilon}>> {\mathbb Z} \\
@V{\cong}V{f_2}V @V{\cong}V{f_1}V @VV{f_0}V @VV{f_{-1}}V \\
\mathcal{DC}_2(F_r,\mathcal{V},\L_0) @>{\partial_{\mathcal{V}}}>> \mathcal{DC}_1(F_r,\mathcal{V},\L_0) @>{\partial_{\mathcal{V}}}>{\phantom{\partial}}> \mathcal{DC}_0(F_r,\mathcal{V},\L_0) @>{\partial_{\mathcal{V}}}>> 0
\end{CD}
$$
We verify that $f_*$ is a chain map. Note that the (algebraic count of the) number of times $\bar c_j$ goes over the virtual band $V_i$ connecting $m_i$ to $m_0$ is the same as the coefficient of $m_i$ in $\partial_1^\flat c_j$, hence
$$\partial_{G_r} \circ f_1=f_0 \circ \partial_1^\flat.$$
To show the commutativity of the left square, we need to see that for each disk $d_i$, the resulting 1-cycles $b_i$ and $f_1(b_i^\flat)$ give the same element of disoriented homology (up to sign).
The sphere $S_i$ is a double push-off of the disk $d_i$ and therefore a chessboard coloring of $d_i$ determines a chessboard coloring of $S_i$ by changing all the colors on one of the hemispheres. This orients the two edges of $S_i \cap F_r$ corresponding to a given interior edge of $\Gamma_i$ consistently. A homotopy collapsing the sphere $S_i$ onto the disk $d_i$ now induces a homology between $b_i$ and $\pm b_i^\flat$, as elements of the first homology group of the immersed surface. We choose the sign of $b_i$ so that
$$\partial_{G_r} \circ f_2=f_1 \circ \partial_2^\flat$$
holds.
We claim that $f_*$ has a chain homotopy inverse $g_* : \mathcal{DC}_*(F_r,\mathcal{V},\L_0) \to \mathcal{DC}_*(F_s^\flat)$, where $g_i=f_i^{-1}$ for $i=1,2$, and $g_0$ is given by $g_0(v_i)=m_i-m_0$. Clearly $g_*$ is also a chain map: the commutativity of the left and right squares is clear, and for the middle square it follows from the argument for $f_*$ and the choice of $g_0$.
Note that also $f_0 \circ g_0=id$, whereas $g_0 ( f_0(m_i))=m_i-m_0$ for $i \ge 1$, and $g_0 ( f_0(m_0))=0$. Hence a chain homotopy between $id$ and $g_* \circ f_*$ is given by $H : \mathcal{DC}_*(F_s^\flat) \to \mathcal{DC}_{*+1}(F_s^\flat)$ whose only nontrivial component is $H_{-1}$ which sends $1 \in {\mathbb Z}$ to $m_0$:
\begin{equation*}
id-g_0 \circ f_0= H_{-1} \circ \varepsilon. \qedhere
\end{equation*}
\end{proof}
\section{The Gordon-Litherland type pairing on the disoriented homology group}
\label{sec:pairing}
Let $F_r \subset S^3$ be a ribbon immersed surface. Given two disoriented homology classes $\alpha$, $\beta \in DH_1(F_r)$ represented by disoriented cycles $a$ and $b$ we wish to follow Gordon and Litherland \cite{gl}, and define the pairing of $\alpha$ and $\beta$ to be the linking number of $a$ and $\tau b$,
where $\tau b$ is obtained by pushing $b$ off $F_r$ in the normal direction on both sides.
Of course, we need to take care in defining $\tau b$ in the vicinity of a ribbon singularity.
Recall that $b$ is represented by a 1-chain on $F_r$ whose support near each ribbon singularity is an integer multiple of the local disoriented 1-chain $\ell_j$ shown in \Cref{fig:localdis}. We take coordinates in a ball neighbourhood $B_j$ of each ribbon singularity as before. The push-off $\tau \ell_j$ of $\ell_j$ then consists of two disjoint oriented line segments in each of the planes $z=\pm1$. The starting points of the segments in the plane $z=1$ are $(\pm1,0,1)$ and the endpoints are $(\pm2,0,1)$. We then take the vertical translates of these two segments in the plane $z=-1$. Away from the ribbon singularity we take normal pushoffs on either side of $F_r$ as usual, chosen to match up with (the given multiple of) $\tau \ell_j$. The result is a (singular if $\max |n_j|>1$) oriented link $\tau b$ in $S^3\smallsetminus F_r$, as illustrated in \Cref{fig:pushoff}.
\begin{figure}[htbp]
\centering
\includegraphics{pushoff}
\caption
{{\bf The double push-off near a ribbon singularity.} The local disoriented 1-chain is shown in blue, with its double push-off in red.}
\label{fig:pushoff}
\end{figure}
The Gordon-Litherland type form for the ribbon-immersed surface $F_r \subset S^3$ is now defined to be
$$\lambda_{F_r}(\alpha,\beta)=\lk(a,\tau b).$$
Note that $a$ and $b$ in the above formula may be singular; see \Cref{subsec:lk} for discussion of linking numbers in this case.
\begin{example}
\label{eg:DHgen}
One may check that the square $\lambda_{F_r}(\alpha,\alpha)$ of the generator shown in \Cref{fig:DHgen} is $6$, which agrees with the determinant of the boundary of the given surface.
\end{example}
\begin{proposition}
For a ribbon-immersed surface $F_r$, $\lambda_{F_r}$ is a well-defined symmetric bilinear form on $DH_1(F_r)$. Moreover, if $G_r$ is a ribbon-immersed surface obtained from $F_r$ by adding 1-handles, then the restriction of $\lambda_{G_r}$ to the disoriented homology of $F_r$ agrees with $\lambda_{F_r}$.
\end{proposition}
\begin{proof}
Since the linking number of two disjoint cycles $a$ and $\tau b$ depends only on the homology classes of the cycles, and since a homology between $b$ and $b'$ in $F_r$ naturally gives rise to a homology between $\tau b$ and $\tau b'$ in the complement of $F_r$, it follows that $\lambda_{F_r}$ is well-defined. That $\lambda_{F_r}$ is symmetric follows similarly as in the case of embedded surfaces \cite{gl}. Let $N$ be the immersed normal $B^1$-bundle of $F_r$ in $S^3$. Self-intersections of $N$ are cubes located at ribbon singularities of $F_r$. Denote by $\partial ' N$ the part of the boundary of $N$ that comes from the $S^0$-bundle; we smooth the corners in $\partial ' N$ along the edges of the cubes at ribbon singularities. Let the positive normal direction to $\partial ' N$ be given by the outward pointing normal and for any 1-cycle $c$ on $\partial ' N$ denote by $c^+$ a nearby pushoff of $c$ in this direction and by $c^-$ a nearby pushoff of $c$ in the opposite direction. Note that $\tau a$ may be viewed as a 1-cycle in $\partial ' N$ and that it is homologous to $2a$ in $N$. Then
$$\lk(a,\tau b)= \lk(a,\tau b^+)=\lk(\tau a,\tau b^+)/2$$
and therefore
\begin{align*}
2\big(\lambda_{F_r}([a],[b])-\lambda_{F_r}([b],[a])\big) &= \lk(\tau a,\tau b^+)- \lk(\tau b,\tau a^+)\\
&= \lk(\tau a, \tau b^+ - \tau b^-) = \tau a \cdot B,
\end{align*}
where $B$ is the 2-chain with boundary $\tau b^+ - \tau b^-$, obtained by restricting the normal $B^1$-bundle of $\partial ' N$ to $\tau b$.
Note that each intersection point $x$ between $a$ and $b$ in $F_r$ gives rise to a pair of intersection points $\tau x$ between $\tau a$ and $\tau b$ in $\partial ' N$ at which the orientations of the normal to $\partial ' N$ are opposite. In other words, the two patches of $\partial ' N$ at the points $\tau x$ have opposite orientations. Hence the local intersection numbers at the two points in $\tau x$ have opposite signs, and the intersection number above vanishes.
\end{proof}
Consider now a description $F_s \subset S^3$ of a slice surface, consisting of a ribbon-immersed surface $F_r \subset S^3$ and a separated sublink $\L_0$ of its boundary. The following lemma shows that the form $\lambda_{F_r}$ induces a well-defined symmetric bilinear form $\lambda_{F_s}$ on $DH_1(F_s)$. Recall that to any component $L_i$ of $\L_0$ we associate a class $\beta_i \in DH_1(F_r)$ represented by a disoriented 1-chain $b_i$ whose support is the intersection of a separating sphere $S_i$ for $L_i$ with $F_r$.
\begin{lemma}
With the notation as above, $\lambda_{F_r}(\alpha,\beta_i)=0$ for any $\alpha \in DH_1(F_r)$.
\end{lemma}
\begin{proof}
Since the sphere $S_i$ is transverse to $F_r$, we may take the double pushoff $\tau b_i$ to be the boundary of a bicollar neighborhood of $b_i$ in $S_i$. Recall that $b_i$ is oriented consistently with the black regions in a chessboard coloring of its complement. The complement of the open bicollar is a union of disks; if we let $c$ be the 2-chain given by the sum of all the black disks minus the sum of all the white disks, then $\tau b_i$ is the boundary of $c$. Since $c$ does not intersect $F_r$, it follows that the linking number of $\tau b_i$ with any (disoriented) 1-cycle on $F_r$ is zero.
\end{proof}
\begin{example}[The positive unknotted real projective plane]\label{ex:proj-homology}
We compute the disoriented homology and the GL-pairing of the unknotted real projective plane $P={\mathbb R}{\mathbb P}^2$ in $B^4$ with radial projection $P_s$ given on the left of \Cref{fig:proj-homology}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1.2]{proj-homology}
\caption
{{\bf The real projective plane.} The left picture shows the radial projection $P_s$ of $P$: the round disk is the 0-handle, the green band the 1-handle, and the red and blue disks combine to give the 2-handle. The middle picture shows a generator for the first disoriented homology (in blue) and its pushoff (in red). The right picture gives a disorientation of the 2-handle and the resulting cycle $b^\flat$.}
\label{fig:proj-homology}
\end{figure}
Denote the 0-handle of $P$ by $m$, its 1-handle by $h$ and 2-handle by $d$. Let $c_h$ be the disoriented core of $h$, given by the part of the blue generator in the middle picture on \Cref{fig:proj-homology} lying on $h$. Choose a disorientation of $d$ as in the right picture on \Cref{fig:proj-homology}. Then the cellular disoriented chains of $P$ are
$$\mathcal{DC}_0(P_s^\flat)={\mathbb Z} m,\quad \mathcal{DC}_1(P_s^\flat)={\mathbb Z} c_h,\quad \mathcal{DC}_2(P_s^\flat)={\mathbb Z} d.$$
The boundary homomorphism on $\mathcal{DC}_1$ is trivial as there is only one 0-handle. Also the boundary homomorphism on $\mathcal{DC}_2$ is trivial as can be seen from the right picture on \Cref{fig:proj-homology} since the two arcs of the boundary cycle $b^\flat$ of $d$ have opposite disorientations. Taking into account the augmentation homomorphism it follows that
$$DH_0(P_s^\flat)=0,\quad DH_1(P_s^\flat)\cong {\mathbb Z},\quad DH_2(P_s^\flat)\cong{\mathbb Z}.$$
The self-pairing of the generator of $DH_1(P_s^\flat)$ is equal to $+1$ as can be seen from the middle picture on \Cref{fig:proj-homology}, since the linking number between the disoriented cycle in blue and its pushoff in red is equal to $+1$.
We return to this example in \Cref{ex:proj-Kirby} where we exhibit a Kirby diagram of the branched double cover of the 4-ball with branch set $P$.
\end{example}
\subsection{Remarks on linking numbers}
\label{subsec:lk}
Given two disjoint oriented knots $K$ and $K'$ in ${\mathbb R}^3$, represented as smooth maps from $S^1$ to ${\mathbb R}^3$, their linking number $\lk(K,K')$ may be defined as the degree of the map
$$(u,v)\mapsto\frac{K(u)-K'(v)}{|K(u)-K'(v)|}$$
from $S^1\times S^1$ to $S^2$. This implies that the linking number is an invariant of homotopy classes of maps with disjoint images and also that it is symmetric. The linking number is then extended to links by requiring it to be bilinear: if $L=K_1\cup\dots\cup K_m$ and $L'=K'_1\cup\dots\cup K'_n$ are disjoint oriented links, then
$$\lk(L,L')=\sum_{i,j}\lk(K_i,K'_j).$$
Alternatively, $\lk(K,K')$ may be defined as the integer representing the homology class of $K'$ in $H_1({\mathbb R}^3 \smallsetminus K;{\mathbb Z}) \cong {\mathbb Z}$, where the generator is a positively oriented meridian of $K$. So in fact the linking number depends only on the homology class of $K'$ in the complement of $K$. To explicitly compute $\lk(K,K')$, one usually relies on a combinatorial description via diagrams: starting with a diagram of $K \cup K'$, assign to each crossing $c$ between $K$ and $K'$ a sign $\varepsilon_c \in \{\pm 1\}$, where $\varepsilon_c=1$ if a bug travelling along the overcrossing arc in the chosen direction sees the undercrossing arc oriented from right to left. Then
$$\lk(K,K') = \frac12 \sum_c \varepsilon_c;$$
if one counts only overcrossings of one knot over the other, the same formula without the half applies.
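As a simple illustration of these two counts (a standard example, not one of the diagrams appearing in this paper), consider a diagram of the positive Hopf link $K \cup K'$ in which the two components cross each other at exactly two crossings, both of sign $+1$. Counting all crossings between $K$ and $K'$ gives
$$\lk(K,K')=\tfrac{1}{2}(1+1)=1,$$
while counting only the crossings at which $K$ passes over $K'$ gives a single crossing of sign $+1$, and hence again $\lk(K,K')=1$.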
\begin{figure}[htbp]
\centering
\includegraphics[scale=1.2]{thetalinking}
\caption
{{\bf The linking number of the embedded $\Theta$-graph with the knot $K$ is $+1$.} Two isotopy representatives of $K$ are shown.}
\label{fig:thetalinking}
\end{figure}
As pointed out in \cite{rolfsen}, the definition allows for each of $L$ and $L'$ to be singular, as long as they are disjoint. In fact, $L$ and $L'$ may be any two disjoint 1-cycles. The case of interest to us is when $L$ and/or $L'$ is an embedded graph with oriented edges, with nonnegative integer multiplicities associated to each edge, in such a way that the signed weighted count of edges at each vertex (inward minus outward) is zero. An example is shown in \Cref{fig:thetalinking}; any such graph has an interpretation as a singular link in which the multiplicity of an edge is the signed number of times it is traversed by the components of the link. Clearly one may apply the above combinatorial formula to compute the linking number of such objects.
\section{Double branched covers and handlebody decompositions}\label{sec:dbc}
In this section we describe the double cover of the $n$-ball $B^n$ branched along a smoothly and properly embedded codimension-two submanifold $F$. We assume that the radial distance function on $B^n$ restricts to a Morse function on the branch locus $F$. Recall that the branched cover of an $n$-ball with branch locus an unknotted properly-embedded codimension-two disk is again a copy of $B^n$. By considering the gluing of this branched cover ball, we show that the induced handle decomposition of $F$ determines a handle decomposition of the branched cover.
A brief description of our method is as follows: we use an imaginary ice cream scoop to remove a single handle of the branch locus at a time from the ball. Taking the double cover of a small scooped-out ball containing a $k$-handle of the branch locus results in a $(k+1)$-handle to attach to the previously constructed double branched cover.
Our main interest is in dimension 4 but we begin with a consideration of the general case, followed by a warm-up in dimension 3. Working from a suitable projection of the branch locus to $\partial B^n$ we produce either a Heegaard diagram of the double branched cover if $n=3$, or a Kirby diagram if $n=4$.
Other sources dealing with Heegaard diagrams of branched covers include \cite{josh,eli,IPY,ciprian,WB}. Our Kirby calculus description in dimension 4 generalises those in \cite{a2016,ak,gs}, and will be used to prove that the disoriented homology of a slice surface $F$ is isomorphic to the homology of the double branched cover of $B^4$ with branch set $F$.
\subsection{Handles and double branched covers}
In this subsection we consider the general problem of obtaining a handle decomposition of the double branched cover of a ball given a handle decomposition of the branch locus induced by the radial distance function on the ball.
Recall that a $k$-handle $H$ of an $n$-dimensional manifold $M$ is the image of the product $B^k\times B^{n-k}$ under an embedding $\varphi$. The attaching region of $H$ is $\varphi(\partial B^k\times B^{n-k})$ and its attaching sphere is $\Sigma:=\varphi(\partial B^k\times\{0\})$. The framing of $\Sigma$ is given by the product structure on the normal disk bundle of $\Sigma$ determined by $\varphi$. The remainder of the boundary of $H$, $\varphi(B^k\times \partial B^{n-k})$, is its coattaching region and $\varphi(\{0\}\times \partial B^{n-k})$ is its coattaching sphere.
Denote by $\rho$ the radial distance function on $B^n$. For any subset $X \subseteq B^n$ and any $r_1 < r_2 \in [0,1]$ let $X_{r_1,r_2}$ denote $X \cap \rho^{-1}([r_1,r_2])$ and for $r \in (0,1]$ let $X_r=X_{0,r}$.
Assume that $F \subset B^n$ is a properly embedded compact codimension-two submanifold such that the restriction $\rho_F$ of $\rho$ to $F$ is Morse. Let $R$ be a critical level of $\rho_F$ that contains a single critical point $c$ whose index is $k$. Let $\varepsilon>0$ be small enough so that $c$ is the only critical point of $\rho_F$ in $F_{{R - \varepsilon},{R + \varepsilon}}$. We may choose a closed ball neighborhood $D \subset B^n_{{R - \varepsilon},{R + \varepsilon}}$ about $c$ so that $h=D \cap F$ has the structure of a $k$-handle of $F$ corresponding to $c$ (see \Cref{fig:nbdD}).
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{nbdD.pdf}
\caption
{{\bf A ball neighborhood $D$ of a critical point.} The part of $F$ contained in $D$ is a handle $h$ corresponding to the critical point $c$. We may imagine that $D$ is attached to the sublevel set $B^n_{R - \varepsilon}$ along its southern hemisphere $S$ and that its northern hemisphere $N$ is contained in $B^n_{R + \varepsilon}$ by flowing the rest of these level sets into $B^n_R$.}
\label{fig:nbdD}
\end{figure}
Denote the southern hemisphere $(\partial D)_R$ of $D$ by $S$ and the northern hemisphere $(\partial D)_{R,1}$ by $N$. Let $C_S$ (respectively, $C_N$) be the radial projection of the core (resp., cocore) of $h$ to $S$ (resp., $N$). The projection $h_S$ of $h$ to $S$ determines a framing $\mathcal{F}_h$ of $C_S$ in $S$ as follows: the product structure on $h_S$ given by the framing of $h$ along with the normal direction to $h_S \subset S$ determines the product structure of the normal bundle of $C_S$ in $S$. Note that this framing is uniquely determined by the framing of $h$. To simplify notation we identify $S$ and $N$ with their corresponding subsets of $\partial B^n_{R - \varepsilon}$ and $\partial B^n_{R + \varepsilon}$ in the rest of this section.
We denote the double branched covering projection (and its restriction to any subset) by $\pi : \Sigma_2(B^n,F) \to B^n$ and the preimage of any subset $X \subseteq B^n$ under $\pi$ by $\widetilde X$.
The following theorem is the key technical result of this section.
\begin{theorem}
\label{thm:handledbc}
With notation as above, there is the following identification of double branched covering spaces:
\begin{equation}\label{Eq:dbc}
\Sigma_2(B^n_{R + \varepsilon},F_{R + \varepsilon})\cong\Sigma_2(B^n_{R - \varepsilon},F_{R - \varepsilon})\cup H,
\end{equation}
where $H$ is a $(k+1)$-handle corresponding to $\Sigma_2(D,h)$. The attaching region of $H$ in $\partial \Sigma_2(B^n_{R - \varepsilon},F_{R - \varepsilon})$ is $\widetilde S$, the preimage of $S$ under $\pi$. The attaching sphere of $H$ is $\widetilde C_S$ and its framing $\mathcal{F}_H$ is given by the preimage under $\pi$ of the framing $\mathcal{F}_h$.
Using identification \eqref{Eq:dbc}, the restriction of $\pi$ to $\Sigma_2(B^n_{R + \varepsilon},F_{R + \varepsilon})$ agrees with that on $\Sigma_2(B^n_{R - \varepsilon},F_{R - \varepsilon})$ away from $H$. There are identifications of $H$ and $D$ with $B^n$, such that the branch set $h$ corresponds to $B^{n-2} \times \{(0,0)\}$ and $\pi$ is the product of the identity on $B^{n-2}$ and the standard branched double covering projection on the normal $2$-disks. The coattaching sphere $\widetilde C_N$ of $H$ then corresponds to $\{0^k\} \times S^{n-2-k} \times \{0\}$ and the coattaching region $\widetilde N$ of $H$ is a regular neighborhood of $\widetilde C_N$ in $S^{n-1}$ that is diffeomorphic to $B^k \times S^{n-2-k} \times B^1$.
\end{theorem}
\begin{proof}
A standard Morse theory argument shows that $(B^n_{R + \varepsilon},F_{R + \varepsilon})\cong (B^n_R,F_R) \cup (D,h)$ and that $(B^n_{R - \varepsilon},F_{R - \varepsilon})\cong \overline{(B^n_R,F_R) \smallsetminus (D,h)}$ modulo corners along the equator $S \cap N$ of $D$ (see \Cref{fig:nbdD}). Here and later we suppress standard details regarding smoothing of corners. The equality of the branched covering spaces then follows from this after recognizing the branched double cover $H$ of $(D,h)$ as a $(k+1)$-handle, which is the goal of the rest of the proof.
We start by choosing a convenient model for the pair $(D,h)$. Identifying $D$ with $B^n$, where the equator of $D$ is identified with the equator of $B^n$, the handle $h$ may be identified with a part of the graph of the standard index $k$ Morse function $f : {\mathbb R}^k \times {\mathbb R}^{n-2-k} \times \{0\} \to {\mathbb R}$, $(x,y,0) \mapsto -||x||^2+||y||^2$ (see the left side of \Cref{fig:std-model}).
\begin{figure}[htbp]
\centering
\includegraphics[scale=1.3]{std-nbd.pdf}
\caption
{{\bf A standard model for the pair $(D,h)$.} Both figures show only the slice $t=0$. The left figure gives a model using a standard Morse function description of $h$ inside $B^n$. On the right the handle has been moved to the level $z=0$ of the product ball $B$ and the subsets of $\partial B$ corresponding to $S$ and $N$ were adjusted accordingly; for $t \in (-1,1)$ the same picture describes the intersections of $S$ and $N$ with the $t$-slice.}
\label{fig:std-model}
\end{figure}
Here and below, the factor ${\mathbb R}^k$ gives the direction of the core of $h$, and we denote the coordinate in this factor by $x$; the factor ${\mathbb R}^{n-2-k}$ gives the direction of the cocore of $h$, with coordinate $y$; the normal direction to ${\mathbb R}^{n-2}$ in the domain of $f$ corresponds to the normal direction to the radial projection of $h$ into a level sphere, with coordinate $t$; and finally the codomain of $f$ corresponds to the radial direction, with coordinate $z$.
Applying a diffeomorphism of $B^n$ we may assume that $h$ is identified with $B^{n-2} \times \{(0,0)\}$. As a final modification we replace $B^n$ by $B:=B^k \times B^{n-2-k} \times B^1 \times B^1$ (preserving the product structure in the ambient space), where the core of $h$ corresponds to $B^k \times \{(0^{n-2-k},0,0)\}$ and its cocore to $ \{0^k\} \times B^{n-2-k} \times \{(0,0)\}$ (see the right side of \Cref{fig:std-model}). Thus we identify $D$ with the product of $h$ with a 2-disk; the first factor of the 2-disk corresponds to the normal direction to the radial projection of $h$ and the second to the radial direction. This already shows that $H$, the double branched cover of $(D,h)$, is also diffeomorphic to $B^k \times B^{n-2-k} \times B^1 \times B^1$, with the branched covering projection $\pi$ acting nontrivially on the 2-disk $B^1 \times B^1$ given by the last two factors. This projection is essentially described by identifying $B^1 \times B^1$ with the round disk $B^2$ and using the standard branched double covering projection on that space. More precisely, we let $\pi$ be the cone (with vertex at the origin) of the orientation preserving map $\partial (B^1 \times B^1) \to \partial (B^1 \times B^1)$ that maps each of the two vertical sides $\{\pm 1\} \times [-1,1]$ diffeomorphically onto the union of the top and the right sides and each of the two horizontal sides $[-1,1] \times \{\pm 1\}$ diffeomorphically onto the union of the bottom and left sides (see \Cref{fig:disk-dbc}).
\begin{figure}[htbp]
\centering
\includegraphics{disk-dbc.pdf}
\caption
{{\bf The double branched covering projection on $B^1 \times B^1$.} The left and right sides of the square map onto the union of right and top, and the bottom and top sides map onto the union of left and bottom respecting orientations.}
\label{fig:disk-dbc}
\end{figure}
After the last modification we may assume that (see the right side of \Cref{fig:std-model}):
\begin{itemize}
\item $S$ is the union of $S_{-1}:=B^k \times B^{n-2-k} \times \big( \{-1\} \times B^1 \cup B^1 \times \{-1\}\big)$ and $S_0:=\partial B^k \times B^{n-2-k} \times B^1 \times B^1$;
\item $N$ is the closure of the complement of $S$ in $\partial B$, hence it is the union of $N_1:=B^k \times B^{n-2-k} \times \big( \{1\} \times B^1 \cup B^1 \times \{1\} \big)$ and $N_0:=B^k \times \partial B^{n-2-k} \times B^1 \times B^1$.
\end{itemize}
Note that we have made a choice here: one of the two sets $B^k \times B^{n-2-k} \times \{\pm 1\} \times B^1$ is included in $N$ and the other in $S$.
Since $S_{-1}$ does not intersect the branch set, $\widetilde S_{-1}$ consists of two copies of this set, which are identified with $B^k \times B^{n-2-k} \times B^1 \times \{\pm 1\}$. However, $S_0$ intersects the branch set in the attaching region $\partial B^k \times B^{n-2-k} \times \{(0,0)\}$ of $h$ and so $\widetilde S_0$ may be identified with $\partial B^k \times B^{n-2-k} \times B^1 \times B^1$ where $\pi$ is nontrivial on the 2-disk $B^1 \times B^1$. Hence $\widetilde S=\widetilde{S}_{-1}\cup\widetilde{S}_0$ is identified with $\partial B^k \times B^{n-2-k} \times B^1 \times B^1 \cup B^k \times B^{n-2-k} \times B^1 \times \partial B^1 \cong S^k \times B^{n-1-k}$.
This implies that the attaching sphere of $H$ is $\partial (B^k \times \{(0^{n-2-k},0)\} \times B^1) $ which corresponds to $\widetilde C_S$ and its framing is given by the pull-back of the product structure on the projection of $h$ to $S$ along with the direction normal to this projection in $S$.
An argument similar to the one above shows that the coattaching sphere of $H$ is $\{0^k\} \times \partial (B^{n-2-k} \times B^1) \times \{0\}=\widetilde C_N$ and its coattaching region is $\widetilde N = B^k \times \partial (B^{n-2-k} \times B^1) \times B^1$. The restriction of $\pi$ to $ (B^k \times \partial B^{n-2-k}) \times (B^1 \times B^1)$ is the product of the identity on $B^k \times \partial B^{n-2-k}$ and the standard branched covering projection on the 2-disk $B^1 \times B^1$, and its restriction to $B^k \times B^{n-2-k} \times \{\pm 1\} \times B^1$ stretches each of the vertical sides of the 2-disk $B^1 \times B^1$ to the union of its right and top sides as described above. This description agrees with the one in the statement of the theorem after replacing the product ball $B$ by the round ball $B^n$.
\end{proof}
We give a more explicit description of the branched covering projection $\pi$ on the coattaching region of a handle as described in the proof of the previous theorem. This is important for understanding how handles of index greater than 1 in the branched double cover of the ball are glued, as parts of their attaching regions go over the coattaching regions of lower-index handles.
\begin{corollary}
\label{cor:coattachingdbc}
Consider a handle $h$ of $F$ and its corresponding handle $H$ in the branched double cover $\Sigma_2(B^n,F)$ as in \Cref{thm:handledbc}. Identify $N$ with $B^k \times 2B^{n-1-k}$, where the radial projection $C_N$ of the cocore of $h$ corresponds to $\{0^k\} \times B^{n-2-k} \times \{0\}$ and the remaining direction in $2B^{n-1-k}$ is normal to the radial projection of $h$ in $N$. Let $\Delta:=B^{n-2-k} \times B^1$ be obtained by cutting $2B^{n-1-k}$ along the annulus $(2B^{n-2-k} \smallsetminus B^{n-2-k}) \times \{0\}$; the lateral boundary $\partial_1 \Delta:= \partial B^{n-2-k} \times B^1$ of $\Delta$ corresponds to the cut (see \Cref{fig:coattach}). Then the coattaching region $\widetilde N \cong B^k \times S^{n-2-k} \times B^1$ of $H$ may be obtained from the two copies $B^k \times \Delta_\pm$ of $B^k \times \Delta$ by gluing pairs of points $(x,y,z)_- \sim (x,y,-z)_+$ in $B^k \times \partial_1\Delta_\pm$.
\end{corollary}
\begin{proof}
In the proof of the previous theorem we identified the coattaching region $\widetilde N$ of $H$ with $B^k \times \partial(B^{n-2-k} \times B^1) \times B^1$, where the coattaching sphere $\widetilde C_N$ is $\{0^k\} \times \partial(B^{n-2-k} \times B^1) \times \{0\}$. Recall that the covering transformation acts by the identity on $B^k \times B^{n-2-k}$ and by the half-turn rotation on the disk $B^1 \times B^1$. A fundamental domain for this action on the coattaching region is $B^k \times \big(B^{n-2-k} \times \{1\} \cup \partial B^{n-2-k} \times [0,1]\big) \times B^1$, as shown in \Cref{fig:coattach}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.8]{coattach.pdf}
\caption
{{\bf The coattaching region of $H$ for $n=4$ and $k=1$.} The top picture represents a round model of $\widetilde N$ as described in \Cref{thm:handledbc}; the $z$ direction is projected into the $yt$-plane as the thickness of the annulus.
The bottom left picture contains the fundamental domain $\Delta$ (in fact, $B^k \times \Delta$). The bottom right picture shows $N$, and the black rectangle is the radial projection of the band $h$.
}
\label{fig:coattach}
\end{figure}
The branched covering projection $\pi$ maps the first set bijectively to $N_1=B^k \times \big(B^{n-2-k} \times \{1\} \times B^1 \cup B^{n-2-k} \times B^1 \times \{1\}\big)$ and the second onto $N_0=B^k \times \partial B^{n-2-k} \times B^1 \times B^1$ identifying the points $(x,y,0,z)$ and $(x,y,0,-z)$. The branch set for $\pi$ restricted to $\widetilde N$ is $B^k \times \partial B^{n-2-k} \times \{0\} \times \{0\}$. Since the maps act as identity on the first factor $B^k$, we restrict our attention to the remaining factors. We further identify the rest of $\widetilde N$ with the annulus $S^{n-2-k} \times B^1$, the fundamental domain for the action with $S^{n-2-k}_+ \times B^1 \cong B^{n-2-k} \times B^1=: \Delta$ and the branch set with the equator $S^{n-3-k} \times \{0\}$. Then $N$ (modulo $B^k$) is obtained from $\Delta$ by identifying pairs of points $(y,z) \sim (y,-z)$ in $S^{n-3-k} \times B^1$ and hence may be identified with $2B^{n-1-k}$ with the radial projection of the cocore of $h$ corresponding to $B^{n-2-k} \times \{0\}$. Conversely, $\widetilde N$ may be obtained from two copies of $2B^{n-1-k}$ cut along the annulus $(2B^{n-2-k} \smallsetminus B^{n-2-k}) \times \{0\}$. The cut ball is diffeomorphic to $\Delta$ and the gluing of the two copies $\Delta_\pm$ identifies pairs of points $(y,z)_- \sim (y,-z)_+$ in $(S^{n-3-k} \times B^1)_\pm$.
\end{proof}
As usual with Morse theory arguments, the assumption that there is a unique critical point in each critical level is unnecessary, as the construction affects only a neighborhood of the critical point and its preimage. In fact, we may assume that all the critical points of a given index are contained in the same level set, which we do in the following discussion.
In the rest of this section we give a more detailed description of gluings of handles of small indices; we refer to the notation in the proof of \Cref{thm:handledbc}.
\subsubsection{Critical points of $\rho_F$ of index $k=0$}\label{subsec:ind0}
The sublevel set $B^n_{R - \varepsilon}$ is a ball that does not intersect the branch set $F$ and hence its branched double cover is the disjoint union of two $n$-balls oriented consistently with $B^n_{R - \varepsilon}$, which we denote by $B^n_\pm$. Each critical point gives rise to a 1-handle connecting the two balls. Consider a 0-handle $m$ of $F$ corresponding to a critical point $c$. The attaching sphere $\{c_-, c_+\}$ of the resulting 1-handle $M$ is the preimage under $\pi$ of the radial projection of $c$ to $S$, and the attaching region $\widetilde S$ consists of two copies of $S$. Recall that $S$ is an $(n-1)$-ball centered at the radial projection of $c$; more precisely, we identify it with $B^{n-2} \times B^1$ (where the radial projection $m_S$ of $m$ to $S$ is contained in the interior of $B^{n-2}$) and the attaching map is given by
$$\varphi : (B^{n-2} \times B^1) \times \partial B^1 \to (B^{n-2} \times B^1)_- \sqcup (B^{n-2} \times B^1)_+ \subset B^n_- \sqcup B^n_+ ,$$
$$(y,t,z) \mapsto (y,zt)_{\sign z}.$$
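Concretely, evaluating the formula above on the two components of $\partial B^1$ (a direct check, not an additional assumption): for $z=1$ we get $(y,t,1)\mapsto(y,t)_+$, while for $z=-1$ we get $(y,t,-1)\mapsto(y,-t)_-$, so one copy of $S$ in the attaching region is glued by the identity and the other by the reflection $(y,t)\mapsto(y,-t)$.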
Note that this is an orientable gluing.
Alternatively, the addition of the 1-handle $M$ may be realized by gluing the balls $B^n_\pm$ along the attaching regions $ (B^{n-2} \times B^1)_\pm$ via the map
$$(y,t) \mapsto (y,-t).$$
This identifies the cocore $B^{n-2} \times B^1 \times \{0\}$ of the handle with the attaching regions and pushes one half of the handle into each of the $n$-balls. In this case it is convenient to replace the product ball $B^{n-2} \times B^1$ with the round ball $B^{n-1}$. This ball is split in half by $B^{n-2} \times \{0\}$ which we identify with the $\pi$-preimage $\widetilde m$ of the 0-handle $m$ of $F$, and we identify the boundary $\partial B^{n-1}$ with the coattaching sphere $\widetilde C_N$. For each point in $m$, which is identified with a point $y \in B^{n-2}$, we identify its corresponding points in $\widetilde C_N$ with the points $(y,\pm t)_- = (y,\mp t)_+$. When considering attachments of higher index handles we can therefore imagine that the ball $B^{n-1}$ is being inflated from the flat $B^{n-2}$, pushing the rest of the radial projection of $F$ (cut along the interior of this $B^{n-2}$) away while keeping the $y$-coordinates of the (doubled) points on the boundary fixed.
\subsubsection{Critical points of $\rho_F$ of index $k=1$}\label{subsec:ind1}
The sublevel set $B^n_{R - \varepsilon}$ is a ball that intersects the branch locus $F$ in its 0-handles $m_i$ and hence the branched double cover of $B^n_{R - \varepsilon}$ is the disjoint union $B^n_- \sqcup B^n_+$ along with a 1-handle $M_i$ connecting the two $n$-balls for each $i$. We choose to replace all the 1-handles by gluings as described above. Denote by $P$ the radial projection of $F$ into the boundary sphere $S^{n-1}$; we refer to the projections of the handles of $F$ into $P$ as handles of $P$. We assume that the 1-handles of $P$ are pairwise disjoint and that the cores of the 1-handles intersect the interiors of the 0-handles transversely in $P$. More precisely, there are two types of intersections:
\begin{itemize}
\item the attaching spheres of 1-handles lie in the union of the boundaries of 0-handles and we assume that $P$ is smooth along the attaching regions of 1-handles;
\item all other intersections are transverse and are interior to the cores of the 1-handles and to the 0-handles of $P$.
\end{itemize}
This means that the union of 0- and 1-handles of $P$ is smoothly embedded with the exception of ribbon singularities at which the cores of the 1-handles intersect the 0-handles transversely. Consider a 1-handle $h$ of $F$. The attaching circle $\widetilde C_S$ of the 2-handle $H$ corresponding to $h$ consists of two copies of $C_S$ cut along the interiors of the 0-handles of $F$ projected into $S$ as described above. If $h$ is attached to $m_i$, then $\widetilde C_S$ intersects the coattaching sphere of $M_i$ transversely once and hence goes over $M_i$ once. If the radial projection of a point in $m_i$ (identified with $y \in B^{n-2}$) in $P$ belongs to the radial projection of the core of $h$, then $\widetilde C_S$ intersects the coattaching sphere of $M_i$ twice (in points $(y,\pm t)_-$) and hence it goes over $M_i$ twice.
The gluing of $H=B^1 \times B^{n-3} \times B^1 \times B^1$ is determined by the attaching circle $\widetilde C_S$ corresponding to $\partial (B^1 \times \{(0,0)\} \times B^1)$ and by its framing. For $n=3$ there is a unique framing, and for $n=4$ the framing is uniquely determined by the framing of $h$, given by a parallel to the core of $h$, that is, by a boundary component of the $\pi$-preimage of $h_S$, the radial projection of $h$ to $S$.
For $n=4$ we recall the description of the coattaching region $\widetilde N$ of $H$ from \Cref{cor:coattachingdbc}. Choose a disk $2B^2$ in $S^3$ that intersects the radial projection of $h$ in its cocore transversely and contains this cocore in its interior as $B^1\times \{0\}$. Thicken this disk to a 3-ball $N=B^1 \times 2B^2$, where the $B^1$ factor corresponds to the core of the handle, and cut it along $B^1 \times (2B^1 \smallsetminus B^1) \times \{0\}$ to obtain $B^1 \times B^1 \times B^1$ (see \Cref{fig:coattach}). Then $\widetilde N$ is a solid torus $B^1 \times S^1 \times B^1$ obtained from two copies $(B^1 \times B^1 \times B^1)_\pm$ of the cut-up $N$ by gluing pairs of points $(x,y,z)_- \sim (x,y,-z)_+$ for $y\in \partial B^1$. If the radial projection of a 2-handle of $F$ intersects $N$ in a subset $K$, then a part of the attaching sphere of the corresponding 3-handle intersects $\widetilde N$ in two copies of $K$ cut as $N$ by the 0-handles of $F$ and glued as described above.
\subsection{Double branched covers of the 3-ball and 3-sphere}\label{subsec:dbc3}
Let $L$ be a properly embedded compact 1-manifold in the 3-ball, i.e., a tangle or a link, to which the radial distance function $\rho$ restricts to a Morse function, giving a handle decomposition of $L$. This is known as a bridge decomposition of $L$. We assume that the radial projection $P \subset S^2$ of $L$ has only ordinary double points. The bridge decomposition of $L$ induces a bridge decomposition of $P$, which then carries the same information as a diagram of $L$; we refer to double points of $P$ as crossings. In this context $0$-handles and $1$-handles are called underbridges and overbridges, respectively. We further assume that
\begin{itemize}
\item minima of $L$ have $\rho\in(0,1/2)$ and maxima have $\rho\in(1/2,1)$,
\item all endpoints of $P$ are contained in underbridges, and
\item at each crossing, an overbridge crosses over an underbridge.
\end{itemize}
We build a handle decomposition of $\Sigma_2(B^3,L)$ using \Cref{thm:handledbc}. We begin with a description that takes as its starting point any projection $P \subset S^2$ of $L$ with a chosen bridge decomposition as above. An example is shown in Figure \ref{fig:tangledbc}.
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{tangledbc}
\caption
{{\bf Double cover of a tangle in the 3-ball.} The top row shows a tangle $L$, a bridge decomposition of its projection $P$, and the associated diagram $D$ with underbridges inflated to disks. Below these we see a handle decomposition of $\Sigma_2(B^3,L)$ with two 0-handles, four 1-handles and three 2-handles. Matching pairs of green disks are glued preserving the direction along $L$ and reversing the normal direction. The preimage of each blue overbridge from $D$ gives a single circle in the handlebody resulting from these disk gluings, and these are the attaching circles for the 2-handles.}
\label{fig:tangledbc}
\end{figure}
Inflate each underbridge $u$ to a closed disk $U= B^2$ containing the underbridge as the equator $B^1\times\{0\}$, cutting any overbridge which crosses over $u$, and denote by $D$ the resulting union of disks connected by overbridge segments (see Figure \ref{fig:tangledbc}).
Let $Y$ be the oriented 3-manifold obtained from the disjoint union $B^3_-\sqcup B^3_+$ of two copies of $B^3$, each with an identical copy of $D$ in the boundary, as follows:
\begin{enumerate}
\item glue each disk $U=B^2$ in $\partial B^3_-$ to the corresponding disk in $\partial B^3_+$ by the map $(y,t)\mapsto(y,-t)$, then
\item attach a 2-handle to the resulting handlebody for each overbridge, with the attaching circle being the image in the handlebody of the union of the corresponding pair of overbridges in $\partial B^3_- \sqcup \partial B^3_+$.
\end{enumerate}
Note that each intersection of an overbridge with the boundary of a disk $U$ in $D$ results in the corresponding 2-handle attaching circle passing once over the corresponding 1-handle. In particular, the attaching circle passes once over a 1-handle for each endpoint, and twice for each crossing. The following proposition is immediate from the earlier discussion in this section (\Cref{thm:handledbc} and subsections \ref{subsec:ind0} and \ref{subsec:ind1}).
\begin{proposition}
\label{prop:tangledbc}
Let $L$ be a compact 1-manifold properly embedded in $B^3$, and let $Y$ be the 3-manifold with boundary constructed as above from a bridge decomposition of a projection of $L$. Then $Y$ is diffeomorphic to the double cover $\Sigma_2(B^3,L)$ of $B^3$ branched along $L$.
\end{proposition}
We would like to modify the description of $\Sigma_2(B^3,L)$ from \Cref{prop:tangledbc} to obtain a Heegaard diagram for the double branched cover of the 3-sphere along $L\subset B^3\subset S^3$. It is convenient, though not essential, to isotope the projection $P$ so that the underbridges lie along the $x$-axis in ${\mathbb R}^2\subset S^2$, and we will number them $u_0, u_1,\dots,u_g$ from left to right.
The resulting disks in the diagram $D$ are correspondingly denoted $U_0, U_1,\dots,U_g$ from left to right. We may assume that no part of $D$ lies to the left of $U_0$, with the possible exception of an arc emanating from the left endpoint of $u_0$; any other arcs may be swung across the point at infinity to the other side.
We now form a new planar diagram obtained as the connected sum of two copies of $D$, with the connected sum taken at the disk $U_0$, as follows. Draw one copy of $D$, with the interior of $U_0$ removed, in the right half-plane, with the disks $U_1,\dots,U_g$ drawn along the positive $x$-axis, and the boundary of $U_0$ being along the $y$-axis, with the rightmost boundary point of $U_0$ at the origin and the leftmost one at infinity. If there is an arc emerging from this leftmost boundary point of $U_0$, redraw it as being asymptotic to the positive $x$-axis as in the second diagram of \Cref{fig:HD}.
For each crossing involving $u_0$, the corresponding pair of overbridge arcs should intersect the $y$-axis in a pair of points symmetric about the origin. Draw a second copy of $D$ in the left half-plane as the rotated image of the right half-plane about the origin. Draw a red $\alpha$ curve surrounding each of the disks in the left half-plane as in Figure \ref{fig:HD}. The pairs of disks along the $x$-axis are now taken to be the attaching disks for 3-dimensional 1-handles, identified via reflection across the $y$-axis; thus the diagram now represents a surface $\Sigma$ of genus $g$ as the boundary of a 3-dimensional handlebody. The blue curves coming from the overbridges form $g+1$ simple closed curves in $\Sigma$.
Let $\mathcal{H}'$ denote the resulting triple consisting of the surface $\Sigma$ together with the red $\alpha$ and blue $\beta$ curves, and let $\mathcal{H}$ be the triple obtained from $\mathcal{H}'$ by omitting an arbitrarily chosen $\beta$ curve.
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{HD}
\caption
{{\bf A Heegaard diagram for the double branched cover of the left-handed trefoil in the 3-sphere.} The chessboard colouring in the second diagram shows that the union of the blue curves is nullhomologous.}
\label{fig:HD}
\end{figure}
\begin{proposition}
\label{prop:heegaard}
Let $L$ be a link in $S^3$. Then
$\mathcal{H}$ is a Heegaard diagram for the double cover of $S^3$ branched along $L$.
\end{proposition}
\begin{proof}
It is straightforward to see that $\mathcal{H}'$ agrees with the description of $\Sigma_2(B^3,L)$ from \Cref{prop:tangledbc}: one first glues the two $0$-handles together using the 1-handle corresponding to $U_0$ to get a single $0$-handle. The remaining 1-handles are indicated by the pairs of disks, and the red curves are the belt spheres of the 1-handles. The blue curves are the attaching circles of 2-handles.
For a link $L$, the boundary of $\Sigma_2(B^3,L)$ is a disjoint union of two 2-spheres, and the double branched cover of $S^3$ branched along $L$ can be obtained from this by attaching two 3-handles. We claim that one of these 3-handles can be cancelled with an arbitrary choice of $\beta$ curve. Morally, this follows from turning our construction upside-down, but
we make a different argument. There are $g+1$ blue $\beta$ curves on the genus $g$ surface $\Sigma$, and compressing these curves converts $\Sigma$ to a disjoint union of two spheres. It follows that the collection of all the $\beta$ curves spans $H_1(\Sigma)$. We claim that with an appropriate orientation, the sum of all the $\beta$ curves is nullhomologous, from which it follows that any $g$ of them span $H_1(\Sigma)$. To see this begin by choosing a chessboard colouring of the projection $P$ of the link $L$, as in the first diagram in Figure \ref{fig:HD}. Regions on opposite sides of an overbridge have opposite colours (shaded and unshaded). This chessboard colouring is then inherited by the planar diagram $D$ in which the overbridges are inflated to disks. The Heegaard surface $\Sigma$ is obtained by taking two copies of this planar diagram, with the interiors of the inflated disks removed, and gluing them together along the boundaries of the disks. This glues together regions from opposite sides of each underbridge, so if we choose the opposite chessboard colouring in one of the two copies
of the planar surface being glued then the colours will match up in $\Sigma$. Thus the union of the $\beta$ curves bounds the union of the shaded regions.
\end{proof}
Lastly we observe that the handle decomposition of the double branched cover $Y=\Sigma_2(B^3,L)$ described in \Cref{prop:tangledbc} gives a simple way of computing the homology of $Y$ directly from a projection $P$ of $L$, equipped with a bridge decomposition. In fact, this homology is isomorphic to the disoriented homology of $L$, defined in \Cref{sec:tangle}.
\begin{proposition}
\label{prop:3d-homology}
Let $L$ be a link or tangle in $B^3$, with projection $P \subset S^2$.
Choose a bridge decomposition of $L$ consistent with $P$ and disorientations of the overbridges, determining the data $P^\flat$. Then the homology of the disoriented chain complex $\mathcal{DC}_*(P^\flat)$ is isomorphic to the shifted reduced homology of $\Sigma_2(B^3,L)$, i.e.,
$$H_*(\mathcal{DC}_*(P^\flat)) \cong \widetilde H_{*+1}(\Sigma_2(B^3,L)).$$
\end{proposition}
\begin{proof}
This follows from the handle decomposition of $Y=\Sigma_2(B^3,L)$ described in Proposition \ref{prop:tangledbc}; we also use the notation from there. Recall that 1-handles of this decomposition correspond to underbridges and 2-handles to overbridges. One of the 1-handles is used to join the two 0-handles $B^3_-$ and $B^3_+$ into a single 0-handle, and the remaining 1-handles generate $H_1(Y)$. The relations in $H_1(Y)$ come from the 2-handles.
We label the overbridges $o_0,\dots,o_g$. We claim that the chosen disorientation of each $o_k$ determines an orientation of the attaching circle $\beta_k$ for the corresponding 2-handle. Orient the copy of $o_k$ in $B^3_-$ consistently with $o_k$ and choose the opposite disorientation for the copy in $B^3_+$. Since for an endpoint $a$ of $o_k$ its two copies $a_\pm$ in $B^3_- \sqcup B^3_+$ are identified in $Y$, the chosen orientations match up. For a pair of endpoints $c,d$ of $o_k$ at a crossing, $c_-$ is identified with $d_+$ and $d_-$ with $c_+$, hence the chosen orientations also match up and indeed a choice of disorientation of $o_k$ determines an orientation of $\beta_k$ (see Figure \ref{fig:DHdetail}).
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.7]{DHdetail}
\caption
{{\bf Disorientations of the overbridges determine orientations for the attaching circles of the 2-handles.} Arcs of overbridges $o_k$ in the two 0-handles have opposite orientations.}
\label{fig:DHdetail}
\end{figure}
We orient the 1-handles of $Y$ in such a way that the positive direction is from $B^3_-$ to $B^3_+$. Hence the attaching circle $\beta_k$ of a 2-handle goes over the 1-handle corresponding to $u_j$ in the positive/negative direction at an endpoint $e$ of one of its subarcs if at $e$ this subarc points to/from $u_j$.
Since $Y$ is connected and there are no 3-handles in the decomposition, the claimed isomorphism follows.
\end{proof}
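Spelled out in the two degrees of interest, the proposition gives
$$H_0(\mathcal{DC}_*(P^\flat)) \cong \widetilde H_{1}(\Sigma_2(B^3,L)) \quad\text{and}\quad H_1(\mathcal{DC}_*(P^\flat)) \cong \widetilde H_{2}(\Sigma_2(B^3,L)).$$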
Thus from \Cref{eg:trefoilDH} we see that the double cover of the 3-ball branched along a trefoil knot has first homology group isomorphic to ${\mathbb Z}/3{\mathbb Z}$, and its second homology group is a copy of ${\mathbb Z}$. This is in agreement with the well-known fact that the double cover of $S^3$ branched along the left-handed trefoil is the lens space $L(3,1)$, and the double cover of $B^3$ branched along the same knot is thus obtained from $L(3,1)$ by removing two balls.
\subsection{Double branched covers of the 4-ball}\label{subsec:dbc4}
We now consider the case of most interest to us. Let $F$ be a compact surface, with or without boundary, properly embedded in $B^4$. We assume that $\rho_F$, the restriction of the radial distance function to the surface, is Morse giving a handle decomposition of $F$, and that all minima have $\rho\in(0,1/3)$, saddles have $\rho\in(1/3,2/3)$, and maxima have $\rho\in(2/3,1)$. We further assume that
\begin{itemize}
\item the radial projection to $S^3$ restricts to an embedding on the union of $k$-handles of $F$, for each $0\le k\le2$, and we refer to the images of the handles as the handles of the projection;
\item the radial projection of the union of $0$- and $1$-handles is a ribbon immersed surface $F_r$, and moreover all ribbon singularities are formed by 1-handles passing through 0-handles of $F_r$;
\item the radial projection $P$ of $F$ is generic, and the intersection of the interior of each 2-handle with $F_r$ is transverse.
\end{itemize}
Under these assumptions we defined a description $F_s \subset S^3$ of $F$ in \Cref{sec:surface}, which is given by the decomposition of $P$ into the ribbon immersed surface $F_r$ and the 2-handles $d_i$. We also discussed possible singular points of $F_s$ in that section. Recall that the ribbon immersed surface $F_r$ and the 2-handles $d_i$ may form pinch point singularities along their common boundaries. To simplify the description of the attaching spheres of 3-handles of the double branched cover we also assume that
\begin{itemize}
\item pinch points do not occur along the boundaries of the 1-handles of $F_r$.
\end{itemize}
Indeed, they can always be transferred along the boundary of $F_r$ by rotating the disk $d_i$ about this boundary. Hence an essential intersection of $d_i$ with some 1-handle of $F_r$ either is a component of the coattaching region of the 1-handle, or is disjoint from the coattaching region, in which case it runs parallel to the core of the 1-handle.
We now describe a smooth 4-dimensional handlebody $X$ diffeomorphic to the double branched cover of $(B^4,F)$, using the data above. The sublevel set $B^4_{2/3}$ is a ball that intersects the branch locus $F$ in a ribbon surface $F_{2/3}$ with projected ribbon immersed surface $F_r$. Then the description of the branched double cover $X_2=\Sigma_2(B^4_{2/3},F_{2/3})$ is as in subsections \ref{subsec:ind0} and \ref{subsec:ind1}. Let $d$ be a 2-handle of $F$ and let $d_P$ be its radial projection into $\partial B^{4}_{2/3}$. We know from \Cref{thm:handledbc} that $d$ gives rise to a 3-handle $D$ of $X$ attached to $\partial X_2$. The attaching sphere for $D$ is the preimage of $d_P$ under the branched covering projection.
This sphere is formed by the union of the two copies of $d_P$ cut along the interiors of the 0-handles of $F_r$ in the boundaries of $B^4_\pm$, in the complement of the attaching regions of the 2-handles of $X_2$, together with the preimages of $d_P$ in the coattaching regions of the 2-handles of $X_2$.
The coattaching region $\widetilde N_h$ of a 2-handle $H$ of $X_2$ corresponding to a 1-handle $h$ of $F$ is a solid torus $B^1 \times S^1 \times B^1$ whose core circle $S^1$ is the $\pi$-preimage of the radially projected cocore of $h$. The image $N_h$ of the coattaching region under the branched covering projection $\pi$ may be identified with $B^1 \times 2B^1 \times B^1$, where the first factor corresponds to the core of $h$, the second to an extended cocore of $h$, and the third to the normal direction to $h$. With the assumptions on the projection $P$ of the surface $F$ we made above, there are two types of intersections between $d_P$ and 1-handles of $F_r$. An arc $A$ in the boundary of $d_P$ may be glued to one of the coattaching arcs of a 1-handle $h$ of $F$. Then the component $\Delta_A$ of $d_P \cap N_h$ containing $A$ is a collar on $A$ in $d_P$; we denote the rest of the boundary of $\Delta_A$ by $A'$ (compare Figure \ref{fig:coattach}). The preimage $\pi^{-1}(\Delta_A)$ is a disk isotopic to $B^1 \times \{*\} \times B^1$ whose boundary circle is $\pi^{-1}(A')$. This disk connects the two copies $A'_\pm$ of $A'$ inside the balls $B^4_\pm$ glued along the attaching regions of the 1-handles. Since $\pi^{-1}(\Delta_A)$ intersects the core circle of the coattaching region of $H$ transversely once, the subdisk $\pi^{-1}(\Delta_A)$ of the attaching sphere $\pi^{-1}(d_P)$ goes over the 2-handle $H$ once.
The second possibility is that $d_P$ intersects $h$ in an interior arc $B$ that in $h$ runs parallel to the core of $h$. Then the component $\Delta_B$ of $d_P \cap N_h$ containing $B$ is identified with $B \times [-1,1]$ (with $B$ corresponding to $B \times \{0\}$) and $\pi^{-1}(\Delta_B)$ consists of two disks transverse to the core of the coattaching region, each capping-off one component of $(\partial \Delta_B)_\pm$.
We have now achieved the main goal of this section: a description of a handlebody corresponding to the double branched cover of a slice surface in the 4-ball.
\begin{proposition}
\label{prop:dbc4v1}
Let $F$ be a compact surface properly embedded in $B^4$ as above, and let $X$ be the 4-dimensional handlebody constructed above using the slice surface description $F_s$ of $F$. Then $X$ is diffeomorphic to the double cover $\Sigma_2(B^4,F)$ of $B^4$ branched along $F$.
\end{proposition}
We next describe how to draw a Kirby diagram of $\Sigma_2(B^4,F)$ based on the handle decomposition from \Cref{prop:dbc4v1}. For ribbon surfaces, this is similar to diagrams described in \cite[\S 6.3]{gs} and \cite[\S 11.3]{a2016}. The main adjustment that needs to be made to the description above is that we need to cancel one of the two 0-handles, and draw the diagram in the boundary of the remaining 0-handle. This is similar to what we did in the 3-dimensional case to obtain a Heegaard diagram (see \Cref{prop:heegaard}). We begin by isotoping the radial projection $P$ of $F$ in $S^3$ to facilitate this. We assume that $P$ is contained in the upper half-space of ${\mathbb R}^3 \subset S^3$ and that the 0-handles are round disks in the $xz$-plane, with their centers along a horizontal line $L$ one unit above the $x$-axis. We then want to ``comb up'' the 1- and 2-handles of $P$, so that, as much as possible, they lie above the 0-handles in the upper half-space $z\ge1$ and close to the $xz$-plane. The 1-handles (bands) are attached at their ends to the 0-handles, and pass through the 0-handles making ribbon singularities. Away from the ends and the ribbon singularities, they are isotoped to lie close to the $xz$-plane, allowing for twisting in bands and crossings of bands over each other. We also isotope so that the ribbon singularities all lie on the line $L$. This gives the preferred position of the ribbon immersed surface $F_r$. The embedded disks of the 2-handles are attached along their boundaries to the boundary of $F_r$. Their interiors may intersect the interior of $F_r$. Finally, they may ``wrap around'' the 0-handles.
By changing the point at infinity (placing it below a chosen 0-handle and above any 2-handles wrapping around it) we may isotope $P$ so that no 2-handle wraps around a particular 0-handle $m_0$. We then make a further isotopy, pulling $m_0$ downwards so that it lies on the $x$-axis, below the other 0-handles.
Having isotoped the diagram in this way, we then construct the corresponding handlebody description of $X=\Sigma_2(B^4,F)$ given prior to \Cref{prop:dbc4v1}. We inflate $m_0$ into a 3-ball in the boundary of $B^4$. Since the interior of the 3-ball becomes interior to the 4-manifold after gluing two copies of the diagram along the two copies of this 3-ball, we may consider the complement of this interior in the boundary of $B^4$, puncture the resulting boundary 2-sphere at the south pole and isotope it onto the $xy$-plane so that the boundary of $m_0$ is mapped onto the $x$-axis. This may be done without modifying the rest of the diagram, which is all drawn above the $xy$-plane. This gives one copy of the diagram, corresponding to $B^4_+$. The other copy, corresponding to $B^4_-$, is obtained by revolving the first diagram about the $x$-axis so that it appears below the $xy$-plane. We have now drawn the whole diagram in a single ${\mathbb R}^3$ with rotational symmetry about the $x$-axis. We inflate the remaining 0-handles of the diagram into 3-balls, remove the interiors and identify their boundaries in pairs by the reflection in the $xy$-plane. Recall that inflating cuts parts of the diagram that intersect interiors of the 0-handles. We can push the glued 2-spheres together in the standard way to replace them by dotted circles as in \cite[Section 1.1]{a2016}, \cite[Section 5.4]{gs}. The rest of the construction proceeds as described prior to \Cref{prop:dbc4v1}. The two copies of the core of each 1-handle of $P$ above and below the $xy$-plane glue to form the attaching circle for the corresponding 2-handle of $X$ whose framing is given by one component of the boundary of the annulus into which the two copies of the 1-handle of $P$ glue; in fact, in the absence of 2-handles of $P$ we may consider one component of the boundary of the annulus to be the attaching circle of the 2-handle and the other to be its framing. For each 2-handle $d$ of $P$ (cut by the interiors of the 0-handles) remove from its two copies above and below the $xy$-plane the intersections with the images of coattaching regions $N_h$ for 1-handles $h$ of $P$. If a removed component $\Delta$ contains a boundary arc $A$ of $d$, the two copies $A'_\pm$ of $\overline{\partial \Delta \smallsetminus A}$ together bound a disk in the coattaching region of $h$ that goes once over the corresponding 2-handle $H$ of $X$. If a removed component $\Delta$ lies in the interior of $d$, each component of $\partial \Delta_\pm$ bounds a disk in the coattaching region of $h$ that goes once over the corresponding 2-handle $H$ of $X$.
\begin{example}[The positive unknotted real projective plane]\label{ex:proj-Kirby}
To illustrate the above results we return to the example of the unknotted real projective plane $P={\mathbb R}{\mathbb P}^2$ in $B^4$ with radial projection $P_s$ given on the left of \Cref{fig:proj-plane}. Recall that we computed the disoriented homology of this surface in \Cref{ex:proj-homology}. We complete the story now by constructing a Kirby diagram for $X=\Sigma_2(B^4,P)$.
The projection $P_s$ consists of a disk $m$ representing the 0-handle, a band with a positive half-twist representing the 1-handle $h$ that forms a ribbon singularity with the 0-handle, and a disk $d$ split into four subdisks representing the 2-handle. The red curves represent intersections between $d$ and $m$ and these divide $d$ into subdisks -- the dashed parts of $d$ lie behind the 0-handle. The right part of \Cref{fig:proj-plane} shows $P_s$ in the preferred position: the attachment of the 1-handle $h$ has been moved to the side and the top arc of intersection between $d$ and $m$ has been pushed away from the 1-handle. This is realized by shortening the top right subdisk and enlarging the top left subdisk that contains the hood at the top of the figure.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1.2]{proj-plane}
\caption
{{\bf A radial projection of the positive unknotted real projective plane in the 4-ball.} The 0-handle (bounded by the circle) and the 1-handle (the green band) combine to give an immersed M\"{o}bius band, with a positive half-twist and a ribbon singularity (shown as a green arc). The 2-handle consists of the red and blue disks and is split into four subdisks by its intersections with the ribbon surface, shown as arcs. In the left figure, the upper two disks lie in front of the 0-handle, while the lower ones lie behind. The lower two arcs of intersection connect a pinch point with an endpoint of the ribbon singularity, while the remaining arc is bounded by pinch points. The projection on the left is not in the preferred position, while the one on the right is.}
\label{fig:proj-plane}
\end{figure}
The double cover $X_2$ of $B^4$ branched along the union of the 0- and 1-handles of $P$ consists of two 0-handles $B^4_-\sqcup B^4_+$ glued along the inflated copies of the 0-handle $m$ with a single 2-handle attached. The attaching region for the 2-handle is the union of the two copies of the 1-handle $h$ of $P_s$ in the boundaries of $B^4_\pm$ cut at the ribbon singularity and pushed away by inflation of $m$. The resulting four bands form an annulus with a full positive twist. The core of the annulus is the attaching circle for the 2-handle of $X_2$ and the framing is given by either boundary component of the annulus, thus the framing coefficient is $+1$. The Kirby diagram for $X_2$ is obtained as described above by cancelling the 1-handle with one of the 0-handles. This results in a single 0-handle and 2-handle.
It remains to describe the attaching sphere $S$ of the 3-handle of $X$. The part of $S$ contained in the boundary of $X_1$ away from the attaching region of the 2-handle $H$ of $X$ is obtained from the two copies $d_\pm$ of $d$, cut along the 0-handle of $P_s$ and with a neighborhood of the 1-handle $h$ removed. The boundary of the resulting surface consists of two oppositely oriented framing curves of $H$ and the attaching sphere is completed by adding the two disks parallel to the core of $H$ bounded by these curves (see \Cref{fig:attach-sphere}). Of course these disks may be pushed into the coattaching region of $H$, showing that $S$ is isotopic to an unknotted 2-sphere in the boundary of the 0-handle of $X$. We conclude that $X$ is diffeomorphic to a twice-punctured ${\mathbb C}{\mathbb P}^2$, as expected.
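In terms of cellular chain groups this gives a quick consistency check (a sketch only, with signs depending on orientation choices that we have not fixed here): after cancelling the 1-handle with one of the 0-handles, $X$ is built from a single 0-handle, the $+1$-framed 2-handle $H$ and the 3-handle. Since the two framing curves above are oppositely oriented, $S$ runs over $H$ algebraically zero times and all differentials vanish,
$$0 \longrightarrow C_3(X)\cong{\mathbb Z} \xrightarrow{\ 0\ } C_2(X)\cong{\mathbb Z} \longrightarrow C_1(X)=0 \longrightarrow C_0(X)\cong{\mathbb Z} \longrightarrow 0,$$
so $H_0(X)\cong H_2(X)\cong H_3(X)\cong{\mathbb Z}$ and the intersection form is $(+1)$, as expected for a twice-punctured ${\mathbb C}{\mathbb P}^2$.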
\begin{figure}[htbp]
\centering
\includegraphics[scale=1.2]{attach-sphere}
\caption
{{\bf A Kirby diagram for $X=\Sigma_2(B^4,P)$.} The attaching circle (dashed black curve) of the 2-handle $H$ of $X$ is the $+1$-framed core of the annulus. The attaching sphere $S$ of the 3-handle is built from two copies of the subdisks of $d$ (shown with the same color scheme as in \Cref{fig:proj-plane}) with a neighborhood of the annulus removed. The solid black curves are framing curves of $H$ along which two disks parallel to the core of $H$ are attached to form $S$. The indicated orientations of the framing curves come from a choice of orientation of the visible part of $S$.}
\label{fig:attach-sphere}
\end{figure}
\end{example}
We now prove the main theorem, establishing an isomorphism between the disoriented homology of a slice surface and the homology of the double cover of the 4-ball branched along the surface.
\begin{theorem}
\label{thm:4d-homology}
Let $F \subset B^4$ be a properly embedded compact surface and let $F_s \subset S^3$ be its description.
Choose disorientations of the cores of the 1-handles and disorientations of the 2-handles of $F_s$. Then the homology of the cellular disoriented complex $\mathcal{DC}_*(F_s^\flat)$ is isomorphic to the shifted reduced homology of the branched double cover $\Sigma_2(B^4,F)$, i.e.,
$$H_*(\mathcal{DC}_*(F_s^\flat)) \cong \widetilde H_{*+1}(\Sigma_2(B^4,F)).$$
Moreover, the intersection pairing of $\Sigma_2(B^4,F)$ under this identification agrees with the GL-pairing $\lambda$ on $DH_1(F_s^\flat)$.
\end{theorem}
\begin{proof}
We first show that the homology of the double branched cover $X=\Sigma_2(B^4,F)$ with branch set a slice surface $F \subset B^4$ is isomorphic to the disoriented homology of the slice surface description $F_s \subset S^3$ of $F$. Recall that a handle decomposition of $F$ determines a handle decomposition of the projected surface $F_s$; the union of $0$- and $1$-handles of $F_s$ forms a ribbon immersed surface $F_r$. According to \Cref{prop:dbc4v1} there is a bijection between $k$-handles of $F_s$ and $(k+1)$-handles of $X$; more precisely, the attaching sphere of a 4-dimensional handle is determined by the core of the 2-dimensional handle. Inspecting this correspondence we see that the boundary homomorphisms in the cellular disoriented complex of the surface, $\mathcal{DC}_*(F_s^\flat)$, and in the cellular chain complex of $X$, $C_{*+1}(X)$, agree in nonnegative dimensions. By a slight abuse we treat a handle of index $k$ as a $k$-cell. We describe below a chain equivalence inducing the claimed isomorphism. Our description relies also on the Kirby diagram described after \Cref{prop:dbc4v1}.
$$
\begin{CD}
\mathcal{DC}_2(F_s^\flat) @>{\partial_2^\flat}>> \mathcal{DC}_1(F_s^\flat) @>{\partial_1^\flat}>> \mathcal{DC}_0(F_s^\flat) @>{\varepsilon}>> {\mathbb Z} @>>> 0 \\
@V{\cong}V{f_2}V @V{\cong}V{f_1}V @V{\cong}V{f_0}V @VV{f_{-1}}V @VVV \\
C_3(X) @>{\partial}>> C_2(X) @>{\partial}>{\phantom{\partial}}> C_1(X) @>{\partial}>> C_0(X) @>{\varepsilon}>> {\mathbb Z}
\end{CD}
$$
Recall that $X$ is built from the disjoint union of two 4-balls $X_0:=B^4_- \sqcup B^4_+$ by attaching handles. The preimage of the core of each $k$-handle of $F_s$ is a $k$-dimensional sphere in the boundary of the handlebody $X_k$, built from $X_0$ by attaching handles of index at most $k$. Recall that the attaching sphere contains the two copies of the core in $B^4_\pm$ away from its intersection with the surface of lower index handles, connected over the coattaching regions of the corresponding handles in $X_k$.
Let $f_{-1}(1)=x_+ - x_-$, where $x_\pm$ is the generator of $C_0(X)$ corresponding to $B^4_\pm$. This makes the rightmost square commutative.
To each 0-handle $m$ of $F_s$ corresponds a 1-handle $M$ in $X$ (realized by gluing the two 4-balls in $X_0$ along the two copies of a 3-disk obtained by inflating $m$). We orient (the core of) $M$ from $B^4_-$ to $B^4_+$ and let $f_0(m)=M$, so $f_0$ sends $m$ to the oriented (core of the) 1-handle $M$. Since all the 1-handles connect the two 0-handles of $X$, the equality $\partial \circ f_0= f_{-1} \circ \varepsilon$ follows from the definition of $f_{-1}$.
For a 1-handle $h$ of $F_s$ let $c$ be its disoriented core.
At each intersection of $c$ with a 0-handle $m$ of $F_s$, the preimage $\pi^{-1}(c)=c_- \cup c_+$ of $c$ in $X_1$ goes over the corresponding 1-handle $M$ of $X$ and continues in the other ball. Orienting $c_-$ consistently with the chosen disorientation of $c$ and giving $c_+$ the opposite disorientation yields an oriented circle that is the attaching circle for the 2-handle $H$ of $X$ corresponding to $h$. Setting $f_1(h)=H$, it follows that $f_0 \circ \partial_1^\flat=\partial \circ f_1$ since at each point of disorientation the attaching circle goes over the 1-handle twice in the same direction.
The attaching sphere $S$ of a 3-handle $D$ of $X$ corresponding to a disoriented 2-handle $d$ of $F_s$ is obtained from the two preimages $d_\pm$ of $d$ in $X_2$. Recall that $d$ is split into faces of the graph $\Gamma=d \cap F_r$ and a disorientation of $d$ is given by a chessboard coloring of these faces. To construct $S$, change the disorientation of $d_+$ and then connect different colored faces of $\Gamma_\pm$ along the boundary of $d_\pm$ and same colored faces along the interior arcs of $\Gamma_\pm$, where all the connections are made over the handles in $X_2$. More precisely, the two copies $A_\pm$ of an arc $A$ along which $d$ is attached to a 1-handle $h$ correspond to the inclusion of a disk that goes once over the 2-handle $H$ into $S$. The sign of this contribution to the boundary is determined by the chosen orientation of $S$: if the disorientation of $d$ induces in $A$ the chosen disorientation of $h$, the sign of $H$ is positive, and negative otherwise. Similarly, the two copies $B_\pm$ of an interior arc of intersection $B \subset d$ with a 1-handle $h$ correspond to the inclusion into $S$ of two disks each of which goes once over the 2-handle $H$. The sign of this contribution may be determined from $d_-$ as before and is the same also for the other component, since both the intervening disorientations (of 1- and 2-handle) have been changed. Interpreting $S$ in $C_2(X)$ now shows that it corresponds to the disoriented 1-cycle $b^\flat$ in the definition of $\partial_2^\flat d$, proving the commutativity of the left square above.
We now turn to the pairing. Note that it is enough to establish the correspondence between pairings for ribbon surfaces. Start with a ribbon immersed surface $F_r$ in preferred position as described in the construction of a Kirby diagram for $X$ following \Cref{prop:dbc4v1}. To simplify the discussion we additionally assume that
no 1-handle of $F_r$ is attached along the boundary of the slice of a 0-handle bounded by the projection of a 1-handle forming a ribbon singularity (see \Cref{fig:ribbon-framing}).
Choose two disoriented cycles $a$ and $b$ in $\mathcal{DC}_1(F_r)$. The Gordon-Litherland pairing $\lambda_{F_r}([a],[b])$ is computed as $\lk(a,\tau b)$, where $\tau$ is the double normal push-off away from ribbon singularities and is described close to a ribbon singularity in \Cref{sec:pairing}. We construct a push-off $\tau b$ that is compatible with a ``framing'' of the curve corresponding to $b$ in $\partial X_1$, i.e., the push-off of the representative of the 2-dimensional homology class in $X$ corresponding to $b$. Recall that any disoriented 1-cycle is homologous to a linear combination of cores of 1-handles of $F_r$ whose endpoints are connected by 1-chains in the union of 0-handles. For each 1-handle $h$ of $F_r$ let its disoriented core $c_h$ be the central curve of $h$ with a chosen disorientation. Then construct its double push-off $\tau c_h$ as follows: starting at one end of $h$, the two arcs of $\tau c_h$, one in front of $F_r$ and the other behind, both project to one side of $c_h$ in the $xz$-plane and are oriented consistently with $c_h$. These arcs can be extended along the handle by retaining relative positions of arcs with respect to $h$ as it twists and turns in space. The only exceptions to this are neighborhoods of ribbon singularities where the rule is as described in \Cref{fig:ribbon-framing}; if the projection of $\tau c_h$ along $h$ arrives at the other side of the projected core as in the picture, just switch their side relative to $c_h$ by passing one over and the other under $c_h$ (alternatively one could use analogous models for the arcs on the other side).
\begin{figure}
\includegraphics{ribbon-framing.pdf}
\caption{\textbf{Comparison of pairings for a surface description in $S^3$ and for the corresponding branched double cover $X$.} The left figure shows the specific choice of framing curves (red) for a generator (blue) corresponding to a 1-handle (green) of $F_r$ near a ribbon singularity. The local contribution of the ribbon singularity to the self-pairing is $-1$. The right figure shows the corresponding attaching circle for the 2-handle (blue) and its framing (red) which also yield a local contribution of $-1$ to the linking number.}
\label{fig:ribbon-framing}
\end{figure}
To compute linking numbers we use the standard recipe of counting signs of all double points in the projection and then dividing by two (we assume all intersection points in the projection of two curves to the $xz$-plane are regular). Note first that 1-chains connecting (multiples of) disoriented cores inside the 0-handles do not contribute to the linking number $\lk(a,\tau b)$ as an intersection between $a$ and $b$ gives rise to a canceling pair of crossings between $a$ and $\tau b$. Similarly there is no contribution to $\lk(a,\tau b)$ from intersections between projections of disoriented cores and 1-chains contained in the 0-handles: if any such crossing appears, then it involves a piece of disoriented core $c_h$ pointing into/out-of a ribbon singularity, but then the same arc of a 1-chain forms an intersection also with the other piece of $c_h$ emanating from the same ribbon singularity. Since the two arcs of $c_h$ have the same orientation (pointing into/out-of the ribbon singularity) and one lies above and the other below the 0-handle, local contributions to the linking number cancel in pairs. The only contributions of arcs contained in the 0-handles of $F_r$ thus come from ribbon singularities when $[a]=[b]$. \Cref{fig:ribbon-framing} shows one possible configuration: the piece of the 1-handle $h$ lying in front of the 0-handle is on the left of the one behind. In this case the local contribution is $-1$. The other configuration is symmetric and yields local contribution $+1$.
This is consistent with the framing curve $\Phi_h$ for the attaching circle $C_h$ of the 2-handle $H$ in $X$ corresponding to $h$. The framing curve is obtained from $\tau c_h$ by keeping the ``front'' curve starting at the chosen end of $h$ in the upper half-space and rotating the other along with the diagram to the lower half-space. At a ribbon singularity this results in keeping both the front curves above the $xy$-plane and rotating the behind ones or vice versa, disregarding the parts of the curves going away from $h$ and connecting resulting arcs in the obvious way. Then any pair of crossings between $c_{h_i}$ and $\tau c_{h_j}$ corresponding to an intersection point between the projection of $c_{h_i}$ and $c_{h_j}$ results in two crossings between $C_{h_i}$ and $\Phi_{h_j}$ (see \Cref{fig:local-contribution}).
\begin{figure}
\includegraphics[width=\textwidth]{local-contribution.pdf}
\caption{\textbf{Comparison of pairings for a surface description in $S^3$ and for the corresponding branched double cover $X$.} The left figure corresponds to a crossing between disoriented cycles $a$ and $b$ in a surface, and the right one to a point of intersection. In the surface diagram, the blue curves represent parts of $a$ and the red ones parts of $\tau b$. In $X$, the blue curves represent parts of the attaching circle for the 2-handle corresponding to $a$ and the red ones parts of the framing curve for the 2-handle corresponding to $b$. In each case the two crossings in the surface diagram give rise to two crossings of the curves in the boundary of $X_1$ with the same local contribution to the linking number.}
\label{fig:local-contribution}
\end{figure}
The signs of these crossings agree with the signs of the original crossings since disorientations of the cores are preserved below and reversed above the $xy$-plane. The result now follows by using the above remarks and the bilinearity of linking numbers.
\end{proof}
\section{The signature formula}\label{sec:signature}
Let $F$ be a properly embedded surface in $S^3 \times [0,1]$ without closed components whose boundary consists of two links $\L_0 \subset S^3 \times \{0\}$ and $\L_1 \subset S^3 \times \{1\}$ (one of which could be empty). We make no assumption on orientability of $F$. We choose an orientation of the links $\L_i$ and denote the oriented links by $\vec\L_i$; recall that a link's signature is unaltered by the overall reversal of its orientation. The following proposition expresses the change in the signature of the two links in terms of the data determined by the cobordism $F$. This is a slight generalization of the signature formula in \cite{gl} and follows similarly to that.
Since $F$ has no closed components, its normal circle bundle admits a section, and we choose such a section $F'$. Let $\vec\L_i'$ denote the boundary links of $F'$, oriented consistently with $\vec\L_i$. Finally, let $W_F$ be the double branched cover of $S^3 \times [0,1]$ with branch set $F$.
\begin{lemma}\label{lem:sig}
With the notation as above we have
$$\sigma(\vec\L_1) - \sigma(\vec\L_0) = \sigma(W_F) + \frac 12 \left(\lk(\vec\L_0,\vec\L_0') - \lk(\vec\L_1,\vec\L_1')\right).$$
\end{lemma}
\begin{proof}
Let $\Sigma_i$ be a Seifert surface for $\L_i$. Form a (smooth) 4-sphere by adding a 4-disk to each of the boundary components of $S^3 \times [0,1]$. Then, by pushing the interiors of $\Sigma_i$ into the disks, we obtain a smooth surface $\hat F$ as the union of $F$ and the pushed-in Seifert surfaces. Denoting the double branched cover of $S^4$ with branch set $\hat F$ by $\hat W_F$, we obtain, using Novikov additivity and the G-signature theorem,
$$\sigma(\hat W_F)= \sigma(\vec\L_0) + \sigma(W_F) - \sigma(\vec\L_1)= - \frac 12 e(\hat F),$$
where $e(\hat F)$ is the normal Euler number of $\hat F$. Recall that the normal Euler number may be computed by choosing a generic section of the normal bundle of the surface and assigning intersection numbers to intersection points by local choice of orientation of the surface and orienting the section consistently with this choice. The section $\hat F'$ may be constructed by adding to $F'$ generic perturbations $\Sigma_i'$ of pushed-in $\Sigma_i$. As is well known, the linking number $\lk(\vec\L_0,\vec\L_0')$ is equal to the sum of local intersection numbers between $\Sigma_0$ and $\Sigma_0'$. It follows that
$$e(\hat F)=\lk(\vec\L_0,\vec\L_0') - \lk(\vec\L_1,\vec\L_1'),$$
which proves the claimed formula.
\end{proof}
If the surface $F \subset S^3 \times [0,1]$ projects injectively to the sphere, giving an embedded cobordism between the links, the signature of the double branched cover manifold may be computed from the Gordon-Litherland pairing $\lambda_{p(F)}$. Also the links $\vec\L_i'$ may be replaced by nearby parallels of $\vec\L_i$ on the projected image of $F$.
\begin{proposition}\label{prop:sig}
Let $F \subset S^3 \times [0,1]$ be a properly embedded surface such that the restriction of the projection $p$ along the interval to $F$ is an embedding. Then for any choice of orientations of the boundary links $\vec\L_i \subset S^3 \times \{i\}$ of $F$ we have
$$\sigma(\vec\L_1) - \sigma(\vec\L_0) = \sigma(\lambda_{p(F)}) + \frac 12 \left(\lk(\vec\L_0,\vec\L_0^F) - \lk(\vec\L_1,\vec\L_1^F)\right),$$
where $\vec\L_i^F$ is a nearby parallel of $\vec\L_i$ on $p(F)$.
\end{proposition}
\begin{proof}
This follows immediately from the above lemma after noting that $\lambda_{p(F)}$ is the intersection pairing of $W_F$ and that since $F$ is the graph of a function on $p(F)$, we can choose a section $F'$ for which $\L_i'$ is homotopic to $\L_i^F$. Indeed, a section $F'$ can be constructed starting with $\L_0'=\L_0^F$ and pushing $p(F)$ (with the collar between $\L_0$ and $\L_0^F$ removed) slightly below $F$. This has to be completed by adding a collar on the image of $\L_1$ that interpolates to $S^3 \times \{1\}$. Clearly $\L_1'$ is then homotopic to $\L_1^F$.
\end{proof}
Consider now a general slice surface $F \subset B^4$ with boundary link $\L$. We continue assuming that $F$ has no closed components. Let $F_s \subset S^3$ be a description of $F$ and $\lambda_{F_s}$ the corresponding pairing on $DH_1(F_s)$. Recall that $F_s$ consists of a ribbon surface description $F_r$ and a separated sublink of $\partial F_r$ consisting of those components that are capped off in $F$. Let $\L^F$ be a nearby parallel of $\L$ on $F_r$.
\begin{theorem}\label{thm:sig}
Let a link $\L$ be the boundary of a slice surface $F \subset B^4$. Then for any choice $\vec\L$ of orientation for $\L$ its signature is given by
$$\sigma(\vec\L)=\sigma(\lambda_{F_s}) - \frac 12 \lk(\vec\L,\vec\L^F),$$
where $\vec\L^F$ is oriented consistently with $\vec\L$.
\end{theorem}
\begin{proof}
We may assume the radial distance function in $B^4$ induces a Morse function on $F$ so that the ball $D_0$ of radius $1/3$ contains precisely the critical points of index $0$ and the radial shell $E_1$ between $1/3$ and $2/3$ contains precisely the critical points of index $1$. Then the part of $F$ contained in $D_1=D_0 \cup E_1$ is a ribbon surface. We further assume that the interior arcs of ribbon singularities of $F_r$ are contained in the 0-handles.
Since only the double branched cover of $E_1$ may have nontrivial signature, it follows by Novikov additivity that the signature of the branched cover of $E_1$ is equal to the signature of the branched cover of $D_1$ and to that of $B^4$, which is equal to $\sigma(\lambda_{F_r})=\sigma(\lambda_{F_s})$. The result now follows from Lemma \ref{lem:sig} after noting that the lower boundary of the intersection of $F$ with $E_1$ is a 0-framed unlink. That $\L'$ may be replaced by $\L^F$ follows as in the proof of the previous proposition since the radial projection restricted to $F \cap E_1$ is an embedding.
\end{proof}
\bibliographystyle{amsplain}
| {
"timestamp": "2022-07-20T02:20:51",
"yymm": "2207",
"arxiv_id": "2207.09358",
"language": "en",
"url": "https://arxiv.org/abs/2207.09358",
"abstract": "This paper provides a convenient and practical method to compute the homology and intersection pairing of a branched double cover of the 4-ball.To projections of links in the 3-ball, and to projections of surfaces in the 4-ball into the boundary sphere, we associate a sequence of homology groups, called the disoriented homology. We show that the disoriented homology is isomorphic to the homology of the double branched cover of the link or surface. We define a pairing on the first disoriented homology group of a surface and show that this is equal to the intersection pairing of the branched cover. These results generalize work of Gordon and Litherland, for embedded surfaces in the 3-sphere, to arbitrary surfaces in the 4-ball. We also give a generalization of the signature formula of Gordon-Litherland to the general setting.Our results are underpinned by a theorem describing a handle decomposition of the branched double cover of a codimension-2 submanifold in the $n$-ball, which generalizes previous results of Akbulut-Kirby and others.",
"subjects": "Geometric Topology (math.GT)",
"title": "Disoriented homology and double branched covers",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9871787868650145,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7093811466553535
} |
https://arxiv.org/abs/1006.3030 | Satisfiability Thresholds for k-CNF Formula with Bounded Variable Intersections | We determine the thresholds for the number of variables, number of clauses, number of clause intersection pairs and the maximum clause degree of a k-CNF formula that guarantees satisfiability under the assumption that every two clauses share at most $\alpha$ variables. More formally, we call these formulas $\alpha$-intersecting and define, for example, a threshold $\mu_i(k,\alpha)$ for the number of clause intersection pairs $i$, such that every $\alpha$-intersecting k-CNF formula in which at most $\mu_i(k,\alpha)$ pairs of clauses share a variable is satisfiable and there exists an unsatisfiable $\alpha$-intersecting k-CNF formula with $\mu_m(k,\alpha)$ such intersections. We provide a lower bound for these thresholds based on the Lovasz Local Lemma and a nearly matching upper bound by constructing an unsatisfiable k-CNF to show that $\mu_i(k,\alpha) = \tilde{\Theta}(2^{k(2+1/\alpha)})$. Similar thresholds are determined for the number of variables ($\mu_n = \tilde{\Theta}(2^{k/\alpha})$) and the number of clauses ($\mu_m = \tilde{\Theta}(2^{k(1+\frac{1}{\alpha})})$) (see [Scheder08] for an earlier but independent report on this threshold). Our upper bound construction gives a family of unsatisfiable formula that achieve all four thresholds simultaneously. | \section{Introduction}
Satisfiability of CNF is one of the most studied and versatile problems in computer science with its own journal (JSAT), competitions and a yearly conference, the International Conference on Theory and Applications of Satisfiability Testing (SAT). In this paper we investigate a simple class of criteria that can guarantee satisfiability of a given $k$-CNF formula. We consider threshold criteria, i.e., for several quantities connected to a CNF (like the number of clauses, variables or variable intersections) we determine a maximum magnitude leading to satisfiable formulas. We would like to determine the exact threshold of such quantities, in the sense that there exist unsatisfiable formulas for which this quantity equals the threshold. A tightly determined threshold can be used as a simple satisfiability test: given a formula $F$, determine or count the specific quantities in $F$ and declare $F$ satisfiable if one of these quantities is below the threshold. Observe that such thresholds help in deciding satisfiability only if the considered quantities are below the threshold. The problem of deciding satisfiability when all these quantities are above the threshold is still a hard problem.\\
One such threshold that we consider is the \emph{number of clauses} $m$. We denote this threshold by $\mu_m(k)$; it is the smallest number of clauses in an unsatisfiable $k$-CNF formula. The trivial lower bound of $\mu_m(k)\geq 2^k$ is easily seen: each formula that consists of fewer than $2^k$ clauses is satisfiable since each clause is falsified by only a $2^{-k}$ fraction of all possible assignments. On the other hand, there is an unsatisfiable $k$-CNF formula with $2^k$ clauses, namely the formula consisting of all possible $2^k$ clauses (all positive/negative literal combinations) on $k$ variables. Hence, $\mu_m(k)=\Theta(2^k)$.\\
Yet another prominent threshold is the \emph{maximum clause degree} $\Delta$ of a $k$-CNF formula, i.e. the maximum number of clauses that share at least one variable with a fixed clause. The complete formula on $k$ variables once again has maximum degree $2^k$ and gives an easy upper bound for this threshold. On the other hand an application of the powerful Lov\'asz Local Lemma \cite{ErdoesLovasz} shows that every formula with $\Delta < 2^k/e$ is satisfiable leading to the conclusion that $\mu_{\Delta} = \Theta(2^k)$.\\
In this paper we focus on satisfiability thresholds for a special class of formulas in which any two clauses intersect in at most a bounded number (henceforth denoted by $\alpha$) of variables. These formulas are a natural extension of linear CNF formulas, i.e., formulas with $\alpha = 1$, which have been introduced in \cite{porschen2009linear}. The naming and concept of linear CNF formulas come from hypergraphs with bounded intersections as studied for example in \cite{ErdoesLovasz}. Intuitively, the restriction to bounded intersection makes it harder to build conflicting clauses which lead to unsatisfiability. Indeed, it was the original goal of the authors to prove a higher satisfiability threshold for $\Delta$ in linear $k$-CNF formulas using stronger versions of the LLL, e.g., the soft-core LLL version of \cite{scott2005repulsive}. While it turned out that the satisfiability threshold for $\Delta$ remains $\tilde{\Theta}(2^k)$ even for linear CNFs, we obtained interesting dependencies on $\alpha$ in the thresholds for other quantities, namely the number of variables, the number of clauses and the number of clause intersection pairs.
\section{Related work}
This paper builds heavily on the techniques developed by Erd\H{o}s and Lov\'asz in the classical paper ``Problems and results on 3-chromatic hypergraphs and some related questions'' \cite{ErdoesLovasz}. Our proofs are built on the powerful Lov\'asz Local Lemma and also make use of and extend the shrinking operation (see Section \ref{sec:shrinking-hypergraphs}) that was used in \cite{ErdoesLovasz} to construct interesting linear hypergraphs. Independently, and roughly a year before the authors conducted this research, the paper \cite{scheder08almostdisjoint} by Dominik Scheder examined the satisfiability threshold for the number of clauses/constraints, applying essentially the same techniques as here and in \cite{ErdoesLovasz}. While Scheder considers multi-value constraint satisfaction problems -- essentially a non-binary variant of CNF formulas -- he restricts himself to the threshold $\mu_m$. All results presented here directly extend to these multi-value CSPs as well, and to our knowledge this paper is the first to state the thresholds for the number of clause intersection pairs, variables and the max degree explicitly. More complicated algebraic constructions based on ideas of Kuzjurin \cite{Kuzjurin} and Kostochka and R\"{o}dl \cite{roedl} work for the restricted case $\alpha=1$ and can be found in Lemma 2.2 of \cite{Scheder10} without explicit statement of thresholds. Most notably, we use the $\alpha$-shrinking procedure not just in the lower bound but apply it to a maximal $(k+\alpha)$-uniform $\alpha$-intersecting formula in our upper bound construction. This is the key to obtaining bounds on the number of clause intersections and gives an unsatisfiable $\alpha$-intersecting formula that is extremal (up to $\log$-factors) in all considered quantities. \\
Another very interesting related work by Scheder and Zumstein is the paper ``How many conflicts does it need to be Unsatisfiable''~\cite{scheder08conflicts} in which upper and lower bounds on the threshold for conflicts are given. The notion of a conflict is closely related to clause intersections. Instead of counting the pairs of clauses that share a variable, the number of conflicts only counts clause pairs in which at least one variable is shared in an opposite literal. The reason why conflicts are interesting is that the lopsided version of the Lov\'asz Local Lemma \cite{erds1991lopsided} can be applied to $k$-CNF formulas in which each clause is involved in at most $2^k/e$ conflicts and thus guarantees their satisfiability. In contrast to the nearly tight threshold $\mu_i(k,\alpha) = \tilde{\Theta}(2^{k(2+1/\alpha)})$ for clause intersections in $\alpha$-intersecting formulas established here, the conflict threshold is much harder to determine: the best known result for $\alpha=k$ is $\omega(2.69^k) \leq \mu_c(k,k) \leq O(4^k \frac{\log^2 k}{k})$ \cite{scheder08conflicts}.
\section{Preliminaries}
A hypergraph is $\mathbf k${\bfseries-uniform} if all edges contain exactly $k$ vertices. Two edges are called {\bfseries intersecting} if they share at least one vertex and a hypergraph is called $\mathbf \alpha${\bfseries-intersecting} if any two intersecting edges share at most $\alpha$ vertices. A $1$-intersecting hypergraph is called {\bfseries linear}. The {\bfseries edge intersection pairs} of a hypergraph are all pairs of edges that are intersecting. The {\bfseries degree of a vertex} is the number of edges it appears in and the {\bfseries degree of an edge} is the number of edges it intersects with.\\
Every $k$-CNF formula $F$ {\bf induces} a $k$-uniform (multi)-hypergraph $G_F=(V,E)$ where $V$ is the set of variables and the edge (multi)-set $E$ contains a hyperedge over vertices $\{v_1,\cdots,v_k\}$ if and only if there exists a clause consisting of the corresponding variables. This gives a one-to-one mapping between clauses and edges in the induced hypergraph and we adopt all previously introduced hypergraph terminology for $k$-CNF formulas accordingly, e.g., we define clause intersection pairs as all pairs of clauses that intersect in at least one variable.\\
Throughout this paper we are interested in satisfiability thresholds for $\alpha$-intersecting $k$-CNF formulas. We consider the following quantities: number of clauses $m$, number of variables $n$, maximum degree $\Delta$ and number of clause intersection pairs $i$. Denote the thresholds for a quantity $q$ with $\mu_q(\alpha,k)$. A {\bfseries satisfiability threshold} $\mu_q(\alpha,k)$ is the smallest number such that there exists an unsatisfiable $\alpha$-intersecting $k$-CNF with $q=\mu_q(\alpha, k)$. Phrased differently, it is the largest number such that every $\alpha$-intersecting $k$-CNF formula with $q < \mu_q(\alpha, k)$ is satisfiable.\\
Our lower bounds to the thresholds are based on a classical application of the Lov\'asz Local Lemma \cite{ErdoesLovasz} and its more recent constructive algorithmic versions that give randomized \cite{moser08} and deterministic \cite{MT-JACM,llldeterministic} algorithms:
\begin{theorem}\label{thm:lll}
Every $k$-CNF with maximum clause degree $\Delta$ at most $\frac{2^k}{e}$ is satisfiable, and there is an efficient algorithm to find a satisfying assignment.
\end{theorem}
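For concreteness, the following Python sketch illustrates the resampling strategy of \cite{moser08,MT-JACM} behind Theorem \ref{thm:lll}: start from a uniform random assignment and, as long as some clause is violated, resample all variables of a violated clause. This is only a schematic illustration; the encoding of clauses as lists of signed integers is our own, and we make no attempt to reproduce the exact procedure or analysis of the cited papers.
\begin{verbatim}
import random

def satisfies(clause, assignment):
    # a clause is a list of nonzero integers: +i stands for x_i, -i for its negation
    return any((assignment[abs(l)] == 1) == (l > 0) for l in clause)

def resample_until_satisfied(clauses, num_vars, rng=random):
    # start from a uniformly random assignment; index 0 of the list is unused
    assignment = [None] + [rng.randint(0, 1) for _ in range(num_vars)]
    while True:
        violated = [c for c in clauses if not satisfies(c, assignment)]
        if not violated:
            return assignment[1:]
        # resample every variable of some violated clause uniformly at random
        for l in violated[0]:
            assignment[abs(l)] = rng.randint(0, 1)

# toy example: a satisfiable 3-CNF on three variables
print(resample_until_satisfied([[1, 2, 3], [-1, 2, -3], [1, -2, 3]], num_vars=3))
\end{verbatim}
Under the degree assumption of Theorem \ref{thm:lll}, the analysis of \cite{moser08,MT-JACM} shows that the expected number of resampling steps is polynomial in the size of the formula; without such an assumption the loop need not terminate.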
\section{Results}
We present lower bounds (Theorem \ref{thm:lowerbound}) and nearly matching constructive upper bounds (Theorem \ref{thm:upperbound}) that determine all thresholds $\mu_i,\mu_m,\mu_n,\mu_\Delta$ up to $\log$-factors (Corollary \ref{thm:thresholds}). Our lower bound in Theorem \ref{thm:lowerbound} consists of an algorithm based on Theorem \ref{thm:lll} that efficiently finds a satisfying assignment for any $\alpha$-intersecting $k$-CNF formula with few clause intersection pairs, variables or clauses. The upper bound in Theorem \ref{thm:upperbound} proves the existence of unsatisfiable formulas which have only slightly more clause intersections, variables and clauses. Note that while our proof of Theorem \ref{thm:lowerbound} is algorithmic, one needs an efficient implementation of Lemma \ref{lemma:construction-unsat} to make Theorem \ref{thm:upperbound} constructive (see also \cite{Scheder10}). We suspect that some of the bounds below can be improved by $O(k)$-factors, but since all bounds are exponential in $k$ we did not optimize for these polylogarithmic factors.
\begin{theorem}\label{thm:lowerbound}
Every $\alpha$-intersecting $k$-CNF with less than
$$L_i = \frac{1}{2\alpha} \left(\frac{2^{(k-\alpha)}}{ek}-1\right)^{(2+1/\alpha)} \text{ clause intersections} $$
or
$$L_n = \left(\frac{2^{(k-\alpha)}}{ek}\right)^{1/\alpha} \text{ variables}$$
or
$$L_m = \frac{1}{k} \left(\frac{2^{(k-\alpha)}}{ek}\right)^{1+1/\alpha} \text{ clauses}$$
is satisfiable and a satisfying assignment can be found efficiently.
\end{theorem}
\medskip
\begin{theorem}\label{thm:upperbound}
For any $k$ and $\alpha < k$ there is an unsatisfiable $\alpha$-intersecting $k$-CNF with at most
$$U_i = \alpha^2 2^{(k+\alpha)(2+1/\alpha)} k^{(5+2/\alpha)} \text{ clause intersections}$$
and
$$U_n = 2\alpha 2^{k/\alpha} k^{2(1+1/\alpha)} \text{ variables}$$
and
$$U_m = \alpha 2^{(k+\alpha)(1+1/\alpha)}{k^{2(1+1/\alpha)}} \text{ clauses}$$
and
$$U_{\Delta} = \alpha 2^{(k+\alpha)}k^3 \text{ maximum degree}.$$
\end{theorem}
\medskip
In the following $\tilde{\Theta}(x)$ means $\Theta(x (\log x)^c)$ for some absolute
(positive or negative) constant $c$. Combining the above two theorems yields good estimates for the thresholds:
\begin{corollary}\label{thm:thresholds}
The thresholds for satisfiability are:
\begin{itemize}
\item number of clause intersections: $\mu_{i} = \tilde{\Theta}(2^{k(2+1/\alpha)})$
\item number of variables: $\mu_n = \tilde{\Theta}(2^{k/\alpha})$
\item number of clauses: $\mu_m = \tilde{\Theta}(2^{k(1+\frac{1}{\alpha})})$
\item maximum degree: $\mu_{\Delta} = \tilde{\Theta}(2^{k})$
\end{itemize}
\end{corollary}
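As a purely numerical illustration of the lower bounds of Theorem \ref{thm:lowerbound}, the following snippet simply evaluates the displayed formulas; the parameter choices are arbitrary.
\begin{verbatim}
from math import e

def lower_bounds(k, alpha):
    # evaluate the quantities L_i, L_n, L_m from the lower-bound theorem
    d = 2 ** (k - alpha) / (e * k)
    L_i = (d - 1) ** (2 + 1 / alpha) / (2 * alpha)
    L_n = d ** (1 / alpha)
    L_m = d ** (1 + 1 / alpha) / k
    return L_i, L_n, L_m

for k in (10, 20, 30):
    print(k, ["%.3g" % x for x in lower_bounds(k, alpha=2)])
\end{verbatim}
Every $2$-intersecting $k$-CNF with fewer clause intersections than $L_i$, or fewer variables than $L_n$, or fewer clauses than $L_m$, is satisfiable; the growth with $k$ matches the exponents in Corollary \ref{thm:thresholds}.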
\section{Shrinking and Maximal $\alpha$-intersecting Hypergraphs}\label{sec:shrinking-hypergraphs}
This section contains useful lemmas about hypergraphs needed to prove the main theorems. One operation that will be particularly helpful for both the lower and the upper bound is the $\mathbf \beta${\bfseries -shrinking} operation. The shrinking operation creates a $k$-uniform hypergraph $H'$ from a $(k+\beta)$-uniform hypergraph $H$ by deleting the $\beta$ vertices of maximum degree from each edge, breaking ties arbitrarily. Shrinking is similarly defined for $(k+\beta)$-CNF formulas, where the $\beta$ variables of highest degree are deleted from each clause. The next lemma shows that a high-degree vertex can survive the $\beta$-shrinking procedure to remain a high-degree vertex only if many such high-degree vertices are present in the original hypergraph.\\
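A direct implementation of the shrinking operation is straightforward; the following Python sketch (the encoding of a hypergraph as a list of vertex sets and the deterministic tie-breaking rule are our own choices) may help to fix the definition.
\begin{verbatim}
from collections import Counter

def shrink(edges, beta):
    # beta-shrink a hypergraph given as a list of vertex sets:
    # delete the beta vertices of maximum degree from every edge
    degree = Counter(v for e in edges for v in e)
    shrunk = []
    for e in edges:
        # sort by (degree, label) so that ties are broken deterministically
        doomed = sorted(e, key=lambda v: (degree[v], v), reverse=True)[:beta]
        shrunk.append(frozenset(e) - set(doomed))
    return shrunk

# toy example: 1-shrinking a 3-uniform hypergraph yields a 2-uniform one
print(shrink([{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}], beta=1))
\end{verbatim}
Note that the degrees are computed once in the original hypergraph and are not updated while the edges are being shrunk, matching the definition above.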
\begin{lemma}\label{lemma:shrinking}
Let $H$ be a $(k+\alpha)$-uniform $\alpha$-intersecting hypergraph and $H'$ be the result of $\alpha$-shrinking $H$. If $H'$ has a vertex of degree $d$, then $H$ has more than $d^{1/\alpha}$ vertices of degree at least $d$.
\end{lemma}
\begin{proof}
Let $v$ be a vertex in $H'$ of degree $d$. Since $H'$ was created by shrinking $H$, there are at least $d$ edges in $H$ in which $v$ is present but did not get deleted. We call the set of those edges $\C$; then we know that $|\C|\geq d$. From each edge $e \in \C$, exactly $\alpha$ vertices got deleted, all of which have degree at least $d$. We claim that the mapping that maps each $e \in \C$ to this $\alpha$-sized set of deleted vertices is injective:\\
Suppose two edges $e_1, e_2\in \C$ get mapped to the same $\alpha$-sized set of vertices. Then, the edges $e_1$ and $e_2$ intersect in these $\alpha$ vertices; furthermore they also intersect in the vertex $v$ and thus intersect in $\alpha+1$ vertices. This is a contradiction to the $\alpha$-intersecting property of $H$.\\
Injectivity gives us that there are $|\C| \geq d$ different $\alpha$-sized subsets of vertices which got deleted instead of $v$ while shrinking. All vertices in those subsets must have degree at least $d$ by definition of the shrinking operation. Furthermore if $N$ is the number of distinct vertices in those subsets then we have $d \leq \binom{N}{\alpha} < N^{\alpha}$. Therefore there are at least $N > d^{1/\alpha}$ vertices with degree at least $d$ in $H$.
\end{proof}
\medskip
The next lemma proves that any maximal $\alpha$-intersecting hypergraph on $n$ vertices must have a large number of edges. It uses a bound on the Tur{\'a}n number that is due to de Caen \cite{de1983extension}. The Tur{\'a}n number $T(n,k,r)$ for $r$-uniform hypergraphs with $n$ vertices is the smallest number of edges possible such that every set of $k$ vertices contains at least one edge. This number was determined for graphs by Tur{\'a}n \cite{turan1941extremal} and was extended to hypergraphs by Tur{\'a}n himself in the report ``Research Problems''~\cite{turan1961research}.\\
\begin{lemma}\label{lem:maximal-hypergraphs}
Every maximal $\alpha$-intersecting $k$-uniform hypergraph $H$ on $n$ vertices has $m \geq \frac{\binom{n}{\alpha+1}}{\binom{k}{\alpha+1}^2}$ edges.
\end{lemma}
\begin{proof}
Let $H$ be a maximal $\alpha$-intersecting $k$-uniform hypergraph with $m$ edges. Since $H$ is $\alpha$-intersecting, each of the $\binom{n}{\alpha+1}$ subsets of vertices of size $\alpha+1$ is covered by at most one hyperedge of $H$. Also, $H$ covers exactly $m \binom{k}{\alpha+1}$ distinct subsets of size $\alpha+1$. If $m\binom{k}{\alpha+1} < T(n,k,\alpha+1)$, the $(\alpha+1)$-uniform hypergraph consisting of all covered $(\alpha+1)$-size subsets has fewer than $T(n,k,\alpha+1)$ edges, and therefore there exists a $k$-subset $K$ that does not contain any covered edge. This $k$-subset can be added as an edge to $H$ while keeping it $\alpha$-intersecting. Indeed, if some edge $e$ intersects $K$ in at least $\alpha+1$ vertices, then the corresponding set of vertices is covered, contradicting the choice of $K$. Thus, if $m<\frac{T(n,k,\alpha+1)}{\binom{k}{\alpha+1}}$ then $H$ is not maximal $\alpha$-intersecting. To finish, we use a lower bound of de Caen \cite{de1983extension} on the Tur{\'a}n number: $T(n,k,\alpha+1) \geq \frac{n-k+1}{n-\alpha}\binom{n}{\alpha+1}/\binom{k-1}{\alpha}$; plugging this in gives the desired result.
\end{proof}
We remark that the same result also appears in Scheder
\cite{scheder08almostdisjoint} with a somewhat simpler and
self-contained proof.
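A maximal $\alpha$-intersecting $k$-uniform hypergraph on a given vertex set always exists and can, for instance, be produced greedily; the following Python sketch (a brute-force scan over all $k$-subsets, so exponential in $n$ and intended for illustration only) makes this explicit.
\begin{verbatim}
from itertools import combinations

def greedy_maximal(n, k, alpha):
    # scan all k-subsets of {0,...,n-1} in lexicographic order and add a
    # subset whenever it shares at most alpha vertices with every edge
    # chosen so far; the result is a maximal alpha-intersecting hypergraph
    edges = []
    for cand in combinations(range(n), k):
        cand = frozenset(cand)
        if all(len(cand & e) <= alpha for e in edges):
            edges.append(cand)
    return edges

# toy example: a maximal linear (1-intersecting) 3-uniform hypergraph on 7 vertices;
# the greedy scan returns 7 triples, a Steiner triple system on 7 points
print(len(greedy_maximal(7, 3, 1)))
\end{verbatim}
The lemma above then guarantees that any such maximal hypergraph, however it is produced, has many edges; an efficient construction is a separate issue and is not needed for the existence statements below.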
\section{A Constructive Lower Bound}
This section gives the proof for the lower bound in Theorem \ref{thm:lowerbound}:\\
\iffalse
\begin{theorem}\label{thm:lowerbound}
Every $\alpha$-intersecting $k$-CNF with less than
\[\i = \frac{1}{2\alpha} \left(\frac{2^{(k-\alpha)}}{ek}-1\right)^{(2+1/\alpha)} \]
clause intersections or
$$n = \left(\frac{2^{(k-\alpha)}}{ek}\right)^{1/\alpha} $$
variables or
$$m = \frac{1}{k} \left(\frac{2^{(k-\alpha)}}{ek}\right)^{1+1/\alpha} $$
clauses is satisfiable and a satisfying assignment can be found efficiently.
\end{theorem}
\fi
\begin{proof} (of Theorem \ref{thm:lowerbound})\\
We prove that every $\alpha$-intersecting $k$-CNF $F$ is either satisfiable by Theorem \ref{thm:lll} after $\alpha$-shrinking it, or it must have many clause intersection pairs, variables and clauses and a high maximum degree, contradicting the hypothesis about the formula $F$.\\
Let $F'$ be the resulting $(k-\alpha)$-CNF we get from $\alpha$-shrinking $F$. If all variables in $F'$ have degree less than $d = \frac{2^{k-\alpha}}{ek}$, then every clause of $F'$ intersects fewer than $(k-\alpha)d \le 2^{k-\alpha}/e$ other clauses, so Theorem \ref{thm:lll} guarantees that $F'$ is satisfiable and that a satisfying assignment can be found efficiently. Note that a satisfying assignment for $F'$ is also a satisfying assignment for $F$, since every clause of $F'$ is a subclause of the corresponding clause of $F$.\\
In the other case, suppose $F'$ has at least one variable of degree $d$. Then, Lemma \ref{lemma:shrinking} shows that $F$ must have at least $d^{1/\alpha}$ variables of degree at least $d$.\\
To count the number of clause intersection pairs in $F$, we count the intersections of clauses containing one of the $d^{1/\alpha}$ high-degree variables. For each such variable, the clauses containing it induce a clique with at least $(d-1)^2/2$ intersections. Summing these counts over the $d^{1/\alpha}$ variables we get at least $(d-1)^{2+1/\alpha}/2$ intersections, but we overcount each intersection up to $\alpha$ times since two clauses can intersect in up to $\alpha$ variables. Therefore $F$ has at least $\frac{1}{2\alpha} (d-1)^{2+1/\alpha}$ intersections.\\
To count the number of clauses in $F$ we look at the union of the clauses containing one of the $d^{1/\alpha}$ high-degree variables. Counting clauses with multiplicity over these variables gives at least $d^{1+1/\alpha}$ clauses, and each clause is counted at most once for each of its $k$ variables. Thus $F$ has at least $d^{1+1/\alpha}/k$ clauses.\\
Finally it is clear that $F$ has at least $d^{1/\alpha}$ variables.
\end{proof}
\section{Upper bounds for the thresholds}
This section gives the proof for the upper bounds in Theorem \ref{thm:upperbound}.\\
Before we prove the theorem itself, the following lemma gives a general way to transform a sufficiently dense $k$-uniform hypergraph into an unsatisfiable $k$-CNF formula by iteratively taking a hyperedge and greedily choosing positive or negative literals for the variables:\\
\begin{lemma}\label{lemma:construction-unsat}
If there is a $k$-uniform hypergraph $H$ on $n$ vertices with at least $m = n 2^k$ edges, then there exists an unsatisfiable $k$-CNF $F$ inducing $H$.
\end{lemma}
\begin{proof}
Denote the vertices in $H$ by $v_1,\ldots,v_n$ and associate with them the variables $x_1,\ldots,x_n$ that will occur in $F$.
We call $\A\in\{0,1\}^n$ an assignment; it assigns the value $\A_i$ to the variable $x_i$.
We say that a clause \emph{covers} an assignment $\A\in\{0,1\}^n$ if it is not satisfied by the assignment. We will iteratively create a clause for every edge in $H$ greedily covering the maximum number of yet uncovered assignments. We have to show that in the end all $2^n$ assignments are covered. Consequently, the conjunction of the created clauses forms an unsatisfiable $k$-CNF.\\
We pick edges $e$ from $H$ in an arbitrary order. We want to create a clause for $e$ on the $k$ variables associated with the $k$ vertices in $e$. For each variable we have the choice to pick the positive or the negative literal. These are $2^k$ different choices and the assignments covered by two different choices are disjoint. Since every assignment is covered by exactly one of these choices, the assignments get partitioned into $2^k$ parts. Simple averaging then guarantees that there exists a choice covering at least a $1/2^k$ fraction of the assignments not covered so far. After $m$ iterations of greedily creating clauses that cover the maximal number of uncovered assignments, the number of uncovered assignments is at most $2^n \left(1 - 2^{-k}\right)^m = \left(2\left(1 - 2^{-k}\right)^{2^k}\right)^n \le (2/e)^n < 1$. With all assignments covered, the created formula $F$ is unsatisfiable and by construction also induces $H$ as required.
\end{proof}
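The greedy covering argument above translates directly into the following Python sketch; it enumerates all $2^n$ assignments, so it is exponential in $n$ and meant purely as an illustration, and the encoding of clauses as lists of signed integers is our own.
\begin{verbatim}
from itertools import product

def greedy_unsat(edges, n):
    # edges: a k-uniform hypergraph as a list of tuples of 0-based variable indices;
    # for each edge choose literal signs so that the clause falsifies as many
    # still-uncovered assignments as possible (cf. the proof above)
    uncovered = set(product((0, 1), repeat=n))        # all 2^n assignments
    clauses = []
    for e in edges:
        best_signs, best_hits = None, -1
        for signs in product((0, 1), repeat=len(e)):
            # the clause is falsified exactly by assignments a with a[v] == signs[i]
            hits = sum(1 for a in uncovered
                       if all(a[v] == s for v, s in zip(e, signs)))
            if hits > best_hits:
                best_signs, best_hits = signs, hits
        # falsifying value 0 means the literal is x_v (written +(v+1)), value 1 means its negation
        clauses.append([(v + 1) if s == 0 else -(v + 1) for v, s in zip(e, best_signs)])
        uncovered = {a for a in uncovered
                     if not all(a[v] == s for v, s in zip(e, best_signs))}
    return clauses, len(uncovered)    # 0 uncovered assignments means the formula is unsatisfiable

# toy example: variables 0,1,2 and the 2-subsets of {0,1,2}, each taken four times
clauses, left = greedy_unsat([(0, 1), (0, 2), (1, 2)] * 4, n=3)
print(left)    # prints 0: every assignment is falsified by some clause
\end{verbatim}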
\medskip
The above lemma shifts the focus towards finding a suitable dense $k$-uniform hypergraph in order to find an unsatisfiable $k$-CNF. The following proof of Theorem \ref{thm:upperbound} shows that $\alpha$-shrinking a maximal $\alpha$-intersecting $(k+\alpha)$-uniform hypergraph results in hypergraphs with nice additional extremal properties. Furthermore, choosing a large number of vertices results in hypergraphs that obey the bound in Lemma \ref{lemma:construction-unsat} and can thus be transformed into the desired unsatisfiable $k$-CNF.
\iffalse
\begin{theorem}\label{thm:upperbound}
For any $k$ and $\alpha < k$ there is an unsatisfiable $\alpha$-intersecting $k$-CNF with at most
$$i = \alpha^2 2^{(k+\alpha)(2+1/\alpha)} k^{(5+2/\alpha)}$$
clause intersections
$$n = 2\alpha 2^{k/\alpha} k^{2(1+1/\alpha)}$$
variables
$$m = \alpha 2^{(k+\alpha)(1+1/\alpha)}{k^{2(1+1/\alpha)}}$$
clauses and a maximum degree of
$$\Delta < \alpha 2^{(k+\alpha)}k^3$$.
\end{theorem}
\fi
\begin{proof} (of Theorem \ref{thm:upperbound})\\
We create the formula by applying Lemma \ref{lemma:construction-unsat} to an $\alpha$-intersecting hypergraph. We obtain this hypergraph by $\alpha$-shrinking a maximal $\alpha$-intersecting $(k+\alpha)$-uniform hypergraph. Observe that it makes the resulting hypergraph $k$-uniform.\\
We choose $n = \alpha \left(2^{k+\alpha}k^{2(\alpha+1)}\right)^{1/\alpha}$ and build an $\alpha$-intersecting $(k+\alpha)$-uniform hypergraph on $n$ vertices. The choice of $n$ is such that Lemma \ref{lem:maximal-hypergraphs} guarantees that we can find a $(k+\alpha)$-uniform $\alpha$-intersecting hypergraph $H$ with $$m = \frac{n^{\alpha+1}}{k^{2(\alpha+1)}\alpha^\alpha} = \alpha 2^{(k+\alpha)(1+1/\alpha)}{k^{2(1+1/\alpha)}} = 2^{k+\alpha}n$$ edges. This is a sufficiently large number of edges to construct an unsatisfiable formula $F$ for the hypergraph $H$ using Lemma \ref{lemma:construction-unsat}. Having constructed $H$, we $\alpha$-shrink it to obtain a hypergraph $H'$ and its corresponding formula $F'$. Note that $F'$ is unsatisfiable because $F$ is unsatisfiable. The significant advantage of $H'$ obtained this way is that it has guarantees on the maximum degree and on the number of clause intersections. More precisely, we claim that $H'$ has maximum degree at most $(m(k+\alpha))^{1/(1+1/\alpha)}$. Suppose that after the shrinking there is a vertex of degree $d > (m(k+\alpha))^{1/(1+1/\alpha)}$. Lemma \ref{lemma:shrinking} shows that in this case $H$ contains more than $d^{1/\alpha}$ vertices of degree at least $d$. The union of the edges containing those vertices, counted with multiplicity, has size larger than $d^{1+1/\alpha}$ and each edge gets counted at most $(k+\alpha)$ times this way. Therefore $H$ would have more than $d^{1+1/\alpha}/(k+\alpha) > m$ edges --- a contradiction.\\
Lemma \ref{lemma:construction-unsat} thus yields the unsatisfiable $(k+\alpha)$-CNF $F$ inducing $H$, and its $\alpha$-shrinking $F'$ is the desired unsatisfiable $\alpha$-intersecting $k$-CNF. This formula has $n$ variables and $m$ clauses since shrinking preserves these quantities. Furthermore, the maximum degree $\Delta$ of $F'$ is at most $(m(k+\alpha))^{1/(1+1/\alpha)}$, which also implies that the number of clause intersections is at most $m\Delta$.
\end{proof}
\medskip
{\bfseries Acknowledgments}\\
The research for this paper was done in the summer of 2009 while the authors were at MSR India. We thank Aravind Srinivasan for pointing out the questions about satisfiability thresholds for almost disjoint $k$-CNF formulas, which led to this paper. We also want to thank Dominik Scheder.
\bibliographystyle{abbrv}
| {
"timestamp": "2010-06-16T02:01:42",
"yymm": "1006",
"arxiv_id": "1006.3030",
"language": "en",
"url": "https://arxiv.org/abs/1006.3030",
"abstract": "We determine the thresholds for the number of variables, number of clauses, number of clause intersection pairs and the maximum clause degree of a k-CNF formula that guarantees satisfiability under the assumption that every two clauses share at most $\\alpha$ variables. More formally, we call these formulas $\\alpha$-intersecting and define, for example, a threshold $\\mu_i(k,\\alpha)$ for the number of clause intersection pairs $i$, such that every $\\alpha$-intersecting k-CNF formula in which at most $\\mu_i(k,\\alpha)$ pairs of clauses share a variable is satisfiable and there exists an unsatisfiable $\\alpha$-intersecting k-CNF formula with $\\mu_m(k,\\alpha)$ such intersections. We provide a lower bound for these thresholds based on the Lovasz Local Lemma and a nearly matching upper bound by constructing an unsatisfiable k-CNF to show that $\\mu_i(k,\\alpha) = \\tilde{\\Theta}(2^{k(2+1/\\alpha)})$. Similar thresholds are determined for the number of variables ($\\mu_n = \\tilde{\\Theta}(2^{k/\\alpha})$) and the number of clauses ($\\mu_m = \\tilde{\\Theta}(2^{k(1+\\frac{1}{\\alpha})})$) (see [Scheder08] for an earlier but independent report on this threshold). Our upper bound construction gives a family of unsatisfiable formula that achieve all four thresholds simultaneously.",
"subjects": "Discrete Mathematics (cs.DM)",
"title": "Satisfiability Thresholds for k-CNF Formula with Bounded Variable Intersections",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9871787864878115,
"lm_q2_score": 0.7185943985973773,
"lm_q1q2_score": 0.7093811463842976
} |
https://arxiv.org/abs/2212.12962 | Move-reduced graphs on a torus | We determine which bipartite graphs embedded in a torus are move-reduced. In addition, we classify equivalence classes of such move-reduced graphs under square/spider moves. This extends the class of minimal graphs on a torus studied by Goncharov-Kenyon, and gives a toric analog of Postnikov's results on a disk. | \section*{Introduction}\label{sec:intro}
Let ${\mathbb{T}}={\mathbb{R}}^2/{\mathbb{Z}}^2$ be a torus, and let $\Gamma$ be a bipartite graph embedded in ${\mathbb{T}}$. We say that two such graphs $\Gamma,\Gamma'$ are \emph{move-equivalent} if they are related by the moves \Msq--\Mres shown in \cref{fig:intro:moves}. We say that $\Gamma$ is \emph{move-reduced} if there does not exist a graph $\Gamma'$ move-equivalent to $\Gamma$ to which we can apply one of the \emph{reduction moves} (R1)--(R3)\xspace shown in \cref{fig:localmoveplabic}. The goal of this paper is to describe which graphs $\Gamma$ are move-reduced, and which pairs of move-reduced graphs are move-equivalent.
%
%
%
%
%
%
%
A similar problem has been considered in~\cite{GK13} for the class of \emph{minimal graphs}. Each minimal graph is move-reduced, but the converse is not true; see \cref{fig:move_red_vs_minimal}.
We briefly summarize our main results; see \cref{sec:main} for more details. It was shown in~\cite{GK13} that move-equivalence classes of minimal graphs are classified by their Newton polygons $N$. The sides of $N$ are obtained by taking the homology classes of strands in $\Gamma$. Here, a \emph{strand} is a path making a sharp right (resp., left) turn at each black (resp., white) vertex. A strand of a move-reduced (as opposed to minimal) graph $\Gamma$ may intersect itself, and this induces a \emph{weak decoration} $\bfla=({\lambda}^\e)_{\e\in E(N)}$ of $N$, labeling each side $\e=(i,j)$ of $N$ by a partition ${\lambda}^\e$ of $\gcd(i,j)$. Our first main result (\cref{thm:intro:move_red}) gives a characterization of move-reduced graphs in terms of weakly decorated Newton polygons that parallels the results of~\cite{GK13,Postnikov}.
Our second main result concerns move-equivalence classes of move-reduced graphs. The solution to this problem turns out to be more subtle than its counterparts in~\cite{GK13,Postnikov}. First, we show that in a move-reduced graph, different strands corresponding to the same side of $N$ never cross each other. This induces a \emph{strong decoration} $\bfcc=(\cc^\e)_{\e\in E(N)}$ of $N$, labeling each side $\e=(i,j)$ of $N$ with a cyclic composition $\cc^\e$ of $\gcd(i,j)$. We associate a \emph{rotation number} $\operatorname{d}(\bfcc)$ to $\bfcc$, and our second main result (\cref{thm:intro:move_eq}) is that the set of all move-reduced graphs with strongly decorated Newton polygon $(N,\bfcc)$ is a union of $\operatorname{d}(\bfcc)$ move-equivalence classes. The classes are distinguished by the value of an explicit \emph{modular invariant} $\mu(\Gamma)\in{\mathbb{Z}}/\operatorname{d}(\bfcc){\mathbb{Z}}$ associated to each move-reduced graph $\Gamma$.
%
%
%
%
Our motivation to study move-reduced graphs arises from the dimer model on $\Gamma$ and the associated \emph{spectral transform} of~\cite{KOS,KeOk}. Each weighted bipartite graph $(\Gamma,\operatorname{wt})$ with positive real edge weights embedded in ${\mathbb{T}}$ determines a simple Harnack curve with a distinguished line bundle. It is thus natural to study which limiting objects appear when one sends some edge weights to zero. This corresponds to deleting edges from $\Gamma$ and then applying reduction moves. Note in particular that the move-reduced graph $\Gamma_2$ in \figref{fig:move_red_vs_minimal}(b) is obtained from the minimal graph $\Gamma_1$ in \figref{fig:move_red_vs_minimal}(a) by removing a single edge, which demonstrates that the class of move-reduced graphs is more naturally suited for this problem. %
For the case of planar bipartite graphs in a disk, the resulting space of limiting objects is the \emph{totally nonnegative Grassmannian}~\cite{Postnikov}, where the role of the spectral transform is played by Postnikov's boundary measurement map. In particular, Postnikov characterized move-reduced graphs on a disk and showed that their move-equivalence classes are classified by \emph{positroids}.
The present manuscript is the first in a series of papers aimed at studying the toric analog of the totally nonnegative Grassmannian and its positroid stratification.
\begin{figure}
\begin{tabular}{ccc}
\includegraphics[width=0.4\textwidth]{figures/square_move}
& \qquad &
\includegraphics[width=0.4\textwidth]{figures/resplit_move}
\\
(M1) The spider move. & & (M2) The contraction-uncontraction move.
\end{tabular}
\caption{\label{fig:intro:moves} Equivalence moves for bipartite graphs in ${\mathbb{T}}$. One can also apply these moves with the roles of white and black swapped. For \Msq, the vertices of the square are assumed to have degree at least three. For (M2)\xspace, the two white vertices are assumed to be distinct and have degree at least two. The shaded area denotes a small open disk inside ${\mathbb{T}}$.}
\end{figure}
\begin{figure}
\def0.18\textwidth{0.23\textwidth}
\begin{tabular}{ccccc}
\includegraphics[width=0.18\textwidth]{figures/move_R1}
& \qquad &
\includegraphics[width=0.18\textwidth]{figures/move_R2}
& \qquad &
\includegraphics[width=0.18\textwidth]{figures/move_Rdipole}
\\
%
(R1) Parallel edge reduction. & & (R2) Leaf reduction. & & (R3) Dipole reduction.
\end{tabular}
\caption{\label{fig:localmoveplabic} Reduction moves for bipartite graphs. (R1) removes one of two parallel edges, (R2) removes a leaf together with its single neighbor, and (R3) removes an isolated edge.
%
%
}
\end{figure}
\section{Main results}\label{sec:main}
%
%
%
%
%
%
In \cref{sec:intro:newton}, we introduce the notions of \emph{weakly} and \emph{strongly decorated polygons}. In \cref{sec:intro:move_reduced}, we will associate a weakly decorated polygon with any bipartite graph embedded in the torus, and we will use it to characterize move-reduced graphs. In \cref{sec:intro:move_eq}, we will associate a strongly decorated polygon to any move-reduced graph $\Gamma$, and will use it to characterize which graphs are move-equivalent to $\Gamma$.
%
\subsection{Decorated polygons}\label{sec:intro:newton}
A convex polygon $N$ in the plane ${\mathbb{R}}^2$ is called \textit{integral} if its vertices are contained in ${\mathbb{Z}}^2 \subset {\mathbb{R}}^2$. We denote the set of edges of $N$ by $E(N)$, and orient them counterclockwise around the boundary of $N$ so that each edge is a vector in ${\mathbb{Z}}^2$. %
For an edge $\e=(a,b)$ of $N$, let $\ilen|\e|:=\gcd(a,b)$ be its \emph{integer length}. For vectors $\e,\e'\in{\mathbb{Z}}^2$, let ${ \operatorname{det}}(\e,\e')$ be the determinant of the $2\times2$ matrix with columns $\e,\e'$.
%
A \emph{partition} of $n$ with $k$ parts is a tuple ${\lambda}=({\lambda}_1\geq{\lambda}_2\geq\dots\geq{\lambda}_k>0)$ such that $|{\lambda}|:={\lambda}_1+{\lambda}_2+\dots+{\lambda}_k=n$. A \textit{composition} of $n$ with $k$ parts is a tuple $\alpha=(\alpha_1,\alpha_2,\dots, \alpha_k) \in {\mathbb{Z}}^k_{> 0}$ such that $|\alpha|:=\alpha_1+\dots+\alpha_k=n$. A \textit{cyclic composition} of $n$ with $k$ parts is an equivalence class of compositions of $n$ with $k$ parts under cyclic shifts $(\alpha_1,\alpha_2,\dots,\alpha_k)\sim(\alpha_2,\dots,\alpha_k,\alpha_1)$. Thus, forgetting the order of the parts of a (cyclic) composition yields a partition.
\begin{definition}\label{dfn:decor}\
\begin{itemize}
\item A \emph{weakly decorated polygon} is a pair $\Nwdec=(N,\bfla)$, where $N$ is a convex integral polygon, and $\bfla=({\lambda}^\e)_{\e \in E(N)}$, where ${\lambda}^\e$ is a partition of $\ilen|\e|$.
\item A \emph{strongly decorated polygon} is a pair $\Ndec=(N,\bfcc)$, where $N$ is a convex integral polygon, and $\bfcc=(\cc^\e)_{\e \in E(N)}$, where $\cc^\e$ is a cyclic composition of $\ilen|\e|$.
\end{itemize}
\end{definition}
\begin{figure}
\begin{tabular}{ccccccccc}
\includegraphics[width=0.2\textwidth]{figures/grid_22}
&
\includegraphics[width=0.2\textwidth]{figures/fav_ex}
&
\includegraphics[width=0.2\textwidth]{figures/fav_ex_strands}
&
\includegraphics[width=0.2\textwidth]{figures/fav_ex_no_edge}
\\
(a) Graph $\Gamma_1$. & (b) Graph $\Gamma_2$. & (c) $\Gamma_2$ with strands. & (d) $\Gamma_3$ with strands.
\\
&
&
\includegraphics[width=0.2\textwidth]{figures/newton_fav}
&
\includegraphics[width=0.2\textwidth]{figures/newton_fav_no_edge}\\
& & (e) $\Nwdec(\Gamma_2)$. & (f) $\Nwdec(\Gamma_3)$.
\end{tabular}
\caption{\label{fig:move_red_vs_minimal} The graphs $\Gamma_1$ and $\Gamma_3$ are minimal in the sense of~\cite{GK13} and therefore are move-reduced. The graph $\Gamma_2$ is not minimal but is move-reduced. See \cref{sec:intro:move_reduced} for a definition of strands and $\Nwdec(\Gamma)$.}
\end{figure}
\subsection{Move-reduced graphs}\label{sec:intro:move_reduced}
Recall that a \emph{strand} or a \emph{zig-zag path} $\sa$ is a walk in $\Gamma$ that turns maximally right at the black vertices and maximally left at the white vertices of $\Gamma$. The set of strands of $\Gamma$ is denoted by $\bm{S}(\Gamma)$. Since $\Gamma$ is finite, a strand $\sa$ is a (not necessarily simple) closed walk, and we let $[\sa]\in{\mathbb{Z}}^2=H_1({\mathbb{T}},{\mathbb{Z}})$ denote its homology. %
Since each edge of $\Gamma$ is contained in two strands that traverse it in opposite directions, the sum $\sum_{\sa\in\bm{S}(\Gamma)} [\sa]$ is zero, so we can associate to $\Gamma$ a weakly decorated polygon $\Nwdec(\Gamma)=(N,\bfla)$ as follows. We let $N$ be the convex integral polygon (possibly degenerate, i.e., having $0$ area), unique up to translation, whose counterclockwise-oriented boundary consists of the vectors $([\sa])_{\sa \in\bm{S}(\Gamma)}$ in some order. We say that two strands $\sa,\sa'\in\bm{S}(\Gamma)$ are \emph{parallel} if $[\sa],[\sa']\neq0$ and $[\sa]\in{\mathbb{R}}_{>0}[\sa']$. For each edge $\e \in E(N)$, we let
\begin{equation*}%
\bm{S}^e(\Gamma):=\{\sa\in\bm{S}(\Gamma) \mid [\sa]\in{\mathbb{R}}_{>0}\e\}
\end{equation*}
denote the corresponding set of parallel strands. Thus, we have $\e=\sum_{\sa\in\bm{S}^e(\Gamma)} [\sa]$,
%
and we let ${\lambda}^\e:=(\ilen|[\sa]|)_{\sa\in\bm{S}^e(\Gamma)}$ be the corresponding partition of $\ilen|\e|$. The polygon $N$ is called the \emph{Newton polygon} of $\Gamma$, and we call $\Nwdec(\Gamma)$ the \emph{weakly decorated Newton polygon} of $\Gamma$. The weakly decorated Newton polygon is invariant under \Msq--\Mres but not under (R1)--(R3)\xspace.
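For concreteness, the bookkeeping behind $\Nwdec(\Gamma)$ can be spelled out computationally. The following Python sketch is purely illustrative and not part of the combinatorial development; it assumes that the homology classes $[\sa]$ of the strands have already been extracted from $\Gamma$, and the function name and the sample input are ours.
\begin{verbatim}
from math import atan2, gcd
from collections import defaultdict

def weakly_decorated_polygon(strand_classes):
    """Group the nonzero strand homology classes by primitive direction;
    each group gives one side e of the Newton polygon N (the sum of its
    vectors) together with the partition lambda^e of integer lengths."""
    groups = defaultdict(list)
    for (i, j) in strand_classes:
        if (i, j) == (0, 0):
            continue  # zero-homology strands contribute no side
        g = gcd(abs(i), abs(j))
        groups[(i // g, j // g)].append(g)
    sides = []
    for (di, dj), lengths in groups.items():
        total = sum(lengths)
        sides.append(((di * total, dj * total),
                      tuple(sorted(lengths, reverse=True))))
    # order the sides counterclockwise; their vectors sum to (0, 0)
    sides.sort(key=lambda s: atan2(s[0][1], s[0][0]))
    return sides

# hypothetical strand classes (2,0), (1,0), (0,1), (-2,-1), (-1,0):
# the two horizontal classes merge into the side (3,0) with partition (2,1)
print(weakly_decorated_polygon([(2, 0), (1, 0), (0, 1), (-2, -1), (-1, 0)]))
# [((-2, -1), (1,)), ((3, 0), (2, 1)), ((0, 1), (1,)), ((-1, 0), (1,))]
\end{verbatim}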
In \cref{prop:intro:exists}, we will see that for any weakly decorated polygon $\Nwdec$, there exists a move-reduced graph $\Gamma$ satisfying $\Nwdec(\Gamma)=\Nwdec$. On the other hand, it is clear that any graph $\Gamma$ can be transformed into a move-reduced graph using the moves \Msq--\Mres and (R1)--(R3)\xspace. %
%
%
%
%
%
\begin{definition} \label{dfn:exc}
For a partition ${\lambda}=({\lambda}_1\geq{\lambda}_2\geq\dots\geq{\lambda}_k>0)$ of $n$ with $k$ parts, the \emph{excess} of ${\lambda}$ is defined by $\excess{{\lambda}}:=n-k=\sum_{i=1}^k({\lambda}_i-1)$. If $\bfla=({\lambda}^\e)_{\e\in E}$ is a collection of partitions, we denote $\excess{\bfla}:=\sum_{\e\in E} \excess{{\lambda}^e}$.
%
%
%
%
\end{definition}
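\noindent For example, the partition ${\lambda}=(3,2,1)$ of $n=6$ has $k=3$ parts, so $\excess{{\lambda}}=6-3=3$.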
%
%
%
%
%
%
%
%
%
A \emph{face} of $\Gamma$ is a connected component of ${\mathbb{T}}\setminus \Gamma$. Thus, a face of $\Gamma$ is contractible if and only if it is homeomorphic to an open disk.
%
%
We are ready to state our first main result.
\begin{theorem}\label{thm:intro:move_red}
Let $\Gamma$ be a bipartite graph embedded in ${\mathbb{T}}$ with weakly decorated Newton polygon $\Nwdec(\Gamma)=(N,\bfla)$. Assume that $\Gamma$ has a perfect matching. The following conditions are equivalent.
\begin{enumerate}
\item\label{item:intro:move_red} $\Gamma$ is move-reduced.
%
%
\item\label{item:intro:area} $\Gamma$ has $2\operatorname{Area}(N)+\excess{\bfla}$ contractible faces, no contractible connected components, and no leaf vertices.
%
%
%
%
%
%
%
%
%
%
%
%
\end{enumerate}
Moreover, if $\Gamma$ is move-reduced and $\sa,\sa'\in\bm{S}(\Gamma)$ are two distinct parallel strands, then $\sa,\sa'$ do not share any vertices or edges of $\Gamma$.
%
%
%
\end{theorem}
%
\begin{figure}
\def0.18\textwidth{0.18\textwidth}
\scalebox{0.9}{
\begin{tabular}{ccc%
}
\includegraphics[width=0.18\textwidth]{figures/loops} &
\qquad
&
\includegraphics[width=0.18\textwidth]{figures/loops_strands}
%
%
%
%
%
%
%
\end{tabular}
}
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
\caption{\label{fig:parallelbigons}A move-reduced graph with no perfect matchings and whose Newton polygon is a single point.}
\end{figure}
\begin{remark}
The assumption that $\Gamma$ has a perfect matching is essential; for example, \cref{thm:intro:move_red} fails for the graph $\Gamma$ in Figure~\ref{fig:parallelbigons}. This graph is move-reduced and does not have any perfect matchings. Thus, $\Gamma$ satisfies condition~\eqref{item:intro:move_red} but does not satisfy condition~\eqref{item:intro:area} of \cref{thm:intro:move_red}. Alternatively, if $\Gamma$ has no isolated vertices, the assumption that $\Gamma$ has a perfect matching can be replaced with either one of the following assumptions:
\begin{itemize}
\item the Newton polygon of $\Gamma$ is not a single point, or
\item the number of black and white vertices in $\Gamma$ is the same;
\end{itemize}
%
see part~\ref{intro:Gamma_monogon} of \cref{thm:intro:vertical} below.
\end{remark}
\begin{remark} \label{rmk:intro:area}
The condition that $\Gamma$ has $2\operatorname{Area}(N)+\excess{\bfla}$ contractible faces in~\eqref{item:intro:area} is equivalent to the statement that $\Gamma$ has the minimal possible number of contractible faces among all graphs with weakly decorated Newton polygon $\Nwdec(\Gamma)$.
\end{remark}
%
%
%
%
%
%
%
%
%
\begin{example}
For the graphs $\Gamma_1,\Gamma_2,\Gamma_3$ shown in \cref{fig:move_red_vs_minimal}, the weakly decorated Newton polygons $\Nwdec(\Gamma_2),\Nwdec(\Gamma_3)$ are computed in \figref{fig:move_red_vs_minimal}(e--f). In particular, letting $\Nwdec(\Gamma_2)=(N,\bfla)$ and $\Nwdec(\Gamma_3)=(N,\bfla')$, we see that $\operatorname{Area}(N)=1$, $\excess{\bfla}=1$, and $\excess{\bfla'}=0$. This is consistent with \cref{thm:intro:move_red} since $\Gamma_2$ has $3$ faces, while $\Gamma_3$ has $2$ faces, all of which are contractible.
%
\end{example}
%
%
%
%
%
\subsection{Move-equivalence classes of move-reduced graphs}\label{sec:intro:move_eq}
In this section, each graph is assumed to be bipartite and to have a perfect matching. Let $\Gamma$ be a move-reduced graph with Newton polygon $N$. By \cref{thm:intro:move_red}, for $\e\in E(N)$, any two strands $\sa\neq\sa'$ in $\bm{S}^e(\Gamma)$ do not share vertices or edges. Thus, we have a natural cyclic ordering on $\bm{S}^e(\Gamma)$ given by the direction of the normal vector to $\e$ that points into the interior of $N$. Let $\cc^\e=(\ilen|[\sa]|)_{\sa\in\bm{S}^e(\Gamma)}$ be the corresponding cyclic composition of $\ilen|\e|$. We set $\bfcc=(\cc^\e)_{\e \in E(N)}$, and we refer to $\Ndec(\Gamma):=(N,\bfcc)$ as the \emph{strongly decorated Newton polygon} of $\Gamma$. The following result is shown in \cref{sec:proof:exists}.
\begin{proposition}\label{prop:intro:exists}
For any strongly decorated polygon $\Ndec$, there exists a move-reduced graph $\Gamma$ that admits a perfect matching and satisfies $\Ndec(\Gamma)=\Ndec$.
\end{proposition}
The moves \Msq--\Mres never change the homology of the strands and preserve the class of move-reduced graphs. Thus, if two move-reduced graphs $\Gamma,\Gamma'$ are move-equivalent then we have $\Ndec(\Gamma)=\Ndec(\Gamma')$. One is tempted to conjecture that the converse is also true, but that is not the case; for instance, the two graphs in \cref{fig:twsq} have the same strongly decorated Newton polygons, but they are not move-equivalent, since one graph is connected and the other one is not. See \cref{fig:two_big} for a more subtle example.
To remedy this issue, we make the following definition.
\begin{figure}
\def0.18\textwidth{0.15\textwidth}
\scalebox{0.975}{
\begin{tabular}{ccccc}
\includegraphics[width=0.18\textwidth]{figures/two_squares_1}
&
\includegraphics[width=0.18\textwidth]{figures/two_squares_2}
&
\includegraphics[width=0.18\textwidth]{figures/twsq1_strands}
&
\includegraphics[width=0.18\textwidth]{figures/twsq2_strands}
&
\begin{tikzpicture}[baseline=(Z.base)]
\coordinate(Z) at (0,-2);
\node(A) at (0,0) {\includegraphics[width=0.22\textwidth]{figures/twsq_newton}};
\end{tikzpicture}
\\
(a) Graph $\Gamma_1$. & (b) Graph $\Gamma_2$. & (c) Strands in $\Gamma_1$. & (d) Strands in $\Gamma_2$. & (e) $\Ndec(\Gamma_1)=\Ndec(\Gamma_2)$.
\end{tabular}
}
\caption{\label{fig:twsq} Two move-reduced graphs that are not move-equivalent but have the same strongly decorated Newton polygons. The graph $\Gamma_2$ has vertices of degree $2$ at the vertical boundaries of the rectangle.}
%
\end{figure}
\begin{definition}\label{dfn:clicks}
Let $\cc=(\cc_1,\cc_2,\dots,\cc_m)$ be a cyclic composition of $n=\cc_1+\cc_2+\dots+\cc_m$. Consider a partition
$\Ibm_{\cc}=\{I_1,I_2,\dots,I_m\}$ of ${\mathbb{Z}}/n{\mathbb{Z}}$ into cyclic intervals of size $|I_j|=\cc_j$ given by $I_1=[1,\cc_1]$, $I_2=[\cc_1+1,\cc_1+\cc_2]$, etc.
%
%
The \emph{rotation number $\operatorname{rot}(\cc)$} is the smallest integer $r\in[n]:=\{1,2,\dots,n\}$ such that $\sigma^r(\Ibm_{\cc})=\Ibm_{\cc}$, where $\sigma:{\mathbb{Z}}/n{\mathbb{Z}}\to{\mathbb{Z}}/n{\mathbb{Z}}$ is the map sending $i\mapsto i+1\pmod n$ for all $i$, and $\sigma(\Ibm_{\cc}):=\{\sigma(I_1),\sigma(I_2),\dots,\sigma(I_m)\}$.
%
\end{definition}
\noindent For example, $\operatorname{rot}((1,1,1,1,1,1))=1$, $\operatorname{rot}((2,1,2,1))=3$, and $\operatorname{rot}((2,2,1,1))=6$. We have $\operatorname{rot}((n))=n$ because by convention, we distinguish between cyclic intervals $[j,j+n-1]$ in ${\mathbb{Z}}/n{\mathbb{Z}}$ for different $j\in[n]$.
The \emph{rotation number} of a collection $\bfcc=(\cc^\e)_{\e\in E}$ of cyclic compositions is given by
\begin{equation}\label{eq:clicks_dfn}
\operatorname{d}(\bfcc):=\gcd\{\operatorname{rot}(\cc^\e)\mid \e\in E\}.
\end{equation}
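For concreteness, the rotation numbers above can be computed mechanically. The following Python sketch is ours and purely illustrative (the function names are ad hoc); it represents each cyclic interval by its starting point and length, so that, as in \cref{dfn:clicks}, the intervals $[j,j+n-1]$ are distinguished for different $j$.
\begin{verbatim}
from math import gcd
from functools import reduce

def rot(c):
    """Rotation number of a cyclic composition c = (c_1, ..., c_m) of n:
    the smallest r in {1, ..., n} with sigma^r(I_c) = I_c."""
    n = sum(c)
    intervals, start = set(), 0
    for part in c:
        intervals.add((start % n, part))   # (starting point, length)
        start += part
    for r in range(1, n + 1):
        if {((a + r) % n, ln) for (a, ln) in intervals} == intervals:
            return r

def d_of(compositions):
    """d(bfcc): gcd of the rotation numbers over all sides."""
    return reduce(gcd, (rot(c) for c in compositions))

# the values stated after Definition dfn:clicks
assert rot((1, 1, 1, 1, 1, 1)) == 1
assert rot((2, 1, 2, 1)) == 3
assert rot((2, 2, 1, 1)) == 6
assert rot((4,)) == 4
# two sides each labeled (2, 2), as in Example ex:intro:minv below
assert d_of([(2, 2), (2, 2)]) == 2
\end{verbatim}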
%
The following is our second main result.
\begin{theorem}\label{thm:intro:move_eq}
Let $\Ndec=(N,\bfcc)$ be a strongly decorated polygon. The set of move-reduced graphs $\Gamma$ satisfying $\Ndec(\Gamma)=\Ndec$ is a union of $\operatorname{d}(\bfcc)$ move-equivalence classes. Explicitly, two move-reduced graphs $\Gamma,\Gamma'$ are move-equivalent if and only if
\begin{equation*}%
(\Ndec(\Gamma),\mu(\Gamma))=(\Ndec(\Gamma'),\mu(\Gamma')),
\end{equation*}
%
where $\mu(\Gamma)\in{\mathbb{Z}}/\operatorname{d}(\bfcc){\mathbb{Z}}$ is the \emph{modular invariant} defined in \cref{sec:intro:minv}.
\end{theorem}
\subsection{Modular invariant}\label{sec:intro:minv}
We explain the construction of the modular invariant $\mu(\Gamma)$. Let $\Gamma$ be move-reduced and let $\Ndec(\Gamma)=(N,\bfcc)$ be its strongly decorated Newton polygon. Let $\e\in E(N)$ and set $r:=\operatorname{rot}(\cc^\e)$, $n:=|\cc^\e|=\ilen|\e|$. Thus, $r$ divides $n$. Let $\Regs^\e$ be the set of connected components of ${\mathbb{T}}\setminus \bigcup_{\sa\in\bm{S}^e(\Gamma)} \sa$, which we call \emph{$\e$-regions}. Construct a labeling $\gamma^\e:\Regs^\e\to{\mathbb{Z}}/n{\mathbb{Z}}$
%
so that for any segment of a strand $\sa\in\bm{S}^e(\Gamma)$ adjacent to $\e$-regions $F_-$ (resp., $F_+$) to the right (resp., left) of $\sa$, the labels $\gamma^\e(F_-),\gamma^\e(F_+)\in{\mathbb{Z}}/n{\mathbb{Z}}$ satisfy $\gamma^\e(F_+)\equiv \gamma^\e(F_-)+1\pmod n$.
\begin{figure}
\def0.18\textwidth{0.145\textwidth}
\begin{tabular}{cccccc}
\includegraphics[width=0.18\textwidth]{figures/twsq1_lab_red}
&
\includegraphics[width=0.18\textwidth]{figures/twsq1_lab_blue}
&
\includegraphics[width=0.18\textwidth]{figures/twsq1_lab_rb}
&
\includegraphics[width=0.18\textwidth]{figures/twsq2_lab_red}
&
\includegraphics[width=0.18\textwidth]{figures/twsq2_lab_blue}
&
\includegraphics[width=0.18\textwidth]{figures/twsq2_lab_rb}
\\
(a) $\gamma^{\er}$ for $\Gamma_1$. & (b) $\gamma^{\eb}$ for $\Gamma_1$. & (c) $\gamma$ for $\Gamma_1$. & (d) $\gamma^{\er}$ for $\Gamma_2$. & (e) $\gamma^{\eb}$ for $\Gamma_2$. & (f) $\gamma$ for $\Gamma_2$.
\end{tabular}
\caption{\label{fig:twsq_minv} Computing the modular invariants (\cref{sec:intro:minv}) of graphs $\Gamma_1$ and $\Gamma_2$ from \cref{fig:twsq}. See \cref{ex:intro:minv}.}
\end{figure}
Clearly, there are $n$ ways to choose a labeling $\gamma^\e$ that satisfies the above conditions. We shall choose a particular one as follows. The labeling $\gamma^\e$ induces a partition $\Ibm_{\gamma^\e}$ of ${\mathbb{Z}}/n{\mathbb{Z}}$ into cyclic intervals so that for each strand $\sa\in\bm{S}^e(\Gamma)$, the associated cyclic interval contains $\gamma^\e(F)$ for all $F\in\Regs^\e$ appearing immediately to the right of $\sa$; see \cref{fig:twsq_minv} and \cref{ex:intro:minv}. Now, recall that $\cc^\e$ is a cyclic composition. Of all the cyclic shifts of $\cc^\e$, let $\cc^\e=(\cc^\e_1,\cc^\e_2,\dots,\cc^\e_m)$ be the lexicographically maximal one, and let $\Ibm_{\cc^\e}$ be the associated partition of ${\mathbb{Z}}/n{\mathbb{Z}}$ into cyclic intervals from \cref{dfn:clicks}. We say that the labeling $\gamma^\e$ is \emph{lex-maximal} if $\Ibm_{\gamma^\e}=\Ibm_{\cc^\e}$. Since $\sigma^r(\Ibm_{\cc^\e})=\Ibm_{\cc^\e}$, we see that there are $n/r$ lex-maximal labelings $\gamma^\e$. Fix one such labeling and let $\bar\gamma^\e:\Regs^\e\to{\mathbb{Z}}/r{\mathbb{Z}}$ be obtained by taking the values of $\gamma^\e$ modulo $r$. Thus, $\bar\gamma^\e$ does not depend on the choice of $\gamma^\e$, and is an invariant of $\Gamma$.
Repeat the above procedure for all $\e\in E(N)$. Let $\bm{F}(\Gamma)$ be the set of faces of $\Gamma$. We will construct a labeling $\gamma:\bm{F}(\Gamma)\to{\mathbb{Z}}/d{\mathbb{Z}}$, where $d:=\operatorname{d}(\bfcc)$. For each face $F\in\bm{F}(\Gamma)$, we set $\gamma(F):=\sum_{\e\in E(N)} \bar\gamma^\e(F) \pmod d$. This is a well-defined element of ${\mathbb{Z}}/d{\mathbb{Z}}$ in view of~\eqref{eq:clicks_dfn}. Moreover, any two adjacent faces $F,F'$ of $\Gamma$ are separated by two strands going in the opposite directions, so $\gamma(F)=\gamma(F')$. In other words, the labeling $\gamma$ is constant. By definition, its value is the \emph{modular invariant} $\mu(\Gamma)\in {\mathbb{Z}}/d{\mathbb{Z}}$.
The moves \Msq--\Mres induce bijections between $\e$-regions. Since all the faces involved in \Msq--\Mres except the middle face in (M1) are in the same $\e$-regions, $\mu(\Gamma)$ is invariant under move-equivalence.
\begin{example}\label{ex:intro:minv}
Consider the graphs $\Gamma_1$ and $\Gamma_2$ from \cref{fig:twsq}. Let $\Ndec=(N,\bfcc)$ be their strongly decorated Newton polygon shown in \figref{fig:twsq}(e). Thus, $N$ is a line segment of length $4$, and let $\textcolor{red}{\e}=(4,0)$ and $\textcolor{blue}{\e'}=(-4,0)$ be the two edges of $N$. We have $\cc:=\cc^{\textcolor{red}{\e}}=\cc^{\textcolor{blue}{\e'}}=(2,2)$, and $\operatorname{rot}(\cc)=2$. Examples of lex-maximal labelings $\gamma^{\er}$ and $\gamma^{\eb}$ for $\Gamma_1$ and $\Gamma_2$ are shown in \figref{fig:twsq_minv}(a--b,d--e). The labeling $\gamma$ for $\Gamma_1$ and $\Gamma_2$ is obtained by taking the labeling $\gamma^{\er}+\gamma^{\eb}$ shown in \figref{fig:twsq_minv}(c,f) modulo $\operatorname{d}(\bfcc)=2$. We see that in fact $\gamma(F_1)= 0\in{\mathbb{Z}}/2{\mathbb{Z}}$ is even for each face $F_1$ of $\Gamma_1$, while $\gamma(F_2)= 1\in{\mathbb{Z}}/2{\mathbb{Z}}$ is odd for each face $F_2$ of $\Gamma_2$. Therefore, $\mu(\Gamma_1)=0\in{\mathbb{Z}}/2{\mathbb{Z}}$ and $\mu(\Gamma_2)=1\in{\mathbb{Z}}/2{\mathbb{Z}}$, which is consistent with \cref{thm:intro:move_eq} since the graphs $\Gamma_1$ and $\Gamma_2$ are not move-equivalent.
%
\end{example}
%
%
%
%
\subsection{Overview of the proof}
We shall proceed by relating bipartite graphs embedded in ${\mathbb{T}}$ to elements of the \emph{double affine symmetric group}, i.e., pairs of affine permutations. In \cref{sec:plabic,sec:cylinder}, we show the following result.
\begin{theorem}\label{thm:intro:vertical}
For any move-reduced graph $\Gamma$, exactly one of the following holds:
%
\begin{enumerate}[label=(\roman*)]
\item\label{intro:Gamma_monogon} $\Gamma$ has a single strand that is a simple zero-homology\xspace loop. In this case, $\Gamma$ has no perfect matchings and has a different number of black and white vertices.
\item\label{intro:Gamma_vertical} $\Gamma$ is move-equivalent to a graph $\Gamma'$ such that, for a suitable choice of the fundamental domain, each strand $\sa\in \bm{S}(\Gamma')$ with $[\sa]=(i,j)$ intersects the vertical line $x=0$ minimally, i.e., exactly $|i|$ times.
\end{enumerate}
\end{theorem}
%
%
%
%
%
%
%
%
%
%
%
%
%
\noindent In part~\ref{intro:Gamma_monogon}, a \emph{zero-homology\xspace loop} is a strand $\sa$ satisfying $[\sa]=0$, and a zero-homology\xspace loop $\sa$ is called \emph{simple} if the lift of $\sa$ to ${\mathbb{R}}^2$ under the covering map ${\mathbb{R}}^2\to{\mathbb{T}}$ is a simple (i.e., non self-intersecting) closed curve; see e.g. \figref{fig:parallelbigons}(a). In part~\ref{intro:Gamma_vertical}, choosing a fundamental domain corresponds to the standard $\operatorname{SL}_2({\mathbb{Z}})$-action on the Newton polygon of~$\Gamma$.
%
In \cref{sec:affine_plabic_fence}, we show that if~\ref{intro:Gamma_vertical} holds, then $\Gamma'$ can be put into a particular form called an \emph{affine plabic fence}. Such graphs correspond to shuffles of reduced words of two affine permutations on commuting sets of indices. In \cref{sec:affine-perm-cycl,sec:c-equiv-structure}, we study the associated conjugation problem for the affine symmetric group, relying on the results of~\cite{HN,Mar}. Finally, we complete the proof in Sections~\ref{sec:proof:exists}--\ref{sec:proof:move_eq}. %
%
\subsection{Previous results}
Our results specialize to~\cite[Theorem~2.5]{GK13} in the case of \emph{minimal graphs}, i.e., when no strand intersects itself in ${\mathbb{T}}$. This corresponds to all parts of ${\lambda}^\e$ and $\cc^\e$ being equal to $1$ for all $\e\in E(N)$.
The idea of relating bipartite graphs embedded in ${\mathbb{T}}$ to conjugation of double affine permutations is not new and appears in~\cite{FM,GSZ}. A discussion of graphs that are move-reduced but not minimal in the sense of~\cite{GK13}, and in particular the graph $\Gamma_2$ in \figref{fig:move_red_vs_minimal}(b), appears in~\cite[Section~8.3]{FM}.
In~\cite[Section~4.4]{GSZ}, the authors also consider the problem of classifying move-reduced graphs and their move-equivalence classes. They associate a weakly decorated Newton polygon to each graph and prove a lemma classifying conjugacy classes in the double affine symmetric group. However, this classification does not imply a classification of move-reduced bipartite graphs and their move-equivalence classes. The reason is that the moves \Msq--\Mres correspond only to particular kinds of conjugation in the affine symmetric group (see \cref{dfn:c_equiv}), not to arbitrary conjugation.
%
%
This discrepancy leads us to study strongly decorated Newton polygons and modular invariants. We also note that in~\cite[Section~4.4]{GSZ}, the authors rely on \cref{thm:intro:vertical} and refer to~\cite{FM} for its proof; however, the argument in~\cite[Section~4.1]{FM} only applies to graphs whose strands go monotonically from left to right.
%
%
%
%
\subsection*{Acknowledgments}
We thank Xuhua He for bringing the paper~\cite{Mar} to our attention. We also thank Timothée Marquis for discussions related to~\cite{Mar} and for updating and extending~\cite{Mar} to the generality suited for our needs (cf. Remarks~\ref{rmk:alcove} and~\ref{rmk:Mar}). The first author is grateful to Thomas Lam for ideas that originated during the development of~\cite{GL_cat_combin}, which were influential for our overall proof strategy and specifically for the arguments in \cref{sec:affine-perm-cycl,sec:c-equiv-structure}.
\section{Plabic graphs and triple-crossing diagrams}\label{sec:plabic}
We discuss the properties of bipartite graphs embedded in ${\mathbb{T}}$ and explain how to recast them in the equivalent languages of \emph{plabic graphs}~\cite{Postnikov} and \emph{triple-crossing diagrams}~\cite{Thurston}.
\subsection{Triple-crossing diagrams in the disk} \label{sec:tcddisk}
The results of this section were independently discovered by \cite{Postnikov} and \cite{Thurston}. We state the results in terms of Thurston's notion of triple-crossing diagrams.
\begin{definition}\label{dfn:tcd}
A \textit{triple-crossing diagram $D$ in the disk} ${\mathbb{D}}:=[0,1]^2$ is a smooth immersion of a disjoint union of oriented circles and closed intervals into ${\mathbb{D}}$, defined up to isotopy. The image of a connected component is called a \textit{strand}. The image of a circle is called a \textit{loop} and the image of a closed interval is called an \textit{arc}. The immersion is required to satisfy the following conditions:
\begin{enumerate}
\item Three strands cross at each intersection point. We call these intersection points \textit{triple crossings}.
\item The endpoints of the arcs are distinct points on the boundary of ${\mathbb{D}}$, and there are no other points of $D$ on the boundary of ${\mathbb{D}}$.
\item The orientations of the strands induce consistent orientations on the boundaries of the faces of $D$.
\end{enumerate}
\end{definition}
Here, a \emph{face} of $D$ is a connected component of ${\mathbb{D}}\setminus D$. Property (3) implies that around every triple crossing, the orientations of strands alternate in and out, and that the orientations of the boundary points alternate in and out. If $D$ has $n$ arcs, then it has $2n$ boundary points, and the connectivity of the arcs induces a matching of the in-boundary points with the out-boundary points, called the \textit{trip permutation} in \cite{Postnikov}.
\begin{definition}
A triple-crossing diagram $D$ in the disk ${\mathbb{D}}$ is said to be \textit{reduced} if it has the smallest number of triple crossings among all triple-crossing diagrams with the same boundary matching.
\end{definition}
\begin{definition}\label{dfn:tcd_move_eq_red}
Two triple-crossing diagrams are said to be \textit{move-equivalent} if they are related by move (M1)$'$ in Figure \ref{fig:localmovetcd}. A triple-crossing diagram $D$ is called \textit{move-reduced} if it is not move-equivalent to a triple-crossing diagram $D'$ to which one of the reduction moves (R1)$'-$(R2)$'$ can be applied.
\end{definition}
\begin{remark}
Postnikov's reduction move (R1)$'$ in Figure \ref{fig:localmovetcd} differs from Thurston's $1-0$ move (see Figure \ref{fig:thurston10}). Postnikov's move will be more important for our eventual goal of understanding the behavior of the dimer model under taking limits, since it preserves dimer partition functions (cf. \cite[Theorem 12.1]{Postnikov}). On the other hand, Thurston's move preserves the boundary matching. It appears naturally in connection with double-affine permutations (see~\cref{sec:affine_plabic_fence} and~\cref{rem:postnikov10}), and will also be used in the proof of~\cref{thm:intro:move_red}. %
\end{remark}
\begin{figure}
\def0.18\textwidth{0.23\textwidth}
\def\hspace{-0.25in}{\qquad\qquad}
\begin{tabular}{ccccc}
\includegraphics[width=0.18\textwidth]{figures/22move}
& \hspace{-0.25in} &
\includegraphics[width=0.18\textwidth]{figures/10move}
& \hspace{-0.25in} &
\includegraphics[width=0.18\textwidth]{figures/loopmove}
\\
(M1)$'$. & & (R1)$'$. & & (R2)$'$.
\end{tabular}
\caption{\label{fig:localmovetcd} Equivalence move (M1)$'$ and reduction moves (R1)$'-$(R2)$'$ for triple-crossing diagrams. Each move has two possible strand orientations. (R2)$'$ removes a strand that is a simple loop. }
\end{figure}
A \textit{monogon} in ${\mathbb{D}}$ is a strand with a self-intersection. A \textit{parallel bigon} in ${\mathbb{D}}$ is a pair of strands with two intersection points $x\neq y$, with both strands oriented from $x$ to $y$.
\begin{theorem}[{\cite[Theorem 13.2 and Lemma 13.6]{Postnikov} and \cite[Theorem 7]{Thurston}}] \label{thm:diskminimal} Let $D$ be a triple-crossing diagram in ${\mathbb{D}}$. Then, the following are equivalent:
\begin{enumerate}
\item $D$ is move-reduced;
\item $D$ is reduced;
\item $D$ contains no loops, monogons, or parallel bigons.
\end{enumerate}
\end{theorem}
\begin{theorem}[{\cite[Corollary 14.7]{Postnikov} and \cite[Theorem 3]{Thurston}}]\label{thm:disk}
All $n!$ matchings of in- and out-boundary points are realizable by move-reduced triple-crossing diagrams.
\end{theorem}
\begin{theorem} [{\cite[Theorem 13.4]{Postnikov} and \cite[Theorem 5]{Thurston}}]
Any two move-reduced triple-crossing diagrams with the same boundary matching are move-equivalent.
\end{theorem}
%
Each pair of in- and out-endpoints in the matching divides the boundary of ${\mathbb{D}}$ into two intervals. Suppose that $I$ is a minimal such interval with respect to inclusion. We say that a strand $\sa$ whose endpoints are the endpoints of $I$ is \textit{boundary-parallel} if there are no triple crossings within the region between $\sa$ and $I$.
\begin{proposition}[{\cite[Proof of Theorem 13.4 and Figure 13.4]{Postnikov} and \cite[Lemma 12]{Thurston}}] \label{prop:parallel}
Suppose $I$ is an inclusion-minimal interval of the boundary matching of a move-reduced triple-crossing diagram $D$, and let $\sa$ be the strand in $D$ whose endpoints are the endpoints of $I$. Then, $D$ is move-equivalent to a triple-crossing diagram $D'$ in which $\sa$ is boundary-parallel.
\end{proposition}
\subsection{Plabic graphs and triple-crossing diagrams on the torus} \label{sec:plabictcd}
A \textit{plabic graph} $\Gamma=(B \sqcup W,E)$ on a torus ${\mathbb{T}}$ is a (finite) graph embedded in ${\mathbb{T}}$ such that:
\begin{enumerate}
\item The vertices of $\Gamma$ are colored black or white. The set of black vertices (resp., white vertices) is denoted by $B$ (resp., $W$).
\item The set of edges of $\Gamma$ is denoted by $E$. Each edge is incident to two vertices of opposite colors or incident to two white vertices.
\item The black vertices are trivalent.
\end{enumerate}
We identify plabic graphs that are related by contracting an edge incident to two distinct white vertices into a single white vertex. Therefore, we can assume that each white-white edge is a loop based at a white vertex.
\begin{remark}
Our definition of a plabic graph is more restrictive than that of~\cite{Postnikov}. Such plabic graphs were previously studied under the name \emph{white-partite}~\cite[Definition~7.14]{GPW} or \emph{black-trivalent}~\cite[Definition~8.1, Remark~8.2]{G_crit}.
\end{remark}
%
\begin{definition}\label{dfn:tcd_T}
A \textit{triple-crossing diagram $D$ on the torus} ${\mathbb{T}}$ is a smooth immersion of a disjoint union of oriented circles into ${\mathbb{T}}$. The image of a circle is called a \textit{strand}, and the set of strands of $D$ is denoted $\SD$. The immersion is required to satisfy the following conditions:
\begin{enumerate}
\item\label{TCD1} Three strands cross at each intersection point. We call these intersection points \textit{triple crossings}.
\item\label{TCD2} The orientations of the strands induce consistent orientations on the boundaries of the faces of $D$.
\end{enumerate}
\end{definition}
Similarly to \cref{dfn:tcd}, a \emph{face} of $D$ is a connected component of ${\mathbb{T}}\setminus D$. The property~\eqref{TCD2} implies that around every triple crossing, the orientations of the strands alternate in and out. (However, the converse need not hold if $D$ has a non-contractible face.) Each strand $\sa$ in $D$ determines a homology class $[\sa] \in H_1({\mathbb{T}},{\mathbb{Z}}) \cong {\mathbb{Z}}^2$.
\begin{lemma}\label{lemma::sumhomiszero}
The sum of the homology classes of all strands is $0$ in $H_1({\mathbb{T}},{\mathbb{Z}})$.
\end{lemma}
\begin{proof}
Let $R_+$ (resp., $R_-$) denote the union of the faces of $D$ such that the induced orientation is counterclockwise (resp., clockwise). Then, by property (2), we have that $\sum_{\sa \in \SD}\sa=\partial R_+ = -\partial R_-$ as $1$-cycles in ${\mathbb{T}}$, and $\overline{R}_+ \cup \overline{R}_-={\mathbb{T}}$. Therefore,
\[
2 \sum_{\sa \in \SD}[\sa]=[\partial R_+]-[\partial R_-]=[\partial {\mathbb{T}}]=0. \qedhere
\]
\end{proof}
\begin{figure}
\def0.18\textwidth{0.1\textwidth}
\def\hspace{-0.25in}{\hspace{0.1in}}
\begin{tabular}{cccccc}
\includegraphics[width=0.18\textwidth]{figures/tcdplabicb}
& \qquad &
\hspace{-0.25in}
\includegraphics[width=0.18\textwidth]{figures/tcdplabic0}
\hspace{-0.25in}
&
\hspace{-0.25in}
\includegraphics[width=0.18\textwidth]{figures/tcdplabicw1}
\hspace{-0.25in}
&
\hspace{-0.25in}
\includegraphics[width=0.18\textwidth]{figures/tcdplabicw2}
\hspace{-0.25in}
&
\hspace{-0.25in}
\includegraphics[width=0.18\textwidth]{figures/tcdplabicw4}
\hspace{-0.25in}
\\
%
(a) Black vertex (degree-three).& &\multicolumn{4}{c}{(b) White vertex (arbitrary degree).} %
\end{tabular}
\caption{The procedure to convert plabic graphs into triple-crossing diagrams and vice versa. }\label{fig:tcd=plabic}
\end{figure}
\begin{remark} \label{rmk:translate1}
A triple-crossing diagram can be converted into a plabic graph and vice versa using the local procedure shown in Figure \ref{fig:tcd=plabic}. The notions of \emph{move-reduced} and \emph{move-equivalent} plabic graphs are given by \cref{dfn:tcd_move_eq_red}.
\end{remark}
\begin{remark}\label{rmk:translate2}
If $\Gamma$ is a bipartite graph in ${\mathbb{T}}$ with all black vertices of degree at least three then one can convert $\Gamma$ into a plabic graph by applying a sequence of black uncontraction moves~(M2)\xspace. When $\Gamma$ has black vertices of degree zero, one, or two,\footnote{This applies especially to the case of a degree-two black vertex connected to the same white vertex by both edges.} extra care needs to be taken; see \cref{sec:bipartite-to-tcd}. Conversely, any plabic graph can be converted into a bipartite graph by placing a black vertex of degree two in the middle of each white-white edge.
The notions of weakly/strongly decorated Newton polygons %
and modular invariants introduced in \cref{sec:intro:move_reduced} for bipartite graphs extend to plabic graphs in an obvious way. Using \cref{rmk:translate1}, we can transfer them to triple-crossing diagrams.
\end{remark}
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
In what follows, we will prove the versions of our main results translated into the language of triple-crossing diagrams and plabic graphs. The proof for bipartite graphs will then follow from the construction in \cref{sec:bipartite-to-tcd}. For instance, in \cref{sec:proof:move_red}, we will prove the following version of \cref{thm:intro:move_red}.
%
\begin{theorem} \label{thm:tcdmove_red}
Let $D$ be a triple-crossing diagram with weakly decorated polygon $\Nwdec=(N,\bfla)$. Assume that $N$ is not a single point. Then, the following are equivalent:
\begin{enumerate}
\item\label{item:D:move_red} $D$ is move-reduced;
\item\label{item:D:area} $D$ has no connected components that are contractible in ${\mathbb{T}}$ and contains $2\operatorname{Area}(N)+\excess{\bfla}$ triple crossings.
\end{enumerate}
Similarly to \cref{rmk:intro:area}, $2\operatorname{Area}(N)+\excess{\bfla}$ is the minimal possible number of triple crossings for a triple-crossing diagram with weakly decorated Newton polygon $(N,\bfla)$.
\end{theorem}
Let $\pi:{\mathbb{R}}^2 \rightarrow {\mathbb{T}}$ denote the universal covering map. The following result will be proved in~\cref{sec:proof:propertiestcdmovered}.
\begin{proposition} \label{prop:propertiesofmoveredtcd}
Let $D$ be a move-reduced triple-crossing diagram with Newton polygon~$N$.
\begin{enumerate}
\item\label{prop:appB_propertiesofmoveredtcd1} The preimage $\tilde D:=\pi^{-1}(D)$ contains no closed loops, and any lift $\tilde S$ of a strand $\sa \in \SD$ does not intersect itself;
\item\label{prop:appB_propertiesofmoveredtcd2} Any strand $\sa \in \SD$ intersects itself $\ilen|[\sa]|-1$ times;
\item\label{prop:appB_propertiesofmoveredtcd3} Any two distinct parallel strands $\sa,\sa' \in \SD$ do not intersect, and there is no face of $D$ that contains portions of both strands in its boundary.
\end{enumerate}
\end{proposition}
By part~\eqref{prop:appB_propertiesofmoveredtcd3} of~\cref{prop:propertiesofmoveredtcd}, there is a natural cyclic order on each set of parallel strands, so the strongly decorated Newton polygon $\Ndec$ is well-defined.
\section{Reduction to the cylinder}\label{sec:cylinder}
The goal of this section is to prove the triple-crossing diagram version of \cref{thm:intro:vertical}.
%
Consider a move-reduced triple-crossing diagram $D$ on the torus ${\mathbb{T}}$. Let $\rectangle$ be a fundamental rectangle for ${\mathbb{T}}$, and let $u, d, l, \r$ denote the up, down, left and right sides of $\rectangle$, respectively. Identifying the $u$ and $d$ sides, we get a cylinder ${\mathbb{A}}$, and further identifying the $l$ and $\r$ sides, we get a torus ${\mathbb{T}}$. We have quotient maps $\rectangle \rightarrow {\mathbb{A}} \rightarrow {\mathbb{T}}$. The images of the $u$ and $d$ sides in ${\mathbb{A}}$ or in ${\mathbb{T}}$ coincide and are referred to as the \emph{$u-d$ side}. Similarly, the images of the $l$ and $\r$ sides in ${\mathbb{T}}$ are referred to as the \emph{$l-\r$ side}.
%
We say that triple-crossing diagrams $D$ and $D'$ are \textit{isotopic in ${\mathbb{T}}$} if there is an ambient isotopy of ${\mathbb{T}}$ taking $D$ to $D'$. When applying such isotopies, we fix the fundamental rectangle $\rectangle$. Using an isotopy in ${\mathbb{T}}$ if necessary, we assume that the intersections of strands with the sides of $\rectangle$ are transverse. A strand $\sa$ with homology class $(i,j)$ must intersect the $l-\r$ side at least $|i|$ times and the $u-d$ side at least $|j|$ times. The preimage of a strand under the map ${\mathbb{A}} \rightarrow {\mathbb{T}}$ is either a union of arcs with endpoints on the boundary of ${\mathbb{A}}$ or a closed loop in ${\mathbb{A}}$.
\subsection{Pushing strands through the boundary}
\begin{figure}
\def0.18\textwidth{0.3\textwidth}
\def0.4909090909\textwidth{0.4909090909\textwidth}
\begin{tabular}{ccc}
\includegraphics[width=0.18\textwidth]{figures/movep}
& \qquad \qquad &
\includegraphics[width=0.4909090909\textwidth]{figures/movet}\\
(a) Move (P). && (b) Move (T).
\end{tabular}
\caption{The move (P), pushing a boundary-parallel strand past the $d$-side of $\rectangle$, removing its intersection points with the $d$-side, and the move (T), interchanging the relative order of the endpoints of the red and blue strands along the $d$-side of $\rectangle$, thereby removing the triangular region bounded by the strands and the $d$-side of $\rectangle$.} \label{fig:movePT}
\end{figure}
%
\begin{lemma} \label{lemma:movep}
Suppose we have a strand $\sa$ in $\rectangle$ with both endpoints on a side $s$ of $\rectangle$. Then, using moves and isotopy
%
in ${\mathbb{T}}$, we can remove the endpoints of $\sa$ in $s$ without increasing the number of intersections of any other strands with the boundary of $\rectangle$.
\end{lemma}
\begin{proof}
Let $I$ denote the interval in $s$ between the endpoints of $\sa$.
Suppose $I$ is minimal with respect to inclusion among all intervals on the boundary of $\rectangle$ between endpoints of strands. Using Proposition \ref{prop:parallel}, we make $\sa$ boundary-parallel, and then apply an isotopy in ${\mathbb{T}}$ pushing the strand $\sa$ past the $s$-side of $\rectangle$
(\figref{fig:movePT}(a)).
If $I$ is not minimal, we use induction on the number of intervals contained in $I$. Suppose $I'$ is an inclusion-minimal interval contained in $I$. Using the above procedure, we can remove the endpoints of $I'$ and thereby reduce the number of intervals contained in $I$.
\end{proof}
\begin{definition}
We call the above procedure \textit{move (P)}; see \figref{fig:movePT}(a).
\end{definition}
%
\begin{lemma}[{\cite[Figure~12]{GK13}}] \label{lemma:gk}
Suppose we have a pair of strands in $\rectangle$ that have consecutive in- or out-endpoints on a side $s$ of $\rectangle$, and moreover, suppose that these two strands cross in $\rectangle$. Then, the relative order of the two endpoints along $s$ can be reversed using moves and isotopy in ${\mathbb{T}}$ without increasing the number of intersections of any other strands with the boundary of $\rectangle$.
\end{lemma}
We remove the consecutiveness assumption from Lemma \ref{lemma:gk}.
\begin{lemma} \label{lemma:movet}
Suppose we have a pair $(\sa, \sa')$ of strands with endpoints on a side $s$ of $\rectangle$ with the same orientation (i.e., both in or both out), and suppose that $\sa,\sa'$ cross in $\rectangle$. Then, the relative order of the endpoints of $\sa,\sa'$ in $s$ can be reversed using moves and isotopy in ${\mathbb{T}}$, thereby removing the triangular region bounded by $\sa,\sa'$ and the side $s$ without increasing the number of intersections of any other strands with the boundary of~$\rectangle$.
\end{lemma}
\begin{proof}
Assume without loss of generality that both $\sa,\sa'$ have an out-endpoint in $s$. Let $I$ be the interval in $s$ between the endpoints of $\sa,\sa'$. Use move (P) to remove any strands that have both endpoints in $I$. Then, any strand with an out-endpoint in $I$ must cross at least one of $\sa,\sa'$. Let $\ell$ be the number of crossings formed by pairs of strands having an out-endpoint in $I$. Repeatedly using Lemma \ref{lemma:gk}, we can decrease $\ell$ until it becomes equal to $1$, in which case we can swap the endpoints of $\sa,\sa'$.
\end{proof}
\begin{definition}
We call the procedure in Lemma \ref{lemma:movet} \textit{move (T)}; see \figref{fig:movePT}(b).
\end{definition}
\subsection{Affine matchings}\label{sec:affine-matchings}
Fix $n\geq1$. Consider an infinite vertical strip $\mathbb{S}$ with points labeled
\begin{equation}\label{eq:label_strip}
\dots, A_{\ovl0},A_1,A_{\ovl1},A_2,A_{\ovl2},\dots \quad\text{and}\quad \dots,B_{\ovl0},B_1,B_{\ovl1},B_2,B_{\ovl2},\dots
\end{equation}
on the left and the right boundary of $\mathbb{S}$,
%
so that the points $A_i,B_i$ are at the same height, and the points $A_{\ovl i},B_{\ovl i}$ are at the same height, for each $i\in{\mathbb{Z}}$. Let $A:=\{A_i\mid i\in{\mathbb{Z}}\}$, $\ovl A:=\{A_{\ovl i}\mid i\in{\mathbb{Z}}\}$, $B:=\{B_i\mid i\in{\mathbb{Z}}\}$, $\ovl B:=\{B_{\ovl i}\mid i\in{\mathbb{Z}}\}$.
\begin{definition}
An \emph{affine matching} is a bijection $\am: A \sqcup \ovl B \to \ovl A \sqcup B$ such that $\am(A_{i+n})=\am(A_i)+n$ and $\am(B_{\ovl{i+n}})=\am(B_{\ovl i})+n$ for all $i \in {\mathbb{Z}}$.
\end{definition}
\noindent This notion is closely related to the classical notion of \emph{affine permutations} discussed in \cref{sec:aff_perm_backgr}. An affine matching is represented by drawing an arrow from $x$ to $\am(x)$ inside $\mathbb{S}$ for all $x\in A\sqcup\ovl B$.
%
A triple-crossing diagram $D$ in ${\mathbb{T}}$ gives rise to an affine matching $\am_D$ as follows. Let $\rectangle$ be a fundamental rectangle. Using an $\operatorname{SL}_2({\mathbb{Z}})$ transformation, we can assume that there are no strands with homology classes in $({\mathbb{Z}} \times \{0\}) \cup (\{0\} \times {\mathbb{Z}})$ other than zero-homology\xspace loops. Let $n$ denote the number of intersection points of strands with the $l-\r$ side. Let $\mathbb S$ denote the infinite vertical strip that is the universal cover of ${\mathbb{A}}$. Then, $\mathbb S$ consists of ${\mathbb{Z}}$-many copies of $\rectangle$ glued along the $u-d$ sides, which we label $\dots, \rectangle_{-1}, \rectangle_0, \rectangle_{1},\dots$ from bottom to top. Applying an isotopy in ${\mathbb{T}}$, we may assume that the bottom-most intersection point of a strand with the left side of $\rectangle$ is oriented in.
%
Label the intersection points of strands with the boundary of $\mathbb{S}$ as in~\eqref{eq:label_strip}. Thus, the points in $A \sqcup \ovl B$ are in-endpoints and the points in $\ovl A \sqcup B$ are out-endpoints.
%
The connectivity of strands in $\mathbb{S}$ determines an affine matching $\am_D$, which, moreover, has total signed number of crossings through any horizontal line equal to $0$.
\begin{remark}\label{rmk:strand_word}
Each strand $\sa$ in $\mathbb S$ determines a word $w_{\sa}$ in the alphabet $\{u,d,l,\r\}$ which records the crossings of $\sa$ with the sides of $\rectangle$. Using move (P), we can assume that there are no occurrences of $u d$ or $d u$ in $w_{\sa}$; thus, we have $w_{\sa}=xy^kz$ for some $x,z\in\{l,\r\}$, $y\in\{u,d\}$, and $k\geq 0$. For a strand $\sa$ such that $w_{\sa}=xy^kz$, we denote by $\sa_1,\dots,\sa_{k+1}$ the corresponding strands in $\rectangle$.%
\end{remark}
\begin{lemma}\label{lemma:cross}
If the strands $\sa$ and $\sb$ emanating from $A_i$ and $A_{i+1}$ cross in $\mathbb{S}$, then we can swap their endpoints $A_i$ and $A_{i+1}$ using moves and isotopy in ${\mathbb{T}}$ without increasing the number of intersections of any other strands with the boundary of $\mathbb A$.
\end{lemma}
\begin{proof}
By translating the fundamental rectangle, we can assume that $i=1$. Without loss of generality, we can assume that $w_\sa= \r u^k z$ where $k \geq 0$ and $z\in\{l,\r\}$. If $w_\sb=\r \r$ or $w_\sb=\r d v$ for some word $v$, then the segments $\sa_1$ and $\sb_1$ cross in $\rectangle$ and we use Lemma \ref{lemma:movet}. Suppose $w_{\sb}=\r u^m z$ with $m\geq0$ and $z \in \{l,\r\}$. Consider a crossing of $\sa$ and $\sb$ in $\mathbb S$. If this crossing belongs to $\sa_j$ then it must also belong to $\sb_j$. Let $j\geq1$ be the minimal index such that $\sa_j$ crosses $\sb_j$. If $j\geq2$, then applying move (T) at the $u-d$ side, we can swap the bottom endpoints of $\sa_j$ and $\sb_j$ so that the strands $\sa_{j-1},\sb_{j-1}$ would cross. We continue this process until $j=1$, in which case we apply \cref{lemma:movet} at the $l-\r$ side.
%
%
%
\end{proof}
\subsection{Proof of Theorem~\ref{thm:intro:vertical}} We are now ready to prove~Theorem~\ref{thm:intro:vertical}.
\begin{lemma}
The number of intersections of strands in $D$ with the sides of ${\mathbb{A}}$ can be made either the minimum possible (i.e., equal to $\sum_{\sa\in\SD} |i|$, where $[\sa]=(i,j)$) or equal to $2$ using moves and isotopy in ${\mathbb{T}}$.
\end{lemma}
\begin{proof}
Consider the affine matching $\am_D$, and choose a strand $\sa$ from $A_i$ to $A_{\ovl{j}}$ with the smallest value of $\operatorname{dist}(A_i,A_{\ovl j})$. Assume $i<j$. By minimality of $\operatorname{dist}(A_i,A_{\ovl j})$, the strand $\sb$ emanating from $A_{i+1}$ must cross the strand $\sa$. By Lemma \ref{lemma:cross}, we can swap the endpoints $A_i$ and $A_{i+1}$, decreasing $\operatorname{dist}(A_i,A_{\ovl j})$. Eventually, we force $\operatorname{dist}(A_i,A_{\ovl j})$ to be less than the height of $\rectangle$, in which case we apply move (P) and decrease $n$. We proceed until either $n=1$ or there are no arcs from $A_i$ to $A_{\ovl j}$, in which case there will also be no arcs from $B_{\ovl i}$ to $B_j$ (for any $i,j\in{\mathbb{Z}}$).
\end{proof}
We now study the case when $n=1$. Let $\am$ be an affine matching with $n=1$ such that the total signed number of crossings through any horizontal line is equal to $0$. Then, either:
\begin{enumerate}
\item\label{am1} $\am(A_1)=B_{{k+1}}$ and $\am(B_{\ovl1})=A_{\ovl{-k+1}}$ for some $k \in {\mathbb{Z}}$; or
\item\label{am2} $\am(A_1)=A_{\ovl{{k+1}}}$ and $\am(B_{\ovl{1}})=B_{-k+1}$ for some $k \in {\mathbb{Z}}$.
\end{enumerate}
Therefore, if $D$ is a triple-crossing diagram move-reduced in ${\mathbb{D}}$ such that $\am_D$ satisfies~\eqref{am1}, then the number of intersections of strands with the sides of ${\mathbb{A}}$ is minimal and equal to $2$. Suppose that $\am_D$ satisfies~\eqref{am2}. If $k=0$, we can use move (P) to remove the two intersection points, so the number of intersections of strands with sides of ${\mathbb{A}}$ is minimal and equal to $0$. However, if $k \neq 0$, the number of intersections of strands with the sides of ${\mathbb{A}}$ is not minimal (since we have $\sum_{\sa\in\SD} |i|=0$), and we call such a triple-crossing diagram \textit{exceptional}. In this case, $D$ consists of a single strand $\sa$ in ${\mathbb{T}}$ that is a zero-homology\xspace loop (see~\figref{fig:parallelbigons}(a) for the associated bipartite graph when $k=1$). It is not hard to see that the strand $\sa$ is simple, that is, lifts to a non self-intersecting closed curve in ${\mathbb{R}}^2$. This is case~\ref{intro:Gamma_monogon} of \cref{thm:intro:vertical}. In order to complete the proof, we need to show that in this case, the associated bipartite graph $\Gamma$ has no perfect matchings.
%
%
%
%
\begin{proposition}\label{prop:monogon}
Let $\Gamma$ be a move-reduced bipartite graph in ${\mathbb{T}}$. Suppose that $\Gamma$ has a single strand which is a simple zero-homology\xspace loop. Then $\Gamma$ has a different number of black and white vertices, and, in particular, has no perfect matchings.
\end{proposition}
\begin{proof}
Given a closed immersed curve $\rho:S^1\to{\mathbb{R}}^2$ with non-vanishing differential, we let $\wind\rho\in{\mathbb{Z}}$ denote its \emph{winding number}, which is the number of counterclockwise turns made by the tangent vector of $\rho$. For a collection ${\bm\rho}$ of such curves, we let $\wind{\bm\rho}:=\sum_{\rho\in{\bm\rho}} \wind\rho$ denote their total winding number. One can check that the total winding number $\wind{\bm\rho}$ is invariant under the skein relation
\begin{equation}\label{eq:skein}
\begin{tikzpicture}[scale=0.4,baseline=(Z.base)]
\coordinate(Z) at (0,0);
\def1pt{1pt}
\def1.414{1.414}
\fill[black!5] (0,0) circle (1.414);
\draw[line width=1pt,->,>={latex}] (-1,-1)--(1,1);
\draw[line width=1pt,->,>={latex}] (1,-1)--(-1,1);
\node[scale=1] (A) at (2.5,0) {$\to$};
\begin{scope}[xshift=5cm]
\fill[black!5] (0,0) circle (1.414);
\def5{5}
\draw[line width=1pt,->,>={latex},rounded corners=5] (-1,-1)--(0,0)--(-1,1);
\draw[line width=1pt,->,>={latex},rounded corners=5] (1,-1)--(0,0)--(1,1);
\end{scope}
\end{tikzpicture}.
\end{equation}
Let ${\tilde\Gamma}$ be the lift of $\Gamma$ to the universal cover ${\mathbb{R}}^2$ of ${\mathbb{T}}={\mathbb{R}}^2/{\mathbb{Z}}^2$. Let $\sa$ be the unique strand of $\Gamma$, and let ${\tilde\sa}$ be some lift of $\sa$ to ${\mathbb{R}}^2$. Thus, ${\tilde\sa}$ is a simple closed curve, and therefore $\wind{\tilde\sa}=\pm1$, depending on whether ${\tilde\sa}$ is oriented counterclockwise or clockwise. Any two lifts of $\sa$ differ by a shift in ${\mathbb{Z}}^2$. Let $N\gg1$ be a large positive integer, and let ${\bm\rho}$ be the collection of all ${\mathbb{Z}}^2$-shifts of ${\tilde\sa}$ that are contained inside the square $[0,N]^2\subset{\mathbb{R}}^2$. There are $\Omega(N^2)$ such shifts, and therefore $\wind{\bm\rho}=\pm\Omega(N^2)$. On the other hand, resolving all crossings in ${\bm\rho}$ using the skein relation~\eqref{eq:skein}, we obtain a collection ${\bm\rho}'$ of simple closed curves satisfying $\wind{\bm\rho}=\wind{{\bm\rho}'}$. Each of these curves will contain a single vertex of ${\tilde\Gamma}$ inside of it. Moreover, if the vertex of ${\tilde\Gamma}$ inside $\rho'\in{\bm\rho}'$ is black (resp., white), then $\rho'$ is oriented counterclockwise (resp., clockwise). Therefore, the difference between the numbers of black and white vertices of ${\tilde\Gamma}$ contained inside $[0,N]^2$ is of size $\Omega(N^2)$. This implies that $\Gamma$ must have a different number of black and white vertices.
\end{proof}
%
%
\section{Affine permutations, cycles, and slopes}\label{sec:affine-perm-cycl}
As we will explain in \cref{sec:affine_plabic_fence}, \cref{thm:intro:vertical} allows one to recast bipartite graphs embedded in ${\mathbb{T}}$ and their moves into certain conjugation moves on pairs of affine permutations. In this and the next section, we develop the properties of affine permutations needed to complete the proofs of our main results.
Our proof strategy is inspired by that of~\cite{Mar}. The reader familiar with the theory of affine Coxeter groups and their reflection representations is encouraged to consult Remarks~\ref{rmk:alcove} and~\ref{rmk:Mar}.%
\subsection{Background and notation}\label{sec:aff_perm_backgr}
Let $n\geq1$ and recall that $[n]:=\{1,2,\dots,n\}$.
%
An \emph{affine permutation} is a bijection $f:{\mathbb{Z}}\to{\mathbb{Z}}$ satisfying $f(i+n)=f(i)+n$ for all $i\in{\mathbb{Z}}$. The group of affine permutations is denoted $\Saff_n$ (where the group operation is given by composition of maps ${\mathbb{Z}}\to{\mathbb{Z}}$). For $f\in\Saff_n$, set
\begin{equation*}%
\operatorname{n}(f):=n,\quad \operatorname{k}(f):=\frac1n\sum_{i\in[n]} (f(i)-i), \quad\text{and}\quad \operatorname{d}(f):=\gcd(\operatorname{k}(f),\operatorname{n}(f)).
\end{equation*}
It is known (see \cref{rmk:kopf} below) that $\operatorname{k}(f)$ is always an integer. We have $\Saff_n=\bigsqcup_{k\in{\mathbb{Z}}} \Saff^{(k)}_n$, where $\Saff^{(k)}_n:=\{f\in\Saff_n\mid \operatorname{k}(f)=k\}$. For $f\in\Saff_n$, let $\bar f\inS_n$ be the unique permutation (i.e., bijection $[n]\to[n]$) satisfying $\bar f(i)\equiv f(i)\pmod n$ for all $i\in[n]$. For $k\in{\mathbb{Z}}$, we denote by $f_{k,n}\in\Saff^{(k)}_n$ the affine permutation sending $i\mapsto i+k$ for all $i\in{\mathbb{Z}}$. The affine permutation $f$ can be written in \emph{window notation} as $[f(1),f(2),\dots,f(n)]$, which completely determines $f(i)$ for all $i\in{\mathbb{Z}}$.
%
\begin{figure}
\def0.18\textwidth{0.45\textwidth}
\begin{tabular}{ccc}
\includegraphics[width=0.18\textwidth]{figures/arrow_diagram1}
& \qquad &
\includegraphics[width=0.18\textwidth]{figures/arrow_diagram2}
%
%
\\
%
(a) Standard arrow diagram of $f$. && (b) Arrow diagram $\Diag(x)$.
\end{tabular}
\caption{\label{fig:arr_diag}Arrow diagrams of affine permutations; see \cref{sec:aff_perm_backgr,sec:arrow-diagrams}.}
\end{figure}
The group $\Saff^{(0)}_n$ is a Coxeter group with generators $\Pi:=\{s_i\mid i\in[n]\}$, where the affine permutation $s_i:{\mathbb{Z}}\to{\mathbb{Z}}$ sends $i\mapsto i+1$, $i+1\mapsto i$, and $j\mapsto j$ for $j\not\equiv i,i+1 \pmod n$. For $i\in{\mathbb{Z}}$, we let $s_i:=s_{\bar i}$ where $\bar i\in[n]$ satisfies $\bar i\equiv i\pmod n$.
The group $\Saff^{(0)}_n$ is also known as the \emph{affine Weyl group of type $\widetilde{A}_{n-1}$}. Let $\La:=f_{1,n}\in\Saffxx_n^1$. Thus, $\Saff_n=\Saff^{(0)}_n\rtimes\<\La\>$. We will also be interested in the quotient group $\Sext_n:=\Saff_n/\<\La^n=\operatorname{id}\>$, known as the \emph{extended affine Weyl group of type $\widetilde{A}_{n-1}$}.
%
The group $\Saff^{(0)}_n$ is a subgroup of both $\Saff_n$ and $\Sext_n$. %
We denote by $\sigma:\Saff_n\to\Saff_n$ the \emph{rotation operator} given by $\sigma(f):= \La f\La^{-1}$.
Let $f\in\Saff_n$. Define
\begin{align*}%
\operatorname{Inv}(f)&:=\{(i,j)\in{\mathbb{Z}}\times{\mathbb{Z}}\mid \text{$i<j$ and $f(i)>f(j)$}\},\\
\ell(f)&:=\#\{(i,j)\in{\mathbb{Z}}\times{\mathbb{Z}}\mid \text{$i<j$, $f(i)>f(j)$, and $i\in[n]$}\}.
\end{align*}
The \emph{standard arrow diagram} of $f$ is obtained by drawing an arrow $(i,1)\to(f(i),0)$ for all $i\in{\mathbb{Z}}$; see \figref{fig:arr_diag}(a) for an example when $f=[7,-1,2,5,8,3,11]$ in window notation. The set $\operatorname{Inv}(f)$ consists of pairs of crossing arrows, and $\ell(f)$ counts these crossings modulo the equivalence relation generated by $(i,j)\sim(i+n,j+n)$ for all $i,j\in{\mathbb{Z}}$. Alternatively, $\ell(f)$ is the minimal integer $l$ such that $f$ can be written as a product $f=s_{i_1}s_{i_2}\cdots s_{i_l} \La^k$ for some indices $i_1,i_2,\dots,i_l,k$; in this case, $s_{i_1}s_{i_2}\cdots s_{i_l} \La^k$ is called a \emph{reduced expression} for $f$.
For the example in \figref{fig:arr_diag}(a), we have %
\begin{equation*}%
\operatorname{k}(f)=1,\quad \ell(f)=11, \quad\text{and}\quad f=s_{3} s_{4} s_{6} s_{7} s_{2} s_{5} s_{6} s_{1} s_{4} s_{3} s_{2} \La.
\end{equation*}
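These values can be double-checked mechanically. The Python sketch below (ours; purely illustrative) composes the generators as maps ${\mathbb{Z}}\to{\mathbb{Z}}$, applied from right to left, and recovers the window $[7,-1,2,5,8,3,11]$ from the reduced expression above:
\begin{verbatim}
n = 7

def s(k):                                # simple reflection s_k, acting on Z
    def act(i):
        r = (i - k) % n
        if r == 0: return i + 1          # i is congruent to k modulo n
        if r == 1: return i - 1          # i is congruent to k + 1 modulo n
        return i
    return act

def compose(g, h):
    return lambda i: g(h(i))

f = lambda i: i + 1                      # start from \La
for k in reversed([3, 4, 6, 7, 2, 5, 6, 1, 4, 3, 2]):
    f = compose(s(k), f)                 # prepend the next generator on the left
print([f(i) for i in range(1, n + 1)])   # [7, -1, 2, 5, 8, 3, 11]
\end{verbatim}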
\begin{remark}\label{rmk:kopf}
In general, the integer $\operatorname{k}(f)$ is equal to the signed number of intersections of the arrows with one of the dashed vertical lines.
\end{remark}
\begin{figure}
\def0.18\textwidth{0.5\textwidth}
\def\hspace{-0.25in}{\hspace{-0.25in}}
\begin{tabular}{cc}
\includegraphics[width=0.18\textwidth]{figures/c_equiv1}
& \hspace{-0.25in}
\includegraphics[width=0.18\textwidth]{figures/c_equiv2}
%
%
\end{tabular}
\caption{\label{fig:c_equiv} The two affine permutations on the left are c-equivalent. The affine permutations $f,f'$ on the right satisfy $f\xrightarrow{s_i} f'$ but are not c-equivalent. See \cref{dfn:c_equiv}. Figure reproduced from~\cite[Figure~5]{GL_cat_combin}.}
\end{figure}
Following~\cite{GP93,GKP00,He07,He10,HN,Mar}, for $f,f'\in\Saff_n$, we write $f\xrightarrow{s_i} f'$ if $f'=s_ifs_i$ and $\ell(f')\leq \ell(f)$. We write $f\to f'$ if there exists a sequence $f=f_0,f_1,\dots,f_m=f'$ of affine permutations such that for each $j\in[m]$, we have $f_{j-1}\xrightarrow{s_i} f_j$ for some $i\in[n]$.
\begin{definition}\label{dfn:c_equiv}
We say that $f,f'\in\Saff_n$ are \emph{c-equivalent} if $f\to f'$ and $f'\to f$. In this case, we write $f\stackrel{\scalebox{0.6}{\ \normalfont{c}}}{\sim} f'$.
\end{definition}
\noindent This terminology is borrowed from~\cite{GL_cat_combin}.
When talking about conjugacy classes, we always mean $\Saff^{(0)}_n$-conjugacy classes, which we will usually denote by $\Ocal$. Given a conjugacy class $\Ocal$, let $\O_{\min}$ be the set of elements of $\Ocal$ of minimal length. We have the following important result.
\begin{theorem}[{\cite[Theorem~2.9]{HN}}]\label{thm:HN}
Let $f\in\Saff_n$ and let $\Ocal$ be the $\Saff^{(0)}_n$-conjugacy class containing~$f$.
%
%
Then there exists $f'\in\O_{\min}$ such that $f\to f'$.
\end{theorem}
%
\begin{definition}
We say that $f\in\Saff_n$ is \emph{c-reduced} if for all $f'\in\Saff_n$ such that $f\to f'$, we have $\ell(f)=\ell(f')$ (or equivalently, $f'\to f$).
\end{definition}
%
\noindent The following result follows immediately from \cref{thm:HN}.
\begin{corollary} \label{cor:HN}
An affine permutation $f\in\Saff_n$ is c-reduced if and only if it has minimal length in its conjugacy class $\Ocal$ (i.e., $f\in\O_{\min}$).%
\end{corollary}
%
%
It is clear that $\stackrel{\scalebox{0.6}{\ \normalfont{c}}}{\sim}$ yields an equivalence relation on the set of c-reduced elements in $\Saff_n$. The goal of \cref{sec:affine-perm-cycl,sec:c-equiv-structure} is to give a solution to the following problem.
\begin{problem}\label{prob:c_eq}
Determine the structure of c-equivalence classes of c-reduced elements in $\Saff_n$.
\end{problem}
%
%
%
\subsection{Cycles and slopes}\label{sec:cycles_and_slopes}
A set $C\subset{\mathbb{Z}}$ is called \emph{$n$-periodic} if for all $i\in{\mathbb{Z}}$, we have $i\in C$ if and only if $i+n\in C$.
\begin{definition}
Let $f\in\Saff_n$. A set $C\subset{\mathbb{Z}}$ is called \emph{$f$-closed} if it is nonempty, $n$-periodic, and for all $i\in{\mathbb{Z}}$, we have $i\in C$ if and only if $f(i)\in C$.
\end{definition}
\begin{definition}\label{dfn:restrict}
Let $C$ be an $f$-closed set. Because it is $n$-periodic, the set $C\cap[n]$ is nonempty. Let $\operatorname{n}_f(C):=\#(C\cap[n])$. There exists a unique order-preserving bijection $r_{\Cycle}: C\to{\mathbb{Z}}$ sending $\min(C\cap[n])$ to $1\in{\mathbb{Z}}$. The \emph{restriction $f|_C\in\Saffx_{\operatorname{n}_f(C)}$} is an affine permutation defined by
\begin{equation}\label{eq:restrict}
f|_C:=r_{\Cycle}\circ f \circ r_{\Cycle}^{-1}.
\end{equation}
\end{definition}
Given an $f$-closed set $C$, we let
%
\begin{equation*}%
\operatorname{n}_f(C)=\operatorname{n}(f|_C),\quad \operatorname{k}_f(C):=\operatorname{k}(f|_C),\quad \dop_f(C):=\operatorname{d}(f|_C), \quad\text{and}\quad
\slp_f(\Cycle):=\frac{\operatorname{k}_f(C)}{\operatorname{n}_f(C)}.
\end{equation*}
The rational number $\slp_f(\Cycle)$ is called the \emph{slope} of $C$. Thus, we have $f|_C\in\Saffxx_{n'}^{k'}$ for $n'=\operatorname{n}_f(C)$ and $k'=\operatorname{k}_f(C)$.
%
%
%
%
%
%
%
%
%
\begin{definition}
A \emph{cycle of $f$} is an $f$-closed set $C$ that is minimal with respect to inclusion. The set of cycles of $f$ is denoted $\Cycles_f$.
\end{definition}
\noindent Thus, the cycles of $f$ are in bijection with the cycles of $\bar f$, and a nonempty subset of ${\mathbb{Z}}$ is $f$-closed if and only if it is a disjoint union of cycles of $f$. For $i\in {\mathbb{Z}}$, we write $\slp_f(i):=\slp_f(\Cycle)$, where $C$ is the cycle of $f$ containing $i$.
\begin{example}
Let $f=[7,-1,2,5,8,3,11]$ in window notation be the affine permutation in \figref{fig:arr_diag}(a). Then $f$ has two cycles: $\textcolor{red}{\Cycle}$ (resp., $\textcolor{blue}{\Cycle'}$) consists of all $i\in{\mathbb{Z}}$ congruent to $1,4,5,7$ (resp., to $2,3,6$) modulo $n=7$. We have
\begin{align*}
%
%
\operatorname{n}_f(\textcolor{red}{\Cycle})&=4, &\operatorname{k}_f(\textcolor{red}{\Cycle})&=2, &\dop_f(\textcolor{red}{\Cycle})&=2, &\slpfx(\textcolor{red}{\Cycle})&=1/2,\\
\operatorname{n}_f(\textcolor{blue}{\Cycle'})&=3, &\operatorname{k}_f(\textcolor{blue}{\Cycle'})&=-1, &\dop_f(\textcolor{blue}{\Cycle'})&=1, &\slpfx(\textcolor{blue}{\Cycle'})&=-1/3.
\end{align*}
\end{example}
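The data in this example can be recomputed as follows (a Python sketch of ours; it lists each cycle $C$ together with $\operatorname{n}_f(C)$, $\operatorname{k}_f(C)$, $\dop_f(C)$, and $\slp_f(\Cycle)$):
\begin{verbatim}
from fractions import Fraction
from math import gcd

window = [7, -1, 2, 5, 8, 3, 11]
n = len(window)
f = lambda i: window[(i - 1) % n] + ((i - 1) // n) * n

seen = set()
for start in range(1, n + 1):
    if start in seen:
        continue
    orbit, i = [start], f(start)
    while 1 + (i - 1) % n != start:      # follow the orbit of `start` modulo n
        orbit.append(1 + (i - 1) % n)
        i = f(i)
    m, k = len(orbit), (i - start) // n  # n_f(C) and k_f(C)
    seen.update(orbit)
    print(sorted(orbit), m, k, gcd(abs(k), m), Fraction(k, m))
# [1, 4, 5, 7] 4 2 2 1/2
# [2, 3, 6] 3 -1 1 -1/3
\end{verbatim}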
Given $f\in\Saff_n$ and $\nu\in{\mathbb{Q}}$, we set
\begin{equation*}%
\begin{split}
\Cycles_f(\slp)&:=\{C\in\Cycles_f\mid\slp_f(\Cycle)=\nu\},
%
%
\qquad
C_{f,\slp}:=\bigsqcup_{C\in\Cycles_f(\slp)} C, \\ \operatorname{n}_f(\nu)&:=\sum_{C\in\Cycles_f(\slp)} \operatorname{n}_f(C), \quad \operatorname{k}_f(\nu):=\sum_{C\in\Cycles_f(\slp)}\operatorname{k}_f(C),\quad \dop_f(\nu):=\sum_{C\in\Cycles_f(\slp)} \dop_f(C).
\end{split}
\end{equation*}
%
%
%
%
%
%
%
%
%
%
%
%
For $f\in\Saff_n$, we set
\begin{equation*}%
\slopes_f:=\{\nu\in{\mathbb{Q}}\mid\text{ $C_{f,\slp}$ is nonempty}\}; \quad\text{therefore,}\quad \bigsqcup_{\nu\in\slopes_f} C_{f,\slp}={\mathbb{Z}}.
\end{equation*}
For $\nu\in\slopes_f$, we have $\nu=\operatorname{k}_f(\nu)/\operatorname{n}_f(\nu)$ and $\gcd(\operatorname{k}_f(\nu),\operatorname{n}_f(\nu))=\dop_f(\nu)$.
\begin{definition}\label{dfn:bflaf}
Let $\bfla^f:=(\la^{f,\slp})_{\nu\in\slopes_f}$, where $\la^{f,\slp}$ is the integer partition of $\dop_f(\nu)$ induced by $(\dop_f(C))_{C\in\Cycles_f(\slp)}$.
\end{definition}
\subsection{Arrow diagrams}\label{sec:arrow-diagrams}
Let $\mathcal{L}_n$ be the quotient of the space
\begin{equation*}%
\{x:{\mathbb{Z}}\to{\mathbb{R}}\mid \xx(i+n)=\xx(i)+1\text{ for all $i\in{\mathbb{Z}}$}\}
\end{equation*}
by the space of constant maps ${\mathbb{Z}}\to{\mathbb{R}}$.
%
It is an affine space of dimension $n-1$. For any $g\in \Saff_n$, we have a point $\frac1n g\in\mathcal{L}_n$ sending $i\mapsto \frac1n g(i)$. To a point $x\in\mathcal{L}_n$, we associate a \emph{labeled point configuration} $\LPConf(x)$, that is, a collection of labeled points on the real line: a point labeled $i\in{\mathbb{Z}}$ is located at coordinate $\xx(i)$. We denote ${ \operatorname{Im}}(x):=\{\xx(i)\mid i\in{\mathbb{Z}}\}\subset{\mathbb{R}}$.
Recall the notion of a standard arrow diagram from \cref{sec:aff_perm_backgr}. More generally, to each $f\in\Saff_n$ and $x\in\mathcal{L}_n$ one can associate an \emph{arrow diagram} $\Diag(x)$ obtained by drawing an arrow $(\xx(i),1)\to (\xx({f(i)}),0)$ for each $i\in{\mathbb{Z}}$. For example, the standard arrow diagram of $f$ is just $\Diag(x)$ for $x=\frac1n\operatorname{id}$, where $\operatorname{id}\in\Saff^{(0)}_n$ is the identity map.
We say that $x\in\mathcal{L}_n$ is \emph{generic} if $\xx(i)\neq \xx(j)$ for all $i\neq j\in{\mathbb{Z}}$. We denote by $\Ln^\circ$ the set of generic elements of $\mathcal{L}_n$. The \emph{cutoff point} for $x\in\Ln^\circ$ is the midpoint of the interval of all $c\in{\mathbb{R}}\setminus{ \operatorname{Im}}(x)$ such that
%
\begin{equation}\label{eq:cutoff_dfn}
\#\{i\leq0\mid x_i>c\}=\#\{i\geq1\mid x_i<c\}.
\end{equation}
For $x\in\Ln^\circ$, we let $g_x$ be the affine permutation in $\Saff^{(0)}_n$ such that for all $i,j\in{\mathbb{Z}}$, $g_x(i)<g_x(j)$ if and only if $\xx(i)<\xx(j)$. Explicitly, if $c\in{\mathbb{R}}$ is the cutoff point for $x$ and $i_1,i_2,\dots,i_n\in{\mathbb{Z}}$ are such that $c<x_{i_1}<x_{i_2}<\cdots<x_{i_n}<c+1$ then we have $g_x^{-1}=[i_1,i_2,\dots,i_n]$ in window notation; cf. \cref{ex:Diag_f} below.
For $f\in\Saff_n$ and $x\in\Ln^\circ$, the arrow diagram $\Diag(x)$ is topologically equivalent to the standard arrow diagram of $g_xfg_x^{-1}$. That is, we have an order-preserving bijection $\phi_x:=x\circ g_x^{-1}: {\mathbb{Z}}\to{ \operatorname{Im}}(x)$ such that $(i,j)\in\operatorname{Inv}(g_xfg_x^{-1})$ if and only if the arrows starting at $(\phi_x(i),1)$ and $(\phi_x(j),1)$ cross in $\Diag(x)$.
\begin{figure}
\def0.18\textwidth{0.45\textwidth}
\begin{tabular}{ccc}
\includegraphics[width=0.18\textwidth]{figures/arrow_diagram2}
&
\qquad &
\includegraphics[width=0.18\textwidth]{figures/arrow_diagram3}
\\
%
(a) Arrow diagram $\Diag(x)$.& & (b) Standard arrow diagram of $g_xfg_x^{-1}$.
\end{tabular}
\caption{\label{fig:arr_diag2} The arrow diagram $\Diag(x)$ is topologically equivalent to the standard arrow diagram of $g_xfg_x^{-1}$; see \cref{ex:Diag_f}.}
\end{figure}
\begin{example}\label{ex:Diag_f}
Let $f=[7,-1,2,5,8,3,11]$ be the affine permutation shown in \figref{fig:arr_diag}(a). An example of the arrow diagram $\Diag(x)$ for some $x\in\Ln^\circ$ is shown in \figref{fig:arr_diag}(b). We have $g_x^{-1}=[2,3,1,4,6,5,7]$ in window notation, which is obtained by reading off the labels between the two vertical dashed lines. These dashed lines are located at positions $c$ and $c+1$, where $c$ is the cutoff point of $x$. We find $g_x=s_5s_2s_1\in\Saff^{(0)}_n$, and thus $g_xfg_x^{-1}=[-2, 1, 7, 6, 2, 10,11]$ in window notation. The standard arrow diagram of $g_xfg_x^{-1}$ is shown in \figref{fig:arr_diag2}(b). Comparing it with $\Diag(x)$ shown in \figref{fig:arr_diag2}(a), we see that indeed the two arrow diagrams are topologically equivalent (modulo a relabeling of the points given by the map $\phi_x$).
\end{example}
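The conjugation in this example is also easy to check directly; the following Python sketch (ours) recomputes the window of $g_xfg_x^{-1}$ from the windows of $f$ and $g_x^{-1}$:
\begin{verbatim}
def make(window):                        # window notation -> map Z -> Z
    n = len(window)
    return lambda i: window[(i - 1) % n] + ((i - 1) // n) * n

def invert(window):                      # window of the inverse affine permutation
    n = len(window)
    inv = [0] * n
    for i, v in enumerate(window, start=1):
        q, r = divmod(v - 1, n)
        inv[r] = i - q * n
    return inv

n = 7
f      = make([7, -1, 2, 5, 8, 3, 11])
gx_inv = make([2, 3, 1, 4, 6, 5, 7])
gx     = make(invert([2, 3, 1, 4, 6, 5, 7]))
print([gx(f(gx_inv(i))) for i in range(1, n + 1)])   # [-2, 1, 7, 6, 2, 10, 11]
\end{verbatim}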
We think of an arrow diagram $\Diag(x)$ for $(f,x)\in\Saff_n\times\Ln^\circ$ as a ``geometric realization'' of the affine permutation $g_xfg_x^{-1}$, and extend our definitions and notation to this case. For example, we denote by $\ell_f(x):=\ell(g_xfg_x^{-1})$ the number of crossings in $\Diag(x)$ modulo the shift $(i,j)\mapsto (i+n,j+n)$. For $f\in\Saff_n$ and $x,x'\in\Ln^\circ$, write $\Diag(x)\to\Diag(x')$ if $g_xfg_x^{-1}\to g_{x'}fg_{x'}^{-1}$, and $\Diag(x)\stackrel{\scalebox{0.6}{\ \normalfont{c}}}{\sim} \Diag(x')$ if $g_xfg_x^{-1}\stackrel{\scalebox{0.6}{\ \normalfont{c}}}{\sim} g_{x'}fg_{x'}^{-1}$. We say that $\Diag(x)$ is \emph{c-reduced} if so is $g_xfg_x^{-1}$.
%
%
%
We say that $x$ is \emph{almost generic} if there exists a pair $(i_0,j_0)\in{\mathbb{Z}}^2$ such that for all $i\neq j$, we have $\xx(i)\neq \xx(j)$ unless $\{i,j\}=\{i_0+dn,j_0+dn\}$ for some $d\in{\mathbb{Z}}$. Thus, $\Diag(x)\to\Diag(x')$ if there exists a continuous curve $x(t)\in\mathcal{L}_n$, $t\in[0,1]$, such that $\xtt_0=x$, $\xtt_1=x'$, $x(t)$ is almost generic for $t$ in some finite set $B$ and generic for $t\in[0,1]\setminus B$,
%
and $\ell_f(x(t))$ is a weakly decreasing function on $[0,1]\setminus B$.
%
%
%
%
%
\subsection{\texorpdfstring{${\epsilon}$}{Epsilon}-straight arrow diagrams}\label{sec:eps_straight}
Fix $f\in\Saff_n$. Recall that for $C\in\Cycles_f$ and $i\in C$, we write $\slp_f(i):=\slp_f(\Cycle)$.
For ${\epsilon}>0$ and $x\in\mathcal{L}_n$, we say that the arrow diagram $\Diag(x)$ is \emph{${\epsilon}$-straight} if for all $i\in{\mathbb{Z}}$, $\xx({f(i)})$ is ${\epsilon}$-close to $\xx(i)+\slp_f(i)$. For example, the arrow diagram $\Diag(x)$ shown in \figref{fig:arr_diag}(b) is ${\epsilon}$-straight for some $0<{\epsilon}<0.15$.
Denote by $\Straight_{\eps}(f)$ the set of ${\epsilon}$-straight elements in $\mathcal{L}_n$:
%
\begin{equation*}%
%
\Straight_{\eps}(f):=\{x\in\mathcal{L}_n: |\xx({f(i)})-(\xx(i)+\slp_f(i))|\leq{\epsilon}\text{ for all $i\in{\mathbb{Z}}$}\}.
\end{equation*}
We set $\Straight_{\eps}^\circ(f):=\Straight_{\eps}(f)\cap \Ln^\circ$.
%
The following result is an analog of~\cite[Lemma~6.8(1)]{Mar}; see also~\cite[Lemma~5.4]{Mar18} and~\cite[Proposition~3.4]{Mar14}. See \cref{rmk:alcove} below for the relation between our results and those of Marquis.
%
%
\begin{proposition}\label{prop:ODE}
For any $f\in\Saff_n$, $x\in\Ln^\circ$, and ${\epsilon}>0$, there exists $y\in\Straight_{\eps}^\circ(f)$ such that $\Diag(x)\to\Diag(y)$.
\end{proposition}
\begin{example}
The diagram in \figref{fig:arr_diag}(a) can be continuously deformed into the diagram in \figref{fig:arr_diag}(b). During the deformation, the point labeled $1$ passes to the right through the points labeled $2,3$ while the point labeled $5$ passes to the right through the point labeled~$6$. The resulting sequence of swaps is recorded in the reduced word for $g_x=s_5s_2s_1$; cf. \cref{ex:Diag_f}.
\end{example}
%
\begin{proof}
We will find a smooth curve $x(t),t\in{\mathbb{R}}_{\geq0}$ in $\mathcal{L}_n$ such that $\xtt_0=x$ and such that we can take $y:=x(t)$ for $t$ sufficiently large. The curve will be defined via the following linear system of first order ordinary differential equations (ODEs): %
\begin{equation}\label{eq:ODE}
\partial \xtx(i)/\partial t=\xtx({f(i)})-\xtx(i),\quad\text{for all $i\in{\mathbb{Z}}$.}
\end{equation}
Rewriting each $\xtx(i)$ in terms of $(\xtx(j))_{j\in[n]}$, we obtain an $n\times n$ inhomogeneous linear system of ODEs. It splits into independent systems for each cycle of $f$.
Fix a single cycle $C$ of $f$, and let $m:=\operatorname{n}_f(C)$. We have an $m\times m$ system of ODEs of the form $\partial z(t)/\partial t=Az(t)+b$, for a constant $m\times m$ matrix $A$ and a constant vector $b\in{\mathbb{R}}^m$. Let $w:=\bar f|_C\in S_m$ be the permutation obtained by taking $f|_C$ modulo $m$. Since $C$ is a single cycle of $f$, the permutation $w$ is an $m$-cycle, so its permutation matrix $P_w$ has eigenvalues $e^{2\pi i r/m}$ for $r=0,1,\dots, m-1$. We have $A=P_w-I_m$, where $I_m$ is the $m\times m$ identity matrix. Thus, the eigenvalues of $A$ are
%
${\lambda}_r:=e^{2\pi i r/m}-1$ for $r=0,1,\dots,m-1$. (In particular, they are all distinct and have nonpositive real part.) A general solution to the homogeneous system $\partial z(t)/\partial t=Az(t)$ is then a linear combination of vector-valued functions of the form $\exp({\lambda}_r t)z_r$, where $z_r$ is the eigenvector of $A$ corresponding to~${\lambda}_r$.
One eigenvalue of $A$ is ${\lambda}_0=0$, and the corresponding eigenvector is $z_0:=(1,1,\dots,1)^T$. The vector $b$ is a $0,1$-vector with $1$'s in positions corresponding to $i\in[n]\cap C$ such that $f(i)>n$. In particular, the sum of coordinates of $b$ is $\operatorname{k}_f(C)$, and thus $\slp_f(\Cycle) z_0-b$ belongs to the image of $A$. Letting $\tilde z_0$ be one of its preimages under $A$, we see that $z(t)=\slp_f(\Cycle) t z_0 -\tilde z_0$ is a solution to the inhomogeneous system $\partial z(t)/\partial t=Az(t)+b$. Thus, an arbitrary solution differs from it by a linear combination of the functions $\exp({\lambda}_r t)z_r$, each of which is constant (for $r=0$) or decays exponentially (for $r\neq0$).
It follows that for $t$ large enough and $i\in{\mathbb{Z}}$, we have $\xtx(i)=\slp_f(i) t+o(t)$, and $\partial \xtx(i)/\partial t=\slp_f(i)+o(1)$. By~\eqref{eq:ODE}, we get $x(t)\in\Straight_{\eps}(f)$ for all $t$ sufficiently large. It is also clear that for $t$ outside a discrete set, we have $x(t)\in\Straight_{\eps}^\circ(f)$.
Since $x=\xtt_0$ was generic, we can change it slightly so that each point $x(t)$ is almost generic for $t$ in some discrete set $B$ and generic for $t\in[0,\infty)\setminus B$. We claim that $\ell_f(x(t))$ is weakly decreasing for $t\in[0,\infty)\setminus B$. Indeed, let $t_0\in B$ be such that $\xtzx(i)=\xtzx(j)$ for some $i,j\in{\mathbb{Z}}$. Since $x(t_0)$ is almost generic, we have $\xtzx({f(i)})\neq \xtzx({f(j)})$.\footnote{This statement is true unless $f(i)=i+kn$ and $f(j)=j+kn$ for some $k$. But in that case, we have $\xx(i)\neq \xx(j)$ (because $x$ was generic) and $\xtzx(i)=\xx(i)+kt_0$, $\xtzx(j)=\xx(j)+kt_0$, so $\xtzx(i)\neq \xtzx(j)$ for all $t_0\geq0$.} Thus, $\partial \xtx(i)/\partial t\neq \partial \xtx(j)/\partial t$ at $t=t_0$. Suppose that $\partial \xtx(i)/\partial t> \partial \xtx(j)/\partial t$ at $t=t_0$, so $\xtzx({f(i)})>\xtzx({f(j)})$. Then $\xtxt_{t^-}(i)<\xtxt_{t^-}(j)$ and $\xtxt_{t^+}(i)>\xtxt_{t^+}(j)$ for some $t^-<t_0<t^+$ very close to $t_0$. We still have $\xtxt_{t^-}({f(i)})>\xtxt_{t^-}({f(j)})$. Thus, the arrows starting at $\xtxt_{t^-}(i)$ and $\xtxt_{t^-}(j)$ form a crossing in $\Diag(\xtt_{t^-})$ but do not form a crossing in $\Diag(\xtt_{t^+})$. Therefore $\ell_f(\xtt_{t^-})\geq \ell_f(\xtt_{t^+})$.
%
%
\end{proof}
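The proof can also be illustrated numerically. The following sketch (ours; the forward Euler discretization, step size, and time horizon are arbitrary choices and play no role in the argument) integrates~\eqref{eq:ODE} for the running example, using representatives in $[n]$ and the relation $\xx(i+n)=\xx(i)+1$, and prints the increments $\xx({f(i)})-\xx(i)$, which approach the slopes $1/2$ and $-1/3$ of the two cycles:
\begin{verbatim}
window = [7, -1, 2, 5, 8, 3, 11]            # the running example, n = 7
n = len(window)
x = [(i + 1) / n + 0.001 * i**2 for i in range(n)]   # a generic starting point
dt, steps = 0.01, 4000

def val(y, j):                              # y(j), extended by y(j + n) = y(j) + 1
    return y[(j - 1) % n] + (j - 1) // n

for _ in range(steps):                      # forward Euler for dx(i)/dt = x(f(i)) - x(i)
    x = [x[i] + dt * (val(x, window[i]) - x[i]) for i in range(n)]

print([round(val(x, window[i]) - x[i], 3) for i in range(n)])
# approximately [0.5, -0.333, -0.333, 0.5, 0.5, -0.333, 0.5]
\end{verbatim}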
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
\begin{remark}\label{rmk:alcove}
Our constructions can be translated into the well-studied geometric setup as we now explain. The group $\Saff^{(0)}_n$ acts simply transitively on the set $\Sigma$ of chambers of an infinite hyperplane arrangement $\{x_i=x_j+k\mid i\neq j\in [n], k\in{\mathbb{Z}}\}$ in ${\mathbb{R}}^{n}/\<(1,1,\dots,1)\>$. Choosing a distinguished fundamental chamber $C_0$, the map $g\mapsto gC_0$ yields a bijection $\Saff^{(0)}_n\xrightarrow{\sim} \Sigma$. Identifying $\mathcal{L}_n\xrightarrow{\sim} {\mathbb{R}}^{n}/\<(1,1,\dots,1)\>$ by a linear isomorphism sending $x\mapsto (\xx(1),\dots,\xx(n))$, the $\Saff_n$-action on $\mathcal{L}_n$ coincides with its action on ${\mathbb{R}}^{n}/\<(1,1,\dots,1)\>$. For $g\in\Saff^{(0)}_n$, the point $\frac1n g$ gets mapped to the center of the corresponding chamber $gC_0$. An element $x\in\mathcal{L}_n$ is generic if and only if it belongs to the interior of a chamber, and almost generic if and only if it belongs to the interior of a facet of a chamber. The set $\Straight_{\eps}(f)$ for ${\epsilon}=0$ equals the set denoted $\operatorname{Min}(f)$ in~\cite{Mar}. The map sending $f$ to the tuple $(f|_{\nu})_{\nu\in\slopes_f}$ of its restrictions is the map denoted $\pi_{\Sigma^\eta}$ in~\cite{Mar} (whose image is an element of finite order; cf. \cref{lemma:fin}). Our proof strategy may be considered an adaptation of~\cite[Proof of Proposition~6.20]{Mar}: given an arbitrary chamber $C$, construct a walk from $C$ to a chamber intersecting $\operatorname{Min}(f)$, and then use the projection $\pi_{\Sigma^\eta}$ to obtain an element of finite order. The notion of a modular invariant was inspired by~\cite[Part $(A_\ell^{(1)})$ of Theorem~10.12]{Mar}.
%
\end{remark}
%
\begin{remark}
One key point that allows for a significant simplification in our approach in type A (compared to the approach of~\cite{Mar} for arbitrary Coxeter groups) is a new proof of \cref{prop:ODE} using ODEs. We hope that this argument can be of independent interest. It appears to generalize to affine Weyl groups but not to arbitrary Coxeter groups.
\end{remark}
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
\subsection{Vector configurations and conjugacy} We return to \cref{prob:c_eq}. Our first goal is to describe $\Saff^{(0)}_n$-conjugacy classes in $\Saff_n$.
Let $f\in\Saff_n$. For $\nu\in\slopes_f$ and $C\in\Cycles_f$, set
\begin{equation*}%
\edge_f(\nu):=(\operatorname{n}_f(\nu),\operatorname{k}_f(\nu)) \quad\text{and}\quad \edge_f(C):=(\operatorname{n}_f(C),\operatorname{k}_f(C)).
\end{equation*}
Clearly, $\edge_f(\nu)=\sum_{C\in\Cycles_f(\slp)} \edge_f(C)$ is a sum of collinear vectors, and their integer lengths are given by $\ilen|\edge_f(C)|=\dop_f(C)$, so $\ilen|\edge_f(\nu)|=\dop_f(\nu)$.
%
We let $E_f:=(\edge_f(\nu))_{\nu\in\slopes_f}$ be the \emph{vector configuration} associated to $f$. By analogy with \cref{dfn:decor}, we call $\dot{E}_f:=(E_f,\bfla^f)$ the \emph{weakly decorated vector configuration} associated to $f$, where $\bfla^f$ was introduced in \cref{dfn:bflaf}.
\begin{proposition}\label{prop:conjugate_VCFD}
Let $f,f'\in\Saff_n$. Then $f$ is $\Saff^{(0)}_n$-conjugate to $f'$ if and only if $\dot{E}_f=\dot{E}_{f'}$.
\end{proposition}
\begin{proof}
Since $\dot{E}_f$ depends only on the cycles of $f$ and their slopes, it is clearly invariant under conjugation, which shows the ``only if'' direction. Suppose now that $f,f'\in\Saff_n$ are such that $\dot{E}_f=\dot{E}_{f'}$. Because the permutations $\bar f,\bar f'\in S_n$ have the same cycle type, they are conjugate in $S_n$. We may therefore apply $S_n$-conjugation to $f'$ (permuting the cycles along the way) to obtain an element $f''$ such that $\bar f=\bar f''$ (in particular, $f$ and $f''$ have the same sets of cycles), and such that for each cycle $C$ of $f$, we have $\operatorname{n}_f(C)=\nopx_{f''}(C)$ and $\operatorname{k}_f(C)=\kopx_{f''}(C)$. Let $t_{e_i}\in\Saff_n$ be the affine permutation sending $i\mapsto i+n$ and $j\mapsto j$ for $j\not\equiv i\pmod n$, called a \emph{translation element}. Thus, $t_{e_i-e_j}:=t_{e_i}t_{e_j}^{-1}$ belongs to $\Saff^{(0)}_n$, and we see that $f$ can be obtained from $f''$ via conjugations by such elements $t_{e_i-e_j}$ for $i,j$ belonging to the same cycle of $f$.
%
%
\end{proof}
%
\subsection{A characterization of minimal-length elements}
Our next goal is to give an explicit characterization of c-reduced affine permutations; see \cref{cor:c_red_charact}.
Given two subsets $A,B\subset{\mathbb{R}}^2$, define their \emph{Minkowski sum} by $A+B:=\{a+b\mid a\in A,\ b\in B\}$. Given a vector configuration $E=\{e_1,e_2,\dots,e_m\}\subset{\mathbb{Z}}^2$, the associated \emph{zonotope} $\Zcal(E)$ is the convex polygon in ${\mathbb{R}}^2$ obtained as the Minkowski sum of line segments
\begin{equation*}%
\Zcal(E):=[0,e_1]+[0,e_2]+\cdots+[0,e_m].
\end{equation*}
For $e_1,e_2\in{\mathbb{R}}^2$, recall that ${ \operatorname{det}}(e_1,e_2)$ is the determinant of the $2\times2$ matrix with columns $e_1,e_2$. The following formula for the area of $\Zcal(E)$ is well known~\cite{McMullen}:
\begin{equation*}%
\operatorname{Area}(\Zcal(E))=\sum_{1\leq i<j\leq m} |{ \operatorname{det}}(e_i,e_j)|.
\end{equation*}
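For instance, for the toy configuration $E=\{(2,0),(2,2),(3,-1)\}$ (our example, unrelated to the rest of this section), the formula gives $\operatorname{Area}(\Zcal(E))=4+2+8=14$; a short Python check (ours):
\begin{verbatim}
def zonotope_area(E):
    return sum(abs(E[i][0] * E[j][1] - E[i][1] * E[j][0])
               for i in range(len(E)) for j in range(i + 1, len(E)))

print(zonotope_area([(2, 0), (2, 2), (3, -1)]))   # 14
\end{verbatim}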
Recall the notion of $\excess{\bfla}$ from \cref{dfn:exc}.
%
%
%
%
%
%
%
\begin{lemma}\label{lemma:c_red_ell}
Let $f\in\Saff_n$. Then $f$ is c-reduced if and only if
\begin{equation}\label{eq:c_red_ell}
\ell(f)=\operatorname{Area}(\Zcal(E))+\excess{\bfla}, \quad\text{where $\dot{E}_f=(E,\bfla)$.}
%
\end{equation}
\end{lemma}
In the proof of the lemma, we will count inversions $(j,j')\in\operatorname{Inv}(f)$ according to the cycles containing $j$ and $j'$.
\begin{definition}
Given two cycles $C,C'\in\Cycles_f$, their \emph{ordered crossing number} is defined as
\begin{equation*}%
\operatorname{xing}(C,C'):=\#\{(j,j')\in \operatorname{Inv}(f)\mid j\in[n]\cap C\text{ and }j'\in C'\}.
\end{equation*}
%
%
\end{definition}
\noindent Thus, we have $\sum_{C,C'\in\Cycles_f} \operatorname{xing}(C,C') = \ell(f)$.
\begin{proof}[Proof of \cref{lemma:c_red_ell}]
Denote the right-hand side of~\eqref{eq:c_red_ell} by $\ell(\dot{E}_f)$. First, we show that for any $f\in\Saff_n$, we have $\ell(f)\geq\ell(\dot{E}_f)$. Observe that if $g\in\Saff^{(k)}_n$ has a single cycle then $\ell(g)\geq \operatorname{d}(g)-1$ (where $\operatorname{d}(g)=\gcd(k,n)$), because the map $f_{k,n}=\La^k$ has $\gcd(k,n)$ cycles, and for each $i\in[n]$, $s_ig$ has either one more or one less cycle than $g$. Thus, each cycle $C$ of $f$ contributes at least $\dop_f(C)-1$ to $\ell(f)$:
\begin{equation}\label{eq:xing_geq_CC}
\operatorname{xing}(C,C)\geq \dop_f(C)-1.
\end{equation}
It follows that for each $\nu\in\slopes_f$, we have
\begin{equation*}%
%
\sum_{C\in\Cycles_f(\slp)} \operatorname{xing}(C,C) \geq \excess{\la^{f,\slp}}.
\end{equation*}
To each cycle $C$ we can associate a piecewise-linear curve $P^{(f)}(C)$ in ${\mathbb{R}}^2$ obtained by choosing some $i\inC$ and joining the points $p_d:=\left(d,\frac1n f^d(i)\right)$ for $d=0,1,\dots,\operatorname{n}_f(C)$; cf.~\cite[Section~4]{GL_cat_combin}. We have $p_0=(0,\frac in)$ and $p_{\operatorname{n}_f(C)}=(\operatorname{n}_f(C),\frac in+\operatorname{k}_f(C))$, thus $P^{(f)}(C)$ gives rise to a closed curve on ${\mathbb{T}}={\mathbb{R}}^2/{\mathbb{Z}}^2$ with homology $\edge_f(C)=(\operatorname{n}_f(C),\operatorname{k}_f(C))$. It is well known that given integers $n',k',n'',k''$ with $k'/n'>k''/n''$, a curve in ${\mathbb{T}}$ with homology $(n',k')$ intersects a curve with homology $(n'',k'')$ from below at least
%
$\left|{ \operatorname{det}} \begin{pmatrix}
n' & k'\\ n'' & k''
\end{pmatrix}\right|$ times. Thus, given cycles $C\neq C'$, we have
\begin{equation}\label{eq:xing_geq_CC'}
%
%
\operatorname{xing}(C,C')\geq
\begin{cases}
0, &\text{if $\slp_f(\Cycle)\leq \slpfx(C')$;}\\
|{ \operatorname{det}}(\edge_f(C),\edge_f(C'))|,&\text{otherwise.}
\end{cases}
\end{equation}
We have shown that $\ell(f)\geq\ell(\dot{E}_f)$.
%
%
%
%
Conversely, consider a weakly decorated vector configuration $\dot{E}=(E,\bfla)$. By \cref{prop:conjugate_VCFD}, $\Ocal:=\{f\in\Saff_n\mid \dot{E}_f=\dot{E}\}$ is an $\Saff^{(0)}_n$-conjugacy class.
%
By \cref{cor:HN}, $f\in\Ocal$ is c-reduced if and only if $f\in\O_{\min}$. We have shown above that for any $f\in\Ocal$, $\ell(f)\geq \ell(\dot{E})$. It remains to construct $g\in\Ocal$ such that $\ell(g)=\ell(\dot{E})$. Such an affine permutation will be constructed in \cref{sec:eps_straight_construct}.
%
%
%
\end{proof}
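As an informal sanity check on~\eqref{eq:c_red_ell} (ours; it assumes that $\excess{\bfla}$ is the total sum of $(\text{part}-1)$ over the parts of the $\la^{f,\slp}$, as in the inequalities used in the proof), the running example $f=[7,-1,2,5,8,3,11]$ has $\ell(f)=11$ and $\operatorname{Area}(\Zcal(E_f))+\excess{\bfla^f}=10+1=11$, so this $f$ attains the bound and is therefore c-reduced:
\begin{verbatim}
from fractions import Fraction
from math import gcd

window = [7, -1, 2, 5, 8, 3, 11]
n = len(window)
f = lambda i: window[(i - 1) % n] + ((i - 1) // n) * n

# ell(f): inversions (i, j) with i in [n], i < j, f(i) > f(j);
# for j > n + max(window) - min(window) we have f(j) > f(i), so the range below suffices
bound = n + max(window) - min(window)
length = sum(1 for i in range(1, n + 1)
               for j in range(i + 1, bound + 1) if f(i) > f(j))

# cycles with their n_f(C), k_f(C), d_f(C), and slope
cyc, seen = [], set()
for start in range(1, n + 1):
    if start in seen:
        continue
    orbit, i = [start], f(start)
    while 1 + (i - 1) % n != start:
        orbit.append(1 + (i - 1) % n)
        i = f(i)
    m, k = len(orbit), (i - start) // n
    seen.update(orbit)
    cyc.append((m, k, gcd(abs(k), m), Fraction(k, m)))

# right-hand side of (eq:c_red_ell): zonotope area of E_f plus the excess
slopes = sorted({s for (_, _, _, s) in cyc})
edge = {s: (sum(m for (m, _, _, t) in cyc if t == s),
            sum(k for (_, k, _, t) in cyc if t == s)) for s in slopes}
area = sum(abs(edge[s][0] * edge[t][1] - edge[s][1] * edge[t][0])
           for a, s in enumerate(slopes) for t in slopes[a + 1:])
excess = sum(d - 1 for (_, _, d, _) in cyc)
print(length, area + excess)                  # prints: 11 11
\end{verbatim}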
\begin{corollary}\label{cor:c_red_charact}
Let $f\in\Saff_n$. Then $f$ is c-reduced if and only if all of the following conditions are satisfied.
\begin{enumerate}
\item For each $C\in\Cycles_f$, $\operatorname{xing}(C,C)=\dop_f(C)-1$.
\item For each $C\neq C'$ in $\Cycles_f$, we have
\begin{equation*}%
\operatorname{xing}(C,C')=
\begin{cases}
0, &\text{if $\slp_f(\Cycle)\leq \slpfx(C')$;}\\
|{ \operatorname{det}}(\edge_f(C),\edge_f(C'))|,&\text{otherwise.}
\end{cases}
\end{equation*}
%
%
%
%
%
%
%
\end{enumerate}
\end{corollary}
\begin{proof}
We have lower bounds on $\operatorname{xing}(C,C)$ and $\operatorname{xing}(C,C')$ given by~\eqref{eq:xing_geq_CC}--\eqref{eq:xing_geq_CC'}. Moreover, we showed in \cref{lemma:c_red_ell} that $f$ is c-reduced if and only if all of these inequalities are equalities.
%
%
\end{proof}
\begin{remark}
%
\cref{cor:c_red_charact} was obtained jointly with Thomas Lam during the development of~\cite{GL_cat_combin}.
\end{remark}
%
\begin{corollary}\label{cor:restr_c_reduced}
If $f\in\Saff_n$ is c-reduced and $C\subset{\mathbb{Z}}$ is $f$-closed then $f|_C$ is c-reduced.
\end{corollary}
\section{The structure of c-equivalence classes}\label{sec:c-equiv-structure}
The goal of this section is to give a complete description of c-equivalence classes of c-reduced affine permutations; see \cref{thm:c-equivalent}.
\subsection{Cyclic compositions}
Let $f\in\Saff_n$ be c-reduced. Fix a slope $\nu\in\slopes_f$. By \cref{cor:c_red_charact}, we have $\operatorname{xing}(C,C')=0$ for all $C\neq C'$ in $\Cycles_f(\slp)$. We thus get a natural cyclic order on the set $\Cycles_f(\slp)$ induced by the cyclic order on $[n]\cong{\mathbb{Z}}/n{\mathbb{Z}}$. Recall that $\sum_{C\in\Cycles_f(\slp)} \dop_f(C)=\dop_f(\nu)$. Recording the numbers $\dop_f(C)$ in this cyclic order thus yields a cyclic composition $\cc^{f,\slp}$ of $\dop_f(\nu)$.
%
%
Letting $\bfcc_f:=(\cc^{f,\slp})_{\nu\in\slopes_f}$, we consider the \emph{strongly decorated vector configuration} $\ddot{E}_f:=(E_f,\bfcc_f)$.
\begin{lemma}\label{lemma:c-red=>VCF}
Let $f,f'\in\Saff_n$ be c-reduced. If $f\stackrel{\scalebox{0.6}{\ \normalfont{c}}}{\sim} f'$ then $\ddot{E}_f=\ddot{E}_{f'}$.
\end{lemma}
\begin{proof}
By assumption, $\ell(f)=\ell(f')$. It suffices to consider the case $f'=s_ifs_i$ for some $i\in[n]$. By \cref{lemma:c_red_ell}, we only need to check that the relative order on $\Cycles_f(\slp)$ is preserved for each slope $\nu\in\slopes_f$. This is clear since $f,f'$ have no crossings between different cycles of the same slope by \cref{cor:c_red_charact}.
\end{proof}
%
%
%
%
%
To give the converse to \cref{lemma:c-red=>VCF}, we need to consider \emph{modular invariants} discussed in \cref{sec:intro:minv}. Recall from \cref{dfn:clicks} that for a cyclic composition $\cc$, we have the rotation number $\operatorname{rot}(\cc)$, and for a family $\bfcc$ of cyclic compositions, $\operatorname{d}(\bfcc)$ is the greatest common divisor of their rotation numbers.
%
%
%
%
%
%
Given a conjugacy class $\Ocal$ and a strongly decorated vector configuration $\ddot{E}$, let
\begin{equation*}%
\O_{\min}[\ddot{E}]:=\{f\in\O_{\min}\mid \ddot{E}_f=\ddot{E}\}.
\end{equation*}
The goal of this section is to prove the following result.
\begin{theorem}\label{thm:c-equivalent}
Let $f\in\Saff_n$ be c-reduced. Let $\Ocal$ be the $\Saff^{(0)}_n$-conjugacy class of $f$. Then $\O_{\min}[\ddot{E}_f]$ is a union of $\operatorname{d}(\bfcc_f)$-many c-equivalence classes. Moreover, for any two c-reduced $f,f'\in\O_{\min}$, we have %
\begin{equation}\label{eq:c-equivalent}
%
f\stackrel{\scalebox{0.6}{\ \normalfont{c}}}{\sim} f' \quad\Longleftrightarrow \quad
(\ddot{E}_f,\mu(f))=(\ddot{E}_{f'},\mu(f')),
\end{equation}
where $\mu(f)\in {\mathbb{Z}}/\operatorname{d}(\bfcc_f){\mathbb{Z}}$ is the \emph{modular invariant} defined in~\eqref{eq:minv_f_dfn}.
\end{theorem}
\begin{remark}\label{rmk:Mar}
Alternatively, \cref{thm:c-equivalent} may be deduced from the recently updated version of \cite[Theorem~B]{Mar}.
\end{remark}
%
\subsection{Constructing \texorpdfstring{${\epsilon}$}{epsilon}-straight diagrams explicitly}\label{sec:eps_straight_construct}
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=0.15\textwidth]{figures/VC1}
&
\includegraphics[width=0.4\textwidth]{figures/arrowdiagram_VC1}
\\
(a) $\ddot{E}$. &(b) $\DiagOp(\ddot{E})$.
\\
\includegraphics[width=0.22\textwidth]{figures/VC_big}
&
\includegraphics[width=0.6\textwidth]{figures/arrow_diagram_big}
\\
(c) $\ddot{E}$. &(d) $\DiagOp(\ddot{E})$.
\end{tabular}
%
%
%
%
%
%
%
%
\caption{\label{fig:arrowdiagram} A strongly decorated vector configuration (left) and an ${\epsilon}$-straight arrow diagram (right); see \cref{sec:eps_straight_construct}.}
\end{figure}
Let $\ddot{E}=(E,\bfcc)$ be a strongly decorated vector configuration and fix a small $\epsilon>0$. Our goal is to construct an ${\epsilon}$-straight arrow diagram $\DiagOp(\ddot{E})=\Diagx_g(x)$ for some $x\in\Ln^\circ$ and c-reduced $g\in\Saff_n$ with $\ddot{E}_g=\ddot{E}$. We start with an example and then proceed with a formal description.
\begin{example}
Let $\ddot{E}=(E,\bfcc)$ denote the strongly decorated vector configuration shown in \figref{fig:arrowdiagram}(a). Thus, the vectors in $E$ are ${\textcolor{red}{\e_1}} = (2,0)$, ${\textcolor{blue}{\e_2}}=(2,2)$, and $\cc^{{\textcolor{red}{\e_1}}}=\cc^{{\textcolor{blue}{\e_2}}}=(2)$. An ${\epsilon}$-straight arrow diagram $\DiagOp(\ddot{E})$ is shown in \figref{fig:arrowdiagram}(b).
On the other hand, if $\ddot{E}=(E,\bfcc)$ is the strongly decorated vector configuration shown in \figref{fig:arrowdiagram}(c), then $E$ consists of a single vector $\e=(18,12)$ decorated by a cyclic composition $\cc^\e=(2,1,3)$. The associated ${\epsilon}$-straight arrow diagram $\DiagOp(\ddot{E})$ is constructed in \figref{fig:arrowdiagram}(d).
\end{example}
For a vector $\e=(a,b)\in{\mathbb{Z}}^2$, we denote $\operatorname{n}(\e):=a$ and $\operatorname{k}(\e):=b$. For $\e \in E$, let $\nu(\e)=\operatorname{k}(\e)/\operatorname{n}(\e)$ denote its slope. Assume that $\operatorname{n}(\e) > 0$ for all $\e \in E$. Let $\bfcc=(\cc^\e)_{\e \in E}$ and $\cc^\e=(\cc^\e_1,\dots,\cc^\e_{m_\e})$. Consider the circle ${\mathbb{R}}/{\mathbb{Z}}$ and choose a collection of \emph{starting points} $\bm{p}=(\bar p^\e_i)_{\e \in E, i \in [m_\e]}$, where $\bar p^\e_i \in {\mathbb{R}}/{\mathbb{Z}}$. Let $\bar P^\e_i:=\{\bar p^\e_i+r\nu(\e) \mid r\in{\mathbb{Z}}\}\subset{\mathbb{R}}/{\mathbb{Z}}$ be the set containing $\bar p^\e_i$ and consisting of $\operatorname{n}(\e)/\ilen|\e|$ equally spaced points on a circle. We choose $\bm{p}$ so that we additionally have:
\begin{enumerate}
\item $\operatorname{dist}_{{\mathbb{R}}/{\mathbb{Z}}}(\bar P^\e_i,\bar P^{\e'}_{i'})>{\epsilon}$ for all $(\e,i) \neq (\e',i')$; and
\item the points $(\bar p^\e_1,\bar p^\e_2,\dots,\bar p^\e_{m_\e})$ are cyclically ordered in ${\mathbb{R}}/{\mathbb{Z}}$.
%
\end{enumerate}
Now, for each fixed $\e \in E$ and $i \in [m_\e]$, we construct an arrow diagram $\operatorname{D}^\e_i$. Let $P^\e_i\subset{\mathbb{R}}$ be the preimage of $\bar P^\e_i$ under the projection ${\mathbb{R}}\to{\mathbb{R}}/{\mathbb{Z}}$, and choose $p'\in P^\e_i$. Set $d:=\cc^\e_i$. For each $r \in [d]$, set $p'_r:=p'+\frac {r{\epsilon}}{d}$. We refer to the points $(p'_r)_{r\in[d]}$ as the \emph{block} associated to $p'$, and denote by $P'_{\e,i}:=P^\e_i+\frac{{\epsilon}}d[d]$ the set of points in all such blocks. Let $\bar{p}'\in\bar P^\e_i$ be the image of $p'$ in ${\mathbb{R}}/{\mathbb{Z}}$. If $\bar{p}'\neq \bar p^\e_i$ then we draw an arrow $(p'_r,1)\to(p'_r+\nu(\e),0)$ for each $r\in[d]$. Otherwise, we draw an arrow $(p'_r,1)\to(p'_{\sigma(r)}+\nu(\e),0)$ for each $r\in[d]$, where $\sigma=(1\,2\,\dots\,d)\in S_d$ is a $d$-cycle. The resulting arrow diagram is denoted $\operatorname{D}^\e_i$.
Let $\bm{P}:=\bigsqcup_{\e \in E,\, i \in [m_\e]} P'_{\e,i}\subset{\mathbb{R}}$ be the resulting set of points, and let $\DiagOp(\ddot{E}):=\bigcup_{\e \in E,\, i \in [m_\e]} \operatorname{D}^\e_i$ be the corresponding arrow diagram.
%
%
Let $x:{\mathbb{Z}}\to\bm{P}$ be an order-preserving bijection. Then there exists a unique affine permutation $g\in\Saff_n$ such that $\DiagOp(\ddot{E})=\Diagx_g(x)$. By construction, $\ddot{E}_g=\ddot{E}$ and $\ell(g)=\ell(\dot{E})$, where $\dot{E}$ is the weakly decorated vector configuration underlying $\ddot{E}$; this completes the proof of \cref{lemma:c_red_ell}. By~\cref{lemma:c_red_ell}, $g$ is c-reduced.
\subsection{Affine permutations of constant slope}
\begin{definition}
Let $f\in\Saff_n$ and $\nu\in{\mathbb{Q}}$. We say that $f$ is \emph{of constant slope $\nu$} if $\slopes_f=\{\nu\}$. (That is, if all cycles of $f$ are of the same slope $\nu$.)
\end{definition}
\noindent It is clear that if $f\in\Saff^{(k)}_n$ is of constant slope $\nu$ then we must have $\nu=k/n$.
Recall that $\Sext_n$ is a quotient of $\Saff_n$ by $\La^n$. We denote the quotient map $\Saff_n\to\Sext_n$ by $f\mapsto \hat f$.
\begin{lemma}\label{lemma:fin}
Let $f\in\Saff_n$. Then $\hat f\in\Sext_n$ has finite order if and only if $f$ is of constant slope.
\end{lemma}
\begin{proof}
Let $N$ be the least common multiple of $\operatorname{n}_f(C)$ for all $C\in\Cycles_f$. Then $f^N$ is a translation element; that is, $f^N(i)=i+d_i n$ for all $i\in{\mathbb{Z}}$, where $(d_i)_{i\in{\mathbb{Z}}}$ is some sequence of integers. Explicitly, if $i\in C$ then $d_i=N\slp_f(\Cycle)\in{\mathbb{Z}}$. Now, $\hat f$ has finite order if and only if $f^M\in\<\La^n\>$ for some $M\geq1$, i.e., if and only if some power of $f$ translates every $i\in{\mathbb{Z}}$ by the same multiple of $n$. By the above, this happens if and only if the numbers $d_i$ (equivalently, the slopes $\slp_f(i)$) do not depend on $i$, i.e., if and only if $f$ is of constant slope.
\end{proof}
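The dichotomy in this proof is easy to observe experimentally. The sketch below (ours; it relies on \texttt{math.lcm}, available in Python~3.9 and later) computes the translation amounts $d_i$ for the running example, which has two distinct slopes, and for $\La^2\in\Saff_4$, which has constant slope $1/2$:
\begin{verbatim}
from math import lcm                        # Python >= 3.9

def make(window):
    n = len(window)
    return (lambda i: window[(i - 1) % n] + ((i - 1) // n) * n), n

for window in ([7, -1, 2, 5, 8, 3, 11],     # slopes 1/2 and -1/3: not constant
               [3, 4, 5, 6]):               # window of \La^2 in \Saff_4: constant slope 1/2
    f, n = make(window)
    N = lcm(*range(1, n + 1))               # a common multiple of all cycle lengths
    d = []
    for i in range(1, n + 1):
        j = i
        for _ in range(N):
            j = f(j)
        d.append((j - i) // n)              # f^N(i) = i + d_i * n
    print(d)
# [210, -140, -140, 210, 210, -140, 210]    -> hat f has infinite order
# [6, 6, 6, 6]                              -> hat f has finite order
\end{verbatim}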
%
Let $f\in\Saff^{(k)}_n$ be c-reduced and of constant slope, and set $d:=\gcd(k,n)$. By \cref{cor:c_red_charact}, the arrows between different cycles of $f$ do not cross. Therefore, for each cycle $C\in\Cycles_f$, we have $C=C+d$ as subsets of ${\mathbb{Z}}$. Denoting by $I_C\subset{\mathbb{Z}}/d{\mathbb{Z}}$ the image of $C$ under the quotient map ${\mathbb{Z}}\to{\mathbb{Z}}/d{\mathbb{Z}}$, we get a partition $\Ibm_f=\{I_C\mid C\in\Cycles_f\}$ of ${\mathbb{Z}}/d{\mathbb{Z}}$ into cyclic intervals.\footnote{The case where $f$ is a single cycle requires special care. As mentioned after \cref{dfn:clicks}, we distinguish between different cyclic intervals $[j,j+d-1]$ of ${\mathbb{Z}}/d{\mathbb{Z}}$. Topologically, the standard arrow diagram of $f$ (viewed as a union of arrows) will be disconnected, and we choose $\Ibm_f:=\{[j,j+d-1]\}$ for $j\in{\mathbb{Z}}/d{\mathbb{Z}}$ such that the points $(j,1)$ and $(j-1,1)$ belong to different connected components.} It is clear that $\Ibm_f$ is invariant under c-equivalence.
%
%
%
%
%
%
\begin{proposition}[{\cite[Proposition~A]{Mar}}]\label{prop:Mar_A}
Let $f,f'\in\Saff^{(k)}_n$ be c-reduced and of constant slope. Then %
\begin{equation*}%
%
f\stackrel{\scalebox{0.6}{\ \normalfont{c}}}{\sim} f' \quad\text{if and only if} \quad \Ibm_f=\Ibm_{f'}.
\end{equation*}
\end{proposition}
%
We say that a cyclic composition $\cc=(\cc_1,\cc_2,\dots,\cc_m)$ is written in \emph{normal form} if the sequence $(\cc_1,\cc_2,\dots,\cc_m)$ is lexicographically maximal out of all sequences obtained by rotating $\cc$, i.e., $(\cc_r,\cc_{r+1},\dots,\cc_m,\cc_1,\dots,\cc_{r-1})$ for $r\in[m]$. As in \cref{dfn:clicks}, we associate to $\cc$ a partition $\Ibm_{\cc}=(I_1,I_2,\dots,I_m)$ of ${\mathbb{Z}}/d{\mathbb{Z}}$ (where $d=\cc_1+\cc_2+\cdots+\cc_m$) into cyclic intervals given by $I_1=[1,\cc_1]$, $I_2=[\cc_1+1,\cc_1+\cc_2]$, etc.
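For example (an illustration of ours), the normal form of the cyclic composition $(2,1,3)$ appearing in \figref{fig:arrowdiagram}(c) is $(3,2,1)$:
\begin{verbatim}
def normal_form(cc):     # lexicographically largest rotation of a cyclic composition
    return max(tuple(cc[r:] + cc[:r]) for r in range(len(cc)))

print(normal_form([2, 1, 3]))   # (3, 2, 1)
\end{verbatim}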
Note that if $\cc=\cc^{f,\slp}$ then we have $d=\cc_1+\cc_2+\cdots+\cc_m=\gcd(k,n)$, and therefore we have two partitions $\Ibm_{\cc}$ and $\Ibm_f$ of ${\mathbb{Z}}/d{\mathbb{Z}}$ into cyclic intervals. These partitions are related by a rotation $\sigma^r$ of ${\mathbb{Z}}/d{\mathbb{Z}}$ for some $r$; however, this rotation is only defined up to a symmetry of $\Ibm_{\cc}$, i.e., up to $\sigma^{\operatorname{rot}(\cc)}$. (Here, $\operatorname{rot}(\cc)$ divides $d$.) %
\begin{definition}\label{dfn:minv_const_slp}
Let $f\in\Saff^{(k)}_n$ be c-reduced of constant slope $\nu=k/n$, and let $\cc:=\cc^{f,\slp}$ be written in normal form. The \emph{modular invariant} $\mu(f)\in{\mathbb{Z}}/\operatorname{rot}(\cc){\mathbb{Z}}$ is the unique element such that $\sigma^{\mu(f)}(\Ibm_{\cc})=\Ibm_f$.
%
%
\end{definition}
\begin{corollary}\label{cor:c_red_const_slp}
Let $f,f'\in\Saff^{(k)}_n$ be c-reduced and of constant slope $\nu=k/n$. Then %
\begin{equation*}%
f\stackrel{\scalebox{0.6}{\ \normalfont{c}}}{\sim} f' \quad\text{if and only if} \quad (\cc^{f,\slp},\mu(f)) = (\cc^{f',\slp},\mu(f')).
\end{equation*}
\end{corollary}
\begin{proof}
The $\Longrightarrow$ direction is clear since both $\cc^{f,\slp}$ and $\mu(f)$ are invariant under c-equivalence. Conversely, having $\cc^{f,\slp}=\cc^{f',\slp}$ implies that $\Ibm_f$ and $\Ibm_{f'}$ coincide up to cyclic shift, and $\mu(f)=\mu(f')$ guarantees that $\Ibm_f=\Ibm_{f'}$. The result then follows from \cref{prop:Mar_A}.
%
\end{proof}
%
\subsection{Finishing the proof}
%
For $f\in\Saff_n$ and $\nu\in\slopes_f$, let $f|_\slp:=f|_{C_{f,\slp}}$. Thus, $f|_\slp$ has constant slope $\nu$. If in addition $f$ is c-reduced then by \cref{cor:restr_c_reduced}, so is $f|_\slp$. In this case, recall from \cref{dfn:minv_const_slp} that the modular invariant $\mu(f|_\slp)$ is an element of ${\mathbb{Z}}/\operatorname{rot}(\cc^{f,\slp}){\mathbb{Z}}$. By~\eqref{eq:clicks_dfn}, $\operatorname{d}(\bfcc_f)$ is defined as the greatest common divisor of the numbers $\operatorname{d}(\cc^{f,\slp})$ over all $\nu\in\slopes_f$.
\begin{definition}
For c-reduced $f\in\Saff_n$, define the \emph{modular invariant} $\mu(f)\in {\mathbb{Z}}/\operatorname{d}(\bfcc_f){\mathbb{Z}}$ by
\begin{equation}\label{eq:minv_f_dfn}
\mu(f):=\sum_{\nu\in\slopes_f} \mu(f|_\slp) \quad {\operatorname{mod}}\ \operatorname{d}(\bfcc_f).
\end{equation}
\end{definition}
%
\begin{lemma}\label{lemma:c-red=>minv}
Let $f,f'\in\Saff_n$ be c-reduced. If $f\stackrel{\scalebox{0.6}{\ \normalfont{c}}}{\sim} f'$ then $\mu(f)=\mu(f')$.
\end{lemma}
\begin{proof}
Suppose that $f\xrightarrow{s_i} f'$ for some $i\in[n]$. The restrictions $f|_\nu$ and $f'|_\nu$ are c-equivalent for all $\nu\in\slopes_f$ (which implies the result by \cref{cor:c_red_const_slp}) unless $i=n$ and $\slpfx(0)\neq \slpfx(1)$. Suppose that we are in that case and let $\nu_0:=\slpfx(0)$, $\nu_1:=\slpfx(1)$.
%
%
Since $\nu_0\neq\nu_1$, by the definition of $f|_{\nu_0}$ in~\eqref{eq:restrict}, we see that $f'|_{\nu_0}=\sigma(f|_{\nu_0})$ and $f'|_{\nu_1}=\sigma^{-1} (f|_{\nu_1})$. Here, $\sigma(g)=\La g \La^{-1}$ is the rotation operator introduced in \cref{sec:aff_perm_backgr}. Thus, $\mu(f'|_{\nu_0})=\mu(f|_{\nu_0})+1$ and $\mu(f'|_{\nu_1})=\mu(f|_{\nu_1})-1$, and the sum in~\eqref{eq:minv_f_dfn} remains the same.
%
%
\end{proof}
We will need one more tool for working with ${\epsilon}$-straight diagrams from \cref{sec:eps_straight}. Fix c-reduced $f\in\Saff_n$ and small ${\epsilon}>0$. For $x\in\Straight_{\eps}^\circ(f)$ such that $\Diag(x)$ is c-reduced, recall from \cref{cor:c_red_charact} that $\Diag(x)$ contains no crossings between distinct cycles of the same slope.
\begin{definition}
Let $x\in\Straight_{\eps}^\circ(f)$ be c-reduced and let $\bm{a}:=(a_C)_{C\in\Cycles_f}$ be a family of real numbers associated to the cycles of $f$. Consider a curve $x(t)$, $t\geq0$, given for each $i\in{\mathbb{Z}}$ by $\xtx(i)=\xx(i)+ta_{C}$, where $C$ is the cycle containing $i$. %
Let $T>0$ be such that for $t\in[0,T]$, $\xtx(i)\neq \xtx(j)$ for any $i\neq j$ such that $\slp_f(i)=\slp_f(j)$. In this case, we say that $x':=\xtt_T$ is obtained from $x=\xtt_0$ by \emph{block-shifting}.
%
\end{definition}
In other words, block-shifting allows us to move the collections of points $(\xx(i))_{i\inC}$ independently for each cycle $C$, subject to the condition that two cycles of the same slope never collide. It is clear that for ${\epsilon}$ sufficiently small, if $x\in\Straight_{\eps}^\circ(f)$ is c-reduced and $x'\in\Straight_{\eps}^\circ(f)$ is obtained from $x$ by block-shifting then $x'$ is c-reduced and $\Diag(x)\to\Diag(x')$.
%
\begin{proof}[Proof of \cref{thm:c-equivalent}]
The $\Longrightarrow$ direction follows from \cref{lemma:c-red=>VCF,lemma:c-red=>minv}.
For the $\Longleftarrow$ direction, let $f,f'\in\O_{\min}$. Thus, $f'=gfg^{-1}$ for some $g\in\Saff^{(0)}_n$. Let $x,x'\in\Straight_{\eps}^\circ(f)$ be obtained from $\frac1n \operatorname{id}, \frac1n g\in\Ln^\circ$ via \cref{prop:ODE} so that $\Diag(\frac1n\operatorname{id})\to\Diag(x)$ and $\Diagx_{f'}(\frac1n\operatorname{id})=\Diag(\frac1n g) \to \Diag(x')$.
%
Set $h:=g_xfg_x^{-1}$ and $h':=g_{x'}fg_{x'}^{-1}$. We have $f\stackrel{\scalebox{0.6}{\ \normalfont{c}}}{\sim} h$ and $f'\stackrel{\scalebox{0.6}{\ \normalfont{c}}}{\sim} h'$. Since $f,f'$ are c-reduced, so are $\Diag(x),\Diag(x')$ and $h,h'$. Since $\VCDD_f=\VCDD_{f'}$, and thus $\VCDD_h=\VCDD_{h'}$, we see that the
partitions $\Ibm_{h|_\nu}$ and $\Ibm_{h'|_\nu}$ of ${\mathbb{Z}}/\dop_f(\nu){\mathbb{Z}}$ into cyclic intervals
%
differ by rotation for all $\nu\in\slopes_f$. Our goal is to apply block-shifting to $\Diag(x)$ with the aim of achieving $\Ibm_{h|_\nu}=\Ibm_{h'|_\nu}$ for all $\nu\in\slopes_f$. To do so, consider the following operation on the partitions $(\Ibm_{h|_\nu})_{\nu\in\slopes_f}$:
\begin{equation}\label{eq:rot_pm}
\text{for some $\nu\neq\nu'$ in $\slopes_f$, replace $\Ibm_{h|_\nu}\mapsto \sigma(\Ibm_{h|_\nu})$ and $\Ibm_{h|_{\nu'}}\mapsto\sigma^{-1}(\Ibm_{h|_{\nu'}})$.}
\end{equation}
We first explain how to obtain~\eqref{eq:rot_pm} via block-shifting.
%
%
%
%
%
%
Applying block-shifting to $\Diag(x)$ corresponds to applying a sequence ${h\xrightarrow{s_{i_1}} h_1\xrightarrow{s_{i_2}} \cdots \xrightarrow{s_{i_l}} h'}$ of c-equivalences. In order to control how each restriction $h|_\nu$ changes under such operations, we need to distinguish between the cases $i_j=n$ and $i_j\neq n$ as we did in the proof of \cref{lemma:c-red=>minv}.
%
%
%
%
%
Recall the notion of the cutoff point from~\eqref{eq:cutoff_dfn}. Suppose that applying block-shifting to $x$ switches the positions of adjacent points $x_j$ and $x_k$ for some $j,k$. If the cutoff point of $x$ lies between $x_j+m$ and $x_k+m$ for some $m\in{\mathbb{Z}}$ then the corresponding c-equivalence is a conjugation by $s_n$; otherwise it is a conjugation by $s_i$ for some $i\in [n-1]$.
%
Consider slopes $\nu\neq\nu'$ in $\slopes_f$. We may apply block-shifting to move $C_{f,\slp}$ (resp., $C_{f,\slp'}$) to the right (resp., left) so that no point in ${ \operatorname{Im}}(x)$ passes through the cutoff point $c$ of $x$, until $c$ is located in an interval of ${\mathbb{R}}\setminus{ \operatorname{Im}}(x)$ between a point of $C_{f,\slp}$ and a point of $C_{f,\slp'}$. We may then shift $C_{f,\slp}$ (resp., $C_{f,\slp'}$) further to the right (resp., left) until these two points swap places. This corresponds to replacing $h|_\nu$ with $\sigma(h|_\nu)$ and $h|_{\nu'}$ with $\sigma^{-1}(h|_{\nu'})$, which results in applying~\eqref{eq:rot_pm} to $\Ibm_{h|_\nu}$ and $\Ibm_{h|_{\nu'}}$.
%
Recall that for $\nu\in\slopes_f$, by the definition of $\operatorname{rot}(\cc^{f,\slp})$, we have $\sigma^{\operatorname{rot}(\cc^{f,\slp})}(\Ibm_{h|_\nu})=\Ibm_{h|_\nu}$.
Let $d:=\operatorname{d}(\bfcc_f)=\gcd\{\operatorname{rot}(\cc^{f,\slp})\mid\nu\in\slopes_f\}$. Write $d=\sum_{\nu'\in\slopes_f} a_{\nu'} \operatorname{rot}(\cc^{f,\slp'})$ for some integers $a_{\nu'}$. Then, for each fixed $\nu\in\slopes_f$, we have $(a_{\nu}\operatorname{rot}(\cc^{f,\slp})-d)+ \sum_{\nu'\neq\nu} a_{\nu'} \operatorname{rot}(\cc^{f,\slp'})=0$, and therefore we can use~\eqref{eq:rot_pm} to rotate each $\Ibm_{h|_{\nu'}}$ by the corresponding coefficient. The result of this operation is
\begin{equation}\label{eq:rot_d}
\text{replace $\Ibm_{h|_{\nu}}\mapsto \sigma^{-d}(\Ibm_{h|_{\nu}})$ and preserve $\Ibm_{h|_{\nu'}}$ for all $\nu'\neq \nu$.}
\end{equation}
Fix $\nu\in\slopes_f$. Applying~\eqref{eq:rot_pm}, we can achieve $\Ibm_{h|_{\nu'}}=\Ibm_{h'|_{\nu'}}$ for all $\nu'\neq\nu$. Since $\mu(h)=\mu(h')$, we see that $\Ibm_{h|_{\nu}}$ and $\Ibm_{h'|_{\nu}}$ differ by rotation by a multiple of $d$, so applying~\eqref{eq:rot_d}, we achieve $\Ibm_{h|_{\nu}}=\Ibm_{h'|_{\nu}}$.
%
%
%
By \cref{prop:Mar_A}, we have $h|_{\nu} \stackrel{\scalebox{0.6}{\ \normalfont{c}}}{\sim} h'|_{\nu}$ for all $\nu\in\slopes_f$. Since $h|_{\nu}$ and $h'|_{\nu}$ are c-reduced, they have no crossings between different cycles. Thus, each c-equivalence in $h|_{\nu} \stackrel{\scalebox{0.6}{\ \normalfont{c}}}{\sim} h'|_{\nu}$ swaps points from the same cycle. Such points are close together in $x$, and we therefore can lift these c-equivalences to $h$ so that we get $h|_{\nu} = h'|_{\nu}$ for all $\nu\in\slopes_f$. Replacing $h,h'$ with $\sigma^r(h),\sigma^r(h')$ for some $r$, we may assume that the cutoff points of $x$ and $x'$ are not ${\epsilon}$-close to any point in ${ \operatorname{Im}}(x)\cup{ \operatorname{Im}}(x')$. In this case, we still have $h|_{\nu} = h'|_{\nu}$ for all $\nu\in\slopes_f$.
%
Applying block-shifting to $x$ so that for $\nu\neq\nu'$, no point in $C_{f,\slp}$ is ${\epsilon}$-close to a point in $C_{f,\slp'}$, we find that $h$ and $h'$ are c-equivalent.
\end{proof}
\section{Relating affine permutations to bipartite graphs on a torus}\label{sec:aff_to_bip}
The goal of this section is to apply the results of \cref{sec:affine-perm-cycl,sec:c-equiv-structure} to bipartite graphs embedded in ${\mathbb{T}}$ and to finish the proof of our main results, \cref{thm:intro:move_red,thm:intro:move_eq}.
\subsection{The double affine symmetric group}
%
The \textit{double affine symmetric group} $\ddot{S}_n$ is generated by
$S \sqcup \bar{S} \sqcup \{\La\}$, where $S:=\{s_i \mid i \in {\mathbb{Z}}/n{\mathbb{Z}} \}$ and $\bar S := \{ s_{\overline{i}} \mid i \in {\mathbb{Z}}/n{\mathbb{Z}} \}$, subject to the relations
\begin{align}
s_i s_{i+1} s_{i}&=s_{i+1} s_i s_{i+1}, & \La s_{i+1} &= s_{i}\La, & s_i^2&=1, & s_i s_j &=s_j s_i \quad\text{if }|i-j|>1, \nonumber\\
s_{\overline{i}} s_{\overline{i+1}} s_{\overline{i}} &=s_{\overline{i+1}} s_{\overline{i}} s_{\overline{i+1}}, &
%
\La s_{\overline{i+1}} &= s_{\overline{i}}\La, &
s_{\overline{i}}^2&=1, & s_{\overline{i}} s_{\overline{j}} &=s_{\overline{j}} s_{\overline{i}} \quad\text{if }|i-j|>1, \label{eq:dsaffnrelations}\\
\La^n &=1, & s_i s_{\overline{j}}&=s_{\overline{j}} s_i. & &\nonumber
\end{align}
In other words, we have an isomorphism $\ddot{S}_n\cong(\Saffon \times \Saffon)\rtimes \langle \Lan \rangle/\langle \Lan^n \rangle$, where $\La$ acts on each copy of $\Saff^{(0)}_n$ by conjugation. %
%
%
%
%
%
Any element $w \in \ddot{S}_n$ can be written as a product $w=s_{i_1} s_{i_2} \cdots s_{i_l} \La^k s_{\overline{j_m}}s_{\overline{j_{m-1}}} \cdots s_{\overline{j_1}} $ for some $k \in \{0,1,\dots,n-1\}$ and $l,m \geq 0$. If $l+m$ is minimal among all such ways of writing $w$ as a product, then $s_{i_1} s_{i_2} \cdots s_{i_l} \La^k s_{\overline{j_m}}s_{\overline{j_{m-1}}} \cdots s_{\overline{j_1}}$ is called a \textit{reduced expression} for $w$, and $l+m$ is called the \textit{length} of $w$ and denoted $\ell(w)$. Note that
\begin{equation*}%
f:=s_{i_1} s_{i_2} \cdots s_{i_l} \La^k \quad\text{and}\quad {\overline{f}}:= s_{{n-j_1+1}}s_{{n-j_2+1}} \cdots s_{{n-j_m+1}} \La^{k}
\end{equation*}
%
are then reduced expressions for affine permutations $f,{\overline{f}}\in\Saff_n$. %
%
We denote $\phi(w):=(f,{\overline{f}})$ and call $(f,{\overline{f}})$ the \textit{pair of affine permutations} associated to $w$. We have $\ell(w)=\ell(f)+\ell({\overline{f}})$. We explain the reasoning behind the formula for ${\overline{f}}$
%
%
in~\cref{rem:barsmap}.
\begin{remark}\label{rmk:perab_invertible}
For any $k\in{\mathbb{Z}}$ and $f,{\overline{f}}\in\Saff^{(k)}_n$, there exists $w\in\ddot{S}_n$ satisfying $\phi(w)=(f,{\overline{f}})$.
\end{remark}
\subsection{Relating triple-crossing diagrams in ${\mathbb{A}}$ to double affine permutations}\label{sec:affine_plabic_fence}
%
%
%
\begin{figure}
\centering
%
\def\phantomwhite(#1){\draw[black!5] (0.3,0.3)--(0.3,0.3)}
\begin{tikzpicture}[yscale=0.4,xscale=0.45]
\def1.0{1.0}
\def0.8{0.8}
\def5.5{5.5}
\def6.0{6.0}
\def0.8{0.8}
\def\newlabels{
\node[scale=1.0] (no) at (-1,0.5) {$i+1$};
\node[scale=1.0] (no) at (-1,-1.5) {$i$};
\node[scale=1.0] (no) at (-1,-3.5) {$i-1$};
\node[scale=0.8] (no) at (-1,-4.25) {$\vdots$};
\node[scale=1.0] (no) at (-1,-5.5) {$1$};
\node[scale=1.0] (no) at (-1,2.5) {$i+2$};
\node[scale=0.8] (no) at (-1,4.25) {$\vdots$};
\node[scale=1.0] (no) at (-1,5.5) {$n$};
}
\node[] (no) at (4,-8) {$D(s_i)$};
\node[] (no) at (5.5+6.0+4,-8) {$D(s_{\bar i})$};
\node[] (no) at (2*5.5+2*6.0+4,-8) {$D(\La)$};
\draw[-] (9.25,6.5) -- (9.25,-8.5);
\draw[-] (5.5+6.0+9.25,6.5) -- (5.5+6.0+9.25,-8.5);
\begin{scope}[shift={(0,0)}] %
\def\r{2};
\fill[black!5] (0,-6) rectangle (3,6);
\draw[dashed,\dashcolor,-] (0,-6) rectangle (3,6);
\draw[] (0,-5.5) -- (3,-5.5);
\phantomwhite(1.5,-5.5);
\draw[] (0,-3.5) -- (3,-3.5);
\phantomwhite(1.5,-3.5);
\draw[] (0,-1.5) -- (3,-1.5);
\coordinate[wvert] (w) at (1.5,-1.5);
\draw[] (0,.5) -- (3,.5);
\coordinate[bvert] (b) at (1.5,.5);
\phantomwhite(0,.5);
\phantomwhite(3,.5);
\draw (b)--(w);
\draw[] (0,2.5) -- (3,2.5);
\phantomwhite(1.5,2.5);
\draw[] (0,5.5) -- (3,5.5);
\phantomwhite(1.5,5.5);
\node[] (no) at (1.5,-4.25) {$\vdots$};
\node[] (no) at (1.5,4.25) {$\vdots$};
\newlabels
\end{scope}
\begin{scope}[shift={(5.5+6.0,0)}] %
\def\r{2};
\fill[black!5] (0,-6) rectangle (3,6);
\draw[dashed,\dashcolor,-] (0,-6) rectangle (3,6);
\draw[] (0,-5.5) -- (3,-5.5);
\phantomwhite(1.5,-5.5);
\begin{scope}[yshift=-1cm]
\draw[] (0,-2.5) -- (3,-2.5);
\phantomwhite(1.5,-2.5);
\draw[] (0,1.5) -- (3,1.5);
\coordinate[wvert] (w) at (1.5,1.5);
\draw[] (0,-.5) -- (3,-.5);
\coordinate[bvert] (b) at (1.5,-.5);
\phantomwhite(0,-.5);
\phantomwhite(3,-.5);
\draw (b)--(w);
\draw[] (0,3.5) -- (3,3.5);
\phantomwhite(1.5,3.5);
\end{scope}
\draw[] (0,5.5) -- (3,5.5);
\phantomwhite(1.5,5.5);
\node[] (no) at (1.5,-4.25) {$\vdots$};
\node[] (no) at (1.5,4.25) {$\vdots$};
\newlabels
\end{scope}
\begin{scope}[shift={(2*5.5+2*6.0,0)}]%
\def\r{2};
\fill[black!5] (0,-6) rectangle (3,6);
\draw[dashed,\dashcolor,-] (0,-6) rectangle (3,6);
%
%
%
%
%
%
%
\node[] (no) at (1.5,0) {$\vdots$};
%
%
\begin{scope}
\clip (0,-6) rectangle (3,6);
\draw[] (3,-1.5) -- (0,-3.5);
\phantomwhite(1.5,-2.5);
\draw[] (3,-3.5) -- (0,-5.5);
\phantomwhite(1.5,-4.5);
\draw[] (3,-5.5) -- (0,-7.5);
\phantomwhite(1.5,-4.5);
\draw[] (3,7.5) -- (0,5.5);
\phantomwhite(1.5,6.5);
\draw[] (3,5.5) -- (0,3.5);
\phantomwhite(1.5,4.5);
\end{scope}
\node[scale=1.0] (no) at (-1,-5.5) {$1$};
\node[scale=1.0] (no) at (-1,-3.5) {$2$};
\node[scale=0.8] (no) at (-1,-0.5) {$\vdots$};
\node[scale=1.0] (no) at (-1,3.5) {$n-1$};
\node[scale=1.0] (no) at (-1,5.5) {$n$};
%
\end{scope}
\begin{scope}[shift={(5.5,0)},yscale=1.090909]
\def\r{2};
\fill[black!5] (0,-5.5) rectangle (3,5.5);
\draw[dashed,\dashcolor,-] (0,-5.5) rectangle (3,5.5) ;
\draw[red,->,line width=1pt] (0,-1) .. controls +(1,0) and +(-1,0) ..
(3,1);
\draw[red,->,line width=1pt] (0,1) .. controls +(1,0) and +(-1,0) ..
(3,-1) ;
\draw[red,->,line width=1pt] (3,0) -- (0,0);
\draw[red,->,line width=1pt]
(3,2)--(0,2);
\draw[red,->,line width=1pt]
(0,3)--(3,3)
;
\draw[red,->,line width=1pt]
(3,-2)--(0,-2);
\draw[red,->,line width=1pt]
(0,-3)--(3,-3)
;
\draw[red,->,line width=1pt]
(3,5)--(0,5);
\draw[red,->,line width=1pt]
(0,-5)--(3,-5)
;
\node[scale=0.8] (no) at (-1,0) {$\overline{i}$};
\node[scale=0.8] (no) at (-1,1) {$i+1$};
\node[scale=0.8] (no) at (-1,-1) {$i$};
\node[scale=0.8] (no) at (-1,-2) {$\overline{i-1}$};
\node[scale=0.8] (no) at (-1,-3) {$i-1$};
\node[scale=0.8] (no) at (-1,-4) {$\vdots$};
\node[scale=0.8] (no) at (-1,-5) {$1$};
\node[scale=0.8] (no) at (-1,2) {$\overline{i+1}$};
\node[scale=0.8] (no) at (-1,3) {$i+2$};
\node[] (no) at (1.5,-4) {$\vdots$};
\node[] (no) at (1.5,4) {$\vdots$};
\node[scale=0.8] (no) at (-1,4) {$\vdots$};
\node[scale=0.8] (no) at (-1,5) {$\overline n$};
\end{scope}
\begin{scope}[shift={(2*5.5+6.0,0)},yscale=1.090909]
\def\r{2};
\fill[black!5] (0,-5.5) rectangle (3,5.5);
\draw[dashed,\dashcolor,-] (0,-5.5) rectangle (3,5.5) ;
\draw[red,<-,line width=1pt] (0,-1) .. controls +(1,0) and +(-1,0) ..
(3,1);
\draw[red,<-,line width=1pt] (0,1) .. controls +(1,0) and +(-1,0) ..
(3,-1) ;
\draw[red,<-,line width=1pt] (3,0) -- (0,0);
\draw[red,<-,line width=1pt]
(3,2)--(0,2);
\draw[red,<-,line width=1pt]
(0,3)--(3,3)
;
\draw[red,<-,line width=1pt]
(3,-2)--(0,-2);
\draw[red,<-,line width=1pt]
(0,-3)--(3,-3)
;
\draw[red,->,line width=1pt]
(3,5)--(0,5);
\draw[red,->,line width=1pt]
(0,-5)--(3,-5)
;
\node[scale=0.8] (no) at (-1,0) {$i$};
\node[scale=0.8] (no) at (-1,1) {$\overline{i}$};
\node[scale=0.8] (no) at (-1,-1) {${\overline {i-1}}$};
\node[scale=0.8] (no) at (-1,-2) {$i-2$};
\node[scale=0.8] (no) at (-1,-3) {$\overline{i-2}$};
\node[scale=0.8] (no) at (-1,-4) {$\vdots$};
\node[scale=0.8] (no) at (-1,-5) {$1$};
\node[scale=0.8] (no) at (-1,2) {$i+1$};
\node[scale=0.8] (no) at (-1,3) {$\overline{i+1}$};
\node[] (no) at (1.5,-4) {$\vdots$};
\node[] (no) at (1.5,4) {$\vdots$};
\node[scale=0.8] (no) at (-1,4) {$\vdots$};
\node[scale=0.8] (no) at (-1,5) {$\overline{n}$};
\end{scope}
\begin{scope}[shift={(3*5.5+2*6.0,0)},yscale=1.090909]
\def\r{2};
\fill[black!5] (0,-5.5) rectangle (3,5.5);
\draw[dashed,\dashcolor,-] (0,-5.5) rectangle (3,5.5) ;
\begin{scope}
\clip (0,-5.5) rectangle (3,5.5) ;
\draw[red,<-,line width=1pt] (0,-4) .. controls +(1,0) and +(-1,0) ..
(3,-2);
\draw[red,<-,line width=1pt] (0,-6) .. controls +(1,0) and +(-1,0) ..
(3,-4);
\draw[red,<-,line width=1pt] (0,1) .. controls +(1,0) and +(-1,0) ..
(3,3);
\draw[red,<-,line width=1pt] (0,3) .. controls +(1,0) and +(-1,0) ..
(3,5);
\draw[red,<-,line width=1pt] (0,5) .. controls +(1,0) and +(-1,0) ..
(3,7);
\draw[red,->,line width=1pt] (0,-5) .. controls +(1,0) and +(-1,0) ..
(3,-3);
\draw[red,->,line width=1pt] (0,-3) .. controls +(1,0) and +(-1,0) ..
(3,-1);
\draw[red,->,line width=1pt] (0,-7) .. controls +(1,0) and +(-1,0) ..
(3,-5);
\draw[red,->,line width=1pt] (0,2) .. controls +(1,0) and +(-1,0) ..
(3,4);
\draw[red,->,line width=1pt] (0,4) .. controls +(1,0) and +(-1,0) ..
(3,6);
\end{scope}
\node[scale=0.8] (no) at (-1,4) {$n$};
\node[scale=0.8] (no) at (-1,3) {$\overline{n-1}$};
\node[scale=0.8] (no) at (-1,2) {$n-1$};
\node[scale=0.8] (no) at (-1,1) {$\overline{n-2}$};
\node[scale=0.8] (no) at (-1,5) {$\overline{n}$};
\node[scale=0.8] (no) at (-1,-4) {$\overline{1}$};
\node[scale=0.8] (no) at (-1,-3) {$2$};
\node[scale=0.8] (no) at (-1,-1) {$\vdots$};
\node[] (no) at (1.5,0) {$\vdots$};
\node[scale=0.8] (no) at (-1,-5) {$1$};
\end{scope}
\end{tikzpicture}
\caption{Plabic graphs and triple-crossing diagrams in ${\mathbb{A}}$ associated to generators.}\label{fig:tcdgenerators}
\end{figure}
\begin{figure}
\def0.18\textwidth{0.23\textwidth}
\includegraphics[width=0.18\textwidth]{figures/10movethurston}
\caption{(R1)$''$ Thurston's $1-0$ move. }\label{fig:thurston10}
\end{figure}
Let $w$ be a double affine permutation and let $w_1 w_2 \cdots w_l$ be an expression for $w$, where $w_i \in S \sqcup \bar S \sqcup \{\La\}$. Following Fock and Marshakov \cite{FM}, we associate to the expression $w_1 w_2 \cdots w_l$ a triple-crossing diagram in ${\mathbb{A}}$ as follows. Each generator $s \in S \sqcup \bar S \sqcup \{\La\}$ is assigned a triple-crossing diagram $D(s)$ in ${\mathbb{A}}$ as shown in Figure \ref{fig:tcdgenerators}. The triple-crossing diagram $D(w_1 w_2 \cdots w_l)$ for the expression $w_1 w_2 \cdots w_l$ is obtained by concatenating the diagrams $D({w_1}),D(w_2),\dots,D({w_l})$ from left to right, so that the right boundary of $D({w_i})$ is glued to the left boundary of $D({w_{i+1}})$ for $i \in {\mathbb{Z}}/l{\mathbb{Z}}$. The corresponding plabic graph in ${\mathbb{T}}$ is called an \textit{affine plabic fence}. As explained in \cite[Appendix D]{FM}, each relation in \cref{eq:dsaffnrelations} can be realized using isotopy and moves on the corresponding triple-crossing diagrams, except for the relations $s_i^2=1$ and $s_{\overline{i}}^2=1$, which are realized using Thurston's $1-0$ move (R1)$''$ (Figure \ref{fig:thurston10}). Note that the left-hand side of (R1)$''$ is the same as (R1)$'$ (but the right-hand side is not), and therefore a triple-crossing diagram $D$ is move-reduced if and only if it is not move-equivalent to another triple-crossing diagram $D'$ to which either (R1)$''$ or (R2)$'$ can be applied.
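For a concrete instance of this construction, see Figure~\ref{fig:eg182}, which depicts the triple-crossing diagram $D(w)$ for the expression $w=s_1 s_3 s_4 s_3 s_1 s_4 \La^2 s_{\overline{4}}$ of \cref{example:graphfromsdcnp}.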
\begin{remark}\label{rem:postnikov10}
Postnikov's reduction (R1)$'$ leads to the relations $s_i^2=s_i$ and $s_{\overline {i}}^2=s_{\overline{i}}$ of the \emph{$0$-Hecke monoid}.
\end{remark}
\begin{remark} \label{rem:barsmap}
Rotation by $180$ degrees acts on the triple-crossing diagrams by
\[
D(s_i) \mapsto D({s_{\overline{n-i+1}}}), \quad D(s_{\overline{i}}) \mapsto D({s_{{n-i+1}}}), \quad D(\La) \mapsto D(\La),\]
and induces an antiautomorphism of $\ddot{S}_n$ sending $s_i \mapsto s_{\overline{n-i+1}}$, $s_{\overline{i}} \mapsto s_{n-i+1}$, and $\La \mapsto \La$. We have chosen $\phi(w)=(f,{\overline{f}})$ so that rotation of $D(w)$ by $180$ degrees translates under $\phi$ into an automorphism of $\Saff_n\times\Saff_n$ sending $(f,{\overline{f}}) \mapsto ({\overline{f}},f)$.
\end{remark}
%
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.5]
\begin{scope}[shift={(3,0)}]
\def\r{2};
\fill[black!5] (0,-2) rectangle (3,2);
\draw[dashed,\dashcolor,-] (0,-2)--(0,2) ;
\draw[red,->,line width = 1pt] (0,-1) --
(3,-1);
\draw[green!80!black,->,line width = 1pt] (0,1)--
(3,1) ;
\draw[blue,->,line width = 1pt] (3,0) -- (0,0);
\node[] (no) at (-1,0) {$A_{\overline{i}}$};
\node[] (no) at (-1,1) {$A_{i+1}$};
\node[] (no) at (-1,-1) {$A_{i}$};
\end{scope}
\begin{scope}[shift={(-5,0)}]
\def\r{2};
\fill[black!5] (0,-2) rectangle (3,2);
\draw[dashed,\dashcolor,-] (0,-2)--(0,2) ;
\draw[red,->,line width = 1pt] (0,-1) .. controls +(1,0) and +(-1,0) ..
(3,1);
\draw[green!80!black,->,line width = 1pt] (0,1) .. controls +(1,0) and +(-1,0) ..
(3,-1) ;
\draw[blue,->,line width = 1pt] (3,0) -- (0,0);
\node[] (no) at (-1,0) {$A_{\overline{i}}$};
\node[] (no) at (-1,1) {$A_{i+1}$};
\node[] (no) at (-1,-1) {$A_{i}$};
\end{scope}
\node[](no) at (0,0){$\longrightarrow$};
\end{tikzpicture}
\caption{Uncrossing a triple crossing near the left boundary of ${\mathbb{A}}$ (dashed).}\label{fig:uncrossing}
\end{figure}
\begin{lemma} \label{lemma:tcdtoper}
Suppose $D$ is a move-reduced triple-crossing diagram in ${\mathbb{T}}$ whose Newton polygon $N$ is not a single point. Then, there is a double affine permutation $w=w(D)$ such that $D$ is move-equivalent to $D(w)$, and such that $f,{\overline{f}}$ are both c-reduced, where $\phi(w)=(f,{\overline{f}})$.
%
\end{lemma}
\begin{proof}
\defD{D}
\def\am_{\Dp}{\am_{D}}
Since $N$ is not a point, after applying a move-equivalence using \cref{thm:intro:vertical}, we may assume that
%
the number of intersections of strands in $D$ with the sides of ${\mathbb{A}}$ is minimal. Let $\am:=\am_{\Dp}$ denote the affine matching of $D$; cf. \cref{sec:affine-matchings}. Then, we have $\am(A)=B$ and $\am(\overline B)=\overline A$.
By \cref{rmk:strand_word}, for any strand $\sa$ in $D$, the word $w_{\sa}$ is given by $w_{\sa}=xy^kx$ for $x\in\{l,\r\}$ and $y\in\{u,d\}$. If the strand $\sa$ intersects itself in ${\mathbb{A}}$ then, after repeatedly applying move (T) to the $u-d$ side of ${\mathbb{A}}$ and applying \cref{thm:diskminimal}, we see that $D$ is not move-reduced, a contradiction. From now on, we assume that no strand of $D$ has a self-intersection in ${\mathbb{A}}$.
%
We construct the element $w=w(D)$ by induction on the number of triple crossings in $D$. Suppose $D$ contains no triple crossings. Then, the affine matching is of the form $\am(A_{i})=B_{i+m}$ and $\am(B_{\overline i})=A_{\overline{i-m}}$ for some $m \in {\mathbb{Z}}$. We assign the double affine permutation $w:=\La^m$ to $D$.
Suppose the number of triple crossings in $D$ is nonzero. Since no strand has a self-intersection, there must be three distinct strands in ${\mathbb{A}}$ at a triple crossing, so two of them must have their in-endpoints on the same side of ${\mathbb{A}}$. This implies that there exists $i \in [n]$ such that the strands $\sa$ and $\sb$ emanating respectively from either $A_i$ and $A_{i+1}$ or from $B_{\overline i}$ and $B_{\overline {i+1}}$ cross in ${\mathbb{A}}$. Arguing as in the proofs of Lemmas \ref{lemma:movep} and \ref{lemma:cross}, we can create a triple crossing between $\sa$ and $\sb$ near the boundary of ${\mathbb{A}}$. Let $D'$ be the triple-crossing diagram in ${\mathbb{A}}$ obtained by uncrossing this triple crossing (Figure \ref{fig:uncrossing}). We let $w := s_i w(D')$ (resp., $w:=w(D') s_{\overline{i}}$) if the two strands emanate from $A_i$ and $A_{i+1}$ (resp., $B_{\overline i}$ and $B_{\overline {i+1}}$).
Clearly, $D(w)$ is isotopic to $D$. Let $\phi(w)=(f,{\overline{f}})$. We show that $f$ and ${\overline{f}}$ are c-reduced. Suppose not. By~\cref{thm:HN}, there is a c-reduced pair $(f',{\overline{f}}')$ such that $f \rightarrow f'$ and ${\overline{f}} \rightarrow {\overline{f}}'$, and we must have used either $s_i^2=1$ or $s_{\overline{i}}^2=1$ at least once. This implies that $D(w)$ is not move-reduced, a contradiction.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:tcdmove_red}}\label{sec:proof:move_red}
\eqref{item:D:move_red} $\Longrightarrow$ \eqref{item:D:area}: Suppose $D$ is move-reduced. Since $N$ is not a point, by~\cref{lemma:tcdtoper}, $D$ is move-equivalent to a triple-crossing diagram $D(w)$, where $w$ is a double affine permutation. Therefore, $D$ and $D(w)$ have the same number of triple crossings. By~\cref{lemma:c_red_ell}, the number of triple crossings in $D(w)$ is $\ell(f)+\ell({\overline{f}})=\operatorname{Area}(\Zcal(E_f))+\operatorname{Area}(\Zcal(E_{\overline{f}}))+\excess{\bfla}$, where $\phi(w)=(f,{\overline{f}})$. %
By~\eqref{eq:zon_vs_N} below, we have $\ell(f)+\ell({\overline{f}})=2\operatorname{Area}(N)+\excess{\bfla}$. If $D$ had a contractible connected component $D'$, then $D'$ would have to contain a loop strand, and by~\cref{thm:diskminimal}, $D'$, and therefore $D$, would not be move-reduced, a contradiction.
For the converse implication, we will need the following result.
\begin{lemma}\label{lem:movereduction}
Let $D$ be a triple-crossing diagram with weakly decorated Newton polygon $\Nwdec$. If $D$ is not move-reduced, then there is a move-reduced triple-crossing diagram $D'$ with weakly decorated Newton polygon $\Nwdec$ containing strictly fewer triple crossings than $D$.
\end{lemma}
\begin{proof}
Recall the reduction move (R1)$''$ shown in Figure~\ref{fig:thurston10}. The move (R1)$''$ preserves the connectivity of the strands, and therefore does not change $\Nwdec$. If $D$ is not move-reduced, then we can use moves (M1)$'$, (R1)$''$ and (R2)$'$ to get a move-reduced $D'$ with weakly decorated Newton polygon $\Nwdec$. Since $D$ has no contractible components, (M1)$'$ cannot create contractible components, and therefore we must use (R1)$''$ at least once before we can use (R2)$'$. Since we decrease the number of triple crossings when we apply (R1)$''$, $D'$ contains strictly fewer triple crossings than $D$.
\end{proof}
\eqref{item:D:area} $\Longrightarrow$\eqref{item:D:move_red}: Suppose that $D$ has $2\operatorname{Area}(N)+\excess{\bfla}$ triple crossings and that $D$ has no contractible connected components. If $D$ is not move-reduced, there is a move-reduced $D'$ with weakly decorated Newton polygon $\Nwdec$ with fewer than $2\operatorname{Area}(N)+\excess{\bfla}$ triple crossings by~\cref{lem:movereduction}, contradicting \eqref{item:D:move_red} $\Longrightarrow$ \eqref{item:D:area}. \qed
%
\subsection{Proof of Proposition~\ref{prop:propertiesofmoveredtcd}}\label{sec:proof:propertiestcdmovered}
\begin{figure}
\begin{tabular}{ccc}
\includegraphics[width=0.23\textwidth]{figures/uncrossingmove}
& \qquad &
\includegraphics[width=0.23\textwidth]{figures/22move_uncrossing}
\\
(a) & & (b)
\end{tabular}
\caption{\label{fig:uncrossingmove} (a) Uncrossing the strands $\sa_1$ (blue) and $\sa_2$ (green), while the strand $\sa_3$ (red) is unaffected. (b) The uncrossing move applied to the two strands participating in both triple crossings on the left-hand side of (M1)$'$.}
\end{figure}
Let $p$ be a triple crossing at which three strands $\sa_1,\sa_2,\sa_3$ meet. We call the variant of the skein relation shown in \figref{fig:uncrossingmove}(a) \textit{uncrossing $\sa_1$ and $\sa_2$ at $p$}.
By~\cref{lemma:tcdtoper}, $D$ is move-equivalent to a triple-crossing diagram $D(w)$, where $w \in \ddot{S}_n$ for some $n$ and the associated affine permutations $f,{\overline{f}}$ are c-reduced.
To show part~\eqref{prop:appB_propertiesofmoveredtcd1}, suppose there is a closed loop $\tilde \sa$ in $\tilde D$. Then, the projection $\sa:=\pi(\tilde \sa)$ of this closed loop is a strand with $[\sa]=(0,0)$. Since move-equivalence preserves homology classes of strands, $\sa$ becomes a zero-homology strand in $D(w)$. Since every strand in $D(w)$ moves monotonically to the left or to the right, there are no zero-homology strands in $D(w)$, a contradiction. If $\tilde D$ contains a strand $\tilde{\sa}$ with a self-intersection, then uncrossing $\sa:=\pi(\tilde \sa)$ at the triple crossing where it self-intersects yields a triple-crossing diagram with the same weakly decorated Newton polygon but with fewer triple crossings, contradicting~\cref{thm:tcdmove_red}.
We now show part~\eqref{prop:appB_propertiesofmoveredtcd2}. By~\cref{cor:c_red_charact}, parts~\eqref{prop:appB_propertiesofmoveredtcd2} and~\eqref{prop:appB_propertiesofmoveredtcd3} are true for $D(w)$. Suppose part~\eqref{prop:appB_propertiesofmoveredtcd2} is false for $D$. Then, $D$ is move-equivalent to $D'$ for which part~\eqref{prop:appB_propertiesofmoveredtcd2} is false, but upon applying (M1)$'$ to $D'$, it becomes true. Since (M1)$'$ only removes crossings between the two anti-parallel strands, the two anti-parallel strands $T_1$ and $T_2$ that cross on the left-hand side of (M1)$'$ should both be portions of $\sa$. Upon uncrossing $T_1$ and $T_2$ at both the triple crossings (see~\figref{fig:uncrossingmove}(b)), the Newton polygon is unchanged, and the strand $\sa$ splits into a loop and at most two other strands, so $2\operatorname{Area}(N)+\excess{\bfla}$ can decrease by at most one, but the number of triple crossings decreases by two, contradicting~\cref{thm:tcdmove_red}.
\begin{figure}
\resizebox{1.5in}{!}{
\begin{tikzpicture}[scale=0.4]
\def\r{2};
%
\fill[black!5] (0,0) circle (1*3*\r cm);
\draw[] (0,0) circle (1.0*\r cm);
\draw [line width = 1pt,red,->] (190:3*\r) -- (190:2.5*\r);
\draw [line width = 1pt,blue,->] (170:3*\r) -- (170:2.5*\r);
\draw [line width = 1pt,red,<-] (-10:3*\r) -- (-10:2.5*\r);
\draw [line width = 1pt,blue,<-] (10:3*\r) -- (10:2.5*\r);
\begin{scope}[rotate=45+90]
\coordinate[] (b1) at (-0.5,0.5);
\coordinate[] (b2) at (0.5,-0.5);
\coordinate[] (t1) at (15:\r);
\coordinate[] (t2) at (120-45:\r);
\coordinate[] (t3) at (150-45:\r);
\coordinate[] (t4) at (210-45:\r);
\coordinate[] (t5) at (240-45:\r);
\coordinate[] (t6) at (300-45:\r);
\coordinate[] (t7) at (330-45:\r);
\coordinate[] (t8) at (30-45:\r);
\draw [line width = 1pt,red,->] plot [smooth, tension=1] coordinates {(t5) (b1) (t2)};
\draw [line width = 1pt,blue,->] plot [smooth, tension=1] coordinates {(t1) (b2) (t6)};
\end{scope}
\end{tikzpicture}
}
\caption{There is no way to complete the red and blue strands so that they do not cross without creating a self-intersection.}\label{fig:monogonparallel}
\end{figure}
To show part~\eqref{prop:appB_propertiesofmoveredtcd3}, we will need the following lemma.
%
\begin{lemma}\label{lem:abcd}
Suppose $\sa,\sa' \in \SD$ are two distinct parallel strands that do not intersect. Let $R$ be a closed topological disk in ${\mathbb{T}}$ whose interior contains some portion of $\sa$ and $\sa'$. Let $a$ and $b$ (resp., $c$ and $d$) denote the in- and out-endpoints of $\sa$ (resp., $\sa'$) around the boundary of $R$. Then, the cyclic order of the endpoints around the boundary of $R$ cannot be $abcd$ or $dcba$.
\end{lemma}
\begin{proof}
Let $N\gg1$ be a large positive integer, and consider a circle of radius $N$ centered at a lift of $R$. Then, we have a strand with a self-intersection in the preimage of $D$ (Figure \ref{fig:monogonparallel}) in ${\mathbb{R}}^2$, which contradicts part~\eqref{prop:appB_propertiesofmoveredtcd1} of~\cref{prop:propertiesofmoveredtcd}.
\end{proof}
%
%
%
%
%
Similarly to the above, suppose that part~\eqref{prop:appB_propertiesofmoveredtcd3} is false for $D'$, but upon applying (M1)$'$ to $D'$, it becomes true. The two anti-parallel strands that cross on the left-hand side of (M1)$'$ should be portions of $\sa, \sa'$ respectively. Upon uncrossing $\sa$ and $\sa'$ at both triple crossings, the union of $\sa$ and $\sa'$ becomes the union of a loop and a strand $T$ with homology class $[T]=[\sa]+[\sa']$. Therefore,
$N$ is unchanged and $2\operatorname{Area}(N)+\excess{\bfla}$ decreases by one, but the number of triple crossings decreases by two, again contradicting~\cref{thm:tcdmove_red}.
Finally, suppose there is a face $F$ of $D$ with portions of $\sa,\sa'$ in its boundary. Recall from \cref{dfn:tcd_T} that the strands in $D$ induce a consistent orientation around the boundary of $F$. We let $R$ be a disk that contains a portion of $F$ together with parts of $\sa$ and $\sa'$, and get a contradiction with \cref{lem:abcd}.
%
\qed
%
%
%
\subsection{Proof of Proposition~\ref{prop:intro:exists}}\label{sec:proof:exists}
Suppose $\Ndec=(N,\bfcc)$ is a strongly decorated Newton polygon and $\mu \in {\mathbb{Z}}/\operatorname{d}(\bfcc)$. Recall from \cref{sec:eps_straight_construct} that for $\e=(a,b)\in{\mathbb{Z}}^2$, we denote $\operatorname{n}(\e):=a$ and $\operatorname{k}(\e):=b$, and $\nu(\e)=\operatorname{k}(\e)/\operatorname{n}(\e)$. Using an $\operatorname{SL}_2({\mathbb{Z}})$ transformation, we can assume that $\operatorname{n}(\e) \neq 0$ for all $\e \in E(N)$. We assign to $\Ndec$ the pair $(\ddot{E}_+,\ddot{E}_-)$ of strongly decorated vector configurations, consisting of edges of $N$ oriented to the right and left, respectively, as follows. We define:
\begin{enumerate}
\item $E_+:=\{\e\mid \e \in E(N),\ \operatorname{n}(\e)>0\}$ and $\bfcc_+=(\cc^\e)_{\e \in \ddot{E}_+}$; and
\item $E_-:=\{-\e\mid \e \in E(N),\ \operatorname{n}(\e)< 0\}$ and $\bfcc_-=(\operatorname{rev}(\cc^\e))_{-\e \in \ddot{E}_-}$, where for a cyclic composition $\cc=(\cc_1,\cc_2,\dots,\cc_m)$, $\operatorname{rev}(\cc) := (\cc_m,\cc_{m-1},\dots,\cc_1)$ is the cyclic composition with the cyclic order reversed.
\end{enumerate}
Similarly to \cref{rem:barsmap}, we have rotated the vectors in $E_-$ by $180$ degrees. We have the following basic relation between the area of $N$ and the areas of the zonotopes $\Zcal(E_+)$, $\Zcal(E_-)$:
\begin{equation}\label{eq:zon_vs_N}
2\operatorname{Area}(N)=\operatorname{Area}(\Zcal(E_+))+\operatorname{Area}(\Zcal(E_-)).
\end{equation}
To see this, observe that the lower boundary of $\Zcal(E_+)$ coincides with the lower boundary of $N$ (given by the vectors in $E_+$), and the upper boundary of $\Zcal(E_+)$ is obtained by rotating its lower boundary by $180$ degrees. A similar statement holds for $\Zcal(E_-)$, from which the result follows; see \cref{fig:zon_vs_N}.
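For a concrete illustration of~\eqref{eq:zon_vs_N}, take $N$ to be the triangle with edge vectors $(2,0)$, $(2,2)$ and $(-4,-2)$ from \cref{example:graphfromsdcnp} below. Then $E_+=\{(2,0),(2,2)\}$ and $E_-=\{(4,2)\}$, so that $\operatorname{Area}(\Zcal(E_+))=|2\cdot 2-0\cdot 2|=4$ and $\operatorname{Area}(\Zcal(E_-))=0$, while $2\operatorname{Area}(N)=4$.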
\begin{figure}
\def0.18\textwidth{0.55\textwidth}
\includegraphics[width=0.18\textwidth]{figures/zon_vs_N}
\caption{\label{fig:zon_vs_N} Proof of~\eqref{eq:zon_vs_N}: the dashed line subdivides $N$ into two polygons whose areas are $\frac12\operatorname{Area}(\Zcal(E_+))$ and $\frac12\operatorname{Area}(\Zcal(E_-))$.}
\end{figure}
Let $f$ and ${\overline{f}}$ be a pair of c-reduced affine permutations with $\ddot{E}_{f}=\ddot{E}_+$ and $\ddot{E}_{{\overline{f}}}=\ddot{E}_-$ constructed as in \cref{sec:eps_straight_construct}. Observe that $\sum_{\e\in E_+} \operatorname{k}(\e)=\sum_{\e\in E_-}\operatorname{k}(\e)$, and thus by \cref{rmk:perab_invertible}, there exists $w\in\ddot{S}_n$ satisfying $\phi(w)=(f,{\overline{f}})$.
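(For instance, for the triangle with edge vectors $(2,0)$, $(2,2)$ and $(-4,-2)$ from \cref{example:graphfromsdcnp}, both sums are equal to $2$.)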
%
By~\eqref{eq:zon_vs_N}, the triple-crossing diagram $D:=D(w)$ has the correct number of triple crossings, so it is move-reduced by~\cref{thm:tcdmove_red}.
%
Let $\Gamma:=\Gamma(D)$ be the associated bipartite graph (cf.~\cref{sec:bipartite-to-tcd}). By construction, $\Ndec(\Gamma)=\Ndec$, and replacing $f$ with $\sigma^r(f)$ for some $r$ while keeping ${\overline{f}}$ fixed, we can achieve $\mu(\Gamma)=\mu$.
Finally, we show that $\Gamma$ has a perfect matching. Let $ w=s_{i_1} s_{i_2} \cdots s_{i_l} \La^k s_{\overline{j_1}}s_{\overline{j_2}} \cdots s_{\overline{j_m}} $ be a reduced expression. Omitting all generators $s_{i_k}$ and $s_{\overline{j_k}}$ such that the corresponding vertical edge in $\Gamma$ is traversed by the same strand in opposite directions (i.e., gives rise to a self-intersection in $D(w)$), we get a triple-crossing diagram $D'$ with strongly decorated Newton polygon $(N,\bfcc')$ satisfying $(\cc')^{\e}=(1,1,\dots,1)$ for all $\e \in E(N)$. By~\cref{thm:tcdmove_red} and part~\eqref{prop:appB_propertiesofmoveredtcd2} of~\cref{prop:propertiesofmoveredtcd}, $D'$ is move-reduced, so it is minimal in the sense of \cite{GK13}.
%
The corresponding bipartite graph $\Gamma':=\Gamma(D')$ has a perfect matching by \cite[Lemma 3.11]{GK13}, and since $\Gamma'$ is obtained from $\Gamma$ by deleting a subset of edges, so does $\Gamma$.\qed
\begin{figure}
\begin{tikzpicture}[xscale=0.3,yscale=0.4] %
\def\r{2};
\def1.0{1.0}
\fill[black!5] (0,-1.5) rectangle (27,6.5);
\draw[dashed,\dashcolor,-] (0,-1.5) rectangle (27,6.5);
\def0.8{0.8}
\node[scale=0.8] (no) at (-1,0) {$\overline{1}$};
\node[scale=0.8] (no) at (-1,1) {$2$};
\node[scale=0.8] (no) at (-1,-1) {$1$};
\node[scale=0.8] (no) at (-1,2) {$\overline{2}$};
\node[scale=0.8] (no) at (-1,3) {$3$};
\node[scale=0.8] (no) at (-1,4) {$\overline{3}$};
\node[scale=0.8] (no) at (-1,5) {$4$};
\node[scale=0.8] (no) at (-1,6) {$\overline{4}$};
\begin{scope}[]
\draw[red,line width=1pt] (0,-1) .. controls +(1,0) and +(-1,0) ..
(3,1);
\draw[red,line width=1pt] (0,1) .. controls +(1,0) and +(-1,0) ..
(3,-1) ;
\draw[blue,line width=1pt] (0,3) -- (3,3);
\draw[blue,line width=1pt] (0,5) -- (3,5);
\draw[green!80!black,->,line width=1pt] (3,0) -- (0,0);
\draw[green!80!black,->,line width=1pt] (3,2) -- (0,2);
\draw[green!80!black,->,line width=1pt] (3,4) -- (0,4);
\draw[green!80!black,->,line width=1pt] (3,6) -- (0,6);
\node[] (no) at (1.5,-2) {$s_1$};
\end{scope}
\begin{scope}[shift={(3,0)}]
\draw[blue,line width=1pt] (0,3) .. controls +(1,0) and +(-1,0) ..
(3,5);
\draw[blue,line width=1pt] (0,5) .. controls +(1,0) and +(-1,0) ..
(3,3) ;
\draw[red,line width=1pt] (0,1) -- (3,1);
\draw[red,line width=1pt] (0,-1) -- (3,-1);
\draw[green!80!black,line width=1pt] (3,0) -- (0,0);
\draw[green!80!black,line width=1pt] (3,2) -- (0,2);
\draw[green!80!black,line width=1pt] (3,4) -- (0,4);
\draw[green!80!black,line width=1pt] (3,6) -- (0,6);
\node[] (no) at (1.5,-2) {$s_3$};
\end{scope}
\begin{scope}[shift={(6,0)}]
\begin{scope}
\clip(0,-1.5) rectangle (3,6.5);
\draw[blue,line width=1pt] (0,5) .. controls +(1,0) and +(-1,0) ..
(3,7);
\draw[red,line width=1pt] (0,7) .. controls +(1,0) and +(-1,0) ..
(3,5) ;
\draw[blue,line width=1pt] (0,-3) .. controls +(1,0) and +(-1,0) ..
(3,-1);
\draw[red,line width=1pt] (0,-1) .. controls +(1,0) and +(-1,0) ..
(3,-3) ;
\draw[red,line width=1pt] (0,1) -- (3,1);
\draw[blue,line width=1pt] (0,3) -- (3,3);
\draw[green!80!black,line width=1pt] (3,0) -- (0,0);
\draw[green!80!black,line width=1pt] (3,2) -- (0,2);
\draw[green!80!black,line width=1pt] (3,4) -- (0,4);
\draw[green!80!black,line width=1pt] (3,6) -- (0,6);
\end{scope}
\node[] (no) at (1.5,-2) {$s_4$};
\end{scope}
\begin{scope}[shift={(9,0)}]
\draw[blue,line width=1pt] (0,3) .. controls +(1,0) and +(-1,0) ..
(3,5);
\draw[red,line width=1pt] (0,5) .. controls +(1,0) and +(-1,0) ..
(3,3) ;
\draw[red,line width=1pt] (0,1) -- (3,1);
\draw[blue,line width=1pt] (0,-1) -- (3,-1);
\draw[green!80!black,line width=1pt] (3,0) -- (0,0);
\draw[green!80!black,line width=1pt] (3,2) -- (0,2);
\draw[green!80!black,line width=1pt] (3,4) -- (0,4);
\draw[green!80!black,line width=1pt] (3,6) -- (0,6);
\node[] (no) at (1.5,-2) {$s_3$};
\end{scope}
\begin{scope}[shift={(12,0)}]
\draw[blue,line width=1pt] (0,-1) .. controls +(1,0) and +(-1,0) ..
(3,1);
\draw[red,line width=1pt] (0,1) .. controls +(1,0) and +(-1,0) ..
(3,-1) ;
\draw[red,line width=1pt] (0,3) -- (3,3);
\draw[blue,line width=1pt] (0,5) -- (3,5);
\draw[green!80!black,line width=1pt] (3,0) -- (0,0);
\draw[green!80!black,line width=1pt] (3,2) -- (0,2);
\draw[green!80!black,line width=1pt] (3,4) -- (0,4);
\draw[green!80!black,line width=1pt] (3,6) -- (0,6);
\node[] (no) at (1.5,-2) {$s_1$};
\end{scope}
\begin{scope}[shift={(15,0)}]
\begin{scope}
\clip(0,-1.5) rectangle (3,6.5);
\draw[blue,line width=1pt] (0,5) .. controls +(1,0) and +(-1,0) ..
(3,7);
\draw[red,line width=1pt] (0,7) .. controls +(1,0) and +(-1,0) ..
(3,5) ;
\draw[blue,line width=1pt] (0,-3) .. controls +(1,0) and +(-1,0) ..
(3,-1);
\draw[red,line width=1pt] (0,-1) .. controls +(1,0) and +(-1,0) ..
(3,-3) ;
\draw[blue,line width=1pt] (0,1) -- (3,1);
\draw[red,line width=1pt] (0,3) -- (3,3);
\draw[green!80!black,line width=1pt] (3,0) -- (0,0);
\draw[green!80!black,line width=1pt] (3,2) -- (0,2);
\draw[green!80!black,line width=1pt] (3,4) -- (0,4);
\draw[green!80!black,line width=1pt] (3,6) -- (0,6);
\end{scope}
\node[] (no) at (1.5,-2) {$s_4$};
\end{scope}
\begin{scope}[shift={(18,0)}]
\begin{scope}
\clip(0,-1.5) rectangle (3,6.5);
\draw[red,line width=1pt] (0,5) .. controls +(1,0) and +(-1,0) ..
(3,7);
\draw[red,line width=1pt] (0,3) .. controls +(1,0) and +(-1,0) ..
(3,5);
\draw[blue,line width=1pt] (0,1) .. controls +(1,0) and +(-1,0) ..
(3,3);
\draw[blue,line width=1pt] (0,-1) .. controls +(1,0) and +(-1,0) ..
(3,1);
\draw[red,line width=1pt] (0,-3) .. controls +(1,0) and +(-1,0) ..
(3,-1);
\draw[green!80!black,line width=1pt] (0,-2) .. controls +(1,0) and +(-1,0) ..
(3,0);
\draw[green!80!black,line width=1pt] (0,0) .. controls +(1,0) and +(-1,0) ..
(3,2);
\draw[green!80!black,line width=1pt] (0,2) .. controls +(1,0) and +(-1,0) ..
(3,4);
\draw[green!80!black,line width=1pt] (0,4) .. controls +(1,0) and +(-1,0) ..
(3,6);
\draw[green!80!black,line width=1pt] (0,6) .. controls +(1,0) and +(-1,0) ..
(3,8);
\end{scope}
\node[] (no) at (1.5,-2) {$\La$};
\end{scope}
\begin{scope}[shift={(21,0)}]
\begin{scope}
\clip(0,-1.5) rectangle (3,6.5);
\draw[red,line width=1pt] (0,5) .. controls +(1,0) and +(-1,0) ..
(3,7);
\draw[blue,line width=1pt] (0,3) .. controls +(1,0) and +(-1,0) ..
(3,5);
\draw[blue,line width=1pt] (0,1) .. controls +(1,0) and +(-1,0) ..
(3,3);
\draw[red,line width=1pt] (0,-1) .. controls +(1,0) and +(-1,0) ..
(3,1);
\draw[red,line width=1pt] (0,-3) .. controls +(1,0) and +(-1,0) ..
(3,-1);
\draw[green!80!black,line width=1pt] (0,-2) .. controls +(1,0) and +(-1,0) ..
(3,0);
\draw[green!80!black,line width=1pt] (0,0) .. controls +(1,0) and +(-1,0) ..
(3,2);
\draw[green!80!black,line width=1pt] (0,2) .. controls +(1,0) and +(-1,0) ..
(3,4);
\draw[green!80!black,line width=1pt] (0,4) .. controls +(1,0) and +(-1,0) ..
(3,6);
\draw[green!80!black,line width=1pt] (0,6) .. controls +(1,0) and +(-1,0) ..
(3,8);
\end{scope}
\node[] (no) at (1.5,-2) {$\La$};
\end{scope}
\begin{scope}[shift={(24,0)}]
\draw[green!80!black,line width=1pt] (0,4) .. controls +(1,0) and +(-1,0) ..
(3,6);
\draw[green!80!black,line width=1pt] (0,6) .. controls +(1,0) and +(-1,0) ..
(3,4) ;
\draw[green!80!black,line width=1pt] (0,2) -- (3,2);
\draw[green!80!black,line width=1pt] (0,0) -- (3,0);
\draw[blue,line width=1pt,->] (0,3) -- (3,3);
\draw[blue,line width=1pt,->] (0,5) -- (3,5);
\draw[red,line width=1pt,->] (0,1) -- (3,1);
\draw[red,line width=1pt,->] (0,-1) -- (3,-1);
\node[] (no) at (1.5,-2) {$s_{\overline{4}}$};
\end{scope}
\end{tikzpicture}
\caption{The triple-crossing diagram $D(w)$ with strongly decorated Newton polygon $\Ndec=(N,\bfcc)$ from~\cref {example:graphfromsdcnp}. }\label{fig:eg182}
%
%
%
\bigskip
\def0.18\textwidth{0.35\textwidth}
\scalebox{0.95}{
\begin{tabular}{ccc}
\includegraphics[width=0.18\textwidth]{figures/two_big_graphs} &
\includegraphics[width=0.18\textwidth]{figures/two_big_graphs_2}
&
\begin{tikzpicture}[baseline=(Z.base)]
\coordinate(Z) at (0,-1.75);
\node(A) at (0,0) { \includegraphics[width=0.23\textwidth]{figures/newton_example_18}};
\end{tikzpicture}
\\
(a) $\Gamma(w)$. & (b) $\Gamma(w')$. & (c) $\Ndec(\Gamma(w))=\Ndec(\Gamma(w'))$.
\end{tabular}
}
\caption{\label{fig:two_big} Two plabic graphs $\Gamma(w)$, $\Gamma(w')$ from \cref{ex:two_big} having the same strongly decorated Newton polygons but different modular invariants. According to \cref{thm:intro:move_eq}, these graphs are not move-equivalent.
%
}
\end{figure}
\begin{example} \label{example:graphfromsdcnp}
Let $\Ndec=(N,\bfcc)$ be the strongly decorated Newton polygon with edges ${\textcolor{red}{\e_1}} = (2,0)$, ${\textcolor{blue}{\e_2}}=(2,2)$ and ${\textcolor{green!80!black}{\e_3}}=(-4,-2)$, and $\cc^{\textcolor{red}{\e_1}}=\cc^{\textcolor{blue}{\e_2}}=\cc^{\textcolor{green!80!black}{\e_3}}=(2)$ shown in \figref{fig:two_big}(c). The strongly decorated vector configuration $\ddot{E}_+$ and its ${\epsilon}$-straight arrow diagram $\DiagOp(\ddot{E}_+)$ are shown in \figref{fig:arrowdiagram}(a--b). From $\DiagOp(\ddot{E}_+)$, we find the reduced expression $f = s_1 s_3 s_4 s_3 s_1 s_4 \La^2$. Similarly, we have ${\overline{f}}= s_{{1}} \La^2$, so that $w=s_1 s_3 s_4 s_3 s_1 s_4 \La^2 s_{\overline{4}}$. The corresponding triple-crossing diagram $D(w)$ is shown in~\cref{fig:eg182}.
\end{example}
\begin{example}\label{ex:two_big}
Let $w'=s_1 s_3 s_4 s_3 s_1 s_4 \La^2 s_{\overline{3}}$ be obtained from $w$ in \cref{example:graphfromsdcnp} by replacing $s_{\overline{4}}$ with $s_{\overline{3}}$. The associated plabic\footnote{Strictly speaking, the graphs shown in \cref{fig:two_big} are not plabic in the language of \cref{sec:plabictcd} since they have edges with both endpoints black. To convert them into plabic graphs, one has to add a degree two white vertex in the middle of each such edge.} graphs $\Gamma(w),\Gamma(w')$ shown in \cref{fig:two_big} have the same strongly decorated Newton polygons but different modular invariants in ${\mathbb{Z}}/\operatorname{d}(\bfcc){\mathbb{Z}}$, where $\operatorname{d}(\bfcc)=2$.
\end{example}
\subsection{Proof of Theorem~\ref{thm:intro:move_eq}}\label{sec:proof:move_eq}
The $\Longrightarrow$ direction is clear, since both $\Ndec$ and $\mu$ are invariant under move-equivalence; see Sections~\ref{sec:intro:move_eq} and~\ref{sec:intro:minv}.
For the $\Longleftarrow$ direction, using Lemma \ref{lemma:tcdtoper}, we assume that the triple-crossing diagram $D$ (resp., $D'$) associated to $\Gamma$ (resp., $\Gamma'$) is of the form $D(w)$ (resp., $D(w')$) for some double affine permutations $w,w'$. Let $(f,{\overline{f}})$ (resp., $(f',{\overline{f}}')$) be the pair of affine permutations associated to $w$ (resp., $w'$). Let $\sigma(w)=\La w \La^{-1}$ be the rotation operator, and let $\sigma(\Gamma)$ be the bipartite graph associated to the triple-crossing diagram $D(\sigma(w))$. Note that $\mu(\sigma(\Gamma)) = \mu(\Gamma)$, but $\mu(\sigma(f))=\mu(f)+1$ and $\mu(\sigma({\overline{f}}))=\mu({\overline{f}})-1$. Therefore, replacing $\Gamma$ with $\sigma^{\mu(f')-\mu(f)}(\Gamma)$, we can assume that $\mu(f)=\mu(f')$.
We will show that there is an $r \in {\mathbb{Z}}$ such that $\sigma^r (f ) \stackrel{\scalebox{0.6}{\ \normalfont{c}}}{\sim} f'$ and $\sigma^r ({\overline{f}} ) \stackrel{\scalebox{0.6}{\ \normalfont{c}}}{\sim} {\overline{f}}'$. Since $\Ndec(\Gamma)=\Ndec(\Gamma')$ implies that $\VCDD_{f}= \VCDD_{f'}$ and $\VCDD_{{\overline{f}}}= \VCDD_{{\overline{f}}'}$, by~\cref{thm:c-equivalent}, it suffices to show that there is an $r \in {\mathbb{Z}}$ such that $\mu(\sigma^r(f))=\mu(f')$ and $\mu(\sigma^r({\overline{f}}))=\mu({\overline{f}}')$, or equivalently, such that $r \equiv 0\pmod{\operatorname{d}(\bfcc_{f})}$ and $r \equiv \mu({\overline{f}} )-\mu({\overline{f}}' ) \pmod{\operatorname{d}(\bfcc_{{\overline{f}}})}$.
%
Note that $\operatorname{d}(\bfcc)= \gcd(\operatorname{d}(\bfcc_{f}),\operatorname{d}(\bfcc_{{\overline{f}}}))$ and $\mu(\Gamma)\equiv\mu(f )+\mu({\overline{f}} )\pmod{\operatorname{d}(\bfcc)}$. Since $\mu(\Gamma)=\mu(\Gamma')$ and $\mu(f)=\mu(f')$, we have $\mu({\overline{f}})-\mu({\overline{f}}') \equiv 0 \pmod{\operatorname{d}(\bfcc)}$. The existence of such an $r$ follows from \cref{lemma:rexists}.
\begin{lemma}\label{lemma:rexists}
Let $d_1,d_2$ be positive integers, and let $d = \gcd(d_1,d_2)$. Then, there exists $r \in {\mathbb{Z}}$ such that $r \equiv 0\pmod{d_1}$ and $r\equiv d\pmod{d_2}$.
\end{lemma}
\begin{proof}
Let $x,y \in {\mathbb{Z}}$ be such that $x d_1 + y d_2 = d$. Take $r:=x d_1$.
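For instance, with $d_1=4$ and $d_2=6$ we have $d=2$; taking $x=-1$ and $y=1$ gives $r=-4$, which indeed satisfies $r\equiv 0\pmod 4$ and $r\equiv 2\pmod 6$.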
\end{proof}
| {
"timestamp": "2022-12-27T02:12:53",
"yymm": "2212",
"arxiv_id": "2212.12962",
"language": "en",
"url": "https://arxiv.org/abs/2212.12962",
"abstract": "We determine which bipartite graphs embedded in a torus are move-reduced. In addition, we classify equivalence classes of such move-reduced graphs under square/spider moves. This extends the class of minimal graphs on a torus studied by Goncharov-Kenyon, and gives a toric analog of Postnikov's results on a disk.",
"subjects": "Combinatorics (math.CO)",
"title": "Move-reduced graphs on a torus",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9871787861106088,
"lm_q2_score": 0.7185943985973773,
"lm_q1q2_score": 0.7093811461132419
} |
https://arxiv.org/abs/2103.07913 | Factorizations of regular graphs of infinite degree | Let $\mathcal{H}=\{H_i: i<\alpha \}$ be an indexed family of graphs for some ordinal number $\alpha$. $\mathcal{H}$-decomposition of a graph $G$ is a family $\mathcal{G}=\{G_i: i<\alpha \}$ of edge-disjoint subgraphs of $G$ such that $G_i$ is isomorphic to $H_i$ for every $i<\alpha$ and $\bigcup\{E(G_i):i<\alpha\}=E(G)$. $\mathcal{H}$-factorization of $G$ is a $\mathcal{H}$-decomposition of $G$ such that every element of $\mathcal{H}$ is a spanning subgraph of $G$. Let $\kappa$ be an infinite cardinal. Kőnig in 1936 proved that every $\kappa$-regular graph has a factorization into perfect matchings. Andersen and Thomassen using this theorem proved in 1980 that every $\kappa$-regular connected graph has a $\kappa$-regular spanning tree. We generalize both these results and establish the existence of a factorization of $\kappa$-regular graph into $\lambda$-regular subgraphs for every non-zero $\lambda\leq \kappa$. Furthermore, we show that every $\kappa$-regular connected graph has a $\mathcal{H}$-factorization for every family $\mathcal{H}$ of $\kappa$ forests with $\kappa$ components of order at most $\kappa$ and without isolated vertices. | \section{Introduction}
The study of matchings and related notions is arguably one of the most popular topics in graph theory.
This includes matchability, factorization, packing and decomposition problems. The most basic necessary condition for the existence of a decomposition of a given graph into perfect matchings is its regularity. In the case of finite graphs this condition is far from being sufficient, even for very simple classes of graphs. The same applies to infinite locally finite graphs (graphs without vertices of infinite degree). However, it turns out that this condition is indeed sufficient in the case of non-locally-finite graphs, as was shown by Kőnig \cite{konigbookpage} in 1936. Andersen and Thomassen \cite{Thomassen} proved in 1980 that for every infinite cardinal $\kappa$ the necessary and sufficient condition for the existence of a $\kappa$-regular spanning tree in a given connected graph is its $\kappa$-regularity.
In this paper we focus on possibly the most general decomposition properties for which the only obvious necessary condition is the regularity of the graph. We prove that, for non-locally-finite graphs, regularity is a necessary and sufficient condition even for the strongest of these properties.
This covers the aforementioned problems of matchability, factorization, packing and decomposition, and generalizes the mentioned results of Kőnig and of Andersen and Thomassen. We also discuss variants, for non-locally-finite graphs, of well-known conjectures by Ringel \cite{Ringel} from 1963 and Gy\'arf\'as \cite{Gyarfas} from 1977, together with their further strengthenings. The proofs of all these variants follow from the main theorem of this paper.
Let $\mathcal{H}=\{H_i: i<\alpha \}$ be an indexed family of graphs for some ordinal number $\alpha$.
We say that $\mathcal{H}$ \emph{packs} into graph $G$ if there exists a family $\mathcal{G}=\{G_i: i<\alpha \}$ of edge-disjoint subgraphs of $G$ such that $G_i$ is isomorphic to $H_i$ for every $i<\alpha$.
If $\mathcal{H}$ packs into $G$ and furthermore $\bigcup\{E(G_i):i<\alpha\}=E(G)$, then $\mathcal{G}$ is called a \emph{$\mathcal{H}$-decomposition} of $G$.
A \emph{factor} of a graph $G$ is a spanning subgraph of $G$. An \emph{$\mathcal{H}$-factorization} of $G$ is an $\mathcal{H}$-decomposition of $G$ such that every element of $\mathcal{H}$ is a factor of $G$. If $\lambda$ is a cardinal number, then
a factorization into $\lambda$-regular subgraphs is simply called a \emph{$\lambda$-factorization}.
The most popular problems in this area include 1-factorization, which is a decomposition into perfect matchings, and 2-factorization, which is a decomposition into spanning unions of cycles and double rays. As an arbitrary regular graph does not have to contain a cycle, we consider only forests as possible elements of decompositions.
Ringel \cite{Ringel} conjectured that $2n+1$ copies of every finite tree with $n$ edges pack into $K_{2n+1}$. The most straightforward variant for non-locally-finite graphs would be the conjecture stating that for every infinite cardinal number $\kappa$ and every tree $T$ on $\kappa$ vertices there exists a packing of $\kappa$ copies of $T$ into $K_\kappa$. As the packing in Ringel's Conjecture is in fact a decomposition, we can demand that said packing be a decomposition of $K_\kappa$. Furthermore, we can consider arbitrary $\kappa$-regular connected graphs instead of $K_\kappa$.
In contrast, Gy\'arf\'as \cite{Gyarfas} conjectured that every family $\mathcal{T}=\{T_i: 2\leq i\leq n \}$ of trees such that $T_i$ has order $i$ packs into $K_{n}$. Again, such a packing is a decomposition but, unlike in Ringel's Conjecture, the trees in the family $\mathcal{T}$ are pairwise non-isomorphic. We can propose a variant of Gy\'arf\'as' Conjecture stating that for every infinite cardinal $\kappa$ every family $\mathcal{T}$ of at most $\kappa$ pairwise non-isomorphic trees of order at most $\kappa$ packs into $K_\kappa$. Again, we can consider arbitrary $\kappa$-regular connected graphs instead of $K_\kappa$ and demand that this packing be a decomposition.
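Indeed, a simple edge count shows that in both conjectures any packing must use every edge: $2n+1$ copies of a tree with $n$ edges have $(2n+1)n=|E(K_{2n+1})|$ edges in total, and $\sum_{i=2}^{n}(i-1)=\frac{n(n-1)}{2}=|E(K_{n})|$.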
We can combine all the proposed conjectures and ask for which non-locally-finite connected graph $G$ there exists a $\mathcal{T}$-decomposition for an arbitrary family $\mathcal{T}$ of $\kappa$ trees of order at least two and at most $\kappa$. It is easy to see that a necessary condition is the $\kappa$-regularity of $G$. We show that it is also a sufficient condition.
The main result of this paper is Theorem \ref{thm:main} which provides a positive answer to all the mentioned conjectures for non-locally finite graphs. Theorem \ref{thm:main} is even stronger as it includes not only packings and decompositions but also factorizations.
\begin{thm}\label{thm:main}
Let $\kappa$ be an infinite cardinal and let $\mathcal{T}=\{T_i:i<\kappa\}$ be an indexed family of forests without isolated vertices and with $\kappa$ components each of order at most $\kappa$. If $G$ is a $\kappa$-regular connected graph, then $G$ has a $\mathcal{T}$-factorization.
\end{thm}
Theorem \ref{thm:main} states that there are no non-trivial conditions for factorizations, decompositions and packings of non-locally-finite graphs of order $\kappa$ into a family of forests of cardinality $\kappa$. Note that the statement of said theorem is strictly stronger than the existence of a $\kappa$-factorization of $G$ into $\kappa$ factors, as not every factor of $G$ is a forest if $G$ is not a forest itself.
We can apply Theorem \ref{thm:main} to various problems by setting a suitable family $\mathcal{T}$. To show that every family $\mathcal{T}'$ of $\kappa$ forests of order at most $\kappa$ packs into every $\kappa$-regular tree, it is enough to partition $\mathcal{T}'$ into $\kappa$ sets of cardinality $\kappa$, thus obtaining a family $\mathcal{T}$ of $\kappa$ forests with $\kappa$ components of order at most $\kappa$ to which we can apply Theorem \ref{thm:main}. The same method may be applied to obtain an arbitrary decomposition into $\kappa$ non-trivial forests. $\lambda$-factorizations for non-zero $\lambda\leq\kappa$ may be obtained by setting $\mathcal{T}$ to be a family of $\kappa$ forests with $\kappa$ components each isomorphic to the $\lambda$-regular tree.
For $\lambda=1$ we obtain the mentioned result of Kőnig, and for $\lambda=\kappa$ we obtain a strengthening of the theorem of Andersen and Thomassen about the existence of $\kappa$-regular spanning trees in $\kappa$-regular connected graphs.
\section{Factorizations and decompositions}
When considering an $\mathcal{H}$-decomposition, we always implicitly assume that the graphs in $\mathcal{H}$ are vertex-disjoint. We consider each ordinal $\alpha=\{\beta: \beta<\alpha \}$ as a well-ordered set with the standard well-ordering of ordinals. If $\alpha,\beta, \gamma$ are ordinals, then we treat the Cartesian products $\alpha \times \beta$ and $\alpha \times \beta \times \gamma$ as well-ordered sets with the lexicographic order induced by the well-ordering of ordinals. For notions of graph theory which are not defined in this paper, see \cite{Diestel}.
Our goal is to prove Theorem \ref{thm:main}. We divide the proof into two parts. The first part is Theorem \ref{thm:kapparegularfactorization}, which covers the case of a factorization into $\kappa$ many $\kappa$-regular forests. Note that we cannot replace forests in the statement of Theorem \ref{thm:kapparegularfactorization} with $\kappa$-regular trees. This follows from the fact that, for an arbitrary cardinal $\kappa$, if a graph has edge-connectivity less than $\kappa$, then it cannot have a factorization into $\kappa$ connected subgraphs. As mentioned earlier, we cannot replace forests with more general graphs containing cycles, as not every regular graph contains a cycle.
\begin{thm}\label{thm:kapparegularfactorization}
Let $\kappa$ be an infinite cardinal and let $G$ be a connected graph. Then $G$ has a factorization into $\kappa$ regular forests of degree $\kappa$ if and only if it is $\kappa$-regular.
\end{thm}
\begin{proof}
Assume first that $G$ contains a $\kappa$-regular spanning forest. Hence, $G$ has $\kappa$ vertices, each of degree at least $\kappa$. As $G$ has $\kappa$ vertices, each of them has degree at most $\kappa$. This proves the necessity of the $\kappa$-regularity of $G$. Therefore, it remains to prove only the sufficiency of the $\kappa$-regularity of $G$.
Let $F$ be a spanning $\kappa$-regular forest of $G$, which exists by the mentioned result of Andersen and Thomassen. Let $v_0$ be an arbitrary vertex of $G$. Consider an enumeration $(v_i:i<\kappa)$ of the vertices of $G$ such that in the rooted forest $(F,v_0)$, if $v_i$ is a son of $v_j$, then $i>j$. For every $j<\kappa$ we partition the set of sons of $v_j$ in $(F,v_0)$ into sets $X_j^m(t)$ for every $m,t<\kappa$, each of cardinality $\kappa$. Note that every vertex of $G$ except $v_0$ belongs to exactly one set $X_j^m(t)$.
In the proof, we construct a family $\{ C_j^m: j,m<\kappa \}$ satisfying:
\begin{enumerate}[label = \textnormal{(C\arabic*)}]
\item $C_j^m \subset N(v_j)\cap \{v_i: i>j \}$, \label{itm:C1}
\item $C_i^m \cap C_j^m=\emptyset$, for $i \neq j$, \label{itm:C2}
\item $C_j^m\cap C_j^n= \emptyset$, for $m \neq n$, \label{itm:C3}
\item $|C_j^m| =\kappa$, \label{itm:C4}
\item if $v_j v_i \in E(G)$ and $i>j$, then there exists $m<\kappa$ such that $v_i \in C_j^m$. \label{itm:C5}
\end{enumerate}
First, we describe how to construct a $\kappa$-factorization of $G$ into $\kappa$ forests using the family $\{ C_j^m: j,m<\kappa \}$ satisfying the above conditions.
We construct a $\kappa$-factorization $\mathcal{F}= \{F^m : m < \kappa \}$ by setting $E(F^m)= \{v_j v_i : v_i \in C_j^m, j < i < \kappa \}$. Index $m$ is related to the $m$-th factor. If $v_j$ is a vertex in a component $F$ of $F^m$, then $C_j^m$ is the set of sons of $v_j$ in $(F,v)$, where $v$ denotes the vertex $v_i \in F$ with the least index $i$.
By the condition \ref{itm:C4}, every graph in $\mathcal{F}$ is a $\kappa$-regular spanning subgraph of $G$.
Let $F^m$ be an element of $\mathcal{F}$, and assume that there exists a cycle in $F^m$. The vertex of the greatest index in this cycle has two neighbours on the cycle, both with smaller indices. Hence, it belongs to $C_j^m$ and $C_i^m$ for some distinct $i,j<\kappa$. This contradicts \ref{itm:C2}. It follows that $F^m$ is a forest for every $m<\kappa$.
By the conditions \ref{itm:C5} and \ref{itm:C3} every edge of $G$ appears in exactly one element of $\mathcal{F}$. Therefore, $\mathcal{F}$ is a factorization of graph $G$ into $\kappa$ regular forests of degree $\kappa$.
It remains to construct the family $\{ C_j^m: j,m<\kappa \}$. We construct sets $\{A_{j}^m:j,m<\kappa\}$ and $\{B_{j}^m:j,m<\kappa\}$, and then we obtain $C_{j}^m$ by setting $C_j^m=A_{j}^m \cup B_{j}^m$ for every $j,m < \kappa$. We shall construct sets $\{A_{j}^m:j,m<\kappa\}$ and $\{B_{j}^m:j,m<\kappa\}$ by induction on $(m,\tau,i) \in \kappa \times \kappa \times \kappa$ with respect to the lexicographic order on $\kappa \times \kappa \times \kappa$.
During step $(m,\tau,i)$ we either assign vertex $v_i$ to $a^m_j(y)$ for some $j,y<\kappa$, put $v_i$ in $B^m_j$ for some $j<\kappa$, or proceed to the next step without doing anything. Assigning $v_i$ to $a^m_j(y)$ is equivalent to defining $a^m_j(y)$ as $v_i$. Without loss of generality we can assume that $V(G) \cap \kappa =\emptyset$. Initially, we temporarily assign a different ordinal number less than $\kappa$ to each element of $\{a^m_j(y): m,j,y<\kappa\}$ but still refer to $a^m_j(y)$ as not defined until some vertex $v_i\in V(G)$ is assigned to it.
Recall that index $m$ is related to the $m$-th factor.
After executing steps $(m,\tau,i)$ for every $\tau,i<\kappa$, the set $\{a^m_j(y):y <\kappa\}$ has been defined for every $j<\kappa$, and we denote $A_{j}^m=\{a^m_j(y):y <\kappa\}$. During steps $(m,\tau,i)$ for $\tau,i<\kappa$ we define the set $B^m_j$ by putting vertices in it. At the start of the induction no vertex lies in $B^m_j$ for any $m,j<\kappa$.
For a fixed $(m,\tau,i)$ let $\sigma_\tau^m(i)=0$ if no $a_j^m(y)$ has been defined, let $\sigma_\tau^m(i)$ be the least ordinal for which there exist $j, y \leq\sigma^m_\tau(i)$ such that $a^m_{j}(y)$ has not been defined, or let $\sigma_\tau^m(i)=\kappa$ if every element of $\{a^m_j(y):j,y <\kappa\}$ has already been defined. The above parameter and the family $X_j^m(t)$ shall ensure that every element of $\{a^m_j(y):j,y <\kappa\}$ is defined after executing steps $(m,\tau',i')$ for $\tau',i'<\kappa$. In particular, it shall ensure that $|A^m_j|\leq|C^m_j|=\kappa$. Index $\tau$ in a triple $(m,\tau,i)$ is an auxiliary index which serves the purpose of considering every vertex $v_i$ multiple times. Throughout the induction every vertex can be assigned to more than one element of $\{a_i^{m'}(y): i,m',y<\kappa\}$ but it can be assigned to at most one of them for a fixed $m'$. We consider vertices of $G$ one by one, and we assign $v_i$ to $a_j^m(y)$ if all of the following conditions are satisfied:
\begin{enumerate}[label = \textnormal{(D\arabic*)}]
\item vertex $v_i$ has not been assigned to any $a_{j'}^m(y')$ nor put to $B_{j'}^m$ for every $j',y'<\kappa$, \label{itm:D1}
\item $v_i$ is a neighbour of $v_j$ and $i>j$, \label{itm:D2}
\item for every $y'<y$, vertex $a^m_{j}(y')$ has already been defined but $a^m_{j}(y)$ has not, \label{itm:D3}
\item $v_i\notin A_j^{m'}$ for every $m'<m$, \label{itm:D4}
\item $v_i\notin B_j^{m'}$ for every $m'< m$, \label{itm:D5}
\item $v_i \notin X_j^{m''}(t)$ for every $(m'',t)>(m,\tau)$ and $v_i\notin X^m_{j'}(\tau)$ for every $j'\neq j$, \label{itm:D6}
\item $j \leq \sigma_\tau^m(i)$ and $y \leq \sigma_\tau^m(i)$, \label{itm:D7}
\item $j$ is the least index for which conditions \ref{itm:D2}--\ref{itm:D7} are satisfied for some $y<\kappa$. \label{itm:D8}
\end{enumerate}
If $v_i$ has not been assigned to any $a_j^m(y)$ by the above conditions (for the fixed $m$), then we consider putting $v_i$ in $B_j^m$ for some $j<\kappa$. If condition \ref{itm:D1} is satisfied and $j$ is the least index for which conditions \ref{itm:D2}, \ref{itm:D4}, \ref{itm:D5} and \ref{itm:D6} are satisfied, then we put $v_i$ in $B_j^m$. Otherwise, we do nothing and proceed to the next index. Induction on $\tau<\kappa$ simply means that we repeat the above procedure for every $i \in \kappa$. Similarly, induction on $m<\kappa$ means that we repeat the procedure for every $(\tau, i) \in \kappa \times \kappa$.
After the induction on $(m,\tau,i) \in\kappa \times \kappa \times \kappa$ we define $C_{j}^m=A_{j}^m \cup B_{j}^m$ for every $j,m < \kappa$. Note that for every $m,j< \kappa$ the sets $A^m_j$ and $B^m_j$ are disjoint. We prove that the family $\{C_j^m:j,m<\kappa\}$, obtained by the recursive construction, satisfies conditions \ref{itm:C1}--\ref{itm:C5}. The first three conditions are easy to check and follow directly from the construction. Conditions \ref{itm:C1} and \ref{itm:C2} are satisfied by conditions \ref{itm:D2} and \ref{itm:D1}, respectively, in the construction of the sets $A_j^m$ and $B_j^m$. Condition \ref{itm:C3} follows from conditions \ref{itm:D4} and \ref{itm:D5}. We need to show that conditions \ref{itm:C4} and \ref{itm:C5} are satisfied.
Now, we prove that $|A_j^m| =\kappa$ for every $m,j< \kappa$. This is equivalent to $\{ \sigma^m_\tau(0): \tau<\kappa \}$ not being bounded by any ordinal less than $\kappa$ for every $m <\kappa$. We show that $\sigma^m_\tau(0)$, as a function of $\tau$, is strictly increasing on the set $\{\tau<\kappa: \sigma^m_\tau(0) <\kappa \}$ for every $m<\kappa$.
Fix $m< \kappa$. For any non-zero $\alpha<\kappa$ we have $\sup \{\sigma^m_\tau(0):\tau<\alpha \} \leq \sigma^m_\alpha(0)$. Therefore, it is enough to prove that for every $\alpha<\kappa$, we have $s= \sigma^m_\alpha(0) <\sigma^m_{\alpha+1}(0)$ or $s=\kappa$.
Before the execution of step $(m,\alpha,0)$ every $a_{j}^m(y)$ for $j, y<s$ has already been defined. Let $C=\{a^m_s(y):y\leq s \}\cup \{a^m_j(s):j\leq s \}$. At least one element of $C$ has not been defined.
For a fixed $\alpha<\kappa$ consider the induction on $(m,\alpha,i)$. Notice that for every $a_j^m(y) \in C$ there are $\kappa$ vertices $v_i \in V(G)$ that satisfied conditions \ref{itm:D1}, \ref{itm:D2}, \ref{itm:D4}, \ref{itm:D5} and \ref{itm:D6} when they were considered.
Indeed, each element of $X^m_j(\alpha)$ satisfies these conditions for $a_j^m(y) \in C$.
If $a_j^m(y) \in C$ has not been assigned before step $i$ and $v_i \in X^m_j(\alpha)$, then $v_i$ satisfies conditions \ref{itm:D1}--\ref{itm:D7} when considered as a candidate for $a_j^m(y')$ for some $y' \leq s$. It follows from the second part of \ref{itm:D6} that condition \ref{itm:D8} is also satisfied and therefore $v_i=a_j^m(y')$.
Hence, $\sigma^m_\alpha(0) <\sigma^m_{\alpha+1}(0)$. We proved that for every $j,m<\kappa$ we have $|A_j^m| =\kappa$. As $A_j^m\subseteq C_j^m$, we obtained that the family $\{C_j^m:j,m<\kappa\}$ satisfies \ref{itm:C4}.
It remains to prove that condition \ref{itm:C5} holds for $\{C_j^m:j,m<\kappa\}$. Assume that $\{C_j^m:j,m<\kappa\}$ does not satisfy condition \ref{itm:C5}.
Let $(i,j)$ be the least element in $\kappa \times \kappa$ such that $i>j$ and $v_{j}v_{i}\in E$ but $v_i \notin C_j^m$ for every $m<\kappa$. Notice that there exists an index $m''<\kappa$ such that $v_i \notin X_{j'}^m(\tau)$ for every $ m''\leq m, \tau < \kappa, j'<\kappa$. It follows from the above paragraph that for every $m < \kappa$ there exists an index $t(m)$ such that $\sigma^m_{t(m)}(i)\geq \max \{i,j \}=i$. For $m''<m$ and $\tau\geq t(m)$ we consider step $(m,\tau,i)$, and we check which of the conditions \ref{itm:D1}-\ref{itm:D8} would be satisfied for assigning $v_i$ to $a^m_j(y)$ for some $y\leq \sigma^m_{t(m)}(i)$.
Conditions \ref{itm:D4}--\ref{itm:D7} and condition \ref{itm:D2} are satisfied for such a choice of $j$ and $y$. However, it may happen that condition \ref{itm:D1}, \ref{itm:D3} or \ref{itm:D8} fails. If condition \ref{itm:D1} fails, then it also fails for every successive step $(m,t',i)$ within $m$. Furthermore, condition \ref{itm:D3} may be satisfied for at most one $y$, and by the assumption we did not put $v_i$ in $B_j^m$.
When assigning $v_i$ to $a^m_j(y)$, condition \ref{itm:D2} is satisfied only for $j<i$, hence for at most $|i|<\kappa$ indices $j$. Therefore, the satisfaction of condition \ref{itm:D1} depends only on those indices $j'$ for which $j'<i$. Hence, for all but at most $|i|$ indices $m$, condition \ref{itm:D1} was satisfied at $(m,t,i)$ for every $t < \kappa$. Take $m$ such that $m > m''$ and condition \ref{itm:D1} is satisfied at $(m,t,i)$ for every $t < \kappa$. This means that $v_i$ is not assigned to any element of $A^m_{j'}$ or put in $B^m_{j'}$ for any $j'<\kappa$ during the induction on $(m,t,i)$ for the fixed $m$ and $i$. Consider assigning $v_i$ to $a^m_j(y)$. By the assumption, conditions \ref{itm:D4} and \ref{itm:D5} are satisfied. By the choice of $m$, conditions \ref{itm:D1} and \ref{itm:D6} are satisfied. Furthermore, $j$ is the only (and therefore the least) index for which all the conditions \ref{itm:D2}, \ref{itm:D4}, \ref{itm:D5}, \ref{itm:D6} are satisfied.
Therefore, in step $(m,t,i)$ the vertex $v_i$ is assigned to an element of $A^m_j$ or put in $B^m_j$, which contradicts the assumption.
\end{proof}
The next theorem allows us to further factorize $\kappa$-regular forests from Theorem \ref{thm:kapparegularfactorization}. For an arbitrary graph $H$ denote by $S_H(v,d)$ the sphere of radius $d$ and centre $v$ in graph $H$. Similarly, denote by $B_H(v,d)$ the ball of radius $d$ and centre $v$ in graph $H$.
\begin{thm}\label{thm:factorization}
Let $\kappa$ be an infinite cardinal and let $\mathcal{T}=\{T^m: m<\kappa\}$ be an indexed family of forests without isolated vertices and with $\kappa$ components each of order at most $\kappa$. Then there exists a $\mathcal{T}$-factorization of the $\kappa$-regular tree.
\end{thm}
\begin{proof}
Denote the $\kappa$-regular tree by $G$. For $m<\kappa$ let $(t^m_{i}:i<\kappa)$ be an enumeration of the vertices of $T^m$. We shall define a set $\{y^m_{i}:m,i<\kappa \}$ and a graph $Y^m$ for every $m<\kappa$ such that $V(Y^m)=\{y^m_i:i<\kappa\}, E(Y^m)=\{y^m_iy^m_j:i,j<\kappa, t^m_i t^m_j\in E(T^m)\}$, and the following conditions are satisfied:
\begin{enumerate}[label = \textnormal{(E\arabic*)}]
\item $f^m:t^m_i\mapsto y^m_i$ is an isomorphism of $T^m$ into $Y^m$ for every $m<\kappa$, \label{itm:E1}
\item if $xy \in E(G)$, then there exists a unique $(m,i,j)$ such that $i<j$ and $xy=y^m_i y^m_j$,\label{itm:E2}
\item $V(Y^m)=V(G)$ for every $m<\kappa$. \label{itm:E3}
\end{enumerate}
Now we show that if conditions \ref{itm:E1}--\ref{itm:E3} hold, then the family $\{Y^m: m<\kappa\}$ is a $\mathcal{T}$-factorization of $G$.
Condition \ref{itm:E3} means that each $Y^m$ is a factor of $G$.
It follows from conditions \ref{itm:E1} and \ref{itm:E2} that $\{Y^m: m<\kappa\}$ is a $\mathcal{T}$-factorization of $G$.
Pick any $v_0 \in V(G)$ as a root of $G$. For every $m<\kappa$ we define $y^m_0=v_0$. First, for every $m<\kappa$ we partition the family of components of $T^m$ into sets $T^m_d$ for every $d<\omega$ in such a way that $T^m_0$ is the singleton consisting of the component containing $t^m_0$ and $|T^m_d|=\kappa$ for every non-zero $d<\omega$. Furthermore, for a component $T$ of $T^m$ denote by $x_T$ the vertex $t^m_i$ with the least index $i$ in $T$.
Proceeding by induction on $d \in \omega$, assume that we have already assigned elements of $B_G(v_0,d)$ to some elements of $\{y^m_i:m,i<\kappa\}$ in such a way that the following conditions are satisfied:
\begin{enumerate}[label = \textnormal{(F\arabic*)}]
\item every vertex in $B_G(v_0,d)$ has been assigned to exactly one vertex in $\{y^m_j:j<\kappa\}$ for every $m<\kappa$ and only vertices in $B_G(v_0,d)$ have been assigned, \label{itm:F1}
\item if $y^m_i$ and $y^m_j$ have been defined, then $y^m_i y^m_j \in E(G)$ if and only if $t^m_it^m_j \in E(T^m)$, \label{itm:F2}
\item if $xy$ is an edge in $G$ between two vertices in $B_G(v_0,d)$, then there exists a unique $(m,i,j)$ such that $i<j$ and $xy=y^m_i y^m_j$, \label{itm:F3}
\item $y^m_i$ has been defined if and only if $t^m_i \in B_T(x_T,d-d')$ for some $d'\leq d$ and some $ T\in T^m_{d'}$. \label{itm:F4}
\end{enumerate}
For $y\in S_G(v_0,d)$ we define $W_d(y)$ as the set of these vertices $t^m_i\in T^m$ such that $y=y^m_i$ for some $m,i<\kappa$. For every $y\in S_G(v_0,d)$ we assign each son of $y$ in $G$ to a unique $y^m_j$ such that $t^m_i\in W_d(y)$ is a neighbour of $t^m_j$ in $T^m$ and $y^m_j$ has not been defined.
Moreover, every such $y^m_j$ has to be assigned to some son of $y$. Such an assignment is possible because $y$ has $\kappa$ sons and, if we put $d'=d$ in condition \ref{itm:F4}, we obtain that there are $\kappa$ possible $y^m_j$ which we can assign to each son of $y$.
Let $X^m_{d+1}=\{x_T: T \in T^m_{d+1}\}$.
For each $t^m_i \in X^m_{d+1}$ we assign the vertex $y^m_i$ to some vertex $v$ in $S_G(v_0,d+1)$ such that $v$ has not yet been defined as $y^m_j$ for any $j<\kappa$.
For a fixed $m$, each such vertex $y^m_i$ has to be assigned to a different vertex from $S_G(v_0,d+1)$, and each possible $y^m_i$ has to be assigned.
Now we show that before executing step $d$ conditions \ref{itm:F1}--\ref{itm:F4} are satisfied. Each of these conditions is trivially satisfied before executing step $0$. Assume then that $d\geq1$. It follows directly from the construction that conditions \ref{itm:F1}, \ref{itm:F2} and \ref{itm:F4} are satisfied.
Let $y^m_iy^m_j\in E(G)$ be an edge between two vertices of $B_G(v_0,d)$ and assume that $i<j$. Since every edge of the tree $G$ joins consecutive spheres, and edges inside $B_G(v_0,d-1)$ are covered by the induction hypothesis, we may further assume that $y^m_i \in S_G(v_0,d-1)$ and $y^m_j \in S_G(v_0,d)$.
Notice that if $y^m_j=y^{m'}_{j'}$ for some $j'<\kappa, m' \neq m$, then $t^{m'}_{j'}=x_T$ for some $T \in T^{m'}$. Therefore, no neighbour in $Y^{m'}$ of $y^{m'}_{j'}$ lies in $S_G(v_0,d-1)$. It follows that the condition \ref{itm:F3} is satisfied.
It remains to prove that $\{y^m_{i}:m,i <\kappa \}$ satisfies conditions \ref{itm:E1}--\ref{itm:E3}. Condition \ref{itm:E1} is satisfied by \ref{itm:F2} and \ref{itm:F4}. Condition \ref{itm:E2} is satisfied by \ref{itm:F3}. It follows directly from condition \ref{itm:F1} that condition \ref{itm:E3} is satisfied.
\end{proof}
The proof of Theorem \ref{thm:main} now follows easily from Theorems \ref{thm:kapparegularfactorization} and \ref{thm:factorization}. By Theorem \ref{thm:kapparegularfactorization} we obtain a factorization $\{Y^m:m<\kappa \}$ of $G$ into $\kappa$ regular forests of degree $\kappa$. Then we partition $\mathcal{T}$ into a family $\{U^m:m<\kappa\}$ of sets, each of cardinality $\kappa$. For every $m<\kappa$ the set $U^m$ is a family of $\kappa$ forests without isolated vertices and with $\kappa$ components, each of order at most $\kappa$. By Theorem \ref{thm:factorization}, there exists a $U^m$-factorization $W^m$ of $Y^m$ for every $m<\kappa$. It follows that $\{W_i:i<\kappa \}=\{W :W \in W^m, m<\kappa\}$ forms the desired $\mathcal{T}$-factorization of $G$.
\bibliographystyle{abbrv}
| {
"timestamp": "2021-03-16T01:18:04",
"yymm": "2103",
"arxiv_id": "2103.07913",
"language": "en",
"url": "https://arxiv.org/abs/2103.07913",
"abstract": "Let $\\mathcal{H}=\\{H_i: i<\\alpha \\}$ be an indexed family of graphs for some ordinal number $\\alpha$. $\\mathcal{H}$-decomposition of a graph $G$ is a family $\\mathcal{G}=\\{G_i: i<\\alpha \\}$ of edge-disjoint subgraphs of $G$ such that $G_i$ is isomorphic to $H_i$ for every $i<\\alpha$ and $\\bigcup\\{E(G_i):i<\\alpha\\}=E(G)$. $\\mathcal{H}$-factorization of $G$ is a $\\mathcal{H}$-decomposition of $G$ such that every element of $\\mathcal{H}$ is a spanning subgraph of $G$. Let $\\kappa$ be an infinite cardinal. Kőnig in 1936 proved that every $\\kappa$-regular graph has a factorization into perfect matchings. Andersen and Thomassen using this theorem proved in 1980 that every $\\kappa$-regular connected graph has a $\\kappa$-regular spanning tree. We generalize both these results and establish the existence of a factorization of $\\kappa$-regular graph into $\\lambda$-regular subgraphs for every non-zero $\\lambda\\leq \\kappa$. Furthermore, we show that every $\\kappa$-regular connected graph has a $\\mathcal{H}$-factorization for every family $\\mathcal{H}$ of $\\kappa$ forests with $\\kappa$ components of order at most $\\kappa$ and without isolated vertices.",
"subjects": "Combinatorics (math.CO)",
"title": "Factorizations of regular graphs of infinite degree",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9871787857334057,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7093811458421857
} |
https://arxiv.org/abs/1706.08167 | Phase retrieval using alternating minimization in a batch setting | This paper considers the problem of phase retrieval, where the goal is to recover a signal $z\in C^n$ from the observations $y_i=|a_i^* z|$, $i=1,2,\cdots,m$. While many algorithms have been proposed, the alternating minimization algorithm has been one of the most commonly used methods, and it is very simple to implement. Current work has proved that when the observation vectors $\{a_i\}_{i=1}^m$ are sampled from a complex Gaussian distribution $N(0, I)$, it recovers the underlying signal with a good initialization when $m=O(n)$, or with random initialization when $m=O(n^2)$, and it conjectured that random initialization succeeds with $m=O(n)$. This work proposes a modified alternating minimization method in a batch setting, and proves that when $m=O(n\log^{3}n)$, the proposed algorithm with random initialization recovers the underlying signal with high probability. The proof is based on the observation that after each iteration of alternating minimization, with high probability, the angle between the estimated signal and the underlying signal is reduced. | \section{Introduction}
This article concerns the phase retrieval problem as follows: let $\boldsymbol{z}\in\mathbb{C}^n$ be an unknown vector; given $m$ known sensing vectors $\{\boldsymbol{a}_i\}_{i=1}^m\in\mathbb{C}^n$ and the observations
\[
y_i=|\boldsymbol{a}_i^*\boldsymbol{z}|, i=1,2,\cdots,m,
\]
then can we reconstruct $\boldsymbol{z}$ from the observations $\{y_i\}_{i=1}^m$? This problem is motivated by applications in imaging science, and we refer interested readers to~\cite{Shechtman2015} for more
detailed discussions on the background in engineering. In addition, this problem has applications in other areas of sciences and engineering as well, as discussed in \cite{Candes7029630}.
Because of the practical ubiquity of the phase retrieval problem, many algorithms and theoretical analyses have been developed for it. For example, an interesting recent approach is based on convex relaxation~\cite{Chai2011,Candes_PhaseLift,Waldspurger2015}, which replaces the nonconvex constraints by convex ones through relaxation. Since the associated optimization problem is convex, it has desirable properties such as convergence to the global minimizer, and it has been shown that under some assumptions on the sensing vectors, this method recovers the correct $\boldsymbol{z}$~\cite{Candes2014,Gross2015}. However, since these algorithms involve semidefinite programming for $n\times n$ positive semidefinite matrices, the computational cost can be prohibitive when $n$ is large. Recently, several works \cite{pmlr-v54-bahmani17a,Goldstein2016,Hand2016,Hand20162} proposed and analyzed an alternative convex method that uses linear programming instead of semidefinite programming, which is more computationally efficient, but the program itself requires an ``anchor vector'', which needs to be a good approximation of $\boldsymbol{z}$.
Another line of work is based on Wirtinger
flows, i.e., gradient flow in the complex setting~\cite{Candes7029630,NIPS2015_5743,Zhang:2016:PNP:3045390.3045499,NIPS2016_6319,cai2016,NIPS2016_6061,Soltanolkotabi2017}. Some theoretical justifications are also provided \cite{Candes7029630,Soltanolkotabi2017}. However, since the objective functions are nonconvex, these algorithms require careful initializations, which are usually only justified when the measurement vectors follow a very specific model, for example, when the observation vectors $\{\boldsymbol{a}_i\}_{i=1}^m$ are sampled from a complex normal distribution $CN(0,\mathbf{I})$; that is, the real and imaginary components are independently drawn from the real Gaussian distribution $N(0,\mathbf{I}/2)$. In addition, there are technical issues in implementation such as choosing step sizes, which makes the implementation slightly more complicated.
To cope with the nonconvexity of the phase retrieval problem, Sun et al. \cite{7541725} study the geometric landscape of a nonconvex objective function associated with phase retrieval, and prove that when $m=O(n\log^3n)$, their cost function has no bad critical point; as a result, an arbitrary initialization is sufficient and a trust-region method (TRM) can be applied to obtain the solution. However, this method is more complicated than the alternating minimization algorithm described below, due to its specific objective function and the associated trust-region method.
The most widely used method is perhaps the alternating minimization algorithm and its variants~\cite{Gerchberg72,Fienup78,Fienup82}, which is based on alternating projections onto nonconvex sets~\cite{Bauschke03}. This method is very simple to implement and is parameter-free. However, since it is a nonconvex algorithm, its properties such as convergence are only partially known. Netrapalli et al. \cite{Netrapalli7130654} studied a resampling version of this algorithm and established its convergence as the number of measurements $m$ goes to infinity when the measurement vectors are independent standard complex normal vectors. Marchesini et al. \cite{Marchesini2016815} studied and demonstrated necessary and sufficient conditions for the local convergence of this algorithm. Recently, Waldspurger \cite{Waldspurger2016} showed that when $m \geq Cn$ for a sufficiently large $C$, the alternating minimization algorithm succeeds with high
probability, provided that the algorithm is carefully initialized. In addition, with random initialization, the algorithm succeeds when $m\geq C n^2$. That work also conjectured that the alternating minimization algorithm with random initialization succeeds with $m\geq Cn$.
The contribution of this work is to show that a modified version of the alternating minimization algorithm with random initialization succeeds with high probability when $m=O(n\log^{3}n)$, which partially verifies the conjecture that the alternating minimization algorithm succeeds with high probability when $m=O(n)$. Compared with the previous methods based on Wirtinger flows and linear programming, the proposed algorithm is more practical since it does not require a good initialization; and compared with the existing works that also do not depend on good initializations, such as semidefinite programming and \cite{7541725}, the proposed alternating minimization algorithm is simpler and easier to implement.
The paper is organized as follows. Section~\ref{sec:main} presents the algorithm and the main results of the paper, and the proof of the key component, Theorem~\ref{thm:main1}, is given in Section~\ref{sec:proof}. We run simulations to verify Theorem~\ref{thm:main1} in Section~\ref{sec:simu}.
\section{Algorithm and Main Results}\label{sec:main}
The alternating minimization method is one of the earliest methods introduced for phase retrieval problems~\cite{Gerchberg72,Fienup78,Fienup82}, and it is based on alternating projections onto nonconvex sets~\cite{Bauschke03}. Let $\boldsymbol{A}\in\mathbb{C}^{m\times n}$ be the matrix with rows $\boldsymbol{a}_1^*,\boldsymbol{a}_2^*,\cdots,\boldsymbol{a}_m^*$. The goal of this algorithm is to find a vector in $\mathbb{C}^m$ that lies in both the set $\mathcal{S}=\operatorname{range}(\boldsymbol{A})\subset\mathbb{C}^m$ and the set of correct amplitudes $\mathcal{A}=\{\boldsymbol{w}\in\mathbb{C}^m: |\boldsymbol{w}_i|=y_i\}$. For this purpose, the algorithm picks an initial guess in $\mathbb{C}^m$ and alternately projects it onto the two sets.
The projections $P_\mathcal{S}, P_\mathcal{A}: \mathbb{C}^m\rightarrow\mathbb{C}^m$ can be defined by
\[
P_\mathcal{S}(\boldsymbol{w})=\boldsymbol{A}(\boldsymbol{A}^*\boldsymbol{A})^{-1}\boldsymbol{A}^*\boldsymbol{w},\,\,\,[P_\mathcal{A}(\boldsymbol{w})]_i=y_i\frac{\boldsymbol{w}_i}{|\boldsymbol{w}_i|},
\]
and the alternating minimization algorithm is given by
\begin{equation}\label{eq:alternate_minimization}
\boldsymbol{w}^{(k+1)}=P_{\mathcal{S}}P_{\mathcal{A}}\boldsymbol{w}^{(k)}.
\end{equation}
In fact, the alternating minimization method can be written down explicitly as follows. Writing $\boldsymbol{w}^{(k)}=\boldsymbol{A}\boldsymbol{x}^{(k)}$ and letting $\boldsymbol{e}_i\in\mathbb{C}^m$ be the indicator vector of the $i$-th coordinate, the update formula is
\[
\boldsymbol{A}\boldsymbol{x}^{(k+1)}=\boldsymbol{A}(\boldsymbol{A}^*\boldsymbol{A})^{-1}\boldsymbol{A}^*\left(\sum_{i=1}^m|\boldsymbol{a}_i^*\boldsymbol{z}|\frac{\boldsymbol{a}_i^*\boldsymbol{x}^{(k)}}{|\boldsymbol{a}_i^*\boldsymbol{x}^{(k)}|}\boldsymbol{e}_i\right)
=\boldsymbol{A}(\boldsymbol{A}^*\boldsymbol{A})^{-1}\left(\sum_{i=1}^m\frac{|\boldsymbol{a}_i^*\boldsymbol{z}|}{|\boldsymbol{a}_i^*\boldsymbol{x}^{(k)}|}\boldsymbol{a}_i^*\boldsymbol{x}^{(k)}\boldsymbol{a}_i\right),
\]
which implies
\begin{equation}\label{eq:algorithm}
\boldsymbol{x}^{(k+1)}=(\boldsymbol{A}^*\boldsymbol{A})^{-1}\left(\sum_{i=1}^m\frac{|\boldsymbol{a}_i^*\boldsymbol{z}|}{|\boldsymbol{a}_i^*\boldsymbol{x}^{(k)}|}\boldsymbol{a}_i\boldsymbol{a}_i^*\boldsymbol{x}^{(k)}\right).
\end{equation}
Define
\begin{equation}\label{eq:gx}
g_i(\boldsymbol{x})=\frac{|\boldsymbol{a}_i^*\boldsymbol{z}|}{|\boldsymbol{a}_i^*\boldsymbol{x}|}\boldsymbol{a}_i\boldsymbol{a}_i^*\boldsymbol{x}, \,\,\,\,g(\boldsymbol{x})=\sum_{i=1}^m g_i(\boldsymbol{x}),\,\,\,T(\boldsymbol{x})=(\boldsymbol{A}^*\boldsymbol{A})^{-1}g(\boldsymbol{x})
\end{equation}
then the algorithm \eqref{eq:algorithm} can be written as
\begin{equation}\label{eq:algorithm1}
\boldsymbol{x}^{(k+1)}=T(\boldsymbol{x}^{(k)}).
\end{equation}
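For concreteness, the following Python/NumPy sketch (our own illustration, not code from this paper; the function name \texttt{T\_step} and the conventions that the rows of \texttt{A} are $\boldsymbol{a}_i^*$ and that \texttt{y} stores the amplitudes $|\boldsymbol{a}_i^*\boldsymbol{z}|$ are our assumptions) implements one application of the operator $T$ in \eqref{eq:algorithm1}: the amplitude projection $P_\mathcal{A}$ followed by the least-squares projection onto $\operatorname{range}(\boldsymbol{A})$, which is exactly the update \eqref{eq:algorithm}.
\begin{verbatim}
import numpy as np

def T_step(A, y, x):
    # One alternating-minimization step:
    #   x_new = (A^* A)^{-1} sum_i (y_i / |a_i^* x|) a_i a_i^* x,
    # computed as the least-squares projection of P_A(A x) onto range(A).
    Ax = A @ x                               # entries a_i^* x (rows of A are a_i^*)
    w = y * np.exp(1j * np.angle(Ax))        # amplitude projection P_A
    x_new, *_ = np.linalg.lstsq(A, w, rcond=None)
    return x_new
\end{verbatim}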
In this work, we consider the algorithm~\eqref{eq:algorithm} in a batch setting. Similar to AltMinPhase \cite{Netrapalli7130654}, we divide the sampling vectors $\boldsymbol{a}_i$ (the rows of the matrix $\boldsymbol{A}$) and the corresponding observations $y_i$ into $B$ disjoint blocks $(\boldsymbol{y}^{(1)},\boldsymbol{A}^{(1)}), \cdots, (\boldsymbol{y}^{(B)},\boldsymbol{A}^{(B)})$ of roughly equal size, and perform the alternating minimization \eqref{eq:alternate_minimization} on the disjoint blocks cyclically. The procedure is summarized as Algorithm~\ref{alg:main}, where $T^{(k)}$ represents the alternating minimization operator with the $k$-th block $(\boldsymbol{y}^{(k)},\boldsymbol{A}^{(k)})$. We remark that while it is similar to AltMinPhase, this algorithm uses the partitions cyclically, rather than using each partition only once. As a result, it only requires finitely many observations to estimate $\boldsymbol{z}$ exactly, which is different from the method in~\cite{Netrapalli7130654}.
\begin{algorithm}
\caption{\ Alternating minimization in a batch setting}
\label{alg:main}
{\bf Input:} The sampling vectors $\boldsymbol{A}\in\mathbb{C}^{m\times n}$ and corresponding observations $\boldsymbol{y}\in\mathbb{C}^m$ partitioned into $B$ disjoint blocks $(\boldsymbol{y}^{(1)},\boldsymbol{A}^{(1)}), \cdots, (\boldsymbol{y}^{(B)},\boldsymbol{A}^{(B)})$ of roughly equal size. \\
{\bf Output:} An estimator of the underlying signal $\boldsymbol{z}$.\\
{\bf Steps:}\\
{\bf 1:} Let $\boldsymbol{x}^{(0)}$ be a random unit vector in $\mathbb{C}^n$, $k=0$.\\
{\bf 2:} Repeat \\
{\bf 3:} $\boldsymbol{x}^{(k+1)}\leftarrow T^{(\mathrm{mod}(k,B)+1)}\boldsymbol{x}^{(k)}$, $k=k+1$ \\
{\bf 4:} Until Convergence\\
{\bf Output:} $\lim_{k\rightarrow\infty} \boldsymbol{x}^{(k)}$.
\end{algorithm}
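A minimal Python/NumPy sketch of Algorithm~\ref{alg:main} is given below. It is our own illustration, not the authors' implementation: the helper name \texttt{alt\_min\_batch} and the choice of stopping after a fixed number of sweeps (instead of a formal convergence test) are ours.
\begin{verbatim}
import numpy as np

def alt_min_batch(A, y, B, n_iter=200, seed=0):
    # Cycle alternating-minimization steps over B disjoint blocks (Algorithm 1).
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    x /= np.linalg.norm(x)                       # step 1: random unit initialization
    blocks = np.array_split(np.arange(m), B)     # B disjoint blocks of roughly equal size
    for k in range(n_iter):
        idx = blocks[k % B]                      # block mod(k, B) + 1
        Ab, yb = A[idx], y[idx]
        w = yb * np.exp(1j * np.angle(Ab @ x))   # amplitude projection on this block
        x, *_ = np.linalg.lstsq(Ab, w, rcond=None)   # projection onto range(A^(k))
    return x
\end{verbatim}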
\subsection{Main Result}
Before we state our main result, we remark that in the following statements and proofs, we use $c, c', C, C'$ to denote fixed constants that do not depend on $m$ and $n$; depending on the context, they might denote different values in different equations and expressions.
\begin{thm}\label{thm:main}
There exist constants $C_0,C_0',C_1,C_2,C_3$ that do not depend on $n$ and $m$ such that the following holds: if $m>C_0C_0' n\log^{5}n$, $B=C_0\log n$, $n>C_3\log m$, and $m/B>C_3 n$, then with probability at least $1-C/\log n-\exp(-Cn)-2B/\log^2 n-BC_1\exp(-C_2m/B)$, Algorithm~\ref{alg:main} recovers the underlying $\boldsymbol{z}$ up to multiplication by a global phase, in the sense that $\lim_{k\rightarrow\infty}|\boldsymbol{z}^*\boldsymbol{x}^{(k)}|=1$.
\end{thm}
We remark that when $n$ and $m$ go to infinity together under the assumptions that $\frac{m}{n\log^5n}\rightarrow \infty$ and $\frac{n}{\log m}\rightarrow \infty$, the conditions in Theorem~\ref{thm:main} are satisfied and the probability in Theorem~\ref{thm:main} tends to $1$.
\subsection{Sketch of the proof}
The proof of the main result, Theorem~\ref{thm:main}, can be divided into three steps. First, the random initialization in step 1 of Algorithm~\ref{alg:main} exhibits a slight correlation with the ground truth. Then one may run a batched version of alternating projections by partitioning the
measurements into $O(\log n)$ batches. Since the batches are independent of each other, the second step proves that projecting onto the measurements of each batch will (with high probability) iteratively improve the estimate until it has a constant correlation with the ground truth. Finally, Theorem 3.1 of \cite{Waldspurger2016} gives that (with high probability) alternating projections converges to the ground truth provided the seed has a constant correlation with the ground truth.
\subsubsection{Step 1: random initialization}
Throughout the paper, we define $\theta(\boldsymbol{x})$ by $\sin^{-1}\bigl(|\boldsymbol{x}^*\boldsymbol{z}|/(\|\boldsymbol{x}\|\|\boldsymbol{z}\|)\bigr)$, which can be understood as the ``angle'' between $\boldsymbol{x}$ and the hyperplane orthogonal to $\boldsymbol{z}$ (though the angle is not literally well defined here, since $\boldsymbol{x}$ and $\boldsymbol{z}$ are complex-valued). For example, when $\theta(\boldsymbol{x})=\pi/2$, there exists a constant $c\in\mathbb{C}$ such that $\boldsymbol{x}=c\boldsymbol{z}$; when $\theta(\boldsymbol{x})=0$, $\boldsymbol{x}$ is orthogonal to $\boldsymbol{z}$ in the sense that $\boldsymbol{x}^*\boldsymbol{z}=0$.
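As a small illustration of this definition (our own sketch; the helper name \texttt{theta} is ours), $\theta(\boldsymbol{x})$ can be computed as follows, and the two extreme cases above are easy to check numerically.
\begin{verbatim}
import numpy as np

def theta(x, z):
    # theta(x) = arcsin(|x^* z| / (||x|| ||z||))
    s = abs(np.vdot(x, z)) / (np.linalg.norm(x) * np.linalg.norm(z))
    return np.arcsin(min(s, 1.0))      # min(., 1) guards against floating-point overshoot

z = np.array([1.0 + 0j, 0.0])
print(theta((2 + 1j) * z, z))                 # ~ pi/2: x is a complex multiple of z
print(theta(np.array([0.0, 1.0 + 0j]), z))    # 0: x is orthogonal to z
\end{verbatim}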
For Algorithm~\ref{alg:main}, the random initialization has a slight correlation with $\boldsymbol{z}$ as follows:
\begin{lemma}\label{lemma:initialization}
For any fixed $\boldsymbol{z}\in\mathbb{C}^n$ and a random unit vector $\boldsymbol{x}^{(0)}\in\mathbb{C}^n$, with probability at least $1-C/\log n-\exp(-Cn)$, $\theta(\boldsymbol{x}^{(0)})>\sin^{-1}(\frac{1}{2\log n\sqrt{n}})$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lemma:initialization}]
WLOG assume $\boldsymbol{z}=(1,0,\cdots,0)$ and realize $\boldsymbol{x}^{(0)}$ as a vector with i.i.d. $CN(0,1)$ entries normalized to unit length, so that $\sin\theta(\boldsymbol{x}^{(0)})=|\boldsymbol{x}^{(0)}_1|/\|\boldsymbol{x}^{(0)}\|$, with the right-hand side evaluated at the unnormalized vector. Using the Hanson-Wright inequality~\cite{rudelson2013} with $\|\boldsymbol{x}^{(0)}\|^2=\boldsymbol{x}^{(0)*}\mathbf{I}\boldsymbol{x}^{(0)}$, we have that with probability $1-\exp(-Cn)$, $\|\boldsymbol{x}^{(0)}\|<2\sqrt{n}$. In addition, with probability at least $1-C/\log n$, $|\boldsymbol{x}^{(0)}_1|>1/\log n$. Combining these two observations, Lemma~\ref{lemma:initialization} is proved. We remark that while~\cite{rudelson2013} presents the Hanson-Wright inequality for real-valued vectors and matrices, it is straightforward to generalize it to complex-valued vectors and matrices by writing every complex number as a pair of real numbers.
\end{proof}
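The bound in Lemma~\ref{lemma:initialization} is easy to check empirically. The following sketch (our own; the value $n=2000$ is an arbitrary choice) draws a random unit vector and compares $\sin\theta(\boldsymbol{x}^{(0)})$ with $\frac{1}{2\log n\sqrt{n}}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 2000
z = np.zeros(n, dtype=complex); z[0] = 1.0        # WLOG z = e_1
x0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x0 /= np.linalg.norm(x0)                          # random unit vector
corr = abs(np.vdot(x0, z))                        # sin(theta(x0)), since ||x0|| = ||z|| = 1
print(corr, 1.0 / (2 * np.log(n) * np.sqrt(n)))   # corr typically exceeds the bound
\end{verbatim}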
\subsubsection{Step 2: iterative improvement}
In the second step, we prove that the correlation between $\boldsymbol{x}^{(i)}$ and $\boldsymbol{z}$ improves over each iteration (with high probability). We first introduce a function $h(\theta): \mathbb{R}\rightarrow\mathbb{R}$ and an auxiliary lemma on its properties.
\begin{lemma}\label{lemma:conjecture} Let $a_1$ and $a_2$ be two complex random variables independently sampled from the complex normal distribution $CN(0,1)$, and let $h(\theta)=\operatorname{\mathbb{E}}_{a_1,a_2\sim CN(0,1)} |a_1||a_1\sin\theta+a_2\cos\theta|$. Then there exists $c>0$ such that for all $0<\theta<\pi/2$,
$h'(\theta)\geq c\min(\theta,\pi/2-\theta).$
In addition, there exists $c'>0$ such that $\min_{0\leq \theta<\pi/2}h(\theta)<c'.$
\end{lemma}
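The function $h$ has no convenient closed form, but it is straightforward to estimate by Monte Carlo. The sketch below (our own illustration; the helper name \texttt{h\_mc} and the sample size are ours) also checks the endpoint values, which should be close to $\pi/4$ at $\theta=0$ and $1$ at $\theta=\pi/2$ under the $CN(0,1)$ normalization used here.
\begin{verbatim}
import numpy as np

def h_mc(theta, n_samples=200_000, seed=0):
    # Monte Carlo estimate of h(theta) = E |a1| |a1 sin(theta) + a2 cos(theta)|
    # with a1, a2 ~ CN(0,1), i.e. real/imaginary parts ~ N(0, 1/2).
    rng = np.random.default_rng(seed)
    a1 = (rng.standard_normal(n_samples) + 1j * rng.standard_normal(n_samples)) / np.sqrt(2)
    a2 = (rng.standard_normal(n_samples) + 1j * rng.standard_normal(n_samples)) / np.sqrt(2)
    return np.mean(np.abs(a1) * np.abs(a1 * np.sin(theta) + a2 * np.cos(theta)))

print(h_mc(0.0), np.pi / 4)     # h(0)    = E|a1| E|a2| = pi/4
print(h_mc(np.pi / 2), 1.0)     # h(pi/2) = E|a1|^2     = 1
\end{verbatim}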
For the main result in this step, we investigate $T(\boldsymbol{x})$ as defined in \eqref{eq:algorithm1}, rather than $T^{(k)}$ as defined in Algorithm~\ref{alg:main}. However, $T$ is a random operator that exhibits the same distribution as each $T^{(k)}$.
\begin{thm}\label{thm:main1}
Assume that $\{\boldsymbol{a}_i\}_{i=1}^m$ are i.i.d. sampled from the complex normal distribution $CN(0,1)$. Then there exist $C_3,C_4>0$ such that if $m>C_3n$ and $n>C_3\log m$, then for any fixed $\boldsymbol{x}\in\mathbb{C}^n$, with probability at least $1-2/\log^2n$,
\[
\theta(T(\boldsymbol{x}))>\left(1-C_4\frac{n}{\theta(\boldsymbol{x})\sqrt{m}}\right)\left(\theta(\boldsymbol{x})+\tan^{-1} \frac{h'(\theta(\boldsymbol{x}))}{h(\theta(\boldsymbol{x}))}\right).
\]
\end{thm}
Theorem~\ref{thm:main1} is the key element of this work since it describes the performance of the alternating minimization in each iteration. Its proof is rather technical and it is deferred to Section~\ref{sec:proof}.
\subsubsection{Step 3: complete the proof}
To complete the proof of Theorem~\ref{thm:main}, we apply the following lemma, which is a consequence of \cite[Theorem 3.1]{Waldspurger2016}. Similar to Theorem~\ref{thm:main1}, it is a result for the operator $T$ defined in \eqref{eq:algorithm1}, instead of $T^{(i)}$ as defined in Algorithm~\ref{alg:main}.
\begin{lemma}[Theorem 3.1 in \cite{Waldspurger2016}]\label{lemma:convergence}
Assume that $\{\boldsymbol{a}_i\}_{i=1}^m$ are i.i.d. sampled from the complex normal distribution $CN(0,1)$. Then there exist $\epsilon, C_1, C_2, M >0$ and $0<\delta<1$ such that if $m\geq M n$, then with probability $1-C_1\exp(-C_2m)$, for all $\boldsymbol{x}$ such that \[
\inf_{\phi\in \mathbb{R}}\|e^{i\phi}\boldsymbol{z}-\boldsymbol{x}\|\leq \epsilon \|\boldsymbol{z}\|,
\] we have
\[
\inf_{\phi\in \mathbb{R}}\|e^{i\phi}\boldsymbol{z}-T(\boldsymbol{x})\|\leq \delta\inf_{\phi\in \mathbb{R}}\|e^{i\phi}\boldsymbol{z}-\boldsymbol{x}\|.
\]
\end{lemma}
Combining this result with the previous steps, we proved Theorem~\ref{thm:main}.
\begin{proof}[Proof of Theorem~\ref{thm:main}]
Applying Lemma~\ref{lemma:convergence} to the $B$ operators $T^{(i)}$ with $i=1, \cdots, B$, we have the following result: if $m/B>Mn$, then with probability $1-BC_1\exp(-C_2m/B)$, for all $\boldsymbol{x}$ such that \[
\inf_{\phi\in \mathbb{R}}\|e^{i\phi}\boldsymbol{z}-\boldsymbol{x}\|\leq \epsilon \|\boldsymbol{z}\|,
\] and for all $1\leq i\leq B$,
\begin{equation}\label{eq:contr5}
\inf_{\phi\in \mathbb{R}}\|e^{i\phi}\boldsymbol{z}-T^{(i)}(\boldsymbol{x})\|\leq \delta\inf_{\phi\in \mathbb{R}}\|e^{i\phi}\boldsymbol{z}-\boldsymbol{x}\|.
\end{equation}
Then as long as \begin{equation}\label{eq:sufficient}
\inf_{\phi\in \mathbb{R}}\|e^{i\phi}\boldsymbol{z}-\boldsymbol{x}^{(i)}\|\leq \epsilon \|\boldsymbol{z}\|,\,\,\text{ for some $0\leq i\leq B$}
\end{equation} then the sequence $\inf_{\phi\in \mathbb{R}}\|e^{i\phi}\boldsymbol{z}-\boldsymbol{x}^{(k)}\|$ for $k=i, i+1,\cdots$ will converge linearly to zero.
Since the operators $T^{(i)}$ are invariant to the scaling of $\boldsymbol{x}$ and $\inf_{\phi\in \mathbb{R},c\in\mathbb{C}}\|e^{i\phi}\boldsymbol{z}-c\boldsymbol{x}^{(i)}\|\leq (\frac{\pi}{2}-\theta(\boldsymbol{x}^{(i)}))\|\boldsymbol{z}\|$, the sufficient condition in \eqref{eq:sufficient} can be further reduced to
\begin{equation}\label{eq:sufficient1}
\frac{\pi}{2}-\theta(\boldsymbol{x}^{(i)})\leq \epsilon,\,\,\text{ for some $0\leq i\leq B$.}
\end{equation}
That is, it is sufficient to prove that \eqref{eq:sufficient1} holds with high probability. If for all $0\leq i< B$, $\theta(\boldsymbol{x}^{(i)})<\frac{\pi}{2}-\epsilon$, then Lemma~\ref{lemma:conjecture} implies that there exists $c_{\epsilon}>0$ such that $\tan^{-1}\frac{h'(\theta(\boldsymbol{x}^{(i)}))}{h(\theta(\boldsymbol{x}^{(i)}))}>c_{\epsilon}\theta(\boldsymbol{x}^{(i)})$ for all $0\leq i< B$. Since each batch has $m/B$ observations, $T^{(i+1)}$ is independent of $\boldsymbol{x}^{(i)}$ for $0\leq i< B$, $m/B>C_3n$, and $n>C_3\log m$, Theorem~\ref{thm:main1} implies that for each $0\leq i<B$, with probability $1-2/\log^2n$,
\begin{equation}\label{eq:contr}
\theta(\boldsymbol{x}^{(i+1)})>\left[(1+c_{\epsilon})\left(1-C_4\frac{\log n\sqrt{B}}{\theta(\boldsymbol{x}^{(i)})\sqrt{m}}\right)\right] \theta(\boldsymbol{x}^{(i)}).
\end{equation}
We choose $C_0$ such that $(1+c_\epsilon/2)^{C_0\log n}\sin^{-1}\bigl(\frac{1}{2\log n\sqrt{n}}\bigr)>\pi/2-\epsilon$, and $C_0'$ such that
\begin{equation}\label{eq:contr1}
(1+c_{\epsilon})\left(1-C_4\frac{1}{\theta(\boldsymbol{x}^{(0)})\sqrt{C_0'}}\right)>1+c_\epsilon/2,
\end{equation}
then when $B=C_0\log n$ and $m=C_0C_0'n\log^5n$, applying \eqref{eq:contr} and induction, we can prove that with probability $1-2B/\log^2n$,
\begin{equation}\label{eq:contr2}
\theta(\boldsymbol{x}^{(B)})>\left[1+\frac{c_{\epsilon}}{2}\right]^B \theta(\boldsymbol{x}^{(0)})>\frac{\pi}{2}-\epsilon.
\end{equation}
This contradicts the assumption that \eqref{eq:sufficient1} does not hold, under which $\theta(\boldsymbol{x}^{(B)})$ cannot be larger than $\pi/2-\epsilon$. Therefore, there exists $0\leq i\leq B$ such that $\theta(\boldsymbol{x}^{(i)})\geq\frac{\pi}{2}-\epsilon$, and Theorem~\ref{thm:main} is proved. The probabilistic estimate in Theorem~\ref{thm:main} comes from the union bound over Lemma~\ref{lemma:initialization}, \eqref{eq:contr5} and \eqref{eq:contr2}.
\end{proof}
\subsection{Discussion}\label{sec:discussion}
Theorem~\ref{thm:main} has several interesting connections with results in the current literature. First of all, it complements the analysis of AltMinPhase in~\cite{Netrapalli7130654}. While the analysis of AltMinPhase in~\cite{Netrapalli7130654} is one of the first theoretical guarantees for the alternating minimization algorithm, that work gives no instruction on how the samples should be divided into distinct blocks, or how the number and size of the blocks should be chosen. In addition, the analysis requires infinitely many observations to recover $\boldsymbol{z}$ exactly. In comparison, Theorem~\ref{thm:main} gives an estimate of the number of blocks to use. In addition, Theorem~\ref{thm:main1} also shows that when the size of each block is of order $O(n)$ up to a logarithmic factor, each iteration of the algorithm improves the estimate of $\boldsymbol{z}$, in the sense that every iteration decreases the angle between $\boldsymbol{z}$ and the estimator.
Our work also partially answers the conjecture from \cite{Waldspurger2016} that when the initialization is randomly chosen and $m=O(n)$, the alternating minimization algorithm succeeds with high probability. In comparison, we proved that the alternating minimization algorithm in a batch setting succeeds with $m=O(n\log^{5}n)$, which is an improvement over the estimate $m=O(n^2)$ in \cite{Waldspurger2016} (though we remark that the result in \cite{Waldspurger2016} is for the non-batch setting).
An interesting observation from \cite{Waldspurger2016} is the existence of stationary points when $m$ is below the order of $n^2$. In comparison, Theorem \ref{thm:main} shows that the algorithm avoids these stationary points when started from a random initialization. In this sense, Theorem \ref{thm:main} is very different from most existing theoretical guarantees for phase retrieval, which are based on the observation that there is no stationary point (or no stationary point within a neighborhood of $\boldsymbol{z}$).
We also emphasize that the result in this work can be applied to settings other than $\boldsymbol{a}_i\sim CN(0,\mathbf{I})$. In fact, most existing algorithms that succeed with $m=O(n)$ require a good initialization, which is constructed under the setting $\boldsymbol{a}_i\sim CN(0,\mathbf{I})$. For example, \cite{Netrapalli7130654} uses the top eigenvector of $\sum_{i=1}^m |\boldsymbol{a}_i^*\boldsymbol{z}|^2\boldsymbol{a}_i\boldsymbol{a}_i^*$, and \cite{NIPS2015_5743} applies a similar estimator with a thresholding-based scheme by using the top eigenvector of
\[
\sum_{i=1}^m |\boldsymbol{a}_i^*\boldsymbol{z}|^2\boldsymbol{a}_i\boldsymbol{a}_i^*\mathrm{1}_{|\boldsymbol{a}_i^*\boldsymbol{z}|^2\leq \frac{9}{m}\sum_{j=1}^{m}|\boldsymbol{a}_j^*\boldsymbol{z}|^2},
\]
and a similar scheme is also used in \cite{cai2016}. The only exception that we are aware of is \cite{NIPS2016_6061}, which introduces an orthogonality-promoting initialization that is obtained with a few simple power iterations and that works even when the distribution of $\boldsymbol{a}_i$ is heavy-tailed. In comparison, random initialization is a much simpler procedure and, as Corollary \ref{cor:generalization} below suggests, can also be used when $\{\boldsymbol{a}_i\}_{i=1}^m$ are i.i.d. sampled from the complex normal distribution $CN(0,\Sigma)$; that is, Theorem~\ref{thm:main} still holds under the setting $\boldsymbol{a}_i\sim CN(0,\Sigma)$.
\begin{cor}\label{cor:generalization}
Assume that $\boldsymbol{a}_i\sim CN(0,\Sigma)$, $\frac{\|\Sigma\boldsymbol{z}\|}{\|\Sigma^{\frac{1}{2}}\boldsymbol{z}\|\sqrt{\mathrm{tr}(\Sigma)}}>\frac{c'}{\sqrt{n}}$, $\mathrm{tr}(\Sigma)\geq \|\Sigma\|_F\log n$, and that $\Sigma$ is nonsingular. Then Algorithm~\ref{alg:main} converges to the underlying $\boldsymbol{z}$ (up to a global phase) under the assumptions stated in Theorem~\ref{thm:main}.
\end{cor}
\begin{proof}
The proof is based on the observation that it is equivalent to the setting where $\boldsymbol{a}_i\sim CN(0,\mathbf{I})$. If we let $\tilde{\boldsymbol{a}}_i=\Sigma^{-\frac{1}{2}}\boldsymbol{a}_i$, $\tilde{\boldsymbol{x}}^{(k)}=\Sigma^{\frac{1}{2}}\boldsymbol{x}^{(k)}$, and $\tilde{\boldsymbol{z}}=\Sigma^{\frac{1}{2}}\boldsymbol{z}$, then the update formula \eqref{eq:algorithm} is equivalent to the setting of estimating $\tilde{\boldsymbol{z}}$ with sensing vectors $\{\tilde{\boldsymbol{a}}_i\}_{i=1}^m$, with initialization $\tilde{\boldsymbol{x}}^{(0)}=\Sigma^{\frac{1}{2}}\boldsymbol{x}^{(0)}$ sampled from $CN(0,\Sigma)$.
Now let us investigate the angle between $\tilde{\boldsymbol{x}}^{(0)}$ and $\tilde{\boldsymbol{z}}$:
\begin{equation}\label{eq:initial0}
\frac{|\tilde{\boldsymbol{x}}^{(0)*}\tilde{\boldsymbol{z}}|}{\|\tilde{\boldsymbol{x}}^{(0)}\|\|\tilde{\boldsymbol{z}}\|}
=\frac{|{\boldsymbol{x}}^{(0)*}\Sigma {\boldsymbol{z}}|}{\|\Sigma^{\frac{1}{2}}{\boldsymbol{x}}^{(0)}\|\|\Sigma^{\frac{1}{2}}\boldsymbol{z}\|}.
\end{equation}
WLOG we may assume that all elements of $\boldsymbol{x}^{(0)}$ are i.i.d. sampled from the complex normal distribution $CN(0,1)$. Then ${\boldsymbol{x}}^{(0)*}\Sigma {\boldsymbol{z}}$ is distributed according to $CN(0,\|\Sigma\boldsymbol{z}\|^2)$, and $|{\boldsymbol{x}}^{(0)*}\Sigma {\boldsymbol{z}}|>\|\Sigma\boldsymbol{z}\|/\log n$ with probability $1-C/\log n$. In addition, Hanson-Wright inequality implies that \[\Pr\{|\|\Sigma^{\frac{1}{2}}{\boldsymbol{x}}^{(0)}\|^2-\mathrm{tr}(\Sigma)|>t\}\leq 2\exp(-c\min(\frac{t^2}{\|\Sigma\|_F^2},\frac{t}{\|\Sigma\|})).\]
Since $\mathrm{tr}(\Sigma)\geq \|\Sigma\|_F\log n\geq \|\Sigma\| \log n$, with probability at least $1-2\exp(-c\log n)$, $|\|\Sigma^{\frac{1}{2}}{\boldsymbol{x}}^{(0)}\|^2-\mathrm{tr}(\Sigma)|\leq c\mathrm{tr}(\Sigma)$. As a result, the RHS of \eqref{eq:initial0} is larger than
\[
\frac{\|\Sigma\boldsymbol{z}\|}{\log n\|\Sigma^{\frac{1}{2}}\boldsymbol{z}\|\sqrt{\mathrm{tr}(\Sigma)}}.
\]
If $\frac{\|\Sigma\boldsymbol{z}\|}{\|\Sigma^{\frac{1}{2}}\boldsymbol{z}\|\sqrt{\mathrm{tr}(\Sigma)}}>\frac{c'}{\sqrt{n}}$, this recovers Lemma~\ref{lemma:initialization}. Following the proof of Theorem~\ref{thm:main}, $\tilde{\boldsymbol{x}}^{(k)}$ converges to $\tilde{\boldsymbol{z}}$ (up to a global phase) as $k\rightarrow\infty$. Since $\Sigma$ is nonsingular, Corollary~\ref{cor:generalization} is proved.
\end{proof}
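The whitening argument above is easy to check numerically: with $\tilde{\boldsymbol{a}}_i=\Sigma^{-\frac{1}{2}}\boldsymbol{a}_i$ and $\tilde{\boldsymbol{z}}=\Sigma^{\frac{1}{2}}\boldsymbol{z}$, the observed amplitude $|\boldsymbol{a}_i^*\boldsymbol{z}|$ is unchanged. The following sketch (our own, with an arbitrary Hermitian positive definite $\Sigma$) verifies this identity.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Sigma = M @ M.conj().T + n * np.eye(n)               # a Hermitian positive definite covariance
w, V = np.linalg.eigh(Sigma)
S_half = (V * np.sqrt(w)) @ V.conj().T               # Hermitian square root of Sigma
S_inv_half = (V / np.sqrt(w)) @ V.conj().T           # and its inverse
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
a = S_half @ g                                       # a ~ CN(0, Sigma)
a_t, z_t = S_inv_half @ a, S_half @ z                # whitened sensing vector, transformed signal
print(np.isclose(abs(np.vdot(a, z)), abs(np.vdot(a_t, z_t))))   # observations agree
\end{verbatim}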
Lastly, we emphasize that Theorem~\ref{thm:main} does not apply to the standard alternating minimization algorithm (i.e., not in a batch setting). The reason is that the probabilistic estimate in Theorem~\ref{thm:main1} only holds for a fixed $\boldsymbol{x}$ that is independent of $\boldsymbol{A}$. However, in the standard alternating minimization algorithm, $\boldsymbol{x}^{(k)}$ for $k>1$ depends on $\boldsymbol{A}$, and Theorem~\ref{thm:main1} cannot be used to estimate $\theta(\boldsymbol{x}^{(k+1)})$. In comparison, Theorem 3.1 in \cite{Waldspurger2016} applies to all $\boldsymbol{x}$ as long as $\boldsymbol{x}^{(k)}$ is sufficiently close to $\boldsymbol{z}$. It is unclear how to generalize Theorem~\ref{thm:main} to the standard alternating minimization algorithm by ``decoupling'' the dependence between $\boldsymbol{x}^{(k)}$ and $\boldsymbol{A}$. This is an open question and we consider it an interesting future direction.
\section{Proof of Theorem~\ref{thm:main1}}\label{sec:proof}
To prove Theorem~\ref{thm:main1}, we first present Lemma~\ref{lemma:expectation}, which gives the exact formula for the expectation of $g_i(\boldsymbol{x})$ for $g_i$ defined in \eqref{eq:gx}. We also present Lemma~\ref{lemma:mean}, which shows that the expectation of $T(\boldsymbol{x})$ is a scalar multiple of the expectation of $g_i(\boldsymbol{x})$ up to a small error, and Lemma~\ref{lemma:var}, which shows that $T(\boldsymbol{x})$ has a small variance. Combining these results, we prove Theorem~\ref{thm:main1}. The lemmas are stated under the probabilistic setting of Theorem~\ref{thm:main1}, namely that $\{\boldsymbol{a}_i\}_{i=1}^m$ are i.i.d. $CN(0,1)$ and $\boldsymbol{x}$ is fixed. In the proof, we assume WLOG that $\|\boldsymbol{x}\|=\|\boldsymbol{z}\|=1$.
\begin{lemma}\label{lemma:expectation}
Let $\eta\in[0,2\pi]$ and $\boldsymbol{w}$ be chosen such that $\|\boldsymbol{w}\|=1$, $\boldsymbol{w}\perp\boldsymbol{z}$ (i.e., $\boldsymbol{w}^*\boldsymbol{z}=0$), and $\boldsymbol{x}=\sin(\theta)\boldsymbol{z}\exp(i\eta)+\cos(\theta)\boldsymbol{w}$. Then for $g_i$ defined in \eqref{eq:gx},
\[
\operatorname{\mathbb{E}} g_i(\boldsymbol{x})= h(\theta) \boldsymbol{x} + h'(\theta)\boldsymbol{d},
\]
where $\boldsymbol{d}=\cos(\theta)\boldsymbol{z}\exp(i\eta)-\sin(\theta)\boldsymbol{w}$.
\end{lemma}
\begin{lemma}\label{lemma:mean}
For any $1\leq i\leq m$ and $\Sigma_i=\sum_{1\leq j\leq m, j\neq i}\boldsymbol{a}_j\boldsymbol{a}_j^*$,
\begin{equation}\label{eq:mean1}
\left\|\operatorname{\mathbb{E}} T(\boldsymbol{x})-m\operatorname{\mathbb{E}}\left(\frac{1}{1+ \mathrm{tr}(\Sigma_i^{-1})}\Sigma_i^{-1}\right)\operatorname{\mathbb{E}} g_i(\boldsymbol{x})\right\|<Cn/m\end{equation}
\begin{equation}\label{eq:mean2}
\left\|\boldsymbol{z}^*\operatorname{\mathbb{E}} T(\boldsymbol{x})-m\boldsymbol{z}^*\operatorname{\mathbb{E}}\left(\frac{1}{1+ \mathrm{tr}(\Sigma_i^{-1})}\Sigma_i^{-1}\right)\operatorname{\mathbb{E}} g_i(\boldsymbol{x})\right\|<Cn\sqrt{n}/m
\end{equation}
\end{lemma}
\begin{lemma}\label{lemma:estimation_g}
For $g(x)$ defined in \eqref{eq:gx}, there exists $C>0$ such that
\[
\Pr(\|g(\boldsymbol{x})\|>Ctm)<\exp(-t^2).
\]
\end{lemma}
\begin{lemma}\label{lemma:var}
There exists $C>0$ such that for all $1\leq i\leq n$,
\[
\operatorname{\mathbb{E}}[\|T(\boldsymbol{x})-\operatorname{\mathbb{E}} T(\boldsymbol{x})\|^2]<Cn/m,\,\, \text{and}\,\,\mathrm{Var} [\boldsymbol{z}^* T(\boldsymbol{x})] < C/m.
\]
\end{lemma}
We first prove Theorem~\ref{thm:main1}, with the proofs of lemmas deferred.
\begin{proof}[Proof of Theorem~\ref{thm:main1}]
Applying Chebyshev's inequality to Lemma~\ref{lemma:var}, we have that with probability at least $1-2/\log^2 n$,
\begin{equation}\label{eq:Chebyshev}
\|T(\boldsymbol{x})-\operatorname{\mathbb{E}} T(\boldsymbol{x})\|<C\sqrt{n}\log n/\sqrt{m}, \|\boldsymbol{z}^*T(\boldsymbol{x})-\boldsymbol{z}^* \operatorname{\mathbb{E}} T(\boldsymbol{x})\|<C\log n/\sqrt{m}.
\end{equation}
In addition, $\operatorname{\mathbb{E}}\left(\frac{1}{1+ \mathrm{tr}(\Sigma_i^{-1})}\Sigma_i^{-1}\right)$ is a scalar matrix, and \eqref{eq:gassian_sigma} implies that with probability at least $1/2$, the largest and the smallest singular values of $\Sigma_i^{-1}$ are both of order $1/m$, so there exists some $c=O(1)$ such that its diagonal entries are larger than $c/m$.
Lemma~\ref{lemma:expectation} implies that the angle between $\boldsymbol{z}$ and $\operatorname{\mathbb{E}} g_i(\boldsymbol{x})$ satisfies
\[
\frac{|\boldsymbol{z}^*\operatorname{\mathbb{E}} g_i(\boldsymbol{x})|}{\|\operatorname{\mathbb{E}} g_i(\boldsymbol{x})\|}=\sin\left(\theta(\boldsymbol{x})+\tan^{-1}\frac{h'(\theta(\boldsymbol{x}))}{h(\theta(\boldsymbol{x}))}\right).
\]
Combining it with $\|\operatorname{\mathbb{E}} g_i(\boldsymbol{x})\|\geq 1$ (which follows from Lemma~\ref{lemma:expectation}), $\operatorname{\mathbb{E}}\left(\frac{1}{1+ \mathrm{tr}(\Sigma_i^{-1})}\Sigma_i^{-1}\right)=\frac{c}{m}\mathbf{I}$ with $c=O(1)$, \eqref{eq:Chebyshev}, and Lemma~\ref{lemma:mean},
\begin{align*}
\frac{|\boldsymbol{z}^*T(\boldsymbol{x})|}{\|T(\boldsymbol{x})\|}\geq \frac{c\sin\left(\theta(\boldsymbol{x})+\tan^{-1}\frac{h'(\theta(\boldsymbol{x}))}{h(\theta(\boldsymbol{x}))}\right)-C\frac{\log n}{\sqrt{m}}}{c+C\frac{\sqrt{n}\log n}{\sqrt{m}}}.
\end{align*}
Then Theorem~\ref{thm:main1} is proved by applying $ \theta(T(\boldsymbol{x}))=\sin^{-1}\left(\frac{|\boldsymbol{z}^*T(\boldsymbol{x})|}{\|T(\boldsymbol{x})\|}\right)$.
\end{proof}
\subsection{Proof of Auxiliary Lemmas for Theorem~\ref{thm:main1}}
\begin{proof}[Proof of Lemma~\ref{lemma:expectation}]
The proof is based on the observation that $g_i(\boldsymbol{x})$ is the gradient of $|\boldsymbol{a}_i^*\boldsymbol{x}||\boldsymbol{a}_i^*\boldsymbol{z}|$ with respect to $\boldsymbol{x}$. In particular, this work defines the derivatives of real-valued functions of complex variables as follows: $\nabla f(x)$ is chosen such that
\[
f(x+\Delta x)=f(x)+\mathrm{re}(\nabla f(x)^*\Delta x)+o(|\Delta x|).
\]
We can thus define $G(\boldsymbol{x})=\sum_{i=1}^mG_i(\boldsymbol{x})$ with $G_i(\boldsymbol{x})=|\boldsymbol{a}_i^*\boldsymbol{x}||\boldsymbol{a}_i^*\boldsymbol{z}|$, so that $g_i(\boldsymbol{x})=\nabla G_i(\boldsymbol{x})$ and $g(\boldsymbol{x})=\nabla G(\boldsymbol{x})$.
In addition, we can calculate $\operatorname{\mathbb{E}} G_i(\boldsymbol{x})$. Since the expectation is invariant to unitary transformations of $\boldsymbol{x}$ and $\boldsymbol{z}$ and $\theta(\boldsymbol{x})=\sin^{-1}(\frac{|\boldsymbol{x}^*\boldsymbol{z}|}{\|\boldsymbol{x}\|\|\boldsymbol{z}\|})$, WLOG we may phase $\boldsymbol{z}$ so that $\boldsymbol{x}^*\boldsymbol{z}$ is nonnegative and assume that $\boldsymbol{z}=(1,0,\cdots,0)$ and $\boldsymbol{x}=(\sin(\theta),\cos(\theta),0,\cdots,0)$. Then it is clear that \begin{align*}&\operatorname{\mathbb{E}}[G_i(\boldsymbol{x})]
=\operatorname{\mathbb{E}}_{[a_1,a_2]\sim CN(0,\mathbf{I})}\Big[|a_1||a_1\sin\theta+a_2\cos\theta|\Big]
=h(\theta).
\end{align*}
Since $\operatorname{\mathbb{E}}[G_i(\boldsymbol{x})]$ only depends on $\theta(\boldsymbol{x})$ and $\|\boldsymbol{x}\|$, its gradient has nonzero components only along two directions: $\boldsymbol{x}$ and the direction in which $\theta(\boldsymbol{x})$ changes fastest.
Since the function $G_i$ has the property $G_i(\boldsymbol{x}+t\boldsymbol{x})=(1+t)G_i(\boldsymbol{x})$, we have
\[
\boldsymbol{x}^*\nabla \operatorname{\mathbb{E}}[G_i(\boldsymbol{x})] = \operatorname{\mathbb{E}}[G_i(\boldsymbol{x})].
\]
By definition, $\boldsymbol{d}$ is the direction in which $\theta(\boldsymbol{x})$ changes fastest, that is, $\boldsymbol{d}=\operatorname*{arg\; max}_{\|\boldsymbol{y}\|=1,\boldsymbol{y}\in\mathbb{C}^n}\lim_{t\rightarrow0^+}\frac{\theta(\boldsymbol{x}+t\boldsymbol{y})-\theta(\boldsymbol{x})}{t}$, and $\theta(\boldsymbol{x}+t\boldsymbol{d})=\theta(\boldsymbol{x})+t+O(t^2)$. Combining it with $\|\boldsymbol{x}+t\boldsymbol{d}\|=\|\boldsymbol{x}\|+O(t^2)$, we have
\[
\boldsymbol{d}^*\nabla \operatorname{\mathbb{E}}[G_i(\boldsymbol{x})]=h'(\theta)_{\theta=\theta(\boldsymbol{x})}.
\]
Combining the above observations together, Lemma~\ref{lemma:expectation} is proved.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lemma:mean}]
The proof of Lemma~\ref{lemma:mean} is based on an upper bound on $\|\Sigma^{-1}\|$ for $\Sigma=\boldsymbol{A}^*\boldsymbol{A}$. To start, we apply the result from~\cite[Theorem 1.1]{Tao2010} that for any $n\times n$ complex normal matrix $\boldsymbol{A}$,
\begin{equation}\label{eq:inverse_norm0}
\Pr\left(\sigma_{\min}(\boldsymbol{A})\leq t\sqrt{n}\right)<t.
\end{equation}
For any $m\times n$ complex normal matrix $\boldsymbol{A}$, we denote its smallest singular value by $\sigma_{\min}(\boldsymbol{A})$. Since $\boldsymbol{A}$ contains $\lfloor{\frac{m}{n}}\rfloor$ independent submatrices of size $n\times n$, and $\sigma_{\min}(\boldsymbol{A})$ is at least the smallest singular value of any such submatrix of $\boldsymbol{A}$, we have
\begin{equation}
\Pr\left(\sigma_{\min}(\boldsymbol{A})\leq t\sqrt{n}\right)<t^{\lfloor{\frac{m}{n}}\rfloor}.
\end{equation}
We may also apply the result from~\cite[Theorem II.13]{Szarek:survey} that for any $m\times n$ matrix $\Gamma$ with entries i.i.d. sampled from the real Gaussian distribution $N(0,1)$, we have
\[
\Pr\left(\sqrt{m}+\sqrt{n}+t \geq \sigma_1(\Gamma)\geq \sigma_{n}(\Gamma)\geq \sqrt{m}-\sqrt{n}-t \right)>1-\exp(-t^2/2).
\]
Combining it with $\sigma_{n}(\boldsymbol{A})\geq \sigma_{n}(\mathrm{im}(\boldsymbol{A}))$ and $\sigma_{1}(\boldsymbol{A})\leq \sigma_1(\mathrm{re}(\boldsymbol{A}))+\sigma_1(\mathrm{im}(\boldsymbol{A}))$, \begin{equation}\label{eq:gassian_sigma}
\Pr\left\{\sigma_{n}(\boldsymbol{A})\geq \frac{1}{\sqrt{2}}(\sqrt{m}-\sqrt{n}-t),\,\,\sigma_{1}(\boldsymbol{A})\leq {\sqrt{2}}(\sqrt{m}+\sqrt{n}+t)\right\}>1-2\exp(-t^2/2).
\end{equation}
As a result, we have
\begin{align}\label{eq:inverse_norm}
\Pr\left(\sigma_{\min}(\boldsymbol{A})\leq t\right)\leq \min\left(\left(\frac{t}{\sqrt{n}}\right)^{\lfloor{\frac{m}{n}}\rfloor},2\exp\left(-\frac{(\sqrt{m}-\sqrt{n}-t\sqrt{2})^2}{2}\right)\right)\\
\leq \begin{cases}\left(\frac{t}{\sqrt{n}}\right)^{\lfloor{\frac{m}{n}}\rfloor},\,\,\text{if $t<\exp\left(-n\right)$,}\\2\exp\left(-\frac{(\sqrt{m}-\sqrt{n}-t\sqrt{2})^2}{2}\right),\,\,\text{if $t\geq \exp\left(-n\right)$}. \end{cases}
\end{align}
Now let us estimate the upper bound of $\operatorname{\mathbb{E}} \|\Sigma^{-1}\|^2$. Since $\|\Sigma^{-1}\|=\sigma_{\min}(\boldsymbol{A})^{-2}$, we have
\begin{align*}
&\operatorname{\mathbb{E}} \|\Sigma^{-1}\|^2\leq \frac{8}{m^2}+\int_{t=1/m^2}^{\infty}\Pr(\|\Sigma^{-1}\|^2>t)
\leq \frac{8}{m^2}+\int_{t=8/m^2}^{\infty}\Pr(\sigma_{\min}(\boldsymbol{A})<\frac{1}{\sqrt{t}}){\,\mathrm{d}} t\\\leq& \frac{8}{m^2}+\int_{t=8/m^2}^{\exp(n/2)}2\exp\left(-\frac{(\sqrt{m}-\sqrt{n}-\sqrt{2/t})^2}{2}\right){\,\mathrm{d}} t+\int_{t=\exp(n/2)}^{\infty}\frac{1}{\sqrt{tn}}^{\lfloor{\frac{m}{n}}\rfloor}{\,\mathrm{d}} t\\
\leq & \frac{8}{m^2}+2\exp(n/2)\exp\left(-\frac{(\sqrt{m}/2-\sqrt{n})^2}{2}\right)+\exp(-n/4)\leq \frac{C}{m^2},
\end{align*}
where the last two steps uses the assumption that $m>Cn$ and $n>C\log m$.
In addition, for any fixed $n\times n$ matrix $\Sigma$, Hanson-Wright inequality~\cite{rudelson2013}
\[
\Pr\left(|\boldsymbol{a}_i^*\Sigma\boldsymbol{a}_i-\mathrm{tr}(\Sigma)|>t\right)\leq 2\exp\left[-c\min\left(\frac{t^2}{\|\Sigma\|_F^2},\frac{t}{\|\Sigma\|}\right)\right]
\]
implies
\begin{equation}\label{eq:intermediate}
\Pr\left(|\boldsymbol{a}_i^*\Sigma\boldsymbol{a}_i-\mathrm{tr}(\Sigma)|>t\sqrt{n}\|\Sigma\|\right)\leq 2\exp\left[-c\min(t^2,t\sqrt{n})\right].
\end{equation}
Applying the Sherman–Morrison formula,
\[
T(\boldsymbol{x})= \Sigma^{-1}\sum_{i=1}^mg_i(\boldsymbol{x})=\sum_{i=1}^m[\Sigma_i^{-1}-\frac{\Sigma_i^{-1}\boldsymbol{a}_i\boldsymbol{a}_i^*\Sigma_i^{-1}}{1+\boldsymbol{a}_i^*\Sigma_i^{-1}\boldsymbol{a}_i}]g_i(\boldsymbol{x})
=\sum_{i=1}^m\frac{1}{1+\boldsymbol{a}_i^*\Sigma_i^{-1}\boldsymbol{a}_i}\Sigma_i^{-1}g_i(\boldsymbol{x}),
\]
we have
\begin{align}\nonumber
&\left\|\operatorname{\mathbb{E}} T(\boldsymbol{x})- \sum_{i=1}^m \operatorname{\mathbb{E}} \left(\frac{1}{1+ \mathrm{tr}(\Sigma_i^{-1})} \Sigma_i^{-1}\right)\operatorname{\mathbb{E}} g_i(\boldsymbol{x})
\right\|=m\left\|\operatorname{\mathbb{E}}\left[ \left(\frac{1}{1+\boldsymbol{a}_i^*\Sigma_i^{-1}\boldsymbol{a}_i}-\frac{1}{1+ \mathrm{tr}(\Sigma_i^{-1})}\right)\Sigma_i^{-1}g_i(\boldsymbol{x})\right]\right\|\\
\leq &m\operatorname{\mathbb{E}}\left\| \left(\frac{1}{1+\boldsymbol{a}_i^*\Sigma_i^{-1}\boldsymbol{a}_i}-\frac{1}{1+ \mathrm{tr}(\Sigma_i^{-1})}\right)\Sigma_i^{-1}g_i(\boldsymbol{x})\right\|\leq m\operatorname{\mathbb{E}}\left\| \left({\boldsymbol{a}_i^*\Sigma_i^{-1}\boldsymbol{a}_i}-{ \mathrm{tr}(\Sigma_i^{-1})}\right)\Sigma_i^{-1}g_i(\boldsymbol{x})\right\|\label{eq:mean_diff},
\end{align}
where the last inequality follows from the fact that
\[
\left|\frac{1}{1+\boldsymbol{a}_i^*\Sigma_i^{-1}\boldsymbol{a}_i}-\frac{1}{1+ \mathrm{tr}(\Sigma_i^{-1})}\right|\leq \left|{\boldsymbol{a}_i^*\Sigma_i^{-1}\boldsymbol{a}_i}-{ \mathrm{tr}(\Sigma_i^{-1})}\right|.
\]
Applying \cite[Proposition 2.2(1)]{measure2005}, for any $t>1$: with probability $1-t^2\exp(-t^2+1)$, $|\boldsymbol{a}_i^*\boldsymbol{z}|<t$, and with probability $1-t^{2n}\exp(-(t^2-1)n)$, $\|\boldsymbol{a}_i\|<t\sqrt{n}$; hence with probability $1-t^{2n}\exp(-(t^2-1)n)-t^2\exp(-t^2+1)$, $\|g_i(\boldsymbol{x})\|<t^2\sqrt{n}$. Combining it with \eqref{eq:intermediate}, the RHS of \eqref{eq:mean_diff} can be estimated by
\begin{align*}
&m\operatorname{\mathbb{E}}\left\| \left({\boldsymbol{a}_i^*\Sigma_i^{-1}\boldsymbol{a}_i}-{ \mathrm{tr}(\Sigma_i^{-1})}\right)\Sigma_i^{-1}g_i(\boldsymbol{x})\right\|
\\=&m\operatorname{\mathbb{E}}_{\{\boldsymbol{a}_j\}_{1\leq j\leq m, j\neq i}}\left[\operatorname{\mathbb{E}}_{\boldsymbol{a}_i}\left\| \left({\boldsymbol{a}_i^*\Sigma_i^{-1}\boldsymbol{a}_i}-{ \mathrm{tr}(\Sigma_i^{-1})}\right)\Sigma_i^{-1}g_i(\boldsymbol{x})\right\|\right]\\
\leq & C n m\operatorname{\mathbb{E}}_{\{\boldsymbol{a}_j\}_{1\leq j\leq m, j\neq i}}\|\Sigma_i^{-1}\|^2.
\end{align*}
Combining it with \eqref{eq:inverse_norm}, we have \eqref{eq:mean1}:
\[
\left\|\operatorname{\mathbb{E}} T(\boldsymbol{x})- \sum_{i=1}^m \operatorname{\mathbb{E}} \left(\frac{1}{1+ \mathrm{tr}(\Sigma_i^{-1})} \Sigma_i^{-1}\right)\operatorname{\mathbb{E}} g_i(\boldsymbol{x})\right\|<Cn/m.
\]
The proof of \eqref{eq:mean2} is similar to the proof of \eqref{eq:mean1}, with the estimation of $\|g_i(\boldsymbol{x})\|$ replaced by $|\boldsymbol{z}^*g_i(\boldsymbol{x})|$. For $|\boldsymbol{z}^*g_i(\boldsymbol{x})|$ we have
\[
|\boldsymbol{z}^*g_i(\boldsymbol{x})|=\left|\frac{|\boldsymbol{a}_i^*\boldsymbol{z}|}{|\boldsymbol{a}_i^*\boldsymbol{x}|}\boldsymbol{z}^*\boldsymbol{a}_i\boldsymbol{a}_i^*\boldsymbol{x}\right|=|\boldsymbol{a}_i^*\boldsymbol{z}|^2,
\]
and $\boldsymbol{a}_i^*\boldsymbol{z}\sim CN(0,1)$, so with probability at least $1-2\exp(-t^2/2)$, $|\boldsymbol{z}^*g_i(\boldsymbol{x})|<t^2$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lemma:estimation_g}]
Let $\mathrm{Sp}(\boldsymbol{x},\boldsymbol{z})$ be the two-dimensional subspace spanned by $\boldsymbol{x}$ and $\boldsymbol{z}$, and let $\boldsymbol{P}_{\mathrm{Sp}(\boldsymbol{x},\boldsymbol{z})^\perp}\in\mathbb{C}^{(n-2)\times n}$ be a projection matrix (with orthonormal rows) onto the $(n-2)$-dimensional subspace orthogonal to $\mathrm{Sp}(\boldsymbol{x},\boldsymbol{z})$; then
\[
\boldsymbol{P}_{\mathrm{Sp}(\boldsymbol{x},\boldsymbol{z})^\perp}g(\boldsymbol{x})=\sum_{i=1}^m|\boldsymbol{a}_i^*\boldsymbol{z}|\frac{\boldsymbol{a}_i^*\boldsymbol{x}}{|\boldsymbol{a}_i^*\boldsymbol{x}|}\boldsymbol{P}_{\mathrm{Sp}(\boldsymbol{x},\boldsymbol{z})^\perp}\boldsymbol{a}_i,
\]
where $\boldsymbol{P}_{\mathrm{Sp}(\boldsymbol{x},\boldsymbol{z})^\perp}\boldsymbol{a}_i\in \mathbb{C}^{n-2}$ is i.i.d. sampled from $CN(0,\mathbf{I})$ and is independent of $|\boldsymbol{a}_i^*\boldsymbol{z}|\frac{\boldsymbol{a}_i^*\boldsymbol{x}}{|\boldsymbol{a}_i^*\boldsymbol{x}|}$. As a result, conditionally on these coefficients, $\boldsymbol{P}_{\mathrm{Sp}(\boldsymbol{x},\boldsymbol{z})^\perp}g(\boldsymbol{x})\in\mathbb{C}^{n-2}$ is a vector whose elements are i.i.d. sampled from $CN(0,\sum_{i=1}^m |\boldsymbol{a}_i^*\boldsymbol{z}|^2)$.
Applying the Hanson-Wright inequality~\cite{rudelson2013}, we have
\[
\Pr(\|\boldsymbol{P}_{\mathrm{Sp}(\boldsymbol{x},\boldsymbol{z})^\perp}g(\boldsymbol{x})\|^2>2tn \sum_{i=1}^m |\boldsymbol{a}_i^*\boldsymbol{z}|^2)<\exp(-Cnt^2),
\]
and
\begin{equation*}
\|\boldsymbol{P}_{\mathrm{Sp}(\boldsymbol{x},\boldsymbol{z})}g(\boldsymbol{x})\|\leq \sum_{i=1}^m\|\boldsymbol{P}_{\mathrm{Sp}(\boldsymbol{x},\boldsymbol{z})}\boldsymbol{a}_i\boldsymbol{a}_i^*\boldsymbol{z}\|\leq \sum_{i=1}^m\|\boldsymbol{P}_{\mathrm{Sp}(\boldsymbol{x},\boldsymbol{z})}\boldsymbol{a}_i\|^2.
\end{equation*}
In addition, Bernstein's inequality implies that there exists $C>0$ such that
\begin{equation}
\Pr(\sum_{i=1}^m|\boldsymbol{a}_i^*\boldsymbol{z}|^2>Ctm)<\exp(-t^2),\,\,\,\,\,\Pr(\sum_{i=1}^m\|\boldsymbol{P}_{\mathrm{Sp}(\boldsymbol{x},\boldsymbol{z})}\boldsymbol{a}_i\|^2>Ctm)<\exp(-t^2).
\end{equation}
Combining these estimations together with
\[
\|g(\boldsymbol{x})\|\leq \|\boldsymbol{P}_{\mathrm{Sp}(\boldsymbol{x},\boldsymbol{z})}g(\boldsymbol{x})\|+\|\boldsymbol{P}_{\mathrm{Sp}(\boldsymbol{x},\boldsymbol{z})^\perp}g(\boldsymbol{x})\|,
\]
the lemma is proved.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lemma:var}]
First, we apply the following Lemma, which is a straightforward generalization of the Tensorization of variance theorem~\cite[Theorem 2.3]{Handel_course} to the complex setting:
\begin{lemma}\label{lemma:tensorization}
For complex random variables $X_1,\cdots,X_n$ and $f: \mathbb{C}^n\rightarrow\mathbb{C}$, we have
\[
\mathrm{Var}[f(X_1,\cdots,X_n)]\leq \operatorname{\mathbb{E}}\left[\sum_{i=1}^n\mathrm{Var}_i(f(X_1,\cdots,X_n))\right],
\]
where $\mathrm{Var}_i$ is the variance of $f$ with respect to the variable $X_i$ only, the remaining variables being kept fixed.
\end{lemma}
\begin{proof}
Applying $\mathrm{Var}[f]=\mathrm{Var}[\mathrm{re}(f)]+\mathrm{Var}[\mathrm{im}(f)]$ and the same argument as in the proof of \cite[Theorem 2.3]{Handel_course} for both the real and the imaginary part, the lemma is proved.
\end{proof}
Applying Lemma~\ref{lemma:tensorization}, denote the variance when $\{\boldsymbol{a}_i\}_{i\neq j}$ are fixed by
\[
\mathrm{Var}_j(\boldsymbol{z}^* T(\boldsymbol{x})),
\]
then we have
\begin{equation}\label{eq:tensor}
\mathrm{Var}(\boldsymbol{z}^* T(\boldsymbol{x}))\leq \operatorname{\mathbb{E}} \sum_{j=1}^m[\mathrm{Var}_j(\boldsymbol{z}^* T(\boldsymbol{x}))]
\end{equation}
Then
\begin{align*}
&\mathrm{Var}_j(\boldsymbol{z}^* T(\boldsymbol{x})) \leq [\boldsymbol{z}^*\Sigma^{-1}\sum_{i=1}^mg_i(\boldsymbol{x})-\boldsymbol{z}^*\Sigma_i^{-1}\sum_{i=1,i\neq j}^mg_i(\boldsymbol{x})]^2\\=&\Big[\boldsymbol{z}^*\Sigma^{-1}g_j(\boldsymbol{x})-(1+\boldsymbol{a}_j^*\Sigma_j^{-1}\boldsymbol{a}_j)^{-1}\boldsymbol{z}^*\Sigma_j^{-1}\boldsymbol{a}_j\boldsymbol{a}_j^*\Sigma_j^{-1}\sum_{i=1,i\neq j}^mg_i(\boldsymbol{x})\Big]^2\\ \leq & 2\Big[\boldsymbol{z}^*\Sigma^{-1}\boldsymbol{a}_j \frac{|\boldsymbol{a}_j^*\boldsymbol{z}|}{|\boldsymbol{a}_j^*\boldsymbol{x}|}\boldsymbol{a}_j^*\boldsymbol{x}\Big]^2+2\Big[\boldsymbol{z}^*\Sigma_j^{-1}\boldsymbol{a}_j\boldsymbol{a}_j^*\Sigma_j^{-1}\sum_{i=1,i\neq j}^mg_i(\boldsymbol{x})\Big]^2\\
\leq & 4\Big[\boldsymbol{z}^*\Sigma^{-1}_j\boldsymbol{a}_j \frac{|\boldsymbol{a}_j^*\boldsymbol{z}|}{|\boldsymbol{a}_j^*\boldsymbol{x}|}\boldsymbol{a}_j^*\boldsymbol{x}\Big]^2+4\Big[\frac{1}{1+\boldsymbol{a}_j^*\Sigma_j^{-1}\boldsymbol{a}_j}\boldsymbol{z}^*\Sigma^{-1}_j\boldsymbol{a}_j\boldsymbol{a}_j^*\Sigma^{-1}_j\boldsymbol{a}_j \frac{|\boldsymbol{a}_j^*\boldsymbol{z}|}{|\boldsymbol{a}_j^*\boldsymbol{x}|}\boldsymbol{a}_j^*\boldsymbol{x}\Big]^2+2\Big[\boldsymbol{z}^*\Sigma_j^{-1}\boldsymbol{a}_j\boldsymbol{a}_j^*\Sigma_j^{-1}\sum_{i=1,i\neq j}^mg_i(\boldsymbol{x})\Big]^2\\
\leq &8\Big[\boldsymbol{z}^*\Sigma^{-1}_j\boldsymbol{a}_j |\boldsymbol{a}_j^*\boldsymbol{z}|\Big]^2+2\Big[\boldsymbol{z}^*\Sigma_j^{-1}\boldsymbol{a}_j\boldsymbol{a}_j^*\Sigma_j^{-1}\sum_{i=1,i\neq j}^mg_i(\boldsymbol{x})\Big]^2.
\end{align*}
Note that when $\{\boldsymbol{a}_i\}_{1\leq i\leq m, i\neq j}$ are fixed and $\boldsymbol{a}_j\sim CN(0,1)$, we have $\boldsymbol{z}^*\Sigma^{-1}_j\boldsymbol{a}_j\sim CN(0,\|\boldsymbol{z}^*\Sigma^{-1}_j\|^2)$, $\boldsymbol{a}_j^*\boldsymbol{z}\sim CN(0,\|\boldsymbol{z}\|^2)=CN(0,1)$, and $\boldsymbol{a}_j^*\Sigma_j^{-1}\sum_{i=1,i\neq j}^mg_i(\boldsymbol{x})\sim CN(0, \|\Sigma_j^{-1}\sum_{i=1,i\neq j}^mg_i(\boldsymbol{x})\|^2)$, so
\begin{align*}
&\operatorname{\mathbb{E}} \mathrm{Var}_j(\boldsymbol{z}^* T(\boldsymbol{x}))\leq C \operatorname{\mathbb{E}} \left[\|\boldsymbol{z}^*\Sigma^{-1}_j\|^2 +\|\boldsymbol{z}^*\Sigma^{-1}_j\|^2\|\Sigma_j^{-1}\sum_{i=1,i\neq j}^mg_i(\boldsymbol{x})\|^2\right]\\\leq& C \operatorname{\mathbb{E}} \left[\|\Sigma^{-1}_j\|^2 +\|\Sigma^{-1}_j\|^4\|\sum_{i=1,i\neq j}^mg_i(\boldsymbol{x})\|^2\right].
\end{align*}
Combining it with the estimation of $\|\Sigma_j^{-1}\|$ in \eqref{eq:inverse_norm} and the estimation of $\|\sum_{1\leq i\leq m, i\neq j}^mg_i(\boldsymbol{x})\|$ in Lemma~\ref{lemma:estimation_g} (note that the estimation of $\|\sum_{1\leq i\leq m, i\neq j}^mg_i(\boldsymbol{x})\|$ is identical to the estimation of $\|g(\boldsymbol{x})\|=\|\sum_{1\leq i\leq m}^mg_i(\boldsymbol{x})\|$), we have
\[
\operatorname{\mathbb{E}} \mathrm{Var}_j(\boldsymbol{z}^*T(\boldsymbol{x}))<C/m^2.
\]
Applying \eqref{eq:tensor}, we have $\mathrm{Var} [\boldsymbol{z}^* T(\boldsymbol{x})]<C/m$.
Similarly, we can prove the other inequality by showing that for any vector $\boldsymbol{e}_i$ whose $i$-th element is $1$ and whose other elements are zero, $\mathrm{Var} [\boldsymbol{e}_i^* T(\boldsymbol{x})]<C/m$.
\end{proof}
\section{Simulations}\label{sec:simu}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{simu1.png}
\includegraphics[width=0.45\textwidth]{simu2.png}
\includegraphics[width=0.45\textwidth]{simu4.png}
\includegraphics[width=0.45\textwidth]{simu3.png}
\end{center}
\caption{Comparison between the predicted and the empirical value of $\theta(T(\boldsymbol{x}))$, with various settings of $(n,m)$.}\label{fig:conjecture2}
\end{figure}
This section aims to verify the result in Theorem~\ref{thm:main1}. In particular, we would like to investigate whether empirically, $\theta(\boldsymbol{x})$ and $\theta(T(\boldsymbol{x}))$ have the relation predicted by Theorem~\ref{thm:main1} and its proof:
\begin{equation}\label{eq:predicted}
\theta(T(\boldsymbol{x}))\approx \theta(\boldsymbol{x})+\tan^{-1}\frac{h'(\theta(\boldsymbol{x}))}{h(\theta(\boldsymbol{x}))}.
\end{equation}
For this purpose, we run simulations and compare the empirically observed $\theta(T(\boldsymbol{x}))$ with the predicted values. We run two simulations with different settings of $n, m$. For each setting and each $\theta(\boldsymbol{x})$, we repeat the alternating minimization step $1000$ times with independent randomness and visualize the $10\%, 50\%, 90\%$ quantiles of the observed $\theta(T(\boldsymbol{x}))$ in Figure~\ref{fig:conjecture2}, as well as the predicted value in \eqref{eq:predicted}. The figure clearly indicates that the predicted value is close to the empirical values; as a result, $\theta(T(\boldsymbol{x}))>\theta(\boldsymbol{x})$ with high probability as long as $\theta(\boldsymbol{x})$ is not too small, which means that with high probability the alternating minimization algorithm monotonically reduces the angle between the estimated and the underlying signal. In addition, the spread of the distribution of $\theta(T(\boldsymbol{x}))$ appears to be on the order of $1/\sqrt{m}$.
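A rough Python/NumPy sketch of this experiment (our own, not the original simulation code; the helper name \texttt{one\_step\_angle} and the parameter choices are arbitrary) is given below. The predicted value on the right-hand side of \eqref{eq:predicted} can be obtained from a Monte Carlo estimate of $h$ and a finite-difference estimate of $h'$, as in the sketch following Lemma~\ref{lemma:conjecture}.
\begin{verbatim}
import numpy as np

def one_step_angle(n=64, m=2048, theta0=0.3, seed=0):
    # Draw fresh Gaussian measurements, apply one step of T, and return theta(T(x)).
    rng = np.random.default_rng(seed)
    z = np.zeros(n, dtype=complex); z[0] = 1.0
    w = np.zeros(n, dtype=complex); w[1] = 1.0
    x = np.sin(theta0) * z + np.cos(theta0) * w            # theta(x) = theta0 by construction
    A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
    y = np.abs(A @ z)                                       # observed amplitudes
    Ax = A @ x
    Tx, *_ = np.linalg.lstsq(A, y * np.exp(1j * np.angle(Ax)), rcond=None)
    s = abs(np.vdot(Tx, z)) / np.linalg.norm(Tx)            # ||z|| = 1
    return np.arcsin(min(s, 1.0))

samples = [one_step_angle(seed=s) for s in range(100)]
print(np.quantile(samples, [0.1, 0.5, 0.9]))   # compare with theta0 + arctan(h'(theta0)/h(theta0))
\end{verbatim}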
\section{Summary and Future Directions}
This work analyzes the performance of the alternating minimization algorithm for phase retrieval. Theoretical analysis shows that the angle between the current iterate and the underlying signal is reduced at each iteration with high probability. Based on this observation, it is shown that alternating minimization in a batch setting with random initialization can recover the underlying signal as long as $m=O(n\log^{5}n)$.
A future direction is the analysis of standard alternating minimization without the batch setting. The current analysis only controls the performance of phase retrieval per iteration; as discussed at the end of Section~\ref{sec:discussion}, it does not apply to the standard alternating minimization algorithm. We hope to find a way to decouple the correlation between $\boldsymbol{x}^{(k)}$ and $\boldsymbol{A}$, in order to prove the conjecture that the alternating minimization algorithm succeeds with $m=O(n)$. It is also interesting to improve the probabilistic estimates in this work, for example, by finding the exact value of $C_0$ and possibly removing the logarithmic factors from the current estimates.
\section{Appendix}
\begin{proof}[Proof of Lemma~\ref{lemma:conjecture}]
Writing $h$ in terms of real variables, we have
\[
h(\theta)=\operatorname{\mathbb{E}}_{a_1,a_2,b_1,b_2\sim N(0,1/2)}\sqrt{a_1^2+b_1^2}\sqrt{(a_1\sin\theta+a_2\cos\theta)^2+(b_1\sin\theta+b_2\cos\theta)^2},
\]
where $a_j+\mathrm{i}b_j$ plays the role of the complex variable $a_j\sim CN(0,1)$ in Lemma~\ref{lemma:conjecture}, whose real and imaginary parts have variance $1/2$.
\]
Using $(\sqrt{f(x)})''=(\frac{1}{2}f(x)^{-1/2}f'(x))'=\frac{1}{2}f(x)^{-1/2}f''(x)-\frac{1}{4}f(x)^{-3/2}f'(x)^2$ and \begin{align*}&[(a_1\sin\theta+a_2\cos\theta)^2+(b_1\sin\theta+b_2\cos\theta)^2]'\\=&2(a_1^2-a_2^2+b_1^2-b_2^2)\sin\theta\cos\theta+2(a_1a_2+b_1b_2)(\cos^2\theta-\sin^2\theta)\\
&[(a_1\sin\theta+a_2\cos\theta)^2+(b_1\sin\theta+b_2\cos\theta)^2]''\\=&2(a_1^2-a_2^2+b_1^2-b_2^2)(\cos^2\theta-\sin^2\theta)-8(a_1a_2+b_1b_2)\cos\theta\sin\theta,\end{align*}
we obtain (writing $f(\theta)=(a_1\sin\theta+a_2\cos\theta)^2+(b_1\sin\theta+b_2\cos\theta)^2$)
\begin{align*}
h''(\theta)=\operatorname{\mathbb{E}} \frac{\sqrt{a_1^2+b_1^2}}{[f(\theta)]^{\frac{3}{2}}}&\Big[f(\theta)[(a_1^2-a_2^2+b_1^2-b_2^2)(\cos^2\theta-\sin^2\theta)-4(a_1a_2+b_1b_2)\cos\theta\sin\theta]\\-&[(a_1^2-a_2^2+b_1^2-b_2^2)\sin\theta\cos\theta+(a_1a_2+b_1b_2)(\cos^2\theta-\sin^2\theta)]^2\Big],
\end{align*}
and
\[
h''(0)=\operatorname{\mathbb{E}} \frac{\sqrt{a_1^2+b_1^2}}{[a_2^2+b_2^2]^{\frac{3}{2}}}\Big[[a_2^2+b_2^2][a_1^2+b_1^2-a_2^2-b_2^2]-[a_1a_2+b_1b_2]^2\Big].
\]
Using the fact that, conditionally on $a_1^2+b_1^2$ and $a_2^2+b_2^2$, we have $\operatorname{\mathbb{E}} [a_1a_2+b_1b_2]^2=\frac{1}{2}[a_1^2+b_1^2][a_2^2+b_2^2]$, it follows that
\begin{align*}
&h''(0)=\operatorname{\mathbb{E}} \frac{\sqrt{a_1^2+b_1^2}}{[a_2^2+b_2^2]^{\frac{3}{2}}}\Big[\frac{1}{2}[a_2^2+b_2^2][a_1^2+b_1^2]-[a_2^2+b_2^2]^2\Big]
\\=&\operatorname{\mathbb{E}} \frac{1}{2}[a_2^2+b_2^2]^{-\frac{1}{2}}[a_1^2+b_1^2]^{\frac{3}{2}}-\sqrt{a_1^2+b_1^2}\sqrt{a_2^2+b_2^2}.
\end{align*}
Applying
\[
\operatorname{\mathbb{E}} (a_1^2+b_1^2)^k=\frac{1}{\pi}\int_{x,y}(x^2+y^2)^ke^{-x^2-y^2}{\,\mathrm{d}} x{\,\mathrm{d}} y=2 \int_{r=0}^\infty r^{2k+1}e^{-r^2}{\,\mathrm{d}} r=\int_{z=0}^\infty z^k e^{-z}{\,\mathrm{d}} z=\Gamma(k+1),
\]
we conclude that $h''(0)=\frac{1}{2}\Gamma(\frac{1}{2})\Gamma(\frac{5}{2})-\Gamma(\frac{3}{2})^2=\frac{\pi}{8}>0$.
Using the fact (a consequence of the rotation invariance of the Gaussian distribution) that
\[
h''(\phi)=\frac{{\,\mathrm{d}}^2}{{\,\mathrm{d}}\theta^2}\operatorname{\mathbb{E}}
\sqrt{(a_1\cos\phi+a_2\sin\phi)^2+(b_1\cos\phi+b_2\sin\phi)^2}\,\sqrt{(a_1\sin\theta+a_2\cos\theta)^2+(b_1\sin\theta+b_2\cos\theta)^2}\Big|_{\theta=0}
\]
and applying the same procedure as in the calculation of $h''(0)$, we have
\begin{equation}\label{eq:second_derivative}
h''(\phi)=\operatorname{\mathbb{E}} \frac{\sqrt{(a_1\cos\phi+a_2\sin\phi)^2+(b_1\cos\phi+b_2\sin\phi)^2}}{[a_2^2+b_2^2]^{\frac{3}{2}}}\Big[[a_2^2+b_2^2][a_1^2+b_1^2-a_2^2-b_2^2]-[a_1a_2+b_1b_2]^2\Big],
\end{equation}
and as a special case,
\begin{align*}
&h''(\frac{\pi}{2})=\operatorname{\mathbb{E}} \Big[\frac{1}{2}[a_1^2+b_1^2]-[a_2^2+b_2^2]\Big]=-\frac{1}{2}\Gamma(2)=-\frac{1}{2}.
\end{align*}
Next, we show that $h''(\theta)$ is well-defined and Lipschitz continuous. In fact, applying \eqref{eq:second_derivative} and the fact that $(a_1\cos\phi_1+a_2\sin\phi_1)^2-(a_1\cos\phi_2+a_2\sin\phi_2)^2\le|\phi_1-\phi_2|(a_1^2+a_2^2)$,
\begin{align*}
|h''(\phi_1)-h''(\phi_2)|
\leq &\operatorname{\mathbb{E}}|\phi_1-\phi_2|\frac{\sqrt{a_1^2+b_1^2}+\sqrt{a_2^2+b_2^2}}{[a_2^2+b_2^2]^{\frac{3}{2}}}\Big[[a_2^2+b_2^2][a_1^2+b_1^2]+[a_2^2+b_2^2]^2+[a_1a_2+b_1b_2]^2\Big]\\
\leq &\operatorname{\mathbb{E}}|\phi_1-\phi_2|\frac{\sqrt{a_1^2+b_1^2}+\sqrt{a_2^2+b_2^2}}{[a_2^2+b_2^2]^{\frac{3}{2}}}\Big[\frac{3}{2}[a_2^2+b_2^2][a_1^2+b_1^2]+[a_2^2+b_2^2]^2\Big].
\end{align*}
Then we obtain the Lipschitz continuity of $h''(\theta)$ with Lipschitz factor given by
\[
L=\operatorname{\mathbb{E}}\frac{\sqrt{a_1^2+b_1^2}+\sqrt{a_2^2+b_2^2}}{[a_2^2+b_2^2]^{\frac{3}{2}}}\Big[\frac{3}{2}[a_2^2+b_2^2][a_1^2+b_1^2]+[a_2^2+b_2^2]^2\Big]=\frac{3}{2}\Gamma(\tfrac{1}{2})\Gamma(\tfrac{5}{2})+\Gamma(\tfrac{3}{2})^2+\frac{5}{2}\Gamma(2).
\]
Then, to prove that $h'(\theta)\geq c\min(\theta,\pi/2-\theta)$ for all $0<\theta<\pi/2$,
it is sufficient to verify that \begin{equation}\label{eq:verify}\text{$\min_{\frac{\pi}{16L}<\theta<\frac{\pi}{2}-\frac{1}{2L}}h'(\theta)>c$ for some $c>0$.}\end{equation} Near the two endpoints, the claimed bound follows from $h'(0)=h'(\pi/2)=0$, the signs of $h''(0)>0$ and $h''(\pi/2)<0$, and the Lipschitz continuity of $h''(\theta)$. Since $|h''(\pi/2)|=\frac{1}{2}$ and $h''(\theta)$ has a Lipschitz factor $L$, $h'(\theta)$ is also Lipschitz continuous with a Lipschitz factor at most $1 + \pi L$. Therefore, \eqref{eq:verify} can be verified numerically by checking a few values of $h'(\theta)$ in the interval $\frac{\pi}{16L}<\theta<\frac{\pi}{2}-\frac{1}{2L}$. More specifically, it is sufficient to verify that for $\theta=\frac{\pi}{16L},\frac{\pi}{16L}+\delta,\frac{\pi}{16L}+2\delta,\cdots,\frac{\pi}{2}-\frac{1}{2L}$, we have $h'(\theta)>c+\delta(1 + \pi L)$. Using a computer with $\delta=1/10$, this is verified as shown in Figure~\ref{fig:conjecture}.
The Lipschitz continuity of $h'(\theta)$ in turn implies the Lipschitz continuity of $h$ on $[0,\pi/2]$. Using a similar procedure as above, we can also show that there exists $c'>0$ such that $\min_{0\leq \theta\leq\pi/2}h(\theta)>c'$, by checking a few functional values of $h(\theta)$ for $\theta\in[0,\pi/2]$.
\end{proof}
To visualize Lemma~\ref{lemma:conjecture}, we generate $10^6$ random samples of $(a_1,a_2)$, calculate the corresponding empirical averages of $h(\theta)$ and $h'(\theta)$, and plot them in Figure~\ref{fig:conjecture}. The right figure confirms that Lemma~\ref{lemma:conjecture} holds. We remark that if $a_1$ and $a_2$ are sampled from the real Gaussian distribution $N(0,1)$, then $h(\theta)={\frac{1}{\pi}} [2\theta\sin\theta+2\cos\theta]$, but in the complex setting the calculation is more complicated and no explicit formula is known.
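For concreteness, a minimal Monte Carlo sketch of these estimates is given below. It assumes the convention $a_j,b_j\sim N(0,\tfrac{1}{2})$ (so that $a_j+\mathrm{i}b_j$ are standard complex Gaussians), matching the moment identity $\operatorname{\mathbb{E}}(a_1^2+b_1^2)^k=\Gamma(k+1)$ used above; $h'(\theta)$ is approximated by a central difference with common random numbers, and the explicit real-Gaussian formula serves as a sanity check of the estimator.
\begin{verbatim}
# Monte Carlo estimates of h(theta) and h'(theta) (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
N = 10**6
z1 = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
z2 = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

def h(theta):
    # h(theta) = E |z1| * |z1 sin(theta) + z2 cos(theta)|
    return np.mean(np.abs(z1) * np.abs(z1*np.sin(theta) + z2*np.cos(theta)))

def h_prime(theta, delta=1e-3):
    # central difference with common random numbers
    return (h(theta + delta) - h(theta - delta)) / (2 * delta)

for t in np.linspace(0.0, np.pi/2, 9):
    print(round(t, 3), round(h(t), 4), round(h_prime(t), 4))

# sanity check: with real a1, a2 ~ N(0,1) the closed form is
# h(theta) = (2/pi) * (theta*sin(theta) + cos(theta))
a1, a2 = rng.standard_normal(N), rng.standard_normal(N)
t = 0.7
print(np.mean(np.abs(a1) * np.abs(a1*np.sin(t) + a2*np.cos(t))),
      2/np.pi * (t*np.sin(t) + np.cos(t)))
\end{verbatim}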
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{conjecture1.png}
\includegraphics[width=0.45\textwidth]{conjecture2.png}
\end{center}
\caption{$h(\theta)$ and $h'(\theta)$, calculated from the average of $10^6$ simulations.}\label{fig:conjecture}
\end{figure}
\bibliographystyle{abbrv}
| {
"timestamp": "2018-09-17T02:02:32",
"yymm": "1706",
"arxiv_id": "1706.08167",
"language": "en",
"url": "https://arxiv.org/abs/1706.08167",
"abstract": "This paper considers the problem of phase retrieval, where the goal is to recover a signal $z\\in C^n$ from the observations $y_i=|a_i^* z|$, $i=1,2,\\cdots,m$. While many algorithms have been proposed, the alternating minimization algorithm has been one of the most commonly used methods, and it is very simple to implement. Current work has proved that when the observation vectors $\\{a_i\\}_{i=1}^m$ are sampled from a complex Gaussian distribution $N(0, I)$, it recovers the underlying signal with a good initialization when $m=O(n)$, or with random initialization when $m=O(n^2)$, and it conjectured that random initialization succeeds with $m=O(n)$. This work proposes a modified alternating minimization method in a batch setting, and proves that when $m=O(n\\log^{3}n)$, the proposed algorithm with random initialization recovers the underlying signal with high probability. The proof is based on the observation that after each iteration of alternating minimization, with high probability, the angle between the estimated signal and the underlying signal is reduced.",
"subjects": "Optimization and Control (math.OC); Statistics Theory (math.ST)",
"title": "Phase retrieval using alternating minimization in a batch setting",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9871787857334057,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7093811458421857
} |
https://arxiv.org/abs/2102.09289 | Note on induced paths in sparse random graphs | We show that for $d\ge d_0(\epsilon)$, with high probability, the random graph $G(n,d/n)$ contains an induced path of length $(3/2-\epsilon)\frac{n}{d}\log d$. This improves a result obtained independently by Luczak and Suen in the early 90s, and answers a question of Fernandez de la Vega. Along the way, we generalize a recent result of Cooley, Draganić, Kang and Sudakov who studied the analogous problem for induced matchings. | \section{Introduction}
Let $G(n,p)$ denote the binomial random graph on $n$ vertices, where each edge is included independently with probability~$p$. In this note, we are concerned with \emph{induced} subgraphs of $G(n,p)$, specifically trees and paths.
The study of induced trees in $G(n,p)$ was initiated by Erd\H{o}s and Palka~\cite{EP:83} in the 80s. Among other things, they showed that for constant $p$, with high probability (\textbf{whp}) the size of a largest induced tree in $G(n,p)$ is asymptotically equal to $2\log_q(np)$, where $q=\frac{1}{1-p}$. The obtained value coincides asymptotically with the \emph{independence number} of $G(n,p)$, the study of which dates back even further to the work of Bollob\'as and Erd\H{o}s~\cite{BE:76}, Grimmett and McDiarmid~\cite{GM:75} and Matula~\cite{matula:76}.
As a natural continuation of their work, Erd\H{o}s and Palka~\cite{EP:83} posed the problem of determining the size of a largest induced tree in \emph{sparse} random graphs, when $p=d/n$ for some fixed constant~$d$. More precisely, they conjectured that for every $d>1$ there exists $c(d)>0$ such that \textbf{whp} $G(n,p)$ contains an induced tree of order at least $c(d)\cdot n$.
This problem was settled independently in the late 80s by Fernandez de la Vega~\cite{fernandez-de-la-Vega:86}, Frieze and Jackson~\cite{FJ:87a}, Ku\v{c}era and R\"{o}dl~\cite{KR:87} as well as \L{}uczak and Palka~\cite{LP:88}.
In particular, Fernandez de la Vega~\cite{fernandez-de-la-Vega:86} showed that one can take $c(d)\sim \frac{\log d}{d}$, and a simple first moment calculation reveals that this is tight within a factor of~$2$.
Two natural questions arise from there. First, one might wonder whether it is possible to find not only some \emph{arbitrary} induced tree, but a \emph{specific} one, say a long induced path. Indeed, Frieze and Jackson~\cite{FJ:87b} in a separate paper showed that \textbf{whp} there is an induced path of length $\tilde{c}(d)\cdot n$. Two weaknesses of this result were that their proof only worked for sufficiently large~$d$, and that the value obtained for $\tilde{c}(d)$ was far away from the optimal one.
Later, \L{}uczak~\cite{luczak:93} and Suen~\cite{suen:92} independently remedied this situation twofold. They proved that an induced path of length linear in $n$ exists for all $d>1$, showing that the conjecture of Erd\H{o}s and Palka holds even for induced paths. Moreover, they showed that one can take $\tilde{c}(d)\sim \frac{\log d}{d}$ as in the case of arbitrary trees.
A second obvious question is to determine the size of a largest induced tree (and path) more precisely. The aforementioned results were proved by analysing the behaviour of certain constructive algorithms which produce large induced trees and paths. The value $\frac{\log d}{d}$ seems to constitute a natural barrier for such approaches. On the other hand, recall that in the dense case, the size of a largest induced tree coincides asymptotically with the independence number. In 1990, Frieze~\cite{frieze:90} showed that the first moment bound $\sim2\frac{n}{d}\log d$ is tight for the independence number, also in the sparse case. His proof is based on the profound observation that the second moment method can be used even in situations where it apparently does not work, if one can combine it with a strong concentration inequality.
Finally, in 1996, Fernandez de la Vega~\cite{fernandez-de-la-Vega:96} observed that the earlier achievements around induced trees can be combined with Frieze's breakthrough to prove that the size of a largest induced tree is indeed $\sim 2\frac{n}{d}\log d$. This complements the result of Erd\H{o}s and Palka~\cite{EP:83} in the dense case. (When $p=o_n(1)$, we have $2\log_q(np)\sim 2\frac{n}{d}\log d$.)
Fernandez de la Vega~\cite{fernandez-de-la-Vega:96} also posed the natural problem of improving the \L{}uczak--Suen bound~\cite{luczak:93,suen:92} for induced paths, for which his approach was ``apparently helpless''. Despite the widely held belief (see~\cite{CDKS:ta,DS:18} for instance) that the upper bound $\sim 2\frac{n}{d}\log d$ obtained via the first moment method is tight, the implicit constant $1$ has not been improved in the last 30 years.
\subsection{Long induced paths}\label{sec:paths}
Our main result is the following, which solves the problem of Fernandez de la Vega. Unfortunately, we only get ``halfway'' towards the optimal bound, and the obvious problem left open is to close the remaining gap.
\begin{theorem}\label{thm:path}
For any ${\epsilon}>0$ there is $d_0$ such that \textbf{whp} $G(n,p)$ contains an induced path of length $(3/2-{\epsilon})\frac{n}{d}\log d$ whenever $d_0\le d=pn =o(n)$.
\end{theorem}
For the sake of generality, we state our result for a wide range of functions~$d=d(n)$.
However, we remark that the most interesting case is when $d$ is a sufficiently large constant, and any improvement in this regime is likely to generalize straightforwardly.
In fact, for dense graphs, when $d\ge n^{1/2}\log^2 n$, much better results are already known (cf.~\cite{DS:18,rucinski:87}).
Some of the earlier results~\cite{FJ:87b,luczak:93} are phrased in terms of induced cycles (\emph{holes}). Using a simple sprinkling argument, one can see that aiming for a cycle instead of a path does not make the problem any harder.
We also note that our proof is self-contained, except for well-known facts from probability and graph theory.
We briefly explain our strategy, and also discuss how the approach might be used to eventually match the upper bound.
The idea is to find a long induced path in two steps. First, we find many disjoint paths of some chosen length $L$, such that the subgraph consisting of their union is induced. To achieve this, we generalize a recent result of Cooley, Dragani\'c, Kang and Sudakov~\cite{CDKS:ta} who obtained large induced matchings. We will discuss this further in Section~\ref{sec:forests}.
Assuming now we can find such an induced linear forest $F$, the aim is to connect almost all of the small paths into one long induced path, using a few additional vertices. (In order to maintain randomness for the connection step, we only expose half the vertices to find~$F$.)
To model this, we give each path in $F$ a direction, and define an auxiliary digraph whose vertices are the paths, and two paths $(P_1,P_2)$ form an edge if there exists a new ``connecting'' vertex $a$ that has some edge to the last ${\epsilon} L$ vertices of $P_1$ and some edge to the first ${\epsilon} L$ vertices of $P_2$, but no edge to the rest of~$F$.
Our goal is to find an almost spanning path in this auxiliary digraph. Observe that this will provide us with a path in $G(n,p)$ of length roughly~$|F|$. Moreover, if we can ensure that the new connecting vertices form an independent set, this path will be induced.
The intuition is that the auxiliary digraph behaves quite randomly, which gives us hope that, even though it is very sparse, we can find an almost spanning path. In order to back this up and illustrate the interplay between the above parameters, let us assume that in the first step we can find an induced linear forest $F$ of order $c\frac{n}{d}\log d=cp^{-1}\log d$ with components of order $L\approx d^{\alpha}$, where $c$ and $\alpha$ are constants to be specified later.
Consider now two paths $P_1,P_2$ in~$F$. For a new vertex $a$, the probability that it joins to the desired segments of $P_1,P_2$ is $\approx (Lp)^2$, and the probability that it has no edge to the rest of $F$ is at least $(1-p)^{|F|}\approx \exp(-p|F|) = d^{-c}$. Since there are $\Theta(n)$ potential connecting vertices, we estimate the probability that $(P_1,P_2)$ is an edge in the auxiliary digraph as $\approx L^2p^2d^{-c}n$. Noting that the order of our digraph is $N= c\frac{n}{dL}\log d$, we infer that its average degree is $\approx d^{1-c}L\log d$.
It is well-known in random graph theory that a random $N$-vertex digraph contains a path of length $(1-{\epsilon})N$ if the average degree is sufficiently large as a function of~${\epsilon}$.
This suggests that our strategy could work if $c\le 1+\alpha$. Indeed, it turns out that we can ensure $L\approx d^{1/2}$ in the first step of our argument (see Lemma~\ref{lem:forests}), hence the constant $3/2$ in Theorem~\ref{thm:path}.
If one could ensure that $L$ is almost linear in~$d$, that is, $\alpha\approx 1$, then our argument would directly yield an induced path of the asymptotically optimal length $\sim 2\frac{n}{d}\log d$. The proof of the connecting step will be given in Section~\ref{sec:connect}.
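To make the interplay between $c$, $\alpha$ and the average degree of the auxiliary digraph concrete, the following small Python snippet evaluates the heuristic $\approx d^{1-c}L\log d$ for a few illustrative choices of the parameters; the numbers carry no significance beyond illustration.
\begin{verbatim}
# Heuristic average degree d^(1-c) * L * log(d) of the auxiliary digraph,
# for L ~ d^alpha (illustrative numbers only).
import math

d = 10**4
for alpha, c in [(0.5, 1.4), (0.5, 1.5), (0.5, 1.6), (1.0, 2.0)]:
    L = d**alpha
    avg_deg = d**(1 - c) * L * math.log(d)
    print(alpha, c, round(avg_deg, 2))   # large (roughly) iff c <= 1 + alpha
\end{verbatim}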
\subsection{Induced forests with small components}\label{sec:forests}
As outlined above, in the first step of our argument, we seek an induced linear forest whose components are paths of length $L\approx d^{1/2}$.
For this, we generalize a recent result of Cooley, Dragani\'c, Kang and Sudakov~\cite{CDKS:ta}. They proved that \textbf{whp} $G(n,p)$ contains an induced matching with $\sim 2\log_q (np)$ vertices, which is asymptotically best possible. They also anticipated that using a similar approach one can probably obtain induced forests with larger, but bounded components. As a by-product, we confirm this.
To state our result, we need the following definition. For a given graph $T$, a \emph{$T$-matching} is a graph whose components are all isomorphic to~$T$. Hence, a $K_2$-matching is simply a matching, and the following for $T=K_2$ implies the main result of~\cite{CDKS:ta}.
\begin{theorem}\label{thm:forests}
For any ${\epsilon}>0$ and tree $T$, there exists $d_0>0$ such that \textbf{whp} the order of the largest induced $T$-matching in $G(n,p)$ is $(2\pm {\epsilon})\log_q(np)$, where $q=\frac{1}{1-p}$, whenever $\frac{d_0}{n}\le p\le 0.99$.
\end{theorem}
We use the same approach as in~\cite{CDKS:ta}, which goes back to the work of Frieze~\cite{frieze:90} (see also~\cite{bollobas:88,SS:87}).
The basic idea is as follows. Suppose we have a random variable $X$ and want to show that \textbf{whp}, $X\ge b-t$, where $b$ is some ``target'' value and $t$ a small error.
For many natural variables, we know that $X$ is ``concentrated'', say $\prob{|X- \expn{X}| \ge t/2} < \rho$ for some small~$\rho$. This is the case for instance when $X$ is determined by many independent random choices, each of which has a small effect.
However, it might be difficult to estimate $\expn{X}$ well enough.
But if we know in addition that $\prob{X\ge b}\ge \rho$, then we can combine both estimates to $\prob{X\ge b}> \prob{X\ge \expn{X}+t/2}$, which clearly implies that $b\le \expn{X}+t/2$. Applying now the other side of the concentration inequality, we infer $\prob{X\le b-t} \le \prob{X\le \expn{X}-t/2}< \rho$, as desired.
In our case, say $X$ is the maximum order of an induced $T$-matching in~$G(n,p)$. Since adding or deleting edges at any one vertex can create or destroy at most one component, we know that $X$ is $|T|$-Lipschitz and hence concentrated (see Section~\ref{sec:concentration}). Using the above approach, it remains to complement this with a lower bound on the probability that $X\ge b$. Introduce a new random variable $Y$ which is the \emph{number} of induced $T$-matchings of order~$b$ (a multiple of $|T|$). Then we have $X\ge b$ if and only if $Y>0$. The main technical work is to obtain a lower bound for the probability of the latter event using the second moment method.
We note that by applying the second moment method to labelled copies (instead of unlabelled copies as in~\cite{CDKS:ta}) we obtain a shorter proof even in the case of matchings (see Section~\ref{sec:2ndmoment}).
More crucially, it turns out that one can even find induced forests where the component sizes can grow as a function of~$d$. As discussed above, for the proof of Theorem~\ref{thm:path} we need an induced linear forest where the components are paths of length roughly $d^{1/2}$. This is provided by the following auxiliary result. We note that the same holds for forests with arbitrary components of bounded degree, and one can also let the degree slowly grow with~$d$, but we choose to keep the presentation simple.
\begin{lemma}\label{lem:forests}
For any ${\epsilon}>0$, there exists $d_0>0$ such that \textbf{whp} $G(n,p)$ contains an induced linear forest of order at least $(2-{\epsilon})p^{-1}\log(np)$ and component paths of order $d^{1/2}/\log^4 d$, whenever $d_0\le d=np \le n^{1/2}\log^2 n$.
\end{lemma}
It would be interesting to find out whether the length of the paths can be improved to $d^{1-o(1)}$. As remarked earlier, this would lead to the asymptotically optimal result for the longest induced path problem.
\subsection{Notation}
We use standard graph theoretical notation. In particular, for a graph $G$ and $U\subset V(G)$, we let $e(G)$ denote the number of edges, $\Delta(G)$ the maximum degree and $G[U]$ the subgraph induced by~$U$.
Recall that a forest is called \emph{linear} if its components are paths.
For functions $f(n),g(n)$, we write $f\sim g$ if $\lim_{n\to\infty}\frac{f(n)}{g(n)}=1$. We also use the standard Landau symbols $o(\cdot),\Omega(\cdot),\Theta(\cdot),O(\cdot),\omega(\cdot)$, where subscripts disclose the variable that tends to infinity if this is not clear from the context.
We use $\approx$ non-rigorously in informal discussions and ask the reader to interpret it correctly.
An event $\mathcal{E}_n$ holds \emph{with high probability} (\textbf{whp}) if $\prob{\mathcal{E}_n}= 1-o_n(1)$.
We use $\log$ to denote the natural logarithm with base~${e}$. Moreover, $[n]=\Set{1,\dots,n}$ and $(n)_k=n(n-1)\cdots (n-k+1)$.
Recall the standard estimates $\binom{n}{k}\le \left(\frac{en}{k}\right)^k$, $1+x\le {e}^x$ and $\log(1+x)=x+O\left(\frac{x^2}{1-|x|}\right)$, where the latter holds for $|x|<1$ and implies $1-x\ge {e}^{-x-O(x^2)}$ for $0\le x\le 0.99$, say.
As customary, we tacitly treat large numbers like integers whenever this has no effect on the argument.
\section{Second moment}\label{sec:2ndmoment}
In this section, we use the second moment method to derive a lower bound on the probability that $G(n,d/n)$ contains a given induced linear forest of size $\sim 2\frac{n}{d}\log d$. Here, it does not matter that the components are small.
More precisely, we prove that for fixed ${\epsilon}>0$ and $d\ge d_0({\epsilon})$, \emph{any} bounded degree forest $F$ on $k\le (2-{\epsilon})\frac{n}{d}\log d$ vertices is an induced subgraph of $G(n,d/n)$ with probability at least $\exp(-O(\frac{n\log^2 d}{d^2}))$.
Moreover, when $d=\omega(n^{1/2}\log n)$, the obtained probability bound tends to~$1$. In particular, in this regime, the lemma readily implies the existence of an induced path of the asymptotically optimal length $\sim 2\frac{n}{d}\log d$ \textbf{whp}.
\begin{lemma}\label{lem:2ndmoment}
For any ${\epsilon}>0$, there exists $d_0$ such that the following holds for all $d_0\le d < n$, where $p=\frac{d}{n}$ and $q=\frac{1}{1-p}$. For any forest $F$ on $k\le (2-{\epsilon})\log_q d$ vertices with maximum degree $\Delta\le d^{{\epsilon}/6}$, the probability that $G(n,p)$ contains an induced copy of $F$ is at least $$\exp\left(-10^4\Delta^2\frac{n\log^2 d}{d^2} -2d^{-{\epsilon}/7}\right) .$$
\end{lemma}
The proof of Lemma~\ref{lem:2ndmoment} is based on the second moment method and will be given below. We start off with some basic preparations which will also motivate the main counting tool.
Fix a forest~$F$ of order~$k$. Let $Y$ be the random variable which counts the number of \emph{labelled} induced copies of $F$ in $G(n,p)$. More formally, let $\mathcal{F}$ be the set of all injections $\sigma\colon V(F)\to [n]$, and for $\sigma\in \mathcal{F}$, let $F_\sigma$ be the graph with vertex set $\set{\sigma(x)}{x\in V(F)}$ and edge set $\set{\sigma(x)\sigma(y)}{xy\in E(F)}$.
Let $A_\sigma$ be the event that $F_\sigma$ is an induced subgraph of~$G(n,p)$.
Hence, $$\prob{A_\sigma}=p^{e(F)}(1-p)^{\binom{k}{2}-e(F)},$$
and setting $Y=\sum_{\sigma\in \mathcal{F}}\mathbbm{1}(A_\sigma)$, we have
\begin{align}
\expn{Y}=(n)_k p^{e(F)}(1-p)^{\binom{k}{2}-e(F)}.\label{expn}
\end{align}
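For a tiny forest and a small $n$, the expectation formula \eqref{expn} is easy to check by brute force. The following Python sketch, with purely illustrative parameters, counts labelled induced copies in repeated samples of $G(n,p)$ and compares the empirical mean with \eqref{expn}.
\begin{verbatim}
# Brute-force check of E[Y] for a path on 3 vertices (illustrative parameters).
import itertools, math, random

n, p = 8, 0.3
F_vertices, F_edges = [0, 1, 2], {(0, 1), (1, 2)}   # a path on 3 vertices
k, e = len(F_vertices), len(F_edges)

def count_labelled_induced_copies(adj):
    count = 0
    for sigma in itertools.permutations(range(n), k):
        ok = True
        for i in range(k):
            for j in range(i + 1, k):
                need = (i, j) in F_edges or (j, i) in F_edges
                if ((sigma[i], sigma[j]) in adj) != need:
                    ok = False
                    break
            if not ok:
                break
        if ok:
            count += 1
    return count

random.seed(0)
trials, total = 2000, 0
for _ in range(trials):
    adj = set()
    for u in range(n):
        for v in range(u + 1, n):
            if random.random() < p:
                adj.add((u, v)); adj.add((v, u))
    total += count_labelled_induced_copies(adj)

falling = math.prod(n - i for i in range(k))          # (n)_k
predicted = falling * p**e * (1 - p)**(k*(k-1)//2 - e)
print(total / trials, predicted)   # the two values should be close
\end{verbatim}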
Ultimately, we want to obtain a lower bound for $\prob{Y>0}$.
Fix some $\sigma_0\in \mathcal{F}$. By symmetry, the second moment of $Y$ can be written as $$\expn{Y^2}=\expn{Y} \sum_{\sigma\in \mathcal{F}}\cprob{A_\sigma}{A_{\sigma_0}}.$$
Applying the Paley--Zygmund inequality, we thus have
\begin{align}
\prob{Y>0}\ge \frac{\expn{Y}^2}{\expn{Y^2}} = \frac{\expn{Y}}{\sum_{\sigma\in \mathcal{F}}\cprob{A_\sigma}{A_{\sigma_0}}}.\label{2ndmoment inequality}
\end{align}
The remaining difficulty is to control the terms $\cprob{A_\sigma}{A_{\sigma_0}}$.
We say that $\sigma\in \mathcal{F}$ is \emph{compatible} (with $\sigma_0$) if $\cprob{A_\sigma}{A_{\sigma_0}}>0$. This means that, in the intersection $V(F_\sigma)\cap V(F_{\sigma_0})$, a pair $uv$ which is an edge in $F_\sigma$ cannot be a non-edge in $F_{\sigma_0}$, and vice versa, as otherwise $F_\sigma$ and $F_{\sigma_0}$ could not be induced subgraphs of $G(n,p)$ simultaneously. From now on, we can ignore all
$\sigma$ that are not compatible with $\sigma_0$.
If $\sigma\in\mathcal{F}$ is compatible with $\sigma_0$, we denote by $I_\sigma:=F_\sigma \cap F_{\sigma_0}$ the graph on $S=V(F_\sigma)\cap V(F_{\sigma_0})$ with edge set $E(F_\sigma[S])=E(F_{\sigma_0}[S])$. This ``intersection graph'' assumes a crucial role in the analysis.
Suppose that $I_\sigma$ has $s$ vertices and $c$ components. Since $I_\sigma$ is a forest, we have $e(I_\sigma)=s-c$. These are the edges of $F_\sigma$ that we already know to be there when conditioning on~$A_{\sigma_0}$, and for $F_\sigma$, we need $e(F)-e(I_\sigma)$ ``new'' edges.
Moreover, there are $\binom{k}{2}-\binom{s}{2}-e(F)+e(I_\sigma)$ additional non-edges.
Therefore,
\begin{align}
\cprob{A_\sigma}{A_{\sigma_0}} = p^{e(F)-s+c} (1-p)^{\binom{k}{2}-\binom{s}{2}-e(F)+s-c}.\label{cond prob}
\end{align}
Note here that when the number of components $c$ is large, then the exponent of $p$ is large and hence we have a stronger upper bound on $\cprob{A_\sigma}{A_{\sigma_0}}$. On the other hand, if $c$ is small, then $\cprob{A_\sigma}{A_{\sigma_0}}$ is larger, but this will be compensated by the fact that there are fewer such~$\sigma$.
In the following, we bound the number of compatible $\sigma\in\mathcal{F}$ for which $I_\sigma$ has $s$ vertices and $c$ components. We remark that this kind of analysis was also carried out in~\cite{draganic:20} in the study of dense random graphs. We include the details for completeness, with an improved dependence on~$\Delta$.
We make use of the following elementary counting result.
\begin{prop}\label{prop:branching}
For a graph $H$ with $\Delta(H)\le \Delta$ and $v\in V(H)$, the number of (unlabelled) trees in $H$ of order $s$ which contain $v$ is at most $(e\Delta)^{s-1}$.
\end{prop}
In the case that is relevant for our application, namely when $\Delta=2$, this is trivially true. The more general case follows easily from the formula for the number of rooted subtrees of a given order in the $\Delta$-regular infinite tree (see~\cite{KNP:20}).
\begin{prop}\label{prop:counting extension}
For all $0\le c\le s$, the number of compatible $\sigma\in\mathcal{F}$ for which $I_\sigma$ has $s$ vertices and $c$ components is at most $$\binom{k}{c} k^c (6\Delta^2)^s (n-k)_{k-s}.$$
\end{prop}
\begin{proof}
Fix $s$ and~$c$. We can obviously assume that $s\ge c\ge 1$, as otherwise the bound is easily seen to hold.
The first claim is that the number of subgraphs of $F_{\sigma_0}$ with $s$ vertices and $c$ components is at most $\binom{k}{c} (2e\Delta)^s$.
To see this, we first choose root vertices $v_1,\dots,v_c$ for the components, for which there are at most $\binom{k}{c}$ choices. For $i\in [c]$, let $T_i$ denote the component which will contain~$v_i$. Next, we fix the sizes of the components. Writing $s_i=|T_i|$, the number of possibilities is given by the number of positive integer solutions of $s_1+\dots+s_c=s$, which is $\binom{s-1}{c-1}\le 2^s$ by a well-known formula.
Now, having fixed the sizes, we can apply Proposition~\ref{prop:branching} for each $i\in[c]$, with $F,v_i$ playing the roles of $H,v$, to see that the number of choices for $T_i$ is at most $(e\Delta)^{s_i-1}$, which combined amounts to $(e\Delta)^{s-c}$.
This implies the claim, and immediately yields an upper bound on the number of possibilities for the intersection graph~$I_\sigma$.
Now, fix a choice of $I_\sigma$. Since $I_\sigma$ is a forest with $c$ components, its vertices can be ordered such that every vertex, except for the first $c$ vertices, has exactly one neighbour preceding it.
In order to count the number of possibilities for $\sigma$, we proceed as follows.
First, choose the preimages under $\sigma$ for the first $c$ vertices, for which there are at most $(k)_c$ choices. Now, we choose the preimages of the remaining vertices of $I_\sigma$ one-by-one in increasing order. In each step, there are at most $\Delta$ choices, since one neighbour of the current vertex has already chosen its preimage, and $I_\sigma$ has to be an induced subgraph of $F_\sigma$. Hence, there are at most $\Delta^{s-c}$ choices for the preimages of the remaining vertices of~$I_\sigma$.
Finally, we have used $s$ vertices of $F$ as preimages for the vertices in $I_\sigma$. The remaining $k-s$ vertices of $F$ must be mapped to $[n]\setminus V(F_{\sigma_0})$, so there are at most $(n-k)_{k-s}$ possibilities.
\end{proof}
With the preparations done, the proof of the lemma reduces to a chain of estimates.
\lateproof{Lemma~\ref{lem:2ndmoment}}
By~\eqref{2ndmoment inequality}, it suffices to show that
\begin{align*}
\frac{\sum_{\sigma\in \mathcal{F}}\cprob{A_\sigma}{A_{\sigma_0}}}{\expn{Y}} \le \exp\left(10^4\Delta^2\frac{n\log^2 d}{d^2} + 2d^{-{\epsilon}/7}\right).
\end{align*}
We split the sum over compatible $\sigma\in \mathcal{F}$ according to the number of vertices and components of~$I_\sigma$. Applying \eqref{expn},~\eqref{cond prob} and Proposition~\ref{prop:counting extension}, we obtain
\begin{align*}
\frac{\sum_{\sigma\in \mathcal{F}}\cprob{A_\sigma}{A_{\sigma_0}}}{\expn{Y}} &\le
\sum_{s=0}^k \sum_{c=0}^s \frac{\binom{k}{c}k^c (6\Delta^2)^s (n-k)_{k-s} p^{e(F)-s+c} (1-p)^{\binom{k}{2}-\binom{s}{2}-e(F)}}{(n)_k p^{e(F)}(1-p)^{\binom{k}{2}-e(F)}} \\
&= \sum_{s=0}^k \frac{(n-k)_{k-s}}{(n)_k}p^{-s}q^{\binom{s}{2}}(6\Delta^2)^s \sum_{c=0}^s \binom{k}{c} (kp)^c \\
&\le \sum_{s=0}^k (4/n)^s p^{-s} q^{s^2/2} (6\Delta^2)^s \frac{(16k\log d)^s}{s!}\\
&= \sum_{s=0}^k \frac{\left(\frac{384\Delta^2 k \log d}{d} q^{s/2} \right)^s}{s!}.
\end{align*}
To verify the last inequality, note that we always have $k\le 2\log_q(np)\le 2\frac{n}{d}\log d$ since $\log(q)\ge p$. Hence, $kp\le 2\log d$. Moreover, we have
$\frac{(n-k)_{k-s}}{(n)_k} \le \frac{1}{(n)_s} \le (4/n)^s$ since $s\le k \le 2n/{e}$. We also used the fact that $\binom{k}{c} \le 4^s \binom{k}{s}\le \frac{(4k)^s}{s!}$. To see this, observe that when $s\le k/2$, we have $\binom{k}{c}\le \binom{k}{s}$, and otherwise, $\binom{k}{c}\le 2^k \le 2^{2s}$. Finally, $c$ takes only $s+1\le 2^s$ values.
We split the final sum into two terms.
First, consider the range $s\le k/\log d$. Then $q^{s/2}\le q^{1/\log q}= {e}$. Hence, recalling the power series $e^x=\sum_{s\ge 0}\frac{x^s}{s!}$, we obtain the bound
$$\sum_{s=0}^{\lfloor k/\log d\rfloor } \frac{\left(\frac{384\Delta^2 k \log d}{d} q^{s/2} \right)^s}{s!}\le \exp\left(\frac{384{e} \Delta^2 k\log d}{d} \right) \le \exp\left(\frac{10^4 \Delta^2 n\log^2 d}{d^2} \right).$$
Finally, for $s\ge k/\log d$, we use $s!\ge (s/{e})^s$ to bound each summand as
\begin{align*}
\frac{\left(\frac{384\Delta^2 k \log d}{d} q^{s/2} \right)^s}{s!} \le \left(\frac{384{e} \Delta^2 k\log d}{ds} q^{s/2} \right)^s \le \left(\frac{384{e} \Delta^2 \log^2 d}{d} q^{s/2} \right)^s.
\end{align*}
Crucially, since $s\le k\le (2-{\epsilon})\log_q d$, we have $q^{s/2} \le q^{(1-{\epsilon}/2)\log_q d} = d^{1-{\epsilon}/2}$. Now, for sufficiently large $d\ge d_0$ the bracket is bounded by $\frac{384{e} \Delta^2 \log^2 d}{d^{{\epsilon}/2}} \le d^{-{\epsilon}/7}<1$. Therefore the geometric series tells us that
$$\sum_{s=\lceil k/\log d \rceil}^{k} \frac{\left(\frac{384\Delta^2 k \log d}{d} q^{s/2} \right)^s}{s!} \le \frac{1}{1-d^{-{\epsilon}/7}} -1 \le 2d^{-{\epsilon}/7}.$$
Altogether, we conclude that
\begin{align*}
\frac{\sum_{\sigma\in \mathcal{F}}\cprob{A_\sigma}{A_{\sigma_0}}}{\expn{Y}} \le \exp\left(\frac{10^4 \Delta^2 n\log^2 d}{d^2} \right) + 2d^{-{\epsilon}/7} \le \exp\left(\frac{10^4 \Delta^2 n\log^2 d}{d^2} + 2d^{-{\epsilon}/7}\right),
\end{align*}
completing the proof.
\noproof\bigskip
\section{Concentration}\label{sec:concentration}
In this section, we deduce Theorem~\ref{thm:forests} and Lemma~\ref{lem:forests} from Lemma~\ref{lem:2ndmoment}.
We will use Talagrand's inequality.\COMMENT{One could also make the argument work with Azuma's inequality, but Talagrand's inequality is more convenient to use.}
To state it, we need the following definitions.
Given a product probability space $\Omega=\prod_{i=1}^n \Omega_i$ (endowed with the product measure) and a random variable $X\colon \Omega\to \mathbb{R}$, we say that $X$ is
\begin{itemize}
\item \emph{$L$-Lipschitz} (for some $L>0$) if for any $\omega,\omega'\in \Omega$ which differ only in one coordinate, we have $|X(\omega)-X(\omega')|\le L$;
\item \emph{$f$-certifiable} (for a function $f\colon \mathbb{N}\to \mathbb{N}$) if for every $s$ and $\omega$ such that $X(\omega)\ge s$, there exists a set $I\subset [n]$ of size $\le f(s)$ such that $X(\omega')\ge s$ for every $\omega'$ that agrees with $\omega$ on the coordinates indexed by~$I$.
\end{itemize}
\begin{theorem}[Talagrand's inequality, see~\cite{AS:08}]
Suppose that $X$ is $L$-Lipschitz and $f$-certifiable. Then, for all $b,t\ge 0$,
$$\prob{X\le b-tL\sqrt{f(b)}} \prob{X\ge b} \le \exp\left(-t^2/4\right).$$
\end{theorem}
Our probability space is of course $G(n,p)$. Although this comes naturally as a product of $\binom{n}{2}$ elementary probability spaces $\Omega_{ij}$, one for each potential edge $ij$, it can be more effective, depending on the problem, to consider a description that is vertex-oriented, where the edges incident to a vertex are combined into one probability space.
Concretely, for $i\in [n-1]$, let $\Omega_i=\prod_{j>i}\Omega_{ij}$ represent all edges from vertex $i$ to vertices $j>i$.
Then $G(n,p)=\prod_{i=1}^{n-1}\Omega_i$. Note here that the vertices are ordered to describe the product space in a way that every edge appears exactly once. Apart from that, this ordering plays no role.
\lateproof{Theorem~\ref{thm:forests}}
Fix ${\epsilon}>0$, a tree~$T$, and assume $d_0$ is sufficiently large. Let $L=|T|$ and $d=np$.
A standard first moment computation shows that \textbf{whp} there is no induced $T$-matching of order at least $(2+{\epsilon})\log_q (np)$. We include the short computation for the sake of completeness.
Let $r=(2+{\epsilon})\log_q (np)/L$ and let $Z$ be the number of (unlabelled) induced $T$-matchings in $G(n,p)$ with $r$ components.
Then we have
\begin{align*}
\expn{Z} \le \frac{n^{rL}}{r!} p^{r(L-1)} (1-p)^{\binom{rL}{2}-r(L-1)} \le (np)^{rL}(rp/{e})^{-r}q^{-(rL)^2/2 + 2rL}.
\end{align*}
Using $p\le 0.99$, we see $\log q=\Theta(p)$ and $q=O(1)$. In particular, $rp/{e} =\Omega\left(\frac{\log d_0}{L}\right)\ge 1$ and $rL=\Theta\left(\frac{\log (np)}{p}\right)=\omega_n(1)$. Hence,
\begin{align*}
\expn{Z} \le \left(O(1) npq^{-rL/2} \right)^{rL} \le \left(O(1) d_0^{-{\epsilon}/2} \right)^{rL} \le 2^{-rL} = o_n(1).
\end{align*}
By Markov's inequality, \textbf{whp} we have $Z=0$.
We now turn to the lower bound.
Let $X$ be the maximum order of an induced $T$-matching in $G(n,p)$.
Our goal is to show that $X\ge (2-{\epsilon})\log_q d$ \textbf{whp}. Set $b=(2-{\epsilon}/2)\log_q d$.
First, by Lemma~\ref{lem:2ndmoment}, we have
\begin{align*}
\prob{X\ge b} \ge \exp\left(-10^4L^2\frac{n\log^2 d}{d^2} -2d^{-\Omega({\epsilon})}\right).
\end{align*}
This means that in the case $d\ge n^{1/2}\log^{2}n$, we are already done. Assume now that $d\le n^{1/2}\log^{2}n$. Then the above bound simplifies to
\begin{align}
\prob{X\ge b} \ge \exp\left(-\frac{n\log^5 d}{d^2}\right).\label{2nd moment}
\end{align}
Recall also that in the regime $d=o(n)$ we have $\log_q d\sim \frac{n}{d}\log d$.
It is easy to check that $X$ is $L$-Lipschitz and $f$-certifiable, where $f(s)=s+L$. Indeed, adding or deleting edges arbitrarily at one vertex can change the value of $X$ by at most~$L$, hence $X$ is $L$-Lipschitz. Moreover, if $X\ge s$, this means there is a set $I\subset[n]$ of size $s\le |I|< s+L$ which induces a $T$-matching. If we leave the coordinates indexed by $I$ unchanged, this means in particular that $I$ still induces a $T$-matching, hence we still have $X\ge s$.
Hence, Talagrand's inequality applied with $t=\frac{\sqrt{n}\log^3 d}{d}$ yields
$$\prob{X\le b-tL\sqrt{b+L}} \prob{X\ge b} \le \exp\left(-\frac{n\log^6 d}{4d^2}\right).$$
Together with~\eqref{2nd moment} and since
\begin{align}
tL\sqrt{b+L}\le \frac{\sqrt{n}\log^3 d}{d} L \sqrt{2\frac{n}{d}\log d}\le \frac{n}{d},\label{Lipschitz}
\end{align}
we infer that the probability of $X\le b-\frac{n}{d}$ is at most $\exp\left(-\frac{n\log^6 d}{5d^2}\right)=o_n(1)$.
This completes the proof since $b-\frac{n}{d}\ge (2-{\epsilon})\log_q d$.
\noproof\bigskip
In the above proof, we had some room to spare in~\eqref{Lipschitz}. We will now exploit this to allow the component sizes to grow with~$d$. The proof is almost verbatim the same, so we only point out the differences.
\lateproof{Lemma~\ref{lem:forests}}
Note that we are only interested in the case $d\le n^{1/2}\log^{2}n$ and when $T$ is a path of order~$L$. Since $\Delta(T)$ is bounded, Lemma~\ref{lem:2ndmoment} still provides the lower bound in~\eqref{2nd moment}.
All we have to ensure now is that~\eqref{Lipschitz} still holds, and this is easily seen to be the case as long as $L\le d^{1/2}/\log^4 d$.
\noproof\bigskip
\section{Connecting}\label{sec:connect}
In this section, we use Lemma~\ref{lem:forests} to prove Theorem~\ref{thm:path} as outlined in Section~\ref{sec:paths}. Recall that we intend to define an auxiliary digraph on the components of a linear forest, where an edge corresponds to a suitable connection between two paths.
Our goal is to find an almost spanning path in this random digraph. The tool which enables us to achieve this, Lemma~\ref{lem:DFS} below, is based on the well-known graph exploration process \emph{depth-first-search} (DFS).
The usefulness of DFS to find long paths in random graphs was demonstrated impressively by Krivelevich and Sudakov~\cite{KS:13} in a paper where they give surprisingly short and elegant proofs of classical results in random graph theory. For instance, a straightforward consequence of DFS is the following: If an $n$-vertex graph $G$ has the property that any two disjoint sets of size $k$ are joined by an edge, then $G$ contains a path of length $n-2k+1$. The condition needed here can be conveniently checked in random graphs.
In order to connect the paths of the linear forest, we will use some new vertices. For the final path to be induced, we require these new vertices to form an independent set. One potential and clean way to guarantee this is to first find an independent set and then to only use vertices from this set as connecting vertices. However, this reduces the number of potential connecting vertices by a factor of roughly $d$, which is too costly.
Instead, we develop a variant of DFS that can encode conflicts, and use it to find a sufficient condition for the existence of a long ``conflict-free'' path.
We now introduce the notation we need.
Given a set $E$ (which in our case will be the edge set of the auxiliary digraph $D$), a \emph{conflict system} on $E$ is a graph $C$ together with an assignment $\Lambda\colon E \to 2^{V(C)}$. We say that a subset $E'\subset E$ is \emph{admissible} (with respect to $C,\Lambda$) if one can select a representative from $\Lambda(e)$ for all $e\in E'$ such that the chosen representatives are distinct and form an independent set in~$C$.
We say that an element $y\in V(C)$ has no conflict with $X\subset V(C)$ if there is no edge in $C$ between $y$ and~$X$.
\begin{lemma}\label{lem:DFS}
Let $G$ be a digraph on $n$ vertices and $C,\Lambda$ a conflict system on~$E(G)$. Suppose that, for any two disjoint sets $S,T\subset V(G)$ of size $k$ and any subset $X\subset V(C)$ of size at most~$n-1$, there exists an edge $e\in E(G)$ from $S$ to~$T$ and a representative in $\Lambda(e)\setminus X$ which has no conflict with~$X$. Then $G$ contains an admissible path of length $n-2k+1$.
\end{lemma}
\begin{proof}
We proceed in a depth-first-search manner. As usual, we maintain three sets of vertices: the set $S$ of vertices whose exploration is complete, the set $T$ of unvisited vertices, and the set $U=V(G)\setminus(S\cup T)$ which functions as a stack (last in, first out). Additionally, we keep track of the set $X\subset V(C)$ of chosen representatives. Initially, we have $S=U=X=\emptyset$ and $T=V(G)$.
In each round, the algorithm proceeds as follows. If the stack $U$ is empty, then some vertex is removed from $T$ and pushed into~$U$. If the stack $U$ is non-empty, consider the last vertex $u$ that was inserted into~$U$. If there exist $v\in T$ such that $uv\in E(G)$ and $y\in \Lambda(uv)\setminus X$ such that $y$ has no conflict with $X$, then delete (one such) $v$ from $T$ and insert it into $U$, and add $y$ to~$X$. Otherwise, delete $u$ from $U$ and add it to~$S$.
The algorithm terminates when $U=T=\emptyset$ and $S=V(G)$.
Clearly, in each round, exactly one vertex ``moves'', either from $T$ to $U$ or from $U$ to~$S$.
Hence, there exists a moment when $|S|=|T|$. Moreover, the vertices in $U$ always form an admissible directed path by construction.
Suppose now, for the sake of contradiction, that $G$ does not contain an admissible path of the desired length. Then $|U|\le n-2k+1$ and hence $|S|=|T|\ge k$. By assumption of the lemma, there exist $u\in S$ and $v\in T$ such that $uv\in E(G)$ and $y\in \Lambda(uv)\setminus X$ such that $y$ has no conflict with~$X$.
However, this contradicts the fact that $u$ was moved from $U$ to $S$ at some point, since instead the algorithm would have moved $v$ from $T$ to $U$ and added $y$ to~$X$.
\end{proof}
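For illustration, the exploration process from the proof of Lemma~\ref{lem:DFS} can be written out as follows. This is only a sketch in Python: the digraph edges are encoded implicitly via the sets $\Lambda(e)$, the conflict graph is given by adjacency sets, and all identifiers are ours.
\begin{verbatim}
# Conflict-encoding DFS (sketch).  Lambda maps an edge (u, v) to its set of
# candidate representatives; conflict_adj maps a representative to its
# neighbourhood in the conflict graph C.  Returns the longest admissible
# directed path found, together with the chosen representatives.
def conflict_dfs(vertices, Lambda, conflict_adj):
    S, U, T = set(), [], set(vertices)   # explored, stack (current path), unvisited
    X, reps = set(), []                  # chosen representatives; reps of path edges
    best_path, best_reps = [], []

    def good_rep(u, v):
        # an unused representative of (u, v) with no conflict with X
        for y in Lambda.get((u, v), ()):
            if y not in X and not (conflict_adj.get(y, set()) & X):
                return y
        return None

    while U or T:
        if not U:
            U.append(T.pop())            # start a new path from an unvisited vertex
        else:
            u, extended = U[-1], False
            for v in list(T):
                y = good_rep(u, v)
                if y is not None:
                    T.remove(v); U.append(v)
                    X.add(y); reps.append(y)
                    extended = True
                    break
            if not extended:
                S.add(U.pop())           # exploration of u is complete
                if reps:
                    reps.pop()           # its representative stays in X, as in the proof
        if len(U) > len(best_path):
            best_path, best_reps = list(U), list(reps)
    return best_path, best_reps
\end{verbatim}
Note that, exactly as in the proof, the set $X$ of used representatives only grows; the lemma guarantees that under its hypothesis the longest admissible path found has length at least $n-2k+1$.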
Now, we use Lemma~\ref{lem:DFS} to connect the components of an induced linear forest obtained via Lemma~\ref{lem:forests}.
\lateproof{Theorem~\ref{thm:path}}
Fix ${\epsilon}>0$ and assume that $d\ge d_0$ is sufficiently large.
We will assume that $d\le n^{1/2}\log^2 n$. For the case $d=\omega(n^{1/2}\log n)$, Lemma~\ref{lem:2ndmoment} implies that \textbf{whp} there exists an induced path even of the asymptotically optimal length $(2-{\epsilon})\frac{n}{d}\log d$.
Split the vertex set $[n]$ into $V_1$ and $V_2$ each of size at least $n/3$. We explore the random edges of $G\sim G(n,p)$ in two stages. First, we expose the edges inside~$V_1$. Here, we find an induced linear forest $F$ with large components. In the second round, we expose the remaining edges. Our goal is to use some vertices from $V_2$ to connect almost all of the components of $F$ into a large induced path.
Set $$L=d^{1/2}/\log^5 d, \quad m = {\epsilon} L/8, \quad k=(3/2-{\epsilon}/4)\frac{n}{d}\log d, \quad N=k/L.$$
Expose first the edges inside~$V_1$. By Lemma~\ref{lem:forests}, \textbf{whp} we can find in $G[V_1]$ an induced linear forest $F$ with $N$ components of order~$L$.\footnote{We could get an even larger forest from Lemma~\ref{lem:forests}, but this would not help us here since the bottleneck is to ensure that the new connecting vertices only have two edges to the given forest.} Note here that the edge probability $p$ in the application is the same, and $\log(np/3)\sim \log(d)$.
From now on, we assume that any such $F$ is given. It suffices to prove that, when exposing the edges between $V_1,V_2$ and inside $V_2$, \textbf{whp} we can find the desired induced path.
Let $\mathcal{P}$ be the set of components of~$F$. We give every path $P\in \mathcal{P}$ an arbitrary direction, which will be fixed for the rest of the proof.
Let $P^-$ denote the first $m$ vertices on $P$, and $P^+$ the last $m$ vertices on~$P$, according to the chosen direction.
We define a (random) auxiliary digraph $D$ with vertex set~$\mathcal{P}$, where an edge $(P_1,P_2)$ represents a suitable connection between $P_1^+$ and~$P_2^-$.
Formally, for distinct $P_1,P_2\in \mathcal{P}$ and $a\in V_2$, we say that $(P_1,P_2)$ is \emph{$a$-connected} if $a$ has exactly one edge to both $P_1^+$ and $P_2^-$, but no edge to any vertex in $V(F)\setminus (P_1^+\cup P_2^-)$. The pair $(P_1,P_2)$ forms an edge in $D$ if it is $a$-connected for some $a\in V_2$.
In order to facilitate the application of Lemma~\ref{lem:DFS}, we need to specify a suitable conflict system on~$E(D)$.
The conflict graph is simply the random graph $G[V_2]$, and to an edge $(P_1,P_2)$ of $D$, we assign the set of all $a\in V_2$ for which $(P_1,P_2)$ is $a$-connected.
Clearly, an admissible path in $D$ yields an induced path in~$G$.
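The construction of $D$ and of the conflict system translates directly into code. The sketch below, with adjacency sets \texttt{adj}, the forest given as a list of paths, and our own identifiers throughout, produces input that can be fed to the \texttt{conflict\_dfs} sketch given after Lemma~\ref{lem:DFS}.
\begin{verbatim}
# Building the auxiliary digraph D and the conflict system (sketch).
# adj: vertex -> set of neighbours in G; paths: list of vertex lists in V1;
# V2: list of potential connecting vertices; m: segment length.
def build_auxiliary_digraph(adj, paths, V2, m):
    forest = set(v for P in paths for v in P)
    Lambda, edges = {}, set()
    for i, P1 in enumerate(paths):
        for j, P2 in enumerate(paths):
            if i == j:
                continue
            head, tail = set(P1[-m:]), set(P2[:m])     # P1^+ and P2^-
            rest = forest - head - tail
            conn = set()
            for a in V2:
                na = adj[a]
                # a-connected: exactly one edge to P1^+, one to P2^-,
                # and none to the rest of F
                if len(na & head) == 1 and len(na & tail) == 1 and not (na & rest):
                    conn.add(a)
            if conn:
                Lambda[(i, j)] = conn
                edges.add((i, j))
    conflict_adj = {a: adj[a] & set(V2) for a in V2}   # conflict graph C = G[V2]
    return edges, Lambda, conflict_adj
\end{verbatim}
An admissible path returned by \texttt{conflict\_dfs(range(len(paths)), Lambda, conflict\_adj)} then corresponds, as described above, to an induced path in $G$.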
To complete the proof, it suffices to show that \textbf{whp} $D$ contains an admissible path of length $(1-{\epsilon}/4)N$, as then the induced path in $G$ has length at least $(1-{\epsilon}/4)N(L-2m)=(1-{\epsilon}/4)(1-{\epsilon}/4)NL\ge (3/2-{\epsilon})\frac{n}{d}\log d$, as desired.
To achieve this, we show that the conditions of Lemma~\ref{lem:DFS} hold \textbf{whp}.
Fix any disjoint sets $S,T\subset \mathcal{P}$ of size ${\epsilon} N/8$ and any $X\subset V_2$ of size at most~$N-1$. Ultimately, we want to use a union bound, so we note that the number of choices for these sets is at most $2^N$ for each of $S$ and $T$, and at most $$\sum_{j=0}^{N-1}\binom{n}{j} \le N\binom{n}{N}\le N \left(\frac{{e} n}{N} \right)^N \le \exp(4N\log d)$$ for $X$, where we used $N\le n/2$ in the first and $N\ge n/d^2$ in the last inequality.
Call $a\in V_2\setminus X$ \emph{good} if some pair in $S\times T$ is $a$-connected and $a$ has no conflict with~$X$.
Observe that $S,T,X$ satisfy the condition of Lemma~\ref{lem:DFS} if and only if some $a\in V_2\setminus X$ is good.
For $(P_1,P_2)\in S\times T$ and $a\in V_2\setminus X$, the probability that $(P_1,P_2)$ is $a$-connected and $a$ has no conflict with $X$ is exactly $$\alpha=m^2p^2(1-p)^{|F|-2+|X|}$$ as there are $m^2$ choices for the two neighbours of $a$ in $P_1^+$ and $P_2^-$, and in any such case we need $|F|-2+|X|$ ``non-edges''.
Since $|F|+|X| \le (3/2-{\epsilon}/5) \frac{n}{d}\log d$ and $1-p\ge \exp(-p-O(p^2))$, we have $$(1-p)^{|F|-2+|X|}\ge \exp(-(1+O(p))p(3/2-{\epsilon}/5)p^{-1}\log d ) \ge d^{-3/2+{\epsilon}/6}.$$
Moreover, two distinct pairs $(P_1,P_2)$, $(P_1',P_2')$ cannot both be $a$-connected at the same time, thus the probability that $a$ is good
is simply $|S||T|\alpha$.
Finally, whether $a$ is good is determined solely by the potential edges between $a$ and $V(F)\cup X$. Therefore, these events are independent for distinct $a$'s.
Hence, the probability that $S,T,X$ violate the condition of Lemma~\ref{lem:DFS}, that is, no $a$ is good, is at most
$$(1-|S||T|\alpha)^{|V_2\setminus X|} \le \exp(-|S||T|\alpha|V_2\setminus X|) \le \exp(-({\epsilon} N/8)^2 \alpha(n/4) ) \le \exp(-Nd^{{\epsilon}/7}) ,$$
where the last inequality holds since
$$\frac{{\epsilon}^2}{256}N\alpha n \ge \frac{n}{dL} L^2 (d/n)^2 d^{-3/2+{\epsilon}/6}n = Ld^{-1/2+{\epsilon}/6}\ge d^{{\epsilon}/7}.$$
The said union bound completes the proof.
\noproof\bigskip
\section{Concluding remarks}
\begin{itemize}
\item We proved that the random graph $G(n,d/n)$ \textbf{whp} contains an induced path of length $(3/2-o_d(1))\frac{n}{d}\log d$. It would be very nice to improve the constant $3/2$ to $2$, which would be optimal. One possible way to achieve this, using parts of our argument, is to show that there exists an induced linear forest of size $\sim 2\frac{n}{d}\log d$ where each component path has length $d^{1-o(1)}$.
\item Our proof is not constructive, since the first part of the argument uses the second moment method.
The previously best bound $\sim \frac{n}{d}\log d$ due to \L{}uczak~\cite{luczak:93} and Suen~\cite{suen:92} was obtained via certain natural algorithms.
It seems that this could be a barrier for such approaches. A (rather unsophisticated) heuristic giving evidence is that when we have grown an induced tree of this size, and assume the edges outside are still random, then the expected number of vertices which could be attached to a given vertex of the tree is less than one.
Moreover, such an ``algorithmic gap'' has been discovered for many other natural problems.
In particular, Coja-Oghlan and Efthymiou~\cite{COE:15} proved that the space of independent sets of size $k$ becomes ``shattered'' when $k$ passes $\sim \frac{n}{d}\log d$, which seems to cause local search algorithms to get stuck.
\item
In~\cite{CDKS:ta} it is conjectured that one should not only be able to find an induced path of size $\sim 2\frac{n}{d}\log d$, but any given bounded degree tree. For dense graphs, when $d=\omega(n^{1/2}\log n)$, this follows from the second moment method (see~\cite{draganic:20}). In fact, Lemma~\ref{lem:2ndmoment} shows that the maximum degree can even be a small polynomial. On the contrary, the sparse case seems to be more difficult, mainly because the vanilla second moment method does not work.
However, Dani and Moore~\cite{DM:11} demonstrated that one can actually make the second moment method work, at least for independent sets, by considering a \emph{weighted} version. This even gives a more precise result than the classical one due to Frieze~\cite{frieze:90}. It would be interesting to find out whether this method can be adapted to induced trees.
\end{itemize}
\section*{Acknowledgement}
Thanks to Benny Sudakov and Nemanja Dragani\'c for very useful discussions.
\bibliographystyle{amsplain_v2.0customized}
| {
"timestamp": "2021-02-19T02:16:37",
"yymm": "2102",
"arxiv_id": "2102.09289",
"language": "en",
"url": "https://arxiv.org/abs/2102.09289",
"abstract": "We show that for $d\\ge d_0(\\epsilon)$, with high probability, the random graph $G(n,d/n)$ contains an induced path of length $(3/2-\\epsilon)\\frac{n}{d}\\log d$. This improves a result obtained independently by Luczak and Suen in the early 90s, and answers a question of Fernandez de la Vega. Along the way, we generalize a recent result of Cooley, Draganić, Kang and Sudakov who studied the analogous problem for induced matchings.",
"subjects": "Combinatorics (math.CO)",
"title": "Note on induced paths in sparse random graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9871787849789998,
"lm_q2_score": 0.7185943985973773,
"lm_q1q2_score": 0.709381145300074
} |